I’ve actually done a similar test a couple of years back between lossless and 320 kbps MP3 (I recommend foobar2000’s ABX plugin if you want to try it yourself) and could also tell the difference. It wasn’t easy though; it took multiple listens and a lot of concentration (I was knackered afterwards). In practice, it is more effort than one would spend to actually *enjoy* music. But given the amount of effort/time that goes into ripping/tagging CDs, I opted to go lossless for all my rips. Storage is inexpensive nowadays and I never need to worry again. If I want 320 kbps MP3s to listen to on a portable device, I can make them from my lossless files. If the portable device can’t store 320 kbps files, I can encode the lossless files at a lower bitrate instead. This is preferable to transcoding from 320 kbps to a lower bitrate, which compounds the lossy encoding artifacts. On that note, for MP3 I also tend to favour variable bitrate if you care about storage; it’s pretty efficient.
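The “always encode from the lossless master” workflow above can be sketched as a small helper that builds the ffmpeg command lines. This is a hedged sketch, not my actual script: the filenames are hypothetical, and it assumes an ffmpeg build with the libmp3lame encoder if you actually run the commands.

```python
# Sketch of the "encode from lossless, never transcode MP3-to-MP3" workflow.
# Filenames are hypothetical; assumes ffmpeg with libmp3lame is installed
# if you execute the resulting commands.

def mp3_encode_cmd(lossless_path, out_path, vbr_quality=None, bitrate="320k"):
    """Build an ffmpeg command that encodes a lossless file to MP3.

    Encoding from the lossless source each time avoids the generation loss
    you get by transcoding an existing 320 kbps MP3 down to a lower bitrate.
    """
    cmd = ["ffmpeg", "-i", lossless_path, "-codec:a", "libmp3lame"]
    if vbr_quality is not None:
        cmd += ["-q:a", str(vbr_quality)]  # VBR quality: 0 (best) .. 9 (smallest)
    else:
        cmd += ["-b:a", bitrate]           # constant bitrate, e.g. 320 kbps
    cmd.append(out_path)
    return cmd

# CBR 320 kbps for a device with plenty of storage:
print(" ".join(mp3_encode_cmd("track01.flac", "track01.mp3")))
# VBR (~190 kbps average at -q:a 2) when storage is tight:
print(" ".join(mp3_encode_cmd("track01.flac", "track01_v2.mp3", vbr_quality=2)))
```

Run the same helper against every lossless file whenever you need a new target bitrate, so each MP3 is a first-generation encode.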
Madeyski provided empirical evidence (via a series of laboratory experiments with over 200 developers) for the superiority of the TDD practice over the traditional test-last (testing for correctness) approach, with respect to lower coupling between objects (CBO). The mean effect size, based on a meta-analysis of the performed experiments, represents a medium (but close to large) effect, which is a substantial finding. It suggests better modularization (i.e., a more modular design) and easier reuse and testing of the developed software products due to the TDD programming practice. Madeyski also measured the effect of the TDD practice on unit tests using branch coverage (BC) and mutation score indicator (MSI), which are indicators of the thoroughness and the fault detection effectiveness of unit tests, respectively. The effect size of TDD on branch coverage was medium, and is therefore considered a substantive effect.