I'll second that study - I read it too and it seemed on the money.

I'll concede that 128kbit might have some artifacts in certain parts of certain songs. I can hear it quite clearly as a digitally processed sound on the snares in New Order's Blue Monday. High frequencies, white noise and other near-random sounds, and sharp attacks such as kick drums or explosions all suffer the most under MP3.

However, firstly, the encoder you use makes a big difference. A recent article on Ars Technica studied the frequency response of various encoders. While the Fraunhofer encoder reproduced the sound all the way up to 20 or 22kHz (I can't remember exactly), AudioCatalyst and BladeEnc (and another I can't remember) all dropped off at around 16kHz in standard 128kbit mode. People with good hearing can hear those tones directly, but there's plenty of evidence to suggest that the rest of us can still notice if a cymbal is lacking the really high frequencies. Most of this frequency loss went away as you increased the bit rate - the spectrum would return to normal at 160kbit or 192kbit.
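
If you're curious, the roll-off is easy enough to check yourself rather than taking the article's word for it: round-trip some white noise through an encoder and look at where the spectrum dies. Here's a rough Python sketch of the idea - I'm using the LAME command-line encoder just because it's handy, not because it was one of the encoders tested, and the file names and the crude 20 dB threshold are only picked for the example.

# Rough sketch: measure an MP3 encoder's high-frequency cutoff by
# round-tripping white noise and inspecting the decoded spectrum.
# Assumes the LAME command-line tool is installed; file names are
# made up for the example.
import subprocess
import numpy as np
from scipy.io import wavfile

RATE = 44100
SECONDS = 5

# 1. Write a few seconds of white noise as a 16-bit mono WAV.
noise = (np.random.uniform(-0.5, 0.5, RATE * SECONDS) * 32767).astype(np.int16)
wavfile.write("noise.wav", RATE, noise)

# 2. Encode at 128 kbit/s, then decode back to WAV.
subprocess.run(["lame", "-b", "128", "noise.wav", "noise.mp3"], check=True)
subprocess.run(["lame", "--decode", "noise.mp3", "decoded.wav"], check=True)

# 3. Look at the spectrum of the decoded noise.
_, decoded = wavfile.read("decoded.wav")
if decoded.ndim > 1:          # keep one channel if the decoder returns stereo
    decoded = decoded[:, 0]
spectrum = np.abs(np.fft.rfft(decoded.astype(float)))
freqs = np.fft.rfftfreq(len(decoded), d=1.0 / RATE)

# 4. Crude cutoff estimate: highest frequency still within ~20 dB of the
# average mid-band level (1-10 kHz).
midband = spectrum[(freqs > 1000) & (freqs < 10000)].mean()
alive = freqs[spectrum > midband / 10]
print(f"Spectrum drops off around {alive.max() / 1000:.1f} kHz")

Re-run it with -b 160 or -b 192 and you should see the reported cutoff climb back up, which is the same effect the article described.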

Secondly, double-blind testing is the only way to get results you can trust. I've seen too many 'experts' able to tell the difference between a well-encoded JPEG and a TIFF ten times its size when they know which is which, but very few can identify them when they don't know. And hearing is much more subjective - memory is not very good at holding on to precise tones or sound characteristics over time, and by the time you've loaded up the next track you've probably forgotten the exact details of the passage you wanted to compare.
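
On that note, if anyone wants to try the double-blind thing at home, it only takes a few lines to script a crude ABX run where the machine, not you, knows which clip is X until the end. This is just a sketch - "aplay" stands in for whatever plays a WAV on your system, and the file names are invented.

# Minimal ABX sketch: A is the original, B is the encoded copy, and on
# each trial X is randomly one of the two. You only learn your score
# at the end, so you can't fool yourself.
import random
import subprocess

ORIGINAL = "track_original.wav"       # hypothetical file names
ENCODED = "track_128k_decoded.wav"
TRIALS = 10

def play(path):
    subprocess.run(["aplay", path], check=True)   # swap in your own player

correct = 0
for trial in range(1, TRIALS + 1):
    x = random.choice([ORIGINAL, ENCODED])
    print(f"Trial {trial}: playing A, then B, then X")
    play(ORIGINAL)
    play(ENCODED)
    play(x)
    guess = input("Is X the original (a) or the encoded copy (b)? ").strip().lower()
    if (guess == "a") == (x == ORIGINAL):
        correct += 1

print(f"{correct}/{TRIALS} correct - around {TRIALS // 2} is what pure guessing would give you.")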

Try things out before you commit to someone else's ideas.

Save the whales. Feed the hungry. Free the mallocs.
_________________________
Owner of Mark I empeg 00061, now better than ever - (Thanks, Rod!) - and Karma 3930000004550