Okay, experiment over. I concluded that lame (v3.92, --alt-preset standard) and the Fraunhofer encoder (iTunes 2.0.4, VBR "highest" quality) are indistinguishable, both from each other and from the original WAV file.

My listening environment was a little unorthodox. I went into a quiet room with my laptop (HP Omnibook 550), my Onkyo SE-U55 USB external sound card, and Grado Labs SR-60 headphones. I converted the MP3 files back to WAV (using lame's decode option) and listened to everything using three instances of WinAmp, one for each set of WAV files (the originals, the lame-decoded tracks, and the Fraunhofer-decoded tracks). This let me flip back and forth rapidly between tracks and between different versions of the same track.
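For anyone who wants to repeat this, the encode and decode steps look roughly like the following. The filenames are made up for illustration, and this assumes lame v3.92 is on your PATH; the `--decode` option is what produces the WAV files for comparison:

```shell
# Encode with the preset used in this test:
lame --alt-preset standard track.wav track-lame.mp3

# Decode back to WAV so all versions can be compared in WinAmp
# on equal footing (same playback path, no MP3 decoder differences):
lame --decode track-lame.mp3 track-lame-decoded.wav
```

Decoding everything back to WAV first means any differences you hear come from the encoders themselves, not from whichever MP3 decoder each player happens to use.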

I tried to listen carefully for subtle details in the background (e.g., musicians talking off-mike), as well as common encoder artifacts like ringing on attacks or the effects of the low-pass filter. Honestly, I didn't hear any differences.

Now, iTunes has been optimized to use dual CPUs if you've got 'em. It runs for me between 13x and 20x realtime. Lame can only use one CPU and runs at about 1.5x realtime (although I'm sure some AltiVec performance tuning could help it a lot). In addition to the performance difference, there's also a bitrate difference. On one of my test tracks (Count Basie & Oscar Peterson playing "Louis B." from "Satch and Josh", the XRCD remastered version), lame averaged 189 kbits/sec, while the Fraunhofer coder averaged 147 kbits/sec. Similar numbers showed up elsewhere.
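To put those averages in perspective, a VBR file's mean bitrate is just total bits divided by playing time, so the gap translates directly into file size. A quick sketch (the track length and file sizes below are hypothetical round numbers chosen to match the measured bitrates, not actual measurements from the test):

```python
# Average VBR bitrate from file size and duration.
def avg_bitrate_kbps(size_bytes: int, duration_s: float) -> float:
    """Mean bitrate in kbit/s: total bits divided by playing time."""
    return size_bytes * 8 / duration_s / 1000

duration = 5 * 60 + 30      # a hypothetical 5:30 track
lame_size = 7_796_250       # bytes -> works out to 189 kbit/s
fhg_size = 6_063_750        # bytes -> works out to 147 kbit/s

print(round(avg_bitrate_kbps(lame_size, duration)))  # 189
print(round(avg_bitrate_kbps(fhg_size, duration)))   # 147
print(f"{1 - fhg_size / lame_size:.0%} smaller")     # 22% smaller
```

At these rates the Fraunhofer files come out roughly a fifth smaller than the lame files for the same (inaudible) quality difference.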

My conclusion: if you use a Mac with iTunes, there's no need to use lame. If you run the built-in Fraunhofer coder with VBR at the "highest" quality setting, what you get out is perfectly "CD quality", and you get it at a lower bitrate.