At high bitrates, Layer 2 should be better than Layer 3 simply because less is being done to the audio.

I don’t understand how this follows. For any given bitrate they both have to compress the same amount of information into the same space.
My understanding (or lack thereof) was that they are both perceptual (lossy) encoders; only the complexity differs. That is, Layer 3 tries harder to cram data into the bit stream using a better perceptual model, and thus produces higher-quality output.

As an example, let’s assume a 256 kbit/s stream. Starting from CD audio (~1400 kbit/s), this requires compression of about 5.5:1 regardless of the method (Layer 2 or 3). Layer 2 does less to the audio, i.e. it doesn’t try very hard when deciding what information to throw away, while Layer 3 has a better perceptual model and tries harder to decide which parts of the signal can be discarded.
So the same amount of information is thrown away, but Layer 3 is better at deciding what to remove?
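Just to sanity-check the arithmetic (assuming standard CD audio: 44.1 kHz, 16-bit samples, stereo):

```python
# Compression ratio needed to fit CD audio into a 256 kbit/s stream.
cd_bitrate = 44_100 * 16 * 2      # 1,411,200 bit/s (~1400 kbit/s)
target_bitrate = 256_000          # 256 kbit/s
ratio = cd_bitrate / target_bitrate
print(f"{ratio:.2f}:1")           # ~5.5:1, whichever layer is used
```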



That started me thinking: is 5:1 lossless audio compression possible? (Sounds hard, but not that hard...)
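How feasible that is depends entirely on how much redundancy is in the signal. As a toy sketch (using zlib as a stand-in for a real lossless audio codec like FLAC; the tone and noise signals here are invented for illustration): a perfectly periodic tone compresses enormously, white noise barely compresses at all, and real music sits somewhere in between.

```python
import math
import random
import struct
import zlib

def pcm16(samples):
    """Pack a list of ints into little-endian 16-bit PCM bytes."""
    return struct.pack(f"<{len(samples)}h", *samples)

def ratio(data):
    """Lossless compression ratio achieved by zlib at maximum effort."""
    return len(data) / len(zlib.compress(data, 9))

N = 44_100  # one second at the CD sample rate

# A 441 Hz tone at 44.1 kHz repeats exactly every 100 samples: highly redundant.
tone = pcm16([round(32767 * math.sin(2 * math.pi * 441 * t / N)) for t in range(N)])

# White noise: essentially no redundancy for a lossless coder to exploit.
rng = random.Random(0)
noise = pcm16([rng.randint(-32768, 32767) for _ in range(N)])

print(f"tone:  {ratio(tone):.1f}:1")   # very high
print(f"noise: {ratio(noise):.2f}:1")  # about 1:1
```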


More musing... What if you removed the restriction of streaming and compressed globally (on a per-track basis)?
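A toy way to see what global compression could buy (again using zlib, with an invented signal: a random 4 KB block repeated 20 times, standing in for a recurring chorus): compressing the whole track in one pass can exploit long-range repetition that a chunk-at-a-time streaming coder never sees.

```python
import random
import zlib

rng = random.Random(0)
block = bytes(rng.randrange(256) for _ in range(4096))
track = block * 20  # long-range redundancy: the same "chorus" recurs

# Streaming-style: compress fixed-size chunks independently, no shared history.
chunks = [track[i:i + 4096] for i in range(0, len(track), 4096)]
streamed = sum(len(zlib.compress(c, 9)) for c in chunks)

# Global: one pass over the whole track, so repeats can reference earlier data.
whole = len(zlib.compress(track, 9))

print(f"chunked: {len(track) / streamed:.2f}:1")  # ~1:1 (each chunk looks random)
print(f"global:  {len(track) / whole:.1f}:1")     # far better
```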


Bryan