Yes, it makes sense. The algorithms they use are HRTFs (head-related transfer functions), and they are based on the acoustics of the human ear. The idea is similar to the concept of loudness, where perception depends on frequency, not just raw signal level. Basically, different frequencies of sound are distorted by the physical features of the ear (the pinna, the ear canal, even the head and shoulders), and this distortion tags different frequencies with directional cues. After the audio signal travels down the nerve to the brain, the brain corrects for the distortion (thus untagging it) and then uses the tags to correlate the sound to locations.

That is why, even though you have only 2 ears (2 points will form a line, 3 points a plane!), you can correctly differentiate the origin of a sound. So if you close your eyes and somebody claps 2 feet behind your head versus 2 feet in front of it, you will correctly figure out which is front and which is back. (neat eh?) Also interesting: if somebody claps above your head versus below it, you can differentiate that too, though to a lesser extent. So interestingly, you have only 2 ears, yet you can pinpoint a source in 3-dimensional space.
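To make the "tagging" idea concrete, here is a minimal Python sketch (Python, numpy, and scipy are my choices for illustration, not anything an actual HRTF product uses). It fakes only the two crudest directional cues a real HRTF encodes, the interaural time and level differences, by delaying and dulling the far-ear copy of a mono sound. Real HRTFs are measured filters that also capture the pinna cues needed for front/back and up/down, which this sketch cannot reproduce.

import numpy as np
from scipy.signal import butter, lfilter

def crude_binaural_pan(mono, fs, azimuth_deg):
    # 0 degrees = straight ahead, +90 = hard right, -90 = hard left.
    az = np.deg2rad(abs(azimuth_deg))
    head_radius = 0.0875                       # metres, roughly average
    c = 343.0                                  # speed of sound, m/s
    itd = head_radius / c * (az + np.sin(az))  # Woodworth delay model
    delay = int(round(itd * fs))               # interaural delay in samples

    # Head shadow: the far ear hears a quieter, duller (lowpassed) copy.
    b, a = butter(2, 2000 / (fs / 2), btype="low")
    far = 0.6 * lfilter(b, a, mono)
    far = np.concatenate([np.zeros(delay), far])[: len(mono)]

    near = mono
    # Positive azimuth: source on the right, so the left ear is the far one.
    return (far, near) if azimuth_deg >= 0 else (near, far)

fs = 44_100
rng = np.random.default_rng(0)
# A short decaying noise burst, roughly clap-like.
mono = rng.standard_normal(fs) * np.exp(-np.linspace(0.0, 8.0, fs))
left, right = crude_binaural_pan(mono, fs, azimuth_deg=60)
stereo = np.stack([left, right], axis=1)       # ready to write out as a WAV

Played over headphones, even this crude version places the burst clearly to one side; the brain "untags" the delay and the dulling and hears a location instead.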

Anyway, what the algorithms do is take music and alter its soundstage, stretching it out to make it sound "bigger" or more "in front of you" than the original recording. They boost certain frequencies, cut others, move some frequencies from one channel to the other, mix some, separate others, and very effectively trick the listener into thinking the recording is much better. So it can turn a studio recording into a concert hall, or a concert hall into a studio, or even a mono recording into stereo!!! (yep, it can take a single point source and stretch that point into a complete soundstage) without losing the sense of authenticity.
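If you're curious what the simplest building block of that family of tricks looks like, here is a small Python sketch of mid/side widening. This is my illustration, not the Wow Thing's actual algorithm (which is proprietary and layers frequency-dependent shaping and HRTF-derived cross-channel filtering on top of ideas like this): split the signal into what the channels share (mid) and how they differ (side), then boost the side before recombining.

import numpy as np

def widen_stereo(left, right, width=1.8):
    # Mid = what the channels have in common; side = the stereo difference.
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    side = side * width               # width > 1.0 stretches the stage
    new_left = mid + side
    new_right = mid - side
    # Rescale if the widened side pushed the peaks past full scale.
    peak = max(np.max(np.abs(new_left)), np.max(np.abs(new_right)), 1.0)
    return new_left / peak, new_right / peak

# Toy usage: a nearly-mono signal comes out audibly wider.
fs = 44_100
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440 * t)
right = 0.9 * left + 0.1 * np.sin(2 * np.pi * 660 * t)
wide_left, wide_right = widen_stereo(left, right, width=2.0)

Note that widening a true mono recording needs an extra step (the side component of a mono signal is zero), which is why pseudo-stereo processors first decorrelate the channels with delays or filtering before widening.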

BTW, if you care about noise, you won't want to put the Wow Thing in your home setup; it adds a horrific amount of noise to the signal. That level of noise may be acceptable on the freeway, however!!

Calvin