Removing the "noise" from your signal is all about characterizing your noise. If it's single samples that are bogus, and you can write an accurate "bogus" predictor (e.g., reject all samples outside your [1000,8000] range), then do that first. If your noise is uglier than that, you've got more work to do, but that work should be focused entirely on separating bogus from non-bogus samples. Then do your downsampling after you've removed the noise. (You might need to keep a bit vector or something to label the samples you removed, so they don't influence the averaging you do afterward.)
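Here's a minimal sketch of that idea in Python, assuming the samples come in as a flat array and using the [1000,8000] range above as the "bogus" predicate; the window size of 10 is just an arbitrary illustrative choice:

    import numpy as np

    def denoise_then_downsample(samples, lo=1000, hi=8000, window=10):
        samples = np.asarray(samples, dtype=float)

        # Step 1: label bogus samples instead of deleting them, so the
        # original indexing (and timing) is preserved.
        valid = (samples >= lo) & (samples <= hi)

        # Step 2: downsample by averaging each window, but only over the
        # samples that survived the bogus check.
        out = []
        for start in range(0, len(samples), window):
            chunk = samples[start:start + window]
            mask = valid[start:start + window]
            if mask.any():
                out.append(chunk[mask].mean())  # average of non-bogus samples only
            else:
                out.append(np.nan)              # the whole window was bogus
        return np.array(out)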

(In some cases, like digital cameras, it can be beneficial to combine the "denoising" operation with other steps like demosaicing, so it's hard to say you should *always* do the denoising by itself first, but doing so seems better suited to the way you've characterized your noise.)

If you want to get fancier, you would want to get into some kind of statistical model. Given where you think things were last time and what you've observed, you try to predict where things must be the next time around. Rather than tracking your state as a single number, you track it as a probability distribution. If an observation is highly unlikely to have occurred given the state you think you're in, you might just reject it altogether.
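A minimal sketch of that, assuming a scalar state tracked as a Gaussian (mean, variance); the process noise, measurement noise, and 3-sigma rejection gate are illustrative guesses, not anything prescribed:

    import math

    class GatedFilter:
        def __init__(self, mean, var, process_var=1.0, meas_var=4.0, gate_sigmas=3.0):
            self.mean = mean
            self.var = var
            self.process_var = process_var
            self.meas_var = meas_var
            self.gate_sigmas = gate_sigmas

        def step(self, observation):
            # Predict: state may drift, so uncertainty grows.
            pred_mean = self.mean
            pred_var = self.var + self.process_var

            # Gate: reject the observation if it's too far from what we
            # expected, given the combined predicted + measurement spread.
            innovation = observation - pred_mean
            innovation_std = math.sqrt(pred_var + self.meas_var)
            if abs(innovation) > self.gate_sigmas * innovation_std:
                self.mean, self.var = pred_mean, pred_var  # keep the prediction
                return self.mean, False                    # observation rejected

            # Update: standard scalar Kalman-style correction.
            gain = pred_var / (pred_var + self.meas_var)
            self.mean = pred_mean + gain * innovation
            self.var = (1.0 - gain) * pred_var
            return self.mean, True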

We did something like this as part of an in-building navigation system we built using signal-strength measurements from 802.11 base stations. We took measurements in each and every office (two nights of fun work), and then you just start walking around, comparing your observed measurements to the database and to where you thought you were beforehand. The system knows the topology of the building, which allows it to reject any spurious readings that might otherwise indicate, for example, that you were on the opposite side of the building from where you were on the previous reading.
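A purely illustrative sketch of that fingerprint-matching idea (not the actual system from the paper below): match the observed signal-strength vector against the survey database, but only consider rooms the building topology says are reachable from the previous estimate.

    import math

    def locate(observed, fingerprint_db, adjacency, prev_room):
        # observed:       {base_station_id: rssi} for the current reading
        # fingerprint_db: room -> {base_station_id: surveyed average rssi}
        # adjacency:      room -> set of rooms reachable in one step
        candidates = adjacency[prev_room] | {prev_room}

        def distance(room):
            ref = fingerprint_db[room]
            shared = set(ref) & set(observed)
            if not shared:
                return math.inf
            return math.sqrt(sum((observed[b] - ref[b]) ** 2 for b in shared))

        # Spurious readings that only match far-away rooms get rejected
        # implicitly, because those rooms are never candidates.
        return min(candidates, key=distance)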

Gory details here:

http://www.cs.rice.edu/~ahae/papers/mobicom2004.pdf
