The lines are definitely the two fields that are interlaced together to give you NTSC video. If you've done your research, by now you know NTSC is composed of 2 fields, each updated roughly 30 times per second. That gives you a refresh of 60Hz (some people think it's 60 FRAMES per second, but it's 60 fields, and each field only contains 50% of the vertical information composing a frame). Blah blah blah... The video you recorded includes as much information as could be extracted by the capture device (capturing at ~640x480). Removing the "lines" will in every instance involve some type of processing that will alter (I don't want to say degrade) the output. The simplest and ugliest is to double each field's lines, which of course gives a blocky effect. The OTHER is to capture at a lower resolution, like 320x240...
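To make the line-doubling idea concrete, here's a minimal sketch in C. This is purely my own illustration, assuming an 8-bit grayscale frame stored row-major; the function name and layout aren't from any real capture SDK.

/* "Bob" de-interlacing by line doubling: every output line is taken from
   the nearest line of the chosen field, so each field line appears twice.
   That repetition is exactly what produces the blocky look. */
#include <string.h>

void bob_deinterlace(const unsigned char *frame, unsigned char *out,
                     int width, int height, int top_field)
{
    for (int y = 0; y < height; y++) {
        int src = top_field ? (y & ~1) : (y | 1); /* snap to field parity */
        if (src >= height)
            src = height - 1;                     /* clamp at the bottom edge */
        memcpy(out + (size_t)y * width, frame + (size_t)src * width, width);
    }
}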

When you are WATCHING TV through the ATI hardware, it generates the output through our hardware overlay. This is the "back-end" scaler of the chip. This back-end scaler (the video scaler) features all sorts of goodies for video processing *while displaying*, including adaptive de-interlacing. So when you watch live, you don't see any lines.

When you play back the file, you may do so in two ways... ATI's file player should use that very same overlay if it is not already being used by 1) the TV's live stream, 2) another video file, or 3) a DVD. This will again de-interlace in real time on playback. If you are already using the overlay, then video will play using the front-end scaler (the 3D engine's scaler). You can think of the video as a texture being manipulated on a simple plane. The 3D engine doesn't feature hardware-based adaptive de-interlacing, so you see the lines. It's still better than a pure software-based solution because it will scale smoother and faster in its basic form; it's less expensive to filter the texture using the hardware than it would be with a software-based algorithm.
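If it helps, the fallback the player goes through looks roughly like this. The names here are hypothetical, not ATI's actual API; it's just the logic I described above.

/* The single hardware overlay is first-come, first-served; everything
   else falls back to the 3D engine's front-end scaler. */
typedef enum { PATH_OVERLAY, PATH_FRONT_END } VideoPath;

static int overlay_in_use = 0;   /* claimed by live TV, a file, or a DVD */

VideoPath choose_video_path(void)
{
    if (!overlay_in_use) {
        overlay_in_use = 1;      /* back-end scaler: adaptive de-interlacing */
        return PATH_OVERLAY;
    }
    /* Front-end scaler: the video is a filtered texture on a plane.
       Scaling is fast and smooth, but there's no adaptive de-interlacing,
       so the field lines show through. */
    return PATH_FRONT_END;
}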

Which brings me to my next point... It would be possible to implement a software-based de-interlacing algorithm to work in conjunction with the front-end scaler. This could possibly be done as an extension to Windows Media Player or put into another player application (such as ATI's own). However, I think the best solution would be to write a pixel shader program to do the de-interlacing with the 3D hardware. This would be placed into the driver layer responsible for video acceleration.
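To give a feel for what such a shader would compute per pixel, here's a rough C model of a motion-adaptive "weave vs. bob" decision. This is purely illustrative and not ATI's actual algorithm; the threshold and frame layout are assumptions.

#include <stdlib.h>

/* prev/cur are two consecutive 8-bit grayscale frames. Where a pixel
   barely changed between frames, keep it as-is (weave); where it moved,
   average the neighboring lines instead (bob) to hide the combing. */
unsigned char deinterlace_pixel(const unsigned char *prev,
                                const unsigned char *cur,
                                int width, int height,
                                int x, int y, int threshold)
{
    size_t idx = (size_t)y * width + x;
    if (abs((int)cur[idx] - (int)prev[idx]) < threshold)
        return cur[idx];                        /* static pixel: weave */
    int above = (y > 0) ? y - 1 : y + 1;        /* clamp at the edges */
    int below = (y < height - 1) ? y + 1 : y - 1;
    return (unsigned char)((cur[(size_t)above * width + x] +
                            cur[(size_t)below * width + x]) / 2);
}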

Right now we are plagued with the same issue, but with DVD playback, in our Mac software. With the move to Jaguar (10.2) we've moved to using the front-end scaler instead of our traditional overlay. This has many benefits for Quartz Extreme's interface, but now we lack the de-interlacing feature so many people loved for their broadcast-sourced DVDs. The video scaling abilities (tap filters, etc.) of the hardware overlay also exceed those of the 3D scaler (again, by design, because of their intended purposes). Our Mac video accelerator component doesn't de-interlace either, but interlaced movie-file content is rare unless you're doing video captures (and we don't have an OS X capture solution right now).

I don't know what future drivers will be like for Windows, but I can tell you that I put in a feature request to implement the shader-based de-interlacing. Nice coincidence that you mailed me about the same issue.

Ok, I'm nursing a nasty head-cold, so I'm gonna go and pass out now.

Bruno
_________________________
Bruno
Twisted Melon : Fine Mac OS Software