Originally Posted By: Dignan
I worry that this is going to set back autonomous driving, or if it's just what we'd expect to see at this point in its development.


I think it's a setback for Tesla, but it points out how Google's system relies on things other than image recognition: laser (lidar) and radar scanners. I found it interesting to read Matthew Inman's experiences just a couple of days after reading about the Tesla accident. He describes how the car's radar system noticed a bicyclist behind a hedge before anyone (or anything) could have gotten a visual.

Something else I heard... this came through word of mouth rather than anything I read, so I can't verify its veracity, though it sounds logical...

Supposedly, a contributing factor in the Tesla accident was the need for the image recognition to tell the difference between signs, overpasses, and the like versus objects in the car's path. Because the side of the semi was the same color as the sky, the only part of the semi identified as looming closer was its top edge. So the software decided that the approaching, high-positioned horizontal line was merely a road sign, a bridge, or power lines: something that was OK to drive under.

That kind of mitigation in the software is critical: without it, the Tesla would slam on its brakes every time it passed under a road sign or a bridge.
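Just to illustrate the tradeoff being described (this is purely my own sketch, not anything from Tesla's actual software; the threshold, class names, and data fields are all made up for illustration), a heuristic like that might look something like this:

```python
# Hypothetical sketch of the "overhead object" heuristic described above.
# If the only thing the detector sees is a single high, approaching
# horizontal edge, it assumes an overhead structure (sign, bridge,
# power lines) and allows the car to drive under it.

from dataclasses import dataclass

# Assumed minimum height (meters) above the road at which an approaching
# edge is treated as overhead rather than as an obstacle.
CLEARANCE_THRESHOLD_M = 4.0

@dataclass
class DetectedEdge:
    height_m: float    # estimated height of the edge above the road surface
    approaching: bool  # whether the edge appears to be looming closer

def classify_edge(edge: DetectedEdge) -> str:
    """Crude classifier: a high, approaching edge is assumed to be an
    overhead structure; a low, approaching edge is treated as an obstacle."""
    if edge.approaching and edge.height_m >= CLEARANCE_THRESHOLD_M:
        return "overhead-structure"  # OK to drive under; don't brake
    if edge.approaching:
        return "obstacle"            # brake
    return "ignore"

# A truck side the same color as the sky: only its top edge is detected,
# which sits above the clearance threshold, so it's misclassified as
# something safe to drive under.
print(classify_edge(DetectedEdge(height_m=4.1, approaching=True)))  # overhead-structure
print(classify_edge(DetectedEdge(height_m=1.5, approaching=True)))  # obstacle
```

The failure mode falls straight out of the heuristic: the classifier never sees the body of the truck, only an edge that looks exactly like a bridge, so the "don't brake under overhead structures" rule wins. A rangefinder (lidar/radar) wouldn't have that ambiguity, since it would measure the distance to the truck's surface directly.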

The discussion about trucks needing side guards is interesting, but it's unrelated to this situation; it doesn't address the root problem. The problem is a system that depends on unreliable image recognition software rather than on more reliable rangefinding technology.
_________________________
Tony Fabris