Originally Posted By: Taym
...
Having said that, I agree that how real automated piloting systems and human action overlap and interact is non-obvious - your point is very interesting.

I don't think it applies here, though. Or perhaps it does.

Consider that the closer to genuinely trustworthy the automated driving system becomes (highly driver-assisting, but not totally autonomous), the less response time the human will have when the system suddenly discovers it has misunderstood the situation - or abandons an understanding it considered valid up to that very second - and throws control (and responsibility) back upon the driver, perhaps without any prior warning. Human perception-reaction times are typically on the order of one to two seconds, so a late handoff can leave essentially no usable margin.

I posit that as automated driving systems become better, they may 'hang on to' vehicle control deeper and longer into a difficult situation before concluding that they cannot exit that specific situation without a high risk of a poor outcome under machine control.

If the system is far from perfect and the driver is accustomed to frequently taking over from the machine, each subsequent control handoff can be expected to be similarly routine. If the system is very, very good and sudden handoffs back to the driver become uncommon, the driver's ability to quickly regain situational awareness and respond correctly is diminished.
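
As a rough illustration (a toy sketch, not a model of any real system), suppose driver readiness decays with time since the last handoff, and that a better system both hands off less often and hands off later, with less time to spare. Every number, parameter, and curve below is invented for illustration:

```python
import math

def readiness(hours_since_handoff, half_life_h=2.0, floor=0.3):
    """Toy model: driver readiness decays exponentially toward a floor
    as time since the last handoff grows (all parameters invented)."""
    decay = math.exp(-math.log(2) * hours_since_handoff / half_life_h)
    return floor + (1.0 - floor) * decay

def margin_s(time_to_trouble_s, reaction_s, readiness_score):
    """Usable margin: time left after the driver's (readiness-scaled)
    reaction time is spent. Negative means the handoff came too late."""
    return time_to_trouble_s - reaction_s / readiness_score

# Mediocre system: hands off every 0.5 h and bails out early (4.0 s to spare).
# Very good system: hands off every 20 h and bails out late (1.5 s to spare).
for label, gap_h, time_left_s in [("mediocre", 0.5, 4.0),
                                  ("very good", 20.0, 1.5)]:
    r = readiness(gap_h)
    m = margin_s(time_left_s, reaction_s=1.5, readiness_score=r)
    print(f"{label:9} system: readiness={r:.2f}  margin={m:+.1f} s")
```

With these made-up numbers, the mediocre system leaves the driver a positive margin at handoff while the 'better' system leaves a negative one - exactly the inversion described above.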

Perhaps the Tesla system has already become good enough to create this risk. If so, making the driver-assistance system better may simply increase the risk of driver failure in events that turn out unexpectedly poorly for the driving system.


Edited by K447 (04/07/2016 02:03)