Discussion on: Is it Ethical to Work on the Tesla Autopilot Software?

 
kspeakman profile image
Kasey Speakman

I wouldn't believe that claim at face value even if it came with a certified government seal of approval and every kind of certification. It might increase consumer confidence, but (much like FDA approval) you don't really know if it is safe until it has real-world experience behind it from early adopters. We are in unknown territory here. The government can only try to protect you against failures which are already known to it. If the government checks for DDT, lead paint, or dangerous radiation, it is only because somebody has already been affected. Then comes the long process of identifying, classifying, and codifying remedies for the failures. Then maybe consumers can be protected. I don't know how you could skip to regulations and standards. Do we just guess at the ramifications and how to remediate them?

Thread Thread
 
bosepchuk profile image
Blaine Osepchuk

We can skip to some regulations and standards because, even though we've never fielded a fleet of self-driving cars before, we have decades of experience fielding complex computer software, embedded systems, aircraft with automation, and regular cars, along with experience in manufacturing and quality control.

Many of the problems that could occur with self-driving cars are foreseeable. And one or more of the above industries likely already knows how to mitigate them.

That should be the starting point, in my opinion. And from there we can proceed as you suggest, identifying and mitigating the unforeseeable challenges of integrating self-driving cars onto our roads.

I don't see any reason to re-invent the wheel by starting from scratch with regulations.

Thread Thread
 
kspeakman profile image
Kasey Speakman • Edited

We know bits and pieces, but the specific combination for self-driving cars could play out a lot differently as a whole than what you would get from piecing it together and guessing. Take texting and driving, for example. Texting capability was used for a long time before texting-while-driving became a problem. It only became a large enough problem after iPhones were released and the subsequent market shift to touch-based smartphones. Prior to that, phones had tactile buttons, so for the most part people could text reliably (i.e. with T9) without taking their eyes off the road. But after the market shift, people were getting into a lot more accidents. Another example: "hoverboards"... a lot of them are prone to randomly catching fire, prompting airlines to ban them for obvious reasons. We knew how lithium batteries work. We knew how Segways work. But nobody really foresaw that.

It does not make sense to speculate something into law. We already have laws around electronics, cars (and in fact it is a really difficult process to become an automotive manufacturer), etc. I'm sure we will eventually see some laws around self-driving cars specifically. But the right time to do that is when we know which aspects have proven to be dangerous. Guesses get us nowhere toward real safety. And perhaps speculative safety laws will give us imagined safety, which is even worse.

Thread Thread
 
bosepchuk profile image
Blaine Osepchuk

I don't think our views are actually that different. It's just difficult to communicate effectively in the comments section.

At the level you've defined the problem, I agree that preventive legislation would be counter-productive.

I was imagining regulation aimed at a much lower level, like requiring these systems to be programmed in a safe subset of C (if you want to use C), because overflows, null references, etc. are dangerous.
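
To make that concrete, here is a minimal sketch (not from the thread, and the function names are hypothetical) of the kind of overflow and null-dereference hazards that restricted C subsets such as MISRA C are designed to rule out, contrasted with the defensive style such a standard might require:

```c
#include <stdio.h>
#include <stdint.h>

/* Risky style: no null check, and the addition can silently overflow. */
int32_t scale_speed_unsafe(const int32_t *raw, int32_t offset)
{
    return *raw + offset; /* undefined behavior if raw is NULL or the sum overflows */
}

/* Defensive style a coding standard might require: validate the pointer and
 * check the arithmetic before performing it, reporting failure explicitly. */
int scale_speed_safe(const int32_t *raw, int32_t offset, int32_t *result)
{
    if (raw == NULL || result == NULL) {
        return -1; /* reject null inputs instead of dereferencing them */
    }
    if ((offset > 0 && *raw > INT32_MAX - offset) ||
        (offset < 0 && *raw < INT32_MIN - offset)) {
        return -1; /* reject additions that would overflow */
    }
    *result = *raw + offset;
    return 0;
}

int main(void)
{
    int32_t raw = INT32_MAX;
    int32_t out = 0;
    if (scale_speed_safe(&raw, 1, &out) != 0) {
        printf("rejected: overflow or null input\n");
    }
    return 0;
}
```

The point isn't these particular checks; it's that rules like this are mechanical enough to audit or enforce with static analysis, which is the level where regulation seems practical today.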