DEV Community

Discussion on: Is it Ethical to Work on the Tesla Autopilot Software?

Kasey Speakman • Edited

Not being in the automotive industry, I could not judge whether ISO 26262 really matters in practical application. (Not all ISO standards do, despite their titles. My keyboard is not ISO 9995 compliant, for example.)

Regardless of whether they are following a specific ISO standard, it is pretty obvious that they have people responsible for ensuring safety at various levels. (13 safety engineer/tech jobs available at time of writing. Dunno how many are currently employed in safety.)

Should we demand safety? Certainly! But I guess I differ more on the question of "How?" I don't care if they implement every applicable ANSI/ISO/DIN standard. In fact, they would probably add a lot of wasted overhead doing so. I'll demand safety with my wallet. I won't buy autopilot features until they prove them. There will always be some early adopters who want to take the risk and be part of the proof. The business consequences if Tesla screws up the safety aspect are colossal, since lives are at stake. A widespread disaster would be public and would likely result in the company folding. I can hardly think of a larger incentive for a for-profit business (especially an established one like Tesla) to get things right and keep people safe.

And anyway, it will likely be a pretty niche feature, too expensive for most of us at launch. So even considering the worst cases, I doubt it could do too much damage in the proving stage.

Blaine Osepchuk

Voting with your wallet is hard because even if you had infinite time to evaluate the raw data, Tesla won't share it with you. So one day they are going to announce that their car is safer than human drivers, and you'll either believe them or not. But it won't be based on your careful evaluation of the data.

We count on governments to handle this stuff for us because we can't do it ourselves. We don't have resources to see if there's DDT on our spinach, lead paint in the toys we buy for our kids, dangerous radiation coming from our smart phones, or catastrophic errors in the software driving our cars.

Kasey Speakman

I wouldn't believe that claim at face value even if it came with a certified government seal of approval and every kind of certification. It might increase consumer confidence, but (much like FDA approval) you don't really know if it is safe until it has real-world experience behind it from early adopters. We are in unknown territory here. The government can only try to protect you against failures which are already known to it. If the government checks for DDT, lead paint, or dangerous radiation, it is only because somebody has already been affected. Then comes the long process of identifying, classifying, and codifying remedies for the failures. Then maybe consumers can be protected. I don't know how you could skip straight to regulations and standards. Do we just guess at the ramifications and how to remediate them?

Blaine Osepchuk

We can skip to some regulations and standards because even though we've never fielded a fleet of self driving cars before, we have decades of experience fielding complex computer software, embedded systems, aircraft with automation, and regular cars, along with experience with manufacturing and quality control.

Many of the problems that could occur with self-driving cars are foreseeable. And one or more of the above industries likely already knows how to mitigate them.

That should be the starting point, in my opinion. From there we can proceed as you suggest, identifying and mitigating the unforeseeable challenges of integrating self-driving cars on our roads.

I don't see any reason to re-invent the wheel by starting from scratch with regulations.

Kasey Speakman • Edited

We know bits and pieces, but the specific combination for self-driving cars could play out a lot differently as a whole than what you would get from piecing it together and guessing. Take texting and driving, for example. Texting capability existed for a long time before texting-while-driving became a problem. It only became a large enough problem after iPhones were released and the market subsequently shifted to touch-based smartphones. Prior to that, phones had tactile buttons, so for the most part people could text reliably (i.e. t9word) without taking their eyes off the road. But after the market shift, people were getting into a lot more accidents. Another example: "hoverboards"... a lot of them are prone to randomly catching fire, prompting airlines to ban them for obvious reasons. We knew how lithium batteries work. We knew how Segways work. But nobody really foresaw that.

It does not make sense to speculate something into law. We already have laws around electronics, cars (and in fact it is a really difficult process to become an automotive manufacturer), etc. I'm sure we will eventually see some laws around self-driving cars specifically. But the right time to do that is when we know which aspects have proven to be dangerous. Guesses get us nowhere toward real safety. And perhaps speculative safety laws will give us imagined safety, which is even worse.

Blaine Osepchuk

I don't think our views are actually that different. It's just difficult to communicate effectively in the comments section.

At the level you've defined the problem, I agree that preventive legislation would be counter-productive.

I was imagining regulation aimed at a much lower level, like requiring that these systems be programmed in a safe subset of C (if you want to use C), because overflows, null references, etc. are dangerous.
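
To make that concrete, here's a minimal sketch of the kind of defensive checks such a subset (MISRA-C-style guidelines, for instance) pushes you toward. The function names and values are hypothetical, not taken from any real autopilot codebase:

```c
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Add two sensor readings, reporting failure instead of relying on
 * signed overflow (which is undefined behavior in C). */
static bool add_readings_checked(int a, int b, int *out)
{
    if (out == NULL) {                 /* reject a null output pointer */
        return false;
    }
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b)) {  /* addition would overflow */
        return false;
    }
    *out = a + b;
    return true;
}

/* Callers must handle the failure path explicitly instead of
 * assuming the arithmetic succeeded. */
int fuse_speed_estimates(int wheel_speed, int radar_speed)
{
    int sum = 0;
    if (!add_readings_checked(wheel_speed, radar_speed, &sum)) {
        return 0; /* fall back to a known-safe default on error */
    }
    return sum / 2; /* simple average of the two estimates */
}

int main(void)
{
    /* Example usage with made-up values. */
    printf("fused: %d\n", fuse_speed_estimates(72, 74));
    printf("fused (overflowing input): %d\n", fuse_speed_estimates(INT_MAX, 1));
    return 0;
}
```

The point isn't this particular function; it's that rules like "no unchecked arithmetic" and "no unchecked pointer dereference" are the kind of low-level requirements a standard could mandate without having to predict how self-driving cars will fail at the system level.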