Blaine Osepchuk

Originally published at smallbusinessprogramming.com

Is it Ethical to Work on the Tesla Autopilot Software?

The more I learn about Tesla's self driving car development, the more concerned I become about the ethics of working as a software developer on the Tesla autopilot software. Let me explain.

Tesla's value proposition

Tesla is selling people a prototype-quality level 1 or 2 self driving car. But it is also collecting money upfront on the promise that its cars will become level three capable via a future software update, and that the level three capability will be at least twice as safe as a human driver. Of course, full self driving cars aren't actually legal yet. So, that will need to be sorted out by regulators in every jurisdiction (but Elon Musk is confident that they'll come around once he demonstrates how much safer autopilot is than allowing humans to drive).

That's a lot of assumptions

I believe Tesla is making representations to the public and customers that are bordering on deceptive.

Here's a list:

  1. It's possible for Tesla to create a safe level three self driving car in the next few years
  2. Autopilot will be safer than human drivers
  3. Tesla will be able to prove that autopilot is safer than human drivers
  4. The current hardware will support full self driving capabilities
  5. Regulators all over North America will allow the autopilot to be used on their roads without modifications
  6. Tesla will not be sued over their product quality
  7. Tesla will not be forced into a company-ending recall or go bankrupt for other reasons

Let's walk through these points.

1. Is it possible for Tesla to create a safe level three self driving car in the next few years?

There are several parts to this argument.

a) The Tesla autopilot software needs to be perfect

Tesla can't expect credit for the lives it saves to count against the lives that its self driving cars take. The minute a self driving car makes a serious mistake or is suspected of making a serious mistake, there are going to be lawsuits and bad publicity aplenty. What's going to happen when a self driving car kills a bunch of kids?

Let's do some math. By some estimates self driving cars are going to contain something like 200 million lines of code. And the industry average is 15-50 defects/KLOC. So, if self driving car software follows industry average defect rates, we can expect the software in each car to contain 3 to 10 million errors. Are we as a society okay with that?
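
For what it's worth, here's that arithmetic spelled out. This is a back-of-the-envelope sketch using the figures quoted above, nothing more:

```python
# Back-of-the-envelope defect estimate using the figures quoted above.
lines_of_code = 200_000_000        # ~200 million lines of code per car
kloc = lines_of_code / 1_000       # thousands of lines of code

for defects_per_kloc in (15, 50):  # industry-average defect density range
    latent_defects = kloc * defects_per_kloc
    print(f"{defects_per_kloc} defects/KLOC -> ~{latent_defects:,.0f} latent defects")
# Prints ~3,000,000 and ~10,000,000 -- the 3 to 10 million errors mentioned above.
```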

b) Achieving the requirements for a fully autonomous car is next to impossible

I can't see how Tesla's going to get the accuracy required from its machine learning software for a fully autonomous car anytime soon. We already have machine learning in many products that have tons of training data and operate in far simpler domains than self driving cars (swipe typing, voice recognition, photo classification, language translation, etc.), and they make mistakes all the time. In self driving car software, you have to chain multiple deep learning systems together. Each part has to come to the right conclusion in some tiny fraction of a second, then the system must make the right decision and be able to execute it.

All this has to happen with whatever computing power is available in the car. And it has to be able to cope with temperature extremes, hardware faults, bit flips from cosmic rays, mud splashed on the sensors and cameras, software errors, mechanical problems, network issues, cyberattacks, novel road conditions, aggressive and unpredictable humans, wildlife, and emergent behavior when multiple self driving cars interact (more risks here). Plus, it has to work over the life of the car. Given all of these requirements, do you expect me to believe these systems will get everything right 99.9999999% of the time?
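
To get a feel for why chaining systems is so punishing, here's a toy reliability calculation. The per-stage numbers and stage names are invented for illustration; they have nothing to do with Tesla's actual pipeline:

```python
# Toy example: the end-to-end success rate of a chain of stages is the product
# of the per-stage success rates. All numbers below are invented.
stage_reliability = {
    "perception": 0.9999,
    "prediction": 0.9999,
    "planning":   0.9999,
    "control":    0.9999,
}

end_to_end = 1.0
for stage, reliability in stage_reliability.items():
    end_to_end *= reliability

decisions_per_hour = 10 * 3600  # assume ~10 decisions per second
expected_failures = decisions_per_hour * (1 - end_to_end)

print(f"End-to-end success per decision: {end_to_end:.6f}")         # ~0.999600
print(f"Expected failures per hour:      {expected_failures:.1f}")  # ~14.4
# Even at "four nines" per stage, that's over a dozen failures per hour --
# nowhere near the 99.9999999% figure above.
```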

c) Tesla's cars are more like prototypes than production quality cars

Building a self driving car without LiDAR is just crazy. There. I've said it. I know Elon Musk thinks he can get around it but I can pretty much guarantee you his cars would be safer if they also had LiDAR. For comparison, Waymo's cars have 3 kinds of LiDAR, 5 radar sensors, and 8 cameras. Which car would you rather trust your children's lives to?

But there are other problems too. GM and Waymo are building redundant systems into their cars so they can cope with failure conditions without killing you. Where are Tesla's redundant systems?

The Toyota unintended acceleration case should be required reading for anyone developing or considering buying a self driving car. Here's an excellent video (slides) of what happened and why extremely high quality engineering processes and redundancies are required for safety critical systems.

d) You can't mess around with safety

When you're developing a system that could cause deaths if it malfunctions, you're building a safety-critical system. It's a whole different ball game from making smart phone apps. You need to follow a process like ISO 26262. As far as I can understand it, deep learning systems cannot be certified under ISO 26262 because we can't know what they'll do in every case. They can't be subjected to formal methods. Nor can they be tested exhaustively. And that's a big problem.

Comparisons with the aviation industry

If you want to see examples of what taking safety-critical systems seriously looks like, look at the aviation industry. No deep learning allowed. Only unparalleled levels of engineering effort and quality assurance.

Airbus A330

For example, the Airbus A330 has quintuple redundancy for its flight control system!

Highlights:

  • the system runs on five computers simultaneously and only one is needed to fly the aircraft
  • each computer has two processors (known as channels) that perform the same computations and then compare their results
  • different processors are used to reduce the chance that a manufacturing or design fault could introduce an error
  • three computers make up the primary system and two computers are available as fallbacks with reduced functionality
  • the system contains four versions of the flight control software programmed by independent teams using different programming languages
  • all sensor inputs are redundant
  • all actuators are redundant

That's a serious commitment to safety (video, slides). By the way, the A330 system I described above was introduced in 1992! Seeing what Airbus was doing over 25 years ago should make you question whether Tesla has any business building safety-critical systems.
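
As a rough illustration of the "two channels that compare results" idea, here's a minimal sketch of a self-checking pair. It's a simplified, hypothetical example, not Airbus's actual design:

```python
# Simplified, hypothetical sketch of a self-checking pair: two independently
# implemented channels compute the same command, and the output is only used
# if they agree. Not Airbus's actual design.
def command_channel(sensor_inputs):
    # One implementation of the control law (here: a trivial average).
    return sum(sensor_inputs) / len(sensor_inputs)

def monitor_channel(sensor_inputs):
    # A second, independently developed implementation of the same law.
    return sum(sensor_inputs) / len(sensor_inputs)

def self_checking_pair(sensor_inputs, tolerance=1e-6):
    a = command_channel(sensor_inputs)
    b = monitor_channel(sensor_inputs)
    if abs(a - b) <= tolerance:
        return a     # channels agree: the command is used
    return None      # disagreement: fail passive, let the next computer take over

print(self_checking_pair([0.10, 0.12, 0.11]))
```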

2. Autopilot will be safer than human drivers

The airline industry discovered ages ago that autopilot technology isn't all upside:

  • Pilots' flying skills get rusty when they use the autopilot.
  • And they have trouble regaining situational awareness when the autopilot disconnects unexpectedly.

Tesla is going to face both of these problems.

Air France 447 crash

The autopilot on Air France 447 disconnected unexpectedly after receiving an incorrect airspeed reading from a sensor. And three well-trained pilots, each with thousands of hours of professional experience and extensive training on that particular jet, couldn't figure out what was happening, ignored the warnings from their computers, and crashed the plane, killing over 200 people.

This doesn't bode well for your average car owner who hasn't done a lick of training or studying since they first got their license.

The Air France crash isn't an isolated incident, by the way. Pilots routinely have trouble gaining situational awareness after their autopilot systems suddenly disconnect.

Consider what would happen to your night driving or highway passing skills a few years after you got a car that routinely handled those tasks for you. It's the paradox of automation: the more you use it, the worse you'll perform if it unexpectedly fails.

By the way, the Air France crew couldn't gain enough situational awareness in the three minutes they had to respond to their autopilot disconnect to save their lives. What car is ever going to figure out it can't handle a situation three minutes beforehand? None. You may have as little as a couple of seconds in a car. Let that sink in for a minute.

Skipping level 3 autonomy

For this very reason, several car companies have decided to skip level 3 autonomy altogether. I believe Google was the first to publicly abandon the idea of level 3 autonomy and then others followed.

3. Tesla will be able to prove that autopilot is safer than human drivers

Proving that your self driving car is safer than humans in any meaningful way is going to be extremely difficult. I'm not going to drag you through all the statistics. Let me just link you to an article that does a good job of covering the scope of the problem. I'll only hit a few key points here.


Huge sample size needed

Fatal vehicle accidents happen very infrequently (about 1 in every 94 million miles in the US), so you need a huge sample of fatal accidents involving Tesla's autopilot before your estimate of its fatality rate means anything statistically (30 events is a rough rule of thumb). That's a lot of deaths and a lot of driving before you can make the claim that your self driving car is statistically safer than human drivers.
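
Here's the rough arithmetic behind that claim (a back-of-the-envelope sketch using the figures above):

```python
# Back-of-the-envelope: miles of autopilot driving needed to observe ~30 fatal
# crashes, assuming (optimistically) a fatality rate no worse than the human
# average cited above.
miles_per_fatality = 94_000_000  # ~1 fatal crash per 94 million miles (US average)
events_needed = 30               # rough rule of thumb for a meaningful estimate

miles_needed = miles_per_fatality * events_needed
print(f"~{miles_needed / 1e9:.1f} billion miles of autopilot driving")  # ~2.8 billion
```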

Updates or changes reset the clock

Every time you update the software or the hardware, the clock resets to zero because it's basically a new product.

You must count the net outcome of autopilot

There are three indirect kinds of deaths that you'll need to count against self driving cars.

  1. Deaths that occur as a result of people losing their driving skills because they have come to rely on the automation.

  2. Any deaths that result from autopilot/hand-off confusion. People are really bad at taking control of cars when the automation fails. But it will be worse when you also consider that self driving cars from different companies may handle the same situation differently or that the same car could behave differently after a software update. I predict these issues will lead to many, many deaths.

  3. Deaths caused by self driving cars doing things that people just wouldn't expect. There are many scenarios where a self driving car could do something no human would expect and cause a third party to crash.

In other words, you need the all-in outcome, not just the deaths prevented by the autopilot for Tesla owners.

Rigorous study required to determine safety of autopilot

You need to conduct a scientific study to actually figure out whether Tesla's autopilot is safer than human drivers. You would have to take all the people who want to buy a Tesla with autopilot, sell them the car, randomly assign them to get autopilot or not, and then wait until you see at least 30 crashes in each group before you can make any claims about safety.

I'm grossly oversimplifying the process but that's the gist of it. You can't just look at the overall fatality or crash rate for all vehicles and compare it to the rate for Tesla cars--that's an apples-to-oranges comparison.
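
To make that concrete, here's a minimal sketch of the kind of comparison you'd actually need. It treats fatal crashes in each randomized group as Poisson events and compares rates per mile using a normal approximation; the group sizes and crash counts are hypothetical, and this is nowhere near a complete study design:

```python
import math

def compare_fatality_rates(events_a, miles_a, events_b, miles_b):
    """Normal approximation to a Poisson rate comparison (illustrative only)."""
    rate_a = events_a / miles_a
    rate_b = events_b / miles_b
    # Standard error of the rate difference under the Poisson assumption.
    se = math.sqrt(events_a / miles_a**2 + events_b / miles_b**2)
    z = (rate_a - rate_b) / se
    return rate_a, rate_b, z

# Hypothetical randomized groups: (fatal crashes, miles driven).
autopilot_group = (30, 3.2e9)
control_group = (30, 2.8e9)

rate_ap, rate_ctl, z = compare_fatality_rates(*autopilot_group, *control_group)
print(f"autopilot: {rate_ap:.2e}/mile, control: {rate_ctl:.2e}/mile, z = {z:.2f}")
# With only ~30 events per group, even a real safety difference is hard to
# distinguish from noise -- which is the point of the sample-size argument above.
```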

So, in conclusion, car makers will somehow have to get approval to run huge life and death experiments on public roads to collect the data to prove that self driving cars are safe enough to use on public roads. And each hardware or software change resets the clock. Am I the only one who sees a problem with that?

4. The current hardware will support full self driving capabilities

Considering we don't even know what it will take to make a fully autonomous car, I have my doubts.

Let me tell you a little story. I bought voice recognition software in the 1990s because I was writing a lot and the company that made the software promised me that it would work and save me a bunch of time. Guess what? It was so inaccurate that I stopped using it almost immediately. Yet every version of that software promised would-be users that advances in computing power and better algorithms had solved the problems. Lies. In fact, it's 20 years later and my voice now gets processed in the cloud by an unimaginable amount of computing power compared to what I had in my 1990s desktop and voice recognition is still hit and miss.

So, what's the chance the same pattern shows itself with self driving cars? What if it takes 100 or 1,000 or 10,000 times the computational power to go from Tesla's current autopilot to a safe fully self driving car on any road under most conditions (the level of capability that Elon Musk is hinting at)?

5. Regulators all over North America will allow the autopilot to be used on their roads

That's a big assumption if you're already taking deposits for the full autonomy feature. What if approval doesn't come for a decade? Can Tesla survive that? But there are other problems too.

Different regulators may place different restrictions on the level of autonomy they will allow on their roads and the conditions in which that autonomy may be used. Can you imagine the chaos and confusion for everyone if self driving cars had radically different capabilities depending on the jurisdiction they happen to be in?

Regulators could also mandate that self driving cars meet certain requirements before they can be used. What if one of those requirements is LiDAR or adherence to ISO 26262 or no single points of failure? Tesla cannot claim any of these things.

Finally, I wonder about the possibility of a regulator making autopilot illegal after a particularly terrible incident or series of incidents. What would happen if a car with the autopilot engaged slammed into a group of preschoolers?

6. Tesla will not be sued over their product quality

Of course Tesla will be sued. It's already being sued.


Toyota has paid out over a billion dollars in connection with the unintended acceleration issues I mentioned previously. And, while I'm no legal expert, I suspect Tesla is even more exposed than Toyota. I expect plaintiffs' lawyers in some huge class-action lawsuit are going to easily tear Tesla apart for all the reasons I've mentioned throughout this post. How many lawsuits can Tesla afford?

7. Tesla will not be forced into a company-ending recall or go bankrupt for other reasons

Massive recalls seem foreseeable. Can the current hardware support full autonomy? Will regulators require LiDAR? Will regulators require Tesla to install more redundant systems? But even if that stuff all works out in Tesla's favor, just think about the newness and complexity of self driving cars. Aren't big recalls foreseeable?

Bankruptcy for other reasons is another possibility since Tesla is the most shorted stock in the US. Can Tesla get its model 3 production issues straightened out? What if Tesla cannot achieve full autonomy in the next few years? Will regulators allow fully autonomous cars on the road? What if GM, Ford, Waymo, or another company gets approval for its self driving cars before Tesla does? Will people want their deposits back so they can go buy a self driving car immediately? What if monitoring the autopilot is more difficult and tiring than just driving the car yourself?

Putting it all together

I'd like to return to my original question: Is it ethical to work on the Tesla autopilot software? Here are some quotes from the Software Engineering Code of Ethics.

1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.

1.04. Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment, that they reasonably believe to be associated with software or related documents.

1.06. Be fair and avoid deception in all statements, particularly public ones, concerning software or related documents, methods and tools.

2.01. Provide service in their areas of competence, being honest and forthright about any limitations of their experience and education.

3.10. Ensure adequate testing, debugging, and review of software and related documents on which they work.

6.07. Be accurate in stating the characteristics of software on which they work, avoiding not only false claims but also claims that might reasonably be supposed to be speculative, vacuous, deceptive, misleading, or doubtful.

6.10. Avoid associations with businesses and organizations which are in conflict with this code.

[All emphasis is mine.]

Of course, I have no first hand knowledge that anything unethical is happening at Tesla. But suppose even half of what I've presented here is true. Is it ethical to work on the Tesla autopilot software?

Staffing problems

Tesla's made the news several times over the departures of key members of the autopilot team. See here, here, and here. Are people leaving the autopilot team on ethical grounds?

And then there's this tweet from Elon Musk:

We are looking for hardcore software engineers. No prior experience with cars required. Please include code sample or link to your work.

If these engineers don't have experience with cars or aerospace, are they at risk of violating principle 2.01 (Providing service in areas of competence...)?

Final arguments

I have serious concerns about what's going on at Tesla. Building a safe self driving car is going to be incredibly difficult. And nobody should be allowed to cut corners in the interest of getting to market faster or making more profit. Tesla itself has called its software releases "beta-tests". I don't know how you can beta test in public with untrained customers as drivers in good conscience. At a minimum I'd like to see self driving cars tested like this.

The aviation industry would never develop a new technology this way

Do you think Boeing or Airbus could get away with beta-testing a deep learning autopilot system in planes carrying passengers? Not likely. What about if they claimed it would be twice as safe as their current autopilot systems? Irrelevant. Can you imagine the firestorm if a passenger jet crashed and killed everyone on board because the beta-software malfunctioned? So, how are we even talking about doing this with cars?

Is Elon Musk's approach to self driving cars destined to fail?

Earlier this month Elon Musk tweeted the following in response to the problems with the model 3 production line:

Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated.

I bring this up for two reasons. First, Elon Musk has cultivated a reputation for being the smartest person in pretty much any room. But he does make mistakes. And we shouldn't blindly trust his judgement. Secondly, I wonder if he'll be forced to admit the exact same thing about the Tesla autopilot system in the next couple of years. Read that tweet again--it would work perfectly.


The bottom line

I'm all for new technology and making roads safer but Tesla's approach just seems reckless and unethical. Call me a stick-in-the-mud if you must but I want Tesla autopilot software that's designed, engineered, built, tested, validated, and supported like modern Airbus jetliner software, not a buggy prototype for a smart phone app built as quickly as possible by whatever software developers Elon could drum up on Twitter.

Agree or disagree, I'd love to hear your thoughts.

Enjoy this post? Please "like" it below.

Top comments (46)

Kasey Speakman • Edited

It is mass transit aircraft that have these high levels of redundancy you mention. Less so with private aircraft. Tesla is making private transportation so far. Seems a bit askew to pick on Tesla engineers (except for maybe accepting a non-existent work/life balance) when the existing private aircraft fatality rates outpace automotive accidents by an order of magnitude.

Am I going to trust early vehicle autopilot software? No. Nor would I have trusted early airplane auto-pilots in my (imaginary) personal airplane. But I see no need to call out engineers who are trying to break new ground.

Blaine Osepchuk

Tesla's engineers are building a product that could kill and injure a lot of people if things go wrong.

Thoughtful organizations have looked into these sorts of risks and decided that as a society we might want to ensure that certain processes are followed when companies are developing these kinds of products so that people aren't killed or injured needlessly when they try to use them (or just walk down the street). One such guideline is ISO 26262.

As far as I can see, Tesla isn't following ISO 26262 or anything like it. I think it's fair game to ask whether we're okay with that.

Kasey Speakman • Edited

Not being in the automotive industry, I could not judge whether ISO 26262 really matters in practical application. (Not all ISO standards do, despite their titles. My keyboard is not ISO 9995 compliant, for example.)

Regardless of whether they are following a specific ISO standard, it is pretty obvious that they have people responsible for ensuring safety at various levels. (13 safety engineer/tech jobs available at time of writing. Dunno how many are currently employed in safety.)

Should we demand safety? Certainly! But I guess I differ more in the question of "How?" I don't care if they implement every applicable ANSI/ISO/DIN standard. And in fact, they would probably waste a lot of overhead doing so. I'll demand safety with my wallet. I won't buy autopilot features until they prove them. There will always be some early adopters who want to take the risk and be part of the proof. The business consequences if Tesla screws up the safety aspect are colossal, since lives are at stake. Widespread disaster would be public and would likely result in the company folding. I can hardly think of a larger incentive for a for-profit business (especially an established one like Tesla) to get things right and keep people safe.

And anyway, it will likely be a pretty niche feature too expensive for most of us at launch. So even if thinking of the worst cases, I doubt it could do too much damage in the proving stage.

Blaine Osepchuk

Voting with your wallet is hard because even if you had infinite time to evaluate the raw data, Tesla won't share it with you. So, one day they are going to announce that their car is safer than human drivers and you'll either believe them or not. But it won't be based on your careful evaluation of the data.

We count on governments to handle this stuff for us because we can't do it ourselves. We don't have resources to see if there's DDT on our spinach, lead paint in the toys we buy for our kids, dangerous radiation coming from our smart phones, or catastrophic errors in the software driving our cars.

Kasey Speakman

I wouldn't believe that claim at face value even if it came with a certified government seal of approval and every kind of certification. It might increase consumer confidence, but (much like FDA approval) you don't really know if it is safe until it has real world experience behind it by early adopters. We are in unknown territory here. The government can only try to protect you against failures which are already known to it. If the government checks for DDT, lead paint, dangerous radiation it is only because somebody has already been affected. Then comes the long process of identifying, classifying, and codifying remedies for the failures. Then maybe consumers can be protected. I don't know how you could skip to regulations and standards. Do we just guess at the ramifications and how to remediate them?

Blaine Osepchuk

We can skip to some regulations and standards because even though we've never fielded a fleet of self driving cars before, we have decades of experience fielding complex computer software, embedded systems, aircraft with automation, and regular cars, along with experience with manufacturing and quality control.

Many of the problems that could occur with self driving cars are foreseeable. And one or more of the above industries likely already knows how to mitigate them.

That should be the starting point in my opinion. And from there we can proceed as you suggest where we identify and mitigate the unforeseeable challenges of integrating self driving cars on our roads.

I don't see any reason to re-invent the wheel by starting from scratch with regulations.

Kasey Speakman • Edited

We know bits and pieces, but the specific combination for self driving cars could play out a lot differently as a whole than what you would get from piecing it together and guessing. Take texting and driving for example. Texting capability was used for a long time before texting-while-driving became a problem. It only became a large enough problem after iPhones were released and the subsequent market shift to touch-based smart phones. Prior to that phones had tactile buttons, so for the most part people could text reliably (i.e. t9word) without taking their eyes off the road. But after the market shift, people were getting into a lot more accidents. Another example, "hoverboards"... a lot of them are prone to randomly catch on fire, prompting airlines to ban them for obvious reasons. We knew how Lithium batteries work. We knew how segways work. But nobody really foresaw that.

It does not make sense to speculate something into law. We already have laws around electronics, cars (and in fact it is a really difficult process to become an automotive manufacturer), etc. I'm sure we will eventually see some laws around self-driving cars specifically. But the right time to do that is when we know which aspects have proven to be dangerous. Guesses get us nowhere toward real safety. And perhaps speculative safety laws will give us imagined safety, which is even worse.

Blaine Osepchuk

I don't think our views are actually that different. It's just difficult to communicate effectively in the comments section.

At the level you've defined the problem, I agree that preventive legislation would be counter-productive.

I was imagining regulation aimed at a much lower level. Like requiring these systems to be programmed in a safe subset of C (if you want to use C) because overflows, null references, etc. are dangerous.

edA‑qa mort‑ora‑y

There's a lot of nice information here about quality requirements and expectations. Thanks for this nice overview of the situation.

The code of ethics, however, is not something that people are required to follow, nor does it represent an agreed-upon view of what ethics in software actually are.

By that list of ethics the entire phone app ecosystem and most websites would also not be in existence. The trade-off between quality, honesty and getting shiny stuff is something people are overly comfortable with (oddly, just not in airlines). I've written about this before, Are we forever cursed with buggy software

On a minor note, the 200 million lines of code seems quite excessive. The full Linux kernel, if all modules, drivers, everything is compiled in, is less than 20 million lines. Surely the OS is the dominant source of code in a car, thus I don't see it exceeding 20 million lines of code. Though in fairness, I don't think that invalidates your point about bugs.

Blaine Osepchuk • Edited

You're welcome.

And you are correct. Nobody is required to follow the code of ethics to which I linked in the post. If my post implies that, it was not my intention. Most people haven't studied ethics so I was just presenting a default set of ethics so we could talk about the issues.

I'd argue that some of the existing products might not be very ethical. There's certainly a spectrum of 'goodness' out there in the app ecosystem but I'm not here to debate that part of it.

200 million lines of code does seem like a lot. But it's often quoted. Like here for example.

There's a crazy amount of code in (tiny computers/big microcontrollers) and there are a lot of them in modern cars.

Nested Software • Edited

That number of loc doesn’t seem too far outside of what I found when searching for estimates for normal cars (non self-driving) that are on the road today. Generally they’re fine, so I don’t think focusing on the total lines of code is so important.

The real question is simply how effective the self driving component is, which surely will be less code. In a way maybe it’s the weights used by the neural network that are going to be the most important issue rather than the source code for the net itself.

I’m not saying your overall point is invalid, just that the loc argument itself may be a bit of a straw man.

Blaine Osepchuk

Yup, I totally agree.

LOC is a terrible measure.

A dev on the self driving project doesn't need to be concerned with the code in the micro controller that's managing the left front window opener.

But the general point I was trying to get across is that these cars are more complex and have much more software in them than most people realize.

Cheers.

Blaž Šelih

Nicely written and thought provoking. I tend to agree with most of your points.

As someone who has done a bit of work in aerospace and marine, I am also constantly baffled by the level of ignorance, incompetence and recklessness displayed by the self driving industry.

Just for fun, post this to Hacker News and brace for Musk cultist frontal assault.

Blaine Osepchuk

Thanks, Blaz.

I was actually bracing to be attacked here but everyone's pretty mellow.

Eljay-Adobe

In my opinion, Elon Musk is always overoptimistic on deadlines, and under-delivers on quotas.

If you ignore his deadline estimates and production capacity goals, then expectations can be set to something reasonable.

That being said, I'm not negative on Elon Musk. I just take what he promises with an extraordinarily large grain of salt. And, I enjoy my Tesla model S. ;-)

Blaine Osepchuk

You're not worried that this is all going to end very badly for Tesla?

Eljay-Adobe

The self-driving car? I think it will take significantly longer to develop that technology than their estimates. I liken it to John McCarthy's prediction on how easy the AI problem will be to solve; or how we're merely 10 years away from sustainable fusion energy (and have been for, oh, about 7 decades now). I expect that there will be delay after delay after delay, for a long time.

Their production quotas falling far short of their promises? That will erode stockholder confidence, which will impact their market cap.

Blaine Osepchuk

Both, I suppose.

I agree that the estimates will keep slipping. When I started thinking about writing this post, I started imagining the kind of ML/AI you would need to deal with the uncommon things I've experienced in my driving career. And the more scenarios I recalled/imagined, the more difficult the requirements for the car became.

So I think it's going to be something like the first 99% will take 5% of the effort. And the last 1% will take the other 95%.

Jorgie

The Tesla autopilot software needs to be perfect

If you think that is the case, I don't know that I trust anything else you have to say. The software needs to be good enough, and insurance companies already have the actuary tables to figure out just how high that bar actually is.

Blaine Osepchuk

I was engaged in a bit of hyperbole there.

But the autopilot needs to be really, really, good. The accuracy needs to be way better than any voice dictation, translation, or image recognition I've ever seen.

Tesla is going to be sued for almost every instance where the car suddenly does the wrong thing and kills someone. If you put thousands or tens of thousands of those cars on the road and they drive for an average of 1 hour per day for 11.6 years (the US average), that's a lot of chances to get sued for each car sold so you need the software to be practically "perfect".

Eric Bland • Edited

You mention component redundancy. The Tesla has ZERO redundancy in the electric steering rack and it has had hundreds of failures from loose wiring, broken ground studs and, most scary of all, the steering rack is falling to pieces due to corrosion and is the subject of a "non-urgent" recall.
Be afraid. Be VERY afraid.

teslamotorsclub.com/tmc/threads/po...

Blaine Osepchuk

Yeah, that's the kind of stuff that concerns me. Thanks for sharing, Eric.

rhymes

Musk definitely changed the world (I'm 100% sure we need to convert to electric cars) but he's also a Tony Stark like figure which scares me a bit due to the level of his intelligence and capabilities.

I too do not think Tesla is going to bring us to level 3 (or 4 or 5) cars anytime soon. But building a fleet of shared autonomous electric cars has always been a goal of his since the beginning of Tesla. He's not going back on that but he'll probably scrap level 3 and go Waymo's route. Or at least I hope.

For this very reason, several car companies have decided to skip level 3 autonomy altogether. I believe Google was the first to publicly abandon the idea of level 3 autonomy and then others followed.

I honestly do not understand why they are pushing towards autopilot level 3 instead of developing a fully autonomous car. Except for marketing reasons (and the fact they don't have infinite money). It could probably be the end of Tesla if they botch it and they probably will, as you say. I'm glad that in the meantime the other car companies, thanks to Tesla and Musk, have started taking electric cars seriously. Well, every car company except FCA :D

Can Tesla get its model 3 production issues straightened out?

Eventually :-D

Thanks for this post, it was very insightful and as I said I hope they will skip autopilot altogether.

Blaine Osepchuk

Thanks for sharing your thoughts, rhymes. I agree with everything you wrote.

BTernaryTau

I notice that for part 3, the linked article cites the weaker of the two sets of data I've seen used to defend Autopilot. The stronger set of data deals with all crashes for a larger sample size and compares crash rates from before and after the system is installed.

Blaine Osepchuk • Edited

Interesting. Thanks for sharing.

I'm not trying to bash on Tesla but the article doesn't actually contain any data (just the 40% number). How many miles driven with autopilot and without? How many crashes in each mode? Which versions of autopilot? Weather factors? Time of day factors? Other factors?

I really hope self driving cars do in fact save lives. Everyone knows someone who has been injured or killed in a car accident and it's just terrible. If we can use tech to prevent even some of those deaths, that would be awesome.

BTernaryTau

Unfortunately it appears the data itself is not public, which is now really annoying me.

Despite being a Tesla supporter, I do agree with many of your concerns. It's important to minimize both type I and type II errors, and it is problematic that companies working on the technology are incentivized to downplay the former and emphasize the latter.

Blaine Osepchuk

I agree.

Ben Halpern

I'm not sure what my answer to the question is, even after reading this all, but I'd definitely lump it in with a lot of ethically-dubious activity coming out of the California tech sector.

Incredibly thought-provoking article no matter what.

Blaine Osepchuk

Thanks, Ben. I'm interested to know how it's all going to turn out.

Rob Waller

Really, really great post. I've done a lot of research over the last year on ML and AI and I have to say I agree with you. The complexity involved in these systems is too high and the accuracy too low for it to be workable any time soon.

Also, I don't know if you saw this from the UK the other day, but it highlights your point about system cut-out; this guy could have killed a bunch of people: dailymail.co.uk/news/article-56684...

Blaine Osepchuk • Edited

Thanks so much, Rob.

Yes, I saw that. I can't believe Tesla doesn't have cameras watching the driver.

Mihail Malo

Do you happen to know if they use LiDARs on the cars during in-house testing?
Perhaps the confidence comes from their systems being able to predict all of the critical LiDAR information, creating an observable "virtual LiDAR" layer of abstraction?

Blaine Osepchuk

I've seen no mention of LiDAR. Elon called it something like "an expensive crutch."

Blaine Osepchuk • Edited

Thanks for sharing your thoughts, Jesuszilla.

So between now and when regulation is put into place, you don't see any ethical dilemma with software developers working for Tesla on a project where:

  • the boss is making claims about the software that likely cannot be met?
  • the company is taking money upfront for a product that does not exist yet and might not exist for years? Or might not be approved for use?
  • taking an approach to developing a technology that most of the other car companies abandoned because they thought it was too dangerous?
  • hiring programmers with no previous experience or training in building safety critical systems (this one is only a problem for me if they don't receive adequate training and supervision)?
  • not following best practices for building safety critical systems?

Sure, self driving cars are going to be developed. I don't have a problem with that. I take issue with the way in which it appears Tesla is going about it.

Cheers.

rhymes

@bosepchuk did you read this thread? twitter.com/atomicthumbs/status/10... - Apparently an ex Tesla employee is spilling the beans on the internals of the tech and he's very worried about the autopilot feature.

Blaine Osepchuk

Thanks for that, rhymes.

I took a look; it's interesting but I'm not sure how credible that stuff is.

If true, it doesn't paint a very favorable picture of the quality of Tesla's software engineering and IT.

Michael Minshew

Great article. As someone who doesn't drive due to a phobia, I can barely wait for a safe and reliable self driving car. The freedom and opportunities that would open up for me would be overwhelming. Hopefully Tesla or someone finds a solution, as that would completely change my life.

Blaine Osepchuk

Thanks and I'm with you. There are many people who can't or are unwilling to drive and in a society designed around the assumption that just about everyone will own a car, that's mighty isolating.

 
Blaine Osepchuk

What I would do is irrelevant but I take your point. Everyone has to decide for themselves what their personal ethics will allow.