The more I learn about Tesla's self driving car development, the more concerned I become about the ethics of working as a software developer on the...
It's mass transit aircraft that have the high levels of redundancy you mention; less so with private aircraft. And so far, Tesla is making private transportation. Seems a bit askew to pick on Tesla engineers (except for maybe accepting a non-existent work/life balance) when existing private-aircraft fatality rates outpace automotive accident rates by an order of magnitude.
Am I going to trust early vehicle autopilot software? No. Nor would I have trusted early airplane auto-pilots in my (imaginary) personal airplane. But I see no need to call out engineers who are trying to break new ground.
Tesla's engineers are building a product that could kill and injure a lot of people if things go wrong.
Thoughtful organizations have looked into these sorts of risks and decided that, as a society, we might want to ensure that certain processes are followed when companies develop these kinds of products, so that people aren't killed or injured needlessly when they try to use them (or just walk down the street). One such guideline is ISO 26262.
As far as I can see, Tesla isn't following ISO 26262 or anything like it. I think it's fair game to ask whether we're okay with that.
Not being in the automotive industry, I can't judge whether ISO 26262 really matters in practical application. (Not all ISO standards do, despite their titles. My keyboard is not ISO 9995 compliant, for example.)
Regardless of whether they are following a specific ISO standard, it is pretty obvious that they have people responsible for ensuring safety at various levels. (13 safety engineer/tech jobs available at time of writing. Dunno how many are currently employed in safety.)
Should we demand safety? Certainly! But I guess I differ more on the question of "How?" I don't care if they implement every applicable ANSI/ISO/DIN standard; in fact, they would probably waste a lot of overhead doing so. I'll demand safety with my wallet. I won't buy autopilot features until they prove them. There will always be some early adopters who want to take the risk and be part of the proof. The business consequences if Tesla screws up the safety aspect are colossal, since lives are at stake. Widespread disaster would be public and would likely result in the company folding. I can hardly think of a larger incentive for a for-profit business (especially an established one like Tesla) to get things right and keep people safe.
And anyway, it will likely be a pretty niche feature, too expensive for most of us at launch. So even considering the worst cases, I doubt it could do too much damage in the proving stage.
Voting with your wallet is hard because even if you had infinite time to evaluate the raw data, Tesla won't share it with you. So, one day they're going to announce that their car is safer than human drivers and you'll either believe them or not. But it won't be based on your careful evaluation of the data.
We count on governments to handle this stuff for us because we can't do it ourselves. We don't have resources to see if there's DDT on our spinach, lead paint in the toys we buy for our kids, dangerous radiation coming from our smart phones, or catastrophic errors in the software driving our cars.
I wouldn't believe that claim at face value even if it came with a certified government seal of approval and every kind of certification. It might increase consumer confidence, but (much like FDA approval) you don't really know if it is safe until it has real world experience behind it by early adopters. We are in unknown territory here. The government can only try to protect you against failures which are already known to it. If the government checks for DDT, lead paint, dangerous radiation it is only because somebody has already been affected. Then comes the long process of identifying, classifying, and codifying remedies for the failures. Then maybe consumers can be protected. I don't know how you could skip to regulations and standards. Do we just guess at the ramifications and how to remediate them?
We can skip to some regulations and standards because even though we've never fielded a fleet of self driving cars before, we have decades of experience fielding complex computer software, embedded systems, aircraft with automation, and regular cars, along with experience with manufacturing and quality control.
Many of the problems that could occur with self driving cars are foreseeable. And one or more of the above industries likely already knows how to mitigate them.
That should be the starting point in my opinion. And from there we can proceed as you suggest where we identify and mitigate the unforeseeable challenges of integrating self driving cars on our roads.
I don't see any reason to re-invent the wheel by starting from scratch with regulations.
We know bits and pieces, but the specific combination for self driving cars could play out a lot differently as a whole than what you would get from piecing it together and guessing. Take texting and driving, for example. Texting capability was used for a long time before texting-while-driving became a problem. It only became a large enough problem after iPhones were released and the subsequent market shift to touch-based smart phones. Prior to that, phones had tactile buttons, so for the most part people could text reliably (e.g., T9) without taking their eyes off the road. But after the market shift, people were getting into a lot more accidents. Another example: "hoverboards". A lot of them are prone to randomly catching fire, prompting airlines to ban them for obvious reasons. We knew how lithium batteries work. We knew how Segways work. But nobody really foresaw that.
It does not make sense to speculate something into law. We already have laws around electronics, cars (and in fact it is a really difficult process to become an automotive manufacturer), etc. I'm sure we will eventually see some laws around self-driving cars specifically. But the right time to do that is when we know which aspects have proven to be dangerous. Guesses get us nowhere toward real safety. And perhaps speculative safety laws will give us imagined safety, which is even worse.
I don't think our views are actually that different. It's just difficult to communicate effectively in the comments section.
At the level you've defined the problem, I agree that preventive legislation would be counter-productive.
I was imagining regulation aimed at a much lower level. Like requiring these systems to be programmed in a safe subset of C (if you want to use C) because overflows, null references, etc. are dangerous.
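To make that concrete, here's a minimal sketch of the defensive style a safe C subset (MISRA C, for instance) pushes you towards: no unchecked pointers, no silent integer wraparound, every error path handled explicitly. This isn't anyone's actual automotive code, just an illustration of the idea.

```c
/* Sketch of "safe subset of C" style: checked arithmetic and
 * bounds-checked access instead of trusting inputs. Illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SENSOR_COUNT 4U

/* Checked addition: returns false instead of wrapping on overflow. */
static bool add_u16_checked(uint16_t a, uint16_t b, uint16_t *out)
{
    if ((out == NULL) || (a > (UINT16_MAX - b))) {
        return false;
    }
    *out = (uint16_t)(a + b);
    return true;
}

/* Bounds-checked read: an out-of-range index is a handled error,
 * not undefined behavior. */
static bool read_sensor(const uint16_t readings[SENSOR_COUNT],
                        uint32_t index, uint16_t *out)
{
    if ((readings == NULL) || (out == NULL) || (index >= SENSOR_COUNT)) {
        return false;
    }
    *out = readings[index];
    return true;
}

int main(void)
{
    const uint16_t readings[SENSOR_COUNT] = {100U, 200U, 300U, 400U};
    uint16_t value = 0U;
    uint16_t sum = 0U;

    for (uint32_t i = 0U; i < SENSOR_COUNT; i++) {
        if (!read_sensor(readings, i, &value) ||
            !add_u16_checked(sum, value, &sum)) {
            /* In a real system this branch would put the vehicle into
             * a defined safe state rather than just logging. */
            (void)fprintf(stderr, "sensor fault at index %u\n", (unsigned)i);
            return 1;
        }
    }
    (void)printf("sum = %u\n", (unsigned)sum);
    return 0;
}
```

The point isn't these particular checks; it's that the subset forbids the language features where most of the undefined-behavior landmines live.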
There's a lot of nice information here about quality requirements and expectations. Thanks for this nice overview of the situation.
The code of ethics, however, is not something that people are required to follow, nor does it represent an agreed-upon view of what ethics in software actually are.
By that list of ethics the entire phone app ecosystem and most websites would also not be in existence. The trade-off between quality, honesty and getting shiny stuff is something people are overly comfortable with (oddly, just not in airlines). I've written about this before, Are we forever cursed with buggy software
On a minor note, the 200 million lines of code seems quite excessive. The full Linux kernel, with all modules, drivers, everything compiled in, is less than 20 million lines. Surely the OS is the dominant source of code in a car, so I don't see it exceeding 20 million lines of code. Though in fairness, I don't think that invalidates your point about bugs.
You're welcome.
And you are correct. Nobody is required to follow the code of ethics to which I linked in the post. If my post implies that, it was not my intention. Most people haven't studied ethics so I was just presenting a default set of ethics so we could talk about the issues.
I'd argue that some of the existing products might not be very ethical. There's certainly a spectrum of 'goodness' out there in the app ecosystem but I'm not here to debate that part of it.
200 million lines of code does seem like a lot. But it's often quoted. Like here for example.
There's a crazy amount of code in those tiny computers/big microcontrollers, and there are a lot of them in modern cars.
That number of LOC doesn't seem too far outside the estimates I found when searching for normal (non-self-driving) cars on the road today. Generally they're fine, so I don't think focusing on the total lines of code is so important.
The real question is simply how effective the self driving component is, which surely will be less code. In a way maybe it’s the weights used by the neural network that are going to be the most important issue rather than the source code for the net itself.
I'm not saying your overall point is invalid, just that the LOC argument itself may be a bit of a straw man.
Yup, I totally agree.
LOC is a terrible measure.
A dev on the self driving project doesn't need to be concerned with the code in the micro controller that's managing the left front window opener.
But the general point I was trying to get across is that these cars are more complex and have much more software in them than most people realize.
Cheers.
Nicely written and thought provoking. I tend to agree with most of your points.
As someone who has done a bit of work in aerospace and marine, I am also constantly baffled by the level of ignorance, incompetence and recklessness displayed by the self driving industry.
Just for fun, post this to Hacker News and brace for Musk cultist frontal assault.
Thanks, Blaz.
I was actually bracing to be attacked here but everyone's pretty mellow.
In my opinion, Elon Musk is always overoptimistic on deadlines, and under-delivers on quotas.
If you ignore his deadline estimates and production capacity goals, then expectations can be set to something reasonable.
That being said, I'm not negative on Elon Musk. I just take what he promises with an extraordinarily large grain of salt. And, I enjoy my Tesla model S. ;-)
You're not worried that this is all going to end very badly for Tesla?
The self-driving car? I think it will take significantly longer to develop that technology than their estimates. I liken it to John McCarthy's prediction of how easy the AI problem would be to solve; or how we're merely 10 years away from sustainable fusion energy (and have been for, oh, about 7 decades now). I expect that there will be delay after delay after delay, for a long time.
Their production quotas falling far short of their promises? That will erode stockholder confidence, which will impact their market cap.
Both, I suppose.
I agree that the estimates will keep slipping. When I started thinking about writing this post, I started imagining the kind of ML/AI you would need to deal with the uncommon things I've experienced in my driving career. And the more scenarios I recalled/imagined, the more difficult the requirements for the car became.
So I think it's going to be something like the first 99% will take 5% of the effort. And the last 1% will take the other 95%.
If you think that is the case, I don't know that I trust anything else you have to say. The software needs to be good enough, and insurance companies already have the actuarial tables to figure out just how high that bar actually is.
I was engaged in a bit of hyperbole there.
But the autopilot needs to be really, really, good. The accuracy needs to be way better than any voice dictation, translation, or image recognition I've ever seen.
Tesla is going to be sued for almost every instance where the car suddenly does the wrong thing and kills someone. If you put thousands or tens of thousands of those cars on the road and they drive for an average of 1 hour per day for 11.6 years (the US average), that's a lot of chances to get sued for each car sold, so you need the software to be practically "perfect".
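Here's the back-of-the-envelope math behind that, with an assumed fleet size of 10,000 cars (the fleet size and fault rate are made-up illustration numbers; the 1 hour/day and 11.6 years come from the paragraph above):

```c
/* Back-of-the-envelope fleet exposure: how many driving hours a fleet
 * accumulates over its lifetime. Fleet size and fault rate are assumed. */
#include <stdio.h>

int main(void)
{
    const double fleet_size    = 10000.0; /* cars on the road (assumed) */
    const double hours_per_day = 1.0;     /* average daily driving time */
    const double service_years = 11.6;    /* US average vehicle lifespan */

    double hours_per_car = hours_per_day * 365.0 * service_years; /* ~4,234 */
    double fleet_hours   = hours_per_car * fleet_size;            /* ~42M   */

    printf("per-car exposure: %.0f hours\n", hours_per_car);
    printf("fleet exposure:   %.0f hours\n", fleet_hours);

    /* Even a one-in-a-million-hours catastrophic fault rate would mean
     * dozens of incidents over the fleet's lifetime. */
    printf("incidents at 1e-6/hr: %.0f\n", fleet_hours * 1e-6);
    return 0;
}
```

Roughly 42 million exposure hours from just 10,000 cars: that's why "pretty good" software isn't good enough here.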
You mention component redundancy. The Tesla has ZERO redundancy in the electric steering rack, and it has had hundreds of failures from loose wiring and broken ground studs. Most scary of all, the steering rack is falling to pieces due to corrosion and is the subject of a "non-urgent" recall.
Be afraid. Be VERY afraid.
teslamotorsclub.com/tmc/threads/po...
Yeah, that's the kind of stuff that concerns me. Thanks for sharing, Eric.
Musk definitely changed the world (I'm 100% sure we need to convert to electric cars) but he's also a Tony Stark-like figure, which scares me a bit given the level of his intelligence and capabilities.
I too do not think Tesla is going to bring us to level 3 (or 4 or 5) cars anytime soon. But building a fleet of shared autonomous electric cars has always been a goal of his since the beginning of Tesla. He's not going back on that but he'll probably scrap level 3 and go Waymo's route. Or at least I hope.
I honestly do not understand why they are pushing towards autopilot level 3 instead of developing a fully autonomous car, except for marketing reasons (and the fact they don't have infinite money). It could well be the end of Tesla if they botch it, and they probably will, as you say. I'm glad that in the meantime the other car companies, thanks to Tesla and Musk, have started taking electric cars seriously. Well, every car company except FCA :D
Eventually :-D
Thanks for this post, it was very insightful and as I said I hope they will skip autopilot altogether.
Thanks for sharing your thoughts, rhymes. I agree with everything you wrote.
I notice that for part 3, the linked article cites the weaker of the two sets of data I've seen used to defend Autopilot. The stronger set of data deals with all crashes for a larger sample size and compares crash rates from before and after the system is installed.
Interesting. Thanks for sharing.
I'm not trying to bash on Tesla but the article doesn't actually contain any data (just the 40% number). How many miles driven with autopilot and without? How many crashes in each mode? Which versions of autopilot? Weather factors? Time of day factors? Other factors?
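To show why a single headline number isn't enough, here's a sketch of the minimum comparison you'd want to make. Every number in it is hypothetical, purely to illustrate the shape of the data that's missing:

```c
/* Sketch: evaluating a claim like "Autopilot reduces crashes by 40%"
 * requires at least miles and crash counts per mode. All numbers here
 * are made up for illustration -- this is NOT real Tesla data. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical counts. */
    double autopilot_miles   = 100.0e6;
    double autopilot_crashes = 80.0;
    double manual_miles      = 300.0e6;
    double manual_crashes    = 400.0;

    double ap_rate     = autopilot_crashes / (autopilot_miles / 1.0e6);
    double manual_rate = manual_crashes / (manual_miles / 1.0e6);

    printf("autopilot: %.2f crashes per million miles\n", ap_rate);
    printf("manual:    %.2f crashes per million miles\n", manual_rate);

    /* Even this comparison can mislead: Autopilot mostly runs on
     * highways in good weather, so the two samples aren't comparable
     * without controlling for road type, weather, and time of day. */
    return 0;
}
```

Without per-mode miles, crash counts, and the confounding factors, the 40% figure is unverifiable.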
I really hope self driving cars do in fact save lives. Everyone knows someone who has been injured or killed in a car accident, and it's just terrible. If we can use tech to prevent even some of those deaths, that would be awesome.
Unfortunately it appears the data itself is not public, which is now really annoying me.
Despite being a Tesla supporter, I do agree with many of your concerns. It's important to minimize both type I and type II errors, and it is problematic that companies working on the technology are incentivized to downplay the former and emphasize the latter.
I agree.
I'm not sure what my answer to the question is, even after reading this all, but I'd definitely lump it in with a lot of ethically-dubious activity coming out of the California tech sector.
Incredibly thought-provoking article no matter what.
Thanks, Ben. I'm interested to know how it's all going to turn out.
Really, really great post. I've done a lot of research over the last year on ML and AI and I have to say I agree with you. The complexity involved in these systems is too high and the accuracy too low for it to be workable any time soon.
Also, I don't know if you saw this from the UK the other day, but it highlights your point about system cut-out; this guy could have killed a bunch of people: dailymail.co.uk/news/article-56684...
Thanks so much, Rob.
Yes, I saw that. I can't believe Tesla doesn't have cameras watching the driver.
Do you happen to know if they use LiDARs on the cars during in-house testing?
Perhaps the confidence comes from their systems being able to predict all of the critical LiDAR information, creating an observable "virtual LiDAR" layer of abstraction?
I've seen no mention of LiDAR. Elon called it something like "an expensive crutch."
Thanks for sharing your thoughts, Jesuszilla.
So between now and when regulation is put into place, you don't see any ethical dilemma with software developers working for Tesla on a project where:
Sure, self driving cars are going to be developed. I don't have a problem with that. I take issue with the way in which it appears Tesla is going about it.
Cheers.
@bosepchuk did you read this thread? twitter.com/atomicthumbs/status/10... - Apparently an ex-Tesla employee is spilling the beans on the internals of the tech and he's very worried about the autopilot feature.
Thanks for that, rhymes.
I took a look; it's interesting but I'm not sure how credible that stuff is.
If true, it doesn't paint a very favorable picture of the quality of Tesla's software engineering and IT.
Great article. As someone who doesn't drive due to a phobia, I can barely wait for a safe and reliable autonomous car. The freedom and opportunities that would open up for me would be overwhelming. Hopefully Tesla or someone finds a solution, as that would completely change my life.
Thanks and I'm with you. There are many people who can't or are unwilling to drive and in a society designed around the assumption that just about everyone will own a car, that's mighty isolating.
What I would do is irrelevant, but I take your point. Everyone has to decide for themselves what their personal ethics will allow.
Good question. I don't have an answer on that but I'd be interested to hear what people think.