When I was a kid, one of my favorite movies was Jurassic Park, because well…dinosaurs. I remember the movie being such a phenomenon that summer too; there were shirts and toys everywhere. I even remember going to the community pool and seeing adults everywhere holding the book with the silver cover and the T-Rex skull on it.
It really was a movie ahead of its time, not just in terms of special effects or how it covers the topic of cloning, but in that it described a societal nexus we were all headed towards that many people didn't quite see yet. One of my favorite moments in the movie is when Jeff Goldblum's character, having just survived a T-Rex attack, delivers this line:

> Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should.
Technology has grown by leaps and bounds, to the point that many now argue Moore's law is irrelevant and outdated. We are making advances in every major area of life, to the point that the world we grew up in would be completely unrecognizable to our children. All of which makes that question more relevant than ever today, especially with regard to artificial intelligence.
Just to be clear, these are the thoughts of one developer / architect (me) on this subject. I would recommend you research it heavily and come to your own conclusions; these are my opinions and mine alone.
We have reached a point where more and more businesses, and society in general, are looking to artificial intelligence as a potential solution to a lot of problems, and the question of AI ethics has become more and more prevalent. But what does that actually mean, and how can an organization build AI solutions that benefit all of humanity rather than cause unintended problems and potentially harm members of society?
The first part of this comes down to recognizing that artificial intelligence solutions need to be fully baked, and that great care needs to be given to mitigating built-in bias in both the training data and the end results of the service. So what do I mean by bias? I mean actively searching for potentially bad assumptions that might find their way into a model through its training dataset. Let's take a hypothetical case that strikes close to home for me.
Say you wanted to build a system to identify patients at high risk for pneumonia (a hypothetical I talked through with a colleague a few months ago). Taking training data made up of the conditions each patient has, plus an indicator of whether or not they ended up getting pneumonia, would seem like a logical way to tackle the problem.
But there is a potential bias here: many asthmatics like myself tend to seek proactive treatment, as we are at high risk, and many doctors treat our colds very aggressively, mainly because when we get pneumonia it can be life threatening. If you don't account for this, it can skew the results of any AI system, because you likely won't see many asthmatics in your training data who actually got pneumonia, and the model could end up treating asthma as a sign of low risk when the opposite is true.
Another potential consideration is location. If I take my data sample just from the southwest, say Arizona, dry climates tend to be better for people with respiratory problems, so the sample might show a lower risk of pneumonia than the general population.
My point is that how you gather data and create a training dataset requires a significant amount of thought and care to ensure success.
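To make this concrete, here's a minimal sketch, in Python with pandas, of the kind of sanity check I have in mind. The dataset and column names (`has_asthma`, `region`, `got_pneumonia`) are hypothetical; the point is simply that before training anything, you can look at subgroup representation and outcome rates to catch biases like the asthma and location examples above.

```python
# Hypothetical sketch: auditing a pneumonia training dataset for sampling bias.
# Column names (has_asthma, region, got_pneumonia) are made up for illustration.
import pandas as pd


def audit_training_data(df: pd.DataFrame) -> None:
    """Run simple representation and outcome-rate checks before training."""
    total = len(df)

    # 1. Are high-risk subgroups (e.g. asthmatics) represented at all?
    asthmatics = df[df["has_asthma"]]
    print(f"Asthmatics in sample: {len(asthmatics)} of {total} "
          f"({len(asthmatics) / total:.1%})")

    # 2. Does the outcome rate differ suspiciously by subgroup?
    #    A very low pneumonia rate among asthmatics may reflect proactive
    #    treatment (selection bias) rather than genuinely lower risk.
    print(df.groupby("has_asthma")["got_pneumonia"].mean())

    # 3. Is the sample dominated by one geography (e.g. dry southwestern states)?
    print(df["region"].value_counts(normalize=True))


if __name__ == "__main__":
    # Tiny fabricated example purely to show the checks running.
    sample = pd.DataFrame({
        "has_asthma":    [True, False, False, True, False, False],
        "region":        ["AZ", "AZ", "AZ", "NY", "AZ", "AZ"],
        "got_pneumonia": [False, True, False, False, True, False],
    })
    audit_training_data(sample)
```

None of this replaces careful study design, but even checks this simple can surface a skewed sample before it becomes a skewed model.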
The other major problem is that every AI system is unique in the implications of a bad result. In the case above, it's life threatening; for a recommendations engine like Netflix's, it means I miss a movie I might like. Those are very different impacts on people's lives, and that difference cannot be ignored, as it really does figure into the overall equation.
So the question becomes: how do we ensure that we are doing the right thing with AI solutions? The answer is to take the time to decide which values we, as an organization, will embrace at our core for these solutions. We need to make value-driven decisions about the types of implications we are concerned about and let those values guide our technology decisions.
For a long time, values have been one of the deciding factors between successful organizations and unsuccessful ones. The example that comes to mind is the Tylenol scare, when a batch of Tylenol had been tampered with. The board had a choice: pull all the Tylenol from market shelves for public safety and hurt their shareholders, or protect shareholders and deny. The company's values indicated that customers must always come first, and that made the decision clear. It was absolutely the right decision. I'm giving a seriously abridged version, but here's a link to an article on the scare.
Microsoft actually released an AI Business School to give customers a good starting point for figuring that out, with tracks for a variety of industries covering what should be considered in each. Microsoft has also made its position on ethical AI very clear in a blog post by company president Brad Smith and in Our Approach: Microsoft AI.
Below are the links to some of the training courses on the subject:
- Introduction to AI technology
- Introduction to AI technology for Business Leaders
- Define an AI strategy to create business value
- Identify guiding principles for responsible AI in your business
- Understand the importance of building an AI-ready culture
- Discover how to foster an AI-ready culture in sales
- Examine the Microsoft approach to Artificial Intelligence
Alongside this, there has been a lot of discussion of the topic from some of the biggest executives in the AI space, including Satya Nadella:
But one of the most interesting voices I've heard on the ethics and future of AI is Calum Chace. I would encourage you to watch this, as it really goes into the depth of the challenges and the ways that, if AI is not handled responsibly, we could be looking at another major singularity in human evolution:
This is a complicated and multi-faceted topic, and great food for thought on a Friday. Empathy is the most important element of any technology solution, as these solutions have greater and greater ramifications for society.