Over the past few weeks, I was really happy to give a talk about a topic that has preoccupied me for a long time, ever since I started reading science fiction and being confronted with the moral conflicts that advanced technology always brings to the fore.
To get notified when more parts of this talk, as well as other talks I am preparing, are published, please subscribe to my YouTube channel.
In the talk, I mention a simple example: Imagine a self-driving car that has to decide whether to go left and kill one human, or go right and kill 10 of us. Now imagine that the one person on the left is a child. How is a machine supposed to make this kind of decision? Until recently, this kind of moral question was only theoretical (see, for instance, Isaac Asimov's laws of robotics). But now we need to think about these issues in a much more practical manner. We now have the data and the computing power to run artificial intelligence models, and we must deal with all kinds of ethical questions.
The main issue we are dealing with these days is bias in data. The talk explains in detail what bias is and how it creeps into AI models. We also discuss a few other ethical issues linked to AI and explore some ways to improve the situation. Finally, we end with a few positive examples where AI promotes inclusion and diversity, showing that we should not abandon artificial intelligence, but should do our best to use it for the greater good.
For the viewer's convenience, I split the 45-minute talk into 7 parts of between 3 and 7 minutes each. I have already published parts 1 to 5, and the rest will follow in the next few days. I hope this talk is useful for you! And if you like it, please subscribe to my YouTube channel.