3 lessons for building better AI (from the 2019 Turing talk)

On Monday I went to the 2019 Turing talk, hosted by the British Computer Society (BCS) and the Institution of Engineering and Technology (IET). It's a free talk that they organise every year. This year they invited Dr Krishna Gummadi to talk about bias in AI:

Machine (data-driven learning-based algorithmic) decision making is increasingly being used to assist or replace human decision making in a variety of domains ranging from banking (rating user credit) and recruiting (ranking applicants) to judiciary (profiling criminals) and journalism (recommending news-stories). Recently concerns have been raised about the potential for bias and unfairness in such algorithmic decisions.

Against this background, in this talk, we will attempt to tackle the following foundational questions about man-machine decision making:

(a) How do machines learn to make biased or unfair decisions?
(b) How can we quantify (measure) and control (mitigate) bias or unfairness in machine decision making?
(c) Can machine decisions be engineered to help humans control (mitigate) bias or unfairness in their own decisions?

So, what can go wrong?

The talk focused on real-world examples of biased machine learning. One of the examples was COMPAS. This is a tool for predicting recidivism (whether criminals are likely to reoffend) and it was biased against black defendants. You can read much more about it in the Machine Bias article from ProPublica. The details of the analysis are also covered in the accompanying article How We Analyzed the COMPAS Recidivism Algorithm.

I had heard about this before, but what stood out to me from this talk is that the designers weren’t just careless in their implementation. They had considered how to make the algorithm fair, but they still got it wrong.

They thought that if they removed race as an input feature, the model's decisions would be fair, because, all else being equal, it should then give the same output for people of different races. But in practice, defendants who reoffended were more likely to be given lower risk scores if they were white, and defendants who didn't reoffend were more likely to be given higher risk scores if they were black.
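
A simple way to surface this kind of disparity is to break the model's error rates down by group instead of only looking at overall accuracy. Here's a minimal sketch in Python; the DataFrame and column names are hypothetical, not COMPAS's actual schema:

```python
import pandas as pd

def group_error_rates(df, group_col, label_col, pred_col):
    """Compare false positive / false negative rates across groups."""
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = (sub[label_col] == 0)
        positives = (sub[label_col] == 1)
        fpr = ((sub[pred_col] == 1) & negatives).sum() / max(negatives.sum(), 1)
        fnr = ((sub[pred_col] == 0) & positives).sum() / max(positives.sum(), 1)
        rows.append({group_col: group,
                     "false_positive_rate": fpr,
                     "false_negative_rate": fnr})
    return pd.DataFrame(rows)

# Hypothetical usage, with columns 'race', 'reoffended', 'predicted_high_risk':
# print(group_error_rates(df, "race", "reoffended", "predicted_high_risk"))
```

If these rates differ a lot between groups, the model is mistreating someone even when its overall accuracy looks good.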

Lesson 1: Optimal for the whole population != optimal for sub-groups

Apparently, one of the big problems was the way they defined the objective function. The algorithm optimised its accuracy over the population as a whole, so it was still free to perform differently for some subgroups than for others.

This made me think of YouTube accidentally promoting lots of clickbait, extremism and conspiracy theories. It wasn't designed to treat these videos differently, but promoting them happens to be the best way to achieve its goal (lots of views).

It's possible to constrain the objective function to trade off overall accuracy for fairness, but there's a lot of subtlety to it, and you have to understand what "fair" really means for the domain you're working in.
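
As a rough illustration of what "constraining the objective" can look like (my own toy sketch, not the method from the talk): train a logistic regression on the usual log loss, but add a penalty for the gap in average predicted score between two groups. The data below is synthetic and the penalty weight is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
group = rng.integers(0, 2, size=n)   # hypothetical sensitive attribute (0 or 1)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, lam):
    scores = sigmoid(X @ w)
    eps = 1e-9
    # accuracy term: log loss over the whole population
    log_loss = -np.mean(y * np.log(scores + eps) + (1 - y) * np.log(1 - scores + eps))
    # fairness term: gap in average predicted risk between the two groups
    gap = abs(scores[group == 0].mean() - scores[group == 1].mean())
    return log_loss + lam * gap

w_plain = minimize(objective, np.zeros(3), args=(0.0,)).x    # accuracy only
w_fair  = minimize(objective, np.zeros(3), args=(10.0,)).x   # accuracy + fairness penalty
```

The subtlety is in choosing the fairness term: penalising the gap in average scores (as above) enforces a very different notion of "fair" than equalising false positive rates, and which one is right depends on the domain.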

Image: ethical dilemmas in setting objectives. It's impossible to optimize all group errors simultaneously (Chouldechova 2016).
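
The impossibility result the image refers to follows from a bit of arithmetic. If a score is equally well calibrated for two groups (same positive predictive value) but the groups have different base rates, then their false positive and false negative rates cannot both also match. A small numeric sketch with made-up numbers:

```python
def implied_fpr(prevalence, ppv, fnr):
    # Rearranging PPV = TPR*p / (TPR*p + FPR*(1-p)) gives:
    # FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Same PPV and FNR, different base rates -> different false positive rates
print(implied_fpr(prevalence=0.5, ppv=0.7, fnr=0.3))   # ~0.30
print(implied_fpr(prevalence=0.3, ppv=0.7, fnr=0.3))   # ~0.13
```

So you have to choose which error rates you equalise; you can't have them all.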

Lesson 2: Sampling bias in your training data biases the result

Another example was "predictive policing" (deciding where to send police to deal with drug-related crime). The problem was that the training data came from police reports, and if you compare those with other estimates of drug usage, policing was already disproportionately targeting black neighbourhoods. When people act on the results of the algorithm, it creates a vicious cycle: the algorithm perpetuates the bias that's already present in the data.
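
The feedback loop is easy to reproduce in a toy simulation (my own illustration, not from the talk): two neighbourhoods with identical true drug use, but historical reports that over-sample one of them. If patrols are allocated in proportion to reports, and the resulting incidents are fed back in as new data, the initial skew never washes out.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([0.10, 0.10])   # identical underlying drug use in areas A and B
reports = np.array([40.0, 60.0])     # biased historical report counts

for step in range(5):
    patrols = (100 * reports / reports.sum()).astype(int)   # patrols follow the data
    new_reports = rng.binomial(patrols, true_rate)          # incidents actually found
    reports = reports + new_reports
    print(step, np.round(reports / reports.sum(), 3))       # share of reports per area
```

Area B keeps generating more reports simply because it gets more patrols, and the data never gets a chance to correct itself.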

Image: training bias in predictive policing, comparing estimated drug use in Oakland with 2010 Oakland Police Department drug crime reports.

Lesson 3: Latent representations capture human biases

The final part of the talk explained the problems with using latent representations in machine learning. When you use something like word2vec, you are distilling information about each word down into a vector that is not inherently meaningful to humans. This representation of the word encodes a lot of information, and the consequences of that can be very unpredictable.
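
You can poke at this directly with a pretrained embedding. Here's a sketch using gensim and the publicly released GoogleNews word2vec vectors (you'd need to download the vector file separately, and the exact numbers depend on the corpus, but the skew is usually easy to find):

```python
from gensim.models import KeyedVectors

# Pretrained word2vec vectors, downloaded separately
kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

# Analogy probe: "man is to programmer as woman is to ...?"
print(kv.most_similar(positive=["woman", "programmer"], negative=["man"], topn=3))

# How strongly do occupation words lean towards "he" rather than "she"?
for word in ["engineer", "doctor", "nurse", "cook"]:
    print(word, kv.similarity(word, "he") - kv.similarity(word, "she"))
```

Nobody told the model anything about gender and occupations; the associations are just there in the text it was trained on.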

For example, in machine translation, when converting from a language with gender-neutral pronouns (Turkish) to English, the algorithm (trained on news articles) just makes up the pronouns based on the cultural stereotypes it's picked up on.

Image: gender bias in machine translation. Gender-neutral phrases in Turkish are translated to gendered English ones (*she* is a cook, *he* is an engineer, *he* is a doctor, *she* is a nurse).

How can we do better?

This talk was fascinating to me because it presented the current state of AI research and the ways that machine learning can fail in the real world.

I work in the public sector, where AI is basically seen as magical fairy dust you can sprinkle on your bureaucracy to make it more efficient. But no matter how sophisticated the algorithms are, if a system makes important decisions about people's lives, you are going to run into problems if you assume it works perfectly all the time. Even if you have a human in the loop, it's very easy to just trust whatever the algorithm spits out.

With any complex system, there will be situations where it fails. To build better quality software, I think we need to acknowledge that we can't eliminate every source of failure, but we can make the system more failure-tolerant. If mistakes can be corrected by other systems or processes, then some number of mistakes may be acceptable (just as the legal system has appeal courts). You can design constraints or safeguards into the system to protect against bad outcomes. And even if problems only become apparent much later on, you should still find ways to feed that back in and improve the system over time.
