Yoandy Rodriguez Martinez

Would you willingly participate in creating a product like these?

A friend told me about this site. I didn't believe her at first, then the tweets started and my hopes for the future of humankind took a nosedive out of my office window. To make things worse, I found this list.

Take a peek at the second item:

Discrimination

HireVue - App that scans your face and tells companies whether you’re worth hiring. [summary]

AI-based Gaydar - Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans. [summary]

Racist Chat Bots - Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.

Racist Auto Tag - a Google image recognition program labeled the faces of several black people as gorillas.

PredPol - PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighborhoods. [summary]

COMPAS - is a risk assessment algorithm used in legal courts by the state of Wisconsin to predict the risk of recidivism. Its manufacturer refuses to disclose the proprietary algorithm and only the final risk assessment score is known. The algorithm is biased against blacks (even worse than humans). [summary][NYT opinion]

Infer Criminality From Your Face - A program that judges if you’re a criminal from your facial features. [summary]

iBorderCtrl - AI-based polygraph test for travellers entering the European Union (trial phase). Likely going to have a high number of false positives, considering how many people cross the EU borders every day. Furthermore, facial recognition algorithms are prone to racial bias. [summary]

Faception - Based on facial features, Faception claims that it can reveal personality traits, e.g. "Extrovert, a person with High IQ, Professional Poker Player or a threat". They build models that classify faces into categories such as Pedophile, Terrorist, White-Collar Offender and Bingo Player without prior knowledge. [classifiers][video pitch]

As Software Engineers/Developers/Freelancers from a third world country, my friends and I used to joke about "The Oppenheimer Factor": a hypothetical scenario where our work is used to create some nefarious piece of software. This list is not about that; these are people who willingly participate in tearing apart the few good things we have left on this Earth. So I ask you: would you willingly become one of those people? Why?

Top comments (10)

Idan Arye

Do these products work? How accurate are they? Take, for example, the one you opened with - if an AI can accurately predict personality traits from people's faces, does that not mean that those personality traits are externalized in people's faces? That is not something modern sensibilities are comfortable with - so are these products considered unethical because they expose truths that don't fit the accepted ideology?

Yoandy Rodriguez Martinez

It's not a matter of ideology, although I see why (given the rise of the SJW movement) you might think that. If you say physical traits are a reflection of a person's character, you open the door to all kinds of discrimination, and it has been shown time and again that the science behind that claim is at best "weak".

That is also the "science" behind phrenology and similar movements. Movements that round people up against their will and ship them to other countries, or move them into camps with signs reading "Arbeit Macht Frei" over the entrance.

Idan Arye

Phrenology was popular two centuries ago - I think it is safe to assume they did not have machine learning back then. And I doubt the rest of these movements base their ideology on ML. The problem with these movements is not that the idea of some correlation between physical and mental traits is 100% wrong, but that they oversimplified that correlation and grounded it in racist ideology rather than scientific research. ML can do it better.

Machine learning is able to mimic the human ability to recognize patterns, but it can recognize much more complex patterns and achieve greater accuracy than humans. So if there is some truth in the idea of correlation between physical and mental traits, a powerful enough ML fed with enough examples may be able to model that correlation. If there is 0 connection between the physical and the mental, then it should be impossible for any algorithm to accurately predict one based on the other.

But these companies claim that they have managed to do it. These claims should be taken as scientific results, and judged with scientific methods. They can be criticized - maybe "they were overfitting" or maybe "the training and test sets were biased" - and these criticisms can be checked against the documented research and against attempts to replicate these studies (which could be a problem, I assume, if their algorithms are proprietary...). But "these results are immoral" is not valid scientific criticism, because science tries to discover reality, and reality is not obliged to conform to our moral standards.

DrBearhands

I feel there are a number of misconceptions going on here. So an AI 101 is in order:

Supervised learning classifiers - which include most of the examples above - learn a relation between input and output features in a given data set. They don't invent it; the relation exists in the data.

Now, for the caveats:

It is possible to "overfit" a model, meaning that it will work for the data set, but not for new data. You can detect and prevent this by reserving a subset of your data set to validate your results.

The classifier will learn any bias that is present in your data set. This is a common and well-understood problem, and it has produced some interesting urban legends. This problem is not exclusive to AI; it affects any data analysis. For instance, medical science is based mostly on research done on white, highly educated, young men, simply because (a) participants are sourced from students, (b) variables must be eliminated, so reducing variation in race/age/gender is actually good, and (c) men don't have menstrual cycles muddying the data. You can't blame scientists for this; they just can't find enough participants (privacy laws don't help here either). Bias exists in humans in spades, which is why you can't, e.g., train an AI on past court rulings.

Most models (I can't think of any counterexamples right now) will give you a numerical certainty value rather than a discrete yes/no answer.

Finally, models have recall and precision scores. In short, your model might simply not be very good.
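
To make those caveats concrete, here is a minimal sketch in Python with scikit-learn (my own illustration on synthetic data, not code from any of the products discussed): it reserves a held-out split to catch overfitting, reads out the model's certainty values, and scores precision and recall.

```python
# Minimal sketch of the caveats above, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for any supervised learning data set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Reserve a held-out subset: a model that scores well on the training
# data but poorly here is overfitting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Most classifiers output a numerical certainty, not a flat yes/no.
certainty = model.predict_proba(X_test)[:, 1]
print("certainty of first 5:", certainty[:5].round(2))

# Recall and precision quantify how good the model actually is.
y_pred = model.predict(X_test)
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
print("precision:", precision_score(y_test, y_pred))
print("recall:   ", recall_score(y_test, y_pred))
```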

Ethics

But let's suppose the classifiers are well made and have found a real correlation.

There are still ethical concerns left:

  1. Correlation is not causation. Suppose the alt-right is right and skin color is correlated with criminal behavior, but, as with most things, this correlation isn't perfect (say even 95%). Should we judge the remaining 5% for something they have not done, because of a characteristic they cannot change? Of course not. More generally, you want to judge people based on the things that matter, not on whatever is correlated with them. (A back-of-envelope calculation follows this list.)
  2. Some civil disobedience is necessary. Nobody likes terrorists and pedophiles, but without civil disobedience we wouldn't have a democracy. Technology, including AI, might make civil disobedience impossible if we aim for total security.
  3. One could argue that knowledge is power and AI is giving certain people too much of it - e.g. the persecution of homosexuals in certain countries.
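
A back-of-envelope calculation for point 1 (the numbers are my own assumptions, not from the comment above): once the behavior being predicted is rare, even a 95%-accurate predictor mostly flags innocent people.

```python
# Hypothetical numbers: 1,000,000 people, 1% actually guilty,
# and a predictor that is 95% accurate for both classes.
population = 1_000_000
base_rate = 0.01
accuracy = 0.95

guilty = population * base_rate
innocent = population - guilty

true_positives = guilty * accuracy            # guilty and flagged
false_positives = innocent * (1 - accuracy)   # innocent but flagged

flagged = true_positives + false_positives
print("people flagged:", int(flagged))               # 59,000
print("innocent among them:", int(false_positives))  # 49,500
print(f"share of flagged who are innocent: {false_positives / flagged:.0%}")
# ~84% of the people this hypothetical model flags did nothing wrong.
```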
 
Idan Arye

"No, that's the opposite. Since there is no correlation between the shape of the skull and the morals of a person, they are over-complexifying something that does not exist."

That's an ideological claim, not a scientific one. A scientific claim would be that "no such correlation has been found in empirical studies", which means we can assume there is no such correlation, not that we should automatically reject new studies that find a correlation.

"Computers can be made to produce any arbitrary output."

True, which is why we test our software. The core principle of the scientific method is that the correctness of a model is strongly tied to its ability to provide accurate predictions. The same is true for machine learning - so if there really is no correlation, then no ML solution should be able to provide reasonably accurate results when tested on a test set.
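
A quick sketch of that falsifiability test (my own illustration): feed a model inputs that have zero connection to the labels. It can memorize the training set, but on a held-out test set it cannot beat a coin flip.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))    # random features ("faces")
y = rng.integers(0, 2, size=2000)  # labels with zero relation to X

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The model can memorize the training set (overfitting)...
print("train accuracy:", model.score(X_train, y_train))  # ~1.0
# ...but on unseen data it cannot beat chance, because there is
# nothing to learn.
print("test accuracy: ", model.score(X_test, y_test))    # ~0.5
```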

Yoandy Rodriguez Martinez

The thing with products is that there's always an "ideology" component behind them. "We are better than these people because they have big noses" is as good an ideology as any to sell a product, especially when you know there's already a market for such a thing (e.g. bigotry, prejudice and all the -isms). Throw in a little sciencey mumbo jumbo and you've built yourself a company; add some mystical component and you can build a religion; pair it with a manifesto and you've made a movement.

Idan Arye

I seriously doubt they are using computer vision algorithms to detect the nose, measure its size, and put it through a monotonic function to get a number determining the person's personality. It is much more likely that they are feeding the face data to a neural network which determines the actual relationship.

Yoandy Rodriguez Martinez

OK, since we're both educated persons (I have college degrees in both philosophy and software engineering), let's stop with the oversimplifications.
On their site they state this:

"Researchers from Edinburgh University studied more than 800 sets of identical and non-identical twins to learn whether genetics or upbringing has a greater effect on how successful people are in life. Writing in the Journal of Personality, the researchers found that identical twins were twice as likely as non-identical twins to share the same personality traits, suggesting that their DNA was having the greatest impact."

This, my friend, is behavioral genetics in a very simplistic and biased form.

"Our face is a reflection of our DNA"

That's true, but the implication that those are the genes related to, let's say, violent behavior is, as far as I know, a fallacy.

"In fact, it's already possible to make some inferences about the appearance of crime suspects from their DNA alone."

Yes, if you have someone's DNA you can describe their appearance. But I dare you to find an article in a respected journal stating that the reverse procedure can be done with at least 50% accuracy.

The list of fallacies in their statements goes on.

Yoandy Rodriguez Martinez

On the other hand, thanks for playing "devil's advocate" here. Always glad to have a healthy debate!

 
Idan Arye

How long ago? Because effective ML is relatively new and it changes everything.

But at any rate, if the science is wrong then these products should not work, and selling a product that doesn't work is not a very good business model. The customers are big companies - not some kids in a back alley buying drugs - and they usually demand to see that the solution they are going to spend lots of money and manpower on actually performs well. So if it can't provide results, it won't be bought, and no harm will be done.