Siddhant Dubey

Originally published at Medium

What I Learned from Trying to Make a Lie Detector Using a Neural Network

Over this weekend I tried to build a lie detector that would take the spectrogram of some audio and then decide whether it was a lie or not.
Going into this experiment, I was quite convinced that there was no way this would actually work. So I did the usual: I collected my data, cleaned it, and made a training and validation set. Now, I will admit that the method of data gathering I chose, recording my own voice saying different truths and lies, was not the most scientific, but for a home experiment it worked fine. At this point, all I was focused on was whether or not it would work, which led me to forget the most important question: what happens if it does work?
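
For anyone curious, the data prep step could look roughly like the minimal sketch below. The clips/truth and clips/lie folder names, the WAV format, and the 80/20 split are assumptions for illustration, not necessarily what I used.

```python
# A minimal data-prep sketch: render each clip as a mel spectrogram image
# and sort the images into train/valid folders. The folder names and the
# 80/20 split are illustrative assumptions.
import random
from pathlib import Path

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np


def save_spectrogram(wav_path: Path, out_path: Path) -> None:
    """Render a mel spectrogram of one audio clip to a PNG image."""
    y, sr = librosa.load(wav_path)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    fig, ax = plt.subplots(figsize=(3, 3))
    librosa.display.specshow(librosa.power_to_db(mel, ref=np.max), sr=sr, ax=ax)
    ax.axis("off")
    fig.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close(fig)


for label in ("truth", "lie"):
    wavs = sorted(Path("clips", label).glob("*.wav"))
    random.shuffle(wavs)
    split = int(0.8 * len(wavs))  # 80/20 train/validation split
    for i, wav in enumerate(wavs):
        subset = "train" if i < split else "valid"
        out_dir = Path("spectrograms", subset, label)
        out_dir.mkdir(parents=True, exist_ok=True)
        save_spectrogram(wav, out_dir / f"{wav.stem}.png")
```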

Clearly, a lie detector isn't as big of a problem as people bringing dinosaurs back to life, right? Time to answer that question, but first we have to look at the results of the experiment. I trained the network, and the results genuinely shocked me: it had an error rate of 0!

Of course, the error rate was bound to be very small, considering my dataset was tiny and all of the files came from the same source. So I dismissed this as a case of overfitting.
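
The training itself is just a standard image-classification problem over those spectrogram images. Here's a minimal fastai-style sketch of that step, assuming the folder layout from the snippet above; I'm not claiming this is the exact setup I ran, just one plausible way to get an error-rate metric out of it.

```python
# A minimal fastai-style training sketch (one plausible setup, not
# necessarily the exact one). Folders follow the layout sketched above:
# spectrograms/train/{truth,lie} and spectrograms/valid/{truth,lie}.
from fastai.vision.all import (
    ImageDataLoaders, Resize, error_rate, resnet18, vision_learner,
)

dls = ImageDataLoaders.from_folder(
    "spectrograms", train="train", valid="valid", item_tfms=Resize(224)
)

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)  # a tiny, single-speaker dataset can easily reach error_rate 0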

Unknowingly, my mindset had shifted from wanting this network to succeed to wanting it to fail. Why? Probably because I realized that building one was definitely not an ethical thing to do.

Now comes the really interesting part. I fed it audio files of myself telling different lies and truths, and it identified each file correctly every single time. If this were a normal neural network, I would have been absolutely elated. This time, however, I felt an intense amount of apprehension.
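
Checking a new recording is just the same pipeline in miniature: convert the clip to a spectrogram and ask the model for a label. Continuing the sketch above (the file names here are placeholders):

```python
# Continuing the sketch above: score one new clip (file names are placeholders).
from pathlib import Path

new_wav = Path("new_clip.wav")
new_png = Path("new_clip.png")
save_spectrogram(new_wav, new_png)  # helper from the earlier snippet

label, _, probs = learn.predict(new_png)
print(label, probs)                 # e.g. "lie" plus the class probabilities
```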

I decided to test it on other people's voices, and the results were only a tiny bit better than a human guessing whether something was a lie or not. I felt a lot of relief, but why?

You might be asking, why is a lie-detecting neural network a problem? I mean, polygraphs exist and those are fine.

Yes, but polygraphs can't become web apps with always-on listening modes. Polygraphs can't become Alexa skills that infiltrate the homes of people across the world. Polygraphs can't take in information continuously from far, far away from the subject.

Making neural networks do interesting tasks has become incredibly easy, but with that ease comes a lack of thought about what the network actually does. We rush to build it because of how cool it is, but we forget to ponder the ethics of the action.

Now, this isn't a Skynet level problem, but just like all tech, neural networks can be used by elements of society that don't exactly have our best interests at heart. That's why policing AI and Machine Learning becomes so important. It would be incredibly easy for someone to wreak havoc with a seemingly harmless app if there isn't a proper way to police neural networks for harmful intent.

Of course, although I overfit this version of a lie detector, other versions have been built by researchers in the past and more will continue to be made. It isn't a question of whether we can do it, because we definitely can; it is a question of how to do it ethically.

After all, humanity's moral compass is one of its finest traits.

Top comments (2)

Michiel Hendriks

polygraphs exist and those are fine.

No, polygraphs are absolute bullshit. There is no such thing as a lie detector. The only way to detect lies is by fact checking, not by analyzing any biometric measurements produced by humans.

mike1237

Variations of this technique have been in use for decades.
Commonly known as "Voice Stress Analysis", it supposedly works by detecting changes in the timbre of a vocal sample based on the speaker's stress level. (More stress = tighter vocal cords = a higher-frequency sample = a potential lie.)

There have been many studies debunking this technique as well.

en.wikipedia.org/wiki/Voice_stress...