
KhoPhi


Google Duplex - The Conversation about Ethics?

In case you have no idea what Duplex is:

I, for one, don't care that I'm talking to a robot, whether I know it or not, as long as it gets the job done and can convey its thoughts completely.

Heck, we "talk" to Siri, Google Assistant and Alexa every day.

Google's Duplex is more like the other way round: the Assistant initiates the conversation, and does it so well that it sounds human.

Is it ethically wrong? What are the moral implications?

Should such robots 'disclaim' themselves at the very beginning of such calls?

Are you likely to pick up a call from a bot if you know it's a bot, and the bot tells you it's a bot?

And just a fun scenario:

Top comments (16)

Ross Henderson

I would say yes. Having a quick discussion with my brother about it, I came up with the scenario of the Google AI being used to constantly call businesses to set up fake appointments.

A simple solution to this is to make the AI start the conversation with something like: "Hi, this is Google Assistant calling on behalf of Joe Smith. I would like to make an appointment for him, please."

The human on the other side knows it's an AI and can answer accordingly. They don't need to go through the pretend formalities and can simply state facts. That's both efficient and fair to the person on the other side.
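
To make the idea concrete, here's a minimal sketch of such an opening disclosure, assuming you were building the call flow yourself; `CallIntent` and `build_opening_line` are made-up names for illustration, not anything from Google's actual Duplex system.

```python
# Hypothetical sketch: prepend an explicit disclosure to the bot's opening line.
# CallIntent and build_opening_line are illustrative, not a real Duplex API.
from dataclasses import dataclass

@dataclass
class CallIntent:
    caller_name: str      # the human the assistant is calling on behalf of
    task: str             # e.g. "book a haircut appointment"
    preferred_time: str   # e.g. "Tuesday after 5pm"

def build_opening_line(intent: CallIntent) -> str:
    """Start every call by identifying the speaker as an automated assistant."""
    return (
        f"Hi, this is an automated assistant calling on behalf of {intent.caller_name}. "
        f"I'd like to {intent.task}, ideally {intent.preferred_time}. "
        "Feel free to answer in plain terms; I'll pass everything back to them."
    )

print(build_opening_line(CallIntent("Joe Smith", "book a haircut appointment", "Tuesday after 5pm")))
```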

I would also say it is important for the AI to sound as humanised as possible, as we can all agree standard computer voices are annoying and most people would just hang up.

Robin Kretzschmar

Good point! I totally agree that this would make it more convenient to use because the called person can just talk straight facts if he/she knows it is an AI.

This would also eliminate trouble with slang, unclear instructions and other scenarios where the assistant does not understand the answer correctly. I think Google's AI is very advanced, but there can always be times when it does not understand the person on the phone.

And the next point is: what if there are personal questions to be answered? Like "do you only want your hair cut, or also your beard trimmed / hair dyed?" Because sometimes they need to know how much time to plan for your appointment.

Or at the doctor's: most of the time they want a rough description of what the appointment is for, so they can re-schedule another one if necessary.

Ross Henderson

I have been wondering about the personal questions aspect too. I believe my solution of informing them it's an AI may reduce that issue, though you could also get fancy and have the AI either push the question to your phone screen so you can answer, or just have it reply "I don't know". If creating the appointment then failed, the AI could go back to the user and say "I couldn't book the appointment because...", allowing the user to then give more detailed instructions.
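
To picture that flow a bit more concretely, here's a toy sketch of the escalation logic being described; `ask_user_on_phone`, `notify_user` and the canned answers are all invented for illustration and have nothing to do with how Duplex actually works.

```python
# Illustrative sketch of the fallback flow: answer what you know, push personal
# questions to the user, otherwise say "I don't know", and report failures back
# with a reason. All names and data here are assumptions, not real APIs.

KNOWN_ANSWERS = {
    "what time works for you": "Tuesday after 5pm",
}

def ask_user_on_phone(question: str, timeout_s: int = 20) -> str | None:
    """Stand-in for pushing the question to the user's phone and waiting briefly."""
    return None  # in this sketch the user doesn't respond in time

def notify_user(message: str) -> None:
    print(f"[notification to user] {message}")

def answer_question(question: str) -> str:
    normalized = question.lower().strip(" ?")
    if normalized in KNOWN_ANSWERS:
        return KNOWN_ANSWERS[normalized]
    # Unknown or personal question: try the user first, then fall back gracefully.
    user_reply = ask_user_on_phone(question)
    return user_reply if user_reply else "I don't know, sorry."

def report_failure(reason: str) -> None:
    # If booking fails, hand the problem back to the user along with the reason.
    notify_user(f"I couldn't book the appointment because {reason}. "
                "Could you give me more details so I can try again?")

print(answer_question("Do you also want your beard trimmed?"))
report_failure("they asked whether you also want a beard trim")
```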

We're at the precipice of incredible technology here, so who knows where it will go.

Brian Hogg

I've read that Google will have limits in place, both in terms of limiting the number of times a given business will get calls from the Assistant, and the number of times you'll be able to use it in a given time-frame.
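
How such limits are enforced hasn't been published, but conceptually it could be as simple as a per-business and per-user counter over a rolling window; the thresholds, window and function names below are purely assumptions for the sake of the sketch.

```python
# Rough sketch of the kind of rate limiting described above: cap how often a
# given business can be called, and how often a given user can place calls,
# within a rolling window. All thresholds are invented for illustration.
import time
from collections import defaultdict, deque

WINDOW_S = 24 * 3600           # rolling 24-hour window (assumed)
MAX_CALLS_PER_BUSINESS = 3     # assumed cap on calls a business receives
MAX_CALLS_PER_USER = 5         # assumed cap on calls a user can place

_calls_to_business = defaultdict(deque)
_calls_by_user = defaultdict(deque)

def _prune(q: deque, now: float) -> None:
    """Drop timestamps that have aged out of the window."""
    while q and now - q[0] > WINDOW_S:
        q.popleft()

def may_place_call(user_id: str, business_id: str) -> bool:
    """Record and allow the call only if both the user and the business are under their caps."""
    now = time.time()
    biz, usr = _calls_to_business[business_id], _calls_by_user[user_id]
    _prune(biz, now)
    _prune(usr, now)
    if len(biz) >= MAX_CALLS_PER_BUSINESS or len(usr) >= MAX_CALLS_PER_USER:
        return False
    biz.append(now)
    usr.append(now)
    return True
```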

rhymes • Edited

I agree. If this thing is programmable, it can also be used to do a lot of social engineering for hacking purposes.

Brian Hogg

So far, with Google Duplex specifically, the only thing they seem to let you do is ask to make appointments, without much in the way of ability to socially engineer, so at least with Google's current implementation there doesn't seem to be an issue.

rhymes

I guess we'll see in the coming years :-)

Brian Hogg

Indeed. I just mean that, if Google is taking your actual Google account as the basis for information it gives to the stores it calls about you, the chance of being able to trick people seems very low.

Now, when the folks who do robocalls get access to some other implementation, or when, in several years, an open-sourced system with enough training can be used in this way (if such a thing ever happens), then that would obviously be more cause for concern. But I feel like robocalling is popular because it's so cheap.

Brian Hogg

I'm honestly a little confused as to why so many people think this is inherently ethically dubious. Having the discussion about it is fine, and at every step we should examine what our technologies might be able to do negatively, but why is this inherently ethically bad? The responses to this I've typically seen are just "because it's not a real person," but my answer to that is always "and?"

It seems to me that the only places where ethics will come into this is in the content of the call: if the AI says something scammy, then sure, that's bad, but surely that's an issue with what's being said, not who or what is saying it?

None of the examples I've seen so far have been unique to AI; they've all been examples of abuses that happen now, so it seems -- at least inasmuch as Google Duplex is concerned -- like a whole lot of nothing, ethically speaking.

KhoPhi

I share similar thoughts.

When a human calls a human, no one declares to the other their 'humanness'.

I like the thought shared in this comment by JG: 9to5google.com/2018/05/10/google-d...

Where exactly is the immoral deception coming into play? Duplex may not have announced itself as a bot, but it never announced itself as a human either.

If Google has to have Duplex announce its bot status, should we also have to announce our human status? Otherwise it might be immorally deceptive to allow someone to think we might be a bot.

Does it really matter if you're talking to a human or a bot? Especially in the use case of scheduling an appointment. IMHO, if I were answering that call, I personally would prefer talking to Duplex over a real person. The call would be more succinct and to the point, and Duplex would have instant access to the user's calendar to know when an appointment could and could not work.

Brian Hogg

Or, even if you were to dismiss that specific comparison as silly because there's never been a need to identify as human before, there are so many others you can make:

If you're a man with a high-pitched voice, you aren't expected to identify as a woman;
If you're a woman with a deep voice, you aren't expected to identify as a man;
You're not expected to state whether the accent you're speaking with is actually your own.

Peter Kim Frank

An AI / Assistant accepting voice as an input seems quite acceptable. It should have a "trigger word" (i.e., "Alexa" or "Hey Google") and otherwise clear its memory of the sounds around it. We've gotten pretty

Voice as an output is way more ethically grey in my mind. I would agree with @scottishross that a disclaimer would be appropriate in these situations.

What happens when the AI gets confused and the conversation leaves its pre-determined bounds? Without a disclaimer, things could get very crazy.
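
The "trigger word, otherwise forget" behaviour could look roughly like the toy sketch below; the wake words, buffer size and `WakeWordGate` class are illustrative assumptions, not how Alexa or the Assistant are actually implemented.

```python
# Toy sketch of trigger-word gating: audio is buffered only briefly, and unless
# a wake word shows up, the buffer is discarded rather than stored or sent on.
from collections import deque

WAKE_WORDS = ("hey google", "alexa")   # example trigger phrases (assumed)
BUFFER_CHUNKS = 8                      # keep only a short rolling window of audio

class WakeWordGate:
    def __init__(self) -> None:
        self._buffer: deque[str] = deque(maxlen=BUFFER_CHUNKS)

    def on_audio_chunk(self, transcribed_chunk: str) -> str | None:
        """Return buffered speech only when a wake word is heard; otherwise forget it."""
        self._buffer.append(transcribed_chunk.lower())
        heard = " ".join(self._buffer)
        if any(word in heard for word in WAKE_WORDS):
            self._buffer.clear()   # hand the utterance off, then clear local memory
            return heard
        return None                # nothing matched: old chunks simply age out

gate = WakeWordGate()
for chunk in ["some background", "chatter", "hey google", "set a timer"]:
    utterance = gate.on_audio_chunk(chunk)
    if utterance:
        print("woke on:", utterance)
```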

rhymes • Edited

I'm thinking about a pseudo-philosophical question:

Let's say that tomorrow we have Duplex on our phones and we all get used to robots calling to book appointments. The question is: why should robots be explicitly programmed NOT to be recognisable as robots? Why are we trying so desperately to trick our brains into thinking we're engaging with a human being, instead of just developing super advanced robots that we all know are robots and accept as such?

I don't have the answer, just the question :D

A few updates:

Should our machines sound human?

Also, Zeynep Tufekci's thread here is worth a read:

And finally, Google confirmed they're going to have these bots identify themselves as bots:

it seems as if Google is taking extra steps to assure the public that it's taking a stance of transparency following the online outcry. That includes making sure that Duplex will make itself "appropriately identified" in the future, for the benefit of all parties involved.

from Google now says controversial AI voice calling system will identify itself to humans

Kasey Speakman

I would love to have something like this when I have to call automated phone systems. Let the AI deal with the call queues and menu navigation. It seems like it would be about as satisfying as spamming spammers.

Alex Miasoiedov • Edited

As long as it's not in a spammer's hands, we are fine.

Meghan (she/her)

That's never something we can guarantee.