I would say yes. After a quick discussion with my brother about it, I came up with the scenario of Google's AI being used to call businesses constantly to set up fake appointments.
A simple solution to this is to have the AI start the conversation with something like, "Hi, this is Google Assistant calling on behalf of Joe Smith; I would like to make an appointment for him, please."
The human on the other side then knows it's an AI and can answer accordingly. They don't need to go through the usual formalities and can simply state facts. That's both efficient and fair to the person on the other side.
I would also say it is important for the AI to be as humanised as possible; we can all agree that standard computer voices are annoying, and most people would just hang up.
An AI/assistant accepting voice as an input seems quite acceptable. It should have a "trigger word" (e.g. "Alexa" or "Hey Google") and otherwise clear its memory of the sounds around it.
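That trigger-word gating can be sketched roughly as follows. This is a minimal illustration, not how any real assistant is implemented: the `chunks` input stands in for transcribed audio snippets, and the wake phrases are just example values. The key idea is that everything heard before the trigger is only ever held transiently and is discarded, so nothing is retained or transmitted until the wake word is spoken.

```python
from collections import deque

WAKE_WORDS = {"alexa", "hey google"}  # example trigger phrases (assumed)
BUFFER_CHUNKS = 2                     # how many recent snippets to hold transiently

def process_stream(chunks, buffer_len=BUFFER_CHUNKS):
    """Gate an incoming stream of (hypothetical) transcribed snippets
    on a wake word, discarding everything heard beforehand."""
    buffer = deque(maxlen=buffer_len)  # rolling window; old audio falls off automatically
    captured = []
    awake = False
    for chunk in chunks:
        if awake:
            captured.append(chunk)        # only post-trigger speech is kept
        elif chunk.lower() in WAKE_WORDS:
            awake = True                  # start listening from this point on
            buffer.clear()                # forget everything heard before the trigger
        else:
            buffer.append(chunk)          # transient only; never leaves the device
    return captured

# Ambient chatter before the trigger is dropped; only the request survives.
print(process_stream(["chatter", "more chatter", "alexa", "what time is it"]))
```

Running this prints `['what time is it']`: the two pre-trigger snippets never make it past the rolling buffer.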
Voice as an output is much more ethically grey in my mind. I would agree with @scottishross that a disclaimer would be appropriate in these situations.
What happens when the AI gets confused and the conversation leaves its predetermined bounds? Without a disclaimer, things could get very messy.
Another post on the subject: "Google Duplex - The Conversation about Ethics?" by Nkansah Rexford.