Do You Need to Know You Are Speaking With a Robot?

You will probably have heard that Google has developed a system, Google Duplex, by which a machine can make phone calls to humans, notably to make reservations for hotels and restaurants (and what activity could be more human?) – and the machine, using AI, can sound remarkably human. Apparently we have here a device that passes the Turing test with flying colours.

Question: should it have to tell the people it deals with that it is essentially a robot? A lot of people claim to be unhappy with the idea that they may be dealing with a machine and not know it is a machine.

Google has – so far – given in, and says it will make the machine identify itself, as it were.

Do you care? Why? Is it different in kind, rather than merely in function, from the robocalls you have received for years urging you to vote for X or to buy Y? Would it be reasonable to refuse to talk with the machine – and to refuse to sell its owner something that you are in the business of selling? If not, why do you need to know?

Suppose the reservation system your robot calls is itself automated. Will the robots decline to deal with each other pending the other’s identification – a thoroughly artificially intelligent Alphonse/Gaston routine? Or will they recognize kindred non-spirits and be all the more efficient?

As with Alexa and the parrot, there should be no difficulty in most jurisdictions with the enforceability of the resulting contract.

Does that sound right to you?

Start the discussion!
