“But a breakthrough won’t be hard. We only need to look at things from a slightly different angle—which might happen in a hundred years or this afternoon.”—David Gelernter
One thing that came up during the “Computers and Legal Research” session I reported on in my last post was the issue of copyright, specifically: Does AI create new, secondary IP rights? Or will a machine be able to claim copyright? Very interesting questions. I’m not sure if this is what Nate Russell meant when he wrote at the end of his excellent post from last week that he plans to explore issues in copyright, but it is certainly a question worth exploring.
I was reminded of that short and inconclusive exchange when I saw this post from a couple of weeks ago by Peter Dockrill, “Artificial intelligence should be protected by human rights, says Oxford mathematician.”
Normally, as was also touched on during the aforementioned CALL session, AI brings with it a healthy dose of FUD (that’s fear, uncertainty, and doubt, not Elmer), leaving us to wonder whether we should be thinking about protecting ourselves from potentially harmful or dangerous AI or robotic activities. However, as Dockrill reported in Science Alert, University of Oxford mathematician Marcus du Sautoy flips this thinking around, suggesting that, “once the sophistication of computer thinking reaches a level basically akin to human consciousness, it’s our duty to look after the welfare of machines, much as we do that of people.”
du Sautoy, who has been Professor of the Public Understanding of Science at Oxford since 2008, notes elsewhere that consciousness is now measurable:
“The fascinating thing is that consciousness for a decade has been something that nobody has gone anywhere near because we didn’t know how to measure it.
“But we’re in a golden age. It’s a bit like Galileo with a telescope. We now have a telescope into the brain and it’s given us an opportunity to see things that we’ve never been able to see before.
“And if we understand these things are having a level of consciousness … we might well have to introduce rights. It’s an exciting time.”
Can a machine experience the world like a human?
“The spectrum’s top edge is what we might call thinking-about—pondering the morning news, or the daffodils outside or the future of American colleges. At the opposite end, you reach a state of pure being or feeling—sensation or emotion—that is about nothing. Chill or warmth, seeing violet or smelling cut grass, uneasiness or thirst, happiness or euphoria—each must have a cause, but they are not about anything. The pleasant coolness of your forearm is not about the spring breeze.” [original emphasis]
Since computers are all about, well, computing, will they be able to take the data elements representing that spring breeze and then calculate a human “feeling”?
Dockrill’s post concludes with this statement from du Sautoy:
“Philosophers will say that [machine consciousness] doesn’t guarantee that that thing is really feeling anything and really has a sense of self. It might be just saying all the things that make us think it’s alive. But then even in humans we can’t know that what a person is saying is real.”
What do you think? Will machines be able to feel and then draw on their emotional experiences? If they do achieve this human-like quality, will machines need this kind of “human rights” protection? Maybe I should have called this post, “Questions, Nothing More than Questions.”