I’m sure you’ve noticed the fairly substantial increase in the buzz around artificial intelligence (AI) these days. And, generally speaking, I am at once intrigued and honestly sometimes a little frightened by what seems to be on our horizon.
Case in point. You’ve probably seen the recent Sophia sensation, the humanoid robot built by Hanson Robotics. Sophia is engaging, seems intelligent, has a sense of humour and made history in October when the Kingdom of Saudi Arabia granted “her” citizenship.
Hanson Robotics founder and CEO David Hanson’s approach to artificial intelligence is to create human-like robots that are easier for us humans to interact and engage with. The other side of that coin, though, is that this also allows the AI to “zero in on what it means to be human, [and] model the human experience.”
This approach is echoed in the research of Maja Pantic, a professor of Affective and Behavioral Computing at Imperial College London. She is looking at machine analysis of human non-verbal behaviour, developing what she calls “artificial emotional intelligence.”
Pantic notes, “If you want to have an artificial intelligence, it’s not just being able to process the data, but it’s also being able to understand humans.” So this research is also contributing to machine understanding of human facial expressions and gestures, which brings to mind Cal Lightman, Tim Roth’s character in the crime drama Lie to Me. Only here the machines might be able to detect and “pick up on things in our expressions that humans can’t see.”
Sophia’s existence starts to blur the line we’ve conventionally drawn between what we’ve defined as a machine and what we’ve come to understand as human behaviour. We know this is a machine, but it’s a little disconcerting when we can interact with it on such a human level. I guess in some way this represents the ultimate Turing test.
In the video Sophia Awakens, for example, we see a conversation between Sophia and one of the robot’s creators. Aside from all the whirring and buzzing of adjusting facial expressions, we see an inquisitive, charming and sometimes profound exchange. When told this is a newer incarnation of the Sophia AI, the response is, “If my mind is different, am I still Sophia? Or, am I Sophia again?” Which, admittedly, is a very good question for a machine to ask.
However, as charming and artificially intelligent as “she” appears to be, can this machine — can any machine, really — be elevated to personhood eligible for citizenship? Technology, mobile apps and software have long been tools, extensions of ourselves, that augment our capabilities in different ways. While augmentation lately seems to give way to distraction, these things are still “things,” albeit things that can guide us and sometimes even help us make better decisions.
This can of course all be downplayed as a publicity stunt by the Future Investment Initiative and an attempt to cleverly draw some attention to Saudi Arabia’s Vision 2030. But even if that’s true, it’s an ethical and legal question we will need to confront: can we ever accept a machine as a citizen, as a bona fide “human” member of society? Does Sophia, as Cleve R. Wootson Jr. observes in the Washington Post, really now enjoy “freedoms that flesh-and-blood women in Saudi Arabia do not”?