Creating the AI in Our Own Image

We human beings, a self-absorbed species, tend to perceive ourselves as the most superior organism on the planet. The apex predator among apex predators.

This trope is found throughout our popular culture, and often explains our relationship with other animals and the environment around us. The notion of dominance can even be found in the Bible, with humans being created in the image of the creator specifically so that they could rule (רְדּוּ֩) over the world (Gen 1:26). In contrast, many Indigenous cultures of North America perceive a more harmonious and interdependent relationship with the other beings around us.

The notion of dominance may not in fact be correct; it’s quite possible that bacteria or ants are better contenders for the most superior species. If we define dominance by absolute and complex intelligence, though, there are no other true comparators to date. The presence of humans has effectively prevented any true competitors, with similar species such as the Neanderthals and Denisovans being absorbed into our populations. This may change, however, with the development of artificial intelligence, which may reveal new and unanticipated forms of intelligent expression.

For now, most applications of artificial intelligence are still modelled after human intelligence, largely because their creators still feel the need to understand or control (alternatively, to dominate) the algorithms they create. In contemplating the emerging “relationship” humans may have with algorithms, some cultural precedents from our history may provide guidance.

In Ashkenazi Jewish mysticism, the notion of an animated being, created by humans through spiritual techniques, exists in various folk traditions. This being, termed a “golem” (גולם), is capable of moving and following the commands of its creator, but cannot speak and does not have a soul. The first mythic human, Adam, was considered a golem until he was given a soul. Though created in a physical form or image similar to that of a human, a golem was never truly alive, as it lacked this crucial soul.

This past week, I used the template of the golem and some of its mythology to explore a few ethical dimensions in the development of artificial intelligence at the Thompson Rivers University SLS Conference, in a talk on “Ghosts in the Machine: Artificial Intelligence.”

When the golems of the Jewish ghettos would get out of control, as they sometimes did, their creators were of course compelled to stop them. Tradition has it that a golem was inscribed with the word “truth” (אמת) on its forehead. Deactivating it was accomplished by erasing the first letter of this word, leaving the word for “dead” (מת).

This reaction is not significantly different from the one adopted towards artificial intelligence. When a chatbot in 2017 started developing its own language to communicate with other bots, its creators decided to terminate it. Though hardly as nefarious as the damage and destruction created by rogue golems, it reveals the cavalier attitude towards the utility that algorithms provide us.

The story of the golem may have inspired other, later stories, such as Mary Shelley’s 1818 Frankenstein; or, The Modern Prometheus, and of course the many films and pop-culture references that emerged from this novel. One of the most common misunderstandings of the Frankenstein prototype is that it is the name of the creation, when it is in fact the proper name of the creator. The true monster was arguably Dr. Victor Frankenstein, who dies with regret over his ambition and need to experiment. The ethical dilemmas around artificial intelligence counsel similar restraint against drawing foregone conclusions.

One of the most celebrated contemporary ethical dilemmas around artificial intelligence involves self-driving cars, and the utilitarian choices made around prioritizing the preservation of life. The Moral Machine study, with 2.3 million respondents, illustrated considerable geographic variability in preferences for preserving life. Respondents in the Middle East, South Asia, and East Asia were more likely to preserve the lives of seniors, even if it meant sacrificing someone young. Most of Western Europe and North America exhibited the opposite preference. And respondents in South America and France were more likely to preserve female lives over male ones.
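To make the stakes concrete, consider a deliberately crude sketch in Python of how such survey preferences could be hard-coded into a utilitarian scoring function. Everything here is invented for illustration; none of the regions, attributes, or weights come from the Moral Machine dataset itself.

```python
# Hypothetical illustration only: the regions and weights below are invented,
# not drawn from the Moral Machine study.

REGIONAL_WEIGHTS = {
    # weight = how strongly a region "prefers" to spare a person
    # carrying that attribute
    "western":  {"young": 0.8, "senior": 0.2, "female": 0.5},
    "eastern":  {"young": 0.2, "senior": 0.7, "female": 0.4},
    "southern": {"young": 0.5, "senior": 0.2, "female": 0.8},
}

def casualty_cost(person: dict, region: str) -> float:
    """Sum the regional weights of a person's attributes; a higher
    cost means the algorithm would rather not sacrifice them."""
    weights = REGIONAL_WEIGHTS[region]
    return sum(w for attr, w in weights.items() if person.get(attr))

def choose_path(path_a: list, path_b: list, region: str) -> str:
    """Steer towards the path whose casualties carry the lower total cost."""
    cost_a = sum(casualty_cost(p, region) for p in path_a)
    cost_b = sum(casualty_cost(p, region) for p in path_b)
    return "A" if cost_a < cost_b else "B"

# One collision, three "answers", depending on where the car is sold:
pedestrians = [{"senior": True}]                # path A sacrifices a senior
passengers = [{"young": True, "female": True}]  # path B sacrifices a young woman
for region in REGIONAL_WEIGHTS:
    print(region, "->", choose_path(pedestrians, passengers, region))
# western -> A, eastern -> B, southern -> A
```

The point of the sketch is not the arithmetic but its arbitrariness: a handful of invented numbers per region quietly decides who lives.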

The reasons for these preferences are deeply rooted in cultural norms, but also in what different societies value. It may be that seniority is prioritized in very hierarchical societies because there is an inherent self-interest in preserving privileges for the elderly, in hopes of eventually attaining them oneself. Conversely, the emphasis on youth is often explained as valuing the opportunity to gain experiences and contribute to society. Greater scrutiny of these preferences, though, demonstrates some troubling disparities on economic, professional, and even racial grounds. Finally, the preference for preserving female life may itself be predicated on an association of sex with reproductive potential, which is a rather demeaning and overly simplistic reduction of half the world’s population.

If we simply replicate the values shared by societies in the creation of rules for algorithms, we miss out on reflecting on why those rules exist. It’s widely recognized by this point that algorithms are in fact highly prone to biases and prejudices, usually adopted unwittingly through patterns found in existing and ongoing data. One of the most significant contributions that lawyers can make in the development of these technologies is to apply a critical social justice, human rights, and jurisprudential sense, to help remove some of these unintended effects.
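As one small example of what that critical sense translates into in practice, here is a minimal Python sketch of a demographic-parity check, among the simplest audits a team might run over an algorithm’s historical decisions. The data and group labels are invented for illustration.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Gap between the highest and lowest approval rates across groups.
    A gap of 0.0 means every group is approved at the same rate."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        total[group] += 1
        approved[group] += outcome  # True counts as 1
    rates = [approved[g] / total[g] for g in total]
    return max(rates) - min(rates)

# Invented lending history: an algorithm trained on these decisions would
# quietly learn to reproduce the disparity baked into them.
historical = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)

print(f"approval-rate gap: {demographic_parity_gap(historical):.2f}")  # 0.25
```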

The result may be that, in recognizing our own failings and flaws, we will have to replicate not only our capacity for robust logical analysis, but also somehow find a way to incorporate entirely illogical sentiments such as empathy and emotion. Mathematical models for this already exist, but they still require considerable refinement.

Our flaws may be what make us human, but the removal of these same flaws in the artificial replication of humanity may ultimately provide something even better.


Comments

  1. This post is certainly thought-provoking; however, I’m finding some of the assertions troubling, such as: “One of the most significant contributions that lawyers can make in the development of these technologies is to apply a critical social justice, human rights, and jurisprudential sense, to help remove some of these unintended effects.” Does this mean that lawyers are without inherent bias? Are lawyers superior moral beings?

    “Our flaws may be what make us human, but the removal of these same flaws in the artificial replication of humanity may ultimately provide something even better.” What does the author consider as human flaws? Who decides what is and what is not a flaw? If, for instance, empathy is considered a flaw or a weakness, depending on the scenario that same flaw may be a strength. In my opinion, things aren’t always binary; they are not black and white, and in many cases context has to be, or should be, taken into consideration.

  2. “When a chatbot in 2017 started developing its own language to communicate with other bots, its creators decided to terminate it. Though hardly as nefarious as the damage and destruction created by rogue golems, it reveals the cavalier attitude towards the utility that algorithms provide us.”

    “Cavalier attitude”? Hardly. The chatbot did not develop its own language; it received incorrect reinforcement from other bots, which caused it to make irreparable errors in learning. Your own source says so.

  3. Verna,

    Lawyers are neither morally superior beings nor free of biases. If that were the case, we would be immune to the types of challenges faced by society generally, whereas we often experience them quite acutely.

    At the same time, the involvement of lawyers, the analytical approaches employed by our profession, and our experience in combating discriminatory policies and institutional barriers are still highly useful to the development teams I’ve conversed with.

    A. Lawyer,

    I’ve always queried your relation to one A. Nonymous, especially given the prohibitive policies employed here on the presence of the latter.

    That being said, I deliberately tried to avoid the debate over the Facebook chatbot and how it was characterized generally in the media. You’re entirely correct that I provided a source that better encapsulated what transpired.

    Even so, that bot still formed its own “language” through reinforcement learning. Where I differ from most contemporary coverage is that I agree with this particular source that it wasn’t anything sinister. Instead, I focused on our attitudes towards such technology, and our approaches towards the utility or benefit that it provides us.
