Feelings, Nothing More Than Feelings …?*

“But a breakthrough won’t be hard. We only need to look at things from a slightly different angle—which might happen in a hundred years or this afternoon.”—David Gelernter

One thing that came up during the “Computers and Legal Research” session that I reported on in my last post was the issue of copyright, specifically: Does AI create new, secondary IP rights? Or will a machine be able to claim copyright? Very interesting questions. I’m not sure if this is what Nate Russell meant when he wrote, at the end of his excellent post from last week, that he plans to explore issues in copyright, but it is certainly a question worth exploring.

I was reminded of that short and inconclusive exchange when I saw this post from a couple of weeks ago by Peter Dockrill, “Artificial intelligence should be protected by human rights, says Oxford mathematician.”

Normally, and this also came up during the aforementioned CALL session, AI brings with it a healthy dose of FUD (that’s fear, uncertainty and doubt, not Elmer), leaving us to wonder if we should be thinking about protecting ourselves from potentially harmful or dangerous AI or robotic activities. However, as Dockrill reported in Science Alert, the University of Oxford mathematician Marcus du Sautoy flips this thinking around, suggesting that, “once the sophistication of computer thinking reaches a level basically akin to human consciousness, it’s our duty to look after the welfare of machines, much as we do that of people.”

Du Sautoy, who has been Professor of the Public Understanding of Science at Oxford since 2008, notes elsewhere that consciousness is now measurable:

“The fascinating thing is that consciousness for a decade has been something that nobody has gone anywhere near because we didn’t know how to measure it.

“But we’re in a golden age. It’s a bit like Galileo with a telescope. We now have a telescope into the brain and it’s given us an opportunity to see things that we’ve never been able to see before.

“And if we understand these things are having a level of consciousness … we might well have to introduce rights. It’s an exciting time.”

Can a machine experience the world like a human?

Yale computer science professor David Gelernter wrote about the “spectrum of consciousness” and AI in the Wall Street Journal in March. In one passage he described human consciousness this way:

“The spectrum’s top edge is what we might call thinking-about—pondering the morning news, or the daffodils outside or the future of American colleges. At the opposite end, you reach a state of pure being or feeling—sensation or emotion—that is about nothing. Chill or warmth, seeing violet or smelling cut grass, uneasiness or thirst, happiness or euphoria—each must have a cause, but they are not about anything. The pleasant coolness of your forearm is not about the spring breeze.” [original emphasis]

Since computers are all about, well, computing, will they be able to take the data elements representing that spring breeze and then calculate a human “feeling”?

Dockrill’s post concludes with this statement from du Sautoy:

“Philosophers will say that [machine consciousness] doesn’t guarantee that that thing is really feeling anything and really has a sense of self. It might be just saying all the things that make us think it’s alive. But then even in humans we can’t know that what a person is saying is real.”

What do you think? Will machines be able to feel and then draw on their emotional experiences? If they do achieve this human-like quality, will machines need this kind of “human rights” protection? Maybe I should have called this post, “Questions, Nothing More Than Questions.”


Andy Williams – Feelings (1975)


  1. The main problem with this line of thought is that it completely ignores the very purpose of copyright, which is to promote artistic or intellectual creation. It treats copyright as a moral right, when historically, IP has been all about balancing the interests of creators against those of the public.

    In the case of AI, a computer has no impulse for creation, at least not yet. It merely spits out a result when told to do so.

    When an AI starts understanding the value of money, when it starts having creative impulses that are independent of a human master’s trigger, when it responds to financial incentives and gets to manage its own finances, then, and only then, can we maybe start thinking of granting it IP rights.

  2. If the AI is the property of a corporation (a distinct legal person), it’s the corporation that owns copyright in any work the AI creates. I picture a future where the robot is the body that does all the doing, each robot being the property of a corporation through which the robot has all the societal standing of personhood. In that world, a robot without a corporate personhood could be regarded the way one would regard a human without a soul.

  3. Hi Tim,
    You’re hot on the trail for where my next post is heading. I’ve read a few interesting articles recently about copyright and AI: Digital Originality; Copyright for Literate Robots; Coding Creativity – Copyright and the AI Author.
    One author (Annemarie Bridy) thinks that due to a spate of 1980s arcade game display litigation, “in all likelihood, courts would rely on the video game cases to hold that ownership of the copyright in generative code translates directly into ownership of the copyright in the works produced by it.” This is an “analytically loose” albeit “intuitively satisfying” solution to the authorship issue: the programmer owns the work the AI creates because the AI is just a tool.
    There are counter-opinions too. You could ask: is automation antithetical to authorship? Where is the incentive to an automaton?
    I was amazed to learn — and really appreciate too — how we are coming back to an issue that 50 years ago was considered real, imminent, and major. I mean computer authorship specifically. Can you believe that in 1965 the US Copyright Office considered that to be one of the top three major problems?
    It almost completely fell by the wayside until now.

  4. To say “If the AI is the property of a corporation (a distinct legal person), it’s the corporation that owns copyright in any work the AI creates” gives me pause. If you can support a work-for-hire theory, fine. But will AI be recognized as being able to produce “original” works at all? Copyright must also vest in a first owner, and the author must be a person, so how do you wedge the AI into that scheme? It doesn’t actually look to me like it satisfies the conditions for subsistence of copyright.

  5. Nate, Copyright Act s. 5(1)(b)(i) clearly contemplates a first owner that is a corporation, so I don’t think it’s much of a stretch.

    Can I claim CPD for this?

  6. Yes, but that section does not apply here. Authorship in Canada is mostly reserved for humans. A corporation cannot be an “author”, except perhaps in the case of a photograph, I don’t think. Section 5(1)(b)(i) refers to cinematographic works and also refers to the “maker”, which is different from an “author”.