The Intellectual Property Rights and Existential Threat of Large Language Models

The publisher Springer Nature is issuing books with such subtitles as A Machine-Generated Literature Overview, while ChatGPT is being credited as co-author on research papers published in Elsevier journals. Yet Springer Nature’s premier journal, Nature, declared in January that papers generated by a large language model (LLM), such as ChatGPT, will not be accepted for publication: “An attribution of authorship,” states Magdalena Skipper, editor-in-chief of Nature, “carries with it accountability for the work, which cannot be effectively applied to LLMs.” This soon became part of Nature’s authorship policy. Then on March 16th, the U.S. Copyright Office launched a new AI initiative on “the scope of copyright in works generated using AI tools,” while providing copyright registration guidance for “Works Containing Material Generated by Artificial Intelligence”: “If a work’s traditional elements of authorship were produced by a machine,” wrote the Register of Copyrights and Director of the Office, Shira Perlmutter, “the work lacks human authorship and the Office will not register it.”

Just like that, it seemed, with the public launch of ChatGPT on November 30, 2022, machines were generating remarkable, original works. The texts being produced were far removed from robotic simulations of humans. The compositions may strike you at first as uncanny simulacra. But then at some point, as you and others fail to tell human from ChatGPT texts, it becomes clear that the machine is generating text much as we do, out of what we have already written. You can imagine that organizations as influential as Nature and the U.S. Copyright Office must have been under considerable pressure to take a stand on this unprecedented phenomenon.

What is fascinating for those with an interest in intellectual property is how Skipper and Perlmutter essentially went metaphysical rather than legal. They are appealing to the ineffable and transcendent in claiming a lack of “accountability” before the editor (where it once was before God) for an act of “human authorship.” They make no reference to the LLMs’ lack of legal standing on which to base a claim to property; they know better than to challenge whether the work in question possesses intellectual properties. Reverting to the metaphysical, I’m coming to realize, is a natural response in facing the existential threat posed by LLMs. I don’t mean by this threat the AI doomsday scenario of a paperclip apocalypse, nor “the dramatic economic and political disruptions (especially to democracy) that AI will cause,” as described in the March 22nd open letter signed by 25,000 people calling for a six-month moratorium on developing generative AI more powerful than GPT-4.

Without taking anything from what is behind those concerns, what interests me here is a different sort of existential threat, one aimed at our Cartesian regard for ourselves: I think – reflected in the language I generate – therefore I am. Or in academic terms, roughly speaking, publish or perish. To preclude an LLM from publishing might then seem one of a number of defensive moves intended to protect our claim to exist. “Many researchers doubt that the machines’ minds,” reports Oliver Whang, “will ever be truly connected to the physical world — and, therefore, will never be able to display crucial aspects of human intelligence.” Whatever it is to be “truly connected,” like astronauts to a space station or angels walking this earth, is surely reflected in how we speak about the world, just as weaving whole cloth out of this language, as LLMs do, is bound to reflect “crucial aspects of human intelligence.”

As unsettling as this may be, we do, at least, have legal structures to deal with the intellectual property that LLMs generate. Those who employ LLMs, as well as those who program them, can be said to possess intellectual property rights to the resulting compositions under the long-standing copyright tradition of works made for hire. It may somewhat stretch the sense of “employ,” especially in the Canadian context, in which contractors, rather than employers, retain copyright. But then in Canada and elsewhere, there is typically a contractual assignment of rights – likely involving, in this instance, those employing an LLM, program creators, and publishers – that can readily entail the “accountability for the work” that Skipper seeks for research articles. I raise this because, rather than limiting the potential contribution of LLMs to science and thus humankind due to the existential challenges chatbots pose, we may want to find ways of cautiously taking hold of the intellectual gains this new technology can make possible, while seeking to learn from machine learning.


Disclaimer: No part of this column was generated by an LLM, although it could have been with a few prompts in a fraction of the time.
