An Algorithm’s Charter Rights

Everywhere I go during the holidays I seem to be surrounded by Apple’s Siri, Amazon’s Alexa, and Google’s Assistant. While these computers don’t yet talk the way we do, it got me thinking about the expression rights that might be protected by the Charter.

In 1996, the United States District Court for the Northern District of California ruled in Daniel J. Bernstein et al. v. United States Department of State et al. that software source code could be protected under the American First Amendment,

…the particular language one chooses does not change the nature of language for First Amendment purposes. This court can find no meaningful difference between computer language, particularly high-level languages as defined above, and German or French. All participate in a complex system of understood meanings within specific communities. Even object code, which directly instructs the computer, operates as a “language.” When the source code is converted into the object code “language,” the object program still contains the text of the source program. The expression of ideas, commands, objectives and other contents of the source program are merely translated into machine-readable code.

Code today can be far more complex than it was at that time, and the decision did not address whether the output produced by code would itself be a form of expression. Intrinsic to the function of these modern algorithms is also access to large data sets of information.

The Supreme Court of the United States further laid the groundwork for this question in 2011 in William H. Sorrell, Attorney General of Vermont, et al. v. IMS Health Inc., et al., a regulatory case involving the prohibition of the sale of prescriber information by pharmacies to pharmaceutical companies for the purposes of data mining. The Court ruled that the creation and dissemination of information is speech under the First Amendment, and that there was no need to carve out an exception in this case. The State’s justifications, based on patients’ privacy interests and the impact on healthcare costs, were both rejected.

In 2012, Eugene Volokh and Donald M. Falk argued in the Journal of Law, Economics & Policy that First Amendment protections should be extended to search engine results. Because search results reflect a search engine’s editorial judgments about opinions and facts, Google is exercising classic free speech in selecting which information it presents to a user and how it is presented.

Tim Wu of Columbia Law School criticized this position in the New York Times, arguing that such protections would be only incidentally related to the constitutional purpose,

It is true that the First Amendment has been stretched to protect commercial speech (like advertisements) as well as, more controversially, political expenditures made by corporations. But commercial speech has always been granted limited protection. And while the issue of corporate speech is debatable, campaign expenditures are at least a part of the political system, the core concern of the First Amendment.

The line can be easily drawn: as a general rule, nonhuman or automated choices should not be granted the full protection of the First Amendment, and often should not be considered “speech” at all.

However, a 2013 review in the University of Pennsylvania Law Review by Stuart Minor Benjamin indicated that the broad interpretation given to speech under the First Amendment would require a significant reconception of these constitutional principles if there were to be any extensive regulation of algorithms as proposed by Wu.

Benjamin rightly notes that although many of the algorithms used in commercial contexts may not themselves be unique, being obtained through open source or adapted from others, their products may be. The application of the same or similar algorithms to different data sets, or even to the same data set processed in different ways, may in fact create a different result worthy of constitutional protection.

These discussions were soon followed by a 2014 Superior Court of California decision in S. Louis Martin v. Google, Inc., which granted a special motion to strike the plaintiff’s complaint on the ground that Google’s search ranking algorithm was constitutionally protected activity.

Yet Benjamin’s conclusion rests on an assumption that humans are still making substantive editorial decisions in the algorithmic process, and with the next generation of machine learning and artificial intelligence we are already seeing that this is not necessarily the case.

This issue landed squarely in Canada with the Supreme Court’s decision last year in Google Inc v Equustek Solutions Inc et al, which upheld the injunction granted by the Supreme Court of British Columbia restricting certain search results for the online sale of allegedly counterfeit products,

[48] This is not an order to remove speech that, on its face, engages freedom of expression values, it is an order to de-index websites that are in violation of several court orders. We have not, to date, accepted that freedom of expression requires the facilitation of the unlawful sale of goods.

The applicability of this ruling more broadly is still up for debate, but the question over an algorithm’s Charter rights is now firmly before Canadian jurisprudence.

Veenu Goswami, a recent UofT law graduate, published an article in the current issue of the Western Journal of Legal Studies, “Algorithms, Expression and the Charter: A Way Forward For Canadian Courts,” to help develop an analytical framework for assessing whether algorithms should have s. 2(b) Charter protections:

  1. courts should examine whether the algorithmically generated content is sufficiently connected to preferences instituted by the algorithm creator
  2. courts should determine whether the algorithm was designed for a purpose connected to any of the values underpinning section 2(b)
  3. when both stages of this test are satisfied, the state should bear the burden of justifying restrictions on the content pursuant to the reasonable limits inquiry in section 1 of the Charter

The author rejects the approach employed in Irwin Toy, which he claims trivializes a purposive approach towards s. 2(b) through an effective categorical inclusion of all predominantly mechanical products and tools. He claims his test is only a modest departure from the Irwin Toy analysis, developing a more workable model for the novel nature of computer algorithms.

Wu’s functional approach fails to justify why human speech should be treated differently from that of an algorithm, if the generated speech is designed for the same task as the human’s. Goswami points to the Court’s identification of the key values behind s. 2(b) in Canadian Broadcasting Corp v Canada (AG): the search for truth, facilitating social or political participation, and individual self-fulfillment. He provides examples of how algorithms can indeed serve these values, demonstrating that a categorical exclusion is unlikely to be warranted.

Goswami references Benjamin’s foreshadowing of an algorithm that may at some point demonstrate choice or volition indistinguishable from that of humans,

I agree with Benjamin’s caveat. The Charter was not enacted to provide constitutional protection to machines. The rights and protections it guarantees are generally restricted to natural or legal persons. However, my test is designed to allow courts flexibility in reconsidering the expressive interests at stake in a potential case where a machine displays a level of choice or volition more akin to that of a human.

In practice, the first prong of my test is likely to be satisfied by most current algorithms. For example, many algorithms currently in use are designed to quickly manipulate complex data in a way that would be practically impossible for humans to achieve. These algorithms would clearly meet the first step of my test because a specific line of reasoning synthesizes the output.

[emphasis added]

While this may have been true at the time of writing, the technology, once again, has already surpassed what can be envisioned by the law. Last year, Google announced an algorithm better at writing algorithms than its own algorithm-writers. Benjamin’s hypothetical is already a reality.

Instead, communication expressed through an algorithm could be reviewed as communication through a new medium. The actual algorithms generated, whether deliberately coded or created by other code, could still be considered a form of intellectual property.

At the end of the day, algorithms are still just computer code, enjoying existing copyright and trade secret protections. They are just property. Patents are a more difficult question.

In 2011, the Federal Court of Appeal ruled in Canada (Attorney General) v., Inc. that business methods employed through computer-implemented or related inventions are not necessarily excluded from patentability. The Canadian Intellectual Property Office (CIPO) has since released new guidelines for purposive construction of patent claims, as well as a new examination practice for computer-implemented inventions.

The actual expression still belongs to the natural or legal person owning the algorithm that conveys it. Any Charter rights would then be evaluated in light of the claimant who owns and uses the algorithm, which shouldn’t require much of a departure from established s. 2(b) jurisprudence at all. The purposive values test is already incorporated into the Irwin Toy analysis, and any sufficient connection to the creator’s preferences becomes largely irrelevant.

As in Equustek, there are plenty of societal interests that could justify regulation, even if Charter rights attach to algorithms. And as sci-fi aficionados well know, the 2nd Rule of Asimov’s Robot Laws isn’t really much of a concern as long as the 1st is still obeyed. A more complicated question is the extent of liability attached to the owner of algorithms that self-generate and then create harmful or even defamatory content.

A person or company that owns an algorithm expressing something similar to or different from what the creator originally intended is not very different from a person or company that owns a radio or television station broadcasting content it is unaware of or disagrees with. Ownership of the property conveying the meaning still allows s. 2(b) rights to be properly engaged, and the majority of the analysis would then fall under s. 1, based on the owner’s Charter rights.

The robots won’t have their day in Court any time soon, at least not as a party before it.
