Challenges Around the Right to Be Forgotten in Canada
This past Friday, the Office of the Privacy Commissioner of Canada released a draft position on online reputation which calls for several measures to help protect Canadians, including the right to ask search engines to de-index web pages and to have information removed at the source.
In the accompanying press release, Privacy Commissioner Daniel Therrien stated,
There is little more precious than our reputation. But protecting reputation is increasingly difficult in the digital age, where so much about us is systematically indexed, accessed and shared with just a few keystrokes. Online information about us can easily be distorted or taken out of context and it is often extremely difficult to remove.
Canadians have told us they are concerned about these growing risks to their reputation. We want to provide people with greater control to protect themselves from these reputational risks. Ultimately, the objective is to create an environment where people can use the Internet to explore and develop without fear their digital traces will lead to unfair treatment.
The basis for de-indexing and source takedown mentioned in the draft position is the Personal Information Protection and Electronic Documents Act (PIPEDA), which the OPC claims applies to online content and search results,
Search engines must meet their obligations under the Act.
This includes allowing individuals to challenge the accuracy, completeness and currency (the extent to which the information is up-to-date) of results returned for searches on their name. Such challenges should be evaluated on a case-by-case basis, and decisions to remove links should take into account the right to freedom of expression and the public’s interest in the information remaining accessible.
Take down rights would be based on the right to withdraw consent for personal information that a person has posted online. If the information has been posted by others, a mechanism should exist to challenge the accuracy, completeness and currency of the information, and have it corrected, deleted or augmented as appropriate.
Not everyone seems to agree with the OPC’s proposal. Michael Geist posted a column in The Globe and Mail this weekend, examining the implications for search engines,
Even if the report can be justified as consistent with existing privacy law, the proposed approach features a remarkable level of micro-managing of search engine activity. The de-indexing right is limited solely to search results for a specific name, meaning that alternate searches will not be affected. For example, an embarrassing court decision might be blocked for a person’s name, but parallel searches for facts arising from the case would not.
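Geist’s point about name-scoped de-indexing is easy to see in miniature. The sketch below is purely illustrative, not anything the OPC or any search engine specifies: the `deindex_requests` table and the exact-match lookup are my own assumptions.

```python
# Illustrative sketch only: de-indexing scoped to queries on a name.
# A suppressed URL disappears from searches on the person's name,
# but remains reachable through any other query about the same facts.

deindex_requests = {
    # normalized name -> URLs suppressed for that name (hypothetical)
    "jane doe": {"https://example.com/court-decision-2014"},
}

def filter_results(query: str, results: list[str]) -> list[str]:
    suppressed = deindex_requests.get(query.strip().lower(), set())
    return [url for url in results if url not in suppressed]

hits = [
    "https://example.com/court-decision-2014",
    "https://example.com/unrelated-news",
]

print(filter_results("Jane Doe", hits))                    # decision hidden
print(filter_results("2014 fraud case Smallville", hits))  # decision still appears
```

The same page that vanishes under the name query survives any parallel search for the underlying facts, which is exactly the narrowness Geist describes.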
Geist also questions whether PIPEDA applies to search results at all, as the vast majority of users are not engaging in commercial activity when they search. He describes the ads and other commercial elements as secondary to the actual search function these engines provide.
While that may be true from the user’s perspective, the search engine itself is engaged in a commercial transaction within Canada by delivering results, even if it is compensated by a third party. The transaction itself, the search results accompanied by ads, is still occurring in Canada. However, as David Fraser points out, subjecting search engines to PIPEDA without further clarification would require them to obtain consent before including any individual in any search results, an approach that is entirely unfeasible absent clarification or statutory amendment.
Although Geist also raises some Charter concerns around these proposals, the majority of Charter jurisprudence around s. 2(b) focuses on increasing accuracy in the search for truth in a democracy. Content that is strictly opinion would likely not be covered by the OPC position, and would therefore not raise any significant constitutional issues. As the Court stated in Hill v Church of Scientology of Toronto,
120 Although it is not specifically mentioned in the Charter, the good reputation of the individual represents and reflects the innate dignity of the individual, a concept which underlies all the Charter rights. It follows that the protection of the good reputation of an individual is of fundamental importance to our democratic society.
The OPC position would also be consistent with the Court’s approach in Grant v Torstar by allowing websites to amend or edit their content in a manner more consistent with maintaining an accurate record.
Geist also criticizes the OPC for appearing to diverge from the majority of the stakeholders it consulted on this issue. However, this is not entirely true, as the Privacy Commissioner also stated,
While it’s important to take action on de-indexing, we are also recommending that Parliament undertake a study of this issue. Elected officials should confirm the right balance between privacy and freedom of expression in our democratic society.
If the OPC position, as developed through further investigation and consultation, is fully enshrined in carefully constructed legislation, it is unlikely that either existing statutory shortcomings or Charter challenges would prevent its implementation.
What Parliament would want to consider in doing so is the 2015 decision in Crouch v. Snell, which struck down Nova Scotia’s Cyber-safety Act, created in response to cyberbullying concerns following the death of Rehtaeh Parsons.
Given the broad definition of expression provided in Irwin Toy v. Quebec (Attorney General) and R. v. Keegstra, even cyberbullying contains expressive content by conveying meaning, to the extent that it falls short of violence or threats of violence. The purpose of such legislation is therefore to control or restrict a particular form of expression that is likely to cause fear, intimidation, humiliation, distress or other forms of harm.
The Act failed the s. 1 analysis because it did not provide sufficiently clear standards to avoid arbitrary or discriminatory application of the section of the statute allowing a protection order where “there are reasonable grounds to believe that the respondent will engage in cyberbullying of the subject in the future.”
To avoid this type of vagueness in any de-indexing or removal scheme, a constitutionally valid statute would need to provide greater precision about the types of issues that can be challenged and the criteria used to assess them, and to produce predictable outcomes. The Court explained this concept in R. v. Nova Scotia Pharmaceutical Society,
A vague provision does not provide an adequate basis for legal debate, that is for reaching a conclusion as to its meaning by reasoned analysis applying legal criteria. It does not sufficiently delineate any area of risk, and thus can provide neither fair notice to the citizen nor a limitation of enforcement discretion. Such a provision is not intelligible, to use the terminology of previous decisions of this Court, and therefore it fails to give sufficient indications that could fuel a legal debate. It offers no grasp to the judiciary. This is an exacting standard, going beyond semantics. The term “legal debate” is used here not to express a new standard or one departing from that previously outlined by this Court. It is rather intended to reflect and encompass the same standard and criteria of fair notice and limitation of enforcement discretion viewed in the fuller context of an analysis of the quality and limits of human knowledge and understanding in the operation of the law.
The Cyber-safety Act was also criticized in Crouch because it allowed a protection order to be made ex parte, a choice justified by the anonymity of cyberbullying online and the speed of dissemination. The protection order provisions under the Act were also found to lack a rational connection to the legislative objectives. The Act further failed on minimal impairment, in that it unnecessarily caught material that had very little to do with preventing cyberbullying and applied to both public and private communications. With de-indexing or removal, these particular concerns should not arise, because it is the search engine itself, not the author of the challenged content, that would be the target of any such action.
Where any statute would require the greatest scrutiny is proportionality. The Act in Crouch lacked proportionality given the overbreadth of speech it captured: it prevented truthful expression simply because it hurt someone’s feelings or self-esteem, provided no defences, and restricted expression relating to “individual self-fulfillment, truth-finding or political discourse.”
While a right to be forgotten statute may infringe in all of these ways as well, the OPC’s proposal also includes the ability to lower the ranking of a search result, or to attach a qualifier noting that the result may be inaccurate or incomplete. Doing so would not prevent the expression from occurring, but would simply give it less prominence and presumably less authority. The OPC position is less a right to be forgotten as observed in the EU, and more a right to challenge the presumption of accuracy implied by the ranking of search results. But Geist has a concern about this solution as well,
This would vest editorial power in search engines they have generally been reluctant to assume. While algorithmic decision-making is far from neutral and deserves greater scrutiny, using privacy law to justify intentionally obscuring results by lowering ranks transforms information intermediaries into knowledgeable publishers.
To characterize search engines as simply neutral information intermediaries is to oversimplify their role in modern life. Nor would they necessarily become knowledgeable publishers, as the onus would still be on the individual to seek the remedy and to supply the “knowledge” needed to modify, qualify, or de-list a search result.
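To make the distinction between removal and demotion concrete, here is a minimal sketch. The demotion factor and the “disputed” flag are assumptions for illustration; real ranking systems are of course far more complex.

```python
# Illustrative sketch only: demoting and qualifying a disputed result
# rather than removing it. The expression remains available, just with
# less prominence and an explicit caveat.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    score: float            # base relevance score
    disputed: bool = False  # challenged as possibly inaccurate/incomplete

DEMOTION_FACTOR = 0.25      # hypothetical penalty; the real value is policy

def rank(results: list[Result]) -> list[Result]:
    def effective(r: Result) -> float:
        return r.score * DEMOTION_FACTOR if r.disputed else r.score
    return sorted(results, key=effective, reverse=True)

def render(r: Result) -> str:
    note = " [accuracy of this result has been challenged]" if r.disputed else ""
    return r.url + note

results = [
    Result("https://example.com/old-bankruptcy-notice", 0.9, disputed=True),
    Result("https://example.com/recent-profile", 0.6),
]

for r in rank(results):
    print(render(r))
```

The disputed page drops below the fresher one and carries a visible qualifier, but nothing is erased.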
As search engines are a primary source of information for nearly everyone in Canada today, the question of how we deal with private information online will continue to be a pressing challenge for policy makers, legislatures, and courts in the years to come.
I promise to dig into the consultation further and submit a response, but on first impression, I worry about confusing the _source of the information_ with the current leading technology for _finding_ it.
There is an obvious problem with simply forbidding newspapers or similar organizations from having obsolete, outdated or incorrect information in their internet archives. We’d need a “Ministry of Truth” to do that, and I rather doubt that we want to start that discussion.
Because we’ve had a major shakeout in web search (anyone even remember AltaVista or Lotus Magellan?), persons who have been harmed but not libeled have asked the dominant search engine, Google, to remove links to the information they do not wish to see. In Costeja, a Spanish citizen asked that the record of a long-since-resolved debt be delisted. In Equustek, the company asked for links to sellers of arguably stolen technology to be removed.
It is clear that there is a real problem with false, misleading or inaccurate information on the internet, and that it is as severe as the problems of revenge porn or libel. What is not clear to me is how we should address this in a technology-neutral way.
I very much agree with Equustek’s obtaining a court order against the sites in question, and then asking the court to grant a temporary injunction requiring Google to de-index those sites. It addresses, first, the source of the disinformation, and second, a common way to access it. Court orders and injunctions sound like an approach that would also work for libel and “fake news” campaigns, as well as less severe threats to one’s self-esteem or reputation.
In my opinion, the first thing to do is address the source of the information, and make a case that it should be removed, sealed or restricted in some way that is consistent with the law. If a good case can be made for an injunction, then Google should be ordered to de-index the questioned pages.
Of course, I’m then assuming that a single removal will suffice: in the case of neo-nazi propaganda or illegal goods, Google, Bing and friends are currently playing “whack-a-mole” to weed out new postings of the offending material, and doing artificial intelligence research in hopes of finding good ways to re-identify it when it pops up.
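The re-identification problem is worth spelling out, because naive approaches fail quickly. The sketch below uses exact hashing, chosen only to show the weakness; production systems rely on perceptual hashing and machine-learned classifiers to catch near-duplicates, which this cannot.

```python
# Illustrative sketch only: exact-hash re-identification of removed
# content. A single-byte edit changes the hash and evades the filter,
# which is why "whack-a-mole" persists and why near-duplicate detection
# (perceptual hashing, classifiers) is an active research area.

import hashlib

removed_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def record_removal(content: bytes) -> None:
    removed_hashes.add(fingerprint(content))

def is_known_repost(content: bytes) -> bool:
    return fingerprint(content) in removed_hashes

record_removal(b"offending page body")
print(is_known_repost(b"offending page body"))   # True: byte-identical repost
print(is_known_repost(b"offending page body!"))  # False: trivial edit evades it
```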
I’m sympathetic to the plight of those who suffer from unfair, inaccurate, or spiteful content. But does this approach really work in practice? Does it carry too much baggage around issues such as expecting search engines to be arbiters, and what are its ramifications for the concept of consent? It is not a place we should go without some sober reflection.