A Canadian Model for Bridging the Private Governance of Online Speech in the Wake of Newly Proposed Privacy Legislation

Author: Karen Eltis, Faculty of Law, University of Ottawa

We are witnessing a parallel or alternate dimension in which the constitutional rights that democratic nations have toiled to enshrine and interpret, including freedom of speech, can be effectively – and imperceptibly – bypassed. While this situation prevailed prior to COVID-19, the pandemic has fast-tracked erratic private mediation of expression out of sheer necessity.

Questions respecting social media platforms’ ad hoc or arbitrary reactions to sensitive matters abound, including Twitter’s unprecedented restraint of the New York Post’s Hunter Biden piece and Facebook’s slowdown of the story.

Presumably recognizing some degree of state-like responsibility, platforms are deploying algorithmic decision-making tools and arbitrarily imposing prior restraints. Private American companies are reluctantly, but surely, stepping into the role of international arbiters of expression, employing artificial intelligence to contain (and promote) a wide array of content.

U.S. scholar Evelyn Douek observed that “platforms insisted that they didn’t want to be ‘arbiters of truth.’ The coronavirus changed their minds overnight.” Whereas agile development – particularly Facebook’s commendable break from its previous position on Holocaust denial – is welcome, the principles of policy-setting must be addressed and structured.

Looming “Sarbanes-Oxley”-type punitive regulation of platforms and heavy fines for ‘under-suppression’, however, tend to mis-incentivize arbitrary censorship. They unintentionally abdicate policy-making to private actors, prompting us to revisit how best to operationalize human rights protections in the digital age.

While pending U.S. initiatives focus on “privacy” or “antitrust”, let us consider that our current situation genuinely concerns the broader principles of democratic legitimacy and human rights. The debate must therefore be inclusively repurposed beyond privacy, to underscore freedom of expression, the right to vote and the right to equality, which are but a few of the rights and freedoms at stake.

Fortunately, the timing is particularly propitious: the interests of equality-seeking groups and their supporters, together with those of the advertisers fueling platforms, have serendipitously converged against racism following the heinous killing of George Floyd. Canada too has stepped forward with long-awaited privacy reforms.

In the spirit of “not letting a crisis go to waste”, this is a transformative moment in history. The interests of platforms, advertisers and marginalized groups have begun to intersect, moving meaningfully towards revisiting how to sensibly transpose equality-promoting brick-and-mortar guardrails against hate to the online realm, without confrontational micromanagement ill-suited to the digital age.

To this end, the Canadian Ombudsman model, tweaked for enforcement, can help lift us out of our paralysis. Born of a creative – and rocky – marriage of the civil law and common law traditions while also integrating Indigenous traditions, Canadian normative hybridity is valuable for its chameleon-like interoperability.

In effect, hybrid legal systems such as Canada’s provide unique insights and bridge conceptual gaps in terminology (such as privacy, which means different things in various jurisdictions).

Like many outsiders, Canada elucidates a novel path, rejecting the idea that – for purposes of limiting hate speech – we must choose between equality and expression as a zero-sum game. As evidenced repeatedly by our Supreme Court, it strives to find equipoise between expression (and the chilling thereof), equality, and indeed other constitutional democratic values reflected in international law.

Furthermore, Canada’s ombudsman model – though far from flawless – is anchored in broad, flexible principles that go “beyond consumer protection”, in contrast to the GDPR’s somewhat unforgiving “rules to follow”. Properly revised towards oversight, such an approach might in principle better resonate with corporate preference for innovation-oriented self-regulation.

Although in need of ‘more rigorous enforceability’, Canada’s cooperative ombudsman model encourages cooperation between industry and regulators rather than confrontation[1]. It further addresses the arbitrary over-suppression of content spurred by companies’ fear of liability.

Importantly, the model is one that emphasizes broader human rights rather than “liability regulation”, which provides the wrong incentives and spurs erratic over-censorship.

Plainly put, the Canadian approach ultimately helps surmount some of the conceptual and cultural barriers, perhaps eventually leading to a “conceptual middle ground” or, at the very least, an interoperable one in the interim.

It might also assist with predictability and accountability, so that people understand the rules and can appeal if necessary.

Platforms have a de facto “public trust to protect in the public interest”. Doing so requires creating collaborative alliances to integrate human rights in corporate governance. For democracy, as John Turner so rightly reminded us, “does not happen by accident” – or by arbitrary ‘content moderation’.
