
Electronic Frontier Foundation Takes on Online Speech Moderation With TOSsed Out

The Electronic Frontier Foundation (EFF) announced on May 20th that it had launched TOSsed Out, a new iteration of EFF's continuing work in tracking and documenting the ways that Terms of Service (TOS) and other speech-moderating rules are unevenly applied to people by online services. Sometimes posts are deleted; sometimes accounts are banned. For many people, the internet represents an irreplaceable forum to express their ideas and communicate with others.

We have long been fans of the EFF and were delighted to hear that cybersecurity guru Bruce Schneier is leaving IBM, partly to focus on teaching cybersecurity to the next generation and partly to focus on his role as a public interest cybersecurity specialist. Since he is already on the board of the EFF, he is in a great position to be of help.

But back to TOSsed Out, which follows in the path of Onlinecensorship.org, which EFF launched in 2014 to collect reports from users in an effort to encourage social media companies to operate with greater transparency and accountability as they regulate speech. TOSsed Out will focus on the ways that people are negatively affected by these rules and their erratic enforcement.

Commercial content moderation practices negatively affect lots of folks, especially people who are marginalized, ranging from black women who share their experiences of racism to sex educators whose content is deemed too explicit. TOSsed Out's mission is to show that trying to censor social media ends up removing legal, protected speech.

You can find the TOSsed Out website at https://www.eff.org/tossedout. It provides some examples of online content moderation gone astray, with more to be added over time. The EFF is attempting to make clear the need for companies to embrace the Santa Clara Principles, which it helped create to establish a human rights framework for online speech moderation, require transparency about content removal, and specify appeals processes to help users get their content back online. Those are all good objectives and we support the Principles. By June 2019, three of the largest internet platforms (YouTube, Facebook, and Twitter) had begun to implement the recommendations outlined in the Principles.

There has, however, been a movement to apply the First Amendment to private companies in spite of the fact that it restricts only governmental action, not private conduct. Of course, it makes perfect sense that Facebook pages and Twitter accounts that politicians operate as public forums are subject to the First Amendment. By way of example, see Knight First Amendment Institute v. Trump, in which the court ruled that the President could not block followers who expressed opposing points of view – note that the case is on appeal and was argued on March 26, 2019 in the U.S. Court of Appeals for the 2nd Circuit.

It is true that we now live in a world where private social media entities can limit, control and censor speech as much as or more than governmental entities. A growing number of people advocate that the First Amendment should be extended to cover these entities.

The new thesis is that when private actors control online communications and online forums, they are analogous to governmental actors. The notion is that the U.S. Supreme Court should relax the state action doctrine and interpret the First Amendment to limit the “unreasonably restrictive and oppressive conduct” of private entities, such as social media companies, that censor freedom of expression.

Some conservatives believe that the majority of tech entrepreneurs are liberal. They ask: Do their algorithms, which search for and remove objectionable content, contain biases?

But extending the First Amendment to private businesses is controversial and does not seem to be a majority position. These businesses have discretion over the content they wish to promote or forbid.

In any event, one hurdle to applying the First Amendment to social media companies, mentioned above, is the state action doctrine, a key concept in constitutional law. This was examined in the April 2019 ABA Journal, which noted that the U.S. Supreme Court explained in the Civil Rights Cases (1883) that the 14th Amendment limits “state action” and not “individual invasion of individual rights.” Translated, this means that the Constitution and the Bill of Rights limit the actions of governmental actors, not private actors.

Just last year, a federal district court in Texas affirmed that traditional view, ruling in Nyabwa v. Facebook that a private individual could not maintain a free speech lawsuit against Facebook, stating that “the First Amendment governs only governmental limitations on speech.”

Most legal experts view it as unlikely that social media platforms will be held to First Amendment constraints, believing that no court is likely to treat these platforms as state actors fully subject to the First Amendment.

Most social media platforms forbid hate speech that offends or attacks people on the basis of race, ethnicity, national origin, religion, gender, sexual orientation, disability, disease or other traits. These companies are very cognizant of the controversy surrounding their policies. Let’s look at Facebook, the big kahuna of social media. Facebook is certainly trying, especially recently, to establish a balance between freedom of speech and unacceptable speech.

On its community standards page (https://www.facebook.com/communitystandards/), Facebook acknowledges that striking a balance is an ever-evolving effort.

Twitter has a Hateful Conduct Policy which may be found at https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy. Its general guidelines and policies may be found at https://help.twitter.com/en/rules-and-policies#general-policies.

Legally speaking, social media companies are not compelled to do anything about hate speech. Even so, 72% of respondents to a June 2018 Pew Research Center survey (https://www.pewinternet.org/2018/06/28/public-attitudes-toward-technology-companies/) believe that social media platforms actively censor political views that those companies find objectionable.

There is increasing pressure on social media to stamp out hate speech. A lot of that pressure comes from advertisers who do not want to be affiliated with a platform that allows it.

Facebook (which owns Instagram and WhatsApp), Twitter and YouTube have hired thousands of new moderators to filter out content that violates their standards. But moderators are inconsistent: some Facebook users have had their posts on racial issues deleted, while white friends who shared the same posts did not have theirs removed.

The Silicon Valley mindset is that every problem can be solved by algorithms – the current thinking is that the solution is at hand but they just haven’t gotten it quite right yet.
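To make the over-removal problem concrete, here is a minimal, purely illustrative sketch of the keyword-style filtering described above (written in Python; the blocklist, placeholder terms and sample posts are our own invention, not any platform’s actual system). A context-blind rule like this cannot distinguish a slur used as an attack from the same word quoted by someone describing racism or teaching its history, which is precisely the kind of legal, protected speech TOSsed Out documents being removed.

```python
# Purely illustrative sketch of naive, keyword-only content moderation.
# The blocklist terms and example posts are invented for demonstration;
# no real platform's rules or data are represented here.

BLOCKLIST = {"slur1", "slur2"}  # placeholders standing in for banned words


def naive_moderate(post: str) -> bool:
    """Return True if the post would be removed under a keyword-only rule."""
    words = {w.strip(".,!?:;").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)


posts = [
    "You are a slur1 and you know it.",                        # a genuine attack
    "A stranger called me a slur1 today and it still hurts.",  # a victim describing racism
    "Here is how to teach the history of the word slur1.",     # educational content
]

for post in posts:
    action = "REMOVED" if naive_moderate(post) else "kept"
    print(f"{action}: {post}")

# All three posts are removed, even though only the first is an attack.
# Context-blind rules sweep up exactly the protected speech discussed above.
```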

Social media and other providers are now thinking about the broader social impact of their platforms and the possibility that they might be regulated if they don’t act.

For those interested in this subject, on March 27, 2019, the Congressional Research Service released a report entitled Free Speech and the Regulation of Social Media Content (https://fas.org/sgp/crs/misc/R45650.pdf), a 43-page document which takes an extensive look at some of the issues we have raised.

Facebook and YouTube are currently in a dither about what to do with deepfake videos which are getting harder and harder to detect as the technology improves. Furthermore, on June 5, 2019, YouTube announced plans to remove thousands of videos and channels that advocate neo-Nazism, white supremacy and other bigoted ideologies in an attempt to clean up extremism and hate speech.

The new policy will ban “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion,” the company said. The prohibition will also cover videos denying that violent events, like the mass shooting at Sandy Hook Elementary School in Connecticut, took place. This is sure to reignite the debate about whether the First Amendment should be extended to private companies.

People rely on internet platforms to share experiences and build communities, and not everyone has good alternatives to speak out or stay in touch when a tech company censors or bans them. Rules need to be clear, processes need to be transparent, and appeals need to be accessible.

Amen to all of that. But regulation may not be the answer and it may present its own dangers. It is currently a sea of confusion with no clear channel markers in sight.
