What Should be Done (If Anything)?
A Moment Symposium
by Josh Tapper
As the Internet has expanded the frontiers of 21st-century freedom of expression, it has given rise to new opportunities for hate speech.
But what constitutes hate speech? The term broadly refers to language that incites prejudice against racial, religious and ethnic groups, and it is legislated and regulated by governments around the world, yet there is no single definition. Ancillary questions like “Who is being hurt?” and “Does the speech harm the public good?” are subjective and not easily answered.
Take the inflammatory anti-Islam film Innocence of Muslims, which triggered riots across the Arab world in the fall of 2012 when a trailer was released on YouTube. Google, which owns YouTube, elected to leave the video online despite a White House request to take it down, arguing it did not violate the site’s freedom of expression guidelines. Access was, however, blocked in Egypt and Libya, where violence was particularly brutal.
That decision set off a firestorm of debate over how online intermediaries should police hate speech. Does oversight lie with companies like YouTube, which says hate speech can only be made against individuals, not groups, or Twitter, which combats “abusive behavior” rather than hate speech? Or does it fall upon governments, tasked with protecting their citizens from exclusion and discrimination?
Not surprisingly, speech regulation differs from country to country. Britain and the member states of the European Union prosecute hate speech criminally, as do Australia, New Zealand and Canada. With its First Amendment protections, the United States stands alone among Western liberal democracies in protecting hate speech.
But as hate speech finds fertile ground in our chaotic virtual world, governments are increasingly excluded from the conversation as online platforms create their own rules for what’s acceptable and what isn’t. Moment asked eight leading legal scholars and activists: What should be done?
Bob Corn-Revere
The very thing that is wonderful about the Internet is the thing that people find troubling. Everybody has a voice and everybody can publish to the world. While that’s one of the most important innovations of the Internet, it also makes it possible for hateful people to have a platform. How to deal with this is not easy, but we don’t solve the problem by removing the platform for speech we hate. The proper way to deal with hate speech, both on and off the Internet, is to respond in ways that answer the claims being made while moving toward civility. The answer is never to silence the other side. Hate speech is in many ways a matter of perspective. Once you start thinking about government sanctions, or about pressuring entities to enforce policies against hate speech, you need to define what you’re talking about, and the lack of a definition becomes important.
Bob Corn-Revere is a First Amendment lawyer and adjunct scholar at the Washington, DC-based Cato Institute.
Jillian York
I’m a free-speech absolutist, not because I think hate speech is a good thing, but because I don’t think that censorship solves the underlying problem of hate in our culture. Governments and big social media platforms, like Facebook and Twitter, should keep their hands off speech. Those platforms are spaces with billions of users, and their rules are often blurry and unevenly applied. It’s impossible for Facebook to be an arbiter of speech. A better approach would be to give users broader controls over what kind of speech shows up in their timelines and feeds. The average user should be more empowered to control and respond to the type of speech that they see and find offensive on the platform. If I’m in a Facebook group and I see an anti-Semitic comment, I have two recourses. One is to respond, which may or may not be a good idea, since I might feel threatened by doing that. The other is to report it to Facebook and hope they do something. The only thing that Facebook can do is remove it from the group. That doesn’t go very far in solving the problem. It lets someone know that that kind of speech is against Facebook’s terms of service, but it doesn’t teach them anything about why that speech is problematic. I’d much rather see approaches that attempt to guide people making those comments onto some sort of path where they’re educated about what they are saying.
Jillian York is director for international freedom of expression at the Electronic Frontier Foundation in San Francisco.
Charles Asher Small
Companies have to assume corporate responsibility for what they’re permitting to circulate. It can’t be left entirely up to the private market, and governments must be responsible for ensuring that hate is not propagated throughout the world or through their territories. The United States is becoming the greatest host of hate propaganda on the Internet. How do we protect the First Amendment while being responsible to a global reality in which radical Islamists and other organizations use the United States to export their hate throughout the world? As somebody who understands the history of hatred and the history of anti-Semitism, I know that propaganda can literally kill. It’s happening in different parts of the world because of propaganda being circulated on the Internet. There is the argument that in the marketplace of ideas, good and truth will prevail, and that argument is used widely in countries like the United States. But propaganda is being used in the most pernicious sense to implement horrific ideologies. We can’t hide behind a fig leaf. People are dying. What is our responsibility? Do we cut it off? We have to take responsibility for what’s going on in the world.
Charles Asher Small is director of the Institute for the Study of Global Antisemitism and Policy and a Koret Distinguished Scholar at Stanford University.
Jeffrey Rosen
For better or worse, policy makers online—many of whom are 22-year-olds in flip-flops and T-shirts—have more power over who can speak and who can be heard around the globe than any king, president, Supreme Court justice or online community member. Nevertheless, to keep the Internet free and open, it’s important that intermediaries freely adopt something close to First Amendment standards, as interpreted by the Supreme Court. In other words, they should remove only hate speech that is intended, and likely, to produce imminent lawless action. In their current hate speech policies, most intermediaries don’t go quite this far. In the Innocence of Muslims debate, Google rightly resisted calls by President Obama and the president of Egypt to remove the video, calls that rested on grounds later shown to be mistaken, mainly that the video had produced the riots in Benghazi. Intermediaries are legally entitled to adopt whatever standards they like, as long as those standards comply with federal statutes. It’s certainly appropriate to remove speech in some circumstances, such as when it offends community values, even if that speech would be protected under the Constitution. Open free-speech platforms are virtual communities, so it’s appropriate for them to strike balances that U.S. courts might not embrace.
Jeffrey Rosen is a law professor at George Washington University and legal affairs editor of The New Republic. He is president and CEO of the National Constitution Center in Philadelphia.
Leslie Harris
When it comes to religious speech, it’s important to understand that online intermediaries are not theologians. The processes they use are not fine-grained enough to separate speech that might be hate speech from speech that might simply state a religious tenet. Intermediaries need to draw the line fairly narrowly between religious attacks on individuals and broad statements of somebody else’s religious belief. This is really hard. They need to have processes in place that allow people on both sides to appeal, and to provide opportunities for counter-speech wherever possible. A lot of the intermediaries are, in some ways, free-expression platforms, with people naturally speaking on either side of an issue. When intermediaries are concerned about speech, they ought to be creative: highlighting, linking to or suggesting to users something that offers a different opinion. You can’t tell people what good counter-speech is, but it should lead people to think about and question the assumptions and rhetoric that constitute hate speech. The approach in Europe doesn’t work because it doesn’t promote dialogue. If we’re going to have a more open society in the United States and a constitutional system that allows a broader set of speech, we ought to take advantage of that.
Leslie Harris is president and CEO of the Center for Democracy & Technology and an authority on online free speech governance.
Christopher Wolf
The most productive way to deal with online hate speech, such as anti-Semitism, is through counter-speech: having supporters of Israel and people concerned about anti-Semitism speak up when they see attacks online. Even when content doesn’t rise to the level that subjects it to removal from intermediary sites, there is still an opportunity for those of us who care about the issue to speak up. “If you see something, say something” is not a motto reserved for bus stations and airports. When we see anti-Semitism online, we ought to point out its lies. You don’t have to respond in kind or in the same forum where the speech occurs; there are vast opportunities online to spread right-minded messages. The intermediaries get it more or less right when it comes to taking down content under their terms of service. We don’t want to see hate speech censored or removed, but it needs to be responded to.
Christopher Wolf is national chair of the Anti-Defamation League Civil Rights Committee and co-author of Viral Hate: Containing Its Spread on the Internet.
Susan Benesch
There are, of course, criminal laws in most countries prohibiting hateful speech, and certainly prohibiting incitement to violence. But we can’t rely entirely on criminal law or censorship, the two traditional methods of dealing with such speech. Unfortunately, criminal law by itself is not very effective at preventing the effects of such speech. And one of the major changes that digital communication has made in the world is that it’s much more difficult to censor speech. I’m not saying that abhorrent speech shouldn’t be taken down from certain sites, but we have to be realistic: this will not effectively censor it, because there are simply too many virtual locations where such speech can appear. One of the most interesting emerging options is called inoculating the audience. Rather than trying to suppress the speech or punish the speakers, one can work with the putative audience to make them less susceptible to hateful or inflammatory speech.
Susan Benesch is founder of the Dangerous Speech Project and a faculty associate at Harvard University’s Berkman Center for Internet & Society.
Richard Warman
Online intermediaries are made up of real people, so they should respond like human beings and not hide behind a corporate wall of anonymity. That includes remembering the golden rule about treating people the way you would want to be treated. Companies should set up a clear policy on hate speech and enforce it, including making reporting easy and having staff deal with it sooner rather than later. Private companies don’t have the First Amendment restrictions that governments do, and they can take the position that they don’t have to facilitate extremist hate speech that targets our neighbors because of their religion or the color of their skin. No one has the right to poison the communal well. If U.S.-based intermediaries want to be global, they have to remember and respect that most Western democracies believe it’s a right and a social good to legally control hate speech.
Richard Warman is an Ottawa-based civil rights attorney whose work focuses on the spread of white supremacist hate speech on the Internet.
It’s interesting to see everyone focusing on “hate” speech when, in New York, all it takes to put someone in jail is “annoying” speech. See the documentation of one case that has been slowly winding its way up through the appellate court system over the past few years:
http://raphaelgolbtrial.wordpress.com/
It’s well recognized that charging someone with “aggravated harassment” on the grounds that his speech was “annoying” violates constitutional principles, but nobody seems to be doing anything to have those principles implemented in America’s cultural capital. If you can’t protect “annoying” speech, why even bother trying to protect “hate” speech?
The U.S. Constitution was written to protect the rights and freedoms of all citizens. Every law has its boundaries and protocols; they need to be followed and, if broken, enforced with legal consequences. The Constitution does not give any citizen, disgruntled or otherwise, the right to twist, spin or abuse those laws in order to verbally or physically abuse another citizen.
If an uncivil person breaks protocol with remarks and statements that slander, demean, berate or demoralize another citizen, they are abusing freedom of speech by using words as ammunition to hurt that person. Verbal attacks are a form of aggression meant to mentally beat someone up, causing mental anguish, fear, anxiety, depression, PTSD and more. Any attack, physical or mental, is wrong and breaks the law. Physical attacks are illegal, and mental attacks, such as verbal slander, harassment, bullying and taunting, are as abusive as a physical attack on the body and need to be treated and prosecuted in the same way. In actuality, a mental attack is the same as a physical attack: both are aggressive and abusive, and the person on the receiving end is battered and hurt as if by a beating. Those who act aggressively toward another human being are breaking the laws that protect all citizens. Bottom line: verbal abusers, and a demonstration of hate is abuse, are not respecting the laws of the United States.