The Evolving Force of Censorship in a Social Media Landscape

The prevalence of social media and online communities as platforms for expressing personal thoughts, views, and opinions has reshaped the way we share, monitor, and regulate those expressions. In less than a generation, censorship, and how we address it, has evolved into a multifaceted and elusive force, a challenging one to capture. Censorship in the print world has by no means diminished, but a greater, less definable counterpart has risen alongside it in the online world. What censorship is, and, equally importantly, who has the right to enforce it, is more challenging to grasp than ever before.

The power of censorship has shifted with the expansion of the global online community. We are capable of publicly engaging with information in a way that was never possible before the web. This network, and the dominant social media platforms that encourage and facilitate such engagement, creates a space for the dissemination and regulation of all kinds of material. As content consumers become content creators and distributors, and vice versa, the role of the censor overlaps with both. Whether this is problematic or advantageous is debatable, and who holds the right to determine what is or is not appropriate for publication becomes ever more obscure.

The way we engage with censorship, not only of others but of ourselves, demonstrates that we believe in an uncontested right to unfiltered self-expression, paired with an arguably paradoxical right to avoid exposure to whatever does not align with our personal views. If we look to the fundamental freedoms as defined by the Canadian Charter of Rights and Freedoms, we see that “freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication” (s. 2, Part I) is guaranteed to each citizen. The First Amendment to the US Constitution similarly declares, “Congress shall make no law … abridging the freedom of speech, or of the press.” These statements seem clear enough in their authority, and those eager to defend a stated opinion or belief regularly cite one or the other. However, issues arise when this basic human right conflicts with the equally basic rights of another individual, whether in the real, tangible world or in an online community.

When we begin a discussion of online censorship, we are immediately confronted with its most obvious, large-scale examples. Yet, at the same time, we have what Moisés Naím and Philip Bennett call the “curious paradox” of the age of the internet, as information becomes quickly and easily disseminated. The evolution of the internet created a breeding ground for a new kind of censorship, one that is easier to identify but more challenging to define. The most publicized example, and the most outrageous by North American standards, is the banning of many international news sources and social media platforms in China (Naím & Bennett). Censorship flourishes in some parts of the world, but so too does the ease of overcoming it. Geographical location is at once decisive and inconsequential to the sharing of information and content. As the internet allows news to be shared globally, modes of censorship have adapted to match the flexibility of online distribution methods. Naím and Bennett’s assessment of the evolving world of censorship and journalism in the internet age notes that not only is the web “the most powerful force disrupting the news media,” but communication tools such as Facebook, YouTube, Twitter, and blogs also facilitate the “shifting [of] power from governments to civil society and to individual bloggers, netizens, or citizen journalists.” The creation of content and the distribution of media are no longer limited to print, nor held solely in the hands of a select few, making the regulation of content and media more complex.

Many sites rely heavily on social media platforms as a system for capturing and retaining audiences. News sources are no exception, and many Facebook and Twitter users rely on these channels as a streamlined resource for information. Self-reported data collected by the Pew Research Center in 2014 indicate that nearly “two-thirds (64%) of U.S. adults use [Facebook], and half of those users get their news there — amounting to 30% of the general population.” Twitter acts as a news source to a similar degree despite reaching a smaller percentage of the population: it is still a reported news source for 8% of all respondents, and it serves as the most convenient resource for breaking news. Those who find their way to news sites through social media channels also engage with the source material differently, remaining on an article for a third of the time that direct visitors do (Anderson & Caumont). While social media platforms are becoming a prominent resource for news, this does not necessarily mean that news is presented on these sites in an unfiltered way.

As social media’s role as a primary news resource develops and expands, the implications of censorship and content moderation come into question. Keith Loria assesses the responsibility of social media sites in censoring content, asking whether, by engaging in the “business of censoring content,” sites are “by extension, [censoring] users.” When it comes to offensive or controversial subject matter posted or shared by users, most sites have regulations in place to validate or justify their practices of censorship. These regulations are often public and, as Loria notes, found “within the terms of service (TOS) that all users must agree to.” Thus, users cannot be surprised, nor can they fight back, when content that violates the site’s terms of service is removed, usually without notice. He also points out that “the censorship process itself need not be disclosed, and these private entities have the final word when it comes to what users can and cannot post.” By agreeing to these terms and conditions, users relinquish any authority over what content of theirs is removed, and how. While this is arguably understood, what users cannot be sure of is how the information they are exposed to, including news, is similarly filtered. As more users turn to social media for their news, the responsibility of social media platforms to “err on the side of free speech” (Loria) carries more weight. By “entrust[ing] the social media corporations to make morality decisions,” or by refusing to broaden the scope of what may be shared through these platforms, journalist and professor Robert Quigley warns, we run the risk of “having controversial topics or even noncontroversial topics taken right out of the public discourse” (qtd. in Loria). Along with Quigley, many argue that social media sites should reevaluate their stance on controversial content and embrace their new role as information gatekeepers. While Loria suggests that this consideration “doesn’t seem to be the norm yet” among social media platforms, a move toward a broader scope of acceptable content comes with its own set of responsibilities.

Over the years, newspapers and magazines have been shedding the media of paper and ink and taking up residence in the digital realm. This transition made it easier for newspapers to establish a discourse around stories through embedded comments features, creating a space for engagement with readers and allowing readers to communicate directly and instantly with one another. Anne-Marie Tomchak notes the positive intentions behind online comments features, stating that news organizations saw comments as a way to “turn what was once a one-way street into a multi-headed conversation.” It is unlikely that news organizations could have foreseen how labor-intensive moderating those comments would become, or the moral implications of allowing them to be published alongside articles. News sites must now moderate comments, using either keyword-identification tools or direct human review, and must make controversial decisions that balance moral integrity against practicality. As Nicholas White, the editor of the online news site The Daily Dot, states, “to have comments, you have to be very active, and if you’re not incredibly active, what ends up happening is a mob can shout down all the other people on your site.” Allowing comments, while initially intended to encourage conversation, can lead to an “environment that … becomes about silencing voices and not about opening up voices” (qtd. in Tomchak). Despite news sites’ intentions of encouraging public discourse, the negative consequences of providing a platform for voicing one’s opinions freely, and with little accountability, result in the need to enforce restrictions and limitations in order to maintain a standard of valuable engagement.
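
The mechanics of the first approach, keyword identification, can be made concrete with a short sketch. The following Python fragment is purely illustrative: the blocklist, the shouting heuristic, and the routing labels are assumptions of mine, not any news site’s actual system.

```python
# A minimal, illustrative sketch of keyword-based comment screening.
# The blocklist, heuristics, and routing labels are invented; real systems
# pair much larger lists and learned classifiers with human review.

BLOCKED_TERMS = {"exampleslur", "spamlink.example"}  # placeholder terms only

def screen_comment(text: str) -> str:
    """Route a submitted comment: reject it, queue it for a human, or publish it."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "reject"        # clear violation: never published
    if text.isupper() or lowered.count("!") > 5:
        return "human_review"  # crude shouting heuristic: send to a moderator
    return "publish"           # nothing flagged: appears immediately

print(screen_comment("A thoughtful reply."))    # -> publish
print(screen_comment("THIS IS AN OUTRAGE!!!"))  # -> human_review
```

Even a toy filter like this makes the limits of purely mechanical screening clear; the heuristics are blunt, which is why human review remains part of the process.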

Facing the challenge raised by Nicholas White of alienating certain audiences, many news sites opt to moderate online comments. Others have simply removed their comment function, or turn off comment sections on stories that deal with controversial topics. Some moderation strategies are more rigorous than others, and some sites that still wish to let readers engage directly with one another require that users “register and provide ‘real life’ contact information” (Hughey & Daniels 336). By removing anonymity, these platforms hope to reinstate a level of “social expectation and accountability” (Hughey & Daniels 342) in their users. Hughey and Daniels argue that this practice and other modes of moderation demonstrate the significance news sites place on “guard[ing] the health of the public sphere” (343). While this is a respectable endeavor, and one that comes at both a labor and a financial cost to the content publisher, it serves to ensure an environment conducive to healthy and democratic discourse.

The positive results of content moderation are clear: users can engage with other readers knowing that they are protected from verbal abuse and offensive images, and online news sources and social media platforms reduce the risk of losing users. In practice, content moderation is executed so seamlessly that users may be unaware of what they are not being exposed to on a daily basis. It is reasonable to assume that many web users are aware, to some extent, of the offensive and disturbing material that lives in the darker corners of the internet, but trust that it will not make its way onto their Facebook news feed. While many assume that undesirable content is blocked using technological tools, and this is accurate in many cases, much of the behind-the-scenes moderation is performed at some level by humans. Many social media sites and apps, including Facebook, Twitter, and Whisper, employ moderators, both domestically and internationally, who must screen online content before it is published and remove anything that may be considered offensive or inappropriate as defined by the site’s regulations. These employees face “the worst of humanity in order to protect the rest of us” (Chen). The effects of this work are troubling, and many employees struggle with routine exposure to an onslaught of disturbing images and comments. Unfortunately, even considering the toll on moderators, human discernment seems essential to the regulation of online content, injecting a necessary level of humanity into the process of censorship.
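
To picture how automated tools and human judgment might be combined behind the scenes, consider this hypothetical sketch of a two-stage pipeline. Every name, rule, and threshold in it is invented for illustration; the platforms named above do not publish their internal systems.

```python
# Hypothetical two-stage moderation pipeline: automated screening first,
# with everything it cannot clear on its own queued for a human moderator.
# All names and rules here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

def automated_filter(post: Post) -> bool:
    """Stand-in for hash matching or classifier-based detection of known material."""
    return "known_banned_phrase" in post.text.lower()

@dataclass
class ModerationPipeline:
    review_queue: list = field(default_factory=list)

    def submit(self, post: Post) -> str:
        if automated_filter(post):
            return "removed_automatically"  # machines catch previously seen material
        self.review_queue.append(post)      # a human screens everything else
        return "awaiting_human_review"

pipeline = ModerationPipeline()
print(pipeline.submit(Post("alice", "Hello, world")))  # -> awaiting_human_review
```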

Where preventative or “active” moderation fails or cannot be employed, reactive censorship steps in. Many sites that host user-created content, from Facebook to Craigslist, allow users to report any content they deem offensive. Once content is reported, website moderators determine whether it has in fact violated the terms of the site, and the content is either removed or the report dismissed. While there is still a regulating authority, the act of reporting shifts the model of censorship from top-down to peer-to-peer. Facebook is even trying to facilitate a more compassionate route to reporting, allowing users to “resolve their differences without Facebook taking any action at all” (Green). Users are asked to identify how the offending content made them feel, using vocabulary determined by their age; younger users may identify content as “mean,” while older users report material that is “inappropriate” (Green). In this way, Facebook seeks to serve as a “digital counsellor,” prioritizing reported material and handling each issue with a layer of human involvement. Under this model, Facebook acts as a mediator among users while still maintaining a level of authority. Given the subjective nature of offensiveness, however, it seems impossible to create a reliable and consistent set of regulations for what is and is not appropriate, or to define what falls within the bounds of free speech.
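
The report-driven flow can be sketched in the same spirit. The age cutoff, vocabulary lists, and routing below are loose assumptions drawn only from the essay’s summary of Green’s reporting, not Facebook’s actual implementation.

```python
# Illustrative sketch of a reactive, report-driven moderation flow.
# The age cutoff, vocabularies, and routing rules are assumptions only.

def report_vocabulary(age: int) -> list:
    """Offer age-appropriate words for describing offending content."""
    if age < 13:
        return ["mean", "scary"]
    return ["inappropriate", "offensive", "harassing"]

def handle_report(reporter_age: int, feeling: str) -> str:
    """Route a report, trying user-to-user resolution before moderator action."""
    if feeling not in report_vocabulary(reporter_age):
        return "ask_again"                # wording not offered to this age group
    if feeling in ("mean", "inappropriate"):
        return "suggest_user_resolution"  # nudge users to resolve it themselves
    return "escalate_to_moderator"        # a human reviews and decides on removal

print(handle_report(11, "mean"))       # -> suggest_user_resolution
print(handle_report(30, "harassing"))  # -> escalate_to_moderator
```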

With all the current forms of censorship that dictate our ability to share information, it is unsurprising that we also engage in self-censorship in our online interactions. This behavior, termed the “spiral of silence” (Hampton et al.), occurs when we are unwilling to share opinions that we believe are not aligned with those of our audience or peers, creating an environment in which only the most prominently held opinions are expressed or represented. Self-censorship is motivated by the avoidance of negative social consequences, whether that means opting not to engage in an argument or simply not wanting to bore others (Das & Kramer 121). This avoidance depends on our audience: if we cannot define our audience, we cannot predict their response. In recognizing our role as content creators, we must realize that our practices of self-censorship contribute to a greater filtering of the content that appears and is shared on social media. By silencing ourselves, we may be indirectly and unintentionally silencing any number of other voices and stories.

The practice of moderating content demonstrates that censorship, whether self-motivated or dictated by a higher authority, is pervasive throughout online communities and results in what James Grimmelmann calls the “paradox of tolerance” (“Reddit Nailed”). As social media users, we believe we are entitled to a certain level of freedom of speech, yet how that freedom is granted or regulated cannot be systematically defined. To allow a space for open communication, we must create boundaries; to allow for tolerance, we must determine what will not be tolerated. Finally, we must agree on who has the right to define these boundaries and who is responsible for deciding what content falls within them. We as users are equally responsible for recognizing that the content we consume is actively filtered, and for considering “what sort of commitment these [social media] companies owe us — not necessarily on a legal level, but on a moral or ethical one” (qtd. in Misener). It is important to examine and question how we are censored and how we censor ourselves online, as this contributes to the greater ongoing discussion surrounding freedom of speech, privacy, and the right to consume, engage with, and share content as social media platforms grow ever more essential to the dissemination of information.


Works Cited

Anderson, Monica & Andrea Caumont. “How Social Media Is Reshaping News.” Pew Research Center. 24 September 2014. Web. 6 February 2016.

Canadian Charter of Rights and Freedoms, S. 2, Part I of the Constitution Act, 1982, being Schedule B to the Canada Act 1982 (U.K.), 1982, c. 11.

Chen, Adrian. “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed.” Wired.com. 23 October 2014. Web. 4 February 2016.

Das, Sauvik & Adam Kramer. “Self-Censorship on Facebook.” Association for the Advancement of Artificial Intelligence. 2013. 120-127. Web. 6 February 2016.

Green, Chris. “What happens when you ‘report abuse’? The secretive Facebook censors who decide what is – and what isn’t abuse.” The Independent. 13 February 2015. Web. 6 February 2016.

Hampton, Keith, Lee Rainie, Weixu Lu, Maria Dwyer, Inyoung Shin, & Kristen Purcell. “Social Media and the ‘Spiral of Silence’.” Pew Research Center. 26 August 2014. Web. 6 February 2016.

Hughey, Matthew W. & Jessie Daniels. “Racist comments at online news sites: A methodological dilemma for discourse analysis.” Media, Culture & Society. April 2013, 35(3), 332-347. Web. 6 February 2016.

Loria, Keith. “Should Social Media Censor Content?” EContent. 24 November 2014. Web. 6 February 2016.

Misener, Dan. “What Facebook and Twitter ban: New tool tracks social media censorship.” CBC. 17 November 2015. Web. 8 February 2016.

Naím, Moisés & Philip Bennett. “The Anti-Information Age: How governments are reinventing censorship in the 21st century.” The Atlantic. 16 February 2015. Web. 4 February 2016.

“Reddit Nailed with Censorship Accusations After Banning Online Forums.” Sputnik News. 12 June 2015. Web. 6 February 2016.

Tomchak, Anne-Marie. “Is it the beginning of the end for online comments?” BBC. 19 August 2015. Web. 6 February 2016.

U.S. Constitution. Art./Amend. I. Web.

3 Replies to “The Evolving Force of Censorship in a Social Media Landscape”

  1. Your essay provides a good overview of some instances of social media censorship affecting web users in 2016, particularly on Facebook. I agree many are likely unaware that certain social media sites essentially double-filter the content both in terms of what they believe we want to see (the algorithm) and what they deem acceptable (moderation and complaint response).

    Your primary thesis seems to be that all users should at least be made aware of both layers of censorship and, specifically, should not consider Facebook a pure ideas marketplace. All of which is very easy to agree with.

    Just to expand on some of that, one issue that constantly arises is how Facebook seems to struggle with differentiating art from pornography. And further, their TOS make it very difficult for users to dispute these distinctions, especially from outside the United States.

    An interesting example of this just cropped up in France. Because, of course, it would be France. A schoolteacher there was censored by Facebook in 2011 for posting what was, admittedly, a pretty explicit image of 19th-century art: Gustave Courbet’s “The Origin of the World.” His account was also suspended indefinitely, for which Facebook has never offered a complete explanation.

    This teacher took the matter to the French courts, which just ruled, not surprisingly, in his favour. But the most interesting aspect of this case was Facebook’s eventually rejected argument that all such cases needed to be settled in California courts. Obviously, the French courts disagreed.

    This case, therefore, also raises a point from class that many of these supposedly “international” social platforms are all based in one very insular cultural community: Silicon Valley. Thus, they share the values of that community and may censor other value systems.

    Or, as the French teacher put it:

    “On one hand, Facebook shows a total permissiveness regarding violence [notably following the recent Paris attacks] … on the other hand, (it) shows an extreme prudishness regarding the body and nudity.”

    Another interesting thread in your essay was the discussion of comments sections. I am certainly a supporter of such forums, but I agree there is always the threat of the discourse devolving into mob rule. I think requiring “real life” information from posters is a good bulwark against that type of abuse.

    Your brief mention of China’s state-directed web censorship left me wanting a bit more detail. I was curious how Chinese social sites might be monitored or controlled differently, especially given that Facebook, Twitter, Instagram, G+, and many other similar sites, are all still banned there.

    But actually, Facebook is currently embroiled in an overall web censorship scandal of its own. They are attempting to expand their reach into the developing world with a program called “Free Basics” that provides free access to stripped-down versions of Wikipedia, BBC, and, of course, Facebook, among other major sites. They claim to be bringing the Internet to those who could not otherwise afford it.

    However, India’s telecom regulator has effectively banned the entire scheme, agreeing with critics that it creates second-class online citizens who are cut off from the wider Internet. Their point, which appears fair, is that Free Basics provides no one at all with access to the Internet, per se. Facebook is now scrambling to save, well, face.

    All of which is to say the questions of online censorship, on social media and elsewhere online, are far from settled and the battle to keep the web open rages on. Your essay did a good job of focussing the reader’s attention on some of these issues.

  2. This essay does a terrific job of starting a conversation on the changing nature of censorship in the digital age. In direct support of its thesis, it has educated readers about how censorship works on social media and in other online spaces. Importantly, it also helps readers reflect on their own complicity in the censorship of themselves and of others.
