Facebook’s Role in Moderating a Global Conversation
What role does Facebook serve in your life? Facebook is now home to close to 2 billion monthly users, 1.28 billion of whom access the site on a daily basis. With the large majority of those users living outside the US, Facebook is tasked with moderating the daily discourse of an internationally diverse community. Facebook’s formula for systematizing how content reviewers are to censor material is reportedly “protected category + attack = hate speech”. On the surface this may seem reasonable, but what is the definition or criterion for what is or is not a “protected category”? Earlier this week ProPublica published an investigative piece detailing how inconsistencies arise in the application of Facebook’s censorship guidelines. These inconsistencies often fail to protect classes of people who would otherwise be protected under US discrimination law. The larger issue, however, is censorship itself: even well-intentioned censorship can mistakenly silence legitimate political discourse and suppress dissent. The Guardian has published a variety of pieces on Facebook’s struggles to determine what is appropriate to remove.
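The reported formula can be made concrete with a toy sketch. The category and attack inputs below are illustrative assumptions of mine, not Facebook’s actual (confidential) lists or classifiers; the point is only to show why the rule flags an attack on a protected category while letting an otherwise identical attack on an unprotected group through.

```python
# Toy sketch of the reported moderation rule
# "protected category + attack = hate speech".
# The category set is an illustrative assumption, not Facebook's real list.

PROTECTED_CATEGORIES = {"race", "religion", "national origin", "sex"}

def is_hate_speech(target_categories, contains_attack):
    """Flag a post only when it is an attack AND targets a protected category.

    target_categories: set of categories the post targets
    contains_attack:   whether reviewers judged the post an attack
    """
    return contains_attack and bool(target_categories & PROTECTED_CATEGORIES)

# An attack on a protected category is flagged...
print(is_hate_speech({"religion"}, contains_attack=True))    # True
# ...but the same attack aimed at an unprotected group is not,
# which is one source of the inconsistencies ProPublica described.
print(is_hate_speech({"occupation"}, contains_attack=True))  # False
```

The asymmetry in the second call is exactly the gap the ProPublica piece highlights: whether a post is removed depends on which category list the target happens to fall into.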
Facebook acts as a filter, an arbiter, a judge of what is relevant to particular individuals. Americans now get their news mainly online, and Facebook is increasingly used as a central hub for this purpose, especially by those under the age of 30. Users are sharing less personal or ‘original’ content on Facebook and more news articles and other media. Whether users share personal stories, their own political beliefs, news, memes, videos, or historical photos (such as Vietnam War photos), Facebook filters what does and does not appear in their news feeds. There certainly may be content that should be removed, but removal raises ethical and legal knots. Even when it comes to the spectre of terrorism, the lines are blurred. Countries do not always agree on who or what should be sanctioned or designated as a terrorist group; national security interests collide, and nation-states have divergent political and economic goals.
The Difficulties and Failures of Censoring Terrorism
The global threat of terrorism is central to many of the issues at hand with censorship and the role social media should play in moderating public discourse. Censorship on Facebook involves not just removing specific posts or content but also banning entire groups or persons. The need to categorize and remove content associated with the organization and recruitment of terrorist groups is a valid one. However, groups that Pakistan deems terrorist organizations may be viewed differently in Iran or India. Of the 64 groups banned in Pakistan, 41 operate on a variety of social networks, including Facebook, where they are associated with over 700 pages and groups; roughly 160,000 users are members of these groups or like their pages. The Kashmir conflict likewise shows the ambiguity of distinguishing legitimate political activism from incitement to violence and, further, terrorism. Last year Facebook came under fire for removing posts by users discussing the killing of Burhan Wani, a member of Hizbul Mujahideen who was killed by the Indian army on July 8th, 2016. Syed Salahuddin, a leader of the group, was designated a terrorist and sanctioned by the US State Department just last week. Such labels, however, do not resolve the difficulties of censoring extremist content on social media.
On June 26, 2017, Facebook, Twitter, Microsoft, and YouTube announced that they are forming the “Global Internet Forum to Counter Terrorism”, which aims to reduce the online presence of terrorist groups and ideologies. Most would probably agree that Twitter or Facebook should remove posts from groups such as ISIS and ban the associated accounts. However, sanctioning content that can be directly linked to armed insurgencies, militant terrorism, or other violence is less clear-cut than one might imagine. Are rebel groups in the Democratic Republic of Congo, Somalia, Sudan, Yemen, Donetsk, or Syria always worthy of complete censorship and silencing? Is it not still possible to find videos online that clearly document the Syrian or Ukrainian conflicts as they evolved from street protests into armed struggles? On the Vice News YouTube channel, one can watch coverage stretching from the Euromaidan protests in Kiev in November and December 2013 through the conflict up until a few months ago. As demonstrations and protests have intensified in Kashmir in recent months, it would be wise to consider when political dissent becomes illegitimate terrorism; India and Pakistan would likely give antithetical answers. These regional struggles are only a few examples of the geopolitical obstacles Facebook must navigate while moderating a global social network.
The German Network Enforcement Law
On June 30th, 2017, the German Bundestag passed the Netzwerkdurchsetzungsgesetz, or Network Enforcement Law (Full Text - German), which requires social media companies to remove content deemed illegal under German law. Illegal content must be removed within a limited time frame, or companies face fines of up to €50m ($57m). The law covers not only hate speech but also defamation (libel, slander), treasonous forgery, anti-constitutional organizations (including Nazi symbolism and related propaganda), the glorification of criminal offenses, calls to form criminal groups, and various other types of speech. Facebook and other companies must remove content that is “obviously illegal” under German law within 24 hours; for more ambiguous cases the time frame extends to 7 days. Germany has some of the toughest laws concerning Holocaust denial, incitement to violence, and other abuses of speech. The new legislation has been called “misguided” and a “minefield for U.S. tech” by Mirko Hohmann of the Global Public Policy Institute.
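The law’s two-tier deadline can be sketched in a few lines. The function name and structure below are my own illustration, not taken from the statute; it only encodes the two windows the law sets out: 24 hours for “obviously illegal” content, 7 days otherwise.

```python
# Sketch of the NetzDG takedown deadlines: "obviously illegal" content
# must go within 24 hours of a report, other reported content within 7 days.
# removal_deadline is a hypothetical helper for illustration only.

from datetime import datetime, timedelta

def removal_deadline(reported_at, obviously_illegal):
    """Return the latest datetime by which the content must be removed."""
    window = timedelta(hours=24) if obviously_illegal else timedelta(days=7)
    return reported_at + window

report = datetime(2017, 10, 1, 12, 0)
print(removal_deadline(report, obviously_illegal=True))   # 2017-10-02 12:00:00
print(removal_deadline(report, obviously_illegal=False))  # 2017-10-08 12:00:00
```

Note that the hard part is not the arithmetic but the Boolean input: deciding whether something is “obviously illegal” is exactly the judgment the law delegates to the platforms.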
Unlike YouTube or Google, Facebook is hesitant to “geo-block” content, restricting it only in certain geographic regions. Should Facebook block content only for German citizens, or for anyone posting from Germany? How does Facebook, Twitter, or YouTube determine a user’s location: by where the content was posted, or by where the user resides? There are numerous unanswered legal questions. More troubling, however, is that the law may give companies a cost-benefit incentive to excessively delete ambiguous content. The determination of what must be removed is left to the hasty judgment of social media companies rather than to judges and courts, leaving corporations with the onus of regulating freedom of speech. David Kaye, UN Special Rapporteur to the High Commissioner for Human Rights, criticized the law in a similar fashion and also raised concerns with the “provisions that mandate the storage and documentation of data concerning violative content and user information related to such content, especially since the judiciary can order that data be revealed. This could undermine the right individuals enjoy to anonymous expression”. If the US were to enact similar legislation to regulate hate speech and other undesirable content more strictly, what would be the result? Freedom of speech is given a wider scope in American law and society.
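For readers unfamiliar with the term, geo-blocking is conceptually simple: the same post is shown or hidden depending on the viewer’s inferred country. The sketch below is a minimal illustration under that assumption; how a platform actually infers a viewer’s country (IP address, stated residence, posting location) is precisely the open question raised above.

```python
# Minimal illustration of geo-blocking, the approach YouTube and Google
# use but Facebook is reportedly reluctant to adopt. All names here are
# illustrative assumptions, not any platform's real API.

def visible_to(blocked_countries, viewer_country):
    """A post stays up globally but is hidden for viewers in blocked countries."""
    return viewer_country not in blocked_countries

blocked = {"DE"}  # e.g. content illegal under German law but lawful elsewhere
print(visible_to(blocked, "DE"))  # False: hidden for a viewer placed in Germany
print(visible_to(blocked, "US"))  # True: still visible everywhere else
```

Even this trivial rule exposes the legal question: everything turns on how `viewer_country` is determined, which none of the platforms resolve in the same way.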
Censorship and Freedom of Speech
This is surely not the first time Facebook has come up against legal and geopolitical constraints, but it does underscore the ethical dilemma of “freedom of speech”. An absolute notion of freedom of speech is indefensible: take the case of a political activist giving a talk on a university campus who is interrupted by a jeering protest inside the lecture hall. The audience cannot meaningfully listen to both, so one must win out over the other. The issue, then, is not whether limits should be imposed upon free speech but how far those limits may extend. John Stuart Mill makes a famously bold claim in his endorsement of freedom of speech:
If all mankind minus one were of one opinion, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person than he, if he had the power, would be justified in silencing mankind.
- John Stuart Mill, On Liberty (1869), Chapter II: Of the Liberty of Thought and Discussion
The limits on such expression are summed up by Mill’s harm principle: “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others”. In other words, coercion is justified only in stopping coercion or harm. Censorship is a form of coercion, a halting of speech, and is therefore justified only when it is instrumental in preventing harm. On this reading, the harm principle often does not cover hate speech, since the speech would have to directly deprive someone of a right in order to count as harm. Perhaps one could argue that Mill is outdated and his views on freedom of speech are unsuited to the fast pace of 21st-century online communication. Nonetheless, if we value democratic ideals, we must guard against unnecessary, monolithic control of public debate and private discourse.