Major technology platforms, such as “Google,” “Facebook,” “YouTube,” and “Twitter,” exert control over the written and visual expression of billions of users worldwide. These platforms were initially seen as spaces for the free exchange of ideas, but as their popularity has grown, so have the restrictions on what we can do or say, along with the pressure the platforms face from external actors. We must therefore view these platforms as “new governors” of online expression.
In this context, Jillian C. York, an American free-speech activist and writer, examines in her book how the social media platforms created in Silicon Valley have built a system that governs how people express themselves online. That system operates within the broader logic of “surveillance capitalism,” a term for the comprehensive monitoring and monetization of human behavior for profit.
Politicized Censorship:
The author notes that the view long prevailing among civil liberties advocates is that social media companies should minimize censorship by complying only with court orders from countries where they have offices, employees, or other significant assets, lest they become tools of government surveillance. Over time, however, it has become clear that alongside the formal requests companies receive to remove or restrict content, governments engage with these companies through many informal channels.
The author cites a Facebook policy known as the “newsworthiness exemption,” which activists have criticized for privileging politicians’ speech over their own. The policy allows posts that violate community standards to remain on the platform when the company judges that the public interest in seeing them outweighs the risk of harm. Twitter has similarly been accused of tolerating hate speech from former President Donald Trump despite its violating the platform’s rules.
York emphasizes in her critique of social media platforms that when the speech of ordinary citizens is valued less than that of politicians, and when activists are silenced by state authorities or by the companies themselves, it amounts to a form of digital repression.
Profit Priority:
The author points out that anyone following the statements of “Facebook” founder Mark Zuckerberg will notice that he views his platform as a mere conduit, and freedom of expression as a matter for governments and their peoples to decide. Accordingly, when a foreign government orders “Facebook” to remove a piece of content, the company simply complies; Zuckerberg does not see protecting users and their views as Facebook’s responsibility.
It is also notable that some platforms base their policies and decisions on market research, which the author finds troubling because most market research is conducted in only a few countries, typically wealthier and more conservative ones. This suggests that these companies choose to serve wealthier users at the expense of others’ rights to free expression and access to information.
The author believes that social media companies have strayed far from their original missions: their executives have filled their pockets with revenue from advertisers who exploit users around the globe, and the companies themselves have grown ever more aligned with power.
Violent Content:
Terrorists seek to spread their ideologies among people; without an audience, their attacks lose their meaning, and social media hands any terrorist a free platform. Yet censorship is not always an effective remedy. The author cites an example that exposes a fundamental problem in how policymakers at social media platforms handle violent and extremist content: when graphic videos of killings began emerging from Syria in 2011, the decision to keep them online was justified as “newsworthy.” But when the murder of the American journalist James Foley by ISIS was broadcast in the same way in 2014, those assessments changed, and the clip was hastily deleted.
With growing public and media demands to do something about ISIS’s broadcasts, “Facebook” took the initiative. In June 2017, the company announced it would be introducing artificial intelligence to help remove extremist content, which led to the deletion of hundreds of thousands of videos containing such material. Just a few months after companies started using AI, “Facebook” and “YouTube” were accused of erasing evidence of atrocities, jeopardizing cases against war criminals.
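The collateral damage follows from how such removal systems typically work: they match the content itself, not the context of the upload. The following is a minimal sketch of hash-based takedown, assuming a simplified cryptographic hash where real deployments use shared perceptual-hash databases; the function names and file contents are invented for illustration.

```python
import hashlib

# Minimal sketch of hash-based takedown: once a file's fingerprint is in
# the banned set, every identical upload is removed, whoever posts it and
# for whatever purpose. Real systems use shared perceptual hashes, but
# the blind spot is the same.
banned_hashes: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    return hashlib.sha256(video_bytes).hexdigest()

def flag_as_extremist(video_bytes: bytes) -> None:
    banned_hashes.add(fingerprint(video_bytes))

def upload_allowed(video_bytes: bytes) -> bool:
    return fingerprint(video_bytes) not in banned_hashes

footage = b"...atrocity video bytes..."
flag_as_extremist(footage)        # removed as terrorist propaganda

# The same file uploaded by a human-rights monitor as evidence:
print(upload_allowed(footage))    # False: context never enters the check
```

Because the check sees only the bytes, footage documenting a war crime is indistinguishable from the same footage circulated as propaganda, which is how evidence came to be erased alongside the material the systems were built to remove.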
Pornography Dilemma:
The author sheds light on the dilemma social media platforms face in distinguishing between artistic treatments of nudity and sex and pornographic content. Leaked documents on “Facebook’s” policies showed the company attempting to draw a coherent line, differentiating between “handcrafted art,” such as paintings and sculptures, which may depict nudity, and “digitally produced art,” which faces greater restrictions.
While a few social media platforms chose to allow pornographic content, others, notably “Facebook” and “YouTube,” banned explicit material from the outset and quickly sought ways to keep their platforms from being flooded with pornography, fearing they would become unattractive to advertisers.
Fighting pornography has not been easy: no matter how much companies invest in content moderation, porn publishers find ways to get their material onto social media. Most platforms consequently tightened their moderation practices, which reduced pornographic content but also swept away much else, such as sexual health information, because the filtering techniques are too broad.
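To make that over-removal concrete, here is a minimal sketch, not any platform’s actual filter, of a naive substring blocklist; the terms and example posts are assumptions chosen for illustration.

```python
# Minimal sketch of a naive substring blocklist. The terms and posts are
# illustrative; real filters are larger but can fail in the same way.
BLOCKLIST = ["porn", "xxx", "sex", "breast"]

def is_blocked(post: str) -> bool:
    """Flag the post if any blocklisted term appears anywhere in it."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "free xxx videos, click here",            # intended target
    "breast cancer screening saves lives",    # health information
    "safe sex education resources for teens", # health information
]

for post in posts:
    print(is_blocked(post), "-", post)
# All three print True: the filter cannot separate pornography from
# public-health content that shares its vocabulary.
```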
Machines Instead of Humans:
Automated processing is increasingly used to enforce the content policies that social media platforms set. Yet despite all the advances in automated technologies, current machine-learning systems still struggle to distinguish nuance and context. Moreover, these systems learn from the data they are fed: artificial intelligence reflects the perspectives of its creators and can therefore be biased or discriminatory.
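As a toy illustration of how bias enters through training data, consider a bare-bones word-count classifier; the training examples, the dialect marker “finna,” and the labels are all invented for this sketch.

```python
from collections import Counter

# Invented training data: every "toxic" example happens to contain the
# dialect marker "finna", so the model learns the marker itself as toxic.
train = [
    ("finna fight you after school", "toxic"),
    ("you are worthless finna end you", "toxic"),
    ("have a wonderful day everyone", "benign"),
    ("see you at the game tonight", "benign"),
]

word_counts = {"toxic": Counter(), "benign": Counter()}
for text, label in train:
    word_counts[label].update(text.split())

def score(text: str, label: str) -> int:
    """Sum how often each word of the text appeared under this label."""
    return sum(word_counts[label][word] for word in text.split())

post = "finna grab lunch anyone coming"  # harmless, but uses the marker
label = "toxic" if score(post, "toxic") > score(post, "benign") else "benign"
print(label)  # -> toxic: the model reproduces the skew in its data
```

Production systems are far more sophisticated, but the underlying dependence on the training data, and on whoever labeled it, is the same.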
As for moderating written or audio content, the evident problem is that companies simply do not invest the same resources in less widely spoken languages as they do in English, Spanish, French, and a few others; in some cases they offer no support for a language at all. This naturally undermines both automated and human moderation.
Algorithms simply cannot replace human expertise. Once social media companies remove humans from the equation and leave matters in the hands of machines, one can only imagine the type of cultural standards that will become prevalent in the future.
Spread of Hatred:
By 2017, “Facebook,” “Twitter,” and “YouTube” had all adopted policies prohibiting most forms of hate speech, but these policies are rarely interpreted or applied consistently. For example, while content that openly defends Nazism is almost certain to be removed if reported, anti-Muslim speech may not be treated the same way.
Notably, instead of tackling hate speech with the precision it requires, most companies have fallen back on automated moderation tools that cannot distinguish the nuances that define it.
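A short sketch shows why: a filter that looks only at which words co-occur, assumed here as a stand-in for such tools, cannot tell advocacy apart from condemnation of the very same phrase. The trigger terms and posts are illustrative.

```python
# Sketch: keyword co-occurrence discards word order and intent, so a post
# attacking a group and a post condemning that attack look identical.
def tokens(text: str) -> set[str]:
    return set(text.lower().replace(",", "").replace(".", "").split())

advocacy     = "migrants are vermin"
condemnation = "calling migrants vermin is hate speech"

# A rule that fires when "migrants" and "vermin" co-occur hits both posts.
trigger = {"migrants", "vermin"}
for post in (advocacy, condemnation):
    print(trigger <= tokens(post), "-", post)
# Both print True: the counter-speech is removed along with the slur.
```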
According to the author, the problem with platforms lies not in allowing anyone to express themselves freely, but in their being designed to monetize any content that gains traction, even content advocating genocide. This explains why social media companies are often slow to act against incitement and hate speech. The author therefore argues that the platforms’ role in spreading hate speech must be addressed, while the problem is also tackled at its roots: in homes, classrooms, and the corridors of power.
Immediate Requirements:
The COVID-19 pandemic demonstrated that social media companies can censor certain forms of expression when they choose to; perhaps the best evidence is the disappearance of “Plandemic,” a video promoting numerous false claims about the coronavirus. “YouTube,” “Facebook,” and “Twitter” rushed to delete it, which some observers took as proof that the companies had always been capable of managing harmful content but had chosen not to, focusing instead on what governments and other powerful entities wanted censored.
The author points to changes social media companies must make immediately: above all, disclosure and transparency about what they censor; guaranteeing every user’s right to contest content removal; informing users how their data feeds content recommendation algorithms; obtaining user consent for the use of their data; and giving users more options over what they see while browsing. Companies should also move at once to engage civil society transparently in policymaking, conduct a full audit of how their current policies align with human rights standards, and make changes where necessary.
Source:
Jillian C. York, Silicon Values: The Future of Free Speech under Surveillance Capitalism, Verso, 2022.