Content moderation in the Trump era is a political battleground.
In the conflict between freedom of speech and content moderation, who sets the standards?
Content moderation on social media platforms has historically functioned like a parent dealing with a group of teenagers: if you live under my roof, you follow my rules. However, as social media has increasingly permeated our lives and politics, questions arise about who really has control, who sets those standards, and whether our civil liberties are at risk. With the incoming administration of President-elect Donald Trump, this debate is likely to intensify even further and reach a critical point.
The evolution of content moderation began slowly but accelerated as the influence of social media grew. The need for clearer rules became evident during the Arab Spring, when platforms like Facebook, Twitter, and YouTube played crucial roles and activists used Facebook as an organizing tool. That prominence also brought a series of controversies: YouTube debated whether to allow violent videos for educational purposes, while Twitter introduced a policy of withholding tweets on a country-by-country basis.
In 2013, leaked documents from Facebook revealed the types of content it moderated. The following year, concerns about online radicalization emerged, and YouTube changed its stance on violent videos after footage of journalist James Foley's beheading went viral. Twitter also faced criticism for failing to rein in harassment, most visibly the abuse surrounding the release of the Ghostbusters remake. Meanwhile, the workers moderating content for these platforms reported grim working conditions.
All of this grew more complicated in 2016, a year marked by misinformation during the presidential election between Hillary Clinton and Trump. Although Facebook launched a fact-checking program, the platforms struggled to curb the spread of false information. In Myanmar, content on Facebook contributed to acts of ethnic violence against the Rohingya. Additionally, Facebook Live became a venue for broadcasting suicides and shootings, including the aftermath of Philando Castile's killing by police, which his girlfriend streamed live.
In 2018, the arrival of TikTok brought new challenges, while Twitter removed millions of bots to mitigate political misinformation. That same year, YouTube published its first transparency report, and Facebook announced plans for what would become its Oversight Board, which allows users to appeal moderation decisions. The 2019 massacre in Christchurch, broadcast on Facebook Live, led to the creation of the Christchurch Call to Action, aimed at preventing terrorists from exploiting the internet.
In 2020, Trump signed an executive order against "online censorship," targeting Section 230 of the Communications Decency Act over what he considered bias against him and other conservatives. The escalation came partly in response to Twitter labeling several of his tweets as misleading, and it prompted congressional hearings on content moderation.
The COVID-19 pandemic further exacerbated the situation as misinformation about the virus proliferated, and platform rules expanded to address hate speech and electoral misinformation. In this context, January 6, 2021, marked a turning point: giants like Facebook and Twitter suspended Trump's accounts for inciting violence during the Capitol insurrection.
Since then, Trump has regained his presence on social media, but the debate over content moderation has not ceased. Republicans argue that the practice silences conservative voices. With Elon Musk, a self-described "free speech absolutist," having acquired Twitter, and Republicans establishing subcommittees to investigate what they call the weaponization of censorship, a more hostile environment toward social media platforms is expected.
Future discussions about content moderation will center on legislative and reputational questions. With a second Trump term approaching, moderation will also become a campaign issue, and renewed debate over Section 230 and its implications for platform liability is anticipated.
As artificial intelligence begins to play a role in content moderation, the landscape will continue to shift. Despite predictions about what a second Trump term could bring, it is still too early to foresee how moderation standards will be defined and who will set them. That environment will increasingly be shaped by political power, public perception, and technological change, creating new challenges for regulating online spaces.