Amid the chaos of several new Republican appointments to Meta’s board and user reports of being unable to unfollow the Instagram accounts of President Trump and Vice President J.D. Vance, Meta has whipsawed since the GOP’s victory in the November election, rolling out new content moderation policies and abandoning third-party fact checkers entirely.
Despite CEO Mark Zuckerberg hailing this as a way of “restoring free expression,” the move will ultimately fuel the spread of disinformation and let more harmful media circulate on Meta’s platforms, exposing users to dangerous content.
The platform has also scaled back its automatic filters. While topics like terrorism and child sexual exploitation will still be blocked, lower-severity content will now be allowed to remain until users report it. For the world’s largest social media company, letting this content spread is extremely irresponsible.
Seeking to gain favor with the new Republican administration, whose victory Zuckerberg described as a “cultural tipping point,” Meta drastically loosened its community standards to allow more extremist content. It’s outrageous that these standards now permit, for example, degrading language about women and members of the LGBTQ+ community.
“Users are now allowed to, for example, refer to ‘women as household objects or property’ or ‘transgender or non-binary people as “it,”’ according to a section of the policy prohibiting such speech that was crossed out,” CNN reported.
According to the community standards page on hateful conduct, Meta will “allow allegations of mental illness or abnormality when based on gender or sexual orientation,” citing the “common non-serious usage” of words like “weird” as typically acceptable.
This seemingly simple change in language could easily spiral into yet more hate directed at these vulnerable communities. For all its power to unite people around shared passions like fishing, quilting and karaoke, social media has also proven more than capable of breeding coordinated, targeted violence across large communities.
An Amnesty International report found that in the years leading up to 2017, Facebook’s algorithms recklessly amplified violent and hateful rhetoric, fueling the Myanmar military’s genocidal campaign of ethnic cleansing against the Rohingya. This disturbing example is not an isolated one: The Youth Endowment Fund reported in 2023 that 60% of children nationwide had seen real-world acts of violence on social media. Under the revised policies, Meta’s power to magnify the objectification of marginalized groups will only feed this violence.
In his video announcement, Zuckerberg echoed criticism of fact checkers, which has come mostly from right-wing figures, and accused them of being “too politically biased.” Of course, these accusations aren’t grounded in evidence. The fundamental principles of independent fact-checking are not about supporting or dismantling any political voice, but about keeping misinformation contained.
Contrary to many demonizing portrayals, the role of fact checkers is not to impose censorship. They merely flag content and suggest context; the platform itself makes the ultimate decision on whether to keep or remove a post.
Instead of fact checkers, Meta’s new plan relies on user-generated “community notes,” in which anyone can comment and offer context on a post. This feature, which the social media company X has used for four years, has proved unreliable, difficult to verify and misleading. This isn’t just hypothetical: a WIRED investigation found that the notes worsened the disinformation problem because determined users, who can upvote or downvote them, could easily manipulate the system. Altogether, community notes have the potential to skew media toward extremist right-wing views rather than reduce bias and favor a political middle ground.
The prevalence of Meta’s platforms among young people means dangerous content has enormous potential to damage susceptible minds. Among Meta’s 3.29 billion monthly active users, children are at especially high risk of being influenced and of influencing others, dynamics that could easily spiral into a wider web of confusion and hateful speech.
Meta’s rush to get rid of fact checkers illustrates how abrupt its transition to align with the right-leaning power structure in Washington has been. Zuckerberg’s decision came so suddenly that the company’s third-party fact-checking partners received no advance notice before he announced it.
Zuckerberg has expressed hope of collaborating with all three Republican-controlled branches of government over the next four years and of being more involved in discussions about future tech policy, which could be shaped in his company’s favor. The cost? A platform friendlier to extremist right-wing perspectives, even if that means a highly distorted version of the truth becoming commonplace.
As users, we need to stay vigilant about the media we share and consume online. The first step is to recognize these company policies for what they truly are: attempts to squeeze out profits at the expense of moderated, appropriate and accurate content.