Meta’s updated content policies could increase the risk of inciting further acts of mass violence and genocide.
Recent announcements by Meta regarding its content policies have raised concerns about the safety of vulnerable communities worldwide. The changes have been perceived as potentially dangerous, especially in light of the company’s past contributions to mass violence and human rights abuses, most notably during the Rohingya crisis in Myanmar in 2017.
Mark Zuckerberg, Meta’s founder and CEO, unveiled changes to the company’s content policies on January 7. These changes were seen as an attempt to align with the new Trump administration and included lifting restrictions on previously banned speech, such as harassment of racial minorities. Additionally, the company announced a shift in content moderation practices, particularly a reduction in automated content moderation. Although these changes were initially implemented in the US, Meta hinted at the possibility of rolling them out globally. This apparent retreat from Meta’s previous commitments to responsible content governance has raised red flags within various communities.
Amnesty International and other sources have documented how Meta’s algorithms prioritize and amplify harmful content, including misinformation and content inciting racial violence, in order to maximize user engagement and profitability. Studies have shown that these algorithms favor content that elicits strong emotional reactions, often at the expense of human rights and safety. With the elimination of existing content safeguards, the potential for harm is expected to increase significantly.
A former Meta employee expressed concern about the implications of these changes, saying they believed them to be a precursor to genocide. This sentiment is shared by human rights experts who have voiced apprehension about Meta’s role in contributing to violence in fragile and conflict-ridden societies.
The tragic consequences of Meta’s actions in Myanmar in 2017 serve as a sobering reminder of the dangers of unregulated content dissemination. Facebook’s platform in Myanmar became a breeding ground for anti-Rohingya sentiment, which played a role in escalating violence against the community. Without appropriate safeguards, Facebook’s algorithms exacerbated existing tensions and contributed to the atrocities witnessed in the region. According to a UN report, the platform radicalized local populations and incited violence against the Rohingya.
Despite facing criticism for its past actions, Meta seems to be repeating the same mistakes by removing vital protections. By making these changes, Meta appears to be enabling hate speech and harassment against marginalized groups, including trans individuals, migrants, and refugees.
In response to Meta’s reluctance to acknowledge its past wrongs, a whistleblower complaint was filed with the US Securities and Exchange Commission (SEC). The complaint accuses Meta of failing to act on warnings about its role in contributing to violence against the Rohingya and calls for an investigation into potential violations of securities laws. Meta’s executives had previously downplayed the impact of Facebook’s algorithms on polarization, despite evidence to the contrary. The company also resisted calls for human rights impact assessments and oversight committees to address international policy issues.
It is essential for companies like Meta to recognize their responsibility to uphold human rights standards globally. While freedom of expression is a fundamental right, it cannot come at the expense of endangering vulnerable populations. Meta’s refusal to take corrective action, and its prioritization of profit over accountability, only serve to perpetuate the risks faced by marginalized communities.