Meta’s move: The risks for Asia if left unchecked
A few months before Myanmar descended into chaos in 2017, digital rights advocates raised the alarm over Facebook’s failure to curb disinformation. What followed was a harrowing chapter in modern history: hate-filled posts fueled violence, and misinformation worsened the persecution of the Rohingya. Facebook later admitted it hadn’t done enough to prevent its platform from being weaponized. That failure cost lives.
Meta’s recent decision to halt its fact-checking program in the United States may not appear as dire—yet. But the lessons from Myanmar remind us that content moderation and fact-checking are not just technical policies; they are moral responsibilities. While Meta’s move is currently limited to the United States, it raises important questions for the global fight against disinformation, particularly in regions like Asia where social media dominates public discourse.
Global implications
Fact-checking programs form a critical layer of defense in our battle against disinformation. Beyond addressing falsehoods, they establish trust in an era awash with competing narratives. By pausing its US initiative, Meta signals a troubling disengagement from this responsibility. Worse, the company’s framing of fact-checkers as politically biased undermines their legitimacy, and lends credibility to regimes eager to dismiss inconvenient truths as partisan propaganda.
Asia, with its vibrant but vulnerable democracies, faces particular risks if Meta expands this rollback. The region is uniquely susceptible due to linguistic diversity, limited digital literacy, and the unparalleled reach of platforms like Facebook. In countries like the Philippines, misinformation doesn’t just distort debates—it shapes votes, frames issues, fuels divisions, and erodes trust in democratic institutions.
PH elections in the crosshairs
The timing of Meta’s decision is critical, especially for the Philippines, which heads into midterm elections in May. Overseas Filipino voters, many of whom rely heavily on social media for election updates, are particularly vulnerable. Disinformation targeting these voters could ripple back home, influencing not just individual votes but family-wide decisions.
The introduction of internet voting for the 2025 elections only heightens the stakes. While it promises to expand electoral participation, it also creates fresh opportunities for misinformation, including claims about the system’s integrity. Such claims could spread unchecked among Filipinos overseas, discouraging participation or casting doubt on the results. These risks are all too real in a political landscape where trust is fragile and disinformation campaigns are increasingly sophisticated.
Asia’s unique challenges
Asia’s digital ecosystem is fertile ground for disinformation. Platforms like Facebook and TikTok dominate as primary sources of information for millions, yet the region struggles with significant hurdles. Many users lack the tools to discern credible content, and non-English languages are often left out of content moderation efforts.
The region’s digital vulnerabilities are further underscored by its history as a testing ground for online manipulation. In 2016, SCL Group, the parent company of Cambridge Analytica, used the Philippines as a “petri dish” for testing behavior modification tactics, influencing the electoral victory of former president Rodrigo Duterte, before deploying them elsewhere. While regulatory frameworks in the region have since evolved, they remain locked in a game of catch-up, striving to match the pace of ever more sophisticated influence operations.
Disinformation thrives in these contexts. State-backed campaigns, coordinated influence operations, and algorithmic blind spots create a perfect storm for manipulating public opinion. The absence of robust fact-checking would leave communities exposed, not only to misinformation but to its long-term consequences: weakened democratic institutions and diminished trust in governance.
Meta’s move on fact-checking is not global yet, but it underscores the importance of avoiding a one-size-fits-all approach to global content moderation.
Lessons from the past
In Myanmar, the failure to address disinformation fueled real-world harm. Asia cannot afford a repeat. If anything, Meta’s shift should serve as a wake-up call—a reminder that disinformation is not merely a technical problem but a threat to human dignity and democratic values.
The fight against disinformation is a shared global responsibility. As platforms like Meta reevaluate their commitments, it is critical that governments, journalists, civil society, and citizens step up to fill the gaps. The stakes are high—truth itself is at risk—and only through collective efforts and localized solutions can we safeguard the integrity of our information ecosystems.
—————-
Paco A. Pangalangan specializes in countering misinformation and disinformation, policy advocacy, and fostering democratic resilience. He is a former adviser on Misinformation and Disinformation at the International Committee of the Red Cross (ICRC) in Geneva, and a fellow at the University of Washington’s Center for an Informed Public.