
Labeling AI-generated content

Moira Gallaga

The Philippine information ecosystem is fast-changing, vibrant, and densely interconnected across social media platforms, online news, personal messaging networks, and creator-driven video communities. It is also highly vulnerable to misinformation, identity manipulation, and political persuasion campaigns. An analysis by Yvonne Chua for the fact-checking project Tsek.ph and the Reuters Institute’s Digital News Report for the Philippines notes that Filipino concern over online disinformation climbed to a record high of 67 percent in early 2025.

Filipinos are among the world’s most active social media users, and artificial intelligence tools that can convincingly imitate faces and voices are increasingly accessible to everyday content creators. This combination of high exposure, high engagement, and low friction in content production raises the issue of labeling requirements for AI-generated content.

Proponents argue that such a measure would help protect citizens against scams, impersonation, manipulated political messaging, and nonconsensual use of likeness. Labeling can also support journalists, educators, and fact-checkers in tracing and contextualizing claims, and reinforce ethical norms in digital communication. Digital rights advocates and media accountability groups support labeling as a transparency measure that helps citizens make informed judgments. They emphasize that trust is a fragile resource in the Philippine information ecosystem, and knowing when content is artificially generated is a basic condition for public awareness and informed consent.

While there are benefits to mandating the labeling of AI-generated content, and even legal and compliance experts agree that transparency would be beneficial, pragmatic concerns remain. There is currently no Philippines-specific rule requiring disclosure of AI use. Making disclosure mandatory raises questions about scope, evidence, and enforcement, along with worries over unclear obligations, compliance costs, and the practicalities of proving when labeling is required. There is also concern that heavy compliance demands might disadvantage independent, small-scale creators or entities.

Another concern is that if the rules are too broad and the definitions vague, they may unintentionally penalize satire, visual experimentation, and legitimate artistic editing—forms of expression central to Filipino digital creativity. Worse, such gaps in the rules could be abused to stifle creative expression. At a time when we seek accountability for malfeasance and corruption in government, satire and parody have become powerful tools for raising public awareness and speaking truth to power.

Several bills are already pending in both the Senate and the House of Representatives that seek to deal with AI-generated media content. Prominent among these is Senate Bill No. 25, known as the “Artificial Intelligence Regulation Act” or “Aira,” which establishes a national AI policy ecosystem focused on safety, accountability, skills development, and transparency. Meanwhile, House Bill No. 2312, the “Anti-Deepfake Personality Rights Protection Act,” contains a specific provision requiring all deepfakes, regardless of purpose, to carry a clear and conspicuous disclaimer or label indicating that the content is artificially generated or manipulated.

It is quite apparent that this sort of disclosure requirement for AI-generated content will be generally welcomed as a countermeasure against disinformation, manipulation, and violations of personality rights. However, it is just as vital to strike the right balance: labeling should protect the public while preserving legitimate creative expression and creating a level playing field for stakeholders. This can be achieved by prioritizing high-risk content such as political persuasion, impersonation, fraud, and nonconsensual intimate media, and by clearly defining thresholds that distinguish harmless stylistic editing from deceptive simulation.


The labeling of AI-generated content should not be viewed and pursued as a mere regulatory exercise; it should be treated as a strategy to protect the Philippine information ecosystem itself—an ecosystem that relies on trust, shared meaning, and the ability of citizens to verify the authenticity of what they encounter online. Thoughtful, carefully scoped implementation can safeguard against deception while preserving the creativity, expressiveness, and openness that make Philippine digital culture so dynamic. Done properly, labeling will strengthen, rather than destabilize, the information environment Filipinos rely on every day.

———

Moira G. Gallaga served three Philippine presidents as presidential protocol officer, and was posted at the Philippine Consulate General in Los Angeles, California and Philippine Embassy in Washington.


© 2025 Inquirer Interactive, Inc.
All Rights Reserved.
