Individual rights to one’s image
During a recent trip to the Philippines we had an interesting conversation about artificial intelligence over lunch with family. The generational divide was ably represented: Boomer, Gen X, Millennial, and Gen Z, making for a lively and insightful discussion from a variety of perspectives and experiences. It started with the prevalent use of AI today and its impact on education, cognitive skills, and the employment prospects of the youth, and then moved on to how it is being used to create highly sophisticated deepfakes, with implications for intellectual property, image rights, and personal privacy.
As I thought more about that conversation, it brought to mind a news report I came across a few months back about a Danish proposal to amend the country's copyright law to strengthen protection against digital imitations of people's identities. It would be the first law of its kind in Europe. The bill would be introduced in the Danish Parliament in the winter of 2025/2026.
The Danish bill introduces two types of protection into the current Copyright Act: imitation protection for performing artists against the sharing of realistic, digitally generated imitations of their artistic performances without their consent, and general protection against realistic, digitally generated imitations of personal characteristics, such as appearance and voice, without that person's consent. The latter is an unprecedented move that classifies a person's face, body, and voice as works protected under copyright law. This will give individuals control over their identity in digital environments, combining intellectual property and personality rights and allowing them to take legal action against unauthorized AI-generated representations of themselves or their work.
Upon my return to Sweden, I started checking whether something similar was being considered in the Philippines. It turns out that Sen. Bam Aquino has recently filed Senate Bill (SB) No. 758, also known as An Act Recognizing Individual Rights Over One’s Likeness and Identity, Regulating the Creation and Use of Deepfakes Through Disclosure, Consent, and Platform Accountability, Providing Remedies and Penalties Therefore, and for Other Purposes. The bill’s lengthy title gives a good idea of its objective and purpose. It is not that different from the Danish proposal; in fact, the bill’s explanatory note cites that the legislation builds on best practices, including Denmark’s move to give its citizens ownership over their own digital likeness.
Sen. Aquino announced his intention to file such a bill in early July in order to curb the spread of AI-generated deepfakes intended to defraud the public. He also urged authorities to investigate an AI-generated video that falsely portrays him as endorsing a supposed government-backed investment initiative. These sorts of deepfake videos are often linked to phishing websites. They can also be used for financial fraud and identity theft, as well as weaponized for political manipulation.
While celebrities and high-profile personalities may have the means, influence, and resources to push back against the unlawful and unauthorized use of their images, ordinary people do not. Aquino’s bill helps remedy that by giving ordinary people, whose likeness or voice may be taken advantage of by unscrupulous actors using AI deepfakes, protection under the law and legal remedies that would compel platforms to take down the material.
However, there is a crucial difference I would like to point out between SB No. 758 and the Danish legislation. The Danish version has clear exceptions: the proposed protections do not apply to imitations for the purpose of parody, caricature, or satire, as Section 24 of the Danish Copyright Act permits such use of works protected by copyright.
I don’t see such an exception in SB No. 758, and while I find the bill commendable and much-needed legislation, it is important that such a safeguard for free speech and expression be incorporated. For example, Section 7b prohibits the use of deepfakes without consent if they are used in a misleading, unauthorized commercial, sexual, or political context. Commercial and sexual contexts can likely be identified and defined easily enough, but what about political context? Without safeguards, that provision may be interpreted in a manner that suppresses legitimate criticism delivered in the form of parody and satire.
Fortunately, the bill still has to go through the relevant committee and will be debated. Hopefully, when a final version is ready to be passed, it will include such safeguards so that it protects not only individuals from AI deepfakes but also our right to express ourselves freely.
—————-
Moira G. Gallaga served three Philippine presidents as presidential protocol officer, and was posted at the Philippine Consulate General in Los Angeles, California, and the Philippine Embassy in Washington, D.C.