
Stepping stones to better AI regulation in PH


As AI’s influence grows, so do concerns about its ethical use, potential biases, and societal impact. To navigate these complexities, a robust regulatory framework is essential. Regulatory discussions are always complex, and my brushes with them show they also tend to be quite siloed in approach, especially where technology is concerned. Enter the 4E framework—education, engineering, enforcement, and ethics—a holistic approach to ensuring that AI serves Philippine society responsibly and equitably across multiple dimensions.

Education: Empowering the Masses

The first pillar of the 4E framework is education. AI is often perceived as the domain of specialists and tech enthusiasts. This perception must change. Broad public training and mass education initiatives are crucial to demystify AI and make its benefits accessible to all. Consider Finland’s initiative to train 1 percent of its citizens in AI and, more recently, Dubai’s announcement that it will produce one million prompt engineers within three years.

By educating the public, we can foster a more informed and engaged society capable of participating in discussions about AI’s role and impact. Educational programs should be inclusive, spanning schools, universities, and community centers, ensuring that knowledge about AI is widespread and diverse. By democratizing AI education, we pave the way for a society that is not only knowledgeable but also resilient to the challenges posed by rapid technological advancements.

Engineering: Building AI with Integrity

The second pillar, engineering, focuses on the technical backbone of AI. It is imperative to develop AI infrastructure that aligns with our values. This involves ensuring widescale data center availability and adequate energy supply to support the intensive computational demands of AI. Reinforcement Learning from Human Feedback (RLHF) and proactive model drift monitoring are essential to ensure that AI models remain aligned with human values over time.
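As an illustration of what proactive drift monitoring can look like in practice, here is a minimal sketch that compares a model’s recent score distribution against a historical baseline using the Population Stability Index (PSI). The data, bin count, and 0.2 threshold are illustrative assumptions, not a prescribed standard.

```python
import math

# Minimal sketch of model drift monitoring: compare a model's recent
# output distribution against a historical baseline with the Population
# Stability Index (PSI). Data and thresholds here are illustrative.

def psi(baseline, current, bins=10):
    """PSI between two samples of scores in [0, 1]; higher means more drift."""
    eps = 1e-6  # avoid log(0) for empty bins

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int(x * bins), bins - 1)  # clamp 1.0 into the last bin
            counts[idx] += 1
        return [max(c / len(sample), eps) for c in counts]

    p = proportions(baseline)
    q = proportions(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Identical distributions score near zero; a common rule of thumb treats
# PSI above 0.2 as significant drift worth investigating.
baseline_scores = [i / 100 for i in range(100)]       # uniform over [0, 1)
drifted_scores = [0.5 + i / 200 for i in range(100)]  # shifted to [0.5, 1)

print(round(psi(baseline_scores, baseline_scores), 4))  # 0.0
print(psi(baseline_scores, drifted_scores) > 0.2)       # True
```

In production, the same comparison would run on live model inputs or outputs on a schedule, triggering review or retraining whenever the index crosses an agreed threshold.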

Additionally, democratizing access to both hardware and software is critical. This means making advanced AI tools and resources available to a broader range of researchers, developers, and institutions, not just those with significant financial backing. This can help foster innovation and diversity in AI development, preventing a concentration of power and influence. Moreover, the data used to train AI must be meticulously curated to avoid unintended biases.

Equally important are the rules and guidelines for using AI, including practices such as prompt engineering, which must be clearly defined and adhered to. By prioritizing ethical engineering practices, we ensure that AI systems are not only effective but also fair and trustworthy. This comprehensive approach to AI engineering helps ensure that the technological infrastructure is robust, inclusive, and aligned with societal values, paving the way for responsible AI development and deployment.

Enforcement: Encouraging Responsible AI

Enforcement, the third pillar, is about establishing a robust system of incentives and penalties. Responsible AI research and development should be rewarded, while misuse or abuse must be met with appropriate sanctions. Current relevant regulations such as the Data Privacy Act or Republic Act (RA) 10173, the Cybercrime Prevention Act (RA 10175), and the Intellectual Property Code (RA 8293), to name a few, predate modern AI and do not explicitly mention it.

Defining standards for acceptable use of AI is critical to prevent harmful applications and ensure that AI technologies are used in ways that benefit society.

Research incentives, primarily the Department of Science and Technology’s (DOST) Grants-in-Aid (GIA) program, list AI as a priority area. However, the program does not specifically mention generative AI as of this writing, and grantees have historically been limited to members of academia.

Effective enforcement mechanisms require collaboration between governments, industry, and civil society to create a balanced and fair regulatory environment. By fostering a culture of accountability, we can ensure that AI development is guided by principles that protect and benefit all members of society.

Ethics: Upholding Societal Values

The final pillar, ethics, is perhaps the most crucial. Risk management, regulatory compliance, and ethical considerations must be at the forefront of AI decision-making processes. This involves upholding societal values such as accuracy, accountability, transparency, and fairness, as well as ensuring a human-in-the-loop approach where human judgment is integral to AI operations. Ethical AI practices build trust and legitimacy, which are essential for the long-term acceptance and success of AI technologies. By prioritizing ethics, we create a foundation for AI that respects human dignity and promotes the common good.

Core Values Guiding the 4E Framework


Central to the 4E framework are the values of accuracy, accountability, transparency, fairness, and a human-in-the-loop approach.

  • Accuracy ensures that AI systems deliver reliable and precise outputs.
  • Accountability means that developers and users of AI are responsible for their actions and decisions.
  • Transparency involves clear and open communication about how AI systems work and make decisions.
  • Fairness guarantees that AI technologies do not perpetuate or exacerbate social inequalities.
  • Human-in-the-loop approach ensures that human judgment remains a critical component of AI decision-making processes, safeguarding against over-reliance on automated systems.

A Path Forward

As of this writing, several AI bills are pending in Congress. At a cursory glance, several of them overlap and could be combined:

  • HB 7396, Artificial Intelligence Development and Regulation Act of the Philippines, by Rep. Robert Ace Barbers
  • HB 7913, Artificial Intelligence (AI) Regulation Act, by Rep. Keith Micah Tan
  • HB 7983, Artificial Intelligence Development Act, establishing the National Center for AI Research (NCAIR), by Rep. Keith Micah Tan
  • HB 9448, Protection of Labor Against Artificial Intelligence (AI) Automation Act, by Rep. Juan Carlos Atayde
  • HB 10385, AI Regulation Act, establishing an AI Bureau within the DICT, by Reps. Bryan Revilla, Lani-Mercado Revilla, and Ramon Jolo Revilla III
  • HB 10460, Protecting Employees for Termination or Layoff due to Automation by AI, by Rep. Lordan Suan

The Department of Trade and Industry has recently launched the Center for AI Research and its updated National AI Roadmap. Viewed through the lens of the 4E framework, much of the current legislative and executive activity is focused on the enforcement pillar. This is not surprising, but it highlights the gaps we need to fill. The other pillars of education, engineering, and ethics could nevertheless see downstream benefits should one or several of the above bills pass. Nor does this preclude sector-specific regulation that may eventually emerge, such as in education, telecommunications, and law enforcement.

Meanwhile, the 4E framework offers a structured approach to AI regulation that balances innovation with responsibility. By focusing on these four pillars, we can develop AI technologies that are not only advanced but also ethical, transparent, and fair. As AI continues to evolve, we must adopt broad regulatory frameworks to guide AI development and ensure that it serves the best interests of humanity. By doing so, we can harness the transformative potential of AI while safeguarding against its risks, creating a future where AI is a force for good in the Philippines.

Dominic Ligot is the founder, CEO, and CTO of CirroLytix, a social impact AI company. He also serves as the head of AI and Research at the IT and BPM Association of the Philippines, and the Philippines representative to the Expert Advisory Panel on the International Scientific Report on Advanced AI Safety.

 


© The Philippine Daily Inquirer, Inc.
All Rights Reserved.
