As AI shakes up the workplace, new challenges arise

There’s no denying that artificial intelligence (AI) is here to stay in the modern workplace. Yet amid this rapid adoption, questions about AI ethics, responsible use and compliance with government regulations are discussed far less often.
Take Ken Suarez, a product manager from Nueva Ecija, who says AI has helped him perform simple but repetitive and time-consuming tasks.
“It is supplementary to my knowledge and skills,” he tells the Inquirer. “Since my field is technology-related, we are highly encouraged to use AI because we are trying to leverage and explore the extent to which one could use it.”
So far, AI has served him well in tasks that draw on the vast training data behind these models, Suarez explains. But he takes its output with a grain of salt.
“You could consider AI as your junior at work, so you are checking whether their work is okay. It’s ethical if we use AI as a tool and we have due diligence to check its output,” Suarez says.
He is also aware of the limits one should place when using AI for work. “There would only be risks if you do not have due diligence in omitting private information. It’s in the terms and conditions of AI or large language models that they can possibly use the things you input for training,” he explains.
“Although they would say that they would use [your data] responsibly, mabuti nang sigurado (It’s better to be sure),” he adds.
Similar discussions are surely taking place in boardrooms and offices worldwide. Companies should be aware of the risks, ethical considerations and responsibilities that come with AI use, says Alvin Toh, cofounder and chief marketing officer of Singapore-based Straits Interactive.
Founded in 2013, Straits Interactive describes itself as an educational technology company providing services such as AI-enhanced training and expert advisory on data protection and governance.
It recently debuted Capabara, a capability-as-a-service platform that helps institutions develop new generative AI tools such as personalized AI tutors or chatbots.
AI is everywhere
Toh says that organizations have much to learn about the risks present at every stage of the AI lifecycle, and he observes that business leaders and professionals understand these issues to varying degrees.
While some are deliberate in setting their own AI policies, such as steering clear of free versions of AI models to prevent data leakage and the use of their inputs for model training, others are not even aware of these dangers.
“Some mistakenly believe they don’t need to worry about AI risks because they haven’t formally implemented generative AI—failing to realize that many of their employees may already be quietly using publicly available AI tools in ways that are noncompliant with corporate policies,” Toh explains to the Inquirer in an email interview.
He says that whether or not businesses actively deploy AI, generative AI tools are already embedded in commonly used applications to boost productivity, powering features such as text generation, data analysis, meeting summaries, transcription and automated action items.
“The caveat is that many of these AI-powered features are enabled by default, and by agreeing to their terms of use, organizations may unknowingly allow these AI models to process their data for further training under the pretext of ‘improving services,’” Toh adds.
Ethics, responsibility
Responsible AI use and ethical AI use intersect with privacy and data issues, but the two are often conflated. To use AI responsibly, Toh says, is to mitigate risks and prevent the negative impact of AI’s limitations and potential harm; to use it ethically is to focus on what is “morally right” and to align with societal values.
As such, he distills questions about AI ethics into the “3 A’s”: autonomy, or whether the system can act on its own without human intervention; agency, or the controls and limits that AI developers and deployers put in place to prevent undesired actions; and assurance, or how organizations can ensure safety, security and trustworthiness in AI use.
Meanwhile, three more questions fall under the “3 I’s”: indicators, or the metrics that serve as a yardstick for measuring the AI system’s performance; interfaces, or how humans interact with the AI system; and intentionality, or the purpose the AI system is designed for.
Toh describes the biggest ethical risks in AI adoption as the deployment of autonomous AI systems, machine bias in legal and decision-making systems, privacy and surveillance concerns, and the lack of transparency and explainability in how AI arrives at its decisions or outputs, more commonly known as the “black box problem.”
Risks of low-cost models
Possibly speeding up AI adoption are new low-cost models, such as the one Chinese tech startup DeepSeek unveiled earlier this year, which has displayed abilities similar to those of current models despite being developed at a fraction of the cost on lower-performing hardware.
Toh says that while DeepSeek’s entry challenges traditional market dynamics and lowers barriers to entry in the AI sector, it also brings its own challenges and issues.
He explains that organizations can implement DeepSeek’s models without expertise in AI safety; a Straits Interactive research study found that many AI startups prioritize rapid market entry over privacy and security considerations.
Businesses can also expose themselves to unintentional harm, especially in sectors such as health care, finance and human resources, where “AI-powered decisions can significantly impact individuals.”
Most critically, Toh says that self-hosted AI models may be susceptible to data poisoning, where AI training datasets are compromised; prompt injection, where malicious inputs manipulate AI systems into unintended behaviors; and unauthorized tampering. Malicious actors can exploit these weaknesses to compromise AI systems or fuel misinformation campaigns.
“DeepSeek represents both the promise of accessible AI and the imperative for corresponding governance structures,” Toh says.
Balancing innovation, accountability
As it is, Toh recommends that organizations develop industry standards for open-source AI implementations that establish safety and ethical requirements, and that they provide more opportunities for professionals to learn AI ethics.
He also says regulatory frameworks are important for balancing innovation with safety and for mandating transparency about AI’s capabilities, limitations and data usage policies.
“As organizations increasingly adopt these cost-efficient models, the AI community must work collectively to ensure that technical progress advances alongside ethical considerations and responsible deployment practices,” he explains.
Businesses should review the privacy policies, security settings and terms of use of AI-powered integrations in their systems, focusing on what data are collected, how these are processed and stored, and whether AI models are trained on organizational data.
Platforms such as Straits Interactive’s Capabara can also help organizations maintain compliance with AI governance frameworks, build trust and empower users.
Government regulation
Alongside private sector efforts, Toh points to government moves to introduce AI governance frameworks, with the European Union (through the EU AI Act) and South Korea (through the AI Basic Act, set to take effect in January) adopting strict regulations to ensure accountability.
“Ideally, governments should enforce AI safety regulations while allowing room for responsible innovation,” he says.
In the Philippines, Toh observes that House Bill (HB) No. 10944, filed by Caloocan Rep. Mary Mitzi Cajayon-Uy, provides valuable insights into the trajectory of regulation in the country.
HB 10944, among other things, proposes creating a Philippine Artificial Intelligence Board, a database of AI laboratories and companies in the country, and stiff penalties for violations.
“While it might be too early to say whether these measures are sufficient, it is certainly part of the government’s commitment to ensuring that AI is implemented in a responsible and ethical manner,” Toh comments.