SC: AI must only help, ‘not replace,’ human judgment

The Supreme Court has crafted a framework that will guide courts in the use of artificial intelligence (AI), or what it chooses to refer to as “augmented intelligence,” to ensure that the cognitive abilities of judges, lawyers and court employees are only aided, not replaced, by technology.

The high tribunal en banc issued the resolution on Feb. 18, after Philippine courts began studying and pilot-testing AI tools in their proceedings, including for transcription and research, as early as three years ago.

The framework is meant to provide a comprehensive guide on the use and development of “human-centered augmented intelligence” grounded in the promotion of the rule of law, social justice, and privacy and data protection.

“Human control must be paramount in any use of AI. While human-centered augmented intelligence aims to approximate aspects of human intelligence and cognition, it must always be understood that this should not replace human discernment,” the high court stressed in the 25-page guidelines.

This puts human discernment and reasoning at the center of the country’s judiciary branch even as it pushes to innovate and digitize court processes.

According to the Supreme Court, the use of any AI tool, to be implemented in phases beginning with a pilot test, must first be approved by the en banc, which holds rule-making power and administrative supervision over all courts.

Disclosure

The reasons for resorting to AI tools in processing court documents, such as voice-to-text transcription, translation, compilation, summarization, copyediting, proofreading or citation, as well as the tool versions used, should be disclosed.

“In case a member of the Judiciary, or court official or employee uses an AI tool or any output thereof in any process in the Judiciary, such use must be clearly disclosed and explained in plain and understandable language to cultivate public trust and confidence,” the high court said.

Users must also indicate the level of AI involvement and human oversight for the output, which should be “comprehensively documented” to determine accountability.

Advisory body

“Responsibility ultimately falls on the designer, developer, or user of an AI tool. As such, a user of an AI tool … is personally responsible for the output the tool produces and its consequences,” it said.

The Supreme Court will form a permanent committee on human-centered augmented intelligence, which will be composed of stakeholders from the legal sector and related fields, to serve as its primary advisory body that will oversee the development, design and responsible use of AI tools.

Consultations with stakeholders will also be conducted to regularly monitor and evaluate the effectiveness of the AI tools in courts and to avoid “overreliance” that could lead to unintended consequences.

Data privacy

Before an AI tool can be used for court work, the high tribunal said, a comprehensive risk assessment must also be conducted to protect against threats and “unacceptable harm,” such as “data poisoning, model leakage, or attacks against both software and hardware that may lead to changes in data or system behavior.”

For one, the use of predictive AI must be classified as a “high-risk system” because of its tendency to produce “inaccurate or inadequate” predictions, the high court noted.

The AI tools should also not worsen “existing inequalities” or trigger new forms of discrimination, especially when such systems use “natural language or mimic human-like conduct,” or rely on incomplete data that may create biases against marginalized groups or sectors.

Data protection and privacy are also priorities in AI use, especially as the court processes information at every stage of its operations. Confidential, privileged and sensitive information should not be processed in any AI tool without the approval of the court, it noted.

“Users of AI tools must be aware of what data is entered into the tool, how the data is processed by the tool, and who has access to the data used by the tool. Any use of an AI tool must be consistent with data privacy regulations and judiciary-issued data governance policies,” said the Supreme Court.

The digitalization push of the Supreme Court is part of its Strategic Plan for Judicial Innovations 2022-2027, a five-year blueprint that aims to enhance and speed up the delivery of justice to more people in a shorter period.

The Sandiganbayan and first- and second-level courts reported an average 50-percent reduction in transcription time, with some achieving up to 80 percent, after pilot-testing Scriptix, an AI transcription tool, from 2023 to 2024.

In June 2025, the Office of the Court Administrator directed court stenographers to use Scriptix in their transcription tasks after the Supreme Court authorized the tool’s procurement.
