
Using Generative AI with care


In the rapidly evolving landscape of business and management, the allure of generative AI (GenAI) applications is undeniable. These tools promise unprecedented efficiency in generating reports, crafting emails, summarizing data and even offering strategic insights. However, as with any technological advancement, their use demands a critical and cautious approach, particularly from managers and business leaders responsible for steering organizations in complex environments.

Understanding what GenAI really is

At the core of GenAI applications lies a sophisticated statistical mechanism. These tools are trained on enormous amounts of text sourced from social media, digital publications and various corners of the internet. This vast dataset allows GenAI to predict the most likely sequence of words following any given prompt, creating outputs that are impressively coherent and contextually relevant.
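The statistical principle behind this prediction can be illustrated with a deliberately tiny sketch: a toy "bigram" model in Python that counts which word most often follows another in a sample corpus. Real GenAI systems use neural networks trained on billions of documents, but the underlying idea of probabilistic next-word prediction is the same (the corpus and function names here are invented for illustration).

```python
from collections import Counter, defaultdict

# Toy training corpus: the model knows nothing but word co-occurrence.
corpus = "the market is growing the market is volatile the report is ready".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "market" ("the market" occurs twice, "the report" once)
```

Note what the model does not do: it has no notion of whether the market actually is growing or volatile; it only reproduces the frequencies of its training text.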

However, it is crucial to recognize that texts merely represent language, and language, in turn, merely represents the realities of the world. Neither text nor language is reality itself. They are constructs: reflections shaped by the biases, perceptions and contexts of their creators. Consequently, GenAI, being a product of statistical training on such texts, can only approximate human language, however convincingly. It lacks the intrinsic capability to discern the truth, accuracy or grounding in reality of the information it generates.

Fluency without actuality

The primary function of GenAI is not to establish factual accuracy but to produce fluent, coherent language. This design philosophy means that GenAI applications excel at mimicking human discourse but do so without an inherent understanding of the veracity of content. The outputs are the result of probability algorithms predicting the next most likely word, not an assessment of truthfulness or factual correctness.

For business leaders, this distinction is critical. Decisions made in boardrooms, strategic planning sessions and operational reviews require more than fluent narratives; they demand insights grounded in reality, supported by empirical evidence and critical analysis. Relying on GenAI without this understanding risks mistaking persuasive but baseless language (sometimes bluntly referred to as "bullshit") for substantive knowledge.

The risk of misplaced learning

One of the subtle yet profound risks of extensive GenAI use is the illusion of learning. When managers and decision-makers frequently engage with GenAI-generated content, they may believe they are gaining insights into real-world phenomena. In reality, they are often engaging with discourse about reality, not reality itself.

This discourse can indeed contain factual information, but it is also interwoven with misperceptions, biases and even propaganda, especially considering the diverse and unfiltered nature of the training data. The danger lies in the potential for these distorted representations to influence business thinking, leading to decisions based on skewed or incomplete understandings of the actual business environment.

Real-world example: In 2023, CNET, a prominent technology news website, faced significant backlash after it was revealed that numerous articles had been generated using AI tools. These AI-written articles contained serious errors and instances of plagiarism, leading to a loss of credibility and trust among readers. The incident highlighted the risks of over-reliance on AI for content creation without adequate human oversight, underscoring the importance of critical evaluation in maintaining journalistic integrity.

Erosion of critical thinking skills

Another significant concern is the potential erosion of essential cognitive abilities among GenAI users. Critical thinking—the ability to surface and question assumptions, rigorously examine evidence, and resist misinformation and manipulation—is foundational to sound business leadership. Over-reliance on GenAI can dull these skills, as the convenience of readily generated content may discourage the deep analytical processes necessary for robust decision-making.

When leaders accept GenAI outputs uncritically, they risk becoming passive recipients of information rather than active interrogators of knowledge. This shift can undermine the intellectual rigor required to navigate complex business challenges, foster innovation and sustain competitive advantage.

Real-world example: In 2019, the CEO of a UK-based energy firm was defrauded of 220,000 euros after scammers used AI-based voice technology to impersonate the executive's superior. The AI-generated voice convincingly demanded an urgent transfer of funds, which the CEO authorized without sufficient verification. This incident underscores the importance of maintaining critical thinking and verification processes, even when instructions appear to come from credible sources.

The path forward: Enhancing, not replacing, critical thinking

Given these risks, how should business and management professionals approach the use of GenAI? The answer lies in a balanced and informed strategy that integrates GenAI as a tool for efficiency without compromising the commitment to critical thinking and empirical validation.

1. Prioritize scientific understanding: Business decisions should be anchored in a scientific understanding of organizational realities. This involves grounding strategies in data-driven analysis, empirical research and validated methodologies rather than solely relying on AI-generated narratives.

2. Use GenAI as a starting point: Consider GenAI outputs as preliminary drafts or thought starters, not definitive answers. Use them to generate ideas, structure reports, or explore different perspectives, but always subject the content to rigorous scrutiny and validation.

3. Cultivate critical thinking: Organizations should invest in training programs that enhance critical thinking skills among managers and leaders. Encourage practices that question assumptions, evaluate evidence critically and consider alternative viewpoints.

4. Cross-verify information: Always cross-reference GenAI-generated content with credible, authoritative sources. Triangulate information from multiple data points to ensure accuracy and reliability.

5. Ethical and responsible use: Develop clear guidelines for the ethical and responsible use of GenAI within the organization. This includes transparency about when and how AI-generated content is used in decision-making processes.

Ignoring the limitations of GenAI can have far-reaching consequences for businesses. Decisions based on inaccurate or misleading information can lead to strategic missteps, financial losses, unfair decisions and reputational harm. It is essential for business managers to fully understand the potential and limitations of GenAI for business uses.

(This article reflects the personal opinion of the author and not the official stand of the Management Association of the Philippines or MAP. The author is chair of the MAP Shared Prosperity Committee. He is full professor of Business Ethics and Governance at De La Salle University and chair of the Responsible AI Council of the Analytics and AI Association of the Philippines. Feedback at map@map.org.ph.)


© The Philippine Daily Inquirer, Inc.
All Rights Reserved.
