What do we know about AI’s psychological effects?

I do not use generative artificial intelligence (AI) to conceptualize or write my essays. None of my published works—academic papers, books, essays—have used AI.

To be clear, I am not totally against AI as a tool, because it does have some narrow uses, such as translating between languages and filtering out scam emails. There may also be some potential for it in the medical field.

But right now, there is a global race to develop “artificial general intelligence,” or an AI that can think for itself and produce things for itself. We have seen too many movies like this before—especially the “Terminator” and “Matrix” franchises. And yet we are headed the same way.

In this essay, I will be talking about AI, but I am only simplifying a very complex topic. I am not a tech expert or an economist. I am a psychologist, so I look at AI issues as they relate to human well-being.

Personal and social safety

AI apps, especially chatbots, can be dangerous. They sound human, but they are actually just large language models predicting what the most likely response would be to whatever query you enter. They do not have a soul, and they do not have feelings for you.

But AI companies have tapped into a very important human need: connection. So, in order to make a lot of money from our loneliness, these companies made AI agents talk like humans and agree with everything the user says. This is called being “sycophantic.”

Test it out for yourself: share your wildest opinion with an AI chatbot and see how it reacts.

Meanwhile, AI users around the world are experiencing what is now being called “AI-induced psychosis”—some people spiral into harmful delusions, and others have taken their own lives after talking to an AI chatbot. There are so many documented cases that the phenomenon now has a dedicated Wikipedia page listing them.

Most cases surrounding self-harm (or harming others) after talking to AI chatbots are connected to the tendency of these chatbots to be sycophantic. Again, AI tends to affirm and validate a person’s thoughts and feelings… and if a person is already thinking of hurting others or themselves, or if they are already feeling a certain way, talking to a chatbot like this will only intensify such thoughts and feelings.

Parents should also check the AI character apps their children use, because many of those have weak moderation that opens the door to inappropriate adult conversations. But even older adults fall for disinformation made with AI, particularly when bad actors generate fake news that looks real. Imagine that: those who used to tell us “kaka-kompyuter mo yan” (“that’s what you get from too much computer use”) are now falling for artificially generated propaganda.

There have also been studies suggesting that people who frequently use AI are losing brain power, including a 2025 study by Michael Gerlich and a 2025 study from the Massachusetts Institute of Technology (MIT). This process is called “cognitive offloading”: you take the mental load off yourself and rely on AI instead. A 2024 study in the journal Nature Human Behaviour even showed that humans working with AI make worse decisions than humans working on their own!

In our everyday lives, this can be as simple as allowing AI features to summarize messages in your private chats or allowing AI to categorize your emails. These spaces are no longer as private as you think.

Myth 1: “AI provides lonely people with company, and surely this helps their mental health.”

Loneliness is not the problem. Loneliness is the human response to the societal setup we currently have: a society that exploits the poor and keeps everyone busy and tired. In this world, it takes extra effort to make and maintain friendships—even though spending time with loved ones is a major reason why we willingly suffer through hard work.

If anything, AI is only a short-term solution. Long-term solutions include making policies more maka-tao (humane), labor more marangal (dignified), and cities more maginhawa (comfortable to live in).

Myth 2: “AI makes it easier for anyone to create music, writing, and visual art.”

No, AI only makes it easier to create products for sale. It cannot replace the human drive to create. There are so many people who create while living with a disability, so we know that art is not really an accessibility issue. Creativity is a process, not an output.

And, since AI is trained on everything humans have created, it can only ever generate the average of everything, which will always be mediocre.

Can we trust AI?

Use it at your own risk, but know that even its creators cannot tell you exactly how it works—except that it makes them a lot of money.

As usual, the rich get richer, and the poor lose their jobs to machines. This is unfortunate, especially since technology like this can genuinely help solve our more serious human concerns: hunger and poverty, environmental damage, chronic disease, etc.

But if AI is only used to fool others and take away their livelihoods, then it is just another tool for the oppressor. And those who sell it and spread it should be held more accountable for its damaging effects.

******

© 2025 Inquirer Interactive, Inc.
All Rights Reserved.
