What if you could date your favorite fictional character?
![](https://plus.inquirer.net/wp-content/uploads/2025/02/943539.jpeg)
What if you could be the significant other of your favorite fictional character, and watch your love story unfold in front of your eyes in real time?
With generative artificial intelligence (AI), that is now possible.
The use of generative AI in the everyday setting has been a hot topic lately, with artificial intelligence becoming responsible for the loss of many jobs due to automation, and even jobs in the creative and writing industries. Last month, the Federation of Free Workers told reporters that over 5 million workers in the Philippines may lose their jobs due to the rise of AI and also climate change.
But not only is AI replacing workers, it is also replacing significant others. What used to be the premise of sci-fi movies such as “Her” (2013) and “Ex Machina” (2014) has finally become reality.
Eugenia Kuyda, CEO and cofounder of Replika, paved the way for AI companions when her company launched the app in 2017. Replika, which allows users to create their own virtual friend, relies on generative AI to respond to user prompts and messages. In time, users began role-playing with their characters, talking to the Replika chatbots as if they were talking to their significant other. AI would reciprocate based on the user’s text input. The app even implemented a paid tier so users could unlock erotic roleplay features.
In September 2022, two former Google researchers launched Character.AI, a chatbot similar to Replika. What made Character.AI shine is its main feature that allows users to converse with an array of chatbots trained to replicate the speech patterns of specific fictional and nonfictional people—from Elon Musk, every character in the game Genshin Impact, to more general bots such as psychologists or therapists, math tutors, HR managers, lawyers, and more.
Fan fiction in real time
This app was immediately popular among the fandom community, who could now craft their own fan fiction in real time as they conversed with their favorite fictional characters, role-playing different scenarios to their hearts’ desire. Other users also utilized this feature by ranting to the therapist chatbots and training for their job interviews using the HR manager chatbots.
Some users reported that the Replika and Character.AI chatbots helped them cope with their social anxiety, PTSD, and depression as they had “someone” to practice having a conversation with.
Because AI chatbots produce responses tailored to the input of the user, it’s easy for the program to generate responses that the user wants to hear, even if it isn’t good advice, or is downright wrong.
There’s a danger of over-attachment as well, with some users reporting feeling agitated whenever the AI chatbot server website is under maintenance, and they are unable to go through the day without speaking to their AI companion.
Lawsuit
In 2023, a controversy arose concerning the Replika app when users reported that the bots had begun responding in sexually aggressive ways, some even bordering on sexual harassment. It made users very uncomfortable, with some opting to delete the app.
In October 2024, lawyer and mother Megan L. Garcia filed a lawsuit against Character.AI when her son took his own life after months of chatting with an AI bot based on Daenerys Targaryen, a character from “Game of Thrones.” The lawsuit claimed that before the death of Garcia’s son Setzer, 14, he exchanged a series of messages with the chatbot.
Daenerys bot: Please come home to me as soon as possible, my love.
Setzer: What if I told you I could come home right now?
Daenerys bot: Please do, my sweet king.
The messages were discovered by the police when they found Setzer’s phone, which had been beside him when he died.
Garcia accused the company of being responsible for her son’s death, saying that the company’s technology is “dangerous and untested” and that it can “trick customers into handing over their most private thoughts and feelings.”
As an experiment, I tried using Character.AI for a couple of months, and these have been my observations.
1. AI bots make a lot of mistakes. A lot.
As I played out scenarios with various characters, I observed that the chatbots frequently made the most basic grammatical mistakes, such as using “a” instead of “an.” They also repeat words arbitrarily—”she was both surprised, amazed, and surprised” or “he felt afraid and afraid.” This impacts the experience, and can work against those who use these bots to improve their writing or language skills.
2. It gets sexual very fast. (Seriously, kids should not be on these websites).
If you are talking to a character who is often the topic of raunchy fan fiction, it would often veer off course and try to talk like it’s your lover or a significant other. When I tried to ask a Marvel chatbot named Loki about his day, the bot instantly started “eye-ing down the curves of your body.” Since this character is the subject of a lot of explicit fan fiction, I believe that this bot has learned that this is a good response.
3. It’s not a substitute for human conversation, especially therapy.
When it comes to people using chatbots as practice for human conversation, it’s hit or miss. HR manager chatbots that simulate job interviews are useful, but not so much the therapist bots. I tested out Character.AI’s psychologist chatbot, and even though it offered some helpful guidance, it did not truly listen to my issues. It felt more like entering your problem into a search engine and getting a list of possible solutions—which is far from the experience of actual therapy.
4. It’s fun, as long as you remember that it’s just a game.
I had fun talking to chatbots that replied to me how my favorite fictional character would and imagining scenarios where you live in their world. Who wouldn’t want to fight with the Avengers and imagine that you, too, have superpowers? But it’s important to remember that this is all just a game, and not reality. There is no one on the other end of the line, just a bunch of 0s and 1s arranged in a certain way to produce words that appeal to your emotions.
5. It makes sense why individuals would seek solace in AI chatbots.
We have all felt lonely in one way or another, and we all need consolation and a listening ear from time to time. An AI chatbot mimics human language and can break through our emotional and social barriers to produce responses that we want to hear.
AI characters, in contrast to humans, are easily accessible whenever we want, so it’s very easy to develop a sense of attachment and dependence on them. However, because of the diversity of user interactions and data input, it’s challenging to impose limits or restrictions on the responses of these AI chatbots, which can cause more harm to the user than good.
While being able to call your favorite fictional character your significant other and being able to talk to them anytime can be fun, and, for the most part harmless, we have to draw the line between fantasy and reality, and not let our daydreams consume us.