
Do we understand how LLMs understand us?


In law school, we were trained through the Socratic method. Professors rarely gave direct answers; instead, they asked questions again and again. As a freshman, you might think it was all about memory, and you would try to recall every line from every reading. By the time you graduated, you understood: it was never about perfect recall. It was about learning how to think, how to understand.

So when ChatGPT arrived, I was naturally curious. It seemed to appear overnight and quickly became the fastest tool in history to reach 100 million users. From proofreading emails to summarizing research, large language models, or LLMs, have become embedded in our daily routines.

But beneath the convenience lies a deeper question: Do we really understand how these models understand us? The short answer is: not quite. These tools are impressive, even astonishing, but their inner workings remain opaque. In the language of AI, we call them black boxes. We know what we feed into the system, and we see what comes out. But what happens in between is hidden from view. The model does not explain its choices. We simply judge it by its output.

Yet even a basic grasp of how these systems work matters. It shapes how we use them, how we regulate them, and how we live with them. Much of the current progress began in 2017, when Google researchers introduced a new model architecture called the Transformer. Earlier models processed language in sequence, like a typewriter moving left to right. But the Transformer could look at an entire sentence at once, weighing which words mattered most. It introduced something called self-attention, a way for the model to focus on relationships between words, regardless of order.
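To make that weighting concrete, here is a toy sketch in Python with made-up numbers rather than anything a real model has learned. It only shows the basic idea of self-attention: every word is compared with every other word at once, and the comparison scores become weights that say how much each word should pay attention to the rest.

import numpy as np

# Toy illustration: each word gets a small made-up vector (real models use
# hundreds or thousands of dimensions learned from data).
words = ["the", "bank", "of", "the", "river"]
vectors = np.array([
    [0.1, 0.0, 0.2],   # the
    [0.9, 0.3, 0.1],   # bank
    [0.0, 0.1, 0.0],   # of
    [0.1, 0.0, 0.2],   # the
    [0.8, 0.4, 0.2],   # river
])

# Self-attention scores: every word is compared with every other word at once,
# so "bank" can attend strongly to "river" no matter how far apart they sit.
scores = vectors @ vectors.T

# Softmax turns each word's scores into weights that sum to 1.
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

print(np.round(weights[1], 2))  # how much "bank" attends to each word

The vectors above are invented purely for illustration; a real Transformer learns them from data and computes these weights many times over, in parallel.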

This architecture is the foundation of models like GPT, which stands for Generative Pre-trained Transformer. GPT is trained on massive datasets and generates original text based on patterns it has seen before. Because it can consider context more broadly, it feels conversational. It feels human. But it is not.
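At its core, that generation is next-word prediction repeated over and over. The sketch below is a deliberately crude stand-in: a tiny hand-written table of word probabilities in place of a trained network, used only to show the loop of predicting the next word, appending it, and repeating.

# A hypothetical, hand-written probability table standing in for a trained model:
# given the last word, how likely is each next word? A real GPT learns these
# patterns from billions of sentences and conditions on far more context.
next_word_probs = {
    "the":   {"court": 0.5, "contract": 0.3, "library": 0.2},
    "court": {"ruled": 0.7, "said": 0.3},
    "ruled": {"that": 0.9, "today": 0.1},
}

def generate(start, steps=3):
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Pick the most likely continuation (real models sample with some randomness).
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # "the court ruled that"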

Picture the model as a vast library. Each word you type becomes a vector, or a coordinate, in this library. These vectors give the model a way to understand not the definition of a word, but its position in the space of meaning. Inside this library is a tireless librarian, retrieving and linking these word-vectors to form a coherent reply. We know the librarian is working because we see the results. But how it chooses what to retrieve remains unclear.

Words are transformed into embeddings, vectors that shift in meaning depending on what came before or after. From each embedding, the model creates three new vectors: a Query, a Key, and a Value. Imagine a word sending out a request, identifying itself, and offering something in return. The librarian weighs these relationships, mathematically choosing which ones to emphasize based not on meaning, but on statistical patterns.
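Put in code, the librarian's bookkeeping looks roughly like this. It is a minimal sketch in which random numbers stand in for the matrices a real model learns; only the shape of the computation, queries scored against keys and the resulting weights applied to values, reflects the actual mechanism.

import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings for a five-word sentence (in a real model these are learned).
seq_len, d_model = 5, 8
embeddings = rng.normal(size=(seq_len, d_model))

# Learned projection matrices; here they are random stand-ins.
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

# Each word sends out a request (Query), identifies itself (Key),
# and offers something in return (Value).
Q = embeddings @ W_q
K = embeddings @ W_k
V = embeddings @ W_v

# The "librarian" scores every Query against every Key, scales the scores,
# and softmaxes them into weights that say which words to draw on.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# Each word's new representation is a weighted blend of the Values.
output = weights @ V
print(weights.shape, output.shape)  # (5, 5) (5, 8)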

LLMs do not understand language the way we do. They do not think. They do not intend. They are extraordinary machines for recognizing patterns and predicting the next most likely word. Their output can appear insightful, but that does not mean the system knows what it is saying. When we ask, "Do we understand how an LLM understands?", we are really asking whether we trust the responses of a machine that predicts language, not meaning. And whether we are comfortable outsourcing more and more of our own understanding to a system that does not actually possess any.

Socrates once said that wisdom begins in recognizing what we do not know. From that space of humility, we learn to ask better questions, especially when others stop asking them. It is tempting to embrace new technologies without question. But if we want to keep our humanity, we must continue to question. We must understand what these systems can and cannot do, and how they should be used responsibly.


This is why the field of AI explainability grows more important every day. Socrates died not knowing everything. But he lived in pursuit of understanding. Perhaps in the age of AI, more than ever, that is how we should be living too.

—————-

Leo Ernesto Thomas G. Romero is a CPA-lawyer for a publicly listed company, focusing on corporate and tax matters. He is pursuing a Master’s in Fintech under the MMU-AIM Pioneer Cohort of 2025, and was recently named a finalist for Fintech Lawyer of the Year at the 2025 ALB SE Asia Law Awards. He can be reached at lromero.msfintech2025@aim.edu or leo-ernesto.t.romero@stu.mmu.ac.uk.
