When someone mentions artificial intelligence today, the conversation almost inevitably turns in one of two directions: enthusiasm for its capabilities or concern about its consequences. Yet it rarely pauses on a question that is perhaps more important than both: how is this technology changing our concept of knowledge? What does it mean to know something in a world where a machine returns a comprehensible, confident and, with the best models, almost always correct answer to every question?
To even approach this question, we must abandon one of the most widespread oversimplifications. The claim that generative artificial intelligence “merely predicts the next word” is technically accurate but misleading, in the same way it would be misleading to say the brain merely transmits electrical impulses: true, yet telling us nothing about thinking, memory or consciousness.
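To see what “predicting the next word” means mechanically, consider a minimal sketch. The vocabulary and the scores below are invented for illustration; a real model scores tens of thousands of tokens using billions of learned parameters:

```python
import numpy as np

# Invented vocabulary and logit scores for the context
# "The capital of France is". All values here are illustrative.
vocab  = ["Paris", "London", "beautiful", "a", "the"]
logits = np.array([7.1, 2.3, 3.0, 1.2, 0.8])

# Softmax turns raw scores into a probability distribution
# over possible next tokens.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{token:10s} {p:.3f}")

# Greedy decoding picks the most probable token; sampling draws
# from the distribution instead, which is why answers can vary.
print("next token:", vocab[int(np.argmax(probs))])
```

The entire art, of course, lies in how those scores are computed from the context, and that is where the description “merely predicts” stops being informative.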
During training, a language model analyzes vast amounts of text and compresses them into a substantially smaller mathematical structure. The training material encompasses a large portion of humanity’s digitized knowledge. Although this is by no means all the knowledge humanity possesses, since immense parts of it remain embodied in unwritten practices and experiences, it is the most extensive collection of recorded knowledge ever gathered in one place.
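The scale of this compression can be felt with a back-of-envelope calculation. Every figure below is an illustrative round number, not the specification of any real system:

```python
# Back-of-envelope comparison of corpus size vs. model size.
# All numbers are assumed round figures for illustration only.
corpus_tokens   = 10e12   # assume ~10 trillion training tokens
bytes_per_token = 4       # rough average for text
params          = 100e9   # assume ~100 billion parameters
bytes_per_param = 2       # 16-bit weights

corpus_tb  = corpus_tokens * bytes_per_token / 1e12
weights_tb = params * bytes_per_param / 1e12

print(f"training text ≈ {corpus_tb:.0f} TB")      # ≈ 40 TB
print(f"model weights ≈ {weights_tb:.1f} TB")     # ≈ 0.2 TB
print(f"ratio ≈ {corpus_tb / weights_tb:.0f}:1")  # ≈ 200:1, and lossy
```

Even with generous error bars, the weights are orders of magnitude smaller than the text they were trained on, which is why training must distill patterns rather than store passages.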
To master this material, the model must extract its essence: repeatable patterns, semantic relationships and hidden connections between concepts. The result is a geometry of meanings in which ideas are arranged according to their mutual relationships: “king” and “queen” sit close together in this space because the model itself discovered how closely the two concepts are related. What emerges from training is a kind of conceptual map of the relationships between ideas.
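A minimal sketch of this geometry, using hand-made four-dimensional vectors. Real models learn hundreds or thousands of opaque dimensions from data; the words, values and dimension labels here are chosen purely to make the idea visible:

```python
import numpy as np

# Toy vectors over four interpretable dimensions:
# [royalty, gender, person-ness, fruit-ness].
emb = {
    "king":  np.array([0.9,  0.3, 1.0, 0.0]),
    "queen": np.array([0.9, -0.3, 1.0, 0.0]),
    "man":   np.array([0.1,  0.3, 1.0, 0.0]),
    "woman": np.array([0.1, -0.3, 1.0, 0.0]),
    "apple": np.array([0.0,  0.0, 0.0, 1.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))  # ~0.91: close in meaning space
print(cosine(emb["king"], emb["apple"]))  # 0.0: unrelated concepts

# The classic analogy: king - man + woman lands nearest to queen.
target  = emb["king"] - emb["man"] + emb["woman"]
nearest = max((w for w in emb if w != "king"),
              key=lambda w: cosine(target, emb[w]))
print(nearest)  # -> "queen"
```

Cosine similarity is the standard way to measure closeness in such spaces; the point is simply that “meaning” here is literally position and direction in a vector space.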
This is best understood through a metaphor. The Enlightenment encyclopedia was the first great attempt to organize, connect and make human knowledge available to everyone. A large language model goes a step further: it not only gathers and organizes knowledge, but compresses it into a mathematical space that can adapt to each individual interlocutor. It is a kind of active encyclopedia that responds, adapts and creates.
But this metaphor hides something even more surprising. The conceptual map that emerges during training bears a striking resemblance to one of the oldest philosophical theories. Almost two thousand five hundred years ago, Plato conceived that behind all individual phenomena lie abstract, immutable forms, which he called Ideas. The mathematical space of a large language model is a modern version of this concept: it stores the relationships between meanings, not individual texts. The machine has extracted these abstract forms from our writings.
This is where the crucial twist occurs. Plato’s world of Ideas was normative. It served as the standard by which we distinguished truth from the shadows in the cave, to use the famous allegory. The mathematical space of the language model, however, is descriptive. It reflects what is consistently present in the texts on which it was trained. This is not necessarily the same as what is true. The model does not distinguish between what is frequently repeated and what is true. And crucially: there is no arbiter within the system itself. That role belongs to the user. It is the user who must distinguish the Ideas from the shadows. The model provides an answer, and the user must judge whether it holds true.
We often hear the objection that models “do not understand” and cannot explain their answers. This is becoming less true, as advanced models provide coherent explanations, including arguments and the limitations of their answers. The problem, therefore, is no longer the quality of the explanation, but the type of knowledge that underpins it. When an expert explains why something is true, the explanation is supported not only by individual experience, but by the entire social infrastructure of knowledge: processes of verification, doubt and professional consensus, and above all personal responsibility for what is said. An expert stands behind their claim with integrity and reputation. A model, on the other hand, explains in isolation, reconstructing the most probable explanation from mathematical patterns of the past. An explanation in itself, therefore, is not yet knowledge. Knowledge is created only when someone takes responsibility for the explanation.
Perhaps it is time to start distinguishing between different ways of knowing. An expert knows because they understand and participate in the living process of science, which constantly asks what might tomorrow refute our current theories. A model “knows” because it has extracted patterns from a massive amount of material. Although modern systems already discover new solutions in mathematics and biology that surpass the human capacity for synthesis, they still lack the capacity for theoretical thinking and strategic engagement with the world.
The danger, therefore, does not lie in wrong answers, although these occur too. The danger is that we begin to accept seemingly well-founded answers and, in doing so, stop questioning their origin. The model offers us the statistical consensus of the past, while science demands the deliberative consensus of the present, created through discussion, verification and doubt. Generative artificial intelligence does not take our questions away from us, but its comfortable, highly persuasive certainty can make us lose the habit of asking them.
We have created a highly useful mathematical tool that comes closer to Plato’s world of Ideas than anything in human history so far. It is the best encyclopedia the world has ever seen, and at the same time the most convincing substitute for thinking imaginable. Both are true. But if we stop asking questions and merely consume the machine’s excellent answers, we return to Plato’s cave voluntarily. With a single difference: the shadows on the wall are now in high resolution and look exceptionally convincing.