Limitations of LLMs and the Path to AGI
AI undoubtedly has massive potential and is likely to be as disruptive as the internet. However, I don't see today's models evolving towards AGI.
It seems to me that LLMs can solve well-formulated problems of increasing scope, yet they fundamentally fail at truly complex problems. And I mean fundamentally. LLMs are statistical machines that increasingly surpass human intelligence in certain domains, but they exhibit a core weakness: they lack human qualities such as experience, intuition, and emotion. They therefore miss several dimensions of complexity that are fundamental to motivation and action. I don't believe these dimensions are the only possible basis for intelligent life, but I do believe that more than the current mechanics of LLMs is necessary. Agentic systems will make a difference, but they won't enable the leap to AGI.
Consider this: the human brain has an estimated storage capacity of roughly 2.5 petabytes. Modern LLMs barely reach the terabyte range (open models, after quantization, are around 100 GiB; uncompressed closed models may be a few TB in size). For one thing, training grows dramatically more expensive as models get larger, making a 2.5-petabyte model completely unrealistic in the medium term. For another, humans and LLMs differ in that human cognition is, almost magically, a condensation of a person's entire life experience (in Kahneman's terms, System 1). An LLM, in contrast, has only a few megabytes of context in the form of input tokens; the rest of its vast knowledge consists of compressed facts and statistical correlations. This is precisely the kind of fundamental human characteristic that LLMs simply lack, and one I believe is central to intelligent life and long-term meaningful action.
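To put these magnitudes side by side, here is a minimal back-of-envelope sketch in Python. All figures are the rough estimates from the paragraph above, not measurements, and the resulting ratios are only illustrative:

```python
# Rough storage comparison: human brain vs. current LLMs.
# All numbers are coarse estimates taken from the text above.

brain_bytes        = 2.5e15        # ~2.5 PB, a commonly cited estimate
open_model_bytes   = 100 * 2**30   # ~100 GiB, quantized open-weight model
closed_model_bytes = 2e12          # ~2 TB, assumed size of a large closed model

print(f"Brain vs. quantized open model: ~{brain_bytes / open_model_bytes:,.0f}x larger")
print(f"Brain vs. large closed model:   ~{brain_bytes / closed_model_bytes:,.0f}x larger")

# Context window: even a 1M-token window at ~4 bytes of text per token
# holds only a few megabytes of "lived experience" at any moment.
context_bytes = 1_000_000 * 4
print(f"1M-token context: ~{context_bytes / 2**20:.1f} MiB")
```

Even under these generous assumptions, the gap is about four orders of magnitude for the stored weights and roughly nine orders of magnitude for the live context.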
Imagine a complex task, such as developing a utilitarian election manifesto for a country based on all available newspaper articles and statistical data, one that is compatible with the constitution and in the best interest of the population. Good luck to anyone who tries to accomplish this with hundreds of LLM agents. Humans, however, succeed at such tasks, above all because they do not lose sight of the big picture.
It’s a different story for problems that are largely self-contained. Mathematical proofs, software development, medical diagnostics, data analysis… Here, LLMs will conquer the world; here, they are revolutionary.
Human experience is incremental and cumulative. Emotions are not just reactions but drivers of behavior, learning, and decision-making. A human's life experience is an infinitely rich, continuously updated context that shapes System 1 and informs every one of our actions. LLMs currently lack this central aspect of humanity, and they have no comparably sophisticated technical substitute for it. I believe that simply daisy-chaining more and more GPUs won't fill these central gaps. And I hope that humans won't soon succeed in formalizing AGI so precisely that a savant LLM can help us find the missing pieces of true AGI.