The AI Paradox: From Scale to Optimization

The article analyzes the current AI race, arguing that big tech companies are investing in scaling while real progress is being achieved through optimization. The author questions whether this path leads to true AGI or whether it is merely a bubble built on an illusion.

The great technological promise of this decade is AGI: a general artificial intelligence capable of understanding the world, reasoning, and acting with human-like flexibility. But that promise rests on a confusion. A Large Language Model (LLM) is not intelligence in the strong sense; it is a statistical system trained to map inputs to outputs with extraordinary effectiveness. An LLM can write, summarize, translate, and program. It can even simulate reasoning. But simulating it is not possessing it. Even if OpenAI talks about 'reasoning' in GPT-5.4 and explains that these models use more internal computation before responding, that does not alter their fundamental nature: it is still a glorified mathematical function. What is sold as reasoning may be useful, but it is not evidence of real understanding or general intelligence.

Yann LeCun, head of AI at Meta, a Turing Award winner, and one of the most influential figures in the field, has argued that LLMs are not enough to achieve human-level reasoning and autonomy. His position is clear: these models do not understand the world as a truly general system would, and different architectures are needed, closer to 'world models' than simple text predictors. This does not prove that each tech giant knows it is on the wrong path. But it does demonstrate something unsettling: even within the scientific elite, there is serious doubt as to whether the dominant path leads to AGI or only to the refinement of useful, impressive, and marketable systems. The race, however, remains concentrated in the same direction. Big tech continues to invest colossal sums in infrastructure, data centers, and chips, as if scaling were the inevitable path to a general mind.

The paradox becomes clear when looking at efficiency. A Stanford study shows that frontier models continue to improve, but also that the field is becoming more convergent: the entire industry is refining the same lane. That same study reports that the cost of inference has plummeted, and much smaller models are reaching thresholds that previously required massive systems. The dominant trajectory does not show the birth of AGI; it shows LLMs that are increasingly compact, cheap, and efficient. Progress exists, but it increasingly looks like incremental optimization rather than a conceptual breakthrough capable of producing a general mind. That nuance matters because what is today presented as a step toward AGI is actually the refinement of a paradigm.

And there is the financial heart of the bubble. It is not that data centers are going to become useless. However, a growing portion of that investment risks being oversized if real progress continues to come from compression and optimization, not brute force. The issue is that the magnitude of the bet is justified as if it were financing the birth of AGI, when the evidence points to the industrial refinement of the same paradigm.

And that gap is fueled by a distorted logic: big investments are directed at LLMs because they produce the visible results that the public and investors want to see, even if that does not imply real progress toward AGI. Not because it has been proven to be the correct path, but because it is the most visible one and best supports the competitive narrative among tech giants. What the market wants to see does not always coincide with what needs to be discovered.

That is why the bubble will eventually burst. Not because AI is fake or useless, but because it is being valued as if we were building a mind, when in reality we are refining a mathematical function. When that difference becomes impossible to hide, it will be exposed that much of the enthusiasm rested on an illusion: confusing commercial performance with scientific progress.

The author is a data analyst.