Meta’s chief AI scientist, Yann LeCun, said he is no longer interested in large language models (LLMs), calling them a product-driven technology that is reaching its limits.
“I’m not so interested in LLMs anymore,” LeCun said during a recent talk at NVIDIA GTC 2025. He added that today, LLMs are primarily handled by product teams that are making small improvements by adding more data, increasing compute, and using synthetic data.
LeCun explained that his focus has shifted to four areas he considers more fundamental for machine intelligence: understanding the physical world, persistent memory, reasoning, and planning.
“There’s some effort, of course, to get LLMs to reason, but in my view, it’s a very simplistic way of viewing reasoning,” he said. “I think there are probably better ways of doing this.”
LeCun expressed interest in what he called “world models”: systems that form internal representations of the physical environment to enable reasoning and prediction. “We all have world models in our minds. That is what allows us to manipulate thoughts, essentially,” he said.
He criticised the current reliance on token prediction, which underpins how LLMs operate. “Tokens are discrete… When you train a system to predict tokens, you can never train it to predict the exact token that will follow,” LeCun said. He argued that this approach is insufficient for understanding high-dimensional and continuous data like video.
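The contrast LeCun is drawing can be made concrete with a toy sketch (the shapes and variables below are illustrative assumptions, not anything from his talk): a next-token predictor chooses from a finite vocabulary via a categorical distribution, whereas a video frame is a point in a high-dimensional continuous space with no finite set of outcomes to enumerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete next-token prediction: the model scores every token in a finite
# vocabulary, so the "next token" is one of finitely many choices.
vocab_size = 8
logits = rng.normal(size=vocab_size)           # stand-in for a model's output
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
next_token = int(np.argmax(probs))             # pick the most likely token

# Continuous data: a toy 64x64 RGB frame has 64*64*3 real-valued entries,
# so there is no finite outcome set to put a categorical distribution over.
frame = rng.uniform(size=(64, 64, 3))

print(next_token, frame.shape)
```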
“Every attempt at trying to get a system to understand the world or build mental models of the world by being trained to predict videos at a pixel level has failed,” he said. Instead, he pointed to ‘joint embedding predictive architectures’ as a more promising approach.
These architectures, according to LeCun, make predictions in abstract representation space rather than raw input space. He described a method where a system observes the current state of the world, imagines an action, and then predicts the next state, a core component of planning.
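A minimal sketch of that idea, with random linear maps standing in for learned networks (all dimensions and names here are hypothetical, not taken from any published JEPA implementation): an encoder maps an observation to an abstract representation, a predictor maps (representation, action) to the predicted next representation, and the prediction error is measured in representation space rather than pixel space.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, rep_dim = 100, 4, 16

# Toy "learned" networks: random linear maps for illustration only.
W_enc = rng.normal(size=(rep_dim, obs_dim)) * 0.1             # encoder
W_pred = rng.normal(size=(rep_dim, rep_dim + act_dim)) * 0.1  # predictor

def encode(obs):
    # Map a raw observation into the abstract representation space.
    return W_enc @ obs

def predict_next(rep, action):
    # Predict the *representation* of the next state, not raw pixels.
    return W_pred @ np.concatenate([rep, action])

obs_t = rng.normal(size=obs_dim)    # current world state (e.g. a frame)
action = rng.normal(size=act_dim)   # an imagined action
obs_t1 = rng.normal(size=obs_dim)   # the state that actually follows

pred = predict_next(encode(obs_t), action)
target = encode(obs_t1)
loss = float(np.mean((pred - target) ** 2))  # error in abstract space
print(pred.shape, loss)
```

The key design choice this illustrates is where the loss lives: comparing `pred` to `encode(obs_t1)` instead of reconstructing `obs_t1` itself lets the system ignore unpredictable pixel-level detail.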
“We don’t do [reasoning and planning] in token space,” he said. “That’s the real way we all do planning and reasoning.”
He also criticised current agentic AI systems that rely on generating many token sequences and selecting the best one. “It’s kind of like writing a program without knowing how to write it,” he said. “You write a random program and then test them all… It’s completely hopeless.”
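The generate-and-select pattern he is describing is essentially best-of-N sampling, which this toy sketch illustrates (the vocabulary, target, and scorer are invented for the example): draw many random candidate sequences, score each, and keep the best. The weakness LeCun points to is visible in the numbers, since the candidate space grows as vocabulary size to the power of sequence length.

```python
import random

random.seed(0)
vocab = ["a", "b", "c", "d"]
target = ["a", "b", "c"]  # hypothetical "correct" answer the sampler can't see
seq_len = 3

def score(seq):
    # Toy scorer: number of positions that match the target sequence.
    return sum(s == t for s, t in zip(seq, target))

# Best-of-N: sample N random sequences, keep the highest-scoring one.
n_samples = 64
candidates = [[random.choice(vocab) for _ in range(seq_len)]
              for _ in range(n_samples)]
best = max(candidates, key=score)

# The search space is |vocab| ** seq_len; it explodes for realistic lengths.
print(best, score(best), len(vocab) ** seq_len)
```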
Responding to growing claims about the imminent arrival of artificial general intelligence (AGI), or what some call advanced machine intelligence (AMI), LeCun remained sceptical.
The post I’m Not So Interested in LLMs Anymore, Says Yann LeCun appeared first on Analytics India Magazine.