Why OpenAI Co-Founder Believes AGI Won’t Arrive in the Next Decade

Even as OpenAI invests billions in compute to accelerate the path to AGI, its co-founder Andrej Karpathy, who has since left the company, believes true AGI remains a decade away. On a recent podcast with Dwarkesh Patel, he said that while AI agents like Claude and Codex are impressive, achieving artificial general intelligence (AGI) still feels distant.

Karpathy pointed to persistent challenges, including continual learning and cognitive constraints, that still need to be tackled. “In my mind, this is more accurately described as the decade of agents,” he said.

He envisions AI agents growing increasingly capable over the next decade, eventually working alongside humans much like interns or employees. For now, though, they remain limited.

AI researcher Gary Marcus, a long-time critic of the current AI trajectory, told AIM that he has argued for years that AGI is unlikely to arrive this decade. He said that while achieving AGI within 10 years is “possible, but not certain,” large language models (LLMs) alone won’t be enough.

At the same time, Karpathy is blunt about today’s AI agents, calling them “slop” and accusing the industry of exaggerating their capabilities.

He is not alone in his assessment. Google DeepMind chief Demis Hassabis recently dismissed claims that today’s AI systems possess PhD-level intelligence, calling the label “nonsense” and arguing that current models lack the consistency and reasoning needed for true general intelligence.

While Karpathy and Hassabis remain cautious about the timeline for AGI, OpenAI CEO Sam Altman is far more bullish. In his Reflections blog post published earlier this year, Altman wrote that “we are now confident we know how to build AGI as we have traditionally understood it,” adding that 2025 could see the first AI agents “join the workforce” and reshape company output.

Meanwhile, xAI CEO Elon Musk estimated that the upcoming Grok 5 LLM has about a 10% chance of achieving AGI in the near future.

Bottleneck for AGI

A major bottleneck for AGI, according to Karpathy, is continual learning, which would enable AI to retain and build upon knowledge over time. “These models don’t really have a distillation phase of taking what happened, analysing it obsessively, thinking through it… We’re missing it,” he said.
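To make the idea concrete, here is a toy Python sketch of what such a distillation phase could look like in principle. It is purely illustrative, my own construction rather than Karpathy’s design or any real framework: the agent reviews each episode and folds the lesson back into persistent state, which is the step he says today’s models skip.

```python
# Toy illustration of a "distillation phase" (a sketch, not Karpathy's design).
# The agent reviews what happened in an episode and commits the lesson to
# persistent state, instead of forgetting it once the context window ends.

class ToyAgent:
    def __init__(self):
        self.knowledge = {}  # stand-in for weights that persist across episodes

    def act(self, question):
        # Answer from distilled knowledge if available, otherwise admit defeat.
        return self.knowledge.get(question, "unknown")

    def distill(self, episode):
        # The missing step: analyse the episode and retain what was learned.
        question, correct_answer = episode
        self.knowledge[question] = correct_answer

agent = ToyAgent()
print(agent.act("capital of France"))          # -> unknown
agent.distill(("capital of France", "Paris"))  # review the episode, keep the lesson
print(agent.act("capital of France"))          # -> Paris
```

A real system would have to perform this consolidation over billions of weights rather than a dictionary, which is why the problem remains open.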

He also compared neural networks to human cognitive structures, saying, “We’ve stumbled by with the transformer neural network, which is extremely powerful, very general… That to me almost indicates that this is some piece of cortical tissue… But I still think there are many brain parts and nuclei that are not explored.”

Moreover, Karpathy differentiated his approach from Richard Sutton’s animal-inspired vision of AGI. “Brains came from a very different process, and I’m very hesitant to take inspiration from it because we’re not actually running that process,” he said.

He explained that humans and animals can learn from raw sensory input without relying on pre-labelled data, while AI systems still depend heavily on large-scale pre-training and imitation learning. Karpathy pointed out that animals are born with built-in hardware and instincts, like a zebra running within minutes of birth, abilities shaped by evolution rather than reinforcement learning.

Taking this further, Marcus said that challenges such as generalisation, abstraction, stable memory, reasoning, causal understanding, reliable world models, and continual learning still remain unresolved. “We still have a long way to go,” he said.

He emphasised the need for advances in integrating symbolic reasoning and building richer world models. Progress, he added, will depend on “serious research efforts beyond LLMs,” while excessive focus and funding on scaling existing LLM infrastructure could actually slow development.

Adding another perspective on AI’s challenges, Fei-Fei Li, in a recent interview with a16z, pointed out a key limitation of LLMs, saying they struggle with spatial reasoning. “Language is fundamentally a purely generated signal,” she said, adding that “you don’t go out in nature and there’s words written in the sky for you.”

She added that the three-dimensional world follows physical laws, and that extracting, representing, and generating that information is a fundamentally different problem.

Reinforcement Learning Is Worse Than People Think

Karpathy also discussed the limitations of reinforcement learning (RL), emphasising that it is often overrated. He explained that while RL can produce correct results, it overweights the steps that coincidentally led to success, even if many of those steps were unnecessary or wrong.

Reinforcement learning is a lot worse than the average person thinks, he quipped.

He added that RL injects noise and high variance into learning, making it less efficient and more error-prone than commonly perceived. Comparing it with imitation learning, he said that even though imitation has its own flaws, RL isn’t as magical as people think.
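A minimal numerical sketch shows the effect he is describing. The setup below is a toy of my own, assuming a three-step episode, a single end-of-episode reward, and a REINFORCE-style update that hands every step the same credit:

```python
import random

random.seed(0)

# Toy REINFORCE-style credit assignment (an illustration, not Karpathy's example).
# Three steps per episode; only the last step determines success, but the single
# end-of-episode reward upweights every step that was taken, detours included.

prefs = {"good_move": 0.0, "pointless_move": 0.0}

for _ in range(1000):
    steps = [random.choice(list(prefs)) for _ in range(3)]
    reward = 1.0 if steps[-1] == "good_move" else 0.0  # outcome-only signal
    for step in steps:
        prefs[step] += 0.01 * reward  # identical credit for every step

print(prefs)  # pointless_move also accumulates credit from lucky episodes
```

Run it and “pointless_move” accumulates roughly half the credit of “good_move”, purely because it happened to appear in successful episodes. Averaged over enough trials the signal still points the right way, which is why RL works at all, but the per-step noise is exactly the high variance Karpathy refers to.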

The Road Ahead

Despite the challenges, Karpathy remains optimistic about where AI agents are headed. He believes that while the algorithms will evolve over the next decade, the core recipe of a giant neural network trained with gradient descent is likely to persist.
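For readers unfamiliar with the term, gradient descent itself is a very small idea. The sketch below is a deliberately tiny version with one parameter and a quadratic loss, nothing like the scale Karpathy is describing, but the same update rule:

```python
# Minimal gradient descent on a one-parameter toy loss, (w - 3)^2.
# Large models apply this same update rule to billions of parameters.
w = 5.0    # initial parameter
lr = 0.1   # learning rate (step size)
for _ in range(50):
    grad = 2 * (w - 3.0)  # derivative of the loss with respect to w
    w -= lr * grad        # step downhill along the gradient
print(round(w, 4))        # converges to 3.0
```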

Since today’s early versions are already impressive and will only get better, Karpathy expects AI agents to become more capable and fit smoothly into people’s daily work.
