This time around, it is starting to feel real. In a recent interview with YC chief Garry Tan, OpenAI CEO Sam Altman shared his insights about AGI (artificial general intelligence), hinting that AGI may be within reach as soon as 2025.
“I think we are going to get there faster than people expect,” he said, underscoring OpenAI’s accelerated progress.
“We actually know what to do… it’ll take a while, it’ll be hard, but that’s tremendously exciting,” said Altman, reflecting on their recent advancements, signaling that internal developments have already surpassed public expectations.
OpenAI seems to have cracked AGI internally
OpenAI’s strategic focus, rooted in scaling laws and deep conviction, has been pivotal in its progress towards AGI. Altman emphasised the underestimated value of “a fairly extreme level of conviction on one bet.”
“We said from the very beginning we were going to go after AGI at a time when in the field you weren’t allowed to say that because that just seemed impossibly crazy,” recalled Altman, pushing the boundaries of research.
Further, he said OpenAI had fewer resources than DeepMind and others. “So we said okay they are going to try a lot of things and we have just got to pick one and really concentrate,” he added.
“I’ve heard people claim that Sam is just drumming up hype, but from what I’ve seen everything he’s saying matches the ~median view of OpenAI researchers on the ground,” said Noam Brown, a researcher at OpenAI.
Even OpenAI’s chief financial officer, Sarah Friar, agrees. “One of the best meetings I get to go to is our research meetings, and it would blow your mind to see what is already coming,” she said, touching upon the capabilities of o1 and upcoming GPT models.
“We have the plan in place. I think if Sam was sitting on this seat, he would tell you that it (AGI) is closer than what most people think,” added Friar in a recent interview with Bloomberg.
If you still don’t believe it, here’s a viral video OpenAI released five years ago, in which AI agents are seen playing hide-and-seek.
The paper published alongside the video explored a framework in which competition in simple settings can push AI agents to develop sophisticated skills, potentially leading to human-like or animal-like intelligence.
In the paper, agents trained through self-supervised multi-agent competition went through multiple rounds of emergent strategies and counter-strategies in hide-and-seek. The authors argue that multi-agent competition adapts well to complex environments, producing skills closer to human tasks than other reinforcement learning methods.
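The underlying mechanism, a zero-sum game in which each side improves only by countering the other’s current strategy, can be sketched in a few lines of Python. This is a toy illustration and not the paper’s actual training setup (which used large-scale reinforcement learning); the grid world, the single-parameter “policies” and the hill-climbing updates below are all hypothetical stand-ins.

```python
# Toy sketch of multi-agent competition (NOT OpenAI's hide-and-seek code):
# a hider and a seeker play a zero-sum game, and each side only keeps a
# change to its "policy" if it beats the opponent's current strategy.
import random

GRID = 5          # side length of a toy grid world
STEPS = 20        # steps per episode
EPISODES = 200    # self-play episodes used to estimate a win rate


def play_episode(hider_bias, seeker_bias):
    """One hide-and-seek episode; returns 1 if the hider survives, else 0."""
    hider = [random.randrange(GRID), random.randrange(GRID)]
    seeker = [0, 0]
    for _ in range(STEPS):
        # The hider re-hides along one axis; its bias picks which axis it favours.
        hider[random.random() < hider_bias] = random.randrange(GRID)
        # The seeker steps one cell toward the hider along its favoured axis.
        axis = random.random() < seeker_bias
        seeker[axis] += (hider[axis] > seeker[axis]) - (hider[axis] < seeker[axis])
        if seeker == hider:            # caught: seeker wins
            return 0
    return 1                           # hider survived the whole episode


def hider_win_rate(hider_bias, seeker_bias):
    return sum(play_episode(hider_bias, seeker_bias) for _ in range(EPISODES)) / EPISODES


# Alternating hill-climbing: each side perturbs its single policy parameter
# and keeps the change only if it improves against the current opponent.
hider_bias, seeker_bias = 0.5, 0.5
for generation in range(10):
    candidate = min(max(hider_bias + random.uniform(-0.2, 0.2), 0.0), 1.0)
    if hider_win_rate(candidate, seeker_bias) > hider_win_rate(hider_bias, seeker_bias):
        hider_bias = candidate         # hider adapts to the seeker's strategy
    candidate = min(max(seeker_bias + random.uniform(-0.2, 0.2), 0.0), 1.0)
    if hider_win_rate(hider_bias, candidate) < hider_win_rate(hider_bias, seeker_bias):
        seeker_bias = candidate        # seeker counter-adapts in turn
    print(f"gen {generation}: hider win rate = {hider_win_rate(hider_bias, seeker_bias):.2f}")
```

Each side’s improvement changes the problem the other side faces, which is the “autocurriculum” effect the paper scales up with deep reinforcement learning in a physics-based environment.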
So, five years ago, OpenAI had already reached the third stage (Agents). “This move from one [chatbots] to two [reasoners] took a while but I think one of the most exciting things about two is that it enables level three relatively quickly and the agentic experiences that we expect this technology to eventually enable, I think, will be quite impactful,” said Altman, hinting at an AGI, or even ASI, future.
OpenAI’s latest model, o1, marks a significant leap towards AGI. Altman has expressed newfound confidence in achieving human-level reasoning and advancing to the next phase, “level 3 = Agents,” stating: “We reached human-level reasoning and will now move on to level 3.”
In a recent AMA session on Reddit, Altman said that AGI is achievable with current hardware. A few days ago, OpenAI announced plans to launch its first in-house AI chip by 2026 in partnership with Broadcom and TSMC, a strategic move to reduce its reliance on NVIDIA’s dominant GPUs and optimise its AI infrastructure amid rising costs and supply constraints.
Last month, NVIDIA delivered its advanced Blackwell AI chips to OpenAI and Microsoft, marking a pivotal step in accelerating AGI development with faster training and superior inference performance.
AGI Drama Unfolds
Currently, OpenAI’s contract with Microsoft includes a crucial clause: if OpenAI achieves artificial general intelligence (AGI)—defined as a machine matching human brain capabilities—Microsoft loses access to OpenAI’s technologies.
Originally intended to prevent potential misuse by Microsoft, the clause is now viewed by OpenAI executives as leverage for a better deal: under the terms of the contract, the OpenAI board could decide when AGI has arrived, adding a strategic dimension to the partnership.
“Microsoft should never have agreed to such a stupid clause. What’s to stop the OAI board from calling anything they like AGI to get out of the contract?” said Gary Marcus, criticising this arrangement.
IFP co-founder Caleb Watney echoed similar sentiments, observing that “OpenAI is threatening to trigger their vaunted ‘AGI Achieved’ loophole mostly to get out of the Microsoft contract and have leverage to renegotiate compute prices.”
At the same time, it is highly unlikely that Microsoft will stop OpenAI from achieving its AGI dreams.
Déjà AGI: It still feels like yesterday when Altman shared the stage with Microsoft chief Satya Nadella at OpenAI’s first-ever developer conference, DevDay 2023, where he said:
“I think we have the best partnership in tech. I’m excited for us to build AGI together.”
Coming back to reality: OpenAI would barely have survived without Microsoft. A major chunk of its training compute today is handled by Microsoft, which provides the Azure servers used to train OpenAI’s models.
Back in 2022, when OpenAI debuted ChatGPT, which will soon complete two years, Altman credited Microsoft for its launch. “Microsoft, and particularly Azure, don’t get nearly enough credit for the stuff OpenAI launches. They do an amazing amount of work to make it happen; we are deeply grateful for the partnership. They have built by far the best AI infrastructure out there,” said Altman.
But OpenAI and Microsoft are not alone
Google is now outpacing OpenAI in the race to ship AI advancements, with rapid rollouts like Gemini 1.5, Gemma 2, NotebookLM, and the DataGemma model, which tackles hallucinations. Its most awaited release yet, Gemini 2, is described by Logan Kilpatrick as featuring “better reasoning quality and a longer context window”. Sundar Pichai has also reported a 14x increase in Gemini API usage, showing that the race to ship intelligence is more competitive than ever.
Meanwhile, Meta is accelerating its AI development with plans to release Llama 4 by early 2025, featuring advances in memory, context capabilities, and cross-modality. This aligns with its ambitious push toward Autonomous Machine Intelligence (AMI).
“Who doesn’t want to be the first on that mountain?” said NVIDIA chief Jensen Huang in a recent interview with No Priors, noting that the race to reach AGI is getting fierce, with major players such as OpenAI, Anthropic and xAI, alongside Google, Meta and Microsoft, all competing for the lead.
“The prize for reinventing intelligence altogether… it’s too consequential not to attempt it,” he added, noting that scaling laws and massive computational advances are crucial.
“We’re close to artificial general intelligence…but even if we could argue about whether it is really general intelligence, just getting close to it is going to be a miracle…Everything is going to be hard… but nothing is impossible,” concluded Huang.