Is It Too Soon to Talk About AGI and Regulation?

After the AI boom, the world is rushing to build AGI (artificial general intelligence). As with any race, everyone involved wants to get there first and gain an advantage over the rest. However, each player has a different approach to building it. Some call LLMs the path to AGI, while others argue that language models won't lead to AGI at all.

AGI's potential implications are both fascinating and complex. Sray Agarwal, principal consultant for responsible AI at Fractal, told AIM that it is not easy to think about regulating AGI because we are still grappling with regulating AI itself, and the exact path towards AGI remains unknown.

According to Agarwal, the past year has been transformative for the public’s understanding of AI, thanks to advancements like GPT and Gemini. “A few years ago, we had to rely on search engines or read lengthy white papers to extract information. Now, tools like ChatGPT allow us to summarise, create, edit, and verify any content with ease,” Agarwal said.

Beyond text-based tools, the ecosystem of generative AI has expanded to include text-to-anything models like Adobe Firefly, Midjourney, Stable Diffusion and, most recently, OpenAI's Sora. This has enabled users to generate images, presentations, and even audio and video content.

“We’ve seen a shift with increasingly versatile AI tools allowing users to create and manage multiple types of content,” he noted. This evolution is setting the stage for AGI, where the capabilities of AI systems extend beyond passive responses to executing tasks autonomously.

What Even is AGI?

Agarwal explained that AGI is not just an extension of AI but a leap forward. “AGI is designed to mimic the cognitive abilities of the human brain, enabling systems to learn, understand, and execute tasks as effectively as humans. It’s about bridging the gap between human and machine intelligence,” he said.

While AGI remains hypothetical and has not yet been fully realised, glimpses of its potential can be seen in agent-based LLMs that perform tasks autonomously, such as reading emails, summarising them, and even drafting replies.

The transition from current AI systems to AGI is marked by a shift from noun-based operations (information processing) to verb-based actions (task execution). “Today’s agentic models are the first step toward AGI. These systems are capable of not only processing information but also executing actions with minimal human intervention,” Agarwal remarked.
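To make that noun-to-verb shift concrete, here is a minimal, hypothetical sketch of an agent step in Python. The `summarise` and `draft_reply` stubs stand in for LLM calls; they are illustrative assumptions only, not Fractal's or any vendor's actual system.

```python
# A minimal, hypothetical sketch of an agentic step: the system does not just
# return information (noun), it decides on and carries out an action (verb).
from dataclasses import dataclass


@dataclass
class Email:
    sender: str
    subject: str
    body: str


def summarise(email: Email) -> str:
    # Stand-in for an LLM call that condenses the email into one line.
    return f"{email.sender} asks about: {email.subject}"


def draft_reply(email: Email) -> str:
    # Stand-in for an LLM call that drafts a response for human review.
    return f"Hi {email.sender},\n\nThanks for your note on '{email.subject}'."


def agent_step(inbox: list[Email]) -> list[dict]:
    """Process each email autonomously, but leave the final send to a human."""
    actions = []
    for email in inbox:
        actions.append({
            "summary": summarise(email),
            "proposed_reply": draft_reply(email),
            "needs_human_approval": True,  # keep a human in the loop
        })
    return actions


if __name__ == "__main__":
    inbox = [Email("Priya", "Q3 budget review", "Can we meet on Thursday?")]
    for action in agent_step(inbox):
        print(action["summary"])
        print(action["proposed_reply"])
```

The human-approval flag reflects the "minimal human intervention" framing above: the agent proposes and executes routine steps, while consequential actions still pass through a person.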

The Controversy Around LLMs and AGI Development

The role of LLMs as a pathway to AGI has been a topic of debate. Agarwal highlighted the progression from traditional machine learning to deep learning and finally to LLMs. “What sets LLMs apart is their accessibility and user-friendliness. They’ve democratised AI by enabling people to interact with these systems using natural language,” he said.

However, the question remains whether LLMs are sufficient for achieving AGI. Agarwal believes that while LLMs and agent-based generative AI models are a significant step towards it, AGI will require contributions from a broader range of AI methodologies.

“Any user-centric AI system lays the groundwork for AGI. With the increasing number of researchers and developers working on AGI, we will likely see innovative approaches leveraging LLMs and beyond,” he added.

AGI and the Path to Singularity

Agarwal touched upon the concept of singularity, where machine intelligence equals or surpasses human intelligence. He described AGI as either the stepping stone or the initial stage of singularity. “We may not know whether singularity is achievable, but AGI can be seen as a precursor to it,” he said.

Yet, the leap from AGI to artificial superintelligence (ASI), where AI systems surpass human intelligence by a wide margin, raises ethical and existential questions. Agarwal, however, expressed scepticism over the plausibility of ASI. “Humans are at the top of the food chain because we’ve evolved to dominate. If there’s a new form of intelligence that threatens this hierarchy, humans would likely intervene to prevent it from surpassing us,” he explained.

The Role of AGI in Healthcare and Problem-Solving

Agarwal also explored the practical applications of AGI, particularly in fields like healthcare. He cited the example of Fractal’s Vaidya AI model, which helps with medical concerns.

“Tools like Vaidya act as a first-aid mechanism, offering preliminary advice and recommendations,” he said. For instance, when Agarwal had an ear infection, Vaidya suggested remedies and medications but cautioned him to consult a doctor for serious concerns.

While he acknowledged the utility of AI in healthcare, Agarwal emphasised that AGI’s focus should extend to complex problems like drug discovery and disease management.

Regulating AGI – The Challenges Ahead

As AGI inches closer to reality, regulatory frameworks will play a crucial role in ensuring its ethical deployment. Agarwal warned that without proper regulations, AGI could pose risks to privacy, security, and societal structures.

“The development of AGI must be accompanied by robust governance models to prevent misuse and ensure alignment with human values,” he said.

India, as a growing hub of AI innovation, needs a balanced approach to AI regulation. While learning from global frameworks like the EU AI Act or the NIST guidelines, India should prioritise adaptability to its unique socio-economic context.

“Rather than all-encompassing rigid regulations, a flexible execution framework tailored to specific industries and use cases could be more effective,” Agarwal said. Such a framework would allow innovation to flourish while ensuring that foundational principles of fairness, accountability, and human-centricity are upheld.

Furthermore, India should focus on creating an ecosystem of proactive governance, where startups and enterprises incorporate ethical practices by default rather than waiting for laws to mandate them.

“It’s not like they [AI startups] will reduce the speed,” he said, drawing an analogy to putting the horse before the cart when it comes to regulation. “But if it is a car, you can put in a Ferrari engine.”

AGI and Regulation is Not a Premature Conversation

The discussion on AGI and regulation may feel premature to some, but laying the groundwork for governance and ethical considerations now will ensure that future advancements are guided by foresight.

Citing the Character AI incident, in which a teenager died by suicide after speaking to an LLM chatbot, he said it is essential for regulators and innovators to collaborate closely from the outset of such transformative projects.

“This partnership will not only establish trust but also preemptively address potential pitfalls, ensuring AGI serves humanity positively.”

As AI evolves, it’s clear that the conversation isn’t just about technology but about shaping a future where innovation and responsibility coexist. With its rich history of adapting global frameworks to local needs, India is well-positioned to lead in defining the standards for ethical and responsible AI in the years to come.

By involving all stakeholders—governments, private organisations, educators, and even the younger generation—we can create a world where technology amplifies human potential without compromising societal well-being.

Importantly, Agarwal believes that anyone who wants to work in the field of regulating AI should first try deploying an ML model themselves. “They should sit with somebody or at least know how to deploy it,” Agarwal said.
