Master Manipulator Altman Wants to be the AI Showrunner

This week, OpenAI CEO Sam Altman and two other AI experts testified before a US Senate subcommittee to discuss the regulation of AI, in a hearing titled ‘Oversight of AI: Rules for Artificial Intelligence’.

While AI regulation has been contested globally, the testimony saw consensus across both sides of the political spectrum in the Senate on establishing a new regulatory body to oversee the industry. Unlike previous congressional hearings with tech executives like Mark Zuckerberg, Altman received a positive response from lawmakers, who shared his concerns about the potential risks of AI.

The testimony marked Altman’s christening as the foremost figure in the field of AI, a position he himself stressed during the hearing, and the Senators seemed to have bought it.

Altman advocated for new AI laws and regulations to address the potential unintended consequences of powerful AI models. He proposed the establishment of a regulatory body that would grant licences to organisations for the distribution, and potentially the creation, of large models. The proposed stand-alone agency could, in theory, also revoke those licences from companies deemed to have behaved badly.

However, Altman advocated for the preservation of open-source initiatives and cautioned against stifling smaller startups. He proposed that licensing should only become mandatory once a model exceeds a specific size.

He offered to help, saying he could provide the Senate with an extensive list of crucial elements, including specific criteria that a model must meet before deployment. If adopted as is, however, this would choke any chance of trial and error in the open-source community, or for models from competitors, of which OpenAI has plenty.

Others in the community also believe that OpenAI recognises the potential of open-source projects surpassing its own if left unrestricted. At the same time, OpenAI is reportedly planning to release an open-source AI model.

The question that arises is: why is OpenAI considering a return to the “open” approach? One could argue that it is for the optics: the company does not want to be judged as a behemoth, but rather to be perceived as a small firm leading this wave – one that must be consulted before any moves are made.

Chess, not checkers

Nevertheless, Sam Altman’s actions during the Senate hearing were perceived by some as manipulative and self-serving. Critics believe he prioritised OpenAI’s interests over those of the AI community, positioning the company as the central player in the Senate’s discussions. Altman’s performance saw him praised and perceived as influential, while others, such as Gary Marcus, came across as overshadowed or less effective. Many argued that this is an example of established players attempting to control a technology through legal means and gaining influence over regulations. The smoothness and strategic nature of Altman’s approach have been compared to the tactics of a cunning Bond villain.

At times during the testimony, Altman called his own innovation into question to ensure he landed on the right side. He dabbled in the good and the bad just enough.

He said that OpenAI expects significant economic impacts from AI, including increased productivity as well as job creation, transformation, and displacement. The company wants to collaborate with the US government and is investing in research to mitigate future economic disruptions caused by AI, without directly shaping policies.

Two birds, one stone

At one time, it seemed like the world was against OpenAI, with linguists like Noam Chomsky writing it off completely, while others thought it was too problematic and could possibly bring armageddon because the technology was advancing at an alarming rate.

OpenAI co-founder Elon Musk rode the wave and became one of the signatories of the petition to decelerate giant AI experiments and pause the training of models more powerful than GPT-4. Gary Marcus and AI visionary Yoshua Bengio were among the other prominent signatories.

On Musk’s part, it appeared to be a simple competitive move to get ahead. The narrative also seemed to hold, as reports suggested that Musk had roped in DeepMind researcher Igor Babuschkin to work on a rival to ChatGPT.

On the other hand, countries were banning ChatGPT left, right, and centre. It was banned in Italy over privacy violations, while France, Spain, and Germany are investigating the company’s compliance with the EU’s General Data Protection Regulation (GDPR).

So, the route Altman took during the testimony seemed to be a very calculated manoeuvre to position himself as the King’s advisor – not only to neutralise competition, but also to ensure that OpenAI falls on the good side of AI regulation when it arrives in the United States, its biggest market.
