Anthropic is Now Big Tech’s Favorite Child—OpenAI, Not So Much

Not so long ago, OpenAI and Microsoft seemed unbeatable. Now, however, their competitors are catching up. Anthropic’s Claude leads in models, Microsoft-backed GitHub has added Claude to Copilot, and Amazon has added Claude to Q Developer (formerly CodeWhisperer). Meanwhile, Meta’s Llama now rivals ChatGPT in users and has become a household name in open source and beyond.

At the same time, initial feedback on OpenAI’s o1 has been mixed. OpenAI CEO Sam Altman has referred to the model as “GPT-2 for reasoning” while also calling it “deeply flawed”.

Anthropic vs OpenAI

OpenAI seems to be focusing on voice features alongside its reasoning capabilities, catering primarily to end users. In contrast, Anthropic appears to prioritise the engineering and API side.

OpenAI recently introduced expressive, steerable voices for speech-to-speech experiences and reduced costs through prompt caching. Text inputs were discounted by 50%, and audio inputs by 80%. The move has made advanced real-time AI more accessible.

(Source: Morgan McGuire)

Meanwhile, Anthropic is experimenting with voice dictation in the Claude mobile app with select users, allowing recordings of up to 10 minutes.

OpenAI is also facing growing competition from Anthropic in AI coding. The company is advancing its own tools to handle complex coding tasks and automate actions like code generation, though Anthropic’s recent launch of ‘computer use’ has given it an edge.

“AI coding can’t fully replace engineers yet and needs ‘some coaching’,” shared Anthropic co-founder Daniela Amodei. She also noted that the Claude model has significantly boosted productivity, potentially reshaping hiring strategies.

As both companies push for AI dominance, OpenAI’s upcoming releases could impact the market, potentially challenging products like GitHub Copilot, Cursor, and other coding assistants.

Perfects User Experience

Anthropic is enhancing users’ interaction with AI by focusing on screen navigation. Last week, the company introduced features enabling the AI to control computer screens, allowing it to browse the web or type on behalf of users.

‘Computer use’ is an experimental public beta feature through which Claude 3.5 Sonnet can now navigate computer interfaces in a manner similar to human users. This means the AI can view a screen, move a cursor, click buttons, and type text, allowing it to perform various tasks.
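
To make this concrete, here is a minimal sketch of what a ‘computer use’ request and the resulting GUI actions look like. The tool type, field names, and action names are assumptions based on Anthropic’s public beta documentation at the time of writing, not a definitive client implementation.

```javascript
// Hedged sketch: the shape of an Anthropic Messages API request that
// enables the experimental "computer use" tool. The tool identifier
// "computer_20241022" and its fields are assumptions from the 2024-10 beta.
function buildComputerUseRequest(task, screenWidth, screenHeight) {
  return {
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 1024,
    tools: [
      {
        type: "computer_20241022", // beta tool identifier (assumed)
        name: "computer",
        display_width_px: screenWidth,
        display_height_px: screenHeight,
      },
    ],
    messages: [{ role: "user", content: task }],
  };
}

// The model replies with tool_use blocks describing GUI actions that the
// client executes locally: taking a screenshot, moving the cursor,
// clicking, or typing. This dispatcher just names each action.
function describeAction(toolUse) {
  switch (toolUse.action) {
    case "screenshot":
      return "capture the screen";
    case "mouse_move":
      return `move cursor to (${toolUse.coordinate})`;
    case "left_click":
      return "click at the current position";
    case "type":
      return `type "${toolUse.text}"`;
    default:
      return `unsupported action: ${toolUse.action}`;
  }
}
```

The key design point is that Claude never touches the machine directly: it only emits structured action requests, and the client-side loop screenshots, clicks, and types on its behalf.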

OpenAI: "Look at our new AI-powered internet search! with a fine-tuned model!"
Perplexity: "Look at our AI-powered internet search combining multiple models!"
Google: "We built a custom pipeline for AI summaries!"
Microsoft: "We did AI search built around Bing years ago!"
Claude: pic.twitter.com/2oNkCapmUu

— Ethan Mollick (@emollick) November 1, 2024

There’s more. Anthropic has also launched an analysis tool within Claude, which allows users to perform data analysis directly in the platform by running JavaScript code. With this, Anthropic has made a mark in the field of AI user experience.

This feature, also available in preview mode, enables Claude to handle complex tasks like data cleaning and in-depth analysis from CSV (comma-separated values) files. Designed to help teams across functions by delivering precise insights, it aims to help marketers analyse customer behaviour and finance teams create dashboards.
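
The kind of JavaScript the analysis tool runs can be sketched roughly as below. The tool executes code in a sandbox; `parseCsv` and `averageBy` here are illustrative helpers, not the tool’s actual API.

```javascript
// Illustrative sketch of a data-cleaning-and-aggregation pass over a CSV,
// similar in spirit to what Claude's analysis tool generates and runs.
function parseCsv(text) {
  const [header, ...rows] = text.trim().split("\n").map((l) => l.split(","));
  return rows.map((r) => Object.fromEntries(header.map((h, i) => [h, r[i]])));
}

function averageBy(records, groupKey, valueKey) {
  const sums = {};
  for (const rec of records) {
    const v = Number(rec[valueKey]);
    if (Number.isNaN(v)) continue; // data cleaning: skip malformed rows
    const g = rec[groupKey];
    sums[g] = sums[g] || { total: 0, n: 0 };
    sums[g].total += v;
    sums[g].n += 1;
  }
  return Object.fromEntries(
    Object.entries(sums).map(([g, { total, n }]) => [g, total / n])
  );
}
```

A marketer could, for instance, upload a spend report and ask for average spend per team; the tool would generate and execute code along these lines.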

This itself feels like AGI, Anthropic is far better than OpenAI.pic.twitter.com/Qe9r18osZu

— Uttkarsh Singh (@Uttupaaji) October 24, 2024

Last month, Anthropic made Claude Artifacts available to all users on iOS and Android, allowing anyone to easily create apps without writing a single line of code.

Anthropic is undoubtedly taking how humans interact with AI to the next level.

‘While Anthropic has built something that still requires a computer as an interface, it is likely that in the future we will move away from screens and interact with AI agents using a new kind of device or interface.’ — AIM

OpenAI has not given up yet, either. At OpenAI DevDay 2024 in London, Romain Huet, the company’s head of developer experience, showcased the o1-preview demo, revealing both promise and setbacks.

After three attempts, the Swift maps app demo stalled, but a later demo pairing o1-mini with Cursor successfully controlled a drone, complete with an impressive backflip.

Nice! The drone flies and does a backflip pic.twitter.com/GBOGGuywIv

— Tarek Ayed (@tarekayed00) October 30, 2024

OpenAI also released an advanced voice feature on the ChatGPT desktop app, and not long ago it added a feature that lets users search for content from their previous ChatGPT conversations.

In addition, OpenAI launched its search engine yesterday. ChatGPT search now offers improved web search capabilities for timely, accurate answers, blending natural language interaction with up-to-date data in sports, news, stock quotes, and more.

Masters Voice Features

Recently, OpenAI launched a Realtime API for developers, enabling them to add advanced voice and natural speech-to-speech conversation features to their applications. This API enables companies to build voice-powered customer service systems that can handle complex tasks, from booking trips to guiding users through software.

Surprisingly, Anthropic’s Claude has little to no developments in that area.

OpenAI is certainly killing it. This new API is an extension of ChatGPT’s Advanced Voice Mode with sight, released just a few days before the API update. It offers six distinct voices, along with smooth audio input and output options.

For example, users can now ask ChatGPT for recipe ideas by showing a photo of their fridge or get help with a math problem by sharing a picture of the problem itself.

This update is similar to Gemini Live, Google’s conversational AI assistant. It claims to help its users plan events, ask for advice, discuss historical events, and even explore new local topics and ideas.

Needless to say, OpenAI’s API update stands in contrast to traditional approaches that chain multiple models for speech transcription and response. It connects to OpenAI’s latest GPT-4o model over a WebSocket, which allows developers to manage function calls and respond based on user requests.
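
A minimal sketch of that WebSocket flow is below. The endpoint URL, headers, and event names follow OpenAI’s public beta documentation at the time of writing and should be treated as assumptions.

```javascript
// Hedged sketch of a Realtime API client event. Events are JSON messages
// exchanged over a persistent WebSocket; this one asks the model to
// produce a spoken and written reply. URL and event names are assumed
// from OpenAI's beta docs.
const REALTIME_URL =
  "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview";

function responseCreateEvent(instructions) {
  return {
    type: "response.create",
    response: {
      modalities: ["audio", "text"],
      instructions,
    },
  };
}

// Usage (requires a WebSocket implementation and an API key):
// const ws = new WebSocket(REALTIME_URL, {
//   headers: {
//     Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
//     "OpenAI-Beta": "realtime=v1",
//   },
// });
// ws.on("open", () =>
//   ws.send(JSON.stringify(responseCreateEvent("Greet the caller."))));
```

Because the socket stays open, the model can be interrupted mid-response and audio streams in both directions, which is what single-request transcription pipelines cannot do.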

Wellness company HealthifyMe was one of the early adopters, using the API for real-time nutrition coaching through its AI coach, Ria. Ria runs on OpenAI’s GPT-4 Turbo and Whisper, OpenAI’s speech-recognition model.

Currently, the Realtime API is priced based on text and audio tokens. Audio input is priced at $100 per million tokens, and output is priced at $200 per million tokens. OpenAI has also built strong safety features into the API, including automated abuse detection and human review mechanisms.
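
At those quoted rates, estimating a session’s audio cost is simple arithmetic:

```javascript
// Cost estimate for Realtime API audio usage at the rates quoted above:
// $100 per million audio input tokens, $200 per million output tokens.
const AUDIO_INPUT_PER_M = 100;
const AUDIO_OUTPUT_PER_M = 200;

function realtimeAudioCostUSD(inputTokens, outputTokens) {
  return (
    (inputTokens / 1e6) * AUDIO_INPUT_PER_M +
    (outputTokens / 1e6) * AUDIO_OUTPUT_PER_M
  );
}
```

A session consuming a million input tokens and half a million output tokens would therefore cost about $200.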

OpenAI plans to extend the API’s capabilities in the future. It aims to support additional modalities like video and visual inputs.

To rival Anthropic’s Claude 3.5 Sonnet Artifacts, OpenAI recently unveiled canvas, a new interface for working with ChatGPT on writing and coding projects.

Unsurprisingly, OpenAI’s new canvas interface for ChatGPT falls short against Anthropic’s Claude 3.5 Sonnet for coding, with developers consistently favouring Claude’s capabilities for generating and debugging code and for learning quickly.

“On-demand software is here,” said Joshua Kelly, CTO of Flexpa, who created a custom app in seconds with Claude, underscoring how Claude Artifacts empowers users to quickly develop tailored apps and pushes forward the vision of everyone as a potential app developer.

Meanwhile, GitHub also set a new standard in the coding arena with its multi-model lineup—Claude 3.5 Sonnet, Gemini 1.5 Pro, and OpenAI’s o1-mini and o1-preview. This has brought unmatched versatility and developer choice across VS Code, Xcode, and beyond, positioning GitHub as the ultimate toolkit for today’s code-generation needs.

Money Talks

While both OpenAI and Anthropic have seen significant user growth compared to last year, their revenue generation strategies reveal vastly different approaches.

Most of OpenAI’s revenue growth arises from paid subscriptions to its products, like ChatGPT, while Anthropic earns the majority of its income through API services.

Innovations like the Realtime API and voice and speech control have pushed OpenAI’s revenue to $4 billion in 2024, a 580% increase from last year. Projections are even more striking, suggesting revenue could reach $11.6 billion in 2025.

For Anthropic, the leap in usability has contributed to revenue growth, reaching $1 billion this year, a 1,000% increase. Much of this revenue is generated from API access, catering to developers looking for seamless AI integration.

OpenAI, meanwhile, would barely have survived without Microsoft. The tech giant’s deep-rooted partnership with OpenAI, including over $13 billion invested to date, now brings anticipated quarterly losses of $1.5 billion. Microsoft attributes this cost to its equity stake in OpenAI as the latter faces mounting expenses to sustain its rapid growth trajectory.

The post Anthropic is Now Big Tech’s Favorite Child—OpenAI, Not So Much appeared first on AIM.
