Google Likely to Release Gemma 3 Next Month

Google is hosting its next hardware launch event ‘Made by Google’ on August 13. The company has already confirmed that it will announce the Pixel 9, Pixel 9 Pro, and the Pixel 9 Pro Fold at the event in California.

At most product launch events, hardware announcements steal the limelight. However, Google’s software-related announcements are also highly anticipated. One of the key updates to look out for concerns Gemma, Google’s family of open-weight language models.

What about Gemma 3?

Meta recently released Llama 3.1. It outperformed OpenAI’s GPT-4o on most benchmarks in categories such as general knowledge, reasoning, reading comprehension, code generation, and multilingual capabilities.

Similarly, last week, OpenAI released GPT-4o mini, a cost-efficient LLM. Priced at 15 cents per million input tokens and 60 cents per million output tokens, GPT-4o mini is 30x cheaper than GPT-4o and 60% cheaper than GPT-3.5 Turbo.
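At those rates, per-request costs are tiny. A quick back-of-the-envelope calculation using the quoted prices (request sizes below are hypothetical, purely for illustration):

```python
# Quoted GPT-4o mini prices: $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_PRICE_PER_TOKEN = 0.15 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 0.60 / 1_000_000

# Hypothetical request: 100k input tokens, 20k output tokens.
cost = 100_000 * INPUT_PRICE_PER_TOKEN + 20_000 * OUTPUT_PRICE_PER_TOKEN
print(f"${cost:.4f}")  # ≈ $0.0270 for this request
```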

Gemma 2 was last updated over a month ago, and the competitive landscape for LLMs is shifting quickly. To stay in the race, Google will need to keep innovating and set Gemma apart.

At Made by Google, the tech giant is most likely to release the updated version of Gemma, aka Gemma 3, to stay relevant.

Limitations of Gemma 2

Data engineer Maziyar Panahi highlighted issues with Gemma 2’s performance when compared with models like Llama-3-70B and Mixtral. Panahi benchmarked the models in an advanced RAG pipeline on medical data.

Panahi noted, “Gemma-2 (27B) trailed… Gemma-2 missed several obvious documents—quite a few mistakes noted! Gemma-2 tends to over-communicate, overlook details, and add unsolicited safety notes.”

Initial technical problems also plagued Gemma 2, as noted by Reddit user mikael110. A tokeniser error was corrected relatively quickly, but a more critical issue related to “logit soft-capping” persisted.

This feature, crucial for the model’s performance, was initially left out of several inference implementations because it conflicts with optimised attention kernels such as FlashAttention.
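For context, soft-capping squashes logits into a bounded range with a tanh so extreme values never blow up. A minimal sketch is below; the cap value of 50 matches what has been reported for Gemma 2’s attention logits (with around 30 for final logits) and is used here purely for illustration.

```python
import numpy as np

def soft_cap(logits: np.ndarray, cap: float) -> np.ndarray:
    # Squash values smoothly into (-cap, cap): tanh leaves small logits
    # nearly unchanged and saturates extreme ones.
    return cap * np.tanh(logits / cap)

scores = np.array([-120.0, -10.0, 0.0, 10.0, 120.0])  # toy attention logits
print(soft_cap(scores, cap=50.0))  # extremes are pulled back toward ±50
```

Implementations that skip this step feed the model uncapped logits, which is why outputs degraded until support was added.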

Hugging Face has also noted that biases or gaps in the training data can limit the model’s responses, and that the model struggles to grasp subtle nuance, sarcasm, or figurative language.

Indian Developers Love Gemma 2

Despite the initial problems, Gemma 2 remains popular among Indian developers, who say they are more comfortable with Gemma than with Llama.

“750 billion tokens are spread across 30 languages, and considering an equal distribution over all 30 languages, it comes out to be 25 billion tokens per non-English language. A language like Hindi is very rich, so I feel it’s grossly underrepresented in Llama 3,” said Adarsh Shirawalmath, the founder of Tensoic.

Similarly, OdiaGenAI released Hindi-Gemma-2B-instruct, a 2-billion-parameter model supervised fine-tuned (SFT) on a large Hindi instruction set of 187k samples. The team said Gemma-2B was chosen as the base model because its 2B variant suits CPU and on-device applications, and its tokeniser handles Indic languages more efficiently than other LLMs.

Recently, Telugu LLM Labs also experimented with Gemma and released Telugu Gemma.

“Models using Llama 2 extended its tokeniser by 20 to 30k tokens, reaching a vocabulary size of 50-60k. Continuous pre-training is crucial for understanding these new tokens. In contrast, Gemma’s tokeniser initially handles Indic languages well, requiring minimal fine-tuning for specific tasks,” said Adithya S Kolavi, the founder of Cognitive Lab.
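One way to see the difference Kolavi describes is to count the tokens each tokeniser produces for the same Hindi sentence. A rough sketch, assuming the Hugging Face transformers library and access to the (gated) model repositories named below; the sample sentence is illustrative and exact counts will vary by tokenizer version.

```python
from transformers import AutoTokenizer

sentence = "मुझे हिंदी में कहानियाँ पढ़ना बहुत पसंद है।"  # sample Hindi sentence (illustrative)

for repo in ["google/gemma-2-9b", "meta-llama/Llama-2-7b-hf", "meta-llama/Meta-Llama-3-8B"]:
    tok = AutoTokenizer.from_pretrained(repo)
    n_tokens = len(tok.encode(sentence, add_special_tokens=False))
    print(f"{repo}: {n_tokens} tokens")
```

A tokeniser that yields fewer tokens for Devanagari text packs more meaning into each token, which is what makes continued pre-training and fine-tuning on Indic data cheaper.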

All is Not Lost for Gemma

According to Kolavi’s leaderboard for Indic LLMs, Llama 3 performs significantly better than Llama 2 on most benchmarks. However, it still falls a little short of Gemma, whose tokenisation of Devanagari is more efficient than Llama 2’s.

DeepMind engineer Rohan Anil wrote on X that Gemma 2 27B clearly outperforms Llama 3 70B and other open-weight models, thanks to excellent post-training.

“Gemma probably does a better job at Indic tokenisation than GPT-4 and Llama 3,” said Vivek Raghavan, the co-founder of Sarvam AI, in an exclusive interview with AIM. However, he added that Llama 3 has its own advantages.

“I think Llama 3 looks quite good. There are many open models and we have a strategy where we leverage all of them,” he added.
