How AI Dragons Set GenAI on Fire This Year

If you thought the buzz around AI would die down in 2024, think again. Persistent progress in hardware and software is unlocking possibilities for GenAI, proving that 2023 was just the beginning.

2024, the Year of the Dragon, marks an important shift as GenAI becomes deeply woven into the fabric of industries worldwide. Businesses no longer view GenAI as just an innovative tool; instead, they are welcoming it as a fundamental element of their operational playbooks. CEOs and industry leaders who recognise its potential are now focused on seamlessly integrating these technologies into their key processes.

This year, the landscape evolved rapidly as generative AI became increasingly indispensable, progressing from an emerging trend to a fundamental business practice.

Scale and Diversity

An important aspect is the growing understanding of how GenAI enables both increased volume and variety of applications, ideas and content.

The overwhelming surge in AI-generated content is leading to consequences we are just starting to uncover. According to reports, over 15 billion images were generated by AI in one year alone – a volume that once took humans 150 years to achieve. This highlights the need for the internet post-2023 to be viewed through an entirely new lens.

The rise of generative AI is reshaping expectations across industries, setting a new benchmark for innovation and efficiency. This moment represents a turning point where ignoring the technology is not just a lost opportunity, but could also mean falling behind competitors.

One of the year's most striking shifts was the rise of Chinese open-source models. “The top open source models are Chinese, and they are ahead because they focus on building, not debating AI risks,” said Daniel Jeffries, chief technology evangelist at Pachyderm.


China’s success is underpinned by its focus on efficiency and resource optimisation. With limited access to advanced GPUs due to export restrictions, Chinese researchers have innovated ways to reduce computational demands and prioritise resource allocation.

“When we only have 2,000 GPUs, the team figures out how to use it,” said Kai-Fu Lee, AI expert and CEO of 01.AI. “Necessity is the mother of innovation.”

He highlighted how his company transformed computational bottlenecks into memory-driven tasks, achieving inference costs as low as 10 cents per million tokens. “Our inference cost is one-thirtieth of what comparable models charge,” Lee said.
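To put that claim in concrete terms, here is a minimal sketch of the per-token arithmetic. The $0.10-per-million figure comes from the quote above; the $3.00 rate for a "comparable model" is an assumption implied by the thirtyfold claim, not a published price.

```python
def inference_cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of serving `tokens` tokens at a per-million-token price."""
    return tokens / 1_000_000 * price_per_million

# 01.AI's quoted rate: $0.10 per million tokens.
yi_cost = inference_cost(1_000_000_000, 0.10)    # 1B tokens -> $100.00
# A comparable model at 30x that rate (assumed $3.00 per million tokens).
rival_cost = inference_cost(1_000_000_000, 3.00)  # 1B tokens -> $3,000.00
print(yi_cost, rival_cost, rival_cost / yi_cost)
```

At a billion tokens, the gap is the difference between a $100 bill and a $3,000 one, which is why per-token pricing dominates the economics of serving at scale.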

The rise of Chinese AI extends beyond its borders, with companies like MiniMax, ByteDance, Tencent, Alibaba, and Huawei targeting global markets.

MiniMax’s Talkie AI app, for instance, has 11 million active users, half of whom are based in the US.

At the Wuzhen Summit 2024, analysts noted that as many as 103 Chinese AI companies were expanding internationally, focusing on Southeast Asia, the Middle East, and Africa, where the barriers to entry are lower than in Western markets.

ByteDance has launched consumer-focused AI tools like Gauth for education and Coze for interactive bot platforms, while Huawei’s Galaxy AI initiative supports digital transformation in North Africa.

AI Video Models

Models like Kling and Hailuo have outpaced Western competitors such as Runway in speed and sophistication, marking a shift in leadership in this emerging domain. The same momentum is visible in multimodal AI, where models like LLaVA-o1 rival OpenAI’s vision-language models by using structured reasoning techniques that break tasks into manageable stages.

The Jagged Frontier

In 2023, it became clear that generative AI is not just elevating industry standards, but also improving employee performance. According to a YouGov survey, 90% of workers agreed that AI boosts their productivity. One in four respondents uses AI daily, and 73% use it at least once a week.

Another study revealed that, when properly trained, employees assisted by generative AI completed 12% more tasks, finished them 25% faster, and produced work of 40% higher quality. The greatest improvements were seen among lower-skilled workers. However, for tasks beyond AI’s capabilities, employees using it were 19% less likely to produce accurate solutions.

This dual nature has led to what experts call the ‘jagged frontier’ of AI capabilities.

On one side, AI now performs tasks once deemed beyond machines’ reach with remarkable accuracy and efficiency. On the other, it struggles with tasks that require human intuition. These areas, defined by nuance, context, and complex decision-making, are where the binary logic of machines currently falls short.

Cheaper AI

As enterprises begin to explore the frontier of generative AI, we might see more AI projects take shape and become standard practice. This shift is driven by the decreasing cost of training LLMs, thanks to advances in silicon optimisation, with training costs expected to halve every two years. Despite growing demand and global shortages, the AI chip market is set to become more affordable in 2024 as new alternatives to industry leaders like NVIDIA emerge.

Moreover, new fine-tuning techniques such as self-play fine-tuning make it possible to strengthen LLMs without relying on additional human-annotated data. These methods use synthetic data generated by the model itself to build better AI with less human intervention.
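The core loop behind self-play fine-tuning can be reduced to a toy sketch: the model generates its own synthetic samples, then is nudged toward the real data and away from what it currently produces. The "model" below is just a unigram distribution over two tokens, a deliberate simplification, and the update rule is an illustrative counting heuristic rather than the actual SPIN objective.

```python
import random

def self_play_round(model, real_corpus, lr=0.5, n_samples=1000):
    """One round of a toy self-play update on a unigram 'language model'.

    The model samples synthetic tokens from itself, then shifts probability
    mass away from its own generations and toward the real corpus -- the
    essential idea of self-play fine-tuning, reduced to frequency counting.
    """
    rng = random.Random(0)  # fixed seed for a reproducible sketch
    tokens = list(model)
    synthetic = rng.choices(tokens, weights=[model[t] for t in tokens],
                            k=n_samples)

    real_freq = {t: real_corpus.count(t) / len(real_corpus) for t in tokens}
    synth_freq = {t: synthetic.count(t) / n_samples for t in tokens}

    # Move toward the real distribution, away from the model's own samples.
    updated = {t: max(model[t] + lr * (real_freq[t] - synth_freq[t]), 1e-6)
               for t in tokens}
    total = sum(updated.values())
    return {t: p / total for t, p in updated.items()}

model = {"good": 0.2, "bad": 0.8}     # toy model, initially wrong
corpus = ["good"] * 9 + ["bad"] * 1   # "human" data: mostly "good"
for _ in range(20):
    model = self_play_round(model, corpus)
print(model)
```

After a few rounds the model's distribution converges toward the real corpus without any new human labels being added, which is the property that makes self-play attractive when annotated data is scarce.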

Unveiling the ‘Modelverse’

The decreasing cost is enabling more companies to develop their own LLMs and points to accelerating innovation in LLM-based applications over the next few years.

By 2025, we will likely see the emergence of locally executed AI instead of cloud-based models. This shift is driven by hardware advances like Apple Silicon and the untapped potential of mobile device CPUs.

In the business sector, small language models (SLMs) will likely find greater adoption among large and mid-sized enterprises because of their ability to address niche requirements. As the name implies, SLMs are more lightweight than LLMs, making them well suited to real-time applications and easy to integrate across platforms.

While LLMs are trained on massive, diverse datasets, SLMs concentrate on domain-specific data. In such cases, the data is often from within the enterprise. This makes SLMs tailored to industries or use cases, thereby ensuring both relevance and privacy.

As AI technologies expand, so do concerns about cybersecurity and ethics. The rise of unsanctioned and unmanaged AI applications within organisations, also referred to as ‘Shadow AI’, poses challenges for security leaders in safeguarding against potential vulnerabilities.

Predictions for 2025 suggest that AI will become mainstream, speeding up the adoption of cloud-based solutions across industries. This shift is expected to bring significant operational benefits, including improved risk assessment and enhanced decision-making capabilities.

Organisations are encouraged to view AI as a collaborative partner rather than just a tool. By effectively training ‘AI dragons’ to understand their capabilities and integrating them into workflows, businesses can unlock new levels of productivity and innovation.

The rise of AI dragons in 2024 represents a significant evolution in how AI is perceived and utilised. As organisations embrace these technologies, they must balance innovation with ethical considerations, ensuring that AI serves as a force for good.

The post How AI Dragons Set GenAI on Fire This Year appeared first on Analytics India Magazine.

