Jensen Huang Brings re:Invent to Life

Jensen Huang is everywhere, and so is NVIDIA. The company seems to be stealing the spotlight at every major event, be it Google Cloud Next, AWS re:Invent or Microsoft Ignite, bringing the party to life at each one.

During the recent re:Invent keynote, AWS and NVIDIA jointly announced a strategic initiative to provide a new class of supercomputing infrastructure, software, and services tailored specifically for generative AI.

The duo has also decided to deploy NVIDIA's much-anticipated GH200 chips, initially slated for release in 2024. The GH200 chips will be installed within AWS's cloud infrastructure, making the advanced hardware available globally to AWS customers.

NVIDIA x AWS

The event also showcased a joint initiative to build the world's fastest GPU-powered AI supercomputer, a giant leap towards reshaping industries and driving technological progress at an unprecedented pace. Named Project Ceiba, it will feature 16,384 NVIDIA GH200 superchips delivering a staggering 65 exaflops of AI performance, propelling NVIDIA's next wave of generative AI innovation.

At #AWSreInvent, @AWSCloud CEO Adam Selipsky and our CEO Jensen Huang spotlight the pivotal role of #generativeAI in cloud transformation, highlighting their companies’ growing partnership. https://t.co/TrmbOu3GXw

— NVIDIA (@nvidia) November 28, 2023

Apart from this, AWS is also working with NVIDIA to introduce three new Amazon EC2 instance types: P5e instances for large-scale generative AI and HPC workloads, and G6 and G6e instances for a wide range of applications such as AI fine-tuning, inference, and graphics.

“NVIDIA and AWS are collaborating across the entire computing stack, spanning AI infrastructure, acceleration libraries, foundation models, and generative AI services,” said NVIDIA CEO Jensen Huang.

NVIDIA x Hyperscalers

Clearly, NVIDIA thrives in a collaborative environment, and AWS re:Invent made that even clearer: the company also maintains partnerships with AWS's rivals Google Cloud, Microsoft Azure and Oracle.

“Our partnership with NVIDIA spans every layer of the Copilot stack — from silicon to software — as we innovate together for this new age of AI,” said Microsoft chief Satya Nadella, at Ignite 2023.

At this event, NVIDIA and Microsoft announced their partnership to launch an AI foundry service on Microsoft Azure, aiming to boost the development of custom generative AI applications for enterprises and startups. This service integrates NVIDIA’s AI technologies and DGX Cloud AI supercomputing with Azure’s infrastructure, providing a comprehensive solution for creating and deploying tailored AI models.

The partnership also emphasised custom model development, leveraging NVIDIA’s AI Foundation Models and tools and making these advancements accessible through Azure’s cloud platform and marketplace. This collaboration signifies a major step in facilitating advanced AI application development and deployment in various industries.

“Many of Google’s products are built and served on NVIDIA GPUs, and many of our customers are seeking out NVIDIA accelerated computing to power efficient development of LLMs to advance generative AI,” shared Google Cloud chief Thomas Kurian at Google Cloud Next, held earlier this year.

At Google Cloud Next, NVIDIA partnered with Google to drive advancements in AI computing, software, and services, alongside enhancing AI supercomputing capabilities.

The duo is working together to optimise Google’s PaxML framework for NVIDIA GPUs, facilitating large language model development, and to integrate serverless Spark with NVIDIA GPUs for accelerated data processing. In addition, Google Cloud said it would feature NVIDIA H100 GPUs in its A3 VMs and Vertex AI platform, and gain access to the NVIDIA DGX GH200 AI supercomputer, among other offerings.

Recently, Oracle also announced a multi-year partnership with NVIDIA to speed up AI adoption for enterprises and help customers solve business challenges.

In a recent interview with AIM, Oracle said that it is well-equipped in terms of infrastructure, as NVIDIA selected OCI as the first hyperscale cloud provider to offer NVIDIA DGX Cloud. “When NVIDIA thinks of cloud and data, they think of Oracle,” said Oracle’s Chris Chelliah, adding that Oracle uses MySQL HeatWave data on NVIDIA clusters for real-time anomaly detection for its customers.

NVIDIA is Omnipresent

NVIDIA’s diverse partnerships with the leading cloud providers and hyperscalers uniquely position it across various facets of the AI landscape. Even as each hyperscaler builds in-house silicon to handle AI workloads, the nature of these partnerships seems to be changing rapidly.

From an innovation and AI advancements standpoint, Google Cloud seems to be NVIDIA’s favourite, while Microsoft Azure stands as a pivotal partner for enterprise reach and application development, given its strong enterprise focus and extensive customer base.

Oracle differentiates itself in data management and AI-driven solutions, particularly through its emphasis on real-time data processing capabilities. AWS, on the other hand, plays a critical role in security-focused AI solutions, addressing the increasing concerns around AI security and reliability.

Overall, these partnerships give NVIDIA a multifaceted platform to expand its AI capabilities and market reach, with each partnership aligning with one of NVIDIA’s strategic focus areas, whether AI innovation, enterprise applications, data management, or security in AI solutions. Simply put, everybody likes NVIDIA.

The post Jensen Huang Brings re:Invent to Life appeared first on Analytics India Magazine.

