With a series of announcements showcasing Amazon’s prowess not only in the cloud space but also in generative AI, and even taking digs at OpenAI’s security flaws in the process, AWS re:Invent 2023 in Las Vegas had a lot to offer. Advancing its AI chip ambitions, the company announced two new chips at the event: AWS Graviton4 and AWS Trainium2.
Almost a decade ago, Amazon realised that to keep improving the cost-effectiveness of cloud workloads, it needed to redefine general-purpose computing for the cloud era, pushing innovation down to the silicon level. “In 2018, we became the first major cloud provider to develop our own general compute processors,” said Adam Selipsky, CEO of Amazon Web Services, at the event.
Releasing Graviton Power
Adam Selipsky at AWS re:Invent 2023. Source: AWS YouTube
AWS Graviton, a family of processors first released in 2018, was designed for cloud computing infrastructure running in Amazon Elastic Compute Cloud (EC2). The fourth generation, Graviton4, unveiled yesterday, is the most powerful and energy-efficient chip AWS has built, with 50% more cores and 75% more memory bandwidth than its predecessor, Graviton3.
Furthermore, Selipsky announced the preview of R8g instances, which are powered by Graviton4 and belong to the memory-optimised instance family. R8g instances are designed to deliver fast performance for large data sets and to run memory-intensive workloads energy efficiently.
In 2020, Andy Jassy, then CEO of AWS and now CEO of Amazon, emphasised the company’s commitment to advancing the cost-effectiveness of machine learning training by investing in proprietary chips. With these chips, AWS lowered the cost barrier for ML training.
Similarly, the 2018 release of Graviton was meant to break into a processor market where Intel was comfortably placed; the scarcity of hardware options for building data centres and cloud services had given Intel that advantage. Furthermore, the power efficiency of Arm cores made Graviton well suited to mobile computing and to enterprises with extensive arrays of data centres, AWS above all. Today, Amazon has 150 Graviton-based instances across the EC2 portfolio and more than 50,000 customers.
AWS Had It Planned All Along
With GPUs being the indispensable component of AI compute, AWS has strategically positioned itself in that race, and not just recently. Trainium, a chip built for training machine learning models, and Inferentia, a chip optimised for running inference on those models, were released a few years ago. At the event, Selipsky unveiled Trainium2, the second generation of the training chip.
In 2015, Amazon acquired Annapurna Labs for $350 million and in 2017, AWS launched Graviton, thereby entering the chip race.
Chirag Dekate, VP analyst at Gartner, had earlier noted that Amazon’s true differentiation lies in its technical capabilities. “Microsoft does not have Trainium or Inferentia,” he said.
The chips were designed to provide accelerated performance and cost-effectiveness for ML training workloads on the cloud platform. This effort started before the ChatGPT craze, but likely gained steam after OpenAI’s chatbot took off.
AWS has already found a number of partners using its AI chips. Companies such as Adobe, Deutsche Telekom and Leonardo AI have deployed Inferentia2 for their generative AI models at scale. Similarly, Trainium has been adopted by partners such as Anthropic, Databricks and Ricoh. The use cases extend to Amazon’s internal search team as well, which has used the chip to train large deep learning models.
Amazon’s partnership with AI research company Anthropic has been a crucial showcase of their combined strength in the generative AI space. Amazon has not only invested in the company but also hosts Claude models on its Bedrock platform, taking the partnership beyond simple compute.
Anthropic CEO and co-founder Dario Amodei said that AWS is the “primary cloud provider for our mission critical workloads,” and that there are three components to the partnership – compute, customers and hardware. On the hardware front, Amodei spoke about working to optimise Trainium and Inferentia for Anthropic’s use cases.
NVIDIA, the biggest player in AI chips, has also built a strategic partnership with AWS. NVIDIA chief Jensen Huang made an appearance at the event and spoke about how AWS was the “world’s first cloud to recognise the importance of GPU accelerated computing.” The appearance was seen as a show of unity, signalling a symbiotic partnership that favours both parties even as AWS builds its own silicon.
Amazon’s steady silicon innovation has helped shape its position in the current AI market. With these crucial partnerships, AWS is proving to be an essential part of AI compute power.
The post How Amazon’s Silicon Innovation Is Instrumental in AWS Success appeared first on Analytics India Magazine.