E-commerce behemoth Amazon and startup Databricks have struck a five-year deal centred on Amazon’s Trainium AI chips, which could cut costs for businesses building their GenAI apps.
Databricks will use AWS Trainium chips to power a service that helps companies customise an AI model or build their own using Mosaic AI. It acquired AI startup MosaicML last year in a $1.3 billion deal and is expanding its services to democratise AI and position its Lakehouse as the top platform for GenAI and LLMs.
MosaicML had raised $37 million and offered technology it claimed was up to 15 times cheaper than competitors’, serving clients such as AI2, Replit, and Hippocratic AI. It claims its MPT-30B, a 30-billion-parameter LLM, is superior in quality and more cost-effective for local deployment than GPT-3.
Meanwhile, Amazon says customers pay less to use its homegrown chips compared with the competition, such as NVIDIA’s GPUs, which dominate the AI chip market.
The partnership also includes support for AWS’s Graviton2-based EC2 instances, which can deliver up to 4x better price-performance when building lakehouses on AWS. This optimisation is crucial for enterprises aiming to manage costs while maximising performance in their data operations.
Databricks VP Naveen Rao highlighted that the partnership will allow companies to build AI models more quickly and cost-effectively, making AI cheaper for businesses by passing on the savings from Amazon’s AI chips.
Impact on Enterprises
Databricks has introduced a pay-as-you-go model for its Lakehouse Platform through AWS Marketplace, allowing customers to easily discover, launch, and build lakehouse environments directly from their AWS accounts.
This model simplifies onboarding, consolidates billing under existing AWS management accounts, and enables organisations to leverage their AWS contracts for greater flexibility in resource management. The Lakehouse Platform unifies enterprise analytics and AI workloads on a single platform, eliminating data silos and promoting better collaboration across workflows.
Integrated with AWS Lake Formation, it enhances data governance by allowing centralised management of data access policies, ensuring consistent security enforcement across Databricks and AWS services while supporting a wide range of functions, from data processing to machine learning.
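As a rough illustration of the centralised access control described above, the sketch below builds a Lake Formation table-level grant request. The database, table, and role names are hypothetical, and the actual API call (boto3’s `lakeformation` client) is shown commented out since it requires AWS credentials.

```python
# Sketch of granting table-level access with AWS Lake Formation.
# Database, table, and principal ARN below are illustrative only.
def lake_formation_grant(database, table, principal_arn, permissions):
    """Build the kwargs for a lakeformation grant_permissions() call."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": principal_arn},
        "Resource": {"Table": {"DatabaseName": database, "Name": table}},
        "Permissions": permissions,
    }

grant = lake_formation_grant(
    database="sales_lakehouse",
    table="orders",
    principal_arn="arn:aws:iam::123456789012:role/AnalystsRole",
    permissions=["SELECT", "DESCRIBE"],
)

# With credentials configured, the request would be sent as:
# import boto3
# boto3.client("lakeformation").grant_permissions(**grant)
```

Because the policy is defined once in Lake Formation rather than per service, the same grant governs access from both Databricks and native AWS analytics tools.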
The Partnership
The deal comes as Databricks, Amazon and other enterprise technology companies like Microsoft, Salesforce and Snowflake, a rival of Databricks, aggressively court businesses to earn more revenue. Meanwhile, corporate technology executives say it is time to show AI investment is generating returns.
Notably, Databricks will continue using NVIDIA processors under a pre-existing agreement with AWS. Amazon debuted the second iteration of its Trainium training chips in November last year, complementing its Inferentia series of chips used to run AI models.
All in all, the agreement strengthens Amazon’s position in the cut-throat AI chip space.
The two companies also have an existing partnership where customers can run Databricks data services on Amazon’s cloud-computing platform, Amazon Web Services. Databricks also rents NVIDIA’s GPUs through AWS, and will be using more of them as part of the deal.
Customers using AWS have generated over $1 billion in revenue for Databricks, and AWS is the data company’s fastest-growing cloud partner, Rao added.
Early AI successes have relied on using a company’s private data to customise AI; a bespoke customer-service chatbot, for instance, can help lower staffing costs. For Amazon, that means continuing to position itself as a neutral provider of technology, offering businesses the capabilities to use and combine a variety of AI models from many vendors on its platform.
Databricks also makes money by renting out analytics, AI, and other cloud-based software that taps AI-ready data, so that companies can build their enterprise technology tools. The San Francisco-based firm said it was valued at $43 billion last September.
An AWS Inferentia chip (left) and an AWS Trainium chip. (Photo: Amazon)
Jonny LeRoy, the CTO of Grainger, is using AI to help customers navigate their product offerings. The Illinois-based company is using a combination of AI models and a retrieval-augmented generation system from Databricks to build its customer-service tool, and is planning to use Amazon’s chips under the hood, LeRoy said.
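The article doesn’t detail how Grainger’s retrieval-augmented generation system works internally, but the general pattern is simple: retrieve the product documents most relevant to a customer’s question, then feed them to a model as context. Below is a minimal, self-contained sketch of that retrieval step using plain lexical overlap (real systems typically use vector embeddings); the product documents are invented for illustration.

```python
from collections import Counter

def tokenize(text):
    # Lowercase and strip trailing punctuation from each word.
    return [w.lower().strip(".,!?") for w in text.split()]

def overlap_score(query, doc):
    # Count tokens shared between the query and the document.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values())

def retrieve(query, docs, k=1):
    # Return the k documents with the highest lexical overlap.
    return sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    # Assemble retrieved context plus the question into one prompt
    # that would then be sent to an LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical product catalogue snippets.
product_docs = [
    "The DeWalt DCD771 cordless drill ships with two 20V batteries.",
    "Safety goggles model SG-200 meet ANSI Z87.1 impact standards.",
    "The ladder model L-350 supports loads up to 350 pounds.",
]

prompt = build_prompt("Which drill comes with batteries?", product_docs)
```

Grounding answers in retrieved documents this way is what lets a customer-service tool answer from a company’s own catalogue rather than from the model’s general training data.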
Although Amazon isn’t widely considered a leader in AI innovation, some technology analysts and business leaders say it needs to show that it can compete against Microsoft and Google. A part of Amazon’s AI reboot involves its AI chips, Trainium and Inferentia, which are designed specifically for building and using AI models.
Compared with NVIDIA’s more general-purpose GPUs, such custom chips can be more efficient because they are purpose-built for specific AI workloads rather than general computation.
Amazon’s pitch for its custom AI chips is its lower cost. Customers can expect to pay about 40% less than they would using other hardware, said Dave Brown, vice president of AWS compute and networking services.
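To put the claimed ~40% saving in perspective, here is a back-of-the-envelope calculation. The hourly rate and job duration are hypothetical figures chosen purely for illustration, not AWS pricing from the article.

```python
# Back-of-the-envelope savings from the claimed ~40% lower cost.
# Hourly rate and job length are hypothetical, for illustration only.
gpu_rate = 32.77          # hypothetical $/hour for a comparable GPU instance
hours = 1000              # hypothetical training-job duration
claimed_discount = 0.40   # Amazon's stated ~40% saving vs. other hardware

gpu_cost = gpu_rate * hours
trainium_cost = gpu_cost * (1 - claimed_discount)
savings = gpu_cost - trainium_cost
print(f"GPU: ${gpu_cost:,.0f}  Trainium: ${trainium_cost:,.0f}  Saved: ${savings:,.0f}")
```

At training scale, where jobs run for thousands of instance-hours, a discount of this size compounds into the kind of savings Brown describes below.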
“No customer is going to move if they’re not going to save any money, and if their existing solution is working well for them,” Brown said. “So it’s important to deliver those cost savings.”
Despite all this, Amazon has not officially disclosed how many of its customers use its custom chips rather than NVIDIA’s GPUs.
The post What Databricks-AWS Partnership Means for Enterprise appeared first on AIM.