Red Hat unleashes Enterprise Linux AI – and it’s truly useful

Red Hat has officially launched Red Hat Enterprise Linux (RHEL) AI into general availability. This isn't just another product release; it's a truly useful AI approach that RHEL administrators and programmers will find exceptionally helpful.

RHEL AI is meant to simplify enterprise-wide adoption by offering a fully optimized, bootable RHEL image for server deployments across hybrid cloud environments. These bootable model-runtime images ship with the Granite models and InstructLab tooling packages, along with optimized PyTorch runtime libraries, hardware accelerator support for AMD Instinct MI300X, Intel, and NVIDIA GPUs, and the NeMo framework.

Also: Enterprises double their generative AI deployment efforts, Bloomberg survey says

This is Red Hat's foundational AI platform. The program is designed to streamline generative AI (gen AI) model development, testing, and deployment. This new platform fuses IBM Research's open-source-licensed Granite large language model (LLM) family, the LAB methodology-based InstructLab alignment tools, and a collaborative approach to model development via the InstructLab project.

IBM Research pioneered the LAB methodology, which employs synthetic data generation and multiphase tuning to align AI/ML models without costly manual effort. The LAB approach, refined through the InstructLab community, enables developers to build and contribute to LLMs just as they would to any open-source project.

With the launch of InstructLab, IBM also released select Granite English language and code models under an Apache license, providing transparent datasets for training and community contributions. The Granite 7B English language model is now integrated into InstructLab, where users can collaboratively enhance its capabilities.
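
To make the contribution model concrete, here is a minimal sketch of what an InstructLab-style seed-example file might look like if you drafted it programmatically. The directory layout and field names follow the general shape of InstructLab skills contributions, but treat them as illustrative rather than authoritative; check the taxonomy project's own schema before submitting anything.

```python
# Sketch: draft a seed-example file for an InstructLab taxonomy contribution.
# Field names and paths are illustrative, not the official schema.
from pathlib import Path

import yaml  # pip install pyyaml

seed_examples = [
    {
        "question": "How do I check which kernel version a RHEL host is running?",
        "answer": "Run `uname -r` on the host to print the running kernel version.",
    },
    {
        "question": "How do I list the enabled repositories on a RHEL system?",
        "answer": "Run `dnf repolist --enabled` to show the enabled repositories.",
    },
]

contribution = {
    "version": 2,
    "created_by": "your-github-handle",  # placeholder
    "task_description": "Answer common RHEL administration questions.",
    "seed_examples": seed_examples,
}

out = Path("taxonomy/compositional_skills/sysadmin/rhel_basics/qna.yaml")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(yaml.safe_dump(contribution, sort_keys=False))
print(f"Wrote {out}")
```

The point is that the human contribution stays this small: from a handful of curated examples, the LAB pipeline generates a much larger synthetic training set and runs its multiphase tuning, which is what keeps the manual alignment effort low.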

Also: Can AI even be open source? It's complicated

RHEL AI is also integrated within OpenShift AI, Red Hat's machine learning operations (MLOps) platform. This allows for large-scale model implementation in distributed Kubernetes clusters.

Let's face it: AI isn't cheap. Leading LLMs cost tens of millions of dollars to train, and that's before you even start thinking about fine-tuning them for specific use cases. RHEL AI is Red Hat's attempt to bring those astronomical costs back down to earth.

Also: AI spending to reach $632 billion in the next 5 years, research finds

Red Hat partially does that by using Retrieval-Augmented Generation (RAG). RAG enables LLMs to access approved external knowledge stored in databases, documents, and other data sources. This enhances RHEL AI's ability to deliver the right answer rather than an answer that just sounds right.

This also means you can train your RHEL AI instances with input from your company's subject-matter experts without needing a Ph.D. in machine learning. That will make RHEL AI far more useful than general-purpose AI for getting the work you actually need done, rather than writing Star Wars fan fiction.
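
As a rough illustration of the RAG pattern described above, the sketch below retrieves the most relevant internal document snippet and sends it along with the user's question to a locally served model. The endpoint URL, model name, and toy keyword retriever are assumptions for illustration only; in practice you would use a real vector store and whatever serving endpoint your RHEL AI instance exposes.

```python
# Sketch of the RAG pattern: ground the model's answer in approved internal
# documents instead of whatever it memorized during training.
# Endpoint, model name, and the naive retriever are illustrative assumptions.
import requests

KNOWLEDGE_BASE = {
    "vpn-policy.md": "Remote staff must connect through the corporate VPN; "
                     "split tunneling is disabled by policy.",
    "backup-runbook.md": "Database backups run nightly at 02:00 UTC and are "
                         "retained for 35 days.",
}

def retrieve(question: str) -> str:
    """Toy retriever: return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(KNOWLEDGE_BASE.values(),
               key=lambda doc: len(q_words & set(doc.lower().split())))

def ask(question: str) -> str:
    context = retrieve(question)
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # assumed OpenAI-style local endpoint
        json={
            "model": "granite-7b-lab",  # assumed model name
            "messages": [
                {"role": "system",
                 "content": f"Answer using only this context:\n{context}"},
                {"role": "user", "content": question},
            ],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("How long are database backups retained?"))
```

The shape of the call is what matters: the approved context travels with the prompt, so the model answers from your documents rather than from memory.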

Also: Anthropic's new Claude Enterprise plan brings AI superpowers to businesses at scale

In a statement, Joe Fernandes, Red Hat's Foundation Model Platform vice president, said, "RHEL AI provides the ability for domain experts, not just data scientists, to contribute to a built-for-purpose gen AI model across the hybrid cloud while also enabling IT organizations to scale these models for production through Red Hat OpenShift AI."

RHEL AI isn't tied to any single environment. It's designed to run wherever your data lives, whether that's on-premises, at the edge, or in the public cloud. That flexibility is crucial for implementing an AI strategy without completely overhauling your existing infrastructure.

The program is now available on Amazon Web Services (AWS) and IBM Cloud as a "bring your own subscription" (BYOS) offering. In the next few months, it will be available as a service on AWS, Google Cloud Platform (GCP), IBM Cloud, and Microsoft Azure.

Also: Stability AI's text-to-image models arrive in the AWS ecosystem

Dell Technologies has announced a collaboration to bring RHEL AI to Dell PowerEdge servers. This partnership aims to simplify AI deployment by providing validated hardware solutions, including NVIDIA accelerated computing, optimized for RHEL AI.

As someone who's been covering open-source software for decades and who played with AI back when Lisp was considered state-of-the-art, I think RHEL AI represents a significant shift in how enterprises approach AI. By combining the power of open source with enterprise-grade support, Red Hat is positioning itself at the forefront of the AI revolution.

The real test, of course, will be in the adoption and real-world applications. But if Red Hat's track record is anything to go by, RHEL AI could very well be the platform that brings AI out of the realm of tech giants and into the hands of businesses of all sizes.
