Lenovo Storage Portfolio Refresh Aims to Speed Up AI Inference

So far, 2025 has been the year of agentic AI and real-time LLM deployment. But another piece of the AI stack is coming into sharper focus: storage.

As enterprises move from experimentation to real-world deployment, they’re rethinking how infrastructure supports inference at scale. Tasks like feeding large language models with high-speed data, running retrieval-augmented generation (RAG) workflows, and managing hybrid cloud environments all depend on fast, efficient, and scalable storage systems. Real-time inference can strain bandwidth, increase latency, and expose the limits of legacy infrastructure. Lenovo is responding with what it calls the largest storage portfolio refresh in its history, aimed at improving data throughput, reducing power demands, and simplifying deployment across hybrid environments.

Among the key additions to Lenovo’s portfolio are new AI Starter Kits that combine compute, storage, and networking in pre-validated configurations for RAG and inferencing workloads. These kits include features like autonomous ransomware protection, encryption, and failover capabilities, with an emphasis on reducing integration complexity for IT teams.

The company is also introducing what it describes as the industry's first liquid-cooled hyperconverged infrastructure appliance. This "GPT-in-a-box" system, part of the ThinkAgile HX series, uses Lenovo Neptune liquid cooling to support high-density inference workloads while reducing energy consumption by up to 25 percent compared to previous-generation systems.

Lenovo says its new ThinkSystem Storage Arrays offer performance gains of up to three times over the previous generation, along with power and density improvements intended to shrink datacenter footprints. The company claims these systems can deliver up to 97 percent energy savings and 99 percent greater storage density when replacing legacy hard drive-based systems.

Other updates include the ThinkAgile SDI V4 Series, which uses a software-defined approach to combine compute and storage resources for containerized and virtualized AI workloads. Lenovo claims up to 2.4 times faster inference performance for large language models, as well as gains in IOPS and transaction rates.

Scott Tease, VP and general manager of Lenovo’s Infrastructure Solutions Product Group, said the new storage offerings are aimed at helping businesses scale AI more effectively: “The new Lenovo Data Storage Solutions help businesses harness AI’s transformative power with a data-driven strategy that ensures scalability, interoperability, and tangible business outcomes powered by trusted infrastructure. The new solutions help customers achieve faster time to value no matter where they are on their IT modernization journey with turnkey AI solutions that mitigate risk and simplify deployment.”

One of the early adopters of Lenovo’s new storage offerings is OneNet, a provider of private cloud services. The company is using Lenovo’s infrastructure to improve both performance and energy efficiency in its datacenters.

“Innovation is embedded in OneNet’s DNA, and partnering with Lenovo represents a commitment to modernizing the data center with cutting-edge solutions that drive efficiency and sustainability,” said Tony Weston, CTO at OneNet. “Backed by Lenovo solutions and Lenovo Premier Support, OneNet can deliver high-availability, high-performance private cloud services that our customers can depend on.”

With this portfolio update, Lenovo is positioning itself as a key infrastructure provider for enterprises looking to scale AI workloads without overhauling their entire stack. As inferencing and retrieval-based models become standard in production environments, vendors across the ecosystem are under pressure to make storage smarter, faster, and more adaptable.
