AMD Tries to Break NVIDIA’s CUDA Ecosystem with UDNA 


AMD has announced a significant shift in its GPU architecture strategy with the introduction of UDNA (Unified Data and Neural Architecture). This new architecture aims to merge AMD’s existing RDNA (for gaming) and CDNA (for data centres) architectures into a single, unified platform.

However, users allege that AMD has been partial in its support, favouring CDNA over RDNA. RDNA requires per-generation optimisation, which means AMD has to put considerably more effort into supporting RDNA users.

Since RDNA has a smaller user base, few developers are willing to invest in software for it, especially when its future market share is unclear.

The new UDNA approach lets AMD streamline development across its entire GPU lineup. It will likely rival NVIDIA's Ada Lovelace architecture, which is used in consumer, workstation and data centre GPUs.

With UDNA, AMD is trying to do what NVIDIA has long done with its GPU lineup: use one architecture for everything, so that anyone with a PC could get into its developer ecosystem for enterprise and other non-gaming workloads.

AMD, on the other hand, did not do that. When it finally promised equivalent support for consumer cards, that support arrived a year late and was dropped less than a year later.

“This is something they should have done a decade ago but didn’t. And now they’re realising that they will never grow their market share if they don’t follow the leader,” said a Reddit user, arguing that UDNA is the right move for AMD.

Why is UDNA a Big Thing for AMD?

UDNA is designed to provide better scalability for consumer products and data centre solutions. This approach aims to attract more developers by offering a consistent architecture across different GPU types, potentially growing AMD’s market share.

A single, shared architecture also simplifies the software stack for AI. Right now, RDNA does not support all the optimisations that CDNA does for ML workloads, for instance.

“Using the same architecture will make gaming GPUs more ML capable and simplify these optimisation efforts,” a Reddit user added, pointing to how UDNA could simplify AI software development on AMD hardware.
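To see why that split matters in practice, here is a minimal sketch of the kind of per-family gating ML code ends up doing today. It is not AMD's or PyTorch's actual dispatch logic; it assumes a ROCm build of PyTorch, which reuses the torch.cuda API on AMD GPUs, and the backend names and the Instinct-versus-Radeon split are purely illustrative.

```python
# Illustrative sketch only: how frameworks gate optimisations per GPU family.
# Assumes a ROCm build of PyTorch, which exposes AMD GPUs through torch.cuda.
import torch


def pick_attention_backend() -> str:
    """Pick an attention implementation based on the detected GPU.

    The CDNA (Instinct) vs RDNA (Radeon) split below is hypothetical:
    the data-centre family gets the fused kernel, consumer cards fall
    back to plain math. A unified UDNA architecture would let a single
    code path cover both.
    """
    if not torch.cuda.is_available():       # also True on ROCm builds with a GPU
        return "cpu_math"

    name = torch.cuda.get_device_name(0)    # e.g. "AMD Instinct MI250X"
    if "Instinct" in name:                  # CDNA data-centre GPUs
        return "fused_flash_attention"      # assumed well-optimised path
    return "eager_math"                     # conservative consumer fallback


if __name__ == "__main__":
    print("Selected backend:", pick_attention_backend())
```

With one architecture, the branch above largely disappears: an optimisation written once would apply to both consumer and data centre parts.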

Anurag Bansal, managing director at 13D Research & Strategy, mentioned that UDNA will enable AMD to simplify development and improve software compatibility across consumer and data centre GPUs. AMD is also prioritising forward and backward compatibility to avoid losing optimisations in future generations of chips.

Interestingly, AMD previously took a similar approach with GCN (Graphics Core Next), a “one-size-fits-all” architecture that aimed to handle both graphics (gaming) and compute (GPGPU) workloads efficiently. It launched in 2012 and was discontinued in 2021, and users are speculating that AMD is about to make the same mistake again.

It is not quite the same, though. UDNA differs in that it is a chiplet architecture: AMD can build IO and compute chiplets with varying capabilities instead of scaling a single chip the way GCN did. Infinity Fabric also provides a strong set of interconnect technologies for building a unified architecture, something that was not available to GCN.

Can the Real AMD Please Stand Up?

AIM had said that 2024 would be the year for AMD, and the market seems to be finally accepting this. AMD’s data centre revenue rose 115% year-on-year to $2.8 billion in the June quarter, while NVIDIA’s data centre revenue rose 154% to $26.3 billion in the July quarter.

That gap is a huge concern for tech companies, which believe NVIDIA holds a monopoly over the AI market, a belief that is both accurate and worrying. This is why an NVIDIA competitor is necessary, and AMD is well positioned to be that competitor.

Meanwhile, Oracle’s senior vice president Karan Batta said Oracle’s use of AMD hardware could also help it guard against potential NVIDIA supply shortages, such as those seen last year.

The software approach will take this further, but measured against CUDA, there are still a few problems. One of them is software support: even when GCN was on the market, it couldn’t beat NVIDIA because it lacked optimised drivers.

“It’s not just a GCN thing anyway; RDNA1 had the same thing, and so did RDNA3. I think truth be told, AMD rushes their launches and doesn’t make good drivers till they have enough time to optimise,” a Reddit user mentioned.

In the past, AMD was not the preferred option for AI and ML tasks because there were no good libraries for using AMD hardware in ML. But things are about to change. The Finnish LUMI-G supercomputer, among the most powerful in the world, is built with AMD hardware.

AMD recently acquired the Finnish AI company Silo AI, following its earlier acquisitions of Mipsology and Nod.ai.

In this way, AMD is effectively seed-funding the use of its hardware in ML and improving the driver situation. NVIDIA remains the market leader with its unmatched ecosystem; meanwhile, AMD has a few advantages working in its favour. Unlike NVIDIA, AMD’s ROCm is an open-source platform, and the company has a track record of offering hardware at lower prices.

So, if AMD can offer GPUs with large VRAM at affordable prices, developers will adopt them and start building AI projects on top. And because ROCm is open source, solutions can be shared freely, which could further boost the adoption of AMD in AI.
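For a developer weighing that trade-off, the first step is usually just confirming that a ROCm-backed framework sees the card and how much VRAM it offers. A minimal sketch, assuming a ROCm build of PyTorch (which reports its HIP version via torch.version.hip):

```python
# Quick sanity check before starting an AI project on AMD hardware.
# Assumes a ROCm build of PyTorch; on CUDA builds torch.version.hip is None.
import torch

if torch.cuda.is_available():                    # ROCm builds reuse this API
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"VRAM: {props.total_memory / 2**30:.1f} GiB")
    print(f"HIP version: {torch.version.hip}")
else:
    print("No ROCm-visible GPU found; check the ROCm install and permissions.")
```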
