Among the slew of announcements from AMD today around their 2022 Financial Analyst Day, the company is offering an update to their client GPU (RDNA) roadmap. Like the company’s Zen CPU architecture roadmap, AMD has been keeping a two-year horizon here, essentially showing what’s out, what’s about to come out, and what’s going to be coming out in a year or two. That means today’s update gives us our first glance at what will follow RDNA 3, which itself was announced back in 2020.

With AMD riding a wave of success with their current RDNA 2 architecture products (the Radeon RX 6000 family), the company is looking to keep up that momentum as they shift towards the launch of products based on their forthcoming RDNA 3 architecture. And while today’s roadmap update from AMD is a high-level one, it nonetheless offers us the most detailed look yet into what AMD has in store for their Radeon products later this year.

RDNA 3: 5nm with Next-Gen Infinity Cache & Chiplets

First and foremost, AMD is targeting a greater-than 50% performance-per-watt uplift versus RDNA 2. This is similar to the uplift they saw moving from RDNA (1) to RDNA 2, and while such a claim from AMD would have seemed ostentatious two years ago, RDNA 2 has given AMD’s GPU teams a significant amount of renewed credibility.
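To put that claim in concrete terms: a 50% perf-per-watt uplift means roughly 1.5x the performance at the same power, or the same performance at about two-thirds the power. Below is a minimal arithmetic sketch in Python; the wattage and frame-rate figures are hypothetical placeholders for illustration, not AMD’s numbers.

```python
# Back-of-envelope illustration of a 50% perf-per-watt uplift.
# The wattage and frame-rate figures below are hypothetical,
# chosen only to make the arithmetic concrete.

rdna2_power_w = 300.0   # hypothetical board power of an RDNA 2 card
rdna2_fps = 100.0       # hypothetical performance at that power

rdna2_perf_per_watt = rdna2_fps / rdna2_power_w
rdna3_perf_per_watt = rdna2_perf_per_watt * 1.5  # AMD's >50% claim

# Same 300 W power budget: performance scales with efficiency.
print(rdna3_perf_per_watt * rdna2_power_w)  # 150.0 fps, i.e. 1.5x

# Same 100 fps performance target: power drops to 1/1.5.
print(rdna2_fps / rdna3_perf_per_watt)      # 200.0 W, i.e. ~2/3
```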

Thankfully for AMD, unlike the 1-to-2 transition, they don’t have to deliver a 50% uplift through architecture and DVFS optimizations alone. RDNA 3 will be built on a 5nm process (TSMC’s, no doubt), which is a full node improvement from the TSMC N7/N6 based Navi 2x GPU family. As a result, AMD will see a significant efficiency improvement from that alone.

But with that said, these days a single node jump on its own can’t deliver a 50% perf-per-watt improvement (RIP Dennard scaling). So there are several architecture improvements planned for RDNA 3. This includes the next generation of AMD’s on-die Infinity Cache, and what AMD is terming an optimized graphics pipeline. According to the company, the GPU compute unit (CU) is also being rearchitected, though to what degree remains to be seen.

But the biggest news of all on this front is that, confirming a year’s worth of rumors and several patent applications, AMD will be using chiplets with RDNA 3. To what degree, AMD isn’t saying, but the implication is that at least one GPU tier (as we know it) is moving from a monolithic GPU to a chiplet-style design, using multiple smaller chips.

Chiplets are in some respects the holy grail of GPU construction, because they give GPU designers options for scaling up GPUs past today’s die size (reticle) and yield limits. That said, it’s also a holy grail because moving the immense amount of data that must pass between different parts of a GPU (on the order of terabytes per second) is very hard to do – and very necessary if you want a multi-chip GPU to be able to present itself as a single device. We’ve seen Apple tackle the task by essentially bridging two M1 SoCs together, but it’s never been done with a high-performance GPU before.
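For a rough sense of why “terabytes per second” is the right order of magnitude, consider the back-of-envelope sketch below. The figures are assumptions loosely modeled on Navi 21, where AMD quoted on the order of 2 TB/s of effective Infinity Cache bandwidth; the cross-die traffic fraction is an arbitrary illustration, and nothing here reflects actual RDNA 3 specifications.

```python
# Rough estimate of cross-chiplet bandwidth demands. All figures
# are illustrative assumptions, not RDNA 3 specifications.

infinity_cache_bw_tbps = 2.0  # assumed on-die cache bandwidth (TB/s),
                              # in line with AMD's Navi 21 marketing figure
gddr6_bus_bw_tbps = 0.512     # e.g. a 256-bit GDDR6 bus at 16 Gbps

# If a GPU is split so that shader engines on one die must reach a
# cache slice on another, the die-to-die link has to carry a large
# fraction of that cache traffic to avoid becoming the bottleneck.
cross_die_fraction = 0.5      # assume half of all accesses cross over

required_link_bw = infinity_cache_bw_tbps * cross_die_fraction
print(f"Die-to-die link needs ~{required_link_bw:.1f} TB/s")
print(f"That's {required_link_bw / gddr6_bus_bw_tbps:.1f}x the "
      f"bandwidth of a 256-bit GDDR6 bus")
```

The takeaway is that a die-to-die link in a split GPU has to carry cache-class bandwidth rather than memory-bus-class bandwidth, which is why the choice of packaging technology matters so much here.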

Notably, AMD calls this an “advanced” chiplet design. That moniker tends to get thrown around when a chip is being packaged using some kind of advanced, high-density interconnect such as EMIB, as opposed to simpler designs such as AMD’s Zen 2/3 chiplets, which merely route their signals through the organic packaging without any enhanced technologies. So while we’re eagerly awaiting further details of what AMD is doing here, it wouldn’t at all be surprising to find out that AMD is using a form of Local Si Interconnect (LSI) technology (such as the Elevated Fanout Bridge used for the MI200 family of accelerators) to directly and closely bridge two RDNA 3 chiplets.

RDNA 4: Furthering AMD’s Performance & Efficiency in 2024

And while AMD prepares to bring RDNA 3-based GPUs to the market, the company is already hard at work at its successor.

RDNA 4, as it’s being aptly named, will be AMD’s next-generation GPU architecture for 2024. Unlike today’s Zen 5 reveal, we’re getting almost no details here – though that was the case for the RDNA 3 reveal in 2020 as well. As a result, there’s not a whole lot to dissect at this moment about the architecture other than the name.

The one thing we do know is that RDNA 4 GPUs will be manufactured on what AMD is terming an “advanced node”, which would put it beyond the 5nm node being used for RDNA 3. AMD made a similarly obfuscated disclosure in 2020 for RDNA 3, and as was the case back then, AMD is seemingly keeping the door open to making a final decision later, when the state of fabs for the 2024 timeframe is better established. One of TSMC’s 3nm nodes would be the ideal outcome here, though a 4nm node is not off the table – especially if AMD has to fight for capacity. (As cool as consumer GPUs are, other types of products tend to be more profitable on a mm² basis.)

Finally, like AMD’s Zen 5 architecture, RDNA 4 is expected to land in 2024. With AMD having established a pretty consistent two-year GPU cadence in recent years, a launch in the latter half of 2024 is not an unreasonable guess. Though there’s still a lot of time to go until we reach 2024.

Comments

  • lemurbutton - Friday, June 10, 2022 - link

    Again, not interesting.

    M1 Ultra is beating the pants off both Nvidia and AMD in perf/watt.
  • Che0063 - Friday, June 10, 2022 - link

    Yeah, let's post a comment in response to news.

    Why do people like you feel the need to be so vocally negative? If it's not interesting, then leave. This article was meant to be informative, not groundbreaking.
  • Kangal - Friday, June 10, 2022 - link

    Didn't you hear? He is not interested.
    :P
  • DannyH246 - Friday, June 10, 2022 - link

    Agreed. Your comment clearly would not be interesting to him.
  • Khanan - Friday, June 10, 2022 - link

    There are really people who defend this delusional troll? That’s sad.
  • brucethemoose - Saturday, June 11, 2022 - link

    Something about consumer CPUs/GPUs in general drives commenters to be quite tribal and negative.

    You don't see many negative comments about AI accelerator or network switch or exotic memory coverage here.
  • Fulljack - Friday, June 10, 2022 - link

    If I have a workstation with access to 1200W of power, I couldn't care less about the performance of a mobile chip that can only push as high as 300W.
  • mukiex - Friday, June 10, 2022 - link

    I mean, what are you using a $4,000 M1 Ultra for? If it's Blender rendering, it will be beaten, on the Metal back-end, by a 1050 Ti Laptop on OptiX or a Radeon 6800 XT on Metal. The latter has a power consumption of 300W, but you could buy a lot of them for the $2,000 *extra* that the M1 Ultra costs over the M1 Max. It gets embarrassing once you look at your options in gaming.

    The M1 Ultra, at the moment, is a solution in search of a problem, or a solution terribly kneecapped by a problem – that being Apple's ecosystem's thorough lack of need for a high-end GPU.
  • Khanan - Friday, June 10, 2022 - link

    Correct, it’s a good product but also way too expensive. That kind of pricing is something only Apple can get away with; anyone else would’ve been called delusional.
  • Khanan - Friday, June 10, 2022 - link

    The first comment is, as per usual, complete trash.
