Now that JEDEC has published the GDDR7 memory specification, memory manufacturers are beginning to announce their initial products. First out of the gate for this generation is Samsung, which has quietly added its GDDR7 products to its official product catalog.

For now, Samsung lists two GDDR7 devices on its website: 16 Gbit chips rated for data transfer rates of up to 28 GT/s, and a faster version running at up to 32 GT/s (in line with the initial parts that Samsung announced in mid-2023). The chips feature a 512M x32 organization and come in a 266-pin FBGA package. The chips are already sampling, so Samsung's customers – GPU vendors, AI inference vendors, network product vendors, and the like – should already have GDDR7 chips in their labs.

The GDDR7 specification promises a maximum per-chip capacity of 64 Gbit (8 GB) and data transfer rates of up to 48 GT/s. Meanwhile, first-generation GDDR7 chips (as announced so far) will feature a rather moderate capacity of 16 Gbit (2 GB) and a data transfer rate of up to 32 GT/s.

Performance-wise, the first generation of GDDR7 should provide a significant improvement in memory bandwidth over GDDR6 and GDDR6X. However, capacity/density improvements will not come until memory manufacturers move to their next-generation EUV-based process nodes. As a result, the first GDDR7-based graphics cards are unlikely to sport any memory capacity improvements. Looking a bit farther down the road, however, Samsung and SK Hynix have previously told Tom's Hardware that they intend to reach mass production of 24 Gbit GDDR7 chips in 2025.
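To put those data rates in perspective, peak per-chip bandwidth is simply the transfer rate multiplied by the chip's 32-bit interface width. A minimal sketch of that arithmetic (the 256-bit card configuration below is a hypothetical example, not an announced product):

```python
def chip_bandwidth_gbps(data_rate_gt_s: float, bus_width_bits: int = 32) -> float:
    """Peak per-chip bandwidth in GB/s: transfers/sec x bits per transfer / 8 bits per byte."""
    return data_rate_gt_s * bus_width_bits / 8

# A single 32 GT/s x32 GDDR7 chip peaks at 128 GB/s,
# up from 112 GB/s for the 28 GT/s part.
per_chip = chip_bandwidth_gbps(32)

# A hypothetical 256-bit graphics card would use eight x32 chips,
# for a bit over 1 TB/s of aggregate bandwidth.
card_total = per_chip * (256 // 32)

print(per_chip, card_total)  # 128.0 1024.0
```

By the same math, the specification's eventual 48 GT/s ceiling would put a single chip at 192 GB/s.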

Otherwise, it is noteworthy that SK Hynix also demonstrated its GDDR7 chips at NVIDIA's GTC last week. So Samsung's competition should be close behind in delivering samples, and eventually mass-produced memory.

Source: Samsung (via @harukaze5719)


  • PeachNCream - Thursday, March 28, 2024 - link

    GDDR improvements are a good-ish sort of thing, but more importantly, system RAM matters a fair bit more, since for the vast majority of people graphics reside on the CPU package, where GDDR is meaningless. DDR4 and 5 have certainly helped, but latency remains high and RAM quantity remains low, with some laptops for sale today shipping with a mere 4GB of single-channel soldered memory. Vanishingly few people spend anything on a dGPU, so not many outside of GPU compute business operations will realize benefits with yet another GDDR generation.
  • nandnandnand - Thursday, March 28, 2024 - link

    Windows and ChromeOS (Plus) have basically pushed out 4 GB in favor of 8 GB, although you can still find older systems being sold with it. The "AI PC" marketing craze could make 16 GB more common, with the purpose of that spec being to run the smaller 7B/13B-parameter LLMs and other AI stuff locally:

    https://www.tomshardware.com/software/windows/micr...

    I don't know if we will ever see significant DRAM latency improvements. We can hope for big L3/L4 cache to come to cheaper systems (on-package DRAM like in Meteor/Lunar Lake does not count as L4, I'm thinking Intel's no-show "Adamantine").
  • Dante Verizon - Thursday, March 28, 2024 - link

    I don't see it being cheap.
  • nandnandnand - Friday, March 29, 2024 - link

    2.5D/3D packaging will become more common until the point it is ubiquitous by design even in cheap products 10+ years from now.

    X3D isn't doubling the cost of AMD chips, it's adding like $50 at most. We'll see how Adamantine does when Intel deigns to release it. Nobody is expecting life-changing amounts like 8 GB at launch, maybe closer to 512 MB instead.
  • Diogene7 - Friday, March 29, 2024 - link

    I think AnandTech published a news article in 2018 announcing that Samsung was starting mass production of 16 Gbit GDDR6 chips.

    It is really a bit sad that by now, we still don’t have at least 32Gbits and even 64Gbits memory die.

    Actually it should even be 32Gbits or 64Gbits Non-Volatile-Memory (NVM) memory die, like SOT-MRAM or VCMA-MRAM as it would unlock plenty new opportunities (especially in IoT and mobile devices).

    The US CHIPS Act should be allocating a lot of funding to scale up disruptive NVM spintronic MRAM memory manufacturing, especially from US-based Avalanche Technology and Everspin, as it would allow the US to take leadership in the next generation of beyond-CMOS memory and computing technology.
  • Ryan Smith - Friday, March 29, 2024 - link

    "It is really a bit sad that by now, we still don’t have at least 32Gbits and even 64Gbits memory die."

    There are a number of reasons for this. But at the end of the day DRAM capacity scaling has tapered off. Logic is the only thing really scaling well with EUV and newer nodes. The trench capacitors and other analog bits of DRAM aren't getting smaller, and that keeps DRAM fabs from making significantly denser dies.

    To be sure, they're still making some progress. Take a look at DDR5 die density, for example. But most capacity increases at the high-end are coming from die stacking, either in the form of HBM or TSV-stacked dies for DIMMs.
