QLC Goes To 8TB: Samsung 870 QVO and Sabrent Rocket Q 8TB SSDs Reviewed
by Billy Tallis on December 4, 2020 8:00 AM EST

Whole-Drive Fill
This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. This can allow us to estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
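As a rough illustration (not the tool we actually use), a minimal Python sketch of this kind of fill pass might look like the following. It assumes a Linux system and a hypothetical /dev/sdX target, runs at queue depth 1 rather than the QD32 used in the real test, and will destroy all data on the target drive:

```python
import mmap, os, time

BLOCK_SIZE = 128 * 1024        # 128kB sequential writes
SEGMENT_SIZE = 1024 ** 3       # report average speed for each 1GB segment
DEVICE = "/dev/sdX"            # hypothetical target drive -- this wipes it

# O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK_SIZE)
buf.write(os.urandom(BLOCK_SIZE))      # incompressible data

fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT)
written, seg_start = 0, time.monotonic()
try:
    while True:
        try:
            n = os.write(fd, buf)      # QD1 here; the real test uses QD32
        except OSError:                # write failure at the end of the device
            break
        if n < BLOCK_SIZE:             # short write: end of device reached
            break
        written += n
        if written >= SEGMENT_SIZE:
            mbps = written / (time.monotonic() - seg_start) / 1e6
            print(f"{mbps:.0f} MB/s")
            written, seg_start = 0, time.monotonic()
finally:
    os.close(fd)
```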
The Sabrent Rocket Q takes the strategy of providing the largest practical SLC cache size, which in this case is a whopping 2TB. The Samsung 870 QVO takes the opposite (and less common for QLC drives) approach of limiting the SLC cache to just 78GB, the same as on the 2TB and 4TB models.
[Charts: Average Throughput for last 16 GB; Overall Average Throughput]
Both drives maintain fairly steady write performance after their caches run out, but the Sabrent Rocket Q's post-cache write speed is twice as high as the Samsung's. Even so, the Rocket Q is still a bit slower than a TLC SATA drive, and delivers just a fraction of what's typical for TLC NVMe SSDs.
On paper, Samsung's 92L QLC is capable of a program throughput of 18MB/s per die, and the 8TB 870 QVO has 64 of those dies, for an aggregate theoretical write throughput of over 1GB/s. SLC caching can account for some of the performance loss, but the lack of performance scaling beyond the 2TB model is a controller limitation rather than a NAND limitation. The Rocket Q is affected by a similar limitation, but also benefits from QLC NAND with a considerably higher program throughput of 30MB/s per die.
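As a quick sanity check on those figures, a back-of-the-envelope calculation is shown below. The 64-die count for the 870 QVO is stated above; assuming 1Tbit dies, the 8TB Rocket Q would have a similar die count, though that part is an assumption rather than a confirmed spec:

```python
# Theoretical aggregate program throughput, ignoring SLC caching and any
# controller bottleneck. Die counts assume 1Tbit dies; the 64-die figure for
# the 870 QVO is stated above, the Rocket Q's is an assumption.
drives = {
    "Samsung 870 QVO 8TB (92L QLC)": (64, 18),   # (dies, MB/s per die)
    "Sabrent Rocket Q 8TB (QLC)":    (64, 30),
}
for name, (dies, per_die) in drives.items():
    print(f"{name}: {dies * per_die} MB/s")      # 1152 MB/s and 1920 MB/s
```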
Working Set Size
Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.
When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.
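A toy model makes it clear where the extra latency comes from; the class and method names below are purely illustrative and not taken from any real firmware:

```python
from collections import OrderedDict

class FlashTranslationLayer:
    """Toy model of an FTL with a limited DRAM cache for the logical-to-physical
    mapping table. Real firmware is far more complex; this only shows why a
    cache miss costs an extra flash read."""

    def __init__(self, cache_entries):
        self.l2p_cache = OrderedDict()        # LBA -> physical address, LRU order
        self.cache_entries = cache_entries

    def read(self, lba):
        flash_reads = 0
        if lba in self.l2p_cache:
            self.l2p_cache.move_to_end(lba)            # cache hit
            phys = self.l2p_cache[lba]
        else:
            phys = self._load_mapping_from_flash(lba)  # extra flash read
            flash_reads += 1
            self.l2p_cache[lba] = phys
            if len(self.l2p_cache) > self.cache_entries:
                self.l2p_cache.popitem(last=False)     # evict least recently used
        data = self._read_flash(phys)                  # the user-data read itself
        flash_reads += 1
        return data, flash_reads                       # 1 flash read on a hit, 2 on a miss

    def _load_mapping_from_flash(self, lba):
        return lba                   # placeholder: pretend the mapping is found here

    def _read_flash(self, phys):
        return b"\0" * 4096          # placeholder 4kB page of user data
```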
We can see the effects of the size of any mapping buffer by performing random reads from different sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache, and when performing random reads from the entire drive, we expect mostly cache misses.
When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.
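A minimal sketch of such a working-set sweep is shown below, again assuming a Linux system and a hypothetical /dev/sdX target; it uses synchronous queue-depth-1 reads, whereas the real test uses a proper benchmarking tool and higher queue depths:

```python
import mmap, os, random, time

DEVICE = "/dev/sdX"                     # hypothetical drive under test
BLOCK = 4096                            # 4kB random reads
DRIVE_SIZE = 8 * 10**12                 # placeholder for ~8TB of usable capacity

buf = mmap.mmap(-1, BLOCK)              # page-aligned buffer, required by O_DIRECT
fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)

def random_read_iops(working_set_bytes, duration=10.0):
    """QD1 random 4kB reads confined to the first `working_set_bytes` of the drive."""
    blocks = working_set_bytes // BLOCK
    ios, start = 0, time.monotonic()
    while time.monotonic() - start < duration:
        offset = random.randrange(blocks) * BLOCK
        os.preadv(fd, [buf], offset)    # read into the aligned buffer
        ios += 1
    return ios / (time.monotonic() - start)

for ws in [2**30, 16 * 2**30, 256 * 2**30, DRIVE_SIZE]:   # 1GB up to the whole drive
    print(f"{ws / 2**30:6.0f} GiB working set: {random_read_iops(ws):8.0f} IOPS")

os.close(fd)
```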
The Sabrent Rocket Q's random read performance is unusually unsteady at small working set sizes, but levels out at a bit over 8k IOPS for working set sizes of at least 16GB. Reads scattered across the entire drive do show a substantial drop in performance, due to the limited size of the DRAM buffer on this drive.
The Samsung drive has the full 8GB of DRAM and can keep the entire drive's address mapping table in RAM, so its random read performance does not vary with working set size. However, it's clearly slower than the smaller capacities of the 870 QVO; there's some extra overhead in connecting this much flash to a 4-channel controller.
150 Comments
Great_Scott - Sunday, December 6, 2020 - link
QLC remains terrible and the price delta between the worst and good drives remains $5. The most interesting part of this review is how insanely good the performance of the DRAMless Mushkin drive is.
ksec - Friday, December 4, 2020 - link
I really wish a segment of the market would move towards high capacity and low speed like the QVO. This is going to be useful for things like NAS, where speed is limited to 1Gbps or 2.5Gbps Ethernet. The cheapest SSD I saw for 2TB was a one-off deal from Sandisk at $159. I wonder when we could see that being the norm, if not even lower.
Oxford Guy - Friday, December 4, 2020 - link
I wish QLC wouldn't be pushed on us because it ruins the economy of scale for 3D TLC. 3D TLC drives could have been offered in better capacities but QLC is attractive to manufacturers for margin. Too bad for us that it has so many drawbacks.
SirMaster - Friday, December 4, 2020 - link
People said the same thing when they moved from SLC to MLC, and again from MLC to TLC.
emn13 - Saturday, December 5, 2020 - link
There is an issue of diminishing returns, however.
SLC -> MLC allowed for 2x capacity (minus some overhead). I don't remember anybody gnashing their teeth too much at that.
MLC -> TLC allowed for 1.5x capacity (minus some overhead). That's not a bad deal, but it's not as impressive anymore.
TLC -> QLC allows for 1.33x capacity (minus some overhead). That's starting to get pretty slim pickings.
Would you rather have a 4TB QLC drive, or a 3TB TLC drive? That's the trade-off - and I wish sites would benchmark drives at higher fill rates, so it'd be easier to see more real-world performance.
at_clucks - Friday, December 11, 2020 - link
@SirMaster, "People said the same thing when they moved from SLC to MLC, and again from MLC to TLC."
You know you're allowed to change your mind and say no, right? Especially since some transitions can be acceptable, and others less so.
The biggest thing you're missing is that the theoretical difference between TLC and QLC is bigger than the difference between SLC and TLC. Where SLC has to discriminate between 2 levels of charge, TLC has to discriminate between 8, and QLC between 16.
Doesn't this sound like a "you were ok with me kissing you so you definitely want the D"? When TheinsanegamerN insists ATers are "techies" and they "understand technology" I'll have this comment to refer him to.
magreen - Friday, December 4, 2020 - link
Why is that useful for NAS? A hard drive will saturate that network interface.
RealBeast - Friday, December 4, 2020 - link
Yup, my eight-drive RAID 6 runs about 750MB/sec for large sequential transfers over SFP+ to my backup array. No need for SSDs and I certainly couldn't afford them -- the 14TB enterprise SAS drives I got were only $250 each in the early summer.
Not if it's a 10G linkleexgx - Saturday, December 5, 2020 - link
If you have enough drives in RAID6 you can come close to saturating a 10Gb link (see the post above: 750MB/s with 8 HDDs in RAID6).