Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal fragmentation. The reason we do not get consistent IO latency from SSDs is that, inevitably, every controller has to do some amount of defragmentation or garbage collection in order to continue operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance shows up as application slowdowns.
To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs (Logical Block Addresses) have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour and we record instantaneous IOPS every second.
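The actual testing is done with a dedicated I/O benchmark, but the shape of the workload can be sketched in a few lines of Python. This is a hypothetical illustration only: /dev/sdX is a placeholder, the loop is synchronous (effectively QD1, whereas the real test sustains QD32 via an async engine), and running it overwrites the target drive.

```python
# Hypothetical sketch of the consistency workload: 4KB random writes of
# incompressible data across the full LBA span, with IOPS reported once
# per second. DESTRUCTIVE if pointed at a real device.
import mmap
import os
import random
import time

DEV = "/dev/sdX"   # placeholder device
BLOCK = 4096

buf = mmap.mmap(-1, BLOCK)      # page-aligned buffer, as O_DIRECT requires
buf.write(os.urandom(BLOCK))    # random bytes = incompressible payload

fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK   # all user-accessible LBAs

deadline = time.monotonic() + 2000   # just over half an hour, like the test
ops, tick = 0, time.monotonic()
while time.monotonic() < deadline:
    os.pwrite(fd, buf, random.randrange(blocks) * BLOCK)
    ops += 1
    if time.monotonic() - tick >= 1.0:
        print(ops, "IOPS")      # instantaneous IOPS for the last second
        ops, tick = 0, time.monotonic()
```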
We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
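The arithmetic behind the over-provisioning configurations is simple: whatever fraction of the LBA space the host never touches becomes extra spare area for the controller. A quick sketch (the capacity figure is hypothetical):

```python
# Spare area created by restricting the LBA range, expressed as a
# fraction of the capacity the host actually uses.
def over_provisioning(total_gib, used_gib):
    return (total_gib - used_gib) / used_gib

total = 256                                    # hypothetical capacity in GiB
print(over_provisioning(total, total * 0.8))   # writing to 80% of LBAs -> 0.25
```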
Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Use the dropdown selections below each graph to switch the source data.
For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.
[Graph 1: IOPS over the full test duration, log scale; selections: Default / 25% Over-Provisioning]
Despite the use of newer and slightly lower-performance 16nm NAND, the Reactor's performance consistency is actually marginally better than that of the other SM2246EN based SSDs we have tested. It's still worse than most of the other drives, but at least the increase in capacity didn't negatively impact the consistency, which happens with some drives.
[Graph 2: steady-state zoom from t=1400s, log scale; selections: Default / 25% Over-Provisioning]
[Graph 3: steady-state zoom from t=1400s, linear scale; selections: Default / 25% Over-Provisioning]
TRIM Validation
To test TRIM, I filled the drive with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.
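On Linux, the TRIM step and a rough read-speed check can be approximated as below. This is a hedged sketch, not the harness used here: /dev/sdX is a placeholder, blkdiscard (util-linux) destroys all data on the target, and the read probe is far cruder than HD Tach.

```python
# Hypothetical Linux analogue of the TRIM validation: discard every LBA,
# then take a rough sequential-read measurement, which should be back at
# fresh-out-of-box levels if TRIM is working. DESTRUCTIVE.
import os
import subprocess
import time

DEV = "/dev/sdX"   # placeholder device

subprocess.run(["blkdiscard", DEV], check=True)   # TRIM the whole device

fd = os.open(DEV, os.O_RDONLY)
t0 = time.monotonic()
total = 0
while total < 1 << 30:                  # read the first 1 GiB
    total += len(os.read(fd, 1 << 20))  # in 1 MiB chunks
os.close(fd)
print(f"~{total / (time.monotonic() - t0) / 1e6:.0f} MB/s sequential read")
```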
69 Comments
Samus - Monday, February 9, 2015
The other performance "limitation" is program/file demands. The vast majority of files, program data, video files and game data are under 5000MB in size...so the difference between a 500MB/sec and a 1000MB/sec drive is a matter of seconds and sometimes milliseconds if reading files <1000MB.

The spotlight for my SSD is loading Battlefield 4 levels. I noticed no difference when going from a SATA3 drive to an M.2 PCIe drive. The limitation is somewhere else, possibly my 2-year-old Xeon CPU? Who knows. But the only place I notice a difference is when unRARing lots of huge files. Since Windows 8.1 64-bit only loads about 800MB of data from the drive during boot (that's what I measured in NAND reads between reboots), again, the difference between a 500MB/sec and 1000MB/sec drive is virtually nothing for everyday computing.
The demand will eventually come for faster SSDs in the consumer space (they're already in the enterprise space, and have been for years), but it probably won't come from Microsoft, as their OSes get leaner every generation. Windows 8 had lower system requirements than Windows 7, and Windows 7 had lower system requirements than Vista. And Windows 10 will run on virtually anything, even Intel's Quark-based SoC dev platform (400MHz P55C "Pentium 3" based).
Uplink10 - Wednesday, February 11, 2015
It is true Windows 8 has lower system requirements, but I tested the latest Windows 8.1 with updates against Windows 7 SP1 with updates and found that Windows 8.1 is more RAM-hungry. Since I switched to Windows 8.1, I keep getting messages about closing applications because I am too low on RAM. The real potential of SSDs over HDDs is random writing/reading, not sequential. For me, 6 Gbit/s sequential bandwidth is enough because I care about random write/read, which doesn't come close to 6 Gbit/s.

Christopher1 - Sunday, February 15, 2015
One of your applications must be seriously RAM-hungry then. I have 4 or 5 browsers open and a few other programs at the same time, with the page file turned off on an 8GB RAM system, and it never squawks about needing more RAM.

Now, when I open a serious game like Arkham City, THEN it starts squawking from time to time, because AAA games are seriously RAM-hungry.
Sabresiberian - Monday, February 9, 2015
You make a very good point Solandri, but the limit of the PCIe interface is far higher than the 800 MB/s figure you used; that is pretty much a bottom-end stat. Also, NVMe>AHCI. :)

Solandri - Tuesday, February 10, 2015
Mathematically, PCIe can *never* speed things up more than the jump from SATA2 to SATA3 did. If you look carefully at the chart I made, going from SATA2 to SATA3 sped up the 1GB read by 2 sec (from 4 sec to 2 sec).

Since the total read time over SATA3 is already 2 sec, the only way to save another 2 sec is to go up to infinite MB/s. Even if you used the 8 GB/s limit of PCIe x16, the read time would be 0.125 sec, or a 1.875 sec speedup. Less than the 2 sec you sped things up going from SATA2 to SATA3.
The vast majority of the speed gains that can be gotten from SSDs (with respect to read/writes which benefit from PCIe) have already been gotten. Further improvements will be nice, but never impact computing to the degree that the initial SATA SSDs did. Read my comment to bug77 below for where we should be looking for significant speed gains next.
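For reference, the arithmetic in this comment checks out. A short script using the effective rates implied by the 4 sec / 2 sec figures above (250 and 500 MB/s are assumptions derived from the commenter's chart):

```python
# Diminishing returns: read time = size / rate, so each interface upgrade
# recovers less wall-clock time than the one before it.
size_mb = 1000
for name, rate_mb_s in [("SATA2", 250), ("SATA3", 500), ("PCIe (8 GB/s)", 8000)]:
    print(f"{name:>13}: {size_mb / rate_mb_s:.3f} s")
# SATA2 -> SATA3 saves 2.0 s; SATA3 -> even 8 GB/s saves only 1.875 s.
```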
Christopher1 - Sunday, February 15, 2015
Actually, there is a way to speed things up: by making an architecture that can handle bigger 'chunks' of data at a time. As games and other things are getting bigger, you need to load more data in a shorter period. Therefore, the rate at which data can get from the drive to RAM is the bottleneck, and there can be improvements there.

Either by pre-loading data into RAM (inefficient) or by making it so that data can get from the hard drive/SSD to RAM faster when needed/called.
bug77 - Tuesday, February 10, 2015
And that's not even the main advantage of SSDs. The thing the user notices the most is the nearly non-existent seek time. But you get that even from the first generation of SSDs.

As a refresher, try to remember what happened when you started multiple, parallel copy operations on a single HDD: the transfer rate took a serious nose-dive, and it was all because of the seek time.
Tom Womack - Tuesday, February 10, 2015
This. And it has wonderful second-order consequences; for example, you can still use your computer while it's doing a backup, so it's reasonable to schedule extremely frequent backups to hard disc.

Solandri - Tuesday, February 10, 2015
Yup, exactly this. Non-queued 4k read/writes are still mired around 30-70 MB/s due to limitations in seek time (generally imposed by file system overhead). While this is nearly two orders of magnitude faster than HDDs, it's still short of the SATA3 limit by an order of magnitude. So there's still a lot of improvement (time savings) to be made on this front.

Another way to think of it is that these 4k read/writes impact the time you spend waiting a *lot* more than the sequential read/writes do. Because MB/s is inversely proportional to the time you wait, the bigger MB/s figures matter less and the smaller MB/s figures matter more. Precisely the opposite of what MB/s seems to imply. E.g., if you need to read 1000 MB of sequential data + 1000 MB of 4k files over SATA3:
2 sec = 1000 MB of sequential data @ 500 MB/s
20 sec = 1000 MB of 4k data @ 50 MB/s
So 90% of the total read time depends on the 4k read speed, only 10% depends on the peak sequential read everyone obsesses over. If you want a "fast" SSD, concentrate on getting a drive whose *smallest* MB/s figures (almost always the 4k speeds) are higher than the competition's.
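The weighting in this example is easy to verify with the same hypothetical numbers (the comment's "90%" rounds down from roughly 91%):

```python
# Total wait time is dominated by the slowest transfers, not the headline
# sequential rate.
t_seq = 1000 / 500    # 1000 MB sequential @ 500 MB/s -> 2.0 s
t_rand = 1000 / 50    # 1000 MB of 4k files @ 50 MB/s -> 20.0 s
print(f"4k share of total read time: {t_rand / (t_seq + t_rand):.0%}")  # ~91%
```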
Christopher1 - Sunday, February 15, 2015
SATA3 does cause a bit of an issue. It is the limiting factor on most drives now: no matter how much faster you make your drive, the interface's speed limitation bottlenecks it. That is why M.2 is becoming so popular.