The WD Black2 Review: World's First 2.5" Dual-Drive
by Kristian Vättö on January 30, 2014 7:00 AM EST

If you had asked me a few years ago, I would've said that hybrid drives would be the next big thing in the storage industry. Seagate's Momentus XT got my hopes up that there was real manufacturer interest in the concept, and I expected the Momentus XT to be just the beginning, with more announcements following shortly after. A hybrid drive made so much sense -- it combined the performance of an SSD with the capacity of a hard drive in an affordable, easy-to-use package.
Seagate's Momentus XT showed that even 4GB/8GB of NAND can make a tremendous impact on the user experience, although it couldn't compete with a standalone SSD. The reason was the very limited amount of NAND, since SSD performance relies on parallelism: a single NAND die isn't fast (though even one die is far better than a hard drive for random IO), but when you combine a dozen or more dies and read and write to them simultaneously, the performance adds up. I knew the Momentus XT was a first-generation product and I accepted its limitations, but I truly expected a high-end drive with enough NAND to substitute for an SSD to follow.
It turns out I was wrong... dead wrong. Sure, Seagate doubled the NAND capacity to 8GB in the second-generation Momentus XT, but other than that the hybrid drive market was practically non-existent. Western Digital showed its hybrid drives over a year ago but limited them to OEMs due to a unique connector. To be honest, I haven't seen WD's hybrid drives used in any systems, so I'm guessing the OEMs weren't big fans of the connector either.
As the hard drive companies weren't able to come up with decent hybrid offerings, PC OEMs had to look elsewhere. Intel's Ultrabook concept was a big push for SSDs because Intel required at least 20GB of flash storage, or the OEM wouldn't be able to use the Ultrabook branding. Of course, Intel had no way to stop OEMs from making ultraportables without flash, but given the millions Intel has spent on Ultrabook marketing, it was worthwhile for OEMs to follow Intel's guidelines. Having pushed itself into a corner with the price war, the PC market couldn't do what Apple did and go SSD-only, yet Ultrabooks had no space for two 2.5" drives. The solution? mSATA.
mSATA is barely the size of a credit card
Unlike hard drives, SSDs didn't have to be 2.5"; the form factor was simply a matter of compatibility with existing systems. What mSATA did was allow PC OEMs to build hybrid storage systems while keeping the Ultrabook spec and form factor. In my opinion this was a huge missed opportunity for the hard drive makers: all they would have needed was a hybrid drive with at least 20GB of NAND to meet the Ultrabook spec. I bet many PC OEMs would have chosen an all-in-one hybrid drive over two separate drives, because managing a single supplier is easier and, with sufficient volume, the pricing should have been competitive as well.
When SSDs first appeared in the consumer space, the hard drive companies didn't feel threatened. The pricing was absurd (the first-generation 80GB Intel X25-M cost $595) and performance wasn't much better than what hard drives offered. Back then SSDs were only interesting to some enthusiasts and enterprises, although many were unconvinced about the long-term benefits since the technology was very new. The hard drive companies had no reason to even think about hybrid drives, as traditional hard drives were selling like hotcakes.
Today the situation is very different. Let's take the 80GB Intel X25-M G1 and the 240GB Intel SSD 530 as examples: the price per gigabyte has dropped from around $7.50 to $0.83, a massive 89% decrease. Drops this big are impossible to predict because they usually aren't intentional, and this one was no exception. The reason NAND prices fell so rapidly was the oversupply caused by large investments made around 2010. The sudden increase in NAND demand driven by the popularity of smartphones and tablets made the NAND business look like a good investment, which is why many companies invested heavily in it. What the NAND fabricators didn't take into account was that while smartphone and tablet shipments continued to increase, the NAND capacities of those devices didn't (at least not very quickly). In other words, the fabricators expected demand for NAND to keep growing rapidly and expanded their production capacity accordingly, but actual demand growth was much smaller, which led to oversupply. Like any other good, NAND is priced by supply and demand: when there is more supply than demand, prices have to come down for the two to meet.
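The math behind that figure is simple; here is a minimal Python check, assuming the $595/80GB launch price for the X25-M and the $0.83/GB figure quoted above:

```python
# Quick check of the price-per-gigabyte drop cited above.
# Assumed data points: 80GB Intel X25-M G1 at $595, and $0.83/GB for the 240GB SSD 530.

x25m_per_gb = 595 / 80        # ~$7.44/GB, roughly the "$7.50" cited
ssd530_per_gb = 0.83          # $/GB cited for the 240GB SSD 530

drop = (x25m_per_gb - ssd530_per_gb) / x25m_per_gb
print(f"${x25m_per_gb:.2f}/GB -> ${ssd530_per_gb:.2f}/GB: a {drop:.0%} decrease")  # ~89%
```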
SSDs are no longer luxury items. Plenty of systems already ship with some sort of SSD storage and the number will continue to grow. The hard drive companies can no longer neglect SSDs, and in fact WD, Seagate, and Toshiba have all made substantial SSD investments. Last year WD acquired STEC, Virident, and VeloBit; Seagate introduced its first consumer SSD lineup; and Toshiba has been in the SSD game for years. However, there still hasn't been a product that combines an SSD and a hard drive in one compact package. The WD Black2, the world's first SSD+HD dual-drive, changes that.
100 Comments
apoe - Friday, February 7, 2014 - link
People think US internet speeds are slow? When I lived in China, 50 kb/s was considered really fast. In the US, with a standard ISP, I can download Steam games at 6 MB/s, 120 times faster. According to Forbes, the US is in the top 10 for fastest internet speeds, but nothing tops South Korea. Helpful that plenty of larger datacenters are located here too.

xKrNMBoYx - Tuesday, February 4, 2014 - link
Try downloading a game or program that is 20-50+ GB constantly. That eats bandwidth and data cap. Without optical drives you're left having to download everything or do a multi-step transfer from disc to computer to another computer. There are external optical drives, but that is another story. Optical drives won't become obsolete until ISPs invest more in speed and reliability at lower prices. And then there's the group of people who wouldn't use the internet either.

JMcGrath - Sunday, February 9, 2014 - link
@Morawka - Forget Blu-ray, it will be left far behind in the (fairly) near future. I won't go as far as saying that BD is obsolete or will be for quite a while; it has far too much influence in the current movie industry, even with 4K already hitting the market a lot faster than people ever thought possible.
However, as other people have stated, BD is simply not a feasible solution going forward. It has served its purpose for many years now, but just like CD and DVD it will be replaced by larger and faster storage media.
I think it's too hard to say what will become the dominant technology in the near future, and hopefully we won't have to go through another BD vs. HD-DVD-type war again(!), but there are a number of different technologies in the works, many of which already have working prototypes aimed at replacing the aging BD tech.
Most of these techs have gone with smaller track widths and new laser technologies, additional layers, or a combination of the two. However, there is one new technology that sounds very promising, and one I believe (and hope) will become the adopted standard: holographic discs!
With the focus on ultra-high resolutions, the aim for "retina"-type displays, deeper color depths and shading, and higher true refresh rates (4K/60 or 4K/120, for example), new technologies will be needed. Most internet connections, even the fastest available in most areas, won't support these extreme bitrates, and BD simply can't keep up either.
I have seen demos of everything from true 24-bit color panels, to 60Hz and 120Hz 4K via HDMI 2.0 or DP 1.2+, to multi-panel/multi-head displays de-multiplexed (demuxed) to show true 23:9 content at 11280x4320p @ 120Hz using multiple DP/HDMI connections.
When talking about just the current standard 4K/30 on an RGB 4:4:4, 12-bit-per-channel panel (36 bits per pixel), you're talking about:

3840 * 2160 * 36 * 30 = 8,957,952,000 bps / 8 = 1,119,744,000 bytes/s...

That's 1.04GB/s, or 3.67TB/hour (uncompressed, true 2160p)!!

That's 62.6GB/minute, or just over 7.33TB for a 2-hour movie, and this excludes audio!!

Now, add in new technologies (coming very soon) like 24-bit color, 60FPS, and the *real* widescreen aspect and you're looking at closer to 367.6GB/minute and 43TB for a 2-hour movie!
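For anyone who wants to reproduce those back-of-the-envelope figures, here is a small Python sketch of the same arithmetic; the resolution, bit depth, and frame rate are simply the values quoted above, and it assumes uncompressed RGB with no chroma subsampling and no audio:

```python
# Back-of-the-envelope data rates for uncompressed video.
# Assumptions: RGB 4:4:4, no compression, no audio; bit depth is per pixel.

def bytes_per_second(width, height, bits_per_pixel, fps):
    """Uncompressed video data rate in bytes per second."""
    return width * height * bits_per_pixel * fps / 8

rate = bytes_per_second(3840, 2160, 36, 30)                 # 4K, 12 bits/channel, 30 fps
print(f"{rate:,.0f} bytes/s")                               # ~1,119,744,000 bytes/s
print(f"{rate / 1024**3:.2f} GiB/s")                        # ~1.04
print(f"{rate * 3600 / 1024**4:.2f} TiB per hour")          # ~3.67
print(f"{rate * 7200 / 1024**4:.2f} TiB per 2-hour movie")  # ~7.33
```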
I haven't kept a really close eye on holographic disc technology lately; I know it was originally created by GE (who actually had a working, though smaller 4TB/layer, prototype ~3 years ago!). The discs themselves look identical to a CD/DVD/BD, but rather than using a single laser on one linear track, the drive uses multiple lasers at different angles. The possibilities are really endless considering the technology itself is no different from current media: add 2 more lasers @ 45 degrees and you increase density by 300%, add 2 more @ 30 degrees and you've increased it 500%, 2 more lasers @ 15 degrees... you get the idea.
The last I remember reading about the technology was that they had the working 4TB/layer model I mentioned, but they were also working on using additional lasers and a finer track, which could allow as much as 40-80TB per disc in the future!!
BD won the last round because Sony had such a large influence on the market, especially with the PS3 hitting the market at the same time as BD/HD-DVD players and HDTVs becoming mainstream. It remains to be seen what the driving factor will be this time around, but with a company as large as GE behind the wheel and the demand for backup in large data centers, I think holographic discs stand a good chance of winning the next round.
For everyone out there who works in a large DC using automated tape backups or cloud-based backups, imagine being able to not only store 80/160/320TB on a single disc the size of a CD but to do it in less than 2 hours!! Considering you could write 80TB in 2 hours, and assuming they release PC writers @ 2X, 4X, 8X, etc., you could back up an entire enterprise data center in less than an hour, throw the disc in a small fire/waterproof safe, and you're done!
patrickjchase - Thursday, January 30, 2014 - link
This is tangential, but... I have similar backup needs, and faced similar issues with optical disc unreliability (and also with HDD failures, for that matter). I ended up developing my own archiver that stripes backup files across multiple drives/disks (optical or HDD). It calculates and embeds strong block-level checksums, and provides RAID6-style Reed-Solomon-based redundancy within each block-sized stripe. In particular it can tolerate up to 2 block-checksum failures in each stripe (for example, if I stripe across 7 Blu-ray disks I can tolerate read errors from any 2 within any given block-sized stripe), which means that it can tolerate a *lot* of optical disk read errors. I intentionally degraded (read: scratched up) a backup set such that every disk yielded a very large number of read errors, but the backup payload as a whole was recoverable.
With that in mind, I find that optical (Blu-Ray) media remain very useful for backups due to their superior shock/vibration/environmental tolerance as compared to hard drives. If I were using them without my archiver I'd be pretty worried, though :-).
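For readers curious how this sort of striping works in principle, here is a heavily simplified Python sketch. It is not patrickjchase's archiver: it uses a single XOR parity block per stripe (RAID5-style, tolerating one lost block per stripe) instead of the RAID6-style Reed-Solomon coding described above, which tolerates two, but the recovery idea is the same.

```python
# Heavily simplified parity-striping sketch (not the archiver described above).
# One XOR parity block per stripe: any single lost block in a stripe can be
# rebuilt by XOR-ing the surviving blocks together.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def make_stripe(data_blocks):
    """Return the stripe to write out: the data blocks plus one parity block."""
    return data_blocks + [xor_blocks(data_blocks)]

def recover_missing_block(surviving_blocks):
    """Rebuild the single missing block of a stripe from the survivors.
    Under XOR parity, data and parity blocks are interchangeable here."""
    return xor_blocks(surviving_blocks)

if __name__ == "__main__":
    # Five tiny 4-byte "blocks" standing in for blocks destined for five disks.
    data = [bytes([i]) * 4 for i in range(1, 6)]
    stripe = make_stripe(data)            # 5 data blocks + 1 parity block
    lost = 2                              # pretend disk #2 is unreadable
    survivors = [b for i, b in enumerate(stripe) if i != lost]
    assert recover_missing_block(survivors) == stripe[lost]
    print("lost block rebuilt from the surviving blocks")
```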
Navvie - Friday, January 31, 2014 - link
I'd be very, very interested in seeing this software!

Solandri - Friday, January 31, 2014 - link
We did that on Usenet in the 1990s. When posting a big binary (e.g. a TV show episode) you had to break it up into multiple parts to fit within the Usenet post length limit. So you might break the TV show into 50 compressed archive files (usually RAR). The problem was Usenet would frequently fail to propagate a file. So even though you posted 50, many sites might only get 49 or 47. The solution was to add parity files. So you'd post the original 50 archive files (RAR) and 5 parity files (PAR). Any 50 of those 55 files would allow you to recreate the original video file. You could vary the number of parity files, but about 10% was typical.
When I was backing up stuff to DVD, I found and downloaded newer versions of the old parity programs. I broke up my backups into enough archive files and parity files that I could lose large portions of several disks, or even an entire disk, and still recover my backup. Your block-level parity scheme sounds like it would be more robust and transparent, but I only had to use freely downloadable tools.
http://en.wikipedia.org/wiki/Parity_file
Navvie - Monday, February 3, 2014 - link
When I read patrickjchase's comment my first thought was "that's exactly like Usenet."

peter64 - Friday, January 31, 2014 - link
Yes, thank you Dell for making devices that are easily user-upgradeable. I hate all these other notebooks being completely sealed. You can't even replace the battery.

peter64 - Friday, January 31, 2014 - link
I bet if Dell hadn't put that removable optical drive in there, your notebook wouldn't have a 2nd hard drive at all. Thanks, Dell, for giving people options and post-purchase upgradeability in these times of sealed, non-user-upgradeable devices.
Johnmcl7 - Thursday, January 30, 2014 - link
I think it would have been the ideal solution for a single drive perhaps last year, but now it's too late and too expensive, as the Crucial M500 960GB is under £330. While that's still a bit more expensive than this drive, it's much neater (one drive instead of two) and I assume power consumption and heat would be better as well. That's the option I'd go for on a machine now. If they'd managed a 2TB drive for this price it would be a lot more attractive, as that would put it beyond what's affordable with SSDs at the moment. I realise there are technical difficulties with 2TB 2.5in drives (I don't know if there are any standard drives available in this capacity), but they have to move forward at some point.