Intel Optane Memory: How to make revolutionary technology totally boring

One day 3D XPoint could change the world. But not like this it won’t.

Intel Optane Memory. Engineering sample, but we hope it’s the same as retail hardware.

3D XPoint (pronounced “crosspoint,” not “ex-point”) is a promising form of non-volatile memory jointly developed by Intel and Micron. Intel claims that the memory, which it’s branding Optane for commercial products, provides a compelling mix of properties putting it somewhere between DRAM and NAND flash.

The first Optane products are almost here. For certain enterprise workloads, there’s the Intel Optane SSD DC P4800X, a 375GB PCIe card that offers substantially lower latency than comparable flash drives and can boast high numbers of I/O operations per second (IOPS) over a much wider range of workloads than flash. Intel isn’t letting reviewers actually use the P4800X, however; the first testing of the hardware, published earlier this week, was performed remotely using hardware on Intel’s premises.

For the consumer, there’s Intel Optane Memory. It’s an M.2 PCIe stick with a capacity of 16GB ($44) or 32GB ($77), and it should be on sale today. Unlike the P4800X, Intel is letting reviewers get hold of Optane Memory, or at least something close to it: the part we received was branded “engineering sample,” with no retail branding or packaging. The astute reader will note that 16 or 32GB isn’t a whole lot of storage. Although the sticks can be used as conventional, if tiny, NVMe SSDs, Intel is positioning them as caches for spinning disks. Pair Optane Memory with a large, cheap hard disk, and the promise is that you’ll get SSD-like performance—some of the time, at least—with HDD-like capacity.

Mysterious memory

Detailed descriptions from Intel of how Optane works are still notable by their absence—the company seems to have said more about what Optane isn’t than what it is—but a basic picture is slowly being built from what Intel and Micron have said about the technology. The memory has a kind of three-dimensional (hence “3D”) lattice structure (hence “XPoint”). Stackable layers have wires arranged in either rows or columns, and at the intersection of each row and column is the actual storage element: an unspecified material that can change its resistance to different values. The details of how it does this are unclear; Intel has said it’s not a phase-change material, and it’s different from HP’s memristor tech, but it hasn’t said precisely what it is.

Critically, the resistance change is persistent. Once a cell has had its value set, it’ll continue to hold that value indefinitely, even if system power is removed. While we don’t know how the resistance change works, one thing we do know is that, unlike DRAM, 3D XPoint doesn’t need a transistor for each data cell, which gives rise to Optane’s next important property: it’s a lot denser than DRAM, with Intel and Micron variously claiming a density improvement of four to ten times.

The value stored in each data cell can also be written and rewritten relatively easily. NAND flash requires a very high voltage to erase each cell, which allows a cell to be written only once (flipping its value from a 1 to a 0) before it needs to be erased again. 3D XPoint cells, by contrast, can have their resistance (and hence stored value) updated between 1 and 0 and back again without needing any erasure step.

Optane and 3D XPoint memory are designed to blur the line between memory and storage. These new consumer drives are really only about storage, though.

To cope with the high voltages, which slowly damage NAND flash over time, and lack of rewritability, NAND is structured in a very particular way. It’s organized into pages of up to 4,096 bytes, with pages then organized into blocks of up to 512 kilobytes. Reads and writes are performed a page at a time, with erases happening not at page but at block granularity. This creates issues such as “write amplification,” where writing a single byte to a page could require an entire block to be erased and rewritten. Optane, however, can be read and written at (potentially) the granularity of a single bit. Eventually, Intel and Micron plan to offer DIMMs based on Optane to take advantage of this RAM-like granular access.
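The write-amplification penalty is easy to see with a toy model. This is a deliberately simplified worst-case sketch using the page and block sizes mentioned above; real flash translation layers are far smarter about avoiding full-block rewrites.

```python
# Toy model of NAND write amplification. Page and block sizes are the
# ones described above; real flash translation layers mitigate this.
PAGE_SIZE = 4096                      # bytes per page
PAGES_PER_BLOCK = 128                 # 128 pages * 4KB = 512KB blocks
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK

def write_amplification(user_bytes: int) -> float:
    """Worst case: the target block is full, so changing even one byte
    means erasing and rewriting the entire 512KB block."""
    return BLOCK_SIZE / user_bytes

print(write_amplification(1))          # one user byte costs a 512KB rewrite
print(write_amplification(PAGE_SIZE))  # even a full-page write costs 128x
```

A byte-addressable medium like 3D XPoint sidesteps this entirely: the amplification factor for a one-byte update can, in principle, be 1.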

Being storage-like, Optane Memory doesn’t offer bit-level access—it is organized into 512-byte “sectors” instead—but it nonetheless avoids the extreme write amplification of flash, and Intel claims write endurance that’s perhaps ten times better than flash’s.

Optane is also a lot cheaper than RAM. While $77 would be a lot to pay for 32GB of NAND flash, it’s much less than you’d expect to pay for that amount of RAM.

In its server part, Intel is using Optane to offer a performance profile that flash doesn’t quite match. Flash SSDs can achieve very high numbers of IOPS, but to do so they tend to need large queue depths; that is to say, they need software that issues a whole lot of I/O operations concurrently, so the SSD can service them at least partially in parallel. Some drives need 32, 64, or even 128 I/O operations in flight at the same time to achieve their best numbers. The Optane P4800X can hit very high IOPS numbers without needing these deep queues, and its latency, the time taken to respond to each I/O operation, tends to be much lower than a comparable flash SSD’s. For certain kinds of server workload, this can be valuable, even in spite of the price premium that Optane commands over NAND flash.
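The relationship between queue depth, latency, and IOPS is just Little’s Law: sustained throughput equals operations in flight divided by average latency. A sketch with illustrative latencies (not measured figures from these drives):

```python
def iops(queue_depth: int, latency_s: float) -> float:
    """Little's Law: sustained IOPS = operations in flight / latency."""
    return queue_depth / latency_s

# Illustrative numbers only: a flash SSD at ~100us per 4K random read
# needs a deep queue to post big IOPS figures, while a lower-latency
# device reaches high IOPS with a single operation in flight.
print(iops(32, 100e-6))  # deep queue: hundreds of thousands of IOPS
print(iops(1, 100e-6))   # queue depth 1: an order of magnitude fewer
print(iops(1, 10e-6))    # lower latency recovers the IOPS at QD1
```

This is why low latency matters so much for a cache: most desktop software issues one I/O at a time and waits, so it never builds the deep queues flash needs.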

Hybrids have been done before

In the consumer space, however, the Optane advantage is less obvious. The basic principle of hybrid drives is reasonable enough. Spinning magnetic disks have an enormous advantage in terms of absolute capacity and price per gigabyte, but we’re all familiar with their downside: relatively low transfer speeds and access times that are, in computer terms, epochal. A spinning disk can take tens of milliseconds to perform an I/O operation, orders of magnitude longer than SSDs can manage. Hybrid drives offer a kind of best of both worlds. The large spinning disk offers abundant capacity for rarely used and performance-insensitive data, and the small SSD acts as a cache, providing lightning-fast access to the files that get regularly used.

A number of manufacturers offer hard disks with flash embedded within them, offering an easy one-piece hybrid solution. Intel’s storage controllers built into its motherboard chipsets have also long offered a hybrid disk system, called SRT (“Smart Response Technology”). First introduced with the Sandy Bridge Z68 chipset in 2011, SRT allows more or less arbitrary pairings of SSD and spinning disk to be combined into hybrid disks (Apple’s “Fusion Drives” are conceptually similar but technologically unrelated).

For reasons that aren’t immediately obvious to me, Intel has always kept SRT gated behind its pricier chipsets. Naively, one would think that SRT’s widest appeal would be to low- and mid-range systems, where cost constraints make it infeasible to offer large quantities of SSD storage. But Intel evidently feels otherwise. At its debut, SRT was only offered in the high-end Z68 chipset. In the chipset generations that followed Sandy Bridge, Intel did expand SRT availability to certain lower-end chipsets, but even today, the feature is not universal across the Kaby Lake lineup. The company has five Kaby Lake chipsets, from Z270 at the high end, through Q270, H270, Q250, and B250 at the low end. Only Z270 and Q270 support SRT; the other three do not.

Typically, these hybrids (whether integrated or using SRT) offer substantial boosts to things like starting Windows and applications, but if your workload is diverse enough, their caching ability is curtailed. SRT is limited to a maximum of 64GB and Optane (currently) to 32GB. If your set of hot, regularly used programs and data fits inside 64 or 32GB, then it can all be expected to reside in the cache. But if you play a handful of large modern games, the performance becomes much more hard disk-like. The cache simply isn’t big enough to hold several 50GB games in their entirety, forcing the system to hit the spinning disk to load them.
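The cliff the cache falls off is easy to model: average performance is a hit-rate-weighted blend of cache speed and disk speed, and the hit rate collapses once the hot set outgrows the cache. A rough sketch, using illustrative stand-in speeds rather than measurements, and assuming uniformly random access across the working set:

```python
def avg_random_read_mb_s(working_set_gb: float, cache_gb: float,
                         cache_mb_s: float = 200.0,
                         hdd_mb_s: float = 1.0) -> float:
    """Hit-rate-weighted blend of cache and disk random-read speed.
    Assumes uniformly random access; the speeds are illustrative."""
    hit_rate = min(1.0, cache_gb / working_set_gb)
    return hit_rate * cache_mb_s + (1.0 - hit_rate) * hdd_mb_s

print(avg_random_read_mb_s(16, 32))   # hot set fits: full cache speed
print(avg_random_read_mb_s(150, 32))  # three 50GB games: mostly misses
```

Real caches do better than the uniform-access assumption suggests, because access patterns are skewed toward a small hot set, but the shape of the curve is the same: once the working set dwarfs the cache, you are mostly running at hard disk speed.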

The hybrids also tend to do little to improve things like software installation time. Software installers won’t be cached (because in general you only use them once) and so all the reading the installer does (and the subsequent writing of the installed programs back to the disk) operates at hard disk speeds.

From a technical, functional perspective, Optane Memory hybrid drives appear to be substantially identical to SRT hybrid drives before them. The basic setup process is the same: the Optane NVMe stick is paired with a spinning disk as an accelerator. With SRT, the SSD could be configured as either a write-back cache (wherein writes are written to the SSD and only lazily flushed to the HDD at the system’s leisure) or a write-through cache (wherein writes are written to the SSD and HDD in parallel). Write-back mode offers acceleration of writes, as they can operate at near-full SSD speed, but comes with a risk: if the HDD and SSD are separated, the data on the HDD may be missing, stale, or corrupt, because the latest data to be written is found exclusively on the SSD. Write-through mode is slower, since writes can only happen at HDD speed, but means that the HDD always contains a complete, usable, up-to-date copy of all your information.
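The difference between the two policies comes down to when the HDD sees each write. A toy sketch of both modes; plain dictionaries stand in for the two devices, and the class and names are my illustration, not anything from Intel’s software:

```python
class HybridCache:
    """Toy model of write-back vs. write-through hybrid caching.
    Dicts mapping block number to data stand in for the devices."""

    def __init__(self, write_back: bool):
        self.ssd, self.hdd = {}, {}
        self.write_back = write_back
        self.dirty = set()          # blocks newer on the SSD than the HDD

    def write(self, block: int, data: bytes) -> None:
        self.ssd[block] = data      # the cache always gets the write
        if self.write_back:
            self.dirty.add(block)   # HDD is now stale for this block
        else:
            self.hdd[block] = data  # write-through: both stay in sync

    def flush(self) -> None:
        """What disabling the hybrid does: bring the HDD up to date."""
        for block in self.dirty:
            self.hdd[block] = self.ssd[block]
        self.dirty.clear()

wb = HybridCache(write_back=True)
wb.write(0, b"hello")
print(0 in wb.hdd)   # False: the HDD is stale until a flush
wb.flush()
print(0 in wb.hdd)   # True: separating the drives is now safe
```

The model makes the failure mode obvious: in write-back mode, the HDD’s copy of a dirty block is missing or stale until `flush()` runs, which is exactly why the drives can’t be safely separated without it.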

Optane appears to only offer write-back mode. If you want to split the drives up, you’ll first have to disable Optane through the system firmware or through Intel’s management utility. This flushes the cached data to the spinning disk, bringing it up to date. If you disconnect the hard disk without going through this process, the Optane will be marked as offline.

Intel’s infamous arbitrary limitations

But there is one difference between Optane and SRT that isn’t technical, and that’s compatibility. Unlike SRT, which is restricted only to high-end chipsets, Optane is available to every Intel chipset—just as long as it’s a Kaby Lake 200-series chipset paired with a Kaby Lake (7th generation Core) processor. This means that a chipset such as the low-end B250 will let you create a hybrid out of Optane and a hard disk but won’t let you create a damn-near identical hybrid out of a flash SSD and a hard disk.

There appears to be no particularly good reason for this; it’s simply that Intel was caught by conflicting demands. On the one hand, it wants to keep SRT as a “high-end” feature (even though it’s the low-end and mid-range audience that stands to gain the most from SRT). On the other hand, it wants to maximize the potential demand for Optane. And on the gripping hand, it wants to create an extra incentive to upgrade to Kaby Lake, which would otherwise be only a minor refresh of Skylake; tying a supposedly desirable feature to Kaby Lake (and, eventually, newer CPUs and chipsets) helps create that incentive.

The review system Intel sent us to test Optane uses none other than the B250 chipset. Optane-enabled, certainly, but SRT-disabled. In its press presentation announcing Optane Memory, Intel made plenty of comparisons between an Optane hybrid and a plain HDD, and, naturally, Optane looked good. But surely the more relevant, significant comparison would be between an Optane hybrid and a (much cheaper) NAND flash hybrid. Alas, it’s not a comparison I can make; I don’t have a Z270 or Q270 motherboard on hand.

One might well wonder why, of all the possible motherboards it could include in its review systems, Intel opted to pick one that made the obvious direct comparison impossible.

It performs like a hybrid disk

The Optane in the review system is paired with a 1TB Western Digital Black drive with a 7,200 rpm spindle speed. This is a mid-range disk with generally decent performance and, I’d argue, a little better than what one might expect to see used in a bargain-basement B250 system (the 5,400 rpm WD Blue drives are considerably cheaper).

The Optane hybrid performed in much the way you’d expect of a hybrid disk. Because it’s a cache, pretty much anything you do for the first time takes place at hard disk speeds, but after repeating a task a few times, things settle down to cached Optane speed. The easiest I/O-intensive workload that most of us run into from time to time is rebooting Windows, and here the Optane was remarkable. Rebooting from the hard disk alone took an average of about 56 seconds from the moment I hit “reboot” to the moment the desktop appeared. With Optane enabled, this eventually settled down at a hair under 20 seconds. That’s a difference that’s very noticeable and very welcome.

Thing is, I’d expect a flash SSD-based hybrid using SRT to offer pretty much the same improvement. Flash SSDs were a lot slower back when Z68 hit the market, but even that first generation of SRT showed huge gains in boot time. I just can’t make the direct comparison because of Intel’s extreme product segmentation.

On a couple of occasions while rebooting, a strange progress screen popped up, too. I don’t know exactly what provoked it, and the picture I captured is not the best (it happened just after the firmware had finished, long before the print screen key does anything useful), but it looks as if something somehow upset the status of the hybrid, and it had to flush the cache or verify its integrity or something.

On a couple of occasions, this message appeared when booting the system. I’m not entirely sure what it’s doing, or what these phases are, or why the appearance seemed to be random.

CrystalDiskMark (a convenient front-end for Microsoft’s free and open source DiskSpd benchmarking tool) made the peculiar performance profile of a hybrid disk apparent: as the test data grows larger, performance becomes more and more hard disk-like. CrystalDiskMark tops out at 32GB of test data, so it’s only barely enough to overwhelm the 32GB cache Intel supplied, but even this was enough to show how performance degrades when non-cached data is being used. Although the sequential performance remained admirably Optane-like, the random read and write performance fell off substantially. Moving from a 1GB data set (fully cacheable) to a 32GB data set, random read performance fell from 200MB/s to 46MB/s, though writes held steady.

What’s more, after this large CrystalDiskMark run, reboot performance suffered—Windows itself was no longer cached, so it could no longer load quickly. A reboot fixed it, of course.

Running CrystalDiskMark against the raw Optane (no hybrid) reinforced the findings of the P4800X reviews. In terms of sheer sequential read performance, it’s a little behind the 1TB Samsung 960 EVO. In terms of sequential write performance, it’s actually a long way behind the flash SSD (this could be because for some reason it prohibits Windows from enabling write caching; I’m not sure). But the Samsung needs high queue depths to really shine. With a queue depth of 32, the Samsung manages 630MB/s of random read performance. Cut the queue depth down to 1 and that drops to just 54MB/s. The Optane showed a similar 636MB/s with a queue depth of 32, but it only fell to 300MB/s with a queue depth of 1. Random reads with short queue depths—the perfect workload for a cache drive—are clearly a strength of Optane relative to flash.

Narrow audiences

So here’s the thing. The 32GB Optane costs $77. The WD Black hard disk is $73 from Amazon right now. That’s $150 in total. For $139, Amazon is selling a 250GB Samsung 960 EVO. Clearly, 250GB is not as big as 1TB. If you really need the space and you’re on a tight budget, maybe the hybrid is the way to go. But if 250GB is enough for your needs, the plain SSD is the better bet.

While I can’t use SRT in the B250 motherboard, I can use a regular NVMe SSD. I happen to have a 1TB 960 EVO on hand. The Windows reboot cycle takes about 20 seconds. Difference is, it always takes about 20 seconds. There’s no need to reboot a couple of times to prime the cache. Every read and every write is fast, because there’s no HDD at all, only the SSD. The 1TB model is a little quicker than its 250GB sibling, but the 250GB part isn’t slow. And unlike a hybrid drive—any hybrid drive—the SSD is always fast.

It’s not that the Optane hybrid doesn’t work; it does work. Of course it works. SRT is six years old now, and Optane Memory is basically the same thing. It works fine, and it works in the way I expected it to work. And I could see it making sense in situations where the cost differential is more significant. In fact, my own personal use case could fit; I have two 4TB spinning disks in a mirrored pair. I don’t like messing about with partitions or having Windows on an SSD and other things on spinning rust, because life’s too short to micromanage my storage in that way, and for the time being at least, 4TB of mirrored flash is out of my price range. So everything just goes onto the 4TB disks. Sticking a hybrid accelerator in front of that would make a lot more sense to me.

But I think this is a pretty unusual use case. Most people don’t have that much disk space and don’t need that much disk space. The 250GB SSD is going to provide a better experience (because it’s always fast, rather than only sometimes fast), and it’s going to do it at a lower cost.

And even if a hybrid is really the right option, is an Optane hybrid really going to offer any benefit over a flash SSD hybrid? I’d love it if Intel had provided the equipment to answer this definitively, and I’m more than a little suspicious that it didn’t. For $71 on Newegg, I can get a 128GB Intel NVMe SSD. Half of that would be wasted with SRT, because SRT is capped at 64GB of cache, but that’s still twice the cache size for a little less than the $77 for 32GB of Optane. Its random read performance won’t be as good as Optane’s, sure. Does it matter? Probably not, because it still beats the snot out of a spinning disk.

Optane is certainly a good cache, and I can believe that maybe Optane is a little better as a cache than flash, but I’m not convinced that it’s sufficiently better as to justify spending more money to get an Optane cache rather than an SRT cache, and I’m wholly unconvinced that caching is the right approach for most users.

3D XPoint is interesting technology. One way or another, byte addressable storage and non-volatile RAM are likely to become mainstream, widespread technologies. They may use 3D XPoint; they may use one of the other competing technologies offering comparable properties. They’ll shake up operating system and software design when they do; we might have computers where RAM and the file system are one and the same thing, where even multiterabyte databases can be queried in “storage” rather than having to spool their data through RAM first. Our ultraportable laptops may start packing tens or hundreds of gigabytes of “RAM.” The specifics are hard to predict, but 3D XPoint, or something like 3D XPoint, is sure to open up all sorts of novel computing possibilities.

But Intel Optane Memory? It’s the most uninspiring use of this tech. Rather than showcasing the new capabilities that 3D XPoint brings to the table, it simply highlights how wretched Intel’s product segmentation is. It’s at best an incremental improvement over SRT, and, for the money, most people are probably going to be better off with a plain flash SSD than a hybrid disk anyway. 3D XPoint may yet turn out to be something good, perhaps even something world-changing. But this ain’t it.

