Our first look at the Intel Optane SSD 900p covered only the smaller 280GB capacity. We have now added the 480GB model to our collection, and have started analyzing the power consumption of the fastest SSDs on the market.

This second look at the Optane SSD 900p doesn't change the overall picture much. As we speculated in our initial review, the design of the Optane SSD and its 3D XPoint memory means that performance does not scale with capacity the way most flash-based SSD designs do. The Optane SSD 900p uses a controller with seven channels for communicating with the 3D XPoint memory. The difference between the 280GB and 480GB models comes down to three versus five 3D XPoint dies per channel. Of the 28 memory package locations on the PCB, the 280GB model populates 21 with single-die packages. The 480GB model uses all 28 spots, and half of the packages on the front are dual-die packages.
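The die arithmetic above can be sanity-checked with a quick sketch. The package counts come straight from the PCB description; the per-channel split is simple division:

```python
# Die counts for the Optane SSD 900p, based on the package layout
# described above: 21 single-die packages on the 280GB model, and
# 28 packages on the 480GB model, 7 of which (half of the 14 on the
# front side) are dual-die.
CHANNELS = 7

dies_280gb = 21 * 1           # 21 single-die packages
dies_480gb = 21 * 1 + 7 * 2   # 21 single-die + 7 dual-die packages

print(dies_280gb // CHANNELS)  # 3 dies per channel (280GB)
print(dies_480gb // CHANNELS)  # 5 dies per channel (480GB)
```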

A single NAND flash die isn't enough to keep one of the controller's channels busy, because flash takes many microseconds to complete a read or write command, and even longer for erase commands. By contrast, 3D XPoint memory is fast enough that there is little to no performance to be gained from overlapping commands to multiple dies on a single channel. Increasing the number of dies per channel on an Optane SSD affects capacity and power consumption but not performance.
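A rough back-of-the-envelope model illustrates why interleaving multiple dies per channel helps NAND but does little for 3D XPoint. The latency figures below are illustrative assumptions for a 4KB read, not measured values:

```python
import math

# Illustrative (assumed) timings for a 4KB read, in microseconds:
BUS_TRANSFER_US = 8    # time the channel is occupied moving the data
NAND_READ_US = 70      # assumed NAND array read latency
XPOINT_READ_US = 1     # assumed 3D XPoint media read latency

def one_die_channel_utilization(media_read_us, bus_transfer_us):
    """Fraction of time the channel stays busy with only one die attached."""
    return bus_transfer_us / (media_read_us + bus_transfer_us)

def dies_to_saturate_channel(media_read_us, bus_transfer_us):
    """Dies needed so the channel never idles waiting on the media."""
    return math.ceil((media_read_us + bus_transfer_us) / bus_transfer_us)

# With slow NAND, a lone die leaves the channel ~90% idle, so roughly
# ten interleaved dies are needed to keep it saturated. With 3D XPoint,
# a single die already keeps the channel almost fully occupied.
print(one_die_channel_utilization(NAND_READ_US, BUS_TRANSFER_US))    # ~0.10
print(one_die_channel_utilization(XPOINT_READ_US, BUS_TRANSFER_US))  # ~0.89
print(dies_to_saturate_channel(NAND_READ_US, BUS_TRANSFER_US))       # 10
```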

Intel recently let slip the existence of 960GB and 1.5TB versions of the 900p, through the disclosure of a product change notification about tweaks to the product labeling. The specifications for the larger capacities have not been confirmed but likely match the smaller models in every respect except power consumption. Since Intel has not officially announced the higher capacities yet, no MSRP or release date is available.

Our SSD power measurement equipment burned out right before our first Optane SSD arrived. As of this week, we have newer and much better power measurement equipment on hand: a Quarch XLC Programmable Power Module. We'll explore its capabilities more in a future article. For now, we're filling in the missing power measurements from the past several reviews. Both of the Optane SSD 900p models have been re-tested with the Quarch power module on the entire test suite except for The Destroyer (so far). We haven't yet thoroughly validated the new power measurements against the results from our old meter so there may be some discrepancies, but the Optane SSDs draw so much power that any minor differences won't matter to this review. Everything that was tested with the old meter will eventually be re-tested on the Quarch power module, but we don't expect significant changes except to idle power measurements (where the Quarch power module should offer higher resolution).

Our first review of the Optane SSD 900p included a few puzzling results, most notably slightly higher performance when the ATSB Heavy and Light tests were run on a filled drive than when the Optane SSDs were freshly erased. One potential factor for this has since come to light: After first being powered on, Intel Optane SSDs perform a background data refresh process. This isn't necessary unless the SSD has been powered off for a long time, but the drive has no way to know how long it was without power. The documentation for the 750GB Optane SSD DC P4800X states this process can take up to three hours. We have not observed any clear transition in idle power during the first few hours after power-on, but there are occasional short periods where idle power drops by around 350-400mW (from around 3.5W).

Without a conclusive indication of whether background data refresh is happening and influencing benchmark results, we've re-tested the 280GB Optane SSD 900p for this review. Before running the synthetic benchmarks, the 900p was allowed to sit at idle for at least three hours. The ATSB tests were also conducted after an extended idle period, but the test system was rebooted between ATSB tests. Even with this precaution, there's still significant variability between test runs on the Optane SSD 900p, and full-drive performance is often still better than freshly-erased performance, so another factor appears to be contributing to this behavior.

If you are staying with a single-thread submission model, Windows may well have a decent-sized advantage with both IOCP and RIO. Linux kernel AIO is just such a crapshoot that it's really only useful if you run big databases and set them up properly.

Don't hold your breath for an M.2 version of the 900p, or anything with performance close to the 900p. Future Optane products will require different controllers in order to offer significantly different performance characteristics.

Not necessarily. Optane Memory devices show that random performance is on par with the 900P. It's the sequential throughput that limits top-end performance.

While it's plausible that load power consumption might be affected by performance, that's not always true for idle. Idle power consumption can be cut significantly (to tens of milliwatts) by using a new controller. It's reasonable to assume the 900P uses a controller derived from the SSD 750's, which is also power hungry.

You are still confusing load power consumption with idle power consumption. What you said makes sense for load, when the drive is active, not for idle.

Optane Memory devices having one-third the idle power demonstrates it's due to the controller. They likely wanted something with a short time to market, so they took whatever controller they had and retrofitted it.

Optane's very nature as a heat-based phase-change material is always going to result in higher power use than NAND, because it will always take more energy to heat a material than to create a magnetic or electric field.

I'm curious whether it's possible to get more IOPS doing random 512B reads, since that's the sector size this drive advertises.

When the description of the memory technology itself came out, bit addressability (not having to read any minimum block size) was a selling point. But it may be that the controller isn't actually capable of completing any more 512B reads per second than 4KB ones, even if the memory and the bus could handle it.

I don't think any additional IOPS you get from smaller reads would help most existing apps, but if you were, say, writing a database you wanted to run well on this stuff, it would be interesting to know whether small reads help.
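One way to probe this would be a simple single-threaded timing loop comparing 512-byte and 4KB random reads. A minimal sketch follows; the device path is a placeholder, and on a real drive you would want `O_DIRECT` on the raw block device to bypass the page cache:

```python
import os
import random
import time

def measure_iops(path, block_size, num_reads=10000):
    """Time single-threaded random preads of block_size bytes."""
    fd = os.open(path, os.O_RDONLY)
    try:
        file_size = os.fstat(fd).st_size
        # Align offsets to the block size, as a real device would require.
        max_block = file_size // block_size
        offsets = [random.randrange(max_block) * block_size
                   for _ in range(num_reads)]
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, block_size, off)
        elapsed = time.perf_counter() - start
        return num_reads / elapsed
    finally:
        os.close(fd)

# Placeholder path -- point at the file or block device under test:
# print(measure_iops("/dev/nvme0n1", 512))
# print(measure_iops("/dev/nvme0n1", 4096))
```

If the 512B rate comes out no higher than the 4KB rate, that would suggest the controller, not the media, is the bottleneck for small reads.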

Those latencies seem pretty high. Was this with Linux or Windows? The table on page one indicates both were used. Can you run a few of these tests against a loop-mounted RAM block device? I'm curious to see what the min, average, and standard deviation of latency look like when the block layer is involved.