I don't really see how they would improve upon this; it's really just solving the inconvenience of keeping the system and frequent apps on the SSD and data on the HDD. I'm sure Apple already has a great algorithm for deciding what gets to be on the SSD based on your system usage, though I hope there is an option to put new apps directly on the SSD on install (as I would want when installing games etc. on my Windows PC). I hope Windows 8 packs similar functionality, because to the non-power user it's not as trivial as it may seem to me.

If the drive is presented as a single volume, I don't see how you could request that a new app be installed there, and why should you? Shouldn't the automatic tiering automatically move it to the SSD if warranted (used a lot)?

I would have expected a larger write buffer than 4GB... In fact, I would have expected writes to work much like reads, with all writes going entirely to the SSD and files being moved to the slower HDD later if they weren't frequently accessed.

Well, once it's been written to the write buffer, I assume the OS will decide whether to keep it there or move it to the rotational HDD later.

And I don't see why more than 4GB would be necessary. It truly is intended to be just a scratchpad/clipboard of sorts. As long as it can easily handle typical workloads, the user wouldn't see any improvement from a larger buffer.

I would think that media editors would benefit from a larger write cache, as would anyone transferring a folder larger than 4GB, which tends to happen not too infrequently in my experience xP I think a 12GB write cache would have been pretty decent and would cover a lot of file-move scenarios.

I am not sure why file moving is so important. Moving from where to where? Even with a Thunderbolt connector, you'd need an external SSD to see a difference; otherwise either the connector (i.e. USB 3) or the external storage would be the bottleneck.

The 4GB write cache is for one edge case, which is that programs like to write lots of little files - file locks, temporary small stuff, whatever. The most recently used algorithm would miss those because they are brand new files - there's no usage data available. So they all go in that 4GB cache first - it's a great solution IMO.

For all other cases, the SSD and HDD dynamically re-arrange their contents. So if you're editing a movie, that movie will be on the SSD, for example. (I was about to bring up the case of a 300MB photoshop file but then realized that easily fits in that 4GB buffer too... whoops... 4GB is quite a bit, only movie editing will really exceed that).
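The new-file edge case described in this thread can be sketched in a few lines. This is a toy model, not Apple's actual code; everything beyond the 4GB figure is made up for illustration:

```python
# Toy model of Fusion Drive's 4GB SSD write buffer (illustrative only).
# A brand-new file has no access history for the frequency-based pinning
# logic to consult, so it is staged on the SSD buffer first if it fits.

class ToyWriteBuffer:
    def __init__(self, capacity_gb=4.0):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0

    def write(self, size_gb):
        # New data lands in the SSD buffer when there is room...
        if self.used_gb + size_gb <= self.capacity_gb:
            self.used_gb += size_gb
            return "ssd-write-buffer"
        # ...and goes straight to the HDD when the buffer can't hold it.
        return "hdd"

    def flush(self):
        # Later the OS destages buffered writes to the HDD (or keeps
        # them on flash if they turn out to be frequently used).
        self.used_gb = 0.0

buf = ToyWriteBuffer()
print(buf.write(0.001))  # tiny lock/temp file
print(buf.write(30.0))   # 30GB capture exceeds the 4GB buffer
```

Small bursty writes (lock files, temp files) absorb into the buffer; anything larger than the buffer bypasses it, which matches the "only movie editing will really exceed that" observation above.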

The more they use for the write buffer, the less is available for files and programs, I guess. 4GB just for writes should be enough for most people in most cases though; only when you transfer something larger than that would you take a performance hit.

4GB is more than enough - you only need to store the last X seconds of writes and then flush them out. ZFS's ZIL partition for a home filer can be fine even at 2GB.

As for writing everything to SSD and then migrating out slow stuff to HDD, that requires a lot more SSD to work. Veritas's file system vxfs v5 includes dynamic storage tiering (DST) that behaves this way, but you have a target mix of fast/slower storage that is 30/70, not 1/8 or 1/24 like these Apple offerings. There is also a considerable overhead cost to tracking and refreshing file locations with their product.

It still could be block based, could it not? I mean, SRT is block-based yet it caches whole files, not just a few of the blocks that the file takes up :P If a file is on blocks 2, 3, 4, 5, 7, 11 then all of those blocks should be accessed an equal number of times, ensuring the entire file and all of its blocks are transferred.

Absolutely not. The opposite is the case. An algorithm that works on the OS level can make much better decisions on what to keep on the SSD and what to move to the HDD.

It could move all my media files onto the HDD, for example - I might watch movies, listen to music and look at my pictures, but the case that I'd edit these *and* the editing would incur a performance penalty is very small. It could keep all system files on the SSD. And so on. The OS has much more information about how your files might get used so it can make much better decisions.

It could even keep track of which applications benefit heavily from the SSD and which don't, to help make sure a Photoshop install that gets used a couple of times a month doesn't get pushed off the SSD to make room for big 10GB games that get played many times a week.

SSD caching, Intel's failed Turbo Memory, and the like have all failed, for the reason that they tried to be too tricky, or required too much manual effort.

This seems to hit the sweet spot: automatic immediate caching for small amounts (a la Turbo Memory), with large-scale automatic repositioning between drives based on usage.

No manual bookkeeping, but MUCH more benefit than the existing solutions. In all honesty, this is what I thought both Turbo Memory and SSD caching *WERE* until I read more into them. This makes a lot more sense: use the spinning drive as "volume" storage as it should be, then once you figure out which smaller amounts of data should be on the higher-speed drive, move them.

Make new writes of smaller amounts of data go to the SSD, then write them to the spinning drive when workload allows. No risk of losing data like with a "regular" write-back cache.

"SSD caching, Intel's failed Turbo Memory, and the like have all failed, for the reason that they tried to be too tricky, or required too much manual effort."

I don't understand this at all. Have you ever tried Intel's SSD caching, which debuted on the Z68 platform? I'm guessing you are just spewing someone else's thoughts on the platform without ever trying it.

I have used SSD caching on my Z68 platform since I first put it together. After about 5-10 minutes and a few reboots, SSD caching was up and running and incredibly noticeable. Aside from the brief setup, I've never spent another thought on the matter. It's the easiest solution out there. If the SSD dies randomly, guess what? Nothing happens, other than me replacing the SSD. My single drive stores everything and maintains a cache for anything I load often.

Don't you think it's a tad bit early to declare Smart Response to have failed, seeing as it's only been available on notebooks and most desktops since the IVB launch earlier this year? In fact, I've seen a lot of cheaper Ultrabooks using SRT.

Wow, thanks for posting this, Anand! I guess you're right - it's not SSD caching per se (other than the 4GB write cache). With a 128GB SSD array and the OS directing what files to move/exchange from the rotational HDD onto the SSD portion, this looks like it could be an elegant solution.

Given the option, would we regular desktop users want to move to an intelligently managed hybrid hard drive solution? I run Windows Vista on a 128GB OCZ Vertex 3, with a 1TB drive as my other storage drive for applications and media.

However, all of my games were originally installed on my rotational HD by default. If I want Starcraft 2, for example, to be loaded off my SSD, I have to copy over the main game directory and use a directory junction (Windows' version of a UNIX-style symbolic link to a directory) to point to the new SSD directory in order to "fool" Windows into thinking it's still on my rotational drive. And strangely enough, this process BREAKS when it's time to patch or update my game.

If I had a smart algorithm deciding what files were commonly accessed and pulled those files onto my SSD for me, that would be very effective, I feel.

Can Intel's "Smart Response Technology" already do this? Is Apple's Fusion Drive significantly different? Are there any significant downsides to these methods versus manually maintaining an SSD volume and a rotational HD volume, given that you throw at least a 64GB or 128GB SSD into the setup?

Meh, I don't see what benefit it has for enthusiasts... Junction points aren't much of a hassle, I like being in control, and SSD prices are plummeting anyway. I was gonna buy a second 128GB drive soon, to store more games on flash, but I might just jump on a 256GB given the deals I've seen (and with BF looming).

I am an enthusiast (work in IT, built computers for a decade) and I'm with Paulman. Sure, I can keep it mostly organized, but I have better things to do and I'm willing to sacrifice a little bit of performance for convenience. I'd rather be getting some work done than messing around with my files.

Why don't you just reinstall SC2 on your SSD? o_O Seems kind of silly to go through that process again and again when you could do just one thing and have it work.

And Intel's SRT is the exact same thing, except it leaves a copy on the drive for parity by default (I believe you can turn that off). Where Intel's SRT differs is its cache size. Instead of 4GB, you have a max of 64GB, which is used not only to cache writes but to cache frequently used files as well. So instead of having 95% storage and 5% cache, the cache is flexible and configurable up to 64GB. If you have an SSD bigger than 64GB, you just use the rest for storage as you see fit.

So in your case you could, say, use 10GB for Intel's SRT write-caching and the remaining usable space for installing Windows/SC2/web browser/etc. Or if you want something automated, make your cache the full 64GB and it will work just like Apple's Fusion, and the remaining 60GB you can use to install Windows and any other programs/files you don't want getting bumped off the cache due to inactivity.

What would be great is if you could use Fusion Drive on an SSD/HD combo you installed yourself. I have an SSD as my main drive and a HDD as the secondary in the optibay of my Macbook Pro. I would love to have Mountain Lion combine them into one logical unit and manage it for me.

I'm hoping there is an easy way to enable the feature through terminal commands or that someone makes a 3rd party app to do it (see: Trim Enabler).

Since it looks like Apple designed "Fusion Drive" to appear as a single drive and work seamlessly with existing software, I'm guessing things like Time Machine backup and Lion Recovery will still work fine. And even if the OS drive fails, a Time Machine drive has a built-in recovery partition, so you can still restore your files if either drive fails.

"That 4GB write buffer is the only cache-like component to Apple's Fusion Drive. Everything else works as an OS directed pinning algorithm instead of an SSD cache. In other words, Mountain Lion will physically move frequently used files, data and entire applications to the 128GB of NAND Flash storage and move less frequently used items to the hard disk"

That first bit is essentially a hybrid drive like the Seagate Momentus; the rest is exactly like Intel's SSD caching. In fact, that's all it is, with maybe a tweaked algorithm.

I find myself wondering if operating at the file level is any real improvement. The end effect is the same, and it shouldn't matter whether all or part of an application is cached. If something isn't cached, it's because it doesn't get used much. Why waste SSD space on it?

If I use Word a lot but never do a mail merge, it isn't going to affect Word's performance for me if the files for the mail merge function live on the platter drive instead of the SSD.

'OS directed pinning operation'? Isn't that what most modern desktop OSes do with frequently launched apps, caching them to RAM for quicker access? Why is that caching but Fusion isn't?

I thought the concept wasn't worth it as per the last podcast =P I want caching to be great; until we have affordable 2TB SSDs, there will be a benefit in not having to deal with managing two drives. When you install a bunch of stuff, it can be tedious to manage.

Oh, and remember to check if the new 27" can still be used as a monitor as well please!

To say that this isn't SSD caching is purely a semantic argument. The only difference is that this works at the file level rather than the sector level. Other than that it is still a continuing popularity contest to determine which items are worthy of a place on the SSD or are just as well off on the platter drive.

The big difference between this and building a PC with SRT enabled is that the Fusion drive comes pre-imaged with the Mac OS and bundled apps on the SSD, so you get the performance boost immediately instead of having to teach it over the course of usage which files need to be cached. I expect PC OEMs to do the same as big-brand machines start to offer SRT as a factory option.

No, the real difference that makes this not a "cache" is that the files do not live on both the SSD and the HDD. They only live on one or the other. A true "cache" would have all files residing on the HDD, with selected ones also cached to the SSD. (Similar to how a processor's L1/L2/L3 caches simply store the same stuff that's already in RAM.... it doesn't disappear from the RAM when it gets put into the cache.)
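That copy-versus-move distinction is easy to state as code. The function names here are hypothetical, just to make the capacity consequence concrete:

```python
# Cache vs. tier, in capacity terms (illustrative sketch, hypothetical names).

def cache_capacity(hdd_gb, ssd_gb):
    # In a cache (like Intel SRT), the SSD only mirrors data that already
    # lives on the HDD, so it contributes no user-visible capacity.
    return hdd_gb

def tier_capacity(hdd_gb, ssd_gb):
    # In a tier (like Fusion Drive), each file lives on exactly one device,
    # so the capacities add up into one larger logical volume.
    return hdd_gb + ssd_gb

print(cache_capacity(1000, 128))  # SRT-style: still a 1TB volume
print(tier_capacity(1000, 128))   # Fusion-style: one ~1.1TB volume
```

The trade-off: the tier gains capacity but loses the cache's built-in redundancy, since evicting from a cache is free (the HDD copy still exists) while demoting from a tier requires actually moving the data.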

Unless you look under the hood with System Report, the user will only be aware of a single drive volume. This to me screams "SSD CACHE!"

Seriously, this is just building on SRT. Is there any doubt Intel shared their code with Apple to get started? End users get SSD benefits without having to think about what should go where. Where have I heard that pitch before?

I hope it's smart enough not to cache big video files regardless of how often they're played. That would be a horrible waste of SSD space for a type of data that gets little benefit from the medium. Though in a mobile setting it would be advantageous for battery life. Perhaps a control panel for such settings is needed.

When I heard of this yesterday, I assumed it was Apple's implementation of Intel's SRT for OS X. Am I wrong in assuming that? From this article and the comments, it looks like it is somewhat different?

For example, Apple says this isn't caching because the files in the SSD aren't duplicates of the platter drive. They are the sole copies. This means you get more combined space from the two drives. Rather than a 1 TB drive with an invisible 128 GB cache, you have what appears to be a single 1.2 TB volume with two performance levels depending on where data is located. This is the biggest benefit I can see of operating at the file level rather than the sector level as SRT does.

It remains to be seen how much of this is Intel and how much Apple. If it is mostly Intel we should see a version of this for Windows sometime next year as six months is a frequent exclusivity term between Intel and Apple. Intel may make it exclusive to Haswell chip sets to promote those.

It's kind of like the variable throughput of some optical drives. Makers of CD-ROM games for console once had to pay a lot of attention to where a file was on the disc. You wanted code on the fast parts of the disc and video on the slow parts, assuming it wasn't slower than the video playback required. Since the video is linear it doesn't matter how fast it loads so long as the minimum is maintained. But code needed to be on the faster tracks to avoid long pauses between parts of the game.

In general, this is a good thing for users who are easily confused about dealing with multiple drives of differing performance. I really liked the idea of SRT but found it extremely flaky on the two H67 systems where I tried to use it. I gave up and settled for manually managing the placement of different file types on my systems. (It is very easy to make the Windows 7/8 User data directories live anywhere you want so the SSD doesn't get filled with big data files.) Perhaps Microsoft should be like Apple and make Windows more natively aware of the concept rather than leaving it entirely to Intel.

I have done the same thing as you on OS X: I moved VMs, Steam, my iTunes library and other non-SSD-friendly stuff to my HDD with a combination of soft links and moving the iTunes library. But for some of my less technical friends, this technology will be easier.

In the Apple store, the Fusion drive is a $250 upgrade option (over the standard 1TB drive) on the Mac Mini (not available on the base model mind you, just the $800+ models).

So essentially they are charging an additional $250 for a 128GB SSD (probably Toshiba) and enabling some software bit that will decide for you what data gets put on which drive... at a time when consumers can purchase a 128GB Samsung 830 for $80, or 256GB for $160.

I guess we shouldn't be surprised, considering they'll sell you a ($25) external DVD drive for $79, and an upgrade from ($20) 4GB to ($40) 8GB of RAM for $100.

Apple's doing what they always do, charging extra for convenience. Yes, you could seek out and buy cheaper parts, or manage multiple partitions on your own, but if you don't have the time, technical skill, or desire, Apple will handle it for you (for a price).

I see it as similar to the auto industry. Tasks like changing oil or replacing headlights are simple and cheap to do yourself, but an awful lot of people without the time or the skill (or just because messing around with their car scares them) pay someone else to do it.

First, I want to confess that I am not very up to date on NAND hardware, but wouldn't having a dedicated caching area on the SSD be an issue with write wear? Unless the SSD controller / OS handles wear leveling these days.

Yes, but the time scale for losing a serious portion of the drive is measured in years.

Also, most drives have a reserved area for this purpose. The reserved cells are never available to the user and are switched in as cells wear out. This is why you see drives listed as 120 GB instead of 128 GB or 240 GB instead of 256 GB.

It's a trade-off that means the drive should last and function well long into the life span of the typical system. If you had a machine in continuous use for five or more years you might notice the drive losing capacity. Not an issue for most people.

The Fusion Drive reminds me of the Hierarchical Storage Management of IRIX I was using in the good old days... HSM is still in use but is usually relegated to big iron. It would be nice if Apple supported multiple tiers of storage.

For random writes, the 4GB buffer has most of that sorted. The problem is random reads, since those need to come from the SSD. For sequential reads and writes, the HDD isn't that much slower at all.

Again, I think the experience will be hard to measure with any benchmark tools. I am looking forward to Anand's review.

And it's a mystery to me why Seagate hasn't, after years, gotten it together to get really great performance from their XT series. I mean, how long does it take to get it right? A few months is enough, I bet, if you actually try.

The problem for Seagate and other HD makers is getting the full value out of the SSD volume meant moving out of their comfort zone and into higher levels of the OS than their usual product. It also meant much higher price points if they were going to incorporate really large amounts of flash memory. This makes them nervous since they have very price sensitive customers in the big PC OEMs.

I'm sure they'll keep at it, but they've missed their window and that bird has flown. What is happening now is that new computer models are coming with mSATA ports/slots to allow a notebook to have an SRT drive along with a high-capacity HD. Dell, for example, is now including a 32GB mSATA cache as standard on several models.

This isn't as sophisticated as what Apple is doing. It remains to be seen if Intel and/or Microsoft will offer the functionality to enable PC OEMs to match the Fusion feature set. Much of it should be trivial, such as pre-loading the SRT cache volume with the OS files and any other favored items, although what you'd really need to do is mirror the appropriate sectors on the hard drive. At least until a file system level version of SRT is offered.Reply

Although Apple ended up ditching ZFS after running into licensing issues, they started working on the basic underpinnings for a possible new filesystem, and included it in 10.7 Lion, calling it Core Storage.

While Apple's still using Core Storage with the ancient HFS+ filesystem, the new FileVault 2 disk encryption system uses Core Storage to present the encrypted drive as a virtual normal one to the system.

Thanks for providing some actual information on how this works. I've been googling around for a while, apparently none of the other so-called "tech" publications are interested enough to actually ask... Anandtech stands out, once again. Thanks!!!

"Just like all the other tiered/caching SSD and small NAND drive systems that have been around for ages."

Exactly, and where are these systems? They may not be new, but they weren't mainstream; they lived in the domain of mommy's cellar dwellers... While not new, Apple at least made it mainstream. This is what they do...

I read recently that the 1TB Fusion drive is supported with Bootcamp, but the 3TB is not... that's a shame; since the internal drive is not upgradeable, I would have gone with the 3TB Fusion drive. I also want the flexibility of installing Windows via Bootcamp, and see it's not supported on the 3TB drive. The Apple note also indicated that Apple's Disk Utility should not be used for the Bootcamp partition, and to use the Bootcamp utility instead.

Would using Parallels or VMWare Fusion (with Fusion drive? that's sure to be an opportunity for marketing!) get around the Bootcamp and 3TB restriction?

Another question: What about the Emergency Restore partition that started with Mountain Lion? Is that still supported with Fusion >drive< ?

Everyone seems to be overlooking the hdd portion of the Fusion drive. How fast does it spin? Is it 5400 or 7200? At some point this part of the drive will be critical to large file transfer performance.

Anand, do you have any insight into how this drive would handle a 200GB Aperture library? The library appears in the Finder as a single file, but will the fusion drive software be clever enough to know that it can subdivide it to put the bit you're working with on the SSD? This is really the make or break question for the fusion drive for me, if the fusion can make it feel like Aperture is on SSD, but without having to fork out for a massive external SSD then it could be a massive boost for photo work.

Whoever gets one of these macs with a Fusion drive and is willing to "play"... can the SSD and spindle drive be separated using Disk Utility, so that you have two physical disk icons appear on the desktop?

If so, Apple has sure made my life easier with ZFS and Macs that have single drives! SSD as HFS for booting and apps; spindle for data using ZFS filesystems.

I could even create disk0s4 and disk0s5 on the SSD and play with cache/zil.

And you would decide what to put where by what method? Logically, you'd figure out what stuff you access the most and is therefore the stuff that would benefit the most from faster loading.

Surprise! The system does that automatically and saves you the effort. It isn't a deep decision. It's just a matter of the system being better equipped to track file usage than you are and executing on your behalf.
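That kind of usage-driven pinning can be sketched roughly as follows. This is purely illustrative; Apple hasn't published its algorithm, and every name here is invented:

```python
# Sketch of usage-driven pinning (NOT Apple's actual algorithm): count
# file accesses, then greedily pin the most-accessed files that still
# fit on the SSD; everything else stays on the HDD.

from collections import Counter

def pick_pinned(file_sizes_gb, access_log, ssd_gb):
    """Return the paths pinned to the SSD, hottest first."""
    counts = Counter(access_log)
    pinned, used = [], 0.0
    for path, _count in counts.most_common():
        size = file_sizes_gb[path]
        if used + size <= ssd_gb:   # skip anything that won't fit
            pinned.append(path)
            used += size
    return pinned

sizes = {"os": 20, "game": 10, "photoshop": 2, "movies": 300}
log = ["os"] * 50 + ["game"] * 30 + ["photoshop"] * 5 + ["movies"] * 2
print(pick_pinned(sizes, log, ssd_gb=128))
```

Note how the 300GB media library never gets pinned no matter how its access count compares, simply because it can't fit, which lines up with the earlier comments about bulky media naturally living on the HDD.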

Funnily enough, this is something I and a few people I copied from the internet did about a year ago: installing a 128GB SSD, replacing the SuperDrive in a MacBook Pro, and combining both volumes, also with a 4GB write buffer. I've done the same to my gaming computer (Windows). This isn't something Apple just thought of, though maybe they bettered it. Slightly annoying.

I am really looking forward to seeing the new iMac review. Please also spend some words on the 21-inch iMac. Last year most of the words went to the 27-inch; hopefully this review will cover its companion a little more.

Can you only fusion ONE SSD and ONE HDD? I currently have ML running on a 128GB SSD where I also keep my apps, and two 1TB HDDs in RAID 0 for storage. Can I fusion the SSD and the RAID 0 pair? I imagine if this were possible, I'd get the benefits of both fusion *and* RAID 0. True?

Great post... :) Yesterday I installed Adobe software for a client who bought a new 27" iMac, and I was really surprised by the read/write speed. I went into Disk Utility and saw one physical drive of 3TB. I thought it was a SATA drive, but how was it working so fast? Now I understand it's a Fusion Drive. Apple did it well.