Support and Q&A for Solid-State Drives

There’s a lot of excitement around the potential for the widespread adoption of solid-state drives (SSDs) for primary storage, particularly on laptops, and also among many folks in the server world. As with any new technology, its introduction often requires us to revisit the assumptions baked into the overall system (OS, device support, applications) in light of the performance characteristics of the technology in use. This post looks at the way we have tuned Windows 7 to the current generation of SSDs. This is a rapidly moving area and we expect that there will continue to be ways we tune Windows, and we also expect the technology to continue to evolve, perhaps introducing new tradeoffs or challenging other underlying assumptions. Michael Fortin authored this post with help from many folks across the storage and fundamentals teams. --Steven

Many of today’s Solid State Drives (SSDs) offer the promise of improved performance, more consistent responsiveness, increased battery life, superior ruggedness, quicker startup times, and noise and vibration reductions. With prices dropping precipitously, most analysts expect more and more PCs to be sold with SSDs in place of traditional rotating hard disk drives (HDDs).

In Windows 7, we’ve focused a number of our engineering efforts with SSD operating characteristics in mind. As a result, Windows 7’s default behavior is to operate efficiently on SSDs without requiring any customer intervention. Before delving into how Windows 7’s behavior is automatically tuned to work efficiently on SSDs, a brief overview of SSD operating characteristics is warranted.

Random Reads: A very good story for SSDs

SSDs tend to be very fast for random reads. Most SSDs thoroughly trounce traditional HDDs because the mechanical work required to position a rotating disk head isn’t required. As a result, the better SSDs can perform 4 KB random reads almost 100 times faster than the typical HDD (about 1/10th of a millisecond per read vs. roughly 10 milliseconds).

Sequential Reads and Writes: Also Good

Sequential read and write operations range between quite good to superb. Because flash chips can be configured in parallel and data spread across the chips, today’s better SSDs can read sequentially at rates greater than 200 MB/s, which is close to double the rate many 7200 RPM drives can deliver. For sequential writes, we see some devices greatly exceeding the rates of typical HDDs, and most SSDs doing fairly well in comparison. In today’s market, there are still considerable differences in sequential write rates between SSDs. Some greatly outperform the typical HDD, others lag by a bit, and a few are poor in comparison.

Random Writes & Flushes: Your mileage will vary greatly

The differences in sequential write rates are interesting to note, but for most users they won’t make for as notable a difference in overall performance as random writes.

What’s a long time for a random write? Well, an average HDD can typically move 4 KB random writes to its spinning media in 7 to 15 milliseconds, which has proven to be largely unacceptable. As a result, most HDDs come with 4, 8 or more megabytes of internal memory and attempt to cache small random writes rather than wait the full 7 to 15 milliseconds. When they do cache a write, they return success to the OS even though the bytes haven’t been moved to the spinning media. We typically see these cached writes completing in a few hundred microseconds (so 10X, 20X or faster than actually writing to spinning media). In looking at millions of disk writes from thousands of telemetry traces, we observe 92% of 4 KB or smaller IOs taking less than 1 millisecond, 80% taking less than 600 microseconds, and an impressive 48% taking less than 200 microseconds. Caching works!
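The telemetry figures above are cumulative latency fractions, which can be sketched in a few lines of Python. The trace data below is hypothetical, shaped only to mirror the quoted 48% / 80% / 92% figures; it is not real telemetry.

```python
# Sketch: the kind of latency-percentile breakdown quoted above, computed
# over a trace of per-IO completion times. The trace is hypothetical,
# shaped only to mirror the quoted 48% / 80% / 92% figures.

def fraction_faster_than(latencies_us, thresholds_us):
    """For each threshold, the fraction of IOs completing in less time."""
    n = len(latencies_us)
    return {t: sum(1 for x in latencies_us if x < t) / n for t in thresholds_us}

# Hypothetical 100-IO trace: mostly cached writes (hundreds of microseconds)
# with a tail of writes that waited for the spinning media (~10 ms).
trace = [150] * 48 + [450] * 32 + [800] * 12 + [10_000] * 8

print(fraction_faster_than(trace, [200, 600, 1000]))
# {200: 0.48, 600: 0.8, 1000: 0.92}
```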

On occasion, we’ll see HDDs struggle with bursts of random writes and flushes. Drives that cache too much for too long, and then get caught with too large a backlog of work to complete when a flush comes along, have proven to be problematic. These flushes and surrounding IOs can have considerably lengthened response times. We’ve seen some devices take a half second to a full second to complete individual IOs and take tens of seconds to return to a more consistently responsive state. For the user, this can be awful to endure as responsiveness drops to painful levels. Think of it: the response time for a single I/O can range from 200 microseconds up to a whopping 1,000,000 microseconds (1 second).

When presented with realistic workloads, we see the worst of the SSDs producing very long IO times as well, as much as one half to one full second to complete individual random write and flush requests. This is abysmal for many workloads and can make the entire system feel choppy, unresponsive and sluggish.

Random Writes & Flushes: Why is this so hard?

For many, the notion that a purely electronic SSD can have more trouble with random writes than a traditional HDD seems hard to comprehend at first. After all, SSDs don’t need to seek and position a disk head above a track on a rotating disk, so why would random writes present such a daunting challenge?

The answer to this takes quite a bit of explaining; Anand’s article admirably covers many of the details. We highly encourage motivated folks to take the time to read it as well as this fine USENIX paper. In an attempt to avoid covering too much of the same material, we’ll just make a handful of points.

Most SSDs are composed of flash cells (either SLC or MLC). It is possible to build SSDs out of DRAM. These can be extremely fast, but also very costly and power hungry. Since these are relatively rare, we’ll focus our discussion on the much more popular NAND flash based SSDs. Future SSDs may take advantage of nonvolatile memory technologies other than flash.

A flash cell is really a trap, a trap for electrons, and electrons don’t like to be trapped. Consider this: if placing 100 electrons in a flash cell constitutes a bit value of 0, and fewer means the value is 1, then the controller logic may have to consider 80 to 120 as the acceptable range for a bit value of 0. A range is necessary because some electrons may escape the trap, others may fall into the trap when attempting to fill nearby cells, etc. As a result, some very sophisticated error correction logic is needed to ensure data integrity.
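That acceptance-range idea can be sketched with the illustrative numbers from the paragraph above (100 electrons nominal, 80 to 120 accepted). Real controllers sense voltage levels and layer far more sophisticated ECC on top; this is only a toy decoder.

```python
# Sketch: decoding a cell's charge into a bit using an acceptance range.
# The electron counts are the illustrative figures from the text, not
# real device parameters; real controllers sense voltages and add ECC.

ZERO_RANGE = range(80, 121)  # 80..120 electrons decodes as bit value 0

def read_bit(electron_count):
    """Counts inside the range read as 0; lower counts read as 1."""
    if electron_count in ZERO_RANGE:
        return 0
    if electron_count < ZERO_RANGE.start:
        return 1
    raise ValueError("charge out of range: error correction must intervene")

print(read_bit(100))  # 0: freshly programmed cell
print(read_bit(85))   # 0: a few electrons escaped, still within range
print(read_bit(40))   # 1: erased (or badly leaked) cell
```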

Flash chips tend to be organized in complex arrangements, such as blocks, dies, planes and packages, whose size, arrangement, parallelism, wear, interconnects and transfer speed characteristics can and do vary greatly.

Flash cells need to be erased before they can be written. You simply can’t trust that a flash cell has no residual electrons in it before use, so cells need to be erased before filling with electrons. Erasing is done on a large scale. You don’t erase a cell; rather you erase a large block of cells (like 128 KB worth). Erase times are typically long -- a millisecond or more.
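The asymmetry between small page writes and large block erases can be shown with a toy model. The 4 KB page and 128 KB block sizes below are the illustrative figures from the text; real layouts vary by device.

```python
# Toy model of erase-before-write semantics: a page can be programmed once,
# and only a whole-block erase (slow, ~128 KB at a time) makes its pages
# writable again. Sizes are illustrative, taken from the text above.

PAGE = 4 * 1024
BLOCK = 128 * 1024
PAGES_PER_BLOCK = BLOCK // PAGE  # 32 pages share a single erase unit

class FlashBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK

    def erase(self):
        # All-or-nothing, and slow (a millisecond or more).
        self.pages = [None] * PAGES_PER_BLOCK

    def program(self, index, data):
        if self.pages[index] is not None:
            raise RuntimeError("page in use: the whole block must be erased first")
        self.pages[index] = data

blk = FlashBlock()
blk.program(0, b"data")
try:
    blk.program(0, b"overwrite")   # in-place overwrite is impossible
except RuntimeError as e:
    print(e)
blk.erase()                        # reclaims all 32 pages at once
blk.program(0, b"overwrite")       # now the write succeeds
```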

Flash wears out. At some point, a flash cell simply stops working as a trap for electrons. If frequently updated data (e.g., a file system log file) was always stored in the same cells, those cells would wear out more quickly than cells containing read-mostly data. Wear leveling logic is employed by flash controller firmware to spread out writes across a device’s full set of cells. If done properly, most devices will last years under normal desktop/laptop workloads.
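The guiding rule of wear leveling can be sketched in a few lines: steer each new write to the least-worn free block. This is a deliberately naive illustration; real firmware also handles static data migration, bad blocks, and mapping tables.

```python
# Sketch: the heart of wear leveling. Real controller firmware is far more
# involved (static wear leveling, bad-block management, mapping tables),
# but the guiding rule is to spread erase cycles evenly across the device.

def pick_block(erase_counts, free_blocks):
    """Choose the free block that has seen the fewest erase cycles."""
    return min(free_blocks, key=lambda b: erase_counts[b])

erase_counts = {0: 500, 1: 12, 2: 499, 3: 30}  # hypothetical wear state
free_blocks = [0, 1, 3]                        # block 2 still holds live data

print(pick_block(erase_counts, free_blocks))   # 1: the least-worn free block
```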

It takes some pretty clever device physicists and some solid engineering to trap electrons at high speed, to do so without errors, and to keep the devices from wearing out unevenly. Not all SSD manufacturers are as far along as others in figuring out how to do this well.

Performance Degradation Over Time, Wear, and Trim

As mentioned above, flash blocks and cells need to be erased before new bytes can be written to them. As a result, newly purchased devices (with all flash blocks pre-erased) can perform notably better at purchase time than after considerable use. While we’ve observed this performance degradation ourselves, we do not consider this to be a show stopper. In fact, except via benchmarking measurements, we don’t expect users to notice the drop during normal use.

Of course, device manufacturers and Microsoft want to maintain superior performance characteristics as best we can. One can easily imagine the better SSD manufacturers attempting to overcome the aging issues by pre-erasing blocks so the performance penalty is largely unrealized during normal use, or by maintaining a large enough spare area to absorb short bursts of writes. SSD drives designed for the enterprise may have as much as 50% of their space reserved in order to provide lengthy periods of high sustained write performance.

In addition to the above, Microsoft and SSD manufacturers are adopting the Trim operation. In Windows 7, if an SSD reports it supports the Trim attribute of the ATA protocol’s Data Set Management command, the NTFS file system will request the ATA driver to issue the new operation to the device when files are deleted and it is safe to erase the SSD pages backing the files. With this information, an SSD can plan to erase the relevant blocks opportunistically (and lazily) in the hope that subsequent writes will not require a blocking erase operation since erased pages are available for reuse.

As an added benefit, the Trim operation can help SSDs reduce wear by eliminating the need for many merge operations to occur. As an example, consider a single 128 KB SSD block that contained a 128 KB file. If the file is deleted and a Trim operation is requested, then the SSD can avoid having to mix bytes from the SSD block with any other bytes that are subsequently written to that block. This reduces wear.
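The wear savings can be illustrated with a toy calculation: a page must be migrated during block reclaim only if the drive still believes it holds live data, and Trim is what tells the drive otherwise. The hypothetical 32-page block below mirrors the 128 KB example above.

```python
# Sketch: why Trim avoids merge work. When a block is reclaimed, any page
# the drive believes is live must be copied elsewhere first; Trim marks the
# deleted file's pages dead. Toy model of the 128 KB / 32-page example.

def pages_to_copy(valid, trimmed):
    """Pages needing migration: valid on-media data that was not trimmed."""
    return sum(v and not t for v, t in zip(valid, trimmed))

valid = [True] * 32           # the block is full of a now-deleted file
without_trim = [False] * 32   # no Trim: the drive still thinks it is live
with_trim = [True] * 32       # Trim marked every page dead

print(pages_to_copy(valid, without_trim))  # 32: all merged for nothing
print(pages_to_copy(valid, with_trim))     # 0: the block can just be erased
```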

Windows 7 requests the Trim operation for more than just file delete operations. The Trim operation is fully integrated with partition- and volume-level commands like Format and Delete, with file system commands relating to truncate and compression, and with the System Restore (aka Volume Snapshot) feature.

Windows 7 Optimizations and Default Behavior Summary

As noted above, all of today’s SSDs have considerable work to do when presented with disk writes and disk flushes. Windows 7 tends to perform well on today’s SSDs, in part, because we made many engineering changes to reduce the frequency of writes and flushes. This benefits traditional HDDs as well, but is particularly helpful on today’s SSDs.

By default, Windows 7 will disable Superfetch, ReadyBoost, as well as boot and application launch prefetching on SSDs with good random read, random write and flush performance. These technologies were all designed to improve performance on traditional HDDs, where random read performance could easily be a major bottleneck. See the FAQ section for more details.

Since SSDs tend to perform at their best when the operating system’s partitions are created with the SSD’s alignment needs in mind, all of the partition-creating tools in Windows 7 place newly created partitions with the appropriate alignment.
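The alignment calculation itself is simple: round a proposed partition offset up to the next boundary. The 1 MiB alignment below is an illustrative assumption; the post does not state the exact boundary Windows 7's tools use.

```python
# Sketch: rounding a partition's starting offset up to an alignment
# boundary. The 1 MiB alignment is an illustrative assumption; the post
# doesn't state the exact boundary Windows 7's partitioning tools use.

def align_up(offset_bytes, alignment_bytes):
    """Smallest multiple of alignment_bytes that is >= offset_bytes."""
    return -(-offset_bytes // alignment_bytes) * alignment_bytes

ALIGN = 1024 * 1024  # 1 MiB

print(align_up(31_500, ALIGN))      # 1048576: pushed up to the boundary
print(align_up(2 * ALIGN, ALIGN))   # 2097152: already aligned, unchanged
```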

Frequently Asked Questions

Before addressing some frequently asked questions, we’d like to remind everyone that the future of SSDs in mobile and desktop PCs (as well as enterprise servers) looks very bright to us. SSDs can deliver on the promise of improved performance, more consistent responsiveness, increased battery life, superior ruggedness, quicker startup times, and noise and vibration reductions. With prices steadily dropping and quality on the rise, we expect more and more PCs to be sold with SSDs in place of traditional rotating HDDs. With that in mind, we focused an appropriate amount of our engineering efforts towards ensuring Windows 7 users have great experiences on SSDs.

Will Windows 7 support Trim?

Yes. See the above section for details.

Will disk defragmentation be disabled by default on SSDs?

Yes. The automatic scheduling of defragmentation will exclude partitions on devices that declare themselves as SSDs. Additionally, if the system disk has random read performance characteristics above the threshold of 8 MB/sec, then it too will be excluded. The threshold was determined by internal analysis.

The random read threshold test was added to the final product to address the fact that few SSDs on the market today properly identify themselves as SSDs. 8 MB/sec is a relatively conservative rate. While none of our tested HDDs could approach 8 MB/sec, all of our tested SSDs exceeded that threshold. SSD performance ranged between 11 MB/sec and 130 MB/sec. Of the 182 HDDs tested, only 6 configurations managed to exceed 2 MB/sec on our random read test. The other 176 ranged between 0.8 MB/sec and 1.6 MB/sec.
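The heuristic reduces to a single throughput comparison, sketched below with synthetic numbers drawn from the ranges quoted above. The real assessment's IO pattern and duration are not described in the post, so a measured byte total simply stands in for it.

```python
# Sketch: the defrag-exclusion heuristic described above, reduced to its
# threshold check. The measurement itself (IO pattern, duration) is not
# described in the post, so synthetic totals stand in for a real test.

THRESHOLD_MBPS = 8.0

def excluded_from_defrag(bytes_read, elapsed_seconds):
    """True if random-read throughput clears the 8 MB/sec cutoff."""
    return bytes_read / elapsed_seconds / 1_000_000 > THRESHOLD_MBPS

print(excluded_from_defrag(1_200_000, 1.0))   # False: typical tested HDD
print(excluded_from_defrag(11_000_000, 1.0))  # True: slowest tested SSD
```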

Will Superfetch be disabled on SSDs?

Yes, for most systems with SSDs.

If the system disk is an SSD, and the SSD performs adequately on random reads and doesn’t have glaring performance issues with random writes or flushes, then Superfetch, boot prefetching, application launch prefetching, ReadyBoost and ReadyDrive will all be disabled.

Initially, we had configured all of these features to be off on all SSDs, but we encountered sizable performance regressions on some systems. In root-causing those regressions, we found that some first generation SSDs had severe enough random write and flush problems that ultimately led to disk reads being blocked for long periods of time. With Superfetch and other prefetching re-enabled, performance on key scenarios was markedly improved.

Is NTFS Compression of Files and Directories recommended on SSDs?

Compressing files helps save space, but the effort of compressing and decompressing requires extra CPU cycles and therefore power on mobile systems. That said, for infrequently modified directories and files, compression is a fine way to conserve valuable SSD space and can be a good tradeoff if space is truly at a premium.

We do not, however, recommend compressing files or directories that will be written to with great frequency. Your Documents directory and files are likely to be fine, but temporary internet directories or mail folder directories aren’t such a good idea because they get a large number of file writes in bursts.

Does the Windows Search Indexer operate differently on SSDs?

No.

Is BitLocker’s encryption process optimized to work on SSDs?

Yes, on NTFS. When BitLocker is first configured on a partition, the entire partition is read, encrypted and written back out. As this is done, the NTFS file system will issue Trim commands to help the SSD optimize its behavior.

We do encourage users concerned about their data privacy and protection to enable BitLocker on their drives, including SSDs.

Does Media Center do anything special when configured on SSDs?

No. While SSDs do have advantages over traditional HDDs, SSDs are more costly per GB than their HDD counterparts. For most users, an HDD optimized for media recording is a better choice, as media recording and playback workloads are largely sequential in nature.

Does Write Caching make sense on SSDs and does Windows 7 do anything special if an SSD supports write caching?

Some SSD manufacturers include RAM in their devices for more than just their control logic; they are mimicking the behavior of traditional disks by caching writes, and possibly reads. For devices that do cache writes in volatile memory, Windows 7 expects flush commands and write-ordering to be preserved to at least the same degree as traditional rotating disks. Additionally, Windows 7 expects user settings that disable write caching to be honored by write-caching SSDs just as they are on traditional disks.

Do RAID configurations make sense with SSDs?

Yes. The reliability and performance benefits one can obtain via HDD RAID configurations can be had with SSD RAID configurations.

Should the pagefile be placed on SSDs?

Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.

In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that

Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1.

Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.

Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.

In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
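The distribution figures above are cumulative fractions at size cutoffs, which can be sketched as follows. The sample sizes are synthetic, shaped only to reproduce the quoted read-size percentages; they are not real trace data.

```python
# Sketch: expressing IO-size telemetry as cumulative fractions at cutoffs,
# as in the pagefile figures above. The sample is synthetic, shaped to
# reproduce the quoted read-size percentages; it is not real trace data.

def fraction_at_most(sizes, cutoff):
    """Fraction of IOs whose size is <= cutoff bytes."""
    return sum(1 for s in sizes if s <= cutoff) / len(sizes)

# Hypothetical 100 pagefile reads: 67 at 4 KB, 21 at 8 KB, 12 at 64 KB.
read_sizes = [4096] * 67 + [8192] * 21 + [65536] * 12

print(fraction_at_most(read_sizes, 4096))    # 0.67
print(fraction_at_most(read_sizes, 16384))   # 0.88
```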

Are there any concerns regarding the Hibernate file and SSDs?

No, hiberfile.sys is written to and read from sequentially and in large chunks, and thus can be placed on either HDDs or SSDs.

What Windows Experience Index changes were made to address SSD performance characteristics?

In Windows 7, there are new random read, random write and flush assessments. Better SSDs can score above 6.5 all the way to 7.9. To be included in that range, an SSD has to have outstanding random read rates and be resilient to flush and random write workloads.

In the Beta timeframe of Windows 7, there was a capping of scores at 1.9, 2.9 or the like if a disk (SSD or HDD) didn’t perform adequately when confronted with our random write and flush assessments. Feedback on this was pretty consistent, with most feeling the level of capping to be excessive. As a result, we now simply restrict SSDs with performance issues from joining the newly added 6.0+ and 7.0+ ranges. SSDs that are not solid performers across all assessments effectively get scored in a manner similar to what they would have been in Windows Vista, gaining no Win7 boost for great random read performance.

A massive and very real-world benchmark would come out of that actually.

Good job MS. Been an "early" 😉 RC adopter for two days now and the only problem I see is sometimes poor Aero performance of my AMD690G on-board graphics (Radeon x1200 series). Other than that it’s snappy, beautiful and extremely usable and polished.

Everything sounds really great here except for fragmentation. Your solution is to just allow filesystem fragmentation to run unchecked and rely on the SSD to power through it? I thought trim might help enable a better defrag approach.

I’m curious about whether your measurements of performance degradation also took into account filesystem fragmentation. As you know, it’s very common for some files to reach thousands and even tens of thousands of fragments, and free space can become extremely fragmented on highly utilized drives. I question whether SSD performance is sufficient to counter this using a brute force approach.

I wonder if you decided that it would be better to defer this task to third parties for now, and look at solving it in a future version of Windows. Or, if you genuinely believe this approach offers the right tradeoff of performance vs. wear long-term.

@tgrand: "As you know, it’s very common for some files to reach thousands and even tens of thousands of fragments, and free space can become extremely fragmented on highly utilized drives."

They don’t directly address it, other than noting that some SSD drives reserve 50% more space than they advertise to the system (most reserve only about 10%). But anyone looking at an SSD drive should never fill it up. That dilutes wear leveling and many of the performance benefits. Filled drives are the typical reason for such highly fragmented files.

That said, if the Win7 disk defragger had a SSD type mode that only defragged the rare cases (leaving the rest well enough alone), that would be a nice option.

I found that I was going to use about 45G for a system drive, figured on doubling that to get the wear leveling and opportunistic flash cell writing which had me looking for a 90G drive. The closest thing was a 120G vertex. It would have fit in a 60G, but I know how the engineering works, and would never have so little ‘free’ space. Unlike a HDD, that free space actually gets used.

If SSDs are attached to a hardware RAID controller (which typically identify their arrays using their own manufacturer’s name) will Windows 7 know that the array is composed of SSDs and use the trim function?

Will there be a way to manually tell Windows that this array (or single device) is/are SSD’s, this one is magnetic platters, and so on?

A massive and very real-world benchmark would come out of that actually.

Benchmarks based on real world traces are immensely valuable. Some of the traces referenced in the above USENIX paper, as well as some other traces from production servers referenced in this IISWC paper, have been made publicly available by Microsoft via SNIA to enable research by academia and others. Of course all personally identifiable information (PII) has been removed from these traces. Note that these are server traces and not client traces. System traces can be captured by using the built-in ETW functionality in Windows (client as well as server) and visualized/analyzed using the Windows Performance Tools Kit.

@bananaman:

If SSDs are attached to a hardware RAID controller (which typically identify their arrays using their own manufacturer’s name) will Windows 7 know that the array is composed of SSDs and use the trim function?

Will there be a way to manually tell Windows that this array (or single device) is/are SSD’s, this one is magnetic platters, and so on?

Typically the disks behind a RAID controller are managed by the RAID controller and presented to the operating system as one or more units (disks) of storage space. If the RAID controller reports the rotational speed as zero for the units of storage (disks) that it presents to Windows 7, then Windows 7 will treat that unit of storage as an SSD.

If Win7 currently has Trim (or when it is added), what 3rd party work needs to be done for the end user to actually utilize Trim? (i.e. SSD firmware flash to access the Trim command, support for Trim in motherboard chipset drivers, and so on)

Thanks for the blog entry, I’ve been looking for information regarding Win7 and Trim (just bought 3 OCZ Vertex SSDs for a couple new builds I’m working on)

Very funny thing: I had time to look into the RC version and it’s even worse than the beta. We have had issues since build 7000 and more. It shows how Microsoft treats its own customers. You can delete my post, but facts will be facts (see below).

First example:

1. set UAC level to highest one

2. go into Computer Management

3. set Startup Type for Application Information service to Disabled

4. restart system

5. many system actions (requiring UAC prompts) will simply not work, and you can’t easily fix it

I understand that it needs access to the computer. But selling a system to customers with such functionality "by design" wouldn’t be very honest or professional.

Second example: as zdnet.com reported, the system hides file extensions by default. How many non-technical users will understand that the file "document.txt" shown in Explorer is actually "document.txt.exe"?

I have the feeling that nobody (I repeat, nobody) is in control of it. There is no single clear, consistent vision of what to do with this architecture, and we have a mix of everything with everything. This is one big mess.

"Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB" – how can a pagefile read be smaller than 4 KB? That is the size of a memory page, and the memory manager operates on pages, not bytes.

And thanks a lot for this statistic, I’ve looked for it for a long time.

I’m running Windows 7 RC 64-bit right now and so far I find that disabling Superfetch, the Prefetcher and ReadyBoot improves my boot-up times. Also, once I disable the Prefetcher I don’t have that wasted 600 MB or so of memory for that stupid ReadyBoot feature. My Windows 7 now boots to the desktop in under 6 seconds using only 320 MB of RAM.

In earlier Windows versions disabling some services was blocked; in Seven, "Application Information" is not blocked. Follow the steps I described and you will see that a normal user will need to reinstall the system to get it working again. Is this really hard to understand? Authors of various malware are only waiting for such opportunities…

This blog is read by many Microsoft employees (including Steven). I reported this issue in build 7000 a long time ago. And nothing… Many people want some things brought back (the animated network activity icon is one example). And nothing… Sorry, but this system is created for users. It is not art for art’s sake.

Microsoft fans were screaming "wow" in 2007 and we know the results. Maybe it’s time to stop screaming "excellent" in 2009 and start thinking about how to better address customer needs? Please note that after the first "excellent" opinions, on more and more sites you can read the opinion that Seven is not so good (at least, it’s not a revolution and doesn’t have killer features).

Correct, if fsutil reports that "DisableDeleteNotify" is 0, then Trim is enabled. (The feature is sometimes referred to using different names: Trim == Delete Notification == Unused Clusters Hint.) The setting is written in terms of disabling something because we like to use values of 0 for defaults.

Having Trim enabled according to this setting, which you do, means that the filesystem will send Trim commands down the storage stack. The filesystem doesn’t actually know whether this command will be supported or not at a lower level. When the disk driver receives the command, it will either act on it or ignore it. If you know for sure that your storage devices don’t support Trim, you could go ahead and disable Trim (enable DisableDeleteNotify) so the filesystem won’t bother to send down these notifications. However, sending down the notifications is pretty lightweight and I haven’t seen any performance improvement from disabling them, so I don’t recommend changing this setting. If you have an SSD which does support Trim, then you definitely don’t want to disable it, because there are some performance gains to be had by leaving the setting in its default form.
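For anyone scripting this check, the relevant state is a single line of `fsutil` output. Below is a small parsing sketch; the sample string mirrors the tool's output format, so run `fsutil behavior query DisableDeleteNotify` yourself on Windows to get the live value.

```python
# Sketch: interpreting `fsutil behavior query DisableDeleteNotify`.
# A value of 0 means delete notifications (Trim) are enabled. The sample
# strings stand in for running the command on a live Windows 7 system.

def trim_enabled(fsutil_output):
    """True when DisableDeleteNotify = 0 (i.e. Trim is on)."""
    for line in fsutil_output.splitlines():
        if "DisableDeleteNotify" in line:
            return line.split("=")[-1].strip() == "0"
    raise ValueError("unexpected fsutil output")

print(trim_enabled("DisableDeleteNotify = 0"))  # True: NTFS sends Trim
print(trim_enabled("DisableDeleteNotify = 1"))  # False: notifications off
```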

I’m intending to install the RC on a RAID0 made of two SSDs. Will the installer be able to detect that the drives are SSDs automatically? Is there a way to find out if it has? If not, is it possible to force it to detect a drive as SSD?

Slightly off-topic – on systems with SSDs and a lot of RAM (12gb, in my case) is there any point in having a page file above 4gb (say) if I’m not bothered about full memory dumps? SSDs generally being smaller than normal drives, I’d rather not give up a full 12gb to it!

Yes, fsutil shows the same value in the Beta and the RC, because Trim is supported and enabled by default in both the Beta and the RC. 😉

I should add that very few (if any) SSD drives in the marketplace today actually support Trim. Most of the ones that do are next-generation prototypes. But when they do become available, Windows 7 will take advantage.

Having Trim enabled according to this setting, which you do, means that the filesystem will send Trim commands down the storage stack. The filesystem doesn’t actually know whether this command will be supported or not at a lower level. When the disk driver receives the command, it will either act on it or ignore it.


So do the Microsoft IDE Controller disk drivers support it? If you have a 3rd party disk driver provider (i.e. non-Microsoft), then I guess you rely on them implementing the Trim() functionality?

Thanks for the clarification. Is this the final form of Trim to be implemented in Win7 (barring any future RTM update/SP)? I am using Vertex drives which have, at this point, a somewhat proprietary, functioning Trim firmware, though we are expecting an MS-compatible version in about a week’s time. Can I assume you are "good to go" re Win7 Trim, and that drive controller manufacturers will have what they need?

I’m using the Windows 7 RC and I love it. I have only a suggestion for you: to unpin an icon from the taskbar, support something like drag-and-drop. To pin, drag the icon to the taskbar (as in the Win7 RC), and to unpin, drag the icon to the desktop, where it disappears.

<<"I should add that very few (if any) SSD drives in the marketplace today actually support Trim. Most of the ones that do are next-generation prototypes. But when they do become available, Windows 7 will take advantage.

– Craig (NTFS team)">>

Thanks for your input so far Craig.. much appreciated.

While the bigger folk sort out SSD ‘logo’ reqs etc., are there any ‘issues’ with a little fella like me trying to disable Win 7 (RC) native TRIM and using a brute-force proprietary trimming ‘tool’, which works really well for me on a customised ‘schedule’ basis? I intend to use SATA native IDE mode and default Microsoft ATA/ATAPI device drivers with a combined MLC/SLC implementation.

By ‘issues’ I refer to legal/proprietary as well as technical concerns. My assumption is that Procmon can keep me reasonably well informed of what’s going on ‘technically’ between the OS and my SSDs.

Just installed the RC and there’s one thing strange to me: sharing is enabled by default for the USERS folder (not the public folder) and from my other computer I can access everything in this folder and perform operations (read/copy/delete). It was a clean Win7 install and of course I haven’t changed the sharing policy for myself.

At what level does Win 7 disable defragmentation, Superfetch, ReadyBoost etc for SSDs ? I have an SSD as the system drive in my Win 7 RC machine, and the defrag GUI shows scheduled defrag as turned on, and in the schedule, select disks includes the SSD. Is defrag for SSDs disabled at some lower level that the GUI does not show ?

I manually turned off scheduled defrag, suspecting that my SSD was not being handled correctly by the operating system.

When I run fsutil behavior query DisableDeleteNotify, I get a 0. Does this mean that my SSD is being correctly recognised as an SSD ?

I do notice that on the ReadyBoost tab for an SD card I put into my machine, it says "This device cannot be used for ReadyBoost. ReadyBoost is not enabled because the system disk is fast enough…."

Finally, is there a way for me to determine what rotational speed my SSD is reporting to the Operating System ? fsutil ? wmi call ?

It is important to note that, for IOPS-intensive enterprise storage, there are alternatives to a "NAND-only SSD". One alternative, the DDRdrive X1, elegantly avoids all of the above-mentioned limitations of flash by using DRAM for IO reads/writes and flash solely for backup/restore.

1) No LBA remapping, thus no wear leveling overhead.

2) Deterministic performance, no pauses or stuttering when dealing with writes – ever.

I’m not an expert on our storage drivers (I deal mainly at the file system level), but it appears that our ATA port driver (ataport) does implement trim support. This means that SSD drives which present themselves as ATA drives (which I think most if not all do), can support trim provided the drive itself also supports trim. Non-ATA devices — including USB drives and SCSI drives — don’t yet have the ability to support trim, since our other port drivers don’t implement trim. This may change as the market evolves. I don’t know if any 3rd-party storage drivers implement trim as of yet, but yes, they would have to implement it for it to work.

@m.oreilly:

As far as Win7 RTM goes, trim is in its final form. Of course it could evolve in service packs, etc., as the market demands. It’s a pretty new market. Drive manufacturers know what they need to implement on their end. Some have provided prototypes to us for testing.

@me&er:

I doubt there are any legal issues with you sending down trim commands yourself, but it sounds like an awful lot of work to me. Firstly, I’m skeptical that you can even get the proper information you need. I don’t think you can infer from ProcMon output alone what clusters are in use and what aren’t. The file system knows this; what makes you think you can do better? If you get this wrong, you can end up corrupting the volume. Secondly, I’m not sure what the benefit of your approach would be even if you could get it right. By having the file system send down trim commands when appropriate, you enable the drive to immediately benefit from this information. There’s very little overhead to these commands. Contrast this with defragging, where if you were constantly defragging everything the cost would outweigh the benefits. I strongly suggest you don’t try to implement this yourself.

@lukechip:

That fsutil query just tells you whether the file system is sending down trim commands or not. The file system doesn’t know (or much care) what kind of storage lies at the very bottom; it might even be multiple types (think volumes that span multiple disks, RAID arrays, etc.). If trim is enabled, NTFS sends down trim commands on all volumes and lets the underlying layers sort it out. I’m not sure how you can get the physical characteristics you want about your SSD drive. As a start, poke around at its properties in Device Manager (devmgmt.msc).
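To make that answer concrete, here is a rough sketch of interpreting the query's result: a DisableDeleteNotify value of 0 means the file system sends trim commands, 1 means it does not. The exact output text assumed below is illustrative only and may vary across Windows versions.

```python
def trim_enabled(fsutil_output: str) -> bool:
    """Interpret the output of 'fsutil behavior query DisableDeleteNotify'.

    DisableDeleteNotify = 0 -> delete notifications (trim) are sent.
    DisableDeleteNotify = 1 -> trim commands are NOT sent.
    """
    for line in fsutil_output.splitlines():
        if "DisableDeleteNotify" in line:
            # Expect a line shaped like "DisableDeleteNotify = 0" (assumed format)
            return line.split("=")[-1].strip() == "0"
    raise ValueError("DisableDeleteNotify not found in output")

# Hypothetical outputs:
print(trim_enabled("DisableDeleteNotify = 0"))  # True: trim is enabled
print(trim_enabled("DisableDeleteNotify = 1"))  # False: trim is disabled
```

Keep in mind that, as the answer above says, this only tells you whether the file system is willing to send trim commands; it says nothing about whether the drive underneath actually supports or honors them.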

Tuesday, May 12, 2009 3:37 PM by craigbarkhouse

Many thanks Craig. Appreciate your responses.

Just to put the record straight, I’m not looking at programming an alternative to TRIM and have no wish to compromise

I'm using a proprietary TRIM tool, currently in beta from OCZ/Indilinx, for their Vertex series with the Barefoot controller. If you guys haven't got it yet, I would give it a go; I'm not marketing it at all. All I am interested in is confirming that it actually consolidates the free space effectively, without excessive overhead or NAND-longevity issues. My limited testing confirms it initiates fine in cmd/conhost.exe and works along the file-system stack with the native Microsoft storage drivers, writing to the MFT and paging to file effectively with no file corruption identified. The end result is a noticeable increase in random read/write speed at little or no cost to sequential read/write. Calculating the longevity of a specific MLC drive is somewhat complex, but the issue is quite acute on the 30 GB models, so I will need to compare usage initiated by this proprietary tool against the file system's own trim.

TRIM, trimming, or defragmentation/consolidation of free space is much the same for my needs. What is important is how this affects the smaller-capacity MLC SSDs, as part of a write-optimisation strategy.

This strategy involves looking at controller/firmware wear leveling and how it interacts with Windows 7 on both SLC and MLC SSDs, which is why any form of TRIM and device/OS write caching is right at the top of my analysis. That neatly brings me to this question:

If..

"...for devices that do cache writes in volatile memory, Windows 7 expects flush commands and write-ordering to be preserved to at least the same degree as traditional rotating disks..."
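The MLC longevity calculation mentioned above can be approximated with simple arithmetic: the total rated NAND writes (capacity times rated P/E cycles) divided by the physical write rate. Every figure in this sketch is hypothetical; real lifetime depends heavily on the controller's wear leveling, over-provisioning, and actual write amplification.

```python
def endurance_years(capacity_gb: float, pe_cycles: int,
                    host_gb_per_day: float, write_amplification: float) -> float:
    """Rough endurance estimate for a NAND SSD.

    Total writes the cells can absorb = capacity * rated P/E cycles.
    Physical writes per day = host writes * write amplification.
    """
    total_write_budget_gb = capacity_gb * pe_cycles
    physical_gb_per_day = host_gb_per_day * write_amplification
    return total_write_budget_gb / physical_gb_per_day / 365

# Hypothetical 30 GB MLC drive: 3,000 rated P/E cycles, 10 GB/day of
# host writes, and a write amplification factor of 2.
print(round(endurance_years(30, 3000, 10, 2), 1))  # roughly 12.3 years
```

The same arithmetic shows why trim matters for small MLC drives: anything that lowers write amplification directly stretches the drive's lifetime.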

My OCZ Core V2 SSD isn't detected as an SSD: scheduled defragmentation is enabled for that drive, and SuperFetch is enabled too, so I presume Windows hasn't detected the drive correctly. The drive is attached to a JMicron JMB363 controller and is configured as IDE, because the OCZ page said the drive should not be configured in AHCI mode, and the only settings the JMicron controller offers are RAID, IDE, and AHCI; there is no SATA mode without AHCI enabled.

EWF (from XP Embedded) and MS SteadyState seem to sequentialize random writes. I read somewhere that such a feature is in Windows 7 already but is not yet functional. Is this true? What is it, what's its name, and can it be activated?

Typically the disks behind a RAID controller are managed by the RAID controller and presented to the operating system as one or more units (disks) of storage. If the RAID controller reports the rotational speed as zero for a unit of storage it presents to Windows 7, then Windows 7 will treat that unit of storage as an SSD.
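The detection rule described above reduces to a one-line predicate. This is only a sketch of the stated heuristic; the parameter name and the fallback for devices that don't report a rotation rate are assumptions for illustration, not Windows 7's actual implementation.

```python
def treat_as_ssd(reported_rpm):
    """Sketch of the stated Windows 7 heuristic: a unit of storage whose
    device (or RAID controller) reports a nominal rotation rate of zero
    is treated as an SSD. None models a device that does not report the
    field; the conservative HDD fallback here is an assumption."""
    if reported_rpm is None:
        return False
    return reported_rpm == 0

print(treat_as_ssd(0))     # True: non-rotating, so SSD behavior applies
print(treat_as_ssd(7200))  # False: rotating disk
```

This is also why a RAID volume can go either way: the answer depends entirely on what the RAID controller chooses to report for the logical unit it presents, not on the physical disks behind it.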

This can get a bit mumbo-jumbo for me, but are you saying that if the RAID controller reports the rotational speed as zero for the units of storage (disks) it presents, then Windows will see the RAID 1 array as one disk, and the trim support in Windows 7 will work on a RAID array?

Maybe I should give Windows 7 the JMicron drivers during installation rather than afterwards. Microsoft should be more informative in these posts, for example with information about HDD detection, and about how reads and writes change at the low level. I would find that very interesting, and I guess so would many people who read this blog.

Do they have to be the manufacturer's drivers, or manufacturer drivers supplied to Microsoft, or is it OK to install with the standard drivers and expect Windows to detect the drive as an SSD? I usually add the manufacturer's drivers afterwards, if Windows detects the drive well enough that I can install to it.

Hi Steven, I was wondering what compression codec NTFS uses. For example, I have heard of people on the internet saying the overall transfer rate decreased while using compressed files. I am sure you would have heard of the LZO algorithm: http://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer

I've started to test my solid-state drive on Windows 7 64-bit build 7201. With Windows XP and Windows Vista I had a very bad experience: after a couple of weeks, the solid-state drive lost a lot of speed and simple internet browsing was terrible. After installation on Windows 7, I checked that Disk Defragmenter is off and that SuperFetch is disabled. Disk Defragmenter was set to Manual after installation; I have disabled it now. SuperFetch was active and started at boot; I have disabled it now. I use a 32 GB OCZ solid-state drive. Is there already a tool available to check which SSD drives are certified?

Thanks for the article. Even though I'm not the greatest fan of Microsoft products, I'm pleased to know that Windows 7 supports SSDs. As far as I know, the technology is quite new and quite expensive, and I don't know anyone who is actually using an SSD. However, I think it is really promising, and I'm waiting for the moment when I can try it myself.

So I've read this entire thread, but I still don't understand: if I am building an entirely new system with an SSD for my boot drive and a traditional hard drive for the rest of the system, what needs to be done at Windows 7 installation time?

Can I just install Windows 7 on the SSD direct from the Windows install CD?

I have 2 X25-M SSDs in RAID 0 on a new install of Windows 7 Home Premium. I find SuperFetch is running, so I tried "fsutil behavior query DisableDeleteNotify" in the Run box, but the DOS window flashes and disappears so quickly I can't see the result. How can I see whether the result is 1 or 0? Also, I accidentally pasted "fsutil behavior query|set DisableDeleteNotify" into the Run command. Have I now turned off the trim command?

@the X25-M RAID 0 / fsutil question above:

Instead of using Start->Run and entering the text directly, use Start->Run and type cmd.

This will put you in a DOS shell, from which you can run the desired command and see the results directly.

Thanks for the help with getting into the DOS shell. Now when I try to run fsutil, it says I must have administrative privileges to run this program. But I have administrative privileges in Windows 7; I double-checked. Do I need to somehow set them in the DOS shell?

By the way, I found the registry key and checked it in regedit, and it is zero, so I assume trim is not turned off, but SuperFetch is still on? I turned off defrag by disabling the service, as it had not been turned off automatically.

Isn't there a better way of telling whether Windows 7 knows I have SSDs? Possibly it can't detect them since they are in RAID 0.

There are two classes of users. First, performance junkies use SSDs as a huge RAM drive; SSDs have much higher speed than traditional hard drives. Second is portable usage: SSDs use far less power, are lighter, and are vibration- and impact-resistant. They last much longer in mobile laptops than traditional hard drives do.

One question: how much of this information is also applicable to XP? I've seen several sites with suggested ways of tuning XP for use with an SSD (in particular the article on the OCZ forums). How much of this is good advice (turning off the page file, for example), and is it the same for XP as it is for Windows 7?

I have never had any problems running any systems without a paging file.

(I have more RAM now (4 gig) than the capacity of some of my old hard drives!)

Paging used to slow down the response of my old systems, so I got more RAM than I needed, and selected:

"No paging file" under the advanced system parameters.

(I would only need 2 gig now to use Windows 7, but I hope to do some gaming soon, so I got 4, and 2 OCZ VERTEX-LE 100GB SSD’s.)

As long as you have enough RAM for YOUR needs, you shouldn’t have any problems. You can check your system monitor for your RAM usage. If things get tight, or you get an "Out of memory" error message, temporarily close unneeded applications, and/or re-enable a paging file as necessary (requires a system restart) until you install more RAM, if you so choose.

There is a link somewhere in the OCZ forums to download a free utility from a very kind programmer to assist in suggesting/making the recommended adjustments to XP and newer versions of Windows automatically. It includes logic to allow undoing any changes. I tried it, and it seems to work OK.

I am concerned that TRIM may not work in a RAID-0 array. (yet?)

Therefore, I am wondering if long term RAID-0 performance will suffer to the point where it would be nearly the same as a single SSD (vs. a pair) where TRIM is working effectively. If so, there is little point in the expense of getting a second SSD for RAID-0, unless one intends to use that method to double capacity.

(If I were "mirroring" SSD’s, I would also be concerned.)

Microsoft may not be able to help, if in fact new RAID controllers are required, unless they decide to make some that plug into a pci-express slot… hint, hint.

When does Windows 7 disable defragmentation for SSDs? I have an SSD as the system drive in my Windows 7 Acer 8930 laptop, and the defrag GUI shows scheduled defrag as turned on; in the schedule's disk selection, the SSD (named KINGSTON SNVP325-S2) is included. Is defrag for SSDs disabled at some lower level that the GUI does not show?

I manually turned off scheduled defrag, suspecting that my SSD was not being handled correctly by the operating system.

I guess I'm still confused about the level of Windows 7 TRIM support. Is it true that if you are not running your BIOS in AHCI mode (i.e. in IDE or "Compatibility" mode), the TRIM command is not passed to the SSD by the storage driver?

From what I have read, only the msahci driver passes the TRIM command, and this driver is only loaded when your SATA controller is running in AHCI mode.

The reason I ask is that I have a netbook that doesn't offer the option to run in AHCI mode, but I would still love to use an SSD with automatic TRIM.

Drive in, Win7 installed, operating perfectly. It's just that defrag was still turned on (I've since turned it off manually). I thought Windows 7 should have detected it as an SSD and turned off defrag itself. Does this mean that Windows 7 doesn't 'know' that it's an SSD? (Device Manager correctly reports the type of the drive as 'INTEL SSD… ATA Device'.)

You're talking about Windows 7, SSD disks and such, but can you imagine that today, while I was selling hardware in a store, a customer came in and, after a brief talk about hardware, asked if she could run Windows 98 on the machine?

I also have Windows 7 Professional installed from scratch on an 80 GB Intel X25-M G2 SSD. I found that defrag was turned on after installation, but "fsutil behavior query DisableDeleteNotify" gave the result '0', indicating that TRIM is enabled. However, it shows the same result, '0', on a regular HDD.

Since then I have installed the latest Intel firmware (02HD) and run the Intel Toolbox utility, but I still can't tell whether Windows knows this is an SSD.

How can I tell for sure that Windows is recognizing the SSD correctly?

I'm confused. How exactly can you say that RAID is supported? From what I can tell, no controllers support TRIM in a RAID configuration. You get the speed of a RAID 0 configuration, but clearly you are bargaining with the devil, as there is no mechanism for trimming the drives the way you can a single drive (Intel SSD Toolbox). Will MS provide a way to do this that I am unaware of?

How do I find out whether a disk drive is an SSD using WMI? I checked MediaType, but it reports fixed media for both SSD and SCSI drives. I also executed the Win32_Volume Defrag method, and it returned SUCCESS for the SSD. I am using Windows 2008 R2.

Thanks for the info. Do you have any advice about configuration of Environment Variables for optimizing SSD usage, or having multiple partitions on SSDs?

FYI, on XP and Server 2008 I split the OS from applications (installing all apps to an E: drive), give the paging file its own partition (no fragmentation), and then store data (downloads, docs, databases, and Documents and Settings) on a fourth partition. So what would you suggest for a mixed SSD/HDD environment, with a sizeable (256 GB+) SSD and one or more relatively fast (7200 or 10K RPM) HDDs? From the above, I assume that the paging file should go on the same partition as the OS %SystemRoot% on the SSD, and that things like docs and .pst files should go on the HDD. But what environment variable settings do you recommend for a new Windows 7 install, to minimize wear and improve performance?

I would be very careful about your statement that RAIDing SSDs is a good idea. There is currently no RAID controller out there that still passes TRIM requests through to the drives properly, so the benefits of TRIM are lost when operating a RAID array.

Please provide better support for SSDs in coordination with Intel's Rapid Storage Technology. I am using an SSD as a system disk and 2 HDDs in RAID 1 for data. All three drives have to be controlled by Intel's RST which recognizes the SSD as such but Windows 7 sees it as a "standard disk drive". Consequently Windows 7 fails to provide SSD optimization automatically.

Defrag (dfrgui.exe) won't even run on my system after replacing my boot disk with a SSD drive. It closes itself immediately after loading.

I wish Microsoft had the HDD tools to better manage this migration. Countless hours were lost trying to do this the free way. After a number of failed robocopy copies with all the relevant switches along with diskpart, bootrec, and bootsec commands, I finally got success with AOMEI Partition Assistant.

A common scenario in my job (I own a music recording studio) is dealing with portable USB 3 and external SATA disks, used as devices to record and play back multiple simultaneous streams of audio and video, not necessarily in a sequential writing process.

This usage creates severe fragmentation on the disks, which requires daily defragmentation, and that of course takes a lot of time.

I found that SSDs perform considerably better than rotating HDDs when they are new, but after several days of intensive use the write performance suffers consistent degradation, giving worse performance than standard HDDs.

Initially the exFAT file system works better and faster than NTFS, but it has the problem that, once the directories are written fragmented, they can't be defragmented, and the specific defragmentation tools (like Raxco PerfectDisk, Defraggler, or UltraDefrag) may or may not be able to access the disks for defragmenting, depending on erratic circumstances.

Why is there no tool to defragment directories on exFAT file-system disks, and why does applying defragmentation to them result in even worse performance?

This was nice to read, but I cannot get my new Samsung SSD 840 PRO added to the Windows 7 system. It does not come up as a new disk in the disk menu. I am running Windows 7 64-bit Home Premium. The Samsung Magician software can run performance tests on the HDDs, but it describes the Samsung SSD as "Unallocated". The Samsung Data Migration program tries to migrate data from the hard disk to the SSD, but it reports that the Samsung SSD is missing.

In addition, I reinstalled Windows 7 after physically mounting the SSD in the system, in the hope that Windows 7 would include the SSD in the menu. But no, it did not.

This article is like the help files: I am more frustrated than before, because it doesn't tell me how to check whether any of this stuff is working. How about some of those little blue words that take me to a related place that tells me how to do this stuff?

This article is well explained and useful, but a question has arisen for me.

In an old article from Microsoft, their engineers suggested that the page file is best placed on any other HDD, or multiple ones, avoiding the system drive. After reading this today, I suppose that article was written exclusively for HDDs, so I ask now: would it be better to remove the page files from all HDDs and use only the one on my SSD?