Wednesday, 29 December 2010

When a traditional hard disk with spinning platters and moving heads needs to read a very large file, it can do so much faster if the file is stored in an unbroken sequence of blocks on the drive. If the pieces of the file are spread around the drive, the hard disk’s heads need to physically move between them. The time those moves take shows up in drive specifications as the seek time.
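A quick back-of-envelope calculation shows why this hurts so much. The numbers below are purely illustrative assumptions (a 9 ms average seek and a 100 MB/s sequential transfer rate, roughly the right ballpark for drives of this era):

```python
# Time to read a 100 MB file from a mechanical disk, as a function of
# how many fragments the file is split into. Illustrative numbers only.
SEEK_MS = 9.0               # assumed average seek time
TRANSFER_MB_PER_S = 100.0   # assumed sequential transfer rate
FILE_MB = 100.0

def read_time_ms(fragments):
    """One seek per fragment, plus the (constant) transfer time."""
    return fragments * SEEK_MS + FILE_MB / TRANSFER_MB_PER_S * 1000.0

print(read_time_ms(1))    # contiguous file: 1009.0 ms
print(read_time_ms(500))  # 500 fragments:  5500.0 ms
```

With 500 fragments the seeks dominate: the same file takes more than five times as long to read.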

Defragmentation software rearranges all the files on the drive so they are all stored in unbroken sequences. In Windows you can find it under Start, All Programs, Accessories, System Tools, Disk Defragmenter. Defragmenting mechanical hard disks is a good idea. It’ll noticeably speed up your PC if your system drive is a heavily fragmented hard disk. Hard disks are the slowest component in modern PCs. They can use all the help they can get. (That’s also the reason why it makes a lot of sense to replace your system drive with an SSD.) Because defragmentation is so helpful, Windows actually does it automatically when your PC is idle. You may have noticed the hard disk activity light on your PC flickering furiously when coming back from lunch, only to stop instantly as you touch the mouse.

Defragmenting an SSD is a terrible idea, for several reasons:

The key benefit to SSDs is that they have virtually no seek time. Reading adjacent blocks of data is no faster than reading blocks that are spread out over the drive. Fragmentation does not affect SSD drive speed.

As I discussed in my SSD Remaining Drive Life article, SSD drives physically wear out as you write to them. Defragmentation software moves around all the files on your drive. Thus, defragmenting an SSD reduces its life span without giving you any benefits.

SSD drives deal with the limited lifespan of their memory cells by using wear-leveling algorithms. These algorithms take advantage of the fact that fragmentation does not affect the drive’s speed. They purposely fragment the drive so that its cells wear out evenly, even if you’re constantly overwriting a small set of files (e.g. database files) and never overwriting other files (e.g. operating system files).

Modern SSDs even lie to the operating system. If the operating system tells the drive to save a file in blocks 728, 729, and 730, the drive may decide to write it to blocks 17, 7829, and 78918 instead, if it determines that those blocks haven’t been worn out as much yet. The drive keeps a lookup table of all its blocks, so that when the OS wants to read blocks 728 through 730, the drive reads blocks 17, 7829, and 78918. With such drives, defragmentation software can’t possibly work. The software will report to the user that file X was nicely defragmented and stored in blocks 728, 729, and 730, while it actually has no idea where the data is physically stored on the drive.
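A toy model makes the remapping easy to picture. This is not any real drive’s firmware, just a sketch of the lookup-table idea: logical block numbers from the OS are translated to whichever physical blocks have seen the fewest erases (the erase counts below are made up):

```python
# Toy flash translation layer (FTL): the OS addresses logical blocks,
# the drive silently remaps them onto the least-worn physical blocks.
# Not real firmware -- just an illustration of the lookup table idea.
class ToyFTL:
    def __init__(self, erase_counts):
        self.erase_count = dict(erase_counts)  # physical block -> erases
        self.mapping = {}                      # logical -> physical

    def write(self, logical_block):
        # Pick the free physical block with the fewest erases so far.
        used = set(self.mapping.values())
        free = [b for b in self.erase_count if b not in used]
        target = min(free, key=lambda b: self.erase_count[b])
        self.erase_count[target] += 1
        self.mapping[logical_block] = target
        return target

    def read(self, logical_block):
        return self.mapping[logical_block]

# Made-up wear figures: 728-730 are heavily worn, the others barely used.
ftl = ToyFTL({728: 5000, 729: 5000, 730: 5000, 17: 10, 7829: 12, 78918: 3})
for lb in (728, 729, 730):
    ftl.write(lb)
print(ftl.read(728))  # a lightly worn physical block, not block 728
```

Defragmentation software only sees the logical side of the table, so “moving” a file does nothing predictable to its physical layout.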

Conclusion: don’t waste your time and your SSD’s life expectancy by defragmenting it. The automatic defragmentation in Windows 7 skips SSDs automatically. In Vista, you can disable it via the Performance Information and Tools item in the Control Panel. I do strongly recommend you upgrade to Windows 7 if you have an SSD, so you get TRIM support.

Tuesday, 21 December 2010

One (potential) disadvantage of solid state drives versus traditional hard disk drives is that the memory cells in SSDs are subject to physical wear. Before a cell can be written to, it needs to be erased. That can be done only so many times before the cell stops accepting a new charge. Affordable SSDs use multi-level cells (MLC) that are typically specified with a maximum P/E (program/erase) count of 10,000. That means each cell can (in theory) be overwritten 10,000 times before it fails. Wear-leveling algorithms in the drive’s controller try to make sure that the cells are worn out evenly. If you overwrite the same file over and over again, it’ll be moved around the drive so it’s not always the same cells being erased and rewritten.

SSD drives keep track of how many times each block of cells has been erased (overwritten). They report basic statistics on this via the S.M.A.R.T. parameters. You can read them with a tool such as CrystalDiskInfo. This free software is a must-have for all SSD owners.

I’ve been checking these numbers a couple of times per month over the past year. Zero failures in programming (writing), erasing, and reading means the drive is in perfect condition. The minimum and maximum erase counts are probably quite meaningless, because they may refer to just one block. At least one block saw ten times the action that it was rated for. When the drive was new, the maximum erase count shot up from almost nothing to about 20,000 in the first month, and then shot up again to over 100,000 a month or two after that. Since then the maximum count has remained unchanged.

The remaining drive life seems to be calculated directly from the average number of times each block was erased: 100% – (1,315 / 10,000) ≈ 87%. This number has been steadily dropping by 1% each month. So my drive is going to last for a total of about 8 years.
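The same arithmetic in a few lines, using the figures quoted above (10,000 rated P/E cycles, an average erase count of 1,315, and remaining life dropping about one percentage point per month):

```python
# Remaining-life estimate from the S.M.A.R.T. figures in the text.
RATED_PE_CYCLES = 10_000   # rated erase cycles per MLC cell
avg_erase_count = 1_315    # average erases per block so far
pct_per_month = 1.0        # observed drop in remaining life per month

remaining_pct = (1 - avg_erase_count / RATED_PE_CYCLES) * 100
years_left = remaining_pct / pct_per_month / 12
print(round(remaining_pct), round(years_left, 1))  # prints: 87 7.2
```

About seven more years on top of the year the drive has already served, or roughly eight years in total.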

I certainly haven’t tried to minimize the amount of data being written to the drive. I bought it for its speed and that’s what I’m using it for! At one point the drive was completely full, but now I have about 60 GB out of 256 GB free. I’ve done several complete OS restores the past year and I work quite a bit with virtual machine snapshots. I’m sure I’ve written (and overwritten) several terabytes of data to the drive already.

For a desktop drive in a developer’s workstation, SSD drive life is largely irrelevant. The drive will be obsolete before it wears out. It’s already obsolete. The link to my M255 drive on Crucial’s website results in an error page. Crucial now only sells the RealSSD C300. This drive did not exist when I bought mine. The available capacities are the same: 64, 128, and 256 GB. The prices are a bit lower: $599 for the 256 GB instead of $699. The speed, according to independent tests, is significantly higher.

Eventually I’ll replace the drive because I want a bigger and faster drive, not because it wore out. I’m not using any 8-year-old hard disk drives for the same reason.

Monday, 1 November 2010

Sometimes it’s the little features that make a software product much more usable. For as long as I can remember, closing Delphi, recompiling the application, or opening another project while your app was still running has always made Delphi ask: “Debug session in progress. Terminate?” You could click OK to terminate your app, or click Cancel to stay where you were.

Delphi XE has a new prompt with over 5 times as many words: “This current debug session must end before the requested operation can complete. Please indicate how you would like to end the debug session.” Though the long statement is not much friendlier than the old short question, the buttons are. The button that terminates your app is now labeled Terminate instead of OK. But the highlight is the brand new Detach button. If you click it, Delphi stops debugging your app, while your app continues to run. Delphi then does whatever you asked it to do that caused the prompt to appear. It’s a very handy option. I’ve used it many times already, even though I’ve been using Delphi XE for only a few weeks. After I came to appreciate the Detach button, I found the Detach from Program item in the Run menu. It stops debugging the app while letting it run, without doing anything else.

But this post isn’t about detaching your debugger from your app. Today’s lesson is that for an app to be user-friendly, features must be presented in the context where they are useful. You see, Delphi 2010 also has the Detach from Program item in the Project menu. I never used it, because I never noticed it, because I never looked for it, because Delphi 2010 continued to use the same old “Terminate?” question.

Don’t use the standard OK/Cancel or Yes/No/Cancel message boxes. Present a message that clearly explains what is going on. Use custom buttons that clearly represent their consequences. Give me “Save Changes” and “Lose Changes” buttons, not “Yes” and “No” buttons that may make me click “No! Don’t do that!”, which your app then interprets as “No, I don’t want to keep all my hard work.” Once you’re free of the canned Yes/No/Cancel buttons, you can also offer more than three choices when your application can do more than two things besides nothing, so people will actually use the options you worked so hard on.

Friday, 17 September 2010

Delphi 2009 was the first Delphi version to produce Unicode applications. When it was released I ranted about the needless string checks that it did. Those were a bit of a hack to allow C++Builder developers to port their applications to Unicode more easily by fixing up at runtime what the developer failed to fix at coding time. The checks provided no benefits to Delphi developers. This did not change in Delphi 2010.

Delphi XE no longer does these string checks. The functions that performed them, such as InternalUStrFromLStr, have been removed from System.pas. The Project Options screen no longer offers the “string format checking” compiler option. The compiler still accepts the $STRINGCHECKS directive as valid syntax, but it no longer has any effect. Even if you put {$STRINGCHECKS ON} in your units, the compiler generates the “quick and efficient” code I showed in my Delphi 2009 article.

The main upshot is that the RTL and VCL in Delphi XE no longer do those needless string checks. Though you can and should use {$STRINGCHECKS OFF} to compile your own units in Delphi 2009 and 2010 without the string checks, the RTL and VCL units in those versions always do the string checks, because Embarcadero compiled them with {$STRINGCHECKS ON}.
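For reference, the directive goes near the top of a unit. The directive name is real; the unit itself is a hypothetical sketch:

```pascal
unit MyUnit; // hypothetical unit name

// Delphi 2009/2010: compiles this unit without the runtime string checks.
// Delphi XE: still accepted as valid syntax, but has no effect.
{$STRINGCHECKS OFF}

interface

implementation

end.
```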