SSDs (solid state drives) are great. They’re shock resistant, consume less power, produce less heat, and have very fast seek times. If you have a computer with an SSD, such as an Eee PC, there are some tweaks you can make to increase performance and extend the life of the disk.

The simplest tweak is to mount volumes using the noatime option. By default, Linux writes the last-accessed time attribute to a file every time it is read. This can shorten the life of your SSD by causing a lot of extra writes. The noatime mount option turns this off.

Open your fstab file:
sudo gedit /etc/fstab

Ubuntu uses the relatime option by default. For your SSD partitions (formatted as ext3), replace relatime with noatime in fstab. Reboot for the changes to take effect.
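
For reference, after the change an fstab entry for the root partition might look something like this (the UUID here is only a placeholder for your own partition's):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext3 noatime,errors=remount-ro 0 1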

Using a ramdisk instead of the SSD to store temporary files will speed things up, but will cost you a few megabytes of RAM.
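
To do this, add a line like the following to fstab (a minimal sketch; mode=1777 keeps /tmp world-writable with the sticky bit set, as it normally is):
tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0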

Reboot for the changes to take effect. Running df, you should see a new line with /tmp mounted on tmpfs:
tmpfs 513472 30320 483152 6% /tmp

Firefox puts its cache in your home partition. By moving this cache in RAM you can speed up Firefox and reduce disk writes. Complete the previous tweak to mount /tmp in RAM, and you can put the cache there as well.

Open about:config in Firefox. Right click in an open area and create a new string value called browser.cache.disk.parent_directory. Set the value to /tmp.

An I/O scheduler decides which applications get to write to the disk, and when. Because SSDs are so different from spinning hard drives, not all I/O schedulers work well with them.

The default I/O scheduler in Linux is cfq (completely fair queuing). cfq works well on hard disks, but I’ve found it to cause problems on my Eee PC’s SSD. While a large file is being written to disk, any other application that tries to write hangs until the first write finishes.

The I/O scheduler can be changed on a per-drive basis without rebooting. Run this command to get the current scheduler for a disk and the alternative options:
cat /sys/block/sda/queue/scheduler

You’ll probably have four options; the one in brackets is the scheduler currently being used by the disk specified in the previous command:
noop anticipatory deadline [cfq]

Two of these are better suited to SSDs: noop and deadline. With either of them, in the same situation, an application will still hang, but only for a few seconds instead of until the disk is free again. Not great, but much better than cfq.

Here’s how to change the I/O scheduler of a disk to deadline:
echo deadline > /sys/block/sda/queue/scheduler

(Note: the above command needs to be run as root, but sudo does not work with it on my system. If you have a problem, run sudo -i to get a root prompt.)

You can replace sda with the disk you want to change, and deadline with any of the available schedulers. This change is temporary and will be reset when you reboot.

If you’re using the deadline scheduler, there’s another option you can change for the SSD. This setting is also temporary and also per-disk:
echo 1 > /sys/block/sda/queue/iosched/fifo_batch

You could apply the scheduler you want to all your drives by adding a boot parameter in GRUB, but the menu.lst file is regenerated whenever the kernel is updated, which would wipe out your change. Instead, I added commands to rc.local to do the same thing.
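
A minimal sketch of what I mean, assuming your SSD is sda and that you also want the fifo_batch setting described above; these lines go in /etc/rc.local before the final exit 0, and rc.local already runs as root so no sudo is needed:
echo deadline > /sys/block/sda/queue/scheduler
echo 1 > /sys/block/sda/queue/iosched/fifo_batch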

[update] Commenter dondad has pointed out that it’s possible to add boot parameters to menu.lst that won’t be wiped out by an upgrade. Open menu.lst (remember to make a backup of this file before you edit it):
sudo gedit /boot/grub/menu.lst

There is a section in menu.lst for default boot parameters; options you put there are carried over when the menu is regenerated after updates. I do this on my Ubuntu installation because I have to have reboot=b as an option or my box won’t reboot properly.
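
If I remember correctly, on Ubuntu’s GRUB-legacy menu.lst this is the commented defoptions line, which update-grub reads even though it looks like a comment. For the scheduler tweak it would look something like this (elevator=deadline sets the default I/O scheduler for all disks at boot):
## additional options to use with the default boot option
# defoptions=quiet splash elevator=deadline
Run sudo update-grub afterwards so the options are copied into the kernel entries.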

There are a few things you could improve. First, the sysfs options can also be set at boot time in a comfortable way using the package “sysfsutils” from the universe repository. You can configure your values to be set at startup in /etc/sysfs.conf. Just add lines like:
block/sda/queue/scheduler = deadline
block/sda/queue/iosched/fifo_batch = 1
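
(Assuming the universe repository is enabled, the package can be installed with: sudo apt-get install sysfsutils. Its init script then applies whatever is in /etc/sysfs.conf at each boot.)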

You can work around your sudo problem elegantly by using the “tee” command:
echo deadline | sudo tee /sys/block/sda/queue/scheduler

(tee writes its standard input to both standard output and the file provided as a parameter.)

That’s pretty sweet, I never thought of using tee like that. Of course the reason you can’t use sudo while redirecting stdout to a file (>) is that the file for the redirect is opened by the current shell, while sudo only runs the child process (echo) as the super user.

I would recommend mounting /tmp and /var/tmp as tmpfs on any modern system. Speed benefits aside, sloppy programs often leave files behind in temp directories, and these will fill up your drive at best and compromise your security at worst. One example: when you use “Open With” in Firefox, it places the file in /tmp, but it will not delete it if you click “Clear all” while the file is locked by the application you opened it with. Starting fresh every boot is a good way to clean it out automagically.

Well, I did so on my XPS notebook with a hard drive, since I hardly use the 2 GB of RAM (except when running a virtual machine, which is rare). Works quite nicely, and my personal experience is that it’s become a bit faster.

Newer incarnations of the Linux kernel offer a “relatime” option which updates the access time only when the file has been modified since the access time was last updated. This reduces writes nearly as much as “noatime” would, while still updating the access time of each file and thus retaining some backwards compatibility with tools that still rely on updated access-time information.

You will find that current Ubuntu installations already have the “relatime” option on by default.
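
A quick way to verify this, if you like, is to check the mount options on your root filesystem:
mount | grep ' / '
You should see relatime (or noatime, once you have made the change) among the listed options.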

See http://www.storagesearch.com/ssdmyths-endurance.html for more about the ‘excessive writes wear out flash drives’ issue, which is mostly a myth at present. Basically you have to write evenly across the whole SSD, as fast as possible and 24/7, to even have a chance of wearing it out, and it would typically take many years even with this unrealistic usage pattern.

Actually, this is not a myth based merely on supposed experience. This is a known disadvantage inherent to flash technology. Writing to flash chips damages them. It can take a very long time to cause enough damage to actually destroy the data integrity of a chip, but excessive writing (especially to the same blocks, as most modern OSs do) can dramatically reduce the life of flash media.

One of the reasons tests may not show this is because most flash drives contain extra unused memory that is used as failover when bytes on the regular part of the drive fail (I am sure the redirection to failover, once a block has failed, increases read and write times, although only marginally).

The problem with modern flash manufacturers is that they do not list technical specs on the package. This means I do not know if that drive can withstand 10,000 writes per block, or over 1,000,000 writes per block. This also means I do not know if the drive has 5 or 10 blocks of failover space, or half the total drive capacity.

Anyway, excessive write use, especially to the same location, very well can wear out a flash drive far more quickly than a hard drive. Modern technology and failover techniques can slow this, but it depends greatly on how much failover space the manufacturer provided and the technology used by the drive, and since these are not listed on the package, the dependability of any given drive is subject to many unknown factors.

In any case, even if I did have a drive known to have an average life three times that of magnetic hard drives, I would still use these techniques, because they will extend the life of the drive to at least twice, if not five to ten times, as long as it would otherwise have been.

Note: A user would not have to write the entire drive over and over to quickly destroy a flash drive. He would just have to focus on a single block, writing it until it failed, then continue the same pattern until all of the failover space was also destroyed. Temporary data storage and virtual memory use in most modern OSs fits this pattern almost perfectly. Once a single block has been made unreliable, the reliability of the entire drive is gone.

Another big disadvantage of USB flash drives is heat (having less experience with SSDs, I do not know if this is also a problem for them). Try copying a 25MB file (or folder) to your USB drive, then feel the drive’s case. Imagine how hot the chips inside must be for you to be able to feel that heat through the plastic case. This heating (and then cooling in between reading or writing) is what destroys most small electronics. Reducing write cycles to USB flash drives will help circumvent this wear. If you are using USB drives, internally, for hard drives, I would recommend at least removing the plastic cases if not adding heat sinks to the chips, to keep them cooler.

(Several years ago I was working on using a USB flash RAID for a mini-ITX motherboard and I did an enormous amount of research on this. Because flash technology has improved a good deal since then, I am going to actually try it this time. My recent research has found that the limited write cycles of flash is still a big concern when using it for running an operating system, but modern flash technology coupled with the techniques listed on this page should yield a good long-term use system.)

Mr Rybec, you write “Note: A user would not have to write the entire drive over and over to quickly destroy a flash drive. He would just have to focus on a single block, writing it until it failed, then continue the same pattern until all of the failover space was also destroyed.”

I think you misunderstand why “writing the entire drive” is brought up in this context. Current SSDs supposedly incorporate “wear levelling”, where virtual blocks are mapped to real blocks so that each block gets a similar amount of wear. Therefore, writing to one location repeatedly would produce no more wear than “writing the entire drive”.

Your quote above would thus be FUD, except for the fact that as you observe, manufacturers don’t give details about how their drives work. It is not just the redundant blocks however, as you suggest, but rather, the actual wear levelling algorithm which is employed.

In my experience, there is still a definite difference between ‘noatime’ and ‘relatime’. ‘relatime’ is doing a lot more writes and slowing things down (note: I’m only concerned with performance here, not the idea that I’m going to wear out the SSD too fast).

I’m using the noop scheduler and it seems fine; much better than the default. I haven’t tried testing it against the deadline scheduler, though.

I use ext2 as well. I rarely have unclean shutdowns, and hey, I used it for years before ext3 existed without a problem. Seems to me more likely that the SSD as a whole would fail than that I would lose data to an unclean shutdown, and I have full backups anyway.

putting the firefox disk cache on a ramdisk makes no sense to me, Firefox already has a memory cache. The disk cache is the 2nd level cache. Why not disable the firefox disk cache completely and set firefox to cache only in memory?
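
For anyone who wants to try that: the relevant about:config preferences should be browser.cache.disk.enable (set it to false) and browser.cache.memory.enable (set it to true); browser.cache.memory.capacity can optionally cap the memory cache size in KB. This is just a suggestion, not something from the article itself.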

I was using the noop scheduler for my 30GB OCZ SSD and decided to try the deadline scheduler after reading your article. No major throughput differences, but it seems the deadline scheduler uses slightly fewer CPU resources. This is more of a subjective observation using the GNOME system monitor.



I had problems with Hardy Heron where Mozilla would not shut down right away when I clicked it off. I would get the “do you want to wait or shut the application down now” box for a couple seconds. I applied the first two tweaks to my system and now I don’t have the problem any more. It works fine..!!
This is on a NON-SSD, standard hard drive installation.


My 256M SD card was misbehaving. I was unable to format it with any filesystem: VFAT, ext2, ext3 (yes, I know, a journal is bad for SSD). So just for giggles, I ran “badblocks -w” against it, a destructive test that writes patterns of all 1s, all 0s, 10s, and 01s.

It reported no bad blocks.

I was then able to format the SSD with ext2.

Maybe the badblocks program actually fixed the failing bits, and maybe it was just a fluke. As I said, YMMV. But, if the SSD is failing anyway, what do you have to lose? After all, you have backups, right? Right?

The reason is that SSDs prepare pages for writing by erasing them in whole blocks. There are many pages per erase block: a typical page is 4 KiB, with typically 128 pages per block, making each erasure half a MiB. By adding -w, you told your ‘puter to write over every single last block of your SSD four times over. Because you touched every single block for write, you erased everything; otherwise there would be discrepancies between what’s written and what needs to be erased. That’s what the more modern TRIM command is about: informing the drive which blocks it may preemptively erase. It’s that erase/program (write) cycle which takes a long time, and thus makes your write performance sucky. And it’s exacerbated by making tiny writes (compared to the block size), which makes the drive appear to wear faster. Imagine doing 128 writes of 4 KiB or less each (such as a simple mtime update); the 129th might have to perform an erase before it can actually be carried out, even if it’s the same block being written as far as your OS is concerned. The SSD’s wear-levelling algorithms will spread that out over many physical blocks.

Well, I upgraded my EEEbuntu to 9.xx and really wanted these tweaks to work. No problem with ‘noatime’, but the shifting over of the /tmp area seems to make the configuration unable to mount my SDcards or USB memory sticks.

I am somewhat of a conservative. For the SSD, there is no swap disk, ext2 for the file system, and ‘noatime’.

I understand that the noop/deadline scheduler is a good choice for SSD drives but what happens when I plug a spinning/traditional usb hard disk to my mini9?
I use such external hd for my backups and I really don’t want to mess it up!
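
One possible workaround, assuming the external disk appears as sdb: since the scheduler is a per-disk setting, you can switch the USB drive back to cfq after plugging it in, using the tee trick mentioned above:
echo cfq | sudo tee /sys/block/sdb/queue/scheduler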

Q: How do I convert my ext3 partition back to ext2?
Actually there is little need to do so, because in most cases it is sufficient to mount the partition explicitly as ext2. But if you really need to convert your partition back to ext2, just do the following on an unmounted partition:

tune2fs -O ^has_journal /dev/hdaX

To be on the safe side you should force a fsck run on this partition afterwards:

fsck.ext2 -f /dev/hdaX

After this procedure you can safely delete the .journal file if there was any.

I expanded on this and now have a script setup that upon shutdown or reboot remembers all directories & files in /var/log (including access rights, owner and group settings).

To use it is easy enough:
1> Create a script called svlogdir.sh somewhere and make it executable (chmod +x svlogdir.sh).
2> Open it with gedit and put the text at the bottom of this reply in the file (gedit svlogdir.sh).
3> Run it as root to install (sudo ./svlogdir.sh)

You also need to edit your fstab (sudo gedit /etc/fstab) and put in the following line at the end to make /var/log a tmpfs:
tmpfs /var/log tmpfs user,noatime,mode=0755 0 0

Once you have completed both, do the following commands to complete the setup:
sudo rm -rf /var/log/*
sudo /etc/init.d/mklogdir.sh
sudo reboot

Your SSD sucks. You should not be getting half the write speed to your SSD compared to a hard drive. Decent SSD drives will write in the 180-200 megabytes/sec range w/ the type of sequential write test that hdparm does.

The sudo command is not working because the > causes the output to be written to the file by the currently running shell. You need a sudo’ed process to write it, so fire up bash with sudo and give it the command to run with the -c (for command) option:
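
Something along these lines, reusing the sda/deadline example from the article:
sudo bash -c "echo deadline > /sys/block/sda/queue/scheduler"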


This is precisely why LED backlighting is more interesting to me in a ThinkPad than SSD. The new Fujitsu T2010, Toshiba R400, and Compaq 2710p tablets all feature LED backlighting (as does the upcoming Dell Latitude XT tablet). Why doesn’t the X61t? While the X61t is an excellent machine, including the unusual IPS-type display, the loss of brightness and battery life of CFL versus LED makes it difficult to recommend to most users.




I like the article, but I am inclined to agree with Mathieu above. The system pauses during write and the slow throughput suggest Tom had a flawed first-gen SSD. I’m curious whether the IO scheduler trick is necessary with an Intel SSD, or one of the second-gen cards.

All schedulers assume seek time is very large compared to sequential read time. That is not the case with SSDs. Hence, the scheduler’s assumptions will cause problems that get worse when more complex SSD controllers are present. In fact, a good SSD controller will have a scheduler of its own, and this may cause conflicts.

CFQ is very complex. It gives highest priority to real-time reads and puts everything else on a queue using some complicated stochastic model. It has a high probability of conflicting with the SSD controller.

Deadline is very simple. It also prioritizes real-time processes, but all other requests simply have a deadline assigned and are executed in elevator order (the request at the address closest to the last one serviced goes next). When the deadline of some request is reached, the scheduler jumps to that request. Users are finding that deadline works reasonably well with SSD controllers.

Anticipatory is very closely tied with hard-drive structure. It waits some time for nearby reads (which is an irrelevant strategy in SSDs), and then proceeds in elevator order, like in deadline (except that there are no deadlines!). It should never be used with SSDs.

Noop is the simplest, and just assumes that scheduling is done by the SSD (or hardware RAID) controller. It serves requests in the order they are received. Theoretically it is the best choice for SSDs, but its performance depends heavily on the “smartness” of the scheduler in the SSD controller. We can only guess at that…

If I add the elevator=deadline option to the default parameters in /boot/grub/menu.lst, what should I do with the echo 1 > /sys/block/sd?/queue/iosched/fifo_batch commands? Should I add them to /etc/rc.local, or aren’t they needed in this case?
