I'm planning to build a new system this winter. I'd love to put a SATA 3 (6Gb/s) SSD in it for the OS, like one of the Crucial C300s. But I'm finding mixed reports on how well this setup is supported by the Linux kernel. I found one report (Ubuntu) that it runs at full speed with the boot option "libata.force=noncq", which turns off NCQ.

Other info is in the infobash output below. The Marvell controller on the SSD is not an issue for Linux -- but it can be a BIG problem for the BIOS on your motherboard. On the Asus P6X58D-E (AMI BIOS, updated), once the default RAID0 array on the SSD was disabled, neither of the "drive" devices would appear in the BIOS list of bootable devices, even after trying all the suggestions on the OCZ forum.

However, Linux (Live CD) had no problem seeing the two 60GB "drives" on the SSD, and I could use fdisk to do the partition alignment. So I installed an old WD740 SATA drive, set up a 1GB ext4 boot partition on it, and used that for Grub and /boot when I installed aptosid on the first SSD drive. I put my 20GB Win 7 VM on the second SSD drive, which is also an ext4 filesystem.

After it ran stable for a few days, I set the ext4 mount options for a desktop SSD (discard,noatime,commit=300), and mounted /tmp and the /var logs in tmpfs to minimize writes to the SSD. I also set swappiness and other vm settings in /etc/sysctl.conf to go easy on the SSD.
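For anyone who wants the concrete settings, the fstab and sysctl fragments look roughly like this -- a sketch, assuming the SSD root happens to be /dev/sdb1 (substitute your own device, and tune the values to taste):

```
# /etc/fstab -- SSD root with the desktop-SSD options mentioned above,
# plus a tmpfs mount so /tmp never touches the flash
/dev/sdb1  /     ext4   discard,noatime,commit=300   0  1
tmpfs      /tmp  tmpfs  defaults,noatime,mode=1777   0  0

# /etc/sysctl.conf -- keep the kernel from swapping to the SSD unless it must
vm.swappiness=1
vm.vfs_cache_pressure=50
```

The commit=300 option trades crash-window size for fewer journal flushes; on a desktop with a UPS that's usually an acceptable trade.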

I also wanted to experiment with the new btrfs filesystem, using the new WD drives. According to the wiki, the default btrfs configuration on a multi-drive installation is striped data but mirrored metadata. So I connected the two WD drives to the two SATA 3 connectors on the P6X58D-E, used mkfs.btrfs to create a single btrfs filesystem across both drives, and set it up for automatic mounting in /etc/fstab.

It has been running for 6 days now. I have been making some DVD ISO images, saving them, then deleting them, making new directories, then deleting them, etc., and I copied about 200GB of music files onto it along with my docs and images. I see some anomalies in the way the btrfs usage statistics are reported, but the filesystem seems to work correctly, and very fast, from the CLI. For example, the "1%" in the infobash output below is not correct. Here is df -h:
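For reference, the two-drive setup described above can be created like this -- a sketch, assuming the drives appear as /dev/sdd and /dev/sde and using /mnt/storage as an example mount point:

```shell
# Multi-device btrfs: the defaults are striped (raid0) data and
# mirrored (raid1) metadata; spelling them out here for clarity
mkfs.btrfs -d raid0 -m raid1 /dev/sdd /dev/sde

# Make the kernel aware of all member devices, then mount by any one of them
btrfs device scan
mount -t btrfs /dev/sdd /mnt/storage
```

For the fstab entry, adding the other member explicitly as a mount option (e.g. "device=/dev/sdd,device=/dev/sde") avoids depending on an init script having run "btrfs device scan" before the mount.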

Note that df reports only one of the btrfs devices (/dev/sde, but not /dev/sdd), yet it shows the full filesystem size across the two 1TB drives. Here is the btrfs filesystem spanning the two WD drives:
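For anyone checking their own setup: plain df doesn't really understand multi-device btrfs, but the btrfs-specific commands give the per-device picture -- a sketch, with /mnt/storage standing in for your mount point:

```shell
# Lists every member device of each btrfs filesystem found
btrfs filesystem show

# Breaks usage down by data / metadata / system allocation
btrfs filesystem df /mnt/storage
```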