Anyway, I don't understand why so many people complain about compile time; one can do other things while compiling... For me the "compile everything yourself" philosophy was my main reason to change from Mint/Arch/Fedora to Gentoo: not even because it should run faster, but because it's cooler and "realer".

In the early days of flash memory, write-cycle life expectancy was about 1000 cycles. There was no wear leveling, and the devices were too small and expensive to even think about using them in solid state drives.

Write life expectancy has improved, wear leveling exists, device capacity has increased and cost has plummeted.

There is anecdotal evidence on these forums that attempting to install Gentoo on the early 4G SSDs fitted to the eeepc would kill the drive before the install was complete.

It's no longer an issue.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

But is life expectancy still lower than that of conventional hard drives? On my T420s I only have one 160GB SSD, but on my workstation I have a 128GB SSD and a 2TB HDD. Is it still a good idea to put /var and /tmp on the HDD for this reason? Right now I have lvm2 with an extended volume group containing /home, /var, /tmp and /opt, and as far as I know I can't control whether /var and /tmp are completely on the HDD.
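For what it's worth, LVM does let you say which physical volume an LV's extents land on. A minimal sketch, where /dev/sda2 as the SSD PV, /dev/sdb1 as the HDD PV, and the volume group name vg0 are all hypothetical:

```shell
# Create /var and /tmp LVs pinned to the HDD by naming its PV explicitly;
# lvcreate allocates only from the PVs listed after the VG name.
lvcreate -L 20G -n var vg0 /dev/sdb1
lvcreate -L 10G -n tmp vg0 /dev/sdb1

# Migrate an existing LV's extents from the SSD PV to the HDD PV:
pvmove -n var /dev/sda2 /dev/sdb1
```

So with an existing extended VG, pvmove can shuffle /var and /tmp onto the HDD without recreating anything.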

I have /tmp and /var/tmp/portage in RAM.
/tmp should only be small. If you need to put a DVD ISO file somewhere before you burn it, /tmp is not the place; somewhere under ~/ is better.
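The fstab lines for such a RAM-backed /tmp and /var/tmp/portage would look something like the sketch below; the sizes are illustrative assumptions, not NeddySeagoon's actual values:

```shell
# /etc/fstab excerpt: tmpfs mounts for /tmp and the portage build dir.
# tmpfs only consumes RAM for what is actually stored in it.
tmpfs   /tmp                tmpfs   size=1G,nodev,nosuid                      0 0
tmpfs   /var/tmp/portage    tmpfs   size=4G,uid=portage,gid=portage,mode=775  0 0
```

The size= limit matters: without it tmpfs defaults to half of RAM, and a runaway build can push the box into swap.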

# check order set to zero as the initrd does the checking now
# using real /dev names as the /dev/vg/symlinks are not created in the initrd
# we could just make them
#/dev/dm-3 /var ext4 noatime,noauto 1 0
#/dev/dm-0 /usr ext4 noatime,noauto 1 0

# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
# POSIX shared memory (shm_open, shm_unlink).
# (tmpfs is a dynamically expandable/shrinkable ramdisk, and will
# use almost no memory if not populated with files)
shm /dev/shm tmpfs nodev,nosuid,noexec 0 0

Hmm, it looks like I forgot to fix /tmp in /etc/fstab after I tested with /tmp in RAM.

I have a 4 spindle RAID. /boot is raid1, / is raid5 and everything else is in lvm2 on top of raid5.
When I set this box up I wimped out of putting root in lvm2 on raid, but then udev caught up with me and my separate /usr and /var.

My laptop is more than three years old and from day one it has had only one 128GB SSD drive in it. I use it every day at work and home. No problems so far. So, IMO, the last of your problems on a desktop or laptop will be reaching the write limit of your drive. Most probably the machine will become obsolete before the SSD fails.

I do my compiling on RAM (but mainly because of the time gain). As cach0rr0 said, 4GB are not enough for everything. But in portage you can set different PORTAGE_TMPDIR for different packages. For example, I have the regular /var/tmp/portage in RAM to compile everything except Firefox and LibreOffice, which are compiled in /var/bigtmp that is located on the SSD.
This can be done through /etc/portage/package.env. Here is my setup:
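The actual files were not quoted in the post; a minimal sketch of such a split, where the env-file name notmpfs.conf and the exact package atoms are assumptions:

```shell
# /etc/portage/env/notmpfs.conf -- redirect big builds off the tmpfs
PORTAGE_TMPDIR="/var/bigtmp"

# /etc/portage/package.env -- map the heavyweight packages to that env file;
# everything else keeps the default (tmpfs-backed) PORTAGE_TMPDIR.
www-client/firefox      notmpfs.conf
app-office/libreoffice  notmpfs.conf
```

Portage reads package.env per package at merge time, so no other configuration changes when the big packages build on disk.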

I also currently run Gentoo on SSDs, have three machines with SSD primary disks (32GB SSD/2GB RAM, 128GB SSD/4GB RAM and 180GB SSD/8GB RAM). I did not however do a full install from stage3, they were "stage4" type installs from a hard drive. I have the portage tree installed on the >100G disks.

However they all have gone through pretty much an "emerge -e world" because of the number of updates since first install.

I did find something interesting on my Crucial 128GB SSD - it records the average number of times the SSD cells have been erased. It currently is "3" (of 3000) and this includes the usage on the Windows partitions... It does not appear to be incrementing way too quickly despite the emerge updates, probably around 1 per month so far.
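At roughly one erase cycle per month against a 3000-cycle rating, the arithmetic is comfortably in the drive's favour. A back-of-envelope sketch using the numbers from this post (3 cycles used, assumed over about 3 months of ownership):

```shell
# Numbers from the post: 3 average erases used, 3000 rated,
# accumulated over roughly 3 months.
rated=3000
used=3
months=3
rate=$(( used / months ))                  # ~1 erase cycle per month
years=$(( (rated - used) / rate / 12 ))    # remaining cycles -> years
echo "about ${years} years left at this rate"   # prints: about 249 years left at this rate
```

Even if the real wear rate were ten times higher, the cells would still outlast the machine by a wide margin.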

I tend to compile in tmpfs due to speed too, but when I run out of RAM I just let it go on the SSD. So far so good, and I even compile firefox (Libreoffice I use the binaries).

(I still have the 4GB SSD that came with my eeePC. It's too small to do anything really, I wonder how many write cycles it has left. The 4GB SSD was replaced with the 32GB SSD. I keep portage on an 8GB SD card, which I expect to fail much sooner than the 32GB...)
_________________
Intel Core i7 2700K @ 4.1GHz / HD3000 graphics / 8GB DDR3 / 180GB SSD
What am I supposed to be advocating?


Sounds like things are more or less sorted on the wear-leveling side, and that was one of my hangups. I'm still wanting to make the SSD jump, but man... I need crypto, and they still don't seem to have a great answer for that.
I may be forever stuck rotational.
_________________
Lost configuring your system?
dump lspci -n here | see Pappy's guide | Link Stash

Well, LUKS TRIM passthrough works, though zeroed erase blocks do reveal something about the encrypted partition. You have to decide how valuable your information is.

Yip, and that's a problem I just plain don't see a way to fix. I don't think it ever will be. You can't present a disk full of random data when a full disk is counter to the functionality of an SSD.
I just can't justify it. I need airtight. I can't justify leaving even the slightest opening for the sake of what is, granted, a substantial performance benefit, when on the whole my machines already work worlds faster than my brain can manage.

It's really really really tempting. And frustrating. A speedy enough disk and I could keep a Windows VM that I need for work running full time. Just can't do it. Too paranoid I guess.

Well, in theory the kernel could postpone TRIMs in a random manner to free some blocks. However, I guess it does not seem important enough to implement. I cannot imagine that the information about which blocks are free is so valuable that you could draw any serious conclusions from it: even hypothetically, I do not see such an example at the moment.

How much would it cost to compromise your data with and without TRIM enabled, and how do these compare to the value of your data?

How much is the speed of an SSD worth to you compared to the cost of using an SSD, with and without TRIM?

I'm in a fortunate position where I can have craploads of backups, all over the place (and all of them encrypted as well)
So my chances of losing data - at least the important stuff, I don't include my stash of funny animated gifs in my backups - are low short of a catastrophic global event.

Data compromise is more likely for me at the borders, where my "4th Amendment" rights are suspended. I don't have anything that *should* interest folks in my government, but that's the old, "if you have nothing to hide..." - right, so, dear government, it's my damn data, you can sod right off thinking you're entitled to it. The worry is of course that I get my HDD cloned, and some government leech sitting in an air conditioned underground bunker gets to take his time trying various attacks at his leisure - mathematically, it should take many lifetimes to get at my data, so I don't want anything to make it easier on 'em.

Getting at my unencrypted disk means saved passwords, saved crypto keys for other things, that frankly the government has no business seeing. It's mine. Not the property of anyone else.

So yeah, I've been holding off on pulling the trigger on an SSD for, I think, the past 2 or 3 years thanks to that. Just as I convince myself "well, the risk is so low", I always come back to "nah, can't risk it".

So I guess the conclusion is that the security of your data is priceless to you.

For me, I just want to prevent identity theft. If some NSA spook wants to read my diary entries, fine by me.

I may reevaluate if I ever become important.

purely a matter of principle. I am completely and totally unimportant and uninteresting to a government.
but my data, the things that are in my brain - this is our last bastion, our last frontier, of things that the government can't touch or meddle in.
If they come with actual guns blazing to take my property, to arrest me, kill me, whatever else, I can offer little resistance.
But if they come for my data, they have no better chance of "winning" than I do - in fact, it's a rare case where I have an advantage.

So if they want pictures of my cat and my guns, or if they want to sift through my porn stash, the bastards are gonna have to work for it.

I compile stuff in tmpfs too (using 2G out of 4G RAM), except for firefox/chromium, OpenOffice/LibreOffice and thunderbird, where I use the -bin versions. It's no use trying to compile them: I think the main benefit of compiling is slightly faster startup, and that is not worth spending hours building such big packages when binary versions are available.

I have been trying out a zram-mounted /tmp, and have allocated 500M to it. I have made my browser cache go to /tmp and have never seen the usage go above 50M.
_________________
emerge --quiet redefined | E17 vids: I, II | Now using e17 | e18, e19, and kde4 sucks :-/
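A zram-backed /tmp along those lines can be set up by hand; a sketch, run as root and assuming the zram module is available and zram0 is unused:

```shell
# Create a 500M compressed RAM block device and mount it as /tmp
modprobe zram
echo 500M > /sys/block/zram0/disksize   # set the device size before use
mkfs.ext4 -q /dev/zram0                 # put a filesystem on it
mount -o noatime /dev/zram0 /tmp
chmod 1777 /tmp                         # /tmp needs the sticky bit
```

Unlike a plain tmpfs, zram compresses its pages, so 500M of apparent space typically costs well under 500M of RAM for compressible data like browser cache.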