Knowledge Base: Solid State Drives (SSDs)

SSDs are not new, but as of 2011 they are being marketed broadly. This page
refers to SSDs built from so-called Flash cells, which trap electrical charge
in electrically isolated areas and thus keep data without electrical power.

Defragmenting an SSD?

This topic causes controversial discussions, close to religious wars,
yet very few people understand the basic operation.
Let me try to explain.

An SSD does not need to reposition heads by mechanical movement,
which takes time.

An SSD does not need to wait for a sector to rotate under the heads.

Both SSDs and HDDs need separate commands to read non-adjacent sectors.

Transmitting a command (and receiving a status in return) takes time.
Processing it takes time, too.

So the answer to the defrag question reduces to answering this difficult
question: "Will contiguous numbering of the sectors assigned to a file
result in significantly fewer commands being sent for reads (or writes)
on your particular computer system?"

The file system (if any is in use; some database applications go directly
to mass storage sectors) does not allocate sectors to a file, but logical
blocks (or clusters; terms vary), each of which can be 1 to N sectors.
A typical sector is 512 bytes; the default NTFS cluster of 4096 bytes
therefore spans 8 sectors.

There is a limit on the number of sectors that can be processed by a single
command (I would need to look up the exact value). Above that number, multiple
commands have to be issued anyway, even for contiguously ascending sector
numbers.
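To make the command count concrete, here is a small sketch. The per-command
sector limit below is a made-up illustrative number, not taken from any
specific interface specification:

```python
# Sketch: count the read commands needed for a file, given its fragments as
# runs of contiguous sectors. MAX_SECTORS_PER_COMMAND is a hypothetical limit.
MAX_SECTORS_PER_COMMAND = 65536

def read_commands(fragment_lengths, max_sectors=MAX_SECTORS_PER_COMMAND):
    # Non-adjacent fragments can never share a command; each fragment
    # needs ceil(length / max_sectors) commands on its own.
    return sum((n + max_sectors - 1) // max_sectors for n in fragment_lengths)

# A 4 MiB file = 8192 sectors of 512 bytes:
print(read_commands([8192]))       # contiguous: 1 command
print(read_commands([1024] * 8))   # split into 8 fragments: 8 commands
```

Whether 1 versus 8 commands matters in practice depends on the per-command
overhead of your particular system, which is exactly the question above.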

What is the typical read size in your system?

At the edge of physics

Being "at the edge of physics" simply means it is impossible to
improve one parameter without making another one worse.
It means using hard-to-master algorithms and technologies
to get the desired result, risking errors / mistakes / bugs.

Important parameters of mass storage

Important parameters are:

Capacity

(Price per capacity)

Speed (different sub-categories)

Number of allowed Write and Read cycles

Long-term safety of data

Is the Flash-based SSD new?

No, not at all! It was not a mass-market product before, but targeted
special markets: smaller volumes, higher requirements, and much further
away from the edge of physics.

HDD or SSD

Why mistrust SSDs? Because they are strongly pushed into the market with a
lot of advertising. Some HDD makers do a bad job, too, but my feeling is
that the risk with SSDs is much higher.

Pro SSD

An SSD may be faster, dramatically so on random access.
An SSD is mechanically more robust.

Contra SSD

Many reports say that an SSD needs the same power as an HDD, sometimes even
more.
As of 2011-06, SSDs are still a lot higher in price, not necessarily worth
the money.
An SSD has a low limit on possible write cycles.
The limit of an HDD is much higher and needs no consideration.

Opinion

As long as the system operates without a page file, and you have the money,
an SSD is the first choice.
I do not want to place a (Windows) page file on an SSD,
unless I had even more money to buy some compatible SSDs up front
for regular replacement.

SSD failure mechanism

A worn-out SSD has a higher charge leakage in its memory cells.
The cells probably do not fail during a write with verify
(the memory programming operation) - which could easily be handled,
with the user warned while there is still time.
Failure is likely to occur much later, during a read, detected by
mismatching sector checksums or by errors found by more sophisticated
checking and correction algorithms, such as a Hamming code.
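As an illustration, the single-error correction mentioned here can be shown
with the classic Hamming(7,4) code. This is a toy sketch; real SSD
controllers use much stronger codes (e.g. BCH or LDPC), but the principle
of detecting and fixing a limited number of flipped bits is the same:

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits.
# Codeword layout (1-based positions): p1 p2 d1 p3 d2 d3 d4

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Fix up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 = none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                          # a single leaking cell flips one bit
print(hamming74_correct(code))        # data recovered: [1, 0, 1, 1]
```

Two flipped bits in one codeword would defeat this code, which is exactly
the "too many errors" situation described next.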
If there are too many errors for the correction algorithm, it is too late
for the data. It should be possible to prevent such a situation by
regular reads of all (used) sectors, e.g. once a week.
The user needs an honest and communicative SSD for this,
one which reports the number of correctable errors found.
The user has to take action as soon as the error numbers start to rise.
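The weekly "read everything" idea can be sketched as follows. Assumptions:
a POSIX-style readable device or image file; the path and chunk size are
placeholders; real monitoring would additionally watch the drive's SMART
error counters (e.g. with smartctl):

```python
# Sketch: read a device (or image file) end to end. A read error marks a
# region the drive could no longer deliver, so its data is in danger.
def scrub(path, chunk_size=1024 * 1024):
    errors = 0
    offset = 0
    with open(path, "rb") as dev:
        while True:
            dev.seek(offset)
            try:
                chunk = dev.read(chunk_size)
            except OSError:
                errors += 1           # unreadable region found
                offset += chunk_size  # skip past it and carry on
                continue
            if not chunk:
                break
            offset += len(chunk)
    return errors

# e.g. scrub("/dev/sdb") on Linux (hypothetical path), run weekly from cron
```

Note that this only forces the drive to re-check every sector; whether you
learn about *correctable* errors still depends on the drive reporting them.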

This particular failure mechanism means that any short-term tests
(without time acceleration emulated by harsher conditions) have little value.
They state whether the cells keep their content for a short time, nothing more.

Estimation for page file usage and SSD survival

In the old days, the best Flash memories had their parameters given
"inclusively", meaning 0 errors after X years of storage or operation
at temperature Y, having already experienced Z erase/write cycles, for the
whole chip.

Today, parameters are usually exclusive. You get one, maybe two,
but not all at the same time. Error correction mechanisms are included to
recover from a defined number of failing bits.
Data sheets need very cautious readers. Some companies prefer to hide their
information behind NDAs, more or less.
The data sheet numbers are realistic, maybe too promising, but these days
very unlikely to underestimate the performance. The "Endurance"
(a.k.a. Max Erase/Write or Program/Erase) numbers are clearly decreasing with
increasing memory chip capacity.

I WILL HAVE TO REWORK THE FOLLOWING (TO MAKE IT WORSE):
Let's take numbers from a freely available data sheet for a 32 Gbit NAND
Flash, the "hynix HY27UK08BGFM series", rev 0.0 of 2007-02-09, which is
clearly not the latest chip achievement as of 2011-06-24
(the latest tend to be worse, due to physical limits already being reached!).

Data Retention: 10 years

Max Program/Erase cycles, with 1-bit ECC per 512 bytes: 100,000

Min Valid Blocks (out of 32768): 32128

Max Ambient Operating Temp (Commercial) / Celsius: 70

I am unsure whether "Max Ambient O.T." is an "inclusive"
here, but have good reason to guess it is.
The "Data Retention" tends to be valid up to 1/10 of the
"Max Program/Erase cycles", so taking it all together we have
10,000 cycles and 10 years in a really operating device, which may have a
temperature of 55 Celsius (so real results should be better than estimated).

Not considering the capacity reduction due to "Min Valid Blocks",
nor any additional capacity needed for management, firmware, or replacements,
a 128 GByte SSD needs 32 such chips. This is a lot, but realistic.

We have 10,000 writes and 10 years, as stated above.
I assume "wear leveling" is well implemented and really works.
It means the storage device balances the number of writes
(or erasures, depending on technology) done to blocks of memory.
This may involve moving already stored data around when writing new data,
which has a cost of its own (performance, one more erase/write cycle),
but happens only infrequently, on a significant wear difference.
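The balancing idea can be reduced to a toy model. This is a deliberate
simplification; a real flash translation layer also remaps logical to
physical blocks and relocates rarely-changing data:

```python
# Toy wear-leveling sketch: always erase/write the least-worn free block,
# so erase counts stay balanced across the whole device.
def pick_block(erase_counts):
    return min(range(len(erase_counts)), key=erase_counts.__getitem__)

erase_counts = [5, 2, 7, 2]
block = pick_block(erase_counts)   # block 1: fewest erasures so far
erase_counts[block] += 1           # the chosen block is erased and rewritten
print(block, erase_counts)
```

With balancing in place, the whole capacity wears evenly, which is what
justifies the "write the full capacity 10,000 times" calculation below.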
Assume a 128 GB SSD.
Assume a 20 MB/s write rate, which is quite high, because the RAM must be
read back again, and some other processing will occur, too. Even when paging
a lot, data will not be modified that often, meaning that on swapping, a RAM
location is read multiple times from different mass storage locations but
written to mass storage less frequently. So the consideration is worst case.
It takes 6400 s to write 128 GB, and 6.4E7 s or 740 days of continuous
operation to write it 10,000 times, i.e. about 2 years.
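The estimate can be written out as plain arithmetic, using the figures
assumed above (128 GB capacity, 20 MB/s sustained write rate, 10,000
usable erase/write cycles):

```python
# Worst-case page-file endurance estimate, figures as assumed in the text.
capacity_mb = 128 * 1000        # 128 GB drive
write_rate_mb_s = 20            # assumed sustained page-file write rate
cycles = 10_000                 # usable erase/write cycles per block

seconds_per_pass = capacity_mb / write_rate_mb_s    # one full overwrite
total_days = seconds_per_pass * cycles / 86_400
print(seconds_per_pass, total_days)   # about 6400 s per pass, ~740 days
```

Roughly 740 days of *continuous* worst-case writing is about two years,
spread over far more calendar time in any realistic usage pattern.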
I guess such an SSD survives
even heavy page file usage over 10 years of a user operating the computer,
and still keeps the data for 10 years.