I've learned about this by talking to people in the
industry. The exact details and algorithms used are proprietary secrets and
sometimes covered by patents. But the principles are the same in all SSDs.

In flash devices 2% to 10% of blocks may be error prone or unusable when
the device is new.

And after that, data in "good blocks" can later be corrupted by charge leakage, disturbance from writes in adjacent parts of the chip, wear-out, and variability in the tolerances of the R/W process in MLC SSDs.

The
explanation below is
based
on an email I sent to a reader in November 2010.

Controllers remap logical blocks to new physical locations every time they write to a block - because they try to even out the total writes done on any physical block.

When they get unacceptable errors from a block, it's assigned to a dead pool.
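
As a rough sketch of what that remapping and retirement loop can look like (Python, with invented names and thresholds - not any vendor's actual firmware):

```python
# Minimal sketch of write-time remapping and dead-pool retirement.
# All names and thresholds are illustrative, not taken from any real controller.

MAX_CORRECTABLE_ERRORS = 8   # hypothetical limit set by the ECC strength

def program_and_verify(physical_block, data):
    # Stand-in for the real program/verify operation; returns a bit error count.
    return 0

class FlashTranslationLayer:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.dead_pool = set()                         # blocks marked "do not use"
        self.free_blocks = set(range(num_physical_blocks))
        self.map = {}                                  # logical block -> physical block

    def write(self, logical_block, data):
        while True:
            if not self.free_blocks:
                raise RuntimeError("out of spare blocks - the SSD is dead")
            # Wear levelling: pick the least-erased free block so total writes
            # are evened out across the whole physical population.
            target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
            self.free_blocks.remove(target)
            errors = program_and_verify(target, data)
            if errors <= MAX_CORRECTABLE_ERRORS:
                break
            # Unacceptable errors: assign the block to the dead pool and retry.
            self.dead_pool.add(target)
        # The previously mapped physical block is erased and returned to the pool.
        old = self.map.pop(logical_block, None)
        if old is not None:
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        self.map[logical_block] = target
```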

For every type of flash chip and each process stepping and
each manufacturer - the SSD designer needs to know the percentage of dead
blocks which they are likely to get during the life of the SSD. (Typically using
a design life of 5 years.)

Successfully working around these defects
also depends on the strength of error coding - and how the blocks are mapped
on the solid state disk.

Using a RAID approach and a population of thousands of flash chips - as in a rackmount SSD like those made by Violin - gives a higher percentage of blocks which can fail while still leaving the SSD usable, because data is striped across blocks.

On the other hand - in consumer SSDs with fewer chips and lower capacity - the striping options are more limited.
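
As a rough back-of-the-envelope illustration (with made-up layouts - these don't describe Violin's or anyone else's real geometry), chip-level parity across a big population tolerates many whole-chip failures, while a small consumer layout may not be able to afford any:

```python
# How many whole chips can fail before data is lost, for two hypothetical
# layouts. The numbers are illustrative only.

def best_case_tolerable_chip_failures(total_chips, stripe_width, parity_per_stripe):
    # One stripe group per `stripe_width` chips, each holding `parity_per_stripe`
    # redundant chips. In the best case (failures spread evenly across groups)
    # each group can lose that many chips and still reconstruct its data.
    groups = total_chips // stripe_width
    return groups * parity_per_stripe

# Rackmount SSD: 1000 chips in 100 stripes of 10, with 1 parity chip per stripe.
print(best_case_tolerable_chip_failures(1000, 10, 1))   # 100 chips (10% of the population)

# Consumer SSD: 8 chips with no chip-level parity (dedicating 1 of 8 chips to
# parity would cost 12.5% of raw capacity, which a low-cost design may not accept).
print(best_case_tolerable_chip_failures(8, 8, 0))        # 0 chips
```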

The design process results in a bad block budget - for example 4% to 10% - of dead blocks which the SSD can find and yet still operate. Bad blocks are mapped as "do not use" and known good blocks are substituted instead. This budget (which is due to media defects) is in addition to the budget which is calculated for attrition of blocks due to wear-out.

The percentage of bad blocks which can be accommodated is a product marketing decision. The spare blocks come from over-provisioning inside the SSD and using capacity which is invisible to the host.

If the bad blocks exceed the budgeted number for any reason - the SSD fails.
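
A simple worked example (all figures hypothetical) of how over-provisioning funds the bad block budget, and of the failure condition when it is exceeded:

```python
# Hypothetical figures showing how invisible over-provisioned capacity is
# split between a wear-out reserve and a media-defect (bad block) budget.

raw_blocks          = 100_000                            # physical blocks on the flash chips
host_visible_blocks = 88_000                             # capacity advertised to the host
overprovisioned     = raw_blocks - host_visible_blocks   # 12,000 spare blocks

wear_out_reserve    = int(raw_blocks * 0.06)                 # reserved for wear-out attrition
bad_block_budget    = overprovisioned - wear_out_reserve     # 6,000 blocks left for defects

print(bad_block_budget / raw_blocks)                     # 0.06 -> a 6% bad block budget

def ssd_still_operates(bad_blocks_found):
    # If the bad blocks exceed the budgeted number for any reason, the SSD fails.
    return bad_blocks_found <= bad_block_budget
```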

In the SSD market one of the reasons that some SSDs may have failed early was that SSD designers - who knew too little about what they were doing - used flash chips from sources other than those qualified by the controller manufacturer. That threw away the built-in safety margin. Another problem can arise when the original flash chip manufacturer changes something in their process - which doesn't affect the parameters they are testing for - but does change the way the devices look from the data integrity point of view. That too can tip the balance outside the margins designed into the controller.

Another risk of SSD failures comes from virgin SSD designers who don't
know enough about the variance of parameters in the flash chip population. If
they choose the bad block budget numbers based on too small a sample - and
don't allow enough margin - the controller runs out of spare blocks to assign
and dies.
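
To picture that mistake with a simplified statistical sketch (not a description of any real qualification process): the same observed bad block rate supports very different upper bounds depending on the sample size, so a budget set from a small sample with no margin is easy to overrun.

```python
import math

# Rough normal-approximation upper bound on the true bad block fraction.
# Purely illustrative; real qualification uses far more careful statistics.
def upper_bound_bad_fraction(bad, inspected, z=3.0):
    p = bad / inspected
    return p + z * math.sqrt(p * (1 - p) / inspected)

# Both samples show 4% bad blocks, but the small sample leaves a much
# wider band of plausible true rates.
print(upper_bound_bad_fraction(bad=8, inspected=200))          # ~0.082
print(upper_bound_bad_fraction(bad=4_000, inspected=100_000))  # ~0.042

# A designer who budgets 5% from the small sample has almost no margin;
# if the true rate sits near the upper bound the controller runs out of spares.
```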

SSDs are only as good as the people who design them and make them.
There can be orders of magnitude difference in operational outcomes - even when
different SSD makers are using exactly the same memory chips.

Increasing Flash SSD Reliability - although this article is mainly about endurance - gives a good insight into how block quality checking and remapping occur as part of the continuous work done by the SSD controller.

This article will help you understand why some SSDs (which work perfectly well in one type of application) might fail in others... even when the changes in the operational environment appear to be negligible.

Due to the undesirability (from an industrial chipmaker's point of view) of waiting 7 to 10 elapsed years to collect the real-time reliability evidence which would convince industrial users it was safe to design these new products into their systems - by which time they would be EOL and long forgotten - the semiconductor industry evolved theoretical methods to satisfy customers in such markets much sooner. These centered around accelerated life tests...

TMS does a 1-month burn-in of flash memory prior to shipment. (One of the reasons cited for its use of SLC rather than MLC, BTW.) Through its QA processes the company has acquired real-world failure data for several generations of flash memory and used this to model and characterize the failure modes which occur in high IOPS SSDs.

Most enterprise SSDs use a simple type of
classic RAID which groups
flash media into "stripes" containing equal numbers of chips. RAID
technology can reconstruct data from a failed Flash chip. Typically, when a chip
or part of a chip fails, the RAID algorithm uses a spare chip as a virtual
replacement for the broken chip. But once the SSD is out of spare chips, it
needs to be replaced.
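
A minimal sketch of that fixed-stripe scheme (hypothetical structure and names - not TMS's actual implementation): every stripe has the same width, and a failed chip is swapped for a spare until the spare pool runs out.

```python
# Sketch of classic fixed-width RAID across flash chips. Illustrative only.

class FixedStripeRaid:
    def __init__(self, stripes, spare_chips):
        # Every stripe is a list of chip IDs of equal length (the stripe width).
        self.stripes = stripes
        self.spares = list(spare_chips)

    def handle_chip_failure(self, stripe_index, failed_chip):
        stripe = self.stripes[stripe_index]
        if not self.spares:
            raise RuntimeError("out of spare chips - the SSD needs to be replaced")
        replacement = self.spares.pop()
        # The RAID algorithm reconstructs the failed chip's data from the rest
        # of the stripe and rebuilds it onto the spare, which takes its place.
        stripe[stripe.index(failed_chip)] = replacement
```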

VSR technology allows the number of chips to
vary among stripes, so bad chips can simply be bypassed using a smaller stripe
size. Additionally, VSR provides greater stripe size granularity, so a stripe
could exclude a small part of a chip rather than having to exclude an
entire chip if only part of it failed - "plane error". With VSR
technology, TMS says its SSD products will continue operating longer in the
installed base.
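
A matching sketch of the variable-stripe idea (again hypothetical, not TMS's firmware): instead of substituting a spare, the stripe simply shrinks around the failed chip, or around a single failed plane within a chip.

```python
# Sketch of variable stripe sizing: stripes may have different widths, so a
# failed plane or chip is just dropped from its stripe. Illustrative only.

PLANES_PER_CHIP = 8   # hypothetical geometry

class VariableStripeRaid:
    def __init__(self, stripes):
        # Each stripe is a set of (chip_id, plane_id) members; widths can differ.
        self.stripes = stripes

    def handle_plane_failure(self, stripe_index, chip_id, plane_id):
        # Bypass only the failed plane; the stripe keeps working, one member narrower.
        self.stripes[stripe_index].discard((chip_id, plane_id))

    def handle_chip_failure(self, stripe_index, chip_id):
        # Bypass every plane of the failed chip.
        for plane in range(PLANES_PER_CHIP):
            self.stripes[stripe_index].discard((chip_id, plane))
```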

"...Consider a hypothetical
SSD made up of 25 individual flash chips. If a plane failure occurs that
disables 1/8 of one chip, a traditional RAID system would remove a full 4% of
the raw Flash capacity. TMS VSR technology bypasses the failure and only reduces
the raw flash capacity by 0.5%, an 8x improvement. TMS tests show that plane
failures are the 2nd most common kind of flash device failures, so it is very
important to be able to handle them without wasting working flash."
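
The arithmetic behind those quoted figures is easy to check:

```python
# Checking the numbers in the quote above.
chips = 25
plane_fraction = 1 / 8                     # the failure disables 1/8 of one chip

traditional_loss = 1 / chips               # whole chip removed: 4% of raw flash
vsr_loss = plane_fraction / chips          # only that plane removed: 0.5%

print(traditional_loss, vsr_loss, traditional_loss / vsr_loss)   # 0.04 0.005 8.0
```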

The publishers say that SSD designers must understand flash technology in order to exploit its benefits and counter its weaknesses. The new book is a comprehensive guide to the NAND world - from circuit design (analog and digital) to reliability.