There's a new concept which has been
coming up in a lot of the conversations I've been having about SSDs in
recent weeks. It's a simple one word summary which neatly bundles up a bunch of
technical and business concepts.

If you start looking at SSD companies
from this angle it gives you a new empirical way in which you can spot likely
winners and survivors in the SSD market.

It can also give you a fine-grained
reassessment of companies which - by other external measures (such as
quarterly results or being listed in the
Top SSD Companies List)
- already appear to be doing well.

But appearances are
deceptive.

A simple metric can probe beneath these external SSD
business veneers - and sub-divide (even the most attractive looking set
of) today's SSD companies into 2 further classes - which are laden
with portents for their future business outlooks.

I've found this
new way of looking at SSD companies equally valuable whether I'm talking to
someone who runs an SSD investment fund in a bank, or is the CEO, VP marketing,
or CTO in an SSD company, or someone who is a seriously interested designer and
user of SSDs.

Now
- if you're a regular reader of my SSD articles - you might say something like
this.

"Hey Mr SSDmouse - I've heard this type of preamble from
you before. Didn't you already write a couple of big new SSD idea articles
earlier this year in which you promised that this would be the most
important single idea about SSDs that we would have to get our heads around this
year? How come you're trying to pull that same old trick again?"

To
which my reply has to be...

Dear Reader - thanks for reading my
previous articles and tweeting and blogging about them. You're perfectly correct
in what you just said. My defense is that in each of these earlier articles
it seemed that what was being discussed was indeed more important than what had
been discussed before. (And I didn't get too many complaints at the time.)

If
the SSD market had fossilized and stayed exactly as it was 2 or 3 quarters ago
- then I agree that there would be no more need for new articles of this type.

But that didn't happen. The SSD market hasn't stayed still, and the
pace of developments in the SSD market in the past year has accelerated.

10 years ago,
5 years ago
and even 3 years
ago - you could safely coast along through the SSD currents by absorbing
one big technology idea or one new SSD business dynamic idea each year.

But don't say I didn't warn you that this not overly strenuous
SSD re-education
process was about to change.

And
now in 2012
(or 2013,
or 2014
or 2015 if
you're reading this a bit later) there are a lot more companies doing new SSD
stuff.

The overall market is bigger - which means there are more SSD
market segments which have each grown big enough to support their own style of
innovation and distinct set of values. That's why the new SSD big idea
articles seem to be happening nearly every month.

And there's another
reason too. Sometimes to understand a new high level concept - you may need to
first absorb and get familiar with a bunch of lower level SSD ideas - which are
part of that framework.

But I promise not to write any more articles
which start out by saying - this is the most important idea about SSDs which
you will read this year. (Unless the publication date is December 24th.)

OK
- enough of that - let's get on with it.

A new word has been creeping
into nearly every email and conversation I've been having about SSDs recently -
and that's - "efficiency".

SSD efficiency is a very powerful differentiator in
technology - and I think it will be very important in influencing business
success too.

Let's talk about efficiency in the context of technology
first.

What does efficient SSD technology look like? - and why does it
matter?

Suppose you're a customer looking at 2 competing 2U rackmount
SSDs for an application you've got in mind. You're going to buy hundreds of
these - but you've narrowed it down to these 2 suppliers.

Both suppliers support the type of software
environment you've got equally well.

Which one are you going to buy?

The
SSD mouse comes along with his technological screwdriver, lights up a torch
inside both boxes and starts looking at what's inside.

Then he says
something which surprises you.

This one uses nearly twice (2x) as many
memory chips (to do the
same job).

Or

This one does the same job with far fewer raw
chips - and also, BTW, the chips are a different generation and type of MLC which is
much cheaper for the vendor to use.

Why should you care?

Both
boxes will cost you the same - and there's nothing else much to choose between
them.

The best choice is the product which has the better efficiency.
This efficiency comes from design architecture. (I'll say more about the nitty
gritty details later in this article.)

Why is the most efficient
SSD architecture your best choice? - given that either product works just as
well and is being quoted to you at the same price...

You can infer
that the vendor with the most efficient architecture:

- is much better advanced in their understanding of application specific SSDs

- can make more money at the same price point as less efficient
competitors - and therefore is less likely to need bucketfuls of VC funding -
and is more likely to stay in business as a stable supplier (even if the company
and its product are acquired by someone else).

- will ship a design which uses less electrical power - which means - if
you use a lot of them - you'll see lower running costs and better reliability
(because most of that wasted power just turns into waste heat).

How is
it that one SSD system can be so much more efficient at its use of raw chips
than another?

In earlier phases of the market these differences in
approach didn't matter so much - as long as they could deliver an SSD that
could meet a performance, price and density goal. But today - the SSD market is
maturing to a new level where being good at what you do may not be enough if
another SSD competitor looks at the same market niche and puts their mind to
doing it better.

Here are the main factors which account for the
differences in efficiency.

- Raw flash capacity can be leveraged in various ways to provide performance
or reliability or both. The classical SSD architecture case was discussed in
my article - the
SSD
capacity iceberg.

- Adaptive R/W and DSP flash techniques enable efficiencies in both raw memory use
(when using the same memory generation) and also introduce the possibility of
using newer generations of memory which have intrinsically better efficiency at
the raw chip level - which are not feasible using pre-DSP classical designs.

- Good SSD software can change the efficiency and efficacy of
overprovisioning, RAID-like overhead, the utilization and attrition rate of
raw flash blocks - and also impact the cost budget allocated to SSD processors
and controllers.

Each one of these factors can contribute a raw
efficiency factor which ranges from about 5% to over 40%. When you add up
several of those little percentages - you start to see big differences.
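To make that arithmetic concrete, here's a small sketch of how several modest per-factor savings compound. The factor names and values below are invented for illustration - they are not measurements from any real SSD:

```python
# Hypothetical illustration: several modest per-factor efficiency gains
# compound multiplicatively, so the combined reduction in raw flash
# needed can be much bigger than any single factor suggests.

factors = {
    "adaptive R/W + DSP techniques": 0.20,   # 20% fewer raw chips (assumed)
    "smarter overprovisioning":      0.10,   # 10% (assumed)
    "leaner RAID-like overhead":     0.05,   # 5% (assumed)
}

remaining = 1.0
for name, saving in factors.items():
    remaining *= (1.0 - saving)

print(f"raw flash needed vs baseline: {remaining:.0%}")   # 68%
print(f"combined saving: {1 - remaining:.0%}")            # 32%
```

Three "little" factors of 20%, 10% and 5% already remove nearly a third of the raw flash bill - which is the kind of gap a customer opening up two competing boxes would notice.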

In
the hypothetical comparison of 2 rackmount SSDs - the example I'm using is
obviously the enterprise SSD market. That's the market where you can see the
biggest differences between individual competing products.

However, the
only one of these bullet points which you would leave out for the industrial and
consumer SSD markets - is the big versus small SSD controller architecture.
That's because - due to physical
size constraints and packaging technologies - it's not yet feasible to apply
large controller architecture within small form factor SSD components. That
might change in another 3 to 5 years - but not soon. Nevertheless - the other
raw efficiency ingredients still add up to make efficiency a concept you can't ignore.

In
consumer SSDs - the efficiency differences add up to enable lower raw cost. (As
to what color the SSD should be - you'll still have to consult the shoe event
horizon marketers.)

In industrial SSDs - the efficiency differences
add up to enable improvements in dimensions like - the smallest viable form
factor, cost, power consumption and reliability.

The idea that architectural
efficiency is a significant technological advantage isn't new to top
ranking SSD management.

Many companies have known for years that this is
something which gives them an edge. What's new is that recently
several new factors have entered the market which can take these efficiencies to
a new, overwhelmingly hard-to-ignore level.

That covers what I want to
say here today on the subject of SSD design architecture efficiency. But there's
a related efficiency theme I'd like to bring in here too. That's marketing
efficiency.

What do I mean by marketing efficiency?

It's
closely related to classic business school meanings.

SSD vendors
have to identify more accurately the
market segments which
they operate in - within a more complex and sophisticated SSD market
landscape. Vendors have to design their marketing messages around a tighter set
of value propositions and develop products which are each better optimized
around a smaller set of applications.

The idea I'm getting at here is
that in the past a small set of SSD designs could compete across a wide span
of applications.

But now that the SSD market has grown bigger -
there are clear differences emerging about where SSDs can be used and for what
type of application each type of design is best suited. I outlined the different
use cases and segments for enterprise SSDs in my
SSD silos article.

In the past - having a small degree of product overkill wasn't a
marketing handicap - because customers wouldn't complain if they were getting
more than they needed once they decided that an SSD solution was in fact
affordable.

In the much more competitive SSD market of today it's not
unusual for customers to have a better idea of the range of products on offer
than some of the vendors who are pitching to them. (And a better idea of the
SSD market than some past SSD CEOs too.)

The customer isn't going to
tell a vendor that the reason the vendor's SSD isn't in the purchase order is
that 30% of the cost is going into overkill features and performance which
the customer doesn't need. (This is an example of marketing inefficiency. The
short term answer for the vendor is to find customers who do need and value the
additional features. The long term answer is to match the feature set of future
products more closely to what customers need.)

And the customer isn't
going to tell a supplier that they know (or can guess) that the supplier's product uses
40% more chips than it needs to perform what they do need. (And the
customer is worried about the risks of less than optimum future
pricing, or what happens if the supplier goes bust, and the extra cost and heat from that
electrical power which the design shouldn't really need.)

Vendors
have to figure these things out for themselves. Then take action to align
themselves better with market expectations.

Conclusion

In the
next few years efficiency as a concept at both the SSD architecture and
marketing level will become a headline subject which will make and break
fortunes.


update and
clarification to my SSD Efficiency article

Editor:- October 29, 2012 - The above article
arose out of conversations I'd been having with business leaders in trend
setting flash SSD companies. Some of these people I talk to - and their
companies have designed the world's best known SSD systems and controllers. My
theme (as often on these pages) was "SSD thought leadership" - and not
- an entry level introduction to flash SSD design.

Maybe I should have
made that context clearer in my introduction.

One regular
correspondent - Robert Young -
whose blog - Dr. Codd Was Right -
sometimes visits the topic of flash SSDs from a database angle - may have
thought I'm starting to lose it - because he politely suggested that the
"elephant on the coffee table" in this October editorial was - no mention of
larger NAND chip sizes, and the resultant block/erase block size effects on efficiency...

Here's
what I said.

"The point of my article was how SSD makers are
different in the efficiency of their system designs - even when they have access
to exactly the same pool of chips. So larger NAND chips sizes etc are
irrelevant.

"What's important at the systems level is that some companies can
build the same usable capacity, performance and reliability for the user's app
- even when using 20, 30, 40% and even 50% less chips which come from the same
memory generation as their SSD competitors who have less efficient
architecture and don't have the same reliability IP or market knowledge."


"Efficiency is
important. As a rough approximation, a server in your datacenter costs as much
to power and cool over 3 years as it does to buy up front. It is important to
get every ounce of utility that you can out of it while it is in production."

"Many All-Flash
Array vendors propose deduplication and compression as the work-around for the
endurance problem of flash based storage systems. When these methods are
implemented "in-line" so they occur before new or updated data is
initially written to the flash they can reduce or eliminate some of the data to
be written to the flash module.

The problem is that the effectiveness of
these methods is not consistent across various data center workloads.
Deduplication can directly impact performance and creates its own set of writes
as hash tables are updated. Overall these techniques are certainly worth
implementing in All-Flash systems but by themselves they are not enough."
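As a rough illustration of the in-line approach described in the quote above, here's a minimal Python sketch of content-hash deduplication. All the class and variable names are invented for this example, and a real array's write pipeline is far more involved - but it shows both the write saving and the hash-table bookkeeping the quote mentions:

```python
import hashlib

# Minimal sketch of in-line block deduplication (illustrative names only).
# Each incoming block is hashed BEFORE it reaches the flash; if that hash
# is already known, only a reference count is updated and no new flash
# write occurs - this is where the endurance saving comes from.

class DedupStore:
    def __init__(self):
        self.blocks = {}       # hash -> block data (stands in for flash)
        self.refcount = {}     # hash -> number of logical references
        self.flash_writes = 0  # physical writes actually sent to flash

    def write(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = block    # only unique content hits flash
            self.flash_writes += 1
        # the hash-table update itself is extra work - the "set of writes"
        # the quote warns about
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest

store = DedupStore()
for data in [b"A" * 4096, b"B" * 4096, b"A" * 4096]:
    store.write(data)

print(store.flash_writes)  # 2 physical writes for 3 logical writes
```

How much this helps depends entirely on how much duplicate content the workload actually produces - which is exactly the inconsistency across workloads that the quote describes.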

...there are additional
techniques, besides compression and deduplication, that can reduce space usage
significantly and thereby increase the effective capacity.

One
example is zero-block pruning - the system does not store blocks that are
filled with zeroes.

This technique can be seen as an extreme case of
either compression or deduplication. Also, some systems generalize this
technique to avoid storing blocks that are filled with any repetitive byte
pattern.
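The pruning test itself is simple. Here's a minimal Python sketch of detecting blocks filled with zeroes - or, in the generalized form, any single repeated byte. The block size and the idea of storing only a one-byte metadata record are assumptions for illustration:

```python
# Sketch of zero-block pruning, generalized to any repeated byte pattern
# (an illustration of the technique described above, not any vendor's code).

def prunable(block: bytes):
    """Return the repeated byte value if the whole block is one repeated
    byte (e.g. all zeroes), else None. A prunable block needs only a tiny
    metadata record instead of full block storage."""
    if block and block == bytes([block[0]]) * len(block):
        return block[0]
    return None

zero_block = bytes(4096)            # all 0x00 - the classic case
pattern_block = b"\xff" * 4096      # generalized case: repeated 0xff
mixed_block = b"\x00\x01" * 2048    # real data - must be stored in full

print(prunable(zero_block))     # 0
print(prunable(pattern_block))  # 255
print(prunable(mixed_block))    # None
```

Viewed this way, zero-block pruning really is the extreme case the text describes: a "compression ratio" of an entire block down to one byte of metadata, at the cost of a single comparison on the write path.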