The path toward more effective server utilization in data centers rests in software.

Big data, and all data, continue to up the ante for faster processing. The industry has traditionally addressed this by coming up with faster chips for the CPU and other processing functions in hardware. From a data center perspective, managers want to see all of that capacity utilized, and in big data environments it is not uncommon to see upwards of 90 percent server utilization. But in transaction processing environments, server utilization can fall as low as 20 to 30 percent, which is not an effective use of the hardware.

Beyond the Stone Age

"We
have beaten up hardware relentlessly to improve performance and also energy efficiency
in data centers," said Mike Hoskins, Chief
Technology Officer At Actian, a provider of big data analytics solutions, "There
has been so much invested, yet the results have been poor."

Hoskins believes that the path toward more effective server utilization in data centers rests in software, and he uses the example of a two-processor, 16-core server to illustrate his point.

"One
can argue that software is in a kind of "stone age," in that it just
hasn't kept pace with hardware innovations," said Hoskins. "In the
case of a two processer, 16-core server, single-threaded software, which most
software today is, only keeps one to two cores busy and the other 14 cores go
unused. We experienced a bit of a breakthrough when virtualization technology
came on the scene, because in an environment like VMware, you could take eight
cores of processing and spread them over four virtual machines that used two
cores each." This compensated for the limitation of single-threaded software
because the software could be spread across four different server engines with
the core divisions afforded by the virtualization.

Unfortunately, when we are talking about big data and analytics, "sleight of hand" virtualization techniques that can improve software performance simply don't work. The reason is that big data, with its massively parallel processing, isn't well suited to virtualization. Consequently, sites are potentially left with the challenge of crunching through massive amounts of data in a small amount of time, possibly bumping up against limits in the software they are using. At the same time, their budgets constrain them from investing in processing power beyond what they already have to work with: various incarnations of x86-based server technology.

Hoskins believes that sites can overcome the single-threaded core utilization limits of most software if they can somehow lash processing cores together in a harmonious memory-only data flow engine that can parallel-process incoming data and also address the various steps of big data processing that have to be done, such as ingesting, cleaning, aggregating, and finally analytics.
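
To make that idea concrete, here is a minimal sketch in C with POSIX threads. It is not Actian's engine; the worker count, the toy cleaning rule, and the aggregation step are all assumptions, chosen only to show one in-memory data set being split across every core of a 16-core server, cleaned and aggregated in parallel, and then combined for a final analytics step.

/*
 * A minimal sketch (not Actian's engine) of the idea described above:
 * keep every core busy by splitting one in-memory data set across worker
 * threads, each of which "cleans" and aggregates its own slice, with a
 * final combine step standing in for the analytics stage.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N_WORKERS 16          /* one worker per core on a 16-core server */
#define N_RECORDS (1 << 24)   /* ~16M records, all resident in memory */

typedef struct {
    const double *data;       /* shared, read-only view of the data set */
    size_t begin, end;        /* this worker's slice */
    double partial_sum;       /* per-worker aggregate */
    size_t kept;              /* records surviving the cleaning step */
} slice_t;

static void *process_slice(void *arg)
{
    slice_t *s = arg;
    for (size_t i = s->begin; i < s->end; i++) {
        double v = s->data[i];
        if (v < 0.0)          /* toy cleaning rule: drop negative values */
            continue;
        s->partial_sum += v;  /* toy aggregation */
        s->kept++;
    }
    return NULL;
}

int main(void)
{
    double *data = malloc(N_RECORDS * sizeof *data);
    for (size_t i = 0; i < N_RECORDS; i++)          /* stand-in for ingest */
        data[i] = (double)(rand() % 200 - 50);

    pthread_t threads[N_WORKERS];
    slice_t slices[N_WORKERS];
    size_t chunk = N_RECORDS / N_WORKERS;

    for (int w = 0; w < N_WORKERS; w++) {
        slices[w].data = data;
        slices[w].begin = (size_t)w * chunk;
        slices[w].end = (w == N_WORKERS - 1) ? N_RECORDS : (size_t)(w + 1) * chunk;
        slices[w].partial_sum = 0.0;
        slices[w].kept = 0;
        pthread_create(&threads[w], NULL, process_slice, &slices[w]);
    }

    double total = 0.0;
    size_t kept = 0;
    for (int w = 0; w < N_WORKERS; w++) {           /* final "analytics" combine */
        pthread_join(threads[w], NULL);
        total += slices[w].partial_sum;
        kept += slices[w].kept;
    }

    printf("kept %zu records, mean = %f\n", kept, kept ? total / kept : 0.0);
    free(data);
    return 0;
}

Built with a POSIX C compiler and the -pthread flag, the sketch keeps 16 workers busy on the same in-memory data set rather than leaving 14 of the 16 cores idle, which is the contrast Hoskins draws with single-threaded software.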

Tiers of data

The idea is to move tiers of data in memory closer to the CPU to improve overall performance. Here is how it works.

There are three tiers of in-memory data storage in the processor's cache: L1, L2, and L3. Any one of them gives faster processing results than having to go out to main memory. The L1 tier is the first tier checked for in-memory data, but it is the smallest and holds the least data. As access moves out to the L2 and L3 tiers, the caches grow larger and can hold more data. The L3 tier in particular has the capacity to hold big data that is being parallel processed, and there is no need to go out to main memory at all if the data is already resident in the L3 cache.
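
The payoff of staying cache-resident can be seen with a rough sketch like the one below. The 8 MB "fits in L3" figure and the 256 MB "spills to main memory" figure are assumptions; actual cache sizes vary by processor, and hardware prefetching will narrow the gap for a simple sequential walk like this one.

/*
 * A rough sketch of the effect described above: walking a working set that
 * fits in L3 cache versus one large enough to spill out to main memory,
 * with the same total number of element touches in each run.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double walk(const long *buf, size_t n, int passes)
{
    struct timespec t0, t1;
    volatile long sink = 0;   /* keeps the compiler from deleting the loop */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            sink += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    (void)sink;
    return (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    size_t small = (8u << 20) / sizeof(long);    /* ~8 MB: assumed L3-resident */
    size_t large = (256u << 20) / sizeof(long);  /* ~256 MB: forced out to main memory */

    long *a = calloc(small, sizeof(long));
    long *b = calloc(large, sizeof(long));

    printf("cache-resident set:  %.2f s\n", walk(a, small, 32));
    printf("memory-resident set: %.2f s\n", walk(b, large, 1));

    free(a);
    free(b);
    return 0;
}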

"In
this environment, a processing engine can know where data is executing and then
push the data that will be required into L3 cache," said Hoskins. "This
is one way that we can get around software constraints and get the most out of
hardware."

By Mary Shacklett

Mary E. Shacklett is president of Transworld Data, a technology research and market development firm. Prior to founding the company, Mary was Senior Vice President of Marketing and Technology at TCCU, Inc., a financial services firm; Vice President of Product Research and Software Development for Summit Information Systems, a computer software company; and Vice President of Strategic Planning and Technology at FSI International, a multinational manufacturing company in the semiconductor industry. Mary is a keynote speaker and has more than 1,000 articles, research studies, and technology publications in print.