Today we will be talking about VCE’s cloud infrastructure product, the Vblock. A recent Gartner study predicting that 60% of enterprises will embrace some form of cloud adoption by next year has enlivened the already competitive cloud vendor market. But at the same time, does the cloud industry need to be driven by vendor competition or by vendor collaboration? Archie Hendryx of VCE Technology Solutions discusses this very matter.

EM360°: Could you tell us about VCE and why cloud has played a big part in your company’s solutions?

Archie: VCE is a unique start-up company formed via joint investments from EMC, Cisco, VMware and Intel, and it has been operating for just over three years. Its focus is solely on building the world's most advanced converged infrastructure, the Vblock. The Vblock is a pre-tested, pre-validated, pre-configured and, more importantly, pre-integrated infrastructure solution of storage, compute, networking and hypervisor; in other words, it ships to the customer as a single SKU and a single product.

Personally I like to think of VCE as a revolutionary that has changed the way we view infrastructure, because it manufactures and sells infrastructure as a product, much in the way you buy a car such as an Audi. When you buy an Audi it may contain components from different vendors, but what the end user is purchasing is a single product. Similarly with the Vblock, while we may use different components from our investors Cisco, EMC, VMware and Intel, the end user is acquiring a product. Because it’s a standardized product, the Vblock models are exactly the same regardless of geographical location, which completely transforms and simplifies the customer experience of infrastructure and consequently mitigates the typical risk associated with it.

As for how the cloud has played a big part in VCE’s success, one of the major criticisms of private clouds is that the end user still has to build, manage and maintain the infrastructure, to the extent that they are continuing the ‘keeping the lights on’ approach of IT. Ultimately this lacks the economic benefit that makes cloud computing such an intriguing concept. Hence what we and our customers quickly realized is that a private cloud’s success ultimately depends on the stability, reliability, scalability and performance of its infrastructure. By going the Vblock route our customers immediately attain that stability, reliability, scalability and performance and consequently accelerate their private cloud initiatives. For example, with support issues VCE alone owns the ticket because the Vblock is our product. Once the Vblock has been shipped, problems that a customer in Glasgow might face can easily be reproduced on a like-for-like standard Vblock in our labs, which rapidly resolves performance issues and trouble tickets.

The other distinctive feature of the Vblock is its accelerated deployment. We ship the customer a ready-assembled, logically configured product in only 30-45 working days, from procurement to production. This has an immediate effect on the cost of ownership, especially when the business demands an instant platform for its new projects.

EM360°: Your latest cloud infrastructure solution sees the components of the Vblock integrating with VMware’s new cloud suite. Can you tell me why industry collaboration has become so prominent in today's market?

Archie: What I think has driven this is a change in the mindset of customers, initiated by the concept of cloud computing. Customers are reassessing the way they procure IT and they want a simplified and accelerated experience that doesn't require going to multiple vendors and solutions. I think vendors that are still only focused on storage or servers and have not looked at expanding their offerings via alliances or acquisitions are either going to fold or be swallowed up by the bigger fish as they look to add to their portfolios. This is one of the reasons why VMware’s latest announcement of their vCloud suite is so exciting, and of course VCE’s support for and integration with it.

If VCE and the Vblock are responsible for accelerating your journey to the private cloud, you could say that adding the vCloud suite pretty much gives it a major turbo boost.

EM360°: Are copyright factors, or other vendors sussing out each other’s strengths and weaknesses, a problem when you encounter a project like this?

Archie: That's a really interesting question and certainly I have experienced that in previous roles, especially when I was involved with initiatives such as the Storage Networking Industry Association (SNIA) and its SMI-S compliance programme. We were always promised that SMI-S compliance would give us the utopia of a single pane of glass for heterogeneous storage arrays, regardless of whether the array was from HDS, HP or EMC. Sadly this was never the case, as none of the vendors opened up fully and you only ended up with around 60% of the functionality, which ultimately meant that you went back to the native tools and the multiple management panes of glass you had anyway. You could not really blame the vendors, as it would be naive to think that one vendor would allow a competitor to dissect its micro-code. This mindset is not going to change, and that is why you will see vendors deciding to acquire their own server or storage companies to provide this end-to-end stack.
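For context, the ‘single pane of glass’ promise rested on every array exposing the same standard CIM classes over SMI-S. Below is a minimal sketch of what that looks like from the client side, using the open-source pywbem library; the provider URL, credentials and namespace are assumptions, and in practice each vendor populated only a subset of these classes, which is exactly the roughly 60% functionality problem described above.

```python
from pywbem import WBEMConnection

# A minimal sketch, assuming a reachable SMI-S (CIM/WBEM) provider.
# The URL, credentials and namespace are placeholders, not real endpoints.
conn = WBEMConnection(
    "https://smi-provider.example.com:5989",
    creds=("admin", "password"),
    default_namespace="root/cimv2",      # implementation namespaces vary by vendor
    no_verification=True,                # lab use only; verify certificates in production
)

# CIM_StorageVolume is a standard SMI-S class, so in theory the same call works
# against any compliant array (HDS, HP, EMC, ...). In practice each vendor
# populated a different subset of properties, hence the partial functionality.
for volume in conn.EnumerateInstances("CIM_StorageVolume"):
    name = volume.get("ElementName") or volume.get("DeviceID")
    block_size = volume.get("BlockSize")
    num_blocks = volume.get("NumberOfBlocks")
    if block_size and num_blocks:
        print(f"{name}: {(block_size * num_blocks) / 1024**3:.1f} GB")
    else:
        print(f"{name}: size not reported by this provider")
```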

At VCE we are in a unique position where our investors are not competing with each other, and for us they are ultimately the component providers for our product. We don't necessarily support or include all of our investors’ products or portfolios as components, only those we feel best integrate with our single end-user product. Once we have our specific components defined from our investors based on our standards, we then pre-integrate and manufacture our product as a comprehensive solution. While our competitors and even our investors have very large portfolios of products and offerings, VCE only do Vblocks and hence only focus on improving and optimizing Vblocks, enabling us to do things which others in the industry have only dreamed of, and which will be announced very soon.

EM360°: Today's enterprise market is obviously rather confused, which is what some other analysts are also thinking. I don’t think some companies know what they want for their departments, whether to embrace public, open or private clouds, or a bit of both with hybrid models. A lot of vendors are doing their own spin on cloud, particularly the niche players. Is the industry doing enough to simplify the product offering?

Archie: In a nutshell, no. There is still a lot of confusion out there and smokescreen marketing from various vendors, and this hasn't helped the end user distinguish between the various offerings and decide what is best for them. What we have found most recently with a lot of our enterprise clients is that they initially look at us as part of a storage, server or datacenter refresh. While they may have some cloud initiatives, they really have little or no idea of how to achieve them, certainly within the traditional model of IT procurement and deployment.

Once they understand the concept of the Vblock and how VCE can provide them a productized, risk-free infrastructure, we immediately see them come to the realisation of how this could be aligned to a private cloud model that in turn could develop into a hybrid cloud. Once the customer realizes how agile and quick the deployment of their infrastructure could be with a Vblock, we nearly always find them feeling freer to think higher up the stack, with strategic discussions and plans on how they can deploy a management and orchestration solution and a service portal. Ultimately, if you want people to really understand the cloud and what’s best for them, you’ve got to show them how you take away the risk from their traditional IT and infrastructure challenges.

EM360°: Have we seen innovation thrive in the cloud infrastructure management market, and what kinds of developments have really caught your eye today?

Archie: There are a lot of great products and suites out there. Every day we are seeing improvements in the look and feel of such products as they come closer to providing that public cloud experience in the private cloud. I think the challenge for all of these solutions up to now is that they have had to integrate with all of the components of the infrastructure as separate entities, especially when it comes to designing and deploying orchestration. Without trying to reveal too much of what VCE will be bringing out, I can certainly say that it will completely revolutionize and simplify this, as the product will be managed, monitored and orchestrated as exactly that, a single Vblock product. When this development comes it will really excite many and completely transform the private cloud infrastructure model going forward.

EM360°: Are there any final thoughts you would like to leave our readers with as to how the cloud infrastructure market will play out in the future, what kind of systems we could be using and how enterprises should look to plan ahead?

Archie: The industry is at an inflection point. The approach to IT is changing and is affecting the way customers and vendors approach and procure infrastructure, specifically with regards to the mission-critical applications they ultimately depend on. This is going to lead to more converged infrastructure offerings that will eventually get to the point where VCE already is, which is a standardized product offering, or as we like to call it, an x86 mainframe. The CTO of one customer recently said to me, “If I don’t buy the individual components of my datacenter’s power and cooling, why should I do that with my infrastructure?” That summed it up for me, because there is going to come a time when people will look back at the way open-systems IT was purchased and deployed and find it as ludicrous as someone today buying all of the components of a laptop, putting them together themselves, somehow expecting it all to work perfectly and then expecting it to be supported seamlessly by each of the component vendors.

To take that laptop analogy further, once infrastructure is viewed and built as a product, we will eventually see a new way to manage, monitor and update it as a product. For example, when you update your laptop you are automatically notified of the patches and it’s a single click of a button for the single product. You don’t receive an update for your keyboard, followed by an update for your screen, only to be sent another update for your CD-ROM a week later. Similarly, when it comes to support you don’t log a call with the manufacturer of the CD-ROM component of your laptop; you go directly to the manufacturer of the product. Imagine that same experience with your cloud infrastructure, where it alerts you to a single seamless update for the whole product, where it has a true single management pane and presents itself to the end user as a single entity. Imagine how that would simplify the configuration, orchestration and management of the private cloud. That’s where the future lies and, to be honest, it might not be that far away.

When the character Maverick from the movie Top Gun exclaimed, “I feel the need, the need for speed”, you’d be forgiven for mistaking it for a sound bite from a CIO discussing their transactional databases. Whether it’s a financial organization predicting share prices, a bank deciding whether it can approve a loan or a marketing organisation reaching consumers with a compelling promotional offer, the need to access, store, process and analyze data as quickly as possible is an imperative for any business looking to gain a competitive edge. Hence when SAP announced their new in-memory platform for enterprise applications, HANA, in 2011, everyone took note as they touted the advantage of real-time analytics. SAP HANA promised not just to make databases dramatically faster, like traditional business warehouse accelerator systems, but to speed up the front end, enabling companies to run arbitrary, complex queries on billions of records in a matter of seconds as opposed to hours. The vendors of old legacy databases were facing a major challenge, most notably the king of them all… Oracle.

The Birth and Emergence of Big Data

Back in the days of the mainframe, you’d find the application and the transactional and reporting data physically stored on the same system. This was because applications, operating systems and databases were designed to maximize their hardware resources, which consequently meant you couldn’t process transactions and reports simultaneously. The bottleneck here was cost, in that if you wanted to scale you needed another mainframe.

After the advent of client-server computing, where applications could run against a centralized database server from multiple cost-effective servers, scalability was achieved by simply adding application servers. Despite this, a new bottleneck was quickly established: systems relied on a single database server, and requests from an ever-increasing number of application servers ended up causing I/O stagnation. This problem was exacerbated with OLTP (online transaction processing), where report creation required the system to concurrently read multiple tables in the database. Added to this, servers and processors kept getting faster while disks (despite the emergence of SSD) quickly became the bottleneck for automated processes that were producing large amounts of data and, in turn, generating more report requests.

The net effect was a downward spiral where growing numbers of users requiring more reports from the databases meant ever larger amounts of data being requested from disks that simply weren’t up to the job. When you then factored in the data proliferation from external users caused by the Internet, and pressure-inducing laws such as Sarbanes-Oxley, the demand to analyze even more data even quicker reached fever pitch. With data and user volumes increasing by a factor of thousands compared to the I/O capability of databases, the transaction-based industry faced a challenge that required a dramatic shift. Cue the 2011 emergence of SAP’s HANA.

Real-Time In-Memory Platform Presents a Groundbreaking Approach

One of the major advantages of SAP HANA’s ability to run in real time is that it removes the requirement for data redundancy, as it’s built to run as a single database. With clusters of affordable and scalable servers, transactional and analytical data run on the same database, eliminating the need for different types of databases for different application needs. Oracle, on the other hand, has built an empire on exactly the opposite.
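To make the single-database point concrete, here is a minimal sketch using SAP’s hdbcli Python driver; the host, credentials and the SALES_ORDERS table are hypothetical placeholders, but the idea is that an analytical query runs directly against the live transactional table, with no extract into a separate warehouse.

```python
# Minimal sketch, assuming SAP's hdbcli driver and a reachable HANA instance.
# Host, credentials and the SALES_ORDERS table are hypothetical placeholders.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",   # hypothetical HANA host
    port=30015,
    user="ANALYST",
    password="secret",
)
cursor = conn.cursor()

# An analytical aggregation run directly against the live transactional table:
# no ETL job, no separate warehouse copy, no pre-built aggregates or cubes.
cursor.execute("""
    SELECT region, SUM(order_value) AS revenue, COUNT(*) AS orders
    FROM SALES_ORDERS
    WHERE order_date >= ADD_DAYS(CURRENT_DATE, -7)
    GROUP BY region
    ORDER BY revenue DESC
""")

for region, revenue, orders in cursor.fetchall():
    print(f"{region}: {orders} orders, {revenue:,.2f} revenue in the last 7 days")

cursor.close()
conn.close()
```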

Oracle has thrived on a model where companies generally start with a simple database that’s used for checking sales orders and ensuring product delivery to customers, but as the business grows they need more databases with different and more demanding functions. Functions such as managing customer relationships, complex reporting and analysis drive a need for new databases that are separate from the core transactional system, requiring data to be moved from one system to another. Eventually you have a sprawl of databases as existing ones are unable to handle the workloads, making it almost impossible to track data movements, let alone attain real-time updates. So while the Oracle marketing machine is also pitching the benefits of in-memory via its Exalytics appliance and in-memory database, TimesTen, Oracle are certainly in no rush to break this traditional model of database sprawl and the money-spinning licenses that come with it.

Looking closely at the Oracle Exalytics / TimesTen package, despite the hype it is merely an add-on product, meaning that an end user will still need a license for the transactional database, another license for the data warehouse database and yet another license for TimesTen on Oracle Exalytics.

Moreover, the Oracle bolt-on approach serves to sell more of their commodity hardware and in some ways perversely justify their acquisition of Sun Microsystems, all at the expense of the customer. Because the Exalytics approach continues the traditional requirement for transactional data to be duplicated from the application to the warehouse and once again to Exalytics, the end user not only ends up with three copies of the data, they also need three tiers of storage and servers. In contrast, SAP HANA is designed to be a single database that runs both transactional applications and Business Warehouse deployments. Not only does SAP HANA’s one copy of the data replace the two or three required for Oracle, it also eliminates the need for materialized views, redundant aggregates and indexes, leaving a significantly reduced data footprint.
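As a rough, purely illustrative back-of-the-envelope comparison of that copy-count argument (the data set size, overhead factor and compression ratio below are assumptions, not vendor-published figures):

```python
# Purely illustrative arithmetic; every figure below is an assumption, not a benchmark.
source_data_tb = 2.0                 # hypothetical size of the transactional data set

# Layered approach: the same data lands in the OLTP database, the warehouse
# and the in-memory accelerator, plus materialized views, aggregates and indexes.
copies = 3
aggregate_index_overhead = 0.5       # assumed 50% extra for aggregates and indexes
layered_footprint_tb = source_data_tb * copies * (1 + aggregate_index_overhead)

# Single in-memory copy with no redundant aggregates; columnar compression assumed.
compression_ratio = 4                # assumed, highly workload dependent
single_copy_footprint_tb = source_data_tb / compression_ratio

print(f"Layered stack footprint : ~{layered_footprint_tb:.1f} TB")
print(f"Single-copy footprint   : ~{single_copy_footprint_tb:.1f} TB")
```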

Comparing HANA to Oracle’s TimesTen and Exalytics

As expected, Oracle have already set their FUD team to work with bogus claims and untruths against HANA, as well as pushing TimesTen as a like-for-like comparison. Where this is hugely flawed is that they fail to acknowledge or admit that SAP HANA is a completely groundbreaking design as opposed to a bolt-on approach. With SAP HANA, data is completely managed and accessed in RAM, consequently doing away with the requirement for MOLAP, multiple indexes and the other tuning features that Oracle pride themselves on.

Furthermore, despite the Oracle FUD, SAP HANA does indeed handle both unstructured and structured data, as well as utilise parallel queries for scaling out across server nodes. In this instance Oracle are trying hard to create confusion and distract the market from realizing that the TimesTen with Exalytics package still can’t scale out beyond its 1TB RAM limit, unlike SAP HANA, where each container can store up to 500TB of data, all executable at high speed.

With an aggressive TCO and ROI model compared to a traditional Oracle deployment, SAP HANA also proves a lot more cost effective. With pricing based on 64GB increments of RAM and the total amount of data held in memory, licenses are fully inclusive of production and test/development requirements as well as the necessary tools.
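As a rough sketch of how that 64GB-increment model translates into license units (the data volume below is a hypothetical assumption, and rounding up to whole increments is an assumption about how the units are counted):

```python
import math

# Rough sizing sketch; the data volume is an assumption, as is rounding up
# to whole licensing increments.
data_held_in_memory_gb = 900      # total data to be held in memory (assumed)
licence_unit_gb = 64              # licensing increment described above

licence_units = math.ceil(data_held_in_memory_gb / licence_unit_gb)
print(f"{data_held_in_memory_gb} GB in memory -> {licence_units} x {licence_unit_gb} GB units")
print("(described as covering production, test/development and the necessary tools)")
```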

SAP HANA’s Embracing of VMware

Furthermore, while Oracle has taken a belligerent stance towards VMware and the cost savings it brings to end users, SAP has embraced it. The recent announcement that SAP HANA will support VMware vSphere provides a vast competitive advantage, as it will enable customers to provision instances of SAP HANA in minutes from VM templates, as well as gain benefits such as the Distributed Resource Scheduler (DRS) and vSphere vMotion. By virtualizing SAP HANA with VMware, end users can quickly have several smaller HANA instances sharing a single physical server, leading to better utilization of existing resources. With the promise of certified, preconfigured and optimised converged infrastructures such as the Vblock around the corner, SAP HANA appliances could ship with vSphere 5 and SAP HANA pre-installed within days, enabling rapid deployment for businesses.
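As an illustration of what provisioning a HANA instance in minutes from a VM template could look like, here is a minimal sketch using the open-source pyVmomi vSphere SDK; the vCenter host, credentials and all object names are hypothetical, and a production deployment would also size CPU and RAM and configure HANA itself.

```python
# Minimal sketch using the pyVmomi vSphere SDK; vCenter host, credentials and
# all object names below are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_by_name(content, vim_type, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.Destroy()


ctx = ssl._create_unverified_context()          # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

template = find_by_name(content, vim.VirtualMachine, "hana-template")        # pre-built HANA template
cluster = find_by_name(content, vim.ClusterComputeResource, "hana-cluster")
datastore = find_by_name(content, vim.Datastore, "hana-datastore")

relocate = vim.vm.RelocateSpec(pool=cluster.resourcePool, datastore=datastore)
clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=True, template=False)

# Kick off the clone; DRS places the new VM and vMotion can move it later.
task = template.Clone(folder=template.parent, name="hana-node-01", spec=clone_spec)
print("Clone task started:", task.info.key)

Disconnect(si)
```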

The Business Benefits of Real-Time

With business and transactions being done in real time, SAP HANA ensures that the data and the analytics that come with them are also in real time. The process of manually polling data from multiple systems and sorting through it is inadequate at a time when businesses face unpredictable economic conditions, volatile demand and complex supply chains. The need is for real-time metrics aligned to supply and demand, where a retailer’s shelves can be stocked accurately and immediately, eliminating unnecessary inventory costs, lost sales opportunities and failed product launches. Being able to instantly analyze data at any level of granularity enables a business to quickly respond to these market insights and take decisive actions, such as transferring inventory between distribution centers based on expected sales or altering the prices of promotions based on customer demand. Instead of waiting for processes that take hours, days or even weeks, SAP HANA’s real-time capabilities enable businesses to react to events as they happen.

Ultimately SAP HANA is a revolutionary step forward that will empower organizations to focus more on the business and less on the infrastructure that supports it. With the promise of new applications being built by SAP to support real-time decision making, as well as the ability to run existing applications, SAP HANA presents the opportunity to transform not only a business but also the underlying technology that supports it.


Disclaimer

The thoughts, comments, views and opinions expressed in this blog are entirely my own and not those of the company I work for. Content published here is not read or approved in advance by my employer and does not necessarily reflect the views and opinions of the company I work for.