Global High-Tech Innovation


May 28, 2013

In my last post I shared how enterprise health care
customers are facing pressure to incorporate massive amounts of external unstructured consumer health data and holistically integrate it into their existing data
centers. Keep in mind that this situation is not limited to health care; most enterprise IT architectures are trying to “extend the boundary” of their existing enterprise data centers to incorporate consumer-generated data. In a previous article I discussed IDC's definition of the "3rd Platform". Can existing enterprise architectures expand to build this new type of platform? The diagram below shows a boundary around a secure, reliable EMC data center containing structured and unstructured health care data, with a vast amount of unstructured eHealth/Wellness data being generated external to that data center.

At EMC World several weeks ago, Chief Technology Officer John Roese discussed some of the requirements that enterprise customers (health care, banking, retail, etc.) are sharing with him regarding how their data center operation must expand:

The customer experience must extend beyond their core setting and controlled environment (e.g. the walls of their data center)

Instead of just focusing on gathering data in a clinical setting (e.g. visits to a hospital), there is a need to gather data about external behaviors like exercise frequency, what patients are eating, their mobility, etc.

Embedded medical devices supplying telemetry data need to synchronize their data to a common location for analysis.

The entire ecosystem of external patient data can be very helpful in the diagnosis of one single patient, and therefore there is a great need to aggregate and analyze the collective. Patients can number into the hundreds of millions, and valuable trends and correlations will inevitably surface and improve care.

Much of this data is coming out of the consumer world (e.g. FitBit and Fuel Bands), further lending credence to John's view that the lines between the consumer ecosystem and the enterprise are blurring. The value of consumer health data is augmented greatly when it can be correlated and analyzed in conjunction with the clinical data already captured within the walls of the traditional enterprise environment depicted above. Clearly, the workloads that will run in these new environments require that the data center extend itself to support them. The requirements coming from the enterprise dictate the following characteristics of a solution:

Huge amounts of highly unstructured data

Extreme price sensitivity for storage, given that individual data points are valuable not on their own but only in the aggregate (which typically means data transformation, which requires more storage)

An analytics platform to incorporate consumer and clinical data to understand the end-to-end health of the patient.

Privacy and protection of the consumer data.

John asserts that the acquisition of Isilon extended EMC's ability to satisfy telemedicine and digital pathology workloads several years ago. He also believes that recent innovation in EMC's portfolio can now effectively cover every aspect of these "3rd platform" requirements listed above, and the picture he shared at EMC World highlights the solution:

In a future post I will step through the new additions to EMC's portfolio that protect the existing investments in legacy data center infrastructure while expanding to cover new workloads.

May 23, 2013

In a recent post I described Chief Technology Officer John Roese’s EMC World message
about arming Enterprise customers for the future. He advised customers to prepare
themselves for the converging service
provider, consumer ecosystem, and enterprise markets. I used the diagram below
to illustrate this convergence.

John used an example from Health Care to illustrate EMC’s
approach for arming Enterprise customers. In order to fully understand the
convergence of the three ecosystems, it helps to start by exploring the real
world example below.

Looking back approximately ten years, hospitals had a strong requirement to store terabytes of PACS data and electronic health records. This environment had to be highly secure and have mission-critical reliability characteristics. The acquisition price of these systems was considered well worth the investment in order to comply with regulations such as HIPAA. The workloads in this case could be thought of as typical of a "business inside of a hospital".

Over the course of time, a new set of workloads began emerging in the medical environment. These workloads centered on the
creation and maintenance of larger and larger amounts of digital content due to applications such as tele-medicine and digital pathology. The size of hospital data for this workload increased to Petabytes. There was still a strong need for
confidentiality (with perhaps less regulation) and infrastructure stability (though perhaps not to the same level of resiliency).
Another major requirement that emerged concerned operating expenses. The amount of unstructured data that needed to be managed was increasing exponentially, and health-care IT administrators demanded that cost per capacity be driven down.

The older workloads (e.g. PACS, EHR) were still there, but the hospital infrastructure needed to expand to accommodate the new workloads as well.

EMC responded to both of these phases via a wide portfolio
of products that satisfied all of the workload requirements for both structured and
unstructured data. A sample set of these products is shown below:

The diagram above doesn’t show the full set of EMC products that are applicable to this specific use case (e.g. backup/restore, retention-aware
storage, etc). But it does highlight how the EMC portfolio evolved over time to
arm enterprise health care customers with a secure data center that satisfied the requirements of all workloads. Health care is just one example;
other verticals are similar.

What happens when a health care provider (or any enterprise customer) is faced with new
requirements resulting from the convergence of the service provider and consumer ecosystem? The diagram below shows the scenario that many customers
face.

Massive quantities of unstructured health care data are now
being created on consumer devices OUTSIDE of the enterprise IT infrastructure. Electronic interface with customers happens outside of the hospital via methods like eWellness and mHealth. Is the customer exercising? Are they recording biofeedback regularly? This new class of workload results in new questions:

How can enterprise customers draw this data into their ecosystem while
maintaining existing levels of security and availability?

How can the data be
analyzed at high speed and in the context of existing data sources located
inside the enterprise?

How can this all
be accomplished at a price that is in line with costs that are more akin to the
consumer ecosystem?

The answers to these questions can be found in
the continued evolution of EMC’s technology portfolio, which I will cover in my
next post.

May 20, 2013

One of the more interesting talk tracks coming out of EMC
World 2013 was the Venn Diagram introduced by EMC Chief Technology Officer John
Roese:

Prior to joining EMC, John had observed that the lines were
blurring between three different markets: the enterprise market, the service
provider market, and the new consumer ecosystem that is driving information and
communication Infrastructures into a new era.
John is on record discussing this phenomenon in a variety of online
videos and interviews.

During EMC World, John discussed the technology
ramifications of each market in detail. In this post I’d like to introduce his
message describing how EMC is arming our traditional Enterprise customers to
best position themselves for the merging of these three markets.

How can enterprise customers benefit from technology
advances in the Service Provider market? The answer is primarily through leveraging
Service Provider infrastructure as a public cloud option. This junction between
the two spheres is highlighted below as a Hybrid Cloud option.

How can enterprise customers benefit from technology
advances in the Consumer Ecosystem? The answer is primarily through the secure
integration of consumer technology into the Enterprise infrastructure. This junction between the two spheres is
highlighted below as the Consumerization of IT.

From an enterprise customer standpoint, two questions were
raised (and answered) by John:

How can existing enterprise customers most
effectively evolve their infrastructure to best absorb the benefits of these
two spheres?

What new technologies must be introduced to
minimize cost, raise revenues, and reduce risk?

In other words, how is EMC arming the enterprise to position
itself for success?

These questions are best answered in the context of a
real-world customer use case, which I will describe in an upcoming post. In addition, I’m looking forward to
summarizing John’s final thoughts on (a) the intersection of the Service
Provider and Consumer Ecosystem, and (b) how EMC is well-positioned to not only
serve the enterprise market, but expand to serve the consumer ecosystem and
service provider markets as well.

May 09, 2013

As usual my old VP Rich Napolitano had me laughing on Wednesday as I sat down with him for EMC Backstage. The appearance by Rich allowed EMC World attendees (physical or virtual) to tweet Rich their questions in response to his keynote.

Rich stressed VNX's software assets quite a bit, and during his keynote he actually brought up three engineers from his research lab to demonstrate some (unreleased and work-in-progress) research areas:

An app-store capability where anti-virus and replication software were downloaded from a storefront interface to run directly within the VNX itself.

The ability to run VNX entirely within a virtual machine. One demo showed a file being copied into a virtualized version of VNX, while a separate demo featured VNX as a VM being uploaded to a public cloud provider (Verizon).

One of the nice things about the Backstage session is that it allows the audience to ask Rich for a drill-down into any topic of their choosing. One of the best questions focused on asking Rich how customer input drives innovation. One of the key customer inputs is always "Make VNX Faster". Rich explained that while the VNX "Flash-first" strategy has accelerated response times for many workloads, the multi-core MCX innovation obliterates the existing "knee in the curve" I/O saturation levels and starts to exhibit ridiculous I/O per second rates.

Other customer requests have led the VNX team to research the download of applications from a "storefront paradigm" directly into the VNX system. Virus scanners and replication software were two areas which were highlighted during the demo. The download of value-add applications to run inside VNX is another common customer request. Rich stated during our Backstage session that "he can't count how many times" customers have asked for this feature. The research offers the possibility that customers will no longer have to acquire the hardware/software for a CAVA-style appliance, and could experience more of an "IT-as-a-service" experience for new apps.

One customer tweeted a question about how to decide between "all-flash" and "hybrid-flash" arrays. Rich answered by recommending a deep look at the workload in question. At first glance an all-flash approach may be more expensive, but if deduplication ratios are taken into account the savings could be significant and all-flash may be the most cost-effective, performant way to go.

This conversation led us into a deeper look at workloads. David Goulden and Jeremy Burton spoke quite a bit this week about the wide variety of customer workloads that our entire portfolio addresses. For VNX-appropriate workloads, however, Rich pointed out that tenant workloads vary wildly in the peaks and valleys of VNX resource consumption over time. This phenomenon, Rich noted, is an interesting computer science problem that his team plans to solve by balancing VNX job scheduling more effectively to use the power of all the cores.

I closed out this session by commenting that EMC World is first and foremost a technology conference, and Rich delivered with a great keynote and engaging backstage session.

May 08, 2013

Tuesday at EMC World I had the chance to catch up with Amitabh Srivastava at the EMC Backstage set. It was a great chance for EMC World customers (physical or virtual) to Tweet their questions (and there were many) about the new ViPR technology that Amitabh's team is developing.

It's worth pointing out that Amitabh's background has two themes: (a) he has a strong research background (developed at companies like TI, Digital, and Microsoft), and (b) a strong track record of product delivery (Windows Server and Windows Azure being prime examples).

As a technologist I was looking forward to asking a few of my own questions, given that I had heard several things during Amitabh's keynote that I wanted more detail on. So my first question had to do with Amitabh's assertion that ViPR will allow applications to access data written in one protocol (e.g. object) with a different protocol (e.g. file).

Amitabh answered this first question with a real-world customer use case: video editing. He described a customer that used public cloud, object APIs to store a large amount of video assets. During the editing process however, the customer had to insert non-film collateral (e.g. credits) into the video files. The customer needed to use file APIs for this task, so they

dragged the current video out of the public cloud using object APIs

stored the video into their local file store

used file APIs to manipulate the video image

moved the file back to the public cloud object store

This migration approach is slow and costly. Amitabh explained how ViPR's solution to this use case is to keep data in place and allow applications to "open" specific objects using a file API, perform the edits, and close the file.

At this point I could have spent an hour asking him how specifically ViPR accomplishes this task but it was time to answer questions from the Tweet Stream.
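As a rough mental model of that "edit in place" idea, here is a toy Python sketch. To be clear, everything below is invented for illustration: the class and method names are hypothetical stand-ins, not ViPR's actual API. An object lands in a store via an object-style call, gets edited through a file-style view, and is never migrated out:

```python
# Illustrative sketch only: hypothetical stand-ins for an object store
# and a cross-protocol layer. None of these names come from ViPR.
import io


class ObjectStore:
    """Minimal in-memory stand-in for a cloud object store (object API)."""

    def __init__(self):
        self._blobs = {}

    def put_object(self, key, data):
        self._blobs[key] = bytes(data)

    def get_object(self, key):
        return self._blobs[key]


class CrossProtocolLayer:
    """Exposes stored objects through a file-style API without copying
    them out of the store, mimicking the 'edit in place' idea."""

    def __init__(self, store):
        self._store = store

    def open_as_file(self, key):
        # Return a writable file-like view seeded with the object's bytes.
        return io.BytesIO(self._store.get_object(key))

    def close_file(self, key, fileobj):
        # Persist the edited bytes back under the same key.
        self._store.put_object(key, fileobj.getvalue())


# Old workflow: download via object API -> local file edit -> re-upload.
# With the layer above, the edit happens against the same stored object:
store = ObjectStore()
store.put_object("video.mov", b"FILM_FRAMES")

layer = CrossProtocolLayer(store)
f = layer.open_as_file("video.mov")
f.seek(0, io.SEEK_END)
f.write(b"+CREDITS")          # insert non-film collateral via file API
layer.close_file("video.mov", f)

print(store.get_object("video.mov"))  # b'FILM_FRAMES+CREDITS'
```

The sketch compresses away everything hard (distributed metadata, locking, protocol translation), but it captures why the four-step migration dance above disappears.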

Storage System Questions

Amitabh moved on to answer a lot of Tweets related to ViPR's storage capability. Several of the Twitter handles I recognized as EMC employees expressing concern with the ViPR approach:

"Does this mean that there is no need to innovate at the storage level?"

Amitabh's answer was no. Many workloads will still require more and more innovation for increased intelligence within the array (think of the "sideways" additions to FAST that Brian Gallagher mentioned).

"Will EMC lose business to customers purchasing commodity devices?"

Amitabh's answer, again, was no. He pointed out that historically EMC doesn't play in the storage market for commodity devices, but with ViPR now EMC can. This capability can actually extend EMC's business into new markets.

Reliance on the Ecosystem

During his keynote Amitabh highlighted that ViPR will need to rely on a partner ecosystem to add value. The Tweets from the crowd asked him for more detail on this point. His response was that we need partners to innovate in three different ways:

From the storage integration standpoint (e.g. different vendor arrays)

From the management framework (e.g. plugging into different management paradigms like OpenStack, VMware, etc).

From the data services standpoint

Later on in the discussion I asked Amitabh about his keynote claim that ViPR would be "open", and I asked him how the partner ecosystem would engage with ViPR. Amitabh responded that this process is being worked out with the customers currently evaluating the technology, and the process will be shared once these current partners finish their deployment and integration into the system (they have access to ViPR's current RESTful APIs and binaries).

The session was packed with technical content but I'm quite sure we didn't get to answer everything. Throughout the rest of the week customers can continue to use the #emcbackstage hashtag to ask questions about ViPR, and the EMC Social Media team will do their best to follow up.

Perhaps the most interesting question focused on how his experience with the Azure platform applied to ViPR. Amitabh said the following: the lesson of Azure was that "things fail". This drove Amitabh to guide the ViPR team with a heavy focus on software resiliency.

May 07, 2013

On Tuesday at EMC World I had the pleasure of hosting the President of EMC's Information Intelligence Group, Rick Devenuti, at EMC Backstage.

Rick has been with EMC for five years and oversees
all aspects of IIG's business, including worldwide sales and services, channel
strategy, product development, marketing, strategic business and financial
initiatives, IT, technical support, and the Total Customer Experience program.

Rick stated during his keynote that he grew up a New York Mets fan! I was concerned that my status as a rabid Red Sox fan would lead to a painful discussion about 1986. But Rick and I kicked things off quite nicely as he talked about how IIG's Syncplicity product is helping the Red Sox scouts analyze top talent! His team has put together a great video on Syncplicity and the Sox:

Rick explained the benefits of this particular Syncplicity solution:

Scouts don't have to use the clumsy FTP interface to transfer videos back to the Red Sox secure IT department. This interface requires fast, online connectivity, which is often severely lacking in whatever locale they are working in. The Syncplicity interface is easy. They drag the video into their folder, Syncplicity moves it securely at the most convenient opportunity, and the scout can turn in and get a good night's sleep.

All of the videos uploaded via Syncplicity land on the Red Sox Isilon infrastructure, which gathers every asset in one place and allows for fast editing. The integration of all video assets behind the Fenway firewall happens automatically.

As scouts come and go within the Sox organization, the IT department has the ability to centrally control, via Syncplicity, the removal of video assets from all remote devices that departed employees were using.

After the Syncplicity conversation, Rick took more questions from the Twitter stream and was able to add context and color to his keynote in the following areas:

Solutions/Partners: Rick shared that the five key vertical solutions that IIG supports (Energy/Engineering, Life Sciences, Healthcare, Public Sector, Financial Services) rely heavily on a partner ecosystem with strong expertise in those areas. These partners complement the expertise that IIG consultants have in those areas.

Cloud: Rick dove into the On-Demand offering that IIG can supply, giving customers "as much cloud as they need". Customers can choose On-Demand as an on-premise option (e.g. Private Cloud), or they can choose to host their content in an On-Demand off-premise option, run by IIG (e.g. virtual Private Cloud).

Products: Rick spent quite a bit of time talking about the benefits of Syncplicity (as in the Red Sox use case), but he also mentioned how much he benefits from EMC's new ViPR platform. Instead of having his engineering team focus on direct integration with a variety of different storage platforms and paradigms, his developers can integrate with the ViPR API and instantly integrate with a wide variety of heterogeneous storage platforms.

We ended the discussion by taking a Tweet from the community regarding migration. Rick stated that customers running legacy IIG infrastructure can benefit from a new Enterprise Migration Appliance, which moves objects directly from IIG's database (as opposed to the latencies incurred via API access). Rick encouraged customers to request a free quote on this process by contacting their Documentum rep.

On Monday at EMC World I had a conversation with Brian Gallagher at the inaugural session of EMC Backstage. We took some live questions (Tweets) from the crowd, and Brian was able to follow up on his keynote speech with some great insights. I thought I'd share a few of them below.

Cloud in the Real World

During Brian's keynote, he displayed several videos to the
audience that showed real-world customers deploying EMC cloud solutions with
"enterprise-class" data integrity, performance, business continuity,
disaster recovery, etc. In other words, three years after the
"journey to the cloud" began, customers are successfully running cloud-style
IT-as-a-Service offerings using the product set in Brian's Enterprise Storage
Division. Service providers are building real-world, enterprise-class public
clouds. One of the key enablers, of course, has been the introduction
of VMAX Cloud Edition.

In fact, Brian pointed out that not only are Service Providers
meeting their ITaaS goals with VMAX Cloud Edition, but there is strong demand
for Cloud Edition from Enterprise customers as well.

Sideways

FAST (Fully Automated Storage Tiering) is going “Outside the
Box”!

Most people understand FAST to tier customer data vertically
between different layers of flash and disk drives. Brian described how
these algorithms will evolve to also automatically move data horizontally to
primary (e.g. a replica), secondary (e.g. Data Domain backup) and tertiary
(e.g. public cloud) targets. And like any good computer scientist
familiar with zero-based counting, Brian introduced the term
"zero-ary" to eventually enable horizontal integration of FAST
with all-FLASH disk array targets.

Why is this important? Because most customers will
introduce all-FLASH arrays as an adjacent technology to their existing data
centers. They will likely do this for specific workloads that require consistent low latency and high
IOPS. Zero-ary integration between VMAX FAST algorithms and all-FLASH systems
is a big win for customers that want to leverage their existing infrastructure
as they introduce flash technology.
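A toy placement policy helps make the vertical/horizontal distinction concrete. Everything below is invented for this sketch: the tier names, IOPS threshold, and aging windows are hypothetical, not EMC's actual FAST algorithm.

```python
# Purely illustrative: a toy policy mapping a workload's access profile
# to vertical tiers (flash/disk) or horizontal targets ("zero-ary"
# all-flash through tertiary public cloud). All thresholds are invented.

def choose_placement(iops: int, days_since_access: int) -> str:
    """Return a storage target for a given access profile."""
    if iops >= 10_000:
        # Consistent low latency / high IOPS: push sideways to all-flash.
        return "zero-ary: all-flash array"
    if days_since_access <= 7:
        return "vertical: flash tier"          # hot data stays high
    if days_since_access <= 30:
        return "vertical: disk tier"           # warm data drops a tier
    if days_since_access <= 180:
        return "secondary: Data Domain backup" # cold data moves sideways
    return "tertiary: public cloud"            # frozen data leaves the box


# A few sample profiles (iops, days since last access):
for profile in [(50_000, 0), (200, 3), (50, 20), (5, 90), (1, 400)]:
    print(profile, "->", choose_placement(*profile))
```

The real algorithms operate on sub-LUN extents with continuous statistics rather than two scalar inputs, but the shape of the decision (go up/down a tier, or go sideways to a different system entirely) is the point.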

VMAX Gravitational Pull

Brian's keynote also described a unique VMAX future state:
software features that will soon be able to run inside the
system. This will allow software to run much closer to the data itself. Over time,
a wide variety of virtual machines, including
appropriate customer applications, will have the ability to run inside VMAX (as Brian
demonstrated during his keynote by running virtualized VPLEX inside).

The EMC Backstage concept really allowed Brian to dive down into areas that EMC World attendees (physical or virtual) cared to hear more detail about. Brian and I also had a good laugh about the vFridge technology. During his keynote he had paused to grab an ice-cold water out of a fully functional VMAX refrigerator. Some lucky EMC World attendee will bring that fridge home for their own personal use!

One final point raised by Brian is that this trend towards
software agility and mobility is the reason why VMAX integrates so well into
EMC's new Software-Defined Storage strategy: ViPR.

This was the first EMC Backstage session of the week, and the
ability for EMC World attendees to Tweet questions directly to Brian made it an
engaging session.

Time permitting I'll post the summaries with other EMC
Executives in the days to come!

May 03, 2013

Next week at EMC World I'll be co-hosting a new activity that will greatly augment the much-awaited keynotes by EMC Executives.

The activity is known as EMC Backstage (hashtag #emcbackstage) and it is essentially a very social and informal Executive Q&A session, following immediately upon the heels of the Keynote.

In other words, each speaker walks directly off stage and into the EMC Backstage experience.

The EMC Backstage set is located directly in the middle of EMC Square (also home to the CUBE and the Blogger's Space). The set will have a similar look and feel to the Flash Launch EMC conducted back in March (see David Goulden below).

Perhaps the most dynamic aspects of EMC Backstage are (a) the "live" nature of the experience, with audience members literally surrounding the stage, and (b) the ability to Tweet questions directly to the executives via the #emcbackstage hashtag.

I'll be handling approximately half of the interviews, with EMC CTO John Roese handling the rest.

I look forward to tweeting you there. With any luck I'll be able to write a blog post summarizing the highlights of each keynote and some of the takeaways from the EMC Backstage follow-on. The following schedule lists the times for each Backstage session.

My career started at the end of the mainframe era. I have a graphic which I refer to as "pre-RAID" storage architecture, and this diagram can be used to describe the "nearness" of the application to the storage itself. This picture shows a CPU sending values directly to a disk drive.

The application, in this example, would run directly on the CPU.

At the beginning of the client/server (2nd platform) era, the application began to transition "further away" from the disk in terms of virtualized layers. The diagram below introduces a virtualized RAID layer (not shown) and a write cache above the disk drives. One can begin to see, graphically, the application moving "further" away from its storage repository.

As we fast forward to the end of the client/server era and the beginning of the 3rd platform era, we now see virtualization at every level, and an increased amount of "distance" (in terms of layers) between the storage and the application. My EMC colleague Ken Durazzo likes to use the following diagram to depict an exemplar data center application stack.

Application nearness is clearly decreasing, and as the vision of the 3rd platform advances, applications will move even further away (think mobile apps connecting into the stack depicted above).

The coordination of all of the plumbing and wiring between the application and storage is where the bulk of innovation will surface in the next few years. Cloud management platforms (like OpenStack) and Network Virtualization technologies (like SDN) are getting a lot of buzz right now for just this reason.

Data center architectures are transforming significantly, but legacy configurations can't flip over to radically different paradigms. DC architects are juggling a lot of balls in the air right now (see Doing Three Things at Once).

In future posts I hope to discuss different architectural approaches to cope with the application nearness phenomenon.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by DELL Technologies and does not necessarily reflect the views and opinions of DELL Technologies nor does it constitute any official communication of DELL Technologies.