Dana Gardner: Hello, and welcome to the next edition of the HP Big Data Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing sponsored discussion on how data is analyzed and used to advance the way you live and work.

Once again, we're showcasing thought-leaders and companies worldwide that are capturing myriad knowledge, gaining ever deeper analysis, and rapidly and securely making those insights available to more people on their own terms.

Our next big-data innovation discussion highlights how the latest version of HP HAVEn produces new business analytics value and strategic returns. So please now join me in welcoming Girish Mundada, Chief Technology Officer for HP HAVEn.

Gardner: Dan, let me start with you. We’re in a fascinating time because analytics and big data are now top of mind. What was once relegated to a fairly small group of data scientists and analysts as reporting tools -- and I am thinking about business intelligence (BI) -- has really now become a comprehensive capability that’s proving essential to nearly any business strategy.

From your perspective, what’s behind this eagerness to gain big-data capabilities and exploit analytics so broadly?

Wood: You’re right, Dana, and it’s because we're starting to see some very clear quantification of the value and the benefits of big data. It’s fair to say that big data is probably the hottest topic in the industry.

There’s a lot of talk across all forms of media about big data right now, but what’s happened is that credible publications like the "Harvard Business Review," for example, have started to put solid numbers around the benefits that enterprises can get if they can get their hands around big-data analytics and apply it to business challenges.

For example, Harvard Business Review is saying that, on average, data-driven organizations will be five percent more productive and six percent more profitable than their competitors.

Worth chasing after

Think about that. A six percent increase in profitability would double the stock price for a lot of organizations. So there really is a prize worth chasing after.

What we’re seeing, Dana, is much more widespread interest across the organization and not just within IT. We’re seeing line-of-business leaders understanding and, in many organizations, actually starting to benefit from big-data analytics.

They’re able to analyze the call logs in a call center, better understand the clickstreams on a website, and better understand how customers are using products. All of these are ways of analyzing large amounts of data and directly tying it to specific line-of-business problems.

That’s where we are right now. Industries around the world are going through transformational projects using big data to gain competitive advantage.

Gardner: It’s interesting too, Dan, that they’re not just taking these as individual data sets and handling them individually. Increasingly, businesses are combining them, finding new relationships, and doing things that they really couldn't have done before.

Wood: Absolutely. It’s the idea of a 360-degree view of their internal operations, or of their external customer trends and needs -- and it comes from combining data sets.

For example, they’re combining social media analytics on customers with the call logs into the call center, with internal systems of record around the customer relationship management (CRM) and ongoing customer transactions. It’s by combining all those insights that the real big-data opportunity reveals itself.

Gardner: And the sources for those insights and data, of course, are across almost any type of information asset. It’s not just structured data or data that your standard applications are built around -- it’s getting all the data, all of the time.

Wood: That’s right. In some ways, this industry label of big data is perhaps not the most helpful, because it’s not just the volume of data that is the challenge and the opportunity for the business. It’s the variety of sources, as you’ve alluded to, and also the velocity at which that data is moving.

The business needs to get hold of these multiple sources of data and immediately be able to apply the analytics, get the insights, and make the business decisions. This is why the vast majority of the data that’s available to an enterprise still remains dark.

Unused and unexploited

It’s unused and unexploited. Organizations, with their traditional analytics systems, are struggling to get the meaning and insights from all these data types that we mentioned. These include unstructured information, such as social media sentiment, voice recordings, potentially even video recordings, and the structured and semi-structured things like log files and data center data. For many organizations, getting the information quickly enough out of their CRM and enterprise resource planning (ERP) systems is a challenge as well.

Gardner: So we see that there’s a great desire to do this, and there are great returns on being able to do this well. We talked about some of the general challenges. What specifically is holding people up?

Is this an issue of cost, complexity, or skills? Why aren’t companies able to move beyond this small fraction of the available information to which they could be applying such important insight and analytics?

Wood: It’s a complexity and a skills challenge, as you mentioned. The systems they have today, Dana, typically aren’t set up to be able to analyze these vast amounts of unstructured information, and also to be able to analyze the structured data at the speed needed by the organization.

Think about the need to analyze immediately a clickstream from an online shopping application or a pay-to-use application that an organization has. That is a rapid-scale analysis of a large amount of structured data. Typically, the analytic systems that organizations have had aren’t able to cope with that or with the unstructured human information.

This is why HP has created the HAVEn Big Data Platform, and Girish will talk in more detail about this, and how it brings together the analytics engines needed to address these issues.

Just as importantly, there’s the ecosystem around HAVEn, which includes HP experts and services, and services from partners, to bring together the skills needed to turn this data collection into useful information.

And there are skills around data scientists, as well -- skills around understanding the right questions the line of business needs to be asking, and understanding how to visualize and represent the data.

Gardner: Based on what we have talked about in terms of some of these serious challenges, Girish, what were some of the guiding principles that you were thinking of when HAVEn was being put together and refined?

Talking to customers

Mundada: HAVEn didn’t come together by being created in a dark room somewhere in the back office. It came together by talking to customers. On a regular basis, I meet with some of HP's largest customers worldwide, getting input from them. And they're telling us what their current problems are.

Let me see if I can describe the landscape in a typical organization, and we can go from there. You'll see why we created HAVEn.

Let’s visualize four different waves of data. Back in the early '60s, '70s, even part of the '80s, mainframes were the primary way to process data, and we used them for operationalizing certain parts of data processing, where data was extremely high-value. If you look at the cost of those systems, it was phenomenal.

Then came the next wave in the ‘80s, where we went into what I call client-server computing, and we already know several companies that were created in this space.

I’ve lived in Silicon Valley for almost 30 years now, and a whole bunch of new companies were born in this space. I worked for a company, Postgres, which became Illustra, then Informix, and eventually IBM. If you look at that entire wave of OLTP technologies, we created data-processing technologies designed to solve basic business problems.

Application software was created: CRM, supplier relationship management (SRM), you name it. Many companies that did consulting around that were created, too. That was that second wave after the mainframe.

Then came the third wave, where we took this data from all these transactional systems and brought it together to do some basic analysis, which we now call business analytics, to find out "who is my most profitable customer, what are they buying, why are they buying," and things of that nature.

We created companies for that wave, too, and many technologies. Exadata, Teradata, Netezza, and a whole bunch of companies and applications were born in that space. That wave lasted for quite a while.

What we're seeing now is that from 2003 onward, something very fundamental has happened. At least, that’s the way I've been seeing this. If you look at the three Vs that Dan has described -- volume, velocity, and variety -- we’re talking about volumes that are growing exponentially. In the past, they were growing linearly. That creates a very different kind of requirement.

More importantly, if you look at the variety that Dan mentioned, that’s really the key driver in my mind. People are now routinely bringing in machine data, human data, and their traditional structured warehouses -- all of them together.

If you visualize a bar graph, you would see that 10 percent of the data that we now can monetize is coming from traditional sources, whereas 90 percent of the data that we need to monetize is now sitting in machine data and human data.

High velocity analytics

What we're trying to do with HAVEn is create a combined platform, where you can combine these three different data types and do very high-velocity analytics.

As a simple example, look at Apache Web Server logs. Historically, that data was used by the security people to see if anybody was breaking in, and by the operations people to see whether machines were overloaded.

More importantly, the digital marketing guys now want to look at that data to see who's coming to their website, what they’re buying, what they’re not buying, why they’re buying, and which geographies they’re coming from. Then, they want to combine all these data sets with their existing structured data to make sense out of it.
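
To make that web-log example concrete, here is a minimal Python sketch of one access log serving three audiences. It is an illustration, not HP product code: the sample log lines, IPs, and paths are invented, and the regex assumes Apache's common/combined log layout.

```python
import re
from collections import Counter

# Fields of an Apache "combined"-style access log line.
LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

def parse(lines):
    """Return one dict per well-formed log line."""
    return [m.groupdict() for m in map(LINE_RE.match, lines) if m]

sample = [  # invented sample traffic
    '203.0.113.9 - - [10/Jun/2014:10:00:01 +0000] "GET /login HTTP/1.1" 401 532',
    '203.0.113.9 - - [10/Jun/2014:10:00:02 +0000] "GET /login HTTP/1.1" 401 532',
    '198.51.100.4 - - [10/Jun/2014:10:00:03 +0000] "GET /products/tv HTTP/1.1" 200 2048',
]
hits = parse(sample)

# Security view: repeated authentication failures per source IP.
failures = Counter(h['ip'] for h in hits if h['status'] == '401')
# Operations view: total request volume, a crude proxy for load.
load = len(hits)
# Marketing view: which pages visitors successfully reach.
popular = Counter(h['path'] for h in hits if h['status'] == '200')

print(failures.most_common(1))  # → [('203.0.113.9', 2)]
```

The point of the pitch is that all three views run over the same raw log; in practice the aggregates would be joined against structured customer tables rather than printed.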

Today, it's a mess in the market. When we talk to our partners and customers, they’re saying that they have point solutions for each of these things, and if you want to combine that data, it’s really hard. That’s why we had to create HAVEn.

HAVEn is the fourth wave. It’s specifically about big data. If you look at HP’s portfolio, we sell products and services across each of these waves, and the fastest growing wave right now is the big-data wave. It’s growing at about 35 percent a year, according to Gartner, and that's why we're excited about it.

Gardner: Now we know why you created it and what it’s supposed to do. Tell us a little bit more about what’s included in HAVEn, and why it is that you’ve been able to create a combination of product and platform that solves this very difficult task.

Mundada: If you look at what’s required now to process big data in its entirety, one product no longer can do it all. There is a very famous paper written by some university professors, titled “One size does not fit all.” It shows that different data structures are able to solve different kinds of data problems far more efficiently.

One way to think about big data is to think of it as a pile of dirt. It’s a big pile. In that pile, there’s gold, silver, platinum, iron, and other metals you don’t even know about. If the cost of mining that data is high, obviously you’re going to go after only the platinum and some known objects that you care about, because that’s all you can afford.

HAVEn is about bringing that cost of processing down to a very, very low level so you can go after more metals. That means you have to bring together a set of technologies to be able to solve this. If you look at the last three years, HP has made a very significant amount of investment in the big-data space.

Now, we have a set of technologies to be able to combine them into a unique experience. Think of it almost like Microsoft Office. Before you had Microsoft Office, you would buy a word processor from one company, a spreadsheet from another company, and presentation software from a third company.

Let’s say you wanted to create a simple table. If you had created it in a word processor or even a spreadsheet, you couldn’t mix and match it. It was impossible to mix and match across very different types.

Then, Microsoft came to the table and said, “Look, here’s a simplified solution.” If you want to create a table, go ahead and create it in PowerPoint. Or if you want to create something more complicated, put it in Excel. Then, take that Excel table and put it in PowerPoint. Or, you can put the whole thing into a Word document. That was the beauty of what Microsoft did.

We’re trying to do something similar for big data: make it very easy for people to combine all these different engines and the different data types and write simple applications on it.

Gardner: What’s also going on, other than product acquisitions, is a recognition of industry standards -- the H in HAVEn representing Hadoop is an indication of that. Tell me, beyond the products, what is binding them together, and why do openness and standards have an important role here, too?

Mundada: Let’s look at HAVEn as a platform. HAVEn is really two different concepts. There’s the HAVEn data platform, which we’ll talk about now, and there’s a HAVEn ecosystem, which I’ll mention in a minute.

HAVEn means Hadoop, Autonomy, Vertica, Enterprise Security, and “n” applications. That’s the acronym. So let’s look at one of these pieces, and why we need an architecture like this.

As I said, today you need to combine different sets of data techniques to solve different problems, and they have to work seamlessly. That’s what we did with HAVEn. I’ve been with HAVEn from day zero, before the project concept started, and I can tell you why and how we added these pieces and how we’re trying to integrate them better.

If you look at Hadoop as part of that HAVEn ecosystem, our story at HP is that Hadoop is an integral part of HAVEn. We see a lot of our customers and partners betting on Hadoop, and we think it’s a good thing to keep Hadoop open and non-proprietary.

Leading vendors

We also work today with all the leading Hadoop vendors, so we have shipping appliances as well as reference architectures for both Cloudera and Hortonworks, and we’re now working with MapR to create similar infrastructure. That’s our Hadoop story.

We’ve also found that our customers are saying they want some flexibility in Hadoop. Today, they may want one vendor, and tomorrow, they may decide to go to another vendor for whatever business reasons they choose. They want to know if we can provide a simple management tool that works across multiple Hadoop distributions.

As an example, we had to extend our Business Service Management (BSM) portfolio, so we can manage Hadoop, Vertica, hardware, storage, and networking all from within one environment. This is simply operationalizing it. Having a standardized set of hardware that matches multiple Hadoop distributions was another thing we had to do. There are many such enterprise-class innovations that you’ll see coming from HP.

But more than that, we also found that Hadoop is really good for certain kinds of applications today, and obviously, the community will extend that. You will see more and more innovations coming from that community and ecosystem.

Today, there are several areas where there are holes in Hadoop, or where maybe it’s not as strong as commercial products. One such area is SQL. The SQL piece of Hadoop is going to be one of the key differentiators across the different Hadoop distributions.

In that area, we have a technology called Vertica, which is the V part of HAVEn, and you’ll see companies like Facebook using a combination of both Hadoop and Vertica.

The classic use case we see is that people will bring in all kinds of raw data, put it into Hadoop, and do some batch processing there. Hadoop is great as a file system and a batch-processing environment. But then they’ll take pieces of that data and want to do deep analytics on it, like regression analytics, and they will put it into Vertica.
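
As a rough sketch of that division of labor, the toy map/reduce pass below stands in for the Hadoop batch stage, and the COPY statement (shown as a string, not executed) stands in for the Vertica bulk load. The event data, file path, and table name are all invented for illustration.

```python
from collections import defaultdict

# Raw clickstream events, as they might sit in HDFS (invented sample data).
raw_events = ["user1,/home", "user1,/cart", "user2,/home", "user2,/home"]

# "Map": emit a (page, 1) pair per event.
mapped = [(line.split(",")[1], 1) for line in raw_events]

# "Reduce": sum the counts per page.
counts = defaultdict(int)
for page, n in mapped:
    counts[page] += n

# The compact aggregate is what moves to Vertica for deep analytics,
# typically via a bulk load such as (illustrative only, not executed here):
copy_stmt = "COPY page_counts FROM '/tmp/page_counts.csv' DELIMITER ','"

print(dict(counts))  # → {'/home': 3, '/cart': 1}
```

The batch side grinds down raw volume; the column store then answers analytical queries over the much smaller result.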

What’s different is that it’s custom built for the fourth wave. It’s an analytic database, and by that, I mean the underlying algorithms are completely designed from the ground up. Michael Stonebraker, who created the key products in the earlier waves -- Ingres and Postgres -- also created this at MIT from the ground up.

Data today

The intuition was that if you look at the processing of data today, it’s gone from having 10 to 20 columns per row to possibly thousands of columns. A social media company, for example, might have 10,000 pieces of information on me, and the processing is becoming more regression-oriented. You might say, “Girish, age x, lives here, and likes y. What’s the likelihood somebody else may like it?”

It’s meant for that kind of deep analytical processing, with a column-oriented structure. In those kinds of applications, this database technology tends to be orders of magnitude faster -- tens of times faster. That’s one example of Hadoop and Vertica, and we can talk more about the other pieces, Autonomy and Enterprise Security.
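
A back-of-the-envelope illustration of that columnar intuition, with invented numbers: when rows carry a thousand columns but a query touches only one, a column store reads a tiny fraction of the data.

```python
# 100 rows of 1,000 columns each (a tiny, invented stand-in for real tables).
rows = [{f"col{i}": i for i in range(1000)} for _ in range(100)]

# Row-oriented layout: averaging one column still walks entire rows.
values_touched_row_store = sum(len(r) for r in rows)  # 100 * 1000

# Column-oriented layout: the column lives contiguously by itself.
column = [r["col7"] for r in rows]  # a real column store reads just this
values_touched_column_store = len(column)  # 100

print(values_touched_row_store // values_touched_column_store)  # → 1000
```

Real column stores add compression and vectorized execution on top, but the I/O ratio above is the core of the speedup being described.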

Gardner: So we see that there’s a platform that you put together. There’s an ecosystem that’s supporting that. There are these binding standards that make the ecosystem and the platform more synergistic. But other people are doing the same thing. What’s making HAVEn different? What is it about HAVEn that you think is going to be a winner in the marketplace?

Mundada: There are two different answers to that. Let me talk about how we’ve taken not just the SQL piece of Hadoop, but how we extend it with other parts of HP that are unique to HAVEn. It’s the breadth of it. Let’s see how we extend this simple combination of Hadoop and Vertica.

I said it’s an analytic database platform. If you look at the platform piece of it, with Vertica, we’re able to drop in other code that is user-defined and user-written. For example, you can drop R language routines, Java, C++, or C language routines directly into the database. We’re now able to combine that richness across our portfolio.

Autonomy, which is the A part of HAVEn, is a unique technology. It's one of a kind. Some of the largest governments and some of the largest organizations in the world, such as banks and financial institutions, have it in production for what it's meant for: human information processing, which is audio, video, and text.

As an example, you could take a video stream and ask simple questions. Tell me if an object is moving from point A to point B, or tell me what’s in the object. Is it a human? Is it a car? Can you read car number plates automatically?

And you could do some really sophisticated applications. We have cases where police cars have video cameras mounted on the side, and as they’re driving through a parking lot, they can take photos of the number plates and compare them to stolen cars.

Crime detection

Imagine being able to take that technology and combine it automatically, through simple SQL-like or simple REST API-like commands, with your existing data, and create very sophisticated applications to understand your customer, or for crime detection and things like that.

Now let’s bring in the third part of the puzzle, the E part, which is Enterprise Security. That’s also unique. We have an entire portfolio, both for security as well as for operations management.

If you look at enterprise security and if you look at the Gartner Magic Quadrant, HP’s product set has been in the leader space for several years in a row. They are the number one vendor in that area.

Now, think about our portfolio of ArcSight, Fortify, TippingPoint, and other ESP products. Imagine being able to take the data-collection algorithms of those, bring them into this common platform of HAVEn, and combine them with other structured and unstructured data with just simple commands. That’s something we can do uniquely.

Operations management is another area where we have hundreds of these machine logs. We can collect them, break them open into modular pieces, and create new applications. You can go look at our website for Operations Analytics, where with a simple slider, you can go back and forth in time through millions of log files as if they were structured data.

We can do that uniquely, because we have that entire collection. Our BSM portfolio has been on the market for 30 years, and it’s one of the leaders. This is the HP OpenView platform, and this is one of the things we can do uniquely at HP: bring all these things together.

That’s the breadth of our portfolio, but it simply doesn’t stop at this platform level. Remember, I said that there are two concepts. There is a platform, and then there is the ecosystem. Let’s look at the platform level first.

We have the whole of HAVEn. We have the connectors, and we ship these 700 connectors out of the box. With simple commands, you can bring in social-media data in every written language. You can bring in machine logs and structured logs. That’s the platform.

Let’s extend it further into the ecosystem part. The next thing that people were saying was, “We want to use something very open. We have our own visualization tools. We have our own extract, transform, load (ETL) tools that we’re used to. Can you just make them work?" And we said, "Sure.”

That’s one of the things that we’re able to do now. With simple SQL, we can essentially write simple queries across structured and unstructured data. Using Tableau Software, or any other tool that you like, you can access this data through our connectors. More importantly, it lets you hook your existing ETL tools into this -- completely transparently.
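
What "simple SQL across structured and unstructured data" can look like is sketched below: one query joining a structured CRM table with a table that a connector might populate from social media. Every table and column name here is invented for illustration; the query itself is plain SQL that any ODBC/JDBC client, Tableau included, could issue.

```python
def mentions_by_region_sql(sentiment="negative"):
    """Build a query joining connector-fed social data with CRM records.

    All identifiers (social_mentions, crm_customers, ...) are hypothetical.
    """
    return (
        "SELECT c.region, COUNT(*) AS mentions "
        "FROM social_mentions s "
        "JOIN crm_customers c ON s.customer_id = c.id "
        f"WHERE s.sentiment = '{sentiment}' "
        "GROUP BY c.region ORDER BY mentions DESC"
    )

print(mentions_by_region_sql())
```

The design point is that once the connectors land everything in tables, the "unstructured" side becomes just another joinable relation.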

Even with the platform -- the HAVEn components in the middle and the connectors -- our customers are asking, “Can you give us matching hardware for Hadoop, so we don’t have to spend time setting it up?” That’s one of the things that HP can uniquely do, but more importantly, we have appliances for Vertica, for example, which are standardized.

If you look at the other side, our customers are also saying, “We understand that HP wants to provide us all this, but we like openness and we like other partners.” So we said, “Fine, we’ll leave this entire ecosystem open.” Our software will work with HP hardware, and we can optimize for it, but we also commit to working on everybody else’s hardware.

Our cloud story is that we’ll work on Amazon, as well as OpenStack. For example, if you want to build a hybrid cloud, where part of your data resides on HP or your private environment using OpenStack, that’s fine. If you want to put it in Amazon or Rackspace, no problem. We’ll help you bridge all these. These are the kinds of enterprise-cloud innovations that HP is able to do, and we’re open to this.

So to answer your question very succinctly, if there were three things I would pick where HP is different, one is the breadth of our portfolio. We have a very large breadth that we've brought together.

The second is the openness of the platform. HP is known to be a very open company. Our Hadoop story is an example: we didn’t create a proprietary Hadoop. We kept it open. If you look at our visualization, we didn’t go and force a visualization technology on you. We kept it open.

More importantly, if there is one key thing that you want to take home from what we've done with HAVEn, it's not about speeds and feeds. It's about business value.

The reason we created HAVEn was to create that iPhone-like or Android-like environment, where the vision is that you should be able to go to a website, say you have standardized on the HAVEn platform, and then be able to point and click and download an application.

The “n” part of HAVEn is really the business value of it, and that’s how we see HAVEn as unique. There is nobody else, as far as we know, that has that end vision, where you can build the applications yourself using standard tools -- SQL, ODBC, REST API, JDBC -- or you can buy ready-made software that HP Software has created.

We have packages across service, operations, and digital marketing. Or you can go with a partner. The partner could be HP Enterprise Services, Accenture, Capgemini, or any of those big partners. That’s something unique about the HP big-data ecosystem that doesn’t exist anywhere else today.

Applications

Gardner: Applications are something that takes advantage of the platform, the capabilities, and the breadth and depth of the data and information.

I wonder if you could explain a little bit more about the application side of HAVEn, perhaps through examples of what people are already doing with these applications, and how they’re using them in their business setting?

Mundada: That’s actually one of the most exciting parts of my job. As I said, I meet literally 100 customers a month. I'm traveling across the continents, and the use cases of big data that I see are truly phenomenal. It really keeps you very motivated to keep doing more.

Let's look at a very broad level of why these things matter. Big data is not just about monetary profits. It's really about what I call extended profits. It doesn’t have to be monetary. As a simple example, we have medical companies using data and our technologies to dramatically speed up drug discovery, hundreds of times more than they were able to with Hadoop.

That translates into saving lives. At our Discover show, we saw that a very innovative organization is using our technology to look at biodiversity and save wildlife in the Amazon.

Those are unique, but they’re edge cases. If you look at a regular enterprise, what they want to do at a very high level falls into three categories: applications that HP itself is building, applications that partners are building, and applications that customers themselves are building.

Let's start with the ones that HP is building. Today HP is shipping several applications, and I’ll talk about a few of them. Even before I talk about these applications, let's look at why people generally want to do this. They’re saying that they want to either increase revenues, affecting the top line, or decrease costs, so they can increase the bottom line. Third is that they want to improve products and services. Those are really the three broad categories at a very, very high level.

As I said, HAVEn isn’t about speeds and feeds. It's about really creating business value in a hurry, so you get there before your competitors can.

From that perspective, there are three applications I’ll mention. In terms of increasing revenue, we ship a product called Digital Marketing Hub, which combines the power of Autonomy and Vertica to drive all of your customer analytics.

You’re able to take your call-center logs, your social media feeds, your emails, and your phone interactions and find out what the customer is really saying, what they want and don't want, and then optimize that interaction with the customer to create more revenue.

More precise answers

For example, when a customer calls and you know what they want, obviously you can tell them more precise things. That’s one example.

Let's look at another example, where you want to decrease your costs and improve your bottom line. Operational Analytics is another software product we ship. We’re able to drive down the costs of debugging network troubles by 80 percent by combining all these logs from machines on a very frequent basis.

We can look at this and say, "At this second, every machine was okay. A second later, machines have gone down." I can look at exactly the incremental logs that showed up, using a simple slider as a pointer, going through the logs as if they were SQL data. That’s unique.
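
That "slider" idea reduces to a time-window filter over timestamped logs. A miniature Python version, with invented timestamps and messages:

```python
from datetime import datetime

def logs_between(lines, start, end):
    """Return the log lines whose timestamps fall within [start, end]."""
    out = []
    for line in lines:
        ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        if start <= ts <= end:
            out.append(line)
    return out

logs = [  # invented sample: one healthy line, then two suspect lines
    "2014-06-10 10:00:00 node3 heartbeat ok",
    "2014-06-10 10:00:01 node3 disk write failure on /var",
    "2014-06-10 10:00:02 node3 service crashed",
]
# Everything between "last known good" and "machines down":
suspect = logs_between(logs,
                       datetime(2014, 6, 10, 10, 0, 1),
                       datetime(2014, 6, 10, 10, 0, 2))
print(len(suspect))  # → 2
```

At production scale the same filter runs as an indexed query over millions of lines, which is what makes the interactive slider feasible.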

Those are the kinds of applications we’re able to create, and it's not just these two. The other thing people want is to improve products and services. We have something called Service Anywhere, where as you're calling, or as you're typing in commands and saying you want to find information about something, the system is able to understand the meaning of what you’re saying.

Notice that this is not keyword search. This is meaning, where it's able to go through existing case reports from customers, look at existing resolutions, and then say, “Okay, this might solve your problem automatically.”

Imagine the impact of that. Your customers are happy, because the answers come quicker. We call this ticketless IT. But more important, look at some other interesting ways this affects a company.

For example, I was recently in Europe, talking to a very large telco there, and they said, “We have something like 20,000 call-center operators who are taking calls from customers. Each call might take six minutes, and some of them are repeat calls. That’s really our problem.”

We worked out something that could save them roughly two minutes per call. That translates to about a $100 million net saving per year. That’s really phenomenal. That’s one kind of application that HP built.

Now imagine a customer wanting to build the same application themselves. That’s the beauty of the HAVEn platform. On the same platform, you can buy HP-built applications or you can build your own.

Let's look at NASCAR as an example. They did something very similar for customer analytics. They are able to -- while the race is happening -- understand audio, television channels, radio broadcasts, and social media, and bring that all together as if it's one unique piece of data.

Then, they’re able to use that data in really innovative ways to further their sport and to create more promotional dollars for not just themselves, but even the participants. That’s unique -- being able to analyze human data at mass scale.

Looking to the future

Gardner: Well, we've learned a lot about the market, the demand, and why big data makes so much sense. There is a very large undertaking by HP around HAVEn, and what it’s delivering in terms of openness, platforms, breadth, and these great examples of applications. But we also need to look to the future.

What's coming next in terms of HAVEn 2.0 or HAVEn 1.5? Dan, could you update us on how things are progressing, what you have in mind for the next versions of these products and, therefore, how the whole increases as the sum of the parts increases?

Wood:
Dana, we've just announced HAVEn 2.0. Girish explained HAVEn in terms of the platform and the ecosystem, and continuous innovation now spans both of those pieces. It's really important to us to be driving the ecosystem, as well as the platform. So I'll speak to HAVEn 2.0 and the features that are the focus in driving HP forward.

In terms of the platform, there are the analytics engines that we have. Girish mentioned they were best in class at the time HP acquired them, and we continue to invest in R&D across Autonomy IDOL, Vertica, and the ArcSight Logger product. We recently announced new versions of all three, improving the analytics capability and the usability and, just as importantly, increasing the interoperability.

For example, we now have integration of ArcSight Logger with the Autonomy IDOL engine for analyzing unstructured human information. A really great use case: Logger previously enabled IT to understand data movements and potential threats and risks in the organization.

For example, if I were sending 50 percent of my email to a competitor, you could combine that capability with the unstructured-information analysis in Autonomy and understand, at the information layer, exactly what's in that email, 50 percent of which is going to a competitor.

Let's start putting that together and getting a powerful view of what an individual is doing and whether that's a risky individual in the organization, integrating those HAVEn engines and putting more effort into integration with the Hadoop environment as well.

For example, we have just announced Hadoop integration connectors for Autonomy. A lot of people are saying that they're building a data lake with Hadoop, and they want the capability of applying analytics to the unstructured information that exists in that Hadoop data lake. Clearly, we've also got integration with Vertica in the Hadoop environment as well.

The other key thing on the engine side is IDOL OnDemand. At the moment, on an early-access program, we're making the IDOL engine available to developers as a cloud-based offering. This is to encourage the independent developer community to take components of IDOL, such as social-media analytics and video or audio recognition, and start building them into their own applications.

We believe the power of HAVEn
will come from the combination of HP-provided applications and also
third-party applications on top.

Early-access program

We’re
facilitating that with this initial early-access program on IDOL OnDemand, and also, we’re investing in developer programs to make the
whole HAVEn development platform far easier for partners and independent
developers to work with.

We've set up a HAVEn developer website, and stay tuned for some really fun online and physical events, where we'll be getting the developer community together.

In terms of those applications that make the
whole HAVEn ecosystem come to life, Girish has mentioned some of them
that we have announced over the last few weeks. So I’ll give you a quick
recap on those.

And along with the HAVEn 2.0 announcement, we're really pleased that six leading SI partners -- including Accenture, Capgemini, Deloitte, PwC, and Wipro -- have themselves built and are marketing applications on top of HAVEn. And those partners have a fascinating mixture of very industry-specific analytics applications and more horizontal apps, based on the priorities that they're chasing.

So we’re really excited about that and expect to see many more announcements of partner applications over the next few months.

The final piece of HAVEn 2.0 to support this whole ecosystem is a marketplace that we've launched, where we're populating our solutions and partner solutions to facilitate the whole commerce side of those applications taking off in the market.

Gardner:
Just to flesh out that last point, when you say a marketplace, is this
an app store? Will some of your partners that are able to create
analytics-oriented applications on HAVEn then be able to sell them? Is
this a commerce site or is it a community site only at this point?

Mundada:
The original vision of HAVEn was to be able to make it essentially like
how you buy applications on a mobile phone today. Once you have settled
on a platform, the eventual vision is to be able to go there and just
download these applications. As Dan said, they’ve launched this now and
you will see much more stuff coming in this area.

Gardner:
For those interested in learning more, who might want to focus on one element of HAVEn -- it's not all-or-nothing. You don't have to buy it all at once. It comes in parts. There are on-ramps, and then you can expand. How do you get started? How do you learn about specific parts of HAVEn? Which combinations would work for you?

One-stop resource

Wood:
The first place to go is hp.com/haven. That’s your one-stop resource
for information on this platform, all of the engines that Girish alluded
to. You can get the inspiration from some amazing customer case studies
we have on there -- insights from experts like Girish and other people
who are talking in depth about the individual engines.

And as you rightly say, Dana, it's about finding the right on-ramp for yourself. You can look at the case studies we have, the use cases on big data in particular industries, and take a look at the specific pain point you have today. That's the hp.com/haven website, and it gives you all of that information.

You can also drill down from
there, if you're a developer, and find the tools and resources that
we’ve spoken about to enable you to start building apps on top of HAVEn.
That’s one part.

The whole power of HP is behind this HAVEn platform, enabling customers, from an infrastructure and services point of view, to start building these big-data analytics. A couple of key things here.

To expand on that, the Technology Services team is able to do full consulting on how to optimize the overall infrastructure for processing, sharing, and storing the vast amount of information that all organizations are coping with today. That will then bring in things like 3PAR storage systems and other innovations across the HP hardware business.

Another place where I see customers often needing some help to get started is in understanding exactly what questions we need to be asking in terms of analytics, and exactly what algorithms and analytics we need to put in place to get going. This is where the Big Data Discovery Experience Services from HP come in.

This is provided by the Enterprise Services Group (ESG). Those guys have data scientists and industry experts who can actually help customers go through the design phase for a big-data platform and then offer the HAVEn infrastructure supported by the ESG Services team.

Finally, Dana, come and see us on the road. We’ll be at HP Discover in Las Vegas June 10-12.
We’re putting together several road shows and events across the main
regions in Europe, the Americas, and in Asia Pacific, where we will be
taking HAVEn on the road, too. Take a look at that hp.com/haven website, and details of the events will be found on there.

Key messages

Mundada:
I wanted to add a couple of relevant points. There are two key messages: big data is really important, and it's disrupting business. Your competitors are going to do it. You have a choice to either lead and do it yourself, or you will be forced to follow. It's one of those things that is disrupting industries worldwide.

Now, when you think of big data, don't think in piece parts. It's not that you need a separate solution for human information, another for machine logs, and another for structured data. You have to think of it holistically, because there are many kinds of newer applications that I'm seeing regularly, where you have to bring all these data types together and create joint applications.

Whichever technologies you choose and settle on, think of that Microsoft Office-like experience. You want a combined, integrated solution across the entire stack, and there aren't that many available in the market today. So whoever you work with, make sure they're able to handle that entire piece as one giant puzzle.

Gardner:
Very good. I'm afraid we'll have to leave it there. You've been listening to an executive-level discussion highlighting how the latest version of HP HAVEn produces new business analytics value and strategic returns. We have seen how big-data capabilities and advanced business analytics have now become essential to nearly any business activity.

This discussion marks the latest episode in
the ongoing HP Big Data Podcast Series, where leading edge adopters of
data-driven business strategies share their success stories and where
the transformative nature of big data takes center stage.

To learn more about how businesses anywhere can best capture knowledge,
gain deeper analysis, and rapidly and securely make those insights
available to more people on their terms, visit the HP HAVEn Resource
Center at hp.com/haven.

I’m Dana Gardner, Principal
Analyst at Interarbor Solutions, your moderator for this ongoing
sponsored journey into how data is analyzed and used to advance the way
we live and work. Thanks so much for listening and do come back next
time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how HP is developing products and platforms to help businesses deal with the demands of big data in a competitive environment. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.