SAP HANA – from Analytics Appliance to Vendor Consolidation

Last week we headed to SAP TechEd 2011 in Las Vegas – the first of 4 TechEd conferences around the world. I thought it might be nice to reflect for a few minutes on one of the big topics that we expect to see out there – and what it means for the wider IT audience.

As we know, SAP HANA 1.0 SP02 as it stands today is just an analytics appliance on crack. I’ve been vocal about some of the issues that customers are facing during implementations but none of that will matter once SAP start to solve the issues faster than they are being found.

What is truly fascinating is what SAP are planning for HANA, and how aggressive the roadmap is. Because let’s be clear: they are planning to build out software which will change the whole face of the IT industry, and if they succeed, I will probably have to concede that SAP deserved the moniker “game changer” and that no kitties were harmed after all.

SAP HANA as a database

The first thing that SAP plan is to turn SAP HANA into a kick-*** multi-functional database. Right now in HANA 1.0 SP02 there are far too many limitations for this to be true but they are working on it. So far they have built out:

– Columnar storage for good data warehouse performance
– Row storage for good transactional read performance
– Delta mechanisms to offset the poor write performance of columnar storage
– Calculation and execution engines to optimise data warehouse performance
– Persistent storage and writes so the appliance can be powered down
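A toy sketch can illustrate the delta mechanism from that list: writes land in a cheap append-only delta store, which is periodically merged into the read-optimised main store. This shows the concept only – the class and method names are invented and say nothing about HANA’s actual implementation.

```python
# Toy sketch of the delta-merge idea behind columnar writes.
# Concept illustration only, not HANA's actual implementation or API.

class ColumnStore:
    def __init__(self):
        self.main = []    # read-optimised: kept sorted (could be dictionary-compressed)
        self.delta = []   # write-optimised: appends are cheap

    def insert(self, value):
        # Writes go to the delta store so the compressed main store
        # is never modified in place.
        self.delta.append(value)

    def merge_delta(self):
        # Periodically fold the delta into the main store and re-sort,
        # restoring scan performance.
        self.main = sorted(self.main + self.delta)
        self.delta = []

    def scan(self):
        # Reads must consult both stores until the next merge.
        return self.main + self.delta

store = ColumnStore()
for v in [5, 1, 3]:
    store.insert(v)
store.merge_delta()
store.insert(2)
print(store.scan())  # [1, 3, 5, 2] – sorted main, then the unmerged delta
```

The trade-off is visible even in the toy: inserts are O(1), but scans get slower as the delta grows – which is exactly why the merge has to happen periodically.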

But there’s a lot of work to go to make it into a proper database. For instance:

– Disk storage for archives
– ANSI compliance for SQL
– Ability to run as a database for SAP Business Suite
– Planning engine
– Backup, disaster recovery and good administration tools

But let’s be clear – if SAP clear all this stuff up, and make it work, they have the potential for a formidable product which outperforms anything out there in terms of cost-to-performance. And that’s just from a generic database perspective – so Teradata, IBM and Oracle had better watch out.

SAP HANA as an Application Platform

Because SAP HANA is a 2-tier architecture (app platform and client) rather than a 3-tier architecture like SAP R/3 (database, application server and client), there are benefits that it can bring to bear that haven’t been relevant since the traditional mainframe. In the last 20 years, the biggest problem by far that large-scale computing has faced is moving stuff between layers.

None of that I/O overhead matters with HANA because everything happens inside the memory of one system – not along lengths of glass or copper cable. This provides a lot of exciting opportunities because you can build the database, application and everything into a single unit, which will perform better than anything else out there.

What’s more, SAP HANA as a platform could run multiple apps, ERPs, data warehouses and anything else – all in one big cloud: a virtual private cloud or perhaps, in time, a public cloud.

SAP HANA as a Vendor Consolidation Platform

If you look at this logically – what SAP HANA does is to collapse layers. It collapses the application layer with the database layer, which means you don’t need to buy your database software from one vendor and your application software from another – you can buy it all from SAP. This is a great simplification of the procurement process and means SAP can make more profit and deliver more innovation for the same license fee. But let’s take this further.

At the hardware level there’s also consolidation – you buy an appliance which is fixed in its nature. There are currently 5 hardware vendors you can buy HANA appliances from: Cisco, Dell, HP, IBM and Fujitsu. Dell are the kings of commodity hardware, yet so far they are a disorganised mess around SAP HANA. If Dell get their act in gear – and I’m planning on kicking them until they do – this stuff will commoditise fast. It will drive profit margins down to normal commodity levels, and this will lower the barrier to entry.

But this isn’t the point: if you want to consolidate with SAP HANA then the only logical outcome is that SAP also provide the hardware – I have written before about why SAP must acquire a hardware vendor. This way it becomes a one-stop-shop to procure hardware, database and application.

And take it a step further, what does it mean for the traditional Systems Implementor (SI)? I predict two models: first, the SI will be the orchestrator, combining SAP, Hardware and Services into the HANA pot pie. The second will be SAP providing a one-stop-shop for everything including – perhaps – a cloud based model – inevitably subcontracting services to SIs. Customers just don’t want the confusion of any other model in this increasingly SaaS-dominated world.

Conclusions

Let’s be clear – SAP HANA isn’t many of the things I’ve described yet – it’s still in its infancy. But if you are going to SAP TechEd 2011 in Las Vegas, Bangalore, Beijing or Madrid and you are in the technology business, then you need to seriously consider what it means to you. And I highly recommend you learn about SAP HANA early and get into the groove.

The underpinnings for SAP HANA are already in place – the in-memory database is already there and this won’t change. So anything you learn today about HANA will be useful to you in the future, and will give you an early adopter’s advantage.


You’re right that HANA is pretty fresh and certainly not a mature product.

And sure enough there are several leaps to take to turn it into a general-purpose database that could replace the current NetWeaver databases.

However, the points of criticism chosen aren’t the best ones:

– Disk storage for archives
=> What do you mean by that exactly? Backup/recovery features? Or what other DBMS vendor feature would you like to see in HANA?

– ANSI compliance for SQL
=> Sorry, but this really is just a “checkbox” feature. HANA’s main application will be in the context of SAP products, so if the SQL subset used by the SAP products is supported (which is already the case) then this will be sufficient. Moreover, other query interfaces like MDX are becoming more important as well and are already supported. Also: the available SQL features are most often not nearly fully exploited by programmers, so asking for ANSI SQL compliance really makes no difference in day-to-day use.

– Ability to run as a database for SAP Business Suite
=> This usage is close at hand and will be available. You can already use HANA as the database for a BW 7.3 SPS 5 system.

– Planning engine
=> Yes, this will be there – but which other database has one?

– Backup, disaster recovery and good administration tools
=> Backup is there, restore/recovery is there, and administration tools are there. Sure, more functions are necessary and will come, but it’s not as if these features are simply absent.

Concerning your warning towards the big DBMS vendors: I don’t think that HANA is a big threat to their main business. HANA has a high entry barrier and requires a rather big effort to really take advantage of the technology – there’s quite a well-defined market for that. The other databases have different target groups. They, especially Oracle and MS, make DBMSs for EVERYTHING and EVERYBODY.

Hard to believe perhaps, but the DBMS world is much bigger than the SAP world 😉

Anyhow, I liked your blog post, and I’m glad that some discussion beyond the technology is finally starting.

1. – ANSI compliance for SQL
=> Sorry, but this really is just a “checkbox” feature. HANA’s main application will be in context with SAP products,….

That is not how Vishal Sikka etc. have projected HANA so far. In fact, the majority of the HANA POCs you demonstrated at SAPPHIRE deal with non-SAP systems alone. Has something changed in SAP’s strategy?

2. – Ability to run as a database for SAP Business Suite
=> This usage is close at hand and will be available….

How close? Some day in the future, I am sure, you will have it. But if it is close, why is SAP not indicating even rough dates for when we can expect this? I seriously think this is where HANA would make a tangible difference.

3. – Backup, disaster recovery and good administration tools
=> Backup is there, restore/recovery is there – administration tools are there.

Are you saying HANA has mature DR/HA capabilities that give customers the confidence to send it to production? If that is the case, maybe SAP needs to shout that message out louder. This is the single biggest reason I heard from customers – at SAP events and outside – for why they don’t want to put HANA in production. This should not be left as a grey area – it needs to be crystal clear how HA works if people are to put critical functionality in HANA, beyond POCs.

But one thing needs to be clear: all this is my personal opinion and not an official statement of SAP or any kind of official SAP strategy. It’s just what I figure from my exposure to HANA.

That said, here comes my thinking:

@1: The HANA PoCs aren’t all non-SAP scenarios. The source data nearly always comes from an SAP NetWeaver system and is replicated (close to real time) into the HANA database. That’s the situation now, sure, but every implementation of these PoCs required a lot of work. I don’t think that most customers would want to do it that way. There’s so much “technical” stuff to code by hand right now. Just think of correctly handling odd ABAP datatypes like DATE or NUMC…
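To make the point concrete, here is a hedged sketch of the kind of hand-coded conversion the commenter is alluding to. ABAP stores DATE as an eight-character ‘YYYYMMDD’ string (with ‘00000000’ as the initial/empty value) and NUMC as a zero-padded numeric character field; the helper names below are invented for illustration.

```python
# Hand-coded conversion of ABAP-style values, of the sort replication
# projects end up writing. Function names are made up for illustration.
from datetime import date

def abap_date_to_iso(d):
    # ABAP DATE is 'YYYYMMDD'; '00000000' is the initial (empty) value.
    if d == "00000000" or not d.strip():
        return None
    return date(int(d[:4]), int(d[4:6]), int(d[6:8]))

def numc_to_int(n):
    # NUMC is a zero-padded numeric character field, e.g. '0000000042'.
    return int(n)

print(abap_date_to_iso("20110913"))  # 2011-09-13
print(abap_date_to_iso("00000000"))  # None
print(numc_to_int("0000000042"))     # 42
```

Multiply this by every date, time and NUMC field in a replicated table and the “lot of work” above becomes easy to believe.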

I believe that most customers would want to adopt a HANA system just like they are used to with ABAP systems.

@2: How close, how far? You know that there won’t be any release dates out before the software is actually ready. I’m just saying that this will come and that we’re not that far away.

In my view it’s not that hard to have NetWeaver running on HANA 1:1 like on a classic DBMS. That’s something everybody knows how it is supposed to work; errors are easy to spot and fix. However, I don’t think anybody would actually want to have NetWeaver run 1:1 like on a classic DBMS. I think you would want it to take advantage of the HANA features right away – to fully exploit it out of the box. THAT’S what will take time: re-thinking how the standard applications are implemented, re-designing what has worked for decades, and yet keeping it compatible, or at least on a doable upgrade path.

@3: I’m saying that HANA provides backup/recovery on a level that fits what most customers implement at their sites. Sure, there are customers that use advanced features in this area. But are these _most_ of the customers? NO. Clearly no (having worked in Oracle/MaxDB SAP support for the last few years, I DO have quite an insight into this).

Anyhow, right now HANA is not the main DBMS for any currently supported scenario. Backup/recovery development is ongoing, and the missing features are known to the development team.

So on this point I’d say the truth lies in the middle: it’s neither the case that there is no such feature as backup/recovery, nor is it comparable to the feature set of products that have been 30+ years in the making.

I’m a bit … disturbed by this comment about SQL support as well. I agree with Vijay that most (not all, but most) of the scenarios shown up to about 4 months ago involved ripping data and queries out of a source system (often non-SAP) database and transplanting them directly onto HANA.

I kept asking, “and you didn’t need to adjust the SQL query at all?”. SAP has consistently answered that adjusting SQL is not necessary. This type of scenario is only possible with good standard SQL support. Without that support you are looking at a much higher project cost and a longer timeline.

Interestingly, these “quick win”, generic reporting scenarios stopped being highlighted just about the same time that Sybase Replication Server fell out of favor. So now it is certainly possible that scenarios now are much more SAP-centric, but that wasn’t the case until a few months ago.

Well, not being fully compliant with the SQL standard (which standard version, by the way, do you need?) doesn’t mean that your old queries couldn’t be executed. Oracle didn’t support ANSI join syntax for years and years – and no application broke because of it. Oracle doesn’t do SCHEMAs – anybody hurt by that? Nope. That’s why I wrote that ANSI SQL compliance is a checkbox feature.
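The point that missing ANSI syntax need not break old queries can be illustrated with any SQL engine: the pre-ANSI comma join and the ANSI-92 JOIN … ON form return the same rows. A minimal sketch using SQLite, chosen only because it ships with Python – the tables are invented:

```python
# Demonstrates that join *syntax* and join *capability* are separate
# things: an old-style comma join and an ANSI-92 JOIN ... ON query
# produce identical results. Table names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'ACME'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1), (11, 2);
""")

old_style = conn.execute(
    "SELECT o.id, c.name FROM orders o, customers c "
    "WHERE o.customer_id = c.id ORDER BY o.id").fetchall()

ansi_style = conn.execute(
    "SELECT o.id, c.name FROM orders o JOIN customers c "
    "ON o.customer_id = c.id ORDER BY o.id").fetchall()

print(old_style == ansi_style)  # True – same rows either way
```

Outer joins are where the syntaxes genuinely diverge (Oracle’s old `(+)` notation versus `LEFT JOIN`), which is why “ANSI compliance” is a spectrum rather than a single checkbox.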

I’m not in the advertising group, so I wouldn’t subscribe to statements like “you don’t have to change anything”. As with all new technology, this is all about doing things differently. Actually, you wouldn’t want to keep doing mass data processing as you did before. You want to do it right in the HANA box, and probably not via slow-by-slow SQL statements but via procedure calls.

As I wrote – personally, I don’t see online syncing of large amounts of operational data from the main database into HANA, to run your reports off there, as the big thing here. Running your application natively on HANA and sparing yourself the replication will be. Maybe I’m wrong about this, but syncing is an option where you can either wait, or where not-quite-accurate values are acceptable – like in Google or Facebook. But if you cannot accept values that aren’t exact and up-to-the-second precise, then syncing is out of the question, unless you want to wait for it to finish.

And that’s why I don’t see these early scenarios as the killer apps for HANA. These are first-touch experiments – nothing more and nothing less.

Once again: all this is my personal and private view and opinion. I’m not working in HANA development, product management or sales – I’m working in support. So all I write may be the complete opposite of SAP’s official strategy on this.

cheers,
Lars

I think you give a realistic view of the situation, and I welcome that whole-heartedly. I’m just concerned (and have been since the beginning of the HANA discussion) that the gap between the reality and what customers and developers are being told and shown at events like Sapphire and TechEd appears to be rather large. I think John’s expectations, comments, and disappointment (such as it exists, which I’m not sure is much at all) above are probably based on hearing all of these promises.

Did I misunderstand what was highlighted at TechEd ’11 last week? The scenario SAP highlighted in the keynote was a non-SAP one: Oracle PL/SQL reports versus SAP HANA. And the customer who went live on Aug 20th was Nongfu Spring, China.

No, you didn’t misunderstand the presentation (at least not as far as I can tell :-).

I think right now we’re looking at the line that separates what’s technically possible from what’s feasible to implement and operate in a regular, somewhat conservative business IT landscape. HANA can already do many impressive things – in the hands of the few developers who have a lot of exposure to HANA development. One major goal of the next months/years will be to bring that potential to non-super-crack developers – to get the performance into the hands of ‘mere mortals’, as some guy from Cupertino might have put it. And this of course includes interfacing to other tools, support of standard maintenance and operations processes, documentation and so on.

As HANA is not yet, right now, the super-critical appliance to have in your company, it may be worth taking the time to dig out whatever materials on parallel processing and database programming in general are available.

Successful system implementations on HANA will rely on having the principles understood right – not on having the machine in your cellar six months earlier or later.

Based on other comments, I now understand what you mean by “ANSI SQL compliance is a check box feature”. I know that 15 years ago Informix provided ANSI SQL as a checkbox feature (non-ANSI was the default), and I don’t know of any customer who really wanted to use an ANSI database, given the effort involved. (One thing I clearly remember is the discussions we had in 1995 regarding BEGIN WORK/COMMIT WORK (ESQL-C/Informix 4GL) while discussing the option to become ANSI-compliant: BEGIN WORK is implicit in an ANSI DB, so COMMIT WORK is critical to end a transaction; if it is not used, the transaction will be rolled back.) At any rate, your discussions brought a smile to my face. Nostalgic days :)

Great comments although I think you’ve been drinking the company Kool-Aid too heavily!

1. Disk Storage for Archives – I mean the “Disk Based Store” and “Data Ageing Manager”, which will allow you to push archive data out of memory. This is proposed in HANA 1.0 SP03, which is set to be Generally Available in Q2 2012.

2. ANSI compliance. Mmm, a checkbox perhaps, but HANA is not yet a stable database like Oracle. Moreover, MDX is supported only for certain scenarios and certainly is NOT supported as a generic interface for 3rd-party products (which is kinda the point of MDX). I’m with Vijay in that HANA is designed to be used more generically.

3. Ability to run as a database for SAP Business Suite. Are you kidding me? You cannot use HANA as an RDBMS for any SAP NetWeaver platform yet – it will be available in HANA 1.0 SP03, with limited availability in November 2011. Its maturity is unknown.

4. Backup, disaster recovery and good administration tools. What is there is currently very basic. Point-in-time recovery? No. Integration with Tivoli or CA or…? No. Integration with Solution Manager? Basic. HANA is 1.0 so it’s OK that not everything is there yet but we should be clear that there is work to do.
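The “Disk Based Store” / “Data Ageing Manager” idea in point 1 – pushing cold data out of memory while keeping it queryable – can be sketched roughly like this. A toy illustration only: the class and method names are invented and say nothing about HANA’s actual design.

```python
# Toy sketch of data ageing: cold partitions are evicted from the
# in-memory "hot" store to a disk store, and read back transparently
# when queried. Names are invented; this is not HANA's actual API.
import json, os, tempfile

class AgedTable:
    def __init__(self, hot_limit):
        self.hot = {}                      # partition key -> rows, in memory
        self.cold_dir = tempfile.mkdtemp() # disk-based store for aged data
        self.hot_limit = hot_limit

    def insert(self, partition, rows):
        self.hot.setdefault(partition, []).extend(rows)
        self._age_out()

    def _age_out(self):
        # Evict the oldest partitions once the hot store grows too big.
        while len(self.hot) > self.hot_limit:
            oldest = min(self.hot)
            path = os.path.join(self.cold_dir, f"{oldest}.json")
            with open(path, "w") as f:
                json.dump(self.hot.pop(oldest), f)

    def read(self, partition):
        # Serve from memory if hot, otherwise fall back to the disk store.
        if partition in self.hot:
            return self.hot[partition]
        path = os.path.join(self.cold_dir, f"{partition}.json")
        with open(path) as f:
            return json.load(f)

t = AgedTable(hot_limit=2)
t.insert(2010, ["old order"])
t.insert(2011, ["recent order"])
t.insert(2012, ["new order"])
print(t.read(2010))  # ['old order'] – served from the disk store
```

The point of such a scheme is that archive data stops consuming expensive RAM while staying addressable through the same interface – which is what would make a disk-based store a real database feature rather than an external archive.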

On the point of the larger vision I think that most people currently fall into your line of thinking, and I know I’m being slightly controversial with my views. But I genuinely believe that SAP HANA could be a more generic database, used as the foundation application platform for consolidation. And this is a big potential threat to both Oracle and Microsoft.

Hmm… you’re the first to call me an advertising sucker – but it’s interesting if I come across as one.

In fact the opposite is true – but I don’t see all this as negatively as you seem to. It took years for all the major DBMSs to come somewhat close to being able to successfully run NetWeaver applications. And we will be there in, what, half a year after GA? Come on – that IS pretty good.

The same is true for backup/recovery. Yes, there is LOTS of space to fill. But we know the space. The same people that developed storage/backup/recovery/monitoring for MaxDB/liveCache are developing these features in HANA. Sure, it has to be done; the code has to be designed and written. But it’s not like we’re waking up in the middle of a nice marketing dream and realising “hey, wow, we should throw in some real-world features as well” 😉

Concerning the MDX restrictions – I don’t consider them technical restrictions but rather support-strategy ones. Supporting arbitrary queries – especially at the current phase of development – doesn’t help. It only makes things unnecessarily complex and would waste time, all for basically no advantage to the end user. Oracle’s optimizer development team is flooded with arbitrary query options – and doesn’t seem to get out of the bug tar pit. Working in BW/HANA support, and having worked in Oracle support for many years, I still find this to be a clever decision.

And the business opportunity? Well, THERE I’m pessimistic. Oracle has been selling the ONE database for years – still, customers run instances upon instances, and I don’t see Exadata changing this a lot. When SAP started to invest in MaxDB, everybody was speculating about our entry into the DBMS vendor market. It never happened. We bought Sybase and kept it pretty much self-contained (that is, we didn’t assimilate it like BO). So I have yet to see where SAP starts to challenge Oracle, IBM and MS on their very own playing field.

I think there is at least one more logical alternative (other than SAP becoming a hardware vendor, that is), and that is for SAP to give up on the “appliance” model for selling HANA, and to just license the software and support it against a defined hardware specification.

You’ve argued yourself (in another blog) that the current certification process for HANA doesn’t make any sense, and I agree. So if this process were replaced by specification-based support, then customers could continue to happily choose their preferred hardware spec and continue to deploy SAP software (including HANA) on it.

I also think we need to be careful when we talk about removing layers. The only layers that HANA is going to remove are the ones that are rendered unnecessary by the increased power of this new platform. The remaining layers may well become virtual and some might even share the same physical layer (e.g. application and DB) but they will remain because they represent something useful.