Why SAP must acquire a hardware vendor for In-Memory

8 June 2011

John Appleby

Global Head of Sales

At SAP's SAPPHIRE conference in Orlando in May 2011, I met with Ike Nassi, who holds the august title of Chief Scientist for SAP. He runs SAP's In-Memory Database (IMDB) hardware program and was charged with building the original hardware prototypes for the underlying database upon which HANA now runs.

He is a deeply interesting individual, and spending time with him reminded me of my own mentors at university - some of the late, great computer scientists like Roger Needham and David Wheeler. There is a certain analytical style and restless curiosity there which seem to have been lost in later generations of computer scientists.

Ike built the VAX for Digital and he's been in and out of retirement for years. It's pretty fitting that he should come back to run this particular program for SAP - and how different it must be from building the VAX. These days the hardware that runs SAP's IMDB software is commodity equipment that can be bought off the shelf, which is what he's been doing for the last few years - ducking in and out of Walmart.

How commodity is commodity?

Architecturally it's the same stuff that's in your laptop - Intel's x86 range of processors, whose lineage stretches back through the early IBM PCs to the 8086 of the late 70s. In real terms, though, this stuff is pretty expensive, because IMDB requires (the clue's in the name, right?) lots of fast memory.

Early versions of HANA 1.0 require rack-based equipment, but it's certain that later versions of IMDB will run on blade systems - high-density servers that fit 8-10 units in an 18"-high enclosure. Memory might only cost $100 per GB now, but for a high-density blade with 2TB of main memory, that means some $200k of memory alone. A fully populated IMDB blade enclosure with 20TB of memory can easily cost $3m.

So whilst it is commodity hardware, the costs can ramp up for large-scale systems. It's worth bearing in mind, though, that IMDB achieves roughly 10:1 compression compared to, say, Oracle: a 2TB IMDB system might hold the same information as a 20TB Oracle database.
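To make the arithmetic concrete, here is a quick back-of-envelope sketch in Python, using only the round numbers quoted above ($100/GB memory, 2TB blades, 10 blades per enclosure, 10:1 compression). The figures are illustrative 2011-era estimates, not a vendor price list:

# Back-of-envelope costings from the round numbers quoted above.
# All figures are illustrative estimates, not a price list.
MEMORY_COST_PER_GB = 100      # USD per GB
BLADE_RAM_TB = 2              # main memory per high-density blade
BLADES_PER_ENCLOSURE = 10     # 8-10 blades per enclosure; take the top end
COMPRESSION_RATIO = 10        # IMDB vs. a row store such as Oracle

blade_memory_cost = BLADE_RAM_TB * 1024 * MEMORY_COST_PER_GB
enclosure_ram_tb = BLADE_RAM_TB * BLADES_PER_ENCLOSURE
enclosure_memory_cost = blade_memory_cost * BLADES_PER_ENCLOSURE
effective_capacity_tb = enclosure_ram_tb * COMPRESSION_RATIO

print(f"Memory per blade: ${blade_memory_cost:,}")          # ~$205k
print(f"Enclosure: {enclosure_ram_tb}TB RAM, ~${enclosure_memory_cost:,} in memory alone")
print(f"Holds roughly {effective_capacity_tb}TB of uncompressed row-store data")

The $3m figure above is plausible once you add the blades, the enclosure, networking and support on top of the roughly $2m of memory.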

But SAP doesn't have a stack, right?

The chairman of SAP's Supervisory Board, co-founder Hasso Plattner, has always maintained that SAP is not building a stack - partly to take a poke at competitor Oracle (which acquired Sun Microsystems) and partly because it has historically been true.

SAP's flagship R/3 and Business Suite products will run on anything you like. The laptop I am penning this blog on has a copy running on it; it runs on IBM's iSeries behemoths and on everything in between - almost any hardware, and all the major databases from Microsoft, IBM, Oracle and (soon) Sybase.

But Hasso's argument is nonsensical in the context of HANA. SAP is implicitly building a stack with HANA, because HANA only runs on one hardware platform (Intel x86) and one database (SAP IMDB). Several key executives slipped up and referred to "the SAP stack" at SAPPHIRE, which is testament to the "no stack" argument falling apart. It is true that IMDB still runs on equipment sold by the major vendors - HP, IBM, etc. - but that is just because they all offer equipment based on the x86 platform.

How is SAP influencing Intel's server strategy?

Well, it's worth considering that SAP is a mere fly to Intel. They are still a pretty small consumer of Intel chips - partly because a lot of SAP's install base, particularly the larger customers, buys specialist equipment still running on UNIX platforms from IBM, Sun and HP, most of which does not bring revenue to Intel.

But Ike said he met with Intel engineers a few years ago and, when asked what they should be doing, replied that they needed to increase the ratio of RAM to cores. Let's consider my laptop here. It has 4 cores and 8GB RAM - a cores-to-RAM ratio of 1:2, which is pretty typical in the PC market. Three years ago the top-end x86 equipment ran to 8 cores and 64GB RAM - a ratio of 1:8. We are now up to 24 cores and 1TB RAM - a ratio of roughly 1:42.

And yet SAP are finding that IMDB is memory-constrained - the CPUs are not being fully worked in most applications, and they need more RAM. Higher-density memory coming later in 2011 should bring 2TB of RAM, which should help. But there are very few technologies that need this much memory, and neither Intel nor the hardware vendors are really pushing for it.
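For what it's worth, the trend in those three data points is easy to check - a tiny Python sketch using the figures quoted above:

# Cores-to-RAM ratios from the figures quoted above (GB of RAM per core).
systems = [
    ("Typical laptop, 2011", 4, 8),     # 4 cores, 8GB
    ("Top-end x86, ~2008", 8, 64),      # 8 cores, 64GB
    ("Top-end x86, 2011", 24, 1024),    # 24 cores, 1TB
]
for name, cores, ram_gb in systems:
    print(f"{name}: {cores} cores, {ram_gb}GB RAM -> ratio 1:{ram_gb // cores}")

That is roughly a twenty-fold increase in RAM per core in a handful of years, and in-memory databases are one of the few workloads asking for still more.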

What is SAP doing in the hardware space?

Ike was necessarily a bit tight-lipped about what he will be doing in the future, but it is clear that SAP is investing in some R&D here - not to the extent that IBM and HP are, but they have a new datacenter where they have built out some new IMDB appliances.

And the key is that they're not just buying up commodity equipment any more - they are getting knee-deep in the details. The team is trying to find out what equipment characteristics make IMDB fly, and they are reaching some interesting conclusions. For instance, they found they can supercharge IMDB by putting an additional level of ultra-high-speed memory in the hardware.

This is interesting because that capability isn't available in the equipment of any of the major vendors - only in the equipment that SAP is building out itself. And in the context of the sort of investment that IMDB appliances require, a 10-15% increase in performance is significant.
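As a rough illustration of why an extra memory tier can move the needle, here is a toy average-memory-access-time model in Python. The latencies and hit rate are invented for the sake of the example - they are not figures from SAP's lab:

# Toy AMAT (average memory access time) model of adding an ultra-fast
# memory tier in front of main DRAM. All numbers are invented for
# illustration; they are not SAP's measured figures.
DRAM_NS = 100        # hypothetical main-memory latency, nanoseconds
FAST_TIER_NS = 20    # hypothetical ultra-fast tier latency
HIT_RATE = 0.5       # fraction of accesses served by the fast tier

baseline = DRAM_NS
with_tier = HIT_RATE * FAST_TIER_NS + (1 - HIT_RATE) * DRAM_NS
speedup_pct = (1 - with_tier / baseline) * 100
print(f"Average access: {baseline}ns -> {with_tier:.0f}ns ({speedup_pct:.0f}% faster)")

Even a modest hit rate against a faster tier cuts the average access time for a memory-bound workload, which is presumably the kind of effect SAP's team is exploiting.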

What about the timeline for moving applications to IMDB?

This has been covered to death by other people, but the big prize is to run SAP's Business Suite on IMDB, which would be an Oracle killer. In theory we are already there - SAP are migrating Business Suite customers onto IMDB in the labs - but the reality is we are still a few years away.

By the end of 2011 we will see the very first NetWeaver BW Data Warehouse customers move onto HANA 1.0 SP03 (formerly known as HANA 1.5 or 1.2), which will be based on SAP's IMDB technology. BW benefits so much from the performance of IMDB that customers will adopt it early in the product lifecycle, while it is still immature. BW doesn't affect the core business for many customers, so that will be a risk worth taking.

It will take a further 2-3 years for IMDB to mature from a stability, performance and tooling perspective before the less risk-averse customers who rely on SAP's Business Suite for their core business will move - and probably years longer than that for the more risk-averse customers who cannot accept any loss of reliability. This isn't a criticism of SAP's strategy, but rather a reality check on how long it takes to bring a database to market.

Why does SAP have to acquire and what is the impact?

Whatever Ike's team builds out will be a toy in the lab, and I don't think he has any pretensions otherwise. When I pushed him on whether SAP would go further, he didn't give a straight reply, but rather suggested that SAP had no plans to do so in the short term.

Which makes sense - SAP doesn't have a proper IMDB product yet and building its own hardware would cement the idea of the stack and deeply upset relationships with good friends IBM and HP in particular. Building hardware now would be suicide and serve no purpose.

But IBM and HP seem unlikely to build out IMDB-specific equipment, and SAP need the performance boost of a tailored architecture - in time for when customers start to move their Business Suite systems onto the in-memory platform. This is likely to be 3-4 years in the future, based on the maturity of the existing IMDB platform and what industry analysts are thinking.

In order to build this tailored platform, SAP will need someone who knows how to build servers in volume, how to distribute them and how to support them. This is a specialist business, and SAP won't want to build that capability from scratch: it would be a distraction.

So for my money they have to acquire, and I suppose the question is: who? This is rampant speculation, but "who not" is much easier - it won't be one of the big guys, because SAP can't afford them. What's more, SAP would only buy a server vendor - they have no interest in PC sales - which narrows it down to one of a few players, or perhaps the server arm of an existing HANA hardware partner like Fujitsu. We will see.


About the author

John Appleby

Global Head of Sales

I've always been passionate about new technology. I cut my teeth helping global financial services and consumer goods companies build data warehouses to manage their business - especially when they wanted to run faster.

These days, I travel weekly across continents, helping clients differentiate themselves using analytics technologies. This often involves building a team to design the solution, make it work and lead it through to successful completion.

I believe that in-memory computing is radically changing the face of business so you can ask me about SAP HANA, or about any other data platform like DB2 BLU, Hadoop or MongoDB for that matter.

I'm passionate that giving back to the community reaps long-term rewards and this has shaped the last few years of my career - being a contributor to knowledge sharing websites as an SAP Mentor; and a sometime advisor to Wall Street investors on emerging technologies and those companies bringing to market new innovations.

When I'm not busy designing in-memory apps, you may find me pounding the pavement to the beat of music in the hilly suburbs of Philadelphia, or traveling the world to meet new people and new cultures.

