
Section 6 – HTAP, OLAP vs OLTP SAP Application Throughput, Optimizations

Historically, for SAP Business Suite / ERP / OLTP systems we have all used the SAP Sales and Distribution (SD) 2-tier benchmark results, both for sizing the significant majority of SAP workloads and to enable “common currency” comparisons of relative SAP server and database throughput at peak 98-99% utilisation levels.

Subsequently SAP introduced the SAP BW Enhanced Mixed Load (EML) benchmark, which in turn has recently been replaced by the new SAP BW Advanced Mixed Load (AML) benchmark; in my view both are aimed at OLAP-orientated Business Intelligence (BI) HANA-based workloads.

There has been a significant absence of published SAP SD results for the SAP HANA database platform, whilst SAP SD results continue to be published for Sybase ASE and/or DB2 10.5 etc.

In my experience, when large Enterprise clients execute intense SAP NetWeaver / ECC transactional (OLTP) and/or batch workloads, whether SAP SD order-entry transactions or a representative mixture of client-customised and SAP-optimised OLTP transactions, it becomes clear very quickly that traditional, mature and optimised row-orientated database platforms like DB2 10.5 offer significant performance, throughput and efficiency benefits. Indeed, this short YouTube video from Coca-Cola Bottling Co highlights very significant improvements in both SAP transactional and batch throughput, whilst concurrently saving ~$1m in TCO through enhanced rates of SAP DB2 data compression.

Conversely, running an identical SAP SD-like workload “side by side” on both DB2 and SAP HANA, with the same SAP application and database server resources, simply served to highlight the significant “write” (single SQL insert, update and/or delete) penalties associated with running existing customized SAP NetWeaver OLTP / transactional workloads against columnar in-memory data stores vs prior row-optimized SAP NetWeaver rdbms platforms.

With the availability of SAP HANA on Linux on POWER8 (LoP), it is also possible to run a representative “side by side” set of SAP BW 7.4 OLAP queries and reports over both SAP HANA and DB2 10.5 with BLU, using identical fully virtualised SAP BW application server capacity.

These results are indeed very interesting: they confirm the benefits of a columnar in-memory strategy for OLAP workloads, whilst also clearly demonstrating the efficiency and maturity of the existing DB2 query optimiser and of multi-threaded, multi-core workload distribution and management with DB2 BLU compared to alternatives like SAP HANA.

Indeed, it was possible to observe both superior scaling and throughput with DB2 BLU as the workload concurrency and complexity increased, whilst using ~50% of the configured database server memory capacity and identical SAP BW application server resources.

It was also clear, as described in a prior section, that SAP BW 7.4 “Flat InfoCubes” and/or semantically partitioned Flat InfoCubes provided a significant throughput gain on an in-memory columnar platform vs traditional row-orientated relational platforms (even parallel ones like DB2 DPF).

Personally, unless a new SAP S/4 HANA “read optimised” application template has been deployed, I view Suite on HANA (SoH) simply as a rather uncomfortable mismatch and “half way house” in application and platform technology terms, one that in my view should be avoided if possible.

I have produced the following chart to highlight my viewpoint in this area:

Throughput Choices 260816

Recently I was also sent a link to a related item on LinkedIn by Shaun Snapp; this item highlights many of the concerns and questions that I also have about the principle of a “one columnar size fits all” SAP HANA workload and platform strategy.

Indeed some observers would suggest this is being driven as much by SAP SE’s commercial desire to displace existing proven SAP NetWeaver rdbms choices like DB2 10.5 and/or Oracle 12c with their own rdbms platform, irrespective of the benefits or otherwise for their major existing SAP Business Suite clients.

My input to existing large Enterprise SAP Business Suite clients with significant, intense and business-critical OLTP workloads would be to ask SAP SE for guarantees that a representative set of critical SAP OLTP and/or batch transactions will perform at a similar or higher level whilst using a similar amount of SAP platform capacity, understanding that significant increases in core count and memory capacity to “throw in-memory columnar iron” at an OLTP problem can have very unwelcome real TCO increases and really hurt prior DC efficiency / Green IT strategies and KPIs.

Disclaimer – This blog represents the author’s own views vs a formal IBM point of view

The views expressed in this blog are the author’s and do not represent a formal IBM point of view. They do represent an aggregate of many years (20+) of successful ERP / SAP platform deployment and IT strategy development experience, supplemented with many hours of reading respective DB2 and/or SAP HANA roadmap materials and presentations at various user conferences and/or user groups, in addition to carefully reading input from a range of respected industry / database analyst sources (these sources are respected and quoted).


In-memory marketing hype vs reality – Hype Busting

In this section 5, let us briefly look at some in-memory marketing hype vs reality, to see if the claims really stack up and what alternatives exist for clients who are worried about the disruption, maturity, risks and commercial lock-in of the new SAP S/4 HANA, SoH and/or SAP BW HANA platform strategy.

This section could also be called a degree of “hype busting”, as we likely need to clearly separate the excellent and pervasive marketing from the technical and solution-deliverable reality.

Is SAP HANA your destination ?
For the more technically minded reading this item, we shall now drop into some relatively technical discussions related to relational databases and systems design. I make no apologies for doing this, as it’s important to help reset or gently correct a number of the relative benefits and themes that are normally associated with SAP HANA and/or S/4 HANA “Digital Core” presentations, including at recent Sapphire and/or SAP TechEd conferences.

Where are we now in my view with respect to SAP S/4 HANA adoption rates vs a Gartner Type Hype curve:
Gartner Hype Curve

 

In this case I’ll use IBM’s DB2 SAP-optimized data platform as a point of reference. It’s not that Oracle 12c SAP “AnyDB” platform choices don’t share a number of similar capabilities (I’d naturally say we do it better, more efficiently etc); it’s just that it would be rather technically presumptuous of me to try to represent Oracle’s 12c in-memory cache capabilities without sitting down with them to understand Oracle 12c and its ongoing development roadmap vs SAP HANA in greater detail for SAP NetWeaver and/or SAP BW 7.x workloads.

That also assumes SAP SE commercially actually wants to best leverage and/or enable these AnyDB capabilities (or not); hence I won’t attempt to do this in this item.

 “In-Memory” Columnar Myth / Hype Busting – Number 1

Firstly, I know it sounds obvious, but all databases run in computer memory. We are really simply discussing whether the database is organized in a columnar relational form (ideal for analytical / OLAP “multi SQL select” read-orientated SQL workloads) or in a row relational form, which is typically used for demanding transactional (OLTP) workloads with higher volumes of “single SQL select, insert, update and/or delete” and often row-based batch updates; let’s call these the more traditional read / write OLTP workloads.

Read / write ratios of 70/30, 80/20 or 90/10 are common, with higher write ratios typically observed for demanding OLTP, batch, planning (SCM) and/or MRP manufacturing workloads.
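To make the row vs columnar trade-off concrete, here is a minimal, illustrative Python sketch; it is a teaching toy under my own simplifying assumptions, and emphatically not how DB2 BLU, SAP HANA or any real engine is implemented:

```python
# Toy illustration of row vs columnar organisation (not a real engine).

# Row store: each record is held together, so an OLTP-style single-row
# insert touches exactly one place.
row_store = [
    (1001, "ACME", 250.0),    # (order_id, customer, value)
    (1002, "GLOBEX", 120.0),
]

# Column store: each attribute is held together, so an OLAP-style scan
# of one column never touches the other columns.
col_store = {
    "order_id": [1001, 1002],
    "customer": ["ACME", "GLOBEX"],
    "value":    [250.0, 120.0],
}

# OLTP-style write: one append for the row store...
row_store.append((1003, "INITECH", 75.0))

# ...but one append per column for the column store (the "write penalty").
for col, val in zip(("order_id", "customer", "value"), (1003, "INITECH", 75.0)):
    col_store[col].append(val)

# OLAP-style read: aggregate one column.
print(sum(col_store["value"]))        # column store: one contiguous list scan
print(sum(r[2] for r in row_store))   # row store: must walk every whole record
```

The same asymmetry, multiplied up to billions of rows, compression dictionaries and CPU cache behaviour, is what sits behind the OLTP write penalties and OLAP read gains discussed in this section.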

Indeed, the IBM DB2 10.5 BLU “in-memory” columnar capabilities are named after an IBM Research project (“Blink Ultra”) at IBM’s US West Coast Almaden Labs in 2007/8, which effectively observed that by converting prior relational rows to columns in memory, SQL query speed-ups of up to 80 times could be achieved for more demanding OLAP / analytical queries.

A detailed research paper from Guy Lohman and his team at IBM Almaden from 2007/8 can be found here, if required.

It’s also true that with DB2 LUW (and/or DB2 on z/OS) IBM has spent many years optimizing the use of relatively moderate amounts of DB2 database cache (DB2 buffer pools) and systems memory to provide optimal throughput with justifiable levels of systems platform memory investment, whilst persisting data to disk / SAN storage and sustaining ACID database transactional consistency.

Hence the idea that any one vendor has unique technology in this area is largely marketing hype from my point of view; for sure, a particular vendor has marketed this capability very effectively, whilst IBM has been less effective with the marketing and likely more effective with an evolutionary, non-disruptive deliverable.

For examples of this DB2 + SAP BW deliverable, refer to a couple of summary YouTube videos at Yazaki (a large, privately owned Japanese manufacturer of custom auto wiring looms) and at Knorr-Bremse, a large manufacturer of advanced braking systems for trains etc.

Yazaki and Knorr Bremse – SAP BW plus DB2 10.5 BLU videos

In-Memory “Commodity Computing, Multi Core is cheap” Myth / Hype Busting – Number 2

DB2 10.5 LUW (Linux, Unix, Windows) has been optimized to take advantage of the more recent multi-core processor architectures, including both Intel Xeon and POWER (AIX, Linux, IBM i) based architectures, whilst offering a choice of operating system support with ongoing SAP ERP / SAP NetWeaver 7.40 and 7.50 certification, optimization and support through to 2025.

If, for example, we consider the proven and mature Simultaneous Multi-Threading (SMT) capabilities of the IBM PowerVM hypervisor with either AIX / Unix and/or Linux, these proven capabilities have been extended over time to provide options to switch between one, two, four or eight threads per core to best match the application workload instruction flow, which is then assigned and executed on multiple CPU cores (up to 12 per socket).

This helps to both increase application throughput and increase IT asset utilization levels.

Indeed, in recent IBM Boeblingen lab tests with DB2 and BLU we tested the relative benefits of SMT 1, 2, 4 and 8 for a SAP BW 7.3/7.4 analytical workload. It was clear during these tests that, for this particular workload, SMT 4 provided the optimal balance of throughput and server / IT asset utilization (CPU capacity, cycle and thread utilization), whilst avoiding the excessive “time slice” based hypervisor thread switching that can significantly hamper the throughput of alternative, less efficient hypervisors serving the Intel / Linux or “WINTEL” market demand.

Typically with IBM POWER, for DB2 10.5, v11.1 and/or HANA on POWER, we observe around 1.6-1.8 times greater throughput per POWER8 core (vs alternative Intel processors), supported in balanced systems design terms by roughly 4 times the memory and/or IO throughput compared to alternative processor architectures.
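As a purely illustrative back-of-envelope (the SAPS figures below are invented assumptions, not benchmark results), that per-core ratio translates into core counts roughly as follows:

```python
# Hypothetical core-count comparison using the x1.6-x1.8 per-core ratio
# quoted above; every figure here is an assumption for illustration only.

required_saps = 200_000           # assumed sized SAP workload
saps_per_intel_core = 1_500       # assumed baseline per-core throughput
power_ratio = 1.7                 # mid-point of the quoted x1.6-x1.8 range

intel_cores = required_saps / saps_per_intel_core
power_cores = required_saps / (saps_per_intel_core * power_ratio)

print(f"Intel cores needed : {intel_cores:.0f}")    # ~133
print(f"POWER8 cores needed: {power_cores:.0f}")    # ~78
```

Fewer cores for the same throughput matters later in this section, when we come to data centre power, cooling and PUE.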

For example, if you have a demanding SAP IS Utilities daily, monthly or quarterly billing batch run for tens of thousands of your utility customers with SAP ERP 6.0 / SAP NetWeaver, the combination of DB2 10.5 and POWER8 with AIX 7.1 (and/or Linux) is really very hard to beat in batch throughput, availability, reliability and delivered IT SLA / data centre efficiency terms.

In parallel, considerable and ongoing DB2 development lab efforts have resulted in DB2 10.5 SAP platform solutions that also fully leverage modern “commodity” Intel multi-core CPU architectures; hence this is not a capability unique to the SAP HANA rdbms by any means.

During mixed SAP or other ISV application workload testing it’s true to say that some ERP / ISV applications better exploit multi-threaded CPU architectures and modern OS Hypervisors than others.

This remains as true for the various SAP Business Suite / SAP NetWeaver (or indeed S/4 HANA) workloads as for other ISV workloads, where multi-threaded application re-engineering and optimization typically takes many months and/or many man-years of effort. Indeed, at one SAPPHIRE (2014) Hasso Plattner (co-founder and Chair of the SAP Supervisory Board) reflected on the significant and ongoing effort to re-optimize many millions of lines of ABAP code in the existing SAP NetWeaver core platform for S/4 HANA, in addition to the subsequent CDS “push down” initiatives briefly mentioned before.

Also, as previously mentioned in my prior Walldorf to West Coast blog, I’m rather reserved about the later upgrade complexity and costs I’d previously observed in a Retek / Oracle Retail scenario, which pushed retail merchandising replenishment (RMS) functionality down from the client-specific application configuration, through the Oracle application tier, into the Oracle 10g rdbms tier, leveraging PL/SQL stored procedures.

For sure this helped to speed up key replenishment batch runs vs prior IBM DB2 or IMS based mainframe platforms, however with the later penalty that the overall Retek RMS or WMS solution stack became very tightly coupled and interdependent in version terms.

It also essentially limited (like SAP HANA) the Oracle Retail / Retek platform rdbms choice to one only, where later application version upgrades were really very significant “re-implementations”; conversely, the prior segregation and separation of application and rdbms duties in SAP IS Retail / NetWeaver helped to reduce or mitigate this issue.

Hence, in this case, the structured development and enablement of SAP’s Core Data Services (CDS) interface between the application and deeper database functionality becomes vital for SAP clients.

It’s also true to say that the functional depth and breadth of capabilities being built into SAP HANA is very impressive; however, this does mean a high rate of change, patching and version upgrades that in turn will need to be aligned to Vora / Hadoop platform versions.

In an Intel environment, DB2 10.5 and/or 11.1 LUW also naturally leverages Intel / Linux and/or Windows “Hyper-Threading” (typically dual threads per physical processor core).

In my view the myth here is that, per se, Intel multi-core architectures are inherently cheaper than alternative mature Type 1 (or Type 2) hypervisor implementations on IBM POWER or IBM System z (refer to this item for a summary of the difference between Type 1 and Type 2 hypervisors).

For example, within IBM we internally consolidated many thousands of prior distributed Unix / AIX / Linux systems and applications onto a limited number of large IBM System z servers running Linux with a highly efficient and mature Type 1 hypervisor; this was in fact significantly cheaper, and considerably more efficient in Green IT and DC PUE terms, than the alternative distributed computing options.

I’m not saying here that Intel / VMware ESX or Linux based hypervisor solutions don’t also provide considerable IT efficiency and platform virtualization opportunities, they do, it’s just that I rarely favour “a one size fits all” binary IT platform strategy.

In my experience a single platform strategy rarely works for the largest global Enterprises (it’s likely rather different for small and medium sized enterprises).

Typically, implementing a “one size fits all” strategy forces rather uncomfortable compromises for very large Enterprise-scale clients, who naturally both virtualize and tier their server and storage platforms (increasingly also in hybrid cloud deployment patterns) to match the requirements of different workloads, delivered business-driven IT SLAs, and real-life practical TCA / TCO and cost / benefit outcomes.

For sure it’s relatively easy to compare an older or partially virtualized Unix / Oracle environment with a fully virtualized x86 Intel VMware / Linux scenario (or Intel / Linux cloud) and demonstrate TCO / TCA savings; however, these often tend to be rather misleading “apples and pears” comparisons, vs comparing one rdbms platform under load against another on the same platform and operating system for the same set of OLAP or OLTP workloads (a much more balanced comparison).

The intense IBM focus is really on the most efficient use of the available systems resources (cores, memory and IO), in combination with increased IT agility and responsiveness, to help optimise Enterprise data centre efficiency (some call this Green IT) whilst minimizing the required input power (often measured in megawatts) for larger DCs, as measured by the data centre efficiency ratio (PUE).

In this area, with the significant consumption of many GB and/or TB of RAM and many thousands of cores (for large Enterprise SAP landscape deployments), the SAP HANA architecture can be very costly indeed in DC efficiency terms, in particular given the limitations currently associated with the virtualization of on-premise HANA production environments.

In-Memory “Data Compression Rate and TCO Savings” Myth Busting – Number 3

With DB2 10.5 “Adaptive and Actionable” compression we often observe and sustain 75-85% rates of DB2 DATA compression (call it a 5:1 compression ratio).

In particular, with DB2 BLU columnar conversion of targeted SAP BW tables we leverage advanced Huffman encoding, in addition to significantly reducing the requirement for aggregates and indexes, resulting in compression rates of 85-90% or more (vs prior uncompressed baselines) depending on the specific nature of the client’s SAP BW 7.x tables.
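To illustrate why frequency-based (Huffman-style) encoding suits typical low-cardinality SAP columns, here is a minimal Python sketch of Huffman code lengths; this is a teaching toy only, not the actual DB2 BLU encoding implementation:

```python
# Toy Huffman code-length calculation: frequent column values get short
# codes, which is why repetitive columns compress so well.
import heapq
from collections import Counter

column = ["DE", "DE", "DE", "US", "DE", "GB", "DE", "US"]  # low-cardinality column

def huffman_code_lengths(values):
    """Return {value: code length in bits} for a classic Huffman tree."""
    heap = [(freq, i, {v: 0}) for i, (v, freq) in enumerate(Counter(values).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)   # merge the two rarest subtrees...
        f2, _, d2 = heapq.heappop(heap)
        merged = {v: depth + 1 for v, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next_id, merged))  # ...one level deeper
        next_id += 1
    return heap[0][2]

lengths = huffman_code_lengths(column)
encoded_bits = sum(lengths[v] for v in column)
print(lengths)                       # e.g. {'DE': 1, 'US': 2, 'GB': 2}
print(encoded_bits, "bits vs", 16 * len(column), "bits for fixed 2-byte codes")
```

Here the eight values encode in 11 bits instead of 128; real columnar engines combine this style of encoding with dictionaries, run-length encoding and the ability to operate directly on the compressed data (“actionable” compression).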

For SAP BW with HANA, ratios of 3.5-4:1 vs uncompressed may typically be observed (depending on client data etc).

Hence, in these scenarios, clients implementing SAP HANA columnar strategies will actually likely observe a reduction in compression rates if they are already using either DB2 10.5 adaptive and/or DB2 10.5 BLU actionable compression with SAP BW 7.x.

This is in addition to “doubling up” the required memory for SAP HANA working space, whilst sizing combined SSD / HDD (solid state or hard disk drive) storage at five times the compressed data volume, for HANA database persistence (x4) and HANA logs (x1).

In these scenarios the client will actually observe a significant net increase in SSD / HDD or SAP HANA TDI SAN-based storage capacity, not a reduction as often claimed in SAP HANA marketing presentations and brochures, in particular when these differences are multiplied up over the multiple SAP environments of a real-life SAP landscape (Dev, QAS, Production, dual-site DR, Pre-Production, Training etc, operating in either a dual or single track landscape on the path to production from Sandpit / Development, through QAS, Pre-Production and Regression, to Production).
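Pulling the ratios quoted above into one rough worked example (a sketch only; every figure below is an assumption that a real client sizing exercise would replace):

```python
# Rough sizing sketch using the compression and sizing ratios quoted in
# this section; all inputs are illustrative and client-data dependent.

uncompressed_tb = 20.0                        # hypothetical raw SAP BW data

db2_blu_tb   = uncompressed_tb * (1 - 0.875)  # 85-90% compression, ~8:1
hana_data_tb = uncompressed_tb / 3.75         # quoted HANA 3.5-4:1 ratio

# HANA rules of thumb quoted above: RAM ~ 2x compressed data (working
# space), disk ~ 5x compressed data (4x persistence + 1x logs).
hana_ram_tb  = hana_data_tb * 2
hana_disk_tb = hana_data_tb * 5

print(f"DB2 BLU on-disk footprint : {db2_blu_tb:5.1f} TB")
print(f"HANA compressed data      : {hana_data_tb:5.1f} TB")
print(f"HANA memory to size       : {hana_ram_tb:5.1f} TB")
print(f"HANA SSD/HDD to size      : {hana_disk_tb:5.1f} TB")
```

On these illustrative numbers the HANA storage footprint (~27 TB) lands an order of magnitude above the BLU footprint (~2.5 TB), before any landscape multipliers are applied.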

Naturally, for both SAP HANA and DB2 10.5 BLU with SAP BW, clients can complete older-data housekeeping and/or BW NLS archiving; in the case of DB2 BLU using a common BLU NLS archiving capability, whilst for SAP HANA + BW it’s currently Sybase IQ based BW NLS archiving, potentially Hadoop / Vora in the future.

This diagram is often presented at SAP Sapphire and/or TechEd conferences to summarize the target, potential SAP HANA storage savings with Suite on HANA for SAP’s own ERP deployment.

HANA Storage Reduction Screen Shot

However, what is rarely mentioned is that ~3.x TB of older table data from a prior Oracle to DB2 (over HP-UX, Superdome) SAP ERP migration was removed (“cleaned up” in housekeeping terms) in advance of the ERP on HANA migration. This gives a very different picture of the realized data compression rates vs a prior, partially compressed, older DB2 environment; however, that would rather spoil a good SAP HANA marketing chart!

Naturally, the additional new hardware capacity investment required is then of significant commercial interest to the multitude of Intel / SAP HANA platform providers (either HANA appliance and/or consolidated TDI SAN based).

For example, I created this simple chart to reflect the relative SAP DB2 BLU and SAP HANA memory sizing ranges, understanding that for both technologies the outcome is also dependent on the individual client’s table data.

Relative SAP HANA vs BLU memory sizing

In my experience this seems to have turned into a bit of a server and storage hardware vendor feeding frenzy with multiple h/w vendors rushing to endorse the whole SAP S/4 HANA adoption story for obvious reasons whilst largely ignoring prior existing, proven, incremental SAP platform solutions!

In-Memory “Future Optimization, SAP Roadmap” Myth Busting – Number 4

In many SAP Enterprise client engagements I receive the following comment: “but we have been advised that we will miss out on future SAP application optimizations if we don’t migrate to a SAP HANA rdbms and/or S/4 HANA ‘Digital Core’ sooner rather than later”.

These comments are often made irrespective of actual, real-life S/4 HANA adoption rates, which represent a very small fraction (~1%) of the installed SAP Business Suite / NetWeaver base; such is the largely sales-incentive-driven pressure on SAP sales and technical sales teams.

At best this is only partially true: SAP continues to enable and develop a “Core Data Services” (CDS) rdbms abstraction layer that creates a logical structure for the push-down and optimisation of SAP HANA “re-optimized” application code to the rdbms database tier.

Consequently and logically IBM with DB2 (and indeed Oracle with 12c) continue to develop, optimize and align DB2 capabilities to SAP NetWeaver CDS functionality, which incidentally is supported and certified with SAP NetWeaver 7.40 and 7.50 with DB2 10.5 (& above) through to 2025.

Additionally, for IBM financial services clients, CDS has typically been deployed in conjunction with FIORI transactional applications to significantly improve SAP usability whilst protecting the client’s investment in IBM System z and/or DB2 on z/OS, with Linux or AIX SAP application server capacity.

In practical terms this means that ongoing SAP HANA based SAP ABAP code re-engineering and optimization efforts (there are many many millions of lines of single stack ABAP and/or prior dual stack ABAP / JAVA code) are aligned via CDS to optimized rdbms alternatives like DB2 and/or Oracle 12c in the near and mid term IT investment and planning horizon.

At Sapphire NOW 2016 I picked up a number of initial comments that the “Suite on HANA” SAP HANA Compatibility Views would only be developed and sustained for a finite period (until 2020), giving clients a more limited time to migrate to the new simplified SAP S/4 HANA Enterprise Management code streams and table structures (the new Universal Journal in Simple Finance, as an example).

From a personal point of view, deploying an existing, deeply customized regional or global SAP NetWeaver / ECC application template, one that has been read / write optimized for existing rdbms platforms over many years, onto HANA (SoH) is likely an application and rdbms platform mismatch.

It’s likely more logical to implement a new, simplified S/4 HANA Digital Core “read optimised” application template over a HANA columnar rdbms platform. This assumes the required application functionality is available, and that the business is willing to remove or remediate prior customizations to align to a forward SAP S/4 HANA digital core roll-out and transition strategy.

However, it is also becoming clear that, in addition to the prior SAP Business Suite / NetWeaver code line (and the various PAM-defined OS/DB supported combinations), the SAP HANA initiatives have created at least four different SAP S/4 HANA “simplified” code lines or releases, including:

  1. Simplified S/4 HANA solutions hosted on the HANA Enterprise Cloud
  2. The prior S/4 HANA Simple Finance (sFin v1) code, maintenance and release line
  3. S/4 HANA Enterprise Management and Simple Finance v2 “On Premise” code & release line
  4. The S/4 HANA Enterprise and Simple Finance “On Premise” code line but HEC hosted

The clear risk for both SAP SE and SAP Enterprise clients is that we simply switch from developing, managing, testing and releasing multiple “AnyDB” OS/DB choices over a single SAP Business Suite / SAP NetWeaver code stream, to managing, aligning and releasing multiple S/4 HANA editions and code lines (on or off premise). This is just a different set of complexities to manage, but now with a restriction of the prior client “AnyDB” choices; this is not, in my view, SAP HANA “simplification”.

S/4 HANA Simplification?

In-Memory “Commodity / Cloud Based TCO Reduction” Hype Busting – Number 5

In our industry we are observing the convergence of multiple significant structural changes, where previously we would typically deal relatively speaking, with a single significant structural change every 3-5 years (Desktop Computing, Client / Server, Distributed, the emergence of Eclipse, JAVA, Linux Open Source etc).

Today we have to manage and prioritize limited IT investment resources over multiple concurrent significant structural changes (mobile devices, IoT, public / hybrid cloud, Big Data, significant cyber security threats). Some of us older folks with many years in IT (and a few grey hairs) might suggest some of these themes are a little “over hyped” in IT industry fashion terms; hence we tend to take a cautious view, asking the harder “but, so what?” questions, helping to sort out material delivered benefits, ROI and progress from the considerable IT industry hype (it is a bit of a fashion industry also!).

In my view it’s perfectly possible to architect, build and deploy an “at scale”, fully virtualized SAP private cloud that is every bit as efficient (if not more so in data centre efficiency / PUE terms) as a hybrid or public cloud based on AWS (Amazon Web Services) and/or MS Azure platforms built on commodity Intel ODM (Original Design Manufacturer) 2 or 4 socket servers.

Indeed, the author was directly involved in and responsible for the successful deployment of a fully virtualised IBM DB2 SAP private cloud in support of ~8 million SAPS and 600+ strategic SAP environments, with ~12 petabytes of fully virtualised and tiered SAP storage capacity spread over dual global data centres, with WAN acceleration to support prior SAP GUI, SAP Portal and/or Citrix-enabled SAP clients, leveraging DB2, PowerVM and AIX. In practical terms it remains a highly efficient, flexible and scalable SAP platform in support of a 50+ Bn Euro (~$75 Bn annual turnover) consumer products business.

In this case, as briefly mentioned in a prior blog section, we completed detailed modelling of a SAP HANA appliance based deployment over 4 regions and 4 at-scale workloads / SAP landscapes (ECC, APO/SCM, BW, SAP CRM), with dedicated production appliances, VMware ESX / Intel virtualized capacity for smaller non-production SAP HANA instances, and a shared, common TDI-based storage strategy. This carried a DC TCA (Total Cost of Acquisition) premium of between 1.5 and 1.6 times over the existing virtualised, tiered IBM DB2 SAP and IBM POWER deployment strategy.

On one SAP HANA video a 10x landscape capacity reduction was indicated; however, this really did not correlate in any way with the actual worked example mentioned above.

For sure, I would not debate the agility, flexibility and initial responsiveness (assuming the required VPN links, security and data encryption needs are met) of AWS, MS Azure and/or indeed IBM’s own SoftLayer cloud offerings for rapid provisioning of DevOps-enabled “front office”, Big Data and/or next-generation mobile-enabled application workloads, including S/4 HANA or indeed SAP NetWeaver with DB2 10.5 and/or CDS, which is also available on MS Azure, AWS and/or IBM’s SoftLayer / CMS4SAP platforms.

The crucial factor here is a proper baseline and measurement of the “before and after” environments, and avoiding the considerable temptation to compare different “apples and pears” generations of SAP platforms, which rather mixes up the whole TCO analysis and results equation.

I consistently observe cloud TCO comparisons of prior “legacy”, partially virtualized, older generations of Unix / rdbms systems with fully virtualised Intel x86 cloud environments; these types of old vs new comparisons can be rather misleading and should, in my view, be taken with a large and rather cynical pinch of salt.

Any TCA / TCO comparisons should really use “like generation” CPU / virtualization platforms and virtualised, tiered storage, combined with current-generation rdbms platform choices. For example, comparing an older version of Oracle (or indeed DB2) on a prior Unix platform generation with an initial development SAP HANA + SAP BW scenario on a fully virtual x86 cloud (including any risks of noisy neighbours, unless dedicated capacity is deployed) can be very misleading, whilst potentially creating impressive but rather misleading headlines during cloud vendor marketing events and presentations.

In-Memory “IT Agility, Sizing, Solution Responsiveness” Hype Busting – Number 6

After many years of SAP and/or ERP platform sizing experience, we all understand that sizing complex SAP systems landscapes is partly science (user input on expected users and transaction volumes, data volumes, expected user and data growth rates, expected roll-out rates and planning horizons, workload scalability testing, client-specific PoCs etc).

This science is then combined with detailed prior experience and judgement on the likely system sizing variation and future growth rates after SAP application configuration and customization, along with catering for the typically changing business requirements and/or fluid ERP / SAP roll-out schedules by country or region, and the differing SAP ERP and related non-SAP systems alignment and integration requirements.

In this context it really nets out to one of two sizing strategies, in particular if SAP HANA appliance vs TDI strategies are being considered:

  1. The appliance-based model. Define the target environment and future growth horizon, then add a safety margin for errors and unexpected changes in inbound demand (an increasingly frequent issue). You then deploy the targeted 2, 4, 8 or more socket / server appliance building blocks, with the appropriate data compression rates and GB / TB of RAM sizing methods.
  2. An on-demand model (in IBM we call it Capacity Upgrade on Demand, “CUoD”). Here you size a scalable platform with active live and/or “dark” CUoD capacity that is then activated “on demand”, when the actual workload requirement is known, vs the initial SAP ERP sizing estimates.
  3. On top of these two models you then consider the realistic IT / ERP platform technology / capacity refresh cycle vs expected roll-out schedules, workload and data growth rates, to ensure you don’t break the target capacity building blocks for peak vs average demand over a typical 3-5 year IT asset write-down cycle (a toy comparison of the first two models is sketched below).
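Here is the toy comparison referenced in point 3, contrasting the two approaches in Python; the growth rate, horizon and safety margin are invented purely for illustration:

```python
# Toy comparison of the appliance vs CUoD sizing models described above;
# all inputs are hypothetical.

initial_tb, annual_growth, years = 4.0, 0.25, 3
peak_tb = initial_tb * (1 + annual_growth) ** years            # ~7.8 TB at year 3

# 1. Appliance model: buy peak demand plus a safety margin on day one.
appliance_tb = peak_tb * 1.3                                    # 30% safety margin

# 2. CUoD model: activate installed "dark" capacity year by year,
#    as the actual demand materialises.
cuod_active_tb = [initial_tb * (1 + annual_growth) ** y for y in range(years + 1)]

print(f"Appliance sized up front : {appliance_tb:.1f} TB")
print(f"CUoD activations by year : {[round(t, 1) for t in cuod_active_tb]}")
```

The appliance model pays for (and powers) the full safety margin from day one; the CUoD model defers activation, at the cost of needing a platform scalable enough to hold the headroom.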

These rules mostly apply irrespective of the SAP solution cloud deployment model (hybrid, public, private) selected to match the various development and roll-out phases, remembering a chart I defined back in Feb 2005 (below) to describe this typical Enterprise SAP ERP workload and roll-out cycle (just to prove some things don’t really change as much as we might imagine!).

Dynamic Infrastructure Sizing

One of my very experienced SAP platform solution architect and sizing colleagues said that he felt sizing SAP HANA appliance based landscapes (vs fully virtualized System p + DB2) was a bit of a “back to the future” experience in SAP / IT platform sizing, server capacity and lifecycle / refresh terms.

For example, there are significant issues and penalties in capacity, disruption and building-block upgrade terms if the initial SAP HANA sizing is incorrect, in addition to the typical 24-36 month refresh frequency on commodity Intel x86 platforms.

This means that selecting the wrong-sized SAP HANA appliance typically leads to rather uncomfortable conversations at CIO, CTO and/or CFO level when these need to be refreshed, often in advance of typical 4-5+ year Enterprise IT asset write-down cycles and system-of-record technology refresh terms.

In my view, it’s very important for these technology refresh cycles to be factored into any SAP platform TCO / TCA analysis. In one prior large retail scenario we used 3-4 years for Intel / Linux, 4-6 years for POWER / DB2 and 6-8 years for mainframe System z DB2 (with either Intel Linux or POWER AIX application server capacity), which aligned to the client’s scenario and two of their 5-year fiscal write-down / budgeting processes.

If you end up frequently refreshing “commodity” technology, or with a proliferation of different appliance based solutions (with large volumes of cores in the data centre to install, manage, power, cool and maintain, with typical DC power-to-cooling ratios of 1.5-1.7 times), this can quickly become a rather costly and inflexible SAP platform strategy.

Personally I prefer to deploy a proven, scalable, flexible virtual platform upfront and then scale as required through Capacity Upgrade on Demand (CUoD) options. This helps to effectively manage business driven changes in requirements, unexpected mergers / acquisitions etc.

However, if you have an existing workload that is stable, with clear growth rates, and can deploy this over an appropriate appliance building block after a detailed PoC to help with sizing, this can also work. It’s then really all about unexpected workload growth, which is often driven by mergers, later acquisitions, disposals and/or business-driven SAP platform consolidation activity.

Indeed, only last weekend I was reading about the continued significant rates of merger, acquisition and consolidation activity ongoing in the FMCG / consumer products industry.

In these scenarios, suddenly finding your core SAP ERP “system of record” platform needs to scale by a factor of 3 or 4 times (vs 1.5-2 times) is actually not that uncommon, as the back-office functions of two substantive businesses need to be merged into a single SAP instance / template and platform to realize prior or committed merger / acquisition savings and economies of scale.

It’s for sure a case of buyer beware: the age-old golden rule of making sure your target ERP platform has at least 2x capacity headroom has never been more true, and if you “tight size” it, it will for sure hurt later. Please refer to the following “SAP HANA – 7 Tips and Resources for Cost Optimizing SAP Infrastructure” blog:

https://blogs.saphana.com/2014/11/06/7-tips-and-resources-for-cost-optimizing-sap-hana-infrastructure-2/
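As a trivial sanity check of that golden rule against the merger scenario described above (all inputs hypothetical):

```python
# Hypothetical check of the "at least 2x headroom" rule against an
# unplanned merger; replace every figure with real sizing data.

sized_capacity = 6.0                   # TB (or SAPS) sized for the known workload
headroom_factor = 2.0                  # the golden-rule minimum quoted above
platform_ceiling = sized_capacity * headroom_factor

merger_demand = sized_capacity * 3.5   # a 3-4x merger jump, as described above

print("Platform ceiling         :", platform_ceiling)   # 12.0
print("Post-merger demand       :", merger_demand)      # 21.0
print("Headroom survives merger :", merger_demand <= platform_ceiling)  # False
```

Even a platform sized to the 2x rule can be broken by a 3-4x merger jump, which is exactly why I prefer scalable, CUoD-style platforms for systems of record.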

For sure, cloud / IaaS based models can help with initial project agility and responsiveness, and even help to size “model” configured environments; but per se it’s still important not to simply assume a “cloud / commodity” model is always cheaper than an effectively designed and deployed, virtualised “private cloud” or hosted “private / hybrid cloud” model, in particular if you are implementing at scale over a 4-5+ year write-down cycle vs 12-36 months.

Disclaimer – This blog represents the author’s own views vs a formal IBM point of view

The views expressed in this blog are the author’s and do not represent a formal IBM point of view.

They do represent an aggregate of many years (20+) of successful ERP / SAP platform deployment and IT strategy development experience, supplemented with many hours of reading respective DB2 and/or SAP HANA roadmap materials and presentations at various user conferences and/or user groups, in addition to carefully reading input from a range of respected industry / database analyst sources (these sources are respected and quoted).


Does a SAP S/4 HANA “Digital Core” destination make sense for your business, and if so, when?

Executive Synopsis

At the SAP SAPPHIRE NOW 2016 conference in Orlando I was approached by a number of large IBM / SAP Enterprise clients and business partners who essentially asked a similar and, in theory, relatively simple question, which unfortunately has both a simple and a more complex answer.

HANA Bay, Maui: maybe not exactly the tropical beach and bay that you had initially imagined!

HANA Bay Picture

What was the synopsis of the similar question that I was asked and my initial and more detailed answer?

I’ve subdivided the answer into a series of interrelated blog topics, for ease of consumption and according to particular interests, concerns and questions.

Section 1 – Does SAP S/4 HANA make sense for you, client Choice A vs Choice B ?

Section 2 – IBM DB2 BLU and/or DB2 10.5 Optimization for SAP – Evolution vs Revolution

Section 3 – Stable SAP NetWeaver Core + Best of Breed / SaaS Edge, Hybrid Cloud Strategy

Section 4 – Will Open Source Enabled API, IoT, Big Data prevail or proprietary – YES or no ?

Section 5 – In-Memory IT Hype vs reality – Some “In-Memory” Hype busting

Section 6 – HTAP, OLAP vs OLTP SAP Application Throughput, Optimizations

In summary, for me, referencing the diagram below, Choice A is strongly preferred; however, I fully recognize that some large Enterprises may essentially decide to go “all in” with a SAP S/4 HANA digital and extended core, essentially Choice B.

On the basis that a picture is worth a thousand words, let’s start here:

Business into IT Investment Choices, Strategies v2 300816

A consolidated version of the repeated question I was asked at SAPPHIRE NOW 2016:

“Today we run our core, often customized SAP Business Suite / ERP 6.0 NetWeaver systems on
our preferred choice of IT platform, including an AnyDB choice, that embraces our choice of SAP platform technology.

(For example a choice of IBM iSeries, System z, System p and/or Intel x86 / Linux or Windows with DB2, or indeed Oracle with SAP over System p or Intel / Linux etc)

However our local SAP sales and technical sales team are strongly advising us to start out all over again in SAP platform technology terms with either SAP BW + HANA, Suite on HANA (SoH) and/or SAP S/4 HANA Enterprise Management with Simple Finance v2 running only on HANA over  Linux / Intel and/or IBM’s POWER and Linux with SAP HANA TDI storage.

What should we do and what’s your point of view and input?

PS Please don’t start your answer with “but naturally, it depends…”!

As a relatively conservative, risk-averse, experienced SAP / ERP technical solution architect and IT strategy advisor with 33 years of IT, ISV and ERP platform solution experience, I will provide both a high-level answer and then a more detailed, structured and considered reply, after time for further research, in the depth that I believe this topic requires and warrants.

Various sources of information and research used whilst creating this response include various SAPPHIRE NOW 2016 and prior SAP TechEd keynotes, respective SAP S/4 HANA and/or IBM DB2 SAP NetWeaver product roadmaps, technical solution benefits and choices, and references back to prior detailed SAP platform TCO / TCA and IT risk / benefit / strategy analysis.

It includes a number of referenced independent sources that are less influenced by commercial gain and less inclined to automatically follow the prevailing “the answer is SAP HANA, now what was the question again?” viewpoint; hence it may be considered by some to be a little controversial.

The IT Executive Level summary answer is relatively simple as follows:

Business into IT Investment and Innovation Strategy Choice A or Choice B, which way are you going?

For me, the “start now” green arrow choice of Path A vs Path B is a hugely strategic and critical question for many enterprises; indeed, the Harvard Business Review recently published an IBM sponsored paper titled “The Ecosystem Equation: Collaboration in the Connected Economy”.

This paper and a webinar presentation of its summary can be found at the following location/s:

If we net this excellent HBR research out, it really indicates that the next generation of industry leaders will be determined by the combination of consistent Executive “C Suite” sponsorship, investment, speed to market and value of digital enablement in a highly connected, open and collaborative strongly emerging “Digital Economy”.

In summary as follows:

HBR Connected Economy Summary

For me this research essentially indicates Choice A is likely a preferred path as it helps to focus typically finite Strategic IT investment resources more rapidly on delivered speed to value of Open Source, Analytics data / API and IoT driven platform innovation.

I’m not saying some Enterprises won’t choose Choice B; they will. However, this choice critically needs to be made with an “opportunity time vs cost vs risk vs benefit” analysis of essentially pausing first to remediate existing customized SAP NetWeaver application templates towards S/4 HANA Enterprise Management templates, or as a minimum investing in significant SAP application template remediation in parallel with front-office, business-aligned IT innovation investment strategies.

Naturally, as a consequence of this strongly emerging “Digital Economy” reality, I then tend to start my answer with further exploratory background IT strategy and SAP / ERP / IT platform client solution strategy related questions, as follows:

Are you planning to, and can you practically, implement a new “read optimized”, simplified S/4 HANA Enterprise Management and/or Simple Finance v2 template with aligned, optimized, revised and simplified business processes? Yes, no, or maybe you are not sure yet?

Additionally, are you prepared to adapt the current business processes to match the capabilities of the S/4 HANA Digital Core / Enterprise Management package including Simple Finance v2 ?

This, for example, is essentially a practical ERP platform strategy for relatively young but fast-growing companies, like Asian Paints in India, whose CIO co-presented during one of the SAPPHIRE NOW 2016 keynotes. In summary, he indicated they had adapted and aligned their business processes to the available SAP solution capabilities and phased deliverables, not the other way around, which is more normal in large-scale, complex global enterprises.

Understand that, within the ~3,700 clients that SAP SE indicate have adopted SAP S/4 HANA, it was mentioned there are actually ~180 S/4 HANA productive deployments, with a further ~300-350 pipeline projects. From recent Bloor Research and Nucleus Research analysis it also looks like a significant majority of these 180 clients tend towards “net new” SAP S/4 HANA deployments and/or early testing in smaller subsidiary operations of larger corporations, vs core prior SAP Business Suite deployments.

The ~180 productive deployments, plus the pipeline of 300-350 further deployments, actually represent only ~1% of SAP’s ~45,000-55,000 installed SAP Business Suite clients.

Relatively speaking, it was also recently mentioned that, due to the bias of large regional or global Enterprises running SAP over IBM DB2, approaching a third of SAP’s existing Business Suite transactions are actually processed on an IBM DB2 database platform running over a choice of IBM System z, p, i and/or Linux / Windows and Intel.

This represents many thousands of installed SAP DB2 SIDs (SAP System IDs), for both non-production and often mission-critical production use, and is consequently a low-risk and proven SAP Business Suite / SAP NetWeaver platform capability.

Understand also that the ~3,700 clients mentioned typically include a range of SAP BW on HANA (OLAP), Suite on HANA (SoH), HANA Side Car (CO-PA, ML Accelerators) and/or S/4 HANA license upgrades, in addition to things like SAP HANA HEC (HANA Enterprise Cloud), SuccessFactors, Hybris etc SaaS (Software as a Service) deployments; hence it’s very difficult to get an accurate and precise view.

As mentioned briefly after SAPPHIRE NOW, on the 28th of June 2016 Nucleus Research published a summary paper indicating that 9 out of 10 of the 40 SAP clients they interviewed (within a research pool of 200+ ERP engagements) don’t plan to deploy S/4 HANA in the near future; a link to this item is referenced later in this section.

However …

For larger or more complex existing SAP Business Suite / ERP / ECC 6.0 / SAP NetWeaver Enterprise clients, a functional analysis and/or re-mapping of the “existing” and “to be” business processes into the SAP S/4 HANA Enterprise application template is then required.

This itself is typically a non-trivial exercise which can take many weeks or even months of effort, even if the latest S/4 HANA SAP custom code compatibility inspection tools are employed.  

This also assumes that conversion to this new “Read Optimized” SAP S/4 HANA application template is viable and practical in roll out terms and that the required functionality is both available and stable.

It also assumes that the required remediation is affordable (in time, IT opportunity cost and resources, roll-out and SAP HANA platform terms, vs alternative strategic IT investment strategies), which then swings back full circle to Choice A vs Choice B in the first diagram.

In my experience, the client CIOs, CTOs and/or chief enterprise / ERP or data architects I’ve spoken to are mostly weighing Choice A vs Choice B (or adopting a “wait and see” strategy), with an objective to more effectively meet intense business pressure for more rapid returns from IT investments, value delivery and a faster ROI.

In my humble view, the prior days of monolithic SAP / ERP roll outs are simply drawing to a close.

Re-Integration of prior SAP IS (Industry Solutions) into the SAP S/4 HANA Enterprise digital core.

A number of the prior SAP IS (Industry Solutions), like the SAP IS Retail and/or AFS (Apparel and Footwear) solution are now being re-integrated in data structure, table and functional terms back into the simplified new S/4 HANA Enterprise Management “Digital Core”.

In the case of the Retail industry solution, phased S/4 HANA Enterprise based functional deliverables beyond “Simple Finance” are planned for the SAP S/4 HANA 1611 release in Q4 2016, followed by further SAP S/4 HANA Enterprise Management hybrid retail / distribution functionality in Q4 2017 etc. In effect, the industry-aligned functional delivery has taken a ~24 month rain check to be re-engineered onto a SAP HANA “read optimized”, columnar in-memory platform.

Refer to the SAP S/4 HANA Retail Roadmap/s (SAP Service Market Place ID Required)

It also assumes that this is a key strategic forward IT investment priority and focus area; we will come back to the strategic aspect of this particular question a little later.

Then, naturally, you will consider and review very carefully whether a new “read optimized” S/4 HANA Digital Core and revised application template, one that more naturally aligns to a SAP HANA columnar (only) data platform capability, justifies the revised SAP Basis / database and IT platform skills, the limited SAP HANA platform choices, and the change / release and/or cloud (HEC) or on / off premise hybrid cloud deployment requirements, prerequisites and options that this implies.

In the recently updated SAP Nation book (1.0 > 2.0) by Vinnie Mirchandani, it is mentioned that Andre Blumberg, Director of IT at CLP Group (a large, Hong Kong headquartered, Asia-Pacific SAP utility client), takes an engineering-like approach to the evaluation of new IT technologies like SAP HANA; they found the TCO (Total Cost of Ownership) would actually be significantly higher with SAP HANA, not lower as claimed in multiple SAP HANA sales and marketing presentations at Sapphire and TechEd.

This was / is consistent with prior SAP platform TCO / TCA analysis I completed for a ~$75 Bn (50-55 Bn Euro) global CP company that was already running a virtualized, tiered, consolidated and standardized SAP platform strategy (over DB2 and System p) in the form of an “at scale” IBM SAP private cloud. A switch to a SAP HANA strategy, when modeled over four regions and four SAP workloads / landscapes (ECC, APO/SCM, BW and/or CRM) using a common virtualized, tiered storage strategy, resulted in a 1.5 to 1.6 times increase in SAP platform TCA (Total Cost of Acquisition).

It also added a further 12-14,000 commodity Intel cores, which in turn would have forced a significant and costly ~3 megawatt power increase in each of their two global data centres, running directly contrary to their Green IT and data centre sustainability KPIs.

It’s also worth mentioning, to put things into perspective, that this single Enterprise has more strategic SAP DB2 instances (SIDs, at 600+) in production and non-production than there are productive S/4 HANA deployments globally.

Today, unfortunately, SAP are indicating you no longer have an SAP “AnyDB” choice for their new S/4 HANA Digital Core: it’s clear that SAP’s S/4 HANA platform strategy is to offer an rdbms choice of one (some SAP clients might say this equals none), vs the multiple PAM (Product Availability Matrix) defined AnyDB and supported OS/DB permutations and combinations offered for the prior SAP NetWeaver / ERP 6.0 platform.

Please refer to my prior From Walldorf to West Coast ? S/4 HANA blog on the LinkedIn CIO forum:

https://www.linkedin.com/pulse/from-walldorf-west-coast-s4-hana-tim-main?trk=mp-author-card

The strategic SAP point of view expressed at SAPPHIRE NOW 2016 and prior SAP TechEds in Q4 2015 is that, by restricting the prior AnyDB SAP platform choice, they can deliver new “in-memory” SAP HANA enabled innovations and converged OLTP / OLAP functionality significantly faster.

It is also likely that this helps SAP SE to significantly reduce their prior SAP NetWeaver / ERP 6.0 platform application regression testing costs, largely at the expense of forcing a SAP S/4 HANA Digital Core platform change on many mutual IBM SAP large or medium Enterprise clients.

Some more experienced, and possibly more cynical, longer-term IT folks might be forgiven for suggesting that we have a case of the SAP HANA columnar rdbms platform “technology tail” wagging the SAP ERP “business application dog”.

Effectively, the new SAP HANA rdbms technology choice requires and forces a new “read optimized” columnar SAP S/4 HANA Digital Enterprise business application model, including things like the new Universal Journal.

Personally I see limited solution or technology benefit in running an existing customized SAP read / write optimized SAP Business Suite template in a Suite on HANA (SoH) platform configuration.

For me, at best, this is a basic SAP application and platform technology mismatch, or a rather compromised “halfway house”.

Indeed, at SAP Sapphire Now 2016, in one keynote it was very briefly mentioned that SAP HANA CDS “Compatibility Views” would only be supported for a limited period of time (I understand this is currently to 2020), vs for example the support of existing SAP NetWeaver 7.40 and/or 7.50 SAP CDS (Core Data Services) functionality over DB2 10.5 LUW until 2025.

In the latest “Your Path to S/4 HANA” brochure distributed at SAPPHIRE NOW 2016, SAP SE mentions its very significant investment in and commitment to its invention of “the most disruptive pure in-memory technology”, SAP HANA, to bridge the gap between prior, often separate, transactional and analytical platforms.

As mentioned before, recently on June 28th 2016 Nucleus Research have published a report that summarizes the output from a survey of ~ 40 Enterprise IT / SAP Clients with respect to their SAP S/4 HANA adoption plans.

In this report, whilst recognizing it’s a relatively small sample size (40), supplemented by research data from a further 200 recent client ERP evaluations, a significant number of clients (9 out of 10) indicated that they had no near-term plans to adopt S/4 HANA. In my view this is fairly profound input for SAP SE to consider.

http://www.businesswire.com/news/home/20160628006566/en/6-10-SAP-Customers-Buy-SAP-Nucleus


BOSTON–(BUSINESS WIRE)–A 60 percent majority of SAP SE (NYSE: SAP) customers wouldn’t buy SAP solutions again according to a new analysis by Nucleus Research. And in SAP’s core market of ERP, nine out of 10 customers say they are not considering a future investment in SAP’s S/4HANA solution.

Previously, one of the key reasons why Enterprise clients selected SAP ERP / NetWeaver solutions was the ability to effectively integrate a more open application platform with existing IT platform choices, IT operational investments and skills, minimising change and risk at the platform level vs the business process into SAP / ERP application level (which is tough enough to successfully deliver in its own right for large-scale ERP business process change and phased IT project deliverables).

Is HTAP actually achievable and desirable at scale? The answer is that it really depends.

I have a personal view that, whilst it may be practical and/or desirable for small and/or medium enterprises, the implied SAP HANA HTAP (Hybrid Transactional Analytical Processing) strategy being strongly promoted by SAP SE as a SAP HANA landscape complexity / TCO reduction strategy is likely not realizable, or even desirable, in many large-scale Enterprise client scenarios, in particular for clients with mixed ERP / “system of record” platform portfolios.

In particular, for a large Enterprise IT client, the EDW (Enterprise Data Warehouse, either physical or indeed increasingly logical) typically consolidates multiple sources of “system of record” / ERP data, in addition to increasingly absorbing consolidated information, often in “distilled SQL form”, from external Big Data sources (typically Hadoop / HDFS, MapReduce or Spark / HDFS based). Some folks are now calling these hybrid data warehouses or data lakes.

The key challenge with data lakes and/or data reservoirs is that, if not very carefully managed and governed, they can quickly turn into “data swamps”, pooling untrusted information of uncertain heritage and accuracy into a “large bucket” of data.

Hence I would strongly suggest that the middle grey layer of information and data governance, movement, master and meta data management in the diagram below is a rate-determining critical success factor for many Enterprises, in addition to the ability to virtualize or federate queries with appropriate throughput in the hybrid or “logical data warehouse”, vs building the prior, often monolithic, EDWs.

This typically has to be combined with a controlled and managed “insight into action” strategy, as expected and often required by key line-of-business executives and/or IT users.

Personally, I prefer to use what I call the “two triangles” data architecture. It reminds me of the production of a fine Scottish single malt whisky: local filtered water, malted barley, mash tuns and successive distillation processes through copper and brass stills, followed by storage, selection, highly skilled blending, and subsequent consumption or aging in high-quality oak barrels.

The quality of the end whisky produced is totally dependent on an optimal combination of proven skills and capabilities and the quality of the source ingredients, combined in logical proportions with the appropriate distillation asset investments, retention and aging periods.

Translating this into a Big Data scenario, the quality of the end “insight into action” product likewise depends on the quality of the data input, the required capital investments and the accumulated business analytics skills involved (say, the availability of experienced Business Intelligence / Analytics SMEs and/or data scientists), as summarized below:

Two Triangles Screen Shot

If a broader Data Lake or Reservoir strategy is of interest to the blog readers, please refer to the following excellent Paper –  Governing and Managing Big Data for Analytics and Decision Makers.

Anyway, I digress; back to the topic in hand. For me, I still believe in the logical separation of at-scale Enterprise OLTP / transactional and at-scale EDW / OLAP analytical workloads, as follows:

Does HTAP Make Sense for You? v2

Indeed, SAP SE are also currently re-positioning SAP BW (Business Warehouse) from its prior typical role as a SAP ERP aligned operational and/or transactional reporting platform into a role as a SAP BW + HANA based EDW, with HANA Live / FIORI analytics user interfaces used for transactional and/or operational reporting.

This assumes, of course, that the required business value content and reports have previously been defined and delivered, which is not automatically the case, as a recent SAP HANA Live client indicated when comparing it with SAP operational reporting tools from vendors like EveryAngle.

A Gartner point of view on this topic can be found (via your Gartner subscription) in Gartner Paper G0027727, Hype Cycle for Information Infrastructure, 2015, published 13th August 2015:

“Almost all new infrastructure technologies emerge into market productivity as incremental solutions (and as such can persist as stand-alone solutions).”

Gartner source / copyright respected – please refer to original for full source details

Gartner Report Information Infrastructures 2015 heading

Gartner HTAP Point of View

What happens if you have just deployed a broad, deep, customized SAP NetWeaver ERP 6.0 platform?

Now we get to the most complex and difficult scenario: an existing, often deeply customized, mission- and business-critical, broad and deep SAP ECC / NetWeaver / Business Suite deployment that has just been rolled out at a regional or global level (with a standardized but often deeply customized SAP Business Suite / ECC 6.0 template).

There is also a point of view that the considerable “marketing hype” associated with the SAP HANA “in-memory” columnar database will largely be forgotten in 3-5 years, as alternative, less disruptive, evolutionary “in-memory columnar” rdbms solutions become available, enabled and optimized with SAP BW 7.0x and, more recently, SAP BW 7.3 > 7.4 including Flat InfoCubes.

Consequently, in Section 2 let us briefly review the benefits of DB2 BLU and/or DB2 10.5 for SAP workloads, before circling back in Section 3 to strategic IT investment and innovation choices: back to Choice A or B as below, where I believe many clients will select Choice A to accelerate strategic Open Source, API / IoT enabled “connected economy” investments on the “Green Arrow” path:

Simplified Choice?

 


Disclaimer: the views expressed in this blog are the author’s own and do not represent a formal IBM point of view.

They do, however, represent an aggregate of many years (20+) of successful ERP / SAP platform deployment and IT strategy development experience, supplemented by many hours of reading DB2 and/or SAP HANA roadmap materials and presentations from various user conferences and/or user groups, in addition to carefully reading input from a range of respected industry / database analyst sources (these sources are respected and quoted).

 

IBM DB2 BLU and/or DB2 10.5 Optimization for SAP – Platform Evolution vs Revolution

Consequently, in blog section 2, let us briefly review the benefits of DB2 BLU and/or DB2 10.5 for typical SAP NetWeaver Business Suite workloads.

In blog section 1 we highlighted the complex choice IBM SAP Enterprise IT clients face if they are already happily running (often customized) SAP Business Suite / SAP NetWeaver over DB2 (on z, p, i and/or LUW) on their preferred and/or virtualized IBM SAP platform choice (z, p, i Series, Linux, VMware ESX, Windows / Intel + DB2 LUW etc.).

Very careful analysis is then required of the TCO, functionality, cost / benefits and risks associated with SAP HANA with SAP BW and/or Suite on HANA (SoH), or indeed with starting again with a new S/4 HANA Digital Core / S/4 HANA Enterprise Management (on the 1511 release vs the prior 1503 code path).

This helps ensure the claimed or indicated benefits actually align with your business and IT priorities, rather than with SAP SE’s natural desire to increase its share of your IT spend in terms of the required HANA rdbms licence, support and/or HANA remediation consulting revenues vs prior AnyDB platform choices.

This has to be considered and balanced against the continued deployment of viable, mature and proven SAP NetWeaver optimised IBM DB2 AnyDB alternatives, which have been, and continue to be, progressively developed and optimized over many years jointly by the SAP DB2 development teams in Walldorf, IBM Boeblingen and IBM’s DB2 development labs in Toronto, Canada.

Unfortunately there are no joint development labs in the Scottish Highlands; never mind!

(Whilst also not forgetting that IBMers invented the relational database many, many years ago, following on from IBM’s IMS, which was used by NASA for the Apollo programme.)

In particular, client and/or more recent joint IBM DB2 / Systems Group lab testing indicates that, for more complex and concurrent SAP BW analytical (OLAP) workloads, IBM’s DB2 10.5 BLU (and/or DB2 11.1 with BLU + MPP, Massively Parallel Processing, now certified with SAP BW) often matches or significantly exceeds the throughput of SAP HANA with SAP BW for OLAP / complex SQL BW reporting workloads, with less than half of the configured system memory.

It also typically uses significantly fewer multi-threaded CPU cores, whilst providing rapid, incremental and non-disruptive speed to value without having to re-engineer or optimize the client’s SAP BW configuration and/or BusinessObjects (or Cognos etc.) reporting tiers.

With SAP BW 7.0x (and above, up to BW 7.4) and DB2 10.5 BLU, this is normally combined with a relatively simple, quick and largely non-disruptive targeted row-to-columnar conversion of DB2 SAP BW tables, using the latest version of the DB6CONV tool and typically targeted at the SAP BW reporting tier (InfoProviders, InfoCubes). The sketch below illustrates the underlying DB2 mechanics.
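
For readers unfamiliar with what such a conversion does under the covers, here is a minimal, purely illustrative sketch of a native row-to-columnar conversion in DB2 10.5 (DB6CONV remains the supported route for SAP-managed tables; the schema, table and column names here are hypothetical):

```sql
-- Instance-level default for analytic databases (CLP command, not SQL):
--   db2set DB2_WORKLOAD=ANALYTICS
-- DB2's native in-place converter (wraps ADMIN_MOVE_TABLE):
--   db2convert -d <dbname> -z <schema> -t <table>

-- Hand-rolled equivalent: copy a row-organized fact table into a
-- column-organized (BLU) twin, verify, then swap names.
CREATE TABLE sapbw.fsales_col (
  request_id   INTEGER       NOT NULL,
  customer_id  INTEGER       NOT NULL,
  revenue      DECIMAL(15,2)
) ORGANIZE BY COLUMN;

INSERT INTO sapbw.fsales_col
  SELECT request_id, customer_id, revenue
  FROM   sapbw.fsales_row;
```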

DB2 10.5 BLU also includes enablement and optimization of the SAP HANA derived “Flat InfoCubes” support in SAP BW 7.40 (with SAP NetWeaver 7.40 or 7.50), with DB2 10.5 FP5S or above.

The diagram below indicates the relative speed-up typically observed with DB2 10.5 LUW and SAP BW, first in row-organized relational form, then in a columnar “in-memory” organization, and then columnar “in-memory” with SAP BW “Flat InfoCubes” (at BW 7.40), for a representative sample set of BW / SQL queries and reports.

BLU Relative Throughput Flat Infocubes

Over a range of queries, excellent throughput improvements are observed with relatively modest increases in the allocated DB2 memory (GB RAM) and server CPU core capacity.

Personally, I’m not a great fan of the x100 or x1000 SAP HANA speed-up claims that seemed to be a feature of prior SAPPHIRE and/or SAP TechEd conferences.

Whilst these may be true for individual queries when comparing older row-based rdbms systems (often on prior generations of hardware) with SAP HANA on the most current Intel hardware, from my PoV they are often “apples & pears” comparisons that make good marketing charts, but are likely not representative of many clients’ real-life mixed SAP BW OLAP reporting and/or batch / ETL (Extract, Transform, Load) workload scenarios.

The table above simply highlights the benefits of leveraging proven and mature DB2 LUW (Linux, Unix, Windows) rdbms technology, with its proven query optimizer and buffer pools, deployed to leverage a columnar, autonomic (automatic to you & me) modern tiered data platform. A quick way to verify which tables have actually been converted is shown below.
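
As a small, illustrative aside (reusing the hypothetical SAPBW schema from the earlier sketch), the DB2 catalog records each table’s organization, which makes it easy to verify the outcome of a conversion:

```sql
-- TABLEORG is 'C' for column-organized (BLU) and 'R' for row-organized.
SELECT tabname, tableorg, card
FROM   syscat.tables
WHERE  tabschema = 'SAPBW'
ORDER  BY tableorg, tabname;
```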

With DB2 v11.1 we now also combine the proven DB2 Data Partitioning Feature (DPF), which effectively manages, distributes and optimises both queries and data placement over a scale-out n+1 architecture for the very largest clients (10s to 100s of TB of adaptively compressed DB2 SAP BW data), to enable DB2 BLU with MPP (Massively Parallel Processing). This fully leverages the prior DB2 BLU “in-memory” columnar and SAP BW 7.x optimizations, with expected GA and/or SAP certification in Q3 / Q4 2016.
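
In DDL terms the combination is straightforward: a BLU MPP table simply pairs the classic DPF DISTRIBUTE BY HASH clause with column organization. A minimal sketch, with hypothetical table, tablespace and column names:

```sql
-- Hypothetical DB2 11.1 "BLU + MPP" table: BLU columnar storage combined
-- with DPF hash distribution across the database partitions.
CREATE TABLE sapbw.fsales_mpp (
  request_id   INTEGER       NOT NULL,
  customer_id  INTEGER       NOT NULL,
  revenue      DECIMAL(15,2)
)
IN sales_ts                       -- tablespace in a multi-partition group
DISTRIBUTE BY HASH (customer_id)  -- DPF-style data placement
ORGANIZE BY COLUMN;               -- BLU in-memory columnar organization
```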

For folks who are interested, a summary review of the DB2 11.1 LUW “Hybrid Cloud enabling” capabilities can be found in the following paper by Philip Howard of Bloor Research, and/or in a summary from a recent DB2 11.1 announcement web link.

Insert web link here – Re Bloor Research

A link to a newswire item on the DB2 v11.1 announcements is enclosed, together with the formal announcement from the IBM web site, below.

IBM Targets Developers with Powerful In-Memory Database in the Cloud

DB2 on Cloud makes hybrid cloud development easier

Now let’s consider OLTP / transactional workloads vs the OLAP / analytical scenarios.

The next statement may sound relatively harsh, but in many cases it is true in the cold light of day: when the relative costs, risks and real benefits of migrating an existing “read / write” optimized, customized SAP NetWeaver ABAP / SAP ECC OLTP (and/or prior BW 7.x OLAP) template to Suite on HANA and/or a new S/4 HANA Digital Core are considered, the move may or may not stack up in cost / benefit terms exactly as previously suggested and marketed by SAP SE.

This naturally brings me back to my technical and solution architect “it depends” disposition: we also have to consider the relative strategic, competitive and business-into-IT benefits of the various IT and strategic platform investment choices (S/4 HANA Digital Core, IBM’s Watson / IoT, Bluemix etc.) for competitive advantage over prior COTS or packaged application deployment strategies.

The next key question becomes: is your business-aligned IT investment priority focused on front office IT enablement and differentiation in the Systems of Engagement, Systems of Insight and Systems of Innovation / IoT areas for competitive advantage, or on remediation of existing customized SAP ERP NetWeaver “Back Office” / “System of Record” configurations to enable what might initially appear to be a commercially driven SoH OS/DB SAP HANA platform migration?

Personally, I view SoH (Suite on HANA) as essentially a “zero sum” game in real delivered IT benefit terms (not an ideal combination of the two worlds), where, for example, IBM DB2 SAP clients can already fully leverage the throughput, scalability, TCO reduction, adaptive data compression and many years of SAP DB2 optimizations and maturity of DB2 under SAP ERP 6.0 / SAP NetWeaver, including support through to 2025 with SAP NetWeaver 7.40 and/or 7.50.

For these clients there are also clear benefits from the SAP Core Data Services (CDS) HANA-aligned application-to-database functional push-down optimizations enabled on DB2 10.5 (and above) with SAP NetWeaver 7.40 and 7.50, in addition to the FIORI transactional application user interface enhancements (vs the FIORI analytical optimizations for BW + HANA and/or SAP S/4 HANA).

The following diagram describes examples of the detailed mapping of IBM DB2 10.5 functionality to the SAP Core Data Services (CDS) aligned rdbms function calls and optimizations:

DB2 CDS 10.5 Optimisations
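
To illustrate the push-down principle itself: a CDS view ultimately compiles down to a database view, so set processing runs in the database rather than in the ABAP application server. A minimal sketch (the view and table names below are hypothetical, reusing the earlier example table):

```sql
-- Minimal push-down sketch: the aggregation executes inside DB2, so only
-- the summarized rows travel back to the SAP application server.
CREATE VIEW sapbw.v_sales_by_customer AS
  SELECT customer_id,
         SUM(revenue) AS total_revenue,
         COUNT(*)     AS order_count
  FROM   sapbw.fsales_mpp
  GROUP  BY customer_id;

SELECT customer_id, total_revenue
FROM   sapbw.v_sales_by_customer
WHERE  total_revenue > 100000;
```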

I believe a number of mutual IBM SAP Enterprise clients will decide to sustain a “functionality stable” SAP NetWeaver ERP 6.0 “System of Record” Core (ECC, BW, APO/SCM, PI, PLM etc).

Essentially they will adopt a “wait & see” strategy on future S/4 HANA Digital Core adoption.

A number of them may also decide to switch from a prior “SAP first” back office (SoR) strategy to selecting and integrating “Best of Breed” front office SaaS / Cloud / IoT alternatives from vendors like Salesforce.com (CRM), NetSuite (Financials / ERP), Workday (HR) and/or Anaplan (S&OP), plus IBM’s own Watson IoT / Bluemix etc.

This “Best of Breed” / SaaS / Hybrid Cloud strategy is then integrated back to the stable SAP ERP 6.0 / SAP NetWeaver core via SOA (service-orientated architecture) and API (application programming interface) standards, leveraging existing deployed Enterprise Application Integration (EAI) messaging / integration buses, or indeed various “Application Integration as a Service” (AIaaS) offerings (which may include, for example, SAP PI/PO, IBM’s WBI, IBM’s WebSphere CastIron* and/or IBM’s DataPower etc.).

* Indeed, IBM’s WebSphere CastIron integration appliance was often used to integrate SuccessFactors with SAP ERP NetWeaver solutions prior to SAP’s acquisition of SuccessFactors, and is also typically used to help integrate Salesforce.com CRM solutions with SAP NetWeaver / ERP 6.0 solutions via a simplified, “Drag & Drop”, template (TIPs) driven integration strategy.

Some time ago, in February 2012, Forbes published the following item on Cloud computing:

http://www.forbes.com/sites/joemckendrick/2012/02/22/6-shining-examples-of-cloud-computing-in-action/#1eb0c0345927

6 Shining Examples of Cloud Computing in Action, Joe McKendrick.

Cloud computing means more than simply saving on IT implementation costs. Cloud offers enormous opportunity for new innovation, and even disruption of entire industries.

Which provides a natural segue into my Section 3 topic:

Section 3 – SAP NetWeaver Core + Best of Breed / SaaS Strategic IT Alternative Investment Choices