All posts by Tim Main

Information Management and Applications - Technical Director

API / Microservices Business Innovation and Solution Enablement Strategies

Or, in other words, what does a Post-Modern ERP strategy look like?

The Hunter Becomes the Hunted Image at 180717

Blog by Tim Main – IBM Information Management and ERP – Technical Director

17th July 2017

Executive Summary – The Question?

A number of months ago, one of my experienced and senior colleagues in the IBM SAP implementation and solutions technical community asked me what a “Post-Modern ERP Solution Landscape” might look like.

This followed a similar prior question from the CIO of a significant European pharmaceutical concern, and the recent DSAG SAP CIO Investment Study of 269 German-speaking CIOs, run from November 2016 to January 2017.

In that study, roughly 50-60% of these CIOs identified increased strategic IT investment in digital innovation as a key priority, while up to 50% did not currently consider SAP S/4 an alternative to their existing, often customized and broadly deployed, SAP Business Suite / ERP landscapes.

Consequently, linking back to my prior blogs in this area, I decided to explore what a “Post-Modern ERP Landscape” might look like when it leverages an open API / Microservices architectural integration construct that seeks to “innovate with new + integrate + leverage existing” ERP Systems of Record applications.

It may be suggested by some ERP vendors that a migration, upgrade and/or remediation of prior customised, business-process-aligned SAP Business Suite functionality onto SAP S/4 Enterprise Management is a prerequisite for an Enterprise's ability to digitally innovate.

Recently Philip Howard, Information Management Research Director at Bloor Research, published a paper looking at the myths of SAP HANA, which can be found here.

From my point of view, and likely more importantly from the view of a number of Enterprise client chief enterprise / IT strategy architects I have spoken to, we simply don’t see this firm or direct linkage; in fact the reverse seems to be true in an Open Source / open-enabled digital innovation world.

There, the focus is correctly on enabling the integration of new front-office digital innovation IT solutions (enhanced Systems of Engagement, Systems of Insight, Systems of Innovation) rather than on remediation of prior Systems of Record / ERP solution investments.

If we accept this fundamental strategic assumption, we can then go on to consider a number of the “layers of the cake” in terms of business innovation flowing into IT technology strategy and prerequisite capabilities.

The Resulting High-Level Strategy – What might it look like?

An Open Business Domain : IT Transformation Viewpoint v2 180717

This also links back to my prior analysis of “Factory IT” and “Innovation IT” following a Harvard Business Review (HBR) business-into-IT strategy review paper in 2008/9; this split is now more commonly referred to by IT analysts like Gartner and/or Forrester as “Bi-Modal” or “Dual Speed” IT.

Further details on “The Layers of the Cake”

Open Business : IT Transformation View - Details v1 140717

From my point of view, one of the most important and strategic aspects of an effectively implemented API / Microservices strategy is to enable, deploy and govern a layer that acts like graphene or graphite: gearing and lubrication between the rapid pace of innovation and change demanded by the business through Innovation IT, and Factory IT, which pragmatically needs to operate at a very different speed from a change and release perspective, whilst the two layers still communicate and pass data between each other effectively.

A “Back to the Future” IBM SOA Solutions for SAP from 2008

Whilst researching in preparation for this blog I was repeatedly drawn back to a prior IBM SAP SOA client white paper at Viessmann that describes SOA (Service-Oriented Architecture) solution enablement to complement the client's prior and significant SAP Business Suite / ERP investments.

This paper essentially described the principles of a flexible, IBM SOA-enabled front-office application integration strategy for client-facing, channel-facing and line-of-business applications, helping Viessmann to deliver increased business flexibility, enhanced customer service and productivity to support the needs of a growing business.

Being a firm believer in “Back to the Future” scenarios within the IT industry, I was naturally happy to look back for a proof point before looking forward again.
The summary paper describing this project is linked in the sources at the end of this post.

In its simplest terms this case seemed to cover a number of the key aspects for a client to consider in a “Post-Modern ERP” scenario.

However, this said, I’m often then challenged by existing EAI (Enterprise Application Integration) teams with the question “but we already have a deployed and working ESB, why do we need an ‘Inner Ring / Outer Ring’ hybrid API / ESB architecture?”, which I’ve attempted to explain in the following two diagrams.

The first diagram comes from an excellent “Integration Throughout and Beyond the Enterprise” IBM Redbook that can be found here.

Figures 1.1 and 1.2 in particular nicely summarize the differences between the prior SOA focus and the SOA + API Economy focus.

Figure 1.2 from API Redbook

Additionally, in the following diagram, after reviewing the APIs for Dummies Wiley book that can be found here, I’ve attempted to summarize the differences and positioning of this dual-ring API / Microservices and ESB / EAI enabled strategy.

High Level Strategy - API Microservices Enablement at 180717

I have then pulled together a couple of diagrams (on the basis that a picture is worth a thousand words) that consider the key factors, from both a business and a technical point of view, in the positioning of API-enabled Microservices vs ESB-enabled Enterprise Application Integration (EAI). Whilst a little busy, they are both largely self-explanatory.

The Wikipedia description of Microservices nicely summarises the combination of loosely coupled, fine-grained services that enable agile and flexible development initiatives, combined with “re-factoring” and/or re-facing of existing systems into the post-modern ERP world.

“In a Microservices architecture, services should be fine-grained and the protocols should be lightweight. The benefit of decomposing an application into different smaller services is that it improves modularity and makes the application easier to understand, develop and test. It also parallelizes development by enabling small autonomous teams to develop, deploy and scale their respective services independently.[1] It also allows the architecture of an individual service to emerge through continuous refactoring.[2] Microservices-based architectures enable continuous delivery and deployment.”
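To make the “fine-grained service, lightweight protocol” point a little more concrete, here is a minimal sketch (assuming Node.js with TypeScript; the service name, route and data are entirely hypothetical, not taken from any real deployment) of a single-purpose microservice that owns one narrow capability and exposes it over HTTP + JSON, so a small autonomous team could develop, deploy and scale it independently:

```typescript
// Minimal, hypothetical "order status" microservice sketch (Node.js + TypeScript).
// It owns one narrow capability and speaks a lightweight protocol (HTTP + JSON).
import * as http from "http";

// Illustrative in-memory data only; a real service would own its own datastore.
const orderStatus: Record<string, string> = {
  "4711": "IN_TRANSIT",
  "4712": "DELIVERED",
};

const server = http.createServer((req, res) => {
  // Single published route: GET /orders/{id}/status
  const match = req.url?.match(/^\/orders\/([^/]+)\/status$/);
  if (req.method === "GET" && match) {
    const status = orderStatus[match[1]];
    res.writeHead(status ? 200 : 404, { "Content-Type": "application/json" });
    res.end(JSON.stringify(status ? { orderId: match[1], status } : { error: "not found" }));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "unknown route" }));
  }
});

server.listen(8080, () => console.log("order-status service listening on :8080"));
```

The point is not the specific code, but that a service this small can be versioned, replaced or scaled independently, without touching the System of Record behind it.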

Recently the University of Manchester has been doing some very innovative work looking at the layered properties of graphene for water filtration, as described and summarised in a BBC Science & Environment item on 3rd April 2017.

Graphene Image from the BBC web site - 3rd April 2017

For me this provided a nice analogy for what we are seeking to do, securely, with an API / Microservices IT architecture, where we have “Inner / Outer Ring” layers of the business and application integration cake that enable loosely coupled clusters of fine-grained, “SOA-like” services to work.

Solutions like IBM’s API Connect provide proven, secure, “appliance based” strategies for the outer ring, whilst integrating with and safely filtering / passing fine-grained, API-enabled data to and from the client's existing inner-ring ESB.
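As a purely illustrative sketch of this “outer ring” role (not a description of any specific API Connect configuration), the following hypothetical facade accepts a consumer-friendly REST call, delegates to an assumed inner-ring ESB service, and whitelists the fields that may be exposed outside the enterprise; all URLs, names and field shapes are made up for the example:

```typescript
// Hypothetical "outer ring" API facade sketch: delegate to the inner-ring ESB,
// then filter the result so only published fields leave the enterprise.
import * as http from "http";

// Simulated inner-ring lookup: a real deployment would call the governed
// ESB (SOAP or REST) endpoint here; the response shape is purely illustrative.
async function fetchFromInnerRingEsb(customerId: string): Promise<Record<string, unknown>> {
  return {
    customerId,
    name: "ACME GmbH",
    creditLimit: 250000,       // internal-only field, must not leave the inner ring
    internalRiskScore: "B+",   // internal-only field, must not leave the inner ring
  };
}

// Whitelist filter: only the fields the published API contract exposes pass outward.
function filterForOuterRing(record: Record<string, unknown>) {
  return { customerId: record.customerId, name: record.name };
}

http
  .createServer(async (req, res) => {
    const match = req.url?.match(/^\/api\/customers\/([^/]+)$/);
    if (req.method === "GET" && match) {
      const filtered = filterForOuterRing(await fetchFromInnerRingEsb(match[1]));
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(filtered));
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(8081, () => console.log("outer-ring facade listening on :8081"));
```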

The diagram below attempts to summarize the differences in nature between an API / Microservices appliance and a typical ESB for Enterprise Application Integration (EAI).

Integration Topologies Inner Outer Ring v3 170717

The next diagram considers the same positioning from a business-into-technology point of view, in terms of grouping and use-case alignment.

Example Mapping Functional Capabilities at 180717

Then we simply need to layer in capabilities from the new “Two Triangles” worlds of SQL-schema-before and SQL-schema-after, “big and little data”, and we have a foundation from which to build.

Supporting Data Management and Information Management Strategies.

In my prior blogs I’ve described the supporting “Two Triangles” (SQL schema before and SQL schema after) data worlds that need to be developed in parallel with a viable API / Microservices strategy. This is very important to avoid the API / Microservices enabled business solutions becoming “islands of information” that are isolated from each other.

A critical objective of an API / Microservices economy is to leverage information insights for strategic and competitive advantage, including both prescriptive and predictive analytics in addition to the more common descriptive analytics.
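To illustrate my reading of the “schema before vs schema after” distinction (often described elsewhere as schema-on-write vs schema-on-read), here is a small, purely illustrative sketch; the record shapes and field names are assumptions for the example only:

```typescript
// "SQL schema before": the structure is enforced when data is written.
interface SalesOrder { orderId: string; customerId: string; value: number; }
const warehouse: SalesOrder[] = [];
function writeOrder(raw: any): void {
  if (typeof raw.orderId !== "string" || typeof raw.value !== "number") {
    throw new Error("rejected at write time: does not match the agreed schema");
  }
  warehouse.push({ orderId: raw.orderId, customerId: String(raw.customerId), value: raw.value });
}

// "SQL schema after": raw events are landed as-is; a schema is projected at read time.
const dataLake: string[] = [];
function landRawEvent(json: string): void { dataLake.push(json); }
function readAsOrders(): SalesOrder[] {
  const parsed: any[] = dataLake.map(e => {
    try { return JSON.parse(e); } catch { return null; }
  });
  return parsed
    .filter(e => e && typeof e.orderId === "string" && typeof e.value === "number")
    .map(e => ({ orderId: e.orderId, customerId: String(e.customerId ?? "unknown"), value: e.value }));
}

// Example usage: the raw event carries extra fields, but the read-time schema
// only projects what the consuming analytics actually need.
landRawEvent('{"orderId":"9001","customerId":42,"value":1200,"channel":"web"}');
console.log(readAsOrders());
```

The design point is that both worlds need to be governed together, so that insights from the "schema after" side can be joined back to the trusted "schema before" Systems of Record rather than forming further islands of information.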

At the risk of a slightly longer blog item, this architectural approach is summarized and described below, where the “Insight into Action” activities may be both API / Microservices enabled and linked to aligned cognitive / intelligent business process optimization tools and IT capabilities.

The Two Triangles Information Strategy v2 170717

Inhibitors and Enablers

In this diagram, I’ve attempted to summarize the key enablers and inhibitors for a successful API / Microservices deployment strategy:

Key inhibitors and enablers for an API Strategy 170717

Critical Success Factors

Ultimately a successful API / Microservices strategy starts with the business digital innovation agenda and strategy and then flows down into the enabling IT capabilities. Whilst initially bottom-up API / Microservices projects are a way to start small and scale fast, ultimately it will require a top-down, strategic, investment-led approach.

Critical Capabilities - Executive IT Architectural PoV at 180717

I would also refer readers to an excellent IBM Institute for Business Value study that looks at innovation in an API Economy, which can be found here.

Open Source and Standards driven enablement and business process optimisation

In my view for an API / Microservices strategy and economy to succeed it requires a clear and long term commitment to Open Source Solutions and the definition of Open / Published API / SOA messaging formats and standards.

IBM has a very significant and clear track record in this area, including, in recent times, the open source contribution of Node-RED in the IoT (Internet of Things) area, which was the subject of a prior blog item.

Conversely, any ERP vendors who attempt to impose aggressive “indirect access” license terms and conditions that essentially prevent the enablement of a successful API / Microservices economy will likely become increasingly isolated islands in time.

Which takes us back full circle to the beginning of this blog: what does a “Post-Modern ERP Application” look like in a world where the ability to digitally innovate, and to successfully integrate traditional Systems of Record / ERP systems with innovative Systems of Innovation, Insight and Engagement, becomes a critical, business-survival, differentiating strategic IT capability?

PS A recent example of front-office business process optimization, automation and integration at Carlsberg can be found here, along with the enclosed YouTube video link.

Sources of further information that are referenced or were researched for this blog include:

Understanding there is a very broad and deep pool of information sources in this area, my principal challenge for this blog was what to leave out rather than what to include.

For example, I left out a pool of material on client SOA / API maturity and capability analysis and step-wise development that is very interesting and critical for most clients, in addition to IBM’s Data First Method, whilst understanding that Rome was not built in a day.

IBM’s API Connect Overview can be found here:

https://www.ibm.com/support/knowledgecenter/en/SSMNED_5.0.0/com.ibm.apic.overview.doc/api_management_overview.html

..and further technical details here:
IBM Redbook – Getting Started with IBM API Connect: Concepts and Architecture Guide

http://www.redbooks.ibm.com/abstracts/redp5349.html?Open

APIs for Dummies – Claus T Jenson

https://public.dhe.ibm.com/common/ssi/ecm/ws/en/wsm14025usen/WSM14025USEN.PDF

Plus, a recent demonstration of integrating simulated data from a back-office ERP system with weather data to dynamically re-route deliveries from ACME Co to retail pharmacies and distributors, leveraging “StrongLoop” capabilities:

The Evolution of the API Economy – IBM Institute of Business Value

IBM Redbook – Integration Throughout and Beyond the Enterprise

https://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg248188.html?Open

The prior Viessmann IBM White Paper re SAP and IBM SOA Solution Enablement

http://www-05.ibm.com/de/solutions/references/download/SPC03045DEEN-Viessmann_Final_EN.pdf

White Paper – IBM SOA Foundation: providing what you need to get started with SOA, 2005.

ftp://ftp.software.ibm.com/software/soa/pdf/SOA_g224-7540-00_WP_final.pdf

Wikipedia Entry re Microservices

https://en.wikipedia.org/wiki/Microservices

Finally, as an example, an IBM Watson IoT architectural point of view (pass the cursor over the various ABBs – architecture building blocks) that combines integration and information building blocks in the landscape:

https://www.ibm.com/devops/method/content/architecture/iotArchitecture

The opinions within this blog are the author's own; they do not represent a formal IBM corporate point of view. Copyrights are respected and/or sources referenced.

Innovation that matters – Node-RED

IoT Innovation that matters – Node-RED

node-red-simple-v2
With 33+ years of Enterprise IT solutions and architectural experience, it’s not often that I come across innovations, ideas and solutions that are truly new and/or transformational.

However, I have to say a recent IBM OpenWorks Node-RED and IoT solution / device integration webinar really made me sit up and take note; the event replay can be found here.

node-red-simple

In the webinar, the team of presenters and founding Node-RED developers from IBM’s Hursley development labs, Nick O’Leary and Dave Conway-Jones (IBM Senior Inventor), in conjunction with Dr Mike Blackstock and Dr Rodger Lea from Sense Tecnic Systems, Inc (and the University of British Columbia), really knocked it out of the park for me.

It looks like an excellent combination of a logical, simple, effective idea with ease of use and flow execution. It combines a set of API-driven device integrations and data inputs/outputs into “flows”, leveraging a flow-based programming model that was originally defined by J. Paul Morrison at IBM in the early 1970s.

Node-RED has now been adopted by the Linux Foundation's JS Foundation (js.foundation).

This model essentially combines a network of asynchronous processes communicating by means of streams of structured data chunks or elements, where each process is in effect “a black box”. Each step does not need to know what went before, or indeed what comes after: it just acts on the data it receives and passes the result on.
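As a minimal sketch of this flow-based, “black box” idea (illustrative TypeScript only; these are not actual Node-RED APIs, and the node names and payloads are made up), each node below transforms the message it receives and passes it on, knowing nothing about its neighbours:

```typescript
// Each node is a black box: message in, message out (or null to drop it).
type Message = { payload: any };
type FlowNode = (msg: Message) => Message | null;

const parseReading: FlowNode = (msg) => ({ payload: JSON.parse(msg.payload) });
const filterHot: FlowNode = (msg) =>
  msg.payload.temperatureC > 30 ? msg : null;            // discard normal readings
const formatAlert: FlowNode = (msg) => ({
  payload: `ALERT: sensor ${msg.payload.sensorId} at ${msg.payload.temperatureC}C`,
});

// Wire the nodes into a flow and push a message through it.
function runFlow(nodes: FlowNode[], msg: Message): Message | null {
  let current: Message | null = msg;
  for (const node of nodes) {
    if (current === null) return null;
    current = node(current);
  }
  return current;
}

const result = runFlow(
  [parseReading, filterHot, formatAlert],
  { payload: '{"sensorId":"boiler-7","temperatureC":42}' }
);
console.log(result?.payload); // -> ALERT: sensor boiler-7 at 42C
```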

In this respect it is similar in concept to one of the founding principles of IBM’s MQSeries – deliver reliably, once and only once – and subsequently MQTT, the lightweight pub/sub messaging protocol that sits behind Facebook chat and the like.

It reminds me of childhood games of “pass the parcel”, where participants could leave (or indeed re-join) the game, whilst executing their specific step until the music stopped.

node-red-flow-overview

The Node-RED solution (which has now been adopted and is being further developed by the Node.js open source community) looks like a very elegant, cost-effective and simple answer to complex IoT / Manufacturing 4.0 device and data integration requirements.

I really enjoyed the IBM OpenWorks talk and would commend it to IT strategy teams and architects that are engaged in front-office / IoT digital innovation solution deployments and platform strategy definitions.

You might also call Node-RED the “yin” to a blockchain / IBM Hyperledger “yang”: both are now founded on significant open source principles, whilst one does not mind what went before or after, and the other applies principles of immutability in a trusted network of asset flows (fiscal or physical).

the-yin-and-yang-bigger-picture
Also this week I really enjoyed the annual British Computer Society Alan Turing Lecture by Dr Guru Banavar, vice president and chief science officer for cognitive computing at IBM.

Guru is responsible for advancing the next generation of cognitive technologies and solutions with IBM’s global scientific ecosystem, including academia, government agencies and other partners. He leads the team responsible for creating new AI technologies and systems in the IBM Watson family that are designed to augment (and not replace) human expertise in a broad range of industries.

For me it was an interesting and logical coincidence that Guru was previously the chief technology officer (CTO) for IBM’s Smarter Cities initiative, where Node-RED also has very logical use cases for building climate control systems and solutions, as described by Dr Mike Blackstock and Dr Rodger Lea in the second part of the Node-RED update.

Guru Banavar designed and implemented big data and analytics systems to help make cities such as Rio de Janeiro and New York more livable and sustainable.

When I put together Node-RED IoT / Manufacturing 4.0 / IBM Watson IoT enabled innovation with IBM’s Bluemix DevOps platform and the IBM Watson cognitive, analytical and Hyperledger capabilities, in a secure Hybrid Cloud, API / Microservices enabled “Choice A” scenario, the opportunity for open source enabled digital innovation seems truly significant!

The opinions within this blog are the author's own; they do not represent a formal IBM corporate point of view.

Developing flexible, business aligned IT innovation and capability delivery strategies

After my prior LinkedIn item on when SAP S/4 HANA makes sense for your business, I received a question through a colleague in respect of a large European client who was effectively defining a “Choice A over Choice B” forward IT strategy.

it-innovation-choices-level-1-02022017

This prompted me to sit down and put some further thought into an IT executive view of a business-into-IT strategy model and approach that could result after an Enterprise client makes Choice A.

A Business Domain / IT Transformation Viewpoint

Getting straight to the point, I then sketched out the following high-level architectural thinking and strategic business-into-IT building-block based approach (having further researched various recent IBM C-Suite and IBM Institute for Business Value (IBV) studies), which places an effective API and Enterprise Service Bus “inner / outer ring” strategy as the key enabling capability.

high-level-business-into-it-strategy-300117
It was also interesting to read a recently (31/01/17) published summary of a study by the DSAG (German-speaking SAP User Group) which, in addition to discussing various S/4 HANA adoption rates amongst the surveyed SAP DACH clients,

highlighted plans to significantly increase forward IT investment and focus on front-office “digital transformation and business / IT enabled innovation projects” within the surveyed German- and Swiss-based companies (with companies similar to Panalpina leading in this area). Up to 60% and 70% respectively of funding is being targeted at these critical, “competitive advantage” or simply “stay in business” IT transformation and investment requirements, in the face of disruptive adjacent-industry or new, physical-asset-lite, digitally disruptive competitors (Netflix vs Blockbuster retail outlets comes to mind).

It is fair to say that a number of industries, including global logistics companies, naturally already have complex, extended digital supply chains which are facilitated by standards-based messaging, APIs and data exchange, with supporting back-office EAI (Enterprise Application Integration) platforms and web-enabled consumer and partner channels.

After creating this high-level model and approach I naturally started to think further about the critical business-into-IT capabilities that are required to ensure success when implementing a business-aligned IT innovation strategy of this type.

When I net these out, they seem to distill into six or seven strategic IT capabilities and imperatives, combined with the individual business's appetite from a forward risk / reward, rate-of-IT-innovation and management / adoption-of-change point of view.

These included the business aligned development, governance and practical implementation of:

  1. An effective API development, brand, portfolio management and delivery strategy
  2. An effective data governance and management strategy to pool, integrate and actively manage data for trusted, timely and accurate business insight
  3. An existing, defined and working ESB (Enterprise Service Bus) application messaging platform or platforms (I refer to this as an “inner ring / outer ring” ESB / EAI strategy)
  4. The appropriate and targeted selection of buy (& integrate) vs build (DevOps) and run application capabilities
  5. Aligned business and IT executive sponsorship, funding and organizational / cultural factors
  6. Structured, thoughtful analysis to select the most appropriate IT capability building blocks
  7. A plan to explore, target and integrate cognitive computing, ML / AI capabilities

Crucial also is timing, as various new and emerging technologies typically traverse the Gartner “technology hype” curves (or Forrester Waves) at different speeds, with specific vendors, technologies and/or platforms emerging to become the “de-facto” standard or dominant provider in a particular function or area.

Within this context it’s clear that some fundamental and basic “table stakes” still apply including:

  1. A real, demonstrable, sustained commitment to Open Source offerings and capabilities
  2. The selection and integration of at scale, viable “top right” quadrant platforms, products and/or technology partners
  3. Appropriate Business into IT funding to phase delivery – Start small, prove, then scale fast
  4. Understanding when to buy vs build – The Factory IT vs Competitive Advantage IT question
  5. Understanding that multi source, highly cost optimized, outsourced IT strategies are relatively unlikely to provide a firm foundation for the agile delivery of new business into IT capabilities – as Business into IT Driven “Digital Innovation Requirements” become ever more critical

This in turn relates to the phased evolution vs revolution question that is described and nicely summarized in the G.A. Moore technology adoption model below.

ga-moore-technology-adoption-model-300117
Often, within individual global enterprises, various business-into-IT delivery programmes will sit in different segments under the “cross the chasm” curve.

In a prior Imperial College business innovation course it was clear that the most successful and effective business innovation strategies and platforms (including Apple’s iPhone) were, in the majority of successful cases, actually combining proven, prior, individual technology components and ecosystem building blocks in new, innovative ways.

It is the innovative new combination of these proven capabilities and technology building blocks, often within new value-based networks, that creates the greatest and/or most disruptive business value, rather than brand new or immature technology.

It was also interesting to observe a recent joint Schaeffler Group / IBM Watson IoT / Manufacturing 4.0 partnership announcement and YouTube video that is grounded on a number of these principles, as described below:

schaeffler-strategy-external

A recent LinkedIn CIO / data management forum item also nicely described effective data management and IoT strategies as the “King and Queen” partners of aligned IT innovation capabilities in the complex game of chess that is successfully implementing viable, long-term IT strategies.

If these are the King and Queen I’d also then say that Hyperledger and blockchain represent the Castles in chess terms, enabling swift directed movement combined with protection and security.

Additionally, in my view, as described in a short YouTube video about IBM’s Hyperledger blockchain pilot system within IBM Global Financing (IBM IGF processes $44 billion of transactions within a network of 4,000 partners, suppliers, shippers and banks), the implemented open-source-based IBM Hyperledger solution provides an “individual client ledger” neutral, secure, immutable, auditable digital asset / document and transaction supply chain, without seeking to force change on the participating partners' back-office platforms, which is costly, risky and typically has extended cycle / lead times vs speed to value.

Approximately 10 years ago a number of my IBM colleagues in our consulting, hosting and global services teams invested significant time and effort in a structured review of large-scale, complex IBM / client project deliveries, both successful (and a few unsuccessful), to help better inform future joint projects and joint IBM / client success.

The output of that study is as valid now, if not more so, than before (in our Hybrid Cloud, cognitive world), in that it logically identified and confirmed what we all know to be true, but which is unfortunately often ignored or lost in the heat of an early project life cycle.

In particular, these structured approaches become even more important as many large, medium and small Enterprise clients seek to successfully deploy and manage relatively complex “Hybrid Cloud” scenarios.

hbr-hybrid-cloud-factory-it-vs-innovation-it-300117

The success of any significant IT initiative crucially depends upon the initial business-aligned requirements definition and upon closely defining and managing the interfaces and hand-offs between the different partners and functions that are described in a 10-box IT operating and innovation model.

The first and most crucial box is the initial terms of reference and requirements definition box, prior to developing a 9-box “design, build and run” model in three layers:

  1. The business transformation requirements, value into the application delivery layer
  2. The business application, integration and data management layer
  3. The IT Infrastructure, platform and IT service delivery layer/s

The success of this model and approach is then defined by the success (or otherwise) of carefully defining and managing the interfaces between the 9 boxes in people, culture, technology, funding, capability, teamwork and strategic terms, as follows:

10-box-model-v1a-300117

One of my client IT architect colleagues working in the retail and consumer products industry also recently highlighted that it has never been more important to manage these interfaces effectively, to avoid the unwelcome emergence of “IT to IT gaps” that will inhibit successful delivery. This sits alongside the critical success factor of selecting and assembling proven building blocks in new, innovative ways, which is at the heart of the most successful business-into-IT innovations:

it-innovation-requirements-flow-300117

The basic rule applies more often than not: assembling proven capabilities and building blocks (using a Lego-like analogy) will typically yield more predictable and effective outcomes.

enterprise-architecture-methods-300117

I hope this item is helpful in highlighting the requirements and prerequisites for successful “Choice A vs Choice B” business-into-IT innovation delivery.

IBM provides a combination of proven, scalable, virtualized building blocks for Enterprise-scale SAP Hybrid or Private Cloud platform delivery, including DB2 v11.1 LUW, IBM POWER8, AIX, PowerVM, Linux, IBM System z with DB2 and/or LinuxONE, and System i with DB2.

Disclaimer: The views expressed by the author in this blog reflect 33 years of experience in Enterprise IT and ERP / application platform delivery; they are my own and do not represent formal IBM views or strategies. Vendor trademarks are respected.


Section 6 – HTAP, OLAP vs OLTP SAP Application Throughput, Optimizations

Historically for SAP Business Suite / ERP / OLTP systems we have all previously used SAP Sales and Distribution (SD) 2-tier benchmark results, both for sizing the significant majority of SAP workloads and to enable “common currency” comparisons of relative SAP server and database throughput at peak 98-99% utilisation levels.

Subsequently SAP introduced the SAP BW Enhanced Mixed Load (EML) test, which in turn has recently been replaced by the new SAP BW Advanced Mixed Load (AML) benchmark test; in my view both are aimed at OLAP-orientated, Business Intelligence (BI), HANA-based workloads.

There has been a significant absence of published SAP SD results for the SAP HANA database platform, whilst SAP SD results continue to be published for Sybase ASE and/or DB2 10.5 etc.

In my experience, when client-related, large Enterprise, intense SAP NetWeaver / ECC transactional (OLTP) and/or batch workloads are executed, for both SAP SD order-entry type transactions and/or a representative mixture of client-customised and SAP-optimised OLTP transactions, it becomes clear very quickly that traditional, mature and optimised row-orientated database platforms like DB2 10.5 offer significant performance, throughput and efficiency benefits. Indeed, this short YouTube video from Coca-Cola Bottling Co highlights very significant improvements in both SAP transactional and batch throughput, whilst concurrently saving ~ $1m in TCO reduction through enhanced rates of SAP DB2 data compression.

Conversely, running an identical SAP SD-like workload “side by side” on both DB2 and SAP HANA, with the same SAP application and database server resources, simply served to highlight the significant “write” (single SQL insert, update and/or delete) penalties associated with running existing customized SAP NetWeaver OLTP / transactional workloads against columnar in-memory data stores vs prior row-optimized SAP NetWeaver rdbms platforms.

With the availability of SAP HANA on Linux on POWER8 (LoP), it is also possible to run a representative “side by side” set of SAP BW 7.4 OLAP queries and reports over both SAP HANA and DB2 10.5 with BLU, using identical, fully virtualised SAP BW application server capacity.

These results are indeed very interesting and confirm the benefits of a columnar in-memory strategy for OLAP workloads, whilst also clearly demonstrating the efficiency and maturity of existing DB2 query optimisers and of multi-threaded, multi-core workload distribution and management with DB2 BLU compared to alternatives like SAP HANA.

Indeed, it was possible to observe both superior scaling and throughput as the workload concurrency and complexity increased with DB2 BLU, whilst using ~ 50% of the configured database server memory capacity and identical SAP BW application server resources.

It was also clear, as described in a prior section, that SAP BW 7.4 “Flat InfoCubes” and/or semantically partitioned flat InfoCubes provided a significant throughput gain on an in-memory columnar platform vs traditional relational row platforms (even parallel ones like DB2 DPF).

Personally, unless a new SAP S/4 HANA “read optimised” application template has been deployed, I view Suite on HANA (SoH) simply as a rather uncomfortable mismatch and “half way house” in application and platform technology terms, one that in my view should be avoided if possible.

I have produced the following chart to highlight my viewpoint in this area:

Throughput Choices 260816

Recently I was also sent a link to a related item on LinkedIn by Shaun Snapp; this item highlights many of the concerns and questions that I also have about the principle of a SAP HANA “one columnar size fits all” workload and platform strategy.

Indeed some observers would suggest this is being driven as much by SAP SE’s commercial desire to displace existing proven SAP NetWeaver rdbms choices like DB2 10.5 and/or Oracle 12c with their own rdbms platform, irrespective of the benefits or otherwise for their major existing SAP Business Suite clients.

My input to existing large Enterprise SAP Business Suite clients with significant, intense and business-critical OLTP workloads would be to ask SAP SE for guarantees that a representative set of critical SAP OLTP and/or batch transactions will perform at a similar or higher level whilst using a similar set of SAP platform capacity, understanding that significant increases in core count and memory capacity to “throw in-memory columnar iron” at an OLTP problem can cause very unwelcome real TCO increases and really hurt prior DC efficiency / Green IT strategies and KPIs.

Disclaimer – This blog represents the authors own views vs a formal IBM point of view

The views expressed in this blog are the author's and do not represent a formal IBM point of view. They do represent an aggregate of many years (20+) of successful ERP / SAP platform deployment and IT strategy development experience, supplemented by many hours of reading respective DB2 and/or SAP HANA roadmap materials and presentations at various user conferences and/or user groups, in addition to carefully reading input from a range of respected industry / database analyst sources (these sources are respected and quoted).

In-memory marketing hype vs reality – Hype Busting

In this section (Section 5), let us briefly look at some of the in-memory marketing hype vs reality, to see if the claims really stack up and what alternatives exist for clients who are worried about the disruption, maturity, risks and commercial lock-in of the new SAP S/4 HANA, SoH and/or SAP BW HANA platform strategy.

This section could also be called a degree of “hype busting”, as we likely need to clearly separate the excellent and pervasive marketing from the technical and solution-deliverable reality.

Is SAP HANA your destination ?
For the more technically minded reading this item, we shall now drop into some relatively technical discussions related to relational databases and systems design. I make no apologies for doing this, as it’s important to help reset or gently correct a number of the relative benefits and themes that are normally associated with SAP HANA and/or S/4 HANA “Digital Core” presentations, including at recent Sapphire and/or SAP TechEd conferences.

Where are we now, in my view, with respect to SAP S/4 HANA adoption rates vs a Gartner-type hype curve?
Gartner Hype Curve

 

In this case I’ll use IBM’s DB2 SAP-optimized data platform as a point of reference. It’s not that Oracle 12c SAP “AnyDB” platform choices don’t share a number of similar capabilities (I’d naturally say we do it better, more efficiently, etc); it’s just that it would be rather technically presumptuous of me to try to represent Oracle’s 12c in-memory cache capabilities without sitting down with them to understand Oracle 12c and their ongoing development roadmap vs SAP HANA in greater detail for SAP NetWeaver and/or SAP BW 7.x workloads, and assuming SAP SE commercially actually wants to best leverage and/or enable these AnyDB capabilities (or not). Hence I won’t attempt to do that in this item.

 “In-Memory” Columnar Myth / Hype Busting – Number 1

Firstly, I know it sounds obvious, but all databases run in computer memory. We are really simply discussing whether the database is organized in a columnar relational form (ideal for analytical / OLAP, “multi SQL select”, read-orientated SQL workloads) or in a row relational form, which is typically used for demanding transactional (OLTP) workloads with higher volumes of single SQL select, insert, update and/or delete statements and/or often row-based batch updates; let’s call these the more traditional read / write OLTP workloads.

Read / write ratios of 70/30, 80/20 or 90/10 are common, with higher write ratios typically observed for demanding OLTP, batch, planning (SCM) and/or MRP manufacturing workloads.
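A small, illustrative sketch (made-up data, no claim about any specific database engine) may help show why the two organisations suit different read / write profiles: a columnar layout lets an aggregate scan touch only the columns it needs, whilst a single-record insert has to touch every column array:

```typescript
// Illustrative contrast between a row layout (OLTP-friendly) and a columnar
// layout (OLAP-friendly) for the same sales line items; all data is made up.
type SalesRow = { docId: number; material: string; qty: number; value: number };

// Row store: each record kept together, ideal for single-record insert/update.
const rowStore: SalesRow[] = [
  { docId: 1, material: "M-100", qty: 5, value: 250 },
  { docId: 2, material: "M-200", qty: 1, value: 99 },
];
rowStore.push({ docId: 3, material: "M-100", qty: 2, value: 100 }); // cheap OLTP-style write

// Column store: each attribute kept as its own array, ideal for scans and
// aggregates over one or two columns without touching the rest of the record.
const colStore = {
  docId: [1, 2, 3],
  material: ["M-100", "M-200", "M-100"],
  qty: [5, 1, 2],
  value: [250, 99, 100],
};

// OLAP-style query: total value per material only needs two columns.
const totals = new Map<string, number>();
colStore.material.forEach((m, i) => {
  totals.set(m, (totals.get(m) ?? 0) + colStore.value[i]);
});
console.log(totals); // Map { "M-100" => 350, "M-200" => 99 }

// OLTP-style write into the column store touches every column array, which is
// one reason heavy insert/update workloads tend to favour row organisation.
colStore.docId.push(4); colStore.material.push("M-300");
colStore.qty.push(7); colStore.value.push(700);
```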

Indeed, the IBM DB2 10.5 BLU “in-memory” columnar capabilities are named after an IBM Research project called “Blink Ultra”, run at IBM’s US West Coast Almaden labs in 2007/8, which effectively observed that by converting prior relational rows to columns in memory, speed-ups of up to x80 in SQL query reporting times could be achieved for the more demanding OLAP / SQL analytical queries.

A detailed research paper from Guy Lohman and his team at IBM Almaden from 2007/8 can be found here, if required.

It’s also true that with DB2 LUW (and/or DB2 on z/OS) IBM has spent many years optimizing the use of relatively moderate amounts of DB2 database cache (DB2 buffer pools) and systems memory to provide optimal throughput with justifiable levels of platform memory investment, whilst persisting data to disk / SAN storage and also sustaining ACID database transactional consistency.

Hence the idea that any one vendor has a unique technology in this area is largely marketing hype from my point of view; for sure, a particular vendor has marketed this capability very effectively, whilst IBM has been less effective with the marketing and likely more effective with an evolutionary, non-disruptive deliverable.

For examples of this DB2 + SAP BW deliverable, refer to a couple of summary YouTube videos at Yazaki (a large, privately owned Japanese manufacturer of custom auto / car wiring looms) and/or at Knorr-Bremse, a large manufacturer of advanced braking systems for trains etc.

Yazaki and Knorr Bremse – SAP BW plus DB2 10.5 BLU videos

In-Memory “Commodity Computing, Multi Core is cheap” Myth / Hype Busting – Number 2

DB2 10.5 LUW (Linux, Unix, Windows) has been optimized to take advantage of the more recent multi-core processor architectures, including both Intel Xeon and POWER (AIX, Linux, IBM i) based architectures, whilst offering a choice of operating system support with ongoing SAP ERP / SAP NetWeaver 7.40 and 7.50 certification, optimization and support through to 2025.

If, for example, we consider the proven and mature Simultaneous Multi-Threading (SMT) capabilities of the IBM PowerVM hypervisor with either AIX / Unix and/or Linux, these capabilities have been extended over time to provide options to switch between one, two, four or eight threads to best match the application workload instruction flow, which is then assigned and executed on multiple CPU cores (up to 12 per socket).

This helps to both increase application throughput and increase IT asset utilization levels.

Indeed, in recent IBM Boeblingen lab tests with DB2 and BLU we tested the relative benefits of SMT 1, 2, 4 and/or 8 for a SAP BW 7.3 and/or 7.4 analytical workload. It was clear during these tests that, for this particular workload, SMT 4 provided an optimal balance of throughput and server / IT asset utilization (CPU capacity, cycle and thread utilization), whilst avoiding the excessive “time slice” based hypervisor thread switching that can significantly hamper the throughput of alternative, less efficient hypervisors serving the Intel / Linux or “WINTEL” market.

With IBM’s POWER, for both DB2 10.5 / v11.1 and/or HANA on POWER, we typically observe x1.6-x1.8 greater throughput per POWER8 core (vs alternative Intel processors), supported in balanced systems design terms by roughly x4 the memory and/or IO throughput compared to alternative processor architectures.

For example, if you have a demanding SAP IS-Utilities daily, monthly or quarterly billing batch run for tens of thousands of your utility customers with SAP ERP 6.0 / SAP NetWeaver, the combination of DB2 10.5 and POWER8 with AIX 7.1 (and/or Linux) is really very hard to beat in batch throughput, availability, reliability and delivered IT SLA / data centre efficiency terms.

In parallel considerable and ongoing DB2 development lab efforts have resulted in DB2 10.5 SAP platform solutions that also fully leverage modern “Commodity” Intel based multi core cpu architectures, hence this is not a SAP HANA rdbms unique capability by any means.

During mixed SAP or other ISV application workload testing it’s true to say that some ERP / ISV applications better exploit multi-threaded CPU architectures and modern OS Hypervisors than others.

This remains as true for the various SAP Business Suite / SAP NetWeaver (or indeed S/4 HANA) workloads as for other ISV workloads, where multi-threaded application re-engineering and optimization typically takes many months and/or many man-years of effort. Indeed, at one SAPPHIRE (2014) Hasso Plattner (co-founder and chair of the SAP supervisory board) reflected on the significant and ongoing effort to re-optimize many millions of lines of ABAP code in the existing SAP NetWeaver core platform for S/4 HANA, in addition to the subsequent CDS “push down” initiatives briefly mentioned before.

Also, as previously mentioned in my prior Walldorf to West Coast blog, I’m rather reserved about the later upgrade complexity and costs I’d previously observed in a Retek / Oracle Retail scenario, which pushed retail merchandising replenishment (RMS) functionality down from the client's specific application configuration, through the Oracle application tier, into the Oracle 10g rdbms tier leveraging PL/SQL stored procedures.

For sure this helped to speed up key replenishment batch runs vs prior IBM DB2 or IMS based mainframe platforms, however with the later penalty that the overall Retek RMS or WMS solution stack became very tightly coupled and interdependent in version terms.

It also essentially limited (like SAP HANA) the Oracle Retail / Retek platform rdbms choice to one only, where later application version upgrades were really very significant “re-implementations”. Conversely, the prior segregation and separation of application and rdbms duties in a SAP IS Retail / NetWeaver deployment helped to reduce or mitigate this issue.

Hence, in this case, the structured development and enablement of SAP’s Core Data Services (CDS) interface between the application and deeper database functionality becomes vital for SAP clients.

It’s also true to say the functional depth and breadth of capabilities being built into SAP HANA is very impressive; however, this does mean a high rate of change, patching and version upgrades that in turn will need to be aligned to Vora / Hadoop platform versions.

In an Intel environment, DB2 10.5 and/or 11.1 LUW also naturally leverages Intel / Linux and/or Windows “Hyper-Threading” (typically dual threads per physical processor core).

In my view the myth here is that, per se, Intel multi-core architectures are inherently cheaper than alternative mature Type 1 (or Type 2) hypervisor implementations on IBM’s POWER or IBM System z (refer to this item for a summary of the difference between Type 1 and Type 2 hypervisors).

For example, within IBM we internally consolidated many thousands of prior distributed Unix / AIX / Linux systems and applications onto a limited number of large IBM System z servers running Linux with a highly efficient and mature Type 1 hypervisor; this was in fact significantly cheaper, and considerably more efficient in Green IT and DC PUE terms, than alternative distributed computing options.

I’m not saying here that Intel / VMware ESX or Linux-based hypervisor solutions don’t also provide considerable IT efficiency and platform virtualization opportunities; they do. It’s just that I rarely favour a “one size fits all”, binary IT platform strategy.

In my experience a single platform strategy rarely works for the largest global Enterprises (it’s likely rather different for small and medium sized enterprises).

Typically, implementing a “one size fits all” strategy forces rather uncomfortable compromises for very large Enterprise-scale clients, who often naturally both virtualize and tier out their server and storage platforms (increasingly also in Hybrid Cloud deployment patterns) to match the requirements of different workloads and/or delivered business-driven IT SLAs and real-life, practical, delivered TCA / TCO and cost / benefit positions.

For sure it’s relatively easy to compare an older or partially virtualized Unix / Oracle environment with a fully virtualized x86, Intel, VMware / Linux scenario (or Intel / Linux cloud) and demonstrate TCO / TCA savings; however, these often tend towards being potentially rather misleading “apples and pears” comparisons, versus comparing one rdbms platform under load against another on the same platform and operating system for the same set of OLAP or OLTP workloads (a much more balanced comparison).

The intense IBM focus is really on the most efficient use of the available systems resources (cores, memory and IO), in combination with increased IT agility and responsiveness, to help optimise Enterprise data centre efficiency (some call this Green IT) whilst minimizing the required input power (often measured in megawatts) for larger DCs, as measured by the data centre efficiency ratio (PUE).

In this area, with the consumption of many GB and/or TB of RAM and many thousands of cores (for large Enterprise SAP landscape deployments), the SAP HANA architecture can be very costly indeed in DC efficiency terms, in particular given the limitations currently associated with the virtualization of on-premise HANA production environments.

In-Memory “Data Compression Rate and TCO Savings” Myth Busting – Number 3

With DB2 10.5 “Adaptive and Actionable” compression we often observe and sustain 75-85% rates of DB2 DATA compression (call it a 5:1 compression ratio).

In particular, with DB2 BLU columnar conversion of targeted SAP BW tables, we leverage advanced Huffman encoding, in addition to significantly reducing the requirement for aggregates and indexes, resulting in compression rates of 85-90% or more (vs prior uncompressed baselines), depending on the specific nature of the client's SAP BW 7.x tables.

For SAP BW with HANA, ratios of 3.5:1 to 4:1 vs uncompressed may typically be observed (client data depending, etc).

Hence, in these scenarios, clients implementing SAP HANA columnar strategies will actually likely observe a reduction in compression rates if they are already using either DB2 10.5 adaptive and/or DB2 10.5 BLU actionable compression with SAP BW 7.x.

This is in addition to “doubling up” the required memory for SAP HANA working space, whilst sizing combined SSD / HDD (solid state or hard disk drive) storage at FIVE times the compressed data for HANA database persistence (x4) and HANA logs (x1).

In these scenarios the client will actually observe a significant net increase in SSD / HDD or SAP HANA TDI based SAN storage capacity, not a reduction as often claimed in SAP HANA marketing presentations and brochures, in particular when these differences are then multiplied up over the multiple SAP environments of a real-life SAP landscape (Dev, QAS, Production, dual-site DR, Pre-Production, Training etc, operating in either a dual or single track landscape on the path to production from Sandpit / Development, through QAS, Pre-Production and Regression to Production).
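To make the arithmetic concrete, here is a purely illustrative worked example using the rule-of-thumb ratios quoted above for a hypothetical 10 TB of uncompressed data; the DB2 buffer pool fraction is my own assumption for illustration, and any real sizing is client-data dependent:

```typescript
// Purely illustrative arithmetic using the rule-of-thumb ratios quoted above;
// real sizings are client data dependent and need a proper sizing exercise.
const uncompressedDataTB = 10;

// SAP HANA side (per the text: ~4:1 compression, ~2x memory for working space,
// SSD/HDD sized at ~5x compressed data for persistence (x4) plus logs (x1)).
const hanaCompressedTB = uncompressedDataTB / 4;        // 2.5 TB compressed
const hanaMemoryTB = hanaCompressedTB * 2;              // 5.0 TB of RAM
const hanaStorageTB = hanaCompressedTB * 5;             // 12.5 TB of SSD/HDD

// DB2 10.5 side (per the text: ~5:1 adaptive compression, with buffer pools
// using a moderate fraction of the compressed data rather than holding it all).
const db2CompressedTB = uncompressedDataTB / 5;         // 2.0 TB on disk
const assumedBufferPoolFraction = 0.25;                 // assumption for illustration only
const db2MemoryTB = db2CompressedTB * assumedBufferPoolFraction; // 0.5 TB of RAM

console.log({ hanaMemoryTB, hanaStorageTB, db2CompressedTB, db2MemoryTB });
```

Multiplied across the Dev, QAS, Pre-Production, Production and DR environments of a full landscape, differences of this order compound quickly.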

Naturally, for both SAP HANA and indeed DB2 10.5 BLU with SAP BW, clients can complete older-data housekeeping and/or BW NLS archiving: in the case of DB2 BLU using a common BLU NLS archiving capability, and for SAP HANA + BW it is currently Sybase IQ based BW NLS archiving, potentially Hadoop / Vora in the future.

This diagram is often presented at SAP Sapphire and/or TechEd conferences to summarize the target, potential SAP HANA storage savings with Suite on HANA for SAP’s own ERP deployment.

HANA Storage Reduction Screen Shot

However, what is rarely mentioned is that ~ 3.x TB of older table data from a prior Oracle to DB2 (on HP-UX, Superdome) SAP ERP migration was removed ("cleaned up" in housekeeping terms) in advance of the ERP on HANA migration. This gives a very different picture of the realized data compression rates vs the prior, partially compressed, older DB2 environment; however, that would then rather spoil a good SAP HANA marketing chart!

Naturally, the additional new hardware capacity investment that is required is then of significant commercial interest to the multitude of Intel / SAP HANA platform providers (either HANA appliance and/or consolidated TDI SAN based).

For example, I created this simple chart to reflect the relative SAP DB2 BLU and SAP HANA memory sizing ranges, understanding that for both technologies the outcome is also dependent on the individual client's table data.

Relative SAP HANA vs BLU memory sizing

In my experience this seems to have turned into a bit of a server and storage hardware vendor feeding frenzy, with multiple h/w vendors rushing to endorse the whole SAP S/4 HANA adoption story for obvious reasons, whilst largely ignoring prior existing, proven, incremental SAP platform solutions!

In-Memory “Future Optimization, SAP Roadmap” Myth Busting – Number 4

In many SAP Enterprise client engagements, I receive the following comment: “but we have been advised that we will miss out on future SAP application optimizations if we don’t migrate to a SAP HANA rdbms and/or S/4 HANA ‘Digital Core’ sooner rather than later”.

These comments are often made irrespective of the actual, real-life S/4 HANA adoption rates, which are a very small fraction (of the order of 1%) of the installed SAP Business Suite / NetWeaver base; such is the largely sales-incentive-driven pressure on SAP sales and technical sales teams.

At best this is only partially true: SAP continues to enable and develop a “Core Data Services” (CDS) rdbms abstraction layer that creates a logical structure for the push-down and optimisation of SAP HANA “re-optimized” application code to the rdbms database tier.

Consequently and logically, IBM with DB2 (and indeed Oracle with 12c) continues to develop, optimize and align DB2 capabilities to SAP NetWeaver CDS functionality, which incidentally is supported and certified with SAP NetWeaver 7.40 and 7.50 with DB2 10.5 (and above) through to 2025.

Additionally, for IBM Financial Services clients, CDS has typically been deployed in conjunction with Fiori transactional applications to significantly improve SAP usability, whilst protecting the client's investment in IBM System z and/or DB2 on z/OS with Linux or AIX SAP application server capacity.

In practical terms this means that ongoing SAP HANA based SAP ABAP code re-engineering and optimization efforts (there are many many millions of lines of single stack ABAP and/or prior dual stack ABAP / JAVA code) are aligned via CDS to optimized rdbms alternatives like DB2 and/or Oracle 12c in the near and mid term IT investment and planning horizon.

At Sapphire NOW 2016, I picked up a number of initial comments that the “Suite on HANA” SAP HANA compatibility views would only be developed and sustained for a finite period (until 2020), allowing clients a more limited time to migrate to the new simplified SAP S/4 HANA Enterprise Management code streams and table structures (the new Universal Ledger in Simple Finance, as an example).

From a personal point of view, deploying an existing, deeply customized, regional or global SAP NetWeaver / ECC application template that has been “read / write” optimized for existing rdbms platforms over many years onto HANA (SoH) is likely an application and rdbms platform mismatch.

It is likely more logical to implement a new, simplified S/4 HANA Digital Core “read optimised” application template over a HANA columnar rdbms platform. This assumes the required application functionality is available and that the business is willing to remove or remediate prior customizations to align to a forward SAP S/4 HANA digital core roll-out and transition strategy.

However, it is also becoming clear that, in addition to the prior SAP Business Suite / NetWeaver code line (and the various PAM-defined OS/DB supported combinations), the SAP HANA initiatives have created at least four, if not more, different SAP S/4 HANA “simplified” code lines or releases, including:

  1. Simplified S/4 HANA solutions hosted on the HANA Enterprise Cloud
  2. The prior S/4 HANA Simple Finance (sFin v1) code, maintenance and release line
  3. S/4 HANA Enterprise Management and Simple Finance v2 “On Premise” code & release line
  4. The S/4 HANA Enterprise and Simple Finance “On Premise” code line but HEC hosted

The clear risk for both SAP SE and SAP Enterprise clients is that there is simply a switch from developing, managing, testing and releasing multiple “AnyDB” OS/DB choices over a single SAP Business Suite / SAP NetWeaver code stream, to managing, aligning and releasing multiple S/4 HANA editions and code lines (on or off premise). This is just a different set of complexities to manage, but with the new restriction of prior client “AnyDB” choices; it is not, in my view, SAP HANA “simplification”.

S/4 HANA Simplification?

In-Memory “Commodity / Cloud Based TCO Reduction” Hype Busting – Number 5

In our industry we are observing the convergence of multiple significant structural changes, where previously we would typically deal, relatively speaking, with a single significant structural change every 3-5 years (desktop computing, client / server, distributed computing, the emergence of Eclipse, Java, Linux open source etc).

Today we have to manage and prioritize limited IT investment resources over multiple concurrent significant structural changes (mobile devices, IoT, public / hybrid cloud, big data, significant cyber security threats). Some of us older folks with many years in IT (and a few grey hairs) might suggest some of these themes are being a little “over hyped” in IT industry fashion terms; hence we tend to take a cautious view, then ask the harder “but, so what?” questions, helping to sort out material delivered benefits, ROI and progress from the considerable IT industry hype (it is a bit of a fashion industry, after all!).

In my view it’s perfectly possible to architect, build and deploy an “at scale”, fully virtualized SAP Private Cloud that is every bit as efficient (if not more so in data centre efficiency / PUE terms) as either a hybrid public / private cloud based on AWS (Amazon Web Services) and/or MS Azure platforms built on Intel commodity ODM (Original Design Manufacturer) 2- or 4-socket servers.

Indeed, the author was directly involved in and responsible for the successful deployment of a fully virtualised IBM DB2 SAP Private Cloud in support of ~ 8 million SAPS and 600+ strategic SAP environments, with ~ 12 petabytes of fully virtualised and tiered SAP storage capacity spread over dual global data centres, with WAN acceleration to support prior SAP GUI, SAP Portal and/or Citrix enabled SAP clients, leveraging DB2 with PowerVM and AIX. In practical terms it remains a highly efficient, flexible and scalable SAP platform in support of a 50+ Bn Euro (~ $75 Bn annual turnover) consumer products business.

In this case, as briefly mentioned in a prior blog section, we completed detailed modelling of a SAP HANA appliance-based deployment over 4 regions and 4 at-scale workloads / SAP landscapes (ECC, APO/SCM, BW, SAP CRM), with dedicated production appliance capacity and VMware ESX / Intel virtualized capacity for the smaller non-production SAP HANA instances, plus a shared, common TDI-based storage strategy. This carried a DC TCA (Total Cost of Acquisition) premium of between 1.5 and 1.6 times over the existing virtualised, tiered IBM DB2 SAP and IBM POWER deployment strategy.

On one SAP HANA video a x10 landscape capacity reduction was indicated; however, this really did not correlate in any way with the actual worked example mentioned above.

For sure, I would not debate the agility, flexibility and initial responsiveness (assuming the required VPN links, security and data encryption needs are met) of AWS, MS Azure and/or indeed IBM’s own SoftLayer cloud offerings for the rapid provisioning of DevOps-enabled front-office, big data and/or next-generation mobile-enabled application workloads, including S/4 HANA or indeed SAP NetWeaver with DB2 10.5 and/or CDS, which is also available on MS Azure, AWS and/or IBM’s SoftLayer / CMS4SAP platforms.

The crucial factor here is a proper baseline and measurement of the “before and after” environments, and avoiding the considerable temptation to compare different “apples and pears” generations of SAP platforms, which rather mixes up the whole TCO analysis and results equation.

I consistently observe cloud TCO comparisons of prior "legacy", partially virtualised, older generation Unix / RDBMS systems against fully virtualised Intel x86 cloud environments. These old vs new comparisons can be rather misleading and should, in my view, be taken with a large and rather cynical pinch of salt.

Any TCA / TCO comparison should really use like-generation CPU / virtualisation platforms and virtualised, tiered storage, combined with current generation RDBMS platform choices. For example, comparing an older version of Oracle (or indeed DB2) on a prior Unix platform generation with a fully virtual x86 cloud running an initial development SAP HANA + SAP BW scenario (including any risk of noisy neighbours, unless dedicated capacity is deployed) can be very misleading, whilst potentially creating impressive but equally misleading headlines during cloud vendor marketing events and presentations.

In-Memory “IT Agility, Sizing, Solution Responsiveness” Hype Busting – Number 6

After many years of SAP and/or ERP platform sizing experience, we all understand that sizing complex SAP system landscapes is partly a science (user input on expected user and transaction volumes, data volumes, expected user and data growth rates, expected roll out rates and planning horizons, workload scalability testing, client specific PoCs, etc.).

This science is then combined with detailed prior experience and judgement on the likely system sizing variation and future growth rates after SAP application configuration and customisation, along with catering for the typically changing business requirements and/or fluid ERP / SAP roll out schedules by country or region, across different SAP ERP and/or related non-SAP systems alignment and integration requirements.

In this context it really nets out to one of two sizing strategies (plus a common overlay on refresh cycles), particularly when SAP HANA appliance vs TDI strategies are being considered.

  1. The "appliance based model". Define the target environment and future growth horizon, then add a safety margin for errors and unexpected changes in inbound demand (an increasingly frequent issue). You then deploy the targeted 2, 4, 8 or more socket / server appliance building blocks, applying the appropriate data compression rates and GB / TB of RAM sizing methods.
  2. An "on demand" model (in IBM we call it Capacity Upgrade on Demand, "CUoD"). Here you size a scalable platform with active live and/or "dark" CUoD capacity that is then activated on demand when the actual workload requirement is known, rather than relying solely on the initial SAP ERP sizing estimates. Both approaches are contrasted in the sketch after this list.
  3. On top of these two models you then consider the realistic IT / ERP platform technology and capacity refresh cycle against the expected roll out schedules, workload and data growth rates, to ensure you don't break the target capacity building blocks for peak vs average demand over a typical 3-5 year IT asset write down cycle.
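The following minimal sketch contrasts the two sizing approaches described above. All figures are hypothetical; real SAP sizing relies on SAPS and memory estimates, workload testing and client-specific growth assumptions, none of which are reproduced here.

```python
# Hypothetical sketch contrasting "appliance" up-front sizing with a
# Capacity Upgrade on Demand (CUoD) style model.

def appliance_sizing(estimated_gb: float, growth_rate: float, years: int,
                     safety_margin: float = 0.2) -> float:
    """Size the full appliance up front for the planning horizon plus a margin."""
    projected = estimated_gb * (1 + growth_rate) ** years
    return projected * (1 + safety_margin)

def capacity_on_demand(initial_gb: float, actual_gb_per_year: list[float]) -> list[float]:
    """Activate capacity year by year as the real requirement becomes known."""
    active = initial_gb
    activated = []
    for actual in actual_gb_per_year:
        active = max(active, actual)   # switch on 'dark' capacity only when needed
        activated.append(active)
    return activated

# Hypothetical example: 2 TB estimate, 25% yearly growth, 3 year horizon.
print(f"Appliance sized up front: {appliance_sizing(2048, 0.25, 3):,.0f} GB")
print(f"CUoD activation by year:  {capacity_on_demand(2048, [1800, 2600, 4100])}")
```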

These rules mostly apply irrespective of the SAP solution cloud deployment model (hybrid, public, private) selected to match the various development and roll out phases. The chart below, which I defined back in February 2005, describes this typical Enterprise SAP ERP workload and roll out cycle (just to prove some things don't really change as much as we might imagine!).

Dynamic Infrastructure Sizing

One of my very experienced SAP platform solution architect and sizing colleagues said he felt that sizing SAP HANA appliance based landscapes (vs fully virtualised System p + DB2) was a bit of a "back to the future" experience in SAP / IT platform sizing, server capacity and life cycle / refresh terms.

For example, there are significant penalties in capacity, disruption and building block upgrade terms if the initial SAP HANA sizing is incorrect, in addition to the typical 24-36 month refresh frequency on commodity Intel x86 platforms.

This means that selecting the wrong sized SAP HANA appliance typically leads to rather uncomfortable conversations at the CIO, CTO and/or CFO level when these appliances need to be refreshed, often in advance of the typical 4-5+ year Enterprise IT asset write down cycles and System of Record technology refresh terms.

In my view it is very important for these technology refresh cycles to be factored into any SAP platform TCO / TCA analysis. In one prior large Retail scenario we used 3-4 years for Intel / Linux, 4-6 years for POWER / DB2 and 6-8 years for mainframe System z with DB2 (with either Intel Linux or POWER AIX application server capacity), which aligned to the client's scenario and two of their 5 year fiscal write down / budgeting cycles.
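The sketch below shows, with purely hypothetical acquisition costs, why the refresh cycle matters: the same spend annualised over different refresh cycles gives very different yearly figures. The cycle lengths loosely follow the ranges quoted above; the cost values are illustrative assumptions only.

```python
# Illustrative only: annualising a hypothetical acquisition cost over
# different platform refresh / write-down cycles.

platforms = {
    "Intel / Linux appliance": {"tca_musd": 6.0,  "refresh_years": 3.5},
    "POWER / DB2":             {"tca_musd": 8.0,  "refresh_years": 5.0},
    "System z / DB2":          {"tca_musd": 12.0, "refresh_years": 7.0},
}

for name, p in platforms.items():
    annualised = p["tca_musd"] / p["refresh_years"]
    print(f"{name:25s} ~{annualised:.2f} $M per year")
```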

If you end up refreshing "commodity" technology frequently, or with a proliferation of different appliance based solutions (with large volumes of cores in the data centre to install, manage, power, cool and maintain, with typical DC power to cooling ratios of 1.5-1.7 times), this can quickly become a rather costly and inflexible SAP platform strategy.

Personally I prefer to deploy a proven, scalable, flexible virtual platform upfront and then scale as required through Capacity Upgrade on Demand (CUoD) options. This helps to effectively manage business driven changes in requirements, unexpected mergers / acquisitions etc.

However, if you have an existing workload that is stable, with clear growth rates, and can deploy it over an appropriate appliance building block after a detailed PoC to help with sizing, this can also work. It is then really all about unexpected workload growth, which is often driven by later mergers, acquisitions, disposals and/or business driven SAP platform consolidation activity.

Indeed, only last weekend I was reading about the continued significant rate of mergers, acquisitions and consolidation ongoing in the FMCG / Consumer Products industry.

In these scenarios, suddenly finding that your core SAP ERP "System of Record" platform now needs to scale by a factor of 3 or 4 times (vs 1.5-2 times) is actually not that uncommon, as the back office functions for two substantive businesses need to be merged into a single SAP instance / template and platform to realise prior or committed merger / acquisition savings and economies of scale.

It is for sure a case of buyer beware: the age old golden rule of making sure your target ERP platform has at least 2x capacity headroom has never been more true. If you "tight size" it, it will for sure hurt later; please refer to the following "SAP HANA – 7 Tips and resources for Cost Optimizing SAP Infrastructure" blog:

https://blogs.saphana.com/2014/11/06/7-tips-and-resources-for-cost-optimizing-sap-hana-infrastructure-2/

For sure, cloud / IaaS based models can help with initial project agility and responsiveness, and can even help to size "model" configured environments. But per se it is still important not to simply assume that a cloud / commodity model is always cheaper than an effectively designed and deployed, virtualised "Private Cloud" or hosted "Private / Hybrid Cloud" model, in particular if you are implementing at scale over a 4-5+ year write down cycle vs 12-36 months.



SAP NetWeaver Core + Best of Breed / SaaS Strategic IT Alternative Investment Choices?

In this case the focus is on speed to value and business into IT driven competitive advantage, coming back full circle to Choice A vs Choice B again.

This leaves Enterprise IT decision makers and Enterprise IT architects facing the following choice:

SAP Digital Core Propensity v2
Whilst this diagram looks complex and multi-dimensional in terms of its various axes and considerations, it is really relatively simple.

The Enterprise client maps their planned "as is" and "to be" positions onto the "two cheeses": the evolutionary, hybrid green area, or the more revolutionary and potentially more disruptive all-SAP S/4 HANA Digital Core orange area.

Essentially the client has to decide where and when they map into these choices, from an "as is" today and "to be" future perspective.

Another, similar way of looking at the choice now facing Enterprise IBM SAP clients is the following view on the initial Choice A vs Choice B diagram:

Simplified Choice ?

Basically, SAP NetWeaver Enterprise ERP clients are now being asked to make a rather complex and difficult choice. The first option is to invest their typically limited IT resources in a SAP S/4 HANA "read optimised" back office custom template remediation, simplification and transformation, typically combined with the integration of SAP centric front office / SaaS solutions (SuccessFactors, Ariba, Hybris, Concur, Fieldglass) via SAP HANA Cloud Integration (HCI) and/or the HANA Cloud Platform (HCP).

For some SAP / IBM Enterprise clients this is a logical and good choice; in essence their IT strategy is then SAP S/4 HANA Digital Core and Extended, "all orange" aligned (in effect as summarised by the CIO of Nestle in the Day 2 keynote with Rob Enslin at Sapphire Now, Orlando 2016, although he did comment that they were still having to pressure SAP SE into developing better integration of S/4 HANA and the new HEC / HCP hosted portfolio solutions).

Are you going for an "All Orange" SAP S/4 Digital Core or a SAP NetWeaver Core + Best of Breed?

However, in conversations with various CIOs, CTOs and/or Chief / Enterprise SAP architects, I've noticed that some Enterprise clients, often with constrained IT budgets, actually prefer the second option: leverage their existing SAP NetWeaver ERP 6.0 template and prior significant regional or global roll out investments, whilst integrating "best of breed" front office hybrid cloud / SaaS based solutions via SOA based standards and API enabled integration buses, appliances and/or cloud based API integration services or vendors, to increase the delivered IT speed to value.

In these cases I've observed a noticeable switch in IT investment priorities: from a prior "SAP first" back office stance, to leveraging the existing SAP ERP NetWeaver core (to realise the ROI from prior significant SAP roll out investments) combined with API integrated best of breed / SaaS cloud based alternatives that deliver the speed to value increasingly demanded by "digitally aware" Line of Business (LoB) users, who typically have rather limited interest in, or time for, large scale, complex back office "systems of record" transformation projects. A minimal sketch of this integration pattern follows.
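The snippet below is a minimal, hypothetical sketch of the "NetWeaver core + API integrated best of breed" pattern: a front office SaaS event is pushed through an API gateway into an existing SAP ERP core via an OData-style service. The URLs, service names and payload fields are illustrative assumptions, not a real SAP or gateway API.

```python
# Hypothetical sketch: forward a SaaS-originated order into the SAP system of
# record via an API gateway. Endpoint names and payload fields are invented.

import requests

API_GATEWAY = "https://api.example.com"                      # hypothetical gateway
SAP_ORDER_SERVICE = f"{API_GATEWAY}/sap/odata/SalesOrderSrv/Orders"

def push_front_office_order(order: dict, api_key: str) -> str:
    """Post a front-office order to the back-office order service."""
    response = requests.post(
        SAP_ORDER_SERVICE,
        json={
            "customerId": order["customer_id"],
            "items": order["items"],
            "channel": "DIGITAL_FRONT_OFFICE",
        },
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["orderNumber"]

# Example call (requires a live gateway, so shown commented out):
# push_front_office_order({"customer_id": "C123", "items": [{"sku": "A1", "qty": 2}]}, "token")
```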

Whilst facing these choices, existing SAP clients may choose to read the SAP Nation 2.0 book published by Vinnie Mirchandani, author of The New Polymath.

https://www.amazon.co.uk/SAP-Nation-2-0-empire-disarray-ebook/dp/B013F5BKJQ

On page XII of the first edition he describes, with further segmentation and detail, the various choices SAP clients are now making: the Un-adopters, Diversifiers, Pragmatists and the Committed.

The views and choices expressed in my summary view in Section 1 really describe a combined Diversifier / Pragmatist position as Choice A and the committed "all orange" position as Choice B.

Here I'm assuming that practically reversing out of often significant prior SAP ERP / Business Suite investments (in sunk ERP platform investment terms) is as painful as going fully committed and "all orange" with Choice B (in terms of loss of future commercial leverage, the risk of IT vendor lock in, and slower innovation relative to open source technology).

This prioritisation of strategic IT investments aligns with recent commentary from Philip Howard at Bloor Research (http://www.bloorresearch.com/profiles/philip-howard/), who effectively summarised the relative CEO IT investment priorities from the 18th annual PwC CEO survey as follows (there are similar surveys from Gartner and from IBM's Institute for Business Value CxO / CIO studies):

Strategically Important Technologies - Bloor and PwC

Additionally, the following IoT / API hybrid cloud architectures are now emerging to integrate prior Systems of Record (SoR) platforms into API / IoT open platform enabled hybrid cloud architectures, with Docker containers rapidly and strongly emerging as the Open Source container technology that practically enables these architectures from an IT platform deployment and management perspective.

The trends driving the integration market and the aligned IT architecture strategies are summarised below:

Trends driving the Integration Market

An example of an architectural approach to addressing these trends is as follows:

An architecture for Digital Business

Which aligns to trend number three as follows:

Integration Trend 3 Digital Transformation

In effect, in this latter case the Enterprise clients are really betting on the higher rate of innovation that is typically observed over time in an Open Source environment and an API enabled SaaS / hybrid cloud scenario and/or community. A containerisable microservice of the kind these architectures assume is sketched below.
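As a minimal sketch of the building block these API / container architectures rely on, here is a small, stateless REST microservice that exposes system-of-record data and could be packaged into a Docker container. The endpoint, data and port are illustrative assumptions; in a real deployment the service would call the ERP system of record rather than a static stub.

```python
# A minimal, containerisable API microservice sketch using Flask.
# The material data below is a stub standing in for a call to the ERP core.

from flask import Flask, jsonify

app = Flask(__name__)

MATERIALS = {"M-001": {"description": "Pump", "stock": 42}}  # illustrative stub data

@app.route("/api/v1/materials/<material_id>")
def get_material(material_id: str):
    """Return basic material master data for the given id."""
    material = MATERIALS.get(material_id)
    if material is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(material)

if __name__ == "__main__":
    # In a container this would typically sit behind a production WSGI server;
    # Flask's development server keeps the sketch self-contained.
    app.run(host="0.0.0.0", port=8080)
```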


 

Will Open Source Enabled Big Data, IoT / API Enabled Innovation Prevail – Yes or No?

In this section (section 4) we consider the question: will Open Source enabled Big Data and IoT / API enabled innovation prevail – yes or no?

It is also clear that unless IT functions embrace and lead in an API / IoT enabled economy, we will continue to see the development of "Shadow IT" capabilities that are closely aligned to, and embedded within, the individual lines of business (sales, marketing, supply chain, manufacturing, distribution, multi-channel, partner enablement).

Indeed I believe we will continue to observe a switch from Business to Consumer (B2C) towards Business to Individual (B2I) insight based, targeted enablement, using location, weather, preference and event insights (which follows IBM's acquisition of The Weather Company, in addition to IBM's prior alliances with Twitter, Apple and, more recently, Cisco in the IoT / edge and data analytics area).

Indeed IBM already delivers solutions in this area with our Metro Pulse solution for Consumer Products industry clients, where multiple sources of unstructured or semi-structured Big Data ("SQL schema after") and/or Little Data ("SQL schema before") are seamlessly combined with location, weather, preference, local event, historical POS and promotional data to increase sales and product availability in "metro", city based locations like London, New York or Singapore. A hypothetical sketch of this kind of data blending follows.
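The snippet below is a minimal, hypothetical illustration of the kind of data blending described above: joining point-of-sale data with weather and local event data by store and date to look for uplift signals. The data, columns and "hot day" logic are illustrative assumptions, not the Metro Pulse implementation.

```python
# Hypothetical sketch: blend POS data with weather / local-event data by
# store and date, then flag conditions that might correlate with uplift.

import pandas as pd

pos = pd.DataFrame({
    "store": ["LDN-01", "LDN-01", "NYC-05"],
    "date": ["2017-07-15", "2017-07-16", "2017-07-15"],
    "units_sold": [120, 180, 95],
})
weather = pd.DataFrame({
    "store": ["LDN-01", "LDN-01", "NYC-05"],
    "date": ["2017-07-15", "2017-07-16", "2017-07-15"],
    "max_temp_c": [22, 31, 27],
    "local_event": [None, "street festival", None],
})

blended = pos.merge(weather, on=["store", "date"])
blended["hot_day"] = blended["max_temp_c"] >= 28   # illustrative threshold
print(blended[["store", "date", "units_sold", "hot_day", "local_event"]])
```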
A high rate of innovation (and change) is also currently being observed in the Big Data platforms and analytics solutions area, where it seems the majority of the Enterprise IT architects and clients I've spoken to are firmly committed to Open Source aligned Big Data solutions and platform choices. This naturally raises the following question:

Will Open Source aligned Big Data solutions eventually prevail?

In my view the answer is 100% YES, although I also believe that a balance is required between open source driven innovation and large enterprise scale IT non-functional requirements, as summarised below:

Open Innovation

Indeed, in the area of proprietary vs open systems (at one time defined as Unix based client / server systems vs the IBM Mainframe), IBM previously tried a relatively closed and proprietary approach while the IT market was rapidly transitioning towards Unix or "open" distributed client / server platforms in the early 1990s.

IBM consequently suffered a near death experience in business terms, as the previously continuous mainframe MIPS capacity growth rapidly switched towards these alternative distributed / client server platforms. Indeed, SAP delivered SAP R/3 (vs the prior mainframe based R/2 with DB2) to align with this "open systems" choice and market trend.

Although it is also true to say that in more recent times IBM mainframe MIPS capacity growth (combined with open platform mainframe Linux enablement) continues apace, often for mission critical systems of record and big batch scenarios.

Something rather similar happened in the PC market, where IBM developed the technically superior but incompatible IBM PS/2 MCA (Micro Channel Architecture) as a follow-on to the original IBM PC and PC AT I/O adapter architecture.

Just as we technically turned right, the rest of the market turned left with an ISA (Industry Standard Architecture) PC input / output (I/O) adapter and bus strategy. The rest is history: IBM's PC Company went from a largely dominant "IBM PC" market share to a significantly smaller share over time. Is the same thing happening now in the core ERP / Systems of Record market?

As a direct result, IBM's subsequent commitment and contribution to Open Source driven projects and innovation has been second to none amongst the major IT vendors; in summary, IBM learnt a very hard business into IT lesson in IT innovation and industry change terms.

This commitment includes significant investments and technical alignment to the following:

  • The Apache Software Foundation (1999) and subsequently Eclipse (2001)
  • Linux (2007), OpenStack (2012), Cloud Foundry (2014)
  • Node.js (2014), Docker, and the very significant Apache Spark in-memory "analytics operating system" investment (2015)
  • In addition to the more recent, innovative Blockchain based Hyperledger project (2016).

These commitments, in addition to the ODPi (Open Data Platform) Hadoop initiative, are now both pervasive and very significant within IBM. Indeed, IBM recently published a paper that summarises this commitment and the resulting rates of Open Source driven innovation which, in the longer term and in the view of the author, will always eventually prevail over proprietary aligned alternatives, no matter how large a single vendor or aligned partner ecosystem commitment.

Hence, in my view, it is not really a case of if, simply a case of when, Open Source based innovation prevails.

Indeed, in support of this viewpoint, Vinnie Mirchandani (in SAP Nation v1.0) mentions the success and growth of the cloud integrator Appirio, with best of breed / SaaS integration solutions and a large TopCoder community, in addition to the rapid growth IBM is experiencing in the Bluemix and/or API Connect areas.

Of course this Open Source commitment does not mean clients will not require trusted solution partners to help them safely bridge between their existing systems of record and their planned front office, API enabled strategic IT platform investments, whether public cloud, hybrid cloud or indeed prior private cloud / on-premise, often for mission and business critical data protection and/or privacy / IP reasons. It all starts with the data!

The above mentioned paper can be found here; it nicely summarises the evolution of various Open Source platforms over time.

https://www.ibm.com/developerworks/cloud/library/cl-open-architecture-update/

More recently, one of the potentially most significant and innovative Open Source projects is the rapidly emerging Blockchain "Hyperledger" distributed ledger project, which will in my view be truly transformative for many clients and industries.

Indeed I'd also recommend a rather detailed report published in December 2015 by the UK Government Office for Science, Chief Scientific Advisor, Mark Walport, called "Distributed Ledger Technology: beyond block chain", which can be found at:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/492972/gs-16-1-distributed-ledger-technology.pdf

Adoption will commence initially with the Financial Services industry, but is then likely to extend rapidly into other industries like Consumer Products and/or Discrete Manufacturing, where complex, extended and distributed supply chains, and the resulting financial transaction flows and ledger entries, are the norm. A toy illustration of the underlying hash chaining idea follows.
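For readers new to the concept, the following is a deliberately tiny, illustrative sketch of the hash chaining idea behind a distributed ledger: each entry commits to the previous one, so a shared history of supply chain or financial transactions cannot be quietly altered. It is a teaching toy under simplified assumptions, not Hyperledger or any real ledger implementation.

```python
# Toy hash-chained ledger: each entry embeds the hash of the previous entry,
# so tampering with history breaks the chain. Illustrative only.

import hashlib
import json

def add_entry(ledger: list[dict], transaction: dict) -> None:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"prev_hash": prev_hash, "transaction": transaction}
    body["hash"] = hashlib.sha256(
        json.dumps({"prev_hash": prev_hash, "transaction": transaction},
                   sort_keys=True).encode()).hexdigest()
    ledger.append(body)

ledger: list[dict] = []
add_entry(ledger, {"from": "supplier", "to": "manufacturer", "goods": "pumps", "qty": 100})
add_entry(ledger, {"from": "manufacturer", "to": "retailer", "goods": "pumps", "qty": 80})

# Each entry references the hash of the one before it.
print(ledger[1]["prev_hash"] == ledger[0]["hash"])
```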

I'd also recommend the following short YouTube video describing the future impact of this project, in addition to this item from the Financial Times (ft.com):

https://next.ft.com/content/eb1f8256-7b4b-11e5-a1fe-567b37f80b64

https://www.youtube.com/watch?v=hMUNfxcmyEE

Having covered some of the strategic IT investment choices above, let's now dive back into the details of some of the hype, and the largely commercially driven pressure to migrate to SAP S/4 HANA and/or to undertake HANA OS/DB migrations (vs prior Oracle, DB2, MS SQL etc. SAP AnyDB platform choices).

This takes us to the final section in this series of blog entries – in-memory marketing hype vs reality, section 5.

Disclaimer – This blog represents the author's own views, not a formal IBM point of view

The views expressed in this blog are the author's and do not represent a formal IBM point of view.

They do represent an aggregate of many years (20+) of successful ERP / SAP platform deployment and IT strategy development experience, supplemented with many hours of reading the respective DB2 and/or SAP HANA roadmap materials and presentations at various user conferences and user groups, in addition to carefully reading input from a range of respected industry / database analyst sources (these sources are respected and quoted).