Do we need HANA?! – Part 1


When I hint at my profession to my circle of friends and acquaintances at barbecues or other festivities, I can essentially take it as read that I will immediately become a lightning rod for all “SAP-damaged” listeners.

Of course I take the time to listen to them, and of course it is not actually the evil and complicated SAP system that is annoying them, but the thoroughly confusing swarm of interconnected processes and procedures with which one tries to master an increasingly complex outside world. On the list of potential scapegoats for everything that goes wrong, “IT” has always been far and away the preferred choice. That will probably always be the case.

With new and intensively advertised technologies, I hear another classic: “Well, is that another solution looking for a problem?” The more quickly this question is raised, the more important it is to address the underlying one: what specific problem is this solution actually designed to solve?

If there is no good answer to this question, then the aforementioned colleague has a very fair point. In the case of HANA technology, what response can be given at this point in time, and what answers could be added later?

At this point, an aspect comes into play that is of fundamental importance for the development of business solutions. Roughly speaking, one could say that a little bit of HANA is no better than none: a solution must be fully functional and self-sufficient in the HANA data space, or you have gained nothing or very little. A second message is that HANA must be able to work on large data sets in order to exploit its strengths. The processing of a single delivery or an order-entry dialogue will gain little to no benefit from HANA.

Because of this, application areas associated with processing large amounts of data were an obvious choice for the first HANA-based solutions. So, of course, SAP BW was one of the first candidates for implementation. Likewise, the emphasis on function libraries that deal with forecasting methods was a consequence of these HANA design paradigms.

THE buzzword in this context is simply “predictive analysis”. It is about forecasting future developments at all levels of economic activity. A little cross-reading and research inevitably brings two giants of the IT business to the foreground, both of which are pursuing this concept with all its consequences: Google and Amazon. Both companies are very successful at analysing the trail of data that internet users leave behind, leveraging it to offer their customers tailor-made advertising concepts and to create offers in their own web stores that are tailored to the individual customer. That sounds good for a start.

Of course, CRM and other SAP solutions already offer ways to answer questions of this kind successfully. So what can and must happen to bring HANA into play? Taking the existing data in hand and analysing it with all the rules of the art, using HANA’s own stochastic and other forecasting methods, often opens up interesting perspectives. The actual art lies in the appropriate modelling of the underlying data: to put it bluntly, I need to know what I want to know, and my modelling concept must reflect this appropriately. If that works well, I can apply anything from simple logistic regression to multivariate conjoint analysis to my data in just a few clicks.
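To make that “know what I want to know” point concrete, here is a minimal, purely illustrative sketch in plain Python with scikit-learn (not a HANA API). The customer attributes, the churn question and all values are invented; the point is only that the modelling decision – which attributes describe the question I actually want answered – comes before any algorithm.

```python
# Hypothetical sketch: the data model (chosen features) comes first,
# only then does a method such as logistic regression become meaningful.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented example data: one row per customer, column names are illustrative only.
customers = pd.DataFrame({
    "orders_last_12m":        [3, 14, 1, 8, 0, 22, 5, 2],
    "avg_basket_value":       [45.0, 120.5, 15.0, 80.0, 0.0, 210.0, 60.0, 25.0],
    "days_since_last_order":  [200, 12, 340, 45, 400, 5, 90, 260],
    "churned":                [1, 0, 1, 0, 1, 0, 0, 1],  # the question I want answered
})

# The "data model": features chosen because they plausibly explain churn.
X = customers[["orders_last_12m", "avg_basket_value", "days_since_last_order"]]
y = customers["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Hold-out accuracy:", model.score(X_test, y_test))
```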

It is about applying suitable forecasting models to these questions, and the selection of the methods used is the next crucial factor in this process. When I take a decision, I must be able to assess the quality of a forecasting model, and a suitable criterion for this evaluation is how accurately the method reproduces the developments of the past. So I move a few months back in time, let my forecasting model “act” on the data before that date and compare the forecast created in this way with the actual development. If there are gross deviations, then maybe my data model does not fit, my forecasting model is unsuitable, or there are external factors to which I do not have access. Then I can include more data when I have it, adapt my data model, modify my forecasting model, and so on.
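A minimal sketch of that backtest idea in plain Python (the monthly figures and the naive growth model are invented; in practice the data would come from the modelled data basis and the method from a proper forecasting library): cut off the last few months, forecast them, and compare against what actually happened.

```python
# Hypothetical backtest: "move back in time", forecast the held-out months,
# then measure how far the forecast deviates from the actual development.
monthly_sales = [100, 104, 110, 108, 115, 121, 119, 126, 130, 128, 135, 142]  # invented

holdout = 3                                    # pretend the last 3 months are "the future"
history, actual = monthly_sales[:-holdout], monthly_sales[-holdout:]

# Very simple forecasting model: continue the average month-over-month growth.
growth = (history[-1] - history[0]) / (len(history) - 1)
forecast = [history[-1] + growth * (i + 1) for i in range(holdout)]

# Evaluation criterion: mean absolute percentage error against reality.
mape = sum(abs(f - a) / a for f, a in zip(forecast, actual)) / holdout
print("forecast:", [round(f, 1) for f in forecast])
print("actual:  ", actual)
print(f"MAPE: {mape:.1%}")
```

If the error is small, the combination of data model and forecasting model is plausible; if it is gross, one of the two (or the outside world) needs another look.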

All this is admittedly quite computationally intensive, and as each new run costs computing time, the number of variations of these components I can try is limited. It is at this point that HANA comes into play. If one run of my analysis takes an hour to complete, which is often the case, my scope is considerably limited. If HANA needs only two minutes for the same analysis, then I can operate on a very different scale. I do not even have to steer the variations myself; HANA can try them out for me and report back an evaluation. Here, for example, so-called genetic algorithms come into play, which are able to assess the quality of a forecasting model automatically. The data space in which I can move around here can be almost as large as I want. Many elements of modern AI research have been incorporated into the HANA function libraries, including methods for machine learning as well as fuzzy logic and adaptive heuristic methods from bioinformatics.
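To illustrate the idea of letting the machine try out the variations, here is a hypothetical miniature of such an automated search in plain Python (nowhere near the scope of the HANA function libraries): a tiny genetic algorithm evolves the smoothing parameter of an exponential-smoothing forecast, using a backtest error like the one above as its fitness criterion.

```python
# Hypothetical miniature of an automated model search: a genetic algorithm
# evolves the alpha parameter of exponential smoothing, scored by backtest error.
import random

random.seed(0)
monthly_sales = [100, 104, 110, 108, 115, 121, 119, 126, 130, 128, 135, 142]  # invented
holdout = 3
history, actual = monthly_sales[:-holdout], monthly_sales[-holdout:]

def backtest_error(alpha):
    """Fitness: mean absolute error of a flat exponential-smoothing forecast."""
    level = history[0]
    for value in history[1:]:
        level = alpha * value + (1 - alpha) * level
    return sum(abs(level - a) for a in actual) / holdout

# Start with a random population of candidate alphas.
population = [random.random() for _ in range(20)]

for generation in range(30):
    # Selection: keep the candidates with the smallest backtest error.
    population.sort(key=backtest_error)
    survivors = population[:5]
    # Crossover and mutation: recombine survivors and nudge them randomly.
    children = []
    while len(children) < 15:
        a, b = random.sample(survivors, 2)
        child = (a + b) / 2 + random.gauss(0, 0.05)
        children.append(min(max(child, 0.0), 1.0))
    population = survivors + children

best = min(population, key=backtest_error)
print(f"best alpha: {best:.3f}, backtest error: {backtest_error(best):.2f}")
```

In reality the search space spans whole model families and their parameters rather than a single number, but the principle is the same: the machine varies, the backtest judges, and the faster each run completes, the larger the space I can explore.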

Of course, this type of analysis is limited to the world of numbers, and the famous Einstein quote, “Not everything that can be counted counts, and not everything that counts can be counted”, applies to the value of such forecasts within the appropriate framework.

Interested in learning more? Then visit this blog next week to read part two of my little blog series.

– by Mario Luetkebohle, Consultant, itelligence AG –
