The repertoire of HANA naturally includes not only prognostic procedures of all kinds, but also very tangible analyses. An existing order backlog, for example, can be analysed in terms of procurement and production requirements, drawing on far more data than was previously possible. A range of procurement and production variants can be calculated and optimized in no time. If necessary, external data sources can be included and, as far as technically possible, the complete supply chain can be involved.
Certainly it will take a great deal of time and brain-power before we turn the somewhat airy HANA construction into an attractively furnished dwelling, but there are already some fine individual pieces to admire, and more are constantly being created.
Currently, the topics of “predictive analysis” and “cloud” are very strongly linked with the hope of greater transparency and a significantly higher hit rate in prognostic methods. These expectations can be met only to the extent that such models are able to map future realities, at least to a reasonable approximation.
At this point it is worth remembering that prior to the great banking crisis, the most complex forecasting methods available and extremely powerful computer systems were in use. The apparent safety and the “everything is under control” feeling that this number-crunching generated vanished into thin air within a very short time.
There is always something that lies outside the range of a forecast model, and sooner or later that something happens.
Common sense and the ability to incorporate a wide variety of information, both numerical and non-numerical in nature, into decision-making remain crucial; this requires human input.
Data needs to be seen
We humans are masters at recognizing patterns. In particular, we are capable of evaluating images of any kind in a few moments and identifying similarities with known patterns. This capability benefits us in the analysis of numerical data.
Not for nothing do almost all statistical methods make use of meaningful graphical representations: purely descriptive metrics can often lead you astray when qualitatively assessing data series.
As an example, consider the group of four data series that F.J. Anscombe constructed in 1973 for exactly this purpose, now known as Anscombe’s quartet.
These four data sets have almost identical means and variances. It is only the visualization of the data series that reveals their completely different nature.
This example illustrates very clearly that visualization is not just a nice-to-have, but is fundamentally important for assessment.
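The effect is easy to reproduce. The following short sketch uses the published data values of Anscombe’s 1973 quartet and computes their summary statistics with Python’s standard library:

```python
# Anscombe's quartet (Anscombe, 1973): four x/y series whose summary
# statistics are nearly identical although their shapes differ wildly.
from statistics import mean, variance

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4   = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  (x4,   [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    print(f"{name}: mean_x={mean(x):.2f} var_x={variance(x):.2f} "
          f"mean_y={mean(y):.2f} var_y={variance(y):.2f}")
```

Every one of the four series reports mean_x = 9.00, var_x = 11.00, mean_y = 7.50 and var_y ≈ 4.12–4.13; only a plot exposes how differently they actually behave (a line, a curve, an outlier on a line, and a vertical cluster with one extreme point).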
You will find that tools such as Excel offer countless possibilities for presenting your data visually. Of course, this also has its pitfalls: such charts can embody design flaws and inconsistencies of all kinds, and really smart visualization solutions are able to uncover them.
Suppose that a list of sales figures for a particular product has the general appearance of the upper left graph. This characteristic holds across various observation periods, and perhaps even across different products. We are then dealing with a recurring pattern, and HANA can learn this pattern.
Now suppose data series suddenly appear that have the appearance of the lower left-hand panel: the deviation from the previously learned “typical” pattern would be detected by HANA immediately. Such data would first be set aside for further validation rather than simply accepted. This capability for qualitative analysis of the available data material is one of the most powerful and important capabilities of a good piece of software for stochastic analysis.
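The idea can be illustrated with a minimal sketch. This is not HANA’s actual algorithm; it is a toy example, assuming a hypothetical point-by-point “typical” pattern learned from past periods and a simple mean-absolute-deviation threshold for flagging suspicious series:

```python
# Toy illustration (not HANA's actual method): learn a "typical"
# pattern from several historical series, then flag new series
# whose shape deviates strongly from that pattern.
from statistics import mean

def learn_pattern(histories):
    """Average several historical series point-by-point."""
    return [mean(values) for values in zip(*histories)]

def deviates(series, pattern, threshold=2.0):
    """Flag the series if its mean absolute deviation from the
    learned pattern exceeds the (hypothetical) threshold."""
    mad = mean(abs(s - p) for s, p in zip(series, pattern))
    return mad > threshold

# Three past periods with a similar seasonal shape
histories = [
    [10, 12, 15, 20, 18, 14, 11],
    [11, 13, 16, 21, 19, 15, 12],
    [ 9, 12, 14, 19, 17, 13, 10],
]
pattern = learn_pattern(histories)

normal = [10, 13, 15, 20, 18, 14, 11]   # follows the learned shape
odd    = [20, 18, 14, 10,  9, 22, 25]   # clearly different shape

print(deviates(normal, pattern))  # -> False: accept
print(deviates(odd, pattern))     # -> True: set aside for validation
```

A production system would of course use far more robust statistics, but the principle is the same: the deviating series is not silently accepted; it is flagged for human review.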
Smart solutions can do that.
— by Mario Luetkebohle, Consultant, itelligence AG —