We’re now hearing a lot about risk, and it might be worth looking at it in different ways. We could find something to learn, test and eventually put into practice in different business sectors.
I have been reading the most recent book by Michele Bocci and Fabio Tonacci, two Italian journalists who investigate Italy’s public health system(s). The book is published by Mondadori, a major Italian publisher linked to the US McGraw Hill, and is very rich in international bibliography.
In 2000, the World Health Organization rated the Italian health system as one of the best in the world, second only to France’s. Yet thirteen years later, two million Italian citizens have no primary health care.
Among the many health variables the two journalists analyze is the confusion between risk factors and disease. In our terminology we would call it failure, or damage.
This is a significant difference that newcomers to risk theories and practices have to understand.
The trick is that we confuse a risk factor with its connected disease or failure. We are all industry risk professionals, and I myself am quite reluctant to speak authoritatively about a topic I barely know, like health care.
While risk factors are often quietly and happily changed – we all know FMEA (Failure Mode and Effects Analysis) practices – failures are what they are. They do not really depend on how high or low the risk factors are rated. We can only estimate that the higher a risk factor is, the more severe the failure effect will be.
This is even more so when corrective or preventive actions are ‘administered’ to our system. Depending on how we numerically express the improvement – in absolute figures or as a percentage – we may observe dramatic or poor changes. From 20 to 10 the reduction is only 10 in the first case, but 50% in the second.
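The same change can be reported both ways. As a minimal sketch (the function names and the 20-to-10 figures are just the article’s illustration, not real data):

```python
# Express one improvement two ways: absolute reduction vs. percentage reduction.

def absolute_reduction(before: float, after: float) -> float:
    """Improvement as an absolute difference."""
    return before - after

def relative_reduction(before: float, after: float) -> float:
    """Improvement as a percentage of the starting value."""
    return (before - after) / before * 100

before, after = 20, 10
print(absolute_reduction(before, after))   # 10  -- sounds modest
print(relative_reduction(before, after))   # 50.0 -- sounds dramatic
```

Which figure gets quoted often depends on which story the presenter wants to tell.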
If we understand FMEA’s RPNs (risk priority numbers) as risk factors, it is rather easy to make them higher or lower; we can find any risk explanation or justification we want or need. But real failures, those that will be detected on our customers’ premises, will not be affected at all – or only very minimally – by our RPNs.
Surely FMEA and similar tools are very powerful in reducing failures’ impact; but we must never forget that RPNs – or risk factors – have only a limited connection to reality.
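To see how easy it is to move an RPN on paper, here is a minimal sketch of the classic scoring scheme, RPN = severity × occurrence × detection, each rated 1–10. The ratings below are invented for illustration:

```python
# Classic FMEA risk priority number: severity x occurrence x detection.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Each rating is on the usual 1-10 FMEA scale."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return severity * occurrence * detection

# Halving the occurrence rating halves the RPN on paper...
print(rpn(8, 6, 4))  # 192
print(rpn(8, 3, 4))  # 96
# ...but the field failure rate is unchanged unless the process itself changed.
```

The arithmetic is trivial; the point is that re-rating one factor changes the number without changing a single failure in the field.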
Let’s just consider some risk factors: Chernobyl, Japan’s nuclear power stations, the Costa Concordia shipwreck, and the ever recurring floods that we call ‘Acts of God’ so that we can do nothing after we’ve cleaned the mud away – just to name a very, very few much media-beloved events.
Those who don’t want nuclear power stations, or big ships, or Italy’s poorly designed and constructed hillside towns might be somewhat right. My question, instead, is: shouldn’t we be more aware of how risk is determined and taken care of?
If, using a given algorithm, we determine that for a given car model seventy percent of engines die out after 150,000 miles and ninety percent after 200,000 miles, we might be tempted to treat all engines the same way, for example by using a different engine oil or by carefully washing the engines’ metal parts. But if, data at hand, we discover that the treatment only reduces failures from, say, two percent to one percent, it means we have uselessly treated 99 engines to save one.
Yet the improvement looks dramatic in percentage terms: going from 2% to 1% is a huge 50% reduction.
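The engine arithmetic can be made explicit with the epidemiologists’ ‘number needed to treat’ idea, applied here to engines. The 2% and 1% figures are the article’s hypothetical ones, not real data:

```python
# How many engines must be treated to avoid one failure?
# This is the reciprocal of the absolute risk reduction.

def engines_treated_per_failure_avoided(rate_before: float, rate_after: float) -> float:
    """Analogue of 'number needed to treat': 1 / absolute risk reduction."""
    return 1 / (rate_before - rate_after)

# Failure rate drops from 2% to 1%:
print(round(engines_treated_per_failure_avoided(0.02, 0.01)))  # 100
# The relative improvement is 50%, but 100 engines are treated per failure avoided.
```

Both numbers describe the same intervention; only one of them tells you how much effort each avoided failure actually costs.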
Small numbers in the first example, large numbers in the second, but the same principle holds. Risk factors have to be carefully weighed, or we’ll find ourselves obese or undernourished.