#69 – RISK BASED QUALITY CLEANING – UMBERTO TUNESI

“… or to keep things clean are not to make them dirty.”

When a customer of mine told me this principle, I was struck by its simplicity and its obviousness.  Here are my thoughts on this expression.

FOCUS


One: There’s no system or process that is totally “clean,” and none whose dirt can be totally eliminated.  Any (c)lean system or process management can attain nothing more than a reduction of “dirt.”

Two: There’s no point in auditors – both internal and external – continually raising the same non-conformances against the same causes of system or process dirt. These causes are intrinsic to the system or process – part of its DNA, so to speak.

Three: Value-added auditing – like any communication process – cannot be one-way: it requires partnership between auditee and auditor(s). When the auditee, as a company, is hostile or tricky toward the auditor, the audit’s effectiveness is jeopardized from the start.

Of course, thinking about it afterwards, some of its limitations have surfaced:

First, though it has many applications, this principle also runs the risk of being applied to extremes: e.g., if nothing is done, nothing can get spoiled – or almost so.

Second, there is seldom a process where there’s no by-product, or side-effect.

And – thinking in terms of system or process management – no process or system can be run without producing some by-product, scrap, or “dirt,” or without some side-effect. The laws of thermodynamics say as much, for both micro- and macro-systems, closed and open alike.

We therefore have to accept that whatever we do, however and whenever we do it, there will always be some unavoidable, unpreventable quantity of “dirt” that will have to be swept away, somehow.

Thinking of (quality) management systems: besides the natural, more or less quantifiable “dirt” – some of which can be reworked as declassified product, or sold, or used as scrap or by-product to feed downstream processes – there are the always-daunting system non-conformances.

Despite ISO’s efforts, the reworking of services to make them agreeable to customers still needs further development to be effective.

These non-conformances can be likened to system scrap, by-products, or side-effects that continually creep out and make themselves visible even when the system performs to target, so that trying to stamp them out usually proves more costly than letting them be.

Auditors – both internal and external – obstinately fight this so-called background noise of non-conformances instead of looking for the worst offenders and prioritizing risks.  Such audits are not effective.  Auditors are often found to stick to the letter of the standards instead of going beyond it to look for their effective application.

I dare say that in my own auditing career, very, very seldom did I see non-conformances raised to the right level of criticality.  They were instead hastily written and presented by the auditor to the auditee before jumping into the car or onto an airplane.

What was really worthwhile, instead, were the – often only verbally communicated – “opportunities for improvement,” which neither the auditor nor the auditee recorded.

And which, therefore, no one remembered when the next audit came up.

Under present business conditions, in which registrars are paid by auditees, it is difficult to find very tough auditors – short of those with professionally suicidal tendencies.

Nonetheless, the golden rule of the iceberg applies here, too.  Two thirds of any management system are bound to suffer from intrinsic weaknesses that are not difficult to find below the surface, while the remaining third shines in the sun, dazzling auditors into blindness.

Just as there is no point in going fishing where there are no fish, it is pointless for auditors to keep raising non-conformances against the same requirements to the same company, year after year, for two basic reasons:

One, this behavior smacks of continual “dis-provement” on the registrar’s side: instead of adding value to its audits, the registrar keeps sticking to formal compliance with the standards.

Two, as the auditing profession comes closer and closer to a formally recognized “risk age,” auditors can less and less limit themselves to verifying that what is written in the standards is also implemented.  All the more so because, by definition, a standard is a standard: it needs to be interpreted against the multi-faceted reality and complexity of a company’s system.

Unlike quality management systems, which focus more or less acutely on their performance vis-à-vis customers, other management systems focus on risk.

While quality “dirt” can be predicted fairly accurately and therefore – albeit to a limited extent – prevented, not all risks can be predicted; and even when they can, the magnitude of their effects may be underestimated, depending on the scale used and on the “weighers” using it.

AUDIT RISK RATING SCALE
On the one hand, I have already suggested that – when auditing risk management systems – the very first step should be to learn and evaluate the risk rating scale used by the auditee, if any, and to agree on a common scale when none is used.

On the other hand, I see the need for teaming up among accreditation bodies, registrars, sector-specific industry representatives, and training organizations to evaluate the existing risk assessment and rating methodologies and to adapt or develop them as necessary.
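As a sketch of what “agreeing a common scale” might look like in practice – every label and value below is a hypothetical assumption for illustration, not taken from any standard or from the auditee’s documentation:

```python
# Hypothetical sketch: aligning an auditee's in-house risk scale with a
# common scale agreed for the audit. All labels and values are assumptions.

# Suppose the auditee rates risks Low / Medium / High, while the agreed
# common scale runs 1-5.
AUDITEE_TO_COMMON = {
    "low": 1,
    "medium": 3,
    "high": 5,
}

def to_common_scale(auditee_rating: str) -> int:
    """Translate an auditee rating onto the agreed common 1-5 scale."""
    return AUDITEE_TO_COMMON[auditee_rating.lower()]

print(to_common_scale("Medium"))  # 3
```

The point of the exercise is not the particular mapping but that auditor and auditee record one explicit translation table before the audit starts, so that findings rated on either scale remain comparable.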

Though not exempt from criticism, automotive quality management systems use two key performance indicators – PPM (parts per million) and OTD (on-time delivery).  Risk management systems can certainly be based on a similarly limited number of key metrics to measure their effectiveness.
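For concreteness, the two automotive KPIs just mentioned reduce to simple ratios; the definitions below are the conventional ones (defective parts per million delivered, and on-time shipments as a percentage of all shipments), stated here as an assumption since the text does not define them:

```python
# Conventional definitions of the two automotive KPIs named above.

def ppm(defective_parts: int, total_parts: int) -> float:
    """Defective parts per million parts delivered."""
    return defective_parts / total_parts * 1_000_000

def otd(on_time_shipments: int, total_shipments: int) -> float:
    """On-time delivery, as a percentage of all shipments."""
    return on_time_shipments / total_shipments * 100

# e.g. 7 defects in 250,000 parts, and 48 of 50 shipments on time:
print(ppm(7, 250_000))  # 28.0
print(otd(48, 50))      # 96.0
```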

These metrics could include but not be limited to:

  1. Quality of the “intelligence” system used to detect, identify and prioritize risks: besides an initial validation, this system would be continually re-validated and upgraded based on incoming inputs;
  2. Time between the first warning and intercepting the risk;
  3. Quality of risk interception, on a scale from poor to good, even better if numerical;
  4. Quality of feed-back of warning and interception system performance to improve it continually;
  5. Quality of continual improvement of warning and interception system.
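A minimal sketch of how these five metrics might be carried on one scorecard – the field names, the 0–5 rating scale, and the hours unit are all my own illustrative assumptions, not part of any standard:

```python
# Illustrative sketch only: one record holding the five metrics above.
# Field names, the 0-5 scale, and the hours unit are assumptions.
from dataclasses import dataclass

@dataclass
class RiskSystemScorecard:
    intelligence_quality: int         # 1. detect/identify/prioritize risks (0-5)
    warning_to_interception_h: float  # 2. hours from first warning to interception
    interception_quality: int         # 3. poor-to-good, expressed numerically (0-5)
    feedback_quality: int             # 4. feedback of system performance (0-5)
    improvement_quality: int          # 5. continual improvement of the system (0-5)

    def weakest_area(self) -> str:
        """Name the lowest-rated qualitative metric, to prioritize audit focus."""
        ratings = {
            "intelligence": self.intelligence_quality,
            "interception": self.interception_quality,
            "feedback": self.feedback_quality,
            "improvement": self.improvement_quality,
        }
        return min(ratings, key=ratings.get)

card = RiskSystemScorecard(4, 12.0, 3, 1, 2)
print(card.weakest_area())  # feedback
```

Whatever form the scorecard takes, the design choice that matters is making the weakest metric visible, so that the audit goes after the worst offender rather than the background noise.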

If this approach may sound a bit military-like, we must not forget that quality standards were fathered and mothered by the military.

1.  Obviously, risk intelligence is of the utmost importance in risk management; and it is probably the most crucial and most difficult step, because – as many stories, and history itself, tell us – man tends, for various reasons, to minimize risk. Though audacity can help one leap over overwhelming obstacles, business is more matter-of-fact minded, and cannot therefore afford fancy or risky decisions.

The quality of the risk intelligence system can be evaluated by means of the usual team-based methodologies, i.e. brainstorming, free-wheeling, or more advanced approaches. In practice, both brainstorming and free-wheeling prove more effective when the team sets itself just one target, preferably in the form of a simple question – e.g., “What are the risks connected with the hospital’s protection of cyber data?” – lets each team member think it over individually, and then meets, say once a week, to sample the ideas that have come out.

Reviewing one’s own work is quite a difficult process: it requires detachment from it and understanding of it at the same time. I would say – please don’t misunderstand my words – that it is rather close to a mild schizophrenia, insofar as it recognizes two different realities in the same person: the creator and his critic.

Needless to say, the biological and human sciences keep taking great strides forward in discovering still-unknown capabilities in nature and human history to detect and distinguish friend from foe – capabilities the layman is very far from even imagining.

However, an effective risk intelligence system should not let more than a few percent of the “big fish” pass through its fishnet, and it should also be designed at least to warn of the presence of small fish, or plankton.

2. The time lag between the first – or early – warning and the interception of the risk depends much on the warning and interception tools and devices, or, where warning is left to personnel, on their alertness. Intercepting the risk before it develops into undesired effects may likewise depend on timely action by personnel, though both warning and interception systems are nowadays more automated and faster than in the past.

3. The capability to intercept the risk and eliminate it – or the quality of risk interception – includes both the interception capability and the elimination capability. I don’t want to repeat myself on risk intelligence and interception; I want to emphasize instead, especially to those who insist on waving Deming’s PDCA flag as the ultimate principle, that there is a great absentee from this cycle: “intelligence,” or investigation. The D-C-A sub-cycles are fair enough as far as risk interception and elimination are concerned. But here again, just as with risk intelligence, we have to deal with human and non-human tools.

It is no news that some risks can be intercepted and eliminated either by human means or by devices, and that the former – although subtler and, as the case may be, less costly – are more subject to human variability. This is where internal and external risk auditors should focus their attention, more than on assessing the implementation level of risk prevention procedures and practices.

4. Deming’s C sub-cycle, or system feedback, is probably the Cinderella of the system’s Cinderellas: it’s too boring a job, and once the problem is settled, nobody wants to review it. Call it shortsightedness, but when the game is over nobody thinks of the next one. This is a tremendous risk in a risk management system, because it prevents the system from learning and therefore jeopardizes its continual improvement. Auditors – both internal and external – should focus on this system feature: it is a very big and powerful fish, bound to destroy the thin fishnet put down to catch small sardines. Failures in this sub-cycle are easily found by investigating the quality of internal communication, using simple common sense: when the auditor hears sand grinding in the gears, hey, there is somewhere to dig deeper.

Unfortunately, I have not so far heard of any non-human, automated system or device capable of reliably measuring communication effectiveness.

Here, too, the specific system risk is evident: system designers and managers rely excessively on devices to measure the system’s effectiveness, as if watching the dashboard alone could drive the car.

5. Risk management probably borrowed the concept of continual improvement from quality management systems standards, which were never operatively clear on how to establish and implement continual improvement policies and procedures, or how to put them into practice. But we risk professionals are, and must stay, a step ahead: we are talking about risk management, and we cannot afford to let risk management systems drift away simply because there is no clear rule on how to steer them right.

Continual improvement is a logical, intrinsic feature of risk management systems; they would not exist without it being built in.

CONCLUSION
The best practice to clean, or to keep clean, does not generically require preventing the formation of dirt, but rather specifically preventing the formation of difficult-to-remove or noxious dirt. And then going over the operation again and again, step by step, trying to eliminate every weak point and to anticipate every possible unexpected emergency.

If there is a secret recipe for keeping clean, this might be it.
