During an FMEA, severity (S) is ranked from 1 to 10, according to how severe the effect of a failure is on the product, customer, manufacturing process, or operator.
Occurrence (O) is ranked from 1 to 10, based on the number of incidents per items/vehicles. It’s interesting to note that a rank of 1 (i.e., very low) is based on the criterion “failure is eliminated through preventive control.”
Detection (D) is also ranked from 1 to 10. A rank of 1 is based on error prevention, whereas a rank of 10 is assessed as “no current process control.” Curiously enough, ISO/TS 16949:2009 cites “error prevention” in the notes to clauses 7.1 and 7.3, but then switches to “error-proofing” in the note to clause 7.3.2.2 (“Manufacturing process design input”: mark it!), as well as in clauses 7.3.3.1, 7.3.3.2 (“Manufacturing process design output”), 8.5.2.2, and Annex A, Section A.2d.
Now, if detection is ranked as 1 when errors are prevented, should it be ranked zero when error-proofing methods are in place?
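To put numbers on that question, here’s a minimal sketch of the arithmetic usually done with these rankings, the risk priority number (Python, illustrative only; the range check and the example values are my assumptions, not anything the standard prescribes):

```python
# Illustrative sketch: how S, O, and D rankings combine into a risk priority
# number (RPN). The 1-10 range check and the example values below are
# assumptions made for this example, not requirements of any standard.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number = S x O x D, each ranked 1 (best) to 10 (worst)."""
    for name, rank in (("severity", severity),
                       ("occurrence", occurrence),
                       ("detection", detection)):
        if not 1 <= rank <= 10:
            raise ValueError(f"{name} must be ranked 1-10, got {rank}")
    return severity * occurrence * detection

# A severe (S=8) but rare (O=2) failure that error-proofing prevents (D=1)
# still carries an RPN of 16: on a 1-to-10 scale the number never reaches
# zero, which is exactly what the question above is probing.
print(rpn(8, 2, 1))  # 16
```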
There is something more to this. ISO/TS 16949 clause 7.6.1 states, “This requirement shall apply to measurement systems referenced in the control plan.” Since these measurement systems are to be developed from the process FMEA, and clause 7.5.2.1 requires that “all processes for production and service provision” be revalidated, doesn’t this mean that the process FMEA itself must be subjected to measurement system analysis and revalidation?
It may sound a bit crazy, yet it isn’t really so when we consider the reality behind FMEA and observe how S, O, and D rankings are personality-driven. As happens elsewhere in human interactions, the person who shouts the loudest, looks the fiercest, or whose fist hits the table first is the “winner who takes all.”
Let’s look at a typical production process. In the beginning, there’s a customer’s request for a quote, along with the usual deadline for submittal: “yesterday.” Then the supplier’s top management—under time and budget constraints—comes up with a quote based on the criterion, “Let’s put the order in the box, then we’ll see; this widget is similar to what we’ve been doing for years.”
The drawing comes in and is read as just a drawing, not as the design record that it actually is. There are also tests and their specifications, and these reference further standards, specifications, and customer-specific requirements.
A production part approval process (PPAP) package is hastily put together and submitted to the customer, together with an initial sampling. The customer’s quality manager, under similar time constraints as the supplier, signs the parts submission warrant, and there we are.
We then move to the ramp-up and mass production, where the only documentation is the setup and work instructions from the similar widget the company has already produced, and the drawing of the new one. But what about the records? Well, let’s not waste time making the line operators write down the measurements that they read; they take more measurements than what’s required, so an OK or a tick is more than enough. Plus there’s quality control at the end of the line; they will do offline controls with a CMM and all sorts of expensive devices.
Sooner or later, though, there’s a mess, a catch, be it an 8D or similar request, a controlled-shipping requirement (CSL1 or CSL2), a new business hold (NBH), a customer audit, or the periodic registrar’s audit.
And the mess, whatever it is, highlights that the PPAP package mostly consists of counterfeits: process flow chart, process FMEA, control plan, work instructions, measurement system analysis, training records, feasibility commitment, and so on.
Therefore—and now we’re back to time constraints—the business is at stake. The poor quality manager, who may have sounded the alarm well in advance of the quote and the PSW submittal, now bears all the weight on his shoulders.
Of course, this is a worst-case scenario, yet many similarities are found in real cases, where control plans come after work instructions, FMEAs come after control plans, and process flowcharting comes after PPAP—just the opposite of any golden rule for prevention.
It’s true the automotive supply chain is under a lot of pressure to both save money and meet deadlines. And I have no financial credentials to back my opinions about costing issues, yet I believe suppliers could and should do better in terms of risk assessment, feasibility analysis, and prevention.
For one thing, it’s the suppliers that own the knowledge of the machinery, materials, personnel, products, and processes. It’s useless to start a process FMEA at the same time as the incoming inspection, especially when incoming materials are inspected only for quantity and external appearance. The same holds true for sampling plans, both in-line and at the end of the line: The usual answer to the question, “Why every hour and not every four?” is, “We’ve always done it this way.”
Process flows are charted with the same level of detail as the history of humankind, beginning with Adam and Eve. These charts don’t focus on risks and often are too generic to pinpoint what can—and will—make the process go wrong.
Process FMEAs often suffer the same problem: All sorts of potential failure modes are listed, along with issues that have little to do with the operation in question, based on the criterion that “one never knows what can happen.” The redundancy is built in just to err on the safe side. (“Let’s see, belt and suspenders, what else? Fasteners on the waist?”)
No wonder that “severities” are seldom ranked below 7. Is this effective risk analysis?
The process of determining potential effect(s) of failure is based on the same criterion, but it’s made worse when FMEA-makers confuse product failure—and therefore design failure—with process operation failure.
To quantify these situations, Mr. Pareto would need to revise his famous 80-20 rule—where 80 percent of the effects come from 20 percent of the causes—to say that 99 percent of the effects of process failure are “human error”—i.e., humans who erred when they wrote the process FMEA, and humans who erred by endorsing it. This can extend even to high-severity rankings when no corrective action is determined. It seems these people adhere to the principle that “to err is human,” but forget that “to persist is the devil’s work.”
Control plans, which should in principle originate from process FMEAs, often are mishmashes of input from the FMEA, previous experience with the same or similar process, and a constraint to produce either stamp-sized or monster-sized documents—in either case useless except for documentation purposes.
But there’s no need to drift into a Hamlet-type soliloquy here: It’s not a question of “to FMEA or not to FMEA.” Rather, how should we effectively assess risk, using FMEA or alternative methods? I find hazard analysis and critical control points (HACCP) a great, simple, and effective way. It’s still used chiefly in the food and cosmetic business, although its key principles pop up elsewhere occasionally. HACCP is based on FMEA, and in its simplest form states that, given any potential failure, if the downstream process will take care of it, then it’s not critical, or a risk, anymore.
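In case it helps to see that rule written down, here’s a minimal sketch of the decision it implies (Python; the Operation structure, field names, and the example line are mine, not part of any HACCP guideline):

```python
# Sketch of the HACCP-style reading described above: a potential failure
# stops being critical if some downstream operation will detect it and
# correct or scrap the defective item. The data structure is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Operation:
    name: str
    catches: set = field(default_factory=set)  # failure modes this step detects and corrects/scraps

def is_critical(failure_mode: str, downstream: list) -> bool:
    """A failure mode stays critical only if nothing downstream takes care of it."""
    return not any(failure_mode in op.catches for op in downstream)

line = [
    Operation("welding"),
    Operation("leak test", catches={"porous weld"}),
    Operation("final visual check", catches={"missing label"}),
]

# A porous weld introduced at welding is caught by the leak test downstream,
# so under this reading it is no longer critical at the welding step.
print(is_critical("porous weld", downstream=line[1:]))     # False
print(is_critical("wrong material", downstream=line[1:]))  # True
```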
That principle is the closest thing to error prevention I can think of—and to error-proofing, too, for that matter. The product-realization process can be designed and engineered to accept certain risks for various reasons (e.g., cost, cycle time, tolerances, machinery age, shop-floor layout, operators’ skills), but there will always be a downstream operation, or a device, that will correct or scrap the defect.
Those of you who are familiar with AIAG’s APQP manual may share my interest in it. I find it very valuable. The supplements J and K, and A-1 through A-8, pose stimulating, though sometimes redundant, questions. I particularly like the following from the A-7 Process FMEA checklist:
- Do the effects consider the customer in terms of the subsequent operation, assembly, and product?
- Have the causes been described in terms of something that can be corrected or controlled?
- Have provisions been made to control the cause of the failure mode prior to the subsequent or the next operation?
And this from the A-8 Control Plan checklist:
- Are sample sizes based upon industry standards, statistical sampling plan tables, or other statistical process control methods or techniques?
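As an aside on that last question, here is one statistically grounded way to answer it, sketched in Python (the zero-acceptance sampling formula is standard; the 1-percent defect rate and 95-percent confidence are only example values):

```python
# Illustrative sketch: sample size for a c = 0 acceptance plan, i.e., how many
# parts must be inspected with zero defects found to claim, at a given
# confidence, that the lot's defect rate is below a chosen level.
# The 1% / 95% figures below are example values, not a recommendation.

import math

def c0_sample_size(max_defect_rate: float, confidence: float = 0.95) -> int:
    """Smallest n with (1 - p)^n <= 1 - confidence for p = max_defect_rate."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_defect_rate))

# Supporting "no more than 1 percent defective" at 95 percent confidence takes
# about 299 consecutive good parts: quite different from "we've always checked
# a few pieces every hour."
print(c0_sample_size(0.01, 0.95))  # 299
```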
ISO 31000:2009—“Risk management—Principles and guidelines” is surely worth at least reading. Sections 4.3—“Design of framework for managing risk”; 4.4—“Implementing risk management”; and 4.5—“Monitoring and review of the framework” demonstrate that risk assessment and analysis, as part of risk management, is itself a process, and therefore worth investigating for stability, variability, and revalidation.