#116 – THE CONSTANT FAILURE RATE MYTH – FRED SCHENKELBERG

Have you said yourself or have you heard someone say any of the following:

  • “Let’s assume it’s in the flat part of the curve”
  • “Assuming constant failure rate…”
  • “We can use the exponential distribution because we are in the useful life period.”

Or perhaps some similar presumptive statement was uttered. Did you cringe? Well, you should have.

There are few if any failure mechanisms that actually occur with a constant hazard rate. (We often even use the technically incorrect term “failure rate” when talking about the instantaneous failure rate or hazard rate.) The probability of failure over a short period of time now is most likely going to differ from the probability over the same length of time in the future, say, next year.
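To make the distinction concrete, here is a minimal sketch in Python using scipy (the parameter values are made up for illustration) comparing the hazard rate of the exponential distribution, which is constant by construction, with that of a Weibull distribution whose shape parameter is greater than one and whose hazard rate therefore increases with time:

```python
import numpy as np
from scipy import stats

t = np.linspace(0.1, 5.0, 5)  # evaluation times (arbitrary units)

# Hazard rate h(t) = pdf(t) / survival(t).
# Exponential: constant hazard, equal to 1/scale at every t.
expon_hazard = stats.expon.pdf(t, scale=2.0) / stats.expon.sf(t, scale=2.0)

# Weibull with shape c > 1: hazard grows with t (wear-out behavior).
weib_hazard = (stats.weibull_min.pdf(t, c=2.5, scale=2.0)
               / stats.weibull_min.sf(t, c=2.5, scale=2.0))

print(expon_hazard)  # roughly 0.5 at every time point
print(weib_hazard)   # increases with t
```

With a shape parameter of exactly 1 the Weibull reduces to the exponential, which is the only case where the constant failure rate assumption holds.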

SO, WHY DO WE CLING TO THE ASSUMED CONSTANT FAILURE RATE?

In Appendix D, “Critique of MIL-HDBK-217,” of the National Academy of Sciences book Reliability Growth: Enhancing Defense System Reliability, Peer, Das, and Pecht wrote about the nature of failure (hazard) rates. The original handbook gathered data and calculated point estimates for the failure rates. Later editions of the handbook included the assumption of a generic constant failure rate model for each component. The adoption of the exponential model, which simplified the calculations, started in the 1950s.

In part owing to the contractual obligation to use the 217 handbook, and to the widespread adoption of the prediction technique, the constant failure rate assumption became enshrined in the methodology. McLinn, in his 1990 paper “Constant failure rate – A paradigm in transition?” (Quality and Reliability Engineering International 6: 237–241), commented that the users of the system worked to propagate the method rather than improve its accuracy.

HOW DO WE KNOW THE FAILURE RATE CHANGES?

Beginning in the 1950s, researchers and analysts noticed that components did exhibit changing failure rates. They also noticed the range of failure mechanisms that occurred and began modeling those mechanisms. The work to predict failure rates based on the physical or chemical changes within a component resulting from the applied use stresses became known as the “physics of failure.”

Numerous studies and data analyses have shown either a decreasing or an increasing failure rate with time. For example, the increasing failure rate behavior of transistors has been shown by Li et al. (Li, X., Qin, J., and Bernstein, J.B. 2008. Compact modeling of MOSFET wearout mechanisms for circuit-reliability simulation. IEEE Transactions on Device and Materials Reliability 8(1): 98–121) and Patil et al. (Patil, N., Celaya, J., Das, D., Goebel, K., and Pecht, M. 2009. Precursor parameter identification for insulated gate bipolar transistor (IGBT) prognostics. IEEE Transactions on Reliability 58(2): 271–276).

Your own data most likely show nonconstant failure rate behavior. All you need to do is check how well the data fit an exponential distribution to see the discrepancy.
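As a sketch of that check, assuming hypothetical time-to-failure data (substitute your own field or test results), the following Python snippet uses scipy to fit both an exponential and a Weibull distribution by maximum likelihood, then compares them with AIC and the estimated Weibull shape parameter:

```python
import numpy as np
from scipy import stats

# Hypothetical times to failure in hours; replace with your own data.
ttf = np.array([120., 340., 410., 560., 610., 700., 760., 820., 950., 1100.])

# Maximum likelihood fits with the location fixed at zero.
exp_loc, exp_scale = stats.expon.fit(ttf, floc=0)
wb_shape, wb_loc, wb_scale = stats.weibull_min.fit(ttf, floc=0)

# Log-likelihoods and AIC (lower AIC indicates the better fit).
ll_exp = np.sum(stats.expon.logpdf(ttf, exp_loc, exp_scale))
ll_wb = np.sum(stats.weibull_min.logpdf(ttf, wb_shape, wb_loc, wb_scale))
aic_exp = 2 * 1 - 2 * ll_exp  # exponential: one fitted parameter (scale)
aic_wb = 2 * 2 - 2 * ll_wb    # Weibull: two fitted parameters (shape, scale)

print(f"Weibull shape = {wb_shape:.2f} (near 1.0 would support a constant failure rate)")
print(f"AIC exponential = {aic_exp:.1f}, AIC Weibull = {aic_wb:.1f}")
```

A Weibull shape parameter well away from 1, or a Weibull AIC clearly below the exponential AIC, indicates that the constant failure rate assumption does not describe the data.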

Today we have the embedded assumption of a constant failure rate and the reality of nonconstant failure rates. We also face the need to accurately describe the probability of failure based on field data, experimental data, or simulation. Simply avoiding the assumption of a constant failure rate frees us to use the information contained within time-to-failure data and models.

Bio:

Fred Schenkelberg is an experienced reliability engineering and management consultant with his firm FMS Reliability. His passion is working with teams to create cost-effective reliability programs that solve problems, create durable and reliable products, increase customer satisfaction, and reduce warranty costs. If you enjoyed this article, consider subscribing to the ongoing series at Accendo Reliability.
