When should you stop testing? The quick answer is to stop testing when the testing provides no value. If no one is going to review the results or use the information to make decisions, those are good signs that the testing provides no value. Of course, this may be difficult to recognize.
Some time ago, while I was working with a product development team, one of my assigned tasks was to create an ongoing reliability test plan. This was just prior to the final milestone before starting production. During development we learned quite a bit about the product design, supply chain, and manufacturing process, each of which carried a few salient risks to reliable performance.
INVESTIGATING PREVIOUS ONGOING TEST PLANS
Being new to the group and knowing that the project was a small evolutionary step for an existing product, I suspected a previous plan was already in place. My motivation was not to replicate that plan. My intent was to understand
- what data previous testing produced,
- how these data were being collected and presented,
- and how the information helped the team make decisions.
The new plan should stay consistent with what worked well and offer improvements where the previous plan didn't work so well.
I also asked several people who had previously created ongoing reliability tests how they approached the task. They all basically replicated the design qualification test plan and conducted the full suite of testing each month. That is a lot of testing! It's expensive, too; just think of all the data collected over the past few years. I really wish I had known about that treasure trove of data during the development process, as we could have avoided a few tests, improved others, and focused on the largest risks based on those data.
FINDING THE DATA
The first person who told me about the existence of previous test plans, and that the testing was performed monthly, said the data were probably on the team's shared drive. Since everything was supposed to be on the shared drive, we spent a few minutes looking, with no luck. Then I was off to talk to the other engineers and managers on the team most likely to know the whereabouts of the data. Everyone I talked to over a month of searching was aware of the testing and that the data were “somewhere.” The search was turning into a quest, and I began to wonder whether it was a fruitless hunt for the Holy Grail.
To make a long story very short, with the help of a country manager (the manufacturing and testing were being done in China) and a financial engineer (or some similarly titled person), we found the data after two months of searching. The person collecting and organizing the data had done a wonderful job; the data were complete and well presented, including the raw data.
I asked when anyone had last requested the test data, and the data archivist said I was the first in the five years he had been maintaining the database.
USING THE TEST DATA
The requirement to create and run testing that evaluated the product's performance and durability was written into the product lifecycle and development guidelines. At some point in the past, the testing was considered worth the expense of creating test plans and paying for samples, testing, and data collection. But somewhere along the way the value of these data diminished. Neither the development team nor the manufacturing team took the time to review or even monitor the data.
Of course, I grabbed the data and made a few simple plots. The historical record contained indicators of most of the excursions of higher-than-expected field failures; most of these would have been prevented or minimized if someone had looked at the test data and decided to act on it. I then revised the test plan I had created: I eliminated many of the tests that showed no failures or adverse variation over multiple generations of the design, and I increased the samples and frequency of a few tests based on larger-than-expected variation in the data.
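As an illustration of the kind of simple plot involved, here is a minimal sketch, assuming the archived results have been reduced to a monthly failure count and sample size per test. The file name, column names, and the p-chart-style control limit are my assumptions, not the actual analysis used.

```python
# Minimal sketch: trend-plot monthly ongoing reliability test results
# and flag months whose failure proportion exceeds a simple control limit.
# Assumes a CSV with hypothetical columns: month, test_name, n_tested, n_failed.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ongoing_reliability_results.csv", parse_dates=["month"])

for test_name, g in df.groupby("test_name"):
    g = g.sort_values("month")
    p = g["n_failed"] / g["n_tested"]                   # monthly failure proportion
    p_bar = g["n_failed"].sum() / g["n_tested"].sum()   # long-run average
    # 3-sigma binomial control limit (p-chart style); varies with sample size
    ucl = p_bar + 3 * (p_bar * (1 - p_bar) / g["n_tested"]) ** 0.5

    fig, ax = plt.subplots()
    ax.plot(g["month"], p, marker="o", label="monthly failure proportion")
    ax.plot(g["month"], ucl, linestyle="--", label="upper control limit")
    ax.scatter(g["month"][p > ucl], p[p > ucl], color="red", zorder=3,
               label="excursion")
    ax.set_title(test_name)
    ax.set_ylabel("failure proportion")
    ax.legend()
plt.show()
```

Even a plot this basic makes excursions jump out; points above the control limit are exactly the early warnings that sat unexamined in the archive.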
More importantly, I stopped about two thirds of the existing testing sequences, for two reasons: first, no one really needed to look at the results, and second, the process and supply chain had demonstrated years of stability and capability. Those tests didn't show any indication of a risk of failure. That left a manageable set of meaningful tests tailored to each product's unique risks. Those tests became useful for monitoring and for taking action to improve existing and future products.
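To make “demonstrated capability” concrete: one common measure is the process capability index Cpk, computed from measured data against specification limits. A minimal sketch, with hypothetical measurements and limits:

```python
# Minimal sketch: process capability index (Cpk) from measurement data.
# A Cpk comfortably above ~1.33 over a long, stable history is one common
# justification for reducing or stopping a routine test. Values are hypothetical.
import statistics

measurements = [9.98, 10.02, 10.01, 9.99, 10.00, 10.03, 9.97, 10.01]  # e.g., mm
lsl, usl = 9.90, 10.10  # hypothetical lower/upper specification limits

mean = statistics.fmean(measurements)
sigma = statistics.stdev(measurements)  # sample standard deviation

cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(f"mean={mean:.4f}, sigma={sigma:.4f}, Cpk={cpk:.2f}")
```

Stability (a control chart with no excursions) plus capability (a healthy Cpk sustained over years) together make the case that a routine test is no longer telling you anything new.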
STOPPING TESTING
When should you stop testing? Ideally, before setting the testing in motion. This axiom holds not just for ongoing reliability tests but for any test. Any time you take a product out of the development or manufacturing process to conduct an evaluation, it consumes time and resources. Therefore, collect test results only when they support a decision by someone who actually intends to make one.
Sure, there are all kinds of tests, and some are inexpensive, exploratory, or quick; for those, the impact of producing no meaningful information is minor. When the testing is expensive or time consuming, the value of the results had best warrant the investment.
Here are a couple of simple rules:
- Do not design and conduct a test unless there is some specific purpose. If it is done merely because you always do it, that is a clear signal to stop and ask a few questions.
- If the testing is established as a routine and ongoing process, then, when it no longer serves a meaningful (valuable) purpose, it is time to stop the test.
Testing can resemble a government agency: once established, it behaves like a living being with a will to survive and finds ways to continue despite having outlived any useful purpose. But tests are not entities in and of themselves. They are tools we use as engineers and managers to understand the characteristics of our products and processes.
You should evaluate your existing testing and for each test ask the following:
- What is its purpose?
- What do the results mean?
- What question or decision do the data support?
- What is the value of the test?
If the answers to these questions indicate that the test does not, or will not, have sufficient value given the investment required to create the data, then stop the test.
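One way to make the value question concrete is a rough value-versus-cost screen for each recurring test. The figures below are purely illustrative assumptions; you would substitute your own estimates of cost, catch probability, and avoided warranty expense.

```python
# Minimal sketch: rough value-versus-cost screen for a recurring test.
# All figures are illustrative assumptions, not real program data.
annual_test_cost = 120_000          # samples, lab time, data handling per year
p_catch_issue = 0.10                # chance per year the test catches a real issue
avoided_cost_if_caught = 500_000    # warranty/recall cost the catch would avoid

expected_annual_value = p_catch_issue * avoided_cost_if_caught
print(f"Expected annual value: ${expected_annual_value:,.0f}")
print(f"Annual cost:           ${annual_test_cost:,.0f}")
if expected_annual_value < annual_test_cost:
    print("Candidate to stop: expected value does not cover the cost.")
else:
    print("Keep, but confirm someone will act on the results.")
```

The arithmetic is deliberately crude; its point is to force the question of who will act on the results, and what that action is worth, before another cycle of testing begins.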
Bio:
Fred Schenkelberg is an experienced reliability engineering and management consultant with his firm FMS Reliability. His passion is working with teams to create cost-effective reliability programs that solve problems, create durable and reliable products, increase customer satisfaction, and reduce warranty costs. If you enjoyed this article, consider subscribing to the ongoing series at Accendo Reliability.