For as long as quality has been part of human endeavour, we have been measuring the products and services we provide.
Context of measurement
Quality control as a practice has been around ever since man has been making things. There is even a school of thought that evolution itself is a form of quality control. One favoured term is “survival of the fittest.”
A difficulty in charting the course of quality control over the years is separating developments in measurement and quality control from changes in production efficiency. That efficiency has, in turn, been driven by market demand for more, or cheaper, product. Quality control has evolved as the need for greater quantities of goods, lower costs to satisfy new markets, and higher market expectations of quality have grown.
First thoughts
One early example of quality control comes from the excavation of a mine for producing flints in Denmark (believed to date from around 3500 BC) – used in the production of Viking boats. The excavation work uncovered discarded parts: finished tools rejected at the mine (internal failure costs) before they were sold to the travelling merchants. The reason for rejection was to prevent unsuitable flints from being transported to Sweden, only to be rejected at the point of use (external failure costs). An early example, perhaps, of reducing the cost of non-quality by finding rejects in-house.
The role of measurement
Around 3000 BC, the Egyptians came up with a measure of length – the royal Egyptian cubit. According to NCSLI, it was deemed to be “equal to the length of the forearm from the bent elbow to the tip of the extended middle finger plus the width of the palm of the hand of the Pharaoh or King ruling at that time.”
The “master” of this measure was carved in granite, and workers were given “transfer standards” in the form of wooden or granite copies. The calibration interval was defined by the full moon, and failure to bring cubits back for comparison was punishable by death. These length measures, together with other measuring equipment (including set squares and plumb bobs), were used to set out precise right angles to establish the orientation of the great pyramid at Giza, built for King Cheops (who reigned c. 2551–2528 BC).
Legal responsibility for control of quality
One of the earliest examples of legally enforced quality control was the Code of Hammurabi (c. 2000 BC) in ancient Mesopotamia. One of the many laws in the code calls for the death penalty for the builder of a house that later collapsed and killed the owner – certainly a focus for ensuring control of quality, and encouragement to measure and check that the building was made correctly.
Control of measuring equipment
Leading into the Zhou dynasty (11th–8th century BC), a standardized system of measuring equipment was set up in China. This included twice-yearly calibration by the State through an official organization – a parallel to the trading standards bodies and accredited calibration laboratories of the present day.
Control of measuring equipment was further developed in China to include acceptable tolerances, annual calibration and corrective action when a tolerance was exceeded – again, normally by punishing an official. Given this precursor, perhaps clause 4.11(g) of BS 5750:1979 and the newer calibration follow-up requirements in ISO 9001 and ISO/IEC 17025 are not quite as onerous as we might believe! During the Qin dynasty, the state went further by producing large batches of standard measuring tools (transfer standards), distributed to all corners of the empire.
Later in the Tang dynasty (618 – 907 AD) developments in calibration requirements included an annual calibration in August and a seal to identify calibration status – with penalties if procedures were not followed.
In Ancient Greece, quality control of building work included the use of a straight, flat surface to control the quality of joints. The kanon stone was used with vermillion – a bright red pigment – to identify poor fit between blocks; the worker was then required to improve the mating surface. Written specifications were drawn up so that requirements for contractors were clearly defined.
Mediaeval developments
Although organizations of craftsmen are reported to have existed in India during the Vedic period (2000–500 BC), the next major development in Europe – mirroring much of what had already taken place in China, as described above – was the mediaeval guild, formed to control product quality and to provide training for apprentices. The training was long and demanding under the watchful eye of the master, and the apprentice had to show evidence of the ability to create high-quality products before joining the guild and becoming the next generation of craftsmen.
Around the same time, product marking as evidence of quality was further developed. In 1300 AD, Edward I of England instituted legislation for assaying (testing) of precious metals by officers of the Goldsmiths’ Guild in London, and for subsequent hallmarking before they could be offered for sale. The hallmark design adopted was a leopard’s head.
The principle behind product marking remains the same today: protection of the consumer, for the quality of goods purchased, and of the trader, from unfair (or inferior) competition.
Venetian Arsenal
Ship-building had existed in Venice for many years. In 1320 a new (much larger) shipyard – the Arsenal – was built to allow the state’s navy and merchant ships to be constructed and maintained. In the Arsenal, they developed methods of mass-producing warships, including the frame-first system (to replace the Roman hull-first practice). The Arsenal’s workforce grew from around 3,000 to 16,000 people at its peak and was able to produce nearly one ship each day. The key to this productivity was standardized parts and work methods that enabled a consistent product – remarkable similarities with lean manufacturing developed in Japan up to the 1970s.
Interchangeable parts
This idea of interchangeable parts, key to the Arsenal’s productivity, developed over the years. It propelled an industry where one ship a day was an achievement, to industries producing much higher volumes. Examples might include the printing and armaments industries.
Gutenberg’s invention of movable type relied on interchangeability. Working in the mid-15th century, he invented a process for making printing type in quantity and with sufficient precision to enable a practical system for printing books.
For the manufacture of arms, Americans like to credit Eli Whitney with perfecting interchangeable parts for muskets in 1803. In practice, the approach was first used by the Frenchman Honoré Blanc around 1778. In a demonstration, Blanc made parts for a thousand muskets and placed them in separate bins; he then called together a group of academics, politicians and military men and proceeded to assemble muskets from parts drawn at random from the bins.
The Royal Mint – Isaac Newton
Another individual renowned for his research in optics and physics, but not for quality control, was Sir Isaac Newton, who became Master of the Mint in 1699. One of the hardest tasks at the Mint was control of coin quality: each coin had to have the same weight and material composition. Newton developed special ladles for taking samples of molten metal to his office for testing.
Standardization
From the days of the Industrial Revolution, pressure for efficiency and production led to less emphasis on individual quality – leading indirectly to the formation of inspection departments with responsibility for filtering out bad products. With all these developments in products, production methods and arrangements for the control of quality, it became apparent that a proliferation of sizes and shapes of components and materials would cause problems with the production and maintenance of machines and infrastructure. The American Society of Mechanical Engineers (ASME), established in 1880, was one of the first standardizing bodies. It attempted to address a toll of 50,000 fatalities a year caused by pressure-system explosions. ASME is still a leading organization for pressure systems, with ASME codes for pressure vessels.
In 1901 Sir John Wolfe-Barry asked the Institution of Civil Engineers to form a committee to consider standardizing iron and steel sections. This became the Engineering Standards Committee (ESC). Their work quickly reduced the variety of sizes of structural steel sections from 175 to 113. For tramways, the reduction was much more spectacular – from 75 varieties to 5.
The British Standard Mark (later to become the Kitemark) was introduced as an indication (or hallmark) that goods were ‘up to standard.’ Internationally, in 1906 the International Electrotechnical Commission (IEC) was formed following a meeting in 1904 of leading scientists and industrialists. IEC is responsible for the development of world standards in electrical and electronics areas.
In 1916, the American Institute of Electrical Engineers joined with the American Society of Mechanical Engineers, American Society of Civil Engineers, American Institute of Mining and Metallurgical Engineers and the American Society for Testing Materials to establish a national body to coordinate standards development.
In 1917 the German body Deutsches Institut für Normung e.V. (DIN) was formed with a very similar mission. In 1918, the ESC in the UK became the British Engineering Standards Association and was granted a Royal Charter in 1929. In 1931, under a supplemental charter, it changed its name to The British Standards Institution. In 1946 the first Commonwealth Standards Conference was held in London, which led to the establishment of the International Organization for Standardization (ISO).
Walter Shewhart – Statistical quality control
When Dr. Shewhart joined the Western Electric Company Inspection Engineering Department at Hawthorne in 1918, industrial quality control was limited to inspecting finished products and removing defective items. In 1924, he transformed quality control by introducing the control chart. Using this chart, a process could be monitored and, where required, action taken to prevent quality problems through reduction of process variation.
Shewhart also framed the problem of variation in terms of “assignable causes” and “natural variation”, with control charts as the tool for distinguishing between the two. This differed significantly from the traditional view of the normal distribution, in which all variation was considered “natural”.
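The idea behind the control chart can be sketched in a few lines of code. This is an illustrative modern sketch of an “individuals” chart, not a reconstruction of Shewhart’s original 1924 work: the process average plus or minus three sigma gives the control limits, with sigma estimated from the mean moving range (divided by the constant d2 = 1.128 for a moving range of two). Points beyond the limits are flagged as candidates for assignable causes; points within them are treated as natural variation. The data are invented for the example.

```python
# Sketch of a Shewhart "individuals" control chart (illustrative data).
from statistics import mean

def control_limits(samples):
    """Return (lower, centre, upper) 3-sigma limits for an individuals chart."""
    centre = mean(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = mean(moving_ranges) / 1.128  # d2 constant for a moving range of 2
    return centre - 3 * sigma, centre, centre + 3 * sigma

def assignable_causes(samples):
    """Indices of points outside the limits -- candidates for assignable causes."""
    lcl, _, ucl = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# Nine readings showing only "natural" variation, plus one disturbed reading.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 13.5, 10.0, 9.9, 10.1]
print(assignable_causes(readings))  # -> [6]
```

Only the seventh reading falls outside the limits, prompting investigation of that point rather than adjustment of the whole process – exactly the distinction Shewhart’s chart was designed to make.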
He also developed and first published (in 1939) the PDCA cycle. It is sometimes known as the Shewhart cycle, but more commonly referred to as the Deming cycle after the man who made it famous. The idea is that it is a continuous process of:
- Plan
- Do
- Check
- Act
Improvement activity is cycled until the process consistently delivers the product as required.
Joseph M Juran – Managing for quality
Shewhart and Dodge were part of a team from Bell Laboratories that visited Western Electric’s Hawthorne factory in 1926. This factory was famous for the Hawthorne Effect – experimental studies on factory lighting and its effect on productivity indicated that the mere act of taking an interest in employees at work would affect their behaviour. The aim of the visit was to apply some of the laboratory’s tools and techniques. The team put in place a training programme at the factory, and one of the trainees was Joseph Juran. He went on to join the company’s Inspection Statistical Department, one of the first in the country. The rest, as they say, is history.
Quality control changes – impact on measurement
All of the developments in quality control listed above have resulted in direct and indirect effects on measurement methods and measurement capability, particularly the move to standardized and interchangeable parts. The need for more precise control of component dimensions to allow for quality assemblies required more accurate measuring instruments and a better understanding of measurement results. If the measurement is inaccurate then there is a risk that products that do not conform are released for assembly, or products that conform are rejected as unsuitable.
In using measurement systems the following questions are therefore often asked:
- How well do results from the measurement system represent the product?
- How precise is the measurement system?
- What is the resolution of the measurement system?
- Do the results of measurement contain any bias?
- Does the measurement system operate consistently over time?
- How much does measurement system variation contribute to variation in product measurements?
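The last of these questions can be made concrete with a simple variance decomposition. Assuming measurement error is independent of true part size, the variances add: var(observed) = var(product) + var(measurement). The sketch below uses hypothetical gauge-study data – repeated readings of the same parts estimate the measurement system’s variance, which can then be compared with the variance seen in routine product readings.

```python
# Hedged sketch: how much of the observed variation is the measurement system?
# Assumes independent measurement error, so variances add:
#   var(observed) = var(product) + var(measurement)
from statistics import mean, pvariance

def measurement_variance(repeat_readings):
    """Estimate gauge variance from repeated readings of the same parts.

    repeat_readings: one inner list of readings per part; within-part
    spread is attributable to the measurement system alone."""
    return mean(pvariance(r) for r in repeat_readings)

# Hypothetical gauge study: three parts, each measured four times.
repeats = [[5.01, 5.03, 4.99, 5.01],
           [7.52, 7.48, 7.50, 7.50],
           [6.24, 6.26, 6.22, 6.24]]
gauge_var = measurement_variance(repeats)

observed = [5.01, 7.50, 6.24, 5.48, 6.90, 7.11]  # routine product readings
total_var = pvariance(observed)
product_var = total_var - gauge_var              # variance due to the product
print(f"{gauge_var / total_var:.2%} of observed variance is the gauge")
```

A small ratio means the measurement system is fit for its purpose; a large one means apparent product variation may really be gauge variation, and conforming product risks being rejected (or non-conforming product released).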
Study of these questions led to the understanding and analysis of measurement uncertainty. This topic will be explored further at a later date.
BIO:
Paul is the current Chair of ISO TC 176 Sub Committee 2. The committee is responsible for ISO 9001 and ISO 9004, among other quality management standards. He runs s2a2s Limited, providing consultancy and training in quality management, management systems and risk.
With a first degree in engineering, Paul has extended his postgraduate qualifications in marketing and business as well as professional areas of auditing, risk, health & safety and quality. Paul contributes to the quality and risk professions through UK and International Standards committees, articles and volunteer roles.
Paul Simpson
Director – Strategy to Action, s2a2s Limited
email: paul@s2a2s.com
Phone: +44 (0) 7879 812008
Website: www.s2a2s.com
References
- Juran, J. M. (editor-in-chief) (1995) A History of Managing for Quality: The Evolution, Trends, and Future Direction of Managing for Quality. ASQC Quality Press, Milwaukee.
- Deming, W. Edwards (1986) Out of the Crisis. Massachusetts Institute of Technology, Center for Advanced Engineering Study, Cambridge (Mass.).
- Imai, Masaaki (1986) Kaizen: The Key to Japan’s Competitive Success. McGraw-Hill.
- Wikipedia – the online encyclopedia: http://en.wikipedia.org/wiki/Main_Page; http://en.wikipedia.org/wiki/Walter_A._Shewhart