Benjamin Disraeli said there were 'lies, damned lies, and statistics'. The quotation is out of place today, when a combination of statistical science (the empirical modelling of variability) and engineering science (the study of physics and materials) is necessary to achieve the superlative performance customers demand. New ways of managing product variation are being developed as a result of the cross-fertilisation between engineering and statistical science.
The challenge of engineering is to design products that are as insensitive as possible to sources of variation in performance – that is, robust. By 'robust' I don't mean 'built like a tank', but strong designs that are cost-efficient.
An example of the need for statistical science in engineering can be seen in the efforts made by Ford a few years ago to improve the start time of the Zetec engine. The major sources of variation were fuel quality and ambient temperature. To counteract the variations, Ford's engineers could experiment with injector type (of which there were six) and six other factors, with three different values each. There were therefore 6×3⁶ = 4,374 possible combinations of engine design to evaluate.
Statistical design techniques were used to select which of the 4,374 combinations to test, and to devise a series of experiments in which all the factors were changed simultaneously, but in a carefully structured way, so that every combination of levels of any two variables was tested at least once.
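The idea of covering every two-factor combination with far fewer runs than the full 4,374 can be sketched in code. The factor counts below mirror the article's example (one six-level factor plus six three-level factors); the greedy selection is a simple illustrative stand-in, not Ford's actual method, which would have used a formally designed experiment:

```python
from itertools import product, combinations

# Hypothetical factor levels mirroring the Zetec example:
# one 6-level factor (injector type) and six 3-level factors.
levels = [6] + [3] * 6
runs = list(product(*(range(n) for n in levels)))
assert len(runs) == 4374  # 6 x 3^6, the full design space

# Pairwise coverage: every pair of factors, at every combination of
# their levels, must appear in at least one selected run.
uncovered = {(i, j, a, b)
             for i, j in combinations(range(len(levels)), 2)
             for a in range(levels[i]) for b in range(levels[j])}

selected = []
while uncovered:
    # Greedily pick the candidate run covering the most uncovered pairs.
    best = max(runs, key=lambda r: sum((i, j, r[i], r[j]) in uncovered
                                       for i, j in combinations(range(7), 2)))
    selected.append(best)
    uncovered -= {(i, j, best[i], best[j])
                  for i, j in combinations(range(7), 2)}

print(len(selected))  # a few dozen runs instead of 4,374
```

A lower bound here is 18 runs (the six injector types crossed with any three-level factor), and the greedy sketch lands close to that, which is the whole point: pairwise information at a tiny fraction of the full factorial's cost.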
Injector location, inlet valve timing, and the spark plug’s position in the cylinder were found to have the largest influence on reducing variations in the fuel-to-air ratio. With the right selection of values for these variables, Ford made the ratio more robust and improved start time across the range of field conditions.
This approach to experimenting, which I call statistical engineering, is counter-intuitive to many engineers, who wrongly believe that the only way to conduct experiments is to hold everything constant and vary one thing at a time. This, I believe, stems from deterministic thinking, where we see the world governed by the laws of physics, with our mindset firmly in verification mode.
For empirical investigations where we do not understand perfectly how the physics works – particularly because of the impact of variation – our mindset needs to be one of discovery rather than verification. The right way to proceed is through statistically designed experiments to make the discovery process as efficient as possible.
These ideas have been around for decades, but few of these powerful, statistically based engineering methodologies have permeated our profession. Graduates and members of the professional institutions often arrive at Ford without much knowledge of these approaches, and teaching them is a formidable challenge. It is sobering to think of the state of affairs in smaller companies.
The most influential figures in our profession need to ensure that statistical engineering methods are taught and embedded in the undergraduate curricula and professional experience requirements of our institutions. They should be taught by engineers in the context of design and engineering, supported by first-class applied statisticians as necessary. These methods should not be taught through a separate course in statistics, run in isolation by the statistics department.
Richard Parry-Jones is group vice-president for product development and quality at Ford