Who’s afraid of the big bad bug?

Like many engineers, I have watched the rise of what I can only call millennium bug hysteria, at first with interest, and latterly with amazement. As the stories in the popular press have become more sensationalist (global economic catastrophe, death, failure of basic services), my attention has been drawn away from the problem to the hysteria and its causes.

There is one key underlying factor. Many people do not understand software. This lack of understanding is accompanied by a dawning realisation that they have become dependent on it. The combination of ignorance and dependence leads to a susceptibility to fear.

This fear needs something to spark it off. In the case of the millennium bug that trigger is the inability of many managers and engineers to distinguish between the potential for a problem to occur, the likelihood of it happening, and the consequences if it does.

If you cannot make these distinctions there are only two courses of action: complacency or blind panic. Many so-called millennium bug experts have opted for panic.

Of course, being a prophet of doom is quite a comfortable position. If nothing happens, you can argue that it was only your foresight that enabled people to avert disaster. If the worst does happen you are a true prophet. Being a prophet of doom can be rendered even more comfortable if, like some large consulting firms, you stand to gain financially from the fear you induce.

It helps also if you can play on a touch of millennial angst. Though relatively few people believe the end of the world is nigh, the belief that the turn of the century is special is deeply ingrained in the public psyche. It is difficult to regard it as a mere calendrical quirk.

Of course many doom prophets are well-motivated. By publicising the worst conceivable consequences of the millennium bug, they believe they are making people sit up and take notice. Exaggeration is in the public’s best interest, they argue. This, though, is a dishonest strategy. It damages the credibility of computing professionals and shows a lack of respect for the public and for managers.

This may explain much of the hysteria. But what of the causes of the millennium bug problem itself? Consider two scenarios.

In the first, Joe Schmoe, computer systems manager, enters the office of the managing director. ‘We have a problem,’ he says. ‘I have not told you this before, but the company is now entirely dependent on its computing and software systems. If there is a significant failure of these systems the whole company could fail.

‘Worse than that, many of the systems on which we depend are elderly. We have lost the documentation and no longer retain the skills or knowledge to be able to do very much if they collapse. We don’t know which are the business-critical systems and have no overall picture of the way in which the information they manipulate is used.

‘We have no record of these systems and have not retained any degree of control over their development. There are embedded systems sprinkled throughout our production processes; though they contain software, they have not been managed as such.

‘In addition, we have now woven ourselves into a network of systems, many managed by outside organisations over which we have no control. We have no knowledge of the dependability of these systems. I am unable to provide you with any risk assessment with regard to what are probably critical aspects of the organisation’s operation. I require a very large sum of money to sort this out. It will probably take at least five years and I can offer no guarantee of success.’

The managing director thinks for a moment, then replies: ‘You have been in charge of these systems and let things get into this sort of mess. You did not tell me before, and now expect me to fork out for your incompetence. Leave your letter of resignation with my secretary.’

Now look at the alternative scenario. Joe Schmoe goes to the managing director, but this time he says: ‘I have come to give you a warning of a terrible problem. On the stroke of midnight, 31 December 1999, all our computer systems may collapse. We will be caught up in a monumental technological disaster: planes will fall out of the sky, people will die in hospitals, water and electricity will not be supplied, banking systems will fail to operate. This is caused by a sinister bug which we were unaware of until now. Only I can save you and place you in the safe haven of Year 2000 compliance. I require a very large sum of money to sort this out. It will all be finished by the millennium and you will never have to worry about it again.’

The managing director replies: ‘I don’t understand the problem but clearly I cannot afford not to do this. Hang on while I write the cheque.’

And this is the heart of the difficulty. The millennium bug is, in many organisations, a relatively serious and costly problem. However, this is because of long-term failure to address legacy computer systems and to understand the lifetime costs of software. The millennium bug is not a freak occurrence: it is a symptom of an underlying difficulty. Our problems are chronic, not acute. This observation is not very newsworthy, but it’s true.

Anthony Finkelstein is professor of software systems engineering at University College, London