The Future of Microelectronics

Dr. Douglas J Paul from the University of Cambridge describes what might be facing electronic system designers in the new millennium

The microelectronics industry has changed dramatically since the demonstration of the first transistor by Bardeen and Brattain in 1947.

By the end of 1999, the first memory chips with a billion transistors were on sale, 1GHz microprocessors had been demonstrated, and annual sales of semiconductor chips exceeded $130 billion, about 0.8% of the gross world product. Indeed, there are very few areas of life which have not been touched by the silicon chip in some way.

Gordon Moore, a founder of Intel and one of the first to analyse the exponential growth of the semiconductor industry, observed that chip performance has doubled every 18 months since 1970 – the phenomenon now known as Moore’s Law. This increase in performance has mainly been achieved by decreasing the minimum feature size of transistors and circuits, allowing a higher density of transistors on a single chip.
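
As a back-of-envelope check on what that doubling rate implies, here is a short sketch (illustrative only; the 1999 starting point is the billion-transistor memory chip mentioned above, not a roadmap figure):

    # Back-of-envelope illustration of Moore's Law: a doubling
    # every 18 months, compounded over a number of years.
    def moores_law(start_value, years, doubling_period=1.5):
        return start_value * 2 ** (years / doubling_period)

    # Starting from the 1-billion-transistor memory chip of 1999:
    for year in (2006, 2012):
        print(year, f"{moores_law(1e9, year - 1999):.1e} transistors")
    # 2006: ~2.5e+10, 2012: ~4.1e+11 - the same order of magnitude
    # as the roadmap figures quoted later in this article.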

Today’s memory and microprocessor chips have minimum feature sizes of 0.18µm, and this is predicted to decrease to 35nm by 2012. However, it is clear that this shrinkage cannot continue indefinitely, so when, and why, will the increase in performance halt?

Researchers have already demonstrated transistors with minimum features of only 35nm, so the basic physics works down to at least this length scale. The lithographic process of shining light of ever smaller wavelength through a mask to pattern chips has recently been used to achieve minimum feature sizes of only 50nm.

The Semiconductor Industry Association Roadmap, which predicts future trends in microelectronics, suggests that these sizes will be required after 2010, when microprocessors will have over 10⁸ transistors per cm².

It is therefore clear that design complexity will be a major issue. A 30-million-transistor microprocessor with a 100nm minimum feature size will require around 20 lithographic layers to manufacture, corresponding to about 10TB of design data. Such circuits are expected on the marketplace by 2003.

A related problem is testing circuits to make sure they perform the tasks for which they were designed. The famous floating-point error in the early Pentium processors showed the potential problems clearly, and testing already accounts for 60% of the total cost of some chips. All complex chips now in production have bug lists, some of which are quite large! Self-testing or fault-tolerant architectures are required, but none has yet been demonstrated. Most microprocessor architectures require almost all of the transistors and interconnects to operate, so phenomenal yields are needed in production. It is only in memory that redundancy can really be used to great effect.
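
One reason redundancy works in memory but not in logic is that a memory array is regular: rows found faulty at test time can simply be remapped to spare rows. A minimal sketch of the idea (illustrative only; the class and names are invented for this example, and real memory repair uses fuse-programmed address matching rather than software):

    # Sketch of row redundancy: faulty rows found at test time
    # are transparently remapped to spare rows.
    class RedundantMemory:
        def __init__(self, rows, cols, spare_rows):
            self.array = [[0] * cols for _ in range(rows + spare_rows)]
            self.remap = {}              # faulty row -> spare row
            self.next_spare, self.limit = rows, rows + spare_rows

        def mark_faulty(self, row):
            if self.next_spare >= self.limit:
                raise RuntimeError("out of spare rows - chip is scrap")
            self.remap[row] = self.next_spare
            self.next_spare += 1

        def write(self, row, col, value):
            self.array[self.remap.get(row, row)][col] = value

        def read(self, row, col):
            return self.array[self.remap.get(row, row)][col]

    mem = RedundantMemory(rows=1024, cols=1024, spare_rows=16)
    mem.mark_faulty(42)          # row 42 failed at wafer test
    mem.write(42, 0, 1)          # lands in a spare row instead
    assert mem.read(42, 0) == 1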

Another problem is that while transistors get faster and smaller, transmitting information between them through the metal interconnects gets slower. Each interconnect has to be charged in a time given by the resistance of the wire multiplied by its capacitance. As feature sizes decrease, the capacitance is reduced but the resistance increases more quickly.

The resistivity is a fundamental property of the metal used for the interconnect, but the capacitance depends on the circuit layout. The easiest way to reduce the problem would be to switch to wires of lower resistance, but the optimum metal, copper, is already in use. The only remaining solution is a layout that minimises interactions between interconnects to reduce the capacitance.
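
The scaling argument can be made concrete with the standard first-order wire model: a wire of resistivity ρ, length L, width W and thickness T has resistance R = ρL/(WT) and is charged in a time of roughly τ = RC. The sketch below uses assumed, illustrative dimensions (not process data) for a wire that must still cross the whole die while its cross-section shrinks:

    # First-order wire delay: tau = R * C, with R = rho * L / (W * T).
    RHO_CU = 1.7e-8            # resistivity of copper, ohm-metres

    def wire_rc(length, width, thickness, cap_per_m=2e-10):
        # cap_per_m is an assumed ~0.2fF per micron of wire length
        resistance = RHO_CU * length / (width * thickness)
        return resistance * cap_per_m * length

    # Halve the wire's width and thickness while it still spans the
    # same 1mm of die: R quadruples, C barely changes, so the delay
    # roughly quadruples.
    before = wire_rc(1e-3, 0.50e-6, 0.50e-6)
    after = wire_rc(1e-3, 0.25e-6, 0.25e-6)
    print(f"delay grows by {after / before:.1f}x")    # ~4.0x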

Power dissipation is another problem: 40% of the power dissipated in a DEC Alpha microprocessor goes into distributing the synchronising clock signal around the chip. Without appropriate thermal management to remove the dissipated power, most chips, if run at high clock speeds, would end up melting their metal interconnects.
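
The scale of the clock problem follows from the standard dynamic-power relation for CMOS circuits, P ≈ αCV²f, where C is the switched capacitance, V the supply voltage, f the clock frequency and α the fraction of the capacitance switching each cycle. A rough estimate with assumed values (illustrative numbers, not measured Alpha data):

    # Standard CMOS dynamic-power relation: P = alpha * C * V^2 * f.
    def dynamic_power(alpha, cap_farads, supply_v, freq_hz):
        return alpha * cap_farads * supply_v ** 2 * freq_hz

    # Assume a clock network with ~3nF of wiring and gate capacitance,
    # switching every cycle (alpha = 1) at 2.0V and 600MHz:
    print(f"{dynamic_power(1.0, 3e-9, 2.0, 600e6):.1f} W")   # 7.2 W
    # Power rises linearly with clock frequency, so pushing the clock
    # higher directly raises the heat that must be removed.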

Some non-clocked (asynchronous) architectures have been demonstrated, but many require more transistors than their clocked equivalents and are therefore more difficult to design and test.

The final significant problem for the future manufacture of silicon chips is economic. Plants producing the latest microprocessors and memory chips already cost over $1 billion, and this cost is rising as the minimum feature size decreases. Yields are typically increased by manufacturing on larger wafers: non-functioning chips are normally produced at the edges of a wafer, so the ratio of good to bad chips can be improved by moving to larger diameters.

Most production today is on 8-inch diameter wafers, but the first 12-inch wafer manufacturing has been demonstrated, with yields over 145% of those possible on 8-inch wafers.
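
The attraction of larger wafers is simple geometry: a 12-inch wafer has (12/8)² = 2.25 times the area of an 8-inch one, while the poorly-yielding edge region grows only with the circumference. A sketch using a common first-order gross-die approximation (the die size and edge exclusion here are assumed values):

    import math

    def gross_dies(wafer_diam_mm, die_area_mm2, edge_mm=3.0):
        # Usable area over die area, minus a circumference term
        # for the partial dies lost around the wafer edge.
        d = wafer_diam_mm - 2 * edge_mm
        return int(math.pi * d ** 2 / (4 * die_area_mm2)
                   - math.pi * d / math.sqrt(2 * die_area_mm2))

    # An assumed 200mm^2 die on 8-inch (200mm) and 12-inch (300mm) wafers:
    d8, d12 = gross_dies(200, 200), gross_dies(300, 200)
    print(d8, d12, f"{d12 / d8:.2f}x")    # 117 vs 293: ~2.5x the
                                          # candidate dies per wafer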

The industry is in a situation where a slowdown in the market, or in the rate of investment growth, could severely limit the future performance of chips. Many of the big players are now beginning to share development costs for the next generation of chips. So what is the expected performance of future chips?

In 2006, microprocessors are predicted to have minimum feature sizes of 70nm, with 200 million transistors operating at a clock speed of 3.6GHz, and memory chips will hold 17.2Gbits. By 2012, microprocessors will have 35nm minimum feature sizes, with 1.4 billion transistors operating at 10GHz, and memory will be 275Gbits per chip, compared with the 1Gbit available at the end of 1999. The performance of these chips will depend heavily on architecture design and the efficiency of software, but it is certain to be far greater than anything available today.

It’s likely that the doubling of semiconductor performance will continue for at least another ten years, especially given current levels of investment. However, a number of factors may slow the growth. Transistors can be scaled to at least 35nm and still operate, so the major problems for the microelectronics industry lie in economics, power dissipation, interconnect design, design complexity and testing. Computer-aided design, manufacture and testing are already embedded in microelectronics manufacture, yet overall design productivity has been increasing by only 21% per year while integration density has increased by 68% per year. Investment in design is clearly required if the growth of microelectronics is to continue.
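
The gap between those two growth rates compounds quickly, as a short calculation using the figures quoted above illustrates:

    # Design gap: integration density growing 68% per year against
    # design productivity growing only 21% per year.
    density_growth, productivity_growth = 1.68, 1.21
    gap = (density_growth / productivity_growth) ** 10
    print(f"after 10 years the gap is ~{gap:.0f}x")   # ~27x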

So what happens when silicon transistors can go no further? Current research is looking at self-assembled logic circuits, in which DNA or organic molecules mimic the way the human brain forms its connections, potentially allowing enormous densities to be realised without the need for expensive production lines. Molecules which show memory or switching functions have already been demonstrated; the challenge is to find ways of assembling them accurately into useful circuits.

However, even the best silicon performance expected is still well short of the power of the human brain, and it is clear that silicon may not be the ultimate solution for processing and computation. Quantum computing is an exciting area in the research laboratories, offering architectures in which quantum mechanics allows all possible solutions to a problem to be explored simultaneously.

Potential uses include ultra-quick searching of large databases and cryptography, with the ability to break the codes in widespread use today – important for all communications. While such research is a long way from the marketplace, the enormous investment in microelectronics makes it difficult to bet against further enormous increases in computing power.

Information: University of Cambridge Tel: 01223 337385

Copyright: Centaur Communications Ltd and licensors