PALO ALTO — You've probably heard about the year-2000 problem: the prediction that billions of dollars' worth of business software will crash at the moment of the millennium.
But how about the year-2010 problem?
"Right now I view the year 2010 as a very dark curtain," Stanley Williams says. "And I don't know what's behind it."
It's no accident that Williams sounds slightly apocalyptic.
As principal laboratory scientist of Hewlett-Packard Laboratories here, Williams counts among his chief responsibilities determining exactly what lies behind the dark curtain of 2010. For that's when current technological trends are finally expected to lead to a computer chip too small to work (and too expensive to manufacture).
The fortunes of the $132-billion semiconductor industry--and, by extension, a large chunk of the world's increasingly technology-driven economy--rest on piercing the 2010 curtain and learning how to commercialize whatever lies behind it.
Semiconductors are the materials and devices that power computers and countless other electronic devices in home, office and factory.
Over the last 30 years, the semiconductor industry has relied on the axiom that computing devices would continue shrinking in size and increasing in power at an exponential rate according to what is known as Moore's Law. As articulated by Gordon Moore, one of the founders of semiconductor giant Intel Corp., the law holds that the number of transistors that can be packed onto a chip--and thus its computing power--will roughly double (or the price of a given amount of computing power halve) every two years.
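As a back-of-the-envelope illustration, the doubling rule can be treated as simple compound growth. (The starting transistor count below is a hypothetical figure chosen for the example, not one cited in this article.)

```python
# Moore's Law as compound growth: a hypothetical illustration.
def projected_transistors(start_count, years, doubling_period=2.0):
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return int(start_count * 2 ** (years / doubling_period))

# A chip with a (hypothetical) 7.5 million transistors, projected 12 years
# ahead under a two-year doubling period: six doublings, a 64-fold increase.
print(projected_transistors(7_500_000, 12))  # -> 480000000
```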
This is the trend that has led to computers packing hundreds of times more power into a desktop PC than once fit into computers so big they were operated by teams of men walking around their innards.
Most computer professionals have long understood that such an exponential increase in power could not continue indefinitely; at some point the very laws of physics would interfere. Accordingly, most of the debate about Moore's Law has been over when, not if, it would break.
Today the smart money says that silicon-based computing devices have about four generations left, "with every generation representing a quadrupling of transistors on a chip and a 25% gain in speed," Williams says. That means chips with a basic device size of 0.07 micron, or about three times as detailed as the circuits on today's most advanced chips.
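Williams' numbers compound in a straightforward way: four generations of quadrupling multiplies the transistor count by 4 to the 4th power, or 256, while four successive 25% speed gains work out to roughly a 2.4-fold speedup. A quick sketch of the arithmetic:

```python
# Compound the per-generation gains Williams describes over four generations.
generations = 4
transistor_multiplier = 4 ** generations   # each generation quadruples the transistor count
speed_multiplier = 1.25 ** generations     # each generation adds 25% in speed

print(transistor_multiplier)               # -> 256
print(round(speed_multiplier, 2))          # -> 2.44
```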
"Most people believe that's basically it," Williams says.
Most, but not all. Some experts argue that many functions of today's chips will continue to improve even as miniaturization passes the 0.07-micron limit. Others say that advances in networking and communications will take up the slack in enhancing computing performance.
Indeed, the demise of Moore's Law has been forecast before, only to be deferred for decades by sudden and unheralded advances in technology.
"There's a lot of punch left in Moore's Law," says S. Atiq Raza, chief technical officer at Advanced Micro Devices and the developer of that company's well-received new K6 microprocessor. He acknowledges, however: "We are hitting a wall of how far we can push silicon technology."
Two basic trends are coming together to mark the end of "classical" semiconductor design. One is the sheer cost of the extraordinarily precise machines needed to fabricate smaller and more elaborate chips.
The price of chip factories, or "fabs," as they are known in the industry, long ago passed the $1-billion mark. A plant that Advanced Micro Devices plans to open in Dresden, Germany, in 1999 will cost $1.9 billion.
Industry experts say that at current rates of growth, the price of a fab will hit $10 billion or even $25 billion soon after the turn of the century. At that point, the cost of manufacturing computer chips becomes almost a political issue, for there may be no company--or at most one--with the money to erect a facility that costly. Overcoming that hurdle may require an unprecedented level of research and development cooperation among chip makers and industry suppliers.
The second limiting factor is more fundamental: the behavior of the electron. Electronic devices such as microprocessors work essentially by sending streams of charged electrons over circuits and through logic "gates" designed to switch "on" or "off" depending on whether a current is present or not.
The configuration of these gates makes up the basic architecture of the integrated circuit and thus of the computer itself, performing the functions of addition, subtraction and other mathematical processes at the heart of digital computing.
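As a toy illustration of how such on/off gates add up--literally--to arithmetic, here is the textbook one-bit "half adder," built from an XOR gate and an AND gate. (This is a standard construction, not a description of any particular chip discussed here.)

```python
# A one-bit half adder built from two logic gates of the kind described above.
def half_adder(a, b):
    sum_bit = a ^ b   # XOR gate: on when exactly one input is on
    carry = a & b     # AND gate: on only when both inputs are on
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
# 1 + 1 gives sum 0, carry 1: the carry bit is binary addition's "tens place"
```

Chaining such adders together, bit by bit, is how gates perform the addition and subtraction the article describes.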
Today's chips need several hundred electrons at a time to switch the gates. As the chips become smaller, fewer electrons are needed. That's good, because it means the chips consume less power and generate less heat as they operate.
But it's also bad. The fewer the electrons needed to switch a gate, the harder it becomes to tell that signal from background noise--random fluctuations of a few stray electrons that can flip a gate by mistake.