
A Single-Minded Focus on Multiple Threads

In developing a radical new technology, Intel designers and engineers had to change the way they thought about chips.

November 09, 2002 | Alex Pham | Times Staff Writer

Bucking tradition is nothing new for Deborah T. Marr. The Cornell graduate is one of a handful of female computer chip designers in a field that is 95% male.

So when Intel Corp., the world's largest computer chip maker, embraced a radical new design for its microprocessors, it turned to the tenacious 36-year-old mother of two to push the technology out of the lab and into PCs.

The product of Marr's three-year effort is expected to debut next week in the form of a 3-gigahertz Pentium 4 chip that the Santa Clara-based company boasts is the fastest commercial microprocessor for personal computers.

The phenomenal speed of the processor is less significant than the way it works.

It relies on a technology Intel calls Hyper-Threading -- a radical concept that makes a single computer chip act like two. That notion, however simple, defies the entire architecture of computers, which excel at executing instructions one at a time, millions of times a second.

"Hyper-Threading is like giving a cook two pans to work with instead of one," said Brian Fravel, Intel's marketing manager for desktop chips. "It's not as fast as having two cooks, each with his own pan. But it's also not as expensive because you're only paying one cook."

Like chips, the people who make them generally tackle work in sequence, one task after another. Building a new kind of chip required a new kind of thinking. For order-loving geeks, it was a tough sell.

Enter Marr, who joined Intel in 1988 and whom a colleague describes as the "conscience of Hyper-Threading."

Her journey demonstrates how Intel was able to introduce a technology that defied its decades of monomaniacal focus on building smaller, faster transistors, an endless cycle predicted by company co-founder Gordon Moore and later known as Moore's Law. Hyper-Threading turns that approach on its head, making existing chips work more efficiently rather than simply throwing more transistors at the problem.

Another challenge with Hyper-Threading is its unproven nature. When the concept was floated in the early 1990s, no one knew exactly how it would perform in the real world, and many still are uncertain how much benefit it will provide. Intel claims the process boosts a computer's performance by 25%. Independent tests meant to simulate real-world computer use yielded mixed results.

That uncertainty, combined with a natural reluctance to change, meant Marr had her work cut out for her when she joined the Pentium 4 team as an architect in 1996. By then, the corporate decision already had been made to incorporate Hyper-Threading into the Pentium 4.

The idea began with Glenn Hinton, 45, one of Intel's top engineers in charge of designing chips at the company's campus in Hillsboro, Ore.

Hinton had been thinking since the late 1980s of ways to give a chip the ability to tackle more than one task by splitting its resources. When he worked on a project to make a smaller version of a chip in 1992, he noticed that slicing the chip in half slowed things only 25% to 30%.

"Then it hit me: If I could get two threads executing at the same time, each thread getting half the chip's resources, I could improve performance 1 1/2 times," he said.

Traditional chips crunch only one program at any given point in time. Computers give the illusion of running more than one program by rapidly switching from one to another. But because the chip is working only on a single program at any instant, only about a third of its transistors on average are engaged at once. The rest sit idle.

Hyper-Threading lets a chip work on two programs at once, making better use of its transistors. Although not as fast as two chips, it's also not as expensive.
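
The utilization argument can be sketched with a toy simulation (illustrative only; the nine "execution units" and the one-third figure are assumptions drawn loosely from this article, not Intel data):

import random

random.seed(0)
UNITS = 9  # pretend the chip has nine execution units

def units_kept_busy(num_streams):
    """Count how many units a given number of instruction streams use in one cycle."""
    used = set()
    for _ in range(num_streams):
        # Each stream, on its own, keeps only about a third of the units busy.
        used.update(random.sample(range(UNITS), UNITS // 3))
    return len(used)

one_thread = sum(units_kept_busy(1) for _ in range(1000)) / 1000
two_threads = sum(units_kept_busy(2) for _ in range(1000)) / 1000
print(f"average units busy with one thread:  {one_thread:.1f} of {UNITS}")
print(f"average units busy with two threads: {two_threads:.1f} of {UNITS}")

In this sketch, the second stream's work lands mostly on units the first stream would have left idle, which is where Hyper-Threading's gain comes from.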

A chip with Hyper-Threading is about 5% larger than one without. And in the world of semiconductors, size relates directly to cost. Larger chips are more expensive both to produce and to run because they draw more power.

Hinton began to champion the idea when discussions about the Pentium 4 took place in 1993. People were starting to use their computers to do several things at once.

Operating systems, especially Microsoft Corp.'s Windows, were finally capable of exploiting dual-chip PCs. Software engineers were beginning to write programs in ways that would take advantage of multiple processors.
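
In modern terms, "taking advantage of multiple processors" looks something like the sketch below (a hypothetical example in today's Python, not the software of the era): two independent, processor-heavy tasks are handed to separate worker processes so a dual-chip or Hyper-Threaded machine can run them at the same time.

from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Deliberately CPU-heavy work: count the primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        # Both calls can execute simultaneously on separate logical processors.
        first = pool.submit(count_primes, 50_000)
        second = pool.submit(count_primes, 50_000)
        print(first.result(), second.result())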

Hinton also was helped by two papers published in 1995 and 1996 by University of Washington researcher Dean Tullsen, who first simulated a chip capable of processing multiple streams of instructions simultaneously.

In 1996, the company gave Hinton the green light. He paired up with Marr, whom he had met on an earlier chip project.

Hinton and Marr were a study in contrasts.

At 6 foot 2 and loquacious, Hinton was constantly throwing off ideas, cornering engineers in the hallway to talk about an idea he'd just had. Intel workers who showed up late for meetings would say they had run into Hinton and would get understanding nods.
