The biggest question in theoretical computer science today is whether there is a way to make hard problems easy. In computers, as in life, it looks as if there isn't.
To be more specific: A "hard" problem is one whose solution time grows exponentially with the size of the problem, like the so-called traveling salesman problem, about which more in a moment. As the number of possibilities increases, these problems quickly get out of hand, even for the fastest computers.
An "easy" problem is one that behaves more tamely, such as sorting a deck of cards or alphabetizing a long list.
Unfortunately, many different kinds of real-world problems grow exponentially. Many scheduling problems are of this type, as are problems in how to pack objects into containers and in the sequencing of DNA. One book contains a list of 300 such problems, all of which are functionally equivalent.
There is no known easy way to do them. Computers can never be fast enough to examine all of the possibilities. Even if you had a computer that could do a billion computations a second (which is faster than any existing computer), and even if you increased its speed by a billion times, the number of possibilities in these exponential-growth problems would still overwhelm the machine. But if an easy way were found to solve any one of these problems, it could be used to solve all of them.
But unless some computational trick is found, there is no practical way to solve a large problem of this kind. So far, no such trick is known. Nor has it been proved that it cannot be done.
The real question is whether people just haven't been smart enough yet to figure out how to do these kinds of problems or whether they are inherently unsolvable. It is this aspect of the question that has drawn in mathematicians and logicians.
To make clear why certain problems grow exponentially, consider these examples:
Suppose you want to sort a deck of 52 cards. You go through them all until you find the ace of spades, which you put aside, and then you go through them again until you find the deuce of spades, and you put it aside, and so forth.
The worst that can possibly happen under such a sorting scheme is that the ace of spades will be the last card, the deuce of spades will be the next to last, the three of spades will be the one before that, and so forth. If you started with n cards, the worst that could possibly happen is that you would have to take n squared looks to sort the cards. (The number is actually less than n squared, but it can never be any worse than n squared.)
Anyway, n squared isn't so bad. Fifty-two squared is a number (2,704) that a computer can handle easily. Even if n were 50,000 or 500,000, the computer could still examine n squared possibilities (2.5 billion and 250 billion) in a reasonably short time. Problems of this sort, which grow like n squared, are called polynomial (because n squared is a polynomial).
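The card-sorting scheme just described can be sketched in a few lines of Python. This is a minimal illustration, not from the original article; the function name and the representation of cards as numbers are assumptions made for the sketch:

```python
# A sketch of the repeated-scan card sort described above.
# Cards are represented as the numbers 1..n; each pass scans the
# remaining pile for the smallest card and sets it aside.

def scan_sort(cards):
    remaining = list(cards)
    sorted_pile = []
    looks = 0                     # count how many cards we examine
    while remaining:
        smallest = remaining[0]
        for card in remaining:    # one full scan of the remaining pile
            looks += 1
            if card < smallest:
                smallest = card
        remaining.remove(smallest)
        sorted_pile.append(smallest)
    return sorted_pile, looks

# Worst case: the deck is in reverse order, so every pass must scan
# the whole remaining pile: n + (n-1) + ... + 1 looks.
deck = list(range(52, 0, -1))
pile, looks = scan_sort(deck)
print(looks)  # 1378, which is less than 52 squared (2,704)
```

On a reversed deck, each pass scans everything left, so the total comes to n(n+1)/2 looks, which is indeed less than n squared but never worse, just as described.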
In the language of computer science, they belong to the class called P. Alphabetizing a list is another problem in this class. These are the easy problems.
But now consider the following: Suppose you are a traveling salesman and you have to visit n cities, and you'd like to find the shortest route that goes to each city exactly once and comes back to the beginning.
This is much harder. You can pick any city to start with, say, Albuquerque, because you're going to have to go through that city, and you might as well say you're starting there. For the next city, you have n-1 choices, and you have n-2 choices for the city after that, n-3 choices for the city after that, and so forth.
When you take the product of all of these, (n-1) times (n-2) times (n-3) and so forth, you have a number that's vastly bigger than n squared. It's called n minus 1 factorial, written (n-1)!, and that exclamation point begins to give a clue to how big the number is.
For 50 cities, 49 factorial is about 10 to the 62nd power, which is 1 followed by 62 zeroes. "Even in astronomy they don't have such big numbers," says Ron Graham of AT&T Bell Laboratories. "These are beyond astronomical."
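The factorial growth is easy to verify directly. Here is a short Python sketch, using the standard math library; the function name is an assumption made for the illustration:

```python
import math

# Number of distinct round-trip routes through n cities,
# with the starting city fixed: (n-1)!
def routes(n):
    return math.factorial(n - 1)

print(routes(5))              # 24 -- five cities, still countable by hand
print(routes(10))             # 362880 -- ten cities, already hundreds of thousands
print(routes(50) > 10 ** 62)  # True -- 49! is roughly 6 x 10**62
```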
To get some idea just how big these numbers are, think of the following: Suppose you had a chessboard and you put a penny on the first square, two pennies on the next square, four pennies on the next square, eight pennies on the next square, and so forth. By the time you got to the 64th and last square, there would be enough money to buy about 7 billion tons of gold! And that number--2 to the 63rd power--is much, much smaller than 10 to the 62nd power.
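The chessboard comparison can be checked in a couple of lines. This is a sketch of the arithmetic only; the gold figure depends on the price of gold, so it is left out:

```python
# Doubling pennies across the 64 squares of a chessboard:
# 1 on the first square, 2 on the second, 4 on the third, ...
last_square = 2 ** 63            # pennies on the 64th square
print(last_square)               # 9223372036854775808, about 9.2 x 10**18

# Even this enormous number is dwarfed by 49 factorial:
print(last_square < 10 ** 62)    # True
```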
As with polynomial problems, there are ways to speed up exponential problems, but they never eliminate the explosive growth that characterizes them. These methods just postpone the explosive growth.
These problems are called NP-complete (for reasons that it would take several paragraphs to explain), and eventually, problems of this type swamp even the fastest computers. "For 1,000 cities, there's just no way," Graham says. "They're too big." It's easy to make these NP-complete problems so large that there isn't enough time in the history of the universe to use the brute-force approach and try all of the possibilities in order to solve them.
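What "brute force" means here can be made concrete. The Python sketch below tries every ordering of the cities; the distance table is invented for the illustration, and nothing in it comes from the article:

```python
# Brute-force traveling salesman: try every ordering of the cities.
# Fine for a handful of cities; hopeless long before n = 50.
from itertools import permutations

def shortest_tour(dist):
    """dist[i][j] is the distance between city i and city j."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    # Fix city 0 as the start; permute the rest: (n-1)! orderings.
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Four cities with made-up distances (hypothetical data):
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(shortest_tour(d))  # the best round trip here has length 18
```

With 4 cities there are only 3! = 6 orderings to try; at 50 cities the same loop would need 49! iterations, which is exactly the explosion described above.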
Is there a way to reduce NP-complete problems to P? Fame and fortune await anyone who can do it. Though it is always dangerous to say something is impossible, right now the betting is that it can't be done.