THE title says it all: By 2050, David Levy predicts, ordinary people will routinely fall in love with robots and have sex with them. Well, I know students whose most intimate companion is their laptop, and I have no doubt that some people are already having sex with robots, for the simple reason that some people will have sex with anything.
Levy, however, goes further: He suggests that, by midcentury, intimate relations with humanoid robots will be a commonplace, accepted feature of society. Fortunately, 2050 is still reasonably far away; in 2050, I intend to be either (a) dead or (b) 90 years old. In either case, it will take one heck of a sexy robot to get me aroused. Other, more youthful folks may not be so lucky. Do future tourists in Times Square run the risk of being accosted by lusty automata?
Levy musters some serious arguments for his claim. His chapter on blow-up rubber sex dolls and the like will surely rank as the definitive study of such phenomena for years to come. Because of the thoroughness of his research and the care with which he sets out his arguments, his thesis deserves a hearing.

He begins by discussing why people fall in love with other people. One could say a great deal about this topic, or little, and Levy opts for little. People are attracted to each other emotionally and sexually, and one thing leads to another. Technology allows love to blossom into strange and new flowers, Levy argues; witness love on the Internet between two people who have never met. His point is that falling in love is natural and that people fall in love in many ways for many reasons. Fair enough.

He then devotes considerably more space to exploring the love people feel for their pets. People love their pets a lot and frequently regard them as family members. Pet love is important for Levy, as he wishes to emphasize that the object of our strong love need not be human.
So far, so good. When I was a child, I loved my cat. Now, as an adult, I am fond of the family guinea pig. If Levy is attempting to persuade us that familial pet love can be extrapolated to romantic robot love, however, he is overreaching. Most people's feelings for their pets have never verged on the romantic or sexual. There is another word for that sort of love, and it is not a pretty one. (The love that dare not squeak its name.)
From pets, Levy turns to emotional attachment to electronic objects. Here, too, there is ample evidence of strong feelings on the part of humans: Electronic pets, such as Japan's hand-held digital Tamagotchi or Sony's robotic dog, can take the place of a real pet in a person's life. (It's not clear whether such electronic relationships possess the same staying power as flesh-and-blood relationships between humans and animals.) Again, romantic love seems far off.
More bittersweet and fraught are the relationships people have with their computers and the programs animating them. Even when computers are not specially programmed to act like human beings, they can move us in strangely human ways. Unfortunately, most of these ways are negative. Levy notes that humans speak to their computers in human terms. I certainly address my computer frequently, in very human terms -- terms that could not be printed in this newspaper. My computer, in turn, is adept at finding the action -- among all others it might perform -- that drives me craziest. For example, my old Macintosh PowerBook, when asked to perform a task it cannot, insists in a plaintive, feminine voice, "It's not my fault. It's not my fault."
This annoying inventiveness of computers is indeed a sign of incipient humanity. When humans created computers, we gave them the gift of language. Although these languages, such as Java or C++, are far from the language of love, they share many of the features of human languages that facilitate logic and reason; in particular, via their language, computers are capable of self-reference, like my whiny Macintosh. In human beings, the ability to reason about and refer to oneself is a key ingredient of free will; in computers, the capacity for self-reference leads, if not to full-blown free will, at least to an intrinsic unpredictability. The only way to know for sure what your computer will do when you press "enter" or click on an icon is to do it and see what happens.
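The self-reference described above can be made concrete with a classic programming curiosity, the quine: a program whose entire output is its own source code. The book mentions Java and C++; here is a minimal sketch in Python, chosen simply for brevity:

```python
# A quine: the program below prints exactly its own source text.
# The string 'src' is a template that, when formatted with itself
# (%r inserts its own quoted representation, %% escapes a literal %),
# reproduces the two lines of the program.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Run it, and the two lines it prints are the two lines you just ran. The trick is the same one the review gestures at: a piece of language that refers to, and reconstructs, itself.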
Because we gave them language, computers may someday be grudgingly acknowledged as independent, sentient beings. Unlike Levy, who holds that by 2050 they will have passed the Turing test (in which people converse "blind" with a computer program and try to determine whether they're talking to another human), I'd guess that that watershed is much further away.