Separate Realities : Immersive Video Lets Viewer Call the Shots


It's Thanksgiving Day some year in the not-too-distant future, and you're augered deeply into the living room sofa, having contributed immoderately to the season's decimation of the turkey population.

Naturally, you're watching a football game--it has a lot of impact on that new 127-inch universal television-Internet view screen. But instead of passively watching the game as broadcast by the TV producers--the way it was done back in, oh, 1996--you, as they say, make the call.

That is, you decide what view of the game to watch at any given moment, unrestrained by the angles of a particular camera. Maybe you want to run downfield with a pass receiver, or perhaps stay right there with the clashing titans on the front line. Or maybe you decide to follow the football's point of view after the quarterback throws one deep.

If research by a small team of scientists at UC San Diego's School of Engineering bears fruit, this interactive view of the game will be possible through a new technology called immersive video. It will enable viewers to interact with three-dimensional digital environments, in effect "moving" through the scene by directing a computer mouse or other control device.

Under the direction of professor Ramesh Jain, long an authority in the fields of computer vision and artificial intelligence, UCSD's Visual Computing Lab has managed to produce a prototype immersive video system. Though still experimental, their system offers a hint of the myriad ways that television, computers and the Internet will come together in the future.

If immersive video sounds a lot like the technological concept known as virtual reality, it is--but with one remarkable difference.

"So far, virtual reality implementations have all been made with graphics software and so they're all synthetic images, basically," explains Saied Moezzi, a lecturer at UCSD and the Visual Computing Lab's chief researcher. "Immersive video changes virtual reality applications to a much more realistic and believable environment."

The promise of immersive video is apparent. By applying virtual reality modeling concepts to images taken from reality, the systems can create an experience fundamentally different from that provided by ordinary television, movies or existing VR tools.

"Jain's work needs more detail and better definition of the 3-D character in the virtual world for it to be deployed," says Mike Zyda, a professor at the Naval Postgraduate School in Monterey, Calif., and a leader in virtual reality research. "But his work is really outstanding and the start of something very important for VR."

"Right now, the image quality is pretty lousy," Jain concedes. "But as that improves and becomes indistinguishable from broadcast television, the applications are tremendous."

The technology could be used for interactive movies and games, viewing sports, business conferences and other unguessed-at applications, he says.

"I'm quite enthused about it," says Bob Amen, director of technology for Cinesite, a major Hollywood film-effects house. "There are so many possibilities for special effects and making movies." With the virtual camera angles provided by immersive video, for example, a director could get shots he or she never filmed.

In simple terms, the Visual Computing Lab's immersive video system works by ingesting the separate data streams sent from multiple video cameras shooting an event and combining them into a single three-dimensional model--a computer's version of a mental image.

A viewer watching the immersive video broadcast would then use a mouse or some other device to choose the perspective he or she would like to see. The immersive video software engine analyzes the request, studies its 3-D model and "imagines" what the viewer would see if a camera was shooting that perspective. Finally, the system digitally constructs that scene and displays it.
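The rendering step described above can be sketched in miniature. The snippet below is purely illustrative and not the UCSD system's actual code: it treats the "3-D model" as a handful of recovered scene points and "imagines" a viewer-chosen perspective by projecting those points through a simple pinhole camera. All function and variable names are hypothetical.

```python
import math

def project_point(point, cam_pos, yaw, focal=1.0):
    """Project a 3-D point into a virtual camera at cam_pos,
    rotated yaw radians about the vertical axis (pinhole model)."""
    # Translate the point into the camera's coordinate frame
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    # Rotate about the vertical (y) axis to apply the chosen viewing angle
    xr = x * math.cos(yaw) - z * math.sin(yaw)
    zr = x * math.sin(yaw) + z * math.cos(yaw)
    if zr <= 0:
        return None  # the point is behind the virtual camera
    # Perspective divide: distant points land closer to the image center
    return (focal * xr / zr, focal * y / zr)

# A toy "scene model": a few 3-D points recovered from multiple cameras
scene = [(0.0, 0.0, 5.0), (1.0, 0.5, 6.0), (-1.0, -0.5, 4.0)]

# The viewer "moves" simply by picking a new camera position and yaw
for pt in scene:
    print(project_point(pt, cam_pos=(0.0, 0.0, 0.0), yaw=0.0))
```

In a real system the model would be dense textured geometry rather than a few points, but the principle is the same: once the scene exists in 3-D, any camera position the viewer requests can be rendered.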

Image quality depends on the complexity of the subject or event being viewed, the amount of movement or change in the scene, and the number of cameras--the more the better.

"For example," Jain explains, "it would be easy to create a system that tracks a few people walking through a courtyard, but a busy train station in Tokyo would be extremely difficult."

Immersive video evolved from multiple perspective interactive video, or MPI, another technology under study in Jain's Visual Computing Lab and elsewhere. MPI, though it also uses several video cameras at once, is much simpler: It takes the viewer's request, then responds with the feed from the real camera that most closely matches the desired view. Immersive video, by contrast, generates a virtual image from a virtual camera angle that isn't produced by actual cameras.
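The simpler MPI approach amounts to a nearest-match lookup. The hypothetical sketch below (not the lab's code; the camera names and angles are invented) answers a requested viewing angle with whichever real feed is mounted closest to it:

```python
def closest_camera(cameras, requested_angle):
    """Return the name of the real camera whose mounting angle (degrees)
    is closest to the viewer's requested angle."""
    def angular_distance(a, b):
        # Distance on a circle: 350 and 10 degrees are only 20 apart
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(cameras, key=lambda name: angular_distance(cameras[name], requested_angle))

# Hypothetical feeds around a stadium
feeds = {"sideline": 0.0, "end_zone": 90.0, "overhead": 180.0}
print(closest_camera(feeds, 80.0))  # -> end_zone
```

The contrast with immersive video is then clear: MPI can only return one of these existing feeds, while immersive video can render an angle no camera occupies.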

The value of immersive video lies in its interactivity, Jain says. "With immersive video, the question of who is controlling the TV camera and what you want to watch is completely different," he says.