THE CUTTING EDGE

Separate Realities

May 06, 1996|JENNIFER OLDHAM

Software developed by scientists at UC San Diego creates 3-D virtual reality images by combining video from cameras stationed at different points around a real environment, such as a sporting event. Some of the first applications of so-called immersive video will be available on the Internet, where the technology will allow people to watch live or videotaped events on their computers. The new twist is that immersive video enables spectators to watch from any viewing perspective--even ones that don't emanate from an actual camera. At right, how immersive video could be used at a karate match.

1. Multiple video cameras are positioned around the ring, above; there's no theoretical limit to the number of cameras, but the more the better.

2. Video streams from each camera are fed into a computer. The computer searches for moving figures in each frame, left, rather than static objects such as trees or bleachers. When the computer is finished, its memory retains the shape of the objects and their movements at each angle.
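The article doesn't say how the computer separates moving figures from static scenery. A standard technique for this is background subtraction: compare each frame against a stored image of the empty scene and flag pixels that differ by more than a threshold. A minimal sketch, with the array sizes, pixel values, and threshold all hypothetical:

```python
import numpy as np

def moving_figure_mask(frame, background, threshold=30):
    """Flag pixels that differ from a static background model.

    frame, background: 2-D grayscale arrays (H x W) of uint8 values.
    Returns a boolean mask that is True where a moving figure is likely.
    """
    # Widen to int16 before subtracting so uint8 values don't wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Hypothetical 4x4 scene: flat gray background, a "figure" brightens two pixels.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1, 1] = 200
frame[2, 1] = 200

mask = moving_figure_mask(frame, background)
```

Running this per frame from each camera leaves only the fighters' silhouettes, which is what step 3 combines.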

3. Treating the event as if it were a puzzle, the computer takes edited information from each camera angle and assimilates these viewpoints into a 3-D model, right.
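One way to fit the puzzle pieces together, consistent with the silhouettes produced in step 2, is silhouette intersection (a "visual hull"): a point in space belongs to the 3-D model only if every camera sees it inside a figure's outline. The sketch below simplifies drastically, assuming three axis-aligned orthographic views instead of real perspective cameras:

```python
import numpy as np

def carve_visual_hull(front, side, top):
    """Intersect silhouettes from three axis-aligned orthographic views
    into a 3-D occupancy grid.

    front[y, z], side[x, z], top[x, y]: boolean N x N silhouette masks.
    Returns hull[x, y, z]: True where all three views agree a figure exists.
    """
    # Broadcast each 2-D silhouette along its missing axis, then AND them.
    return front[None, :, :] & side[:, None, :] & top[:, :, None]

# Hypothetical example: each view sees a 2x2 square in a 3x3 grid,
# so the carved hull is a 2x2x2 cube of occupied voxels.
sil = np.zeros((3, 3), dtype=bool)
sil[:2, :2] = True
hull = carve_visual_hull(sil, sil, sil)
```

With more cameras at arbitrary angles the same idea holds, but each voxel must be projected through each camera's real geometry rather than along an axis.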

4. The viewer, probably using a mouse, tells the system which view of the match he or she wants to see at any moment--following the path of a participant, for example.

5. The system examines the video camera streams that most closely match the viewer's request and generates an outline of the desired virtual image. Using graphics software, the system transforms the 3-D object from what looks like a blob into a realistically colored and textured person, left. To do so, the software draws a model of the figure and paints it using the original video, which the computer projects onto the model.
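Picking the streams "that most closely match the viewer's request" can be sketched as ranking cameras around the ring by how close their angle is to the requested viewing angle. The camera layout and function names below are hypothetical, not taken from the UC San Diego system:

```python
def closest_cameras(camera_angles_deg, view_angle_deg, k=2):
    """Return indices of the k cameras nearest the requested viewpoint.

    camera_angles_deg: each camera's position on the ring, in degrees.
    view_angle_deg: the angle the viewer asked to watch from.
    """
    def angular_distance(a, b):
        # Shortest distance around the circle, e.g. 350 vs 10 is 20 degrees.
        d = abs(a - b) % 360
        return min(d, 360 - d)

    ranked = sorted(range(len(camera_angles_deg)),
                    key=lambda i: angular_distance(camera_angles_deg[i],
                                                   view_angle_deg))
    return ranked[:k]

# Eight cameras spaced every 45 degrees; the viewer asks for 100 degrees.
cameras = [0, 45, 90, 135, 180, 225, 270, 315]
nearest = closest_cameras(cameras, 100)  # cameras at 90 and 135 degrees
```

The video from the chosen cameras then supplies the texture that gets projected onto the 3-D figure.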

6. Virtual reality modeling language (VRML) allows users to see these 3-D images on the Internet. (Webspace Navigator software, a VRML-based interface, has tools the computer user can manipulate to change the angle of view on the objects.) Demonstration versions of the program will be operating later this year at http://vision.ucsd.ed.
