
Meeting the Future

Internet: Tele-immersion makes virtual conferencing more real

Sep 1, 2000

Video teleconferencing is often touted for its potential to promote better communication and curb expensive travel. Yet the technology for such virtual face-to-face meetings has not caught on as a routine business tool. Among its perceived failings: the inability of participants to make eye contact (because of camera placement limitations), the need for a dedicated “dry room” away from office or lab floors, and the lack of a shared workspace for collaborative brainstorming.

Given the expected development of far greater bandwidth than today’s data lines provide, what is the next step toward more realistic virtual meetings? The answer is known as tele-immersion, a conceptual hybrid of virtual reality and Star Trek’s Holodeck. One of the principal application areas for Internet2 (a research project involving 170 academic institutions and 50 corporations to develop tomorrow’s faster Internet), tele-immersion visually replicates, in real time and in three dimensions, slabs of space surrounding the remote participants in a cybermeeting. The result is a shared, simulated environment that makes it appear as if everyone is in the same room.

In May, researchers at the University of North Carolina (UNC) at Chapel Hill, the University of Pennsylvania in Philadelphia, and Advanced Network and Services in Armonk, N.Y., demonstrated for the first time the building blocks of this meeting room of the future. A participant sees two projected “windows” onto life-sized colleagues, each hundreds of miles away from the site and from each other. Lean forward, and the foreground figure and the background bookshelf in a window shift slightly relative to each other in three dimensions, as if they were right there.
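The geometry behind that shift is ordinary motion parallax: points at different depths project to different places on the display as the tracked head moves. The sketch below is illustrative Python, not the researchers’ code; the head positions, depths and screen geometry are made-up values chosen only to show the effect.

```python
# Illustrative sketch of head-tracked parallax (not the UNC/Penn system's code).
# Coordinates in meters: the display lies in the plane z = 0, +z points toward the viewer.
import numpy as np

def project_to_screen(point, head, screen_z=0.0):
    """Project a 3-D point onto the display plane along the line of sight
    from the tracked head position; returns the (x, y) screen coordinates."""
    p, h = np.asarray(point, float), np.asarray(head, float)
    t = (screen_z - h[2]) / (p[2] - h[2])  # where the head-to-point ray crosses the screen
    return h[:2] + t * (p[:2] - h[:2])

# Assumed scene: a "foreground figure" 1 m behind the screen, a bookshelf 3 m behind it.
figure, bookshelf = [0.2, 0.0, -1.0], [0.2, 0.0, -3.0]

for head in ([0.0, 0.0, 0.6], [0.1, 0.0, 0.5]):  # the viewer leans forward and to the right
    fx, _ = project_to_screen(figure, head)
    bx, _ = project_to_screen(bookshelf, head)
    print(f"head at {head}: figure x = {fx:.3f} m, bookshelf x = {bx:.3f} m")
# The two points shift by different amounts on the screen, so the figure appears to
# move relative to the bookshelf, the depth cue the demo reproduces.
```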

The 3-D view and precise position tracking, however, come at a cost: awkward goggles and a silvery head tracker must be perched on the user’s head. Less evident is an array of seven standard video cameras and two special ones that capture distance information by reading light patterns imperceptibly projected into each participant’s environment.
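One common way such pattern-reading depth cameras work, by the general triangulation principle rather than necessarily the exact method used in the demo, is that a pattern feature projected from one position and observed by a camera a known baseline away appears shifted by a disparity inversely proportional to the surface’s distance. A minimal sketch, with assumed calibration values:

```python
# Illustrative sketch of depth from a projected-pattern (structured-light) camera;
# the baseline and focal length are assumed calibration values, not the demo's.

def depth_from_disparity(disparity_px, baseline_m=0.10, focal_px=800.0):
    """Distance to a surface point, given the pixel disparity between where a
    pattern feature is projected from and where the camera observes it."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a surface in front of the rig")
    return focal_px * baseline_m / disparity_px  # classic triangulation relation

# Large disparity means a nearby surface; small disparity means a distant one.
for d in (80.0, 40.0, 10.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")
# Repeating this for every pattern pixel yields a per-pixel depth map that, combined
# with the color cameras, lets the remote scene be re-rendered from any viewpoint.
```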

In addition to the need for bulky and costly equipment, the demo suffers from video glitches (just like videoconferencing). Still, it offers a glimpse of what lies beyond today’s videoconferencing, and it vindicates more than two years of research by a collaboration of computer scientists from UNC, Penn, Brown University, Columbia University, the University of Southern California, Carnegie Mellon University and the University of Illinois. “It’s a significant accomplishment,” says Jaron Lanier, chief scientist for the project. “We demonstrated viewpoint-independent real-time scene sensing and reconstruction. And we got rid of the dry room.”

By the end of the decade, when next-generation broadband is in place, immersive conferencing could be rigged up in any office or lab. Still, there are challenges. The researchers want to integrate the real and virtual worlds more fully and to provide overlapping workspaces for shared whiteboards and 3-D modeling. In the demo, the edges of the virtual cubicles do not meet; ultimately tele-immersion is meant to be seamless. Lanier, who helped invent and popularize virtual reality in the 1980s and 1990s, says the scientists are also working on autostereo screens (for 3-D without glasses) and advances in haptics (for full-hand tactile simulations).

If it all works, you’ll finally be able to really reach out and touch someone.