Intelligent Machines

Dancing Chairs and 3-D Puppets Will Make Kids Love the Kinect

Playtime is about to get an upgrade.

Oct 11, 2012

Microsoft’s Kinect is well loved by the gaming community and is being taken apart and put back together in labs all around the world. But two Kinect hacks I saw this week tell me the device will become a favorite of a much younger crowd.

Take KinEtre, a 3-D animation system that lets you control virtual objects on a screen with your body. Remember all that moving furniture in Beauty and the Beast? KinEtre lets you create your own virtual world of hopping chairs and dancing brooms, Microsoft researcher Jiawen Chen explained, using the Kinect's depth-sensing camera and KinectFusion, a real-time 3-D reconstruction system.

Say you want to play a chair, as the researchers did in this demo video. The first step is to scan the candidate object, which KinEtre converts into a mesh with a skeletal frame. The Kinect, meanwhile, already tracks your body's movement. As the video shows, with a single word ("Possess"), the system maps your tracked skeleton onto the back legs of the virtual chair, animating the chair according to the movement it detects as you jump and bend and kick in front of the camera.
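The core of that "Possess" step is a retargeting problem: joints tracked on the user drive the bones of the scanned object, which in turn deform its mesh. Here is a minimal 2-D sketch of that idea using simple linear blend skinning; the function names, the two-bone chair rig, and the translation-only bone transforms are illustrative assumptions, not KinEtre's actual method or API.

```python
# Sketch of "possession": tracked user joints drive an object's bones,
# and per-vertex bone weights blend those motions across the mesh.
# Transforms here are translations only, for brevity.

def possess(user_joints, rest_joints, mesh, weights):
    """Deform mesh vertices by blending per-bone offsets (linear blend skinning).

    user_joints -- current tracked joint positions (e.g. the user's ankles)
    rest_joints -- the matching joints of the scanned object at rest
    mesh        -- list of (x, y) vertices of the scanned object
    weights     -- weights[i][b] = influence of bone b on vertex i
    """
    # Per-bone offset: how far each tracked joint has moved from the rest pose.
    offsets = [(uj[0] - rj[0], uj[1] - rj[1])
               for uj, rj in zip(user_joints, rest_joints)]
    deformed = []
    for (x, y), w in zip(mesh, weights):
        dx = sum(wb * ox for wb, (ox, _) in zip(w, offsets))
        dy = sum(wb * oy for wb, (_, oy) in zip(w, offsets))
        deformed.append((x + dx, y + dy))
    return deformed

# A two-legged "chair": one vertex per leg, each fully bound to one bone.
mesh = [(0.0, 0.0), (1.0, 0.0)]
rest = [(0.0, 0.0), (1.0, 0.0)]
weights = [[1.0, 0.0], [0.0, 1.0]]

# The user lifts the joint driving the left leg by 0.5.
lifted = possess([(0.0, 0.5), (1.0, 0.0)], rest, mesh, weights)
print(lifted)  # [(0.0, 0.5), (1.0, 0.0)]
```

With two performers, as in Chen's horse demo, the same machinery applies: each person's tracked joints simply drive a different subset of the object's bones.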

The delightful part is that KinEtre recognizes animation cues from more than one person. Chen joined his colleague on screen and the two proceeded to animate a virtual horse, having "possessed" two legs each. Chen said the most obvious application for KinEtre would be in gaming, to create avatars that are truer to reality. It would also be an easy way to introduce computer graphics into home movies, and a quick way to throw together a 3-D animation using a cast of real people—say, at a family Thanksgiving gathering.

The 3-D puppet project out of the Visualization Labs at UC Berkeley is also likely to win big points from kids, but parental guidance is advised. Like KinEtre's creators, the Berkeley team specializes in 3-D animation for folks who aren't CG whizzes. A puppet is scanned into the system (captured using ReconstructMe software). The puppet is recognized when it wanders into the camera's field of view, and the software ignores the puppeteer's hand. The puppets are identified, their orientation and position detected, and their images rendered in a virtual storyland, all in real time.
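The identification step amounts to matching whatever shape appears in view against the puppets scanned earlier. The toy sketch below illustrates one way that could work: compare a crude, position-invariant shape signature of the live blob against signatures stored per puppet. The descriptor, the matching rule, and the puppet names are all illustrative assumptions; the Berkeley system's actual pipeline, built on ReconstructMe scans, is far more sophisticated.

```python
# Toy sketch of puppet identification: match a live depth blob against
# stored shape signatures. Everything here is illustrative, not the
# Berkeley project's real algorithm.

def signature(points):
    """A crude shape descriptor: mean and max distance from the centroid
    (invariant to where the puppet sits in the frame)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in points]
    return (sum(dists) / n, max(dists))

def identify(blob, templates):
    """Return the name of the stored puppet whose signature best matches."""
    s = signature(blob)
    def score(name):
        t = templates[name]
        return abs(s[0] - t[0]) + abs(s[1] - t[1])
    return min(templates, key=score)

templates = {
    "dragon": signature([(0, 0), (4, 0), (2, 3)]),          # tall triangle
    "rabbit": signature([(0, 0), (1, 0), (0, 1), (1, 1)]),  # small square
}
blob = [(10, 10), (11, 10), (10, 11), (11, 11)]  # a square, elsewhere in view
print(identify(blob, templates))  # rabbit
```

Ignoring the puppeteer's hand would then be a segmentation step before matching, so only the puppet's points feed the descriptor.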

A few perks: the system lets you reposition the virtual "camera" viewing the scene you've created, and lets you turn the lighting up or down. For anyone interested in a hands-on experience, the team recently released the source code for their setup for free.
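Those two perks fall out of the fact that the scene is rendered rather than filmed: the camera and the lights are just parameters of the renderer. A minimal 2-D sketch of both knobs, with illustrative names that are not the project's actual API:

```python
# Sketch of the two "perks" of a rendered scene: a movable virtual
# camera (a rigid transform into camera space) and adjustable lighting
# (scaling colors). Names and simplifications are assumptions.
import math

def view_transform(points, cam_pos, cam_angle):
    """Express 2-D scene points in the virtual camera's frame."""
    c, s = math.cos(-cam_angle), math.sin(-cam_angle)
    out = []
    for x, y in points:
        tx, ty = x - cam_pos[0], y - cam_pos[1]   # translate to camera origin
        out.append((c * tx - s * ty, s * tx + c * ty))  # rotate into view
    return out

def relight(colors, brightness):
    """Scale RGB colors to 'turn the light up or down', clamped to 255."""
    return [tuple(min(255, int(ch * brightness)) for ch in rgb)
            for rgb in colors]

scene = [(2.0, 0.0)]
print(view_transform(scene, cam_pos=(1.0, 0.0), cam_angle=0.0))  # [(1.0, 0.0)]
print(relight([(100, 100, 100)], brightness=1.5))  # [(150, 150, 150)]
```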