Following Double Fine's recent announcement of the Kinect-exclusive Happy Action Theater, lead technical artist and upcoming GDC China speaker Drew Skillman discussed the studio's experience working with Microsoft's depth-sensing hardware, noting that it required the studio to rethink its approach to game design.
In particular, Skillman notes that it takes a lot of experimentation with Kinect to get things working as intended, and Double Fine had to take some unusual measures to reliably test its latest title.
At next month's GDC China, Skillman will delve further into the implications of working with Kinect in a session titled, "Rapid Prototyping Techniques for Kinect Game Development," which will focus on the studio's process for creating its motion-controlled games.
In anticipation of the talk, Skillman discusses the challenges of working with Kinect's depth-sensing technology, offering tips on how best to make use of the hardware's strengths and how to design games around its weaknesses.
Considering Double Fine's history of making games with traditional gamepad controls, what has it been like for the studio to work with the Kinect hardware? Are there any particular challenges you or the team have encountered?
The biggest challenge has been adapting to the different types of input you get from the Kinect. A gamepad controller is quite literally a handful of very precise inputs, but the Kinect is a continuous stream of video and depth data. Even after processing that data into player joints and segmentation IDs, you will still never be able to pinpoint the exact frame when a character is supposed to jump, for example.
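To illustrate that ambiguity: with noisy, continuous input, a jump has to be triggered by crossing thresholds rather than by a single definitive frame. A minimal hysteresis-style sketch in Python (the `hip_y`/`baseline` inputs and the threshold values are hypothetical, not taken from Double Fine's code) might look like:

```python
def update_jump(hip_y, baseline, jumping, rise=0.15, settle=0.05):
    """One frame of a hysteresis-based jump trigger over noisy hip
    heights (meters). Returns (new_jumping_state, jump_fired).
    Thresholds are illustrative, not Double Fine's actual values."""
    if not jumping and hip_y > baseline + rise:
        return True, True    # rose well above baseline: commit to the jump
    if jumping and hip_y < baseline + settle:
        return False, False  # settled back near baseline: re-arm the trigger
    return jumping, False    # no state change this frame
```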
That difference has informed our designs at the deepest levels. We want to leverage this new technology for the amazing new interactions it allows, and not just use it as a calorie-burning substitute for a gamepad.
What tips would you offer developers looking to start working with Kinect or other motion control hardware?
One great way to start is to check out all the phenomenal Kinect hacks that are flooding the web right now. If you do a Google search for "Kinect Hack," you will see a massive number of inspirational and creative applications, many of which are already in game form, or translate to games naturally. One reason for this rampant experimentation is that programming environments like Processing and openFrameworks make the hardware accessible to anyone with a computer. This open source approach gives it traction in disciplines like science, education, interactive design, and student games. Kinect is definitely a melting pot of ideas right now.
Also, consider investing in life-size cardboard cutouts of your favorite characters. A member of our team made a genius purchase early in the project, and as a result cardboard cutouts of Dumbledore, Darth Vader, and Elvis have been invaluable testers throughout development. The Kinect detects them as very patient players. Another late arrival to our "test team" was a yoga ball, which the Kinect also recognizes.
Specifically, how did the specialties or limitations of the Kinect hardware inform the development of Happy Action Theater?
Happy Action Theater was conceived as a party game that could be played by very young children as well as adults. That guaranteed we were going to have a lot of chaos happening in the play space. People would be coming into and out of frame, occluding each other, hiding behind furniture, and jumping off of it. These are all big problems for skeletal tracking, which on top of that only supports two players. Those limitations led us to a computer vision technique called "blob detection" that can identify movement without requiring a skeleton. We created an early prototype in Processing with OpenCV, and then created our own version of it in-engine, which is how we're able to support six simultaneous players.
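For readers unfamiliar with the technique, a minimal blob-detection pass over a depth frame can be sketched in a few lines of Python with OpenCV. The `find_player_blobs` helper and its range/area thresholds below are illustrative, not Double Fine's in-engine implementation:

```python
import cv2
import numpy as np

def find_player_blobs(depth_mm, near=800, far=4000, min_area=5000):
    """Find player-sized blobs in a Kinect-style depth frame
    (uint16 millimeters). Range and area thresholds are illustrative."""
    # Keep only pixels inside the playable depth range.
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    # Erode/dilate away sensor speckle before contour extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Each remaining connected region is a candidate "blob" (player).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```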
The Kinect XDK also provides you with the floor plane, camera orientation, FOV, video feed, and depth feed. That combination let us experiment with augmented reality, since we could match our in-game camera to the Kinect. It's kind of surreal, because that means assets are authored in real-world units. You can make a ruler in Maya, drop it into the game, and use it to get a rough measure of real-world objects.
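As a rough sketch of what "matching the camera" involves, the in-game projection can be built from the sensor's reported field of view. The Python below assumes the commonly cited ~43-degree vertical FOV of the Kinect camera and an OpenGL-style matrix convention; it is not code from the game:

```python
import numpy as np

def projection_from_fov(fov_y_deg=43.0, aspect=4.0 / 3.0, near=0.1, far=20.0):
    """Build an OpenGL-style perspective matrix from a vertical FOV in
    degrees. Matching this to the in-game camera keeps world units equal
    to real-world meters, so a ruler modeled at 0.3 m in Maya measures
    roughly 0.3 m of the room when composited over the video feed."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```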
Considering Kinect functions unlike most other forms of motion control, how would you suggest developers use Kinect to create a successful game experience?
Don't take anything for granted. You can't assume that joint positions are reliable, or that the depth is stable. You really have to build a tolerance for input glitches into the design of your game up front. This mentality can be a little jarring at first, since most of us have the privilege of working with incredibly smart people who often solve seemingly impossible technical challenges in the eleventh hour. This is not one of those problems. The input can be filtered, predicted, smoothed, and so on, but don't expect to directly attach a sword to a character's hand unless you want it sticking through his or her head a lot of the time.
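A minimal example of that defensive mindset, sketched in Python with invented constants (nothing here is from Double Fine's codebase): gate each joint on its tracking flag, reject teleporting samples, and smooth the rest.

```python
import numpy as np

def smooth_joint(prev, raw, tracked, alpha=0.3, max_jump=0.5):
    """Defensively filter one skeletal joint position (meters).
    `tracked` is the sensor's per-joint confidence flag; `alpha` and
    `max_jump` are illustrative constants, not tuned game values."""
    if not tracked:
        return prev                              # hold the last good sample
    raw = np.asarray(raw, dtype=float)
    if prev is not None and np.linalg.norm(raw - prev) > max_jump:
        return prev                              # reject a teleporting glitch
    if prev is None:
        return raw                               # first valid sample
    return (1.0 - alpha) * prev + alpha * raw    # exponential smoothing
```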
How will your GDC China presentation address Kinect game development and what do you hope attendees will take away from it?
Now that Double Fine has transitioned to a multi-project studio, there are a lot more opportunities for prototyping and R&D. This has given us the luxury to hammer on Kinect in some unusual ways, and that's the experience I'm looking forward to sharing at GDC China.
Specific examples I'll describe include shaders that combine the depth and color feeds for visual effect, computer vision techniques to process player motion, and a number of augmented reality tricks, such as applying our deferred lighting to the depth feed.
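As a CPU-side stand-in for the last of those tricks, lighting a depth feed can be approximated by deriving per-pixel normals from depth gradients and Lambert-shading them. The real effect would run as a GPU shader; this sketch is purely illustrative and is not Double Fine's implementation:

```python
import numpy as np

def light_depth_feed(depth_m, light_dir=(0.3, 0.5, 0.8)):
    """Approximate lighting over a depth image: estimate surface normals
    from depth gradients, then compute a clamped N.L diffuse term.
    Returns a per-pixel brightness map in [0, 1]."""
    # Approximate surface normals from the depth image's gradients.
    dzdx = np.gradient(depth_m, axis=1)
    dzdy = np.gradient(depth_m, axis=0)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth_m)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    # Lambertian diffuse term, clamped to [0, 1].
    return np.clip(normals @ light, 0.0, 1.0)
```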
Some of that work was inspired by early prototypes we created in the environments I mentioned earlier (Processing and openFrameworks), and I'll be discussing that workflow in more detail. Attendees will leave with a breakdown of that approach and the tools to do their own rapid prototyping.
Additional Info
As GDC China draws ever closer, show organizers will continue to debut new interviews with some of the event's most notable speakers, in addition to new lectures and panels from the event's numerous tracks and Summits.
Taking place Saturday, November 12 through Monday, November 14, 2011 at the Shanghai Exhibition Center in Shanghai, China, GDC China will return to bring together influential developers from around the world to share ideas, network, and inspire each other to further the game industry in this region.
For more information on GDC China as the event takes shape, please visit the show's official website, or subscribe to updates from the new GDC China-specific news page via Twitter, Facebook, or RSS. GDC China is owned and operated by UBM TechWeb.