I managed to make it out to Seattle this year for Gamefest and figured I’d share my thoughts on some of the presentations I saw. The slides/audio aren’t available yet, but it looks like Microsoft is going to be posting them for the different presentations here soon.

Tiled Resources for Xbox 360 and Direct3D 11 – Matt Lee

This talk was about mega-texturing on DirectX 11 and the Xbox 360. Matt Lee showed a new DirectX SDK sample, coming in the next SDK release, that provides a reference implementation of a mega-texturing run-time. I’ve only skimmed mega-texturing papers, so I got a lot out of this talk since he walked through all the steps in the run-time.

The sample shows how you begin by creating tile pools for the different resource formats you need; each pool is dedicated to a single texture format. Within a pool every tile is the same size, though tile size may vary from pool to pool depending upon the texture format, to maximize cache efficiency. When you render the scene you use a shader that can write out texture look-up failures: whenever the requested UV coordinates and mip level are not resident in memory, a failure is added to the list. After the shader completes you read back the failures and load the missing tiles into the pools you established.
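To make the flow concrete, here’s a rough C++ sketch of the kind of bookkeeping involved. All of the names here (TileKey, TilePool, HandleLookupFailures, StreamTileFromDisk) are my own, not the sample’s actual API; it’s just the shape of the idea.

#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// A tile identifies a fixed-size sub-region of one mip level of one texture.
struct TileKey
{
    uint32_t textureId;
    uint32_t mipLevel;
    uint32_t tileX, tileY;

    bool operator<(const TileKey& o) const
    {
        return std::tie(textureId, mipLevel, tileX, tileY)
             < std::tie(o.textureId, o.mipLevel, o.tileX, o.tileY);
    }
};

// One pool per texture format. Every tile within a pool is the same size,
// but tile dimensions can differ between pools to maximize cache efficiency.
struct TilePool
{
    uint32_t tileWidth  = 64;
    uint32_t tileHeight = 64;
    std::vector<std::vector<uint8_t>> slots;   // fixed set of tile-sized slots, never freed
    std::map<TileKey, size_t> resident;        // which tile currently occupies which slot
};

// Failures the shader recorded this frame: tiles it needed that weren't resident.
void HandleLookupFailures(TilePool& pool, const std::vector<TileKey>& failures)
{
    for (const TileKey& key : failures)
    {
        if (pool.resident.count(key) != 0)
            continue;                          // already streamed in earlier this pass

        size_t slot = 0;                       // really: pick a free or least-recently-used slot
        // StreamTileFromDisk(key, pool.slots[slot]);   // hypothetical; omitted from the sketch
        pool.resident[key] = slot;
    }
}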

Unlike most texture streaming systems, you’re never loading an entire mip level or the entire mip chain of a texture; you only ever load a sub-region of a texture (like a 64×64 pixel region) into a tile. That sidesteps a common texture streaming problem: texture memory fragmentation. Because the tile pools you create are never deallocated, you don’t have to worry about differently sized textures streaming in and out fragmenting your texture memory.

The sample is not without its shortcomings, but that is mostly due to hardware limitations. Ideally the virtual texture system would be transparent: you wouldn’t need to write a shader that records look-up failures; the GPU and DirectX would simply report when a failure occurred and let you handle it. Maybe someday…

Gesture Detection Using Machine Learning – Claude Marais

If you have ever been interested in machine learning, this is a worthwhile presentation to check out when the slides are posted. Claude Marais talked about a case study they performed to try to use machine learning to detect a punch and a kick. For their experiment they used AdaBoost, a machine learning technique that ‘boosts’ thousands of weak classifiers into a single strong classifier with a high degree of accuracy.

The classifiers are all extremely simple things; for example, you may have a classifier like:

// Weak classifier: +1 if the elbow angle exceeds the threshold, -1 otherwise.
if (elbow_joint_angle > ANGLE)
    return 1;
return -1;

Then simply create a macro and have 180 variants of this classifier, one for each ANGLE. If you imagine all the different things you could measure about the skeleton, creating simple variants of these kernels for each possible test case explodes the number of weak classifiers you have; Claude had around 21,000 weak classifiers in his system.
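As a rough illustration of how those variants might be generated (my own sketch, not Claude’s code, and a loop rather than the macro he described):

#include <functional>
#include <vector>

// A weak classifier maps a skeleton measurement to +1 (punch-like) or -1 (not).
using WeakClassifier = std::function<int(float)>;

// One variant of the elbow-angle test per degree of threshold: 180 weak classifiers.
std::vector<WeakClassifier> BuildElbowAngleClassifiers()
{
    std::vector<WeakClassifier> classifiers;
    for (int angle = 0; angle < 180; ++angle)
    {
        classifiers.push_back([angle](float elbowJointAngle)
        {
            return elbowJointAngle > angle ? 1 : -1;
        });
    }
    return classifiers;
}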

The training phase looks at labeled data sets to learn what punches look like (positive examples) and what non-punches look like (negative examples). It uses the +1/-1 scores each weak classifier produces to determine the weight to apply to each classifier. Once it has determined the weak classifiers that best detect a punch without accidentally flagging a negative example as one, you can run those classifiers at run-time with their weights applied to detect a punch.
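At run-time the strong classifier is just a weighted vote over the selected weak classifiers; something roughly like this minimal sketch (again, the names and the single-feature input are my simplifications; a real system would feed in a full skeleton feature vector):

#include <functional>
#include <vector>

using WeakClassifier = std::function<int(float)>;   // as in the sketch above

// Output of training: each selected weak classifier plus its learned weight.
struct WeightedClassifier
{
    WeakClassifier classify;   // returns +1 or -1
    float          weight;     // learned during boosting
};

// The boosted decision is the sign of the weighted sum of weak votes.
bool IsPunch(const std::vector<WeightedClassifier>& strongClassifier, float elbowJointAngle)
{
    float score = 0.0f;
    for (const WeightedClassifier& wc : strongClassifier)
        score += wc.weight * wc.classify(elbowJointAngle);
    return score > 0.0f;
}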

The results were undeniable; they had a demo set up in the expo area that was really good at detecting a punch and a kick.

The only real drawback to this solution is the data collection; from the chart they showed, they needed something on the order of 70,000 examples of punches, and about 7x that in negative (not-a-punch) examples, before the training produced accuracies over 90-95%, if my memory is correct.

In training the system they had 70,000 frames’ worth of recorded training data. The actual number of recorded punches used to train the system was 25 different people doing 10 punches each, so around 250 punch examples. Then they had about 7x that number of negative training examples, which might be things like waves or other actions the classifier can use to differentiate between random movement and an intentional punch. (Thanks to Claude for clarifying this.)

Kinect and Kids: Pitfalls and Pleasantries – Deborah Hendersen

If you had asked me to make a Kinect game for kids (ages 3-6) before seeing this presentation, I likely would’ve designed something with a dumber version of me as the target audience. What I quickly realized is how wrong that assumption would’ve been: at that stage of development, kids are not capable of interacting with games the way I’m used to playing them.

Something as simple as a menu of options is an impossibility since they are illiterate. How many games have you seen that you could play without knowing how to read?

When interacting with an onscreen character, the kids ignore social norms of waiting for the person to finish talking. They may just jump the gun if they already know what is expected and get frustrated if they can’t do it when they want to.

Kids are distracted very easily and will make their own games out of game behavior. Deborah mentioned one story where a kid stopped playing the game because he realized that when he left the play area and Kinect could no longer detect him, the game would react. So he made up his own game of jumping in and out of the play area to trigger that reaction; utterly boring for adults, completely entertaining for this kid.

You almost have to design the game as a passive experience, like a children’s TV show. On TV, because there is no feedback, the host asks the kid, “Can you find _______?”, the kid at home says something, and, expecting this, the show simply pauses while the host waits for the response. The game has to function in essentially the same way: regardless of whether the kid participates in the expected fashion, the game has to move forward. If it functions like a state machine that requires the proper actions to move forward, the kid may become bored and simply want to move on. If the game refuses to let them move on, they’ll just walk away.
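In code terms, the idea is closer to a timer than a gate; a tiny hypothetical sketch of my own, not something from the talk:

// A prompt advances after a timeout whether or not the expected action ever
// arrives, the way a TV host pauses and then moves on regardless.
struct Prompt
{
    float timeRemaining = 5.0f;   // seconds to wait for the kid's response
};

// Returns true when the game should move on to the next prompt.
bool UpdatePrompt(Prompt& prompt, float dt, bool expectedActionSeen)
{
    prompt.timeRemaining -= dt;
    return expectedActionSeen || prompt.timeRemaining <= 0.0f;
}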

I really enjoyed this presentation because it made very clear how difficult the problem space is, and it was interesting to hear how they tried to solve each problem.

Kinect Hands: Finger Tracking and Voxel UI – Abdulwajid Mohamed and Tony Ambrus

This presentation was broken into two completely different parts; the first was on finger tracking with Kinect. This is one area I’ve been playing around in for a while, so it was interesting to see someone else’s attempt to solve the problem. Because Kinect is a structured light depth camera, you don’t necessarily have depth at each pixel like you would with a time-of-flight depth camera. A structured light camera builds a topology of depth from the light pattern it projects into the scene; viewed from a different angle it can discern depth, but a single dot does not give you depth, so groups of dots are connected when determining the depth of a surface. This means that even though your hand can be seen by Kinect, the further you back away from the sensor, the more like a mitten your hand becomes. The gaps between your fingers disappear until your fingers are just clumps on your wrist.

Because of this limitation you can’t go past 10 feet; there simply isn’t enough data. Ideally the user is at 6 feet or closer; past 6 feet the accuracy begins to break down.

The way Microsoft tackled the problem was to first capture lots of hand examples and then train an SVM (Support Vector Machine) against a curvature analysis of the hands. Once you know all the pixels that make up a person’s hand, you find the points on the hand that produce the largest changes in curvature. On an open hand these curves are your fingers, and if you’re close enough to the camera that it can see the gaps between fingers, that’s a very large change in curvature. A closed hand has a more or less uniform change in curvature viewed from any angle. By training the SVM against a set of closed-hand curvature examples vs. open-hand curvature examples, they were able to get pretty accurate results at about the 6-8 foot range for an adult, 5-7 feet for kids.
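A minimal sketch of what the run-time side might look like, assuming a linear SVM over a fixed-length vector of curvature features (the names, and the assumption of a linear kernel, are mine):

#include <vector>

// Trained linear SVM: weights and bias learned offline from labeled
// open-hand / closed-hand curvature examples.
struct HandStateSvm
{
    std::vector<float> weights;
    float bias = 0.0f;
};

// Classify one frame: the feature vector holds curvature measurements
// sampled around the hand's contour.
bool IsHandOpen(const HandStateSvm& svm, const std::vector<float>& curvatureFeatures)
{
    float score = svm.bias;
    for (size_t i = 0; i < curvatureFeatures.size() && i < svm.weights.size(); ++i)
        score += svm.weights[i] * curvatureFeatures[i];
    return score > 0.0f;   // positive side of the hyperplane = open hand
}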

Because the detector is instantaneous, i.e. it can tell you in a single frame whether the hand is open or closed, you need something to counteract one or two misinterpreted frames. So they trained an HMM (Hidden Markov Model) on examples of a flaky transition, where the system quickly switches between the two states because the hand is at an odd orientation that confuses the SVM; I thought it was an interesting solution to the problem. I’ve only ever tried something simple like requiring 3 contiguous frames of agreement before changing state.
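For comparison, the simple contiguous-frames approach I mentioned looks roughly like this:

// Debounce the per-frame open/closed decision: only commit to a new state
// after it has been observed for kRequiredFrames frames in a row.
struct HandStateFilter
{
    static const int kRequiredFrames = 3;
    bool currentState   = false;   // false = closed, true = open
    bool candidate      = false;
    int  agreeingFrames = 0;

    bool Update(bool frameState)
    {
        if (frameState == currentState)
        {
            agreeingFrames = 0;               // nothing to change
        }
        else if (frameState == candidate)
        {
            if (++agreeingFrames >= kRequiredFrames)
            {
                currentState = frameState;    // enough agreement, flip state
                agreeingFrames = 0;
            }
        }
        else
        {
            candidate = frameState;           // start counting a new candidate
            agreeingFrames = 1;
        }
        return currentState;
    }
};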

The second half of the presentation was on a 3D (not stereoscopic) UI for Kinect. One of the problems with navigating a ‘push to click’ interface is that it’s hard to correct for user drift. When a user pushes forward they may do several things:

  • Push toward the TV
  • Push toward the sensor
  • Push forward (wherever forward happens to be at that moment in time)

Depending upon what you’re expecting them to do, there’s going to be drift away from the thing on the screen they’re trying to click. To attempt to correct this, Abdulwajid presented a UI where the hands are visualized as voxelized clumps of boxes in a 3D environment, with 3D buttons that could be mashed. Seeing the hand in the same space as the button appeared to make it much easier to perform the click.
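The core check could be as simple as counting hand voxels inside each button’s volume; this is my guess at the mechanics, not something the presenters showed:

#include <vector>

struct Vec3 { float x, y, z; };

// Axis-aligned 3D button volume in the same space as the voxelized hand.
struct Button3D
{
    Vec3 min, max;
};

// The button registers a press when enough hand voxels are inside its volume.
bool IsButtonPressed(const Button3D& button, const std::vector<Vec3>& handVoxels,
                     int requiredVoxels = 8)
{
    int inside = 0;
    for (const Vec3& v : handVoxels)
    {
        if (v.x >= button.min.x && v.x <= button.max.x &&
            v.y >= button.min.y && v.y <= button.max.y &&
            v.z >= button.min.z && v.z <= button.max.z)
        {
            if (++inside >= requiredVoxels)
                return true;
        }
    }
    return false;
}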

One thing I noticed that was not called out was his use of 2 shadow-casting directional lights. By having 2 directional lights facing each other, both casting shadows, the resulting effect is a focal point: as the hand gets closer to a surface, the eye perceives the two shadows heading towards each other and can see the point where they will meet. I thought that was an additional powerful indicator of where your hand was moving in the space and made it much easier to correct drift.
