::K.A.M.A.S

This semester, I’m working with Greg Borenstein to finish developing the Kinect Abnormal Motion Assessment System (KAMAS). The idea came out of the Health 2.0 hackathon, where the team discovered that the Microsoft Kinect could be used to conduct more reliable versions of involuntary motion disorder tests.

Rating scales and tests already exist for many of these motion disorders; they are indexed here. Using these scales, we are fine-tuning the current iteration of the Kinect detection program Greg designed so that it can be used to conduct a series of patient tests at Tufts, under the guidance of Daniel Karlin (MD, MA).

I have come onto the project to help refine it from a rough sketch into a final product. Before I could do that, however, I needed to catch up to where KAMAS currently stands. Here are the steps thus far in (re)building KAMAS.

Set-up for Kinect

First step: Get a KINECT! (and a cup of coffee)

1. Install OpenNI NITE. This is what allows us to access all of the 3D data from the Kinect for use in Processing. It is the software written by PrimeSense that lets us communicate with the Kinect, and it is the only proprietary software used with the Kinect. To install, download the file, then launch the Terminal (Applications –> Utilities –> Terminal). Change the directory by typing "cd " and dragging the unzipped folder onto the Terminal window, which fills in the correct file path. Hit return, then type sudo ./install.sh to run the installer.

2. Install Simple OpenNI. This is the library that lets us bring the data from OpenNI into Processing. To install it, download and unzip the file linked above, then drag the folder into your Processing libraries folder. Once it is there, restart Processing.

**Note: In the early days of Kinect hacking, it was necessary to use OSC to pass skeleton data back and forth between Processing and the Kinect on OS X. The skeleton data, or "skeletonization," is what reads where your skeleton (joints/limbs) is in 3D space, and it is what allows you to designate various parts of your body as a "controller." Fortunately, PrimeSense released the software (OpenNI NITE) that serves as middleware to perform the skeletonization, which makes things much easier.

3. Launch Processing, plug in your Kinect and get started coding!
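Before touching the KAMAS code, it is worth confirming that the whole stack is talking to itself. Below is a minimal "hello depth" sketch, assuming the SimpleOpenNI 0.27-era API (method names shifted a bit in later releases); it just draws the Kinect's depth image:

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  // turn on the depth camera (640x480 by default)
  context.enableDepth();
}

void draw() {
  // grab the newest frame from the Kinect
  context.update();
  // draw the grayscale depth image
  image(context.depthImage(), 0, 0);
}

If a grayscale image of the room shows up, OpenNI NITE and Simple OpenNI are installed correctly.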

**Note: To capture Processing 1.0 sketches as video in real time, use the MovieMaker library. To capture sketches in Processing 2.0, use QuickTime screen capture.
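For reference, here is roughly what the MovieMaker route looks like in Processing 1.0. This is a sketch from memory of the old QuickTime-based video library, so treat the exact codec and quality constants as assumptions:

import processing.video.*;

MovieMaker mm;

void setup() {
  size(640, 480);
  // write every drawn frame into a .mov file at 30 fps
  mm = new MovieMaker(this, width, height, "capture.mov",
                      30, MovieMaker.H263, MovieMaker.HIGH);
}

void draw() {
  background(0);
  ellipse(mouseX, mouseY, 20, 20);
  // append the current frame to the movie
  mm.addFrame();
}

void keyPressed() {
  // press any key to close the movie file cleanly
  mm.finish();
  exit();
}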

Working the NYKO Zoom

This week I tested the Kinect with the NYKO Zoom, which reduces the required range by 40%, meaning you can stand much closer to the Kinect and it will still see your entire body. This is ideal for areas where you may not be able to stand the 10 or so feet from the Kinect needed for full skeleton tracking. There are certainly advantages to using the zoom, but also some drawbacks. Below is full documentation of the benefits and drawbacks of using the zoom and how it looks.

The Zoom and Real-Life Depth

Without the zoom, the depth image Processing sketch reads the beautiful mannequin I am standing with as 66-69 inches away (depending on what part of her body I click).
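The clicking trick works because SimpleOpenNI exposes a real-world depth map: one point per depth pixel, measured in millimeters, so inches are just a divide by 25.4. A minimal version of that kind of click-to-measure sketch might look like this (again assuming the 0.27-era API; this is a sketch of the idea, not the exact sketch used here):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
}

void mousePressed() {
  // depthMapRealWorld() gives one PVector per depth pixel, in millimeters
  PVector[] depthPoints = context.depthMapRealWorld();
  int index = mouseX + mouseY * context.depthWidth();
  // convert millimeters to inches
  float inches = depthPoints[index].z / 25.4;
  println("That point is about " + inches + " inches away");
}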

Here is what we look like as a happy couple, ~70 inches away.


With the zoom, the real-world depth image Processing sketch reads the beautiful mannequin I am standing with as ~100 inches away (depending on what part of her body I click).

Here is what we look like as a happy couple in pretend NYKO-land, where we appear a full 30 inches further away. Or, if you care to do the math, (100 - 70) / 70 ≈ 43%, roughly 40% further away, which is, as you may have guessed, just about exactly the increased range the NYKO brags about in its description!

So we know that the NYKO Zoom can sense you when you stand about 40% closer (roughly a full step forward) than you could without the zoom (where you'd look cut off). But how good is it at resolving the minutiae of your depth image? In other words, if I hold my two hands at slightly different distances, will it still detect the small difference?

Here is me standing (very focused) with my left hand about 5 inches in front of my right, without the zoom (I had the help of a friend here). When you click, the two hands read as 72″ and 66″ away, respectively.

Here is me standing in the exact same position and at the same distance with the zoom on, and my hands register as 97.5″ and 93.5″ away. If I step a foot closer, the resolution improves and it detects my hands as about 6 inches apart. So the depth resolution does degrade faster as you step backward with the zoom.

The Zoom and Skeleton Tracking

How does the zoom affect skeleton tracking? Here I am running the KAMAS joint distance tracking sketch without the zoom. My total score, standing as still as possible, was 13851.

When I put the zoom on, I have to step a foot closer for it to calibrate me. At this distance, my total score when standing as still as possible is 26401. I need to test this again more precisely, standing so that the Kinect reads the same real-world depth in both situations (i.e., it should sense me as being 100″ away both with and without the zoom, which means I'd need to physically move from one demarcated spot to another). More to come on this soon.
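Since the joint distance tracking sketch itself is Greg's, the following is only a rough illustration of the kind of measurement it makes, not the actual KAMAS code: track the skeleton and, every frame, add up how far each joint has drifted since the previous frame, so a perfectly still body keeps the running total low. (Again assuming the SimpleOpenNI 0.27-era API, and tracking just the two hands for brevity.)

import SimpleOpenNI.*;

SimpleOpenNI context;

// previous frame's hand positions (real-world millimeters)
PVector prevLeftHand = new PVector();
PVector prevRightHand = new PVector();
boolean havePrev = false;
float totalScore = 0;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  // ask NITE to perform the skeletonization
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  IntVector userList = new IntVector();
  context.getUsers(userList);
  if (userList.size() > 0) {
    int userId = (int) userList.get(0);
    if (context.isTrackingSkeleton(userId)) {
      PVector leftHand = new PVector();
      PVector rightHand = new PVector();
      context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);
      context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);

      if (havePrev) {
        // add how far each joint drifted since the last frame (in mm)
        totalScore += PVector.dist(leftHand, prevLeftHand);
        totalScore += PVector.dist(rightHand, prevRightHand);
      }
      prevLeftHand.set(leftHand);
      prevRightHand.set(rightHand);
      havePrev = true;

      fill(255, 0, 0);
      text("total movement score: " + int(totalScore), 10, 20);
    }
  }
}

// standard NITE calibration callbacks for this version of the API
void onNewUser(int userId) {
  context.startPoseDetection("Psi", userId);
}

void onStartPose(String pose, int userId) {
  context.stopPoseDetection(userId);
  context.requestCalibrationSkeleton(userId, true);
}

void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    context.startTrackingSkeleton(userId);
  } else {
    context.startPoseDetection("Psi", userId);
  }
}

The absolute number depends on which joints are summed and on how long the sketch runs, which is part of why the with-zoom and without-zoom scores above need to be compared at matched real-world distances, as described in the follow-up test.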
