Kinect Biomechanics: Part 1

The Hardware

Dubbed ‘the fastest-selling electronic product in history’, Microsoft’s Kinect has clearly captured the attention of the gamer. Nintendo fundamentally changed the market with the Wii; the Kinect is Microsoft’s attempt at a user-friendly, demographic-spanning input device. While the Wii takes signals from a hand-held ‘Wii-Mote’, the Kinect does away with controllers completely.

The Kinect can track a person and register their movements within a gaming environment, as demonstrated by the video below (although I don’t believe perpetual grinning is necessary).

This interaction is made possible by the Kinect’s hardware. In addition to a standard RGB camera it contains a ‘depth sensing’ camera which feeds very clever algorithms. The depth camera utilises the principle of ‘structured light’: an infra-red projector sprays an invisible pattern of dots onto the surroundings, which is captured and measured by a corresponding infra-red camera (see FuturePicture for a more in-depth description). Depth information allows separate objects to be detected within a field of view much more reliably than traditional image processing techniques. Processing algorithms take this depth information and identify individual users as well as the position and orientation of their limbs. It’s amazing stuff, and potentially very powerful.
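
To illustrate why depth data makes this so much easier, here is a toy sketch (not the Kinect’s actual algorithm, and using a made-up depth frame): once every pixel carries a distance, pulling a foreground figure away from the background can be as simple as a threshold, something that is far harder with colour images alone.

```python
import numpy as np

# Hypothetical 480x640 depth frame in millimetres: a background wall at ~3 m
depth = np.full((480, 640), 3000, dtype=np.uint16)

# ...with a person-shaped blob standing roughly 1.5 m from the sensor
depth[100:400, 250:390] = 1500

# Anything closer than 2 m is treated as a candidate user
foreground = depth < 2000
print("Foreground pixels:", int(foreground.sum()))
```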

Kinect Hacked

The device has made a massive impact with programmers, who hacked it only days after its release. As attention around Kinect ‘hacks’ grew, a large number of novel applications were developed and, inevitably, the research community has begun to utilise the device. A month after the release of the Kinect (and the first hacks) the makers of the hardware – PrimeSense – released drivers giving access to the powerful motion detection algorithms within. Microsoft has also announced a ‘non-commercial’ software development kit (SDK), with more powerful features promised in future releases.

The PrimeSense drivers give access to object (user) detection, gesture recognition and the ability to model full 3D skeletons (with access to segment position and angle). Functionally at least, the Kinect seems ideal for use in a sports science or biomechanical environment. There are, however, two questions which need to be answered first:

  1. What is the scope for analysis?
  2. What are the applications?

This is the first of a series of articles which will look at different possible applications of the Kinect in the field of Biomechanics.

Body Segment Tracking

In sport biomechanics, the position and orientation of an athlete is vital knowledge when assessing performance, injury risk and joint loading. Since the inception of the discipline, scientists have searched for a method of obtaining this information quickly and accurately. Complicated calibration routines take time and can introduce error if not performed correctly, and intrusive body markers can distract an athlete asked to perform a complicated action.

As an accessory to the Xbox 360, the Kinect is not only used as a gesture recognition tool. It also tracks a user’s body position as a means of controlling an on-screen avatar (such as in dancing games). In other words, the Kinect is capable of tracking a user’s body segment positions and orientations in 3D, in real time. No markers are needed on the body, and the minimal calibration consists of standing in a specific position for just a few seconds.

Compare this with a cutting-edge motion analysis suite: calibration is a complex process taking several minutes, a user must be accurately palpated and marked, and the system costs hundreds of times more than a Kinect (£80 at the time of writing). Given a cursory glance, the Kinect seems more like a revolutionary analysis tool than a living room trinket. Surely there must be a caveat to all these wondrous properties?

Body Segments
A calibrated user in the Kinect’s field of view; the lines represent the fitted body segments

Of course, the Kinect is a tool aimed at the gamer, not the researcher, for whom errors of a fraction of a millimetre can be unacceptable. As impressive as Kinect body tracking is, its accuracy is unlikely to compare to current standard methodologies in which anatomical joints are palpated and marked individually. However, there are thousands of students and sports enthusiasts interested in tracking body motion who would probably find an inexpensive, accessible tool for measuring joint angles and body timings invaluable.

Could the Kinect be used as a tool for simple motion analysis?

We recorded two simple body motions using a standard DV video camera and the Microsoft Kinect. The motions were a depth squat from standing and a walk and run on a Kistler Gaitway treadmill. These simple motions involved different speeds and movements. The aim was to compare the angle between the upper and lower leg (knee angle) as measured by the standard video analysis and the Microsoft Kinect.

The Kinect’s normal mode of use is with the user face-on, so that leg movement would be directly perpendicular to the image plane. While the depth camera should be able to cope with this, we were aiming to test the quality of the body tracking algorithms, not the robustness of the depth camera. For this reason the Kinect was placed at 45 degrees to the plane of motion, while the video camera viewed the motion directly side-on.

In order to obtain the Kinect’s value of the knee angle throughout the motion, we wrote a basic computer program to access skeleton position and orientation data. Dartfish, a video analysis tool used widely in schools and universities, was used to manually analyse the recorded footage (the animated gifs above show the calculated angles). The ankle, knee and hip joints were palpated and marked to enable accurate analysis.
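
To give a sense of the calculation, here is a minimal sketch (the joint coordinates below are made up, not real Kinect output): the knee angle is simply the included angle between the thigh and shank vectors formed from the skeleton’s hip, knee and ankle positions.

```python
import numpy as np

def knee_angle(hip, knee, ankle):
    """Included angle (degrees) between the thigh and shank vectors at the knee."""
    thigh = np.asarray(hip, dtype=float) - knee
    shank = np.asarray(ankle, dtype=float) - knee
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Made-up joint positions (mm) in the Kinect's coordinate frame
hip, knee, ankle = [0.0, 900.0, 2400.0], [60.0, 480.0, 2450.0], [40.0, 80.0, 2500.0]
print(f"Knee angle: {knee_angle(hip, knee, ankle):.1f} degrees")
```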

It can be seen in both plots that the Kinect under-predicts knee angle at all points. It is unlikely you’d want to use the Kinect values for any kind of quantitative study, but they could be used to show general trends in the data. I’m sure there are many more applications for this kind of information (illustrations of body angles) which don’t require a high level of accuracy but would benefit from instant results and minimal setup.

Sources of Error

The Kinect hasn’t performed as accurately as we might have hoped, which leads to the following question:

What are the sources of inaccuracy, and could they be accounted for and improved?

The angle obtained from the tracked skeleton data is a resultant angle in three dimensions. The computer program doesn’t account for the fact that the knee bends in a single plane. Any error in the Kinect’s body tracking (resulting in an impossibly crooked knee bend) will be carried into the angle measurement. It should be relatively straightforward to introduce filters and logic checks to ensure this is minimised in future.
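
As an example of the kind of correction that might help, here is a rough sketch (the plane normal and joint positions are assumed purely for illustration): projecting the tracked joint positions onto the presumed plane of motion before calculating the angle discards any out-of-plane tracking error.

```python
import numpy as np

def project_to_plane(point, normal):
    """Remove the component of a joint position along the plane normal."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.asarray(point, dtype=float) - np.dot(point, n) * n

def included_angle(a, b, c):
    """Angle (degrees) at b between the vectors b->a and b->c."""
    u = np.asarray(a, dtype=float) - b
    v = np.asarray(c, dtype=float) - b
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Assumed plane normal: sagittal plane of motion, with the Kinect at 45 degrees to it
normal = [np.cos(np.radians(45)), 0.0, np.sin(np.radians(45))]

# Made-up joint positions (mm) with an out-of-plane 'wobble' at the knee
hip, knee, ankle = [0.0, 900.0, 2400.0], [150.0, 480.0, 2330.0], [40.0, 80.0, 2500.0]
hip_p, knee_p, ankle_p = (project_to_plane(j, normal) for j in (hip, knee, ankle))

print(f"Raw 3D knee angle:   {included_angle(hip, knee, ankle):.1f} degrees")
print(f"In-plane knee angle: {included_angle(hip_p, knee_p, ankle_p):.1f} degrees")
```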

The video below shows how the Kinect performed at tracking body segments throughout the movement; difficulties in torso identification clearly amplify the error in thigh position and hence knee angle. Developing the existing Kinect algorithms was obviously a massive project, so it’s unlikely these could be ‘re-developed’ to a great extent. However, it’s likely that the camera position could be optimised to minimise errors. In addition, it should be noted that this is only the first generation Kinect; who knows what will be released in the next few years.

Another possible source of error comes from the type of movement analysed: a single, lower-body joint angle at an odd camera angle. It is possible that the Kinect is much better at calculating upper-body segments in plain view; after all, it was designed with arm-based gesture recognition in mind. Hopefully more specific skeleton tracking software will be developed in time. It’s likely tracking could be greatly improved with the addition of easily tracked visual markers placed on anatomical landmarks, or perhaps by using multiple Kinects to analyse the same movement.

Conclusion

Overall, it may not be up to scratch for serious analysis, but to get (very) approximate values almost instantly the Kinect is hard to beat. It would be great to see some specific software which addresses some of the issues highlighted in this post, and maybe even some more work investigating the accuracy of joint angles for an upper-body motion. With the release of Microsoft’s SDK, I wouldn’t be surprised to see PC-specific programs with nice interfaces which utilise the Kinect as an analysis tool.

Keep checking back for the next part of the article which will look at a different potential application for this amazing device.

Simon Choppin