Kinect Biomechanics: Part 2

This series of posts looks at the Kinect as a potential tool for analysis in Biomechanics. Previously we explored the quality of the algorithms which detect a user’s body segments, finding real potential should the appropriate tools be developed. The power of the Kinect comes from its ability to ‘see’ depth: every point in an image can be resolved according to its distance from the Kinect camera. The corresponding point cloud drives many of the amazing functions which give the device so much potential. This post focuses on the point cloud data which can be extracted from the Kinect, and the work we’ve done to exploit it for Biomechanics.
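To make the depth-to-point-cloud idea concrete, here is a minimal sketch (not the Kinect driver’s actual code) of how a depth image is back-projected into 3D points using a pinhole camera model. The intrinsic parameters below are illustrative values for a 640×480 Kinect-class sensor, not calibrated figures:

```python
import numpy as np

def depth_to_points(depth_m, fx=594.0, fy=591.0, cx=320.0, cy=240.0):
    """Back-project a depth image (metres) into a 3D point cloud.

    fx, fy, cx, cy are illustrative pinhole intrinsics for a
    640x480 Kinect-class sensor -- real use needs calibration.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx                           # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # drop pixels with no depth
```

Each valid pixel becomes one point, so a full 640×480 frame yields around 300,000 points per snapshot.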

The depth information from the MS Kinect, colour coded according to distance from sensor

Body Parameters

Modern motion capture technology is incredibly sophisticated. There exists a multitude of ways to record a body’s movement as it performs a complicated action: high speed video, electromagnetic sensors, accelerometers and gyroscopes are all used to calculate the positions of a person’s body segments at a particular instant. In Biomechanics, the resulting digitised motion is generally used to perform an analysis of some sort: to assess motion and gait, or to look at joint angles and positions (how we used the Kinect in the previous post). It is also possible to look into the recorded motion and calculate the forces and moments which drive it. To do so accurately, it’s vital to have accurate data on the mass, size and inertia of the segments which make up the body. Aside from this, segment mass, size and moment of inertia are important information in themselves.

The problem of measurement

How do you measure a body segment? To make things easi(er), a segment is usually defined as a rigid mass situated between movable joints (upper arm, lower leg, etc.). Segments are difficult to measure directly because they tend to have all the other bits of the body still attached (studies which use cadavers form the basis of much of the work in this area, but even this isn’t straightforward). The most sophisticated techniques for measuring segment parameters involve medical imaging equipment, although the most accessible (and arguably most widely used) method uses time, patience and a tape measure. Typically, specific anatomical measurements of a segment are taken and fed into a regression model derived from previous high-accuracy studies, or the measurements are used to construct a geometric volume which approximates the segment’s overall shape (a good online resource and overview can be found here if you need more information). These body segment models have the advantage of being accessible to anyone who wants to perform a study, but their accuracy varies. By trying to account for everyone, they inevitably fit no one perfectly. Durkin and Dowling (2003) measured the segment properties of a large control group using DEXA and compared them with existing segment models. The sample of results shown in Table 1 indicates the great range of error between different models, reaching over 21%. Of course, the error depended very much on the segment and parameter being measured, but this illustrates the point nicely.
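As a concrete illustration of the geometric-volume approach, a segment such as the thigh can be approximated as a truncated cone (frustum) between two measured circumferences. This is a generic sketch, not any of the specific models in Table 1, and the default density is an assumed literature-style value for soft tissue, not a measured figure:

```python
import math

def truncated_cone_segment(c_prox, c_dist, length, density=1050.0):
    """Estimate segment mass by modelling it as a truncated cone.

    c_prox, c_dist: proximal and distal circumferences (m)
    length:         segment length (m)
    density:        assumed uniform density (kg/m^3) -- an
                    illustrative literature-style value
    """
    r1 = c_prox / (2 * math.pi)        # radii from circumferences
    r2 = c_dist / (2 * math.pi)
    # Frustum volume: (pi * L / 3) * (r1^2 + r1*r2 + r2^2)
    volume = math.pi * length * (r1**2 + r1 * r2 + r2**2) / 3
    return density * volume
```

A tape measure supplies the circumferences and length; the weakness, as the Durkin and Dowling comparison shows, is that real segments are neither cone-shaped nor uniformly dense.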

Model used      Durkin   Winter   Zatsiorsky (regression)   Zatsiorsky (geometric)   Hanavan
% error (mass)   7.91    21.30    16.08                      8.51                    21.21
Table 1. The percentage error in the mass of the male thigh (age 19-30), accurate X-ray method compared with commonly used models

If there were some way of creating a geometric model of a specific person’s body segment, it wouldn’t have to be particularly accurate to achieve a significant reduction in error compared with the commonly used methods. We propose that the Kinect could be used for this very purpose: it’s cheap, and it can be used as a surface scanner to obtain a segment’s geometry. Well executed, it could also analyse a segment in much less time than the methods discussed earlier.

Much as in my previous post, I ask the question: “could the Kinect be a cheap and accurate body segment scanner?” If there are potential gains to be made in this area, could the Kinect be an accurate enough tool to realise them?

Existing Methods

In order to explore the effectiveness of the Kinect as a body scanner, a system was set up to capture complete body segments. The major obstacle is that the Kinect only scans a ‘slice’ of an object: the surface directly in front of the sensor. This results in a ghostly, detached surface of scanned points. To get a complete scan, a number of different viewpoints have to be stitched together so that the entire surface is mapped. There are several ways to do this. An interesting method, demonstrated for robotic navigation, is to fit individual viewpoints together by minimising the resulting alignment error. An application of this is shown below:

This produces fantastic looking 3D environments which can be used for a number of applications, but in our tests it wasn’t as robust when applied to smaller scale segment scanning. Another, perhaps more obvious, way is to create a complete body scanner from a series of Kinect sensors, demonstrated in the video below. To do this, the system has to be calibrated so that the relative positions of the sensors are known. With this information, the depth clouds from each sensor can be knitted together to form a complete object.
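Once each sensor’s pose is known from calibration, knitting the clouds together is simply a matter of applying a rigid transform to each cloud and concatenating the results. A minimal numpy sketch, assuming calibration has already produced a rotation/translation pair per sensor:

```python
import numpy as np

def merge_clouds(clouds, poses):
    """Merge per-sensor point clouds into one world-frame cloud.

    clouds: list of (N_i, 3) arrays, each in its sensor's own frame.
    poses:  list of (R, t) pairs from calibration, mapping sensor
            coordinates into a shared world frame (p_w = R @ p + t).
    """
    # For row-vector points, p @ R.T is equivalent to R @ p.
    merged = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
    return np.vstack(merged)
```

The hard part in practice is the calibration itself, along with the infrared interference between simultaneously running Kinects.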

Our Own Tests

The work described in this section is a summary of work being presented at this year’s BASES conference (Wheat et al. 2011).

We chose a subtly different method in which a magnetic motion sensor was attached to the segment and a single Kinect was used. Several snapshots were taken of the object, which was moved between frames. The magnetic sensor tracked the segment’s motion and provided the information necessary to stitch the individual snapshots together accurately. We were very interested in the possible resolution of this method: how much detail could be captured, and how closely the captured points matched reality. To assess the accuracy of the method, a dummy segment (literally) was scanned using two methods:

  • The Kinect
  • A non-contact laser scanner (accurate to around 0.1 mm)
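The stitching step described above can be sketched as follows: if a tracker rigidly attached to the segment reports the object’s pose at each snapshot, inverting that pose maps each camera-frame snapshot back into a common object-fixed frame, so all the snapshots overlay on the same surface. This is a simplified illustration, not the actual processing pipeline used in the study:

```python
import numpy as np

def stitch_snapshots(snapshots, object_poses):
    """Bring Kinect snapshots of a moving object into one object-fixed frame.

    snapshots:    list of (N_i, 3) point arrays in the (fixed) camera frame.
    object_poses: list of (R, t) giving the object's tracked pose in the
                  camera frame at each snapshot (p_cam = R @ p_obj + t).
    """
    # Invert each pose: p_obj = R^T @ (p_cam - t); for row-vector
    # points that is (pts - t) @ R.
    stitched = [(pts - t) @ R for pts, (R, t) in zip(snapshots, object_poses)]
    return np.vstack(stitched)
```

The accuracy of the stitch then rests entirely on the tracker’s pose estimates, which is why a calibrated magnetic sensor is attractive here.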
The laser scanner gave us our ‘gold standard’: any deviation from its model was recorded as an error in our analysis. To compare the two methods, the point clouds were transformed into 3D surfaces using Geomagic software.
The overall volumes and localised deviations between the two models were then analysed by carefully aligning them, effectively laying one on top of the other.
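One simple way to quantify localised deviations is the distance from each point of the test cloud to its nearest neighbour in the reference cloud, once the two are aligned. The brute-force sketch below is illustrative only; packages like Geomagic use far more sophisticated surface-to-surface comparisons, and real scans would want a k-d tree rather than a full distance matrix:

```python
import numpy as np

def surface_deviation(test_pts, ref_pts):
    """Per-point deviation of an aligned test cloud from a reference cloud.

    Returns, for each test point, the distance to its nearest
    reference point. Brute force -- O(N*M) memory -- so suitable
    only for small illustrative clouds.
    """
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    return d.min(axis=1)
```

Colour-mapping these distances onto the surface produces exactly the kind of deviation plot described above, with red marking the largest errors.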

The red regions in the image above show the greatest discrepancies between the two models. The fierce red region in the centre of the back is where the magnetic sensor was placed during Kinect scanning; it wasn’t present during laser scanning (you can almost see a trace of red leading downwards, which is likely to be the trailing wire). While this is clearly not a perfect match, it’s astonishingly close, the average error being only 1–2 mm. To obtain the inertial parameters of a segment, fine detail is largely irrelevant; the volume and the distribution of mass are key. The full scan (shown in the images above) was dissected to obtain the geometry of the lumbar segment, which is used frequently when building a model of the human form.

The table below shows the percentage difference between the mass, centre of mass position (COM pos) and principal moments of inertia for the laser scanned and Kinect scanned lumbar segment. Constant density was assumed in this case, with the value taken from the literature.

Mass    COM pos   Ixx     Iyy     Izz
1.9%    0.5%     -3.2%   -2.8%   -3.0%
The figures above show great accuracy in all the associated body segment parameters, even the principal moments of inertia, which are difficult to obtain accurately (Wicke and Dumas 2010).
While this is only a limited, initial study, it shows some fantastic potential. We hope to develop this methodology further, and would like to see this work lead to affordable, fast and accurate estimates of body segment parameters.
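Under the constant-density assumption, every parameter in the table can be derived from the scanned volume alone. A voxel-based sketch of that calculation (illustrative, not the analysis actually used in the study; the density is an assumed literature-style value):

```python
import numpy as np

def voxel_inertia(occupancy, voxel_size, density=1050.0):
    """Mass, centre of mass and principal moments of inertia of a
    segment represented as a boolean voxel grid, assuming uniform
    density (kg/m^3, an illustrative literature-style value).
    """
    coords = np.argwhere(occupancy) * voxel_size   # filled-voxel positions (m)
    coords += voxel_size / 2                       # shift to voxel centres
    dm = density * voxel_size**3                   # mass of one voxel
    mass = dm * len(coords)
    com = coords.mean(axis=0)
    r = coords - com                               # positions about the COM
    # Moments about the centroidal axes (products of inertia omitted)
    Ixx = dm * np.sum(r[:, 1]**2 + r[:, 2]**2)
    Iyy = dm * np.sum(r[:, 0]**2 + r[:, 2]**2)
    Izz = dm * np.sum(r[:, 0]**2 + r[:, 1]**2)
    return mass, com, (Ixx, Iyy, Izz)
```

In practice the scanned surface is converted to a closed mesh first, and the same quantities can be computed directly from the mesh, but the voxel form makes the constant-density assumption explicit.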

Thank you for your time.

Simon Choppin

Jon Wheat


Durkin, J. L., & Dowling, J. J. (2003). Analysis of body segment parameter differences between four human populations and the estimation errors of four popular mathematical models. Journal of Biomechanical Engineering, 125(4), 515.

Wheat, J., Hart, J., Domone, S., & Outram, T. (2011). Obtaining body segment inertia parameters using structured light scanning with Microsoft Kinect. British Association of Sport and Exercise Sciences Conference, University of Essex, UK, September 2011 (abstract accepted).

Wicke, J., & Dumas, G. A. (2010). Influence of the volume and density functions within geometric models for estimating trunk inertial parameters. Journal of Applied Biomechanics, 26(1), 26–31.


