The Hardware
Dubbed ‘the fastest selling electronic product in history’, Microsoft’s Kinect has clearly captured the attention of the gamer. Nintendo fundamentally changed the market with the Wii; the Kinect is Microsoft’s attempt at a user-friendly, demographic-spanning input device. While the Wii takes signals from a hand-held ‘Wii-Mote’, the Kinect does away with controllers completely.
The Kinect can track a person and register their movements within a gaming environment, as demonstrated by the video below (although I don’t believe perpetual grinning is necessary).
This interaction is made possible by the Kinect’s hardware. In addition to a standard RGB camera, it contains a ‘depth sensing’ camera which feeds very clever algorithms. The depth camera utilises the principle of ‘structured light’; an infra-red projector sprays an invisible pattern of dots onto the surroundings, which is captured and measured by a corresponding infra-red camera (see FuturePicture for a more in-depth description). Depth information allows separate objects to be detected within a field of view much more reliably than traditional image processing techniques allow. Processing algorithms take this depth information and identify individual users, as well as the position and orientation of their limbs: amazing stuff, and potentially very powerful.
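To illustrate the idea (a minimal sketch, not the Kinect’s actual processing pipeline), separating a ‘user’ from the background can be as simple as thresholding depth values, something far harder to do reliably with a colour image alone. The function name and threshold below are illustrative assumptions:

```python
# Sketch: segment a foreground "user" from a depth image by thresholding.
# Illustrative only; not the Kinect's real algorithm. Depth is assumed to
# be in millimetres, with 0 meaning "no reading" (as the Kinect reports).
import numpy as np

def segment_foreground(depth_mm: np.ndarray, max_depth_mm: float = 2500.0) -> np.ndarray:
    """Return a boolean mask of pixels closer than max_depth_mm."""
    valid = depth_mm > 0                      # exclude pixels with no depth reading
    return valid & (depth_mm < max_depth_mm)  # near pixels become foreground

# Example with a synthetic 480x640 frame: background at ~4 m, a "person" at ~1.5 m.
depth = np.full((480, 640), 4000.0)
depth[100:400, 250:400] = 1500.0
mask = segment_foreground(depth)
print(mask.sum(), "foreground pixels")
```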
Kinect Hacked
The device has made a massive impact with programmers, who hacked it only days after its release. As attention around Kinect ‘hacks’ grew, a large number of novel applications were developed and, inevitably, the research community has begun to utilise the device. A month after the release of the Kinect (and the first hacks), the makers of the hardware – PrimeSense – released drivers giving access to the powerful motion detection algorithms within. Microsoft has also announced that it is releasing a ‘non-commercial’ software development kit (SDK), with more powerful features promised in future releases.
The PrimeSense drivers give access to object (user) detection, gesture recognition and the ability to model full 3D skeletons (with access to segment position and angle). Functionally at least, the Kinect seems ideal for use in a sports science or biomechanical environment. There are, however, two questions which need to be answered first:
- What is the scope for analysis?
- What are the applications?
This is the first of a series of articles which will look at different possible applications of the Kinect in the field of Biomechanics.
Body Segment Tracking
In sports biomechanics, the position and orientation of an athlete is vital knowledge when assessing performance, injury risk and joint loading. Since the inception of the discipline, scientists have searched for a method of obtaining this information quickly and accurately. Complicated calibration routines take time and can introduce error if not performed correctly, and intrusive body markers can distract an athlete who is asked to perform a complicated action.
As an accessory to the Xbox 360, the Kinect is not used only as a gesture recognition tool. It also tracks a user’s body position as a means of controlling an on-screen avatar (such as in dancing games). In other words, the Kinect is capable of tracking a user’s body segment positions and orientations in 3D, in real time. No markers are needed on the body, and the minimal calibration consists of standing in a specific position for just a few seconds.
Compare this with a cutting-edge motion analysis suite: calibration is a complex process taking several minutes, a user must be accurately palpated and fitted with markers, and such systems cost hundreds of times more than a Kinect (£80 at the time of writing). At a cursory glance, the Kinect seems more like a revolutionary analysis tool than a living-room trinket. Surely there must be a caveat to all these wondrous properties?
Of course, the Kinect is a tool aimed at the gamer, not the researcher, for whom errors of even a fraction of a millimetre can be unacceptable. As impressive as the Kinect’s body tracking is, its accuracy is unlikely to compare with current standard methodologies in which anatomical joints are palpated and marked individually. However, there are thousands of people interested in tracking body motion; sports enthusiasts and students would probably find an inexpensive and accessible tool for measuring joint angles and body timings invaluable.
Could the Kinect be used as a tool for simple motion analysis?
We recorded two simple body motions using a standard DV video camera and the Microsoft Kinect. The motions were a depth squat from standing and a walk and run on a Kistler Gaitway treadmill. These simple motions involved different speeds and movements. The aim was to compare the angle between the upper and lower leg (knee angle) as measured by the standard video analysis and the Microsoft Kinect.
The Kinect’s normal mode of use is with the user face-on, so that leg movement would be directly perpendicular to the image plane. While the depth camera should be able to cope with this, we were aiming to test the quality of the body tracking algorithms, not the robustness of the depth camera. For this reason the Kinect was placed at 45 degrees to the plane of motion, while the video camera viewed the motion directly side-on.
In order to obtain the Kinect’s value of the knee angle throughout the motion, we wrote a basic computer program to access skeleton position and orientation data. Dartfish, a tool used widely in schools and universities, was used to manually analyse the recorded video (the animated gifs above show the calculated angles). The ankle, knee and hip joints were palpated and marked to enable accurate analysis.
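As a rough illustration of the kind of calculation involved (a minimal sketch, not the program we actually used), the knee angle can be taken as the 3D angle between the thigh and shank vectors formed from the tracked hip, knee and ankle joint positions:

```python
# Sketch: knee angle as the 3D angle between the thigh vector (knee -> hip)
# and the shank vector (knee -> ankle). Joint coordinates are assumed to come
# from the tracked skeleton; the example values below are made up.
import numpy as np

def knee_angle_deg(hip: np.ndarray, knee: np.ndarray, ankle: np.ndarray) -> float:
    """Angle (degrees) at the knee between the thigh and shank segments."""
    thigh = hip - knee
    shank = ankle - knee
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example with made-up joint coordinates (metres), roughly mid-squat:
hip   = np.array([0.00, 1.00, 2.00])
knee  = np.array([0.05, 0.55, 2.10])
ankle = np.array([0.00, 0.10, 1.95])
print(round(knee_angle_deg(hip, knee, ankle), 1), "degrees")
```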
It can be seen in both plots that the Kinect under-predicts knee angle at all points. It is unlikely you’d want to use the Kinect values for any kind of quantitative study, but they could be used to show general trends in the data. I’m sure there are many more applications for this kind of information (illustrations of body angles) which don’t require a high level of accuracy but would benefit from instant results and minimal setup.
Sources of error
The Kinect hasn’t performed as accurately as we might have hoped, which leads to the following question:
What are the sources of inaccuracy, and could they be accounted for and improved?
The angle obtained from the tracked skeleton data is a resultant angle in three dimensions. The computer program doesn’t account for the fact that the knee bends in a single plane. Any error in the Kinect’s body tracking (resulting in an impossibly crooked knee bend) will be carried into the angle measurement. It should be relatively straightforward to introduce filters and logic gates to ensure this is minimised in future.
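One straightforward constraint of this kind (a sketch only, assuming the medio-lateral axis of the knee is known or can be estimated, which is not something we have implemented) is to project the thigh and shank vectors onto the sagittal plane before calculating the angle, discarding out-of-plane tracking error:

```python
# Sketch: constrain the knee angle to a single plane by projecting the segment
# vectors onto the sagittal plane before measuring the angle between them.
# The medio-lateral axis passed in is an assumption; in practice it would
# need to be estimated (e.g. from the hip joint positions).
import numpy as np

def project_onto_plane(v: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Remove the component of v along the (unit-normalised) plane normal."""
    n = normal / np.linalg.norm(normal)
    return v - np.dot(v, n) * n

def planar_knee_angle_deg(hip, knee, ankle, medio_lateral_axis) -> float:
    """Knee angle after discarding out-of-plane components of thigh and shank."""
    thigh = project_onto_plane(hip - knee, medio_lateral_axis)
    shank = project_onto_plane(ankle - knee, medio_lateral_axis)
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```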
The video below shows how the Kinect performed at tracking body segments throughout the movement; difficulties in torso identification clearly amplify the error in thigh position and hence knee angle. Developing the existing Kinect algorithms was obviously a massive project, so it’s unlikely these could be ‘re-developed’ to any great extent. However, it’s likely that the camera position could be optimised to minimise errors. In addition, it should be noted that this is only the first-generation Kinect; who knows what will be released in the next few years.
Another possible source of error comes from the type of movement analysed: a single, lower-body joint angle at an odd camera angle. It is possible that the Kinect is much better at calculating upper-body segments in plain view; after all, it was designed with arm-based gesture recognition in mind. Hopefully more specific skeleton tracking software will be developed in time. It’s likely it could be greatly improved with the addition of easily tracked visual markers placed on anatomical landmarks, or perhaps by using multiple Kinects to analyse the same movement.
Conclusion
Overall, it may not be up to scratch for serious analysis, but to get (very) approximate values almost instantly the Kinect is hard to beat. It would be great to see some specific software which addresses some of the issues highlighted in this post, and maybe even some more work investigating the accuracy of joint angles for an upper-body motion. With the release of Microsoft’s SDK I wouldn’t be surprised to see PC-specific programs, with some nice interfaces, which use the Kinect as an analysis tool.
Keep checking back for the next part of the article which will look at a different potential application for this amazing device.
Simon Choppin
Really fantastic blog Simon and very interesting to see a critical appraisal of the Kinect’s capabilities. I was starting to get a little tired of hearing how the Kinect was so amazing that it could solve all the world’s problems… I’m glad that you’ve shown that there’s still some development work out there for us sports engineers and biomechanists!
I agree David. I am particularly interested in what will come out over the next few years. In my opinion this is only the beginning.
Super interesting article!
Indeed, it is not always practical to make sure that what we measure is exactly on the image plane (and it’s impossible to know by how much the measurement plane is tilted).
If improved, this would be a great way to decrease the perspective error.
Great article Simon, have shared on my Facebook page! Things are moving in the right direction though, Nintendo may have the next ‘game’ sorted out.
Simon, what is a typical angle measurement error using Dartfish? Also, it looks like there is a systematic 20-40 degree error rather than random errors?
I’d agree the error in angle measurement is more systematic than random, and probably due to the fact that a resultant angle was plotted. The error was compounded around all three axes, although this could be improved with clever filtering techniques. Error on the Dartfish method wasn’t assessed, but without digging into the literature I’d say that if we can get the Kinect values to within 2 degrees of the Dartfish values, we’re at a very usable level of accuracy.
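As a trivial example of the kind of filtering I mean (a sketch, not something we have implemented; a proper approach would more likely use a low-pass Butterworth filter), the angle trace could simply be smoothed over time:

```python
# Sketch: smooth a knee-angle time series with a centred moving average.
# Window length is an illustrative assumption.
import numpy as np

def moving_average(angles_deg: np.ndarray, window: int = 5) -> np.ndarray:
    """Return the input 1D angle trace smoothed with a moving-average kernel."""
    kernel = np.ones(window) / window
    return np.convolve(angles_deg, kernel, mode="same")
```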
Simon C
Great stuff Simon! I’m in a biomechanics lab in the US and I recently got my PI to buy me a Kinect. I did a quick shoulder joint angle analysis for an occupational biomechanics class and it worked pretty well.
Are you going to ISSSMC in Newcastle this August by chance?
Is it possible that the software you used with the Kinect measures the angles at the actual axis of rotation instead of one fixed point on the knee? That may contribute to the discrepancies between the two sets of measurements.
Absolutely, we’re hoping to develop a more ‘intelligent’ method of measuring joint angle which takes into account the anatomy of the joint too. It will also be interesting to see how much more accurate upper body measures are.
Thanks for reading.
Simon C
Simon, thanks for this great information. As a physical therapist and movement scientist, I was hoping that the Kinect would make motion analysis affordable for the masses. Despite the large errors, the potential seems enormous, with algorithms improving over time! I am not a computer software specialist, but I am very interested in obtaining the raw data to calculate the segment angles, so that I can use the Kinect for educational purposes at the department of physical therapy. Could you please help me with that?
Thanks for your blog!
Could you please provide the methodology of your analysis, or a reference to the paper that provided you with this information?
Thanks
Unfortunately we’ve not published anything at this time; our investigations are still relatively embryonic. I don’t mind providing methodology details, although there’s not much more than what is mentioned in the article.
Thanks for reading
Simon Choppin
Simon,
Can you comment on the software that you used to obtain the stick figure and joint angles from the Kinect?
John
[…] series of posts looks at the Kinect as a potential tool for analysis in Biomechanics. Previously we explored the quality of algorithms which detect a user’s body segments, finding real potential should […]
Hello,
you have inspired me!
I’m developing software using the Microsoft Kinect. My wish is to capture and analyse movements like squats, deadlifts and kettlebell swings.
Just a video:
http://www.facebook.com/video/video.php?v=121110901319217
and the video list
http://www.facebook.com/video/?id=181867965210688
My software is still a prototype. The Kinect is fantastic, but the raw data streams are unstable and coherence is poor; I hope to solve the problem by adding information about the movement to correct the data.
Thank you for the idea!
Fantastic work, it looks really interesting and we’re very keen to see how you progress. Don’t hesitate to get in touch when you’ve finished the software, we’d be happy to mention it on the blog.
Simon Choppin
[…] technology to monitor patient movements. This technology could some day be used as a simple way to measure patient range of motion. The video below shows the Kinect being used for exactly this purpose (although only for […]
Is there information about the accuracy of length measurements taken from a Kinect? I want to measure upper extremity work space volume.
We’re currently working on assessing the accuracy of the Kinect as a biomechanics and scanning tool. When you say length do you mean segment length as predicted by the skeleton tracking software, or the measurement accuracy of the depth camera system? We will be publishing data soon with relation to scanning accuracy.
Thanks for reading
Simon Choppin
Have you guys done any analysis on maximum distance from event and field of view? I’m particularly interested in whether or not the Kinect can pick up the movement of an athlete with a ball, and make any estimates of release of the object… for instance, see noahbasketball
Hi Bob,
This is a tricky one. The Kinect seems to be able to perceive depth up to around 10 metres from the camera. However, the algorithms which drive the skeleton tracking etc. are only functional within four metres. It’s not outside the realms of possibility to use the depth cloud directly to calculate what you’re after, if you don’t mind getting involved in programming. However, the resolution of the depth value becomes very coarse at larger distances. We think this is due to the nature of the projected pattern used to calculate depth values.
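For anyone wanting to work from the depth cloud directly, here’s a minimal sketch of back-projecting a single depth pixel to a 3D point using a pinhole camera model; the focal length and principal point below are assumed, illustrative values rather than calibrated Kinect intrinsics:

```python
# Sketch: convert a depth-image pixel (u, v) plus its depth into an XYZ point.
# FX, FY, CX, CY are assumed values for illustration, not calibrated parameters.
import numpy as np

FX, FY = 580.0, 580.0   # assumed focal lengths in pixels
CX, CY = 320.0, 240.0   # assumed principal point (image centre)

def pixel_to_point(u: int, v: int, depth_m: float) -> np.ndarray:
    """Back-project image coordinates (u, v) and a depth in metres to XYZ (metres)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# e.g. a pixel near the image centre, 8 m away:
print(pixel_to_point(350, 220, 8.0))
```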
Hope that’s of some use, thanks for reading!
Simon Choppin
Thanks Simon… sounds like there may be some hard limits here.
Hi everyone, my name is Job. It’s a really great blog, Simon. In fact I am doing some work related to the accuracy of the Kinect (particularly through the Microsoft SDK) in terms of X-factor analysis. In my case I am trying to find the angle between the left-right shoulder vector and the left-right hip vector. I think I’m going to borrow a lot of concepts from your blog, if you don’t mind. Anyway, if you have something to share with regard to the X-factor stuff, you are really welcome. Here is my e-mail: eyolla2010@yahoo.com
Thanks for reading Job and thanks for the kind comments. There is a system using a depth camera for golf coaching which might be of interest to you: http://www.gurutrainingsystems.com/swinguru-pro/presentation/
We’re also working on a separate website which will describe some of our latest findings and include the software we’ve used for analysis. Watch this space!
Simon Choppin
Great stuff. Thanks for this blog. I am currently doing some research on developing a realistic gaming simulator using a system of Kinect and treadmill to enable a player (or patient) to explore their environment freely (in just the 1D forward direction for now). Hope we can discuss sometime soon.
Kin F. Kam
Hello! I am a biomechanics researcher just getting interested in how I might be able to use this technology to study subjects who can’t be brought into a lab. I’m wondering if you are aware of any new advances since you wrote this article that would help to increase the accuracy of the position and angle data. I am interested in angular velocities of the arm, which of course requires good accuracy of the position data. I’m wondering, for example, if there might be a way to calibrate in post processing to reduce some of the error. Thanks in advance for any help!
You may be interested in the following paper:
“Biomechanical Validation of Upper-body and Lower-body Joint Movements of Kinect Motion Capture Data”