Obviously, the first thing you need is the Kinect for Windows. The Beta 2 worked with Kinect for Xbox 360 (although depending on the unit, a power supply might be required). As of version 1, the SDK supports only the Kinect for Windows hardware. There are a couple of differences between the Kinect for Xbox 360 and the Kinect for Windows. First, the Kinect for Windows allows a Near mode that supports users being as close as 40 centimeters (15.75 inches) from the Kinect. Second, the Kinect for Windows is specifically built and tested with an improved USB interface for PCs. Microsoft also has dedicated a large team of engineers to continually improve the hardware and software associated with Kinect for Windows and is committed to providing ongoing access to its deep investment in human tracking and speech recognition. To find out where to purchase a Kinect for Windows (hereafter referred to simply as Kinect), go to microsoft.com/en-us/kinectforwindows/purchase/ and click Learn More.
If you want to be able to swap between using Kinect-tracked hand coordinates and mouse coordinates, you can store the PVector used in the 2D conversion as a variable visible to the whole sketch, which you update either from the Kinect skeleton if it is being tracked, or from the mouse otherwise:
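A minimal Processing sketch along those lines might look like the following. This is a hedged sketch, not code from the original: the variable names (context, handPos) are illustrative, and the exact SimpleOpenNI calls (enableUser, getJointPositionSkeleton, convertRealWorldToProjective) vary slightly between library versions, so check them against your installed release.

```java
// Processing sketch (assumes the SimpleOpenNI library is installed)
import SimpleOpenNI.*;

SimpleOpenNI context;
PVector handPos = new PVector();   // sketch-wide 2D position shared by all drawing code

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();            // older versions take a skeleton-profile argument
}

void draw() {
  context.update();
  int[] users = context.getUsers();
  if (users.length > 0 && context.isTrackingSkeleton(users[0])) {
    // Kinect path: project the tracked hand joint into screen coordinates
    PVector hand3d = new PVector();
    context.getJointPositionSkeleton(users[0], SimpleOpenNI.SKEL_RIGHT_HAND, hand3d);
    context.convertRealWorldToProjective(hand3d, handPos);
  } else {
    // Mouse fallback: same variable, so the rest of the sketch is unchanged
    handPos.set(mouseX, mouseY, 0);
  }
  background(0);
  ellipse(handPos.x, handPos.y, 20, 20);
}
```

Because the rest of the sketch reads only handPos, it never needs to know which input source produced the coordinates.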
If you need the mouse coordinates simply to test without having to get in front of the Kinect all the time, I recommend having a look at the RecorderPlay example (via Processing > File > Examples > Contributed Libraries > SimpleOpenNI > OpenNI > RecorderPlay). OpenNI has the ability to record a scene (including depth data), which makes testing simpler: record an .oni file with the most common interactions you're aiming for, then re-use the recording while developing. All it takes to use the .oni file is a different constructor signature for OpenNI:
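A hedged sketch of the playback setup follows; the file name is illustrative, and depending on your SimpleOpenNI version the recording is passed either through the constructor, as described above, or through an openFileRecording() call — the RecorderPlay example shows the form your release uses.

```java
// Playing back a recorded .oni scene instead of opening the device
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  // Pass the recording to the constructor (some versions instead use
  // context.openFileRecording("interactions.oni") after the default constructor)
  context = new SimpleOpenNI(this, "interactions.oni");
  context.enableDepth();
}

void draw() {
  context.update();                   // advances the recording one frame
  image(context.depthImage(), 0, 0);  // same drawing code as with live data
}
```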
This lab will explain the following: how to get the BodyFrame from the Kinect; how to use the CoordinateMapper to map the body joint positions; and how to use a predefined class to create a skeleton of the body and draw it using simple XAML shapes.
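The first two steps can be sketched in C# as follows. This is a hedged outline against the Kinect for Windows SDK 2.0; the field names (_sensor, _reader, _bodies) and the choice of HandRight are illustrative, not taken from the lab code.

```csharp
// Acquiring BodyFrames and mapping joints with the CoordinateMapper
private KinectSensor _sensor;
private BodyFrameReader _reader;
private Body[] _bodies;

private void InitKinect()
{
    _sensor = KinectSensor.GetDefault();
    _sensor.Open();
    _bodies = new Body[_sensor.BodyFrameSource.BodyCount];
    _reader = _sensor.BodyFrameSource.OpenReader();
    _reader.FrameArrived += OnBodyFrameArrived;
}

private void OnBodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;             // frames can be dropped
        frame.GetAndRefreshBodyData(_bodies);

        foreach (Body body in _bodies)
        {
            if (!body.IsTracked) continue;
            // Map a camera-space joint (meters) to depth-space pixels
            CameraSpacePoint pos = body.Joints[JointType.HandRight].Position;
            DepthSpacePoint p = _sensor.CoordinateMapper.MapCameraPointToDepthSpace(pos);
            // p.X and p.Y can now position a XAML shape on the canvas
        }
    }
}
```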
If you want to manipulate joints in another way (other than showing the skeleton), you can implement a method similar to the UpdateBodiesAndEdges method within the BodiesManager. The drawing of each skeleton with the bodies manager is accomplished as follows: each joint is a small ellipse, each bone is a line between ellipses, and each separate body is drawn with a different line color. If any joint extends outside the edge of the camera's view, a rectangle is drawn on that side. If a hand joint is detected as open, it is displayed as a green ellipse; if a hand is closed, it is displayed as a red ellipse. All 25 joints in the body are tracked. Joints whose positions are uncertain are called inferred joints, and they are drawn with a smaller circle and a thinner line for the bone.
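The per-joint rules above can be sketched in C# roughly as follows. This is a hedged illustration, not the BodiesManager source: the method names, radii, and brushes are illustrative choices, while the enums (TrackingState, HandState) are real Kinect SDK 2.0 types.

```csharp
// One joint: inferred joints get a smaller circle than tracked ones
private void DrawJoint(Canvas canvas, Joint joint, DepthSpacePoint p, Brush bodyBrush)
{
    bool inferred = joint.TrackingState == TrackingState.Inferred;
    double r = inferred ? 3 : 6;
    var dot = new Ellipse { Width = 2 * r, Height = 2 * r, Fill = bodyBrush };
    Canvas.SetLeft(dot, p.X - r);
    Canvas.SetTop(dot, p.Y - r);
    canvas.Children.Add(dot);
}

// Hand joints: green when open, red when closed, body color otherwise
private Brush HandBrush(HandState state, Brush fallback)
{
    switch (state)
    {
        case HandState.Open:   return Brushes.Green;
        case HandState.Closed: return Brushes.Red;
        default:               return fallback;
    }
}
```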
Let's try to draw a skeleton in a step-by-step way that shows how it all works. The instructions that follow just get you to the point where we can access the skeleton data as described in Chapter 6. So while there is nothing new in it at first, it is included for completeness.
However, we are going to want to modify the video returned from the camera by drawing a skeleton on it in the same position as the user. For this reason, we need to convert the video to a Bitmap and use the GDI Graphics object to draw on it.
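A hedged C# sketch of that conversion follows; the buffer and point names (pixelBuffer, shoulderPoint, and so on) are placeholders for values obtained from the sensor, not identifiers from the original text.

```csharp
// Copy raw BGRA bytes from the color frame into a Bitmap, then draw on it
// with a GDI+ Graphics object (System.Drawing, System.Drawing.Imaging,
// System.Runtime.InteropServices)
Bitmap frame = new Bitmap(width, height, PixelFormat.Format32bppRgb);
BitmapData data = frame.LockBits(new Rectangle(0, 0, width, height),
                                 ImageLockMode.WriteOnly, frame.PixelFormat);
Marshal.Copy(pixelBuffer, 0, data.Scan0, pixelBuffer.Length);
frame.UnlockBits(data);

using (Graphics g = Graphics.FromImage(frame))
{
    // Draw the skeleton over the video at the user's mapped position
    g.DrawLine(Pens.Green, shoulderPoint, elbowPoint);
    g.DrawEllipse(Pens.Green, headRect);
}
```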
The Viewbox is used to keep the image and the canvas at the same size. The second canvas (kinectCanvas) is used to display the green skeleton (using a class available in the sample: SkeletonDisplayManager).
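The layout being described might look like the following XAML. This is a hedged sketch: kinectCanvas matches the name above, but the other element names and the 640x480 size are illustrative assumptions.

```xml
<!-- A Viewbox keeps the video image and the overlay canvas scaled together -->
<Viewbox>
  <Grid Width="640" Height="480">
    <Image x:Name="kinectDisplay" />
    <Canvas x:Name="kinectCanvas" />
  </Grid>
</Viewbox>
```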
This chapter gave you an inside look at the different components of the Kinect sensor. You saw that the major components of a Kinect device are its color sensor, IR depth sensor, IR emitter, microphone array, and a stepper motor that tilts the device to change the camera angle. While the color and depth sensors provide the video and depth data that are of prime importance to the functioning of the device, the microphone array ensures that the audio quality is also up to par. Also worth mentioning are how Kinect processes the depth data and the array of microphones, a design novelty that enables clear voice recognition through noise-suppression and echo-cancellation mechanisms. Kinect for Windows is also capable of tracking users at a close range of approximately 40 centimeters using Near mode. It wouldn't be wrong to say that it is this combination of technological innovations that makes Kinect the awe-inspiring device that it is. You have also gone through the different kinds of applications that can be developed using Kinect. In the next chapter, we will walk you through the step-by-step installation and configuration of the development environment, along with troubleshooting tips and tricks that will help you be sure about everything before beginning development.
Rumen, I have a question. I'm a beginner at this, so sorry if it is a simple one. What is the difference between the Kinect v1 and v2? And if I use v2, can I work with the Kinect for Xbox 360, or do I have to use the Kinect for Windows v2?
Hi Franz, here is a comparison: -does-the-kinect-2-compare-to-the-kinect-1 and here some more info, too: -v2-whats-new/ To your 2nd question: yes, you can use K1 & K2 on the same PC, but keep in mind they use different SDKs. So, you need to install them both.
Oh, I almost forgot: I already have an Xbox Kinect, but I was thinking of buying the Kinect for Windows v2. Would that make development better and easier for me, and would the difference between the two become noticeable in the game I am developing? Or is the Xbox Kinect sufficient for a game that is only a project at my university?
Rumen, I need some help in getting my own movement recognized by the Kinect. Do you know any source that gives tutorials on how to do that? I have tried to google it, but all I found are the basics of Kinect programming. Can you help me?
I have found another blog that teaches how to create a cursor that looks like the cursor in games, but it uses a red circular cursor instead of a hand: -the-dots-adding-buttons-to-your-kinect-application.aspx
Hi Rumen, I have a quick question: is it possible to create a dynamic gesture, like running, or jumping while the hands are up, with the Xbox 360 Kinect and MS SDK v1? If so, do you know of any tutorials for that? I only seem to find simple gestures on the net, like hand-above-the-head gestures.
Rumen, does the Kinect manager block rotation from scripts? I have tried rotating my character using transform.rotate and the rigidbody, but during play mode, when the gesture is detected, the only thing that rotates is the camera, and it rotates around the character. What I want is to have the character rotate together with the camera.
I'm using the older one, the Kinect v1 for Xbox 360, not the Kinect v2. Before I set up the Kinect Manager for multiple scenes, I could control my cursor in my Main Menu scene and my pause menu. Now it isn't controlled, and I can't seem to find the problem. The gesture listener is connected to the Kinect Manager by the setsceneAvatars script and can detect gestures, as shown in the GestureInfo object during play mode, but the cursor won't move at all.
Hi, Rumen, I am a student of computer science and I am working on a project for head tracking in Unity 3D using Kinect v2, but so far I have totally failed. Can you please help me with this? Thanks in advance.