Hello People of Chief Delphi!
We have been having some issues with our Kinect working with LabVIEW, so I was hoping you all might be able to provide some help and constructive criticism.
First off, we can’t get a reading off the Kinect. We know the Kinect itself works, as we have seen in the “Kinect Skeletal Viewer” and the “Kinect Explorer.” However, we cannot get the Kinect to work with our LabVIEW programs. We don’t get any errors about the program not compiling or having build errors, so we are very confused. We did, however, make one modification to allow the program to run independent of the cRIO, on our desktop computers. (The modification is in the attached image.) Could this be our problem?
Next, we were curious how we could get the image from the Kinect into our program. Would we use an Image Display, and if so, how would we do this?
Thank you in advance for your help, Chief Delphi!
Actually, there were two changes. The code you attached was intended to be part of the robot code. It was moved to run on the PC, where it will not run due to a missing library called FRC_NetworkCommunication.out. Since that didn’t work, it was changed to not call any library at all.
Anyway, the Kinect VIs in WPILib can be used on the cRIO in a location such as Autonomous or Periodic. The basic Kinect joystick data is available just by opening a Joystick and selecting the Kinect 1 or Kinect 2 device. Other data, such as the Kinect Header, can be used to get at the advanced data fields. I uploaded some examples to CD and FIRSTForge showing how to map various arm movements from the low-level vertices and make your own joystick on the robot. The FIRSTForge URL is http://firstforge.wpi.edu/sf/frs/do/listReleases/projects.wpilib/frs.2012_kinect_examples
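If it helps to see the idea outside of LabVIEW, here is a rough sketch of the same data flow using the 2012 WPILib Java bindings. The KinectStick class is the Java-side equivalent of the Kinect joystick; the PWM channel numbers and scaling are assumptions, so treat this as an illustration of the concept rather than the LabVIEW wiring:

```java
import edu.wpi.first.wpilibj.KinectStick;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SimpleRobot;
import edu.wpi.first.wpilibj.Timer;

// Rough sketch: drive with Kinect arm gestures during autonomous.
// KinectStick 1 and 2 are the "left arm" and "right arm" virtual
// joysticks published by the Kinect server on the driver station.
public class KinectDriveRobot extends SimpleRobot {
    RobotDrive drive = new RobotDrive(1, 2);    // PWM channels 1 and 2 (assumption)
    KinectStick leftArm = new KinectStick(1);   // left-arm gesture joystick
    KinectStick rightArm = new KinectStick(2);  // right-arm gesture joystick

    public void autonomous() {
        while (isAutonomous() && isEnabled()) {
            // Tank drive from the vertical arm angles, scaled down for safety
            drive.tankDrive(leftArm.getY() * 0.5, rightArm.getY() * 0.5);
            Timer.delay(0.01); // don't hog the CPU
        }
    }
}
```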
Greg McKaskle
Thanks for the reply.
New question: how would I calibrate the Kinect so that I could hold my arms up and thereby create a point of reference for the Kinect to base motor values on?
I suppose you could remember the highest Z you’ve seen on a hand or something, but my suggestion would be to use some other point on the skeleton for reference: shoulders, spine, possibly head.
The code I referenced in the other post used the height between hip and shoulder, or something similar, to normalize the arm gestures, and then applied a general linear scaling and offset to make the gestures comfortable and full-range.
Another useful point to use is the Center of Mass of the player.
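To make the normalization concrete, here is a minimal sketch of the math in Java. The vertex names and the gain are placeholders I made up, not actual WPILib identifiers; you would wire in the Y values from the skeleton data:

```java
// Sketch of gesture normalization: use the hip-to-shoulder distance as a
// body-size reference so the same arm pose maps to the same motor value
// for tall and short drivers. All names and constants are hypothetical.
public class GestureScaler {
    /**
     * @param handY     vertical position of the hand vertex
     * @param shoulderY vertical position of the shoulder vertex
     * @param hipY      vertical position of the hip vertex
     * @return a motor value clamped to [-1.0, 1.0]
     */
    public static double armToMotor(double handY, double shoulderY, double hipY) {
        double torso = shoulderY - hipY;          // body-size reference length
        if (torso <= 0) return 0.0;               // bad skeleton data: stay safe
        double raw = (handY - shoulderY) / torso; // ~0 at shoulder height, ~1 overhead
        double scaled = raw * 1.5;                // linear gain for a comfortable full range
        return Math.max(-1.0, Math.min(1.0, scaled)); // clamp to motor range
    }
}
```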
Greg McKaskle
Thanks again! We think we figured out a solution, using a combination of case structures and feedback nodes to compare the original calibrated value against the new position of the hands. We’ll see how it works when we test it. Hopefully, it’ll work!
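In case it helps anyone else, the logic behind our case structures and feedback nodes boils down to something like this rough Java sketch (the names are made up): the feedback node stores the calibrated reference, and the case structure picks between latching a new reference and computing an output relative to the stored one.

```java
// Text equivalent of the LabVIEW pattern: a feedback node holds the
// calibrated reference, and a case structure decides whether to latch
// a new reference or to produce an output relative to the stored one.
public class HandCalibration {
    private double referenceY;          // plays the role of the feedback node
    private boolean calibrated = false;

    public double update(double handY, boolean calibrateButton) {
        if (calibrateButton || !calibrated) { // "calibrate" case
            referenceY = handY;               // latch the current hand height
            calibrated = true;
            return 0.0;
        }
        // "run" case: output is the offset from the calibrated reference
        return handY - referenceY;
    }
}
```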