Quote:
Originally Posted by apalrd
To put to rest any myths you guys are circulating, I have data obtained from the beta test information and scrolling through the 2012 LV code.
-The Kinect has a "server" application (written in C#) which communicates with the driver station and the Kinect. I haven't looked at the server yet. I do know that the default Dashboard can apparently show Kinect analysis data instead of the camera image.
-The Kinect feeds "virtual joysticks" (a few analog axes relating to skeletal angles of the arms, and buttons relating to hand/head motions) to the robot, which in LV show up as a 5th joystick (Kinect1 and Kinect2, although Start Communication only shows Kinect1 as being populated with data). The actual gesture analysis can be done on the robot end using this data. The user can also request the raw skeletal data (20 points of x-y-z).
-The beta test information instructs us to drive the robot using this joystick. I can only see this going VERY BADLY, and I hope teams do not attempt it in competition. There are a bunch of warnings about safety and about learning to use this interface.
-There is no information yet to suggest this is required. There are still 4 USB joystick inputs in addition to the Kinect data.
-We are now using LV 2011. I hope I haven't said too much.
|
Nope, you haven't. We can talk all about what's going on in beta - in fact that's what we're all supposed to do. You can even post screenshots of working code, demos, etc. Just don't post the documentation, and always add that little "this is only beta, your mileage may vary" addendum. Good to see you guys got accepted!
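For anyone curious what "skeletal angles of the arms" turning into analog axes might look like: here's a rough sketch in plain Python of that kind of mapping. To be clear, this is entirely my own invention for illustration - the `arm_axis` function, the point layout, and the normalization are all hypothetical, not the actual C# server's code - it just shows how an (x, y, z) skeletal point pair could become a joystick axis in [-1, 1].

```python
import math

def arm_axis(shoulder, wrist):
    """Map an arm's elevation angle to a joystick-style axis in [-1, 1].

    Hypothetical sketch only: the real Kinect server's mapping isn't
    public. `shoulder` and `wrist` are (x, y, z) points, with +y up,
    like the raw 20-point skeletal data mentioned above.
    """
    dx = wrist[0] - shoulder[0]
    dy = wrist[1] - shoulder[1]
    dz = wrist[2] - shoulder[2]
    # Elevation angle of the arm above/below horizontal, in radians:
    # atan2(vertical component, horizontal distance).
    angle = math.atan2(dy, math.hypot(dx, dz))
    # Normalize so straight up (+pi/2) -> +1.0, straight down -> -1.0,
    # arm level -> 0.0, then clamp to the joystick range.
    return max(-1.0, min(1.0, angle / (math.pi / 2)))
```

So an arm held straight out gives 0.0, straight up gives +1.0, straight down gives -1.0 - the kind of signal the robot-side code would then read off the virtual joystick.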