Hello everyone,
So my team members were recently considering mounting the Kinect sensor on the robot as a camera-like sensor, instead of at the driver's station, because it can measure distance and depth. We figured this would be a good strategy for accurately shooting frisbees at the goals. The problem is, we have no clue how to do it.
If anyone knows of a way to "hack" the Kinect into serving as an onboard depth camera, or how to program it to do so using LabVIEW and the Kinect SDK, any assistance would be greatly appreciated.
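For context on what the depth data looks like once you get it off the sensor: the Kinect reports an 11-bit raw disparity value per pixel, and a commonly cited empirical approximation converts that to metric depth. This is a sketch, not an official calibration (the coefficients vary slightly between individual units), and the function name is my own:

```python
def raw_to_meters(raw_disparity: int) -> float:
    """Convert an 11-bit Kinect raw disparity value to depth in meters.

    Uses a commonly cited empirical approximation (coefficients vary
    slightly between individual Kinect units). Returns float('inf')
    for out-of-range or invalid readings.
    """
    if not 0 <= raw_disparity < 2047:
        return float("inf")  # 2047 is the Kinect's "no reading" sentinel
    denom = raw_disparity * -0.0030711016 + 3.3309495161
    if denom <= 0:
        return float("inf")  # beyond the sensor's usable range
    return 1.0 / denom
```

Getting the raw frames in the first place still requires a driver layer, e.g. the official Kinect SDK on a Windows laptop or libfreenect on an onboard coprocessor, which is where the LabVIEW/SDK integration question comes in.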
Thank you!
