Firstly, it needs to be decided whether placing the Kinect on the robot will in some way enhance the robot beyond what the Kinect offers in its intended role during the hybrid period. Seeing as I can't really think of a reason why it would help to give feedback to your robot during hybrid, we'll assume putting it on the robot is the better idea.
But so far, most of the discussion focuses on interfacing the Kinect to the cRIO on the robot, directly or indirectly. Here's why this is not a good idea:
- It will likely not be possible to interface the Kinect directly to the cRIO (reimplementing USB would not be possible without access to the FPGA, which we do not have, and USB-serial communication would be too slow and only covers a subset of the USB protocol, so it isn't compatible with the Kinect).
- The cRIO is slow (300 MHz), so performing image processing on it is probably a bad idea in the first place (at least in my team's experience).
- Additional on-board co-computers (>1 GHz, or possibly less if you work with an uncompressed stream) have a chance of working, but are fairly expensive (at least for some teams; I know we don't have a spare $100-200 or more).
The problem is that I don't have any counterpoints. The fact that the Kinect uses a USB interface is a huge issue. Last year our team worked out a system where an application on the driver station grabbed images from the Ethernet camera, did the processing on the laptop, and sent commands back, but this only worked because we were able to bypass the cRIO entirely when transmitting images. To do something similar this season with the Kinect, you would need to convert the USB image stream to Ethernet... and at that point (given the hardware required to do it), you might as well put a computer directly on the robot, which is list item #3.
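For anyone curious what that laptop-side setup roughly looks like, here's a minimal sketch: grab a frame from the Ethernet camera over HTTP, run your target detection, and fire a small text command back at the robot over UDP. The camera snapshot URL, the IP addresses, the port, the "TURN" command format, and the findTargetOffset() stub are all placeholders I made up for illustration, not anything official from FIRST or WPILib.

```java
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.URL;
import javax.imageio.ImageIO;

public class VisionRelay {
    // Placeholders: substitute your camera's snapshot URL and your robot's
    // address (FRC convention is 10.TE.AM.xx), plus whatever port you pick.
    private static final String CAMERA_SNAPSHOT = "http://10.0.0.11/jpg/image.jpg";
    private static final String ROBOT_ADDRESS = "10.0.0.2";
    private static final int COMMAND_PORT = 1180;

    public static void main(String[] args) throws IOException, InterruptedException {
        DatagramSocket socket = new DatagramSocket();
        InetAddress robot = InetAddress.getByName(ROBOT_ADDRESS);

        while (true) {
            // Pull one JPEG frame straight from the Ethernet camera,
            // bypassing the cRIO entirely.
            BufferedImage frame = ImageIO.read(new URL(CAMERA_SNAPSHOT));

            // Stand-in for the real image processing on the laptop.
            double turnCorrection = findTargetOffset(frame);

            // Send a tiny plain-text command back to the robot.
            byte[] msg = String.format("TURN %.3f", turnCorrection).getBytes();
            socket.send(new DatagramPacket(msg, msg.length, robot, COMMAND_PORT));

            Thread.sleep(50); // don't hammer the camera
        }
    }

    // Hypothetical detection step -- replace with actual target finding.
    private static double findTargetOffset(BufferedImage frame) {
        return 0.0;
    }
}
```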
So this turns into an argument of smart cRIO vs. dumb cRIO (in the dumb/smart terminal sense). Last year, our team had a dumb cRIO with a command framework that worked pretty well, just interpreting the commands sent back from the computer. This year, a similar system would be doable, but only by shelling out for an integrated co-processor and using that to do the image processing.
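To show what I mean by a dumb cRIO, here's a rough sketch of the robot-side half: a background thread that does nothing but catch the latest command from the laptop and hand it to the main loop. It's written as plain desktop Java for clarity (the actual cRIO Java environment is more limited), and the port and "TURN" command format are the same made-up placeholders as in the sketch above.

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Sketch of the "dumb cRIO" end of the link. The robot code never interprets
// images; it just parses whatever command the laptop last sent.
public class CommandListener implements Runnable {
    private volatile double turnCorrection = 0.0;

    // The main robot loop reads this each cycle and feeds it to the drivetrain.
    public double getTurnCorrection() {
        return turnCorrection;
    }

    public void run() {
        try {
            DatagramSocket socket = new DatagramSocket(1180); // placeholder port
            byte[] buffer = new byte[256];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                String msg = new String(packet.getData(), 0, packet.getLength()).trim();

                // Commands are plain text, e.g. "TURN -0.125".
                if (msg.startsWith("TURN ")) {
                    turnCorrection = Double.parseDouble(msg.substring(5));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

You'd start this on its own thread when the robot boots, and the teleop/hybrid code just asks it for the latest value each cycle.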
The deciding factor becomes cost. While you might be able to go cheaper than a Panda Board, someone has already mentioned Beagle Boards and boards with similar processors being too slow. It really depends on how worthwhile you think the depth data from the Kinect's IR camera will be. Personally, I don't think it will be that game-changing, seeing as you should already know your distance from the basket based on where you start.
As for using it in hybrid mode...? Still seems rather useless, seeing as anything you might want to tell the robot would be static and could be accomplished through more orthodox means (like switches on the robot or something). Our team will probably forgo the Kinect entirely, and might end up trying to sell it if we can't find an off-season project to use it in.