Quote:
Originally Posted by Jared341
Thanks for sharing! I know that we thought about doing something similar this year, but were scared off by the added complexity of having to power and interface with a second computing device. If it's a viable and strategically valuable option next year, we will definitely put some of the lessons learned in this whitepaper to good use!
Yeah, that is the hard part about this. If you read the "algorithms" we used, you can see that part turned out to be way easier than the vision! We did have one match in Las Vegas where we sat dead for 30 seconds while our cRIO tried to connect to the Pandaboard, so the extra complexity is definitely a risk. In an ideal world, NI would come out with a USB module and a port of OpenKinect!
Your team's vision system really inspired us to take another look at vision too, though. Using the dashboard to do the processing helps in so many ways. The biggest, I think, is that you can "see" what the algorithm is doing at all times. When we wanted to see what our Kinect code was doing, we had to drag a monitor, keyboard, mouse, and power inverter onto the field. It was kind of a nightmare.
If anyone can point us toward a way to stream video (i.e., stream the frames that the Kinect code renders) from the Pandaboard/Ubuntu to the SmartDashboard, that would be a huge improvement for this kind of control system. It would make a good offseason project.
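To make the idea concrete, here is a minimal, completely untested sketch of one way it could work: JPEG-encode each rendered frame on the Pandaboard and serve the frames as a standard MJPEG-over-HTTP stream, the same format an Axis camera produces, so any viewer that can display an MJPEG stream could show it. This assumes OpenCV is available on the Pandaboard; the port number, the VideoCapture(0) frame source, and the boundary string are all placeholder choices, not anything our system actually does.

# Rough sketch only: serve rendered frames as an MJPEG-over-HTTP stream.
# Assumptions (not tested on a Pandaboard): OpenCV (cv2) is installed,
# and cv2.VideoCapture(0) stands in for however the Kinect frames
# actually arrive in your pipeline.

import cv2
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

PORT = 8080  # arbitrary; any open port the dashboard laptop can reach

capture = cv2.VideoCapture(0)  # placeholder for the Kinect frame source


class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # multipart/x-mixed-replace tells the client to replace each
        # JPEG part with the next one as it arrives -- that is all an
        # MJPEG stream is.
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        try:
            while True:
                ok, frame = capture.read()
                if not ok:
                    break
                ok, jpeg = cv2.imencode(".jpg", frame)
                if not ok:
                    continue
                data = jpeg.tobytes()
                # Write one multipart section per frame.
                self.wfile.write(b"--frame\r\n")
                self.wfile.write(b"Content-Type: image/jpeg\r\n")
                self.wfile.write(b"Content-Length: %d\r\n\r\n" % len(data))
                self.wfile.write(data)
                self.wfile.write(b"\r\n")
        except (BrokenPipeError, ConnectionResetError):
            pass  # viewer disconnected


if __name__ == "__main__":
    # One thread per connected viewer.
    ThreadingHTTPServer(("", PORT), MJPEGHandler).serve_forever()

Pointing a browser (or a dashboard camera widget, if it accepts an arbitrary MJPEG URL) at http://<pandaboard-ip>:8080/ would be a quick way to check that frames are flowing before trying to wire it into the SmartDashboard.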