Quote:
Originally Posted by Aaron.Graeve
I know it is possible and a good idea. Don't quote me on it but I believe 118 used the Raspberry Pi's enhanced I/O equivalent, a BeagleBone, for the vision tracking on their 2012 robot.
This is what I know from speaking with their developer:
They used a BeagleBoard (http://beagleboard.org/) running embedded Linux, with Ethernet and USB interfaces.
They used HTTP GET calls via the libcurl library to pull frames from the camera, processed them with OpenCV, and finally sent UDP packets across to the cRIO as input for the control loops.
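
To make that pipeline concrete, here is a minimal sketch of the BeagleBoard side: grab a JPEG over HTTP with libcurl, decode and process it with OpenCV, and send the result to the cRIO over UDP. The camera URL, the cRIO address and port, and the trivial "find the bright blob" processing are my own placeholders, not 118's actual code, and it assumes a modern OpenCV build rather than whatever they ran in 2012.

// Illustrative sketch only -- camera URL, cRIO address/port, and the
// threshold/centroid "processing" step are assumptions, not 118's code.
#include <curl/curl.h>
#include <opencv2/opencv.hpp>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <vector>

// libcurl write callback: append downloaded bytes to a std::vector
static size_t appendBytes(void *data, size_t size, size_t nmemb, void *userp)
{
    std::vector<unsigned char> *buf = (std::vector<unsigned char> *)userp;
    unsigned char *bytes = (unsigned char *)data;
    buf->insert(buf->end(), bytes, bytes + size * nmemb);
    return size * nmemb;
}

int main()
{
    const char *cameraUrl = "http://10.1.18.11/jpg/image.jpg"; // hypothetical Axis-style snapshot URL
    const char *crioIp    = "10.1.18.2";                        // hypothetical cRIO address
    const int   crioPort  = 1180;                               // hypothetical team-use port

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();

    // UDP socket used to push vision results to the cRIO.
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(crioPort);
    inet_pton(AF_INET, crioIp, &dest.sin_addr);

    while (true)   // loop forever; cleanup calls below are shown for completeness
    {
        // 1. HTTP GET the latest frame from the camera.
        std::vector<unsigned char> jpeg;
        curl_easy_setopt(curl, CURLOPT_URL, cameraUrl);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendBytes);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &jpeg);
        if (curl_easy_perform(curl) != CURLE_OK || jpeg.empty())
            continue;

        // 2. Decode and process with OpenCV (placeholder: centroid of bright pixels).
        cv::Mat frame = cv::imdecode(jpeg, cv::IMREAD_GRAYSCALE);
        if (frame.empty())
            continue;
        cv::Mat mask;
        cv::threshold(frame, mask, 200, 255, cv::THRESH_BINARY);
        cv::Moments m = cv::moments(mask, true);
        double x = (m.m00 > 0) ? m.m10 / m.m00 : -1.0;

        // 3. Send the result to the cRIO as a small UDP datagram.
        char msg[64];
        int len = snprintf(msg, sizeof(msg), "x=%.1f", x);
        sendto(sock, msg, len, 0, (sockaddr *)&dest, sizeof(dest));
    }

    curl_easy_cleanup(curl);
    close(sock);
    return 0;
}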
One thing we didn't discuss, which I may want to talk about at some point, is the danger of sending UDP packets when the robot is not listening for them. This can flood the receive buffers and corrupt the TCP/IP stack, causing the driver station to lose its connection. The way we tried to overcome this was to open the listener immediately, on its own thread (task) that starts on power-up. This should work because the camera takes about 30 seconds to power on, which is much longer than the cRIO takes to boot and start listening.
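
Here is a rough sketch of that receiving side, just to show the structure: the listener socket is opened on its own thread as the very first thing at startup, drains every datagram as it arrives, and keeps only the newest value for the control loop. On the real cRIO this would be a spawned WindRiver/VxWorks task rather than a std::thread, and the port number and "x=..." message format are the same placeholders I used in the sender sketch above.

// Illustrative sketch only: on the cRIO this would be a VxWorks task spawned
// at power-up rather than a std::thread, but the idea is the same -- start
// listening before the sender can possibly exist.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <cstdio>
#include <cstring>
#include <thread>

static std::atomic<double> g_targetX(-1.0);   // latest vision result, read by the control loop

// Runs forever on its own thread: receive every datagram as it arrives so the
// network buffers never back up, and keep only the newest value.
static void visionListener(int port)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);
    bind(sock, (sockaddr *)&addr, sizeof(addr));

    char buf[64];
    while (true)
    {
        ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, NULL, NULL);
        if (n <= 0)
            continue;
        buf[n] = '\0';
        double x;
        if (sscanf(buf, "x=%lf", &x) == 1)
            g_targetX.store(x);
    }
}

int main()
{
    // Start the listener immediately on power-up, long before the camera
    // (and therefore the BeagleBoard's sender) is ready ~30 seconds later.
    std::thread(visionListener, 1180).detach();

    // ... robot init / control loop would go here, reading g_targetX ...
    while (true)
    {
        printf("latest target x: %.1f\n", g_targetX.load());
        sleep(1);
    }
    return 0;
}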
Oh yes, and we both use WindRiver C++.
