Vision Targeting on Laptop
Our team recently purchased a new laptop to replace our broken Classmate. The new laptop is a quad-core (thanks, Woot!), so we have a lot of spare processing power available.
I remember a thread a while ago where one team mentioned that they did their vision processing on their driver station laptop (maybe 1114).
This seems like a good approach to pursue, since the "take a single image, act on it" method may not be viable for next year's game, for example if it were a clone of Lunacy.
One idea I have for accomplishing this is to run the vision processing in the dashboard, since it already has a connection to the cRIO, and then transmit just the target info (distance, angle, etc.) to the robot.
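To make the idea concrete, here is a minimal sketch of the "laptop computes, robot receives" part, assuming a simple UDP message of two doubles (distance and angle) sent from the dashboard machine to the robot controller. The packet format, port, and helper name here are all my own invention for illustration; the demo loops the packet back to a local listener standing in for the cRIO so it can run anywhere:

```python
import socket
import struct

def send_target_info(sock, addr, distance_m, angle_deg):
    # Pack distance and angle as two network-order doubles and send over UDP.
    payload = struct.pack("!dd", distance_m, angle_deg)
    sock.sendto(payload, addr)

# Demo: a local listener stands in for the robot controller.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))           # let the OS pick a free port
robot_addr = listener.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_target_info(sender, robot_addr, 3.25, -4.5)

data, _ = listener.recvfrom(64)
distance, angle = struct.unpack("!dd", data)
print(distance, angle)  # -> 3.25 -4.5

sender.close()
listener.close()
```

The robot-side code would unpack the same struct each loop iteration and treat stale packets as "no target," since UDP gives no delivery guarantee.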
Am I going in the right direction, or is there a better way to go about it?
__________________
2012 FLR Regional Champs, with 1507 and 191