Processing Vision Code on Laptop

We have heard some ideas that vision code could theoretically be processed on a laptop mounted on the robot. After using some of the vision examples implemented in the framework, we realized how processor-intensive a task it is. How would we go about processing the code on an onboard computer?

The Rectangular Target VI has code for both the My Computer target and the cRIO target. One runs on a PC, the other on the robot. This lets you compare the results and the performance.

I would encourage you to think about the rates you need from the camera and how the data will be used before jumping to the laptop. But if you decide to use a laptop, you have two choices. You can use the dashboard program running on your driver station to do the processing and send data back to the robot using UDP or a similar mechanism, or you can strap a laptop to the robot. That laptop still needs to communicate with the cRIO, likely using UDP (a minimal sketch follows below). The difference is that one is connected over Wi-Fi and the other uses a cable.

Greg McKaskle
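For concreteness, here is a minimal sketch of the laptop-to-robot UDP link described above, written in Java rather than LabVIEW. The address, port, and message format are all illustrative assumptions, not anything the framework defines:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class TargetSender {
    public static void main(String[] args) throws Exception {
        // 10.TE.AM.2 was the conventional cRIO address on the robot network;
        // 10.1.92.2 and port 1130 here are placeholders, not values from this thread.
        InetAddress robot = InetAddress.getByName("10.1.92.2");
        int port = 1130;

        try (DatagramSocket socket = new DatagramSocket()) {
            // Pack a made-up target result (normalized center x/y and a
            // distance estimate) into a comma-separated string. The robot
            // program would parse the same three fields on the other end.
            String message = String.format("%.3f,%.3f,%.1f", 0.25, -0.10, 12.3);
            byte[] payload = message.getBytes(StandardCharsets.US_ASCII);
            socket.send(new DatagramPacket(payload, payload.length, robot, port));
        }
    }
}
```

UDP is a reasonable fit here because a dropped target report is harmless: the next frame replaces it a fraction of a second later, and you avoid TCP retransmit stalls over a flaky wireless link.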

Take a look at the White Paper that FIRST released this season. I believe it is on FirstForge… It gives step-by-step instructions on how to do the vision processing.

http://firstforge.wpi.edu/sf/go/doc1302?nav=1

I don’t know how the FRC rules will govern this, but it is still a great idea. An x86-based micro PC would be great for this: low power consumption, low weight, and a lot of extra CPU resources. This would be a good off-season project for advanced autonomous features.

We were able to process the targets at about 12 Hz using a 640x480 image on the cRIO. I’m more worried about motion blur than update rate at this point. Your mileage may vary.
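If you want to check what rate you are actually getting, one simple way is to count processed frames over a fixed window. This is just a sketch: grabImage() and processFrame() are hypothetical stand-ins for your camera read and vision pipeline, and the 80 ms sleep only simulates roughly the 12 Hz mentioned above.

```java
public class RateMonitor {
    public static void main(String[] args) {
        long frames = 0;
        long windowStart = System.nanoTime();

        while (true) {
            // Hypothetical stand-ins for acquisition and vision processing.
            processFrame(grabImage());
            frames++;

            // Report the average rate over each 5-second window.
            double elapsed = (System.nanoTime() - windowStart) / 1e9;
            if (elapsed >= 5.0) {
                System.out.printf("vision rate: %.1f Hz%n", frames / elapsed);
                frames = 0;
                windowStart = System.nanoTime();
            }
        }
    }

    // Stubs so the sketch compiles; replace with real code.
    private static Object grabImage() { return new Object(); }
    private static void processFrame(Object image) {
        try {
            Thread.sleep(80); // simulate ~12 Hz of processing work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```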

The cRIO has quite a bit of capability. The program has to be fairly efficient and not be wasteful, but overall it is fairly easy to accomplish quite a bit natively. I could imagine much more advanced and parallel algorithms being developed in the off-season.