Re: How fast was your vision processing?
We used the Nvidia TK1, with C++ and OpenCV with CUDA GPU support. The actual algorithm was very similar to the samples from GRIP. Everything up to findContours() was pushed to the GPU. It would normally run at the full framerate of the MS LifeCam (30 fps). It sent a UDP packet to the roboRIO every frame. The latency of the algorithm was less than 2 frames, so about 67 ms.
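For the per-frame send, something like the following sketch is typical. The payload format (`angle:<value>`) and port 5800 are assumptions for illustration; the post doesn't describe the actual packet layout the team used.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

// Format the measurement as a small text payload. This layout is
// hypothetical; the original packet format isn't described in the post.
std::string formatPacket(double angleDegrees) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "angle:%.2f", angleDegrees);
    return std::string(buf);
}

// Send one UDP datagram per processed frame toward the roboRIO.
// UDP is connectionless, so this is cheap enough to do at 30 fps.
bool sendTargetPacket(const std::string& rioIp, int port, double angleDegrees) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return false;
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(static_cast<uint16_t>(port));
    inet_pton(AF_INET, rioIp.c_str(), &dest.sin_addr);
    std::string payload = formatPacket(angleDegrees);
    ssize_t sent = sendto(sock, payload.data(), payload.size(), 0,
                          reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    close(sock);
    return sent == static_cast<ssize_t>(payload.size());
}
```

Fire-and-forget UDP is a good fit here: if a frame's packet is lost, the next frame's result arrives 33 ms later anyway.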
We felt we still couldn't aim fast enough. We actually spent more time working on the robot positioning code than we did on the vision part. At least for us, rotating an FRC bot to within about a half degree of accuracy is not an easy problem. A turret would have been much easier to aim.
One helpful exercise we did that I think is worth sharing: figure out what the angular tolerance of a made shot is. We used 0.5 degrees for round numbers. Now, using the gyro, write an algorithm to position the robot. We used the SmartDashboard to type in numbers. Can you rotate the robot 30 +- 0.5 degrees? Does it work for 10 +- 0.5 degrees? Can you rotate the robot 1 degree? Can you rotate it 0.5 degrees? Knowing these limits and improving them helps a lot.
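The exercise above can be sketched as a simple proportional turn-to-heading loop. The gain, max turn rate, and loop period below are made-up illustrative values, not our actual tuning; note that an ideal simulation like this converges easily, while a real drivetrain fights you with stiction and gearbox backlash right around that last half degree.

```cpp
#include <cmath>

// Simulate rotating the robot toward a target heading with a plain
// proportional controller. Returns the remaining heading error (degrees).
// The robot model and gain here are illustrative assumptions only.
double rotateToHeading(double targetDeg, double toleranceDeg = 0.5) {
    const double kP = 0.05;                 // motor output per degree of error
    const double maxRateDegPerSec = 180.0;  // assumed turn rate at full output
    const double dt = 0.02;                 // 20 ms loop, typical for FRC code
    double heading = 0.0;                   // simulated gyro reading

    for (int i = 0; i < 500; ++i) {         // up to 10 s of simulated time
        double error = targetDeg - heading;
        if (std::fabs(error) <= toleranceDeg) break;
        double output = kP * error;         // clamp command to [-1, 1]
        if (output > 1.0) output = 1.0;
        if (output < -1.0) output = -1.0;
        heading += output * maxRateDegPerSec * dt;  // integrate turn rate
    }
    return targetDeg - heading;
}
```

Running the same sweep as the exercise (30, 10, 1, 0.5 degrees) against the real robot, with the setpoint typed into the dashboard, tells you quickly whether your tolerance is achievable.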
__________________
Ryan Shoff
4143 Mars/Wars
CheapGears.com