Quote: Originally Posted by jesusrambo
"Considering different coprocessors, last year we just did all our processing on the driver station. The Axis camera feed is available directly to it, you can run the processing by just adding it into the VI for the dashboard, and then values can be sent back using NetworkTables. This also has the benefit of letting you use NI Vision and the Vision Assistant to develop your processing algorithms."
I agree that is another good approach. One of the things I love about FRC is seeing the many different solutions to the same problem.
We used the Axis camera strictly for driver control and contingency manual aiming. Our main goal in 2013 was to add an additional camera dedicated to targeting without increasing bandwidth. The only data feed from the vision co-processor on the network was an ASCII string of target IDs and target position coordinates. For 2013, all the driver had to do was point the robot toward the goals, and then the automatic targeting/shooting would take over and shoot autonomously.
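To make the data-feed idea concrete, here is a minimal sketch of how the robot side might parse such an ASCII string. The wire format is an assumption for illustration only; the post does not specify it. I assume a hypothetical layout of semicolon-separated targets, each a comma-separated `id,x,y` triple:

```python
def parse_targets(message):
    """Parse a hypothetical ASCII target string such as "1,120,45;2,200.5,60"
    into a list of (id, x, y) tuples. Format is assumed, not from the post."""
    targets = []
    for chunk in message.strip().split(";"):
        if not chunk:
            continue  # tolerate trailing semicolons or empty fields
        tid, x, y = chunk.split(",")
        targets.append((int(tid), float(x), float(y)))
    return targets
```

A string like this keeps the network traffic to a few dozen bytes per update, which is the point of the approach: the image itself never leaves the co-processor, so camera bandwidth limits are never an issue.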
Quote: Originally Posted by SoftwareBug2.0
"How did you determine what speed you needed? Was 10 Hz too slow just because it wasn't meeting your goal of 15, or did you try it and not like the results?"
From our experience with high-speed movement (and defensive collisions) in competition, 10 Hz isn't enough to keep a persistent target lock. It could be done, but for the 2013 game, speed and accuracy in scoring were everything.