Quote:
Originally Posted by Jon Stratis
At regionals each year, you can usually count on one hand the number of teams that have some sort of autonomous targeting using the vision targets. Most of the teams with cameras simply have them there so the drivers can line it up on the computer screen manually. I don't think FIRST needs to go making vision tracking harder when most teams haven't successfully done it as it is!...
I have to say, I find these statements sad, fairly true, and at the same time off base.
Jon, your point that few teams use vision tracking is true, yet it shouldn't be. Vision tracking is not all that difficult to achieve when you consider the tools FIRST has provided each year. With Lunacy, the two-color tracking was already there. All you needed to do was integrate it and calibrate it to your system.
Breakaway had the circular target tracking and alignment set up for you. Again, all you needed to do was integrate it and calibrate.
Logomotion was a bit different in that the packaged code leaned toward line following.
This year, the rectangular vision processing example is an easy VI to integrate. Just follow the directions right in the VI. Calibration is not all that difficult. Integrating the output array is also fairly easy. It already provides X, Y, and Z (distance) data for you, in a format that is easy to use.
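To give a feel for how easy the output array is to use, here is a minimal sketch in Python (the actual example is a LabVIEW VI, so the function name, gains, and units below are purely illustrative assumptions) that turns the X offset and distance into simple aim-and-drive commands with proportional control:

```python
# Hypothetical sketch, not the actual FRC VI: consume a target's
# normalized X/Y offsets plus a distance and produce turn/drive
# commands. Names, gains, and units are assumptions for illustration.

def aim_commands(x, y, distance, desired_distance=3.0,
                 turn_gain=0.5, drive_gain=0.4):
    """x, y: target offsets in [-1, 1]; distance: meters to target.

    Returns (turn, drive) motor commands, each clamped to [-1, 1].
    """
    # Proportional turn: steer toward the target's horizontal offset.
    turn = max(-1.0, min(1.0, turn_gain * x))
    # Proportional drive: close the gap to the desired shooting range.
    drive = max(-1.0, min(1.0, drive_gain * (distance - desired_distance)))
    return turn, drive
```

Once the camera code hands you that array each frame, feeding two of its fields into a loop like this is most of the work; tuning the gains on your own robot is the calibration step.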
Quote:
Originally Posted by Jon Stratis
Add to that the difficulty of hitting a target - see all the low scores we have this year - and the exponentially more difficult job of hitting a moving target with any sort of object...
As for the difficulty, are you saying FIRST needs to dumb down the challenges? Seriously?
Why lower the bar? If you don't set it high, how can you possibly expect these kids to give their best? Even if they don't reach the bar, they make huge strides trying to do so. Additionally, why not reward the more skilled teams for doing the hard work?
I'll admit, hitting a moving target is no simple task. We were the only team to achieve it at the Sacramento Regional while playing Lunacy. Once we proved we could do it, we turned it off. Why? There was no benefit to shooting in Autonomous. If you made one out of seven Moon Rocks into a trailer, you more than likely dropped six on the floor. So instead, we let the robot track down a target and line up on it. Once Teleop started, we unloaded and usually got five out of seven.
When we attended CalGames that year, scoring in Autonomous was rewarded with bonus points. So we re-enabled it and successfully scored again. At 10 points per Moon Rock, hitting just one made it all worth it!
So, is vision tracking worth the effort? I emphatically say YES! Is it appropriate every time? That is a decision each team needs to make based on their understanding, skills, and game-play strategy.