Vision tracking w/ water cannon vs squirrel

It's 30 minutes long, but an interesting talk on vision tracking and targeting squirrels. Perhaps FIRST will have a game with moving targets that we could use this on.

From Gizmodo

The last game to use moving targets was Lunacy…

I would like to try a game with moving targets though, because Lunacy was before my time.

You wouldn’t be saying that if you played Lunacy… a lot of the time, human players scored more points than their robots did!

At regionals each year, you can usually count on one hand the number of teams that have some sort of autonomous targeting using the vision targets. Most of the teams with cameras simply have them there so the drivers can line the robot up manually on the computer screen. I don’t think FIRST needs to go making vision tracking harder when most teams haven’t successfully done it as it is!

Add to that the difficulty of hitting a target (see all the low scores we have this year) and the exponentially more difficult job of hitting a moving target with any sort of object…

Depends: Is the target moving in a known and predictable pattern, or is it random?
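If it’s predictable, the aiming math isn’t actually that bad: given a position and velocity estimate for the target and a known projectile speed, you can solve for the intercept point. Here’s a minimal sketch of that calculation in Python; the function name and the 2D setup are purely illustrative, not anything from the FRC libraries:

```python
import math

def intercept_point(target_pos, target_vel, projectile_speed):
    """Where to aim so a projectile fired now at constant speed meets a
    target moving at constant velocity. Positions/velocities are 2D
    tuples in consistent units (e.g. meters, m/s) relative to the
    shooter. Returns None if the projectile can never catch the target."""
    px, py = target_pos
    vx, vy = target_vel
    s = projectile_speed

    # |p + v*t| = s*t  =>  (v.v - s^2) t^2 + 2 (p.v) t + p.p = 0
    a = vx * vx + vy * vy - s * s
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py

    if abs(a) < 1e-9:          # target and projectile speeds nearly equal
        if b >= 0:
            return None        # target never gets any closer
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None        # no real solution: can't catch up
        times = [(-b - math.sqrt(disc)) / (2.0 * a),
                 (-b + math.sqrt(disc)) / (2.0 * a)]
        times = [t for t in times if t > 0]
        if not times:
            return None
        t = min(times)         # earliest future intercept

    return (px + vx * t, py + vy * t)   # lead the target: aim here
```

For example, a target 5 m out crossing at 2 m/s, with a 10 m/s projectile, needs roughly a one-meter lead. If the target moves randomly, though, there is no intercept to solve for; the best you can do is aim where it is now and hope, which is what would make that kind of game so much harder.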

I’m getting nervous…

A game that has arcade style shooting with large targets :eek:

I have to say, I find these statements sad, fairly true, and at the same time off base.

Jon, your point that few teams use vision tracking is true, yet it shouldn’t be. Vision tracking is not all that difficult to achieve when you consider the tools FIRST has provided each year. With Lunacy, the two-color tracking was already there; all you needed to do was integrate it and calibrate it to your system.

Breakaway had the Circular target tracking and alignment set up for you. Again, all you needed to do was integrate it and calibrate.

Logomotion was a bit different in that the packaged code leaned toward line following.

This year, the Rectangular vision processing example is an easy VI to integrate; just follow the directions right in the VI. Calibration is not all that difficult, and integrating the output array is also fairly easy: it provides X, Y, and Z (distance) data for you, in a format that is easy to use.
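The actual example ships as a LabVIEW VI, so for anyone curious what’s under the hood, here’s a rough Python/OpenCV sketch of the same idea: threshold the ring-light color, take the largest blob’s bounding box, and estimate distance from its apparent width. The HSV bounds, field of view, and target width below are placeholders you would have to calibrate for your own camera and target:

```python
import cv2
import numpy as np

# Placeholder values -- calibrate these for your own setup.
HSV_LO = np.array([50, 100, 100])   # lower bound for ring-light green
HSV_HI = np.array([90, 255, 255])   # upper bound
TARGET_WIDTH_IN = 24.0              # real width of the rectangle, inches
HORIZ_FOV_DEG = 47.0                # camera's horizontal field of view

def find_target(bgr_frame):
    """Return (x_offset, y_offset, distance_in) for the largest bright
    blob in the frame, or None. Offsets are -1..1 from image center."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LO, HSV_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    if not contours:
        return None

    biggest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(biggest)
    img_h, img_w = bgr_frame.shape[:2]

    # Normalized offsets from image center: handy inputs for a steering loop.
    cx = (x + w / 2.0 - img_w / 2.0) / (img_w / 2.0)
    cy = (y + h / 2.0 - img_h / 2.0) / (img_h / 2.0)

    # Distance from apparent width: the target's pixel width as a
    # fraction of the frame gives the angle it subtends (a flat-on
    # approximation), and trig gives the range.
    half_angle = np.radians((w / float(img_w)) * HORIZ_FOV_DEG / 2.0)
    distance = (TARGET_WIDTH_IN / 2.0) / np.tan(half_angle)
    return cx, cy, distance
```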

As far as the difficulty goes, are you saying FIRST needs to dumb down the challenges? Seriously?

Why lower the bar? If you don’t set it high, how can you possibly expect these kids to give their best? Even if they don’t reach the bar, they make huge strides trying to do so. Additionally, why not reward the more skilled teams for doing the hard work?

I’ll admit, hitting a moving target is no simple task. We were the only team to achieve that at the Sacramento Regional while playing Lunacy. Once we proved we could do it, we turned it off. Why? There was no benefit to scoring in Autonomous. If you made one out of seven Moon Rocks into a trailer, you more than likely dropped six on the floor. So we let the robot track down a target and line up on it; once Teleop started, we unloaded and usually got five out of seven.
When we attended CalGames that year, scoring in Autonomous was rewarded with bonus points, so we re-enabled it and successfully scored again. At 10 points per Moon Rock, hitting just one made it all worth it!

So, is vision tracking worth the effort? I emphatically say YES! Is it appropriate every time? That is a decision each team needs to make based on their understanding, skills, and game-play strategy.

That was the easy part. The hard part was figuring out how to make it work under the lights in the arena… at which point we gave up.

No, not dumb it down. Providing vision targets and software to help with them is great, and it gives teams a chance to work with those tools and raise themselves to a new level. However, do we have to raise the bar even higher when a majority of teams haven’t cleared it where it currently stands?

What would be the point of setting a goal in a competition that no one could reasonably reach (or, in your case and my team’s in Lunacy, one you got working only to find out that manual control was more reliable)? It gives the wrong impression and discourages students. Once we get to a point where half the teams can successfully work with the vision targets, then it’s time to raise the bar and make it more difficult. Until then, you would only be increasing the gap between the teams that can and those that can’t.

Figuring that out is an issue every year. One major key is understanding how lighting differences affect your calibration. Once you understand that, calibration at your school or shop will be easy; then just modify the calibration once you get to the field for its lighting conditions. Typically the only compensations you might need are for exposure and white balance.
As an example: this year it took all of 10 minutes on the field to lock in the white balance and then fine-tune the color and target detection.

Making sure you take advantage of the time provided on the field at each event to calibrate for its lighting is vital to having a reliable tracking system.
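For teams not doing this through the LabVIEW dashboard, the knobs in question look something like the sketch below, using OpenCV’s capture properties as a stand-in. Whether a given camera actually honors these settings is driver-dependent, and the numeric values are placeholders you would find on the field:

```python
import cv2

# Lock down auto-exposure and auto-white-balance so the thresholds you
# calibrated in the shop still hold under the arena lights.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_AUTO_WB, 0)            # disable auto white balance
cap.set(cv2.CAP_PROP_WB_TEMPERATURE, 4500)  # color temp found on-field
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)      # manual mode; the meaning of
                                            # this value varies by driver
cap.set(cv2.CAP_PROP_EXPOSURE, -6)          # short exposure favors the
                                            # bright target return
```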

Yeah…sounds good…we spent quite a bit of time on the field trying to get it working, with the help of the NI rep, and didn’t get anywhere.

I think this year’s reflective system is much better than the Lunacy system. The cylinder was pretty much impossible to calibrate under harsh lighting. So, going forward, I think the GDC now knows enough about it to make vision tracking achievable.

I couldn’t agree more about the retroreflective tape. The only other targets we have been given that were anywhere near as trackable were the green lights in Aim High.
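What makes the retroreflective tape (and those Aim High lights) so tractable is that, with a ring light and a short exposure, the target return is far brighter than anything else in the frame, so a plain brightness threshold does most of the work before you ever look at color. A toy sketch, with a placeholder cutoff value:

```python
import cv2

def retro_mask(bgr_frame):
    """With a bright ring light and the exposure turned down, the
    retroreflective return dominates the frame, so a simple brightness
    threshold isolates it. The 200 cutoff is a placeholder to tune."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    return mask
```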