Is Aim Assistant Worth It? Are You Using It?

Hello,

I am curious if you and your teams have decided to use some sort of image-processing aim assistant to help you get the ball into the high goal. We are currently contemplating adding such functionality, but is it worth it, and do you guys/gals think it will make a difference in competition?

If it works, it’s definitely worth it. Saves a lot of time and boosts your accuracy.

Basically what Magic said; if it works, it’s worth it. :slight_smile:

The thing that separates the good teams from the amazing ones is human error. Essentially, the more consistent you can get your robot, the better it does. At a regional, if your robot scores 150 points alone in 1 of 10 matches, will top-tier teams really pick you? How about if you score 80 points in 11 of 12 matches? Reducing human error as much as possible boosts the effectiveness of the robot.

My team has attempted to use one type of automated vision targeting or another for the past 10 years with absolutely zero success. The only successful goal shooting robots we’ve made were designed to shoot from a fixed point on the field (usually with the robot against a solid object) to ensure consistency. :rolleyes:

It is tough, and if you do not have the basics in place by now, it may be difficult to get it in place in time. If you have a practice bot, then you can continue to refine it before competition. If you don’t, then I would not devote any time to it.

BTW: Vision targeting is the only way you are going to reliably shoot goals in Autonomous (unless you are the spy bot).

During teleop, the easy solution is to put the camera image on the Driver Station, place a piece of clear plastic on the screen, and draw a “+” where the boulder will go. The driver then has to position the robot so the “+” is centered in the goal.
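
If you’d rather not tape plastic to the screen, the same trick works in software: draw the crosshair on each frame before it goes to the dashboard. Here’s a minimal sketch assuming OpenCV 3’s Java bindings; the aim-point coordinates are placeholders you’d calibrate by test-shooting from your usual spot:

```java
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class CrosshairOverlay {
    // Calibrated aim point in image coordinates (made-up values; find
    // yours by test-shooting and marking where the boulder actually goes).
    private static final int AIM_X = 320;
    private static final int AIM_Y = 180;
    private static final int ARM = 15; // half-length of each crosshair arm, px

    /** Draws a green "+" on the frame in place before it is streamed. */
    public static void drawCrosshair(Mat frame) {
        Scalar green = new Scalar(0, 255, 0);
        Imgproc.line(frame, new Point(AIM_X - ARM, AIM_Y),
                new Point(AIM_X + ARM, AIM_Y), green, 2);
        Imgproc.line(frame, new Point(AIM_X, AIM_Y - ARM),
                new Point(AIM_X, AIM_Y + ARM), green, 2);
    }
}
```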

Try to think about ways you can help your drivers aim without using some complex system to do it. Something like a really bright flashlight.

Thanks for the input, guys. How do you think it will work with other robots playing defense on you? My initial thought was to use a camera with a dot instead of an auto-targeting system, to allow the driver more dynamic shooting. I think it will be common to shoot while moving, and auto-aiming might get in the way of that and make you easier to defend.

Shoot while moving? I’m pretty sure it would be worth it to take that extra 2 seconds to line up and not miss your shot. In 2014 we had an ultrasonic sensor give us the distance from the wall, and our shooter would adjust itself. Combine that with a camera to auto-center on the target and you’re on a very good path.
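
The distance compensation doesn’t have to be fancy, either. Here’s a sketch of the idea in Java; every number in the table below is a placeholder you’d fill in by test-firing at measured distances:

```java
public class RangeCompensator {
    // {distance in meters, shooter wheel speed in RPM} - calibrate per robot.
    private static final double[][] TABLE = {
        {1.0, 2400},
        {2.0, 2900},
        {3.0, 3500},
        {4.0, 4200},
    };

    /** Linearly interpolates a shooter speed for the measured distance. */
    public static double speedFor(double distanceMeters) {
        if (distanceMeters <= TABLE[0][0]) {
            return TABLE[0][1];
        }
        for (int i = 1; i < TABLE.length; i++) {
            if (distanceMeters <= TABLE[i][0]) {
                double t = (distanceMeters - TABLE[i - 1][0])
                         / (TABLE[i][0] - TABLE[i - 1][0]);
                return TABLE[i - 1][1] + t * (TABLE[i][1] - TABLE[i - 1][1]);
            }
        }
        return TABLE[TABLE.length - 1][1]; // beyond the calibrated range
    }
}
```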

The programming team on my team has already written code that auto-aims a practice turret we built for them, and the build team already has designs on how to implement it. But the team is unsure if we should do it. How does being on the field with defensive robots affect the practicality of an auto-aiming shooter?
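
For context, the core of that auto-aim loop is roughly this (a simplified sketch, not our actual code; getTargetOffsetDegrees() stands in for whatever horizontal offset your vision pipeline reports, and the gains need tuning on real hardware):

```java
public class TurretAimer {
    private static final double K_P = 0.03;         // proportional gain, tune
    private static final double DEADBAND_DEG = 0.5; // "close enough" window
    private static final double MAX_OUTPUT = 0.5;   // cap the motor command

    /** Converts a vision offset (degrees) into a turret command in [-1, 1]. */
    public static double aimCommand(double targetOffsetDegrees) {
        if (Math.abs(targetOffsetDegrees) < DEADBAND_DEG) {
            return 0.0; // on target - safe to shoot
        }
        double output = K_P * targetOffsetDegrees;
        return Math.max(-MAX_OUTPUT, Math.min(MAX_OUTPUT, output));
    }
}
```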

Vision systems have caused our programmers to pull their hair out and have put the build team behind. We’re doing it anyway, because we are not going to be able to see the other side of the field, and we plan on not doing low goals.

I am always wary about putting effort into vision targeting because it is very time consuming to get working both quickly and accurately. In addition, every year there seems to be a way to get by without it. It is a complex solution for what is usually a simple problem.

Let’s take 2012, for example. Everyone was saying vision targeting was required in order to be accurate, especially from the key. 3322’s programming team spent days on end getting the vision targeting working, but what we realized is that the odds were against us. It takes a long time to process the image and get a heading, then more time to PID your drivetrain or turret to turn to that heading. Then, if you want to be sure you’ve lined up correctly, that’s more processing time. In the end, we scrapped vision targeting entirely because our driver had a knack for lining up correctly.

Our season turned out great! We had one of the most accurate shooters at each one of our events. One of the reasons was that our programming team put some very well-invested time into tuning our shooter’s PID algorithm so its range was very consistent.
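
Not our actual code, but the idea behind a consistent-range shooter is roughly this: a feedforward term does most of the work and a proportional term trims out the remaining error, so the wheel recovers to the same speed shot after shot. The gains below are made-up placeholders:

```java
public class FlywheelController {
    private static final double K_F = 1.0 / 5000.0; // feedforward: full output near 5000 RPM
    private static final double K_P = 0.0005;       // proportional gain, tune on the robot

    /** Returns a motor command in [0, 1] for the target and measured RPM. */
    public static double calculate(double targetRpm, double measuredRpm) {
        double error = targetRpm - measuredRpm;
        double output = K_F * targetRpm + K_P * error;
        return Math.max(0.0, Math.min(1.0, output));
    }
}
```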

I heard the same kind of story from 67. They had vision targeting working, used it in some actual matches, and decided that they didn’t need it because their driver was faster at lining up.

I know that the teams that are good at vision targeting like to increase their processing power by using an extra device on their robot, like a laptop. To me, that seems like too much effort for not enough reward. Why not have your programmers whip out some amazing autonomous modes, for instance?

Actually, I think a lot of the elite teams have programmers that do everything - multiple accurate, high-scoring autonomous modes that incorporate vision targeting. But for the middle-resource teams, I would definitely recommend prioritizing pretty much anything else.

You can probably tell this is one of my “hard-line, won’t-budge, cranky old man” opinions. I am like that, even though I’m only 23. But I am speaking in the capacity of someone who has experience on powerhouse teams and successful middle-tier teams. Everyone has different experiences in FIRST, and I am sharing mine.

An argument I like to make is: The last year vision targeting was absolutely necessary to complete a game task was 2007. Ever since then, teams have been doing great without it.

I was looking at 2012 scouting data from the Michigan State Championship, and I was surprised at how inaccurate the teams were. Overall teleop shooting accuracy was around 60%. The two teams John discusses in this post were both in the top 5 for teleop shooting accuracy (close to 80%).

Admittedly, this year’s shooting challenge is tougher because of the vision obstructions. But for the typical team, I’d agree with John and rely on simpler aids (a camera with overlaid crosshairs, a photon cannon, etc.).

I don’t know about that :slight_smile:

When I was a student on 968, we figured out that a purposely, poorly-tuned heading controller typically shimmies the keeper tube into place. It’s the same one that we ran at IRI for the 6-keeper match!

Another vision challenge bites the dust!

I can say this for certain: 1538 doesn’t have computer vision experts, but we have a bunch of folks who are well versed in control theory. We’ll stick to what we know best.

Work within your material and technical capabilities.