Are vision systems worth it?

Our team has been using and improving our vision system over the past couple of years. We have had very good success with it (when tuned properly), but it has not been the most reliable system. We seem to have to re-tune it every time we get on the playing field because the lighting changes (from windows, big screens, etc.).

We are thinking about not using a vision system this year because of the hassle. I just wanted to know what everyone else thinks.

I think the value of a vision system greatly depends upon the game.

Focusing on autonomous alone…
2011: You were probably better off with gyros and range finders.
2012: Very important for a great autonomous (see FRC341).
2013: Very helpful for centerline discs (see FRC987), but if you didn’t collect any discs off the ground it wasn’t necessary.

It depends on the game, but it also depends on your ability to make it work. In past years, we’ve spent a lot of time trying unsuccessfully to get it to work. It wasn’t entirely wasted time, though, because some students learned a lot from the attempts.

Follow the guidelines in the post below, and your vision code will be much less sensitive to the environment.
http://www.chiefdelphi.com/forums/showthread.php?p=1248042&highlight=camera#post1248042
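
For what it’s worth, the usual advice along those lines is to lock the camera to a fixed, low exposure and brightness (with an LED ring around the lens) so the retroreflective tape is by far the brightest thing in the image; once you do that, a single set of color thresholds tends to hold up from venue to venue. A rough sketch of that kind of threshold-and-contour step using OpenCV’s Java bindings, with made-up HSV bounds you would still tune once, might look like this:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class TargetThreshold {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Made-up HSV bounds for a green LED ring on retroreflective tape.
    // With exposure/brightness locked low, these should only need tuning once.
    private static final Scalar LOWER = new Scalar(60, 100, 100);
    private static final Scalar UPPER = new Scalar(95, 255, 255);

    public static List<MatOfPoint> findTargetContours(Mat bgrFrame) {
        // Convert to HSV so the threshold is mostly about hue, not brightness.
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        // Binary mask of pixels inside the tuned color range.
        Mat mask = new Mat();
        Core.inRange(hsv, LOWER, UPPER, mask);

        // Each remaining blob is a candidate target.
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours;
    }
}
```

The point is that once the exposure is fixed, the only thing left to fight with is the threshold numbers, not the venue lighting.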

We have played with a lot of these settings. We also went to an IR camera last year, but still ran into tuning problems. Do most teams tune their camera before each match?

The good news is that each year, the vision system released at kickoff is better and easier to use than the previous year’s.

The bad news is that as of 2013, the vision system is still really complicated and requires dedicated mentors and/or unusually experienced students (and a lot of time) to really be able to use it effectively. Even then, it is not like the vision system is the most important part of most robots.

341’s 2012 robot used camera-based targeting to aim every single shot we took during the season. But there were plenty of other teams that (in teleop at least) were just as accurate and quick using manual aiming. Sometimes quicker. We heavily emphasized the vision code since it was a requirement to pull off our ambitious autonomous strategy (which really drove the design of the entire robot, from drive to intake to hopper to shooter). Once we had it working pretty well in autonomous, we didn’t have to change anything to use it in teleop as well. But if autonomous hadn’t had such a huge impact on scoring that year, we likely would not have worked as hard as we did on the vision code.

I strongly recommend looking at the 341 vision code I released last year (and I promise that some day soon I’ll release the 2013 version with “point and click” calibration) and see if you can make sense of what is being done. If you have the time, try it on a prototype or old robot. Feel free to ask questions about it. But I would strongly recommend that unless software is a particular strength of your team, keep it “out of the critical path” and have a fallback plan in case it just isn’t ready on time.
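
To give a flavor of what the targeting step boils down to (this is not the actual 341 code, just a generic illustration): once you have the target’s centroid in the image, the horizontal aim error is roughly a linear map from pixel offset to degrees, which you can feed to your turret or drive controller. The resolution and field-of-view numbers below are placeholders for whatever your camera actually is.

```java
public class AimCalculator {
    // Placeholder camera constants -- substitute your own resolution and FOV.
    private static final double IMAGE_WIDTH_PX = 640.0;
    private static final double HORIZONTAL_FOV_DEG = 47.0;

    /**
     * Rough heading error in degrees from the target centroid's x position:
     * negative means turn left, positive means turn right. This is the
     * small-angle linear approximation; taking the atan of the pixel offset
     * over the focal length (in pixels) is more exact near the image edges.
     */
    public static double headingErrorDegrees(double targetCenterX) {
        double offsetPx = targetCenterX - IMAGE_WIDTH_PX / 2.0;
        return offsetPx * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX);
    }
}
```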

Right; this year, while some really good teams used targeting, I’d argue most consistently good teams used the pyramid or their drivers for alignment.

I’d also like to point out that humans are almost always more intelligent than an algorithm. :wink:

Our team’s strategy in 2012 initially required lots of vision targeting and feedback/closed-loop systems. We felt confident we had the mentor-student experience, and we had good results on our practice field. However, come our regional, we had lots of problems (both with software and with actual part failures) and ended up learning some hard lessons.

<Moral>

As with any subsystem, part, or component of the robot (mechanical, software, or otherwise), don’t let its ability to function decide whether or not you are able to execute your strategy. Always build in redundancy, overrides, and, in case of a cascading failure, the ability to replace it quickly.

</Moral>

I don’t know exactly how that moral applies to your particular system, but there’s my $0.02.

What programming language did you use, and did you do it on the cRIO?

We used Java on the cRIO that year.

To my knowledge, 341 did their processing off the cRIO, on their driver station, using a SmartDashboard plugin (Java). I don’t know what language they used on their robot.

341 in 2012 used SmartDashboard and OpenCV in Java for their vision. You can get it here.
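
In case the off-board architecture isn’t clear: the dashboard plugin grabs the camera stream on the driver station laptop, does the OpenCV work there, and publishes the result (e.g. a heading error) back to the robot over NetworkTables, so the robot code just reads a number. A minimal sketch of the robot-side consumption, with a made-up key name (and noting that the exact SmartDashboard/NetworkTables calls have changed across WPILib versions), might be:

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class VisionAim {
    // "targetHeadingError" is a made-up key; it just has to match whatever
    // the dashboard-side plugin publishes. 0.0 is the fallback value if the
    // laptop hasn't reported anything yet.
    public double getHeadingError() {
        return SmartDashboard.getNumber("targetHeadingError", 0.0);
    }
}
```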