#7, posted 21-12-2013, 11:07 by magnets (Registered User, no team; joined Jun 2013, rookie year 2012, United States)
Re: [Ri3D] Help BOOM DONE. Rpi Camera Tips and Tricks

Quote:
Originally Posted by Joe Johnson
I hear you with regard to vision. I have seen many teams focus on vision only to never get it on the robot because they didn't make weight or their main program (i.e. the one that allows the robot to MOVE) never gets done (or done right).

That said, our goal is a robot that would likely be playing after lunch on Saturday. At least for the past two years, one way to do that has been using vision to help locate the goals. Admittedly it is hardly the ONLY way, but it has certainly been part of some teams' strategies for success.

Are there pitfalls in this approach? Certainly. But we hope to avoid those traps and model ways for others to avoid them as well.

For example, the ME team knows it has to support the CS team by giving them surrogates early in the 72-hour countdown so that they can develop code before they get THE robot. What is more, they have to deliver a robot that is essentially ready to code with enough ticks left on the clock to allow for the final code deployment.

I think that 72 hours with BOOM DONE.'s talent is a reasonable surrogate for what most teams can get done in the 6 weeks. Time scales pretty well. So if teams should give their coders at least a week, then we should have the robot to the coders with 12 hours on the clock or we're doomed. The ME team is shooting for twice that: 24 hours.

Lofty goals. Let's see how it turns out.

Dr. Joe
In 2013, most of the great shooters didn't use a camera. It's really easy to drive the robot up against the pyramid and shoot. We trained our driver to judge the shot distance without using the pyramid in under two hours, and it takes more than two hours to program a vision system.

That being said, in 2012 an auto-targeting routine could be really valuable, but it was time consuming. The team I was with that year had an incredible software group, and let me tell you, it wasn't easy. The vision work was student led (by one of the brightest students I've ever met), but the five programming mentors (two of whom design optical-recognition quality-control systems professionally, and another who uses camera feedback to align a robotic arm) agreed that it would have taken any one of us well over a week to do alone. We had 3 students plus 2 mentors working on the vision system every build meeting (>35 hours per week) for the entire build season; the fully functional vision system first worked in week five, and the tuning for the PID gains, bandwidth, and image size/quality/exposure wasn't settled until a week after build ended.

However, I think that for a group of reasonably experienced programmers, a vision system of sorts could be written in three days. A second team I helped in 2013 decided to do a vision system but had only one student and one mentor for programming. So they took the example LabVIEW vision code, ported it to run on a computer instead of the robot controller (less processing latency), and transmitted the values back using NetworkTables. A high-school sophomore was able to do this by himself in 4 hours and got the turret on their 2012 robot tracking a target perfectly.
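The pattern that team used is worth spelling out: the heavy image processing runs on the driver station laptop, and only the small result (say, a target angle) travels back to the robot. They used NetworkTables for the transport; the sketch below uses plain UDP over loopback as a stand-in so it runs without any FRC libraries, and the JSON message format is invented for illustration.

```python
# Offboard-vision pattern: laptop processes images, robot receives only the
# computed result. NetworkTables did this job in FRC; UDP is a stand-in here.
import json
import socket

# "Robot" side: listen for targeting data (OS picks a free port for the demo).
robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
robot.bind(("127.0.0.1", 0))
robot.settimeout(2.0)
robot_addr = robot.getsockname()

# "Driver station" side: pretend vision found the goal 4.2 degrees to the left.
laptop = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
laptop.sendto(json.dumps({"target_angle_deg": -4.2}).encode(), robot_addr)

# Robot reads the tiny payload instead of processing any images itself.
data, _ = robot.recvfrom(1024)
result = json.loads(data)
print(result["target_angle_deg"])  # -4.2

robot.close()
laptop.close()
```

The appeal of this split is exactly what the post describes: the robot-side code stays trivial, and all the fragile, CPU-hungry work lives on hardware you can debug with a keyboard and screen.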

My advice: before you dismiss driver-station vision processing or onboard (cRIO) processing as too slow, run some trials. Do you really need to process 15 fps, or can you grab one image and figure out the angle from it? If you need only one image, use the cRIO; if you need live tracking at up to 20 fps, try driver-station vision processing. Miss Daisy (341) had what was, in my opinion (and many others'), one of the most accurate auto-aiming systems, and they just processed images on their driver station laptop.
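The "grab one image and figure out the angle" case really is just a little trigonometry once you have the target's centroid pixel. A sketch of that math, assuming a 320x240 image and a 47-degree horizontal field of view (roughly that of the Axis M1011 camera common in FRC at the time; both numbers are assumptions for illustration):

```python
# One-shot aiming: convert a target centroid's x-pixel into a heading error
# using the pinhole camera model. Resolution and FOV below are assumed values.
import math

IMAGE_WIDTH = 320
HORIZONTAL_FOV_DEG = 47.0

# Effective focal length in pixels, derived from the horizontal field of view.
FOCAL_PX = (IMAGE_WIDTH / 2) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))

def heading_error_deg(target_x_px):
    """Degrees the robot must turn; negative means target is left of center."""
    offset = target_x_px - IMAGE_WIDTH / 2
    return math.degrees(math.atan2(offset, FOCAL_PX))

print(round(heading_error_deg(160), 2))  # centered target -> 0.0
print(round(heading_error_deg(240), 2))  # target right of center, positive
```

Since this only needs a single frame, frame rate stops mattering, which is why one-shot targeting was feasible even on the cRIO.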

Besides being more expensive and more difficult, adding a Raspberry Pi makes your robot more complicated and prone to silly bugs and glitches. The Robonauts in 2012 didn't move on Einstein at all because the network buffer from the computer on their robot overflowed. Every extra component adds thousands of ways for the system to fail, and a competitive FRC robot keeps its possible points of failure to a minimum.