How many Robots at the nationals had an operational Camera Track Autonomous?
Steelix4532
21-04-2009, 22:46
How many Robots at the nationals had an operational Camera Track Autonomous?
I know that 2056 tracked and scored in autonomous, and 79 used the camera to set up in front of an opposing trailer. Outside of autonomous, I know that several teams used the camera for their turret, such as 1114, 217, 1902, 25, 40; the list goes on.
rogerlsmith
21-04-2009, 23:04
904 had camera tracking in autonomous, but chose not to use it in Atlanta, as far as I know.
They scored in autonomous twice at the Traverse City District event (week 1), but decided not to use it much after that. The problem was that the robot would drive forward, then stop and look for a target. If it found something quickly, there wasn't a problem. If it didn't find anything, it was a sitting duck. They decided it was better not to get scored on than to try to get lucky and maybe score.
Akash Rastogi
21-04-2009, 23:10
Team 56 on Galileo was tracking perfectly every single match. I think 494 was the other robot that tracked perfectly and scored on us in autonomous. :yikes:
Chris Hibner
22-04-2009, 08:38
70 and 494 both had camera tracking, autonomous scoring.
Jared Russell
22-04-2009, 08:45
Actually, Paul Copioli has written elsewhere that 217 seldom-to-never used the camera during actual matches. Which makes their accuracy that much more impressive.
sdcantrell56
22-04-2009, 09:02
We had tracking in autonomous that would score if it was touching an opposing trailer and if not, it would line us up for teleop. We also had full distance and angle tracking of our turret in teleoperated mode.
From my viewpoint scouting in the stands, it looked like team 368 on Newton was using camera tracking with great success. They were a really awesome robot, able to deliver tons of balls accurately and to pick up and score super cells quickly!
Francis-134
22-04-2009, 09:18
Team 40 tracked with the camera with great success, scoring 6+ balls in auto many times on Galileo. Team 190 also used the camera in auto to find and track to a trailer. We decided it was to our advantage not to shoot the balls until after autonomous, so as to be 100% sure that we were in a good position to shoot.
darkmessenger88
22-04-2009, 09:26
Team 1318 had camera tracking, and it worked excellently in Portland (7 balls scored in several matches) and pretty well in Seattle. We had a few issues with it at the Championship, and of course it is much harder to have an awesome camera tracking mode when most of the robots have a spinning defensive autonomous.
However, we did do some fun stuff with our camera. For example, we have video from all our Saturday matches, recorded from the Ethernet camera, complete with the vision code's target bounding box displayed. It makes for some interesting post-match analysis :)
Magnechu
22-04-2009, 10:33
We had pretty decent camera code. In probably two-thirds of our matches it would find a trailer and line up with it, and we would get the first 7 balls in right when teleoperated began. That gave us time to go deliver an Empty Cell while some balls were dropped on the floor for us to collect.
Chris is me
22-04-2009, 11:52
Our robot had a tracking autonomous written that would lock on and fire at a trailer, but we never used it in a competition match. We did have the turret aim at a trailer at the start of one match, though, so we could drive forward and pull the trigger.
Track-and-follow autonomous was something I wanted to look into for Atlanta, but the way our programmers wrote the code, they said it would cut across methods or something.
We mostly never used the camera, as our manipulator driver was amazing at lining up shots.
Tom Line
22-04-2009, 12:04
I'd like to see a breakdown of who used what code to track - C++ vs. LabVIEW - and how effective they were comparatively.
Actually, Paul Copioli has written elsewhere that 217 seldom-to-never used the camera during actual matches. Which makes their accuracy that much more impressive.
This is the same with Team 1114. Although the camera was on board this season, it never quite reached the point where active camera tracking was more effective than our human operator. In Atlanta there were many factors causing problems, most notably the changing light conditions in the dome as clouds passed over the stadium.
Paul Copioli
22-04-2009, 12:42
217 never used the camera in an actual match the entire season. We were just not satisfied with the results. Our human aim was much faster and more accurate.
188 used the camera to feedback to a set of crosshair LEDs mounted on the turret operator's controls.
Even these LEDs were seldom used; they were only required when the view of our turret was completely obstructed.
We had a camera tracking PID loop available, but it never performed anywhere near the same level as our operator manually controlling the turret and hood. Auto-tracking was triggered by a button on the controls that was never touched at any of our competitions.
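For anyone curious, the crosshair-LED feedback can be sketched roughly like this (illustrative C++, not our actual code; the LED output is a stub):

// Sketch only -- maps the camera's horizontal target offset onto a row of
// crosshair LEDs so the operator knows which way to slew the turret when
// the view is blocked.
#include <cstdio>
#include <cmath>

const int kNumLeds = 7;  // hypothetical: 7 LEDs, center LED = on target

void SetCrosshairLed(int index, bool on)  // stub standing in for real digital outputs
{
    std::printf("LED %d %s\n", index, on ? "on" : "off");
}

// offset: target's horizontal position in the image, normalized to -1.0 .. +1.0
void UpdateCrosshair(bool targetVisible, double offset)
{
    for (int i = 0; i < kNumLeds; i++)
        SetCrosshairLed(i, false);
    if (!targetVisible)
        return;
    if (offset < -1.0) offset = -1.0;
    if (offset >  1.0) offset =  1.0;
    // Map [-1, +1] onto LED indices [0, kNumLeds - 1].
    int lit = (int)std::lround((offset + 1.0) * 0.5 * (kNumLeds - 1));
    SetCrosshairLed(lit, true);
}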
gorrilla
22-04-2009, 17:36
We used our camera in Florida for tele-op...
not in Atlanta, though... played defense :rolleyes:
artdutra04
22-04-2009, 17:53
We used the camera with varying results for tracking robots in Atlanta in autonomous, but refrained from having the robot actually score (we'd score as soon as teleop started). The lighting and other factors in the Dome really brought the tracking accuracy down, although there were a few matches where our robot was right on the tail of an opposing trailer until the end of autonomous.
Everything on our robot was programmed in C++.
Team Titanium 1986 (Archimedes - long range catapult shooter) had autonomous aim-and-shoot which aimed the turret and measured shot distance. I cannot claim we made any long range autonomous shots, but the bot took a few tries at it. Our code was having trouble recognizing that it was locked onto a target, probably because of the changed lighting conditions from our last regional.
rogerlsmith
22-04-2009, 23:33
I'd like to see a breakdown of who used what code to track - C++ vs. LabVIEW - and how effective they were comparatively.
Me too. 904 programmed in C++.
Kingofl337
23-04-2009, 07:31
Team 40 used the camera for both auto and during matches. We feel the camera could track faster than the drivers could line up. It also allowed us to fire while we were approaching or chasing a target. We took an image capture and saved it to the cRIO every time the robot was commanded to fire. This was done for color calibration after the match.
We used C++
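The capture-on-fire logic was roughly this (a sketch, not our exact code; the camera call is a placeholder for whatever your vision API provides):

#include <cstdio>
#include <vector>

std::vector<unsigned char> GrabCameraJpeg()  // stub: replace with a real camera read
{
    return std::vector<unsigned char>();
}

void OnFireCommand()
{
    static int shotNumber = 0;
    std::vector<unsigned char> jpeg = GrabCameraJpeg();
    if (jpeg.empty())
        return;

    char name[64];
    std::snprintf(name, sizeof(name), "shot_%03d.jpg", shotNumber++);

    // On the cRIO this lands on the controller's file system, so the images
    // can be pulled off after the match for color calibration.
    if (FILE* f = std::fopen(name, "wb")) {
        std::fwrite(&jpeg[0], 1, jpeg.size(), f);
        std::fclose(f);
    }
}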
OK, so now that the season is over, would any of you who programmed in LabVIEW be willing to send me your tracking code? I could never figure out how to track properly.
OK, so now that the season is over, would any of you who programmed in LabVIEW be willing to send me your tracking code? I could never figure out how to track properly.
We were in C++, but our general idea could be adapted to LabVIEW:
You need a gyro and the camera
1) Set up a PID controller to maintain your robot's heading at some set target (sensor input: gyro. setpoint input: some variable, let's call it targetHeading. PID output: a virtual joystick's x axis).
2) Whenever the camera detects a target, set targetHeading to (your current heading + angle offset that the camera sees the target). The robot will point itself at the target.
Having a PID heading controller is also useful if you want to run along the side of the field against the wall to pick up balls. Under human control, robots tend to spin off because of the higher-grip carpet. With a heading controller, the loop automatically adjusts your motors' power levels to maintain orientation relative to the wall, allowing for long runs picking up the balls that gather on the outside of the field.
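A minimal C++ sketch of those two steps (all sensor and drive calls are stubbed, and kP and the names are made up, so tune for your own robot):

// Heading-hold idea from steps 1 and 2 above. Written in C++, but the
// structure maps onto LabVIEW's PID VIs just as well.
#include <cstdio>

double GetGyroHeadingDeg()     { return 0.0; }   // stub: read your gyro here
bool   CameraSeesTarget()      { return false; } // stub: latest vision result
double CameraTargetOffsetDeg() { return 0.0; }   // stub: target's angle in the image
void   ArcadeDrive(double fwd, double turn) { std::printf("fwd=%.2f turn=%.2f\n", fwd, turn); }

double targetHeading = 0.0;  // the setpoint the heading controller holds

// P-only controller standing in for a full PID; its output plays the role
// of the "virtual joystick" x axis.
double HeadingController(double setpoint, double heading)
{
    const double kP = 0.03;  // made-up gain
    return kP * (setpoint - heading);
}

void PeriodicLoop()
{
    double heading = GetGyroHeadingDeg();

    // Step 2: whenever the camera sees a target, re-aim the setpoint at
    // (current heading + offset the camera reports).
    if (CameraSeesTarget())
        targetHeading = heading + CameraTargetOffsetDeg();

    // Step 1: the controller continuously steers toward targetHeading.
    ArcadeDrive(0.5, HeadingController(targetHeading, heading));
}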
gwytheyrn
23-04-2009, 14:09
We had our camera tracking working for our second-to-last match, I think, and we scored 6/7. Our robot had the camera mounted on our turret, so we could estimate the error in angle based on the position of the target in the image. We would add that error to the angle the turret was pointing (found by potentiometer) to get our final error. And since a PID loop really just needs an error term, this skips the need for a gyro, up to a point: it won't remember the target's location.
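Roughly, in sketch form (illustrative names and gain; the real code differed):

// Camera-on-turret loop: a vision frame sets a bearing setpoint, and
// between frames the fast potentiometer closes the loop.
#include <cstdio>

double ImageOffsetDeg() { return 0.0; }  // stub: target offset in the latest frame
double TurretPotDeg()   { return 0.0; }  // stub: turret angle from the potentiometer
void   SetTurretMotor(double power) { std::printf("turret power = %.2f\n", power); }

void TurretLoop(bool newFrame, bool targetVisible)
{
    const double kP = 0.02;           // made-up gain
    static double setpointDeg = 0.0;  // target bearing = pot angle + image offset
    static bool haveTarget = false;

    if (newFrame) {
        haveTarget = targetVisible;
        if (targetVisible)
            setpointDeg = TurretPotDeg() + ImageOffsetDeg();
    }

    if (!haveTarget) {
        SetTurretMotor(0.0);  // no gyro, so no remembered field angle to return to
        return;
    }

    SetTurretMotor(kP * (setpointDeg - TurretPotDeg()));
}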
We had our camera tracking working for our second-to-last match, I think, and we scored 6/7. Our robot had the camera mounted on our turret, so we could estimate the error in angle based on the position of the target in the image. We would add that error to the angle the turret was pointing (found by potentiometer) to get our final error. And since a PID loop really just needs an error term, this skips the need for a gyro, up to a point: it won't remember the target's location.
As I understand your system, your potentiometer is fulfilling approximately the same purpose as our gyro. It is providing a heading of the object that does the shooting. Since we can't use a potentiometer to figure out our robot's heading, we have to use the gyro.
gwytheyrn
23-04-2009, 14:28
I wish that were exactly the case. Unlike using a PID+gyro, our system returns to going straight when there is no target in sight (or continues turning at the same rate, depending on how we want it), whereas with a gyro you can remember an angle and PID to that angle, effectively turning to the last known location.
Otherwise, they work in essentially the same manner.
Team 40 used the camera for both auto and during matches. We feel the camera could track faster than the drivers could line up. It also allowed us to fire while we were approaching or chasing a target.
You guys had an awesome display of autonomous scoring! It was fun to watch it track down and shoot.
We wished that robot scoring during auto had a higher point value.
JoeyTNT280
28-04-2009, 21:09
So I guess most teams that used the camera used C++, but did anyone use LabVIEW? I would love to see LabVIEW code.
Kingofl337
28-04-2009, 22:11
I also noticed that most teams using the camera were C++ teams.
You guys had an awesome display of autonomous scoring! It was fun to watch it track down and shoot.
We wished that robot scoring during auto had a higher point value.
Thank you, I'm glad you enjoyed it. We wish auto had been worth more this year as well. Though WPI will be giving us 10 balls for autonomous at BattleCry.
BrianT103
28-04-2009, 23:19
40 (http://www.thebluealliance.net/tbatv/match/2009gal_qm27) had an extremely impressive auto mode. I remember one match on Galileo where they drove almost the entire length of the field to get to an opposing trailer and then proceeded to drain all 7 balls into it. I agree that getting balls into opposing trailers in auto should have had a higher point value, but I think implementing that would have made the game much more confusing for spectators.
Tom Line
06-05-2009, 09:58
I suspected most teams that were having success with the camera were using C++. We spent a lot of time working with the camera in LabVIEW. Sometimes we could tune it so it worked perfectly; then the slightest change in the lighting would wipe out our progress.
I'm hoping to have time to dissect the LabVIEW code and understand how they're doing it. It's pretty complex. I'm certain the programming language fundamentally shouldn't matter, so it's mainly about comparing the C++ and LabVIEW implementations.
youngWilliam14
30-06-2009, 15:11
I'm not the team programmer, so I don't know anything about our code, but we did score 6 moonrocks in autonomous :cool:
Edit: it only happened once. We were consistently tracking opposing trailers in autonomous, though.
461 had a successful tracking robot and, as mentioned above, scored 6 moonrocks in autonomous during one of the matches. Our programmers keep expanding their skills and are getting really great at getting the robot to do everything we want it to accomplish... they even help other teams out!
Go 461!
ShotgunNinja
03-09-2009, 22:27
I think that it must have something to do with the visual nature of LabVIEW pushing people to use the visual side of the brain, which can get pretty cluttered when trying to conceptualize a camera control system. Not to mention how C++'s OOP paradigm lets you reimplement the camera controller as your own custom class, the way you conceptualize it best. It's like the difference between C and C++, except that in C, programs actually have a better chance of being bug-free than in C++.
P.S. LabVIEW doesn't qualify as a programming language!
Greg McKaskle
07-09-2009, 09:23
I think that it must have something to do with the visual nature of LabVIEW pushing people to use the visual side of the brain, which can get pretty cluttered...
Yeah, that must be it. By that argument, CADing up the robot must be a really bad idea.
P.S. LabVIEW doesn't qualify as a programming language!
Here we go again. Technically you are right, though. LabVIEW is a tool that is used to write G code. G is the language, even though most people call it by the product's name.
If you have learned LabVIEW well enough to criticize it, give it a shot. Explain why it isn't a programming language. While you are at it, explain what a programming language is.
On the vision tracking topic, has anyone started comparing the approaches and determining the key elements that led to success? I did a number of presentations in Atlanta, and I have my list of things.
Greg McKaskle
Chris is me
07-09-2009, 09:57
I believe he was being facetious with that comment about LabVIEW's status as a programming language; he probably doesn't prefer it, though.
I doubt camera-tracking autonomous modes failed for any reason other than varied lighting conditions and lack of incentive; 70 and 494 were the only teams I saw do it, though if I recall correctly 2056 tracked too. If points were worth double in auto or something, I bet you'd see more teams track instead of loading in autonomous, but loading paid off way more.
Nick Lawrence
07-09-2009, 11:36
We worked all year on our vision system, and it worked pretty well. We did not use it during auto. We had a piezo buzzer attached to our operator's gamepad, which would sound during teleop whenever we were relatively close to a trailer we could score on. We also had a button on the pad that, if held, would automatically dump balls once I had driven up to an opposing trailer and locked onto it.
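The teleop logic was roughly this simple (an illustrative sketch, not our actual code; names are made up):

#include <cstdio>

bool VisionLockedOnTrailer() { return false; }  // stub: vision result
bool DumpButtonHeld()        { return false; }  // stub: gamepad button state
void SetBuzzer(bool on)        { std::printf("buzzer %s\n", on ? "on" : "off"); }
void RunDumpBelt(double power) { std::printf("belt %.1f\n", power); }

void TeleopPeriodic()
{
    bool locked = VisionLockedOnTrailer();

    // Buzz the operator's gamepad whenever a scorable trailer is locked on.
    SetBuzzer(locked);

    // Holding the dump button only releases balls while locked on.
    RunDumpBelt((locked && DumpButtonHeld()) ? 1.0 : 0.0);
}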
At nationals, it worked about 80% of the time, when it was sunny out. :rolleyes:
-Nick
Akash Rastogi
07-09-2009, 12:16
Something pretty cool we tried along with 1771 was adding an IR-blocking lens to the camera. The camera values came out much clearer, and tracking did work a little better. With calibration, 1771 had somewhat better luck; we did not, and still chose to load in auto. The lenses were from some special military-grade goggles (I totally forget what they actually were) donated by 1771's sponsor.
192 had an operational camera tracking autonomous at the Silicon Valley Regional, but they weren't at the Championship.
Chris is me
07-09-2009, 13:38
One thing my team wanted to try was putting a polarized lens over the camera. A 1732 mentor suggested it to us in Wisconsin, but shortly thereafter we gave up on the camera (before we could get one and try it).
Nick Lawrence
07-09-2009, 14:03
I know that a couple of teams put a fish-eye lens over the camera, due to the camera's rather limited field of view. It seemed to work pretty well for them.
-Nick
rwood359
08-09-2009, 04:09
192 had an operational camera tracking autonomous at the Silicon Valley Regional, but they weren't at the Championship.
This shows how awesome 192's autonomous was at the Hawaii Regional.
They start in the top right corner, steer toward the center, make a major course correction, and score (just off camera).
http://www.thebluealliance.net/tbatv/match/2009hi_qf1m1
Team 79 used the camera (as posted in the beginning), but it never really worked properly. Then we had a match at the North Star Regional in Minnesota... and once it tracked a trailer, it drove itself the whole match. I couldn't even drive it... quite funny to watch our robot Skynet us, even though it didn't score. :cool:
Greg McKaskle
08-09-2009, 08:38
I never tried an additional IR filter. The lens from the manufacturer has one built in. Do you have any before-and-after images, or values?
As for polarizers, they are useful when the light is polarized; otherwise they are equivalent to a neutral density filter (a gray piece of glass). The atmosphere polarizes sunlight to some degree, so polarizing outdoors is pretty effective for blocking glare more than other light. Indoor lighting is not polarized.
The wide-angle lens helps with seeing more of the field without panning the camera. I saw some teams put lenses in front of the camera; others replaced the lens. If you replace it, be sure the replacement has an IR filter.
Anything else?
Greg McKaskle
martin417
08-09-2009, 13:32
The filter we used was actually a lens from a pair of laser safety goggles. We use IR lasers at work and have safety goggles to prevent eye injury. What we found is that the incandescent lighting that FIRST uses at events is very heavily weighted toward the IR end of the spectrum, so filtering out a bunch of the IR left more room for the colors we wanted to detect.
In addition, we used a fisheye lens to give us a wider field of view.
Greg McKaskle
08-09-2009, 15:16
Do you have any before-and-after images? I'm curious how much saturation boost this gave you.
Greg McKaskle