Vision tracking: Did they get it right?
During the kickoff this morning when Dave Lavery spoke about the vision tracking, he seemed optimistic. He said that he thinks "they finally got it right this time." It's obviously too early to tell, but what do you think? Will the vision tracking aspect of Breakaway be a disappointment (like in past games), or did they get it right this time?
Radical Pi
09-01-2010, 18:31
No way to tell until we get our hands on the system. I assure you I will be begging to try it out on Monday :P
Joe Matt
09-01-2010, 18:32
During the kickoff this morning when Dave Lavery spoke about the vision tracking, he seemed optimistic. He said that he thinks "they finally got it right this time." It's obviously too early to tell, but what do you think? Will the vision tracking aspect of Breakaway be a disappointment (like in past games), or did they get it right this time?
There have been at least two other times when someone has sworn this is the year for vision tracking....
Honestly, I don't see much of an advantage of it for auton. Unless you can get a ball, line up the target, and shoot it 100% of the time.
It's going to be essential for teleop, though. There is no way a driver can manually make a shot without a camera feed on their netbook/driver station. That's where vision will be awesome!
While there is a use in autonomous, it seems as if you can place the balls where you'd want, thus allowing the use of encoders and some dead reckoning to get a shot on target (you know your position, you know the ball's position, and you know the goal's position; it's all a matter of math and distances/timing from there). With regard to teleop... yes, having a live video feed is useful, but that's not necessarily vision tracking, just a camera feed. And as your goal is on your side of the field, homing in on the target is not that big of a deal, in my opinion. It might be useful if you're looking to score in your opponent's goal instead.
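To put some numbers on the "it's all math" part, here's a minimal sketch, where every coordinate is a made-up placeholder rather than a real field dimension:

// Rough sketch of the dead-reckoning geometry: given assumed field coordinates
// for the robot and the goal (in feet), compute the heading and distance for a
// pre-planned shot. Plain C++, not WPILib code.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    double robotX = 5.0, robotY = 10.0;   // assumed starting position
    double goalX  = 0.0, goalY  = 13.5;   // assumed goal position

    double dx = goalX - robotX;
    double dy = goalY - robotY;

    double headingDeg = atan2(dy, dx) * 180.0 / PI;  // angle to face the goal
    double distance   = sqrt(dx * dx + dy * dy);     // how far to drive/kick

    printf("Turn to %.1f deg, target is %.1f ft away\n", headingDeg, distance);
    return 0;
}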
Chris is me
09-01-2010, 20:39
Honestly, I don't see much of an advantage of it for auton. Unless you can get a ball, line up the target, and shoot it 100% of the time.
The reason auton scoring is beneficial, in my opinion, is that the goals will be completely undefended. You might not even need to camera track to score in them.
DonRotolo
09-01-2010, 20:56
Can't say if they got it right, but can say we want to use it this year, IF we can get it to work.
indubitably
09-01-2010, 20:59
Is there going to be some kind of prepared program we can download?
If so can someone please post a link.
The reason auton scoring is beneficial, in my opinion, is that the goals will be completely undefended. You might not even need to camera track to score in them.
Not necessarily. A defensive bot's autonomous mode may be to block a goal.
Chexposito
09-01-2010, 21:07
When they have proven that they can lock onto the target, and given that they're providing us some of the program, I would say yes, they have without a doubt got it a lot better than before.
Radical Pi
09-01-2010, 21:17
Is there going to be some kind of prepared program we can download?
If so can someone please post a link.
It's probably inside the WindRiver update (for the C++ people). Excuse me while I go check for zip files :P
Not necessarily. A defensive bot's autonomous mode may be to block a goal.
<G28> says otherwise:
AUTONOMOUS PERIOD ROBOT Movement - During the AUTONOMOUS PERIOD, a ROBOT cannot completely cross the CENTER LINE.
Can't cross the center line, so you can't block opponent shots.
Not necessarily. A defensive bot's autonomous mode may be to block a goal.
Highly unlikely, per <G28>:
AUTONOMOUS PERIOD ROBOT Movement - During the AUTONOMOUS PERIOD, a ROBOT cannot completely cross the CENTER LINE. Violation: Two PENALTIES; plus two PENALTIES and a YELLOW CARD if a BALL or ROBOT is contacted after completely crossing the CENTER LINE, and two additional PENALTIES for each additional BALL or ROBOT contacted.
When they have proven that they can lock onto the target, and given that they're providing us some of the program, I would say yes, they have without a doubt got it a lot better than before.
Um, there's been some kind of vision demo nearly every year, and they've given us enough code to get started each time. I particularly remember impressive-looking demos in both 2005 and 2006. 2007 featured examples of tracking two lights (2008 dumped the camera in favor of the "Robocoach"), and if I remember correctly, 2009 had a demo of the new Axis camera tracking trailers.
I may be wrong, but I think 9 out of 10 goals will be shot from within 10 feet of the goal. Given that range and the fact that the goal is right next to the drivers, camera-based tracking isn't going to help much.
The reason auton scoring is beneficial, in my opinion, is that the goals will be completely undefended. You might not even need to camera track to score in them.
What I meant by that was that it's pointless to use the camera to score in auton, not that auton scoring is pointless altogether.
I totally agree; you could use encoders to score really effectively.
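Something like the sketch below is really all it would take. Note that the channel numbers, wheel size, and the 4 ft distance are all made up for illustration, not taken from any real robot:

// Rough sketch of encoder-based auton driving (assumed channels and geometry).
#include "WPILib.h"

void DriveToKickingSpot(Encoder &leftEnc, RobotDrive &drive) {
    leftEnc.SetDistancePerPulse(0.5 * 3.14159 / 360.0); // 6 in wheel, 360 counts/rev
    leftEnc.Reset();
    leftEnc.Start();
    while (leftEnc.GetDistance() < 4.0) {   // drive an assumed 4 ft to the ball
        drive.ArcadeDrive(0.5, 0.0);        // straight ahead at half speed
        Wait(0.01);
    }
    drive.ArcadeDrive(0.0, 0.0);            // stop, then fire the kicker here
}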
Radical Pi
09-01-2010, 21:32
Found the code for targeting. According to the comments, lighting should have no effect on it. For you tech-savvy people (isn't that everyone here?), download the WindRiver updater (http://first.wpi.edu/Images/CMS/First/WorkbenchUpdate20100107.exe), rename it to a .zip, and unzip it. In the folder, go to vxworks-6.3\target\src\demo\2010ImageDemo. Target.cpp is the source code for the targeting system.
Looks like they just went with converting the color image to monochrome and looking for the changes there. They even made detecting the circle an API call.
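For anyone curious, the mix-down step is conceptually just a weighted sum of the color channels. A sketch of the idea (illustrative only -- this is not the actual Target.cpp code):

// Convert an RGB image to a single luminance value per pixel, which is roughly
// what "mixing down to monochrome" before the edge/ellipse search amounts to.
#include <cstddef>
#include <vector>

struct RGB { unsigned char r, g, b; };

std::vector<unsigned char> toLuminance(const std::vector<RGB> &image) {
    std::vector<unsigned char> luma(image.size());
    for (std::size_t i = 0; i < image.size(); ++i) {
        // standard Rec. 601 luma weights
        luma[i] = static_cast<unsigned char>(
            0.299 * image[i].r + 0.587 * image[i].g + 0.114 * image[i].b);
    }
    return luma;
}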
Oh, and indubitably, you can find the LabVIEW update here (http://joule.ni.com/nidu/cds/view/p/lang/en/id/1534).
indubitably
09-01-2010, 21:34
Thank you!
Any way I can find the Java camera code? I saw it once somewhere, but can't remember where. Thanks in advance!
http://first.wpi.edu/FRC/frcjava.html
Nevermind.
Chris is me
09-01-2010, 22:12
Not necessarily. A defensive bot's autonomous mode may be to block a goal.
Nope. Penalty for crossing the white line and penalty for each game piece the robot touched.
Nope. Penalty for crossing the white line and penalty for each game piece the robot touched.
Actually, it's a double penalty for crossing, double penalty + yellow for the first ball/robot hit after crossing, and double penalty per contact after that. That was a nice understatement, Chris.
But there is a goal on your side of the line that you could block. :cool:
fabalafae
10-01-2010, 21:40
Is there any starter camera tracking code for LabVIEW yet? I looked through the examples, but those are only for color recognition, not for shapes.
Paradise1165
10-01-2010, 21:58
Perhaps. It doesn't sound so important or controllable this year. If we had had this camera and these cool little laptops last year, yeah, that would have made things a whole lot better.
Is there any starter camera tracking code for LabVIEW yet? I looked through the examples, but those are only for color recognition, not for shapes.
The starter code for LabVIEW is actually built into the default template. This means that when you create a new robot project, it already contains some sample vision code for finding the target.
davidalln
10-01-2010, 23:23
Is there any starter camera tracking code for LabVIEW yet? I looked through the examples, but those are only for color recognition, not for shapes.
The LabVIEW off the CD does not have it (probably to keep the people compiling the CD from getting a hint about the game early). You have to download the update here: http://joule.ni.com/nidu/cds/view/p/lang/en/id/1534
I think that, because the targets are stationary rather than moving like last year, there is definitely an opportunity to score in autonomous using the camera to track. Long-range shooting was impossible last year because of the moving targets and the unpredictable floor, but this year I can see it making a comeback (although it might have been nice to see a point increase for a long-range goal... sort of like a 3-pointer in basketball. Oh well...)
TD-Linux
11-01-2010, 22:55
I've only looked at the C++ vision code so far, but I would imagine the labview code to be similar.
I am going to test it tomorrow with the old robot, but there are a few things that worry me:
- While the ellipse detection only uses luminance data for detecting edges, it does so by allocating memory, mixing the image down, processing it, then freeing the memory. I don't know how efficient the VxWorks malloc is, but this seems like a rather bad idea.
- From what I can tell, the ellipse detection uses the edges of ellipses - meaning that it will detect two ellipses, around the inner and outer edges of the black circle. While this is perfectly acceptable when one bases navigation on the center of the circles, it has the potential to throw a wrench into distance algorithms (e.g. inverse perspective transform). Some sort of algorithm will be needed to pick one of the edges (preferably the outer one); a rough sketch of one way to handle this follows this list.
- The tracking algorithm only samples the image once, then bases all further turning on the gyro without sampling any more images. There are problems both with this approach and with the implementation. I won't elaborate on this point, as it probably deserves its own separate thread.
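For the second point (the inner/outer edge ambiguity), something along these lines would do the edge-picking; the Ellipse struct here is made up for the sketch, not the report type the library actually returns:

// Keep only the outer (wider) of two detections that share roughly the same center.
#include <cmath>
#include <cstddef>
#include <vector>

struct Ellipse { double centerX, centerY, width, height; };

Ellipse pickOuterEdge(const std::vector<Ellipse> &found) {
    // assumes at least one detection was returned
    Ellipse best = found.front();
    for (std::size_t i = 1; i < found.size(); ++i) {
        bool sameCenter = std::fabs(found[i].centerX - best.centerX) < 5.0 &&
                          std::fabs(found[i].centerY - best.centerY) < 5.0;
        if (sameCenter && found[i].width > best.width)
            best = found[i];   // prefer the outer edge of the circle
    }
    return best;
}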
I'm impressed that they created a decently working camera example for teams to start with, though it definitely is not a perfect solution. I have to wonder if they did this on purpose - after all, it would be no fun if everyone's robot ran the same code.
- While the ellipse detection only uses luminance data for detecting edges, it does so by allocating memory, mixing the image down, processing it, then freeing the memory. I don't know how efficient the VxWorks malloc is, but this seems like a rather bad idea.
You are absolutely correct. And no... malloc on VxWorks is not amazingly fast or anything. It is a bad idea and any patch you'd like to submit to address the issue would be greatly appreciated. I hear the AxisCamera2010 does the same thing, but with C++ new! (hint hint)
- The tracking algorithm only samples the image once, then bases all further turning on the gyro without sampling any more images. There are problems both with this approach and with the implementation. I won't elaborate on this point, as it probably deserves its own separate thread.
This is intentional... when you are moving, your image is usually very blurry, meaning that you are unlikely to be able to find high-contrast edges. By using the gyro, you are able to get higher-bandwidth data to move you pretty close to your target. Since your target is not moving, you will likely be pointed very close to the target when the gyro is at the detected angle, and the next update will be a minor adjustment.
This method is more reliable and typically faster when homing in on stationary targets.
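If it helps to see the idea concretely, here is a very rough sketch -- the channel numbers, gains, and loop structure are placeholders, not the demo's actual code:

// Sample the image once to get an angle, then close the loop on the gyro only.
#include <cmath>
#include "WPILib.h"

void TurnToTarget(Gyro &gyro, RobotDrive &drive, float angleFromImage) {
    float setpoint = gyro.GetAngle() + angleFromImage;  // one image, one setpoint
    while (fabs(setpoint - gyro.GetAngle()) > 1.0) {    // within 1 degree is "close"
        float turn = (setpoint - gyro.GetAngle()) * 0.03;  // simple P gain
        if (turn > 0.5) turn = 0.5;
        if (turn < -0.5) turn = -0.5;
        drive.ArcadeDrive(0.0, turn);                   // rotate in place
        Wait(0.01);
    }
    drive.ArcadeDrive(0.0, 0.0);
    // A fresh image taken now should only call for a small correction.
}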
If you'd like to discuss it further, feel free to start another thread.
-Joe
slavik262
12-01-2010, 11:22
I'm really excited for this year. Live video streams to the drivers will be amazing, and the targeting is built into the API (through their ellipse-finding code). Now teams will have consistent, (hopefully) reliable, and contained targeting code that's standard across the board.
davidalln
12-01-2010, 14:43
- From what I can tell, the ellipse detection uses the edges of ellipses - meaning that it will detect two ellipses, around the inner and outer edges of the black circle. While this is perfectly acceptable when one bases navigation on the center of the circles, it has the potential to throw a wrench into distance algorithms (e.g. inverse perspective transform). Some sort of algorithm will be needed to pick one of the edges (preferably the outer one).
Correct me if I'm wrong, as I don't have the code in front of me, but doesn't the example find the target by locating two concentric ellipses and then report data such as the height, width, and (x, y) position? Therefore, the target will essentially act as one circle that can be used for distance algorithms.
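If so, the distance estimate should just fall out of similar triangles. A sketch, where the target diameter and focal length are guesses that would have to be measured or looked up, not real numbers:

// distance = realWidth * focalLengthInPixels / apparentWidthInPixels
double estimateDistanceFt(double targetWidthPixels) {
    const double TARGET_DIAMETER_FT = 2.5;   // assumed outer circle diameter
    const double FOCAL_LENGTH_PX    = 400.0; // assumed, depends on camera resolution
    return TARGET_DIAMETER_FT * FOCAL_LENGTH_PX / targetWidthPixels;
}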
It uses RobertDrive class which does not support crab drive
Greg McKaskle
18-01-2010, 21:53
Since it is not possible to support all types of robot bases with the current WPILib, it controls what is probably the most common. At one level it computes an angle to rotate, then uses RobotDrive to rotate. It should be pretty easy to map to alternate drive bases. Of course you will likely want it to move forward, kick, line up with another, etc.
Greg McKaskle
Paul Copioli
18-01-2010, 22:58
Is vision needed in this game?
Is the goal stationary? YES
Do you know the location of the balls prior to Auton? YES
Do you know your robot location prior to Auton? YES
Is there potential defense in auton? NO
Can a human score without camera? YES
Is vision needed in this game? NO
We are all for using the camera when the effort is worth it. We used it in '06 because it was worth it. We will not use it this year because it is not worth it.
I think the camera will be worth using for aiming the kicker.
basicxman
19-01-2010, 08:39
The reason auton scoring is beneficial, in my opinion, is that the goals will be completely undefended. You might not even need to camera track to score in them.
I'm going to be devil's advocate and say the goals will not be defended for all 15 seconds in autonomous.
Vision tracking is definitely beneficial in any game, especially with this new control system allowing you to feed it straight to the Classmate.
The window of use in autonomous is generally smaller this year because unless you have a strong kicker there's not a lot of opportunity. (Well, I can think of more for defensive bots but I'll keep them to myself ;) )
It uses RobertDrive class which does not support crab drive
I assume you're referring to the "RobotDrive" class.
It would be easy enough to create your own CrabDrive class which extends the RobotDrive class. This would allow you to pass a CrabDrive to the vision code and have it work properly.
class CrabDrive : public RobotDrive {
    // build your class here (constructors, drive methods, etc.)
};
Yay for inheritance and OOP!
Our drivers were very impressed by a demo of the system. They feel that they can drive and play with the camera alone!
This should be an interesting game!
flameout
22-01-2010, 22:10
We just tried it yesterday, and I agree that it works great this year.
Last year, I couldn't get it to work correctly, but in this test I tried the example code with a basic setup, and it worked right out of the box! It's quicker and more accurate than any human driver, unless the driver wants to aim with the camera (waiting for the updates) or is standing right behind the robot.
Hopefully, we'll have a good autonomous program -- if our kicker isn't accurate enough to score from the far zone (we'll try to be in the far zone in most matches), then I'll aim towards the center (so I don't kick the ball out of the arena).
Team 619 finds that the camera code works great this year! It truly was a turnkey operation. Knowing what a pain the camera has been in previous years, I assigned three programmers the task of making it work. They finished in about two days. :D
CircleTrackerDemo contains a function called "getHorizontalAngle()". You call it and it gives you the horizontal angle to the target. It's that simple. No lighting constants to tune, no color constraints to check, etc.
They even went ahead and showed you how to set up a PID loop and gyro. We don't use those high-level features of WPILib (that's a little too easy, IMO), but one can literally download the sample program to the robot and have a working vision tracking system.