Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   FRC Control System (http://www.chiefdelphi.com/forums/forumdisplay.php?f=176)
-   -   Vision tracking: Did they get it right? (http://www.chiefdelphi.com/forums/showthread.php?t=79753)

GGCO 09-01-2010 21:37

Re: Vision tracking: Did they get it right?
 
Anyway I can find the Java camera code? I saw it once somewhere, but can't remember. Thanks in advance!

GGCO 09-01-2010 21:47

Re: Vision tracking: Did they get it right?
 
http://first.wpi.edu/FRC/frcjava.html

Nevermind.

Chris is me 09-01-2010 22:12

Re: Vision tracking: Did they get it right?
 
Quote:

Originally Posted by jspatz1 (Post 895139)
Not necessarily. A defensive bot's autonomous mode may be to block a goal.

Nope. Penalty for crossing the white line and penalty for each game piece the robot touched.

EricH 09-01-2010 22:19

Re: Vision tracking: Did they get it right?
 
Quote:

Originally Posted by Chris is me (Post 895225)
Nope. Penalty for crossing the white line and penalty for each game piece the robot touched.

Actually, it's a double penalty for crossing, double penalty + yellow for the first ball/robot hit after crossing, and double penalty per contact after that. That was a nice understatement, Chris.

But there is a goal on your side of the line that you could block.:cool:

fabalafae 10-01-2010 21:40

Re: Vision tracking: Did they get it right?
 
Is there any starter camera tracking code for LabVIEW yet? I looked through the examples, but those are only for color recognition, not for shapes.

Paradise1165 10-01-2010 21:58

Re: Vision tracking: Did they get it right?
 
Perhaps. It doesn't sound so important or controllable this year. If we had had this camera and these cool little laptops last year, yeah, it would have made things a whole lot better.

jhersh 10-01-2010 22:26

Re: Vision tracking: Did they get it right?
 
Quote:

Originally Posted by fabalafae (Post 896155)
Is there any starter camera tracking code for LabVIEW yet? I looked through the examples, but those are only for color recognition, not for shapes.

The starter code for LabVIEW is actually built into the default template. This means that when you create a new robot project, it already contains some sample vision code for finding the target.

davidalln 10-01-2010 23:23

Re: Vision tracking: Did they get it right?
 
Quote:

Originally Posted by fabalafae (Post 896155)
Is there any starter camera tracking code for LabVIEW yet? I looked through the examples, but those are only for color recognition, not for shapes.

The LabVIEW off the CD does not have it (probably to keep the people compiling the CD from getting a hint about the game early). You have to download the update here: http://joule.ni.com/nidu/cds/view/p/lang/en/id/1534

I think that, because the targets are stationary rather than moving like last year, there is definitely an opportunity to score in autonomous using the camera to track. Long range shooting was impossible last year because of the moving targets and the unpredictable floor, but this year I can see it making a comeback (although it might have been nice to see a point increase for a long range goal... sort of like a 3-pointer in basketball. Oh well...)

TD-Linux 11-01-2010 22:55

Re: Vision tracking: Did they get it right?
 
I've only looked at the C++ vision code so far, but I would imagine the LabVIEW code to be similar.

I am going to test it tomorrow with the old robot, but there are a few things that worry me:

- While the ellipse detection only uses luminance data for detecting edges, it does so by allocating memory, mixing the image down, processing it, then freeing the memory. I don't know how efficient VxWorks' malloc is, but this seems like a rather bad idea.

- From what I can tell, the ellipse detection uses the edges of ellipses - meaning that it will detect two ellipses, around the inner and outer edges of the black circle. While this is perfectly acceptable when one bases navigation on the center of the circles, it has the potential to throw a wrench into distance algorithms (e.g. inverse perspective transform). Some sort of algorithm will be needed to pick one of the edges (preferably the outer one).

- The tracking algorithm only samples the image once, then bases all further turning on the gyro without sampling any more images. There are problems both with this approach and with its implementation. I won't elaborate on this point, as it probably deserves its own separate thread.
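To make the inner/outer edge ambiguity concrete, here is a rough sketch of picking the outer edge by area. Everything here is hypothetical: `EllipseMatch` and its fields are invented stand-ins for whatever the vision library actually reports, not the real NI result type.

```java
import java.util.Arrays;
import java.util.List;

public class OuterEdgePicker {
    // Hypothetical stand-in for a detected ellipse: center and semi-axes.
    static class EllipseMatch {
        double cx, cy, major, minor;
        EllipseMatch(double cx, double cy, double major, double minor) {
            this.cx = cx; this.cy = cy; this.major = major; this.minor = minor;
        }
    }

    // Given detections that share (roughly) one center, keep only the largest
    // by area, i.e. the outer edge of the black circle.
    static EllipseMatch pickOuter(List<EllipseMatch> concentric) {
        EllipseMatch outer = null;
        for (EllipseMatch e : concentric) {
            if (outer == null || e.major * e.minor > outer.major * outer.minor) {
                outer = e;
            }
        }
        return outer;
    }

    public static void main(String[] args) {
        List<EllipseMatch> edges = Arrays.asList(
            new EllipseMatch(160, 120, 40, 38),   // outer edge of the circle
            new EllipseMatch(160, 120, 30, 28));  // inner edge of the circle
        System.out.println(pickOuter(edges).major); // prints 40.0
    }
}
```

A distance estimate based on the outer edge would then see one consistent circle size instead of flipping between the two detections.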

I'm impressed that they created a decently working camera example for teams to start with, though it definitely is not a perfect solution. I have to wonder if they did this on purpose - after all, it would be no fun if everyone's robot ran the same code.

jhersh 12-01-2010 00:17

Re: Vision tracking: Did they get it right?
 
Quote:

Originally Posted by TD-Linux (Post 897104)
- While the ellipse detection only uses luminance data for detecting edges, it does so by allocating memory, mixing the image down, processing it, then freeing the memory. I don't know how efficient VxWorks' malloc is, but this seems like a rather bad idea.

You are absolutely correct. And no... malloc on VxWorks is not amazingly fast or anything. It is a bad idea and any patch you'd like to submit to address the issue would be greatly appreciated. I hear the AxisCamera2010 does the same thing, but with C++ new! (hint hint)
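The allocate-per-frame pattern being criticized can be avoided by reusing one scratch buffer across frames. This is a hedged illustration in Java (the code under discussion is C++/LabVIEW): `LumaBuffer` and its method are invented for the example, and the integer weights are only approximate BT.601 luminance coefficients.

```java
public class LumaBuffer {
    private byte[] luma;  // scratch buffer, reused frame to frame

    // Mix an interleaved RGB frame down to luminance without a per-frame
    // malloc/free cycle: allocate only when the frame size changes.
    byte[] toLuma(byte[] rgb, int width, int height) {
        int n = width * height;
        if (luma == null || luma.length != n) {
            luma = new byte[n];
        }
        for (int i = 0; i < n; i++) {
            int r = rgb[3 * i] & 0xFF;
            int g = rgb[3 * i + 1] & 0xFF;
            int b = rgb[3 * i + 2] & 0xFF;
            // Approximate BT.601 weights scaled to sum to 256.
            luma[i] = (byte) ((r * 77 + g * 150 + b * 29) >> 8);
        }
        return luma;
    }

    public static void main(String[] args) {
        LumaBuffer lb = new LumaBuffer();
        byte[] frame = new byte[2 * 1 * 3];
        java.util.Arrays.fill(frame, (byte) 255);      // two white pixels
        byte[] y1 = lb.toLuma(frame, 2, 1);
        byte[] y2 = lb.toLuma(frame, 2, 1);
        System.out.println(y1 == y2);                  // prints true: buffer reused
    }
}
```

The same idea applies in C++: hold the mixed-down image as a member and resize it only when the camera resolution changes.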

Quote:

Originally Posted by TD-Linux (Post 897104)
- The tracking algorithm only samples the image once, then bases all further turning on the gyro without sampling any more images. There are problems both with this approach and with its implementation. I won't elaborate on this point, as it probably deserves its own separate thread.

This is intentional... when you are moving, your image is usually very blurry, meaning that you are unlikely to be able to find high contrast edges. By using the gyro, you are able to get higher bandwidth data to move you pretty close to your target. Since your target is not moving, you will likely be pointed very close to the target when the gyro is at the detected angle, and the next update will be a minor adjustment.

This method is more reliable and typically faster when homing in on stationary targets.
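The sample-once-then-close-the-loop-on-the-gyro idea can be sketched roughly as below. This is not the actual WPILib code: `turnOutput`, the gain, and the deadband are made-up placeholders for illustration.

```java
public class GyroAim {
    // One camera sample gives the target bearing; after that we turn on the
    // gyro alone. Proportional control toward the camera-reported angle,
    // clamped to a [-1, 1] motor output, with a small deadband where we stop
    // and let the next (now sharp) image refine the aim.
    static double turnOutput(double targetAngleDeg, double gyroAngleDeg, double kP) {
        double error = targetAngleDeg - gyroAngleDeg;
        if (Math.abs(error) < 1.0) {
            return 0.0;  // close enough: stop and take another picture
        }
        return Math.max(-1.0, Math.min(1.0, kP * error));
    }

    public static void main(String[] args) {
        // Camera said the target is 20 degrees to the right; gyro reads 0.
        System.out.println(turnOutput(20.0, 0.0, 0.05));   // prints 1.0
        // Almost there: within the deadband, so stop turning.
        System.out.println(turnOutput(20.0, 19.5, 0.05));  // prints 0.0
    }
}
```

Because the gyro updates far faster than the camera can deliver sharp frames, the loop above runs at full control-loop rate between image samples.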

If you'd like to discuss it further, feel free to start another thread.

-Joe

slavik262 12-01-2010 11:22

Re: Vision tracking: Did they get it right?
 
I'm really excited for this year. Live video streams to the drivers will be amazing, and the targeting is built into the API (through their ellipse-finding code). Now teams will have consistent, (hopefully) reliable, and contained targeting code that's standard across the board.

davidalln 12-01-2010 14:43

Re: Vision tracking: Did they get it right?
 
Quote:

Originally Posted by TD-Linux (Post 897104)
- From what I can tell, the ellipse detection uses the edges of ellipses - meaning that it will detect two ellipses, around the inner and outer edges of the black circle. While this is perfectly acceptable when one bases navigation on the center of the circles, it has the potential to throw a wrench into distance algorithms (e.g. inverse perspective transform). Some sort of algorithm will be needed to pick one of the edges (preferably the outer one).

Correct me if I'm wrong, as I don't have the code in front of me, but doesn't the example find the target by locating two concentric ellipses and then report data such as the height, width, and (x, y) position? Therefore, the target will essentially act as one circle that can be used for distance algorithms.

rjbarra 18-01-2010 20:40

Re: Vision tracking: Did they get it right?
 
It uses the RobotDrive class, which does not support crab drive.

Greg McKaskle 18-01-2010 21:53

Re: Vision tracking: Did they get it right?
 
Since it is not possible to support all types of robot base with the current WPILib, it controls what is probably the most common. At one level it computes an angle to rotate, then uses robot drive to rotate. It should be pretty easy to map to alternate drive bases. Of course you will likely want it to move forward, kick, line up with another, etc.
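One way to sketch that mapping is to put the "rotate by this angle" step behind an interface, so the vision code doesn't care what kind of base executes it. `TurnController` and both implementations here are hypothetical, not part of WPILib.

```java
public class DriveAdapter {
    // Hypothetical seam between vision output (an angle) and the drive base.
    interface TurnController {
        void rotate(double degrees);
    }

    // A tank-style base would turn in place by driving the sides in opposite
    // directions; a crab/swerve base would instead steer its modules. Either
    // one just implements rotate(), so the vision code stays unchanged.
    // This stub only records the command so the sketch is runnable.
    static class LoggingTank implements TurnController {
        double lastCommand;
        public void rotate(double degrees) { lastCommand = degrees; }
    }

    public static void main(String[] args) {
        LoggingTank base = new LoggingTank();
        TurnController drive = base;          // vision code sees only this
        double angleFromCamera = 12.5;        // pretend the ellipse finder said this
        drive.rotate(angleFromCamera);
        System.out.println(base.lastCommand); // prints 12.5
    }
}
```

Swapping in a crab drive then means writing one new `rotate()` implementation rather than touching the tracking code.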

Greg McKaskle

Paul Copioli 18-01-2010 22:58

Re: Vision tracking: Did they get it right?
 
Is vision needed in this game?

Is the goal stationary? YES
Do you know the location of the balls prior to Auton? YES
Do you know your robot location prior to Auton? YES
Is there potential defense in auton? NO
Can a human score without camera? YES


Is vision needed in this game? NO

We are all for using the camera, when the effort is worth it. We used it in 06, because it was worth it. We will not use it this year because it is not worth it.

