Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Where is the multi object tracking code for the RC? (http://www.chiefdelphi.com/forums/showthread.php?t=51048)

arshacker 07-01-2007 00:48

Re: Where is the multi object tracking code for the RC?
 
Any reason why easyC can't open the .bds file for the multi-object tracking code?

MattD 07-01-2007 02:06

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by arshacker (Post 549717)
Any reason why easyC can't open the .bds file for the multi-object tracking code?

Are you sure that you are using easyC PRO, and not easyC for Vex?

Astronouth7303 07-01-2007 12:36

Re: Where is the multi object tracking code for the RC?
 
The way the camera works is that it takes a picture, finds all the pixels that fall into a color range, and reports information about them (roughly the fields sketched in the struct below). The information I've used in the past is:
  • the bounding box (x1, y1, x2, y2)
  • the median point (mx, my)
  • the pan/tilt servo positions (see note)
  • a "confidence" (basically the ratio of the tracked pixels to the area of the bounding box)
  • IIRC, some statistical information about the color is also provided.
Note: In 2005, the camera would drive its own servos and report their values. In 2006, the default code did not configure it that way, instead relying on the RC to drive the servos. (I changed this back to the 2005 behavior in my own code.) I do not know yet what the behavior will be this year.
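For reference, here is a minimal C sketch of what one tracking packet carries. The field names are illustrative assumptions, not necessarily what the released camera code calls them:

[code]
/* Illustrative sketch of the data carried by one CMUcam2 tracking packet.
 * Field names are assumptions for illustration, not the actual camera API. */
typedef struct
{
    unsigned char x1, y1, x2, y2;  /* bounding box corners                          */
    unsigned char mx, my;          /* median (centroid) of the tracked pixels       */
    unsigned char pan, tilt;       /* servo positions, when the camera reports them */
    unsigned char pixels;          /* number of tracked pixels                      */
    unsigned char confidence;      /* tracked pixels relative to the box area       */
} Tracking_Data;
[/code]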

I do not remember if the camera provides the number of separate blobs in the default information. If it does not, the communication overhead and computation required to work it out yourself would likely be prohibitive. If it does, there is a good chance that you cannot get the separate bounding box/median information for each blob.

Of course, I'm likely to eat my words when the actual code comes out.

EDIT: Wow. How much discussion occurs while I post!

Kevin Watson 07-01-2007 12:48

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by Astronouth7303 (Post 549946)
I do not remember if the camera provides the number of separate blobs in the default information. If it does not, the communication overhead and computation required to work it out yourself would likely be prohibitive. If it does, there is a good chance that you cannot get the separate bounding box/median information for each blob.

Yes you can, but the RC needs to control the pan/tilt servos...

-Kevin

Eclipse 07-01-2007 14:43

Re: Where is the multi object tracking code for the RC?
 
How exactly does one access the data pertaining to the individual blobs? I don't remember ever seeing anything like that in the code...

Kingofl337 07-01-2007 14:51

Re: Where is the multi object tracking code for the RC?
 
They are not individual blobs; they are a single large blob. The camera just draws a box around the target pixels.

If you load up easyC PRO, we have a program in the Sample Code that shows what the camera is seeing. It draws a box around the blob, marks the centroid (center of the blob) with an "X", and displays the data the camera is reporting.

If you're not using easyC, the CMUcam2 Java app can also show you the region. I don't know if LabVIEW can show this data.

joe250 07-01-2007 18:07

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by Kevin Watson (Post 549953)
Yes you can, but the RC needs to control the pan/tilt servos...

-Kevin

If one were to use the VW command (virtual window, see page 55 of the CMUCam2 manual), processing could be done on a particular chunk of the camera's view. By examining a 50px wide window and then repeatedly sliding that window over by a given number of pixels and re-processing, one could reconstruct the two distinct blobs and make an estimate of the number of pixels between them.

Of course if you used this method, the camera's servos would have to be driven by the RC because otherwise resetting the virtual window would cause the camera to track/center on that particular portion of the window. (I'm pretty sure anyways, haven't ever actually tested out the command).

Can anyone verify that using the VW window causes the camera to re-process only that chunk of the view? Also I'm not sure if a sliding window would be too slow. Eagerly anticipating any more hints from Kevin! :)
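For illustration, a rough C sketch of that sliding-window scan. The frame size, threshold, and the two camera helpers are placeholders standing in for whatever the released camera code actually provides:

[code]
/* Sliding-window scan: step a narrow virtual window (the CMUcam2 VW command)
 * across the image and note which windows contain the tracked color.
 * The helpers and constants below are placeholders, not the actual API. */
#define IMG_WIDTH   176   /* frame size; check the CMUcam2 manual for the mode in use */
#define IMG_HEIGHT  255
#define WIN_WIDTH    50   /* the 50 px window suggested above */
#define WIN_STEP     10

typedef struct { int found; int mx; int confidence; } Window_Result;

extern void set_virtual_window(int x1, int y1, int x2, int y2); /* sends "VW x1 y1 x2 y2" */
extern Window_Result track_in_window(void);                     /* one tracking pass      */

void scan_for_blobs(void)
{
    int x;
    Window_Result r;

    for (x = 0; x + WIN_WIDTH < IMG_WIDTH; x += WIN_STEP)
    {
        set_virtual_window(x, 0, x + WIN_WIDTH, IMG_HEIGHT - 1);
        r = track_in_window();
        if (r.found && r.confidence > 20)
        {
            /* Part of a blob lies in columns x .. x+WIN_WIDTH.  Merging
             * overlapping hits gives the number of separate blobs, a rough
             * centroid for each, and the gap between them. */
        }
    }
    set_virtual_window(0, 0, IMG_WIDTH - 1, IMG_HEIGHT - 1);    /* restore the full frame */
}
[/code]

Whether this is fast enough depends on how quickly the camera re-acquires after each VW change, which is exactly the open question above.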

Kevin Watson 07-01-2007 18:41

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by joe250 (Post 550232)
If one were to use the VW command (virtual window, see page 55 of the CMUCam2 manual), processing could be done on a particular chunk of the camera's view. By examining a 50px wide window and then repeatedly sliding that window over by a given number of pixels and re-processing, one could reconstruct the two distinct blobs and make an estimate of the number of pixels between them.

Of course if you used this method, the camera's servos would have to be driven by the RC because otherwise resetting the virtual window would cause the camera to track/center on that particular portion of the window. (I'm pretty sure anyways, haven't ever actually tested out the command).

Can anyone verify that using the VW window causes the camera to re-process only that chunk of the view? Also I'm not sure if a sliding window would be too slow. Eagerly anticipating any more hints from Kevin! :)

Yes, this is one of the cooler approaches that you could try. A simpler way might be to rotate the camera fully clockwise, call Track_Color(), and then rotate the camera counter-clockwise until the camera detects the light. Then continue to rotate counter-clockwise until the entire blob is in frame (i.e., the blob isn't touching the edge of the image). Now you know where the rightmost blob is and its size. Do this again to find the leftmost blob. A little math, and you should know where the closest scoring location is.
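A minimal sketch of that sweep, assuming the RC is driving the pan servo. Apart from Track_Color(), every helper and constant here is a placeholder rather than the actual camera API:

[code]
/* Sweep the pan servo from full clockwise toward counter-clockwise and stop
 * once the rightmost blob is entirely inside the frame. */
#define PAN_FULL_CW     0     /* assumed servo value for full clockwise          */
#define PAN_FULL_CCW  254     /* assumed servo value for full counter-clockwise  */
#define PAN_STEP        2

extern void Track_Color(void);                /* start color tracking              */
extern int  blob_in_view(void);               /* placeholder: color is detected    */
extern int  blob_touches_edge(void);          /* placeholder: box touches an edge  */
extern void set_pan_servo(unsigned char pwm); /* placeholder: RC drives the servo  */
extern void wait_for_new_frame(void);         /* placeholder: wait for fresh data  */

unsigned char find_rightmost_blob(void)
{
    unsigned char pan = PAN_FULL_CW;

    set_pan_servo(pan);
    Track_Color();

    /* Rotate counter-clockwise until the light first appears... */
    while (pan < PAN_FULL_CCW && !blob_in_view())
    {
        pan += PAN_STEP;
        set_pan_servo(pan);
        wait_for_new_frame();
    }
    /* ...then keep rotating until the whole blob is inside the frame. */
    while (pan < PAN_FULL_CCW && blob_touches_edge())
    {
        pan += PAN_STEP;
        set_pan_servo(pan);
        wait_for_new_frame();
    }
    return pan;  /* pan position at which the rightmost blob is fully in view */
}
[/code]

Running the same sweep from the other end finds the leftmost blob; the difference between the two pan positions is what the math then works on.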

It's a fun problem <evil grin>.

-Kevin

drakesword 08-01-2007 23:07

Re: Where is the multi object tracking code for the RC?
 
I had a solution which I do not think is legal, but it would be cool.

Parts:
CMUcam2 x2
Basic Stamp or Javelin Stamp
BOE programming board

Feed the serial data into the free pins. Program the Stamp to tack an L or an R onto each packet before sending it to the robot. That would allow for multiple cameras, but I don't think you can use a Stamp. Maybe a PIC, though.

gnirts 15-01-2007 00:19

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by Kevin Watson (Post 550267)
A little math, and you should know where the closest scoring location is.

Any chance you could elaborate on this math? Even if you know the relative headings of two lights, I am still having a hard time figuring out how you can approach the spider leg head on (not at an angle).

Thanks in advance,
Robinson

Kevin Watson 15-01-2007 00:58

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by gnirts (Post 557034)
Any chance you could elaborate on this math? Even if you know the relative headings of two lights, I am still having a hard time figuring out how you can approach the spider leg head on (not at an angle).

Thanks in advance,
Robinson

You don't need to approach the spider leg head on to score (I'm told that it was designed to be fairly forgiving). One bit of math that will come in handy is the equation that allows you to calculate range to the light from the camera tilt angle when it is pointed directly at the light. This method was described in a post last year. This year the centroid of the light will be about 116 inches above the floor.
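A sketch of that calculation, assuming the camera is aimed squarely at the light: the range follows from the tilt angle and the height difference between the light's centroid and the camera lens. The camera height used here is only an example value:

[code]
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Estimate horizontal range to the light from the camera tilt angle, assuming
 * the camera is pointed directly at the light's centroid (about 116 inches up).
 * CAMERA_HEIGHT_IN is an example value; measure it on your own robot. */
#define LIGHT_HEIGHT_IN   116.0
#define CAMERA_HEIGHT_IN   24.0

double range_to_light_inches(double tilt_deg)
{
    double tilt_rad = tilt_deg * M_PI / 180.0;

    /* tan(tilt) = (light height - camera height) / horizontal range */
    return (LIGHT_HEIGHT_IN - CAMERA_HEIGHT_IN) / tan(tilt_rad);
}
[/code]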

-Kevin

gnirts 15-01-2007 01:22

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by Kevin Watson (Post 557055)
You don't need to approach the spider leg head on to score (I'm told that it was designed to be fairly forgiving). One bit of math that will come in handy is the equation that allows you to calculate range to the light from the camera tilt angle when it is pointed directly at the light. This method was described in a post last year. This year the centroid of the light will be about 116 inches above the floor.

I'm familiar with the method for calculating range, but I'm afraid the mechanical subteam may win that war, leaving us with a manipulator that isn't robust enough to accommodate any approach to the target except a head-on one.

While I would like it if we weren't angle sensitive, if we are, what math is necessary to figure out your approach angle to the target (e.g., head on, coming in at 20°, etc.)?

Thanks in advance,
Robinson

Kevin Watson 15-01-2007 02:57

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by gnirts (Post 557065)
I'm familiar with the method for calculating range, but I'm afraid the mechanical subteam may win that war, leaving us with a manipulator that isn't robust enough to accommodate any approach to the target except a head-on one.

While I would like it if we weren't angle sensitive, if we are, what math is necessary to figure out your approach angle to the target (e.g., head on, coming in at 20°, etc.)?

Thanks in advance,
Robinson

It's not an easy problem because you'll need a very agile robot that, at the very least, will need to be four wheel drive so that you can do a turn-in-place (of course, several high school students will read this and grin because they've thought of a cool way to solve the problem that the NASA guy didn't think of -- it happens every year <grin>). If I were you, I'd push back and let the mechanism folks know that their solution for delivering a scoring piece needs to be more robust so the 'bot can approach at more oblique angles. To help visualize how you might accomplish the task, here's a link to a PDF containing a scale Visio drawing of the field: http://kevin.org/frc/2007_frc_field.pdf.

-Kevin

maniac_2040 15-01-2007 15:50

Re: Where is the multi object tracking code for the RC?
 
Quote:

Originally Posted by Kevin Watson (Post 557083)
It's not an easy problem because you'll need a very agile robot that, at the very least, will need to be four wheel drive so that you can do a turn-in-place (of course, several high school students will read this and grin because they've thought of a cool way to solve the problem that the NASA guy didn't think of -- it happens every year <grin>). If I were you, I'd push back and let the mechanism folks know that their solution for delivering a scoring piece needs to be more robust so the 'bot can approach at more oblique angles. To help visualize how you might accomplish the task, here's a link to a PDF containing a scale Visio drawing of the field: http://kevin.org/frc/2007_frc_field.pdf.

-Kevin

Our team is running into the same dilemma. However, do you really need four wheel drive to do a turn in place? My team is using a forklift-style drive this year (two drive wheels in the front, steering wheels in the back). The engineers on my team told me that we can turn in place by turning the steering wheels almost perpendicular to the front and spinning the front wheels in opposite directions (i.e., to turn left in place, spin the left wheel backwards and the right wheel forward). I am skeptical of this method. Will it really work?

But also, I have an idea for determining the orientation of the rack/vision target from the camera data and would like to know how feasible it is. It draws on the fact that the blob's apparent width depends on the angle you're approaching the target from: the blob will be "thinner" if you're approaching from an angle, and wider if you're approaching head on. Do you think it would be possible to determine the angle of the rack from this information plus the distance?

It seems that our robot will only (feasibly) be able to score head on.
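Purely to illustrate the idea above: if the target really does look narrower when viewed off-axis (true for a flat target, only roughly true for a round light), the approach angle could be estimated by comparing the measured blob width to the width expected head on at the same range. Every name and number here is an assumption:

[code]
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Illustrative only: estimate the approach angle from the apparent blob width.
 * Assumes apparent width scales with cos(angle), which holds for a flat target
 * but only roughly (if at all) for a round light.  expected_width_px is the
 * width you would measure head on at the same distance (calibrate beforehand). */
double approach_angle_deg(double blob_width_px, double expected_width_px)
{
    double ratio = blob_width_px / expected_width_px;

    if (ratio > 1.0) ratio = 1.0;   /* guard against measurement noise */
    if (ratio < 0.0) ratio = 0.0;

    return acos(ratio) * 180.0 / M_PI;   /* 0 degrees = head on */
}
[/code]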

Dominicano0519 15-01-2007 16:44

Re: Where is the multi object tracking code for the RC?
 
One question: is it possible to use feedback from the camera to control the robot, almost like a mini-autonomous mode built into the code?

I.e., have the size of the rectangle tell your robot whether it is head on, whether the light is to the right, or whether it is between two lights, then use that info to triangulate its position (like Kevin said: it would rotate clockwise until it has only one light in sight, then rotate counterclockwise until it has only the other light in sight, then use that to find the separation between the two using some sort of custom equation). Once that separation becomes, let's say, the equivalent of 45°, it would drive straight forward and hang the ringer.

I know that it is complicated, but is it possible with the given hardware and software?
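A minimal sketch of the kind of camera-feedback drive loop the question describes, assuming the RC drives the pan servo and steers to keep the camera centered on the light. Every helper name and constant below is a placeholder, not the actual RC or camera API:

[code]
/* Minimal camera-feedback steering sketch: drive toward the light by steering
 * in proportion to how far the pan servo has swung away from center. */
#define PAN_CENTER   127   /* assumed pan value when the camera looks straight ahead */
#define STEER_GAIN     1   /* proportional gain; tune on the real robot              */
#define BASE_SPEED   180   /* example forward speed (127 = stop on the RC's 0..254)  */

extern unsigned char get_pan_servo(void);        /* placeholder: current pan position */
extern void set_drive_pwms(int left, int right); /* placeholder: 0..254 drive outputs */

static int clamp_pwm(int v)
{
    if (v < 0)   return 0;
    if (v > 254) return 254;
    return v;
}

void drive_toward_light(void)
{
    /* Positive error means the camera has panned off to one side to keep the
     * light centered, so steer the drivetrain back toward that side. */
    int error = (int)get_pan_servo() - PAN_CENTER;

    set_drive_pwms(clamp_pwm(BASE_SPEED - STEER_GAIN * error),
                   clamp_pwm(BASE_SPEED + STEER_GAIN * error));
}
[/code]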

