Where is the multi-object tracking code for the RC?

They are not individual blobs; they are a single large blob. The camera just draws a box around the target pixels.

If you load up easyC PRO, we have a program in the Sample Code that shows what the camera is seeing. It draws a box around the blob, marks the centroid (center of the blob) with an "X", and displays the data the camera is reporting.

If you're not using easyC, the CMU JavaApp can also show you the region. I don't know if LabVIEW can show this data.

If one were to use the VW command (virtual window, see page 55 of the CMUCam2 manual), processing could be done on a particular chunk of the camera’s view. By examining a 50px wide window and then repeatedly sliding that window over by a given number of pixels and re-processing, one could reconstruct the two distinct blobs and make an estimate of the number of pixels between them.
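
Concretely, the sliding-window loop might look something like the sketch below. camera_command() and parse_t_packet() are hypothetical placeholders for "send an ASCII command to the camera's serial port" and "read back the tracking packet" (use whatever serial routines your code already has), and the image dimensions and window step are assumptions to check against the manual:

```c
#include <stdio.h>

#define IMG_X_MAX   87   /* assumed CMUcam2 x range; depends on resolution mode */
#define IMG_Y_MAX  143   /* assumed CMUcam2 y range                             */
#define WIN_WIDTH   50   /* width of the sliding window, in pixels              */
#define WIN_STEP    10   /* how far the window slides each pass                 */

extern void camera_command(const char *cmd);  /* hypothetical: send an ASCII command   */
extern void parse_t_packet(void);             /* hypothetical: read back the T packet  */

void scan_with_virtual_window(void)
{
    char cmd[32];
    int x;

    for (x = 1; x + WIN_WIDTH <= IMG_X_MAX; x += WIN_STEP)
    {
        /* Restrict processing to a vertical strip of the image:
         * VW x1 y1 x2 y2 (CMUcam2 manual, page 55). */
        sprintf(cmd, "VW %d 1 %d %d\r", x, x + WIN_WIDTH, IMG_Y_MAX);
        camera_command(cmd);

        /* Re-run color tracking inside this window only and record
         * whether any target pixels show up, and where. */
        camera_command("TC\r");
        parse_t_packet();
    }

    /* Restore the full window before resuming normal tracking. */
    camera_command("VW\r");
}
```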

Of course, if you used this method, the camera's servos would have to be driven by the RC, because otherwise resetting the virtual window would cause the camera to track/center on that particular portion of the window. (I'm pretty sure, anyway; I haven't actually tested the command.)

Can anyone verify that using the VW window causes the camera to re-process only that chunk of the view? Also I’m not sure if a sliding window would be too slow. Eagerly anticipating any more hints from Kevin! :slight_smile:

Yes, this is one of the cooler approaches that you could try. A simpler way might be to rotate the camera fully clockwise, call Track_Color(), and then rotate the camera counter-clockwise until the camera detects the light. Then continue to rotate counter-clockwise until the entire blob is in frame (i.e., the blob isn't touching the edge of the image). Now you know where the right-most blob is and its size. Do this again to find the left-most blob. A little math, and you should know where the closest scoring location is.
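
A rough sketch of the right-most sweep, just to make the idea concrete. set_camera_pan(), start_tracking(), and the two blob tests are hypothetical helpers (pan is normally just one of the RC's PWM outputs, start_tracking() would wrap the Track_Color() call with your color bounds, and the blob tests would come from the bounding-box fields in the tracking packet); the servo range is an assumption to check against your own setup:

```c
#define PAN_FULL_CW     0    /* assumed servo PWM value for full clockwise         */
#define PAN_FULL_CCW  254    /* assumed servo PWM value for full counter-clockwise */
#define PAN_STEP        2    /* pan increment per loop pass                        */

extern void set_camera_pan(unsigned char pwm); /* hypothetical: drive the pan servo     */
extern void start_tracking(void);              /* hypothetical: wraps Track_Color()     */
                                               /* with your color bounds                */
extern int  blob_in_frame(void);               /* hypothetical: 1 if a blob is tracked  */
extern int  blob_touches_image_edge(void);     /* hypothetical: 1 if the bounding box   */
                                               /* touches the edge of the image         */

/* Sweep counter-clockwise from full clockwise until the right-most light
 * is completely in frame; return the pan value where that happened. */
unsigned char find_rightmost_light(void)
{
    unsigned char pan = PAN_FULL_CW;

    set_camera_pan(pan);
    start_tracking();

    while (pan < PAN_FULL_CCW)
    {
        pan += PAN_STEP;
        set_camera_pan(pan);

        /* In practice, wait for a fresh tracking packet between pan steps.
         * Stop once a blob is seen and it no longer touches the edge of
         * the image -- the entire right-most light is now in view. */
        if (blob_in_frame() && !blob_touches_image_edge())
            break;
    }

    return pan;
}
```

Repeat the same sweep from the other end of the pan range to find the left-most light.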

It’s a fun problem <evil grin>.

-Kevin

I had a solution which I do not think is legal, but it would be cool.

Parts
CMUcam2 x2
BASIC Stamp or Javelin Stamp
BOE (Board of Education) programming board

Feed the serial data into the free pins. Program the Stamp to tack an L or R onto the packet before sending it to the robot. That would allow for multiple cameras, but I don't think you can use a Stamp. Maybe a PIC, though.
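
The relay logic itself would be simple. A sketch of the idea (shown in C for illustration, even though a Stamp would actually be programmed in PBASIC), with hypothetical serial_get()/serial_put() helpers standing in for the board's serial I/O:

```c
extern int  serial_get(int channel);          /* hypothetical: next byte from camera 'channel' */
extern void serial_put(unsigned char byte);   /* hypothetical: forward a byte to the RC        */

/* Forward one complete packet (CMUcam2 packets end in '\r') from the
 * given camera, prefixed with a tag so the RC knows which camera sent it. */
void relay_packet(int channel, unsigned char tag)
{
    int byte;

    serial_put(tag);                 /* 'L' or 'R' */
    do {
        byte = serial_get(channel);  /* assumed to block until a byte arrives */
        serial_put((unsigned char)byte);
    } while (byte != '\r');
}

/* Main loop: alternate between the two cameras. */
void relay_loop(void)
{
    for (;;) {
        relay_packet(0, 'L');
        relay_packet(1, 'R');
    }
}
```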

Any chance you could elaborate on this math? Even if you know the relative headings of two lights, I am still having a hard time figuring out how you can approach the spider leg head on (not at an angle).

Thanks in advance,
Robinson

You don’t need to approach the spider leg head on to score (I’m told that is was designed to be fairly forgiving). One bit of math that will come in handy is the equation that allows you to calculate range to the light from the camera tilt angle then it is pointed directly at the light. This method was described in a post last year. This year the centroid of the light will be about 116 inches above the floor.

-Kevin

I’m familiar with the method for calculating range, but I’m afraid that the mechanical subteam may win in the war to have a manipulator that is (or isn’t) robust enough to accommodate any approach to the target except for a head on one.

While I would like it if we weren't angle-sensitive, if we are, what math is necessary to figure out your approach angle to the target (e.g., head-on, coming in at 20°, etc.)?

Thanks in advance,
Robinson

It’s not an easy problem because you’ll need a very agile robot that, at the very least, will need to be four wheel drive so that you can do a turn-in-place (of course, several high school students will read this and grin because they’ve thought of a cool way to solve the problem that the NASA guy didn’t think of – it happens every year <grin>). If I were you, I’d push back and let the mechanism folks know that their solution for delivering a scoring piece needs to be more robust so the 'bot can approach at more oblique angles. To help visualize how you might accomplish the task, here’s a link to a PDF containing a scale Visio drawing of the field: http://kevin.org/frc/2007_frc_field.pdf.

-Kevin

Our team is coming across the same dilemma. However, do you really need four-wheel drive to do a turn in place? My team is using a forklift-style drive this year (two drive wheels in the front, steering wheels in the back). The engineers on my team told me that we can turn in place by just turning the steering wheels almost perpendicular to the front and spinning the front wheels in opposite directions (i.e., to turn left in place, spin the left wheel backwards and the right wheel forward). I am skeptical of this method. Will it really work?

But also, I have an idea of how to determine the orientation of the rack/vision target from information from the camera and would like to know the feasibility of it. It draws on the fact that the blob's apparent size depends on the angle you're approaching the target from: the blob will be "thinner" if you're approaching from an angle, and wider if you're approaching head-on. Do you think it would be possible to determine the angle of the rack based on this information, and the distance?
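
As a rough cut at that idea, assuming the blob's apparent width falls off roughly as the cosine of the approach angle (a big approximation, and there are only a handful of pixels to work with at real distances), the estimate would look like:

```c
#include <math.h>

/* expected_width: blob width (pixels) measured head-on at this range.
 * observed_width: blob width (pixels) being seen right now.
 * Returns an estimated approach angle in degrees, or -1 if the numbers
 * don't make sense (e.g., observed wider than expected). */
double estimate_approach_angle(double observed_width, double expected_width)
{
    double ratio;

    if (expected_width <= 0.0 || observed_width > expected_width)
        return -1.0;

    ratio = observed_width / expected_width;   /* ~cos(approach angle) */
    return acos(ratio) * 180.0 / 3.14159265;
}
```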

It seems that our robot will only (feasibly) be able to score head-on.

One question: is it possible to use feedback from the camera to control the robot, almost like a mini-autonomous mode built into the code?

I.e., have the size of the rectangle tell your robot whether it is head-on, whether the light is to the right, or whether it is between two lights. Then use that info to triangulate its position (like Kevin said: it would rotate clockwise until it has only one light in sight, then rotate counter-clockwise until it has only the other light in sight, then use that to find the distance between the two using some sort of custom equation). Once that distance becomes, let's say, the equivalent of 45°, it would go straight forward and hang the ringer.

I know that it is complicated, but is it possible with the hardware and software?

It’s more complex mechanically, and the software will be a pain, but once dialed-in, it should work fairly well.

It will be hard to do this because you have too few pixels to work with. As an example, from the closest starting distance possible (~180 inches), the light at 0 degrees only lights up 12 pixels, while at 25 degrees only 8 pixels are illuminated. I think you're better off with a scoring mechanism that will work over a greater range of angles.

-Kevin

I've looked through this a bit and basically conceptualized a couple of ways to go about it. First off, if you look at Kevin's old camera code, you will see that all of the data you could ever need is within the t_packet_data structure. The bounding box corners are there, along with the centroid location.

The way that I am conceptualizing going about this is pretty much to use size and confidence in order to determine the number of targets. The way I see it, the confidence will decrease when you have a low number of tracked pixels within a large bounding box. When the confidence drops below a certain threshold, the code will know that more than one target is in sight. The next challenge is finding the centroid of each target.

OK, you know that the bounding box's X boundaries (but not necessarily the Y boundaries) will mark the left edge of the left target and the right edge of the right target. You will not know the height from this data, but you will not need to. The targets have a fixed aspect ratio (something like 2:1 w:h, I believe), and you know how many tracked pixels you have. With this data, by dividing the tracked pixels by two and conforming each half to the aspect ratio, you can get an approximate X location of each target.
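
A hedged sketch of those two steps together. The structure fields are modeled on the tracking packet (confidence, pixel count, bounding-box X corners) and should be checked against the t_packet_data definition you're using; the confidence threshold and aspect ratio are assumptions to tune on the real target, and note that the camera may scale the raw pixel count:

```c
#include <math.h>

#define TWO_LIGHT_CONFIDENCE 40     /* assumed threshold; tune on the field      */
#define ASPECT_RATIO          2.0   /* assumed light w:h ratio (from the post)   */

/* Fields modeled on the camera's tracking packet -- verify names/types
 * against the t_packet_data structure in the code you're using. */
typedef struct {
    unsigned char confidence;   /* tracking confidence                    */
    unsigned int  pixels;       /* tracked pixel count (may be scaled)    */
    unsigned char x1, x2;       /* left/right edges of the bounding box   */
} blob_info;

/* If the box looks like it spans both lights (low confidence = lots of
 * untracked space inside a wide box), split it: each light is assumed to
 * hold half the tracked pixels, and its width follows from the aspect
 * ratio (pixels = w * h and w = ASPECT_RATIO * h, so w = sqrt(pixels * R)).
 * Returns the number of lights found and fills in their approximate X
 * centroids. */
int split_targets(const blob_info *blob, double *left_x, double *right_x)
{
    double light_width;

    if (blob->confidence >= TWO_LIGHT_CONFIDENCE) {
        *left_x = *right_x = (blob->x1 + blob->x2) / 2.0;
        return 1;                      /* tight box: probably one light */
    }

    light_width = sqrt((blob->pixels / 2.0) * ASPECT_RATIO);

    *left_x  = blob->x1 + light_width / 2.0;   /* left light hugs x1  */
    *right_x = blob->x2 - light_width / 2.0;   /* right light hugs x2 */
    return 2;
}
```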

If you combine this method with the frame-differencing data, you could probably get the approximate Y value of each target as well. I will have to play around with this method a bit more in LabVIEW before I am able to come up with a conclusive algorithm, but that is what I have for now.

Yes, it’s true! Thanks for noticing.

Excellent analysis. Another bit of data you can use is the location of the centroid within the bounding rectangle. The centroid will be closer to the side of the rectangle with the nearest green light.
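
In code form, that hint might look like the sketch below; mx, x1, and x2 follow the tracking packet's centroid-X and bounding-box X corners, which you should verify against the structure in whatever code you're using:

```c
/* Returns -1 if the nearer light appears to be on the left, +1 if on the
 * right, 0 if the centroid sits dead center in the bounding box. */
int nearer_light_side(unsigned char mx, unsigned char x1, unsigned char x2)
{
    int left_gap  = mx - x1;   /* centroid distance from the left edge  */
    int right_gap = x2 - mx;   /* centroid distance from the right edge */

    if (left_gap < right_gap)
        return -1;             /* centroid pulled toward the left light  */
    if (right_gap < left_gap)
        return 1;              /* centroid pulled toward the right light */
    return 0;
}
```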

-Kevin

One way to do it is illustrated in the attached illustration.

-Kevin

One_way_to_do_it.pdf (16.2 KB)



Yes, but you can’t line up your robot exactly facing forward or the judges can move it. And the rack can be translated and rotated, too. So nothing is a given. I’ll know by tomorrow whether our manipulator can handle oblique angles or not. [crosses fingers]

Thanks,
Robinson

Yes, but you can’t line up your robot exactly facing forward or the judges can move it. And the rack can be translated and rotated, too. So nothing is a given. I’ll know by tomorrow whether our manipulator can handle oblique angles or not. [crosses fingers]

If you load your tube so that it is parallel to the floor, you can essentially load it at any angle. I'm not sure if this should be posted here, but you asked the question. It really is a bad idea to mount it dead-on; there are too many accuracy woes to worry about.

Who said that you can’t line up your robot exactly facing forward?:confused:
I thought the general rule was that you couldn’t come on the field with tape measure and other measuring tools to precisely position your robot. Other than that, you can place it however you want. That is the whole point. Last year, you could aim your robot “exactly” facing toward the corner or center goal so you could score.

This is exactly what I am pushing for. However, I am on a robotics team, and if the rest of the team decides that the other alternative on the drawing board (a gripper claw that grips from the inside, and loads perpendicular to the floor) is better/easier to make, then so be it. It becomes a software problem. I should know my fate by tomorrow afternoon.

The multi-object tracking cmucamera 2 code shown in the 2007 kickoff can be downloaded here:
http://first.wpi.edu/FRC/25814.htm

This link is also accessible from the programming resource library on usfirst.org. Anyway, I'm posting this because the Intelitek website link ( http://www.intelitekdownloads.com/easyCPRO/ ) that was posted earlier, while I believe it contains the same code, is currently down due to exceeding its bandwidth.

Michael
1353, Spartans

So… IS there going to be multi-object tracking code available (not the easyC one), or do we have to modify the existing camera code ourselves?