Hi, I’m having a little trouble figuring out how I can track 2 targets with the new CMUcam2 code.
Can someone explain it to me?
10x Tal
From my understanding you cannot actually “track” multiple targets; you can only tell how many there are. The blob size returned from the camera will be twice as large as a regular blob size if you are looking at two targets, but I don’t believe you can actually “track” two targets…
But what if one is far from the other (like on the rack) and I want to know the angle between them?
First of all, which are you using, easyC or MPLAB?
I’m going to assume that you’re using mplab and relatively new to this, given your team number.
Have you downloaded Kevin Watson’s camera code at http://www.kevin.org/frc? It’s a great place to start if you’re unfamiliar with the CMUCam.
Anyway, once you download that, compile it and load it onto your RC. Be sure that your pan and tilt servos are plugged into pwm01 and pwm02, and connect power and the camera’s TTL serial lines in the appropriate places. Turn it on, and the camera should start panning around looking for the target.
Now, when your programming cable is plugged in, the console is open, and the camera is tracking, you can see a bunch of data, including the x and y locations of a bounding rectangle, the center of that rectangle, and the percentage of pixels in that rectangle that are being tracked. That percentage value is called the confidence. If you’re tracking a single target, your confidence value should be fairly high. If you’re tracking multiple targets, there will be a huge space of untracked pixels in between the two targets, which will lower your confidence substantially. So, if you see a big bounding box with low confidence, you can figure out that you’re tracking two targets.
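If you want to turn that into code, a rough sketch might look like this. The field names assume the T_Packet_Data_Type struct from Kevin’s camera.h, and both thresholds are guesses you’d have to tune on your own setup:

#include "camera.h"

/* Rough sketch: flag the "two lights in one box" case described above.
   Field names assume the T_Packet_Data_Type struct in camera.h, and
   both thresholds are guesses to tune on your own robot. */
unsigned char Seeing_Two_Targets(const T_Packet_Data_Type *t)
{
    unsigned char width  = t->x2 - t->x1;   /* bounding box width  */
    unsigned char height = t->y2 - t->y1;   /* bounding box height */

    /* One light: roughly square box, high confidence. Two lights:
       wide box with untracked pixels in between, so low confidence. */
    return (width > 2 * height) && (t->confidence < 80);
}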
Thank you, but I know all this from last year.
All I’m asking is how I can track 2 targets, or at least find the angle between them.
He told you.
There is no way to “track” two targets. The CMUCam2 firmware still only allows 1 target to be tracked. If it sees multiple color blobs, it will include them all in its “bounding box” and return a low confidence value.
Look at the spread of the bounding box. When tracking two targets it will most likely look like a very wide rectangle. From that you can determine the left and right limits of the targets. Using the servo angle, X center of the bounding box, and a little math, you can figure out exactly where they are in relation to your robot.
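A rough version of that math is below. The image width and field of view here are assumptions; measure your own camera rather than trusting these numbers:

/* Sketch of the servo-angle-plus-pixel-offset math. IMAGE_WIDTH and
   H_FOV_DEG are assumptions; substitute your camera's actual tracking
   resolution and measured horizontal field of view. */
#define IMAGE_WIDTH  176     /* pixels across the tracking image (assumed) */
#define H_FOV_DEG    45.0    /* horizontal field of view in degrees (assumed) */

/* Degrees off the camera's optical axis for a given pixel column. */
static double Pixel_To_Degrees(unsigned char x)
{
    return ((double)x - IMAGE_WIDTH / 2.0) * (H_FOV_DEG / IMAGE_WIDTH);
}

/* pan_deg is where the pan servo is pointing; pass the bounding box
   edges x1 and x2 from the T packet as pixel_x, one at a time. The
   difference of the two results is the angle between the lights. */
double Target_Angle(double pan_deg, unsigned char pixel_x)
{
    return pan_deg + Pixel_To_Degrees(pixel_x);
}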
Since the two lights should be at the same height, it seems that you only need the width of the bounding box to determine their two positions. However, I think it should in general be possible to generate a separate bounding box for each target by using the camera’s virtual window feature. That is, you should be able to cut the area being processed in half and operate on only one side, cropping the other light out of the picture.
But I’m not sure - I’m just an alumnus, I’ve never even plugged in the camera.
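That said, if someone wants to experiment, the idea would look something like this. The VW (virtual window) and TC (track color) commands come from the CMUcam2 manual, but the serial helper and the 176x255 frame size are assumptions to check:

/* Untested sketch of the virtual-window idea. "VW x1 y1 x2 y2" and
   "TC" are commands from the CMUcam2 manual. Send_Camera_Command() is
   a hypothetical helper that writes a raw command string out the
   camera's serial port, and the 176x255 frame size should be checked
   against your resolution mode. */
void Send_Camera_Command(const char *cmd);     /* hypothetical helper */

void Track_Left_Light(void)
{
    Send_Camera_Command("VW 1 1 88 255\r");    /* crop to the left half */
    Send_Camera_Command("TC\r");               /* re-track the loaded color */
}

void Track_Right_Light(void)
{
    Send_Camera_Command("VW 89 1 176 255\r");  /* crop to the right half */
    Send_Camera_Command("TC\r");
}

Alternating between the two windows would then give you a separate bounding box, and therefore a separate angle, for each light.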
Take a look at the following web site:
http://first.wpi.edu/FRC/25814.htm
It has a video that describes how to identify that you are tracking two lights and some sample code that will let you get a start playing with the camera.
Unfortunately, the only data from the camera that the frc_camera code exposes for non-default searching and tracking behaviors is the T_Packet_Data_Type struct in camera.h, and that file does not document what the fields are. Neither does any txt file.
If you look at the only non-easyC code on the WPI site, it uses an undocumented API call which does not seem to be available with the frc_camera code. Where that API call is used, the non-easyC code has descriptive variable names, assuming the argument order happens to match the order of the fields in the T_Packet_Data_Type struct.
Is there anyplace where this stuff is officially documented? If not, it *really* should be added to the .h file where the struct is defined. Last year new people kept asking questions about this right up to the first regionals. I expect the same this year.
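For reference, here is the struct with my best guesses at the field meanings, based on the order the CMUcam2 streams its T packet; the comments are unofficial:

/* T_Packet_Data_Type from camera.h, with unofficial comments. The
   meanings are guesses from the CMUcam2 T packet order
   (T mx my x1 y1 x2 y2 pixels confidence); don't treat them as gospel. */
typedef struct
{
    unsigned char mx;         /* x centroid of the tracked pixels */
    unsigned char my;         /* y centroid of the tracked pixels */
    unsigned char x1;         /* bounding box: left edge   */
    unsigned char y1;         /* bounding box: top edge    */
    unsigned char x2;         /* bounding box: right edge  */
    unsigned char y2;         /* bounding box: bottom edge */
    unsigned char pixels;     /* number of tracked pixels (scaled) */
    unsigned char confidence; /* tracked pixels vs. box area (scaled) */
} T_Packet_Data_Type;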
The information you seek is in the file “CMUcam2_data_packets.pdf”, available on my website since January 10th, 2006. The point about the t-packet not being documented in the header file is a good one, and I’ll make that change when I freshen the code in the next few days.
-Kevin
Alrighty then, I’ve got a quick, late-night question. Will this “rectangle” be determined by blob size?
Hi Keith -
Good question, I guess I should have done that.
For the easyC versions of the program, all the calls are extensively documented in the easyC help with the API and examples. If you use the WPILib version of the program, the entire API (not just the camera) is documented here:
WPILib docs
The library is the same one that easyC uses, so it was trivial to make a version of Neil Rosenberg’s program that didn’t require easyC, for people who wanted a quick stand-alone MPLAB project.
Thanks for the link, this really helped.
Kevin, thank you very much for that link! That is exactly what I was looking for.
I see that web page also has the camera calibration procedures, which our default 2007 camera has a problem with. Looks like I need to do some reading. But this is a different thread.
Regards,
Keith
Does anyone know if the code on the camera’s microprocessor is different from the previous 2 years’? If so, what changes were made?
As far as I can tell, the camera firmware does the same thing as it did last year. The hardware is not physically capable of controlling servos, but the camera still acts as if it can.
Has anyone experimented with using different lenses? Did it help? Where can you get different lenses?
We’ve been playing around with several different optical elements, but not replacing the camera lens. It looks like different lenses and mounts can be purchased from Marshall Electronics.
OK, now that I have read more closely, no, it isn’t exactly what I was looking for.
“Left” and “right” obviously say which X value is higher. What about the Y values? Is one always larger than the other?
The way the tracking function works is that it draws a box around the blob of pixels and gives you the top-left coordinates and the lower-right coordinates. So, yeah, there is always one y value that is larger than the other.
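So, given those two corners, the basic box arithmetic is just this (the struct and function names here are made up for illustration):

/* Box math from the two corners: (x1,y1) is the top-left and (x2,y2)
   the lower-right, as described above. Box_Info is just a name made
   up for this example. */
typedef struct
{
    unsigned char width, height, center_x, center_y;
} Box_Info;

Box_Info Get_Box_Info(unsigned char x1, unsigned char y1,
                      unsigned char x2, unsigned char y2)
{
    Box_Info b;
    b.width    = x2 - x1;
    b.height   = y2 - y1;
    b.center_x = x1 + b.width / 2;
    b.center_y = y1 + b.height / 2;
    return b;
}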