Does Kevin’s code work for this?
Do we use last year’s unlock code for this?
So wait, are you only able to use that code with EasyC?
Currently, only easyC and WPILib have code written for Multi-Target.
Brad Miller will be making a project on Monday for WPILib.
The reason is that the GDC used easyC for FRC to program
the demo and most of the test robots.
easyC PRO has a full IDE built in now, so you can write C code
as much as your heart desires. Also, easyC PRO has a graphics
display window that works similarly to the way the demo did.
Team contacts will receive a CD-KEY Monday.
You can run in evaluation mode till then.
I am confused; I didn’t even know that the hardware supported multiple objects. I’ll have to have a look-see at that code to figure out how multi-tracking works.
I can’t wait.
Of course it could. It’s all about the software implementation.
OK, so we have to use WPILib to get the tracking code? We were going to use FusionEdit and not easyC.
Yes, but if you have 2 targets in one field-of-view, you will get a wide box and a low confidence value. That’s at the camera’s firmware level. So just by looking at the tracking packets, you wouldn’t be able to see 2 targets. And I know we couldn’t dump the contents of the frame into the RC; we don’t have that kind of bandwidth.
However, if you only had one target in the field-of-view, and panned until you saw another, this would work fine. I’m guessing that’s how the software works.
The hardware doesn’t support tracking of multiple objects. The code that Dave mentioned, written by FIRST’s Neil Rosenberg, infers the location of a “spider leg” by looking at the size of the image. If there are two lights in the scene – which is not always the case – the camera returns a blob size that is far too big to be from one light. If the blob size is small, you know that there is only one light in the scene and there is a spider leg directly underneath.
If you did write code (I hope I’m not misinterpreting your post), do you have an approximate release date?
Bump? It sounds like Kevin’s leading us on a little, but I would love love love to have that code.
Kevin, if your 2007 code has tracking similar to what Neil wrote, then I apologize, as I was misinformed.
Kevin is 100% correct: the CMU can’t track multiple objects. The code just looks at the size of the target region and, if it’s over a certain size, prints that it’s looking at 2 targets. In the demo they used a terminal emulation program and VT100 calls. We made our own custom terminal window and function to do the same thing in easyC PRO.
WPILib is 100% compatible with MPLAB and Eclipse. In fact, it’s currently being written and maintained in Eclipse;
WPI has been using it since 2005 on their FRC robots.
Any reason why easyC can’t open the bds file for the multi-object tracking code?
Are you sure that you are using easyC PRO, and not easyC for Vex?
The way the camera works is that it takes a picture, finds all the pixels that fall into a color range, and finds information about them. The information I’ve used in the past is:
- The bounding box (x1, y1, x2, y2)
- The median point (mx, my)
- The pan/tilt (see note)
- A “confidence” (basically the ratio of the color blob area to the area of the image)
- IIRC, some statistical information about the color is also provided.
Note: In 2005, the camera would drive its own servos and report their values. In 2006, the code did not configure it this way, instead relying on the RC to do it. (I changed this to the 2005 behavior in my own code.) I do not know yet what the behavior is this year.
I do not remember if the camera provides the number of separate blobs in the default information. If it does not, the communication overhead and computation requirements would likely be prohibitive. If it does, there is a good chance that you cannot get the separate bounding box/median information for each blob.
Of course, I’m likely to eat my words when the actual code comes out.
EDIT: Wow. How much discussion occurs while I post!
Yes you can, but the RC needs to control the pan/tilt servos…
How exactly does one access the data pertaining to the individual blobs? I didn’t remember ever seeing anything like that in the code…
They are not individual blobs; they are a single large blob. The camera just draws a box around the target pixels.
If you load up easyC PRO, we have a program in the Sample Code that
shows what the camera is seeing. It draws a box around the blob, marks the centroid (center of the blob) with an “X”, and displays the data the camera is reporting.
If you’re not using easyC, the CMU JavaApp can also show you the region. I don’t know if LabVIEW can show this data.