Dave said the multi object code was available online, but I can’t find it. Does anyone know where it is?
Thanks,
Eric Haskins
Does Kevin's code work for this?
http://kevin.org/frc/frc_camera_2.zip
Do we use last year’s unlock code for this?
So wait, are you only able to use that code with EasyC?
Currently, only easyC and WPILib have code written for multi-target tracking.
Brad Miller will be making a project on Monday for WPILib.
The reason is that the GDC used easyC for FRC to program the demo and most of the test robots.
easyC PRO now has a full IDE built in, so you can write as much C code as your heart desires. Also, easyC PRO has a graphics display window that works similarly to the way the demo did.
Team contacts will receive a CD-KEY Monday.
You can run in evaluation mode until then.
Really?
-Kevin
I am confused; I didn’t even know that the hardware supported multiple objects. I’ll have to have a look-see at that code to figure out how multi-tracking works.
JBot
I can't wait.
Of course it could. It's all about the software implementation.
OK, so we have to use WPILib to get the tracking code? We were going to use FusionEdit and not easyC.
Yes, but if you have 2 targets in one field-of-view, you will get a wide box and a low confidence value. That’s at the camera’s firmware level. So just by looking at the tracking packets, you wouldn’t be able to see 2 targets. And I know we couldn’t dump the contents of the frame into the RC; we don’t have that kind of bandwidth.
However, if you only had one target in the field-of-view, and panned until you saw another, this would work fine. I’m guessing that’s how the software works.
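If that guess is right, the scanning logic would be pretty simple. Here is a rough sketch; the struct, the helper functions, and the thresholds are all made up for illustration, and real code would also wait for the servo to settle and for a fresh tracking packet at each step:

/* Rough sketch of the pan-and-scan idea: step the pan servo across its
   range and note every position where the camera reports a single, tight,
   high-confidence blob. */

#define CONFIDENCE_MIN   40   /* ignore weak or ambiguous locks          */
#define MAX_BLOB_WIDTH   30   /* a very wide box probably means 2 lights */
#define NUM_PAN_STEPS    16

typedef struct {
    unsigned char x1, y1, x2, y2;  /* bounding box corners      */
    unsigned char confidence;      /* track quality from camera */
} Track_Info;

/* Hypothetical helpers; substitute whatever your camera/RC code provides. */
extern void Set_Pan_Servo(unsigned char position);
extern Track_Info Read_Tracking_Packet(void);

void Scan_For_Targets(unsigned char target_pans[], int *num_found)
{
    int step;

    *num_found = 0;
    for (step = 0; step < NUM_PAN_STEPS; step++) {
        unsigned char pan = (unsigned char)(step * (255 / NUM_PAN_STEPS));
        Track_Info t;

        Set_Pan_Servo(pan);
        t = Read_Tracking_Packet();

        if (t.confidence >= CONFIDENCE_MIN &&
            (unsigned char)(t.x2 - t.x1) <= MAX_BLOB_WIDTH) {
            target_pans[(*num_found)++] = pan;  /* single light in view */
        }
    }
}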
JBot
The hardware doesn’t support tracking of multiple objects. The code that Dave mentioned, written by FIRST’s Neil Rosenberg, infers the location of a “spider leg” by looking at the size of the image. If there are two lights in the scene – which is not always the case – the camera returns a blob size that is far too big to be from one light. If the blob size is small, you know that there is only one light in the scene and there is a spider leg directly underneath.
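In other words, it's just a size check on the blob the camera reports. Something along these lines, with an invented threshold that would have to be calibrated to the actual light size:

/* Sketch of the size-check inference described above: one light should give
   a blob below some calibrated pixel count, while two lights merged into one
   blob give something much larger.  The threshold is invented. */

#define ONE_LIGHT_MAX_PIXELS  60   /* calibrate to the actual light size */

typedef enum { NO_LIGHT, ONE_LIGHT, TWO_LIGHTS } Light_Count;

Light_Count Classify_Blob(unsigned int pixel_count)
{
    if (pixel_count == 0)
        return NO_LIGHT;                  /* nothing tracked this frame   */
    if (pixel_count <= ONE_LIGHT_MAX_PIXELS)
        return ONE_LIGHT;                 /* spider leg directly below it */
    return TWO_LIGHTS;                    /* too big to be a single light */
}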
-Kevin
Kevin,
If you did write code (I hope I’m not misinterpreting your post), do you have an approximate release date?
Bump? It sounds like Kevin’s leading us on a little, but I would love love love to have that code.
Kevin, if your 2007 code has tracking similar to what Neil wrote, then I apologize; I was misinformed.
Kevin is 100% correct: the CMUcam can't track multiple objects. The code just looks at the size of the target region and, if it's over a certain size, prints that it's looking at 2 targets. In the demo they used a terminal emulation program and VT100 calls. We made our own custom terminal window and function to do the same thing in easyC PRO.
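For anyone who hasn't used them, VT100 calls are just escape sequences printed to the terminal. A generic example (not the GDC's actual demo code) looks like this:

/* Generic example of VT100-style terminal output: escape sequences printed
   over the serial link to clear the screen and position the cursor.
   Illustrative only. */
#include <stdio.h>

void Draw_Status(int blob_size, int confidence)
{
    printf("\x1B[2J");             /* clear the whole screen      */
    printf("\x1B[H");              /* move cursor to home (1,1)   */
    printf("Blob size:  %d", blob_size);
    printf("\x1B[3;1H");           /* move cursor to row 3, col 1 */
    printf("Confidence: %d", confidence);
}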
WPILib is 100% compatible with MPLAB and Eclipse. In fact, it's currently being written and maintained in Eclipse,
and WPI has been using it since 2005 on their FRC robots.
Any reason why easyC can't open the bds file for the multi-object tracking code?
Are you sure that you are using easyC PRO, and not easyC for Vex?
The way the camera works is that it takes a picture, finds all the pixels that fall into a color range, and reports information about them. The information I've used in the past is the blob's centroid, its bounding box, the number of tracked pixels, and a confidence value.
Note: In 2005, the camera would drive its own servos and report their values. In 2006, the code did not configure the camera this way, relying on the RC to drive the servos instead. (I changed this back to the 2005 behavior in my own code.) I do not know yet what the behavior is this year.
I do not remember whether the camera reports the number of separate blobs in the default information. If it does not, the communication overhead and computation requirements to determine it yourself would likely be prohibitive. If it does, there is a good chance that you cannot get separate bounding box/median information for each blob.
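For reference, the CMUcam2's tracking ("T") packet reports the blob centroid, bounding box, pixel count, and confidence, so a struct for it might look roughly like this (field names are mine, not necessarily what the FRC camera code uses):

/* Sketch of the fields in a CMUcam2 tracking ("T") packet.  The names are
   illustrative; the actual FRC camera code may call them something else. */
typedef struct {
    unsigned char mx, my;       /* centroid of the tracked color blob     */
    unsigned char x1, y1;       /* upper-left corner of the bounding box  */
    unsigned char x2, y2;       /* lower-right corner of the bounding box */
    unsigned char pixels;       /* number of tracked pixels (scaled)      */
    unsigned char confidence;   /* how well the blob fills its box        */
} Tracking_Packet;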
Of course, I’m likely to eat my words when the actual code gets around.
EDIT: Wow. How much discussion occurs while I post!
Yes you can, but the RC needs to control the pan/tilt servos…
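If it helps to picture it, driving the servos from the RC usually comes down to a simple proportional loop on the blob centroid. A rough sketch, where the image centers, gain divisors, error signs (which depend on how the camera is mounted), and helper names are all placeholders:

/* Rough sketch of an RC-side tracking loop: nudge the pan/tilt servos in
   proportion to how far the blob centroid is from the middle of the image. */

#define IMAGE_CENTER_X     80   /* adjust to the camera's resolution/mode */
#define IMAGE_CENTER_Y    120
#define PAN_GAIN_DIVISOR    8   /* bigger divisor = gentler correction    */
#define TILT_GAIN_DIVISOR   8

/* Hypothetical servo accessors; use whatever your RC code provides. */
extern unsigned char Get_Pan_Servo(void);
extern unsigned char Get_Tilt_Servo(void);
extern void Set_Pan_Servo(unsigned char position);
extern void Set_Tilt_Servo(unsigned char position);

void Track_Blob(unsigned char mx, unsigned char my)
{
    int pan_error  = (int)mx - IMAGE_CENTER_X;
    int tilt_error = (int)my - IMAGE_CENTER_Y;

    Set_Pan_Servo((unsigned char)(Get_Pan_Servo()  - pan_error  / PAN_GAIN_DIVISOR));
    Set_Tilt_Servo((unsigned char)(Get_Tilt_Servo() + tilt_error / TILT_GAIN_DIVISOR));
}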
-Kevin
How exactly does one access the data pertaining to the individual blobs? I didn’t remember ever seeing anything like that in the code…