How fast is everyone’s color tracking code running when it sees the target?
Ours runs at around 9-10 Hz, sometimes less, with the resolution set to 160x120. Our image processing runs in a separate thread.
Has anyone had better luck improving processing speed? It’d be great if we could raise the priority of our thread, but that completely screws up the built-in camera task. Would compression help at all? The lag might have to do with GetImage.
Do you really need to retrieve the video feed? If not, rip that code out of the two-color tracking demo, if you’re using that or something similar…
When I ripped out all the camera-viewing code, my camera was moving so smoothly it was quite amazing…
What camera-viewing code comes with the two-color tracking demo?
The major factor affecting frame rate is image size. The images coming from the camera are already compressed JPEGs. A large image takes the cRIO 100 ms to decode to pixels, medium takes 22 ms, and small takes about 8 ms. The other impact of size is the number of pixels to process: a color threshold on the large pixmap means processing 307,200 pixels, medium is 1/4 of that, and small is 1/16 as many. Added together, these effects clearly make image size the primary factor in frame rate.
The next issue is how the processing is done. There are a number of ways to detect the red/green combo. After looking at a number of them, NI and WPI decided on what is in the example. Other approaches will work, and in fact you may find something even better, but many of the other approaches are slower.
The next issue is the debugging displays. The displays on the PC, sent from the cRIO, have a pretty big cost. When you need them for debugging they are certainly worth it, but especially when you are looking at timings, you will want to close subVIs, close probes, and avoid leaving the HSL debug buttons on in the Find VI.
As for ripping out the displays, it shouldn’t be necessary to remove them completely. Place them in a case structure and display them only when you want them; that way, when you need them, you just push a button. Again, that is what the LV examples do.
The last piece I’ll comment on for the debugging is the dashboard. The dashboard image display has a button to turn it on and off. In addition, if the dashboard isn’t running, is turned off, or is blocked, the loop on the cRIO that is sending the images will block on a TCP call and will not cause any CPU usage. In other words, no need to rip it out either, you can simply turn it off when not needed.
If you’d like to make more precise measurements of where the image processing time is being spent, especially with LV, I’ll be glad to help.
Greg McKaskle
For the camera, what value of compression (0-100) would you suggest? What difference, if any, would it make?
Thanks!
Feel free to test it.
From my experience, the amount of compression has little impact on framerate. The exceptions I remember were that near 100% the rate would often drop, and the same near 0%. So keeping it somewhere in the middle, like 20% to 80%, has always worked well. The more compression you apply, the blockier the image elements become, and IMO the more it will interfere with your image processing.
Greg McKaskle
One idea to improve the speed would be to add a separate microprocessor to pre-process the data before sending it to the cRIO.
And how do you plan on sending the data to the cRIO?
You are not allowed (this year) to tap into the ethernet port on the camera side nor are you allowed to use the serial port (SIGH).
You could use a couple of digital IO pins as a serial port. It’s been done before…
I observed (on the browser-based camera GUI) that making the image slightly out of focus improved the frame rate. I’m not sure if this is something you would want to do in competition or not.
Who is getting 27-30 Hz, and what resolution are you using? We are now getting 15-17… but 30, man.
I’m managing around 3-5 Hz on average at 640x480. It’s easier to debug at this size, but I’ll probably cut it down to 320x240 or smaller for the final version.
It’s possible to request uncompressed images from the camera… It might be worth experimenting with that if JPEG decoding becomes the limiting factor. I’m not sure how much of the network processing is offloaded on this guy, though, nor whether the larger transfer would outweigh the decoding delay: uncompressed is ~900 KB vs. 40-60 KB compressed. There’s also the fact that the camera can’t (as far as I know) stream BMPs, only serve one per request, so there’d be some extra delay there, although you could use a keep-alive connection…
I’m glad to see people looking into the camera capabilities. I’m still doing that myself.
The BMP capabilities on the camera are interesting to look at. What I found was that the time to transmit the much larger file added a delay comparable to the decompression time. You really don’t need to worry about the overhead of the http session. The LV camera VIs use the JPEG cgi and do that currently, and the overhead seemed miniscule.
One thing I recently learned which will make its way into a more official document… When playing with the camera settings, the Exposure priority parameter can have a pretty big impact on performance, especially when in normal to lower light. If you set it to image quality, it will avoid bumping the sensor gain and will drastically lower the framerate when there isn’t lots of light. When set to framerate, it will bump gain to preserve framerate which will result in grainier images. I haven’t done enough testing to see if this has a negative impact on brightly lit usage. Finally, the default of none lets the camera balance this, changing both.
I’d encourage you to look at the performance pretty soon, and if you find something, I’d be glad to hear about it.
Greg McKaskle
Hmmm, I wonder what the transmission rate of the DIO pins is…
The GPIO inputs are sampled at 173 kHz according to the GDC. I don’t know about the output rate (probably the same?).
My code can process 1000 640x480 images per second.
EDIT: I’m sorry, ~0.00012 seconds was the time difference. It’s actually about 10,000 images per second at 160x120. I’ll post the time for 640x480 tomorrow.
-TheDominis
I’m curious to know what your code is doing with the image, and what information it provides to the rest of the program when it has done its processing. What language is it written in, and would you consider sharing it?
I won’t share it. I’m using C++ and my code provides accurate data to be used by our cannon.
-TheDominis
Pics or it didn’t happen.
WE NEED HELP!
My team is testing its camera, and it sees the colors just fine. The problem we are having is that even though the camera sees the color, it won’t track it with the servos. Does anyone know what’s going on?:eek:
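If the detection works but the servos never move, it’s worth checking that the loop actually feeds the target position back into a servo command. As a point of comparison, the usual tracking logic is just a proportional nudge toward the target center. This is a minimal sketch, not the demo code; the function name, the gain value, and the sign convention are all assumptions (the sign depends on how the camera is mounted):

```cpp
#include <algorithm>

// One tracking step for a pan servo. targetX is the target's horizontal
// center normalized to [-1, 1] (0 = centered in the frame); servoPos is
// the current servo setpoint in [0, 1]; kGain is a tuning constant.
double trackStep(double servoPos, double targetX, double kGain = 0.05) {
    double next = servoPos + kGain * targetX;  // proportional nudge
    return std::clamp(next, 0.0, 1.0);         // stay within servo range
}
```

Each frame, you’d compute `targetX` from the particle report’s center of mass and write `trackStep(...)` back to the servo. If code like this runs and the servo still doesn’t move, the problem is more likely wiring, the PWM channel assignment, or the servo never being commanded at all.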