Speed of the camera
How fast is everyone's color tracking code running when it sees the target?
Re: Speed of the camera
Ours runs around 9-10 Hz, probably less, with resolution set to 160x120. Our image processing runs in a separate thread.
Has anyone had better luck with improving processing speed? It'd be great if we could improve the priority of our thread, but that completely screws up the built-in camera task. Would compression help any? The lag time might have to do with GetImage.
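For readers wondering what the "separate thread" arrangement looks like, here is a minimal sketch of the idea in portable C++ (std::thread stands in for a VxWorks task, and FetchJpegFromCamera() is a hypothetical placeholder for whatever GetImage()/camera call you use): the acquisition loop keeps only the newest frame, so a slow processing loop never backs up acquisition.
Code:
// Sketch only: not the WPILib API. The point is the structure, not the calls.
#include <thread>
#include <mutex>
#include <vector>
#include <atomic>
#include <chrono>

struct Frame { std::vector<unsigned char> jpeg; };

std::mutex        gFrameLock;
Frame             gLatest;                        // only the newest frame is kept
std::atomic<bool> gRun{true};

Frame FetchJpegFromCamera() { return Frame{}; }   // placeholder for the real camera call

void AcquireLoop() {
    while (gRun) {
        Frame f = FetchJpegFromCamera();
        std::lock_guard<std::mutex> lk(gFrameLock);
        gLatest = std::move(f);                   // overwrite, never queue old frames
    }
}

void ProcessLoop() {
    while (gRun) {
        Frame f;
        { std::lock_guard<std::mutex> lk(gFrameLock); f = gLatest; }
        // ... decode, color threshold, particle analysis, servo commands ...
        std::this_thread::sleep_for(std::chrono::milliseconds(50));  // ~20 Hz budget
    }
}

int main() {
    std::thread a(AcquireLoop), p(ProcessLoop);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    gRun = false;
    a.join(); p.join();
    return 0;
}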
Re: Speed of the camera
Do you really need to retrieve any video feed? If not, rip that code out of the two color tracking demo, if you guys are using that or something similar...
When I ripped out all the camera viewing related code, my camera was moving so smoothly it was quite amazing...
Re: Speed of the camera
What camera viewing related code comes with the two color tracking demo??
Re: Speed of the camera
The major thing that affects frame rate is the image size. The images coming from the camera are already compressed JPGs. Large size takes the cRIO 100 ms to decode to pixels, medium takes 22 ms, and small takes about 8 ms. The other impact size has is on the number of pixels to process. To do a color threshold on the large pixmap means processing 307,200 pixels; medium is 1/4th of that, and small is 1/16th that many pixels. These effects added together clearly make image size the primary factor in the frame rate.
The next issue is how the processing is done. There are a number of ways to detect the red/green combo. After looking at a number of them, NI and WPI decided on what is in the example. Other approaches will work, and in fact you may find something even better, but many of the other approaches are slower.
The next issue is the debugging displays. The displays on the PC sent from the cRIO have a pretty big cost. When you need them for debugging, they are certainly worth it, but especially when you are looking at timings, you will want to close subVIs, close probes, and not leave the HSL debug buttons on in the Find VI either. As for ripping out the displays, it shouldn't be necessary to rip them out completely. Place them in a case structure and only display them when you want them. That way, when you want them, you just push a button. Again, that is what the LV examples do.
The last piece I'll comment on for the debugging is the dashboard. The dashboard image display has a button to turn it on and off. In addition, if the dashboard isn't running, is turned off, or is blocked, the loop on the cRIO that is sending the images will block on a TCP call and will not cause any CPU usage. In other words, no need to rip it out either; you can simply turn it off when not needed.
If you'd like to make more precise measurements of where the image processing time is being spent, especially with LV, I'll be glad to help.
Greg McKaskle
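To put rough numbers on the size effect described above, here is a small back-of-the-envelope sketch (not from any team's code): the decode times are the ones quoted in the post, and the per-pixel threshold cost is an invented placeholder purely to show how the work scales with resolution.
Code:
#include <cstdio>

int main() {
    // Axis 206 resolutions and the cRIO JPEG decode times quoted above.
    const struct { const char* name; int w, h; double decodeMs; } sizes[] = {
        {"large (640x480)",  640, 480, 100.0},
        {"medium (320x240)", 320, 240,  22.0},
        {"small (160x120)",  160, 120,   8.0},
    };
    const double usPerPixelThreshold = 0.1;   // placeholder cost per pixel; measure your own

    for (const auto& s : sizes) {
        int pixels = s.w * s.h;
        double thresholdMs = pixels * usPerPixelThreshold / 1000.0;
        std::printf("%-18s %7d px  decode %5.0f ms  threshold ~%4.1f ms\n",
                    s.name, pixels, s.decodeMs, thresholdMs);
    }
    return 0;
}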
Re: Speed of the camera
For the camera, what value of compression (0-100) would you suggest? What difference, if any, would it make?
Thanks!
Re: Speed of the camera
Feel free to test it.
From my experience, the amount of compression has little impact on framerate. The exceptions I remember were that near 100%, the rate would often drop lower, and near 0%, the same. So keeping it somewhere in the middle, like 20% to 80%, has always worked well. The more compression you give, the more blocky the image elements, and IMO the more it will mess with your image processing.
Greg McKaskle
Re: Speed of the camera
One idea to improve the speed would be to add a separate microprocessor to pre-process the data before sending it to the cRIO.
Re: Speed of the camera
Quote:
You are not allowed (this year) to tap into the Ethernet port on the camera side, nor are you allowed to use the serial port (*SIGH*).
Re: Speed of the camera
Quote:
I observed (on the browser-based camera GUI) that making the image slightly out of focus improved the frame rate. I'm not sure if this is something you would want to do in competition or not.
Re: Speed of the camera
Who is getting 27-30 Hz, and what resolution are you using? We are now getting 15-17... but 30, man.
Re: Speed of the camera
I'm managing around 3-5 Hz on average at 640x480. It's easier to debug at this size, but I'll probably be cutting it down to 320x240 or smaller for final work.
It's possible to request uncompressed images from the camera... It might be a thought to mess around with that if JPEG decoding becomes the limiting factor here, although I'm not sure how much network processing is off-loaded on this guy, nor by how much it'd outweigh the decoding delay. Uncompressed is 900KB vs. 40-60KB compressed. There's also the fact that the camera can't (so far as I know) stream BMPs, only serve one per request, so there'd be some extra delay there, although you could use a keep-alive connection...
Re: Speed of the camera
I'm glad to see people looking into the camera capabilities. I'm still doing that myself.
The BMP capabilities on the camera are interesting to look at. What I found was that the time to transmit the much larger file added a delay comparable to the decompression time. You really don't need to worry about the overhead of the HTTP session. The LV camera VIs use the JPEG cgi and do that currently, and the overhead seemed minuscule.
One thing I recently learned which will make its way into a more official document... When playing with the camera settings, the Exposure priority parameter can have a pretty big impact on performance, especially in normal to lower light. If you set it to image quality, it will avoid bumping the sensor gain and will drastically lower the framerate when there isn't lots of light. When set to framerate, it will bump gain to preserve framerate, which will result in grainier images. I haven't done enough testing to see if this has a negative impact on brightly lit usage. Finally, the default of none lets the camera balance this, changing both.
I'd encourage you to look at the performance pretty soon, and if you find something, I'd be glad to hear about it.
Greg McKaskle
Re: Speed of the camera
Quote:
Re: Speed of the camera
Quote:
Re: Speed of the camera
My code can process 1000 640x480 images per second.
EDIT: I'm sorry. ~0.00012 seconds was the time difference. It's actually about 10000 images at 160x120. I'll post the time for 640x480 tomorrow.
-TheDominis
Re: Speed of the camera
Quote:
Re: Speed of the camera
I won't share it. I'm using C++ and my code provides accurate data to be used by our cannon.
-TheDominis
Re: Speed of the camera
Quote:
Re: Speed of the camera
WE NEED HELP!
My team is testing its camera and it sees the colors just fine. The problem we are having is that even though the camera sees the color, it won't track the color using the servos. Does anyone know what's going on?
Re: Speed of the camera
Make sure the servo channels match your wiring, make sure the channels you are using have jumpers, make sure your RSL light on your digital sidecar is steady green. All of these are necessary for the servos to move under computer control.
Greg McKaskle
Re: Speed of the camera
How are you guys checking the frequency? What we see so far is just the framerate, which is currently at 7.5 fps, which is not that great.
Re: Speed of the camera
I've just tested 640x480 and it takes ~0.0008 seconds for each image. 1250 images per second.
-TheDominis
Re: Speed of the camera
I find that extremely hard to believe since the camera can only support up to 30 frames per second, which means it can receive one image every ~0.0333 seconds, and that's the MAX possible from the Axis 206.
BTW, what's everyone's framerate running at?
Re: Speed of the camera
I am processing the same image more than once. I disabled the timestamp checking to see how many per second I could process.
-TheDominis
Re: Speed of the camera
Quote:
Another possibility for speeding it up would be to remember the position of the target between images, and use that as an estimate for where it will be the following time. If you get the cycles fast enough, it shouldn't move too much in between (or you could even add a simple first-order approximation of position based on the current and previous positions). Then find the particles within 50 pixels of the previous spot or such. I'm not sure if it's possible to force cropping on the camera side, but that'd possibly be handy - although, again, it would probably force constant re-connection (I don't know if the camera supports keep-alive connections, although it does have a scripting interface...).
Another fun improvement might be to sync the servo movement up with the camera, so that pictures are returned a little less blurred. Or use a gyro to counteract turning of the robot automatically, also lessening blur.
If you wanted to go _really_ in depth, you could take the cropping idea above to a whole second level, and write your own JPEG code to only decode the blocks of the image that you think the target would be in (IIRC from the file format, you'd still have to go through the Huffman decoding for the entire image, but you could do the IDCT and such only on the parts you need).
There are literally tons of things I'd like to explore to try and speed up this code. Although, for those interested, my first attempt is going to be to get libjpeg compiled for this processor and see if it's able to decode any faster (although for all I know, NI's JPEG code could easily be based on it, and I'm wasting my time - research is to be done).
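Here is a rough sketch of the "remember the previous position" idea from the post above (not from any team's code; the coordinates, window radius, and image size are arbitrary): keep the last two centroids, extrapolate with a simple first-order model, and only scan a clamped window around the prediction.
Code:
#include <algorithm>
#include <cstdio>

struct Point { int x, y; };

// Predict the next centroid with a constant-velocity (first-order) model.
Point PredictNext(const Point& prev, const Point& curr) {
    return Point{ curr.x + (curr.x - prev.x), curr.y + (curr.y - prev.y) };
}

// Clamp a square search window around the prediction to the image bounds.
struct Window { int x0, y0, x1, y1; };
Window SearchWindow(const Point& center, int radius, int imgW, int imgH) {
    Window w;
    w.x0 = std::max(0, center.x - radius);
    w.y0 = std::max(0, center.y - radius);
    w.x1 = std::min(imgW - 1, center.x + radius);
    w.y1 = std::min(imgH - 1, center.y + radius);
    return w;
}

int main() {
    Point prev{70, 60}, curr{78, 61};              // arbitrary example centroids
    Point guess = PredictNext(prev, curr);
    Window w = SearchWindow(guess, 50, 160, 120);
    std::printf("search x:[%d,%d] y:[%d,%d]\n", w.x0, w.x1, w.y0, w.y1);
    // Real code would threshold only the pixels inside w, and fall back to a
    // full-image search whenever the target is not found in the window.
    return 0;
}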
Re: Speed of the camera
Again, I encourage you to run lots of experiments with it. Many of the things you mention are used on a regular basis with industrial cameras, but those weren't in the price range to go in the kit. The 206 is primarily a security camera or monitoring camera, and the features like crop or ROI or precise timing don't exist. The camera does a pretty good job of getting a decent image back to the computer, but many of the things that would be useful for optimization aren't in the current camera's firmware. Maybe next year.
I actually looked at our JPEG algorithm to see about a subset decode, and it would be easy to do the bottom of the rect, but the others get hard really fast. Please let me know if you find a better JPEG algorithm. I was told this is a good/fast one, and I was unable to locate one from Freescale tuned to this architecture.
Greg McKaskle
Re: Speed of the camera
Quote:
Re: Speed of the camera
Because I was curious as to how much running image processing on a faster processor would speed it up, I tried it on an (admittedly extremely slow - probably under 2 GHz) laptop and got a framerate of 20 at a resolution of 320x240. This was just a simple loop of getImage; with simple processing (threshold, fill holes, get measurements of particles), it didn't really slow down. At 160x120 it was more like 30 fps. Since it was running locally, outputting the image didn't slow it down.
Something I'm curious about is what the actual framerate of the Axis 206 is - i.e., not what the specs say, but what it can actually serve. The laptop - running XP - was reporting CPU usage around a consistent 10% or so, so either it was the network - 100 megabit Ethernet (12.5 megabytes/sec) - or the CPU onboard the camera is too slow to deliver at higher speeds.
I could probably have sped the processing up a bit by running it in separate loops communicating through a global variable so it could utilize both processors. Actually, best performance could (I think) be gotten by using a dual core and decoding and processing in one thread, and acquiring images in another processor/thread. Based on the results I've gotten, the network/camera seems to be a major bottleneck at higher resolutions, unless much more of the CPU was being used than what was shown. Tomorrow I'll try with the priority set to framerate and see if there's a difference.
Does anyone have something they've written to test the BMP image retrieval capability? I would be interested in tweaking/changing that and posting any changed or improved code... Otherwise, I'll just post any code for it that I come up with.
-jonathan
Re: Speed of the camera
The camera framerate is often limited by the amount of available light. If you reproduce the test, be sure to aim the camera at the ceiling, or at the lights. You should see the framerate go up. Then aim it at dark stuff, or mostly cover the lens and it should go down. Oddly, when you put your finger over the lens to make the camera go completely dark, it usually falls somewhere in between, presumably it gives up on a decent exposure.
If you set the exposure priority to framerate, the lower-light images will get a higher framerate, but will be grainier. I wrote and retested the BMP stuff just the other day. The framerate was miserable.
Greg McKaskle
Re: Speed of the camera
Quote:
Also, what I think is at least as important, if not more important, is the effective range. Using the FRC sample tracking program, we had a range of 10-15 feet. As the distance increased over about 10 ft, it had more and more trouble tracking a moving target, although stationary was fine for a few more feet. This is probably a result of the low resolution (160x120) we're using, although that's the only resolution with any usable framerate - around 18 fps.
-jonathan
Re: Speed of the camera
I only tested two resolutions, but I remember large being 4 fps, medium around 5.8, and small around 7.5.
The ability to detect the target reliably will be greatly affected by the lighting. It sounds like your lighting is good enough to get stuff to work. But how does it compare to the event? And when it gets flaky, is it purely due to distance, or is it because you are going into a dark spot or an area with different lighting? Use the debug HSL button to sample the images if that helps. See if some flood lights like they have at events help -- note that depending on how they are mounted, they can cause lots of glare, especially when in the same horizontal plane as the camera.
Also note whether you are processing all of the 160x120, or are you decimating it further? Finally, you may need to switch resolution to see further. Depending on strategy, I could imagine that when there are no close targets, you switch to 320 for a while, then when the size goes over a threshold, you go back to 160 for faster frames.
Greg McKaskle
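The resolution-switching idea at the end of that post could look something like the sketch below (purely hypothetical: the width thresholds and the hysteresis values are placeholders to tune, and the hook that actually changes the camera resolution is left out).
Code:
#include <cstdio>

enum class Res { k160x120, k320x240 };

// Hypothetical hysteresis between two resolutions based on the reported target
// width in the current image. The 8 px and 30 px switch points are placeholders.
Res ChooseResolution(Res current, int targetWidthPx) {
    if (current == Res::k160x120 && targetWidthPx > 0 && targetWidthPx < 8)
        return Res::k320x240;   // target far away: trade frame rate for reach
    if (current == Res::k320x240 && targetWidthPx > 30)
        return Res::k160x120;   // target close again: go back to fast frames
    return current;             // otherwise keep the current resolution (hysteresis)
}

int main() {
    Res r = Res::k160x120;
    int widths[] = {20, 6, 12, 35, 40};        // made-up particle widths over time
    for (int w : widths) {
        r = ChooseResolution(r, w);
        std::printf("width %2d px -> %s\n", w, r == Res::k160x120 ? "160x120" : "320x240");
    }
    return 0;
}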
Re: Speed of the camera
Has anyone that is willing to share found a way to speed the processing up yet?
Just curious... Thanks
Re: Speed of the camera
I am running the two-color tracking code on a 160x120 image at 5 Hz and it consumes nearly 50% of the CPU bandwidth (measured using 'spy').
Re: Speed of the camera
How smooth does that run the servos?
I am going to be running a motor, not servos... I would feel safer if it were 10-15 Hz...
Re: Speed of the camera
Quote:
Greg McKaskle
Re: Speed of the camera
"Is this with LV or C? What framerate did you ask the camera to run at? Did you add printfs or anything?"
That is in C/C++. I passed 5 as the framerate parameter to StartCameraTask, and I have pulled out the code from the demo robot task (minus the wheels stuff) and run it every 10th DS message, or about 5 Hz. The DPRINTF code is there, but Target_debugPrint is 0, so no messages.
Re: Speed of the camera
If I remember correctly, a couple of teams greatly improved the overall performance of the old camera by putting a polarized lens in front of it. Has anybody tried this with the new camera?
Re: Speed of the camera
Have you tried running it faster? The CPU will ideally be below 100% to keep odd things from happening to your scheduling, but you have lots more bandwidth.
Greg McKaskle
Re: Speed of the camera
If I use a 320x240 image, the camera consumes close to 75% of the bandwidth. If I keep it at 160x120 and run it at 10 Hz, I go just over 80%. So I'll probably run it faster after all the rest of my code is running.
This is an embedded system where we desire soft real-time performance. Consuming the CPU bandwidth is not the correct measure or goal. We want it to run reliably in a fixed amount of time each iteration.
The dynamic memory allocations are a real killer. Walking the free list (while locking everyone else out) is a time-consuming and non-deterministic algorithm. The HitNodes are all fixed size, so I'll probably convert that code to use a set of pre-allocated buffers. I'll bet that will speed it up quite a bit.
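The "pre-allocated buffers" idea above amounts to a fixed-size free list: everything is allocated once up front, and the per-frame hot path hands nodes out and back in O(1) without touching the heap allocator. Here is a generic sketch (the HitNode fields are made up, not the WPI structure):
Code:
#include <cstddef>
#include <cassert>

// Stand-in for whatever fixed-size record the tracking code allocates per hit.
struct HitNode { int x, y, length; HitNode* next; };

// Fixed pool: all nodes allocated once up front, then recycled via a free list.
template <std::size_t N>
class HitNodePool {
public:
    HitNodePool() {
        for (std::size_t i = 0; i + 1 < N; ++i) nodes_[i].next = &nodes_[i + 1];
        nodes_[N - 1].next = nullptr;
        freeList_ = &nodes_[0];
    }
    HitNode* Acquire() {                 // O(1), no heap traffic
        HitNode* n = freeList_;
        if (n) freeList_ = n->next;
        return n;                        // nullptr when the pool is exhausted
    }
    void Release(HitNode* n) {
        n->next = freeList_;
        freeList_ = n;
    }
private:
    HitNode  nodes_[N];
    HitNode* freeList_;
};

int main() {
    static HitNodePool<1024> pool;       // sized for the worst frame you expect
    HitNode* n = pool.Acquire();
    assert(n != nullptr);
    n->x = 10; n->y = 20; n->length = 5;
    pool.Release(n);
    return 0;
}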
Re: Speed of the camera
I am currently running some tests of decoding speed for other JPEG libraries. I'll also look a little more closely at the image structure used in the JPEGs from the camera to see if there is much shared data - apart from width, height, etc., I'm wondering whether the MJPEG chip on the camera is making similar Huffman tables and other things between each frame.
Currently I have ported one other JPEG library over to the system (mainly just changing a bunch of x86-specific stuff, endianness, etc.). I'll get figures on speeds back as soon as I can (likely tomorrow) and am hoping I can see some sort of speedup.
I have also noticed the large amount of allocation/deallocation in the current code, and actually rewrote just about everything I could (camera streaming code, decoding functions, analysis, etc.) to remove unnecessary dynamic memory allocations: sharing buffers between functions, only calling new when a buffer needs to be expanded and keeping track of its growing size, etc. The effects are not really noticeable in standard usage, but in debugging I'm able to get a live feed with no slowdown on the system or streaming. Looking at the GFLOPS on the processor, I'm fairly confident in finding a way to get 8-12 FPS on a 640x480 image, which is my goal. It's just all about optimizing properly.
*edit* Oh, and just a random musing... At 1000 images per second of 640x480 (that is, _assuming_ that the entire image is fully decoded), the processor would have to be capable of decoding approximately one pixel every instruction - meaning each instruction would have to decode the red, green, and blue components of a pixel at once. That leaves a few thousand instructions (about 20% of the time) left over for actually analyzing the said image and, presumably, converting the RGB to HSL.
On my Core 2 Duo 2.53GHz, utilizing one core at 100%, I can decode images from the camera in around 29ms (640x480). Based on what I could find, the processor in the cRIO is about 1/4th as fast as mine here in terms of GFLOPS (JPEG decoding is very math heavy, usually with floating point, sometimes fixed-point integer math). I am unsure of the validity of these sources, however... That puts an upper limit on JPEG decoding speed of around 120ms for a 640x480. When you consider that most of this speed comes from architectural differences rather than clock speed, you realize that the actual speed will be even slower after accounting for the many non-math portions of the code (traversing trees, function overhead, etc.). So I believe the maximum speed for decoding a 640x480 on the cRIO will be around 180ms if you use code approximately as optimized as the JPEG library used on my Mac. I have been unable to find any PowerPC-optimized decoder, backing up what Greg mentioned earlier.
So I imagine the practical limits are around here (theoretically, at least, based on the logic pattern I've gone through):
640x480: 180ms, around 5.6fps. Funnily enough, this isn't too much more than what I was getting in my original tests.
320x240: 45ms, around 22.2fps. Seems reasonable, and a good place to aim for a fair trade-off.
160x120: 11.25ms, around 89fps. I somehow doubt this. Besides, it's not really useful above 30fps anyway, even if it was possible.
Does anyone have code that beats these marks? (That is, that actually fully decodes the image.)
Re: Speed of the camera
The times you mention are about double what I've measured on the cRIO. The measured times are 100ms for large, 22 for medium, and 8 for small. What I'm getting at is that your estimates are relatively close to the current times on the cRIO. With enough time and effort, I'm sure we can trim and shave and reclaim some time. In the off season I hope to spend a bit of time there, but not now.
As for the allocations and stuff that are in the demo code, it is certainly true that the more allocations you have, the less deterministic the execution timing of the code. On the other hand, I wouldn't expect the time cost to change much. The memory managers these days have the right algorithms and enough memory to get good timings on most operations.
Greg McKaskle
Re: Speed of the camera
Quote:
Re: Speed of the camera
For fun, I had someone reproduce my camera timings, and here are a few things we learned. He wasn't able to get decent camera timings; he was getting around 4 fps for 320x240.
1. He had set his JPG compression to 0 when messing around with the camera. When set to very small numbers and to very big numbers near 100, the camera takes much longer to produce the JPGs. This increased the time to get an image to several hundred milliseconds. Fixing this improved the numbers, but they were still not what I'd been seeing.
2. He had never set his camera up using the setup utility. He didn't have an account of FRC/FRC. This means that the first authentication to the camera always fails, and the camera is always using the second session attempted. This will pretty much cut the frame rate in half. This issue is probably LV specific, but if you have not done so, run the utility or log into the camera using a web browser. If you cannot log in using FRC for the account and FRC for the password, you have this problem. Run the utility, or log in and manually create an account.
Hope this helps someone out there.
Greg McKaskle
Re: Speed of the camera
Quote:
The information about the compression time, however, is interesting - on this camera, the JPEG compression is handled by a dedicated ASIC, so I wouldn't expect such dramatic changes. I have simply been using the default, whatever that is - I imagine it to be 85.
Re: Speed of the camera
The LV code doesn't use MJPGs. It does SW-timed acquisitions of JPGs. Each of these needs to provide authentication, and this is done first with FRC/FRC. If that fails, two more attempts are made just to support older cameras. Anyway, because a camera that isn't set up will fail on its first attempt each time, you can get much better frame rates by doing the camera setup.
If you are doing your own authentication, this doesn't apply.
As for the compression penalty, it is a surprise and a mystery. I suppose it is because the ASIC pipeline isn't deep enough. Who knows. The default for the camera is 30, by the way. I was just pointing out that there is a bad side effect to setting the camera compression to zero or really small numbers. Similarly, be cautious with numbers near 100.
Greg McKaskle
Re: Speed of the camera
Just wondering, what exactly do you guys mean by "Hz"?
Re: Speed of the camera
Quote:
Re: Speed of the camera
When running the code locally on a laptop, the frame rate improved dramatically with compression ~85 instead of 30, although I remember seeing a slowdown on the cRIO when increasing the compression a lot.
Does anyone know why the default code doesn't use the MJPEG stream?
-jonathan
Re: Speed of the camera
The default code initially used the MJPG stream when prototyping. This worked fine until something caused the cRIO to fall behind on the images. At that point, the TCP session will queue up images waiting for the cRIO to read them. It would at times have up to five seconds of images sitting in the queue. This of course caused a huge lag.
The other approach is to have the cRIO ask each time it is ready for an image by using the JPG cgi script. Comparing the two approaches, they were very close in how they behave. Due to individual preferences, the LV code ended up going with the SW-timed JPG approach and the C code ended up doing the MJPG stream.
Greg McKaskle
Re: Speed of the camera
Who is getting 30 Hz, and not 10000 Hz? (Not that I don't trust that person, but I am inclined to doubt that nothing is wrong with that tracking code.) I'm up to 20 Hz and I was wondering if those 30s are real 30s.
Re: Speed of the camera
With enough decimation, sure. But for me, a more normal rate for the gimbal tracking two color at small size is around 20.
Greg McKaskle
Re: Speed of the camera
Quote:
In any case, our current code processes images around 17-24 frames/second. Pretty usable, but I'm trying to make it faster.
Re: Speed of the camera
Our code runs at about 99 Hz: it loops through, processes, waits 1/100 second, and repeats. Very smooth and fairly fast. Our only trouble is overshooting the target, but we're almost done.
Re: Speed of the camera
Quote:
Re: Speed of the camera
Quote:
Edit: midPinkY = par.Center_mass_y (or something like that)
Re: Speed of the camera
I'm having trouble calculating the time it takes the camera to track the target.
I did a series of cases in LV which check if the current state of the camera is "Search-Begin" or "Searching", and on the first run of the case, it saves the time in milliseconds. The code keeps running, and once the state is "Tracking", it saves the current time again. Then I display the difference between the tracking time and the initial time in an indicator.
Some of the first times that I run the code, it works and shows around 200 to 400 milliseconds, and sometimes it shows that the camera is already in the tracking state on the first iteration, and therefore the display shows 0.
How would you measure the time it takes for the camera to track? I'd like to do this check without having the front panel open so it will be faster.
Thanks,
Re: Speed of the camera
There is an output from the Find Two Color VI that shows the time spent analyzing each image. This isn't the entire task, but it is the most expensive step. So if you watch that number you'll get a pretty good idea of the processing cost. Note that this doesn't count the time spent waiting for the camera to return an image, which usually happens in parallel, but sometimes it isn't finished when the processing is.
The other reason I mention this is because timing a diagram where things can happen in parallel is sometimes a bit tricky. If you open the Find Two Colors VI, you can see on the left and right the milliseconds calls and the subtraction. The extra sequences and wires branching off here and there are to make sure that one time happens real early, and that the other happens last. If you aren't careful, the thing you are timing happens, then you take two times and subtract. Also, especially with loops involved, the loop could run some, then you start your time. One time "some" is none, and you get a good timing. The next time "some" is most, and you end up with a small number.
The most foolproof way to time a diagram is to put a sequence around the entire diagram, then take the time in a sequence frame before and after. It is just distracting enough that most people do it without the monster sequence.
Greg McKaskle
Re: Speed of the camera
So you are saying the first sequence frame gets the initial time, and the other sequence frame gets the time when we reach the "tracking" mode, and then it subtracts the two times? :confused:
Re: Speed of the camera
I wasn't being specific. Time once just before and once just after the thing you want to time. This delta is the time the code in between took.
If you instead want to time subsequent visits to the same code, you can have just one milliseconds call and a feedback node/shift register/local. You subtract the last time from the new one, then update the stored time.
Greg McKaskle
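The same two timing patterns, sketched in C++ rather than LabVIEW for the C teams following along (the clock calls here are standard C++; the WPILib/VxWorks equivalents would differ): bracket a block with a timestamp immediately before and after it, or difference successive iterations against a stored previous timestamp.
Code:
#include <chrono>
#include <cstdio>
#include <thread>

static long long NowMs() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}

int main() {
    // Pattern 1: bracket only the code you care about, nothing else inside.
    long long t0 = NowMs();
    std::this_thread::sleep_for(std::chrono::milliseconds(25));    // stand-in for processing
    long long t1 = NowMs();
    std::printf("one pass took %lld ms\n", t1 - t0);

    // Pattern 2: delta between successive visits (the shift-register/local idea).
    long long last = NowMs();
    for (int i = 0; i < 5; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(40)); // loop body stand-in
        long long now = NowMs();
        std::printf("iteration period: %lld ms (~%.1f Hz)\n",
                    now - last, 1000.0 / (now - last));
        last = now;                                                 // update the stored time
    }
    return 0;
}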
Re: Speed of the camera
Quote:
-TheDominis
Re: Speed of the camera
Quote:
Either way it will get patched eventually.
In other news, I am also running the default tracking C++ code (160x120) at ~20 Hz. I experimented with cropping, and the gains were negligible. (This could be due to a lighting issue that I did not realize at the time.) This backs up what Greg has been saying: make sure your target is well lit.
Re: Speed of the camera
We've been having some problems relating to the TCP connection, and we were wondering if anyone could help. Although our image processing was going fast enough, it was taking up to ~1.5 seconds for image acquisition through the TCP connection because it was being opened, read, closed, etc., every time in the loop. When we took the TCP open/close out of the LabView loop, we discovered that it wouldn't work for more than one image (probably because the camera's server was closing the connection). When we looked at the camera's online interface, we could see near-seamless, near-realtime video. Is there any way to get around this (keep the connection open somehow)? Our processing is working fine; the images are just coming too slowly for any useful processing.
Thanks for any help anyone can give.
Re: Speed of the camera
Quote:
IDK, just a thought...
Re: Speed of the camera
The overhead of the TCP stuff is not much. It would certainly not cause 1.5 secs of delay. My guess is that you have lots of panels open. Close all windows, open only the top panel, and do your timings again. Another reason that your camera may be slow is that you didn't do the camera setup to create the FRC/FRC account.
Greg McKaskle
Re: Speed of the camera
Quote:
I don't know specifically what this individual is doing, but I don't doubt his numbers. I considered writing the custom image handling code, but I ended up working on the rest of the bot instead.
We found that at 15 FPS non-threaded we were reasonably good at tracking, though we still have some tweaking to do. So far, I've only put in about an hour to the camera tracking code, but it seems solid. I'm considering switching it to a separate thread though, since it has slowed down the unit significantly.
The one thing I have noticed looking through the image handling code is that it is quite inefficient. There are multiple techniques that could be used to speed up the processing. Unfortunately, the "Image" struct wasn't all that well documented (that I could see), so I decided to just use the Target.cpp stuff. I think that next pre-season I might write up a better library and share it here; hopefully we'll be able to get much better image processing performance than the WPI stuff.
Re: Speed of the camera
OK, I have a question then. I am not well versed at all in programming, so I am asking the community. Is it possible to get adequate real-time tracking using custom code on the cRIO, or is a coprocessor necessary? By adequate I mean being able to find a moving target while we are moving, and directing our turret to follow the target without a delay. If this is possible to do on the cRIO, I would love to hear an answer.
Re: Speed of the camera
With custom code, there would be no problem with that. The cRIO processor is MORE than capable of real-time-ish (obviously 30 FPS max) tracking of a moving object.
We are somewhat able to do it using the twoColorTrackingDemo code, though it isn't as smooth or as fast as we would necessarily want. One of the limitations is the efficiency of the search functions, which aren't terribly fast as implemented in the demo/included code. I have done much more complex tracking code using a much worse processor in the past, with full 30 FPS.
Re: Speed of the camera
So more specifically, we have a turreted shooter capable of firing 4+ balls per second. Would we be able to write tracking code using the cRIO that could follow a moving target accurately enough that we don't miss many, if any, shots within say 7 ft? It would also have to give us a distance value at the same time.
Do you see any reason at all to use a coprocessor running an integrated Nvidia 9300 plus two much higher quality webcams? Or can the cRIO do everything we are asking of it? Mechanically, our shooter and turret can follow anything, and we are trying not to handicap them in any way: we want to be shooting at the 4 balls/second rate constantly and shooting on the fly, not having to wait for our opponent to stop in order to fire.
Re: Speed of the camera
Here's my $0.02 coming from a control systems perspective...
Most people who want to use the camera for tracking are going to want to use the camera data as an input to a feedback loop (generally a PID controller). Analog (continuous) PID controllers are generally great at controlling systems with low-order dynamics. However, when we digitize the controller, we start to degrade this performance. You really want your sample rate to be 10-20x the highest frequency you want to track. With 15 FPS from the camera, this means that a lot of high-speed maneuvers can cause you to lose track of the target.
Obviously, getting the FPS up is going to help. But there are other things we can do. Consider a PID loop where the input is the offset in the camera frame of the target centroid, and the output is the speed of the turret motor. This is the most direct way to do tracking, but it suffers from the drawbacks above.
But what if rather than commanding motor speed, we command motor position? Use an encoder or pot for a position control feedback loop on your turret (i.e. make it into a servo). Then use the camera to tell you where you need to point.
Example: The centroid is 5 pixels left of center. You can do some trig to determine how many degrees this corresponds to. Now use your high-speed turret PID loop to go to that position. When another processed frame comes through, you can send a refined position. The only real challenge here is getting the turret sensor reading at the precise moment the image is captured, not when the processing results are ready.
With a perfect mapping from pixel offset to degrees, as little as a single frame should be enough to get your shooter/dumper pointed in the right direction. Even if it isn't perfect, the refined commands as the camera captures more images will get you on target.
PID control for turret position. P control mapping camera data to turret position commands. Nested control systems are beautiful things.
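A bare-bones sketch of the nested arrangement described above (the gain, the field-of-view number, and the toy plant are all hypothetical placeholders, not a tested controller): the camera result is converted from a pixel offset into an absolute angle setpoint, and a faster inner loop drives the turret toward that setpoint using an encoder or pot reading.
Code:
#include <cstdio>

// Outer "loop": runs only when a processed frame arrives (~15 Hz). Converts the
// target centroid's pixel offset into an absolute turret angle setpoint.
double PixelOffsetToSetpointDeg(double turretAngleAtCaptureDeg,
                                double centroidX, double imageWidthPx,
                                double horizontalFovDeg) {
    double offsetPx  = centroidX - imageWidthPx / 2.0;        // + means target is right of center
    double offsetDeg = offsetPx * (horizontalFovDeg / imageWidthPx);
    return turretAngleAtCaptureDeg + offsetDeg;               // where the turret should point
}

// Inner loop: runs fast (e.g. 100-200 Hz) against the encoder/pot reading.
// Plain proportional control here just to show the structure; a real turret
// would use full PID with output clamping.
double TurretSpeedCommand(double setpointDeg, double measuredDeg, double kP) {
    return kP * (setpointDeg - measuredDeg);
}

int main() {
    // Hypothetical numbers: 160-px-wide image, 47-degree horizontal field of view.
    double setpoint = PixelOffsetToSetpointDeg(/*turret at capture*/ 10.0,
                                               /*centroid x*/ 95.0,
                                               /*image width*/ 160.0,
                                               /*fov*/ 47.0);
    double measured = 10.0;
    for (int i = 0; i < 5; ++i) {                             // a few inner-loop ticks
        double cmd = TurretSpeedCommand(setpoint, measured, 0.05);
        measured += cmd;                                      // toy plant: angle follows the command
        std::printf("setpoint %.2f deg, measured %.2f deg, cmd %.3f\n", setpoint, measured, cmd);
    }
    return 0;
}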
Re: Speed of the camera
Quote:
PS: We have 3 nested control loops controlling the shooter's speed ;)
Re: Speed of the camera
Assuming a moving turret, it might be helpful to know when the image was grabbed by the camera, rather than when the image processing was completed. The Get icon for the image processing sleeps waiting for notification of an image arrival. The Get then decompresses the image. Noting the time at that point, before processing, will give a timestamp a fixed amount away from the camera capture time. The image processing will not be very deterministic, so I wouldn't wait until after the processing.
The rough time estimates for decoding the JPEG are 8 ms for small, 22 for medium, and 100 for large. You can obviously measure this in more detail. The TCP transmission will be just a few additional ms, and then you are at the end of the image capture time. Another approach is to dig into the vision lib and write the timestamp into a global each time an image arrives. It would be great if the JPEGs were timestamped. I'm pretty sure they aren't.
This timing probably doesn't matter much unless you want to calculate the lead amount for the turret. Then, knowing the image frame time for each position setpoint will help with estimating where the next one will be, or with correcting for the time elapsed since the image get, and perhaps with taking into account the response of the turret.
Anyway, those are hopefully ways of reducing the error caused by timing jitter from the image processing.
Greg McKaskle
Re: Speed of the camera
Quote:
This is helpful advice. Thanks
Re: Speed of the camera
1 Attachment(s)
I've uploaded the C++ and C# versions of Team 2152's image processing software. The C++ code compiles in our magical project, but has not been tested.
The C# program was made in Microsoft Visual Studio 2008. I'm not sure what framework this requires (3.5 perhaps). It performs all the actions that the C++ code does and contains the beta ball tracking software.
-TheDominis
Re: Speed of the camera
Quote:
Re: Speed of the camera
Quote:
Re: Speed of the camera
He's not using C# on the robot; he wrote the same algorithm in C# so he could run it in Windows.
The code is pretty clean, and I can see why it is faster than the abomination that comes with the Two Color Tracking code.
Re: Speed of the camera
Quote:
-TheDominis
Re: Speed of the camera
Quote:
Quote:
Re: Speed of the camera
Quote:
-TheDominis
Re: Speed of the camera
Quote:
Re: Speed of the camera
I doubt you have math1.cpp, so I would comment out line 8 and lines 166-169, and add the following at line 424.
Code:
//line 8 - #include "math1.cpp"
Code:
CleanValues(); //at line 424
-TheDominis
Re: Speed of the camera
Quote:
Re: Speed of the camera
Quote:
-TheDominis
Re: Speed of the camera
1 Attachment(s)
I come with gifts of code that actually attempts to find the target! I did a sloppy job with the transition from C# to C++ :*(. I want a robot to test on...
-TheDominis
Re: Speed of the camera
Looking through the code I see a few things that jump out at me. On the good side, you have some expressions that ID the two target colors in RGB space. This will probably be faster than HSL comparisons, but be sure to test if it will be accurate. Be sure to test it against a full color spectrum from fully saturated, pastels, dark colors, etc. It will also be harder to tune if/when the lighting color changes, but otherwise it is a good approach.
On the worrisome side, it looks like you are starting up the camera with a compression setting of 0. In my testing, that always resulted in a slower framerate and skipped images. If you see issues, move that up to 20% or so; just watch out for the numbers near 0 and 100.
Greg McKaskle
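For anyone following along, this is the general shape of an RGB-space test like the one described above (the expressions and constants below are invented for illustration, not taken from the attached code; real thresholds must be tuned against images captured under your own lighting).
Code:
#include <cstdio>

// Illustrative RGB tests for a "pink over green" style target. The constants are
// placeholders; tune them against your own images, and verify against pastels,
// dark colors, and fully saturated colors as suggested above.
struct RGB { unsigned char r, g, b; };

bool LooksPink(const RGB& p) {
    return p.r > 180 && p.b > 120 && p.g < p.r - 40;   // red+blue heavy, green suppressed
}

bool LooksGreen(const RGB& p) {
    return p.g > 150 && p.g > p.r + 40 && p.g > p.b + 40;
}

int main() {
    RGB sample{230, 60, 160};                          // an arbitrary pinkish pixel
    std::printf("pink=%d green=%d\n", LooksPink(sample), LooksGreen(sample));
    // A pink run found directly above (or below) a green run in the same columns
    // is the two-color signature the demo code looks for.
    return 0;
}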
Re: Speed of the camera
Quote:
We haven't actually tested the camera FPS since our hardware team is somewhat on the slow side in comparison to the software team. This is also why I wrote my algorithms in C#. I'll make sure to keep that in mind.
-TheDominis