Quote:
Originally Posted by writchie
Depending on how good the jpg encoder is, i.e. speed and file size at high compression settings, it looks like it might be possible to handle higher quality 320x240 images (for better tracking), scale them to 160x120, compress them with a high setting, and ship them off to the dashboard while maintaining 25 fps.
Since the jpg decoding speed is pretty slow, I would expect the encoding speed to have similar issues. Have you measured how much processor time is required to encode at high compression settings?
|
Since our entire setup was implemented on the Thursday night/Friday morning of the competition, I didn't have an opportunity to do extensive testing. As soon as I get access to the code, I'll try to run some benchmarks.
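Our actual code is LabVIEW, so I can't paste it here, but the benchmarking I have in mind is just timing the encoder at each compression setting and converting that to a frame budget. Here's a rough Python sketch of that harness; zlib stands in for a real JPEG encoder (its compression level plays the role of the JPEG quality setting), and the frame sizes are made up:

```python
import time
import zlib

def benchmark_encoder(encode, frames, repeats=5):
    """Return average seconds per frame for the given encode callable."""
    start = time.perf_counter()
    for _ in range(repeats):
        for frame in frames:
            encode(frame)
    elapsed = time.perf_counter() - start
    return elapsed / (repeats * len(frames))

# Stand-in "camera frames": 320x240 worth of 8-bit pixels each.
frames = [bytes(range(256)) * (320 * 240 // 256) for _ in range(4)]

# zlib.compress stands in for the JPEG encoder; swap in the real
# encoder call to measure actual cost at each quality setting.
for level in (1, 6, 9):
    avg = benchmark_encoder(lambda f, lvl=level: zlib.compress(f, lvl), frames)
    print(f"level {level}: {avg * 1000:.2f} ms/frame (~{1.0 / avg:.0f} fps max)")
```

The ms/frame number is the thing to watch: if it exceeds the frame period at your target framerate, that compression setting can't keep up.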
Quote:
Originally Posted by Greg McKaskle
Everybody seems to be doing great on this thread. I only have a few things to add. I've seen the camera timing get flaky when compression is set too high. I'm not sure where the threshold is or if it involves images size or is simply affected by compression factor. By flaky I mean that the image framerate will drop because some frames will take two periods to be sent.
|
For what we were doing, the video didn't need to be 25 fps; we settled for 3-5 fps (partly because of time constraints, along the lines of "does it work?" "yes" "ok, awesome, don't break it"). All of our images were split across multiple frames. We had some trouble with dropped UDP packets (roughly 1 out of every 50), which meant we dropped a frame every 2-3 seconds, but that didn't seem to be a problem.
It should be noted that our application was essentially a monitor to tell our operator when to push the "dump" button, since our robot was tall and opaque; we weren't actually trying to drive the robot in real time with this. In my opinion, no matter how good you get the video framerate/quality, it's not going to beat just watching the field for driving, given the camera's relatively narrow field of view. I don't say this to discourage anyone, just to make sure teams are realistic about what this can accomplish.
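For anyone curious how we tolerated the dropped packets: since ours was LabVIEW, here's a rough Python sketch of the same idea. Each image is split into datagram-sized packets with a small (frame id, packet index, packet count) header, and the receiver simply discards any frame that comes back incomplete rather than trying to recover it. The header layout and 1400-byte payload size are illustrative choices, not what we actually used:

```python
import struct

MAX_PAYLOAD = 1400  # keep each datagram under a typical Ethernet MTU

def chunk_frame(frame_id, jpeg_bytes):
    """Split one compressed image into packets: 6-byte header + payload.

    Header is (frame_id, packet_index, packet_count), unsigned shorts.
    """
    total = max(1, (len(jpeg_bytes) + MAX_PAYLOAD - 1) // MAX_PAYLOAD)
    for i in range(total):
        payload = jpeg_bytes[i * MAX_PAYLOAD:(i + 1) * MAX_PAYLOAD]
        yield struct.pack("!HHH", frame_id, i, total) + payload

def reassemble(packets):
    """Rebuild one frame from its packets; return None if any were lost."""
    pieces = {}
    expected = None
    for pkt in packets:
        _frame_id, index, total = struct.unpack("!HHH", pkt[:6])
        expected = total
        pieces[index] = pkt[6:]
    if expected is None or len(pieces) != expected:
        return None  # a datagram was dropped; skip this frame entirely
    return b"".join(pieces[i] for i in range(expected))
```

Dropping a whole frame on any missing packet is what makes the 1-in-50 packet loss show up as one lost frame every few seconds, which was acceptable for a monitor like ours.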
Quote:
Originally Posted by Greg McKaskle
Also, modifying the image and recompressing on the cRIO doesn't seem like a good use of CPU time. If you can find a single camera setting so that the image can be piped to the dashboard, that will have minimal impact on other tasks and will get the highest framerate to the dashboard.
|
I agree, I'll have to do some experimenting with the camera compression settings.
However, I found that by using parallel while loops, I could keep the bottleneck at the network transfer speed (granted, we were still using 10 frames to transfer one image). Again, I'll have to do more rigorous testing, but it seems like most teams won't be doing much more with CPU time than running a basic driver control loop, and using parallel while loops with appropriate attention to timing should keep everything running smoothly.
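The parallel-loop structure is just a producer/consumer pair: one loop grabs and processes frames, the other does the (slower) network send, with a small bounded queue between them so the acquisition side never blocks the control loop. This Python threading sketch shows the shape of it; a real version would replace the list append with the actual UDP send, and the names here are made up:

```python
import queue
import threading

def acquisition_loop(out_q, n_frames):
    """Producer: stands in for the camera-capture/processing while loop."""
    for i in range(n_frames):
        frame = ("frame-%d" % i).encode()
        try:
            out_q.put_nowait(frame)   # never block the robot's control loop;
        except queue.Full:            # drop the frame if the sender lags
            pass
    out_q.put(None)                   # sentinel: no more frames

def sender_loop(in_q, sent):
    """Consumer: stands in for the UDP send loop (the intended bottleneck)."""
    while True:
        frame = in_q.get()
        if frame is None:
            break
        sent.append(frame)  # a real version would do sock.sendto(frame, addr)

frame_q = queue.Queue(maxsize=4)  # small buffer keeps dashboard latency low
sent = []
sender = threading.Thread(target=sender_loop, args=(frame_q, sent))
sender.start()
acquisition_loop(frame_q, 20)
sender.join()
```

The small `maxsize` is the important design choice: when the network can't keep up, you want to drop stale frames at the source rather than let latency build up in an unbounded buffer.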
--Ryan