Camera resolution

What resolution and fps does everyone run their cameras at? Our programmer is worried about getting dinged for using too much bandwidth, so we have been keeping it ridiculously low. Just wondering what others are using.

No one will ding you for using too much bandwidth this year. It's automatically throttled on the field.
If you use too much, you'll just lose the picture or get jerky images from your camera.

Because some images will be more complex to compress than others, try to keep your normal bandwidth down around 5 Mbps.
That'll help keep the spikes under 7 Mbps.
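As a back-of-the-envelope sanity check, MJPEG bandwidth scales with resolution times fps, since every frame is a full JPEG. The ~15:1 compression ratio below is an assumption for illustration, not a measured value:

```python
# Rough MJPEG bandwidth estimate: every frame is a complete JPEG image.
# The 15:1 compression ratio of 24-bit RGB is a ballpark guess; real
# ratios vary a lot with scene complexity and JPEG quality setting.

def mjpeg_mbps(width, height, fps, compression_ratio=15):
    """Estimate MJPEG stream bandwidth in megabits per second."""
    raw_bits_per_frame = width * height * 24           # 24-bit color
    jpeg_bits_per_frame = raw_bits_per_frame / compression_ratio
    return jpeg_bits_per_frame * fps / 1_000_000

print(round(mjpeg_mbps(320, 240, 20), 2))   # → 2.46
```

With these assumptions, 320x240 at 20 fps comes out around 2.5 Mbps, the same ballpark as the figures teams report in this thread; busier scenes compress worse, which is where the spikes come from.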

Our two webcams (front and rear view) seem to be streaming at 160x120. Not amazing but good enough to show the drive team what they need to see.

A 320x240 live stream at 20 fps is running under 3.5 Mbps.

Our 320x240 stream is running at 8 fps in under 1 Mbps. Does anyone know a way to increase the frame rate in the code? Our camera is just a Microsoft LifeCam, so I know it can do a higher fps, but I can't figure out how to change it.

Depends on the type of code you’re using.

LabVIEW has a function in the WPI Lib (or at least it did a year or two ago, the last time I looked) that lets you change various camera settings, including FPS and resolution (it might even be in the “Start” VI or the “Begin” code, if I remember correctly). I’m not familiar with how C++ or Java handle this, but I suspect it’s something similar.
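For text-based teams, the same knobs are exposed through the CameraServer/cscore API. A minimal robotpy-style sketch, assuming the cscore bindings; this only runs inside robot code on the roboRIO, so treat it as a shape to follow rather than a tested snippet:

```python
# Hedged sketch: set a USB camera's resolution and FPS via robotpy/cscore.
# Only runs as part of robot code on the roboRIO, not on a desktop as-is.
from cscore import CameraServer

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()   # first USB camera, e.g. a LifeCam
camera.setResolution(320, 240)
camera.setFPS(30)                     # request 30 fps; the camera may cap lower
```

Java is nearly identical through `CameraServer.getInstance().startAutomaticCapture()` and the same `setResolution`/`setFPS` calls on the returned camera object.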

This year was the first we really found a need for a first-person view from the robot, so our driver could locate gears on the other side of the field. So we did some research into how to get the best quality back inside 2 Mbps with decent latency.

The only way to get anything we deemed reasonable inside the 2 Mbps was to use h.264 encoding. The Rio just isn’t capable of doing this while running robot code and keeping latency (lag) out of the system. The answer was in off-the-shelf security cameras. Most of them have 720p (20 fps) output over RTSP with h.264 encoding. This worked well, but we dropped the resolution down to VGA (still at 20 fps) to cut the latency to just a few ms.
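For a rough sense of why h.264 fits in that budget, a common rule of thumb estimates bitrate as width × height × fps × bits-per-pixel, with something like 0.1 bpp for h.264 on moderately complex scenes. Both the bpp value and the numbers below are ballpark assumptions, not measurements from this camera:

```python
# Rule-of-thumb h.264 bitrate: pixels per second times bits-per-pixel.
# bpp ~0.1 is a rough guess for h.264; actual encoders vary widely with
# scene motion, quality settings, and GOP structure.

def h264_mbps(width, height, fps, bpp=0.1):
    """Estimate h.264 stream bandwidth in megabits per second."""
    return width * height * fps * bpp / 1_000_000

print(round(h264_mbps(1280, 720, 20), 2))   # 720p at 20 fps → 1.84
print(round(h264_mbps(640, 480, 20), 2))    # VGA at 20 fps  → 0.61
```

Even 720p20 lands under 2 Mbps by this estimate, and dropping to VGA leaves a wide margin, which matches the experience described above.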

This is the specific camera we used: http://www.microcenter.com/product/470325/IP_Security_Camera_with_PoE
It was a pretty simple job to take it apart and make a smaller enclosure so it fit better.

ILAMtitan, curious if you’re displaying this camera in the SmartDashboard (i.e. do you have a SmartDashboard plug-in that could potentially do image processing on the individual frames) or are you displaying the camera video in some other application?

Since we’re using 254’s Nexus5 solution for vision, we don’t NEED to have this one go to the driver’s station.

We honestly only got it on a day or two before the Dallas regional to play with. The driver station didn’t pick it up natively (it should, since it’s an RTSP stream, but maybe it doesn’t like the h.264 encoding; regardless, I didn’t play with it much), so we tried streaming with VLC. For some reason that added a lot of lag (probably some kind of buffering), so we ended up using the built-in web server and streaming client that the camera itself hosts. It’d be nice to have it work in the driver station, so I plan to toss it over to the programmers to see what they can do with it.

We’ve been having issues with USB webcams and I’d like to go back to ethernet cameras. Thanks for this link.

What remained of the camera after you got rid of the shell? What kind of footprint?

How does the H.264 work? I’d like to feed this camera to GRIP. Will GRIP deal with it, or will I need some other software to receive the feed, decompress it, and then feed that to GRIP? I’ve seen H.264 in a number of posts and see that it’s compression-related, but I’m not sure what the client(?) needs to do to deal with it.


We were able to just disconnect the IR LEDs on the front, as well as the PoE power supply that wasn’t needed anymore, so the whole thing got pretty small. The final enclosure that was printed for it is about 2"x2"x1" (I’d estimate 10% of the original size). Photo of it with a Lego Darth Vader for scale is attached.

Regarding GRIP, I don’t think it will work, because GRIP expects an MJPEG stream, not h.264. However, the streamer engine for GRIP seems modular, and swapping it out might not be too bad. Re-encoding seems like it’d be computationally expensive; someone with more experience will have to chime in. The extra resolution you get from a camera like this is only beneficial as a driver aid, though, and vision processing can be done with much less data. It would still be interesting to see h.264 decoding in GRIP, since a low-res stream would use so little bandwidth.

Wow! That was a lot of extra packaging. I do zero with vision and am trying to get a grasp on it. Am I right in reading that h.264 is a different type of stream than MJPEG? If so, what do you use to see it on the dashboard?

Essentially, MJPEG sends a full JPEG image for every frame. In h.264 there is interframe compression, so only the data that changed since the last frame needs to be sent. Tom Scott has a really good video about compression here: https://www.youtube.com/watch?v=r6Rp-uo6HmI

And some high-level text about the differences from Axis here: https://www.axis.com/us/en/learning/web-articles/technical-guide-to-network-video/compression-formats
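The interframe idea above can be shown in a toy example. This is nothing like real h.264 internals (which use motion estimation, DCT blocks, and entropy coding), but it illustrates why only the changed data costs bandwidth:

```python
# Toy illustration of interframe compression: instead of resending every
# pixel each frame (MJPEG-style), send only the pixels that changed since
# the last frame, as (index, value) pairs.

def delta_frame(prev, curr):
    """Return only the pixels that differ between two frames."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous frame plus a delta."""
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame

prev = [10, 10, 10, 10, 10, 10]
curr = [10, 10, 99, 10, 10, 42]    # only two "pixels" changed

delta = delta_frame(prev, curr)
print(delta)                        # → [(2, 99), (5, 42)]
assert apply_delta(prev, delta) == curr
```

A mostly static camera view sends tiny deltas, while a fast-moving robot view changes most pixels every frame, which is why h.264 bandwidth is so scene-dependent.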

The downside is that the compressed video is a lot more CPU-intensive to receive and decode, making any vision processing difficult, especially compared to a single JPEG frame. Based on five minutes of Googling, it’s possible to decode the stream with FFmpeg and then hand it to OpenCV (which GRIP uses), but the actual implementation is outside anything I’ve tried.

Since the dashboard didn’t see the stream natively on the first try, we just went down the path of least resistance and used the web interface on the camera.

That’s some great information. Thanks! I think the C615 does H.264. I’ll verify tonight, and if so, we’ll look for the feed directly from the camera rather than through SmartDashboard. And we’ll have to play around some more with the ones connected over USB.

The camera I use the most needs low delay, and all I need to see is the gear. I’ve set it to 360x240 at 30 fps. Currently I think it’s just sending PNGs constantly, and there is still a lot of delay. The other camera I keep at 180x120 and 15 fps; I rarely use it, so it takes a lot less bandwidth.