Camera resolution might be too much for bandwidth?

Hi everyone,
My team is using the Axis M1011 camera at 640x480, 20 fps, and (I think) 30% compression.
Could there be a problem with the bandwidth limit, or is it OK?
If it's too much, would lowering the fps and increasing the compression be enough? (We really want to keep the current resolution.)

Thanks!

If you are using the default or LV dashboard, it includes an indication of the bandwidth being used and an LED that will be green, yellow, or red based on the value.

I thought there was a ScreenSteps page that showed how to display the bandwidth usage printed on the image, but I couldn’t find it this morning.

I suspect that your camera settings are over the limit and will need either more compression, a lower framerate, or both.

The bandwidth is proportional to the framerate and the number of pixels, so using the next smaller size will cut usage by 4x. Compression is not a linear scale, so it is harder to predict.
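To make the scaling rule concrete, here is a minimal Java sketch that extrapolates from one measured configuration. It assumes bandwidth is strictly proportional to framerate and pixel count at fixed compression (which, per the caveat above, is only an approximation), and uses the ~1.5 Mb/s figure for 640x480 @ 10 fps reported below:

```java
// Back-of-envelope estimate: scale a measured bitrate to a new
// resolution/framerate, assuming bandwidth is proportional to the
// framerate and the pixel count (compression held constant).
public class BandwidthEstimate {
    static double estimateMbps(double measuredMbps,
                               int refW, int refH, int refFps,
                               int newW, int newH, int newFps) {
        double pixelRatio = (double) (newW * newH) / (refW * refH);
        double fpsRatio = (double) newFps / refFps;
        return measuredMbps * pixelRatio * fpsRatio;
    }

    public static void main(String[] args) {
        double ref = 1.5; // ~1.5 Mb/s at 640x480 @ 10 fps (measured below)
        // Doubling the framerate doubles the estimate: ~3 Mb/s.
        System.out.println(estimateMbps(ref, 640, 480, 10, 640, 480, 20));
        // The next smaller size has a quarter of the pixels: ~0.375 Mb/s.
        System.out.println(estimateMbps(ref, 640, 480, 10, 320, 240, 10));
    }
}
```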

Greg McKaskle

I’m not sure what compression we’re using, but we can get 640x480 @ 10 fps using roughly 1.5 megabits per second. We’ve verified it via Wireshark and our own counter in our Java display. It spikes to 2.2 Mb/s every so often, though, so we give it plenty of overhead.
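In case it helps anyone replicate the counter, here is a minimal sketch of the idea, assuming the camera feed is read through a plain InputStream; the class and method names are made up for illustration:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

// Wraps the camera stream and tallies bytes as they are read;
// sampling the tally once per second yields megabits per second.
public class CountingStream {
    private final InputStream in;
    private final AtomicLong bytes = new AtomicLong();

    public CountingStream(InputStream in) { this.in = in; }

    public int read(byte[] buf) throws IOException {
        int n = in.read(buf);
        if (n > 0) bytes.addAndGet(n);
        return n;
    }

    /** Call once per second: megabits consumed since the last call. */
    public double sampleMbps() {
        return bytes.getAndSet(0) * 8.0 / 1_000_000.0;
    }
}
```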

That is a good benchmark. 20 fps would double the bandwidth at the same compression, which I’m guessing is higher than 30.

I know the bandwidth numbers for the field were largely based on camera usage. The intent was to leave enough bandwidth for multiple lower-resolution cameras per robot or a single 640x480 stream.

Greg McKaskle

Thanks guys!

I’ll lower the fps and check.

From your experience, was 10 fps good enough for the drivers to use? (Our camera is mainly used for vision, but assisting the drivers would be nice as well.)

Just for comparison, we use the following settings:
320x240, 15 fps, 30 compression.
Our bandwidth utilization is around 1.5 megabits per second.

This image is sufficient for on-board image processing and driver use.

Results after changing:
640x480 resolution, 10 fps, 30% compression
takes up 3 Mb/s (according to the dashboard).
The changed settings don’t seriously affect the vision processing or the drivers. :)

I believe the default encoding for the M1011 camera is MJPEG. The camera also supports H.264 and MPEG4, which use dramatically less bandwidth.

Does anyone know if it’s possible to configure the stream from the camera to the DS to use MPEG4? Doing so directly on the camera via its web config interface has no effect on the stream used by the DS - it appears the DS is overriding the on-camera settings, probably via VAPIX or ONVIF.

At present, I’ve only found MJPEG support in the driver station. MJPEG is basically just a stream of standalone JPEG images (http://en.wikipedia.org/wiki/Video_compression_picture_types), so the driver station just provides a JPEG decoding function.
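To illustrate why MJPEG is so easy to consume, here is a minimal Java sketch of one way to pull individual frames out of the raw stream by scanning for the JPEG start (0xFF 0xD8) and end (0xFF 0xD9) markers. This is an assumption about one workable approach, not how the DS itself is implemented, and it skips over the HTTP multipart framing a real client would see:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.imageio.ImageIO;

// MJPEG is just back-to-back JPEGs, so a simple reader can scan the
// byte stream for the SOI (FFD8) and EOI (FFD9) markers and decode
// whatever falls between them as a standalone image.
public class MjpegReader {
    public static BufferedImage nextFrame(InputStream in) throws IOException {
        ByteArrayOutputStream frame = new ByteArrayOutputStream();
        boolean inJpeg = false;
        int prev = in.read(), cur;
        while ((cur = in.read()) != -1) {
            if (!inJpeg && prev == 0xFF && cur == 0xD8) { // SOI marker
                inJpeg = true;
                frame.write(0xFF);
            }
            if (inJpeg) {
                frame.write(cur);
                if (prev == 0xFF && cur == 0xD9) {        // EOI marker
                    return ImageIO.read(
                        new ByteArrayInputStream(frame.toByteArray()));
                }
            }
            prev = cur;
        }
        return null; // stream ended mid-frame
    }
}
```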

H.264 is more complex and would also introduce some delay in the picture (the amount depends on the specific usage). I’m not sure you would want that even if you could get it.

No, we can only use MJPEG. The reason is that H.264 and the other formats are pure video formats, whereas with MJPEG each frame is an individual picture that can be pulled out of the video feed and processed immediately. H.264 and the other formats only encode the changes between frames (which is why they use less bandwidth), meaning that if you wanted to process a frame in those formats you would first need to process a dozen other frames to get that one frame.

In our case, the DS isn’t doing any processing. It’s just displaying the stream. We have a simple crosshairs overlay configured on the camera via its web interface that is used for manual aiming by the driver/shooter.

Does anyone know what implementation the DS uses to render the stream? Is it Windows Media, the Axis Media Control (AMC), QuickTime, something else?

The FIRST LabVIEW updates have a sub-VI in the getcamera VI to decode the JPEG to BMP. It then uses LabVIEW to render the BMP on the DS. How LabVIEW does it is a question for NI. I would assume the C++ and Java versions have a similar function.

The LV Dashboard decodes using NI-IMAQ. It takes in the JPEG stream and an allocated image. If the buffer contains a valid JPEG, it updates the image. The display is the built-in LV display that is specific to NI IMAQ image format, supports ROI and overlays, etc.

I believe the Java smart dashboard uses a media control, but that is a guess.

NI-IMAQ doesn’t include a codec for H.264 and the other formats since they are rarely used in industrial settings. If you are going to use H.264 with LV, delete the IMAQ display and use an ActiveX media control instead, update the camera CGI request, and glue the two together.
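For anyone attempting that, a rough sketch of the two camera requests involved. The VAPIX paths below are from memory and should be verified against Axis’s documentation; 10.TE.AM.11 stands in for the usual FRC camera address:

```java
// Illustrative VAPIX request URLs (verify against the camera's docs;
// exact paths and parameters may differ by model and firmware).
public class CameraUrls {
    // MJPEG over HTTP -- what the dashboard normally requests:
    static final String MJPEG_URL =
        "http://10.TE.AM.11/axis-cgi/mjpg/video.cgi"
      + "?resolution=640x480&fps=10&compression=30";

    // H.264 over RTSP -- what a media control would open instead:
    static final String RTSP_URL =
        "rtsp://10.TE.AM.11/axis-media/media.amp?videocodec=h264";
}
```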

Greg McKaskle

There is a ScreenSteps page at WPI on Measuring Bandwidth Use which you might find useful.

The FMS Whitepaper Rev.A also includes a table of expected camera use which I’ve reproduced below for convenience.

The FMS table suggests peak use of 4.3 Mb/s for the 640x480, 10 fps, compression 30 configuration reported above, somewhat more than the 3 Mb/s seen on the dashboard.

Unsurprisingly, we found that the actual bandwidth used varies by about 20-30% depending on the specific image sequence being compressed. Counter-intuitively (at least to me), though, we noticed that bandwidth was higher for steady images and lower while the camera was moving quickly.

Edit: Daniel’s explanation of MJPEG above supports this observation. In a moving image each frame is blurry, so it has less detail and can be compressed more; meanwhile there is no compression gain from the frame-to-frame similarity of a steady image, because each frame is compressed independently.

[cameradata.png: FMS Whitepaper table of expected camera bandwidth use]

Thanks Greg. That’s exactly what I wanted to know, for future reference.

We did some experiments with MPEG-4, using VLC to view the video on the DS. The good news is that it does dramatically reduce bandwidth. The bad news is that it also dramatically increases latency. I think this is inherent in MPEG-4, and it renders the video pretty much unusable for driving. :(

We did fine reducing the resolution and frame rate and using extreme compression on the MJPEG streams. Not pretty to look at, but we got three usable video streams while staying well within the bandwidth limit.