Looking for experience re: POV cameras

I’d like to poll the CD hive mind:

We have a POV camera at the top of our elevator which is indispensable for collecting cubes quickly in either “landfill” zone. We’re using Axis M10 series cameras (which we also used in 2017) because they’re very easy to implement and have much lower latency than the USB webcams we ran through the RIO in 2016. I estimate our latency is currently around 1/4 second.

The issue is that our (very demanding) drive team wants even less camera lag during matches :slight_smile: Does anyone here have experience optimizing Axis cameras for latency? Have you found that using a USB camera through a co-processor (not the RIO) gives better results? Is there another technology (Limelight, cell phone, MIPI camera through a Jetson) that we should look into?

Thanks in advance.

We use an ACEPC to process our camera and have little (if any) lag. However, our drivers have not used the camera feed for driving at all, and we have reason to believe that the increased bandwidth from the camera caused the autonomous mishap in QF 1-1 at FLR.

Finger Lakes Regional QF 1-1: https://youtu.be/080XP99KlgI

Perhaps reducing the resolution could decrease latency. That might be a nice place to start if you don’t want to replace your camera.

I’ve toyed around with the M1011/M1016 cameras we have, and generally it’s the video compression the camera has to do that drives up latency, due to onboard CPU limitations, so it’s a delicate trade-off between compression latency and network bandwidth. Ideally, you’ll use the lowest possible resolution with low compression, but the Axis cameras will never be perfect. They’re simply not designed for extreme low-latency situations, and the fact that you’ve gotten yours down to ~250 ms is pretty amazing.

Also, I’m a strong believer in never using the Rio for any kind of camera processing. Its sole purpose should be to run the robot, and using up its rather sparse CPU power for moving images around just doesn’t sound like a good idea. Just hook up a Raspberry Pi and run a UV4L server on it; they pull half an amp and leave your Rio free to deal with more important things, like telling your motors what to do.
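For anyone who hasn’t used UV4L before, the setup is roughly a one-liner once the raspicam driver and the uv4l-server package are installed. The flags and stream path below are from memory of the UV4L docs rather than from the post above, so double-check them against uv4l --help before trusting them:

uv4l --driver raspicam --auto-video_nr --encoding mjpeg --width 640 --height 480 --framerate 30

With uv4l-server loaded, the MJPEG stream should then show up over plain HTTP (by default on port 8080, something like http://raspberrypi.local:8080/stream/video.mjpeg).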

IIRC, MJPG gives better latency than H264 on the Axis M1011. We used 320x240 (blown back up on the DS side), 10 FPS, and very heavy compression (70 I think) to get bandwidth down to where we could run four MJPG cameras. Not suitable for framing, but good enough to drive by. Setting the exposure time is also important for latency. I think we used 1/100.
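For reference, those settings can be requested straight in the stream URL via Axis’s VAPIX parameters; something along these lines (the exact path and supported parameters vary a bit by model and firmware, so treat it as a sketch):

http://<camera-ip>/axis-cgi/mjpg/video.cgi?resolution=320x240&fps=10&compression=70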

We have switched to Raspberry Pi + serial (ribbon cable) cameras. They do H264 compression on the GPU with < 20 ms latency and far better quality vs. bandwidth than anything we ever got with the Axis 20X or M10XX. We network them directly to the DS, bypassing the RIO entirely.

Interesting. Our team experimented with the idea of using a Pi (and possibly its tiny ribbon camera) for streaming images to the driver station. We were primarily concerned about bandwidth, particularly the 7 Mbps limit enforced by the FMS. Do you find this to be an issue at all? Are you using a web server running on the Pi to stream images to the DS?

1293 ran a LifeCam at Smoky. Pretty much all exchange runs the entire tournament, with all the glass distortion that comes with it, and they were running 7–9 cubes just fine when other gremlins (a battery bungee in the wheels, a code issue, the intake arms somehow getting knocked funny, being told to play D) didn’t stall them out. Low res, high frame rate. YMMV, but I think it’s 100% worth trying on a practice robot.

We are feeding the ribbon cable camera via raspivid into GStreamer and sending the video over UDP to GStreamer pipelines on the DS laptop. Raspivid will let you set whatever H264 bitrate you want, and we can get four acceptable-quality video feeds inside 3–4 Mbps. Run something like the following on the RPi:

raspivid -t 0 -b 800000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 ! udpsink host=<ds-ip> port=5801

And catch it on the DS with:

gst-launch-1.0 -v udpsrc port=5801 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink

(The above was hand-typed so please watch for and forgive any typos.)

This RPi-side pipeline seems to go pretty much straight from the camera to the GPU and out the network, so it barely touches the RPi’s CPU and is extremely fast. Any other processing on that side can slow it down, possibly quite a lot. There may be other good configurations, but this is what I settled on after much experimentation. I know there are many ways to make it go slower :slight_smile:
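If anyone needs to trim bandwidth further on the RPi side, raspivid also takes resolution and framerate flags in the same pipeline, e.g. (same hand-typing caveat as above):

raspivid -t 0 -w 640 -h 480 -fps 30 -b 800000 --inline -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 ! udpsink host=<ds-ip> port=5801

(--inline inserts the H264 SPS/PPS headers into the stream so a receiver can join mid-stream; config-interval=1 on the payloader serves the same purpose over RTP.)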

I also failed to find any way to package this video stream into a form readable in a web page that didn’t destroy latency. If anyone has found one I’d be interested in learning about it.

This is great information, thanks! Our students looked at creating a Web server on the Pi that would grab the camera images and serve them up to a browser on the driver station, but as you can imagine, latency and frame rate were serious issues.

While it will be slower, one option for streaming video from a Pi is to compress it as an MJPG stream and serve it at a normal web address (e.g. http://raspberrypi.local:1181/stream.mjpg). That way, you should be able to view it like a normal Axis camera in the DS without a separate application to view the stream.
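If anyone wants a concrete starting point for that approach, here’s a minimal sketch using robotpy-cscore on the Pi (my assumption of a library, device index, and port, not something from the posts above):

# Serve the Pi camera as an MJPEG stream over HTTP using robotpy-cscore
# (assumes the camera shows up as /dev/video0, e.g. via the bcm2835-v4l2 module)
from cscore import UsbCamera, MjpegServer, VideoMode
import time

camera = UsbCamera("picam", 0)                                   # /dev/video0
camera.setVideoMode(VideoMode.PixelFormat.kMJPEG, 320, 240, 15)  # low res, modest FPS

server = MjpegServer("picam_http", 1181)   # stream at http://<pi-ip>:1181/stream.mjpg
server.setSource(camera)

while True:          # keep the process alive; cscore serves from background threads
    time.sleep(1)

That should be viewable in the DS camera widgets as an ordinary MJPG URL, with no custom receiver needed.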

I’m trying this right now with a Jetson TX1. The streaming part is done, but I still need to check whether receiving the stream in Shuffleboard actually works. You can find our vision processing and streaming code here.

On the bright side, the delays in our code shouldn’t slow down the actual vision processing. One thread handles the processing and NetworkTables output, while a second thread runs the streaming web server.
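For anyone curious, the layout is roughly the following (a simplified Python sketch of the structure, not our actual code; the names, port, and roboRIO address are placeholders):

# Rough sketch of the two-thread layout: main thread does vision + NetworkTables,
# a second thread runs the streaming web server
import threading
import time
from http.server import HTTPServer, BaseHTTPRequestHandler
from networktables import NetworkTables   # pynetworktables

NetworkTables.initialize(server="10.0.0.2")        # placeholder roboRIO address
table = NetworkTables.getTable("vision")

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):                               # placeholder; a real handler
        self.send_response(200)                     # would write MJPEG frames here
        self.end_headers()
        self.wfile.write(b"stream placeholder\n")

def run_stream_server():
    HTTPServer(("", 5800), StreamHandler).serve_forever()

threading.Thread(target=run_stream_server, daemon=True).start()

while True:
    # grab a frame, find the target, publish the result
    table.putNumber("target_x", 0.0)                # placeholder value
    time.sleep(0.02)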