Camera Speed Options

So this offseason my team finally implemented some working vision solutions. After years of not getting it done, or putting it off, we're finally there.

I’m starting to see that Limelight cameras, particularly the Limelight 2, are very popular, running at around 90 FPS.

Currently we have Raspberry Pis and Nexus 5s in our arsenal. I would like to use one of the coprocessors we already have rather than spending $400 apiece on Limelights. I’m not afraid of writing the code for ours. Naively, I would think we should have similar, if not better, CPU power than the Limelight.

Was anyone running a Raspberry Pi able to get 90 FPS or better? What was your setup?

Is anyone running an Android coprocessor able to get that frame rate? It’s difficult to determine if it’s possible with so many variables.

Any thoughts would be great! Thanks!

Bandwidth was a huge issue this year. Most teams streaming video back to the driver station found they needed to reduce the resolution, the frame rate, or both to keep bandwidth reasonable, especially when streaming more than one camera (pretty common). Reducing the frame rate to 20 or 30 FPS generally makes very little difference in your ability to drive from the video stream, and dropping the resolution to around 240p or 360p also seemed to have very little impact on the visibility of the field elements.
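For reference, here's a minimal sketch of dialing that in with WPILib's CameraServer (assuming robotpy-cscore on the Pi; the exact numbers are illustrative, not what any particular team ran):

```python
from cscore import CameraServer

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()  # first USB camera on the Pi
camera.setResolution(424, 240)       # ~240p is plenty to see field elements
camera.setFPS(20)                    # 20-30 FPS barely changes driveability
```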

We were running 2-3 video signals through our rasPi (early in the season we had three, but we later found one of them wasn't much use and removed it). Mainly we used the rasPi to run the vision navigation system, sending steering commands to the robot as a “driver assist” function to align the robot with the vision targets while scoring. It had plenty of processing power.
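For illustration, a minimal sketch (not this team's actual code; the table and key names are made up) of how a Pi-side vision process might publish a steering correction over NetworkTables for a driver-assist mode:

```python
from networktables import NetworkTables

# "10.0.0.2" stands in for your roboRIO's address (10.TE.AM.2 on the field)
NetworkTables.initialize(server="10.0.0.2")
table = NetworkTables.getTable("vision")  # table/key names are hypothetical

def publish_steering(offset_deg: float):
    # The robot-side loop reads this each tick and blends it into the
    # drive command to center the robot on the target
    table.putNumber("steering_error_deg", offset_deg)
```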

I really wish any of these cameras came with a “greyscale” mode to reduce bandwidth. You could sample just the green pixels in the Bayer filter mask to cut bandwidth to 33%, and reduce lag by not having to de-Bayer the sensor.
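Nothing stops you from doing the green-only trick in software if you can get at the raw sensor frame (most USB cameras won't expose it). A rough sketch, assuming an RGGB mosaic handed to you as a 2-D numpy array:

```python
import numpy as np

def green_plane(raw: np.ndarray) -> np.ndarray:
    """Half-resolution greyscale built from the two green sites in each
    2x2 RGGB Bayer cell, skipping the de-Bayer step entirely."""
    g1 = raw[0::2, 1::2]  # green pixels on the red rows
    g2 = raw[1::2, 0::2]  # green pixels on the blue rows
    # Average the pair; a single channel is far less data than full RGB
    return ((g1.astype(np.uint16) + g2.astype(np.uint16)) // 2).astype(np.uint8)
```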

We ran with an rPi and got 70 or 80 FPS consistently at 240p. The resolution didn't affect visibility, and it worked great for us.

This thread may be of interest to you:
A Guide to Recording 660FPS Video on a $6 Raspberry Pi Camera


Did your team actually use any of this?

Ah. Yeah, in this case these are just going to sit on board and not stream video back to the driver station, so I'm not worried about that.

What camera did you use?

We didn’t; we just used a Limelight. But I figured it was a relatively applicable read for custom high-FPS vision on an rPi.


Microsoft LifeCam HD-3000

Hmm… 70 FPS is pretty good. I just don't like the idea of being at a disadvantage to the Limelight lol

Ah. Not looking to spend that money… I've gotta imagine you have similar hardware on the Pi.

We didn't use any vision processing besides compression, but that option is totally available, not to mention it costs a quarter of the price.

The extra FPS would only matter if your control loop is also running at 90 Hz. Most teams run the default 20 ms loop time (50 Hz), so anything above 50 FPS wouldn't really change performance.
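To make the arithmetic concrete, here's what that default loop looks like in a stock robotpy TimedRobot (just an illustration of the 20 ms ceiling, not anyone's competition code):

```python
import wpilib

class Robot(wpilib.TimedRobot):
    # TimedRobot's periodic methods fire every 20 ms (50 Hz) by default,
    # so a 90 FPS camera delivers ~1.8 frames per tick and only the most
    # recent vision sample ever gets consumed.
    def teleopPeriodic(self):
        pass

if __name__ == "__main__":
    wpilib.run(Robot)
```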


This is absolutely true. There's no benefit to updating the navigation inputs to the main control loop faster than the loop can consume them.

I would also argue that the navigation parameters are probably not changing fast enough to need 50 FPS in the first place. I bet if you ran the system at 25 FPS you wouldn't notice any difference in the control system's behavior.

But if the rasPi is able to do all its processing to support a 50 Hz control loop, you might as well do it. It doesn't hurt anything as long as the coprocessor can keep up with its job.

Re: bandwidth, our team developed “a-ha” vision, where we streamed an image to the driver station at ~60 FPS and ~960x540 resolution. Fundamentally, the human drivers didn't need color.


I don’t remember the exact bandwidth stats, but I do remember that it used 1/70th the bandwidth of typical H.264 encoding.


Woah, that's really smart.

Odd thing to call it “typical” when H.264 is far from typical in FRC.

Is that doing something like edge detection -> image? How’d it look in matches? It seems like it would get really “noisy” if you were looking at another robot or just some bad lighting.

I don't have a lot of the fine technical details, but we weren't using raw edge detection; I believe we were using OpenCV and performing implicit surface detection, which dramatically reduced the noise in the image and generally kept objects intact even when the camera was moving at high speed.
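For anyone wanting to play with the general idea, here's a hedged sketch; since the exact pipeline wasn't shared, Canny is only a stand-in for whatever detector they actually used. The point is that a mostly-black outline frame compresses far better than color video:

```python
import cv2

def to_edge_frame(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # blur first to suppress noise
    return cv2.Canny(gray, 50, 150)           # white outlines on black

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    edges = to_edge_frame(frame)
    # A mostly-black single-channel frame encodes far smaller than the
    # equivalent color frame at the same resolution
    ok, jpg = cv2.imencode(".jpg", edges, [int(cv2.IMWRITE_JPEG_QUALITY), 40])
    print(len(jpg) if ok else "encode failed")
cap.release()
```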

I did find an early recording of the system! https://1drv.ms/v/s!AhWFwgNM15uw2RQuBS73hMytvZX6?e=BhofVd