High FPS Camera

I was looking at other teams and noticed that they are able to achieve high FPS on their computers. We were hitting 4-5 FPS. How can we improve it, and what hardware did your teams use?

For reaching 30-60 fps, it’s mostly a matter of pipeline optimization. Scaling down your input image and/or keeping the initial processing steps small works fairly well, since FRC vision pipelines can be tuned for near-ideal targets (highly reflective tape and lights on robots).
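To illustrate the downscaling point, here's a rough sketch (plain NumPy standing in for a real OpenCV pipeline; the 640x480 frame, the 2x downscale, and the brightness threshold of 200 are just example numbers):

```python
import numpy as np

# Simulate a 640x480 BGR frame from the camera
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

# Downscale 2x in each dimension by striding (cv2.resize would do this
# properly, but the idea is the same): 4x fewer pixels to process
small = frame[::2, ::2]

# Threshold for bright, reflective-tape-like pixels on the small frame
bright = small.mean(axis=2) > 200

print(frame.shape, small.shape)
```

Every later stage (thresholding, contour finding, etc.) then runs on a quarter of the pixels, which is usually where most of the FPS comes back.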

For 60-120 fps, the above still applies, but I’ve seen people use the PS Eye camera at 120 fps with great success. They did have some issues doing so on the roboRIO directly, so they used a co-processor like a Raspberry Pi or a Jetson instead.

Good luck!

One word…Limelight. :smiley:

Isn’t it out of stock?

Are you talking about vision processing, or the camera feed being displayed to the drivers?

Yeah, Limelight is currently out of stock. Last I heard they’re working on making more and will have a second shipment ready to ship sometime late week 1 or early week 2.

Either way, before obsessing over FPS, look at latency.

If your video is 120 fps but delayed by 300 ms, it will significantly impact your vision calculations (you need to account for that lag if you’re using vision to track a moving target, for example). Similarly for driving: that’s a lot of lag, and your driver’s performance will suffer.

Conversely, 10-12 fps at 50 ms latency means fewer rounds of vision calculation per second, but the latency is low enough that you may be able to ignore it, depending on your application. Likewise, your driver will have a far easier time using that camera to drive the bot.
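For the moving-target case, accounting for the lag can be as simple as projecting the target forward by the measured latency. A tiny sketch (the 300 ms latency, target velocity, and position are made-up example numbers):

```python
# Hypothetical numbers: a target moving at 2.0 m/s, seen through a
# camera pipeline with 0.3 s (300 ms) of latency.
latency_s = 0.3
target_velocity_mps = 2.0
measured_x_m = 1.0  # where the camera *saw* the target

# By the time the frame reaches your code, the target has moved on,
# so aim at the projected position, not the measured one.
predicted_x_m = measured_x_m + target_velocity_mps * latency_s
print(predicted_x_m)  # 1.6
```

At 50 ms the same correction would be 0.1 m instead of 0.6 m, which is why low latency can let you skip this bookkeeping entirely.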

First, what hardware and software are you using?

As others have said, there are two main elements: what processor you are running on, and what your pipeline is doing.

You can get a big speedup by picking a faster co-processor to run the vision software. A Jetson is probably the high end, with a price to match. A Raspberry Pi (make sure it is an RPi3) is probably the low end. In the middle are boards like the ones from ODROID, which are 2-3x faster than an RPi3, but there are many other options. (Be sure to check online benchmarks before sinking money into them.)

In terms of a pipeline, you might want to break down the various steps and time them. You can start by reducing the image size (best done in the camera) to 320x240. Then turn off parts of the processing and see what is taking the time.
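Breaking the pipeline into timed stages might look something like this (the stage names and `time.sleep` workloads are placeholders; swap in your actual capture/threshold/contour steps):

```python
import time

def capture():    # placeholder for grabbing a frame
    time.sleep(0.001)

def threshold():  # placeholder for HSV thresholding
    time.sleep(0.002)

def contours():   # placeholder for contour finding
    time.sleep(0.002)

# Time each stage separately to see where the frame budget goes
timings = {}
for name, stage in [("capture", capture), ("threshold", threshold), ("contours", contours)]:
    start = time.perf_counter()
    stage()
    timings[name] = time.perf_counter() - start

total = sum(timings.values())
for name, t in timings.items():
    print(f"{name}: {t * 1000:.1f} ms ({t / total:.0%} of frame time)")
```

Whichever stage dominates is the one worth optimizing (or turning off) first.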

Can you provide some more details about the setup you are using?

If you’re looking for higher FPS in order to do FPV-style driving or other driver visual assistance (e.g. seeing a gear on the ground on the other side of the airship), you’ll need something that can do the stream encoding.

The RIO is only capable of creating an MJPEG stream, which is literally just a bunch of JPEGs streamed back to the DS. Using an offboard solution, you can encode the video into H.264 or another modern codec, so you can get HD video at high FPS without eating all your bandwidth. This can be a Pi or an NVIDIA board if you already have a USB webcam.
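Some rough arithmetic on why MJPEG eats bandwidth (the per-frame JPEG size here is a ballpark guess; real sizes depend on compression quality and scene content):

```python
# Ballpark: a 640x480 JPEG at moderate quality is roughly 30 KB.
jpeg_bytes = 30 * 1024
fps = 30

# MJPEG sends every frame as a full JPEG, so bandwidth scales linearly
# with frame rate and frame size.
mjpeg_mbps = jpeg_bytes * 8 * fps / 1e6
print(f"MJPEG: ~{mjpeg_mbps:.1f} Mbit/s")

# H.264 mostly sends the *changes* between frames, so the same
# resolution and frame rate can fit in a small fraction of that.
```

That figure already pushes against the FRC per-team field bandwidth cap (check the current game manual for the exact limit), which is why inter-frame codecs matter for high-FPS driver cams.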

On 3005 last year, we used an off the shelf IP security camera. It has an integrated encoder and just ethernet out to the radio/switch on the robot. Then we just opened the built-in web interface to view the stream. We got about 15FPS at 720p with minimal latency issues. There were also options to tune the color so that objects stuck out more in the video.

This is the camera we used: http://www.microcenter.com/product/470325/IP_Security_Camera
It was pretty trivial to gut and remove the IR emitters and PoE board, then just wire in our own power from the robot.

We used a USB cam

Brandon from Limelight made a post earlier saying the Limelight should be back in stock around the middle of this week. If you are interested, you can sign up for updates on the Limelight website to get an email once they are back in stock.
