Vision Processing Limits

So our team is pretty certain we're going to use the new Raspberry Pi image for image processing this year. Our question is: what is the best camera to use to operate during the sandstorm and to efficiently process images to locate the vision targets on the field?

Our first choice so far is the Microsoft LifeCam 3000, since we have plenty of them lying around and have used them in the past.

We are aware of the 7 Mb/s bandwidth limitation for data to the roboRIO, so what's the best combination of fps and resolution to use? Also, does the same limitation apply between the camera and the Raspberry Pi? For example, could we get a 720p 30 fps stream to process?

Thanks in advance.

I'm not too sure about the best way to set up your cameras, but one thing to note is that the bandwidth limit was lowered to 4 Mb/s this year:

R67. B. Bandwidth: no more than 4 Mbits/second.

There is no bandwidth limit within the robot, so definitely not between the camera and the processor (particularly since that link is USB!).
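For a sense of scale (rough arithmetic, assuming an uncompressed 2-byte-per-pixel YUY2 format): a 640x480 stream at 30 fps is 640 × 480 × 2 bytes × 30 ≈ 147 Mb/s, and even 320x240 at 30 fps is roughly 37 Mb/s. A USB link on the robot handles that easily, but anything sent to the driver station has to be compressed (e.g. MJPEG) and/or shrunk to fit under 4 Mb/s.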

The LifeCam is a decent choice for vision processing.

You don't need to worry about bandwidth limitations if all the vision processing is done on the Raspberry Pi (or elsewhere on the robot) and you don't send the stream to the driver station. The bandwidth limit only applies to traffic between the robot and the driver station.

Vision processing for FRC doesn't need a high resolution - I believe the Limelight processes at 160x90. Low-resolution images are faster to process than high-resolution ones, so use the smallest images you can get away with.

You may find that the RPi cannot handle that size and framerate. The RPi is not the fastest processor out there, but it will do for simple processing.

Alright, thanks for the responses - I now understand that the bandwidth limit only applies between the robot and the driver station. Another quick question: I just plugged a LifeCam directly into the robot to experiment with resolutions and frame rates on the driver station display, but with the small quality-adjustment box I can't get it above 7 or 8 fps, no matter where the slider is. Why is this? Can I lock it manually in the code?

That limit probably comes from the camera. Cameras only support a fixed set of modes - resolutions paired with native framerates. So a camera might do 640x480 at 30 fps but only 10 fps at 1080p. It's just a limit of the camera hardware.

The stream page (which should be roborio-2783-frc.local:1181 for you) will show a table of these settings.

You can set the camera's output resolution and a target framerate (which won't be reached if it's above the camera's supported framerate at that resolution) in your robot program.

I have not used this in official play, but the PlayStation Eye camera is hardware built for computer vision and boasts some great frame rates at usable resolutions (the bottleneck with this camera would probably be whatever device it's piping frames to). It is also cheap and plays well with Linux.

From Wikipedia:

The PlayStation Eye is capable of capturing standard video with frame rates of 60 hertz at a 640×480 pixel resolution, and 120 hertz at 320×240 pixels,[1] which is “four times the resolution” and “two times the frame-rate” of the EyeToy, according to Sony.[10] Higher frame rate, up to 320×240@187 or 640×480@75 fps, can be selected by specific applications (Freetrack and Linuxtrack).

How exactly do you do either of these? I looked at the stream page and none of the settings there were resolution or framerate, and the CameraServer class in our Java code has no methods to change them. Also, the LifeCam's specs are 720p at 30 fps - we checked by plugging it into my computer and using the Camera app.

The CameraServer.startAutomaticCapture() function returns a UsbCamera object. That object has methods to set the resolution, FPS, and frame format, as well as other settings.
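For example (a minimal sketch against 2019-era WPILib Java - the 320x240 at 15 fps values are just placeholders, so pick a mode your camera actually supports):

import edu.wpi.cscore.UsbCamera;
import edu.wpi.cscore.VideoMode;
import edu.wpi.first.wpilibj.CameraServer;

// In robotInit():
UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
camera.setResolution(320, 240); // must match one of the camera's supported modes
camera.setFPS(15);              // target framerate; capped by the camera hardware

// Or set pixel format, resolution, and framerate in a single call:
camera.setVideoMode(VideoMode.PixelFormat.kMJPEG, 320, 240, 15);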

Have you used one of the Raspberry Pi cameras with the CameraServer or UsbCamera objects?
Does it look just like a USB camera?

These are pretty fast… I was thinking of trying one for vision processing but am not certain whether it will work with the libraries.

Thanks for all of the work you’ve done on the image! This has been a real time saver.

Yes, it works and looks just like a USB camera.

Nice, that worked perfectly. However, I can't get it to set both a framerate and a resolution; if I use startAutomaticCapture twice, it treats them as two streams. Also, I don't see a method to set the compression - using the slider in the driver station is the only way I can get a reasonable Mb/s.

You need to store the UsbCamera object you get, like so:

UsbCamera cam = CameraServer.getInstance().startAutomaticCapture();

Then call the relevant methods on your cam.

Call ((MjpegServer) CameraServer.getServer()).setCompression() to force a particular JPEG quality, or setDefaultCompression() to set the JPEG quality used when the dashboard doesn't specify one. Note that if the camera itself is set to MJPEG mode, you'll see higher CPU usage and latency than when running the camera in an uncompressed mode, since the software has to both decompress the image and recompress it.

Note: the getServer() method only works for the first camera; if you have multiple cameras, you'll need to use getServer(camera name). If you need this, I recommend pairing it with startAutomaticCapture(camera name) rather than relying on the built-in name generation.

Here is an example of using the Raspberry Pi image and two cameras: one LifeCam 3000 and one 170-degree fisheye.

http://wpilib.screenstepslive.com/s/currentCS/m/85074/l/1027798-the-raspberry-pi-frc-console

Hmm, it's telling me "MjpegServer cannot be resolved to a type". Also, getServer() isn't static, so I assume you meant to call it on the CameraServer object instantiated before?

Did you import edu.wpi.cscore.MjpegServer first?

Yeah, call getServer() on your CameraServer instance.

Ah, I see - Ctrl+. wasn't working for some reason. Now it's telling me the method setCompression() isn't visible, and when I look at the WPILib source it shows it's not public. Sorry for so many questions; I just assume this is going to be faster than trying to figure it all out on my own.

Oops! That's a bug - those functions should be public. As a workaround, use .getProperty("compression").set(value) instead of .setCompression(value).
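Putting it all together, something like this should work (the quality value of 30 is just an example - compression is the JPEG quality, 0 to 100):

import edu.wpi.cscore.MjpegServer;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;

CameraServer cs = CameraServer.getInstance();
UsbCamera camera = cs.startAutomaticCapture();

// getServer() returns the MJPEG server created for the first camera.
MjpegServer server = (MjpegServer) cs.getServer();

// Workaround until setCompression() is made public:
server.getProperty("compression").set(30); // force JPEG quality to 30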