What are teams doing about the Logitech C920's minimum resolution problem?

As has been documented before and (unfortunately) confirmed by my team, the Logitech C920 does not work below a 640x360 image when plugged into the roboRIO. Since this limits our framerate to only 2 or 3 fps, we switched back to using a LifeCam. How are other teams dealing with this issue?

You might want to check your bandwidth usage again, as 640x360 at 3 fps should only be about 2 MB/s, even before the C920's compression.
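
(Back-of-the-envelope, assuming uncompressed 24-bit frames: 640 × 360 px × 3 bytes/px × 3 fps ≈ 2.1 MB/s, and the C920's onboard compression would put the actual stream well below that.)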

Edit: For our setup, we run our cams through a Pi for image processing. I believe the Pi also sets up the camera feeds for the Rio to send to the driver station.

For streaming from our C920, we are just bypassing all of the WPILib stuff and using a web browser to view the raw camera stream from our RPi. Personally, I am not a fan of the CameraServer classes. Luckily, there are many resources available online for camera streaming, at least for the RPi. There are simple solutions like installing one of the MJPEG streamer apps and making sure it runs on boot. Whatever solution you go with, try eliminating CameraServer and playing around with the other camera streamer libraries that are out there.

I know it's not a direct answer to the question you're asking, but we have been using the Logitech C270 camera since fall 2017, after using the C920 during the 2017 season, and haven't looked back. For FRC it's basically as good as the C920, and much cheaper.

So it sounds like everyone who's using a C920 is plugging it into an RPi and not the Rio.

Generally any other coprocessor will work too. Just let the Rio focus on what it's supposed to be doing (making your robot work); coprocessors also tend to be easier to implement a streamer on than the Rio.

I just say RPi because it is a popular coprocessor in FRC and our team is currently using it.

As the author of the class and of the underlying cscore library, I would appreciate more detailed feedback. There are good reasons to use CameraServer over other approaches: it is significantly more robust to camera unplugging/replugging than any other library/tool I've found, and since it publishes stream information to NetworkTables, cameras show up automatically on dashboards, it supports camera switching, etc.

The cscore library is a very lightweight wrapper that uses V4L under the hood (with robust techniques for handling disconnects) and packages the frames for streaming just like mjpgserver does. The difference is that it's designed for integration with OpenCV image processing and, along with the CameraServer class, with NetworkTables for publishing camera stream information for FRC.
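
For context, here is a minimal sketch of what that OpenCV + NetworkTables integration looks like from Java. The package names are the 2019-era ones and may differ in other releases, and on a robot you would run this loop in its own thread rather than blocking startup.

```java
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.cameraserver.CameraServer;
import org.opencv.core.Mat;

public class VisionSketch {
  public static void run() {
    // startAutomaticCapture() also publishes the stream info to NetworkTables,
    // so dashboards can find the camera automatically.
    UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    camera.setResolution(320, 240);

    // Pull frames into OpenCV and republish a processed stream.
    CvSink sink = CameraServer.getInstance().getVideo();
    CvSource output = CameraServer.getInstance().putVideo("Processed", 320, 240);

    Mat frame = new Mat();
    while (!Thread.interrupted()) {
      if (sink.grabFrame(frame) == 0) {
        continue; // timeout or disconnect; cscore handles reconnects internally
      }
      // ... OpenCV processing on 'frame' would go here ...
      output.putFrame(frame);
    }
  }
}
```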

Have you tried the FRCVision image for the rPi? What issues did you run into?

Regarding the C920, the RoboRIO version of Linux has an issue (the problem is at that layer, not with the CameraServer library), but it does work fine on rPi.

Do you think it’s possible for this issue to be resolved?

I don’t know, that’s a question for NI, as it’s their custom-built kernel. I would say it’s a near-certainty this issue will not be fixed in the middle of the season.

Right, I didn't think it would be. Thanks for the info; we're going to look into using an RPi with our C920 as a last-ditch effort, since we already have them.

Peter, do you by any chance have a recommendation for a low-latency, high-frame-rate (low resolution okay) camera that works "plug-and-play" with CameraServer and also supports manual exposure? We on 1519 have been having trouble finding a camera with these characteristics, and I suddenly realized when reading this post of yours that you would be a fantastic person to ask for recommendations on a camera that meets these requirements!

As a bonus, are you aware of one with a wide field of view? (Say 120 degrees or more?)

(PS: Manual exposure is desirable, as it allows the camera to be normally underexposed (dark) so that retro-reflective tape illuminated by a green LED ring light is properly exposed and doesn't just saturate to white.)

(PPS: We have previously used the Axis 206 to meet the above requirements (well, except the CameraServer one), but it’s getting to be nearly impossible to buy any more, even on eBay.)

Oh, I believe that I overstated myself. I think that the CameraServer classes are great for someone who is looking to get started; being able to get a simple stream working in one or two lines of code is a huge plus. First off, I thought that the C920 problem had to do with the CameraServer classes; I just made a wrong assumption based on some of the other info online. It's not that there is anything wrong with the classes, it has more to do with finding the most efficient solution. Also, I have had the same issue, if you can even call it an issue, as Ken: it seems like there is always a trade-off among the things Ken listed. The one improvement I would suggest for the camera-related parts of WPILib would be adding the option to view a regular MJPEG stream in Shuffleboard.

It’s primarily a question of what the Linux driver supports, although there is some CameraServer interaction. CameraServer, just like every other camera program on Linux, uses V4L to access the camera properties. Every property exposed through V4L is available through CameraServer and exposed through the CameraServer webpage.

With respect to exposure, there are generally two issues at play: interdependencies/ordering between camera properties (USB cameras generally have one setting for manual vs automatic and a second setting for the actual exposure value), and strange scaling or restrictions on the actual value (the Microsoft cameras in particular have only certain values that work, and they are non-linear). The CameraServer setExposureManual() function tries to handle the “common” cases, but if that function doesn’t work, it’s possible to access the properties manually via the getProperty() function. And again, the webpage exposes the properties directly so you can experiment interactively to see if there’s a particular sequence that works.
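
As an illustration, here is a rough Java sketch of that fallback path. The property names below are typical UVC/V4L names, not a guarantee; check your camera's CameraServer web page for the actual names and value ranges.

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.cameraserver.CameraServer;

public class ExposureSketch {
  public static UsbCamera configure() {
    UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();

    // Common case: handles the auto/manual toggle plus the exposure value.
    camera.setExposureManual(10); // placeholder value; cameras scale this differently

    // Fallback: set the raw V4L properties directly if the call above doesn't stick.
    // "exposure_auto" / "exposure_absolute" are typical UVC property names, not a
    // guarantee; check the camera's CameraServer web page for the real names/ranges.
    camera.getProperty("exposure_auto").set(1);
    camera.getProperty("exposure_absolute").set(10);

    return camera;
  }
}
```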

I will say that the current code doesn't do a good job of ordering property settings upon initial camera connection, and that's the most probable cause of exposure not being successfully set. Given the criticality and high utilization of cameras this year, fixing this at the right level is likely too invasive/risky to do during the season, but there may be a relatively easy workaround: hook into the camera connection event and set the exposure in that callback.
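
One simple way to approximate that workaround without using the event API directly is to watch the connection state from robotPeriodic() and re-apply exposure whenever the camera (re)connects. A rough sketch (the exposure value is just a placeholder):

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  private UsbCamera camera;
  private boolean wasConnected = false;

  @Override
  public void robotInit() {
    camera = CameraServer.getInstance().startAutomaticCapture();
  }

  @Override
  public void robotPeriodic() {
    boolean connected = camera.isConnected();
    if (connected && !wasConnected) {
      // The camera just (re)connected; properties set before connection may
      // not have stuck, so re-apply manual exposure here.
      camera.setExposureManual(10); // placeholder value
    }
    wasConnected = connected;
  }
}
```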

I have a wide variety of cameras for testing; I will do some experimentation specifically on exposure this weekend and follow up.

Thanks, Peter, for the quick response. We'll be doing experimentation here too and will post if we find any good solutions. However, our experimentation will take a while, as we're having to order likely candidates, not having "a wide variety of cameras" other than the Logitech C920 and the Axis 206.

Thanks again!

In my testing so far, I think this one fits the bill for the above:
https://www.amazon.com/gp/product/B01E8OWZM4/

Works great with CameraServer, manual exposure works, 180 degree FOV, smallest frame size is 320x180, 30 fps, 1.2 Mbps, seems to be pretty good latency (I don’t have a measurement setup though).
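
For reference, requesting that mode through the Java CameraServer is just a couple of calls (a sketch; LabVIEW is a different code path, as discussed below):

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.cameraserver.CameraServer;

public class WideFovCameraSketch {
  public static UsbCamera start() {
    // Request the smallest mode this camera offers (widescreen only) at 30 fps.
    UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    camera.setResolution(320, 180);
    camera.setFPS(30);
    return camera;
  }
}
```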

We have this exact camera and are not seeing quite these results on the field. Do you by chance have a recommendation for us on resolution, frame rate, and compression? We program in LabVIEW and use the built-in dashboard sliders to adjust camera settings.

Thanks

Thanks so much! We now have one of these on order and will post how it works out after it arrives and we give it a go.

Sorry, I was answering a specific question and should have qualified my response with more information. Just to make sure we are talking about the same thing: what I was looking at was an rPi with the Java/C++ CameraServer. I will double-check with the Rio and Java/C++, but if you're talking about using this camera with LabVIEW robot code, the LabVIEW implementation of CameraServer is completely different and may indeed have completely different behavior and performance.

One thing to note with this particular camera is that it does not offer any 4:3 resolutions, only widescreen aspect ratios, and the LabVIEW dashboard doesn't support that, so scaling is happening somewhere (quite possibly on the server side).

Adjusting compression with the sliders also means software compression is being used on the server side, and with the Rio that will definitely slow things down.
