USB Vision Cameras

Just out of curiosity, what USB/IP cameras are teams using on their robots? I know there is the Limelight/Pi-with-PhotonVision solution, but our team is against it because it puts the expensive camera right in the line of fire. (Like how the Limelight needs to be right on a turret, or how this season everyone had them 2" inside the bumpers, where you can't really shield them.)

Our first choice was actually a Pi Zero W. Using showmewebcam, you can take a standard CSI Pi camera, a mini-CSI cable, the tiniest SD card (64 MB), and a Pi Zero, and get a very fast, stable USB camera; print a case around it and you have a webcam for ~$30. It held a stable 1080p30, but you can't push the FPS any higher. Our biggest fear was Pi Zero availability, so we scrapped it.

Then we tried a JeVois A33 camera. We thought that for $50 you get a really compact USB camera that could be mounted easily. We didn't need the smart features (we had a PhotonVision Pi buried in the robot); we'd just use it as a standard webcam. Well, the JeVois has a steep learning curve: the default camera mode initializes at a resolution you don't need (and shows a sample video), PhotonVision can't switch resolutions over NetworkTables to get to the one you want, and even after changing the startup scripts to the right defaults, CameraServer could only connect to it about a third of the time. It just became a messy project. I even found an old post from 2017 showing the JeVois connecting to the Rio, and while I could get the serial data out, the camera feed was just lost.

We ended up using an Arducam 4K 8MP IMX219. It worked, and we got a decent driver cam out of it, but nothing special. It had some frame-rate issues, probably closer to a stable 20-22 FPS. When we used it for AprilTag identification, we were seeing jumps of 1-2' in where we thought we were with the PhotonVision detection pipeline. I think it was because we never tuned the pipeline properly (e.g. trying different PoseStrategy choices), aka user error. We never went far down this path, so your mileage may vary (I think we need to try two cameras in the offseason to triangulate targets).
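For reference, here's roughly what wiring up PhotonVision's pose estimator with an explicit PoseStrategy looks like in Java, based on the 2023 photonlib API. Treat it as a sketch: class and enum names have shifted between PhotonVision versions, and the camera name and robot-to-camera transform below are placeholders.

```java
// Sketch only: PhotonVision's PhotonPoseEstimator as of the 2023 photonlib release.
// Names may differ in other versions (e.g. MULTI_TAG_PNP was later renamed
// MULTI_TAG_PNP_ON_COPROCESSOR). "arducam" and the transform are placeholders.
import java.util.Optional;

import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;

import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

public class VisionPose {
  private final PhotonCamera camera = new PhotonCamera("arducam");  // placeholder camera name
  private final PhotonPoseEstimator estimator;

  public VisionPose() {
    AprilTagFieldLayout layout = AprilTagFields.k2023ChargedUp.loadAprilTagLayoutField();

    // Where the camera sits relative to robot center. Measure this carefully;
    // a bad transform produces exactly the 1-2 ft pose jumps described above.
    Transform3d robotToCam =
        new Transform3d(new Translation3d(0.3, 0.0, 0.5), new Rotation3d());

    // The strategy is the main "knob": multi-tag PnP (when 2+ tags are visible)
    // is usually much more stable than the single-tag strategies.
    estimator =
        new PhotonPoseEstimator(layout, PoseStrategy.MULTI_TAG_PNP, camera, robotToCam);
  }

  public Optional<EstimatedRobotPose> getEstimate() {
    return estimator.update();  // empty when no usable tags are in view
  }
}
```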

My lesson out of all this is that finding webcams for robots is tough, mainly finding ones compatible with Linux/CameraServer… I'm actually thinking about running Windows next year just for the camera drivers (it really pains me to say it…). UVC compatibility means it "should" run on Linux, but half the time it won't. Also, don't use the HD LifeCams; they have a terrible processor in them and will drop to 7 FPS on you with any movement.

PS: Why do people run Limelights at 90 FPS? Most robot code runs at a 20 ms loop time (50 Hz), so anything faster seems like a waste. I could see a stable 60 FPS being a good target so you have new data every loop.

1 Like

I’m not sure I follow? How does whatever you’re doing solve this compared to PV/LL?

Bury the Pi deep in the robot, use a cheaper USB camera to film. Smaller/cheaper device out in the open that is easier to replace.

(Basically: do you want your $400 Limelight or a $50 USB cam taking the hit when robots collide? The CPU, especially if you use something bigger like a Jetson, should be protected.)

If you're using a Jetson or Pi, do NOT use the ribbon-cable cameras for FRC without a good permanent mounting solution. Those connectors are the most fragile, easiest-to-knock-loose connectors I have ever dealt with.

We've mounted Jetson Nanos with ribbon cameras upside down in a factory for ML data acquisition and labeling. That's a static environment, but just shaking the casing during the install can make one come loose. I can't imagine what would happen with full-speed defense during FRC.

2 Likes

I haven't really seen problems with the Limelight in the line of fire. We did kill one Limelight last year because we smashed it against the climbing bar like 5 times, but other than that it's been fine.

I think running a coprocessor as fast as possible is useful to reduce latency. If your camera runs at 50 fps but its frames arrive 0.01 seconds out of phase with your robot loop's timing, then every piece of camera data you get in your robot code will be 0.01 seconds stale. The higher the framerate you run, the lower the latency of the data your robot code actually uses (unless you manage to frame-sync your camera to the robot code's timing).
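For what it's worth, a minimal sketch of how that latency usually gets handled on the robot side (WPILib 2023-era API): timestamp the vision measurement with when the frame was captured, not when it arrived. The estimator type and the `latencySeconds` source are assumptions; use whatever your vision pipeline reports.

```java
// Sketch: latency-compensated vision measurement with WPILib's pose estimator.
// "latencySeconds" is whatever your pipeline reports (capture + processing time);
// fresher (higher-FPS) data still tightens the estimate even with compensation.
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.Timer;

public class VisionLatency {
  public static void addMeasurement(
      SwerveDrivePoseEstimator estimator, Pose2d visionPose, double latencySeconds) {
    // Rewind the measurement to the time the image was actually taken.
    double captureTime = Timer.getFPGATimestamp() - latencySeconds;
    estimator.addVisionMeasurement(visionPose, captureTime);
  }
}
```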

2 Likes

Note that while the Limelight can capture at 90 fps, it cannot process AprilTags in a useful fashion at the reduced resolution that mode requires (with some caveats). You could probably get it to do it, but you'd have like 4 ft of range. The 90 fps mode is more relevant for lower overall latency and for retroreflective targets, but those will be discontinued next year. I wouldn't be concerned about the LL getting hit; it's fairly robust and rarely sees action if it's high on the robot anyway.

For future years, I'd run some combo of PhotonVision and an Orange Pi 4 or 5, or a Raspberry Pi 4 if you have one in hand. You can find my performance testing on the LL2, LL3, and several custom setups here: Coprocessor - Google Drive

The Beelink tests show the widest variety of cameras. The OV9281 and AR0144 work best for global-shutter monochrome, but most 60 fps webcams do well if you need color.

A 60 fps webcam plugged into a Raspberry Pi will work quite well using either the WPILib image or PhotonVision. Just make sure to dial back the resolution.

The best solution I've seen so far: get yourself a dedicated Raspberry Pi (assuming you have one or two around), use the default WPILib image, and use one or two of the USB cameras from this list. Play around with the resolution and MJPEG compression settings; that's the biggest "knob" I've seen for keeping framerates high and latency low. Having it on a dedicated processor also gives you flexibility and avoids taking up RIO CPU cycles with camera processing.

A second-best option is to do all of that but on the RIO. It's solid, but I'd be careful pushing high resolutions or more than one camera. It can probably do it; you'll just hit a limit where it starts to fight for resources with robot code.
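For the resolution/FPS/compression knobs mentioned above, here's roughly what they look like in robot code if you go the RIO/CameraServer route (the WPILib Pi image exposes the same settings through its web dashboard and config file instead). This is a sketch against the 2023-style WPILib Java API, assuming a plain UVC webcam:

```java
// Sketch: CameraServer knobs for a driver cam.
// Lower resolution plus a modest MJPEG compression setting is the biggest lever
// for keeping framerate up and bandwidth/latency down.
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.MjpegServer;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.cscore.VideoSink;

public class DriverCam {
  public static void start() {
    UsbCamera camera = CameraServer.startAutomaticCapture();  // first USB camera found
    camera.setResolution(320, 240);  // dial this back before touching anything else
    camera.setFPS(30);

    // The MJPEG stream server created by startAutomaticCapture; forcing a
    // compression quality (0-100) keeps frame sizes predictable on the field.
    VideoSink sink = CameraServer.getServer();
    if (sink instanceof MjpegServer) {
      ((MjpegServer) sink).setCompression(30);
    }
  }
}
```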

As you saw in 2023… I'd avoid the JeVois. Great in theory, and at a price point we couldn't pass up; theoretically it's good for AprilTags now too. But the old one had enough quirks in its development environment to make it hard for FRC (and hard in a "this just isn't even fun to debug" way). I don't think the new one fixed those usability issues either. It's just not targeted at this market.

2 Likes

Run faster than the Rio loop because you can use multiple samples in pose estimation; more data points generally means an estimate closer to reality, which is why some teams get up to 4 frames in a single loop cycle. For cameras, even a LifeCam HD-3000 works great if you're on a budget, and there should be plenty lying around from old KoPs. If you're looking for something AprilTag-specific, monochrome with a global shutter at 1080p is probably the way to go.
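As a rough sketch of what "multiple samples per loop" can look like with a Limelight: NT4's readQueue() hands you every update since the last call, so each sample can go into the pose estimator with its own timestamp. The topic name and array layout depend on your Limelight firmware version, so treat the indices below as placeholders and check the Limelight NetworkTables docs.

```java
// Sketch only: drain every queued botpose sample each robot loop, so a 90 fps
// pipeline can contribute several measurements per 20 ms cycle.
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.networktables.DoubleArraySubscriber;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.PubSubOption;
import edu.wpi.first.networktables.TimestampedDoubleArray;

public class LimelightQueue {
  private final DoubleArraySubscriber botposeSub =
      NetworkTableInstance.getDefault()
          .getTable("limelight")
          .getDoubleArrayTopic("botpose_wpiblue")  // name varies by firmware/alliance
          .subscribe(
              new double[0], PubSubOption.keepDuplicates(true), PubSubOption.sendAll(true));

  /** Call once per robot loop; processes every new sample since the previous call. */
  public void consume(SwerveDrivePoseEstimator estimator) {
    for (TimestampedDoubleArray sample : botposeSub.readQueue()) {
      double[] v = sample.value;
      if (v.length < 6) continue;  // no valid pose this frame

      // Placeholder indices: x (m), y (m), yaw (deg) in the botpose array.
      Pose2d visionPose = new Pose2d(v[0], v[1], Rotation2d.fromDegrees(v[5]));

      // NT timestamps are microseconds of local FPGA time; you'd also subtract
      // the pipeline latency the Limelight reports for full compensation.
      estimator.addVisionMeasurement(visionPose, sample.timestamp / 1e6);
    }
  }
}
```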

Was a hard date actually set in regards to phasing out retroreflective targets? I know FIRST definitely announced they would be phasing them out when they announced AprilTags, but I don't recall off the top of my head them announcing they'd be fully gone by the 2024 season, just gone "eventually".

(Some other mentors and I were discussing/debating this a couple of weeks back.)

1 Like

From the FRC Blog post in 2022:

“with the goal of moving away from retroreflective targets in 2024, pending a smooth implementation this season.”

I think AprilTags went really well for any team that tried them out. The retroreflective tape should be phased out then, per what was said. But they specifically said "pending a smooth implementation" without saying how smooth it needed to be to be considered a sure thing.

3 Likes

I think the OP is not really thinking about the real requirements for camera placement. Yes, if you are trying to have a turret shooter always pointed at a target, then you probably want the camera mounted on the turret (though even that is not strictly required). But for pretty much any other use, there is no specific reason to have the camera near the outside of the robot, and for many uses, putting it on or close to the center line is often very helpful.

Admittedly we did use a USB camera, but we mounted it inside our robot, on one of the main supports. It was protected by the external stays that were put in to stabilize our superstructure. If we had been using an LL or a Pi with a directly attached camera, we would have put it in the exact same place.