Driver Assist Camera Stream Times

What is the best stream latency we can expect from a driver assist camera in 2025? Searching Chief Delphi, we haven’t been able to find anything specific that points to a worthwhile solution.

We’ve been experimenting with an OpenCV direct stream and PhotonVision Driver Mode, viewed through VLC, Shuffleboard, and a browser.
We’re running RobotPy, have removed all Shuffleboard traffic from the robot code, and have only vision running to get a best-case scenario.
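For reference, the Cam3 “OpenCV direct stream” setup is roughly the following: a minimal robotpy-cscore sketch running on the Pi, assuming the CSI camera is exposed as /dev/video0 through the V4L2 compatibility layer (the port and resolution shown are illustrative, not our exact config).

```python
import time
import cscore as cs

# Grab the CSI camera (assumes it shows up as /dev/video0).
camera = cs.UsbCamera("cam3", 0)
camera.setResolution(320, 240)
camera.setFPS(30)

# Serve MJPEG; view at http://<pi-ip>:1181
server = cs.MjpegServer("serve_cam3", 1181)
server.setSource(camera)

# Keep the process alive while the server streams.
while True:
    time.sleep(1)
```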

Three Raspberry Pi 4s, each with a dedicated global shutter camera connected over the Camera Serial Interface (CSI). All three Pis have static IPs and are wired through the Ethernet switch along with the roboRIO and radio. Camera software:
Cam1: PhotonVision AprilTag Pipeline
Cam2: PhotonVision Driver Assist Pipeline
Cam3: OpenCV stream

We’ve adjusted exposure, brightness, gain, and stream resolution, and tested the combinations to find the best performance.
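For anyone repeating this, the sweeps were done with plain OpenCV property calls along these lines (a sketch only; property support and units vary by driver, and the values shown are placeholders):

```python
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)   # 1 = manual on many V4L2 drivers
cap.set(cv2.CAP_PROP_EXPOSURE, 100)      # driver-specific units
cap.set(cv2.CAP_PROP_BRIGHTNESS, 50)
cap.set(cv2.CAP_PROP_GAIN, 10)

ok, frame = cap.read()
print(ok, cap.get(cv2.CAP_PROP_EXPOSURE))  # confirm the driver accepted it
cap.release()
```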

1. With only Cam3 (the OpenCV stream) turned on, the view on the Driver Station laptop has almost zero lag, under 0.5 seconds.
2. When we turn on either of the PhotonVision coprocessors, Cam3 permanently drops its stream and remains unusable.
3. With just Cam1 (AprilTag pipeline) and Cam2 (Driver Mode) running on PhotonVision, the lag is ~3 seconds for Driver Mode at the lowest reasonable streaming resolution of 160x120, viewed in Shuffleboard on the Driver Station (or in the PhotonVision browser app).

We can read the pipeline latency result, but it’s useless when the lag is visible to the eye. We set up clock timers and recorded the Shuffleboard view side by side with a person walking through the frame, which confirms ~3 seconds.
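A more precise variant of that clock-timer test is to stamp each frame with the Pi’s clock before it is streamed, then compare the stamp to the Driver Station’s wall clock when the frame appears on screen (the two clocks need to be NTP-synced for the difference to mean anything). A minimal sketch with robotpy-cscore and OpenCV:

```python
import time
import cv2
import numpy as np
import cscore as cs

camera = cs.UsbCamera("cam3", 0)
camera.setResolution(320, 240)
camera.setFPS(30)

sink = cs.CvSink("grab")
sink.setSource(camera)

source = cs.CvSource("stamped", cs.VideoMode.PixelFormat.kMJPEG, 320, 240, 30)
server = cs.MjpegServer("serve_stamped", 1182)
server.setSource(source)

frame = np.zeros((240, 320, 3), dtype=np.uint8)
while True:
    t, frame = sink.grabFrame(frame)
    if t == 0:
        continue  # frame grab timed out or errored
    # Stamp the send time onto the frame before it goes out.
    cv2.putText(frame, f"{time.time():.3f}", (5, 25),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    source.putFrame(frame)
```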

Our main use case for Driver Mode is to improve cycle time from scoring to picking up another game piece across the field. With the swerve drive at ~14 feet per second and a 54-foot-long field, the robot covers 42 feet, nearly the entire field, during a 3-second lag, so by the time the Driver Mode view refreshes it is of no value to the driver.
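The arithmetic behind that claim, using the numbers above:

```python
lag_s = 3.0        # observed Driver Mode lag
speed_fps = 14.0   # swerve top speed, ft/s
field_ft = 54.0    # field length

print(lag_s * speed_fps)     # 42.0 ft covered during the lag
print(field_ft / speed_fps)  # ~3.86 s to cross the full field
```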

Looking for ideas, and for opinions on whether this is worth pursuing in the competition season.

You’ll have to ask the drivers after they see the 2025 game and the team’s strategy. Odds are the drivers will not use a camera - usage happens very rarely on my team. Last year we didn’t have a driver camera. [There are several CD posts about driver cameras.]

You are very patient with your camera frames. A typical number I’ve benchmarked is more like 0.28 seconds of lag, and even that is noticeable; I wouldn’t call it almost zero.

In some years and some configurations I’ve seen 3 seconds of lag. That’s inexplicable, but the drivers didn’t care; they weren’t looking anyway, so we never tracked it down.

If you want some insight into the performance, start with Windows Task Manager > Performance > Ethernet on the Driver Station and compare the network load under the various scenarios.
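If you’d rather log it than watch it, a quick psutil script on the Driver Station gives the same number per scenario (a sketch; “Ethernet” is a guess at the adapter name, check psutil.net_io_counters(pernic=True) for yours):

```python
import time
import psutil

NIC = "Ethernet"  # adjust to your adapter name

prev = psutil.net_io_counters(pernic=True)[NIC].bytes_recv
while True:
    time.sleep(1)
    cur = psutil.net_io_counters(pernic=True)[NIC].bytes_recv
    print(f"{(cur - prev) * 8 / 1e6:.2f} Mbit/s inbound")
    prev = cur
```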


Even if you were getting in the range of 0.5 seconds delay, at your top speed of 14 ft/s the camera is still showing you 7 feet behind where the robot actually is. The enforced bandwidth limit between the robot and the driver station is responsible for most of this delay. Not sure if that limit is going to be rolled back any for the VH-109 radios. With the limit as it has been, it’s been a much better idea to try to get the robot to assist the driver using low-latency on-board image processing rather than having the driver take attention off the field to look at a high-latency, low-resolution camera feed coming back to the driver station.
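To make the contrast concrete, here’s a rough sketch of the data-not-video approach using robotpy’s ntcore: the coprocessor publishes a few doubles instead of an image stream. “10.TE.AM.2” stands in for your roboRIO address, and get_target_yaw() is a placeholder for whatever your vision code actually measures.

```python
import time
import ntcore

def get_target_yaw() -> float:
    # Placeholder: replace with the real vision measurement.
    return 0.0

inst = ntcore.NetworkTableInstance.getDefault()
inst.startClient4("pi-vision")
inst.setServer("10.TE.AM.2")  # placeholder roboRIO address

yaw_pub = inst.getTable("vision").getDoubleTopic("target_yaw").publish()

while True:
    yaw_pub.set(get_target_yaw())  # a few bytes per update, not megabits
    time.sleep(0.02)               # ~50 Hz
```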

As a point of reference, FPV pilots generally consider latency above 0.02s to be noticeable, and 0.1s to be unflyable. Source. I’m assuming that’s unachievable through the FRC network stack.

Every time our team has tried to use a driver-assist camera (2023, 2019, 2017) the drivers have complained that it was basically useless at any elevated speed.

Thank you for confirming this. It’s been my experience this last week of testing as well. I run FPV OpenCV software that stays well within those latency requirements and works on my drones.
Our FRC robot code is trimmed down to the bare bones and set up with just basic vision, and still no luck.