PhotonVision - Disabling a camera's processing?

Hi. I’m the main programmer for our team and am considering the best way to do our vision this year. Our original plan was to use PhotonVision for identifying the reflective tape and a separate PixyCam for identifying balls. However, preliminary testing and research have shown that the Pixy just isn’t advanced enough for this year’s game, given the many similar colors on the field.

Therefore, I want to try using a USB camera for ball detection in addition to the PiCam for tape, both on the Pi 3. My main concern is framerate. We need the maximum possible framerate for fast targeting and ball chasing. However, in our design we will never need to look for the target and look for balls at the same time, so I want to be able to completely turn off the processing for one of the cameras to dedicate processing to the other one.

Looking at the docs, there doesn’t appear to be any way to do that. It would be great to have something like “setting the pipeline to 99 will turn it off”. Or would setting “Driver mode” accomplish this? The docs don’t have much information on this at all. All I can find is:

You can use the setDriverMode() /SetDriverMode() (Java and C++ respectively) to toggle driver mode from your robot program. Driver mode is an unfiltered / normal view of the camera to be used while driving the robot.
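For context, here’s roughly how I’d imagine flipping between the two cameras from robot code. Note this is my own sketch: `CameraToggle` is a hypothetical helper, not anything from PhotonVision, and the camera calls are abstracted behind a `Consumer` so the switching logic stands on its own (in real code you’d pass in PhotonLib’s `camera::setDriverMode`):

```java
import java.util.function.Consumer;

// Hypothetical helper (not part of PhotonVision): keeps exactly one
// camera running its full pipeline while the other sits in driver mode.
public class CameraToggle {
    private final Consumer<Boolean> tapeCamDriverMode;
    private final Consumer<Boolean> ballCamDriverMode;

    public CameraToggle(Consumer<Boolean> tapeCam, Consumer<Boolean> ballCam) {
        this.tapeCamDriverMode = tapeCam;
        this.ballCamDriverMode = ballCam;
    }

    /** Run the tape pipeline; put the ball camera in (cheaper) driver mode. */
    public void trackTape() {
        tapeCamDriverMode.accept(false); // false = full pipeline runs
        ballCamDriverMode.accept(true);  // true  = unfiltered passthrough
    }

    /** Run the ball pipeline; put the tape camera in driver mode. */
    public void trackBalls() {
        tapeCamDriverMode.accept(true);
        ballCamDriverMode.accept(false);
    }
}
```

With real PhotonLib cameras this would be constructed as `new CameraToggle(tapeCam::setDriverMode, ballCam::setDriverMode)`.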

Some more info there on the processing impact, etc., would be nice. Similarly, is there any info on whether scaling is applied to the frames for DS streaming purposes? We don’t need that, so a way to turn that part off would be great.

I previously used OpenCV and Python for vision processing, which would make what I’m describing trivial: just don’t enter the processing block for that camera, and you’re good.

Is there any way to temporarily disable all processing on one camera in PhotonVision so another one can run at max speed?

Hi! “Driver mode” sounds like what you’re looking for? That just streams the output from the camera basically straight back to cscore to be shown in a stream. It’ll be streamed at the resolution you set in the UI under stream resolution. We don’t have a way to explicitly disable processing, though iirc if you don’t have any clients connected to the stream a lot of processing gets saved.

Could you add something like:
PassThroughPipeline

@Override
public DriverModePipelineResult process(Frame frame, DriverModePipelineSettings settings) {
    long totalNanos = 0;

    var fpsResult = calculateFPSPipe.run(null);
    var fps = fpsResult.output;

    return new DriverModePipelineResult(
            MathUtils.nanosToMillis(totalNanos),
            fps,
            frame);
}

Alternatively, I could try to fork locally and make this change myself. Do you have any guess as to how much of a speed boost this might give us, or is it not worth the effort?

PhotonVision does support a “driver mode” that will optimize your camera for high-speed processing; however, depending on what hardware you are running PhotonVision on, you may need to adjust a few more settings.

Generally I’ve found that setting the resolution as low as possible (somewhere around 180p) helps performance, because the pipeline has fewer pixels to process. Another helpful thing is to make sure you are using the raw camera feed and not the thresholded one; that way the pipeline doesn’t have to both analyze and reassign pixels, only analyze them (I’m pretty sure driver mode does this automatically).

For the processing itself, I find it monumentally easier just to use PhotonVision’s (or Limelight’s, if you are using one) NetworkTables. The latest Limelight or Gloworm images have built-in NetworkTable entries for all of the data targeting code needs, like tx (the target’s x position relative to the center of the frame), ty (target y from center), and ta (target area). Using these you can make a helpful program like an aimbot.
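To sketch that last bit: a minimal proportional “aimbot” driven by the tx value described above could look like the snippet below. This is illustrative only; `kP` and the output cap are made-up tuning numbers, and `AimHelper` is my own name, not anything from PhotonVision or Limelight:

```java
// Minimal proportional aim sketch: converts the target's horizontal
// offset (tx, in degrees) into a clamped turn command for the drivetrain.
public class AimHelper {
    private static final double kP = 0.03;       // turn power per degree of error (illustrative)
    private static final double MAX_TURN = 0.5;  // cap on motor output (illustrative)

    /** Positive tx (target to the right) produces a positive turn command. */
    public static double turnCommand(double txDegrees) {
        double turn = kP * txDegrees;
        // Clamp so a far-off target doesn't saturate the drivetrain.
        return Math.max(-MAX_TURN, Math.min(MAX_TURN, turn));
    }
}
```

In a real robot loop you’d feed this the tx entry read from NetworkTables each iteration and hand the result to your drive.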

I have some nice code from 2020 that uses the Limelight to align with vision targets via the vision subsystem and its associated NetworkTable entries.

Hopefully this can help you out : )

Driver mode basically just does that right now – just streams the frame you get in after rescaling. Somewhat wary to add a disabled pipeline in case it’s confusing to new users. I would do some testing to see how bad your performance hit is first. You’re welcome to fork and play around though! That’s the goal of OSS :stuck_out_tongue_winking_eye:
(You’ll need to add your new type to the dropdown, to PipelineManager, and to the type enum; that’s all. Not terrible.)