2 Cameras in Java

We want to run two cameras on our bot, and we currently have one camera working. Last year all we had to do was add another startAutomaticCapture() call and it worked, but this year that doesn't seem to work.

We don't want the cameras to run at the same time, but rather to switch when a button is pressed.

Any ideas? We are programming in Java. We have a couple of LifeCams that are not on our bot that we can use for testing.

Is this what you are looking to do?

https://wpilib.screenstepslive.com/s/currentCS/m/vision/l/708159-using-multiple-cameras

You can use this: http://first.wpi.edu/FRC/roborio/release/docs/java/edu/wpi/first/cameraserver/CameraServer.html#addSwitchedCamera(java.lang.String)

Also use http://first.wpi.edu/FRC/roborio/release/docs/java/edu/wpi/first/wpilibj/shuffleboard/SendableCameraWrapper.html#wrap(edu.wpi.cscore.VideoSource) to wrap the camera when adding it to shuffleboard.

Make sure you add it to Shuffleboard before changing the source of the MjpegServer returned by CameraServer.getInstance().addSwitchedCamera().
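For example, something like this (just a sketch of how I read that ordering note; the tab and widget names are arbitrary, and camera1 is a UsbCamera you've already started elsewhere):

import edu.wpi.cscore.MjpegServer;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.wpilibj.shuffleboard.SendableCameraWrapper;
import edu.wpi.first.wpilibj.shuffleboard.Shuffleboard;

MjpegServer switchedServer = CameraServer.getInstance().addSwitchedCamera("switched camera");

// Wrap and add the switched camera to a tab while the server still has its
// initial source, i.e. before any setSource() call.
Shuffleboard.getTab("Cameras").add("Switched", SendableCameraWrapper.wrap(switchedServer.getSource()));

// Only after that, start switching sources:
switchedServer.setSource(camera1);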


The same question has been asked at least 3 times this build season. Good luck.

Yes, I believe this is what we want. When we tried to use startAutomaticCapture(0), the code would not work for some reason.

Also, it seems that example is written in C++. I assume it would be similar in Java?

Can you be more specific with “the code would not work”? What happened?

Here’s a Java version of the C++ code (that also uses addSwitchedCamera).

import edu.wpi.cscore.UsbCamera;
import edu.wpi.cscore.VideoSink;
import edu.wpi.cscore.VideoSource.ConnectionStrategy;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.wpilibj.Joystick;

UsbCamera camera1;
UsbCamera camera2;
VideoSink server;
Joystick joy1 = new Joystick(0);
boolean prevTrigger = false;

void robotInit() {
  camera1 = CameraServer.getInstance().startAutomaticCapture(0);
  camera2 = CameraServer.getInstance().startAutomaticCapture(1);
  server = CameraServer.getInstance().addSwitchedCamera("switched camera");
  // Keep both cameras connected even when they aren't the selected source,
  // so switching is instant (at the cost of USB bandwidth).
  camera1.setConnectionStrategy(ConnectionStrategy.kKeepOpen);
  camera2.setConnectionStrategy(ConnectionStrategy.kKeepOpen);
}

void teleopPeriodic() {
  // Swap the switched server's source on the trigger's rising and falling edges.
  if (joy1.getTrigger() && !prevTrigger) {
    System.out.println("Setting camera 2");
    server.setSource(camera2);
  } else if (!joy1.getTrigger() && prevTrigger) {
    System.out.println("Setting camera 1");
    server.setSource(camera1);
  }
  prevTrigger = joy1.getTrigger();
}

By “not work” I mean it gave me errors when I tried to compile the code until I removed the 0 or 1 from CameraServer.getInstance().startAutomaticCapture().

Previously though I had not named the cameras as you did above, so I just had the CameraServer.getInstance().startAutomaticCapture(); line.

Thank you for the Java version. I will try this on our testbed and hopefully it will work. Thanks.

We added your code for testing, but on the line

server = CameraServer.getInstance().addSwitchedCamera("Switched Camera");

addSwitchedCamera is underlined in red and the build fails. Is there something we need to import for this to work?

Edited to add:
We have imported the following things for the camera:
edu.wpi.cscore.UsbCamera;
edu.wpi.cscore.VideoSink;
edu.wpi.cscore.VideoSource.ConnectionStrategy;
edu.wpi.first.wpilibj.CameraServer;

Be sure you've updated to at least version 2019.3.2 of WPILib; that's when addSwitchedCamera was added.

If you're using the command-based framework, I have an isolated example of camera switching. It's basically the same as what Peter posted, but more fully in context and with a couple of the gotchas mentioned by Peter and others elsewhere on CD. The active code switches between an HttpCamera and a UsbCamera, and comments are there for a Raspberry Pi dual-camera setup, but if you have two USB cameras, just replace the HttpCamera with a UsbCamera.
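The switching part itself is tiny; roughly something like this in the old command-based framework (a bare sketch with placeholder names, not the exact code from my example):

import edu.wpi.cscore.VideoSink;
import edu.wpi.cscore.VideoSource;
import edu.wpi.first.wpilibj.command.InstantCommand;

// One-shot command that points the switched-camera server at a different source.
public class SwitchCamera extends InstantCommand {
  private final VideoSink m_server;
  private final VideoSource m_camera;

  public SwitchCamera(VideoSink server, VideoSource camera) {
    m_server = server;
    m_camera = camera;
  }

  @Override
  protected void initialize() {
    m_server.setSource(m_camera);
  }
}

Then bind it to a button in your OI, e.g. new JoystickButton(driverStick, 2).whenPressed(new SwitchCamera(Robot.switchedServer, Robot.camera2)); (again, those names are just placeholders).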

Also, to display this in Shuffleboard, open the Camera Server section on the left and drag and drop the “Switched camera” instance. I’m not well versed in SmartDashboard, but presumably it’s something similar.

You need to import edu.wpi.first.cameraserver.CameraServer, not wpilibj (that one is deprecated and was not updated to include addSwitchedCamera).
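In other words:

import edu.wpi.first.cameraserver.CameraServer;
// not this one (deprecated, no addSwitchedCamera):
// import edu.wpi.first.wpilibj.CameraServer;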

That took care of the addSwitchedCamera. Hopefully that will take care of our problem. Thanks

Just wanted to let you know that your example worked perfectly. They were able to get the two cameras working and switching on a button. Do you know of a way to make it work on SmartDashboard vs. the default dashboard?

On SmartDashboard, add a CameraServer widget and in the properties for that widget set the camera source to “switched camera”. On Shuffleboard, drag and drop the switched camera from the camera list to the viewport.

Hi, how are you connecting the LifeCams to your machine? If you have access to a Raspberry Pi, then I would recommend the FRCVision Raspberry Pi image's feed-switching feature, which you can control from NetworkTables.
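On the robot side that's just a NetworkTables write, roughly like this (the key is whatever you configure in the FRCVision web dashboard, so "/Switch/camera" below is only a placeholder):

import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;

// Entry name must match the switched-camera key configured in the FRCVision
// web dashboard; "/Switch/camera" is just a placeholder here.
NetworkTableEntry cameraSelection =
    NetworkTableInstance.getDefault().getEntry("/Switch/camera");

// Pick a feed from robot code, e.g. on a button press:
cameraSelection.setDouble(0);   // first camera
// cameraSelection.setDouble(1); // second camera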

We are connecting through the Rio. I hope to one day try the Pi, but as of right now we are just using the Rio.

Thanks, I’ll have them try that.

I seem to be limited in the ability to compress video on the roboRIO. If I compress a single video feed, I get maybe 12-15 FPS max (at 320x240). If I try to compress two feeds at once, they both drop to around 8-10 FPS. Is this an intentional limitation, or can it be relaxed somewhere? When I look at the roboRIO load, it's never more than 60%, so I know it's not pegging the CPU. This is generally the code I'm using:

private UsbCamera frontCamera;
private UsbCamera rearCamera;
private UsbCamera armCamera;
private VideoSink switchedCamera;

// In robotInit(): each camera is started by device path and pinned to an MJPEG mode.
frontCamera = CameraServer.getInstance().startAutomaticCapture("Front Drive", RobotMap.FRONT_DRIVE_CAMERA_PATH);
frontCamera.setVideoMode(VideoMode.PixelFormat.kMJPEG, 320, 240, 20);

rearCamera = CameraServer.getInstance().startAutomaticCapture("Rear Drive", RobotMap.REAR_DRIVE_CAMERA_PATH);
rearCamera.setVideoMode(VideoMode.PixelFormat.kMJPEG, 320, 240, 20);

armCamera = CameraServer.getInstance().startAutomaticCapture("Arm", RobotMap.ARM_CAMERA_PATH);
armCamera.setVideoMode(VideoMode.PixelFormat.kMJPEG, 240, 240, 15);

// Single server whose source gets swapped between the three cameras.
switchedCamera = CameraServer.getInstance().addSwitchedCamera("Switched Camera");

switchedCamera.setSource(frontCamera);

There's no intentional limitation. The code you posted above shouldn't be recompressing anything (unless the dashboard asks for a different resolution or compression level); it's just passing along the frames as received from the camera.

How are you reading the CPU load? Through “top” via SSH, or some other method?

How are you connected to the RoboRIO? Via radio, or Ethernet? If by radio, what’s your bandwidth utilization?

Right - this code starts out uncompressed. When I start viewing in Shuffleboard, it's uncompressed and around 40-50 FPS, with a lot of bandwidth (8-12 MBs per camera, I think).

If I move the compression slider to any setting (1-100), the FPS drops to 10-15 regardless of the requested FPS. If I try to compress a second video stream, both drop to around 8 FPS. The bandwidth drops accordingly (0.5-1 MBs).

I am connecting via the radio. I am watching the roboRIO load over SSH with top.

It’s not “uncompressed”. It’s compressed using the hardware compression on the camera. Changing the slider on the dashboard actually introduces two software steps: it has to decompress the image from the camera, and then has to recompress it with the requested quality. I’m not sure why you’re not seeing CPU usage peak, because that indeed is the limiting factor. I could see it not peaking with just one camera, because that’s only one thread saturating the CPU. With two cameras, the compression steps should be distributed amongst both cores.