Hi! We're having trouble fully understanding the CameraServer class. How does it differentiate between multiple cameras? We want to toggle between cameras with a button press (coding in Java), but we're concerned that bandwidth would be an issue. If we start automatic capture with camera 1 and then remove camera 1, how would the program, upon calling startAutomaticCapture again, use camera 2 for the visual feed instead of camera 1? I haven't found much elaboration on what CameraServer actually is or what its properties are, even after going through the WPILib documentation.
CameraServer itself has a singleton implementation, so you can't create two CameraServer objects; instead, it keeps track of each camera you register with it and serves them through its sinks. I do recall one advisor mentioning there was an implementation floating around the FRC Discord, so if you don't find a resolution here, you might want to check there. We got around the issue by using a Raspberry Pi for vision processing, but that's way overkill for a simple problem like this.
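For the button-toggle case, the pattern I've seen in the WPILib docs is to register both cameras, grab the server's sink, and switch its source on a button press. Here's a sketch under some assumptions: two USB cameras on device numbers 0 and 1, a joystick on port 0, and WPILib 2022+ static CameraServer methods (older versions use CameraServer.getInstance() instead). Since only one stream is actually served at a time, bandwidth should stay roughly that of a single camera feed.

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.cscore.VideoSink;
import edu.wpi.first.cscore.VideoSource;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  private UsbCamera camera1;
  private UsbCamera camera2;
  private VideoSink server;
  // Assumption: driver joystick is on port 0 and buttons 1/2 do the toggling.
  private final Joystick joystick = new Joystick(0);

  @Override
  public void robotInit() {
    // CameraServer tells cameras apart by the USB device number you pass in
    // (/dev/video0, /dev/video1, ...); you can also pass a name.
    camera1 = CameraServer.startAutomaticCapture(0);
    camera2 = CameraServer.startAutomaticCapture(1);
    server = CameraServer.getServer();

    // kKeepOpen keeps both camera connections alive so switching is instant.
    // If USB bandwidth is the bigger concern, leave the default strategy so
    // the unused camera disconnects instead.
    camera1.setConnectionStrategy(VideoSource.ConnectionStrategy.kKeepOpen);
    camera2.setConnectionStrategy(VideoSource.ConnectionStrategy.kKeepOpen);
  }

  @Override
  public void teleopPeriodic() {
    // Point the single served stream at whichever camera the driver picked.
    if (joystick.getRawButtonPressed(1)) {
      server.setSource(camera2);
    } else if (joystick.getRawButtonPressed(2)) {
      server.setSource(camera1);
    }
  }
}
```

This also sidesteps the remove-and-re-add question: rather than stopping capture on camera 1 and calling startAutomaticCapture again, you register both cameras once and just repoint the sink.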