After messing around with cameras today, I was able to figure out how to use multiple cameras while only using the bandwidth of one. The trick is to switch which feed you are sending back to the driver station with the touch of a button. See the code below for how to do this.
Variables needed:
int currSession;
int sessionfront;
int sessionback;
Image frame;
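The full snippet appears to have been stripped out of the original post, so here is a minimal sketch of the idea built from the variables above and the standard 2016 NIVision bindings (com.ni.vision.NIVision). The camera names ("cam0"/"cam1"), the joystick object, and the button number are assumptions; check the roboRIO web dashboard for your actual camera names.

```java
// --- Init code: runs ONCE, e.g. in robotInit() ---
frame = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_RGB, 0);
sessionfront = NIVision.IMAQdxOpenCamera("cam0",          // camera name is an assumption
        NIVision.IMAQdxCameraControlMode.CameraControlModeController);
sessionback = NIVision.IMAQdxOpenCamera("cam1",           // camera name is an assumption
        NIVision.IMAQdxCameraControlMode.CameraControlModeController);
currSession = sessionfront;
NIVision.IMAQdxConfigureGrab(currSession);                // only ONE session grabs at a time

// --- Switching code: runs in a LOOP, e.g. teleopPeriodic() ---
if (joystick.getRawButton(2)) {                           // button number is an assumption
    // Stop the active session before configuring the other one,
    // otherwise IMAQdxConfigureGrab will throw.
    NIVision.IMAQdxStopAcquisition(currSession);
    currSession = (currSession == sessionfront) ? sessionback : sessionfront;
    NIVision.IMAQdxConfigureGrab(currSession);
}
NIVision.IMAQdxGrab(currSession, frame, 1);               // grab the latest frame
CameraServer.getInstance().setImage(frame);               // send it to the driver station
```

This is why the bandwidth stays at one camera's worth: both cameras are open, but only the current session is actually acquiring and being streamed back.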
When I put this in my code and deployed it to the robot, several errors showed up and the dashboard indicated there was no code.
Where did you put all of these pieces of code in your project?
kmodos: Thank you very much. Works perfectly for us.
Has anyone figured out how to open both cameras at once? We would like to open both cameras, grab a frame from each, lay them side by side in a double-width frame, and send that to the DS. The problem is that IMAQdxConfigureGrab throws an exception if you have already called it without an intervening IMAQdxStopAcquisition.
Did you remember to import all of the required packages? The init code needs to be run one time, when the robot is powered on. The switching code and the call that sends the frame back to the driver station should be placed in some sort of loop; it can be in periodic, or a custom loop you write.
I might look into doing something like this. I have to talk with our drivers to see if this would be something that they want. If I get it working I will post it publicly.
Worked like a charm, thanks! For anyone who’s having trouble, make sure that all other camera code is removed. We accidentally left some in at first, and the code didn’t work correctly.
It’s possible, but you’d have to roll your own version of CameraServer. It’s not terribly hard. I put one together last week so we can publish OpenCV Mat images back to the dashboard. The protocol for getting stuff back to the dashboard is pretty simple. Just take a peek at the GRIP code for doing it (GRIP/core/src/main/java/edu/wpi/grip/core/operations/composite/PublishVideoOperation.java at master · WPIRoboticsProjects/GRIP · GitHub) … basically everything in that main while() loop is the important stuff. CameraServer from WPIlibj works much the same way but utilizes NIVision.Image objects instead.
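For reference, the per-frame framing inside that while() loop is simple; here is a testable sketch of just that part (the class name FrameWriter and the tiny stand-in JPEG are mine; the {1, 0, 0, 0} magic bytes and the big-endian length prefix come from the GRIP source linked above — the port-1180 connection and the initial fps/compression/size handshake are not shown):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Minimal sketch of the per-frame framing used by the dashboard video
// protocol, as seen in GRIP's PublishVideoOperation: each JPEG is preceded
// by the 4 "magic" bytes {1, 0, 0, 0} and a big-endian 4-byte length.
public class FrameWriter {
    static final byte[] MAGIC = {1, 0, 0, 0};

    static void writeFrame(DataOutputStream out, byte[] jpeg) throws IOException {
        out.write(MAGIC);           // frame delimiter the dashboard expects
        out.writeInt(jpeg.length);  // big-endian payload length
        out.write(jpeg);            // the JPEG bytes themselves
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        // Tiny stand-in payload; a real frame would be a full JPEG.
        byte[] fakeJpeg = {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xD9};
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeFrame(new DataOutputStream(buf), fakeJpeg);
        System.out.println(buf.size()); // 4 magic + 4 length + 4 payload = 12
    }
}
```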
How you mash the two images together I don’t know as I have basically zero experience with the NIVision libs. But, roll your own CameraServer, solve the issue of mashing images together into a single JPEG and there you go.
Does the lag occur when you switch between the two? Switching will always lag a bit, and every feed will have a little latency regardless. Try lowering the quality of the returned image or lowering the resolution.
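If you're on the stock CameraServer, both knobs can be turned down from robot code; a sketch assuming the 2015/2016-era WPILib CameraServer API (the specific values 30 and 160x120 are arbitrary starting points to tune from):

```java
// Reduce bandwidth/latency of the dashboard stream.
CameraServer server = CameraServer.getInstance();
server.setQuality(30);                     // JPEG quality, 0-100; lower = less lag
server.setSize(CameraServer.kSize160x120); // smallest supported resolution
```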