Hi all,
So this is my first year tackling vision processing on the roboRIO using Java, and I have a basic set of code that seems to work fairly well. I have created a camera subsystem that handles all of the CameraServer-related code, and I use its methods in my commands to do things such as initializing the camera and turning processing of the video (using GRIP-generated code) on and off.
However, I’m having some trouble with the processing functions. Along with processing the video, I also stream the processed output by calling putFrame() on a CvSource object. When I turn processing on, it streams perfectly fine: I can see my operations, contours, etc. However, once I turn processing off, the output is bombarded with errors saying "Too many simultaneous client output streams (MjpegServerImpl.cpp:504)". And strangely enough, when I turn processing back on, the error messages stop.
(The code snippets have some irrelevant methods removed.)
/*
To specify, the following are object variables of the camera subsystem:
  t        - the processing thread (created in startProcessing)
  cvSink   - instantiated in my init method
  cvSource - instantiated in my init method
*/
public void startProcessing() {
    System.out.println("Starting processing thread");
    t = new Thread(() -> {
        while (!Thread.interrupted()) {
            // Grab the latest frame from the camera
            cvSink.grabFrame(rawImage);
            if (!rawImage.empty()) {
                // Run the GRIP pipeline on a copy of the frame
                rawImage.copyTo(input);
                myGripPipeline.process(input);
                contourList = myGripPipeline.filterContoursOutput();
                // Draw the detected contours and stream the annotated frame
                Imgproc.drawContours(rawImage, contourList, -1, new Scalar(0, 0, 255, 255), 2);
                cvSource.putFrame(rawImage);
            } else {
                System.out.println("Mat image is empty!");
            }
        }
    });
    t.start();
}
public void endProcessing() {
    if (t.getState() == Thread.State.RUNNABLE) {
        t.interrupt();
    }
}
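For context, the commands that flip processing on and off don't do much; they just call into the subsystem. Something like this (a simplified sketch using the old command-based classes; Robot.cameraSubsystem is a placeholder for however you reach the subsystem, and the "stop" command is identical except it calls endProcessing()):

import edu.wpi.first.wpilibj.command.Command;

// Simplified sketch: turns vision processing on, then finishes immediately.
// The stop command looks the same but calls endProcessing() instead.
public class StartVisionProcessing extends Command {
    public StartVisionProcessing() {
        requires(Robot.cameraSubsystem); // placeholder name for the subsystem instance
    }

    @Override
    protected void initialize() {
        Robot.cameraSubsystem.startProcessing();
    }

    @Override
    protected boolean isFinished() {
        return true; // fire-and-forget; the processing thread keeps running
    }
}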
The only MjpegServers that are created (at least to my knowledge) come from two calls in my code: startAutomaticCapture() and putVideo(). Both are in my initialization method, which can only be called once.
// isStarted is an object variable of the camera subsystem.
// This method is also called before any other method in the subsystem.
public void initCamera() {
    if (!isStarted) {
        System.out.println("Starting the camera!"); // I only ever see this printed once in the output.
        isStarted = true;
        camObject = CameraServer.getInstance().startAutomaticCapture();
        cvSink = CameraServer.getInstance().getVideo();
        // The following does exactly what CameraServer.putVideo() does;
        // I used putVideo() before, but the same problem existed.
        cvSource = new CvSource("ContourVideo", VideoMode.PixelFormat.kMJPEG, 320, 240, 30);
        CameraServer.getInstance().addCamera(cvSource);
        VideoSink server = CameraServer.getInstance().addServer("serve_" + cvSource.getName());
        server.setSource(cvSource);
    }
}
What I find really interesting is that I only get the errors once I stop processing images. Maybe calling putFrame() is what keeps the errors from occurring? I don't know why that would be, but it's my best guess.
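If that is what's going on, one thing I've been thinking of trying (just a sketch, I haven't verified that it actually avoids the error) is keeping the CvSource fed even when the pipeline is off, by passing the raw frame straight through. The processingEnabled flag here is hypothetical and not in my current code:

// Sketch of a pass-through loop: keep feeding the CvSource with raw frames
// whenever the GRIP pipeline is turned off, so the output stream never goes stale.
// Untested; processingEnabled is a hypothetical volatile boolean on the subsystem.
t = new Thread(() -> {
    while (!Thread.interrupted()) {
        if (cvSink.grabFrame(rawImage) == 0 || rawImage.empty()) {
            continue; // skip bad frames
        }
        if (processingEnabled) {
            rawImage.copyTo(input);
            myGripPipeline.process(input);
            Imgproc.drawContours(rawImage, myGripPipeline.filterContoursOutput(),
                                 -1, new Scalar(0, 0, 255, 255), 2);
        }
        cvSource.putFrame(rawImage); // stream either the annotated or the raw frame
    }
});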
In all honesty, streaming the output of the GRIP pipeline isn't necessary for vision to work, but I find it immensely useful for debugging. I've also noticed that this seems to be a fairly common issue here. Any help would be appreciated!