GRIP produces different results on PC vs. robot.

We have the GRIP process down: building a pipeline, transferring it to the robot, and implementing it in Java code.

Our problem is that the robot code produces vastly different results.

With the SAME MS HD-3000 USB camera (and the same scene) plugged into the PC running GRIP, we get a dozen contours that we can filter down to 2 or 3.

When we simply move the USB cable from the PC to the robot and transfer the generated pipeline (Java) file to the robot, we get 1200 contours that are all tiny, and none that survive the filter.

Clearly the color thresholding is not giving us the same results, which leads us to think that the camera is not being set up the same on both systems.

So we need to see what’s happening on the robot that’s so different from the PC.

Questions:

  1. Is there a way to stream the processed image to the Driver Station, rather than the raw camera image? That way we could at least see each step of the pipeline.

  2. Is there a trick to getting the two camera setups to agree?

  3. Somewhat related… None of the GRIP processing elements seem to want to process a grayscale image. My recollection was that these gave the best results in the past… What method do I use to threshold a grayscale image?

Thanks
Phil

  1. Yes. Use CameraServer.getInstance().startAutomaticCapture() to host an MJPEG stream.

  2. If you stream the image from the roboRIO, you can use an IP camera source in GRIP to connect to it, and you won’t have to worry about changing camera settings.

  3. Use the “Threshold” operation to do thresholds on grayscale images.
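
For reference, a grayscale threshold in the generated Java boils down to a couple of OpenCV calls. Here’s a sketch; the 128/255 constants and the class/method names are placeholders, not values from your actual generated pipeline:

    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    public class GrayThresholdExample {
        /** Convert a color frame to grayscale, then binarize it. */
        public static Mat grayThreshold(Mat frame) {
            Mat gray = new Mat();
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);

            // Pixels above 128 become 255 (white); everything else becomes 0 (black).
            Mat binary = new Mat();
            Imgproc.threshold(gray, binary, 128.0, 255.0, Imgproc.THRESH_BINARY);
            return binary;
        }
    }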

  1. Thanks for the link. I can follow some of this. I’ll re-read a few more times. Still not exactly sure how to stream processed video, but I’ll experiment with this. It’s a bit like black magic at this stage.

  2. Excellent

  3. I guess that’s the CV Threshold… Wasn’t sure how the two types differed.

I have the line you indicate in my code, and it’s streaming to the dashboard.

I tried connecting GRIP to the roboRIO, but I get a Camera Service crash error.
I tried the fixed IP and the dynamic one, but got the same error instantly.
See attached error image.

https://www.chiefdelphi.com/forums/attachment.php?attachmentid=21732&d=1486417635

Was this on the same laptop the dashboard was running on? I am able to get this error in GRIP this way if I am not connected to the robot’s WiFi network.

Yes… It’s all being done on the same laptop, and it is connected to the robot over WiFi.

Plus I can run the dashboard and see video, or I can keep the dashboard off.

Either way I get the error when I run the grip program on the same laptop.

Should it matter if I have the dashboard running or not?

i.e., is a dashboard connection required, or is it consuming a stream that GRIP needs?

https://www.chiefdelphi.com/forums/attachment.php?attachmentid=21732&d=1486417635

Broken link.

And no, having the dashboard open shouldn’t have an effect.

My solution was to hack the generated pipeline to put the result of the HSV threshold out to a second video stream. Then I can show both the raw and threshold images in the dashboard. However, since then I’ve discovered that the GRIP tool has a “Publish Video” step you can use for the same purpose.
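
For anyone who wants to do the same, here is roughly what that hack looks like, assuming the 2017 WPILib CameraServer/cscore API and a GRIP-generated class named GripPipeline with an hsvThresholdOutput() getter (your generated names may differ):

    import org.opencv.core.Mat;

    import edu.wpi.cscore.CvSink;
    import edu.wpi.cscore.CvSource;
    import edu.wpi.cscore.UsbCamera;
    import edu.wpi.first.wpilibj.CameraServer;

    public class VisionStreamer implements Runnable {
        @Override
        public void run() {
            // Raw camera stream (what the dashboard normally shows).
            UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
            camera.setResolution(320, 240);

            // Sink to pull frames for processing, plus a second named stream
            // ("Threshold") that the dashboard can select and display.
            CvSink cvSink = CameraServer.getInstance().getVideo();
            CvSource thresholdStream =
                CameraServer.getInstance().putVideo("Threshold", 320, 240);

            GripPipeline pipeline = new GripPipeline();
            Mat frame = new Mat();

            while (!Thread.interrupted()) {
                if (cvSink.grabFrame(frame) == 0) {
                    continue; // timed out waiting for a frame; try again
                }
                pipeline.process(frame);
                // Publish the HSV threshold result as the second stream.
                thresholdStream.putFrame(pipeline.hsvThresholdOutput());
            }
        }
    }

Run it on its own thread so it doesn’t block the main robot loop.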

  1. Is there a trick to getting the two camera setups to agree?

Try connecting GRIP to an IP camera stream, as suggested by other posters here. My solution was a bit more involved. Once I got the raw and threshold images displaying, I hacked the generated pipeline some more to allow me to “tune” it in real time based on values on the SmartDashboard.
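
The tuning hook is simple: re-read the bounds from the SmartDashboard on every frame instead of using the constants GRIP generated. A sketch, with illustrative key names and defaults rather than our actual numbers:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;

    import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

    public class TunableThreshold {
        /** HSV threshold whose bounds can be edited live on the SmartDashboard. */
        public static Mat hsvThreshold(Mat frame) {
            double hueMin = SmartDashboard.getNumber("Hue min", 50);
            double hueMax = SmartDashboard.getNumber("Hue max", 90);
            double satMin = SmartDashboard.getNumber("Sat min", 100);
            double satMax = SmartDashboard.getNumber("Sat max", 255);
            double valMin = SmartDashboard.getNumber("Val min", 100);
            double valMax = SmartDashboard.getNumber("Val max", 255);

            Mat hsv = new Mat();
            Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);

            Mat binary = new Mat();
            Core.inRange(hsv,
                         new Scalar(hueMin, satMin, valMin),
                         new Scalar(hueMax, satMax, valMax),
                         binary);
            return binary;
        }
    }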

I also found that it is critical to turn off all automatic exposure and white balance functions, or else every time the camera makes a change, your thresholds become garbage. To this end, we are calling setWhiteBalanceManual(), setExposureManual(), and setBrightness().
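
In code, assuming a UsbCamera from CameraServer (the specific numbers below are placeholders you’d tune for your own lighting):

    import edu.wpi.cscore.UsbCamera;
    import edu.wpi.first.wpilibj.CameraServer;

    public class CameraSetup {
        /** Call once, e.g. from robotInit(). */
        public static UsbCamera configureCamera() {
            UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
            camera.setResolution(320, 240);

            // Lock down everything automatic so the thresholds stay valid.
            camera.setWhiteBalanceManual(4500); // fixed color temperature
            camera.setExposureManual(20);       // fixed exposure, 0-100 scale
            camera.setBrightness(30);           // fixed brightness, 0-100 scale
            return camera;
        }
    }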

  1. Somewhat related… None of the GRIP processing elements seem to want to process a grayscale image. My recollection was that these gave the best results in the past… What method do I use to threshold a grayscale image?

Not sure exactly what you are trying to do here. The usual practice is to have the first step in your pipeline be a threshold operation that turns the color image from the camera into a black & white image. This b/w image is then used as input to the contour-finding and filtering steps.
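
In OpenCV terms the flow is roughly this (a sketch; binary is the output of whichever threshold operation you use):

    import java.util.ArrayList;
    import java.util.List;

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.imgproc.Imgproc;

    public class ContourStep {
        /** Find contours in a binary (black & white) threshold image. */
        public static List<MatOfPoint> findContours(Mat binary) {
            List<MatOfPoint> contours = new ArrayList<>();
            Mat hierarchy = new Mat();

            // Note: in OpenCV 3.x, findContours may modify the input image,
            // so pass a copy if you still need the original.
            Imgproc.findContours(binary, contours, hierarchy,
                                 Imgproc.RETR_EXTERNAL,        // outer contours only
                                 Imgproc.CHAIN_APPROX_SIMPLE); // compress straight runs
            return contours;
        }
    }

The filter step then keeps only the contours whose area, width, height, and so on match what you expect for the target.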

If you ever figure out why the images aren’t agreeing, could you post it? I’m using GRIP-generated code for vision processing on a Pi, and the HSV threshold values are not working at all as expected.

I have GRIP open on the DS laptop next to me, connected to our camera over the WiFi network, and the Pi connected to the camera via Ethernet over a network bridge. The HSV settings on the DS laptop (through the viewer window in GRIP) are showing nice, clear, thresholded images that correctly identify the retroreflective tape. The frames I’m putting out from the Pi are much sketchier; even with a very wide hue range and no limits on saturation or value, only a very small part of the tape is identified.

Has anyone else had a similar problem? I’ve been playing with the hue values for a while, but not really getting anywhere.