Our team is having some difficulty with the GRIP-generated code. We are getting different results when we run the pipeline on the roboRIO versus on the PC.
I am attaching a Word document with some screen captures: one shows the result of HSV threshold detection on the robot, and the other shows the same detection on the PC. As you can see, they are wildly different.
Both are using the same Microsoft USB camera mounted on the robot, illuminated with a green LED ring.
Any ideas as to what is going wrong?
(Edited to add: we are using Java if that makes any difference)
The reason for this is that OpenCV's findContours actually modifies the Mat passed into it (older OpenCV versions document that the source image is modified in place). Since the Mat from the HSV threshold step is passed into findContours directly, findContours also modifies the Mat that is being sent to the dashboard.
The solution is to create a new Mat field in the class (call it hsvSend or something like that). Then, at the end of the HSV threshold function, call hsvout.copyTo(hsvSend); and putSource the hsvSend Mat instead of the hsvout Mat.
hsvout might not be the right name; I don't remember. But it's just the Mat that is the output of the inRange function.
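A minimal sketch of the fix (the class, field, and method names here are illustrative, not GRIP's actual generated names; match them to your own pipeline class):

```java
import org.opencv.core.Mat;

// Sketch of the fix inside the GRIP-generated pipeline class.
// hsvThresholdOutput / hsvSend / process() are placeholder names;
// use whatever GRIP actually generated for you.
public class GripPipeline {
    private final Mat hsvThresholdOutput = new Mat(); // output of Core.inRange(...)
    private final Mat hsvSend = new Mat();            // untouched copy for the dashboard

    public void process(Mat source) {
        // ... Core.inRange(hsvInput, lower, upper, hsvThresholdOutput);

        // Copy BEFORE findContours gets a chance to scribble on the original.
        hsvThresholdOutput.copyTo(hsvSend);

        // ... Imgproc.findContours(hsvThresholdOutput, contours, hierarchy, ...);
    }

    public Mat hsvSendOutput() {
        return hsvSend; // putSource this Mat, not hsvThresholdOutput
    }
}
```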
As another hint: you are HIGHLY overexposing your image. Either lower the voltage to your light source or lower the exposure on the camera. It will make tracking much easier and more reliable.
So I've been monkeying around with this for some time now, and I think the real problem is that the HSV thresholds generated by GRIP don't produce the same results on the roboRIO. (Note that the camera I used to tune GRIP is the same one used by the roboRIO.) On top of that, the lack of brightness control in GRIP makes it pretty much useless for tuning, since brightness is a critical setting for rejecting noise.
To work around this, I had to modify the generated pipeline to make all of the tuning parameters variable, and then use the SmartDashboard to tune the parameters and monitor the result in real time.
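As a sketch of that modification (the key names and field layout here are made up, not GRIP's own; this assumes WPILib's SmartDashboard class), the hard-coded thresholds become fields that are refreshed from the dashboard each frame:

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Sketch of making GRIP's hard-coded HSV thresholds tunable at runtime.
// Key names ("Vision/...") and initial values are illustrative.
public class TunableThresholds {
    private final double[] hue = {40.0, 90.0};   // seed with GRIP's generated values
    private final double[] sat = {100.0, 255.0};
    private final double[] val = {100.0, 255.0};

    /** Call once at startup so the keys show up as editable dashboard fields. */
    public void publishDefaults() {
        SmartDashboard.putNumber("Vision/hueMin", hue[0]);
        SmartDashboard.putNumber("Vision/hueMax", hue[1]);
        SmartDashboard.putNumber("Vision/satMin", sat[0]);
        SmartDashboard.putNumber("Vision/satMax", sat[1]);
        SmartDashboard.putNumber("Vision/valMin", val[0]);
        SmartDashboard.putNumber("Vision/valMax", val[1]);
    }

    /** Call each frame before running the pipeline. */
    public void refresh() {
        hue[0] = SmartDashboard.getNumber("Vision/hueMin", hue[0]);
        hue[1] = SmartDashboard.getNumber("Vision/hueMax", hue[1]);
        sat[0] = SmartDashboard.getNumber("Vision/satMin", sat[0]);
        sat[1] = SmartDashboard.getNumber("Vision/satMax", sat[1]);
        val[0] = SmartDashboard.getNumber("Vision/valMin", val[0]);
        val[1] = SmartDashboard.getNumber("Vision/valMax", val[1]);
    }
}
```

The refreshed arrays then replace the generated constants wherever the pipeline calls inRange.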
Now I can at least tune the vision accurately, but we’re having some problems with the camera producing consistent readings. But that’s probably a topic for another thread…
Yes, this is going to be true whenever the robot calls the CameraServer's setResolution(...) and/or setContrast(...) methods, because GRIP on the PC is then tuning against different frames than the roboRIO sees. Good that you figured out how to make the HSV parameters adjustable in the generated code. The other (easier) approach would be to have the modified camera server running on the RIO, and then have GRIP connect to that stream via 'Add IP Camera' in GRIP, e.g. with http://roborio-TEAM-frc:1181.
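For reference, a minimal robot-side sketch of that approach (this assumes WPILib's CameraServer, which serves the first automatic capture as MJPEG on port 1181; the resolution value is just an example, and the import paths shown are from the 2017-era WPILib, so newer releases may have moved them):

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.IterativeRobot;

public class Robot extends IterativeRobot {
    @Override
    public void robotInit() {
        // Serves MJPEG on port 1181 by default, which is where the
        // Add IP Camera URL in GRIP points.
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setResolution(320, 240); // apply the SAME settings the RIO uses
    }
}
```

This way GRIP tunes against exactly the frames the roboRIO is processing.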
It is critical to override all of the camera's automatic exposure control features (we are using the Microsoft LifeCam HD-3000). We found that, at a minimum, we had to call:
setWhiteBalanceManual()
setExposureManual()
setBrightness()
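In code, that looks something like this (a sketch against the 2017-era WPILib packages; the numeric values are placeholders to tune against your own green LED ring, not recommendations):

```java
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.wpilibj.CameraServer;

public class CameraConfig {
    // Sketch: lock down the LifeCam's automatic controls.
    public static UsbCamera configure() {
        UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
        camera.setWhiteBalanceManual(4500); // fixed white balance (placeholder value)
        camera.setExposureManual(10);       // 0-100; low, so the lit target dominates
        camera.setBrightness(30);           // 0-100; darkens the background
        return camera;
    }
}
```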
There also seem to be scenarios where the above calls occasionally fail. I haven't been able to reproduce this consistently, so I can't say exactly what causes it. We were able to work around it by making the calls multiple times.
(One theory I haven't been able to test: there is a race condition at camera start-up, and you need to wait a while after creating the camera object before calling the setters above.)
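One way to make that "call it multiple times" workaround less ad hoc is a small retry helper. This is plain Java (so it can be exercised off the robot); wiring it to the camera setters is the assumption here, and since the setters may return void in your WPILib version, the supplier you pass in should apply the setting and return whether a read-back confirms it stuck:

```java
import java.util.function.BooleanSupplier;

// Generic retry helper for flaky one-shot calls such as camera setters.
public class Retry {
    /**
     * Runs action until it reports success or attempts are exhausted.
     * Returns true if the action ever succeeded.
     */
    public static boolean withRetries(BooleanSupplier action, int attempts, long delayMs) {
        for (int i = 0; i < attempts; i++) {
            if (action.getAsBoolean()) {
                return true;
            }
            try {
                // Also covers the suspected start-up race: each failed attempt
                // gives the camera a little more time to finish initializing.
                Thread.sleep(delayMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```

Usage would be something like wrapping each setter in a supplier that applies it and then verifies the value, retrying a handful of times with a few hundred milliseconds between attempts.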