#1
Vision problem - robot view not matching Grip view
Our team is having some difficulty with the GRIP-generated code. We get a different result when we run the pipeline on the roboRIO vs. on the PC.

I am attaching a Word document with some screen captures. One shows the result of HSV threshold detection on the robot, and the other shows the result of the same pipeline on the PC. As you can see, they are wildly different. Both use the same Microsoft USB camera mounted on the robot, illuminated with a green LED ring. Any ideas as to what is going wrong? (Edited to add: we are using Java, if that makes any difference.)

Last edited by skidad68 : 26-01-2017 at 21:14. Reason: added clarification
#2
|
||||
|
||||
|
Re: Vision problem - robot view not matching Grip view
Are you using the publish video block in your GRIP pipeline?
#3
|
|||
|
|||
|
Re: Vision problem - robot view not matching Grip view
We've hacked the generated hsvThreshold() method to add a call to CvSource.putFrame().
#4
|
||||
|
||||
|
Re: Vision problem - robot view not matching Grip view
The reason for this is that OpenCV modifies the Mat passed into findContours in place. Since the Mat from the HSV Threshold step is fed into findContours directly, findContours also modifies the Mat that is being sent to the dashboard.

The solution is to create a new Mat in the class (call it hsvSend or something like that). Then, at the end of the HSV Threshold function, call hsvout.copyTo(hsvSend); and putFrame the hsvSend Mat instead of the hsvout Mat. (hsvout might not be the right name, I don't remember, but it's the one that is the output of the inRange function.)

As another hint: you are HIGHLY overexposing your image. Either lower the voltage to your light source, or lower the exposure on the camera. It will make tracking much easier and more reliable.
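A minimal sketch of that fix, assuming names close to what GRIP typically generates (hsvThresholdOutput, etc.) and the WPILib CameraServer/CvSource API; your generated names will likely differ:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import edu.wpi.cscore.CvSource;
import edu.wpi.first.wpilibj.CameraServer;

public class Pipeline {
    private final Mat hsvThresholdOutput = new Mat();
    // Separate Mat that only the dashboard stream sees; findContours never touches it.
    private final Mat hsvSend = new Mat();
    private final CvSource debugStream =
        CameraServer.getInstance().putVideo("HSV Threshold", 320, 240);

    private void hsvThreshold(Mat input, double[] hue, double[] sat, double[] val) {
        Imgproc.cvtColor(input, hsvThresholdOutput, Imgproc.COLOR_BGR2HSV);
        Core.inRange(hsvThresholdOutput,
                     new Scalar(hue[0], sat[0], val[0]),
                     new Scalar(hue[1], sat[1], val[1]),
                     hsvThresholdOutput);
        // Copy BEFORE findContours runs on hsvThresholdOutput, so the
        // published frame is the clean threshold image, not the one
        // findContours mutates later in the pipeline.
        hsvThresholdOutput.copyTo(hsvSend);
        debugStream.putFrame(hsvSend);
    }
}
```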
#5
|
|||
|
|||
|
Re: Vision problem - robot view not matching Grip view
Thanks for the tip.

Follow-up question: Is there a way to set the exposure in GRIP? Otherwise the HSV calibration we do there isn't going to be close.
#6
|
|||
|
|||
|
Re: Vision problem - robot view not matching Grip view
So I've been monkeying around with this for some time now, and I think the real problem is that the HSV thresholds generated by GRIP don't produce the same results on the roboRIO. (Note that the camera I used to tune GRIP is the same one the roboRIO uses.) On top of that, the lack of brightness control in GRIP makes it pretty much useless for tuning, since brightness is a critical setting for improving noise rejection.

To work around this, I modified the generated pipeline to make all of the tuning parameters variables, then used the SmartDashboard to tune the parameters and monitor the result in real time. Now I can at least tune the vision accurately, but we're having some problems with the camera producing consistent readings. That's probably a topic for another thread...
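For anyone wanting to do the same, a dashboard-tunable threshold can be sketched like this. The SmartDashboard key names and default values are my own assumptions, not anything GRIP generates:

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class TunableThreshold {
    public void process(Mat input, Mat output) {
        // Read the current bounds every frame so they can be edited
        // live from the SmartDashboard while watching the result.
        double hMin = SmartDashboard.getNumber("vision/hMin", 50);
        double hMax = SmartDashboard.getNumber("vision/hMax", 90);
        double sMin = SmartDashboard.getNumber("vision/sMin", 100);
        double sMax = SmartDashboard.getNumber("vision/sMax", 255);
        double vMin = SmartDashboard.getNumber("vision/vMin", 100);
        double vMax = SmartDashboard.getNumber("vision/vMax", 255);

        Imgproc.cvtColor(input, output, Imgproc.COLOR_BGR2HSV);
        Core.inRange(output,
                     new Scalar(hMin, sMin, vMin),
                     new Scalar(hMax, sMax, vMax),
                     output);
    }
}
```

Reading the values every frame costs a few NetworkTables lookups per loop, but that is cheap compared to the image processing itself.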
#8
|
|||
|
|||
|
Re: Vision problem - robot view not matching Grip view
So, regarding the camera consistency issues:

It is critical to override all of the camera's automatic exposure control features. (We are using the Microsoft LifeCam HD-3000.) We found that at a minimum we had to call:

- setWhiteBalanceManual()
- setExposureManual()
- setBrightness()

There also seem to be scenarios where the above calls occasionally fail. I haven't been able to reproduce this consistently, so I can't say exactly what causes it. We were able to work around it by making the calls multiple times. (One theory I haven't been able to test is that there is a race condition at camera start-up, and you need to wait a while after creating the camera object before calling the setters above.)
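A sketch of that "call the setters more than once" workaround. The retry helper is plain Java; the commented usage assumes the cscore UsbCamera setters named above, and the specific values are placeholders:

```java
public class CameraSetup {
    /**
     * Apply the camera settings several times, pausing between attempts,
     * to ride out the start-up window where the calls seem to be ignored.
     */
    public static void applyRepeatedly(Runnable settings, int times, long delayMs) {
        for (int i = 0; i < times; i++) {
            settings.run();
            try {
                Thread.sleep(delayMs); // give the camera driver time to settle
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    // On the robot this might look like (names from WPILib/cscore):
    //   UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    //   applyRepeatedly(() -> {
    //       camera.setWhiteBalanceManual(4500);
    //       camera.setExposureManual(10);
    //       camera.setBrightness(30);
    //   }, 3, 100);
}
```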