Quote:
Originally Posted by wsh32
Check out team 254's vision seminar at champs last year: https://www.youtube.com/watch?v=rLwOkAJqImo
To answer your question, if you want to see data coming out of the GRIP debugging, use localhost or 127.0.0.1.
A couple key points about processing the video stream:
1. Underexpose your camera. 254 goes more into detail about why in the video, but it does wonders.
2. You can also eliminate noise by applying a gaussian blur and running cv erode followed by cv dilate (same number of iterations). I'd also use the Filter Contours block and filter by area.
Also, make sure your camera is mounted in the center of your robot. It helps so much when it comes to alignment. Less important, but keeping your camera behind your center of rotation reduces the perceived alignment angle, which helps eliminate oscillation.
I hope this helps, good luck this season!
wsh32,
Thanks for the link. I made some screen captures for a quick summary. Is there a way I can attach the document to share with everyone?
These are the takeaway points:
1) Make sure the camera is mounted in the center of the robot
2) Different cameras and hardware have different advantages and disadvantages; the implementation depends on team resources. We are attempting the easiest solution (camera on the roboRIO), which has low processing power and high latency.
3) Use a vision library: OpenCV, NIVision, GRIP, etc.
4) Use HSV (hue, saturation, value) instead of RGB
5) Turn down the exposure; we want a dark image (underexpose, don't overexpose)
6) Tune hue first, then saturation, then value. Look for high saturation (color intensity), then tune V (brightness) to match the LED ring setup. If done properly, we will not need to recalibrate on the field.
7) Convert pixel coordinates into real-world coordinates (an angular displacement). Use that angle as the setpoint for a controller driven by a faster sensor; in our case, a gyro. There will be a latency issue, but we will have to accept it due to limited resources and knowledge.
8) Use a linear pixels-to-degrees approximation. The pinhole camera model is more exact and will be implemented in future years.
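To illustrate point 4, here is a small stdlib-only sketch (the RGB values are made-up examples, not measured data) showing why HSV thresholding is more robust than RGB: the same green target at two very different brightness levels has very different RGB values but nearly the same hue, so a tight hue range survives lighting changes.

```python
import colorsys

# Hypothetical example values: a green retroreflective target seen at
# two exposure levels. The RGB triples differ wildly; the hue barely moves.
bright = (40, 255, 90)   # well-lit target (R, G, B)
dim = (10, 64, 22)       # same target, underexposed

def hue_deg(rgb):
    # colorsys works on floats in [0, 1]; hue comes back in [0, 1)
    r, g, b = (c / 255.0 for c in rgb)
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0

print(hue_deg(bright))
print(hue_deg(dim))  # both hues land within a couple degrees of each other
```

In a real pipeline you would do the equivalent with GRIP's HSV Threshold block or OpenCV's `cvtColor` + `inRange`, but the principle is the same.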
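wsh32's erode-then-dilate step (morphological "opening") can be sketched in plain Python on a tiny binary grid; a real pipeline would call cv2.erode / cv2.dilate on the thresholded image instead. Eroding first kills isolated noise pixels; dilating afterwards restores the bulk of real blobs.

```python
def erode(grid):
    """A pixel survives only if it and all 4-neighbours are set (border counts as unset)."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            out[y][x] = int(all(
                0 <= ny < h and 0 <= nx < w and grid[ny][nx]
                for ny, nx in nbrs))
    return out

def dilate(grid):
    """A pixel is set if it or any 4-neighbour is set."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            out[y][x] = int(any(
                0 <= ny < h and 0 <= nx < w and grid[ny][nx]
                for ny, nx in nbrs))
    return out

def opening(grid, iterations=1):
    # Same number of iterations each way, as wsh32 recommends.
    for _ in range(iterations):
        grid = erode(grid)
    for _ in range(iterations):
        grid = dilate(grid)
    return grid
```

A lone noise pixel has no set neighbours, so one erode pass removes it for good, while a solid blob shrinks and then mostly grows back.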
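Points 7 and 8 can be made concrete with a quick sketch comparing the linear pixels-to-degrees approximation against the pinhole model. The image width and field of view below are hypothetical round numbers; check your own camera's specs before using anything like this.

```python
import math

# Assumed camera parameters for illustration only.
IMAGE_WIDTH = 320   # pixels
HFOV_DEG = 60.0     # horizontal field of view

# Focal length in pixels, derived from the pinhole model:
# half the image width subtends half the field of view.
FOCAL_PX = (IMAGE_WIDTH / 2) / math.tan(math.radians(HFOV_DEG / 2))

def angle_linear(px):
    """Linear approximation: assume a constant degrees-per-pixel ratio."""
    return (px - IMAGE_WIDTH / 2) * (HFOV_DEG / IMAGE_WIDTH)

def angle_pinhole(px):
    """Pinhole model: exact for an ideal, undistorted camera."""
    return math.degrees(math.atan2(px - IMAGE_WIDTH / 2, FOCAL_PX))

for px in (160, 240, 319):
    print(px, angle_linear(px), angle_pinhole(px))
```

The two agree at the image centre and at the edges (where the linear slope is calibrated), but diverge by about a degree mid-frame with these numbers, which is why the pinhole model is the more exact long-term choice.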
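Finally, the "setpoint for a faster sensor" idea in point 7 can be sketched as a simple proportional controller on gyro heading. Everything here is a placeholder (the gain, the headings, and the loop), not a real WPILib API: one possibly-stale vision frame sets a field-relative heading target, and the fast gyro closes the loop from then on.

```python
KP = 0.02  # proportional gain; a made-up value you would tune on the robot

def turn_command(gyro_deg, setpoint_deg):
    """P controller on heading error; returns a motor output clamped to [-1, 1]."""
    error = setpoint_deg - gyro_deg
    return max(-1.0, min(1.0, KP * error))

# Hypothetical scenario: when the vision frame was captured the gyro read
# 10 deg and the target appeared 5 deg to the right, so the setpoint is
# 15 deg. After that, only the gyro is needed, even if vision is slow.
setpoint = 10.0 + 5.0
for gyro in (10.0, 12.0, 14.0, 15.0):
    print(gyro, turn_command(gyro, setpoint))
```

Because the setpoint is anchored to the gyro reading at capture time, vision latency only delays when the turn starts, not where it ends.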
See best bets summary below.
