I'll try to respond to all of your replies; thank you for them.
Quote:
Originally Posted by ahartnet
What is your preferred language?
Do you have a preferred camera to use - Axis (more expensive, but it seems either more capable or easier to work with) or a USB camera (cheaper, but it seems to me to have some limitations or to require more effort for flexibility/reliability)?
|
Our preferred language is Java, using the command-based robot framework.
We have an Axis camera on our robot and a USB camera available as well.
Quote:
Originally Posted by snekiam
What are you trying to do with vision? Auto aiming can be tricky, but the main ideas boil down to this:
-Use an LED ring (usually green) to reflect off the tape
-Set your camera's exposure as low as it will go while still seeing the reflective tape
-Create a binary image of only potential goals (in OpenCV, use an HSV filter), then find all contours
-Throw away anything that isn't the goal (check similarity with OpenCV, aspect ratio, area vs. perimeter, etc.)
-Now comes the tricky part :-). You'll probably want to calculate distance and angle to the tape. Distance can be done using this equation: F = (P x D) / W, where F is your focal length (probably published, but you should double-check it with this equation), P is the width of the target in pixels, D is the distance to the target, and W is the real-world width of the target. Calibrate F once at a known distance, then rearrange to get distance: D = (W x F) / P.
-To calculate angle, you'll essentially modify the field of view equation. The azimuth angle = arctan( ( goal center's x coordinate - ( (image width in pixels) / 2 - 0.5 ) ) / focal length ).
-The azimuth is the angle your robot will have to rotate to be dead on with the goal
Most of this can be accomplished in OpenCV if you are feeling adventurous. We didn't get vision working this year until after competition, but we did it on a Raspberry Pi with a USB camera. We then sent the angle and distance over NetworkTables to the roboRIO.
If you have any questions about how to do a specific part of this, feel free to ask.
|
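For anyone else following along, here is roughly how I read those steps in Java with OpenCV. This is a minimal sketch, not tested robot code: the HSV bounds, TARGET_WIDTH_IN, FOCAL_LENGTH_PX, and IMAGE_WIDTH_PX are placeholder values you would calibrate for your own camera and goal, and GoalPipeline/findGoal are just names I made up.

Code:
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class GoalPipeline {
    // Placeholder values -- calibrate all of these for your own setup.
    static final Scalar HSV_LOW  = new Scalar(50, 100, 100);  // green LED ring reflection
    static final Scalar HSV_HIGH = new Scalar(90, 255, 255);
    static final double TARGET_WIDTH_IN = 20.0;   // real-world width of the tape
    static final double FOCAL_LENGTH_PX = 600.0;  // from F = (P x D) / W at a known distance
    static final double IMAGE_WIDTH_PX  = 640.0;

    /** Returns {distanceInches, azimuthDegrees}, or null if no goal was found. */
    public static double[] findGoal(Mat bgr) {
        // 1. Binary image of candidate goals via an HSV threshold.
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);
        Mat binary = new Mat();
        Core.inRange(hsv, HSV_LOW, HSV_HIGH, binary);

        // 2. Find all contours in the binary image.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binary, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // 3. Throw away anything that doesn't look like the goal,
        //    then keep the largest remaining candidate.
        Rect best = null;
        for (MatOfPoint c : contours) {
            Rect r = Imgproc.boundingRect(c);
            double aspect = (double) r.width / r.height;
            double area = Imgproc.contourArea(c);
            if (area < 100) continue;                   // too small to be the goal
            if (aspect < 1.0 || aspect > 3.0) continue; // wrong shape for the goal
            if (best == null || r.area() > best.area()) best = r;
        }
        if (best == null) return null;

        // 4. Distance from the pinhole model: D = (W x F) / P.
        double distance = (TARGET_WIDTH_IN * FOCAL_LENGTH_PX) / best.width;

        // 5. Azimuth from the center-offset formula in snekiam's post.
        double cx = best.x + best.width / 2.0;
        double azimuth = Math.toDegrees(
                Math.atan((cx - (IMAGE_WIDTH_PX / 2.0 - 0.5)) / FOCAL_LENGTH_PX));
        return new double[] { distance, azimuth };
    }
}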
We have a green LED ring on our camera.
What we are looking for is the best way to acquire the image and process it. Should we focus on GRIP running on the roboRIO? Should we focus on a Raspberry Pi? Can the code be integrated into the robot code using OpenCV?
I'm guessing it will come down to lots of trial and error with the different systems.
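If the Raspberry Pi route turns out to be the answer, my understanding is the Pi side would just grab frames, run the pipeline, and push two numbers to the roboRIO. Again a sketch, assuming OpenCV 3's Java bindings and the 2016-era NetworkTables client API; the "vision" table name, the keys, and team 9999 are all made up here.

Code:
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class PiVisionSender {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); } // load OpenCV natives

    public static void main(String[] args) throws InterruptedException {
        // Run as a NetworkTables client pointed at the roboRIO.
        NetworkTable.setClientMode();
        NetworkTable.setTeam(9999);                // placeholder team number
        NetworkTable table = NetworkTable.getTable("vision");

        VideoCapture camera = new VideoCapture(0); // USB camera on the Pi
        Mat frame = new Mat();
        while (camera.read(frame)) {
            // findGoal is from the pipeline sketch earlier in this post.
            double[] result = GoalPipeline.findGoal(frame);
            table.putBoolean("hasTarget", result != null);
            if (result != null) {
                table.putNumber("distance", result[0]);
                table.putNumber("azimuth", result[1]);
            }
            Thread.sleep(20);                      // don't flood the network
        }
    }
}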
Quote:
Originally Posted by virtuald
Definitely check out GRIP; I've heard good things, and I'm sure it will be even better by the time next season rolls around.
Also, if looking at working code is your thing, I've been accumulating links to robot code. There are a fair number of vision-related repos in a variety of different languages: https://firstwiki.github.io/wiki/robot-code-directory
|
I have looked exclusively at GRIP, and I too think this will be the solution for many teams in the years to come. I tried using the Axis camera with GRIP running on the driver station to acquire and process the image, but I did not like the network traffic and packet loss that produced. I briefly tried deploying it to the roboRIO, but it crashed a few minutes after launching; I have not dug into why yet. Before crashing, it was finding the target and posting data to NetworkTables.
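For reference, reading the values GRIP posts back out on the robot side looked straightforward. A sketch of what I had in mind for our Java robot code; "GRIP/myContoursReport" is GRIP's default publish path, but double-check the name your own pipeline actually publishes under.

Code:
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class GripListener {
    private final NetworkTable grip = NetworkTable.getTable("GRIP/myContoursReport");

    /** Returns the x-center of the widest published contour, or NaN if none. */
    public double getTargetCenterX() {
        double[] centerX = grip.getNumberArray("centerX", new double[0]);
        double[] width   = grip.getNumberArray("width",   new double[0]);
        int best = -1;
        for (int i = 0; i < width.length; i++) {
            if (best < 0 || width[i] > width[best]) best = i;
        }
        return (best >= 0 && best < centerX.length) ? centerX[best] : Double.NaN;
    }
}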