Vision tutorials

I’m the lead mentor for a small team. The two groups I primarily work with are the design/build and software/drive teams. We don’t have the size or support to break into design, CAD, build, software, web, business, and so on. For a summer project we are going to figure out vision. We found the WPI RobotBuilder command-based video to be amazing at jump-starting our robot’s abilities beyond anything we’ve done in the past. We have not found similar tutorials to get us started; the bits we did find seem to assume a lot of prior vision or programming knowledge.

What is your preferred language?
Do you have a preferred camera to use - an Axis camera (more expensive, but it seems either more capable or easier to work with) or a USB camera (cheaper, but it seems to me to have some limitations or to require more effort for flexibility/reliability)?

What are you trying to do with vision? Auto aiming can be tricky, but the main ideas boil down to this:

-Use an LED ring (usually green) to reflect off the tape
-Set your camera’s exposure as low as it will go while still seeing the reflective tape
-Create a binary image of only potential goals (in OpenCV, use an HSV filter), then find all contours (see the first sketch after this list)
-Throw away anything that isn’t the goal (check shape similarity with OpenCV, aspect ratio, area vs. perimeter, etc.)
-Now comes the tricky part :-). You’ll probably want to calculate distance and angle to the tape. Distance can be done using this equation: F = (P x D) / W, where F is your focal length in pixels (probably published, but you should double-check it with this equation), P is the width of the target in pixels in your image, D is the known distance to the target, and W is the target’s real-world width. Once F is calibrated, solve for distance: D = (F x W) / P (see the second sketch after this list).
-To calculate angle, you’ll essentially modify the field of view equation. The azimuth angle = arctan( ( the goal’s center x coordinate - ( image width (pixels) / 2 - 0.5 ) ) / focal length ).
-The azimuth is the angle your robot will have to rotate to be dead on with the goal
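Here’s a minimal sketch of the binary-image and contour-filtering steps above in OpenCV’s Java bindings. The HSV bounds and the area/aspect-ratio cuts are placeholders you’d tune for your own camera, exposure, and target, and the class and method names are just for illustration.

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class TargetFinder {

    // Assumed HSV bounds for a green LED ring; tune for your camera and exposure.
    private static final Scalar HSV_LOW  = new Scalar(50, 100, 100);
    private static final Scalar HSV_HIGH = new Scalar(90, 255, 255);

    /** Returns bounding boxes of contours that look like the retroreflective tape. */
    public static List<Rect> findTargets(Mat frame) {
        Mat hsv = new Mat();
        Mat binary = new Mat();

        // Binary image of only potential goals: convert to HSV, then threshold on green.
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
        Core.inRange(hsv, HSV_LOW, HSV_HIGH, binary);

        // Find the outline of every bright green blob.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binary, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Throw away anything that doesn't look like the goal.
        List<Rect> targets = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            double area = Imgproc.contourArea(contour);
            Rect box = Imgproc.boundingRect(contour);
            double aspect = (double) box.width / box.height;

            // Example cuts only: minimum area and a roughly goal-shaped aspect ratio.
            if (area > 100 && aspect > 1.0 && aspect < 4.0) {
                targets.add(box);
            }
        }
        return targets;
    }
}
```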
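And a sketch of the distance and azimuth math. The focal length and target width constants below are placeholders, not published values; calibrate F yourself with F = (P x D) / W at a known distance.

```java
public class TargetMath {

    // Focal length in pixels. Placeholder value: calibrate it yourself with
    // F = (P x D) / W by measuring the target's pixel width P at a known distance D.
    private static final double FOCAL_LENGTH_PX = 600.0;

    // Real-world width of the retroreflective target, in inches (placeholder).
    private static final double TARGET_WIDTH_IN = 20.0;

    /** Distance to the target: D = (F x W) / P, where P is the target width in pixels. */
    public static double distanceInches(double targetWidthPx) {
        return (FOCAL_LENGTH_PX * TARGET_WIDTH_IN) / targetWidthPx;
    }

    /** Azimuth (radians) the robot must rotate to be dead on with the goal. */
    public static double azimuthRadians(double targetCenterXPx, double imageWidthPx) {
        double imageCenterX = imageWidthPx / 2.0 - 0.5;
        return Math.atan((targetCenterXPx - imageCenterX) / FOCAL_LENGTH_PX);
    }
}
```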

Most of this can be accomplished in OpenCV if you are feeling adventurous. We didn’t get vision working this year until after competition, but we did it on a Raspberry Pi with a USB camera. We then sent the angle and distance over NetworkTables to the roboRIO.
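Publishing from the Pi can look roughly like this sketch, using the 2016-era NetworkTables client API (newer WPILib versions use NetworkTableInstance instead); the table and key names are just examples.

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionPublisher {
    public static void main(String[] args) {
        // Run as a NetworkTables client pointed at the roboRIO (fill in your team number).
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("roborio-TEAM-frc.local");
        NetworkTable table = NetworkTable.getTable("vision");

        // In the real loop these would come from the OpenCV pipeline above.
        double azimuthRadians = 0.0;
        double distanceInches = 0.0;

        table.putNumber("azimuth", azimuthRadians);
        table.putNumber("distance", distanceInches);
    }
}
```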

If you have any questions about how to do a specific part of this, feel free to ask.

Definitely check out GRIP, I’ve heard good things and I’m sure it will be even better by the time next season rolls around.

Also, if looking at working code is your thing, I’ve been accumulating links to robot code; there’s a fair number of vision-related repos in a variety of different languages: https://firstwiki.github.io/wiki/robot-code-directory

I’ll try to respond to all your replies, and thank you for them.

Our preferred language is Java with a command-based robot.

We have an Axis camera on our robot and have a USB camera available as well.

We have a green LED ring on our camera.
What we are looking for is the best way to acquire the image and process it. Should we focus on GRIP running on the roboRIO? Should we focus on a Raspberry Pi? Can the processing be integrated into the robot code using OpenCV?

I’m guessing lots of trial and error with the different systems is what is going to go down.

I have looked exclusively at GRIP, and I too think this will be the solution for many teams in the years to come. I tried using the Axis camera with the driver station running GRIP to acquire and process the image, but I hate how the network traffic and packet loss look. I briefly tried deploying it to the roboRIO, but it crashed a few minutes after launching; I have not dug into why yet. It was finding the target and posting data to NetworkTables.
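Once GRIP is posting a contours report to NetworkTables, reading it in the robot code can look roughly like this sketch. The table path and key names depend on how the publish operation is configured in the GRIP pipeline, and this uses the 2016-era NetworkTables API.

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class GripListener {
    // Default GRIP publish location; match it to your pipeline's publish step.
    private final NetworkTable contours = NetworkTable.getTable("GRIP/myContoursReport");

    /** X centers of whatever contours GRIP found in the latest frame. */
    public double[] getCenterX() {
        return contours.getNumberArray("centerX", new double[0]);
    }

    /** Widths (pixels) of those contours, for the distance equation above. */
    public double[] getWidth() {
        return contours.getNumberArray("width", new double[0]);
    }
}
```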

I am still waiting on someone who does targeting and uses C++ as their primary programming language.

Did you look at the robot code directory linked above? Team 3512 has C++ Robot and OpenCV code for 2016.

It depends on how in depth you want to go. I believe there was some toying around with getting GRIP running on a co-processor on the robot, but I’m not sure if that ever turned into anything. We are using a Raspberry Pi to grab images from a USB camera and process them there in OpenCV. We set up MJPG-streamer on the Pi to serve the images and then accessed them in OpenCV as though they were from an Axis camera, since we had latency problems accessing the camera directly. I will probably post our code with an explanation after I have a chance to do some more testing, as we have yet to actually mount our system on our bot and use it for auto-aiming.
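Grabbing the MJPG-streamer feed in OpenCV’s Java bindings can be as simple as opening the stream URL with VideoCapture. The hostname, port, and URL path below are the usual MJPG-streamer defaults and are just assumptions about your setup.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;  // org.opencv.highgui in OpenCV 2.4

public class StreamGrabber {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Assumed MJPG-streamer URL on the Pi; substitute your own host and port.
        VideoCapture capture =
                new VideoCapture("http://raspberrypi.local:8080/?action=stream");

        Mat frame = new Mat();
        while (capture.read(frame)) {
            // Hand each frame to the HSV/contour pipeline sketched earlier in the thread.
        }
    }
}
```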

We do targeting and use C++ with NI Vision (although to be fair our code’s somewhat of a mess after several competitions’ worth of quick fixes :o)

Do you need help with anything specific?

If you have the funds available, you can get a Kangaroo PC for $100: https://www.microsoftstore.com/store/msusa/en_US/pdp/InFocus-Kangaroo-Signature-Edition-Mobile-Desktop/productID.328073600

We had a USB LifeCam connected to a Kangaroo running GRIP, which connected to the roboRIO through a USB-to-Ethernet dongle into the radio. We had our green LED ring connected to the PCM so that we could turn it on and off. One of our students designed a 3D-printed mount for the LifeCam and LED ring as well: https://twitter.com/therealsimslug/status/711384190541889536
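Switching the ring through the PCM is just a Solenoid in the robot code. Here is a rough sketch; the channel number and the old-style constructor are assumptions (newer WPILib versions also want a PneumaticsModuleType argument).

```java
import edu.wpi.first.wpilibj.Solenoid;
import edu.wpi.first.wpilibj.command.Subsystem;

/** Command-based subsystem that switches the LED ring through the PCM. */
public class LedRing extends Subsystem {
    // Assumed wiring: ring on PCM solenoid channel 0.
    private final Solenoid ring = new Solenoid(0);

    /** Turn the ring on only while lining up a shot, off otherwise. */
    public void set(boolean on) {
        ring.set(on);
    }

    @Override
    protected void initDefaultCommand() {
    }
}
```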

We had issues with the Ethernet connection to the roboRIO with the new radio. We were able to run off of the old D-Links just fine. We never quite sorted it out, and we never got good enough at shooting high goals to really use our vision code. But GRIP was very easy to set up. Running it on the Kangaroo and using NetworkTables kept the lag small enough for us to center on the goal OK. I think if we were to continue to pursue vision, I would come back to this setup.