Vision tracking and related questions
I am wondering what vision tracking options there are and the relative difficulties of each.
Our team programs in C++, and we've never done any vision tracking before. Then, on to our other questions: 1. Can you use multiple cameras on a single robot?
Re: Vision tracking and related questions
Vision tracking options: GRIP (good GUI, easy to use, still in alpha) and OpenCV (well documented, but more complicated; you have to develop your own algorithm).
You can have more than one camera on the robot; the problem you'll run into is the allowed bandwidth limit of 7 Mbps.
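To give a feel for what "develop your own algorithm" means with OpenCV, here is a rough sketch: threshold for the green retroreflective tape, find contours, and report how far the largest target sits from the image center. The camera index and HSV bounds are assumptions you would tune for your own camera and lighting. Code:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // assumed: first USB camera on this machine
    if (!cap.isOpened()) return 1;
    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Assumed HSV range for a green LED ring on retroreflective tape;
        // tune these bounds for your own lighting.
        cv::inRange(hsv, cv::Scalar(60, 100, 100), cv::Scalar(90, 255, 255), mask);
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        double bestArea = 0.0;
        cv::Rect best;
        for (const auto& c : contours) {
            double a = cv::contourArea(c);
            if (a > bestArea) { bestArea = a; best = cv::boundingRect(c); }
        }
        if (bestArea > 0.0) {
            double cx = best.x + best.width / 2.0;
            std::cout << "offset from image center (px): "
                      << cx - frame.cols / 2.0 << std::endl;
        }
    }
    return 0;
}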
Also, I like OpenCV. GRIP is good too, and generates OpenCV code when used, but I don't know if it can do multiple cameras. I was thinking of doing stereo vision myself...
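If I do try stereo, the payoff is depth from disparity: Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the pixel disparity of the target between the two views. A back-of-envelope sketch, with every number made up rather than measured: Code:

#include <iostream>

int main() {
    // All values below are made-up examples, not measurements.
    double f = 700.0;  // focal length in pixels, from camera calibration
    double B = 0.20;   // baseline between the two cameras, meters
    double d = 35.0;   // disparity of the target between views, pixels
    std::cout << "depth ~ " << f * B / d << " m" << std::endl;  // prints 4 m
    return 0;
}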
Re: Vision tracking and related questions
We too program in C++. We have some new programming students this year and are very interested in vision tracking.
Since we're just starting out, would you recommend RoboRealm or GRIP to play with the images? We think that "all we need to do" is find a target image and tell how far left or right we are. Ideally, we could tell how far away from the goal we are too.

Do you think it's feasible to run the vision processing on the roboRIO, or do you think we need a separate platform? Our first option would be to run it on the driver station, as opposed to a co-processor, for simplicity. Unfortunately, we think we'll need a second camera: our ball pick-up is opposite our shooter.

I've been thinking about vision processing for years now, but this is the first time we have enough programming students to make it feasible. It seems that with the cRIO, off-board vision processing was a must. Has anyone done any throughput studies to see just how much computing resource a typical vision processing algorithm takes on the roboRIO? Thanks.
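For the "how far left/right and how far away" part, the math we're picturing is plain camera geometry. A sketch with assumed numbers (the field of view, image size, and the roughly 20 in wide goal opening are guesses, not measurements, and the pipeline outputs are example values): Code:

#include <cmath>
#include <iostream>

int main() {
    const double imageWidthPx  = 640.0;  // assumed camera resolution
    const double hfovDeg       = 60.0;   // assumed horizontal field of view
    const double targetWidthIn = 20.0;   // assumed real width of the goal

    double targetCenterXPx = 410.0;  // example value from a vision pipeline
    double targetWidthPx   = 55.0;   // example value from a vision pipeline

    // How far left/right: pixel offset from image center, in degrees.
    double offsetPx = targetCenterXPx - imageWidthPx / 2.0;
    std::cout << "aim error: " << offsetPx * (hfovDeg / imageWidthPx) << " deg\n";

    // How far away: similar triangles via the focal length in pixels.
    double focalPx    = imageWidthPx / (2.0 * std::tan(hfovDeg * M_PI / 360.0));
    double distanceIn = targetWidthIn * focalPx / targetWidthPx;
    std::cout << "distance: " << distanceIn / 12.0 << " ft\n";
    return 0;
}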
Re: Vision tracking and related questions
Do you have to use GRIP or OpenCV? The ScreenSteps documentation seems to suggest you can just code it with WPILib.
I know that if I can get the target coordinates, I can work out the algorithm. Also, the sample with the 2016 libraries is a SampleRobot program; does anyone have a command-based example?
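For reference, the kind of command-based shell I have in mind is below; the table name, key, gain, and drivetrain hookup are all placeholders, not anyone's working robot code: Code:

#include "WPILib.h"

// Hypothetical command that turns toward a vision target whose X coordinate
// is published to NetworkTables. Every name here is a placeholder.
class AimAtTarget : public Command {
public:
    AimAtTarget() : Command("AimAtTarget") {
        table = NetworkTable::GetTable("vision");  // assumed table name
        // Requires(Robot::drivetrain.get());      // hook up your subsystem
    }

protected:
    void Execute() override {
        double centerX = table->GetNumber("centerX", 160.0);  // assumed key
        double error   = centerX - 160.0;  // offset in a 320 px wide image
        double turn    = 0.005 * error;    // untuned proportional gain
        // Robot::drivetrain->ArcadeDrive(0.0, turn);  // hypothetical call
        (void)turn;
    }

    bool IsFinished() override { return false; }  // run until interrupted

private:
    std::shared_ptr<NetworkTable> table;
};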
Re: Vision tracking and related questions
Look here for tracking the Stronghold goal:
http://www.mindsensors.com/blog/how-...our-frc-robot-
Re: Vision tracking and related questions
Thanks for all the suggestions! Our team has started to play around with GRIP, but we are wondering whether it's best to run it on the driver station laptop (i5-4210U, integrated graphics, 6 GB RAM), on a Raspberry Pi 2, on the roboRIO, or on some sort of co-processor like the Kangaroo.
We plan to use a Logitech USB camera, plugged into whatever we use for the vision processing, with the results published to the roboRIO. Anyone have suggestions? I would also like some examples/tutorials on how to read the contours report from C++ robot code.
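From the ScreenSteps pages, it looks like GRIP just publishes the report as number arrays in NetworkTables, so is something like this the right idea? ("myContoursReport" is GRIP's default publish name; rename it to match your pipeline.) Code:

#include <cstdio>
#include "WPILib.h"

void PrintContours() {
    // GRIP publishes one NetworkTables array per column of the report;
    // entries at the same index describe the same contour.
    auto grip = NetworkTable::GetTable("GRIP/myContoursReport");
    std::vector<double> centerX = grip->GetNumberArray("centerX", llvm::ArrayRef<double>());
    std::vector<double> area    = grip->GetNumberArray("area", llvm::ArrayRef<double>());
    for (size_t i = 0; i < centerX.size(); i++) {
        double a = (i < area.size()) ? area[i] : 0.0;
        printf("contour %d: centerX=%.1f area=%.1f\n", (int)i, centerX[i], a);
    }
}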
Re: Vision tracking and related questions
Your main software options are GRIP and RoboRealm. GRIP feels unfinished and doesn't yet function well, as it is a work in progress; I greatly prefer RoboRealm.
I got RoboRealm vision processing working this week, and I can adjust the robot's position to within 15 pixels of the center of the vision target. This is my RoboRealm pipeline:

- Axis Camera: connects to an IP camera. RoboRealm can only use a USB camera if RoboRealm is running on that same device, so to use a USB camera on the robot you would need a Windows machine on the robot, such as a Kangaroo.
- Adaptive Threshold: grayscales the image and filters it so that only intensities of roughly 190 to 210 show, which is about the intensity of the reflective tape when an LED light shines on it.
- Convex Hull: fills in the target's U shape and makes it a rectangle.
- Blob Filter: removes all blobs that have made it this far except the largest one. (If you want multiple targets to come through, filter blobs by area instead of keeping only the largest.)
- Center of Gravity: gives the X coordinate of the center of the target in pixels.
- Network Tables: publishes the center-of-gravity information and the image dimensions to NetworkTables so your program can read them.

The following C++ is compatible with the above RoboRealm pipeline; look mainly at the Test() function.
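In sketch form (the RoboRealm table name and the COG_X / IMAGE_WIDTH variable names are assumptions to be matched to your Network Tables module settings, and the motor channels and turn speed are placeholders): Code:

#include <cmath>
#include "WPILib.h"

class Robot : public SampleRobot {
public:
    void Test() override {
        auto table = NetworkTable::GetTable("RoboRealm");  // assumed name
        RobotDrive drive{0, 1};  // assumed PWM channels for left/right

        while (IsTest() && IsEnabled()) {
            double cogX  = table->GetNumber("COG_X", -1.0);
            double width = table->GetNumber("IMAGE_WIDTH", 320.0);
            if (cogX >= 0.0) {
                double error = cogX - width / 2.0;
                if (std::fabs(error) > 15.0) {   // the 15-pixel tolerance
                    drive.ArcadeDrive(0.0, error > 0 ? 0.4 : -0.4);
                } else {
                    drive.ArcadeDrive(0.0, 0.0); // centered on the target
                }
            }
            Wait(0.02);  // loop at roughly 50 Hz
        }
    }
};

START_ROBOT_CLASS(Robot)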