Example of Vision Processing Available Upon Request
Last year my team was recognized for having such a great vision system, and if I get enough requests I would be more than happy to put together a quick tutorial to get teams up and running with "On Robot Tracking" instead of sending packets over the network.
Sending packets over the network may be a problem this year, because the GDC has said that those packets are "deprioritized" relative to others. Let me know what you think. I will need to know whether you want the vision code to be in C++ or Python, and also whether you want the robot code to be in C++ or Java. Let me know if anyone is interested!
Re: Example of Vision Processing Available Upon Request
What was your strategy for computer vision? Did you use the WPILib functions, or did you write your own image-recognition functions?
Re: Example of Vision Processing Available Upon Request
I'm interested. One of the team's programming goals is to use vision this year. Thanks!
Re: Example of Vision Processing Available Upon Request
I'm definitely interested! Our team got vision almost working last year, but the problem was that it wasn't at all reliable. In C++, if you don't mind.
Re: Example of Vision Processing Available Upon Request
I would also be interested. I got some vision going last year as well, but not super reliably either. C++, please.
Re: Example of Vision Processing Available Upon Request
An example of OpenCV vision processing on a coprocessor would be great!
For teams looking to process on the cRIO, it looks like there are examples available in each language already. We'll be playing around with one soon to decide whether it's viable or whether we want to focus on doing it on the DS or a coprocessor.
Re: Example of Vision Processing Available Upon Request
Any ideas for a self-contained program that takes an image and supplies an image to the cRIO with C++ and NetworkTables, and runs on a separate system (a la Raspberry Pi, Arduino, and so on)?
Re: Example of Vision Processing Available Upon Request
I'm interested in learning about using a coprocessor.
Re: Example of Vision Processing Available Upon Request
Team 3753 here; we've never used vision before, though we're 100% determined to this year. We have last year's Kinect and some reflective tape at the ready!
We're programming in LabVIEW, but I know basic C++ and Java, so this would still be immensely helpful even if it's not done in LabVIEW!
Re: Example of Vision Processing Available Upon Request
There was a card with an activation key for some sort of vision-processing software to install on our driving computer, so I'm curious to see how that would work. I feel like it could have a lot of potential; I'll post back when I try it out.
Re: Example of Vision Processing Available Upon Request
Do the vision processing in Python, please! I'd love to see a tutorial on it.
Re: Example of Vision Processing Available Upon Request
Hey everyone, it seems there is an overwhelming need for this. Let me specify what we did last year and what the tutorial will be like.

Last year
The main thing that set us apart from other teams was that we did all our vision processing on the robot, on a Core i5 computer (i.e. a motherboard with integrated graphics and a Core i5; no screen or anything). We used Ubuntu (a version of Linux). To deploy code, we used Git and bash scripts to deploy, compile, and run the code on boot.

The Tutorial
The one thing I will be covering is how to get a basic rectangle-tracking system going. This system will recognize a colored rectangle, find the bounding points, and draw them on the image. After seeing the above posts, it seems like everyone would like to see the vision in C++.

Pros and Cons
C++ Pros:
Cons:
Python Pros:
Cons:

Because of the new documentation, and because I am trying to convince my team to use Python this year all the way around (robot and vision), I will be doing the vision in Python. Another reason for this is that the code is the same for Windows and Linux (the C++ libraries vary a bit). I will post back here when the tutorial is complete. I will not, however, be covering how to install Python or the OpenCV libraries. While you wait for the tutorial from me about the rectangles, here is how to install OpenCV (I will be using Python 2.7.3 and OpenCV 2.4.2): How to install OpenCV
At the end of the Python tutorial I will show you how to convert the code to C++.
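For anyone who wants a head start before the tutorial thread goes up, here is a minimal sketch of the kind of rectangle-tracking pipeline described above, written against Python 2.7 / OpenCV 2.4. To be clear, this is only an illustration, not the tutorial code: the HSV color bounds, blur kernel, and area cutoff are placeholder values you would tune for your own camera and target.

Code:
import cv2
import numpy as np

# Placeholder HSV bounds for the target color; tune these for your camera.
LOWER = np.array([40, 100, 100])
UPPER = np.array([80, 255, 255])

def find_rectangles(bgr):
    """Return 4-point polygons approximating colored rectangles."""
    blurred = cv2.GaussianBlur(bgr, (5, 5), 0)           # knock down sensor grain
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)                # binary image of the color
    contours, _ = cv2.findContours(mask, cv2.RETR_TREE,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rects = []
    for c in contours:
        if cv2.contourArea(c) < 1000:                    # skip small noise blobs
            continue
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)  # polygon approximation
        if len(approx) == 4:                             # keep 4-cornered shapes
            rects.append(approx)
    return rects

if __name__ == '__main__':
    img = cv2.imread('frame.jpg')   # or grab frames from cv2.VideoCapture(0)
    for r in find_rectangles(img):
        cv2.drawContours(img, [r], -1, (0, 255, 0), 2)   # draw the bounding points
    cv2.imwrite('out.jpg', img)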
Re: Example of Vision Processing Available Upon Request
Awesome! I'm a proficient Python developer outside of robotics, but it's just easier to get the other kids into programming with LabVIEW. Maybe if I do vision similar to yours this year, we can push Python onto the rest of the team.
Re: Example of Vision Processing Available Upon Request
We tried to do it last year, unsuccessfully... Java, please?
Re: Example of Vision Processing Available Upon Request
As a note, there is the "white paper" at wpilib.screenstepslive.com that will point you to the C++, Java, and LabVIEW examples for rectangle recognition and processing.
Re: Example of Vision Processing Available Upon Request
I didn't know Python was officially supported this year. I guess Java would be best, but I know Python too.
But pinkie-promise you won't use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)
Re: Example of Vision Processing Available Upon Request
SimpleCV code is extremely easy to use.
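For reference, a tiny sketch of what "easy" looks like in SimpleCV; this is my own illustration rather than code from this thread, and the minsize value is a placeholder. findBlobs() does the segmentation and noise filtering that OpenCV makes you assemble by hand:

Code:
from SimpleCV import Camera

cam = Camera()                       # first attached webcam
img = cam.getImage()
blobs = img.findBlobs(minsize=1000)  # connected regions, ignoring small noise
if blobs:
    blobs.draw()                     # outline each blob on the image
    img.show()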
Re: Example of Vision Processing Available Upon Request
I have decided I will not be doing the robot or network code at this time; I will be doing a tutorial on just the vision. If demand is high enough, I will also do a tutorial on sending the data to the robot. The networking can be found just about anywhere for any language. Look for a post entitled "OpenCV Tutorial"; I will post here as well once the tutorial thread is up.
Re: Example of Vision Processing Available Upon Request
OK everyone! Here is a quick OpenCV tutorial for tracking the rectangles!
OpenCV FRC Tutorial
Re: Example of Vision Processing Available Upon Request
Additionally, while it is not supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy
Re: Example of Vision Processing Available Upon Request
I have a few questions regarding your tutorial.
1. Why did you use a Gaussian blur on the image?
2. Could you explain what findContours, arcLength (how do the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify the function?), and contourArea do exactly (beyond what's obvious), and how they relate to finding a correct rectangle?
3. Why do you multiply the contour_length by 0.02?
4. How did you find the number 1000 to check against the contourArea?
I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.
Re: Example of Vision Processing Available Upon Request
You certainly don't need a Core i5 to handle this processing; the trick is to write your code correctly. OpenCV's implementations of certain algorithms are extremely efficient, and if you do it right, this can all be done on an ARM processor at about 20 FPS.
Contours are simply the outlines of objects found in an image (generally a binary one). The size of these contours can be used to filter out noise, reflections, and other unwanted objects. They can also be approximated with a polygon, which makes filtering targets easy. The contourArea of a target will have to be determined experimentally, and you will want to find a range of acceptable values (i.e., the target area will be a function of distance). OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy-pasta won't get you too far, especially with embedded systems.
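To put that area filtering in code: it amounts to an experimentally tuned window. This is a sketch of my own, and the numbers are placeholders, since the acceptable range depends on your camera and your nearest and farthest shooting distances.

Code:
import cv2

# Area window found experimentally at your nearest and farthest
# shooting distances; these numbers are placeholders.
MIN_AREA = 1000
MAX_AREA = 20000

def filter_targets(contours):
    """Keep only contours whose area falls in the expected target range."""
    return [c for c in contours
            if MIN_AREA <= cv2.contourArea(c) <= MAX_AREA]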
Re: Example of Vision Processing Available Upon Request
1. The Gaussian blur just reduces the grain in the image by averaging nearby pixels.
2. I believe you are confused about RETR_TREE and CHAIN_APPROX_SIMPLE; these belong to the findContours method. RETR_TREE returns the contours as a tree (if you don't know what a tree is, look it up; a family tree is an example). CHAIN_APPROX_SIMPLE "compresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points" (straight out of the documentation). See http://docs.opencv.org/; their search is awesome. USE IT.
3. I have not actually looked at the algorithm behind the scenes for exactly how it affects the process, but in theory it scales the approximation tolerance to the contour's size.
4. 1000 was just something we found worked best to filter out a lot of false positives; on our production system we ended up upping it to 2000, if I remember right.
Any other questions, feel free to ask, but please do research first. I wish you luck!
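To make answers 2 and 3 concrete, here is a short sketch (my own illustration, not the tutorial's code): the 0.02 factor ties the approxPolyDP tolerance to each contour's own perimeter, so large and small rectangles get simplified with the same relative aggressiveness, and the tree from RETR_TREE lets you ask whether one contour is nested inside another.

Code:
import cv2

def simplify(contour):
    """Approximate a contour; the tolerance scales with its perimeter."""
    peri = cv2.arcLength(contour, True)   # perimeter of the closed curve
    # 2% of the perimeter: a contour 400 px around tolerates 8 px of
    # wiggle, one 100 px around only 2 px, so it is scale-invariant.
    return cv2.approxPolyDP(contour, 0.02 * peri, True)

def inner_contours(mask):
    """Return contours that are nested inside another contour.

    With RETR_TREE, hierarchy[0][i] is [next, prev, first_child, parent];
    a parent index of -1 means the contour is at the top of the tree.
    """
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE,
                                           cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return []
    return [c for i, c in enumerate(contours) if hierarchy[0][i][3] != -1]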
Re: Example of Vision Processing Avaliable Upon Request
hello everyone. I am a team mentor for Team 772. This year we had great intensions of doing vision processing on a beagleboard XM rev 3 and during the build season got some opencv working on a linux distro on the board. Where we fell off is we could not figure out a legal way to power the beagleboard according to the electrical wiring rules. So my question to the community is, if you did add a second processor on your robot to do any function such as vision processing, how did you power the device?
We interpreted a second battery pack for the xm as not permitted because it would not be physically internal to the XM unlike a laptop which has a built in battery which would be allowed. The XM would have been great due to its lightweight in comparison to a full laptop. And if we powered it from the power distribution board, it says only the crio can be powered from those terminals, which we thought we would need to attach to in order to connect the required power converter. Remember there are other rules about the converter to power the radio etc. Besides, we did not want ideally to power it from the large battery because we did not want to have the linux os get trashed during ungracefull power up and downs that happen on the field. So, how did you accomplish this aspect of using a separate processor? So, in short we may have had a vision processing approach but we could not figure out how to wire the processor. Any ideas? md |
Re: Example of Vision Processing Available Upon Request
Check out the "Powering the Kinect and the Pandaboard" section of their whitepaper.
Re: Example of Vision Processing Available Upon Request
Rather than divert this thread, it would be good if you started a new thread and posted your driver station logs in it. The cRIO power supply operates down to 4.5 volts, as does the radio. The digital sidecar would be the first thing to turn off, but only momentarily, until the battery voltage returns to normal. It seems likely that something else, like a loose wire or a bad battery, caused the problems.
Re: Example of Vision Processing Available Upon Request
With regard to image processing on a coprocessor, one of the biggest obstacles my team had was getting the information from the coprocessor to the cRIO. Network-socket documentation for C++ on the cRIO is flaky at best. Does anyone have experience or example code for communicating between a C++ or Python coprocessor and a cRIO running C++?
Re: Example of Vision Processing Available Upon Request
I am interested to see where this thread goes. I do mechanical mentoring, but I am a Python hobbyist.
Re: Example of Vision Processing Available Upon Request
I believe this is where the documentation came from: Documentation. I recently started to mess with creating a custom dashboard from scratch, and I was able to get the NetworkTables code from here running on Linux with little hassle. I would recommend it because it is derived directly from the robot C++ implementation (from my understanding) and seems much more stable than the version we created and used on our robot.
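For anyone stuck on the coprocessor-to-robot link asked about above, the Python side can be quite small with the pynetworktables package. This is a sketch using the later pynetworktables API rather than necessarily the exact library version linked here, and the server address, table name, and key names are made up for illustration:

Code:
from networktables import NetworkTables

# The robot is the NetworkTables server; 10.TE.AM.2 is the usual
# address scheme (10.0.0.2 here is a placeholder).
NetworkTables.initialize(server='10.0.0.2')
table = NetworkTables.getTable('vision')        # hypothetical table name

def publish_target(x_offset, distance):
    """Push one frame's tracking results to the robot."""
    table.putNumber('targetX', x_offset)        # hypothetical key names
    table.putNumber('targetDistance', distance)
    table.putBoolean('targetFound', True)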
Re: Example of Vision Processing Available Upon Request
For powering the ODROID, we used this: http://www.pololu.com/catalog/product/2177, a 3 A, 5 V buck regulator. It's connected directly to the power distribution board, so we're not using the cRIO regulator. The connection was made by using a multimeter to determine the plug polarity of the AC adapter, then splicing the barrel-jack end onto the output of the regulator.
We also had some camera stability issues, with the cameras occasionally having driver problems, which we believed were caused by drawing too much power. This was solved with another 3 A, 5 V buck regulator and an external USB hub.
Re: Example of Vision Processing Available Upon Request
AWESOME