#16
Re: Example of Vision Processing Available Upon Request
We tried to do it last year, without success. Java, please?
#17
Re: Example of Vision Processing Available Upon Request
As a note, there is the "white paper" at wpilib.screenstepslive.com that will point you to the C++, Java, and LabVIEW examples for rectangle recognition and processing.
#18
Re: Example of Vision Processing Available Upon Request
I didn't know Python was officially supported this year. I guess Java would be best, but I know Python too.
But pinkie-promise you won't use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)
#19
Re: Example of Vision Processing Available Upon Request
Quote:
SimpleCV code is extremely easy to use.
#20
Re: Example of Vision Processing Available Upon Request
Quote:
I have decided I will not be doing the robot or network code at this time. I will be doing a tutorial on just the vision. If demand is high enough, I will also do a tutorial on sending the data to the robot. The networking can be found just about anywhere for any language. Look for a post entitled "OpenCV Tutorial"; I will post here as well once the tutorial thread is up.
#21
Re: Example of Vision Processing Available Upon Request
Ok everyone! Here is a quick OpenCV tutorial for tracking the rectangles!
OpenCV FRC Tutorial
#22
Re: Example of Vision Processing Available Upon Request
Quote:
Additionally, while it is not supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy
#24
Re: Example of Vision Processing Available Upon Request
Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.
I have a few questions regarding your tutorial:
1. Why did you use Gaussian blur on the image?
2. Could you explain what findContours, arcLength (how do the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify the function?), and contourArea do exactly (beyond what's obvious), and how they relate to finding a correct rectangle?
3. Why do you multiply the contour_length by 0.02?
4. How did you find the number 1000 to check against the contourArea?
I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.
Last edited by Azrathud : 20-01-2013 at 05:48.
#25
Re: Example of Vision Processing Available Upon Request
You certainly don't need a Core i5 to handle this processing. The trick is to write your code correctly. OpenCV implementations of certain algorithms are extremely efficient. If you do it right, this can all be done on an ARM processor at about 20 FPS.
Contours are simply the outlines of objects found in an image (generally a binary one). The size of these contours can be used to filter out noise, reflections, and other unwanted objects. They can also be approximated with a polygon, which makes filtering targets easy. The "contourArea" of a target will have to be determined experimentally, and you will want to find a range of acceptable values (i.e. the target area will be a function of distance). OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.
#26
Re: Example of Vision Processing Available Upon Request
Fair enough. Thank you.
#27
Re: Example of Vision Processing Available Upon Request
Quote:
1. Gaussian blur just reduces the grain in the image by averaging nearby pixels.
2. I believe you are confused about RETR_TREE and CHAIN_APPROX_SIMPLE; these belong to the findContours method. RETR_TREE returns the contours as a tree (if you don't know what a tree is, look it up; a family tree is an example). CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an upright rectangular contour is encoded with 4 points. (Straight out of the documentation.) http://docs.opencv.org/ Their search is awesome. USE IT.
3. I have not actually looked at the algorithm behind the scenes for exactly how it affects the process, but in theory it scales the point locations.
4. 1000 was just something we found worked best to filter out a lot of false positives. On our production system we ended up raising it to 2000, if I remember right.
Any other questions, feel free to ask, but please do research first. I wish you luck!
#28
Re: Example of Vision Processing Available Upon Request
Hello everyone. I am a team mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got some OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard according to the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for a function such as vision processing, how did you power the device?
We interpreted a second battery pack for the xM as not permitted because it would not be physically internal to the xM, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great due to its light weight in comparison to a full laptop. And if we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which we thought we would need to attach to in order to connect the required power converter. Remember, there are other rules about the converter that powers the radio, etc. Besides, we ideally did not want to power it from the large battery because we did not want the Linux OS to get trashed during the ungraceful power-ups and power-downs that happen on the field.
So, in short, we may have had a vision processing approach, but we could not figure out how to wire the processor. Any ideas?
md
#29
Re: Example of Vision Processing Available Upon Request
Quote:
Check out the "Powering the Kinect and the Pandaboard" section of their whitepaper.