Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Example of Vision Processing Avaliable Upon Request (http://www.chiefdelphi.com/forums/showthread.php?t=110531)

DetectiveWind 06-01-2013 16:32

Re: Example of Vision Processing Avaliable Upon Request
 
We tried to do it last year, unsuccessfully. Java, please?

Fifthparallel 06-01-2013 16:53

Re: Example of Vision Processing Avaliable Upon Request
 
As a note, there is a white paper at wpilib.screenstepslive.com that will point you to the C++, Java, and LabVIEW examples for rectangle recognition and processing.

ohrly? 06-01-2013 18:53

Re: Example of Vision Processing Avaliable Upon Request
 
I didn't know Python was officially supported this year. I guess Java would be best, but I know Python too.

But pinkie-promise you won't use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)

PaulDavis1968 06-01-2013 19:05

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by DjMaddius (Post 1209048)
Do the vision processing in Python please! I'd love to see a tut on it.

I have done it with SimpleCV, which is basically a Python wrapper for OpenCV; I did that over the summer. Last season I did it in OpenCV C++.

SimpleCV code is extremely easy to use.

jacob9706 06-01-2013 20:10

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by ohrly? (Post 1209244)
I didn't know Python was officially supported this year. I guess Java would be best, but I know Python too.

But pinkie-promise you won't use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)

I am starting on the tutorial right now.

I have decided I will not be doing the robot or network code at this time. I will be doing a tutorial on just the vision. If demand is high enough I will also do a tutorial on sending the data to the robot. The networking can be found just about anywhere for any language.

Look back for a post entitled "OpenCV Tutorial". I will post here as well once the tutorial thread is up.
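Since the networking side is left out of the tutorial, here is a minimal sketch of one way vision results could be shipped to the robot over UDP. This is a loopback demo: the JSON field names, the port handling, and the robot address in the comment are all assumptions for illustration, not anything from this thread.

```python
import json
import socket

# Loopback demo: this receiver stands in for the robot-side listener.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))        # OS picks a free port for the demo
addr = recv_sock.getsockname()

# Vision side: serialize one frame's result and send it off.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
result = {"found": True, "x_offset": -12.5}   # hypothetical target data
send_sock.sendto(json.dumps(result).encode(), addr)
# On a real robot this would be a fixed address and port for the cRIO side,
# not the loopback address used here.

data, _ = recv_sock.recvfrom(1024)
received = json.loads(data.decode())
print(received["x_offset"])             # prints -12.5
```

UDP fits this use case because a dropped packet only costs one frame of target data; the next frame's result replaces it anyway.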

jacob9706 06-01-2013 21:57

Re: Example of Vision Processing Avaliable Upon Request
 
OK everyone! Here is a quick OpenCV tutorial for tracking the rectangles:
OpenCV FRC Tutorial

virtuald 07-01-2013 23:50

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by ohrly? (Post 1209244)
I didn't know Python was officially supported this year. I guess Java would be best, but I know Python too.

You might note that he was talking about using Python on an extra computer, not the cRIO.

Additionally, while it is not officially supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy

jacob9706 07-01-2013 23:53

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by virtuald (Post 1210436)
You might note that he was talking about using Python on an extra computer, not the cRIO.

Additionally, while it is not officially supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy

Python is not supported, but it has a pretty big backing. And yes, I was talking about running it on an external processor.

Azrathud 20-01-2013 03:07

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by jacob9706 (Post 1209441)
...

Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.

I have a few questions regarding your tutorial.
1. Why did you use Gaussian Blur on the image?

2. Could you explain what findContours, arcLength, and contourArea do exactly (beyond what's obvious), how the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify findContours, and how they relate to finding a correct rectangle?

3. Why do you multiply the contour_length by 0.02?

4. How did you find the number 1000 to check against the contourArea?

I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.

catacon 20-01-2013 04:47

Re: Example of Vision Processing Avaliable Upon Request
 
You certainly don't need a Core i5 to handle this processing. The trick is to write your code correctly. OpenCV's implementations of certain algorithms are extremely efficient; done right, this can all run on an ARM processor at about 20 FPS.

Contours are simply the outlines of objects found in an image (generally binary). The size of these contours can be used to filter out noise, reflection, and other unwanted objects. They can also be approximated with a polygon which makes filtering targets easy. The "contourArea" of a target will have to be determined experimentally and you will want to find a range of acceptable values (i.e. the target area will be a function of distance).

OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.

Azrathud 20-01-2013 05:48

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by catacon (Post 1218991)
OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.

Fair enough. Thank you.

jacob9706 28-04-2013 03:46

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by Azrathud (Post 1218982)
Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.

I have a few questions regarding your tutorial.
1. Why did you use Gaussian Blur on the image?

2. Could you explain what findContours, arcLength, and contourArea do exactly (beyond what's obvious), how the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify findContours, and how they relate to finding a correct rectangle?

3. Why do you multiply the contour_length by 0.02?

4. How did you find the number 1000 to check against the contourArea?

I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.

Sorry for the REALLY late reply.
1.) Gaussian Blur just reduces the grain in the image by averaging nearby pixels.

2.) I believe you are confused about RETR_TREE and CHAIN_APPROX_SIMPLE; these are arguments to the findContours method. RETR_TREE returns the contours as a tree (if you don't know what a tree is, look it up; a family tree is an example). CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points. (Straight out of the documentation.) http://docs.opencv.org/ Their search is awesome. USE IT.

3.) I have not actually looked at the algorithm behind the scenes, but in theory it scales the approximation tolerance to the length of the contour, so bigger contours get a proportionally bigger tolerance.

4.) 1000 was just a value we found worked best to filter out a lot of false positives. On our production system we ended up raising it to 2000, if I remember right.

Any other questions, feel free to ask; do research first though, please. I wish you luck!

mdrouillard 28-04-2013 19:24

Re: Example of Vision Processing Avaliable Upon Request
 
Hello everyone. I am a mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard under the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for a function such as vision processing, how did you power it?

We interpreted a second battery pack for the xM as not permitted, because it would not be physically internal to the xM, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great due to its light weight compared to a full laptop. If we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which we thought we would need in order to connect the required power converter (remember, there are other rules about the converter that powers the radio, etc.). Besides, we ideally did not want to power it from the main battery, because we did not want the Linux OS to get trashed during the ungraceful power-ups and power-downs that happen on the field. So, how did you handle this aspect of using a separate processor? In short, we may have had a vision processing approach, but we could not figure out how to wire the processor.

Any ideas?

md

Gregor 28-04-2013 19:32

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by mdrouillard (Post 1268606)
...

987 did a lot of work in the 2012 season on powering their Kinect.

Check out the "Powering the Kinect and the Pandaboard" section of their whitepaper.

jacob9706 28-04-2013 21:16

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by mdrouillard (Post 1268606)
...

The second processor (an ODROID-U2) runs on 12 volts, so it is plugged directly into the power distribution board. Even with the battery dropping to 8 volts at times, we never had an issue with the vision machine; the cRIO will "crap out" before the ODROID-U2 does.

