Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Example of Vision Processing Avaliable Upon Request (http://www.chiefdelphi.com/forums/showthread.php?t=110531)

jacob9706 06-01-2013 03:55

Example of Vision Processing Avaliable Upon Request
 
Last year my team was recognized for having a great vision system, and if I get enough requests I would be more than happy to put together a quick tutorial to get teams up and running with "on-robot tracking" instead of sending packets over the network.

Sending packets over the network may be a problem this year because the GDC has said that those packets are "deprioritized" relative to others.

Let me know what you think. I will need to know if you want the vision code to be in C++ or Python, and also if you want the robot code to be in C++ or Java.

Let me know if anyone is interested!

========================== EDIT ==========================
Hey everyone, it seems there is an overwhelming need for this. Let me lay out what we did last year and what the tutorial will be like.

Last year
The main thing that set us apart from other teams was that we did all our vision processing on the robot, on a Core i5 computer (i.e., a motherboard with integrated graphics and a Core i5, no screen or anything). We ran Ubuntu (a Linux distribution). To deploy code we used git and bash scripts to deploy, compile, and run the code on boot.

The Tutorial
The one thing I will be covering is how to build a basic rectangle tracking system. This system will recognize a colored rectangle, find its bounding points, and draw them on the image.

After seeing the above posts it seems like everyone would like to see the vision in C++.

Pros and Cons
C++
Pros
  • You catch your errors at compile time (for the most part)
  • Many more tutorials (as of last year)

Cons
  • Extra scripts needed for compiling
  • Network sockets are "harder"

Python
Pros
  • Dynamically typed language
  • Automatic memory management
  • "Easy" network sockets

Cons
  • Extra layer
  • Not as many examples
  • Not many people know Python

Because of the new documentation, and because I am trying to convince my team to use Python all the way around this year (robot and vision), I will be doing the vision in Python. Another reason is that the code is the same on Windows and Linux (the C++ libraries vary a bit).

I will post back here when the tutorial is complete. I will not, however, be covering how to install Python or the OpenCV libraries. While you wait for the rectangle tutorial, here is how to install OpenCV (I will be using Python 2.7.3 and OpenCV 2.4.2): How to install OpenCV

At the end of the Python tutorial I will show you how to convert the code to C++.
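
As a rough, dependency-free sketch of what "recognize a colored rectangle and find its bounding points" means: in real code cv2.inRange and cv2.boundingRect do this; the helper names and the tiny synthetic frame below are invented just for illustration.

```python
def in_range(pixel, lo, hi):
    """True if every channel of the pixel falls inside [lo, hi]."""
    return all(l <= c <= h for c, l, h in zip(pixel, lo, hi))

def bounding_box(image, lo, hi):
    """Return (x_min, y_min, x_max, y_max) of the pixels inside the color
    range, or None if nothing matched."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if in_range(pixel, lo, hi):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# Tiny synthetic "frame": black background with a green 2x2 block.
BLACK, GREEN = (0, 0, 0), (0, 255, 0)
frame = [[BLACK] * 5 for _ in range(4)]
for y in (1, 2):
    for x in (2, 3):
        frame[y][x] = GREEN

print(bounding_box(frame, (0, 200, 0), (50, 255, 50)))  # (2, 1, 3, 2)
```

On a real camera frame you would threshold in HSV rather than RGB, but the idea is the same: mask the target color, then take the extreme coordinates of the mask.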

nathan_hui 06-01-2013 03:57

Re: Example of Vision Processing Avaliable Upon Request
 
What was your strategy for computer vision? Did you use the WPILib functions or did you write your own image recognition functions?

jacob9706 06-01-2013 03:59

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by nathan_hui (Post 1208724)
What was your strategy for computer vision? Did you use the WPILib functions or did you write your own image recognition functions?

We implemented an onboard machine dedicated to processing our vision. We ended up utilizing OpenCV for the main processing.

ohrly? 06-01-2013 08:33

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by jacob9706 (Post 1208725)
We implemented an onboard machine dedicated to processing our vision. We ended up utilizing OpenCV for the main processing.

What did you run OpenCV on? Did you figure out how to run it on the cRIO, or did you run it on a laptop?

rich2202 06-01-2013 09:10

Re: Example of Vision Processing Avaliable Upon Request
 
I'm interested. One of the team's programming goals is to use vision this year. Thanks!

arthurlockman 06-01-2013 10:55

Re: Example of Vision Processing Avaliable Upon Request
 
I'm definitely interested! Our team got vision almost working last year, but the problem was that it wasn't at all reliable. In C++, if you don't mind.

omsahmad 06-01-2013 11:01

Re: Example of Vision Processing Avaliable Upon Request
 
I would also be interested. I got some vision going last year as well, but not super reliably either. C++ please.

RufflesRidge 06-01-2013 11:03

Re: Example of Vision Processing Avaliable Upon Request
 
An example for OpenCV vision processing on a coprocessor would be great!

For teams looking to process on the cRIO, it looks like there are examples available in each language already. We'll be playing around with one soon to decide if it's viable or if we want to focus on doing it on the DS or a coprocessor.

Fifthparallel 06-01-2013 11:23

Re: Example of Vision Processing Avaliable Upon Request
 
Any ideas for a self-contained program that takes an image and supplies an image to the cRIO with C++ and NetworkTables, and runs on a separate system (a la Raspberry Pi, Arduino, and so on)?

z_beeblebrox 06-01-2013 11:27

Re: Example of Vision Processing Avaliable Upon Request
 
I'm interested in learning about using a coprocessor.

ctccromer 06-01-2013 11:42

Re: Example of Vision Processing Avaliable Upon Request
 
Team 3753 here, and we've never used vision before, though we're 100% determined to this year. We have last year's Kinect and some reflective tape at the ready!

We're programming in LabVIEW, but I know basic C++ and Java, so this would still be immensely helpful even if not done in LabVIEW!

dheerm 06-01-2013 13:52

Re: Example of Vision Processing Avaliable Upon Request
 
There was a card with an activation key for some sort of vision processing software to install on our driving computer. So I'm curious to see how that would work. I feel like it could have a lot of potential; I'll post back when I try it out.

DjMaddius 06-01-2013 14:41

Re: Example of Vision Processing Avaliable Upon Request
 
Do the vision processing in Python, please! I'd love to see a tutorial on it.

jacob9706 06-01-2013 15:13

Re: Example of Vision Processing Avaliable Upon Request
 
Hey everyone, it seems there is an overwhelming need for this. Let me lay out what we did last year and what the tutorial will be like.

Last year
The main thing that set us apart from other teams was that we did all our vision processing on the robot, on a Core i5 computer (i.e., a motherboard with integrated graphics and a Core i5, no screen or anything). We ran Ubuntu (a Linux distribution). To deploy code we used git and bash scripts to deploy, compile, and run the code on boot.

The Tutorial
The one thing I will be covering is how to build a basic rectangle tracking system. This system will recognize a colored rectangle, find its bounding points, and draw them on the image.

After seeing the above posts it seems like everyone would like to see the vision in C++.

Pros and Cons
C++
Pros
  • You catch your errors at compile time (for the most part)
  • Many more tutorials (as of last year)

Cons
  • Extra scripts needed for compiling
  • Network sockets are "harder"

Python
Pros
  • Dynamically typed language
  • Automatic memory management
  • "Easy" network sockets

Cons
  • Extra layer
  • Not as many examples
  • Not many people know Python

Because of the new documentation, and because I am trying to convince my team to use Python all the way around this year (robot and vision), I will be doing the vision in Python. Another reason is that the code is the same on Windows and Linux (the C++ libraries vary a bit).

I will post back here when the tutorial is complete. I will not, however, be covering how to install Python or the OpenCV libraries. While you wait for the rectangle tutorial, here is how to install OpenCV (I will be using Python 2.7.3 and OpenCV 2.4.2): How to install OpenCV

At the end of the Python tutorial I will show you how to convert the code to C++.

DjMaddius 06-01-2013 15:30

Re: Example of Vision Processing Avaliable Upon Request
 
Awesome! I'm a proficient Python developer outside of robotics, but it's just easier to get other kids into programming in robotics with LabVIEW. Maybe if I do vision similar to yours this year, we can push Python onto the rest of the team.

DetectiveWind 06-01-2013 16:32

Re: Example of Vision Processing Avaliable Upon Request
 
We tried to do it last year. Unsuccessful... Java, please?

Fifthparallel 06-01-2013 16:53

Re: Example of Vision Processing Avaliable Upon Request
 
As a note, there is a white paper at wpilib.screenstepslive.com that will point you to the C++, Java, and LabVIEW examples for rectangle recognition and processing.

ohrly? 06-01-2013 18:53

Re: Example of Vision Processing Avaliable Upon Request
 
I didn't know Python was officially supported this year? I guess Java would be best, but I know Python too.

But pinkie-promise you won't use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)

PaulDavis1968 06-01-2013 19:05

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by DjMaddius (Post 1209048)
Do the vision processing in Python please! I'd love to see a tut on it.

I have done it with SimpleCV, which is basically a Python wrapper for OpenCV. I did that over the summer; I did it in OpenCV C++ last season.

SimpleCV code is extremely easy to use.

jacob9706 06-01-2013 20:10

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by ohrly? (Post 1209244)
I didn't know Python was officially supported this year? I guess Java would be best, but I know Python too.

But pinkie-promise you won't use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)

I am starting on the tutorial right now.

I have decided I will not be doing the robot or network code at this time. I will be doing a tutorial on just the vision. If demand is high enough I will also do a tutorial on sending the data to the robot. The networking can be found just about anywhere for any language.

Look out for a post entitled "OpenCV Tutorial"; I will post here as well once the tutorial thread is up.
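
The networking really is simple. As a minimal sketch of what "sending the data to the robot" could look like with plain UDP sockets: the port selection, the "x,y,area" text format, and the loopback demo below are all invented for illustration, not the team's actual protocol.

```python
# Vision machine fires small UDP datagrams at the robot controller.
import socket

def send_target(sock, addr, x, y, area):
    """Serialize one vision result as a comma-separated UDP datagram."""
    sock.sendto(f"{x},{y},{area}".encode("ascii"), addr)

def recv_target(sock):
    """Parse one datagram back into (x, y, area)."""
    data, _ = sock.recvfrom(64)
    x, y, area = data.decode("ascii").split(",")
    return int(x), int(y), float(area)

# Demo over loopback; on a real robot the receiver would be the controller.
robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
robot.bind(("127.0.0.1", 0))        # port 0: the OS picks a free port
vision = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_target(vision, robot.getsockname(), 320, 240, 1500.0)
print(recv_target(robot))  # (320, 240, 1500.0)
vision.close()
robot.close()
```

UDP fits vision data well here: each frame's result supersedes the last, so a dropped datagram costs nothing.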

jacob9706 06-01-2013 21:57

Re: Example of Vision Processing Avaliable Upon Request
 
Ok everyone! Here is a quick OpenCV tutorial for tracking the rectangles!
OpenCV FRC Tutorial

virtuald 07-01-2013 23:50

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by ohrly? (Post 1209244)
I didn't know Python was officially supported this year? I guess Java would be best, but I know Python too.

You might note that he was talking about using Python on an extra computer, not the cRIO.

Additionally, while it is not supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy

jacob9706 07-01-2013 23:53

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by virtuald (Post 1210436)
You might note that he was talking about using Python on an extra computer, not the cRIO.

Additionally, while it is not supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy

Python is not officially supported, but it has a pretty big backing. And yes, I was talking about running on an external machine.

Azrathud 20-01-2013 03:07

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by jacob9706 (Post 1209441)
...

Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.

I have a few questions regarding your tutorial.
1. Why did you use a Gaussian blur on the image?

2. Could you explain what findContours, arcLength (how do the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify the function?), and contourArea do exactly (beyond what's obvious), and how they relate to finding a correct rectangle?

3. Why do you multiply the contour_length by 0.02?

4. How did you find the number 1000 to check against the contourArea?

I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.

catacon 20-01-2013 04:47

Re: Example of Vision Processing Avaliable Upon Request
 
You certainly don't need a Core i5 to handle this processing. The trick is to write your code correctly. OpenCV's implementations of certain algorithms are extremely efficient. If you do it right, this can all be done on an ARM processor at about 20 FPS.

Contours are simply the outlines of objects found in an image (generally binary). The size of these contours can be used to filter out noise, reflection, and other unwanted objects. They can also be approximated with a polygon which makes filtering targets easy. The "contourArea" of a target will have to be determined experimentally and you will want to find a range of acceptable values (i.e. the target area will be a function of distance).

OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.
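
To make the contour-filtering idea above concrete: a contour is just a list of points, its area can be computed with the shoelace formula (which is what cv2.contourArea does for a simple polygon), and only contours inside an experimentally chosen area range are kept. The contours and the 1000-5000 range below are made up for illustration, not real calibration values.

```python
def shoelace_area(points):
    """Unsigned area of a simple polygon given as [(x, y), ...]."""
    n = len(points)
    twice_area = sum(
        points[i][0] * points[(i + 1) % n][1]
        - points[(i + 1) % n][0] * points[i][1]
        for i in range(n)
    )
    return abs(twice_area) / 2.0

def filter_by_area(contours, lo, hi):
    """Keep only contours whose area lies inside the acceptable range."""
    return [c for c in contours if lo <= shoelace_area(c) <= hi]

noise  = [(0, 0), (3, 0), (3, 3), (0, 3)]          # area 9: too small
target = [(0, 0), (60, 0), (60, 30), (0, 30)]      # area 1800: plausible target
wall   = [(0, 0), (300, 0), (300, 200), (0, 200)]  # area 60000: too big

kept = filter_by_area([noise, target, wall], lo=1000, hi=5000)
print(kept == [target])  # True
```

Since target area shrinks with distance, the real bounds have to come from measuring the target at your nearest and farthest shooting positions.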

Azrathud 20-01-2013 05:48

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by catacon (Post 1218991)
OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.

Fair enough. Thank you.

jacob9706 28-04-2013 03:46

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by Azrathud (Post 1218982)
Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.

I have a few questions regarding your tutorial.
1. Why did you use a Gaussian blur on the image?

2. Could you explain what findContours, arcLength (how do the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify the function?), and contourArea do exactly (beyond what's obvious), and how they relate to finding a correct rectangle?

3. Why do you multiply the contour_length by 0.02?

4. How did you find the number 1000 to check against the contourArea?

I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.

Sorry for the REALLY late reply.
1.) Gaussian blur just reduces the grain in the image by averaging nearby pixels.

2.) I believe you are confused: RETR_TREE and CHAIN_APPROX_SIMPLE belong to the findContours method. RETR_TREE returns the contours as a tree; if you don't know what a tree is, look it up (a family tree is an example). CHAIN_APPROX_SIMPLE "compresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points" (straight out of the documentation). http://docs.opencv.org/ Their search is awesome. USE IT.

3.) I have not actually looked at the algorithm behind the scenes to see exactly how it affects the process, but in theory it scales the approximation tolerance to the size of the contour.

4.) 1000 was just something we found worked best to filter out a lot of false positives. On our production system we ended up raising it to 2000, if I remember right.

Any other questions, feel free to ask; please do some research first, though. I wish you luck!
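
On question 3 specifically: cv2.approxPolyDP's epsilon argument is an absolute distance tolerance, so passing 0.02 * arcLength(...) sets the tolerance to 2% of the contour's perimeter. Here is a dependency-free sketch of the Ramer-Douglas-Peucker simplification that approxPolyDP implements; the jittered rectangle is invented test data.

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    return abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / math.dist(a, b)

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: drop points deviating less than epsilon."""
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)

def perimeter(closed_points):
    """Arc length of a closed contour, like cv2.arcLength(c, True)."""
    return sum(
        math.dist(closed_points[i], closed_points[(i + 1) % len(closed_points)])
        for i in range(len(closed_points))
    )

# 50x25 rectangle outline whose edge midpoints carry 0.3 px of noise.
contour = [(0, 0), (25, 0.3), (50, 0), (50, 12.5), (50, 25), (25, 24.7), (0, 25)]
epsilon = 0.02 * perimeter(contour)   # the tutorial's 2%-of-perimeter tolerance
print(rdp(contour, epsilon))  # [(0, 0), (50, 0), (50, 25), (0, 25)]
```

With the tolerance tied to the perimeter, a noisy 7-point outline collapses to the 4 true corners regardless of how large the rectangle appears in the frame.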

mdrouillard 28-04-2013 19:24

Re: Example of Vision Processing Avaliable Upon Request
 
Hello everyone. I am a mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard under the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for a function such as vision processing, how did you power the device?

We interpreted a second battery pack for the xM as not permitted, because it would not be physically internal to the xM, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great due to its light weight compared to a full laptop. And if we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which is where we thought we would need to attach the required power converter. Remember there are other rules about the converter that powers the radio, etc. Besides, we ideally did not want to power it from the main battery, because we did not want the Linux OS to get trashed during the ungraceful power-ups and power-downs that happen on the field. So, how did you accomplish this aspect of using a separate processor? In short, we may have had a vision processing approach, but we could not figure out how to wire the processor.

Any ideas?

md

Gregor 28-04-2013 19:32

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by mdrouillard (Post 1268606)
Hello everyone. I am a mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard under the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for a function such as vision processing, how did you power the device?

We interpreted a second battery pack for the xM as not permitted, because it would not be physically internal to the xM, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great due to its light weight compared to a full laptop. And if we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which is where we thought we would need to attach the required power converter. Remember there are other rules about the converter that powers the radio, etc. Besides, we ideally did not want to power it from the main battery, because we did not want the Linux OS to get trashed during the ungraceful power-ups and power-downs that happen on the field. So, how did you accomplish this aspect of using a separate processor? In short, we may have had a vision processing approach, but we could not figure out how to wire the processor.

Any ideas?

md

987 did a lot of work in the 2012 season on powering their Kinect.

Check out the "Powering the Kinect and the Pandaboard" section of their whitepaper.

jacob9706 28-04-2013 21:16

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by mdrouillard (Post 1268606)
Hello everyone. I am a mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard under the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for a function such as vision processing, how did you power the device?

We interpreted a second battery pack for the xM as not permitted, because it would not be physically internal to the xM, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great due to its light weight compared to a full laptop. And if we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which is where we thought we would need to attach the required power converter. Remember there are other rules about the converter that powers the radio, etc. Besides, we ideally did not want to power it from the main battery, because we did not want the Linux OS to get trashed during the ungraceful power-ups and power-downs that happen on the field. So, how did you accomplish this aspect of using a separate processor? In short, we may have had a vision processing approach, but we could not figure out how to wire the processor.

Any ideas?

md

The second processor (an ODROID-U2) runs on 12 volts, so it is just plugged directly into the power distribution board. Even with the battery dropping to 8 volts at times, we never had an issue with the vision machine; the cRIO will "crap out" before the ODROID-U2.

Iaquinto.Joe 28-04-2013 23:36

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by jacob9706 (Post 1268729)
Even with the battery dropping to 8 volts at times, we never had an issue with the vision machine; the cRIO will "crap out" before the ODROID-U2.

Honestly, something needs to change about this. We lost several matches because our fully charged battery ran out of juice at the end of the match.

jacob9706 28-04-2013 23:42

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by Iaquinto.Joe (Post 1268851)
Honestly, something needs to change about this. We lost several matches because our fully charged battery ran out of juice at the end of the match.

We have never had a problem during a match. ALWAYS CHARGE YOUR BATTERIES BETWEEN MATCHES, and NEVER put a bad battery in the rotation. We buy new batteries each year and set the bad ones aside for practice use.

Joe Ross 29-04-2013 00:06

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by jacob9706 (Post 1268729)
The second processor (an ODROID-U2) runs on 12 volts, so it is just plugged directly into the power distribution board. Even with the battery dropping to 8 volts at times, we never had an issue with the vision machine; the cRIO will "crap out" before the ODROID-U2.

The cRIO power supply on the PDB will boost the battery voltage and operate down to 4.5 volts. At that point, the battery is very dead. Looking at the ODROID-U2 specs, it says it uses a 5 V power supply. Do you know what kind of power circuitry the ODROID-U2 uses? It seems unlikely that it operates below 4.5 volts.

Quote:

Originally Posted by Iaquinto.Joe (Post 1268851)
Honestly, something needs to change about this. We lost several matches because our fully charged battery ran out of juice at the end of the match.


Rather than divert this thread, it would be good if you started a new thread and posted your driver station logs in it. The cRIO power supply operates down to 4.5 volts, as does the radio. The Digital Sidecar would be the first thing to turn off, but only momentarily, until the battery voltage returns to normal. It seems likely that something else, like a loose wire or a bad battery, caused the problems.

William Kunkel 05-05-2013 10:57

Re: Example of Vision Processing Avaliable Upon Request
 
With regards to image processing on a co-processor, one of the biggest obstacles my team had was getting the information from the co-processor to the cRIO. Network socket documentation for C++ on the cRIO is flaky at best. Does anyone have experience/example code for communicating between a C++ or Python co-processor and a cRIO running C++?

wet_colored_arc 05-05-2013 21:06

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by MaraschinoPanda (Post 1272440)
With regards to image processing on a co-processor, one of the biggest obstacles my team had was getting the information from the co-processor to the cRIO. Network socket documentation for C++ on the cRIO is flaky at best. Does anyone have experience/example code for communicating between a C++ or Python co-processor and a cRIO running C++?

Does Virtuald's link help? http://firstforge.wpi.edu/sf/projects/robotpy

I am interested to see where this thread goes. I do mechanical mentoring but am a Python hobbyist.

jacob9706 05-05-2013 21:18

Re: Example of Vision Processing Avaliable Upon Request
 
Quote:

Originally Posted by MaraschinoPanda (Post 1272440)
With regards to image processing on a co-processor, one of the biggest obstacles my team had was getting the information from the co-processor to the cRIO. Network socket documentation for C++ on the cRIO is flaky at best. Does anyone have experience/example code for communicating between a C++ or Python co-processor and a cRIO running C++?

We ended up implementing our own network table class based on their documentation: Our Implementation. On line 63 of this we instantiate our network object, and on lines 265 to 367 we set a couple of values for the robot to read.

I believe this is the documentation it came from:
Documentation

I recently started to mess with creating a custom dashboard from scratch and was able to get the network tables running on Linux with little hassle, from Here. I would recommend this because they are derived directly from the robot C++ implementation (from my understanding) and seem much more stable than the version we created and used on our robot.
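
As a toy illustration of the pattern (this is NOT the real NetworkTables wire protocol; the key=value line format, the port handling, and the threading demo are invented), a robot-side listener that stores values pushed by the vision machine could look like:

```python
import socket
import threading

def robot_listener(server, table, n_values):
    """Accept one connection and read n_values 'key=value' lines into table."""
    conn, _ = server.accept()
    with conn, conn.makefile("r") as lines:
        for _ in range(n_values):
            key, _, value = lines.readline().strip().partition("=")
            table[key] = value

table = {}
server = socket.socket()
server.bind(("127.0.0.1", 0))       # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]

# Robot side runs in a thread here only so one script can play both roles.
t = threading.Thread(target=robot_listener, args=(server, table, 2))
t.start()

vision = socket.create_connection(("127.0.0.1", port))
vision.sendall(b"target_x=320\ntarget_found=true\n")
vision.close()
t.join()
server.close()
print(table)  # {'target_x': '320', 'target_found': 'true'}
```

The real NetworkTables library adds typing, timestamps, and reconnection on top of essentially this key-value sync idea, which is why using the official client is the more stable route.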

sparkytwd 09-05-2013 17:45

Re: Example of Vision Processing Avaliable Upon Request
 
For powering the ODROID, we used this: http://www.pololu.com/catalog/product/2177, a 3 A 5 V buck regulator. It's connected directly to the power distribution board, so we're not using the cRIO regulator. We made the connection by using a multimeter to determine the plug polarity of the AC adapter, then splicing the barrel-jack end onto the output of the regulator.

We also had some camera stability issues, with the cameras occasionally hitting driver problems, which we believed were caused by drawing too much power. This was solved with another 3 A 5 V buck regulator and an external USB hub.

safiq10 05-09-2013 21:40

Re: Example of Vision Processing Avaliable Upon Request
 
AWESOME


All times are GMT -5. The time now is 21:35.

Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi