Example of Vision Processing Available Upon Request


jacob9706
06-01-2013, 03:55
Last year my team was recognized for having a great vision system, and if I get enough requests I would be more than happy to put together a quick tutorial to get teams up and running with "on-robot tracking" instead of sending packets over the network.

Sending packets over the network may be a problem this year because the GDC has said that those packets are "deprioritized" relative to others.

Let me know what you think. I will need to know if you want the vision code to be in C++ or Python, and also if you want the robot code to be in C++ or Java.

Let me know if anyone is interested!

nathan_hui
06-01-2013, 03:57
What was your strategy for computer vision? Did you use the WPILib functions or did you write your own image recognition functions?

jacob9706
06-01-2013, 03:59
What was your strategy for computer vision? Did you use the WPILib functions or did you write your own image recognition functions?

We implemented an onboard machine dedicated to processing our vision. We ended up utilizing OpenCV for the main processing.

ohrly?
06-01-2013, 08:33
We implemented an onboard machine dedicated to processing our vision. We ended up utilizing OpenCV for the main processing.

What did you run OpenCV on? Did you figure out how to run it on the cRIO, or did you run it on a laptop?

rich2202
06-01-2013, 09:10
I'm interested. One of the Team's programming goals is to use vision this year. Thanks!

arthurlockman
06-01-2013, 10:55
I'm definitely interested! Our team got vision almost working last year, but it wasn't at all reliable. In C++, if you don't mind.

omsahmad
06-01-2013, 11:01
I would also be interested. I got some vision going last year as well, but not super reliably either. C++ please.

RufflesRidge
06-01-2013, 11:03
An example for OpenCV vision processing on a coprocessor would be great!

For teams looking to process on the cRIO it looks like there are examples available in each language already (http://wpilib.screenstepslive.com/s/3120/m/8731). We'll be playing around with one soon to decide if it's viable or if we want to focus on doing it on the DS or a coprocessor.

Fifthparallel
06-01-2013, 11:23
Any ideas for a self-contained program that takes an image and supplies an image to the cRIO with C++ and Network Tables, and runs on a separate system (a la Raspberry Pi, Arduino, and so on)?

z_beeblebrox
06-01-2013, 11:27
I'm interested in learning about using a coprocessor.

ctccromer
06-01-2013, 11:42
Team 3753 here, and we've never used vision before, though we're 100% determined to this year. We have last year's Kinect and some reflective tape at the ready!

We're programming in LabVIEW, but I do know basic C++ and Java both so this would still be immensely helpful even if not done in LabVIEW!

dheerm
06-01-2013, 13:52
There was a card with an activation key for some sort of vision processing software to install on our driving computer. So I'm curious to see how that would work. I feel like it could have a lot of potential, I'll post back when I try it out.

DjMaddius
06-01-2013, 14:41
Do the vision processing in Python please! I'd love to see a tut on it.

jacob9706
06-01-2013, 15:13
Hey everyone, it seems there is an overwhelming need for this. Let me explain what we did last year and what the tutorial will be like.

Last year
The main thing that set us apart from other teams was that we did all our vision processing on the robot, on a Core i5 computer (i.e. a motherboard with integrated graphics and a Core i5, no screen or anything). We used Ubuntu (a Linux distribution). To deploy code we used Git and bash scripts to deploy, compile, and run the code on boot.

The Tutorial
The one thing I will be covering is how to get a basic rectangle tracking system working. The system will recognize a colored rectangle, find its bounding points, and draw them on the image.
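To give a rough idea of the end result, here is a minimal sketch (not our competition code; the HSV bounds are placeholders you would tune for your own target) of tracking a colored rectangle with the OpenCV Python bindings (more on the language choice below):

import cv2
import numpy as np

capture = cv2.VideoCapture(0)  # first camera on the system

while True:
    ok, frame = capture.read()
    if not ok:
        break

    # Threshold on color in HSV space; these bounds are made up, tune them for your target
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([40, 100, 100]), np.array([80, 255, 255]))

    # Outline each colored blob and draw its bounding box back onto the frame
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

The real tutorial will go further (polygon approximation, filtering out false positives), but that is the basic loop.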

After seeing the above posts it seems like everyone would like to see the vision in C++.

Pros and Cons
C++
Pros

You know your errors at compile time (for the most part)
Many more tutorials (as of last year)


Cons

Extra scripts needed for compiling
Network sockets are "harder"


Python
Pros

Dynamically typed language
Automatic memory management
"Easy" network sockets


Cons

Extra interpreter layer (some overhead)
Not as many examples
Not many people know Python


Because of the new documentation, and because I am trying to convince my team to use Python this year all the way around (robot and vision), I will be doing the vision in Python. Another reason is that the code is the same on Windows and Linux (the C++ libraries vary a bit).

I will post back here when the tutorial is complete. I will not, however, be covering how to install Python or the OpenCV libraries. While you wait for my rectangle tutorial, here is how to install OpenCV (I will be using Python 2.7.3 and OpenCV 2.4.2): How to install OpenCV (http://docs.opencv.org/doc/tutorials/introduction/table_of_content_introduction/table_of_content_introduction.html#table-of-content-introduction)

At the end of the Python tutorial I will show you how to convert the code to C++.

DjMaddius
06-01-2013, 15:30
Awesome! I'm a proficient Python developer outside of robotics, but it's just easier to get the other kids into programming in robotics with LabVIEW. Maybe if I do vision similar to yours this year we can push Python onto the rest of the team.

DetectiveWind
06-01-2013, 16:32
We tried to do it last year. Unsuccessful.... JAVA pls ?

Fifthparallel
06-01-2013, 16:53
As a note, there is the "white paper" at wpilib.screenstepslive.com (http://wpilib.screenstepslive.com/s/3120/m/8731) that will point you to the C++, Java, and LabVIEW examples for rectangle recognition and processing.

ohrly?
06-01-2013, 18:53
I didn't know Python was officially supported this year? I guess Java would be the best, but I know Python too.

But pinkie-promise you won't use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)

PaulDavis1968
06-01-2013, 19:05
Do the vision processing in Python please! I'd love to see a tut on it.

I have done it with SimpleCV, which is basically a Python wrapper for OpenCV. I did that over the summer. I did it in OpenCV C++ last season.

SimpleCV code is extremely easy to use.
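For anyone curious, a rough SimpleCV sketch of the idea looks something like the following (written from memory of the SimpleCV API, so treat it as an approximation rather than tested code; the color, blob size, and camera are all placeholders):

from SimpleCV import Camera, Color

cam = Camera()        # first camera on the system
img = cam.getImage()

# colorDistance gives a grayscale image where pixels near the chosen color are dark,
# so invert it to make the target bright before looking for blobs
bright = img.colorDistance(Color.GREEN).invert()
blobs = bright.findBlobs(minsize=1000)
if blobs:
    blobs.sortArea()[-1].draw(color=Color.RED)   # outline the largest blob
bright.show()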

jacob9706
06-01-2013, 20:10
I didn't know Python was officially supported this year? I guess Java would be the best, but I know Python too.

But pinkie-promise you won't use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)

I am starting on the tutorial right now.

I have decided I will not be doing the robot or network code at this time. I will be doing a tutorial on just the vision. If demand is high enough I will also do a tutorial on sending the data to the robot. The networking can be found just about anywhere for any language.
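For anyone who wants a head start on that side, the simplest approach is a plain UDP socket from Python's standard library. This is only a sketch, not what we ran on our robot, and the IP address, port, and field names are made up for illustration:

import json
import socket

ROBOT_IP = "10.0.0.2"    # example only; use your own robot's address
ROBOT_PORT = 1130        # example only; pick any free port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(x, y, distance):
    # Pack the latest target info as a small JSON string and fire it at the robot
    message = json.dumps({"x": x, "y": y, "distance": distance})
    sock.sendto(message.encode("utf-8"), (ROBOT_IP, ROBOT_PORT))

The robot-side code just listens on the same port and parses whatever arrives.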

Look for a post entitled "OpenCV Tutorial"; I will post here as well once the tutorial thread is up.

jacob9706
06-01-2013, 21:57
OK everyone! Here is a quick OpenCV tutorial for tracking the rectangles!
OpenCV FRC Tutorial (http://jacobebey.blogspot.com/2013/01/python-opencv-for-frc-teams.html)

virtuald
07-01-2013, 23:50
I didn't know Python was officially supported this year? I guess Java would be the best, but I know Python too.

You might note that he was talking about using Python on an extra computer, not the cRIO.

Additionally, while it is not supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy

jacob9706
07-01-2013, 23:53
You might note that he was talking about using Python on an extra computer, not the cRIO.

Additionally, while it is not supported, there is a Python interpreter that works on the cRIO. Check out http://firstforge.wpi.edu/sf/projects/robotpy

Python is not supported, but it has a pretty big backing. And yes, I was talking about running it on an external computer.

Azrathud
20-01-2013, 03:07
...

Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.

I have a few questions regarding your tutorial.
1. Why did you use Gaussian Blur on the image?

2. Could you explain what findContours, arcLength (how do the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify the function?), and contourArea do exactly (beyond what's obvious), and how they relate to finding a correct rectangle?

3. Why do you multiply the contour_length by 0.02?

4. How did you find the number 1000 to check against the contourArea?

I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.

catacon
20-01-2013, 04:47
You certainly don't need a Core i5 to handle this processing. The trick is to write your code correctly. OpenCV implementations of certain algorithms are extremely efficient. If you do it right, this can all be done on an ARM processor at about 20 FPS.

Contours are simply the outlines of objects found in an image (generally binary). The size of these contours can be used to filter out noise, reflection, and other unwanted objects. They can also be approximated with a polygon which makes filtering targets easy. The "contourArea" of a target will have to be determined experimentally and you will want to find a range of acceptable values (i.e. the target area will be a function of distance).

OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.

Azrathud
20-01-2013, 05:48
OpenCV is very well documented, so look on their website for explanations of specific functions. You really need a deeper understanding of computer vision and OpenCV to write effective code; copy pasta won't get you too far, especially with embedded systems.

Fair enough. Thank you.

jacob9706
28-04-2013, 03:46
Thanks for pointing me toward OpenCV. I'll probably be doing onboard processing this year with a Raspberry Pi.

I have a few questions regarding your tutorial.
1. Why did you use Gaussian Blur on the image?

2. Could you explain what findContours, arcLength (how do the arguments RETR_TREE and CHAIN_APPROX_SIMPLE modify the function?), and contourArea do exactly (beyond what's obvious), and how they relate to finding a correct rectangle?

3. Why do you multiply the contour_length by 0.02?

4. How did you find the number 1000 to check against the contourArea?

I'm sure I could answer question #2 with some searching, but if you could answer the others, that would be awesome.

Sorry for the REALLY late reply.
1.) Gaussian blur just reduces the grain in the image by averaging nearby pixels.

2.) I believe you are mixing up findContours and arcLength; RETR_TREE and CHAIN_APPROX_SIMPLE belong to the findContours function. RETR_TREE retrieves the contours organized as a tree (hierarchy); if you don't know what a tree is, look it up (a family tree is an example). CHAIN_APPROX_SIMPLE "compresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points." (straight out of the documentation). http://docs.opencv.org/ Their search is awesome. USE IT.

3.) I have not looked at the algorithm behind the scenes in detail, but the 0.02 * contour_length value is the epsilon passed to approxPolyDP: it is the maximum distance allowed between the original contour and its simplified polygon, scaled to the size (perimeter) of the contour.

4.) 1000 was just a value we found worked well to filter out a lot of false positives. On our production system we ended up upping it to 2000, if I remember right.
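To tie those answers together, here is roughly how those pieces fit into the processing loop (a sketch only; the kernel size, 0.02 epsilon, and 1000 area threshold are the same tuning values discussed above, not magic numbers):

import cv2

def find_targets(mask):
    # 1.) Smooth the binary mask a little to knock down pixel noise
    blurred = cv2.GaussianBlur(mask, (5, 5), 0)

    # 2.) Get the contour outlines; RETR_TREE keeps the nesting hierarchy,
    #     CHAIN_APPROX_SIMPLE keeps only segment end points (4 for a rectangle)
    contours, hierarchy = cv2.findContours(blurred, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    targets = []
    for contour in contours:
        # 3.) Epsilon for the polygon approximation is 2% of the contour's perimeter
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        # 4.) Keep four-sided shapes bigger than the experimentally chosen area cutoff
        if len(approx) == 4 and cv2.contourArea(approx) > 1000:
            targets.append(approx)
    return targets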

Any other questions, feel free to ask; please do some research first though. I wish you luck!

mdrouillard
28-04-2013, 19:24
Hello everyone. I am a team mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard according to the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for any function, such as vision processing, how did you power the device?

We interpreted a second battery pack for the xM as not permitted because it would not be physically internal to the xM, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great due to its light weight in comparison to a full laptop. And if we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which we thought we would need to attach to in order to connect the required power converter. Remember there are other rules about the converter that powers the radio, etc. Besides, we did not ideally want to power it from the large battery, because we did not want the Linux OS to get trashed during the ungraceful power-ups and power-downs that happen on the field. So, how did you accomplish this aspect of using a separate processor? In short, we may have had a vision processing approach, but we could not figure out how to wire the processor.

Any ideas?

md

Gregor
28-04-2013, 19:32
Hello everyone. I am a team mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard according to the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for any function, such as vision processing, how did you power the device?

We interpreted a second battery pack for the xM as not permitted because it would not be physically internal to the xM, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great due to its light weight in comparison to a full laptop. And if we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which we thought we would need to attach to in order to connect the required power converter. Remember there are other rules about the converter that powers the radio, etc. Besides, we did not ideally want to power it from the large battery, because we did not want the Linux OS to get trashed during the ungraceful power-ups and power-downs that happen on the field. So, how did you accomplish this aspect of using a separate processor? In short, we may have had a vision processing approach, but we could not figure out how to wire the processor.

Any ideas?

md

987 did a lot of work in the 2012 season on powering their Kinect.

Check out the "Powering the Kinect and the Pandaboard" section of their whitepaper (http://www.chiefdelphi.com/media/papers/2698?).

jacob9706
28-04-2013, 21:16
Hello everyone. I am a team mentor for Team 772. This year we had great intentions of doing vision processing on a BeagleBoard-xM rev 3, and during the build season we got OpenCV working on a Linux distro on the board. Where we fell off is that we could not figure out a legal way to power the BeagleBoard according to the electrical wiring rules. So my question to the community is: if you did add a second processor to your robot for any function, such as vision processing, how did you power the device?

We interpreted a second battery pack for the xM as not permitted because it would not be physically internal to the xM, unlike a laptop, which has a built-in battery and would be allowed. The xM would have been great due to its light weight in comparison to a full laptop. And if we powered it from the power distribution board, the rules say only the cRIO can be powered from those terminals, which we thought we would need to attach to in order to connect the required power converter. Remember there are other rules about the converter that powers the radio, etc. Besides, we did not ideally want to power it from the large battery, because we did not want the Linux OS to get trashed during the ungraceful power-ups and power-downs that happen on the field. So, how did you accomplish this aspect of using a separate processor? In short, we may have had a vision processing approach, but we could not figure out how to wire the processor.

Any ideas?

md

The second processor (O-DROID U2) runs on 12 volts, so it is just plugged directly into the power distribution board. Even with the battery dropping to 8 volts at times, we never had an issue with the vision machine; the cRIO will "crap out" before the O-DROID U2.

Iaquinto.Joe
28-04-2013, 23:36
Even with the battery dropping to 8 volts at times, we never had an issue with the vision machine; the cRIO will "crap out" before the O-DROID U2.

Honestly, something needs to change about this. We lost several matches because our fully charged battery had run out of juice by the end of the match.

jacob9706
28-04-2013, 23:42
Honestly, something needs to change about this. We lost several matches because our fully charged battery had run out of juice by the end of the match.

We have never had a problem during a match. ALWAYS CHARGE YOUR BATTERIES BETWEEN MATCHES and NEVER put a bad battery in the rotation. We buy new batteries each year and keep the bad ones for practice use only.

Joe Ross
29-04-2013, 00:06
The second processor (O-DROID U2) runs on 12 volts, so it is just plugged directly into the power distribution board. Even with the battery dropping to 8 volts at times, we never had an issue with the vision machine; the cRIO will "crap out" before the O-DROID U2.

The cRIO power supply on the PDB will boost the battery voltage and operate down to 4.5 volts. At that point, the battery is very dead. Looking at the O-DROID U2 specs, it says it uses a 5v power supply. Do you know what kind of power circuitry the O-DROID U2 uses? It seems unlikely that it operates below 4.5 volts.

Honestly, something needs to change about this. We lost several matches because our fully charged battery had run out of juice by the end of the match.


Rather than divert this thread, it would be good if you started a new thread and posted your driver station logs (https://wpilib.screenstepslive.com/s/3120/m/8559/l/97119-driver-station-log-file-viewer) in it. The cRIO power supply operates down to 4.5 volts, as does the radio. The Digital Sidecar would be the first thing to turn off, but only momentarily, until the battery voltage returns to normal. It seems likely that something else, like a loose wire or a bad battery, caused the problems.

William Kunkel
05-05-2013, 10:57
With regards to image processing on a co-processor, one of the biggest obstacles my team had was getting the information from the co-processor to the cRIO. Network socket documentation for C++ on the cRIO is flaky at best. Does anyone have experience/example code for communicating between a C++ or Python co-processor and a cRIO running C++?

wet_colored_arc
05-05-2013, 21:06
With regards to image processing on a co-processor, one of the biggest obstacles my team had was getting the information from the co-processor to the cRIO. Network socket documentation for C++ on the cRIO is flaky at best. Does anyone have experience/example code for communicating between a C++ or Python co-processor and a cRIO running C++?

Does Virtuald's link help? http://firstforge.wpi.edu/sf/projects/robotpy

I am interested to see where this thread goes. I do mechanical mentoring but am a Python hobbyist.

jacob9706
05-05-2013, 21:18
With regards to image processing on a co-processor, one of the biggest obstacles my team had was getting the information from the co-processor to the cRIO. Network socket documentation for C++ on the cRIO is flaky at best. Does anyone have experience/example code for communicating between a C++ or Python co-processor and a cRIO running C++?

We ended up implementing our own network table class based on their documentation: Our implementation (https://github.com/Team3574/2013VisionCode/blob/master/src/nt_client.py). On line 63 of this (https://github.com/Team3574/2013VisionCode/blob/master/src/Processor2.py) we instantiate our network object, and on lines 265 to 367 we set a couple of values for the robot to read.

I believe this is where the documentation came from:
Documentation (http://api.viglink.com/api/click?format=go&key=aa49f000a51ed35f2f92d8fe98f1954a&loc=http%3A%2F%2Fwww.chiefdelphi.com%2Fforums%2Fshowthread.php%3Ft%3D112089&v=1&libId=be0dc03f-90a3-44d0-922f-9f3da2b48fcc&out=http%3A%2F%2Ffirstforge.wpi.edu%2Fsf%2Fdocman%2Fdo%2FdownloadDocument%2Fprojects.wpilib%2Fdocman.root%2Fdoc1318&ref=https%3A%2F%2Fwww.google.com%2F&title=Looking%20for%20Network%20Table%20protocol%20documentation.%20-%20Chief%20Delphi&txt=Here's%20a%20link%20to%20the%20spec.&jsonp=vglnk_jsonp_13678026647177)

I recently started messing with creating a custom dashboard from scratch and was able to get the network tables running with little hassle on Linux using the code from here (https://github.com/robotpy/pynetworktables). I would recommend this because it is derived directly from the robot C++ implementation (from my understanding) and seems much more stable than the version we created and used on our robot.
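To give a feel for it, publishing vision results with pynetworktables looks roughly like the sketch below. Note this is written against a newer pynetworktables API than the 2013 release linked above, so the import and method names may differ for older versions, and the server address, table name, and keys are only examples:

from networktables import NetworkTables

# Connect to the robot as a client (address is an example; use your robot's IP)
NetworkTables.initialize(server="10.35.74.2")
table = NetworkTables.getTable("vision")

def publish_target(x, y, distance):
    # Write the latest target info for the robot-side code to read
    table.putNumber("target_x", x)
    table.putNumber("target_y", y)
    table.putNumber("target_distance", distance)
    table.putBoolean("target_found", True)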

sparkytwd
09-05-2013, 17:45
For powering the ODroid, we used this: http://www.pololu.com/catalog/product/2177, a 3 A 5 V buck regulator. It's connected directly to the power distribution board, so we're not using the cRIO regulator. The connection was made by using a multimeter to determine the plug polarity of the AC adapter, then splicing the barrel-jack end onto the output of the regulator.

We also had some camera stability issues, with the cameras occasionally having driver problems, which we believed were caused by drawing too much power. This was solved with another 3 A 5 V buck regulator and an external USB hub.

safiq10
05-09-2013, 21:40
AWESOME