Example of Vision Processing Available Upon Request

Last year my team was recognized for having a great vision system, and if I get enough requests I would be more than happy to put together a quick tutorial to get teams up and running with “on-robot tracking” instead of sending packets over the network.

Sending packets over the network may be a problem this year because the GDC has said that those packets are “deprioritized” relative to others.

Let me know what you think. I will need to know whether you want the vision code in C++ or Python, and whether you want the robot code in C++ or Java.

Let me know if anyone is interested!

==========================EDIT====================================
Hey everyone, it seems there is an overwhelming need for this. Let me explain what we did last year and what the tutorial will cover.

Last year
The main thing that set us apart from other teams was that we did all our vision processing on the robot itself, on a Core i5 computer (i.e. a motherboard with integrated graphics and a Core i5, no screen or anything). We ran Ubuntu (a Linux distribution). To deploy code we used git and bash scripts to pull, compile, and run the code on boot.

The Tutorial
The one thing I will be covering is how to build a basic rectangle-tracking system. This system will recognize a colored rectangle, find its bounding points, and draw them on the image.

After seeing the above posts it seems like everyone would like to see the vision in C++.

Pros and Cons
C++
Pros

  • Errors are caught at compile time (for the most part)
  • Many more tutorials (as of last year)

Cons

  • Extra scripts needed for compiling
  • Network sockets are “harder”

Python
Pros

  • Dynamically typed language
  • Automatic memory management
  • “Easy” network sockets

Cons

  • Extra interpreter layer
  • Not as many examples
  • Not many people know Python
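To illustrate the “easy network sockets” point above: in Python, shipping a detected bounding box to the robot side takes only a few lines with the standard `socket` and `json` modules. This is a generic sketch of my own (the local socket pair just stands in for the real vision-computer-to-robot link), not the code the tutorial will cover.

```python
import json
import socket

def send_target(sock, x, y, width, height):
    # Ship the bounding box as one newline-terminated JSON record.
    msg = json.dumps({"x": x, "y": y, "w": width, "h": height})
    sock.sendall(msg.encode("utf-8") + b"\n")

# Demo: a connected local socket pair stands in for the network link.
vision_side, robot_side = socket.socketpair()
send_target(vision_side, 60, 40, 121, 81)

# Receiving side: read one line, decode it back into a dict.
line = robot_side.makefile("r").readline()
target = json.loads(line)
print(target["x"], target["y"])  # prints: 60 40

vision_side.close()
robot_side.close()
```

Newline-delimited JSON keeps message boundaries obvious on a stream socket; the equivalent in C++ means managing the socket lifecycle and parsing by hand, which is the “harder” part listed under the C++ cons.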

Because of the new documentation, and because I am trying to convince my team to use Python this year across the board (robot and vision), I will be doing the vision in Python. Another reason is that the Python code is the same on Windows and Linux (the C++ libraries vary a bit).

I will be posting back here when the tutorial is complete. I will not, however, be covering how to install Python or the OpenCV libraries. While you wait for the rectangle tutorial, here is how to install OpenCV (I will be using Python 2.7.3 and OpenCV 2.4.2): How to install OpenCV

At the end of the Python tutorial I will show you how to convert the code to C++.

What was your strategy for computer vision? Did you use the WPIlib functions or did you write your own image recognition functions?

We implemented an onboard machine dedicated to processing our vision. We ended up utilizing OpenCV for the main processing.

What did you run OpenCV on? Did you figure out how to run it on the cRIO, or did you run it on a laptop?

I’m interested. One of the Team’s programming goals is to use vision this year. Thanks!

I’m definitely interested! Our team got vision almost working last year, but the problem was that it wasn’t at all reliable. In C++, if you don’t mind.

I would also be interested. I got some vision going last year as well, but not super reliably either. C++ please.

An example for OpenCV vision processing on a coprocessor would be great!

For teams looking to process on the cRIO, it looks like there are examples available in each language already. We’ll be playing around with one soon to decide if it’s viable or if we want to focus on doing it on the DS or a coprocessor.

Any ideas for a self-contained program that takes an image and supplies an image to the cRIO with C++ and Network Tables, and runs on a separate system (a la Raspberry Pi, Arduino, and so on)?

I’m interested in learning about using a coprocessor.

Team 3753 here, and we’ve never used vision before, though we’re 100% determined to this year. We have last year’s Kinect and some reflective tape at the ready!

We’re programming in LabVIEW, but I do know basic C++ and Java both so this would still be immensely helpful even if not done in LabVIEW!

There was a card with an activation key for some sort of vision processing software to install on our driving computer. So I’m curious to see how that would work. I feel like it could have a lot of potential, I’ll post back when I try it out.

Do the vision processing in Python please! I’d love to see a tut on it.

Awesome! I’m a proficient Python developer outside of robotics, but it’s just easier to get other kids into robotics programming with LabVIEW. Maybe if I do vision similar to yours this year we can push Python onto the rest of the team.

We tried to do it last year, unsuccessfully… Java, please?

As a note, there is the “white paper” at wpilib.screenstepslive.com that will point you to the C++, Java, and LabVIEW examples for rectangle recognition and processing.

I didn’t know Python was officially supported this year. I guess Java would be best, but I know Python too.

But pinkie-promise you won’t use libraries only accessible in Python? (Or at least point out how to replace them in Java/C++.)

I have done it with SimpleCV, which is basically a Python wrapper for OpenCV. I did that over the summer. Last season I did it in OpenCV C++.

SimpleCV code is extremely easy to use.

I am starting on the tutorial right now.

I have decided I will not be doing the robot or network code at this time; the tutorial will cover just the vision. If demand is high enough I will also do a tutorial on sending the data to the robot. Networking examples can be found just about anywhere for any language.

Look for a post entitled “OpenCV Tutorial”. I will post here as well once the tutorial thread is up.