Outsourcing Vision Code

For the past few hours, I have been rigorously developing a working vision solution. It can be found here:

https://github.com/faust1706/vision2015

It tracks the L’s on the yellow totes, returning values that let you line up with the center of the tote and tell whether you are perpendicular to it.
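To give a rough idea of the approach (this is not the release code, and the threshold values below are placeholders you would have to tune for your own camera and lighting), you segment the two L marks, then compute a lateral offset from their midpoint and a skew estimate from their relative heights:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // camera index 0 is an assumption
    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Placeholder HSV range for segmenting the marks; tune for your setup.
        cv::inRange(hsv, cv::Scalar(20, 100, 100), cv::Scalar(35, 255, 255), mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (contours.size() < 2) continue;

        // Treat the two largest blobs as the two L marks.
        std::sort(contours.begin(), contours.end(),
                  [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
                      return cv::contourArea(a) > cv::contourArea(b);
                  });
        cv::Rect left = cv::boundingRect(contours[0]);
        cv::Rect right = cv::boundingRect(contours[1]);
        if (left.x > right.x) std::swap(left, right);

        // Lateral offset: midpoint between the marks vs. the image center.
        double mid = (left.x + left.width / 2.0 + right.x + right.width / 2.0) / 2.0;
        double offset = mid - frame.cols / 2.0;

        // Perpendicularity: viewed head-on, both marks appear the same height.
        double heightRatio = static_cast<double>(left.height) / right.height;

        std::cout << "offset=" << offset << "px, height ratio=" << heightRatio << "\n";
    }
    return 0;
}
```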

Let me know if you have any questions.

Thanks! We have been stoked on this for a while.

Ok, questions:
Is it LabVIEW code?
If it is, how would I implement it?
If it isn’t, how can I replicate something like this with LabVIEW and the Vision Assistant?
Thanks!

The code is written in C++ with OpenCV. I do not know if you can compile OpenCV on the roboRIO.

I’m in the process of writing an in-depth explanation of this code. Stay tuned; it will be something like this:

https://www.dropbox.com/s/5wbgtie9vci2d26/Symposium%20Presenation.ppt?dl=0

Ignore the math at the beginning. It is irrelevant to this year. (Though if you are interested in learning it, look up camera pose estimation).
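If you do want to dig into pose estimation, the core of it in OpenCV is cv::solvePnP: given the real-world coordinates of known points on a target and their pixel locations in the image, it recovers the camera’s rotation and translation relative to the target. A bare-bones illustration follows; the target dimensions, pixel coordinates, and camera matrix are made-up placeholders, and real intrinsics would come from cv::calibrateCamera:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Known 3D corners of a rectangular target, in inches (placeholder dimensions).
    std::vector<cv::Point3f> object_pts = {
        {0, 0, 0}, {16.9f, 0, 0}, {16.9f, 12.1f, 0}, {0, 12.1f, 0}};
    // Where those corners were found in the image (placeholder pixels).
    std::vector<cv::Point2f> image_pts = {
        {210, 300}, {430, 310}, {425, 150}, {215, 145}};

    // Camera intrinsics; real values come from cv::calibrateCamera.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320,
                                           0, 700, 240,
                                           0,   0,   1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);  // assume no lens distortion

    cv::Mat rvec, tvec;
    cv::solvePnP(object_pts, image_pts, K, dist, rvec, tvec);
    // tvec holds the target's position relative to the camera,
    // rvec its orientation as a rotation vector (see cv::Rodrigues).
    std::cout << "t = " << tvec.t() << std::endl;
    return 0;
}
```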

I don’t know what the sample vision program this year does, I’ll have to take a look at it. I am sure there is a way to implement this program in LabVIEW, but I personally have no idea how to do that.

The LabVIEW tutorial on vision will have you open a project that includes three processing approaches for the laptop and one integrated on the roboRIO.

One of these uses the color combos of carpet and tote along with the aspect ratio and other metrics to roughly identify totes.

The second uses retroreflection on the | | shape.

The third uses a patterned template to locate the logo on the tote.
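For those more comfortable reading code than block diagrams, the gist of the first approach (color plus aspect ratio) looks roughly like this in C++/OpenCV. The HSV range, area cutoff, and aspect-ratio window below are placeholder numbers, not the values the LabVIEW example uses:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Rough illustration of color + aspect-ratio tote filtering.
std::vector<cv::Rect> findToteCandidates(const cv::Mat& frame) {
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    // Placeholder yellow range; real values depend on camera and lighting.
    cv::inRange(hsv, cv::Scalar(20, 100, 100), cv::Scalar(35, 255, 255), mask);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> totes;
    for (const auto& c : contours) {
        if (cv::contourArea(c) < 500) continue;  // drop small noise blobs
        cv::Rect box = cv::boundingRect(c);
        double aspect = static_cast<double>(box.width) / box.height;
        // A tote seen from the side is wider than tall; 1.5-2.5 is a
        // guess here, so measure the real game piece.
        if (aspect > 1.5 && aspect < 2.5) totes.push_back(box);
    }
    return totes;
}
```

The other two approaches swap in a different segmentation step (brightness thresholding of the retroreflected | | shape, or template matching on the logo) but follow the same mask-then-filter pattern.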

Feel free to ask additional questions.

By the way, I believe similar processing examples will soon be available for the other languages. And there was an OpenCV library release announced about a week ago; just search for it on CD.

Greg McKaskle

Erm, the GitHub repo seems to only have a README?

Yeah, sorry. Massive code changes. I’m back at college, so I have no say in what the students do with the repos; I just teach them. They have depth code that tracks every game piece, IR code that tracks the reflective tape, and color code that tracks the short side of the yellow tote.

They wanted to do a big code release with all three programs. There are mini write-ups of each, but they didn’t want to write three papers, so they are making a video that demos and explains all three. They started videotaping last night, and will continue tomorrow when the camera is available again.

Tl;dr: the students want their code release to be as good as it can be, and half of them are perfectionists, so it’ll take some time. They still have all the video editing to do: syncing the program output with the explanation and the live demo of each aspect.

If you have any questions, I am allowed to answer high level stuff, but not provide massive blocks of code.

This code is easily the most readable, most nicely organized code the 1706 vision team has ever written. They set that out as a goal from the very start. When they got stuck on a task I had given them, they thought about it, but they were also adding comments, reorganizing the code, writing brief documentation, and consulting their notebooks. It was a sight to see a bunch of freshman and sophomore high schoolers work so diligently on a task every day.

I honestly have no idea when they are going to finish the video. The code was completely finished last Friday night. Then on Saturday, right before I left to go back to college, I gave them a short list of things to do before they open-sourced it: write a quick summary of each sub-program explaining what it achieves and how you can use the info. After I left, one of them got the idea to make a video explaining and demoing it instead.

Their task is done for the build season, until they find a new one, that is. They have three working solutions to the task they were given. I’ll have to think of a project to give them, like cleaning up our old code base and converting it all to the same language. I don’t know. They’re a bunch of enthusiastic kids who are excited to be learning. What more could you ask for?

I believe you can compile OpenCV on the roboRIO:
http://www.chiefdelphi.com/forums/showthread.php?t=131905

I didn’t know you could at the time. I have not yet seen a benchmark of a simple OpenCV program running in parallel with the FRC control code, so I can’t give any opinion on whether one should or shouldn’t go this route.

You can also compile the libfreenect library (libraries now, because of freenect2 for the Kinect v2) and use the Kinect with the roboRIO. Again, I just know it can be done; I have no idea what the performance will be if you try it.
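For anyone who wants to experiment, the libfreenect C API boils down to registering a depth callback and pumping USB events. A minimal sketch follows (untested on the roboRIO; the include path and the resolution/format constants may vary with your libfreenect version):

```cpp
#include <libfreenect.h>
#include <cstdint>
#include <cstdio>

// Called by libfreenect whenever a new depth frame arrives.
void depth_cb(freenect_device* dev, void* depth, uint32_t timestamp) {
    uint16_t* d = static_cast<uint16_t*>(depth);
    // 640x480 11-bit depth values; print the center pixel as a sanity check.
    std::printf("center depth: %u\n", (unsigned)d[240 * 640 + 320]);
}

int main() {
    freenect_context* ctx;
    if (freenect_init(&ctx, nullptr) < 0) return 1;

    freenect_device* dev;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;  // first Kinect found

    freenect_set_depth_callback(dev, depth_cb);
    freenect_set_depth_mode(dev, freenect_find_depth_mode(
        FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_start_depth(dev);

    // Pump USB events; the callback fires from inside this loop.
    while (freenect_process_events(ctx) >= 0) {}

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}
```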