paper: Team 341 Vision System Code

The biggest advantage was that it made tuning very easy. Take the bot to the practice field with some balls, fire away, and record the range/RPM information into the TreeMap if you like what you saw (we actually had a version of the software that let the operator do this with the push of a “commit” button, but we ended up doing it all by hand just to sanitize the values). No need to fire up Excel.
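For readers who want to picture the mechanics, here is a minimal Java sketch of that kind of range-to-RPM table, with linear interpolation between the two nearest recorded entries. The class and method names are invented for illustration; this is not the actual Team 341 code.

import java.util.Map;
import java.util.TreeMap;

// Hypothetical range -> RPM table, not the real DaisyCV/shooter class.
public class RangeToRpmTable {
    private final TreeMap<Double, Double> table = new TreeMap<Double, Double>();

    // What a "commit" button would do: record a range/RPM pair that worked.
    public void commit(double rangeInches, double rpm) {
        table.put(rangeInches, rpm);
    }

    // Linearly interpolate between the nearest recorded ranges.
    public double rpmForRange(double rangeInches) {
        Map.Entry<Double, Double> lo = table.floorEntry(rangeInches);
        Map.Entry<Double, Double> hi = table.ceilingEntry(rangeInches);
        if (lo == null && hi == null) {
            throw new IllegalStateException("No entries have been committed yet");
        }
        if (lo == null) return hi.getValue();
        if (hi == null) return lo.getValue();
        if (lo.getKey().equals(hi.getKey())) return lo.getValue();
        double t = (rangeInches - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }
}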

By far the biggest advantage of a laptop-based solution was the development process that it facilitated. Simply collect some representative images and then you can go off and tune your vision software on a laptop without needing a cRIO or the FRC control system.
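As a rough illustration of that workflow (this is not the actual DaisyCV test harness), a stand-alone tuning loop can be as simple as reading saved images off disk and handing them to whatever processing routine you are working on:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Hypothetical off-robot harness: loop over saved sample images and feed them
// to the pipeline being tuned. processImage() is a stand-in for your own code.
public class OfflineTuner {
    public static void main(String[] args) throws Exception {
        for (String path : args) {
            BufferedImage image = ImageIO.read(new File(path));
            System.out.println("Loaded " + path + " ("
                    + image.getWidth() + "x" + image.getHeight() + ")");
            // processImage(image);  // run the vision pipeline under test here
        }
    }
}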

Yeah! I like that one a lot…

For the C++ Wind River people out there who look at this… you cannot use the modulo operator on doubles, but you can use the fmod() offered in math.h:


return fmod(angle + 360.0, 360.0);  // C++: wraps an angle in (-360, 360) into [0, 360)

//That function reminded me of this one:
//Here is another cool function I pulled from our NewTek code that is 
//slightly similar and cute...
int Rotation_ = (((Info->Orientation_Rotation + 45) / 90) % 4) * 90;

//Can you see what this does?


Doh! I missed this before I responded as it was on the 2nd tab…

While I’m here though… if you are a C++ guy, why use Java? Is it because it was the only way to interface with the dashboard? I gave up when trying to figure out how to do that in Wind River.

Easy, it’s because I am not the only person who writes software for our team! Java is what is taught to our AP CS students, and is a lot friendlier to our students (in that it is a lot harder to accidentally shoot yourself in the foot). I also have a lot of training in Java (and still use it on a nearly daily basis), even if C++ is my bread and butter.

Ah ok.

Don’t be disappointed… this discussion has taught (or reminded) us of something we rarely use in C++, and it indirectly helped my co-worker fix a bug today. I do know how you feel, though, as a lot of effort goes into this! I had nearly all of my vision code written as well, and unfortunately it is all going straight into the bit bucket, as we could not get the deliverables needed to make it work in time. I do want to look your code over in more detail and post what I did as well; hopefully by then the discussion will have more meat to it, as I would like some closure on the work I have done so far.

I will reveal one piece now with this video:

When I first saw the original video, it screamed high saturation levels of red and blue on the alliance colors, and this turns out to be true. The advantage is that there is a larger line to track at a higher point, so I could use particle detection alone. The goal then was to interpret that line in perspective and use it to determine my location on the field. From the location I had everything I needed: I then go to an array-table error-correction grid with linear interpolation from one point to the next. (The grid, among other tweaks, is written in Lua; more on that later too.)
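To make the “grid with linear interpolation” idea concrete, here is a rough sketch, shown as a 2-D grid over field position with bilinear interpolation. The real version was written in Lua, and the layout, units, and values here are invented, so treat it as an illustration of the technique rather than the actual code.

// Hypothetical correction grid indexed by field position; the values and
// spacing are made up. Bilinear interpolation between the four surrounding
// grid points (the grid must be at least 2x2).
public class CorrectionGrid {
    private final double[][] grid;  // correction value at each grid point
    private final double cellSize;  // spacing between grid points (same units as x/y)

    public CorrectionGrid(double[][] grid, double cellSize) {
        this.grid = grid;
        this.cellSize = cellSize;
    }

    public double correctionAt(double x, double y) {
        double gx = x / cellSize, gy = y / cellSize;
        int ix = Math.max(0, Math.min((int) Math.floor(gx), grid.length - 2));
        int iy = Math.max(0, Math.min((int) Math.floor(gy), grid[0].length - 2));
        double fx = gx - ix, fy = gy - iy;
        // Interpolate along x on the two bounding rows, then along y.
        double a = grid[ix][iy]     + fx * (grid[ix + 1][iy]     - grid[ix][iy]);
        double b = grid[ix][iy + 1] + fx * (grid[ix + 1][iy + 1] - grid[ix][iy + 1]);
        return a + fy * (b - a);
    }
}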

more to come…

There is one question that I would like to throw out there now though… does anyone at all work with the UYVY color space (a packed 4:2:2 Y’CbCr format, often loosely called YUV or YPbPr)? We work with this natively at NewTek, and it would be nice to see who else does.
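For anyone who has not touched it: UYVY packs two pixels into four bytes, with per-pixel luma and shared chroma. As a rough, illustrative sketch (the BT.601 limited-range coefficients are standard, but the code itself is not from any of the projects discussed here):

// Illustrative only: unpack one UYVY macropixel (U0 Y0 V0 Y1 -> two RGB pixels)
// using BT.601 limited-range coefficients.
public class Uyvy {
    static int clamp(double c) {
        return (int) Math.round(Math.max(0.0, Math.min(255.0, c)));
    }

    // Returns {r0, g0, b0, r1, g1, b1} for the two pixels sharing this chroma pair.
    static int[] macropixelToRgb(int u, int y0, int v, int y1) {
        int cb = u - 128, cr = v - 128;
        int[] lumas = { y0, y1 };
        int[] out = new int[6];
        for (int i = 0; i < 2; i++) {
            double y = 1.164 * (lumas[i] - 16);
            out[3 * i]     = clamp(y + 1.596 * cr);               // R
            out[3 * i + 1] = clamp(y - 0.392 * cb - 0.813 * cr);  // G
            out[3 * i + 2] = clamp(y + 2.017 * cb);               // B
        }
        return out;
    }
}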

So after attending the Einstein Weekend debugging session this past weekend and chatting with some of the teams about their various OpenCV-based vision systems, I just HAD to check out Daisy’s code (another suggestion from Brad Miller).

So honestly, I have little experience with Java, but I figured what the heck since it’s so close to C++. After following some of the Getting Started guides and playing with a couple of projects, I downloaded Daisy’s code and set to work running main() and passing the example image paths to the code as arguments. This seems to work well, and two windows pop up showing the “Raw” and “Result” images. What baffles me is that I get this as output as well:

“Target not found
Processing took 24.22 milliseconds
(41.29 frames per second)
Waiting for ENTER to continue to next image or exit…”

and for the “Result” image I get a vertical green line more or less in the middle of the picture. I ran the program a couple of times with different images and got similar results. Can someone tell this C guy what the heck he’s doing wrong? Is there something I’m missing? If you guys (Daisy) were using a different color ring light or something, could you provide some sample images that work? Thanks in advance!

  • Bryce

P.S. I’m running this on an OLD POS computer running XP with a Pentium D processor. I’ll have to run it at home on something with a little bit of muscle and check performance.

The vertical green line is simply an alignment aid that is burned into each image; it is not an indication that you have successfully detected the target. If the vision system is picking up the vision targets, you should see blue rectangles and dots indicating the outline and center of the targets, respectively.

Which images are you testing with? The supplied images should work with the code “as is”. If you are using your own images, are you using a green LED ring? If you are using a different color LED ring, you will need to alter the color threshold values in the code. Note that regardless of LED ring color, adjusting the camera to have a very short exposure time (so that the images are quite dark) increases the SNR of the retroreflections, and makes tracking both more robust and much quicker.
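To give a sense of what “color threshold values” means here, below is a rough plain-Java sketch of the kind of green test being tuned. The band limits are made-up starting points, not the values in the posted code, which does its thresholding through its vision library rather than pixel-by-pixel like this.

import java.awt.Color;
import java.awt.image.BufferedImage;

// Illustrative HSV threshold: mark pixels that fall inside a band around green.
// The limits below are invented starting points for tuning, not DaisyCV's values.
public class GreenThreshold {
    public static BufferedImage threshold(BufferedImage in) {
        BufferedImage out = new BufferedImage(in.getWidth(), in.getHeight(),
                BufferedImage.TYPE_BYTE_GRAY);
        float[] hsv = new float[3];
        for (int y = 0; y < in.getHeight(); y++) {
            for (int x = 0; x < in.getWidth(); x++) {
                int rgb = in.getRGB(x, y);
                Color.RGBtoHSB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, hsv);
                boolean green = hsv[0] > 0.25f && hsv[0] < 0.45f  // hue near green
                        && hsv[1] > 0.5f                          // well saturated
                        && hsv[2] > 0.2f;                         // bright enough
                out.setRGB(x, y, green ? 0xFFFFFF : 0x000000);
            }
        }
        return out;
    }
}

With a short exposure, essentially only the retroreflective tape lit by the LED ring stays bright and saturated, which is why darker images make a threshold like this both more reliable and quicker to clean up afterwards.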

Thanks for your reply, Jared! I’m using the sample images supplied with the code located in DaisyCV/SampleImages. They have names like 10Feet.jpg and 10ft2.jpg. I tried about three different images. These look like the same images supplied with the Java vision sample program that I pulled off FirstForge. Does this seem correct? Thanks.

  • Bryce

Thanks for this great code, but I would really appreciate it if someone could explain to me how this works. I opened the file you uploaded in NetBeans and loaded the libraries, but I didn’t know what the classpath was, and there were a bunch of errors everywhere. Also, if someone could explain to me how the whole network thing works, I would greatly appreciate it. I know this is a lot to ask, so thanks in advance.

I fixed the errors, but I still don’t know what the code does.

Thanks,
Dimitri

Sorry for the (very) late reply, but it has come to my attention that I erroneously included the default WPILib vision tutorial sample images in this project (which light the target in red) instead of the green-lit test images we actually used for tuning. I will upload the correct test images when I get onto the right laptop.

Of course, you can also re-tune the color segmentation algorithm to look for red instead of green :)

EDIT: I have uploaded some sample images that should work with the default tuning.

Thanks for uploading this!

3929 was extremely grateful when you posted this code last year. Thanks for posting this addition on setting it up.

Thanks so much for posting this! Just got it up and running, and it is amazing!! This is our first year doing vision processing, so this will help us a lot in getting ready for this year’s competition.

If anyone is interested, I ported the image processing portion of this code to Python. http://www.chiefdelphi.com/forums/showthread.php?t=112866

If it’s not too much to ask, could someone please walk me through how to run the code with test images? I put the argument (a string with the path to my test image) in the arguments field of the project properties Run window, but when I run it in NetBeans, I get the following errors in the NetBeans output window. I guess it is trying to run the SmartDashboard somehow, as it is supposed to, but how do I make this work for test images?

ant -f \\shs-ms10\Students\home\shs.install\NetBeansProjects\OctoVision run
init:
Deleting: \\shs-ms10\Students\home\shs.install\NetBeansProjects\OctoVision\build\built-jar.properties
deps-jar:
Updating property file: \\shs-ms10\Students\home\shs.install\NetBeansProjects\OctoVision\build\built-jar.properties
Compiling 1 source file to \\shs-ms10\Students\home\shs.install\NetBeansProjects\OctoVision\build\classes
compile:
run:
Exception in thread “main” java.lang.NullPointerException
at edu.wpi.first.smartdashboard.gui.DashboardPrefs.getInstance(DashboardPrefs.java:43)
at edu.wpi.first.smartdashboard.camera.WPICameraExtension.<init>(WPICameraExtension.java:103)
at edu.octopirates.smartdashboard.octovision.OctoVisionWidget.<init>(OctoVisionWidget.java:91)
at edu.octopirates.smartdashboard.octovision.OctoVisionWidget.main(OctoVisionWidget.java:351)

It looks like some of the internal changes to SmartDashboard for 2013 have broken stand-alone operation. Never fear, here is how to fix it:

Add the line:

DashboardFrame frame = new DashboardFrame(false);

…inside the main method before creating the DaisyCVWidget.
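In other words, the top of main() ends up looking something like this. Everything other than the DashboardFrame line is assumed from context (the stack trace above and the description of the widget), not copied from the real DaisyCVWidget source:

// Sketch only: the surrounding code is an assumption, not the actual source.
public static void main(String[] args) {
    // Creating the frame first initializes the dashboard preferences that
    // WPICameraExtension's constructor reads (see the stack trace above),
    // which avoids the NullPointerException. The boolean is false per the fix.
    DashboardFrame frame = new DashboardFrame(false);

    DaisyCVWidget widget = new DaisyCVWidget();
    // ...load the test images passed in args and run them through the widget...
}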

That worked, thanks so much!!

Based on what I’ve read here in the comments, this vision tracking system is legendary. We’re programming in C++, so obviously the code doesn’t work for us. We’ve never actually tried vision processing before and don’t quite know where to start. Could you please give a brief explanation of how it works? I really appreciate it!