Team 341 Vision System Code


By: Jared Russell
New: 04-30-2012 12:55 PM
Updated: 09-24-2012 07:05 PM
Total downloads: 4093


SmartDashboard Widget used by FIRST Team 341 for automatic targeting during the 2012 competition season.

Now that the 2012 FIRST Championships are a (painful) memory, Team 341 is releasing its laptop-based vision system code to all teams so that they can learn from it and/or use it in offseason competitions.

Attached Files

  • zip DaisyCV Team 341 Vision Tracking Widget

    DaisyCV.zip


    uploaded: 04-30-2012 12:55 PM
    filetype: zip
    filesize: 82.63kb
    downloads: 3057


  • zip Sample Images

    Team341TestImages.zip


    uploaded: 09-24-2012 07:05 PM
    filetype: zip
    filesize: 189.39kb
    downloads: 1034




Discussion



04-30-2012 01:07 PM

Jared Russell


Re: paper: Team 341 Vision System Code

Attached is a zip file containing everything you need to run FIRST Team 341's laptop-based vision system. We are very proud of this software, as it propelled us to 4 Regional/District Championships and the Curie Division Finals. Every shot our robot attempted in Rebound Rumble was completely controlled by this code.

To get it up and running on your computer (which can be Windows, Linux, or Mac OSX), you first should install the WPI SmartDashboard using the installer which you can find here: http://firstforge.wpi.edu/sf/frs/do/.../frs.installer

Note that for Linux or OS X builds (or if you just want to use more recent libraries than are included in the SmartDashboard installer on Windows), you will need to do the installation by hand. The libraries you will need are OpenCV (http://opencv.willowgarage.com/wiki/), JavaCV (http://code.google.com/p/javacv/), and FFmpeg (http://ffmpeg.org/). Make sure that the OpenCV and JavaCV versions you install are compatible: they must use the same OpenCV version number and the same architecture (x86 or x86-64). You then need to make sure the paths to the OpenCV libraries are in your system path.

To use the widget with the Axis camera, you should configure your camera to allow anonymous viewing and make sure it is connected directly to your wireless bridge (not the cRIO). The IP must also be set by right-clicking on the SmartDashboard widget. For best results, you will also want to ensure that your camera has a very fast exposure time (we used 1/120s). The FIRST control system documentation and vision whitepaper go over how to do this.

You can also run the widget in a "stand alone" fashion, as the class has a Java main method that you can use to load images from disk and display the processed results. Sample images are included.

Our vision system uses OpenCV library functions to successively filter the raw camera image and find the vision targets. The highest target detected is then used to compute range and bearing information. With a statically mounted camera, we use inverse trigonometry to find the distance to the target based on the height in the image. Bearing is computed by adding the offset of the target horizontally in the camera frame to the heading we are getting back from the robot. This lets us use a closed loop controller on the robot to use the gyro to "aim". If your robot has a turret, you could instead send the turret angle back to the SmartDashboard and get the same results. The computed range is used to find an optimal shooter RPM, which also gets sent back to the robot. We interpolate between points in our "range-to-RPM" lookup table in order to provide smooth outputs to our shooter wheel speed control loop.
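The range and bearing math described above can be sketched in a few lines. This is a minimal illustration: every name and constant here is hypothetical, not taken from the DaisyCV source.

```java
// Sketch of the range/bearing math described in the post above.
// All names and constants are hypothetical illustrations, not values
// taken from the DaisyCV source.
public class TargetMath {

    // Elevation angle of the target center relative to the optical axis,
    // from its vertical pixel offset and an assumed focal length in pixels.
    public static double pixelToAngle(double pixelOffset, double focalLengthPx) {
        return Math.atan2(pixelOffset, focalLengthPx);
    }

    // Range to the target from the known camera-to-target height difference
    // and the total elevation angle (fixed camera pitch + angle in the image).
    public static double estimateRange(double heightDiff, double cameraPitchRad,
                                       double targetAngleRad) {
        return heightDiff / Math.tan(cameraPitchRad + targetAngleRad);
    }

    // Bearing to the target: current robot heading plus the target's
    // horizontal angular offset within the camera frame.
    public static double estimateBearing(double headingDeg, double offsetDeg) {
        return headingDeg + offsetDeg;
    }
}
```

For example, a target 2 meters above the camera, seen dead-center by a camera pitched up 45 degrees, works out to a range of 2 meters.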

Thanks to Brad Miller, WPI, Kevin O'Connor, and the FRC Control System team for making OpenCV-based vision processing a reality in 2012!



04-30-2012 01:32 PM

Brandon Holley



Thanks for publishing this. Your tracking was absolutely deadly all season. I'm curious to see how our final algorithm stacks up against your code.



04-30-2012 02:25 PM

JamesTerm



Thanks so much for posting this... looking forward to reviewing this. It was nice to meet you... BTW I have worked with FFmpeg professionally since 2006, so if you ever have any issues with it, perhaps I can help.



04-30-2012 03:14 PM

R.C.



1323 cannot thank Jared and 341 enough for helping us with vision. By champs we were one of the best shooters in our division.

-RC



04-30-2012 06:35 PM

rainbowdash



Thanks for the code!

Great job with your camera tracking, Miss Daisy! It was exciting watching all your matches! I know everyone in the chat room was very shocked and sad when you guys got eliminated.

Good luck at your off-season competitions!



04-30-2012 11:04 PM




Your tracking system was legendary this year, 341. Heh... One of our running jokes was that your team had an aimbot.

My nerdy friends always called out "HAAAAAXXXX!!1!" when you were on the field. You're pretty good when just playing the game is cheating compared to lesser robots.



05-01-2012 12:42 AM

Ziv



This is really cool! I'm looking forward to taking a close look at this at a non-12AM time, but a quick skim left me hungry for more.

We used a much more primitive system to find targets. After even very lenient RGB thresholding, we found that ignoring small particles and filtering the remaining ones for high normalized moment of inertia was sufficient. However, we didn't use any measurements other than the x coordinate of the center of the targets. Your code will no doubt be studied by a great many people, hopefully including all of 125's trainee programmers.



05-02-2012 09:40 PM

hodgepodge



It's great to see how others approached the vision processing. I wrote the vision code for team 118 this year and from a quick glance, your algorithm is quite similar.

Our code ran onboard a BeagleBone running Linux and straight OpenCV, written in C++. libcurl was used to capture images, and a UDP socket spewed angle and distance data to the cRIO.



05-03-2012 08:31 AM

BradAMiller



Jared -

Thanks so much for posting your code. It was good meeting you at the regional and amazing watching your team's robot do its stuff. This will go a long way toward helping teams learn about computer vision processing and raise the bar for future competitions. Expect to see better documentation and improvements for next season.

And thanks to Joe Grinstead and Greg Granito for helping develop the code over last summer at WPI. They did a great job with the camera code and network tables extensions to WPILib.

Brad



05-05-2012 03:58 PM

sebflippers



Thanks for the code. I believe lines 322-334:

Code:
private double boundAngle0to360Degrees(double angle)
    {
        // Naive algorithm
        while(angle >= 360.0)
        {
            angle -= 360.0;
        }
        while(angle < 0.0)
        {
            angle += 360.0;
        }
        return angle;
    }
could be replaced with
Code:
private double boundAngle0to360Degrees(double angle)
    {
        return(abs(angle)%360.0);
    }



05-05-2012 05:50 PM

Ziv



Quote:
Originally Posted by sebflippers View Post
Thanks for the code. I believe lines 322-334 [snipped] could be replaced with
Code:
private double boundAngle0to360Degrees(double angle)
    {
        return(abs(angle)%360.0);
    }
Consider the angle -90. Daisy's code returns 270, but yours returns 90. I think you're looking for something more like this:

Code:
private double boundAngle0to360Degrees(double angle)
    {
        double ret = abs(angle)%360.0;
        if(angle < 0.0)
        {
            ret = -ret;
        }
        return(ret);
    }



05-06-2012 11:52 AM

sebflippers



Quote:
Originally Posted by Ziv View Post
Consider the angle -90. Daisy's code returns 270, but yours returns 90. I think you're looking for something more like this:

Code:
private double boundAngle0to360Degrees(double angle)
    {
        double ret = abs(angle)%360.0;
        if(angle < 0.0)
        {
            ret = -ret;
        }
        return(ret);
    }
right.



05-06-2012 02:09 PM

sebflippers



done. This one kind of defeats the purpose of making your code simpler, though.

Code:
	public static double boundAngle0to360Degrees(double angle)
    {
        return(angle > 0.0? angle % 360.0 : 360.0*(1 + (Math.abs((int)angle)/360))+angle);
    }



05-07-2012 03:58 AM

Gray Adams



Quote:
Originally Posted by sebflippers View Post
done. This one kind of defeats the purpose of making your code simpler, though.
Code:
	public static double boundAngle0to360Degrees(double angle)
    {
        return(angle > 0.0? angle % 360.0 : 360.0*(1 + (Math.abs((int)angle)/360))+angle);
    }
Why not just...
Code:
 public static double boundAngle0to360Degrees(double angle)
    {
        return((angle+360)%360);
    }



05-07-2012 08:11 AM

Jared Russell



Quote:
Originally Posted by Gray Adams View Post
Why not just...
Code:
 public static double boundAngle0to360Degrees(double angle)
    {
        return((angle+360)%360);
    }
What if angle is equal to -361?

It's awesome that you guys are analyzing the code, and you have already taught me something new (that Java's "%" operator works on floating-point values... as a primarily C++ guy, I have it burned into my brain that thou shalt use "fmod" for floating-point modulus). But if this is the part of the code that engenders the most discussion, then I'm a bit disappointed.



05-08-2012 12:11 AM

Gray Adams



Quote:
Originally Posted by Jared341 View Post
What if angle is equal to -361?

It's awesome that you guys are analyzing the code and you have already taught me something new (that Java's "%" operator works on floating point values...as a primarily C++ guy, I have it burned into my brain that thou shalt use "fmod" for floating point modulus). But if this is the part of the code that engenders the most discussion, then I'm a bit disappointed
Ok, fine.

Code:
return(((angle%360)+360)%360);
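Wrapped into a complete method, the double-mod idiom above looks like this (a minimal sketch, not the DaisyCV original; Java's % keeps the sign of the dividend, hence the extra +360/%360):

```java
// The double-modulo idiom from the thread, wrapped in the method from
// DaisyCV's discussion. Java's % works on doubles and keeps the sign of
// the dividend, so a single +360 is not enough for inputs below -360.
public class AngleUtil {
    public static double boundAngle0to360Degrees(double angle) {
        return ((angle % 360.0) + 360.0) % 360.0;
    }
}
```

This handles Jared's -361 counterexample as well as ordinary negative and oversized angles.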



05-08-2012 07:58 PM

AlexD744



I feel like at this point the original code was easier to understand from an outside perspective.

On another note, this is AWESOME! It's nice to see another team using Java to program, especially when it's done so well. Our camera tracking system is currently only used in auton to track after grabbing the third ball off the co-op bridge. It's very rudimentary code so that it can be run on the cRIO (we had issues with a bad camera that made it impossible to test any sort of laptop-based vision code). This is very interesting, and I'm excited to go over it in more detail later.



05-08-2012 08:58 PM

Tom Bottiglieri



Thanks for posting this, Jared. I took a look at your dashboard on the field at Champs and was very impressed. It seems incredibly useful to have the vision system draw its idea of target state on top of the real image.

I suspect if vision is a part of upcoming games, we will probably use a solution very similar to this. This year we wrote all of our vision code on top of WPILib/NIVision to run on the cRIO. In the end we got this to work pretty well, but development and debugging was a bit of a pain compared to your system.



05-09-2012 09:43 AM

Leeebowitz



Very cool! I just finished taking an online data structures course from IIT, so it was really awesome to see the implementation of the TreeMap! Just so that I'm sure I understand, it is used to quickly find an rpm based on your distance, right? What are the advantages (if any) of using this instead of plugging your data points into an excel graph and getting a polynomial best fit equation to transform your distance into an rpm?



05-09-2012 10:14 AM

Jared Russell



Quote:
Originally Posted by Leeebowitz View Post
Very cool! I just finished taking an online data structures course from IIT, so it was really awesome to see the implementation of the TreeMap! Just so that I'm sure I understand, it is used to quickly find an rpm based on your distance, right? What are the advantages (if any) of using this instead of plugging your data points into an excel graph and getting a polynomial best fit equation to transform your distance into an rpm?
The biggest advantage was it made tuning very easy. Take the bot to the practice field with some balls, fire away, and record the range/RPM information into the TreeMap if you like what you saw (we actually had a version of the software that let the operator do this with the push of a "commit" button, but we ended up doing it all by hand just to sanitize the values). No need to fire up Excel.
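The TreeMap-backed lookup described above might be sketched as follows. This is a simplified illustration with hypothetical names and units, not the actual DaisyCV class:

```java
import java.util.Map;
import java.util.TreeMap;

// Simplified sketch of a range-to-RPM lookup table with linear
// interpolation between calibrated points, as described in the post
// above. An illustration of the idea, not the actual DaisyCV code.
public class RpmTable {
    private final TreeMap<Double, Double> table = new TreeMap<>();

    public void addCalibrationPoint(double rangeFeet, double rpm) {
        table.put(rangeFeet, rpm);
    }

    public double getRpm(double range) {
        // Clamp to the ends of the table rather than extrapolating.
        if (range <= table.firstKey()) return table.firstEntry().getValue();
        if (range >= table.lastKey())  return table.lastEntry().getValue();

        Map.Entry<Double, Double> lo = table.floorEntry(range);
        Map.Entry<Double, Double> hi = table.ceilingEntry(range);
        if (lo.getKey().equals(hi.getKey())) return lo.getValue(); // exact hit

        double t = (range - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }
}
```

Adding a new calibration point on the practice field is just one `addCalibrationPoint` call, which is the tuning convenience Jared describes.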



05-09-2012 10:16 AM

Jared Russell



Quote:
Originally Posted by Tom Bottiglieri View Post
I suspect if vision is a part of upcoming games, we will probably use a solution very similar to this. This year we wrote all of our vision code on top of WPILib/NIVision to run on the cRIO. In the end we got this to work pretty well, but development and debugging was a bit of a pain compared to your system.
By far the biggest advantage of a laptop-based solution was the development process that it facilitated. Simply collect some representative images and then you can go off and tune your vision software on a laptop without needing a cRIO or the FRC control system.



05-09-2012 11:06 AM

JamesTerm



Quote:
Originally Posted by Gray Adams View Post
Why not just...
Code:
 public static double boundAngle0to360Degrees(double angle)
    {
        return((angle+360)%360);
    }

Yeah! I like that one a lot...

For the C++ Wind River people out there who look at this: you cannot use the modulo operator on doubles, but you can use the fmod() offered in math.h.

Code:
return (fmod ((angle+360.0) , 360.0));  //c++

//That function reminded me of this one:
//Here is another cool function I pulled from our NewTek code that is 
//slightly similar and cute...
int Rotation_ = (((Info->Orientation_Rotation + 45) / 90) % 4) * 90;

//Can you see what this does?



05-09-2012 11:13 AM

JamesTerm



Quote:
Originally Posted by Jared341 View Post
What if angle is equal to -361?

It's awesome that you guys are analyzing the code and you have already taught me something new (that Java's "%" operator works on floating point values...as a primarily C++ guy, I have it burned into my brain that thou shalt use "fmod" for floating point modulus). But if this is the part of the code that engenders the most discussion, then I'm a bit disappointed
Doh! I missed this before I responded as it was on the 2nd tab...

While I'm here though... if you are a C++ guy, why use Java? Is it because it was the only way to interface with the dashboard? I gave up when trying to figure out how to do that in Wind River.



05-09-2012 11:33 AM

Jared Russell



Quote:
Originally Posted by JamesTerm View Post
Doh! I missed this before I responded as it was on the 2nd tab...

While I'm here though... if you are a c++ guy why use Java? Is it because it was the only way to interface with the dashboard? I gave up when trying to figure out how to do that in wind river.
Easy, it's because I am not the only person who writes software for our team! Java is what is taught to our AP CS students, and is a lot friendlier to our students (in that it is a lot harder to accidentally shoot yourself in the foot). I also have a lot of training in Java (and still use it on a nearly daily basis), even if C++ is my bread and butter.



05-09-2012 02:00 PM

JamesTerm



Quote:
Originally Posted by Jared341 View Post
Easy, it's because I am not the only person who writes software for our team! Java is what is taught to our AP CS students, and is a lot friendlier to our students (in that it is a lot harder to accidentally shoot yourself in the foot).
Ah ok.

Quote:
Originally Posted by Jared341 View Post
But if this is the part of the code that engenders the most discussion, then I'm a bit disappointed
Don't be disappointed... this discussion has taught/reminded us of something that we rarely use in C++, and it indirectly helped my co-worker fix a bug today. I do know how you feel, though, as there is a *lot* of effort that goes into this! I had almost all of my vision code written as well, and unfortunately it is all going straight into the bit bucket, as we could not get the deliverables to make it work in time. I do want to look your code over in more detail and post what I did as well, and hopefully at that time the discussion will have more meat in it, as I do want some closure on the work that I have done thus far.

I will reveal one piece now with this video:
http://www.termstech.com/files/RR_LockingDemo2.mp4

When I first saw the original video, it screamed high saturation levels of red and blue on the alliance colors, and this turns out to be true. The advantage is that there is a larger line to track at a higher point, so I could use particle detection alone. The goal then was to interpret the line's perspective and use that to determine my location on the field. From the location I had everything I needed, as I then go to an error-correction grid stored in an array, with linear interpolation from one point to the next. (The grid, among other tweaks, is written in Lua; more on that later too.)

more to come...

There is one question that I would like to throw out there now though... Does anyone at all work with UYVY color space (a.k.a YPbPr). We work with this natively at NewTek, and it would be nice to see who else does.



06-15-2012 05:19 PM

Bryscus



So after attending the Einstein Weekend debugging session this past weekend and chatting with some of the teams about their various OpenCV-based vision systems, I just HAD to check out Daisy's code (another suggestion from Brad Miller).

So honestly, I have little experience with Java but figured what the heck, since it's so close to C++. After following some of the Getting Started guides and playing with a couple of projects, I downloaded Daisy's code and set to work running main() and passing the example image paths to the code as arguments. This seems to work well, and two windows pop up showing the "Raw" and "Result" images. What baffles me is that I get this as output as well:

"Target not found
Processing took 24.22 milliseconds
(41.29 frames per second)
Waiting for ENTER to continue to next image or exit..."

and for the "Result" image I get a vertical green line more or less in the middle of the picture. I ran the program a couple of times with different images and got similar results. Can someone tell this C guy what the heck he's doing wrong? Is there something I'm missing? If you guys (Daisy) were using a different color ring light or something, could you provide some sample images that work? Thanks in advance!

- Bryce

P.S. I'm running this on an OLD POS computer running XP and a Pentium D processor. I'll have to run it at home on something with a little bit of muscle and check performance.



06-16-2012 04:35 PM

Jared Russell



The vertical green line is simply an alignment aid that is burned in to each image - it is not an indication that you have successfully detected the target. If the vision system is picking up the vision targets, you should see blue rectangles and dots indicating the outline and center of the targets, respectively.

Which images are you testing with? The supplied images should work with the code "as is". If you are using your own images, are you using a green LED ring? If you are using a different color LED ring, you will need to alter the color threshold values in the code. Note that regardless of LED ring color, adjusting the camera to have a very short exposure time (so that the images are quite dark) increases the SNR of the retroreflections, and makes tracking both more robust and much quicker.
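The color-threshold step Jared mentions can be illustrated with a minimal pure-Java pixel test. The cutoff values below are hypothetical placeholders, not DaisyCV's tuned numbers:

```java
// Minimal illustration of a per-pixel color threshold; the cutoff values
// here are hypothetical placeholders, not DaisyCV's tuned numbers.
public class GreenFilter {

    // A packed 0xRRGGBB pixel "passes" if green is strong while red and
    // blue stay low, as expected from a green LED ring reflecting off
    // retroreflective tape in a dark, short-exposure image.
    public static boolean isTargetPixel(int rgb) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        return g > 150 && r < 100 && b < 100;
    }
}
```

Switching to a red LED ring would mean swapping which channel is required to be strong, which is the kind of re-tuning described above.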



06-17-2012 11:30 AM

Bryscus



Thanks for your reply Jared! I'm using the sample images supplied with the code located in DaisyCV/SampleImages. They're called names like 10Feet.jpg and 10ft2.jpg. I tried about three different images. These look like the same images supplied with the Java vision sample program that I pulled off FirstForge. Does this seem correct? Thanks.

- Bryce



08-12-2012 10:02 PM

divixsoft



Thanks for this great code, but I would really appreciate it if someone could explain to me how this works. I opened the file you uploaded in NetBeans and loaded the libraries, but I didn't know what the classpath was, and there were a bunch of errors everywhere. Also, if someone could explain to me how the whole network thing works, I would greatly appreciate it. I know this is a lot to ask, so thanks in advance.



08-12-2012 10:48 PM

divixsoft



I fixed the errors, but I still don't know what the code does.

Thanks,
Dimitri



09-24-2012 07:00 PM

Jared Russell



Sorry for the (very) late reply, but it has come to my attention that I erroneously included the default WPILib vision tutorial sample images in this project (which light the target in red) instead of the green-lit test images we actually used for tuning. I will upload the correct test images when I get onto the right laptop.

Of course, you can also re-tune the color segmentation algorithm to look for red instead of green

EDIT: I have uploaded some sample images that should work with the default tuning.



01-10-2013 08:17 PM

Shawstek



Thanks for uploading this!



01-14-2013 09:21 AM

carterh062



3929 was extremely grateful when you posted this code last year. Thanks for posting this follow-up on setting it up.



02-02-2013 04:12 PM

twiggzee



Thanks so much for posting this! Just got it up and running and it is amazing!! This is our first year doing vision processing, so this will help us a lot in getting ready for this year's competition.



02-06-2013 01:45 AM

virtuald



If anyone is interested, I ported the image processing portion of this code to Python. http://www.chiefdelphi.com/forums/sh...d.php?t=112866



02-07-2013 09:55 PM

twiggzee



If it's not too much to ask, could someone please walk me through how to run the code with test images? I put the argument (a string with the path to my test image) in the arguments field of the project properties Run window, but when I run it in NetBeans, I get the following errors in the NetBeans output window. I guess it is trying to run the SmartDashboard somehow, as it is supposed to, but how do I make this work for test images?

ant -f \\shs-ms10\Students\home\shs.install\NetBeansProjects\OctoVision run
init:
Deleting: \\shs-ms10\Students\home\shs.install\NetBeansProjects\OctoVision\build\built-jar.properties
deps-jar:
Updating property file: \\shs-ms10\Students\home\shs.install\NetBeansProjects\OctoVision\build\built-jar.properties
Compiling 1 source file to \\shs-ms10\Students\home\shs.install\NetBeansProjects\OctoVision\build\classes
compile:
run:
Exception in thread "main" java.lang.NullPointerException
at edu.wpi.first.smartdashboard.gui.DashboardPrefs.getInstance(DashboardPrefs.java:43)
at edu.wpi.first.smartdashboard.camera.WPICameraExtension.<init>(WPICameraExtension.java:103)
at edu.octopirates.smartdashboard.octovision.OctoVisionWidget.<init>(OctoVisionWidget.java:91)
at edu.octopirates.smartdashboard.octovision.OctoVisionWidget.main(OctoVisionWidget.java:351)



02-08-2013 10:50 AM

Jared Russell



It looks like some of the internal changes to SmartDashboard for 2013 have broken stand-alone operation. Never fear, here is how to fix it:

Add the line:

Code:
DashboardFrame frame = new DashboardFrame(false);
...inside the main method before creating the DaisyCVWidget.



02-08-2013 10:55 AM

twiggzee



That worked, thanks so much!!



03-09-2013 10:15 PM

mrklempae



Based on what I've read here in the comments, this vision tracking system is legendary. We're programming in C++, so obviously the code doesn't work for us. We've never actually tried vision processing before and don't quite know where to start. Could you please give a brief explanation of how it works? I really appreciate it!



03-10-2013 11:18 AM

bradubv



Thanks again to Team 341 for posting this code. It was an inspiration and a great learning opportunity for our programming team. Despite not being able to finish the PID controller that used the vision code as input, we reached the NYC Regional Semifinals.

All the elimination matches at the NYC Regional were played with the SmartDashboards turned off and that put us at somewhat of a disadvantage. I will follow up to see if I can get more details and if any of them are relevant to this thread, I'll post them here. One thing we didn't know going into the competition was that teams using the Java SmartDashboard need to apply the latest C++ language updates. Because we use Java on the robot side that wasn't obvious to us, so we were ignoring the notices about C++ language updates.



03-10-2013 12:26 PM

RufflesRidge



Quote:
Originally Posted by bradubv View Post
One thing we didn't know going into the competition was that teams using the Java SmartDashboard need to apply the latest C++ language updates. Because we use Java on the robot side that wasn't obvious to us, so we were ignoring the notices about C++ language updates.
This makes no sense. Whoever told you this was wrong. Running the C++ language update on a computer that is not being used to build C++ robot code will do nothing.



03-13-2013 08:57 PM

bradubv



Quote:
Originally Posted by bradubv View Post
I will follow up to see if I can get more details and if any of them are relevant to this thread, I'll post them here.
The FMS White Paper explains that the default camera settings will result in a bandwidth about twice that allotted. The LabVIEW Dashboard overrides those defaults with more reasonable settings, but if your team uses the Smart Dashboard you have to set the Resolution, FPS and compression settings on your own to make sure you don't go over the bandwidth limit.



03-14-2013 09:46 AM

Jared Russell



Changing the default camera settings is the most important thing you can do in order to obtain reliable tracking and stay underneath the bandwidth cap.

In particular, there are six settings to pay attention to:

1) Resolution. The smaller you go, the less bandwidth you use but the fewer pixels you will have on the target. If you make all of the other changes here, you should be able to stay at 640x480.

2) Frames per second. "Unlimited" results in a 25 to 30 fps rate under ideal circumstances. Depending on how you use the camera in a control loop, this may be overkill. Experiment with different caps.

3) White balance. You do NOT want automatic white balance enabled! Leaving it on makes your code more susceptible to being thrown off by background lighting in the arena. All of our Axis cameras have a white balance "hold" setting - use it.

4) Exposure time/priority. You want a very dark image, except for the illuminated regions of the reflective tape. Set the exposure time to something very short. Put the camera in a bright scene (e.g. hold up a white frisbee a foot or two in front of the lens) and then do a "hold" on exposure priority. Experiment with different settings. You want virtually all black except for a very bright reflection off of the tape. This is for two purposes: 1) it makes vision processing much easier (fewer false detections), 2) it conserves bandwidth, since dark areas of the image are very compact after JPEG compression. The camera doesn't know what you are looking for, so it will try to send you the entire scene as well as it can. But if it can't see the "background" very well, you are "tricking" the camera into only giving you the part you need!

5) Compression. As the WPI whitepaper says, this makes a huge difference in bandwidth. Use a minimum of 30, but you may be able to get away with more (we are using 50 this year). Experiment with it.

6) Brightness. You can do a lot of fine tuning of the darkness of the image with the brightness slider.



08-27-2013 05:32 PM

Jared Russell



http://www.chiefdelphi.com/forums/sh...82&postcount=5

I posted some detailed explanation about the "magic numbers" in our code in the post above.



09-01-2013 10:29 AM

fovea1959



What is best practice for setting the camera up? Do it in the Dashboard code, or do it in the robot code?


Quote:
Originally Posted by Jared341 View Post
Changing the default camera settings is the most important thing you can do in order to obtain reliable tracking and stay underneath the bandwidth cap.

In particular, there are six settings to pay attention to... [snipped]



09-03-2013 11:59 AM

Jared Russell



Quote:
Originally Posted by fovea1959 View Post
What is best practice for setting the Camera up? Do it in the Dashboard code, or do it in the robot code?
Neither.

With a laptop (such as the driver station) that is on the same network as the camera, open a web browser and navigate to the camera's IP address (which is set using the Axis Camera utility). From this web interface, you can tinker with all of the settings I mentioned and more, then save them so they are permanent (well, unless you press the reset switch on the camera).



03-15-2016 09:33 AM

trathier



Will the code work with any other web-based camera? I know the Axis and Microsoft cameras have been in the KOP, but has anyone used anything else?


