Calibrating vision processing on the field

My team is probably going to use RoboRealm for vision processing this year. How much time are teams given to calibrate their script on the field, and what is the fastest/easiest way you know of to calibrate it?

We have never used our vision processing code at a competition before, so I don’t have any experience calibrating the script on the field. How does your team do it?

Thanks,
Dan

The way we did it last year was to have me (programmer and driver) take screenshots of the goals through the camera during our practice match. I then tuned and tested the algorithm against those saved images, and it seemed to work fairly well.

As per rule 5.5.7 (in The Tournament section),

The ARENA will be open for at least one (1) hour prior to the start of Practice MATCHES, during which Teams may survey and/or measure the FIELD. The specific time that the FIELD is open will be communicated to Teams at the event. Teams may bring specific questions or comments to the FTA.

The field staff typically will not let you bring your entire robot on the field during this time, but I don’t think there would be a problem with bringing a laptop and camera out.

For calibration with the robot, practice matches are the place to do it, as eddie12390 suggested.

What kind of calibration do you think you need to do and why?

Changing the threshold parameters to match the lighting on the field.

Is there a way to overcome the difference in lighting between the field and our workshop?

Here is the strategy that has worked for 341 using an Axis camera and LED ring(s). We never had to calibrate on the field in either 2012 or 2013 (both years, all of our shots were aimed by our vision system):

  1. Turn off automatic white balance on the camera (can be done from the camera’s web interface).

  2. Set a very short exposure time (also from the web interface) so the image is mostly dark, except for the illuminated regions of the reflective tape. Put the camera in a bright scene (e.g. hold up a white card a foot or two in front of the lens) and then do a “hold” on exposure priority. Experiment with different settings; you want virtually all black except for a very bright reflection off of the tape. This serves two purposes: 1) it makes vision processing much easier (fewer false detections), and 2) it conserves bandwidth, since dark areas of the image are very compact after JPEG compression. The camera doesn’t know what you are looking for, so it will try to send you the entire scene as well as it can; if it can’t see the “background” very well, you are “tricking” the camera into only giving you the part you need!

  3. Take a wide variety of sample images in and around your build space.

  4. Design a threshold in RGB or HSL that works well on your sample images (see the sketch after this list).

  5. Your camera is now effectively much more sensitive to your LED ring than to ambient lighting, and all dynamic settings on the camera (white balance and exposure) are held constant. You likely will not need to do calibration (but of course YMMV).
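If you end up writing your own thresholding code instead of building the pipeline in RoboRealm, a minimal sketch of step 4 might look like the following. This assumes OpenCV in Python with a green LED ring; the HSV bounds and filenames are placeholders to tune against your own sample images, not known-good values.

    import cv2
    import numpy as np

    def find_tape(bgr_image):
        # A green LED ring puts the tape in a narrow hue band in HSV space.
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        lower = np.array([50, 100, 100])   # hypothetical lower H, S, V bounds
        upper = np.array([80, 255, 255])   # hypothetical upper bounds
        mask = cv2.inRange(hsv, lower, upper)
        # The bright blobs that survive the threshold are tape candidates.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return contours

    for c in find_tape(cv2.imread("sample.jpg")):
        print(cv2.boundingRect(c))  # x, y, width, height of each candidate

Because exposure and white balance are held constant (steps 1 and 2), the same bounds should carry from your build space to the field.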

Must be a regional thing; we’ve always brought our robot out.

Thanks for mentioning this. It reminded me that one should really read all of the rules each season.

This is actually the same as last year, though it seemed to me that most were interpreting it as “access to the FIELD for camera calibration.”

Which is vastly different from 2012 (Rebound Rumble), when the rule specifically forbade any measurement of the field. (Though I do remember, at one regional, seeing an adult from some team go onto the field just after the doors opened to measure the force required to bring the bridge down.)

-Karlis

We were also able to at the Pittsburgh Regional.

It kind of is. At STL, they let us bring our whole robot out. At Terre Haute we had a nice little cart powered by a battery, with an adapter to power our ODROID and monitor. We rolled it around the field, saving an image ten times a second while our program was running.
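For anyone curious, a capture loop along these lines is all it takes. This is a minimal sketch assuming OpenCV in Python; the camera index and output path are placeholders, not what we actually ran.

    import os
    import time
    import cv2

    os.makedirs("calib", exist_ok=True)  # hypothetical output directory
    cap = cv2.VideoCapture(0)            # hypothetical camera index
    frame_id = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite("calib/frame_%05d.jpg" % frame_id, frame)
        frame_id += 1
        time.sleep(0.1)                  # roughly ten images per second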

On Curie, we did the same thing as at Terre Haute. The refs were taking pictures of our cart.

At all three events they didn’t care if we brought our robot out, but we didn’t need to. Remember that the field staff are volunteers: if you can, calibrate with your program actually running and show them what it does. They are there to learn too.

In 2012 we were so scared that the IR Kinect wouldn’t work that some of us drove to the KC Regional and took thousands of pictures to convince ourselves that it would. The refs looked very confused when we said our team number and we weren’t competing at that regional.

Make sure your drivers tell you about the time available. In 2012 we kept waiting for the announcement that the field was open for vision testing. We never heard it, so I asked the Field Supervisor; it turned out the announcement had been made at the drivers’ meeting, and the drive team didn’t think they needed to relay that information to the programming team. That hurt our efforts with vision, and ultimately our matches. I didn’t let the drive team forget it anytime soon. :mad:

Thanks! That helped a lot, and I’ll make sure we follow your instructions.

Sorry to hear that; I hope it goes better for you this year. That’s the first thing I’ll tell our drivers (once we finally choose them… :confused: )