Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Technical Discussion (http://www.chiefdelphi.com/forums/forumdisplay.php?f=22)
-   -   Turrets and cameras (http://www.chiefdelphi.com/forums/showthread.php?t=75353)

The Lucas 03-03-2009 11:34

Re: Turrets and cameras
 
Quote:

Originally Posted by spc295 (Post 830971)
would it be against the rules to mount a large colored light (not pink or green) to manually aim a turret. this would avoid camera problems, as well as help the driver aim the turret. that way you can see if you are illuminating a trailer or not?

Ask that in the Q&A for a clarification.

You could legally use the Robot Signal Light, as long as you satisfied all its rules. Otherwise I would say no. Generally, lights fall into the nonfunctional decorations category and are thus allowed to draw power from the robot, but this is a functional light, so it doesn't fit (read R19 and R49). Also, if it is a very bright light it might violate R02-A by shining in people's eyes (lasers are specifically banned in R02-D).

billbo911 03-03-2009 12:52

Re: Turrets and cameras
 
Quote:

Originally Posted by jacobhurwitz (Post 830297)
... Greg McKaskle from NI (many of you might recognize his name from Chief Delphi) was there to help, but I only saw two teams (including us) actually out on the field. The first thing Greg said to do was to log in to the camera (192.168.0.90) and lower the brightness level to 0. After that, we tweaked our HSL values a little bit, showed them to Greg, and he approved.

Out of curiosity, is there a difference between setting the brightness level to 0 in the camera versus using the "Set Bright" VI?
Currently our plan is to use the calibration time on the field to set both the white balance and the brightness level. Will we also need to set the brightness level in the camera to 0? If we skip the camera setting and only use the VI, will it have any ill effects?

Bharat Nain 03-03-2009 13:35

Re: Turrets and cameras
 
When you set the brightness level to 0 by logging into the camera, does it stay like that even if you are using LabVIEW/C/C++ code after restarting the camera and not doing anything with the brightness in your code? Is there any way to control the brightness in C/C++ code?

Lil' Lavery 03-03-2009 13:45

Re: Turrets and cameras
 
1712 used the given calibration time during lunch on Thursday to work with the NI representative at DC to calibrate our camera. I don't know the specifics on the settings (I was working on other items at the time), but we managed to get our tracking code working successfully.
We would sometimes lose targets at a longer range, but that may have been due to their movement and the presence of multiple targets (when we lost a target, we would almost always lock onto another almost immediately) in the camera's vision.
We did notice some difference between when the camera was aimed at the black curtain background and when it was aimed anywhere else. It wasn't a massive difference, but it may be enough at some venues to impact your results.

Rich Kressly 03-03-2009 14:02

Re: Turrets and cameras
 
Quote:

Originally Posted by Lil' Lavery (Post 831082)
1712 used the given calibration time during lunch on Thursday to work with the NI representative at DC to calibrate our camera. I don't know the specifics on the settings (I was working on other items at the time), but we managed to get our tracking code working successfully.
We would sometimes lose targets at a longer range, but that may have been due to their movement and the presence of multiple targets (when we lost a target, we would almost always lock onto another almost immediately) in the camera's vision.
We did notice some difference between when the camera was aimed at the black curtain background and when it was aimed anywhere else. It wasn't a massive difference, but it may be enough at some venues to impact your results.


Yep, one of our software engineers, Paul Gehman, worked with Greg from NI at lunch to get the 1712 settings right. I'll see if I can get Paul to post our values here - maybe I can get some screen shots from the programming laptop to post - but I'd suggest on-field calibration on Thursday if/when it might be allowed. Bob Bellini, our other engineer, helped refine our targeting code on Saturday, but since the lighting on the field was so bright and different, we really couldn't test much in the pits and never went to the practice field because of the difference. We just tried to learn through Thursday, and things really worked out well for our autonomous routine using only the camera and target size, with no other sensors.

Rich Kressly 03-03-2009 14:24

Re: Turrets and cameras
 
1 Attachment(s)
Attached are the cam values we used from calibrating with Greg from NI at lunch on Thurs. to score in autonomous in DC. I'll see if I can get Paul Gehman to comment here since he is the one that worked with Greg.

It's also important to note that slightly different values might be optimal for the red and blue alliances: we have always had a harder time identifying green than pink, and with the "other" color on top and the position of our camera, the same values may not work as well in both cases. Nonetheless, the lunchtime calibration on Thursday is what allowed us to consistently track and score on targets in auto.

Kristian Calhoun 03-03-2009 15:11

Re: Turrets and cameras
 
Quote:

Originally Posted by Rich Kressly (Post 831110)
Attached are the cam values we used from calibrating with Greg from NI at lunch on Thurs. to score in autonomous in DC. I'll see if I can get Paul Gehman to comment here since he is the one that worked with Greg.

Here are the color values that were posted at NJ. Again, many thanks to Joshua from NI for being so helpful and insightful this past weekend.

Green
H: min: 55 max:125
S: min: 35 max: 255
L: min: 92 max: 255

Pink
H: min: 220 max:225
S: min: 30 max: 255
L: min: 80 max: 255
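For teams applying these numbers in their own code, the thresholds above amount to a simple per-channel range test. Here is a minimal Python sketch (the ranges are the NJ values above; the function name is made up, and the hue scale is 0-255 as the pink H max of 225 implies, not 0-360):

```python
# Per-channel HSL range test using the NJ threshold values above.
# Note: hue is on a 0-255 scale here, not the 0-360 degrees scale.
GREEN = {"H": (55, 125), "S": (35, 255), "L": (92, 255)}
PINK = {"H": (220, 225), "S": (30, 255), "L": (80, 255)}

def in_range(pixel, thresholds):
    """Return True if an (h, s, l) pixel falls inside all three ranges."""
    h, s, l = pixel
    return (thresholds["H"][0] <= h <= thresholds["H"][1]
            and thresholds["S"][0] <= s <= thresholds["S"][1]
            and thresholds["L"][0] <= l <= thresholds["L"][1])
```

A pixel passes the mask only if all three channels are in range, which is why lowering just the saturation minimum (as discussed later in the thread) widens the mask without touching hue selectivity.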

Greg McKaskle 03-03-2009 20:20

Re: Turrets and cameras
 
As many have stated, the lighting at the events is indeed different from what is in your classroom or shop. I'll give a general description of how we arrived at values, I'll attempt to answer some of the camera-related questions, and I'll give a few hints on how to tune your camera. I won't be giving absolute values, as I fully expect that different events will have variations in lighting brightness, arrangement, color, and backdrop. Because of this, knowing how to find the values is far more important than knowing the values of a single field. I highly encourage you to take advantage of the Thursday field time to tune your camera. Be prepared for a BIG post. If you like, you can then share the values with other teams.

First off, the brightness and other camera settings are persisted on the camera. The camera settings are modified via the HTTP CGI protocol and can be set interactively using the camera web pages, the C-based camera demo application, or the LV wrapper VIs. The Find Two Color LV code included a brightness parameter on the panel specifically for tuning. Other camera settings, such as exposure, are set and behave similarly.
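Since the settings go over HTTP CGI, they can be scripted as well as set from the web pages. A hypothetical Python sketch of building such a request (the `param.cgi` path and `Brightness` parameter name follow Axis's general CGI conventions but are assumptions here; check the Axis 206 documentation for the exact parameter group):

```python
import urllib.request

def set_brightness_url(camera_ip, brightness):
    """Build an HTTP CGI URL to update the camera's brightness setting.

    The /axis-cgi/admin/param.cgi path and the parameter name below are
    assumptions modeled on Axis's CGI interface, not verified against
    the 206 -- consult your camera's docs before relying on them.
    """
    return ("http://%s/axis-cgi/admin/param.cgi"
            "?action=update&ImageSource.I0.Sensor.Brightness=%d"
            % (camera_ip, brightness))

# To actually send it (requires the camera on the network, e.g. at
# the default 192.168.0.90 mentioned earlier in the thread):
# urllib.request.urlopen(set_brightness_url("192.168.0.90", 0))
```

Because the setting is persisted on the camera, a change made this way (or from the web page) survives a restart, which answers the earlier question about whether the value "stays like that."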

Lighting on the practice field seems to work reasonably well with default settings. It is perhaps a bit brighter than the average classroom, but the tests I ran in DC worked. The lighting on the competition field is quite different. There are two trusses of lights running along the long sides of the field approximately 12 ft outside the edges of the field and suspended ~30 ft in the air. Those trusses are packed with lights. They are adjusted for even coverage and to produce few shadows.

With the default settings, the test went reasonably well on three sides of the field, but glare was present and caused some white streaks on the targets. The fourth side of the field, the one with the black curtain, unfortunately will not work with the default settings. The reason is that the camera samples the pixels in a frame to determine how to adjust future exposures so the sensor captures enough light. This typically works quite well, but since the camera has no idea what portion of the scene you are most interested in, it uses a statistical sampling; when the background of the image returns no light, the camera lengthens the exposure, resulting in an image with lots of glare. This same effect may be seen if the house lights in the stands are set very low. If the overall light from the background is too low, the brightly lit targets will be blown out by the bright lights.

The solution is to shorten the exposure. With the Axis 206 camera, there are two ways to control exposure: leave auto exposure on and decrease brightness, or find a way to calibrate the auto exposure and set it to hold. Personally, I think using auto exposure is more repeatable and more robust. Of the fields seen thus far, most decided to set the brightness to 0; one field determined that 10 was a better value. The brightness setting works like the exposure compensation setting on most consumer cameras: it modifies the determined exposure to let a human adjust for backlighting or dark backgrounds. For glare, lower it; for backlit targets, raise it.

The other aspect of the competition field that is difficult to deal with is the light position. In some locations, especially as the camera gets near the target, it will be aimed upwards. In some locations on the field, the camera will then be staring into the lights. This further complicates matters by partially blinding the camera. Finally, in the worst locations on the field, the angles from a low-mounted camera, a target, and the light will be symmetric. When the angle from the target's plane to the light and the angle from the plane to the camera sensor are roughly equal, the glare from the lights will go way up, practically turning the target white. This doesn't occur in that many places on the field, but it would be wise to allow for the occasional dropped frame in your tracking code, your lead estimator, etc.
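One simple way to allow for those occasional dropped frames (a sketch of my own, not from the thread) is to coast on the last known target position for a few frames before declaring the target lost, so a single glare-whitened frame doesn't reset the turret:

```python
class TargetHold:
    """Coast on the last good detection for up to max_misses frames.

    This is a hypothetical illustration of dropped-frame tolerance,
    not code from any team in the thread.
    """

    def __init__(self, max_misses=5):
        self.max_misses = max_misses
        self.misses = 0
        self.last = None

    def update(self, detection):
        """detection is an (x, y) target position, or None for a
        dropped/washed-out frame. Returns the position to track."""
        if detection is not None:
            self.last = detection
            self.misses = 0
        else:
            self.misses += 1
            if self.misses > self.max_misses:
                self.last = None  # too many misses: target truly lost
        return self.last
```

A lead estimator can sit on top of this in the same way, predicting forward from the held position instead of freezing on it.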

Because the camera will sometimes be staring at the lights, you will find that the color saturation of the target drops compared to the shop. Because of this, you may find it useful to lower the minimum values of saturation you are thresholding against. How much? As you can see, different teams determined different values. If taken too low, gray or pastel colors will start to be included in the mask. Not low enough and some areas of the field will flicker or stripe. To determine the value to use, it would be good to interactively move the camera and target around on the field viewing an HSL display. To do this, the LV example was modified to display more information. This will be made available for FTAs and teams to use if they wish. An announcement and instructions will be released soon.

A sampling of test images and the threshold results can be found at http://flickr.com/photos/35780791@N07/

From the stands, I saw some cameras work very well; many others were not even turned on. As the weeks go on, I expect more successes. Feel free to post questions or results.

Greg McKaskle

Doug Leppard 03-03-2009 20:28

Re: Turrets and cameras
 
Quote:

Originally Posted by Greg McKaskle (Post 831350)
Feel free to post questions or results.

Greg McKaskle

Thank you Greg.

GearsOfFury 03-03-2009 21:38

Re: Turrets and cameras
 
Greg, awesome post, thanks! Only question: what was the 'correction' for the images @ flickr? Just a drop in brightness (probably to 10 or 0) and a drop in saturation threshold? Or something else?

We pretty much had images similar to your "Default West" (lots of glow / washed out) for the area we were trying to calibrate in, and couldn't find a setting to get an image as reasonable as your "Corrected West 3/4/5" (which seem excellent!). We're totally willing and prepared to run an on-field calibration ourselves but it would help to know which values you changed to get the effects in the images.

Alexa Stott 03-03-2009 22:02

Re: Turrets and cameras
 
Quote:

Originally Posted by spc295 (Post 830971)
would it be against the rules to mount a large colored light (not pink or green) to manually aim a turret. this would avoid camera problems, as well as help the driver aim the turret. that way you can see if you are illuminating a trailer or not?

If you can find a way of doing this without impeding the vision of other drivers, I'd say go ahead. We fooled around with this idea a bit in '07. We thought about illuminating the spider leg with another color and tracking that, but were unable to find an appropriate light source for it.

Greg McKaskle 04-03-2009 07:28

Re: Turrets and cameras
 
The corrected images were taken after dropping brightness and lowering saturation.

Greg McKaskle

Tom Line 04-03-2009 07:59

Re: Turrets and cameras
 
We have absolutely horrible lighting in our workshop - fluorescent tubes every three or four feet. Our camera always points upwards and is mounted in a fixed orientation.

The best procedure we have come up with for camera tuning in labview goes like this:

1. Enable HSL debugging in the Find Two colors VI.
2. Enable HSL coloring in the Vision VI.
3. Hover your mouse over different parts of the green target. Try to hover it over a dark patch of green and a bright patch of green. Watch your initial HSL values in the HSL debugging window. Note the maximum and minimum for each value.
4. Return to the Vision window and input them on the front panel VI.
5. Turn off the "find pink first" checkbox and look at the mask. More than likely, you'll see a bunch of noise along with your two colors, and one of the colors (usually the green) will be flickering.
6. Make sure you're happy with the framerate before going any further. We're happy if we're seeing 15-20 FPS. We go into the Vision VI and set our framerate maximum to below the lowest number we see (with our tracking code, we very much want consistent values).
7. Return to the vision VI. Methodically adjust your green values up and down by 5 at a time. Return periodically to the HSL debugging window to verify none of your values there have changed.
8. Once your green locks in, go back to pink and adjust the values a bit. At this point you should have a reasonably clean mask, with little noise, and two solid shapes.
9. If you are still having problems, or if you aren't getting solid shapes, repeat the process. If you still don't have any success, we've several times resorted to modifying the area and particle settings to make them somewhat more forgiving.
10. Now you need to turn the robot so the lighting changes, and repeat. If you're getting a lot of washout, or the color values you're getting between light and dark patches on the target are significantly different, you may need to raise or lower brightness and repeat. It usually takes us 10 minutes or so to get nice solid values that work over most of the area (we've done this in three or four significantly different lighting conditions just to practice: bright sunlight, fluorescent tubes, and incandescent bulbs).

This has worked pretty well for us in several different very poor environments. I hope it helps you guys.
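The note-the-min-and-max part of the procedure (steps 3-4) can be sketched as a small helper that folds sampled pixels into per-channel thresholds. This is a hypothetical Python illustration, not the actual LabVIEW VI; the margin mirrors the adjust-by-5 refinement in step 7:

```python
def hsl_bounds(samples, margin=5):
    """Fold (h, s, l) samples -- taken while hovering over light and
    dark patches of the target -- into per-channel (min, max) bounds,
    widened by a margin and clamped to the 0-255 range."""
    lows = [min(channel) for channel in zip(*samples)]
    highs = [max(channel) for channel in zip(*samples)]
    clamp = lambda v: max(0, min(255, v))
    return {name: (clamp(lo - margin), clamp(hi + margin))
            for name, lo, hi in zip("HSL", lows, highs)}
```

Starting from bounds like these and then nudging each channel while watching the mask is essentially what the steps above do by hand.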

Team 1718

JesseK 04-03-2009 08:51

Re: Turrets and cameras
 
Greg, this is great information. We'll definitely spend a lot of time in Florida figuring this stuff out, as our shooter is the last thing we have to fine tune.

Our setup: we mounted our camera up as high as possible on our bot; the lens is roughly 55" off the floor. We do not tilt the gimbal; instead it pans to three set locations left, center, and right. We'll add a glare shield to the upper portion of the protective lexan cover we made, and bring back some settings & results from Florida next week. Too bad we can't get the camera feed back to the laptop during the matches -- I'd love to FRAPS it so other teams can see what we tried that may or may not work so they can further perfect the whole system.

JesseK 04-03-2009 09:03

Re: Turrets and cameras
 
Quote:

Originally Posted by Alexa Stott (Post 831422)
If you can find a way of doing this without impeding the vision of other drivers, I'd say go ahead. We fooled around with this idea a bit in 07. We thought about illuminatig the spider leg with another color and tracking that, but were unable to find a light source appropriate for it.

You could have tried IR LEDs, though be warned that you'd get a lot of false positives if other teams did it as well. It'd be sweet, because in 2007 a perfect circular reflection combined with open-source COTS pattern-recognition software would have eliminated the need for a color camera altogether. Tell the software to line up with a reflection, then get closer until the circle had the proper diameter. If only the IFI controller could have handled all of that processing, we might have tried it.

In other years, though, I don't know what benefit there would have been in doing this.
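The "get closer until the circle had the proper diameter" idea is just the pinhole-camera relationship: apparent size shrinks in proportion to distance. A hedged sketch (the focal length and diameters in the example are made-up values, not anything from 2007):

```python
def distance_from_diameter(focal_px, real_diameter, pixel_diameter):
    """Pinhole-camera range estimate: distance = f * D_real / d_pixels.

    focal_px is the lens focal length expressed in pixels; whatever
    units real_diameter is in carry through to the result. The values
    used in any example call are illustrative assumptions only.
    """
    return focal_px * real_diameter / pixel_diameter
```

So a tracker could keep steering toward the reflection's center and stop driving forward once the measured pixel diameter implied it was at shooting range.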

