Brightness on 2016 vision samples

All of the images we were given to test vision processing with look like they were taken in a room with the lights off. Was this done with some setting on the camera? I’ve personally never been to an event that had the lights off, except maybe on Einstein…

My team was also wondering this. The image recognition works really well with the low lighting, but in normal lighting the recognition seems to take longer and not work as well.

I would assume that they changed the camera settings to filter out certain levels of light. There is a screensteps tutorial here.

https://wpilib.screenstepslive.com/s/4485/m/24194/l/288984-camera-settings

I believe the images were taken with a low exposure setting.

Here is the Wikipedia article about exposure in photography and an article talking about exposure with examples of overexposed and underexposed images.

I would agree. There’s also a setting on the camera webpage to adjust the exposure level; the screensteps talk about that too.

There are many ways to set up the camera and get images, but I’ll list the elements using the WPILib terminology.

The retroreflective tape is such a strong reflector that you can think of it as an amplifier of the ring light. The material will return either 300 or 600 times as much light as bright white paint. I no longer remember the spec for the material being used. It is so bright that it can overwhelm the camera’s sensor and auto settings and you will actually get an image with a white target and an LED colored fringe. This is called sensor bloom. Fancier camera sensors will postpone the blooming, but sufficiently bright light is a challenge.

The good news is that you can use this to your advantage. If you lower the exposure time and/or lower the brightness of the image, the background will darken and the tape will turn from white to the LED color. This also helps with processing performance, by the way, because many of the color conversion and processing algorithms will then be dealing with lots of dark pixels which they can quickly dismiss.

So FIRST did this on purpose. You can too by setting the brightness and/or the exposure settings. You may also want to turn off the auto white balance and choose something that will stay predictable.
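As a rough illustration, this is what lowering the exposure and brightness and holding the white balance might look like from Java robot code. This is only a sketch assuming the 2016 WPILib USBCamera class and a camera enumerated as "cam0"; the numeric values are placeholder guesses to tune for your own camera and venue.

```java
import edu.wpi.first.wpilibj.vision.USBCamera;

public class CameraSetup {
    // Sketch only: assumes the 2016 WPILib Java USBCamera class and a camera
    // enumerated as "cam0". The numeric values are illustrative guesses; tune
    // exposure and brightness for your camera and venue.
    public static USBCamera darkCamera() {
        USBCamera camera = new USBCamera("cam0");
        camera.openCamera();
        camera.setExposureManual(10);        // low exposure darkens the background
        camera.setBrightness(30);            // lowering brightness darkens it further
        camera.setWhiteBalanceHoldCurrent(); // keep auto white balance from drifting
        camera.updateSettings();             // push the settings to the camera
        camera.startCapture();
        return camera;
    }
}
```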

Greg McKaskle

Any idea how you change those settings with the “LifeCam 3000” camera?

We were wondering this same thing. We looked into it, and from what we can tell you can change the settings using a program; however, I do not think you can save the settings. Has anyone solved this issue?

The FRC update for LabVIEW has VIs for setting these. They aren’t as nice as I’d like, so I’ve been working on reading the limits directly and developing my own.

The WPILib Camera VIs were originally written for the Axis VAPIX API. When USB cameras were added, they were added via the NI IMAQdx libraries. About half of the properties return a “not supported” error, but some others were extended to have a custom setting.

Lately, I’ve been using the Vision Acquisition express block configuration. It leads through five wizard screens with a test mode to view the changes as you experiment. It then generates code for IMAQdx. Once done, I will generally right-click and Open the Front Panel which will convert it to a VI. This gives a good starting point for more advanced configuration.

To darken, you can make adjustments to exposure, gain, and brightness.

Greg McKaskle

Is there an easier way? We are having trouble connecting the Vision Acquisition to the dashboard image loop.

The camera API to an Axis camera can be remote, since it is a conversation between the laptop and the camera’s web server. But the USB camera needs to have the call made on the robot, where the session was opened.

Greg McKaskle

I just posted a new thread with some specifics. We are using Java; I can set the values into the USBCamera object and save the values (using the Preferences class), but they don’t seem to be taking effect. No change in exposure or brightness.

The ScreenSteps only cover the web interface of the Axis camera. It would be nice if there were some examples or documentation on doing these basic settings from Java or C++.

Ron, check the thread I just started (LifeCam USBCamera changing settings from java) - it’s not about this connection, but it does contain sample code showing how to do it. Basically you create a USBCamera object, open the connection, start the capture, then you loop getting an image and passing that image to the CameraServer object. It’s actually pretty simple - now if I could just figure out the rest, which I thought was going to be simple but so far has not been.
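Roughly, that loop looks like the sketch below. It assumes the 2016 WPILib Java classes (USBCamera, CameraServer) and the NIVision image type; adjust the names for your WPILib version.

```java
import com.ni.vision.NIVision;
import com.ni.vision.NIVision.Image;
import edu.wpi.first.wpilibj.CameraServer;
import edu.wpi.first.wpilibj.vision.USBCamera;

public class CameraFeed implements Runnable {
    @Override
    public void run() {
        // Create the camera, open the connection, and start capturing.
        USBCamera camera = new USBCamera("cam0");
        camera.openCamera();
        camera.startCapture();

        // Reusable NIVision image buffer for each frame.
        Image frame = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_RGB, 0);

        // Grab frames and hand them to the CameraServer for the dashboard.
        while (!Thread.interrupted()) {
            camera.getImage(frame);
            CameraServer.getInstance().setImage(frame);
        }

        camera.stopCapture();
        camera.closeCamera();
    }
}
```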

I apologize for not doing my homework.

The question is: Can the EXPOSURE of the USB camera be held or set manually? If it can be held, will it hold through an on/off/on power cycle?

TNX

Jonboy, In my thread, http://www.chiefdelphi.com/forums/showthread.php?t=142633, that is exactly what I am trying to do - the code is there, the methods are there, but it doesn’t seem to be working.

According to the USBCamera class, there are setExposureManual(int exp) and setBrightness(int bright) methods, which is what I’m using. Whether they get saved across power cycles or not, I’m not sure, but I don’t care, because I’ve connected it to the SmartDashboard and, using the Preferences class, it will update to whatever values I save in the preferences file.
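In outline, that hookup looks something like the sketch below. The Preferences keys and default values are made up for illustration; the method names are the ones from the 2016 USBCamera class, so adjust for your WPILib version.

```java
import edu.wpi.first.wpilibj.Preferences;
import edu.wpi.first.wpilibj.vision.USBCamera;

public class CameraTuning {
    // Apply exposure/brightness from the Preferences file to an already-opened
    // camera. The "camExposure" and "camBrightness" keys and the defaults are
    // placeholders; pick your own keys and values.
    public static void applyPreferences(USBCamera camera) {
        Preferences prefs = Preferences.getInstance();
        int exposure = prefs.getInt("camExposure", 10);
        int brightness = prefs.getInt("camBrightness", 30);

        camera.setExposureManual(exposure);
        camera.setBrightness(brightness);
        camera.updateSettings(); // push the new values to the camera
    }
}
```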

In the past I’ve altered the camera settings on a LifeCam Studio through the LifeCam software. If I remember correctly, the changes persisted.

My team has been using NI Vision Assistant and we now know how to track objects on the screen; however, we were wondering how to turn the tracking into motor movement. We want our robot to find the target and then auto-adjust to score. If anyone has any code, websites, or tips for us, that would be great. Thank you.

The USBCamera class is able to set exposure manually. It worked on our Logitech camera.

Edit: sorry, didn’t see that this was already answered.

This is VERY cumbersome - it requires disconnecting the cam from the robot and connecting it to a PC, attempting to set the parameters based on the venue, saving them, and then plugging it back into the robot - it might be A way to do it, but it’s certainly not optimal :confused: