View Full Version : LifeCam USBCamera changing settings from Java
robert1356
27-01-2016, 08:58
We are trying to use the LifeCam and the USBCamera class for our vision processing. One thing we need is the ability to control exposure and brightness so we get a good image going into our image processing pipeline.
So far I have been able to get an image on the SmartDashboard, provide a way to enter Exposure and Brightness values on the SmartDashboard and send them to the USBCamera class. The problem is, when I change the values, the image doesn't change brightness or exposure. I know the image is updating because I see the motion in new frames.
Here is what we're currently doing:
Startup:
USBCamera targetCam = new USBCamera("cam0"); // create connection to camera
NIVision.Image frame = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_RGB, 0); // create frame buffer
targetCam.openCamera(); // open the camera connection
targetCam.startCapture(); // start the frame capturing process (internal to USBCamera)
Loop:
targetCam.getImage(frame); // retrieve a frame from the USBCamera class
CameraServer.getInstance().setImage(frame); // push that frame to the SmartDashboard using the CamServer class
The above works as expected, and is pretty cool to boot.
Now adding the brightness/exposure control to the loop
Loop:
int exposure = Preferences.getInstance().getInt("camExposure", 50);
int brightness = Preferences.getInstance().getInt("camBrightness", 50);
if (brightness >= 0 && brightness <= 100)
    targetCam.setBrightness(brightness);
if (exposure >= 0 && exposure <= 100)
    targetCam.setExposureManual(exposure);
targetCam.updateSettings();
int updatedBrightness = targetCam.getBrightness();
SmartDashboard.putNumber("Current Brightness", updatedBrightness);
targetCam.getImage(frame); // retrieve a frame from the USBCamera class
CameraServer.getInstance().setImage(frame); // push that frame to the SmartDashboard using the CamServer class
The updated code, with brightness and exposure added, still gives me fresh frames from the camera. I can also change the brightness and exposure from the SmartDashboard, and getBrightness() will return the NEW value and display it on the SmartDashboard, so I know the USBCamera class THINKS the brightness is changing, BUT the actual brightness and exposure of the video do NOT change.
Does anyone have any experience getting this to work? Any hints, tips or suggestions? We may have to punt on the USBCamera and switch to the Axis cam, which would be a shame since the LifeCam is so compact and "simple".
ahartnet
27-01-2016, 15:30
since the LifeCam is so compact and "simple"
and cheap. Haven't found an Axis camera for <$170. Got a LifeCam for <$25.
Assuming I don't have any fires to put out at our team's meeting tonight, this was actually going to be my priority to troubleshoot. It's my first year back on the programming subteam in a while, and my first year ever using Java, but hopefully I'll have something to report back tomorrow to help you out.
ahartnet
27-01-2016, 22:29
Robert,
I was actually able to get almost exactly what you have to work. The only difference is I made sure to update the camera before opening the camera and starting the capture.
public class Robot extends IterativeRobot {
    Preferences prefs;
    CameraServer server;
    USBCamera targetCam;
    public static int g_exp;

    /**
     * This function is run when the robot is first started up and should be
     * used for any initialization code.
     */
    public void robotInit() {
        prefs = Preferences.getInstance();
        g_exp = prefs.getInt("exp", 1);
        SmartDashboard.putNumber("Exp", g_exp);
        targetCam = new USBCamera("cam0");
        targetCam.setBrightness(5);
        // We actually still had trouble setting exposure - it didn't like values 0-100.
        // We found that setting the brightness did enough, though.
        // targetCam.setExposureManual(g_exp);
        targetCam.updateSettings();
        SmartDashboard.putNumber("Brightness", targetCam.getBrightness());
        targetCam.openCamera();
        server = CameraServer.getInstance();
        server.setQuality(50);
        server.startAutomaticCapture(targetCam);
    }
}
We then don't do anything in teleopPeriodic. In order to change the brightness (or exposure) parameter, you can change the value on the SmartDashboard, click the tab 2nd from the top on the left side of the Driver Station, and click Restart Robot Code (or whatever it's called). You don't need to do a full reboot.
Now if you find any way to turn off the autofocus...that'd be handy.
1024Programming
28-01-2016, 08:38
THANK YOU THANK YOU THANK YOU!!! So frustrated with GRIP, and this code might actually help us!
robert1356
28-01-2016, 08:59
I spent more time on it last night and discovered a few things. I looked at the source code for the USBCamera class - it's just a wrapper for the NIVision.IMAQ functions - that's not a bad thing, but it is good to know.
Info:
The USBCamera constructors call openCamera() internally. You don't need to call openCamera() explicitly unless you have explicitly called closeCamera().
updateSettings() is automatically called by getImage() and getImageData() if one of the settings (exposure, brightness, FPS, etc.) has previously been changed. There is no need to call updateSettings() explicitly unless you are not using getImage() or getImageData().
The setBrightness() and setExposureManual() functions do nothing but set a USBCamera member variable. The settings are not sent to the camera/driver until updateSettings() is called (which will be the next time you call getImage() or getImageData()).
getBrightness() does nothing but return that member variable.
It's good to know what the functions are actually doing ;)
Bottom line - I removed the openCamera() call and removed my calls to updateSettings() and relied on the USBCamera class to manage it. This works better, but not perfectly. I can adjust the brightness and see things change, but exposure seems to have no effect.
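For anyone following along, here's the simplified pattern as a minimal sketch (same API as my code above, just letting the class manage the settings):
USBCamera targetCam = new USBCamera("cam0"); // constructor already calls openCamera()
NIVision.Image frame = NIVision.imaqCreateImage(NIVision.ImageType.IMAGE_RGB, 0);
targetCam.startCapture();
Then in the loop:
targetCam.setBrightness(brightness); // only stores the value in a member variable
targetCam.setExposureManual(exposure); // ditto
targetCam.getImage(frame); // getImage() calls updateSettings() internally if a setting changed
CameraServer.getInstance().setImage(frame);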
I took it a step further. I created my own USBCamera class and added a getExposure() and modified the getBrightness() to get the values from the camera/driver using the NIVision.IMAQ calls. This proved to me that the exposure and brightness ARE getting changed, AND I validated that I am NOT in AutoExposure mode. Unfortunately, I still cannot get the exposure changes to show any appreciable effect on the LifeCam image. Brightness goes from a decent image with Brightness = 0, to a white image with Brightness = 100. Exposure changes seem to have no effect.
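If anyone wants to replicate the custom class, my getExposure() is roughly the following - fair warning, the attribute strings and NIVision binding names here are from my reading of the USBCamera source, so verify them against USBCamera.java before trusting this:
// m_id is the camera handle the USBCamera source gets from NIVision.IMAQdxOpenCamera()
private static final String ATTR_EX_VALUE = "CameraAttributes::Exposure::Value";
private static final String ATTR_BR_VALUE = "CameraAttributes::Brightness::Value";
public long getExposure() {
    // ask the driver for the real value instead of echoing the member variable
    return NIVision.IMAQdxGetAttributeI64(m_id, ATTR_EX_VALUE);
}
public long getActualBrightness() {
    return NIVision.IMAQdxGetAttributeI64(m_id, ATTR_BR_VALUE);
}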
Has ANYONE been able to get images out of the life cam, using ANY method, that comes anywhere close to being as dark as the sample images for the FRC vision processing? I'm beginning to think that the LifeCam is simply not adjustable to that low of an exposure.
robert1356
28-01-2016, 09:04
Robert,
...
SmartDashboard.putNumber("Brightness", targetCam.getBrightness());
targetCam.openCamera();
...
See my followup. The openCamera() call is superfluous; the camera is opened when you create the object. Your use of setBrightness(), setExposureManual() and updateSettings() is correct.
Were you able to get a good DARK image, or just an image that is good for a driver to look at? I want a DARK image that a driver would find unusable, but the image processing will be very happy about (because the retro-reflectors will show up nicely).
robert1356
28-01-2016, 09:38
I just confirmed it is possible - I'm running on a Mac, and I just installed "Webcam Settings Panel" from the App Store. It allowed me to adjust the exposure down to a BLACK image. It also seems to confirm that it is not doing any of this in software, but is actually querying the camera for its capabilities. The setting options are different between the LifeCam, my laptop iSight and my monitor's iSight; for example, the LifeCam has a backlight compensation slider while the iSights show it as simply an on/off option.
I also confirmed that the settings are saved in the camera (or at least in the driver) - I quit the settings panel (so I knew it wasn't restoring the values), unplugged the cam, plugged it back in, and the image was the same as when I had unplugged it. This is all good news. Now, if I can just figure out how to use the USBCamera, NIVision.IMAQ or something on the roboRIO to make these same adjustments, I'll be happy.
robert1356
28-01-2016, 09:41
CORRECTION - I THOUGHT I had closed "Webcam Settings" - I hadn't. It was storing and restoring the settings. I made sure I quit it and tested again. When I replugged the camera, it reverted to auto-exposure.
ahartnet
28-01-2016, 12:22
Were you able to get a good DARK image, or just an image that is good for a driver to look at? I want a DARK image that a driver would find unusable, but the image processing will be very happy about (because the retro-reflectors will show up nicely).
It was dark enough that I don't anticipate GRIP having a problem with it - based on configuring the settings similarly with the USB camera plugged into the laptop.
Using the Microsoft software linked to in the other thread, it was very obvious that the camera can be set up to change a lot more than we have access to with the class. I could turn the autofocus off, mess with the white balance, and all sorts of things that were useful in making the retroreflective tape pop even more. Unfortunately, I think we'll be limited to creating a good vision processing filter with just the brightness modified, though I'm going to keep messing with the exposure. But I am seeing the same thing you are - changing the exposure doesn't seem to have any effect; in fact, at one point the camera stopped updating and I had no idea until I waved my hand in front of it to test something else.
Good to know it's just a wrapper...might be more investigating to do with the IMAQ functions.
robert1356
28-01-2016, 14:04
Using the Microsoft software linked to in the other thread, it was very obvious that the camera can be set up to change a lot more than we have access to with the class. I could turn the autofocus off, mess with the white balance, and all sorts of things that were useful in making the retroreflective tape pop even more.
Was this running on the roboRIO or on your desktop PC? What Microsoft software?
ahartnet
28-01-2016, 14:32
https://www.microsoft.com/hardware/en-us/d/lifecam-hd-3000 was the software - with the camera plugged into the laptop. The settings did not appear to get saved to the camera, much as you saw from the webcam settings panel.
Thanks for posting this code. One question that I have is how you can use GRIP when you have this code running on the robot. We have included similar code to turn off auto white balance and auto exposure in Java using the USBCamera class. However, we have found that GRIP will not run if we have opened the USB camera in our Java robot code. So we tried closing the camera when we were done setting it up, but it seems like it does not retain the settings in this case. Is there some way to change the settings on the USB camera and then run GRIP without losing your changes?
Justin Buist
06-02-2016, 23:36
Brightness goes from a decent image with Brightness = 0, to a white image with Brightness = 100. Exposure changes seem to have no effect.
Has ANYONE been able to get images out of the life cam, using ANY method, that comes anywhere close to being as dark as the sample images for the FRC vision processing? I'm beginning to think that the LifeCam is simply not adjustable to that low of an exposure.
The Javadoc's claim that exposure and brightness go from 0-100 is completely wrong. I forget what the actual range is, but this issue frustrated a room of students and mentors for most of a Saturday until we played with the MS software for the LifeCam, which actually presents sane values for the 3 different settings. Brightness is something like -15 to 4 (don't trust that number, I'm going from rough memory), but not 0-100 at all. None of the settings documented to be 0-100 actually are.
1024Programming
08-02-2016, 09:23
The Javadoc's claim that exposure and brightness go from 0-100 is completely wrong. ...
I think it was mentioned in another post that the values actually go from 0 to 20,000. If that is the case, then the setter's 0-100 input only ever reaches 100 out of 20,000 - a tiny fraction of the range.
robert1356
08-02-2016, 11:11
Using the NIVision IMAQ commands, the min/max values returned are 5 and 20,000 respectively. I duplicated the USBCamera class (too bad they made all the variables private instead of protected) and replaced setExposureManual() with code that lets me set the value explicitly. It turns out that anything above about 40 or 50 will give you quite a bright image. I'm actually using 10, I think, for the exposure and 10 for the brightness.
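For anyone duplicating the class, the relevant IMAQdx calls look roughly like this - same caveat as before, I'm writing the names from memory, so check them against the USBCamera source:
private static final String ATTR_EX_VALUE = "CameraAttributes::Exposure::Value";
// query the real range the driver reports (this is where the 5 and 20,000 come from)
long exMin = NIVision.IMAQdxGetAttributeMinimumI64(m_id, ATTR_EX_VALUE);
long exMax = NIVision.IMAQdxGetAttributeMaximumI64(m_id, ATTR_EX_VALUE);
// set the raw value directly instead of going through setExposureManual()
NIVision.IMAQdxSetAttributeI64(m_id, ATTR_EX_VALUE, 10); // ~10 gives us a nice dark image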
As for losing the settings - as long as you don't power the camera off, you will not lose them. It's tricky, but you can use robot code to configure the camera, then disconnect, then run GRIP and it will have the settings you just set. There are some real caveats in all this, and I think I finally have all the cases worked out. I'm probably going to post the code when I get it working.
Caveat #1 - The robot code cannot use CameraServer - if it does, GRIP will not be able to publish a stream. Unfortunately, while you CAN disconnect from the USBCamera, you CANNOT kill the CameraServer stream without rebooting the robot or manually killing the robot code.
Caveat #2 - don't forget that during development, if you use your robot code to set the settings and then launch GRIP: if you reload robot code, GRIP is still running and your robot code will throw an exception trying to connect to the USBCamera - you have to either reboot your roboRIO or kill the GRIP process. I've written code to kill the GRIP process if the USBCamera open() method throws an exception.
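To be concrete, the configure-then-disconnect sequence is basically this (a sketch of what I'm running, not tested in isolation):
USBCamera cam = new USBCamera("cam0"); // constructor opens the camera
cam.setBrightness(10);
cam.setExposureManual(10);
cam.updateSettings(); // push the settings to the driver now, since we never call getImage()
cam.closeCamera(); // release the camera so GRIP can open it
// do NOT start a CameraServer stream here - see Caveat #1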
ahartnet
08-02-2016, 16:02
Just wanted to say thanks for reporting your findings on all of this. I've been busy taking care of some other mentor duties, and this is a lifesaver in helping some students with limited software experience keep troubleshooting what is going on.
Using the NIVision IMAQ commands, the min/max values returned are 5 and 20,000 respectively. ... It's tricky, but you can use robot code to configure the camera, then disconnect, then run GRIP and it will have the settings you just set. ...
When you say you can use the robot code to configure the camera, then disconnect, then run GRIP, what do you mean by "then disconnect"? Does that mean calling the closeCamera() function?
We are trying to use the LifeCam and the USBCamera class for our vision processing. One thing we need is the ability to control exposure and brightness so we get a good image going into our image processing pipeline. ...
I don't suppose anyone has done something similar in C++? I tried to translate it, but it crashes the roboRIO.
robert1356
26-02-2016, 22:57
I gave up trying to get image processing working on the roboRIO. I ended up having really bad problems - either GRIP or the robot code was crashing and generating a core dump (a HUGE file). The core dump would take up all the storage space on the roboRIO, which in turn would prevent the robot code from launching (you'd see exceptions that it couldn't write certain files because there was no room left on the device). I ended up moving everything to a Raspberry Pi. We now have a very stable system that works quite well and gives us all the control we need. I intend to post detailed documentation when I get a chance.
As for doing the above in C++, I did get that part of the process working in Java. C++ should be quite similar. Where is your code crashing?
I gave up trying to get image processing working on the roboRIO. ... Where is your code crashing?
Our code isn't crashing, but we get a whitewashed image. I can adjust camera settings in the MS utility, but they don't reliably stick. With the brightness turned down we have great results, but after a random number of power cycles or code reloads, the settings revert.
jwatson12
03-03-2016, 09:47
Hello, any more updates to this post? I was reading on another post that GRIP will not work with the LifeCam. Were you able to get past this without using the Axis cam, and to get targeting data published to the Network Tables?
robert1356
03-03-2016, 10:23
Hello, any more updates to this post? ...
On the roboRIO, it would work with the LifeCam. However, I moved to the RPi using the instructions from the GRIP wiki and a lot of my own discovery. I need to post everything, but haven't had the time. Bottom line - it works pretty well on the Pi.
I have not actually tested connecting to a USB cam directly - I just assumed the instructions were correct that GRIP on the Pi does not work with USB cams. I set up mjpg-streamer and configured GRIP to connect to port 5800 (I configured the streamer to stream on 5800, not 1180 like the instructions say, because the publish module in GRIP publishes on 1180 and creates a conflict).
You can use v4l2-ctl to adjust all of the camera settings - lots of control, and it does it on the fly, without having to stop the stream. This means I can look at the stream in a browser and adjust the settings exactly as needed.
I have some improvements to make - I would like to get the USB cam working directly, because I'd like to eliminate the lag of the streamer. That's a summary - if you have any specific questions, post them and I'll try to answer them.
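For example, here are the kinds of v4l2-ctl commands I mean. Control names vary per camera, so list them first - exposure_auto=1 meaning "manual" is typical for UVC cameras like the LifeCam, but don't trust my memory, check the list output on yours:
v4l2-ctl --device=/dev/video0 --list-ctrls # show the controls this camera exposes, with ranges
v4l2-ctl --device=/dev/video0 --set-ctrl=brightness=30
v4l2-ctl --device=/dev/video0 --set-ctrl=exposure_auto=1 # manual exposure mode on most UVC cams
v4l2-ctl --device=/dev/video0 --set-ctrl=exposure_absolute=10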
jwatson12
03-03-2016, 10:50
Thanks for the quick reply. We are using the LifeCam with the roboRIO and having trouble getting the Network Tables to update after publishing from GRIP. We're getting "HSL Threshold needs a 3-channel input". When we open the Outline Viewer, we see the report but no coordinates. We are publishing the contour report, trying both with the sample images and with the webcam as the source; both result in no Network Table updates. Any feedback is appreciated.
robert1356
03-03-2016, 11:15
Thanks for the quick reply. We are using the LifeCam with the roboRIO and having trouble getting the Network Tables to update after publishing from GRIP. ...
I don't understand the HSL threshold error - I still see that, but it works fine. I think it's a sequencing bug (starting the threshold process before they provide a valid input image). You should see a message later in the traces that indicates that errors have been cleared and everything is normal.
What do you mean "publishing the contour report WITH IMAGES"?
I did see that sometimes I would have to restart the Outline Viewer to get it to see the GRIP changes. Network Table weirdness.
Step back and do things one step at a time. Make a GRIP pipeline that just pulls in the Lifecam, publish the video and the framerate, make sure that works. Add pieces and publish, etc.
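Also, once the publish step is working, reading the contour report back in robot code is simple - something like this with the 2016 NetworkTables API. Note the table name depends on what you named the publish operation in GRIP; "myContoursReport" here is just an example:
import edu.wpi.first.wpilibj.networktables.NetworkTable;
NetworkTable grip = NetworkTable.getTable("GRIP/myContoursReport");
double[] empty = new double[0];
double[] centerX = grip.getNumberArray("centerX", empty); // one entry per contour found
double[] area = grip.getNumberArray("area", empty);
if (centerX.length > 0) {
    SmartDashboard.putNumber("Target X", centerX[0]); // steer toward the first contour
}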
BE CAREFUL ABOUT BANDWIDTH and CPU UTILIZATION.
320x240 was definitely slow on the roboRIO (all we did was an HSL threshold and contour), so we dropped to 160x120. One problem with GRIP is that you can't get the camera to generate a smaller frame size; you have to do a resize in GRIP, which is wasteful and bad. GRIP is pulling in a 720p or 640x480 image (not sure which) and then you resize in GRIP to something that you can operate on and transmit to the driver station. Resize is an expensive operation, especially if you do one of the interpolations.
You can check CPU Utilization by logging into the roboRIO (from a terminal / command line) and typing:
top
look at the java process with GRIP.
On the RPi I had the CPU pegged: GRIP was taking 2/3 of the processor and mjpg-streamer was taking 1/3. I dropped the frame rate and frame size to get the CPU utilization down to about 80%.
What we tried on the roboRIO ---
What I was originally doing was:
HSL threshold
publish frame rate right out of the source (that let me know that it was actually generating frames)
contour
contour filter
publish contour report
created a mask with original image and contours
published the mask
It worked fine when we would run manually and deploy robot code (basically during testing). But when I added the code to have the robot code launch GRIP, we started seeing memory issues, and fairly regularly it would crash the JVM, generating a core-dump file in the process. Core-dump files are HUGE and would eat up all the device storage space. When the robot code tried to relaunch, it would hang because it tries to create some files (preferences, for example) and couldn't, because the file system was out of space. This is a VERY bad situation - robot code won't run. When this happened, I had to manually delete the core-dump files to get robot code running again. This is why I switched to the Pi - too risky to have the robot code / JVM crash in the middle of a match.
robert1356
03-03-2016, 11:17
BTW, make sure you have the latest version of GRIP. They're up to 1.3.1 now; we are currently using 1.2.1, and I know 1.1.1 had problems.
jwatson12
03-03-2016, 12:10
Thanks again. When I said I updated the Rio with images, I meant the input source in GRIP was all of the Stronghold images. I tried to publish this way and then with the LifeCam as the source; both gave me issues. Once we set up the hue, etc., are we supposed to deploy GRIP with the images or the webcam as the source? As you can tell, this is our first time using GRIP.
Thanks for the quick reply. We are using the LifeCam with the roboRIO and having trouble getting the Network Tables to update after publishing from GRIP. ...
I found that problem as well. We received good info when connected to a laptop, but then nothing on the roboRIO. I found I had to hack into the project.grip file and update the settings there, and with trial & error I got it working. I think solidity was one of the keys... make it 0-100 (full scale).
I don't understand the HSL threshold error - I still see that, but it works fine. ... when I added the code to have the robot code launch GRIP, we started seeing memory issues, and fairly regularly it would crash the JVM, generating a core-dump file in the process. ...
This makes me nervous... I haven't filled the filesystem, but the executable crashes with out-of-memory errors. I didn't see a core file. Before GRIP, I see only 25M free, so I know it's close.
Does anyone know how long it takes to process a pipeline? Is there a way to find out the publish rate to NetworkTables?