My team uses a Microsoft Lifecam 3000 and the FRCVision RasPi image for our vision, programming in Python with OpenCV. We were having lots of issues with glare from the ceiling lights, which we fixed with a VERY bright ring light. Because the Lifecam doesn’t play well with anything settings-wise, we prevented overexposure by just strapping sunglass lenses over it. That worked fine until circumstances compelled our vision coder to use our ancient Lenovo to work with the Lifecam. It runs Windows 7, and as soon as he plugged the camera in, Windows redid the white balance. Worse, the exposure stayed way too high even when the camera was plugged into a different computer, including the Pi. Is there a way to prevent this, or at least fix the camera we have?
The stream webpage (you can get to it from the “Open Stream” button on the Vision Settings tab) has all the available settings, and you can edit them temporarily on that page. When you’re happy with the settings, you can then use “copy source config from camera” on the Vision Settings tab to make them persistent.
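For reference, the "copy source config from camera" step stores the current control values in the camera entry of the Pi's config file (`/boot/frc.json`). A rough sketch of what that entry can look like is below — the property names and values here are illustrative, since each camera exposes its own set of controls (the stream page shows exactly which ones yours has):

```json
{
    "name": "Lifecam",
    "path": "/dev/video0",
    "pixel format": "mjpeg",
    "width": 320,
    "height": 240,
    "fps": 30,
    "properties": [
        { "name": "white_balance_temperature_auto", "value": false },
        { "name": "exposure_auto", "value": 1 },
        { "name": "exposure_absolute", "value": 10 }
    ]
}
```

Once the properties are in there, they get re-applied to the camera every time the vision program starts, which is what makes them survive a reboot or a swap to a different computer.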
We were able to change the exposure in the stream webpage, but the changes aren’t consistent. When we copy the setting config files over, the settings display doesn’t seem to include any exposure entries; it just skips from “Backlight Compensation” to “Pan Absolute”. Furthermore, when we run our own code, there are no setting sliders at all, and the config doesn’t load. Are the easy UI and the config files just features of the multi-camera example code? If so, how can we implement them ourselves?
The webpage with the interactive sliders is a feature of CameraServer/cscore and should appear if your code does something like CameraServer.startAutomaticCapture() or equivalent. The config-file loading is handled by the template multiCameraServer example programs provided with the FRCVision image, so you can look through those to see how it’s done.
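The actual multiCameraServer example is longer, but the config-loading part boils down to something like the sketch below. The `load_camera_configs` helper name is mine, not from the example; the cscore calls are shown only as comments since they need a camera attached to run (`startAutomaticCapture` and `setConfigJson` are real cscore methods, though check the example for the exact usage):

```python
import json

def load_camera_configs(config_path="/boot/frc.json"):
    """Parse the FRCVision config file and return the list of camera entries."""
    with open(config_path) as f:
        data = json.load(f)
    return data.get("cameras", [])

# On the Pi, each entry then gets handed to cscore, roughly:
#
#   for cfg in load_camera_configs():
#       camera = CameraServer.getInstance().startAutomaticCapture(
#           name=cfg["name"], path=cfg["path"])
#       camera.setConfigJson(json.dumps(cfg))
#
# startAutomaticCapture() is also what serves the stream page with the
# settings sliders, and setConfigJson() is what re-applies the persisted
# properties, so doing both in your own code gets you the same behavior.
```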
Awesome, thank you! I feel really dumb; moving the code over now.