Here's what we're using to grab images from the camera. We're using OpenCV 2.4 in Python, but it should translate directly to other languages.
Code:
import cv2

camera = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = camera.read()
    cv2.imshow('Raw', frame)
    # waitKey lets imshow actually render; press 'q' to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
camera.release()
As for setting the exposure, you can try calling this in OpenCV:
Code:
camera.set(cv2.cv.CV_CAP_PROP_EXPOSURE,-100)
No camera's exposure actually goes that low; the driver clamps the request to its supported range, so this just pins the exposure at its minimum.
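To make that clamping concrete, here's a toy sketch of what the driver effectively does with an out-of-range request (the -11..-1 range is made up for illustration; real ranges vary by camera):

```python
def clamp_exposure(requested, lo=-11, hi=-1):
    # Mimics a driver clamping an out-of-range exposure request
    # to the nearest supported value instead of raising an error
    return max(lo, min(hi, requested))

print(clamp_exposure(-100))  # pinned to the minimum: -11
```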
If I remember right, we had to use v4l to set the LifeCam's exposure. It wasn't too difficult, though: we just added it to the same startup script that launches our Python vision processing script. Here's the command:
Code:
v4l2-ctl -d /dev/video1 -c exposure_auto=1 -c exposure_absolute=5
You may need to change the video device number to 0.
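If you're not sure which /dev/videoN your camera landed on, or what exposure range the driver actually accepts, v4l2-ctl can tell you (the device path below is just an example):

```shell
# Map attached cameras to their /dev/videoN nodes
v4l2-ctl --list-devices
# Show the controls (with min/max/default values) a device supports;
# /dev/video1 is an example path, adjust to match your setup
v4l2-ctl -d /dev/video1 --list-ctrls
```

The --list-ctrls output includes exposure_absolute's valid range, which tells you how low you can actually go.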
In addition, we've written documentation covering much of the process of setting up our vision system. The "Configure the Raspberry Pi" blog post has details on setting up the camera, and "The Java MQTT Broker" post has a section on writing startup scripts on the roboRIO (and, I believe, on Linux systems in general... don't quote me on that):
http://5495thealuminati.wix.com/shs-...news-blog/ck6w
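For reference, a startup script that sets the exposure and then launches the vision code might look roughly like this. This is a sketch, not our actual script; the device number and script path are placeholders:

```shell
#!/bin/sh
# Example startup script (e.g. invoked from /etc/rc.local).
# Device path and vision script location are placeholders.
v4l2-ctl -d /dev/video1 -c exposure_auto=1 -c exposure_absolute=5
python /home/pi/vision.py &
exit 0
```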
Our code is also available on GitHub if you wanna poke around:
https://github.com/AluminatiFRC/Vision2016