Python cv2 VideoCapture Timing Out / "Freezing"

Hello. I’m on FRC Team 2335, and while we’re hoping to get some vision processing going this year, we’re running into seemingly inexplicable behavior.

We’re running the Python example downloaded through the FRC Console on a Raspberry Pi 2 Model B running the provided image. The example works fine as-is. Upon further testing, we’ve found that the script works fine up until the capture = cv2.VideoCapture('/dev/video0') line. We’re trying to get video for separately written image-processing code for the reflective tape, but the script “freezes” on this step instead and doesn’t execute any code after it.

Instead of returning a VideoCapture object as we would expect from the documentation, the call never returns, and the console output appears to indicate that it’s stuck in some other thread (or something like that).

Additionally, we tried to grab frames directly from the camera server using sink = CameraServer.getInstance().getVideo(camera=cameras[0]) and sink.grabFrame(sourceFrame, timeout=3) (sourceFrame is a numpy array with shape (240, 352, 3) for 352x240 resolution). However, grabFrame() doesn’t seem to actually grab the frame; it times out.

Have any of you had success using a Pi for image processing, or do you have an idea why VideoCapture is not behaving as expected? Or is there any alternative method for capturing the camera image for cv2 processing on the Pi? We’d really appreciate it.

Using getVideo() is the recommended approach. If it’s timing out in grabFrame(), it most likely means it’s never getting frames from the camera, which might also explain the issue you’re having with cv2.VideoCapture. Note that cv2.VideoCapture will interfere with the rest of CameraServer, so you can’t use both. Are you still able to access the stream via the webpage even when this happens? What camera are you using?

I assume your code looks something like this:
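A minimal sketch of the standard cscore grab loop (this assumes robotpy-cscore with the cameras already started, as in the FRC example; the processing step is just a placeholder):

```python
import numpy as np
from cscore import CameraServer

cs = CameraServer.getInstance()
sink = cs.getVideo()  # sink for the camera started elsewhere

# Preallocate the frame buffer; cscore writes into it in place
img = np.zeros(shape=(240, 352, 3), dtype=np.uint8)

while True:
    # grabFrame returns (frameTime, frame); frameTime == 0 means an error/timeout
    frameTime, img = sink.grabFrame(img)
    if frameTime == 0:
        continue  # no frame this iteration; try again
    # ... process img with OpenCV here ...
```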

My code looks somewhat similar, with a few differences. Please note that the rest of the code above (like getting the cameras array) is exactly as found in the python example.

sourceFrame = np.zeros([240, 352, 3])
sink = CameraServer.getInstance().getVideo(camera=cameras[0])

# loop forever
while True:
    #ret, sourceFrame =
    #sourceFrame = visionDetect.getImage()
    timeout = sink.grabFrame(sourceFrame, timeout=3)
    #print(time, sourceFrame.shape)
    distanceToTape = getDistance(sourceFrame)
    table.putNumber('distance', distanceToTape)

The timeout of 3 seconds was purely for debugging. Also note that I can see the camera just fine on the driver station through CameraServer. The commented-out lines are just different things I tried to get it working; they’re not meant to be used.

First of all, you need to set dtype when calling numpy.zeros(). The default is double (float64), but it needs to be uint8 or similar. Are you setting the camera resolution to 352x240? Is the returned value from grabFrame() zero?
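The dtype mismatch is easy to see in plain NumPy (a quick illustration, nothing cscore-specific):

```python
import numpy as np

# np.zeros defaults to float64 ("double"), not the 8-bit format cscore expects
default_frame = np.zeros(shape=(240, 352, 3))
print(default_frame.dtype)   # float64

# An explicit dtype gives the uint8 buffer that grabFrame() can write into
frame = np.zeros(shape=(240, 352, 3), dtype=np.uint8)
print(frame.dtype)           # uint8
print(frame.nbytes)          # 253440 bytes: 240 * 352 * 3, one byte per channel
```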

Ah, okay, I will add a dtype. And yes, the camera resolution is 352x240. The returned value is zero, and when I print getError() it says that it timed out.

In case anyone has this problem in the future, I was able to get it fixed. On top of setting the type of the numpy array to uint8, I also discovered that grabFrame() returns a tuple containing the frame time and the frame, so trying to pass the variable timeout to cv2 returned an error. The code I have now that it’s fixed looks like this:

    sourceFrame = np.zeros(shape=(240, 352, 3), dtype=np.uint8)
    sink = CameraServer.getInstance().getVideo(camera=cameras[0])

    while True:
        timeout, sourceFrame = sink.grabFrame(sourceFrame)

        if timeout == 0:
            print(sink.getError()) # TODO: Make this a driver station error
            continue

        distanceToTape, centerOffset = processDataFromImage(sourceFrame)

        if distanceToTape is not None:
            table.putNumber('distance', distanceToTape)
        else:
            table.putNumber('distance', -1)
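The (time, frame) return convention can be exercised without a camera using a stand-in sink (FakeSink here is purely hypothetical, just to show the unpacking and the error path):

```python
import numpy as np

class FakeSink:
    """Stand-in for a cscore CvSink: grabFrame returns (frameTime, frame)."""
    def __init__(self, frames):
        self._frames = iter(frames)

    def grabFrame(self, image):
        frame = next(self._frames, None)
        if frame is None:
            return 0, image  # 0 signals a timeout; buffer unchanged
        return 1, frame      # nonzero frame time on success

    def getError(self):
        return "timed out getting frame"

sink = FakeSink([np.full((240, 352, 3), 7, dtype=np.uint8)])
buf = np.zeros((240, 352, 3), dtype=np.uint8)

frameTime, buf = sink.grabFrame(buf)   # success: frameTime != 0
assert frameTime != 0 and buf[0, 0, 0] == 7

frameTime, buf = sink.grabFrame(buf)   # exhausted: behaves like a timeout
assert frameTime == 0
print(sink.getError())  # timed out getting frame
```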