Need help with OpenCV on the FRCVision Pi image under Python

Very simple program. One LifeCam USB camera attached:
import cv2
import numpy as np
import math
import time

cap = cv2.VideoCapture(0)

Running this Python 3 script results in a hang after the following output:

[ WARN:0] OpenCV | GStreamer warning: GStreamer: pipeline have not been created

I know the camera is connected and working, as the multicamera server shows a stream at http://:1181

I really don’t know what to do with this information; I was expecting the FRCVision Pi image to be more or less plug and play. We have our code working fine on a Linux laptop.

Any help appreciated.
Andy

Only one program can access the camera at a time. So if you have the multicamera server running you won’t be able to access the camera from a separate Python program at the same time. My recommendation would be to start with the example Python program and add your vision code to it. The CameraServer classes will let you get access to OpenCV images, instead of using cv2.VideoCapture.
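Roughly, the structure looks like this (a minimal sketch based on the robotpy-cscore API the Pi image's example uses; the "Gray" stream name and the 320x240 resolution are just placeholders):

from cscore import CameraServer
import cv2
import numpy as np

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()     # starts the camera and serves its stream
camera.setResolution(320, 240)

sink = cs.getVideo()                    # CvSink: hands you OpenCV images
stream = cs.putVideo("Gray", 320, 240)  # CvSource: publishes your processed images

img = np.zeros((240, 320, 3), dtype=np.uint8)
while True:
    frame_time, img = sink.grabFrame(img)  # returns (timestamp, image); 0 means error
    if frame_time == 0:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    stream.putFrame(gray)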

Thanks for the reply. Super helpful! I guess another option would be to add a second camera? Or would the multicamera server automatically grab it?

The multicamera server will only access the cameras you have configured in the Vision Settings tab.

Thanks for the advice, I am past this issue now after removing the camera from the multicamera server. Do you think running our OpenCV code from the camera server code would result in more latency, or would it be a wash?

It will be a wash. You’d also get greater robustness than cv2.VideoCapture (e.g. CameraServer deals gracefully with cameras getting unplugged or plugged back in; VideoCapture does not).

Just a quick note: the Pixy setup in the Linux image is missing the udev rules files, so only root can access the camera.

I’m trying to get OpenCV access to the camera from the multicamera server Python example, but I’m not sure how to bridge the two.

I tried this
source = numpy.zeros((320, 240, 3), dtype=numpy.uint8)  # ?
output = numpy.zeros((320, 240), dtype=numpy.uint8)  # ?
while (1):
    if (cvSink.grabFrame(source) == 0):
        continue

    cvtColor(source, output, COLOR_BGR2GRAY)

but get this error:
cvtColor(source, output, COLOR_BGR2GRAY)
TypeError: only size-1 arrays can be converted to Python scalars

I’m new to Python, so it’s hard for me to figure out how these things integrate. I can’t seem to find good docs on Python OpenCV. I don’t know what cvtColor wants for arguments or how to get what it wants from the camera server.

The OpenCV documentation includes documentation on the Python wrappers. The cvtColor docs say the Python argument order is (src, code, dst), but you’re passing (src, dst, code). You might also find the OpenCV-Python Tutorials and the RobotPy documentation useful.
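In other words, something like this (a sketch, assuming source comes from grabFrame as in your loop; note that cvtColor and COLOR_BGR2GRAY also need the cv2. prefix unless you’ve imported them directly):

output = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)     # in Python, the result is returned
# or, if you want to write into your preallocated array:
cv2.cvtColor(source, cv2.COLOR_BGR2GRAY, dst=output)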


Thanks, I ended up finding the camera server Python source code on the Pi and reading that. It gave me what I needed to know, and it looks like I have it working now. Thanks for the pointers also.