In case anyone is using OpenCV + ffmpeg for their image processing: I found a bug in OpenCV that causes ffmpeg to hang for up to 30 seconds when trying to connect to the AXIS M1011 camera. I’m surprised the SmartDashboard doesn’t run into this problem, since it appears to use ffmpeg under the hood for capturing with the camera extension widgets (but not for the default camera widget that comes with it).
If you’re interested in more details, check out the bug reports at OpenCV and FFMpeg.
Around the start of this year’s build season, I contacted Spectrum 3847 about using OpenCV on an onboard Raspberry Pi for vision processing. They replied describing a problem with ffmpeg and OpenCV causing significant lag. It is probably the same issue you found above.
Well, a few months later, here I am. I built a fully working Node.js-based OpenCV processing server. I’m using the Axis VAPIX Video Streaming API (which is just MJPG over HTTP), and it’s giving me 25 processed target images (640x480) per second. I wrote node-vapix to interface with the camera, and I’m using node-opencv for the OpenCV bindings.
This entire time, I never had to bring FFmpeg into the project at all. So I’m asking: why even use OpenCV’s VideoCapture instead of feeding it images directly?
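For anyone curious what that approach looks like without the Node.js tooling, here is a minimal sketch in Python (not the poster’s node-vapix/node-opencv code): pull the MJPG stream from the camera’s VAPIX endpoint over plain HTTP, carve each JPEG out of the stream, and hand it straight to OpenCV with imdecode, so VideoCapture and FFmpeg never enter the picture. The camera address, resolution parameter, and lack of authentication are assumptions for illustration.

```python
import cv2
import numpy as np
import requests

# Placeholder camera address; a real Axis camera may also require HTTP credentials.
STREAM_URL = "http://192.168.0.90/axis-cgi/mjpg/video.cgi?resolution=640x480"

def frames(url):
    """Yield frames decoded from an MJPG-over-HTTP stream."""
    buf = b""
    resp = requests.get(url, stream=True, timeout=5)
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=4096):
        buf += chunk
        start = buf.find(b"\xff\xd8")  # JPEG start-of-image marker
        end = buf.find(b"\xff\xd9")    # JPEG end-of-image marker
        if start != -1 and end != -1 and end > start:
            jpg, buf = buf[start:end + 2], buf[end + 2:]
            frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            if frame is not None:
                yield frame

for frame in frames(STREAM_URL):
    # Target processing would go here; this just confirms frames are arriving.
    print(frame.shape)
```

The idea is the same one the post describes: MJPG is just a sequence of JPEGs, so decoding them yourself sidesteps whatever FFmpeg does while opening the connection.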
Cool stuff. Of course, I’ve already put in the effort to fix the problem with FFmpeg (and it is fixed, after a small change to the source code), so I’ll stick with that for now, until it bites me again.
I am just starting down the road of learning how to use OpenCV with a webcam on Linux. I HAVE NO CLUE right now what I will be facing. So, can you possibly provide tips on what to look out for, gotchas to avoid, and things that worked and things that didn’t? I would hate to reinvent the wheel when the work has already been done.
So, what caused the OpenCV + ffmpeg issues and how did you resolve it?
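On the webcam question: a minimal sketch of the basic capture loop with OpenCV’s VideoCapture on Linux (which uses V4L2 under the hood), again in Python for consistency with the sketch above. The device index and resolution request are assumptions; one common gotcha is that some cameras silently ignore resolution requests.

```python
import cv2

# Device index 0 corresponds to /dev/video0; the index and the resolution
# request below are assumptions for this sketch.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

if not cap.isOpened():
    raise RuntimeError("Could not open /dev/video0")

while True:
    ok, frame = cap.read()   # ok is False if the frame could not be grabbed
    if not ok:
        break
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```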