Using the AXIS camera (M1011) with OpenCV
So, I am using CvCapture to get video. This code works with my internal webcam (cvCaptureFromCAM(0)). However, when I switch to reading from the net-cam (AXIS M1011), my app keeps crashing and OpenCV reports an error from "../../modules/highgui/src/cap_ffmpeg_impl.hpp(545)".
What am I doing wrong, and why is it triggering? The code is below. PHP Code:
Thanks! ;)
Re: Using the AXIS camera (M1011) with OpenCV
My guess is that you either didn't compile OpenCV with ffmpeg support, or perhaps it can't find the ffmpeg library. Check the OpenCV file to see what the code is doing at that point. You could post the entirety of the error message and context (like where it is being issued) too.
Re: Using the AXIS camera (M1011) with OpenCV
The funny thing is that the code compiles correctly. The code also works perfectly when I use cvCaptureFromCAM(0); instead of reading from a URL. If you change int capMode to 2 and modify the IP URLs in: PHP Code:
the AXIS camera on the bot will do cascade-based vision tracking. How do I fix this issue of FFMPEG not being correctly detected? I spent over two hours trying to modify code, find a better stream URL, etc., but couldn't figure it out, so now it is time to ask the experts ;) Also, what URL should I use to get the MJPEG stream from the AXIS (I think it is an M1011)? And should I use another drop-in command like capture = cvCaptureFromFile("URL"); ? Thanks for your time and effort, and good luck this year ;)
Re: Using the AXIS camera (M1011) with OpenCV
Are you sure it didn't say "Could not find codec parameters"? There's a similar-looking message on line 556 of the file in their GitHub repo. Line 545 says 'error opening file', which implies that it cannot open the stream you've passed to it.
The URI that worked for me last year was 'http://%s/mjpg/video.mjpg'.
Re: Using the AXIS camera (M1011) with OpenCV
I had a feeling that was the problem. I'll try that URL. If that doesn't work, where would I look to find the camera's stream address? Also, how do I authenticate to the camera?
Re: Using the AXIS camera (M1011) with OpenCV
I'd recommend disabling the authentication on the camera, then you don't have to deal with it. I'm pretty sure there's a way to enable anonymous stream viewing somewhere.
Re: Using the AXIS camera (M1011) with OpenCV
If you are only acquiring images, you can turn on anonymous viewing in the camera settings and the URL will work without authentication. If you want to set other parameters, you can mimic what the web page does; by that I mean capture the traffic with Wireshark and spoof it. That is what WPILib does. I can give you the codes for FRC/FRC.
Greg McKaskle
Re: Using the AXIS camera (M1011) with OpenCV
I'll have to look into anonymous viewing. If anyone knows how to do it, letting me know would be great!
Thanks :)
Re: Using the AXIS camera (M1011) with OpenCV
The image under Configure Users
Re: Using the AXIS camera (M1011) with OpenCV
I got anonymous viewing set up, and now the streaming works like a charm. I think I get less lag than with my laptop's internal camera! :D
Here's my current code. I am still working on it, but it displays three windows: a grayscale image and two instances of the original image, to be processed. PHP Code:
Re: Using the AXIS camera (M1011) with OpenCV
So now that I have a working stream address, and I am able to do basic processing, how do I separate the goals from everything else? The camera is saturated by the reflected green, so I just need an algorithm to separate that from the rest of the stuff. Where should I start? If anyone has some sample code, that would be appreciated, even if you want to PM me to not let anyone else see!
So currently, I am able to convert the colors with CV_BGR2HSV and CV_BGR2GRAY. I am also able to imshow the images throughout the entire process!

----Different topic----

Many of us have claimed that wifi interference can cause a ton of lag in robot communication. To see what would happen, I will try this wacky test:

Step 1: Ask everyone to turn off their phones and electronics, and isolate the appliances from as much interference as possible. I'll try to get a rough estimate of the lag, with no bandwidth restrictions enabled (similar to what you'd get if you were doing onboard processing without any network usage except feedback).

Step 2: Ask everyone to turn on their phones, bluetooth, wifi, and any other communications possible. I will also be running aircrack-ng on another computer to send a ton of packets and cause as much disturbance as possible, on the same channel as the robot communications. For consistency, no bandwidth restrictions would be enabled, again. I'd then check the lag times and see whether vision will be a possibility. I'll most likely publish this data to show what my experimentation came up with.

Step 3: Analyze and publish the results, then choose the pathway you wish to follow from there on. It could be onboard or offboard processing!

Tell me if I should add another step to this. I will try to use BackTrack Linux off a pendrive ISO image. Also, this is quite overkill; you'll probably never get this much interference at a competition, so this will be more of a worst-case scenario! What do you guys think?
You might wanna take a look at thresholding techniques, specifically OpenCV's inRange() function.
Copyright © Chief Delphi