#1
Fixing opencv lag
I am trying to get camera tracking working on my BeagleBone Black. I want to use the Axis camera to do this, and I have that part working. The problem is that the tracking is limited to about 15 frames per second, while the camera is sending at 30 fps. Images just pile up in the capture buffer, which then starts lagging badly. Does anybody know how to shrink the buffer down to one image, read from the back of the buffer, or loop through the buffer to the end quickly? Any other suggestions would help too.
Also, this is unrelated, but does anybody know why no networking libraries allow reading from the back of the buffer? I have wanted to do that MANY times, but nothing ever supports it, and I'm just curious if anybody knows why that is.
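For the "loop through the buffer to the end" idea, one common pattern is to call grab() repeatedly (it dequeues a frame without the decode cost) and only retrieve() the last one. Here is a sketch using a stand-in capture class of my own so it runs without a camera; note that a real cv2.VideoCapture blocks once the buffer is empty, so in practice you would grab a fixed number of frames or stop once grab() starts taking a full frame interval:

```python
from collections import deque

class BufferedCapture:
    """Stand-in for cv2.VideoCapture with a backlog of stale frames,
    so this sketch runs without a camera. With OpenCV the calls
    would be vc.grab() and vc.retrieve()."""
    def __init__(self, frames):
        self._buf = deque(frames)
        self._current = None

    def grab(self):
        # Dequeue one frame without decoding it (cheap in OpenCV too).
        if not self._buf:
            return False
        self._current = self._buf.popleft()
        return True

    def retrieve(self):
        return self._current is not None, self._current

vc = BufferedCapture(range(10))   # ten stale frames queued up

# Drain the backlog quickly, then decode only the newest frame.
while vc.grab():
    pass
ok, frame = vc.retrieve()
```

After the drain, `frame` is the most recent image the camera sent, so the tracker never works on stale data.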
#2
Re: Fixing opencv lag
This is basically what I was going to write, but Zaphod said it better...
Quote:
#3
Re: Fixing opencv lag
Yeah, I was just about to post that I read that. I did it without the locks because I don't know how to do locks in Python, but that's coming up next.
Thanks
#4
Re: Fixing opencv lag
Quote:
In other words, don't allow the camera to "free run". You might also try setting the Axis to run at 15 fps. As long as you can process at 15 fps, you should not be filling the buffer.
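If the camera supports capping the rate in the stream request, that can be done in the URL itself. The `fps` parameter here is an assumption based on the Axis VAPIX interface, and the host address is hypothetical, so check your camera's docs:

```python
camera_host = "10.0.0.11"   # hypothetical Axis camera address
# Ask the camera for at most 15 fps so a 15 fps tracker can keep up
# and the capture buffer never accumulates frames.
url = "http://%s/mjpg/video.mjpg?fps=15" % camera_host
```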
#5
Re: Fixing opencv lag
I tried just requesting a single frame at a time, but it took 100 ms from requesting the image to actually receiving it. That was a little too much for me.
As for limiting the fps, it wasn't working for some reason.
#6
Re: Fixing opencv lag
Just toss every other frame? Have you tried using C or C++ instead of Python?
#7
Re: Fixing opencv lag
I know the Python libraries. And the threaded approach works just fine, most likely better.
#8
Re: Fixing opencv lag
The WPI libraries in LV include vision code. The code uses parallel loops -- resulting in parallel threads -- in case the user has set the camera faster than they can process. This is effective for keeping the lag to a minimum.
You may also want to lower the camera frame rate, though. First, there is some overhead involved in processing the frames, though hopefully you aren't decompressing the images or doing any of the other expensive things. Second, the two-thread approach results in much less certainty about when the image was acquired. If you are trying to estimate time, it is likely better to run the camera at a slower but known rate that doesn't buffer images.
The other approach, requesting an image only when you want one, was used in older versions of WPILib. It works far better on the 206 model than on the newer Axis cameras. There is indeed quite a bit of setup overhead, and the mjpeg approach is almost required to get above 10 fps.
Greg McKaskle
#9
Re: Fixing opencv lag
Quote:
I ran the threading overnight last night, and it ran for 12 hours without missing any frames or starting to lag. How I had it set up was to grab the image in the fast looping thread and store it. Then the slow thread would grab a frame as fast as the tracking could loop, which was about 60 ms. Since at 30 fps a new frame arrives every 33 ms, the image is at most 2 frames old, which is perfectly fine.
The reason we are on a BeagleBone instead of the dashboard is that I have heard stories of FTAs requiring dashboards to be shut off, and we don't want to be stuck if that happens. So we want it onboard.
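A minimal sketch of that two-thread pattern, with the lock handling the earlier post mentioned. The names and the fake frame source are mine, not from the original post; with a real camera the producer would call something like `lambda: vc.read()[1]`:

```python
import itertools
import threading
import time

class LatestFrame:
    """Holds only the newest frame; older frames are overwritten,
    so nothing ever queues up behind the slow tracking loop."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame

    def get(self):
        with self._lock:
            return self._frame

def grabber(read_frame, latest, stop):
    # Fast loop: pull frames as quickly as they arrive so the
    # camera's buffer never fills; only the newest one is kept.
    while not stop.is_set():
        latest.put(read_frame())

# Stand-in frame source (a counter) so the sketch runs without a camera.
frames = itertools.count()
latest = LatestFrame()
stop = threading.Event()
t = threading.Thread(target=grabber, args=(lambda: next(frames), latest, stop))
t.start()

time.sleep(0.05)        # the ~60 ms tracking loop would run here
frame = latest.get()    # at most one frame old when retrieved
stop.set()
t.join()
```

The slow loop just calls `latest.get()` whenever it is ready, which matches the "at most 2 frames old" behavior described above.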
#10
Re: Fixing opencv lag
Totally understand about the approach. I have a Bone at my desk for precisely this sort of experimentation. I was relating how LV implemented it in order to validate your approach.
I'd be interested to hear whether FTAs need to limit dashboards this year. Some venues are less WiFi friendly, so it may still happen, but it is not the expected route.
Greg McKaskle
#11
Re: Fixing opencv lag
We did our image processing in Python using OpenCV in a single thread, and didn't have any lag problems. Here are some thoughts that may help:
#12
Re: Fixing opencv lag
Quote:
As for grabbing the image directly, OpenCV has no way of just opening a jpg over the network, so some other library had to be used. I was using urllib because that was the only one I could find working. And it would take 70 ms just to connect to the camera and download the image into an already allocated buffer.
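For reference, a per-frame fetch with urllib looks roughly like this (Python 3 spelling; the Axis still-image path in the comment is an assumption, so check the camera's docs). The point is that every call pays the full connect-and-request round trip, which is where the ~70 ms goes:

```python
import urllib.request

def fetch_jpeg(url, timeout=2.0):
    """Download one image over HTTP and return its bytes.

    Each call pays the TCP connect + HTTP request round trip,
    which is why the mjpeg stream or a grabber thread beats
    per-frame requests for anything near full frame rate.
    """
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

# The bytes would then be handed to OpenCV, e.g.:
#   img = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
# Hypothetical Axis still-image URL:
#   data = fetch_jpeg("http://10.0.0.11/axis-cgi/jpg/image.cgi")
```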
#13
Re: Fixing opencv lag
Quote:
Code:
import cv2
import numpy as np

vc = cv2.VideoCapture()
vc.open('http://%s/mjpg/video.mjpg' % camera_host)  # camera_host = Axis IP/hostname
w = int(vc.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
h = int(vc.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
capture_buffer = np.empty(shape=(h, w, 3), dtype=np.uint8)
while True:
    retval, img = vc.read(capture_buffer)
#14
Re: Fixing opencv lag
Quote:
In 2012, I had to tell a student that the vision software they worked on was not allowed because the FTA wouldn't let us use it. Then we were stupid enough to try vision again in 2013, thinking that the "bandwidth limit" (which doesn't work) would prevent the FTA from banning the dashboard. But no, the FTA said we weren't allowed to use our vision stuff yet again, this time because a robot on the field had stopped working "because we sabotaged their network connection". What really happened? The other team's battery fell out. Not our problem, but we still couldn't use vision.
Vision is not a guaranteed ability for robots. Nor are dashboards, or any amount of communication.
#15
Re: Fixing opencv lag
Quote:
Yes, it is really a shame when a feature that has been pushed by FIRST is taken away, for any reason, especially when that feature is part of the major design a team has undertaken. The point of this thread, on the other hand, is how to avoid this possibility entirely.
Here is why: if the vision processing is done by a separate on-board system, a BeagleBone in this case and a PCDuino in ours, it removes the wireless network entirely. All acquisition of images, processing, and communication of target information remains on the robot. Honestly, for much less than $200, any team can do this! It just takes time to learn how, and the information is already readily available.