So, my team finally decided to use LabVIEW instead of C++, since we couldn’t get images from the camera.
Now another issue: we are only getting slow motion from the camera.
We don’t know if it’s because of the cRIO’s processing speed or because of our code.
Does anyone know a way to speed it up?
We are already using a 160x120 image at 30 FPS with a compression of 10; everything else is at default, and we are getting MJPEG.
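For reference, those settings correspond to an MJPEG request along these lines (the IP address is just a placeholder, and the exact CGI path may differ by camera model and firmware):

http://10.x.x.x/axis-cgi/mjpg/video.cgi?resolution=160x120&fps=30&compression=10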
How fast is it, which cRIO do you have, and are you requesting a color or greyscale image?
With color images at low resolution, we were able to get something like 6 fps when we attempted vision back in 2009. That included two-color masking and blob tracking.
With the new vision updates this year, which allow requesting a greyscale image, 15 fps is hopeful with nothing else running. The processor simply isn’t fast enough to get and process images much faster than that.
It is really slow; I would say there is about a 2-5 second delay when we are trying to process the image. Also, can we get images only in greyscale? We tried to find that option but couldn’t. I took the Rectangle Tracking example from LabVIEW and copy/pasted it into our code. It works, but it didn’t find all the rectangles. I would like to know if anyone has typical values for each of the HSL image options, such as luminance and so on.
One thing to keep in mind: viewing panels and probes isn’t free. The more panels you have open, and the more probes and image displays, the more of the CPU will be spent on debugging. If you run the DS and click on the Charts tab, you can watch the CPU usage with different options. You can also instrument the vision processing loop to measure the rate.
If you haven’t already, try running just the sample vision code on your laptop with your camera. That would verify that your camera is working right. There will be some lag from the camera, but 2-5 seconds seems excessive. Our camera ran at roughly the speeds apalrd mentioned last year. To test, you might also try putting a disable structure over the potentially offending algorithm and see if that solves your issue.
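If LabVIEW isn’t handy on the laptop, the same sanity check can be sketched in a few lines of Python with OpenCV (my assumption, not the FRC sample code; the URL is a placeholder for your camera’s address): just read the stream and print the time between frames.

```python
import time
import cv2

# Placeholder MJPEG URL -- substitute your camera's IP and stream parameters.
CAMERA_URL = "http://10.0.0.20/mjpg/video.mjpg"

cap = cv2.VideoCapture(CAMERA_URL)
prev = time.perf_counter()

while True:
    ok, frame = cap.read()          # blocks until the next JPEG arrives
    now = time.perf_counter()
    if not ok:
        print("no frame -- check the URL / camera connection")
        break
    print(f"frame interval: {(now - prev) * 1000:.0f} ms")
    prev = now
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:        # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

At 30 fps the interval should hover around 33 ms; if the stream itself looks healthy here, the delay is being added somewhere downstream.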
We also have this issue, and maybe I can be clear enough that we can resolve it.
First, there is no cRIO involved in anything I am about to say. In working on our image processing, we are using the camera, a wireless router, and a laptop. That’s it. The camera is plugged into the router by a cable, and the laptop acquires images over the WiFi.
I made a “stripped down” dashboard to acquire the images: I made a new Dashboard project, opened Dashboard Main.vi, and deleted everything that wasn’t the image acquisition loop.
The lag is so severe that the students can walk in front of the camera, then sit down at the laptop in time to watch themselves walk in front of the camera.
gnunes,
I ran the attached code on just my laptop with a 206 camera. I was using the DLink and had the camera and laptop plugged into the DLink. This ran fine. The attached screenshot shows that the time between frames was nominally 33 ms with some jitter. The CPU usage on my laptop was around 20%. The latency measured by moving my hand was less than 1/4 second, probably more like 100 to 150 ms.
IP cameras are subject to the buffering that can take place in each NIC, on the camera, switch, WiFi devices, and the computer. Bring up the Task Manager and observe CPU usage, and experiment with wired vs. wireless. It may be that your WiFi was running at a lower speed to be compatible with some other device on the network.
In terms of bandwidth, the JPEGs on my computer were about 24 kBytes each. At 30 fps, that comes to about 720 kBytes per second, or roughly 7 Mbits per second once protocol overhead is included. This would mostly consume a b-speed network; it should work on a g-speed network, though six cameras would consume it. And finally, an n-speed network should handle a number of cameras pretty well, depending on its configuration.
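For reference, a quick back-of-the-envelope check of those figures (a sketch; the 24 kByte frame size and the 10-bits-per-byte overhead factor are assumptions taken from the numbers above):

```python
# Rough MJPEG bandwidth estimate using the figures quoted above (assumed).
frame_size_bytes = 24_000       # ~24 kBytes per JPEG from this camera's settings
fps = 30                        # requested frame rate

payload_bytes_per_s = frame_size_bytes * fps            # ~720 kBytes/s
payload_mbits = payload_bytes_per_s * 8 / 1e6           # ~5.8 Mbit/s of raw payload
with_overhead = payload_bytes_per_s * 10 / 1e6          # ~7.2 Mbit/s using a 10-bits/byte rule of thumb

print(f"payload: {payload_mbits:.1f} Mbit/s, with overhead: ~{with_overhead:.1f} Mbit/s")
```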
Please post back when you discover the cause of the issue.
I think your estimate of 150 msec for a hard-wired connection is pretty spot on. I put a timer in the loop and put it on the screen, then pointed the camera at the screen. I did the same using my browser to get the images from the camera, and it runs about twice as fast. See the attached images.
I’d really prefer to have 80 msec lag rather than 160!
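For anyone who wants to repeat that point-the-camera-at-the-screen measurement without LabVIEW, here is a rough sketch of the same idea in Python with OpenCV (my assumption; the URL is a placeholder): one window shows a running millisecond counter, the other shows the camera view of it, and the difference between the two readings is roughly the end-to-end latency.

```python
import time
import cv2
import numpy as np

# Placeholder MJPEG URL -- substitute your camera's IP and stream parameters.
CAMERA_URL = "http://10.0.0.20/mjpg/video.mjpg"

cap = cv2.VideoCapture(CAMERA_URL)
start = time.time()

while True:
    # Window 1: a running millisecond counter to point the camera at.
    clock = np.zeros((120, 420, 3), dtype=np.uint8)
    ms = int((time.time() - start) * 1000)
    cv2.putText(clock, f"{ms} ms", (10, 80),
                cv2.FONT_HERSHEY_SIMPLEX, 2.0, (255, 255, 255), 3)
    cv2.imshow("clock", clock)

    # Window 2: the live camera view; the counter visible here lags the
    # "clock" window by roughly the end-to-end latency.
    ok, frame = cap.read()
    if ok:
        cv2.imshow("camera", frame)

    if cv2.waitKey(1) == 27:        # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```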
Another thing that bites me sometimes is environmental lighting. The camera tends to adjust its exposure time and frame rate depending on the light level. I must have spent an hour in the past trying to understand why my numbers had changed one morning. Finally it became clear that it was because I was wearing a dark navy shirt, and the day before I had worn a cream-colored shirt.
In other words, I was just holding stuff in my hand, and the camera was seeing a totally different background.
Another possible explanation for the latency difference is the IMAQ display. I discovered that the Image Info box doesn’t draw directly to the screen, and you may find that having it shown affects the latency. Hide it with a right-click on the display >> Visible. Any zooming of the image could add some overhead as well.
I ran your experiment using IE, and I didn’t see quite the same difference. I saw between 100 and 170 ms for IE and 140 and 200 ms for LV. I also noticed that the CPU usage is a bit higher for LV. Basically, it looks like the diagram copying the buffer, or something similar, does add some overhead.
Since you mentioned the bandwidth of the WiFi…what is the situation on the competition field? Can we grab 640 x 480 images at 30 fps in a competition? Or will that totally bog down communications with the cRIO?
(Last year our autonomous mode was completely killed by “watchdog not fed” errors…that only occurred at competitions. Everything worked perfectly at home! This has me completely terrified, but unable to do anything about it.)
What were the results over WiFi? The difference between 150 ms and 1500 ms is huge. Did you fix the 1500, or did it just go away?
As for the field radio conditions: I have not seen evidence that the lag is typically there. I volunteer at a few events per season, so it could be that it happens when I’m not looking.
On the other hand, at every event the situation plays out where a team blames the field and swears that fill-in-the-blank symptom has never happened before. Each time, the issue has been debugged, typically on Thursday, and the cause turns out to be static discharge or a HW or SW failure that just didn’t show up until the rigors of competition.
The DS this year has a tab on the far right called Charts which displays and logs the packet loss, round trip time, cpu usage, and battery voltage. It also displays the tiny dots which show the state the field is asking the robot to be in and the state(s) that the robot code actually executed. This last feature only works if you insert, or don’t remove, the instrumentation code that is in the framework. Program Files\FRC Driver Station\ also contains a Log File Viewer app to review and compare logs from shop and competition. If you run your code through the practice match, not just tele and auto separately, and you watch the last tab on Thursday, I think you are prepared. If the gremlin exists, I hope that the new tab helps to identify it sooner, whether it is a robot HW, SW, or field problem.
If you get to an event and decide to lower throughput on the field, I’d suggest lowering the resolution and upping compression to 60 or 70.
In testing I’ve run three cameras on a single robot in our eight-story office, a super-noisy WiFi environment. The limiting factor was the old laptop’s ability to decode and display. If you can reproduce the 1500 ms or larger lag, please give details.
Yes and no. My original code, which I made by cutting away and modifying a dashboard project, still runs crazy slow. An essentially identical program, made “from scratch,” has only the expected 150 ms latency.
This morning I deleted the cast to HSL from the dashboard-derived code, and the lag seemed normal (I didn’t test this quantitatively). When I put the cast to HSL back in, the huge lag returned. But the other version has the cast to HSL and doesn’t have the huge lag.
I spotted the difference, but thought I’d introduce some new tools as part of the explanation.
If you go to Tools>>Compare, you can choose the From Scratch and From Dashboard VIs, and it will diff and explain all of the differences between them. Clearly one has a big numeric, and one has some dt calculation. The key difference for performance is in the cluster’s value: one image is 320x240, and the other is 640x480. If I change the scratch VI to 640 it is slow, and if I change the dashboard VI to 320 it is fast. (By slow and fast I really mean large and small latency, but you know what I mean.)
Next, I opened Tools>>Profile and ran the Performance Profiler on one of the 640 VIs. Click on the Start button, run the VI for about five seconds, stop the VI, then click on the Stop button in the profiler. You may also find it useful to check the Timing Statistics checkbox, which will tell you how many times the HSL conversion was done and how long it took, along with the min, max, and average values. On my machine, it takes an average of 50 ms to convert a 640x480 image to HSL. With data coming in every 33 ms, “Houston, we have a problem.” My computer has two CPUs, but the operation can only fit on one of them the way it is written, so my CPU usage is about 60% and I’m CPU limited. And indeed, the fps is about 20.
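If you want to sanity-check that conversion cost outside LabVIEW, here is a rough sketch in Python with OpenCV (my assumption, not the IMAQ call itself; absolute numbers will vary by machine) that times a color-to-HLS conversion at both resolutions:

```python
import time
import cv2
import numpy as np

def time_hls_conversion(width, height, iterations=100):
    """Average time, in ms, to convert one test image of the given size to HLS."""
    img = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
    start = time.perf_counter()
    for _ in range(iterations):
        cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    return (time.perf_counter() - start) / iterations * 1000

for w, h in [(320, 240), (640, 480)]:
    print(f"{w}x{h}: {time_hls_conversion(w, h):.1f} ms per conversion")
```

The point of the measurement is the same either way: at 30 fps you only have a 33 ms budget per frame, so a per-frame conversion cost near or above that guarantees the stream will back up, which is exactly the latency buildup described below.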
The latency is introduced because while the CPU is processing one image, 1.5 more arrive. So pretty quickly the TCP buffers swell up and the video stream lags. If I drop the frame rate from 30 down to, say, 10, the video obviously gets a bit choppier, but the latency is now something like 250 ms. Alternatively, you can use the camera VIs from the robot, the ones that read the stream continuously in a parallel loop and send the data across in a notifier buffer. That allows the fps setting to degrade more gracefully without introducing much lag. It didn’t seem necessary for the dashboard, but it can be used if you need it. Or you can drop the fps and tune it by watching the CPU usage. Be sure to restart the VI, restarting the MJPEG stream, when the parameter changes.
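For anyone working outside LabVIEW, the parallel-loop-plus-notifier idea maps onto a standard producer/consumer pattern in which the reader always overwrites the previous frame, so a slow processing loop only ever sees the newest image instead of a growing backlog. A rough Python sketch of that idea (the URL and the OpenCV calls are my assumptions, not the actual WPI/NI VIs):

```python
import threading
import cv2

# Placeholder MJPEG URL -- substitute your camera's IP and stream parameters.
CAMERA_URL = "http://10.0.0.20/mjpg/video.mjpg"

latest_frame = None
lock = threading.Lock()

def acquire():
    """Producer: read frames as fast as they arrive, keeping only the newest."""
    global latest_frame
    cap = cv2.VideoCapture(CAMERA_URL)
    while True:
        ok, frame = cap.read()
        if ok:
            with lock:
                latest_frame = frame   # overwrite; older frames are simply dropped

threading.Thread(target=acquire, daemon=True).start()

# Consumer: process at whatever rate the CPU allows. It never falls behind the
# stream, because it always grabs the most recent frame rather than a queued one.
while True:
    with lock:
        frame = None if latest_frame is None else latest_frame.copy()
    if frame is not None:
        hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)   # stand-in for the real processing
        cv2.imshow("processed", frame)
    if cv2.waitKey(1) == 27:        # Esc to quit
        break

cv2.destroyAllWindows()
```

Dropping frames in the acquisition loop trades a choppier display for bounded latency, which is the same trade-off as lowering the requested fps described above.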