Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   NI LabVIEW (http://www.chiefdelphi.com/forums/forumdisplay.php?f=182)
-   -   Vision Processing Help (http://www.chiefdelphi.com/forums/showthread.php?t=142993)

BillyBobJean 01-02-2016 18:50

Vision Processing Help
 
Problem:
We have low FPS in our vision processing. We would like to move the vision processing from the roboRIO to the driver station to improve camera performance. How do we accomplish this?

Info:
It is a USB camera and we use NI LabVIEW.


First post so if there are any questions, please ask.

Conor Ryan 01-02-2016 19:17

Re: Vision Processing Help
 
Quote:

Originally Posted by BillyBobJean (Post 1533620)
Problem:
We have low FPS in our vision processing. We would like to move the vision processing from the roboRIO to the driver station to improve camera performance.

Info:
It is a USB camera and we use NI LabVIEW.


First post so if there are any questions, please ask.

Adjust the resolution; you'll get a much higher FPS with a smaller image.
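
Outside of LabVIEW, the same idea looks something like the minimal Python/OpenCV sketch below; it just asks the USB camera for a smaller frame (the camera index and target resolution are example values, not anyone's actual setup).

Code:

import cv2

cap = cv2.VideoCapture(0)                   # first USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)      # request 320x240 instead of 640x480
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

ok, frame = cap.read()
if ok:
    print(frame.shape)                      # (240, 320, 3) if the camera honored the request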

BillyBobJean 01-02-2016 21:46

Re: Vision Processing Help
 
We tried that. We still get very low FPS.

rich2202 01-02-2016 21:49

Re: Vision Processing Help
 
Is your problem that it takes a long time to process each frame, or that the camera updates slowly? What do you consider low FPS?

FYI, it takes 0.6 seconds for our DS to process one frame.

adciv 02-02-2016 07:04

Re: Vision Processing Help
 
Quote:

Originally Posted by BillyBobJean (Post 1533620)
Problem:
We have low FPS in our vision processing. We would like to move the vision processing from the roboRIO to the driver station to improve camera performance. How do we accomplish this?

Info:
It is a USB camera and we use NI LabVIEW.


First post so if there are any questions, please ask.

You can do this by moving the vision code from the vision example to the Dashboard vision loop (loop 2). You would then send the data to the robot using NetworkTables. That said... what resolution are you running at? We're getting pretty decent frame rates at 320x240. I have benchmarks somewhere, but it could be that the roboRIO is selecting a lower frame rate from the camera than you expect, and that is contributing to the low frame rate.
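
In LabVIEW this all lives in the Dashboard's vision loop, but the data flow is easy to sketch in text form. A rough Python equivalent, assuming the pynetworktables and OpenCV packages (the table name, key name, team number, and detection result are placeholders, not the Dashboard's actual values):

Code:

import cv2
from networktables import NetworkTables

# Connect back to the robot; replace 1234 with your team number.
NetworkTables.initialize(server="roborio-1234-frc.local")
table = NetworkTables.getTable("vision")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    # ... do the actual target detection on `frame` here ...
    target_x = frame.shape[1] / 2.0          # placeholder result
    table.putNumber("targetX", target_x)     # robot code reads this key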

Quote:

Originally Posted by rich2202 (Post 1533687)
FYI, it takes 0.6 seconds for our DS to process one frame.

:ahh: What laptop are you running?!

Greg McKaskle 02-02-2016 08:39

Re: Vision Processing Help
 
Keep in mind that if you have the roboRIO's VI panel open and you are watching the images, the roboRIO also has to compress the images and send them to the LabVIEW debugging session. So before you trust your FPS number, be sure to close the debug panel.

On the question of camera resolution: the key is the number of pixels that represent the smallest feature you care about in the image. For this game, that is probably the target tape. The tape is 2" wide, and at distance y using camera x, you can use Vision Assistant or another tool to count the pixels that represent the 2" of tape. The number of pixels affects the accuracy of measurements you make on that feature of your image. If your 2" results in 20 pixels, then a small error due to alignment/lens/lighting/vibration will result in a measurement of 19 or 21 pixels instead. That gives you a swing of 5% on width-related measurements. If your image has only 4 pixels for the 2" line, the same swing runs from 3 to 5 -- probably too large to feed into anything affecting your shooter. The above is just to demonstrate the concept and is a conservative way to think about resolution.

The first thing to do is to make the measurement and see if you can drop the resolution down without affecting the accuracy of what you are measuring. I think that anywhere from 5 to 10 pixels is typically plenty. After all, you probably aren't measuring the 2" tape, but the 20" width or the 14" height.
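
As a quick back-of-envelope check of that idea, the little Python sketch below counts how many pixels the 2" tape spans for a given distance and image width (the 60-degree field of view is an assumed example value, not the spec of any particular camera):

Code:

import math

def pixels_on_feature(feature_in, distance_in, image_width_px, hfov_deg=60.0):
    # Pinhole-camera approximation: width of the scene the image covers at that distance.
    scene_width_in = 2.0 * distance_in * math.tan(math.radians(hfov_deg) / 2.0)
    return image_width_px * feature_in / scene_width_in

for width in (640, 320, 160):
    px = pixels_on_feature(feature_in=2.0, distance_in=10 * 12, image_width_px=width)
    print(f'{width} px wide image: 2" tape covers about {px:.1f} px at 10 ft')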

Greg McKaskle

rich2202 02-02-2016 10:56

Re: Vision Processing Help
 
Quote:

Originally Posted by adciv (Post 1533796)
:ahh: What laptop are you running?!

It is a newer i5 laptop. I have not had time to look at his code, since 0.6 seconds is not material to us at this point (the stop-vision-turn sequence takes much longer).

This is the first year we have a programmer successfully implementing vision, so I am sure there is lots of optimization we can do. He did take an interesting approach:
1) Grab the picture off the laptop screen (the same image the drivers use to drive the robot).
2) Look for "corners" rather than "rectangles".

adciv 03-02-2016 09:04

Re: Vision Processing Help
 
I'm a bit more curious about how you're doing the processing. Based on past experience, you should be able to process that image in 0.005 seconds (I've benchmarked older i-series laptops). The roboRIO can process 640x480 images in less than 0.6 seconds.
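
If you want a like-for-like number, a minimal timing harness is easy to put together; the Python/OpenCV sketch below just times a stand-in HSV threshold on a blank frame, not any team's real pipeline:

Code:

import time
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)    # stand-in 640x480 frame

start = time.perf_counter()
for _ in range(100):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))
elapsed = (time.perf_counter() - start) / 100
print(f"~{elapsed * 1000:.2f} ms per frame")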

rich2202 03-02-2016 12:38

Re: Vision Processing Help
 
Quote:

Originally Posted by adciv (Post 1534345)
I'm a bit more curious on how you're doing the processing.

The Vision tutorial has you looking for rectangles. I imagine something like: Find an object at least X pixels high (or wide).

The programmer decided to try to find corners. That requires looking at each pixel and seeing which pixels around it are lit and unlit. If the pixels above are unlit and the pixels below are lit, there is a high probability that it is a top corner of the U. Similar logic applies for the other corners. He then looks at the density of the pixels to determine where the corner is (the more tagged pixels in an area, the more likely it is a corner pixel).

It seems to work. Since he identifies left and right corners, he can differentiate goals when more than one is in the picture.
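
For the curious, here is a rough NumPy reconstruction of that heuristic as described (not the team's actual code): tag lit pixels that have unlit pixels a few rows above and lit pixels a few rows below, then take the densest cluster of tagged pixels as the corner estimate.

Code:

import numpy as np

def top_corner_candidates(mask, reach=3):
    """mask: 2-D boolean array of lit (thresholded) pixels."""
    tagged = np.zeros_like(mask)
    core = mask[reach:-reach, :]      # the pixel itself
    above = mask[:-2 * reach, :]      # the pixel `reach` rows above it
    below = mask[2 * reach:, :]       # the pixel `reach` rows below it
    tagged[reach:-reach, :] = core & ~above & below
    return tagged

def estimate_corner(tagged):
    """Very crude density step: median position of the tagged pixels."""
    ys, xs = np.nonzero(tagged)
    if len(xs) == 0:
        return None
    return int(np.median(xs)), int(np.median(ys))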

adciv 04-02-2016 09:22

Re: Vision Processing Help
 
Ah, he's written his own algorithm instead of using the built-in LabVIEW functions, then? That would explain it.

The tutorial searches for "blobs" and then categorizes from there. It's contained in the particle analysis VI, and this is one of the more common methods for detection. What language is he using?
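
For comparison with the corner approach, the blob/particle-analysis idea translates into text-based code roughly like the OpenCV sketch below (the HSV range and size cut-offs are placeholder values, not the tutorial's actual parameters):

Code:

import cv2

def find_target_blobs(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))  # green retroreflective tape
    # OpenCV 4.x return signature
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if cv2.contourArea(c) > 100 and w > h:   # keep wide-ish blobs of a reasonable size
            blobs.append((x, y, w, h))
    return blobs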

Greg McKaskle 04-02-2016 10:40

Re: Vision Processing Help
 
There are corner detection algorithms in NI Vision, and they are pretty standard implementations, but I've never found them to be a robust approach.

The corner detector needs to look further away than just adjacent pixels, or sensor noise and small tape-edge issues will produce lots of small "corners" that we don't care about. The NI algorithm has a parameter called pyramid level that determines what scale of feature you care about. I believe a higher level looks for bigger/coarser features.
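
Not the NI VI, but the same idea expressed with OpenCV: run a corner detector on a coarser level of an image pyramid so that single-pixel noise and ragged tape edges stop registering as corners (the detector choice and parameters here are only illustrative):

Code:

import cv2

def coarse_corners(gray, pyramid_level=2, max_corners=20):
    """gray: 8-bit single-channel image."""
    small = gray
    for _ in range(pyramid_level):
        small = cv2.pyrDown(small)               # halve the resolution at each level
    corners = cv2.goodFeaturesToTrack(small, max_corners, 0.05, 10)
    if corners is None:
        return []
    scale = 2 ** pyramid_level
    return [(float(x) * scale, float(y) * scale) for [[x, y]] in corners]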

The kids I mentor are also trying really hard to use corners, but corners just seem flakier to use as a primary identifier. My 2 cents.

Greg McKaskle

