#1 | 01-02-2016, 18:50
BillyBobJean
Registered User, FRC #1234
Join Date: Feb 2016 | Location: Harar, Zimbabwe | Posts: 2
Vision Processing Help

Problem:
We have low FPS in our vision processing. We would like to move the vision processing from the roboRIO to the driver station to improve camera performance. How do we accomplish this?

Info:
It is a USB camera and we use NI LabVIEW.

First post, so if there are any questions, please ask.

Last edited by BillyBobJean : 01-02-2016 at 21:34.
#2 | 01-02-2016, 19:17
Conor Ryan
I'm parking robot yacht club.
FRC #4571 (Robot Yacht Club) | Team Role: Mentor
Join Date: Nov 2004 | Rookie Year: 2004 | Location: Midtown, NYC | Posts: 1,889
Re: Vision Processing Help

Quote:
Originally Posted by BillyBobJean
Problem:
We have low FPS in our vision processing. We would like to move the vision processing from the roboRIO to the driver station to improve camera performance.

Info:
It is a USB camera and we use NI LabVIEW.

First post, so if there are any questions, please ask.
Adjust the resolution; you'll get a much higher FPS with a smaller image.
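
For teams prototyping the capture loop outside LabVIEW, here is a minimal sketch in Python/OpenCV of requesting a smaller frame and measuring the resulting FPS; the camera index and sizes are assumptions, not the poster's setup:

Code:
import time
import cv2  # assumes OpenCV is installed on the laptop doing the capture

cap = cv2.VideoCapture(0)                # hypothetical camera index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)   # request a smaller frame...
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)  # ...instead of 640x480

frames = 0
start = time.time()
while time.time() - start < 5.0:         # count frames for ~5 seconds
    ok, frame = cap.read()
    if ok:
        frames += 1
cap.release()
print("approx FPS: %.1f" % (frames / (time.time() - start)))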
#3 | 01-02-2016, 21:46
BillyBobJean
Registered User, FRC #1234
Join Date: Feb 2016 | Location: Harar, Zimbabwe | Posts: 2
Re: Vision Processing Help

We tried that. We still get very low FPS.
#4 | 01-02-2016, 21:49
rich2202
Registered User
FRC #2202 (BEAST Robotics) | Team Role: Mentor
Join Date: Jan 2012 | Rookie Year: 2012 | Location: Wisconsin | Posts: 1,156
Re: Vision Processing Help

Is your problem that it takes a long time to process each frame, or that the camera itself updates slowly? What do you consider low FPS?

FYI, it takes 0.6 seconds for our DS to process one frame.
#5 | 02-02-2016, 07:04
adciv
One Eyed Man
FRC #0836 (RoboBees) | Team Role: Mentor
Join Date: Jan 2012 | Rookie Year: 2010 | Location: Southern Maryland | Posts: 478
Re: Vision Processing Help

Quote:
Originally Posted by BillyBobJean
Problem:
We have low FPS in our vision processing. We would like to move the vision processing from the roboRIO to the driver station to improve camera performance. How do we accomplish this?

Info:
It is a USB camera and we use NI LabVIEW.

First post, so if there are any questions, please ask.
You can do this by moving the vision code from the vision example into the Dashboard vision loop (loop 2), then sending the results to the robot over NetworkTables. That said, what resolution are you running at? We're getting pretty decent frame rates at 320x240. I have benchmarks somewhere, but it could be that the roboRIO is selecting a lower frame rate from the camera than you want, and that is contributing to the low frame rate.
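
As a rough sketch of that last step in Python with pynetworktables (rather than the LabVIEW Dashboard), where the table name, key names, and team number are all made up for illustration:

Code:
from networktables import NetworkTables

# Driver-station side: connect to the robot (hostname uses a placeholder team number).
NetworkTables.initialize(server="roborio-1234-frc.local")
vision = NetworkTables.getTable("vision")   # hypothetical table name

def publish_target(center_x, center_y, found):
    # The robot code reads these keys each loop to aim or drive.
    vision.putNumber("centerX", center_x)
    vision.putNumber("centerY", center_y)
    vision.putBoolean("targetFound", found)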

Quote:
Originally Posted by rich2202
FYI, it takes 0.6 seconds for our DS to process one frame.
What laptop are you running?!
#6 | 02-02-2016, 08:39
Greg McKaskle
Registered User
FRC #2468 (Team NI & Appreciate)
Join Date: Apr 2008 | Rookie Year: 2008 | Location: Austin, TX | Posts: 4,748
Re: Vision Processing Help

Keep in mind that if you have the roboRIO's VI panel open and you are watching the images, the roboRIO also has to compress the images and send them to the LabVIEW debugging session. So before you measure your FPS, be sure to close the debug panel.

As for the camera resolution: the key is the number of pixels that represent the smallest feature you care about in the image. For this game, that is probably the target tape. The tape is 2" wide, and at distance y using camera x, you can use Vision Assistant or another tool to count the pixels that represent the 2" tape. The number of pixels affects the accuracy of any measurement you make on that feature of the image. If your 2" tape spans 20 pixels, then a small error due to alignment/lens/lighting/vibration will result in a measurement of 19 or 21 pixels instead, a swing of 5% on width-related measurements. If your image has only 4 pixels for the 2" line, that same one-pixel swing is from 3 to 5, or 25%, which is probably too large to feed into anything affecting your shooter. The above is just to demonstrate the concept and is a conservative way to think about resolution.

The first thing to do is to make the measurement and see if you can drop the resolution down without affecting the accuracy of what you are measuring. I think that anywhere from 5 to 10 pixels is typically plenty. After all, you probably aren't measuring the 2" tape, but the 20" width or the 14" height.

Greg McKaskle
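
To put rough numbers on the pixel budget Greg describes, here is a back-of-the-envelope sketch in Python; the 47 degree field of view, 320-pixel width, and distances are assumptions, not measurements:

Code:
import math

H_FOV_DEG = 47.0      # assumed horizontal field of view of the camera
IMAGE_WIDTH = 320     # assumed pixels across at the chosen resolution
FEATURE_IN = 2.0      # width of the tape stripe, inches

def pixels_on_feature(distance_in):
    # Width of the whole field of view at that distance, in inches.
    fov_width_in = 2.0 * distance_in * math.tan(math.radians(H_FOV_DEG / 2.0))
    return FEATURE_IN / fov_width_in * IMAGE_WIDTH

for feet in (5, 10, 15):
    px = pixels_on_feature(feet * 12.0)
    print('%2d ft: %4.1f px on the 2" tape; a one-pixel error is %.0f%%' % (feet, px, 100.0 / px))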
#7 | 02-02-2016, 10:56
rich2202
Registered User
FRC #2202 (BEAST Robotics) | Team Role: Mentor
Join Date: Jan 2012 | Rookie Year: 2012 | Location: Wisconsin | Posts: 1,156
Re: Vision Processing Help
Re: Vision Processing Help

Quote:
Originally Posted by adciv
What laptop are you running?!
It is a newer i5 laptop. I have not had time to look at his code, since 0.6 seconds is not material to us at this point (the stop-vision-turn sequence is much longer).

This is the first year we have had a programmer successfully implement vision, so I am sure there is lots of optimization we can do. He did take an interesting approach:
1) Grabbing the picture off the laptop screen (the same picture the drivers use to drive the robot).
2) Looking for "corners" rather than "rectangles".
#8 | 03-02-2016, 09:04
adciv
One Eyed Man
FRC #0836 (RoboBees) | Team Role: Mentor
Join Date: Jan 2012 | Rookie Year: 2010 | Location: Southern Maryland | Posts: 478
Re: Vision Processing Help
Re: Vision Processing Help

I'm a bit more curious about how you're doing the processing. Based on past experience, you should be able to process that image in 0.005 seconds (I've benchmarked older i-series laptops). The roboRIO can process 640x480 images in less than 0.6 seconds.
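
For comparison, a minimal threshold-and-contour pipeline of the kind those benchmarks usually refer to, sketched in Python/OpenCV with timing; the HSV bounds and test image are placeholders:

Code:
import time
import cv2
import numpy as np

def process(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Placeholder HSV bounds for the green retroreflective glow.
    mask = cv2.inRange(hsv, np.array([60, 100, 100]), np.array([90, 255, 255]))
    # [-2] picks the contour list under both the OpenCV 2/4 and OpenCV 3 return styles.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return [cv2.boundingRect(c) for c in contours]

frame = cv2.imread("target.png")   # hypothetical saved 640x480 test image
start = time.time()
boxes = process(frame)
print("processed in %.4f s, %d candidates" % (time.time() - start, len(boxes)))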
#9 | 03-02-2016, 12:38
rich2202
Registered User
FRC #2202 (BEAST Robotics) | Team Role: Mentor
Join Date: Jan 2012 | Rookie Year: 2012 | Location: Wisconsin | Posts: 1,156
Re: Vision Processing Help
Re: Vision Processing Help

Quote:
Originally Posted by adciv
I'm a bit more curious on how you're doing the processing.
The Vision tutorial has you looking for rectangles. I imagine something like: find an object at least X pixels high (or wide).

The programmer decided to try to find corners instead. That requires looking at each pixel and seeing which of the pixels around it are lit and unlit. If the pixels above are unlit and the pixels below are lit, there is a high probability that it is a top corner of the U. Similar logic applies for the other corners. He then looks at the density of tagged pixels to determine where the corner is (the more tagged pixels in an area, the more likely it is a corner).

It seems to work. Since he identifies left and right corners, he can differentiate goals when more than one is in the picture.
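
A very rough sketch of that lit-above/lit-below test on a thresholded image, in Python with NumPy; the window size and cutoffs are guesses, not the student's actual code:

Code:
import numpy as np

def looks_like_top_corner(mask, row, col, win=5):
    # mask is a binary image (0 = unlit, 1 = lit) after color thresholding.
    above = mask[row - win:row, col]          # pixels straight above
    below = mask[row + 1:row + win + 1, col]  # pixels straight below
    # Top of the U's side strips: mostly unlit above, mostly lit below.
    return above.mean() < 0.2 and below.mean() > 0.8

def tag_top_corner_pixels(mask, win=5):
    h = mask.shape[0]
    rows, cols = np.nonzero(mask)
    # Tag candidate pixels; the densest clusters of tagged pixels are then
    # treated as the corner locations (the clustering step is not shown).
    return [(r, c) for r, c in zip(rows, cols)
            if win <= r < h - win and looks_like_top_corner(mask, r, c, win)]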
#10 | 04-02-2016, 09:22
adciv
One Eyed Man
FRC #0836 (RoboBees) | Team Role: Mentor
Join Date: Jan 2012 | Rookie Year: 2010 | Location: Southern Maryland | Posts: 478
Re: Vision Processing Help
Re: Vision Processing Help

Ah, so he's written his own algorithm instead of using the built-in LabVIEW functions? That would explain it.

The tutorial searches for "blobs" and then categorizes from there. It's contained in the particle analysis VI, and this is one of the more common methods for detection. What language is he using?
#11 | 04-02-2016, 10:40
Greg McKaskle
Registered User
FRC #2468 (Team NI & Appreciate)
Join Date: Apr 2008 | Rookie Year: 2008 | Location: Austin, TX | Posts: 4,748
Re: Vision Processing Help
Re: Vision Processing Help

There are corner detection algorithms in NI Vision, and they are pretty standard implementations, but I've never found them to be a robust approach.

The corner detector needs to look further away than just the adjacent pixels, or sensor noise and small tape-edge issues will cause lots of small "corners" that we don't care about. The NI algorithm has a parameter called pyramid level that determines what scale of feature you care about; I believe a higher level looks for bigger/coarser features.

The kids I mentor are also trying really hard to use corners, but corners just seem flakier to use as a primary identifier. My 2 cents.

Greg McKaskle
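
Not the NI Vision implementation, but the same idea sketched with OpenCV in Python: heavier smoothing plays a role similar to a higher pyramid level, suppressing the tiny noise "corners" so only the coarse corners of the tape survive; the kernel size and quality threshold are guesses:

Code:
import cv2

def strong_corners(gray, blur_ksize=9, quality=0.2):
    # gray is a single-channel image of the thresholded/lit target region.
    smoothed = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    # Ask for a handful of well-separated, high-quality corners only.
    corners = cv2.goodFeaturesToTrack(smoothed, 8, quality, 10)
    return [] if corners is None else corners.reshape(-1, 2)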