Go Back   Chief Delphi > Technical > Programming
#1 | 14-11-2016, 10:41
euhlmann (Erik Uhlmann) | FRC #2877 (LigerBots) | Leadership
Re: 30fps Vision Tracking on the RoboRIO without Coprocessor

Pretty cool!
Now if only all of OpenCV could be NEON-optimized

Or if somebody could teach me what black magic I need to invoke to get OpenCV GPU acceleration on Android
__________________
Creator of SmartDashboard.js, an extensible nodejs/webkit replacement for SmartDashboard


https://ligerbots.org
#2 | 14-11-2016, 10:55
Jaci (Jaci R Brunning) | FRC #5333 (Can't C# | OpenRIO) | Mentor

Quote:
Originally Posted by euhlmann
Pretty cool!
Now if only all of OpenCV could be NEON-optimized

Or if somebody could teach me what black magic I need to invoke to get OpenCV GPU acceleration on Android
OpenCV does have NEON and VFP build options, both of which were enabled during these tests, which is part of the reason cv::inRange executed so quickly.
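For anyone wanting to reproduce this: OpenCV's CMake build exposes these options as ENABLE_NEON and ENABLE_VFPV3 (flag names from the OpenCV 3.x ARM build; the exact toolchain file and directory layout for your cross-compile setup may differ).

```shell
# Sketch of a cross-compile configure step for a 32-bit ARM target such as
# the roboRIO's Cortex-A9, with NEON and VFPv3 code generation enabled.
# Assumes a build directory inside an OpenCV 3.x source checkout.
cmake -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/arm-gnueabi.toolchain.cmake \
      -DENABLE_NEON=ON \
      -DENABLE_VFPV3=ON \
      ..
```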
__________________
Jacinta R

Curtin FRC (5333+5663) : Mentor
5333 : Former [Captain | Programmer | Driver], Now Mentor
OpenRIO : Owner

Website | Twitter | Github
jaci.brunning@gmail.com
#3 | 14-11-2016, 11:01
euhlmann (Erik Uhlmann) | FRC #2877 (LigerBots) | Leadership

Quote:
Originally Posted by Jaci
OpenCV does have NEON and VFP build options, both of which were enabled during these tests, which is part of the reason cv::inRange executed so quickly.
Yes, but few things have been NEON-optimized so far
#4 | 14-11-2016, 11:12
RyanShoff | FRC #4143 (Mars Wars) | Mentor

Have you looked at how much overhead comes from getting 30fps from a USB camera?

Also findContours() should run faster on non-random data.
__________________
Ryan Shoff
4143 Mars/Wars
CheapGears.com
#5 | 14-11-2016, 11:23
Jaci (Jaci R Brunning) | FRC #5333 (Can't C# | OpenRIO) | Mentor

Quote:
Originally Posted by RyanShoff
Have you looked at how much overhead comes from getting 30fps from a USB camera?

Also findContours() should run faster on non-random data.
I don't have a USB camera to test with, and I have to fix my Kinect adapter before I can run this live.

I understand findContours() will run faster on non-random data; I chose random data to provide a worst-case scenario. With a real image from the Kinect, it runs somewhat faster.
#6 | 14-11-2016, 11:41
Andrew Schreiber (Data Nerd) | FRC #0079

First, pretty awesome write-up. Running on board removes a lot of the risk associated with relying on vision processing; the communication step is hard.

Second, I'd be curious how you derived the requirement of 640x480. It seems to me that a lower-resolution image would process faster, so the quickest win in this whole process would be to work out the minimum image resolution actually required.

I've attached some of the test images 125 produced, downsampled as an example if folks want to play with them. They were taken at 14 feet away, dead straight on, and then scaled using ImageMagick from 1280x960 down to 80x60. While the 80x60 image is just silly, I do believe there are applications where much lower resolutions are just as effective.

It also opens the possibility of using low-res images to identify an ROI and then processing just that smaller region at higher resolution.
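The ROI idea in that last paragraph is cheap to implement. Below is a minimal sketch of the coordinate mapping in plain C++ with no OpenCV dependency; the Rect type and the pad value are illustrative, not from any particular library.

```cpp
#include <algorithm>

// Two-pass idea: find the target in a low-res frame, then map its bounding
// box back onto the full-res frame (with a small margin) so only that
// region needs the expensive high-res processing.
struct Rect { int x, y, w, h; };

Rect scaleRoiToFullRes(const Rect& lowRes, int scale, int fullW, int fullH, int pad) {
    // Scale the low-res box up, then grow it by `pad` pixels on each side
    // to absorb quantization error from the coarse detection.
    int x = lowRes.x * scale - pad;
    int y = lowRes.y * scale - pad;
    int w = lowRes.w * scale + 2 * pad;
    int h = lowRes.h * scale + 2 * pad;
    // Clamp to the full-res image bounds.
    x = std::max(0, x);
    y = std::max(0, y);
    w = std::min(w, fullW - x);
    h = std::min(h, fullH - y);
    return {x, y, w, h};
}
```

For example, a blob found at (10, 8, 4, 3) in an 80x60 frame maps to (76, 60, 40, 32) in the 640x480 frame with a 4-pixel margin, so the expensive threshold/contour pass touches about 1,300 pixels instead of 307,200.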
Attached thumbnails (downsampled test images): 14_00_160.png, 14_00_320.png, 14_00_640.png, 14_00_1280.png
#7 | 14-11-2016, 11:57
Jared Russell | FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy) | Engineer

Quote:
Originally Posted by Andrew Schreiber
Second, I'd be curious how you derived the requirement of 640x480. It seems to me that a lower-resolution image would process faster, so the quickest win in this whole process would be to work out the minimum image resolution actually required.
This is definitely true. The resolution you need is a function of range, target geometry, angle of incidence, camera field of view, the frequency and type of non-target objects that pass the threshold, and required precision. 640x480 has been overkill for all vision challenges to date.

640x480x30 fps is a convenient benchmark, though, as it is achievable with largely unoptimized code by many forms of coprocessors.
#8 | 14-11-2016, 11:58
Jaci (Jaci R Brunning) | FRC #5333 (Can't C# | OpenRIO) | Mentor

Quote:
Originally Posted by Andrew Schreiber
First, pretty awesome write-up. Running on board removes a lot of the risk associated with relying on vision processing; the communication step is hard.

Second, I'd be curious how you derived the requirement of 640x480. It seems to me that a lower-resolution image would process faster, so the quickest win in this whole process would be to work out the minimum image resolution actually required.

I've attached some of the test images 125 produced, downsampled as an example if folks want to play with them. They were taken at 14 feet away, dead straight on, and then scaled using ImageMagick from 1280x960 down to 80x60. While the 80x60 image is just silly, I do believe there are applications where much lower resolutions are just as effective.

It also opens the possibility of using low-res images to identify an ROI and then processing just that smaller region at higher resolution.
Honestly, I used 640x480 as a kind of 'boast' about how much potential this holds (that, and it's the default resolution of a Kinect camera @ 30fps). You can actually downscale the image entirely with the VFP/NEON unit: use vld1.64 to load into the D registers, then a variant of vst to store back to memory interleaved, discarding the extra pixels or saving them for later use as you proposed in your last paragraph. This is 'effectively' zero cost to the algorithm, since it works 128 bits at a time.
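As a rough scalar illustration of the decimation described above: the NEON version would deinterleave even/odd pixels into two registers per load and store only one, 16 bytes at a time; this plain-C++ sketch performs the same even-pixel selection one byte at a time.

```cpp
#include <cstdint>
#include <vector>

// Halve the width of a single-channel row by keeping every other pixel.
// NEON can do the same split with one vld2.8 per 16 byte-pairs; here it is
// written out scalar so the selection logic is visible.
std::vector<uint8_t> decimateRowBy2(const std::vector<uint8_t>& row) {
    std::vector<uint8_t> out;
    out.reserve(row.size() / 2);
    for (std::size_t i = 0; i + 1 < row.size(); i += 2) {
        out.push_back(row[i]);  // keep even pixels, discard (or save) odd ones
    }
    return out;
}
```

Applying the same step to columns as well quarters the pixel count per halving, which is where the near-zero-cost downscale comes from.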
#9 | 14-11-2016, 12:05
Andrew Schreiber (Data Nerd) | FRC #0079

Quote:
Originally Posted by Jared Russell
This is definitely true. The resolution you need is a function of range, target geometry, angle of incidence, camera field of view, the frequency and type of non-target objects that pass the threshold, and required precision. 640x480 has been overkill for all vision challenges to date.

640x480x30 fps is a convenient benchmark, though, as it is achievable with largely unoptimized code by many forms of coprocessors.
Quote:
Originally Posted by Jaci
Honestly, I used 640x480 as a kind of 'boast' about how much potential this holds (that, and it's the default resolution of a Kinect camera @ 30fps). You can actually downscale the image entirely with the VFP/NEON unit: use vld1.64 to load into the D registers, then a variant of vst to store back to memory interleaved, discarding the extra pixels or saving them for later use as you proposed in your last paragraph. This is 'effectively' zero cost to the algorithm, since it works 128 bits at a time.
Understood, just wanted to make sure other folks reading the thread didn't get the idea that 640x480 was required.
#10 | 14-11-2016, 11:49
Jared Russell | FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy) | Engineer

This is very cool, though I'm not (yet) convinced that you can get 30fps @ 640x480 with an RGB USB camera using a "conventional" FRC vision algorithm. But now you have me thinking...

Why I think we're still a ways off from RGB webcam-based 30fps @ 640x480: Your Kinect is doing several of the most expensive image processing steps for you in hardware.

With a USB webcam, you need to:

1. Possibly decode the image into a pixel array (many webcams encode their images in formats that aren't friendly to processing).

2. Convert the pixel array into a color space that is favorable for background-lighting-agnostic thresholding (HSV/HSL). This is done once per pixel per channel (3*640*480), and each op is the evaluation of a floating point (or fixed point) linear function, and usually also involves evaluating a decision tree for numerical reasons.

3. Do inRange thresholding on each channel separately (3x as many operations as in your example) and then AND together the outputs into a binary image.

4. Run FindContours, filter, etc... These are usually really cheap, since the input is sparse.

So in order to do this with an RGB webcam, we're talking at least 6x as many operations, assuming a color space conversion and per-channel thresholding, and likely more because color space conversion is more expensive than thresholding. Plus possible decoding and USB overhead penalties. Even if we ignore that, we're at 7.7 * 6 = 46.2ms per frame; running that at 15 frames per second would be about 69% CPU utilization. Anecdotally, I'd expect another 30+ ms per frame of overhead.

The Kinect is doing all of the decoding for you, does not require a color space conversion, and gives you a single channel image that is already in a form that is appropriate for robust performance in FRC venues. No Step 1, No Step 2, and Step 3 is 1/3 as complex when compared to the above.

However...

Great idea hacking the ASM to use SIMD for inRange. I wonder if you could also write an ASM function to do color space conversion, thresholding, and ANDing in a single function that only touches registers (may require fixed point arithmetic; I'm not sure what the RoboRIO register set looks like). This would add several more ops to your program, and have 3x as many memory reads, but would have the same number of memory writes.
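To make Steps 2 and 3 concrete, here is a scalar sketch of the per-pixel work, using OpenCV's 8-bit HSV convention (H in [0,180), S and V in [0,255]). This is illustrative, not OpenCV's actual implementation.

```cpp
#include <algorithm>
#include <cstdint>

struct Hsv { uint8_t h, s, v; };

// Step 2: convert one RGB pixel to HSV (OpenCV 8-bit convention).
Hsv rgbToHsv(uint8_t r, uint8_t g, uint8_t b) {
    uint8_t v = std::max({r, g, b});
    uint8_t m = std::min({r, g, b});
    uint8_t s = (v == 0) ? 0 : static_cast<uint8_t>(255 * (v - m) / v);
    int h = 0;
    if (v != m) {                        // the "decision tree" mentioned above
        int d = v - m;
        if (v == r)      h = 60 * (g - b) / d;
        else if (v == g) h = 120 + 60 * (b - r) / d;
        else             h = 240 + 60 * (r - g) / d;
        if (h < 0) h += 360;
    }
    return {static_cast<uint8_t>(h / 2), s, v};  // halve hue to fit in a byte
}

// Step 3: threshold each channel and AND the results into one binary value.
bool inRangeHsv(Hsv p, Hsv lo, Hsv hi) {
    return p.h >= lo.h && p.h <= hi.h &&
           p.s >= lo.s && p.s <= hi.s &&
           p.v >= lo.v && p.v <= hi.v;
}
```

Note the branches in the hue computation: that per-pixel conditional work is exactly what makes naive vectorization of the color space conversion awkward.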
#11 | 14-11-2016, 12:14
Jaci (Jaci R Brunning) | FRC #5333 (Can't C# | OpenRIO) | Mentor

Quote:
Originally Posted by Jared Russell
This is very cool, though I'm not (yet) convinced that you can get 30fps @ 640x480 with an RGB USB camera using a "conventional" FRC vision algorithm. But now you have me thinking...

Why I think we're still a ways off from RGB webcam-based 30fps @ 640x480: Your Kinect is doing several of the most expensive image processing steps for you in hardware.

With a USB webcam, you need to:

1. Possibly decode the image into a pixel array (many webcams encode their images in formats that aren't friendly to processing).
This is certainly true. As I mentioned, I don't really have a USB webcam to benchmark with, so I can't offer much input on this part.

Quote:
Originally Posted by Jared Russell
2. Convert the pixel array into a color space that is favorable for background-lighting-agnostic thresholding (HSV/HSL). This is done once per pixel per channel (3*640*480), and each op is the evaluation of a floating point (or fixed point) linear function, and usually also involves evaluating a decision tree for numerical reasons.

3. Do inRange thresholding on each channel separately (3x as many operations as in your example) and then AND together the outputs into a binary image.
These can actually both be collapsed into one set of instructions if your use case is target-finding.
Most robots use some sort of light source to find the retro-reflective target; typically this is a green LED ring (for our Kinect, it's the IR projector). If your image is already in RGB form, you can just isolate the green channel (which you can do with SIMD extremely simply: vld3.8) and proceed onward. Storing the R and B channels to a D register but not writing them to RAM will save a lot of time here, and then your thresholding function only takes one set of data.

Something similar can be done with HSV/HSL, though it requires a bit more math on the assembly side to isolate the lightness for a specific hue or saturation. Nonetheless, it's still faster than computing all 3 channels.
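A scalar equivalent of that green-channel isolation, for reference: NEON's vld3.8 deinterleaves 8 RGB pixels into separate R, G, and B registers in one instruction; this plain-C++ sketch does the same split one pixel at a time. The threshold value is illustrative.

```cpp
#include <cstdint>
#include <vector>

// Take an interleaved RGB buffer (R,G,B,R,G,B,...), keep only the G channel,
// and threshold it into a binary mask, as described above.
std::vector<uint8_t> greenMask(const std::vector<uint8_t>& rgb, uint8_t thresh) {
    std::vector<uint8_t> mask;
    mask.reserve(rgb.size() / 3);
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        uint8_t g = rgb[i + 1];           // rgb[i] (R) and rgb[i+2] (B) are discarded
        mask.push_back(g >= thresh ? 255 : 0);
    }
    return mask;
}
```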

Quote:
Originally Posted by Jared Russell
However...

Great idea hacking the ASM to use SIMD for inRange. I wonder if you could also write an ASM function to do color space conversion, thresholding, and ANDing in a single function that only touches registers (may require fixed point arithmetic; I'm not sure what the RoboRIO register set looks like). This would add several more ops to your program, and have 3x as many memory reads, but would have the same number of memory writes.
I believe it would be possible to do HS{L,V}/RGB color space conversion with SIMD if you're willing to take on the challenge. I may give this a try when I have some time to burn.
Putting it all into one set of instructions touching only the NEON registers is entirely possible; in fact, the thresholding and ANDing are already grouped together, operating on the Q registers. I can confirm that the ARM NEON instruction set does include fixed-point arithmetic, although converting to and from floating-point requires the vcvt instruction, which is also handled by the NEON unit.
#12 | 14-11-2016, 14:19
NotInControl (Kevin) | FRC #2168 (Aluminum Falcons) | Engineer

Interesting work.

We took a look at using the RoboRIO for vision processing back in 2014, during the alpha test of the new hardware. We tried IP and web cams, using the same vision detection algorithm to find hot goals as implemented on our 2014 robot.

This was an OpenCV implementation in C++, compiled with NEON enabled and running on the RoboRIO.

Take a look at our data, at the link below, under Vision, at the IP camera test.

We would need to dust it off, but with our complete end-to-end solution I think we could only get 20fps at 320x240 on the RIO.

http://controls.team2168.org/


Over the past few years we have moved to a decoupled, off-board vision system, for various reasons we deemed beneficial, but I am glad to see progress in this area.
__________________
Controls Engineer, Team 2168 - The Aluminum Falcons
[2016 Season] - World Championship Controls Award, District Controls Award, 3rd BlueBanner
-World Championship- #45 seed in Quals, World Championship Innovation in Controls Award - Curie
-NE Championship- #26 seed in Quals, winner(195,125,2168)
[2015 Season] - NE Championship Controls Award, 2nd Blue Banner
-NE Championship- #26 seed in Quals, NE Championship Innovation in Controls Award
-MA District Event- #17 seed in Quals, Winner(2168,3718,3146)
[2014 Season] - NE Championship Controls Award & Semi-finalists, District Controls Award, Creativity Award, & Finalists
-NE Championship- #36 seed in Quals, SemiFinalist(228,2168,3525), NE Championship Innovation in Controls Award
-RI District Event- #7 seed in Quals, Finalist(1519,2168,5163), Innovation in Controls Award
-Groton District Event- #9 seed in Quals, QuarterFinalist(2168, 125, 5112), Creativity Award
[2013 Season] - WPI Regional Winner - 1st Blue Banner
#13 | 16-11-2016, 08:25
KJaget | FRC #0900 (Zebravision Labs) | Mentor

Given that HSV requires a bunch of conditional code, it's going to be tough to vectorize. You could give our approach from last year a try:

Code:
    vector<Mat> splitImage;
    Mat         bluePlusRed;

    split(imageIn, splitImage);
    addWeighted(splitImage[0], _blue_scale / 100.0,
                splitImage[2], _red_scale / 100.0, 0.0,
                bluePlusRed);
    subtract(splitImage[1], bluePlusRed, imageOut);
This converts an RGB image into a grayscale one where higher grayscale values indicate more pure green. Pixels with lots of green and nothing else end up with high values. Pixels with no green end up with small values. Pixels with high green but also lots of blue and red likewise end up with low values. That last part filters out colors like white or yellow that have high G values but also high values in the other channels.

After that we did a threshold on the newly created single-channel image. We used Otsu thresholding to handle different lighting conditions but you might get away with a fixed threshold as in your previous code.

To make this fast you'd probably want to invert the red_scale and blue_scale multipliers so you could do an integer divide rather than converting to float and back, but you'd have to see which is quicker. You should be able to vdup them into all the uint8 lanes of a q register at the start of the loop and just reuse them. And be sure to use saturating math, because overflow/underflow would ruin the result.
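In scalar form, the weighted-and-saturated conversion above looks roughly like this (scale values use the same percent convention as the snippet; the function name is mine, and the clamps stand in for the saturating vector ops):

```cpp
#include <algorithm>
#include <cstdint>

// Per-pixel version of: subtract(G, addWeighted(B, blue/100, R, red/100)),
// with both the add and the subtract saturated rather than wrapping.
uint8_t greenScore(uint8_t r, uint8_t g, uint8_t b,
                   int redScale, int blueScale) {
    // addWeighted equivalent, saturated at 255.
    int bluePlusRed = std::min(255, b * blueScale / 100 + r * redScale / 100);
    // subtract equivalent, saturated at 0 (no underflow wraparound).
    return static_cast<uint8_t>(std::max(0, g - bluePlusRed));
}
```

A pure-green pixel keeps its full value, while a bright white pixel saturates the blue+red term and scores zero, which is the filtering behavior described above.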

Oh, and I had some luck getting the compiler to vectorize your C code when it was rewritten to match the ASM code: that is, set a mask to either 0 or 0xff and then AND the mask with the source. Be sure to mark the function args as __restrict__ to get this to work. The generated code was using d and q regs but seemed a bit sketchy otherwise; still, it might be fast enough that you could avoid coding in ASM.

Last edited by KJaget : 16-11-2016 at 08:27.
#14 | 16-11-2016, 10:10
Greg McKaskle | FRC #2468 (Team NI & Appreciate)

If you web-search for Zynq vision systems or Zynq vision processing, you will see that a number of companies and integrators use it for professional systems. The Zynq, from Xilinx, is the processor in the RoboRIO.

So I think my take on this is ...

You likely don't need 640x480. It is almost as if that were taken into account when the vision targets were designed.

You likely don't need 30 fps. Closing the loop with a slow, noisy sensor is far more challenging than with a fast, less-noisy one. Some avoid challenges; others double down.

The latency of image capture and processing is important to measure for any form of movement (robot or target). Knowing the latency is often good enough; minimizing it is of course better. If there isn't much movement, it matters far less.

The vision challenge has many solutions. Jaci has shown, and I think the search results also show, that many people are successful using the Zynq for vision. But it does take careful measurement and consideration of image capture and processing details.

By the way, folks typically go for color and color processing. This is easy to understand and teach, but it is worth pointing out that most industrial vision processing is done with monochrome captures.

Greg McKaskle
#15 | 16-11-2016, 10:28
adciv (One Eyed Man) | FRC #0836 (RoboBees) | Mentor

Quote:
Originally Posted by Greg McKaskle
By the way, folks typically go for color and color processing. This is easy to understand and teach, but it is worth pointing out that most industrial vision processing is done with monochrome captures.
Can you explain the reason for this? Are the systems designed from the start to use monochrome, or is the problem just worked until that's all that's necessary?
__________________
Quote:
Originally Posted by texarkana
I would not want the task of devising a system that 50,000 very smart people try to outwit.
The Chief Delphi Forums are sponsored by Innovation First International, Inc.


Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi