30fps Vision Tracking on the RoboRIO without Coprocessor

#16 | 11-14-2016, 10:39 PM
tcjinaz (Tim), Mentor, FRC #3853

No fair heading down so close to bare metal
__________________
Software Mentor
3853 Pridetronics


#17 | 11-15-2016, 07:28 AM
Gdeaver, Mentor, FRC #1640

No one knows what vision processing will be needed in the future. For this year we found that feeding the results of processing directly into a control loop did not work well. Instead, we take a picture and calculate the degrees of offset from the target, then use this offset and the IMU to rotate the robot. Take another frame and check that we are on target. If not, rotate and check again. If on target, shoot. We did not need a high frame rate and it worked very well. I'll note that our biggest problem was not the vision but the control loop to rotate the bot; there was a thread on this earlier.

We hosted MAR Vision Day this past weekend. It has become very apparent that most teams are struggling with vision. While it's nice to see work like this, I would like to see more of an effort to bring vision to the masses. GRIP helped a lot this year.
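
A minimal sketch of that loop in C++ (getTargetOffsetDegrees(), rotateByDegrees(), and shoot() are hypothetical stand-ins for team-specific vision and drivetrain code, and the tolerance is an assumed value):

Code:
    #include <cmath>

    // Hypothetical helpers, not real library calls.
    double getTargetOffsetDegrees();   // grab a frame, return offset to target
    void   rotateByDegrees(double d);  // closed-loop turn using the IMU
    void   shoot();

    void aimAndShoot() {
        const double kToleranceDeg = 1.0;  // assumed aiming tolerance
        const int    kMaxAttempts  = 5;    // bail out rather than loop forever
        for (int i = 0; i < kMaxAttempts; ++i) {
            double offset = getTargetOffsetDegrees();
            if (std::fabs(offset) < kToleranceDeg) {
                shoot();                   // on target
                return;
            }
            rotateByDegrees(offset);       // turn, then re-check with a fresh frame
        }
    }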

#18 | 11-16-2016, 08:25 AM
KJaget, Mentor, FRC #0900 (Zebravision Labs)

Given that HSV requires a bunch of conditional code, it's going to be tough to vectorize. You could give our approach from last year a try:

Code:
    #include <opencv2/opencv.hpp>
    using namespace cv;
    using namespace std;

    vector<Mat> splitImage;
    Mat         bluePlusRed;

    // Split the BGR input into per-channel images.
    split(imageIn, splitImage);
    // Weighted sum of blue (channel 0) and red (channel 2).
    addWeighted(splitImage[0], _blue_scale / 100.0,
                splitImage[2], _red_scale / 100.0, 0.0,
                bluePlusRed);
    // Subtract from green (channel 1) to get a "pure green" score.
    subtract(splitImage[1], bluePlusRed, imageOut);
This converts an RGB image into a grayscale one where higher grayscale values indicate more pure green. Pixels with lots of green and nothing else end up with high values. Pixels with no green end up with small values. Pixels with high green but also lots of blue and red also end up as low values. That last part filters out, say, white or yellow pixels that have high G values but also high values in the other channels.

After that we did a threshold on the newly created single-channel image. We used Otsu thresholding to handle different lighting conditions, but you might get away with a fixed threshold as in your previous code.
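
For reference, that thresholding step in OpenCV might look something like this (a sketch; greenScore stands for the single-channel output of the code above, and the function name is made up):

Code:
    #include <opencv2/opencv.hpp>

    // Otsu derives the threshold from the image histogram, so it adapts to
    // lighting. For a fixed threshold, drop THRESH_OTSU and pass the value
    // as the third argument instead of 0.
    cv::Mat otsuBinarize(const cv::Mat &greenScore) {
        cv::Mat binaryOut;
        cv::threshold(greenScore, binaryOut, 0, 255,
                      cv::THRESH_BINARY | cv::THRESH_OTSU);
        return binaryOut;
    }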

To make this fast you'd probably want to invert the red_scale and blue_scale multipliers so you could do an integer divide rather than converting to float and back - but you'd have to see which is quicker. You should be able to vdup them into all the uint8 lanes in a q register at the start of the loop and just reuse them. And be sure to use saturating math, because overflow/underflow would ruin the result.
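
An untested sketch of what that inner loop could look like in NEON intrinsics, with the general scale factors simplified to a power-of-two average so no multiply or divide is needed (function and parameter names are mine):

Code:
    #include <arm_neon.h>
    #include <stdint.h>

    // green - (blue + red)/2 over split channel buffers, 16 pixels per
    // iteration. Assumes n is a multiple of 16.
    void greenScore(const uint8_t *__restrict__ b, const uint8_t *__restrict__ g,
                    const uint8_t *__restrict__ r, uint8_t *__restrict__ out,
                    int n) {
        for (int i = 0; i < n; i += 16) {
            uint8x16_t vb = vld1q_u8(b + i);
            uint8x16_t vg = vld1q_u8(g + i);
            uint8x16_t vr = vld1q_u8(r + i);
            uint8x16_t sum = vhaddq_u8(vb, vr);     // (b + r) / 2 without overflow
            vst1q_u8(out + i, vqsubq_u8(vg, sum));  // saturates at 0, never wraps
        }
    }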

Oh, and I had some luck getting the compiler to vectorize your C code when it was rewritten to match the ASM code: that is, set a mask to either 0 or 0xff, then AND the mask with the source. Be sure to mark the function args as __restrict__ to get this to work. The generated code was using d and q regs but seemed a bit sketchy otherwise; still, it might be fast enough that you could avoid coding in ASM.
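
The rewrite being described might look roughly like this (my sketch, not the thread's actual code):

Code:
    #include <stdint.h>

    // Branchless threshold: build a 0x00/0xff mask, then AND it with the
    // source. __restrict__ promises the buffers don't alias, which the
    // compiler needs before it will auto-vectorize the loop.
    void thresholdMask(const uint8_t *__restrict__ src,
                       uint8_t *__restrict__ dst, uint8_t thresh, int n) {
        for (int i = 0; i < n; ++i) {
            uint8_t mask = (src[i] > thresh) ? 0xff : 0x00;  // vector compare
            dst[i] = src[i] & mask;                          // vector AND
        }
    }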

Last edited by KJaget : 11-16-2016 at 08:27 AM.

#19 | 11-16-2016, 10:10 AM
Greg McKaskle, FRC #2468 (Team NI & Appreciate)

If you web-search for Zynq vision systems or Zynq vision processing, you will see that a number of companies and integrators use it for professional systems. The Xilinx Zynq is the processor in the RoboRIO.

So I think my take on this is ...

You likely don't need 640x480. It is almost as if that were taken into account when the vision targets were designed.

You likely don't need 30 fps. Closing the loop with a slow, noisy sensor is far more challenging than with a fast, less-noisy one. Some avoid challenges; others double down.

The latency of the image capture and processing is important to measure for any form of movement (robot or target). Knowing the latency is often good enough; minimizing it is of course better. If there isn't much movement, it is far less important.

The vision challenge has many solutions. Jaci has shown, and I think the search results also show, that many people are successful using Zynq for vision. But this does take careful measurement and consideration of image capture and processing details.

By the way, folks typically go for color and color processing. This is easy to understand and teach, but it is worth pointing out that most industrial vision processing is done with monochrome captures.

Greg McKaskle

#20 | 11-16-2016, 10:28 AM
adciv, Mentor, FRC #0836 (RoboBees)

Quote:
Originally Posted by Greg McKaskle
By the way, folks typically go for color and color processing. This is easy to understand and teach, but it is worth pointing out that most industrial vision processing is done with monochrome captures.
Can you explain the reason for this? Are the systems designed from the start to use monochrome, or does it just work out that that's all that's necessary?
__________________
Quote:
Originally Posted by texarkana
I would not want the task of devising a system that 50,000 very smart people try to outwit.

#21 | 11-16-2016, 12:07 PM
Jared Russell, Engineer, FRC #0254 (The Cheesy Poofs) / FRC #0341 (Miss Daisy)

Quote:
Originally Posted by adciv
Can you explain the reason for this? Are the systems designed from the start to use monochrome, or does it just work out that that's all that's necessary?
Most industrial vision system applications use a pretty controlled background, so intensity-based detection and segmentation works well and has few false positives. Pointing a camera towards the ceiling in an arbitrary high school gym or sports arena is not as controlled, so you often need to use other cues to differentiate the target from the background. These cues could include color, shape, size, etc.

#22 | 11-16-2016, 12:13 PM
Jared Russell, Engineer, FRC #0254 (The Cheesy Poofs) / FRC #0341 (Miss Daisy)

Quote:
Originally Posted by KJaget
Given that HSV requires a bunch of conditional code, it's going to be tough to vectorize.
Full HSV requires evaluating conditionals to compute hue, but if you use, e.g., a green LED ring, you can pretty safely assume that if green is not the most abundant component of a given pixel, then hue is irrelevant; the pixel is likely not part of the target.
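
A whole-image version of that check might look like this sketch in OpenCV (mine, not Jared's code; assumes the usual BGR channel order):

Code:
    #include <opencv2/opencv.hpp>

    // Mask of pixels where green is strictly the dominant channel; the
    // hue computation is skipped entirely.
    cv::Mat greenDominantMask(const cv::Mat &bgr) {
        std::vector<cv::Mat> ch;
        cv::split(bgr, ch);                    // ch[0]=B, ch[1]=G, ch[2]=R
        cv::Mat gtBlue, gtRed, mask;
        cv::compare(ch[1], ch[0], gtBlue, cv::CMP_GT);
        cv::compare(ch[1], ch[2], gtRed, cv::CMP_GT);
        cv::bitwise_and(gtBlue, gtRed, mask);  // 255 where G beats both B and R
        return mask;
    }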

#23 | 11-16-2016, 12:22 PM
Jaci (Jaci R Brunning), Mentor, FRC #5333 (Can't C# | OpenRIO)

Quote:
Originally Posted by KJaget
Given that HSV requires a bunch of conditional code, it's going to be tough to vectorize.
I managed to implement an RGB->HSV colour space conversion using the VBIT (bitwise insert if true) instruction in combination with VCGT (compare greater than). VBIT is a way to vectorize branching operations, to a certain degree.
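
In intrinsics form, the general pattern looks roughly like this (my sketch of the compare-and-select idiom, not Jaci's actual conversion code):

Code:
    #include <arm_neon.h>

    // Branchless per-lane select: where a > b take x, otherwise take y.
    // vcgtq_u8 emits VCGT (each lane becomes 0xff or 0x00) and vbslq_u8
    // maps onto the VBSL/VBIT/VBIF bitwise-select family.
    uint8x16_t selectGreater(uint8x16_t a, uint8x16_t b,
                             uint8x16_t x, uint8x16_t y) {
        uint8x16_t mask = vcgtq_u8(a, b);
        return vbslq_u8(mask, x, y);
    }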

I'll do a write-up on this at some point, but I've got a lot on my plate over the next two weeks and I need to clean up the code a bit.
__________________
Jacinta R

Curtin FRC (5333+5663) : Mentor
5333 : Former [Captain | Programmer | Driver], Now Mentor
OpenRIO : Owner

Website | Twitter | Github
jaci.brunning@gmail.com

#24 | 11-16-2016, 12:32 PM
Jaci (Jaci R Brunning), Mentor, FRC #5333 (Can't C# | OpenRIO)

Quote:
Originally Posted by Greg McKaskle
You likely don't need 640x480. It is almost as if that were taken into account when the vision targets were designed.

You likely don't need 30 fps. Closing the loop with a slow, noisy sensor is far more challenging than with a fast, less-noisy one. Some avoid challenges; others double down.
I agree that the 640x480 resolution isn't needed; however, I used it as a test case to provide more of a worst-case scenario (i.e. "hey, look at what I can do"). I'll write another post some time in the future testing more 'applicable' vision target sizes, as well as other things discussed in this thread.

However, I disagree about not needing 30fps. Vision targeting in FRC is very hit-or-miss. While a high update rate might not be needed for alignment operations, I find that sending the driver station a 30fps ("natural framerate") outline of what targets have been found is quite useful. For example, this year I sent the bounding boxes of the contours our vision system found back to the driver station. This gave the driver some feedback about how accurately we were lined up (and they could adjust if necessary), and it took next to no bandwidth, since we were only sending back a very small amount of data 30 times a second (per contour). This was insanely useful, and you can see it if you watch our matches (we implemented it between the Australia Regional and Champs).
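
A sketch of that telemetry idea (not 5333's actual code; the table and key names are made up, and a PutNumberArray-style NetworkTables call is assumed to be available):

Code:
    #include <vector>
    #include <opencv2/opencv.hpp>
    #include "networktables/NetworkTable.h"

    // Flatten each contour's bounding box into four doubles and publish
    // the lot: a handful of numbers per frame instead of a video stream.
    void publishBoxes(const std::vector<std::vector<cv::Point>> &contours) {
        std::vector<double> flat;
        for (const auto &c : contours) {
            cv::Rect r = cv::boundingRect(c);
            flat.push_back(r.x);
            flat.push_back(r.y);
            flat.push_back(r.width);
            flat.push_back(r.height);
        }
        auto table = NetworkTable::GetTable("vision");
        table->PutNumberArray("boxes", flat);
    }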

#25 | 11-16-2016, 12:33 PM
Jaci (Jaci R Brunning), Mentor, FRC #5333 (Can't C# | OpenRIO)

Quote:
Originally Posted by tcjinaz
No fair heading down so close to bare metal
It's only unfair if I say I've done it and then leave the whole thing closed source.

#26 | 11-16-2016, 05:06 PM
euhlmann (Erik Uhlmann), Leadership, FRC #2877 (LigerBots)

Quote:
Originally Posted by Jaci
It's only unfair if I say I've done it and then leave the whole thing closed source.
Then you'd just need to make as many things private as you possibly could (except for a single "private" undocumented function that you let WPILib use), document a bunch of error codes but return undocumented ones on errors, and turn your whole project into proprietaryception, and you'd have NIVision.
__________________
Creator of SmartDashboard.js, an extensible nodejs/webkit replacement for SmartDashboard


https://ligerbots.org

#27 | 11-18-2016, 08:28 AM
Greg McKaskle, FRC #2468 (Team NI & Appreciate)

Quote:
Originally Posted by euhlmann
Then you'd just need to make as many things private as you possibly could (except for a single "private" undocumented function that you let WPILib use), document a bunch of error codes but return undocumented ones on errors, and turn your whole project into proprietaryception, and you'd have NIVision.
Good job with the hyperbole! At least I hope that isn't how you think things intentionally worked out.

WPILib did a poor job of wrapping NIVision (work NOT done by NI, by the way). The history is that a few folks tried to make a dumbed-down version for the first year, and it was a dud. Then some students hacked together a small class library of wrappers, but the hack showed through.

That doesn't mean NIVision, the real product, is undocumented or trying to be sneaky. NI publishes three language wrappers for NIVision (.NET, C, and LV). The documentation for NIVision is located at C:\Program Files (x86)\National Instruments\Vision\Documentation, and one level up are lots of samples, help files, utilities, etc. If the same people had done the wrappers on top of OpenCV, it would have been just as smelly. Luckily, good people are involved in doing this newer version of vision for WPILib. But I see no reason to make NIVision the bad guy here.

If you choose to ignore the WPILib vision stuff and code straight to the NIVision libraries from NI, I think you'll find it a much better experience. That is what LV teams do, by the way: LV-WPILib has wrappers for the camera but none for image processing, so they just use NIVision directly.

If my time machine batteries were charged up, I guess it would be worth trying to fix the timeline. But I'm still worried about the kids, Marty.

Greg McKaskle

#28 | 11-18-2016, 09:22 AM
Greg McKaskle, FRC #2468 (Team NI & Appreciate)

Quote:
Originally Posted by adciv
Can you explain the reason for this? Are the systems designed from the start to use monochrome, or does it just work out that that's all that's necessary?
Both.

If you can set the camera where it will simplify your task, and put the targets where it will simplify your task, you can simplify the system, lower cost, increase effectiveness, increase throughput, etc. The FRC robot field is not nearly as controllable or predictable, but it is beneficial to spend some time thinking about what you can control.

Also, monochrome cameras can have about 3x the frame rate at the same resolution, or higher resolution at the same frame rate. They can have higher sensitivity, allowing faster exposures. Monochrome doesn't require a broad spectrum of lighting or capture: lasers are already monochrome, and filters on your lens or light source make it narrower. Lenses don't have to deal with different refraction at different wavelengths.

The first step most teams' code performs is an HSL threshold -- turning an RGB image into a binary/monochrome one.
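
In OpenCV terms (which names the colour space HLS), that first step might look like this sketch; the ranges are made-up values for a green LED ring, not calibrated numbers:

Code:
    #include <opencv2/opencv.hpp>

    // RGB frame in, binary mask out: pixels inside the H/L/S window
    // become 255, everything else 0.
    cv::Mat hslThreshold(const cv::Mat &bgrFrame) {
        cv::Mat hls, binary;
        cv::cvtColor(bgrFrame, hls, cv::COLOR_BGR2HLS);
        cv::inRange(hls, cv::Scalar(50, 40, 100),    // low  H, L, S
                         cv::Scalar(90, 255, 255),   // high H, L, S
                    binary);
        return binary;
    }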

So I'm not saying monochrome is better, but it is different, powerful, and common. My point is that color cameras aren't a requirement for a working solution, and there are benefits and new challenges in each approach.

As for frame rate: 30fps is based on a human perception threshold. Industrial cameras, and SLRs for that matter, operate at many different exposures and rates. If the 30 fps is to align with a driver feedback mechanism, then it is a good choice. If it is to align with a control feedback mechanism, slower but more accurate may be better, or far faster may be needed. The task should define the requirements; then you do your best to achieve them with the tools you have. It is exciting to see folks reevaluate and sharpen the tools.

Greg McKaskle

#29 | 11-20-2016, 10:06 PM
tcjinaz (Tim), Mentor, FRC #3853

Quote:
Originally Posted by Greg McKaskle
If you web-search for Zynq vision systems or Zynq vision processing, you will see that a number of companies and integrators use it for professional systems. The Xilinx Zynq is the processor in the RoboRIO.

<snip>

Greg McKaskle
So does the FPGA in the FRC RoboRIO have some vision processing in hardware? Jaci seems to be operating (to good effect) at the ARM assembly-language level. Is there something of use in the programmable part of the Zynq? If not, this could be the project I'm looking for to give teams some access to the leftover gates in the FPGA, if there are any.

Tim

#30 | 11-21-2016, 09:55 AM
Greg McKaskle, FRC #2468 (Team NI & Appreciate)

The Zynq architecture has a hard ARM CPU and an FPGA on a single chip. The ARM is completely open because the safety code for FRC has been pushed into the hard-realtime FPGA. It is not possible with current tools to easily allow a partial FPGA update, so the FPGA is static for FRC during the regular season. If you want to use the tools to change it in the offseason, go for it.

The FRC FPGA doesn't currently have any vision processing code in it. It wasn't a priority compared to accumulators, PWM generators, I2C, SPI, and other bus implementations. If you get specific enough about how you want the images processed, I suspect there are some gates to devote. But many times the advantage of using an FPGA is to make a highly parallel, highly pipelined implementation, and that can take many, many gates. And if the algorithm isn't exactly what you need, you are back to the CPU.

So, with today's tools, CPU, GPU, and FPGA are all viable ways to do image processing. All have advantages, and all are challenging. There are many ways to solve the FRC image processing challenges, and none of them are bullet-proof. That is part of what makes it a good fit.

Greg McKaskle