#1 | 11-08-2016, 10:25 PM
The Ginger
#GingerPower
FRC #5464 (BluejacketRobotics)
Team Role: Driver
 
Join Date: Jan 2016
Rookie Year: 2015
Location: Cambridge, MN
Posts: 61
Vision Tracking?

Hey CD, I was just wondering... how in the world does vision tracking work? My team attempted primitive vision tracking in the 2016 season (our second year), but with no success. I am not asking for your code, which everyone seems to cling to like the One Ring (however, I won't turn it down), but the theory and components that make it tick. What are the best cameras? Do you write code to recognize a specific pattern of pixels (which would blow my mind), or to pick up a specific voltage value that the camera uses to quantify the net light hitting its receiver? Our team did well in 2016 with a solid shooter; I can only imagine how it would have done with some assisted aiming. Thank you all, and good luck January 7th!

Disclaimer: I just design and oversee final assembly; I am in no way a programmer. However, our programmers will be taking a look at this.
__________________

"The difficult we do today, the Impossible tomorrow, Miracles by appointment only."
"Theory is a nice place, I'd like to go there one day, I hear everything works there."
"Maturity is knowing you were an idiot, common sense is trying to not be an idiot, wisdom is knowing that you will still be an idiot."
"I have approximate knowledge of many things."

#2 | 11-08-2016, 10:54 PM
Andrew Schreiber
Data Nerd
FRC #0079
 
Join Date: Jan 2005
Rookie Year: 2000
Location: Misplaced Michigander
Posts: 4,049
Re: Vision Tracking?

Quote:
Originally Posted by The Ginger
How in the world does vision tracking work? [...] I am not asking for your code ... but the theory and components that make it tick.
I'll do a quick outline for you; I'm working on something more in-depth, though. (A rough code sketch follows the outline.)

1) Acquire Image (most any camera will work)
2) Filter Image to just the target color (HSV filter)
3) Identify contours (findContours in OpenCV)
4) Eliminate extra contours (filter by aspect ratio or just size)
5) Find Centroid of contour and compute range and angle to target
6) Align robot
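
In Python/OpenCV terms, steps 1-5 look roughly like the sketch below. This is not drop-in code: the HSV bounds and minimum area are hypothetical numbers for a green LED ring, and you would tune them against your own images.

Code:
import cv2
import numpy as np

# Hypothetical HSV bounds for a green LED ring; tune these on real images.
LOWER = np.array([60, 100, 100])
UPPER = np.array([90, 255, 255])

def find_target(frame):
    # 2) Filter the image down to just the target color.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # 3) Identify contours (OpenCV 4 signature; OpenCV 3 returns an extra value).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # 4) Eliminate extra contours: drop anything too small to be the target.
    contours = [c for c in contours if cv2.contourArea(c) > 100]
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)
    # 5) Centroid of the contour, from its image moments.
    m = cv2.moments(target)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)  # 1) Acquire an image (most any camera works).
ok, frame = cap.read()
if ok:
    print(find_target(frame))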

#3 | 11-08-2016, 10:55 PM
AlexanderTheOK
Guy
no team
 
Join Date: Jan 2014
Rookie Year: 2012
Location: Los Angeles
Posts: 146
Re: Vision Tracking?

Thankfully, documentation on this stuff is better than ever! The ScreenSteps docs should get you started well enough.

(PS: Googling "FRC vision tracking" or "FRC vision processing" returns this as the first link. Being able to Google things well is an essential skill for any profession relating even tangentially to computers, and is one worth developing. It saves you the time you spend waiting for my response, and me the time it takes to write this response.)
#4 | 11-08-2016, 11:11 PM
Alsch
Registered User
no team
 
Join Date: Oct 2014
Location: Underneath Canada
Posts: 5
Re: Vision Tracking?

So you're developing your own computer vision system? A bold undertaking. For a short answer: the hardware is really not as critical an element as the software. As far as vision tracking goes, what specifically are you trying to track, if anything? Depending on the intended application, object tracking can range from simple edge detection (think line-following robots) to more complex recognition of colors and patterns (e.g., face detection, or tracking a specific object by color).

There are a few prominent vision sensor projects out there like PixyCam and OpenMV (in fact I just came from a thread discussing one of them) that y'all could look into to see how they pull it off.
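
To show how simple the simple end can be, here is an edge-detection sketch in Python/OpenCV; the filename and thresholds are illustrative only and would need tuning for a real camera.

Code:
import cv2

frame = cv2.imread("sample.jpg")           # any test image you have on disk
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)           # low/high thresholds: tune these
cv2.imwrite("edges.jpg", edges)            # inspect the result by eye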

#5 | 11-08-2016, 11:21 PM
GeeTwo
Technical Director
AKA: Gus Michel II
FRC #3946 (Tiger Robotics)
Team Role: Mentor
 
Join Date: Jan 2014
Rookie Year: 2013
Location: Slidell, LA
Posts: 3,493
Re: Vision Tracking?

Caveat: I had nothing to do with writing any of this; it was a pair of our student members in 2013, and the code has been online ever since. We used an IP camera on the robot network feeding a Raspberry Pi, which sent minimal targeting information to the 'RIO; the 'RIO then passed it over the network to the driver station. The code is all on our GitHub (and has been for years): https://github.com/frc3946/PyGoalFinder holds the Raspberry Pi side (written by Matt Condon), and several of our robot codebases are designed to consume its data, the earliest (and probably cleanest) being https://github.com/frc3946/UltimateAscent, written by our founder and my son Gixxy.

Possibly our newer code also works, but our robot's performance has not convinced me of it; we were rock solid in 2013 and quite good in 2014, but I did not see evidence of good end-to-end targeting code in 2015 (when we didn't really try, because there were no targets in appropriate locations) or 2016.
__________________

If you can't find time to do it right, how are you going to find time to do it over?
If you don't pass it on, it never happened.
Robots are great, but inspiration is the reason we're here.
Friends don't let friends use master links.

#6 | 11-09-2016, 12:27 AM
Ben Wolsieffer
Dartmouth 2020
AKA: lopsided98
FRC #2084 (Robots by the C)
Team Role: Alumni
 
Join Date: Jan 2011
Rookie Year: 2011
Location: Manchester, MA (Hanover, NH)
Posts: 516
Re: Vision Tracking?

Quite a few teams have published their vision code. This is the most complete list of FRC code I know of: https://firstwiki.github.io/wiki/robot-code-directory
__________________



2016 North Shore District - Semifinalists and Excellence in Engineering Award
2015 Northeastern University District - Semifinalists and Creativity Award
2014 Granite State District - Semifinalists and Innovation in Control Award
2012 Boston Regional - Finalists
#7 | 11-09-2016, 12:54 AM
kylelanman
Programming Mentor
AKA: Kyle
FRC #2481 (Roboteers)
Team Role: Mentor
 
Join Date: Feb 2008
Rookie Year: 2007
Location: Tremont, IL
Posts: 185
Re: Vision Tracking?

Here is a fairly simple system, written in Python using OpenCV and NetworkTables, that carried us to a successful 2016 season.

https://github.com/Frc2481/paul-buny.../master/Camera
__________________
"May the coms be with you"

Is this a "programming error" or a "programmer error"?

#8 | 11-09-2016, 09:28 AM
euhlmann
CTO, Programmer
AKA: Erik Uhlmann
FRC #2877 (LigerBots)
Team Role: Leadership
 
Join Date: Dec 2015
Rookie Year: 2015
Location: United States
Posts: 296
Re: Vision Tracking?

The physical premise of vision in most FRC games is detecting light that you send out with an LED ring, which bounces off retroreflective tape back to your camera. Retroreflective tape is a material with the property that incoming light bounces back toward its source, instead of reflecting off at an angle (like you'd expect from a mirror). That means no matter where you are, if you shine light at it, you get light back.

Quote:
Originally Posted by The Ginger
What are the best cameras
Anything with sufficient resolution and adjustable exposure is fine. Exposure matters because you need to set it low enough that the camera sensor isn't flooded with light from the retroreflective tape.
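
If your camera driver supports it, you can sometimes set exposure straight from OpenCV. This is only a sketch: property support and value conventions vary by backend, so check what your particular camera actually honors.

Code:
import cv2

cap = cv2.VideoCapture(0)
# Value conventions are backend-specific; on V4L2, 1 often means "manual mode".
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)
# Pick a low exposure so the retroreflective return dominates the image.
cap.set(cv2.CAP_PROP_EXPOSURE, -7)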

Quote:
Originally Posted by The Ginger
do you write code to recognize a specific pattern of pixels (which would blow my mind), or to pick up a specific voltage value that the camera uses as a quantification of the net light picked up by the camera's receiver
Those are the same thing. In the camera sensor, incoming light generates a signal (a voltage). The array of signals is turned into an array of RGB colors; that is, an image. The premise of computer vision is detecting and tracking patterns in that image.

In the case of FRC, the retroreflective tape in the image will be much brighter and a different color than everything else (yes, you need to choose your LED ring color carefully, and the right choice depends on the game: red and blue were bad choices given the tower LED strips in Stronghold, while green is a good one), so it's possible to detect with HSV filtering. HSV is a color space based on hue (color), saturation (grayscale to full color), and value (all black to full brightness). Using three filtering ranges, one each for hue, saturation, and value, you can pick out the pixels of interest, which should be the ones from the retroreflective tape. Then you need to filter out noise, since camera images aren't perfect; this is usually accomplished by picking the largest continuous blob of filtered pixels.

Now you have a shape that corresponds to the target. You can use it to accomplish what you need to, e.g. a closed-loop turn until the center of the shape lines up with the center of the image frame (lining up the robot to the target).

In Stronghold we took it one step further and calculated both distance and yaw angle (using a lot of statistics and NI Vision), so the robot could quickly line up using the onboard gyroscope (a much more efficient closed-loop turn, because the gyro has faster feedback) and adjust the shooter for the distance needed to shoot. This preseason we're taking it another step further by working on calculating the full 3D position and rotation relative to the target using OpenCV (where it's actually a whole lot easier than in NI Vision). Hopefully the vision component in Steamworks won't be as useless as in 2015. A rough sketch of the yaw calculation is below.
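
To make the yaw part concrete, here is a sketch of getting an angle from the target centroid's x coordinate. The image width and horizontal FOV are placeholder numbers; use your camera's actual specs.

Code:
import math

IMAGE_WIDTH = 640   # pixels; placeholder
HFOV_DEG = 60.0     # horizontal field of view; check your camera's datasheet

def yaw_to_target(target_x):
    # Pinhole model: recover the focal length in pixels from the FOV,
    # then convert the pixel offset from image center into an angle.
    focal_px = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(HFOV_DEG / 2.0))
    offset_px = target_x - IMAGE_WIDTH / 2.0
    return math.degrees(math.atan(offset_px / focal_px))

That angle is exactly what you hand to a gyro-based turn, since the gyro closes the loop much faster than the camera can.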
__________________
Creator of SmartDashboard.js, an extensible nodejs/webkit replacement for SmartDashboard


https://ligerbots.org

#9 | 11-09-2016, 11:59 AM
Greg McKaskle
Registered User
FRC #2468 (Team NI & Appreciate)
 
Join Date: Apr 2008
Rookie Year: 2008
Location: Austin, TX
Posts: 4,748
Re: Vision Tracking?

There are many ways to make an omelette, so I'll give my version of this.

1) Acquire Image
2) Process Image to emphasize what you care about over what you don't
3) Make measurements to grade the potential targets
4) Pick a target that you are going to track
5) Make 2D measurements that will help you determine 3D location
6) Adjust your robot

1. Acquire Image
This is actually where lots of teams have trouble. The images can be too bright, too dark, blurry, blocked by external or internal mechanisms, etc. It is always good to log some images to test your code against and to use the images provided at kickoff. Being able to view and calibrate your acquisition to adjust to field conditions is very important. The white paper about FRC Vision contains a number of helpful techniques regarding image acquisition.
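
As a sketch of the logging idea (the filenames and interval here are arbitrary choices): dump an occasional frame to disk during practice so you have real field images to calibrate against later.

Code:
import time
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if ok:
        # Timestamped filename so frames sort chronologically.
        cv2.imwrite("vision_log_%d.jpg" % int(time.time()), frame)
    time.sleep(5)  # one frame every few seconds is plenty for calibration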

2. Process Image
This is commonly a threshold filter, but can be an edge detector or any processing that declutters the image and makes it more efficient to process and easier to test. HSV, HSI, or HSL are pretty accurate ways to do this, but it can be done in RGB or even just using intensity on a black and white image. You can also use strobes, IR lighting, polarized lighting, and other tricks to do some of the processing in the analog world instead of digital.

3. Make Measurements
For NI Vision, this is generally a Particle Report. It can give size, location, perimeter, roundness, angles, aspect ratios, etc. for each particle. Pick the measures that can help qualify or disqualify things in the image: for example, that particle is too small, that one is too large, that one is just right. But rather than a Boolean output, I find it useful to give each measure a score (0 to 100%) and then combine those strategically in the next step. This is where the folder of sample images pays off. You get to tweak and converge so that, given the expected images, it has a reasonably predictable success rate. (A sketch of the scoring idea follows.)
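
To illustrate the scoring idea in OpenCV terms (a sketch: the "ideal" numbers below are rough guesses for the 2016 goal, and the equal weights are a tuning choice, not anything official):

Code:
import cv2

def ratio_score(measured, ideal):
    # 100 when measured == ideal, falling off smoothly on either side.
    if measured <= 0 or ideal <= 0:
        return 0.0
    r = measured / float(ideal)
    return 100.0 * min(r, 1.0 / r)

def grade_particle(contour):
    x, y, w, h = cv2.boundingRect(contour)
    area = cv2.contourArea(contour)
    # Guessed ideals: the 2016 goal opening is wider than tall (~20x14),
    # and its U-shaped tape fills roughly a third of its bounding box.
    aspect = ratio_score(w / float(h), 20.0 / 14.0)
    fullness = ratio_score(area / (w * float(h)), 1.0 / 3.0)
    return 0.5 * aspect + 0.5 * fullness

Rank every particle with grade_particle and the next step is just picking the best one.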

4. Pick a Target
Rank and select the element that your code considers the best candidate.

5. Determine 3D Location
The location of an edge, or the center of the particle can sometimes be enough to correlate to distance and location. Area is another decent approximation of distance. And of course if you want to, you can identify corners and use the distortion of the known shape to solve for location.
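
For instance, once you know the target's real-world width, its apparent width in pixels gives you range under a simple pinhole model. A sketch, with placeholder camera numbers:

Code:
import math

TARGET_WIDTH_IN = 20.0  # real-world width of the target; placeholder value
IMAGE_WIDTH = 640       # pixels; placeholder
HFOV_DEG = 60.0         # placeholder; use your camera's real FOV

def range_to_target(pixel_width):
    # Pinhole model: apparent size shrinks linearly with distance.
    focal_px = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(HFOV_DEG / 2.0))
    return TARGET_WIDTH_IN * focal_px / pixel_width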

6. Adjust your Robot
Use the 3D info to adjust your robot's orientation, location, flywheel, or whatever makes sense to act on a target at that location in 3D space relative to your robot. Often this simplifies to -- turn so the target is in the center of the camera image, drive forward to a known distance, or adjust the shooter to the estimated distance.
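
The "turn so the target is in the center" case can be as simple as a proportional loop. In this sketch the gain, deadband, and clamp are all hypothetical tuning values:

Code:
def turn_output(target_x, image_width=640, kp=0.005, deadband_px=5):
    # Pixels off-center become a motor command; sign picks the turn direction.
    error = target_x - image_width / 2.0
    if abs(error) < deadband_px:
        return 0.0  # close enough -- stop turning
    return max(-0.4, min(0.4, kp * error))  # clamp so the robot can't spin wildly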

From my experience, #1 is hard due to changing lighting and environment, inefficient or unsuccessful calibration procedures, and lack of data to adjust the camera well. #2 through #5 have lots of example code and tools to help process reasonably good images. #6 can be hard as well, and really depends on the robot construction and sensors. Closing the loop with only a camera is tricky, because cameras are slow and often noisy due to mounting and measurement conditions.

So yes, vision is not an easy problem, but if you control a few key factors it can be solved pretty well by the typical FRC team, and there are many workable solutions. That makes it a pretty good challenge for FRC, IMO.

Greg McKaskle
#10 | 11-09-2016, 01:35 PM
lethc
#gkccurse
AKA: Becker Lethcoe
FRC #1806 (S.W.A.T.)
Team Role: Alumni
 
Join Date: Nov 2012
Rookie Year: 2013
Location: Smithville, MO
Posts: 118
Re: Vision Tracking?

We used a program called TowerTracker last year to find the goal, modified slightly to fit our needs. The program ran on our driver station: it received a video stream from the robot (mjpg-streamer) and sent data (angle, etc.) back to the robot over NetworkTables.

Github link
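
The "sent data back over NetworkTables" part can be very small on the driver-station side. A sketch using pynetworktables; the server address and key names are made up for illustration:

Code:
from networktables import NetworkTables

# Hypothetical addressing -- substitute your own roboRIO hostname or IP.
NetworkTables.initialize(server="roborio-1806-frc.local")
vision_table = NetworkTables.getTable("vision")

def publish(angle_deg, distance_in):
    # The robot program reads these keys and acts on them.
    vision_table.putNumber("angle", angle_deg)
    vision_table.putNumber("distance", distance_in)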
__________________
2016: Greater Kansas City Regional Finalists, Oklahoma Regional Winners, Tesla Semifinalists, IRI Quarterfinalists
2015: Greater Kansas City Regional Finalists, Oklahoma Regional Winners, Tesla Quarterfinalists, IRI Winners
2014: Central Illinois Regional Quarterfinalists, Greater Kansas City Regional Finalists, Newton Semifinalists
2013: Greater Kansas City Regional Winners, Oklahoma Regional Winners, Galileo Quarterfinalists
#11 | 11-09-2016, 04:59 PM
The Ginger
#GingerPower
FRC #5464 (BluejacketRobotics)
Team Role: Driver
 
Join Date: Jan 2016
Rookie Year: 2015
Location: Cambridge, MN
Posts: 61
Re: Vision Tracking?

Thank you all for the helpful info. I can't wait to show it to my teammates; our golf ball shooter for this year's game will never miss a shot.
__________________

"The difficult we do today, the Impossible tomorrow, Miracles by appointment only."
"Theory is a nice place, I'd like to go there one day, I hear everything works there."
"Maturity is knowing you were an idiot, common sense is trying to not be an idiot, wisdom is knowing that you will still be an idiot."
"I have approximate knowledge of many things."
#12 | 11-10-2016, 10:23 AM
KJaget
Zebravision Labs
FRC #0900
Team Role: Mentor
 
Join Date: Dec 2014
Rookie Year: 2015
Location: Cary, NC
Posts: 35
Re: Vision Tracking?

I'll add a plug for my students' work as well: https://www.chiefdelphi.com/media/papers/3267

Hopefully this paper is a good overview of how things work without making you read the code. But you can read the code as well - links in the paper.
#13 | 11-11-2016, 11:36 PM
BenBernard
Registered User
FRC #5687 (The Outliers)
Team Role: Mentor
 
Join Date: Jan 2016
Rookie Year: 2015
Location: Portland, ME
Posts: 36
Re: Vision Tracking?

I highly recommend watching this video from Team 254 (https://www.youtube.com/watch?v=rLwO...ature=youtu.be) and reading through the presentation directly (https://docs.google.com/presentation...lse#slide=id.p).

Thanks to Jared Russell and Tom Bottiglieri for sharing their experience!