#1
08-01-2017, 15:34
The Doctor
Robotics is life
AKA: Hackson
FRC #3216 (MRT)
Team Role: Programmer
 
Join Date: Mar 2014
Rookie Year: 2013
Location: United States
Posts: 158
How to use vision targets?

Our team is going to attempt vision targeting this year, and we're wondering how other teams have gotten the targets to show up reliably on a camera. One approach I've seen is shining a specifically colored light (such as green) at the targets to make them easier for the program to distinguish. Another, more complicated option uses an infrared camera and infrared lights to make the reflective tape shine in IR.

Have any teams used either of these in the past? Or has anyone had good luck targeting the tape without any sort of assistance?
__________________
Robots + Python + pentesting == me;
Blog ~ GitHub ~ Keybase
If you have a pressing issue to discuss with me, kik me at slush.puddles since I don't check CD very often.

#2
08-01-2017, 16:12
nardavin
Registered User
FRC #2403 (Plasma Robotics)
Team Role: Programmer
 
Join Date: Mar 2014
Rookie Year: 2014
Location: Gilbert, Arizona
Posts: 29
Re: How to use vision targets?

I would recommend that you start here: https://wpilib.screenstepslive.com/s/4485/m/24194
__________________
Team 2403: Plasma Robotics

Member 2014-2017, Head Programmer 2015-2016, Leadership Team 2016-2017, Co-President 2017



#3
08-01-2017, 16:34
Xanawatt
Registered User
FRC #1024
 
Join Date: May 2015
Location: Indianapolis, Indiana
Posts: 24
Re: How to use vision targets?

What we did last year was take a camera that isn't specifically designed for IR and put an IR filter lens on it. Then we just got IR LEDs, surrounded the camera with them, and pointed them at the target.

#4
08-01-2017, 22:01
SamCarlberg
GRIP, WPILib. 2084 alum
FRC #2084
Team Role: Mentor
 
Join Date: Nov 2015
Rookie Year: 2009
Location: MA
Posts: 159
Re: How to use vision targets?

Quote:
Originally Posted by nardavin
I would recommend that you start here: https://wpilib.screenstepslive.com/s/4485/m/24194
Make sure you pay attention to GRIP and the official vision examples. The examples come with a GRIP pipeline that finds the vision targets on the upper ring using a green ring light (you'll need to change the image source to point to the folder on your computer to get it working).
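
For a sense of what that kind of pipeline does under the hood, here's a rough plain-OpenCV sketch in Python (not the generated GRIP code) that loops over a folder of the sample images and applies an HSV threshold for a green ring light. The folder path and HSV bounds are placeholders to tune against the WPI images.

Code:
# Rough sketch only -- plain OpenCV, not the generated GRIP pipeline.
# The folder path and HSV bounds are placeholders; tune them on the sample images.
import glob
import cv2

SAMPLE_DIR = "Vision Images/LED Peg"     # wherever you unpacked the sample images
LOWER_GREEN = (60, 100, 100)             # guessed HSV lower bound for a green ring light
UPPER_GREEN = (90, 255, 255)             # guessed HSV upper bound

for path in sorted(glob.glob(SAMPLE_DIR + "/*.jpg")):
    img = cv2.imread(path)
    if img is None:
        continue                          # skip anything that isn't a readable image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    # Quick sanity check: how many pixels survived the threshold in each image
    print(path, cv2.countNonZero(mask))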
__________________
WPILib
GRIP, RobotBuilder

#5
08-01-2017, 22:57
jee7s
Texan FIRSTer, ex-frc2789, ex-frc41
AKA: Jeffrey Erickson
FRC #6357
 
Join Date: Nov 2007
Rookie Year: 1997
Location: Dripping Springs, TX
Posts: 319
Re: How to use vision targets?

Quote:
Originally Posted by The Doctor
Our team is going to attempt vision targeting this year, and we're wondering how other teams have gotten the targets to show up reliably on a camera. One approach I've seen is shining a specifically colored light (such as green) at the targets to make them easier for the program to distinguish. Another, more complicated option uses an infrared camera and infrared lights to make the reflective tape shine in IR.

Have any teams used either of these in the past? Or has anyone had good luck targeting the tape without any sort of assistance?
Full disclosure: I work for Thorlabs, where my job is dealing with cameras, optics, and photonics. I point to some thorlabs.com webpages below purely for reference. You don't want to buy high quality optics for FRC. You don't need them, and they are too expensive to use on your robot under the rules.

A few points...

First, specifically regarding the light: more important than the wavelength is the placement of the light source. You want the light source to be as close to the detector as possible. Why? Because retroreflectors bounce light back toward the source. There's an image in the WPI documentation that reflects that. The tape used in the competition is a sphere-type retroreflector, so it doesn't affect polarization the way a corner cube does (https://www.thorlabs.com/newgrouppag...ctgroup_id=145 see the 'Lab Facts' tab). There is a change in polarization as the wave bounces around the sphere retroreflector, but it's not as definite as with a corner cube.

So, on to green vs. IR. IR is nice because the light is invisible to humans: you can put a pretty bright IR source out there without seeing it and without it being damaging to the human eye. This works on many sensors because the silicon used to build them is sensitive to IR. One aspect of that sensitivity is the quantum efficiency (QE), the likelihood that the sensor converts a photon into an electron. A typical QE curve is here: https://www.thorlabs.com/images/TabI...ciency_780.gif

Note from that QE curve that there's a lower response in IR (on the right side >750nm), but keep in mind that you can blast the target with IR photons.

There's a catch, though. Most color cameras have some amount of IR blocking in them. The high-quality ones have a pretty hard cut at about 750 nm. That's because the filters used to create the RGB Bayer pattern are all pretty transmissive in the IR range: they are good at separating red, green, and blue, but they all pass IR. So the camera manufacturer puts an IR-blocking filter in front of the sensor to get good color reproduction. To test this on your camera, point a TV remote (anyone remember the 2008 season?) at the camera, press a button, and see whether the emitter lights up white in the image. If it does, your camera can see IR; if it doesn't, your camera blocks IR.

Also note that the QE curve peaks around 500 nm, which is "green" to the human eye. That curve is an example of a typical silicon detector, and if your camera is under $400, it almost certainly has a silicon detector. So silicon peaks in the green, which means that if you use green light, you need the least amount of light to get a signal in the camera.

The basic method is this:
Get a ring light of LEDs and put your camera aperture in the center of the ring. That takes care of the point above about putting the source as close to the detector as possible. Ideally you would use a beamsplitter to illuminate through the camera lens, but the ring light is a cheaper solution that gets the job done for FRC purposes.

Next, capture a short-exposure image of the target. You want an exposure short enough that the background stays dark and the tape is green (or whatever color your illumination is). If the tape/target is white in your image and your illumination is green, then your exposure is too long and you are flooding the sensor with excess photons, saturating the green pixels. Those photoelectrons then spill over into the red and blue pixels, resulting in a white patch in the image. If you hit that condition, you are overexposing, and you are probably picking up some background returns in your data as a result.
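
For what it's worth, here's a rough Python/OpenCV sketch of forcing a short manual exposure on a USB camera. The exposure property values are driver-dependent placeholders (V4L2 on Linux behaves differently from Windows drivers), so expect to experiment.

Code:
# Rough sketch -- the exposure values below are driver-dependent placeholders.
import cv2

cap = cv2.VideoCapture(0)                   # first USB camera
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)   # on many V4L2 drivers, 0.25 selects manual exposure
cap.set(cv2.CAP_PROP_EXPOSURE, -8)          # very short exposure; the scale varies by driver

ok, frame = cap.read()
if ok:
    cv2.imwrite("short_exposure_test.jpg", frame)  # check that only the tape shows up bright
cap.release()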

The background isn't worth anything to you, generally speaking. You want a fairly dark image, like the ones provided as examples by WPI. If you have a look at those examples for the peg, you can see a pretty clear specular reflection of the ring light in ../Vision Images/LED Peg/1ftH2ftD1Angle0Brightness.jpg.

One way to eliminate specular reflection is to polarize the light going in and filter the light coming back. As I mentioned, a retroreflector changes polarization, while a specular reflection does not. So you could polarize the illumination with a linear polarizer, filter your return light with another polarizer, and make the angle between them adjustable. Change the polarization angle between the camera and the light and you should see reduced intensity in specular reflections without losing the retroreflected intensity. If you want to try that, there are cheap linear polarizer sheets available for less than $10. It takes some experimentation because the tape isn't as predictable as a corner-cube retroreflector, but you can find an angle that maximizes the retroreflected return while minimizing the specular return.

Specular reflections aside, the important thing is that your targets are rectangles. Please make sure your illuminator doesn't look like a rectangle in specular reflections. That's another good reason for a ring light: it's a circle, not a rectangle.

After you get the image, you want to extract the color plane that corresponds to your illumination. Then you threshold that image and "find the rectangles". I put that last part in quotes because it involves some math (a thresholding operation, edge detection, a Hough transform on the edge image, a search of the Hough space, etc.) that is really the "fun" part of the vision challenge. I don't want to spoil that for you.
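
To make the first couple of steps concrete without spoiling the rest, here's a rough Python/OpenCV sketch of pulling out the green plane and thresholding it. The threshold value is a placeholder, and everything after the threshold is left to you.

Code:
# Rough sketch of the first two steps only -- the threshold value is a placeholder.
import cv2

img = cv2.imread("capture.jpg")             # your short-exposure capture
b, g, r = cv2.split(img)                    # OpenCV loads images in BGR order
_, binary = cv2.threshold(g, 200, 255, cv2.THRESH_BINARY)  # keep only the bright green pixels
cv2.imwrite("thresholded.jpg", binary)
# From here: edge detection, Hough transform, rectangle search -- the fun part.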

Hope that helps you get started on how to illuminate a retroreflector for vision processing.
__________________

2013 Alamo Regional Woodie Flowers Finalist Award Winner
2012 Texas Robot Roundup Volunteer of the Year
Texas Robot Roundup Planning Committee, 2012-present
FRC 6357 Mentor, 2016-
FRC 2789 Mentor, 2009-2016 -- 2 Golds, 2 Silvers, 8 Regional Elimination Appearances

FRC 41 Mentor 2007-2009
FLL Mentor 2006
FRC 619 Mentor 2002
FRC 41 Student 1998-2000

Last edited by jee7s : 08-01-2017 at 23:04.

#6
10-01-2017, 13:25
orangeandblack5
Hates LabView - Uses It Anyway
AKA: Ian Stewart
FRC #5498 (Wired Devils)
Team Role: Programmer
 
Join Date: Jan 2015
Rookie Year: 2015
Location: Grosse Ile, MI
Posts: 25
Re: How to use vision targets?

Quote:
Originally Posted by jee7s
-snip-
As somebody with a similar set of questions, thank you very much. This will be super helpful, as our team has no clue where to start.