#1
15-06-2016, 16:06
cpapplefamily
Registered User
FRC #3244 (Granite City Gearheads)
Team Role: Mentor

Join Date: May 2015
Rookie Year: 2015
Location: Minnesota
Posts: 231
Vision tutorials

I'm the lead mentor for a small team. The two groups I primarily work with are design/build and software/drive team. We don't have the size or support to break into design, CAD, build, software, web, business, and so on. For a summer project we are going to figure out vision. We found the WPI RobotBuilder command-based video to be an amazing jump start for our robot's abilities, beyond anything we've done in the past. We have not found similar tutorials to get us started with vision; the bits we have found seem to assume a lot of prior vision or programming knowledge.
#2
15-06-2016, 16:56
ahartnet
Registered User
AKA: Andrew Hartnett
FRC #5414 (Pearadox)
Team Role: Mentor

Join Date: Jan 2011
Rookie Year: 2005
Location: Houston, Texas
Posts: 194
Re: Vision tutorials

What is your preferred language?
Do you have a preferred camera: an Axis camera (more expensive, but it seems more capable and easier to work with) or a USB camera (cheaper, but it seems to have some limitations and to require more effort for flexibility/reliability)?
__________________
Team 451 The Cat Attack, Student Alumni (2005)
Team 1646 Precision Guessworks, Mentor (2006-2008)
Team 2936 Gatorzillas, Mentor (2011-2014)
Team 5414 Pearadox, Mentor (2015-Present)
#3
15-06-2016, 20:04
snekiam
Registered User
FRC #3322 (Eagle Imperium)
Team Role: Programmer

Join Date: Dec 2015
Rookie Year: 2010
Location: SE Michigan
Posts: 83
Re: Vision tutorials

What are you trying to do with vision? Auto aiming can be tricky, but the main ideas boil down to this:

-Use an LED ring (usually green) to reflect off the tape
-Set your camera's exposure as low as it will go while still seeing the reflective tape
-Create a binary image of only potential goals (in OpenCV, use an HSV filter), then find all contours
-Throw away anything that isn't the goal (check similarity with OpenCV, aspect ratio, area vs. perimeter, etc.)
-Now comes the tricky part :-). You'll probably want to calculate distance and angle to the tape. Distance can be found using the equation F = (P x D) / W, where F is your focal length (probably published, but double-check it with this equation), P is the target's apparent width in pixels, D is the known distance, and W is the target's actual width. Once you've calibrated F, solve for distance: D = (W x F) / P.
-To calculate angle, you essentially modify the field-of-view equation. The azimuth angle = arctan( ( goal center's x coordinate - ( image width (pixels) / 2 - 0.5 ) ) / focal length ).
-The azimuth is the angle your robot will have to rotate to be dead on with the goal


Most of this can be accomplished in OpenCV if you are feeling adventurous; a rough sketch of the whole pipeline is below. We didn't get vision working this year until after competition, but we did it on a Raspberry Pi with a USB camera. We then sent the angle and distance over NetworkTables to the roboRIO.

If you have any questions about how to do a specific part of this, feel free to ask.
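
To make the steps above concrete, here is a rough Python/OpenCV sketch of the whole pipeline (not our exact code). The HSV bounds, area and aspect-ratio cuts, focal length, and tape width are placeholder numbers you would calibrate for your own camera and LED ring:

Code:
import math
import cv2

FOCAL_LENGTH_PX = 550.0   # calibrate with F = (P x D) / W at a known distance
TAPE_WIDTH_IN = 20.0      # actual width of the vision target, in inches

def find_target(bgr_frame):
    """Return (distance_in, azimuth_deg) to the goal, or None if nothing is found."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Keep only the bright green pixels lit by the LED ring (tune these bounds)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))
    # [-2] keeps this working across OpenCV 2/3/4 return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

    best = None
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if cv2.contourArea(c) < 100:          # throw away small noise blobs
            continue
        aspect = w / float(h)
        if not (1.0 < aspect < 3.0):          # the goal is wider than tall (tune)
            continue
        if best is None or w > best[2]:       # keep the widest remaining candidate
            best = (x, y, w, h)

    if best is None:
        return None

    x, y, w, h = best
    distance = (TAPE_WIDTH_IN * FOCAL_LENGTH_PX) / w     # D = W * F / P
    center_x = x + w / 2.0
    image_width = bgr_frame.shape[1]
    azimuth = math.degrees(math.atan(
        (center_x - (image_width / 2.0 - 0.5)) / FOCAL_LENGTH_PX))
    return distance, azimuth

The azimuth it returns is what you would feed to a turn-to-angle routine on the robot.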

Last edited by snekiam : 15-06-2016 at 20:05. Reason: Forgot about filtering goals!
#4
16-06-2016, 00:19
virtuald
RobotPy Guy
AKA: Dustin Spicuzza
FRC #1418, FRC #1973, FRC #4796, FRC #6367
Team Role: Mentor

Join Date: Dec 2008
Rookie Year: 2003
Location: Boston, MA
Posts: 1,032
Re: Vision tutorials

Definitely check out GRIP; I've heard good things, and I'm sure it will be even better by the time next season rolls around.

Also, if looking at working code is your thing, I've been accumulating links to robot code; there are a fair number of vision-related repos in a variety of languages: https://firstwiki.github.io/wiki/robot-code-directory
__________________
Maintainer of RobotPy - Python for FRC
Creator of pyfrc (Robot Simulator + utilities for Python) and pynetworktables/pynetworktables2js (NetworkTables for Python & Javascript)

2017 Season: Teams #1973, #4796, #6369
Team #1418 (remote mentor): Newton Quarterfinalists, 2016 Chesapeake District Champion, 2x Innovation in Control award, 2x district event winner
Team #1418: 2015 DC Regional Innovation In Control Award, #2 seed; 2014 VA Industrial Design Award; 2014 Finalists in DC & VA
Team #2423: 2012 & 2013 Boston Regional Innovation in Control Award


Resources: FIRSTWiki (relaunched!) | My Software Stuff
#5
16-06-2016, 22:16
cpapplefamily
Registered User
FRC #3244 (Granite City Gearheads)
Team Role: Mentor

Join Date: May 2015
Rookie Year: 2015
Location: Minnesota
Posts: 231
Re: Vision tutorials

I'll try to respond to all your replies; thank you for them.

Quote:
Originally Posted by ahartnet
What is your preferred language?
Do you have a preferred camera: an Axis camera (more expensive, but it seems more capable and easier to work with) or a USB camera (cheaper, but it seems to have some limitations and to require more effort for flexibility/reliability)?
Our preferred language is Java, using the command-based robot framework.

We have an Axis camera on our robot and a USB camera available as well.

Quote:
Originally Posted by snekiam
What are you trying to do with vision? Auto aiming can be tricky, but the main ideas boil down to this:

-Use an LED ring (usually green) to reflect off the tape
-Set your camera's exposure as low as it will go while still seeing the reflective tape
-Create a binary image of only potential goals (in OpenCV, use an HSV filter), then find all contours
-Throw away anything that isn't the goal (check similarity with OpenCV, aspect ratio, area vs. perimeter, etc.)
-Now comes the tricky part :-). You'll probably want to calculate distance and angle to the tape. Distance can be found using the equation F = (P x D) / W, where F is your focal length (probably published, but double-check it with this equation), P is the target's apparent width in pixels, D is the known distance, and W is the target's actual width. Once you've calibrated F, solve for distance: D = (W x F) / P.
-To calculate angle, you essentially modify the field-of-view equation. The azimuth angle = arctan( ( goal center's x coordinate - ( image width (pixels) / 2 - 0.5 ) ) / focal length ).
-The azimuth is the angle your robot will have to rotate to be dead on with the goal


Most of this can be accomplished in OpenCV if you are feeling adventurous. We didn't get vision working this year until after competition, but we did it on a Raspberry Pi with a USB camera. We then sent the angle and distance over NetworkTables to the roboRIO.

If you have any questions about how to do a specific part of this, feel free to ask.
We have a green LED ring on our camera.
What we are looking for is the best way to acquire and process the image. Should we focus on GRIP running on the roboRIO? Should we focus on a Raspberry Pi? Can the code be integrated into the robot code using OpenCV?

I'm guessing it will come down to a lot of trial and error with the different systems.

Quote:
Originally Posted by virtuald
Definitely check out GRIP; I've heard good things, and I'm sure it will be even better by the time next season rolls around.

Also, if looking at working code is your thing, I've been accumulating links to robot code; there are a fair number of vision-related repos in a variety of languages: https://firstwiki.github.io/wiki/robot-code-directory
I have looked exclusively at GRIP so far, and I too think it will be the solution for many teams in the years to come. I tried using the Axis camera with GRIP running on the driver station to acquire and process the image, but I don't like the network traffic and packet loss that setup produces. I briefly tried deploying GRIP to the roboRIO, but it crashed a few minutes after launching; I have not dug into why yet. It was finding the target and posting data to NetworkTables.
#6
17-06-2016, 03:23
Theflyingtank
Registered User
AKA: Diego Ruiz
FRC #0207 (MetalCrafters)
Team Role: Driver

Join Date: Jun 2016
Rookie Year: 2015
Location: Hawthorne
Posts: 2
Re: Vision tutorials

I am still waiting on someone that does targeting and uses C++ as their primary programming language.
#7
17-06-2016, 05:33
virtuald
RobotPy Guy
AKA: Dustin Spicuzza
FRC #1418, FRC #1973, FRC #4796, FRC #6367
Team Role: Mentor

Join Date: Dec 2008
Rookie Year: 2003
Location: Boston, MA
Posts: 1,032
Re: Vision tutorials

Quote:
Originally Posted by Theflyingtank
I am still waiting on someone that does targeting and uses C++ as their primary programming language.
Did you look at the robot code directory linked above? Team 3512 has C++ Robot and OpenCV code for 2016.
__________________
Maintainer of RobotPy - Python for FRC
Creator of pyfrc (Robot Simulator + utilities for Python) and pynetworktables/pynetworktables2js (NetworkTables for Python & Javascript)

2017 Season: Teams #1973, #4796, #6369
Team #1418 (remote mentor): Newton Quarterfinalists, 2016 Chesapeake District Champion, 2x Innovation in Control award, 2x district event winner
Team #1418: 2015 DC Regional Innovation In Control Award, #2 seed; 2014 VA Industrial Design Award; 2014 Finalists in DC & VA
Team #2423: 2012 & 2013 Boston Regional Innovation in Control Award


Resources: FIRSTWiki (relaunched!) | My Software Stuff
#8
17-06-2016, 13:04
euhlmann
CTO, Programmer
AKA: Erik Uhlmann
FRC #2877 (LigerBots)
Team Role: Leadership

Join Date: Dec 2015
Rookie Year: 2015
Location: United States
Posts: 298
Re: Vision tutorials

Quote:
Originally Posted by Theflyingtank
I am still waiting on someone that does targeting and uses C++ as their primary programming language.
We do targeting and use C++ with NI Vision (although, to be fair, our code is somewhat of a mess after several competitions' worth of quick fixes).

Do you need help with anything specific?
#9
17-06-2016, 11:38
snekiam
Registered User
FRC #3322 (Eagle Imperium)
Team Role: Programmer

Join Date: Dec 2015
Rookie Year: 2010
Location: SE Michigan
Posts: 83
Re: Vision tutorials

Quote:
Originally Posted by cpapplefamily

We have a green LED ring on our camera.
What we are looking for is the best way to acquire and process the image. Should we focus on GRIP running on the roboRIO? Should we focus on a Raspberry Pi? Can the code be integrated into the robot code using OpenCV?
It depends on how in-depth you want to go. I believe there was some toying with getting GRIP running on a co-processor on the robot, but I'm not sure if that ever turned into anything. We are using a Raspberry Pi to grab images from a USB camera and process them there in OpenCV. We set up MJPG-streamer on the Pi to serve the images, then accessed that stream in OpenCV as if it came from an Axis camera, since we had latency problems accessing the camera directly; a rough sketch of that setup is below. I will probably post our code with an explanation after I have a chance to do some more testing, as we have yet to actually mount the system on our bot and use it for auto aiming.
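
For reference, the Pi-side loop is roughly this (Python; a sketch rather than our actual code, and the stream URL, roboRIO address, and table/key names are just examples):

Code:
import cv2
from networktables import NetworkTables

# Point NetworkTables at the roboRIO (mDNS name or 10.TE.AM.2 address)
NetworkTables.initialize(server="roborio-3322-frc.local")
table = NetworkTables.getTable("vision")

# MJPG-streamer serves the USB camera over HTTP; OpenCV opens that stream the
# same way it would open an Axis camera's MJPEG feed.
cap = cv2.VideoCapture("http://localhost:8080/?action=stream")

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    result = find_target(frame)   # e.g. the find_target() sketch earlier in this thread
    if result is None:
        table.putBoolean("targetFound", False)
    else:
        distance, azimuth = result
        table.putNumber("distance", distance)
        table.putNumber("azimuth", azimuth)
        table.putBoolean("targetFound", True)

The robot code then just reads "distance" and "azimuth" out of the same table each loop.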
#10
21-06-2016, 13:44
ahartnet
Registered User
AKA: Andrew Hartnett
FRC #5414 (Pearadox)
Team Role: Mentor

Join Date: Jan 2011
Rookie Year: 2005
Location: Houston, Texas
Posts: 194
Re: Vision tutorials

Quote:
Originally Posted by cpapplefamily
I have looked exclusively at GRIP so far, and I too think it will be the solution for many teams in the years to come. I tried using the Axis camera with GRIP running on the driver station to acquire and process the image, but I don't like the network traffic and packet loss that setup produces. I briefly tried deploying GRIP to the roboRIO, but it crashed a few minutes after launching; I have not dug into why yet. It was finding the target and posting data to NetworkTables.
If you have the funds available, you can get a Kangaroo PC for $100: https://www.microsoftstore.com/store...ctID.328073600.

We had a USB LifeCam connected to a Kangaroo running GRIP; the Kangaroo connected to the radio (and through it to the roboRIO) via a USB-to-Ethernet dongle. We had our green LED ring connected to the PCM so that we could turn it on and off. One of our students designed a 3D-printed mount for the LifeCam and LED ring as well: https://twitter.com/therealsimslug/s...84190541889536

We had issues with the Ethernet connection to the roboRIO with the new radio; we were able to run off the old D-Link radios just fine. We never quite sorted it out, and we never got good enough at shooting high goals to really use our vision code. But GRIP was very easy to set up. Running it on the Kangaroo and publishing over NetworkTables kept the lag small enough for us to center on the goal OK; a rough sketch of reading those values on the roboRIO is below. I think if we were to continue to pursue vision, I would come back to this setup.
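
On the roboRIO side, reading what GRIP publishes is just a NetworkTables lookup. Here is a rough illustration in Python/RobotPy syntax (the same idea applies in Java or C++); the table name and keys depend on how your GRIP publish step is configured, and the PCM channel and image width are just examples:

Code:
import wpilib
from networktables import NetworkTables


class Robot(wpilib.IterativeRobot):
    def robotInit(self):
        # PCM channel driving the green LED ring (channel 0 is an example)
        self.led_ring = wpilib.Solenoid(0)
        # Table the GRIP publish step writes to; check your own pipeline's name
        self.grip = NetworkTables.getTable("GRIP/myContoursReport")

    def teleopPeriodic(self):
        self.led_ring.set(True)                 # light up the tape while aiming
        centers = self.grip.getNumberArray("centerX", [])
        if len(centers) > 0:
            # Pixel offset of the first contour from image center (assumes a 320 px wide image)
            offset = centers[0] - 160
            wpilib.SmartDashboard.putNumber("goalOffset", offset)


if __name__ == "__main__":
    wpilib.run(Robot)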
__________________
Team 451 The Cat Attack, Student Alumni (2005)
Team 1646 Precision Guessworks, Mentor (2006-2008)
Team 2936 Gatorzillas, Mentor (2011-2014)
Team 5414 Pearadox, Mentor (2015-Present)