View Poll Results: What did you use for vision tracking?
GRIP on RoboRIO - IP Camera: 3 (2.07%)
GRIP on RoboRIO - USB Camera: 9 (6.21%)
GRIP on Laptop - IP Camera: 19 (13.10%)
GRIP on Laptop - USB Camera: 6 (4.14%)
GRIP on Raspberry Pi - IP Camera: 5 (3.45%)
GRIP on Raspberry Pi - USB Camera: 13 (8.97%)
RoboRealm - IP Camera: 6 (4.14%)
RoboRealm - USB Camera: 7 (4.83%)
Other - Please Elaborate with a Response: 77 (53.10%)
Voters: 145

#16 | 01-05-2016, 17:49
rich2202 (Mentor, FRC #2202 BEAST Robotics, Wisconsin)
Re: What Did you use for Vision Tracking?

IP camera; custom code on the driver station.

The vision program was written in C++. It took the picture off the SmartDashboard and processed it. Pretty ingenious code: it looked for "corners", rating each pixel for the likelihood that it was a corner (top, bottom-left, or bottom-right corner), and the largest grouping of high-scoring pixels was declared a corner.
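In rough Python/OpenCV terms, the idea looks something like the sketch below. The team's actual code was C++ with its own scoring, so the Harris response here is only a stand-in for their per-pixel corner rating:

Code:
# Hypothetical sketch of "rate every pixel as a corner, take the largest
# grouping", using OpenCV's Harris response as the per-pixel corner score.
import cv2
import numpy as np

img = cv2.imread("frame.jpg")  # frame pulled off the dashboard
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Per-pixel corner-likelihood score.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep strong responses, then group neighboring pixels into blobs.
mask = np.uint8(response > 0.01 * response.max())
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

if n > 1:
    # Label 0 is the background; the largest remaining blob is "the corner".
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    print("corner near (%.0f, %.0f)" % tuple(centroids[biggest]))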
#17 | 01-05-2016, 17:57
BrianAtlanta (Mentor, FRC #1261, Atlanta, GA)
Re: What Did you use for Vision Tracking?

Our programmers want to clean things up and then we'll be open-sourcing our code. With pynetworktables, you have to use a static IP; otherwise it won't work on the FMS.
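For reference, the static-address setup in pynetworktables is a one-liner. A minimal sketch, assuming the standard 10.TE.AM.2 roboRIO address (10.12.61.2 shown for team 1261) and an illustrative table/key name:

Code:
# Minimal pynetworktables client; mDNS names may not resolve on the FMS,
# so point at the roboRIO's static 10.TE.AM.2 address instead.
from networktables import NetworkTables

NetworkTables.initialize(server="10.12.61.2")  # team 1261's roboRIO
table = NetworkTables.getTable("vision")
table.putNumber("targetAngle", 3.5)            # example value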

FYI, for Python/OpenCV, the installation of OpenCV takes 4 hours after you have the OS. We used Raspbian (Wheezy, I think). PyImageSearch has the steps for installing OpenCV and Python on Raspbian: 2 hours to install packages, and the final step is a 2-hour compile.

We used the Pi 2. The Pi 3 came out midway through the build season and we didn't want to change. The Pi 3 might be faster; we're going to test to see what the difference is.


Brian
#18 | 01-05-2016, 18:00
tomy (Mentor, FRC #3038 I.C.E. Robotics, Stacy, Minnesota)
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by BrianAtlanta
Our programmers want to clean things up and then we'll be open-sourcing our code. With pynetworktables, you have to use a static IP; otherwise it won't work on the FMS.

FYI, for Python/OpenCV, the installation of OpenCV takes 4 hours after you have the OS. We used Raspbian (Wheezy, I think). PyImageSearch has the steps for installing OpenCV and Python on Raspbian: 2 hours to install packages, and the final step is a 2-hour compile.

We used the Pi 2. The Pi 3 came out midway through the build season and we didn't want to change. The Pi 3 might be faster; we're going to test to see what the difference is.

Brian

Wow, that long?

I am extremely new to OpenCV and Python. Do you have any good places to start?
#19 | 01-05-2016, 18:04
snekiam (Programmer, FRC #3322 Eagle Imperium, SE Michigan)
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by tomy
Wow, that long?

I am extremely new to OpenCV and Python. Do you have any good places to start?
The installation takes a long time because you need to build OpenCV on the Pi itself, which does take several hours. Which language are you looking to get started on?
#20 | 01-05-2016, 18:04
axton900 (Programmer, FRC #1403 Cougar Robotics, New Jersey)
Re: What Did you use for Vision Tracking?

We used an Axis IP camera and a Raspberry Pi running a modified version of Team 3019's TowerTracker OpenCV Java program. I believe someone posted about it earlier in this thread.
__________________
Team 1403: Cougar Robotics (2015 - present)


#21 | 01-05-2016, 18:14
marshall (Mentor, FRC #0900 The Zebracorns, North Carolina)
Re: What Did you use for Vision Tracking?

We worked with Stereolabs in the pre-season to get their ZED camera down in price and legal for FRC teams. They even dropped the price lower once build season started.

We used the ZED in combination with an Nvidia TX1 to capture the location of the tower, rotate/align a turret, and grab the depth data to shoot the ball. Mechanically the shooter had some underlying issues, but when it worked, the software combination was accurate.

We also did a massive amount of research into neural networks and we've got ball tracking working. It never ended up on a robot, but thanks to the work 254 shared in St. Louis (latency compensation and pose estimation/extraction), I think we'll be able to get it working on the robot in the off-season. The goal is to automate ball pickup.

We'll have some white papers out before too long, and we're working closely with Nvidia to create resources that make a lot of what we've done easier on teams in the future. Our code is out on GitHub.
__________________
"La mejor salsa del mundo es la hambre" - Miguel de Cervantes
"The future is unwritten" - Joe Strummer
"Simplify, then add lightness" - Colin Chapman
#22 | 01-05-2016, 18:51
apache8080 / Rishi Desai (Programmer, FRC #5677, San Jose, CA)
Re: What Did you use for Vision Tracking?

We used a USB camera connected to a Raspberry Pi. On the Raspberry Pi we used Python and OpenCV to track the goal via its retro-reflective tape. The tape was tracked with basic color thresholding for green. Once we found the green regions, we contoured the binary image and used OpenCV moments to calculate the centroid of the goal. After finding the centroid, the program calculated the angle the robot had to turn to center on the goal, using the camera's given field-of-view angle. Using pynetworktables we sent the calculated angle to the roboRIO, and a PID controller then turned the robot to that angle. Here is the link to our vision code.
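A condensed sketch of that pipeline is below. The HSV bounds, field of view, server address, and NetworkTables key are illustrative assumptions, not the team's actual values; the real code is what's linked:

Code:
# Condensed sketch of the pipeline described above.
import cv2
from networktables import NetworkTables

NetworkTables.initialize(server="10.56.77.2")   # team 5677's roboRIO
table = NetworkTables.getTable("vision")
FOV_DEG = 60.0                                  # assumed horizontal FOV
cap = cv2.VideoCapture(0)                       # USB camera

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Basic color threshold for the green retro-reflective glow.
    binary = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    if not contours:
        continue
    # Centroid of the largest contour via image moments.
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        continue
    cx = m["m10"] / m["m00"]
    # Pixel offset from image center -> turn angle, scaled by the FOV.
    width = frame.shape[1]
    table.putNumber("turnAngle", (cx - width / 2.0) * FOV_DEG / width)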

#23 | 01-05-2016, 19:04
billbo911 (Mentor, FRC #2073 EagleForce, Elk Grove, CA)
Re: What Did you use for Vision Tracking?

We started out with OpenCV on a pcDuino. I say "started out" because we ultimately found we could do really well without it, and because we realized our implementation was actually causing us issues once in a while.

We have identified the root cause of those issues and will be implementing a new process going forward.

We are moving to OpenCV on a Raspberry Pi 3. It is WAY FASTER than what we had with the pcDuino, and is actually a bit less expensive. In addition, there is tons of support in the RPi community.
__________________
CalGames 2009 Autonomous Champion Award winner
Sacramento 2010 Creativity in Design winner, Sacramento 2010 Quarter finalist
2011 Sacramento Finalist, 2011 Madtown Engineering Inspiration Award.
2012 Sacramento Semi-Finals, 2012 Sacramento Innovation in Control Award, 2012 SVR Judges Award.
2012 CalGames Autonomous Challenge Award winner ($$$).
2014 2X Rockwell Automation: Innovation in Control Award (CVR and SAC). Curie Division Gracious Professionalism Award.
2014 Capital City Classic Winner AND Runner Up. Madtown Throwdown: Runner up.
2015 Innovation in Control Award, Sacramento.
2016 Chezy Champs Finalist, 2016 MTTD Finalist
#24 | 02-05-2016, 10:50
KJaget (Mentor, FRC #0900, Cary, NC)
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by marshall
We worked with Stereolabs in the pre-season to get their ZED camera down in price and legal for FRC teams. They even dropped the price lower once build season started.

We used the ZED in combination with an Nvidia TX1 to capture the location of the tower, rotate/align a turret, and grab the depth data to shoot the ball. Mechanically the shooter had some underlying issues, but when it worked, the software combination was accurate.

We also did a massive amount of research into neural networks and we've got ball tracking working. It never ended up on a robot, but thanks to the work 254 shared in St. Louis (latency compensation and pose estimation/extraction), I think we'll be able to get it working on the robot in the off-season. The goal is to automate ball pickup.

We'll have some white papers out before too long, and we're working closely with Nvidia to create resources that make a lot of what we've done easier on teams in the future. Our code is out on GitHub.
I'll add that we used ZeroMQ to communicate between the TX1 and the LabVIEW code on the roboRIO. That turned out to be one of the least painful parts of the development process: it took about 10 lines of C++ code overall and just worked from there on.
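For anyone curious what that pattern looks like, here is a pyzmq equivalent (the team's sender was C++; the port and message format below are assumptions, with 5800 taken from the FMS team-use port range):

Code:
# pyzmq sketch of a minimal publish side; the roboRIO end subscribes.
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUB)     # coprocessor publishes target data
sock.bind("tcp://*:5800")      # assumed port from the 5800-5810 team range
sock.send_string("3.7 12.4")   # e.g. one "angle distance" message per frame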
#25 | 02-05-2016, 10:55
mwtidd / Mike (Mentor, FRC #0319 Big Bad Bob, Boston, MA)
Re: What Did you use for Vision Tracking?

We ran OpenCV for Java on an onboard coprocessor. At first it ran on a Kangaroo mini PC, but when that burnt out we switched to an onboard laptop.
__________________
"Never let your schooling interfere with your education" -Mark Twain
#26 | 02-05-2016, 11:21
virtuald / Dustin Spicuzza (Mentor, FRC #1418, #1973, #4796, #6367, Boston, MA)
Re: What Did you use for Vision Tracking?

We used OpenCV + Python on the roboRIO, as an mjpg-streamer plugin so that we could optionally stream the images to the DS, with pynetworktables to send data to the robot code.

Only about ~40% CPU usage and it worked really well; the problems we had were in the robot code that used the results from the camera.

Code can be found here.
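The actual plugin hooks live in the linked code; as a purely hypothetical shape, the filter reduces to a function that receives each BGR frame, does its processing, and returns the frame to stream on:

Code:
# Hypothetical frame-filter shape (see the linked code for the real plugin
# API): process in-line, publish results, hand the frame back to the stream.
import cv2
from networktables import NetworkTables

NetworkTables.initialize(server="127.0.0.1")  # vision runs on the roboRIO
table = NetworkTables.getTable("vision")

def filter_frame(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))
    table.putNumber("targetPixels", int(cv2.countNonZero(mask)))
    return frame                              # passed on to the DS stream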
__________________
Maintainer of RobotPy - Python for FRC
Creator of pyfrc (Robot Simulator + utilities for Python) and pynetworktables/pynetworktables2js (NetworkTables for Python & Javascript)

2017 Season: Teams #1973, #4796, #6369
Team #1418 (remote mentor): Newton Quarterfinalists, 2016 Chesapeake District Champion, 2x Innovation in Control award, 2x district event winner
Team #1418: 2015 DC Regional Innovation In Control Award, #2 seed; 2014 VA Industrial Design Award; 2014 Finalists in DC & VA
Team #2423: 2012 & 2013 Boston Regional Innovation in Control Award


Resources: FIRSTWiki (relaunched!) | My Software Stuff
#27 | 02-05-2016, 11:35
andrewthomas (Driver, FRC #1619 Up-A-Creek Robotics, Longmont, CO)
Re: What Did you use for Vision Tracking?

Team 1619 used an Nvidia Jetson TK1 for our vision processing, with a Logitech USB webcam. We wrote our vision-processing code in Python using OpenCV and communicated with the roboRIO and driver station using a custom-written socket server similar to NetworkTables. We also streamed the camera feed from the Jetson to the driver station over UDP.
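A toy version of that kind of socket link is below; the port, addresses, and key=value encoding are made up for illustration and are not team 1619's actual protocol:

Code:
# Toy "NetworkTables-like" UDP link between a coprocessor and a listener.
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("", 5800))                     # receiver binds first

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"angle=3.7,dist=12.4", ("127.0.0.1", 5800))

data, _ = listener.recvfrom(1024)
print(dict(kv.split("=") for kv in data.decode().split(",")))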
#28 | 02-05-2016, 14:06
MamaSpoldi / Laura Spoldi (Programming Mentor, FRC #0230 Gaelhawks, Shelton, CT)
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by nighterfighter
1771 originally used the Axis M1011 camera and GRIP on the driver station. However, we had a problem on the field: the NetworkTables data was not being sent back, and we couldn't figure it out.

We switched to using a PixyCam and had much better results.
Team 230 also used a PixyCam... with awesome results, and we integrated it in one day. The PixyCam does the image processing on-board, so there is no need to transfer images: you quickly train it to search for a specific color, and it reports when it sees it. We selected the simplest interface option the Pixy provides, a single digital output (indicating "I see a target") and a single analog output (feedback on where within the frame the target is located). That let us build a driver interface (and program autonomous) that uses the digital signal to tell us when the target is in view and the analog value to drive the robot's rotation to center the goal.

The only issue we ran into after the initial integration was at Champs, when the much more polished surface of the driver station wall reflected the LEDs back to the camera, simulating a goal target, and we shot at ourselves in autonomous. It was quickly fixed by requiring that we rotate at least 45 degrees before starting to look for a target.

The PixyCam is an excellent way to provide auto-targeting without significant impact on the code on the roboRIO and without sophisticated integration of additional software.
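On the roboRIO side, that interface reduces to one digital and one analog input. A hedged robotpy sketch; the channel numbers, 1.65 V center point, and 0.4 gain are assumptions, not team 230's values:

Code:
# robotpy sketch of the digital + analog Pixy hookup described above.
import wpilib

class PixySteer:
    def __init__(self):
        self.sees_target = wpilib.DigitalInput(0)  # high = "I see a target"
        self.position = wpilib.AnalogInput(0)      # 0-3.3 V across the frame

    def turn_command(self):
        if not self.sees_target.get():
            return 0.0
        error = self.position.getVoltage() - 1.65  # mid-scale = centered
        return 0.4 * error                         # simple proportional turn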
#29 | 02-05-2016, 14:40
Jaci / Jaci R Brunning (Mentor, FRC #5333 (Can't C#), OpenRIO, Perth, Western Australia)
Re: What Did you use for Vision Tracking?

OpenCV C/C++ mixed source running on a Pine64 coprocessor, with a Kinect as the camera. 30 fps tracking using the infrared stream. Targets are found and reduced to a bounding box ready to be sent over the network; each vision target takes 32 bytes of data and is used for auto-alignment and sent to the Driver Station WebUI for driver feedback. Code will be available in a few days; I'm boarding the plane home soon.
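The exact field layout isn't given, but 32 bytes per target works out to, say, eight 4-byte floats. An illustrative packing (the field names here are guesses):

Code:
# Illustrative 32-byte target record: eight little-endian floats.
import struct

def pack_target(x, y, w, h, area=0.0, skew=0.0, t=0.0, n=0.0):
    return struct.pack("<8f", x, y, w, h, area, skew, t, n)

assert len(pack_target(120.0, 80.0, 40.0, 24.0)) == 32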
__________________
Jacinta R

Curtin FRC (5333+5663) : Mentor
5333 : Former [Captain | Programmer | Driver], Now Mentor
OpenRIO : Owner

Website | Twitter | Github
jaci.brunning@gmail.com
#30 | 02-05-2016, 15:15
Alpha Beta / Aaron Bailey (Coach, FRC #1986 Team Titanium, Lee's Summit, Missouri)
Re: What Did you use for Vision Tracking?

Nothing too fancy.

LabVIEW FRC Color Processing Example. (Thanks, NI. This was our most sophisticated use of vision ever, and the examples you provided every team in LabVIEW were immensely helpful.)

Running in a custom dashboard on the driver station (an i5 laptop several years old).

Hue/Saturation/Value parameters stored in a CSV file, with the ability to save new values during a match.
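For reference, the same CSV-backed parameter idea sketched in Python; the team's dashboard was LabVIEW, and the file name and column order below are assumptions:

Code:
# CSV-backed HSV threshold parameters that can be re-saved mid-match.
import csv

def load_hsv(path="hsv.csv"):
    with open(path, newline="") as f:
        # Assumed column order: h_lo, h_hi, s_lo, s_hi, v_lo, v_hi
        return [int(v) for v in next(csv.reader(f))]

def save_hsv(params, path="hsv.csv"):
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(params)  # persist newly tuned values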

Target coordinates sent back to robot through Network Tables.

Axis M1013 camera with the exposure setting turned to 0 in LabVIEW.

Green LED ring with a significant amount of black electrical tape blocking out some of the lights.

P.S. For teleop we had a piece of tape on the computer screen so the drivers could confirm the auto-aim worked: if center of tape = center of goal, then fire.

P.P.S. The pop-up USB camera was not running vision tracking.
__________________
Regional Wins: 2016(KC), 2015(St. Louis, Queen City), 2014(Central Illinois, KC), 2013(Hub City, KC, Oklahoma City), 2012(KC, St. Louis), 2011(Colorado), 2010(North Star)
Regional Chairman's Award: 2014(Central Illinois), 2009(10,000 Lakes)
Engineering Inspiration: 2016(Smoky Mountain), 2012(Kansas City), 2011(Denver)
Dean's List Finalist 2016(Jacob S), 2014(Cameron L), 2013(Jay U), 2012(Laura S), 2011(Dominic A), 2010(Collin R)
Woodie Flowers Finalist 2013 (Aaron Bailey)
Championships: Sub-Division Champion (2016), Finalist (2013, 2010), Semifinalist (2014), Quarterfinalist (2015, 2012, 2011)
Other Official Awards: Gracious Professionalism (2013) Entrepreneurship (2013), Quality (2015, 2015, 2013), Engineering Excellence (Champs 2013, 2012), Website (2011), Industrial Design (Archimedes/Tesla 2016, 2016, 2015, Newton 2014, 2013, 2011), Innovation in Control (2014, Champs 2010, 2010, 2008, 2008), Imagery (2009), Regional Finalist (2016, 2015, 2008)