
 
View Poll Results: What did you use for vision tracking?
GRIP on RoboRIO - IP Camera: 3 (2.07%)
GRIP on RoboRIO - USB Camera: 9 (6.21%)
GRIP on Laptop - IP Camera: 19 (13.10%)
GRIP on Laptop - USB Camera: 6 (4.14%)
GRIP on Raspberry Pi - IP Camera: 5 (3.45%)
GRIP on Raspberry Pi - USB Camera: 13 (8.97%)
RoboRealm - IP Camera: 6 (4.14%)
RoboRealm - USB Camera: 7 (4.83%)
Other - Please Elaborate with a Response: 77 (53.10%)
Voters: 145.

#1 | 01-05-2016, 13:04
tomy
Registered User
FRC #3038 (I.C.E. Robotics)
Team Role: Mentor
Join Date: Jan 2009
Rookie Year: 2009
Location: Stacy, Minnesota
Posts: 490
What Did you use for Vision Tracking?

This year my team didn't have time to get into vision tracking. I've been trying to dive into it, but before I get started I was wondering what the best option is. I've heard a lot of speculation about what is good and what is bad, so I was wondering what people actually used at competition. I would love to hear feedback on what worked and what didn't.

Last edited by tomy : 01-05-2016 at 13:08.
#2 | 01-05-2016, 13:14
BrianAtlanta
Registered User
FRC #1261
Team Role: Mentor
Join Date: Apr 2014
Rookie Year: 2012
Location: Atlanta, GA
Posts: 69
Re: What Did you use for Vision Tracking?

1261 used a Raspberry Pi with OpenCV. The vision code was written in Python and used pyNetworkTables to communicate.
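
A minimal sketch of that kind of Pi-side pipeline, not the code we ran on the robot; the camera index, server address, table name, and HSV thresholds are placeholders you would tune yourself:

Code:
import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-1261-frc.local")  # placeholder address
table = NetworkTables.getTable("vision")

cap = cv2.VideoCapture(0)  # USB camera plugged into the Pi

# Placeholder HSV range for a green LED ring on retroreflective tape
LOWER = np.array([60, 100, 100])
UPPER = np.array([90, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # findContours returns 2 or 3 values depending on the OpenCV version
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        target = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(target)
        table.putBoolean("targetFound", True)
        table.putNumber("centerX", x + w / 2.0)
        table.putNumber("centerY", y + h / 2.0)
        table.putNumber("area", cv2.contourArea(target))
    else:
        table.putBoolean("targetFound", False)
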
#3 | 01-05-2016, 13:54
tomy
Registered User
FRC #3038 (I.C.E. Robotics)
Team Role: Mentor
Join Date: Jan 2009
Rookie Year: 2009
Location: Stacy, Minnesota
Posts: 490
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by BrianAtlanta
1261 used a Raspberry Pi with OpenCV. The vision code was written in Python and used pyNetworkTables to communicate.
Thanks for the insight. How hard was it to write the code in OpenCV and use pyNetworkTables? Do you have any good resources or documentation that might help? Also, how bad was the lag?
#4 | 01-05-2016, 14:08
jreneew2
Alumni of Team 2053 Tigertronics
AKA: Drew Williams
FRC #2053 (TigerTronics)
Team Role: Programmer
Join Date: Jan 2014
Rookie Year: 2013
Location: Vestal, NY
Posts: 189
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by tomy
Thanks for the insight. How hard was it to write the code in OpenCV and use pyNetworkTables? Do you have any good resources or documentation that might help? Also, how bad was the lag?
We used a very similar setup, except written in C++. We actually had virtually no lag on the Raspberry Pi side. The only place where there was lag was the roboRIO processing the data from NetworkTables.

The setup was pretty simple to get up and running. We compiled OpenCV and NetworkTables 3 on the Pi, then wrote a simple C++ program to find the target and send the data needed to align with it back to the roboRIO. I actually followed a video tutorial here to install OpenCV on the Raspberry Pi. For NetworkTables, I downloaded the code off of GitHub, compiled it like a normal program, and added it to my library path, if I remember correctly.
#5 | 01-05-2016, 14:14
granjef3
Code Ninja
AKA: Matt
FRC #2383 (Ninjineers)
Team Role: Programmer
Join Date: Sep 2015
Rookie Year: 2016
Location: Florida
Posts: 6
Re: What Did you use for Vision Tracking?

2383 used a Jetson TX1 with a Kinect as an IR camera. The vision code was written in C++ using OpenCV and communicated with the roboRIO over NetworkTables.

During the offseason we will be exploring the Android phone method that 254 used, for reliability reasons; the Jetson+Kinect combo was expensive and finicky compared to an Android phone with an integrated battery.
__________________

2016 Galileo Division Semifinalists with 341 Miss Daisy, 3683 Team Dave, and 4525 Renaissance Robotics

Thanks to all of our past alliance members!
#6 | 01-05-2016, 14:31
ajacob
Lead Programmer
FRC #2791 (Shaker Robotics)
Team Role: Programmer
Join Date: Feb 2016
Rookie Year: 2013
Location: Latham
Posts: 5
Re: What Did you use for Vision Tracking?

This year Shaker Robotics used the roboRIO with NIVision (Java) to track the targets. We analyzed frames only when we needed them, to avoid using too much of the RIO's resources.
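
The process-on-demand idea is independent of NIVision; a rough sketch of the same pattern, shown here with Python/OpenCV placeholders rather than our Java code:

Code:
import cv2

cap = cv2.VideoCapture(0)

def find_target(frame):
    # Placeholder for the actual target-finding code (threshold + contours).
    return None

def periodic(need_vision):
    # Called every robot loop; `need_vision` is whatever condition means
    # "we need a measurement right now" (e.g. the driver is lining up a shot).
    if not need_vision:
        return None           # skip vision entirely and save CPU
    ok, frame = cap.read()    # grab and analyze a single frame on demand
    return find_target(frame) if ok else None
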
#7 | 01-05-2016, 16:34
BrianAtlanta
Registered User
FRC #1261
Team Role: Mentor
Join Date: Apr 2014
Rookie Year: 2012
Location: Atlanta, GA
Posts: 69
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by tomy
Thanks for the insight. How hard was it to write the code in OpenCV and use pyNetworkTables? Do you have any good resources or documentation that might help? Also, how bad was the lag?
The code wasn't that hard for our developers, either for OpenCV or pyNetworkTables. They looked at the tutorials at PyImageSearch; it's a great resource. I think we were getting 20-30 fps, but I would have to review our driver station videos. We would process on the Pi and calculate a few things, such as the target's (x, y), area, height, and the (x, y) of the center of the image. This information would be sent via pyNetworkTables to the robot code, which used it as input to the elevation PID and drivetrain PID.
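
Roughly, the robot-side half looks like this; it's a sketch rather than our competition code, the table keys and image width are illustrative placeholders, and the simple proportional turn stands in for the real PID loops:

Code:
from networktables import NetworkTables

NetworkTables.initialize()                 # the roboRIO side acts as the NT server
vision = NetworkTables.getTable("vision")

IMAGE_WIDTH = 320                          # must match the Pi-side resolution
KP_TURN = 0.01                             # placeholder proportional gain

def aim_correction():
    # Returns a turn command in [-1, 1], or None if no target is visible.
    if not vision.getBoolean("targetFound", False):
        return None
    center_x = vision.getNumber("centerX", IMAGE_WIDTH / 2.0)
    error = center_x - IMAGE_WIDTH / 2.0   # pixels off center
    return max(-1.0, min(1.0, -KP_TURN * error))
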

The images were also sent via mjpg-streamer to the driver station for the drivers to see, but that's not really needed. We just liked seeing the shooter camera.

We'd be happy to help, just PM me.

Brian
#8 | 01-05-2016, 14:56
nighterfighter
1771 Alum, 1771 Mentor
AKA: Matt B
FRC #1771 (1771)
Team Role: Mentor
Join Date: Sep 2009
Rookie Year: 2007
Location: Suwanee/Kennesaw, GA
Posts: 835
Re: What Did you use for Vision Tracking?

1771 originally used the Axis M1011 camera and GRIP on the driver station. However, we had a problem on the field: the NetworkTables data was not being sent back, and we couldn't figure out why.

We switched to using a PixyCam, and had much better results.
__________________
1771- Programmer, Captain, Drive Team (2009-2012)
4509- Mentor (2013-2015)
1771- Mentor (2015)
#9 | 01-05-2016, 15:02
JohnFogarty
FTC, I have returned.
AKA: @doctorfogarty @GarnetSq
FTC #11444 (Garnet Squadron)
Team Role: Mentor
Join Date: Aug 2009
Rookie Year: 2006
Location: SC
Posts: 1,555
Re: What Did you use for Vision Tracking?

4901 used GRIP on an RPi v2 + a Pi Camera.

For more info on our implementation, visit https://github.com/GarnetSquardon490...ion-processing
__________________
John Fogarty
2010 FTC World Championship Winner & 2013-2014 FRC Orlando Regional Winner
"Head Bot Coach" FRC Team 4901 Garnet Squadron

Former Student & Mentor FLL 1102, FTC 1102 & FTC 3864, FRC 1102, FRC 1772, FRC 5632
2013 FTC World Championship Guest Speaker
#10 | 01-05-2016, 15:06
Ben Wolsieffer
Dartmouth 2020
AKA: lopsided98
FRC #2084 (Robots by the C)
Team Role: Alumni
Join Date: Jan 2011
Rookie Year: 2011
Location: Manchester, MA (Hanover, NH)
Posts: 519
Re: What Did you use for Vision Tracking?

We used Java and OpenCV on an NVIDIA Jetson TK1, processing images from a Microsoft Lifecam HD3000.
__________________



2016 North Shore District - Semifinalists and Excellence in Engineering Award
2015 Northeastern University District - Semifinalists and Creativity Award
2014 Granite State District - Semifinalists and Innovation in Control Award
2012 Boston Regional - Finalists
#11 | 01-05-2016, 15:26
David Lame
Registered User
FRC #0247
Join Date: Feb 2015
Location: Berkley, MI
Posts: 84
Re: What Did you use for Vision Tracking?

Like most engineering decisions, there isn't a "good" and "bad", but there is often a tradeoff.

We used GRIP on a laptop with an Axis camera. Why? Because our vision code was 100% student built, and the student had never done computer vision before. GRIP on the laptop was the easiest to get going, and it only worked with an IP camera.

There are downsides to that. If you use OpenCV, you can write much more flexible code that can do more sophisticated processing, but it's harder to get going. On the other hand, by doing things the way we did, we had some latency and frame rate issues, we couldn't go beyond basic capture of the target, and we had to be cautious about the way we drove when under camera control.

Coprocessors, such as a Raspberry Pi, TK1, or TX1 (I was sufficiently impressed with the NVIDIA products that I bought some of the company's stock), will allow you a lot more flexibility, but you have to learn to crawl before you can walk. Those products are harder to set up and have integration issues. It's nothing dramatic, but when you have to learn the computer vision algorithms, the networking, and how to power a coprocessor, all at the same time, it gets difficult.

If you are trying to prepare for next year, or dare I say it, for a career that involves computer vision, I would recommend GRIP on the laptop as a starting point, because you can experiment with it and see what happens without even hooking up to the robot. After you have that down, port it to a Pi or an NVIDIA product. The Pi probably has the most documentation and example work, so it's probably a good choice, not to mention that the whole setup, including the camera, is less than 100 bucks.

Once you get that going, the sky's the limit.
#12 | 01-05-2016, 16:24
tomy
Registered User
FRC #3038 (I.C.E. Robotics)
Team Role: Mentor
Join Date: Jan 2009
Rookie Year: 2009
Location: Stacy, Minnesota
Posts: 490
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by David Lame
Like most engineering decisions, there isn't a "good" and "bad", but there is often a tradeoff.

We used GRIP on a laptop with an Axis camera. Why? Because our vision code was 100% student built, and the student had never done computer vision before. GRIP on the laptop was the easiest to get going, and it only worked with an IP camera.

There are downsides to that. If you use OpenCV, you can write much more flexible code that can do more sophisticated processing, but it's harder to get going. On the other hand, by doing things the way we did, we had some latency and frame rate issues, we couldn't go beyond basic capture of the target, and we had to be cautious about the way we drove when under camera control.

Coprocessors, such as a Raspberry Pi, TK1, or TX1 (I was sufficiently impressed with the NVIDIA products that I bought some of the company's stock), will allow you a lot more flexibility, but you have to learn to crawl before you can walk. Those products are harder to set up and have integration issues. It's nothing dramatic, but when you have to learn the computer vision algorithms, the networking, and how to power a coprocessor, all at the same time, it gets difficult.

If you are trying to prepare for next year, or dare I say it, for a career that involves computer vision, I would recommend GRIP on the laptop as a starting point, because you can experiment with it and see what happens without even hooking up to the robot. After you have that down, port it to a Pi or an NVIDIA product. The Pi probably has the most documentation and example work, so it's probably a good choice, not to mention that the whole setup, including the camera, is less than 100 bucks.

Once you get that going, the sky's the limit.
That is very true. I got GRIP working on a laptop, so I'm trying to figure out the next best place to go. I don't know much about vision processing algorithms, Python, or OpenCV.

There is good documentation for the Raspberry Pi, which I have been working through when I can.

Thanks for the reply
#13 | 02-05-2016, 14:06
MamaSpoldi
Programming Mentor
AKA: Laura Spoldi
FRC #0230 (Gaelhawks)
Team Role: Engineer
Join Date: Jan 2009
Rookie Year: 2007
Location: Shelton, CT
Posts: 305
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by nighterfighter
1771 originally used the Axis M1011 camera and GRIP on the driver station. However, we had a problem on the field: the NetworkTables data was not being sent back, and we couldn't figure out why.

We switched to using a PixyCam, and had much better results.
Team 230 also used a PixyCam... with awesome results, and we integrated it in one day. The PixyCam does the image processing on-board, so there is no need to transfer images. You can quickly train it to search for a specific color and report when it sees it. We selected the simplest interface option the Pixy provides: a single digital output (indicating "I see a target") and a single analog output (indicating where within the frame the target is located). That let us build a driver interface (and autonomous code) that uses the digital signal to tell us when the target is in view and the analog value to drive the robot's rotation to center the goal.

The only issue we ran into after the initial integration was at Champs, when the much more polished surface of the driver station wall reflected the LEDs back to the camera, simulating a goal target, and we shot at ourselves in autonomous. It was quickly fixed by adding the requirement that we had to rotate at least 45 degrees before we started looking for a target.

The PixyCam is an excellent way to provide auto-targeting without significant impact to the code on the roboRIO or requiring sophisticated integration of additional software.
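
For anyone considering the same route, here's a minimal robot-side sketch of that digital + analog interface, shown in Python/RobotPy for illustration rather than our actual code; the channel numbers, midpoint voltage, and gain are placeholder assumptions:

Code:
import wpilib

class PixyAim:
    def __init__(self):
        self.target_seen = wpilib.DigitalInput(0)  # placeholder DIO channel
        self.target_pos = wpilib.AnalogInput(0)    # placeholder analog channel

    def turn_command(self):
        # Rotation command in [-1, 1], or None when the Pixy sees no target.
        if not self.target_seen.get():
            return None
        # The analog output sweeps roughly 0-3.3 V across the frame; treat
        # the midpoint as "centered" and turn proportionally to the offset.
        error = self.target_pos.getVoltage() - 1.65
        kP = 0.5                                   # placeholder gain
        return max(-1.0, min(1.0, -kP * error))
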
#14 | 02-05-2016, 14:40
Jaci
Registered User
AKA: Jaci R Brunning
FRC #5333 (Can't C# | OpenRIO)
Team Role: Mentor
Join Date: Jan 2015
Rookie Year: 2015
Location: Perth, Western Australia
Posts: 251
Re: What Did you use for Vision Tracking?

OpenCV C/C++ mixed source running on a Pine64 coprocessor with a Kinect as the camera. 30 fps tracking using the infrared stream. Targets are found and reduced to a bounding box ready to be sent over the network. Each vision target takes 32 bytes of data and is used for auto-alignment and sent to the Driver Station WebUI for driver feedback. Code will be available in a few days; I'm boarding the plane home soon.
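
To give a feel for the 32 bytes: a bounding box of four 64-bit doubles packs to exactly that size. The field choice and byte order below are just an illustration, not the exact wire format, which will be in the code when it's up:

Code:
import struct

def pack_target(x, y, width, height):
    # One bounding box as four 64-bit doubles -> exactly 32 bytes.
    return struct.pack(">dddd", x, y, width, height)

def unpack_target(data):
    return struct.unpack(">dddd", data)

packet = pack_target(132.0, 88.5, 40.0, 26.0)
assert len(packet) == 32
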
__________________
Jacinta R

Curtin FRC (5333+5663) : Mentor
5333 : Former [Captain | Programmer | Driver], Now Mentor
OpenRIO : Owner

Website | Twitter | Github
jaci.brunning@gmail.com
#15 | 02-05-2016, 15:15
Alpha Beta
Strategy, Scouting, and LabVIEW
AKA: Mr. Aaron Bailey
FRC #1986 (Team Titanium)
Team Role: Coach
Join Date: Mar 2008
Rookie Year: 2007
Location: Lee's Summit, Missouri
Posts: 763
Re: What Did you use for Vision Tracking?

Nothing too fancy.

LabVIEW FRC Color Processing Example (Thanks NI. This was our most sophisticated use of vision ever, and the examples you provided every team in LabVIEW were immensely helpful.)

Running in a custom dashboard on the driver's station (i5 laptop several years old.)

Hue, Saturation, and Value parameters stored in a CSV file, with the ability to save new values during a match.

Target coordinates sent back to robot through Network Tables.

Axis M1013 camera with the exposure setting turned to 0 in LabVIEW.

Green LED ring with a significant amount of black electrical tape blocking out some of the lights.

P.S. For teleop we had a piece of tape on the computer screen so the drivers could confirm the auto-aim worked. If center of tape = center of goal, then fire.

P.P.S. The pop-up USB camera was not running vision tracking.
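
The thresholds-in-a-file trick is easy to replicate outside LabVIEW too; here's a small Python sketch of the same idea, with the file name and column names made up for illustration:

Code:
import csv

HSV_FILE = "hsv_thresholds.csv"   # placeholder file name

def save_thresholds(values, path=HSV_FILE):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(values))
        writer.writeheader()
        writer.writerow(values)

def load_thresholds(path=HSV_FILE):
    with open(path, newline="") as f:
        return {k: int(v) for k, v in next(csv.DictReader(f)).items()}

# Example: save the values tuned on the practice field, reload them later
save_thresholds({"h_min": 60, "h_max": 90, "s_min": 100,
                 "s_max": 255, "v_min": 100, "v_max": 255})
print(load_thresholds())
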
__________________
Regional Wins: 2016(KC), 2015(St. Louis, Queen City), 2014(Central Illinois, KC), 2013(Hub City, KC, Oklahoma City), 2012(KC, St. Louis), 2011(Colorado), 2010(North Star)
Regional Chairman's Award: 2014(Central Illinois), 2009(10,000 Lakes)
Engineering Inspiration: 2016(Smoky Mountain), 2012(Kansas City), 2011(Denver)
Dean's List Finalist 2016(Jacob S), 2014(Cameron L), 2013(Jay U), 2012(Laura S), 2011(Dominic A), 2010(Collin R)
Woodie Flowers Finalist 2013 (Aaron Bailey)
Championships: Sub-Division Champion (2016), Finalist (2013, 2010), Semifinalist (2014), Quarterfinalist (2015, 2012, 2011)
Other Official Awards: Gracious Professionalism (2013) Entrepreneurship (2013), Quality (2015, 2015, 2013), Engineering Excellence (Champs 2013, 2012), Website (2011), Industrial Design (Archimedes/Tesla 2016, 2016, 2015, Newton 2014, 2013, 2011), Innovation in Control (2014, Champs 2010, 2010, 2008, 2008), Imagery (2009), Regional Finalist (2016, 2015, 2008)