View Poll Results: What did you use for vision tracking?
GRIP on roboRIO - IP Camera: 3 (2.07%)
GRIP on roboRIO - USB Camera: 9 (6.21%)
GRIP on Laptop - IP Camera: 19 (13.10%)
GRIP on Laptop - USB Camera: 6 (4.14%)
GRIP on Raspberry Pi - IP Camera: 5 (3.45%)
GRIP on Raspberry Pi - USB Camera: 13 (8.97%)
RoboRealm - IP Camera: 6 (4.14%)
RoboRealm - USB Camera: 7 (4.83%)
Other - Please Elaborate with a Response: 77 (53.10%)
Voters: 145

#1   01-05-2016, 13:04
tomy
Registered User
FRC #3038 (I.C.E. Robotics)
Team Role: Mentor
 
Join Date: Jan 2009
Rookie Year: 2009
Location: Stacy, Minnesota
Posts: 494
What Did you use for Vision Tracking?

This year my team didn't have time to get into vision tracking. I've been trying to dive into it, but before I get started I was wondering what the best option is. I've heard a lot of speculation about what is good and what is bad, so I was wondering what people actually used at competition. I would love to hear feedback on what worked and what didn't.

Last edited by tomy : 01-05-2016 at 13:08.
#2   01-05-2016, 13:14
BrianAtlanta
Registered User
FRC #1261
Team Role: Mentor
 
Join Date: Apr 2014
Rookie Year: 2012
Location: Atlanta, GA
Posts: 70
Re: What Did you use for Vision Tracking?

1261 used a Raspberry Pi with OpenCV. The vision code was written in Python and used pyNetworkTables to communicate with the robot.
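
A minimal sketch of that kind of pipeline, assuming a recent pynetworktables and a USB camera (the server address, table name, key names, and HSV thresholds here are illustrative, not 1261's actual code):

Code:
import cv2
from networktables import NetworkTables

# Assumed roboRIO address; substitute your team's mDNS name or static IP.
NetworkTables.initialize(server='10.12.61.2')
table = NetworkTables.getTable('vision')

cap = cv2.VideoCapture(0)  # first USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    # Threshold for retroreflective tape lit by a green LED ring (tune these).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))
    # [-2] picks the contour list across OpenCV 2/3/4 return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        table.putNumber('centerX', x + w / 2.0)
        table.putBoolean('targetFound', True)
    else:
        table.putBoolean('targetFound', False)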
#3   01-05-2016, 13:54
tomy
Registered User
FRC #3038 (I.C.E. Robotics)
Team Role: Mentor
 
Join Date: Jan 2009
Rookie Year: 2009
Location: Stacy, Minnesota
Posts: 494
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by BrianAtlanta
1261 used a Raspberry Pi with OpenCV. The vision code was written in Python and used pyNetworkTables to communicate with the robot.
Thanks for the insight. How hard was it to write the code in OpenCV and use pyNetworkTables? Do you have any good resources or documentation that might help? Also, how bad was the lag?
#4   01-05-2016, 14:08
jreneew2
Alumni of Team 2053 Tigertronics
AKA: Drew Williams
FRC #2053 (TigerTronics)
Team Role: Programmer
 
Join Date: Jan 2014
Rookie Year: 2013
Location: Vestal, NY
Posts: 195
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by tomy
Thanks for the insight. How hard was it to write the code in OpenCV and use pyNetworkTables? Do you have any good resources or documentation that might help? Also, how bad was the lag?
We used a very similar setup, except written in C++. We had virtually no lag on the Raspberry Pi side; the only lag was in the roboRIO processing the data from NetworkTables.

The setup was pretty simple to get up and running. We compiled OpenCV and NetworkTables 3 on the Pi, then wrote a simple C++ program to find the target and send the data needed to align with it back to the roboRIO. I followed a video tutorial to install OpenCV on the Raspberry Pi. For NetworkTables, I downloaded the code off of GitHub, compiled it like a normal program, and added it to my library path, if I remember correctly.
#5   01-05-2016, 14:14
granjef3
Code Ninja
AKA: Matt
FRC #2383 (Ninjineers)
Team Role: Programmer
 
Join Date: Sep 2015
Rookie Year: 2016
Location: Florida
Posts: 6
Re: What Did you use for Vision Tracking?

2383 used a Jetson TX1, with a Kinect as an IR camera. The vision code was written in C++ with OpenCV and communicated with the roboRIO over NetworkTables.

During the offseason we will be exploring the Android phone method that 254 used, for reliability reasons; the Jetson + Kinect combo was expensive and finicky compared to an Android phone with its integrated battery.
__________________

2016 Galileo Division Semifinalists with 341 Miss Daisy, 3683 Team Dave, and 4525 Renaissance Robotics

Thanks to all of our past alliance members!
#6   01-05-2016, 14:31
ajacob
Lead Programmer
FRC #2791 (Shaker Robotics)
Team Role: Programmer
 
Join Date: Feb 2016
Rookie Year: 2013
Location: Latham
Posts: 5
Re: What Did you use for Vision Tracking?

This year Shaker Robotics used the roboRIO with NIVision (Java) to track the targets. We analyzed frames only when we needed them, to avoid using too much of the RIO's resources.
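
2791's code was NIVision in Java on the roboRIO itself; purely to illustrate the analyze-only-on-demand idea, here is a sketch of that gating in Python/OpenCV, with the flag name and table layout invented for the example:

Code:
import time
import cv2
from networktables import NetworkTables

# Assumes this runs as a separate process alongside the robot program and
# connects to the robot program's NetworkTables server.
NetworkTables.initialize(server='127.0.0.1')
table = NetworkTables.getTable('vision')
cap = cv2.VideoCapture(0)

while True:
    # Skip all image work unless the robot code has asked for a frame,
    # so vision doesn't steal CPU during normal driving.
    if not table.getBoolean('requestFrame', False):
        time.sleep(0.01)
        continue
    table.putBoolean('requestFrame', False)  # consume the request
    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Real target-finding goes here; publish whatever it computes.
        table.putNumber('frameBrightness', float(gray.mean()))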
#7   01-05-2016, 14:56
nighterfighter
1771 Alum, 1771 Mentor
AKA: Matt B
FRC #1771 (1771)
Team Role: Mentor
 
Join Date: Sep 2009
Rookie Year: 2007
Location: Suwanee/Kennesaw, GA
Posts: 835
Re: What Did you use for Vision Tracking?

1771 originally used the Axis M1011 camera and GRIP on the driver station. However, we had a problem on the field: the NetworkTables data was not being sent back, and we couldn't figure out why.

We switched to using a PixyCam, and had much better results.
__________________
1771- Programmer, Captain, Drive Team (2009-2012)
4509- Mentor (2013-2015)
1771- Mentor (2015)
#8   01-05-2016, 15:02
JohnFogarty
FTC, I have returned.
AKA: @doctorfogarty
FTC #11444 (Garnet Squadron) & FRC#1102 (M'Aiken Magic)
Team Role: Mentor
 
Join Date: Aug 2009
Rookie Year: 2006
Location: SC
Posts: 1,564
Re: What Did you use for Vision Tracking?

4901 used GRIP on an RPi v2 + a Pi Camera.

For more info on our implementation, see https://github.com/GarnetSquardon490...ion-processing
__________________
John Fogarty
2010 FTC World Championship Winner & 2013-2014 FRC Orlando Regional Winner
Mentor FRC Team 1102 M'Aiken Magic
"Head Bot Coach" FTC Team 11444 Garnet Squadron
Former Student & Mentor FLL 1102, FTC 1102 & FTC 3864, FRC 1772, FRC 5632
2013 FTC World Championship Guest Speaker
#9   01-05-2016, 15:06
Ben Wolsieffer
Dartmouth 2020
AKA: lopsided98
FRC #2084 (Robots by the C)
Team Role: Alumni
 
Join Date: Jan 2011
Rookie Year: 2011
Location: Manchester, MA (Hanover, NH)
Posts: 520
Re: What Did you use for Vision Tracking?

We used Java and OpenCV on an NVIDIA Jetson TK1, processing images from a Microsoft Lifecam HD3000.
__________________



2016 North Shore District - Semifinalists and Excellence in Engineering Award
2015 Northeastern University District - Semifinalists and Creativity Award
2014 Granite State District - Semifinalists and Innovation in Control Award
2012 Boston Regional - Finalists
#10   01-05-2016, 15:26
David Lame
Registered User
FRC #0247
 
Join Date: Feb 2015
Location: Berkley, MI
Posts: 84
Re: What Did you use for Vision Tracking?

Like most engineering decisions, there isn't a "good" and a "bad", but there is often a tradeoff.

We used GRIP on a laptop with an Axis camera. Why? Because our vision code was 100% student built, and the student had never done computer vision before. GRIP on the laptop was the easiest to get going, and it only worked with an IP camera.

There are downsides to that. If you use OpenCV, you can write much more flexible code that can do more sophisticated processing, but it's harder to get going. On the other hand, by doing things the way we did, we had some latency and frame rate issues, couldn't go beyond basic capture of the target, and had to be cautious about how we drove when under camera control.

Coprocessors such as a Raspberry Pi, TK1, or TX1 (I was sufficiently impressed with the NVIDIA products that I bought some of the company's stock) allow you a lot more flexibility, but you have to learn to crawl before you can walk. Those products are harder to set up and have integration issues. It's nothing dramatic, but when you have to learn the computer vision algorithms, and the networking, and how to power a coprocessor, all at the same time, it gets difficult.

If you are trying to prepare for next year, or dare I say it, for a career that involves computer vision, I would recommend GRIP on the laptop as a starting point, because you can experiment with it and see what happens without even hooking it to the robot. After you have that down, port it to a Pi or an NVIDIA product. The Pi probably has the most documentation and example work, so it's probably a good choice, not to mention that the whole setup, including camera, is less than 100 bucks.

Once you get that going, the sky's the limit.
#11   01-05-2016, 16:24
tomy
Registered User
FRC #3038 (I.C.E. Robotics)
Team Role: Mentor
 
Join Date: Jan 2009
Rookie Year: 2009
Location: Stacy, Minnesota
Posts: 494
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by David Lame
Like most engineering decisions, there isn't a "good" and a "bad", but there is often a tradeoff.

We used GRIP on a laptop with an Axis camera. Why? Because our vision code was 100% student built, and the student had never done computer vision before. GRIP on the laptop was the easiest to get going, and it only worked with an IP camera.

There are downsides to that. If you use OpenCV, you can write much more flexible code that can do more sophisticated processing, but it's harder to get going. On the other hand, by doing things the way we did, we had some latency and frame rate issues, couldn't go beyond basic capture of the target, and had to be cautious about how we drove when under camera control.

Coprocessors such as a Raspberry Pi, TK1, or TX1 (I was sufficiently impressed with the NVIDIA products that I bought some of the company's stock) allow you a lot more flexibility, but you have to learn to crawl before you can walk. Those products are harder to set up and have integration issues. It's nothing dramatic, but when you have to learn the computer vision algorithms, and the networking, and how to power a coprocessor, all at the same time, it gets difficult.

If you are trying to prepare for next year, or dare I say it, for a career that involves computer vision, I would recommend GRIP on the laptop as a starting point, because you can experiment with it and see what happens without even hooking it to the robot. After you have that down, port it to a Pi or an NVIDIA product. The Pi probably has the most documentation and example work, so it's probably a good choice, not to mention that the whole setup, including camera, is less than 100 bucks.

Once you get that going, the sky's the limit.
That is very true. I got GRIP working on a laptop, so I'm trying to figure out where the next best place to go is. I don't know much about vision processing algorithms, Python, or OpenCV.

There is good documentation for the Raspberry Pi, which I have been working through when I can.

Thanks for the reply.
#12   01-05-2016, 16:34
BrianAtlanta
Registered User
FRC #1261
Team Role: Mentor
 
Join Date: Apr 2014
Rookie Year: 2012
Location: Atlanta, GA
Posts: 70
Re: What Did you use for Vision Tracking?

Quote:
Originally Posted by tomy
Thanks for the insight. How hard was it to write the code in OpenCV and use pyNetworkTables? Do you have any good resources or documentation that might help? Also, how bad was the lag?
The code wasn't that hard for our developers, for either OpenCV or pyNetworkTables. They worked from the tutorials at PyImageSearch, which is a great resource. I think we were getting 20-30 fps, but I would have to review our driver station videos. We would process on the Pi and calculate a few things, such as the target's (x, y) position, area, height, and the (x, y) of the center of the image. This information was sent via pyNetworkTables to the robot code, which used it as input to the elevation PID and drivetrain PID loops.
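
A rough sketch of the robot-side consumption described above, in RobotPy-style Python with a bare P term standing in for the full PID (channel numbers, key names, and gains are illustrative, not 1261's actual code):

Code:
import wpilib
from networktables import NetworkTables

class Robot(wpilib.IterativeRobot):
    def robotInit(self):
        self.table = NetworkTables.getTable('vision')
        self.drive = wpilib.RobotDrive(0, 1)  # left/right PWM channels (assumed)
        self.kP = 0.005            # proportional gain; tune on the real robot
        self.frameCenter = 160.0   # half of an assumed 320-px-wide image

    def autonomousPeriodic(self):
        if self.table.getBoolean('targetFound', False):
            # Pixel error between target center and image center drives the turn.
            error = self.table.getNumber('centerX', self.frameCenter) - self.frameCenter
            self.drive.arcadeDrive(0, self.kP * error)
        else:
            self.drive.arcadeDrive(0, 0)

if __name__ == '__main__':
    wpilib.run(Robot)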

The images were also sent via mjpg-streamer to the driver station for the drivers to see, but that's not really needed; we just liked seeing the shooter camera.

We'd be happy to help, just PM me.

Brian
#13   01-05-2016, 16:41
lethc
#gkccurse
AKA: Becker Lethcoe
FRC #1806 (S.W.A.T.)
Team Role: Alumni
 
Join Date: Nov 2012
Rookie Year: 2013
Location: Smithville, MO
Posts: 119
Re: What Did you use for Vision Tracking?

We used a modified version of TowerTracker for autonomous alignment and a flashlight for teleop alignment.
__________________
2016: Greater Kansas City Regional Finalists, Oklahoma Regional Winners, Tesla Semifinalists, IRI Quarterfinalists
2015: Greater Kansas City Regional Finalists, Oklahoma Regional Winners, Tesla Quarterfinalists, IRI Winners
2014: Central Illinois Regional Quarterfinalists, Greater Kansas City Regional Finalists, Newton Semifinalists
2013: Greater Kansas City Regional Winners, Oklahoma Regional Winners, Galileo Quarterfinalists
#14   01-05-2016, 17:26
KH987
Registered User
AKA: Kevin Hjelstrom
FRC #0987 (High Rollers)
Team Role: Engineer
 
Join Date: Jan 2015
Rookie Year: 2013
Location: Las Vegas, Nevada
Posts: 12
Re: What Did you use for Vision Tracking?

Team 987 used an onboard Jetson TK1 for our vision tracking, programmed in C++ using OpenCV. The Jetson sends target information to the roboRIO in TCP packets. From there, a compressed, low-frame-rate stream is sent to the driver station for diagnostics; it used very little bandwidth (around 500 KB/s) to ensure we stayed well under the maximum.
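
A bare-bones sketch of that TCP hand-off (987's code is C++; this Python fragment, with an invented port and message format, just shows the shape of it):

Code:
import json
import socket

# 987's roboRIO would sit at 10.9.87.2; ports 5800-5810 are left open
# by FMS for team use.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('10.9.87.2', 5800))

# One newline-delimited JSON message per processed frame.
target = {'centerX': 163.5, 'area': 1240.0}  # values from the vision loop
sock.sendall((json.dumps(target) + '\n').encode())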
#15   01-05-2016, 17:43
Simon_D
Registered User
FRC #3310 (Black Hawk Robotics)
 
Join Date: May 2016
Rookie Year: 2016
Location: Rockwall tx
Posts: 1
Re: What Did you use for Vision Tracking?

We ended up using the roboRIO with a USB camera, but we attempted to use OpenCV on a Jetson, a BeagleBone Black, and a Raspberry Pi 3. We scrapped the Jetson after realizing how hard we were landing after hitting a defense (but we had OpenCV working). Then we scrapped the BeagleBone after the Raspberry Pi 3 was released. We got the Pi to work after 16 hours of compiling OpenCV, but we ran out of time.