#1 | 18-02-2016, 15:53
s5511 | Registered User | FRC #5511
Join Date: Jan 2016 | Location: Cary, NC | Posts: 58
Vision Processing

Our team is currently struggling to get vision processing working this year. Our goal is to identify the retroreflective tape on the tower and have our robot turn to shoot into the high goal accurately. We have access to a Raspberry Pi 2B, a Microsoft Kinect, and a USB webcam. We want to do onboard vision processing using the Pi, because we don't want to rely on the FMS's slow connection.

Would you guys have any suggestions as to which vision processing software to use on the Pi (GRIP, RoboRealm, or OpenCV), and how to transmit the data to the roboRIO using NetworkTables? Any help would be greatly appreciated!
#2 | 18-02-2016, 16:04
BrianAtlanta | Registered User | FRC #1261 | Team Role: Mentor
Join Date: Apr 2014 | Rookie Year: 2012 | Location: Atlanta, GA | Posts: 70
Re: Vision Processing

We went with OpenCV. We were originally going to go with GRIP, but at that time nobody had figured out how to get GRIP working on the Pi. We didn't want to change frameworks after starting, so we went with OpenCV using pyNetworkTables.

It was an easy install; just make sure you follow the proper steps for your flavor and version of Linux. It was about a four-hour process of downloading packages and compiling. We used Raspbian.
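
For the detection itself, the pipeline is roughly the sketch below. Just a sketch: the HSV range, camera index, and image handling are placeholders you'd tune for your own camera, LED ring color, and OpenCV build.

Code:
import cv2
import numpy as np

# Placeholder HSV range for green LED light reflected off the tape --
# tune these for your own ring light and camera exposure.
LOWER = np.array([60, 100, 100])
UPPER = np.array([90, 255, 255])

cap = cv2.VideoCapture(0)  # USB webcam at index 0

while True:
    ok, frame = cap.read()
    if not ok:
        continue

    # Threshold in HSV so the bright retroreflective tape stands out
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)

    # findContours returns different tuples in OpenCV 2 vs 3; the
    # second-to-last element is the contour list in both versions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        target = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(target)
        cx = x + w / 2.0  # horizontal center of the target, in pixels
        # ...publish cx (or an angle derived from it) to the roboRIO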

On a side note, I explained the "Adapter Pattern" to the programmers: basically, wrap the pyNetworkTables interface with an adapter class on both the Pi and the robot code. That way, if we ever decide to replace pyNetworkTables, the change lives in just the two adapters and not in any logic code.
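
On the Pi side it looks something like this sketch (the table and key names here are made-up examples, not exactly what we run):

Code:
from networktables import NetworkTable

class RobotLink(object):
    """Adapter that hides pyNetworkTables from the vision logic.
    If we ever swap out the transport, only this class changes."""

    def __init__(self, server_ip, table_name="vision"):
        NetworkTable.setIPAddress(server_ip)  # roboRIO address
        NetworkTable.setClientMode()
        NetworkTable.initialize()
        self._table = NetworkTable.getTable(table_name)

    def send_target(self, offset_deg, found):
        self._table.putNumber("targetOffsetDeg", offset_deg)
        self._table.putBoolean("targetFound", found)

# The vision logic only ever sees RobotLink, never NetworkTable:
# link = RobotLink("10.12.61.2")
# link.send_target(3.2, True)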

Another reason for starting out with pyNetworkTables is that if we decide to move to GRIP, everything is already speaking the same language.

Brian
#3 | 18-02-2016, 16:28
s5511 | Registered User | FRC #5511
Join Date: Jan 2016 | Location: Cary, NC | Posts: 58
Re: Vision Processing

We are actually using LabVIEW for our main robot code. I'm not sure pyNetworkTables would work for us; it seems to only work with Java robot code. Also, how is OpenCV going for you guys? Have there been any major issues that you've run into?
#4 | 18-02-2016, 16:49
Greg McKaskle | Registered User | FRC #2468 (Team NI & Appreciate)
Join Date: Apr 2008 | Rookie Year: 2008 | Location: Austin, TX | Posts: 4,751
Re: Vision Processing

If you run your code on the Pi, it will be using a NetworkTables implementation that is compatible with the LV implementation.

You can also look at the vision example on the getting started page. It shows some basics of camera and vision processing and maps the target info into a pretty useful coordinate space for steering the robot.

If you are looking to steer the robot using the camera, you may also want to consider taking an image, processing for the angular offset to the target, and then using a gyro to turn to the target. Cameras are a pretty slow sensor, and using them to measure how much a robot has turned is not easy. Anyway, if you take this approach, you don't necessarily need a coprocessor.
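
As a rough sketch of the math, assuming a camera with a known horizontal field of view (the numbers below are placeholders; measure your own camera):

Code:
import math

IMAGE_WIDTH_PX = 640
HORIZONTAL_FOV_DEG = 47.0  # placeholder; measure for your camera

def target_offset_deg(target_center_x):
    # Normalize the pixel x coordinate to [-1, 1], 0 at image center
    normalized = (2.0 * target_center_x / IMAGE_WIDTH_PX) - 1.0
    # Project the normalized coordinate to an angle off-center
    half_fov = math.radians(HORIZONTAL_FOV_DEG / 2.0)
    return math.degrees(math.atan(normalized * math.tan(half_fov)))

# Robot side (pseudo-logic): snapshot the gyro, add the offset, and
# turn until the gyro reaches the setpoint. The camera only needs to
# be read once per adjustment, so its slowness doesn't hurt you.
# setpoint = gyro.getAngle() + target_offset_deg(cx)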

Greg McKaskle
#5 | 18-02-2016, 18:00
virtuald (AKA: Dustin Spicuzza) | RobotPy Guy | FRC #1418, #1973, #4796, #6367 | Team Role: Mentor
Join Date: Dec 2008 | Rookie Year: 2003 | Location: Boston, MA | Posts: 1,058
Re: Vision Processing

Quote:
Originally Posted by Greg McKaskle
If you run your code on the Pi, it will be using a NetworkTables implementation that is compatible with the LV implementation.
Actually, I've been told (and observed with the FRC Dashboard) that LabVIEW's NT implementation is not backwards compatible with NT2, which is what pynetworktables implements. You would not be able to use the current release version of pynetworktables with LabVIEW.

However, we have a beta version available (version 2016.0.0alpha1) that implements bare NT3 support, which you can install via pip install --pre pynetworktables ... it has the same API so it should work without problems, but isn't as well tested.
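
Once the beta is installed, a minimal client on the Pi looks like this (sketch only; the IP and key names are examples, use your own team's roboRIO address under the 10.TE.AM.2 scheme):

Code:
from networktables import NetworkTable

# Connect to the roboRIO as a NetworkTables client
NetworkTable.setIPAddress("10.55.11.2")  # example: team 5511
NetworkTable.setClientMode()
NetworkTable.initialize()

table = NetworkTable.getTable("vision")
table.putNumber("targetOffsetDeg", 3.2)  # example value

The LabVIEW code on the roboRIO reads the same table by name.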
#6 | 19-02-2016, 10:59
Greg McKaskle | Registered User | FRC #2468 (Team NI & Appreciate)
Join Date: Apr 2008 | Rookie Year: 2008 | Location: Austin, TX | Posts: 4,751
Re: Vision Processing

Ooh. Sorry about that. I think I knew that at one time. The LV implementation that shipped in 2015 implemented 2.0, and we updated it to 3.0 this year, but we didn't try to merge them and do all of the testing to ensure they interoperate.

If a team needs a device to do 2.0, such as for a demo, use the 2015 code. Hopefully we will all be on 3.0 soon.

Greg McKaskle