View Poll Results: Are You Tracking The Goal To Help Shoot?
Yes - And It Works! 133 (49.63%)
Kinda - We Are Working On It! 104 (38.81%)
Nope - We Are Not Using A Camera 31 (11.57%)
Voters: 268

#16 - 10-03-2016, 08:40
IronicDeadBird (Charles Ives "M" Waldo IV) - FRC #1339 (Angelbots), Tactician
Re: Are You Using A Camera To Align Their Shooter?

Quote:
Originally Posted by s5511 View Post
We are a team trying to get vision working, too. We run LabVIEW code on the robot and are planning on using a Jetson TK1 with OpenCV. Do you guys have any suggestions/comments?
From the limited testing we've done, the camera should be safe and secure in this rough game.
__________________
HERO: "I don't want voices praising me, or applause."
I liked my team more before they stole my jacket.
Play is for kids this is serious...

#17 - 10-03-2016, 08:54
Jonathan Ryan - FRC #0145 (T-rx 145), Mentor
Re: Are You Using A Camera To Align Their Shooter?

Quote:
Originally Posted by sanelss View Post
We use a Windows tablet with LabVIEW and do local USB camera processing, but we also forward coordinate data to the driver station so the driver can see the alignment and verify before taking a shot. We have a working auto that crosses one of the 5 defenses and takes a high goal shot. We have over 90% accuracy with auto-aim; the driver just enables auto-aim but still has manual control.
Would you be willing to share? I know my team has always had trouble getting any kind of vision working while using LabVIEW.

#18 - 10-03-2016, 08:58
marshall - FRC #0900 (The Zebracorns), Mentor
Re: Are You Using A Camera To Align Their Shooter?

Quote:
Originally Posted by s5511 View Post
We are a team trying to get vision working, too. We run LabVIEW code on the robot and are planning on using a Jetson TK1 with OpenCV. Do you guys have any suggestions/comments?
We're just up the road from you folks and happy to help anytime. Just let us know what you're seeing (vision pun) and we will do what we can to help.
__________________
"La mejor salsa del mundo es la hambre" - Miguel de Cervantes
"The future is unwritten" - Joe Strummer
"Simplify, then add lightness" - Colin Chapman

#19 - 10-03-2016, 10:45
Stappy - FRC #5084
Re: Are You Using A Camera To Align Their Shooter?

We had our shooter using GRIP and had great success at the practice fields we went to. We got to our first competition, GRIP interfered with the field software, and it never worked. If you are using GRIP, I would highly suggest a backup system or plan.
Our programmer kept mumbling, "I bet they will release an update to GRIP after this..."
__________________
FridgeBot Inc
Providing robotic appliances since 2014!

#20 - 11-03-2016, 10:00
vScourge (Adam Pletcher) - FRC #4096 (Ctrl-Z), Coach
Re: Are You Using A Camera To Align Their Shooter?

Do you have details on what sort of interference GRIP had with the FMS?
Were you running GRIP on the DS, the RoboRIO, or a coprocessor?

We're intending to run GRIP on an onboard Raspberry Pi 2, but also to use the SmartDashboard extension to send a low-res feed from it to the DS for the driver. Just wondering what specifically we should be wary of.

#21 - 11-03-2016, 10:11
cjl2625 (Cory Lynch) - FRC #2067 (Apple Pi), Programmer
Re: Are You Using A Camera To Align Their Shooter?

We are having great luck with RoboRealm; I find it more powerful than GRIP.
However, I don't think RoboRealm has updated its NetworkTables compatibility for this year's control system. As a result, I devised a workaround using HTTP: RoboRealm sends data in an HTTP request to a local Python HTTP server, which then uses pyNetworkTables to share the data with the robot.
Perhaps it's not the most efficient method, but it works fine for me.
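To illustrate the workaround (this is a minimal sketch, not my exact code; the port, path, table, and key names are placeholders, and it assumes a recent pyNetworkTables release where the entry point is NetworkTables.initialize()):

Code:
# Bridge RoboRealm's HTTP output into NetworkTables.
# Assumes RoboRealm is configured to send target coordinates as query
# parameters, e.g. GET http://localhost:8000/target?x=123&y=45
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

from networktables import NetworkTables  # pip install pynetworktables

NetworkTables.initialize(server="roborio-2067-frc.local")  # your RoboRIO's address
table = NetworkTables.getTable("vision")

class TargetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        if "x" in params and "y" in params:
            table.putNumber("target_x", float(params["x"][0]))
            table.putNumber("target_y", float(params["y"][0]))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TargetHandler).serve_forever()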
__________________
Head Programmer / Driver

#22 - 11-03-2016, 12:54
euhlmann (Erik Uhlmann) - FRC #2877 (LigerBots), Leadership
Re: Are You Using A Camera To Align Their Shooter?

We are using NI Vision to automatically line up and calibrate our shooter. We line up using a constant-rate turn, and then adjust the shooter based on empirical data we collected that relates the size and position of the target on screen to how the shooter needs to be calibrated to make that shot.

However, I wouldn't recommend NI Vision; it's very poorly documented. Next year we will probably switch to OpenCV.
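Not our actual LabVIEW code, but the empirical-data idea boils down to interpolating between measured points; here is a Python sketch where every number and name is a placeholder:

Code:
# Interpolate shooter setpoints from the apparent target size in the image.
# The (height_px, hood_angle_deg, wheel_rpm) rows are placeholders; real
# values come from shooting at known positions and recording what worked.
import bisect

CAL_TABLE = [
    (20, 58.0, 5200),
    (35, 52.0, 4700),
    (55, 45.0, 4300),
    (80, 38.0, 4000),
]

def shooter_setpoints(height_px):
    """Linearly interpolate (hood angle, wheel rpm) from target height in pixels."""
    heights = [row[0] for row in CAL_TABLE]
    i = bisect.bisect_left(heights, height_px)
    if i == 0:
        return CAL_TABLE[0][1:]
    if i >= len(CAL_TABLE):
        return CAL_TABLE[-1][1:]
    (h0, a0, r0), (h1, a1, r1) = CAL_TABLE[i - 1], CAL_TABLE[i]
    t = (height_px - h0) / (h1 - h0)
    return a0 + t * (a1 - a0), r0 + t * (r1 - r0)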

#23 - 11-03-2016, 18:04
ThomasClark - FRC #0237
Re: Are You Using A Camera To Align Their Shooter?

Quote:
Originally Posted by Andrew Schreiber View Post
That being said, I've found the iteration cycles of GRIP to be unparalleled. The ability for students (and me) to ask "what if" is incredible. It's missing some features I'd like to see (better ability to do feature refinement most notably).
That's really cool to hear. Do you have any suggestions for more feature refinement operations? If you open an issue and it doesn't look too hard, I can try implementing it.

Quote:
Originally Posted by Arhowk View Post
I would recommend against GRIP. Our team was going to use GRIP initially, but I rewrote our processing code in OpenCV, using GRIP as a real-time processing agent in the pits and then just copying over the HSV values, erosion kernels, etc. to the CV code.
Cool. One of GRIP's often overlooked use cases is actually a prototyping tool. For people who'd rather write their own OpenCV code for efficiency/portability/educational purposes, GRIP is still useful to lay out an algorithm and experiment with parameters.

Quote:
Originally Posted by Arhowk View Post
  1. GRIP, if run on the dashboard, requires sending camera data over a second time in addition to the DS feed, which clogs up bandwidth and laptop CPU
  2. GRIP, if run on the RIO, never even worked for us. It gave us an error and the program never wrote to NetworkTables.
  3. GRIP on the RIO also requires the installation and execution of the Java VM, which is quite a lot of overhead if you aren't a Java team
  4. There is also the latency of running it on the DS, which is amplified on the field and produces visible control lag for the driver or robot code if used
  5. You learn more if you do it by hand! It's not hard. (Getting an MJPEG is a pain though)
1 - Or just send a single camera stream. If you're using SmartDashboard, you can publish the video from GRIP locally to the dashboard and use the GRIP SmartDashboard extension. Otherwise, I guess you could have the GRIP GUI open for drivers to look at.

2-4 are valid points, and running GRIP on a cheap coprocessor like a Kangaroo PC (or, like some teams have managed to do, Raspberry Pi) helps a lot.
__________________
GRIP (Graphically Represented Image Processing) - rapidly develop computer vision algorithms for FRC

#24 - 11-03-2016, 18:24
Andrew Schreiber - FRC #0079
Re: Are You Using A Camera To Align Their Shooter?

Quote:
Originally Posted by ThomasClark View Post
That's really cool to hear. Do you have any suggestions for more feature refinement operations? If you open an issue and it doesn't look too hard, I can try implementing it. [...]

I was actually playing with adding some stuff myself.

Update from AZ - GRIP seems to be running fine on our machine. Post-event I'll see if I can get our vision kids to post a bit more detail.

#25 - 12-03-2016, 01:11
kylelanman (Kyle) - FRC #2481 (Roboteers), Mentor
Re: Are You Using A Camera To Align Their Shooter?

Quote:
Originally Posted by ThomasClark View Post
Cool. One of GRIP's often overlooked use cases is actually a prototyping tool. For people who'd rather write their own OpenCV code for efficiency/portability/educational purposes, GRIP is still useful to lay out an algorithm and experiment with parameters.
^This. We pulled down the GRIP source and did a Python port of the algorithm we had in GRIP. Because GRIP makes it so easy to try things, we ended up with a simple three-block algorithm. Without the rapid prototyping it likely would have had a few extra unneeded steps. We made the Python program that runs on a BeagleBone Black publish values to NetworkTables identically to how GRIP does. This allows us to switch between GRIP on the DS and our Python program on the BBB without any code changes; the robot is none the wiser as to which one is currently being used.
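The "publish identically to GRIP" part is mostly a matter of writing the same arrays to the same table; a minimal pyNetworkTables sketch (the table and key names below are assumptions and have to match whatever your GRIP pipeline actually publishes):

Code:
# Sketch: publish contour data so it looks like a GRIP ContoursReport.
# Table/key names are assumptions; match them to your GRIP publish step.
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-2481-frc.local")  # team-specific address
report = NetworkTables.getTable("GRIP/myContoursReport")

def publish_contours(contours):
    """contours: list of (center_x, center_y, width, height, area) tuples."""
    report.putNumberArray("centerX", [c[0] for c in contours])
    report.putNumberArray("centerY", [c[1] for c in contours])
    report.putNumberArray("width",   [c[2] for c in contours])
    report.putNumberArray("height",  [c[3] for c in contours])
    report.putNumberArray("area",    [c[4] for c in contours])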
__________________
"May the coms be with you"

Is this a "programming error" or a "programmer error"?

#26 - 12-03-2016, 22:33
sanelss - FRC #1658
Re: Are You Using A Camera To Align Their Shooter?

Quote:
Originally Posted by Jonathan Ryan View Post
Would you be willing to share? I know my team has always had trouble getting any kind of vision working while using LabVIEW.
I'll make a video and provide some documentation sooner or later, since it performed so well this year.

We put on a great game, but we just never really have any luck, so we aren't advancing to Worlds, even though I think we have a great vision system and the robot performed beautifully. Eventually, when I stop being so sour over our loss, I'll get around to doing it; you'll have to hold tight until then.

#27 - 13-03-2016, 00:05
Jaci (Jaci R Brunning) - FRC #5333 (Can't C# | OpenRIO), Mentor
Re: Are You Using A Camera To Align Their Shooter?

We're using a Kinect this year for our vision processing, connected to a coprocessor running Freenect and OpenCV.

The Kinect uses an IR stream to find depth; however, you can also view the raw IR stream, which is extremely useful, as it means we don't need to have a big green LED on our robot's camera.

Our coprocessor (originally the Pine64, but changed to the Raspberry Pi because of driver support in libusb) finds the contours and bounding boxes of the high goal target. These values are sent to the RoboRIO via regular sockets. A single frame of data takes up only 32 bytes per target, which means we never run out of bandwidth. All this code is in C/C++.
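Our real code is C/C++, but for illustration, packing a fixed 32-byte record per target and streaming it over a plain socket looks roughly like this in Python (the field layout, port, and address are made up, not our actual protocol):

Code:
# Pack one target's data into a fixed 32-byte record (eight 32-bit floats)
# and stream it to the RoboRIO over TCP.
import socket
import struct

TARGET_FMT = "<8f"  # x, y, w, h, center_x, center_y, area, timestamp

def send_target(sock, x, y, w, h, cx, cy, area, stamp):
    sock.sendall(struct.pack(TARGET_FMT, x, y, w, h, cx, cy, area, stamp))

# Usage: connect once, then send a record per detected target per frame.
# rio = socket.create_connection(("10.53.33.2", 5800))
# send_target(rio, 120, 80, 40, 24, 140, 92, 960, frame_timestamp)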

Instead of doing some (unreliable) math to find the angle and distance to the target, we're just using a PID controller to align, with the error set to the deviation between the centre of the bounding box and the centre of the frame. For distance, we're just using a lookup table keyed on the target's distance from the bottom of the frame in pixels. Calculating distance and angle explicitly is an unnecessary step and just complicates things.
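A rough sketch of that idea in Python (our real code is C/C++ and Java/Kotlin; the gains, frame size, and lookup values below are placeholders):

Code:
# Align by feeding the pixel error straight into a PID loop, and look the
# distance up from the target's position in the frame instead of doing
# camera geometry. All constants here are placeholders.
FRAME_CENTER_X = 320 / 2  # assumes a 320x240 frame

class SimplePID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

turn_pid = SimplePID(kp=0.01, ki=0.0, kd=0.002)

def turn_output(box_center_x, dt):
    """Error is in pixels; positive output should turn toward the target."""
    return turn_pid.update(FRAME_CENTER_X - box_center_x, dt)

# Distance lookup keyed on how far the target sits above the bottom of the frame.
DIST_TABLE = {60: 2.0, 90: 3.0, 120: 4.0, 150: 5.0}  # pixels -> metres (placeholders)

def lookup_distance(pixels_from_bottom):
    nearest = min(DIST_TABLE, key=lambda k: abs(k - pixels_from_bottom))
    return DIST_TABLE[nearest]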

While a target is in view, our flywheels passively spin up to the appropriate speed, so we don't spend time spinning up when we're ready to take a shot. This means the shot is taken almost instantly when I hit the 'shoot' button on the joystick.

Our vision code is written in C/C++ and our RoboRIO code is written in Java/Kotlin.
__________________
Jacinta R

Curtin FRC (5333+5663) : Mentor
5333 : Former [Captain | Programmer | Driver], Now Mentor
OpenRIO : Owner

Website | Twitter | Github
jaci.brunning@gmail.com

#28 - 13-03-2016, 00:11
Jaci (Jaci R Brunning) - FRC #5333 (Can't C# | OpenRIO), Mentor
Re: Are You Using A Camera To Align Their Shooter?

As a further note, I'll be attempting to add Kinect support to GRIP after the season's conclusion. If you're planning to use a Kinect next year and want support for this in GRIP, keep an eye on #163

#29 - 13-03-2016, 00:39
GuyM142 (Guy) - FRC #3339 (BumbleBee), Mentor
We use vision to align to the goal. We found that the rate at which we get new measurements from the vision processing was too low to work properly with PID, so we decided to use one image to calculate how many degrees to turn, and then used the gyro to reach that angle. After settling down, we take another image just to make sure the robot is on target.
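Roughly, in Python-flavoured pseudocode (the field of view, frame width, and the drivetrain/gyro calls are stand-ins, not our real code):

Code:
# One-shot alignment: estimate the turn from a single frame, close the loop
# on the gyro, then take one more frame to verify.
HORIZONTAL_FOV_DEG = 60.0  # camera horizontal field of view (assumed)
FRAME_WIDTH_PX = 320

def pixels_to_degrees(target_center_x):
    """Convert the target's horizontal offset from frame centre into degrees."""
    offset_px = target_center_x - FRAME_WIDTH_PX / 2
    return offset_px * (HORIZONTAL_FOV_DEG / FRAME_WIDTH_PX)

def align(drivetrain, gyro, grab_target_x, tolerance_deg=1.0):
    """grab_target_x() returns the target centre x from one processed frame."""
    error_deg = pixels_to_degrees(grab_target_x())
    drivetrain.turn_to_angle(gyro.get_angle() + error_deg)  # gyro-closed turn
    residual = pixels_to_degrees(grab_target_x())            # verify with a second frame
    if abs(residual) > tolerance_deg:
        drivetrain.turn_to_angle(gyro.get_angle() + residual)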
__________________
2016-2017 - Programming Mentor
Curie Sub-Division Champions with 694, 379 & 1511
2015 - Team Captain & Head of Programming Crew
Carson Sub-Division Champions with 1325, 20 & 1711
First ever Israeli team on Einstein
2014 - Team Captain & Head of Programming Crew
2013 - Head of Programming Crew
2012 - Member of Programming Crew

#30 - 13-03-2016, 06:14
Tottanka (Liron Gurvitz) - FRC #3211 (The Y Team), Mentor
Re: Are You Using A Camera To Align Their Shooter?

We used GRIP to create a Python algorithm which we use with OpenCV.
The frame rate was too slow for us as well, so we take one shot of the target and use encoders to turn the robot to the calculated angle with PID.
Later we double-check that it is indeed aligned, and that's it.
It takes us less than a second to align properly.
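The core of such a pipeline, sketched in Python with OpenCV (the HSV bounds and the "pick the largest contour" rule are illustrative, not our tuned values):

Code:
# HSV threshold -> erode -> find contours -> pick the largest plausible target.
import cv2
import numpy as np

HSV_LOW = np.array([60, 100, 60])    # placeholder bounds; tune these in GRIP
HSV_HIGH = np.array([90, 255, 255])

def find_target(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8), iterations=1)
    # [-2] keeps this working across OpenCV 2/3/4 findContours return styles.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    best = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(best)
    return x + w / 2, y + h / 2, w, h  # centre and size of the chosen target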
__________________
My FRC record: 10 years, FTA (2008-9), 3 teams (1947, 2669, 3211), 3 RCA, 1 Championship EI (2016), 1 Divisional finalist (2016), 1 Regional winner.
Israeli 2016 Volunteer of the year.