#1
12-04-2016, 11:01
microbuns
Registered User
AKA: Sam Maier
FRC #4917
Team Role: Mentor
Join Date: Jan 2015
Rookie Year: 2014
Location: Elmira
Posts: 81
How fast was your vision processing?

Our season has now finished, and one issue that ended up really hurting us was the amount of lag in our vision system. Our setup was as follows:
  • Axis camera
  • GRIP running as a second process on the RIO
  • GRIP reading from the Axis camera, doing some processing, then publishing to network tables
  • FRC user program pulling from the network tables every scheduler cycle
In the end, the whole pipeline took anywhere from 0.5 to 1 second before we could actually act on the data, which caused a lot of issues with lining up the shot.

We were never able to track down exactly where in the pipeline we lost so much time. It could be any of the steps above.
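
One way to narrow this down would be to timestamp the data at each hop and compare clocks on the consuming side. A minimal sketch with pynetworktables, assuming a custom vision loop in place of GRIP (the key names here are hypothetical):

    import time
    from networktables import NetworkTables

    NetworkTables.initialize()  # both processes run on the RIO, so they share one clock
    table = NetworkTables.getTable("vision")

    def publish_result(target_x):
        # Vision-loop side: stamp every result as it is published.
        table.putNumber("target_x", target_x)
        table.putNumber("published_at", time.monotonic())

    def consume_result():
        # Robot-program side: check how stale the data is each scheduler cycle.
        age_ms = (time.monotonic() - table.getNumber("published_at", 0.0)) * 1000
        print("vision data is %.0f ms old" % age_ms)
        return table.getNumber("target_x", 0.0)

If the reported age is small but the robot still reacts late, the delay is upstream of NetworkTables (camera buffering or the processing itself) rather than in the tables.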

How did your vision work, and how fast was it?
#2
12-04-2016, 11:14
virtuald
RobotPy Guy
AKA: Dustin Spicuzza
FRC #1418, FRC #1973, FRC #4796, FRC #6367
Team Role: Mentor
Join Date: Dec 2008
Rookie Year: 2003
Location: Boston, MA
Posts: 1,053
Re: How fast was your vision processing?

mjpg-streamer with an OpenCV Python plugin running on the RoboRIO, at 320x200. Published values to NetworkTables.

Didn't measure latency, but it was low enough not to notice, certainly under 500 ms. Around 40% CPU usage with processing enabled.
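
The rough shape of an on-RIO OpenCV loop publishing to NetworkTables looks something like this (not the actual plugin code; the HSV bounds and camera index are placeholders):

    import cv2
    from networktables import NetworkTables

    NetworkTables.initialize()
    table = NetworkTables.getTable("vision")

    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 200)

    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))  # placeholder bounds
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]  # works across OpenCV versions
        table.putBoolean("target_found", len(contours) > 0)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            table.putNumber("target_x", x + w / 2.0)

Keeping the resolution low (320x200 here) is a big part of keeping the CPU load manageable on the RIO.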
__________________
Maintainer of RobotPy - Python for FRC
Creator of pyfrc (Robot Simulator + utilities for Python) and pynetworktables/pynetworktables2js (NetworkTables for Python & Javascript)

2017 Season: Teams #1973, #4796, #6369
Team #1418 (remote mentor): Newton Quarterfinalists, 2016 Chesapeake District Champion, 2x Innovation in Control award, 2x district event winner
Team #1418: 2015 DC Regional Innovation In Control Award, #2 seed; 2014 VA Industrial Design Award; 2014 Finalists in DC & VA
Team #2423: 2012 & 2013 Boston Regional Innovation in Control Award


Resources: FIRSTWiki (relaunched!) | My Software Stuff
#3
12-04-2016, 11:23
mwtidd
Registered User
AKA: mike
FRC #0319 (Big Bad Bob)
Team Role: Mentor
Join Date: Feb 2005
Rookie Year: 2003
Location: Boston, MA
Posts: 714
Re: How fast was your vision processing?

We use a coprocessor (onboard laptop) that streams target information to the robot every 500 ms. We also wait for the next frame to ensure the camera is stable before we react to the target, which means we could end up waiting up to 500 ms, though most of the time it's probably less. We've found this rate is a good balance between responsiveness and not bogging down any of the systems.

Could we speed it up? Probably, but we haven't seen a need thus far.

We also stream the target information over an open web socket rather than using NetworkTables, which probably helps with latency as well.
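
The idea, sketched with a plain TCP socket standing in for the websocket (the address, port, and message format here are made up):

    import json, socket, time

    def get_target_angle():
        return 0.0  # stand-in for the real vision result

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("10.3.19.2", 5800))  # hypothetical robot address/port

    while True:
        msg = json.dumps({"angle": get_target_angle()}) + "\n"
        sock.sendall(msg.encode())  # one small JSON line per update
        time.sleep(0.5)             # the ~500 ms update rate described above

A persistent connection like this avoids NetworkTables' periodic batching of updates, which is one plausible reason it helps with latency.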
__________________
"Never let your schooling interfere with your education" -Mark Twain
#4
12-04-2016, 11:55
fargus111111111
Registered User
AKA: Tim W
FRC #0343 (Metal in Motion)
Team Role: Alumni
Join Date: Nov 2014
Rookie Year: 2010
Location: South Carolina
Posts: 102
Re: How fast was your vision processing?

We used the LabVIEW example code, so I can't say exactly how it worked, but I do know that we were processing 25-30 fps on the roboRIO with minimal lag. Our automatic aiming could turn the robot at an x value of up to 0.5 and still catch the target with the robot positioned just beyond the outer works; to make it accurate enough for shooting, however, we had to slow it down to an x value of 0.25. Side note: we run PID to make slow-speed control possible.
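
The slow-speed control idea can be sketched as a capped proportional turn with a minimum output to overcome friction (the gains and deadband below are made-up values, not 343's):

    def aim_turn_command(x_offset):
        """x_offset: horizontal target offset, -1 (left) .. +1 (right)."""
        kP, max_out, min_out, tol = 0.8, 0.25, 0.08, 0.02
        if abs(x_offset) < tol:
            return 0.0  # close enough to take the shot
        out = max(-max_out, min(max_out, kP * x_offset))  # cap for accuracy
        if abs(out) < min_out:
            out = min_out if out > 0 else -min_out        # floor to keep the bot moving
        return out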
__________________
I didn't break it... this time.
#5
12-04-2016, 12:09
Landonh12
270 points
AKA: Landon Haugh
FRC #0364 (Team Fusion)
Team Role: College Student
Join Date: Jan 2012
Rookie Year: 2012
Location: Gulfport, MS
Posts: 211
Re: How fast was your vision processing?

We have a vision processing solution for champs, and it uses the LabVIEW example.

I took the code and all of the controls/indicators and integrated it into our Dashboard, with support for camera stream switching and an option to turn tracking on/off.

We will only be using it for auto, and the vision tracking really only processes images for 500 ms. We will use a gyro to get to the batter, then use the camera to capture a few images, process them, and then use the gyro to correct the error. I found that using the camera to track in real time just isn't very viable, because the image is inconsistent while the robot is moving (motion blur means the target won't be found).

Works pretty well.
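
The snapshot-and-gyro flow might look like this as a sketch (every object and method name here is a hypothetical stand-in; the real version is in LabVIEW):

    def aim_with_snapshot(camera, gyro, drivetrain, frames=3):
        # Robot is stopped, so the frames are sharp; average a few of them.
        errors = [camera.get_target_angle_error() for _ in range(frames)]
        error_deg = sum(errors) / len(errors)
        setpoint = gyro.get_angle() + error_deg
        # Hand the turn off to the gyro, which has no processing lag.
        while abs(gyro.get_angle() - setpoint) > 0.5:
            drivetrain.turn_toward(setpoint)
        drivetrain.stop()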
__________________
Team Fusion 364 - Driver/Programmer 2012-2015; Controls Mentor 2016-Present
#6
12-04-2016, 12:33
RyanShoff
Registered User
FRC #4143 (Mars Wars)
Team Role: Mentor
Join Date: Mar 2012
Rookie Year: 2012
Location: Metamora, IL
Posts: 147
Re: How fast was your vision processing?

We used the NVIDIA TK1, with C++ and OpenCV with CUDA GPU support. The actual algorithm was very similar to the samples from GRIP. Everything up to findContours() was pushed to the GPU. It would normally run at the full framerate of the MS LifeCam (30 fps) and sent a UDP packet to the roboRIO every frame. The latency of the algorithm was less than 2 frames, so about 67 ms.

We felt we still couldn't aim fast enough. We actually spent more time working on the robot positioning code than on the vision part. At least for us, rotating an FRC bot to within about half a degree of accuracy is not an easy problem; a turret would have been much easier to aim.

One helpful exercise we did that I think is worth sharing: figure out the angular tolerance of a made shot. We used 0.5 degrees for round numbers. Now, using the gyro, write an algorithm to position the robot; we used the SmartDashboard to type in numbers. Can you rotate the robot 30 ± 0.5 degrees? Does it work for 10 ± 0.5 degrees? Can you rotate the robot 1 degree? Can you rotate it 0.5 degrees? Knowing these numbers, and improving them, helps a lot.
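
A sketch of that test routine (the gyro/drive objects and the gain are illustrative, not 4143's code): type setpoints like 30, 10, 1, and 0.5 into the dashboard, call this, and log the returned error for each.

    import time

    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    def rotate_test(gyro, drive, degrees, tol=0.5, timeout=3.0):
        """Rotate by `degrees` and return the final error in degrees."""
        target = gyro.get_angle() + degrees
        start = time.monotonic()
        while time.monotonic() - start < timeout:
            error = target - gyro.get_angle()
            if abs(error) <= tol:
                break
            drive.arcade(0.0, clamp(0.04 * error, -0.4, 0.4))  # P-only turn, made-up gain
        drive.arcade(0.0, 0.0)
        return target - gyro.get_angle()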
__________________
Ryan Shoff
4143 Mars/Wars
CheapGears.com
#7
12-04-2016, 12:56
JesseK
Expert Flybot Crasher
FRC #1885 (ILITE)
Team Role: Mentor
Join Date: Mar 2007
Rookie Year: 2005
Location: Reston, VA
Posts: 3,657
Re: How fast was your vision processing?

For us the issues weren't about vision itself; they were about a badly tuned PID on the shooter tilt/pan that took forever to settle. At the start of the competition, the shooter would be off by a few degrees either way in pan and a lot more in tilt. Double-check those if you have a few spare hours. Note: we use a turret rather than the drive train to adjust left/right aim.

We use: Axis -> MJPEG (320x240 @ 15 fps) -> FMS network -> OpenCV on the D/S laptop -> NetworkTables -> FMS network -> RoboRIO.

We used all of our free time this past Saturday to re-tune the shooter PID from scratch and optimize a few processing pathways. It was heartbreaking to miss the tournament, but it had a major silver lining: off the field, the shooter now tracks without noticeable lag to within about +/- 0.2 degrees. I would expect about an additional 100ms delay on the field given the packet round trip times through the FMS.
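
For reference, the kind of positional PID that needed that re-tune, as a generic sketch (gains are placeholders, not 1885's values):

    import time

    class PanPID:
        """Positional PID for a turret pan axis, driven by a degree error."""
        def __init__(self, kP=0.02, kI=0.0, kD=0.002):
            self.kP, self.kI, self.kD = kP, kI, kD
            self.integral = 0.0
            self.prev_err = 0.0
            self.prev_t = time.monotonic()

        def update(self, error_deg):
            now = time.monotonic()
            dt = max(now - self.prev_t, 1e-3)
            self.integral += error_deg * dt
            deriv = (error_deg - self.prev_err) / dt
            self.prev_err, self.prev_t = error_deg, now
            return self.kP * error_deg + self.kI * self.integral + self.kD * deriv

If a loop like this is mis-tuned, it oscillates or crawls into the setpoint, and no amount of vision speed will fix the aim time.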
__________________

Drive Coach, 1885 (2007-present)
CAD Library Updated 5/1/16 - 2016 Curie/Carver Industrial Design Winner
GitHub
#8
12-04-2016, 13:08
Jaci
Registered User
AKA: Jaci R Brunning
FRC #5333 (Can't C# | OpenRIO)
Team Role: Mentor
Join Date: Jan 2015
Rookie Year: 2015
Location: Perth, Western Australia
Posts: 257
Re: How fast was your vision processing?

We use a Kinect camera connected directly to our coprocessor; the feed is processed with OpenCV and the results are sent to the RoboRIO / driver station for alignment and viewing. Running on a single thread, the coprocessor is able to update at the Kinect's maximum framerate of 30 FPS.

Here's a video of it in action (with a handheld piece of cardboard with retroreflective tape; the coprocessor and RoboRIO are both running in this example).
__________________
Jacinta R

Curtin FRC (5333+5663) : Mentor
5333 : Former [Captain | Programmer | Driver], Now Mentor
OpenRIO : Owner

Website | Twitter | Github
jaci.brunning@gmail.com
#9
12-04-2016, 13:09
adciv
One Eyed Man
FRC #0836 (RoboBees)
Team Role: Mentor
Join Date: Jan 2012
Rookie Year: 2010
Location: Southern Maryland
Posts: 478
Re: How fast was your vision processing?

Quote:
Originally Posted by microbuns
  • Axis camera
  • GRIP running as a second process on the RIO
  • GRIP reading from the Axis camera, doing some processing, then publishing to network tables
  • FRC user program pulling from the network tables every scheduler cycle
In the end, the whole pipeline took anywhere from 0.5 to 1 second before we could actually act on the data, which caused a lot of issues with lining up the shot.
What resolution were you running the Axis camera at? If you're running at 800x600 (or even 640x480) I could see it causing significant delay. I'm unfamiliar with GRIP, but we easily achieve 10 FPS at 424x240 with a USB camera on the RIO in LabVIEW. If you're already running a low resolution, then I'd look at NetworkTables as a possible culprit and consider replacing it with a UDP stream.
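
That UDP replacement can be as simple as one fire-and-forget datagram per processed frame; a sketch (the address, port, and packet format are assumptions):

    import json, socket

    # Vision side: send the newest result every frame.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_target(x_offset, found):
        packet = json.dumps({"x": x_offset, "found": found}).encode()
        tx.sendto(packet, ("10.49.17.2", 5800))  # hypothetical RIO address/port

    # Robot side: drain the queue so only the newest packet is used.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("0.0.0.0", 5800))
    rx.setblocking(False)

    def latest_target():
        data = None
        try:
            while True:
                data, _ = rx.recvfrom(512)
        except BlockingIOError:
            pass
        return json.loads(data) if data else None

Because stale packets are simply discarded, the robot always acts on the most recent frame instead of a queued backlog.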
__________________
Quote:
Originally Posted by texarkana
I would not want the task of devising a system that 50,000 very smart people try to outwit.
#10
12-04-2016, 13:12
jreneew2
Alumni of Team 2053 Tigertronics
AKA: Drew Williams
FRC #2053 (TigerTronics)
Team Role: Programmer
Join Date: Jan 2014
Rookie Year: 2013
Location: Vestal, NY
Posts: 195
Re: How fast was your vision processing?

We originally had OpenCV code running in a separate thread on the roboRIO. This worked pretty well, but there was noticeable lag, so between competitions we switched to a Raspberry Pi 3 running OpenCV and NetworkTables. This was way faster, especially with the new Pi's 64-bit capability and 1.2 GHz processor: we had less than 100 ms of lag, so the only thing slowing us down was the robot code. It worked pretty well, though our algorithm wasn't ideal because we didn't have any sort of PID loop; we just kept checking whether we were within a pixel tolerance. Right now I am working on calculating the angles to rotate to for the shot.
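
For the angle step, the usual pinhole-model conversion from pixel offset to rotation angle looks like this (the FOV and resolution below are typical placeholder numbers, not measured values for this camera):

    import math

    IMAGE_WIDTH = 320        # pixels
    HORIZONTAL_FOV = 60.0    # degrees; measure this for the actual camera

    # Focal length in pixels, derived once from the FOV:
    FOCAL_PX = (IMAGE_WIDTH / 2) / math.tan(math.radians(HORIZONTAL_FOV / 2))

    def pixel_to_angle(target_x_px):
        """Degrees to rotate so the target is centered (negative = turn left)."""
        return math.degrees(math.atan((target_x_px - IMAGE_WIDTH / 2) / FOCAL_PX))

Feeding that angle into a gyro turn replaces the check-pixel-tolerance loop with a single measured rotation.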
#11
12-04-2016, 14:16
KJaget
Zebravision Labs
FRC #0900
Team Role: Mentor
Join Date: Dec 2014
Rookie Year: 2015
Location: Cary, NC
Posts: 41
Re: How fast was your vision processing?

Stereolabs ZED camera -> Jetson TX1 for goal recognition (low-20s FPS capture thread at 720p, 50+ FPS in the goal detection thread) -> ZeroMQ message per processed frame with angle and distance to LabVIEW code on the RoboRIO -> rotate turret, spin shooter wheels up to speed -> ball in goal.

There were a few frames of lag in the ZED camera, so we waited ~250 ms or so after the robot and turret stopped before latching in data on the LabVIEW side. Even so, the shooter wheels spinning up were usually the slowest part. The whole process took maybe 2 seconds from stopping the robot until the shot happened.
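
The Jetson side of a per-frame ZeroMQ publish might look like this with pyzmq (the port and payload fields are assumptions, not 900's actual protocol):

    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5805")  # hypothetical port

    def publish_goal(angle_deg, distance_m):
        # One message per processed frame; the LabVIEW code on the RIO subscribes.
        pub.send_json({"angle": angle_deg, "distance": distance_m})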

#12
12-04-2016, 15:37
seg9585
Registered User
AKA: Eric
FRC #4276 (Surf City Vikings)
Team Role: Engineer
Join Date: Feb 2006
Rookie Year: 2001
Location: Boeing (Seal Beach, CA)
Posts: 520
Re: How fast was your vision processing?

Our configuration is a USB camera plugged into a BeagleBone processor running OpenCV, sending X-offset and target-validity data in ethernet packets at around 50 Hz. The image capture is at 30 fps and image processing takes a fraction of the frame capture time, so we see the results just as quickly as the camera can stream them: effectively 33 ms.
__________________
My FIRST legacy:

Team 204 Student 2001, 2002 (Voorhees, NJ)
Team 1493 College Mentor 2006 - 2008 (Troy, NY)
Team 2150 Intern/Professional Mentor 2007, 2009 (Palos Verdes)
Team 4123 Lead Engineering Mentor 2012 (Bellflower, CA)
Team 4276 Engineering Mentor 2012-2016 (Huntington Beach, CA)
#13
12-04-2016, 15:59
tr6scott
Um, I smell Motor!
AKA: Scott McBride
FRC #2137 (TORC)
Team Role: Mentor
Join Date: Dec 2007
Rookie Year: 2005
Location: Oxford, MI
Posts: 510
Re: How fast was your vision processing?

We are using the example code on the roboRIO, and we process images in about a second each. Because of that image-processing lag, we convert the image data to degrees of rotation on the NavX and position the bot with NavX data, then shoot when the next image confirms we are within tolerance. On average it takes us about 5 seconds from turning on vision to scoring if we start about 25 degrees off center. It's all in LabVIEW on the bot; we don't even send the image back to the driver station.

This is the first year we are using vision on the bot. Next year we will probably play with a vision coprocessor in the offseason, but we had enough of a learning curve just getting to where we are.
__________________
The sooner we get behind schedule, the more time we have to catch up.


#14
12-04-2016, 16:06
Harshizzle
Registered User
AKA: Harshal Singh
FRC #4828 (RoboEagles)
Team Role: Alumni
Join Date: Aug 2014
Rookie Year: 2012
Location: Cary, NC
Posts: 33
Re: How fast was your vision processing?

3 months so far (and it's still looking for its first competition field contour)
__________________
FRC Team 4828
2013-2016

Think Big.
#15
12-04-2016, 20:06
Fauge7
Head programmer
FRC #3019 (firebird robotics)
Team Role: Programmer
Join Date: Jan 2013
Rookie Year: 2012
Location: Scottsdale
Posts: 195
Re: How fast was your vision processing?

My team created and used Tower Tracker. Unfortunately, due to our robot's constraints we weren't able to use it effectively, but we will try our hardest at competition.

Since Tower Tracker runs on the desktop, it gets the Axis camera feed, which is maybe 200 ms behind. It can process the frames in real time (maybe another 30 ms) and sends the result over NetworkTables, which adds another 100 ms; by the time the robot is ready to use the vision data, its reaction is effectively real time. Robots can use snapshots to make effective use of vision processing: when lining up you only need one frame to do the angle calculations, then use a gyro to turn 20 degrees (or whatever it is) and then find the distance. Multiple iterations help all of it, of course. TL;DR: a 400 ms max delay with snapshots gives us good-enough target tracking.
__________________
Engineering Inspiration - 3019


Tower Tracker author (2016)
  • 1 regional finalist
  • 1 regional winner
  • 3 innovation in control awards