Chief Delphi


microbuns 12-04-2016 11:01

How fast was your vision processing?
 
Our season has now finished, and one issue that ended up really hurting us was the amount of lag in our vision system. Our setup was as follows:
  • Axis camera
  • GRIP running as a second process on the RIO
  • GRIP reading from the Axis camera, doing some processing, then publishing to network tables
  • FRC user program pulling from the network tables every scheduler cycle
In the end, this whole process took anywhere from 0.5-1 seconds to actually act on the data. This caused a lot of issues with lining up the shot.
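
For what it's worth, the user-program side is just a NetworkTables read each cycle; roughly this (a minimal sketch in Python with pynetworktables for brevity, not our actual robot code; the table name assumes GRIP's default contours report and the server host is a placeholder):

Code:

from networktables import NetworkTables

# Connect to the NetworkTables server on the RIO (placeholder hostname).
NetworkTables.initialize(server="roborio-XXXX-frc.local")
table = NetworkTables.getTable("GRIP/myContoursReport")  # GRIP's default report path

def get_largest_target_x():
    """Return the centerX of the biggest published contour, or None if none."""
    centers = table.getNumberArray("centerX", [])
    areas = table.getNumberArray("area", [])
    if len(centers) == 0 or len(areas) != len(centers):
        return None
    biggest = max(range(len(centers)), key=lambda i: areas[i])
    return centers[biggest]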

We were never able to track down exactly where in the pipeline we lost so much time. It could be any of the steps above.

How did your vision work, and how fast was it?

virtuald 12-04-2016 11:14

Re: How fast was your vision processing?
 
mjpg-streamer with opencv python plugin running on RoboRIO, 320x200. Published values to NetworkTables.

Didn't measure latency, but it was low enough to not notice it, certainly under 500ms. Around 40% CPU usage when processing enabled.
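
The OpenCV part of the plugin boils down to something like this (a rough sketch, not the actual plugin code; the HSV thresholds, server address, and table name are placeholders):

Code:

import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="127.0.0.1")  # NT server is the robot program on the same RIO
table = NetworkTables.getTable("vision")      # placeholder table name

def process(frame):
    """Threshold for the retroreflective target and publish the largest blob's center."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([60, 100, 100]), np.array([90, 255, 255]))  # placeholder bounds
    # [-2] keeps this working across OpenCV 2.x/3.x/4.x findContours signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        table.putNumber("centerX", x + w / 2.0)
        table.putBoolean("found", True)
    else:
        table.putBoolean("found", False)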

mwtidd 12-04-2016 11:23

Re: How fast was your vision processing?
 
We use a coprocessor (onboard laptop) that streams target information to the robot every 500ms. We also wait for the next frame to ensure the camera is stable when we react to the target. This means that we could end up waiting 500ms but most of the time it's probably less. We've found this rate seems pretty good for maintaining responsiveness while not bogging down any of the systems.

Could we speed it up? Probably, but we haven't seen a need thus far.

We also stream the target information over an open web socket rather than using NetworkTables, which probably helps with latency as well.
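
The laptop side of that web socket is only a few lines; roughly this (a sketch using the third-party Python websockets package rather than our actual code; the port, message format, and helper are placeholders, and the handler signature varies slightly between websockets versions):

Code:

import asyncio
import json
import websockets  # third-party: pip install websockets

def latest_target():
    # placeholder for whatever the vision loop last computed
    return {"found": True, "x_offset": 0.12, "distance_in": 96.0}

async def stream_targets(websocket, path):  # newer websockets versions drop the 'path' argument
    while True:
        await websocket.send(json.dumps(latest_target()))
        await asyncio.sleep(0.5)  # one update every 500 ms

async def main():
    async with websockets.serve(stream_targets, "0.0.0.0", 5800):  # placeholder port
        await asyncio.Future()  # run forever

asyncio.run(main())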

fargus111111111 12-04-2016 11:55

Re: How fast was your vision processing?
 
We used the LabVIEW example code, so I can't say exactly how it worked, but I do know that we were processing 25-30 fps on the roboRIO and it resulted in minimal lag. Our automatic aiming could turn the robot at an x value of up to 0.5 and still catch the target with the robot positioned just beyond the outer works. To make it accurate enough for shooting, however, we had to slow it down to an x value of 0.25. Side note: we do run PID to make slow-speed control possible.

Landonh12 12-04-2016 12:09

Re: How fast was your vision processing?
 
We have a vision processing solution for champs, and it uses the LabVIEW example.

I took the code and all of the controls/indicators and implemented it into our Dashboard with support for camera stream switching and an option to turn tracking on/off.

We will only be using it for auto. The vision tracking really only processes images for 500ms. We will be using a gyro to get to the batter, then using the camera to capture a few images, process them, and then use the gyro to correct the error. I found that using the camera to track in real time just isn't very viable due to the inconsistency of the image while the robot is moving (it causes the image to blur and the target will not be found).
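
The auto logic boils down to something like this (a rough sketch with hypothetical helper names, not our LabVIEW code):

Code:

def aim_after_stop(vision, gyro, drive, tolerance_deg=1.0):
    """Robot is already stopped at the batter: snapshot, then correct with the gyro."""
    error_deg = vision.get_angle_error_deg()    # hypothetical helper: degrees off target, from a few frames
    if error_deg is None:
        return False                            # target not found (blurred / out of frame)
    setpoint = gyro.get_angle() + error_deg     # turn the camera error into a gyro target
    while abs(setpoint - gyro.get_angle()) > tolerance_deg:
        direction = 1.0 if setpoint > gyro.get_angle() else -1.0
        drive.arcade_drive(0.0, 0.35 * direction)  # slow, fixed-rate turn
    drive.arcade_drive(0.0, 0.0)
    return True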

Works pretty well.

RyanShoff 12-04-2016 12:33

Re: How fast was your vision processing?
 
We used the NVIDIA TK1, with C++ and OpenCV with CUDA GPU support. The actual algorithm was very similar to the samples from GRIP. Everything up to findContours() was pushed to the GPU. It would normally run at the full framerate of the MS LifeCam (30 fps). It sent a UDP packet to the roboRIO every frame. The latency of the algorithm was less than 2 frames, so about 67 ms.

We felt we still couldn't aim fast enough. We actually spent more time working on the robot positioning code than we did on the vision part. At least for us, rotating an FRC bot to within about a half degree of accuracy is not an easy problem. A turret would have been much easier to aim.

One helpful exercise we did that I think is worth sharing: figure out what the angular tolerance of a made shot is. We used 0.5 degrees for round numbers. Now, using the gyro, write an algorithm to position the robot. We used the SmartDashboard to type in numbers. Can you rotate the robot 30 +- .5 degrees? Does it work for 10 +- .5 degrees? Can you rotate the robot 1 degree? Can you rotate it .5 degrees? Knowing these and improving them helps a lot.
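
As a concrete version of that exercise, here is roughly what the rotate-to-angle test looks like (a sketch with hypothetical gyro/drive objects and untuned gains, not our actual code):

Code:

def rotate_to(gyro, drive, setpoint_deg, tolerance_deg=0.5, kp=0.02, min_cmd=0.08):
    """Rotate in place until the gyro reads setpoint_deg +/- tolerance_deg."""
    while True:
        error = setpoint_deg - gyro.get_angle()
        if abs(error) <= tolerance_deg:
            break
        cmd = kp * error
        # skid-steer drivetrains need a minimum command to overcome static friction
        if abs(cmd) < min_cmd:
            cmd = min_cmd if cmd > 0 else -min_cmd
        drive.arcade_drive(0.0, cmd)
    drive.arcade_drive(0.0, 0.0)

# Type setpoints in from the dashboard: does it settle for 30, 10, 1, and 0.5 degrees?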

JesseK 12-04-2016 12:56

Re: How fast was your vision processing?
 
For us the issues weren't about vision itself - it was about an erroneously-tuned PID on the shooter tilt/pan that took forever to settle. At the start of the competition, the shooter would be off by +/- a few degrees in pan and +/- a lot of degrees in tilt. Double-check those if you have a few spare hours. Note - we use a turret rather than drive train to adjust left/right aim.

We use Axis -> MJPEG (320x240 @ 15 fps) -> FMS network -> (D/S laptop) OpenCV -> NetworkTables -> FMS network -> RoboRIO.

We used all of our free time this past Saturday to re-tune the shooter PID from scratch and optimize a few processing pathways. It was heartbreaking to miss the tournament, but it had a major silver lining: off the field, the shooter now tracks without noticeable lag to within about +/- 0.2 degrees. I would expect about an additional 100ms delay on the field given the packet round trip times through the FMS.

Jaci 12-04-2016 13:08

Re: How fast was your vision processing?
 
We use a Kinect camera connected directly to our coprocessor, which is then processed by OpenCV, and then sent to the RoboRIO / Driver Station for alignment and viewing. Running on a single thread, the coprocessor is able to update at the Kinect's maximum framerate of 30FPS.

Here's a video of it in action (with a handheld piece of cardboard with retroreflective tape; the coprocessor and RoboRIO are both running in this example).

adciv 12-04-2016 13:09

Re: How fast was your vision processing?
 
Quote:

Originally Posted by microbuns (Post 1571771)
  • Axis camera
  • GRIP running as a second process on the RIO
  • GRIP reading from the Axis camera, doing some processing, then publishing to network tables
  • FRC user program pulling from the network tables every scheduler cycle
In the end, this whole process took anywhere from 0.5-1 seconds to actually act on the data. This caused a lot of issues with lining up the shot.

What resolution were you running the Axis camera at? If you're running at 800x600 (or even 640x480) I could see it causing significant delay. I'm unfamiliar with GRIP, but we easily achieve 10 fps using a 424x240 resolution with a USB camera on the RIO in LabVIEW. If you're running a low resolution, then I'd look at NetworkTables as a possible issue and consider replacing it with a UDP stream.
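
For reference, a UDP sender/receiver pair really is only a few lines (a sketch; the port, address, and message format are arbitrary):

Code:

import socket

ROBORIO_IP = "10.0.0.2"   # placeholder; normally 10.TE.AM.2 for your team number
PORT = 5800

# Coprocessor (or RIO-side vision process): fire one small packet per processed frame.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"1,163.5,42.0", (ROBORIO_IP, PORT))   # found,centerX,area

# Robot program: non-blocking receive, keep only the newest packet each loop.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("0.0.0.0", PORT))
rx.setblocking(False)
try:
    data, _ = rx.recvfrom(64)
    found, center_x, area = data.decode().split(",")
except BlockingIOError:
    pass  # no new frame this loop; reuse the previous value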

jreneew2 12-04-2016 13:12

Re: How fast was your vision processing?
 
We originally had OpenCV code running in a separate thread on the roboRIO. This worked pretty well, but there was noticeable lag, so between competitions we switched to a Raspberry Pi 3 running OpenCV code and NetworkTables. This was way faster, especially with the new Pi's 64-bit capability and 1.2 GHz processor. We had less than 100 ms of latency, so the only thing slowing us down was the robot code. It worked pretty well, though our algorithm wasn't ideal because we didn't have any sort of PID loop; we just kept checking whether we were within a pixel tolerance. Right now I am working on calculating angles to rotate to for shooting.
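
The angle calculation I'm working on is basically one line of trig (a sketch; the resolution and FOV below are placeholders for whatever your camera reports):

Code:

import math

IMAGE_WIDTH_PX = 640         # placeholder capture width
HORIZONTAL_FOV_DEG = 62.0    # placeholder; use your camera's spec value

def angle_to_target_deg(target_center_x_px):
    """Heading error in degrees from the target's x pixel coordinate."""
    focal_px = (IMAGE_WIDTH_PX / 2.0) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2.0))
    offset_px = target_center_x_px - IMAGE_WIDTH_PX / 2.0
    return math.degrees(math.atan2(offset_px, focal_px))

# e.g. a target centered at x=480 in a 640-wide frame is roughly 17 degrees to the right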

KJaget 12-04-2016 14:16

Re: How fast was your vision processing?
 
Stereolabs ZED camera -> Jetson TX1 for goal recognition (low-20s FPS capture thread speed @ 720p, 50+ FPS in the goal detection thread) -> ZeroMQ message per processed frame with angle and distance to LabVIEW code on the RoboRIO -> rotate turret, spin shooter wheels up to speed -> ball in goal

There were a few frames of lag in the Zed camera, so we waited ~250msec or so after the robot and turret stopped before latching in data on the LabView side. Even so, the shooter wheels spinning up were usually the slowest part. The whole process took maybe 2 seconds from stopping the robot until the shot happened.
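
The ZeroMQ hop is tiny; in Python it would look something like this (a sketch rather than our actual C++/LabVIEW pair; the port and message layout are placeholders):

Code:

import json
import time
import zmq  # pyzmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5805")   # placeholder port; the RoboRIO side connects and subscribes

def publish_goal(angle_deg, distance_in):
    """Send one small message per processed frame."""
    pub.send_string(json.dumps({
        "t": time.time(),      # timestamp so the receiver can discard stale frames
        "angle": angle_deg,
        "distance": distance_in,
    }))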

seg9585 12-04-2016 15:37

Re: How fast was your vision processing?
 
Our configuration is a USB camera plugged into a BeagleBone processor running OpenCV, sending X-offset and target-validity data through Ethernet packets at around 50 Hz. The image capture is at 30 fps and image processing takes a fraction of the frame capture time, so we see the results just as quickly as the camera can stream them: effectively 33 ms.

tr6scott 12-04-2016 15:59

Re: How fast was your vision processing?
 
We are using the example code on the roboRIO; we process images in about a second each. Due to the image processing lag, we convert the image data to degrees of rotation on the NavX and position the bot with NavX data. We shoot when the next image confirms we are within tolerances. On average it takes us about 5 seconds from turning on vision to the goal, if we are about 25 degrees off of center. It's all in LabVIEW on the bot; we don't even send an image back to the driver station.

This is the first year we are using vision on the bot. Next year we will probably play with a vision processor in the off-season, but we had enough of a learning curve just to get where we are.

Harshizzle 12-04-2016 16:06

Re: How fast was your vision processing?
 
3 months so far (and it's still looking for its first competition field contour):ahh:

Fauge7 12-04-2016 20:06

Re: How fast was your vision processing?
 
My team created and used Tower Tracker. Unfortunately, due to our robot's constraints we were not able to use it effectively, but we will try our hardest at competition.

Since Tower Tracker runs on the desktop, it gets the Axis camera feed, which is maybe 200 ms behind. It can then process the frames in real time, so maybe another 30 ms, and send the results to NetworkTables, which adds another 100 ms; by the time the data is ready, the robot can react essentially in real time. Robots can use snapshots of what they need to make effective use of vision processing: when lining up, you only need one frame to do the angle calculations, then use a gyro to turn 20 degrees or whatever it is, and then find out the distance. Multiple iterations help all of it, of course. TL;DR: about 400 ms of worst-case delay, used as a snapshot, gives us good enough target tracking.

rod@3711 13-04-2016 13:39

Re: How fast was your vision processing?
 
All the hurdles to implementing machine vision kept us out of it. We are like a lot of small teams in that we do not have time to chase machine vision when the drive motors are not running. All the chatter on Chief Delphi about using NetworkTables, ancillary processors, or loading OpenCV libraries reinforced our reluctance. In addition, two years of issues with the HD-3000 USB camera did not help.

This was particularly painful for me, since a big part of my engineering and programming career was machine vision.

This year we implemented a quick-and-dirty tower tracker for our regional event. It worked amazingly well, but came a little too late to get us to St. Louis.

I will post some screenshots and some code when we finish getting unpacked. Here are the highlights:
  • C++ code that runs on the roboRIO using a minimum set of nivision.h functions
  • Runs at frame rate (appeared to be about 15 fps)
  • Annotated image shows detection and tracking
  • CPU load went up less than 10% when tracking
  • Logitech 9xxx? USB camera

Summary: We were already using IMAQxxxx functions (nivision.h) to select between a front-viewing and rear-viewing camera. When tracking, we copied each scan frame into a two-dimensional array (something I am comfortable with) using imaqImageToArray, then used some fairly simple techniques to detect bright vertical and horizontal lines, and finally a little magic to home in on the bottom horizontal reflective tape. Then we copied our annotated image data back to the normal scan frame using imaqArrayToImage.
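
The core of the bright-line idea is easy to show on a plain 2-D array (a NumPy sketch of the same approach, not our nivision C++; the threshold and minimum pixel count are placeholders):

Code:

import numpy as np

def find_bottom_tape_row(gray, threshold=200, min_bright_px=20):
    """Return the lowest image row with enough bright pixels to be the tape, else None.

    gray: 2-D uint8 array (one frame converted to grayscale).
    """
    bright = gray > threshold                 # boolean mask of "lit" pixels
    per_row = bright.sum(axis=1)              # bright-pixel count in each row
    candidates = np.where(per_row >= min_bright_px)[0]
    if candidates.size == 0:
        return None                           # no horizontal tape found
    return int(candidates.max())              # bottom-most row ~ bottom tape edge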

Once we could track the tower, we struggled with trying to make minimal angle corrections with a skid-steering robot. We finally ran out of time.

We did manage one 20 point autonomous, so we think we are cool.

Tom Bottiglieri 13-04-2016 14:12

Re: How fast was your vision processing?
 
Jared and I will be giving a presentation at the FIRST Championship Conferences about how to build fast vision processing systems and integrate them into your FRC robot's control system. We will post a thread on CD soon with more details, but for now you can see some details here.

billbo911 13-04-2016 16:01

Re: How fast was your vision processing?
 
Quote:

Originally Posted by Tom Bottiglieri (Post 1572542)
Jared and I will be giving a presentation at the FIRST Championship Conferences about how to build fast vision processing systems and integrate them into your FRC robot's control system. We will post a thread on CD soon with more details, but for now you can see some details here.

It's killing me that I will not be attending CMP this year. So, I am truly looking forward to seeing what you will be releasing when that time comes.

I am amazed at what 254 came up with this year! I know that, in the past, 254 has not been a huge advocate of using vision unless it was truly necessary. Jump forward to 2016, and we have one of the coolest and fastest vision tracking systems in FRC (that I am aware of).
Classic 254!!

billbo911 13-04-2016 16:13

Re: How fast was your vision processing?
 
Quote:

Originally Posted by rod@3711 (Post 1572529)
All the hurdles to implementing machine vision kept us out of machine vision. We are like a lot of small teams in that we do not have time to chase machine vision when the drive motors are not running. All the chatter on Chief Delphi about using Net Tables, ancillary processors or loading opencv libraries reinforced our reluctance. In addition, 2 years of issues the hd3000 usb camera did not help.....

Rod,
Would you, or anyone for that matter, be interested in a fairly low-cost (say under $100), easy-to-tune, easy-to-modify, >30 fps (as high as 50 fps) vision tracking system with no NetworkTables requirement? All parts are easily available and all the software is free. Even the vision code will be provided.

Keep your eyes open on CD. 2073 is refining its code and approach to vision tracking and will be sharing it in time for the fall off-season. We are not 254, but I think what you will see may change your mind about what a small team can do with vision in the middle of a build-season crunch!

JesseK 13-04-2016 17:02

Re: How fast was your vision processing?
 
Quote:

Originally Posted by Tom Bottiglieri (Post 1572542)
Jared and I will be giving a presentation at the FIRST Championship Conferences about how to build fast vision processing systems and integrate them into your FRC robot's control system. We will post a thread on CD soon with more details, but for now you can see some details here.

Aww man. Which presentation do we attend - Transformers, or Decepticons? Decisions, decisions...

jreneew2 13-04-2016 17:13

Re: How fast was your vision processing?
 
Quote:

Originally Posted by Tom Bottiglieri (Post 1572542)
Jared and I will be giving a presentation at the FIRST Championship Conferences about how to build fast vision processing systems and integrate them into your FRC robot's control system. We will post a thread on CD soon with more details, but for now you can see some details here.

I hope you guys are recording this! I'm really interested in this topic!

Michael Hill 13-04-2016 17:25

Re: How fast was your vision processing?
 
Quote:

Originally Posted by JesseK (Post 1572646)
Aww man. Which presentation do we attend - Transformers, or Decepticons? Decisions, decisions...

Obviously Decepticons. They're WAY cooler.

Also, @Tom, is that presentation going to be recorded/posted online? I'd love to see it. The motion profiling one was terrific.

Tom Bottiglieri 13-04-2016 17:32

Re: How fast was your vision processing?
 
Quote:

Originally Posted by JesseK (Post 1572646)
Aww man. Which presentation do we attend - Transformers, or Decepticons? Decisions, decisions...

Build the wrong robot that aims well or build the right robot that can't aim.... decisions, decisions....

microbuns 14-04-2016 18:47

Re: How fast was your vision processing?
 
Quote:

Originally Posted by Tom Bottiglieri (Post 1572542)
Jared and I will be giving a presentation at the FIRST Championship Conferences about how to build fast vision processing systems and integrate them into your FRC robot's control system. We will post a thread on CD soon with more details, but for now you can see some details here.

Like other people have said - is it going to be posted online? This is exactly the kind of thing our team needs!

ajhammond123 16-04-2016 23:15

Re: How fast was your vision processing?
 
Our team used a Raspberry Pi running OpenCV and stored the data on the Pi, accessing it through a TCP port. This let the vision tracking update its info at its maximum speed, which was practically real-time, and allowed the RIO to only access the data when necessary. That made the only cap on our tracking speed the algorithm on the RIO that adjusts the robot's aim, which was never fully developed, so it was very slow.
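
The Pi side is basically a tiny TCP server that hands back the latest result whenever the RIO asks (a sketch with a placeholder port and message format, not our actual code):

Code:

import json
import socket
import threading

latest = {"found": False, "center_x": 0.0}   # overwritten by the OpenCV loop
lock = threading.Lock()

def serve(port=5801):                        # placeholder port
    """Answer each connection with the most recent target data, then close."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with lock:
            payload = json.dumps(latest).encode()
        conn.sendall(payload)
        conn.close()

threading.Thread(target=serve, daemon=True).start()
# ...the OpenCV loop keeps overwriting `latest` under `lock` at camera frame rate...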

