#1
How fast was your vision processing?
Our season has now finished, and one issue that ended up really hurting us was the amount of lag in our vision system. Our setup was as follows:
We were never able to track down exactly where in the pipeline we lost so much time. It could be any of the steps above. How did your vision work, and how fast was it?
#2
Re: How fast was your vision processing?
mjpg-streamer with an OpenCV Python plugin running on the RoboRIO, 320x200. Published values to NetworkTables.
Didn't measure latency, but it was low enough not to notice it, certainly under 500 ms. Around 40% CPU usage with processing enabled.
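For illustration, a minimal Python sketch of that kind of pipeline, assuming pynetworktables and a plain cv2.VideoCapture source rather than the mjpg-streamer plugin hooks; the HSV bounds, table name, and RoboRIO address are placeholders, not the poster's actual values:

```python
# Sketch: grab frames, threshold for the retroreflective target,
# and publish the largest contour's center to NetworkTables.
import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-XXXX-frc.local")  # hypothetical; use your RoboRIO's address
table = NetworkTables.getTable("vision")

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

# Rough HSV bounds for a green ring light on retroreflective tape; tune for your setup.
LOWER = np.array([60, 100, 100])
UPPER = np.array([90, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # findContours returns 2 or 3 values depending on OpenCV version; [-2] works for both.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        table.putNumber("centerX", x + w / 2.0)
        table.putBoolean("hasTarget", True)
    else:
        table.putBoolean("hasTarget", False)
```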
#3
Re: How fast was your vision processing?
We use a coprocessor (an onboard laptop) that streams target information to the robot every 500 ms. We also wait for the next frame to ensure the camera is stable before we react to the target, which means we could end up waiting the full 500 ms, but most of the time it's probably less. We've found this rate seems pretty good for maintaining responsiveness while not bogging down any of the systems.
Could we speed it up? Probably, but we haven't seen a need thus far. We also stream the target information over an open WebSocket rather than using NetworkTables, which probably helps with latency as well.
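A rough sketch of pushing target data over a WebSocket in Python, assuming the third-party `websockets` package; the endpoint, message fields, and `get_target` helper are invented for illustration and are not the team's actual code:

```python
# Sketch: coprocessor pushes one JSON target packet per update
# to a WebSocket server listening on the robot side.
import asyncio
import json
import time
import websockets

ROBOT_URI = "ws://10.TE.AM.2:5800"  # hypothetical address/port

async def stream_targets(get_target):
    async with websockets.connect(ROBOT_URI) as ws:
        while True:
            angle, distance, valid = get_target()  # latest vision result
            await ws.send(json.dumps({
                "angle": angle,
                "distance": distance,
                "valid": valid,
                "timestamp": time.time(),  # lets the receiver discard stale data
            }))
            await asyncio.sleep(0.5)  # ~500 ms update rate, as described above

# asyncio.run(stream_targets(latest_target_from_pipeline))
```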
#4
Re: How fast was your vision processing?
We used the LabVIEW example code, so I can't say exactly how it worked, but I do know that we were processing 25-30 fps on the roboRIO and it resulted in minimal lag. Our automatic aiming could turn the robot at an x value of up to 0.5 and still catch the target with the robot positioned just beyond the outer works. To make it accurate enough for shooting, however, we had to slow it down to an x value of 0.25. Side note: we do run PID to make slow-speed control possible.
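The "slow it down to 0.25" idea amounts to clamping the turn command; a short Python sketch of that, with a made-up gain and deadband rather than anything from the LabVIEW example:

```python
# Sketch: proportional turn toward the target, clamped so the robot
# never turns faster than it can stop accurately.
MAX_TURN = 0.25   # cap found by experiment, per the post above
KP = 1.5          # hypothetical gain
DEADBAND = 0.02   # stop correcting when the x error is this small

def turn_command(target_x):
    """target_x: normalized target center, -1.0 (left) .. +1.0 (right)."""
    if abs(target_x) < DEADBAND:
        return 0.0
    turn = KP * target_x
    return max(-MAX_TURN, min(MAX_TURN, turn))
```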
#5
Re: How fast was your vision processing?
We have a vision processing solution for champs, and it uses the LabVIEW example.
I took the code and all of the controls/indicators and implemented it in our dashboard, with support for camera stream switching and an option to turn tracking on/off. We will only be using it for auto. The vision tracking really only processes images for 500 ms: we will use a gyro to get to the batter, then use the camera to capture a few images, process them, and then use the gyro to correct the error. I found that using the camera to track in real time just isn't very viable due to the inconsistency of the image while the robot is moving (the image blurs and the target is not found). Works pretty well.
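A Python-flavored sketch of that snapshot-and-correct sequence; `capture_angle_error`, `get_heading`, and `turn_to_heading` are hypothetical stand-ins for the team's LabVIEW camera and gyro code:

```python
# Sketch: stop, take a few frames while stationary, average the measured
# angle error, then hand the correction to the gyro loop.
import time

def snapshot_and_correct(capture_angle_error, get_heading, turn_to_heading, samples=3):
    errors = []
    for _ in range(samples):
        err = capture_angle_error()   # degrees off-center from one processed frame
        if err is not None:           # None = target not found (e.g. blurred frame)
            errors.append(err)
        time.sleep(0.1)               # wait for the next frame
    if not errors:
        return False                  # never saw the target; don't move blindly
    correction = sum(errors) / len(errors)
    turn_to_heading(get_heading() + correction)   # the gyro does the actual turn
    return True
```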
#6
Re: How fast was your vision processing?
We used the NVIDIA TK1 with C++ and OpenCV with CUDA GPU support. The actual algorithm was very similar to the samples from GRIP, and everything up to findContours() was pushed to the GPU. It would normally run at the full framerate of the MS LifeCam (30 fps) and sent a UDP packet to the roboRIO every frame. The latency of the algorithm was less than 2 frames, so about 67 ms.
We felt we still couldn't aim fast enough. We actually spent more time working on the robot positioning code than we did on the vision part. At least for us, rotating an FRC bot to within about half a degree of accuracy is not an easy problem; a turret would have been much easier to aim. One helpful exercise we did that I think is worth sharing: figure out what the angular tolerance of a made shot is. We used 0.5 degrees for round numbers. Now, using the gyro, write an algorithm to position the robot (we used the SmartDashboard to type in numbers). Can you rotate the robot 30 ± 0.5 degrees? Does it work for 10 ± 0.5 degrees? Can you rotate the robot 1 degree? Can you rotate it 0.5 degrees? Knowing these numbers and improving them helps a lot.
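That rotation exercise might look something like the following Python sketch; the gyro/drive interfaces, gain, and minimum output are hypothetical, not the team's actual implementation:

```python
# Sketch: rotate the robot by a requested number of degrees and report whether
# it settled within tolerance, so you can test 30, 10, 1, and 0.5 degree turns.
import time

TOLERANCE = 0.5    # degrees; the "made shot" tolerance discussed above
KP = 0.02          # hypothetical proportional gain
MIN_OUTPUT = 0.08  # hypothetical minimum output to overcome drivetrain friction

def rotate_by(degrees, gyro_angle, set_turn, timeout=3.0):
    """gyro_angle() -> heading in degrees; set_turn(x) -> open-loop turn command."""
    target = gyro_angle() + degrees
    start = time.time()
    while time.time() - start < timeout:
        error = target - gyro_angle()
        if abs(error) < TOLERANCE:
            set_turn(0.0)
            return True
        output = KP * error
        # Keep enough output to actually move when the error is small.
        if abs(output) < MIN_OUTPUT:
            output = MIN_OUTPUT if output > 0 else -MIN_OUTPUT
        set_turn(max(-0.5, min(0.5, output)))
        time.sleep(0.02)
    set_turn(0.0)
    return False
```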
#7
Re: How fast was your vision processing?
For us the issues weren't about vision itself; they were about an erroneously tuned PID on the shooter tilt/pan that took forever to settle. At the start of the competition, the shooter would be off by +/- a few degrees in pan and +/- a lot of degrees in tilt. Double-check those if you have a few spare hours. Note: we use a turret rather than the drive train to adjust left/right aim.
We use Axis -> MJPEG (320x240 @ 15 fps) -> FMS network -> driver station laptop running OpenCV -> NetworkTables -> FMS network -> RoboRIO. We used all of our free time this past Saturday to re-tune the shooter PID from scratch and optimize a few processing pathways. It was heartbreaking to miss the tournament, but it had a major silver lining: off the field, the shooter now tracks without noticeable lag to within about +/- 0.2 degrees. I would expect roughly an additional 100 ms delay on the field given the packet round-trip times through the FMS.
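A bare-bones positional PID of the sort being re-tuned here, as a hedged Python sketch; the gains are placeholders and the team's actual controller (on the RoboRIO side) will differ:

```python
# Sketch: positional PID for the shooter pan axis. Re-tuning is mostly about
# getting it to settle quickly without oscillating around the setpoint.
class PID:
    def __init__(self, kp, ki, kd, output_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.output_limit, min(self.output_limit, out))

# pan_pid = PID(kp=0.05, ki=0.001, kd=0.01)   # placeholder gains; tune on the robot
# motor_cmd = pan_pid.update(target_angle, turret_angle, dt=0.02)
```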
#8
Re: How fast was your vision processing?
We use a Kinect camera connected directly to our coprocessor; the frames are processed with OpenCV and the results are sent to the RoboRIO / driver station for alignment and viewing. Running on a single thread, the coprocessor is able to update at the Kinect's maximum framerate of 30 fps.
Here's a video of it in action (with a handheld piece of cardboard with retroreflective tape; both the coprocessor and the RoboRIO are running in this example).
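Since the thread is about speed, one simple way to verify a "30 FPS on a single thread" number is to time the loop itself. A sketch, assuming any OpenCV-readable source; the Kinect (via a library such as freenect) would slot in where `cv2.VideoCapture` is, and `process` is a placeholder:

```python
# Sketch: measure capture and processing time per frame to see where the
# pipeline actually spends its milliseconds.
import time
import cv2

cap = cv2.VideoCapture(0)   # stand-in for whatever camera you use

def process(frame):
    # placeholder for the real OpenCV pipeline
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

frames = 0
t_start = time.time()
while frames < 300:
    t0 = time.time()
    ok, frame = cap.read()
    t1 = time.time()
    if not ok:
        continue
    process(frame)
    t2 = time.time()
    frames += 1
    print("capture %.1f ms, process %.1f ms" % ((t1 - t0) * 1e3, (t2 - t1) * 1e3))

print("average FPS: %.1f" % (frames / (time.time() - t_start)))
```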
#9
Re: How fast was your vision processing?
Quote:
#10
Re: How fast was your vision processing?
We originally had OpenCV code running in a separate thread on the roboRIO. This worked pretty well; however, there was noticeable lag. So between competitions we switched to a Raspberry Pi 3 running OpenCV and NetworkTables. This was way faster, especially with the new Pi's 64-bit capability and 1.2 GHz processor. We saw less than 100 ms, so the only thing slowing us down was the robot code. It worked pretty well, but our algorithm wasn't ideal because we didn't have any sort of PID loop; we just kept checking whether we were within a pixel tolerance. Right now I am working on calculating the angles to rotate to in order to shoot.
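The pixel-to-angle conversion being worked on is usually just the camera's field of view plus a little trig. A sketch with assumed numbers (the ~60 degree horizontal FOV and 320-pixel width here are examples, not measured values for this team's camera):

```python
# Sketch: convert a target's pixel x-position into a rotation angle for the robot.
import math

IMAGE_WIDTH = 320        # pixels
HORIZONTAL_FOV = 60.0    # degrees; measure this for your actual camera

# Focal length in pixels, derived from the FOV.
FOCAL_PX = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(HORIZONTAL_FOV / 2.0))

def pixel_to_angle(target_x):
    """Degrees to rotate; positive = target is to the right of center."""
    offset = target_x - IMAGE_WIDTH / 2.0
    return math.degrees(math.atan2(offset, FOCAL_PX))

# Example: a target at x=240 in a 320-wide image is roughly
# atan(80 / 277) ~ 16 degrees to the right.
```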
#11
Re: How fast was your vision processing?
Stereolabs ZED camera -> Jetson TX1 for goal recognition (low-20s FPS capture thread speed @ 720p, 50+ FPS in the goal detection thread) -> ZeroMQ message per processed frame with angle and distance to LabVIEW code on the RoboRIO -> rotate turret, spin shooter wheels up to speed -> ball in goal.
There were a few frames of lag in the ZED camera, so we waited ~250 ms or so after the robot and turret stopped before latching in data on the LabVIEW side. Even so, the shooter wheels spinning up were usually the slowest part. The whole process took maybe 2 seconds from stopping the robot until the shot happened.
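A hedged sketch of the ZeroMQ sender side in Python (pyzmq); the port and message fields are assumptions, and the actual sender described above runs on the Jetson rather than as Python:

```python
# Sketch: publish one angle/distance message per processed frame over ZeroMQ.
# The robot side subscribes and only latches data once the robot/turret is still.
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5802")          # hypothetical port

def publish_target(angle_deg, distance_m, valid):
    pub.send_json({
        "angle": angle_deg,
        "distance": distance_m,
        "valid": valid,
        "stamp": time.time(),     # receiver can reject frames older than ~250 ms
    })
```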
#12
Re: How fast was your vision processing?
Our configuration is a USB camera plugged into a BeagleBone processor running OpenCV, sending X-offset and target-validity data over Ethernet packets at ~50 Hz. The image capture is at 30 fps and the image processing takes a fraction of the frame capture time, so we see the results just as quickly as the camera can stream them: effectively 33 ms.
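A minimal sketch of that kind of small fixed-format packet in Python; the address, port, and packet layout are invented for illustration, since the BeagleBone code itself isn't shown in the post:

```python
# Sketch: send X-offset and validity as a tiny fixed-format UDP packet at ~50 Hz.
import socket
import struct
import time

ROBOT_ADDR = ("10.TE.AM.2", 5801)   # hypothetical RoboRIO address and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(x_offset_px, valid):
    # "<fB" = little-endian float (x offset in pixels) + unsigned byte (valid flag)
    sock.sendto(struct.pack("<fB", x_offset_px, 1 if valid else 0), ROBOT_ADDR)

# while True:
#     send_target(latest_x_offset, latest_valid)
#     time.sleep(0.02)              # ~50 Hz
```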
#13
Re: How fast was your vision processing?
We are using the example code on the roboRIO, and we process images in about a second each. Due to the image-processing lag, we convert the image data to degrees of rotation on the NavX and position the robot with NavX data. We shoot when the next image confirms we are within tolerance. On average it takes us about 5 seconds from turning on vision to scoring the goal, if we start about 25 degrees off center. It is all in LabVIEW on the robot; we don't even send the image back to the driver station.
This is the first year we are using vision on the bot. Next year we will probably play with a vision processor in the off-season, but we had enough of a learning curve just getting to where we are.
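That confirm-before-shooting step is essentially a one-frame gate. A small Python sketch of the idea, with a hypothetical helper standing in for the team's LabVIEW code and a placeholder tolerance:

```python
# Sketch: after turning with the NavX, require one fresh image to agree
# before firing, instead of trusting the second-old measurement.
ANGLE_TOLERANCE = 1.0   # degrees; placeholder value

def confirmed_on_target(get_fresh_angle_error):
    """get_fresh_angle_error() blocks until a new image is processed and returns
    the remaining error in degrees, or None if no target was found."""
    error = get_fresh_angle_error()
    return error is not None and abs(error) <= ANGLE_TOLERANCE
```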
#14
Re: How fast was your vision processing?
3 months so far (and it's still looking for its first competition field contour)
#15
Re: How fast was your vision processing?
My team created and used Tower Tracker. Unfortunately, due to our robot's constraints we were not able to use it effectively, but we will try our hardest at competition.
Since Tower Tracker runs on the desktop, it gets the Axis camera feed, which is maybe 200 ms behind. It can then process the frames in real time, so maybe another 30 ms, and it sends the result to NetworkTables, which is another 100 ms; the robot's reaction is essentially real time by the time the vision data is ready. Robots can use snapshots of what they need to use vision processing effectively: when lining up you only need one frame to do the angle calculations, then use a gyro to turn 20 degrees or whatever it is, and then find the distance. Multiple iterations help all of it, of course. TL;DR: a 400 ms max delay with snapshots gives us good-enough target tracking.