Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   What Did you use for Vision Tracking? (http://www.chiefdelphi.com/forums/showthread.php?t=147984)

rich2202 01-05-2016 17:49

Re: What Did you use for Vision Tracking?
 
IP Camera. Custom Code on the Driver Station.

Vision program written in C++. It took the picture off the Smart Dashboard and processed it. Pretty ingenious code: our programmer looked for "corners", rating each pixel for the likelihood it was a corner (top corner, bottom left corner, bottom right corner); the largest grouping was declared a corner.
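
Not the original code (that was C++ on the driver station), but a rough Python/OpenCV sketch of the per-pixel corner-likelihood idea, using the Harris corner response as the score and connected-component grouping to pick the largest cluster, might look something like this:

Code:

import cv2
import numpy as np

def largest_corner_group(gray):
    # Per-pixel corner response: higher values are more corner-like.
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

    # Keep only strongly corner-like pixels.
    mask = (response > 0.01 * response.max()).astype(np.uint8)

    # Group adjacent corner pixels and keep the largest group.
    count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if count <= 1:
        return None  # no corner-like pixels found
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return tuple(centroids[largest])  # (x, y) of the declared "corner"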

BrianAtlanta 01-05-2016 17:57

Re: What Did you use for Vision Tracking?
 
Our programmers want to clean things up and then we'll be open-sourcing our code. With pyNetworkTables, you have to use a static IP, otherwise it won't work on the FMS.
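
For reference, connecting by static IP with pynetworktables looks roughly like this (the team number and table/key names below are just placeholders):

Code:

from networktables import NetworkTables

# Connect to the roboRIO by its static 10.TE.AM.2 address (team 1234 here).
NetworkTables.initialize(server="10.12.34.2")
table = NetworkTables.getTable("vision")
table.putNumber("targetAngle", 3.7)  # example value sent to the robot code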

FYI, for Python/OpenCV, the installation of OpenCV takes 4 hours after you have the OS. We used Raspbian (Wheezy, I think). PyImageSearch has the steps on how to install OpenCV and Python on Raspbian: 2 hours to install packages, and the final step is a 2-hour compile.

We used the Pi 2. The Pi 3 came out midway through the build season and we didn't want to change. The Pi 3 might be faster, we're going to test to see what the difference is.


Brian

tomy 01-05-2016 18:00

Re: What Did you use for Vision Tracking?
 
Quote:

Originally Posted by BrianAtlanta (Post 1581194)
Our programmers want to clean things up and then we'll be open-sourcing our code. With pyNetworkTables, you have to use a static IP, otherwise it won't work on the FMS.

FYI, for Python/OpenCV, the installation of OpenCV takes 4 hours after you have the OS. We used Raspbian (Wheezy, I think). PyImageSearch has the steps on how to install OpenCV and Python on Raspbian: 2 hours to install packages, and the final step is a 2-hour compile.

We used the Pi 2. The Pi 3 came out midway through the build season and we didn't want to change. The Pi 3 might be faster, we're going to test to see what the difference is.


Brian


Wow that long?

I am extremely new to OpenCV and Python. Do you have any good places to start?

snekiam 01-05-2016 18:04

Re: What Did you use for Vision Tracking?
 
Quote:

Originally Posted by tomy (Post 1581197)
Wow that long?

I am extremely new to OpenCV and Python. Do you have any good places to start?

The installation takes a long time because you need to build OpenCV from source on the Pi, which does take several hours. Which language are you looking to get started with?

axton900 01-05-2016 18:04

Re: What Did you use for Vision Tracking?
 
We used an Axis IP Camera and a Raspberry Pi running a modified version of Team 3019's TowerTracker OpenCV Java program. I believe someone posted about it earlier in this thread.

marshall 01-05-2016 18:14

Re: What Did you use for Vision Tracking?
 
We worked with Stereolabs in the pre-season to get their Zed camera down in price and legal for FRC teams. They even dropped the price lower once build season started.

We used the Zed in combination with an Nvidia TX1 to capture the location of the tower, rotate/align a turret, and grab the depth data to shoot the ball. Mechanically the shooter had some underlying issues, but when it worked, the software and shooter made an accurate combination.

We also did a massive amount of research into neural networks and we've got ball tracking working. It never ended up on a robot but thanks to 254's work that they shared in St Louis (latency compensation and pose estimation/extraction), I think we'll be able to get that working on the robot in the off-season. The goal is to automate ball pickup.

We'll have some white papers out before too long and we're working closely with Nvidia to create resources to make a lot of what we've done easier on teams in the future. Our code is out on Github.

apache8080 01-05-2016 18:51

Re: What Did you use for Vision Tracking?
 
We used a USB camera connected to a Raspberry Pi. On the Raspberry Pi we used Python and OpenCV to track the goal from the retro-reflective tape. The tape was tracked using basic color thresholding for the color green. Once we found the green color, we contoured the binary image and used OpenCV moments to calculate the centroid of the goal. After finding the centroid, the program calculated the angle the robot had to turn to center on the goal, using the camera's given field-of-view angle. Using pynetworktables we sent the calculated angle to the RoboRIO, and then a PID controller turned the robot to that angle. Here is the link to our vision code.
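
A condensed sketch of that pipeline (not our actual code, which is linked above; the HSV bounds, field-of-view value, and NetworkTables names are illustrative):

Code:

import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="10.12.34.2")  # roboRIO static IP (example)
table = NetworkTables.getTable("vision")

HFOV_DEG = 60.0                        # camera's horizontal field of view (assumed)
LOWER_GREEN = np.array([60, 100, 60])  # HSV threshold, tuned for a green LED ring
UPPER_GREEN = np.array([90, 255, 255])

cap = cv2.VideoCapture(0)              # USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    # [-2] keeps this working across OpenCV versions with different return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        continue
    target = max(contours, key=cv2.contourArea)
    m = cv2.moments(target)
    if m["m00"] == 0:
        continue
    cx = m["m10"] / m["m00"]           # centroid x in pixels
    # Map the pixel offset from image center to an angle, assuming a linear fit.
    half_width = frame.shape[1] / 2
    angle = (cx - half_width) / half_width * (HFOV_DEG / 2)
    table.putNumber("targetAngle", angle)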

billbo911 01-05-2016 19:04

Re: What Did you use for Vision Tracking?
 
We started out with OpenCV on a PCDuino. I say "started out" because we ultimately found we could do really well without it, and because we realized our implementation was actually causing us issues once in a while.

We have identified the root cause of the issues and will be implementing a new process going forward.

We are moving to OpenCV on RPi-3. It is WAY FASTER than what we had with the PCDuino, and is actually a bit less expensive. In addition, there is tons of support in the RPi community.

KJaget 02-05-2016 10:50

Re: What Did you use for Vision Tracking?
 
Quote:

Originally Posted by marshall (Post 1581203)
We worked with Stereolabs in the pre-season to get their Zed camera down in price and legal for FRC teams. They even dropped the price lower once build season started.

We used the Zed in combination with an Nvidia TX1 to capture the location of the tower, rotate/align a turret, and grab the depth data to shoot the ball. Mechanically the shooter had some underlying issues, but when it worked, the software and shooter made an accurate combination.

We also did a massive amount of research into neural networks and we've got ball tracking working. It never ended up on a robot but thanks to 254's work that they shared in St Louis (latency compensation and pose estimation/extraction), I think we'll be able to get that working on the robot in the off-season. The goal is to automate ball pickup.

We'll have some white papers out before too long and we're working closely with Nvidia to create resources to make a lot of what we've done easier on teams in the future. Our code is out on Github.

I'll add that we use ZeroMQ to communicate between the TX1 and the LabVIEW code on the RoboRIO. That turned out to be one of the least painful parts of the development process - it took about 10 lines of C++ code overall and just worked from there on.
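
Our side of it was C++; in Python with pyzmq the same pattern is about as short (the port number and message format below are assumptions, not our actual protocol):

Code:

import json
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUB)
sock.bind("tcp://*:5800")  # publish target data on an FRC team-use port

def publish_target(angle, distance):
    sock.send_string(json.dumps({"angle": angle, "distance": distance}))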

mwtidd 02-05-2016 10:55

Re: What Did you use for Vision Tracking?
 
We ran OpenCV for Java on an onboard coprocessor. At first it ran on a Kangaroo, but when that burnt out we switched to an onboard laptop.

virtuald 02-05-2016 11:21

Re: What Did you use for Vision Tracking?
 
We used OpenCV+Python on the roboRIO, as an mjpg-streamer plugin so that we could optionally stream the images to the DS, with pynetworktables to send data to the robot code.

Only about 40% CPU usage, and it worked really well; the problems we had were in the code that used the results from the camera.

Code can be found here.

andrewthomas 02-05-2016 11:35

Re: What Did you use for Vision Tracking?
 
Team 1619 used an Nvidia Jetson TK1 for our vision processing with a Logitech USB webcam. We wrote our vision processing code in Python using OpenCV and communicated with the roboRIO and driver station using a custom written socket server similar to NetworkTables. We also streamed the camera feed from the Jetson to the driver station using a UDP stream.
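
Not 1619's actual socket server, but a minimal sketch of pushing vision results from the Jetson to the roboRIO over UDP (the address, port, and JSON message format are assumptions):

Code:

import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ROBORIO = ("10.16.19.2", 5801)  # roboRIO static IP for team 1619, arbitrary port

def send_target(angle, distance):
    msg = json.dumps({"angle": angle, "distance": distance}).encode()
    sock.sendto(msg, ROBORIO)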

MamaSpoldi 02-05-2016 14:06

Re: What Did you use for Vision Tracking?
 
Quote:

Originally Posted by nighterfighter (Post 1581106)
1771 originally used the Axis M1011 camera, and GRIP on the driver station. However, we had a problem on the field: the network table data was not being sent back, and we couldn't figure it out.

We switched to using a PixyCam, and had much better results.

Team 230 also used a PixyCam... with awesome results, and we integrated it in one day. The PixyCam does the image processing on-board, so there is no need to transfer images. You can quickly train it to search for a specific color and report when it sees it. We selected the simplest interface option the Pixy provides: a single digital output (indicating "I see a target") and a single analog output (which reports where within the frame the target is located). This allowed our driver interface (and our autonomous code) to use the digital signal to tell us when the target is in view and the analog value to drive the robot rotation to center on the goal.
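
A hedged RobotPy sketch of that single digital + single analog interface (channel numbers, voltage scaling, and the turn gain are assumptions; 230's actual code may differ):

Code:

import wpilib

class PixyAim:
    def __init__(self):
        self.seen = wpilib.DigitalInput(0)     # high when the Pixy sees a target
        self.position = wpilib.AnalogInput(0)  # roughly 0-3.3 V across the frame

    def turn_command(self):
        if not self.seen.get():
            return 0.0                         # no target in view: don't turn
        # The frame center sits near the middle of the analog range.
        error = self.position.getVoltage() - 1.65
        return -0.4 * error                    # simple proportional turn rate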

The only issue we came up against after the initial integration was at Champs, when the much more polished surface of the driver station wall reflected the LEDs back to the camera, simulating a goal target, and we shot at ourselves in autonomous. :yikes: It was quickly fixed by adding the requirement that we had to rotate at least 45 degrees before we started looking for a target.

The PixyCam is an excellent way to provide auto-targeting without significant impact to the code on the roboRIO or requiring sophisticated integration of additional software.

Jaci 02-05-2016 14:40

Re: What Did you use for Vision Tracking?
 
OpenCV C/C++ mixed source running on a Pine64 coprocessor with a Kinect as a camera: 30 fps tracking using the infrared stream. Targets are found and reduced to a bounding box ready to be sent over the network. Each vision target takes 32 bytes of data, is used for auto alignment, and is sent to the Driver Station WebUI for driver feedback. Code will be available in a few days; I'm boarding the plane home soon.
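
The exact layout isn't published yet, but packing a bounding-box target into a fixed 32-byte message could look like this with Python's struct module (the field choice and ordering here are made up for illustration):

Code:

import struct

# uint32 id, double timestamp, four floats (x, y, width, height), 4 pad bytes = 32 bytes
TARGET_FMT = "<Id4f4x"
assert struct.calcsize(TARGET_FMT) == 32

def pack_target(target_id, timestamp, x, y, w, h):
    return struct.pack(TARGET_FMT, target_id, timestamp, x, y, w, h)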

Alpha Beta 02-05-2016 15:15

Re: What Did you use for Vision Tracking?
 
Nothing too fancy.

LabVIEW FRC Color Processing Example (Thanks NI. This was our most sophisticated use of vision ever, and the examples you provided every team in LabVIEW were immensely helpful.)

Running in a custom dashboard on the driver's station (i5 laptop several years old.)

Hue, Sat, Val parameters stored in a CSV file, with the ability to save new values during a match.

Target coordinates sent back to robot through Network Tables.

Axis Camera m1013 with exposure setting turned to 0 in LabVIEW.

Green LED ring with a significant amount of black electrical tape blocking out some of the lights.

PS. For Teleop we had a piece of tape on the computer screen so drivers could confirm the auto aim worked. If center of tape = center of goal, then fire.

PPS. Pop-up USB camera was not running vision tracking.

