**View Poll Results: What did you use for vision tracking?**

| Option | Votes | Percentage |
| --- | --- | --- |
| Grip on RoboRio - IP Camera | 3 | 2.07% |
| Grip on RoboRio - USB Camera | 9 | 6.21% |
| Grip on Laptop - IP Camera | 19 | 13.10% |
| Grip on Laptop - USB Camera | 6 | 4.14% |
| Grip on Raspberry Pi - IP Camera | 5 | 3.45% |
| Grip on Raspberry Pi - USB Camera | 13 | 8.97% |
| RoboRealm IP Camera | 6 | 4.14% |
| RoboRealm USB Camera | 7 | 4.83% |
| Other - Please Elaborate with a Response | 77 | 53.10% |

Voters: 145. You may not vote on this poll.
#16
Re: What Did you use for Vision Tracking?
IP camera. Custom code on the driver station.

The vision program was written in C++. It took the picture off the SmartDashboard and processed it. Pretty ingenious code: he looked for "corners", rating each pixel for the likelihood it was a corner (top corner, bottom-left corner, bottom-right corner); the largest grouping was declared a corner.
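A rough sketch of that corner-scoring idea, using OpenCV's Harris response as a stand-in for the custom per-pixel rating described above (the threshold fraction and the grouping step are assumptions, not the team's actual code):

```python
# Sketch only: score pixels by "corner likelihood", then take the largest grouping.
# The code described above was custom C++; Harris response is used here as a stand-in.
import cv2
import numpy as np

img = cv2.imread("frame.jpg")                       # hypothetical frame from the dashboard
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Per-pixel corner-likelihood score
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep only strong responses (threshold fraction is an assumption)
mask = np.uint8(response > 0.01 * response.max())

# Group neighbouring corner pixels and declare the largest group "the corner"
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
if num > 1:
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip background label 0
    corner_x, corner_y = centroids[largest]
    print("corner at", corner_x, corner_y)
```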
#17
Re: What Did you use for Vision Tracking?
Our programmers want to clean things up, and then we'll be open-sourcing our code. With pyNetworkTables you have to use a static IP; otherwise it won't work on the FMS.

FYI, for Python/OpenCV, the installation of OpenCV takes about 4 hours after you have the OS. We used Raspbian (Wheezy, I think). PyImageSearch has the steps for installing OpenCV and Python on Raspbian: about 2 hours to install packages, and the final step is a 2-hour compile. We used the Pi 2. The Pi 3 came out midway through the build season and we didn't want to change. The Pi 3 might be faster; we're going to test to see what the difference is.

Brian
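For reference, a minimal sketch of pointing pynetworktables at the roboRIO by its static IP, using the current pynetworktables API (the address assumes team 1234, and the table/key names are just illustrative):

```python
# Minimal sketch: connect pynetworktables to the roboRIO by its static IP.
# Replace 10.12.34.2 with your team's address (10.TE.AM.2, e.g. team 1234 -> 10.12.34.2).
from networktables import NetworkTables

NetworkTables.initialize(server="10.12.34.2")   # static roboRIO IP (assumed team 1234)
table = NetworkTables.getTable("vision")

table.putNumber("angle", 3.7)                   # example value sent to the robot code
```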
#18
Re: What Did you use for Vision Tracking?
Wow, that long? I am extremely new to OpenCV and Python. Do you have any good places to start?
#19
Re: What Did you use for Vision Tracking?
The installation takes a long time because you need to build OpenCV on the Pi itself, which takes several hours. Which language are you looking to get started with?
#20
Re: What Did you use for Vision Tracking?
We used an Axis IP camera and a Raspberry Pi running a modified version of Team 3019's TowerTracker OpenCV Java program. I believe someone posted about it earlier in this thread.

Last edited by axton900 : 01-05-2016 at 18:48. Reason: addition
#21
Re: What Did you use for Vision Tracking?
We worked with Stereolabs in the pre-season to get their ZED camera down in price and legal for FRC teams. They even dropped the price lower once build season started.

We used the ZED in combination with an Nvidia TX1 to capture the location of the tower, rotate/align a turret, and grab the depth data to shoot the ball. Mechanically the shooter had some underlying issues, but when it worked, the combination was accurate.

We also did a massive amount of research into neural networks, and we've got ball tracking working. It never ended up on a robot, but thanks to 254's work that they shared in St. Louis (latency compensation and pose estimation/extraction), I think we'll be able to get it working on the robot in the off-season. The goal is to automate ball pickup.

We'll have some white papers out before too long, and we're working closely with Nvidia to create resources that make a lot of what we've done easier for teams in the future. Our code is out on GitHub.
#22
Re: What Did you use for Vision Tracking?
We used a USB camera connected to a Raspberry Pi. On the Raspberry Pi we used Python and OpenCV to track the goal from the retro-reflective tape. The tape was tracked using basic color thresholding for the color green. Once we found the green, we contoured the binary image and used OpenCV moments to calculate the centroid of the goal.

After finding the centroid, the program calculated the angle the robot had to turn to center on the goal, using the camera's given field-of-view angle. Using pynetworktables, we sent the calculated angle to the roboRIO, and then a PID controller turned the robot to that angle. Here is the link to our vision code.
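A minimal sketch of the pipeline described above (the HSV bounds, camera field of view, roboRIO address, and NetworkTables key names here are assumptions, not the team's actual values):

```python
# Sketch of the pipeline described above: threshold green, contour, centroid via
# moments, convert the pixel offset to an angle, and publish it with pynetworktables.
import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="10.12.34.2")     # placeholder roboRIO address
table = NetworkTables.getTable("vision")

HORIZONTAL_FOV_DEG = 60.0                         # assumed camera field of view
LOWER_GREEN = np.array([50, 100, 100])            # assumed HSV bounds for the tape
UPPER_GREEN = np.array([90, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        continue

    # Basic color thresholding for the retro-reflective tape
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)

    # Contour the binary image; take the largest contour as the goal
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]   # works on OpenCV 2/3/4
    if not contours:
        continue
    goal = max(contours, key=cv2.contourArea)

    # Centroid of the goal from image moments
    m = cv2.moments(goal)
    if m["m00"] == 0:
        continue
    cx = m["m10"] / m["m00"]

    # Angle the robot must turn, from the centroid's offset and the camera FOV
    width = frame.shape[1]
    angle = (cx - width / 2.0) / width * HORIZONTAL_FOV_DEG
    table.putNumber("angle", angle)
```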
Last edited by apache8080 : 01-05-2016 at 18:53.
#23
Re: What Did you use for Vision Tracking?
We started out with OpenCV on a PCDuino. I say "started out" because we ultimately found we could do really well without it, and because we eventually realized that our implementation was actually causing us issues once in a while.

We have identified the root cause of those issues and will be implementing a new process going forward. We are moving to OpenCV on an RPi-3. It is WAY FASTER than what we had with the PCDuino, and is actually a bit less expensive. In addition, there is tons of support in the RPi community.
#24
Re: What Did you use for Vision Tracking?
Quote:
#25
Re: What Did you use for Vision Tracking?
We ran OpenCV for Java on an onboard coprocessor. At first it ran on a Kangaroo, but when that burned out we switched to an onboard laptop.
#26
Re: What Did you use for Vision Tracking?
We used OpenCV+Python on the roboRIO, as an mjpg-streamer plugin so that we could optionally stream the images to the DS, and pynetworktables to send data to the robot code.

Only about 40% CPU usage, and it worked really well; the problems we had were in the code that used the results from the camera. Code can be found here.
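A rough sketch of the shape such a per-frame filter can take: process the frame, publish results with pynetworktables, and return an annotated frame for the optional stream to the DS. How mjpg-streamer actually invokes the filter depends on the plugin, so that hookup is left out, and the HSV bounds, server, and key names below are assumptions:

```python
# Sketch of a per-frame filter in the style described above. The streamer-side
# hookup is plugin-specific and intentionally omitted (assumption).
import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="localhost")   # robot code on the same roboRIO runs the NT server (assumption)
table = NetworkTables.getTable("vision")

def filter_frame(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    table.putBoolean("target_found", bool(contours))
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        table.putNumber("target_x", x + w / 2.0)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)  # annotate for the DS stream
    return frame
```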
#27
Re: What Did you use for Vision Tracking?
Team 1619 used an Nvidia Jetson TK1 for our vision processing with a Logitech USB webcam. We wrote our vision processing code in Python using OpenCV and communicated with the roboRIO and driver station using a custom-written socket server similar to NetworkTables. We also streamed the camera feed from the Jetson to the driver station over UDP.
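For anyone rolling their own transport, a generic sketch of sending target data from a coprocessor to the roboRIO over UDP with Python's standard library (the address, port, and message format here are assumptions, not 1619's actual protocol):

```python
# Generic sketch: push vision results from a coprocessor to the roboRIO over UDP.
# Address, port, and message format are assumptions, not team 1619's protocol.
import json
import socket

ROBORIO = ("10.12.34.2", 5800)     # placeholder address; ports 5800-5810 are open for team use on the FMS
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(angle_deg, distance_m):
    msg = json.dumps({"angle": angle_deg, "distance": distance_m}).encode()
    sock.sendto(msg, ROBORIO)

send_target(3.2, 2.5)              # example values
```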
#28
Re: What Did you use for Vision Tracking?
The only issue we came up against after the initial integration was at Champs, when the much more polished surface of the driver station wall reflected the LEDs back to the camera, simulating a goal target, and we shot at ourselves in autonomous. It was quickly fixed by adding the requirement that we rotate at least 45 degrees before we start looking for a target.

The PixyCam is an excellent way to provide auto-targeting without a significant impact on the code on the roboRIO or requiring sophisticated integration of additional software.
#29
Re: What Did you use for Vision Tracking?
OpenCV C/C++ mixed source running on a Pine64 coprocessor with a Kinect as the camera, giving 30 fps tracking using the infrared stream. Targets are found and reduced to a bounding box ready to be sent over the network. Each vision target takes 32 bytes of data and is used for auto-alignment and sent to the driver station WebUI for driver feedback. Code will be available in a few days; I'm boarding the plane home soon.
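The wire format isn't spelled out above, but packing a target into a fixed 32-byte record can look like this (the field layout below is an assumption, and the original code is C/C++; Python's struct module is used here just to illustrate):

```python
# Sketch: pack one vision target into a fixed 32-byte record.
# Field layout (8 x 32-bit floats) is an assumption, not the poster's actual format.
import struct

TARGET_FORMAT = "<8f"                       # 8 little-endian floats = 32 bytes
assert struct.calcsize(TARGET_FORMAT) == 32

def pack_target(x, y, w, h, cx, cy, area, confidence):
    return struct.pack(TARGET_FORMAT, x, y, w, h, cx, cy, area, confidence)

record = pack_target(120.0, 80.0, 60.0, 24.0, 150.0, 92.0, 1440.0, 0.97)
print(len(record))                          # 32
```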
#30
Re: What Did you use for Vision Tracking?
Nothing too fancy:

- LabVIEW FRC Color Processing Example (thanks, NI. This was our most sophisticated use of vision ever, and the examples you provided every team in LabVIEW were immensely helpful.)
- Running in a custom dashboard on the driver station (an i5 laptop several years old).
- Hue, Saturation, Value parameters stored in a CSV file, with the ability to save new values during a match.
- Target coordinates sent back to the robot through NetworkTables.
- Axis M1013 camera with the exposure setting turned to 0 in LabVIEW.
- Green LED ring with a significant amount of black electrical tape blocking out some of the lights.

P.S. For teleop we had a piece of tape on the computer screen so the drivers could confirm that auto-aim worked: if center of tape = center of goal, then fire.

P.P.S. The pop-up USB camera was not running vision tracking.