What Did you use for Vision Tracking?
This year my team didn't have time to get into vision tracking. I've been trying to dive into it, but before I get started I was wondering what the best option is. I've heard a lot of speculation about what is good and what is bad, so I was wondering what people actually used at competition. I would love to hear feedback on what worked and what didn't.
Re: What Did you use for Vision Tracking?
1261 used a Raspberry Pi with OpenCV. The vision code was written in Python and used pynetworktables to communicate with the roboRIO.
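For anyone curious what that looks like, here is a minimal, hypothetical sketch of the Pi-side Python script. The table and key names are made up for illustration, not 1261's actual code:

```
# Hypothetical Pi-side sketch: push target data to the roboRIO with pynetworktables.
# Table and key names below are placeholders.
from networktables import NetworkTables

NetworkTables.initialize(server="10.12.61.2")   # roboRIO address for team 1261
table = NetworkTables.getTable("vision")

def publish_target(center_x, center_y, area):
    # Robot code reads these each loop to compute a steering correction
    table.putNumber("centerX", center_x)
    table.putNumber("centerY", center_y)
    table.putNumber("area", area)
    table.putBoolean("targetFound", True)
```

The robot code then reads the same keys on its side of NetworkTables.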
Re: What Did you use for Vision Tracking?
Quote:
The setup was pretty simple to get up and running. We compiled OpenCV and NetworkTables 3 on the Pi, then wrote a simple C++ program to find the target and send the data needed to align with it back to the roboRIO. I actually followed a video tutorial here to install OpenCV on the Raspberry Pi. For NetworkTables, I downloaded the code off of GitHub, compiled it like a normal program, and added it to my library path, if I remember correctly.
Re: What Did you use for Vision Tracking?
2383 used a Jetson TX1, with a Kinect used as an IR camera. Vision code was written in C++ using OpenCV and communicated with the roboRIO over NetworkTables.
During the offseason we will be exploring the Android phone method that 254 used, for reliability reasons: the Jetson+Kinect combo was expensive and finicky compared to an Android phone with an integrated battery.
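The core of a pipeline like that is just thresholding and contour finding. As a rough illustration only (2383's actual code was C++ on the Jetson, and the threshold values below are placeholders), the idea looks like this in Python/OpenCV:

```
# Illustrative target-finding sketch; threshold values are placeholders.
import cv2
import numpy as np

def find_target(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Keep only bright pixels in the expected color range (retroreflective tape lit by an LED ring)
    mask = cv2.inRange(hsv, np.array([60, 100, 100]), np.array([90, 255, 255]))
    # [-2] keeps this working across OpenCV 2/3/4 return-value differences
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return (x + w / 2.0, y + h / 2.0, cv2.contourArea(largest))  # target center and area
```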
Re: What Did you use for Vision Tracking?
This year Shaker Robotics used the roboRIO with NIVision (Java) to track the targets. We analyzed frames only when we needed them, to avoid using too much of the RIO's resources.
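The "only process when needed" idea is independent of NIVision; a generic sketch of the pattern (Python/OpenCV here purely for illustration, with a placeholder camera and trigger) would be:

```
# Rough illustration of analyzing frames only on demand; not the actual Java/NIVision code.
import cv2

camera = cv2.VideoCapture(0)   # placeholder camera index

def align_if_requested(aim_button_held):
    """Only grab and process a frame when the driver is actually lining up."""
    if not aim_button_held:
        return None            # skip all vision work, save CPU on the RIO
    ok, frame = camera.read()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)   # bright target pixels
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None
    return moments["m10"] / moments["m00"]   # x centroid of the target blob
```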
Re: What Did you use for Vision Tracking?
1771 originally used the Axis M1011 camera and GRIP on the driver station. However, we had a problem on the field: the NetworkTables data was not being sent back, and we couldn't figure out why.
We switched to using a PixyCam and had much better results.
Re: What Did you use for Vision Tracking?
4901 used GRIP on a Raspberry Pi 2 with a Pi Camera.
For more info on our implementation, visit https://github.com/GarnetSquardon490...ion-processing
Re: What Did you use for Vision Tracking?
We used Java and OpenCV on an NVIDIA Jetson TK1, processing images from a Microsoft Lifecam HD3000.
Re: What Did you use for Vision Tracking?
Like most engineering decisions, there isn't a "good" and "bad", but there is often a tradeoff.
We used GRIP on a laptop with an Axis camera. Why? Because our vision code was 100% student built, and the student had never done computer vision before. GRIP on the laptop was the easiest to get going, and it only worked with an IP camera. There are downsides to that. If you use OpenCV, you can write much more flexible code that does more sophisticated processing, but it's harder to get going. On the other hand, by doing things the way we did, we had some latency and frame rate issues, we couldn't go beyond basic capture of the target, and we had to be cautious about the way we drove when under camera control.

Coprocessors, such as a Raspberry Pi, TK1, or TX1 (I was sufficiently impressed with the NVIDIA products that I bought some of the company's stock), will allow you a lot more flexibility, but you have to learn to crawl before you can walk. Those products are harder to set up and have integration issues. Nothing dramatic, but when you have to learn the computer vision algorithms, the networking, and how to power a coprocessor all at the same time, it gets difficult.

If you are trying to prepare for next year, or dare I say it, for a career that involves computer vision, I would recommend GRIP on the laptop as a starting point, because you can experiment with it and see what happens without even hooking up to the robot. After you have that down, port it to a Pi or an NVIDIA product. The Pi probably has the most documentation and example work, so it's probably a good choice, not to mention that the whole setup, including camera, is less than 100 bucks. Once you get that going, the sky's the limit.
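To make the "GRIP on the laptop" starting point concrete: GRIP publishes its contour reports over NetworkTables, and the robot code just reads them. A hedged sketch, shown in Python with pynetworktables (the table and key names depend on how the publish step is configured in your GRIP pipeline, and the equivalent reads exist in the Java and C++ NetworkTables APIs):

```
# Robot-side sketch of reading a GRIP contours report from NetworkTables.
# Table/key names depend on how the Publish operation is configured in GRIP.
from networktables import NetworkTables

NetworkTables.initialize()                       # runs as the server on the roboRIO
grip = NetworkTables.getTable("GRIP/myContoursReport")

def target_offset(image_width=320):
    centers = grip.getNumberArray("centerX", [])
    if len(centers) == 0:
        return None
    # Horizontal offset of the first contour from image center, for a simple turn correction
    return centers[0] - image_width / 2.0
```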
Re: What Did you use for Vision Tracking?
Quote:
There is good documentation for the Raspberry Pi, which I have been working through when I can. Thanks for the reply.
Re: What Did you use for Vision Tracking?
Quote:
The images were also sent via mjpg-streamer to the driver station for the drivers to see, but that's not really needed; we just liked seeing the shooter camera. We'd be happy to help, just PM me. Brian
Re: What Did you use for Vision Tracking?
We used a modified version of TowerTracker for autonomous alignment and a flashlight for teleop alignment.
Re: What Did you use for Vision Tracking?
Team 987 used an onboard Jetson TK1 for our vision tracking. We programmed it in C++ using OpenCV. The Jetson sends target information to the roboRIO over TCP. From there, a compressed, low-frame-rate stream was sent to the driver station for diagnostics. That stream used very little bandwidth (around 500 KB/s), keeping us well under the maximum.
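The coprocessor-to-RIO link can be as simple as one small message per frame over a TCP socket. A purely illustrative sketch (987's real sender was C++; the port and message format below are assumptions, with 5800-5810 being the port range open for team use):

```
# Illustrative coprocessor-side sketch: send target data to the roboRIO over TCP.
# Port and message format are made up; the actual implementation described above was C++.
import json
import socket

ROBORIO = ("10.9.87.2", 5800)    # 10.TE.AM.2 for team 987; 5800-5810 are team-use ports

def send_target(sock, center_x, distance):
    # One small JSON line per frame is tiny and easy to parse on the RIO side
    msg = json.dumps({"x": center_x, "dist": distance}) + "\n"
    sock.sendall(msg.encode())

sock = socket.create_connection(ROBORIO)
send_target(sock, 12.5, 84.0)
```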
Re: What Did you use for Vision Tracking?
We ended up using the roboRIO with a USB camera, but we attempted to use OpenCV on a Jetson, a BeagleBone Black, and a Raspberry Pi 3. We scrapped the Jetson after realizing how hard we were landing after hitting a defense (but we did have OpenCV working). Then we scrapped the BeagleBone after the Raspberry Pi 3 was released. We got OpenCV working on the Pi after 16 hours of compiling, but we ran out of time.