View Poll Results: What did you use for vision tracking?
GRIP on roboRIO - IP Camera: 3 (2.07%)
GRIP on roboRIO - USB Camera: 9 (6.21%)
GRIP on Laptop - IP Camera: 19 (13.10%)
GRIP on Laptop - USB Camera: 6 (4.14%)
GRIP on Raspberry Pi - IP Camera: 5 (3.45%)
GRIP on Raspberry Pi - USB Camera: 13 (8.97%)
RoboRealm - IP Camera: 6 (4.14%)
RoboRealm - USB Camera: 7 (4.83%)
Other - Please Elaborate with a Response: 77 (53.10%)
Voters: 145. You may not vote on this poll.
#1
What Did you use for Vision Tracking?
This year my team didn't have time to get into vision tracking. I've been trying to dive into it, but before I get started I was wondering what the best option is. I've heard a lot of speculation about what is good and what is bad, so I was wondering what people actually used at competition. I would love to hear feedback on what worked and what didn't.
Last edited by tomy: 01-05-2016 at 13:08.
#2
Re: What Did you use for Vision Tracking?
1261 used a Raspberry Pi with OpenCV. The vision code was written in Python and used pyNetworkTables to communicate with the roboRIO.
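Not 1261's actual code, but a minimal sketch of the communication side, assuming a recent pynetworktables release; the "vision" table name and the keys below are made up for illustration.

Code:
# Publish vision results from a Raspberry Pi to the roboRIO over
# NetworkTables (sketch only -- table and key names are assumptions).
import time
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-1261-frc.local")  # roboRIO mDNS name
table = NetworkTables.getTable("vision")

def publish(center_x, target_found):
    table.putNumber("centerX", center_x)           # target x position in pixels
    table.putBoolean("targetFound", target_found)

while True:
    # run the OpenCV detection here, then publish what it found
    publish(160.0, True)
    time.sleep(0.02)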
#3
Re: What Did you use for Vision Tracking?
Thanks for the insight. How hard was it to write the code in OpenCV and use pyNetworkTables? Do you have any good resources or documentation that might help? Also, how bad was the lag?
#4
Re: What Did you use for Vision Tracking?
The setup was pretty simple to get up and running. We compiled OpenCV and NetworkTables 3 on the Pi, then wrote a simple C++ program to find the target and send the data needed to align with it back to the roboRIO. I actually followed a video tutorial here to install OpenCV on the Raspberry Pi. For NetworkTables, I downloaded the code off of GitHub, compiled it like a normal program, and added it to my library path, if I remember correctly.
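For anyone wondering what "find the target and send the data needed to align with it" tends to look like, here is an illustrative sketch, in Python rather than the C++ the team used and assuming OpenCV 4: threshold the lit retroreflective tape, take the largest contour, and report how far its center is from the middle of the image. The HSV range is a placeholder that would need tuning for a real green LED ring.

Code:
# Sketch only -- not the team's program. Finds the largest green blob
# and returns its horizontal offset from the image center in pixels.
import cv2
import numpy as np

LOWER = np.array([60, 100, 100])   # assumed HSV lower bound for a green LED ring
UPPER = np.array([90, 255, 255])   # assumed HSV upper bound

def find_target_offset(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    center_x = x + w / 2.0
    return center_x - frame.shape[1] / 2.0   # sign tells which way to turn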
#5
Re: What Did you use for Vision Tracking?
2383 used a Jetson TX1 with a Kinect used as an IR camera. The vision code was written in OpenCV using C++ and communicated with the roboRIO over NetworkTables.
During the offseason we will be exploring the Android phone method that 254 used, for reliability reasons; the Jetson + Kinect combo was expensive and finicky compared to an Android phone with an integrated battery.
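A rough sketch of why the IR stream is attractive (not 2383's code): the retroreflective tape bounces the Kinect's own IR illumination straight back, so it shows up as a very bright blob, and a plain brightness threshold can replace HSV color filtering. The cutoff value is an assumption.

Code:
# ir_frame is assumed to be an 8-bit single-channel image already read
# from the Kinect's infrared stream.
import cv2

def target_mask(ir_frame, cutoff=200):   # cutoff chosen for illustration
    _, mask = cv2.threshold(ir_frame, cutoff, 255, cv2.THRESH_BINARY)
    return mask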
#6
Re: What Did you use for Vision Tracking?
This year Shaker Robotics used the roboRIO with NIVision (Java) to track the targets. We analyzed frames only when we needed them, to avoid using too much of the RIO's resources.
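Not the team's NIVision/Java code, just a sketch of the "only analyze when needed" pattern in Python/OpenCV: grab and process a single frame when something asks for it, rather than running a continuous processing loop.

Code:
# camera index 0 and the process callback are placeholders for illustration.
import cv2

camera = cv2.VideoCapture(0)

def analyze_on_demand(process):
    ok, frame = camera.read()            # grab one frame right now
    return process(frame) if ok else None

# e.g. call analyze_on_demand(find_target_offset) once when the driver
# presses the aim button, instead of on every loop iteration.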
#7
Re: What Did you use for Vision Tracking?
The images were also sent via mjpg-streamer to the driver station for the drivers to see, but that's not really needed. We just liked seeing the shooter camera. We'd be happy to help, just PM me. Brian
#8
Re: What Did you use for Vision Tracking?
1771 originally used the Axis M1011 camera and GRIP on the driver station. However, we had a problem on the field: the network table data was not being sent back, and we couldn't figure out why.
We switched to using a PixyCam and had much better results.
#9
Re: What Did you use for Vision Tracking?
4901 used GRIP on an RPi v2 + a Pi Camera.
For more info on our implementation, visit https://github.com/GarnetSquardon490...ion-processing
#10
Re: What Did you use for Vision Tracking?
We used Java and OpenCV on an NVIDIA Jetson TK1, processing images from a Microsoft Lifecam HD3000.
#11
Re: What Did you use for Vision Tracking?
Like most engineering decisions, there isn't a "good" and "bad", but there is often a tradeoff.
We used GRIP on a laptop with an Axis camera. Why? Because our vision code was 100% student built, and the student had never done computer vision before. GRIP on the laptop was the easiest to get going, and it only worked with an IP camera.

There are downsides to that. If you use OpenCV, you can write much more flexible code that can do more sophisticated processing, but it's harder to get going. On the other hand, by doing things the way we did, we had some latency and frame rate issues. We couldn't go beyond basic capture of the target, and we had to be cautious about the way we drove when under camera control.

Coprocessors, such as a Raspberry Pi, TK1, or TX1 (I was sufficiently impressed with the NVIDIA products that I bought some of the company's stock), will allow you a lot more flexibility, but you have to learn to crawl before you can walk. Those products are harder to set up and have integration issues. It's nothing dramatic, but when you have to learn the computer vision algorithms, and the networking, and how to power a coprocessor, and do it all at the same time, it gets difficult.

If you are trying to prepare for next year, or dare I say it, for a career that involves computer vision, I would recommend GRIP on the laptop as a starting point, because you can experiment with it and see what happens without even hooking it to the robot. After you have that down, port it to a Pi or an NVIDIA product. The Pi probably has the most documentation and example work, so it's probably a good choice, not to mention that the whole setup, including camera, is less than 100 bucks. Once you get that going, the sky's the limit.
#12
Re: What Did you use for Vision Tracking?
There is good documentation for the Raspberry Pi, which I have been working on when I can. Thanks for the reply.
#13
Re: What Did you use for Vision Tracking?
The only issue we came up against after the initial integration was at Champs, when the much more polished surface of the driver station wall reflected the LEDs back to the camera, simulating a goal target, and we shot at ourselves in autonomous. It was quickly fixed by adding the requirement that we had to rotate at least 45 degrees before we started looking for a target.
The PixyCam is an excellent way to provide auto-targeting without significant impact on the code on the roboRIO or requiring sophisticated integration of additional software.
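A hypothetical sketch of that fix (not the team's actual code): ignore vision targets until the robot has turned at least 45 degrees from its starting heading, so a reflection off the wall behind the robot can't be mistaken for the goal. get_gyro_heading() stands in for whatever gyro interface the robot code really uses.

Code:
MIN_ROTATION_DEG = 45.0

def may_seek_target(start_heading_deg, get_gyro_heading):
    # Only allow target seeking once the robot has rotated far enough
    # away from the driver station wall it started against.
    rotated = abs(get_gyro_heading() - start_heading_deg)
    return rotated >= MIN_ROTATION_DEG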
#14
Re: What Did you use for Vision Tracking?
OpenCV C/C++ mixed source running on a Pine64 coprocessor with a Kinect as the camera. 30 fps tracking using the infrared stream. Targets are found and reduced to a bounding box ready to be sent over the network. Each vision target takes 32 bytes of data and is used for auto-alignment and sent to the Driver Station WebUI for driver feedback. Code will be available in a few days; I'm boarding the plane home soon.
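The post doesn't spell out the 32-byte layout, so here is just one plausible way a bounding-box target could come out to exactly 32 bytes, shown as a Python sketch rather than the C/C++ the team used: four little-endian doubles (4 x 8 bytes).

Code:
# Hypothetical packing for illustration only -- the real format isn't given.
import struct

TARGET_FORMAT = "<4d"   # x, y, width, height as 8-byte doubles

def pack_target(x, y, width, height):
    data = struct.pack(TARGET_FORMAT, x, y, width, height)
    assert len(data) == 32
    return data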
#15
Re: What Did you use for Vision Tracking?
Nothing too fancy.
LabVIEW FRC Color Processing Example. (Thanks, NI. This was our most sophisticated use of vision ever, and the examples you provided every team in LabVIEW were immensely helpful.)
Running in a custom dashboard on the driver station (an i5 laptop several years old).
Hue, Sat, Val parameters stored in a CSV file, with the ability to save new values during a match (see the sketch below).
Target coordinates sent back to the robot through NetworkTables.
Axis M1013 camera with the exposure setting turned to 0 in LabVIEW.
Green LED ring with a significant amount of black electrical tape blocking out some of the lights.
P.S. For teleop we had a piece of tape on the computer screen so the drivers could confirm the auto-aim worked: if center of tape = center of goal, then fire.
P.P.S. The pop-up USB camera was not running vision tracking.
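Not the team's LabVIEW, just a small Python sketch of the calibration-in-a-CSV idea: load the HSV thresholds at startup and rewrite the file whenever new values are saved during a match. The file name and column order are assumptions.

Code:
import csv

CALIBRATION_FILE = "hsv_calibration.csv"   # assumed name

def load_thresholds():
    # single row: hue_lo, sat_lo, val_lo, hue_hi, sat_hi, val_hi
    with open(CALIBRATION_FILE, newline="") as f:
        return [int(v) for v in next(csv.reader(f))]

def save_thresholds(values):
    with open(CALIBRATION_FILE, "w", newline="") as f:
        csv.writer(f).writerow(values)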