Re: What Did you use for Vision Tracking?
Quote:
PixyCam looks like a great option.
Re: What Did you use for Vision Tracking?
1768 began the season using OpenCV on a Jetson TK1; we later switched to a Nexus 5X, which became desirable due to its all-in-one packaging (camera and processor in one unit, which made taking it off the robot to do testing between events easy) and because our programmers felt it would be simpler to communicate between the roboRIO and the Nexus.
The Nexus was used to measure distance and angle to the target; this information was then sent to the roboRIO. Nested PID loops then used the NavX MXP gyro data to align the robot to the target. Images taken during the auto-aligning process were used to adjust the turn setpoint. After two consecutive images returned an angle to the target of less than 0.5 degrees, new images were no longer used to adjust the setpoint, allowing the PID to hold a position rather than bounce between slightly varying setpoints. ~DK
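A minimal sketch of that setpoint-freezing idea (not 1768's actual code; the class and names below are hypothetical): each vision frame updates a gyro-relative turn setpoint until two consecutive frames report an error under 0.5 degrees, after which the setpoint is latched so the PID holds a heading instead of chasing slightly different measurements. Code:
# Hypothetical sketch of the setpoint-freeze logic described above.
FREEZE_THRESHOLD_DEG = 0.5

class AutoAlign:
    def __init__(self):
        self.setpoint_deg = 0.0
        self.small_error_count = 0
        self.frozen = False

    def update(self, vision_angle_deg, gyro_heading_deg):
        """vision_angle_deg: angle to target from the camera; returns the turn setpoint."""
        if not self.frozen:
            # Anchor the setpoint to the gyro so the PID has a stable reference.
            self.setpoint_deg = gyro_heading_deg + vision_angle_deg
            if abs(vision_angle_deg) < FREEZE_THRESHOLD_DEG:
                self.small_error_count += 1
            else:
                self.small_error_count = 0
            if self.small_error_count >= 2:
                self.frozen = True  # stop adjusting; let the PID hold this heading
        return self.setpoint_deg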
Re: What Did you use for Vision Tracking?
Quote:
It seems like OpenCV is the way to go. Does anyone have a good tutorial for OpenCV and vision tracking? I am hoping to put it on a Raspberry Pi.
Re: What Did you use for Vision Tracking?
Quote:
We already had our tracking code written for the Axis camera, so our P loop only needed a tiny adjustment (instead of a range of 320 pixels, it was 5 volts), and the PixyCam swap involved almost zero code change. We got our PixyCam hooked up and running in a few hours. We only used the analog output; we didn't have time to get the digital output working. So if it never saw a target (output value of around 0.43 volts, I believe), the robot would "track" to the right constantly. But that is easy enough to fix in code: if the "center" position doesn't update, you aren't actually tracking. If we had more time we probably would have used I2C or SPI to interface with the camera in order to get more data. I know of at least two other teams from Georgia who used the PixyCam as well, adding it in during or after the DCMP.
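For illustration, a rough sketch (assumed names and constants, not this team's code) of a P loop driven by the Pixy's analog output, with the "no target" voltage rejected so the robot doesn't spin hunting for a target: Code:
# Hypothetical P loop on the Pixy analog output; tune all constants on the robot.
CENTER_VOLTS = 2.5       # mid-scale if the output spans ~5 V as described above
NO_TARGET_VOLTS = 0.43   # approximate value reported when nothing is tracked
KP = 0.4                 # proportional gain per volt of error

def steer_from_pixy(voltage):
    """Return a turn command in [-1, 1], or 0.0 if no target is seen."""
    if abs(voltage - NO_TARGET_VOLTS) < 0.05:
        return 0.0                      # ignore the lost-target value
    error = voltage - CENTER_VOLTS      # volts off-center
    return max(-1.0, min(1.0, KP * error))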
Re: What Did you use for Vision Tracking?
Quote:
Brian
Re: What Did you use for Vision Tracking?
Quote:
So if you could figure out a way to get the roboRIO to recognize it, you might be able to stream it back. You might be better off letting the PixyCam do the processing and using the Axis camera/USB webcam for driver vision. Edit: You could probably send back the output of the PixyCam, though, or reconstruct it. You can get the size and position of each object it senses. Send those back to the driver station, and have a program draw them on screen for you. Anything it doesn't see is just black. So you would have a 320x240 (or whatever resolution) black box, with green/red/etc. boxes based on what the Pixy is processing. However, that would be a few frames behind what it is currently detecting.
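As an illustration of that reconstruction idea (the block list and its field names are assumptions, not the Pixy's actual API), the driver-station side could redraw each reported object as a colored box on a black canvas: Code:
# Hypothetical driver-station renderer for Pixy block data.
import numpy as np
import cv2

def draw_pixy_view(blocks, width=320, height=240):
    """blocks: list of dicts with x, y, w, h (block center and size in pixels)."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)  # everything unseen stays black
    for b in blocks:
        top_left = (b["x"] - b["w"] // 2, b["y"] - b["h"] // 2)
        bottom_right = (b["x"] + b["w"] // 2, b["y"] + b["h"] // 2)
        cv2.rectangle(canvas, top_left, bottom_right, (0, 255, 0), 2)  # green box per block
    return canvas

# Example: one detected block roughly centered in the frame.
frame = draw_pixy_view([{"x": 160, "y": 120, "w": 40, "h": 30}])
cv2.imwrite("pixy_view.png", frame)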
Re: What Did you use for Vision Tracking?
Quote:
OpenCV Pi Installation Instructions
Re: What Did you use for Vision Tracking?
Quote:
Another note with the streamer: consider not thrashing the SD card when using the Pi. Constantly writing to the SD card can shorten the time before corruption. We switched to writing the image to a RAM disk, so nothing goes to the SD card, only to memory. Brian
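A minimal sketch of that RAM-disk approach (assumed path and quality setting): write the streamer's JPEG into /dev/shm, a tmpfs that lives in memory on stock Raspbian, instead of onto the SD card. Code:
# Hypothetical frame publisher that avoids SD card writes.
import cv2

RAM_DISK_PATH = "/dev/shm/stream.jpg"  # tmpfs: held in RAM, nothing touches the SD card

def publish_frame(frame):
    # Overwriting the same file keeps memory use bounded to a single image.
    cv2.imwrite(RAM_DISK_PATH, frame, [cv2.IMWRITE_JPEG_QUALITY, 70])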
Re: What Did you use for Vision Tracking?
We use a Raspberry Pi and RPi camera, with the exposure turned way, way down and a truly ridiculous number of green LEDs. Then we do some image processing with OpenCV (blurring, HSV filtering, etc.), find contours, and filter them out based on criteria. Lastly it communicates the result over to the roboRIO through NetworkTables. It's all written in Python (the bestest language).
We spent a lot of time trying to get OpenCV in Java to work and putting it on the roboRIO. In the end we went with the Raspberry Pi, and we didn't feel GRIP was reliable enough that we would want to use it on our robot during a competition.
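A rough sketch of that pipeline (blur, HSV filter, contours, filter, NetworkTables); the HSV bounds, area cutoff, server address, and table/key names are placeholders rather than this team's actual values: Code:
# Hypothetical Raspberry Pi vision loop in the spirit of the post above.
import cv2
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server="roborio-0000-frc.local")  # placeholder address
table = NetworkTables.getTable("vision")

LOWER_GREEN = np.array([60, 100, 60])
UPPER_GREEN = np.array([90, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    # [-2] keeps this working across OpenCV 3/4 findContours return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    candidates = [c for c in contours if cv2.contourArea(c) > 200]  # drop tiny blobs
    if candidates:
        x, y, w, h = cv2.boundingRect(max(candidates, key=cv2.contourArea))
        center_x = x + w / 2
        table.putNumber("target_offset_px", center_x - frame.shape[1] / 2)
        table.putBoolean("target_found", True)
    else:
        table.putBoolean("target_found", False)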
Re: What Did you use for Vision Tracking?
Kauaibots (team 2465) used the Jetson TK1 with a Logitech C930 webcam (90-degree FOV). The software (C++) used OpenCV, and it detected the angle/distance to the tower light stack, the angle to the lights on the edges of the defenses, and the distance/angle to the retro-reflective targets on the high goal.
The video processing algorithm ran at 30 fps on 640x480 images, wrote a compressed copy (.MJPG file) to the SD card for later review, and also wrote a JPEG image to a directory that was monitored by MJPG-Streamer. The algorithm was designed to switch between two cameras, though we ended up using only one. The operator could optionally overlay the detected object information on top of the raw video, so the drivers could see what the algorithm was doing. Communication with the roboRIO was via NetworkTables, including a "ping" process to ensure the video processor was running, commands to the video processor to select the current algorithm and camera source, and messages to report detection events back to the roboRIO.

The latency correction discussed in the presentation at worlds is a great idea. We have a plan for that.... :) Moving ahead, the plan is to use the navX-MXP's 100 Hz update rate, its dual simultaneous outputs (SPI to the roboRIO, USB to the Jetson), and its high-accuracy timestamp to timestamp the video in the video processor, send that to the roboRIO, and in the roboRIO use the timestamp to locate the matching entry in a time-history buffer of unit quaternions (quaternions are the values used to derive yaw, pitch, and roll). This approach, very similar to what was described in the presentation at worlds, corrects for latency by accounting for any change in orientation (pitch, roll, and yaw) after the video has been acquired but before the roboRIO gets the result from the video processor.

We're collaborating with another team who has been working on neural-network detection algorithms, and the plan is to post a whitepaper on the results of this promising concept - if you have any questions, please feel free to private message me for details on this effort.
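A minimal sketch of that latency-correction idea (using simple gyro headings in place of the quaternion buffer described above, with all names assumed): keep a short time history of headings, and when a vision result arrives stamped with the frame's capture time, anchor the setpoint to the heading at that instant rather than the current one. Code:
# Hypothetical timestamp/history-buffer latency correction.
from bisect import bisect_left
from collections import deque

class HeadingHistory:
    def __init__(self, max_samples=200):          # ~2 s of history at 100 Hz updates
        self.samples = deque(maxlen=max_samples)  # (timestamp_s, heading_deg), time-ordered

    def add(self, timestamp_s, heading_deg):
        self.samples.append((timestamp_s, heading_deg))

    def heading_at(self, timestamp_s):
        """Heading recorded nearest the requested timestamp."""
        if not self.samples:
            raise ValueError("no gyro samples recorded yet")
        times = [t for t, _ in self.samples]
        i = bisect_left(times, timestamp_s)
        if i == 0:
            return self.samples[0][1]
        if i == len(times):
            return self.samples[-1][1]
        before, after = self.samples[i - 1], self.samples[i]
        return before[1] if timestamp_s - before[0] <= after[0] - timestamp_s else after[1]

def corrected_setpoint(history, frame_time_s, vision_angle_deg):
    # The angle was measured relative to where the robot pointed when the frame
    # was captured, so anchor the setpoint to the heading at that instant.
    return history.heading_at(frame_time_s) + vision_angle_deg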
Re: What Did you use for Vision Tracking?
We used a Logitech Pro 9000-type USB camera connected to the roboRIO and wrote custom C++ code to track the tower.
A short video of our driver station in autonomous is on YouTube: https://youtu.be/PRhgljJ9zus The yellow box is our region of interest, the light blue highlights show detection of bright vertical lines, and the yellow highlights show detection of bright horizontal lines. The black circle is our guess at the center-bottom of the tower window. But alas, we only got it working in the last couple of matches. A lot of fun, but it did not help us get to St. Louis. Our tracking code follows: Code:
void Robot::trackTower(){
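A rough Python/OpenCV sketch of the approach described above (not the team's C++ listing; the ROI and thresholds are placeholders): threshold for bright pixels inside a region of interest, project along columns and rows to find strong vertical and horizontal lines, and take their intersection as the center-bottom guess. Code:
# Hypothetical bright-line tower tracker in the spirit of the description above.
import cv2
import numpy as np

ROI_Y, ROI_X = slice(60, 180), slice(80, 240)     # placeholder region of interest

def track_tower(frame):
    roi = frame[ROI_Y, ROI_X]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

    col_strength = bright.sum(axis=0)             # bright pixels per column (vertical lines)
    row_strength = bright.sum(axis=1)             # bright pixels per row (horizontal lines)

    strong_cols = np.where(col_strength > 0.5 * col_strength.max())[0]
    strong_rows = np.where(row_strength > 0.5 * row_strength.max())[0]
    if len(strong_cols) == 0 or len(strong_rows) == 0:
        return None                               # no plausible target in view

    # Center-bottom guess: middle of the strong columns, at the lowest strong row.
    center_x = ROI_X.start + int(strong_cols.mean())
    bottom_y = ROI_Y.start + int(strong_rows.max())
    return center_x, bottom_y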