#2
02-22-2012, 11:57 AM
Jared Russell is offline
Taking a year (mostly) off
FRC #0254 (The Cheesy Poofs), FRC #0341 (Miss Daisy)
Team Role: Engineer
 
Join Date: Nov 2002
Rookie Year: 2001
Location: San Francisco, CA
Posts: 3,069
Re: pic: Team 341 presents Miss Daisy XI

Quote:
Originally Posted by lynca
This frame rate is very impressive.

1. What camera are you using?

2. What are the specs of your laptop?

3. How do you transfer information between the laptop and the cRIO?

4. Do you have an OpenCV backboard tracking example available?
1. The Axis M1011 camera. This limits us to 30 frames per second when operating on the robot (at 640x480 resolution), but the code itself has been shown to process upwards of 200 frames per second when streaming images from disk. In practice, 30 frames per second is more than enough, since we use the gyro for feedback control anyway. At 30 FPS we use about 15% of our CPU.

2. It's a Core i5 with 6GB of RAM.

3. Camera data goes robot -> laptop through the wireless link to the Driver Station. Computed outputs go back through the link to the cRIO using WPI Network Tables.

4. I'll post our full code after the competition season has begun. For achieving basic throughput between camera, laptop, and cRIO, you can use the example square tracker that comes with the SmartDashboard installer. Here is our basic algorithm:

1. Convert the image to the HSV color space.
2. Threshold in HSV (enforce minimum and maximum hue, minimum saturation, and minimum value).
3. Find contours.
4. Take the convex hull of each contour (this is the step that keeps partial obscuration by the rim/basket from killing us).
5. Fit polygons to the contours.
6. Filter polygons based on (a) number of vertices, (b) aspect ratio, and (c) angles of horizontal and vertical lines.
7. Select the highest remaining polygon as the top target.
8. Compute the angle from the camera center to the target center and add it to the current robot heading.
9. Command the new heading to the robot (it uses the gyro and encoders to achieve this heading).
10. Compute range trigonometrically from the offset between the camera's optical axis and the center of the target's height.
11. Command the shooter RPM setpoint by linear interpolation of a lookup table.
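Our actual code is Java on top of JavaCV, but steps 1-7 can be sketched in a few lines of Python with OpenCV. The threshold bounds, vertex count, and aspect-ratio window below are placeholder numbers for illustration, not our tuned values:

```python
import cv2
import numpy as np

def find_target_polygons(bgr):
    # 1. Convert to the HSV color space
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # 2. Threshold in HSV (placeholder bounds, not the tuned values)
    mask = cv2.inRange(hsv, np.array([40, 100, 100]), np.array([80, 255, 255]))
    # 3. Find contours (return signature differs across OpenCV versions)
    res = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = res[0] if len(res) == 2 else res[1]
    polygons = []
    for c in contours:
        # 4. Convex hull fills in edges lost to partial obscuration
        hull = cv2.convexHull(c)
        # 5. Fit a polygon to the hull
        poly = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)
        # 6. Filter on vertex count and aspect ratio (placeholder window)
        if len(poly) != 4:
            continue
        x, y, w, h = cv2.boundingRect(poly)
        if not (1.0 < w / float(h) < 2.0):
            continue
        polygons.append(poly)
    # 7. Sort so the highest polygon (smallest y) comes first: the top target
    polygons.sort(key=lambda p: cv2.boundingRect(p)[1])
    return polygons
```

The angle-of-lines filter from step 6 is omitted here for brevity; it would reject quadrilaterals whose sides are far from horizontal/vertical.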
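Steps 8-11 reduce to a little trigonometry plus a table lookup. A hedged sketch follows; every constant (field of view, mounting heights, camera pitch) and every point in the RPM table is a made-up illustration, not our calibration:

```python
import math

# Illustrative constants -- NOT the real calibration
HORIZ_FOV_DEG = 47.0      # approximate Axis M1011 horizontal field of view
VERT_FOV_DEG = 36.0       # approximate vertical field of view
IMAGE_WIDTH = 640
IMAGE_HEIGHT = 480
CAMERA_HEIGHT_M = 0.8     # hypothetical camera mount height
TARGET_HEIGHT_M = 2.5     # hypothetical target-center height
CAMERA_PITCH_DEG = 20.0   # hypothetical upward camera tilt

def heading_to_target(target_px_x, robot_heading_deg):
    # Step 8: pixel offset from image center -> angle, added to current heading
    offset_px = target_px_x - IMAGE_WIDTH / 2.0
    return robot_heading_deg + offset_px * (HORIZ_FOV_DEG / IMAGE_WIDTH)

def range_to_target(target_px_y):
    # Step 10: vertical angle to the target center -> range via tangent
    offset_px = (IMAGE_HEIGHT / 2.0) - target_px_y
    angle_deg = CAMERA_PITCH_DEG + offset_px * (VERT_FOV_DEG / IMAGE_HEIGHT)
    return (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / math.tan(math.radians(angle_deg))

# Step 11: (range in meters -> shooter RPM) lookup table, made-up points
RPM_TABLE = [(2.0, 2500.0), (4.0, 3100.0), (6.0, 3900.0)]

def shooter_rpm(range_m):
    table = sorted(RPM_TABLE)
    if range_m <= table[0][0]:
        return table[0][1]
    if range_m >= table[-1][0]:
        return table[-1][1]
    for (r0, rpm0), (r1, rpm1) in zip(table, table[1:]):
        if r0 <= range_m <= r1:
            t = (range_m - r0) / (r1 - r0)
            return rpm0 + t * (rpm1 - rpm0)
```

The commanded heading from step 9 is then achieved by the robot's own gyro/encoder loop; the vision thread only publishes setpoints.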

The code has been carefully optimized to avoid allocating and deallocating dynamic memory between subsequent calls, which is what lets us operate at breakneck speed. This also involved a lot of debugging to hunt down latent memory leaks lurking somewhere in the layering of the OpenCV/JavaCV/WPIJavaCV APIs.
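The same idea in miniature: allocate working buffers once, then write into them on every frame instead of creating fresh images per call. This sketch uses NumPy's `out=` arguments to stand in for OpenCV's preallocated destination Mats; the grayscale-plus-threshold stage is just a stand-in workload:

```python
import numpy as np

class FrameProcessor:
    """Illustrative only: reuse buffers across calls, no per-frame allocation."""

    def __init__(self, height, width):
        # Allocated once at startup, reused for every frame
        self.gray = np.empty((height, width), dtype=np.float32)
        self.mask = np.empty((height, width), dtype=bool)

    def process(self, frame):
        # Average the color channels into the preallocated gray buffer
        np.mean(frame, axis=2, out=self.gray)
        # Threshold in place into the preallocated mask buffer
        np.greater(self.gray, 128.0, out=self.mask)
        return self.mask
```

Because `process` returns the same `mask` object every call, the garbage collector has nothing to reclaim in the steady state, which is the property that matters when you are chasing a fixed per-frame time budget.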