#1
Re: Autonomy: How Did You Guys Do It?
We used the camera 100%. I have a feeling previous years' vision targets have given the camera a bad name. The retro-reflective tape used this year is simply awesome to track, and it's the only target I've ever worked with that isn't affected by the stage lighting used at competitions. I hope that stuff is around for a long time to come.
#2
Re: Autonomy: How Did You Guys Do It?
Yup. Our autonomous works in normal lighting, pitch black, or intense lights. We love that stuff.
#3
Re: Autonomy: How Did You Guys Do It?
Wow! I've never done this before! I've attempted to "track" still objects in various still pictures - a ball, a stick, a box - all in color and under different lighting - but I was never able to do it successfully. My main question isn't how to apply an HSL threshold or clean up the image after thresholding, but what to do with all the data I get from the stat VIs and such (I'm using LabVIEW). What does it all mean, and what am I supposed to do with it so my program knows it's looking at the target? Thanks!
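Not anyone's actual competition code, but here is a rough sketch of the idea in Python (the field names, image width, and shape thresholds are all made up for illustration): after the threshold-and-cleanup steps, each surviving blob comes back as a row of statistics - centroid, bounding box, area - and the job is to (a) reject blobs whose shape doesn't match the target and (b) turn the best survivor's centroid into a steering error.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    """One blob's stats, as a typical particle-analysis step reports them.
    (Hypothetical field names - map these to your own VI's outputs.)"""
    center_x: float   # centroid, pixels
    center_y: float
    width: float      # bounding-box size, pixels
    height: float
    area: float       # filled pixels inside the blob

IMAGE_WIDTH = 320  # assumed camera resolution

def looks_like_target(p: Particle) -> bool:
    """Shape filter: keep blobs whose aspect ratio and fill fraction
    roughly match the target's known shape (thresholds are invented)."""
    if p.height == 0 or p.width == 0:
        return False
    aspect = p.width / p.height
    fill = p.area / (p.width * p.height)
    return 2.0 < aspect < 6.0 and fill > 0.5

def steering_error(particles):
    """Normalized horizontal error in [-1, 1]:
    negative = target left of image center, positive = right.
    Returns None if nothing passed the shape test."""
    targets = [p for p in particles if looks_like_target(p)]
    if not targets:
        return None
    best = max(targets, key=lambda p: p.area)  # biggest blob = closest
    return (best.center_x - IMAGE_WIDTH / 2) / (IMAGE_WIDTH / 2)
```

The point is that the stats only "mean" something once you compare them against what you already know about the target: its aspect ratio tells you whether a blob is plausibly the target at all, and its centroid tells you which way to steer.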
#4
Re: Autonomy: How Did You Guys Do It?
We used light sensors for line tracking, encoders for auto-correction, and an encoder on the arm for height determination. It's never failed. It's also kind of showy since instead of placing the tube it throws it down onto the peg.
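A minimal sketch of that light-sensor line-tracking logic in Python (the three-sensor layout and the steering gains are assumptions for illustration, not this team's actual code):

```python
def line_follow_correction(left: bool, mid: bool, right: bool) -> float:
    """Map three digital light sensors (True = sensor sees the line)
    to a steering correction: negative steers left, positive steers
    right, 0.0 drives straight. Gains are made up; tune on carpet."""
    if mid and not left and not right:
        return 0.0    # centered on the line
    if left and not right:
        return -0.3   # line drifting to our left: steer left
    if right and not left:
        return 0.3    # line drifting to our right: steer right
    return 0.0        # all dark (lost) or a fork: hold course
```

Each loop iteration you'd feed this correction into the drive, while the encoders tell you when you've gone far enough to stop and place (or throw) the tube.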
#5
Re: Autonomy: How Did You Guys Do It?
I used a gyro on the base to keep it straight, an encoder for distance, and a second gyro to determine arm angle. I'm hoping to rig up another encoder so as to have two straight-correcting sensors and average the two corrections, or use one as a safety net in case one malfunctions.
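A sketch of that gyro-plus-backup idea in Python (the proportional gain and the disagreement threshold are invented for illustration):

```python
from typing import Optional

def fused_heading(sensor_a: float, sensor_b: Optional[float],
                  max_disagreement: float = 10.0) -> float:
    """Average two heading readings (degrees). If the second sensor is
    absent or the two disagree wildly, fall back to the first alone -
    the 'safety net' behavior described above."""
    if sensor_b is None or abs(sensor_a - sensor_b) > max_disagreement:
        return sensor_a
    return (sensor_a + sensor_b) / 2.0

def straight_drive_steer(heading_deg: float, target_deg: float = 0.0,
                         kp: float = 0.03) -> float:
    """Proportional steering from heading error, clamped to [-1, 1].
    (kp = 0.03 is a placeholder gain.)"""
    error = target_deg - heading_deg
    return max(-1.0, min(1.0, kp * error))
```

The averaging only helps while both sensors agree; the disagreement check is what keeps one failed sensor from dragging the average off course.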