  #14   02-01-2014, 22:30
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
faust1706 is infamous around these parts
Re: Complete Autonomous Robot

Quote:
Originally Posted by bobby5150
Is GPS legal in FRC? I don't know how accurate they are, but if it was accurate to a few feet, maybe it could be used for positioning purposes, though it'd need to be calibrated for the playing field. Another thought is overcoming defense. I've seen some pretty cool autonomous routines that hold their ground even when they get hit, like 987 last year, but wouldn't getting it back on track to the accuracy needed be very challenging?
I don't really know which comment to start replying to, so I am choosing this one. I have written two variations of a vision program that gives camera calibration, that is, x, y, z displacement with respect to an object as well as pitch, roll, and yaw (so it can either replace a gyro or check the gyro's solution to see how accurate it is). In Rebound Rumble, we could always see the target, no matter what. If my program only saw one hoop, it would calculate where the other three were. We can use the vision solution as an index to where we are on the field; we did this in Rebound Rumble too. I did not use a full pose solver, but rather reduced the equations (by keeping roll and yaw constant, with pitch calculable by another means) to a basic trig problem, which gave the same result. We were an FCS (full-court shooter). When a robot came to block us, our driver would push a button and the robot would turn until the vision solution read that we were x degrees to the right of the 3-point hoop, then lower the turret, so we were aiming at the left 2-point hoop with respect to the 3-point. We also had a button for the right 2-point, so we could choose which one to shoot at.
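The trig reduction described above can be sketched roughly like this (all names, mount angles, and dimensions here are illustrative assumptions, not the team's actual numbers): with roll and yaw held constant and a known camera pitch, the target's vertical position in the image gives range by basic trigonometry.

```python
import math

def distance_to_target(target_center_y, image_height, vertical_fov_deg,
                       camera_pitch_deg, camera_height_m, target_height_m):
    """Estimate horizontal distance to a vision target using basic trig.

    Assumes roll/yaw are fixed and camera pitch is known (as in the post).
    All parameter names and values are illustrative, not the team's code.
    """
    # Angle of the target above/below the image center, from its pixel row.
    pixels_from_center = (image_height / 2.0) - target_center_y
    angle_in_image = pixels_from_center * (vertical_fov_deg / image_height)

    # Total elevation angle = camera mount pitch + angle within the image.
    total_angle = math.radians(camera_pitch_deg + angle_in_image)

    # tan(angle) = rise / run  ->  run = rise / tan(angle).
    return (target_height_m - camera_height_m) / math.tan(total_angle)
```

For example, a camera 0.5 m off the floor tilted up 20 degrees, seeing a 2.5 m-high target centered vertically in the frame, would report a range of 2.0 / tan(20 deg), about 5.5 m.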

Moving on. The new vision programmers, headed by a new mentor, just finished a cascade training project. It uses stereo imaging to calculate distance instead of a depth camera, but it still gets the point across. Our LabVIEW programmer (yes, singular, sadly) has been working on preprogrammed maneuvers, such as a figure 8 (that one was just for fun). Others include turning slightly to the right, going around an object, then returning to the original path.
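For anyone curious how stereo distance works, it reduces to the standard disparity relation Z = f·B/d for a rectified camera pair. A minimal sketch (the focal length, baseline, and disparity values below are made-up examples, not the team's calibration):

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d.

    focal_length_px: focal length in pixels (from camera calibration)
    baseline_m:      distance between the two camera centers
    disparity_px:    horizontal pixel shift of the same feature
                     between the left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px
```

With a 700 px focal length and a 10 cm baseline, a feature that shifts 35 px between the two images sits about 2.0 m away; note that depth resolution degrades as disparity shrinks, which is why stereo rigs struggle at long range.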

Our thought process for moving consistently is an interpolation table. If A* outputs a vector of length x, then our robot will go this fast, and our vision will tell us when we have gone that far. (We are going to multithread an XU to get as many solutions as possible to ensure minimal lag.)
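An interpolation table of that sort might look like the following minimal sketch. The breakpoints here are hypothetical placeholders, not tuned values; the idea is just to map a commanded segment length to a drive speed, interpolating linearly between measured points and clamping at the ends.

```python
import bisect

# Hypothetical tuning table: (path segment length in meters, drive speed 0..1).
SPEED_TABLE = [(0.5, 0.2), (1.0, 0.4), (2.0, 0.7), (4.0, 1.0)]

def speed_for_length(length_m):
    """Linearly interpolate a drive speed from the table, clamping at the ends."""
    xs = [x for x, _ in SPEED_TABLE]
    ys = [y for _, y in SPEED_TABLE]
    if length_m <= xs[0]:
        return ys[0]
    if length_m >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, length_m)
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (length_m - x0) / (x1 - x0)
```

A 1.5 m segment, halfway between the 1.0 m and 2.0 m breakpoints, would get a speed halfway between 0.4 and 0.7; in practice the table entries would come from measured robot behavior, with vision closing the loop on how far the robot has actually traveled.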

It is all dependent on the challenge, however. I feel a great year to have done this would have been Logomotion.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."