Quote:
Originally Posted by Kevin Watson
It's not an easy problem because you'll need a very agile robot that, at the very least, will need to be four wheel drive so that you can do a turn-in-place (of course, several high school students will read this and grin because they've thought of a cool way to solve the problem that the NASA guy didn't think of -- it happens every year <grin>). If I were you, I'd push back and let the mechanism folks know that their solution for delivering a scoring piece needs to be more robust so the 'bot can approach at more oblique angles. To help visualize how you might accomplish the task, here's a link to a PDF containing a scale Visio drawing of the field: http://kevin.org/frc/frc_2007_field.pdf.
-Kevin
Our team is running into the same dilemma. However, do you really need four-wheel drive to do a turn in place? My team is using a forklift-style drive this year (two drive wheels in the front, steering wheels in the back). The engineers on my team told me that we can turn in place by just turning the steering wheels almost perpendicular to the front wheels and spinning the front wheels in opposite directions (i.e., to turn left in place, spin the left wheel backwards and the right wheel forwards). I am skeptical of this method. Will it really work?
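For what it's worth, here's a quick back-of-the-envelope check of that geometry (Python, with made-up dimensions; WHEELBASE, REAR_TRACK, etc. are placeholders for your actual chassis numbers). If the robot pivots about the midpoint of the front axle, each rear wheel has to roll tangent to a circle centered on that point, and the required steering angle comes out to less than a full 90 degrees unless the rear wheels sit right on the centerline:

Code:
import math

# All numbers here are made up -- plug in your robot's real dimensions (meters).
WHEELBASE = 0.70     # front drive axle to rear steering wheels
FRONT_TRACK = 0.60   # spacing between the two front drive wheels
REAR_TRACK = 0.55    # spacing between the two rear steering wheels

def rear_steer_angle_deg(wheelbase, rear_track):
    """Steering angle (from straight ahead) each rear wheel needs so it rolls
    tangent to a circle centered on the midpoint of the front axle -- i.e. no
    scrubbing while the front wheels counter-rotate to spin the robot in place."""
    half_track = rear_track / 2.0
    # Each rear wheel sits `wheelbase` behind and `half_track` to the side of the
    # pivot point; the tangent direction makes this angle with straight ahead.
    return math.degrees(math.atan2(wheelbase, half_track))

def front_wheel_surface_speed(front_track, spin_rate_deg_per_s):
    """Surface speed each front wheel needs (one forward, one reversed) to spin
    the chassis about the front-axle midpoint at the given rate."""
    return math.radians(spin_rate_deg_per_s) * (front_track / 2.0)

print("rear steer angle: %.1f deg from straight ahead"
      % rear_steer_angle_deg(WHEELBASE, REAR_TRACK))
print("front wheel speed for 90 deg/s spin: %.2f m/s"
      % front_wheel_surface_speed(FRONT_TRACK, 90.0))

With those example numbers the rear wheels want roughly 68-69 degrees, not a full 90, and the left and right wheels should toe in mirror-image directions about the pivot. Also note the robot spins about its front axle rather than its center, so it sweeps a wider footprint than a true four-wheel turn in place.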
But also, I have an idea for determining the orientation of the rack/vision target from the camera data, and I would like to know whether it is feasible. It draws on the fact that the blob's apparent size depends on the angle you're approaching the target from: the blob will be "thinner" if you're approaching from an angle and wider if you're approaching head on. Do you think it would be possible to determine the angle of the rack from this information plus the distance?
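To put rough numbers on that idea, here's a sketch of the math under the assumption that the target foreshortens like a flat surface, so its apparent width shrinks with the cosine of the viewing angle. TARGET_WIDTH_M, FOCAL_LENGTH_PX, and approach_angle_deg are all made-up names and numbers for illustration, not actual field or camera specs:

Code:
import math

# Assumed numbers -- placeholders, not actual 2007 field or camera specs.
TARGET_WIDTH_M = 0.30        # true width of the target feature, meters
FOCAL_LENGTH_PX = 400.0      # camera focal length expressed in pixels

def approach_angle_deg(blob_width_px, distance_m,
                       target_width_m=TARGET_WIDTH_M,
                       focal_px=FOCAL_LENGTH_PX):
    """Estimate how far off head-on we are, assuming the target is flat and its
    apparent width shrinks with the cosine of the viewing angle.

    blob_width_px -- measured blob width from the camera, in pixels
    distance_m    -- range to the target from some other measurement
                     (blob height, dead reckoning, etc.)
    Returns the unsigned angle in degrees (width alone can't tell left from right).
    """
    # Width the blob would have if we were exactly head-on at this distance.
    head_on_width_px = focal_px * target_width_m / distance_m
    ratio = blob_width_px / head_on_width_px
    ratio = max(-1.0, min(1.0, ratio))   # clamp measurement noise
    return math.degrees(math.acos(ratio))

# Example: a 70 px wide blob seen from 1.5 m, where head-on would give 80 px.
print(approach_angle_deg(70, 1.5))   # roughly 29 degrees off head-on

Two caveats: the cosine curve is very flat near head-on, so small angles will be buried in blob-measurement noise, and the width alone can't tell you whether you're off to the left or the right. If the real target doesn't foreshorten like a flat panel, the relationship would have to be calibrated empirically.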
It seems that our robot will only be able to score (feasibly) head on.