Re: Should We Program Autonomous For the Y?
We used line tracking; it wasn't too difficult to program the Y autonomous code after getting the straight line to work. Just add a state that recognizes the Y (true-false-true).
If you are using accelerometers, gyros, and/or encoders, you could just have a half-diagonal routine that will score no matter where you start, and then work on trying to get the double ubertube.
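For anyone curious what the true-false-true idea looks like in code, here is a minimal sketch assuming three digital line sensors (left, middle, right) that read True over the tape; the read_sensors() and drive() helpers are hypothetical placeholders, not any particular framework's API:

```python
# Minimal sketch of the Y-detection state machine described above.
# Assumes three digital line sensors (True = tape seen); read_sensors()
# and drive() are hypothetical placeholders for your own I/O code.

FOLLOWING = "following"   # tracking the single straight line
AT_FORK = "at_fork"       # the Y has been detected
ON_BRANCH = "on_branch"   # following one arm of the Y

def autonomous_step(state, read_sensors, drive):
    left, middle, right = read_sensors()

    if state == FOLLOWING:
        if left and not middle and right:
            # true-false-true: the tape has split, so we are at the fork
            return AT_FORK
        # simple bang-bang steering while on the straight section
        if left and not right:
            drive(turn=-0.2)
        elif right and not left:
            drive(turn=+0.2)
        else:
            drive(turn=0.0)
        return FOLLOWING

    if state == AT_FORK:
        # commit to one arm of the Y (here, steer toward the left branch)
        drive(turn=-0.3)
        return ON_BRANCH

    # ON_BRANCH: back to ordinary line following on the chosen arm
    if middle:
        drive(turn=0.0)
    elif left:
        drive(turn=-0.2)
    elif right:
        drive(turn=+0.2)
    return ON_BRANCH
```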
Re: Should We Program Autonomous For the Y?
Quote:
We chose not to. It's not the encoders. We run closed-loop control all the time (controlling wheel velocity, plus a few other calculations to control machine dynamics while driving) and have no encoder issues at full 13 ft/s. If anything, the camera would be a much more limiting sensor: there is a lot of system lag in general while using it (everything must stop while processing images because of the processor overload), and if the image is processed on the laptop to speed up the calculations, then there is network lag returning the results (not much, but when trying to run with ±1 degree of rotational error at 13 feet per second, everything is an issue).

You know exactly where the tubes are. You know exactly where the pegs are. The largest sources of error are human error on lineup (which generally isn't much) and error compounding from multiple actions (each turn, each drive, etc.). The easiest way to do more complex tasks is to reduce the error in each step, then rely on the remaining error being constant. For example, if I tell my turn command to turn 30 degrees, it might turn 28; but if I know that it will always turn 28 degrees instead of 30, then I can tell it to turn 32 and it will do a perfect 30-degree turn.

We already had the encoders and were using them for closed-loop speed control. We already had the gyro; we were attempting to use it for a few special pieces of software (push-through and drive straight) but later abandoned it in favor of a purely encoder-based solution. So the autonomous code relies only on sensors that already existed, adding no weight to the machine. When you weigh in at 119.6 pounds and have another 2 pounds of stuff you want to add, that 1 pound of sensors (2 cameras + wiring) is a lot.
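The "ask for 32 to get 30" trick boils down to padding the setpoint by the measured, repeatable shortfall. A minimal sketch, assuming a gyro-based turn with hypothetical gyro_angle() and drive_turn() helpers and illustrative gain/shortfall values:

```python
# Sketch of the constant-error compensation described above: measure the
# repeatable shortfall once, then pad every turn setpoint by it.
# gyro_angle() and drive_turn() are hypothetical placeholders.

TURN_SHORTFALL_DEG = 2.0   # commanded 30 degrees, robot reliably does 28

def turn_compensated(target_deg, gyro_angle, drive_turn, k_p=0.02, tol=0.5):
    """Turn by target_deg, padding the setpoint by the known shortfall."""
    pad = TURN_SHORTFALL_DEG if target_deg > 0 else -TURN_SHORTFALL_DEG
    setpoint = gyro_angle() + target_deg + pad

    error = setpoint - gyro_angle()
    while abs(error) > tol:
        # simple proportional loop on the gyro heading, clamped to +/-0.5
        drive_turn(max(-0.5, min(0.5, k_p * error)))
        error = setpoint - gyro_angle()
    drive_turn(0.0)
```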
Re: Should We Program Autonomous For the Y?
At the WPI regional, when there were robots on both the Y and the straight lines, they tended to interfere with each other so that neither could get their ubertubes on.
Re: Should We Program Autonomous For the Y?
Quote:
Regarding the camera: yes, if you used the camera as the lead sensor, the lag would be prohibitive. However, there are ways around this. With strafe capability, that slight adjustment to center on the peg can make all the difference between making and missing that cap. I think that to do a consistent 3-tube cap, the camera would have to be utilized, and utilized correctly. For example, as you approach the pegs for the second tube, even having one frame from the camera could help get that to 75%.

Again, my opinions were based on the 3-tube autonomous, where a 2-degree error across 5 turns, plus misalignment, would be a prohibiting factor in reaching 75% accuracy with 3 tubes, even ignoring the 15-second limit. I have never used encoders myself, and after seeing your success I am very curious about them.

Also, don't get me wrong... the double cap is amazing, and it's awesome that it can be done simply with encoders and a gyro. 75% has always been my metric for success... unfortunately, I failed this year. I'm glad to see you guys succeeded!
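To put rough numbers on the error-compounding concern, here is a quick back-of-the-envelope check of how much lateral miss an uncorrected heading error produces over an approach drive; the distances used are illustrative assumptions, not field measurements:

```python
# Back-of-the-envelope check of the compounding-error concern above:
# how far off the peg an uncorrected heading error leaves you at the end
# of an approach drive. Distances and error values are illustrative.

import math

def lateral_miss(heading_error_deg, drive_distance_in):
    """Sideways offset after driving straight with a heading error."""
    return drive_distance_in * math.sin(math.radians(heading_error_deg))

print(round(lateral_miss(2.0, 120.0), 1))   # one 2-degree error over 120 in: ~4.2 in
print(round(lateral_miss(10.0, 120.0), 1))  # five uncorrected 2-degree errors: ~20.8 in
```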
Re: Should We Program Autonomous For the Y?
Quote:
Re: Should We Program Autonomous For the Y?
At the KC Regional, very few (if any) robots did the Y. I'd go for it if you can. Our team's experimental Y support should be 100% by the time we go to Midwest. (We just need special conditions for the joints =/ )
~David
Re: Should We Program Autonomous For the Y?
My philosophy regarding autonomous: don't do it if you are not doing real-time calculations. I personally do not consider those "drive forward 10 feet and score" routines real autonomy. They are pre-written instructions; where is the autonomy in that? Consider getting a job along with written step-by-step instructions on how to do that job. Would you consider that an autonomous action? No, I would not.
So autonomous should be a challenge for the programmers: breathe some life into the robot, allowing it to make choices on its own. The ideal autonomous mode should use cameras. I plan on using a camera, the photosensors, and possibly the encoders.
Re: Should We Program Autonomous For the Y?
Quote:
Re: Should We Program Autonomous For the Y?
Quote:
Re: Should We Program Autonomous For the Y?
Quote:
In my code:

- The autonomous script would have a command "DRIVE_STRAIGHT 120 6" to drive 120 inches at 6 ft/sec.
- The beescript interpreter would find and call the Drive Straight function (drive_straight.vi).
- The Drive Straight function would:
  - Reset the encoders
  - Calculate the remaining distance to the target plus an overshoot constant
  - Determine the desired average output speed using a proportional controller
  - Limit it to the 6 ft/sec
  - Determine drift using the differential of the two encoder distances
  - Determine how much to compensate using another proportional controller
  - Add the two, take the remainder of the larger number, and subtract it from the smaller (so the difference in the motor outputs is always the number specified by the second P controller)
  - Ramp the output to prevent lurching (which can cause twisting and inaccuracy)
  - Feed the speed to the drive thread, which actually drives (closed loop is turned off during auto since auto does its own calculations)

The score would also include:

- Setting the elevator state to be fed to the state machine
- The state machine would pick up the state request (back in its thread) and look up the position
- The position would be run through the state machine, which would check whether it is trying to cross over backwards or handle a special sequence. Since it isn't, I'll skip that.
- The resulting position would be checked against the active tube color, and if it's a circle (or if it's an ubertube and we're pretending ubertubes are circles) it will add the white-tube bump for the current score state
- The resulting position will be fed through the P controller to determine the power of the elevator and wrist
- The gain scheduler will modify the gains above based on several parameters (such as the sine of the wrist angle)
- The resulting power will be fed through the limits VI, which will bring it into range
- The resulting power will be fed to the motor if the state machine has been initialized (set) since last exiting disabled (safety feature: no movement after exiting disabled until commanded)

Lots and lots of math.
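A rough sketch of the Drive Straight portion of those steps, written as a plain iterative control loop instead of the actual drive_straight.vi; the gains, the overshoot constant, and the ramp step are illustrative guesses, not the team's real values:

```python
# Rough sketch of the Drive Straight steps listed above. All constants are
# illustrative; wire step() to your own encoder reads and motor outputs.

class DriveStraight:
    KP_DISTANCE = 0.8    # ft/s of command per foot of remaining distance
    KP_DRIFT = 2.0       # differential command per foot of encoder mismatch
    OVERSHOOT_FT = 0.1   # small pad so the loop doesn't stall short of target
    RAMP_STEP = 0.05     # max increase in commanded speed per iteration

    def __init__(self, target_ft, max_speed_ft_s):
        self.target_ft = target_ft
        self.max_speed = max_speed_ft_s
        self.prev_speed = 0.0        # encoders assumed reset to zero here

    def step(self, left_ft, right_ft):
        """One control iteration. Returns (left_cmd, right_cmd, done)."""
        traveled = (left_ft + right_ft) / 2.0
        remaining = (self.target_ft + self.OVERSHOOT_FT) - traveled
        if remaining <= 0:
            return 0.0, 0.0, True

        # P controller on remaining distance, limited to the commanded speed
        speed = min(self.KP_DISTANCE * remaining, self.max_speed)
        # ramp the output to prevent lurching (and the twisting it causes)
        speed = min(speed, self.prev_speed + self.RAMP_STEP)
        self.prev_speed = speed

        # second P controller on encoder drift; pull the correction out of
        # the faster side so the left/right difference equals the correction
        correction = self.KP_DRIFT * (left_ft - right_ft)
        if correction >= 0:
            return speed - correction, speed, False
        return speed, speed + correction, False
```

The script command "DRIVE_STRAIGHT 120 6" would then map to something like DriveStraight(10.0, 6.0) being stepped at the loop rate with the two encoder distances, with the returned speeds handed to whatever actually drives the motors.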
Re: Should We Program Autonomous For the Y?
Quote:
I didn't really understand this part...
Re: Should We Program Autonomous For the Y?
They might be using a camera that checks for a color range to determine which tube it is.
Re: Should We Program Autonomous For the Y?
No, it's set in software based on button input. (We did attempt to use an NXT color sensor, but the claw didn't have any good place to put it.)
I have an auto command to set the tube color.
Re: Should We Program Autonomous For the Y?
Quote:
Quote:
Also, the frame rate wouldn't be a problem.
Re: Should We Program Autonomous For the Y?
If you tell it you have a white tube, it will go to the correct height. It doesn't know what tube it has; you tell it (there's a button to set white, and setting a state will set it back to red). Red and blue are the same; white gets a height bump.
During auto, the set-state command sets the state without setting the color, and another function sets the color. The code does exist to read and use the NXT color sensor, but we decided it didn't work well enough to use (mostly because there is no place in the claw where the tube is an exact distance from the sensor, and the sensor relied heavily on scanning distance). Another issue was I2C cable length, as the run up to the elevator is something like 10 feet from the digital sidecar.
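A small sketch of how that color handling might look; the peg heights and the white-tube bump below are made-up numbers, and the class/method names are hypothetical. The point is just that red and blue share setpoints, white adds a bump, and the flag comes from a button (or a separate auto command) rather than a sensor:

```python
# Sketch of the tube-color handling described above. All heights are
# illustrative assumptions, not actual field or robot dimensions.

PEG_HEIGHTS_IN = {"low": 30.0, "mid": 67.0, "high": 104.0}  # assumed values
WHITE_TUBE_BUMP_IN = 4.0                                     # assumed value

class ElevatorTarget:
    def __init__(self):
        self.peg = "low"
        self.tube_is_white = False       # default is red/blue behaviour

    def set_state(self, peg, set_color_red=True):
        """Teleop-style set state: also resets the color to red.
        Auto calls this with set_color_red=False and sets color separately."""
        if set_color_red:
            self.tube_is_white = False
        self.peg = peg

    def set_white(self, is_white):
        """Called from the driver button, or from a separate auto command."""
        self.tube_is_white = is_white

    def setpoint(self):
        height = PEG_HEIGHTS_IN[self.peg]
        if self.tube_is_white:
            height += WHITE_TUBE_BUMP_IN  # extra height bump for a white tube
        return height
```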