Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   Should We Program Autonomous For the Y? (http://www.chiefdelphi.com/forums/showthread.php?t=93427)

George Nishimura 13-03-2011 12:21

Re: Should We Program Autonomous For the Y?
 
We used line tracking; it wasn't too difficult to program the Y autonomous code after getting the straight line to work. Just add a state that recognizes the Y (the sensors read true-false-true where the line forks).

If you are using accelerometers, gyros, and/or encoders, you could just run a half-diagonal routine that will score no matter where you start, and then work on trying to get the double ubertube.
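
For reference, the extra state is something like this (a rough Java sketch with made-up sensor names, not our actual code):

Code:

// Minimal sketch of the Y-detection state, assuming three downward-facing
// light sensors (left, middle, right) that each read true over the tape.
public class YLineTracker {
    enum State { FOLLOW_LINE, AT_FORK, ON_BRANCH }

    private State state = State.FOLLOW_LINE;

    // Called every control loop with the current sensor readings.
    public void update(boolean left, boolean middle, boolean right) {
        switch (state) {
            case FOLLOW_LINE:
                // On the straight section only the middle sensor sees tape.
                // Where the line forks, the outer sensors see tape and the
                // middle sees carpet: the true-false-true signature.
                if (left && !middle && right) {
                    state = State.AT_FORK;
                }
                break;
            case AT_FORK:
                // Commit to one branch (the left one here) and follow it.
                if (left && !right) {
                    state = State.ON_BRANCH;
                }
                break;
            case ON_BRANCH:
                // Steer to hold the branch line until reaching the peg.
                break;
        }
    }
}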

apalrd 13-03-2011 14:44

Re: Should We Program Autonomous For the Y?
 
Quote:

Originally Posted by lineskier (Post 1037803)
Also 33 relies on encoders, which limits how fast they can accomplish it. I think to accomplish a 3 tube autonomous reliably you would have to incorporate a different strategy. The way I foresaw it was 2 cameras... one front and back.
front looks for pegs, back looks for tubes. With 2 cameras and a rangefinder it is definitely possible... But you would need a sub 3 second cap... which is insanely fast. (separate threads for each mechanism... good 2 speed trans... and a crazy good claw.)

We could run faster (we are running our first tube at 3.5 ft/sec, mostly limited by the elevator and our fear of overrunning the elevator).
We chose not to.
It's not the encoders. We run closed-loop control all the time (controlling wheel velocity, and a few other calculations to control machine dynamics while driving) and have no encoder issues at a full 13 ft/sec.

If anything, the camera would be a much more limiting sensor, as there is a lot of system lag in general while using it (everything must stop while processing images because of the processor overload), and if the image is processed on the laptop to increase the speed of calculations, then there is network lag returning the results (not much, but when trying to run with ±1 degree of rotational error at 13 feet per second, everything is an issue).

You know exactly where the tubes are. You know exactly where the pegs are. The largest source of error is human error on lineup (which generally isn't much) and error compounding from multiple actions (each turn, each drive, etc). The easiest way to do more complex tasks is to reduce the error in each step, then rely on the remaining error to be constant (for example, if I tell my turn command to turn 30 degrees, it might turn 28, but if I know that it will always turn 28 degrees instead of 30, then I can tell it to turn 32 and it will do a perfect 30 degree turn.)
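
In code form the trick is trivial (illustrative Java; our real code is LabVIEW):

Code:

// Sketch of the constant-error idea: measure the bias once, then add it
// back into every command. Names are illustrative, not our actual code.
public class TurnCompensator {
    private final double biasDegrees; // commanded minus actual, measured once

    public TurnCompensator(double biasDegrees) {
        this.biasDegrees = biasDegrees;
    }

    // Angle to command so the robot actually turns the desired amount.
    public double compensate(double desiredDegrees) {
        return desiredDegrees + biasDegrees;
    }
}

// The robot turns 28 when told 30, so the bias is 2 degrees:
// new TurnCompensator(2.0).compensate(30.0) returns 32.0.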

We already had the encoders, and were using them for closed-loop speed control. We already had the gyro; we were attempting to use it for a few special pieces of software (push-through and drive straight) but later abandoned its use in favor of a purely encoder-based solution. The autonomous code relies only on sensors which already existed, adding no weight to the machine. When you weigh in at 119.6 pounds and have another 2 pounds of stuff you want to add, that 1 pound of sensors (2 cameras + wiring) is a lot.

archaopteryx 13-03-2011 15:25

Re: Should We Program Autonomous For the Y?
 
At the WPI regional, when there were robots on both the Y and the straight lines, they tended to interfere with each other so that neither could get their ubertubes on.

mwtidd 13-03-2011 15:32

Re: Should We Program Autonomous For the Y?
 
Quote:

Originally Posted by apalrd (Post 1038591)
We could run faster (we are running our first tube at 3.5 ft/sec, mostly limited by the elevator and our fear of overrunning the elevator).
We chose not to.
It's not the encoders. We run closed-loop control all the time (controlling wheel velocity, and a few other calculations to control machine dynamics while driving) and have no encoder issues at a full 13 ft/sec.

If anything, the camera would be a much more limiting sensor, as there is a lot of system lag in general while using it (everything must stop while processing images because of the processor overload), and if the image is processed on the laptop to increase the speed of calculations, then there is network lag returning the results (not much, but when trying to run with ±1 degree of rotational error at 13 feet per second, everything is an issue).

You know exactly where the tubes are. You know exactly where the pegs are. The largest source of error is human error on lineup (which generally isn't much) and error compounding from multiple actions (each turn, each drive, etc). The easiest way to do more complex tasks is to reduce the error in each step, then rely on the remaining error to be constant (for example, if I tell my turn command to turn 30 degrees, it might turn 28, but if I know that it will always turn 28 degrees instead of 30, then I can tell it to turn 32 and it will do a perfect 30 degree turn.)

We already had the encoders, and were using them for closed-loop speed control. We already had the gyro; we were attempting to use it for a few special pieces of software (push-through and drive straight) but later abandoned its use in favor of a purely encoder-based solution. The autonomous code relies only on sensors which already existed, adding no weight to the machine. When you weigh in at 119.6 pounds and have another 2 pounds of stuff you want to add, that 1 pound of sensors (2 cameras + wiring) is a lot.

Thanks for your insights!

Regarding the camera, yes, if you used the camera as the lead sensor, the lag would be prohibitive. However, there are ways around this. With strafe capability, that slight adjustment to center on the peg can make all the difference in making or missing a cap. I think to do a consistent 3-tube cap the camera would have to be utilized, and utilized correctly. For example, as you approach the pegs for the second tube, even a single frame from the camera could help get that to 75%.

Again, my opinions were based on the 3-tube autonomous, where a 2-degree error across 5 turns, plus initial misalignment, would be a prohibiting factor in reaching 75% accuracy with 3 tubes, even ignoring the 15-second limit.
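
Back-of-the-envelope on why that compounds so badly (my own rough numbers, nothing measured):

Code:

// Lateral miss caused by accumulated heading error (rough numbers).
public class HeadingErrorDemo {
    public static void main(String[] args) {
        double errorPerTurnDeg = 2.0; // per-turn heading error
        double turns = 5.0;           // turns in a 3-tube autonomous
        double legFeet = 13.0;        // a typical drive toward the pegs

        // Worst case: the errors all stack in the same direction.
        double totalDeg = errorPerTurnDeg * turns; // 10 degrees
        double missFeet = legFeet * Math.sin(Math.toRadians(totalDeg));
        System.out.printf("Miss: %.2f ft (%.1f in)%n", missFeet, missFeet * 12);
        // Prints about 2.26 ft (27.1 in): far wider than a peg.
    }
}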

I have never used encoders myself, and after seeing your success I am very curious about them. Also, don't get me wrong... the double cap is amazing, and it's awesome that it can be done simply with encoders and a gyro.

75% has always been my metric for success... unfortunately I failed this year. I'm glad to see you guys succeeded!

George Nishimura 13-03-2011 16:37

Re: Should We Program Autonomous For the Y?
 
Quote:

Originally Posted by archaopteryx (Post 1038610)
At the WPI regional, when there were robots on both the Y and the straight lines, they tended to interfere with each other so that neither could get their ubertubes on.

Happened to us twice. Make sure you plan your autonomous strategy well. Also, if you do straight line, back up quickly after placing the tube.

DtD 13-03-2011 19:37

Re: Should We Program Autonomous For the Y?
 
At the KC Regional, very few (if any) robots did the Y. I'd go for it if you can. Our team's experimental Y support should be 100% by the time we go to Midwest. (We just need special conditions for the joints =/ )

~David

davidthefat 13-03-2011 19:49

Re: Should We Program Autonomous For the Y?
 
My philosophy regarding autonomy: do not do it if you are not doing real-time calculations. I personally do not consider those "drive forward 10 feet and score" routines real autonomy. They are pre-written instructions; where is the autonomy in that? Consider getting a job and being handed step-by-step instructions on how to do it. Would you consider that an autonomous action? No, I would not.

So the autonomy should be a challenge for the programmers to breathe some life into the robot, allowing it to make choices on its own.

The ideal autonomy should use cameras; I plan on using a camera, the photosensors, and possibly the encoders.

PatJameson 13-03-2011 20:28

Re: Should We Program Autonomous For the Y?
 
Quote:

Originally Posted by davidthefat (Post 1038766)
So the autonomy should be a challenge for the programmers to breathe some life into the robot, allowing it to make choices on its own.

The ideal autonomy should use cameras; I plan on using a camera, the photosensors, and possibly the encoders.

I prefer to follow KISS. If it can be done simply and consistently, it should be done as such. A camera is neither simple nor as reliable as other means of completing autonomous.

Alan Anderson 13-03-2011 20:36

Re: Should We Program Autonomous For the Y?
 
Quote:

Originally Posted by davidthefat (Post 1038766)
The ideal autonomy...

The ideal autonomy is one which reliably scores the most points for your alliance. Whether it uses time-based motor control, encoders for position sensing, clockwork cogs, ultrasonic rangefinders, cameras, or telekinesis is not really important.

apalrd 13-03-2011 21:57

Re: Should We Program Autonomous For the Y?
 
Quote:

Originally Posted by davidthefat (Post 1038766)
...do not do it if you are not doing real-time calculations. ... "drive forward 10 feet and score" routines real autonomy. They are pre-written instructions...

How much math is there in a "drive forward 10 feet and score"?

In my code:
The autonomous script would have a command "DRIVE_STRAIGHT 120 6" to drive 120 inches at 6 ft/sec.
The beescript interpreter would find and call the Drive Straight function (drive_straight.vi)
The Drive Straight function would:
Reset the encoders
Calculate the remaining distance to the target plus an overshoot constant
Determine the desired average output speed using a proportional controller
Limit it to the 6 ft/sec
Determine drift by using the differential of the two encoder distances
Determine how much to compensate using another proportional controller
Add the two; if the larger output exceeds full range, clamp it and subtract the same overflow from the smaller (so the difference between the two motor outputs is always the number specified by the second P controller)
Ramp the output to prevent lurching (which can cause twisting and inaccuracy)
Feed the speeds to the drive thread, which actually drives the motors (closed loop is turned off during auto since auto does its own calculations). A rough sketch of these steps follows.
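
In rough Java form (illustrative names and gains; the real code is LabVIEW VIs):

Code:

// Sketch of the drive-straight loop described above. Gains, the overshoot
// constant, and the ramp step are all placeholders, not our real numbers.
public class DriveStraight {
    static final double KP_DIST = 0.05;    // distance P gain
    static final double KP_DRIFT = 0.02;   // drift P gain
    static final double OVERSHOOT = 2.0;   // overshoot constant, inches
    static final double RAMP_STEP = 0.04;  // max output change per loop
    static final double TOP_SPEED = 13.0;  // ft/sec at full output

    private double lastLeft = 0.0, lastRight = 0.0;

    // leftIn/rightIn: encoder distances (inches) since the reset at the
    // start of the command. Returns {left, right} outputs in -1..1.
    public double[] update(double targetInches, double maxFtPerSec,
                           double leftIn, double rightIn) {
        // Remaining distance to the target plus the overshoot constant.
        double traveled = (leftIn + rightIn) / 2.0;
        double remaining = targetInches + OVERSHOOT - traveled;

        // Desired average speed from a P controller, limited to the max.
        double speed = Math.min(KP_DIST * remaining, maxFtPerSec);
        double base = speed / TOP_SPEED;

        // Drift from the differential of the two encoder distances,
        // through a second P controller.
        double correction = KP_DRIFT * (leftIn - rightIn);
        double left = base - correction;
        double right = base + correction;

        // If one side saturates, shift both by the overflow so the
        // left/right difference stays what the second P controller set.
        double high = Math.max(left, right), low = Math.min(left, right);
        if (high > 1.0) { left -= high - 1.0; right -= high - 1.0; }
        else if (low < -1.0) { left += -1.0 - low; right += -1.0 - low; }

        // Ramp the outputs to prevent lurching.
        left = lastLeft + clamp(left - lastLeft, -RAMP_STEP, RAMP_STEP);
        right = lastRight + clamp(right - lastRight, -RAMP_STEP, RAMP_STEP);
        lastLeft = left;
        lastRight = right;
        return new double[] { left, right };
    }

    private static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}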

The score would also include:
Setting the elevator state to be fed to the state machine
The state machine would pick up the state request (back in its thread) and look up the position
The position would be run through the state machine, which would check whether it is trying to cross over backwards or handle a special sequence. Since it isn't, I'll skip that.
The resulting position would be checked against the active tube color, and if it's a circle (or if it's an ubertube and we're pretending ubertubes are circles) it will add the white tube bump for the current score state
The resulting position will be fed through the P controller to determine the power of the elevator and wrist
The gain scheduler will modify the gains above based on several parameters (such as the sine of the angle of the wrist).
The resulting power will be fed through the limits VI which will bring it into range
The resulting power will be fed to the motor if the state machine has been initialized (set) since last exiting disabled (safety feature - no movement after exiting disabled until commanded)
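
The elevator step, in the same rough Java (made-up gains; the exact shape of the gain schedule is illustrative, only the sine-of-wrist-angle input is as described above):

Code:

// Sketch of the elevator step: P controller, gain scheduling on the wrist
// angle, limiting, and the init-after-disable safety gate.
public class ElevatorControl {
    static final double KP_ELEVATOR = 0.8;     // base P gain (placeholder)
    static final double WHITE_TUBE_BUMP = 2.0; // white tube bump, inches

    private boolean initialized = false; // set by a state command post-disable

    public void onStateCommanded() { initialized = true; }
    public void onDisabled() { initialized = false; }

    // Returns elevator power in -1..1, or 0 if not yet initialized.
    public double update(double targetHeight, double currentHeight,
                         boolean whiteTube, double wristAngleRad) {
        // Circles (and ubertubes treated as circles) get the height bump.
        if (whiteTube) targetHeight += WHITE_TUBE_BUMP;

        // Gain scheduling: modify the gain based on the sine of the wrist
        // angle, since the wrist load changes with its angle.
        double gain = KP_ELEVATOR
                * (0.5 + 0.5 * Math.abs(Math.sin(wristAngleRad)));

        // P controller, then limit the result into range.
        double power = clamp(gain * (targetHeight - currentHeight), -1.0, 1.0);

        // Safety: no movement after exiting disabled until commanded.
        return initialized ? power : 0.0;
    }

    private static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}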

Lots and lots of math.

mwtidd 13-03-2011 22:09

Re: Should We Program Autonomous For the Y?
 
Quote:

Originally Posted by apalrd (Post 1038888)
The resulting position would be checked against the active tube color, and if it's a circle (or if it's an ubertube and we're pretending ubertubes are circles) it will add the white tube bump for the current score state

Can your robot actually tell the color of the tube it's holding and keep track of the score?

I didn't really understand this part...

MagiChau 13-03-2011 22:13

Re: Should We Program Autonomous For the Y?
 
They might be using a camera that checks for a color range to determine which tube it is.
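
Something like this rough sketch (made-up thresholds, and assuming you can get the image as packed RGB pixels):

Code:

// Classify a tube by counting pixels inside per-color RGB ranges.
public class TubeColor {
    public enum Color { RED, WHITE, BLUE, UNKNOWN }

    public static Color classify(int[] rgbPixels) {
        int red = 0, white = 0, blue = 0;
        for (int p : rgbPixels) {
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            if (r > 180 && g > 180 && b > 180) white++;       // bright pixels
            else if (r > 150 && g < 100 && b < 100) red++;    // strongly red
            else if (b > 150 && r < 100 && g < 100) blue++;   // strongly blue
        }
        // Require a minimum fraction of hits before trusting the answer.
        int max = Math.max(red, Math.max(white, blue));
        if (max < rgbPixels.length / 10) return Color.UNKNOWN;
        return max == white ? Color.WHITE
                : (max == red ? Color.RED : Color.BLUE);
    }
}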

apalrd 13-03-2011 22:14

Re: Should We Program Autonomous For the Y?
 
No, it's set in software based on button input (we did actually attempt to use an NXT color sensor, but the claw didn't have any good place to put it).

I have an auto command to set the tube color.

mwtidd 13-03-2011 22:14

Re: Should We Program Autonomous For the Y?
 
Quote:

Originally Posted by MagiChau (Post 1038901)
They might be using a camera that checks for a color range to determine which tube it is.

Yeah, I was just curious if the elevator knows exactly what height to go to based on the tube :) That would be pretty killer ;)


Quote:

Originally Posted by apalrd (Post 1038902)
No, it's set in software based on button input. (but we did actually attempt to use an NXT color sensor, but the claw didn't have any good place to put it).

I have an auto command to set the tube color.

Oh okay, it would be pretty sweet to use the camera this way.
Also, the frame rate wouldn't be a problem.

apalrd 13-03-2011 22:24

Re: Should We Program Autonomous For the Y?
 
If you tell it you have a white tube, it will go to the correct height. It doesn't know what tube it has, but asks you (there's a button to set white; when you set a state it will set red). Red and blue are the same; white has a height bump.

During auto, the set state command sets the state without setting color, and another function sets color.

The code does exist to read and use the NXT color sensor, but we decided it didn't work well enough to use it (mostly because there is no place in the claw where the tube is an exact distance from the sensor, and the sensor relied heavily on scanning distance). Another issue was I2C cable length, as the run up to the elevator is something like 10 feet from the digital sidecar.

