#1
Re: Team 254 2011 FRC Code
I perused the code over lunch and noticed something that confused me. In your ports file you've assigned both the "B" channel of an encoder and a limit switch to digital I/O 10. To make this work, was there a specific order the wires had to go into the DIN for I/O 10?
We rely heavily on limit switches to act as redundant safeties, both during programming and in case of sensor failures, so being able to pair limit switches with encoders in this manner could save us from having to make I/O port tradeoffs.
#2
Re: Team 254 2011 FRC Code
The port definitions for the top and bottom roller encoders are probably obsolete. It doesn't look like either of those encoders is actually used in the code, so there's no real conflict.
#3
Re: Team 254 2011 FRC Code
Correct. There are no encoders on the rollers. That sounds like a very old piece of code that didn't get removed as the code evolved...
#4
Re: Team 254 2011 FRC Code
Austin,
Could you or one of your programmers explain the rationale behind the design of your victor_linearize function? You average 5th and 7th order polynomials together, but it isn't obvious why you do this. Thanks!
This question was brought on by this thread: http://www.chiefdelphi.com/forums/sh...02#post1085502
#5
Re: Team 254 2011 FRC Code
Here is the data and the three polynomials that are in the function. [attached plot: measured data points with the three fitted curves]

I generated the red data points by putting the robot up on blocks and applying PWM to the motors. I then read out the wheel speed at steady state for various PWM values. From there, I tried to fit the data. I started with a 5th order odd polynomial (the + and - response should be the same, which means that f(x) = -f(-x)), shown in green. That wasn't a great fit, so I tried a 7th order polynomial, shown in blue. Neither of them was a great fit: they are not monotonically increasing functions, and when you drive the robot with them the throttle doesn't feel like a consistent function, which feels weird (it has been a while, and feelings don't translate to words so well).

From there, on a whim, I tried averaging the two functions. This actually turned out quite well. But when I put it on the robot, it felt like the power was reduced too much at low speeds. To compensate, I added in a bit of y = x, which boosts the power applied at low speeds and gives the equation shown in the legend for the pink plot. This is what is in use today in the victor_linearize function.
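For reference, the odd-symmetry constraint mentioned above, plus a fixed full-scale endpoint, is what pins down the leading coefficient. The 116/125 value below is read off the coefficient expressions in the victor_linearize code quoted later in the thread; treating it as a full-scale endpoint constraint is my inference:

```latex
% Odd symmetry f(-x) = -f(x) leaves only odd powers:
f_5(x) = a_1 x^5 + c_1 x^3 + e_1 x
% Fixing the full-scale output f_5(1) = \tfrac{116}{125} then
% determines a_1 from the two fitted coefficients c_1 and e_1:
a_1 + c_1 + e_1 = \tfrac{116}{125}
\quad\Longrightarrow\quad
a_1 = -\,\frac{125\,e_1 + 125\,c_1 - 116}{125}
```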
#6
Re: Team 254 2011 FRC Code
Is there a reason you chose to use a polynomial function instead of a piecewise linear function?
From your empirical data it looks like no more than 5 linear sections (see red lines in the figure below) would be needed to characterize the curve quite well. This would significantly cut down on the number of computations needed to return a result, and in this case it looks like it might fit the input data even more accurately. [attached plot: empirical data with five linear segments overlaid in red]

Last edited by otherguy : 20-11-2011 at 20:04.
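A minimal sketch of that kind of piecewise-linear lookup, just to illustrate the idea (the breakpoints in the test are made-up placeholders, not the segments fitted to the posted data):

```cpp
#include <cassert>
#include <cmath>

// A (command -> output) point on the piecewise-linear curve.
struct Point { double x, y; };

// Piecewise-linear interpolation over a table sorted by increasing x.
// Inputs outside the table range are clamped to the end values.
double piecewise_linearize(double input, const Point* pts, int n) {
    if (input <= pts[0].x) return pts[0].y;
    if (input >= pts[n - 1].x) return pts[n - 1].y;
    for (int i = 1; i < n; ++i) {
        if (input <= pts[i].x) {
            // Interpolate between the two surrounding breakpoints.
            double t = (input - pts[i - 1].x) / (pts[i].x - pts[i - 1].x);
            return pts[i - 1].y + t * (pts[i].y - pts[i - 1].y);
        }
    }
    return pts[n - 1].y;  // unreachable with a sorted table
}
```

With around 5 segments this costs a handful of comparisons and one multiply-divide per call.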
#7
Re: Team 254 2011 FRC Code
If speed is what you need, it's hard to beat a complete lookup table (no interpolation required): http://www.chiefdelphi.com/forums/sh...1&postcount=12
#8
Re: Team 254 2011 FRC Code
The only reason I asked the question is because I looked at their code, and it seemed pretty involved for something that needs to be calculated every time you want to send an output to a motor controller. Code:
double RobotState::victor_linearize(double goal_speed)
{
const double deadband_value = 0.082;
if (goal_speed > deadband_value)
goal_speed -= deadband_value;
else if (goal_speed < -deadband_value)
goal_speed += deadband_value;
else
goal_speed = 0.0;
goal_speed = goal_speed / (1.0 - deadband_value);
double goal_speed2 = goal_speed * goal_speed;
double goal_speed3 = goal_speed2 * goal_speed;
double goal_speed4 = goal_speed3 * goal_speed;
double goal_speed5 = goal_speed4 * goal_speed;
double goal_speed6 = goal_speed5 * goal_speed;
double goal_speed7 = goal_speed6 * goal_speed;
// Constants for the 5th order polynomial
double victor_fit_e1 = 0.437239;
double victor_fit_c1 = -1.56847;
double victor_fit_a1 = (- (125.0 * victor_fit_e1 + 125.0 * victor_fit_c1 - 116.0) / 125.0);
double answer_5th_order = (victor_fit_a1 * goal_speed5
+ victor_fit_c1 * goal_speed3
+ victor_fit_e1 * goal_speed);
// Constants for the 7th order polynomial
double victor_fit_c2 = -5.46889;
double victor_fit_e2 = 2.24214;
double victor_fit_g2 = -0.042375;
double victor_fit_a2 = (- (125.0 * (victor_fit_c2 + victor_fit_e2 + victor_fit_g2) - 116.0) / 125.0);
double answer_7th_order = (victor_fit_a2 * goal_speed7
+ victor_fit_c2 * goal_speed5
+ victor_fit_e2 * goal_speed3
+ victor_fit_g2 * goal_speed);
// Average the 5th and 7th order polynomials
double answer = 0.85 * 0.5 * (answer_7th_order + answer_5th_order)
+ .15 * goal_speed * (1.0 - deadband_value);
if (answer > 0.001)
answer += deadband_value;
else if (answer < -0.001)
answer -= deadband_value;
return answer;
}
They clearly have a handle on things, so I was wondering if this approach provided them something over what I teach my kids (the piecewise linear approach described previously). Here's the code one of my kids came up with as part of a pre-season homework assignment a few weeks ago. "scale" is a 2D array of points characterizing the piecewise function. The benefit of implementing it this way is that you can modify your array of points at any time to tweak behavior, without having to modify your code. Code:
static double getInterpolatedAxis(double input) {
    // Assumes the points in scale[][] are sorted by decreasing x value.
    for (int i = 0; i < scale.length - 1; i++) {
        if (input < scale[i][0] && input > scale[i + 1][0]) {
            // Interpolate along the segment containing the input.
            double slope = (scale[i + 1][1] - scale[i][1]) / (scale[i + 1][0] - scale[i][0]);
            double intercept = (-1 * slope * scale[i][0]) + scale[i][1];
            return (slope * input) + intercept;
        }
    }
    // Input outside the table range: clamp to the nearest endpoint.
    return (input >= scale[0][0]) ? scale[0][1] : scale[scale.length - 1][1];
}
#9
Re: Team 254 2011 FRC Code
Once we had a working solution, we stopped working on it. There are more numerically efficient ways to get a similar answer, but we moved on to more important problems once we had a working answer. Now that I look at it in more detail, we could probably cut out half the compute cycles in the function without much work (bring it down to around 10 multiplies, probably even less). You can read my description above for how I came up with the functions themselves. They really aren't that hard to fit.

I'm a big fan of smooth motion. I don't like corners, or corner cases. Unless there were a good number of segments in the piecewise fit, it would have "corners" that I could "feel" as I drive. When I tried driving with the original 5th order polynomial, I could feel the oscillations in the result and didn't like it at all. And hand-fiddling with the piecewise function to get it to feel nice and match well would be as much work as the polynomial.

Also, I read Adam's statement as saying that trying to automatically fit piecewise lines to the points is a lot harder than just fitting a polynomial. If you look at the source data that I started from, it is a bit noisy. Just interpolating between the points would be sub-optimal.
#12
Re: Team 254 2011 FRC Code
In light of the fact that the Victor's inputs are highly discretized by the Victor itself (see http://www.vexforum.com/showthread.php?t=20350), a small lookup table (mapping PWM duty cycle <-> speed) will ultimately provide the best possible accuracy, and will be computationally inexpensive to run. If you look under the hood of the WPILib PWM code, all continuous-valued speed commands are ultimately converted to a PWM duty cycle value between 1 and 255 (0 is a reserved value for holding the line low). The effective duty cycle for the Victor is then .665ms + X*6.525us, where X is the PWM value. Since there are only 255 unique values, at most you need 255 elements in your lookup table (and in actuality, the beginning and end of this list will be past the Victor's 1.0 to 2.0ms allowable input range spec anyhow).

Define a float/double array with 255 values. The index is the PWM value; the value is the actual speed for that PWM value. To populate this array you could, for example, run a "sweep" of the Victor by running a motor at each possible value for a couple of seconds and recording the total distance traveled during each segment (you would want to filter this data to preserve monotonicity). It would take several minutes to run, but you only need to do it once (when I get a chance, I will take an old bot and do exactly this, and post the results).

Driving the Victor would then be a matter of searching the 255-value array for your desired speed and selecting the index of the closest value. Using a nearest-value binary search (since the array ought to be sorted) would require only 9 comparisons*, worst case, to select the optimal PWM value from the array. Using this method, you are guaranteed to have the least possible linearization error, though the polynomial method is close enough that I doubt you'd notice the difference.

*At most 8 comparisons are necessary to search a 255-element array (since worst-case complexity is O(log2(n))), but you would want an extra comparison to check whether the adjacent value in the array is actually closer, since the easiest way to write a nearest-value binary search is to find the closest value that is always either > or < the input.
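A sketch of the nearest-value binary search described above, assuming the table has already been populated and sorted ascending (the values in the test are hypothetical, not measured Victor data):

```cpp
#include <cassert>
#include <cstddef>

// Nearest-value binary search over an ascending speed table (n >= 2).
// Returns the index whose stored speed is closest to 'target'; in the
// scheme described above, that index is the PWM value to command.
std::size_t nearest_index(const double* table, std::size_t n, double target) {
    std::size_t lo = 0, hi = n - 1;
    while (hi - lo > 1) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (table[mid] < target) lo = mid;
        else hi = mid;
    }
    // The extra comparison mentioned in the footnote: pick the closer
    // of the two neighboring entries.
    return (target - table[lo] <= table[hi] - target) ? lo : hi;
}
```

For a 255-entry table this does at most 8 halvings plus the final neighbor check, matching the 9-comparison worst case given above.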
#13
Re: Team 254 2011 FRC Code
If P is a 7th degree poly and R is a 5th degree poly, then Q = a*P + b*R is a 7th degree polynomial ("a" and "b" are constants). I was trying to point out that it is not immediately obvious that a piecewise linear function "would significantly cut down on the number of computations needed to return a result" compared to a single polynomial. I inferred, based on the context, that you intended the word "computations" to imply "processing speed" and include not just arithmetic but conditional logic and branching as well.

By the way, if you use two nx1 arrays instead of one nx2 array, it saves cycles computing the indices. And if you pre-sort your [x[], y[]] points so they are in increasing order by x[], you can save a few cycles in your table lookup logic by eliminating one of the tests. Assuming x[0,1,...,n] and y[0,1,...,n] are tables of X and Y data values, and [x[n], y[n]] is a sentinel point to assure the function always returns a value:

for (i = 1; i <= n; i++)
    if (ax < x[i])
        return y[i] + (ax - x[i]) / (x[i-1] - x[i]) * (y[i-1] - y[i]);

Depending on the compiler, it might save a few more cycles to use pointer arithmetic to scan the x[] table.

Last edited by Ether : 03-12-2011 at 09:40.
#14
Re: Team 254 2011 FRC Code
Also, for simple math, it's not really a concern when running on the cRIO.
#15
Re: Team 254 2011 FRC Code
Don't forget that another option is pre-processing of lookup tables, if the memory will hold it. 256 data points, 1 per PWM step, would be easy to calculate during sensor initialization and also cut down on cycles. Preprocessing it instead of pre-programming it also allows you to make adjustments to parameters before each match (if needed).
I agree that it's probably not needed for the cRIO, but it was very useful for old-school quadrotors on primitive processors. It also helped back in the 90's with signal processing algorithms that needed to be realtime. Theoretically one could pre-process all of the possible motor outputs versus inputs such that a lookup was done given sensor/driver input instead of a calculation -- but that's a bit overboard.

Last edited by JesseK : 22-11-2011 at 11:23.
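A sketch of that init-time pre-processing: build a 256-entry table once at startup from whatever continuous correction curve is in use, so each loop iteration is just a clamp plus one array read. The linearize() here is an identity placeholder standing in for a real curve such as victor_linearize.

```cpp
#include <cassert>
#include <cmath>

// Placeholder for the real correction curve (e.g. victor_linearize).
static double linearize(double cmd) { return cmd; }

static double g_table[256];

// Run once during initialization: precompute the corrected output for
// every representable command step.
void init_linearize_table() {
    for (int i = 0; i < 256; ++i) {
        double cmd = (i / 255.0) * 2.0 - 1.0;  // map index to [-1, +1]
        g_table[i] = linearize(cmd);
    }
}

// Per-loop cost: clamp, scale, and one table read.
double lookup_linearized(double cmd) {
    if (cmd > 1.0) cmd = 1.0;
    if (cmd < -1.0) cmd = -1.0;
    int i = static_cast<int>(std::lround((cmd + 1.0) * 127.5));
    return g_table[i];
}
```

Rebuilding the table from tweaked parameters before a match is just another call to init_linearize_table().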