#3   04-02-2015, 14:22
cstelter
Programming Mentor
AKA: Craig Stelter
FRC #3018 (Nordic Storm)
Team Role: Mentor
 
Join Date: Apr 2012
Rookie Year: 2012
Location: Mankato, MN
Posts: 77
Re: default m_period for PIDController???

Quote:
Originally Posted by lopsided98
I'm pretty sure that increasing the PID frequency does not actually improve the performance much. In the WPILib implementation, multiplying the PID frequency by a certain amount effectively multiplies the I and D terms by the same value, because the PIDController does not normalize for elapsed time internally. I think that might be what actually caused your wheels to perform better.

There is also the possibility that I am totally wrong about this, so feel free to correct me if you know more about it.
I just read the calculate() function in the latest WPILib. I could have sworn that some four years ago, when I first read this code, they used m_period in the calculation of the I and D components. But I don't see anything in the current implementation to account for that, so I think I get your first point.

But I don't quite follow your full thought. You start by saying that increasing the PID frequency does not improve performance, but then suggest the wheels performed better because increasing the frequency caused the I/D terms to be calculated differently.

For me it is just common sense that the more frequently you sample the sensor, the more frequently you can compensate for changes, and thus the better the performance you can achieve.

My understanding is that the I term should be the gain multiplied by the integral of the error function over time. Normally one approximates that integral by drawing a bunch of rectangles under the curve and summing the width-times-height of each. PIDController seems to assume the period will be identical on every iteration (so hopefully we don't get revisited at 50ms, then 70ms later, then 30ms after that) and treats the width of each rectangle as 1.0, in effect internally normalizing a 50ms rectangle width to 1.0. I think this means a faster frequency will require a smaller I for equivalent behavior.
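To make that concrete, here is a minimal Python sketch (an illustration only, not WPILib code) contrasting the unnormalized accumulation described above with a rectangle-rule integral that weights each sample by its actual period:

```python
def total_error(errors):
    # WPILib-style accumulation: each sample's rectangle width is implicitly 1.0
    return sum(errors)

def integral(errors, period):
    # Rectangle-rule integral: each sample weighted by its actual width in seconds
    return sum(e * period for e in errors)

# The same linear error ramp (0.3 down to 0.1) sampled at two rates
# over the same 150 ms window
slow = [0.3, 0.2, 0.1]                          # 50 ms period, 3 samples
fast = [0.3 - 0.2 * k / 29 for k in range(30)]  # 5 ms period, 30 samples

print(total_error(slow), total_error(fast))          # unnormalized sums differ ~10x
print(integral(slow, 0.050), integral(fast, 0.005))  # period-weighted integrals agree
```

The unnormalized sum grows about 10x at the 10x faster rate, while the period-weighted integrals agree at 0.03 either way, which is why an equivalent response needs a roughly 10x smaller I.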

If I run a 50ms period and take 3 successive errors of 0.3, 0.2, 0.1 over 150ms, m_totalError is then 0.6, and if I have I=0.01, the I component of the output would be 0.006. (The P component depends only on the current error, so it would be identical at either rate.)

But if I run a 5ms period, we would have acquired 30 successive errors stepping evenly down from 0.3 to 0.1 over the same 150ms, and m_totalError would be about 6.0, so to achieve the same I component of 0.006 we would need an I of about 0.001, not 0.01.
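As a sanity check on the arithmetic, a quick script (plain Python, not WPILib; it assumes 30 evenly spaced samples along the same 0.3-to-0.1 ramp):

```python
# Slow loop: 3 samples at 50 ms over the ramp
slow_total = 0.3 + 0.2 + 0.1     # m_totalError = 0.6
i_component = 0.01 * slow_total  # I = 0.01 gives an I component of 0.006

# Fast loop: 30 samples at 5 ms over the same ramp
fast_total = sum(0.3 - 0.2 * k / 29 for k in range(30))

# Gain needed at the fast rate to reproduce the same I component
i_fast = i_component / fast_total
print(fast_total, i_fast)
```

With 30 samples averaging 0.2, the total comes to about 6.0, so the fast loop needs roughly a tenth of the slow loop's I for the same contribution to the output.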

I think you might be suggesting that we kept the same PID values before and after changing the frequency, and the effect above is what improved performance (keeping I fixed while raising the frequency is effectively the same as using a much larger I). Is that what you meant? In our case, however, I don't think we arrived at the same I for best performance, iirc. Not sure though.

But I still tend to conclude that with a faster frequency we can more accurately estimate both delta-error and the integral of the error function (significantly more accurately, I believe), and find better performance through the improved accuracy of those values.
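The D term shows the flip side of the same scaling issue. A sketch (again assuming the controller uses the raw error - prevError difference without dividing by the period): the per-step delta shrinks as the loop runs faster, but dividing by the period recovers the same slope either way:

```python
def deltas(errors):
    # Raw successive differences, as an unnormalized D term would see them
    return [b - a for a, b in zip(errors, errors[1:])]

SLOPE = -2.0  # error falling at 2 units/s, i.e. 0.3 -> 0.1 over 100 ms
slow = [0.3 + SLOPE * 0.050 * k for k in range(3)]   # 50 ms samples: 0.3, 0.2, 0.1
fast = [0.3 + SLOPE * 0.005 * k for k in range(21)]  # 5 ms samples of the same ramp

print(deltas(slow)[0], deltas(fast)[0])              # per-step delta is 10x smaller at 5 ms
print(deltas(slow)[0] / 0.050, deltas(fast)[0] / 0.005)  # both recover the -2.0 slope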

Last edited by cstelter : 04-02-2015 at 14:28. Reason: fixed bad wording of final sentence
Reply With Quote