At least I understand the intent now. I think Joe Ross gave the best answer; it makes sense in the turret application. I was using it for my shooter speed, where I think percent error makes more sense.
Percent error feels more natural to me because whenever we discussed the steady-state error requirements of a control loop in school, we always talked about it as a percentage of the setpoint, e.g. a steady-state error of +/- 2% might be a requirement. Of course, we were always talking about controlling speed, not position.
I think I will add my own OnTarget implementation to WPILib and recompile it. Something like this:
Code:
bool PIDController::OnTarget(bool percent_error)
{
	bool temp;
	CRITICAL_REGION(m_semaphore)
	{
		if (percent_error)
		{
			// Tolerance as a percentage of the setpoint. m_tolerance is
			// stored in percent, so scale it down to match the other branch.
			// (Note: not meaningful when the setpoint is zero.)
			temp = fabs(m_error) < (m_tolerance / 100 * fabs(m_setpoint));
		}
		else
		{
			// Stock behavior: tolerance as a percentage of the input range.
			temp = fabs(m_error) < (m_tolerance / 100 *
					(m_maximumInput - m_minimumInput));
		}
	}
	END_REGION;
	return temp;
}
Quote:
Originally Posted by DjScribbles
Conclusion:
The whole thing would be much simpler (and function the same) if tolerance simply took in a float for the acceptable error value rather than a percentage (e.g. SetTolerance(1.0); // tolerance of 1 degree).
I also agree with this.
Quote:
Originally Posted by Ether
On a side note:
It looks like m_tolerance, m_maximumInput, and m_minimumInput are all constants.
I think the reason they are part of the critical region is that they are read when the controller's output is calculated, and they can also be modified at any time by the application threads via methods such as this one:
Code:
void PIDController::SetInputRange(float minimumInput, float maximumInput)
{
	CRITICAL_REGION(m_semaphore)
	{
		m_minimumInput = minimumInput;
		m_maximumInput = maximumInput;
	}
	END_REGION;
	SetSetpoint(m_setpoint);	// re-clamps the setpoint to the new range
}