bool PIDController::OnTarget()
{
    bool temp;
    CRITICAL_REGION(m_semaphore)
    {
        temp = fabs(m_error) < (m_tolerance / 100 *
                                (m_maximumInput - m_minimumInput));
    }
    END_REGION;
    return temp;
}
I’m having trouble understanding the implementation of this function. In the context where I was using it, my maximum input was 100 and my minimum input was 0, and I was using the default tolerance of 0.05 (which I assume means 5%).
0.05 / 100 * (100 - 0) = 0.05.
This means my error would have to be less than 0.05 to be OnTarget, regardless of the setpoint.
For a setpoint of 10, I would have to be within 0.05, which is 0.5% of 10, not 5%.
For a setpoint of 100, I would still have to be within 0.05, which is 0.05% of 100, not 5%.
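To make the arithmetic concrete, here is a minimal standalone sketch (the values 0, 100, and 0.05 are from my setup above, and the setpoints 10 and 100 are just example numbers):

#include <cstdio>

int main()
{
    // My use case: input range 0..100, default tolerance 0.05.
    double minimumInput = 0.0;
    double maximumInput = 100.0;
    double tolerance    = 0.05;

    // The band OnTarget() compares |error| against; the setpoint never enters into it.
    double band = tolerance / 100 * (maximumInput - minimumInput);  // = 0.05

    for (double setpoint : {10.0, 100.0})
    {
        std::printf("setpoint %.0f: |error| must be < %.2f, i.e. %.2f%% of the setpoint\n",
                    setpoint, band, band / setpoint * 100);
    }
    return 0;
}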
Shouldn’t the implementation use the percent error formula?
% error = (|Your Result - Accepted Value| / Accepted Value) x 100
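For example, with made-up numbers, a result of 9.5 against an accepted value of 10 gives (|9.5 - 10| / 10) x 100 = 5% error.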
Something like:
bool PIDController::OnTarget()
{
    bool temp;
    CRITICAL_REGION(m_semaphore)
    {
        temp = fabs(m_error) / m_setpoint < m_tolerance;
    }
    END_REGION;
    return temp;
}
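As a rough standalone check of how that comparison would behave (error, setpoint, and tolerance here are just local stand-ins for the members above, and I'm treating 0.05 as a 5% fraction of the setpoint):

#include <cmath>
#include <cstdio>

int main()
{
    double tolerance = 0.05;  // read as "within 5% of the setpoint"

    for (double setpoint : {10.0, 100.0})
    {
        double error = 0.04 * setpoint;  // a hypothetical error of 4% of the setpoint
        bool onTarget = std::fabs(error) / setpoint < tolerance;
        std::printf("setpoint %.0f: error %.2f -> on target: %s\n",
                    setpoint, error, onTarget ? "yes" : "no");
    }
    return 0;
}

With this check the allowed error scales with the setpoint (0.5 at a setpoint of 10, 5.0 at a setpoint of 100), which is what I would expect from a percent tolerance. The one thing I'm not sure about is a setpoint of 0, since the division would blow up there.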