Code:
if (fabs(m_error) > (m_maximumInput - m_minimumInput) / 2)
{
    if (m_error > 0)
    {
        m_error = m_error - m_maximumInput + m_minimumInput;
    }
    else
    {
        m_error = m_error + m_maximumInput - m_minimumInput;
    }
}
Now that this code has been brought up, would someone mind explaining why it is there? What are m_maximumInput and m_minimumInput? They both default to zero. If maximumInput = 1 and minimumInput = -1, then whenever fabs(m_error) > 1 the error is biased by -2 for m_error > 0 and by +2 for m_error < 0. So let's say m_error = 1.2; it gets biased to -0.8. What is this accomplishing?