Some thoughts about Double Precision numbers

Sorry if this has come up often-- I searched the programming forums for roundoff and double precision and didn’t find any recent discussions of this-- many of the hits I did find were from back in 2003-- so I thought a reminder may prove valuable for some teams…

In a recent thread I noticed the line:


...
double distance;
...

if (distance != 6) {
...


Any time I see an equality test being done on a double (except perhaps against zero, in certain situations), red flags go off.

I would promote as a general rule that one should never do such a comparison, but rather formulate it with a range, such as


if (Math.abs(distance - 6) > 0.01)

where the 0.01 represents how close you must be to the target number. The above would enter the if clause whenever distance is outside the range [5.99, 6.01], and skip it whenever distance falls within that range.
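
If you find yourself doing this comparison in several places, it can be wrapped in a small helper. A minimal sketch, assuming Java like the snippets above-- the name nearlyEqual and the sample values are my own, not from the original thread:

public class Tolerance {
    // Hypothetical helper -- true when a and b are within tolerance of each other.
    static boolean nearlyEqual(double a, double b, double tolerance) {
        return Math.abs(a - b) <= tolerance;
    }

    public static void main(String[] args) {
        double distance = 5.999999999;                          // e.g., a computed reading
        System.out.println(nearlyEqual(distance, 6.0, 0.01));   // true
        System.out.println(distance == 6.0);                    // false
    }
}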

A double precision number is 64 bits (8 bytes) with a specific format that defines a sign, mantissa, and exponent. Think of it as 0.NNNe+MM, where 0.NNN is the mantissa and MM is the exponent-- but then do all the math in binary. The key is that 64 bits provides for ‘only’ 18,446,744,073,709,551,616 unique values, which sounds like a lot, but there are more real numbers than that between any two distinct real values.
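
If you’re curious what those bits actually look like, Java can pull a double apart for you. A quick sketch using Double.doubleToLongBits-- an IEEE 754 double has 1 sign bit, 11 exponent bits (biased by 1023), and 52 mantissa bits:

public class DoubleBits {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(6.0);
        long sign     = bits >>> 63;                 // 1 bit
        long exponent = (bits >>> 52) & 0x7FF;       // 11 bits, biased by 1023
        long mantissa = bits & 0xFFFFFFFFFFFFFL;     // 52 bits, implicit leading 1
        // 6.0 = 1.5 * 2^2, so this prints sign=0 exponent=1025 mantissa=0x8000000000000
        System.out.printf("sign=%d exponent=%d mantissa=0x%013X%n",
                sign, exponent, mantissa);
    }
}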

When you say

if (distance != 6)

that 6 evaluates to just one of those 18 quintillion values. The odds of ever hitting that exact one are, well, quite low. In this case the distance was a value from an ultrasonic sensor. I’d argue that you could position the sensor exactly 6" from an object, as precisely as you can measure, and continually move it closer and further, and you would get a reading of exactly 6.0 only rarely-- and possibly never, if the resolution of the distance math is such that the two closest numbers it can calculate are 5.99999999 and 6.00000001.
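
You don’t even need a sensor to see this-- ordinary arithmetic misses “round” targets all the time. A small demo built on the classic 0.1 example (0.1 has no exact binary representation):

public class RoundoffDemo {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);           // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3);    // false

        double distance = 0.0;
        for (int i = 0; i < 60; i++) {
            distance += 0.1;                     // roundoff creeps in on each add
        }
        System.out.println(distance == 6.0);     // false -- drifts to ~5.999999999999998
        System.out.println(Math.abs(distance - 6.0) < 0.01);  // true
    }
}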

I’ve even seen strange cases with code that should be insanely safe, along the lines of:


   if (...) {
       myval = 4.2;
   }
   if (myval == 4.2)
       ...

where I knew the code was going through the first if but would not go through the second one.

It was not entering the if() because Intel processors have floating point registers that are actually wider than 64 bits (the x87 FPU’s registers are 80 bits wide), so that math can be done at extended precision, making the final 64-bit answer more accurate. If I recall correctly how the optimizer worked (this was optimized C++ code where I hit this), it must have allocated the variable myval in an extended-width register and loaded an extended-precision value for 4.2 into it. Then when it did the equality test, it converted the extended-precision number down to a standard 64-bit double, did a 64-bit equality test, and the equality came out false.

Any double precision math operation (+, -, *, /) can create roundoff error, because relatively few numbers can be exactly represented among those 18 quintillion unique values. The upshot was that, because of the compiler’s use of extended precision registers and the conversions between precisions, roundoff occurred, and what looked impossible to be wrong was wrong simply because of a double equality comparison against a constant value.
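
One way to see the roundoff directly: Java’s BigDecimal(double) constructor shows the exact value a double really holds (a quick demo-- 0.1 and 4.2 are just sample constants):

import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // The exact binary value stored for the literal 0.1:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal(4.2));
        // prints 4.20000000000000017763568394002504646778106689453125
    }
}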

Moral of the story: while you might get lucky sometimes doing equality on double precision numbers, it’s a bad strategy, and you are always better off providing some level of tolerance in such situations.

I said zero could probably be safe because that’s one number you know will always be represented the same way at any precision, and it can be converted to integer, float, long, etc. without any roundoff, due to the fundamental binary format of double precision numbers. (There are other safe numbers too-- perfect powers of 2 should be fine, I think.)

But even such numbers can be tainted once math is done on them. So if you have code where you explicitly state mydouble = 0.0; and you later test if (mydouble == 0.0), and you have designed your code such that no math is ever done on mydouble, I think you are probably safe. Similarly with 2.0, 4.0, 8.0, or 0.5, 0.25-- powers of 2 that can be unambiguously represented as a double.
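
A quick sanity check of that claim-- these particular constants are just examples I picked:

public class ExactBinaryValues {
    public static void main(String[] args) {
        // Sums and products of modest powers of two are exact in binary:
        System.out.println(0.5 + 0.25 == 0.75);      // true
        System.out.println(2.0 * 4.0 == 8.0);        // true
        // A decimal fraction like 0.1 is not a finite sum of powers of two:
        System.out.println(0.1 + 0.1 + 0.1 == 0.3);  // false
    }
}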

Of course some day we may get some new format for doubles and those rules may break.

So best rule of thumb is when comparing with doubles, always provide a tolerance.

I’ll just say, as the poster of the “if (distance != 6)” line, that I wrote it knowing it would run unless the double was exactly 6. That’s because it led into a PID loop that contained a double-based deadzone, so I didn’t see the point in writing a deadzone for 6 when the PID already had one.

Hi Oromus-- in retrospect I suppose I shouldn’t have used exactly your line to start the new thread. Apologies about that.

But I do see red flags whenever I see code that compares doubles for equality or inequality. I thought I’d post what I believe to be a very good rule of thumb about floating point numbers. Maybe some will disagree with my take and a nice discussion will erupt. Otherwise, perhaps folks will look at comparisons of floating point numbers a bit more carefully and code better robots as a result.

I don’t really agree with your assertions above, but this thread isn’t the place for that discussion, so maybe I’ll send you a PM about my thoughts. I still think that should have been a red flag for you, but probably for entirely different reasons than you suspect.

Implementing a range is the “correct” way to do it, but not the only way. Another option is to cast the double value to an int, or to use Math.round(distance). The cast truncates the fractional part, while Math.round rounds to the nearest whole number-- either way, an exact integer comparison becomes meaningful.
Both


int compare_distance = (int) distance;  // truncates toward zero
if (compare_distance != 6) {
    // Not entered for distances from 6.0 to 6.999..., which all truncate to 6
}

and


if (Math.round(distance) != 6) {
    // Not entered for distances from 5.5 to 6.4999..., which all round to 6
}

will work equally well in most cases.

This is why the gcc compiler has a -Wfloat-equal warning flag. Using the “==” operator on floating point types usually indicates a programmer mistake.