13-02-2015, 14:31
cstelter
Programming Mentor
AKA: Craig Stelter
FRC #3018 (Nordic Storm)
Team Role: Mentor

Join Date: Apr 2012
Rookie Year: 2012
Location: Mankato, MN
Posts: 77
Some thoughts about Double Precision numbers

Sorry if this has come up often-- I searched the programming forums for roundoff and double precision and didn't find any recent discussions of this; many of the hits I did find were from back in 2003, so I thought a reminder might prove valuable for some teams...

In a recent thread I noticed the line:

Code:
...
double distance;
...

if (distance != 6) {
...
Any time I see an equality test done on a double (except perhaps against zero in certain situations), red flags go off.

I would promote as a general rule that one should *never* do such a comparison, but rather formulate it with a range, such as

Code:
if (Math.abs(distance - 6) > 0.01)
where the 0.01 represents how close you must be to the number. The above would enter the if clause whenever distance is outside the range [5.99, 6.01].
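
If you find yourself writing that range check in several places, it's worth wrapping it in a tiny helper. Here's a minimal sketch-- the nearlyEqual name and the sample values are my own choices for illustration, not anything from the JDK or WPILib:

Code:
public class DoubleCompare {
    // True when a and b differ by no more than epsilon.
    public static boolean nearlyEqual(double a, double b, double epsilon) {
        return Math.abs(a - b) <= epsilon;
    }

    public static void main(String[] args) {
        double distance = 5.995; // pretend sensor reading
        if (!nearlyEqual(distance, 6.0, 0.01)) {
            System.out.println("outside the 6 inch window");
        } else {
            System.out.println("close enough to 6 inches");
        }
    }
}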

A double precision number is 64 bits (8 bytes) with a specific format that defines a sign, mantissa, and exponent. Think of it as 0.NNNe+MM, where 0.NNN is the mantissa and MM is the exponent-- but with all the math done in binary. The key is that 64 bits provide for 'only' 18,446,744,073,709,551,616 unique values, which sounds like a lot, but there are more real numbers than that between *any* two distinct real values.
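
You can actually look at those bits from Java. This little demo uses two standard JDK calls-- Double.doubleToLongBits to dump the sign/exponent/mantissa bits, and the BigDecimal(double) constructor to print the exact decimal value those 64 bits encode:

Code:
import java.math.BigDecimal;

public class BitPattern {
    public static void main(String[] args) {
        // Raw sign/exponent/mantissa bits of the double nearest to 0.1:
        long bits = Double.doubleToLongBits(0.1);
        System.out.println(Long.toBinaryString(bits));
        // The exact value those bits represent-- not 0.1, but
        // 0.1000000000000000055511151231257827021181583404541015625:
        System.out.println(new BigDecimal(0.1));
    }
}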

When you say

if (distance != 6)

that 6 evaluates to just one of those 18 quintillion values. The odds of ever hitting exactly that one are, well, quite low. In this case the distance was a value from an ultrasonic sensor. I'd argue that you could position the sensor exactly 6" away, as precisely as you can measure, and continually move it closer to and further from an object, and only get a reading of exactly 6.0 very rarely-- possibly even never, if the resolution of the distance math is such that the two closest numbers it can calculate are 5.99999999 and 6.00000001.
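
You don't even need hardware to see this. Here's a sketch that builds a distance the way sensor code often does-- by accumulating a step size that binary can't store exactly-- and counts how often the running value equals exactly 6.0 (the 0.001 inch step is made up for illustration):

Code:
public class SensorScan {
    public static void main(String[] args) {
        int exactHits = 0;
        double distance = 0.0;
        // Sweep from 0 to 10 inches in 0.001 inch steps; 0.001 has no
        // exact binary representation, so error accumulates each add.
        for (int i = 0; i < 10000; i++) {
            distance += 0.001;
            if (distance == 6.0) {
                exactHits++;
            }
        }
        // Almost certainly prints 0: the running sum drifts past 6.0
        // without ever landing on its exact bit pattern.
        System.out.println("Exact hits on 6.0: " + exactHits);
    }
}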

I've even seen strange cases involving code that should be insanely safe, along the lines of:

Code:
if (...) {
    myval = 4.2;
}
if (myval == 4.2)
    ...
where I knew the code was going through the first if but would not go through the second one.

It was not entering the if() because Intel processors have floating-point registers that are actually wider than 64 bits, so that math can be done at extended precision, making the final 64-bit answer better. If I recall correctly (this was optimized C++ code where this happened to me), the optimizer must have kept the variable myval in one of those extended-precision registers and mapped an extended value for 4.2 into it. Then, when it did the equality test, it converted the extended-precision number down to standard 64-bit precision, did a 64-bit equality test, and the comparison came out false.
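
As I understand it you shouldn't see that particular behavior from Java-- the strictfp keyword existed precisely to forbid extended intermediate precision, and recent JVMs are strict everywhere-- but a related mixed-precision surprise is easy to reproduce: the 'same' constant stored at two precisions compares unequal the moment the narrower one is widened for the test.

Code:
public class MixedPrecision {
    public static void main(String[] args) {
        float f = 4.2f;  // nearest 32-bit float to 4.2
        double d = 4.2;  // nearest 64-bit double to 4.2
        // f is widened to double for the comparison, but what you get
        // is the float's rounding of 4.2, not the double's:
        System.out.println(f == d); // prints false
    }
}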

Any double precision math operation (+, -, *, /) can create roundoff error, because very few numbers (relatively speaking) can be *exactly* represented by 18 quintillion unique values. The upshot was that, because of the compiler's use of extended-precision registers and the conversions between precisions, roundoff occurred, and code that looked like it couldn't possibly be wrong was wrong-- all from doing a double equality comparison against a constant value.
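
The textbook demonstration of this in Java (or any language using IEEE 754 doubles) is one everyone should run once:

Code:
public class Roundoff {
    public static void main(String[] args) {
        // None of 0.1, 0.2, or 0.3 is exactly representable in binary,
        // and the sum's roundoff lands on a different nearby double
        // than the literal 0.3 does:
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false
    }
}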

Moral of the story: while you might sometimes get lucky doing equality on double precision numbers, it's a bad strategy, and you are always better off providing some level of tolerance in such situations.

I said zero could probably be safe because that's one number you know will always be represented the same way at any precision, and it can be converted to integer, float, long, etc. without any roundoff, thanks to the fundamental binary format of double precision numbers. (There are other such numbers-- perfect powers of 2 should be fine, I think.)

But even such numbers can be tainted once math is done on them. So if you have code where you explicitly state mydouble = 0.0; and you later test if (mydouble == 0.0), and you have designed your code such that no math is ever done on mydouble, I think you are probably safe. Similarly with 2.0, 4.0, 8.0 or 0.5, 0.25-- powers of 2 that can be unambiguously represented as a double.
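
A quick check bears that out-- values built purely from powers of 2 stay exact (within range, of course), so these particular comparisons are dependable:

Code:
public class ExactValues {
    public static void main(String[] args) {
        // 0.5, 0.25, and 0.75 all have exact binary representations,
        // so this addition introduces no roundoff at all:
        System.out.println(0.5 + 0.25 == 0.75); // true
        // Products and quotients of exact powers of two are also exact:
        System.out.println(2.0 * 4.0 == 8.0);   // true
        System.out.println(1.0 / 2.0 == 0.5);   // true
    }
}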

Of course some day we may get some new format for doubles and those rules may break.

So the best rule of thumb is: when comparing doubles, always provide a tolerance.