Re: Time precision better than seconds??
How accurate is the timing using floating-point precision? IEEE 754 floating-point representation isn't always exact; a float often can't represent your value precisely as entered (e.g., 0.3 is actually stored as 0.2999...), so your times may end up slightly misrepresented internally.
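To illustrate the point (a minimal Python sketch, not tied to whatever timing code the thread is discussing): 0.3 has no exact binary representation in IEEE 754 doubles, and `Decimal` can expose the value actually stored.

```python
from decimal import Decimal

# Decimal(0.3) converts the double exactly, revealing the stored value,
# which is slightly below 0.3.
print(Decimal(0.3))

# The classic symptom: accumulated representation error breaks equality.
print(0.1 + 0.2 == 0.3)  # prints False
```

Whether this matters for timing depends on magnitude: doubles carry about 15-16 significant decimal digits, so the absolute error in a timestamp of a few seconds is far below a microsecond, but it is still not the exact value you typed.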
Any reason why floating-point numbers were used instead of integer milli- or microseconds?
__________________
One of the main causes of the fall of the Roman Empire was that, lacking zero, they had no way to indicate successful termination of their C programs.