Quote:
Originally Posted by Mark McLeod
That's due to how computer hardware in general works.
Part of programming is understanding the underlying computer implementation, and this is just one of the quirks or limitations.
It is a limitation in the implementation of binary floating-point numbers (such as doubles or singles). If you convert the dollars to cents by multiplying by 100 and rounding to the nearest whole number, the problem goes away, because arithmetic on whole cents is exact integer arithmetic. Note that rounding is safer than a plain integer cast: a value like 0.29 is actually stored as 0.28999..., so casting would truncate it to 28 cents instead of 29.
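
Here is a minimal sketch of the cents approach, written in Java for illustration (the class name MoneyDemo and the sample values are made up, not from the original post):

public class MoneyDemo {
    public static void main(String[] args) {
        // The classic floating-point surprise: 0.1 + 0.2 is not exactly 0.3
        System.out.println(0.1 + 0.2);   // prints 0.30000000000000004

        // Convert dollars to whole cents before doing any arithmetic.
        // Math.round is safer than a plain (int) cast: 0.29 is actually
        // stored as 0.28999..., so a cast would truncate it to 28 cents.
        double price = 0.29;             // hypothetical example value
        long cents = Math.round(price * 100);
        System.out.println(cents);       // prints 29

        // All further arithmetic on cents is exact integer arithmetic.
        long total = cents + Math.round(0.01 * 100);  // 29 + 1 = 30 cents
        System.out.printf("%d.%02d%n", total / 100, total % 100); // 0.30
    }
}

Only convert back to a dollars-and-cents string at the very end, for display; keep everything in cents while you are adding, subtracting, or comparing amounts.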