We currently have a couple of floating point operations in our program. Is it absolutely vital that we change these? I know that I can multiply by 100 or 1000 to get rid of the decimal point, but I’d rather not for these operations.
I didn’t think the issue was speed so much as general processor weirdness with how floats are handled in memory. I’d multiply and remove the decimal. What exactly is preventing you from doing so? Time to execute? If that’s the concern, using floating point won’t speed things up at all.
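For what it’s worth, here’s a minimal sketch of the scaled-integer approach in C. The variable names and the scale factor of 100 are just illustrative, not anything from your program:

```c
#include <stdio.h>
#include <stdint.h>

/* Scaled-integer ("fixed-point") version: store values as hundredths
 * instead of using float, so 3.47 is stored as 347. */
#define SCALE 100

int main(void)
{
    /* Hypothetical readings, already scaled by 100. */
    int32_t reading_a = 347;   /* represents 3.47 */
    int32_t reading_b = 125;   /* represents 1.25 */

    /* Addition and subtraction work directly on the scaled values. */
    int32_t sum = reading_a + reading_b;                 /* 472 -> 4.72 */

    /* Multiplication needs one divide by SCALE afterwards:
     * (a*100) * (b*100) = a*b*10000, so divide by 100 once
     * to get back to hundredths. */
    int32_t product = (reading_a * reading_b) / SCALE;   /* 433 -> 4.33 */

    /* Only split back into whole and fractional parts when printing. */
    printf("sum     = %ld.%02ld\n", (long)(sum / SCALE), (long)(sum % SCALE));
    printf("product = %ld.%02ld\n", (long)(product / SCALE), (long)(product % SCALE));
    return 0;
}
```

The main thing to keep straight is the scale: add and subtract freely, but divide by the scale factor once after a multiply (and multiply by it once before a divide) so the result stays in hundredths.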