How slow is floating point math really?

We currently have a couple of floating-point operations in our program. Is it absolutely vital that we change these? I know that I can multiply by 100 or 1000 to get rid of the decimal point, but I’d rather not for these operations.
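
For anyone skimming, the scaling Nathan mentions looks roughly like this in C; the battery-voltage names and the factor of 100 are made up purely for illustration:

```c
/*
 * Sketch only: the "multiply by 100" scaling trick, with made-up names.
 * Instead of keeping 12.34 volts in a float, keep 1234 hundredths of a
 * volt in an int and only convert back when you display it.
 */
#include <stdio.h>

int main(void)
{
    float battery_volts      = 12.34f;  /* floating-point version      */
    int   battery_centivolts = 1234;    /* same value, scaled by 100   */

    int low_threshold = 1100;           /* 11.00 V, also scaled by 100 */
    if (battery_centivolts < low_threshold)
        printf("low battery\n");

    /* Divide by the scale factor only when printing. */
    printf("scaled: %d.%02d V   float: %.2f V\n",
           battery_centivolts / 100, battery_centivolts % 100,
           battery_volts);
    return 0;
}
```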

So what do you guys think?

Thanks,
Nathan

I didn’t think the issue was speed so much as general processor weirdness with memory. I’d multiply and remove the decimal. What exactly is preventing you from doing so? Time to execute? If that’s what you’re worried about, sticking with floating point won’t speed anything up.

Here are some other threads with some real data:

Besides taking more time, they also use more memory, which can cause weird errors.
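
As a very rough illustration of the size difference (printed by a desktop compiler here, so the exact numbers may not match the robot controller; on a chip without hardware floating point, the library routines the compiler pulls in also eat program memory):

```c
/* Rough illustration only: sizes from a typical desktop/32-bit target.
 * An 8-bit microcontroller's compiler may use different widths. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("float   : %zu bytes\n", sizeof(float));    /* commonly 4 */
    printf("int16_t : %zu bytes\n", sizeof(int16_t));  /* always 2   */
    printf("int32_t : %zu bytes\n", sizeof(int32_t));  /* always 4   */
    return 0;
}
```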

No, a few floating-point operations per 26ms cycle generally aren’t a problem.

-Kevin
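
If you’d rather measure than take a rule of thumb, a crude timing loop is easy to throw together. This is only a sketch using the standard clock() call on a desktop compiler; the per-operation cost on the robot controller will be very different, so treat it as a way to get your own numbers, not as data:

```c
/*
 * Crude timing sketch: loop count and constants are arbitrary, and
 * desktop results say nothing precise about a small microcontroller.
 */
#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile float acc = 1.0f;        /* volatile so the loop isn't optimized away */
    const long iterations = 1000000L;

    clock_t start = clock();
    for (long i = 0; i < iterations; i++)
        acc = acc * 0.9999f + 0.5f;   /* one multiply and one add per pass */
    clock_t end = clock();

    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("%ld multiply/add pairs: %.3f s total, about %.1f ns each\n",
           iterations, seconds, seconds * 1e9 / iterations);
    printf("(final value: %f)\n", (double)acc);
    return 0;
}
```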