I’ve always been a proponent of writing your own integer-valued CORDIC/table/Taylor-series trig functions on the PIC, since it lacks a hardware FPU. I’ve always been told they’re much faster.

The question that is now on my mind is, how much faster? I’ve noticed a lot of teams this year using the math.h trig functions, floating-points and all.

Does anyone have any empirical evidence of the speed difference between the two methods? If the difference is small, I might as well use math.h (especially since there would be only a couple of calls per 26 ms loop).

I don’t have specific timing numbers, but each time through the user_routine event loop our team’s code currently calls:

sin() four times
cos() four times
atan2() twice
asin() twice
acos() twice
sqrt() four times

It runs without problems and without taking up too much code space, so I decided it wasn’t worth messing with CORDIC functions and called the floating-point routines “fast enough”.

If I have a chance I’ll set up a test loop to compare the methods, but it hasn’t been a priority.

I remember that in an earlier version of the compiler (2.2?) the library source code included estimates of the number of clock cycles each operation takes. I don’t see this information in the current compiler, but I do remember that a floating-point multiply/divide took an average of 1835 instruction clock cycles (a floating-point addition/subtraction was 80). Assuming this information is still semi-valid, you can take a look at the source code for the trig routines in the “C:\mcc18\src\traditional\stdclib” directory to get an estimate of how many of these operations there are in a particular trig routine. Sorry I can’t give you any more info than that.

Ours is going to be using a square-root function as well as a tan function. It doesn’t slow things down too much as of now.
How much do floating-point values affect the code, though? We used a lot of those in finding an accurate distance from the wall.