Re: Servo 'smoothing'
Quote:
DON'T OVER-OPTIMIZE.

A fixed point multiply takes two or three clock cycles to execute. A single precision floating point multiply takes three, and a double precision floating point multiply takes four. In addition to the "execute" phase, each instruction must also go through fetch, decode/dispatch, and complete, which adds three more clock cycles. Simply put, the difference between individual fixed and floating point instructions just doesn't matter any more; what matters is how many instructions you execute. By converting to fixed point math (and adding the shift instructions), you actually slowed the filter down.

To make things even more interesting, the processor can dispatch up to two instructions per clock, but only if they are of different types: you can dispatch a fixed point and a floating point instruction simultaneously, but you can't do the same with two fixed point instructions. This means you are *much* better off letting the compiler order your instructions. Write what you mean, and trust the compiler.

- Eric (with a C)

PS: Eliminating divides is almost worth your time. Fixed point divides take 20 cycles to execute; floating point divides take 18 (single precision) or 33 (double precision). For reference, a double precision divide on the cRIO takes about the same amount of time as an 8 bit fixed point instruction on the PIC. However, the cRIO will also be executing fixed point instructions while the FPU is working on the divide.
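To make the "write what you mean" advice concrete, here is a minimal sketch (my own example, not from Eric's post; the coefficient, the function names, and the Q15 scaling are assumptions) of the same first-order low-pass filter written naturally in floating point and hand-converted to fixed point. On a processor with a pipelined FPU like the cRIO's, the floating point form is typically at least as fast, and it is clearly easier to read.

```c
/* Illustrative sketch only.  ALPHA and the Q15 scaling are example values,
 * not taken from the original post. */

#include <stdint.h>

#define ALPHA 0.125f            /* filter coefficient, example value */

/* Floating point: write what you mean. */
float lowpass_float(float state, float input)
{
    return state + ALPHA * (input - state);
}

/* Fixed point (Q15): extra shifts and casts, with no speed win on this CPU. */
#define ALPHA_Q15 ((int16_t)(0.125f * 32768))

int16_t lowpass_q15(int16_t state, int16_t input)
{
    int32_t diff = (int32_t)input - state;
    return (int16_t)(state + ((diff * ALPHA_Q15) >> 15));
}
```

The compiler is free to interleave the floating point version with surrounding integer work, which is exactly the dual-dispatch behavior described above.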
Re: Servo 'smoothing'
Quote:
To be brief, I wasn't maintaining that fixed point math is faster or makes more sense for most operations on the cRIO; NI's processor selection obviously makes that untrue. Two points remain, however:

1. At some point, we'll hopefully be given access to the FPGA behind the RIO in the cRIO. Floating point math is more or less untenable on that FPGA, so if a team wants to do fast filtering, averaging, oversampling, etc. with no extra processor load, fixed point on the FPGA is the way to go (see the sketch after this list). Many interesting FPGA applications are going to require some sort of fixed point math.

2. Fixed point math is loads faster on processors designed with it in mind, and such processors use less power than a floating point processor to do the same amount of work.

I was attempting to point this out so everyone reading the thread didn't simply consign fixed point to the dustbin of history. There's a reason the big DSP makers are still designing and making new fixed-point DSPs, after all.
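As an illustration of point 1, here is a minimal sketch (my own example, not from the post; the sample width and the oversampling factor are assumptions) of the kind of fixed point oversampling and averaging that maps cheaply onto FPGA fabric, because it needs only adds and shifts. A real FPGA design would be written in LabVIEW FPGA or an HDL rather than C; the C below is just a behavioral model of the arithmetic.

```c
/* Illustrative behavioral model: average 2^N raw ADC samples using only
 * integer adds and a shift.  N = 4 and the 16-bit sample width are assumed
 * for the example. */

#include <stdint.h>

#define OVERSAMPLE_SHIFT 4                      /* average 2^4 = 16 samples */
#define OVERSAMPLE_COUNT (1u << OVERSAMPLE_SHIFT)

uint16_t oversample_average(const uint16_t *samples)
{
    uint32_t acc = 0;
    for (unsigned i = 0; i < OVERSAMPLE_COUNT; i++)
        acc += samples[i];                      /* simple adder on the FPGA  */
    return (uint16_t)(acc >> OVERSAMPLE_SHIFT); /* divide by 16 is a shift   */
}
```

Because the divide-by-sixteen is a power-of-two shift, the whole operation costs no multipliers and no floating point hardware, which is why this style is the natural fit for FPGA fabric.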