Quote:
Originally Posted by vamfun
I beg to differ again.... what is important is the relative accuracy and that is referenced to the new edge ticks. When a new edge is detected, a mark is made in the sand. You travel one tick to another new edge. You make a second mark in the sand. You know that you have traveled exactly one tick from your first mark.
You don't know that. You know you have traveled at least one tick; you might have traveled almost two ticks. That's what the [1 .. 2) notation Joe used means.
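The interval semantics can be made concrete with a small sketch (the function name is mine, not from the thread): a raw count of n only pins the true position to the half-open interval [n, n+1) ticks.

```python
def position_interval(count):
    """Return the (low, high) bounds, in ticks, implied by a raw count.

    The true position lies in [low, high): at least `count` ticks,
    but strictly less than `count + 1`.
    """
    return (count, count + 1)

# After one new-edge event the count is 1, so the true position is
# anywhere from exactly 1 tick up to, but not including, 2 ticks.
lo, hi = position_interval(1)
```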
Quote:
If you now reverse slightly and get a same edge indication... you will always say that you are at the 1st mark when indeed you know that you are not.

What you know is that you are between the first and second marks.
Quote:
Hence no matter what the relative scaling, when a reverse edge occurs... my algorithm will always be better than yours, since it says I'm at the second mark, which is indeed where I am.

You were at the second mark for a moment. Until another edge occurs, you don't know precisely where between the first and second marks you are.
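A minimal x4 quadrature decoder sketch illustrates this (my own illustration, not the FPGA's actual implementation). Channels A and B produce the Gray-code sequence 00 -> 01 -> 11 -> 10 when moving forward; each valid transition adjusts the count by one. Oscillating back and forth across a single edge just toggles the count between n and n+1, so the reported region always brackets the same pair of marks.

```python
# Maps each (A, B) state to the next state when moving forward.
FORWARD = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}

def step(count, state, new_state):
    """Apply one observed transition; return (new_count, new_state)."""
    if FORWARD[state] == new_state:
        return count + 1, new_state   # crossed an edge going forward
    if FORWARD[new_state] == state:
        return count - 1, new_state   # crossed the same edge going backward
    return count, new_state           # no change (or an invalid double-skip)

count, state = 0, (0, 0)
count, state = step(count, state, (0, 1))  # forward over an edge: count 1
count, state = step(count, state, (0, 0))  # reverse over that edge: count 0
count, state = step(count, state, (0, 1))  # forward again: count 1
```

The count toggles between 0 and 1 here, but either value describes the same physical situation: the position is somewhere between the first and second marks.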
Quote:
I think we all agree that both algorithms will have a max error of 1 count and this will occur at different times.

It is reasonable to consider the true quadrature algorithm to have a maximum error of half a count on either side of zero. Again, this is typical of an optimum quantization process.
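One way to realize the half-count-either-side view (an illustration of the point, not something from the thread): report the midpoint of the interval the count implies, so the worst-case quantization error is half a count, symmetric about zero.

```python
def position_estimate(count):
    """Midpoint of [count, count + 1): error is bounded by +/- 0.5 count."""
    return count + 0.5

# Any true position in [3, 4) is reported as 3.5, so the error never
# exceeds half a count in either direction.
```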
Quote:
If you demand exact repeatability with angle, then Joe's is the way to go. However, it introduces large rate spikes when oscillating over an edge.

What rate spikes? I think you missed the part where the FPGA doesn't compute a rate until two events have occurred in the same direction.
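The gating rule can be sketched as follows (my reconstruction of the idea, not the FPGA's actual logic): a rate is produced only from two consecutive events in the same direction, so oscillating over a single edge never yields a rate at all, let alone a spike.

```python
def rate(events):
    """Estimate rate from timestamped edge events.

    events: list of (timestamp_seconds, direction) pairs, direction = +1 or -1.
    Returns ticks/second from the last two events if they share a
    direction, else None (no valid rate yet).
    """
    if len(events) < 2:
        return None
    (t0, d0), (t1, d1) = events[-2], events[-1]
    if d0 != d1:
        return None                # direction reversed: suppress the rate
    return d1 / (t1 - t0)          # one tick traversed per event interval
```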
The number that a quadrature decoder yields does not represent an edge. It represents a region between edges. Does that help you understand the explanations here?