Quote:
Originally Posted by Ether
How is this implemented under the hood in FPGA? Is there a ring buffer in the FPGA which the FPGA populates with the 6.525us_resolution_timestamp for each rising edge it detects (assuming it's not in semi-period mode), and then when requested retrieves the elapsed time between the N+1 most recent samples in the ring (which the cRIO CPU then divides by N)?
There is a buffer that will hold up to 127 samples for each counter / encoder. It has a write pointer that does form a ring buffer. The samples between write_pointer - average_size (usually) and write_pointer are summed every 0.1 us * average_size (yeah I know... overkill), and the sum is made available to the driver (i.e. not on demand) along with the number of samples available. When a counter is reset, the write pointer is moved to mark all samples unwritten (so the "count" returned would be 0).
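For illustration only, here is a rough software model of that buffer behavior. The names, types, and structure are mine, not the actual FPGA source (which is hardware, not C++); only the 127-sample capacity, the write pointer, the periodic sum, and the reset behavior come from the description above.

Code:
#include <cstddef>
#include <cstdint>

// Illustrative model of the per-counter sample buffer described above.
class PeriodSampleBuffer {
 public:
  static const std::size_t kCapacity = 127;  // up to 127 samples per counter

  // Record the time between two consecutive edges (one "sample").
  void AddSample(uint32_t edgeToEdgeTime) {
    samples_[writePtr_ % kCapacity] = edgeToEdgeTime;
    ++writePtr_;
    if (written_ < kCapacity) ++written_;
  }

  // Sum of the most recent averageSize samples, plus the number of samples
  // actually available.  Per the post, the FPGA recomputes this
  // periodically and latches it for the driver rather than computing it
  // on demand.
  void Snapshot(std::size_t averageSize, uint64_t* sum,
                std::size_t* count) const {
    std::size_t n = (averageSize < written_) ? averageSize : written_;
    uint64_t total = 0;
    for (std::size_t i = 0; i < n; ++i) {
      total += samples_[(writePtr_ - 1 - i) % kCapacity];
    }
    *sum = total;
    *count = n;
  }

  // Reset marks every sample unwritten, so the next snapshot reports a
  // count of 0.
  void Reset() {
    writePtr_ = 0;
    written_ = 0;
  }

 private:
  uint32_t samples_[kCapacity] = {};
  std::size_t writePtr_ = 0;
  std::size_t written_ = 0;
};

The driver-side period is then just the latched sum divided by the latched count, which matches Ether's "divide by N" description.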
The I/O is 6.525 us from sample to sample. The timer used has a 1 us resolution. So the actual resolution is 6.525 us + 1 us = 7.525 us. However, the error is not cumulative across the average (the per-sample deltas telescope when summed, so only the endpoints of the measured span contribute). Regardless of the number of samples averaged, the error is less than 7.525 us.
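To make that bound concrete, here is the arithmetic with some sample counts plugged in. This assumes the 7.525 us bound applies to the summed elapsed time, in which case dividing by the number of averaged samples only shrinks the per-period error; the sample counts are arbitrary examples, not measurements.

Code:
#include <cstdio>

int main() {
  // Worst-case error from the figures above: 6.525 us sample-to-sample
  // I/O interval plus 1 us timer resolution.
  const double worstCaseTotalErrorUs = 6.525 + 1.0;  // 7.525 us
  const int sampleCounts[] = {1, 4, 16, 64, 127};
  for (int n : sampleCounts) {
    std::printf("N = %3d  average-period error <= %.3f us\n", n,
                worstCaseTotalErrorUs / n);
  }
  return 0;
}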
Quote:
Originally Posted by Ether
How large can "NumberOfSamplesToAverage" be? I searched the 2012 C++ and the 2013 Java WPILib code but couldn't find that search string.
The FPGA interface function is called writeTimerConfig_AverageSize(). Looks like we never exposed that to the WPILib API layer in C++ (and therefore Java). It's exposed in LabVIEW.
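For illustration only, a WPILib-style wrapper around that call might look like the sketch below. Only the name writeTimerConfig_AverageSize comes from the post; the mock tCounter type, the status argument, and the SetSamplesToAverage() method are assumptions made for the example.

Code:
#include <cstdint>

// Hypothetical stand-in for the FPGA counter interface object.
struct tCounter {
  void writeTimerConfig_AverageSize(uint8_t size, int32_t* status) {
    averageSize = size;
    if (status) *status = 0;
  }
  uint8_t averageSize = 1;
};

// Hypothetical wrapper showing how the average size could be exposed.
class Counter {
 public:
  explicit Counter(tCounter* impl) : m_counter(impl) {}

  // Clamp to the 127-sample buffer described earlier, then forward the
  // value to the FPGA interface call.
  void SetSamplesToAverage(int samples) {
    if (samples < 1) samples = 1;
    if (samples > 127) samples = 127;
    m_counter->writeTimerConfig_AverageSize(static_cast<uint8_t>(samples),
                                            &m_status);
  }

 private:
  tCounter* m_counter;
  int32_t m_status = 0;
};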
Cheers,
-Joe