We’ve got fairly tight requirements. In order to improve our shooter, we’re increasing the number of counts our encoder returns.
Right now we have a Bourns 128-count encoder with a 1/4" slotted-shaft input.
What we’re looking for is a 512-count encoder with a 1/4" slotted-shaft input at a reasonable cost.
With 128 counts per revolution, we see around 7,000 counts per second; 512 counts gets us up to 28,000 or so per second. My understanding is that 40,000 counts per second is the cRIO’s limit, so I’m happy getting to 3/4 of it. Also note that the encoder will have to be fairly robust to handle the speeds (3,000+ rpm) our shooter runs at.
We’ve looked but can’t find something in stock that meets these requirements. Does anyone else have an off-the-shelf idea?
I understand we could gear up the encoder we have now to get the appropriate counts, but we direct-drive the encoder to minimize noise and would like to stay that way. We don’t have room for a drop-over or code-wheel style encoder; the mounting real estate just isn’t there.
If it helps, I’ve used both the S1 and S5 encoders from US Digital before.
Your method of calculating and/or filtering velocity could have a much larger impact on control (we ran a 6-tick encoder all season). Curious, what are you currently doing? I certainly agree with your logic that more resolution is better, though (we’re switching to a higher-count encoder at IRI for the same reason).
We are using the S5 encoder at 32 CPR. It has worked fine all year for us. The only problem we had was while testing different wheels/coatings: we ended up destroying one after inserting it into and pulling it out of a tight hole in the shooter shaft about 25 times, which is pretty understandable.
We have our shooter logic in a 10 ms timed loop (not a waited loop).
We calculate the rate based on 3 loops worth of data. Each loop we drop the oldest data point and add a new one.
Then we average those 3 loops of data and send that into our velocity PID as our actual speed.
(I suppose, looking at this now, we really should just be looking at the rate over 9 loops, or 90 ms, and get rid of the average. It’s a bit redundant.)
For our ‘enable to shoot’ logic, we use an 8-sample average of the rate fed into an IIR filter with a 0.5 constant.
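Roughly, the math looks like this (a simplified sketch, not our actual code; the count values come from however you read the encoder each loop, and the names are made up):

// Simplified sketch of the 10 ms loop math described above -- not our actual code.
#include <deque>
#include <numeric>

static const double kLoopPeriod   = 0.010;   // 10 ms timed loop
static const double kCountsPerRev = 128.0;   // current Bourns encoder

std::deque<double> rateWindow;    // the last 3 loop rates (rpm) for the PID input
std::deque<double> enableWindow;  // the last 8 rates for the enable-to-shoot check
double iirState = 0.0;            // IIR filter state (0.5 constant)

double UpdateShooterSpeed(int newCounts, int oldCounts)
{
    // Rate for this loop, converted to rpm
    double rpm = ((newCounts - oldCounts) / kCountsPerRev) / kLoopPeriod * 60.0;

    // Keep the 3 most recent loops and average them -> "actual speed" for the velocity PID
    rateWindow.push_back(rpm);
    if (rateWindow.size() > 3)
        rateWindow.pop_front();
    double pidSpeed = std::accumulate(rateWindow.begin(), rateWindow.end(), 0.0) / rateWindow.size();

    // Enable-to-shoot: 8-sample average of the rate fed into an IIR filter with a 0.5 constant
    enableWindow.push_back(rpm);
    if (enableWindow.size() > 8)
        enableWindow.pop_front();
    double avg8 = std::accumulate(enableWindow.begin(), enableWindow.end(), 0.0) / enableWindow.size();
    iirState = 0.5 * iirState + 0.5 * avg8;

    return pidSpeed;  // iirState gets compared against the shoot threshold elsewhere
}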
Please, feel free to tell us what we can improve =). I’m a mechanical engineer masquerading as a controls engineer, and systems engineering was always one of my most hated classes :lol:
Are you using 4x sampling to get the max from your Bourns?
The quoted CPR is based on using only one of the following:
1) rising edge A
2) falling edge A
3) rising edge B
4) falling edge B
Counting each and every one of them gives you 512 ticks per revolution with a “128 CPR” encoder.
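If you ever decode in software, the difference is just which transitions you count. A toy sketch (channel levels passed in from however you read the digital inputs; direction handling omitted to keep the idea clear):

// Toy software quadrature decoder illustrating 1x vs. 4x counting.
int count1x = 0;   // rising edges of A only  -> 128 ticks/rev on a "128 CPR" encoder
int count4x = 0;   // every edge of A or B    -> 512 ticks/rev on the same encoder

void PollQuadrature(bool a, bool b)
{
    static bool lastA = false, lastB = false;

    if (a && !lastA)                 // rising edge of channel A
        ++count1x;

    if (a != lastA || b != lastB)    // any edge on either channel
        ++count4x;

    lastA = a;
    lastB = b;
}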
We’re using a 50 CPR US Digital S5 encoder. We switched over to this from the S4 based on a recommendation from our friends on the Poofs. We are extremely happy with the S5 (particularly the ball-bearing version), and will only be using that encoder for applications similar to our shooter wheel in the future.
Depending on how much processing/filtering is performed on the encoder results, using anything except the 1x mode could introduce jitter. Picking one transition and sticking with it gives the “cleanest” measurement.
Given a 128 CPR encoder running at approx 3280 rpm (per post #1), there will be approx 70 counts every 10 ms.
If a 10 ms timed loop is used and the rate is computed by reading the raw sample count and dividing by the sample time, a 1-count jitter will equal approx 1/70 of 3280, or about 47 rpm of jitter.
Using this processing method, decoding at 4x instead of 1x should reduce the jitter, although perhaps not by a factor of 4 (because of the tolerances of the edge locations).
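Worked out in code, that arithmetic looks like this (numbers taken from the posts above):

// The jitter arithmetic above, worked out explicitly.
#include <cstdio>

int main()
{
    double cpr  = 128.0;    // counts per rev, 1x decoding
    double rpm  = 3280.0;   // shooter speed from post #1
    double loop = 0.010;    // 10 ms timed loop

    double countsPerLoop = cpr * (rpm / 60.0) * loop;   // ~70 counts per 10 ms
    double rpmPerCount   = 60.0 / (cpr * loop);         // rpm change per 1-count jitter
    std::printf("counts per loop: %.1f, jitter per count: %.1f rpm\n",
                countsPerLoop, rpmPerCount);            // ~70 and ~47 rpm
    return 0;
}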
What kind of ± resolution on the RPM are you looking to achieve? And at what frequency do you want to know the current RPM of the shooter?
I would suggest creating an Excel document to calculate how various changes to the CPR and sampling time interval affect the resolution of the shooter speed output. Depending on what you are looking for, you may be able to achieve your desired results without hardware changes.
For example, with a 128 CPR encoder (and 1x decoding) sampling over a 100 ms time interval, you can achieve a resolution of 4.6875 rpm per encoder count, while sampling over only a 30 ms window yields a resolution of 15.625 rpm per encoder count.
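If a spreadsheet isn’t handy, a few lines of code can print the same table (the CPR and window values here are just examples):

// Prints the rpm resolution per encoder count for a few CPR / sample-window
// combinations -- the same table you would build in the spreadsheet.
#include <cstdio>

int main()
{
    const double cprs[]    = {128.0, 250.0, 512.0};   // 1x decoding
    const double windows[] = {0.010, 0.030, 0.100};   // seconds

    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            std::printf("CPR %3.0f, window %3.0f ms: %7.3f rpm per count\n",
                        cprs[i], windows[j] * 1000.0, 60.0 / (cprs[i] * windows[j]));
    return 0;
}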
Qty:          1
Mfr:          US Digital
Part Number:  S5-250-250-NE-S-B
Price:        $70.65
Description:  S5 series, 250 CPR, 1/4" shaft, no index, single-ended, ball bearing
Usage:        Shooter RPM
Art, excellent questions. Obviously, I’d like the tightest RPM control I can get. Working backwards, I’d like to be no more than +/-6 inches in height, and I still suspect that’s more than many top teams are aiming for. That means my shooter velocity can vary from 35 ft/s to 34.3 ft/s.
That means my rpm may vary from about 2674 to 2619 (single wheel)
In retrospect, that means the 30 ms window with the 128-count encoder we’d been trying to use eats up a huge chunk (nearly 30%) of our tolerance. 5% seems like a more reasonable number, which means a 512-count encoder should get us fairly close to a 5% measurement error over 30 ms.
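Sanity-checking that with the numbers above (a quick sketch of the arithmetic):

// Quick check of measurement resolution against the 2619-2674 rpm band above.
#include <cstdio>

int main()
{
    double band   = 2674.0 - 2619.0;           // ~55 rpm of allowable variation
    double res128 = 60.0 / (128.0 * 0.030);    // 15.6 rpm per count over 30 ms
    double res512 = 60.0 / (512.0 * 0.030);    //  3.9 rpm per count over 30 ms

    std::printf("128 CPR: %.0f%% of the band, 512 CPR: %.0f%% of the band\n",
                100.0 * res128 / band, 100.0 * res512 / band);   // ~28% and ~7%
    return 0;
}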
Obviously I’m trying to minimize calculation (delay) time. The fewer cycles required to get an accurate reading, the less ‘lag’ between a change in the system and your response to that change. Where is the sweet spot, based on shooter momentum and motor update rate? Heck if I know…
10 ms timed loop? Is that because of vision tracking? Why does it need to be timed?
First, I’d like to mention a problem we ran into previously: the vibration of the shooter ruptured the encoder’s casing, which then caused it to lose counts at higher speeds as centripetal force made the encoder wheel lose contact during certain phase intervals. We had to tape the outer casing down tight to keep the problem to a minimum, and then needed a priority averager to erase the symptom. The priority averager looks like this:
#include <queue>    // std::priority_queue
#include <cstddef>  // size_t

// Filters out low "dropped count" readings by always returning the largest of
// the recent samples (std::priority_queue is a max-heap).  The queue is
// periodically flushed so the accumulated low readings ("bad apples") do not
// linger forever.
class Priority_Averager
{
    private:
        std::priority_queue<double> m_queue;
        const size_t m_SampleSize;
        const double m_PurgePercent;
        double m_CurrentBadApple_Percentage;
        size_t m_Iteration_Counter;
        void flush()
        {
            while (!m_queue.empty())
                m_queue.pop();
        }
    public:
        Priority_Averager(size_t SampleSize, double PurgePercent) :
            m_SampleSize(SampleSize), m_PurgePercent(PurgePercent),
            m_CurrentBadApple_Percentage(0.0), m_Iteration_Counter(0)
        {
        }
        double operator()(double newItem)
        {
            m_queue.push(newItem);
            double ret = m_queue.top();
            if (m_queue.size() > m_SampleSize)
                m_queue.pop();
            // Now to manage when to purge the bad apples
            m_Iteration_Counter++;
            if ((m_Iteration_Counter % m_SampleSize) == 0)
            {
                m_CurrentBadApple_Percentage += m_PurgePercent;
                //printf(" p=%.2f ", m_CurrentBadApple_Percentage);
                if (m_CurrentBadApple_Percentage >= 1.0)
                {
                    // Time to purge all the bad apples
                    flush();
                    m_queue.push(ret);  // put one good apple back in to start the cycle over
                    m_CurrentBadApple_Percentage -= 1.0;
                    //printf(" p=%.2f ", m_CurrentBadApple_Percentage);
                }
            }
            return ret;
        }
};
This got rid of that symptom, but we still had the typical noise, so we used a Kalman filter:
// Simple 1-D Kalman filter used to smooth the rate readings.
// (Class declaration inferred from the member functions below.)
class KalmanFilter
{
    public:
        KalmanFilter();
        void Reset();
        double operator()(double input);
    private:
        double m_Q, m_R;       // process and measurement noise
        double m_x_est_last;   // last state estimate
        double m_last;         // last error covariance (P)
        bool m_FirstRun;
};

void KalmanFilter::Reset()
{
    m_FirstRun = true;
    // initial values for the kalman filter
    m_x_est_last = 0.0;
    m_last = 0.0;
}

KalmanFilter::KalmanFilter() : m_Q(0.022), m_R(0.617)  // set up Q and R as the noise in the system
{
    Reset();  // start m_FirstRun and the estimates in a known state
}

double KalmanFilter::operator()(double input)
{
    // For the first run, set the last estimate to the measured value
    if (m_FirstRun)
    {
        m_x_est_last = input;
        m_FirstRun = false;
    }
    // do a prediction
    double x_temp_est = m_x_est_last;
    double P_temp = m_last + m_Q;
    // calculate the Kalman gain
    double K = P_temp * (1.0 / (P_temp + m_R));
    // the 'noisy' value we measured
    double z_measured = input;
    // correct
    double x_est = x_temp_est + K * (z_measured - x_temp_est);
    double P = (1 - K) * P_temp;
    // update our last's
    m_last = P;
    m_x_est_last = x_est;
    // Test for NaN (a NaN compares false both ways; this also matches exactly 0.0, which is harmless)
    if ((!(m_x_est_last > 0.0)) && (!(m_x_est_last < 0.0)))
        m_x_est_last = 0;
    return x_est;
}
And the typical averager:
#include <cstddef>  // NULL

// A templated averager; make sure the type being averaged can handle the +, -, and / operators
template<class T, unsigned NUMELEMENTS>
class Averager
{
    public:
        Averager() : m_array(NULL), m_currIndex(-1)
        {
            if (NUMELEMENTS > 1)
                m_array = new T[NUMELEMENTS];
        }
        virtual ~Averager() { if (m_array) delete[] m_array; }
        T GetAverage(T newItem)
        {
            if (!m_array)  // We are not really using the Averager
                return newItem;
            // If this is the first call, seed the array and running sum with this value
            if (m_currIndex == -1)
            {
                m_array[0] = newItem;
                m_currIndex = -2;
                m_sum = newItem;
                return newItem;
            }
            else if (m_currIndex < -1)
            {
                // We have not populated the array for the first time yet, still populating
                m_sum += newItem;
                int arrayIndex = (m_currIndex * -1) - 1;
                m_array[arrayIndex] = newItem;
                // Keep counting backwards until we have filled the array
                if (arrayIndex == (int)(NUMELEMENTS - 1))  // This means we have filled the array
                    m_currIndex = 0;  // Start taking from the array next time
                else
                    --m_currIndex;
                // Return the average based on what we have collected so far
                return (m_sum / (arrayIndex + 1));
            }
            else  // 0 or greater, the array is full; run as a circular buffer
            {
                m_sum += newItem;
                m_sum -= m_array[m_currIndex];
                m_array[m_currIndex] = newItem;
                ++m_currIndex;
                if (m_currIndex == (int)NUMELEMENTS)
                    m_currIndex = 0;
                return (m_sum / NUMELEMENTS);
            }
        }
        void Reset() { m_currIndex = -1; }
    private:
        T* m_array;
        int m_currIndex;
        T m_sum;
};
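Chained together, they get applied in order to each new rate sample, roughly like this (a sketch only; the sample sizes and purge percent here are example values, not necessarily what we ran):

// How the three filters are chained on each 10 ms iteration (sketch only --
// rawRate comes from whatever rate calculation feeds your PID).
Priority_Averager priorityFilter(8, 0.15);   // sample size / purge percent are example values
KalmanFilter      kalmanFilter;
Averager<double, 5> averager;                // 5-sample window as an example

double FilterRate(double rawRate)
{
    double rate = priorityFilter(rawRate);   // knock out dropped-count "bad apples"
    rate = kalmanFilter(rate);               // smooth the remaining noise
    rate = averager.GetAverage(rate);        // final rolling average
    return rate;                             // feed this to the velocity PID
}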
With the three averagers working together our encoder readings look like this:
Each pixel represents a 10 ms iteration… for completeness, the green is the voltage applied while the yellow is the voltage scaler from the PID (actually PD). Notice how quickly the voltage peaks at full voltage… then drops when the encoder reading got too high… I cannot explain why it settles at a fixed offset below the desired velocity, but fortunately that issue shouldn’t be attributed to the averagers. I suspect it is due to the tolerance size (I’ll need to review the code).
It doesn’t look like it’s holding the final speed very well (see portion circled in red). Do you have a screenshot with a longer time axis showing whether it settles out?
Here is the same exact run… I fixed a bug in the graph program so it can process multiple bitmaps when the dump is too long, so we can see most of it this time.
The 4.4 is due to poor PID tuning and other mechanical issues with the shooter, and should have nothing to do with the averagers giving us a better encoder reading… I could probably get it to 2.2 if I spent more time tuning the PID, which I did not have during the competition. I really need to find and post what the encoder readings looked like before these averagers were applied… unfortunately I do not have them on this machine.