Quote:
Originally Posted by Ether
In my haste to be responsive I forgot to attach it. Here it is. It's just a screenshot of a portion of page 1 of the E4P datasheet, which is available here.
|
Thanks. So yes, my PPR definition is equivalent to US Digital's CPR definition.
Quote:
Originally Posted by Ether
So I am going to assume that the 500 PPR Cytron that you mentioned is actually a 500 CPR (using the US Digital and GrayHill definitions of CPR).
|
Correct.
Quote:
Originally Posted by Ether
You said you were controlling 2300 rpm wheel speed with that 500 CPR Cytron encoder, but you didn't mention how you were decoding the signal, so it's not clear how many counts per rev you were getting. I'll just assume for the moment that you were using 4X quadrature decoding, so you'd be getting 2000 counts per rev.
|
Correct, I am using 4x decoding.
Quote:
Originally Posted by Ether
You said you were polling the counts and computing the speed every 200ms. 2300 rpm is 7.7 revs every 200 ms. That's 15,333 counts every 200ms. So it's no wonder you are getting a clean speed signal: you are averaging the elapsed time of 15,333 counts to get your speed. That introduces lag. Lag in the sensor signal is generally not a good thing for closed loop control; it limits how fast you can make the response without causing oscillations.
|
I think we both agree the hardware is counting all of the pulses, with a max rate of 153 kHz. Let's assume for a second that the count is a continuous analog signal. I am simply sampling this analog signal every ~200 ms and then calculating the velocity between the two points.
Sampling the signal does not introduce any lag: if we were to lay the analog signal over the sampled version, all of the peaks would line up.
I introduce lag when I calculate the derivative of the signal in order to get velocity. I also introduce a small amount of error, because I am approximating the derivative between the two points with a straight line, but that is standard procedure in discrete differentiation, and it is a small error I can tolerate within my robust control.
As for the lag, my speed signal lags my position signal by ~200 ms in this example. However, lag is present in any discrete differentiation; you will always lag by at least one sample. Even using the FPGA to calculate the rate, you need to wait at least one clock period (granted, that period is much smaller, so the lag is less, but below I will explain why I believe 200 ms is tolerable).
Note: I choose to run my loop at 200 ms. I can run it much faster, at the cost of roughly +/- 6-10 rpm of noise, which is still acceptable for my control algorithms; it is less than 1% of my top speed.
So that covers lag: no matter whether we use a millisecond timer or a nanosecond timer, I can't get rid of lag as long as I am calculating velocity. I can shorten my sample time in order to reduce lag. I do not understand what you mean by averaging, because the points are not being averaged. Can you elaborate?
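To put that in concrete terms, my rate calculation is just a first difference over the sample period. Here is a simplified sketch in WPILib Java (not my exact code; the DIO ports and the counts-per-rev constant are placeholders for my 500 CPR, 4x-decoded setup):

```java
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.Timer;

// First-difference rate estimate: sample the count every ~200 ms and
// divide the change in revolutions by the measured elapsed time.
public class SoftwareRate {
    private static final double COUNTS_PER_REV = 2000.0; // 500 CPR x 4x decoding

    private final Encoder encoder = new Encoder(0, 1); // DIO ports are placeholders
    private int lastCount = 0;
    private double lastTime = Timer.getFPGATimestamp();

    /** Call once per control loop (~200 ms); returns wheel speed in rpm. */
    public double getRpm() {
        int count = encoder.get();
        double now = Timer.getFPGATimestamp(); // seconds
        double revs = (count - lastCount) / COUNTS_PER_REV;
        double dt = now - lastTime;
        lastCount = count;
        lastTime = now;
        return 60.0 * revs / dt; // rev/s -> rpm
    }
}
```

The speed estimate is attributed to the end of the interval, which is exactly the one-sample lag discussed above.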
Quote:
Originally Posted by Ether
Consider what would happen if you did this: use only one channel of the encoder, and configure the FPGA to count only the rising edges of that channel and to report the period based on the elapsed time between the 126 most recent counts. You would use the GetPeriod() method of the Counter class to get the period. The FPGA polls for rising edges at ~153KHz, and uses a 1MHz timer to measure the period.
|
I have never been able to get a clean signal from the getPeriod() function. This is what led me to write my own software rate function. Last year, for our 2012 robot, I counted rising edges only on a 256 PPR encoder and saw RPM spikes of +/- 50 RPM in some instances. That wasn't something I could tolerate, so my solution was to average the data: I averaged the last 10 samples to get a better signal.
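For reference, that averaging is just a sliding window over the most recent rate samples. A simplified sketch (the window of 10 matches what we ran; otherwise this is illustrative, not our exact code):

```java
// Sliding-window average of the last N rate samples. Smooths the noise at
// the cost of added lag (the output lags by roughly half the window).
public class MovingAverage {
    private final double[] window;
    private int index = 0;   // next slot to overwrite
    private int count = 0;   // samples seen so far, capped at window.length
    private double sum = 0.0;

    public MovingAverage(int size) {
        window = new double[size];
    }

    /** Add a new sample and return the current average. */
    public double add(double sample) {
        sum -= window[index]; // drop the oldest sample
        window[index] = sample;
        sum += sample;
        index = (index + 1) % window.length;
        if (count < window.length) {
            count++;
        }
        return sum / count;
    }
}
```

Each new RPM sample goes through add(), and the returned average is used as the feedback signal.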
I know other teams have also experienced similar issues, and in one post I read that NI recommends setting up an averager on the FPGA using its API.
I haven't looked into why the getPeriod() function produces such a noisy signal, but I suspect it is because FRC chooses to do the division in software instead of hardware, dividing by a constant; that software can be delayed, making the division inaccurate. FRC has had trouble producing a clean getPeriod() signal for years.
Quote:
Originally Posted by Ether
At 2300 rpm you should get single-digit rpm jitter with this setup, and with only 1/4 of a rev lag instead of seven and a half revs. You could ask for speeds at a 10ms rate and get a fresh reading each time.
With a clean, noise free speed signal with minimal lag (as described above) it becomes possible to use a bang-bang wheel speed control algorithm. That provides the fastest spinup and recovery times. The code is so simple that it can be run at 10ms without sucking up CPU time. No tuning is required, and the iteration rate of the control algorithm can be quite sloppy without affecting the control.
|
Simply using the getPeriod() method won't help, because I don't think it will produce the clean signal you mention without averaging (which introduces lag), for the reasons I mentioned above and from what we have personally experienced.
But my original question was: why do you think a millisecond timer is not accurate enough for FIRST, and what is driving you to accuracy requirements higher than a millisecond timer?
It seems like you would like to do bang-bang control. Typically, bang-bang oscillates around its setpoint because you only have on/off control, so to remedy the oscillation you want to sample as fast as you can and update the control law quickly.
But a couple of things here. First, why do I need to ask for a speed signal every 10 milliseconds? The Driver Station packets are sent every 20 milliseconds, so any calculations done faster than that are simply thrown out and never make their way to the robot, unless you run your code in a different thread with a faster loop than teleopPeriodic, for example. Is this what you are assuming?
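If so, I take it you mean something like a dedicated thread running its own 10 ms loop, independent of teleopPeriodic. A rough sketch of my understanding (hypothetical, not code we run):

```java
// Hypothetical fast control thread, independent of the ~20 ms teleop loop.
public class FastLoop implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            // Read the sensor and update the control law here, every ~10 ms.
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                running = false; // stop cleanly if interrupted
            }
        }
    }

    public void start() {
        new Thread(this, "FastLoop").start();
    }

    public void stop() {
        running = false;
    }
}
```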
Secondly, you are ignoring inertia and coulomb friction. Unloaded, my motors can react quickly, and there it makes sense to do the calculations quickly. However, once I start adding inertia to the motor (arms, gears, wheels), its reaction time drops drastically. So even though you are calculating a command signal every 10 ms or 20 ms, those signals are not actually moving the motor, because a motor with inertia has a reaction time of, say, 50 ms or 100 ms. For a shooter wheel the effect is reduced once you are at speed, but for position control you must overcome coulomb friction every time you start up. Why calculate a signal every 10 ms when you are not using it? My drivetrain doesn't react in 20 ms; I wish it did, however.
As for bang-bang control, it is a good way to control wheel velocity because of its simplicity, but you need to manage the oscillation around the setpoint. A better method might be a simple proportional controller to reduce that oscillation, but then you must deal with steady-state error. Going to PID means you must deal with integral windup and derivative noise. Every control method has its pros and cons for the application.
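For clarity, the bang-bang law itself is just a threshold compare. A sketch (assuming a shooter wheel that only ever needs positive drive):

```java
// Bang-bang speed control: full power below the setpoint, coast above it.
// Only sensible for loads that spin down on their own (e.g. a shooter wheel);
// never use it on a mechanism that can drive in both directions.
public double bangBang(double measuredRpm, double setpointRpm) {
    return (measuredRpm < setpointRpm) ? 1.0 : 0.0;
}
```

The simplicity is the appeal; the on/off chatter around the setpoint is the cost.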
But how fast is a fast recovery? And if you are sampling every 10 milliseconds, why do you need a timer that has better resolution than 1 millisecond?
If I can have a bang-bang controller that hypothetically recovers in 1 ms but oscillates around its setpoint so much that back-to-back shots vary, versus a PI controller that takes a little longer to get up to speed but is stable around its setpoint, which is the better controller?
Furthermore, what is the point of doing all that work at that fast a rate if the mechanism feeding my shooter can only send a disc through it every 500 ms at best, or every 1 second on average? See where I am going?
To each his own. We use a PID class that finishes its computation in 1-2 ms using the system clock; it provided recovery times just under 1 second last year, and about 2.5 seconds from a stop.
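For comparison, the core of a PI velocity loop is only a few lines. A sketch (placeholder gains and a crude anti-windup clamp; not our actual class):

```java
// Minimal PI velocity controller. Output is a motor command in [0, 1].
public class PiSpeedController {
    private final double kP;
    private final double kI;
    private double integral = 0.0;

    public PiSpeedController(double kP, double kI) {
        this.kP = kP;
        this.kI = kI;
    }

    /** dt is the loop period in seconds. */
    public double calculate(double setpointRpm, double measuredRpm, double dt) {
        double error = setpointRpm - measuredRpm;
        integral += error * dt;
        double output = kP * error + kI * integral;
        // Crude anti-windup: back out the integration when the output saturates.
        if (output > 1.0) {
            integral -= error * dt;
            output = 1.0;
        } else if (output < 0.0) {
            integral -= error * dt;
            output = 0.0;
        }
        return output;
    }
}
```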
My opinion is that nothing I have seen thus far in FIRST requires a faster software timer, especially since everything we do has the end result of moving some physical mass, which couldn't react faster than milliseconds anyway.
I would love a nanosecond timer, but it's more of a want; I don't see the requirement for one.
Quote:
Originally Posted by Ether
Here's a link to the use of a micro-second timer to measure the exit speed of the frisbee from the shooter. A highly accurate feedback signal of frisbee exit speed would make it possible to tune the shooter wheel speeds to maintain consistent frisbee speed.
|
It would be interesting to see if teams actually do this on the fly. The problem is not completely controllable, since you are not actively controlling all of the variables that determine muzzle velocity: compression, surface friction, and wheel contact time all play a huge role, but you would only be controlling wheel speed. Furthermore, at some point you reach diminishing returns, because the faster your wheel spins, the less time it is in contact with the disc, so less energy is transferred.
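That said, the measurement itself is simple: two beam-break sensors a known distance apart at the shooter exit, timestamped as the disc crosses each one. A hypothetical sketch (the sensor spacing and hardware are my assumptions, not what Ether's link describes):

```java
// Hypothetical frisbee exit-speed measurement: two beam-break sensors a
// known distance apart, each timestamped when the disc interrupts it.
public class ExitSpeed {
    private static final double SENSOR_SPACING_M = 0.10; // assumed 10 cm apart

    /** Timestamps in seconds (e.g. from a high-resolution FPGA timer). */
    public static double metersPerSecond(double tFirstBeam, double tSecondBeam) {
        return SENSOR_SPACING_M / (tSecondBeam - tFirstBeam);
    }
}
```

At roughly 10 m/s muzzle velocity and 10 cm spacing, the beam-to-beam interval is only about 10 ms, which is where the fine timer resolution comes in.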