I’m trying to help my team figure out how to wire an Adafruit 3mm beam break sensor to a Talon SRX motor controller to control a flywheel shooter.
The SRX is connected to the RIO via CAN.
I’ve looked through a number of posts, but have not seen the details of which wire goes to which pin.
We currently are able to pass the sensor signal to the RoboRIO and calculate an RPM.
Now we want to send the sensor signal directly to the SRX and use its processor to generate the feedback signal to the motor.
For the receiver, connect red and black for power and then the yellow wire to the digital input.
From the Talon SRX User’s Guide, the data port has the following pins: [pinout table not included in this post]
Should we connect the Yellow sensor wire to Pin 3 Analog input or to one of the Quadrature pins?
We’ve briefly tested a similar setup, except we used SparkFun’s photointerrupter sensor, connecting it to 5V, Gnd, and QuadA on the Talon. Word of warning on using IR-based sensors: we’ve found that sunlight and venue lighting can often oversaturate the receiver, causing you to lose counts. If you must use an IR-based sensor, try your best to isolate it from ambient light.
One quick word of warning…the Talon SRX calculates velocity as “change in position per 100ms”. This works well for providing smooth velocity measurements with quadrature encoders (where you are generating tens or hundreds of thousands of pulses per second), but may result in poor velocity resolution with a break-beam sensor and a small number of counts per revolution. For example, a 5000 rpm flywheel with 2 pieces of retroreflective tape will generate ~167 counts per second, or ~17 counts per 100 milliseconds.
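To put numbers on that resolution concern, here is a small back-of-the-envelope helper (plain Java; the method name is just for illustration, the 5000 rpm / 2-counts-per-rev figures are from the example above):

```java
public class VelocityResolution {
    // Counts the sensor produces inside one 100 ms measurement window.
    static double countsPer100ms(double rpm, double countsPerRev) {
        double revsPerSecond = rpm / 60.0;
        double countsPerSecond = revsPerSecond * countsPerRev;
        return countsPerSecond * 0.100; // 100 ms window
    }

    public static void main(String[] args) {
        // 5000 rpm flywheel, 2 pieces of tape -> ~16.7 counts per 100 ms,
        // so a single count of quantization is ~300 rpm of apparent speed.
        System.out.println(countsPer100ms(5000, 2));
    }
}
```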
Usually for the latter configuration, you are better off measuring the time period between consecutive rising or falling edges rather than differentiating position over time. You can do this on the RoboRIO, but not on the Talon SRX (at least on current firmware as of today).
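As a sketch of the period-measurement idea, assuming you already have the time between consecutive rising edges (on the RoboRIO this could come from interrupt timestamps), the conversion to rpm is just a reciprocal and a scale factor:

```java
public class PeriodVelocity {
    // RPM from the time between two consecutive rising edges,
    // given how many pulses the sensor produces per revolution.
    static double rpmFromPeriod(double periodSeconds, int pulsesPerRev) {
        double pulsesPerSecond = 1.0 / periodSeconds;
        double revsPerSecond = pulsesPerSecond / pulsesPerRev;
        return revsPerSecond * 60.0;
    }

    public static void main(String[] args) {
        // 6 spokes at 3000 rpm -> one pulse every 1/300 s (~3.33 ms)
        System.out.println(rpmFromPeriod(1.0 / 300.0, 6)); // ~3000 rpm
    }
}
```

Note that the resolution of this approach is set by your timer accuracy rather than by the counts-per-window quantization.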
Not to speak for Jared, but this is my understanding of the issue.
The Talon is counting pulses and uses the pulse count as a proxy for velocity/rpm. In your scenario you are initially going to have ~30 pulses per 100ms (3000 rpm / 600 = 5 revs per 100ms; 5 x 6 spokes = 30 pulses per 100ms).
Another way to think about this: every pulse in a given 100ms window represents 100 rpm. If you count pulses to determine speed, you are going to have +/-100 rpm accuracy even if everything else is perfect (which it won’t be).
Rather than using the Talon to run the PID, you can determine velocity by measuring the time between spokes: at 3000 rpm that is 100ms/30 = 3.3ms per pulse, versus 100ms/35 = 2.9ms at 3500 rpm. If you can reliably measure changes with better accuracy than 0.4ms (3.3 - 2.9 = 0.4ms) you should be able to determine velocity with greater accuracy than the Talon.
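Here is that arithmetic as a quick sketch (plain Java; method names are just illustrative, same 6-spoke wheel as above):

```java
public class SpokeTiming {
    // Time between spokes, in milliseconds, for a wheel with the
    // given rpm and number of spokes.
    static double spokePeriodMs(double rpm, int spokes) {
        double pulsesPerSecond = (rpm / 60.0) * spokes;
        return 1000.0 / pulsesPerSecond;
    }

    public static void main(String[] args) {
        double at3000 = spokePeriodMs(3000, 6); // ~3.33 ms
        double at3500 = spokePeriodMs(3500, 6); // ~2.86 ms
        // A 500 rpm change only moves the period by ~0.48 ms, so the
        // timer needs sub-millisecond resolution to resolve it.
        System.out.println(at3000 - at3500);
    }
}
```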
I have not done this, but I would run some tests; I would probably use an InterruptableSensorBase override and use the readRisingTimestamp() call to collect the data.
If Java/Linux collude to reduce accuracy, I might also consider using an Arduino/Teensy to collect this information and report it back to the RIO via SPI/I2C.
We used a photoeye with two or three sections of reflector tape in 2013 for full-court shooter mode; speed was set to 3700 rpm, and max speed was around 4400 rpm on the CIM and wheel.
This year (2016) our shooter wheels used an AndyMark CIM encoder wired to the SRX, using the PIDF on the SRX. The CIMcoder is 20 counts per rev according to the website, but the SRX counts the transitions on both channels, so 80 counts per rotation of the wheel. This was enough resolution to work well.
If you use this method, please read the SRX encoder manual and make sure you use the pull-up resistors for the CIMcoder.
You should probably review the hardware portion of the manual anyway, as you may need pull-up resistors for the sensor you are using too.
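If you want to convert the SRX’s native velocity units (counts per 100ms) back to rpm for the 80-edge setup described above, it is a simple scale factor (a sketch; the constant names are mine, and 600 is just the number of 100ms windows in a minute):

```java
public class SrxUnits {
    static final double EDGES_PER_REV = 80.0;       // 20 CPR CIMcoder, 4x decoded
    static final double WINDOWS_PER_MINUTE = 600.0; // 100 ms windows per minute

    // Talon SRX reports velocity as edges per 100 ms window.
    static double nativeToRpm(double edgesPer100ms) {
        return edgesPer100ms * WINDOWS_PER_MINUTE / EDGES_PER_REV;
    }

    static double rpmToNative(double rpm) {
        return rpm * EDGES_PER_REV / WINDOWS_PER_MINUTE;
    }

    public static void main(String[] args) {
        System.out.println(nativeToRpm(400)); // 3000.0 rpm
    }
}
```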
Hi, I’m the lead programmer of Team 2976, same team as DaveL. I have never experimented with velocity calculations from raw encoder pulses, since I have always used the premade one in the encoder library. Normally I find the change in position over a constant time period…say 100 milliseconds. This usually works well for regular encoders that return 360 ticks per revolution. However, in the hardware setup we have, as DaveL explained, we only have 6 spokes and 6 ticks per revolution. Reading through your suggestions I have found 3 possible solutions:
1. Measure the change in position over a constant time (100ms). However, as phurley67 pointed out, this only gives us ±100 rpm, and the uncertainty increases with smaller sample times.
2. Measure the change in time between 2 successive pulses or ticks. Although I haven’t tried this, I calculate about a 3ms interval and I feel this will give lots of noise. Furthermore, I am also worried about CPU usage when throwing interrupts every 3ms.
3. Use an encoder. This is a good solution, except our team wants it done using a beam break. Nevertheless, it will be interesting to work with this.
My solution is to use a tracking loop, similar in spirit to a Kalman filter. The principle behind this is that our knowledge of the relationship between position and velocity (i.e. integrating velocity gives position) allows us to estimate the velocity with less noise.
Here is the outline of a tracking loop:
Basic physics tells us that integrating velocity results in displacement, or position. So I integrate a variable I call velocity_estimated; the result of this integration is the estimated position.
I feed this into a PI controller as the input and set its setpoint to the measured position and the controller will try to minimize the error by giving me more accurate values of velocity_estimated.
In a way this is recursive. I first calculate the estimated position by integrating the estimated velocity. I compare this value to the measured position and using a feedback loop get a more accurate estimate for velocity which I integrate once again…and the cycle continues.
Even though this is a roundabout way to get at the velocity, this is good: by working through the integrated position rather than differencing it, we smooth out the measurement noise and hope the errors cancel.
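Here is the structure of that tracking loop as a minimal Java sketch (the gains, timestep, and noise level are made up for illustration; see the linked simulation for the real version):

```java
public class TrackingLoop {
    // PI tracking loop: integrate the estimated velocity to get an
    // estimated position, then drive the position error to zero with
    // a PI controller whose output *is* the velocity estimate.
    private final double kP, kI, dt;
    private double estPos = 0, errIntegral = 0, estVel = 0;

    TrackingLoop(double kP, double kI, double dt) {
        this.kP = kP; this.kI = kI; this.dt = dt;
    }

    double update(double measuredPos) {
        double err = measuredPos - estPos;    // position error
        errIntegral += err * dt;
        estVel = kP * err + kI * errIntegral; // PI output is the velocity estimate
        estPos += estVel * dt;                // integrate to predict position
        return estVel;
    }

    public static void main(String[] args) {
        // Simulate a wheel spinning at a constant 50 rev/s with noisy
        // position measurements, sampled at 100 Hz.
        java.util.Random rng = new java.util.Random(42);
        TrackingLoop loop = new TrackingLoop(20.0, 100.0, 0.01);
        double truePos = 0, trueVel = 50.0, estVel = 0;
        for (int i = 0; i < 500; i++) {       // 5 seconds
            truePos += trueVel * 0.01;
            double measured = truePos + 0.05 * rng.nextGaussian();
            estVel = loop.update(measured);
        }
        System.out.println(estVel); // should settle near 50
    }
}
```

Because the loop has two integrators in its forward path, it tracks a constant-velocity (ramp) position input with zero steady-state error; the gains trade off noise rejection against how quickly the estimate follows real speed changes.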
See for yourself: here is a link to my GitHub page, where I created a simulation of a spinning wheel with the extra time I have since school is out for winter break. https://github.com/neilhazra/SimulateWheelShooter/blob/master/src/Simulate.java
If you run it, you will notice the estimated position is almost always closer to the hypothetical position_exact than the measured position is, and that velocity_estimated is more precise and consistent (with much less noise) than delta position / delta time.
This is not a perfect simulation; I assumed our error would be randomly distributed, which it definitely will not be. I will need to test it on a real robot soon.