Long-time CD lurker, first-time poster, so apologies if I've missed any previous topics or etiquette. I'd like to share our experiences with several key components of our 2023 robot, Anura. Some are great, and some are … mixed.
Falcon 500 (v3)
We are feeling very mixed on this particular motor… we noticed several QC issues and have a few concerns, including:
- Shaft run-out: All 8 of the initial Falcon 500s on our Swerve X modules suffered pretty severe run-out/wobble on their output shafts. We ended up using all 8 of our spare Falcon 500s between CVR and Sacramento to head off any potential issues between Sacramento and Champs. This may have been exacerbated by the belts in our initial Swerve X configuration pulling the shafts into tension, but we've seen/heard of other FRC teams hitting the same issue. Run-out this bad (we could visibly see the shaft shifting about .010-.020" off axis) is not ideal at all, and our team was extremely unhappy with this QC, especially since we couldn't get replacement spline shafts for the v3 during the season…
- GND shorted to motor chassis: We had one Falcon 500 v3 intermittently short its GND line to the motor chassis (see pic below of the disassembled motor). We initially failed inspection at CVR on the ground-chassis isolation check, and didn't fully fix the issue until we discovered one of the Falcons on our swerve drive measured an initial 3.4 kΩ from GND to chassis, with the resistance changing as the shaft was rotated or the motor disassembled. The pic shows 13.4 kΩ, but the reading would change seemingly at random (either OL or somewhere between 3 kΩ and 20 kΩ)… a pain to track down fully until we got back to the lab from CVR.
After replacing the offending motor, we never had another GND/Chassis isolation fault…
- Price change from ~$180 → $220: We felt very jaded having to pay an extra ~$40 per motor after already placing an order for 10 Falcon 500s back in October 2022. I understand that with supply-chain issues it can be difficult to honor the original price; however, we had budgeted $180 per motor. We still ended up purchasing 6 more motors to have a full set of backups for our swerve drive, and were thankful to have them on hand as replacements…
- v5 vs Pro API: The $220 pill is especially hard to swallow when you factor in an extra $10 for Phoenix Pro (or $100 for a CANivore bus) for FOC and an “improved API”. Our lead technical mentor emailed CTRE raising concerns about them maintaining two code-bases, got criticized for misspelling a few words, and was assured Vex/CTRE had it under control. After he sent a more detailed and lengthy response (with spell-check on…), there was no reply… We ran into a few teams running Phoenix Pro that were several firmware versions behind because newer firmware introduced issues they hadn't seen before… which maybe confirms our concerns. Why must we pay an extra $10 on top of a ~$220 motor to unlock all of its torque performance?? (A rough sketch of how this API split lands on the team side is just below this list.)
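For what it's worth, the split shows up on the team side too: if you want to evaluate Pro on some mechanisms while keeping v5 elsewhere (or stay ready to roll back), you end up writing an abstraction layer yourself. This is a minimal, hypothetical sketch of that idea; the class and method names below are our own, not Phoenix classes, and the actual v5/Pro calls are left as comments rather than guessed at.

#include <memory>

// One interface our subsystems would talk to, regardless of vendor API.
class IDriveMotor {
 public:
  virtual ~IDriveMotor() = default;
  virtual void SetDutyCycle(double output) = 0;  // -1.0 .. 1.0
  virtual double GetVelocityRPM() = 0;
};

// Backend wrapping the free Phoenix v5 API (vendor calls elided).
class V5DriveMotor : public IDriveMotor {
 public:
  explicit V5DriveMotor(int canId) : m_canId(canId) {}
  void SetDutyCycle(double /*output*/) override { /* v5 percent-output call on m_canId */ }
  double GetVelocityRPM() override { return 0.0; /* v5 velocity read */ }
 private:
  int m_canId;
};

// Backend wrapping Phoenix Pro with FOC enabled (vendor calls elided).
class ProDriveMotor : public IDriveMotor {
 public:
  explicit ProDriveMotor(int canId) : m_canId(canId) {}
  void SetDutyCycle(double /*output*/) override { /* Pro duty-cycle request, FOC on */ }
  double GetVelocityRPM() override { return 0.0; /* Pro velocity signal */ }
 private:
  int m_canId;
};

// Subsystems only ever hold the interface:
// std::unique_ptr<IDriveMotor> motor = std::make_unique<V5DriveMotor>(1);

Every wrapper like this is code a student has to write and keep working twice over, which is part of why the two-API situation worries us.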
We are extremely dissatisfied and will be running exclusively NEO/REV brushless motors until these issues are resolved. While we never had a Falcon 500 fail during a match, we still have to raise these concerns, especially considering the “cloud of internal issues” around Vex. Some may say we're handicapping ourselves, but we cannot support a company that doesn't listen, so I'm hoping raising awareness will help drive change here.
RoboRIO 2.0
Initially, we hadn't had any issues with the new RoboRIO 2.0 until it mattered the most. We had to repair our intake arm before Q73 (our first match of Friday) in the Hopper division, and barely got the repair done ~2-4 matches beforehand. We went to turn on our robot… and the status LED blinked red constantly and the RIO refused to boot. That was no problem in itself — we had prepared with a spare RoboRIO 2.0 and a backup MicroSD card.
We had not yet installed the provided 4GB industrial MicroSD card (with the image pre-installed) in the spare RIO 2.0, and went to put it in right before pulling the old RIO off… the card slipped to the left of the actual slot and into the housing, essentially eating our MicroSD card and rendering the swap futile. Replacing the RIO is relatively simple on our robot: just the four screws, RSL, CAN, power, USB, and Ethernet. We missed our match and our alliance lost by 23 points we could have contributed toward… which dampened the team's mood, since it was the first time this year we couldn't put a bot on the field during qualifications.
Afterwards, we ended up using a RIO 1.0, never had another issue during champs @ Hopper, and noticed a few differences:
- Code changes/deploys happen much faster (as does the initial comms connection): Why would an older revision give us fewer issues and faster code deployments than the “improved” RIO 2.0? After talking to several CSAs and getting some help, we learned that teams have replaced the stock MicroSD with a higher-quality card (e.g. SanDisk) and seen fewer comms/code-deployment issues (we haven't verified this ourselves… yet). Which leads to the question: why not give teams a high-quality MicroSD card to start? I understand there may be sourcing/pricing constraints with other MicroSD cards, but would it really add that much cost to provide a better one?
- Conformal coating differences: When we opened up the RoboRIO 2.0 for a post-failure inspection, the CSAs (plus Team 3357 next to us; shout out to y'all, you are SUPER cool and an awesome team to talk to) found a fairly large metal shaving inside that was preventing it from booting. After removing the shaving, the CSA said the RIO 2.0 booted successfully 4 times without issue. In years past, we've opened a RIO 1.0 and found many more metal shavings inside the device, but without any failures. It seems the conformal coating used on the newer devices isn't as robust as on the RIO 1.0, which appeared far less prone to having internal components shorted out by debris. We do take measures on our bot to protect potential ingress points (e.g. putting electrical tape over PWM connectors and protecting the RIO during mechanical operations), but metal shavings still found a way inside. We recognize that in the future, mounting the RIO 2.0 in a non-horizontal orientation (e.g. upside-down or on the side of the bot) should prevent this kind of failure, but for other teams we feel this could create a “foot-gun” situation. We'll be keeping a closer eye on mounting positions and adding even more protection around the RIO 2.0…
- Extra memory/CPU not necessary (in our case): We found the RIO 1.0 handled our robot code just fine without any changes to the code. We use C++ and sit around 30-50% CPU usage and ~30% CAN utilization (we use a CANivore as well). So we'll most likely stick with the RIO 1.0 for the foreseeable future, until an upgrade becomes necessary (or the processor changes). (A small CAN-utilization logging sketch follows this list.)
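For anyone curious how to watch that utilization figure between matches, here's a minimal sketch of logging the RIO's onboard CAN bus status to the dashboard. As far as we understand, this HAL call only covers the RIO's built-in bus, so CANivore traffic still has to be checked through CTRE's own tooling.

#include <frc/RobotController.h>
#include <frc/smartdashboard/SmartDashboard.h>

// Call periodically (e.g. from RobotPeriodic) to watch load on the RIO's
// onboard CAN bus.
void LogRioCanUtilization() {
  frc::CANStatus status = frc::RobotController::GetCANStatus();
  // percentBusUtilization is whatever fraction the HAL reports for the bus.
  frc::SmartDashboard::PutNumber("RIO CAN Utilization", status.percentBusUtilization);
  frc::SmartDashboard::PutNumber("RIO CAN Bus Off Count", status.busOffCount);
  frc::SmartDashboard::PutNumber("RIO CAN RX Error Count", status.receiveErrorCount);
}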
So we'd like to hear from other teams that have experienced RIO 2.0 failures. We heard from CSAs that it was a “blood-bath” of RIO failures @ Champs, with 40+ that they personally dealt with, and probably more that went unnoticed or were repaired in the pits by teams without help.
Pigeon 2.0 and CANivore
We are overall happy with the performance of these devices and the lack of issues: our CANivore and Pigeon never experienced any hardware or electrical failures throughout all of our practice and competitions.
However, one firmware update for the Pigeon 2.0 caught our team off-guard… the yaw axis from Pigeon::GetRotation2d() was seemingly inverted by an update we applied before Champs. We now know from this experience to triple-check drivability before bringing the bot to Champs, but at the time we were focused on integrating the PathPlanner library into our code-base (shout out to the developers here — the PathPlanner app is awesome and the library is easy to use). We had only been resetting the bot's position by driving back in a straight line or pulling the bot back into place, so we didn't notice the error in our field-oriented drive. We failed to perform at all during our practice match and first two qualification matches because the inversion confused our drivers (the bot would go in the opposite direction than expected when pressing forward), and we accidentally caused our teammate to flip over during our practice match while they were going over the charge station (truly sorry, 2046).
So we'd call this a wash: we should have tested better after applying updates, but a simple change-log window shown when applying a firmware update would be very helpful, as a reminder to us and other teams of changes that could affect the robot.
For reference, here is the field-oriented drive method from our swerve code base:
// Pass in speeds and rotation, get back module states.
// This is the version of the function that uses feedforward control.
void SwerveDrive::Drive(units::meters_per_second_t xSpeed,
                        units::meters_per_second_t ySpeed,
                        units::radians_per_second_t rot,
                        bool fieldRelative) {
  UpdateEstimator();

  auto states = SwerveDriveParameters::kinematics.ToSwerveModuleStates(
      fieldRelative ? frc::ChassisSpeeds::FromFieldRelativeSpeeds(
                          xSpeed, ySpeed, rot, -m_pigeon.GetRotation2d())
                    : frc::ChassisSpeeds{xSpeed, ySpeed, rot});
  SwerveDriveParameters::kinematics.DesaturateWheelSpeeds(
      &states, SwerveDriveConstants::kMaxSpeed);
  auto [fl, fr, bl, br] = states;

  frc::SmartDashboard::PutNumber("FL speed", (double)fl.speed);
  frc::SmartDashboard::PutNumber("FL angle", (double)fl.angle.Degrees());
  frc::SmartDashboard::PutNumber("FR speed", (double)fr.speed);
  frc::SmartDashboard::PutNumber("FR angle", (double)fr.angle.Degrees());
  frc::SmartDashboard::PutNumber("BL speed", (double)bl.speed);
  frc::SmartDashboard::PutNumber("BL angle", (double)bl.angle.Degrees());
  frc::SmartDashboard::PutNumber("BR speed", (double)br.speed);
  frc::SmartDashboard::PutNumber("BR angle", (double)br.angle.Degrees());
  frc::SmartDashboard::PutNumber("X speed", (double)xSpeed);
  frc::SmartDashboard::PutNumber("Y speed", (double)ySpeed);
  frc::SmartDashboard::PutNumber("Rotation", (double)(SwerveDriveConstants::DEGToRAD * rot));

  // Desired states to be logged in AdvantageScope (requires a different format).
  double AdvantageScopeDesiredStates[] =
      {(double)fl.angle.Degrees(), (double)fl.speed,
       (double)fr.angle.Degrees(), (double)fr.speed,
       (double)bl.angle.Degrees(), (double)bl.speed,
       (double)br.angle.Degrees(), (double)br.speed};
  frc::SmartDashboard::PutNumberArray("AdvantageScope Desired States",
                                      AdvantageScopeDesiredStates);

  SetModuleStates(states);
}
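Given how that sign flip bit us, a quick check we plan to run after any Pigeon firmware update is to spin the bot counter-clockwise by hand and confirm the reported heading increases (WPILib treats CCW as positive). This is a hypothetical helper, assuming the same m_pigeon member and includes used in the Drive() method above:

// Hypothetical post-firmware-update check: rotate the robot CCW by hand and
// confirm the logged heading increases; if it decreases, field-oriented drive
// will feel inverted, which is exactly what happened to us at Champs.
void SwerveDrive::LogHeadingSanityCheck() {
  frc::Rotation2d heading = m_pigeon.GetRotation2d();
  frc::SmartDashboard::PutNumber("Pigeon Yaw (deg)", heading.Degrees().value());
}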
Swerve X (Tube-mount, Flipped)
We chose the Swerve X from WCP, and after initial drive issues at CVR (due to incorrect assembly on our part) we have had zero issues/failures with the modules themselves. We started with a 6.55:1 pulley configuration on our competition robot, but after browning out and noticing severe shaft run-out on our Falcon 500s after CVR, we switched to the 8.10:1 gear kit. That was the best single improvement to drivability on our ~124 lb bot, and we'd use it again in the future.
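For context on that ratio change, here's the back-of-the-envelope free-speed comparison; the ~6380 RPM Falcon free speed and 4 in wheel diameter are assumptions based on published specs, and real top speed will be lower under load:

// Rough free-speed comparison between the two Swerve X ratios we ran.
// Assumes ~6380 RPM Falcon 500 free speed and 4 in (0.1016 m) wheels.
#include <cstdio>
#include <numbers>

int main() {
  constexpr double kMotorFreeSpeedRpm = 6380.0;  // assumed Falcon 500 spec
  constexpr double kWheelDiameterM = 0.1016;     // assumed 4 in wheel
  constexpr double kRatios[] = {6.55, 8.10};
  for (double ratio : kRatios) {
    double wheelRevPerSec = (kMotorFreeSpeedRpm / ratio) / 60.0;
    double freeSpeedMps = wheelRevPerSec * std::numbers::pi * kWheelDiameterM;
    std::printf("%.2f:1 -> %.2f m/s theoretical free speed\n", ratio, freeSpeedMps);
  }
  return 0;
}

That works out to roughly 5.2 m/s at 6.55:1 versus 4.2 m/s at 8.10:1; giving up that top end for torque headroom is what cleaned up our brownouts.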
Overall, we made only two small modifications to the Swerve X modules: a small bearing washer under the idler plates to reduce vertical play in the top gears, and a small 3D-printed strap over the CANCoder to hold it in position relative to the magnet. The strap meant the CANCoder status LEDs stayed green throughout all of our competitions, and the encoders never fell out of calibration afterwards.
Shout out to @R.C. (and congrats again on winning champs) and the WCP team for putting together such a solid piece of kit; we'd recommend these modules to any team looking to get into swerve. As with any major drivetrain change, we'd recommend building a swerve test bot, as we did over the summer, to iron out any kinks with assembly and/or test code. There are quite a few gotchas that can come up, and assembly must be done with care and QC checks at every stage.
—
Overall we had a great season and a great experience at championships (even if we didn't rank high or get picked @ champs), and we thank everyone we met along the way. We'd like to hear about your experiences with COTS and other FRC components, since we hope our vendors will listen to our feedback. We apply the same reflection and criticism to our own robot design, and hope our vendors can do the same.
-James