REV vs. CTRE Coding

With the drop of the Falcons, one of the biggest reasons people are buying them (in my opinion) is the CTRE coding libraries, which aren't available for NEOs. As a team that currently has little programming experience with implementing any fancy code with encoders or smart motion, I was wondering what the difference is for a team that hasn't built up experience with CTRE. This year we want to start automating more of our teleop tasks and creating more complex autons (if auton is a part of the game :wink:). How difficult is it to learn/implement smart motion in both the REV and CTRE ecosystems? I'm asking because this is a factor for us in the NEOs vs. Falcons debate.

Disclaimer - I am not a programmer

1 Like

I think that some of the arguments for buying Falcon 500s over NEOs based on the API alone have been blown out of proportion. Both the CTRE and REV APIs support a lot of the same features (e.g., position control, velocity control, and one-dimensional trapezoidal motion profiling). Although the API calls may be different, we have verified that these features are functionally equivalent on both motor controllers.

The Spark MAX API is missing some “advanced” features such as 2D motion profiling, although it’s really not a good idea to run a 2D motion profiling algorithm, which has to coordinate several motors, as a control loop onboard a single motor controller anyway.

Furthermore, WPILib in 2020 will support several of these “smart” features out of the box, including trapezoidal motion profiles and trajectory tracking, so teams can run their control loops on the roboRIO. This also means the code is no longer a “black box”: students can read the source and understand what’s going on behind those API calls.
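For a sense of how approachable this is: the core of a trapezoidal profile is just piecewise kinematics. Here is a minimal sketch of that math (Python for brevity; `trapezoid_sample` and its parameters are illustrative names I made up, not WPILib's actual API, though WPILib's `TrapezoidProfile` class implements the same idea):

```python
# Minimal sketch of a rest-to-rest trapezoidal motion profile: accelerate
# at max_accel, cruise at max_vel, decelerate symmetrically. Names here
# are illustrative, not any vendor's or WPILib's actual API.

def trapezoid_sample(distance, max_vel, max_accel, t):
    """Position along the profile at time t."""
    t_accel = max_vel / max_accel             # time to reach cruise speed
    d_accel = 0.5 * max_accel * t_accel ** 2  # distance covered accelerating
    if 2 * d_accel > distance:
        # Short move: the profile is triangular and never reaches max_vel.
        t_accel = (distance / max_accel) ** 0.5
        max_vel = max_accel * t_accel
        d_accel = 0.5 * distance
    t_cruise = (distance - 2 * d_accel) / max_vel
    t_total = 2 * t_accel + t_cruise
    if t <= 0:
        return 0.0
    if t >= t_total:
        return distance
    if t < t_accel:                           # accelerating
        return 0.5 * max_accel * t ** 2
    if t < t_accel + t_cruise:                # cruising
        return d_accel + max_vel * (t - t_accel)
    dt = t_total - t                          # decelerating (mirror of accel)
    return distance - 0.5 * max_accel * dt ** 2
```

A controller (whether onboard a smart motor controller or on the roboRIO) samples something like this every loop iteration and feeds the result into a position loop as a moving setpoint.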

So personally, I don’t think that the API should be the “deciding factor” when choosing between the two motors. Availability, support, firmware quality, and other mechanical and electrical constraints are more important factors in my opinion.

21 Likes

Thanks for the input! I totally agree with you that other aspects are more important. This was just another factor that I wanted to consider, since everyone keeps saying how they were waiting for the VEX/CTRE motor and stuff like that. I am also asking from the standpoint of a team trying to implement fancy code stuff (my personal term for it) for the first time.

1 Like

For us, the CTRE libraries are a major reason why we chose not to use the NEOs last year and to use the Falcons this year. While of course it’s important to understand how everything works, having a “black box” that just works makes our lives a lot easier. At the beginning of the season last year the REV libraries were missing a lot and largely untested as REV’s first smart controller. They’re a lot more complete now, but still lacking some elements we make good use of. Since we never bought into the NEO ecosystem last year, it wasn’t a hard decision for us to continue with brushless motors in the CTRE ecosystem this year. I’m a bit disappointed with the rollout of the Falcon and the limited testing we’ll get with them, but with CTRE’s experience with smart motor controllers I’m fairly confident they’ll meet our expectations, and if not they can be switched for a Talon SRX and CIM without much trouble.

2 Likes

Do you have some data to support this? I know of a number of teams that have streamed motion profiles to multiple motor controllers to handle autos with great success.

You cannot correct for positional errors when streaming a motion profile to a motor controller. Once you have streamed the profile, each controller simply runs a position PID on “distance”. The problem here is that for the same “distance” traveled by each encoder, there are several (x, y, theta) endpoints that you can end up at. The WPILib trajectory tracking uses odometry to correct for positional (x, y, theta) error when tracking a trajectory.
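To make the distance-vs.-pose distinction concrete, here is a rough sketch of differential-drive odometry (the idea behind WPILib's `DifferentialDriveOdometry`; the function and parameter names are mine, not the library's):

```python
import math

# Sketch of one differential-drive odometry update: integrate the wheel
# distances from the last loop iteration into an (x, y, theta) pose.
# Names are illustrative, not WPILib's actual API.

def odometry_step(x, y, theta, d_left, d_right, track_width):
    """Advance the pose by one encoder sample (arc approximation)."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / track_width
    # Integrate along the chord of the arc (midpoint heading).
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Note that the same `(d_left, d_right)` stream fed in from a slightly wrong heading (say, after a moment of wheel slip) integrates to a completely different `(x, y)`. A per-controller position PID on “distance” can never see that error, let alone correct it; a pose-aware tracker on the roboRIO can.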

Furthermore, it is harder to write code to “snap out” of a streamed motion profile trajectory and switch to something like a Vision controller (which is what a lot of successful teams did in auto in 2019).

From a code structure perspective, it makes no sense for individual motor controllers to be controlling the robot’s overall global state. The global state should be handled by a controller on the roboRIO which can use various sensor inputs across the robot to more accurately control the position on the field.

6 Likes

The Talon SRX and Victor SPX rollouts (not to mention the entire 2015 control system) were well-executed, complete releases of functional hardware, and any bugs were fixed in-season.

REV told us all up front last year that they weren’t done coding when they started presales, and they are just now solving one of the critical bugs (instantiating new encoder objects when returning an encoder value).

Meanwhile CTRE/Vex found a manufacturing variance in the Falcon in December, and halted customer shipments to control it.

Since the shipment delay, I’ve placed parallel orders to fall back to a triple Mini CIM drive in case the Falcons don’t come through, but I am very confident in the product CTRE/VEX is selling and the actions they will take to deliver on their technical promises.
(Yes, the launch of the latest VEX EDR controller was not confidence-inspiring, but given their long track record in FRC, I still trust them for now.)

In addition to that, CAN bus software has improved by an order of magnitude in 2020. While previously a case could be made that the Motion Profile streaming API was useful in reducing CAN bus latency, this is a moot point in 2020: running WPILib Ramsete with 4 motors along with a WPILib TrapezoidProfile on an elevator resulted in only 20% CAN utilization according to beta teams.

This also gives you greater flexibility in adding sensor inputs and control to your mechanism, since replacing these roboRIO-side controllers with better or more accurate ones is far easier than switching from Motion Profile streaming to a custom PID.

While I fully understand the theoretical reasons why it might be better to handle everything on the rio, that isn’t exactly what I asked. A number of teams have been very successful streaming motion profiles on motor controllers. Telling people something is a bad idea gives them the idea that they shouldn’t do it, and that simply isn’t the case.

I’ll give you an example. LabVIEW teams don’t necessarily have access to a lot of the WPILib functions. For them, if they don’t have the ability on the team to emulate what WPILib is doing, then profiling on motor controllers is a perfectly good option.

I’m not trying to be pedantic, but when inexperienced teams hear “you shouldn’t do that,” they tend to take it seriously.

1 Like

449 used the streaming motion profile mode extensively in the past, with great success. I would never recommend a team use it now; there are a plethora of better options which have a much smaller code footprint and many fewer implementation constraints.

Inexperienced teams that have the option to do something other than the streamed motion profiling mode should definitely do that other thing.

3 Likes

I’m not arguing that the performance of streaming a profile to a motor controller is bad; I’m arguing that doing it on the roboRIO is better, not just theoretically, but practically. I would never recommend a rookie or inexperienced team (who is using a language other than LabVIEW) to stream motion profiles to the Talon SRX.

I would always recommend for them to read the WPILib examples and implement that, because the features are more extensive, the code is easier to understand and has a smaller footprint, the process is very well documented, and results in better performance.

1 Like

As a team using LabVIEW, are there features of the Talons that LabVIEW can’t access but are accessible with the Spark MAXes?

Nope. The APIs are similar, but the Talon API has 2 years of iteration behind it. And there’s nothing (excluding WPILib and libraries) that you can do in Java and C++ but can’t do in LabVIEW, at least when talking about vendor hardware.

If this is true in any way, then this is a bug. The three officially supported languages are supposed to have feature parity in FRC.

No, it’s not a bug. Quoting a WPI dev:

The frc-characterization application (LQR) will output gains that are usable in LabView, but in general, no, the Java/C++ additions for 2020 are not directly usable from LabView.

LabVIEW for YEARS has had much better PID and control algorithm support than WPILib text-based; 2020 is actually the first year the text-based languages are catching up. There are some things each can do that the other can’t, but if you look at the extended PID palette in LabVIEW, there are probably 15 different control algorithms and motion profiles in there that text-based doesn’t have.

This largely depends on the control scheme. If the OI controlling class has a “goes to position while driver holds” philosophy, with “cancel everything and hold position when driver lets off”, it’s very easy to snap out of anything - it’s just a matter of setting modes and forcing the controller to do our bidding.

Once you’ve smelled the lingering aura of two burnt 775Pros, you’ll consider switching to this philosophy.
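In sketch form, the scheme is just a mode switch evaluated every teleop loop (Python, with made-up names; the real version would set the motor controller's closed-loop mode and setpoint):

```python
from enum import Enum

# Sketch of "go to position while held, hold position on release".
# Mode and elevator_update are hypothetical names, not a real framework.

class Mode(Enum):
    POSITION = "position"  # closed-loop toward a commanded setpoint
    HOLD = "hold"          # hold wherever the mechanism currently is

def elevator_update(button_held, target, current):
    """One teleop iteration: choose the mode and the setpoint that gets
    sent to the motor controller's position loop. Releasing the button
    is the 'snap out' - whatever was running is abandoned instantly."""
    if button_held:
        return Mode.POSITION, target
    return Mode.HOLD, current
```

The same switch works for cancelling into a vision controller: the driver input just selects a different source for the setpoint.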

In 2019, my team abstracted the motor controller APIs away behind our own custom enumerations/interfaces. This allowed us to have Mini CIMs on the practice bot but switch to NEOs for weight savings on the comp bot. I wouldn’t recommend starting out a season this way, but it is a useful way to adapt to sudden necessary changes in-season, should the need arise.
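A rough sketch of that kind of abstraction (Python, with stand-in classes; a real version would wrap the actual CTRE/REV vendor objects, which are not modeled here):

```python
from abc import ABC, abstractmethod

# Sketch: subsystem code programs against a small interface, so swapping
# vendors touches one file. SimTalon/SimSparkMax are stand-ins that just
# record the setpoint - not real vendor classes.

class SmartMotor(ABC):
    @abstractmethod
    def set_position_setpoint(self, rotations: float) -> None: ...

    @abstractmethod
    def get_position(self) -> float: ...

class SimTalon(SmartMotor):
    def __init__(self):
        self._pos = 0.0
    def set_position_setpoint(self, rotations):
        self._pos = rotations  # real code: talon.set(ControlMode.Position, ...)
    def get_position(self):
        return self._pos

class SimSparkMax(SmartMotor):
    def __init__(self):
        self._pos = 0.0
    def set_position_setpoint(self, rotations):
        self._pos = rotations  # real code: pid_controller.setReference(...)
    def get_position(self):
        return self._pos

def raise_elevator(motor: SmartMotor, rotations: float) -> float:
    """Subsystem code sees only the interface, never the vendor type."""
    motor.set_position_setpoint(rotations)
    return motor.get_position()
```

Swapping Mini CIMs on Talons for NEOs on Spark MAXes then means changing which concrete class gets constructed, and nothing else.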

2 Likes

I would always recommend for them to read the WPILib examples and implement that, because the features are more extensive, the code is easier to understand and has a smaller footprint, the process is very well documented, and results in better performance.

Can you point at a specific code example you consider doing it the “right” way using WPILib? I find, as someone new to FRC coding, it’s difficult to find the best API and/or example to emulate.

I’m trying to get my team using motion profiling and ultimately path planning and following. I’ve watched a number of videos and studied the code of teams who have rolled their own. However, I would rather we start with the “official” WPILib way and build understanding before cloning some team’s code and tinkering.

There is a complete end-to-end trajectory tracking tutorial here.

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.