You Probably Geared Your Drivetrain Wrong: Definitive Proof

I’d have to go and check scouting data, but I’d wager a 95%ile team averaged somewhere around 3-4 gears a match.

The extra speed is not the difference maker in performance there.


I’ll post further analysis of this later today, but for the sake of comparison, let’s say that we have a motor with 1000 RPM free speed and 100 N-m of stall torque. Manipulator 1 is geared 1:1 to the motor, manipulator 2 is geared 2:1.

When the motor of manipulator 2 is spinning at 90% of its free speed (900 RPM), it generates 10 N-m of torque at the motor and 20 N-m at the gearbox output, and the manipulator spins at 450 RPM.

For manipulator 1 at the same output speed of 450 RPM, the motor also spins at 450 RPM and generates 55 N-m of torque at both the motor and the gearbox output.
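The arithmetic above can be checked with a short script. This is a minimal sketch assuming the ideal linear torque-speed model (torque falls linearly from stall torque at zero speed to zero at free speed); the motor figures are the hypothetical ones from the example, not a real motor:

```python
# Ideal DC motor model: torque falls linearly from stall torque
# at 0 RPM to zero at free speed.
FREE_SPEED = 1000.0   # RPM (hypothetical motor from the example)
STALL_TORQUE = 100.0  # N-m

def motor_torque(motor_rpm):
    """Torque at the motor shaft for a given motor speed."""
    return STALL_TORQUE * (1.0 - motor_rpm / FREE_SPEED)

def output_torque(output_rpm, reduction):
    """Torque at the gearbox output; the reduction multiplies
    torque and divides speed."""
    motor_rpm = output_rpm * reduction
    return motor_torque(motor_rpm) * reduction

# Manipulator 2 (2:1) with its motor at 90% free speed -> 450 RPM output
print(round(output_torque(450.0, 2.0), 6))  # 20.0 N-m
# Manipulator 1 (1:1) at the same 450 RPM output
print(round(output_torque(450.0, 1.0), 6))  # 55.0 N-m
```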

Essentially, there is a crossover point where the back-EMF effects become strong enough to offset the gearbox's mechanical advantage.

For otherwise identical systems, a bit of algebra using the (negative) linear relationship between speed and torque shows that this crossover velocity is V₂V₁/(V₂+V₁). For free speeds of 19 ft/s and 14 ft/s, the crossover is at about 8.1 ft/s. Note this is the crossover for force at the wheels (i.e., acceleration), not for sprint time.
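To see where that formula comes from: with the linear model, wheel force for a drivetrain geared for free speed V scales as (1/V)(1 − v/V), since the torque multiplication scales inversely with free speed. Setting the two forces equal and solving gives v = V₁V₂/(V₁+V₂). A quick check of the quoted numbers (a sketch of the algebra, not tied to any particular motor):

```python
def crossover_speed(v1, v2):
    """Robot speed at which two otherwise identical drivetrains,
    geared for free speeds v1 and v2, produce equal wheel force.
    Below this speed the lower gearing pushes harder; above it,
    the higher gearing does."""
    return v1 * v2 / (v1 + v2)

print(crossover_speed(19.0, 14.0))  # ~8.06 ft/s, i.e. "about 8.1"
```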

For strategic design, absolutely. The issue is that there are several models for drivetrain friction, and even the simplest (no-friction) position-vs-time curve will also depend on the number and type of motors, the robot weight, and probably the wheel coefficient of friction, not just the two free speeds. I’ll see if I can put together a few sample cases this evening.
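A frictionless sprint-time sketch along those lines, using simple Euler integration. Every number here (robot weight, stall force, traction limit, sprint distance) is an illustrative assumption, not a measurement of any real robot; the stall force is scaled inversely with free speed so both gearings represent the same motors:

```python
# Frictionless sprint-time sketch: wheel force falls linearly from a
# stall force f0 to zero at free speed vf, capped by a traction limit.
# All parameters below are made-up illustrative values.

def sprint_time(distance_ft, vf, f0, mass_slug, traction_limit, dt=1e-4):
    """Time (s) to cover distance_ft from rest, Euler integration."""
    x = v = t = 0.0
    while x < distance_ft:
        force = min(f0 * (1.0 - v / vf), traction_limit)  # lbf
        v += (force / mass_slug) * dt
        x += v * dt
        t += dt
    return t

mass = 150.0 / 32.2      # 150 lb robot converted to slugs
mu_limit = 1.1 * 150.0   # traction-limited force, lbf (assumed mu = 1.1)
for vf, f0 in [(14.0, 400.0), (19.0, 400.0 * 14.0 / 19.0)]:
    print(f"{vf:4.1f} ft/s gearing: "
          f"{sprint_time(20.0, vf, f0, mass, mu_limit):.2f} s over 20 ft")
```

Swapping in different weights, sprint distances, or traction limits shows how sensitive the "which gearing wins" answer is to those assumptions.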


Good wager.

95th percentile @ Darwin 2017 = 4.63 gears per match


Interesting mathematics - I have always missed this. What we are after for a sprint is minimum transit time. So is it not true that you want to consider (de)acceleration, optimized for the game, then gear for the resulting top speed, including losses, that results? Plus one gets better control, more torque, at lower speeds when geared for a lower top speed, correct?

Deceleration is going to be really hard to quantify though. I just saw this mentioned in another thread, are you assuming coast mode deceleration, brake mode, or actual reverse input?

This is so, so wrong.

One of the best cycling robots in FRC, ever, was 610 in 2013. They drove cross field, every cycle, with a 4+2 CIM drive, geared at something like 10.5 FPS actual, optimized for sprint distance. They were consistently one of the top cyclers that year at every event they attended.

Eventually top speed becomes a limiting factor, but don’t fool yourself into thinking the top speed of your robot must be 15+ FPS in order to be competitive. This is the kind of thinking, “we have to do this or we lose”, that leads to teams over-reaching.

Edit: Someone beat me to this point, sorry, missed it.


It depends a lot on how you define this.

The only thing we can definitively say is that the lower-geared system draws lower or equal current no matter what velocity the robot operates at. I’d argue that this is the primary reason you see people arguing for lower top speeds.
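A sketch of why that holds, under the linear current-speed model: at full throttle, motor current falls linearly from stall current to free current as the motor approaches free speed, and at any given robot speed the lower-geared motor is closer to its free speed. The stall/free current figures below are placeholders, not a specific motor's specs:

```python
# Full-throttle motor current vs robot speed for two gearings of the
# same motor. The current figures are illustrative placeholders.
I_STALL = 130.0  # A, at zero speed
I_FREE = 3.0     # A, at free speed

def current_at(v, v_top):
    """Current drawn at robot speed v for a drivetrain whose geared
    free speed is v_top (linear current-speed model)."""
    frac = min(v / v_top, 1.0)
    return I_FREE + (I_STALL - I_FREE) * (1.0 - frac)

for v in [0.0, 4.0, 8.0, 12.0, 14.0]:
    print(f"{v:5.1f} ft/s: 14 fps gearing {current_at(v, 14.0):6.1f} A, "
          f"19 fps gearing {current_at(v, 19.0):6.1f} A")
```

At v = 0 the two draw equal (stall) current; at every speed above that, the 14 ft/s gearing draws less.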

In terms of control, it is more complicated. Using the numbers generously provided by GeeTwo, we could argue that a 14 ft/s robot is more controllable below 8.1 ft/s than the 19 ft/s robot, and less controllable above that velocity. You may hear people talking frequently about “headroom”: this is exactly what they are talking about.

While a lot of people talk about sprint distances, I’d argue that for short sprint distances, higher-top-speed robots are closer than you’d think: traction limits much of the low-end torque advantage that lower-top-speed robots have. At longer distances, the higher-top-speed robots end up with a clear advantage.

On the other hand, lower-top-speed robots draw less current, so they drop the battery voltage under load less, thereby effectively delivering more relative power. However, I’ve seen much less research and modeling showing how important these effects are.
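The battery-sag effect is easy to sketch with Ohm's law: terminal voltage equals open-circuit voltage minus total current times the pack resistance. The open-circuit voltage and resistance below are assumed round numbers, not a measured FRC battery:

```python
# Rough battery-sag sketch: terminal voltage under load with an
# assumed pack + wiring resistance. All numbers are illustrative.
V_NOM = 12.6    # open-circuit battery voltage, V (assumed)
R_BATT = 0.02   # battery + wiring resistance, ohms (assumed)

def terminal_voltage(total_current_a):
    """Battery terminal voltage at a given total current draw."""
    return V_NOM - total_current_a * R_BATT

# Four drive motors drawing 60 A each vs 80 A each:
print(f"{terminal_voltage(4 * 60):.1f} V")  # 7.8 V
print(f"{terminal_voltage(4 * 80):.1f} V")  # 6.2 V
```

Since available motor power scales with the voltage actually reaching the motors, the lower-current gearing keeps more of that voltage under hard acceleration.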

I’ll refer only to 2910 and 2468, who were the “zippiest” robots at IRI in my opinion.
(7498 had the potential mechanically, but their driving style didn’t match my criteria.)

If we compare their graphs (which look pretty similar), we can see that 2910’s top speed is lower than 2468’s, but that 2468 spends more time at low speed.

The data here supports my point, as we can see that 2910 spent less time than 2468 at speeds below 2 ft/s (and that holds up to ~8 ft/s, as we can see from the chart).


The important thing here is: cycle time saved is cycle time saved. If you aren’t the absolute fastest at full speed, but your careful, precise driving saves a second or two lining up, you’ve more than made up for it.

It shows in that 1477 vs 610 video above: 1477 does faster full-field sprints but spends some time overshooting, backing up, aligning, etc. that 610 does not.


Do you happen to have pit scouting data in your spreadsheets?

I’d be interested in a scatterplot with the Y-axis being average cycles/match and the X-axis being drivetrain speed (or whatever pit scout value is closest).

I would be really appreciative of anybody who can provide this. Unfortunately my team’s pit scouting is done on paper and therefore effectively disappears after the picklist meeting.

@Matt_Boehm_329 Maybe the Zebra Dart data can be used to pull 99th percentile speeds for each team at one of the three events, and somebody who has scouting data from that event could provide the average cycles for those teams?

I think that would be the most relevant evidence for Adam’s hot take.

2013 is a fairly different game than say, early weeks 2017, where getting those rotors before the other alliance was the only way for most alliances to win.

Are we both watching Einstein F3? 1477 plays defense almost all match, and 610 misses more shots than 1241, specifically at the 2:30 mark.

It was traction-limited, mainly due to a lack of weight (75 lbs). Couple that with being the “rookie” at IRI, and we were asked to play defense while watching others cycle slower :wink:


What were you geared for?

Brake mode! I’m a fan of not asking the robot to do something impossible (like reversing itself instantaneously), but I want to stop quickly.
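The three stopping modes give quite different deceleration profiles. A rough sketch under the simple permanent-magnet DC motor model (my own framing, with "coast" idealized as zero drivetrain drag, which understates real friction): braking torque in brake mode comes from the back-EMF driving current through the shorted windings, so it scales with speed, while full reverse adds the back-EMF on top of the applied voltage:

```python
def decel_force(v, mode, f_stall, v_free):
    """Wheel-frame braking force at robot speed v, linear motor model.
    coast:   no motor braking at all (friction ignored)
    brake:   windings shorted, force proportional to speed
    reverse: full reverse voltage, back-EMF adds to applied voltage
    f_stall is the wheel force at stall, v_free the geared free speed."""
    if mode == "coast":
        return 0.0
    if mode == "brake":
        return f_stall * (v / v_free)
    if mode == "reverse":
        return f_stall * (1.0 + v / v_free)
    raise ValueError(f"unknown mode: {mode}")

for mode in ("coast", "brake", "reverse"):
    print(f"{mode:8s}: {decel_force(8.0, mode, 400.0, 14.0):6.1f} lbf")
```

Note that brake mode only ever reaches stall force at full free speed, while reverse input can exceed it (until traction runs out), which is why reverse stops fastest but risks wheel slip and brownouts.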

Haha, that’s all good; as you can see from @Matt_Boehm_329’s graphs, you were above the IRI average for driving speed.
I wasn’t talking about defense, but you can absolutely PM me to hear more, as I don’t want to derail this thread with that.

I get the headroom concept. One needs margin to close the servo loop so you can hold a predetermined speed. That is kinda more important for a flywheel etc. than a drive train (maxing out), I reckon.

I’m sorting through the numbers in my head. I know my students have put together demo robots geared to have high top speeds but they accelerated too slowly. Then again they’ve built things that were nearly impossible to start moving w/o losing traction. And we’ve had robots where gearing it down made it easier to execute precise motions at low speeds.

Thanks for the data!

It might be interesting to cross-reference defense and defended time, using what was talked about in here to help add some clarity on speeds under 2 ft/s: 2019 IRI Zebra's Dart Data Analysis - Defense. Anyone want to take that on?

Honestly, no it isn’t. 2013 and 2017 cycles are almost directly analogous; they made a very strong comparison that teams like mine used when determining strategy early in the year.


Based on 775a RedLine specs: 19.4 ft/s free speed, 15.7 ft/s adjusted as per JVN, with 75 lbs, although I’m not sure I believe the adjusted calculation because it doesn’t change when the number of motors per gearbox is increased. Our configuration ran 4x 775a motors per gearbox, which, we will admit, was overpowered and created a drift bot, which is probably why @Lidor51 commented on our drive style.