how to model motor+gearbox

What is the accepted wisdom here on CD of the “correct” way to model a motor plus gearbox combination[1]?

More specifically, if I have

  • a motor with given free current, free speed, stall current, and stall torque, and

  • a gearbox with given gear ratio and “efficiency”

… and I combine them together,

then what is the free current, free speed, stall current, and stall torque of that combination?

For example, suppose I bolt a CIM to a gearbox that has ratio 10:1 and efficiency 90%. Then:

  • the free speed at the gearbox shaft should be slightly slower than 1/10th of the CIM’s free speed, because of the friction in the gearbox… but how much slower?

  • the free current should be slightly more than the CIM’s free current (because of the torque load of the gearbox friction), but how much more?

  • the stall current should be exactly the same as the CIM stall current

  • the stall torque would be less than 10 times the CIM’s stall torque, but how much less?

  • assume a linear behavior between free and stall points?

Endnotes:

[1] I’m talking about a simple model for hand calculations for FRC purposes, not a complex simulation.

I’ve been thinking about the same thing, but haven’t come up with any good answers yet. Consider this a “bump” until the end of this week when I’m no longer madly building a robot. :wink:

Here’s what I’ve thought through so far.
Efficiency is about power, so 90% efficiency should mean that the peak power of the motor+gearbox is 90% of the motor’s power by itself. Since power is proportional to the product of speed and torque, the product of the “torque reduction” and the “speed reduction” should always be 90%.
The simplest answer is to just multiply the speed and torque by sqrt(0.9), which seems like a crude but useful approximation.

My second thought is that we can model the gearbox as simply an additional load on the motor, which causes it to have a lower free speed and lower stall torque.

If the CIM has a stall torque of 2.43 Newton-meters, and we treat the 10:1 gearbox as a load of 0.20 Newton-meters, then the stall torque of the output shaft will be: (2.43 - 0.20) * 10 = 22.3 N-m. The free speed will be 5310 RPM * (1 - 0.20/2.43) / 10 = 487 RPM. Using these numbers to calculate a motor curve gives an efficiency of 84% relative to the motor by itself.

Working this backward to get the load in terms of efficiency, the theoretical gearbox load is
stallTorque * (1 - sqrt(efficiency))

So it appears that multiplying both the stall torque and free speed by the square root of the efficiency is a pretty good estimate. I’ve seen quite a few resources that say to multiply speed and torque by the efficiency, so maybe I have the wrong definition?
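For concreteness, here is the arithmetic above as a small Python sketch. The CIM numbers (2.43 N·m stall, 5310 RPM free) and the 0.20 N·m load are the ones from the example; the function names are mine:

```python
import math

def combo_from_parasitic_load(stall_torque, free_speed, ratio, load):
    """Treat the gearbox as a constant torque load on the motor."""
    out_stall = (stall_torque - load) * ratio                  # N*m at the output shaft
    out_free = free_speed * (1 - load / stall_torque) / ratio  # RPM at the output shaft
    eff = (1 - load / stall_torque) ** 2                       # power efficiency vs. bare motor
    return out_stall, out_free, eff

def load_from_efficiency(stall_torque, eff):
    """Work it backward: the parasitic load equivalent to a given power efficiency."""
    return stall_torque * (1 - math.sqrt(eff))

stall, free, eff = combo_from_parasitic_load(2.43, 5310, 10, 0.20)
print(round(stall, 1), round(free), round(eff, 2))  # 22.3 487 0.84
```

The round trip through `load_from_efficiency` recovers the 0.20 N·m load, which is just the `stallTorque * (1 - sqrt(efficiency))` relation above.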

I’ve seen tables which list the efficiency of various gear trains - spur gears, worm gears, etc. Can anyone tell me if those tables typically report “power efficiency” or “torque efficiency”?

That was a very looooong week :slight_smile: But thanks for your latest post.

**

Last night I was mulling over how to determine the efficiency of a gearbox, and if my assumptions and math are correct, it’s really easy:

  1. Get a value for the driving motor’s free speed that you trust.
  2. Measure the free speed of the motor+gearbox and multiply by the gear ratio to get the motor speed.
  3. Divide this by the motor’s free speed and square the result. This is the “power efficiency”.
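The three steps above are only a couple of lines of Python (the 504 RPM reading below is a hypothetical measurement, for illustration):

```python
def measured_power_efficiency(motor_free_rpm, combo_free_rpm, ratio):
    """Steps 1-3: back-calculate the motor speed from the output speed,
    compare it to the motor's trusted free speed, and square the ratio."""
    speed_ratio = (combo_free_rpm * ratio) / motor_free_rpm
    return speed_ratio ** 2

# e.g. a 5310 RPM motor in a 10:1 gearbox measured at 504 RPM output:
print(round(measured_power_efficiency(5310, 504, 10), 2))  # 0.90
```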

In cases where we have multiple identical motors in one gearbox (such as 2 CIMs on a drivetrain), we can treat them like one motor with the same free speed and twice the stall torque. The above method should still work.

Steve, I can’t account for how to apply inefficiency accurately (i.e. sqrt(inefficiency) vs full inefficiency to only torque), but here’s an anecdote from my season:

For our lift’s acme rod, I calculated the inefficiency directly and applied it solely to torque inefficiency. I used the diameter of the acme rod and the # of inches/turn. That gave me a triangle (circumference = x, inch/turn = y) from which I then calculated the angle of the threads. The efficiency then has the form sin(angle)/[cos(angle)+sin(angle)]. Since the angle of our threads was ~45.5 degrees, the inefficiency was ~50%. Most acme rod or ball thread setups have thread angles between 25-45 degrees from what I’ve seen.
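A sketch of that thread-triangle calculation in Python, using the efficiency formula exactly as given above (`thread_angle` takes the rod diameter and travel per turn in the same units; the 45.5° example is the one from our rod):

```python
import math

def thread_angle(rod_diameter, travel_per_turn):
    """Angle of the triangle with x = circumference, y = travel per turn."""
    return math.degrees(math.atan2(travel_per_turn, math.pi * rod_diameter))

def screw_efficiency(thread_angle_deg):
    """Efficiency model from the post: sin(a) / (cos(a) + sin(a))."""
    a = math.radians(thread_angle_deg)
    return math.sin(a) / (math.cos(a) + math.sin(a))

print(round(screw_efficiency(45.5), 2))  # ~0.50, i.e. ~50% inefficiency
```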

We modeled our threaded rod lift with this inefficiency. We used a spring force meter (is that the right term?) to accurately measure the force needed to lift the production lift with all of its inefficiencies. I then put it into an Excel spreadsheet that used Ether’s equations from the minibot to calculate the time of the lift (accounting for motor acceleration, gearing, etc); the spreadsheet was fairly accurate, within 0.2 seconds of the real lift (of course, subject to human error of timing).

I’ll reiterate that the inefficiencies of the gearbox & rod were only applied as a torque load on the motor (i.e. an extra subtraction of Newtons in the numerator of “D” in Ether’s equations). After reading some of your analysis, I’d conjecture that it worked only because our lift was geared for torque rather than speed AND because there are major losses in the rod. Thus the raw # of torque loss was more noticeable than the raw # of speed loss. I put the same types of calculations into our drive train, yet I haven’t had a chance to get to the robot to test how the sim holds up to the true system.

For a worm gear setup, I presume the same method for inefficiency can be used: the diameter of the worm pinion is the X of the triangle; for “inches/turn” you’d have to use the circumference of the worm wheel as if it were a straight line, and then apply the gear ratio to it to figure it out.

Let Nmotorfree be the motor free speed

Let Tmotorstall be the motor stall torque

Let G be the gear ratio

Let eff be the efficiency

Then for the combination of motor+gearbox:

Ncombofree = (Nmotorfree/G)*sqrt(eff)

Tcombostall = (Tmotorstall*G)*sqrt(eff)

… and connect a straight line between the above two points to get the speed vs torque curve for the combination motor+gearbox.
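The two endpoints and the connecting line can be sketched like this, using the CIM numbers from earlier in the thread for illustration:

```python
import math

def combo_endpoints(n_motor_free, t_motor_stall, g, eff):
    """Derate both the free speed and the stall torque by sqrt(eff)."""
    n_free = (n_motor_free / g) * math.sqrt(eff)
    t_stall = (t_motor_stall * g) * math.sqrt(eff)
    return n_free, t_stall

def speed_at_torque(n_free, t_stall, torque):
    """Straight line between the free-speed and stall points."""
    return n_free * (1 - torque / t_stall)

n_free, t_stall = combo_endpoints(5310, 2.43, 10, 0.90)
print(round(n_free), round(t_stall, 1))  # 504 RPM, 23.1 N*m
```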

I think that’s what Steve is saying, and it seems to make sense. Are there any other physics folks out there who would be willing to weigh in on this?

Edit:

see this post for an alternate view of how this should be modeled

**

Interesting question.

A flat mechanical efficiency (single number such as 90%) is a Coulomb friction model. It represents the load dependent binding of the gears and shafts due to tolerances in the bearings and the relative sliding of gear tooth surfaces under load.

The “speed reduction” through a gear box is always the gear ratio (unless you start skipping teeth), so you must enforce the constant ratio of lost power through the torque by multiplying it by 0.9. That is, if I have 100 RPM and 10 ft-lbs on the input side of a mesh ratio “R” with an efficiency of 0.9, then I must have 100/R RPM on the output side, and that means the torque must be 0.9 × R × 10 ft-lbs.

By definition there is no load at free speed. At no load, there is no friction. Your approximation will achieve the exact free speed indicated by your gear ratio (1:10). It’ll take you a little longer to get there, but the “model” will get there.

Similarly, stall torque would be 90% of the motor torque times your (1:10).

Interestingly, backdriving the same gear box would give you a holding torque at stall of 111% of the motor torque times your (1:10).

In reality, the “flat efficiency” number is obtained as the average of common operating points and thus incorporates gear windage (speed dependent forces from spinning the bearings and squishing grease out from between the meshing teeth). There might be splits for various types of gear boxes and meshes, but I would expect it to have more to do with the intended operating speed and load of the gearbox.

To approximate the combined free speed you need to know (guess or measure) the windage component of the resistance and it should be a function of speed. Subtracting this from the motor torque curve (straight line in KOP) would give you a new motor curve… the zero point of which is the new free speed.

I know, I know… You were looking for something simple. :slight_smile:

I went through this exercise in 2003/2004. In the end I stopped using any of the detailed mathematical modeling in favor of a simpler analysis.

What I ended up with is a model like this:
Output Rotational Speed = Motor Free Speed / Gear Reduction * Speed Loss Constant

Output Stall Torque = Motor Stall Torque * Gear Reduction * Gearbox Efficiency

I experimentally determined the Gearbox Efficiency and Speed Loss Constant values. These will vary from robot to robot. The value I use (which is about right for “my” robots) is 81% for speed loss. I use a different efficiency value depending on what gearbox I’m looking at.
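As a sketch of this empirical model: the 0.81 speed-loss constant is the value given above, but the 0.85 gearbox efficiency is only a placeholder, since a different value is used per gearbox:

```python
def johns_model(motor_free_speed, motor_stall_torque, reduction,
                speed_loss=0.81, gearbox_eff=0.85):
    """Empirical model: separate experimentally determined constants
    for speed loss and for torque (gearbox) efficiency."""
    output_speed = motor_free_speed / reduction * speed_loss
    output_stall = motor_stall_torque * reduction * gearbox_eff
    return output_speed, output_stall

speed, stall = johns_model(5310, 2.43, 10)
print(round(speed, 1), round(stall, 2))
```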

This is all very inexact. The calculations end up being “about right.”
The thing that it took me 3 or 4 years to learn? “About right” is totally okay for a FIRST Robot.

So that is where the simplified version of my spreadsheet came from.

Of course, this was from the design perspective – if you’re looking for a more detailed model for academic reasons, go to it!

-John

So, going back to the original post (excerpt appended below), what I hear you saying is this (the highlighted parts):

For example, suppose I bolt a CIM to a gearbox that has ratio 10:1 and efficiency 90%. Then:

  • the free speed at the gearbox shaft should be slightly slower than 1/10th of the CIM’s free speed, because of the friction in the gearbox… but how much slower? answer: no load, no friction, so the free speed at the gearbox output of the combo would be modeled as exactly 1/10th motor free speed (ignoring windage for simplicity)
  • the free current should be slightly more than the CIM’s free current (because of the torque load of the gearbox friction), but how much more? answer: no load, no friction, so the free current of the combo would be the same as the motor-alone free current
  • the stall current should be exactly the same as the CIM stall current answer: yes
  • the stall torque would be less than 10 times the CIM’s stall torque, but how much less? answer: 10% less
  • assume a linear behavior between free and stall points? answer: if you ignore windage for simplicity, then yes

So:

Ncombofree = Nmotorfree/G

Tstallcombo = (Tstallmotor*G)*eff

… and connect those points with a line to get the speed vs torque for the combo.
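A sketch of this revised model in Python. Contrast with the earlier sqrt(eff) version: here the free speed is untouched and the efficiency hits only the stall torque:

```python
def combo_endpoints_torque_only(n_motor_free, t_motor_stall, g, eff):
    """No free-speed loss; efficiency applied to stall torque only."""
    return n_motor_free / g, t_motor_stall * g * eff

n_free, t_stall = combo_endpoints_torque_only(5310, 2.43, 10, 0.90)
print(n_free, round(t_stall, 2))  # 531.0 21.87
```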

The “no friction at no external load” assumption is (as previously stated) an approximation, and I suspect that approximation becomes increasingly less tenable as the gear ratio increases. Does anyone have free speed data for a Banebots motor with and without a 256:1 gearbox?

**

Ether,
This is all “correct,” but clearly it is not “right.” There should be some kind of free speed loss, so it’s not a very good model. Your initial post asked for “wisdom” and John has delivered that.

I agree with everything he said except this:

…I stopped using any of the detailed mathematical modeling in favor of a simpler analysis.

The only thing simpler than Ether’s model would be to ignore efficiency entirely. He’s beat you by one multiplication; that’s 25% fewer FLOPS. Think about what you could be doing with all of that extra time during build season! :slight_smile:

I’m interested to hear how the banebots data works out.

Agreed. That’s why I found StevenB’s suggestion alluring. It derates the free speed AND the stall torque, and creates a speed vs torque line for which the output power at every operating point is derated by the efficiency. That seemed like a reasonable thing to do. The open question is: Does it do a useful job of modeling the actual behavior of the motor+gearbox combo? At this time, I do not have access to robot parts or lab equipment. If I did I would be running some tests right now.

Your initial post asked for “wisdom” and John has delivered that.

John’s answer was a good one, and it answered a question I was wondering about… but not the one I asked in this thread, to wit: “What is the accepted wisdom here on CD of the “correct” way to model a motor plus gearbox combination”. His model is for the whole vehicle; not the motor+gearbox separately.

Another thought occurred to me as I was back in the woods this afternoon pondering robots while mowing the walking trails. Given: Four wheel 120-pound vehicle with no drivetrain attached (just free-spinning wheels) on a flat, level, carpeted surface. Roughly how much force should it take to keep the vehicle moving at a constant speed of 13 feet per second? i.e. how much force to overcome rolling friction and wheel bearing friction? I’m thinking about 10 pounds. Does that sound about right? More? Less?

I’m interested to hear how the banebots data works out.

Indeed.

It would also be enlightening if someone would measure the free speed (or free current) of a stand-alone CIM and a CIM mounted in a Toughbox.

**

Here’s a link to a white paper and math model I posted last year:

http://www.chiefdelphi.com/media/papers/2405

Time to back up and make sure I understand what’s going on. My initial assumption was that the gearbox acts like a constant load on the motor, causing both reduced speed and reduced stall torque. This was incorrect.

If I understand correctly, there are two separate forces acting on the gearbox: One of them (windage) is loosely proportional to the speed of the motor, caused by churning of the lubricant and friction we can’t get rid of. The other is loosely proportional to the forces on the gear teeth, and thus to the torque on the gearbox.

Since the power out of the motor is the product of speed and torque, each reduced by their respective loss factors, the product of the loss factors gives the efficiency. Thus, when a single efficiency number is posted, it most likely represents both speed and torque loss.

However, the split is not necessarily even - 90% overall efficiency could be the result of 92% torque efficiency and 97.8% speed efficiency, or any other split. Experience is the easiest and most useful way to determine what the separate speed/torque loss factors are.

Is this right, or am I off in the weeds?

The gearbox puts a parasitic torque load on the motor. This parasitic torque is a function of the speed and the output torque load on the gearbox:

Parasitic_Torque = f(output_speed, output_torque)

To the motor, the parasitic torque looks just like an additional external torque load. So it slows down the motor.

It’s too bad that gearbox manufacturers don’t provide this data.

It’s simple to get one data point by measuring the motor’s free current with and without the gearbox. Using the motor’s known torque constant, this could be used to calculate free speed with the gearbox. Getting a second data point requires measuring torque or setting a known torque load, which requires equipment most teams don’t have. If the motor can tolerate stall for several seconds without damage (perhaps even at a reduced voltage), a torque wrench with a special fitting could be used to measure the stall torque with and without the gearbox.
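For instance, the free-current data point works out like this. All numbers below are hypothetical; a motor's torque constant kt is roughly its stall torque divided by its stall current:

```python
def parasitic_torque_from_free_current(kt, i_free_with_gearbox, i_free_motor_only):
    """The extra free current, times the torque constant kt (N*m/A),
    gives the gearbox's parasitic torque at no external load."""
    return kt * (i_free_with_gearbox - i_free_motor_only)

# hypothetical readings: 4.0 A with the gearbox vs. 2.7 A motor-only
print(round(parasitic_torque_from_free_current(0.018, 4.0, 2.7), 4))  # 0.0234 N*m
```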

**

Thanks Clem! Interesting paper. I don’t know how I missed it.

For grins, I checked your analytical solution to eq_14 using my favorite* CAS, Maxima. I got the same answer as you, but I don’t like the way Maxima factored it [see attached PDF]. What CAS did you use?

You modeled the gearbox by using a constant parasitic torque. I’m looking for something just a bit more realistic.

*it’s my favorite because it’s free and therefore anyone can afford it:-)

**

Maxima.PDF (15.6 KB)



If I understand correctly, there are two separate forces acting on the gearbox: One of them (windage) is loosely proportional to the speed of the motor, caused by churning of the lubricant and friction we can’t get rid of. The other is loosely proportional to the forces on the gear teeth, and thus to the torque on the gearbox.

They are all simplified models of very complex interaction forces… there are many, many more than two forces/torques at work. Every model has its boundaries.

This one (load dependent plus windage) actually has some undesirable behavior near zero speed. You would expect the gearbox to stay stationary until the torque exceeds some threshold torque boundary. That won’t happen here: the slightest torque still transmits 90% and will result in motion. This aberrant behavior may or may not be important to your application. The “constant parasitic torque” model you’ve described does pretty well in this case and can even approximate some non-back-drivable gearbox behaviors.

The “constant parasitic torque” is often the dominant force in tight-fitting pin-joint type rotational systems. Here the Coulomb friction is a function of the “clamp load” in the joint and is approximated as constant. Unless you have an air bearing, this behavior is always present to some extent. A “next best” model could be to apply all three. As you combine effects, you need more measurements/observations to determine the values of the parameters.

The bottom line is that we can say “bad things” about any model. A good model minimizes complexity of understanding, implementation, and/or measurements while giving a desirable approximation. A bad model is one which gives inferior results to models of similar or lesser complexity. I haven’t seen any bad models in our discussions, and we certainly don’t want to steer anybody away from what has been proven to work!

Hopefully that about does it for the “simple” models discussion. :slight_smile:

**

Attached screenshot “tetrix.png” shows Tetrix data gleaned from Richard Wallace’s posts.

The purple dots are the two data points from post #1. These are actual dynamometer data on a Tetrix motor-plus-gearbox.

The blue dots are eyeballed from the dyno data graph attached to post #15 for the same motor, but without the gearbox. To facilitate visual comparison, this data was then adjusted for a “perfect” (100% efficient) 52:1 gearbox so it could be compared graphically to the data with the actual gearbox.

It’s quite apparent that, for this gearbox at least, the “threshold torque boundary” consideration can be ignored, and a “constant torque loss” assumption is not valid[1]. The torque loss in this gearbox is essentially zero with no load torque, and increases linearly[2] as the load torque is increased. This data suggests that modeling torque loss as a linear[2] function of load torque would work well for this gearbox.

Assuming this data is typical, the answers to the questions raised in the original post would be:

…suppose I bolt a CIM to a gearbox that has ratio 10:1 and efficiency 90%. Then:

  • the free speed at the gearbox shaft should be slightly slower than 1/10th of the CIM’s free speed, because of the friction in the gearbox… but how much slower? answer: for practical purposes, no slower.

  • the free current should be slightly more than the CIM’s free current (because of the torque load of the gearbox friction), but how much more? answer: for practical purposes, no more.

  • the stall current should be exactly the same as the CIM stall current answer: of course.

  • the stall torque would be less than 10 times the CIM’s stall torque, but how much less? answer: assuming that “90% efficient” means “the maximum power with the actual gearbox is 90% of the maximum power with the ideal gearbox”, and assuming that the speed vs torque curve is linear, the answer is “10% less” (see attached sketch “efficiency.png” which assumes linear speed vs torque with actual gearbox)

  • assume a linear behavior between free and stall points? answer: probably a good enough approximation.

[1] the assumption is not valid, but it may still be possible to come up with an average value which can be tweaked to give useful results for certain problem domains

[2] probably roughly linear, but can’t tell for sure from only two data points :frowning:

**