Gearbox friction model

I’m making a gearbox and motor simulation, and I’ve heard that planetary gearboxes generally have around 85% efficiency/stage. Which of these would model this better?

OutputSpeed = inputSpeed * efficiency^numStages / ratio;
OutputTorque = inputTorque * ratio;

OutputSpeed = inputSpeed * efficiency^(numStages/2) / ratio;
OutputTorque = inputTorque * efficiency^(numStages/2) * ratio;

Something else maybe?

Reverse the first two equations (apply the efficiency to torque rather than speed) and you have it. Speed loss is related to, but not the same as, efficiency. People typically use a separate constant for speed loss.

So,

OutputFreeSpeed = inputFreeSpeed / ratio * speedEfficiency^numStages;
OutputTorque = inputTorque * ratio * torqueEfficiency^numStages;

?

What’s a good estimate for these constants? .85 for both of them?

Depends on motor piloting, number of stages (yes, the efficiency per stage varies with the number of stages), which motor you use, the amount of stress on the output shaft, heat, etc. The best thing to do is to test it out in the field.

I usually use these (highly inaccurate) estimates, based a little on research, and a little on experience:

Belt reductions: about .98 efficient
Gear reductions: about .95 efficient
Chain reductions: about .90 efficient
Planetary reductions: about .8 efficient
Single-lead worm reductions: about .6 efficient
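
Taking those at face value, a three-stage planetary would come out to roughly 0.8^3 ≈ 0.51 efficient overall, i.e. about half the input power lost to friction, which is why stacking stages gets expensive.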

Of course, as T^2 said, these depend heavily on a lot of factors. I’ve noticed (although not in the least bit empirically) that for reductions that are less efficient to start with, misalignment, lack of lubrication, high speed, etc. make a much bigger difference in terms of efficiency. For example, it’s much worse to misalign a worm gearset than a spur gearset in terms of efficiency, and belt reductions are much happier at high speeds than chain reductions. Of course, I don’t have any numbers to back these assumptions up, so this may be useless to others.

The process I usually use is to assume that these efficiencies are just multiplied by the free speed of the motor to get the adjusted free speed; however, that isn’t the real way to do it. I’d be very interested to hear what the actual meaning and use of efficiency is.

Really, what you want to do is to use some reasonable numbers to estimate how your system will perform, and leave enough flexibility in the system that you can regear if necessary. Maybe you won’t need to redo it on all systems, but every year we’ve needed to regear some stuff, and not because we didn’t do our math. Efficiency (like friction) is something that’s very hard to analyze.

In my experience you can generally just use one number for speed loss for the whole system, regardless of gear ratio. “Speed loss” is really just the lowered free speed of the system caused by the small load of overcoming kinetic friction. For well-greased gearboxes it shouldn’t change too much.

I’ve always used 81% as a “speed loss constant”, allegedly the result of experimental data collected by 229, but I wouldn’t be surprised if it was just some arbitrary number somebody came up with.

These are great numbers for design, and I use about the same numbers when I do design estimates. However, if you’d like to get more accurate, you’ll need to actually build the thing and do some testing. You’ll notice some strange stuff. We saw that we were most inefficient at low loads, and most efficient at medium loads. Lubrication and accurate machining and placement of gears, belts, and pulleys really make a difference. Another thing we found interesting was that the efficiency changes a lot over time. When we started our shooter (it used a “hack” 1:1 BaneBots P60), it was around 2800 rpm. After 1 minute, it was up to 3400 rpm, and after three more minutes it went to 3100 rpm.

Yeah, those numbers look accurate enough for my estimates (either .81 for everything or the separate ones); I’m not doing any rocket science.

This is where you use efficiency:
OutputTorque = Efficiency * InputTorque * GR

Speed is indirectly affected by efficiency since the % of max torque will be higher in an inefficient system, which slows the motor speed. This is how you calculate it:

InputTorque = OutputTorque/(Efficiency*GR)
%Torque = InputTorque/StallTorque
OutputSpeed = (1-%Torque)*FreeSpeed
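
A minimal C sketch of that calculation, with made-up motor numbers (the stall torque, free speed, load, and gear ratio below are purely illustrative):

#include <stdio.h>

/* Output speed from the efficiency-adjusted motor curve, following the
   equations above. Assumes a linear motor curve. */
double output_speed(double outputTorque, double efficiency, double GR,
                    double stallTorque, double freeSpeed)
{
    double inputTorque = outputTorque / (efficiency * GR);
    double pctTorque   = inputTorque / stallTorque;
    /* motor speed; divide by GR if you want speed at the output shaft */
    return (1.0 - pctTorque) * freeSpeed;
}

int main(void)
{
    /* hypothetical: 2.5 N*m load, 85% efficient, 10:1 reduction,
       motor with 0.5 N*m stall torque and a 5000 rpm free speed */
    printf("%.0f rpm\n", output_speed(2.5, 0.85, 10.0, 0.5, 5000.0));
    return 0;
}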

Using a blanket fudge factor isn’t going to behave properly, particularly if your % of max torque is high.

Doesn’t that mean your OutputSpeed equation doesn’t factor in efficiency at all?

OutputSpeed = (1 - %Torque) * FreeSpeed
OutputSpeed = (1 - InputTorque/StallTorque) * FreeSpeed

InputTorque is the variable you multiply by efficiency to get OutputTorque - i.e. torque before efficiency is factored in. At a system’s free speed, the InputTorque would only be enough torque to overcome kinetic friction in the system (no external load).

It’s very possible I’m misunderstanding.

If you think about what is going on in a gearbox, it can help you with the model immensely.
With planetary and spur gears, the teeth of the gears are meant to roll relative to one another. The better the tooth, the lower the friction. This rolling will have some friction which will be proportional to load, so a simple efficiency factor works well, especially for moderate torques. Speeds are merely a function of the ratio.

Thus
output_torque = eff * input_torque * gear_ratio
output_speed = input_speed / gear_ratio

While this model works pretty well, as others have pointed out, it falls apart towards “top speed”. I.e., if my motor has a free speed of 15,000 rpm and my planetary gearbox has a ratio of 3:1, why doesn’t my gearbox free speed = 5,000 rpm? Instead I get about 4,000 rpm!

So the other element tends to be drag. For example when riding a bicycle, you have to overcome the grade, rolling resistance, and wind resistance. On a still day, on flat ground, the initial torque is dependent on acceleration and rolling resistance, but as you speed up, wind resistance becomes a bigger factor.

For gearboxes, the “wind resistance” is often the oil or grease in the gearbox. These drag forces tend to have a torque to speed relationship of:
drag_torque = c * speed^2, with c being a constant.

c is a combination of the thickness of the grease, the bearings, and… It is often temperature sensitive for many gearboxes.

A good way to get a value for this is to measure the motor free speed, then the motor + gearbox free speed. Use the gear ratio and motor curve to find the motor torque at that speed; since the output torque is zero at free speed, set drag_torque = c * speed^2 equal to the torque the motor is delivering and solve for c.

For your model, you will then have:
torque_out = eff * ratio * input_torque - c * speed^2

If you use this method, and have the ability to measure stall torque, I think you can make a very accurate model, and you will likely see a much better efficiency number than using the other methods.
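
A sketch of that procedure in C, using the 15,000 rpm / 3:1 / 4,000 rpm example from above and an assumed linear motor curve (the stall torque and efficiency numbers are made up for illustration):

#include <stdio.h>

/* linear motor curve: torque falls off linearly from stall to free speed */
double motor_torque(double motorSpeed, double stall, double freeSpeed)
{
    return stall * (1.0 - motorSpeed / freeSpeed);
}

int main(void)
{
    double eff       = 0.95;    /* assumed gear efficiency     */
    double ratio     = 3.0;     /* 3:1, from the example above */
    double motorFree = 15000.0; /* rpm, motor alone            */
    double stall     = 0.4;     /* N*m, hypothetical           */

    /* measured: gearbox free speed is 4,000 rpm, not 5,000 */
    double gbFreeOut   = 4000.0;
    double gbFreeMotor = gbFreeOut * ratio; /* motor spins at 12,000 rpm */

    /* at the gearbox's free speed the output torque is zero, so
       eff * ratio * input_torque = c * speed^2; solve for c
       (speed measured at the output shaft, consistently)      */
    double t = motor_torque(gbFreeMotor, stall, motorFree);
    double c = eff * ratio * t / (gbFreeOut * gbFreeOut);

    /* evaluate the full model at some other operating speed */
    double speed = 3000.0; /* rpm at output */
    double inT   = motor_torque(speed * ratio, stall, motorFree);
    printf("c = %g, output torque at %g rpm = %g N*m\n",
           c, speed, eff * ratio * inT - c * speed * speed);
    return 0;
}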

For instance, in high-power oiled gearboxes we might use as little as a 1%-1.5% efficiency loss per gear mesh and 0.1% to 0.15% for bearings. Of course, this is with gearboxes with really good gear geometry, properly weighted oils, and high-quality bearings. Overloaded gears, bushings, bearings, and really thick greases tend to perform at lower efficiencies with higher “windage”, but they tend to work pretty well for FRC.

The above is essentially what I used in the Drivetrain Acceleration Model, except that instead of c*speed^2 I used Kro + Krv*speed (line 85 in the C code). This one line of C code is easily changed to make it whatever function of speed you like. In FRC drivetrains, the Kro constant term is probably needed since even at very low speeds and output torque there is friction in the drivetrain (due to tension in the chains/belts, misalignment, etc.).
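
As a minimal illustration (not the actual line from that model), the friction term as a swappable function might look like:

/* friction torque as a function of speed; Kro and Krv are constants
   to be fit from testing */
double friction_torque(double speed, double Kro, double Krv)
{
    return Kro + Krv * speed;        /* constant + linear loss       */
    /* return c * speed * speed; */  /* quadratic alternative, above */
}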

The input torque depends on the output torque. Your output torque requirement is higher if the efficiency is lower. Or, your input torque is higher for the same output torque.

While I agree that a certain amount of friction is there initially, I think that value should go away once it is overcome, much like the delta between static and dynamic coefficients of friction. If you are pushing a block on a smooth surface, you need a certain amount of push to get it going, but once it is going, the term would reduce.
I will need to think about that one a bit more, and about how it would affect an acceleration model.

I wasn’t talking about static friction. Note how I carefully worded it:

at very low speeds and output torque there is friction in the drivetrain (due to tension in the chains/belts, misalignment, etc).

Low speeds, not zero speed. The tension in the chain combined with any misalignment can create a non-negligible loss even at very low speeds even if there is no load on the output. This can be modeled with a constant factor. You may also need a speed-dependent factor, as in your post.

I will have to look into other acceleration models to see what all they include, and how to do some simple tests in order to get an accurate model of what is going on.

It would be really neat to come up with a VI or something similar that could be run with an FRC robot and output some parameters.

Consider the following scenarios:

Scenario1:

  1. put the robot up on blocks, so there is no external torque load on the drivetrain.

  2. apply enough voltage to the motors to get them moving

  3. slowly reduce the voltage to the lowest value that still sustains a constant (very slow) speed. Since the wheels are not accelerating, there is no net torque on the wheels.

For Scenario1 I assert the following:

A) at that same voltage, a free motor (i.e. motor not connected to gearbox and drivetrain) would be spinning faster and drawing less current. In other words, the motors in the drivetrain are producing torque… but that torque gets lost through the friction of the gearbox and drivetrain.

B) using the model:

wheel_torque = eff * ratio * motor_torque - (Kro + Krv*speed)

… with wheel_torque = 0 and wheel_speed ~ 0, you get:

motor_torque = Kro/(eff*ratio)

… which allows you to choose a value for Kro that reflects the real-world situation that the motor_torque is not zero under these conditions (see the sketch after this list).

C) This effect can be non-negligible for a poorly-built drivetrain (chains/belts too tight, misalignment, inadequate lubrication, etc.).

D) since the net wheel_torque is zero and the wheel_speed ~0, the model:

wheel_torque = eff * ratio * motor_torque - c*speed^2

… says the motor_torque is ~zero, which does not allow the model to take into account the losses mentioned above.
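
As a rough sketch of turning the Scenario1 measurement into a number for Kro (all values hypothetical; torqueConstant converts measured motor current to torque, and subtracting the motor's free current removes the motor's own internal losses):

/* estimate Kro from the Scenario1 measurement */
double estimate_kro(double measuredCurrent, double freeCurrent,
                    double torqueConstant, double eff, double ratio)
{
    /* torque the motor delivers while sustaining the very slow speed */
    double motorTorque = torqueConstant * (measuredCurrent - freeCurrent);

    /* from wheel_torque = eff * ratio * motor_torque - (Kro + Krv*speed),
       with wheel_torque = 0 and speed ~ 0:
       Kro = eff * ratio * motor_torque                                  */
    return eff * ratio * motorTorque;
}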

Scenario2:

Same as Scenario1, except place the robot on a straight, flat, level, carpeted surface.

For Scenario2 I assert the following:

A) The motor current (and thus torque) will be even higher than in Scenario1.

B) The reason is the extra losses due to rolling resistance (carpet compression), plus additional work done on the carpet due to misalignment of the wheels (or, in the case of mecanum or omni wheels, the stretching of the carpet due to the sideways component of the reaction force of the roller on the carpet).

C) These extra losses will include a constant term (which can be added to Kro) and a speed dependent term. Whether the speed dependent term is better approximated as linear or quadratic I do not know at this point.