Does anyone know about a mathematical model to describe how a motor’s temperature changes over time given its velocity/applied voltage/current/etc…? I acknowledge that things like the heat equation imply change in temperature also depends on nearby temperatures, so trying to model the temperature at one point (the sensor) doesn’t give enough information. But if there are any approximate models, I would be interested in knowing! The end goal is to create a velocity estimator that only takes in temperature and current, just for kicks and giggles

As a first-order approximation, you can look at the efficiency at a given point on the motor curve to know how much heat is being dumped into the motor. (Efficiency is the ratio of mechanical power out to electrical power in, and the difference between them is waste heat.)
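A minimal sketch of that bookkeeping, with illustrative numbers (not from any specific motor):

```python
def waste_heat_watts(electrical_power_in, efficiency):
    """Waste heat = electrical power in minus mechanical power out.

    efficiency is the ratio of mechanical power out to electrical
    power in, read off the motor curve at the operating point.
    """
    mechanical_power_out = efficiency * electrical_power_in
    return electrical_power_in - mechanical_power_out

# Example: 120 W of electrical power in at 60% efficiency
# means 48 W is being dumped into the motor as heat.
print(waste_heat_watts(120.0, 0.60))  # 48.0
```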

To better model temperature over time, you need a lot more information, and honestly your best bet is probably experimentation. For example, it’s well known that CIM-class motors heat up slower than 775-class motors (thanks to greater thermal mass), but they also cool down slower. In addition to thermal mass (more properly “heat capacity”), you have to take into account where the heat is going. Convection always exists, but may be more or less prevalent depending on how confined the airflow around the motor case is. Similarly, motors bolted to thick aluminum are going to dissipate heat much better than motors bolted to plastic.

Finally, there’s the internal construction of the motor. The vast majority of the waste heat is Ohmic losses in the windings. The lower the thermal resistance between the windings and the surroundings, the better the motor will be able to dissipate heat. This is an advantage of brushless motors (although brushless motors also have higher efficiency than brushed motors, and so have less heat to deal with in the first place).

If you can get the rate at which heat is lost to the surroundings, you could do this to a point.

Heat generated in a motor is equal to I^2 * R, where I is the current and R is the motor winding resistance. Current is directly proportional to torque. This means that if you don’t have any friction (an ideal gearbox), operating at a steady speed with zero load requires no torque, and therefore draws no current. Think of something like a frictionless flywheel.
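As a sketch, the copper heating term is just (the winding resistance below is an illustrative value, not a spec):

```python
def copper_heat_watts(current_amps, winding_resistance_ohms):
    """Ohmic (I^2 * R) heating dissipated in the windings."""
    return current_amps ** 2 * winding_resistance_ohms

# e.g. 40 A through 0.09-ohm windings (illustrative numbers)
print(copper_heat_watts(40.0, 0.09))  # 144.0 W of heat
```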

Getting heat dissipation is a bit tougher. Heating the motor up to a known temperature and measuring how long it takes to cool off with no electricity flowing through it (for a CIM or other non-fan motor) will help you find the overall heat transfer coefficient for the surface. Heat transfer rate is roughly a constant times the temperature difference between the object and the ambient temperature (Newton’s law of cooling).
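A sketch of how you might extract that constant from a cooldown experiment, assuming the cooling really is first order (T(t) = T_amb + (T0 - T_amb) e^{-kt}, so ln(T - T_amb) is linear in time). The data below is synthetic, just to show the fit:

```python
import math

def fit_cooling_constant(times, temps, ambient):
    """Fit k in T(t) = T_amb + (T0 - T_amb) * exp(-k*t) by a
    least-squares line through ln(T - T_amb) vs t (slope = -k)."""
    ys = [math.log(T - ambient) for T in temps]
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(times, ys)) / \
            sum((x - xbar) ** 2 for x in times)
    return -slope

# Synthetic cooldown: 25 C ambient, starting at 65 C, true k = 0.01 /s
amb, T0, k_true = 25.0, 65.0, 0.01
ts = [0, 60, 120, 180, 240]
Ts = [amb + (T0 - amb) * math.exp(-k_true * t) for t in ts]
print(round(fit_cooling_constant(ts, Ts, amb), 4))  # 0.01
```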

I doubt you’ll ever be able to get usable data out of something like this simply due to the number of variables you need to measure (like steady state current draw), but that might help get you started.

For this goal you will need to know how copper temperature AND magnet temperature are changing. Electrical resistance is a function of copper temperature, while induction (back-EMF) is a function of both speed and magnet temperature. As Carlos pointed out, both temperature changes are primarily driven by heat dissipated in the copper (I^2 * R), so you will also need to model transport of copper heat to the magnets. That model is complicated by several factors that depend on how the motor is constructed and installed.

Copper temperature increases as the square of the electrical current density (Ampere per coil wire cross-section area) increases. In the graphic below, “tau” is the thermal time constant of the motor coils, a first-order description of how heat is transported out.
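Putting the heating and cooling pieces together, a lumped single-node model with that time constant might look like the sketch below. The heat capacity C and time constant tau are placeholder values you would have to fit from your own data:

```python
def step_temperature(T, power_in_watts, ambient, C, tau, dt):
    """One Euler step of the lumped model dT/dt = P/C - (T - ambient)/tau."""
    dTdt = power_in_watts / C - (T - ambient) / tau
    return T + dTdt * dt

T, amb = 25.0, 25.0          # start at ambient
C, tau, dt = 200.0, 120.0, 0.1   # assumed placeholder constants
for _ in range(int(600 / dt)):   # 10 minutes at a constant 50 W of heat
    T = step_temperature(T, 50.0, amb, C, tau, dt)
print(round(T, 1))  # near the steady state of amb + P*tau/C = 55 C
```

The nice property of this form is that it is linear in T, which makes it easy to drop into a Kalman filter as an extra state.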

Thanks! A first-order approximation is probably good enough - I was hoping to use an unscented Kalman filter but an extended one that linearizes instead is probably sufficient. To calculate the heat being dumped in, would I do something like Q = (1 - \text{efficiency}) \cdot P = (1 - \text{efficiency}) \cdot \tau \omega where \tau is the exerted torque that is proportional to the applied voltage?

As for all the constants I don’t know, I’m willing to just know what the equations look like and solve for constants with regression. Some time in the near future I’ll have access to a lot of data points with velocity, stator, input, and output currents, and temperatures for our shooter so I’ll be able to work with that.

Thank you! As I said in my other comment, I will soon have access to data and can extrapolate constants if I have equations. I can definitely try out the procedure you talked about. Chances are, I’ll also have to include the ambient temperature in the estimator. I’m hoping to try this out on our Falcon500 shooter, so the temperature sensor would be inside the motor and might not see the same ambient temperature as the outside.

Could you give any specific equation? I have seen V = Ri + K_v \omega where the back-EMF is K_v \omega , but I haven’t seen any equation that considers magnet temperature. But this is very useful information! I’m not sure which temperature gets measured by a Falcon500’s sensor, but if variables are dependent on speed I’ll be able to estimate them more accurately.

Also, in the graphic, is the equation for \Delta T / \Delta t only valid for t \approx 0 or does it hold in general if you consider it a differential equation and account for the changing conductivity?

The formula is actually V = Ri + K_e \omega, where K_e = \frac{1}{K_v}. K_v is “speed per volt”, while K_e is “volt per speed”.
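A tiny sketch of that convention and the resulting back-EMF, with an illustrative K_v:

```python
def back_emf(omega_rad_s, K_v):
    """Back-EMF from V = R*i + K_e*omega, where K_e = 1 / K_v."""
    K_e = 1.0 / K_v          # "volts per speed" from "speed per volt"
    return K_e * omega_rad_s

# A motor with K_v = 50 rad/s per volt spinning at 500 rad/s:
print(round(back_emf(500.0, 50.0), 6))  # 10.0 volts of back-EMF
```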

K_e can be expressed in terms of the motor’s geometric constants:

You can google what the other variables stand for, but the important one is \phi_f, which is the motor’s magnetic flux. For permanent-magnet motors like we use in FRC, the magnetic flux can be expressed by:

So you can see the magnetic flux is approximately linear with B_r, the magnet’s flux density. This B_r is a constant given by the physical properties of the magnet, and it is affected by the magnet’s temperature.

All that being said, B_r should only drop a fraction of a percent per degree so I don’t know how much it really matters for a first-order approximation.

^ what he said.

To the question of BEMF’s temperature dependence: common magnets in FRC are either ferrite or neo. Ferrites are found in CIM motors. Their temperature coefficient of Br is about -0.2% per Celsius degree. In a hard-driven FRC match they might rise from 20C initial to 60C final, losing about 8% of maximum motor torque-per-Ampere.
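The arithmetic behind that 8% figure, as a quick sketch:

```python
def br_derate(temp_coeff_per_C, delta_T_C):
    """Fractional drop in Br (and thus torque per amp) for a temperature rise."""
    return temp_coeff_per_C * delta_T_C

# Ferrite magnets: -0.2 %/C over a 40 C rise (20 C -> 60 C)
print(round(br_derate(0.002, 40.0), 3))  # 0.08, i.e. about an 8% loss
```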

Neo magnets are found in the recently introduced brushless motors. Their temperature coefficient of Br is about -0.12% per Celsius degree, and they don’t heat up as much as the CIM’s magnets, so the drop in torque-per-Ampere in a hard FRC match is less.

The above helps to explain why brushless motors are such a great addition to the FRC toolkit. Two factors influence better thermal performance: (1) brushless motor windings are on the stator, and thus have an easier path to transport their heat into the robot chassis compared with CIM windings, which are on the rotor, and (2) brushless motor magnets stay stronger throughout a match. Combining these two effects, a CIM loses about 1/3 of its peak power capability due to copper and magnet heating, comparing the end of a hard match with the beginning. The same comparison for a brushless motor gives much less reduction, typically 10% or so.

I’ve seen this both ways, actually; I’ve heard it’s just a convention thing. I was first introduced to this constant as K_\omega, which is why I use it, but I’ll use your convention for now.

I think you’re right about this. A few months ago I looked at VEXpro’s motor curves for the motors our team uses and ran a linear regression to find each constant, and R^2 was always very close to 1. So it probably has a negligible impact on linearity.

Thanks, that’s really cool information! It seems like this problem is much harder to tackle than I thought, but I’ll take a look at the data I collect soon and see if I can find anything useful.

I’ve heard both K_e and K_b for the “back-EMF constant” and K_v is always the “voltage constant”, which is the inverse. They’re generally used for different things, K_e and K_b for calculating back-EMF given the speed, and K_v for calculating the free speed at different voltages. I’ve never seen K_\omega, though I get where it’s coming from.

Linear regression of what vs what? You can calculate the transient motor constants (R, K_T, K_e) from the steady-state motor constants (\omega_f, T_s, i_s, i_f) with:
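One common way to do that calculation, sketched with roughly CIM-like spec-sheet numbers (the values and the free-current friction correction below are assumptions for illustration, not from this thread):

```python
import math

V = 12.0                      # nominal voltage
i_s, i_f = 131.0, 2.7         # stall and free current, A
T_s = 2.41                    # stall torque, N*m
omega_f = 5330 * 2 * math.pi / 60   # free speed, rad/s

R = V / i_s                       # at stall, omega = 0, so V = R * i_s
K_e = (V - R * i_f) / omega_f     # at free speed, V = R*i_f + K_e*omega_f
K_T = T_s / (i_s - i_f)           # treating free current as frictional torque

print(round(R, 4), round(K_e, 4), round(K_T, 4))
```

In consistent SI units K_e and K_T should come out nearly equal for an ideal motor; real spec sheets usually give slightly different values.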

Oh, my bad, that was actually a typo! I meant to say K_v. That being said, I have seen it as K_\omega and sometimes use it too.

Multiple linear regression of torque vs. current and velocity (no intercept). I know you can calculate it with just the steady-state constants, but I decided to calculate it using all the data points given (I verified it was almost the same as just using the steady-state constants). If you’re curious, here’s the source code: GitHub - howard-beck/Motor-Constant-Extrapolation. From the repo description: resistance, viscous friction coefficient, torque, and EMF constants are taken from provided motor curve data using a least-squares planar regression without intercept (no friction); inductance and moment of inertia are taken from measurements done with no load while not in steady state, using a gradient descent algorithm; frictional torque is taken from measurements with typical load using a least-squares regression.
