Average Energy per match

I am doing battery capacity fade studies at work and wondered how the results could be applied to these robotics competitions, but I ran into the question of how much energy is actually used per match.

Has anyone ever collected data on the average energy used by their robot over a given set of matches? This is going to differ per robot and per match, but I imagine that after throwing out the extreme outliers you could find a pretty accurate average value.

My initial thought is that the battery we currently use is relatively poor for the amount of money we spend and the shelf life it retains. Losing 20% capacity in 6 months just sitting on a shelf makes me wonder about the impedance growth rate associated with these batteries.

MORE Robotics (1714) has an amazing battery charging system that tracks each battery’s usage and life. I’m sure they could shed some light on your question.

We test our batteries, and have not seen the drop-off that you suggest (20%). In fact, some of our best batteries are not the new ones, but ones that are one or two years old.

If the manufacturer says 20%, they may be hedging their bets in case the battery is used in a critical application, or (more likely) they are basing it on statistical sampling and covering the worst case: for instance, 99.999% of the batteries may lose less than 20%.

In preparation for launching the PD, our interns created a battery monitor. It logged the battery voltage and current a couple of times per second to an SD card. I have the information gathered at Mayhem In Merrimack. It is either a 2 MB zip of .pngs or a 2 GB folder of raw data. If you PM me your email address I’ll forward it along.

I was hoping that CrossTheRoad or AndyMark or another vendor would sell the devices, but there wasn’t much interest. All it is is a PIC, a Hall effect sensor, and a microSD connector.

It makes me wonder why we don’t take more effort to make sure we keep ours charged up and in use all year :slight_smile:

Energy use per match depends on a lot of things, and the game has a lot to do with it. We used up most of the capacity of the battery during matches in 2007; this year the batteries lasted quite a while.

You should be able to figure that, on average, roughly one quarter to one half of the capacity of the battery in each robot will be used in a match. So figure two batteries’ worth of charge per match (6 robots) as a starting guess.
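A quick sketch of that arithmetic, assuming the standard 12 V, 18 Ah battery (my own illustration, not measured data):

```python
# Rough per-match energy estimate, assuming the standard 12 V, 18 Ah battery
# and the "one quarter to one half of capacity per match" rule of thumb.
battery_ah = 18.0          # rated capacity, Ah
battery_v = 12.0           # nominal voltage, V
robots_per_match = 6

for fraction in (0.25, 0.50):
    ah_per_robot = battery_ah * fraction
    wh_per_robot = ah_per_robot * battery_v
    batteries_per_match = robots_per_match * fraction
    print(f"{fraction:.0%} depth: {ah_per_robot:.1f} Ah ({wh_per_robot:.0f} Wh) per robot, "
          f"~{batteries_per_match:.1f} batteries' worth per match")
```

At 25% depth that works out to about 1.5 batteries per match across the alliance, and at 50% about 3, so two batteries is a reasonable middle guess.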

Any chance plans and code for this device could be published?

Sure. It is just a lab tool - it isn’t cleaned up or productized, and finding where the source went might take a few hours. I could email them or publish them somewhere. How do you want them?

It is not my drop-off; that is from MK’s datasheet.

What makes you assert that MK is incorrect? I doubt they are “hedging their bets”.

What was the maximum SOC you saw when you first charged them? What C rate did you charge them at (C/10, C/20)? Now, two years later, using the same C rate, what is the SOC? Were you at the same temperature both times?

Or did you use some type of impedance spectrometer to correlate that to capacity?

Just because your robot still runs doesn’t mean the battery hasn’t lost capacity, which brings me back to my original question. Maybe your robot functions normally even after losing 40%? What is the average energy it expends per match? Did you test it now, two years later, under the same conditions?

@squirrel, I wouldn’t recommend charging them fully for long-term storage; from what I can tell, 40-60% SOC seems to be the sweet spot for reducing impedance growth. Also, keeping the ambient temperature in check is huge.

@Eric, check your PM box in a few minutes

I can try and get usage data from the battery charging station for the last few events and demos the robot’s been to - We always swap batteries between matches just to be safe, so the station should log the “before” and “after” voltage for a match with a 2009 robot. I’m personally not too familiar with the inner workings of the charging station (it has big buttons and you hit them to start charging), but if it’s trivial I’m sure I can get data from a mentor.

The only thing I see that is close to 20% over 6 months on this brochure is the charge retention, which doesn’t sound like what you are describing. Can you post your information?

I remember someone coming up to our team at IRI in 2008 and putting some sensor on our battery cable to check power usage. I believe it was used for making sure everything went right with the new control system this year. I’m pretty sure it was a FIRST guy.

Yep, we used that data as well, I just don’t have it handy at the moment. Thanks!

www.mkbattery.com/images/ES17-12.pdf

That link is very interesting, though.

Please do not confuse charge retention with capacity retention.

Charge loss is reversible; we generally re-charge batteries after each match.
Capacity loss is irreversible; it is the permanent loss of (some of) the ability to recharge (or, more accurately, to deliver that energy back to the load).

A charge loss of 20% over 6 months is typical for lead-acid chemistry, no surprises there. For capacity loss, focus instead on “life expectancy” on that same data sheet.

If you want to prevent significant capacity loss, keep the battery charged. Industry standard is 3 months on the shelf before it needs a recharge. We try to use that as a maximum.

The greatest contributor to permanent capacity loss is allowing the battery to sit in a discharged condition. This allows the formation of ‘hard’ lead sulfate crystals which cannot be reversed by charging. As the sulfur comes out of the electrolyte, the electrolyte loses effectiveness and the crystals ‘clog’ the lead oxide, reducing the ability of the chemical reaction to occur; both effects reduce capacity.

Anyhow, to the original question: the charger can deliver 6 amps and it takes about 1.5 hours to recharge a battery, so we’ve used 9 amp-hours as a rough estimate, about 50% of the battery capacity.
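The arithmetic behind that estimate, ignoring charge inefficiency (an assumption on my part):

```python
# Recharge-based estimate of energy used per match, ignoring charge
# inefficiency (in reality slightly more Ah go back in than came out).
charger_current_a = 6.0      # charger output, A
recharge_time_h = 1.5        # approximate time to top the battery back off, h
battery_capacity_ah = 18.0   # nameplate capacity at the low-rate discharge, Ah
nominal_voltage_v = 12.0

ah_returned = charger_current_a * recharge_time_h      # ~9 Ah
fraction_of_capacity = ah_returned / battery_capacity_ah
wh_per_match = ah_returned * nominal_voltage_v         # ~108 Wh

print(f"~{ah_returned:.0f} Ah per match, {fraction_of_capacity:.0%} of capacity, ~{wh_per_match:.0f} Wh")
```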

Ah, correct, I misunderstood their terminology. So their 20% loss is due to their internal ESR discharging it on the shelf, not the internal impedance growth (your ‘clogging’) where, in my lithium-ion world, the lithium remains trapped in the anode/cathode material.

My main interest in knowing the average energy was to see a) if this battery is overkill and b) if something like an ultracapacitor stack could replace it, since there is no significant capacity fade associated with a UC stack (among other benefits).

ESR typically means “equivalent series resistance”, which hampers discharging and is what is measured for “internal impedance growth”. It is not related to self discharge.

The batteries are overkill for energy, but not power. Matches are ~1/20th of an hour (for easy math and to be conservative). This means that unless the battery is capable of a sustained 20C discharge, to get the necessary power you will have to overshoot the energy.

Since the load is rather peaky, it also has relatively severe peak power requirements. This pushes the energy surplus even higher.

Long story short, lead acid isn’t the optimum choice for us, but it is working pretty well. Switching to a source with a higher C rating would allow us to lower the amount of excess energy we are carrying around, which would theoretically lower weight and cost. However, higher C-rated energy sources are not currently economically viable for FRC (I hope this changes!).
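To put a rough number on that C-rate argument (my own back-of-the-envelope, not anything from a data sheet):

```python
# Why the pack is oversized on energy: a match is roughly 1/20 of an hour,
# so using the battery's full energy within one match would require a
# sustained 20C discharge, well beyond what sealed lead-acid delivers.
match_hours = 1 / 20            # the "easy math" match length used above
battery_capacity_ah = 18.0

c_rate_needed = 1 / match_hours                      # 20C
current_needed_a = c_rate_needed * battery_capacity_ah

print(f"{c_rate_needed:.0f}C, i.e. ~{current_needed_a:.0f} A sustained, to empty the pack in one match")
```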

The 36V Dewalt pack that uses A123 cells has the energy and power density necessary, and is many pounds lighter. If only… :frowning:

With my back-of-the-envelope calculation of 9 Ah, I think we’re in the ballpark as far as energy supply goes. I suppose a team could theoretically use as much as 120 A for 2.3 minutes (= 4.6 Ah), but with a discharge current like that the battery’s capacity is exceeded (the curves only go up to 51 A; I am extrapolating). The conclusion is that there’s a reasonable amount of energy, but not a large excess.
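The arithmetic behind those figures (120 A for 2.3 minutes is my hypothetical worst case, as above):

```python
# Charge drawn by a hypothetical 120 A average over a 2.3 minute match,
# compared against the 18 Ah nameplate capacity.
avg_current_a = 120.0
match_minutes = 2.3
nameplate_ah = 18.0

ah_drawn = avg_current_a * match_minutes / 60.0      # 4.6 Ah
print(f"{ah_drawn:.1f} Ah drawn, {ah_drawn / nameplate_ah:.0%} of nameplate capacity")
# At 120 A the usable capacity is well below 18 Ah (the data sheet curves
# stop at 51 A), so the real margin is tighter than this simple ratio suggests.
```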

As for ultracaps, that amount of energy might be a bit expensive. Here’s a datasheet for the BCAP3000-P270-T04, a 3 kilofarad, 2.7 volt capacitor that lists for $130. You’d need no fewer than five in series just to approach the voltage of that $37 MK battery. I agree, they are rated for a million cycles - the FiM model may expand to make this useful - but at this time, cost is a significant obstacle.

To put it another way: It’s not a bad idea, but you can buy an awful lot of lead-acid capacity for that money.
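For a sense of scale, here is my own rough energy and cost comparison, using the per-cell figures quoted above:

```python
# Rough energy and cost comparison: five BCAP3000 cells in series vs. the MK battery.
cell_f, cell_v, cell_price = 3000.0, 2.7, 130.0   # per-cell figures quoted above
n_series = 5                                      # ~13.5 V stack, near battery voltage

stack_f = cell_f / n_series                       # 600 F equivalent
stack_v = cell_v * n_series                       # 13.5 V
stack_wh = 0.5 * stack_f * stack_v**2 / 3600.0    # ~15 Wh stored
stack_cost = cell_price * n_series                # $650

battery_wh = 12.0 * 18.0                          # ~216 Wh nameplate
battery_cost = 37.0

print(f"Cap stack: ~{stack_wh:.0f} Wh for ${stack_cost:.0f}")
print(f"MK battery: ~{battery_wh:.0f} Wh for ${battery_cost:.0f}")
```

Roughly 15 Wh for $650 versus roughly 216 Wh for $37, which is what I mean by buying an awful lot of lead-acid capacity for that money.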

Thanks Don for getting in with an early answer.
Although the MK battery is a sealed lead-acid battery, we must remember that it is also an AGM battery, so that changes things a little bit. One needs to look at the entire data sheet when evaluating this battery, and the significant data is life expectancy vs. depth of discharge. For our applications, there should be another (somewhat intangible) variable added that fudges life expectancy for maximum current draw.
I think we can all agree that this year’s game produced the lowest current draw of any game thus far, due to the lack of friction for the drive train. However, in those years where robots drive on carpet, we must consider the teams who choose an operating speed that by design draws excessive amounts of current.
The battery, fully charged, is capable of 600+ amps (albeit for short periods of time), and yes, teams do approach that figure. A well designed electrical system is capable of delivering near the stall current for each motor on the robot. A CIM motor stall current is 129 amps. Four motors in a drive system = ? In a pushing match with manipulators, it is not unheard of for teams to deplete a battery in a two minute match.
It is for this reason that IFI included a backup battery to keep the control system functional even when the current load pulled the battery below the operating voltage of the control system. Please be advised that sound mechanical design leads to good electrical performance. The new PD designed for 2009 included regulators that are designed to accommodate these voltage fluctuations.
Please also note the amp-hour capacity of these batteries is based on discharge rate: 18 AH at 1.8 amps discharge, or 6.4 AH at 54 amps. Extrapolate that data as you might, and a two minute match averaging 200 amps will deplete the battery and shorten its life.
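Those two data points can be fit with Peukert's law to get a feel for the high-rate behavior. This is my own extrapolation, well beyond the data sheet's curves, so treat it as a rough sketch:

```python
import math

# Fit a Peukert exponent to the two discharge points quoted above
# (18 Ah at 1.8 A; 6.4 Ah at 54 A), then extrapolate to a 200 A draw.
i1, ah1 = 1.8, 18.0
i2, ah2 = 54.0, 6.4
t1, t2 = ah1 / i1, ah2 / i2                   # hours to empty at each rate

k = math.log(t1 / t2) / math.log(i2 / i1)     # Peukert exponent, ~1.3
const = (i1 ** k) * t1                        # I^k * t is roughly constant

i3 = 200.0
t3 = const / (i3 ** k)                        # hours to empty at 200 A
print(f"k = {k:.2f}; at {i3:.0f} A the pack empties in ~{t3 * 60:.1f} min "
      f"(~{i3 * t3:.1f} Ah effective)")
```

That comes out to roughly 1.3 minutes at a sustained 200 A, consistent with a 200 amp average depleting the battery within a match.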
MK is working on a new design for our competition and hoping it will prove to be the FRC battery in the future. Some teams have samples and will be working on testing in the next couple of months.
Capacitor banks simply are not designed for, or capable of, sustained high current demands without significant reduction in terminal voltage. Using the data in the application notes and product guide, the cap solution at 200 amps would last about 60 seconds. We still need to remember that caps in series cannot simply be added. Five 3000 farad caps in series would result in about 600 farads of equivalent capacitance, with a five times increase in series resistance. Again using the application notes equations (http://www.tecategroup.com/app_notes/Representative_Test_Procedure-1007239.pdf), we would need an equivalent capacitance of 3,000 farads total, or five series banks of 5 caps connected in parallel, to achieve the power needed for our robot at 88 amps average. So 25 caps at 0.5 kg each is 12.5 kilos, or twice the weight and five times the size.
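The series/parallel bookkeeping from that paragraph, written out using the per-cell figures quoted there:

```python
# Equivalent capacitance and mass of the 5-series x 5-parallel ultracap bank
# described above, using the per-cell figures quoted there.
cell_f = 3000.0        # F per cell
cell_kg = 0.5          # kg per cell
n_series, n_parallel = 5, 5

string_f = cell_f / n_series               # 600 F per series string of 5
bank_f = string_f * n_parallel             # 3000 F with 5 strings in parallel
bank_kg = cell_kg * n_series * n_parallel  # 12.5 kg for 25 cells
# Series ESR is 5x a single cell per string; paralleling 5 strings brings
# the bank ESR back to roughly that of one cell.

print(f"Bank: {bank_f:.0f} F equivalent, {bank_kg:.1f} kg, {n_series * n_parallel} cells")
```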
In ham radio battery operations, a common practice is to average battery use by “key down” percentages, the ratio of the time an operator is actually transmitting (high current) to the time one is just receiving (low current). Say, in our case, we have a normal 40% key-down condition of 200 amps (that is about 45 amps per motor) coupled with a 20 amp key-up. We could calculate this to about 88 amps average current, which, interpolating from the MK graph, would still give us 2-3 minutes. With careful driving, software ramp-up for speed, and more efficient designs, a team might be able to get that figure down to 150 amps key-down; the result would be 68 amps average, which would slide that back towards 4 minutes on the discharge curves. Each time the average current is reduced, so is the required charge time. This could be significant in the finals.
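The key-down averaging, written out as a simple duty-cycle weighting; it lands within a few amps of the 88 A and 68 A figures above:

```python
# "Key down" duty-cycle averaging of battery current, per the ham radio analogy.
def average_current(key_down_a, key_up_a, key_down_fraction):
    return key_down_fraction * key_down_a + (1 - key_down_fraction) * key_up_a

for key_down_a in (200.0, 150.0):
    avg = average_current(key_down_a, key_up_a=20.0, key_down_fraction=0.40)
    print(f"{key_down_a:.0f} A key down -> ~{avg:.0f} A average")
```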

Very interesting. I have some experience with the Maxwell ultracaps, and you are both correct that they would not be acceptable under weight, size, or cost constraints. Though I should say I am working with a company whose electrolyte chemistry allows their ultracapacitor nominal cell voltage to be 4.1 V (vs. 2.5-2.7 V), so the number in series is reduced to 3. Additionally, their manufacturing process allows for a range of Ah-size ultracaps by varying the specific area in the cell pouch. They are prismatic cells that should allow for a more compact construction.

That all aside, you may still be correct that the cost is too great.

Stephen,
I think that the greatest potential for these caps lies in DC/DC converters, which is the implied application in devices like UPS and rail backup systems. As yet, that application is not available for use on our robots under the robot rules (other than in the PD).