Great AA Alkaline Battery Test – Pt 1: Battery Testing Fundamentals

Regular readers will probably know that I have a love of batteries and their associated test equipment. Maybe it's because they were expensive but pervasive, and because my enjoyment as a child relied quite heavily on batteries to power Walkmans and CD players (among other things), that I think so fondly of them.

Christmas really came early for me: as the lucky recipient of a B&K Precision Model 8600 DC Electronic Load for a RoadTest review, I was excited to put it to good use. I proposed in my RoadTest application that I would test some commercially available batteries and compare them for capacity. Soon, it got out of hand, resulting in a large number of tests to run and sleep being lost. The bigger issue was that the tests themselves didn’t really belong in a review of the electronic load, and the number of other brands involved detracted from the more important goal of evaluating the load unit itself. As a result, the “Great AA Alkaline Battery Test” series of postings was born.

This series was originally intended to launch in parallel with the RoadTest review, but it became clear that I couldn’t make the deadline for both, and thus it comes a few days late owing to the busy preparation of the graphical assets and analysis that underlie this investigation. It’s probably the most comprehensive investigation I’ve undertaken to date, so I hope you enjoy the results.

The Humble AA Alkaline Battery

The AA cell is perhaps the most popular battery – introduced in 1907, standardized in 1947 and still in use today. It powers almost everything from Walkmans to CD players, torches, toys, wireless doorbells, remote controls, audio recorders, digital cameras and flash units. Its ubiquity and standardization are among its strengths, with a large installed userbase, although in recent years the drive towards smaller devices has increased the use of AAA cells and Li-ion/Li-poly rechargeable cells.

While AA cells are available in cheaper carbon-zinc/zinc-chloride types, the most popular and generally the best value are the alkaline cells, which operate through a chemical reaction between zinc and manganese (IV) dioxide, with an electrolyte of potassium hydroxide. The alkaline cell handles higher loads and intermittent light loads better than the carbon-zinc type, and delivers around three to five times more capacity. It has a nominal voltage of 1.5V and can handle loads of up to around 700mA without significant heating. At the end of discharge, some hydrogen is evolved within the cell, which pressurizes it. Over time, seals can rupture, and the electrolyte can leak and corrode any metal in contact with it.

What is the Capacity?

Consumers looking to buy AA batteries face an uncertain buying proposition. Battery prices vary significantly between brands, none of which advertise their capacity on the packaging or cells. Many of them carry different, incomparable selling features, including “More Digital Power”, “Super Alkaline”, “Extra Alkaline”, “Our best ever”, “Lasts 3x longer”, etc. It’s hard to know what’s the best value for money when you can’t tell what’s marketing speak and what’s real. Sometimes they even try to substantiate the claims with references to an “IEC standard” without naming which standard – as far as I know, only the size of AA batteries is standardized, although there is also a pulsed-load standard for digital camera battery life, but even I don’t know which standard that is!

Even if you decide to go home and do some more research, the data you’ll find is usually limited to the more expensive options from reputable brands. For example, an Energizer E91 datasheet and a Duracell MX1500 datasheet can be easily found, but chances are the datasheets for the cells that are the best price option on the shelf are nowhere to be found. Even then, direct comparison between datasheets isn’t helped by the fact that some require a little arithmetic to get from service-hours to the mAh/mWh capacity that consumers would be familiar with.

As a result, before we even jump into talking about any tests, we should first work out what we mean by capacity and how it can be assessed.

Capacity – mAh, mWh? Cut-off?

Unfortunately, when we talk about cell capacity, it does require a little bit of explanation. What does it mean when I say I have a cell that has 1000mAh capacity? A simplified answer is that it’s a cell that can withstand a load of 1000mA for an hour … but that’s not the end of the story.

The first point to realize is that mAh is a measure of current-delivery capacity. This does not actually translate to energy at all. In fact, if I had a 1000mAh “1.5V” cell and a 500mAh “3V” cell, both would hold the same amount of energy (1500mWh), since energy is current multiplied by voltage (i.e. power) integrated over time.

As a result, if we are being really pedantic, the right unit for measuring energy is the mWh, because it takes the voltage into account. Naively, you may think “well, an alkaline cell is 1.5V, so we can just multiply the mAh figure by 1.5 and get mWh, right?” Wrong.

This is where we get bitten by technicalities – namely the whole idea of “nominal voltage”. The nominal voltage of the cell should be the average voltage of the cell throughout discharge, but in reality, the actual cell voltage varies as a function of load (you’ll see this in the next subsection). So, depending on the internal geometry, chemical mobility of the cell, state of charge and temperature, the cell voltage can be different and thus you wouldn’t get the “right” answer just by assuming the printed nominal voltage is the actual average voltage throughout the discharge at your particular load.
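
To make this concrete, here’s a minimal Python sketch using a made-up discharge log (the voltage samples and the 250mA load are illustrative assumptions, not real test data). It compares the naive nominal-voltage conversion against integrating the actual measured voltage over time:

```python
# Hypothetical discharge log: (hours elapsed, measured cell voltage) at a
# constant 250 mA draw -- invented numbers for illustration only.
samples = [
    (0.0, 1.55), (1.0, 1.40), (2.0, 1.30), (3.0, 1.22),
    (4.0, 1.12), (5.0, 1.00), (6.0, 0.80),
]
current_ma = 250.0

# Capacity in mAh: constant current, so just current x elapsed time.
mah = current_ma * samples[-1][0]

# Energy in mWh: trapezoidal integration of voltage x current over time.
mwh = sum(
    current_ma * (v0 + v1) / 2 * (t1 - t0)
    for (t0, v0), (t1, v1) in zip(samples, samples[1:])
)

naive_mwh = mah * 1.5  # assuming the printed nominal voltage is the average
print(f"{mah:.0f} mAh, {mwh:.0f} mWh measured vs {naive_mwh:.0f} mWh naive")
```

With these made-up samples, the actual average voltage works out to about 1.2V, so the naive 1.5V conversion overstates the delivered energy by roughly 25% – which is exactly the trap described above.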

This brings us to the next point – if the voltage varies as a function of state of charge, then when do we “decide” the cell is spent? This decision can have an impact on the derived capacity: if we set the threshold at a high value (e.g. 1.1V/cell), we can’t extract all of the capacity, while if we set it at a low value (e.g. 0V/cell), we can get all of the capacity, but it’s unrealistic because most devices will malfunction well before the cell reaches this point. The reality is somewhere between a full discharge to zero and about 1V/cell.

Battery Load Condition

Unfortunately, the capacity of a cell is not quite as simple as a single mAh or mWh number might imply. Let’s say you have a 1000mAh cell – you would expect that cell to be able to power a 1000mA load for an hour, or a 200mA load for five hours, right?

Unfortunately, the truth is that to say you have a 1000mAh cell is not that meaningful unless you know what rate it was measured at. This is because the actual capacity a cell can deliver varies as a function of the load current. If a cell is rated for 1000mAh at the C/5 rate, that means it can handle 200mA for five hours, but there’s no guarantee that it can handle 1000mA for an hour. As a rule of thumb, cells typically underperform when drawn at a higher rate than the rated rate, and overperform when drawn at a slower rate. As a result, it’s not inconceivable that the same cell might only handle 1000mA for 30 minutes, but manage 100mA for eleven hours.

The reason for this is likely to be (at least) two-fold. First, cells are non-ideal and have an internal resistance – this is like having an ideal cell with a resistor in series. At higher draw currents, the resistor’s dissipation increases, so more energy is lost within the cell and not delivered to the load. The second reason is probably more chemical, and has to do with how fast chemicals can diffuse to/from the electrodes. Discharging a cell at a high rate is likely to form depleted regions around the electrodes – how fast they can be replenished will affect how well the battery maintains its voltage (and thus, the function of the connected appliance).
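
The internal-resistance half of this argument is easy to quantify. Here’s a toy Python model of an ideal 1.5V source behind a series resistance (the 0.3Ω figure is an assumption for illustration, not a measured value for any real cell):

```python
# Toy model: ideal 1.5 V source with 0.3 ohm internal resistance
# (a plausible-looking figure for illustration -- not a real spec).
EMF = 1.5      # volts, ideal source voltage
R_INT = 0.3    # ohms, assumed internal resistance

def split_power(load_ma):
    """Return (power delivered to load, power wasted inside the cell) in mW."""
    i = load_ma / 1000.0
    v_terminal = EMF - i * R_INT      # voltage actually seen by the load
    return (v_terminal * i * 1000.0, i * i * R_INT * 1000.0)

for ma in (100, 500, 1000):
    out, lost = split_power(ma)
    print(f"{ma:>5} mA: {out:6.1f} mW to load, {lost:5.1f} mW lost internally")
```

Under this model the internal loss climbs from 2% of the total at 100mA to 20% at 1A – and that’s before any of the chemical diffusion effects come into play.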

As a result, testing will yield different figures under different conditions. The following graphs show the three most common load modes, with a vertical separator every half-hour of service time. For the most basic testing, it’s common to perform a constant current test.

In such a test, the current is held constant and the cell is discharged until it hits the termination (or cut-off) voltage. This mode is probably used because it’s simple to implement accurately on an electronic load, and because electrochemical reactions are inherently “electron based”, so a constant current represents a constant reaction rate. Its main drawback is that it doesn’t really interact with the voltage – the power delivered to the load (voltage × current) actually falls as the discharge progresses. Most appliances do not feature such a constant current load profile – I can’t even think of one at the moment. So while the results are easily measured by an electronic load, their usefulness to an end consumer is not as great as it seems.

If you’re a hobbyist like myself, then constant resistance discharge is probably the most familiar mode. It emulates the behaviour of an old filament flashlight, or of motorized appliances where the speed is not regulated, and it can be implemented with nothing more than a fixed resistor. Because of the way resistance interacts with voltage, it’s actually very gentle on the battery: as the battery voltage falls, the current falls accordingly, so the delivered power falls even further. For modern appliances, this type of loading is not common.

Instead, constant power discharge is more common. This is due to the prevalence of switching converters which provide regulated power to run electronics or even LEDs. In these “digital” appliances, a certain amount of power is needed to perform a task, so at any given voltage, a necessary current is drawn to produce the required amount of power. As a result, such a discharge regime is particularly harsh on the battery – current rises as the voltage falls which pushes the voltage down even further. This is why digital cameras seem to have such a short life with many disposable cells. Such a regime can be simulated with a proper DC electronic load, which internally operates a feedback loop to adjust its resistance accordingly to maintain the power within a narrow range of values.

In addition to this, there are various pulsed modes, which have seen some use in profiling digital camera usage. Such pulsed modes are often CC sections separated by time or steps between a number of CC settings. The main reason for these tests is that they simulate the discontinuous load nature of some appliances, which gives the secondary benefit that the alkaline cell has some time to “breathe” (i.e. chemical diffusion can occur) and thus post better lifetime figures.

Regardless, in the three examples above, the first was a constant current of 400mA, the second a constant resistance of 3.75 ohms, and the last a constant power of 0.6W. All three are (naively) the same on the basis of a 1.5V nominal voltage. However, the time taken to test is radically different, as is the delivered capacity, because the loads are only equivalent at exactly the nominal voltage, when in reality they spend a lot of time operating at different points.
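
To illustrate how the same cell can post quite different figures under the three modes, here’s a minimal Python simulation using an entirely invented cell model – a linear open-circuit voltage droop plus a fixed internal resistance (none of these parameters describe a real cell). It applies the same three nominally-equivalent loads:

```python
# Toy simulation of the three load modes against the same invented cell.
R_INT = 0.25        # ohms, assumed internal resistance
CAP_MAH = 2500.0    # mAh over which the open-circuit voltage droops
CUTOFF = 0.8        # volts, termination voltage at the terminals
DT_H = 0.01         # simulation timestep, hours
POWER_W = 0.6       # constant-power setting, watts

def ocv(used_mah):
    """Open-circuit voltage: 1.55 V fresh, drooping linearly to 0.7 V."""
    return 1.55 - 0.85 * min(used_mah / CAP_MAH, 1.0)

def discharge(current_ma_for):
    """Step until the terminal voltage hits the cut-off; return (mAh, mWh)."""
    used = energy = 0.0
    while True:
        v = ocv(used)
        i = current_ma_for(v)                 # mA drawn at this open-circuit V
        v_term = v - (i / 1000.0) * R_INT
        if v_term <= CUTOFF:
            return used, energy
        used += i * DT_H
        energy += v_term * i * DT_H

cc = discharge(lambda v: 400.0)                        # constant current
cr = discharge(lambda v: v / (3.75 + R_INT) * 1000.0)  # constant resistance
# Constant power: solve (v - iR) * i = P exactly for the load current.
cp = discharge(lambda v: (v - (v * v - 4 * R_INT * POWER_W) ** 0.5)
                         / (2 * R_INT) * 1000.0)

for name, (mah, mwh) in (("CC 400mA", cc), ("CR 3.75R", cr), ("CP 0.6W", cp)):
    print(f"{name}: {mah:6.0f} mAh, {mwh:6.0f} mWh")
```

In this toy model the constant-power load terminates earliest and delivers the least charge, while the constant-resistance load is the gentlest and delivers the most – consistent with the behaviour described for each mode above.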

Other Tests and Potential Issues

A little research on the internet reveals a rather limited number of battery tests (which I am not referring to directly), some done by companies, but many done by hobbyists. While I am very appreciative of the information offered and their efforts, each of them has its own potential issues. Unfortunately, the short answer for most of these issues is that there is no panacea, but it’s worth being aware of them nonetheless.

Applicability

The first issue is simply applicability – many of the tests are done by people overseas with batteries that aren’t on sale on the shelves here (in Australia), or with “extinct” series of batteries, as manufacturers like to change-up the branding every so often. So while there are a few sites with information, their applicability is limited. Sadly, there isn’t much we can do about this, but having more data might help …

Series Sets of Batteries, Sample Data & Size

The second issue is that some users have tested batches of batteries in series sets of two or four. This has limitations: the results reflect a mixture of what the individual cells are capable of, and it also limits the amount of data provided – a set of two or four cells yields only a single data point.

This also ties in with an issue of the sample data being provided – some sites test cells but do not provide information about their batch codes or expiry dates. Who knows if a “bad showing” is merely because the cells under test were soon to expire? Further to this, some tests have fairly small sample sizes – even just one cell in some cases – which makes it hard to attribute much weight to them. Unfortunately, in many cases, it seems that time and expense are the reasons behind this.

Where batteries are acquired over time, their freshness might vary, and the test equipment may also drift. Thus it seems prudent that testing be completed expeditiously, and that cells be purchased and tested with minimal storage and handling time, so that the results are representative of the retail state of the cells and applicable to the present market.

Electrical Test Conditions, Temperature & Cut-off Voltage

Amongst the other limitations is the choice of various test conditions which were not well characterized – e.g. a resistor wired across the terminals of a mechanical clock which is read until it stops may give an indication, but it doesn’t provide the data you need to make a firm call. Such tests might not have a well-defined cut-off voltage, might use loads which are too large (to hasten test times) or too small (to try and eke out all the capacity), and might record nothing about the actual voltage under discharge. They might report mAh, but without knowing the actual average voltage, they “assume” all alkaline cells are going to be “about the same”.

Some do make an effort to make the test conditions well known and regulated, but sometimes they are thwarted unknowingly. In some test setups, I’ve seen the use of “classic” battery holders with very thin wires trying to carry high currents. As we all know, thin wires and large currents mean voltage drop – the cut-off voltage is not going to be accurate in this case!

Another potentially big issue is that the performance of a battery depends on the temperature it is operated at – I haven’t seen many people stipulate or regulate the temperature of the batteries under test, so some of their reported capacity variation might come down to the temperature of the cells themselves.

Test Methodology

In light of this, it’s clear that testing batteries is hard to do right, and there’s no one accepted method. As a result, I’ve decided to do my tests under the following conditions, providing justifications:

Load: B&K Precision Model 8600

Temperature: 21 degrees C, by air conditioner

Rate: Constant Power, 0.48W

Cut-off: 0.8V

Sample Size: 4 cells where available

Connectivity: 16AWG wire and clips

The load was the B&K Precision Model 8600 because I had it available to me, and because it’s a very capable load with an astonishingly good calibration report. The temperature was chosen to be 21 degrees C, as this is a common “room” temperature in existing battery datasheets. The cut-off of 0.8V was also chosen based on existing battery datasheets, and because most silicon-based devices don’t operate below about 0.6-0.7V.

A constant power discharge was chosen to emulate the load of switching power converters and appliances that require a constant power – e.g. digital cameras, LED torches, audio recorders, CD players, GPS trackers, etc. The choice of 0.48W was made for a number of reasons: a common 2000mAh (C/5) 1.2V Ni-MH cell is rated to deliver 400mA over five hours, which is equivalent to 0.48W. Such a discharge rate means the test data is achievable in a reasonable time, a fair comparison can be made with Ni-MH cells (which are fairly popular and ecologically desirable), and the load is close enough to things like 1W LED torches which use two AA cells. It also means the end-of-discharge current is 0.48W ÷ 0.8V = 600mA, which is within the 700mA limitation for AA cells.

A sample set of four cells was used to avoid excessive test times and waste, although in some cases two or three cells were used as that was what was available. To minimise voltage-drop-induced error, 16AWG (1.5mm²) wire and clips were used to attach to the adjacent terminals of a multi-cell AA holder. No external sensing was used in light of this, as the cell voltage is likely to decline very quickly towards the end of discharge, and a few mV difference either way is not going to make a big difference.
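
The arithmetic behind the 0.48W figure can be sanity-checked in a few lines of Python – this simply restates the reasoning above, it’s not a measurement:

```python
# Re-deriving the 0.48 W test rate from the Ni-MH comparison in the text.
nimh_capacity_mah = 2000.0
nimh_nominal_v = 1.2
c5_current_ma = nimh_capacity_mah / 5                   # C/5 rate: 400 mA
test_power_w = nimh_nominal_v * c5_current_ma / 1000.0  # 1.2 V x 0.4 A = 0.48 W

cutoff_v = 0.8
end_current_ma = test_power_w / cutoff_v * 1000.0       # worst case: 600 mA

print(f"{test_power_w:.2f} W test, ending at {end_current_ma:.0f} mA")
assert end_current_ma <= 700  # stays within the AA heating limit noted earlier
```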

I forgot to take a picture of the actual set-up but it’s extremely similar to my preliminary tests done above with this set-up with 1mm² (17AWG) wire, just with slightly up-rated wires:

Naysayers will probably already be shouting “you won’t get all the capacity from the cell like that”, and I’d agree. The point is not to meter the entire capacity, which would probably require working at 25mA and discharging for about five days per cell (snore!). The point is to emulate a heavy drain device and determine how much energy such a device might be able to extract from the cell. While I could also do another set at a low rate, I personally feel that low-rate discharge data is of limited value, because if you are draining cells that slowly, you won’t be chewing through them at a rate which would make a significant financial difference (e.g. in a remote control).

Another complaint might be the lack of “pulsed” rest time for the cells to recover. In reality, some use cases will have rest time, but to incorporate that into the test protocol would have increased the test time taken and I really couldn’t afford that. After all, I had intended to deliver the testing within a two month window.

Another objection may be the use of non-permanent connections or the use of a battery holder which might have minute contact resistance variation from cell to cell. This is “life” in a nutshell – your devices have contacts in varying states of cleanliness, and with varying planarity and contact area, and different cells might interact slightly differently with it. The test is practical in that sense. The lack of permanent attachment is not seen as a major issue either, as the slight variation in voltage would only be a mV or so, and wouldn’t be significant in the bigger scheme of things (e.g. cell-to-cell variations).

Unfortunately, if you still disagree with my methodology, it’s too late as the testing has been done. But feel free to start your own battery test adventure …

But … wait! I’ve heard that … is best?

In my long time lurking around the internet, I’ve come across a number of commonly spouted battery myths which I thought might be worth trying to answer with this investigation. While I probably can’t provide an answer to all of them, you might have heard the following advice at one stage:

The cells with the expiry date furthest away are the freshest, so buy those.

Heavier cells have more material in them, so they must be the better buy.

Parallel import cells from other markets might be cheaper but are very much inferior to locally supplied cells.

Unknown brand cells are no good – stick with the branded stuff if you don’t want to waste your money.

The “digital” power stuff on batteries is all hype, and there’s no reason to buy based on those claims.

Sometimes, the answer is not as obvious as it seems, and there is a fair amount of conjecture about this amongst certain forums – but I won’t get into any arguments. I’ll let my data do the talking.

Conclusion

As you can see, testing and comparing batteries is not as straightforward as having a magical unit where you can plug the cell in and a number comes up. Depending on how you test the battery, you can end up with a different result, and not all results are likely to be equivalent or even meaningful depending on the final application of the cells. There are a multitude of factors which contribute to the cell’s final delivered capacity which include the material within the cell itself, how long it has been stored, its internal geometry/resistance, mobility of the chemicals inside, load profile, duty cycle, temperature and end of discharge cut-off voltage.

A methodology was developed which I feel is the best balance of representative cell loading for a modern appliance, time required to test the cell, and consistency of results. While there is no “perfect” condition to test all cells, this set of conditions should provide useful data and allow for comparison where it really matters – under high draw where appliances consume the most cells and the most savings might be made.

In the next part, I will introduce all of the different batteries tested – the contenders.