Saturday, April 8, 2017

The reconstructed depth of the Little Ice Age varies between different
studies (anomalies shown are from the 1950–80 reference period)

The Little Ice Age (LIA) was a period of cooling that occurred after the Medieval Warm Period.[1] Although it was not a true ice age, the term was introduced into scientific literature by François E. Matthes in 1939.[2] It has been conventionally defined as a period extending from the 16th to the 19th centuries,[3][4][5] but some experts prefer an alternative timespan from about 1300[6] to about 1850.[7][8][9]
Climatologists and historians working with local records no longer
expect to agree on either the start or end dates of the period, which
varied according to local conditions.

Areas involved

The IPCC Third Assessment Report (TAR) of 2001 summarized the evidence as follows:

Evidence from mountain glaciers does suggest increased glaciation in a number of widely spread regions outside Europe prior to the twentieth century, including Alaska, New Zealand and Patagonia.
However, the timing of maximum glacial advances in these regions
differs considerably, suggesting that they may represent largely
independent regional climate changes, not a globally-synchronous
increased glaciation. Thus current evidence does not support globally
synchronous periods of anomalous cold or warmth over this interval, and
the conventional terms of "Little Ice Age" and "Medieval Warm Period"
appear to have limited utility in describing trends in hemispheric or
global mean temperature changes in past centuries.... [Viewed]
hemispherically, the "Little Ice Age" can only be considered as a modest
cooling of the Northern Hemisphere during this period of less than 1°C relative to late twentieth century levels.[10]

The IPCC Fourth Assessment Report
(AR4) of 2007 discusses more recent research, giving particular
attention to the Medieval Warm Period. It states that "when viewed
together, the currently available reconstructions indicate generally
greater variability in centennial time scale trends over the last 1 kyr
than was apparent in the TAR.... The result is a picture of relatively
cool conditions in the seventeenth and early nineteenth centuries and
warmth in the eleventh and early fifteenth centuries, but the warmest
conditions are apparent in the twentieth century. Given that the
confidence levels surrounding all of the reconstructions are wide,
virtually all reconstructions are effectively encompassed within the
uncertainty previously indicated in the TAR. The major differences
between the various proxy reconstructions relate to the magnitude of
past cool excursions, principally during the twelfth to fourteenth,
seventeenth and nineteenth centuries."[11]

Dating

The last written records of the Norse Greenlanders are from a 1408 marriage at Hvalsey Church, now the best-preserved of the Norse ruins.

There is no consensus regarding the time when the Little Ice Age began,[12][13] but a series of events before the known climatic minima has often been referenced. In the 13th century, pack ice began advancing southwards in the North Atlantic, as did glaciers in Greenland. Anecdotal evidence suggests expanding glaciers
almost worldwide. Based on radiocarbon dating of roughly 150 samples of
dead plant material with roots intact, collected from beneath ice caps
on Baffin Island and Iceland, Miller et al. (2012)[6]
state that cold summers and ice growth began abruptly between 1275 and
1300, followed by "a substantial intensification" from 1430 to 1455.[14]

In contrast, a climate reconstruction based on glacial length[15][16] shows no great variation from 1600 to 1850 but strong retreat thereafter.

Therefore, any of several dates ranging over 400 years may indicate the beginning of the Little Ice Age.

The Little Ice Age ended in the latter half of the 19th century or early in the 20th century.[17][18][19]

Northern Hemisphere

Europe

The Frozen Thames, 1677

The Little Ice Age brought colder winters to parts of Europe and North America. Farms and villages in the Swiss Alps were destroyed by encroaching glaciers during the mid-17th century.[20] Canals and rivers in Great Britain and the Netherlands were frequently frozen deeply enough to support ice skating and winter festivals.[20] The first River Thames frost fair was in 1607 and the last in 1814; changes to the bridges and the addition of the Thames Embankment affected the river flow and depth, greatly diminishing the possibility of further freezes. Freezing of the Golden Horn and the southern section of the Bosphorus took place in 1622. In 1658, a Swedish army marched across the Great Belt to Denmark to attack Copenhagen. The winter of 1794–1795 was particularly harsh: the French invasion army under Pichegru was able to march on the frozen rivers of the Netherlands, and the Dutch fleet was fixed in the ice in Den Helder harbour.

Sea ice surrounding Iceland
extended for miles in every direction, closing harbors to shipping. The
population of Iceland fell by half, but that may have been caused by skeletal fluorosis after the eruption of Laki in 1783.[21] Iceland also suffered failures of cereal crops and people moved away from a grain-based diet.[22] The Norse colonies in Greenland
starved and vanished by the early 15th century, as crops failed and
livestock could not be maintained through increasingly harsh winters,
but Jared Diamond
has suggested they had exceeded the agricultural carrying capacity
before then. Greenland was largely cut off by ice from 1410 to the
1720s.[23]

Winter skating on the main canal of Pompenburg, Rotterdam in 1825, shortly before the minimum, by Bartholomeus Johannes van Hove

The twentieth-century climatologist Hubert Lamb
said that in many years, "snowfall was much heavier than recorded
before or since, and the snow lay on the ground for many months longer
than it does today."[24] In Lisbon, Portugal, snowstorms were much more frequent than today; one winter in the 17th century produced eight snowstorms.[25]
Many springs and summers were cold and wet but with great variability
between years and groups of years. Crop practices throughout Europe had
to be altered to adapt to the shortened, less reliable growing season,
and there were many years of dearth and famine (such as the Great Famine of 1315–1317, but that may have been before the Little Ice Age).[26] According to Elizabeth Ewan and Janay Nugent, "Famines
in France 1693–94, Norway 1695–96 and Sweden 1696–97 claimed roughly 10
percent of the population of each country. In Estonia and Finland in
1696–97, losses have been estimated at a fifth and a third of the
national populations, respectively."[27] Viticulture disappeared from some northern regions, and storms caused serious flooding and loss of life. Some of them resulted in permanent loss of large areas of land from the Danish, German, and Dutch coasts.[24]

The violin maker Antonio Stradivari
produced his instruments during the Little Ice Age. The colder climate
is proposed to have caused the wood used in his violins to be denser
than in warmer periods, contributing to the tone of his instruments.[28] According to the science historian James Burke,
the period inspired such novelties in everyday life as the widespread use of buttons and button-holes and the knitting of custom-made undergarments to better cover and insulate the body. Fireplace hoods were installed to
make more efficient use of fires for indoor heating, and enclosed stoves were developed, with early versions often covered with ceramic tiles.[29]

The Little Ice Age, by anthropology professor Brian Fagan of the University of California at Santa Barbara, tells of the plight of European peasants during the 1300 to 1850 chill: famines, hypothermia, bread riots
and the rise of despotic leaders brutalizing an increasingly dispirited
peasantry. By the late 17th century, agriculture had declined dramatically: "Alpine villagers lived on bread made from ground nutshells mixed with barley and oat flour."[30] Historian Wolfgang Behringer has linked intensive witch-hunting episodes in Europe to agricultural failures during the Little Ice Age.[31]

Depictions of winter in European painting

William James Burroughs analyses the depiction of winter in paintings, as does Hans Neuberger.[32]
Burroughs asserts that the depiction of winter in paintings occurred almost entirely from 1565 to 1665 and was associated with the climatic decline from 1550 onwards. Burroughs claims that before then there had been almost no depictions of winter in art, and he "hypothesizes that the unusually harsh winter of 1565 inspired great artists to depict highly original images and that the decline in such paintings was a combination of the 'theme' having been fully explored and mild winters interrupting the flow of painting".[33]
Wintry scenes, which entail technical difficulties in painting, have
been regularly and well handled since the early 15th century by artists
in illuminated manuscript cycles showing the Labours of the Months, typically placed on the calendar pages of books of hours. January and February are typically shown as snowy, as in February in the famous cycle of Les Très Riches Heures du duc de Berry, painted 1412–1416 and illustrated below. Since landscape painting had not yet developed as an independent genre, the absence of other winter scenes is not remarkable.

Burroughs says that snowy subjects return to Dutch Golden Age painting with works by Hendrick Avercamp
from 1609 onwards. There is then a hiatus between 1627 and 1640, before
the main period of such subjects from the 1640s to the 1660s, which
relates well with climate records for the later period. The subjects are
less popular after about 1660, but that does not match any recorded
reduction in severity of winters and may reflect only changes in taste
or fashion. In the later period between the 1780s and 1810s, snowy
subjects again became popular.[33]

Neuberger analysed 12,000 paintings, held in American and European
museums and dated between 1400 and 1967, for cloudiness and darkness.[32] His 1970 publication shows an increase in such depictions that corresponds to the Little Ice Age,[32] peaking between 1600 and 1649.[35]

Paintings and contemporary records in Scotland demonstrate that curling and ice skating
were popular outdoor winter sports, with curling dating back to the
16th century and becoming widely popular in the mid-19th century.[36] As an example, an outdoor curling pond constructed in Gourock
in the 1860s remained in use for almost a century, but increasing use
of indoor facilities, problems of vandalism, and milder winters led to
the pond being abandoned in 1963.[37]

North America

Early European explorers and settlers of North America reported exceptionally severe winters. For example, according to Lamb, Samuel Champlain reported bearing ice along the shores of Lake Superior in June 1608. Both Europeans and indigenous peoples suffered excess mortality in Maine during the winter of 1607–1608, and extreme frost was reported in the Jamestown, Virginia, settlement at the same time.[24] Native Americans formed leagues in response to food shortages.[23] The journal of Pierre de Troyes, Chevalier de Troyes, who led an expedition to James Bay
in 1686, recorded that the bay was still littered with so much floating
ice that he could hide behind it in his canoe on 1 July.[38] In the winter of 1780, New York Harbor froze, allowing people to walk from Manhattan Island to Staten Island.

The extent of mountain glaciers had been mapped by the late 19th
century. In the north and the south temperate zones, snowlines (the
boundaries separating zones of net accumulation from those of net
ablation) were about 100 metres (330 ft) lower than they were in 1975.[39] In Glacier National Park, the last episode of glacier advance came in the late 18th and the early 19th centuries.[40] In Chesapeake Bay, Maryland, large temperature excursions were possibly related to changes in the strength of North Atlantic thermohaline circulation.[41]

Mesoamerica

An analysis of several proxies undertaken in Mexico's Yucatan Peninsula, linked by its authors to Maya and Aztec chronicles relating periods of cold and drought, supports the existence of the Little Ice Age in the region.[42]

Atlantic Ocean

In the North Atlantic, sediments accumulated since the end of the last ice age,
nearly 12,000 years ago, show regular increases in the amount of coarse
sediment grains deposited from icebergs melting in the now open ocean,
indicating a series of 1–2 °C (2–4 °F) cooling events recurring every
1,500 years or so.[43]
The most recent of these cooling events was the Little Ice Age. These
same cooling events are detected in sediments accumulating off Africa,
but the cooling events appear to be larger, ranging between 3–8 °C
(6–14 °F).[44]

Asia

Although the original designation of a Little Ice Age referred to reduced temperatures in Europe and North America, there is some evidence of extended periods of cooling outside this region, but it is not clear whether they are related or independent events. Mann states:[3]

While there is evidence that many other regions outside Europe
exhibited periods of cooler conditions, expanded glaciation, and
significantly altered climate conditions, the timing and nature of these
variations are highly variable from region to region, and the notion of
the Little Ice Age as a globally synchronous cold period has all but
been dismissed.

In China, warm-weather crops such as oranges were abandoned in Jiangxi Province, where they had been grown for centuries.[45] Also, the two periods of most frequent typhoon strikes in Guangdong coincide with two of the coldest and driest periods in northern and central China (1660–1680, 1850–1880).[46]

Southern Hemisphere

Scientific
works point out cold spells and climate changes in areas of the
Southern Hemisphere and their correlation to the Little Ice Age.

Africa

In Ethiopia and Mauritania, permanent snow was reported on mountain peaks at levels where it does not occur today.[45] Timbuktu, an important city on the trans-Saharan caravan route, was flooded at least 13 times by the Niger River; there are no records of similar flooding before or since.[45]
In Southern Africa, sediment cores retrieved from Lake Malawi
show colder conditions between 1570 and 1820, suggesting the Lake
Malawi records "further support, and extend, the global expanse of the
Little Ice Age."[48] A novel 3,000-year temperature reconstruction method, based on the rate of stalagmite growth in a cold cave in South Africa, further suggests a cold period from 1500 to 1800 "characterizing the South African Little Ice age."[49]

Antarctica

Kreutz et al. (1997) compared results from studies of West Antarctic ice cores with the Greenland Ice Sheet Project Two (GISP2) and suggested a synchronous global Little Ice Age.[50] An ocean sediment core from the eastern Bransfield Basin in the Antarctic Peninsula shows centennial events that the authors link to the Little Ice Age and Medieval Warm Period.[51]
The authors note "other unexplained climatic events comparable in
duration and amplitude to the LIA and MWP events also appear."

The Siple Dome
(SD) had a climate event with an onset time that is coincident with
that of the Little Ice Age in the North Atlantic based on a correlation
with the GISP2 record. The event is the most dramatic climate event in
the SD Holocene glaciochemical record.[52]
The Siple Dome ice core also contained its highest rate of melt layers
(up to 8%) between 1550 and 1700, most likely because of warm summers.[53]Law Dome ice cores show lower levels of CO2 mixing ratios from 1550 to 1800, which Etheridge and Steele conjecture are "probably as a result of colder global climate."[54]

Sediment cores in Bransfield Basin, Antarctic Peninsula, have neoglacial indicators by diatom and sea-ice taxa variations during the Little Ice Age.[55]
The stable isotope record from the Mount Erebus Saddle (MES) suggests that the Ross Sea region experienced average temperatures 1.6 ± 1.4 °C cooler during the Little Ice Age than over the last 150 years.[56]

Australia and New Zealand

Limited evidence describes conditions in Australia. Lake records in Victoria
suggest that conditions, at least in the south of the state, were wet
and/or unusually cool. In the north, evidence suggests fairly dry
conditions, but coral cores from the Great Barrier Reef show rainfall similar to today's but with less variability. A study that analyzed isotopes
in Great Barrier Reef corals suggested that increased water vapor
transport from southern tropical oceans to the poles contributed to the
Little Ice Age.[57] Borehole reconstructions from Australia suggest that over the last 500
years, the 17th century was the coldest on the continent, but the
borehole temperature reconstruction method does not show good agreement
between the Northern and Southern Hemispheres.[58]

Pacific Islands

Sea-level
data for the Pacific Islands suggest that sea level in the region fell,
possibly in two stages, between 1270 and 1475. This was associated with
a 1.5 °C fall in temperature (determined from oxygen-isotope analysis)
and an observed increase in El Niño frequency.[60] Tropical Pacific coral records indicate the most frequent, intense El Niño-Southern Oscillation activity in the mid-seventeenth century.[61]

South America

Tree-ring data from Patagonia show cold episodes between 1270 and 1380 and from 1520 to 1670, contemporary with the events in the Northern Hemisphere.[62][63] Eight sediment cores taken from Puyehue Lake
have been interpreted as showing a humid period from 1470 to 1700,
which the authors describe as a regional marker of the onset of the
Little Ice Age.[64]
A 2009 paper details cooler and wetter conditions in southeastern South America between 1550 and 1800, citing evidence obtained via several proxies and models.[65] δ18O records from three Andean ice cores show a cool period from 1600 to 1800.[66]

Although only anecdotal evidence, in 1675 the Spanish explorer Antonio de Vea entered San Rafael Lagoon through Río Témpanos (Spanish for "Ice Floe River") without mentioning any ice floe but stating that the San Rafael Glacier did not reach far into the lagoon. In 1766, another expedition noticed that the glacier reached the lagoon and calved into large icebergs. Hans Steffen
visited the area in 1898, noticing that the glacier penetrated far into
the lagoon. Such historical records indicate a general cooling in the
area between 1675 and 1898: "The recognition of the LIA in northern
Patagonia, through the use of documentary sources, provides important,
independent evidence for the occurrence of this phenomenon in the
region."[67] As of 2001, the border of the glacier had significantly retreated as compared to the borders of 1675.[67]

Orbital cycles

Orbital forcing
from cycles in Earth's orbit around the Sun has, for the past 2,000
years, caused a long-term northern hemisphere cooling trend that
continued through the Middle Ages and the Little Ice Age. The rate of Arctic cooling is roughly 0.02 °C per century.[69] This trend could be extrapolated to continue into the future, possibly leading to a full ice age, but the twentieth-century instrumental temperature record shows a sudden reversal of this trend, with a rise in global temperatures attributed to greenhouse gas emissions.[69]
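As back-of-the-envelope arithmetic (a sketch, not a figure from the cited studies), the cumulative effect of that cooling rate over the stated 2,000-year interval can be computed directly:

```python
# Back-of-the-envelope arithmetic only: extrapolating the ~0.02 degC/century
# Arctic cooling rate cited above across the past 2,000 years.
rate_per_century = 0.02   # degC of cooling per century (approximate figure)
centuries = 2000 / 100    # the past 2,000 years, expressed in centuries

total_cooling = rate_per_century * centuries
print(f"Cumulative orbital cooling: ~{total_cooling:.1f} degC over 2,000 years")
# prints: Cumulative orbital cooling: ~0.4 degC over 2,000 years
```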

Solar activity

Solar activity events recorded in radiocarbon

The Maunder minimum in a 400-year history of sunspot numbers

There is still a very poor understanding of the correlation between low sunspot activity and cooling temperatures.[70][71] During the period 1645–1715, in the middle of the Little Ice Age, there was a period of low solar activity known as the Maunder Minimum. The Spörer Minimum has also been identified with a significant cooling period between 1460 and 1550.[72] Other indicators of low solar activity during this period are levels of the isotopes carbon-14 and beryllium-10.[73]

Volcanic activity

In a 2012 paper, Miller et al.
link the Little Ice Age to an "unusual 50-year-long episode with four
large sulfur-rich explosive eruptions, each with global sulfate loading
>60 Tg" and notes that "large changes in solar irradiance are not
required."[6]

Throughout the Little Ice Age, the world experienced heightened volcanic activity.[74] When a volcano
erupts, its ash reaches high into the atmosphere and can spread to
cover the whole earth. The ash cloud blocks out some of the incoming
solar radiation, leading to worldwide cooling that can last up to two years after an eruption. Also emitted by eruptions is sulfur, in the form of sulfur dioxide gas. When it reaches the stratosphere, it turns into sulfuric acid particles, which reflect the sun's rays, further reducing the amount of radiation reaching Earth's surface.

A recent study found that an especially massive tropical volcanic eruption in 1257, possibly of the now-extinct Mount Samalas near Mount Rinjani, both in Lombok, Indonesia,
followed by three smaller eruptions in 1268, 1275, and 1284 did not
allow the climate to recover. This may have caused the initial cooling,
and the 1452–53 eruption of Kuwae in Vanuatu triggered a second pulse of cooling.[6][14] The cold summers can be maintained by sea-ice/ocean feedbacks long after volcanic aerosols are removed.

Decreased human populations

Some researchers have proposed that human influences on climate began earlier than is normally supposed (see Early anthropocene
for more details) and that major population declines in Eurasia and the
Americas reduced this impact, leading to a cooling trend. William Ruddiman has proposed that somewhat reduced populations of Europe, East Asia, and the Middle East during and after the Black Death
caused a decrease in agricultural activity. He suggests reforestation
took place, allowing more carbon dioxide uptake from the atmosphere,
which may have been a factor in the cooling noted during the Little Ice
Age. Ruddiman further hypothesizes that a reduced population in the Americas after European contact in the early sixteenth century could have had a similar effect.[81][82] Faust, Gnecco, Mannstein and Stamm (2005)[83] and Nevle (2011)[84]
supported depopulation in the Americas as a factor, asserting that
humans had cleared considerable amounts of forest to support agriculture
in the Americas before the arrival of Europeans brought on a population
collapse. A 2008 study of sediment cores and soil samples further
suggests that carbon dioxide uptake via reforestation in the Americas
could have contributed to the Little Ice Age.[85] The depopulation is linked to a drop in carbon dioxide levels observed at Law Dome, Antarctica.[83]

Increased human populations

It
has been speculated that increased human populations living at high
latitudes caused the Little Ice Age through deforestation. The increased
albedo
due to this deforestation (snow-covered ground reflects more solar radiation than dark, tree-covered areas) could have had a
profound effect on global temperatures.[86]

Inherent variability of climate

Spontaneous
fluctuations in global climate might explain past variability. It is
very difficult to know what the true level of variability from only
internal causes might be since other forcings, as noted above, exist
whose magnitude may not be known either. One approach to evaluating
internal variability is to use long integrations of coupled
ocean-atmosphere global climate models.
They have the advantage that the external forcing is known to be zero,
but the disadvantage is that they may not fully reflect reality. The
variations may result from chaos-driven changes in the oceans, the atmosphere, or interactions between the two.[87] Two studies have concluded that the demonstrated inherent variability is not great enough to account for the Little Ice Age.[87][88]

An electric battery is a device consisting of one or more electrochemical cells with external connections provided to power electrical devices such as flashlights, smartphones, and electric cars.[1] When a battery is supplying electric power, its positive terminal is the cathode and its negative terminal is the anode.[2]
The terminal marked negative is the source of electrons that, when connected to an external circuit, will flow and deliver energy to an external device. When a battery is connected to an external circuit, electrolytes
are able to move as ions within, allowing the chemical reactions to be
completed at the separate terminals and so deliver energy to the
external circuit. It is the movement of those ions within the battery
which allows current to flow out of the battery to perform work.[3]
Historically the term "battery" specifically referred to a device
composed of multiple cells, however the usage has evolved to
additionally include devices composed of a single cell.[4]

According to a 2005 estimate, the worldwide battery industry generates US$48 billion in sales each year,[5] with 6% annual growth.

Batteries have much lower specific energy (energy per unit mass) than common fuels
such as gasoline. This is somewhat offset by the higher efficiency of
electric motors in producing mechanical work, compared to combustion
engines.
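To make that trade-off concrete, here is a rough Python sketch. The energy densities and drivetrain efficiencies below are common ballpark figures assumed for illustration, not values from this article:

```python
# Rough, illustrative figures (assumptions, not from the article):
# gasoline ~12,700 Wh/kg of chemical energy, combustion drivetrain ~25% efficient;
# lithium-ion battery ~200 Wh/kg, electric drivetrain ~90% efficient.
gasoline_wh_per_kg = 12_700
engine_efficiency = 0.25
battery_wh_per_kg = 200
motor_efficiency = 0.90

# Useful mechanical work obtainable per kilogram of each energy store
useful_gasoline = gasoline_wh_per_kg * engine_efficiency
useful_battery = battery_wh_per_kg * motor_efficiency

print(f"Useful work per kg: gasoline ~{useful_gasoline:.0f} Wh, "
      f"battery ~{useful_battery:.0f} Wh")
print(f"Gasoline still delivers ~{useful_gasoline / useful_battery:.0f}x "
      f"more useful work per unit mass")
```

Even after the efficiency offset, the mass advantage of the fuel remains large under these assumed numbers; the offset narrows the gap rather than closing it.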

History

The usage of "battery" to describe a group of electrical devices dates to Benjamin Franklin, who in 1748 described multiple Leyden jars by analogy to a battery of cannon[6] (Benjamin Franklin borrowed the term "battery" from the military, which refers to weapons functioning together[7]).

Alessandro Volta built and described the first electrochemical battery, the voltaic pile, in 1800.[8]
This was a stack of copper and zinc plates, separated by brine-soaked
paper disks, that could produce a steady current for a considerable
length of time. Volta did not understand that the voltage was due to
chemical reactions. He thought that his cells were an inexhaustible
source of energy,[9]
and that the associated corrosion effects at the electrodes were a mere
nuisance, rather than an unavoidable consequence of their operation, as
Michael Faraday showed in 1834.[10]

Although early batteries were of great value for experimental
purposes, in practice their voltages fluctuated and they could not
provide a large current for a sustained period. The Daniell cell, invented in 1836 by British chemist John Frederic Daniell,
was the first practical source of electricity, becoming an industry
standard and seeing widespread adoption as a power source for electrical telegraph networks.[11] It consisted of a copper pot filled with a copper sulfate solution, in which was immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode.[12]

These wet cells used liquid electrolytes, which were prone to leakage
and spillage if not handled correctly. Many used glass jars to hold
their components, which made them fragile and potentially dangerous.
These characteristics made wet cells unsuitable for portable appliances.
Near the end of the nineteenth century, the invention of dry cell batteries, which replaced the liquid electrolyte with a paste, made portable electrical devices practical.[13]

Principle of operation

A voltaic cell for demonstration purposes. In this example the two half-cells are linked by a salt bridge separator that permits the transfer of ions.

Batteries convert chemical energy directly to electrical energy. A
battery consists of some number of voltaic cells. Each cell consists of
two half-cells
connected in series by a conductive electrolyte containing anions and
cations. One half-cell includes electrolyte and the negative electrode,
the electrode to which anions (negatively charged ions) migrate; the other half-cell includes electrolyte and the positive electrode to which cations (positively charged ions) migrate. Redox
reactions power the battery. Cations are reduced (electrons are added)
at the cathode during charging, while anions are oxidized (electrons are
removed) at the anode during charging.[14] During discharge, the process is reversed. The electrodes do not touch each other, but are electrically connected by the electrolyte.
Some cells use different electrolytes for each half-cell. A separator
allows ions to flow between half-cells, but prevents mixing of the
electrolytes.

Each half-cell has an electromotive force
(or emf), determined by its ability to drive electric current from the
interior to the exterior of the cell. The net emf of the cell is the
difference between the emfs of its half-cells.[15] Thus, if the electrodes have emfs ℰ1 and ℰ2, then the net emf is ℰ2 − ℰ1; in other words, the net emf is the difference between the reduction potentials of the half-reactions.[16]
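As a worked illustration of this difference rule, the sketch below computes the emf of a Daniell cell (described in the History section) from standard reduction potentials. The potential values are textbook figures assumed for the example, not taken from this article:

```python
# Worked example of "net emf = difference of reduction potentials".
# Standard reduction potentials (volts vs. the standard hydrogen electrode);
# these are textbook values for a Daniell cell, assumed for illustration.
E_REDUCTION = {
    "Cu2+/Cu": +0.34,  # copper half-cell (cathode: reduction occurs here)
    "Zn2+/Zn": -0.76,  # zinc half-cell (anode: the reduction runs in reverse)
}

def net_emf(cathode: str, anode: str) -> float:
    """Net cell emf: E(cathode half-cell) minus E(anode half-cell)."""
    return E_REDUCTION[cathode] - E_REDUCTION[anode]

print(f"Daniell cell emf = {net_emf('Cu2+/Cu', 'Zn2+/Zn'):.2f} V")
# prints: Daniell cell emf = 1.10 V
```

The result, about 1.1 volts, is the familiar nominal voltage of the copper-zinc Daniell cell.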

The electrical driving force, or ΔVbat, across the terminals of a cell is known as the terminal voltage and is measured in volts.[17] The terminal voltage of a cell that is neither charging nor discharging is called the open-circuit voltage and equals the emf of the cell. Because of internal resistance,[18] the terminal voltage of a cell that is discharging is smaller in magnitude than the open-circuit voltage, and the terminal voltage of a cell that is charging exceeds the open-circuit voltage.[19] An ideal cell has negligible internal resistance, so it would maintain a constant terminal voltage of ℰ until exhausted, then drop to zero. If such a cell maintained 1.5 volts and stored a charge of one coulomb, then on complete discharge it would perform 1.5 joules of work.[17] In actual cells, the internal resistance increases under discharge[18]
and the open circuit voltage also decreases under discharge. If the
voltage and resistance are plotted against time, the resulting graphs
typically are a curve; the shape of the curve varies according to the
chemistry and internal arrangement employed.
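The terminal-voltage relations above can be sketched with a small helper function. The emf, internal resistance, and current values below are hypothetical examples, not figures from the text:

```python
# Sketch of the terminal-voltage relations described above. The emf,
# internal resistance, and current values are hypothetical examples.
def terminal_voltage(emf: float, internal_r: float, current: float,
                     charging: bool = False) -> float:
    """V = emf - I*r while discharging; V = emf + I*r while charging."""
    drop = current * internal_r
    return emf + drop if charging else emf - drop

emf, r = 1.5, 0.2  # volts, ohms (made-up cell)
print(terminal_voltage(emf, r, current=0.0))                 # open circuit: 1.5
print(terminal_voltage(emf, r, current=1.0))                 # discharging: 1.3
print(terminal_voltage(emf, r, current=1.0, charging=True))  # charging: 1.7

# Work done by the ideal 1.5 V, one-coulomb cell from the text: W = V * Q
print(1.5 * 1.0)  # 1.5 joules
```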

The voltage developed across a cell's terminals depends on the energy
release of the chemical reactions of its electrodes and electrolyte. Alkaline and zinc–carbon cells have different chemistries, but approximately the same emf of 1.5 volts; likewise NiCd and NiMH cells have different chemistries, but approximately the same emf of 1.2 volts.[20] The high electrochemical potential changes in the reactions of lithium compounds give lithium cells emfs of 3 volts or more.[21]

Primary batteries are designed to be used until exhausted of
energy then discarded. Their chemical reactions are generally not
reversible, so they cannot be recharged. When the supply of reactants in
the battery is exhausted, the battery stops producing current and is
useless.[22]

Secondary batteries can be recharged; that is, they can have their chemical reactions reversed by applying electric current to the cell. This regenerates the original chemical reactants, so they can be used, recharged, and used again multiple times.[23]

Some types of primary batteries used, for example, for telegraph circuits, were restored to operation by replacing the electrodes.[24]
Secondary batteries are not indefinitely rechargeable due to
dissipation of the active materials, loss of electrolyte and internal
corrosion.

Primary

Primary batteries, or primary cells,
can produce current immediately on assembly. These are most commonly
used in portable devices that have low current drain, are used only
intermittently, or are used well away from an alternative power source,
such as in alarm and communication circuits where other electric power
is only intermittently available. Disposable primary cells cannot be
reliably recharged, since the chemical reactions are not easily
reversible and active materials may not return to their original forms.
Battery manufacturers recommend against attempting to recharge primary
cells.[25] In general, primary batteries have higher energy densities than rechargeable batteries,[26] but they do not fare well in high-drain applications with loads under 75 ohms. Common types of disposable batteries include zinc–carbon batteries and alkaline batteries.
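As an Ohm's-law illustration of what that 75-ohm threshold implies, the sketch below assumes a nominal 1.5 V cell (the voltage is an assumption, since the threshold above is stated without one):

```python
# Ohm's-law illustration of the 75-ohm "high drain" threshold above.
# Assumes a nominal 1.5 V cell; the voltage is an assumption, not from the text.
def drain_current_ma(voltage: float, load_ohms: float) -> float:
    """Current through a resistive load in milliamperes: I = V / R."""
    return voltage / load_ohms * 1000

print(f"{drain_current_ma(1.5, 75):.0f} mA at 75 ohms")  # 20 mA
print(f"{drain_current_ma(1.5, 10):.0f} mA at 10 ohms")  # 150 mA: heavier drain
```

Lower load resistance means higher drain current, which is where disposable cells struggle.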

Secondary

Secondary batteries, also known as secondary cells, or rechargeable batteries,
must be charged before first use; they are usually assembled with
active materials in the discharged state. Rechargeable batteries are
(re)charged by applying electric current, which reverses the chemical
reactions that occur during discharge/use. Devices to supply the
appropriate current are called chargers.

The oldest form of rechargeable battery is the lead–acid battery, which is widely used in automotive and boating
applications. This technology contains liquid electrolyte in an
unsealed container, requiring that the battery be kept upright and the
area be well ventilated to ensure safe dispersal of the hydrogen
gas it produces during overcharging. The lead–acid battery is
relatively heavy for the amount of electrical energy it can supply. Its
low manufacturing cost and its high surge current levels make it common
where its capacity (over approximately 10 Ah) is more important than
weight and handling issues. A common application is the modern car battery, which can, in general, deliver a peak current of 450 amperes.

The sealed valve regulated lead–acid battery
(VRLA battery) is popular in the automotive industry as a replacement
for the lead–acid wet cell. The VRLA battery uses an immobilized sulfuric acid electrolyte, reducing the chance of leakage and extending shelf life.[27] The two types are gel batteries, which use a semi-solid (gelified) electrolyte, and absorbed glass mat (AGM) batteries, which absorb the electrolyte in a fiberglass matting.

In the 2000s, developments include batteries with embedded electronics such as USBCELL, which allows charging an AA battery through a USB connector,[28] nanoball batteries that allow for a discharge rate about 100x greater than current batteries, and smart battery packs with state-of-charge monitors and battery protection circuits that prevent damage on over-discharge. Low self-discharge (LSD) allows secondary cells to be charged prior to shipping.

Dry cell

A dry cell uses a paste electrolyte, with only enough moisture
to allow current to flow. Unlike a wet cell, a dry cell can operate in
any orientation without spilling, as it contains no free liquid, making
it suitable for portable equipment. By comparison, the first wet cells
were typically fragile glass containers with lead rods hanging from the
open top and needed careful handling to avoid spillage. Lead–acid
batteries did not achieve the safety and portability of the dry cell
until the development of the gel battery.

A common dry cell is the zinc–carbon battery, sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same as the alkaline battery (since both use the same zinc–manganese dioxide combination). A standard dry cell comprises a zinc anode, usually in the form of a cylindrical pot, with a carbon cathode in the form of a central rod. The electrolyte is ammonium chloride
in the form of a paste next to the zinc anode. The remaining space
between the electrolyte and carbon cathode is taken up by a second paste
consisting of ammonium chloride and manganese dioxide, the latter
acting as a depolariser. In some designs, the ammonium chloride is replaced by zinc chloride.

Molten salt

Molten salt batteries
are primary or secondary batteries that use a molten salt as
electrolyte. They operate at high temperatures and must be well
insulated to retain heat.

Reserve

A reserve battery
can be stored unassembled (unactivated and supplying no power) for a
long period (perhaps years). When the battery is needed, then it is
assembled (e.g., by adding electrolyte); once assembled, the battery is
charged and ready to work. For example, a battery for an electronic
artillery fuze
might be activated by the impact of firing a gun. The acceleration
breaks a capsule of electrolyte that activates the battery and powers
the fuze's circuits. Reserve batteries are usually designed for a short
service life (seconds or minutes) after long storage (years). A water-activated battery for oceanographic instruments or military applications becomes activated on immersion in water.

Cell performance

A battery's characteristics may vary over load cycle, over charge cycle, and over lifetime due to many factors including internal chemistry, current
drain, and temperature. At low temperatures, a battery cannot deliver
as much power. As such, in cold climates, some car owners install
battery warmers, which are small electric heating pads that keep the car
battery warm.

Capacity and discharge

A device to check battery voltage

A battery's capacity is the amount of electric charge
it can deliver at the rated voltage. The more electrode material
contained in the cell the greater its capacity. A small cell has less
capacity than a larger cell with the same chemistry, although they
develop the same open-circuit voltage.[30] Capacity is measured in units such as amp-hour
(A·h). The rated capacity of a battery is usually expressed as the
product of 20 hours multiplied by the current that a new battery can
consistently supply for 20 hours at 68 °F (20 °C), while remaining above
a specified terminal voltage per cell. For example, a battery rated at
100 A·h can deliver 5 A over a 20-hour period at room temperature.
The fraction of the stored charge that a battery can deliver depends on
multiple factors, including battery chemistry, the rate at which the
charge is delivered (current), the required terminal voltage, the
storage period, ambient temperature and other factors.[30]
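The 20-hour rating convention described above amounts to a simple multiplication; a sketch using the 100 A·h example from the text:

```python
# A battery's rated capacity (A·h) is the constant current a new
# battery can supply for 20 hours at 20 °C, times those 20 hours.
rating_period_h = 20.0   # standard 20-hour rating period
current_a = 5.0          # constant current the battery sustains

capacity_ah = current_a * rating_period_h
print(capacity_ah)  # 100.0, i.e. a 100 A·h battery
```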

The higher the discharge rate, the lower the capacity.[31]
The relationship between current, discharge time and capacity for a
lead acid battery is approximated (over a typical range of current
values) by Peukert's law:

t = Qp / I^k

where

Qp is the capacity when discharged at a rate of 1 amp,
t is the amount of time (in hours) that the battery can sustain the discharge, and
k is a constant of around 1.3.
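Peukert's law can be sketched in a few lines of Python (the values are illustrative; k = 1.3 is the typical figure quoted above):

```python
def discharge_time_hours(q_p, current_a, k=1.3):
    """Peukert's law: t = Qp / I**k.

    q_p is the capacity (A·h) when discharged at 1 A;
    current_a is the actual discharge current in amps.
    """
    return q_p / current_a ** k

# Doubling the current more than halves the runtime:
print(discharge_time_hours(100, 5))   # ~12.3 hours
print(discharge_time_hours(100, 10))  # ~5.0 hours
```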

Batteries that are stored for a long period or that are discharged at
a small fraction of the capacity lose capacity due to the presence of
generally irreversible side reactions that consume charge
carriers without producing current. This phenomenon is known as internal
self-discharge. Further, when batteries are recharged, additional side
reactions can occur, reducing capacity for subsequent discharges. After
enough recharges, in essence all capacity is lost and the battery stops
producing power.

Internal energy losses and limitations on the rate that ions pass through the electrolyte cause battery efficiency
to vary. Above a minimum threshold, discharging at a low rate delivers
more of the battery's capacity than at a higher rate. Installing
batteries with different A·h ratings does not affect the operation of a
device rated for a specific voltage (although it may change how long the
device runs), unless load limits are exceeded. High-drain loads such as digital cameras
can reduce total capacity, as happens with alkaline batteries. For
example, a battery rated at 2 A·h for a 10- or 20-hour discharge would
not sustain a current of 1 A for a full two hours as its stated capacity
implies.

C rate

The C-rate is a measure of the rate at which a battery is being
discharged. It is defined as the discharge current divided by the
theoretical current draw under which the battery would deliver its
nominal rated capacity in one hour.[32]
A 1C discharge rate would deliver the battery's rated capacity in 1
hour. A 2C discharge rate means it will discharge twice as fast (30
minutes). A 1C discharge rate on a 1.6 Ah battery means a discharge
current of 1.6 A. A 2C rate would mean a discharge current of 3.2 A.
Standards for rechargeable batteries generally rate the capacity over a
4-hour, 8-hour, or longer discharge time. Because of internal resistance
loss and the chemical processes inside the cells, a battery rarely
delivers nameplate rated capacity in only one hour. Types intended for
special purposes, such as in a computer uninterruptible power supply, may be rated by manufacturers for discharge periods much less than one hour.
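The C-rate arithmetic above reduces to two small formulas; a sketch using the 1.6 Ah example from the text:

```python
def discharge_current_a(capacity_ah, c_rate):
    """Current (A) drawn when discharging at a given C-rate."""
    return capacity_ah * c_rate

def nominal_runtime_min(c_rate):
    """Nominal discharge time; real cells deliver less (Peukert losses)."""
    return 60.0 / c_rate

print(discharge_current_a(1.6, 1))  # 1.6 A at 1C
print(discharge_current_a(1.6, 2))  # 3.2 A at 2C
print(nominal_runtime_min(2))       # 30.0 minutes
```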

Fast-charging, large and light batteries

As of 2013, the world's largest battery was in Hebei Province, China. It stored 36 megawatt-hours of electricity at a cost of $500 million.[34] Another large battery, composed of Ni–Cd cells, was in Fairbanks, Alaska. It covered 2,000 square metres (22,000 sq ft)—bigger than a football pitch—and weighed 1,300 tonnes. It was manufactured by ABB to provide backup power in the event of a blackout. The battery can provide 40 megawatts of power for up to seven minutes.[35] Sodium–sulfur batteries have been used to store wind power.[36]
A 4.4 megawatt-hour battery system that can deliver 11 megawatts for 25
minutes stabilizes the output of the Auwahi wind farm in Hawaii.[37]

Lithium–sulfur batteries were used on the longest and highest solar-powered flight.[38] The recharging speed of lithium-ion batteries can be increased by manufacturing changes.[39]

Lifetime

Battery life (and its synonym battery lifetime) has two meanings for
rechargeable batteries but only one for non-chargeables. For
rechargeables, it can mean either the length of time a device can run on
a fully charged battery or the number of charge/discharge cycles
possible before the cells fail to operate satisfactorily. For a
non-rechargeable these two lives are equal since the cells last for only
one cycle by definition. (The term shelf life is used to describe how
long a battery will retain its performance between manufacture and use.)
Available capacity of all batteries drops with decreasing temperature.
In contrast to most of today's batteries, the Zamboni pile,
invented in 1812, offers a very long service life without refurbishment
or recharge, although it supplies current only in the nanoamp range.
The Oxford Electric Bell has been ringing almost continuously since 1840 on its original pair of batteries, thought to be Zamboni piles.

Self-discharge

Disposable batteries typically lose 8 to 20 percent of their original
charge per year when stored at room temperature (20–30 °C).[40]
This is known as the "self-discharge" rate, and is due to
non-current-producing "side" chemical reactions that occur within the
cell even when no load is applied. The rate of side reactions is reduced
if batteries are stored at lower temperatures, although some can be
damaged by freezing.

Old rechargeable batteries self-discharge more rapidly than
disposable alkaline batteries, especially nickel-based batteries; a
freshly charged nickel cadmium (NiCd) battery loses 10% of its charge in
the first 24 hours, and thereafter discharges at a rate of about 10% a
month. However, newer low self-discharge nickel metal hydride (NiMH) batteries and modern lithium designs display a lower self-discharge rate (but still higher than for primary batteries).
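The NiCd figures above (10% lost in the first 24 hours, then about 10% per month) can be turned into a rough model; this is a simplification for illustration, not a manufacturer's discharge curve:

```python
def nicd_charge_fraction(months_in_storage):
    """Fraction of charge left in a freshly charged NiCd cell."""
    charge = 1.0
    charge *= 0.90                        # ~10% lost in the first day
    charge *= 0.90 ** months_in_storage   # ~10% per month thereafter
    return charge

print(round(nicd_charge_fraction(3), 2))  # ~0.66 after three months
```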

Corrosion

Internal parts may corrode and fail, or the active materials may be slowly converted to inactive forms.

Physical component changes

Rechargeable batteries

The active material on the battery plates changes chemical
composition on each charge and discharge cycle; active material may be
lost due to physical changes of volume, further limiting the number of
times the battery can be recharged. Most nickel-based batteries are
partially discharged when purchased, and must be charged before first
use.[41] Newer NiMH batteries are ready to be used when purchased, and have only 15% discharge in a year.[42]

Some deterioration occurs on each charge–discharge cycle. Degradation
usually occurs because electrolyte migrates away from the electrodes or
because active material detaches from the electrodes. Low-capacity NiMH
batteries (1,700–2,000 mA·h) can be charged some 1,000 times, whereas
high-capacity NiMH batteries (above 2,500 mA·h) last about 500 cycles.[43] NiCd batteries tend to be rated for 1,000 cycles before their internal resistance permanently increases beyond usable values.

Charge/discharge speed

Overcharging

If a charger cannot detect when the battery is fully charged then overcharging is likely, damaging it.[44]

Memory effect

NiCd cells, if used in a particular repetitive manner, may show a decrease in capacity called "memory effect".[45] The effect can be avoided with simple practices. NiMH cells, although similar in chemistry, suffer less from memory effect.[46]

An analog camcorder [lithium ion] battery

Environmental conditions

Automotive lead–acid rechargeable batteries must endure stress due to vibration, shock, and temperature range. Because of these stresses and sulfation of their lead plates, few automotive batteries last beyond six years of regular use.[47] Automotive starting (SLI: Starting, Lighting, Ignition)
batteries have many thin plates to maximize current. In general, the
thicker the plates the longer the life. They are typically discharged
only slightly before recharge.

"Deep-cycle" lead–acid batteries such as those used in electric golf carts have much thicker plates to extend longevity.[48]
The main benefit of the lead–acid battery is its low cost; its main
drawbacks are large size and weight for a given capacity and voltage.
Lead–acid batteries should never be discharged to below 20% of their
capacity,[49]
because internal resistance will cause heat and damage when they are
recharged. Deep-cycle lead–acid systems often use a low-charge warning
light or a low-charge power cut-off switch to prevent the type of damage
that will shorten the battery's life.[50]

Storage

Battery life can be extended by storing the batteries at a low temperature, as in a refrigerator or freezer,
which slows the side reactions. Such storage can extend the life of
alkaline batteries by about 5%; rechargeable batteries can hold their
charge much longer, depending upon type.[51]
To reach their maximum voltage, batteries must be returned to room
temperature; discharging an alkaline battery at 250 mA at 0 °C is only
half as efficient as at 20 °C.[26] Alkaline battery manufacturers such as Duracell do not recommend refrigerating batteries.[25]

Battery sizes

Primary batteries readily available to consumers range from tiny button cells
used for electric watches, to the No. 6 cell used for signal circuits
or other long duration applications. Secondary cells are made in very
large sizes; very large batteries can power a submarine or stabilize an electrical grid and help level out peak loads.

Hazards

Explosion

A battery explosion is caused by misuse or malfunction, such as
attempting to recharge a primary (non-rechargeable) battery, or a short circuit. Car batteries are most likely to explode when a short-circuit generates very large currents. Car batteries produce hydrogen, which is very explosive, when they are overcharged (because of electrolysis
of the water in the electrolyte). The amount of overcharging is usually
very small and generates little hydrogen, which dissipates quickly.
However, when "jumping" a car battery, the high current can cause the
rapid release of large volumes of hydrogen, which can be ignited
explosively by a nearby spark, for example, when disconnecting a jumper cable.
When a battery is recharged at an excessive rate, an explosive gas
mixture of hydrogen and oxygen may be produced faster than it can escape
from within the battery, leading to pressure build-up and eventual
bursting of the battery case. In extreme cases, battery acid may spray
violently from the casing and cause injury. Overcharging—that is,
attempting to charge a battery beyond its electrical capacity—can also
lead to a battery explosion, in addition to leakage or irreversible
damage. It may also cause damage to the charger or device in which the
overcharged battery is later used. In addition, disposing of a battery
via incineration may cause an explosion as steam builds up within the
sealed case.

Recalls of devices using lithium-ion batteries have become more common
in recent years, in response to reported accidents and failures,
occasionally involving ignition or explosion.[52][53]
An expert summary of the problem indicates that this type uses "liquid
electrolytes to transport lithium ions between the anode and the
cathode. If a battery cell is charged too quickly, it can cause a short
circuit, leading to explosions and fires".[54]

Leakage

Leak-damaged alkaline battery

Many battery chemicals are corrosive, poisonous or both. If leakage
occurs, either spontaneously or through accident, the chemicals released
may be dangerous. For example, disposable batteries often use a zinc
"can" both as a reactant and as the container to hold the other
reagents. If this kind of battery is over-discharged, the reagents can
emerge through the cardboard and plastic that form the remainder of the
container. The active chemical leakage can then damage or disable the
equipment that the batteries power. For this reason, many electronic
device manufacturers recommend removing the batteries from devices that
will not be used for extended periods of time.

Toxic materials

Many types of batteries employ toxic materials such as lead, mercury, and cadmium as an electrode or electrolyte. When each battery reaches end of life it must be disposed of to prevent environmental damage.[55] Batteries are one form of electronic waste (e-waste). E-waste recycling services recover toxic substances, which can then be used for new batteries.[56]
Of the nearly three billion batteries purchased annually in the United
States, about 179,000 tons end up in landfills across the country.[57] In the United States, the Mercury-Containing and Rechargeable Battery Management Act
of 1996 banned the sale of mercury-containing batteries, enacted
uniform labeling requirements for rechargeable batteries and required
that rechargeable batteries be easily removable.[58]
California and New York City prohibit the disposal of rechargeable
batteries in solid waste, and along with Maine require recycling of cell
phones.[59]
The rechargeable battery industry operates nationwide recycling
programs in the United States and Canada, with dropoff points at local
retailers.[59]

The Battery Directive
of the European Union has similar requirements, in addition to
requiring increased recycling of batteries and promoting research on
improved battery recycling methods.[60]
In accordance with this directive all batteries to be sold within the
EU must be marked with the "collection symbol" (a crossed-out wheeled
bin). This must cover at least 3% of the surface of prismatic batteries
and 1.5% of the surface of cylindrical batteries. All packaging must be
marked likewise.[61]
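For a cylindrical cell, the 1.5% coverage rule translates into a concrete minimum symbol area. A sketch, assuming nominal AA dimensions (14.5 mm diameter × 50.5 mm height) and taking "surface" to mean the full cylinder including both ends:

```python
import math

diameter_mm, height_mm = 14.5, 50.5   # nominal AA cell dimensions (assumed)
r = diameter_mm / 2
# Cylinder surface area: side wall plus the two end caps.
surface_mm2 = 2 * math.pi * r * height_mm + 2 * math.pi * r**2

min_symbol_mm2 = 0.015 * surface_mm2  # 1.5% rule for cylindrical batteries
print(round(min_symbol_mm2, 1))       # ~39.5 mm² for the crossed-out bin
```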

Ingestion

Batteries may be harmful or fatal if swallowed.[62] Small button cells
can be swallowed, in particular by young children. While in the
digestive tract, the battery's electrical discharge may lead to tissue
damage;[63]
such damage is occasionally serious and can lead to death. Ingested
disk batteries do not usually cause problems unless they become lodged
in the gastrointestinal tract.
The most common place for disk batteries to become lodged is the
esophagus, resulting in clinical sequelae. Batteries that successfully
traverse the esophagus are unlikely to lodge elsewhere. The likelihood
that a disk battery will lodge in the esophagus is a function of the
patient's age and battery size. Disk batteries of 16 mm have become
lodged in the esophagi of 2 children younger than 1 year.[citation needed]
Older children do not have problems with batteries smaller than
21–23 mm. Liquefaction necrosis may occur because sodium hydroxide is
generated by the current produced by the battery (usually at the anode).
Perforation has occurred as rapidly as 6 hours after ingestion.[64]

Secondary (rechargeable) batteries and their characteristics

Nickel–cadmium (NiCd):
Inexpensive.
High-/low-drain, moderate energy density.
Can withstand very high discharge rates with virtually no loss of capacity.
Moderate rate of self-discharge.
Environmental hazard due to cadmium; use now virtually prohibited in Europe.

Nickel–metal hydride (NiMH):
Inexpensive.
Performs better than alkaline batteries in higher-drain devices.
Traditional chemistry has high energy density, but also a high rate of self-discharge.
Newer chemistry has a low self-discharge rate, but also a ~25% lower energy density.
Used in some cars.

Silver–zinc:
Smaller volume than equivalent Li-ion.
Extremely expensive due to silver.
Very high energy density.
Very high drain capable.
For many years considered obsolete due to high silver prices.
Cell suffers from oxidation if unused.
Reactions are not fully understood.
Terminal voltage very stable but suddenly drops to 1.5 volts at 70–80% charge (believed to be due to the presence of both argentous and argentic oxide in the positive plate; one is consumed first).
Has been used in lieu of a primary battery (moon buggy).
Is being developed once again as a replacement for Li-ion.

Lithium-ion:
Very expensive.
Very high energy density.
Not usually available in "common" battery sizes. The lithium polymer variant is common in laptop computers, digital cameras, camcorders, and cellphones.
Very low rate of self-discharge.
Terminal voltage varies from 4.2 to 3.0 volts during discharge.
Volatile: chance of explosion if short-circuited, allowed to overheat, or not manufactured to rigorous quality standards.

Solid state batteries

On 28 February 2017, The University of Texas
at Austin issued a press release about a new type of solid-state
battery, developed by a team led by Lithium-ion (Li-Ion) inventor John Goodenough,
"that could lead to safer, faster-charging, longer-lasting rechargeable
batteries for handheld mobile devices, electric cars and stationary
energy storage".[67] More specifics about the new technology were published in the peer-reviewed scientific journal Energy & Environmental Science.

Independent reviews of the technology discuss the risk of fire and
explosion from Lithium-ion batteries under certain conditions because
they use liquid electrolytes. The newly developed battery should be
safer since it uses glass electrolytes, which should eliminate short
circuits. The solid-state battery is also said to have "three times the
energy density", increasing its useful life in electric vehicles, for
example. It should also be more ecologically sound, since the technology
uses less expensive, earth-friendly materials such as sodium extracted
from seawater. The cells also have a much longer life ("the cells have
demonstrated more than 1,200 cycles with low cell resistance"). The
research and prototypes are not expected to lead to a commercially
viable product in the near future, if ever, according to Chris Robinson
of LUX Research. "This will have no tangible effect on electric vehicle
adoption in the next 15 years, if it does at all. A key hurdle that many
solid-state electrolytes face is lack of a scalable and cost-effective
manufacturing process," he told The American Energy News in an e-mail.[68]

Homemade cells

Almost any liquid or moist object that has enough ions to be
electrically conductive can serve as the electrolyte for a cell. As a
novelty or science demonstration, it is possible to insert two
electrodes made of different metals into a lemon,[69] potato,[70]
etc. and generate small amounts of electricity. "Two-potato clocks" are
also widely available in hobby and toy stores; they consist of a pair
of cells, each consisting of a potato (lemon, et cetera) with two
electrodes inserted into it, wired in series to form a battery with
enough voltage to power a digital clock.[71] Homemade cells of this kind are of no practical use.

A voltaic pile can be made from two coins (such as a nickel and a penny) and a piece of paper towel dipped in salt water. Such a pile generates a very low voltage but, when many are stacked in series, they can replace normal batteries for a short time.[72]

Sony has developed a biological battery
that generates electricity from sugar in a way that is similar to the
processes observed in living organisms. The battery generates
electricity through the use of enzymes that break down carbohydrates.[73]

Lead acid cells can easily be manufactured at home, but a tedious
charge/discharge cycle is needed to 'form' the plates. This is a process
in which lead sulfate forms on the plates, and during charge is
converted to lead dioxide (positive plate) and pure lead (negative
plate). Repeating this process results in a microscopically rough
surface, increasing the surface area, increasing the current the cell
can deliver.[74]
