February 23, 2016

A perplexing ecological problem is unravelling the nature of the relationship between the diversity and stability of ecosystems. The problem is that neither the meaning of stability is entirely clear, nor which aspects of diversity are being considered. Should we consider only the number of species in a community and its relation to stability, the relationship of evenness to stability, or some combination of both? For a long time, most theoretical and empirical studies focused only on species diversity, i.e., the number of species present. Only recently have studies addressed the degree of functional diversity represented by species diversity. Advocates of conserving biological diversity have long invoked the diversity-stability hypothesis to justify concern about the loss of individual species. However, if other aspects of diversity also play important roles in the structure and function of ecosystems, a focus on the number of species alone may hinder proper appreciation of the role that evenness plays in the ability of ecosystems to respond to changes in energy and nutrient inputs.
There are at least three ways in which ecosystem stability might be defined:

1. Constancy- The ability of a community to resist changes in species composition and abundances in response to any disturbance.
This is not a particularly useful concept of stability for conservationists because few, if any, ecosystems could be described as truly constant. Even ecosystems with powerful mechanisms for reacting to environmental fluctuations do so through internal changes that bring them back to a stable state as quickly as possible. But these surely involve responses and changes, which are more appropriately regarded as examples of resilience than of constancy.
2. Resilience- The ability of a community to return to its pre-disturbance characteristics after changes induced by a disturbance.
Resilience corresponds to stability as it is studied in mathematical models. If deviations from an equilibrium are reduced with time, the system is stable; if they are amplified with time, the system is unstable. This approach still has little applicability to actual ecosystems. It measures a system's tendency to return to a single stable point, but many ecological systems appear to have multiple stable points. If a disturbance remains below a particular threshold, the ecosystem will return to its pre-disturbance configuration; if it exceeds that threshold, the ecosystem may move to a new configuration. Furthermore, most ecological systems change not only in response to disturbance but also in response to natural, successional change. There is little evidence that ecological communities ever represent an equilibrium configuration from which it would make sense to study perturbations. What constancy and resilience have in common is a focus on species persistence and abundance as measures of stability. For example, Selmants et al. [1] show that as the species diversity of serpentine grasslands in California declines, their susceptibility to invasion by exotic species increases. Put differently, diverse grasslands were more resilient than less diverse ones.
3. Dynamic stability- A system’s ability to determine its future states largely by its own current state with little or no reference to outside influences.
In many ways, this approach corresponds with our intuitive notions of stability and seems to make sense of the relationship between diversity and stability. It recalls the saying of Commoner [2]: “The more complex the ecosystem, the more successfully it can resist a stress.” A dynamically stable system is relatively immune to disturbance, much as a rapidly spinning gyroscope is dynamically stable because the gyroscopic forces it generates resist external forces that would alter its plane of rotation. This approach reflects the hope that stable systems would be able to maintain themselves without intervention.
A biological system with high diversity is more likely to be dynamically stable than one with low diversity. The reason is the very important role played by biotic interactions in ecosystem dynamics, which has been increasingly appreciated through many studies. In diverse communities, biotic interactions may often play a larger role in the success of a species than its interactions with the physical environment. To the extent that changes in the system are driven by biotic interactions, it is dynamically stable, since characteristics of the system itself are determining its future state.
However, this formulation of the diversity-stability hypothesis is also not free from problems. How can systems whose future state depends primarily on their own internal characteristics be identified?[3] Without a method to identify a dynamically stable system, even testing the truth of this version of the diversity-stability hypothesis is not possible. It seems to verge on circularity: the larger (more diverse) the system considered, the fewer things are left out of it; the fewer things left out, the smaller the possible outside influences on the system; and the smaller the possible outside influences, the greater the degree of dynamic stability. Thus, dynamic stability is (almost) a necessary consequence of diversity simply because diverse systems include many components.[4] Moreover, the argument as presented says nothing about the types of diversity present; e.g., a diverse community assembled from non-native species would be as good as one composed solely of natives.

Ives and Carpenter [5] have suggested a different approach to understanding community stability. Their approach may be quite useful, because it points out, firstly, that systems move to a region different from the one from which they were perturbed[6] and, secondly, that things other than diversity (such as the frequency and character of perturbation) may also affect the stability of ecosystems. A newer concept related to stability is ‘biological integrity’, which refers to a system’s wholeness, including the presence of all appropriate elements and the occurrence of all processes at appropriate rates.[7] But this approach too poses problems.
1. What are ‘appropriate elements’?
2. What are ‘appropriate rates of processes?’
By definition, naturally evolved assemblages possess biological integrity but random assemblages do not. The concept therefore provides justification for management of ecosystems that focuses on native species rather than introduced ones, though this reasoning seems like the logical fallacy of affirming the consequent. However, the species composition of lakes exposed to nutrient enrichment or acidification responds more quickly, and recovers more slowly, than processes like primary production, respiration, and nutrient cycling, and shifts in biotic composition do not necessarily lead to changes in process rates. These observations might suggest that a focus on integrity rather than diversity makes sense, but it makes more sense to conclude that species changes are a more sensitive indicator of what is going on than process changes. Loss of native species from a system is a warning of process changes whose consequences may be much larger than suspected.

From the point of view of conservation, there are still many problems.

1. Is it possible that constancy- or resilience-based approaches to the diversity-stability hypothesis are simply not true, and so do not provide a solid conceptual basis for arguing that conservation of biological diversity is an important goal?
2. Is it possible that a less specific version, defining stability as a dynamic property related to the degree to which the components of a system determine their own future state, provides a plausible basis for the hypothesis? Unfortunately, this version of the hypothesis verges on circularity and is almost immune to empirical investigation. It may also be pointed out that a system that is “stable” with respect to some perturbations, like hurricanes, drought, or other extreme weather events, may not be stable with respect to others, like invasion by exotic plants or animals, extinction of component species, or other biotic changes. From the point of view of practical conservation applications, the diversity-stability hypothesis seems to provide merely a useful heuristic.
There seems to be something more useful for practical application in the idea of biological integrity. Easily observable changes in species composition and community structure may point to underlying changes in ecosystem processes more quickly than attempts to measure those processes directly. Diverse systems provide more indicators of change in these underlying processes, and if such systems are managed so that they are protected, then the underlying processes will remain intact too. Chapin et al. [8] summarized that:

1. High species richness maximizes resource acquisition at each trophic level and the retention of resources in the ecosystem.

2. High species diversity reduces the risk of large changes in ecosystem processes in response to directional or stochastic variation in the environment.

3. High species diversity reduces the probability of large changes in ecosystem processes in response to invasions of pathogens and other species.

4. Landscape heterogeneity most strongly influences those processes or organisms that depend on multiple patch types and are controlled by a flow of organisms, water, air, or disturbance among patches.

Wang and Loreau [9] developed the last point more formally, suggesting that when thinking about a meta-community or meta-ecosystem it is useful to decompose the variability in response across the entire system into components analogous to those used in partitioning species diversity, i.e.:
1. Alpha variation: variation within the individual components of a meta-community or meta-ecosystem.

2. Beta variation: variation among different components of a meta-community or meta-ecosystem.

3. Gamma variation: variation across the entire system, the sum (or product) of alpha and beta variation.

Thinking about ecosystem functioning at various scales, as Wang and Loreau suggest, leads to the recognition that the experimental focus on diversity and variation in functioning leaves out a vital component for those trying to manage ecosystems that include a variety of different habitats. Stability at the whole-system level may depend as much or more on retaining those distinct components as it does on stability within any one of them.
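The alpha/beta/gamma decomposition of variability can be illustrated with a toy calculation on synthetic patch data. The patch means and variances below are illustrative assumptions, and the simple coefficient-of-variation ratio used here is only a sketch of the multiplicative idea, not Wang and Loreau's exact formulation:

```python
import numpy as np

# Synthetic biomass time series for a three-patch meta-community
# (rows = years, columns = patches); all numbers are illustrative.
rng = np.random.default_rng(0)
patches = rng.normal(loc=[100.0, 80.0, 60.0], scale=[15.0, 10.0, 8.0], size=(50, 3))

def temporal_cv(x):
    """Temporal coefficient of variation of a time series."""
    return x.std(ddof=1) / x.mean()

alpha_cv = np.mean([temporal_cv(patches[:, i]) for i in range(3)])  # within-patch
gamma_cv = temporal_cv(patches.sum(axis=1))                         # whole system
beta = alpha_cv / gamma_cv   # > 1 when patches fluctuate asynchronously

print(f"alpha CV = {alpha_cv:.3f}, gamma CV = {gamma_cv:.3f}, beta = {beta:.2f}")
```

Because the patches fluctuate independently here, the whole-system CV (gamma) comes out lower than the average within-patch CV (alpha): spatial asynchrony stabilizes the aggregate, which is the intuition behind managing for distinct habitat components.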

December 6, 2015

The temperature of a planet irradiated by solar radiation can be estimated by balancing the amount of radiation absorbed (Ra) against the amount of outgoing radiation (Ro). Ra is the product of:

(i) Solar irradiance (I)

(ii) Area of the planet. The area relevant for such calculations is the area of the planet as seen by incoming radiation, which is given by πr², where r = radius of the planet.

(iii) Absorbed fraction of radiation. The fraction of radiation that is absorbed is given by (1 − A), where A = albedo of the planet. The albedo represents the fraction of radiation that is reflected back from the planet.

The intensity of outgoing radiation (Io) of a body is given by the Stefan-Boltzmann law, i.e.

Io = σT⁴

where = Stefan-Boltzmann Constant = 5.6 x 10-8 Wm-2 K-4. The total energy radiated by the planet will be the product of the intensity of outgoing radiation (Io) and the area of the whole planetary surface giving out radiation (4  r 2). Thus, the outgoing radiation (Ro) from the planet is given by:

Ro = 4r2 T4

Effective planetary temperature

Since Ra = Ro, i.e., the system is assumed to be in a steady state where absorbed and outgoing radiation are equal, πr²I(1 − A) = 4πr²σTe⁴, and an expression for the effective planetary temperature (Te) follows:

Te = [I(1 − A)/(4σ)]^0.25

In this expression for the effective planetary temperature, the effect of the atmosphere has not been taken into account. For Earth, the solar irradiance (I) at the top of the atmosphere is about 1.4 × 10³ W m⁻² and the albedo of Earth as a whole is about 0.33. From these values, the calculated equilibrium temperature of Earth comes to about 254 K. However, the actual observed average ground-level temperature of Earth is about 288 K. This excess of the observed temperature over the calculated value is due to the greenhouse effect of the atmosphere.
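The energy-balance calculation above can be checked numerically. This is a minimal sketch using the values quoted in the text:

```python
# Effective planetary temperature from the steady-state balance Ra = Ro:
#   pi r^2 I (1 - A) = 4 pi r^2 sigma Te^4  =>  Te = [I(1 - A)/(4 sigma)]^0.25
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(irradiance, albedo):
    """Equilibrium temperature (K) of a planet, ignoring its atmosphere."""
    return (irradiance * (1 - albedo) / (4 * SIGMA)) ** 0.25

# Values quoted in the text: I ~ 1.4e3 W m^-2, A ~ 0.33
te_earth = effective_temperature(1.4e3, 0.33)
print(f"{te_earth:.0f} K")   # ~254 K, against an observed surface average of ~288 K
```

Note how the planetary radius cancels in the balance: the result depends only on irradiance and albedo.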

The black-body spectrum of Earth at 288 K shows that radiation from Earth is of much longer wavelength, and much lower intensity, than radiation from the Sun. The absorption spectrum of Earth’s atmosphere overlaps strongly with Earth’s long-wave emission spectrum but only weakly with the solar emission spectrum. Except for a narrow window in the absorption bands, much of the long-wave radiation from Earth falls within the region of absorption of the atmosphere. This means that much of the incoming radiation reaches the Earth’s surface while the outgoing thermal radiation is largely absorbed by the atmosphere rather than being lost to space. Thus, the effect of the atmosphere is to trap the outgoing thermal radiation. This effect is termed the greenhouse effect. The thermal radiation, i.e., the heat trapped by the atmosphere due to the greenhouse effect, is responsible for the observed surface temperature of Earth being higher than the temperature calculated without taking the atmosphere into account. In general, absorption of re-emitted long-wave radiation and vertical mixing processes determine the temperature profile of the lower part of the atmosphere (troposphere), which in turn determines the Earth’s surface temperature.

Optical depth of atmosphere and Earth’s surface temperature

The atmosphere is not transparent to the outgoing long-wave radiation, and much of this radiation is absorbed in the lower part of the atmosphere, which is warmer than the upper parts. To account for this effect, simple radiative equilibrium models developed for Earth divide the atmosphere into layers that are just thick enough to absorb the outgoing radiation. These atmospheric layers are said to be optically thick, and the atmosphere is discussed in terms of its optical depth, based on the number of such layers. Earth’s atmosphere is sometimes said to have two such layers, while that of Venus has almost 70, largely due to the enormous amount of CO2 in the Venusian atmosphere. The radiative equilibrium model indicates that the effective planetary temperature (Te) is related to the ground-level planetary temperature (Tg) by the equation:

Tg⁴ = (1 + τ)Te⁴ (where τ = optical depth of the atmosphere)

The optical depth of the atmosphere increases with increasing atmospheric concentrations of carbon dioxide and water vapor, because both are principal atmospheric absorbers of outgoing long-wave radiation. With increasing concentrations of CO2 in the lower layers of the atmosphere, the gases responsible for radiating heat to outer space are pushed to slightly higher and colder levels of the atmosphere. These radiating gases radiate heat less efficiently because they are colder at higher altitudes. Thus, the atmosphere becomes a less efficient radiator of heat, and this results in a rise of atmospheric temperature. This rise, in turn, leads to more evaporation and an increase in atmospheric water vapor, itself a greenhouse gas, which further increases the absorption of outgoing long-wave thermal radiation. This positive feedback results in a further increase in atmospheric temperature. The model also suggests that an increase in atmospheric CO2 is associated with a decrease in the temperature of the upper atmosphere (stratosphere).
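The optical-depth relation can be illustrated numerically. This sketch assumes the gray-atmosphere form Tg⁴ = (1 + τ)Te⁴; the τ values used are illustrative, not measured optical depths:

```python
# Ground temperature from effective temperature and optical depth tau,
# assuming the simple gray-atmosphere relation Tg^4 = (1 + tau) Te^4.
def ground_temperature(te, tau):
    """Ground-level temperature (K) for effective temperature te, optical depth tau."""
    return te * (1 + tau) ** 0.25

te = 254.0   # Earth's effective temperature from the energy balance above

# With roughly two optically thick layers (tau ~ 2), as the text suggests:
print(f"{ground_temperature(te, 2.0):.0f} K")   # ~334 K: a clear overestimate

# The observed 288 K corresponds to a much smaller effective tau (~0.65);
# vertical heat transport by convection accounts for much of the difference.
print(f"{ground_temperature(te, 0.65):.0f} K")  # ~288 K
```

The overestimate at τ = 2 is consistent with the later observation that purely radiative models run too hot and must be corrected for convective heat transport.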

Vertical heat transport and Earth’s surface temperature

Simple models of the radiation balance of the atmosphere do not take into account various other processes that transport heat vertically in the atmosphere and, therefore, overestimate the surface temperature of Earth. Convection is the major process of vertical heat transport and is very important in lowering the surface temperature. Convection occurs because warm air is lighter than cool air and so rises, carrying heat from Earth’s surface to the upper atmosphere. As warm air rises, it expands due to the fall in pressure, and the work done in expansion causes it to cool adiabatically. Thus, the rate of change of temperature with height (the lapse rate) is dT/dz = −g/cp, where g is the gravitational acceleration and cp is the specific heat of air at constant pressure.

For Earth’s atmosphere, the dry adiabatic lapse rate works out to be −9.8 K/km. However, the air is usually moist and, as it rises, it releases latent heat, so the measured lapse rate is about −6.5 K/km.
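The lapse-rate figures can be derived directly from g and cp. The surface temperature and lifting height below are illustrative choices:

```python
# Dry adiabatic lapse rate dT/dz = -g/c_p for a rising parcel that cools
# by doing expansion work (no heat exchange with its surroundings).
g = 9.81       # gravitational acceleration, m s^-2
c_p = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1

lapse_rate = -g / c_p * 1000.0    # K/m -> K/km
print(f"{lapse_rate:.1f} K/km")   # ~ -9.8 K/km, the dry value quoted in the text

def parcel_temperature(z_km, t_surface=288.0, rate=lapse_rate):
    """Temperature (K) of a parcel lifted adiabatically to height z_km."""
    return t_surface + rate * z_km

# Dry parcel at 2 km vs. the moist environmental rate of -6.5 K/km:
print(f"{parcel_temperature(2.0):.1f} K")             # dry: ~268.5 K
print(f"{parcel_temperature(2.0, rate=-6.5):.1f} K")  # moist: 275.0 K
```

The gap between the dry and moist rates reflects the latent heat released by condensation in rising moist air.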

If the atmospheric temperature falls more slowly with height than the adiabatic lapse rate (or even rises with height), then inversion conditions exist and the air is very stable with respect to vertical convective mixing. Conversely, if the temperature falls more rapidly with height than the adiabatic lapse rate, then the atmosphere is unstable and convective mixing will be active.

Short-wave radiation and temperature

The discussion so far has assumed total transparency of the atmosphere to incoming solar radiation. Though this is true for the visible range, it is not true for the ultraviolet region of the solar spectrum. Though the amount of such short-wave radiation is small, it has important consequences for the temperature of the Earth-atmosphere system.

Various ultraviolet wavelengths are absorbed in the atmosphere at different heights. At just over 40 km, absorption of ultraviolet radiation by ozone results in considerable warming of the stratosphere, and in this zone temperature rises with altitude. The average temperature of the stratosphere is about 250 K. Treating it as a black-body radiator, maximum power radiation would be expected at about 11.5 µm. This value is very close to an absorption band of carbon dioxide, which means that this gas also plays an important role in stratospheric temperature. An increase in the concentration of carbon dioxide in the stratosphere might allow more effective radiation from the stratosphere and, therefore, its cooling. This effect is quite opposite to that noted for the troposphere.
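The quoted peak wavelength follows from Wien's displacement law; a quick check (which gives ≈11.6 µm, close to the 11.5 µm quoted above):

```python
# Wien's displacement law: the wavelength of maximum black-body emission
# is lambda_max = b / T, with b = 2.898e-3 m K.
B_WIEN = 2.898e-3   # Wien's displacement constant, m K

def peak_wavelength_um(t_kelvin):
    """Wavelength of peak black-body emission, in micrometres."""
    return B_WIEN / t_kelvin * 1e6   # metres -> micrometres

print(f"{peak_wavelength_um(250):.1f} um")   # stratosphere at ~250 K: ~11.6 um
print(f"{peak_wavelength_um(288):.1f} um")   # Earth's surface at 288 K: ~10.1 um
print(f"{peak_wavelength_um(5800):.2f} um")  # solar photosphere: ~0.50 um
```

The contrast between the ~0.5 µm solar peak and the ~10 µm terrestrial peak is what makes the atmosphere's selective long-wave absorption so consequential.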

Further, at the altitude of the thermosphere, the atmosphere is very thin. In this zone, molecules are exposed to unattenuated solar radiation of extremely short wavelength, i.e., of high energy. This radiation arises from the outer region of the Sun; at wavelengths below 50 nm, the effective emission temperature exceeds 10,000 K. High-energy solar photons of such wavelengths are absorbed by gas molecules, giving them high translational energies, i.e., high temperatures. The energies may be large enough to dissociate oxygen and nitrogen. Temperatures in the thermosphere undergo wide variations depending on the state of the Sun. During solar disturbances, the output of such high-energy radiation is much enhanced, resulting in very high atmospheric temperatures. The temperature in this zone may be increased further by another mechanism. Temperature is normally defined in terms of translational energy, but absorption and emission of radiation occur through vibrational and rotational changes. In the upper atmosphere, the frequency of molecular collisions is relatively low, so the exchange of translational, vibrational, and rotational energy is infrequent. Hence the cooling of the thermosphere by re-radiation is very inefficient. The temperature of the thermosphere increases with height, so it is also stable against convection. Heat can be lost only by very inefficient diffusion processes and, as a result, thermospheric temperatures are extremely high.


October 30, 2014

A number of abrupt changes in Earth’s climate during the past 15,000 years, caused by natural events, have been identified. These changes had profound worldwide effects within a short period on natural conditions, flora, and fauna, and consequently also affected the course of human civilisational history. Some of these changes are:

The Younger Dryas stadial, also referred to as the Big Freeze,[1] was a geologically brief (1,300 ± 70 years) period of cold climatic conditions and drought which occurred between approximately 12,800 and 11,500 years BP (between 10,800 and 9500 BC).[2] The Younger Dryas stadial is thought to have been caused by the collapse of the North American ice sheets, although rival theories have been proposed. It followed the Bølling-Allerød interstadial (warm period) at the end of the Pleistocene and preceded the Preboreal of the early Holocene. It is named after an indicator genus, the alpine-tundra wildflower Dryas octopetala. In Ireland, the period has been known as the Nahanagan Stadial, while in the United Kingdom it has been called the Loch Lomond Stadial and most recently Greenland Stadial 1 (GS1).[3][4] The Younger Dryas (GS1) is also a Blytt-Sernander climate period detected from layers in north European bog peat.

The Dryas stadials were cold periods which interrupted the warming trend since the Last Glacial Maximum 20,000 years ago. The Older Dryas occurred approximately 1,000 years before the Younger Dryas and lasted about 300 years.[5] The Oldest Dryas is dated between approximately 18,000 and 15,000 BP (16000 to 13000 BC).

Abrupt climate change

The Younger Dryas saw a rapid return to glacial conditions in the higher latitudes of the Northern Hemisphere between 12.9 and 11.5 ka BP,[6] in sharp contrast to the warming of the preceding interstadial deglaciation. It has been believed that the transitions each occurred over a period of a decade or so,[7] but the onset may have been faster.[8] Thermally fractionated nitrogen and argon isotope data from the Greenland ice core GISP2 indicate that the summit of Greenland was approximately 15 °C (27 °F) colder during the Younger Dryas[7] than today. In the UK, coleopteran (beetle) fossil evidence suggests that mean annual temperature dropped to −5 °C (23 °F),[9] and periglacial conditions prevailed in lowland areas, while icefields and glaciers formed in upland areas.[10] Nothing of the size, extent, or rapidity of this period of abrupt climate change has been experienced since.[6]

Global effects

In western Europe and Greenland, the Younger Dryas is a well-defined synchronous cool period.[11] But cooling in the tropical North Atlantic may have preceded this by a few hundred years; South America shows a less well defined initiation but a sharp termination. The Antarctic Cold Reversal appears to have started a thousand years before the Younger Dryas, and has no clearly defined start or end; Peter Huybers has argued that there is fair confidence in the absence of the Younger Dryas in Antarctica, New Zealand and parts of Oceania.[12] Timing of the tropical counterpart to the Younger Dryas – the Deglaciation Climate Reversal (DCR) – is difficult to establish, as low-latitude ice core records generally lack independent dating over this interval. An example is the Sajama ice core (Bolivia), for which the timing of the DCR has been pinned to that of the GISP2 ice core record (central Greenland). Climatic change in the central Andes during the DCR, however, was significant and characterized by a shift to much wetter, and likely colder, conditions.[13] The magnitude and abruptness of these changes suggest that low-latitude climate did not respond passively during the YD/DCR.

In western North America it is likely that the effects of the Younger Dryas were less intense than in Europe; however, evidence of glacial re-advance[14] indicates Younger Dryas cooling occurred in the Pacific
Northwest. Other features seen include:

1. Replacement of forest in Scandinavia with glacial tundra (which is the habitat of the plant Dryas octopetala)

2. Glaciation or increased snow in mountain ranges around the world

3. Formation of solifluction layers and loess deposits in Northern Europe

4. More dust in the atmosphere, originating from deserts in Asia

5. Drought in the Levant, perhaps motivating the Natufian culture to develop agriculture

The Huelmo/Mascardi Cold Reversal in the Southern Hemisphere ended at the same time.

The prevailing theory is that the Younger Dryas was caused by a significant reduction or shutdown of the North Atlantic “Conveyor”, which circulates warm tropical waters northward, in response to a sudden influx of fresh water from Lake Agassiz and deglaciation in North America. Geological evidence for such an event is thus far lacking.[15] The global climate would then have become locked into the new state until freezing removed the fresh water “lid” from the North Atlantic Ocean. An alternative theory suggests instead that the jet stream shifted northward in response to the changing topographic forcing of the melting North American ice sheet, bringing more rain to the North Atlantic, which freshened the ocean surface enough to slow the thermohaline circulation.[16] There is also some evidence that a solar flare may have been responsible for the megafaunal extinction, though it cannot explain the apparent variability in the extinction across all continents.[17] There is evidence that some previous glacial terminations had post-glacial cooling periods similar to the Younger Dryas.[18]

Impact hypothesis

A hypothesized Younger Dryas impact event, presumed to have occurred in North America around 12.9 ka BP, has been proposed as the mechanism that initiated the Younger Dryas cooling. Among other things, findings of melt-glass material in sediments in Pennsylvania, South Carolina, and Syria have been reported. These researchers argue that this material, which dates back nearly 13,000 years, was formed at temperatures of 1,700 to 2,200 °C (3,100 to 4,000 °F) as the result of a bolide impact. They argue that these findings support the controversial Younger Dryas Boundary (YDB) hypothesis, that a bolide impact occurred at the onset of the Younger Dryas.[19] The hypothesis has been questioned by research citing the failure of other scientists to repeat its conclusions, misinterpretation of data, and the lack of confirmatory evidence.[20][21][22] A review of the sediments found at the relevant sites showed that deposits claimed by the hypothesis’s proponents to result from a bolide impact were, in fact, dated from much later or much earlier periods than the proposed date of the cosmic impact. The researchers examined 29 sites commonly referenced to support the impact theory to determine whether they could be geologically dated to around 13,000 years ago; crucially, only 3 of the sites actually date from that time.[23]

Although there may be several causes of the Younger Dryas, volcanic activity is considered one possibility.[1] The Laacher See eruption in Germany was of sufficient size, VEI 6 with over 10 km³ (2.4 cu mi) of tephra ejected, to have caused significant temperature changes in the Northern Hemisphere. Laacher See tephra is found throughout the Younger Dryas boundary layer.[24][25][26] This possibility has been disputed by 14C analysis. In the view of Cambridge University volcanologist Clive Oppenheimer, the magnitude of Laacher See was similar to the 1991 Mount Pinatubo eruption, and the effects were a year or two of Northern Hemisphere summer cooling and winter warming, and up to two decades of environmental disruption in Germany.[27]

End of the climate period

Measurements of oxygen isotopes from the GISP2 ice core suggest the ending of the Younger Dryas took place over just 40–50 years in three discrete steps, each lasting five years.
Other proxy data, such as dust concentration and snow accumulation, suggest an even more rapid transition, requiring about 7 °C (13 °F) of warming in just a few years.[6][7][28][29] Total warming in Greenland was 10 ± 4 °C (18 ± 7 °F).[30] The end of the Younger Dryas has been dated to around 11.55 ka BP, occurring at 10 ka BP (uncalibrated radiocarbon year), a “radiocarbon plateau”, by a variety of methods with mostly consistent results:

11.50 ± 0.05 ka BP: GRIP ice core, Greenland[31]
11.53 + 0.04 − 0.06 ka BP: Kråkenes Lake, western Norway[32]
11.57 ka BP: Cariaco Basin core, Venezuela[33]
11.57 ka BP: German oak/pine dendrochronology[34]
11.64 ± 0.28 ka BP: GISP2 ice core, Greenland[28]

The Younger Dryas is often linked to the adoption of agriculture in the Levant.[35][36] It is argued that the cold and dry Younger Dryas lowered the carrying capacity of the area and forced the sedentary Early Natufian population into a more mobile subsistence pattern. Further climatic deterioration is thought to have brought about cereal cultivation. While there exists relative consensus regarding the role of the Younger Dryas in the changing subsistence patterns during the Natufian, its connection to the beginning of agriculture at the end of the period is still being debated.[37][38]

The failure of North Atlantic thermohaline circulation is used to explain rapid climate change in some science fiction writings as early as Stanley G. Weinbaum’s 1937 short story “Shifting Seas” where the author described the freezing of Europe after the Gulf Stream was disrupted, and more recently in Kim Stanley Robinson’s novels, particularly Fifty Degrees Below. It also underpinned the 1999 book, The Coming Global Superstorm. Likewise, the idea of rapid climate change caused by disruption of North Atlantic ocean currents creates the setting for 2004 apocalyptic science-fiction film The Day After Tomorrow. Similar sudden cooling events have featured in other novels, such as John Christopher’s The World in Winter, though not always with the same explicit links to the Younger Dryas event as is the case of Robinson’s work.

34. Bar-Yosef, O. and A. Belfer-Cohen: “Facing environmental crisis. Societal and cultural changes at the transition from the Younger Dryas to the Holocene in the Levant.” In: The Dawn of Farming in the Near East. Edited by R.T.J. Cappers and S. Bottema, pp. 55–66. Studies in Early Near Eastern Production, Subsistence and Environment 6. Berlin: Ex oriente.

Bond events are North Atlantic climate fluctuations occurring every ≈1,470 ± 500 years throughout the Holocene. Eight such events have been identified, primarily from fluctuations in ice-rafted debris. Bond events may be the interglacial relatives of the glacial Dansgaard–Oeschger events,[1] with a magnitude of perhaps 15–20% of the glacial-interglacial temperature change.Gerard C. Bond of the Lamont–Doherty Earth Observatory at Columbia University, was the lead author of the 1997 paper that postulated the theory of 1,470-year climate cycles in the Holocene, mainly based on petrologic tracers of drift ice in the North Atlantic.[2][3]The existence of climatic changes, possibly on a quasi-1,500 year cycle, is well established for the last glacial period from ice cores. Less well established is the continuation of these cycles into the holocene. Bond et al. (1997) argue for a cyclicity close to 1470 ± 500 years in the North Atlantic region, and that their results imply a variation in Holocene climate in this region. 
In their view, many if not most of the Dansgaard–Oeschger events of the last ice age conform to a 1,500-year pattern, as do some climate events of later eras, such as the Little Ice Age, the 8.2 kiloyear event, and the start of the Younger Dryas.

The North Atlantic ice-rafting events happen to correlate with most weak events of the Asian monsoon for at least the past 9,000 years,[4][5] and with most aridification events in the Middle East for the past 55,000 years (both Heinrich and Bond events).[6][7] There is also widespread evidence that a ≈1,500-year climate oscillation caused changes in vegetation communities across all of North America.[8] For reasons that are unclear, the only Holocene Bond event with a clear temperature signal in the Greenland ice cores is the 8.2 kyr event.

The hypothesis holds that the 1,500-year cycle displays nonlinear behavior and stochastic resonance; not every instance of the pattern is a significant climate event, though some rise to major prominence in environmental history.[9] The causes and determining factors of the cycle are under study; researchers have focused attention on variations in solar output and “reorganizations of atmospheric circulation.”[9] Bond events may also be correlated with the 1,800-year lunar tidal cycle.[10]

List of Bond events

Most Bond events do not have a clear climate signal; some correspond to periods of cooling, others are coincident with aridification in some regions.

No.  Time (BP)  Notes

0    ≈0.5 ka    See Little Ice Age

1    ≈1.4 ka    See Migration Period[11]

2    ≈2.8 ka    Early 1st millennium BC drought in the Eastern Mediterranean, possibly triggering the collapse of Late Bronze Age cultures.[12][13][14]

3    ≈4.2 ka    See 4.2 kiloyear event; collapse of the Akkadian Empire, end of the Egyptian Old Kingdom.[15][16]

4    ≈5.9 ka    See 5.9 kiloyear event

5    ≈8.2 ka    See 8.2 kiloyear event

6    ≈9.4 ka    Erdalen event of glacier activity in Norway,[17] as well as a cold event in China.[18]

The 8.2 kiloyear event is the term that climatologists have adopted for a sudden decrease in global temperatures that occurred approximately 8,200 years before present, or c. 6200 BCE, and lasted for the next two to four centuries. Milder than the Younger Dryas cold spell that preceded it, but more severe than the Little Ice Age that would follow, the 8.2 kiloyear cooling was a significant exception to the general trends of the Holocene climatic optimum. During the event, atmospheric methane concentration decreased by 80 ppb, corresponding to a 15% reduction in emissions caused by cooling and drying at a hemispheric scale.[1]

A rapid cooling around 6200 BCE was first identified by Swiss botanist Heinrich Zoller in 1960, who named it the Misox oscillation (for the Val Mesolcina).[2] It is also known as the Finse event in Norway.[3] Bond et al. argued that the origin of the 8.2 kiloyear event is linked to a 1,500-year climate cycle; it correlates with Bond event 5.[4]

The strongest evidence for the event comes from the North Atlantic region; the disruption in climate shows clearly in Greenland ice cores and in sedimentary and other records of the temperate and tropical North Atlantic.[5][6][7] It is less evident in ice cores from Antarctica and in South American indices.[8][9] The effects of the cold snap were nonetheless global, most notably in changes in sea level during the relevant era.

The 8.2 kiloyear cooling event may have been caused by a large meltwater pulse from the final collapse of the Laurentide ice sheet of northeastern North America, most likely when the glacial lakes Ojibway and Agassiz suddenly drained into the North Atlantic Ocean.[10][11][12] The same type of action produced the Missoula floods that created the Channeled Scablands of the Columbia River basin. The meltwater pulse may have affected the North Atlantic thermohaline circulation, reducing northward heat transport in the Atlantic and causing significant circum-North Atlantic cooling. Estimates of the cooling vary and depend somewhat on the interpretation of the proxy data, but drops of around 1 to 5 °C (1.8 to 9 °F) have been reported. In Greenland, the event started at 8,175 years before present; the cooling was 3.3 °C (decadal average) in less than ~20 years, the coldest period lasted about 60 years, and the total duration was about 150 years.[1] Further afield, some tropical records report a 3 °C (5.4 °F) cooling from cores drilled into an ancient coral reef in Indonesia.[13] The event also caused a global CO2 decline of ~25 ppm over ~300 years.[14] However, the dating and interpretation of this and other tropical sites are more ambiguous than those of the North Atlantic sites.

Drier conditions were notable in North Africa, while East Africa suffered five centuries of general drought. In West Asia, and especially Mesopotamia, the 8.2 kiloyear event was a three-hundred-year aridification and cooling episode, which may have provided the natural impetus for Mesopotamian irrigation agriculture and the surplus production that were essential for the earliest class formation and urban life. However, multi-centennial changes around the same period are difficult to link specifically to the approximately 100-year abrupt event recorded most clearly in the Greenland ice cores.

The initial meltwater pulse caused between 0.5 and 4 m (1 ft 8 in and 13 ft 1 in) of sea-level rise. Based on estimates of lake volume and decaying ice-cap size, values of 0.4–1.2 m (1 ft 4 in–3 ft 11 in) circulate. Based on sea-level data from below modern deltas, 2–4 m (6 ft 7 in–13 ft 1 in) of near-instantaneous rise is estimated, in addition to ‘normal’ post-glacial sea-level rise.[15] The meltwater-pulse sea-level rise was experienced fully at great distances from the release area. Gravity and rebound effects associated with the shifting of water masses meant that the sea-level fingerprint was smaller in areas closer to Hudson Bay. The Mississippi delta records ~20%, NW Europe ~70% and Asia ~105% of the globally averaged amount.[16] The cooling of the 8.2 kiloyear event was a temporary feature; the sea-level rise of the meltwater pulse was permanent.
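The regional percentages quoted above amount to a simple fingerprint scaling: local rise ≈ global-mean rise × regional factor. A minimal sketch, using the factors from the text (the 2 m global-mean input is purely illustrative):

```python
# Sea-level fingerprint scaling: local rise = global-mean rise x regional factor.
# The factors are the percentages quoted in the text; the global-mean value
# passed in below is illustrative only.

FINGERPRINT_FACTORS = {
    "Mississippi delta": 0.20,  # ~20% of global mean (closer to the Hudson Bay source)
    "NW Europe": 0.70,          # ~70%
    "Asia": 1.05,               # ~105% (far field slightly exceeds the mean)
}

def local_rise(global_mean_m: float) -> dict:
    """Return the local sea-level rise (metres) implied by each regional factor."""
    return {region: global_mean_m * f for region, f in FINGERPRINT_FACTORS.items()}

print(local_rise(2.0))  # for an assumed 2 m global-mean pulse
```

The far-field excess above 100% is what makes a near-instantaneous pulse fully felt at great distance from the release area, as the text notes.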

In 2003, the Office of Net Assessment at the United States Department of Defense was commissioned to produce a study on the likely and potential effects of modern climate change.[17] The study, conducted under ONA head Andrew Marshall, modelled its prospective climate change on the 8.2 kiloyear event, precisely because it was a middle alternative between the Younger Dryas and the Little Ice Age.[18]

The 5.9 kiloyear event was one of the most intense aridification events of the Holocene Epoch. It occurred around 3900 BC (5,900 years BP), ending the Neolithic Subpluvial, and probably initiated the most recent desiccation of the Sahara desert. It thereby triggered worldwide migration to river valleys, such as from central North Africa to the Nile valley, which eventually led to the emergence of the first complex, highly organized, state-level societies in the 4th millennium BC.[1] It is associated with the last round of the Sahara pump theory.

A model by Claussen et al. (1999) suggested rapid desertification associated with vegetation–atmosphere interactions following a cooling event, Bond event 4.[2] Bond et al. (1997) identified a North Atlantic cooling episode 5,900 years ago from ice-rafted debris, as well as other such episodes, now called Bond events, which indicate the existence of a quasiperiodic cycle of Atlantic cooling events occurring approximately every 1,470 ± 500 years.[3] For some reason, all of the earlier of these arid events (including the 8.2 kiloyear event) were followed by recovery, as attested by the wealth of evidence of humid conditions in the Sahara between 10,000 and 6,000 BP.[4] However, it appears that the 5.9 kiloyear event was followed by a partial recovery at best, with accelerated desiccation in the millennium that followed. For example, Cremaschi (1998) describes evidence of rapid aridification in the Tadrart Acacus of southwestern Libya, in the form of increased aeolian erosion, sand incursions and the collapse of the roofs of rock shelters.[5] The 5.9 kiloyear event was also recorded as a cold event in the Erhai Lake (China) sediments.[6]

In the Middle East the 5.9 kiloyear event contributed to the abrupt end of the Ubaid period.[7] It was associated with the abandonment of unwalled villages and the rapid growth of hierarchically structured walled cities, and, in the Jemdet Nasr period, with the first book-keeping scripts.

The 4.2 kiloyear BP aridification event was one of the most severe climatic events of the Holocene in terms of its impact on cultural upheaval.[1] Starting in ≈2200 BC, it probably lasted the entire 22nd century BC. It is very likely to have caused the collapse of the Old Kingdom in Egypt as well as of the Akkadian Empire in Mesopotamia.[2] The drought may also have initiated southeastward habitat tracking within the Indus Valley Civilization.[3]

A phase of intense aridity at ≈4.2 ka BP is well recorded across North Africa,[4] the Middle East,[5] the Red Sea,[6] the Arabian peninsula,[7] the Indian subcontinent,[3] and mid-continental North America.[8] Glaciers throughout the mountain ranges of western Canada advanced at about this time.[9] Evidence has also been found in an Italian cave flowstone,[10] in the Kilimanjaro ice sheet,[11] and in Andean glacier ice.[12] The onset of the aridification in Mesopotamia near 4,100 BP also coincided with a cooling event in the North Atlantic, known as Bond event 3.[1][13][14]

In ca. 2150 BC the Old Kingdom of Ancient Egypt was hit by a series of exceptionally low Nile floods, which was instrumental in the sudden collapse of centralized government in ancient Egypt.[15] Famines, social disorder, and fragmentation during a period of approximately 40 years were followed by a phase of rehabilitation and restoration of order in the various provinces. Egypt was eventually reunified within a new paradigm of kingship. The process of recovery depended on capable provincial administrators, the deployment of the idea of justice, irrigation projects, and administrative reform.

The aridification of Mesopotamia may have been related to the onset of cooler sea surface temperatures in the North Atlantic (Bond event 3), as analysis of the modern instrumental record shows that large (50%) interannual reductions in Mesopotamian water supply result when subpolar northwest Atlantic sea surface temperatures are anomalously cool.[16] The headwaters of the Tigris and Euphrates Rivers are fed by elevation-induced capture of winter Mediterranean rainfall.

The Akkadian Empire, which in 2300 BC became the second civilization to subsume independent societies into a single state (the first being ancient Egypt at around 3100 BC), was brought low by a wide-ranging, centuries-long drought.[17] Archaeological evidence documents widespread abandonment of the agricultural plains of northern Mesopotamia and dramatic influxes of refugees into southern Mesopotamia around 2170 BC.[18] A 180-km-long wall, the “Repeller of the Amorites,” was built across central Mesopotamia to stem nomadic incursions to the south. Around 2150 BC, the Gutian people, who originally inhabited the Zagros Mountains, defeated the demoralized Akkadian army, took Akkad, and destroyed it around 2115 BC.
Widespread agricultural change in the Near East is visible at the end of the third millennium BC.[19] Resettlement of the northern plains by smaller sedentary populations occurred near 1900 BC, three centuries after the collapse.[18] In the Persian Gulf region, there is a sudden change in settlement pattern, style of pottery and tombs at this time. The 22nd century BC drought marks the end of the Umm al-Nar period and the change to the Wadi Suq period.[7]

The drought may also have caused the collapse of Neolithic cultures around Central China during the late third millennium BC.[20] At the same time, the middle reaches of the Yellow River saw a series of extraordinary floods.[21] In the Yishu River Basin, the flourishing Longshan culture was hit by a cooling that sharply reduced paddy output, in some years leaving no seed to harvest. The resulting scarcity of natural resources led to a substantial decrease in population and a corresponding drop in the number of archaeological sites.[22] About 4,000 cal. yr BP the Longshan culture was displaced by the Yueshi culture, which was comparatively underdeveloped, simple, and unsophisticated.

12. Davis, Mary E.; Thompson, Lonnie G. (2006). “An Andean ice-core record of a Middle Holocene mega-drought in North Africa and Asia”. Annals of Glaciology 43: 34–41. Bibcode:2006AnGla..43...34D. doi:10.3189/172756406781812456. Archived from the original on July 20, 2011.

19. Riehl, S. (2008). “Climate and agriculture in the ancient Near East: a synthesis of the archaeobotanical and stable carbon isotope evidence”. Vegetation History and Archaeobotany 17 (1): 43–51. doi:10.1007/s00334-008-0156-8.

20. Wu, Wenxiang; Liu, Tungsheng (2004). “Possible role of the ‘Holocene Event 3’ on the collapse of Neolithic Cultures around the Central Plain of China”. Quaternary International 117 (1): 153–166. Bibcode:2004QuInt.117..153W. doi:10.1016/S1040-6182(03)00125-3.

April 19, 2012

Electromagnetic radiation consists of waves of energy combining electrical and magnetic fields. These span the whole electromagnetic spectrum: from very long-wave radio waves at one end to X-rays and gamma rays at the other. Visible light falls in a very narrow band in the middle of the spectrum. Every living and non-living object in nature is constantly exposed to a natural background of electromagnetic radiation that comes from space as well as from radioactive elements in the Earth’s crust. A large proportion of the cosmic radiation coming from space is absorbed by the atmosphere, and only a very small portion reaches the ground. However, there is no such filtering of radiation originating from the Earth itself. All living organisms are evolutionarily adapted to such natural radiation in their natural environments. In fact, animals and plants use electromagnetic radiation for a variety of living activities, e.g. communication and the control and regulation of various physiological, psychological and behavioural functions. Though essential for living organisms, exposure to such radiation beyond naturally evolved tolerance limits causes various harmful effects. The effects of increased exposure to electromagnetic radiation on human, animal and plant bodies are now coming to light and are being increasingly studied.

In present urban, domestic and workplace environments, sources of electromagnetic radiation are increasing rapidly. Growing emissions from sources such as power lines, microwave ovens, telecommunications equipment, electrical appliances, radar, and radio and television transmissions are creating a problem of increasing electromagnetic pollution of the environment.

Electromagnetic radiation may be classified into two broad categories according to frequency and effects. The first category includes relatively low-frequency radiation, from the visible light band down through infrared, microwave, radar, television and radio waves, and constitutes non-ionizing radiation. The second category includes relatively high-frequency gamma rays and X-rays and constitutes ionizing radiation. Exposure to excessive doses of both types of radiation causes various harmful effects on living organisms.
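The ionizing/non-ionizing split follows from photon energy, E = h·f. A minimal sketch of the classification (the ~10 eV threshold used here is an assumed round figure; actual ionization energies vary by material):

```python
# Classify electromagnetic radiation as ionizing or non-ionizing by photon
# energy E = h * f. The ~10 eV threshold is an illustrative round figure,
# not a universal constant: real ionization energies depend on the material.

PLANCK_J_S = 6.626e-34          # Planck constant, J*s
EV_IN_J = 1.602e-19             # one electronvolt in joules
IONIZATION_THRESHOLD_EV = 10.0  # assumed order-of-magnitude value

def photon_energy_ev(frequency_hz: float) -> float:
    """Photon energy in electronvolts for a given frequency in hertz."""
    return PLANCK_J_S * frequency_hz / EV_IN_J

def classify(frequency_hz: float) -> str:
    """Return 'ionizing' or 'non-ionizing' under the assumed threshold."""
    if photon_energy_ev(frequency_hz) >= IONIZATION_THRESHOLD_EV:
        return "ionizing"
    return "non-ionizing"

print(classify(5e14))   # green visible light, ~2 eV -> non-ionizing
print(classify(1e18))   # X-ray band, ~4 keV -> ionizing
```

This is why frequency, not intensity, decides the category: a radio transmitter of any power still emits photons far too weak to ionize.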

The effects of extremely low-frequency electromagnetic radiation depend on dose and duration of exposure and are cumulative. It may take years of exposure before symptoms appear. Usually, symptoms of such electromagnetic pollution manifest as constant headaches, lack of energy, loss of appetite, mental blocks, decreased ability to concentrate, insomnia and sleep disturbances, palpitations, dizziness, trembling and rashes. After prolonged exposure, the symptoms may proceed to blackouts, nervous and psychological disorders such as depression, a feeling of being trapped, anxiety attacks, increased suicidal impulses, epilepsy, lowered libido and fertility, increased risk of arthritis, and even cancer. White blood cells (WBC) are particularly sensitive to electromagnetic radiation, and the risk of leukaemia is increased in areas where exposure to such radiation is high, e.g. around power lines. Exposure to alternating magnetic fields accompanies exposure to electromagnetic radiation around power lines and from a variety of electrical appliances. Such exposure causes a build-up of serum triglycerides, i.e. the fats in the bloodstream that are implicated in heart disease. Constant or frequent overexposure to such radiation may contribute to the onset of heart problems.

A fully loaded 400 kV power line creates an electromagnetic field for 350 metres on either side of the line. This can generate electrical currents in the body, producing an effect comparable to those that give relief from pain, possibly through induced production of endorphins (the body’s natural painkillers). It has been observed that continuous exposure to such radiation results in the development of an addictive dependency in cows: they start to prefer standing under power pylons for grazing or resting, and they show withdrawal symptoms when away from such stimulation. It has been found that an excessive concentration of positive ions builds up under power pylons, and laboratory animals have died after three months of constant exposure to such conditions. A 50 Hz electromagnetic field has been found to adversely affect E. coli bacteria, and the water of wells underneath power lines has been shown to be devoid of the naturally occurring bacteria present in waters away from such exposure. Laboratory experiments on rats, mice and animal cells have shown increased activity of the enzyme ornithine decarboxylase in electromagnetic fields of strength comparable to that produced by power lines. This enzyme speeds up the growth of cancer cells. Similar frequencies of radiation have been found to cause loss of calcium from the brain. It has been suggested that the problems arising from exposure around power lines may partly be due to the effects of such radiation on the calcium metabolism of the body.

When an object, living or non-living, enters an electromagnetic field, the field folds over the object so that its strength may become several hundred times greater than that of the unperturbed field. In the animal body, the head houses the most vital organ, the brain, which is the most sensitive to electromagnetic fields; the head is therefore the part most affected by this field-strength enhancement. Research on the effects of electric blankets has shown alarming results, because the user’s body is entirely exposed to the electromagnetic field for quite a few hours at a time on a regular basis. Among women users, a seasonal (September to June) increase in miscarriages has been reported. The menstrual cycle is also disturbed in women users, which may be a contributing factor in the incidence of hormone-related cancers such as breast cancer. The effect of electromagnetic fields on such cancers is thought to operate through their effect on melatonin production in the body. Electrically heated beds and blankets have also been linked to slower foetal development and learning problems in children, especially if mothers used them during pregnancy.

The strength of the electrical field drops off quite quickly with distance from the source, and its frequency is clearly that of the source. The magnetic field, however, fluctuates more, has ‘contaminating frequencies’, and its strength does not decrease with distance as quickly as that of the electrical field. Furthermore, with increased distance, the distribution of the magnetic field over the body is more likely to be uniform. A 400 kV power line may generate underneath it a magnetic field of 1 microtesla (when the current is around 100 A) to 10 microtesla (at greater loads, the usual maximum being 5,000 A), depending upon many variables such as load, capacity, ground and weather conditions. It has been observed that people living near the incoming supply in high-rise flats or on ground floors are exposed to an average magnetic field of 0.2 to 0.4 microtesla. These people have been found to be more susceptible to heart disease, cancers, depression and thyroid problems than those living on top floors away from the supply lines, where the magnetic field may be only around 0.015 microtesla. In cities, there may be continuous exposure to magnetic fields of less than 0.1 microtesla in normal households. Such exposure may cause depressive feelings in the inhabitants. Electrical workers may experience exposure to levels of up to 5 microtesla, and the risks of various bodily and psychological disturbances to them may well be greater than those of moving in and out of a similar field. Probably an unbalanced, non-uniform magnetic field poses a greater risk of various diseases and disturbances to the exposed subject than a balanced and uniform field.
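The ~1 microtesla figure quoted for a 100 A load is consistent with the field of a long straight conductor, B = μ0·I/(2πr), at a ground clearance of roughly 20 m. A minimal sketch (the 20 m clearance is an assumption, and a real three-phase line partially cancels its own field, so this single-conductor formula overestimates at high loads):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def b_field_tesla(current_a: float, distance_m: float) -> float:
    """Magnetic flux density of a long straight conductor: B = mu0*I/(2*pi*r)."""
    return MU_0 * current_a / (2 * math.pi * distance_m)

# ~1 microtesla at an assumed 20 m clearance for a 100 A load. At 5,000 A the
# single-conductor model gives 50 microtesla, well above the ~10 microtesla
# quoted in the text, because real three-phase conductors partially cancel.
print(b_field_tesla(100, 20) * 1e6)   # field in microtesla
```

The 1/r fall-off is also why the magnetic field decreases with distance more slowly than the source-bound electric field described above.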

Apart from those on human beings, a number of detrimental effects of electromagnetic radiation have also been observed on animals and plants. It has been observed that earthworms are disturbed by underground power cables and move away from them. Hens living near power lines lay scrambled eggs in thin shells, bees seal up their hives and become aggressive, cows lose their appetite, and birds such as homing pigeons become disoriented. Plants exposed to electromagnetic radiation show disturbed root growth, seed germination, pollen-tube growth, ion and water uptake, and photosynthesis.

Exposure to microwaves is also increasing, particularly in households, owing to the growing use of microwave-producing gadgets. Such exposure also poses health risks, as microwave exposure is known to cause cataracts and to have detrimental effects on the nervous and cardiovascular systems.

April 14, 2012

Atmospheric pollution becomes a problem over a large area because the atmosphere transports relatively uniform concentrations of air pollutants over considerable distances. Rodhe et al. (1982) categorized the scales of atmospheric transport as follows:

Local transport: This occurs from an individual point or line source over a distance of a few kilometres only. It is mainly associated with plumes and is affected by local meteorological conditions. As the plume disperses, most of the pollutant falls back to the ground as dry deposition.

Regional transport: This occurs over distances of less than a thousand kilometres. At this scale, individual plumes merge, and the development of a relatively uniform pollutant profile becomes possible after about 1–2 hours. Such transport spreads air pollution from urban or industrial sources to the surrounding downwind countryside. In such transport, pollutants come down by both dry and wet deposition, depending upon meteorological conditions.

Sub-continental and continental transport: Such transport occurs over several hundred to a few thousand kilometres. In such transport, a relatively uniform pollutant profile becomes well developed and the pollution undergoes several diurnal cycles. At this scale, wet deposition of pollutants becomes more important than dry deposition, and interchange of pollution between the troposphere and stratosphere becomes possible.

Global transport: Such transport extends from a few thousand kilometres up to the entire global atmosphere. At this scale, definable pathways of pollutant transport disappear due to inevitable mixing in the atmosphere. Continental and global transport of air pollution is termed long-range transport (LRT). Such transport requires an organized meteorological system of at least synoptic scale that allows movement of pollution without much dispersion or loss.
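The four scales above amount to a simple distance lookup. A minimal sketch (the numeric cut-offs are illustrative round numbers approximating the ranges described, not values from Rodhe et al.):

```python
# Map a transport distance (km) to the approximate scale described in the
# text. The cut-offs are illustrative round numbers, not values taken from
# Rodhe et al. (1982).

def transport_scale(distance_km: float) -> str:
    if distance_km < 100:
        return "local"        # individual plumes, a few km to tens of km
    if distance_km < 1000:
        return "regional"     # merged plumes, < 1,000 km
    if distance_km < 5000:
        return "continental"  # sub-continental to continental
    return "global"           # long-range transport, hemispheric mixing

print(transport_scale(5))      # local
print(transport_scale(500))    # regional
print(transport_scale(2000))   # continental
print(transport_scale(8000))   # global
```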

The most unfavourable conditions for LRT are windy, blustery conditions associated with heavy rain and strong turbulent mixing. Under such conditions, any organized mass of pollution will almost immediately be blown apart. Scavenging from within and below the clouds will remove virtually all the pollutant material quite close to the source region. Further, there would be little time available for secondary chemistry to develop.

The most favourable conditions for LRT involve the circulation associated with the back side of a high-pressure system, where two major features become crucial:

1. Persistence of a synoptic-scale inversion over a wide area, reducing the vertical distribution of pollution and restricting it to a fairly shallow depth of the atmosphere.

Both these situations are assisted by flat terrain that allows uninterrupted horizontal movement of the air mass. During development of LRT, pollutants collect initially under the influence of a shallow, stable high-pressure system that has been stagnating over a source region for several days. During this period, surface wind speeds have been less than 3 metres per second, mixing heights below the inversion have been restricted to 1,500 metres or less, hours of sunshine have been close to the maximum possible, and there have been no weather fronts or precipitation. Over this period, which is most likely to persist for about four days, the concentration of air pollutants increases significantly, extensive secondary reactions occur and poor visibility persists. Under such circumstances, LRT of air pollution can occur in three major ways, the first having the strongest impact.

1. The high pressure slowly begins to move out of the area, and polluted air is drawn towards the trailing edge (on the back side), where it starts moving northwards (in the northern hemisphere) in the prevailing circulation. If a weak cold or warm front is in the vicinity, transport is improved and is often better directed by the enhanced pressure gradient. Inversion and a stable atmosphere often restrict LRT to layers below the 700 mb pressure altitude, and the polluted air mass may move several thousand kilometres without much dispersion or dilution. Separation from the friction layer near the surface is also important if the polluted air mass is to maintain its cohesion. Over continental areas, the extreme stability of the lowest atmospheric layers in winter suggests that airflow at the 850 mb pressure level might be the best indicator of LRT. During summer, transport is usually not so well defined, as solar heating and vertical convection tend to dominate the stability situation.

2. Variants of the LRT situation deliver smaller but still important concentrations of pollutants to great distances downwind of sources. If the edge of a high is associated with cool, cloudy weather and a stiff breeze, transport will be quite rapid in a dynamic and unstable atmosphere. High humidity and cloud cover will allow oxidation of precursors such as SO2 or NOx without a great deal of dispersion away from the air-mass core. This results in the transfer of moderate concentrations of pollutants that are not removed by rainwater, because there is not enough vertical diffusion for precipitation.

3. If the circulation system on the edge of a high draws moist air from ocean regions, which then moves over source areas of pollution, weak turbulent mixing will draw the polluted material into the air stream. This material will mix rapidly through the depth of the airflow, creating quite uniform and mild pollution concentrations. This pollution will be transported over considerable distances. If it meets a mountain barrier on the way, the mildly polluted air can be drawn up orographically and forced over the top. Pollutants are then scavenged, resulting in mildly acidic rainfall on the windward side of the mountain, which may persist for several hours. In the tropics, LRT largely depends on the persistence and frequency of the trade winds associated with emission source regions. Transport occurs at the altitude of the trade-wind inversion, in stable atmospheric conditions, over long distances. This allows minimal dispersion of pollutant material in the air mass.

In the southern hemisphere, there are very few air pollution sources and, therefore, no important LRT associated with air pollution. In the northern hemisphere, there are four major continental- to hemispheric-scale air pollution problems due to long-range transport: Arctic haze, western Atlantic Ocean air quality, Saharan dust and Asian dust.

Trajectory analysis

Measurements of LRT are often difficult and expensive. Therefore, trajectory analysis is often used to establish back trajectories (the general source area of the pollution) and forward trajectories (the general location to which pollutants are being transported). Trajectories are calculated from synoptic-level upper-wind data and are normally based on isobaric or isentropic principles. Two-dimensional models cannot describe the vertical movements of either the airflow or the polluted air mass. Three-dimensional models, using a series of grid points at various altitudes, have been designed to establish the sources of polluted air masses that have moved several thousand kilometres. Trajectories are best used as a supporting method to establish general transport movements, not to estimate changes in pollution concentrations over time. Benefits of such use are:

Support for chemical tracer experiments from different source regions;

Simulation of dry deposition;

Evaluation of acute air pollution problems;

Establishment of the source of a chronic pollution problem.

At best, trajectories show general accuracy for five days, assuming that the airflow is consistent, with little vertical or horizontal change.

Most often, trajectories lose accuracy after 48 hours, since the spatial distribution of the upper synoptic grid and the number of measurements from it are too sparse to obtain accurate interpolations of airflow variations. Distortions occur in the presence of turbulent eddies, evolving synoptic patterns, increasing diffusion and inaccuracies in the determination of the mean wind. Trajectory analysis fails in the presence of fronts or other rapidly changing atmospheric conditions.
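The idea of a back trajectory can be illustrated by stepping a parcel backwards through a wind field. A minimal two-dimensional sketch using Euler steps (the uniform wind field and step size are illustrative; operational models interpolate real gridded winds in three dimensions):

```python
# Back-trajectory sketch: integrate a parcel backwards in time through a wind
# field with simple Euler steps. The uniform wind field below is illustrative;
# real trajectory models interpolate gridded synoptic winds.

from typing import Callable, Tuple

# A wind field maps (x_km, y_km, t_hours) -> (u, v) in km/h.
Wind = Callable[[float, float, float], Tuple[float, float]]

def back_trajectory(x_km: float, y_km: float, wind: Wind,
                    hours: float, dt_hours: float = 1.0) -> Tuple[float, float]:
    """Trace a parcel backwards from (x_km, y_km) over `hours` hours."""
    t = 0.0
    while t < hours:
        u, v = wind(x_km, y_km, t)   # wind at the parcel's current position
        x_km -= u * dt_hours         # step against the flow
        y_km -= v * dt_hours
        t += dt_hours
    return x_km, y_km

# Uniform 36 km/h westerly wind: a parcel observed at (360, 0) after 10 hours
# must have started 360 km upwind, at the origin.
westerly: Wind = lambda x, y, t: (36.0, 0.0)
print(back_trajectory(360.0, 0.0, westerly, hours=10))
```

The interpolation errors mentioned above enter through the `wind` function: with a sparse synoptic grid, each Euler step samples a slightly wrong wind, and the errors accumulate over the integration, which is why accuracy typically degrades after about 48 hours.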

April 12, 2012

Climate is the most important factor controlling the environment. The type of soil and the native vegetation in a given region are basically products of the climate of the area. The scientific study of climate requires a scheme of climate classification. Such schemes aim to categorize all the variations in the climates found in different areas into several clearly defined and easily distinguishable groups. Many useful systems of climate classification can be devised by taking different weather elements, such as temperature, pressure, winds and precipitation, as the basis of classification. The distribution of natural vegetation and soils may suggest still other classification schemes.

Temperature as basis of climate classification
The general parallelism of isotherms with parallels of latitude was perhaps the first basis of climate classification. With intelligent application, temperature has been made a fundamental factor in most of the schemes. Three major climate groups are clearly recognizable on the basis of this criterion:

Equatorial tropical group: characterized by uniformly warm temperature throughout the year and absence of a winter season.

Polar-arctic group: characterized by the absence of a true summer season.

Middle-latitude group: characterized by the presence of both a distinct summer and a distinct winter season.

A common boundary between polar-arctic climates and middle-latitude climates is the 10 °C (50 °F) isotherm of the warmest month (i.e. July in the northern hemisphere). The boundary between middle-latitude and equatorial climates is marked by the 18 °C (64.4 °F) isotherm of the coldest month.

The use of temperature alone as the basis of climate classification is unsatisfactory because humid and desert regions are not distinguished in such a scheme.

Precipitation as basis of climate classification
The climate map in such a scheme would be the same as the mean annual rainfall map. Such a system may be refined by subdividing classes according to the distribution of precipitation throughout the year, whether uniform or seasonal. However, such a classification scheme fails because it groups cold arctic climates together with the hot deserts of low latitudes; the effectiveness of precipitation is controlled by air temperature. The cold climates, in general, are effectively humid with the same meager precipitation that produces very dry deserts in the hot subtropics and tropics.

Vegetation as basis of climate classification
Different plant types require special conditions of temperature and precipitation for their survival. Thus, plants form an index of climate, and the limits of growth of key plant types provide meaningful boundaries of climatic zones. There is much merit in such a vegetation-based climate classification scheme. However, vegetation is an effect rather than a primary cause of climate. Therefore, it cannot give so satisfactory a climate classification as one based on the primary cause(s) of climate.

Koppen climate classification system
After various attempts at classification schemes based on a single characteristic feature, it became evident that a meaningful system should devise climate classes that combine temperature and precipitation characteristics, but also set limits and boundaries that fit obviously known vegetational distributions. Dr. Wladimir Koppen (1918) devised such a system, which was later revised and extended by him and his students. This Koppen climate classification system has become the most widely used system for biogeographical purposes.

The Koppen system is strictly empirical in nature. Each climate group is defined according to fixed values of temperature and precipitation, computed from averages of the year or of individual months. No attention is paid to the causes of climate in terms of pressure belts, wind belts, air masses, fronts or storms. A given place can be assigned to a particular climate subgroup solely on the basis of the records of temperature and precipitation of that area, provided that the records are long enough to yield meaningful averages. Since air temperature and precipitation are the most easily obtainable surface weather data, the system has the great advantage that the areas covered by each subtype of climate can be computed or estimated for large regions of the world. The system thus incorporates an empirical-quantitative approach.
The Koppen system has a shorthand code of letters designating major climate groups (A, B, C, D, E), subgroups within the major groups (S, W, f, m, s, w) and further subdivisions distinguishing particular seasonal characteristics of temperature and precipitation (a, b, c, d, h, k).
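The shorthand code can be read letter by letter, which a small lookup-table sketch makes concrete. The letter meanings follow the standard Koppen scheme, but the English glosses and function name here are my own illustrative choices:

```python
# Glosses for the Koppen shorthand letters (descriptions are informal).
MAJOR = {"A": "tropical", "B": "dry", "C": "warm temperate",
         "D": "cold snow-forest", "E": "polar"}
SUBGROUP = {"S": "steppe", "W": "desert", "f": "fully humid",
            "m": "monsoonal", "s": "dry summer", "w": "dry winter"}
SEASONAL = {"a": "hot summer", "b": "warm summer", "c": "cool summer",
            "d": "very cold winter", "h": "hot", "k": "cold"}

def decode(code):
    """Expand a Koppen code such as 'Cfa' or 'BWh' into a readable phrase."""
    parts = [MAJOR[code[0]]]
    for letter in code[1:]:
        # second letter is a subgroup, third a seasonal qualifier
        parts.append(SUBGROUP.get(letter) or SEASONAL[letter])
    return ", ".join(parts)
```

For instance, decode("Cfa") yields "warm temperate, fully humid, hot summer" and decode("BWh") yields "dry, desert, hot".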

Air mass source regions and frontal zones as basis of climate classification

Increasingly detailed studies of various characteristics of weather elements, the global circulation, air mass properties, source regions and cyclonic storms have yielded many principles concerning the causes of weather patterns and their seasonal variations at the global scale. This knowledge has therefore been incorporated into an explanatory-descriptive system of climate classification based on cause and effect. Such a system is built on the location of air mass source regions and on the nature and movement of air masses, fronts and cyclonic storms. Koppen code symbols have been incorporated into this system, interrelating the two approaches. This system essentially provides a reasonable scientific explanation for the existence of Koppen's climate groups.