The evolution of plants has resulted in widely varying levels of complexity, from the earliest algal mats, through bryophytes, lycopods, and ferns, to the complex gymnosperms and angiosperms of today. While many of the groups which appeared earlier continue to thrive, as exemplified by algal dominance in marine environments, more recently derived groups have also displaced previously ecologically dominant ones, e.g. the ascendance of flowering plants over gymnosperms in terrestrial environments.[6]:498

Evidence for the appearance of the first land plants occurs in the Ordovician, around 450 million years ago, in the form of fossil spores.[7] Land plants began to diversify in the Late Silurian, from around 430 million years ago, and the results of their diversification are displayed in remarkable detail in an early Devonian fossil assemblage from the Rhynie chert. This chert, formed in volcanic hot springs, preserved several species of early plants in cellular detail by petrification.[8]

By the middle of the Devonian, many of the features recognised in plants today were present, including roots and leaves. Late Devonian free-sporing plants such as Archaeopteris had secondary vascular tissue that produced wood and had formed forests of tall trees. Also by the late Devonian, Elkinsia, an early seed fern, had evolved seeds.[9] Evolutionary innovation continued into the Carboniferous and is still ongoing today. Most plant groups were relatively unscathed by the Permo-Triassic extinction event, although the structures of communities changed. This may have set the scene for the appearance of the flowering plants in the Triassic (~200 million years ago), and their later diversification in the Cretaceous and Paleogene. The latest major group of plants to evolve was the grasses, which became important in the mid-Paleogene, from around 40 million years ago. The grasses, as well as many other groups, evolved new mechanisms of metabolism to survive the low CO2 and warm, dry conditions of the tropics over the last 10 million years.

Land plants evolved from a group of green algae, perhaps as early as 510 million years ago;[10] some molecular estimates place their origin even earlier, as much as 630 million years ago.[11] Their closest living relatives are the charophytes, specifically Charales; assuming that the Charales' habit has changed little since the divergence of lineages, this means that the land plants evolved from a branched, filamentous alga dwelling in shallow fresh water,[12] perhaps at the edge of seasonally desiccating pools.[10] The alga would have had a haplontic life cycle: it would only very briefly have had paired chromosomes (the diploid condition) when the egg and sperm first fused to form a zygote; this would have immediately divided by meiosis to produce cells with half the number of unpaired chromosomes (the haploid condition). Co-operative interactions with fungi may have helped early plants adapt to the stresses of the terrestrial realm.[13]

The Devonian marks the beginning of extensive land colonization by plants, which – through their effects on erosion and sedimentation – brought about significant climatic change.

Plants were not the first photosynthesisers on land; weathering rates suggest that photosynthetic organisms were already living on the land 1,200 million years ago,[10] and microbial fossils have been found in freshwater lake deposits from 1,000 million years ago,[14] but the carbon isotope record suggests that they were too scarce to impact the atmospheric composition until around 850 million years ago.[15] These organisms, although phylogenetically diverse,[16] were probably small and simple, forming little more than an "algal scum".[10]

The first evidence of plants on land comes from spores of mid-Ordovician age (early Llanvirn, ~470 million years ago).[17][18][19] These spores, known as cryptospores, were produced either singly (monads), in pairs (dyads) or in groups of four (tetrads), and their microstructure resembles that of modern liverwort spores, suggesting they share an equivalent grade of organisation.[7] Their walls contain sporopollenin – further evidence of an embryophytic affinity.[20] It could be that atmospheric 'poisoning' prevented eukaryotes from colonising the land prior to this,[21] or it could simply have taken a long time for the necessary complexity to evolve.[22]

Trilete spores similar to those of vascular plants appear soon afterwards, in Upper Ordovician rocks.[23] Depending on exactly when the tetrad splits, each of the four spores may bear a "trilete mark", a Y-shape, reflecting the points at which each cell squashed up against its neighbours.[17] However, this requires that the spore walls be sturdy and resistant at an early stage. This resistance is closely associated with having a desiccation-resistant outer wall – a trait only of use when spores must survive out of water. Indeed, even those embryophytes that have returned to the water lack a resistant wall, and thus don't bear trilete marks.[17] A close examination of algal spores shows that none bear trilete marks, either because their walls are not resistant enough, or, in those rare cases where they are, because the spores disperse before they are squashed enough to develop the mark, or do not fit into a tetrahedral tetrad.[17]

The earliest megafossils of land plants were thalloid organisms, which dwelt in fluvial wetlands and are found to have covered most of an early Silurian flood plain. They could only survive when the land was waterlogged.[24] There were also microbial mats.[25]

Once plants had reached the land, there were two approaches to dealing with desiccation. The bryophytes either avoid it or give in to it, restricting their ranges to moist settings or drying out and putting their metabolism "on hold" until more water arrives. Tracheophytes resist desiccation: they all bear a waterproof outer cuticle layer wherever they are exposed to air (as do some bryophytes) to reduce water loss, but – since a total covering would cut them off from CO2 in the atmosphere – they rapidly evolved stomata, small openings that allow, and control the rate of, gas exchange. Tracheophytes also developed vascular tissue to aid in the movement of water within the organism (see below), and moved away from a gametophyte-dominated life cycle (see below). Vascular tissue also facilitated upright growth without the support of water and paved the way for the evolution of larger plants on land.

The establishment of a land-based flora caused increased accumulation of oxygen in the atmosphere, as the plants produced oxygen as a waste product. When this concentration rose above 13%, wildfires became possible. This is first recorded in the early Silurian fossil record by charcoalified plant fossils.[26] Apart from a controversial gap in the Late Devonian, charcoal is present ever since.

Charcoalification is an important taphonomic mode. Wildfire drives off the volatile compounds, leaving only a shell of pure carbon. This is not a viable food source for herbivores or detritivores, so it is prone to preservation; it is also robust, so can withstand pressure and display exquisite, sometimes sub-cellular, detail.

All multicellular plants have a life cycle comprising two generations or phases. One is termed the gametophyte, has a single set of chromosomes (denoted 1N), and produces gametes (sperm and eggs). The other is termed the sporophyte, has paired chromosomes (denoted 2N), and produces spores. The gametophyte and sporophyte may appear identical – homomorphy – or may be very different – heteromorphy.

The pattern in plant evolution has been a shift from homomorphy to heteromorphy. The algal ancestors of land plants were almost certainly haplobiontic, being haploid throughout their life cycle, with a unicellular zygote providing the 2N stage. All land plants (i.e. embryophytes) are diplobiontic – that is, both the haploid and diploid stages are multicellular.[6] Two trends are apparent: bryophytes (liverworts, mosses and hornworts) have developed the gametophyte, with the sporophyte becoming almost entirely dependent on it; vascular plants have developed the sporophyte, with the gametophyte being particularly reduced in the seed plants.

It has been proposed that the basis for the emergence of the diploid phase of the life cycle as the dominant phase, is that diploidy allows masking of the expression of deleterious mutations through genetic complementation.[27][28] Thus if one of the parental genomes in the diploid cells contains mutations leading to defects in one or more gene products, these deficiencies could be compensated for by the other parental genome (which nevertheless may have its own defects in other genes). As the diploid phase was becoming predominant, the masking effect likely allowed genome size, and hence information content, to increase without the constraint of having to improve accuracy of replication. The opportunity to increase information content at low cost is advantageous because it permits new adaptations to be encoded. This view has been challenged, with evidence showing that selection is no more effective in the haploid than in the diploid phases of the lifecycle of mosses and angiosperms.[29]
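The masking argument can be made concrete with a toy probability calculation (illustrative numbers only, not drawn from the cited studies): if each copy of a gene independently carries a deleterious loss-of-function mutation with probability q, a haploid expresses the defect with probability q, whereas a diploid expresses it only when both copies fail.

```python
# Toy sketch of genetic complementation: a diploid expresses a recessive
# defect only if BOTH gene copies carry the mutation.
# q is an illustrative per-copy mutation probability, not an empirical value.
q = 0.01

p_haploid = q       # single copy: any defective allele is expressed
p_diploid = q ** 2  # both copies must be defective to show the defect

# The diploid is affected roughly a hundred times less often here,
# illustrating how diploidy can mask deleterious mutations.
print(p_haploid, p_diploid)
```

The same arithmetic also shows the limit of the argument: masking shelters mutations from selection rather than removing them, which is why the comparative studies cited above can test whether selection really is weaker in the haploid phase.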

There are two competing theories to explain the appearance of a diplobiontic lifecycle.

The interpolation theory (also known as the antithetic or intercalary theory)[30] holds that the sporophyte phase was a fundamentally new invention, caused by the mitotic division of a freshly germinated zygote, continuing until meiosis produces spores. This theory implies that the first sporophytes bore a very different morphology to the gametophyte they depended on.[30] This seems to fit well with what is known of the bryophytes, in which a vegetative thalloid gametophyte is parasitised by simple sporophytes, which often comprise no more than a sporangium on a stalk. Increasing complexity of the ancestrally simple sporophyte, including the eventual acquisition of photosynthetic cells, would free it from its dependence on a gametophyte, as seen in some hornworts (Anthoceros), and eventually result in the sporophyte developing organs and vascular tissue, and becoming the dominant phase, as in the tracheophytes (vascular plants).[6] This theory may be supported by observations that smaller Cooksonia individuals must have been supported by a gametophyte generation. The observed appearance of larger axial sizes, with room for photosynthetic tissue and thus self-sustainability, provides a possible route for the development of a self-sufficient sporophyte phase.[30]

The alternative hypothesis is termed the transformation theory (or homologous theory). This posits that the sporophyte arose through a delay in the occurrence of meiosis after the zygote germinated. Since the same genetic material would be employed, the haploid and diploid phases would look the same. This explains the behaviour of some algae, which produce alternating phases of identical sporophytes and gametophytes. Subsequent adaptation to the desiccating land environment, which makes sexual reproduction difficult, would result in the simplification of the sexually active gametophyte, and elaboration of the sporophyte phase to better disperse the waterproof spores.[6] The tissue of sporophytes and gametophytes preserved in the Rhynie chert is of similar complexity, which is taken to support this hypothesis.[30][31][32]

To photosynthesise, plants must absorb CO2 from the atmosphere. However, this comes at a price: while stomata are open to allow CO2 to enter, water can evaporate.[33] Water is lost much faster than CO2 is absorbed, so plants need to replace it, and have developed systems to transport water from the moist soil to the site of photosynthesis.[33] Early plants transported water within the porous walls of their cells. Later, they evolved the ability to control water loss (and CO2 acquisition) through the use of a waterproof cuticle perforated by stomata that could open and close to regulate evapotranspiration. Specialised water transport tissues subsequently evolved, first in the form of hydroids, then tracheids and secondary xylem, followed by vessels in flowering plants.[33]

The high CO2 levels of Silurian-Devonian times, when plants were first colonising land, meant that the need for water was relatively low. As CO2 was withdrawn from the atmosphere by plants, more water was lost in its capture, and more elegant transport mechanisms evolved.[33] As water transport mechanisms, and waterproof cuticles, evolved, plants could survive without being continually covered by a film of water. This transition from poikilohydry to homoiohydry opened up new potential for colonisation.[33] Plants then needed a robust internal structure that contained long narrow channels for transporting water from the soil to all the different parts of the above-soil plant, especially to the parts where photosynthesis occurred.

During the Silurian, CO2 was readily available, so little water needed to be expended to acquire it. By the end of the Carboniferous, when CO2 levels had lowered to something approaching today's, around 17 times more water was lost per unit of CO2 uptake.[33] However, even in these "easy" early days, water was at a premium, and had to be transported to parts of the plant from the wet soil to avoid desiccation. This early water transport took advantage of the cohesion-tension mechanism inherent in water. Water has a tendency to diffuse to areas that are drier, and this process is accelerated when water can be wicked along a fabric with small spaces. In narrow columns of water, such as that within the plant cell walls or in tracheids, when molecules evaporate from one end, they pull the molecules behind them along the channels. Therefore, transpiration alone provided the driving force for water transport in early plants.[33] However, without dedicated transport vessels, the cohesion-tension mechanism cannot transport water more than a few cm, limiting the size of the earliest plants.[33] This process demands a steady supply of water from one end, to maintain the chains; to avoid exhausting it, plants developed a waterproof cuticle. Early cuticle may not have had pores but did not cover the entire plant surface, so that gas exchange could continue.[33]

A banded tube from the Late Silurian/Early Devonian; the bands are partly concealed by an opaque carbonaceous coating. Scale bar: 20 μm

To be free from the constraints of small size and constant moisture that the parenchymatic transport system inflicted, plants needed a more efficient water transport system. During the early Silurian, they developed specialised cells that were lignified (or bore similar chemical compounds);[33] this process coincided with cell death, allowing the cell contents to be emptied and water to be passed through them.[33] These wider, dead, empty cells (xylem) were much more conductive than the inter-cell method, giving the potential for transport over longer distances, and higher CO2 diffusion rates.

The earliest macrofossils to bear water-transport tubes are Silurian plants placed in the genus Cooksonia.[34] The early Devonian pretracheophytes Aglaophyton and Horneophyton have structures very similar to the hydroids of modern mosses.

Plants continued to innovate new ways of reducing the resistance to flow within their cells, thereby increasing the efficiency of their water transport. Thickened bands on the walls of tubes, apparent from the early Silurian onwards,[35] are adaptations to ease the flow of water.[36] Banded tubes, as well as tubes with pitted ornamentation on their walls, were lignified[37] and, when they form single-celled conduits, are referred to as tracheids. These, the "next generation" of transport cell design, have a more rigid structure than hydroids, preventing their collapse at higher levels of water tension.[33] Tracheids may have a single evolutionary origin, possibly within the hornworts,[38] uniting all tracheophytes (but they may have evolved more than once).[33]

Water transport requires regulation, and dynamic control is provided by stomata.[6]:521 By adjusting the amount of gas exchange, they can restrict the amount of water lost through transpiration. This is an important role where water supply is not constant, and indeed stomata appear to have evolved before tracheids, being present in the non-vascular hornworts.[33]

An endodermis probably evolved during the Siluro-Devonian, but the first fossil evidence for such a structure is Carboniferous.[33] This structure in the roots covers the water transport tissue and regulates ion exchange (and prevents unwanted pathogens and other contaminants from entering the water transport system). The endodermis can also provide an upwards pressure, forcing water out of the roots when transpiration is not enough of a driver.

Once plants had evolved this level of controlled water transport, they were truly homoiohydric, able to extract water from their environment through root-like organs rather than relying on a film of surface moisture, enabling them to grow to much greater size.[33] As a result of their independence from their surroundings, they lost their ability to survive desiccation – a costly trait to retain.[33]

During the Devonian, maximum xylem diameter increased with time, with the minimum diameter remaining fairly constant.[36] By the Middle Devonian, the tracheid diameter of some plant lineages[39] had plateaued.[36] Wider tracheids allow water to be transported faster, but the overall transport rate depends also on the overall cross-sectional area of the xylem bundle itself.[36] The increase in vascular bundle thickness further seems to correlate with the width of plant axes, and plant height; it is also closely related to the appearance of leaves[36] and increased stomatal density, both of which would increase the demand for water.[33]

While wider tracheids with robust walls make it possible to achieve higher water transport pressures, this increases the problem of cavitation.[33] Cavitation occurs when a bubble of air forms within a vessel, breaking the bonds between chains of water molecules and preventing them from pulling more water up with their cohesive tension. A tracheid, once cavitated, cannot have its embolism removed and return to service (except in a few advanced angiosperms that have developed a mechanism of doing so). It is therefore well worth plants' while to avoid cavitation. For this reason, pits in tracheid walls have very small diameters, to prevent air entering and allowing bubbles to nucleate.[33] Freeze-thaw cycles are a major cause of cavitation.[33] Damage to a tracheid's wall almost inevitably leads to air leaking in and cavitation, hence the importance of many tracheids working in parallel.[33]

Ultimately, however, some cavitation incidents will occur, so plants have evolved a range of mechanisms to contain the damage.[33] Small pits link adjacent conduits to allow fluid to flow between them, but not air – although ironically these pits, which prevent the spread of embolisms, are also a major cause of them.[33] These pitted surfaces further reduce the flow of water through the xylem by as much as 30%.[33] Conifers, by the Jurassic, developed an ingenious improvement,[40] using valve-like structures to isolate cavitated elements. These torus-margo[41] structures consist of a blob (the torus) suspended in the middle of a donut-shaped membrane (the margo); when one side depressurises, the torus is sucked against the pit aperture and blocks further flow.[33] Other plants simply accept cavitation; for instance, oaks grow a ring of wide vessels at the start of each spring, none of which survive the winter frosts. Maples use root pressure each spring to force sap upwards from the roots, squeezing out any air bubbles.

Growing to height also exploited another trait of tracheids – the support offered by their lignified walls. Defunct tracheids were retained to form a strong, woody stem, produced in most instances by a secondary xylem. However, in early plants, tracheids were too mechanically vulnerable, and retained a central position, with a layer of tough sclerenchyma on the outer rim of the stems.[33] Even when tracheids do take a structural role, they are supported by sclerenchymatic tissue.

Tracheids end with walls, which impose a great deal of resistance on flow;[36] vessel members have perforated end walls, and are arranged in series to operate as if they were one continuous vessel.[36] The function of end walls, which were the default state in the Devonian, was probably to avoid embolisms. An embolism is where an air bubble is created in a tracheid. This may happen as a result of freezing, or by gases dissolving out of solution. Once an embolism is formed, it usually cannot be removed (but see later); the affected cell cannot pull water up, and is rendered useless.

End walls excluded, the tracheids of prevascular plants were able to operate under the same hydraulic conductivity as those of the first vascular plant, Cooksonia.[36]

The size of tracheids is limited as they comprise a single cell; this limits their length, which in turn limits their maximum useful diameter to 80 μm.[33] Conductivity grows with the fourth power of diameter, so increased diameter has huge rewards; vessel elements, consisting of a number of cells, joined at their ends, overcame this limit and allowed larger tubes to form, reaching diameters of up to 500 μm, and lengths of up to 10 m.[33]
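The fourth-power relationship mentioned above is the Hagen–Poiseuille law for laminar flow through a cylindrical tube; a minimal sketch (assuming ideal smooth conduits and ignoring end-wall and pit resistance) shows why wider conduits pay off so steeply:

```python
# Hagen-Poiseuille: volumetric flow through an ideal tube scales with
# the fourth power of its diameter (length, viscosity, and pressure
# gradient held equal).

def relative_conductivity(d_ref_um: float, d_um: float) -> float:
    """Flow capacity of one conduit of diameter d_um relative to one of d_ref_um."""
    return (d_um / d_ref_um) ** 4

# Doubling the diameter gives 16x the flow per conduit:
print(relative_conductivity(40.0, 80.0))  # 16.0

# An idealised 500 um vessel vs an 80 um tracheid, per conduit:
print(round(relative_conductivity(80.0, 500.0)))  # 1526
```

Per unit cross-sectional area of wood the gain is smaller, since fewer wide conduits fit into the same area, which is consistent with the roughly hundred-fold whole-tissue advantage of vessels over tracheids cited in this section.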

Vessels first evolved during the dry, low-CO2 periods of the Late Permian, in the horsetails, ferns and Selaginellales independently, and later appeared in the mid-Cretaceous in angiosperms and gnetophytes.[33] Vessels allow the same cross-sectional area of wood to transport around a hundred times more water than tracheids.[33] This allowed plants to fill more of their stems with structural fibres, and also opened a new niche to vines, which could transport water without being as thick as the tree they grew on.[33] Despite these advantages, tracheid-based wood is much lighter, and thus cheaper to make, as vessels need to be much more reinforced to avoid cavitation.[33]

The branching pattern of megaphyll veins may reflect their origin as webbed, dichotomising branches.


Leaves today are, in almost all instances, an adaptation to increase the amount of sunlight that can be captured for photosynthesis. Leaves certainly evolved more than once, and probably originated as spiny outgrowths to protect early plants from herbivory.

Leaves are the primary photosynthetic organs of a plant. Based on their structure, they are classified into two types: microphylls, which lack complex venation patterns, and megaphylls, which are large and have complex venation. It has been proposed that these structures arose independently.[42] Megaphylls, according to Walter Zimmerman's telome theory,[43] evolved from plants that showed a three-dimensional branching architecture, through three transformations: overtopping, which led to the lateral position typical of leaves; planation, which involved formation of a planar architecture; and webbing or fusion, which united the planar branches, thus leading to the formation of a proper leaf lamina. All three steps happened multiple times in the evolution of today's leaves.[44]

It is widely believed that the telome theory is well supported by fossil evidence. However, Wolfgang Hagemann questioned it for morphological and ecological reasons and proposed an alternative theory.[45][46] Whereas according to the telome theory the most primitive land plants have a three-dimensional branching system of radially symmetrical axes (telomes), according to Hagemann's alternative the opposite is proposed: the most primitive land plants that gave rise to vascular plants were flat, thalloid, leaf-like, without axes, somewhat like a liverwort or fern prothallus. Axes such as stems and roots evolved later as new organs. Rolf Sattler proposed an overarching process-oriented view that leaves some limited room for both the telome theory and Hagemann's alternative and in addition takes into consideration the whole continuum between dorsiventral (flat) and radial (cylindrical) structures that can be found in fossil and living land plants.[47][48] This view is supported by research in molecular genetics. Thus, James (2009)[49] concluded that "it is now widely accepted that... radiality [characteristic of axes such as stems] and dorsiventrality [characteristic of leaves] are but extremes of a continuous spectrum. In fact, it is simply the timing of the KNOX gene expression!"

From the point of view of the telome theory, it has been proposed that before the evolution of leaves, plants had the photosynthetic apparatus on the stems. Today's megaphyll leaves probably became commonplace some 360 million years ago, about 40 million years after the simple leafless plants had colonized the land in the Early Devonian. This spread has been linked to the fall in atmospheric carbon dioxide concentrations in the Late Paleozoic era, associated with a rise in the density of stomata on the leaf surface, which would have allowed for better transpiration rates and gas exchange. Large leaves with fewer stomata would have overheated in the sun, but an increased stomatal density allowed for a better-cooled leaf, making its spread feasible.[50][51]

The rhyniophytes of the Rhynie chert consisted of nothing more than slender, unornamented axes. The early to middle Devonian trimerophytes may be considered leafy. This group of vascular plants is recognisable by its masses of terminal sporangia, which adorn the ends of axes that may bifurcate or trifurcate.[6] Some organisms, such as Psilophyton, bore enations. These are small, spiny outgrowths of the stem, lacking their own vascular supply.

Around the same time, the zosterophyllophytes were becoming important. This group is recognisable by their kidney-shaped sporangia, which grew on short lateral branches close to the main axes. They sometimes branched in a distinctive H-shape.[6] The majority of this group bore pronounced spines on their axes. However, none of these had a vascular trace, and the first evidence of vascularised enations occurs in the Rhynie genus Asteroxylon. The spines of Asteroxylon had a primitive vascular supply – at the very least, leaf traces could be seen departing from the central protostele towards each individual "leaf". A fossil clubmoss known as Baragwanathia had already appeared in the fossil record about 20 million years earlier, in the Late Silurian.[52] In this organism, these leaf traces continue into the leaf to form their mid-vein.[53] One theory, the "enation theory", holds that the leaves developed by outgrowths of the protostele connecting with existing enations, but it is also possible that microphylls evolved by a branching axis forming "webbing".[6]

Asteroxylon[54] and Baragwanathia are widely regarded as primitive lycopods.[6] The lycopods are still extant today, familiar as the quillwort Isoetes and the club mosses. Lycopods bear distinctive microphylls – leaves with a single vascular trace. Microphylls could grow to some size – the Lepidodendrales boasted microphylls over a metre in length – but almost all bear just the one vascular bundle. An exception is the rare branching in some Selaginella species.

The more familiar leaves, megaphylls, are thought to have separate origins – indeed, they appeared four times independently, in the ferns, horsetails, progymnosperms, and seed plants.[55] They appear to have originated from dichotomising branches, which first overlapped (or "overtopped") one another, and eventually developed "webbing" and evolved into gradually more leaf-like structures.[53] Megaphylls, by this "telome theory", are thus composed of a group of webbed branches[53] – hence the "leaf gap" left where the leaf's vascular bundle leaves that of the main branch resembles two axes splitting.[53] In each of the four groups to evolve megaphylls, leaves first evolved during the Late Devonian to Early Carboniferous, diversifying rapidly until the designs settled down in the mid-Carboniferous.[55]

The cessation of further diversification can be attributed to developmental constraints,[55] but why did it take so long for leaves to evolve in the first place? Plants had been on the land for at least 50 million years before megaphylls became significant. However, small, rare mesophylls are known from the early Devonian genus Eophyllophyton – so development could not have been a barrier to their appearance.[56] The best explanation so far incorporates observations that atmospheric CO2 was declining rapidly during this time – falling by around 90% during the Devonian.[57] This corresponded with an increase in stomatal density by 100 times. Stomata allow water to evaporate from leaves, and this evaporation cools them. It appears that the low stomatal density in the early Devonian meant that evaporation was limited, and leaves would overheat if they grew to any size. The stomatal density could not increase, as the primitive steles and limited root systems would not be able to supply water quickly enough to match the rate of transpiration.[50]

Clearly, leaves are not always beneficial, as illustrated by the frequent occurrence of secondary loss of leaves, famously exemplified by cacti and the "whisk fern" Psilotum.

Secondary evolution can also disguise the true evolutionary origin of some leaves. Some genera of ferns display complex leaves which are attached to the pseudostele by an outgrowth of the vascular bundle, leaving no leaf gap.[53] Further, horsetail (Equisetum) leaves bear only a single vein, and appear to be microphyllous; however, both the fossil record and molecular evidence indicate that their forebears bore leaves with complex venation, and the current state is a result of secondary simplification.[58]

Deciduous trees deal with another disadvantage of having leaves. The popular belief that plants shed their leaves when the days get too short is misguided; evergreens prospered in the Arctic circle during the most recent greenhouse earth.[59] The generally accepted reason for shedding leaves during winter is to cope with the weather – the force of wind and weight of snow are much more comfortably weathered without leaves to increase surface area. Seasonal leaf loss has evolved independently several times and is exhibited in the ginkgoales, some pinophyta and certain angiosperms.[60] Leaf loss may also have arisen as a response to pressure from insects; it may have been less costly to lose leaves entirely during the winter or dry season than to continue investing resources in their repair.[61]

Various physical and physiological factors, such as light intensity, humidity, temperature, and wind speed, are thought to have influenced the evolution of leaf shape and size. Tall trees rarely have large leaves, because large leaves obstruct the wind and can eventually be torn by it. Similarly, trees that grow in temperate or taiga regions have pointed leaves, presumably to prevent nucleation of ice onto the leaf surface and reduce water loss due to transpiration. Herbivory, not only by large mammals but also by small insects, has been implicated as a driving force in leaf evolution; an example is plants of the genus Aciphylla, commonly found in New Zealand. The now-extinct moas fed upon these plants, and their leaves bear spines on the laminae, which probably functioned to discourage the moas from feeding on them. Other members of Aciphylla, which did not co-exist with the moas, do not have these spines.[62]

At the genetic level, developmental studies have shown that repression of the KNOX genes is required for initiation of the leaf primordium. This repression is brought about by ARP genes, which encode transcription factors. Genes of this type have been found in many plants studied to date, and the mechanism – repression of KNOX genes in leaf primordia – appears to be well conserved. Conversely, expression of KNOX genes in leaves produces complex leaves. The ARP function is speculated to have arisen quite early in vascular plant evolution, because members of the primitive group Lycophytes also have a functionally similar gene.[63] Other players with a conserved role in defining leaf primordia are the phytohormones auxin, gibberellin and cytokinin.

The diversity of leaves

One interesting feature of a plant is its phyllotaxy, the arrangement of leaves on the plant body. Leaves are arranged so that the plant can harvest light maximally under the given constraints, and one might therefore expect the trait to be genetically robust. However, it may not be: in maize, a mutation in a single gene, ABPHYL (ABnormal PHYLlotaxy), was enough to change the phyllotaxy of the leaves, implying that mutational adjustment of a single locus on the genome is sometimes enough to generate diversity. The ABPHYL gene was later shown to encode a cytokinin response regulator protein.[64]

Once the leaf primordial cells are established from the SAM cells, the new axes for leaf growth are defined, one important (and better studied) among them being the abaxial-adaxial (lower-upper surface) axis. The genes involved in defining this and the other axes seem to be more or less conserved among higher plants. Proteins of the HD-ZIPIII family have been implicated in defining adaxial identity; they divert some cells in the leaf primordium from the default abaxial state and make them adaxial. In early plants with leaves, the leaves are believed to have had just one type of surface – the abaxial one, which is the underside of today's leaves. The adaxial identity is thought to have been defined some 200 million years after the abaxial identity was established.[65] One can thus imagine the early leaves as an intermediate stage in the evolution of today's leaves: just arisen from spiny, stem-like outgrowths of their leafless ancestors, covered with stomata all over, and not yet optimised for light harvesting.

How the wide variety of plant leaf morphology is generated is a subject of intense research. Some common themes have emerged. One of the most significant is the involvement of KNOX genes in generating compound leaves, as in the tomato (see above). But this again is not universal: the pea, for example, uses a different mechanism to achieve the same end.[66][67] Mutations in genes affecting leaf curvature can also change leaf form, by changing the leaf from flat to a crinkly shape,[68] like that of cabbage leaves. There also exist morphogen gradients in a developing leaf which define its axes, and changes in these gradients may also affect leaf form. Another very important class of regulators of leaf development are the microRNAs, whose role in this process has only begun to be documented. The coming years should see a rapid development in comparative studies on leaf development, as many EST sequences involved in the process come online.

The early Devonian landscape was devoid of vegetation taller than waist height. Without the evolution of a robust vascular system, taller heights could not be attained. There was, however, a constant evolutionary pressure to attain greater height. The most obvious advantage is the harvesting of more sunlight for photosynthesis – by overshadowing competitors – but a further advantage is present in spore distribution, as spores (and, later, seeds) can be blown greater distances if they start higher. This may be demonstrated by Prototaxites, thought to be a Palaeozoic fungus reaching eight metres in height.[69]

To attain arborescence, early plants had to develop woody tissue that provided support and water transport. The stele of plants undergoing "secondary growth" is surrounded by the vascular cambium, a ring of cells which produces more xylem (on the inside) and phloem (on the outside). Since xylem cells comprise dead, lignified tissue, subsequent rings of xylem are added to those already present, forming wood.

The first plants to develop this secondary growth, and a woody habit, were apparently the ferns, and as early as the Middle Devonian one species, Wattieza, had already reached heights of 8 m and a tree-like habit.[70]

Other clades did not take long to develop a tree-like stature; the Late Devonian Archaeopteris, a precursor to gymnosperms which evolved from the trimerophytes,[71] reached 30 m in height. These progymnosperms were the first plants to develop true wood, grown from a bifacial cambium, of which the first appearance is in the Middle Devonian Rellimia.[72] True wood is only thought to have evolved once, giving rise to the concept of a "lignophyte" clade.

These Archaeopteris forests were soon supplemented by lycopods, in the form of lepidodendrales, which topped 50 m in height and 2 m across at the base. These lycopods rose to dominate Late Devonian and Carboniferous coal deposits.[73] Lepidodendrales differ from modern trees in exhibiting determinate growth: after building up a reserve of nutrients at a lower height, the plants would "bolt" to a genetically determined height, branch at that level, spread their spores and die.[74] They consisted of "cheap" wood to allow their rapid growth, with at least half of their stems comprising a pith-filled cavity.[6] Their wood was also generated by a unifacial vascular cambium – it did not produce new phloem, meaning that the trunks could not grow wider over time.[verification needed]

The horsetail Calamites was next on the scene, appearing in the Carboniferous. Unlike the modern horsetail Equisetum, Calamites had a unifacial vascular cambium, allowing them to develop wood and grow to heights in excess of 10 m. They also branched multiple times.

While the form of early trees was similar to that of today's, the groups containing all modern trees had yet to evolve.

The dominant groups today are the gymnosperms, which include the coniferous trees, and the angiosperms, which contain all fruiting and flowering trees. It was long thought that the angiosperms arose from within the gymnosperms, but recent molecular evidence suggests that their living representatives form two distinct groups.[75][76][77] The molecular data has yet to be fully reconciled with morphological data,[78][79][80] but it is becoming accepted that the morphological support for paraphyly is not especially strong.[81] This would lead to the conclusion that both groups arose from within the pteridosperms, probably as early as the Permian.[81]

The angiosperms and their ancestors played a very small role until they diversified during the Cretaceous. They started out as small, damp-loving organisms in the understory, and have been diversifying ever since the mid[verification needed]-Cretaceous, to become the dominant member of non-boreal forests today.

The roots (bottom image) of Lepidodendrales (Stigmaria) are thought to be functionally equivalent to the stems (top), as the similar appearance of "leaf scars" and "root scars" on these specimens from different species demonstrates.

Roots are important to plants for two main reasons: firstly, they provide anchorage to the substrate; more importantly, they provide a source of water and nutrients from the soil. Roots allowed plants to grow taller and faster.

The onset of roots also had effects on a global scale. By disturbing the soil, and promoting its acidification (by taking up nutrients such as nitrate and phosphate[verification needed]), they enabled it to weather more deeply, promoting the draw-down of CO2[82] with huge implications for climate.[83] These effects may have been so profound they led to a mass extinction.[84]

But how and when did roots evolve in the first place? While there are traces of root-like impressions in fossil soils in the Late Silurian,[85] body fossils show the earliest plants to be devoid of roots. Many had tendrils that sprawled along or beneath the ground, with upright axes or thalli dotted here and there, and some even had non-photosynthetic subterranean branches which lacked stomata. The distinction between root and specialised branch is developmental; true roots follow a different developmental trajectory to stems. Further, roots differ in their branching pattern, and in possession of a root cap.[10] So while Siluro-Devonian plants such as Rhynia and Horneophyton possessed the physiological equivalent of roots,[citation needed] roots – defined as organs differentiated from stems – did not arrive until later.[10] Unfortunately, roots are rarely preserved in the fossil record, and our understanding of their evolutionary origin is sparse.[10]

Rhizoids – small structures performing the same role as roots, usually a cell in diameter – probably evolved very early, perhaps even before plants colonised the land; they are recognised in the Characeae, an algal sister group to land plants.[10] That said, rhizoids probably evolved more than once; the rhizines of lichens, for example, perform a similar role. Even some animals (Lamellibrachia) have root-like structures.[10] Rhizoids were present in most of the earliest vascular plants, and on this basis seem to have presaged true plant roots.[86]

More advanced structures are common in the Rhynie chert, and many other fossils of comparable early Devonian age bear structures that look like, and acted like, roots.[10] The rhyniophytes bore fine rhizoids, and the trimerophytes and herbaceous lycopods of the chert bore root-like structures penetrating a few centimetres into the soil.[87] However, none of these fossils display all the features borne by modern roots.[10] Roots and root-like structures became increasingly common and deeper-penetrating during the Devonian, with lycopod trees forming roots around 20 cm long during the Eifelian and Givetian. These were joined by progymnosperms, which rooted up to about a metre deep, during the ensuing Frasnian stage.[87] True gymnosperms and zygopterid ferns also formed shallow rooting systems during the Famennian.[87]

The rhizophores of the lycopods provide a slightly different approach to rooting. They were equivalent to stems, with organs equivalent to leaves performing the role of rootlets.[10] A similar construction is observed in the extant lycopod Isoetes, and this appears to be evidence that roots evolved independently at least twice, in the lycophytes and other plants,[10] a proposition supported by studies showing that roots are initiated and their growth promoted by different mechanisms in lycophytes and euphyllophytes.[88]

A vascular system is indispensable to rooted plants, as non-photosynthesising roots need a supply of sugars, and a vascular system is required to transport water and nutrients from the roots to the rest of the plant.[12] These plants are little more advanced than their Silurian forebears, without a dedicated root system; however, the flat-lying axes can be clearly seen to have growths similar to the rhizoids of bryophytes today.[89]

By the Middle to Late Devonian, most groups of plants had independently developed a rooting system of some nature.[89] As roots became larger, they could support larger trees, and the soil was weathered to a greater depth.[84] This deeper weathering had effects not only on the aforementioned drawdown of CO2, but also opened up new habitats for colonisation by fungi and animals.[87]

Roots have now developed to their physical limits. They penetrate many[quantify] metres of soil to tap the water table.[verification needed] The narrowest roots are a mere 40 μm in diameter, and could not physically transport water if they were any narrower.[10] The earliest fossil roots recovered, by contrast, narrowed from 3 mm to under 700 μm in diameter; of course, taphonomy is the ultimate control of what thickness can be seen.[10]

The efficiency of many plants' roots is increased via a symbiotic relationship with a fungal partner. The most common are arbuscular mycorrhizae (AM), literally "tree-like fungal roots". These comprise fungi that invade some root cells, filling the cell membrane with their hyphae. They feed on the plant's sugars, but return nutrients generated or extracted from the soil (especially phosphate), to which the plant would otherwise have no access.

This symbiosis appears to have evolved early in plant history. AM are found in all plant groups, and in 80% of extant vascular plants,[90] suggesting an early ancestry; a "plant"-fungus symbiosis may even have been the step that enabled plants to colonise the land.[91] Such fungi increase the productivity even of simple plants such as liverworts.[92] Indeed, AM are abundant in the Rhynie chert;[93] the association arose even before there were true roots to colonise, and it has been suggested that roots evolved to provide a more comfortable habitat for mycorrhizal fungi.[94]

Early land plants reproduced in the fashion of ferns: spores germinated into small gametophytes, which produced eggs and/or sperm. These sperm would swim across moist soils to find the female organs (archegonia) on the same or another gametophyte, where they would fuse with an egg to produce an embryo, which would germinate into a sporophyte.[87]

Heterosporic plants, as their name suggests, bear spores of two sizes – microspores and megaspores. These would germinate to form microgametophytes and megagametophytes, respectively. This system paved the way for ovules and seeds: taken to the extreme, the megasporangia could bear only a single megaspore tetrad, and to complete the transition to true ovules, three of the megaspores in the original tetrad could be aborted, leaving one megaspore per megasporangium.

The transition to ovules continued with this megaspore being "boxed in" to its sporangium while it germinates. Then, the megagametophyte is contained within a waterproof integument, which forms the bulk of the seed. The microgametophyte – a pollen grain which has germinated from a microspore – is employed for dispersal, only releasing its desiccation-prone sperm when it reaches a receptive megagametophyte.[6]

Lycopods and sphenopsids got a fair way down the path to the seed habit without ever crossing the threshold. Fossil lycopod megaspores reaching 1 cm in diameter, and surrounded by vegetative tissue, are known (Lepidocarpon, Achlamydocarpon); these even germinate into a megagametophyte in situ. However, they fall short of being ovules, since the nucellus, an inner spore-covering layer, does not completely enclose the spore. A very small slit (micropyle) remains, meaning that the megasporangium is still exposed to the atmosphere. This has two consequences: firstly, the structure is not fully resistant to desiccation, and secondly, sperm do not have to "burrow" to access the archegonia of the megaspore.[6]

A Middle Devonian precursor to seed plants from Belgium has been identified, predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed. Runcaria has all of the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the ovule.[95]

The first spermatophytes (literally: "seed plants") – that is, the first plants to bear true seeds – are called pteridosperms: literally, "seed ferns", so called because their foliage consisted of fern-like fronds, although they were not closely related to ferns. The oldest fossil evidence of seed plants is of Late Devonian age, and they appear to have evolved out of an earlier group known as the progymnosperms. These early seed plants ranged from trees to small, rambling shrubs; like most early progymnosperms, they were woody plants with fern-like foliage. They all bore ovules, but no cones, fruit or similar. While it is difficult to track the early evolution of seeds, the lineage of the seed ferns may be traced from the simple trimerophytes through homosporous Aneurophytes.[6]

This seed model is shared by basically all gymnosperms (literally: "naked seeds"), most of which encase their seeds in a woody cone or fleshy aril (the yew, for example), but none of which fully enclose their seeds. The angiosperms ("vessel seeds") are the only group to fully enclose the seed, in a carpel.

Fully enclosed seeds opened up a new pathway for plants to follow: that of seed dormancy. The embryo, completely isolated from the external atmosphere and hence protected from desiccation, could survive some years of drought before germinating. Gymnosperm seeds from the Late Carboniferous have been found to contain embryos, suggesting a lengthy gap between fertilisation and germination.[96] This period is associated with the entry into a greenhouse earth period, with an associated increase in aridity. This suggests that dormancy arose as a response to drier climatic conditions, where it became advantageous to wait for a moist period before germinating.[96] This evolutionary breakthrough appears to have opened a floodgate: previously inhospitable areas, such as dry mountain slopes, could now be tolerated, and were soon covered by trees.[96]

Seeds offered further advantages to their bearers: they increased the success rate of fertilised gametophytes, and because a nutrient store could be "packaged" in with the embryo, seeds could germinate rapidly in inhospitable environments, reaching a size where they could fend for themselves more quickly.[87] For example, without an endosperm, seedlings growing in arid environments would not have the reserves to grow roots deep enough to reach the water table before they expired from dehydration.[87] Likewise, seeds germinating in a gloomy understory require an additional reserve of energy to quickly grow high enough to capture sufficient light for self-sustenance.[87] A combination of these advantages gave seed plants the ecological edge over the previously dominant genus Archaeopteris, thus increasing the biodiversity of early forests.[87]

Despite these advantages, it is common for fertilized ovules to fail to mature as seeds.[97] Also during seed dormancy (often associated with unpredictable and stressful conditions) DNA damage accumulates.[98][99][100] Thus DNA damage appears to be a basic problem for survival of seed plants, just as DNA damage is a major problem for life in general.[101]

Flowers are modified leaves possessed only by the angiosperms, which are relatively late to appear in the fossil record. The group originated and diversified during the Early Cretaceous and became ecologically significant thereafter.[102] Flower-like structures first appear in the fossil records some ~130 mya, in the Cretaceous.[103]

Colorful and/or pungent structures surround the cones of plants such as cycads and Gnetales, making a strict definition of the term "flower" elusive.[80]

The main function of a flower is reproduction, which, before the evolution of the flower and angiosperms, was the job of microsporophylls and megasporophylls. A flower can be considered a powerful evolutionary innovation, because its presence allowed the plant world to access new means and mechanisms for reproduction.

The evolution of syncarps.
a: sporangia borne at tips of leaf
b: leaf curls up to protect sporangia
c: leaf curls to form enclosed roll
d: grouping of three rolls into a syncarp

The flowering plants have long been assumed to have evolved from within the gymnosperms; according to the traditional morphological view, they are closely allied to the Gnetales. However, as noted above, recent molecular evidence is at odds with this hypothesis,[76][77] and further suggests that Gnetales are more closely related to some gymnosperm groups than angiosperms,[75] and that extant gymnosperms form a distinct clade to the angiosperms,[75][76][77] the two clades diverging some 300 million years ago.[104]

The relationship of stem groups to the angiosperms is important in determining the evolution of flowers. Stem groups provide an insight into the state of earlier "forks" on the path to the current state. Convergence increases the risk of misidentifying stem groups. Since the protection of the megagametophyte is evolutionarily desirable, probably many separate groups evolved protective encasements independently. In flowers, this protection takes the form of a carpel, evolved from a leaf and recruited into a protective role, shielding the ovules. These ovules are further protected by a double-walled integument.

Penetration of these protective layers needs something more than a free-floating microgametophyte. Angiosperms have pollen grains comprising just three cells. One cell is responsible for drilling down through the integuments and creating a conduit for the two sperm cells to flow down. The megagametophyte has just seven cells; of these, one fuses with a sperm cell, forming the nucleus of the egg itself, and another joins with the other sperm and dedicates itself to forming a nutrient-rich endosperm. The other cells take auxiliary roles.[clarification needed] This process of "double fertilisation" is unique to the angiosperms and shared by all of them.

The inflorescences of the Bennettitales are strikingly similar to flowers

In the fossil record, there are three intriguing groups which bore flower-like structures. The first is the Permian pteridosperm Glossopteris, which already bore recurved leaves resembling carpels. The Mesozoic Caytonia is more flower-like still, with enclosed ovules – but only a single integument. Further, details of their pollen and stamens set them apart from true flowering plants.

The Bennettitales bore remarkably flower-like organs, protected by whorls of bracts which may have played a similar role to the petals and sepals of true flowers; however, these flower-like structures evolved independently, as the Bennettitales are more closely related to cycads and ginkgos than to the angiosperms.[105]

However, no true flowers are found in any groups save those extant today. Most morphological and molecular analyses place Amborella, the nymphaeales and Austrobaileyaceae in a basal clade called "ANA". This clade appears to have diverged in the early Cretaceous, around 130 million years ago – around the same time as the earliest fossil angiosperm,[106][107] and just after the first angiosperm-like pollen, 136 million years ago.[81] The magnoliids diverged soon after, and a rapid radiation had produced eudicots and monocots by 125 million years ago.[81] By the end of the Cretaceous 66 million years ago, over 50% of today's angiosperm orders had evolved, and the clade accounted for 70% of global species.[108] It was around this time that flowering trees became dominant over conifers.[6]:498

The features of the basal "ANA" groups suggest that angiosperms originated in dark, damp, frequently disturbed areas.[109] It appears that the angiosperms remained constrained to such habitats throughout the Cretaceous – occupying the niche of small herbs early in the successional series.[108] This may have restricted their initial significance, but gave them the flexibility that accounted for the rapidity of their later diversification in other habitats.[109]

The family Amborellaceae is regarded as being the sister clade to all other living flowering plants. The complete genome of Amborella trichopoda is still being sequenced as of March 2012[update]. By comparing its genome with those of all other living flowering plants, it will be possible to work out the most likely characteristics of the ancestor of A. trichopoda and all other flowering plants, i.e. the ancestral flowering plant.[112]

It seems that at the level of the organ, the leaf may be the ancestor of the flower, or at least of some floral organs. When some crucial genes involved in flower development are mutated, clusters of leaf-like structures arise in place of flowers. Thus, sometime in history, the developmental program leading to the formation of a leaf must have been altered to generate a flower. There probably also exists an overall robust framework within which the floral diversity has been generated. An example is the gene LEAFY (LFY), which is involved in flower development in Arabidopsis thaliana. Homologs of this gene are found in angiosperms as diverse as tomato, snapdragon, pea and maize, and even in gymnosperms. Expression of Arabidopsis thaliana LFY in distant plants such as poplar and citrus also results in flower production in these plants. The LFY gene regulates the expression of genes belonging to the MADS-box family, which in turn act as direct controllers of flower development.[citation needed]

The members of the MADS-box family of transcription factors play a very important and evolutionarily conserved role in flower development. According to the ABC model of flower development, three zones – A, B and C – are generated within the developing flower primordium by the action of transcription factors that are members of the MADS-box family. Among these, the functions of the B- and C-domain genes have been more conserved evolutionarily than those of the A-domain gene. Many of these genes arose through gene duplications of ancestral members of the family, and quite a few of them show redundant functions.
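The combinatorial logic of the ABC model can be sketched in a few lines of code. This is an illustrative toy, not a model of the real regulatory network: each whorl is represented simply by the set of gene classes active in it, following the classic Arabidopsis formulation (A alone specifies sepals, A+B petals, B+C stamens, C alone carpels).

```python
# Toy sketch of the ABC model's combinatorial readout.
# Organ identity is determined by which gene classes (A, B, C)
# are active in a given whorl of the flower primordium.

ABC_ORGAN = {
    frozenset({"A"}): "sepal",
    frozenset({"A", "B"}): "petal",
    frozenset({"B", "C"}): "stamen",
    frozenset({"C"}): "carpel",
}

def organ_identity(active_genes):
    """Return the floral organ specified by a set of active gene classes."""
    return ABC_ORGAN.get(frozenset(active_genes), "undefined")

# The four whorls of a wild-type flower, from the outside in:
wild_type = [{"A"}, {"A", "B"}, {"B", "C"}, {"C"}]
print([organ_identity(w) for w in wild_type])
# -> ['sepal', 'petal', 'stamen', 'carpel']

# Loss of C function (as in an agamous mutant): the A domain expands
# into the inner whorls, giving only sepals and petals.
c_loss = [{"A"}, {"A", "B"}, {"A", "B"}, {"A"}]
print([organ_identity(w) for w in c_loss])
# -> ['sepal', 'petal', 'petal', 'sepal']
```

The dictionary captures only the combinatorial readout; in a real flower the A, B and C domains are established and bounded by the MADS-box transcription factors themselves.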

The evolution of the MADS-box family has been extensively studied. These genes are present even in pteridophytes, but their spread and diversity are many times greater in angiosperms.[113] There appears to be a discernible pattern in how this family has evolved. Consider the C-region gene AGAMOUS (AG). In today's flowers it is expressed in the stamens and the carpel, the reproductive organs. Its ancestor in gymnosperms has the same expression pattern: there, it is expressed in the strobili, organs that produce pollen or ovules.[114] Similarly, the ancestors of the B-genes (AP3 and PI) are expressed only in the male organs in gymnosperms, and their descendants in modern angiosperms are likewise expressed only in the stamens, the male reproductive organs. Thus, the plants used the same, pre-existing components in a novel manner to generate the first flower – a recurring pattern in evolution.

There is enormous variation in the developmental programs of plants. For example, grasses possess unique floral structures. The carpels and stamens are surrounded by scale-like lodicules and two bracts: the lemma and the palea. Genetic evidence and morphology suggest that lodicules are homologous to eudicot petals.[115] The palea and lemma may be homologous to sepals in other groups, or may be unique grass structures. The genetic evidence is not clear.

Variation in floral structure is typically due to slight changes in the MADS-box genes and their expression pattern.

Arabidopsis thaliana has a gene called AGAMOUS that plays an important role in defining how many petals, sepals and other organs are generated. Mutations in this gene cause the floral meristem to acquire an indeterminate fate, so that floral organs keep on being produced. Roses, carnations and morning glories, for example, have very dense floral organs, and such flowers have been selected by horticulturists for an increased number of petals. Researchers have found that the morphology of these flowers is due to strong mutations in the AGAMOUS homolog in these plants, which lead to them making a large number of petals and sepals.[116] Several studies on diverse plants such as petunia, tomato, Impatiens and maize have suggested that the enormous diversity of flowers is a result of small changes in the genes controlling their development.[117]

Some of these changes also cause changes in expression patterns of the developmental genes, resulting in different phenotypes. The Floral Genome Project looked at EST data from various tissues of many flowering plants, and the researchers confirmed that the ABC model of flower development is not conserved across all angiosperms. Sometimes expression domains change, as in the case of many monocots and some basal angiosperms such as Amborella. Different models of flower development, such as the fading-boundaries model or the overlapping-boundaries model, which propose non-rigid domains of expression, may explain these architectures.[118] There is a possibility that, from the basal to the modern angiosperms, the domains of floral architecture have become increasingly fixed through evolution.

Another floral feature that has been a subject of natural selection is flowering time. Some plants flower early in their life cycle, others require a period of vernalization before flowering. This outcome is based on factors like temperature, light intensity, presence of pollinators and other environmental signals: genes like CONSTANS (CO), Flowering Locus C (FLC) and FRIGIDA regulate integration of environmental signals into the pathway for flower development. Variations in these loci have been associated with flowering time variations between plants. For example, Arabidopsis thaliana ecotypes that grow in the cold, temperate regions require prolonged vernalization before they flower, while the tropical varieties, and the most common lab strains, don't. This variation is due to mutations in the FLC and FRIGIDA genes, rendering them non-functional.[119]

Quite a few players in this process are conserved across all the plants studied. Sometimes, though, despite genetic conservation, the mechanism of action turns out to be different. For example, rice is a short-day plant, while Arabidopsis thaliana is a long-day plant. Both plants possess the proteins CO and FLOWERING LOCUS T (FT), but in Arabidopsis thaliana CO enhances FT production, while in rice the CO homolog represses FT production, resulting in completely opposite downstream effects.[120]

The Anthophyte theory was based on the observation that Gnetales, a gymnosperm group, bears flower-like ovules. It has partially developed vessels, as found in the angiosperms, and its megasporangium is covered by three envelopes, like the ovary structure of angiosperm flowers. However, many other lines of evidence show that Gnetales is not related to angiosperms.[105]

The Mostly Male theory has a more genetic basis. Proponents of this theory point out that the gymnosperms have two very similar copies of the gene LFY, while angiosperms have just one. Molecular clock analysis has shown that the other LFY paralog was lost in angiosperms around the same time as flower fossils become abundant, suggesting that this event might have led to floral evolution.[121] According to this theory, loss of one of the LFY paralogs led to flowers that were more male, with the ovules being expressed ectopically. These ovules initially performed the function of attracting pollinators, but sometime later may have been integrated into the core flower.

Photosynthesis is not quite as simple as adding water to CO2 to produce sugars and oxygen. A complex chemical pathway is involved, facilitated along the way by a range of enzymes and co-enzymes. The enzyme RuBisCO is responsible for "fixing" CO2 – that is, it attaches it to a carbon-based molecule to form a sugar, which can be used by the plant, releasing an oxygen molecule along the way. However, the enzyme is notoriously inefficient and, as ambient temperature rises, will increasingly fix oxygen instead of CO2 in a process called photorespiration. This is energetically costly, as the plant has to use energy to turn the products of photorespiration back into a form that can react with CO2.

C4 plants evolved carbon concentrating mechanisms. These work by increasing the concentration of CO2 around RuBisCO, thereby facilitating photosynthesis and decreasing photorespiration. Concentrating CO2 around RuBisCO requires more energy than allowing gases to diffuse, but under certain conditions – warm temperatures (>25 °C), low CO2 concentrations, or high oxygen concentrations – this pays off through the decreased loss of sugars to photorespiration.
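The trade-off described above can be caricatured in a few lines of code. The parameters below (a fixed CO2-pumping cost for C4, a photorespiration loss for C3 that grows with temperature) are entirely hypothetical, chosen only so that the crossover falls near the >25 °C threshold mentioned in the text:

```python
# Toy cost-benefit sketch of the C3 vs C4 trade-off. All numbers are
# illustrative, not measured values.

def photorespiration_loss(temp_c: float) -> float:
    """Hypothetical fraction of fixed carbon a C3 plant loses to
    photorespiration: zero at 15 C, rising 2 percentage points per degree."""
    return max(0.0, 0.02 * (temp_c - 15.0))

def net_gain(pathway: str, temp_c: float, pump_cost: float = 0.20) -> float:
    """Relative net carbon gain. C4 pays a temperature-independent
    pumping cost; C3 pays a temperature-dependent photorespiration cost."""
    if pathway == "C4":
        return 1.0 - pump_cost
    return 1.0 - photorespiration_loss(temp_c)

for t in (15, 25, 35):
    better = "C4" if net_gain("C4", t) > net_gain("C3", t) else "C3"
    print(t, better)
```

Under these made-up parameters, C3 wins at cool temperatures and C4 only overtakes it above the crossover, mirroring the qualitative argument in the text.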

One type of C4 metabolism employs a so-called Kranz anatomy. This transports CO2 through an outer mesophyll layer, via a range of organic molecules, to the central bundle sheath cells, where the CO2 is released. In this way, CO2 is concentrated near the site of RuBisCO operation. Because RuBisCO is operating in an environment with much more CO2 than it otherwise would be, it performs more efficiently.

A second mechanism, CAM photosynthesis, temporally separates photosynthesis from the action of RuBisCO. RuBisCO only operates during the day, when stomata are sealed and CO2 is provided by the breakdown of the chemical malate. More CO2 is then harvested from the atmosphere when stomata open, during the cool, moist nights, reducing water loss.

These two pathways, with the same effect on RuBisCO, evolved a number of times independently – indeed, C4 alone arose 62 times in 18 different plant families. A number of 'pre-adaptations' seem to have paved the way for C4, leading to its clustering in certain clades: it has most frequently been innovated in plants that already had features such as extensive vascular bundle sheath tissue.[123] Many potential evolutionary pathways resulting in the C4 phenotype are possible and have been characterised using Bayesian inference,[122] confirming that non-photosynthetic adaptations often provide evolutionary stepping stones for the further evolution of C4.

The C4 construction is most famously used by a subset of grasses, while CAM is employed by many succulents and cacti. The trait appears to have emerged during the Oligocene, around 25 to 32 million years ago;[124] however, C4 plants did not become ecologically significant until the Miocene, 6 to 7 million years ago.[125] Remarkably, some charcoalified fossils preserve tissue organised into the Kranz anatomy, with intact bundle sheath cells,[126] allowing the presence of C4 metabolism to be identified. Isotopic markers are used to deduce their distribution and significance. C3 plants preferentially use the lighter of the two stable isotopes of carbon in the atmosphere, 12C, because RuBisCO discriminates against the heavier 13C during fixation. In C4 plants the initial fixation step, by PEP carboxylase, discriminates far less, and RuBisCO then operates in a CO2-saturated compartment where its discrimination is barely expressed, so C4 tissue is isotopically heavier. Plant material can be analysed to deduce the ratio of the heavier 13C to 12C; this ratio is denoted δ13C. C3 plants are on average substantially lighter than the atmospheric ratio – typically by around 14‰ (parts per thousand) or more – while C4 plants are depleted by only a few per mil. The δ13C of CAM plants depends on the percentage of carbon fixed at night relative to that fixed in the day, being closer to C3 plants if they fix most carbon in the day and closer to C4 plants if they fix all their carbon at night.[127]
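The δ13C notation can be sketched numerically. A minimal example, assuming the standard per-mil definition against the VPDB reference and representative textbook values (roughly −27‰ for C3 tissue and −13‰ for C4 tissue; none of these figures come from this article):

```python
# delta-13C arithmetic: R is the 13C/12C ratio of a sample, and
# delta13C = (R_sample / R_standard - 1) * 1000, in per mil (permil).
R_VPDB = 0.0112372  # 13C/12C of the VPDB reference standard

def delta13C(r_sample: float) -> float:
    """delta-13C in per mil relative to VPDB."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

def ratio_from_delta(d13c: float) -> float:
    """Inverse: recover the 13C/12C ratio from a delta value."""
    return R_VPDB * (1.0 + d13c / 1000.0)

def classify(d13c: float, threshold: float = -20.0) -> str:
    """Crude C3/C4 call; -20 permil splits typical C3 (~ -27) from C4 (~ -13).
    The threshold is an illustrative choice, not a published cut-off."""
    return "C3" if d13c < threshold else "C4"

print(classify(-27.0))  # C3
print(classify(-13.0))  # C4
```

The same classification logic is what isotope palaeontologists apply, far more carefully, to fossil plant material and tooth enamel.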

Procuring original fossil material in sufficient quantity to analyse the grass itself is difficult, but fortunately there is a good proxy: horses. Horses were globally widespread in the period of interest, and grazed almost exclusively on grasses. An old phrase in isotope palaeontology, "you are what you eat (plus a little bit)", refers to the fact that organisms reflect the isotopic composition of whatever they eat, plus a small adjustment factor. There is a good record of horse teeth throughout the globe, and their δ13C has been measured. The record shows a sharp shift towards heavier, C4-like values around 6 to 7 million years ago, during the Messinian, and this is interpreted as the rise of C4 plants on a global scale.[125]

While C4 enhances the efficiency of RuBisCO, concentrating the carbon is highly energy intensive. This means that C4 plants only have an advantage over C3 organisms in certain conditions: namely, high temperatures and low rainfall. C4 plants also need high levels of sunlight to thrive.[128] Models suggest that, without wildfires removing shade-casting trees and shrubs, there would be no space for C4 plants.[129] But wildfires have occurred for 400 million years – why did C4 take so long to arise, and then appear independently so many times? The Carboniferous (~300 million years ago) had notoriously high oxygen levels – almost enough to allow spontaneous combustion[130] – and very low CO2, yet no C4 isotopic signature is found there, and there appears to be no sudden trigger for the Miocene rise.

During the Miocene, the atmosphere and climate were relatively stable. If anything, CO2 increased gradually from 14 to 9 million years ago before settling down to concentrations similar to those of the Holocene.[131] This suggests that it did not play a key role in invoking C4 evolution.[124] Grasses themselves (the group which would give rise to the most occurrences of C4) had probably been around for 60 million years or more, so had had plenty of time to evolve C4,[132][133] which, in any case, is present in a diverse range of groups and thus evolved independently. There is a strong signal of climate change in South Asia;[124] increasing aridity – hence increasing fire frequency and intensity – may have led to an increase in the importance of grasslands.[134] However, this is difficult to reconcile with the North American record.[124] It is possible that the signal is entirely biological, forced by the fire- (and perhaps elephant-)[135] driven acceleration of grass evolution, which, both by increasing weathering and incorporating more carbon into sediments, reduced atmospheric CO2 levels.[135] Finally, there is evidence that the onset of C4 from 9 to 7 million years ago is a biased signal which only holds true for North America, from where most samples originate; emerging evidence suggests that grasslands evolved to a dominant state at least 15 Ma earlier in South America.

Transcription factors and transcriptional regulatory networks play key roles in plant development and stress responses, as well as their evolution. During the colonisation of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants.[136]

Structure of azadirachtin, a terpenoid produced by the neem plant, which helps ward off microbes and insects. Many secondary metabolites have complex structures.

Secondary metabolites are essentially low molecular weight compounds, sometimes with complex structures. They function in processes as diverse as immunity, anti-herbivory, pollinator attraction, communication between plants, maintenance of symbiotic associations with soil flora, and enhancement of the rate of fertilization, and hence are significant from the evo-devo perspective. The structural and functional diversity of these secondary metabolites across the plant kingdom is vast; it is estimated that hundreds of thousands of enzymes might be involved in their production across the plant kingdom as a whole, with about 15–25% of the genome coding for these enzymes, and every species having its own unique arsenal of secondary metabolites.[137] Many of these metabolites are of enormous medical significance to humans.

Why plants produce so many secondary metabolites, with a significant portion of the metabolome devoted to this activity, is unclear. It is hypothesized that most of these chemicals help generate immunity and that, in consequence, the diversity of these metabolites is the result of a constant arms race between plants and their parasites. Some evidence supports this case. A central question involves the reproductive cost of maintaining such a large inventory of genes devoted to producing secondary metabolites. Various models have been suggested that probe this aspect of the question, but a consensus on the extent of the cost has yet to be established,[138] as it is still difficult to predict whether a plant with more secondary metabolites increases its survival or reproductive success compared with other plants in its vicinity.

Secondary metabolite production seems to have arisen quite early in evolution. In plants, it appears to have spread by mechanisms including gene duplication and the evolution of novel genes. Furthermore, research has shown that diversity in some of these compounds may be positively selected for. Although the role of novel gene evolution in the evolution of secondary metabolism is clear, there are several examples where new metabolites have been formed by small changes in existing reactions. For example, cyanogenic glycosides have been proposed to have evolved multiple times in different plant lineages. There are several such instances of convergent evolution. For example, the enzymes for the synthesis of limonene – a terpene – in angiosperms and gymnosperms are more similar to other terpene synthesis enzymes in their own lineages than to each other, suggesting independent evolution of the limonene biosynthetic pathway in the two lineages.[139]

While environmental factors are significantly responsible for evolutionary change, they act merely as agents for natural selection. Change is inherently brought about via phenomena at the genetic level: mutations, chromosomal rearrangements, and epigenetic changes. While the general types of mutations hold true across the living world, in plants, some other mechanisms have been implicated as highly significant.

Genome doubling is a relatively common occurrence in plant evolution and results in polyploidy, which is consequently a common feature in plants. It is believed that at least half (and probably all) plants have undergone genome doubling at some point in their history. Genome doubling entails gene duplication, generating functional redundancy in most genes. The duplicated genes may attain new functions, either through changes in expression pattern or changes in activity. Polyploidy and gene duplication are believed to be among the most powerful forces in the evolution of plant form, though it is not known why genome doubling is such a frequent process in plants. One probable reason is the production of large amounts of secondary metabolites in plant cells: some of these might interfere with the normal process of chromosomal segregation, causing genome duplication.

Plants have also been shown to possess significant microRNA families, which are conserved across many plant lineages. Compared with animals, plants have fewer miRNA families, but the size of each family is much larger. The miRNA genes are also much more spread out in the genome than in animals, where they are more clustered. It has been proposed that these miRNA families have expanded by duplication of chromosomal regions.[140] Many miRNA genes involved in the regulation of plant development have been found to be quite conserved between the plants studied.

Domestication of plants such as maize, rice, barley, and wheat has also been a significant driving force in their evolution. Research on the origin of maize has found that it is a domesticated derivative of a wild Mexican plant called teosinte. Teosinte, like maize, belongs to the genus Zea, but bears a very small inflorescence, 5–10 hard cobs, and a highly branched, spread-out stem.

Crosses between a particular teosinte variety and maize yield fertile offspring intermediate in phenotype between the two. QTL analysis has also revealed some loci that, when mutated in maize, yield a teosinte-like stem or teosinte-like cobs. Molecular clock analysis of these genes estimates their origin at some 9,000 years ago, well in accordance with other records of maize domestication. It is believed that a small group of farmers selected a maize-like natural mutant of teosinte some 9,000 years ago in Mexico and subjected it to continuous selection, yielding the familiar maize plant of today.[141]
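A molecular clock estimate of this kind amounts to dividing observed sequence divergence by twice the substitution rate (since both lineages accumulate changes). A back-of-envelope sketch, with purely hypothetical numbers chosen only to show the arithmetic:

```python
# Strict molecular clock: t = d / (2 * mu), where d is pairwise divergence
# (substitutions per site) and mu is the substitution rate (per site per year).
# The example values below are hypothetical, not measured maize/teosinte data.

def divergence_time(d: float, mu: float) -> float:
    """Years since two lineages split, assuming a strict molecular clock."""
    return d / (2.0 * mu)

# e.g. 5.4e-5 substitutions/site at an assumed rate of 3e-9 per site per year
# gives a split about 9,000 years ago:
print(divergence_time(5.4e-5, 3e-9))
```

Real analyses must also account for rate variation among sites and lineages, which is why published estimates carry wide confidence intervals.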

Another interesting case is that of cauliflower. The edible cauliflower is a domesticated version of the wild plant Brassica oleracea, which does not possess the dense undifferentiated inflorescence, called the curd, that cauliflower possesses.

Cauliflower possesses a single mutation in a gene called CAL, which controls meristem differentiation into inflorescence. This causes the cells of the floral meristem to retain an undifferentiated identity: instead of growing into a flower, they grow into a lump of undifferentiated cells.[142] This mutation has been selected through domestication since at least the time of the ancient Greeks.

An additional factor driving evolutionary change in some plants is coevolution with fungal parasites. In an environment containing a fungal parasite, which is common in nature, plants must adapt to evade the parasite's harmful effects.[143]

Whenever a parasitic fungus siphons limited resources away from a plant, there is selective pressure for a phenotype better able to prevent fungal attack. At the same time, fungi better equipped to evade the plant's defenses will have greater fitness. The combination of these two factors leads to an endless cycle of evolutionary change in the host-pathogen system.[144]

Because each species in the relationship is influenced by a constantly changing symbiont, evolutionary change usually occurs at a faster pace than if the other species were not present. This is true of most instances of coevolution, and it makes a population's ability to evolve quickly vital to its survival. Also, if the pathogenic species is too successful and threatens the survival and reproductive success of the host plants, the pathogenic fungi risk losing their nutrient source for future generations. These factors create a dynamic that shapes the evolutionary changes in both species generation after generation.[144]

Genes that code for defense mechanisms in plants must keep changing to keep up with a parasite that constantly works to evade those defenses. Genes that code for attachment mechanisms are the most dynamic and are directly related to the fungi's ability to evade plant defenses.[145] The greater the change in these genes, the greater the change in the attachment mechanism. After selection acts on the resulting phenotypes, evolutionary change that promotes evasion of host defenses occurs.

Fungi not only evolve to avoid the defenses of the plants; they also attempt to prevent the plant from improving its defenses. Anything the fungi can do to slow the host plant's evolution will improve the fitness of future generations, because the plant will not be able to keep up with the evolutionary changes of the parasite. One of the main processes by which plants evolve quickly in response to the environment is sexual reproduction. Without sexual reproduction, advantageous traits could not spread through the plant population as quickly, allowing the fungi to gain a competitive advantage. For this reason, the sexual reproductive organs of plants are targets for fungal attack. Studies have shown that many different types of obligate parasitic plant fungi have developed mechanisms to disable or otherwise affect the sexual reproduction of plants. If successful, this slows sexual reproduction for the plant, thereby slowing evolutionary change; in extreme cases, the fungi can render the plant sterile, creating an advantage for the pathogens. It is not known exactly how this adaptive trait developed in fungi, but it is clear that the relationship with the plant forced the development of the process.[146]

Some researchers also study how a range of factors affects the rate and outcomes of evolutionary change in different environments. For example, as in most evolution, increases in heritability in a population allow for a greater evolutionary response in the presence of selective pressure. For traits specific to plant-fungus coevolution, researchers have studied how the virulence of the invading pathogen affects the coevolution. Studies involving Mycosphaerella graminicola have consistently shown that the virulence of a pathogen does not significantly affect the evolutionary track of the host plant.[147]

Other factors can also affect the process of coevolution. In small populations, for example, selection is a relatively weaker force because of genetic drift. Genetic drift increases the likelihood of allele fixation, which decreases the genetic variance in the population. Therefore, if there is only a small population of plants in an area able to reproduce together, genetic drift may counteract the effects of selection, putting the plants at a disadvantage against fungi, which can evolve at a normal rate. The variance in both the host and pathogen populations is a major determinant of evolutionary success relative to the other species: the greater the genetic variance, the faster a species can evolve to counteract the other organism's avoidance or defensive mechanisms.[143]
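The effect of drift in small populations can be illustrated with a minimal Wright-Fisher simulation, a standard population-genetics model (the population sizes and seed below are arbitrary choices for illustration, not taken from the article):

```python
# Wright-Fisher model of a neutral allele: each generation, 2n allele
# copies are drawn binomially from the previous generation's frequency.
import random

def wright_fisher(n: int, p0: float = 0.5, generations: int = 200,
                  seed: int = 1) -> float:
    """Final frequency of a neutral allele in a diploid population of size n."""
    rng = random.Random(seed)
    copies = round(p0 * 2 * n)            # allele copies among 2n chromosomes
    for _ in range(generations):
        p = copies / (2 * n)
        copies = sum(rng.random() < p for _ in range(2 * n))
        if copies in (0, 2 * n):          # allele lost or fixed: variance gone
            break
    return copies / (2 * n)

# Small populations tend to fix or lose the allele within a few dozen
# generations, while larger populations usually retain intermediate
# frequencies over the same span.
print(wright_fisher(n=10))
print(wright_fisher(n=1000))
```

Because fixation erases the variance that selection acts on, runs with small n show directly why a small plant population can fall behind a faster-evolving fungal antagonist.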

Because of pollination, the effective population size of plants is normally larger than that of fungi, since pollinators can link isolated populations in a way that the fungus cannot. This means that positive traits evolving in non-adjacent but nearby areas can be passed on to neighbouring populations, while fungi must evolve independently to evade host defenses in each area. This is a clear competitive advantage for the host plants. Sexual reproduction within a broad, high-variance population leads to fast evolutionary change and higher reproductive success of offspring.[148]

Environment and climate patterns also play a role in evolutionary outcomes. Studies of oak trees and an obligate fungal parasite at different altitudes clearly show this: for the same species, populations at different altitudes evolved at drastically different rates and responded differently to the pathogens, because each organism was also subject to selection from its surrounding environment.[149]

Coevolution is a process related to the Red Queen hypothesis. Both the host plant and the parasitic fungi must continue to adapt to stay in their ecological niche. If one of the two species in the relationship evolves at a significantly faster rate than the other, the slower species will be at a competitive disadvantage and risk the loss of nutrients. Because the two species in the system are so closely linked, they respond to external environmental factors together, and each species affects the evolutionary outcome of the other; in other words, each exerts selective pressure on the other. Population size is also a major factor in the outcome, because differences in gene flow and genetic drift can cause evolutionary changes that do not match the direction of selection expected from the pressures exerted by the other organism. Coevolution is thus an important phenomenon for understanding the vital relationship between plants and their fungal parasites.

Hagemann, W. (1999). "Towards an organismic concept of land plants: the marginal blastozone and the development of the vegetation body of selected frondose gametophytes of liverworts and ferns". Plant Systematics and Evolution. 216: 81–133. doi:10.1007/bf00985102.