Globally, we face significant economic, social, and environmental risks as we confront the challenges associated with climate change. The magnitude and rapidity of the projected changes in climate are unprecedented, and their implications for the health of our planet and the legacy we will leave to our children, our grandchildren, and future generations are of vital concern. We need to develop strategies now to adapt to the changes, and this process must begin at the local level.

Understanding and Assessing Climate Change: Implications for Nebraska documents many of the key challenges that Nebraska will face as a result of climate change. Commentaries from experts on Nebraska's water resources, energy supply and use, agriculture, forests, health, ecosystems, urban systems and rural communities, and infrastructure and vulnerabilities raise serious concerns about the impacts of projected changes in climate, but they also provide a starting point for discussions about the actions that we can take to overcome these challenges.

Global annual average temperature (as measured over both land and oceans; scale on left) has increased by more than 1.4°F (0.8°C) since 1880. Red bars show temperatures above the long-term average, and blue bars indicate temperatures below the long-term average. The black line shows atmospheric carbon dioxide (CO2) concentration in parts per million (ppm); scale on right. While there is a clear long-term global warming trend, some years do not show a temperature increase relative to the previous year, and some years show greater changes than others. These year-to-year fluctuations in temperature are due to natural processes, such as the effects of El Niños, La Niñas, and the eruption of large volcanoes. (Figure source: NOAA NCDC. Temperature data from NOAA NCDC 2012; CO2 data from NOAA ESRL 2012.)

Introduction

Globally, we face significant economic, social, and environmental risks as we confront the challenges associated with climate change. The body of scientific evidence confirms with a high degree of certainty that human activities, chiefly the increased concentrations of greenhouse gases (GHGs) since the beginning of the Industrial Revolution, along with changes in land use and other factors, are the primary cause of the warming that the planet has experienced, especially in recent decades.

Is there a debate within the scientific community with regard to observed changes in climate and human activities as the principal causal factor? The short answer here is “no”, at least certainly not among climate scientists—that is, those scientists who have actual expertise in the study of climate and climate change. For more than a decade, there has been broad and overwhelming consensus within the climate science community that the human‐induced effects on climate change are both very real and very large. The debate in 2014 is restricted to precisely how these changes will play out and what actions we will need to take to adapt to and mitigate the effects of these changes.

The magnitude and rapidity of the projected changes in climate are unprecedented. The implications of these changes for the health of our planet, and the legacy we will leave to our children, our grandchildren and future generations are of vital concern. Therefore, it is imperative that we develop strategies now to adapt to the multitude of changes we are experiencing and will continue to experience in our climate. This process of adaptation must begin at the local level, where these changes are being observed and their impacts felt. However, global agreements on the reduction of GHG emissions are a critical part of the solution in terms of mitigating as much future warming as possible.

The approach taken in this report is to review the voluminous scientific literature on the subject and interpret—given time and resource constraints—our current understanding of the science of climate change and the implications of projections of climate change for Nebraska. The goal of this report is to inform policy makers, natural resource managers, and the public about 1) the state of the science on climate change, 2) current projections for ongoing changes over the twenty‐first century, 3) current and potential future impacts, and 4) the management and policy implications of these changes. Hopefully, this report will lead to a higher degree of awareness and the initiation of timely and appropriate strategic actions that enable Nebraskans to prepare for and adapt to current and future changes in our climate.

The Earth’s Climate System

Changes to the components of the earth’s climate system are caused by changes in forcings, or external factors, that may be either positive (lead to warming) or negative (lead to cooling). Climate forcings can be classified as natural or anthropogenic—that is, human‐induced. Examples of natural forcings include solar variability and volcanic eruptions, while anthropogenic forcings include GHG emissions, aerosol production, and land‐use changes.

Changes in natural forcings have always occurred and continue today, having produced climate change and variability throughout the earth’s history; only recently have anthropogenic forcings become large enough to significantly affect the climate system.

Nearly all the energy driving the climate system comes from the sun. Although solar output varies over time and has led to climate changes during the earth’s geologic history, changes in solar radiation cannot account for the warming observed over the past 30 years, during which accurate measurements of solar output have been made. In the absence of solar forcing, the largest climate forcing is due to changes in atmospheric composition, particularly of GHGs and aerosols. Global climate models cannot reproduce the recent observed warming without including anthropogenic forcings (particularly GHG emissions).

Evidence that human activities influence the global climate system continues to accumulate because of an increased understanding of the climate system and its response to natural and anthropogenic factors, more and better observations, and improved climate models. In fact, in its latest assessment report, the Intergovernmental Panel on Climate Change (IPCC) states with 95% confidence that human influence is the main cause of the observed warming in the atmosphere and oceans and other indicators of climate change, and that continued emissions of GHGs will cause further warming and changes in these components of the climate system. Before the large‐scale use of fossil fuels for energy (starting during the Industrial Revolution), the concentrations of the major GHGs were remarkably constant during human history. Since then, the concentration of these gases has risen—slowly at first, then more rapidly since the middle of the twentieth century. Furthermore, scientists can say with very high confidence that the rate of increase of these gases is unprecedented in the last 22,000 years—and with high confidence over the last ~800,000 years.

Evidence for a Changing Climate

Multiple lines of evidence show that the earth’s climate has changed on global, regional, and local scales. Scientists from around the world have collected this evidence from weather stations, satellites, buoys, and other observational networks. When taken together, the evidence clearly shows that our planet is warming. However, temperature change represents only one aspect of a changing climate. Changes in rainfall, increased melting of snow and ice, rising sea levels, and increasing sea surface temperatures are only a few of the key indicators of a changing climate.

Although the globe as a whole is getting warmer, observations show that changes in climate have not been uniform in space and time. Some areas have cooled while others have warmed, a reflection of normal climate variability and differing controls on regional climate. Likewise, some areas have experienced increased droughts while others have had more floods. Changes in Nebraska’s climate are occurring within the context of these global and regional changes.

Past and Projected Changes in Nebraska’s Climate

Nebraska has experienced an overall warming of about 1°F since 1895. When this is separated into daytime highs and nighttime lows, we find that the trend in low temperatures is greater than the trend in high temperatures, both of which show an overall warming. These trends are consistent with the changes experienced across the Plains states in general, which show a warming that is highest in winter and spring and a greater warming for the nighttime lows than for daytime highs. The vast majority of this warming has occurred during the winter months, with minimum temperatures rising 2.0‐4.0°F per century and maximum temperatures rising 1.0‐2.5°F per century. Summer minimum temperatures have increased by 0.5‐1.0°F per century at most locations, but maximum temperature trends generally range from ‐0.5 to +0.5°F per century. Unlike temperature, however, there is no discernible trend in mean annual precipitation in Nebraska. Since 1895, the length of the frost‐free season has increased by 5 to 25 days across Nebraska, and on average statewide by more than one week. The length of the frost‐free season will continue to increase in future decades.
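Per-century trend figures like those above come from fitting a linear trend to long station records. A minimal sketch of that calculation, using an ordinary least-squares slope and a made-up temperature series (not actual Nebraska data):

```python
# Least-squares linear trend, scaled to degrees F per century.
# The temperature series below is hypothetical, for illustration only.

def trend_per_century(years, temps):
    """Return the least-squares slope of temps vs. years, per 100 years."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    cov = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    var = sum((y - mean_y) ** 2 for y in years)
    return 100.0 * cov / var  # slope per year -> per century

# A made-up series warming exactly 0.01 F per year: expect 1.0 F/century.
years = list(range(1895, 2015))
temps = [50.0 + 0.01 * (y - 1895) for y in years]
print(round(trend_per_century(years, temps), 2))  # -> 1.0
```

Real trend analyses also account for station moves, instrument changes, and missing data, which this sketch ignores.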

Projected temperature changes for Nebraska range from an increase of 4‐5°F (low emission scenarios) to 8‐9°F (high emission scenarios) by the last quarter of the twenty‐first century (2071‐2099). This range is based on our current understanding of the climate system under a variety of future emissions scenarios. The range of temperature projections emphasizes that the largest uncertainty in projecting climate change beyond the next few decades lies in the amount of heat‐trapping gases that will be emitted into the atmosphere, not in model uncertainty.

Under both low and high emissions scenarios, the number of high temperature stress days over 100°F is projected to increase substantially in Nebraska and the Great Plains region. By midcentury (2041‐2070), this increase for Nebraska would equate to experiencing typical summer temperatures equivalent to those experienced during the 2012 drought and heat wave. The number of warm nights, defined as the number of nights with the minimum temperature remaining above 80°F for the southern Plains and above 60°F for the northern Plains, is expected to increase dramatically. For Nebraska, the number of warm nights is expected to increase by an additional 20‐25 nights for the low emissions scenario and 25‐40 nights for the high emissions scenario.

Consistent with rising global and regional temperatures, heat wave events have become more frequent around the world. This can be demonstrated by the ratio of record high temperatures to record low temperatures being set. Across the United States, this ratio is currently approximately 2 to 1, providing further evidence of a significant warming trend.
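The record-ratio measure works by counting how often the running record high or record low is broken in a temperature series. A small illustration with hypothetical values:

```python
# Count running record highs and record lows in a temperature series.
# Data are hypothetical; in a warming climate the high:low record ratio
# drifts above 1:1 (recently about 2:1 across the United States).

def record_counts(series, kind="high"):
    """Count how many values in a series break the running record."""
    records = 0
    best = series[0]
    for v in series[1:]:
        if (kind == "high" and v > best) or (kind == "low" and v < best):
            best = v
            records += 1
    return records

highs = [90, 92, 91, 95, 94, 97]  # record highs broken at 92, 95, 97
lows = [30, 28, 29, 27, 31, 27]   # record lows broken at 28, 27
print(record_counts(highs, "high"), record_counts(lows, "low"))  # -> 3 2
```

With these toy numbers the high:low ratio is 3:2; published analyses apply the same counting to thousands of stations over decades.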

Current trends for increased precipitation in the northern Great Plains are projected to become even more pronounced, while the southern Great Plains will continue to become drier by midcentury and later. The greatest increases for the northern Great Plains states so far have been in North and South Dakota, eastern Montana, and most of eastern Nebraska. Little change in precipitation in the winter and spring months is expected for Nebraska. Any increases in the summer and fall months are expected to be minimal and precipitation may be reduced during the summer months in the state. An increase in the percentage of average annual precipitation falling in heavy rainfall events has been observed for portions of the northern Great Plains states, including eastern Nebraska, and the Midwest. This trend is expected to continue in the decades ahead. Flood magnitude has been increasing because of the increase in heavy precipitation events. Soil moisture is projected to decrease by 5‐10% by the end of the century, if the high emissions scenario ensues.

A major concern for Nebraska and other central Great Plains states is the current and continued large projected reduction in snowpack for the central and northern Rocky Mountains. This is due to both a reduction in overall precipitation (rain and snow) and warmer conditions, meaning more rain and less snow, even in winter. Flows in the Platte and Missouri rivers during the summer months critically depend on the slow release of water as the snowpack melts. These summer flows could be greatly reduced in coming years.

Human activities local to Nebraska can also be important in terms of how they influence the climate at the microclimatic level. In particular, the advent of large‐scale irrigation in Nebraska since the 1960s has kept the summertime climate in Nebraska cooler and wetter than it otherwise would have been. However, if reduced water availability curtails irrigation in the state, then the microclimatic effects of irrigation will be lessened in the future, exacerbating the effects of anthropogenic climate change.

Drought is a critical issue for Nebraska. This was demonstrated clearly during 2012, which was the driest and hottest year for the state based on the climatological record going back to 1895. Although the long‐term climatological record does not yet show any trends in drought frequency or severity from a national perspective, there is some evidence of more frequent droughts recently in the western United States and more severe droughts in the southwestern United States. Looking ahead, however, the expectation is that drought frequency and severity in Nebraska will increase, particularly during the summer months, because of the combination of increasing temperatures and the increased seasonal variability in precipitation that is likely to occur. Modeling studies show that drought, as indicated by the commonly used Palmer Drought Severity Index (PDSI), is expected to increase in the future. The PDSI uses temperature and precipitation data to estimate relative dryness. Temperature increases could result in widespread drying over the United States in the latter half of the twenty‐first century, with severe drought becoming the new climate normal in parts of the central and western United States.

Implications of Projected Climate Changes in Nebraska

Current and projected changes in temperature will have positive benefits for some and negative consequences for others, typically referred to as winners and losers. However, the changes in climate currently being observed extend well beyond temperature and include changes in precipitation amounts, seasonal distribution, intensity, and form (snow versus rain). Changes in the observed frequency and intensity of extreme events are of serious concern today and for the future because of the economic, social, and environmental costs associated with responding to, recovering from, and preparing for these extreme events in the near and longer term.

To address the implications of observed and projected changes in climate on particular sectors, experts with knowledge of, and practical experience in, the principal sectors of importance to Nebraska were invited to prepare commentaries for this report. The basis for these commentaries was the information contained in the recently released National Climate Assessment Report. The key sectors chosen for inclusion in the Nebraska climate change report were water resources; energy supply and use; agriculture; forests; human health; ecosystems; urban systems, infrastructure and vulnerability; and rural communities. An assessment of the importance of observed and projected changes in climate for the insurance industry, both globally and locally, was also completed. These commentaries raise serious concerns about how the projected changes in climate will impact Nebraska, and they provide a starting point for discussions about the actions that we should take to adapt to the changes in each sector.

It is critically important to point out that the implications of and potential impacts associated with observed and projected changes in climate will be closely associated with the management practices employed in these specific sectors. For example, the impacts of projected changes in climate on the productivity of a specific farm will be dependent on the ability of that producer to adapt to these changes as they occur, and the producer’s access to new and innovative technologies that facilitate the adaptation process. Early adapters will be better able to cope with changes as they occur.

This report documents many of the key challenges that Nebraska will face as a result of climate change. Embedded in each of these challenges are opportunities. A key takeaway message from the report is that, with this knowledge in hand, we can identify actions that need to be implemented to avoid or reduce the deleterious effects of climate change in Nebraska. Action now is preferable to, and more cost effective than, reaction later.

Report Goals

The goal of this report is to inform policy makers, natural resource managers, and the public about:

1. the current state of the science on climate change;

2. the latest projections for ongoing changes over the twenty‐first century; and

3. potential impacts for Nebraska of those changes.

Overview: Key Points

Scientific evidence confirms that human activities are the primary cause for the warming that the planet has experienced, especially in recent decades.

There is broad and overwhelming consensus within the climate science community.

Any debate is restricted to precisely how these changes will play out and what actions are needed to adapt to and mitigate these changes.

The magnitude and rapidity of the projected changes in climate are unprecedented, compared to natural climate change and variability.

Natural forcings have always occurred and continue today, having produced climate change and variability throughout the earth’s history; only recently have anthropogenic forcings become large enough to significantly affect the climate system.

Multiple lines of observational evidence show that the earth’s climate is changing on global, regional, and local scales and is warming overall.

Temperature change represents only one aspect of a changing climate.

Changes in rainfall, increased melting of snow and ice, rising sea levels, and increasing ocean acidity are only a few of the other key indicators.

Past and Projected Changes in Nebraska’s Climate: Key Points

Nebraska has experienced an overall warming of about 1°F since 1895, with warming that is greatest in winter and spring and greater for nighttime lows than for daytime highs. Since 1895, the length of the frost‐free season has increased by 5 to 25 days across Nebraska.

Projected temperature changes for Nebraska range from an increase of 4‐5°F (low emission scenarios) to 8‐9°F (high emission scenarios) by the end of the twenty‐first century.

The largest uncertainty in projecting climate change beyond the next few decades lies in the greenhouse gas emission scenario assumed, not in climate model uncertainty.

Under both low and high emissions scenarios, the number of high temperature stress days over 100°F is projected to increase substantially in Nebraska.

The number of warm nights is expected to increase by an additional 20‐40 nights.

There is no observed trend in mean annual precipitation across Nebraska.

An increase in heavy rainfall events has been observed for portions of Nebraska.

Flood magnitude has been increasing because of this increase in heavy precipitation events.

Little change in total annual precipitation is projected for Nebraska.

The increasing trend in heavy rainfall events is expected to continue.

Soil moisture is projected to decrease by 5‐10% by the end of the century, if the high emissions scenario ensues. This can lead to enhanced drought conditions for Nebraska.

A major concern for Nebraska is the large projected reduction in snowpack for the Rocky Mountains.

Summer flows in the Platte and Missouri rivers critically depend on the slow release of water as the snowpack melts and could be greatly reduced in coming years.

The advent of large‐scale irrigation in Nebraska since the 1960s has kept the summertime climate in Nebraska cooler and wetter than it otherwise would have been.

If reduced water availability curtails irrigation, the effects of global warming will be exacerbated.

Projections are for increasing drought frequency and severity because of the combination of increased temperatures and increased seasonal variability in precipitation.

Implications of Projected Climate Changes in Nebraska: Key Points

To address the implications of changes in climate, experts in the principal sectors of importance to Nebraska were invited to prepare commentaries for this report. These commentaries raise serious concerns about how the projected changes in climate will impact Nebraska, and they provide a starting point for discussions about the actions that we should take to adapt to the changes in each sector. The key sectors chosen for inclusion were

water resources;

energy supply and use;

agriculture;

forests;

human health;

ecosystems;

urban systems, infrastructure and vulnerability; and

rural communities.

Summary

This report documents many of the key challenges that Nebraska will face as a result of climate change. Embedded in each of these challenges are opportunities. A key takeaway message from the report is that, with this knowledge in hand, we can identify actions that need to be implemented to avoid or reduce the deleterious effects of climate change in Nebraska. Action now is preferable to, and more cost effective than, reaction later.

1. What do you consider the key challenges in getting Nebraskans to actively engage in serious discussions about climate change and its implications for the state? How can we change the focus from a political to a science‐based discussion?

2. Has your community incorporated projected changes in climate into future planning efforts regarding water and energy supply and demand reduction, impacts on infrastructure, human health and related services, etc.? If not, working collectively with other concerned citizens, how can you influence those discussions in the near future?

3. Nebraska is about to elect a new governor. What is the position of each of the candidates on climate change and have they stated this position publicly? If elected, how might they incorporate this issue in future policy discussions and decisions? Can you identify a mechanism to communicate these suggestions to the staff of the newly elected governor?

4. What are your most important concerns regarding the impacts of climate change for Nebraska on the principal sectors addressed in the report? What about other sectors not addressed in the report?

5. How can the University of Nebraska assist the state in addressing the impacts of climate change in the near and longer term?

6. Climate change likely will limit the availability of water in the future through more variable precipitation, higher temperatures, reduced snowpack in the mountains to the west, and reduced recharge to our aquifers. How do we balance the needs of agriculture with the other important uses of water? What is the role of Natural Resource Districts, state agencies, and others in addressing these challenges?

Optimized use of agricultural pesticides, fertilizers, seeds, water, energy and other crop amendments

Improving the well-being of those who depend upon Nebraska agriculture

SSCM Components


Satellite-Based Auto-Guidance - Satellite-based auto-guidance systems can provide significant benefits for the crop production industry. Improved equipment makes it possible to use satellite-based auto-guidance in diverse growing environments.

Yield Monitoring and Mapping - Yield mapping refers to the process of collecting georeferenced data on crop yield and characteristics, such as moisture content, while the crop is being harvested. Various methods, using a range of sensors, have been developed for mapping crop yields.

On-the-Go Vehicle-Based Soil Sensors - Sensors can measure a variety of soil properties. These sensors are used in conjunction with GPS to develop field maps or to control variable rate application equipment in real-time.

Satellite-Based Auto-Guidance

Auto-steer technology has taken the farming community by storm in recent years. This technology is more frequently referred to as auto-guidance, the guidance of agricultural vehicles using satellite-based positioning equipment (e.g., GPS receivers).

Rising energy costs and more reasonably priced auto-guidance systems have strengthened the cost justification for investing in this new technology. As many of the benefits of auto-guidance technology become increasingly evident, early adopters continue discovering additional advantages. The most obvious rewards include:

Reduced skips and overlaps

Lower operator fatigue

Ability to work in poor visibility conditions

Minimal setup and service time

Ease of use
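The first benefit on the list, reduced skips and overlaps, can be roughed out with simple arithmetic. The field dimensions and overlap figures below are hypothetical, chosen only to illustrate the calculation:

```python
# Back-of-the-envelope sketch of the value of reduced overlap: manual
# steering forces the operator to overlap passes to avoid skips, which
# shrinks the effective swath width. All numbers here are hypothetical.
import math

def passes_needed(field_width_m, swath_m, overlap_m):
    """Passes required to cover a field, given per-pass overlap."""
    effective_swath = swath_m - overlap_m
    return math.ceil(field_width_m / effective_swath)

# 800 m wide field, 24 m implement:
print(passes_needed(800, 24, 2.4))  # ~10% overlap, manual steering -> 38
print(passes_needed(800, 24, 0.1))  # auto-guidance -> 34
```

In this made-up case auto-guidance saves four full passes of fuel, time, and inputs on a single field.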

The three levels of automation for steering an agricultural vehicle

Navigation aids - Relatively inexpensive navigation aids, known as parallel tracking devices or, more commonly, lightbars, are being used by operators to visualize their position with respect to previous passes and to recognize the need to make steering adjustments if a measured geographic position deviates from the desired track.

Auto-guidance - More advanced auto-guidance options include similar capabilities with the additional option of automatically steering the vehicle using either an integrated electro-hydraulic control system or a mechanical steering device installed inside the cab. When implementing an auto-guidance option, the operator takes control during turns and other maneuvers and oversees equipment performance when the auto-guidance mode is engaged.

Field robots - Finally, with autonomous vehicles, the operator’s presence on board is not required and the entire operation is controlled remotely (via wireless communication) or in robotic mode. This can be beneficial, for example, when applying chemicals that are hazardous to human health. The greatest liability of autonomous vehicles, improper response in unpredictable field situations, has been the major drawback of robotic agriculture. Therefore, auto-guidance has been recognized as the most promising option for today’s farming operations.
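All three levels of automation rest on the same quantity: the vehicle's deviation from the desired pass line, which a lightbar displays and an auto-guidance controller drives toward zero. A minimal sketch of that cross-track calculation in a local flat-field coordinate frame (all coordinates hypothetical):

```python
# Signed cross-track error of the vehicle from a desired A-B pass line,
# using a flat-field (local x, y in meters) approximation. A lightbar
# displays this value so the operator can steer it back toward zero.
import math

def cross_track_error(a, b, p):
    """Signed perpendicular distance (m) of point p from line a->b.
    Positive means p lies to the left of the direction of travel."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    # 2D cross product of the pass direction with the vector to p
    return (dx * (py - ay) - dy * (px - ax)) / length

# Pass line due north; vehicle 0.3 m west (left) of the line.
print(round(cross_track_error((0, 0), (0, 100), (-0.3, 50)), 2))  # -> 0.3
```

Production systems work in geodetic coordinates and handle curved passes, but the flat-field version captures the idea.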

Regardless of the type of system used, since the radio signal processed by receivers can be affected by several factors (atmospheric interference, configuration of satellites in the sky, time estimation uncertainties, etc.), the applicability of uncorrected position estimates is rather limited. To adjust estimated geographic coordinates in real time, various differential correction services are used. In addition to the differential correction, most receivers apply signal filtering techniques to assure the best possible predictability of antenna location. Based on the quality of differential correction and internal signal processing, positioning receivers used for auto-guidance are advertised according to the level of anticipated accuracy: sub-meter, decimeter, and centimeter.
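The idea behind differential correction can be sketched in a few lines: a base station at a surveyed location measures the error in its own fix and broadcasts that offset for nearby rovers to apply. This is only a conceptual illustration in local planar coordinates, not how any particular service encodes its corrections:

```python
# Conceptual DGPS-style differential correction: a base station at a
# surveyed location measures the error in its own fix and broadcasts
# the offset; a nearby rover applies the same offset to its own fix.
# Coordinates are local x, y in meters; all numbers are hypothetical.

def correction(base_true, base_measured):
    """Offset (dx, dy) mapping the base's measured fix to its true position."""
    return (base_true[0] - base_measured[0], base_true[1] - base_measured[1])

def apply_correction(rover_measured, offset):
    """Shift a rover fix by the broadcast offset."""
    return (rover_measured[0] + offset[0], rover_measured[1] + offset[1])

off = correction(base_true=(0.0, 0.0), base_measured=(1.5, -0.5))
print(apply_correction((100.5, 49.0), off))  # -> (99.0, 49.5)
```

The approach works because atmospheric and satellite-geometry errors are highly correlated over short base-to-rover distances, so the base's error is a good estimate of the rover's.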

Positioning Receiver Accuracy Levels

Sub-meter - Widely used in agriculture and other industries, single-frequency receivers with submeter level accuracy frequently rely on several alternative differential correction services provided by public and private entities. Popular in the past, the Coast Guard differential correction AM radio signal (known more commonly as Beacon) is broadcast through a network of towers located near navigable waters. More recently, Wide Area Augmentation System (WAAS) has been deployed by the Federal Aviation Administration to broadcast a satellite-based differential correction service. A similar service is available through free-of-charge John Deere StarFire 1 (SF1) and subscription-based OmniSTAR Virtual Base Station (VBS) options.

Decimeter - To achieve decimeter level accuracy, dual-frequency receivers can be used with subscription-based correction services such as John Deere StarFire 2 (SF2), or with a local DGPS base station.

Centimeter - A local base station is also required to implement a Real Time Kinematic (RTK) differential correction service, which provides the ultimate centimeter level of accuracy. In certain locations around the US, local networks of permanent RTK base stations have been established by private entities to provide fee-based coverage of areas with relatively high demand for superior positioning accuracy.

Overall Performance

When adapting auto-guidance to a particular farm operation, it is necessary to understand that positioning error is just one factor causing less than perfect field performance. In addition, the ability to maintain desirable geometric relationships between passes is affected by vehicle dynamics, ability of the field implement to track behind the vehicle, and actual conditions of the field surface. Therefore, poor quality of the steering control system, sloped terrain, or misalignments in the implement will cause the overall field performance to suffer.

Currently, hands-free steering of agricultural vehicles is accomplished using either a steering device attached to the steering column or an electro-hydraulic steering system. An easy-to-set-up steering column device can be attached to an existing steering wheel, or the steering wheel can be replaced with an actuator module that includes its own steering wheel. Auto-guidance systems integrated with electro-hydraulic steering control circuits alter the travel direction much like conventional power steering. A control valve is used to properly direct hydraulic oil when a steering adjustment needs to be made. When retrofitting old tractors, some manufacturers provide other hydraulic drive components to guarantee the required steering performance. Actuators that adjust the direction of travel through a steering column can be less responsive than those that change the orientation of the vehicle wheels directly. In most instances, a wheel angle sensor is used as steering feedback in addition to the heading records obtained from the GPS receiver, making electro-hydraulic steering systems even more reliable.
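The feedback loop such a steering system closes can be caricatured as a proportional controller that maps position and heading errors to a wheel-angle command. The gains and limits below are invented for illustration and do not come from any real system:

```python
# Hypothetical sketch of the feedback loop an electro-hydraulic steering
# system closes: a proportional controller turns cross-track error and
# heading error into a wheel-angle command, clamped to actuator limits.

def steering_command(cross_track_m, heading_err_deg,
                     k_xt=12.0, k_hd=0.8, max_angle_deg=35.0):
    """Wheel angle (deg) commanded from position and heading errors.
    Gains and limits are made-up values, not from any real system."""
    angle = k_xt * cross_track_m + k_hd * heading_err_deg
    return max(-max_angle_deg, min(max_angle_deg, angle))

# 0.5 m off the line with a 5-degree heading error:
print(steering_command(0.5, 5.0))  # -> 10.0
```

Commercial controllers add integral/derivative terms, speed-dependent gains, and vehicle-dynamics models, which is one reason steering quality varies between systems even at the same positioning accuracy.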

Control of vehicle dynamics becomes more challenging when farming sloped ground. On slopes, roll (tilt from side to side), pitch (tilt from front to back), and yaw (rotation around the vertical axis) alter the location of the positioning antenna with respect to other parts of the vehicle. For example, when driving along a slope, the horizontal position of an antenna located on top of the cab shifts to one side of the tractor with respect to the projected center of the tractor. This causes an engaged steering control system to guide the vehicle so that the point directly below the antenna (not the center of the vehicle) follows the desired pass. To compensate for these attitude-caused errors, most auto-guidance systems include a combination of gyroscopes and accelerometers, or several antennas placed in different locations on the cab. Less advanced terrain compensation modules can deal only with roll and pitch angles, while more sophisticated sensing systems, frequently called 6-axis, can measure the total dynamic attitude of the vehicle in space.
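The roll-induced antenna shift described above is straightforward to estimate: for a roof-mounted antenna at height h, a roll angle theta moves the antenna horizontally by roughly h * sin(theta). A quick sketch with hypothetical numbers:

```python
# Terrain compensation sketch: on a side slope, a cab-roof antenna at
# height h shifts horizontally by roughly h * sin(roll) relative to the
# ground point below the vehicle's centerline. Numbers are hypothetical.
import math

def antenna_roll_offset(antenna_height_m, roll_deg):
    """Horizontal shift (m) of the antenna caused by vehicle roll."""
    return antenna_height_m * math.sin(math.radians(roll_deg))

# 3 m antenna height on a 10-degree side slope:
print(round(antenna_roll_offset(3.0, 10.0), 2))  # -> 0.52
```

Half a meter of apparent position error dwarfs even sub-meter receiver accuracy, which is why terrain compensation matters so much on sloped ground.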

Vehicle stability and proper alignment of the implement attached to the vehicle are also important when implementing auto-guidance. A skip followed by an overlap on every alternating pass during straight, level trips from one end of the field to the other indicates implement misalignment. Even a properly adjusted pulled implement, however, will not follow the vehicle's path on sloped terrain; in that case the implement will tend to stay close to the center of a turn or shift downslope.

Several manufacturers offer implement tracking solutions that accurately sense the implement's position with respect to the vehicle and mechanically adjust that position using a set of large-diameter disc coulters to counteract the side shift. Additional developments focus on compensating for known shifts of the implement by adjusting the vehicle's trajectory to ensure proper tracking of the implement rather than the vehicle. Optical and mechanical crop-based guidance systems can also be useful for positioning the implement with respect to previously established rows.

Reference to commercial products or trade names is made with the understanding that no discrimination is intended and no endorsement by University of Nebraska–Lincoln Extension is implied.

Yield monitoring equipment was introduced in the early 1990s and is increasingly considered a conventional practice in modern agriculture. The pioneers of precision agriculture already have generated several years of yield history and have examined different ways of interpreting and processing these data.

Yield Mapping Concept

Yield mapping refers to the process of collecting georeferenced data on crop yield and characteristics, such as moisture content, while the crop is being harvested. Various methods, using a range of sensors, have been developed for mapping crop yields.

One such sensor is the travel speed sensor, which determines the distance the combine travels during each logging interval. (Travel speed is sometimes measured with a GPS receiver, a radar sensor, or an ultrasonic sensor.)

Each sensor has to be properly calibrated according to the operator’s manual. Calibration converts the sensor’s signal to physical parameters. A proprietary binary log file is created during harvest to record the output of all sensors as a function of time. This file can be converted to a text format or displayed as a map using the yield monitor vendor’s software.

Processing Yield Maps

The yield calculated at each field location can be displayed on a map using a Geographic Information System (GIS) software package. The raw log file, however, contains points recorded during turns, and the sensor measurements do not correspond to the exact harvest locations because grain flow through a combine is a delayed process (unless real-time correction is applied). To eliminate these obvious errors, the raw data are shifted to compensate for the combining delay, and the points corresponding to the header-up position are removed. Settings for grain flow delay are combine- and sometimes even crop-specific, but typical values for grain crops range from about 10 to 12 seconds.

Usually a few points at the beginning and at the end of a pass should be removed as well. These are referred to as start- and end-pass delays. Start-pass delays occur when the combine starts harvesting the crop but grain flow has not yet stabilized because the elevator is gradually filling up. Similarly, end-pass delays occur when the combine moves out of the crop and grain flow gradually declines to zero as the elevator empties completely. Consult the manufacturer of your yield monitor for the most appropriate settings to use with your combine.

Shifting raw data to correct for grain flow delay, as well as deleting points recorded with the header raised or during start- and end-pass delays, is the primary data filtering procedure built into software supplied with yield mapping systems.
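The filtering steps above can be sketched in a few lines. This is a simplified illustration that assumes one log record per second and a single harvest pass; the function and parameter names are hypothetical, not those of any yield monitor vendor's software.

```python
def clean_pass(positions, flows, delay_s=11.0, log_interval_s=1.0,
               start_trim=4, end_trim=2):
    """Align grain-flow readings with the positions where the grain was
    actually cut, then drop unstable start- and end-of-pass points.

    positions -- georeferenced points for one pass, in travel order
    flows     -- grain-flow readings logged at the same instants
    delay_s   -- grain flow delay through the combine (typically 10-12 s)
    """
    n = round(delay_s / log_interval_s)      # flow delay in log points
    # A flow reading logged at time t corresponds to the position
    # recorded n points earlier in the pass.
    paired = list(zip(positions[:-n] if n else positions, flows[n:]))
    # Trim the start- and end-pass delay points.
    return paired[start_trim:len(paired) - end_trim]
```

Header-up points would normally be removed before this step, using the header position switch recorded in the log file.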

Yield History Evaluation

Evaluating the temporal (year-to-year) variation of yield distribution within the field is an essential step in defining field areas with potentially high and low yields. Several approaches can be used to evaluate temporal effects on yield. One approach is to calculate the relative (normalized) yield for each point or grid cell. Normalized yield can be defined as the ratio of the actual yield to the field average:

    relative yield (%) = 100 x (actual yield / field average yield)

When growing conditions in a field vary considerably, such as irrigated and dryland areas or different crops or varieties grown in different areas, normalization should be done separately for those areas, with the resulting relative yields recombined into one data file for the whole field. The following figure shows a relative yield history for a field with corn (soybean in the southern half in 2000) grown using furrow-irrigation (until 2001) and center-pivot irrigation (in 2002).
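The zone-wise normalization described above can be sketched as follows. This is a minimal illustration; the function name and zone labels are hypothetical.

```python
def normalize_by_zone(yields, zones):
    """Relative yield: each point's yield divided by the average of its
    own management area (e.g., irrigated vs. dryland), with the results
    recombined into one list for the whole field.
    """
    # Collect yields by zone, then compute each zone's average.
    groups = {}
    for y, z in zip(yields, zones):
        groups.setdefault(z, []).append(y)
    means = {z: sum(v) / len(v) for z, v in groups.items()}
    # Divide each point by its own zone average.
    return [y / means[z] for y, z in zip(yields, zones)]
```

Multiplying the result by 100 expresses relative yield as a percentage of the zone average, which makes maps from different years and crops directly comparable.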

Maps of relative yield of corn and soybean grown during a seven-year period (red indicates low-yielding areas and green indicates higher than average yields).

Potential Applications

Yield maps represent the output of crop production. On one hand, this information can be used to investigate the existence of spatially variable yield-limiting factors. On the other hand, the yield history can be used to define spatially variable yield goals that may allow varying inputs according to expected field productivity.

The following flowchart illustrates the process one might follow in deciding whether to invest in site-specific crop management, based on analysis of yield maps. If yield variability across the field cannot be explained by any spatially inconsistent field property, uniform management may be appropriate. Site-specific management becomes a promising strategy if yield patterns are consistent from year to year and can be correlated to one or more field properties (e.g. nutrient supply, topography, past management, etc.).

If the causes for yield variation are known and can be eliminated permanently, the entire area could be brought to similar growing conditions and managed uniformly thereafter. This concept was one of the earliest philosophies behind precision agriculture, but is likely only feasible for certain field properties. For example, variable rate liming can be used to correct acidic areas in a field. In this case, the yield map is used only to investigate whether low soil pH is a yield-limiting factor, and the soil map is used to prescribe variable application rates. Another example would be localized deep soil tillage to alleviate compaction in selected field areas.

Most yield limiting factors cannot be modified permanently through single measures because of economic or practical constraints. Consequently, site-specific crop management may be used to appropriately account for the existing spatial variability in attainable yield and/or soil properties.

Summary

Yield maps are one of the most valuable sources of spatial data for precision agriculture. In developing these maps, it is essential to remove the data points that do not accurately represent the yield at a corresponding location. Map averaging or smoothing is usually done to aid data interpretation. A long yield history is essential to avoid drawing conclusions that are affected by the weather or other unpredictable factors during a particular year. Typically, at least five years of yield maps are desired. Processed yield maps can be used to investigate factors affecting the yield or to prescribe variable rate applications of agricultural inputs according to spatially variable yield goals (yield potential). Producers interested in precision farming should, however, always evaluate different management approaches to identify those that provide the greatest benefit at a particular site.

Sensors that measure a variety of essential soil properties on the go are being developed. These sensors can be used either to control variable rate application equipment in real-time or in conjunction with a Global Positioning System (GPS) to generate field maps of particular soil properties. Depending on the spacing between passes, travel speed, and sampling and/or measurement frequency, the number of measurement points per acre varies; however, in most cases, it is much greater than the density of manual grid sampling. The cost of mapping usually is reduced as well.

Measuring Soil Properties

When thinking about an ideal precision agriculture system, producers visualize a sensor located in direct contact with, or close to, the ground and connected to a “black box” which analyzes sensor response, processes the data, and changes the application rate instantaneously. They also hope that the real-time information detected by the sensor and used to prescribe the application rate would optimize the overall economic or agronomic effect of the production input. This approach, however, does not take into account several difficulties met in the “real world”:

1. Most sensors and applicator controllers need a certain time for measurement, integration, and/or adjustment, which decreases the allowable operation speed or measurement density.

2. Currently, there is no site-specific management prescription algorithm proven to be the most favorable for all variables involved in crop production.

Rather than using real-time, on-the-go sensors with controllers, a map-based approach may be more desirable because of the ability to collect and analyze data, make the prescription, and conduct the variable rate application in two or more steps. In this case, multiple layers of information including yield maps, a digital elevation model (DEM), and various types of imagery could be pooled together using a geographic information system (GIS) software package designed to manage and process spatial data. Prescription maps can be developed using algorithms that involve several data sources as well as personal experience.

Sensors for Automated Measurements

Scientists and equipment manufacturers are trying to modify existing laboratory methods or develop indirect measurement techniques that could allow on-the-go soil mapping. To date, only a few types of sensors have been investigated, including:

Electromagnetic

Optical

Mechanical

Electrochemical

Airflow

Acoustic

Electromagnetic sensors use electric circuits to measure the capability of soil particles to conduct or accumulate electrical charge. When using these sensors, the soil becomes part of an electromagnetic circuit, and changing local conditions immediately affect the signal recorded by a data logger. Several such sensors are commercially available.

Electromagnetic soil properties are influenced mostly by soil texture, salinity, organic matter, and moisture content. In some cases, other soil properties, such as residual nitrates or soil pH, can be predicted using these sensors. Several approaches for applying electromagnetic sensors have emerged in recent years.

Optical sensors use light reflectance to characterize soil. These sensors can simulate the human eye when looking at soil, as well as measure near-infrared, mid-infrared, or polarized light reflectance. Vehicle-based optical sensors use the same basic technique as remote sensing. To date, various commercial vendors provide remote sensing services that allow measurement of bare soil reflectance from a satellite or airplane platform. Cost, timing, clouds, and heavy plant residue cover are major issues limiting the use of bare soil imagery from these platforms.

Close-range, subsurface, vehicle-based optical sensors have the potential to be used on the go, in a way similar to electromagnetic sensors, and can provide more information about single data points since reflectance can be easily measured in more than one portion of the spectrum at a time. Several researchers have developed optical sensors to predict clay, organic matter, and moisture content.

Mechanical sensors can be used to estimate soil mechanical resistance (often related to compaction). These sensors use a mechanism that penetrates or cuts through the soil and records the force measured by strain gauges or load cells. Several researchers have developed prototypes that show the feasibility of continuous mapping of soil resistance; however, none of these devices is commercially available. The draft sensing or "traction control" system on tractors uses similar technology to control the three-point hitch on the go.

Electrochemical sensors could provide the most important type of information needed for precision agriculture — soil nutrient levels and pH. When soil samples are sent to a soil-testing laboratory, a set of standardized laboratory procedures is performed. These procedures involve sample preparation and measurement. Some measurements (especially determination of pH) are performed using an ion-selective electrode (with a glass or polymer membrane, or an ion-sensitive field effect transistor). These electrodes detect the activity of specific ions (nitrate, potassium, or hydrogen in the case of pH). Several researchers are trying to adapt existing soil preparation and measurement procedures to essentially conduct a laboratory test on the go. The values obtained may not be as accurate as a laboratory test, but the high sampling density may increase the overall accuracy of the resulting soil nutrient or pH maps.

Airflow sensors have been used to measure soil air permeability on the go. The pressure required to squeeze a given volume of air into the soil at a fixed depth was compared to several soil properties. Experiments showed potential for distinguishing between various soil types, moisture levels, and soil structure/compaction conditions.

Acoustic sensors have been investigated to determine soil texture by measuring the change in noise level caused by the interaction of a tool with soil particles. A low signal-to-noise ratio has so far prevented this technology from developing further.

Sensor Data Usage

Although various vehicle-based soil sensors are under development, only electromagnetic sensors are commercially available and widely used. Ideally, producers would like to operate sensors that provide inputs for existing prescription algorithms. Instead, commercially available sensors provide measurements, such as electrical conductivity (EC), that cannot be used directly since the absolute value depends on a number of physical and chemical soil properties, such as texture, organic matter, salinity, and moisture content. Nevertheless, electromagnetic sensors give valuable information about soil differences and similarities, which makes it possible to divide the field into smaller and relatively consistent areas referred to as management zones.

For example, such zones could be defined according to various soil types in a field. In fact, electrical conductivity maps usually can better reveal boundaries of certain soil types than soil survey maps (used for rural property tax assessment). Different anomalies such as eroded hillsides or ponding also can be easily identified on an electrical conductivity map. The following figure compares a soil survey and an electrical conductivity map for the same field showing some differences in boundaries.

Yield maps also frequently correlate to electrical conductivity maps, as shown below. In many instances, such similarities can be explained through differences in soil. In general, the electrical conductivity maps may indicate areas where further exploration is needed to explain yield differences. Both yield potential and nutrient availability maps may have a similar pattern as soil texture and/or organic matter content maps. Often these patterns also can be revealed through an electrical conductivity map.

Therefore, it seems reasonable to use on-the-go mapping of electromagnetic soil properties as one layer of data to discover the heterogeneity (differences) of soil within a field (similar to using bare soil imagery). Zones with similar electrical conductivity and a relatively stable yield may receive a uniform treatment that can be prescribed based on fewer soil samples in the zones on the electrical conductivity map.
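One simple way to delineate such zones from an EC map is to classify the readings into a few classes. The sketch below uses a quantile (equal-count) classification; commercial packages may instead use clustering or manual delineation, and the function name is hypothetical.

```python
def ec_zones(ec_values, n_zones=3):
    """Split field EC readings into n_zones classes of roughly equal
    count (a simple quantile classification).  Zone 0 is the lowest-EC
    class; each zone can then be soil sampled and treated independently.
    """
    ranked = sorted(ec_values)
    # Cut points at the 1/n, 2/n, ... quantiles of the EC distribution.
    cuts = [ranked[i * len(ranked) // n_zones] for i in range(1, n_zones)]
    def zone(v):
        return sum(v >= c for c in cuts)   # count of cut points passed
    return [zone(v) for v in ec_values]
```

Each resulting zone would then receive a uniform treatment prescribed from a small number of soil samples collected within it.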

As new on-the-go soil sensors are developed, different real-time and map-based variable rate soil treatments may be economically applied to much smaller field areas, reducing the effect of soil variability within each management zone.

Summary

More accurate soil property maps are needed to successfully implement site-specific management decisions. Inadequate sampling density and the high cost of conventional soil sampling and analysis have been limiting factors. On-the-go, vehicle-based soil sensors represent an alternative that could both improve the quality and reduce the cost of soil maps. When further developed, on-the-go soil sensors may be used for either real-time or map-based control of agricultural inputs. To date, only systems that map electromagnetic soil properties are available commercially. These maps can be used to define management zones reflecting obvious trends in soil properties. Each zone can be sampled and treated independently. Smaller management zones will be feasible when new on-the-go soil sensors are developed and commercialized.

Researchers at the University of Nebraska continue work on vehicle-based soil sensors, which could be used for research and commercial applications. The sensors can improve the quality and decrease the cost of soil maps and will facilitate the decision-making process.

Site-specific management of soil pH is a precision agriculture practice that can provide positive economic and environmental impacts on modern crop production. This publication addresses several frequently asked questions related to the meaning of soil pH, lime requirement, and quality of data used to prescribe site-specific management of soil pH.

What is site-specific management of soil pH?

One of the goals of precision agriculture is to manage agricultural inputs according to changing local field conditions in order to increase profitability and reduce environmental waste of agricultural inputs. According to many adopters, variable rate liming is one of the most profitable and popular practices in site-specific crop management. In addition to locating acidic field areas, knowledge of areas with alkaline soil conditions (high pH) can be useful to avoid lime application in those areas and to aid in the selection of crop varieties tolerant to problems associated with high pH (e.g., iron chlorosis).

Currently, variable rate lime prescription maps are generated based on soil samples collected manually and analyzed under laboratory conditions. These samples are usually obtained at a density of one sample per 2.5 acres (Soil Sampling for Precision Agriculture, EC00-154).

Is 2.5-acre grid sampling an adequate approach?

The adjacent photo illustrates a common problem with creating a prescription map for variable rate lime application using 2.5-acre grid sampling. In this case, 330- by 330-ft (2.5-acre) grid cells are superimposed on a bare-soil infrared image. The field has terraces, which appear as dark lines, so it is evident that there is a significant slope in this field. The white areas are eroded Nora soil, with alkaline (high pH) subsoil near the surface. The darker areas are less eroded and more acidic in the upper horizon.

If a few cores are taken near the center of a grid cell (red dot), the sample pH is likely to be greater than 7 since it is within the white area. As a result, the entire grid cell will receive no lime. If several cores are taken randomly throughout the grid cell, such as at the yellow dots, and then combined, the result will be nearer the average pH for the grid cell. However, the variability in this grid cell is likely to be as high as it is across the field, so little is accomplished. Using this method, it is likely that the grid cell will receive too little lime in the non-eroded portion, and too much lime on the eroded spot. Since the grid lines do not coincide with the patterns of variability, the variable rate application is not necessarily more appropriate than a uniform application. In this example, the analysis cost would be eight times as high as when a regular 20-acre composite sampling strategy is used. Overall, grid sampling with 2.5-acre grids increases analysis cost and often fails to adequately measure spatial pH variability, resulting in reduced profitability of variable rate liming.
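The eight-fold cost comparison follows directly from the areas sampled: a 2.5-acre grid requires eight analyzed samples for every one composite sample covering 20 acres. A quick check (the function name is my own):

```python
def analysis_cost_ratio(field_acres, grid_acres=2.5, composite_acres=20.0):
    """Ratio of laboratory-analysis costs: grid sampling vs. composite
    sampling, assuming one analyzed sample per grid cell or composite area.
    """
    grid_samples = field_acres / grid_acres
    composite_samples = field_acres / composite_acres
    return grid_samples / composite_samples
```

Note that the ratio is independent of field size; it is simply the composite area divided by the grid-cell area.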

The quality of prescription maps generated using grid sampling can be improved by decreasing the grid size to 1 acre; however, the cost of the laboratory analysis for pH and buffer pH will increase. Although the procedure will not need to be repeated for five or more years and the cost can be prorated over that time, the profitability of variable rate liming using this sampling strategy remains questionable. Even in a 1-acre grid cell, a 50-ft lime spreader can make four passes with several different applied rates in each pass (more than sixteen 50- by 50-ft squares can be located within a 1-acre grid cell). Therefore, the mapping method still does not match the resolution of the application equipment.

If spatial structure exists (that is, nearby samples are more similar than distant ones), certain map interpolation methods can be used to better predict lime application rates at unsampled locations. However, even with the best (from a scientific viewpoint) interpolation method, errors will remain. Any type of interpolation is ineffective when substantial soil variability occurs between the nearest soil samples.

Could directed sampling be helpful?

Directed (also called guided) sampling according to relatively uniform required lime application zones is a promising approach for many fields. The zones are determined by considering the variations in the field that may affect lime requirement, including soil types, topographic position, past management, aerial images of bare soil and growing crops, spatial variation in historical yields, soil electrical conductivity maps and/or other data layers.

How can the accuracy of soil pH maps be improved?

Since the beginning of the precision agriculture era, several researchers and manufacturers have pursued the development of on-the-go soil sensors to accurately map pH (and other soil properties) at a relatively low cost (On-the-Go Vehicle-Based Soil Sensors, EC02-178). Based on research conducted at Purdue University and the University of Nebraska–Lincoln, Veris Technologies, Inc., based in Salina, Kan., launched production of the world's first automated on-the-go soil pH mapping system in the summer of 2003. This product is called the Mobile Sensor Platform (MSP). It consists of a widely used electrical conductivity (EC) mapping unit and a Soil pH Manager™.

During field operation, the Soil pH Manager™ automatically collects and measures a soil sample without stopping. While mapping a field, row cleaners remove crop residue. A hydraulic cylinder on a parallel linkage retracts to lower the cutting shoe assembly into the soil, and the cutting shoe creates a soil core that flows into the sampling trough. The previous core sample is discharged at the rear of the trough as it is replaced by the new sample core entering at the front. The hydraulic cylinder extends to raise the sampling trough containing the soil core out of the soil while bringing the new sample into contact with two ion-selective pH electrodes. During sampling, the electrodes are washed with two flat fan nozzles. Covering disks fill the soil trench and cover the track. Measurement depth is adjustable from 1.5 to 6 inches, typically with a 3-inch average effective measurement depth. Soil cores are brought into direct contact with the electrodes and held in place for 7 to 25 seconds (depending on the electrode response). Every measurement represents an average of the outputs produced by the two electrodes. Two independent measurements allow cross-validation of electrode performance and filtration of erroneous readings. The recorded electrode output is converted to pH values according to the selected electrode calibration parameters. Every measurement is geo-referenced using a Global Positioning System (GPS) receiver.
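The conversion from electrode output to a cross-validated pH value might look like the sketch below. The calibration slope and intercept, and the agreement tolerance, are illustrative assumptions (the ideal Nernstian slope is about -59.2 mV per pH unit at 25°C); this is not the actual MSP firmware logic.

```python
def ph_from_electrodes(mv1, mv2, slope=-59.2, intercept=7.0, max_diff=0.5):
    """Convert two ion-selective electrode readings (mV) to a single pH.

    slope/intercept are per-electrode calibration parameters (assumed
    here: 0 mV corresponds to pH 7).  Readings that disagree by more
    than max_diff pH units are rejected as erroneous.
    """
    ph1 = intercept + mv1 / slope
    ph2 = intercept + mv2 / slope
    if abs(ph1 - ph2) > max_diff:
        return None                  # flagged for filtration
    return (ph1 + ph2) / 2.0         # average of the two electrodes
```

Returning `None` for disagreeing electrodes mirrors the cross-validation idea described above: a single failed electrode should produce a filtered point, not a wrong map value.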

This increase in sampling density frequently results in more accurate soil pH maps. For example, the figure above illustrates a 60-acre Kansas field. The neutral soil band near the northwest field boundary (caused by an adjacent gravel road) and a fuzzy pattern of acidic soil in the middle of the field were hidden when the 2.5-acre grid sampling approach was applied. Laboratory analysis of 10 validation samples confirmed that the map based on on-the-go sensing was more accurate than the interpolated map based on 2.5-acre grid sampling.

Can on-the-go soil sensing be used directly to prescribe lime application rates?

Soil pH maps based on on-the-go measurements indicate the variability of soil acidity/alkalinity but need to be translated into lime application maps prior to variable rate liming. This is somewhat challenging because soil buffering capacity typically varies across the field, so the amount of lime needed to change soil pH by one unit is not constant. The Veris® MSP therefore combines soil pH and electrical conductivity mapping capabilities, as electrical conductivity maps often reflect changes in soil texture (percentage of clay, silt, and sand), the major factor affecting soil buffering capacity. As a result, lime prescription maps can be calculated from the simultaneously obtained electrical conductivity and soil pH measurements.
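To make the idea concrete, here is a purely hypothetical prescription rule combining the two maps. The target pH, EC thresholds, and buffer factors are invented for illustration only; the actual algorithm used with the Veris MSP and university lime recommendations differ.

```python
def lime_rate(ph, ec, target_ph=6.5):
    """Hypothetical lime prescription (tons ECC/acre) from co-located
    soil pH and EC measurements.  All constants are illustrative.
    """
    if ph >= target_ph:
        return 0.0                       # no lime needed
    deficit = target_ph - ph
    # Use EC as a proxy for buffering: higher EC suggests finer texture,
    # hence more lime per unit of pH change (hypothetical factors).
    buffer_factor = 1.0 if ec < 10 else (1.5 if ec < 30 else 2.0)
    return round(deficit * buffer_factor, 2)
```

The point of the sketch is the structure, not the numbers: the pH map sets where lime is needed, and the EC map scales how much is needed at each point.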

Does variable rate liming pay?

As with other site-specific crop management strategies, the profitability of variable-rate liming depends on: 1) the quality of information, 2) the additional costs of application, data collection, and processing, and 3) the variability in lime requirement within the particular field.

For instance, variable-rate liming will not be profitable if lime requirement is uniform or soil acidity is not limiting the yield. Also, liming may require several years to impact the yield and should be considered a long-term investment. Finally, poor quality of information used to prescribe variable-rate liming may result in inappropriate changes of lime application rates and therefore increase (rather than reduce) soil pH variability at the farmer’s expense.

In a recent University of Nebraska–Lincoln study of the value of soil pH maps, it was shown that the expected net return (crop sale revenue) over the cost of lime (NRCL) during a four-year corn-soybean growing cycle is affected by the errors associated with different mapping approaches. Based on the model developed, higher errors mean lower potential benefit.

For the selected field conditions with slightly acidic (5.8 average pH) soil and 9% variability, “low accuracy map” means either a map obtained using 2.5-acre grid sampling or simply assuming that soil pH is constant across the field (composite field sampling). A “high accuracy map” can be obtained through 1-acre grid sampling, on-the-go mapping, or properly conducted directed sampling. The difference between expected NRCL corresponding to high and low accuracy maps represents the expected economic benefit that typically ranges between $5 and $15 per acre. Of course, this benefit should cover the difference in costs associated with both methods, which ranges between $0 and $20/acre. For example, the cost of on-the-go mapping can be similar to the 2.5-acre grid sampling, and the 1-acre grid sampling costs $20/acre more than the whole-field composite sampling.

Summary

Among the various precision agriculture practices, site-specific management of soil pH has been shown to be one of the most promising strategies in fields with substantial variability in soil pH. Justification of variable-rate liming is complicated by the following: liming is a long-term investment; lime requirements across fields are not always highly variable; and the conventionally implemented 2.5-acre grid soil sampling does not provide the sampling density needed to accurately determine the variability of soil pH in many fields. The recently commercialized technology of on-the-go soil mapping provides a better basis of information about the spatial variability of soil pH and other properties related to buffering characteristics (i.e., electrical conductivity). With proper consideration of all the information available, an optimized strategy for site-specific pH management can be developed, and positive economic and environmental impacts can be achieved.

The information in this publication is supplied with the understanding that no endorsement of specific products named, nor discrimination against products not named, is implied by University of Nebraska–Lincoln Extension.

As various aspects of precision agriculture are implemented in Nebraska, some of the most frequent questions asked by producers, fertilizer dealers and crop consultants relate to soil sampling. Should I soil sample this field on a grid? What grid spacing should I use? How often should I sample? Can I use a yield map to tell where to soil sample? All of these are good questions, but often we do not have definitive answers. Site-specific management research conducted in recent years in Nebraska, however, provides some direction on how to implement a soil sampling program for precision agriculture.

Basic Sampling Principles

Historically, the objectives of soil sampling have been to determine the average nutrient status of a field and to provide some measure of nutrient variability within a field. Soil sampling for precision agriculture has these same objectives, with some modifications. Instead of a field, producers are interested in areas within fields. They also are interested in relating trends in soil fertility levels to other field properties that are predictable or easily measured. Knowledge of factors influencing soil nutrient levels, including soil type, topography, cropping history, manure application, fertilizer application, and leveling for irrigation, will help the producer determine the most effective sampling approach. The basic principles of soil sampling still apply to precision sampling: an adequate number of samples should be collected to accurately characterize nutrient levels; samples should be collected to the proper depth for non-mobile and mobile nutrients; and samples should be handled and stored to minimize contamination and degradation.

Grid Sampling

When variable rate fertilizer application was first practiced 8-10 years ago, application maps were most often derived from grid soil samples collected at average densities of one sample for every three to four acres. In research studies conducted in Nebraska, fields have been grid sampled at much higher densities (up to 42 samples per acre) to approximate the true spatial variability of a number of soil nutrient levels. Sampling at high densities allows the evaluation of lower sampling densities on nutrient maps. In some cases, fewer samples can result in inaccurate maps.

Consider directed sampling if:
- Yield maps, remotely sensed images, or other sources of spatial information are available and show consistency from one layer to another.
- You have experience farming the field that you feel would provide direction on where to delineate management zones.
- There is limited or no history of livestock or manure influence on the field.

The following figure shows how a tenfold range in sampling density at a research site in Lincoln County resulted in significantly different patterns. In this case, the coarser sampling grid missed a systematic pattern in soil nitrate, probably related to livestock fencing. The average recommended nitrogen rate for the field at the higher grid density was 148 lb N/acre. The average recommended nitrogen rate was 162 lb N/acre at the lower grid density; 45 percent of the field received a different nitrogen recommendation with the coarser grid. Even so, the coarse grid was denser than most commercial grid sampling practiced by fertilizer dealers and crop consultants.

In other situations, accurate maps can be generated at much lower sampling densities. At a site in Buffalo County, a grid density of 14 samples per acre was compared to a density of one sample per 3.7 acres. The coarse grid is similar to that used commercially. In this case, the nitrogen rate maps were not greatly different — 17.6 percent of the field received a different nitrogen recommendation with the coarser grid, and the average nitrogen rate was the same for both grids — 158 lb N/acre.

The optimum grid density depends on the site, and to some extent on which nutrient is being assessed — soil organic matter, nitrate, phosphorus, zinc, etc. It helps to know the spatial variability of the field in order to know the optimum grid density — which, after all, is the reason for grid sampling. This also raises the basic question of why we would choose to grid soil sample. Is there a better way to obtain the desired information?

Directed Sampling

Directed soil sampling is in many ways simply an extension of how soil samples were often collected in the past. For example, if a field contains significant areas of more than one soil series, the University of Nebraska recommendation was to collect samples from each soil series. Also, if parts of the field had different preceding crops, different fertilization histories, eroded areas, or an old farmstead location, these areas were to be sampled separately. In these situations, the producer is using his knowledge of spatial factors to direct where samples are taken to determine if they have different fertilizer needs. The new tools of yield maps, aerial photographs and remotely sensed images simply provide more information about variability in the field and where soil sampling can help interpret variability.

Recommendations

Producers interested in soil sampling for precision agriculture should first consider how they will use information from soil sampling. Some variable rate fertilizer application equipment is controlled by software based on grid samples. In these situations, the field will need to be grid sampled, or there must be some way of generating grid information from directed samples. Check with your custom applicator to ensure that the information you collect will be compatible with the variable rate equipment requirements.

Grid Sampling

Density. A well-done nutrient map derived from a grid sample can be a valuable resource for many years. Consequently, the density should be adequate to provide confidence in the accuracy of the maps developed from the data. We suggest analyzing one sample per acre, composited from five cores collected in a tight radius about the sample point (Figure 3). This density will result in a map that will be good for many years — 10 to 20 years for soil organic matter and cation exchange capacity; five to ten years for pH; and four to five years for phosphorus, potassium and zinc. On fields in which variability is expected to be low, a sampling density of one sample per two to two-and-one-half acres may be acceptable. Grid sampling at densities coarser than one sample for every 2.5 acres is not recommended if the goal is to develop a resource of nutrient maps that can be used with confidence over several years.

Sampling Pattern and Depth. An offset grid pattern is recommended as shown in the figure below. This will provide more information at a lower cost than a regular grid pattern. Individual cores should be collected in a radius of 8-10 feet of the grid point, to a depth of 8 inches. The grid point should represent the central position of a composited sample. Collect samples within the 8-10 foot radius randomly, in order to avoid systematic patterns such as starter or preplant bands. Conduct a general fertility analysis on the samples, including soil organic matter, pH, phosphorus, potassium and other nutrients of interest.
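As a rough illustration of the offset pattern described above, the sketch below generates staggered grid sample points for a hypothetical rectangular field. The 208.7-foot spacing (roughly one point per acre) and the quarter-section field dimensions are assumptions made for the example, not part of the recommendation.

```python
def offset_grid(width_ft, length_ft, spacing_ft=208.7):
    """Generate offset (staggered) grid sample points for a rectangular
    field. A spacing of ~208.7 ft gives roughly one sample point per
    acre (43,560 sq ft). Each point marks the center of a composited
    sample of five cores taken within an 8-10 foot radius."""
    points = []
    row = 0
    y = spacing_ft / 2
    while y < length_ft:
        # shift every other row by half the spacing to form the offset pattern
        x = spacing_ft / 2 + (spacing_ft / 2 if row % 2 else 0)
        while x < width_ft:
            points.append((round(x, 1), round(y, 1)))
            x += spacing_ft
        y += spacing_ft
        row += 1
    return points

# A quarter section (2,640 ft x 2,640 ft = 160 acres) should produce
# roughly 160 sample points at one sample per acre.
pts = offset_grid(2640, 2640)
print(len(pts))
```

The staggered rows mean each sample point covers the field more evenly than a square grid of the same density, which is why the offset pattern provides more information at the same cost.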

Frequency. As already mentioned, a nutrient map derived from a grid-sampled field can last a long time. Variable rate application of fertilizer or lime has the potential to change nutrient levels or soil pH over time, but soil phosphorus levels will not change drastically with a single variable rate application. We suggest that grid samples be collected every five years for phosphorus. Lime application according to recommendations should amend soil pH for 8-10 years. Even if variable rate lime application has occurred according to a grid-sampled map of pH, it should not be necessary to grid sample for soil pH for 8-10 years after application.

Residual Nitrate Sampling. Grid sampling for nitrate-N is not recommended because annual fluctuations in nitrate levels would require annual grid sampling, which is not cost effective for most crops with current fertilizer prices. Instead, residual nitrate sampling (to a depth of 3 feet) should be done on a directed sampling basis.

Directed Sampling

Consider Multiple Data Layers. Patterns which show consistency from one data layer to another, such as multiple years of yield maps, or a yield map and an aerial photo, are more likely related to soils than other sources of variability. In many cases, a soil series map or topography map can be a good base upon which to overlay yield maps and other sources of spatial information. Your experience gained from tillage, cultivation, harvest and field scouting can also serve as effective data layers.

Minimize Subdivision. After deriving information from multiple data layers, including your experience, subdivide the field into management zones. Look for general categories when subdividing and avoid creating many subdivisions. Generally, four to six subdivisions should be adequate. Excessive subdivision may create small areas which are not really manageable. Management zones need not be contiguous. Samples collected from more than one area of a field may fall into the same range of yield, soil color, etc. and thus the same zone.

Soil Fertility Isn’t Everything. As you look for consistent patterns in fields, remember that soil fertility will not be the only factor influencing patterns in yield maps, remotely sensed images, and other sources of spatial information. Soil factors other than fertility, such as compaction, top-soil depth, and texture will influence patterns. Other sources of stress, such as disease, weeds and insects may significantly influence yield and other patterns. Consider scouting fields for these factors during the growing season according to categories derived from spatial data.

Accurately Sample Each Zone. Soil samples should be collected from each zone according to current recommendations (NebGuide G91-1000, Guidelines for Soil Sampling). For general fertility recommendations, collect 10-15 cores to a depth of 8 inches from within the zone, then composite samples into one to send to the lab for analysis. Samples can be georeferenced with a GPS receiver for repeatability if desired. This will allow you to collect samples in the future from basically the same locations, even though you are compositing the cores for analysis.

Residual Nitrate Sampling. Collect six to eight cores to a depth of 3 feet for residual nitrate from each zone, compositing the samples into one to send to the lab for nitrate analysis. For convenience, consider collecting a deep sample for residual nitrate at every other location that you collect surface samples, particularly if georeferencing sample locations.

Choosing a Method

Both grid and directed soil sampling are valid options for precision soil sampling — each has advantages and disadvantages. Unless the grid is dense enough, grid sampling may miss patterns and boundaries that are evident from soil surveys or yield maps. Grid sampling is also expensive — both to collect and to analyze the samples. Directed sampling uses other sources of spatial information to make informed decisions on where to sample; however, there may be patterns in soil fertility which are not detectable except with grid sampling. The adjoining figure is an example of such a situation. This is a map of soil phosphorus from a field in Clay County. The pattern of soil phosphorus is strongly influenced by the location of a farmstead with confined livestock in the northern portion of the field at some time in the past — 40 or more years ago. Without knowing the farmstead’s location, a directed sampling approach would not be likely to detect this area of high soil phosphorus. Other sources of spatial information (the county soil survey, yield map, aerial photograph) give no indication of high soil phosphorus or the past presence of a farmstead. This field is also an example of the benefits of precision sampling over traditional sampling methods. The average Bray-1 P test is 15.1 ppm — just slightly over the critical level of 15 ppm at which phosphorus fertilization is recommended by the University of Nebraska for corn. Traditional sampling procedures might suggest that this field does not need phosphorus fertilizer; however, precision sampling shows that most of the field actually tests well below 15 ppm, and phosphorus fertilization should significantly increase yield.

Applying different amounts of nitrogen (N) fertilizer in different parts of the field according to soil conditions seems intuitively obvious. Producers know soils differ within fields, and often those differences can result in significant yield variation. During the growing season, crops may express differences in leaf color if nitrogen or other nutrients are low in supply and deficiencies result. Crop and soil computer simulation models also suggest there can be substantial differences in soil nitrogen supply or crop nitrogen demand within a field. Yet, in practice, researchers and producers alike have found it difficult to profitably implement site-specific nitrogen management (SSNM) for most agronomic crops. This publication reviews recent research in site-specific nitrogen management and recommends how irrigated corn producers in Nebraska might implement this technology on their farms.

Approaches to Site-Specific Nitrogen Management

Based on research in Nebraska with irrigated corn, we suggest producers consider three options for varying nitrogen rate within fields: a predictive approach (zone-based yield potential), a reactive approach (sensor-based) and one that uses localized references within yield potential zones.

Predictive Approach

A predictive approach to nitrogen management is one in which the time and amount of nitrogen application is prescribed prior to planting, accounting for soil nitrogen supply, crop nitrogen demand, fertilizer nitrogen efficiency and fertilizer and crop prices. A site-specific predictive approach relies primarily on the use of multiple layers of spatial information to generate yield potential zones within fields. Accordingly, a field is a good candidate for site-specific nitrogen management only if it appears to have some significant variability — in texture, elevation, management or some other known factor. If a field appears to be quite uniform in nature, it is not likely a good candidate for varying nitrogen rate spatially within the field.

Spatial Data Collection

The first step in this approach is to collect spatial data. The starting point should be at least three years of yield maps. If the field has been in a row-crop rotation, we suggest normalizing yield to allow comparison across crops. One approach to normalization is to express relative yield as a ratio of the actual yield to the field average. For example, if the actual yield for a point is 197 bushels per acre, and the field average for that year is 235 bushels per acre, the relative yield is 0.838. Having corn and soybean yield expressed in relative terms allows quantitative comparison of yield over years. Normalization of yield for a given year’s data should only occur after yield measurements collected with a combine yield monitor have been cleaned to remove outliers. Yield-cleaning algorithms are built into some yield mapping software packages or are available as stand-alone programs. Examples of free yield-cleaning software are Yield Check (University of Nebraska–Lincoln, soilfertility.unl.edu) and Yield Editor (University of Missouri, www.ars.usda.gov/Services/docs.htm?docid=4776).
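The normalization and cleaning steps above can be sketched in a few lines. The median-based outlier rule here is a deliberately simplified stand-in for dedicated cleaning tools such as Yield Check or Yield Editor, and the cutoff factors are assumptions for illustration.

```python
import statistics

def clean_yield(points, low=0.2, high=2.5):
    """Drop obvious yield-monitor outliers before normalizing.
    Points outside low*median .. high*median are discarded -- a far
    simpler rule than dedicated tools such as Yield Editor apply."""
    med = statistics.median(points)
    return [p for p in points if low * med <= p <= high * med]

def normalize_yield(points):
    """Express each point as a ratio of the field average, so corn
    and soybean years can be compared on the same relative scale."""
    avg = sum(points) / len(points)
    return [round(p / avg, 3) for p in points]

# The example from the text: 197 bu/ac against a 235 bu/ac field
# average gives a relative yield of 0.838.
rel = normalize_yield([197, 235, 273])
print(rel[0])
```

Cleaning must come first: a single erroneous monitor reading of several thousand bushels per acre would inflate the field average and depress every relative yield value.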

Various other spatial data layers should be considered as well. The following information layers are suggested, in order of cost-effectiveness: soil series, aerial imagery, soil ECa and elevation. Digitized soil series boundaries (Soil Survey Spatial and Tabular Data – SSURGO), elevation (digitized elevation models – DEM), aerial imagery (Digital Ortho Quadrangles – DOQ) and other geospatial information layers are available free from a variety of on-line resources, such as the Nebraska Department of Natural Resources (www.dnr.ne.gov/databank/spat.html) and the Natural Resources Conservation Service (datagateway.nrcs.usda.gov/). Several recent years of aerial imagery may be available from the NRCS at little or no cost.

Creating Yield Potential Zones

Once several layers of spatial information are available, integrate those layers into three to five yield potential zones — areas of distinctly differing yield potential which are consistent from year to year. Currently, that is easier said than done. The simplest approach is to lay out various maps on a table, compare them side-by-side, and look for common features over space and time. Based on this visual comparison, manually draw boundaries for yield potential zones on a base map. The capability of quantitative management zone delineation may exist in various agriculturally oriented software packages, but details will vary from system to system. The following figure is one example of delineating yield potential zones. In this example, two years of relative yield (2004 corn and 2005 soybean) are combined with deep ECa (0-3 feet) and yield potential zones defined using MZA software. For this example, using two yield potential zones is most appropriate. Small inclusions of each zone within the other are relatively minor and can be ignored, as they are too small to manage practically. As software options are continually refined, the process of delineating yield potential zones should become easier. The producer needs to agree in principle with the zones delineated by software. There should be some reasonable explanation for why different zones exist, the number of zones and where boundaries occur. Yield potential zones may not be contiguous — that is, areas with similar yield potential may be in different areas of the field. These can and should be treated as one zone, as their properties and yield potential are the same.
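For readers curious how software might combine layers into zones, here is a minimal sketch using plain k-means clustering. MZA actually uses fuzzy c-means, and the data values below are invented for illustration; in practice, layers with very different scales (such as ECa and relative yield) should be standardized before clustering.

```python
import random

def kmeans(rows, k, iters=50, seed=0):
    """Minimal k-means clustering of per-point data layers, e.g.
    [relative yield 2004, relative yield 2005, deep ECa]. A simpler
    stand-in for the fuzzy c-means method used by MZA."""
    random.seed(seed)
    centers = random.sample(rows, k)
    for _ in range(iters):
        # assign each point to its nearest center (squared distance)
        labels = [min(range(k), key=lambda c: sum(
            (x - y) ** 2 for x, y in zip(row, centers[c]))) for row in rows]
        # recompute each center as the mean of its assigned points
        for c in range(k):
            members = [r for r, lab in zip(rows, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Two obvious groups: lower-yield/lower-ECa vs. higher-yield/higher-ECa.
data = [[0.70, 0.80, 20], [0.75, 0.78, 22],
        [1.20, 1.25, 45], [1.18, 1.30, 44]]
zones = kmeans(data, 2)
print(zones)
```

As the text notes, the producer should still agree in principle with whatever zones the software delineates; clustering will always produce k groups whether or not they are agronomically meaningful.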

Soil Sampling

Once yield potential zones for a field have been created, these provide direction for soil sampling in the field. General fertility soil samples should be collected to a depth of 8 inches. These samples can be analyzed for soil organic matter, pH, phosphorus, potassium, zinc and nitrate-N. Typically, collect 15-20 cores from each zone, then composite these samples and keep a well-mixed subsample to send to the lab. Deep soil samples for residual nitrate-N should be collected from 8-10 cores in each zone to a depth of 3 feet. These can be separated into depth increments or treated as a single continuous core, but must be well mixed before saving a subsample to send to the lab. Samples should exclude areas, such as old feedlots or farmsteads, that may tend to skew soil test results. (If these areas are large enough, they should be considered as separate zones.)

This process will provide soil test results which are average for the zone, at significantly less cost than grid soil sampling. If yield potential zones are not contiguous, as will often be the case, normally these should be sampled and composited as one zone to minimize sampling cost. The following figure is an illustration of soil sampling patterns for a field with five yield potential zones. In this case, Zone 2 has two patches within the field. Since soil characteristics and yield potential are the same for each of the patches of Zone 2, samples from the two patches can be combined for analysis.

Nitrogen Recommendations

Set the expected yield for the middle yield potential zone as the field average. Set expected yield for higher or lower yield potential zones accordingly, but don’t differ from the field average more than about 30 percent. Use the University of Nebraska–Lincoln nitrogen rate algorithm for corn to generate nitrogen recommendations for each yield potential zone. Use the zone-specific expected yield, organic matter and residual nitrate-N. A spreadsheet is available at soilfertility.unl.edu to calculate economically adjusted nitrogen rates for corn, using current corn and fertilizer prices.

This process will result in fertilizer nitrogen recommendations which are uniform within each yield potential zone. We suggest that site-specific nitrogen management using this approach be used with preplant or sidedress application only, not fall application. If fertigation through a center pivot irrigation system is planned for the field and will apply nitrogen uniformly across the field, adjust variable nitrogen rates downward according to the planned fertigation amount.
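The zone-by-zone calculation described above might look like the sketch below. The formula follows the general yield-and-credit form of the published UNL corn algorithm, but the coefficients and zone values here are illustrative placeholders; use the UNL spreadsheet at soilfertility.unl.edu for actual, economically adjusted recommendations.

```python
def corn_n_rate(expected_yield, om_pct, nitrate_ppm):
    """Zone-level nitrogen recommendation (lb N/acre): a base term
    plus a yield term, minus credits for residual nitrate-N and soil
    organic matter. Coefficients are illustrative, not the official
    UNL values -- consult soilfertility.unl.edu before applying."""
    n = (35 + 1.2 * expected_yield
         - 8 * nitrate_ppm
         - 0.14 * expected_yield * om_pct)
    return max(0, round(n))

# Three hypothetical yield potential zones. Expected yield is held
# within +/-30% of the 200 bu/ac field average, per the guidance above.
field_avg = 200
zone_inputs = {            # (yield factor, OM %, residual NO3-N ppm)
    "low":    (0.80, 1.5, 6),
    "medium": (1.00, 2.5, 8),
    "high":   (1.20, 3.0, 10),
}
for zone, (factor, om, no3) in zone_inputs.items():
    print(zone, corn_n_rate(field_avg * factor, om, no3))
```

Note that higher-yielding zones do not automatically receive much more nitrogen: their larger yield term is partly offset by larger organic matter and residual nitrate credits.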

There may be situations where producers have access to more detailed information to further refine nitrogen rates within zones, rather than applying a uniform rate for each zone. The most likely situation is the use of detailed soil organic matter data. Since the University of Nebraska–Lincoln nitrogen recommendation algorithm for corn uses soil organic matter as a variable, it may make sense to use detailed soil organic matter within zones, along with zone average residual soil nitrate and expected yield. Soil organic matter can be predicted fairly accurately from high resolution bare soil aerial photographs. In addition, there are several prototype on-the-go soil sensors for soil organic matter measurement which may be available soon. One example comes from a site where six yield potential zones were delineated; fertilizer nitrogen rate was then varied within zones according to the soil organic matter map.

Reactive Approach

Reactive nitrogen management allows the timing and amount of fertilizer nitrogen to be regulated through diagnostic tools that assess soil or crop nitrogen status and yield potential during the growing season. Recent research has focused on the use of either passive or active vehicle-mounted sensors for real-time nitrogen management. Passive sensors rely on canopy reflectance from the sun, while active sensors use their own light source. Consequently, passive sensors must be used during daylight hours, and factors influencing canopy reflectance from the sun, such as clouds and solar angle, may influence spectral data. Active sensors are designed to cancel out solar influences, relying solely on reflectance from the internal light source, and thus can be used anytime, day or night, regardless of cloud cover. Active sensors generally are designed to emit light in both visible and near-infrared wavelengths, and use ratios of these spectra, called vegetation indices, to determine canopy chlorophyll status and thus nitrogen status, as well as biomass. Examples of commercial active crop canopy sensors are the Greenseeker (www.ntechindustries.com/) and the Crop Circle (www.hollandscientific.com/). Aerial photographs also can be used for reactive nitrogen management, especially if both natural color and near-infrared images are available.
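A vegetation index calculation of the kind these sensors perform can be sketched as follows. The reflectance values are assumed for illustration, and commercial sensors such as the GreenSeeker and Crop Circle apply their own proprietary calibrations.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    visible (red) reflectance -- one of the spectral ratios active
    canopy sensors use to gauge chlorophyll status and biomass."""
    return (nir - red) / (nir + red)

def sufficiency_index(target_vi, reference_vi):
    """Crop N status relative to a well-fertilized reference strip;
    values near 1.0 indicate no detectable nitrogen stress."""
    return target_vi / reference_vi

# Assumed reflectance values: an N-stressed canopy reflects less NIR
# and more red light than the well-fertilized reference.
vi_target = ndvi(0.45, 0.10)
vi_reference = ndvi(0.50, 0.06)
print(round(sufficiency_index(vi_target, vi_reference), 2))
```

Normalizing against an in-field reference is what makes the reading actionable: it separates nitrogen stress from field-wide effects such as hybrid, weather and growth stage.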

Soil Sampling

This step can be considered optional if nitrogen management will be sensor-based. Soil test information collected prior to the growing season may be useful in establishing a target nitrogen rate for the field, but is not essential. If soil samples are collected, use sampling procedures recommended in UNL Extension EC-155, Nutrient Management for Agronomic Crops.

Initial Nitrogen Application

Prior to or at planting, apply a portion of the anticipated fertilizer nitrogen requirement for the crop. The primary need is to supply adequate nitrogen for the crop until the canopy nitrogen status can be accurately sensed, at about the V8 leaf stage. This amount will depend on soil residual nitrate-N and organic matter levels, but typically will be in the range of 40-70 lb nitrogen per acre. At the same time, apply nitrogen to at least two reference strips crossing the range of soils found in the field. The nitrogen rate applied to these reference strips should be high enough to ensure that nitrogen will not limit corn yield potential throughout the growing season, but not excessively high. We suggest rates of 200 lb nitrogen per acre following soybean, or 250 lb nitrogen per acre following corn, which can be adjusted if soil residual nitrate-N levels are known. For successive years of site-specific nitrogen management, rotate the location of reference strips.

Measure Canopy Nitrogen Status

In-season canopy nitrogen sensing can be useful even when variable rate application is not planned but uniform fertigation is an option. Crop canopy sensing can still be done with sensors on a high clearance vehicle or with aerial photographs, with nitrogen applied later through the irrigation system. Canopy sensing should be complete no later than V16. Nitrogen application rates should be calculated in the same manner as previously described. Apply nitrogen at a rate of 20-30 lb nitrogen per acre per irrigation within two weeks after silking.

Localized References

The use of localized references is an alternative to field-length, fixed-rate nitrogen reference strips. Localized references, also termed calibration ramps (Oklahoma State University), are relatively small areas with differential nitrogen rates. To be most effective, these should be located after yield potential zones have been determined, using procedures described earlier. Within each yield potential zone, locate nitrogen rate blocks of 0.5, 0.75, 1.0, 1.25 and 1.5 times the field average nitrogen rate. Nitrogen should be applied preplant or at planting, in order to ensure an adequate nitrogen supply early in the growing season. Localized references can be used in several ways. If they are relatively small (perhaps 50-100 feet long), the primary use will be to calibrate nitrogen sensors within each yield potential zone. If larger nitrogen rate blocks are used — perhaps 300 feet long — they will be large enough to yield map accurately. The highest nitrogen rate can serve as a reference for crop canopy sensors in the same manner as a fixed-rate, field-length strip. They also can be useful in interpreting aerial photographs. Larger localized references also can be used without sensors, by collecting data on yield response to nitrogen within separate yield potential zones. This information provides field- and zone-specific nitrogen rate calibrations which will help fine-tune nitrogen management in future years. As with field-length reference strips, be sure to place localized references in different locations each year to ensure that nitrogen response is not influenced by residual effects of the prior years’ treatments.
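The rate blocks described above are simple multiples of the field average rate, as the short sketch below shows; the 160 lb N/acre field average is an assumed example value.

```python
def reference_block_rates(field_avg_n, factors=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Nitrogen rates (lb N/acre) for a set of localized reference
    (calibration ramp) blocks, expressed as multiples of the field
    average rate, per the guidance above."""
    return [round(field_avg_n * f) for f in factors]

# For an assumed field average rate of 160 lb N/acre:
print(reference_block_rates(160))
```

The spread from 0.5x to 1.5x brackets the field average so that both under- and over-fertilized responses are observable within every yield potential zone.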

Precision Ag Resources

ActiveSync

Microsoft ActiveSync is synchronization software for Windows Mobile-based Pocket PCs. The software is used for data transfer between desktop PCs and PDAs.

As-applied map

Is a map containing site-specific information about the location and rate of application for fertilizer or chemical input. Usually created with a GPS-equipped applicator and data logger.

Atomic clock

The world’s most accurate clock. Each GPS satellite has an atomic clock on-board, which is used by GPS receivers to calculate position.

Attribute

A characteristic that describes a feature. Attributes are pieces of information that describe a point, line, or polygon.

Autonomous operation

Vehicle guidance without the need for human intervention. A tractor may be driven by a series of on-board sensors and GPS for precision driving without damage to crops.

Backup

A computer backup is a duplicate set of data that can be used to retrieve files in the event of data loss. Backups are often stored using CD-R, DVD, or USB external hard drives.

Base station

Also called a reference station, is a receiver located at a surveyed benchmark. The base station calculates the error for each satellite and through differential correction, improves the accuracy of GPS positions collected at unknown locations by a roving GPS receiver.

Card reader

A device that reads removable memory cards such as Secure Digital, Smart Media, CF, and Memory Sticks. Usually connects to a PC using a USB port.

CD-R

A recordable compact disc. Most CD-R discs hold up to 650 MB. Can be used for moving large amounts of data or as a data backup.

Coast Guard Beacon

Coast Guard Beacon is a differential correction source for GPS. The Coast Guard built the system for maritime navigation. The signal is ground-based and generally available throughout the United States, but may be limited in some areas. The signal is free, but requires a special GPS antenna.

Com port

A communication port is used to connect electronic devices to a computer. Commonly a 9-pin connector called DB-9.

Compact flash (CF) card

A CF card is a small removable mass storage device. Used as non-volatile memory in digital cameras and PDA devices.

Constellation

Refers to either the specific set of satellites used in calculating positions or all the satellites visible to a GPS receiver at one time.

Coverage

A vector data format for storing the location, shape, and attributes of geographic features. A coverage usually represents a single theme such as soils, streams, roads, or land use. It is one of the primary vector data storage formats for ArcInfo. A coverage stores geographic features as primary features (such as arcs, nodes, polygons, and label points) and secondary features (such as tics, map extent, links, and annotation). Associated feature attribute tables describe and store attributes of the geographic features.

Data logger

Used to store electronic data sent by a measurement device. A yield monitor is an example of a data logging device.

Datum

A set of parameters and control points used to accurately define the three-dimensional shape of the earth. The datum defines part of a geographic coordinate system that is the basis for a planar coordinate system. For example, the North American Datum for 1983 (NAD83) is the datum for map projections and coordinates within the United States and throughout North America.

Digital Elevation Model

A Digital Elevation Model (DEM) depicts landscape elevation. The data are developed by the USGS at either 10-meter or 30-meter grid spacing. Often used in a GIS as a data layer.

Differential Global Positioning System (DGPS)

Is a technique used to improve GPS accuracy. See differential correction.

Differential correction

A technique used to improve GPS accuracy. Differential correction signals remove or reduce errors associated with atmospheric effects and other positioning errors. The correction signal can be land-based or satellite-based. Common sources of differential correction include WAAS, Omnistar, and Coast Guard Beacon.

DOQ

Digital orthophoto quadrangle (DOQ) is an aerial photograph that has been orthorectified--altered so that it has the geometric properties of a map. DOQs meet National Map Accuracy Standards, so the user can measure distances accurately on a DOQ.

DVD

A digital media source capable of storing large amounts of data. Is a common format for movies, but is also used for computer data storage. Each DVD can hold up to 4.7 GB of data.

e00

ArcInfo interchange format used to transfer coverages. The data must be converted prior to use in a GIS system.

Expansion pack

Is a sleeve that slides onto an iPAQ PDA device. The expansion pack may contain extra battery power, memory slots or other enhancing features.

Fix

A single position calculated by a GPS receiver with latitude, longitude, altitude, time, and date.

Geographic coordinate system

A reference system using latitude and longitude to define the locations of points on the surface of a sphere or spheroid.

Geostationary satellite

A satellite positioned approximately 35,700 km above the earth’s equator. At this altitude the satellite orbits as fast as the earth rotates on its axis, so it remains effectively stationary above a point on the earth. Satellite-based differential correction satellites are geostationary.

Geosynchronous satellite

A satellite moving west to east whose orbital period is equal to the earth’s rotational period. If the orbit is circular and lies in the plane of the equator, the satellite will remain over one point. Otherwise it will appear to make a figure eight once a day between the latitudes that correspond to its angle of inclination over the equator. GPS satellites are not geosynchronous; each completes two orbits per sidereal day.

GeoTiff

GeoTIFF refers to TIFF files, which have geographic data embedded within the TIFF file. The geographic data can then be used to position the image in the correct location and geometry on the screen in a geographic information system.

GIS (Geographic Information System)

A computer based system that is capable of collecting, managing and analyzing geographic spatial data. This capability includes storing and utilizing maps, displaying the results of data queries and conducting spatial analysis.

GPS (Global Positioning System)

Global Positioning System is a constellation of 24 satellites, developed by the U.S. Department of Defense, used to calculate your position anywhere on earth.

Grain flow dynamics

Refers to the grain flow through the harvest machine. Grain flow dynamics result in errors relating to grain mixing, delayed measurement and resulting positional errors in yield mapping systems.

JPEG

JPEG is a standardized image compression mechanism used to reduce file size. JPEG stands for Joint Photographic Experts Group. JPEG is designed for compressing either full-color or gray-scale images. Often used for internet delivery of aerial photography.

JPW

JPEG World file (JPW) contains coordinate information used to position an image with respect to a coordinate system. JPEG World File format is supported by most GIS software packages. However, the JPW file does not contain projection information and care should be taken to ensure that the coordinate system is the same as other data layers in a GIS system.

Latitude

A north/south measurement of position perpendicular to the earth's polar axis.

Lightbar

Is a navigation tool coupled with a GPS receiver, designed to keep the driver on course. Applications include planting and fertilizer application to reduce skips and overlaps. Typically, guidance is provided through a series of LED lights.

Line

A shape having length and direction but no area, connecting at least two x, y coordinates. Lines represent geographic features too narrow to be displayed as an area at a given scale, such as contours, street centerlines, or streams, or linear features with no area such as state and county boundary lines.

Longitude

An east/west measurement of position in relation to the Prime Meridian, an imaginary circle that passes through the north and south poles.

Memory stick

A memory stick is a small removable mass storage device about the size of a stick of gum. Used as non-volatile memory in digital cameras and other electronics made by Sony Corporation.

Map Projection

A mathematical formula that transforms feature locations between the earth's curved surface and a map's flat surface. A projected coordinate system includes the information needed to transform locations expressed as latitude and longitude values to x, y coordinates. Projections cause distortions in one or more of these spatial properties--distance, area, shape, and direction.

Mass flow sensor

Is a sensor that measures grain flow in a yield monitor system.

MHz

Megahertz. In computers, a measure of processor speed. Higher numbers reflect a faster computer processor.

Moisture sensor

Is a sensor that measures grain moisture in a yield monitor system.

Multimedia card

A multimedia card is a small removable mass storage device. Used as non-volatile memory in digital cameras and PDA devices.

Multipath

Interference caused by reflected GPS signals arriving at the receiver, typically as a result of nearby structures or other reflective surfaces.

Omnistar

A subscription-based, satellite-delivered differential GPS (DGPS) correction source. Omnistar requires a special GPS antenna.

Operating system

Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.

PDA

A Personal Digital Assistant (PDA) is a small portable computer, usually handheld or pocket sized, that organizes data such as your schedule, appointment calendar, address book, and to-do list. PDAs are also used for GPS navigation and GIS data collection.

PDOP

Position Dilution of Precision: an indicator of satellite geometry for the constellation of satellites used to determine a position. Higher PDOP values generally indicate poorer position accuracy than lower values.

Pixel

A pixel is an individual cell in a matrix that comprises a raster dataset. See Raster.

Point

A single x, y coordinate that represents a geographic feature too small to be displayed as a line or polygon at a specific scale.

Polygon

A two-dimensional closed figure with at least three sides that represents an area.

Processor

A silicon chip that contains a CPU. In the world of personal computers, the terms microprocessor and CPU are used interchangeably. At the heart of all personal computers is a microprocessor. Microprocessors also control the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles.

Projection

See map projection.

RAM

Acronym for random access memory, a type of computer memory that can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes. RAM is volatile, temporary memory used for fast access to programs and data currently in use.

Raster

Represents any data source that uses a grid structure to store geographic information. Common examples include remotely sensed data like satellite imagery, scanned data, and photographs.

Re-project

The process of converting a map from one map projection to another; for example, from Lat/Long to UTM coordinates.

ROM

Acronym for read-only memory, computer memory on which data has been prerecorded. Unlike RAM, ROM retains its contents even when the computer is turned off. ROM is referred to as being nonvolatile, whereas RAM is volatile.

Rover

A mobile GPS receiver used to collect positions in the field, often while receiving real-time corrections from a base station.

RTK

Real-Time Kinematic: a carrier-phase differential GPS technique in which corrections broadcast from a nearby base station allow centimeter-level positioning accuracy in real time.

Scale

The ratio or relationship between a distance or area on a map and the corresponding distance or area on the ground.
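The arithmetic behind the representative fraction is simple: at 1:24,000, one unit on the map equals 24,000 of the same units on the ground. A minimal sketch (example values are illustrative):

```python
# Sketch: converting a measured map distance to a ground distance
# using the map's representative fraction (scale denominator).

def ground_distance(map_distance, scale_denominator):
    """Ground distance in the same units as the map distance."""
    return map_distance * scale_denominator

# 2.5 cm measured on a 1:24,000 map:
cm_on_ground = ground_distance(2.5, 24000)  # 60000.0 cm
km = cm_on_ground / 100000                  # 0.6 km
```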

Secure digital (SD) card

A small removable mass-storage device used as non-volatile memory in digital cameras and PDA devices.

Selective availability (SA)

A method of intentional accuracy degradation applied by the Department of Defense (DoD). With SA turned on, GPS accuracy was reduced to about 100 meters. Selective availability was turned off on May 2, 2000, and the DoD has no plans to turn it on again; other methods of restricting GPS access will be employed instead.

Shapefile

A vector data format for storing the location, shape, and attributes of geographic features. A shapefile is stored as a set of related files and contains one feature class: point, line, or polygon.

Smart media

A small removable mass-storage device used as non-volatile memory in digital cameras.

Spatial data

Information about the locations and shapes of geographic features; any data that can be mapped.

SSURGO

Soil Survey Geographic (SSURGO) Database: a digital version of the NRCS soil survey books. Each soil type is represented as a polygon linked to its associated soil properties.

Synchronize

Synchronizing two devices, such as a PDA and a PC, makes their files exact duplicates of each other. If one unit has newer files than the other, the software updates both units.

Tabular data

Descriptive information that is stored in rows and columns and can be linked to map features.

TFW

A TIFF World file (TFW) contains coordinate information used to position an image within a coordinate system. The format is supported by most GIS software packages. However, a TFW file does not contain projection information, so care should be taken to ensure that the image's coordinate system matches that of the other data layers in a GIS.

TIFF

An acronym for Tag(ged) Image File Format. It is one of the most popular and flexible of the current public domain raster file formats. Commonly used in aerial photography and satellite imagery.

TIGER

Topologically Integrated Geographic Encoding and Referencing system: the digital database developed at the U.S. Census Bureau to support its mapping needs for the Decennial Census and other Bureau programs. Data sets include political boundaries, demographics, roads, and stream networks.

Unprojected map

A map that is not projected to a coordinate system. Refers to data containing latitude and longitude coordinates.

USB

Universal Serial Bus: a communication protocol commonly used for computer peripherals such as printers, keyboards, scanners, and digital cameras. USB devices are hot-swappable; the computer recognizes a connected device without restarting.

UTM

Universal Transverse Mercator: a projected coordinate system that divides the world into 60 zones, each six degrees of longitude wide and split into north and south halves.

Vector

A coordinate-based data structure commonly used to represent linear geographic features. Features are stored as points, lines, and polygons.

WAAS

Wide Area Augmentation System (WAAS) is a differential correction source for GPS. The FAA built the system for aviation navigation. The signal is satellite-based and available only in the United States. It is free and can be used with any WAAS-enabled GPS receiver; no special GPS antenna is required.

Yield calibration

Procedures used to calibrate a yield monitor for specific harvest conditions such as grain type, grain flow, and grain moisture.

Yield mapping

The use of a yield monitor coupled with a GPS receiver. Each yield reading is tagged with latitude and longitude coordinates, which are then used to produce a yield map.

(2 MB; 6 pages) These systems utilize GPS to reduce over-application of chemicals in the field. Boom sections are turned off over previously treated areas or in "no spray zones" determined by the operator. Substantial savings can be realized, especially in irregularly shaped fields.

(1.8 MB; 6 pages) Collecting accurate yield monitor data is critical for producers who plan to adopt site-specific crop management strategies for their operations. This publication, part of the precision agriculture series, explains basic information for the combine operator to improve the quality of yield data collected during harvest.

(466 KB; 8 pages) Describes several approaches to site-specific nitrogen management including data collection, soil sampling and nitrogen recommendations as well as a history and economic analysis of the practice.

(362 KB; 8 pages) Yield mapping refers to the process of collecting georeferenced data on crop yield and characteristics, such as moisture content, while the crop is being harvested. This EC explains more about how to acquire and use this precision agriculture tool.

(181 KB; 4 pages) Sensors can measure a variety of essential soil properties. These sensors can be used to control variable rate application equipment in real-time or in conjunction with a Global Positioning System to generate field maps of particular soil properties.

Presentations developed by University of Nebraska-Lincoln faculty. Archived here in PDF and Flash format. Permission of authors is required to use the content of these materials for public presentation.

Software for screening georeferenced yield monitor data. It detects six common types of erroneous yield monitor values:

combine header status up,

start-/end-pass delays,

grain flow, distance traveled, and grain moisture outliers,

values exceeding minimum and maximum biological yield limits,

local neighborhood outliers, and

short segments and co-located points.

Erroneous values are either deleted or flagged for further analysis to identify where they have occurred in the field. The output file can then be used in other commercial mapping software to create a yield map or maps of the different categories of erroneous yield values. (University of Nebraska-Lincoln)
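Two of the screening rules listed above, biological yield limits and local-neighborhood outliers, can be sketched as follows. The thresholds, window size, and z-score test here are illustrative assumptions, not the published software's actual defaults:

```python
# Sketch: screening a sequence of yield readings. Rule 1 flags values
# outside minimum/maximum biological yield limits; rule 2 flags values
# far from their local neighborhood (simple z-score test).
from statistics import mean, stdev

def screen_yield(values, min_yield, max_yield, window=5, z_max=3.0):
    """Return (kept, flagged) lists of (index, value) pairs."""
    kept, flagged = [], []
    for i, v in enumerate(values):
        # Rule 1: biological yield limits
        if not (min_yield <= v <= max_yield):
            flagged.append((i, v))
            continue
        # Rule 2: local-neighborhood outlier
        lo, hi = max(0, i - window), min(len(values), i + window + 1)
        nbrs = [values[j] for j in range(lo, hi) if j != i]
        if len(nbrs) >= 2 and stdev(nbrs) > 0:
            z = abs(v - mean(nbrs)) / stdev(nbrs)
            if z > z_max:
                flagged.append((i, v))
                continue
        kept.append((i, v))
    return kept, flagged

# Hypothetical readings in bu/ac, with two obviously bad points:
data = [180, 185, 179, 900, 182, 0, 178]
kept, flagged = screen_yield(data, min_yield=20, max_yield=350)
```

Flagged values keep their index so they can be mapped back to field locations, mirroring the flag-for-further-analysis behavior described above.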