6
Water Quality Integrity

As discussed in Chapters 4 and 5, breaches in physical and hydraulic integrity can lead to the influx of contaminants across pipe walls, through breaks, and via cross connections. These external contamination events can act as a source of inoculum, introduce nutrients and sediments, or decrease disinfectant concentrations within the distribution system, resulting in a degradation of water quality. Even in the absence of external contamination, however, there are situations where water quality is degraded due to transformations that take place within piping, tanks, and premise plumbing. Most measurements of water quality taken within the distribution system cannot differentiate between the deterioration caused by externally vs. internally derived sources. For example, decreases in disinfectant concentrations with travel time through the distribution system could be the result of demand from an external contamination event or it could be due to disinfectant reactions with pipe walls and natural organic matter remaining after treatment.

This chapter deals with the various internal processes or events occurring within a distribution system that lead to degradation of water quality, the consequences of those processes, methods for detecting the loss of water quality, operational procedures for preventing these events, and finally, how to restore water quality integrity if it is lost. In many cases, the detection methods and recovery remedies are similar to those discussed in previous chapters.

FACTORS CAUSING LOSS OF WATER QUALITY INTEGRITY AND THEIR CONSEQUENCES

For water quality integrity to be compromised, specific reactions must occur that introduce undesirable compounds or microbes into the bulk fluid of the distribution system. These reactions can occur either at the solid–liquid interface of the pipe wall or in solution. Obvious microbial examples include the growth of biofilms and detachment of these bacteria within distribution system pipes and the proliferation of nitrifying organisms. Important chemical reactions include the leaching of toxic compounds from pipe materials, internal corrosion, scale formation and dissolution, and the decay of disinfectant residual that occurs over time as water moves through the distribution system. All these interactions are governed by a suite of chemical and physical parameters including temperature, pH, flow regime, concentration and type of disinfectant, the nature and abundance of natural organic matter, and pipe materials. Many of these variables may be linked in distribution systems; for example, seasonal increases in temperature may be accompanied by changes in organic matter, flow regimes, and disinfectant concentrations. As a consequence, attempting to correlate the occurrence of a given event (such as corrosion, microbial growth, disinfectant decay, or DBP formation) within distribution systems to a single variable (such as temperature) is difficult.

Biofilm Growth

One way in which water quality can be degraded in the distribution system is due to the growth of bacteria on surfaces as biofilms. Virtually every water distribution system is prone to the formation of biofilms regardless of the purity of the water, type of pipe material, or disinfectant used. The extent of biofilm formation and growth, the microbial ecology that develops, and the subsequent water quality changes depend on surface-mediated reactions (e.g., corrosion, disinfectant demand, immobilization of substrates for bacterial growth), mass transfer and mass transport processes, and bulk fluid properties (concentration and type of disinfectants, general water chemistry, organic concentration, etc.). These interactions can be exceedingly complex, which typically means that the mechanisms leading to biofilm growth may not be obvious and are often system specific.

Bacteria growing in biofilms can subsequently detach from the pipe walls. Because these organisms must survive in the presence of the disinfectant residual present in the distribution system, the interaction between the suspended organisms and residual is critical. If the residual has decayed due to reactions with compounds in the water or with the pipe wall, intrusion, or other sufficient external contamination, it is possible for attached bacteria to be released into water that contains insufficient disinfectant to cause their inactivation. The potential for this to occur is higher in premise plumbing, which generally has longer water residence times that may lead to very low disinfectant concentrations.

Pathogenic Microorganisms

An obvious risk to public health from distribution system biofilms is the release of pathogenic bacteria. As discussed in Chapter 3, there are instances where opportunistic pathogens have been detected in biofilms, including Legionella, Aeromonas spp., and Mycobacterium spp. Assessing risk from these organisms in biofilms is complicated by the potential for two modes of transmission. Aeromonas spp. causes disease by ingestion, while the other two organisms cause the most severe forms of disease after inhalation. In the case of Aeromonas spp., which is included as one of the unregulated “contaminants” to be tested for in the Contaminant Candidate List, it has been shown that drinking

Coliforms and Heterotrophs

Another consequence of biofilms is their potential to support the growth and release of organisms of regulatory concern, especially coliforms. Coliforms released from biofilms may result in elevated coliform detection even though physical integrity (i.e., the absence of breaches in the distribution system) and a disinfectant residual have been maintained (Characklis, 1988; Haudidier et al., 1988; Smith et al., 1990). It should be noted that coliforms arising from biofilms are generally considered to be low risk (see Chapter 2), as implied by EPA’s variance to the Total Coliform Rule for coliforms emanating from biofilms (see page 208). However, coliform regrowth may indirectly present a risk by masking the presence of bacteria introduced in a simultaneous contamination event. If repeated occurrences of coliforms in the distribution system force a utility to notify the public, there can be a loss of consumer confidence and trust in the utility.

The regrowth of heterotrophs in biofilms can also be of concern, especially for European communities that are required to monitor their presence. Some U.S. utilities routinely monitor heterotrophs using heterotrophic plate counts (HPC) as a general indicator of microbial quality, and may be required to assess their numbers if chlorine residuals are too low. Heterotrophic bacteria are generally not of public health concern, but with the growing immunocompromised population, many utilities are interested in minimizing the presence of these organisms in their water.

Corrosion and Other Effects

In addition to the regrowth issue, biofilms in distribution systems can cause other negative effects on finished water quality. The processes listed here do not require that the organisms detach from the surfaces, since the changes in water quality are due to their metabolic activities as they grow on the surfaces.

Bacterial biofilms may contribute to the corrosion of pipe surfaces and their eventual deterioration. Although a considerable amount of corrosion internal to the pipe can be mediated by abiotic factors, it is known that bacteria can both directly and indirectly influence corrosion of metal surfaces. Of particular concern is the pitting of copper that can lead to pinhole leaks in premise plumbing. Geesey et al. (1993) reported that pitting of copper plumbing in four hospitals around the world was likely attributable to bacterial activity. Wagner et al. (1997) proposed that biologically produced polymers typical of biofilms create high and low chloride concentration cells, and consequently localized corrosion cells, leading to increased copper corrosion. Laboratory studies have shown that the presence of bacteria on copper surfaces could accelerate corrosion when compared to an abiotic system (Webster et al., 2000). In other studies, specific organisms were correlated with copper corrosion and could be isolated from pits (Bremer and Geesey, 1991; Bremer et al., 1992). However, other research has shown that organisms alone did not cause copper pitting, and that particulate matter was also required (Walker et al., 1998).

Microbes may also influence iron surfaces in distribution systems. Iron bacteria can grow on ferrous metal surfaces (Ridgway et al., 1981), and by virtue of their metabolism may modify the local chemistry at the metal surface, which in turn promotes localized corrosion (Victoreen, 1974). As stated by McNeill and Edwards (2001), there are many possible effects of bacterial action and biofilm formation on iron corrosion. These include the production of differential aeration cells (Lee et al., 1980), soluble metal uptake by biofilm polymers (Tuovinen et al., 1980), changes in iron speciation by oxidation or reduction (Shair, 1975; Denisov et al., 1981; Kovalenko et al., 1982; Okereke and Stevens, 1991; Chapelle and Lovley, 1992; Nemati and Webb, 1997), and the production of pH gradients (Tuovinen et al., 1980) or corrosive hydrogen sulfide (Tuovinen et al., 1980; DeAraujo-Jorge et al., 1992). All of these factors can contribute to increased localized corrosion and deterioration of the pipe material, and can influence water quality by causing the release of metal ions or corrosion products and associated problems with water color.

Other effects of biofilms are worth noting. As demonstrated in the wastewater industry, it is possible to have nitrifying bacteria present in biofilms, and these organisms could result in nitrification episodes in distribution systems where chloramine is used (Wolfe et al., 1990, and see the section below). Actinomycetes or fungi present in biofilms may result in taste and odor problems (Burman, 1965, 1973; Olson, 1982), which then lead to consumer complaints. Excess biofilm growth can result in the loss of hydraulic capacity by increasing fluid frictional resistance at the pipe wall (see examples in Characklis et al., 1990). Finally, growth of biofilms and the associated organics can create a chlorine demand at the pipe wall.

Biologically Stable Water

Because this report focuses on distribution system events, it does not delve into failures or breaches at the treatment plant that might allow a breakthrough of contaminated water. Nonetheless, a brief discussion of biologically stable water is warranted, given its potential to reduce the growth of bacteria in the distribution system. Drinking water is generally considered to be biologically stable if it does not support the growth of bacteria in the distribution system. In its broadest sense, biologically stable water restricts growth because it lacks an essential nutrient (nitrogen or phosphorus), is sufficiently low in utilizable organic carbon, or contains adequate disinfectant. Although all of these parameters may influence biofilm growth, the U.S. drinking water industry has typically viewed biologically stable water as sufficiently low in organic carbon to limit the proliferation of heterotrophic bacteria. In this context, the general concepts of microbially stable water and maximum regrowth potential are relatively well understood (Rittmann and Snoeyink, 1984; Sathasivan et al., 1997).

Another mechanism for ensuring biological stability is the maintenance of an adequate disinfectant residual. However, since disinfectants decay in the distribution system, reliance on a residual to ensure biological stability may not be entirely feasible. Within distal portions of the distribution system or within stagnant portions of premise plumbing, disinfectants disappear via reactions with pipe or bulk water or via nitrification. At these locations, any available organics can then be freely utilized by the bacteria present.

The reduction of organic carbon to control microbial growth may allow utilities to decrease their reliance on disinfectants. This approach also has the advantage of decreasing the potential for the production of disinfection byproducts (DBPs). Organic carbon removal is most often accomplished through enhanced coagulation, granular activated carbon filtration, or biological filtration. Although there is controversy surrounding target concentrations of organics that will limit regrowth, some recommendations have been made. van der Kooij et al. (1989) and van der Kooij and Hijnen (1990) showed a correlation between assimilable organic carbon (AOC) and regrowth in a non-disinfected distribution system, and provided evidence for biological stability in the Netherlands when the AOC concentration (Pseudomonas fluorescens P17 + Spirillum NOX) is reduced to 10 µg acetate C eq/L (van der Kooij, 1992). LeChevallier et al. (1991) have suggested that coliform regrowth may be controlled by influent AOC levels (P17 + NOX) below 50 µg acetate C eq/L. Based on a field study, LeChevallier et al. (1996) subsequently recommended a level below 100 µg C/L to control regrowth. Servais et al. (1991) have associated biological stability with a biodegradable dissolved organic carbon (BDOC) level of 0.2 mg/L, but Joret et al. (1994) have stated that the value is 0.15 mg/L at 20°C and 0.30 mg/L at 15°C.
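As a rough illustration of how a measured AOC value might be compared against these published guidelines, the short Python sketch below encodes the thresholds cited above. The function and dictionary names are our own and are not part of any standard method; the numeric limits come directly from the sources cited in the text.

```python
# Illustrative only: literature AOC thresholds for biological stability,
# in µg acetate C eq/L, taken from the sources cited in the text.
AOC_THRESHOLDS_UG_PER_L = {
    "van der Kooij (1992), non-disinfected systems": 10.0,
    "LeChevallier et al. (1991), coliform control": 50.0,
    "LeChevallier et al. (1996), field study": 100.0,
}

def assess_aoc(measured_ug_per_l):
    """Report which published thresholds a measured AOC value meets."""
    return {
        source: measured_ug_per_l <= limit
        for source, limit in AOC_THRESHOLDS_UG_PER_L.items()
    }

# A sample at 40 µg acetate C eq/L meets the LeChevallier targets but not
# the stricter Dutch (non-disinfected) target.
status = assess_aoc(40.0)
```

A comparison like this is only a screening step; as the text notes, the appropriate target is system specific and may not involve organic carbon at all if another nutrient is limiting.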

It should also be noted that organic carbon may not be the limiting nutrient. In Japan and Finland, evidence supports the concept that phosphorus is limiting (Miettinen et al., 1997; Sathasivan et al., 1997; Sathasivan and Ohgaki, 1999; Lehtola and Miettinen, 2001; Keinanen et al., 2002; Lehtola et al., 2002a,b, 2004). In these cases, the addition of phosphate-based corrosion inhibitors may decrease the biological stability of the water and allow for regrowth (Miettinen et al., 1997).

This discussion illustrates that the best strategy for creating and maintaining biologically stable water is most likely to be system specific. Each water utility should identify the limiting nutrient and the best practices to attain and then maintain biological stability. These factors should also be kept in mind when water quality goals change. For example, the dosing of ammonia during a switch to chloramination would relieve nitrogen limitations on regrowth, whereas the dosing of phosphate corrosion inhibitors can relieve phosphorus limitations.

Nitrification

Biological nitrification is a process in which bacteria oxidize reduced nitrogen compounds (e.g., ammonia) to nitrite and then nitrate. In drinking water it is associated with the presence of nitrifying bacteria and long retention times in supply systems practicing chloramination. One of the most important problems exacerbated by nitrification is loss of the chloramine disinfectant residual. This occurs because the consumption of ammonia by nitrifiers results in an increased ratio of chlorine to ammonia nitrogen. This ratio controls the stability of monochloramine, which is governed by a complex set of reactions (Jafvert and Valentine, 1992; also see the following section on loss of disinfectant residual). As the ratio approaches 1.5 on a molar basis, a rapid loss of monochloramine occurs, attributable to the eventual oxidation of the reduced nitrogen, N(−III), to primarily nitrogen gas and the release of more ammonia. The released ammonia can then be further oxidized by the nitrifying organisms, establishing what amounts to a positive feedback loop. Furthermore, the loss of disinfectant residual removes one of the controls on the activity of nitrifiers, and it may also lead to the increased occurrence of microorganisms such as coliforms (Wolfe et al., 1988, 1990) and heterotrophic bacteria.
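To make the 1.5 molar threshold concrete, the sketch below (our own illustration, using standard molecular weights) converts typical mass-based plant measurements into the chlorine-to-ammonia-nitrogen molar ratio:

```python
# Convert mass concentrations (mg/L as Cl2 and mg/L as N) into the
# Cl:N molar ratio discussed in the text. Function name is illustrative.
MW_CL2 = 70.91  # g/mol, chlorine measured "as Cl2"
MW_N = 14.01    # g/mol, ammonia measured "as N"

def cl_to_n_molar_ratio(cl2_mg_per_l, nh3_n_mg_per_l):
    return (cl2_mg_per_l / MW_CL2) / (nh3_n_mg_per_l / MW_N)

# A common 4:1 chlorine-to-ammonia-nitrogen mass dose sits safely below
# the 1.5 molar threshold...
ratio_dosed = cl_to_n_molar_ratio(4.0, 1.0)      # ≈ 0.79
# ...but if nitrifiers consume half the ammonia, the ratio exceeds 1.5
# and rapid monochloramine loss can follow.
ratio_nitrified = cl_to_n_molar_ratio(4.0, 0.5)  # ≈ 1.58
```

The point of the calculation is that nitrification moves the ratio without any change in chlorine dose: the denominator shrinks as ammonia is consumed.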

As discussed in NRC (2005), the loss of chloramine residual is the most significant health threat that can result from nitrification. It should be noted, however, that there are other, lesser health effects of nitrification that may be important for certain populations. Nitrite and nitrate have been shown to cause methemoglobinemia (blue baby syndrome), an acute response to nitrite that results in a blockage of oxygen transport (Bouchard et al., 1992). Methemoglobinemia affects primarily infants below six months of age, but it may occur in adults of certain ethnic groups (Navajos, Eskimos) and those suffering from a genetic deficiency of certain enzymes (Bitton, 1994). Pregnant women may also be at a higher risk of methemoglobinemia than the general population (Bouchard et al., 1992). A second concern is that nitrate may be reduced to nitrite in the low pH environment of the stomach, reacting with amines and amides to form N-nitroso compounds (Bouchard et al., 1992; De Roos et al., 2003). Nitrosamines and nitrosamides have been linked to different types of cancer, but the intake of nitrate from drinking water and its causal relation to the risk of cancer is still a matter of debate (Bouchard et al., 1992). A study by Gulis et al. (2002) in Slovakia related increased colorectal cancer and non-Hodgkin’s lymphoma to medium (10.1–20 mg/L) and high (20.1–50 mg/L) concentrations of nitrate nitrogen in drinking waters. Similarly, Sandor et al. (2001) showed a correlation between the consumption of waters containing greater than 88 mg/L nitrate nitrogen and gastric cancer. Despite numerous papers (Sandor et al., 2001; Gulis et al., 2002; Kumar et al., 2002; De Roos et al., 2003; Coss et al., 2004; Fewtrell, 2004), the concentration at which nitrate nitrogen in drinking waters presents a health risk is unclear (Fewtrell, 2004). Finally, a lesser but still significant water quality effect of nitrification is a reduction in alkalinity and pH in low-alkalinity waters. This may cause the pH to decrease to the point that corrosion of lead or copper becomes a problem.

It is important to recognize that nitrate and nitrite may come from sources other than nitrification. van der Leeden et al. (1990) found that 93 percent of all U.S. water supplies contain less than 5 mg/L nitrate, but noted that these values may be changing as a result of the increased use of nitrate-containing fertilizers. Increased use of chloramination (up to 50 percent of the surface water systems in the United States may use chloramination in the near future as a result of the Stage 1 Disinfectants/Disinfection Byproducts Rule; EPA, 2003) may result in higher levels of nitrate in drinking waters (Bryant et al., 1992), but the increment in nitrate plus nitrite nitrogen from this source would typically be less than 1 mg/L, which is well below the current maximum contaminant level (MCL). Thus, as stated earlier, the concern may be predominantly for more susceptible populations (pregnant women, infants, some ethnic groups).

Interestingly, although nitrification is a recognized potential problem in water systems practicing chloramination, nitrification control is required or encouraged in only 11 of 34 states that responded to a survey of drinking water programs conducted by the Association of State Drinking Water Administrators in March 2003 (see Table 2-5). This illustrates the need for state agencies to recognize the potential issues associated with chloramination and nitrification, and thereby prepare their utilities to deal with this potentially problematic issue.

Leaching

All materials in the water distribution system, including pipes, fittings, linings, other materials used in joining or sealing pipes, and internal coatings, leach substances into the water. The processes that account for this include corrosion, dissolution, diffusion, and detachment. Taste and odor problems (Burlingame et al., 1994; Khiari et al., 2002) are the most likely outcome of leaching because most substances leaching into water from materials in the distribution system are non-toxic, present only at trace levels, or in a form unlikely to cause health problems.

There are, however, a few situations in which leaching may present a substantial health risk. By far the most significant is the leaching of lead from lead pipe, lead-containing solder, and lead service connections. Monitoring of lead in tap water and replacement of these lines are important components of the Lead and Copper Rule. Other materials used in distribution systems that have the potential for leaching include PVC pipes manufactured before about 1977, which are known to leach carcinogenic vinyl chloride into water at levels above the MCL (AWWA and EES, Inc., 2002). Cement materials have, under unusual circumstances, leached aluminum into drinking water at concentrations that caused death in hemodialysis and other susceptible patients (Berend et al., 2001). Because levels of aluminum normally present in drinking water can also threaten this population, the FDA has issued guidance for water purification pretreatments in the U.S. for dialysis and other patients (http://www.gewater.com/library/tp/1111_Water_The.jsp). Asbestos fibers may also be released from asbestos cement; the content of asbestos in water is regulated with an MCL, although utilities are not required to monitor for asbestos in the distribution system. Finally, excessive leaching of organic substances from linings, joints, and sealing materials has occasionally been noted. Some of these substances may support the growth of biofilms (Schoenen, 1986), such that their use should be limited.

For new materials, NSF International establishes levels of allowable contaminant leaching through ANSI/NSF Standard 61 (see Chapter 2). However, this standard, which establishes minimum health effect requirements for chemical contaminants and impurities, does not establish performance, taste and odor, or microbial growth support requirements for distribution system components. This is unfortunate because research has shown that distribution system components can significantly impact the microbial quality of drinking water via leaching. Procedures are available to evaluate growth stimulation potential of different materials (Bellen et al., 1993), but these tests are not applied in the United States by ANSI/NSF.

Internal Corrosion

Internal corrosion manifests as (1) the destruction of metal pipe interiors by both uniform and pitting corrosion (see Chapter 4) and (2) the buildup of scales of corrosion products on the internal pipe wall that hamper the flow of water (see Chapter 5). A large number of water quality parameters such as disinfectant residual, temperature, redox potential, alkalinity, calcium concentration, total dissolved solids concentration, and pH play an important role both in the internal corrosion of pipe materials and the subsequent release of iron. The products of corrosion may appear in water as dissolved and particulate metals, and the particles may cause aesthetic problems because of their color and turbidity if they are present in sufficient concentration. Metals such as lead and copper in tap water are governed by the Lead and Copper Rule; asbestos particles and iron particles with adsorbed chemicals such as arsenic (Lytle et al., 2004) are of concern because of possible health effects. The quality of distributed water must be controlled so that both corrosion and metal release do not cause water quality problems.

Scale Formation and Dissolution

Scale on pipe surfaces may form in distribution systems for a variety of reasons, including precipitation of residual aluminum coagulant after filtration, precipitation of corrosion products, precipitation of corrosion inhibitors, and precipitation of calcium carbonate and silicate minerals. Scale that forms in a thin, smooth coat that protects the metal pipe by reducing the rate of corrosion is generally desirable, whereas uncontrolled precipitation can reduce the effective diameters of distribution pipes and can create rough surfaces, both of which reduce the hydraulic capacity of the system (as discussed in Chapter 5) and increase the cost of distributing water.

In terms of internal contamination events, rough surfaces and scales with reduced metals such as ferrous iron can increase problems with biofilms (Camper et al., 2003). That is, ferrous iron reacts with chlorine and monochloramine, reducing the effective concentration of disinfectant in the vicinity of biofilms. Furthermore, rough surfaces contain niches where microbes can grow without exposure to hydraulic shear. If the scale material is loosely attached to the pipe wall, such as some aluminum precipitates, hydraulic surges can result in substantial increases in the turbidity of tap water. Scales are also important because they can dissolve under some water quality conditions and release metals to the water in the distribution system. For example, Sarin et al. (2003, 2004) showed that iron scales release iron during flow stagnation, which then causes turbid and colored water. Dodge et al. (2002), Valentine and Stearns (1994), and Lytle et al. (2002) showed that uranium, radium-226, and arsenic, respectively, could be adsorbed to iron corrosion scales found in distribution systems. (In order for these metals to accumulate they must be present in the source water.) Lytle et al. (2002) showed that arsenic would accumulate on iron solids in distribution systems even when present in water at concentrations less than 10 µg/L. Aluminum and manganese solids can also adsorb metal contaminants and may subsequently release them because of changes in water quality. Research is needed to fully characterize this potential source of contamination related to internal corrosion and scale dissolution and to find ways to control it.

Other Chemical Reactions that Occur as Water Ages

Many water distribution systems in the United States experience long retention times or increased water age, in part due to the need to satisfy fire fighting requirements. Although not a specific degradative process, water age is a characteristic that affects water quality because many deleterious effects are time dependent. The most important for consideration here are (1) the loss of disinfectant residuals and (2) the formation of DBPs. The importance of water age is recognized in part by the survey of state drinking water programs where nearly all states that responded to the survey either required or encouraged utilities to minimize dead ends and to have proper flushing devices at remaining dead ends (Table 2-3).

Loss of Disinfectant Residual

Maintenance of a disinfectant residual throughout a distribution system is considered an important element in a multiple barrier strategy aimed at maintaining the integrity of a distribution system. It is generally assumed that the presence of a disinfectant is desirable because it may kill pathogenic organisms, and therefore the lack of a disinfectant is an undesirable situation. The absence of a disinfectant residual when one is expected may also indicate that the integrity of the system has been compromised, possibly by intrusion or nitrification. If the disinfectant is chloramine, its decay will produce free ammonia that could promote the onset of nitrification. Understanding the nature of the processes leading to disinfectant losses, especially when those processes lead to excessive decay rates, is important in managing water quality.

Loss of disinfectants in distribution systems is typically due to reduction reactions, in the bulk water phase and at the pipe–water interface, that consume the disinfectant over time, although nitrification (in the case of chloramine) can also play a role. Dissolved constituents that can act as reductants in the aqueous phase include natural organic matter (NOM) and ferrous [Fe(II)] and manganous [Mn(II)] ions. These substances may occur in the water as a result of incomplete removal during treatment, from the corrosion of pipe material (e.g., cast iron), or from the reduction of existing insoluble iron and manganese deposits. Disinfectants may also readily react with reduced forms of iron and manganese oxides typically found on the surface of cast iron pipes, as well as with adsorbed NOM (Tuovinen et al., 1980, 1984; Sarin et al., 2001, 2004). Benjamin et al. (1996) found that the accumulation of iron corrosion products at the pipe wall and the release of these products into the bulk water led to a deterioration of water quality. There have been several reports that the loss of chlorine residuals in corroded, unlined metallic pipes (particularly cast iron) increases with increasing velocity (Powell, 1998; Powell et al., 2000; Grayman et al., 2002; Doshi et al., 2003). Correlative evidence for the role of corrosion in reducing disinfectant residuals was produced by Camper et al. (2003), who studied the interactions between pipe materials, organic carbon levels, and disinfectants using annular reactors with ductile-iron, polyvinyl chloride (PVC), epoxy, and cement-lined coupons at four field sites. They found that iron surfaces supported much higher bacterial populations than the other materials.

Modeling efforts to understand disinfectant decay have been primarily empirical or semi-mechanistic in nature, and they have mostly addressed non-biological reactions. The primary purpose of these types of models is to serve as a predictive tool in managing water quality. Most modeling research has targeted the relatively fast reactions of free chlorine in the aqueous phase, predicting free chlorine decay versus hydraulic residence time using single system-specific decay coefficients. For example, Vasconcelos et al. (1996) developed several simple empirical mathematical models to describe free chlorine decay. Clark (1998) proposed a chlorine decay and TTHM formation model based on a competitive reaction between free chlorine and NOM. The model was validated against the Vasconcelos et al. (1996) data sets and found to be as good as or better than (based on r2 values) the models examined by Vasconcelos et al. (1996).

More sophisticated models have improved predictive management capabilities and are also useful as research tools in the elucidation of fundamental processes. Rossman et al. (1994) developed a chlorine decay model that includes first-order bulk phase and reaction-limited wall demand coefficients; this model is incorporated into EPANET1. The model developed by Clark (1998) was extended to include a rapid and slow reaction component and to study the effect of variables such as temperature and pH (Clark and Sivaganesan, 2001). Further extensions included the formation of brominated byproducts (Clark et al., 2001). McClellan et al. (2000) modeled the aqueous-phase loss of free chlorine due to reactions with NOM by partitioning the NOM into reactive and non-reactive fractions. Other models have incorporated reactions with reactive pipe surfaces that may dominate the loss pathways (Lu et al., 1995; Vasconcelos et al., 1997) as well as bulk phase reactions. Clark and Haught (2005) were able to predict free chlorine loss in corroded, unlined metallic pipes subject to changes in velocity by modeling the phenomena as being governed by mass transfer to the pipe wall where the chlorine was rapidly reduced.
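The first-order framework underlying these models can be sketched in a few lines. The version below is our own simplified illustration, not the actual EPANET formulation (which treats wall demand as dependent on pipe radius and mass transfer); here the bulk and effective wall decay coefficients are simply lumped into a single exponential.

```python
import math

def chlorine_residual(c0, kb_per_day, kw_per_day, t_days):
    """Free chlorine (mg/L) remaining after t_days of travel time,
    with lumped first-order bulk (kb) and effective wall (kw)
    decay coefficients, both in 1/day."""
    return c0 * math.exp(-(kb_per_day + kw_per_day) * t_days)

# Example with placeholder coefficients: a 1.0 mg/L residual with
# kb = 0.5/day and kw = 0.3/day falls to about 0.20 mg/L in 2 days.
residual = chlorine_residual(1.0, 0.5, 0.3, 2.0)
```

Even this crude form captures why long water age is a concern: residual loss is exponential in travel time, so distal and stagnant portions of the system see the lowest concentrations.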

Less studied has been the loss of monochloramine in distribution systems. Monochloramine, while generally less reactive than free chlorine, is inherently unstable because it undergoes autodecomposition. While autodecomposition occurs via a complex set of reactions, the net loss of monochloramine occurs according to the stoichiometry:

(1)

This reaction has been reasonably well studied (Valentine et al., 1998; Vikesland et al., 2000) and can be approximated (in the absence of other reactions) by a simple second-order relationship:

(2)

where k is a rate constant describing the second-order loss of monochloramine (Valentine et al., 1998). Its derivation involves the simplifying assumption that monochloramine decays by a mechanism involving the rate-limiting formation of dichloramine, which then rapidly decays. As such, k is a combination of several fundamental rate constants and the Cl/N ratio. It can be calculated simply and used to predict monochloramine decay in the aqueous phase in the absence of other demand reactions. It should be pointed out that chloramine will decay more rapidly than predicted by this approach if significant amounts of demand substances other than ammonia are present in solution or if reactions with pipe walls are considered. Other demand substances can include NOM and reduced metals in the aqueous phase, as well as Fe(II) in pipe deposits.

1 EPANET is a model developed by EPA that performs an extended period simulation of hydraulic and water quality behavior within pressurized pipe networks (see Chapter 7).
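The integrated form of the second-order rate law in Equation 2 can be used to project residual loss from autodecomposition alone. The sketch below (Python) illustrates the calculation; the rate constant and initial concentration are purely hypothetical values, and the calculation deliberately ignores all other demand reactions, as cautioned above.

```python
def monochloramine_autodecomposition(c0_molar, k, hours):
    """Predict monochloramine remaining from Eq. 2 (dC/dt = -k*C^2).

    The integrated form of a second-order decay is C(t) = C0 / (1 + k*C0*t).
    c0_molar : initial NH2Cl concentration (mol/L)
    k        : observed second-order rate constant (L/mol/h), hypothetical here
    hours    : elapsed time (h)
    """
    return c0_molar / (1.0 + k * c0_molar * hours)

# Illustrative only: k and c0 below are placeholders, not measured values.
c0 = 4.0e-5   # mol/L, roughly 2.8 mg/L as Cl2
k = 50.0      # L/mol/h (hypothetical)
remaining = [monochloramine_autodecomposition(c0, k, t) for t in (0, 24, 72, 168)]
```

A real application would first fit k to bench data at the system's pH and Cl/N ratio, since k lumps several fundamental rate constants together.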

Wilczak (2001) found that a sequential first-order empirical model best fit the East Bay Municipal Utility District’s chloramine decay data. Palacios and Smith (2002) found that chloramine decay in San Francisco’s water was consistent with a sequential first-order model, but that a simple first-order decay rate could be applied to the data because of the low organic matter concentrations. Duirk et al. (2002) developed a comprehensive aqueous-phase chloramine reaction model, similar in structure to that proposed by McClellan et al. (2000), that accounts for both monochloramine autodecomposition and reduction by NOM. Reaction of the trace levels of free chlorine that exist in equilibrium with monochloramine was a key mechanism accounting for the slow loss of monochloramine due to reaction with NOM. As a consequence, loss of chloramine should decrease with increasing pH because both autodecomposition and the reaction with NOM become slower. Table 6-1 summarizes the mechanisms for loss of a chloramine residual.
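As a rough illustration of the empirical approach described above, the sketch below implements one common reading of a "sequential first-order" model: a faster first-order phase followed by a slower one. The functional form, break time, and rate constants are assumptions for illustration, not the fitted models of Wilczak (2001) or Palacios and Smith (2002).

```python
import math

def sequential_first_order(c0, k_fast, k_slow, t_break, t):
    """One common empirical 'sequential first-order' form: a faster
    first-order decay up to t_break, then a slower first-order decay.
    All parameter values used below are hypothetical, not fitted values.
    c0 in mg/L as Cl2; k_fast, k_slow in 1/h; t_break, t in hours."""
    if t <= t_break:
        return c0 * math.exp(-k_fast * t)
    # Continue from the concentration reached at the break point.
    c_break = c0 * math.exp(-k_fast * t_break)
    return c_break * math.exp(-k_slow * (t - t_break))

c0 = 2.5  # mg/L as Cl2 (hypothetical)
profile = [sequential_first_order(c0, k_fast=0.05, k_slow=0.005, t_break=24, t=t)
           for t in (0, 12, 24, 96, 168)]
```

In practice the two rate constants and the break time would be regressed from bottle-test data for the specific water.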

Disinfection Byproduct Formation

Formation of DBPs in distribution systems is attributable to reactions of chemical disinfectants with NOM either in bulk solution or associated with pipe deposits (Rossman et al., 2001). The importance of NOM associated with pipe deposits is based largely on evidence from controlled laboratory studies; it remains speculative and is certainly highly system specific.

Most studies have focused on the formation of halogenated DBPs produced from reactions of NOM with free chlorine, especially those DBPs currently regulated by the EPA—trihalomethanes (THMs) and haloacetic acids (HAAs). However, over 600 potentially harmful DBPs have been identified (Richardson, 1998), including both chlorinated and brominated compounds. Brominated compounds arise from the oxidation of bromide, which can be an important factor in determining DBP speciation even when present at sub-milligram-per-liter levels. Many of the DBPs formed in chloraminated systems are the same as those observed in systems practicing chlorination (Figure 6-1). This may be a consequence of similar formation mechanisms involving free chlorine, or attributable to the practice of prechlorination prior to ammonia addition and subsequent chloramine formation. However, the rates of formation of most DBPs are much slower in chloraminated systems, resulting in the reduced formation of many DBPs, especially THMs.

Table 6-2 lists some of the DBPs rated as high priority and observed in a recent comprehensive survey of 12 full-scale treatment plants in the United States in 2000. The halogenated DBPs detected in this study included mono-, di-, tri-, and/or tetra-halogenated species of halomethanes (HMs) (including iodinated species); haloacetonitriles (HANs); haloketones (HKs); haloacetaldehydes (HAs); and halonitromethanes (HNMs). The presence of bromide resulted in a shift in speciation for the trihalomethanes (THMs) and haloacetic acids (HAAs). Brominated species of the other DBP classes (HANs, HKs, HAs, HNMs) were also detected. Chloramination formed certain dihalogen-substituted DBPs (HAAs, HAs) preferentially over the related trihalogenated species. In addition, chlorine dioxide produced dihalogenated HAAs (Richardson et al., 2004). Recently, several DBPs have been identified as unique to chloraminated systems. These include N-nitrosodimethylamine (NDMA) (Choi and Valentine, 2002a,b; Mitch and Sedlak, 2002), cyanogen chloride, and several iodohaloacetic acids, none of which are currently regulated at the federal level. The State of California has, however, established a notification level of 10 ppb in drinking water for NDMA, a potent carcinogen (http://www.dhs.ca.gov/ps/ddwem/chemicals/NDMA/NDMAindex.htm). NDMA seems to be a relatively widespread DBP and may become more prevalent as the use of chloramination increases.

Given the relatively high reactivity of free chlorine with NOM, it is not surprising that a significant amount of DBPs is formed in the water treatment plant as the result of primary disinfection (i.e., disinfection at the treatment plant to meet CT requirements). DBP formation, however, continues in the distribution system, as shown in Figure 6-2. Based on an evaluation of data from utilities that participated in the Information Collection Rule (ICR) and that use surface water as their source, TTHMs increased through distribution systems on average by about 50 percent when chlorine was used to maintain the distribution system residual (McGuire and Graziano, 2002). Similar results were obtained for chloraminated distribution systems, mainly because these systems had water with higher TTHM precursor levels than the utilities using free chlorine. Chloramine-specific DBPs (like N-nitrosodimethylamine, cyanogen chloride, and iodohaloacetic acids) are expected to form primarily in distribution systems since chloramine is not usually used during primary disinfection (although ammonia is sometimes added in the treatment plant to stop THM and HAA formation). Finally, haloacetic acid levels are also expected to increase in the distribution system, but not to the same degree as THMs.

It should be noted that processes may occur in distribution systems that cause a loss of DBPs. For example, Baribeau et al. (2006) showed that the formation of several haloacetic acids did not increase with water age in a chlorinated distribution system (Figure 6-3). Speight and Singer (2005) correlated HAA reduction with a reduction in chlorine and suggested that the observed HAA loss was due to biodegradation that was otherwise inhibited in the presence of chlorine. However, interpreting field data can be difficult, and changes in DBP concentrations may alternatively be attributable to changes in treatment plant operation (Pereira et al., 2004). The nature of DBP decay processes is not well established in actual distribution systems, but laboratory studies suggest that these processes may include biodegradation (Baribeau et al., 2005a), hydrolysis (Zhang and Minear, 2002), and reduction by reduced forms of iron (Chun et al., 2005; Zhang et al., 2004).

DBP modeling efforts can be categorized as either empirical or semi-mechanistic. Motivations for modeling include estimating the extent of the problem from easily measured parameters, predicting the influence of treatment practices aimed at reducing DBPs, and establishing fundamental mechanisms. Amy et al. (1987) correlated DBP formation to a number of important variables including chlorine dosage, DOC and bromide concentrations, temperature, and contact time. Harrington et al. (1992) used a similar approach to develop an empirical model to simulate THM and HAA formation during water treatment. More recently, semi-mechanistic kinetic models have been developed that couple disinfectant loss to the formation of selected DBPs. McClellan et al. (2000) proposed a model for the formation of THMs in chlorinated water assuming a fixed number of chlorine-consuming and THM-forming sites per mg C in the aqueous phase. Duirk and Valentine (2002) used a similar approach to model dichloroacetic acid formation from the reaction of monochloramine with NOM. As already stated, no efforts have yet included the specific role of NOM on deposit/pipe surfaces, which may be required to adequately model DBP formation in distribution systems. In spite of these limitations, considerable progress has been made in using models to explain observations and make simple predictions about the influence of treatment practices and distribution system residence time. Continued effort is needed to refine these models by including unifying principles that are not system specific and an improved description of all pertinent phenomena.
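Empirical models of the kind developed by Amy et al. (1987) are typically power functions of the measured variables. The sketch below shows only the general functional form; the coefficient and exponents are placeholders for illustration, not the published regression values.

```python
def empirical_tthm(doc, cl2_dose, temp_c, time_h, coeffs):
    """Generic power-function TTHM model of the form used in empirical
    DBP studies: TTHM = A * DOC^a * dose^b * temp^c * time^d.

    `coeffs` = (A, a, b, c, d). The values passed below are placeholders,
    NOT the regression coefficients of Amy et al. (1987)."""
    A, a, b, c, d = coeffs
    return A * doc ** a * cl2_dose ** b * temp_c ** c * time_h ** d

# Hypothetical coefficients and inputs, for form only.
placeholder = (1.0, 1.0, 0.5, 0.2, 0.3)
tthm_4h = empirical_tthm(doc=3.0, cl2_dose=2.0, temp_c=20.0, time_h=4.0,
                         coeffs=placeholder)
tthm_48h = empirical_tthm(doc=3.0, cl2_dose=2.0, temp_c=20.0, time_h=48.0,
                          coeffs=placeholder)
```

With a positive time exponent, such a model reproduces the qualitative observation that DBP formation continues with residence time in the distribution system.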

It should be pointed out that while chlorine demand may be a useful measurement to correlate with DBP formation in the bulk water (Gang et al., 2002), this would not be the case if the demand were governed by inorganic constituents or by reactions with deposit materials (corrosion products, etc.).

DETECTING LOSS OF WATER QUALITY INTEGRITY

Distinguishing the loss of water quality integrity due to internal changes from changes brought about by external events is very difficult because there are few parameters that can be conclusively linked to internal contamination. Routinely monitored parameters such as temperature, pH, disinfectant residual, and even microbial constituents cannot differentiate between external and internal contamination. Other less routinely monitored constituents, including dissolved metals, turbidity, total organic carbon, synthetic organic compounds, or nuisance organisms such as invertebrates, may also not be definitive for external vs. internal sources. However, if utilities have a good understanding of their distribution system and its points of vulnerability, judgments about the likely source of detected contaminants can be made. With these limitations in mind, the following sections on detecting physical, chemical, and biological changes are presented. Where a monitoring method can clearly distinguish internal from external contamination, this is identified.

Detection of Physical and Chemical Changes

Taste and Odor

Tastes and odors detectable by the consumer are a common indication of a loss of water quality integrity (McGuire, 1995). In fact consumers may only complain about the loss of water quality if they detect taste and odors (Watson, 2004). Fortunately, methods exist to directly evaluate the flavor and odor of tap water (Krasner et al., 1985; Dietrich et al., 2004; APHA, 2005) and a guide exists to determine possible sources within the distribution system and customers’ premises (McGuire et al., 2004).

Because most drinking waters in the United States contain a total chlorine residual, the taste and odor of tap water might be described as “chlorinous.” Whether this is noticeable to the water-consuming public depends on the chlorine species present, the concentration of the residual, and the temperature of the tap water. Other causative agents of tastes and odors in drinking water are usually metals, volatile organic chemicals, and microbial activity, with the latter being the most prevalent (APHA, 2005). A very common source is open storage reservoirs where algae have been allowed to grow within the water as well as along the sides of the basins. These algae can produce earthy, musty, grassy, fishy, decaying-vegetation, and similar odors. Watson and Ridal (2004) attributed tastes and odors to periphyton, and more specifically to certain cyanobacteria present in biofilms, as well as to the presence of dreissenid mussels in the Great Lakes region. Skjevrak et al. (2004) detected ectocarpene, dictyopterenes, beta-ionone, menthol, menthone, and other VOCs in biofilms within distribution systems. Furthermore, bacteria such as actinomycetes can give rise to geosmin, and other microorganisms such as certain fungi have been associated with consumer complaints about taste and odor.

Much of the biological activity that causes taste and odor problems is indirect. Taste and odor problems may arise as a result of bacterial processes in certain types of pipes, such as iron, copper, and lead (Geldreich, 1996). In water systems with chlorophenols or bromophenols, biological activity (particularly fungal) can convert these compounds to very odorous chloro/bromo-anisoles that have much lower odor detection thresholds than the original compounds (Bruchet, 1999; Montiel et al., 1999). It is also possible that other chlorinated and oxidant-derived byproducts can be produced or allowed to increase in the distribution system to the point where they begin to be detectable by customers. Finally, if a source water contributes sulfur or iron to the distribution system (such as from a groundwater supply), biological activity in the distribution system can produce compounds that change the taste of the water.

Tastes and odors may be associated with external contamination events, such as permeation and intrusion. Among the compounds most likely to present a taste and odor problem stemming from an external contamination event are gasoline additives or constituents, soluble components of soil, and compounds found in sewage.

Changes in taste and odor can occur anywhere in the distribution system where the chlorine residual deteriorates and the water becomes stagnant, such as in storage tanks, at dead-end water mains, and behind closed valves. Also, in stagnant areas of the distribution system where corrosive conditions release iron into the water, the iron may be detected by customers both visually and by taste. Interestingly, most nuisance tastes and odors that cause customer complaints originate within customers’ premises (except for those that come from source water, such as geosmin, 2-methylisoborneol, and certain chemical spills) (Suffet et al., 1995; Khiari et al., 2002). Common causes are stagnant plumbing (musty odors from biological growth), backflow events (various types of chemical odors), hot water heater odors (hydrogen sulfide from biological activity in hot water tanks), and corrosion of plumbing materials (release of copper, zinc, and iron). New plastic pipe can also leach odors for a period of time.

Within the main distribution system, new pipe and facilities need to be checked for their contributions to potential off-odors before they are released for use. Ductile iron pipe that is lined with cement-mortar might have an asphaltic coating that can leach volatile organic chemicals into the water if it has not cured sufficiently. New pipe joint lubricant can also impart aldehyde-type odors to the water. New linings of storage tanks also need to be cured adequately before being placed into service.

Finally, the stability of the chlorine or chloramine residual is important to controlling undesirable tastes and odors. Blending of source waters or boosting of disinfectants that is not well controlled can produce dichloramine (which is more odorous than monochloramine). It has also been shown that when a system uses chlorine dioxide as a primary oxidant, chlorite in the distribution system can react with free chlorine to reform chlorine dioxide that (1) can give a strong chlorinous odor at the tap and (2) can be released into the air of a home and react with volatile organic chemicals (such as from new carpet or paneling) to create cat urine or kerosene type odors (Dietrich and Hoehn, 1991).

Color and Turbid Water

Colored and turbid water at the tap is a strong indication that corrosion of iron, iron release from scales, and post-precipitation of aluminum salts are not being controlled. The presence of colored or turbid water is therefore indicative of changes in quality due to internal contamination. Iron released from the pipe wall as Fe2+ will diffuse into the bulk water, where it is oxidized by oxygen or disinfectant and then precipitates as ferric oxyhydroxide particles that cause color and turbidity (Lytle and Snoeyink, 2002; Sarin et al., 2003). The effect of this process may be made worse if the particles settle during periods of low flow and are then resuspended by hydraulic surges. Post-precipitation of aluminum may result in particles that are loosely attached to the pipe wall and are suspended during hydraulic surges. This type of aluminum precipitate can be the cause of turbid or dirty water.

Dissolved and Particulate Metal Concentrations

If water leaving the treatment plant has metal concentrations that meet regulatory requirements, and if these levels increase in transit through the distribution system (typically iron) or in premise plumbing (lead, copper), then it may be assumed that leaching and internal corrosion are occurring. Although it is beyond the scope of this report to discuss the details of the Lead and Copper Rule, this is the only example of a regulation that specifically addresses the internal degradation of water quality in a distribution system. In the case of iron, elevated levels are more likely to be associated with secondary standards and aesthetic concerns rather than with a specific public health threat.

Disinfectant Residual and Disinfection Byproduct Measurements

Measurements of disinfectant residuals and DBP concentrations often accompany one another and are routinely practiced using a number of standard analytical methods. Sudden temporal increases in disinfectant loss indicate a sudden change in water quality or system characteristics. For example, this might be due to significant input of a reactive contaminant due to a cross-connection or rapid onset of nitrification in the system. Unexpected spatial losses point to problems associated with specific elements of the distribution system such as a zone where internal corrosion is excessive or where nitrification is occurring.

Identification of the causes of excessive disinfectant loss and DBP formation involves a combination of bench studies and field observations. What is considered an excessive loss must be gauged against what is considered “normal.” Free chlorine is stable for many days in water containing no reactive constituents such as NOM; the applied dose is then a benchmark for comparison. Simulated Distribution System (SDS) jar testing, using water obtained from the point of entry into a distribution system or at other points, can be used to determine the rate of disinfectant loss attributable to bulk-phase reactions as well as DBP formation. This requires only measurements of the free chlorine and DBP concentrations as a function of contact time. The rate of chlorine loss and DBP formation in the SDS test can then be compared with losses observed in the system between points of known hydraulic residence time (acknowledging that residence time at a given point is actually a distribution of values and not easy to determine—see Chapter 5). Differences between the lab and the field must be attributable to processes occurring inside the distribution system, most likely at the pipe–water interface.
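The SDS comparison described above reduces to a simple calculation: estimate an apparent first-order decay constant from the jar test (bulk reactions only) and from field samples separated by a known residence time, then attribute the excess to the pipe wall. The sketch below uses hypothetical concentrations and times.

```python
import math

def first_order_k(c_start, c_end, hours):
    """Apparent first-order decay constant (1/h) between two chlorine
    measurements separated by a known contact or residence time."""
    return math.log(c_start / c_end) / hours

# Hypothetical illustration: SDS jar test vs. field measurements.
k_bulk = first_order_k(1.2, 0.9, 24.0)   # mg/L, bottle held 24 h
k_total = first_order_k(1.2, 0.5, 24.0)  # field points 24 h of residence apart
k_wall = k_total - k_bulk                # excess attributable to the pipe wall
```

A large positive k_wall would point to wall reactions (corrosion scale, biofilm); k_wall near zero would suggest bulk-phase demand explains the observed loss.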

Determining whether chloramine loss is “excessive” is more complicated. The SDS jar test will indicate how fast the chloramine disappears in the bulk phase but will not by itself reveal the mechanism if the loss is unexpectedly high. The observed loss must also be compared to the rate of loss from autodecomposition, which can be predicted using the second-order relationship previously discussed, or perhaps measured in distilled water at the same pH as the system of interest. If the bulk water reactions are much faster than expected in clean water or than predicted, then one can presume the presence of significant amounts of reactive substances such as ferrous iron or NOM. The loss rate in the bulk phase can then be compared to values determined by measuring chloramine concentrations in the system at points of known hydraulic residence times. If the rate of loss determined by the field measurements is much higher than the bulk loss rate, then presumably this is due to reactions at the pipe–water interface, including biological nitrification and reaction with reduced iron.

Indicators of Nitrification

Smith (2006) recently summarized important parameters (Table 6-3) that can be used as indicators of biological nitrification—one phenomenon that is directly associated with internal changes in water quality. The most important indicators (after loss of residual) are formation of nitrite and nitrate, loss of ammonia, and a decrease in pH. An increase in heterotrophic plate count may also indicate the growth of nitrifying organisms in the system. However, since biological nitrification can be associated with biofilms (Regan et al., 2003), an absence of nitrifying organisms in the bulk phase does not necessarily indicate the absence of nitrification.

TABLE 6-3 Usefulness of Water Quality Parameters for Distribution System Nitrification Monitoring
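A utility screening its monitoring data for the indicators discussed above might apply a simple multi-signal rule, as sketched below. The thresholds are illustrative assumptions only, not values drawn from Table 6-3 or any guidance document.

```python
def nitrification_alert(residual_drop_pct, nitrite_mg_l, ammonia_drop_pct, ph_drop):
    """Flag possible nitrification when several indicators move together.
    All threshold values here are hypothetical illustrations."""
    signals = [
        residual_drop_pct > 30,  # accelerated chloramine residual loss
        nitrite_mg_l > 0.015,    # nitrite formation
        ammonia_drop_pct > 25,   # free-ammonia consumption
        ph_drop > 0.2,           # acid production by nitrifiers
    ]
    # Require at least two concurrent signals before raising an alert,
    # since any single parameter can shift for unrelated reasons.
    return sum(signals) >= 2

alert = nitrification_alert(40, 0.05, 30, 0.3)
```

Requiring concurrent signals reflects the point made above: no single parameter, not even residual loss, is conclusive on its own.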

Detection of Biological Changes

Several biological constituents, some of which are part of compliance monitoring, can be used to detect the loss of water quality integrity due to both internal and external contamination events. The applicability of each group of organisms for assessing internal changes in water quality is described in this section.

Heterotrophic Plate Counts

Since the end of the 19th century, heterotrophic plate counts (HPC) have been used as an indicator of the proper functioning of treatment processes (Bartram et al., 2003). By extension, HPC have also been used as an index of water quality and safety in the distribution system, and the method continues to be used in many countries as an index of regrowth (an internal event) of microorganisms within the distribution system. Although it is difficult to establish the exact contribution of suspended bacteria vs. proliferation and release of biofilm cells when increases in HPC are observed, there is evidence that biofilm growth and detachment can be the source of elevated bacterial numbers. Published accounts by van der Wende et al. (1989) and LeChevallier et al. (1990) demonstrated that elevated bacterial counts in water could not be attributed to replication of suspended cells but rather were due to biofilm growth on pipe surfaces. Accordingly, the Surface Water Treatment Rule allows HPC levels to be used as a surrogate for a “detectable” residual for regulatory compliance purposes, provided that HPC is less than or equal to 500 colony forming units (CFU)/ml. Although other countries do not set specific numerical limits for HPC (Robertson and Brooks, 2003), the European Union has a recommendation of 100 CFU/ml. In addition, the World Health Organization is presently debating whether or not HPC counts should be included in its guidelines on water quality.

Linking changes in HPC with a meaningful water quality variable can be very difficult. Many conditions, such as an increase or decrease in the organic carbon concentration, stagnation, loss of disinfectant residual, and/or nitrification, will result in an increase in HPC. Another cause of observed rises in HPC might be that chlorine-injured organisms regain culturability, even though their actual numbers have not changed. It is also possible that increased HPC is not due to bacterial growth in the distribution system but may originate from an external contamination event. In a study where HPC was analyzed in a distribution system with groundwater as its source, the concentrations of HPC varied with distance in the distribution system (Pepper et al., 2004), suggesting that most of the HPC bacteria originated from the distribution system rather than from the source water, although this could not be determined with certainty. Intrusion could account for increased HPC, either by promoting bacterial regrowth as a result of nutrient intrusion, or simply by increasing HPC as a result of bacterial intrusion. Therefore, differentiating between possible causes for changes in HPC cannot be done without thorough monitoring of the system over a long period of time, and without a thorough knowledge of the microbial diversity, physiology, and ecology of the microbiota being detected (Szewzyk et al., 2000).

A further complication is that the concentration of HPC bacteria measured in a water sample depends on the medium used for their enumeration. R2A agar has been shown to give the highest numbers; however, the implications of using different media for the detection of anomalies in the system have yet to be determined. Because of the variety of incubation temperatures and media in use, it is difficult to compare results within or between systems.

Regardless of the detection method employed, it is possible to use HPC bacteria as a general indicator of distribution system hygiene and performance. This approach requires that samples be taken at regular spatial intervals along the distribution system at time points that reflect the hydraulic residence time of the water in that section of the pipe. If increased bacterial numbers are seen in the same “packet” of water in a plug flow system, it is evidence that deterioration in water quality has occurred. With reasonable forensic investigation, the utility can then determine if the increased counts are due to internal vs. external events. This type of system monitoring is already performed by industries (other than water supply) that rely on high-quality water for manufacturing purposes.
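The "packet" tracking approach described above can be sketched as a simple check on HPC counts measured on the same parcel of water at successive sites ordered by hydraulic residence time; the tenfold flagging factor below is an arbitrary illustration, not a regulatory threshold.

```python
def packet_hpc_flags(hpc_by_site, factor=10.0):
    """Given HPC counts (CFU/mL) measured on the same 'packet' of water at
    successive sampling sites (ordered by residence time), flag each step
    where counts jump by more than `factor`.

    The 10x factor is an illustrative choice only; a utility would set its
    own baseline from historical data for that part of the system."""
    flags = []
    for i in range(1, len(hpc_by_site)):
        prev, cur = hpc_by_site[i - 1], hpc_by_site[i]
        flags.append(prev > 0 and cur / prev > factor)
    return flags

# Hypothetical counts at four sites; the jump occurs between sites 2 and 3.
flags = packet_hpc_flags([20, 35, 600, 550])
```

A flagged step localizes where in the network the deterioration occurred, narrowing the subsequent forensic investigation into internal vs. external causes.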

Coliforms

Under ideal circumstances, the presence of coliforms in a drinking water sample should indicate external fecal contamination of the water supply, which is the main premise behind the current Total Coliform Rule. Although this concept has served the industry reasonably well, it is not without flaws. Methods may not be sufficiently sensitive for detection, and sample collection may give false positive (e.g., contaminated faucet screens) or false negative (disinfectant residual not neutralized) results. Studies of coliform presence in distribution systems indicate that coliforms may be introduced via treatment breakthrough as well as by intrusion events, main breaks, and other external contamination events (Besner et al., 2002). Furthermore, on occasion, coliforms have been shown to multiply in biofilms, contributing to their detection in drinking waters (LeChevallier et al., 1996). The same authors reported a correlation between coliform occurrence and variables such as temperature, AOC levels, and the type of disinfectant being used. Therefore, it is often not possible to determine if a coliform-positive sample is the result of an external contamination event vs. regrowth of the organism within a biofilm. This problem is acknowledged in the Total Coliform Rule, in which the EPA allows for a variance from the regulation (40 CFR, Code of Federal Regulations, 1993) “for systems that demonstrate to the State that the violation of the total coliform MCL is due to a persistent growth of total coliforms in the distribution system rather than fecal or pathogenic contamination, a treatment lapse or deficiency, or a problem in the operation or maintenance of the distribution system.”

Although the term coliform is used, it should not be forgotten that the group includes several different genera (see Table 3-2) which may survive/regrow differently under different conditions. When E. coli is found in drinking water, it is generally believed to be associated with an external contamination event (see references in Tallon et al., 2005) and linked to fecal contamination rather than an internal/biofilm source. Consequently, E. coli is one monitoring tool that is used to distinguish between internal and external contamination. However, it should be noted that E. coli is less resistant to disinfectants than some other pathogenic bacteria, viruses, and the protozoan cysts/oocysts. Thus, its absence does not indicate an absence of pathogens.

Other Indicators of Fecal Contamination

There is a great deal of interest in identifying alternative indicators for waterborne pathogens, as evidenced by the recent publication of the NRC report Indicators for Waterborne Pathogens, which summarizes the most recent insights on the topic (NRC, 2004). An overview of some of these organisms (Clostridium perfringens, enterococci and fecal streptococci, Bacteroides spp., Bacillus subtilis spores, Pseudomonas spp., Aeromonas spp., staphylococci, HPC bacteria, hydrogen sulfide producers, and bacteriophages) is given in Table III of Tallon et al. (2005). As noted in that table, most of these organisms are not entirely specific to fecal contamination and/or suffer from difficulties in detection. As a case in point, the spores of Clostridium perfringens have been proposed as an indicator of fecal contamination of water and have been used as surrogates for assessing the efficacy of water treatment processes designed to remove viruses and the cysts/oocysts of Giardia and Cryptosporidium spp. (Payment and Franco, 1993; Venczel et al., 1997). Because the spores are more resistant to disinfection and environmental stresses, their responses to these stresses are less pronounced than those of vegetative bacteria. This indicator is not always specific for fecal contamination, however, because it can be found in soils and sediments as part of the natural flora. Nonetheless, to date it has not been identified as part of the natural flora of drinking water distribution systems, and as such may be a reasonable indicator of external contamination regardless of whether it arises from fecal contamination or the soil.

Direct Detection of Pathogens

It would be optimal to directly measure pathogenic microorganisms or molecules specific for them instead of relying on indicator organisms. At the present time, however, this approach is not realistic for several reasons. There is a wide diversity of potential pathogens, ranging from viruses to bacteria to fungi to protozoa, and each group of organisms represents a unique challenge for detection. For many viruses and protozoa, there are no appropriate lab-based culturing methods. If culturing methods are possible, selective media can reduce the chances for recovering stressed organisms. Organisms can be present in such low numbers that direct sampling is not sufficiently sensitive, and concentration methods are also prone to error. With the advent of improved molecular methods in the future, some of these limitations may be overcome. A review of the issues associated with implementing these novel methods for pathogen detection as well as problems associated with conventional approaches for assessing microbial water quality have been published previously (NRC, 2004). This report also points out that even if these new methods show promise, standardization and validation are critical if the methods are to be used in a regulatory context.

Summary

Water quality integrity needs to be evaluated rapidly and, if at all possible, using in-line, real-time methods (as discussed in Chapter 7). Unfortunately, current microbial detection methods do not lend themselves to this approach, given the many types of microbes possible, the limitations of individual indicators, and the limited rapidity and sensitivity of available methods. HPC can be a useful parameter, but only if frequent monitoring is carried out and anomalous levels are detected (Robertson and Brooks, 2003). The levels that have been proposed as action levels or guidelines by various industries are too site-, season-, and method-specific to be generally applied. In any case, HPC counts do not lend themselves to on-line monitoring because it may take up to seven days to obtain results, depending on the media being used. The concentration of HPC within the distribution system may be an indirect measure of AOC and BDOC in the water, since organic carbon may be a reason for HPC regrowth (Robertson and Brooks, 2003). It may be tempting to suggest that AOC and/or BDOC be measured in place of HPC, but these assays are not as easily applied by most utilities and often take longer than the incubation period required for the HPC measurements.

Because many of the organisms present within distribution systems cannot be detected using the plate count methods typical for HPC (Block, 1992; Leclerc, 2003), efforts have been made to use rapid and relatively inexpensive microscopic techniques to visualize all microbes present, employing fluorescent DNA stains such as acridine orange direct counts (AODC), 4'-6-diamidino-2-phenylindole (DAPI), SYBR® Green, and propidium iodide. Problems associated with these methods are that both dead and live cells are detected, there is interference from inorganic constituents, background autofluorescence can occur, and the level of detection may not be sufficiently low. Other dyes have been used by some investigators (McFeters et al., 1999), but their usefulness for routine monitoring remains to be seen. It is also possible to use specialized equipment such as the ChemScan RDI/Scan RDI™ or flow cytometers to quantify fluorescently stained organisms. In these cases, the equipment is expensive, and for flow cytometry, extensive optimization may be required.

Indirect methods could also be used to gauge water quality integrity; for example, adenosine triphosphate (ATP) methods have been proposed, but they are rather expensive and do not lend themselves to routine monitoring. Although hand-held ATP photometers have been developed and are currently used in the food industry, the usefulness of these methods in the water industry remains in question, given their limited sensitivity and the relatively dilute nature of finished water.

Molecular methods for the detection of microbes are still far from routine. Although promising, PCR-based methods employing specific primers for a suite of targeted organisms or for 16S ribosomal DNA may suffer from the same problems as HPC and coliform counts. New analytical methods have been used for early detection of chemical agents in water, and many approaches are being developed in response to the need to detect chemical terrorism and warfare agents. Calles et al. (2005) describe photoionization and quadrupole ion trap, time-of-flight mass spectrometry as a means of detecting certain hazardous compounds in a fast and sensitive manner. Similar approaches for biological contaminants and indicator organisms are possible, but considerably more research and development will be needed to ensure that the methods are reliable. Even with the possibility of increased sensitivity and more rapid results, the overall limitations on the use of coliform bacteria to signify fecal contamination still exist.

Indeed, it is improbable that one indicator (such as coliforms) will serve all needs because of the varied sources of contamination, the different types of organisms involved, the impact these events have on public health, and the changing regulatory climate. NRC (2004) proposes a tiered approach for microbial monitoring. The first level is routine monitoring of common indicators to provide an early warning of a health risk or a change from background conditions that could pose a health risk. These methods should be rapid, inexpensive, and simple to perform. If a potential problem is identified at this level, it should be followed by more detailed studies to assess the extent of public health risk. This might involve expanded sampling and a more tailored detection method for indicators, or even direct measurement of pathogens with molecular methods. The third level is a detailed investigation of the source of contamination so that it can be ameliorated. The need for standardized methods decreases as the investigation moves through the three levels, such that at the third level, specialized research tools may be required.

MAINTAINING WATER QUALITY INTEGRITY

Maintenance of water quality in the distribution system requires diligent attention by treatment plant personnel and those in charge of the distribution system. A delicate balance must be achieved in order to comply with relevant regulations while taking into account the detrimental effects that may occur as water travels through miles of pipes to the consumer. As regulations become increasingly complicated and stringent, it becomes more difficult for utilities to balance these requirements. For example, there may be a need to bolster disinfectant residuals at various points throughout the distribution system, but this may lead to unacceptable levels of DBPs. The issue is even more complicated in premise plumbing, where long periods of stagnation ultimately influence the water quality that the consumer receives. The following sections describe methods and processes for maintaining the water quality integrity of potable water. The committee believes that nitrification control is best accomplished by maintaining an adequate disinfectant residual and by booster disinfection, both of which are discussed below. The reader is referred to AWWA (2006) for further details on nitrification control.

Adequate Disinfection Residual

Maintenance of a disinfectant residual in the distribution system is required under the Surface Water Treatment Rule and has been designated as the best available technology for compliance with the Total Coliform Rule. The practice of carrying a disinfectant residual through the distribution system is also integral to the control of biofilms. This residual is intended to act as a prophylactic in the event of intrusion or backflow of a contaminant within the distribution system. With regard to the latter, the difficulty arises in determining what constitutes an adequate residual. There are examples of disease outbreaks caused by external contamination of a distribution system with a virus (Levy et al., 1998) and Giardia (Craun and Calderon, 2001); in both cases, a disinfectant residual was present or required. A few studies have examined the persistence of pathogens introduced into water carrying a disinfectant residual. Using sewage at various concentrations, Snead et al. (1980) demonstrated that there was no pathogen inactivation by chlorine at 0.2 mg/L when 0.01 percent sewage was added. Payment (1999) reported inactivation of indigenous coliforms in sewage only if the chlorine concentration was greater than 0.6 mg/L. These low levels of chlorine are typical in sections of distribution systems with high water age and in premise plumbing. More recent work has involved modeling of potential intrusion events to obtain insight on how chlorine and monochloramine inactivate organisms under relevant scenarios. Propato and Uber (2004), whose approach incorporates the variable factors of intrusion location along with mixing and contact time prior to consumption to simulate inactivation of pathogens in the distribution system, showed that monochloramine did

not provide any protection against contamination by Giardia under realistic conditions. In another modeling study, chloramine and chlorine were evaluated for their ability to inactivate Giardia and E. coli O157:H7 under a range of water quality conditions in the distribution system (Baribeau et al., 2005b). This group demonstrated that chlorine at a level of 0.5 mg/L would not inactivate Giardia, but was sufficient to disinfect E. coli when a simulated sewage intrusion event was 0.2 percent of the total flow. In contrast, monochloramine under the same conditions performed poorly in reducing the E. coli counts in a reasonable amount of time. In both of these modeling studies, there were inherent assumptions that remain to be verified under field conditions. However, the insights obtained, along with the laboratory and disease outbreak data, demonstrate that the criteria for inactivating organisms introduced during external contamination events are not well understood.
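Modeling studies of this kind generally rest on a CT (Chick–Watson) inactivation framework, in which log reduction is proportional to the product of disinfectant concentration and contact time. A minimal sketch follows; the lethality coefficients are purely illustrative and are not values taken from the studies cited:

```python
import math

def log_inactivation(lam, conc_mg_l, minutes):
    """Chick-Watson model: log10 reduction = lambda * C * t.

    lam is an organism- and disinfectant-specific lethality coefficient
    in L/(mg*min); the values used below are hypothetical.
    """
    return lam * conc_mg_l * minutes

def surviving_fraction(lam, conc_mg_l, minutes):
    """Fraction of the introduced organisms remaining viable."""
    return 10.0 ** (-log_inactivation(lam, conc_mg_l, minutes))

# A chlorine-sensitive bacterium (lam = 1.0) versus a resistant cyst
# (lam = 0.01), both exposed to 0.5 mg/L for 10 minutes:
print(log_inactivation(1.0, 0.5, 10.0))    # 5-log reduction
print(surviving_fraction(0.01, 0.5, 10.0)) # essentially no inactivation
```

Under these assumed coefficients the sensitive organism is reduced five orders of magnitude while the resistant one is barely affected, which illustrates why the same residual can disinfect E. coli yet leave Giardia cysts intact.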

Federal regulations regarding the maintenance of a distribution system residual require, for large systems, a detectable free or combined residual in 95 percent of the sample results analyzed during a one-month period, or demonstration of a heterotrophic plate count less than 500 CFU/mL. Smaller systems have reduced monitoring requirements. Some states have chosen to define “detectable residual,” including specific requirements for chlorinated and chloraminated distribution systems. For example, Texas requires 0.2 mg/L for chlorinated water and 0.5 mg/L for chloraminated water (TCEQ, 2005). The North Carolina regulations are even more stringent (NCDENR, 2004): when chlorine is the sole applied disinfectant, the residual disinfectant in the distribution system must be at least 0.2 mg/L as free chlorine in at least 95 percent of the samples each month, and when ammonia and chlorine are applied together as disinfectants, the residual disinfectant must be at least 2.0 mg/L as combined chlorine in at least 95 percent of the samples each month.

In addition to the state regulations mentioned above, the literature in some cases suggests appropriate disinfectant levels. For example, systems that maintained dead-end free chlorine levels of < 0.2 mg/L or monochloramine levels of < 0.5 mg/L had substantially more coliform occurrences than systems that maintained higher disinfectant residuals (LeChevallier et al., 1996). This committee did not reach consensus on recommending specific numbers, given the need for additional research on the level of protection provided by maintenance of a disinfectant residual and the large variability in contact time between points of contaminant entry and consumers, both within an individual distribution system and between systems. To date, most studies have examined rather large amounts of contamination (1 percent or more), and studies have not been done in flowing pipes, where the hydrodynamics would be important. It is not clear what level of microbial inactivation would be required during an event of a given magnitude, nor how that might vary depending on the type of organisms in the vicinity of the distribution system. Given that current federal regulations for surface water systems require a “detectable” disinfectant level within the distribution system, each utility should set targets depending on the expected loss of residual in the system. This loss

will depend on (1) the extent of treatment to minimize disinfectant demand in the bulk water and (2) distribution system operational practices that minimize the disinfectant demand of the pipe walls and minimize water age (such as turnover of stored water).
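The loss described above is often approximated as first-order decay with separate bulk and wall rate constants, which also lets a utility back-calculate the plant residual needed to hold a minimum residual at the system's maximum water age. A simplified sketch; the function names and rate constants are hypothetical, not values from the report:

```python
import math

def residual(c0_mg_l, k_bulk_per_day, k_wall_per_day, water_age_days):
    """First-order residual decay: C(t) = C0 * exp(-(kb + kw) * t)."""
    return c0_mg_l * math.exp(-(k_bulk_per_day + k_wall_per_day) * water_age_days)

def plant_residual_for_target(target_mg_l, k_total_per_day, max_age_days):
    """Residual needed leaving the plant so that the residual at the
    maximum water age still equals the target (invert the decay law)."""
    return target_mg_l * math.exp(k_total_per_day * max_age_days)

# Illustrative numbers: bulk demand 0.5/day, wall demand 0.3/day,
# 2 days maximum water age, 0.2 mg/L target at the far end.
c0 = plant_residual_for_target(0.2, 0.8, 2.0)
print(round(c0, 2), round(residual(c0, 0.5, 0.3, 2.0), 2))
```

The two levers named in the text map directly onto the constants: better treatment lowers the bulk constant, while operational practices (pipe cleaning, reduced water age) lower the wall term and the age itself.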

Booster Disinfection

Reactions can reduce disinfectant residuals within distribution systems, such that some utilities have chosen to use booster chlorination or booster chloramination to increase residuals at susceptible locations. Using additional points of disinfectant application in the distribution system can reduce the amount of chlorine added at a treatment plant for the purpose of maintaining the distribution system residual. This, in turn, has the potential to limit DBP formation and the subsequent exposure of those consumers drinking water from taps close to the initial source. Booster disinfection simultaneously increases protection (in terms of the presence of a residual) for those drinking water from taps with longer hydraulic residence times.

An important consideration for implementing booster chlorination or chloramination is the proper location of facilities. Kirmeyer et al. (2000) list the following criteria in selecting the location of booster stations:

• The location should be such that a relatively large volume of water can be disinfected.

• The water to be treated travels in one direction.

• The chlorine residual in the water has begun to decrease, but has not totally dissipated.

• The chlorine can be applied uniformly into the water.

• The location is acceptable to neighbors and is easily accessible for chemical delivery vehicles, with room for chemical storage and feed equipment.

• Power is readily available.

• Communications systems are readily available for the SCADA system.

• Flow and/or residual pacing can be used.

• Safety concerns can be addressed.

• For a common inlet/outlet line, chlorine should be injected as the storage facility is filling, although mixing the chlorine throughout the contents may be difficult.

Booster Chlorination

Booster chlorination is an alternative for maintaining a residual in drinking water systems where substantial disinfectant degradation occurs with travel through the system. When Uber (2003) surveyed 4,000 drinking water utilities, 15 percent of the respondents reported currently using booster chlorination for

(1) disinfectant residual maintenance, (2) prevention of biological regrowth, and (3) disinfection after open reservoirs. Utilities practicing booster disinfection ranged in size from 0.14 to 830 MGD, with an average of 55 MGD. Most (55 percent) of the booster stations operated with a constant delivery dose, although 35 percent used flow pacing or residual pacing to adjust dose. A few stations used a time-dependent set-point regime. Fifty-seven percent of the stations were controlled manually, 33 percent were automated, and ten percent were controlled remotely with the aid of SCADA. Half of the stations with automatic control also had remote alarms. Examples provided in the report show that incorporation of decay rate and THM formation data is fundamental to predicting whether there will be any net gain in maintenance of residual and formation of THMs when disinfectant application is changed from a single location to multiple locations. Important products of the study were the Booster Disinfection Design and Analysis software and network models, which aid in the placement and operation of booster disinfection systems. The software is capable of (1) setting dose schedules for specified booster locations, (2) selecting both booster dose schedules and locations, and (3) heuristically screening potential booster locations.
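The net-gain question, whether moving part of the dose downstream preserves residual without requiring a heavy dose at the plant, can be explored with even a toy decay model. The sketch below is not the Booster Disinfection Design and Analysis software; it is a hypothetical illustration that tops the residual back up to a set point at specified water ages:

```python
import math

def residual_profile(c0_mg_l, k_per_day, ages_days, boosters):
    """Residual versus water age with booster stations.

    ages_days: increasing water ages at which to report the residual.
    boosters: {age_days: setpoint_mg_L} -- at these ages the residual is
    topped back up to the set point (illustrative, exact-match keys only).
    """
    profile, c, last_age = [], c0_mg_l, 0.0
    for age in ages_days:
        c *= math.exp(-k_per_day * (age - last_age))  # first-order decay
        if age in boosters:
            c = max(c, boosters[age])                 # booster top-up
        profile.append(c)
        last_age = age
    return profile

# Plant leaves 1.0 mg/L; one booster at 1.0 day of water age restores
# the residual to 1.0 mg/L (all constants are made up for illustration).
print(residual_profile(1.0, 1.0, [0.5, 1.0, 1.5], {1.0: 1.0}))
```

Comparing runs with and without the booster entry shows the trade-off in the text: the boosted system holds a higher residual at long water ages without raising the initial dose, at the cost of additional downstream chemical feed.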

Booster Chloramination

Approximately 12 percent of the respondents to the survey published in Uber (2003) practice booster chloramination, by one of two methods. Most reportedly use chlorine to bind excess ammonia—a useful approach if chloramine decay produces excess ammonia or if sufficient free ammonia remains from the initial formation of chloramine. Three booster stations were identified in the survey using a second method in which both chlorine and ammonia are applied at the same location. This approach is used when there is a need to increase the overall concentration of chloramine present. Wilczak et al. (2003) reviewed operating practices at utilities employing booster chloramination with the addition of free chlorine (Martin and Cummings, 1993; Cohen, 1998; Ireland and Knudson, 1998) or both chlorine and ammonia (Potts et al., 2001). Monitoring of chlorine and ammonia was practiced by all utilities, in the majority of cases with on-line combined chlorine analyzers. Nitrite, pH, and on-line free ammonia analyzers were also employed by some of the utilities. Process control options included manual dose control, dosage determined by flow, dosage determined by flow along with measurements of the chlorine residual, and dosage set by flow and controlled by a desired feed level set point. In all cases, operators could manually alter the chemical doses depending on water quality results. The goal of all utilities was to maintain a total chlorine residual of at least 2.0 mg/L.

Several factors should be investigated before implementing booster disinfection. Hatcher et al. (2004) described recommendations made to the Sweetwater Authority, which could be made without capital expenditure in the distribution system. Among these were (1) optimizing corrosion control, (2) improving the biostability of the water by reducing the free ammonia concentrations at the treatment plant effluent to prevent nitrification (thereby avoiding disinfectant loss), and (3) conversion of the last of three plants to chloramine to avoid chlorine-chloramine blending in the system. Grayman et al. (2004) described the methods to characterize and improve mixing within a storage reservoir so that disinfectant decay in the reservoir could be minimized. If operations such as these (improved tank mixing, optimized chloramine formation at the plant, improved corrosion control to reduce disinfectant demand by pipe surfaces) can improve the maintenance of the total chlorine residual, boosting may not be necessary.

Corrosion Control

As discussed in Chapter 4, there are measures that can be taken to control both internal corrosion and metal release, including materials selection for the distribution system, addition of a corrosion inhibitor such as phosphates, control of the chemistry of the water being distributed, or some combination of these approaches. Materials selection is important because materials that are not subject to corrosion can be used when it is very difficult to control water composition. The use of phosphate inhibitors to control problems with iron, lead, and copper is widespread, but care must be taken to ensure that the added phosphate does not decrease the biological stability of the water or cause a problem in municipal wastewater treatment plant discharges. Water stagnation is an important cause of many iron release problems, so distribution systems must be designed to maintain flowing water conditions to the extent possible (Sarin et al., 2004). In addition, proper pH control and maintenance of an acceptable alkalinity concentration are effective ways to control both corrosion and metal release (Sarin et al., 2003).

It should be noted that internal corrosion control can positively influence the effectiveness of chlorine-based disinfectants for inactivation of bacteria in biofilms. Corrosion products react with residual chlorine, preventing the biocide from penetrating the biofilm and thus from controlling coliform growth. Studies have shown that free chlorine is impacted to a greater extent than monochloramine, although the effectiveness of both disinfectants is impaired if corrosion rates are not controlled (LeChevallier et al., 1990, 1993). Increasing the phosphate-based corrosion inhibitor dose, especially during the summer months, can help reduce corrosion rates.

Materials Specification to Control Leaching

Chapter 4 discusses how the quality of materials is critical to minimizing the potential for external contamination to enter the potable water system via leaks, breaks, and permeation. Internal contamination processes like leaching and corrosion are also related to the quality of the materials used in the distribution system. Most substances leaching into water from materials in the distribution system are non-toxic and unlikely to cause health problems. However, problems have occasionally been noted with PVC pipes manufactured before about 1977, with cement materials, and with excessive leaching of organic substances from linings and from joining and sealing materials. In addition to the direct effect of leaching, new pipe materials can have a significant indirect effect on internal water quality. For example, they may exert a chlorine demand that can reduce the residual disinfectant in the distribution system and hence degrade the microbial quality of the drinking water (Haas et al., 2002). Standards for the manufacture of materials and guidance for specifying materials need to be updated to address water quality issues (e.g., leaching of emerging chemicals, biogrowth-promoting potential, leaching of non-health-related but taste/odor-related chemicals, susceptibility to permeation).

RECOVERING WATER QUALITY INTEGRITY

Recovering water quality integrity following an internal contamination event in the distribution system revolves around a few specific activities. In many cases, the same method is used to address several issues including colored/turbid water, loss of residual, nitrification, and elevated microbial counts. Options are limited to (1) flushing to remove the taste/odor/color/turbidity or to restore disinfectant concentrations, (2) permanently switching disinfectants to maintain a residual, typically from free chlorine to chloramine, (3) periodically changing from chloramine to free chlorine to mitigate nitrification, (4) implementing corrosion control to reduce corrosion and leaching, or (5) changing water sources. These approaches are discussed in more detail below. It should be noted that other distribution system maintenance and repair options such as cleaning, relining, replacement, and localized disinfection can also alleviate internally derived water quality problems; these methods are described in Chapters 4 and 5.

Flushing

As shown in previous chapters, water main flushing is an operational activity that involves moving water through the distribution system, often at a rate that facilitates scouring of the surfaces, and discharging it through hydrants or blow-off ports. Many researchers and utility managers have suggested that optimized flushing is important in maintaining or recovering water quality (Pattison, 1980; Emde et al., 1995, 1997; Smith et al., 1996; Antoun et al., 1997; Barbeau et al., 1999). Disinfectant residuals may be regained or maintained by moving out “old” water and replacing it with water containing a measurable residual.

Flushing can also be used to remove deposits and the discolored water resulting from suspended material, and it can be an effective method for combating biofilm growth. Gauthier et al. (1997) showed that loose deposits removed by flushing from a French system contained organisms including invertebrates, protozoa, and bacteria. Ackers et al. (2001) characterized the sediments removed by flushing as corrosion products, components of the pipe lining, treatment breakthrough, animal and biomatter, and calcium deposits. Antoun et al. (1997) recommended that flushing be used to reduce the potential for total coliform positive samples in a distribution system, and the approach seemed to alleviate the problem. Similarly, Emde et al. (1995, 1997) suggested that flushing at sufficient velocities could remove biomass from pipe surfaces, thereby controlling biofilms. This approach was utilized by the Zurich Water Supply in Switzerland to control regrowth in a distribution system supplying water without a secondary disinfectant (Klein and Forster, 1998). In another situation, temporary control of invertebrates through a program including flushing was advocated (van Lieverloo et al., 1998).

A recent survey (Friedman et al., 2003) showed that of 23 responding U.S. utilities, 20 have regularly scheduled flushing programs, while the remaining three flush on an as-needed basis. In order of frequency cited, the objectives for flushing were to eliminate colored water, restore disinfectant residual, reduce turbidity, eliminate tastes and odors, reduce the number of bacteria, reduce DBP precursors, remove sediment, comply with regulations, maintain water quality, decrease chlorine demand, respond to customer complaints, reduce corrosion inhibitor build-up, eliminate stale water, respond to animal activity, and eliminate lime deposits. Another survey (summarized in Table 2-5) suggests that flushing receives varying support from state agencies: of 34 responding states, only 11 require flushing/cleaning/pigging, while 20 others encourage the practices. One of the difficulties in assessing the efficacy of flushing for restoring or maintaining water quality is the lack of data collection by utilities. A nation-wide survey (Chadderton et al., 1992) showed that utilities typically implemented flushing in response to consumer complaints. The survey also found that less than half of the utilities collected water samples during the flushing process for analysis of chlorine, turbidity, bacteria, or other parameters. In most cases, flushing proceeded until the water was visually clear. In light of the benefits that can be attained by properly conducted flushing (which minimizes the amount of water wasted and appropriately discharges the waste), more attention should be given to this approach for maintaining quality or resolving problems.

Change Disinfectant

For nearly 100 years, drinking water utilities in the United States and Canada have converted to chloramine for improved ability to maintain a residual at dead ends or other areas with high hydraulic residence time due to the lower decay rates of chloramine versus free chlorine. Systems also have converted to chloramine for taste, odor, or DBP control (Kirmeyer et al., 1993). Lowering of the THM standard from 100 to 80 µg/L in 2001 and the forthcoming requirement to meet this value at each monitoring location have encouraged additional utilities to switch to chloramine.

In recent years the ability of chloramine to penetrate biofilms and control Legionella has received more attention and helped those who have already converted for other reasons (e.g., maintenance of residual, taste and odor, or DBP control) justify their decision. Preliminary results show that although Legionella is better controlled with chloramination (see Chapter 3 and Pryor et al., 2004), it is possible that other organisms of public health concern such as Mycobacteria could have a selective advantage under these conditions (Pryor et al., 2004). The ability of chloramine to control biofilms has been documented by several utilities that find it easier to meet the requirement of the Total Coliform Rule using chloramine (Norton et al., 1997; Richard Mann, Metropolitan Water District of Southern California, Los Angeles, CA, personal communication, 1998) although this increased control is not universal (Muylwyk et al., 2001). In a multi-year study in pipe loops with a three-day residence time, both chlorine and chloramine were found to be effective disinfectants but chloramine persisted longer (Clark et al., 1994). Other laboratory studies have not demonstrated an advantage of monochloramine over free chlorine (Ollos et al., 1998; Clark and Sivaganesan, 1999; Camper et al., 2003).

Meanwhile, issues regarding the disturbance of bacteria or metallic oxides on the pipe walls during the disinfectant switch and their influence on water quality have been and continue to be a concern. A recent example of how the disturbance of pipe walls can affect water quality is described by Edwards and Dudi (2004) regarding the District of Columbia Water and Sewer Authority’s drinking water lead levels after the conversion from free chlorine to chloramine. During the time when chlorine was used as the disinfectant in the distribution system, lead dioxide formed on the lead pipes. When the redox potential decreased because of the conversion from chlorine to monochloramine, the lead dioxide was converted to a more soluble lead (+II) compound, and this caused an increase in the lead concentration in the bulk water. While it is likely that iron release and elevated coliform levels concomitant with the conversion to chloramine will resolve as the distribution systems reequilibrate, in some cases utilities have intervened with flushing to accelerate the transition, but noted increases in water quality problems during the flushing event.

In addition to permanent changes in disinfectant, there are instances where short-term switches are practiced. Drinking water utilities using chloramine as a disinfectant residual sometimes temporarily switch to free chlorine both as a

preventative and control measure for nitrification. This can be accomplished system-wide by simply turning off the ammonia feed pumps. To switch to free chlorine in an isolated pressure zone or storage facility, enough chlorine must be added to pass the breakpoint and achieve a free chlorine residual. There are differing views about the practice of periodic chlorination in chloraminated systems. Indeed, the practice has been abandoned by some utilities in Southern California as a response to legitimate concerns over short-term exposure to elevated DBP levels (Hatcher et al., 2004).

When nitrification occurs in storage tanks, and other strategies such as adjustments in the chlorine-to-ammonia ratio, increased turnover, or flushing have not solved the problem, breakpoint chlorination is a common response (Skadsen and Cohen, 2006). In this case chlorine is added to oxidize all of the nitrogen species (ammonia and nitrite) and achieve a free chlorine residual. This strategy is effective because ammonia-oxidizing bacteria are sensitive to free chlorine (Baribeau, 2006), but temporary, since their populations will recover after the weaker disinfectant (chloramine) is reestablished in the facility or system. This is demonstrated by the need for repeated breakpoint chlorination of reservoirs, especially in summer months.
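The chlorine dose required to pass the breakpoint can be estimated from the nitrogen species present: the stoichiometric Cl2:NH3-N weight ratio is about 7.6:1 (ratios of 8:1 to 10:1 are often applied in practice), and nitrite exerts roughly 5 mg Cl2 of demand per mg of NO2-N. A rough calculator, with a hypothetical function name and an illustrative default target residual:

```python
def breakpoint_dose(ammonia_n_mg_l, nitrite_n_mg_l=0.0,
                    cl2_to_n_ratio=7.6, free_residual_target_mg_l=1.0):
    """Rough chlorine dose (mg/L) to pass the breakpoint.

    - cl2_to_n_ratio: stoichiometric weight ratio is ~7.6:1; field
      practice often uses 8-10:1 to allow for other demands.
    - Nitrite demand is taken as ~5 mg Cl2 per mg NO2-N.
    - free_residual_target_mg_l: free chlorine desired beyond breakpoint.
    """
    return (cl2_to_n_ratio * ammonia_n_mg_l
            + 5.0 * nitrite_n_mg_l
            + free_residual_target_mg_l)

# A nitrifying tank with 0.5 mg/L NH3-N and 0.2 mg/L NO2-N:
print(breakpoint_dose(0.5, nitrite_n_mg_l=0.2))  # mg/L of chlorine
```

This kind of estimate only sizes the chemical addition; as the text notes, adequate mixing in the tank is what determines whether the added chlorine actually reaches breakpoint throughout the stored volume.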

Adequate mixing is important for efficient breakpoint events and for routine maintenance of the disinfectant residual within storage facilities. Figure 6-4 shows the variability of a chloramine (total chlorine) residual at the sample tap on a common inlet/outlet pipe to a storage tank. As the tank fills, water from the distribution system with a total chlorine residual of about 2 mg/L enters the tank;

as the tank drains the water is observed to have a total chlorine residual of about 1 mg/L. Some intermediate points are seen on the figure that are characteristic of distribution water mixing with tank water (Guistino, 2003). Grayman et al. (2004) have described design and operational problems that lead to poor mixing within storage facilities. Figure 6-4 also illustrates the utility of using continuous disinfectant analyzers on storage facilities. The data reveal that the disinfectant residual is a function of the fill and drain cycle (or elevation) of the tank more than the water quality of the storage facility and distribution system.
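The fill-and-drain signature seen in Figure 6-4 can be reproduced with a completely mixed tank model: during filling, the tap on the common inlet/outlet line sees distribution water, while during draining it sees tank water that has decayed in storage. A toy simulation with purely illustrative values (the function and its parameters are hypothetical):

```python
import math

def tank_cycle(c_in=2.0, c_tank0=1.5, vol=1.0, q=0.2, k_per_day=0.3,
               steps=24, dt_days=1 / 24):
    """Residual at the tap of a common inlet/outlet line over one cycle.

    Fill for the first half of the cycle (tap sees distribution water,
    c_in), drain for the second half (tap sees decayed tank water).
    Tank is modeled as completely mixed; all constants are illustrative.
    """
    c, v, tap = c_tank0, vol, []
    for i in range(steps):
        c *= math.exp(-k_per_day * dt_days)            # decay in storage
        if i < steps // 2:                             # filling
            dv = q * dt_days
            c = (c * v + c_in * dv) / (v + dv)         # mix inflow in
            v += dv
            tap.append(c_in)                           # tap sees inflow
        else:                                          # draining
            v -= q * dt_days
            tap.append(c)                              # tap sees tank water
    return tap

trace = tank_cycle()
print(max(trace), min(trace))  # high plateau on fill, lower values on drain
```

The simulated tap trace alternates between the inflow residual and the lower tank residual, which is the step pattern a continuous analyzer on the common line records.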

Regardless of whether disinfectant changes are long- or short-term, utilities should be aware that these changes may have implications for protecting public health, especially during an intrusion event. Chloramine may be inadequate for protection against microorganisms that enter the distribution system during intrusion, as discussed previously for Giardia cysts but also for enteric viruses with less susceptibility to chloramine than chlorine. Karim et al. (2003) showed that over half of soil samples collected during pipe replacements tested positive for enteric viruses.

Change Treatment/Corrosion Control

If corrosion or metal release is identified as a problem, one of the measures listed in the previous section on corrosion control should be undertaken. Identification of the cause of the problem is most important relative to selection of the best approach. For example, if the cause of the problem is variable pH, treatment to control pH and possibly to add alkalinity for the purpose of increasing buffer intensity is probably necessary. Adding alkalinity also benefits corrosion control via the addition of carbonate (from the standpoint of developing a stable scale).

CONCLUSIONS AND RECOMMENDATIONS

Beyond contamination that enters potable water from external sources (intrusion, cross connections, etc.), there are processes within the distribution system that contribute to degradation of water quality. The large surface area to volume ratio of pipe surfaces, reactive pipe materials, advanced water ages, and bulk water reactions all contribute to deleterious changes in water quality from the treatment plant to the consumer. Maintaining water quality integrity in the distribution system is challenging because of the complexity of the system. There are interactions between the type and concentration of disinfectants, corrosion control schemes, operational practices (e.g., flow characteristics, water age, flushing practices), the materials used for pipes and plumbing, the biological stability of the water, and the efficacy of treatment. In some cases, changes to improve water quality may be reasonably easy, while others may be extremely difficult. The following conclusions and recommendations are made.

Prior to distribution, the quality of treated water should be adjusted to minimize deterioration of water quality. For example, appropriate use of phosphate inhibitors and control of pH and alkalinity can be used to minimize both internal corrosion of lead, copper, and iron pipes and the formation of colored water owing to the release of iron from corrosion scales. Coagulation with aluminum salts should be done in a way that minimizes the residual aluminum concentration in filtered water, thereby reducing the amount of aluminum that precipitates in the distribution system. Ensuring that water is biologically stable via removal of organic carbon will have a positive impact on the preservation of water quality as it travels from the treatment plant to the consumer. It should be kept in mind that other chemical adjustments may have an impact on the biological stability of the water.

Microbial growth and biofilm development in distribution systems should be minimized. Even though general heterotrophs are not likely to be of public health concern, their activity can promote the production of tastes and odors, increase disinfectant demand, and may contribute to corrosion. Biofilms may also harbor opportunistic pathogens (those causing disease in the immunocompromised); this is of greatest importance in premise plumbing where long residence times contribute to disinfectant decay and subsequent bacterial growth and release. Coliforms may also proliferate in biofilms. With perhaps the exception of E. coli, coliforms from biofilms are indistinguishable from those arising from external contamination. When these coliforms are detected, it is difficult to determine if a contamination event of public health significance has occurred (fecal contamination) as opposed to the growth of indigenous organisms.

Residual disinfectant choices should be balanced to meet the overall goal of protecting public health. For free chlorine, the potential residual loss and DBP formation should be weighed against the problems that may be introduced by chloramination, which include nitrification, lower disinfectant efficacy against suspended organisms, and the potential for deleterious corrosion problems. Although some systems have demonstrated increased biofilm control with chloramination, this response has not been universal. This ambiguity also exists for the control of opportunistic pathogens.

Current microbial monitoring is limited in its ability to indicate distribution system contamination events, such that new methods and strategies are needed. Current methods are not specific for the source of contamination and do not allow for good process control because of the substantial amount of time needed to generate results. There are limits to the effectiveness of indicators with respect to predicting the presence of pathogens. A tiered approach to microbial monitoring should be considered in which common indicators are used initially to provide an early warning of a potential health risk, followed by more detailed studies to assess the extent of the risk using more specific methods. In concert with this, techniques for monitoring for specific pathogens with known public health significance should be developed, ideally with results available on-line and in real time. The implementation of best practices to maintain water quality (see Chapter 2) is needed until better monitoring approaches can be developed.

Standards for materials used in distribution systems need to be updated to address their impact on water quality, and research is needed to develop new materials that will have minimal impacts. Materials standards have historically been designed to address physical/strength properties including the ability to handle pressure and stress. Testing of currently available materials should be expanded to include (1) the potential for permeation of contaminants, and (2) the potential for leaching of compounds of public health concern as well as those that contribute to tastes and odors and support biofilm growth. The results of these tests should be incorporated into the standards in such a way that water quality deterioration attributable to distribution system materials is minimized. Also, research is needed to develop new materials that minimize adverse water quality effects such as the high concentrations of undesirable metals and deposits that result from corrosion and the destruction of disinfectant owing to interactions with pipe materials.
