This example raises some new issues as well as some we discussed in the earlier examples: EPA relies on a highly flawed "category approach" that ignores major differences in the properties and structures of the 13 members of this category. It compounds this problem by unquestioningly accepting data from inadequate studies to assert low toxicity, rather than demanding that sufficient studies be provided. As a result, it fails to identify, let alone require to be filled, the enormous gaps in the data available for many of the category members. EPA ignores or dismisses without explanation its own earlier comments raising serious concerns about the quality and completeness of data provided by the sponsor of these chemicals under the HPV Challenge. Finally, this example once again shows how EPA's heavy reliance on self-reported use information from manufacturers paints an incomplete and potentially very misleading picture of the actual uses of industrial chemicals.

The Fatty Nitrogen Derived Cationics category includes 13 chemicals that are used in industrial and consumer detergents and cleaners, as well as hair care products (conditioners or softeners), disinfectants, textile softening and antistatic agents, deodorizers, emulsifiers, dispersants, coagulants, industrial lubricants and corrosion inhibitors, among other uses. Two supporting chemicals are registered with EPA as antimicrobial pesticides. Annual production volumes ranged from <1 million pounds for two of the category members to 50-100 million pounds for one of the chemicals. The sponsor of this group of chemicals under the HPV Challenge was the American Chemistry Council's Fatty Nitrogen Derivatives Panel's Cationics Task Group.

Is this a legitimate category?

EPA and international protocols provide for the grouping of chemicals into categories for data development and assessment purposes. However, that approach starts with a hypothesis that chemicals that have structural similarities actually possess similar or predictable patterns of biological activity. These protocols require that the hypothesis actually be demonstrated to be true, once the available data on physical-chemical properties, environmental fate and toxicity/ecotoxicity for the proposed category members are assembled, and that a full and compelling rationale be provided.

Neither the sponsor of this category nor EPA has done any such thing in the present case. Indeed, the sparse available data do not support the category:

Few measured physical-chemical and environmental fate data have been provided, even though such data are essential to demonstrate similarity in properties and behavior. Instead, estimated values are provided that are either eerily identical for all category members (e.g., bioconcentration factor, or BCF) or vary dramatically across category members (e.g., Henry's Law Constant, a measure of how a chemical distributes between water and the air above it). These findings hardly support the hypothesis that all category members will behave similarly.

The spotty measured biodegradation data that have been provided also differ significantly (ranging from 0% degradation in 28 days, to 12% in 182 days, to 98% in 2 days). EPA acknowledges this huge range, but fails to discuss how it comports – or doesn't – with the category rationale.

Acute oral mammalian toxicity values (LD50s) are available for most category members – but they also vary, from 238 to >16,300 milligrams per kilogram of body weight (mg/kg-bw). And while the sponsor and EPA argue these data are similar enough to support the category (they only span EPA's moderate and low hazard classifications), there is no reason to expect that the mechanisms that impart acute toxicity are at all related to those that lead to other toxicities, such as reproductive or developmental toxicity (these endpoints are discussed further below). So how exactly do similar data for one endpoint support a conclusion that data for entirely unrelated endpoints will be comparably similar? EPA never bothers to explain this.
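To see how a 70-fold spread in LD50 values can still land in just two hazard bands, here is a minimal sketch of that classification logic. The cutoffs used (50 and 2,000 mg/kg-bw) are assumed screening thresholds for illustration, not figures quoted from EPA's assessment of this category.

```python
# Sketch of acute oral hazard banding by LD50.
# ASSUMPTION: cutoffs of 50 and 2,000 mg/kg-bw separate the
# high / moderate / low screening bands (illustrative only).
def acute_oral_hazard(ld50_mg_per_kg):
    """Return a screening hazard band for an acute oral LD50 value."""
    if ld50_mg_per_kg <= 50:
        return "high"
    if ld50_mg_per_kg <= 2000:
        return "moderate"
    return "low"

# The category's reported range, 238 to >16,300 mg/kg-bw,
# spans only the moderate and low bands:
print(acute_oral_hazard(238))    # moderate
print(acute_oral_hazard(16300))  # low
```

The point the sketch makes is narrow: similarity within these coarse acute-toxicity bands says nothing about similarity for mechanistically unrelated endpoints like reproductive or developmental toxicity.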

As discussed below, far fewer data are available for the other human health endpoints, and what data are available do not support the category.

Despite these findings, EPA states that it agrees with the sponsor's category justification, asserting that the category members possess "similar physicochemical properties, biodegradability, aquatic toxicity, mammalian toxicity and environmental disposition patterns." The differences in the actual data noted above are neatly set aside.

In comments EPA provided to the sponsor in 2002, EPA itself clearly considered the chemicals in this category to be sufficiently different from each other structurally that it broke them into three subcategories and argued for the need for more data to be provided within the subcategories in order to bolster the overall category. (As we'll discuss below, the sponsor refused to provide the additional data, and EPA capitulated with nary a word as to why). These subcategories are: a) four chemicals that have a single alkyl chain; b) seven that have two alkyl chains; and c) a third subcategory that includes one chemical with three alkyl chains and one that has a dimeric structure.

EPA's rankings

Now let's look at how EPA ranks the category, and some of the many reasons why we disagree.

Hazard rankings: EPA ranks the entire chemical category as moderate for human health hazard, apparently due to the results of repeated dose testing. While EPA maintains that none of the chemicals are expected to bioaccumulate, it expects most (9) of the category members to exhibit moderate persistence. The hazard for aquatic organisms – fish, invertebrates and algae – is ranked high, based on the results of both acute and chronic testing using multiple test species.

Exposure rankings: Exposures to workers, consumers and children are expected by EPA to be high, due to the uses of these chemicals in common household and personal care products. Releases to the environment are not known, so EPA estimated that exposures to the general public and the environment resulting from such releases would likely be moderate.

Risk rankings: EPA judged the risk ranking for this group of chemicals to be medium for all possible receptors.

Prioritization ranking: EPA assigned this chemical category a medium priority and identified a list of "possible next steps" to get additional information that would "assist EPA" in developing a better understanding of use and exposures. These include just about everything you can imagine EPA would have needed to make any findings about exposure in the first place: potential releases to water from manufacturing, use and disposal; information concerning worker exposures; and information concerning potential exposures to these chemicals in consumer products, such as presence and concentration and consumer use patterns.

Why We Disagree:

1. As discussed above, the grouping of chemicals into categories can in some cases be justified, but it requires a sufficient amount of measured data to demonstrate that the chemicals within the category actually behave in a similar or predictable manner reflective of their structural similarity. In this case, the available data are grossly insufficient to support the category. Yet EPA still manages to conclude there are no data gaps for any endpoints for this entire category.

2. As noted earlier, EPA broke this category into three subcategories, based on structural differences. Let's examine in more detail the nature and extent of mammalian toxicity data provided for the first of these subcategories, the mono alkyl quaternary ammonium chlorides:

a. Repeated dose toxicity. None of the members of this subcategory has a reliable repeated dose toxicity study (a test used to evaluate health effects from more-than-single-dose exposures and to screen for possible effects of chronic exposure). No oral studies are available, and the single dermal study provided used only a single dose. That dose yielded no adverse effects, but it was very low: below the threshold at which an observed effect would have triggered EPA's high-hazard ranking.

Tests that use only a single dose, and tests that fail to find an effect level because the doses used are too low, are insufficient to support any hazard assessment. Yet EPA doesn't even discuss the matter. It doesn't acknowledge that the test is insufficient, identify this endpoint as a data gap, or adopt a reasonable default assumption, in the absence of valid data, that repeated dose toxicity could be high. Instead, it proceeds merrily to "read across" this negative result to the untested members of the subcategory. And worst of all, it actually concludes in its hazard characterization summary that "no treatment-related systemic toxicity was evident at the doses tested" – completely burying critical information about data quality and reaching a scientifically unjustified conclusion.

b. Reproductive toxicity. This same subcategory lacks any reproductive toxicity data whatsoever. In comments EPA provided back in 2002 on the test plan submitted by the sponsor of this category, EPA requested that a combined reproductive/developmental toxicity test be conducted to address this glaring gap (as well as the corresponding gap in data for developmental toxicity). The sponsor responded in 2003, stating that in its view the requested additional testing "will not further the understanding of potential human health hazards…" of these chemicals. The sponsor provided no rationale to support its claim, failing even to acknowledge EPA's point that there were no data available for any of the monoalkyls.

In EPA's current ChAMP assessment, issued in March 2009, EPA now states merely that it accepts this response, and provides absolutely no explanation for its change of heart. Instead, EPA – without any stated justification – is now content to "read across" to all four members of this subcategory the data from a "supporting" chemical that is actually a di-alkyl, not a mono-alkyl, compound. Indeed, this supporting chemical serves as the ONLY source of reproductive toxicity test data for all 13 category members! On this basis, EPA then blithely claims there is no reproductive toxicity – and no data gap for reproductive toxicity – for all 13 members of this category.

c. Developmental toxicity. Data are available for one of the four members of the mono alkyl subcategory for the oral route of exposure, and for two other members for dermal exposure. EPA "reads across" these data to the untested members of the subcategory. That might normally be sufficient, but again neither test yielded any adverse effects at the highest doses tested, and again those doses were very low: below the threshold at which an observed effect would have triggered EPA's high-hazard ranking.

Remember, tests that fail to find an effect level because they use doses that are too low are insufficient to satisfy an endpoint and support hazard assessment. But once again, instead of acknowledging this inadequacy, identifying this as a data gap, and using a reasonable default assumption that developmental toxicity could be high, what does EPA do? It claims "no signs of developmental toxicity were observed"!

In its hazard characterization summary, EPA downplays or omits the results of the developmental toxicity studies it reviewed. It ignores without explanation evidence of adverse effects that are at least equivocal, and may be significant:

EPA claims that a test done on the di-alkyl supporting chemical "resulted in no developmental toxicity," despite the fact that increased fetal mortality and decreased fetal body weight were observed – and at doses that warrant a high hazard ranking using EPA's criteria.

Similarly, with respect to tests done on two dialkyl category chemicals, EPA claims the studies did not produce an effect at the highest doses tested. Yet both chemicals were actually found to have increased fetal resorptions, albeit at doses that EPA would rank as low-hazard.

In each of these cases, we are forced to infer a rationale because EPA never clearly explains its decisions. But the apparent rationales – effects seen only at doses that are also toxic to the mother, effects within the range historically observed for controls in the laboratory (though with no supporting data provided by the laboratory) – are not sufficient to conclude there are no adverse developmental effects, even if they are also insufficient to conclude there are such effects. Indeed, in a screening-level hazard characterization based on scant or equivocal data, the default should be either to assume an effect exists or at the very least to call for further testing.

Finally, even if one were to accept these studies as definitively negative for the dialkyl subcategory, EPA has no basis either to extrapolate that finding to the other subcategories or to paper over the enormous data gap that exists for this endpoint.

3. We also disagree with EPA's risk ranking for this chemical category. Even assuming EPA's moderate ranking of the human health hazard of this group is appropriate, important data gaps remain, exposures to humans are ranked high, and the chemicals are likely to be washed down the drain into the environment. Given all that, we don't see how this category should be ranked as anything but both high risk and high priority.

4. The two supporting chemicals used to provide hazard data for this category are both antimicrobials, and given EPA's readiness to treat all of the chemicals together, it is reasonable to assume that the category chemicals may also have antimicrobial properties. Add to this the facts that the chemicals in this category are used in a widespread and dispersive manner, and that they may well end up in wastewater treatment plants and surface waters, and there is every reason to be concerned that they could adversely affect beneficial microbes used in wastewater treatment and found in the environment.

Antimicrobials used and marketed as such are exempted from TSCA, and are instead regulated under the Federal Insecticide, Fungicide and Rodenticide Act. However, the chemicals in this category are used in applications where such a function and associated claims are not operative. That is, we have a group of chemicals that EPA considers similar to known antimicrobial pesticides that are disposed of down the drain, into sewage treatment systems that may not effectively remove these chemicals, potentially resulting in distribution through the ecosystem.

EPA is wholly silent on this issue. In addition to collecting more data, we would strongly recommend that these chemicals be evaluated to determine if inherent antimicrobial properties warrant regulation as antimicrobial pesticides.

5. EPA ranks most (9) of the category members as exhibiting moderate persistence, and the remaining four as of low persistence. Yet it barely discusses the available data and ignores the following:

3 of the 4 chemicals EPA ranked low for biodegradation appear to exceed EPA's criterion for a low ranking, and several of those ranked moderate show little if any biodegradation.

EPA ranks as moderate one chemical that showed 0% degradation in 28 days. This obviously should be ranked high.

EPA also ranks as moderate another chemical that has a half-life in soil of a whopping 1,048 days – nearly 3 years! EPA's criteria rank any half-life in soil that exceeds 180 days as high – yet EPA's ranking of this chemical is, inexplicably, moderate.

6 of the category members have no biodegradation data, and any rationale for EPA's implied read-across from the other members is absent.
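The half-life comparison above is simple arithmetic, which makes EPA's "moderate" call all the more puzzling. Here is a minimal sketch of that criteria check. Only the 180-day "high" cutoff comes from EPA's stated criteria as described above; the 60-day low/moderate boundary is an assumption added for illustration.

```python
# Sketch of a half-life-based persistence ranking.
# The >180-day "high" cutoff is EPA's stated criterion (per the text);
# ASSUMPTION: 60 days as the low/moderate boundary (illustrative only).
def persistence_rank(soil_half_life_days):
    """Return a screening persistence rank for a soil half-life in days."""
    if soil_half_life_days > 180:
        return "high"
    if soil_half_life_days >= 60:
        return "moderate"
    return "low"

# The chemical EPA ranked "moderate" despite a 1,048-day soil half-life:
print(persistence_rank(1048))  # high
```

By EPA's own 180-day criterion, a 1,048-day half-life lands unambiguously in the high-persistence bin, nearly six times over the threshold.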

In comments EPA provided earlier to the sponsor, EPA raised several serious concerns about the extent and nature of data provided for, and the sponsor's claims made about, biodegradation. In the sponsor's response, it refused to do the testing EPA called for or to revise its claims that all category members are biodegradable.

In its current ChAMP assessment, EPA seems once again to have fully capitulated: no data gaps identified, no flagging of the enormous range in biodegradation values that calls into question the viability of the category, and a failure in contradiction of its own criteria to identify any of the category members as having high persistence.

6. EPA claims that none of the category members are expected to bioaccumulate. This is based, however, entirely on estimated data using an EPA model that yields the exact same result – a bioconcentration factor estimate of 71 – for all 13 category members. Doesn't this bear some explanation? Why can't this parameter be measured? EPA's silence is deafening.

7. Last, but not least, this ChAMP assessment amply illustrates the huge shortcomings of the use information EPA has sought to collect under its Inventory Update Rule (IUR). As we've discussed at length before, these data are of questionable value because they are self-reported by manufacturers, often incomplete because of major reporting loopholes EPA has provided, and often kept removed from any public access because EPA provides wide latitude for submitters to claim the information to be confidential. This category is a great example of these problems.

IUR submissions were received for 11 of the 13 category members. Here's a summary of the extent of data EPA did and did not receive, and what it has not made public because of confidential business information (CBI) claims:

Types of responses in IUR submissions (number of chemicals)

| Type of Use Information | NRO* | CBI** | No use reported | Information provided |
| --- | --- | --- | --- | --- |
| Industrial Processing and Use | 5 | 4 | 1 | 1 |
| Commercial and Consumer Use | — | 9 | 2 | 1 ("other") |
| Use in Products Intended for Children | — | 9 | 2 | 1 ("no uses") |

* Submitter did not submit because information is "not readily obtainable"
** Number represents CBI submissions identified as such by EPA. EPA notes the number could be higher.

In this case, EPA has ready access to other public sources of use information for these chemicals – including, ironically, the HPV Challenge submission for this category that was provided by the same companies that, under the IUR, claimed that very information to be CBI! But in other cases, such information may not be available, as reliable and current chemical use information is very limited in general.

Given the poor performance of manufacturers reporting under the IUR, and the often very limited availability of information from other sources, how comfortable are you in EPA using these data to assess chemicals' risks to workers, the public, the environment, consumers and children?

Conclusion

In an earlier post, we characterized EPA as constructing "houses of cards" in its ChAMP assessments. Some of you might have considered that to be awfully harsh.

But here's a classic house of cards: EPA uses very spotty, variable, misclassified or entirely estimated data on degradation or bioconcentration to claim that this questionable category of chemicals has only low or moderate environmental persistence and bioaccumulation potential. It then uses those findings to downgrade use information that clearly indicates these chemicals are used in a manner that culminates in down-the-drain disposal or similar release, and hence significant, ubiquitous releases to the environment – in order to claim that exposures to aquatic organisms and the general population via such releases will only be "medium." Finally, it uses that finding to downgrade its finding of high aquatic hazard to a medium risk ranking, and to further downplay its (flawed) moderate human health hazard by finding only a medium risk to the general population.