Almost half of the world's population still cooks on biomass cookstoves of poor efficiency and primitive design, such as three-stone fires (TSF). Emissions from biomass cookstoves contribute to adverse health effects and climate change. A number of "improved cookstoves" with higher energy efficiency and lower emissions have been designed and promoted across the world. During design development, and for selection of a stove for dissemination, stove performance and emissions are commonly evaluated, communicated, and compared using the arithmetic average of replicate tests made with a standardized laboratory-based test, commonly the water boiling test (WBT). However, the published literature shows different WBT results reported by different laboratories for the same stove technology. Also, there is no agreement in the literature on how many replicate tests should be performed to ensure "significance" in the reported average performance. This matter has not received attention in the rapidly growing literature on stoves, yet it is crucial for estimating and communicating the performance of a stove and for comparing performance between stoves. We present results of statistical analyses using data from a number of replicate tests of performance and emissions of the Berkeley-Darfur Stove (BDS) and the TSF under well-controlled laboratory conditions. We observed moderate variability in the test results for the TSF and BDS when measuring several characteristics. Here we focus on two as illustrative: time-to-boil and PM2.5 (particulate matter less than or equal to 2.5 micrometers in diameter) emissions. We demonstrate that interpretation of the results comparing these stoves could be misleading if only a small number of replicates had been conducted. We then describe a practical approach, useful to both stove testers and designers, for assessing the number of replicates needed to obtain useful data. Caution should be exercised in attaching high credibility to results based on only a few replicates of cookstove performance and emissions. Stove designers, testers, program implementers, and decision makers should all benefit from improved awareness of the number of replicates required to produce practically useful test data.
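As a back-of-the-envelope illustration of the replicate-count question (a sketch, not the paper's method), the Python snippet below estimates how many replicates are needed for the confidence interval on a mean performance metric to shrink to a target relative half-width, given pilot data; the pilot values, 10% target, and 95% confidence level are all invented for illustration.

    import math
    from statistics import mean, stdev
    from scipy import stats

    def replicates_needed(pilot, rel_error=0.10, confidence=0.95):
        """Smallest n such that the t-based confidence-interval half-width
        on the mean is within rel_error of the pilot mean (pilot sd held
        fixed); iterate because the t critical value depends on n."""
        s, m = stdev(pilot), mean(pilot)
        target = rel_error * m
        n = len(pilot)
        while True:
            t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
            if t * s / math.sqrt(n) <= target:
                return n
            n += 1

    # Hypothetical pilot data: time-to-boil (minutes) from five WBT replicates.
    pilot = [32.1, 28.4, 35.0, 30.2, 33.7]
    print(replicates_needed(pilot))

With more variable pilot data the required n grows roughly with the square of the coefficient of variation, which is why a fixed choice of three replicates can be misleading for an untested stove.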

Through mass-balance modeling of various ventilation scenarios that might satisfy the ASHRAE 62.1 Indoor Air Quality (IAQ) Procedure, we estimate indoor concentrations of contaminants of concern (COCs) in California "big box" stores, compare estimates to available thresholds, and for selected scenarios estimate differences in energy consumption. Findings are intended to inform decisions on adding performance-based approaches to ventilation rate (VR) standards for commercial buildings. Using multi-zone mass-balance models and available contaminant source rates, we estimated concentrations of 34 COCs for multiple ventilation scenarios: VRmin (0.04 cfm/ft2), VRmax (0.24 cfm/ft2), and VRmid (0.14 cfm/ft2). We compared COC concentrations with available health, olfactory, and irritant thresholds. We estimated building energy consumption at different VRs using a previously developed EnergyPlus model. VRmax controlled all contaminants adequately, but VRmin did not, and VRmid did so only marginally. Air cleaning and local ventilation near strong sources both showed promise. Higher VRs increased indoor concentrations of outdoor air pollutants. Lowering VRs in big box stores in California from VRmax to VRmid would reduce total energy use by an estimated 6.6% and energy costs by 2.5%. Reducing the required VRs in California's big box stores could thus reduce energy use and costs, but poses challenges for the health and comfort of occupants. Source removal, air cleaning, and local ventilation may be needed at reduced VRs, and even at currently recommended VRs. Also, alternative ventilation strategies that take climate and season into account in ventilation schedules may provide greater energy cost savings than constant ventilation rates, while improving IAQ.
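A minimal sketch of the kind of steady-state, well-mixed mass balance such a study builds on, C_in = C_out + E/Q; the floor area, emission rate, and outdoor concentration below are hypothetical placeholders, not values from the study, and the real analysis is multi-zone.

    # Steady-state, single-zone mass balance: C_in = C_out + E / Q,
    # where E is the indoor emission rate and Q the outdoor-air flow.
    FLOOR_AREA_FT2 = 100_000          # hypothetical big-box floor area
    CFM_PER_FT2 = {"VRmin": 0.04, "VRmid": 0.14, "VRmax": 0.24}

    def indoor_conc(emission_ug_per_min, vr_cfm_per_ft2, c_out=0.0,
                    floor_area_ft2=FLOOR_AREA_FT2):
        q_m3_per_min = vr_cfm_per_ft2 * floor_area_ft2 * 0.0283168  # cfm -> m3/min
        return c_out + emission_ug_per_min / q_m3_per_min           # ug/m3

    # Hypothetical whole-building source of 2e4 ug/min of some COC:
    for name, vr in CFM_PER_FT2.items():
        print(name, round(indoor_conc(2e4, vr), 1), "ug/m3")

Because indoor-generated COC concentrations scale as 1/Q while outdoor-origin pollutants scale with Q, this simple balance already shows the tension between lowering VRs for energy and keeping COCs below thresholds.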

This document reports on work that Lawrence Berkeley National Laboratory performed to support the Department of Homeland Security's testing of ARFCAM and LACIS systems. In the sections that follow, LBNL lists the scope of work, the field analyses conducted, and preliminary results.

LBNL developed a model of the Port Gaston building at the Nevada Test Site and calibrated it using data from field experiments, both blower door and tracer gas tests. Model predictions and the field data show very good agreement. The model was developed to (1) support the interpretation of data from field trials performed by Signature Science LLC, (2) support the placement of sampler equipment, and (3) predict whether meteorological differences between the Wet-Run/Dry-Run and the Hot-Run might adversely affect the development of the Hot-Run Test Plan. LBNL reported its findings on each task to the experiment team at scheduled planning meetings. In the end, we note that the model saw only limited use because the data from the Wet-Run/Dry-Run were of such high quality.

Lastly, LBNL conducted a research experiment at the end of the Wet-Run/Dry-Run to study whether, and to what degree, specific TICs sorb and desorb on indoor surfaces. We found that several of the TICs either sorb onto surfaces or are lost through chemical reactions. These findings may have important implications for sheltering-in-place concepts of operation.

The sudden release of toxic contaminants that reach indoor spaces can be hazardous to building occupants. For an acutely toxic contaminant, the speed of the emergency response strongly influences the consequences for occupants. The design of a real-time sensor system is made challenging both by the urgency and complex nature of the event and by the imperfect sensors and models available to describe it. In this research, we use Bayesian modeling to combine information from multiple types of sensors to improve the characterization of a release. We discuss conceptual and algorithmic considerations for selecting and fusing information from disparate sensors. To explore system performance, we use real tracer gas data from experiments in a three-story building, along with synthetic data, including information from door-position sensors. The added information from door-position sensors is found to be useful for many scenarios, but not always. We discuss the physical conditions and design factors that affect these results, such as the influence of the door positions on contaminant transport. We highlight potential benefits of multisensor data fusion, challenges in realizing those benefits, and opportunities for further improvement.

The Lawrence Berkeley National Laboratory (LBNL), the University of California, Merced (UCM), and the United Technologies Research Center (UTRC) conducted field studies and modeling analyses in the Classroom and Office Building (COB) and the Science and Engineering Building (S&E) at the University of California, Merced. In the first year of a planned multiyear project, our goal was to study the feasibility and efficacy of occupancy-based energy management. The first-year research goals were twofold. The first was to explore the likely energy savings if we know the number and location of building occupants in a typical commercial building. The second was to model and estimate people movement in a building. Our findings suggest that a 10-14% reduction in HVAC energy consumption is possible over typical HVAC operating conditions when we know occupancy throughout the building. With the conclusion of the first-year tasks, we plan to review these results further before this group pursues follow-on funding.

This study investigated the hypothesis that increased exposure to polycyclic aromatic hydrocarbons (PAHs) increases breast cancer risk. PAHs are products of incomplete burning of organic matter and are present in cigarette smoke, ambient air, drinking water, and diet. PAHs require metabolic transformation to bind to DNA, forming DNA adducts, which can lead to mutations and are thought to be an important pre-cancer marker. In breast tissue, PAHs appear to be metabolized to their cancer-causing form primarily by the cytochrome P450 enzyme CYP1B1. Because the genotoxic impact of PAHs depends on their metabolism, we hypothesized that high CYP1B1 enzyme levels result in increased formation of PAH-DNA adducts in breast tissue, leading to increased development of breast cancer. We investigated molecular mechanisms of the relationship between PAH exposure, CYP1B1 expression, and breast cancer risk in a clinic-based case-control study. We collected histologically normal breast tissue from 56 women (43 cases and 13 controls) undergoing breast surgery and analyzed these specimens for CYP1B1 genotype, PAH-DNA adducts, and CYP1B1 gene expression. We did not detect any difference in aromatic DNA adduct levels between cases and controls, only between smokers and non-smokers. CYP1B1 transcript levels were slightly lower in controls than in cases, but the difference was not statistically significant. We found no correlation between the levels of CYP1B1 expression and DNA adducts. If CYP1B1 has any role in breast cancer etiology, it might be through its metabolism of estrogen rather than its metabolism of PAHs. However, due to the lack of statistical power, these results should be interpreted with caution.

The goal of this project was to answer the following questions concerning response to a future anthrax release (or suspected release) in a building:

Based on past experience, what rules of thumb can be determined concerning: (a) the amount of sampling that may be needed to determine the extent of contamination within a given building; (b) what portions of a building should be sampled; (c) the cost per square foot to decontaminate a given type of building using a given method; (d) the time required to prepare for, and perform, decontamination; (e) the effectiveness of a given decontamination method in a given type of building?

Based on past experience, what resources will be spent on evaluating the extent of contamination, performing decontamination, and assessing the effectiveness of the decontamination in a building of a given type and size?

What are the trade-offs between cost, time, and effectiveness for the various sampling plans, sampling methods, and decontamination methods that have been used in the past?

Contaminant releases in or near a building can lead to significant human exposures unless prompt response is taken. U.S. Federal and local agencies are implementing programs to place air-monitoring samplers in buildings to quickly detect biological agents. We describe a probabilistic algorithm for siting samplers in order to detect accidental or intentional releases of biological material. The algorithm maximizes the probability of detecting a release from among a suite of realistic scenarios. The scenarios may differ in any unknown, for example the release size or location, weather, mode of building operation, etc. The algorithm also can optimize sampler placement in the face of modeling uncertainties, for example the airflow leakage characteristics of the building, and the detection capabilities of the samplers. In an illustrative example, we apply the algorithm to a hypothetical 24-room commercial building, finding optimal networks for a variety of assumed sampler types and performance characteristics. We also discuss extensions of this work for detecting ambient pollutants in buildings, and for understanding building-wide airflow, pollutant dispersion, and exposures.
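To illustrate the flavor of such an optimization (a greedy stand-in, not the paper's algorithm), the sketch below places samplers to maximize the fraction of scenarios detected; a random detection matrix substitutes for real transport-model output, and the room, scenario, and sampler counts are invented.

    import random

    random.seed(1)
    N_ROOMS, N_SCENARIOS, N_SAMPLERS = 24, 200, 4

    # detects[r][s] = True if a sampler in room r would detect scenario s
    # (hypothetical stand-in for transport-model predictions).
    detects = [[random.random() < 0.15 for _ in range(N_SCENARIOS)]
               for _ in range(N_ROOMS)]

    def detection_prob(rooms):
        hits = sum(any(detects[r][s] for r in rooms) for s in range(N_SCENARIOS))
        return hits / N_SCENARIOS

    # Greedy placement: repeatedly add the room that most increases coverage.
    chosen = []
    for _ in range(N_SAMPLERS):
        best = max(set(range(N_ROOMS)) - set(chosen),
                   key=lambda r: detection_prob(chosen + [r]))
        chosen.append(best)
    print(chosen, detection_prob(chosen))

Scenario weights, sampler false-negative rates, and modeling uncertainty can all be folded into the same objective by averaging detection probability over a weighted scenario ensemble.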

Sudden releases of a toxic agent indoors can cause immediate and long-term harm to occupants. In order to protect building occupants from such threats, it is necessary to have a robust air monitoring system that can detect, locate, and characterize accidental or deliberate toxic gas releases. However, developing such a system is complicated by several requirements, in particular the need to operate in real-time. This task is further complicated when monitoring sensors are prone to false positive and false negative readings. We report on work towards developing an indoor monitoring system that is robust even in the presence of poor quality sensor data. The algorithm, named BASSET, combines deterministic modeling and Bayesian statistics to join prior knowledge of the contaminant transport in the building with real-time sensor information. We evaluate BASSET across several data sets, which vary in sensor characteristics such as accuracy, response time, and trigger level. Our results suggest that optimal designs are not always intuitive. For example, a network comprised of slower but more accurate sensors may locate the contaminant source more quickly than a network with faster but less accurate sensors.
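A toy version of the core Bayesian step, assuming binary sensors with known false-positive and false-negative rates; the hypothesis set, trigger matrix, and error rates are invented for illustration, and BASSET itself is far richer than this.

    # Posterior over candidate release hypotheses from binary sensor readings,
    # with false-positive/false-negative rates folded into the likelihood.
    import numpy as np

    P_FP, P_FN = 0.05, 0.20                  # hypothetical sensor error rates
    # should_trigger[h][s]: does hypothesis h predict sensor s above threshold?
    should_trigger = np.array([[1, 0, 0],
                               [1, 1, 0],
                               [0, 1, 1]])   # 3 hypotheses x 3 sensors (toy)
    readings = np.array([1, 1, 0])           # observed triggers

    prior = np.full(3, 1 / 3)
    p_trigger = np.where(should_trigger == 1, 1 - P_FN, P_FP)
    like = np.prod(np.where(readings == 1, p_trigger, 1 - p_trigger), axis=1)
    posterior = prior * like / np.sum(prior * like)
    print(posterior)

Because the error rates enter the likelihood directly, a slow-but-accurate sensor can sharpen the posterior faster per reading than a fast-but-noisy one, which is the tradeoff the abstract highlights.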

This section summarizes the risk assessment approaches and risk management methods that can be applied to study geologic carbon sequestration in California, including CO2 capture, transportation and storage operations. Risk assessment and risk management concepts are introduced and applied. Known hazards from carbon sequestration are discussed individually, and risk reduction options are presented. Human exposure limits, pathways for human and environmental exposure during operations and post-operations, and how risks change over the time scales associated with storage as the CO2 migrates and reacts in the subsurface environment, are also discussed.

We developed a physiologically based pharmacokinetic model of PCB 153 in women and used it to predict transfer via lactation to infants. The model is the first human, population-scale lactational model for PCB 153. Data in the literature provided estimates for model development and for performance assessment. Physiological parameters were taken from a cohort in Taiwan and from reference values in the literature. We estimated partition coefficients based on chemical structure and the lipid content in various body tissues. Using exposure data from Japan, we predicted the acquired body burden of PCB 153 at an average childbearing age of 25 years and compared predictions to measurements from studies in multiple countries. Forward-model predictions agree well with human biomonitoring measurements, as represented by summary statistics and uncertainty estimates. The model successfully describes the range of possible PCB 153 dispositions in maternal milk, suggesting a promising option for back-estimating doses for various populations. One example of reverse dosimetry modeling was attempted using our PBPK model for possible exposure scenarios in Canadian Inuit, who had the highest level of PCB 153 in their milk in the world.

This book chapter was presented by Ashok J. Gadgil, Michael D. Sohn and Priya Sreedharan at the 2008 29th NATO/SPS International Technical Meeting on Air Pollution and its Application in Aveiro, Portugal.

Releases of acutely toxic airborne contaminants in or near a building can lead to significant human exposures unless prompt response measures are identified and implemented. Commonly, possible responses include conflicting strategies, such as shutting the ventilation system off versus running it in a purge (100% outside air) mode, or having occupants evacuate versus sheltering in place. The right choice depends in part on quickly identifying the source locations, the amounts released, and the likely future dispersion routes of the pollutants. This paper summarizes recent developments to provide such estimates in real time using an approach called Bayesian Monte Carlo updating. This approach rapidly interprets measurements of airborne pollutant concentrations from multiple sensors placed in the building and computes best estimates and uncertainties of the release conditions. The algorithm is fast, capable of continuously updating the estimates as measurements stream in from sensors. As an illustration, the approach is applied to two specific investigations under different situations.

Present and future concentrations of DDT in the environment are calculated with the global multimedia model CliMoChem. Monte Carlo simulations are used to assess the importance of uncertainties in substance property data, emission rates, and environmental parameters for model results. Uncertainties in the model results, expressed as 95% confidence intervals of DDT concentrations in various environmental media, in different geographical locations, and at different points in time, are typically between one and two orders of magnitude. An analysis of rank correlations between model inputs and predicted DDT concentrations indicates that emission estimates and degradation rate constants, in particular in the atmosphere, are the most influential model inputs. For DDT levels in the Arctic, temperature dependencies of substance properties are also influential parameters. A Bayesian Monte Carlo approach is used to update uncertain model inputs based on measurements of DDT in the field. The updating procedure suggests a lower value for the half-life in air and a reduced range of uncertainty for the K_OW of DDT. As expected, the Bayesian updating yields model results that are closer to observations and reduces model uncertainties. The combined sensitivity analysis and Bayesian Monte Carlo approach provide new insight into the important processes that govern the global fate and persistence of DDT in the environment.
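The Bayesian Monte Carlo updating step can be sketched as likelihood weighting of Monte Carlo runs; everything below (the prior on half-life in air, the toy stand-in for CliMoChem, the observation and its uncertainty) is invented for illustration only.

    # Bayesian Monte Carlo in outline: run the model with sampled inputs,
    # weight each run by the likelihood of the field observations, and use
    # the weights to form posterior estimates of the inputs.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000
    half_life_air = rng.lognormal(np.log(100), 0.5, N)   # hours (assumed prior)

    def model(half_life):
        # Stand-in for CliMoChem: toy mapping from half-life in air to an
        # Arctic air concentration (pg/m3). The real model is far richer.
        return 0.02 * half_life

    observed, obs_sd = 1.5, 0.5
    pred = model(half_life_air)
    w = np.exp(-0.5 * ((pred - observed) / obs_sd) ** 2)
    w /= w.sum()
    post_mean = np.sum(w * half_life_air)
    print(f"prior mean {half_life_air.mean():.0f} h -> posterior mean {post_mean:.0f} h")

Here the posterior mean shifts below the prior mean because the observation favors shorter half-lives, mirroring the direction of the update reported in the abstract.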

We compare model predictions to measurements of SF6 and environmental tobacco smoke (ETS) particle concentrations in a three-room chamber experiment. To make predictions of multi-room aerosol transport and fate, we linked a multizone airflow model (COMIS) with an indoor aerosol dynamics model (MIAQ4). The linked models provide improved simulation capabilities for predicting aerosol concentrations and exposures in buildings. In this application, we found that the multizone airflow model was vital for predicting the inter-room airflows caused by temperature differences between the rooms and by the operation of air-sampling pumps during the experiment. Model predictions agree well with measurements, as shown by several comparison metrics. However, predictions of airborne ETS concentrations are slightly lower than measurements. This is mostly attributable to understating the source release amount, which we specified independently from literature estimates. Model predictions of ETS particle-size distributions agree with measurements; size bins with the peak concentrations are slightly over-predicted initially, but agree thereafter.

Rapid detection of toxic agents in the indoor environment is essential to protecting building occupants from accidental or intentional releases. While there is much research dedicated to designing sensors to detect airborne toxic contaminants, little research has addressed how to incorporate such sensors into a monitoring system designed to protect building occupants. To design sensor systems, sensor designers must quantify design tradeoffs, such as response time and accuracy, to optimize the performance of the overall system. We illustrate the importance of a systems approach for properly evaluating such tradeoffs, using data from tracer gas experiments conducted in a three-floor unit at the Dugway Proving Grounds, Utah. We apply Bayesian statistics to assess the effects of various sensor characteristics, such as response time, threshold level, and accuracy, on overall system performance. We evaluate system performance by the time (and thus amount of data) needed to characterize the release (location, amount released, and duration). We demonstrate that a systems perspective is necessary to understand the potential benefits of selecting values of specific sensor characteristics to optimize sensor system performance.

We compare computational fluid dynamics (CFD) predictions using a steady-state Reynolds-Averaged Navier-Stokes (RANS) model with experimental data on airflow and pollutant dispersion under mixed-convection conditions in a 7 × 9 × 11 m high experimental facility. The Rayleigh number, based on height, was O(10^11), and the atrium was mechanically ventilated. We released tracer gas in the atrium and measured the spatial distribution of concentrations; we then modeled the experiment using four different levels of modeling detail. The four computational models differ in the choice of temperature boundary conditions and the choice of turbulence model. Predictions from a low-Reynolds-number k-ε model with detailed boundary conditions agreed well with the data using three different model-measurement comparison metrics. Results from the same model with a single temperature prescribed for each wall also agreed well with the data. Predictions of a standard k-ε model were about the same as those of an isothermal model; neither performed well. Implications of the results for practical applications are discussed.

We analyzed more than 70,000 air leakage measurements in houses across the United States to relate leakage area (the effective size of all penetrations of the building shell) to readily available building characteristics such as building size, year built, geographic region, and various construction characteristics. After adjusting for the lack of statistical representativeness of the data, we found that the distribution of leakage area normalized by floor area is approximately lognormal. Based on a classification tree analysis, year built and floor area are the two most significant predictors of leakage area: older and smaller houses tend to have higher normalized leakage areas than newer and larger ones. Multivariate regressions of normalized leakage are presented with respect to these two factors for three house classifications: low-income households, energy program houses, and conventional houses. We demonstrate a method of applying the regression model to housing characteristics from the American Housing Survey to derive a leakage-area distribution for all single-family houses in the US. The air exchange rates implied by these estimates agree reasonably well with published measurements.
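A small sketch of the regression idea on synthetic data, assuming (as the abstract states) that log-transformed normalized leakage is roughly normal and that older, smaller houses are leakier; the coefficients, units, and noise level below are invented, not the fitted values.

    # Regress log(normalized leakage) on year built and floor area.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000
    year = rng.integers(1900, 2000, n)
    area = rng.uniform(80, 300, n)                      # m2
    # Synthetic "truth": older and smaller -> leakier (signs per the abstract).
    log_nl = 3.0 - 0.01 * (year - 1900) - 0.002 * area + rng.normal(0, 0.4, n)

    X = np.column_stack([np.ones(n), year, area])
    beta, *_ = np.linalg.lstsq(X, log_nl, rcond=None)
    print("intercept, year, area coefficients:", np.round(beta, 4))

Fitting in log space keeps the predicted leakage positive and matches the approximately lognormal distribution found in the data.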

An accidental or intentional outdoor release of pollutants can produce a hazardous plume, potentially contaminating large portions of a metropolitan area as it disperses downwind. To minimize health consequences on the populace, government and research organizations often recommend sheltering in place when evacuation is impractical. Some reports also recommend "hardening" an indoor shelter, for example by applying duct tape to prevent leakage into a bathroom. However, few studies have quantified the perceived beneficial effects of sheltering and hardening, or examined the limits of their applicability. In this paper, we examine how sheltering and hardening might reduce exposure levels under different building and meteorological conditions (e.g., wind direction). We predict concentrations and exposure levels for several conditions, and discuss the net benefits from several sheltering and hardening options.
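A minimal single-zone sketch of the sheltering/hardening comparison: indoor concentration follows dC_in/dt = a(C_out - C_in), where a is the air-exchange rate (1/h) and hardening lowers a. The plume shape, the rate constants, and the assumption that occupants exit (or the building is flushed) 2 h after the release are all illustrative.

    # Integrate indoor concentration through a plume passage and compare
    # time-integrated exposure (dose) for a typical vs. a hardened shelter.
    def indoor_dose(a, plume_hours=1.0, c_out_peak=100.0,
                    stay_hours=2.0, dt=0.001):
        c_in, dose, t = 0.0, 0.0, 0.0
        while t < stay_hours:
            c_out = c_out_peak if t < plume_hours else 0.0
            dose += c_in * dt                    # time-integrated exposure
            c_in += a * (c_out - c_in) * dt      # explicit Euler step
            t += dt
        return dose

    outdoor_dose = 100.0 * 1.0                   # staying outside in the plume
    for label, a in [("typical building, a=1.0/h", 1.0),
                     ("hardened room, a=0.2/h", 0.2)]:
        print(f"{label}: indoor dose {indoor_dose(a):.0f} "
              f"vs outdoor {outdoor_dose:.0f}")

The sketch also shows the well-known caveat: if occupants remain indoors long after the plume passes, the indoor dose creeps toward the outdoor dose, so the benefit of sheltering depends on timely egress or ventilation.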

Airflow models of buildings require dozens to hundreds of parameter values, depending on the complexity of the building and the level of fidelity desired for the model. Values for many of the parameters are usually subject to very large uncertainties (possibly an order of magnitude). Experiments can be used to calibrate or "tune" the model: input parameters can be adjusted until predicted quantities match observations. However, experimental time and equipment are always limited and some parameters are hard to measure, so it is generally impractical to perform an exhaustive set of measurements. Consequently, large uncertainties in some parameters typically remain even after tuning the model. We propose a method to help determine which measurements will maximally reduce the uncertainties in those input parameters that have the greatest influence on behavior of interest to researchers. Implications for experimental design are discussed.

The objective of this research project was to improve the basis for estimating environmental tobacco smoke (ETS) exposures in a variety of indoor environments. The research utilized experiments conducted in both laboratory and 'real-world' buildings to 1) study the transport of ETS species from room to room, 2) examine the viability of using various chemical markers as tracers for ETS, and 3) evaluate to what extent re-emission of ETS components from indoor surfaces might add to ETS exposure estimates. A three-room environmental chamber was used to examine multi-zone transport and behavior of ETS and its tracers. One room (simulating a smoker's living room) was extensively conditioned with ETS, while a corridor and a second room (simulating a child's bedroom) remained smoking-free. Five sets of replicate experiments were conducted under different door-opening and flow configurations: sealed, leaky, slightly ajar, wide open, and under forced-airflow conditions. When the doors between the rooms were slightly ajar, the particles dispersed into the other rooms, eventually reaching the same concentration. The particle size distribution took the same form in each room, although the total number of particles in each room depended on the door configuration. The particle number size distribution moved towards somewhat larger particles as the ETS aged. We also successfully modeled the inter-room transport of ETS particles from first principles, using size-fractionated particle emission factors, predicted deposition rates, and temperature-gradient-driven inter-room flows. This validation improved our understanding of bulk inter-room ETS particle transport. Four chemical tracers were examined: ultraviolet-absorbing particulate matter (UVPM), fluorescent particulate matter (FPM), nicotine, and solanesol. Both UVPM and FPM traced the transport of ETS particles into the non-smoking areas. Nicotine, on the other hand, quickly adsorbed on unconditioned surfaces, so that nicotine concentrations in these rooms remained very low, even during smoking episodes. These findings suggest that using nicotine as a tracer of ETS particle concentrations may yield misleading concentration and/or exposure estimates. The results of the solanesol analyses were compromised, apparently by exposure to light during collection (lights in the chambers were always on during the experiments). This may mean that the use of solanesol as a tracer is impractical in 'real-world' conditions. In the final phase of the project we conducted measurements of ETS particles and tracers in three residences occupied by smokers who had joined a smoking cessation program. As a pilot study, its objective was to improve our understanding of how ETS aerosols are transported in a small number of homes (and thus, whether limiting smoking to certain areas has an effect on ETS exposures in other parts of the building). As with the chamber studies, we examined whether measurements of various chemical tracers, such as nicotine, solanesol, FPM and UVPM, could be used to accurately predict ETS concentrations and potential exposures in 'real-world' settings, as has been suggested by several authors. The ultimate goal of these efforts, and of a future larger multiple-house study, is to improve the basis for estimating ETS exposures to the general public.
Because we studied only three houses, no firm conclusions can be drawn from our data. However, the results for the ETS tracers are essentially the same as those from the chamber experiments. Nicotine proved problematic as a marker for ETS exposure: in the smoking areas of the homes it appeared to be a suitable indicator, but in the non-smoking areas its behavior was very inconsistent. The other tracers, UVPM and FPM, provided a better basis for estimating ETS exposures in the 'real world'. The use of solanesol was compromised, as it had been in the chamber experiments.

Physiologically based pharmacokinetic (PBPK) modeling is a well-established toxicological tool designed to relate exposure to a target-tissue dose. The emergence of federal and state programs for environmental health tracking and the availability of exposure monitoring through biomarkers create the opportunity to apply PBPK models to estimate exposures to environmental contaminants from urine, blood, and tissue samples. However, reconstructing exposures for large populations is complicated by often having too few biomarker samples, large uncertainties about exposures, and large inter-individual variability. In this paper we use an illustrative case study to identify some of these difficulties and to propose a process for confronting them by reconstructing population-scale exposures using Bayesian inference. The application consists of interpreting biomarker data from eight adult males with controlled exposures to trichloroethylene (TCE) as if the biomarkers were random samples from a large population with unknown exposure conditions. The TCE concentrations in blood from the individuals fell into two distinctly different groups even though the individuals were simultaneously in a single exposure chamber. We successfully reconstructed the exposure scenarios for both subgroups -- although the reconstruction for one subgroup differs from what is believed to be the true experimental conditions. We were, however, unable to predict with high certainty the concentration of TCE in air.

The effectiveness of a probabilistic risk assessment (PRA) depends critically on the quality of input information that is available to the risk assessor and specifically on the probabilistic exposure factor distributions that are developed and used in the exposure and risk models. Deriving probabilistic distributions for model inputs can be time consuming and subjective. The absence of a standard approach for developing these distributions can result in PRAs that are inconsistent and difficult to review by regulatory agencies. We present an approach that reduces subjectivity in the distribution development process without limiting the flexibility needed to prepare relevant PRAs. The approach requires two steps. First, we analyze data pooled at a population scale to (i) identify the most robust demographic variables within the population for a given exposure factor, (ii) partition the population data into subsets based on these variables, and (iii) construct archetypal distributions for each subpopulation. Second, we sample from these archetypal distributions according to site- or scenario-specific conditions to simulate exposure factor values and use these values to construct the scenario-specific input distribution. It is envisaged that the archetypal distributions from step 1 will be generally applicable so risk assessors will not have to repeatedly collect and analyze raw data for each new assessment. We demonstrate the approach for two commonly used exposure factors -- body weight (BW) and exposure duration (ED) -- using data for the U.S. population. For these factors we provide a first set of subpopulation based archetypal distributions along with methodology for using these distributions to construct relevant scenario-specific probabilistic exposure factor distributions.
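To make the second step concrete, here is a sketch of composing a scenario-specific body-weight distribution by sampling subpopulation archetypes in scenario proportions; the lognormal parameters and the scenario mix are placeholders, not the paper's fitted archetypal distributions.

    # Sample archetypal subpopulation distributions according to a
    # scenario-specific mix, then summarize the composite distribution.
    import numpy as np

    rng = np.random.default_rng(7)
    archetypes = {                 # (median kg, geometric std dev) -- assumed
        "adult_male": (78.0, 1.20),
        "adult_female": (65.0, 1.25),
    }
    scenario_mix = {"adult_male": 0.4, "adult_female": 0.6}

    def sample_bw(n):
        parts = []
        for group, frac in scenario_mix.items():
            med, gsd = archetypes[group]
            k = int(round(frac * n))
            parts.append(rng.lognormal(np.log(med), np.log(gsd), k))
        return np.concatenate(parts)

    bw = sample_bw(10_000)
    print(f"scenario BW: median {np.median(bw):.1f} kg, "
          f"95th pct {np.percentile(bw, 95):.1f} kg")

The same pattern extends to other exposure factors such as exposure duration: the archetypes are fit once at the population scale, and only the mix changes per assessment.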

This report presents advice on how to operate a building to reduce casualties from a biological or chemical attack, as well as potential changes to the building (e.g. the design of the ventilation system) that could make it more secure. It also documents the assumptions and reasoning behind the advice. The particular circumstances of any attack, such as the ventilation system design, building occupancy, agent type, source strength and location, and so on, may differ from the assumptions made here, in which case actions other than our recommendations may be required; we hope that by understanding the rationale behind the advice, building operators can modify it as required for their circumstances. The advice was prepared by members of the Airflow and Pollutant Transport Group, which is part of the Indoor Environment Department at the Lawrence Berkeley National Laboratory. The group's expertise in this area includes: tracer-gas measurements of airflows in buildings (Sextro, Thatcher); design and operation of commercial building ventilation systems (Delp); modeling and analysis of airflow and tracer gas transport in large indoor spaces (Finlayson, Gadgil, Price); modeling of gas releases in multi-zone buildings (Sohn, Lorenzetti, Finlayson, Sextro); and occupational health and safety experience related to building design and operation (Sextro, Delp). This report is concerned only with building design and operation; it is not a how-to manual for emergency response. Many important emergency response topics are not covered here, including crowd control, medical treatment, evidence gathering, decontamination methods, and rescue gear.

We describe a framework for developing response recommendations for unexpected toxic pollutant releases in commercial buildings. It may be applied in conditions where limited building- and event-specific information is available. The framework is based on a screening-level methodology to develop insights, or rules of thumb, into the behavior of airflow and pollutant transport. A three-stage framework is presented: (1) develop a building taxonomy to identify generic, or prototypical, building configurations; (2) characterize uncertainty and conduct simulation modeling to predict typical airflow and pollutant transport behavior; and (3) rank uncertainty contributions to determine how information obtained at a site might reduce uncertainties in the model predictions. The approach is applied to study a hypothetical pollutant release on the first floor of a five-story office building. Key features that affect pollutant transport are identified and described by value ranges in the building stock. Simulation modeling provides predictions and uncertainty estimates of time-dependent pollutant concentrations, following a release, for a range of indoor and outdoor conditions. In this exercise, we predict fifth-floor concentrations an order of magnitude lower than first-floor concentrations, with coefficients of variation greater than 2, and find that information about HVAC operation and window position would most reduce uncertainty in predicted peak concentrations.

Accurate characterization of particle concentrations indoors is critical to exposure assessments. Indoor particle concentrations are estimated to depend strongly on outdoor concentrations. For health scientists, knowledge of the factors that control the relationship of indoor particle concentrations to outdoor levels is particularly important. In this paper, we identify and evaluate sources of data for those factors that affect the transport to, and concentration of, outdoor particles indoors. To achieve this goal, we (i) identify and assemble relevant information on how particle behavior during air leakage, HVAC operation, and particle filtration affects indoor particle concentrations; (ii) review and evaluate the assembled information to distinguish data that are directly relevant to specific estimates of particle transport from those that are only indirectly useful; and (iii) provide a synthesis of the currently available information on building air-leakage parameters and their effect on indoor particulate matter concentrations.

The recent contamination of several U.S. buildings by letters containing anthrax demonstrates the need to better understand the transport and fate of anthrax spores within buildings. We modeled the spread of anthrax for a hypothetical office suite and estimated the distribution of mass and the resulting occupant exposures. Based on our modeling assumptions, more than 90% of the anthrax released remains in the building during the first 48 hours, with the largest fraction of the mass accumulating on floor surfaces, where it is subject to tracking and resuspension. Although tracking and resuspension account for only a small amount of mass transfer, the model results suggest they can have an important effect on subsequent exposures. Additional research is necessary to understand and quantify these processes.
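A toy mass-budget model in the spirit of this analysis, tracking released mass among air, floor, other surfaces, and exhausted air with first-order rates; all rate constants are invented, chosen only so the qualitative picture (most mass retained indoors, floors dominant) resembles the abstract's result.

    # Explicit-Euler mass budget over 48 h for fractions of released mass.
    dt, hours = 0.01, 48.0
    k_dep_floor, k_dep_surf = 0.8, 0.2   # deposition from air (1/h), assumed
    k_resusp = 0.001                     # floor -> air resuspension (1/h)
    k_exhaust = 0.1                      # air -> exhausted outdoors (1/h)

    air, floor, surf, out = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(hours / dt)):
        d_air = (-(k_dep_floor + k_dep_surf + k_exhaust) * air
                 + k_resusp * floor) * dt
        floor += (k_dep_floor * air - k_resusp * floor) * dt
        surf += k_dep_surf * air * dt
        out += k_exhaust * air * dt
        air += d_air
    print(f"after 48 h: floor {floor:.2f}, surfaces {surf:.2f}, "
          f"exhausted {out:.2f}, airborne {air:.3f}")

Even with a tiny resuspension rate, the large floor reservoir means resuspension and tracking can dominate later airborne exposures, which is the point the abstract emphasizes.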

Releases of airborne contaminants in or near a building can lead to significant human exposures unless prompt response measures are taken. However, possible responses can include conflicting strategies, such as shutting the ventilation system off versus running it in a purge mode, or having occupants evacuate versus sheltering in place. The proper choice depends in part on knowing the source locations, the amounts released, and the likely future dispersion routes of the pollutants. We present an approach that estimates this information in real time. It applies Bayesian statistics to interpret measurements of airborne pollutant concentrations from multiple sensors placed in the building and computes best estimates and uncertainties of the release conditions. The algorithm is fast, capable of continuously updating the estimates as measurements stream in from sensors. We demonstrate the approach using a hypothetical pollutant release in a five-room building. Unknowns to the interpretation algorithm include location, duration, and strength of the source, and some building and weather conditions. We examine two sensor sampling plans and three levels of data quality. Data interpretation in all examples is rapid; however, locating and characterizing the source with high probability depends on the amount and quality of data, and the sampling plan.

Contaminant releases in or near a building can lead to significant human exposures unless prompt response measures are taken. However, selecting the proper response depends in part on knowing the source locations, the amounts released, and the dispersion characteristics of the pollutants. We present an approach that estimates this information in real time. It uses Bayesian statistics to interpret measurements from sensors placed in the building yielding best estimates and uncertainties for the release conditions, including the operating state of the building. Because the method is fast, it continuously updates the estimates as measurements stream in from the sensors. We show preliminary results for characterizing a gas release in a three-floor, multi-room building at the Dugway Proving Grounds, Utah, USA.

The process of characterizing human exposure to particulate matter requires information on both particle concentrations in microenvironments and the time-specific activity budgets of individuals among these microenvironments. Because the average amount of time spent indoors by individuals in the US is estimated to be greater than 75%, accurate characterization of particle concentrations indoors is critical to exposure assessments for the US population. In addition, it is estimated that indoor particle concentrations depend strongly on outdoor concentrations. The spatial and temporal variations of indoor particle concentrations as well as the factors that affect these variations are important to health scientists. For them, knowledge of the factors that control the relationship of indoor particle concentrations to outdoor levels is particularly important. In this report, we identify and evaluate sources of data for those factors that affect the transport to and concentration of outdoor particles in the indoor environment. Concentrations of particles indoors depend upon the fraction of outdoor particles that penetrate through the building shell or are transported via the air handling (HVAC) system, the generation of particles by indoor sources, and the loss mechanisms that occur indoors, such as deposition. To address these issues, we (i) identify and assemble relevant information including the behavior of particles during air leakage, HVAC operations, and particle filtration; (ii) review and evaluate the assembled information to distinguish data that are directly relevant to specific estimates of particle transport from those that are only indirectly useful; and (iii) provide a synthesis of the currently available information on building air-leakage parameters and their effect on indoor particulate matter concentrations.

The numerical investigation of airflow and chemical transport characteristics for a general class of buildings involves identifying values for model parameters, such as effective leakage areas and temperatures, for which a fair amount of uncertainty exists. A Monte Carlo simulation, with parameter values drawn from likely distributions using Latin Hypercube sampling, helps to account for these uncertainties by generating a corresponding distribution of simulated results. However, conducting large numbers of model runs can challenge a simulation program, not only by increasing the need for fast algorithms, but also by proposing specific combinations of parameter values that may define difficult numerical problems. The paper describes several numerical approaches to improving the speed and reliability of the COMIS multizone airflow simulation program. Selecting a broad class of algorithms based on the mathematical properties of the airflow systems (symmetry and positive-definiteness), it evaluates new solution methods for possible inclusion in the COMIS code. In addition, it discusses further changes that will likely appear in future releases of the program.
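The airflow Jacobians described here are symmetric and positive definite, which admits solvers such as Cholesky factorization or conjugate gradients; below is a bare-bones conjugate gradient in Python, an illustration of the solver class under discussion rather than COMIS code.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive-definite A."""
        x = np.zeros_like(b)
        r = b - A @ x          # residual
        p = r.copy()           # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))          # matches np.linalg.solve(A, b)

Exploiting symmetry and positive-definiteness roughly halves storage and work relative to general LU-based solvers, which matters when Monte Carlo studies require thousands of repeated airflow solutions.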

A Bayesian uncertainty analysis approach is developed as a tool for assessing and reducing uncertainty in ground-water flow and chemical transport predictions. The method is illustrated for a site contaminated with chlorinated hydrocarbons. Uncertainty in source characterization, in chemical transport parameters, and in the assumed hydrogeologic structure was evaluated using engineering judgment and updated using observed field data. The updating approach using observed hydraulic head data was able to differentiate between reasonable and unreasonable hydraulic conductivity fields but could not differentiate between alternative conceptual models for the geological structure of the subsurface at the site. Updating using observed chemical concentration data reduced the uncertainty in most parameters and reduced uncertainty in alternative conceptual models describing the geological structure at the site, source locations, and the chemicals released at these sources. Thirty-year transport projections for no-action and source containment scenarios demonstrate a typical application of the methods.

This paper describes the first efforts at developing a set of prototypical buildings defined to capture the key features affecting airflow and pollutant transport in buildings. These buildings will be used to model airflow and pollutant transport for emergency response scenarios when limited site-specific information is available and immediate decisions must be made, and to better understand key features of buildings controlling occupant exposures to indoor pollutant sources. This paper presents an example of this approach for a prototypical intermediate-sized, open style, commercial building. Interzonal transport due to a short-term source release, e.g., accidental chemical spill, in the bottom and the upper floors is predicted and corresponding HVAC system operation effects and potential responses are considered. Three-hour average exposure estimates are used to compare effects of source location and HVAC operation.

A publicly available aerosol dynamics model, MIAQ4, is coupled to a widely used multizone airflow and transport model, COMIS, to better understand and quantify the behavior of particles in indoor environments. MIAQ4 simulates the evolution of a size and chemically resolved particle distribution, including the effects of direct indoor emission, ventilation, filtration, deposition, and coagulation. COMIS predicts interzonal air-exchange rates based on pressure gradients (due to wind, buoyancy, and HVAC operation) and leaks between the zones and with the outside. The capabilities of the coupled system are demonstrated by predicting the transport of particles from two sources in a residence: environmental tobacco smoke (ETS) and particles generated from cooking. For ETS, MIAQ4 predicts particle size distributions that are similar to the emission source profile because ETS particles, concentrated in the size range 0.1–1 μm, are transformed by coagulation and deposition slowly compared with the rates of transport. For cooking, MIAQ4 predicts that the larger-sized particles will settle rapidly, causing a shift in size distribution as emissions are transported to other rooms.