Sample records for models assessing nutrient-limited

We assessed nitrogen and phosphorus limitation in a floodplain forest in southern Georgia, USA, using two commonly used methods: nitrogen-to-phosphorus (N:P) ratios in litterfall and fertilized ingrowth cores. We measured nitrogen (N) and phosphorus (P) concentrations in litterfall to determine N:P mass ratios. We also installed ingrowth cores within each site containing native soil amended with nitrogen (N), phosphorus (P), or nitrogen and phosphorus (N + P) fertilizers, or without added fertilizer (C). Litter N:P ratios ranged from 16 to 22, suggesting P limitation. However, fertilized ingrowth cores indicated N limitation because fine-root length density was greater in cores fertilized with N or N + P than in those fertilized with P or without added fertilizer. We feel that these two methods of assessing nutrient limitation should be corroborated with fertilization trials prior to use on a wider basis.
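The litterfall diagnostic used above can be sketched as a short calculation. The cutoff values below (N:P < 14 indicating N limitation, N:P > 16 indicating P limitation) are commonly cited wetland thresholds used here as illustrative assumptions, not values reported in this study:

```python
def classify_np_limitation(n_conc, p_conc, n_cut=14.0, p_cut=16.0):
    """Classify nutrient limitation from litter N and P concentrations
    (same mass units, e.g. mg/g). The thresholds follow commonly cited
    wetland values (N:P < 14 -> N-limited, N:P > 16 -> P-limited);
    they are illustrative, not universal constants."""
    ratio = n_conc / p_conc
    if ratio < n_cut:
        status = "N-limited"
    elif ratio > p_cut:
        status = "P-limited"
    else:
        status = "co-limited / indeterminate"
    return ratio, status

# Litter N:P ratios of 16-22, as reported in the study, sit at or above
# the P-limitation threshold (concentrations here are hypothetical):
print(classify_np_limitation(11.0, 0.5))  # → (22.0, 'P-limited')
```

As the abstract notes, such ratio-based diagnoses can disagree with fertilization responses, which is why corroboration with fertilization trials is recommended.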

With the advent of phosphorus (P)-adsorbent materials and techniques to address eutrophication in aquatic systems, there is a need to develop interpretive techniques to rapidly assess changes in potential nutrient limitation. In a trial application of the P-adsorbent, lanthanum-modified bentonite

Terrestrial carbon (C) cycle models applied for climate projections simulate a strong increase in net primary productivity (NPP) due to elevated atmospheric CO2 concentration during the 21st century. These models usually neglect the limited availability of nitrogen (N) and phosphorus (P), nutrients that commonly limit plant growth and soil carbon turnover. To investigate how the projected C sequestration is altered when stoichiometric constraints on C cycling are considered, we incorporated a P cycle into the land surface model JSBACH (Jena Scheme for Biosphere–Atmosphere Coupling in Hamburg), which already includes representations of coupled C and N cycles.

The model reveals a distinct geographic pattern of P and N limitation. Under the SRES (Special Report on Emissions Scenarios) A1B scenario, the accumulated land C uptake between 1860 and 2100 is 13% (particularly at high latitudes) and 16% (particularly at low latitudes) lower in simulations with N and P cycling, respectively, than in simulations without nutrient cycles. The combined effect of both nutrients reduces land C uptake by 25% compared to simulations without N or P cycling. Nutrient limitation in general may be biased by the model simplicity, but the ranking of limitations is robust against the parameterization and the inflexibility of stoichiometry. After 2100, increased temperature and high CO2 concentration cause a shift from N to P limitation at high latitudes, while nutrient limitation in the tropics declines. The increase in P limitation at high latitudes is induced by a strong increase in NPP and the low P sorption capacity of soils, while a decline in tropical NPP due to high autotrophic respiration rates alleviates N and P limitations. The quantification of P limitation remains challenging: the poorly constrained processes of soil P sorption and biochemical mineralization are identified as the main uncertainties in the strength of P limitation.

As coastal plants that can survive in salt water, mangroves play an essential role in large marine ecosystems (LMEs). The Red Sea, where the growth of mangroves is stunted, is one of the least studied LMEs in the world. Mangroves along the Central Red Sea have characteristic heights of ~2 m, suggesting nutrient limitation. We assessed the nutrient status of mangrove stands in the Central Red Sea and conducted a fertilization experiment (N, P, and Fe, and various combinations thereof) on 4-week-old seedlings of Avicennia marina to identify limiting nutrients and stoichiometric effects. We measured height, number of leaves, number of nodes, and root development at different time periods, as well as the leaf content of C, N, P, Fe, and Chl a in the experimental seedlings. Height, number of nodes, and number of leaves differed significantly among treatments. Iron treatment resulted in significantly taller plants compared with other nutrients, demonstrating that iron is the primary limiting nutrient in the tested mangrove population and confirming Liebig's law of the minimum: iron addition alone yielded results comparable to those using complete fertilizer. This result is consistent with the biogenic nature of the sediments in the Red Sea, which are dominated by carbonates, and the lack of riverine sources of iron.

Restoration of wet grassland communities on peat soils involves management of nutrient supply and hydrology. The concept of nutrient limitation was discussed, as well as its interaction with drainage and rewetting of severely drained peat soils. Different methods of assessing nutrient limitation were

Background: Rapid determination of which nutrients limit the primary production of macroalgae and seagrasses is vital for understanding the impacts of eutrophication on marine and freshwater ecosystems. However, current methods to assess nutrient limitation are often cumbersome and time consuming. F

Relative supplies of macro- and micronutrients (C, N, P, various metals), along with light and water, control ecosystem metabolism, trophic energy transfer, and community structure. Here we test the hypothesis, using measurements from 41 spring-fed rivers in Florida, that tissue stoichiometry indicates autotroph nutrient limitation status. Low variation in discharge, temperature, and chemical composition within springs, but large variation across springs, creates an ideal setting to assess the relationship between limitation and resource supply. Molar N:P ranges from 0.4 to 90, subjecting autotrophs to dramatically different nutrient supply. Over this gradient, species-specific autotroph tissue C:N:P ratios are strictly homeostatic, with no evidence that nutrient supply affects species composition. Expanding to include 19 metals and micronutrients revealed that autotrophs are more plastic in response to micronutrient variation, particularly for iron and manganese, whose supply fluxes are small compared to biotic demand. Using a Droop model modified to reflect conditions in the springs (benthic production, light limitation, high hydraulic turnover), we show that tissue stoichiometry transitions from homeostatic to plastic with the onset of nutrient limitation, providing a potentially powerful new tool for predicting nutrient limitation and thus eutrophication in flowing waters.
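The Droop (cell quota) framework referenced above ties growth rate to an internal nutrient quota rather than to the external concentration. A minimal numerical sketch, with illustrative parameter values (not the springs-specific modifications for benthic production or light limitation), shows why tissue stoichiometry is insensitive to supply when nutrients are replete but becomes plastic as supply declines:

```python
def droop_mu(Q, mu_max=1.0, Q_min=0.5):
    """Droop growth rate as a function of internal nutrient quota Q:
    mu = mu_max * (1 - Q_min / Q), clipped at zero."""
    return mu_max * max(0.0, 1.0 - Q_min / Q)

def simulate_quota(S, rho_max=2.0, K=1.0, Q0=2.0, dt=0.01, steps=5000):
    """Integrate quota dynamics dQ/dt = uptake - dilution by growth,
    at fixed external nutrient concentration S. All parameter values
    are illustrative, not fitted to the springs data."""
    Q = Q0
    for _ in range(steps):
        rho = rho_max * S / (K + S)   # Michaelis-Menten uptake
        mu = droop_mu(Q)
        Q += dt * (rho - mu * Q)      # growth dilutes the internal quota
    return Q

# Under replete supply the equilibrium quota (a proxy for tissue
# stoichiometry) is high and nearly insensitive to S; as S falls, the
# quota collapses toward Q_min, i.e. stoichiometry becomes plastic
# only at the onset of limitation:
print(simulate_quota(S=10.0), simulate_quota(S=0.05))
```

The equilibrium quota here is Q* = Q_min + ρ_max S / (μ_max (K + S)), so doubling a replete S barely moves Q*, while the same proportional change near S ≈ 0 moves it strongly, mirroring the homeostatic-to-plastic transition described in the abstract.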

An important methodological problem in plant ecology concerns the way in which the type and extent of nutrient limitation in terrestrial communities should be assessed. Conclusions on nutrient limitation have been founded mainly on soil extractions, fertiliser trials and tissue nutrient concentratio

Accurately predicting the effects of global change on net carbon (C) exchange between terrestrial ecosystems and the atmosphere requires a more complete understanding of how nutrient availability regulates both plant growth and heterotrophic soil respiration. Models of soil development suggest that the nature of nutrient limitation changes over the course of ecosystem development, transitioning from nitrogen (N) limitation in 'young' sites to phosphorus (P) limitation in 'old' sites. However, previous research has focused primarily on plant responses to added nutrients, and the applicability of nutrient limitation–soil development models to belowground processes has not been thoroughly investigated. Here, we assessed the effects of nutrients on soil C cycling in three different forests that occupy a 4-million-year substrate age chronosequence where tree growth is N limited at the youngest site, co-limited by N and P at the intermediate-aged site, and P limited at the oldest site. Our goal was to use short-term laboratory soil C manipulations (using 14C-labeled substrates) and longer-term intact soil core incubations to compare belowground responses to fertilization with aboveground patterns. When nutrients were applied with labile C (sucrose), patterns of microbial nutrient limitation were similar to plant patterns: microbial activity was limited more by N than by P in the young site, and P was more limiting than N in the old site. However, in the absence of C additions, increased respiration of native soil organic matter only occurred with simultaneous additions of N and P. Taken together, these data suggest that altered nutrient inputs into ecosystems could have dissimilar effects on C cycling above- and belowground, that nutrients may differentially affect the fate of different soil C pools, and that future changes to the net C balance of terrestrial ecosystems will be partially regulated by soil nutrient status. © 2010 US Government.

Background: Methanogenic Archaea play key metabolic roles in anaerobic ecosystems, where they use H2 and other substrates to produce methane. Methanococcus maripaludis is a model for studies of the global response to nutrient limitations. Results: We used high-coverage quantitative proteomics to determine the response of M. maripaludis to growth-limiting levels of H2, nitrogen, and phosphate. Six to ten percent of the proteome changed significantly with each nutrient limitation. H2 limitation increased the abundance of a wide variety of proteins involved in methanogenesis. However, one protein involved in methanogenesis decreased: a low-affinity [Fe] hydrogenase, which may dominate over a higher-affinity mechanism when H2 is abundant. Nitrogen limitation increased known nitrogen assimilation proteins. In addition, the increased abundance of molybdate transport proteins suggested they function in nitrogen fixation. An apparent regulon governed by the euryarchaeal nitrogen regulator NrpR is discussed. Phosphate limitation increased the abundance of three different sets of proteins, suggesting that all three function in phosphate transport. Conclusion: The global proteomic response of M. maripaludis to each nutrient limitation suggests a wider response than previously appreciated. The results give new insight into the function of several proteins, as well as providing information that should contribute to the formulation of a regulatory network model.

The magnitude, spatial distribution, and variability of land net ecosystem exchange of carbon (NEE) are important determinants of the trajectory of atmospheric carbon dioxide concentration. Independent observational constraints provide important clues regarding NEE and its component fluxes, with information available at multiple spatial scales: from cells, to leaves, to entire organisms and collections of organisms, to complex landscapes, and up to continental and global scales. Experimental manipulations, ecosystem observations, and process modeling all suggest that the components of NEE (photosynthetic gains, and respiration and other losses) are controlled in part by the availability of mineral nutrients, and that nutrient limitation is a common condition in many biomes. Experimental and observational constraints at different spatial scales provide a complex and sometimes puzzling picture of the nature and degree of influence of nutrient availability on carbon cycle processes. Photosynthetic rates assessed at the cellular and leaf scales are often higher than the observed accumulation of carbon in plant and soil pools would suggest. We infer that a down-regulation process intervenes between carbon uptake and plant growth under conditions of nutrient limitation, and several down-regulation mechanisms have been hypothesized and tested. A recent evaluation of two alternative hypotheses for down-regulation in the light of whole-plant level flux estimates indicates that some plants take up and store extra carbon, releasing it to the environment again on short time scales. The mechanism of release, either as additional autotrophic respiration or as exudation belowground, is unclear, but has important consequences for long-term ecosystem state and response to climate change signals. Global-scale constraints from atmospheric concentration and isotopic composition data help to resolve this question, ultimately focusing attention on land use fluxes as the most uncertain

Soils of northern permafrost regions currently contain twice as much carbon as the entire Earth's atmosphere. Traditionally, environmental constraints have limited microbial activity, resulting in restricted decomposition of soil organic matter in these systems and accumulation of massive amounts of soil organic carbon (SOC); however, climate change is reducing the constraints on decomposition in arctic permafrost regions. Carbon cycling in nutrient-poor arctic ecosystems is tightly coupled to other biogeochemical cycles. Several studies have suggested strong nitrogen limitation of primary productivity, and potentially of warm-season microbial activity, in these nutrient-deficient soils. Nitrogen is required for microbial extracellular enzyme production, which drives the decomposition of soil organic matter (SOM). Nitrogen-limited arctic soils may also experience limitation via labile carbon availability, despite the SOM-rich environment, due to low extracellular enzyme production. Few studies have directly addressed nutrient-induced microbial limitation in SOC-rich arctic tundra soils, and even less is known about the potential for nutrient co-limitation. Additionally, through the process of becoming deglaciated, sites within close proximity to one another may have experienced drastic differences in their effective soil ages due to the varied length of their active histories. Many soil properties and nutrient deficiencies are directly related to soil age; however, this chronology has not previously been a focus of research on nutrient limitation of arctic soil microbial activity. Understanding nutrient limitations, as well as potential co-limitation, of arctic soil microbial activity has important implications for carbon cycling and the ultimate fate of the current arctic SOC reservoir. Analyses of nutrient limitation on soils of a single site are not adequate for fully understanding the controls on soil microbial activity across a vast land mass with large variation in

Microbial activity and growth in soil are regulated by several abiotic factors, with temperature, moisture, and pH as the most important ones. At the same time, nutrient conditions and substrate availability also determine microbial growth. The amount of substrate will not only affect overall microbial growth, but also the balance of fungal and bacterial growth; the type of substrate will also affect the latter. Furthermore, according to Liebig's law of limiting factors, we would expect one nutrient to be the main limiting one for microbial growth in soil. When this nutrient is added, the initially secondary limiting factor will become the main one, adding complexity to the microbial response after adding different substrates. I will initially describe different ways of determining limiting factors for bacterial growth in soil, especially a rapid method estimating bacterial growth, using the leucine incorporation technique, after adding C (as glucose), N (as ammonium nitrate), and P (as phosphate). Scenarios of different limitations will be covered, with the bacterial growth response compared with fungal growth and total activity (respiration). The "degree of limitation", as well as the main limiting nutrient, can be altered by adding substrate of different stoichiometric composition. However, the organism group responding after alleviating the nutrient limitation can differ depending on the type of substrate added. There will also be situations where fungi and bacteria appear to be limited by different nutrients. Finally, I will describe interactions between abiotic factors and the response of the soil microbiota to alleviation of limiting factors.
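The Liebig-style logic described above, identifying the main limiting factor from growth responses to single-nutrient additions, can be sketched as a small classifier. The 1.2 response threshold and the response values are hypothetical, chosen only to illustrate the decision rule, not taken from the leucine incorporation method itself:

```python
def limiting_nutrient(responses, threshold=1.2):
    """Identify the main limiting nutrient from relative growth
    responses (treatment / unamended control) to single-nutrient
    additions. 'threshold' is an illustrative cutoff for calling a
    response meaningful; returns None if nothing is stimulated
    (possible co-limitation, to be checked with combined additions)."""
    stimulated = {k: v for k, v in responses.items() if v >= threshold}
    if not stimulated:
        return None
    return max(stimulated, key=stimulated.get)

# Hypothetical bacterial growth responses after C, N, and P additions;
# a strong response to C alone points to C as the main limiting factor:
print(limiting_nutrient({"C": 2.5, "N": 1.1, "P": 1.0}))  # → C
```

After the main limiting nutrient is supplied, rerunning the assay would typically reveal the secondary limitation, matching the sequential-limitation behavior the abstract describes.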

Both nitrogen (N) and phosphorus (P) can limit primary production in shallow lakes, but it is still debated how the importance of N and P varies in time and space. We sampled 83 shallow lakes along a latitudinal gradient (5°–55° S) in South America and assessed the potential nutrient limitation using different methods, including nutrient ratios in sediment, water, and seston, dissolved nutrient concentrations, and the occurrence of N-fixing cyanobacteria. We found that local characteristics such as soil type and associated land use in the catchment, hydrology, and also the presence of abundant submerged macrophyte growth influenced N and P limitation. We found neither a consistent variation in nutrient limitation nor indications of a steady change in denitrification along the latitudinal gradient. Contrary to findings in other regions, we did not find a relationship between the occurrence of (N-fixing and non-N-fixing) cyanobacteria and the TN:TP ratio. We found N-fixing cyanobacteria (those with heterocysts) exclusively in lakes with dissolved inorganic nitrogen (DIN) concentrations of <100 μg/L, but notably they were also often absent in lakes with low DIN concentrations. We argue that local factors such as land use and hydrology have a stronger influence than climate on which nutrient is limiting. Furthermore, our data show that in a wide range of climates, N limitation does not necessarily lead to cyanobacterial dominance.

Phosphorus (P) is a critical nutrient and frequently limits primary productivity in terrestrial ecosystems. Microorganisms have evolved an array of strategies to mobilize occluded and insoluble P and may be important regulators of P availability to vegetation. Understanding the mechanisms of P mobilization, the breadth of microorganisms responsible, and the impact of these organisms on vegetation growth remains an important knowledge gap, both for predicting ecosystem productivity and for harnessing microbial functions to improve vegetation growth. To determine the relationship between soil development, phosphorus availability, and P-mobilizing microorganisms and their strategies, we are studying a marine terrace chronosequence (Ecological Staircase, Mendocino County, CA) representing a fertility gradient culminating in P-limited pygmy forests, which provides an ideal natural observatory to investigate how plant-microbe interactions co-evolve in response to P stress. Soil mineralogical analysis identified acidic soils bearing iron and aluminum phosphates and phytate as the dominant forms of occluded inorganic and organic P, respectively. Several diverse bacterial and fungal strains were isolated on media with AlPO4, FePO4, or phytate as the sole P source. Most microorganisms were able to utilize AlPO4 as a sole P source, with fewer subsisting on FePO4 or phytate. Terraces with a higher fraction of occluded and organic P harbored the greatest abundance of P-mobilizing microorganisms, with a significant proportion coming from the Burkholderia. Isolates that exhibited significant excess P mobilization were inoculated onto Arabidopsis and switchgrass plants grown with insoluble P forms and had a positive impact on growth. These results indicate that rhizosphere microorganisms that have evolved under extreme nutrient limitation have an extended capacity for P solubilization, and could potentially be harnessed to alleviate P stress for plants. The detailed mechanisms for P

Photorhabdus is a genus of Gram-negative entomopathogenic bacteria that also maintain a mutualistic association with nematodes from the family Heterorhabditis. Photorhabdus has an extensive secondary metabolism that is required for the interaction between the bacteria and the nematode. A major component of this secondary metabolism is a stilbene molecule, called ST. The first step in ST biosynthesis is the non-oxidative deamination of phenylalanine, resulting in the production of cinnamic acid. This reaction is catalyzed by phenylalanine-ammonium lyase, an enzyme encoded by the stlA gene. In this study we show, using a stlA-gfp transcriptional fusion, that the expression of stlA is regulated by nutrient limitation through a regulatory network that involves at least three regulators. We show that TyrR, a LysR-type transcriptional regulator that regulates gene expression in response to aromatic amino acids in E. coli, is absolutely required for stlA expression. We also show that stlA expression is modulated by σS and Lrp, regulators that are implicated in the regulation of the response to nutrient limitation in other bacteria. This work is the first to describe pathway-specific regulation of secondary metabolism in Photorhabdus; our study therefore provides an initial insight into the complex regulatory network that controls secondary metabolism, and therefore mutualism, in this model organism.

Low soybean yields in western Kenya have been attributed to low soil fertility despite much work done on nitrogen (N) and phosphorus (P) nutrition, leading to suspicion of other nutrient limitations. To investigate this, a nutrient omission trial was set up in the greenhouse at the University of Eldoret, Kenya, to diagnose the nutrients limiting soybean production in Acrisols from Masaba Central and Butere sub-Counties, and Ferralsols from Kakamega (Shikhulu and Khwisero sub-locations) and Butula sub-Counties, and to assess the effect of liming on soil pH and soybean growth. The experiment was laid out in a completely randomized design with ten treatments: positive control (complete), negative control (distilled water), complete with lime, complete with N, and treatments with the macronutrients P, potassium (K), calcium (Ca), magnesium (Mg), and sulphur (S), or the micronutrients boron (B), molybdenum (Mo), manganese (Mn), copper (Cu), and zinc (Zn), omitted. Visual deficiency symptoms observed included interveinal leaf yellowing under Mg omission and N addition, and dark green leaves under P omission. Omission of nutrients resulted in significantly lower concentrations in plant tissues than in the complete treatment. Significantly (P ≤ 0.05) lower shoot dry weights (SDWs) than in the complete treatment were obtained under omission of K and Mg in Masaba and Shikhulu, Mg in Khwisero, K in Butere, and P, Mg, and K in Butula. Nitrogen significantly improved SDWs in soils from Kakamega and Butula. Liming significantly raised soil pH by 9, 13, and 11% from 4.65, 4.91, and 4.99 in soils from Masaba, Butere, and Butula, respectively, and soybean SDWs in soils from Butere. The results show that poor soybean growth was due to K, Mg, and P limitation and low pH in some soils. The results also indicate the necessity of applying small quantities of N for initial soybean use.

Phytoplankton nutrient limitation was studied in the Gulf of Riga during the spring bloom (April 1995), the early summer stage (June 1994), a cyanobacterial bloom (July 1994), and a post-cyanobacterial bloom (August 1993). Each year six factorial nutrient enrichment experiments were carried out at various locations in the Gulf, including the outer Irbe Strait, the northern Gulf, and the southern Gulf. The responses of natural phytoplankton communities to the nutrient additions (80 μg NH4-N l⁻¹, 20 μg PO4-P l⁻¹, and two levels of combined additions) were followed for 3 days using 6 l experimental units. To evaluate the nutrient limitation patterns, time series of chlorophyll a were analysed using polynomial regression models and a ranking method, taking advantage of the relatively constant experimental error. Apparent nutrient depletion rates and ratios were estimated and compared with the changes in particulate nutrient ratios. During the spring diatom bloom in 1995, ambient inorganic nutrient concentrations were still high, and thus phytoplankton biomass did not respond to additions of nutrients. Chlorophyll a specific nutrient depletion rates were low (0.01–0.12 μg N (μg chl a)⁻¹ h⁻¹ and 0.002–0.016 μg P (μg chl a)⁻¹ h⁻¹) and linear over time, thus also revealing that phytoplankton was not limited by these nutrients at that time. In June 1994, there was an areal shift from N limitation in the outer Irbe Strait towards co-limitation in the southern Gulf. Later, in July 1994, during the bloom of N-fixing Aphanizomenon flos-aquae, N limitation was obvious for the whole study area. For this period chlorophyll a specific nutrient depletion rates were high (0.36–0.67 μg N (μg chl a)⁻¹ h⁻¹ and 0.089–0.135 μg P (μg chl a)⁻¹ h⁻¹), and added nutrients were almost totally depleted during the first light period. After the collapse of the cyanobacterial bloom in August 1993, the experiment carried out in the southern Gulf indicated P limitation of phytoplankton. The central Gulf was
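The chlorophyll-specific depletion rates quoted above are straightforward to compute from the decline in a nutrient concentration, normalized by biomass (as chlorophyll a) and time. A minimal sketch, with hypothetical but plausible inputs chosen to land inside the reported July 1994 range:

```python
def chl_specific_depletion(c_start, c_end, chl_a, hours):
    """Chlorophyll a-specific nutrient depletion rate,
    in μg nutrient (μg chl a)^-1 h^-1.
    c_start, c_end, chl_a in μg/l; hours is elapsed time."""
    return (c_start - c_end) / (chl_a * hours)

# E.g. the full 80 μg/l NH4-N addition depleted over a single 12 h
# light period by a community with 10 μg/l chl a (values hypothetical):
rate = chl_specific_depletion(80.0, 0.0, 10.0, 12.0)
print(round(rate, 3))  # → 0.667, at the top of the reported N range
```

Low, time-linear rates (as in April 1995) indicate that uptake is not biomass-driven demand, whereas high rates that exhaust the addition within one light period (as in July 1994) signal strong limitation by that nutrient.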

Biomass allocation can exert a great influence on plant resource acquisition and nutrient use. However, the role of biomass allocation strategies in shaping plant community composition under nutrient limitation remains poorly addressed. We hypothesized that species-specific allocation strategies can affect plant adaptation to nutrient limitation, resulting in species turnover and changes in community-level biomass allocation across nutrient gradients. In this study, we measured species abundance, the concentrations of nitrogen and phosphorus in leaves, and soil nutrients in an arid-hot grassland. We quantified species-specific allocation parameters for stems vs. leaves based on allometric scaling relationships. Species-specific stem vs. leaf allocation parameters were weighted by species abundances to calculate community-weighted means driven by species turnover. We found that the community-weighted means of biomass allocation parameters were significantly related to the soil nutrient gradient as well as to leaf stoichiometry, indicating that species-specific allocation strategies can affect plant adaptation to nutrient limitation in the studied grassland. Species that allocate less to stems than to leaves tend to dominate nutrient-limited environments. The results support the hypothesis that species-specific allocations affect plant adaptation to nutrient limitation. The allocation trade-off between stems and leaves has the potential to greatly affect plant distribution across nutrient gradients.

Burkholderia pseudomallei, the causative agent of melioidosis, can form biofilms and microcolonies in vivo and in vitro. One of the hallmark characteristics of biofilm-forming bacteria is that they can be up to 1,000 times more resistant to antibiotics than their free-living counterparts. Bacteria also become highly tolerant to antibiotics when nutrients are limited, and one of the most important causes of starvation-induced tolerance in vivo is biofilm growth. However, the effect of nutritional stress on biofilm formation and drug tolerance of B. pseudomallei has never been reported. Therefore, this study aims to determine the effect of nutrient-limited and enriched conditions on the drug susceptibility of B. pseudomallei in both planktonic and biofilm forms in vitro, using the broth microdilution method and the Calgary biofilm device, respectively. Biofilm formation by B. pseudomallei in nutrient-limited and enriched conditions was also evaluated by a modified microtiter-plate test. Six isolates of ceftazidime (CAZ)-susceptible and four isolates of CAZ-resistant B. pseudomallei were used. The results showed that the minimum bactericidal concentrations of CAZ against B. pseudomallei in the nutrient-limited condition were higher than those in the enriched condition. B. pseudomallei biofilms in both enriched and nutrient-limited conditions were more drug tolerant than planktonic cells. Moreover, biofilm formation by B. pseudomallei in the nutrient-limited condition was significantly higher than in the enriched condition. These data indicate that nutrient limitation could induce biofilm formation and drug tolerance in B. pseudomallei.

Past work suggests that burial and low nutrient availability limit the growth and zonal distribution of coastal dune plants. Given the importance of these two factors, there is a surprising lack of field investigations of the interactions between burial and nutrient availability. This study aims to address this issue by measuring the growth responses of four coastal dune plant species to these two factors and their interaction. Species that naturally experience either high or low rates of burial were selected, and a factorial burial-by-nutrient-addition experiment was conducted. Growth characteristics were measured in order to determine which characteristics allow a species to respond to burial. Species that naturally experience high rates of burial (Arctotheca populifolia and Scaevola plumieri) displayed increased growth when buried, and this response was nutrient-limited. Stable-dune species had either small (Myrica cordifolia, an N-fixer) or negligible (Metalasia muricata) responses to burial, and were not nutrient-limited. This interspecific difference in response to burial and/or fertiliser is consistent with the idea that burial maintains the observed zonation of species on coastal dunes: species that are unable to respond to burial are prevented from occupying the mobile dunes. Species able to cope with high rates of burial had high nitrogen-use efficiencies and low dry-mass costs of production, explaining their ability to respond to burial under nutrient limitation. The interaction between burial and nutrient limitation is understudied but vital to understanding the zonation of coastal dune plant species.

There has long been a need for simple and accurate bioassays for evaluating nutrient limitation in aquatic ecosystems. Although organic carbon is usually considered the limiting nutrient for microbial growth in many aquatic ecosystems, many water sources are limited by phosphorus or nitrogen. A method named the "nitrogen fixing bacterial growth potential" (NFBGP) test, based on pre-culturing of autochthonous (target) microorganisms, is described. The method was applied to evaluate phosphorus or nitrogen limitation in lake and sewage water samples using an isolate of the nitrogen-fixing bacterium Azorhizobium sp. WS6. The results corresponded well to those from the traditional algal growth potential (AGP) test and the bacterial regrowth potential (BRP) test, suggesting that the NFBGP test is a useful supplementary method for evaluating the limiting nutrient, especially phosphorus, in aquatic environments.

Examining foliar nutrient concentrations after fertilization provides an alternative method for detecting nutrient limitation of ecosystems that is logistically simpler to measure than biomass change. We present a meta-analysis of response ratios of foliar nitrogen and phosphorus (RRN, RRP) after addition of nitrogen (N), phosphorus (P), or the two elements in combination, in relation to climate, ecosystem type, life form, family, and methodological factors. The results support other meta-analyses based on biomass and demonstrate strong evidence for nutrient limitation in natural communities. However, because N fertilization experiments greatly outnumber P fertilization trials, it is difficult to discern the absolute importance of N vs. P vs. co-limitation across ecosystems. Despite these caveats, it is striking that the results did not follow the "conventional wisdom" that temperate ecosystems are N-limited and tropical ones P-limited. In addition, foliar N:P ratios, as distinct from response ratios, are also a useful index of nutrient limitation, but because their values overlap widely, universal cutoff values for delimiting N vs. P limitation are unlikely. Differences in RRN and RRP were most significant across ecosystem types, plant families, life forms, and competitive environments, but not across climatic variables.
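Response ratios such as RRN and RRP are most commonly computed in fertilization meta-analyses as the natural log of the treatment mean over the control mean; the exact formulation used by this particular study is not stated here, and the foliar [N] values below are made up for demonstration:

```python
import math

# Log response ratio as used in most ecological meta-analyses:
# RR = ln(X_treatment / X_control). RR > 0 means the fertilized mean
# exceeds the control mean. The input values are hypothetical.

def log_response_ratio(mean_treatment, mean_control):
    """Return ln(treatment/control) for two positive group means."""
    return math.log(mean_treatment / mean_control)

# Hypothetical foliar [N] (mg/g): N-fertilized vs. control plots.
rr_n = log_response_ratio(22.0, 18.0)
```

A positive rr_n here would be read as evidence that N addition raised foliar [N], i.e. one line of evidence for N limitation.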

Questions remain as to which soil nutrients limit primary production in tropical forests. Phosphorus (P) has long been considered the primary limiting element in lowland forests, but recent evidence demonstrates substantial heterogeneity in responses to nutrient addition, highlighting a need to understand and diagnose nutrient limitation across diverse forests. Fine-root characteristics, including abundance, functional traits, and mycorrhizal symbionts, can be highly responsive to changes in soil nutrients and may help to diagnose nutrient limitation. Here, we document the response of fine roots to long-term nitrogen (N), P, and potassium (K) fertilization in a lowland forest in Panama. Because this experiment has demonstrated that N and K together limit tree growth and that P limits fine litter production, we hypothesized that fine roots would also respond to nutrient addition. Specifically, we hypothesized that N, P, and K addition would reduce the biomass, diameter, tissue density, and mycorrhizal colonization of fine roots, and increase nutrient concentrations in root tissue. Most morphological root traits responded to the single addition of K and the paired addition of N and P, with the greatest response to all three nutrients combined. The addition of N, P, and K together reduced fine-root biomass, length, and tissue density, and increased specific root length, whereas root diameter remained unchanged. Nitrogen addition did not alter root N concentration, but P and K addition increased root P and K concentrations, respectively. Mycorrhizal colonization of fine roots declined with N, increased with P, and was unresponsive to K addition. Although plant species composition remained unchanged after 14 years of fertilization, fine-root characteristics responded to N, P, and K addition, providing some of the strongest stand-level responses in this experiment. Multiple soil nutrients thus regulate fine-root abundance, morphological and chemical traits, and their associations with mycorrhizal fungi.

Nodularia spumigena is one of the dominant species during the extensive cyanobacterial blooms in the Baltic Sea. The blooms coincide with strong light, stable stratification, and low ratios of dissolved inorganic nitrogen to dissolved inorganic phosphorus. The ability to fix nitrogen, a high tolerance to phosphorus starvation, and photo-protective strategies (production of mycosporine-like amino acids, MAAs) may give N. spumigena a competitive advantage over other phytoplankton during the blooms. To elucidate the interactive effects of ambient UV radiation and nutrient limitation on the performance of N. spumigena, an outdoor experiment was designed. Two radiation treatments, photosynthetically active radiation (PAR) and PAR + UV-A + UV-B (PAB), and three nutrient treatments were established: nutrient replete (NP), nitrogen limited (-N), and phosphorus limited (-P). Variables measured were specific growth rate, heterocyst frequency, cell volume, and cell concentrations of MAAs, photosynthetic pigments, particulate carbon (POC), particulate nitrogen (PON), and particulate phosphorus (POP). Ratios of particulate organic matter were calculated: POC/PON, POC/POP, and PON/POP. There was no interactive effect of radiation and nutrient limitation on the specific growth rate of N. spumigena, but there was an overall effect of phosphorus limitation on the variables measured. Interaction effects were observed for some variables: cell size (larger cells in -P PAB than in other treatments) and the carotenoid canthaxanthin (highest concentration in -N PAR). In addition, significantly less POC and PON (mol cell(-1)) were found in -P PAR than in -P PAB, and the opposite radiation effect was observed under -N. Our study shows that, despite interactive effects on some of the variables studied, N. spumigena tolerates high ambient UVR even under nutrient-limiting conditions and maintains a positive growth rate even under severe phosphorus limitation.

Increased lifespan caused by food limitation has been observed in a wide range of animals, including the nematode Caenorhabditis elegans. We show here that the lifespans of the feeding-defective mutants eat-2 and eat-5 and of a mutant of dbl-1, which encodes a TGFβ ligand, change significantly depending on whether cultures are fed Escherichia coli strain OP50 or the more nutrient-rich strain HB101. On HB101 food, the eat-2, eat-5, and dbl-1 mutants show increased lifespan compared with the wild type. This result is probably due to nutrient limitation, because the eat mutations reduce food uptake and the dbl-1 mutation, which affects expression of several digestive enzymes, leads to nutrient limitation. In contrast, the lifespans of the eat-2 and dbl-1 mutants were shorter than that of the wild type on OP50 food. We found that live OP50 cells were markedly more numerous within these mutants than within the wild type, suggesting that impaired digestion of pathogenic OP50 decreased lifespan in the eat-2 and dbl-1 mutants.

The pigment responses of Prorocentrum donghaiense and Thalassiosira weissflogii under nitrate (N) and phosphate (P) limitation were studied using HPLC and in vivo fluorescence protocols in batch cultures. For P. donghaiense, the HPLC results showed that pigment ratios remained stable under different nutrient conditions. For T. weissflogii, the ratio of chlorophyllide to Chl a was lower during the exponential phase but higher during the stationary phase. Different members of the phytoplankton thus had different pigment response mechanisms under nutrient limitation. From the in vivo fluorescence results, the ratio of peridinin to Chl a for P. donghaiense increased in nutrient-free culture but remained stable in nutrient-limited cultures during the exponential phase. For T. weissflogii, the ratio of fucoxanthin to Chl a increased in each culture during the exponential phase, but the ratio under N limitation was appreciably lower than that under P limitation during the stationary phase. The results indicate that both the pigment ratios from HPLC and the in vivo fluorescence of T. weissflogii changed greatly under different nutrient conditions, suggesting that both ratios could be used as indicators of algal physiological status under different nutrient conditions.

Human activities have more than doubled the amount of nitrogen (N) circulating in the biosphere. One major pathway of this anthropogenic N input into ecosystems has been increased regional deposition from the atmosphere. Here we show that atmospheric N deposition increased the stoichiometric ratio of N and phosphorus (P) in lakes in Norway, Sweden, and Colorado, United States, and, as a result, patterns of ecological nutrient limitation were shifted. Under low N deposition, phytoplankton growth is generally N-limited; however, in high-N deposition lakes, phytoplankton growth is consistently P-limited. Continued anthropogenic amplification of the global N cycle will further alter ecological processes, such as biogeochemical cycling, trophic dynamics, and biological diversity, in the world's lakes, even in lakes far from direct human disturbance.

To survive in a host environment, microbial pathogens must sense local conditions, including nutrient availability, and adjust their growth state and virulence functions accordingly. No comprehensive investigation of growth phase-related gene regulation in Bordetella pertussis has been reported previously. We characterized changes in genome-wide transcript abundance of B. pertussis as a function of growth phase and availability of glutamate, a key nutrient for this organism. Using a Bordetella DNA microarray, we discovered significant changes in transcript abundance for 861 array elements during the transition from log phase to stationary phase, including declining transcript levels of many virulence factor genes. The responses to glutamate depletion exhibited similarities to the responses induced by exit from log phase, including decreased virulence factor transcript levels. However, only 23% of array elements that showed at least a fourfold growth phase-associated difference in transcript abundance also exhibited glutamate depletion-associated changes, suggesting that nutrient limitation may be one of several interacting factors affecting gene regulation during stationary phase. Transcript abundance patterns of a Bvg+ phase-locked mutant revealed that the BvgAS two-component regulatory system is a key determinant of growth phase- and nutrient limitation-related transcriptional control. Several adhesin genes exhibited lower transcript abundance during stationary phase and under glutamate restriction conditions. The predicted bacterial phenotype was confirmed: adherence to bronchoepithelial cells decreased 3.3- and 4.4-fold at stationary phase and with glutamate deprivation, respectively. Growth phase and nutrient availability may serve as cues by which B. pertussis regulates virulence according to the stage of infection or the location within the human airway.

Fishes can play important functional roles in the nutrient dynamics of freshwater systems. Aggregating fishes have the potential to generate areas of increased biogeochemical activity, or hotspots, in streams and rivers. Many of the studies documenting the functional role of fishes in nutrient dynamics have focused on native fish species; however, introduced fishes may restructure nutrient storage and cycling in freshwater systems because they can attain high population densities in novel environments. The purpose of this study was to examine the impact of a non-native catfish (Loricariidae: Pterygoplichthys) on nitrogen and phosphorus remineralization and to estimate whether large aggregations of these fish generate measurable biogeochemical hotspots within nutrient-limited ecosystems. Loricariids formed large aggregations during daylight hours and dispersed throughout the stream during evening hours to graze benthic habitats. Excretion rates of phosphorus were twice as great during nighttime hours, when the fishes were actively feeding; however, there was no diel pattern in nitrogen excretion rates. Our results indicate that spatially heterogeneous aggregations of loricariids can significantly elevate dissolved nutrient concentrations via excretion relative to ambient nitrogen and phosphorus concentrations during daylight hours, creating biogeochemical hotspots and potentially altering nutrient dynamics in invaded systems.

Although the Mississippi-Atchafalaya River system exports large amounts of nutrients to the Northern Gulf of Mexico annually, nutrient limitation of primary productivity still occurs offshore and is one of the major factors controlling local phytoplankton biomass and community structure. Bioassays were conducted for 48 h at two stations adjacent to the river plumes in April and August 2012. High-performance liquid chromatography (HPLC) combined with ChemTax and a Fluorescence Induction and Relaxation (FIRe) system were used to observe changes in phytoplankton community structure and photosynthetic activity. The major fluorescence parameters (Fo, Fv/Fm) clearly revealed the stimulating effect of the treatments with nitrogen (N-nitrate) and with nitrogen plus phosphate (+NPi). HPLC/ChemTax results showed that phytoplankton community structure shifted with nitrate addition: we observed an increase in the proportion of diatoms and prasinophytes and a decrease in cyanobacteria and prymnesiophytes. These findings are consistent with trait-based analyses, which predict that phytoplankton groups with high maximum growth rates (μmax) and high nutrient uptake rates (Vmax) readily take advantage of additions of limiting nutrients. Changes in phytoplankton community structure, if persistent, could alter particulate organic matter fluxes, micro-food web cycles, and bottom oxygen consumption.

The growth rates of planktonic microbes in the pelagic zone of the Eastern Mediterranean Sea are nutrient limited, but the type of limitation is still uncertain. During this study, we investigated the occurrence of N and P limitation among different groups of the prokaryotic and eukaryotic (pico-, nano-, and micro-) plankton using a microcosm experiment during stratified water column conditions in the Cretan Sea (Eastern Mediterranean). Microcosms were enriched with N and P (either solely or simultaneously), and the PO4 turnover time, prokaryotic heterotrophic activity, primary production, and the abundance of the different microbial components were measured. Flow cytometric and molecular fingerprint analyses showed that different heterotrophic prokaryotic groups were limited by different nutrients; total heterotrophic prokaryotic growth was limited by P, but only when both N and P were added, changes in community structure and cell size were detected. Phytoplankton were N and P co-limited, with autotrophic pico-eukaryotes being the exception as they increased even when only P was added after a 2-day time lag. The populations of Synechococcus and Prochlorococcus were highly competitive with each other; Prochlorococcus abundance increased during the first 2 days of P addition but kept increasing only when both N and P were added, whereas Synechococcus exhibited higher pigment content and increased in abundance 3 days after simultaneous N and P additions. Dinoflagellates also showed opportunistic behavior at simultaneous N and P additions, in contrast to diatoms and coccolithophores, which diminished in all incubations. High DNA content viruses, selective grazing, and the exhaustion of N sources probably controlled the populations of diatoms and coccolithophores.

A substantial contribution of biogenic weathering to ecosystem nutrition, especially by symbiotic microorganisms, has often been proposed, but large-scale in vivo studies are still missing. Here we compare a set of ecosystems spanning from the Antarctic to tropical forests for their potential biogenic weathering and its drivers. To address biogenic weathering rates, we installed mineral mesocosms accessible only to bacteria and fungi for up to 4 years, which contained freshly broken and defined nutrient-bearing minerals, in soil A horizons of ecosystems along a gradient of soil development differing in climate and plant species communities. Alterations of the buried minerals were analyzed by grid intersection, confocal laser scanning microscopy, energy-dispersive X-ray spectroscopy, and X-ray photoelectron spectroscopy on the surface and on thin sections. On selected sites, carbon fluxes were tracked by 13C labeling, and the microbial community was identified by DNA sequencing. In young ecosystems (protosoils), biogenic weathering is almost absent; it starts after the first carbon accumulation by aeolian (later litter) inputs and is performed mainly by bacteria. With ongoing soil development and the appearance of symbiotic (mycorrhized) plants, nutrient availability in soil increasingly drove biogenic weathering, and fungi became far more important players than bacteria. We found a close relation between fungal biogenic weathering and available potassium across all 16 forested sites in the study, regardless of the dominant mycorrhiza type (AM or EM), climate, and plant-species composition. We conclude that nutrient limitations at the ecosystem scale are generally counteracted by adapted fungal biogenic weathering. The close relation between fungal weathering and plant-available nutrients over a large range of severely contrasting ecosystems points towards a direct energetic support of these weathering processes by the photoautotrophic community, making biogenic weathering a

Nutrient rich conditions often promote plant invasions, yet additions of non-nitrogen (N) nutrients may provide a novel approach for conserving native symbiotic N-fixing plants in otherwise N-limited ecosystems. Lupinus oreganus is a threatened N-fixing plant endemic to prairies in western Oregon and southwest Washington (USA). We tested the effect of non-N fertilizers on the growth, reproduction, tissue N content, and stable isotope δ15N composition of Lupinus at three sites that differed in soil phosphorus (P) and N availability. We also examined changes in other Fabaceae (primarily Vicia sativa and V. hirsuta) and cover of all plant species. Variation in background soil P and N availability shaped patterns of nutrient limitation across sites. Where soil P and N were low, P additions increased Lupinus tissue N and altered foliar δ15N, suggesting P limitation of N fixation. Where soil P was low but N was high, P addition stimulated growth and reproduction in Lupinus. At a third site, with higher soil P, only micro- and macronutrient fertilization without N and P increased Lupinus growth and tissue N. Lupinus foliar δ15N averaged −0.010‰ across all treatments and varied little with tissue N, suggesting consistent use of fixed N. In contrast, foliar δ15N of Vicia spp. shifted towards 0‰ as tissue N increased, suggesting that conditions fostering N fixation may benefit these exotic species. Fertilization increased cover, N fixation, and tissue N of non-target, exotic Fabaceae, but overall plant community structure shifted at only one site, and only after the dominant Lupinus was excluded from analyses. Our finding that non-N fertilization increased the performance of Lupinus with few community effects suggests a potential strategy to aid populations of threatened legume species. The increase in exotic Fabaceae species that occurred with fertilization further suggests that monitoring and adaptive management should accompany any large scale applications.

Semi-arid savannah ecosystems are under strong pressure from climate and land-use changes, especially around populous areas such as the Mt. Kilimanjaro region. Savannah vegetation consists of grassland with isolated trees and is therefore characterized by high spatial variation in canopy cover and aboveground biomass. Both are major regulators of soil ecological parameters and soil-atmosphere trace gas exchange (CO2, N2O, CH4), especially in water-limited environments. The spatial distribution of these parameters and the connections between above- and belowground processes are important for understanding and predicting ecosystem changes and estimating ecosystem vulnerability. Our objective was to determine spatial trends and changes in soil parameters and trace-gas fluxes and to relate their variability to vegetation structure. We chose three trees from each of the two most dominant species (Acacia nilotica and Balanites aegyptiaca). For each tree, we selected transects with a total of nine sampling points under and outside the crown. At each sampling point we measured soil and plant biomass carbon (C) and nitrogen (N) content, δ13C, microbial biomass C and N, soil respiration, available nutrients, pH, and cation exchange capacity (CEC), as well as belowground biomass, soil temperature, and soil water content. Contents and stocks of C and N fractions, Ca2+, K+, and total CEC decreased by up to 50% outside the crown. This was unaffected by tree species, tree size, or other tree characteristics. Water content was below the permanent wilting point and independent of tree cover. In all cases, tree litter had a far narrower C:N ratio than C4-grass litter. The microbial C:N ratio and CO2 efflux were about 30% higher in the open area and strongly dependent on mineral N availability, indicating N limitation and low microbial C-use efficiency in open-area soil. We conclude that the spatial structure of aboveground biomass in savanna ecosystems leads to a spatial redistribution of nutrients.

A cross-ecosystem comparison of data obtained from 20 French Mediterranean lagoons with contrasting eutrophication status provided the basis for investigating the variables that best predict chlorophyll a (Chl a) concentrations and nutrient limitation of phytoplankton biomass along a strong nutrient enrichment gradient. Summer concentrations of dissolved inorganic nitrogen (DIN) and phosphorus (DIP) comprised only a small fraction of total nitrogen (TN) and total phosphorus (TP). On the basis...

Nitrogen (N) or phosphorus (P) limits primary productivity in nearly every ecosystem worldwide, yet how limitation changes over time, particularly in connection to variation in environmental drivers, remains understudied. We evaluated temporal and species-specific variability in the relative importance of N and P limitation among tropical macroalgae in two-factor experiments conducted twice after rains and twice after dry conditions to explore potential linkages to environmental drivers. We studied three common macroalgal species with varying ecological strategies: a fast-growing opportunist, Dictyota bartayresiana; and two calcifying species likely to be slower growing, Galaxaura fasciculata and Padina boryana. On the scale of days to weeks, nutrient responses ranged among and within species from no limitation to increases in growth by 20 and 40% over controls in 3 d with N and P addition, respectively. After light rain or dry conditions, Dictyota grew rapidly (up to ~60% in 3 d) with little indication of nutrient limitation, while Padina and Galaxaura shifted between N, P, or no limitation. All species grew slowly or lost mass after a large storm, presumably due to unfavorable conditions on the reef prior to the experiment that limited nutrient uptake. Padina and Galaxaura both became nutrient limited 3 d post-storm, while Dictyota did not. These results suggest that differing capabilities for nutrient uptake and storage dictate the influence of nutrient history and thus drive nutrient responses and, in doing so, may allow species with differing ecological strategies to coexist in a fluctuating environment. Moreover, the great variability in species' responses indicates that patterns of nutrient limitation are more complex than previously recognized, and generalizations about N versus P limitation of a given system may not convey the inherent complexity in governing conditions and processes.
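The percent-increase figures quoted above (e.g. growth 20-40% over controls in 3 d) follow from simple relative mass change; one plausible reading is sketched below with invented thallus masses, not data from the experiment:

```python
# Percent mass change over an incubation, and the excess of a nutrient
# treatment over its control. All masses are hypothetical.

def percent_growth(mass_final, mass_initial):
    """Percent change in mass from start to end of the incubation."""
    return 100.0 * (mass_final - mass_initial) / mass_initial

# Hypothetical 3-day masses (g) for a control and an N-addition thallus:
g_control = percent_growth(11.0, 10.0)          # 10% growth in control
g_nitrogen = percent_growth(13.0, 10.0)         # 30% growth with N added
excess_over_control = g_nitrogen - g_control    # 20 percentage points
```

An excess of ~20 points over controls, as here, is the kind of signal interpreted in the study as evidence of N limitation.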

Understanding nutrient limitation of net primary productivity (NPP) is critical to predicting how plant communities will respond to environmental change. Foliar nutrients, especially nitrogen and phosphorus concentrations ([N] and [P]) and their ratio, have been used widely as indicators of plant nutritional status and have been linked directly to nutrient limitation of NPP. In tropical systems, however, a high number of confounding factors can limit the ability to predict nutrient limitation--as defined mechanistically by NPP responses to fertilization--based on the stoichiometric signal of the plant community. We used a long-term full-factorial N and P fertilization experiment in a lowland tropical wet forest in Costa Rica to explore how tissue (foliar, litter, and root) [N] and [P] changed with fertilization, how different tree size classes and taxa influenced the community response, and how tissue nutrients related to NPP. Consistent with NPP responses to fertilization, there were no changes in community-wide foliar [N] and [P] two years after fertilization. Nevertheless, litterfall [N] increased with N additions and root [P] increased with P additions. The most common tree species (Pentaclethra macroloba) had 9% higher mean foliar [N] with NP additions, and the most common palm species (Socratea exorrhiza) had 15% and 19% higher mean foliar [P] with P and NP additions, respectively. Moreover, N:P ratios were not indicative of NPP responses to fertilization, at either the community or the taxon level. Our study suggests that in these diverse tropical forests, tissue [N] and [P] are driven by the interaction of multiple factors and are not always indicative of the nutritional status of the plant community.

Modelling complex cognitive and psychological outcomes in, for example, educational assessment led to the development of generalized item response theory (IRT) models. A class of models was developed to solve practical and challenging educational problems by generalizing the basic IRT models. An IRT

Forest productivity is one means of sequestering carbon, and forests are a renewable energy source. Likewise, efficient use of fertilization can yield significant energy savings. To date, site-specific use of fertilization for the purpose of maximizing forest productivity has not been well developed. Site evaluation of nutrient deficiencies is based primarily on empirical approaches to soil testing and plot fertilizer tests, with little consideration for soil water regimes and contributing site factors. This project uses mass flow and diffusion theory in a modeling context, combined with process-level knowledge of soil chemistry, to evaluate nutrient bioavailability to fast-growing juvenile forest stands growing on coastal plain Spodosols of the southeastern U.S. The model is not soil- or site-specific and should be useful for a wide range of soil management/nutrient management conditions. In order to use the model, field data on fast-growing southern pine needed to be measured and used in the validation of the model. The field aspect of the study was mainly to provide data that could be used to verify the model. However, we learned much about the growth and development of fast-growing loblolly pine. Carbon allocation patterns, root:shoot relationships, and leaf area:root relationships proved to be new, important information. The project objectives were to: (1) develop a mechanistic nutrient management model based on the COMP8 uptake model; (2) collect field data that could be used to verify and test the model; (3) test the model.
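The uptake side of a mechanistic mass-flow/diffusion model of this kind conventionally rests on Michaelis-Menten influx kinetics at the root surface. A minimal sketch follows; the function name, parameter names, and values are illustrative assumptions, and this is not the COMP8 code itself:

```python
def uptake_rate(c_surface, imax, km, cmin=0.0):
    """Michaelis-Menten net influx at the root surface:
    I = Imax * (C - Cmin) / (Km + C - Cmin),
    where Cmin is the solution concentration below which net uptake
    stops. Units and parameter values here are purely illustrative."""
    c_eff = max(c_surface - cmin, 0.0)
    return imax * c_eff / (km + c_eff)

# Uptake is near-linear well below Km and saturates toward Imax above it:
imax, km = 2.0, 5.0
low = uptake_rate(2.0, imax, km)     # dilute soil solution: supply-limited
high = uptake_rate(500.0, imax, km)  # concentrated solution: approaches Imax
```

In a full model this kinetic term is coupled to mass-flow delivery (water flux times solution concentration) and radial diffusion toward the root, which is where the soil water regime enters.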

Atmospheric wet nitrogen (N) and phosphorus (P) depositions are important sources of bioavailable N and P, and the input of N and P and their ratios significantly influences nutrient availability and balance in terrestrial as well as aquatic ecosystems. Here we monitored atmospheric P deposition by measuring monthly dissolved P concentrations in rainfall at 41 field stations in China. Average deposition fluxes of N and P were 13.69 ± 8.69 kg N ha-1 a-1 (our previous study) and 0.21 ± 0.17 kg P ha-1 a-1, respectively. Central and southern China had higher N and P deposition rates than northwest China, northeast China, Inner Mongolia, or Qinghai-Tibet. Atmospheric N and P depositions showed strong seasonal patterns and were dependent upon seasonal precipitation. Fertilizer and energy consumption were significantly correlated with N deposition but less correlated with P deposition. The N:P ratios of atmospheric wet deposition (averaging 77 ± 40, by mass) were negatively correlated with current soil N:P ratios in different ecological regions, suggesting that imbalanced atmospheric N and P deposition will alter nutrient availability and strengthen P limitation, which may further influence the structure and function of terrestrial ecosystems. The findings provide assessments of both wet N and P deposition and their N:P ratio across China and indicate the potential for strong impacts of atmospheric deposition on a broad range of terrestrial ecosystems.
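Note that the reported mean N:P ratio (77 ± 40) is an average of per-site ratios, which need not match the ratio of the mean fluxes (13.69/0.21 ≈ 65). A quick sketch with hypothetical per-site fluxes illustrates the distinction:

```python
mean_n, mean_p = 13.69, 0.21        # kg ha-1 a-1, from the record above
print(round(mean_n / mean_p, 1))    # 65.2: ratio of the mean fluxes

# Hypothetical per-site (N, P) fluxes, for illustration only:
sites = [(20.0, 0.15), (10.0, 0.30), (12.0, 0.10)]
mean_of_ratios = sum(n / p for n, p in sites) / len(sites)
ratio_of_means = sum(n for n, _ in sites) / sum(p for _, p in sites)
print(round(mean_of_ratios, 1), round(ratio_of_means, 1))  # 95.6 76.4
```

Because the two statistics diverge whenever ratios vary across sites, comparisons with soil N:P should use whichever statistic matches how the soil ratios were aggregated.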

Nutrients in the environment are coupled over broad timescales (days to seasons) when organisms add or withdraw multiple nutrients simultaneously and in ratios that are roughly constant. But at finer timescales (seconds to days), nutrients become decoupled if physiological traits such as nutrient storage limits, circadian rhythms, or enzyme kinetics cause one nutrient to be processed faster than another. To explore the interactions among these coupling and decoupling mechanisms, we introduce a model in which organisms process resources via uptake, excretion, growth, respiration, and mortality according to adjustable trait parameters. The model predicts that uptake can couple the input of one nutrient to the export of another in a ratio reflecting biological demand stoichiometry, but coupling occurs only when the input nutrient is limiting. Temporal nutrient coupling may, therefore, be a useful indicator of ecosystem limitation status. Fine-scale patterns of nutrient coupling are further modulated by, and potentially diagnostic of, physiological traits governing growth, uptake, and internal nutrient storage. Together, limitation status and physiological traits create a complex and informative relationship between nutrient inputs and exports. Understanding the mechanisms behind that relationship could enrich interpretations of fine-scale time-series data such as those now emerging from in situ solute sensors.
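The central prediction (uptake couples the input of one nutrient to the export of the other, but only when the input nutrient is limiting) can be caricatured in a few lines. This is an illustrative toy, not the published model, and the fixed N:P demand ratio of 16 is an assumption; the real model adds storage limits, kinetics, respiration, and mortality:

```python
def process(n_in, p_in, demand_np=16.0):
    """Toy instantaneous coupling: growth is set by the limiting nutrient
    at a fixed N:P demand ratio (by mass); the surplus of the other
    nutrient is exported. Returns (exported N, exported P)."""
    growth = min(n_in / demand_np, p_in)       # growth in P-equivalents
    n_used, p_used = growth * demand_np, growth
    return n_in - n_used, p_in - p_used

# When N limits (input N:P < 16), P export tracks the N input:
print(process(8.0, 1.0))    # -> (0.0, 0.5): all N consumed, surplus P exported
# When P limits, the roles reverse and surplus N is exported:
print(process(32.0, 1.0))   # -> (16.0, 0.0)
```

Even in this stripped-down form, the export ratio carries a signature of both demand stoichiometry and limitation status, which is the diagnostic idea the abstract develops.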

Full Text Available Background. Mother’s own milk is the first choice for feeding preterm infants, but when not available, pasteurized human donor milk (PDM) is often used. Infants fed PDM have difficulties maintaining appropriate growth velocities. To assess the most basic elements of nutrition, we tested the hypotheses that fatty acid and amino acid composition of PDM is highly variable and standard pooling practices attenuate variability; however, total nutrients may be limiting without supplementation due to the late lactational stage of the milk. Methods. A prospective cross-sectional sampling of milk was obtained from five donor milk banks located in Ohio, Michigan, Colorado, Texas-Ft Worth, and California. Milk samples were collected after Institutional Review Board (#07-0035) approval and informed consent. Fatty acid and amino acid contents were measured in milk from individual donors and donor pools (pooled per Human Milk Banking Association of North America guidelines). Statistical comparisons were performed using Kruskal–Wallis, Spearman’s, or multivariate regression analyses with center as the fixed factor and lactational stage as co-variate. Results. Ten of the fourteen fatty acids and seventeen of the nineteen amino acids analyzed differed across banks in the individual milk samples. Pooling minimized these differences in amino acid and fatty acid contents. Concentrations of lysine and docosahexaenoic acid (DHA) were not different across banks, but concentrations were low compared to recommended levels. Conclusions. Individual donor milk fatty acid and amino acid contents are highly variable. Standardized pooling practice reduces this variability. Lysine and DHA concentrations were consistently low across geographic regions in North America due to the lactational stage of the milk, and thus not adequately addressed by pooling. Targeted supplementation is needed to optimize PDM, especially for the preterm or volume-restricted infant.

Microorganisms play a dominant role in the biogeochemical cycling of nutrients. They are rightly praised for their facility for fixing both carbon and nitrogen into organic matter, and microbially driven processes have tangibly altered the chemical composition of the biosphere and its surrounding atmosphere. Despite their prodigious capacity for molecular transformations, microorganisms are powerless in the face of the immutability of the elements. Limitations for specific elements, either fleeting or persisting over eons, have left an indelible trace on microbial genomes, physiology, and their very atomic composition. We here review the impact of elemental limitation on microbes, with a focus on selected genetic model systems and representative microbes from the ocean ecosystem. Evolutionary adaptations that enhance growth in the face of persistent or recurrent elemental limitations are evident from genome and proteome analyses. These range from the extreme (such as dispensing with a requirement for a hard-to-obtain element) to the extremely subtle (changes in protein amino acid sequences that slightly, but significantly, reduce cellular carbon, nitrogen, or sulfur demand). One near-universal adaptation is the development of sophisticated acclimation programs by which cells adjust their chemical composition in response to a changing environment. When specific elements become limiting, acclimation typically begins with an increased commitment to acquisition and a concomitant mobilization of stored resources. If elemental limitation persists, the cell implements austerity measures, including elemental sparing and elemental recycling. Insights into these fundamental cellular properties have emerged from studies at many different levels, including ecology, biological oceanography, biogeochemistry, molecular genetics, genomics, and microbial physiology. Here, we present a synthesis of these diverse studies and attempt to discern some overarching themes.

Triacylglycerols of oleaginous algae are promising for production of food oils and biodiesel fuel. Air-drying of cells induces triacylglycerol accumulation in the freshwater green alga Chlorella kessleri; it therefore seems that dehydration (i.e., intracellular hyperosmosis) and/or nutrient limitation are key stressors. We explored this possibility by liquid-culturing C. kessleri cells. Strong hyperosmosis with 0.9 M sorbitol or 0.45 M NaCl for two days caused cells to increase the triacylglycerol content in total lipids from 1.5 to 48.5 and 75.3 mol%, respectively, on a fatty acid basis, whereas nutrient limitation caused its accumulation to 41.4 mol%. Even weak hyperosmosis with 0.3 M sorbitol or 0.15 M NaCl, when nutrient limitation was simultaneously imposed, induced triacylglycerol accumulation to 61.9 and 65.7 mol%, respectively. Furthermore, culturing in three-fold diluted seawater, the chemical composition of which resembled that of the medium for the combinatory stress, enabled the cells to accumulate triacylglycerol up to 24.7 weight% of dry cells in only three days. Consequently, it was found that hyperosmosis is a novel stressor for triacylglycerol accumulation, and that weak hyperosmosis, together with nutrient limitation, exerts a strong stimulating effect on triacylglycerol accumulation. A similar combinatory stress would contribute to the triacylglycerol accumulation in air-dried C. kessleri cells.

Full Text Available The Um Alhool area in Qatar is a dynamic evaporative ecosystem that receives seawater from below as it is surrounded by sand dunes. We investigated the chemical composition, the microbial activity and biodiversity of the four main layers (L1-L4) in the photosynthetic mats. Chlorophyll a (Chl a) concentration and distribution (measured by HPLC and hyperspectral imaging, respectively), the phycocyanin distribution (scanned with hyperspectral imaging), oxygenic photosynthesis (determined by microsensor), and the abundance of photosynthetic microorganisms (from 16S and 18S rRNA sequencing) decreased with depth in the euphotic layer (L1). Incident irradiance attenuated exponentially in the same zone, reaching 1% at 1.7-mm depth. Proteobacteria dominated all layers of the mat (24%-42% of the identified bacteria). Anoxygenic photosynthetic bacteria (dominated by Chloroflexus) were most abundant in the third, red layer of the mat (L3), evidenced by the spectral signature of bacteriochlorophyll as well as by sequencing. The deep, black layer (L4) was dominated by sulfate-reducing bacteria belonging to the Deltaproteobacteria, which were responsible for high sulfate reduction rates (measured using 35S tracer). Members of Halobacteria were the dominant Archaea in all layers of the mat (92%-97%), whereas Nematodes were the main Eukaryotes (up to 87%). Primary productivity rates of the Um Alhool mat were similar to those of other hypersaline microbial mats. However, sulfate reduction rates were relatively low, indicating that oxygenic respiration contributes more to organic material degradation than sulfate reduction, because of bioturbation. Although the Um Alhool hypersaline mat is a nutrient-limited ecosystem, it is interestingly dynamic and phylogenetically highly diverse. All its components work in a highly efficient and synchronized way to compensate for the lack of nutrient supply provided during regular inundation periods.

Whether it is necessary to reduce nitrogen (N) and/or phosphorus (P) inputs to mitigate lake eutrophication is controversial. The controversy stems mainly from differences in time and space in previous studies that support the contrasting ideas. To test the response of phytoplankton to various combinations of nutrient control strategies in mesocosms, and the possibility of reflecting conditions in natural ecosystems with short-term experiments, a 9-month experiment was carried out in eight 800-L tanks with four nutrient level combinations (+N+P, -N+P, +N-P, and -N-P), with an 18-month whole-ecosystem experiment in eight ~800-m2 ponds as the reference. Phytoplankton abundance was determined by P, not N, regardless of the initial TN/TP level, in contrast to the nutrient limitation predicted by N/P theory. Net natural N inputs were calculated to be 4.9, 6.8, 1.5, and 3.0 g in treatments +N+P, -N+P, +N-P, and -N-P, respectively, suggesting that N deficiency and P addition may promote natural N inputs to support phytoplankton development. However, the compensation process was slow, as suggested by an observed increase in TN after 3 weeks in -N+P and 2 months in -N-P in the tank experiment, and after 3 months in -N+P and ~3 months in -N-P in our pond experiment. Obviously, such a slow process cannot be simulated in short-term experiments. The natural N inputs cannot be explained by planktonic N-fixation because N-fixing cyanobacteria were scarce, probably because there was a limited pool of species in the tanks. Therefore, based on our results, we argue that extrapolating short-term, small-scale experiments to large natural ecosystems does not give reliable, accurate results.

Leaf-level studies of Metrosideros polymorpha Gaud. (Myrtaceae) canopy trees at both ends of a substrate age gradient in the Hawaiian Islands pointed to differential patterns of adjustment to both nutrient limitation and removal of this limitation by long-term (8-14 years) nitrogen (N), phosphorus (P) and N + P fertilizations. The two study sites were located at the same elevation, had similar annual precipitation, and supported forests dominated by M. polymorpha, but differed in the age of the underlying volcanic substrate, and in soil nutrient availability, with relatively low N at the young site (300 years, Thurston, Hawaii) and relatively low P at the oldest site (4,100,000 years, Kokee, Kauai). Within each site, responses to N and P fertilization were similar, regardless of the difference in soil N and P availability between sites. At the young substrate site, nutrient addition led to a larger mean leaf size (about 7.4 versus 4.8 cm2), resulting in a larger canopy leaf surface area. Differences in foliar N and P content, chlorophyll concentrations and carboxylation capacity between the fertilized and control plots were small. At the old substrate site, nutrient addition led to an increase in photosynthetic rate per unit leaf surface area from 4.5 to 7.6 micromol m(-2) s(-1), without a concomitant change in leaf size. At this site, leaves had substantially greater nutrient concentrations, chlorophyll content and carboxylation capacity in the fertilized plots than in the control plots. These contrasting acclimation responses to fertilization at the young and old sites led to significant increases in total carbon gain of M. polymorpha canopy trees at both sites. At the young substrate site, acclimation to fertilization was morphological, resulting in larger leaves, whereas at the old substrate site, physiological acclimation resulted in higher leaf carboxylation capacity and chlorophyll content.

Full Text Available Nitrogen (N) is considered the dominant limiting nutrient in temperate regions, while phosphorus (P) limitation frequently occurs in tropical regions, but in subtropical regions nutrient limitation is poorly understood. In this study, we investigated N and P contents and N:P ratios of foliage, forest floors, fine roots and mineral soils, and their relationships with community biomass, litterfall C, N and P production, forest floor turnover rate, and microbial processes in eight mature and old-growth subtropical forests (stand age >80 yr) at Dinghushan Biosphere Reserve, China. Average N:P ratios (mass-based) in foliage, litter (L) layer and mixture of fermentation and humus (F/H) layer, and fine roots were 28.3, 42.3, 32.0 and 32.7, respectively. These values are higher than the critical N:P ratios proposed for P limitation (16-20 for foliage, ca. 25 for forest floors). The markedly high N:P ratios were mainly attributable to the high N concentrations of these plant materials. Community biomass, litterfall C, N and P production, forest floor turnover rate and microbial properties were more strongly related to measures of P than of N and were frequently negatively related to the N:P ratios, suggesting a significant role of P availability in determining ecosystem production and productivity and nutrient cycling at all the study sites except for one prescribed disturbed site where N availability may also be important. We propose that N enrichment is probably a significant driver of the potential P limitation in the study area. Low-P parent material may also contribute to the potential P limitation. In general, our results provide strong evidence supporting a significant role for P availability, rather than N availability, in determining ecosystem primary productivity and ecosystem processes in subtropical forests of China.

Light (20-450 μmol photons m(-2) s(-1)), temperature (3-11 °C) and inorganic nutrient composition (nutrient replete and N, P and Si limitation) were manipulated to study their combined influence on growth, stoichiometry (C:N:P:Chl a) and primary production of the cold-water diatom Chaetoceros wighamii. During exponential growth, the maximum growth rate (~0.8 d(-1)) was observed at high temperature and light; at 3 °C the growth rate was ~30% lower under similar light conditions. The interaction effects of light and temperature were clearly visible in growth and cellular stoichiometry. The average C:N:P molar ratio was 80:13:1 during exponential growth, but the range, due to different light acclimation, was widest at the lowest temperature, reaching very low C:P (~50) and N:P (~8) ratios at low light and temperature. The C:Chl a ratio also had a wider range at the lowest temperature during exponential growth, ranging 16-48 (weight ratio) at 3 °C compared with 17-33 at 11 °C. During exponential growth, there was no clear trend in the Chl a-normalized initial slope (α*) of the photosynthesis-irradiance (PE) curve, but the maximum photosynthetic production (P(m)) was highest for cultures acclimated to the highest light and temperature. During the stationary growth phase, the stoichiometric relationship depended on the limiting nutrient, with a generally increasing C:N:P ratio. The average photosynthetic quotient (PQ) during exponential growth was 1.26 but decreased to <1 under nutrient and light limitation, probably due to photorespiration. The results clearly demonstrate that there are interaction effects between light, temperature and nutrient limitation, and the data suggest greater variability of key parameters at low temperature. Understanding these dynamics will be important for improving models of aquatic primary production and biogeochemical cycles in a warming climate.
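The PE-curve parameters mentioned here (the initial slope α* and the maximum production P(m)) are conventionally related through the Jassby-Platt tanh formulation; a minimal sketch with illustrative parameter values (not fitted to this study's data):

```python
import math

def pe_curve(E, alpha, pmax):
    """Jassby-Platt photosynthesis-irradiance model:
    P(E) = Pmax * tanh(alpha * E / Pmax),
    where alpha is the initial slope (here, Chl a-normalized alpha*)
    and Pmax the maximum production P(m). Values are illustrative."""
    return pmax * math.tanh(alpha * E / pmax)

# The curve is linear (slope ~alpha) at low light and saturates at Pmax:
alpha, pmax = 0.05, 5.0
print(round(pe_curve(10, alpha, pmax), 2))    # ~0.5 (linear regime)
print(round(pe_curve(1000, alpha, pmax), 2))  # ~5.0 (light-saturated)
```

Fitting this two-parameter curve to measured PE data is how α* and P(m) reported above are typically estimated for each light/temperature acclimation state.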

From studies in seasonal lowland tropical forests, bromeliad epiphytes appear to be limited mainly by water, and to a lesser extent by nutrient supply, especially phosphorus. Less is understood about the mineral nutrition of tropical montane cloud forest (TMCF) epiphytes, even though their highest diversity is in this habitat. Nutrient limitation is known to be a key factor restricting forest productivity in TMCF, and if epiphytes are nutritionally linked to their host trees, as has been suggested, we would expect that they are also nutrient limited. We studied the effect of a higher nutrient input on reproduction and growth of the tank bromeliad Werauhia sintenisii in experimental plots located in a TMCF in Puerto Rico, where all macro- and micronutrients had been added quarterly starting in 1989 and continuing throughout the duration of this study. We found that bromeliads growing in fertilized plots were receiving litterfall with higher concentrations of N, P, and Zn and had higher concentrations of P, Zn, Fe, Al, and Na in their vegetative body. The N:P ratios found (fertilized = 27.5 and non-fertilized = 33.8) suggest that W. sintenisii may also be phosphorus limited, as are lowland epiphytes. Fertilized plants had slightly longer inflorescences, and more flowers per inflorescence, than non-fertilized plants, but their flowers produced nectar in similar concentrations and quantities. Fertilized plants produced more seeds per fruit and per plant. Frequency of flowering in two consecutive years was higher for fertilized plants than for controls, suggesting that fertilized plants overcome the cost of reproduction more readily than non-fertilized plants. These results provide evidence that TMCF epiphytic bromeliads are nutrient limited like their lowland counterparts.

The APSIM model was used to assess the impact of legumes on sorghum grown in rotation in a nutrient-limited system under dry conditions in south-western Zimbabwe. An experiment was conducted at Lucydale, Matopos Research Station, between 2002 and 2005. The model was used to simulate soil and plant

This report describes the results of the Spanish participation in the project Coupling CORINAIR data to cost-effect emission reduction strategies based on critical thresholds (EU/LIFE97/ENV/FIN/336). The subproject focused on three tasks: develop tools to improve knowledge of the spatial and temporal details of emissions of air pollutants in Spain; exploit existing experimental information on plant response to air pollutants in temperate ecosystems; and integrate these findings in a modelling framework that can assess more accurately the impact of air pollutants on temperate ecosystems. The results obtained during the execution of this project have significantly improved the models of the impact of alternative emission control strategies on ecosystems and crops in the Iberian Peninsula. (Author) 375 refs.

This paper advances a model describing how peer assessment supports self-assessment. Although prior research demonstrates that peer assessment promotes self-assessment, the connection between these two activities is underspecified. This model, the assessment cycle, draws from theories of self-assessment to elaborate how learning takes place…

Full Text Available A bibliographic review was carried out on the professional competence assessment of human resources in the health system and the main characteristics of different models that contribute to its improvement, establishing direct links with the present context of the National Health System in Cuba. We include trends and common practices related to assessment models, highlighting those aspects associated with professional competence assessment and its inclusion in the dynamic of a strategy to increase the quality of human resources in health services. It has been shown that the appropriate assessment of competences among these professionals ensures, through its results, sound decisions on the knowledge, skills and attitudes that should be present in their daily professional practice.

A systematic approach was adopted to investigate the nutrient-limiting factors in gray-brown purple soils and yellow soils derived from limestone in Chongqing, China, to study balanced fertilization for corn, sweet potato and wheat in rotation. The results showed that N, P and K were deficient in both soils; Cu, Mn, S and Zn were deficient in the gray-brown purple soils; and Ca, Mg, Mo and Zn in the yellow soils. Balanced fertilizer application increased yields of corn, sweet potato and wheat by 28.4%, 28.7% and 4.4%, respectively, as compared to the local farmers' practice. The systematic approach can be considered one of the most efficient and reliable methods in fertility studies.

This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented.

Climate and hydrology are strong drivers of ecosystem structure and function in arid landscapes. Arid regions are characterized by high interannual variation in precipitation, and these climate patterns drive the overall hydrologic disturbance regime (in terms of flooding and drying), which influences geomorphic structure, biotic distributions, and nutrient status of desert stream ecosystems. We analyzed the long-term pattern of discharge in a desert stream in Arizona to identify hydrologic regime shifts, i.e., abrupt transitions between sequences of floods and droughts at periods of months to decades. We used wavelet analysis to identify time intervals over a 50-year time series that were negatively correlated with one another, reflecting a shift from wet to dry phases. We also looked with finer resolution at the most recent 10-year period, when wetlands have come to dominate the ecosystem owing to a management change, and at individual flood and drought events within years. In space, there is high site fidelity of wetland plant cover, corresponding to reliable water sources. Comparing five-year patterns of plant distribution and stream metabolism between wet and dry years suggested the primacy of geomorphic controls in drought periods. Nutrient limitation of algal production varied from moderate to very strong N limitation, with only one year when there was a (weak) suggestion of secondary P limitation. Over the longer period of record, we identified times characterized by hydrological regime shifts and asked whether ecosystem variables would have changed over that time period. We hypothesized, in particular, that the changes in nutrient status of the stream ecosystem would result from these regime shifts. We used our most complete long-term dataset on stream nitrogen (N) and phosphorus (P) concentrations and N:P ratios as a proxy for nutrient limitation. However, N:P varied primarily at fine scales in response to individual flood events.
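Wavelet analysis is the method used in this study; as a much simpler stand-in, the same kind of wet-to-dry regime shift can be located with a cumulative-sum change-point sketch (pure Python, hypothetical discharge values):

```python
def cusum_shift(series):
    """Locate the most likely single mean shift in a series via the
    cumulative sum of deviations from the overall mean; the extremum
    of the CUSUM marks the change point. A deliberately simple
    stand-in for the wavelet approach used in the study above."""
    mean = sum(series) / len(series)
    s, cusum = 0.0, []
    for x in series:
        s += x - mean
        cusum.append(s)
    # Index at which |CUSUM| peaks: the regime shifts after this point.
    return max(range(len(cusum)), key=lambda i: abs(cusum[i]))

# Hypothetical discharge: a wet phase followed by a dry phase.
wet = [3.0, 2.8, 3.2, 3.1]
dry = [1.0, 1.2, 0.9, 1.1]
print(cusum_shift(wet + dry))  # -> 3 (shift after the fourth value)
```

Unlike wavelets, this only finds a single mean shift and carries no frequency information, but it illustrates the basic idea of segmenting a discharge record into contrasting phases.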

Documentation of a study to assess the capability of computer codes to predict lateral loads on earth penetrating projectiles under conditions of non-normal impact. Calculations simulated a set of small scale penetration tests into concrete targets with oblique faces at angles of 15 and 30 degrees to the line-of-flight. Predictive codes used by the various calculational teams cover a wide range of modeling approaches, from approximate techniques, such as cavity expansion, to numerical methods, such as finite element codes. The modeling assessment was performed under the auspices of the Phenomenology Integrated Product Team (PIPT) for the Robust Nuclear Earth Penetrator Program (RNEP). Funding for the penetration experiments and modeling was provided by multiple earth penetrator programs.

Preliminary results are presented for a comprehensive inter-disciplinary study on Lake Diefenbaker initiated by the Global Institute for Water Security to understand the physical and biogeochemical processes affecting water quality under climate change and their policy implications. Lake Diefenbaker is a large reservoir (surface area ~500 km2 and Zmean ~33 m) located in Southern Saskatchewan, Canada and is a critically-important water resource for Saskatchewan. It receives nearly all of its flow from the South Saskatchewan River, which flows through some of the most urbanized and intense agricultural lands of southern Alberta. As a result these waters contain high levels of nutrients [nitrogen (N) and phosphorus (P)] along with a variety of chemical contaminants characteristic of anthropogenic activity. In addition, riparian and in-lake activities provide local sources of nutrients, from domestic sewage, agriculture and fish farming. The South Saskatchewan River has been identified by the World Wildlife Fund (2009) as Canada's most threatened river in terms of environmental flow. Lake Diefenbaker has numerous large deep embayments (depth >20 m) and an annual water level fluctuation of ~6 m. A deep thermocline (~25 m) forms infrequently. Stratification does not occur throughout the lake. Anecdotal information suggests that the frequency and severity of algal blooms are increasing, although blooms have been sporadic and localized. This localized eutrophication may be related to local stratification patterns, point source nutrient loading, and/or internal lake processes (i.e., internal nutrient loading). A paleolimnological reconstruction has begun to assess historical nutrient and contaminant loading to Lake Diefenbaker and hence the trajectory of water quality in the lake. Major point sources and diffuse sources of N and P are also under investigation. In addition, the type (N versus P) and degree of nutrient limitation of bacteria and algae are being assessed (spatially

This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

We present Fe and P concentrations from distal hydrothermal sediments and iron formations through time in order to evaluate the evolution of the marine P reservoir. P concentrations appear to have been elevated in Precambrian oceans.

SKB has carried out several safety analyses for repositories for radioactive waste, one of which was SR 97, a multi-site study concerned with a future deep bedrock repository for high-level waste. In case of future releases due to unforeseen failure of the protective multiple-barrier system, radionuclides may be transported with groundwater and may reach the biosphere. Assessments of doses have to be carried out with a long-term perspective, and specific models are therefore employed to estimate the consequences to man. It has been determined that the main pathway for nuclides from groundwater or surface water to soil is via irrigation. Irrigation may contaminate crops directly, e.g. by interception or rain-splash, and indirectly via root uptake from contaminated soil. In many safety assessments the exposed people are assumed to be self-sufficient, i.e. their food is produced locally, where the concentration of radionuclides may be highest. Irrigation therefore plays an important role when estimating consequences. The present study is accordingly concerned with a more extensive analysis of the role of irrigation for possible future doses to people living in the area surrounding a repository. Current irrigation practices in Sweden are summarised, showing that vegetables and potatoes are the most commonly irrigated crops; in general, however, irrigation is not widespread in Sweden. The irrigation model used in the latest assessments is described. A sensitivity analysis is performed showing that, as expected, interception of irrigation water and retention on vegetation surfaces are important parameters, and the parameters used to describe them are discussed. A summary is also given of how irrigation is proposed to be handled in the international BIOMASS (BIOsphere Modelling and ASSessment) project and in models like TAME and BIOTRAC, and similarities and differences are pointed out. Some numerical results are presented showing that surface contamination in general gives the
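The irrigation pathway described above can be sketched as a simple two-term model: direct contamination via interception on foliage (reduced by weathering over the growing period) plus indirect contamination via root uptake from soil. The function below is a generic illustration with invented parameter names and values, not the actual SKB assessment model.

```python
import math

# Hedged sketch of a generic irrigation exposure pathway (not SKB's model):
# crop contamination arises directly via interception on plant surfaces and
# indirectly via root uptake from soil. All parameters are illustrative.

def crop_concentration(c_water, irrigation, f_int, weathering, t_growth,
                       soil_accum, transfer_factor, yield_crop):
    """Activity concentration in crop (Bq/kg) from irrigation water.

    c_water         -- radionuclide concentration in irrigation water, Bq/m3
    irrigation      -- water applied during the growing season, m3/m2
    f_int           -- fraction of applied activity intercepted by foliage
    weathering      -- first-order removal rate from plant surfaces, 1/d
    t_growth        -- growing period, d
    soil_accum      -- activity accumulated in rooting-zone soil, Bq/kg soil
    transfer_factor -- soil-to-plant concentration ratio
    yield_crop      -- crop yield, kg/m2
    """
    deposited = c_water * irrigation * f_int              # Bq/m2 on foliage
    retained = deposited * math.exp(-weathering * t_growth)
    direct = retained / yield_crop                        # interception term
    indirect = soil_accum * transfer_factor               # root-uptake term
    return direct + indirect

# Illustrative numbers only:
c = crop_concentration(c_water=1000.0, irrigation=0.15, f_int=0.3,
                       weathering=0.05, t_growth=30.0,
                       soil_accum=2.0, transfer_factor=0.1, yield_crop=2.0)
```

With the interception fraction set to zero only the root-uptake term remains, which is one way a sensitivity analysis like the one described above can isolate the contribution of each pathway.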

This Preliminary Assessment draft report will present the results of a literature search and preliminary assessment of the body of research, analysis methods, models and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report will provide: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program. Specifically this draft report is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessment, proliferation resistance assessments, safeguards assessments and other related studies. These assessments often use and create technical information about the State’s posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate if and how social science modeling has utility in proliferation assessment.

Assessment of pain in animal models of osteoarthritis is integral to interpretation of a model's utility in representing the clinical condition, and enabling accurate translational medicine. Here we describe behavioral pain assessments available for small and large experimental osteoarthritic pain animal models.

Context: In order to study the efficacy of assessment methods, a theoretical framework of Earl's model of assessment was introduced. Objective: (1) Introduce the predictive learning assessment model (PLAM) as an application of Earl's model of learning; (2) test Earl's model of learning through the use of the Standardized Orthopedic Assessment Tool…

The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model, and each of the expansion functions is explicitly calculated using the original model. After calculating the expansion functions, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
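The hierarchical expansion idea can be illustrated with a first-order sketch: tabulate univariate component functions by evaluating the original model along each input axis from a reference point, then assemble them into a cheap surrogate. This is a generic cut-HDMR-style illustration of the concept, not MRC's actual algorithm; for an additive model the first-order expansion reproduces the original exactly.

```python
# First-order expansion sketch: surrogate g(x) = f0 + sum_i f_i(x_i), with
# each component function tabulated by evaluating the original model along
# one input axis while holding the others at a reference (cut-center) point.

def build_feom(model, ref, grids):
    """Return a surrogate built from first-order component functions.

    ref   -- reference point, list of floats
    grids -- per-dimension lists of sample values for tabulation
    """
    f0 = model(ref)
    tables = []
    for i, grid in enumerate(grids):
        tab = {}
        for v in grid:
            x = list(ref)
            x[i] = v
            tab[v] = model(x) - f0      # component function f_i(v)
        tables.append(tab)

    def surrogate(x):
        # nearest-grid-point lookup keeps the sketch simple
        total = f0
        for i, xi in enumerate(x):
            v = min(tables[i], key=lambda g: abs(g - xi))
            total += tables[i][v]
        return total
    return surrogate

# For an additive test model the first-order expansion is exact:
def model(x):
    return 2.0 * x[0] + x[1] ** 2

g = build_feom(model, ref=[0.0, 0.0], grids=[[-1.0, 0.0, 1.0, 2.0]] * 2)
```

Once built, the surrogate replaces the (possibly expensive) original model in downstream sensitivity runs, which is the "fully equivalent operational model" idea in miniature.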

The United Nations Educational, Scientific, and Cultural Organization promoted the creation of a model instrument for individual assessment of students' foundational writing skills in the Spanish language that was based on a literature review and existing writing tools and assessments. The purpose of the "Early Grade Writing Assessment"…

This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

This article begins by discussing requirements for functional behavioral assessment under the Individuals with Disabilities Education Act and then describes a comprehensive model for the application of behavior analysis in the schools. The model includes descriptive assessment, functional analysis, and intervention and involves the participation…

This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…

Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decayed Missing Filled Teeth (DMFT) increment = difference between the Decayed Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] allowed examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram was the model that identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
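The two quantities the study compares, the three-year DMFT increment as the outcome and the rate at which the three models agree on a patient's risk category, can be sketched as follows (the patient data here are invented for illustration):

```python
# Illustrative sketch (not the study's code) of the outcome measure and the
# inter-model agreement rate reported in the abstract. Data are made up.

def dmft_increment(baseline, follow_up):
    """New caries development as the difference in DMFT counts."""
    return follow_up - baseline

def agreement_rate(risk_calls):
    """Fraction of patients for whom all models return the same category.

    risk_calls -- list of per-patient tuples, e.g. ("low", "medium", "low")
    """
    same = sum(1 for calls in risk_calls if len(set(calls)) == 1)
    return same / len(risk_calls)

# Hypothetical per-patient categories from three risk models:
patients = [
    ("low", "low", "low"),
    ("medium", "high", "high"),
    ("medium", "medium", "medium"),
    ("high", "medium", "low"),
]
rate = agreement_rate(patients)      # 2 of 4 patients get a unanimous call
inc = dmft_increment(baseline=3, follow_up=5)
```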

“Ocean Model Assessment With Lagrangian Metrics.” Pearn P. Niiler, Scripps Institution of Oceanography, 9500 Gilman Drive MC 0213, La Jolla, CA. The goals of this project are to aid in the development of accurate modeling of upper ocean circulation by using data on circulation observations to test models. These tests, or metrics, will be statistical measures of model and data comparisons. It is believed that having accurate models of upper ocean currents will

Full Text Available Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for only a limited set of biophysical models. A range of skill assessment methods has been reviewed, but skill assessment of full marine ecosystem models has not yet been attempted. We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation) and a suite of time series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible not only to assess the skill of a complicated marine ecosystem model, but that it is necessary to do so to instill confidence in model results and encourage their use for strategic management. Our methods
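The four skill metrics named in the abstract can be written in a few lines of pure Python for paired observed/modelled series; modeling efficiency is taken here in its usual Nash-Sutcliffe form, and the Spearman shortcut formula assumes no ties. This is an illustration, not the NEUS Atlantis evaluation code.

```python
import math

# The four skill metrics from the abstract, for paired obs/model series.

def average_absolute_error(obs, mod):
    return sum(abs(o - m) for o, m in zip(obs, mod)) / len(obs)

def rmse(obs, mod):
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, mod)) / len(obs))

def modeling_efficiency(obs, mod):
    # Nash-Sutcliffe form: 1 = perfect, 0 = no better than the mean, <0 worse
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - m) ** 2 for o, m in zip(obs, mod))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def spearman_rank(obs, mod):
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    ro, rm = ranks(obs), ranks(mod)
    n = len(obs)
    d2 = sum((a - b) ** 2 for a, b in zip(ro, rm))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))   # no-ties shortcut formula

obs = [1.0, 2.0, 3.0, 4.0, 5.0]                  # toy observed series
mod = [1.1, 1.9, 3.2, 3.8, 5.1]                  # toy modelled series
```

Comparing such metrics between a forecast and a hindcast, as the study does, shows whether skill degrades when the model runs beyond the data it was tuned on.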

This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that specific local data are important for improving the quality of dose assessment results, in practice such data can be very difficult and expensive to obtain. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available use different mathematical approaches of differing complexity, which can result in different predictions; thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for {sup 137}Cs and {sup 60}Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

Quantifying vulnerability to critical infrastructure has not been adequately addressed in the literature. Thus, the purpose of this article is to present a model that quantifies vulnerability. Vulnerability is defined as a measure of system susceptibility to threat scenarios. This article asserts that vulnerability is a condition of the system and it can be quantified using the Infrastructure Vulnerability Assessment Model (I-VAM). The model is presented and then applied to a medium-sized clean water system. The model requires subject matter experts (SMEs) to establish value functions and weights, and to assess protection measures of the system. Simulation is used to account for uncertainty in measurement, aggregate expert assessment, and to yield a vulnerability (Omega) density function. Results demonstrate that I-VAM is useful to decision makers who prefer quantification to qualitative treatment of vulnerability. I-VAM can be used to quantify vulnerability to other infrastructures, supervisory control and data acquisition systems (SCADA), and distributed control systems (DCS).
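The I-VAM aggregation step can be sketched as a Monte Carlo loop: expert scores for protection measures are perturbed to represent measurement uncertainty, combined with importance weights, and the resulting samples of the vulnerability measure Omega form an empirical density. The scores, weights, and noise model below are invented for illustration, not the article's elicited values.

```python
import random

# Hedged Monte Carlo sketch of the I-VAM idea: weighted expert value
# functions plus scoring uncertainty yield a vulnerability (Omega) density.

def omega_samples(scores, weights, sd=0.05, n=5000, seed=1):
    """Sample the weighted vulnerability measure under scoring uncertainty.

    scores  -- expert protection scores in [0, 1] (1 = well protected)
    weights -- importance weights summing to 1
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        # perturb each score, clipping back into [0, 1]
        s = [min(1.0, max(0.0, x + rng.gauss(0.0, sd))) for x in scores]
        protection = sum(w * x for w, x in zip(weights, s))
        out.append(1.0 - protection)     # vulnerability = 1 - protection
    return out

samples = omega_samples(scores=[0.8, 0.6, 0.9], weights=[0.5, 0.3, 0.2])
mean_omega = sum(samples) / len(samples)
```

The spread of `samples`, not just its mean, is the useful output: it tells a decision maker how confident the assessment is, which is the point of simulating uncertainty rather than reporting a single score.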

The purpose of this article is to define a method for the assessment of change. A reinterpretation of the extended logistic model is proposed. The extended logistic model for the assessment of change (ELMAC) allows the definition of a time parameter which is supposed to identify whether change occurs during a period of time, given a specific event or phenomenon. The assessment of a trend of change through time, on the basis of the time parameter which is estimated at different successive occasions during a period of time, is also considered. In addition, a dispersion parameter is calculated which identifies whether change is consistent at each time point. The issue of independence is taken into account both in relation to the time parameter and the dispersion parameter. An application of the ELMAC in a learning process is presented. The interpretation of the model parameters and the model fit statistics is consistent with expectations.

Telemedicine applications could potentially solve many of the challenges faced by the healthcare sectors in Europe. However, a framework for the assessment of these technologies is needed by decision makers to assist them in choosing the most efficient and cost-effective technologies. Therefore, in 2009 the European Commission initiated the development of a framework for assessing telemedicine applications, based on the users' need for information for decision making. This article presents the Model for ASsessment of Telemedicine applications (MAST) developed in this study.

Deep vein thrombosis and common complications, including pulmonary embolism and post-thrombotic syndrome, represent a major source of morbidity and mortality worldwide. Experimental models of venous thrombosis have provided considerable insight into the cellular and molecular mechanisms that regulate thrombus formation and subsequent resolution. Here, we critically appraise the ex vivo and in vivo techniques used to assess venous thrombosis in these models. Particular attention is paid to imaging modalities, including magnetic resonance imaging, micro-computed tomography, and high-frequency ultrasound that facilitate longitudinal assessment of thrombus size and composition.

This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest using the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model, and to compare competing models. Since all models are trained on the same data, a key issue is to take this dependency into account. The optimal split of the data set of size N into a cross-validation set of size Nγ and a training set of size N(1-γ) is discussed. Asymptotically (large data sets), γopt→1.
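A minimal sketch of the resampled cross-validation scheme: repeatedly split the N data points into a validation set of size Nγ and a training set of size N(1-γ), and collect the validation errors as an empirical distribution of generalization performance. The "model" here is a trivial mean predictor, purely for illustration.

```python
import random

# Resampled cross-validation: the distribution of validation errors over
# many random splits estimates the generalization-performance distribution.

def cv_error_distribution(data, gamma=0.2, repeats=200, seed=0):
    """Return validation MSEs over repeated random gamma-splits of `data`."""
    rng = random.Random(seed)
    n_val = max(1, int(round(gamma * len(data))))
    errors = []
    for _ in range(repeats):
        d = data[:]
        rng.shuffle(d)
        val, train = d[:n_val], d[n_val:]
        pred = sum(train) / len(train)       # "fit": the sample mean
        mse = sum((y - pred) ** 2 for y in val) / len(val)
        errors.append(mse)
    return errors

data = [0.1, 0.4, 0.35, 0.8, 0.5, 0.7, 0.2, 0.9, 0.6, 0.3]
errs = cv_error_distribution(data)
```

Because every repeat reuses the same data, the collected errors are dependent, which is exactly the dependency the paper argues must be taken into account when comparing competing models.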

Traditional risk analysis and assessment is based on failure-oriented models of the system. In contrast, model-based risk assessment (MBRA) utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The target models are then used as input sources for complementary risk analysis and assessment techniques, as well as a basis for the documentation of the assessment results. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tested with successful outcomes through a series of seven trials within the telemedicine and e-commerce areas. The CORAS project in general, and the CORAS application of MBRA in particular, have contributed positively to the visibility of model-based risk assessment and thus to the disclosure of several potentials for further exploitation of various aspects within this important research field. In that connection, the CORAS methodology's possibilities for further improvement towards utilization in more complex architectures, and also in other application domains such as the nuclear field, can be addressed. The latter calls for adapting the framework to address nuclear standards such as IEC 60880 and IEC 61513. For this development we recommend applying a trial-driven approach within the nuclear field. The tool-supported approach for combining risk analysis and system development also fits well with the HRP proposal for developing an Integrated Design Environment (IDE) providing efficient methods and tools to support control room systems design. (Author)

This body of work is dedicated to the modeling and assessment of initiatives within electricity markets using the underlying hourly market dynamics. The dissertation presents two separate frameworks that take a bottom-up approach for assessing benefits associated with various demand-side initiatives and other emerging interventions in power markets. Models in support of each framework are presented, and numerical results are used to highlight some impacts based on hourly dynamics. The first framework uses stochastic optimization models to explore the economic feasibility of grid-scale energy storage from the perspective of a price taking, profit maximizing firm facing uncertain market dynamics. This model is then extended to incorporate intermittent wind generation, demonstrating how storage can be used as a potential substitute for transmission capacity. The second framework uses a new dynamic market equilibrium simulation model to address broader economic and environmental impacts of various demand-side initiatives including: energy efficiency, distributed generation, and plug-in hybrid electric vehicles. The general model is calibrated for the California electricity market. The model is used to estimate impacts of the various interventions, taking into account varying market adoption levels and natural gas prices.
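The first framework's storage-valuation idea can be illustrated deterministically with a toy dynamic program: a price-taking storage unit chooses hourly charge/discharge to maximize profit subject to energy and power limits. The prices, capacity, and round-trip efficiency below are invented, and the actual framework is stochastic; this only shows the hourly-dynamics core of the problem.

```python
# Toy price-taking storage arbitrage: dynamic programming backward over
# hours, with an integer-MWh state of charge. Illustrative parameters only.

def storage_profit(prices, capacity=4, power=1, eff=0.9):
    """Max profit over the horizon, starting empty; eff applied on discharge."""
    n_states = capacity + 1
    NEG = float("-inf")
    value = [0.0] * n_states               # value-to-go after the last hour
    for price in reversed(prices):
        new = [NEG] * n_states
        for soc in range(n_states):
            best = NEG
            for d in range(-power, power + 1):   # d > 0: discharge d MWh
                nxt = soc - d
                if 0 <= nxt < n_states:
                    cash = price * d * (eff if d > 0 else 1.0)
                    best = max(best, cash + value[nxt])
            new[soc] = best
        value = new
    return value[0]                        # start with an empty store

# Buy low (hours 1 and 3), sell high (hours 2 and 4):
profit = storage_profit([10.0, 50.0, 20.0, 60.0])
```

Replacing the fixed price vector with sampled price paths and averaging the resulting profits is the natural step toward the stochastic-optimization setting the dissertation describes.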

Coccolithophores are unicellular calcifying marine algae that play an important role in the oceanic carbon cycle via their cellular processes of photosynthesis (a CO2 sink) and calcification (a CO2 source). In contrast to the well-studied, surface-water coccolithophore blooms visible from satellites, the lower photic zone is a poorly known but potentially important ecological niche for coccolithophores in terms of primary production and carbon export to the deep ocean. In this study, the physiological responses of an Emiliania huxleyi strain to conditions simulating the deep niche in the oligotrophic gyres along the BIOSOPE transect in the South Pacific Gyre were investigated. We carried out batch culture experiments with an E. huxleyi strain isolated from the BIOSOPE transect, reproducing the in situ conditions of light and nutrient (nitrate and phosphate) limitation. By simulating coccolithophore growth using an internal stores (Droop) model, we were able to constrain fundamental physiological parameters for this E. huxleyi strain. We show that simple batch experiments, in conjunction with physiological modelling, can provide reliable estimates of fundamental physiological parameters for E. huxleyi that are usually obtained experimentally in more time-consuming and costly chemostat experiments. The combination of culture experiments, physiological modelling and in situ data from the BIOSOPE cruise show that E. huxleyi growth in the deep BIOSOPE niche is limited by availability of light and nitrate. This study contributes more widely to the understanding of E. huxleyi physiology and behaviour in a low-light and oligotrophic environment of the ocean.
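The internal-stores (Droop) model used in the study can be sketched with a forward-Euler integration of a batch culture: uptake follows Michaelis-Menten kinetics on the external nutrient, while growth depends on the internal quota Q through the Droop relation μ = μmax(1 − Qmin/Q). The parameter values below are illustrative, not the fitted E. huxleyi values from the study.

```python
# Minimal Droop (internal stores) batch-culture sketch, forward Euler.
# X: biomass, Q: internal nutrient quota per biomass, S: external nutrient.
# All parameter values are illustrative assumptions.

def droop_step(X, Q, S, dt, mu_max=1.0, Q_min=0.05, rho_max=0.5, K_s=0.2):
    """One Euler step for biomass X, cell quota Q, external nutrient S."""
    rho = rho_max * S / (K_s + S)                 # Michaelis-Menten uptake
    mu = max(0.0, mu_max * (1.0 - Q_min / Q))     # Droop growth rate
    dX = mu * X
    dQ = rho - mu * Q                             # growth dilutes the quota
    dS = -rho * X
    return X + dt * dX, Q + dt * dQ, max(0.0, S + dt * dS)

X, Q, S = 0.01, 0.1, 1.0
for _ in range(2000):                             # 20 time units of culture
    X, Q, S = droop_step(X, Q, S, dt=0.01)
```

As in the batch experiments described above, growth continues after the external nutrient is exhausted, fuelled by the internal store, until the quota approaches Qmin; fitting μmax, Qmin, and the uptake parameters to such curves is what constrains the physiology.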

A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
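Two of the techniques named above are easy to sketch: a multiplicative chain model with independent lognormal inputs has an analytically lognormal output (the log-parameters simply add), and Latin hypercube sampling stratifies each input dimension so that n samples cover all n equal-probability strata. The numbers below are illustrative.

```python
import math
import random

# (1) Product of independent lognormals: log-means add, log-variances add.
def chain_output_logparams(mus, sigmas):
    """Log-mean and log-sd of Y = X1*...*Xn with Xi ~ Lognormal(mu_i, s_i)."""
    return sum(mus), math.sqrt(sum(s * s for s in sigmas))

# (2) Latin hypercube sampling on the unit hypercube: one point per stratum
# in every dimension, with strata shuffled independently per dimension.
def latin_hypercube(n_samples, n_dims, seed=0):
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        pts = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(pts)
        cols.append(pts)
    return list(zip(*cols))               # rows are sample points

mu, sd = chain_output_logparams([0.0, 1.0, -0.5], [0.3, 0.4, 0.5])
design = latin_hypercube(100, 3)
```

The analytic result is what makes multiplicative chains attractive for uncertainty propagation, while the LHS design is what keeps the number of model runs in a sensitivity analysis small.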

Variation in microbial metabolism poses one of the greatest current uncertainties in models of global carbon cycling, and is particularly poorly understood in soils. Biological Stoichiometry theory describes biochemical mechanisms linking metabolic rates with variation in the elemental composition of cells and organisms, and has been widely observed in animals, plants, and plankton. However, this theory has not been widely tested in microbes, which are considered to have fixed ratios of major elements in soils. To determine whether Biological Stoichiometry underlies patterns of soil microbial metabolism, we compiled published data on microbial biomass carbon (C), nitrogen (N), and phosphorus (P) pools in soils spanning the global range of climate, vegetation, and land use types. We compared element ratios in microbial biomass pools to the metabolic quotient qCO2 (respiration per unit biomass), where soil C mineralization was simultaneously measured in controlled incubations. Although microbial C, N, and P stoichiometry appeared to follow somewhat constrained allometric relationships at the global scale, we found significant variation in the C∶N∶P ratios of soil microbes across land use and habitat types, and size-dependent scaling of microbial C∶N and C∶P (but not N∶P) ratios. Microbial stoichiometry and metabolic quotients were also weakly correlated as suggested by Biological Stoichiometry theory. Importantly, we found that while soil microbial biomass appeared constrained by soil N availability, microbial metabolic rates (qCO2) were most strongly associated with inorganic P availability. Our findings appear consistent with the model of cellular metabolism described by Biological Stoichiometry theory, where biomass is limited by N needed to build proteins, but rates of protein synthesis are limited by the high P demands of ribosomes. Incorporation of these physiological processes may improve models of carbon cycling and understanding of the effects of

Full Text Available Abstract Background Pseudomonas putida KT2442 is a natural producer of polyhydroxyalkanoates (PHAs), which can substitute for petroleum-based non-renewable plastics and form the basis for the production of tailor-made biopolymers. However, despite the substantial body of work on PHA production by P. putida strains, it is not yet clear how the bacterium re-arranges its whole metabolism when it senses the limitation of nitrogen and the excess of fatty acids as carbon source, to result in a large accumulation of PHAs within the cell. In the present study we investigated the metabolic response of KT2442 using a systems biology approach to highlight the differences between single- and multiple-nutrient-limited growth in chemostat cultures. Results We found that 26, 62, and 81% of the cell dry weight consisted of PHA under conditions of carbon, dual, and nitrogen limitation, respectively. Under nitrogen limitation a specific PHA production rate of 0.43 g·(g·h)-1 was obtained. The residual biomass was not constant for dual- and strict nitrogen-limiting growth, a different feature in comparison to other P. putida strains. Dual limitation resulted in patterns of gene expression, protein level, and metabolite concentrations that substantially differ from those observed under exclusive carbon or nitrogen limitation. The most pronounced differences were found in the energy metabolism, fatty acid metabolism, as well as stress proteins and enzymes belonging to the transport system. Conclusion This is the first study in which the interrelationship between nutrient limitations and PHA synthesis has been investigated under well-controlled conditions using a system-level approach. The knowledge generated will be of great assistance for the development of bioprocesses and further metabolic engineering work in this versatile organism to both enhance and diversify the industrial production of PHAs.

Soil erosion is a phenomenon with relevance for many research topics in the geosciences. Consequently, PhD students with many different backgrounds are exposed to soil erosion related questions during their research. These students require a compact but detailed introduction to erosion processes and the risks associated with erosion, as well as tools to assess and study erosion-related questions, ranging from simple risk assessment to the effects of climate change on erosion-related geochemistry at various scales. The PhD course on Soil Erosion Risk Assessment and Modelling offered by the University of Aarhus and conducted jointly with the University of Basel is aimed at graduate students with degrees in the geosciences and a PhD research topic with a link to soil erosion. The course offers a unique introduction to erosion processes, conventional risk assessment and field-truthing of results. This is achieved by combining lectures, mapping, erosion experiments, and GIS-based erosion modelling. A particular mark of the course design is the direct link between the results of each part of the course activities. This ensures the achievement of a holistic understanding of erosion in the environment as a key learning outcome.

Radioactive waste exists at the US Department of Energy's (DOE's) Hanford Site in a variety of locations, including subsurface grout and tank farms, solid waste burial grounds, and contaminated soil sites. Some of these waste sites may need to be isolated from percolating water to minimize the potential for transport of the waste to the ground water, which eventually discharges to the Columbia River. Multilayer protective barriers have been proposed as a means of limiting the flow of water through the waste sites (DOE 1987). A multiyear research program (managed jointly by Pacific Northwest Laboratory (PNL) and Westinghouse Hanford Company for the DOE) is aimed at assessing the performance of these barriers. One aspect of this program involves the use of computer models to predict barrier performance. Three modeling studies have already been conducted and a test plan was produced. The simulation work reported here was conducted by PNL and extends the previous modeling work. The purposes of this report are to understand phenomena that have been observed in the field and to provide information that can be used to improve hydrologic modeling of the protective barrier. An improved modeling capability results in better estimates of barrier performance. Better estimates can be used to improve the design of barriers and the assessment of their long-term performance.

Full Text Available Mobile technology is considered the fastest-developing area of IT security. In the last year alone, security threats around mobile devices have reached new heights in terms of both quality and quantity. The speed of this development has made possible several types of security attacks that, until recently, were only possible on computers. Among the most targeted mobile operating systems, Android continues to be the most vulnerable, although Google has introduced new ways of strengthening its security model. The aim of this article is to provide a model for assessing the risk of mobile infection with malware, starting from a statistical analysis of the permissions required by each application installed on the mobile system. The software implementation of this model uses the Android operating system, and to that end we start by analyzing its permission-based security architecture. Furthermore, based on statistical data regarding the most dangerous permissions, we build the risk assessment model and, to prove its efficiency, we scan some of the most popular apps and interpret the results. Finally, we offer an overview of the strengths and weaknesses of this permission-based model and state a short conclusion regarding the model's efficiency.
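
The scoring idea described in this record can be sketched as a weighted sum over an app's requested permissions. A minimal, hypothetical sketch in Python follows; the permission weights and the cap at 1.0 are invented for illustration, not the statistics derived in the article:

```python
# Hypothetical permission-based risk score. The weights below are invented
# for illustration; the article derives its statistics from real app data.
PERMISSION_WEIGHTS = {
    "android.permission.SEND_SMS": 0.9,              # premium-SMS abuse
    "android.permission.READ_CONTACTS": 0.6,         # data exfiltration
    "android.permission.ACCESS_FINE_LOCATION": 0.5,  # tracking
    "android.permission.INTERNET": 0.2,              # very common, low weight
}

def risk_score(requested_permissions):
    """Sum the weights of the requested permissions, capped at 1.0."""
    total = sum(PERMISSION_WEIGHTS.get(p, 0.0) for p in requested_permissions)
    return min(total, 1.0)

# Permissions as they would be read from an app's manifest.
app_manifest = ["android.permission.INTERNET",
                "android.permission.READ_CONTACTS"]
score = risk_score(app_manifest)   # 0.2 + 0.6 = 0.8
```

In practice the weights would come from the statistical analysis of dangerous permissions that the article describes, rather than being hand-picked.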

Field Assessment Techniques for Bank Erosion Modeling: First Interim Report, prepared for the US Army European Research Office (USARDSG, Edison House). The report includes sedimentation analysis sheets and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station; the sheets record bank material types and toe and mid-bank protection status (e.g., cobbles/boulders).

The prime goal of model validation is to build confidence in the model concept and to show that the model is fit for its intended purpose. In other words: Does the model predict transport in fractured rock adequately to be used in repository performance assessments? Are the results reasonable for the type of modelling tasks the model is designed for? Commonly, in performance assessments a large number of realisations of flow and transport are made to cover the associated uncertainties. Thus, the flow and transport, including radioactive chain decay, are preferably calculated in the same model framework. A rather sophisticated concept is necessary to model flow and radionuclide transport in the near field and far field of a deep repository, including radioactive chain decay. In order to avoid excessively long computational times there is a need for well-founded simplifications. For this reason, the far field code FARF31 is made relatively simple, and calculates transport by using averaged entities to represent the most important processes. FARF31 has been shown to be suitable for the performance assessments within the SKB studies, e.g. SR 97. Among its advantages are that it is a fast, simple and robust code, which enables handling of many realisations with wide spreads in parameters in combination with chain decay of radionuclides. Being a component in the model chain PROPER, it is easy to assign statistical distributions to the input parameters. Due to the formulation of the advection-dispersion equation in FARF31 it is possible to perform the groundwater flow calculations separately. The basis for the modelling is a stream tube, i.e. a volume of rock including fractures with flowing water, with the walls of the imaginary stream tube defined by streamlines. The transport within the stream tube is described using a dual porosity continuum approach, where it is assumed that the rock can be divided into two distinct domains with different types of porosity.
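
A commonly used dual-porosity stream-tube formulation of this kind (shown here in a generic textbook form, not necessarily FARF31's exact equations) couples one-dimensional advection-dispersion along the stream tube with diffusion into the stagnant rock matrix:

```latex
% Flowing fracture water: advection, dispersion, decay, exchange with the matrix
R_f \frac{\partial c_f}{\partial t}
  + v \frac{\partial c_f}{\partial x}
  - D_L \frac{\partial^2 c_f}{\partial x^2}
  + \lambda R_f c_f
  = -\frac{\theta_m D_e}{b}
    \left. \frac{\partial c_m}{\partial z} \right|_{z=0}

% Rock matrix: diffusion perpendicular to the fracture, with decay
R_m \frac{\partial c_m}{\partial t}
  = D_e \frac{\partial^2 c_m}{\partial z^2}
  - \lambda R_m c_m
```

Here c_f and c_m are the concentrations in the flowing water and the matrix pore water, v the water velocity, D_L the longitudinal dispersion coefficient, D_e the effective matrix diffusivity, b the fracture half-aperture, theta_m the matrix porosity, R_f and R_m retardation factors, and lambda the decay constant; chain decay adds an ingrowth term from the parent nuclide to each equation.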

The United Nations Educational, Scientific, and Cultural Organization promoted the creation of a model instrument for individual assessment of students' foundational writing skills in the Spanish language that was based on a literature review and existing writing tools and assessments. The purpose of the Early Grade Writing Assessment (EGWA) is to document learners' basic writing skills, mapped in composing units of increasing complexity to communicate meaning. Validation and standardization of EGWA was conducted in the Canary Islands (Spain) in 12 schools using a cross-sectional design with a sample of 1,653 Spanish-speaking students in Grades 1 through 3. The author describes EGWA's internal structure, along with the prevalence of learning disabilities (LD) in transcription and developmental differences in writing between Spanish-speaking children with LD and typical peers. Findings suggest that EGWA's psychometric characteristics are satisfactory, and its internal structure can be attributed to four factors responsible for a high percentage of the variance. The odds ratio indicated that 2 Spanish-speaking children with LD in transcription are identified out of every 100. A comparison between students with and without LD in transcription revealed statistically significant differences concerning sentence and text production across grades. Results are interpreted within current theoretical accounts of writing models.

This report assesses the MARMOT grain growth model by comparing modeling predictions with experimental results from thermal annealing. The purpose here is threefold: (1) to demonstrate the validation approach of using thermal annealing experiments with non-destructive characterization, (2) to test the reconstruction capability and computation efficiency in MOOSE, and (3) to validate the grain growth model and the associated parameters that are implemented in MARMOT for UO2. To assure a rigorous comparison, the 2D and 3D initial experimental microstructures of UO2 samples were characterized non-destructively using synchrotron X-rays. The same samples were then annealed at 2273 K for grain growth, and their initial microstructures were used as initial conditions for simulated annealing at the same temperature using MARMOT. After annealing, the final experimental microstructures were characterized again to compare with the results from simulations. So far, comparison between modeling and experiments has been done for 2D microstructures, and 3D comparison is underway. The preliminary results demonstrated the usefulness of the non-destructive characterization method for MARMOT grain growth model validation. A detailed analysis of the 3D microstructures is in progress to fully validate the current model in MARMOT.

Technology-rich integrated assessment models (IAMs) address possible technology mixes and future costs of climate change mitigation by generating scenarios for the future industrial system. Industrial ecology (IE) focuses on the empirical analysis of this system. We conduct an in-depth review of five major IAMs from an IE perspective and reveal differences between the two fields regarding the modelling of linkages in the industrial system, focussing on AIM/CGE, GCAM, IMAGE, MESSAGE, and REMIND. IAMs ignore material cycles and recycling, incoherently describe the life-cycle impacts of technology, and miss linkages regarding buildings and infrastructure. Adding IE system linkages to IAMs adds new constraints and allows for studying new mitigation options, both of which may lead to more robust and policy-relevant mitigation scenarios.

Full Text Available Adel Abdelaziz,1,2 Emad Koshak3 1Medical Education Development Unit, Faculty of Medicine, Al Baha University, Al Baha, Saudi Arabia; 2Medical Education Department, Faculty of Medicine, Suez Canal University, Egypt; 3Dean and Internal Medicine Department, Faculty of Medicine, Al Baha University, Al Baha, Saudi Arabia Abstract: Structuring clinical teaching is a challenge facing medical education curriculum designers. A variety of instructional methods on different domains of learning are indicated to accommodate different learning styles. Conventional methods of clinical teaching, like training in ambulatory care settings, are prone to the factor of coincidence in having varieties of patient presentations. Accordingly, alternative methods of instruction are indicated to compensate for the deficiencies of these conventional methods. This paper presents an initiative that can be used to design a checklist as a blueprint to guide appropriate selection and implementation of teaching/learning and assessment methods in each of the educational courses and modules based on educational objectives. Three categories of instructional methods were identified, and within each a variety of methods were included. These categories are classroom-type settings, health services-based settings, and community service-based settings. Such categories have framed our triangular model of clinical teaching and assessment. Keywords: curriculum development, teaching, learning, assessment, apprenticeship, community-based settings, health service-based settings

Quantitative microbial risk assessment implies an estimation of the probability and impact of adverse health outcomes due to microbial hazards. In the case of food safety, the probability of human illness is a complex function of the variability of many parameters that influence the microbial environment, from the production to the consumption of a food. The analytical integration required to estimate the probability of foodborne illness is intractable in all but the simplest of models. Monte Carlo simulation is an alternative to computing analytical solutions. In some cases, a risk assessment may be commissioned to serve a larger purpose than simply the estimation of risk. A Monte Carlo simulation can provide insights into complex processes that are invaluable, and otherwise unavailable, to those charged with the task of risk management. Using examples from a farm-to-fork model of the fate of Escherichia coli O157:H7 in ground beef hamburgers, this paper describes specifically how such goals as research prioritization, risk-based characterization of control points, and risk-based comparison of intervention strategies can be objectively achieved using Monte Carlo simulation.
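
As a concrete illustration of the approach, a Monte Carlo estimate of the probability of illness can be obtained by repeatedly sampling a serving's dose and applying a dose-response model. A minimal sketch in Python, with an invented lognormal dose distribution and an illustrative exponential dose-response parameter (not the values of the E. coli O157:H7 farm-to-fork model):

```python
import math
import random

random.seed(42)

R_DOSE_RESPONSE = 5.0e-4   # illustrative exponential dose-response parameter r
N_SERVINGS = 100_000       # Monte Carlo iterations, one per simulated serving

def simulate_serving():
    # Dose (organisms per serving) drawn from an illustrative lognormal
    # distribution standing in for the whole variable farm-to-fork chain
    # (production, processing, cooking, consumption).
    log10_dose = random.gauss(mu=1.0, sigma=1.5)
    dose = 10.0 ** log10_dose
    # Exponential dose-response model: P(ill | dose) = 1 - exp(-r * dose)
    return 1.0 - math.exp(-R_DOSE_RESPONSE * dose)

# Average conditional illness probability over the sampled servings.
p_ill = sum(simulate_serving() for _ in range(N_SERVINGS)) / N_SERVINGS
```

Because each iteration records every sampled intermediate quantity, the same simulation can be re-used for the risk-management goals the record lists, for example ranking which input variables drive the variance of p_ill.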

We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of train

The purposes of this study were to 1) study teachers' authentic assessment practices, teachers' comprehension of authentic assessment, and teachers' needs for authentic assessment development; 2) create a teacher development model; 3) pilot the teacher development model; and 4) evaluate the effectiveness of the teacher development model. The research is divided into 4…

College assessment models for our future middle school teachers must be varied, ongoing, engaging, equitable and empowering. Traditional assessments do not often model the critical components of what makes assessment appropriate for middle level students. To provide the appropriate model for future middle level teachers, the establishment of…

Full Text Available This study presents a general framework model for E-Government Readiness Assessment. There are six key factors necessary to implement any e-government initiative worldwide. These factors represent the basic components to be assessed before launching the "e-initiative" to guarantee the right implementation in the right direction. The organizational building blocks to be assessed are: Organizational Readiness, Governance and Leadership Readiness, Customer Readiness, Competency Readiness, Technology Readiness and Legal Readiness [1]. Under Organizational Readiness, the bureaucratic nature of e-governments, business processes, long process delays and the need for re-engineering are discussed. Under Governance and Leadership Readiness, the importance of leadership and governance for the e-initiative, the importance of procedures and service level agreements, the way public officials perform, and commitment and accountability for public jobs are shown. Under Customer Readiness, the main public concerns regarding accessibility, trust and security are highlighted. Under Competency Readiness, the lack of qualified personnel in the public sector and the alternatives for overcoming this issue are discussed. Under Technology Readiness, many issues are worth considering, such as hardware, software, communication, current technology, legacy systems, sharing applications and data, and setting up a secure infrastructure for exchanging services. The last factor is Legal Readiness, where the adoption of the Jordanian Temporary Law No. 85 of 2001, the "Electronic Transaction Law" (ETL), paved the road towards the big shift to the e-initiative and privacy. Some of these factors are discussed in detail; the need for this detail arises from the fact that all government activities are governed by law, which is why it is important to start from this key factor.

Full Text Available Natural hazards have caused severe consequences to natural, modified and human systems in the past. These consequences seem to increase with time, due both to the higher intensity of the natural phenomena and to the higher value of elements at risk. Among water-related hazards, floods have the most destructive impacts. The paper presents a new systemic paradigm for the assessment of flood hazard and flood risk in riverine flood-prone areas. Special emphasis is given to urban areas with mild terrain and complicated topography, for which 2-D fully dynamic flood modelling is proposed. Further, the EU Floods Directive is critically reviewed and examples of its implementation are presented. Some critical points in the implementation of the directive are also highlighted.

AIM: To set up a mathematic model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out in 66 patients and 198 normal people; the risk and protective factors for gastric cancer were then determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills and crabs, mothers suffering from gastric diseases, spouse alive, and use of refrigerators and hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematic model was established for each subset, and we obtained a mathematic expression of attribute degree (AD). RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, statistical testing showed that the identification outcomes of the two different calculation methods were identical (P > 0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible and economic, and can be used to determine individual and population risks of gastric cancer.
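
The screening step reduces to computing a weighted attribute degree from a subject's risk and protective factors and comparing it against a threshold. A minimal sketch in Python; the factor weights here are invented placeholders, whereas the study derives its coefficients from the case-control data:

```python
# Invented illustrative weights; the study computes its coefficients from
# case-control data. Negative weights mark protective factors.
WEIGHTS = {
    "heavy_manual_work": 0.30,
    "mother_gastric_disease": 0.25,
    "salted_seafood_diet": 0.25,
    "uses_refrigerator": -0.20,   # protective factor
}
THRESHOLD = 0.20  # attribute-degree cut-off, as reported in the study

def attribute_degree(subject):
    """Weighted sum over the factors present in the subject, clipped to [0, 1]."""
    ad = sum(w for factor, w in WEIGHTS.items() if subject.get(factor))
    return max(0.0, min(1.0, ad))

subject = {"heavy_manual_work": True, "uses_refrigerator": True}
ad = attribute_degree(subject)    # 0.30 - 0.20 = 0.10
flagged = ad >= THRESHOLD         # below the screening cut-off
```

With real coefficients, sweeping the threshold trades sensitivity against specificity, which is how the study arrived at the 0.20 and 0.17 cut-offs.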

Teachers devote a substantial amount of their time to assessment-related activities. This study aimed to describe beginning teachers' assessment literacy and to examine a structural model that binds assessment literacy with assessment training, self-efficacy, and conceptions of assessment. Data were collected from 327 Israeli inductee teachers and…

Medical Education 2012: 46: 1087-1098 OBJECTIVES We previously developed a model of the pre-assessment learning effects of consequential assessment and started to validate it. The model comprises assessment factors, mechanism factors and learning effects. The purpose of this study was to continue th

This paper presents a critical review of literature investigating assessment of mathematical modelling. Written tests, projects, hands-on tests, portfolio and contests are modes of modelling assessment identified in this study. The written tests found in the reviewed papers draw on an atomistic view on modelling competencies, whereas projects are…

The objectives of this research were to: 1) survey information relating to secondary school health; 2) construct a model of health assessment and a handbook for using the model in secondary schools; and 3) develop an assessment model for secondary schools. The research included 3 phases. (1) involved a survey of…

We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show conclusively that results from pointwise inference are misleading, and that accounting for multiple comparisons is important for making proper inference.
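
The pointwise-versus-simultaneous distinction can be illustrated with simulated test statistics: under the null, per-cell tests at alpha = 0.05 flag many grid cells by chance, while a simultaneous threshold flags almost none. A sketch using plain Bonferroni correction for simplicity (the paper's simultaneous procedures are more refined):

```python
import random
import statistics

random.seed(0)
M = 1000       # number of grid cells, i.e. simultaneous comparisons
alpha = 0.05

# Simulated per-cell z-statistics under the null (no true model difference).
z = [random.gauss(0.0, 1.0) for _ in range(M)]

nd = statistics.NormalDist()
crit_pointwise = nd.inv_cdf(1 - alpha / 2)          # ~1.96
crit_bonferroni = nd.inv_cdf(1 - alpha / (2 * M))   # far larger threshold

# With no real signal, every flagged cell is a false positive.
false_hits_pointwise = sum(abs(v) > crit_pointwise for v in z)
false_hits_bonferroni = sum(abs(v) > crit_bonferroni for v in z)
```

Pointwise testing flags roughly alpha * M cells even when the climate model effects are identical everywhere, which is exactly the misleading behavior the record describes.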

Discusses the benefits of portfolio assessment to counselor education programs. Provides a definition of portfolio and outlines a portfolio approach in counselor education. Asserts that portfolio assessment relies on performance in measuring learning outcomes. Gives an example of portfolio content, along with strategies for evaluating student…

…but with the head and neck replaced with a high-fidelity cervical spine and head model. The occupant models were used to determine the effects of… The high-fidelity cervical spine and head model includes the vertebrae, with the disks, ligaments and musculature (Figure 6). In total there are 57,837 elements with 63,713 nodes. A full description of the model…

The ability to manage financial affairs is a life skill of critical importance, and neuropsychologists are increasingly asked to assess financial capacity across a variety of settings. Sound clinical assessment of financial capacity requires knowledge and appreciation of applicable clinical conceptual models and principles. However, the literature has presented relatively little conceptual guidance for clinicians concerning financial capacity and its assessment. This article seeks to address this gap. The article presents six clinical models of financial capacity: (1) the early gerontological IADL model of Lawton, (2) the clinical skills model and (3) related cognitive psychological model developed by Marson and colleagues, (4) a financial decision-making model adapting earlier decisional capacity work of Appelbaum and Grisso, (5) a person-centered model of financial decision-making developed by Lichtenberg and colleagues, and (6) a recent model of financial capacity in the real world developed through the Institute of Medicine. Accompanying presentation of the models is discussion of conceptual and practical perspectives they represent for clinician assessment. Based on the models, the article concludes by presenting a series of conceptually oriented guidelines for clinical assessment of financial capacity. In summary, sound assessment of financial capacity requires knowledge and appreciation of clinical conceptual models and principles. Awareness of such models, principles and guidelines will strengthen and advance clinical assessment of financial capacity.

U.S. Geological Survey, Department of the Interior — Well-established conservation planning principles and techniques framed by geodesign were used to assess the restorability of areas that historically supported...

Any traditional engineering field has metrics to rigorously assess the quality of its products. Engineers know that the output must satisfy the requirements, must comply with production and market rules, and must be competitive. Professionals in the new field of software engineering started a few years ago to define metrics to appraise their product: individual programs and software systems. This concern motivates the need to assess not only the outcome but also the process and tools em...

The article presents the Special Education Needs Assessment Priorities model which establishes training priorities for both regular and special educators. The model consists of four stages: identification of competencies, development of discrepancies, setting training priorities, and resource allocation. (SB)

Bayesian cluster inference with a flexible generative model allows us to detect various types of structures. However, it has problems stemming from computational complexity and difficulties in model assessment. We consider the stochastic block model with restricted hyperparameter space, which is known to correspond to modularity maximization. We show that it not only reduces computational complexity, but is also beneficial for model assessment. Using various criteria, we conduct a comparative analysis of the model assessments, and analyze whether each criterion tends to overfit or underfit. We also show that the learning of hyperparameters leads to qualitative differences in Bethe free energy and cross-validation errors.
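
The modularity objective that the restricted stochastic block model is said to correspond to can be computed directly from a graph and a candidate partition. A self-contained sketch on a toy graph (the graph and partition are invented for illustration):

```python
# Modularity Q = (1/2m) * sum_ij [A_ij - k_i * k_j / (2m)] * delta(c_i, c_j)
# computed for a toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (0, 2), (1, 2),   # triangle community "A"
         (3, 4), (3, 5), (4, 5),   # triangle community "B"
         (2, 3)]                   # bridge between the communities
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

def modularity(edges, partition):
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    # A_ij term: each undirected intra-community edge contributes 2/(2m) = 1/m.
    for u, v in edges:
        if partition[u] == partition[v]:
            q += 1.0 / m
    # Null-model term over all same-community node pairs (including u == v).
    for u in deg:
        for v in deg:
            if partition[u] == partition[v]:
                q -= deg[u] * deg[v] / (2 * m) ** 2
    return q

q = modularity(edges, partition)   # 6/7 - 1/2 for this graph and partition
```

Maximizing Q over partitions is the optimization the restricted SBM recovers; the record's point is that this restriction also simplifies hyperparameter learning and model assessment.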

When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help a sharper identification of the best-fitting geophysical models.
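
In such a test, the model covariance simply adds to the data covariance in the quadratic form. One plausible form of the chi-squared statistic described (notation assumed here, not taken verbatim from the paper):

```latex
% Residual between GPS-observed and model-predicted velocities
\mathbf{r} = \mathbf{v}_{\mathrm{GPS}} - \mathbf{v}_{\mathrm{model}}

% Chi-squared with both covariance contributions in the weighting
\chi^2 = \mathbf{r}^{\mathsf{T}}
         \left( \mathbf{C}_{\mathrm{model}} + \mathbf{C}_{\mathrm{GPS}} \right)^{-1}
         \mathbf{r}
```

Because the combined covariance is larger than the GPS covariance alone, the weighted residuals shrink, which is consistent with the lower, more statistically meaningful chi-squared values the record reports.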

Full Text Available A groundwater risk assessment was carried out for 30 potable water supply systems under a framework of protecting drinking water quality across South Australia. A semi-quantitative Groundwater Risk Assessment Model (GRAM) was developed based on a "multi-barrier" approach, using an equation combining likelihood of release, contaminant pathway and consequence. Groundwater vulnerability and well integrity have been incorporated into the pathway component of the risk equation. The land use of the study basins varies from protected water reserves to heavily stocked grazing lands. Based on the risk assessment, 15 systems were considered low risk, four medium risk and 11 high risk. The GRAM risk levels were comparable with detections of the indicator bacteria (total coliforms). Most high-risk systems were the result of poor well construction and casing corrosion rather than of land use. We carried out risk management actions, including changes to well designs and well operational practices, designs to increase residence time, and setting the production zone below identified low-permeability zones to provide additional barriers to contaminants. The highlight of the risk management element is the well integrity testing, using downhole geophysical methods and camera views of the casing condition.
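
The likelihood-pathway-consequence structure of such a semi-quantitative model can be sketched as a product of scores. The function and band cut-offs below are purely illustrative assumptions; GRAM's actual scales, weights and bands are defined in the paper:

```python
# Hypothetical scoring sketch; GRAM's actual scales and weights differ.
def gram_style_risk(likelihood, vulnerability, well_integrity, consequence):
    """Semi-quantitative risk = likelihood x pathway x consequence.

    All inputs are scores on [0, 1]. The pathway term combines aquifer
    vulnerability with well integrity: a corroded or poorly constructed
    casing (low integrity) opens the pathway, matching the paper's finding
    that well condition, not land use, drove most high-risk ratings.
    """
    pathway = vulnerability * (1.0 - well_integrity)
    return likelihood * pathway * consequence

def risk_band(score):
    # Illustrative cut-offs for the low/medium/high bands.
    if score < 0.05:
        return "low"
    if score < 0.15:
        return "medium"
    return "high"

# A vulnerable basin served by a corroded well with a sizeable population.
score = gram_style_risk(likelihood=0.8, vulnerability=0.7,
                        well_integrity=0.2, consequence=0.6)
band = risk_band(score)
```

Improving well integrity (e.g. recasing) raises well_integrity and shrinks the pathway term, which is how the risk-management actions listed in the record reduce the score.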

This paper is an examination of statewide district writing achievement gain data from the Nebraska Statewide Writing Assessment system and implications for statewide assessment writing models. The writing assessment program is used to gain compliance with the United States No Child Left Behind Law (NCLB), a federal effort to influence school…

Discusses Career-Development Assessment and Counseling model, which implements current development theory and uses innovative assessment measures and improved counseling methods to improve vocational and life career counseling. Focuses on assessment, treating interests and preferences as basic status data to be viewed in light of career maturity,…

A visionary academic medical center in Colombia, South America, engaged the Institute for Nursing Healthcare Leadership in a multifaceted project for the overall goal of strengthening the model of professional practice. The authors describe a reflective model for organizational assessment that steered the in-depth assessment of the organization. The model combines the constructs of culture, theory, patterns, and phenomenon with an iterative process. The organizational reflection process is applied to assessing key nursing roles and the nursing care delivery system. Recommendations and interventions that emerged from the assessment are included.

Repeated measurements designs occur frequently in the assessment of exposure to toxic chemicals. This thesis deals with the possibilities of using mixed effects models for occupational exposure assessment and in the analysis of exposure-response relationships. The model enables simultaneous estima

This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…
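
The analytical hierarchy process step mentioned here amounts to extracting criterion weights from a pairwise comparison matrix, conventionally via its principal eigenvector. A minimal Python illustration with invented criteria and judgments, using power iteration to approximate that eigenvector:

```python
# Invented 3x3 pairwise comparison matrix (Saaty-style judgments):
# entry [i][j] says how much more important criterion i is than criterion j.
matrix = [
    [1.0, 3.0, 5.0],   # e.g. teaching quality vs. facilities vs. e-services
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_weights(a, iterations=100):
    """Approximate the principal eigenvector of a by power iteration,
    normalized so the weights sum to 1."""
    n = len(a)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w_new = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    return w

weights = ahp_weights(matrix)   # roughly [0.65, 0.23, 0.12] for these judgments
```

In a full HEQAM-style application one would also check the matrix's consistency ratio before trusting the weights, then score each university service against the weighted criteria.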

As workplace air measurements of manufactured nanoparticles are relatively expensive to conduct, models can be helpful for a first tier assessment of exposure. A conceptual model was developed to give a framework for such models. The basis for the model is an analysis of the fate and underlying

Full Text Available In this paper, some shortcomings involved in the modelling of ozone fluxes in the context of local-scale risk assessment are discussed, especially as related to the data collected within the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). An enhanced monitoring strategy, that would provide a sounder basis for the development, validation and application of risk assessment modelling tools, is also suggested.

Temperature, fragrance concentration on the skin and power of ventilation have been determined as crucial parameters in fragrance diffusion from skin. A tool has been developed to simulate perfume diffusion from skin over time, allowing headspace analysis and fragrance profile assessments in a highly reproducible way.

The nominal group technique (NGT), a needs assessment methodology that can be effective in many situations related to marketing education, is used to identify and collect information in a small group setting. It is a special purpose group process appropriate for identifying elements of a problem situation, identifying elements of a solution…

averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame...

The aim of this study is to evaluate the relative importance of elastic non-linearities, viscoelasticity and resistance vessel modelling on arterial pressure and flow wave contours computed with distributed arterial network models. The computational results of a non-linear (time-domain) and a linear (frequency-domain) model were compared using the same geometrical configuration and identical upstream and downstream boundary conditions and mechanical properties. Pressures were computed at the ascending aorta, brachial artery and femoral artery. In spite of the identical problem definition, computational differences were found in input impedance modulus (max. 15-20%), systolic pressure (max. 5%) and pulse pressure (max. 10%). For the brachial artery, the ratio of pulse pressure to aortic pulse pressure was practically identical for both models (3%), whereas for the femoral artery higher values were found for the linear model (+10%). The aortic/brachial pressure transfer function indicates that pressure harmonic amplification is somewhat higher in the linear model for frequencies below 6 Hz, while the opposite is true for higher frequencies. These computational disparities were attributed to conceptual model differences, such as the treatment of geometric tapering, rather than to elastic or convective non-linearities. Compared to the effect of viscoelasticity, the discrepancy between the linear and non-linear models is of the same importance. At peripheral locations, the correct representation of terminal impedance outweighs the computational differences between the linear and non-linear models.

This paper describes the Assessment Practices Framework and how I used it to study a high school chemistry teacher as she designed, implemented, and learned from a chemistry lab report. The framework consists of exploring three teacher-centered components of classroom assessment (assessment beliefs, practices, and reflection) and analyzing the components with the assessment triangle model (Pellegrino et al., Knowing what students know: The science and design of educational assessment. National Academy Press, Washington DC, 2001). Employing the framework, I report the teacher's assessment practices, report the alignment in her assessment practices across the three vertices of the assessment triangle (cognition, observation, and interpretation), and suggest relations between her beliefs and practices. I conclude by discussing the contributions and limitations of the Assessment Practices Framework for conducting future research and supporting science teachers in assessing student learning.

Various animal models of hyperlipidemia are used in research. Four rodent hyperlipidemia experimental models are examined in this study: three chronic hyperlipidemia models based on dietary supplementation with lipid or sucrose for 3 months and one acute hyperlipidemia model based on administration of the nonionic surfactant poloxamer. Neither lipid supplementation nor sucrose supplementation in Wistar rats was effective for establishing hyperlipidemia. Combining both lipid and sucrose supplementation in BALB/c mice induced hypercholesterolemia, as reflected in a considerable increase in blood cholesterol concentration, but did not produce an increase in blood triglyceride concentration. Poloxamer administration in C57BL/J6 mice produced increases in blood cholesterol and triglyceride concentrations. The authors conclude that supplementation of both lipid and sucrose in BALB/c mice was the most effective method for developing chronic hypercholesterolemia.

Volcanic hazard assessment is a basic ingredient for risk-based decision-making in land-use planning and emergency management. Volcanic hazard is defined as the probability of any particular area being affected by a destructive volcanic event within a given period of time (Fournier d’Albe 1979). The probabilistic nature of such an important issue derives from the fact that volcanic activity is a complex process, characterized by several and usually unknown degrees o...

Leelinda P. Dawson, John W. Raby, and Jeffrey A. Smith. An example PSA log file, ps_auto_log, using DDA, one case-study date, 3 domains, and 3 model runs. A case-study date could be set for each run. This process was time-consuming when multiple configurations were required by the user. Also, each run

Environmental radiological assessments rely heavily on the use of mathematical models. The predictions of these models are inherently uncertain because these models are inexact representations of real systems. The major sources of this uncertainty are related to biases in model formulation and parameter estimation. The best approach for estimating the actual extent of over- or underprediction is model validation, a procedure that requires testing over the range of the intended realm of model application. Other approaches discussed are the use of screening procedures, sensitivity and stochastic analyses, and model comparison. The magnitude of uncertainty in model predictions is a function of the questions asked of the model and the specific radionuclides and exposure pathways of dominant importance. Estimates are made of the relative magnitude of uncertainty for situations requiring predictions of individual and collective risks for both chronic and acute releases of radionuclides. It is concluded that models developed as research tools should be distinguished from models developed for assessment applications. Furthermore, increased model complexity does not necessarily guarantee increased accuracy. To improve the realism of assessment modeling, stochastic procedures are recommended that translate uncertain parameter estimates into a distribution of predicted values. These procedures also permit the importance of model parameters to be ranked according to their relative contribution to the overall predicted uncertainty. Although confidence in model predictions can be improved through site-specific parameter estimation and increased model validation, risk factors and internal dosimetry models will probably remain important contributors to the amount of uncertainty that is irreducible.
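The stochastic procedure recommended here, translating uncertain parameter estimates into a distribution of predicted values, can be sketched with a minimal Monte Carlo simulation. The multiplicative dose model, parameter names, and lognormal medians below are illustrative assumptions, not the assessment models discussed:

```python
import math
import random

random.seed(42)

def predicted_dose(transfer_factor, intake_rate, dose_coefficient):
    # Toy multiplicative assessment model (illustrative, not a real pathway model).
    return transfer_factor * intake_rate * dose_coefficient

def sample_lognormal(median, gsd):
    # Lognormal sampling parameterized by median and geometric standard deviation.
    return median * math.exp(random.gauss(0.0, math.log(gsd)))

samples = sorted(
    predicted_dose(sample_lognormal(0.02, 2.0),   # transfer factor
                   sample_lognormal(500.0, 1.5),  # intake rate
                   sample_lognormal(1e-8, 1.8))   # dose coefficient
    for _ in range(10_000)
)
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median dose: {median:.3e}  95th percentile: {p95:.3e}")
```

Ranking parameters by their contribution to output variance (e.g. via rank correlations between sampled inputs and outputs) follows the same sampling scheme.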

Full Text Available Community assessment is one of the core competencies for public health professionals, mainly because it gives them a better understanding of the strengths and drawbacks of their jurisdictions. We aimed to identify an appropriate model that provides a conceptual framework for the Iranian community. This study was conducted in Tehran during 2009-2010 and consisted of two parts: a review of the literature and qualitative interviews with selected experts, as well as focus group discussions with health field staff. These steps were taken to develop a conceptual framework: planning for a steering committee, forming a working committee, reviewing community assessment models and projects, preparing the proposed model draft, in-depth interviews and focus group discussions with national experts, finalizing the draft, and preparing the final model. Three different models had been published and applied routinely in different contexts. The 2008 North Carolina Community Assessment model was used as a reference. Ten national and 18 international projects were compared to the reference; one and six projects, respectively, were completely compatible with this model. Our final proposed model takes communities through eight steps to complete a collaborative community assessment: form a community assessment team, solicit community participation and gain inter-sectoral collaboration, establish a working committee, empower the community, collect and analyze the community's primary and secondary statistics, solicit community input to select health priorities, evaluate the community assessment and develop the community assessment document, and develop the community action plans.

It has been argued that social security disability assessments should directly assess claimants' work capacity, rather than relying on proxies such as functioning. However, there is little academic discussion of how such assessments could be conducted. The article presents an account of different models of direct disability assessments based on case studies of the Netherlands, Germany, Denmark, Norway, the United States of America, Canada, Australia, and New Zealand, utilising over 150 documents and 40 expert interviews. Three models of direct work disability assessments can be observed: (i) structured assessment, which measures the functional demands of jobs across the national economy and compares these to claimants' functional capacities; (ii) demonstrated assessment, which looks at claimants' actual experiences in the labour market and infers a lack of work capacity from the failure of a concerted rehabilitation attempt; and (iii) expert assessment, based on the judgement of skilled professionals. Direct disability assessment within social security is not just theoretically desirable, but can be implemented in practice. We have shown that there are three distinct ways that this can be done, each with different strengths and weaknesses. Further research is needed to clarify the costs, validity/legitimacy, and consequences of these different models. Implications for rehabilitation It has recently been argued that social security disability assessments should directly assess work capacity rather than simply assessing functioning - but we have no understanding about how this can be done in practice. Based on case studies of nine countries, we show that direct disability assessment can be implemented, and argue that there are three different ways of doing it. These are "demonstrated assessment" (using claimants' experiences in the labour market), "structured assessment" (matching functional requirements to workplace demands), and "expert assessment" (the

Model assessment of the stochastic block model is a crucial step in identification of modular structures in networks. Although this has typically been done according to the principle that a parsimonious model with a large marginal likelihood or a short description length should be selected, another principle is that a model with a small prediction error should be selected. We show that the leave-one-out cross-validation estimate of the prediction error can be efficiently obtained using belief propagation for sparse networks. Furthermore, the relations among the objectives for model assessment enable us to determine the exact cause of overfitting.

Behavioral (activity, diet, social interaction) and exposure (air pollution, traffic injury, and noise) related health impacts of land use and transportation investment decisions are becoming better understood and quantified. Research has shown relationships between density, mix, street connectivity, access to parks, shops, transit, presence of sidewalks and bikeways, and healthy food with physical activity, obesity, cardiovascular disease, type II diabetes, and some mental health outcomes. This session demonstrates successful integration of health impact assessment into multiple scenario planning tool platforms. Detailed evidence on chronic disease and related costs associated with contrasting land use and transportation investments is built into a general-purpose module that can be accessed by multiple platforms. Funders, researchers, and end users of the tool will present a detailed description of the key elements of the approach, how it has been applied, and how it will evolve. A critical focus will be placed on equity and social justice inherent within the assessment of health disparities that will be featured in the session. Health impacts of community design have significant cost-benefit implications. Recent research is now extending relationships between community design features and chronic disease to health care costs. This session will demonstrate the recent application of this evidence on health impacts to the newly adopted Los Angeles Regional Transpo

The instability in today's market and the ever increasing and emerging demands by customers for mass-customized and hybrid products are driving companies and decision makers to seek cost-effective and time-efficient improvements in their product development process. Design concept evaluation, which concludes the conceptual design stage, is one of the most critical decision points in product development. It relates to the final success of product development, because poor criteria assessment in design concept evaluation can rarely be compensated for at later stages. This has led to real pressure for the adoption of new developmental architectures and operational parameters to remain competitive in the market. In this paper, a new integrated design concept evaluation based on the fuzzy technique for order preference by similarity to ideal solution (Fuzzy-TOPSIS) is presented, and it also attempts to incorporate sustainability practices in assessing the criteria. Prior to Fuzzy-TOPSIS, a new scale of “Weighting criteria” for the survey process is developed to quantify the evaluation criteria. This method will help engineers to improve the effectiveness and objectivity of sustainable product development. A case example from industry is presented to demonstrate the efficacy of the proposed methodology. The result of the example shows that the new integrated method provides an alternative to existing methods of design concept evaluation.
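As a rough illustration of the TOPSIS core that the fuzzy variant builds on, the sketch below ranks invented design concepts with crisp scores; the paper's method additionally represents ratings as fuzzy numbers, which is omitted here:

```python
import math

def topsis(matrix, weights, benefit):
    # Rank alternatives by closeness to the ideal solution.
    # matrix: rows = alternatives, cols = criteria.
    # benefit[j] is True if larger values of criterion j are better.
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) virtual alternatives.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, worst)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three design concepts scored on cost (lower is better) and
# a sustainability rating (higher is better); invented numbers.
scores = topsis([[3.0, 7.0], [5.0, 9.0], [4.0, 6.0]],
                weights=[0.4, 0.6], benefit=[False, True])
print(scores)  # highest closeness coefficient = preferred concept
```

The closeness coefficient lies in [0, 1]; a concept scoring 1 would coincide with the ideal on every weighted criterion.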

Faster and more efficient development of innovative and sustainable products has become the focus for manufacturing companies in order to remain competitive in today’s technologically driven world. Design concept evaluation, which concludes the conceptual design stage, is one of the most critical decision points. It relates to the final success of product development, because poor criteria assessment in design concept evaluation can rarely be compensated for at later stages. Furthermore, consumers, investors, shareholders and even competitors base their decisions on what to buy or invest in, and from whom, partly on what companies report, and sustainability is a critical component of this. In this research, a new methodology of sustainability assessment in product development for Malaysian industry has been developed using an integration of green project management, a new scale of “Weighting criteria” and Rough-Grey Analysis. This method will help design engineers to improve the effectiveness and objectivity of sustainable design concept evaluation, enable them to make better-informed decisions before finalising their choice and consequently create value for the company or industry. The new framework is expected to provide an alternative to existing methods.

Maintaining soil productivity is essential if agriculture production systems are to be sustainable, thus soil quality is an essential issue. However, there is a paucity of measurement tools for understanding changes in soil quality. Here the possibility of using fuzzy modeling t...

The question of long-run market response lies at the heart of any marketing strategy that tries to create a sustainable competitive advantage for the firm or brand. A key challenge, however, is that only short-run results of marketing actions are readily observable. Persistence modeling

Full Text Available Risk assessment of roads is an effective approach for road agencies to determine safety improvement investments. It can increase the cost-effective returns in crash and injury reductions. To develop a powerful Chinese risk assessment model, the Research Institute of Highway (RIOH) is developing the China Road Assessment Programme (ChinaRAP) model to characterize traffic crashes in China in partnership with the International Road Assessment Programme (iRAP). The ChinaRAP model is based upon RIOH’s achievements and iRAP models. This paper documents part of ChinaRAP’s research work, mainly including the RIOH model and its pilot application in a province in China.

The object of this paper is to make an overall description of the author's PhD study, concerning uncertainties in numerical urban storm water drainage models. Initially an uncertainty localization and assessment of model inputs and parameters as well as uncertainties caused by different model...

Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the

[Objective] The study aimed to assess the health state of rivers by using a fuzzy matter-element model. [Method] Based on fuzzy matter-element analysis theory, the assessment model of river health was established; then a modified method to calculate the superior subordinate degree was put forward according to Hamming distance. Afterwards, a multi-level evaluation model, which contained the assessment indicators about hydrological features, ecological characteristics, environmental traits and service function, was set ...

Assessment is the process by which the teacher and the student gain knowledge about student progress. Assessment systems should aim at evaluating the desired learning outcomes. In Melaka Manipal Medical College, (Manipal Campus), Manipal, India, the TEMM model (consisting of 4 assessment methods: Triple Jump Test, essay incorporating critical thinking questions, Multistation Integrated Practical Examination, and multiple choice questions) was introduced to 30 refresher students in the fourth block of the academic year. At the end of the block, a questionnaire was distributed to ask the students to rank the different assessments in the order of their preference with respect to seven items. Analysis of the results showed that not a single type of assessment was ranked highest for all the seven items, proving the earlier observation that a single assessment does not fulfill all aspects of assessment and that there is a need for an evaluating system with multiple ways of assessment.

Presentation for the American Water Works Association Water Sustainability Conference. The presentation highlights latest results from water quality trading research conducted by ORD using the East Fork Watershed in Southwestern Ohio as a case study. The watershed has a nutrient enrichment problem that is creating harmful algal blooms in a reservoir used for drinking water and recreation. Innovative modeling and monitoring is combined to understand how to best manage this water quality problem and costs associated with this endeavor. The presentation will provide an overview of the water quality trading feasibility research. The research includes the development and evaluation of innovative modeling and monitoring approaches to manage watersheds for nutrient pollution using a whole systems approach.

Physics models and requirements to be used as a basis for safety analysis studies are developed and physics results motivated by safety considerations are presented for the ITER design. Physics specifications are provided for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and Category III (unlikely event). A safety analysis code SAFALY has been developed to investigate plasma anomaly events. The plasma response to ex-vessel component failure and machine response to plasma transients are considered.

The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individua...

This paper discusses some of the concepts of modeling surgery outcomes. It is also an attempt to offer a road map for progress. This paper may serve as common ground of discussion for both communities, i.e. surgeons and computational scientists in the broadest sense. Predicting surgery outcome is a very difficult task. All patients are different, and multiple factors such as genetic or environmental conditions play a role. The difficulty is to construct models that are complex enough to address some of these significant multiscale elements and simple enough to be used in clinical conditions and calibrated on patient data. We will provide a multilevel progressive approach inspired by two applications in surgery that we have been working on. One is about vein graft adaptation after a transplantation; the other is the recovery of cosmesis outcome after a breast lumpectomy. This work, which is still very much in progress, may teach us some lessons. We are convinced that the digital revolution that is transforming the working environment of the surgeon makes closer collaboration between surgeons and computational scientists unavoidable. We believe that "computational surgery" will allow the community to develop predictive models of surgery outcome and great progress in surgery procedures that goes far beyond the operating room procedural aspect.

Climate financing is a key issue in current negotiations on climate protection. This study establishes a climate financing model based on a mechanism in which donor countries set up funds for climate financing and recipient countries use the funds exclusively for carbon emission reduction. The burden-sharing principles are based on GDP, historical emissions, and consumption-based emissions. Using this model, we develop and analyze a series of scenario simulations, including a financing program negotiated at the Cancun Climate Change Conference (2010) and several subsequent programs. Results show that sustained climate financing can help to combat global climate change. However, the Cancun Agreements are projected to result in a reduction of only 0.01°C in global warming by 2100 compared to the scenario without climate financing. Longer-term climate financing programs should be established to achieve more significant benefits. Our model and simulations also show that climate financing has economic benefits for developing countries. Developed countries will suffer a slight GDP loss in the early stages of climate financing, but the long-term economic growth and the eventual benefits of climate mitigation will compensate for this slight loss. Different burden-sharing principles have very similar effects on global temperature change and economic growth of recipient countries, but they do result in differences in GDP changes for Japan and the FSU. The GDP-based principle results in a larger share of financial burden for Japan, while the historical emissions-based principle results in a larger share of financial burden for the FSU. A larger burden share leads to a greater GDP loss.

Full Text Available An integrated route assessment approach based on cloud model is proposed in this paper, where various sources of uncertainties are well kept and modeled by cloud theory. Firstly, a systemic criteria framework incorporating models for scoring subcriteria is developed. Then, the cloud model is introduced to represent linguistic variables, and survivability probability histogram of each route is converted into normal clouds by cloud transformation, enabling both randomness and fuzziness in the assessment environment to be managed simultaneously. Finally, a new way to measure the similarity between two normal clouds satisfying reflexivity, symmetry, transitivity, and overlapping is proposed. Experimental results demonstrate that the proposed route assessment approach outperforms fuzzy logic based assessment approach with regard to feasibility, reliability, and consistency with human thinking.
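A normal cloud in this sense is described by an expectation Ex, entropy En, and hyper-entropy He; the hedged sketch below of a forward cloud generator (parameter values invented) shows how randomness and fuzziness are handled simultaneously:

```python
import math
import random

random.seed(0)

def normal_cloud_drops(ex, en, he, n):
    # Generate n cloud drops (x, membership) for a normal cloud (Ex, En, He).
    # He > 0 makes the membership of each x itself random,
    # combining fuzziness with randomness.
    drops = []
    for _ in range(n):
        en_prime = random.gauss(en, he)          # randomized entropy
        x = random.gauss(ex, abs(en_prime))      # drop position
        mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))
        drops.append((x, mu))
    return drops

# e.g. the linguistic rating "good survivability" as a cloud around 0.7.
drops = normal_cloud_drops(ex=0.7, en=0.1, he=0.02, n=5000)
mean_x = sum(x for x, _ in drops) / len(drops)
print(round(mean_x, 2))  # close to Ex = 0.7
```

Cloud transformation of a survivability histogram, and the similarity measure between two clouds, build on exactly these (Ex, En, He) triples.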

INTRODUCTION: Hospitals increasingly make decisions regarding the early development of and investment in technologies, but a formal evaluation model for assisting hospitals early on in assessing the potential of innovative medical technologies is lacking. This article provides an overview of models...... for early assessment in different health organisations and discusses which models hold most promise for hospital decision makers. METHODS: A scoping review of published studies between 1996 and 2015 was performed using nine databases. The following information was collected: decision context, decision...... problem, and a description of the early assessment model. RESULTS: 2362 articles were identified and 12 studies fulfilled the inclusion criteria. An additional 12 studies were identified and included in the review by searching reference lists. The majority of the 24 early assessment studies were variants...

The Indoor Air Quality Building Education and Assessment Model (I-BEAM) is a guidance tool designed for use by building professionals and others interested in indoor air quality in commercial buildings.

The Indoor Air Quality Building Education and Assessment Model (I-BEAM), released in 2002, is a guidance tool designed for use by building professionals and others interested in indoor air quality in commercial buildings.

Full Text Available Purpose: This paper focuses on supply chain disruption assessment. Design/methodology/approach: Newsvendor model. Findings: As both the cost and the income principle are taken into account in supply chain disruption assessment, we propose in this paper that: (1) the problem of supply chain disruption assessment is a trade-off problem; (2) the generic single-period newsvendor model can be used for capturing the critical point, which in the traditional model stands for the demarcation point of profit but here is the least cost considering disruption costs and expected revenues. Research limitations/implications: single-period newsvendor model. Practical implications: we give an example to test the effectiveness of this method. Originality/value: supply chain risk is researched in a new way, namely that supply chain risk has both a cost and a profit side, so it can be assessed with a trade-off method.
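For intuition, the classic newsvendor critical point has the closed form q* = F⁻¹(Cu/(Cu+Co)). The sketch below uses invented costs and normal demand, not the paper's disruption-cost formulation:

```python
from statistics import NormalDist

def newsvendor_quantity(unit_cost, price, salvage, demand):
    # Critical ratio Cu / (Cu + Co): the demarcation point where the expected
    # cost of ordering one more unit equals the expected cost of one fewer.
    cu = price - unit_cost    # under-stocking cost: lost margin per unit short
    co = unit_cost - salvage  # over-stocking cost: loss per unsold unit
    return demand.inv_cdf(cu / (cu + co))

# Illustrative numbers: symmetric costs give a critical ratio of 0.5,
# so the optimal order equals mean demand.
q = newsvendor_quantity(unit_cost=6.0, price=10.0, salvage=2.0,
                        demand=NormalDist(mu=100.0, sigma=20.0))
print(round(q))  # -> 100
```

A disruption-aware variant would fold expected disruption costs into Co, shifting the critical ratio and hence the demarcation point.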

This study proposes and tests an integrative model that incorporates the mental resources framework (MOA: motivation, opportunity, and ability) alongside traditional innovation adoption predictors for assessing the adoption of dual-functionality innovations (DFI), a special case of multifunctional i

This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment.
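The GHG aggregation step can be sketched as a weighted sum over characterization factors. The GWP values below are illustrative 100-year figures in the range of IPCC AR5, and the inventory is invented:

```python
# Illustrative 100-year global warming potentials (order of IPCC AR5 values);
# a full LCA would use characterization factors for 20-, 100- and 500-year horizons.
GWP_100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2_equivalent(emissions_t, gwp):
    # Aggregate an emission inventory (tonnes per gas) into tonnes CO2e.
    return sum(mass * gwp[gas] for gas, mass in emissions_t.items())

# Toy inventory for one mining unit process: combustion CO2 plus fugitive methane.
inventory = {"CO2": 1200.0, "CH4": 15.0, "N2O": 0.4}
print(f"{co2_equivalent(inventory, GWP_100):.0f} tonnes CO2e")
```

Switching the time horizon only swaps the characterization-factor table; the aggregation itself is unchanged, which is why methane's large short-horizon GWP makes fugitive emissions dominate 20-year results.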

This paper describes a Bayesian model for the assessment of inhalation exposures in an occupational setting; the methodology underpins a freely available web-based application for exposure assessment, the Advanced REACH Tool (ART). The ART is a higher tier exposure tool that combines disparate sourc

Graduate attributes (GAs) have become a necessary framework of reference for the 21st century competency-based model of higher education. However, the issue of evaluating and assessing GAs still remains unchartered territory. In this article, we present a criteria-based method of assessment that allows for an institution-wide comparison of the…

Although proof comprehension is fundamental in advanced undergraduate mathematics courses, there has been limited research on what it means to understand a mathematical proof at this level and how such understanding can be assessed. In this paper, we address these issues by presenting a multidimensional model for assessing proof comprehension in…

This background paper on refugee needs assessment discusses the assumptions, goals, objectives, strategies, models, and methods that the state refugee programs can consider in designing their strategies for assessing the mental health needs of refugees. It begins with a set of background assumptions about the ethnic profile of recent refugee…

The ex-ante assessment of the likely impacts of policy changes and technological innovations on agriculture can provide insight into policy effects on land use and other resources and inform discussion on the desirability of such changes. Integrated assessment and modeling (IAM) is an approach that

The material presents the application of a mathematical method for risk assessment under statistical determination of the ballistic limits of protection equipment. The authors have implemented a mathematical model based on Pierson's criteria. The software implementation of the model allows evaluation of the V50 indicator and assessment of the reliability of the statistical hypothesis. The results supply specialists with interval estimates of the probability determined during the testing process.
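As a hedged illustration of the point estimate that underlies such an analysis, the sketch below computes a MIL-STD-662F-style V50 average from invented test-range data; the statistical treatment described in the abstract would add interval estimates around a value like this:

```python
def v50_estimate(shots, k=3):
    # Mean of the k highest partial-penetration and k lowest
    # complete-penetration velocities (a classic V50 point estimate).
    partials = sorted(v for v, complete in shots if not complete)
    completes = sorted(v for v, complete in shots if complete)
    mixed = partials[-k:] + completes[:k]
    return sum(mixed) / len(mixed)

# (velocity in m/s, complete penetration?) -- invented test-range data
shots = [(410, False), (425, False), (432, False), (438, True),
         (441, False), (445, True), (452, True), (460, True)]
print(f"V50 estimate: {v50_estimate(shots):.1f} m/s")
```

Note the overlap in the data (a partial at 441 m/s above a complete at 438 m/s): it is exactly this mixed-results zone that motivates a probabilistic treatment with confidence intervals.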

The mechanistic empirical method of flexible pavement design/assessment uses a large number of numerical truck model runs to predict a history of dynamic load. The pattern of dynamic load distribution along the pavement is a key factor in the design/ assessment of flexible pavement. While this can be measured in particular cases, there are no reliable methods of predicting the mean pattern for typical traffic conditions. A simple linear quarter car model is developed here which aims to reprod...
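A minimal linear quarter-car sketch of the kind described (all masses, stiffnesses, and the road profile are invented values) integrates the two-degree-of-freedom equations and records the dynamic tyre force:

```python
import math

def quarter_car_tyre_force(road, dt, ms=300.0, mu=40.0,
                           ks=20_000.0, cs=1_500.0, kt=180_000.0):
    # Linear quarter car: sprung mass ms over a suspension (ks, cs),
    # unsprung mass mu on a tyre spring kt following road profile zr.
    # Semi-implicit Euler integration; returns the tyre force history (N).
    zs = vs = zu = vu = 0.0  # sprung/unsprung displacement and velocity
    forces = []
    for zr in road:
        f_susp = ks * (zu - zs) + cs * (vu - vs)  # suspension force on body
        f_tyre = kt * (zr - zu)                   # tyre spring force
        a_s = f_susp / ms
        a_u = (f_tyre - f_susp) / mu
        vs += a_s * dt
        zs += vs * dt
        vu += a_u * dt
        zu += vu * dt
        forces.append(f_tyre)
    return forces

# 0.01 m amplitude sinusoidal road at 2 Hz, sampled at 1 kHz (invented values).
dt = 0.001
road = [0.01 * math.sin(2 * math.pi * 2.0 * dt * i) for i in range(5000)]
forces = quarter_car_tyre_force(road, dt)
print(f"peak dynamic tyre force: {max(forces):.0f} N")
```

Replaying many such force histories over measured or synthetic road profiles yields the mean dynamic load pattern along the pavement that the design method needs.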

The OCENSA pipeline system is vulnerable to geotechnical problems such as faults, landslides or creeping slopes, which are well-known in the Andes Mountains and tropical countries like Colombia. This paper proposes a methodology to evaluate the pipe behaviour during the soil displacements of slow landslides. Three different cases of analysis are examined, according to site characteristics. The process starts with a simplified analytical model and develops into 3D finite element numerical simulations applied to the on-site geometry of soil and pipe. Case 1 should be used when the unstable site is subject to landslides impacting significant lengths of pipeline, pipeline is straight, and landslide is simple from the geotechnical perspective. Case 2 should be used when pipeline is straight and landslide is complex (creeping slopes and non-conventional stabilization solutions). Case 3 should be used if the pipeline presents vertical or horizontal bends.

This article addresses the application of ecological risk assessment at the regional scale to the prediction of impacts due to invasive or nonindigenous species (NIS). The first section describes risk assessment and the decision-making process, and introduces regional risk assessment. A general conceptual model for the risk assessment of NIS is then presented based upon the regional risk assessment approach. Two diverse examples of the application of this approach are presented. The first example is based upon the dynamics of introduced plasmids in bacteria populations. The second example is the application of the risk assessment approach to the invasion of a coastal marine site at Cherry Point, Washington, USA, by the European green crab. The lessons learned from the two examples demonstrate that assessment of the risks of invasion by NIS will have to incorporate not only the characteristics of the invasive species, but also the other stresses and impacts affecting the region of interest.

A guide was prepared to allow a user to run the PNL long-range transport model, REGIONAL 1. REGIONAL 1 is a computer model set up to run atmospheric assessments on a regional basis. The model has the capability of being run in three modes for a single time period. The three modes are: (1) no deposition, (2) dry deposition, (3) wet and dry deposition. The guide provides the physical and mathematical basis used in the model for calculating transport, diffusion, and deposition for all three modes. Also the guide includes a program listing with an explanation of the listings and an example in the form of a short-term assessment for 48 hours. The purpose of the example is to allow a person who has past experience with programming and meteorology to operate the assessment model and compare his results with the guide results. This comparison will assure the user that the program is operating in a proper fashion.

A new item response theory (IRT) model with a tree structure has been introduced for modeling item response processes. In this paper, we present a generalized item response tree model with a flexible parametric form, dimensionality, and choice of covariates. The utility of the model is demonstrated with two applications in psychological assessment: investigating Likert-scale item responses and modeling omitted item responses. The proposed model is estimated with the freely available R package flirt (Jeon et al., 2014b).

To describe the weaknesses of the current psychometric approach to assessment as a scientific model. The current psychometric model has played a major role in improving the quality of assessment of medical competence. It is becoming increasingly difficult, however, to apply this model to modern assessment methods. The central assumption in the current model is that medical competence can be subdivided into separate, measurable, stable and generic traits. This assumption has several far-reaching implications. Perhaps the most important is that it requires a numerical and reductionist approach, so that aspects such as fairness, defensibility and credibility are by necessity translated mainly into reliability and construct validity. These approaches are increasingly difficult to align with modern assessment methods such as the mini-CEX, 360-degree feedback and portfolios. This paper describes some of the weaknesses of the psychometric model and aims to open a discussion on a conceptually different statistical approach to the quality of assessment. A probabilistic or Bayesian approach would be worth exploring.

Single-species, age-structured fish stock assessment still remains the main tool for managing fish stocks. A simple state-space assessment model is presented as an alternative to (semi-)deterministic procedures and to fully parametric statistical catch-at-age models. It offers a solution to some of the key challenges of these models. Compared to the deterministic procedures, it solves a list of problems originating from falsely assuming that age-classified catches are known without errors, and it allows quantification of the uncertainties of estimated quantities of interest. Compared to full ... A criticism of state-space assessment models is that they tend to be more conservative (react more slowly to changes) than the alternatives. A solution to this criticism is offered by introducing a mixture distribution for the transition steps. The model presented is used for several commercially important stocks ...

Finite element (FE) model updating techniques have been a viable approach to correcting an initial mathematical model based on test data. Validation of the updated FE models is usually conducted by comparing model predictions with independent test data that have not been used for model updating. This approach of model validation cannot be readily applied in the case of a stochastically updated FE model. In recognizing that structural reliability is a major decision factor throughout the lifecycle of a structure, this study investigates the use of structural reliability as a measure for assessing the quality of stochastically updated FE models. A recently developed perturbation method for stochastic FE model updating is first applied to attain the stochastically updated models by using the measured modal parameters with uncertainty. The reliability index and failure probability for predefined limit states are computed for the initial and the stochastically updated models, respectively, and are compared with those obtained from the 'true' model to assess the quality of the two models. Numerical simulation of a truss bridge is provided as an example. The simulated modal parameters involving different uncertainty magnitudes are used to update an initial model of the bridge. It is shown that the reliability index obtained from the updated model is much closer to the true reliability index than that obtained from the initial model in the case of small uncertainty magnitude; in the case of large uncertainty magnitude, the reliability index computed from the initial model rather than from the updated model is closer to the true value. The present study confirms the usefulness of measurement-calibrated FE models and at the same time also highlights the importance of uncertainty reduction in test data for reliable model updating and reliability evaluation.
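The reliability-index comparison described above can be sketched on a toy problem. The sketch below uses a single linear limit state g = R - S (resistance minus load) with invented Gaussian statistics, estimates the failure probability by Monte Carlo, converts it to a reliability index, and checks it against the closed-form value available for this linear case; none of these numbers come from the paper's truss-bridge example.

```python
import random
from statistics import NormalDist

# Illustrative sketch only: limit state g = R - S with hypothetical statistics.
random.seed(42)
n = 200_000

mu_R, sig_R = 5.0, 0.5   # assumed resistance mean / std
mu_S, sig_S = 3.0, 0.4   # assumed load mean / std

# Monte Carlo failure probability: fraction of draws with g = R - S < 0
failures = sum(
    1 for _ in range(n)
    if random.gauss(mu_R, sig_R) - random.gauss(mu_S, sig_S) < 0.0
)
pf_mc = failures / n
beta_mc = -NormalDist().inv_cdf(pf_mc)    # reliability index from Pf

# Exact index for the linear Gaussian case, playing the role of the "true" model.
beta_exact = (mu_R - mu_S) / (sig_R**2 + sig_S**2) ** 0.5
```

Comparing `beta_mc` for an initial and an updated model against `beta_exact` mirrors the paper's quality measure, with the closed form standing in for the 'true' model.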

Integrated assessments of how climate policy interacts with energy-economy systems can be performed by a variety of models with different functional structures. In order to provide insights into why results differ between models, this article proposes a diagnostic scheme that can be applied to a wide range of models.

Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model...

It is generally accepted that humans are the “weakest link” in structural design and construction processes. Despite this, few models are available to quantify human error within engineering processes. This paper demonstrates the use of a quantitative Human Reliability Assessment model within structural ...

The Association to Advance Collegiate Schools of Business (AACSB) International's assurance of learning (AoL) standards require that schools develop a sophisticated continuous-improvement process. The authors review various assessment models and develop a practical, 6-step AoL model based on the literature and the authors' AoL implementation…

A computer-based model was constructed to assess enrichment materials (EMats) for intensively-farmed weaned, growing and fattening pigs on a scale from 0 to 10. This model, called RICHPIG, was constructed in order to support the further implementation of EC Directive 2001/93/EC, which states that "p

Full Text Available This paper reports on the assessment of an e-readiness model for low-bandwidth environments. The main focus of the model is on technological (bandwidth-related) critical factors that are barriers to the adoption of technology-mediated learning in developing countries ...

A physically based, Monte Carlo probabilistic model (SHEDS-Wood: Stochastic Human Exposure and Dose Simulation model for wood preservatives) has been applied to assess the exposure and dose of children to arsenic (As) and chromium (Cr) from contact with chromated copper arsenate (CCA)-treated wood ...
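The probabilistic exposure idea can be illustrated with a toy Monte Carlo dose simulation. The input distributions and the dose expression below are invented for illustration; they are not SHEDS-Wood's actual parameters or algorithm, which are far more detailed.

```python
import random

# Toy Monte Carlo exposure sketch (hypothetical inputs, arbitrary units).
random.seed(3)
n = 100_000

doses = []
for _ in range(n):
    residue = random.lognormvariate(0.0, 0.5)        # surface residue level
    transfer = random.uniform(0.1, 0.4)              # assumed transfer fraction
    body_weight = max(random.gauss(20.0, 3.0), 5.0)  # child body weight, kg
    doses.append(residue * transfer / body_weight)   # dose per kg body weight

doses.sort()
p50 = doses[n // 2]          # median dose across the simulated population
p95 = doses[int(0.95 * n)]   # upper-percentile dose, a typical risk metric
```

Reporting percentiles of the simulated dose distribution, rather than a single point estimate, is the essential output pattern of this class of models.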

This document is the model summary report for the safety assessment SR-Site. In the report, the quality assurance (QA) measures conducted for the assessment codes are presented together with the chosen QA methodology. In the safety assessment project SR-Site, a large number of numerical models are used to analyse the system and to show compliance. In order to better understand how the different models interact and how information is transferred between them, Assessment Model Flowcharts (AMFs) are used. From these, the different modelling tasks and the computer codes used can be identified. A large number of computer codes of widely varying complexity are used in the assessment; some are commercial, while others were developed especially for the assessment at hand. QA requirements must on the one hand take this diversity into account and on the other hand be well defined. In the methodology section of the report the following requirements are defined for all codes: - It must be demonstrated that the code is suitable for its purpose. - It must be demonstrated that the code has been properly used. - It must be demonstrated that the code development process has followed appropriate procedures and that the code produces accurate results. - It must be described how data are transferred between the different computational tasks. Although the requirements are identical for all codes in the assessment, the measures used to show that the requirements are fulfilled differ between types of codes (for instance, because for some software the source code is not available for review). Subsequent to the methodology section, each assessment code is presented together with a discussion of how the requirements are met.

We describe the three-dimensional global stratospheric chemistry model developed under the NASA Global Modeling Initiative (GMI) to assess the possible environmental consequences of the emissions of a fleet of proposed high-speed civil transport aircraft. This model was developed through a unique collaboration of the members of the GMI team. Team members provided computational modules representing various physical and chemical processes, and analysis of simulation results through extensive comparison to observation. The team members' modules were integrated within a computational framework that allowed transportability and simulations on massively parallel computers. A unique aspect of this model framework is the ability to interchange and intercompare different submodules to assess the sensitivity of simulation results to numerical algorithms and model assumptions. In this paper, we discuss the important attributes of the GMI effort, and describe the GMI model computational framework and the numerical modules representing physical and chemical processes. As an application of the concept, we illustrate an analysis of the impact of advection algorithms on the dispersion of a NO{sub y}-like source in the stratosphere which mimics that of a fleet of commercial supersonic transports (High-Speed Civil Transport (HSCT)) flying between 17 and 20 kilometers.

Given a set of alternative models for a specific protein sequence, the model quality assessment (MQA) problem asks for an assignment of scores to each model in the set. A good MQA program assigns these scores such that they correlate well with the real quality of the models, ideally scoring best ... with the best MQA methods that were assessed at CASP7. We also propose a new evaluation measure, Kendall's tau, which is more interpretable than the conventional measures used for evaluating MQA methods (Pearson's r and Spearman's rho). We show clear examples where Kendall's tau agrees much more with our intuition of a correct MQA, and we therefore propose that Kendall's tau be used for future CASP MQA assessments. Proteins 2009. (c) 2008 Wiley-Liss, Inc.
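Kendall's tau, the evaluation measure proposed above, is just the normalized count of concordant minus discordant pairs between predicted scores and observed model quality. A minimal sketch, with made-up scores for five hypothetical decoys:

```python
def kendall_tau(pred, true):
    """Kendall's tau for untied score lists: (concordant - discordant) / pairs."""
    n = len(pred)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = pred[i] - pred[j]
            dy = true[i] - true[j]
            s += (dx * dy > 0) - (dx * dy < 0)  # +1 concordant, -1 discordant
    return s / (n * (n - 1) / 2)

predicted_scores = [0.9, 0.8, 0.7, 0.2, 0.4]   # hypothetical MQA scores
observed_quality = [80, 60, 75, 20, 35]        # hypothetical true quality (e.g. GDT)

# One pair (decoys 2 and 3) is ranked in the wrong order -> tau = (9 - 1)/10 = 0.8
tau = kendall_tau(predicted_scores, observed_quality)
```

The interpretability claim follows directly: a tau of 0.8 says 90% of decoy pairs are ranked in the right order, which has no equally direct reading for Pearson's r or Spearman's rho.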

A number of waste life cycle assessment (LCA) models have been gradually developed since the early 1990s in a number of countries, usually independently from each other. Large discrepancies in results have been observed among different waste LCA models, although it has also been shown that results ..., such as the functional unit, system boundaries, waste composition and energy modelling. The modelling assumptions of waste management processes, ranging from collection, transportation, intermediate facilities, recycling, thermal treatment, biological treatment, and landfilling, are obviously critical when comparing waste LCA models. This review infers that some of the differences in waste LCA models are inherent to the time they were developed. It is expected that models developed later benefit from past modelling assumptions, knowledge and issues. Models developed in different countries furthermore rely ...

This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. - Highlights: • A computational LCA model for assessment of coal mining operations • Identification of

This document is the model summary report for the safety assessment SR-Can. In the report, the quality assurance measures conducted for the assessment codes are presented together with the chosen methodology. In the safety assessment SR-Can, a number of different computer codes are used. In order to better understand how these codes are related, Assessment Model Flowcharts (AMFs) have been produced within the project. From these, it is possible to identify the different modelling tasks and consequently also the different computer codes used. A large number of different computer codes are used in the assessment, of which some are commercial while others were developed especially for the current assessment project. QA requirements must on the one hand take this diversity into account and on the other hand be well defined. In the methodology section of the report the following requirements are defined: It must be demonstrated that the code is suitable for its purpose; It must be demonstrated that the code has been properly used; and, It must be demonstrated that the code development process has followed appropriate procedures and that the code produces accurate results. Although the requirements are identical for all codes, the measures used to show that the requirements are fulfilled will differ between codes (for instance, because for some software the source code is not available for review). Subsequent to the methodology section, each assessment code is presented and it is shown how the requirements are met.

Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.

Workplace accommodations that enable employees with disabilities to perform essential job tasks are an important strategy for increasing the presence of people with disabilities in the labor market. However, assessments, which are crucial to identifying necessary accommodations, are typically conducted using a variety of methods that lack consistent procedures and comprehensiveness of information. This can lead to rediscovery of the same solutions over and over, inability to replicate assessments, and failure to meet all of an individual's accommodation needs. To standardize assessment tools and processes, a taxonomy of demand-producing activity factors is needed to complement the taxonomies of demand-producing person and environment factors already available in the International Classification of Functioning, Disability and Health (ICF). The purpose of this article is to propose a hierarchical model of accommodation assessment based on the level of specificity of job activity. While the proposed model is neither a taxonomy nor an assessment process, the seven-level hierarchical model provides a conceptual framework of job activity that is the first step toward such a taxonomy, as well as a common language that can bridge the many approaches to assessment. The model was designed and refined through testing against various job examples. Different levels of activity are defined so as to be easily linked to different accommodation strategies. Finally, the levels can be cross-walked to the ICF, which enhances the model's acceptability, utility and universality.

Many applications of object recognition in the presence of pose uncertainty rely on statistical models-conditioned on pose-for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.
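The "many small samples" idea can be sketched as follows: run a one-sample Kolmogorov-Smirnov test on each small sample against a hypothesized distribution and report the percentage that fail at a chosen level. The asymptotic critical value 1.358/√m used below is only a rough stand-in for exact small-sample tables, and the data are synthetic.

```python
import random
from statistics import NormalDist

random.seed(1)
Phi = NormalDist().cdf  # hypothesized CDF: standard normal

def ks_statistic(sample):
    """Max deviation between the empirical CDF and the hypothesized CDF."""
    xs = sorted(sample)
    m = len(xs)
    return max(
        max((i + 1) / m - Phi(x), Phi(x) - i / m)
        for i, x in enumerate(xs)
    )

m, n_samples = 20, 500                 # many small samples of size m
d_crit = 1.358 / m ** 0.5              # ~5% level, asymptotic approximation

rejections = sum(
    ks_statistic([random.gauss(0.0, 1.0) for _ in range(m)]) > d_crit
    for _ in range(n_samples)
)
reject_rate = rejections / n_samples   # should sit near the nominal 5%
```

When the model is correct, the failure percentage tracks the significance level; a systematic excess across poses is the aggregate evidence of model misfit that the paper exploits.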

Physicians' interpersonal and communication skills have a significant impact on patient care and correlate with improved healthcare outcomes. Some studies suggest, however, that communication skills decline during the four years of medical school. Regulatory and other medical organizations, recognizing the importance of interpersonal and communication skills in the practice of medicine, now require competence in communication skills. Two challenges exist: to select a framework of interpersonal and communication skills to teach across undergraduate medical education, and to develop and implement a uniform model for the assessment of these skills. The authors describe a process and model for developing and institutionalizing the assessment of communication skills across the undergraduate curriculum. Consensus was built regarding communication skill competencies by working with course leaders and examination directors, a uniform framework of competencies was selected to both teach and assess communication skills, and the framework was implemented across the Harvard Medical School undergraduate curriculum. The authors adapted an assessment framework based on the Bayer-Fetzer Kalamazoo Consensus Statement and added a patient satisfaction tool to bring patients' perspectives into the assessment of the learners. The core communication competencies and evaluation instruments were implemented in school-wide courses and assessment exercises including the first-year Patient-Doctor I Clinical Assessment, second-year Objective Structured Clinical Exam (OSCE), third-year Patient-Doctor III Clinical Assessment, fourth-year Comprehensive Clinical Practice Examination and the Core Medicine Clerkships. Faculty were offered workshops and interactive web-based teaching to become familiar with the framework, and students used the framework with repeated opportunities for faculty feedback on these skills. A model is offered for educational leaders and others who are involved

This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent; a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision-framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessment. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An

The article presents an assessment of the ability of the thirty-seven model quality assessment (MQA) methods participating in CASP10 to provide an a priori estimation of the quality of structural models, and of the 67 tertiary structure prediction groups to provide confidence estimates for their predicted coordinates. The assessment of MQA predictors is based on the methods used in previous CASPs, such as correlation between the predicted and observed quality of the models (both at the global and local levels), accuracy of methods in distinguishing between good and bad models as well as good and bad regions within them, and ability to identify the best models in the decoy sets. Several numerical evaluations were used in our analysis for the first time, such as comparison of global and local quality predictors with reference (baseline) predictors and a ROC analysis of the predictors' ability to differentiate between the well and poorly modeled regions. For the evaluation of the reliability of self-assessment of the coordinate errors, we used the correlation between the predicted and observed deviations of the coordinates and a ROC analysis of correctly identified errors in the models. A modified two-stage procedure for testing MQA methods in CASP10, whereby a small number of models spanning the whole range of model accuracy was released first, followed by the release of a larger number of models of more uniform quality, allowed a more thorough analysis of the abilities and inabilities of different types of methods. Clustering methods were shown to have an advantage over the single- and quasi-single-model methods on the larger datasets. At the same time, the evaluation revealed that the size of the dataset has a smaller influence on the global quality assessment scores (for both clustering and nonclustering methods) than its diversity. Narrowing the quality range of the assessed models caused a significant decrease in accuracy of ranking for global quality predictors but

... the bridge cables, which can cause socioeconomically expensive closures of bridges and traffic disruptions. The objective is to develop a simple model that can be used to assess the occurrence probability of ice accretion on bridge cables from readily available meteorological variables. This model is used ... The damage assessment is performed using a probabilistic approach, based on a Bayesian Probabilistic Network, where the wind environment, traffic loading, bridge-specific parameters and the mechanisms that induce significant cable vibrations are the main input parameters. ... assessments together with the Bayesian pre-posterior decision analysis and builds upon the quantification of Value of Information (VoI). The consequences are evaluated for different outputs of the probabilistic model to provide a basis for prioritizing risk management decision alternatives. Each step ... It is outlined how information ...

Environmentally conscious manufacturing has become an important issue in industry because of market pressure and environmental regulations. An environmental risk assessment model was developed based on the analytic network method and fuzzy set theory. The "interval analysis method" was applied to treat on-site monitoring data as basic information for the assessment. In addition, fuzzy set theory was employed to allow uncertain, interactive and dynamic information to be effectively incorporated into the environmental risk assessment. This model is a simple, practical and effective tool for evaluating the environmental risk of the manufacturing industry and for analyzing the relative impacts of emission wastes, which are hazardous to both human and ecosystem health. Furthermore, the model is useful for design engineers and decision-makers in designing and selecting processes when the costs, environmental impacts and performance of a product are taken into consideration.

Integrated assessments of how climate policy interacts with energy-economic systems can be performed by a variety of models with different functional structures. This article proposes a diagnostic scheme that can be applied to a wide range of integrated assessment models to classify differences among models based on their carbon price responses. Model diagnostics can uncover patterns and provide insights into why, under a given scenario, certain types of models behave in observed ways. Such insights are informative since model behavior can have a significant impact on projections of climate change mitigation costs and other policy-relevant information. The authors propose diagnostic indicators to characterize model responses to carbon price signals and test these in a diagnostic study with 11 global models. The indicators describe the magnitude of emission abatement and the associated costs relative to a harmonized baseline, the relative changes in carbon intensity and energy intensity, and the extent of transformation in the energy system. The study shows a correlation among indicators, suggesting that models can be classified into groups based on common patterns of behavior in response to carbon pricing. Such a classification can help to more easily explain variations among policy-relevant model results.
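Diagnostic indicators of this kind can be computed from a pair of model runs with and without a carbon price. The sketch below shows relative abatement plus relative changes in carbon and energy intensity; all numbers are invented for illustration and are not from the diagnostic study.

```python
# Hypothetical outputs of one model under a harmonized baseline and a carbon-price
# scenario (emissions in GtCO2, energy in EJ, GDP in trillion $ -- all illustrative).
baseline = {"emissions": 40.0, "energy": 500.0, "gdp": 100.0}
policy   = {"emissions": 24.0, "energy": 425.0, "gdp": 98.0}

# Relative abatement: fraction of baseline emissions avoided under the policy.
relative_abatement = 1.0 - policy["emissions"] / baseline["emissions"]

# Relative change in carbon intensity (emissions per unit GDP).
carbon_intensity_change = (
    (policy["emissions"] / policy["gdp"])
    / (baseline["emissions"] / baseline["gdp"]) - 1.0
)

# Relative change in energy intensity (energy per unit GDP).
energy_intensity_change = (
    (policy["energy"] / policy["gdp"])
    / (baseline["energy"] / baseline["gdp"]) - 1.0
)
```

Computing the same indicators for every participating model under a harmonized scenario is what makes cross-model classification possible: models with similar indicator profiles respond similarly to carbon pricing.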

This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…

A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of quantile model selection problem is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection

A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies structur

A range of approaches can be used in the application of climate change projections to agricultural impacts assessment. Climate projections can be used directly to drive crop models, which in turn can provide inputs for agricultural economic or integrated assessment models. These model applications, and the transfer of information between models, must be guided by the state of the science. But the methodology must also account for the specific needs of stakeholders and the intended use of model results beyond pure scientific inquiry, including meeting the requirements of agencies responsible for designing and assessing policies, programs, and regulations. Here we present the methodology and results of two climate impacts studies that applied climate model projections from CMIP3 and from the EPA Climate Impacts and Risk Analysis (CIRA) project in a crop model (EPIC - Environmental Policy Integrated Climate) in order to generate estimates of changes in crop productivity for use in an agricultural economic model for the United States (FASOM - Forest and Agricultural Sector Optimization Model). The FASOM model is a forward-looking dynamic model of the US forest and agricultural sector used to assess market responses to changing productivity of alternative land uses. The first study, focused on climate change impacts on the USDA crop insurance program, was designed to use available daily climate projections from the CMIP3 archive. The decision to focus on daily data for this application limited the climate model and time period selection significantly; however, for the intended purpose of assessing impacts on crop insurance payments, consideration of extreme event frequency was critical for assessing periodic crop failures. In a second, coordinated impacts study designed to assess the relative difference in climate impacts under a no-mitigation policy and different future climate mitigation scenarios, the stakeholder specifically requested an assessment of a

Full Text Available This work describes a formative assessment model for the Mathematical Analysis course taken by engineering students. It includes online questionnaires with feedback, a portfolio with weekly assignments, exams involving the use of mathematical software, and a project to be completed in small groups of two or three students. The model has been refined since 2009, and during the 2014-15 academic year a pilot online learning community was added. Based on Google+, it has been used for a peer assessment experiment involving student projects, among other uses.

The article presents different tools available for risk assessment in fractured clayey tills, and their advantages and limitations are discussed. Because of the complex processes occurring during contaminant transport through fractured media, the development of simple practical tools for risk assessment is challenging and the inclusion of the relevant processes is difficult. Furthermore, the lack of long-term monitoring data prevents verification of the accuracy of the different conceptual models. Further investigations based on long-term data and numerical modeling are needed to accurately…

This paper presents an Environmental Excellence Self-Assessment (EEA) model based on the structure of the European Foundation for Quality Management Business Excellence Framework. Four theoretical scenarios for deploying the model are presented, as well as managerial implications, suggesting that the EEA model can be used in global organizations to differentiate environmental efforts depending on the maturity stage of the individual sites. Furthermore, the model can be used to support the decision-making process regarding when organizations should embark on more complex environmental efforts to continue to realize excellent environmental results. Finally, a development trajectory for environmental excellence is presented.

In this paper, we introduce a study carried out to validate the use of a simplified pregnant woman model for assessing fetal exposure to radio frequency waves. This simplified model, based on the use of a homogeneous tissue to replace most of the inner organs of the virtual mother, would allow us to deal with many of the issues raised by the lack of pregnant woman models for numerical dosimetry. Using specific absorption rate comparisons, we show that this model can be used to estimate fetal exposure to plane waves.

The life cycle assessment (LCA) framework has established itself as the leading tool for assessing the environmental impact of products. Several works have established the need for integrating the LCA and risk analysis methodologies, given their many common aspects. One way to reach such integration is by guaranteeing that uncertainties in LCA modeling are carefully treated. It has been claimed that more attention should be paid to quantifying the uncertainties present in the various phases of LCA. Though the topic has been attracting increasing attention from practitioners and experts in LCA, there is still a lack of understanding and a limited use of the available statistical tools. In this work, we introduce a protocol for conducting global sensitivity analysis in LCA. The article focuses on life cycle impact assessment (LCIA), and particularly on the relevance of global techniques for the development of trustworthy impact assessment models. We use a novel characterization model developed for quantifying the impacts of noise on humans as a test case. We show that global sensitivity analysis is fundamental to guarantee that the modeler has a complete understanding of (i) the structure of the model and (ii) the importance of uncertain model inputs and the interactions among them.
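As a sketch of the kind of variance-based global sensitivity analysis the protocol calls for, the snippet below estimates first-order Sobol indices with a pick-freeze Monte Carlo estimator. The two-input linear model is a hypothetical stand-in for a characterization model, not the noise-impact model of the study.

```python
import random

random.seed(1)

def model(x1, x2):
    # hypothetical stand-in for an LCIA characterization model
    return x1 + 2.0 * x2

N = 50000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]

fA = [model(*a) for a in A]
f0 = sum(fA) / N                           # mean output
var = sum((y - f0) ** 2 for y in fA) / N   # output variance

def first_order(i):
    # pick-freeze estimator: re-evaluate with input i kept from sample A,
    # the other input resampled from sample B
    acc = 0.0
    for a, b in zip(A, B):
        mixed = (a[0], b[1]) if i == 0 else (b[0], a[1])
        acc += model(*a) * model(*mixed)
    return (acc / N - f0 ** 2) / var

S1, S2 = first_order(0), first_order(1)    # analytic values: 0.2 and 0.8
```

For this additive model the two indices sum to (approximately) one; in models with interactions, the shortfall from one indicates the variance share carried by interaction terms.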

BACKGROUND: No validated model exists to explain the learning effects of assessment, which is a problem when designing and researching assessment for learning. We recently developed a model explaining the pre-assessment learning effects of summative assessment in a theory teaching context. The challenge now

As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
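IRDAM's actual methods are specified in the cited NUREG volumes; purely as an illustration of the kind of fixed-distance screening calculation such a tool performs, the sketch below evaluates a ground-level centreline Gaussian-plume dilution factor (chi/Q) at the six downwind distances, using Briggs open-country dispersion coefficients for stability class D and an assumed wind speed and release height.

```python
import math

def briggs_sigma_d(x):
    # Briggs open-country dispersion coefficients, stability class D (assumed)
    sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5
    sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5
    return sigma_y, sigma_z

def chi_over_q(x, u=5.0, h=30.0):
    # ground-level centreline dilution factor chi/Q (s/m^3) for an
    # assumed wind speed u (m/s) and effective release height h (m)
    sy, sz = briggs_sigma_d(x)
    return math.exp(-h * h / (2 * sz * sz)) / (math.pi * sy * sz * u)

distances = [500, 1000, 2000, 5000, 10000, 20000]   # metres
dilution = {x: chi_over_q(x) for x in distances}
```

Multiplying chi/Q by a release rate and a dose conversion factor would give a dose rate at each distance; the wind speed, release height, and stability class here are placeholder assumptions, not IRDAM defaults.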

Full Text Available The Tsunami Assessment Modeling System was developed by the European Commission, Joint Research Centre, in order to serve tsunami early warning systems such as the Global Disaster Alerts and Coordination System (GDACS) in the evaluation of the possible consequences of a tsunami of seismic origin. The Tsunami Assessment Modeling System is currently operational and processes in real time all the events occurring in the world, calculating the expected tsunami wave height and identifying the locations where the wave height is expected to be dangerously high. The first part of the paper describes the structure of the system, the underlying analytical models and the informatics arrangement; the second part shows the activation of the system and the results of the calculated analyses. The final part outlines future developments of this modeling tool.

Dental caries is a transmissible, complex biofilm disease that creates prolonged periods of low pH in the mouth, resulting in a net mineral loss from the teeth. Historically, the disease model for dental caries consisted of mutans streptococci and Lactobacillus species, and the dental profession focused on restoring the lesions/damage from the disease by using a surgical model. The current recommendation is to implement a risk-assessment-based medical model called CAMBRA (caries management by risk assessment) to diagnose and treat dental caries. Unfortunately, many of the suggestions of CAMBRA have been overly complicated and confusing for clinicians. The risk of caries, however, is usually related to just a few common factors, and these factors result in common patterns of disease. This article examines the biofilm model of dental caries, identifies the common disease patterns, and discusses their targeted therapeutic strategies to make CAMBRA more easily adaptable for the privately practicing professional.

The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment, and whether they could be reduced further by more surface-based investigations or, more usefully, by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface-based site investigations, has been assessed by exploring: confidence in the site characterization database; remaining issues and their handling; handling of alternatives; consistency between disciplines; and the main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. Because of the relatively robust geological model that describes the site, the Laxemar site descriptive model is judged to have an overall high level of confidence, even though details of the spatial variability remain unknown. The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

Fisheries management is mainly conducted via single-stock assessment models assuming that fish stocks do not interact, except through assumed natural mortalities. Currently, the main alternative is complex ecosystem models, which require extensive data, are difficult to calibrate, and have long run times. We propose a simple alternative. In three case studies, each with two stocks, we improve the single-stock models, as measured by the Akaike information criterion, by adding correlation in the cohort survival. To limit the number of parameters, the correlations are parameterized through the corresponding partial correlations. We consider six models where the partial correlation matrix between stocks follows a band structure ranging from independent assessments to complex correlation structures. Further, a simulation study illustrates the importance of handling correlated data sufficiently…
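The parsimony argument behind such comparisons (each added partial-correlation parameter costs 2 AIC units and must raise the log-likelihood enough to compensate) can be made concrete with a few lines of code. The log-likelihoods and parameter counts below are hypothetical, not taken from the case studies.

```python
def aic(loglik, k):
    # Akaike information criterion: 2k - 2 ln L (lower is better)
    return 2 * k - 2 * loglik

# hypothetical fits of three correlation structures to the same data
models = {
    "independent": aic(-250.0, 12),
    "band-1 partial correlation": aic(-244.5, 13),
    "full correlation": aic(-243.9, 15),
}
best = min(models, key=models.get)   # lowest AIC wins
```

In this made-up example the band-1 structure wins: the full-correlation model fits slightly better in raw likelihood, but not by enough to justify its extra parameters.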

Biocomputational modelling as developed by the European Virtual Physiological Human (VPH) Initiative is the area of ICT most likely to revolutionise the practice of medicine in the longer term. Using the example of osteoporosis management, a socio-economic assessment framework is presented that captures how the transformation of clinical guidelines through VPH models can be evaluated. Applied to the Osteoporotic Virtual Physiological Human Project, a consequent benefit-cost analysis delivers promising results, both methodologically and substantively.

This paper introduces the concept of the semi-automatic assessment of student texts that aims at offering the twin benefits of fully automatic grading and feedback together with the advantages that can be provided by human assessors. This paper concentrates on the pedagogical foundations of the model by demonstrating how the relevant findings in research into written composition and writing education have been taken into account in the model design.

We assess the ability of 11 models to reproduce three-phase oil relative permeability (kro) laboratory data obtained in a water-wet sandstone sample. We do so by considering model performance when (i) solely two-phase data are employed to render predictions of kro and (ii) two- and three-phase data are jointly used for model calibration. In the latter case, a Maximum Likelihood (ML) approach is used to estimate model parameters. The tested models are selected among (i) classical models routinely employed in practical applications and implemented in commercial reservoir software and (ii) relatively recent models which are considered to overcome some drawbacks of the classical formulations. Among others, the latter set of models includes the formulation proposed by Ranaee et al. (2015), which has been shown to embed the critical effects of hysteresis, including the reproduction of oil remobilization induced by gas injection in water-wet media. We employ formal model discrimination criteria to rank models according to their skill in reproducing the observed data and use ML Bayesian model averaging to provide model-averaged estimates (and associated uncertainty bounds) of kro by taking advantage of the diverse interpretive abilities of all the models analyzed. The occurrence of elliptic regions is also analyzed for selected models in the framework of the classical fractional flow theory of displacement. Our study confirms that model outcomes based on channel flow theory and classical saturation-weighted interpolation models do not generally yield accurate reproduction of kro data, especially in the regime associated with low oil saturations, where water-alternating-gas (WAG) injection techniques are usually employed for enhanced oil recovery. This negative feature is not observed in the model of Ranaee et al. (2015) due to its ability to embed key effects of pore-scale phase distributions, such as hysteresis effects and cycle dependency, for modeling kro observed
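Model averaging of the kind used here weighs each candidate model by its support from the data. As a simple information-criterion analogue of the study's ML Bayesian model averaging, the sketch below combines three candidate kro predictions with Akaike weights; the model names, AIC scores and predictions are hypothetical.

```python
import math

# hypothetical (AIC, predicted kro) pairs for three candidate models
candidates = {
    "channel-flow": (532.1, 0.18),
    "saturation-weighted": (529.4, 0.21),
    "hysteresis (Ranaee et al.)": (521.7, 0.26),
}

a_min = min(a for a, _ in candidates.values())
raw = {n: math.exp(-(a - a_min) / 2) for n, (a, _) in candidates.items()}
z = sum(raw.values())
weights = {n: w / z for n, w in raw.items()}   # Akaike weights, sum to 1

# model-averaged estimate of kro
kro_avg = sum(weights[n] * kro for n, (_, kro) in candidates.items())
```

With these made-up scores the hysteresis-aware model dominates the average, which mirrors the abstract's finding that it outperforms the classical formulations at low oil saturations.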

An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment

Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges associated with retrieving stratiform precipitation from satellites and the difficulty in resolving sub-grid scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter-degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation at higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables will be explored to see if these other model variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining the weights and errors of any merged products.
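One standard way to exploit complementary strengths when merging two estimates is inverse-error-variance weighting: the source with the smaller (assumed known) error variance receives the larger weight. The function and numbers below are purely illustrative, not the merging scheme of this work.

```python
def merge(sat, mod, var_sat, var_mod):
    # inverse-error-variance weighting of a satellite and a model estimate
    w_sat = (1.0 / var_sat) / (1.0 / var_sat + 1.0 / var_mod)
    return w_sat * sat + (1.0 - w_sat) * mod

# convective case: satellite skill is higher, so its error variance is lower
# and the merged value leans toward the satellite estimate
merged_convective = merge(sat=10.0, mod=6.0, var_sat=1.0, var_mod=4.0)
```

Estimating those error variances as a function of regime (convective vs stratiform) is exactly the kind of skill assessment the abstract proposes.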

Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to be an aid to sense-making rather than meeting seemingly arbitrary requirements set by the instructor. By giving students the authority to develop their own models and establish requirements for their diagrams, the sense that these are arbitrary requirements diminishes and students are more likely to see modeling as a sense-making activity. The practice of peer assessment can help students take ownership; however, it can be difficult for instructors to manage. Furthermore, it is not without risk: students can be reluctant to critique their peers, they may view this as the job of the instructor, and there is no guarantee that students will employ greater rigor and precision as a result of peer assessment. In this article, we describe one approach for peer assessment that can establish norms for diagrams in a way that is student driven, where students retain agency and authority in assessing and improving their work. We show that such an approach does indeed improve students' diagrams and abilities to assess their own work, without sacrificing students' authority and agency.

During the fifth phase of the Coupled Model Intercomparison Project (CMIP5) substantial efforts were made to systematically assess the skill of Earth system models. One goal was to check how realistically representative marine biogeochemical tracer distributions could be reproduced by models. In routine assessments, model historical hindcasts were compared with available modern biogeochemical observations. However, these assessments considered neither how close modeled biogeochemical reservoirs were to equilibrium nor the sensitivity of model performance to initial conditions or to the spin-up protocols. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We take advantage of a 500-year spin-up simulation of IPSL-CM5A-LR to quantify the influence of the spin-up protocol on model ability to reproduce relevant data fields. Amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) is assessed as a function of spin-up duration. We demonstrate that a relationship between spin-up duration and assessment metrics emerges from our model results and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift has implications for performance assessment in addition to possibly aliasing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide quantitatively more correct ESM results on marine biogeochemistry and carbon cycle feedbacks.
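The dependence of residual bias on spin-up duration can be caricatured with a single tracer relaxing exponentially toward equilibrium. The initial bias and relaxation timescale below are hypothetical, chosen only to show how drift decays with spin-up length; real biogeochemical reservoirs involve many coupled timescales.

```python
import math

def spin_up_bias(t_years, initial_bias=50.0, tau=1000.0):
    # residual tracer bias after t years of spin-up, assuming exponential
    # relaxation toward equilibrium with timescale tau (years)
    return initial_bias * math.exp(-t_years / tau)

durations = [100, 500, 1000, 2000, 5000]
residual = {t: spin_up_bias(t) for t in durations}

# per-year drift still present after a 500-year spin-up
drift_rate = spin_up_bias(500) - spin_up_bias(501)
```

With a millennial timescale, even a 500-year spin-up (as in the IPSL-CM5A-LR experiment) leaves a substantial fraction of the initial bias, which is why spin-up protocol can alias into apparent model skill.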

This paper presents an effort to create a unified model for conducting and assessing undergraduate dissertations, shared by all disciplines involved in computer game development at a Swedish university. Computer game development includes technology-oriented disciplines as well as disciplines with aesthetical traditions. The challenge has been to…

Integrated assessment models (IAMs) are regularly used to evaluate different policies of future emissions reductions. Since the global costs associated with these policies are immense, it is vital that the uncertainties in IAMs are quantified and understood. We first demonstrate the significant spre

In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage ti

The regulatory risk assessment of chemicals requires the estimation of occupational dermal exposure. Until recently, the models used were either based on limited data or were specific to a particular class of chemical or application. The EU project RISKOFDERM has gathered a considerable number of ne

Full Text Available Drought is regarded as a slow-onset natural disaster that causes inevitable damage to water resources and to farm life. Currently, crisis management is the basis of drought mitigation plans; however, studies thus far indicate that effective drought management strategies are based on risk management. As a primary tool in mitigating the impact of drought, vulnerability assessment can be used as a benchmark in drought mitigation plans and to enhance farmers’ ability to cope with drought. Moreover, literature pertaining to drought has focused extensively on its impact, awarding only limited attention to vulnerability assessment as a tool. Therefore, the main purpose of this paper is to develop a conceptual framework for designing a vulnerability model in order to assess farmers’ level of vulnerability before, during and after the onset of drought. Use of this developed drought vulnerability model would aid disaster relief workers by enhancing the adaptive capacity of farmers when facing the impacts of drought. The paper starts with the definition of vulnerability and outlines the different frameworks on vulnerability developed thus far. It then identifies various approaches to vulnerability assessment and finally offers the most appropriate model. The paper concludes that the introduced model can guide drought mitigation programs in countries that are impacted the most by drought.

The goal of this paper is to present a heuristic model of a composite environmental quality index based on the integrated application of elements of utility theory, multidimensional scaling, expert evaluation and decision-making. The composite index is synthesized in linear-quadratic form, which provides closer agreement between the assessment results and the preferences of experts and decision-makers.

Studies that use structural equation modeling (SEM) techniques are increasingly encountered in the language assessment literature. This popularity has created the need for a set of guidelines that can indicate what should be included in a research report and make it possible for research consumers to judge the appropriateness of the…

The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

Unrestricted use of pesticides in agriculture threatens ground-water resources and can have adverse ecological impact on the nation's receiving surface waters. In this paper, we develop mass fraction models for exposure assessment and the regulation of agricultural organic chemic...

A framework is introduced for considering dimensionality assessment procedures for multidimensional item response models. The framework characterizes procedures in terms of their confirmatory or exploratory approach, parametric or nonparametric assumptions, and applicability to dichotomous, polytomous, and missing data. Popular and emerging…

This study involves the assessment of the quality management models in Higher Education by explaining the importance of quality in higher education and by examining the higher education quality assurance system practices in other countries. The qualitative study was carried out with the members of the Higher Education Planning, Evaluation,…

The Pressure Systems Manager at NASA Ames Research Center (ARC) has embarked on a project to collect data and develop risk assessment models to support risk-informed decision making regarding future inspections of underground pipes at ARC. This paper shows progress in one area of this project - a corrosion risk assessment model for the underground high-pressure air distribution piping system at ARC. It consists of a Corrosion Model for pipe segments, a Pipe Wrap Protection Model, and a Pipe Stress Model for a pipe segment. A Monte Carlo simulation of the combined models provides a distribution of the failure probabilities. Sensitivity study results show that model uncertainty, or lack of knowledge, is the dominant contributor to the calculated unreliability of the underground piping system. As a result, the Pressure Systems Manager may consider investing resources specifically focused on reducing these uncertainties. Future work includes completing the data collection effort for the existing ground-based pressure systems and applying the risk models to risk-based inspection strategies for the underground pipes at ARC.
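A minimal sketch of such a combined Monte Carlo model is given below. The corrosion-rate distribution, wrap reliability, wall thickness and stress-model limit are all hypothetical placeholders, not ARC data; the point is only how the three sub-models combine into a sampled failure probability.

```python
import random

random.seed(42)

def simulate_failure_prob(n=100000, wall_mm=6.0, age_years=40.0):
    # combine a corrosion model (lognormal rate), a pipe-wrap protection
    # model (an intact wrap slows corrosion) and a stress-model limit on
    # remaining wall thickness; all parameters are hypothetical
    failures = 0
    for _ in range(n):
        rate = random.lognormvariate(-3.5, 0.8)    # corrosion rate, mm/year
        wrap_intact = random.random() < 0.7        # wrap protection model
        effective = rate * (0.2 if wrap_intact else 1.0)
        remaining = wall_mm - effective * age_years
        if remaining < 2.0:                        # stress-model minimum wall
            failures += 1
    return failures / n

p_fail = simulate_failure_prob()
```

Running a sensitivity study over the assumed distribution parameters, as the paper does, would reveal which of the three sub-models dominates the computed unreliability.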

Urban road tunnels provide an increasingly cost-effective engineering solution, especially in compact cities like Singapore. For some urban road tunnels, tunnel characteristics such as tunnel configurations, geometries, provisions of tunnel electrical and mechanical systems, traffic volumes, etc. may vary from one section to another. Urban road tunnels characterized by such nonuniform parameters are referred to as nonhomogeneous urban road tunnels. In this study, a novel quantitative risk assessment (QRA) model is proposed for nonhomogeneous urban road tunnels because the existing QRA models for road tunnels are inapplicable to assess the risks in these road tunnels. This model uses a tunnel segmentation principle whereby a nonhomogeneous urban road tunnel is divided into various homogeneous sections. Individual risk for road tunnel sections as well as the integrated risk indices for the entire road tunnel are defined. The article then proceeds to develop a new QRA model for each of the homogeneous sections. Compared to the existing QRA models for road tunnels, this section-based model incorporates one additional top event (toxic gases due to traffic congestion) and employs the Poisson regression method to estimate the vehicle accident frequencies of tunnel sections. This article further illustrates an aggregated QRA model for nonhomogeneous urban tunnels by integrating the section-based QRA models. Finally, a case study in Singapore is carried out.
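The Poisson-regression step can be sketched as a small Newton-Raphson fit of a log-linear accident rate per tunnel section, with section length entering as an exposure term. The per-section data (traffic index, length, accident count) below are hypothetical.

```python
import math

# hypothetical per-section data: (traffic index, length in km, accidents)
sections = [(1.0, 0.5, 2), (2.0, 0.8, 6), (3.0, 1.2, 14),
            (1.5, 0.6, 4), (2.5, 1.0, 9)]

def fit_poisson(data, iters=50):
    # log-linear rate model: E[count] = length * exp(b0 + b1 * traffic),
    # fitted by Newton-Raphson on the Poisson log-likelihood
    total_y = sum(y for _, _, y in data)
    total_len = sum(length for _, length, _ in data)
    b0, b1 = math.log(total_y / total_len), 0.0   # sensible starting point
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, length, y in data:
            mu = length * math.exp(b0 + b1 * x)
            g0 += y - mu                # score w.r.t. intercept b0
            g1 += (y - mu) * x          # score w.r.t. slope b1
            h00 += mu                   # Fisher information terms
            h01 += mu * x
            h11 += mu * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_poisson(sections)
```

At convergence the fitted expected counts reproduce the observed total exactly, a standard property of the Poisson score equations.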

Objective: To quantitatively compare condylar morphology using CBCT and MSCT virtual 3D surface models. Study Design: The sample consisted of secondary data analysis of CBCT and MSCT scans obtained for clinical purposes from 74 patients treated with condylar resection and prosthetic joint replacement. 3D surface models of 146 condyles were constructed from each scan modality. Across-subject models were approximated and voxel-based registration was performed between homologous CBCT and MSCT images, making it possible to create average CBCT- and MSCT-based condylar models. SPHARM-PDM provided matching points on each corresponding model. ShapeAnalysisMANCOVA assessed statistically significant differences between observers and imaging modalities. A one-sample t-test evaluated the null hypothesis that the mean differences between each CBCT- and MSCT-based model were not clinically significant (0.68). During pairwise comparison, the mean difference observed was 0.406 mm, SD 0.173. The one-sample t-test showed that mean differences between each paired CBCT- and MSCT-based model were not clinically significant (P=0.411). Conclusion: 3D surface models constructed from CBCT images are comparable to those derived from MSCT scans and may be considered reliable tools for assessing condylar morphology. PMID:26679363
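The one-sample t-test at the heart of this comparison can be reproduced in outline as follows. The per-pair surface distances and the 0.5 mm threshold here are hypothetical stand-ins, not the study's data or its clinical threshold.

```python
import math
import statistics

def one_sample_t(sample, mu0):
    # t statistic for H0: population mean equals mu0
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)      # sample standard deviation (n - 1)
    return (m - mu0) / (s / math.sqrt(n))

# hypothetical mean surface distances (mm) for ten CBCT/MSCT model pairs
diffs = [0.21, 0.55, 0.62, 0.28, 0.19, 0.57, 0.41, 0.26, 0.64, 0.33]
t = one_sample_t(diffs, mu0=0.5)
# |t| falls below 2.262 (the 5% two-sided critical value for df = 9),
# so H0 is not rejected for this illustrative sample
```

With the study's much larger sample (146 condyles) the same statistic would be compared against a critical value close to the normal-approximation value of 1.96.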

The hydrological decade on Predictions in Ungauged Basins (PUB) [1] led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to the limited number of published studies on genuinely ungauged basins, model validation and realism assessment of model outcomes have not been discussed to a great extent. With this study [2] we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. We do not present a generic approach that can be transferred to other ungauged catchments, but we aim to show how clever model design and alternative data acquisition can result in a valuable hydrological model for ungauged catchments. [1] Sivapalan, M., Takeuchi, K., Franks, S., Gupta, V., Karambiri, H., Lakshmi, V., et al. (2003). IAHS decade on predictions in ungauged basins (PUB), 2003-2012: shaping an exciting future for the hydrological sciences. Hydrol. Sci. J. 48, 857-880. doi: 10.1623/hysj.48.6.857.51421 [2] van Emmerik, T., Mulder, G., Eilander, D., Piet, M. and Savenije, H. (2015). Predicting the ungauged basin: model validation and realism assessment

Full Text Available In the global marketplace, the ability to communicate, both orally and in writing, is a skill set demanded by employers. Unfortunately, typical academic exercises that involve written and oral communication are often just that: academic exercises. To provide a more authentic and robust experience, a student conference activity has been developed for use in a second-level physics course entitled Physics for a New Millennium (PNM) at American University (AU). This activity involves writing a formal research paper using professional guidelines. In addition, students present their research paper during a class event modeled after an actual professional conference. A focus of this paper is to discuss the assessment strategies developed for the conference paper activity. A major goal of the assessment strategies designed for the conference paper and the associated presentation is to better capture (and then assess) what students are actually learning in the course. This paper will provide an overview of the student conference paper activity with emphasis on its value as an alternative assessment tool. To that end, a synopsis of how the conference paper activity has been designed will be shared. This synopsis will begin with a general discussion of assessment, assessment methods, and the "language of assessment." Following this synopsis, a model of non-traditional assessment using the student conference paper will be highlighted. Subsequently, a description of the course curriculum and the specific structure for the writing activity will be outlined as they relate to the learning outcomes for the course. Shadowing the presentation of the course-specific learning outcomes, a description of the strategies used to uncover student learning will be shared. These strategies provide an opportunity for multiple assessment "snapshots" to be made throughout various phases of the learning process. To illustrate these snapshots, examples from actual student

Managing and mitigating induced seismicity during reservoir stimulation and operation is a critical prerequisite for many GeoEnergy applications. We are currently developing and validating so-called 'Adaptive Traffic Light Systems' (ATLS), fully probabilistic forecast models that integrate all relevant data on the fly into a time-dependent hazard and risk model. The combined model intrinsically considers both aleatory and model uncertainties; the robustness of the forecast is maximized by using dynamically updated ensemble weighting. At the heart of the ATLS approach is a variety of forecast models that range from purely statistical models, such as flow-controlled Epidemic Type Aftershock Sequence (ETAS) models, to models that consider various physical interaction mechanisms (e.g., pore pressure changes, dynamic and static stress transfer, volumetric strain changes). The automated re-calibration of these models on the fly given data imperfections, degrees of freedom, and time constraints is a sizable challenge, as is the validation of the models for applications outside of their calibrated range (different settings, larger magnitudes, changes in physical processes, etc.). Here we present an overview of the status of the model development, calibration and validation. We also demonstrate how such systems can contribute to a quantitative risk assessment and mitigation of induced seismicity in a wide range of applications and time scales.

Stream-habitat assessment for evaluation of restoration projects requires the examination of many parameters, both watershed-scale and reach-scale, to incorporate the complex non-linear effects of geomorphic, riparian, watershed and hydrologic factors on aquatic ecosystems. Rapid geomorphic assessment tools used by many jurisdictions to assess natural channel design projects seldom include watershed-level parameters, which have been shown to have a significant effect on benthic habitat in stream systems. In this study, Artificial Neural Network (ANN) models were developed to integrate complex non-linear relationships between the aquatic ecosystem health indices and key watershed-scale and reach-scale parameters. Physical stream parameters, based on QHEI parameters, and watershed characteristics data were collected at 112 sites on 62 stream systems located in Southern Ontario. Benthic data were collected separately and benthic invertebrate summary indices, specifically Hilsenhoff's Biotic Index (HBI) and Richness, were determined. The ANN models were trained on a randomly selected 3/4 of the dataset of 112 sites in Ontario, Canada and validated on the remaining 1/4. The R2 values for the developed ANN model predictions were 0.86 for HBI and 0.92 for Richness. Sensitivity analysis of the trained ANN models revealed that Richness was directly proportional to Erosion and Riparian Width and inversely proportional to Floodplain Quality and Substrate parameters. HBI was directly proportional to Velocity Types and Erosion and inversely proportional to Substrate, % Treed and 1:2 Year Flood Flow parameters. The ANN models can be useful tools for watershed managers in stream assessment and restoration projects by allowing consideration of watershed properties in the stream assessment.
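The 3/4 training, 1/4 validation split and the R2 score used to evaluate the ANN predictions can be sketched as follows. The site indices are placeholders, not the Ontario dataset, and the ANN itself is omitted.

```python
import random

def r_squared(observed, predicted):
    """Coefficient of determination used to score model predictions."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def train_validation_split(sites, train_fraction=0.75, seed=0):
    """Random 3/4 training, 1/4 validation split, as in the study."""
    rng = random.Random(seed)
    shuffled = sites[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 112 site indices stand in for the (parameters, HBI/Richness) records.
train, valid = train_validation_split(list(range(112)))
```

With 112 sites this yields 84 training and 28 validation records, matching the reported split proportions.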

The Industrial Process System Assessment (IPSA) methodology is a multiple step allocation approach for connecting information from the production line level up to the facility level and vice versa using a multiscale model of process systems. The allocation procedure assigns inpu...

During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one "preferred" GIA model has been used, without any consideration of the possible errors involved. Lacking a rigorous assessment of systematic errors in GIA modeling, the reliability of the results is uncertain. GIA sensitivity and uncertainties associated with the viscosity models have been explored, as have other error sources such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...
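A minimal sketch of the Monte Carlo propagation step, assuming a toy one-parameter "GIA response" and a lognormal viscosity uncertainty; both are stand-ins for the real Earth model, chosen only to show how parameter uncertainty maps into a mean and spread of the GIA signal.

```python
import random
import statistics

def gia_response(viscosity, ice_load):
    """Toy stand-in for a GIA model: uplift signal as a simple function
    of mantle viscosity and local ice-load change (illustrative only)."""
    return ice_load / viscosity

def monte_carlo_gia_error(ice_load, n_samples=10000, seed=1):
    """Propagate a lognormal viscosity uncertainty through the toy
    response to obtain the mean and standard deviation of the signal."""
    rng = random.Random(seed)
    samples = [gia_response(rng.lognormvariate(0.0, 0.3), ice_load)
               for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_uplift, uplift_sigma = monte_carlo_gia_error(ice_load=2.0)
```

In the study the same idea is applied per grid cell and epoch, producing spatio-temporal error patterns rather than a single scalar spread.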

Prediction of river basin hydrological response to extreme meteorological events is a primary concern in areas with frequent flooding, landslides, and debris flows. Natural hydrogeological disasters in many regions lead to extensive property damage, impact on societal activities, and loss of life. Hydrologists have a long history of assessing and predicting hydrologic hazards through the combined use of field observations, monitoring networks, remote sensing, and numerical modeling. Nevertheless, the integration of field data and computer models has yet to result in prediction systems that capture space-time interactions between meteorological forcing, land surface characteristics, and the internal hydrological response in river basins. Capabilities for assessing hydrologic extreme events are greatly enhanced via the use of geospatial data sets describing watershed properties such as topography, channel structure, soils, vegetation, and geological features. Recent advances in managing, processing, and visualizing cartographic data with geographic information systems (GIS) have enabled their direct use in spatially distributed hydrological models. In a distributed model application, geospatial data sets can be used to establish the model domain, specify boundary and initial conditions, determine the spatial variation of parameter values, and provide the spatial model forcing. By representing a watershed through a set of discrete elements, distributed models simulate water, energy, and mass transport in a landscape and provide estimates of the spatial pattern of hydrologic states, fluxes, and pathways.

Background The Decision-Making Capacity Assessment (DMCA) Model includes a best-practice process and tools to assess DMCA, and implementation strategies at the organizational and assessor levels to support provision of DMCAs across the care continuum. A Developmental Evaluation of the DMCA Model was conducted. Methods A mixed methods approach was used. Survey (N = 126) and focus group (N = 49) data were collected from practitioners utilizing the Model. Results Strengths of the Model include its best-practice and implementation approach, applicability to independent practitioners and inter-professional teams, focus on training/mentoring to enhance knowledge/skills, and provision of tools/processes. Post-training, participants agreed that they followed the Model’s guiding principles (90%), used problem-solving (92%), understood discipline-specific roles (87%), were confident in their knowledge of DMCAs (75%) and pertinent legislation (72%), accessed consultative services (88%), and received management support (64%). Model implementation is impeded when role clarity, physician engagement, inter-professional buy-in, accountability, dedicated resources, information sharing systems, and remuneration are lacking. Dedicated resources, job descriptions inclusive of DMCAs, ongoing education/mentoring supports, access to consultative services, and appropriate remuneration would support implementation. Conclusions The DMCA Model offers practitioners, inter-professional teams, and organizations a best-practice and implementation approach to DMCAs. Addressing barriers and further contextualizing the Model would be warranted. PMID:27729947

Applications and modelling have gained a prominent role in mathematics education reform documents and curricula. Thus, there is a growing need for studies focusing on the effective use of mathematical modelling in classrooms. Assessment is an integral part of using modelling activities in classrooms, since it allows teachers to identify and manage…

During the fifth phase of the Coupled Model Intercomparison Project (CMIP5) substantial efforts were made to systematically assess the skills of Earth system models against available modern observations. However, most of these skill-assessment approaches can be considered as "blind" given that they were applied without considering models' specific characteristics and treat models a priori as independent of observations. Indeed, since these models are typically initialized from observations, the spin-up procedure (e.g. the length of time for which the model has been run since initialization, and therefore the degree to which it has approached its own equilibrium) has the potential to exert a significant control over the skill-assessment metrics calculated for each model. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESM) contributes to model-to-model differences in the simulated fields. We focus on the amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) as a function of spin-up duration in a dedicated 500-year-long spin-up simulation performed with IPSL-CM5A-LR as well as an ensemble of 24 CMIP5 ESMs. We demonstrate that a relationship between spin-up duration and skill-assessment metrics emerges from the results of a single model and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift in biogeochemical fields has implications for performance assessment, in addition to possibly influencing estimates of climate change impacts. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide quantitatively more correct ESM results on marine biogeochemistry and carbon cycle feedbacks.
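The dependence of bias on spin-up duration can be illustrated with a simple exponential-relaxation toy model: a field initialized from observations relaxes toward the model's own (biased) equilibrium, so the longer the spin-up, the larger the apparent bias against observations. The e-folding time and field values below are assumptions, not CMIP5 diagnostics.

```python
import math

def drifted_field(obs_value, equilibrium_value, spinup_years, tau=150.0):
    """Exponential relaxation of a biogeochemical field from its
    observation-based initial state toward the model's equilibrium;
    tau is an assumed e-folding time."""
    weight = math.exp(-spinup_years / tau)
    return weight * obs_value + (1.0 - weight) * equilibrium_value

def bias(obs_value, equilibrium_value, spinup_years):
    """Absolute deviation from observations after a given spin-up."""
    return abs(drifted_field(obs_value, equilibrium_value, spinup_years)
               - obs_value)

# Bias relative to observations grows with spin-up duration as, e.g., a
# simulated O2 field approaches the model's biased equilibrium.
biases = [bias(200.0, 170.0, t) for t in (0, 100, 300, 500)]
```

This monotonic growth is exactly why skill metrics computed on lightly spun-up models can look artificially good.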

In order to have confidence in model-based phylogenetic methods, such as maximum likelihood (ML) and Bayesian analyses, one must use an appropriate model of molecular evolution identified using statistically rigorous criteria. Although model selection methods such as the likelihood ratio test and Akaike information criterion are widely used in the phylogenetic literature, model selection methods lack the ability to reject all models if they provide an inadequate fit to the data. There are two methods, however, that assess absolute model adequacy, the frequentist Goldman-Cox (GC) test and Bayesian posterior predictive simulations (PPSs), which are commonly used in conjunction with the multinomial log likelihood test statistic. In this study, we use empirical and simulated data to evaluate the adequacy of common substitution models using both frequentist and Bayesian methods and compare the results with those obtained with model selection methods. In addition, we investigate the relationship between model adequacy and performance in ML and Bayesian analyses in terms of topology, branch lengths, and bipartition support. We show that tests of model adequacy based on the multinomial likelihood often fail to reject simple substitution models, especially when the models incorporate among-site rate variation (ASRV), and normally fail to reject less complex models than those chosen by model selection methods. In addition, we find that PPSs often fail to reject simpler models than the GC test. Use of the simplest substitution models not rejected based on fit normally results in similar, though not identical, estimates of tree topology and branch lengths. In addition, use of the simplest adequate substitution models can affect estimates of bipartition support, although these differences are often small with the largest differences confined to poorly supported nodes. We also find that alternative assumptions about ASRV can affect tree topology, tree length, and bipartition support. Our

We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.

Full Text Available Background: The assessment of patient clinical outcome focuses on measuring various aspects of the health status of a patient who is under healthcare intervention. Patient clinical outcome assessment is a very significant process in the clinical field, as it allows health care professionals to better understand the effectiveness of their health care programs and thus to enhance health care quality in general. It is thus vital that a high-quality, informative review of current issues regarding the assessment of patient clinical outcome be conducted. Aims & Objectives: (1) summarize the advantages of the assessment of patient clinical outcome; (2) review some of the existing patient clinical outcome assessment models, namely: simulation, Markov, Bayesian belief network, Bayesian statistics and conventional statistics, and Kaplan-Meier analysis models; and (3) demonstrate the desired features that should be fulfilled by a well-established, ideal patient clinical outcome assessment model. Material & Methods: An integrative review of the literature has been performed using Google Scholar to explore the field of patient clinical outcome assessment. Conclusion: This paper will directly support researchers, clinicians and health care professionals in their understanding of developments in the domain of the assessment of patient clinical outcome, thus enabling them to propose ideal assessment models.

Full Text Available It is essential to consider an acceptable error threshold in the assessment of a hydrological model, both because research on this question is scarce in the hydrology community and because forecast errors do not necessarily translate into risk. Two forecast errors, the rainfall forecast error and the peak flood forecast error, have been studied based on reliability theory. The first order second moment (FOSM) and bound methods are used to identify the reliability. Through the case study of the Dahuofang (DHF) Reservoir, it is shown that the correlation between these two errors has a great influence on the reliability index of the hydrological model. In particular, the reliability index of the DHF hydrological model decreases with increasing correlation. Based on reliability theory, the proposed performance evaluation framework, incorporating the acceptable forecast error threshold and the correlation among multiple errors, can be used to evaluate the performance of a hydrological model and to quantify the uncertainties of a hydrological model output.
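A minimal FOSM sketch, assuming illustrative moments rather than the DHF calibration, reproduces the qualitative result above: the reliability index decreases as the correlation between the two forecast errors increases, because the covariance term inflates the combined error variance.

```python
import math

def fosm_reliability_index(margin_mean, sigma_rain, sigma_peak, rho):
    """First order second moment (FOSM) reliability index for a safety
    margin affected by two correlated forecast errors. The variance of
    the combined error includes the covariance term 2*rho*s1*s2."""
    combined_sigma = math.sqrt(sigma_rain ** 2 + sigma_peak ** 2
                               + 2.0 * rho * sigma_rain * sigma_peak)
    return margin_mean / combined_sigma

# Illustrative moments (not the DHF values): beta shrinks as the
# correlation between rainfall and peak-flood forecast errors grows.
betas = [fosm_reliability_index(3.0, 1.0, 1.5, rho)
         for rho in (0.0, 0.3, 0.6, 0.9)]
```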

Accurate aerodynamic prediction is critical for the design and optimization of hypersonic vehicles. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating for these systems. The first goal of this article is to update the previous comprehensive review of hypersonic shock/turbulent boundary-layer interaction experiments published in 1991 by Settles and Dodson (Hypersonic shock/boundary-layer interaction database. NASA CR 177577, 1991). In their review, Settles and Dodson developed a methodology for assessing experiments appropriate for turbulence model validation and critically surveyed the existing hypersonic experiments. We limit the scope of our current effort by considering only two-dimensional (2D)/axisymmetric flows in the hypersonic flow regime where calorically perfect gas models are appropriate. We extend the prior database of recommended hypersonic experiments (on four 2D and two 3D shock-interaction geometries) by adding three new geometries. The first two geometries, the flat plate/cylinder and the sharp cone, are canonical, zero-pressure gradient flows which are amenable to theory-based correlations, and these correlations are discussed in detail. The third geometry added is the 2D shock impinging on a turbulent flat plate boundary layer. The current 2D hypersonic database for shock-interaction flows thus consists of nine experiments on five different geometries. The second goal of this study is to review and assess the validation usage of various turbulence models on the existing experimental database. Here we limit the scope to one- and two-equation turbulence models where integration to the wall is used (i.e., we omit studies involving wall functions). A methodology for validating turbulence models is given, followed by an extensive evaluation of the turbulence models on the current hypersonic experimental database. A total of 18 one- and two-equation turbulence models are reviewed

The problem of comparing and matching different learners’ knowledge arises when assessment systems use a one-dimensional numerical value to represent “knowledge level”. Such assessment systems may measure inconsistently because they estimate this level differently and inadequately. The multi-dimensional competency model called COMpetence-Based learner knowledge for personalized Assessment (COMBA) is being developed to represent a learner’s knowledge in a multi-dimensional vector space. The heart of this model is to treat knowledge, not as possession, but as a contextualized space of capability either actual or potential. The paper discusses a system for automatically generating questions from the COMBA competency model as a “guide-on-the-side”. The system’s novel design and implementation involves an ontological database that represents the intended learning outcome to be assessed across a number of dimensions, including level of cognitive ability and subject matter. The system generates all the questions that are possible from a given learning outcome, which may then be used to test for understanding, and so could determine the degree to which learners actually acquire the desired knowledge.

A number of comprehensive country energy assessments were performed in the late 1970s and early 1980s in cooperation with the governments of various countries. The assessments provided a framework for analyzing the impacts of various national strategies for meeting energy requirements. These analyses considered the total energy framework. Economics, energy supply, national resources, energy use, environmental impacts, technologies, energy efficiencies, and sociopolitical impacts were some of the factors addressed. These analyses incorporated the best available databases and computer models to facilitate the analyses. National policy makers identified the various strategies to examine. The results of the analyses were provided to the national policy makers to support their decision making. Almost 20 years have passed since these assessments were performed. There have been major changes in energy supply and use, technologies, economics, available resources, and environmental concerns. The available tools for performing the assessments have improved drastically. The availability of improved computer modeling, i.e., MARKAL-MACRO, and improved data collection methods and databases now permits such assessments to be performed in a more sophisticated manner to provide state-of-the-art support to policy makers. The MARKAL-MACRO model was developed by Brookhaven National Laboratory over the last 25 years to support strategic energy planning. It is widely used in the international community for integrating analyses of environmental options, such as reduction of greenhouse gas emissions. It was used to perform the analyses in the least cost energy strategy study for the Energy Policy Act of 1992. Improvements continue to be made to MARKAL-MACRO and its capabilities extended. A methodology to conduct country energy assessments using MARKAL-MACRO is discussed.

With the increasing global development of wind energy, collision risk models (CRMs) are routinely used to assess the potential impacts of wind turbines on birds. We reviewed and compared the avian collision risk models currently available in the scientific literature, exploring aspects such as the calculation of a collision probability, inclusion of stationary components (e.g. the tower), angle of approach, and uncertainty. 10 models were cited in the literature and all of these included the probability of a single bird colliding with a wind turbine during passage through the rotor swept area, while the majority included a measure of the number of birds at risk. 7 out of the 10 models calculated the probability of birds colliding, whilst the remainder used a constant. We identified four approaches to calculating the probability of collision, and these were reused by the other models. 6 of the 10 models were deterministic and included the most frequently used models in the UK, with only 4 including variation or uncertainty in some way, the most recent using Bayesian methods. Despite their appeal, CRMs have their limitations and can be 'data hungry' as well as assuming much about bird movement and behaviour. As data become available, these assumptions should be tested to ensure that CRMs are functioning to adequately answer the questions posed by the wind energy sector. - Highlights: • We highlighted ten models available to assess avian collision risk. • Only 4 of the models included variability or uncertainty. • Collision risk models have limitations and can be 'data hungry'. • It is vital that the most appropriate model is used for a given task.
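The core calculation shared by these models can be sketched as flux times per-passage collision probability times (1 - avoidance rate). All numbers below are illustrative placeholders, not values from any reviewed model.

```python
def expected_collisions(bird_flux, p_collision, avoidance_rate):
    """Expected annual collisions: number of bird passages through the
    rotor swept area, times the per-passage collision probability,
    discounted by the behavioural avoidance rate."""
    return bird_flux * p_collision * (1.0 - avoidance_rate)

# Illustrative inputs: 5000 passages/yr through the rotor swept area,
# 8% per-passage collision probability, 98% assumed avoidance.
n = expected_collisions(5000, 0.08, 0.98)
```

Note how strongly the result depends on the assumed avoidance rate, which is one reason the deterministic models are criticised for omitting uncertainty.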

Decadal climate forecasts with full-field initialized coupled climate models are affected by a growing error signal that develops due to the adjustment of the simulations from the assimilated state consistent with observations to the state consistent with the biased model's climatology. Sea-surface temperature (SST) drifts and biases are a major concern due to the central role of SST properties for the dynamical coupling between the atmosphere and the ocean, and for the associated variability. Therefore, strong SST drifts complicate the initialization and assessment of decadal climate prediction experiments, and can be detrimental for their overall quality. We propose a dynamic linear model based on a state-space approach and developed within a Bayesian hierarchical framework for probabilistic assessment of spatial and temporal characteristics of SST drifts in ensemble climate simulations. The state-space approach uses unobservable state variables to directly model the processes generating the observed variability. The statistical model is based on a sequential definition of the process having a conditional dependency only on the previous time step, which therefore corresponds to the Kalman filter formulas. In our formulation, the statistical model distinguishes between seasonal and longer-term drift components, and between large-scale and local drifts. We apply the Bayesian method to make inferences on the variance components of the Gaussian errors in both the observation and system equations of the state-space model. To this purpose, we draw samples from their posterior distributions using a Markov chain Monte Carlo (MCMC) simulation technique with a Gibbs sampler. In this contribution we illustrate a first application of the model using the MiKlip prototype system for decadal climate predictions. We focus on the tropical Atlantic Ocean - a region where climate models are typically affected by a severe warm SST bias - to demonstrate how our approach allows for a more
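A local-level model is the simplest instance of the state-space approach described above: a hidden state following a random walk, observed with noise, filtered with the Kalman recursion. The SST anomaly values and variances below are hypothetical, and the sketch omits the seasonal and hierarchical components of the actual model.

```python
def kalman_local_level(observations, q, r, x0=0.0, p0=1.0):
    """Kalman filter for a local-level state-space model: the hidden
    state (here, a slowly varying SST drift component) follows a random
    walk with system variance q and is observed with noise variance r."""
    x, p = x0, p0
    filtered = []
    for y in observations:
        p = p + q                  # predict: state variance grows by q
        k = p / (p + r)            # Kalman gain
        x = x + k * (y - x)        # update state with the innovation
        p = (1.0 - k) * p          # update state variance
        filtered.append(x)
    return filtered

# A drifting SST anomaly series (hypothetical): the filter tracks the
# underlying drift while smoothing the observation noise.
series = [0.1, 0.3, 0.2, 0.5, 0.6, 0.8, 0.7, 1.0]
drift = kalman_local_level(series, q=0.01, r=0.1)
```

In the paper's Bayesian formulation, q and r are not fixed but sampled from their posteriors with a Gibbs sampler.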

The purpose of this document is to profile analytical tools and methods which could be used in a total fuel cycle analysis. The information in this document provides a significant step towards: (1) Characterizing the stages of the fuel cycle. (2) Identifying relevant impacts which can feasibly be evaluated quantitatively or qualitatively. (3) Identifying and reviewing other activities that have been conducted to perform a fuel cycle assessment or some component thereof. (4) Reviewing the successes/deficiencies and opportunities/constraints of previous activities. (5) Identifying methods and modeling techniques/tools that are available, tested and could be used for a fuel cycle assessment.

Every four years, earth scientists work together on a National Climate Assessment (NCA) report which integrates, evaluates, and interprets the findings of climate change and its impacts on affected industries such as agriculture, the natural environment, and energy production and use. Given the amount of information presented in each report, and the wide range of information sources and topics, it can be difficult for users to find and identify desired information. To ease the user effort of information discovery, well-structured metadata is needed that describes the report's key statements and conclusions and provides for traceable provenance of the data sources used. We present an assessment ontology developed to describe the terms, concepts and relations required for the NCA metadata. Wherever possible, the assessment ontology reuses terms from well-known ontologies such as the Semantic Web for Earth and Environmental Terminology (SWEET) ontology and the Dublin Core (DC) vocabulary. We have generated sample National Climate Assessment metadata conforming to our assessment ontology and publicly exposed it via a SPARQL endpoint and website. We have also modeled provenance information for the NCA writing activities using the W3C candidate-recommendation PROV-O ontology. Using this provenance the user will be able to trace the sources of information used in the assessment and therefore make trust decisions. In the future, we are planning to implement a faceted browser over the metadata to enhance metadata traversal and information discovery.

In this paper, we discuss the most important theoretical aspects of polluted soil Risk Assessment Methodologies, which have been developed in order to evaluate the risk, for exposed people, connected with the residual contaminant concentration in polluted soil, and we make a short presentation of the major kinds of risk assessment methodologies. We also underline the relevant role played, in this kind of analysis, by pollutant transport models. We then describe a new and innovative model, based on the general framework of so-called Cellular Automata (CA), initially developed in the EU Esprit project COLOMBO for the simulation of bioremediation processes. Owing to their intrinsically "finite and discrete" character, these models seem very well suited for a detailed analysis of the shape of the pollutant sources, the fate of contaminants, and the evaluation of targets in the risk assessment. In particular, we describe the future research activities we are going to develop in the area of a strict integration between pollutant fate and transport models and Risk Analysis Methodologies.
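A toy 1-D cellular automaton conveys the flavour of such transport rules (the actual COLOMBO rules are not specified in this abstract): each cell synchronously exchanges a fraction of its contaminant mass with its neighbours, so a point source spreads while total mass is conserved.

```python
def ca_diffusion_step(cells, rate=0.2):
    """One synchronous update of a 1-D cellular automaton: each cell
    relaxes toward its neighbours' values (discrete Laplacian), with
    reflective boundaries so contaminant mass is conserved. A toy
    analogue of a CA transport rule, not the COLOMBO implementation."""
    n = len(cells)
    new = cells[:]
    for i in range(n):
        left = cells[i - 1] if i > 0 else cells[i]
        right = cells[i + 1] if i < n - 1 else cells[i]
        new[i] = cells[i] + rate * (left + right - 2.0 * cells[i])
    return new

# A point source in the middle of the domain spreads outward over five
# update steps while the total contaminant mass stays constant.
state = [0.0] * 4 + [10.0] + [0.0] * 4
for _ in range(5):
    state = ca_diffusion_step(state)
```

The "finite and discrete" character mentioned above is visible here: source shape, fate, and exposure targets are all just cell states on the same grid.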

In the last two decades, animal models have become important tools for understanding and treating pain and for predicting analgesic efficacy. Although rodent models retain a dominant role in the study of pain mechanisms, large animal models may predict human biology and pharmacology more accurately in certain pain conditions. Given the anatomical and physiological characteristics common to humans and pigs (median body size; digestive apparatus; number, size, distribution, and communication of vessels in dermal skin; epidermal-dermal junctions; immunoreactivity of peptide nerve fibers; distribution of nociceptive and non-nociceptive fiber classes; and changes in axonal excitability), the pig seems to provide the most suitable animal model for pain assessment. Locomotor function, clinical signs and measurements (respiratory rate, heart rate, blood pressure, temperature, electromyography), behavior (bright/quiet, alert, responsive, depressed, unresponsive), plasma concentrations of substance P and cortisol, vocalization, lameness, and axon-reflex vasodilatation measured by laser Doppler imaging have all been used to assess pain, but none of these evaluations has proved entirely satisfactory. It is necessary to identify new methods for evaluating pain in large animals (particularly pigs) because of their similarities to humans. This could lead to improved assessment of pain and improved analgesic treatment for both humans and laboratory animals. PMID:24855386

There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are increasingly used as a basis for investment and planning decisions on an ever larger scale, there is a need to reduce the uncertainties involved and to develop a harmonised European approach, in particular with respect to the EU Floods Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly and that current methodologies for estimating infrastructural damage are less well developed than those for estimating damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e., depth-damage functions) and exposure (i.e., asset values), with the former having a larger effect than the latter. We conclude that care must be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while leaving room for necessary regional adjustments.
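The basic depth-damage mechanics underlying such flood damage models can be illustrated with a minimal sketch. All curve points and asset values below are invented for illustration; they are not taken from any of the seven assessed models:

```python
# Hypothetical sketch: damage = asset value x damage fraction taken from a
# depth-damage curve, linearly interpolated between tabulated depths.
from bisect import bisect_left

# Tabulated (depth [m], damage fraction) pairs -- illustrative values only.
CURVE = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4), (2.0, 0.7), (4.0, 1.0)]

def damage_fraction(depth):
    """Linearly interpolate the damage fraction for a given water depth."""
    if depth <= CURVE[0][0]:
        return CURVE[0][1]
    if depth >= CURVE[-1][0]:
        return CURVE[-1][1]
    i = bisect_left([d for d, _ in CURVE], depth)
    (d0, f0), (d1, f1) = CURVE[i - 1], CURVE[i]
    return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)

def flood_damage(depth, asset_value):
    """Estimated absolute damage for one asset at the given water depth."""
    return asset_value * damage_fraction(depth)
```

A sensitivity test of the kind described above would then perturb the curve (vulnerability) and the asset value (exposure) separately and compare the resulting damage estimates.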

In the framework of the Mediterranean Forecasting System (MFS) project, the performance of sub-regional and regional numerical ocean forecasting systems is assessed by means of model-model and model-data comparison. Three different operational systems are considered in this study: the Adriatic REGional Model (AREG), the AdriaROMS, and the Mediterranean Forecasting System general circulation model (MFS model). AREG and AdriaROMS are regional implementations (with some dedicated variations) of POM (Blumberg and Mellor, 1987) and ROMS (Shchepetkin and McWilliams, 2005), respectively, while the MFS model is based on the OPA code (Madec et al., 1998). The assessment has been carried out by means of standard scores. The data used for the assessment derive from in-situ and remote sensing measurements: in particular, a set of CTDs covering the whole western Adriatic, collected in January 2006, one year of SST from space-borne sensors, and six months of buoy data. This allowed a full three-dimensional picture of the quality of the operational forecasting systems during January 2006, and some preliminary considerations on the temporal fluctuation of scores estimated on surface (or near-surface) quantities between summer 2005 and summer 2006. In general, the regional models are found to be colder and fresher than the observations. They eventually outperform the large-scale model in the shallowest locations, as expected. Results on amplitude and phase errors are also much better in locations shallower than 50 m, while degraded in deeper locations, where the models tend to have a higher homogeneity along the vertical column compared to observations. In a basin-wide overview, the two regional models show some dissimilarities in the local displacement of errors, suggested by the full three-dimensional picture depicted using CTDs and confirmed by the comparison with SSTs. In locations where the regional models are mutually correlated, the aggregated mean squared error was found to be smaller.

Next steps in developing next-generation crop models fall into several categories: significant improvements in simulation of important crop processes and responses to stress; extension from simplified crop models to complex cropping systems models; and scaling up from site-based models to landscape, national, continental, and global scales. Crop processes that require major leaps in understanding and simulation in order to narrow uncertainties around how crops will respond to changing atmospheric conditions include genetics; carbon, temperature, water, and nitrogen; ozone; and nutrition. The field of crop modeling has been built on a single crop-by-crop approach. It is now time to create a new paradigm, moving from 'crop' to 'cropping system.' A first step is to set up the simulation technology so that modelers can rapidly incorporate multiple crops within fields, and multiple crops over time. Then the response of these more complex cropping systems can be tested under different sustainable intensification management strategies utilizing the updated simulation environments. Model improvements for diseases, pests, and weeds include developing process-based models for important diseases, frameworks for coupling air-borne diseases to crop models, gathering significantly more data on crop impacts, and enabling the evaluation of pest management strategies. Most smallholder farming in the world involves integrated crop-livestock systems that cannot be represented by crop modeling alone. Thus, next-generation cropping system models need to include key linkages to livestock. Livestock linkages to be incorporated include growth and productivity models for grasslands and rangelands as well as the usual annual crops. There are several approaches for scaling up, including use of gridded models and development of simpler quasi-empirical models for landscape-scale analysis. On the assessment side, AgMIP is leading a community process for coordinated contributions to IPCC AR6

Most coastal flood risk studies make use of a Digital Elevation Model (DEM), in addition to a projected flood water level, to estimate flood inundation and the associated damages to property and livelihoods. The resolution and accuracy of a DEM are critical in a flood risk assessment, as land elevation largely determines whether a location will be flooded or remain dry during a flood event. Especially in low-lying deltaic areas, the land elevation variation is usually on the order of only a few decimeters, and an offset of several decimeters in the elevation data has a significant impact on the accuracy of the risk assessment. Publicly available DEMs are often used in coastal flood risk assessments. The accuracy of these datasets is relatively low, on the order of meters, and especially low in comparison to the level of accuracy required for a flood risk assessment in a deltaic area. For a coastal zone in Nigeria (Lagos State), an accurate LiDAR DEM dataset was adopted as ground truth for terrain elevation. In the case study, the LiDAR DEM was compared to various publicly available DEMs, and the coastal flood risk assessment using the publicly available DEMs was compared to one using the LiDAR DEM. It can be concluded that the publicly available DEMs do not meet the accuracy requirements of coastal flood risk assessments, especially in coastal and deltaic areas. For this particular case study, the publicly available DEMs strongly overestimated land elevation (Z-values) and thereby underestimated the coastal flood risk for the Lagos State area. These findings are of interest when selecting datasets for coastal flood risk assessments in low-lying deltaic areas.
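The sensitivity of inundation extent to a vertical elevation bias can be sketched in a few lines. The grid and the 0.5 m offset below are invented for illustration; they are not the Lagos State data:

```python
# Illustrative sketch: fraction of a DEM grid inundated for a given flood
# water level, and the effect of a positive vertical bias in the elevations.
def inundated_fraction(dem, water_level):
    """Share of grid cells whose elevation lies below the flood level."""
    cells = [z for row in dem for z in row]
    return sum(z < water_level for z in cells) / len(cells)

# A small synthetic low-lying delta: elevations in metres.
dem = [[0.2, 0.4, 0.6],
       [0.8, 1.0, 1.2],
       [1.4, 1.6, 1.8]]

truth = inundated_fraction(dem, 1.0)  # reference ("LiDAR-like") estimate
# The same terrain with a +0.5 m elevation bias, as a coarse DEM might show:
biased = inundated_fraction([[z + 0.5 for z in row] for row in dem], 1.0)
```

Because the biased grid sits half a metre too high, it reports a smaller flooded fraction, i.e., it underestimates the flood risk, mirroring the behaviour described in the abstract.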

In the framework of the Mediterranean Forecasting System (MFS) project, the performance of regional numerical ocean forecasting systems is assessed by means of model-model and model-data comparison. The three operational systems considered in this study are: the Adriatic REGional Model (AREG), the Adriatic Regional Ocean Modelling System (AdriaROMS), and the Mediterranean Forecasting System General Circulation Model (MFS-GCM). AREG and AdriaROMS are regional implementations (with some dedicated variations) of POM and ROMS, respectively, while MFS-GCM is an OPA-based system. The assessment is done through standard scores. In situ and remote sensing data are used to evaluate system performance. In particular, a set of CTD measurements collected across the whole western Adriatic during January 2006 and one year of satellite-derived sea surface temperature (SST) measurements allow a full three-dimensional assessment of the quality of the operational forecasting systems during January 2006, and some preliminary considerations on the temporal fluctuation of scores estimated on surface quantities between summer 2005 and summer 2006.

The regional systems share a negative bias in simulated temperature and salinity. Nonetheless, they outperform the MFS-GCM in the shallowest locations. Results on amplitude and phase errors are also better in areas shallower than 50 m, while degraded in deeper locations, where the major model deficiencies are related to an overestimation of vertical mixing. In a basin-wide overview, the two regional models show differences in the local displacement of errors. In addition, in locations where the regional models are mutually correlated, the aggregated mean squared error was found to be smaller, a useful outcome of having several operational systems in the same region.
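The "standard scores" used in such model-data comparisons typically include the mean bias and the root-mean-square error. A minimal sketch, using invented SST values rather than actual MFS output:

```python
# Minimal sketch of standard verification scores for model-data comparison:
# mean bias (sign shows whether the model runs warm or cold) and RMSE.
import math

def bias(model, obs):
    """Mean of model-minus-observation differences."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error between model and observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# Invented SST samples (deg C) at four matched locations.
sst_model = [13.1, 13.4, 13.0, 12.8]
sst_obs   = [13.5, 13.6, 13.4, 13.2]
cold_bias = bias(sst_model, sst_obs)  # negative => model colder than data
```

A negative bias, as computed here, corresponds to the cold bias the regional systems are reported to share.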

The increasing availability of remotely sensed land surface and precipitation information provides new opportunities to improve upon existing landslide hazard assessment methods. This research considers how satellite precipitation information can be applied in two types of landslide hazard assessment frameworks: a global, landslide forecasting framework and a deterministic slope-stability model. Examination of both landslide hazard frameworks points to the need for higher resolution spatial and temporal precipitation inputs to better identify small-scale precipitation forcings that contribute to significant landslide triggering. This research considers how satellite precipitation information may be downscaled to account for local orographic impacts and better resolve peak intensities. Precipitation downscaling is employed in both models to better approximate local rainfall distribution, antecedent conditions, and intensities. Future missions, such as the Global Precipitation Measurement (GPM) mission will provide more frequent and extensive estimates of precipitation at the global scale and have the potential to significantly advance landslide hazard assessment tools. The first landslide forecasting tool, running in near real-time at http://trmm.gsfc.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. Results of the algorithm evaluation indicate that considering higher resolution susceptibility information is a key factor in better resolving potentially hazardous areas. However, success in resolving when landslide activity is probable is closely linked to appropriate characterization of the empirical rainfall intensity-duration thresholds. We test a variety of rainfall thresholds to evaluate algorithmic performance accuracy and determine the optimal set of conditions that
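Empirical rainfall thresholds of the kind tested above are commonly expressed as a power law in intensity and duration. The coefficients below are illustrative placeholders, not the algorithm's calibrated values:

```python
# Sketch of an empirical rainfall intensity-duration (I-D) threshold test of
# the common form I = a * D**(-b). The coefficients a and b are invented for
# illustration and are NOT the thresholds calibrated in the study.
def exceeds_threshold(intensity_mm_h, duration_h, a=15.0, b=0.4):
    """True if mean rainfall intensity exceeds the triggering threshold."""
    return intensity_mm_h > a * duration_h ** (-b)
```

The threshold intensity falls as duration grows, so a modest sustained rainfall can trigger the alert where a short burst of the same intensity would not.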

A case study is reported documenting a validation process to assess the accuracy of a mathematical model representing experiments on the thermal decomposition of polyurethane foam. The focus of the report is to work through the validation process, which addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space, and the parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code used to solve them, and the code's verification are presented. Experimental data from two activities are used to validate the mathematical models: the first experiment assesses the chemistry model alone, and the second assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results for both experimental activities are summarized, and the uncertainty of the model in representing each experimental activity is estimated. The comparison between the experimental data and the model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given; weaknesses in the process are discussed and lessons learned are summarized.

The assessment of the long-term safety of a repository for radioactive or hazardous waste, and therewith the development of a safety case, requires a comprehensive understanding of the system, continuous development of the methods of the safety case, and capable, qualified numerical tools. The objective of the project "Scientific basis for the assessment of the long-term safety of repositories" (identification number 02 E 10548) was to follow national and international developments in this area, to evaluate research projects that contribute knowledge, model approaches, and data, and to perform specific investigations to improve the methodologies of the safety case and the long-term safety assessment.

Similar to other health policy initiatives, there is a growing movement to involve consumers in decisions affecting their treatment options. Access to treatments can be impacted by decisions made during a health technology assessment (HTA), i.e., the rigorous assessment of medical interventions such as drugs, vaccines, devices, materials, medical and surgical procedures and systems. The purpose of this paper was to empirically assess the interest and potential mechanisms for consumer involvement in HTA by identifying what health consumer organizations consider meaningful involvement, examining current practices internationally and developing a model for involvement based on identified priorities and needs. Canadian health consumer groups representing the largest disease or illness conditions reported a desire for involvement in HTA and provided feedback on mechanisms for facilitating their involvement.

We describe the results of a multi-year effort to strengthen consideration of the human dimension in endangered species risk assessments and to strengthen research capacity for biodiversity risk assessment in the context of coupled human-natural systems. A core group of social and biological scientists has worked with a network of more than 50 individuals from four countries to develop a conceptual framework illustrating how human-mediated processes influence biological systems, and to develop tools to gather, translate, and incorporate these data into existing simulation models. A central theme of our research focused on (1) the difficulties often encountered in identifying and securing the diverse bodies of expertise and information necessary to adequately address complex species conservation issues; and (2) the development of quantitative simulation modeling tools that can explicitly link these datasets as a way to gain deeper insight into these issues. To address these challenges, we promote a "meta-modeling" approach in which computational links are constructed between discipline-specific models already in existence. In this approach, each model can function as a powerful stand-alone program, but interaction between applications is achieved by passing data structures describing the state of the system between programs. As one example of this concept, an integrated meta-model of wildlife disease and population biology is described. A goal of this effort is to improve science-based capabilities for decision making by scientists, natural resource managers, and policy makers addressing environmental problems in general, and biodiversity risk assessment in particular.

To assess global water resources from the perspective of subannual variation in water availability and water use, an integrated water resources model was developed. In a companion report, we presented the global meteorological forcing input used to drive the model and its six modules, namely the land surface hydrology module, the river routing module, the crop growth module, the reservoir operation module, the environmental flow requirement module, and the anthropogenic withdrawal module. Here, we present the results of the model application and the global water resources assessments. First, the timing and volume of simulated agricultural water use were examined, because agricultural use composes approximately 85% of total consumptive water withdrawal in the world. The estimated crop calendar showed good agreement with earlier reports for wheat, maize, and rice in major countries of production. In major countries, the error in the planting date was ±1 month, with some exceptions. The estimated irrigation water withdrawal also showed fair agreement with country statistics, but tended to be underestimated in countries of the Asian monsoon region. These results indicate the validity of the model and the input meteorological forcing, because no site-specific parameter tuning was used in the series of simulations. Finally, global water resources were assessed on a subannual basis using a newly devised index. This index located water-stressed regions that were undetected in earlier studies. These regions, which are indicated by a gap in the subannual distribution of water availability and water use, include the Sahel, the Asian monsoon region, and southern Africa. The simulation results show that the operation of major reservoirs (>1 km³) and the allocation of environmental flow requirements can alter the population under high water stress by approximately −11% to +5% globally. The integrated model is applicable to
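The idea behind a subannual index, namely that annual totals can hide dry-season stress, can be sketched as follows. The 0.4 threshold, the monsoon-like regime, and the demand series are all invented for illustration and are not the paper's devised index:

```python
# Illustrative sub-annual stress check: a region can look unstressed on
# annual totals yet be stressed in individual dry months. The 0.4 ratio is
# a commonly used stress threshold, used here purely as an example.
def monthly_stress(avail, use, threshold=0.4):
    """1-based months in which use/availability exceeds the threshold."""
    return [m for m, (a, u) in enumerate(zip(avail, use), 1)
            if a > 0 and u / a > threshold]

avail = [50, 40, 30, 20, 10, 5, 5, 10, 20, 30, 40, 50]  # monsoon-like supply
use   = [8] * 12                                         # steady demand
annual_ratio = sum(use) / sum(avail)                     # looks safe (< 0.4)
stressed_months = monthly_stress(avail, use)             # dry season exposed
```

Here the annual use-to-availability ratio stays below the threshold, yet four consecutive dry-season months exceed it, which is exactly the kind of gap an annual-only assessment would miss.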

The sources of formation, environmental distribution, and fate of persistent organic pollutants (POPs) are increasingly seen as topics to be addressed and solved at the global scale. There are already two international agreements concerning persistent organic pollutants: the 1998 Protocol on Persistent Organic Pollutants to the 1979 Convention on Long-Range Transboundary Air Pollution (Aarhus Protocol), and the Stockholm Convention on Persistent Organic Pollutants. For the assessment of environmental pollution by POPs, for risk assessment, and for the evaluation of new pollutants as potential candidates for inclusion in the POPs lists of the Stockholm Convention and/or the Aarhus Protocol, a set of different models has been developed or is under development. Multimedia models help describe and understand the environmental processes leading to global contamination by POPs and the actual risk to the environment and human health. However, there is a lack of tools based on a systematic and integrated approach to POPs management in the region.

This paper presents opto-physiological (OP) modeling and its application in cardiovascular assessment techniques based on photoplethysmography (PPG). Existing contact point measurement techniques, i.e., pulse oximetry probes, are compared with the next generation non-contact and imaging implementations, i.e., non-contact reflection and camera-based PPG. The further development of effective physiological monitoring techniques relies on novel approaches to OP modeling that can better inform the design and development of sensing hardware and applicable signal processing procedures. With the help of finite-element optical simulation, fundamental research into OP modeling of photoplethysmography is being exploited towards the development of engineering solutions for practical biomedical systems. This paper reviews a body of research comprising two OP models that have led to significant progress in the design of transmission mode pulse oximetry probes, and approaches to 3D blood perfusion mapping for the interpretation of cardiovascular performance.

Life cycle assessment (LCA) was traditionally used for modelling of product design and optimization. This is reflected in conventional LCA software, which is optimized for modelling single material streams of a homogeneous nature that are assembled into a final product. There has therefore been ... little focus on the chemical composition of the functional flows, as flows in the models have mainly been tracked on a mass basis; the emphasis was on the function of the product rather than the chemical composition of said product. Conversely, in modelling of environmental technologies, such as wastewater ... considering how the biochemical parameters change through a process chain. A good example of this is bio-refinery processes, where different residual biomass products are converted through different steps into the final energy product. Here it is necessary to know the stoichiometry of the different products ...

During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one ... in the literature. However, at least two major sources of errors remain. The first is associated with the ice models, i.e., the spatial distribution of ice and the history of melting (this is especially the case for Antarctica); the second with the numerical implementation of model features relevant to sea level modeling ... GIA modeling. GIA errors are also important in the far field of previously glaciated areas and in the time evolution of global indicators. In this regard we also account for other possible error sources which can impact global indicators, such as the sea level history related to GIA. The thermal ...

Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

Technical fields are changing so rapidly that even the core of an engineering education must be constantly reevaluated. Today's graduates may devote more time, and almost certainly attach more importance, to continued learning than to mastery of specific technical concepts. Continued learning shapes a high-quality education, which is what an engineering college must offer its students; the question is how to guarantee that quality. In addition, the Accreditation Board for Engineering and Technology is asking universities to commit to continuous and comprehensive assessment, assuring the quality of the educational process. This research is focused on developing a generic assessment model for a college of engineering as an annual cycle that consists of a systematic assessment of every course in the program, followed by an assessment of the program and of the college as a whole using Six Sigma methodology. This approach to assessment in education will provide a college of engineering with valuable information for many important curriculum decisions in every accreditation cycle. The Industrial and Manufacturing Engineering (IME) Program in the College of Engineering at the University of Cincinnati is used as a case example for a preliminary test of the generic model.

There are many factors influencing landslide occurrence, and the key to landslide control is to identify the regional landslide hazard factors. The Cameron Highlands of Malaysia was selected as the study area. Using a bivariate statistical analysis method in GIS software, the authors analyzed the relationships between landslides and environmental factors such as lithology, geomorphology, elevation, roads, and land use. A Distance Evaluation Model was developed based on Landslide Density (LD), and an assessment of the landslide hazard of the Cameron Highlands was performed. The results show that the model has high prediction precision.
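The landslide-density statistic at the heart of such bivariate analyses can be sketched as follows. The factor classes, counts, and areas are invented, not the Cameron Highlands data:

```python
# Sketch of the bivariate landslide-density statistic: for each class of an
# environmental factor (here, lithology), LD = landslides in class / class
# area, compared against the region-wide density. Values are illustrative.
def landslide_density(counts, areas):
    """Per-class landslide density (events per unit area)."""
    return {c: counts[c] / areas[c] for c in counts}

counts = {"granite": 30, "schist": 10, "alluvium": 2}        # mapped events
areas  = {"granite": 60.0, "schist": 50.0, "alluvium": 40.0}  # km^2
ld = landslide_density(counts, areas)
regional = sum(counts.values()) / sum(areas.values())        # baseline rate
```

Classes whose LD exceeds the regional baseline (here, granite) would be weighted as hazard-prone in the resulting susceptibility map.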

Discussions on Consequential Life Cycle Assessment (CLCA) have relied largely on partial or general equilibrium models. Such models are useful for integrating market effects into CLCA, but also have well-recognized limitations such as the poor granularity of the sectoral definition and the assumption of perfect oversight by all economic agents. Building on the Rectangular-Choice-of-Technology (RCOT) model, this study proposes a new modeling approach for CLCA, the Technology Choice Model (TCM). In this approach, the RCOT model is adapted for its use in CLCA and extended to incorporate parameter uncertainties and suboptimal decisions due to market imperfections and information asymmetry in a stochastic setting. In a case study on rice production, we demonstrate that the proposed approach allows modeling of complex production technology mixes and their expected environmental outcomes under uncertainty, at a high level of detail. Incorporating the effect of production constraints, uncertainty, and suboptimal decisions by economic agents significantly affects technology mixes and associated greenhouse gas (GHG) emissions of the system under study. The case study also shows the model's ability to determine both the average and marginal environmental impacts of a product in response to changes in the quantity of final demand.

Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy has become an essential part of any regression analysis. In this paper, we focus on a spatial treatment of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC, and of differences in local DIC between models, to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993, with a range of linear model specifications, from global-effects-only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data.

This study measured the noise levels generated at different construction sites with reference to the stage of construction and the equipment used, and examined methods to predict such noise in order to assess its environmental impact. It included 33 construction sites in Kuwait and used artificial neural networks (ANNs) for the prediction of noise. A back-propagation neural network (BPNN) model was compared with a general regression neural network (GRNN) model. The results indicated that the mean equivalent noise level was 78.7 dBA, which exceeds the threshold limit. The GRNN model was superior to the BPNN model in its accuracy of predicting construction noise, owing to its ability to train quickly on sparse data sets: over 93% of its predictions were within 5% of the observed values, and the mean absolute error between the predicted and observed data was only 2 dBA. ANN modeling proved to be a useful technique for the noise predictions required in assessing the environmental impact of construction activities.
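A GRNN prediction is essentially a Gaussian-kernel-weighted average of the training targets (Nadaraya-Watson regression), which is why it trains quickly on sparse data. This sketch uses made-up site features and noise levels, not the Kuwait dataset:

```python
# Minimal GRNN sketch: predict a target as the Gaussian-kernel-weighted mean
# of training targets. Features and noise values below are invented.
import math

def grnn_predict(x, train_x, train_y, sigma=1.0):
    """GRNN prediction at x: kernel-weighted average of training targets."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Hypothetical features: (number of machines, distance to receptor in m);
# targets: equivalent noise level in dBA.
train_x = [(1, 50), (3, 30), (5, 10)]
train_y = [65.0, 75.0, 85.0]
```

There is no iterative training loop at all: the "model" is the stored training set plus the smoothing parameter sigma, which is the property that makes GRNNs fast to fit on small datasets.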

Knowledge representation models based on Fuzzy Description Logics (DLs) can provide a foundation for reasoning in intelligent learning environments. While basic DLs are suitable for expressing crisp concepts and binary relationships, Fuzzy DLs are capable of processing degrees of truth/completeness about vague or imprecise information. This paper tackles the issue of representing fuzzy classes using OWL2 in a dataset describing Performance Assessment Results of Students (PARS).

Supply Chain Modeling: Downstream Risk Assessment Methodology (DRAM). Dr. Sean Barnett, December 5, 2013, Institute for Defense Analyses, Alexandria, Virginia. DMSMS Conference 2013.

Coal measures (coal bearing rock strata) can contain large reserves of methane. These reserves are being exploited at a rapidly increasing rate in many parts of the world. To extract coal seam gas, thousands of wells are drilled at relatively small spacing to depressurize coal seams to induce desorption and allow subsequent capture of the gas. To manage this process effectively, the effect of coal bed methane (CBM) extraction on regional aquifer systems must be properly understood and managed. Groundwater modeling is an integral part of this management process. However, modeling of CBM impacts presents some unique challenges, as processes that are operative at two very different scales must be adequately represented in the models. The impacts of large-scale gas extraction may be felt over a large area, yet despite the significant upscaling that accompanies construction of a regional model, near-well conditions and processes cannot be ignored. These include the highly heterogeneous nature of many coal measures, and the dual-phase flow of water and gas that is induced by coal seam depressurization. To understand these challenges, a fine-scale model was constructed incorporating a detailed representation of lithological heterogeneity to ensure that near-well processes and conditions could be examined. The detail of this heterogeneity was at a level not previously employed in models built to assess groundwater impacts arising from CBM extraction. A dual-phase reservoir simulator was used to examine depressurization and water desaturation processes in the vicinity of an extractive wellfield within this fine-scale model. A single-phase simulator was then employed so that depressurization errors incurred by neglecting near-well, dual-phase flow could be explored. Two models with fewer lithological details were then constructed in order to examine the nature of depressurization errors incurred by upscaling and to assess the interaction of the upscaling process with the

An evaluation of the risk to an exposed element from a hazardous event requires a consideration of the element's vulnerability, which expresses its propensity to suffer damage. This concept allows the assessed level of hazard to be translated to an estimated level of risk and is often used to evaluate the risk from earthquakes and cyclones. However, for other natural perils, such as mass movements, coastal erosion and volcanoes, the incorporation of vulnerability within risk assessment is not well established, and consequently quantitative risk estimations are not often made. This impedes the study of the relative contributions from different hazards to the overall risk at a site. Physical vulnerability is poorly modelled for many reasons: human casualties are often caused by the event itself rather than by building damage; there is a lack of observational data on the hazard, the elements at risk and the induced damage; the structural damage mechanisms are complex; the relevant temporal and geographical scales vary; and the hazard level may itself be modified. Many of these causes are related to the nature of the peril; therefore, for some hazards, such as coastal erosion, the benefits of considering an element's physical vulnerability may be limited. However, for hazards such as volcanoes and mass movements the modelling of vulnerability should be improved, for example by following the efforts made in earthquake risk assessment: additional observational data on induced building damage and the hazardous event should be routinely collected and correlated, and numerical modelling of building behaviour during a damaging event should be attempted.

Full Text Available The need for an integrated methodological framework for sustainability assessment has been widely discussed and is urgent due to increasingly complex environmental system problems. These problems have impacts on ecosystems and human well-being which represent a threat to the economic performance of countries and corporations. Integrated assessment crosses issues; spans spatial and temporal scales; looks forward and backward; and incorporates multi-stakeholder inputs. This study aims to develop an integrated methodology by capitalizing on the complementary strengths of different methods used by industrial ecologists and biophysical economists. The computational methodology proposed here is a systems-perspective, integrative, and holistic approach for sustainability assessment which attempts to link basic science and technology to policy formulation. The framework adopts life cycle thinking methods—LCA, LCC, and SLCA; stakeholder analysis supported by multi-criteria decision analysis (MCDA); and dynamic system modelling. Following the Pareto principle, the critical sustainability criteria, indicators and metrics (i.e., hotspots) can be identified and further modelled using system dynamics or agent-based modelling and improved by data envelopment analysis (DEA) and sustainability network theory (SNT). The framework is being applied to the development of biofuel supply chain networks. The framework can provide new ways of integrating knowledge across the divides between social and natural sciences as well as between critical and problem-solving research.

Dust handling poses a potential explosion hazard in many industrial facilities. The consequences of a dust explosion are often severe and similar to those of a gas explosion; however, its occurrence is conditional on the presence of five elements: combustible dust, ignition source, oxidant, mixing and confinement. Dust explosion researchers have conducted experiments to study the characteristics of these elements and generate data on explosibility. These experiments are often costly, but the generated data have significant scope in estimating the probability of a dust explosion occurrence. This paper attempts to use existing information (experimental data) to develop a predictive model to assess the probability of a dust explosion occurrence in a given environment. The proposed model considers six key parameters of a dust explosion: dust particle diameter (PD), minimum ignition energy (MIE), minimum explosible concentration (MEC), minimum ignition temperature (MIT), limiting oxygen concentration (LOC) and explosion pressure (Pmax). A conditional probabilistic approach has been developed and embedded in the proposed model to generate a nomograph for assessing dust explosion occurrence. The generated nomograph provides a quick assessment technique to map the occurrence probability of a dust explosion for a given environment defined by the six parameters.

Asteroids and comets 10-100 m in size that collide with Earth disrupt dramatically in the atmosphere with an explosive transfer of energy, caused by extreme air drag. Such airbursts produce a strong blastwave that radiates from the meteoroid's trajectory and can cause damage on the surface. An established technique for predicting airburst blastwave damage is to treat the airburst as a static source of energy and to extrapolate empirical results of nuclear explosion tests using an energy-based scaling approach. Here we compare this approach to two more complex models using the iSALE shock physics code. We consider a moving-source airburst model where the meteoroid's energy is partitioned as two-thirds internal energy and one-third kinetic energy at the burst altitude, and a model in which energy is deposited into the atmosphere along the meteoroid's trajectory based on the pancake model of meteoroid disruption. To justify use of the pancake model, we show that it provides a good fit to the inferred energy release of the 2013 Chelyabinsk fireball. Predicted overpressures from all three models are broadly consistent at radial distances from ground zero that exceed three times the burst height. At smaller radial distances, the moving-source model predicts overpressures two times greater than the static-source model, whereas the cylindrical line-source model based on the pancake model predicts overpressures two times lower than the static-source model. Given other uncertainties associated with airblast damage predictions, the static-source approach provides an adequate approximation of the azimuthally averaged airblast for probabilistic hazard assessment.
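The static-source extrapolation of nuclear-test data rests on cube-root (Hopkinson-Cranz) energy scaling: equal overpressure occurs at equal scaled distance. A minimal sketch, with placeholder power-law constants rather than the actual fitted empirical curves:

```python
def scaled_distance(slant_range_m, energy_kt):
    # Hopkinson-Cranz cube-root scaling: two bursts of different energies
    # produce the same overpressure at the same scaled distance Z.
    return slant_range_m / energy_kt ** (1.0 / 3.0)

def overpressure_kpa(slant_range_m, energy_kt, k=180.0, n=1.3):
    # Illustrative far-field power law p = k * Z**-n; k and n here are
    # placeholder constants, not the fitted nuclear-test values.
    return k * scaled_distance(slant_range_m, energy_kt) ** -n
```

The scaling property itself is exact in this formulation: an 8 kt burst observed at 2000 m gives the same overpressure as a 1 kt burst at 1000 m, since both have Z = 1000 m/kt^(1/3).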

Various numerical methods are available to model, simulate, analyse and interpret results; however, a major task is to select a reliable and fit-for-purpose tool to perform a realistic assessment of any problem. For a model to be representative of a realistic mining scenario, a verified tool must be chosen to assess mine roof support requirements and address the geotechnical risks associated with longwall mining. Dependable tools provide a safe working environment, increased production, efficient management of resources and reduced environmental impacts of mining. Although various methods, for example analytical, experimental and empirical, are being adopted in mining, numerical tools have recently become popular due to advances in computer hardware and numerical methods. Empirical rules based on past experience do provide a general guide; however, due to the heterogeneous nature of mine geology (i.e., no two mine sites are identical), numerical simulation of mine-site-specific conditions lends better insight into the underlying issues. The paper highlights the use of a continuum-mechanics-based tool in coal mining with a mine-scale model. Continuum modelling can provide close-to-accurate stress fields and deformation. The paper describes the use of existing mine data to calibrate and validate the model parameters, which are then used to assess geotechnical issues related to installing a new high-capacity longwall mine at the site. A variety of parameters, for example chock convergence, caveability of the overlying sandstones, and abutment and vertical stresses, have been estimated.

Full Text Available Abstract Background Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods

Quantitative microbiological risk assessment (QMRA) models are used to reflect knowledge about complex real-world scenarios for the propagation of microbiological hazards along the feed and food chain. The aim is to provide insight into interdependencies among model parameters, typically with an interest in characterising the effect of risk mitigation measures. A particular requirement is to achieve clarity about the reliability of conclusions from the model in the presence of uncertainty. To this end, Monte Carlo (MC) simulation modelling has become a standard in so-called probabilistic risk assessment. In this paper, we elaborate on the application of Bayesian computational statistics in the context of QMRA. It is useful to explore the analogy between MC modelling and Bayesian inference (BI). This pertains in particular to the procedures for deriving prior distributions for model parameters. We illustrate using a simple example that the inability to cope with feedback among model parameters is a major limitation of MC modelling. However, BI models can be easily integrated into MC modelling to overcome this limitation. We refer to a BI submodel integrated into an MC model as a "Bayes domain". We also demonstrate that an entire QMRA model can be formulated as a Bayesian graphical model (BGM) and discuss the advantages of this approach. Finally, we show example graphs of MC, BI and BGM models, highlighting the similarities among the three approaches.
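The "Bayes domain" idea can be sketched with a conjugate Beta-Binomial submodel feeding an outer Monte Carlo loop: survey data update a prior on prevalence, and the outer loop propagates posterior draws forward. The survey counts and exposure model below are hypothetical, not from the paper.

```python
import random

def bayes_domain_prevalence(pos, n, a=1.0, b=1.0):
    # "Bayes domain": a conjugate Beta(a, b) prior updated with survey
    # counts gives a Beta(a + pos, b + n - pos) posterior to sample from.
    return lambda: random.betavariate(a + pos, b + n - pos)

def qmra_mc(draws=10000, servings=20, seed=1):
    # Outer MC model: each iteration draws a prevalence from the Bayes
    # domain, then propagates it to the chance of at least one
    # contaminated serving out of `servings`.
    random.seed(seed)
    prevalence = bayes_domain_prevalence(pos=3, n=60)  # hypothetical survey
    hits = 0
    for _ in range(draws):
        p = prevalence()
        if any(random.random() < p for _ in range(servings)):
            hits += 1
    return hits / draws
```

Because the posterior is sampled inside the MC loop, parameter uncertainty from the data flows directly into the risk estimate rather than being fixed in advance.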

During a geomagnetic storm, the energy transferred from the solar wind to the magnetosphere-ionosphere system adversely affects communication and navigation systems. Quantifying storm impacts on TEC (Total Electron Content) and assessing models' capability to reproduce those impacts are important for specifying and forecasting space weather. To quantify storm impacts on TEC, we considered several parameters: TEC changes compared to quiet time (the day before the storm), TEC differences between 24-hour intervals, and the maximum increase/decrease during the storm. We investigated the spatial and temporal variations of the parameters during the 2006 AGU storm event (14-15 Dec. 2006) using ground-based GPS TEC measurements in eight selected 5-degree longitude sectors. Latitudinal variations were also studied in two of the eight sectors where data coverage was relatively better. We obtained modeled TEC from various ionosphere/thermosphere (IT) models. The parameters from the models were compared with each other and with the observed values. We quantified the performance of the models in reproducing the TEC variations during the storm using skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.
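One common form of skill score for such model-observation comparisons is one minus the ratio of model to reference mean-square error; the abstract does not specify which variant was used, so this is a representative sketch with illustrative TEC values rather than the study's data.

```python
def skill_score(obs, model, reference):
    # Murphy-style skill score: 1 - MSE(model)/MSE(reference).
    # 1 is a perfect forecast, 0 matches the reference, negative is worse.
    mse = lambda pred: sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return 1.0 - mse(model) / mse(reference)

# Illustrative TEC values (TECU); the reference is the quiet day before.
tec_obs   = [18.0, 25.0, 31.0, 22.0]
tec_model = [17.0, 24.0, 29.0, 23.0]
tec_quiet = [15.0, 16.0, 17.0, 16.0]
```

Using quiet-time values as the reference rewards a model specifically for capturing the storm-time departure, not just the background ionosphere.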

Extreme climate events play an important and potentially lasting role in terrestrial carbon cycling and storage. In particular, satellite and in-situ measurements have shown that forest recovery time following severe drought can extend several years beyond the return to normal climate conditions. However, terrestrial ecosystem models generally do not account for the physiological mechanisms that cause these legacy effects and, instead, assume complete and rapid vegetation recovery from drought. Using a suite of fifteen land surface models from the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP), we assess models' ability to capture legacy effects by analyzing the spatial and temporal extent of modeled vegetation response to the 2005 Amazon drought. We compare the simulated primary production and ecosystem exchange (GPP, NPP, NEE) to previous recovery-focused analysis of satellite microwave observations of canopy backscatter. Further, we evaluate the specific model characteristics that control the timescale and magnitude of simulated vegetation recovery from drought. Since climate change is expected to increase the frequency and severity of extreme climate events, improving models' ability to simulate the legacy effects of these events will likely refine estimates of the land carbon sink and its interannual variability.

Full Text Available We present a simple Poisson process model for the growth of Tradescantia fluminensis, an invasive plant species that inhibits the regeneration of native forest remnants in New Zealand. The model was parameterised with data derived from field experiments in New Zealand and then verified with independent data. The model gave good predictions, which showed that its underlying assumptions are sound. However, this simple model had less predictive power for outputs based on variance, suggesting that some assumptions were lacking. Therefore, we extended the model to include higher variability between plants, thereby improving its predictions. This high-variance model suggests that control measures that promote node death at the base of the plant or restrict the main stem growth rate will be more effective than those that reduce the number of branching events. The extended model forms a good basis for assessing the efficacy of various forms of control of this weed, including the recently released leaf-feeding tradescantia leaf beetle (Neolema ogloblini).
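A Poisson process model of branching growth of this general kind can be sketched as follows; the branching structure and rate are illustrative assumptions, not the parameterisation fitted in the study.

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth's algorithm: count uniform draws until their running
    # product falls below exp(-lam).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_plant(days, branch_rate, seed=None):
    # Growing tips branch as a Poisson process: each day the number of
    # new branches is Poisson(branch_rate * current number of tips).
    rng = random.Random(seed)
    tips = 1
    for _ in range(days):
        tips += poisson_draw(branch_rate * tips, rng)
    return tips
```

Running many replicates of such a simulation gives both the mean growth and the between-plant variance, which is the output the authors found needed extra variability to match.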

Wave models used for site assessments are subjected to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Four different wave models are considered, and validation...... uncertainties can be implemented in probabilistic reliability assessments....

A literature review and assessment was conducted by Pacific Northwest National Laboratory (PNNL) to update information on plant and animal radionuclide transfer factors used in performance-assessment modeling. A group of 15 radionuclides was included in this review and assessment. The review is composed of four main sections, not including the Introduction. Section 2.0 provides a review of the critically important issue of physicochemical speciation and geochemistry of the radionuclides in natural soil-water systems as it relates to the bioavailability of the radionuclides. Section 3.0 provides an updated review of the parameters of importance in the uptake of radionuclides by plants, including root uptake via the soil-groundwater system and foliar uptake due to overhead irrigation. Section 3.0 also provides a compilation of concentration ratios (CRs) for soil-to-plant uptake for the 15 selected radionuclides. Section 4.0 provides an updated review on radionuclide uptake data for animal products related to absorption, homeostatic control, approach to equilibration, chemical and physical form, diet, and age. Compiled transfer coefficients are provided for cow’s milk, sheep’s milk, goat’s milk, beef, goat meat, pork, poultry, and eggs. Section 5.0 discusses the use of transfer coefficients in soil, plant, and animal modeling using regulatory models for evaluating radioactive waste disposal or decommissioned sites. Each section makes specific suggestions for future research in its area.

With the advent of digital study models, the importance of being able to evaluate space requirements becomes valuable to treatment planning and the justification for any required extraction pattern. This study was undertaken to compare the validity and reliability of the Royal London space analysis (RLSA) undertaken on plaster as compared with digital models. A pilot study (n = 5) was undertaken on plaster and digital models to evaluate the feasibility of digital space planning. This also helped to determine the sample size calculation and as a result, 30 sets of study models with specified inclusion criteria were selected. All five components of the RLSA, namely: crowding; depth of occlusal curve; arch expansion/contraction; incisor antero-posterior advancement and inclination (assessed from the pre-treatment lateral cephalogram) were accounted for in relation to both model types. The plaster models served as the gold standard. Intra-operator measurement error (reliability) was evaluated along with a direct comparison of the measured digital values (validity) with the plaster models. The measurement error or coefficient of repeatability was comparable for plaster and digital space analyses and ranged from 0.66 to 0.95 mm. No difference was found between the space analysis performed in either the upper or lower dental arch. Hence, the null hypothesis was accepted. The digital model measurements were consistently larger, albeit by a relatively small amount, than the plaster models (0.35 mm upper arch and 0.32 mm lower arch). No difference was detected in the RLSA when performed using either plaster or digital models. Thus, digital space analysis provides a valid and reproducible alternative method in the new era of digital records.
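The coefficient of repeatability is conventionally computed Bland-Altman style as 1.96 times the standard deviation of the paired differences between repeat measurements; the crowding values below are hypothetical illustrations.

```python
import statistics

def coefficient_of_repeatability(first, second):
    # Bland-Altman repeatability: 1.96 * SD of the paired differences,
    # the half-width expected to cover ~95% of repeat-measurement
    # differences for the same operator and instrument.
    diffs = [a - b for a, b in zip(first, second)]
    return 1.96 * statistics.stdev(diffs)

# Hypothetical repeat crowding measurements (mm) on the same five models
run1 = [2.1, 3.4, 1.8, 4.0, 2.9]
run2 = [2.3, 3.1, 1.9, 4.3, 2.8]
```

A repeatability on the order of a few tenths of a millimetre, as here, would be comparable to the 0.66 to 0.95 mm range the study reports for the full space analysis.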

Full Text Available The evaluation of hydrologic model behaviour and performance is commonly made and reported through comparisons of simulated and observed variables. Frequently, comparisons are made between simulated and measured streamflow at the catchment outlet. In distributed hydrological modelling approaches, additional comparisons of simulated and observed measurements for multi-response validation may be integrated into the evaluation procedure to assess overall modelling performance. In both approaches, single and multi-response, efficiency criteria are commonly used by hydrologists to provide an objective assessment of the "closeness" of the simulated behaviour to the observed measurements. While there are a few efficiency criteria such as the Nash-Sutcliffe efficiency, coefficient of determination, and index of agreement that are frequently used in hydrologic modeling studies and reported in the literature, there are a large number of other efficiency criteria to choose from. The selection and use of specific efficiency criteria and the interpretation of the results can be a challenge for even the most experienced hydrologist since each criterion may place different emphasis on different types of simulated and observed behaviours. In this paper, the utility of several efficiency criteria is investigated in three examples using a simple observed streamflow hydrograph.
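Two of the efficiency criteria named above have compact closed forms; the hydrograph values below are illustrative, not the paper's example data.

```python
def nash_sutcliffe(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean_obs)^2).
    # 1 is a perfect fit; 0 means the simulation does no better than
    # simply predicting the observed mean.
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def index_of_agreement(obs, sim):
    # Willmott's index of agreement d: bounded on [0, 1], and less
    # dominated than NSE by a few large errors at hydrograph peaks.
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    pot = sum((abs(s - mean_obs) + abs(o - mean_obs)) ** 2
              for o, s in zip(obs, sim))
    return 1.0 - err / pot

# Illustrative daily streamflow (m^3/s) at a catchment outlet
q_obs = [5.0, 12.0, 30.0, 18.0, 9.0]
q_sim = [6.0, 11.0, 27.0, 19.0, 10.0]
```

Because the two criteria normalise the error differently, the same simulation can rank differently under each, which is exactly the interpretation difficulty the paper discusses.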

Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.

Iron mines in Minnesota are ideally located to assess the accuracy of available atmospheric profiles used in infrasound modeling. These mines are located approximately 400 km to the southeast (142) of the Lac-Du-Bonnet infrasound station, IS-10. Infrasound data from June 1999 to March 2004 were analyzed to assess the effects of explosion size and atmospheric conditions on observations. IS-10 recorded a suite of events from this time period, resulting in well-constrained ground truth. This ground truth allows for the comparison of ray-trace and PE (Parabolic Equation) modeling to the observed arrivals. The tele-infrasonic distance (greater than 250 km) produces ray paths that turn in the upper atmosphere, the thermosphere, at approximately 120 km to 140 km. Modeling based upon MSIS/HWM (Mass Spectrometer Incoherent Scatter/Horizontal Wind Model) and the NOGAPS (Navy Operational Global Atmospheric Prediction System) and NRL-GS2 (Naval Research Laboratory Ground to Space) augmented profiles is used to interpret the observed arrivals.

This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
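For some parametric families, the method-of-moments and maximum-likelihood fits mentioned above reduce to closed forms. A sketch for the normal and lognormal cases, using only the standard library:

```python
import math
import statistics

def fit_normal_mle(data):
    # Normal MLE: the sample mean and the biased (divide-by-n) standard
    # deviation jointly maximise the likelihood.
    mu = statistics.fmean(data)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))
    return mu, sigma

def fit_lognormal_moments(data):
    # Method of moments for a lognormal: match the data mean m and
    # variance v to the (mu, sigma) of the underlying normal via
    # sigma^2 = ln(1 + v/m^2) and mu = ln(m) - sigma^2/2.
    m = statistics.fmean(data)
    v = statistics.variance(data)
    sigma2 = math.log(1.0 + v / m ** 2)
    return math.log(m) - sigma2 / 2.0, math.sqrt(sigma2)
```

For families without closed forms, the same moment-matching or likelihood-maximisation logic is applied numerically, which is what the nonlinear least squares and probability plotting techniques in the report generalise.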

Life science and biotechnology companies are the fastest growing industries in the nation, with more than 30% of these companies and close to 50% of the nation's life science workers located in California. The need for well-trained biotechnology workers continues to grow. Educational institutions and industry professionals have attempted to create the training and the workforce for the bioscience and biotechnology industry. Many have concluded that one way would be to create a multiuse training center where trainees from high school age through late adulthood could receive up-to-date training. This case study had 2 unique phases. Phase 1 consisted of examining representative stakeholder interview data for characteristics of an ideal biotechnology shared-use regional education (B-SURE) center, which served as the basis for an assessment tool, with 107 characteristics in 8 categories. This represented what an ideal center model should include. Phase 2 consisted of using this assessment tool to gather data from 6 current biotechnology regional centers to determine how these centers compared to the ideal model. Results indicated that each center was unique. Although no center met all ideal model characteristics, the 6 centers could clearly be ranked. Recommendations include refining the core characteristics, further assessing the existing and planned centers; evaluating and refining the interview instrument in Phase 1 and the assessment tool in Phase 2 by including additional stakeholders in both phases and by adding reviewers of Phase 1 transcripts; and determining a method to demonstrate a clear return on investment in a B-SURE center.

Within the theoretical framework of knowledge space theory, a probabilistic skill multimap model for assessing learning processes is proposed. The learning process of a student is modeled as a function of the student's knowledge and of an educational intervention on the attainment of specific skills required to solve problems in a knowledge…

The purpose of this paper is to give an example of how to assess the model-data fit of unidimensional IRT models in simulated data. Also, the present research aims to explain the importance of fit and the consequences of misfit by using simulated data sets. Responses of 1000 examinees to a dichotomously scoring 20 item test were simulated with 25…

This paper evaluates uncertainties in two solute transport models based on tracer experiment data from the Upper River Narew. Data Based Mechanistic and transient storage models were applied to Rhodamine WT tracer observations. We focus on the analysis of uncertainty and the sensitivity of model predictions to varying physical parameters, such as dispersion and channel geometry. An advection-dispersion model with dead zones (Transient Storage model) adequately describes the transport of pollutants in a single channel river with multiple storage. The applied transient storage model is deterministic; it assumes that observations are free of errors and the model structure perfectly describes the process of transport of conservative pollutants. In order to take into account the model and observation errors, an uncertainty analysis is required. In this study we used a combination of the Generalized Likelihood Uncertainty Estimation technique (GLUE) and the variance based Global Sensitivity Analysis (GSA). The combination is straightforward as the same samples (Sobol samples) were generated for GLUE analysis and for sensitivity assessment. Additionally, the results of the sensitivity analysis were used to specify the best parameter ranges and their prior distributions for the evaluation of predictive model uncertainty using the GLUE methodology. Apart from predictions of pollutant transport trajectories, two ecological indicators were also studied (time over the threshold concentration and maximum concentration). In particular, a sensitivity analysis of the length of "over the threshold" period shows an interesting multi-modal dependence on model parameters. This behavior is a result of the direct influence of parameters on different parts of the dynamic response of the system. As an alternative to the transient storage model, a Data Based Mechanistic approach was tested. Here, the model is identified and the parameters are estimated from available time series data using
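The GLUE procedure described above can be sketched as Monte Carlo rejection sampling with an informal likelihood; the first-order decay surrogate and observations below are synthetic stand-ins for the actual transient storage model and tracer data.

```python
import math
import random

def glue(simulator, obs, param_ranges, n=2000, threshold=0.0, seed=42):
    # GLUE sketch: sample parameter sets uniformly, score each with an
    # informal likelihood (Nash-Sutcliffe here), and discard sets below
    # the threshold as "non-behavioural". The retained scores can then
    # weight the predictive ensemble.
    rng = random.Random(seed)
    mean_obs = sum(obs) / len(obs)
    var = sum((o - mean_obs) ** 2 for o in obs)
    behavioural = []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in param_ranges]
        sim = simulator(theta)
        score = 1.0 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / var
        if score > threshold:
            behavioural.append((score, theta))
    return behavioural

# Hypothetical surrogate: tracer concentration c(t) = c0 * exp(-k t),
# with c0 and k as the uncertain parameters.
times = [0.0, 1.0, 2.0, 3.0]
obs_c = [10.0, 6.1, 3.6, 2.3]          # synthetic observations
decay = lambda th: [th[0] * math.exp(-th[1] * t) for t in times]
sets = glue(decay, obs_c, [(5.0, 15.0), (0.1, 1.0)])
```

Reusing the same Sobol or random samples for both the GLUE scoring and a variance-based sensitivity analysis, as the paper does, avoids a second expensive sampling pass.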

Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) To identify the key processes and chemical and/or environmental parameters that determine the exposure; thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application

New Zealand is highly dependent on its soil resource for continued agricultural production. To avoid depleting this resource, there is a need to identify soils and associated land management practices where there is a risk of soil degradation. Environmental integrity and ecosystem services also need to be maintained. Accordingly, to ensure sustainable production, the on- and off-site environmental impacts of land management need to be identified and managed. We developed a structural vulnerability index for New Zealand soils. This index ranks soils according to their inherent susceptibility to physical degradation when used for agricultural (pasture, forestry and cropping) purposes. We also developed a rule-based model to assess soil compaction vulnerability by characterising the combined effects of resistance and resilience. Other soil attributes have been appraised using seven chemical, physical and biological indicators of soil quality. These indicators have been applied in a nation-wide project involving data collection from over 500 sites for a range of land uses. These soil quality data can be interpreted via the World Wide Web through the interactive decision-support tool SINDI. The land-use impact model is a framework to assess agricultural land management and environmental sustainability, and may be applied to land units at any scale. Using land resource data and information, the model explicitly identifies hazards to land productivity and environmental integrity. It utilises qualitative expert and local knowledge and quantitative model-based evaluations to assess the potential environmental impacts of land-management practices. The model is linked to a geographic information system (GIS), allowing model outputs, such as the environmental impacts of site-specific best management practices, to be identified in a spatially explicit manner. The model has been tested in New Zealand in an area of pastoral land use. Advantages of this risk identification model include
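A rule-based compaction vulnerability assessment of the kind described can be sketched as a lookup over resistance and resilience classes. All class names and thresholds below are invented for illustration; they are not the published New Zealand index.

```python
# Illustrative rule table: vulnerability = f(resistance, resilience).
# Categories and the mapping are invented for this sketch.
RULES = {
    ("low", "low"): "very high",
    ("low", "high"): "high",
    ("high", "low"): "moderate",
    ("high", "high"): "low",
}

def compaction_vulnerability(resistance: str, resilience: str) -> str:
    """Combine a soil's resistance and resilience class into a vulnerability class."""
    return RULES[(resistance, resilience)]

def resistance_class(clay_pct: float, bulk_density: float) -> str:
    # Invented thresholds purely to make the rule base executable.
    return "high" if clay_pct > 25 and bulk_density > 1.2 else "low"

rank = compaction_vulnerability(resistance_class(30.0, 1.3), "low")
```

The appeal of this structure is that expert and local knowledge enter as editable rules rather than fitted coefficients, matching the qualitative/quantitative mix the abstract describes.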

A new descriptive stratiform chromite deposit model was prepared which will provide a framework for understanding the characteristics of stratiform chromite deposits worldwide. Previous stratiform chromite deposit models developed by the U.S. Geological Survey (USGS) have been referred to as Bushveld chromium, because the Bushveld Complex in South Africa is the only stratified, mafic-ultramafic intrusion presently mined for chromite and is the most intensely researched. As part of the on-going effort by the USGS Mineral Resources Program to update existing deposit models for the upcoming national mineral resource assessment, this revised stratiform chromite deposit model includes new data on the geological, mineralogical, geophysical, and geochemical attributes of stratiform chromite deposits worldwide. This model will be a valuable tool in future chromite resource and environmental assessments and supplement previously published models used for mineral resource evaluation.

The objective of this report is to assess the confidence that can be placed in the Forsmark site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Forsmark). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or, more usefully, by explorations underground made during construction of the repository. The confidence in the Forsmark site descriptive model, based on the data available at the conclusion of the surface-based site investigations, has been assessed by exploring: confidence in the site characterisation database; key remaining issues and their handling; handling of alternative models; consistency between disciplines; and the main reasons for confidence and lack of confidence in the model. It is generally found that the key aspects of importance for safety assessment and repository engineering of the Forsmark site descriptive model are associated with a high degree of confidence. Because of the robust geological model that describes the site, the overall confidence in the Forsmark site descriptive model is judged to be high. While some aspects have lower confidence, this lack of confidence is handled by providing wider uncertainty ranges, bounding estimates and/or alternative models. Most, but not all, of the low-confidence aspects have little impact on repository engineering design or on long-term safety. Poor precision in the measured data is judged to have limited impact on uncertainties in the site descriptive model, with the exceptions of inaccuracy in determining the position of some boreholes at depth in 3-D space, the poor precision of the orientation of BIPS images in some boreholes, and the poor precision of stress data determined by overcoring at the locations where the pre

In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent across successive steps in the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
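The idea of letting an explicit ordering mechanism generate mutually dependent storage times, whose upper tail then drives microbial growth, can be sketched with a toy discrete-event loop. The arrival rate, ordering cycle, batch size, and growth-rate parameter mu are invented illustrative values, not validated QMRA inputs for L. monocytogenes.

```python
import random

def simulate_chain(n_products, arrival_gap, order_batch, mu=0.02, seed=0):
    """Toy discrete-event sketch: products queue on a shelf and are removed in
    FIFO batches when a daily order fires; storage time then drives log-growth.
    All parameters are illustrative stand-ins for a real logistic chain."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n_products):
        t += rng.expovariate(1.0 / arrival_gap)
        arrivals.append(t)
    storage_times, queue = [], []
    clock, i = 0.0, 0
    while len(storage_times) < n_products:
        clock += 1.0                                  # daily ordering cycle
        while i < n_products and arrivals[i] <= clock:
            queue.append(arrivals[i]); i += 1
        for _ in range(min(order_batch, len(queue))):
            storage_times.append(clock - queue.pop(0))
    # log10 growth proportional to storage time (simplistic growth model)
    return [mu * s for s in storage_times]

growth = simulate_chain(200, arrival_gap=0.1, order_batch=10)
tail = sorted(growth)[-10:]          # the risk-relevant upper tail
```

The point of the sketch is that successive storage times are generated by one shared queue, so their dependence (and hence the tail of the growth distribution) emerges from the logistics rather than being assumed independent.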

NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

This study assessed the results from the parallel application of two alternate personality models, the Zuckerman-Kuhlman trait model and Bond's Defense Styles, in a sample of 268 Greek medical students (172 women, M age = 22.0 yr., SD = 1.1; 95 men, M age = 22.3 yr., SD = 1.2) in relation to psychopathological symptoms, so as to clarify whether this practice yielded accurate results while avoiding shared variance. Data from both models are cross-checked with canonical correlation analysis to validate whether there was significant conceptual overlap between them that would mean that their parallel use is an ineffective research practice. Following this analysis, factors from both models are utilized to predict variance in sample psychopathology, so as to compare their relative usefulness. Results indicated that the two models did not share a significant amount of variance, while a combination of personality aspects from both models, including Impulsive Sensation-Seeking, Neuroticism-Anxiety, Aggression-Hostility, and Sociability traits and Maladaptive Action, Image Distorting, and Adaptive Action defense styles, predicted high variance in psychopathology symptoms.

Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
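Sampling dependent failure times from a copula, as described, can be sketched in a few lines using a bivariate Gaussian copula with exponential margins. The failure rates and correlation below are invented for illustration; a real analysis would estimate them from equipment failure data in the Bayesian framework the paper describes.

```python
import math
import random

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_dependent_failure_times(rate1, rate2, rho, n, seed=0):
    """Sample pairs of exponential failure times whose dependence is induced
    by a bivariate Gaussian copula with correlation rho (illustrative values)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        # correlated standard normals via a 2x2 Cholesky factor
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)   # correlated uniforms
        # invert the exponential CDF to impose the marginal distributions
        t1 = -math.log(1.0 - u1) / rate1
        t2 = -math.log(1.0 - u2) / rate2
        pairs.append((t1, t2))
    return pairs

pairs = sample_dependent_failure_times(0.01, 0.01, 0.8, 1000)
```

This separation (margins from reliability data, dependence from the copula) is exactly what makes the approach attractive for simulating intercomponent dependencies without a statistical common-cause parameter.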

This report provides a descriptive model for arc-related porphyry molybdenum deposits. Presented within are geological, geochemical, and mineralogical characteristics that differentiate this deposit type from porphyry copper and alkali-feldspar rhyolite-granite porphyry molybdenum deposits. The U.S. Geological Survey's effort to update existing mineral deposit models spurred this research, which is intended to supplement previously published models for this deposit type that help guide mineral-resource and mineral-environmental assessments.

The Value of the World's Ecosystem Services and Natural Capital by Costanza in 1997 is generally regarded as a landmark in the research on valuing ecosystem services. However, the classification of ecosystem services, the method of summing the various services, and the purpose of a static global value have been confronted with many criticisms. Based on a summary of these criticisms, suggestions, related function assessments and further study directions, the sustainability of ecosystem services is presented. The two basic indicators in ecology, productivity and biodiversity, respectively characterizing the abilities of producing and self-organizing, not only represent the internal function of an ecosystem, but are also proportional to its external function of supporting and providing for human life. On presenting the general form of ecosystem services assessment, this paper improves the mathematical formula by giving a function-adjusting coefficient composed of productivity and biodiversity. Theoretically, the integration of the two indicators reflects the changes of ecosystem services at spatial and temporal scales, can physically assess the sustainability of ecosystem services, and builds a firm scientific foundation for the value assessment of ecosystem services. Objectively, its application should be strictly tested in the next step. Keywords: ecosystem services; theoretical model; sustainability; bio-productivity; biodiversity
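An adjusted valuation formula of the kind described could take a form like the following purely illustrative sketch; the symbols A_i, v_i, P_i and B_i are assumptions for exposition, not the authors' notation:

```latex
V_{\mathrm{adj}} \;=\; \sum_{i} A_i \, v_i \, k_i,
\qquad k_i \;=\; f\!\left(P_i,\; B_i\right)
```

where A_i is the area of ecosystem type i, v_i its per-area service value, and k_i the function-adjusting coefficient driven by the productivity P_i and biodiversity B_i of that ecosystem, so that changes in the two ecological indicators propagate into the valuation over space and time.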

Highlights: • A psychometric model to evaluate ‘safety climate’ at nuclear research facilities. • The model presented evidence of good psychometric qualities. • The model was applied to nuclear research facilities in Brazil. • Some ‘safety culture’ weaknesses were detected in the assessed organization. • A potential tool to develop safety management programs in nuclear facilities. - Abstract: A safe and reliable operation of nuclear power plants depends not only on technical performance, but also on the people and the organization. Organizational factors have been recognized as the main causal mechanisms of accidents by research organizations throughout the USA, Europe and Japan. Deficiencies related to these factors reveal weaknesses in the organization’s safety culture. A significant number of instruments to assess safety culture, based on psychometric models that evaluate safety climate through questionnaires and grounded in reliability and validity evidence, have been published in the health and ‘safety at work’ areas. However, there are few safety culture assessment instruments with these characteristics (reliability and validity) available in the nuclear literature. Therefore, this work proposes an instrument to evaluate, with valid and reliable measures, the safety climate of nuclear research facilities. The instrument was developed based on methodological principles applied to research modeling, and its psychometric properties were evaluated by a reliability analysis and validation of content, face and construct. The instrument was applied to an important nuclear research organization in Brazil. This organization comprises 4 research reactors and many nuclear laboratories. The survey results made possible a demographic characterization and the identification of some possible safety culture weaknesses, pointing out potential areas to be improved in the assessed organization. Good evidence of reliability with Cronbach's alpha
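Reliability analysis of the kind reported typically rests on Cronbach's alpha, which can be computed directly from a respondents-by-items score matrix. The Likert data below are invented for illustration; only the alpha formula itself is standard.

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a respondents-by-items score matrix.
    responses is a list of rows, one per respondent; population variances
    are used consistently for items and totals."""
    k = len(responses[0])
    item_vars = [pvariance([row[j] for row in responses]) for j in range(k)]
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1.0 - sum(item_vars) / total_var)

# Invented 5-point Likert responses (4 respondents x 3 items)
data = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 1]]
alpha = cronbach_alpha(data)
```

Values near 1 indicate that the items vary together, i.e. measure a common construct; in survey practice, thresholds around 0.7 are often treated as acceptable.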

, number, size, distribution and communication of vessels in dermal skin, epidermal–dermal junctions, the immunoreactivity of peptide nerve fibers, distribution of nociceptive and non-nociceptive fiber classes, and changes in axonal excitability, swines seem to provide the most suitable animal model for pain assessment. Locomotor function, clinical signs, and measurements (respiratory rate, heart rate, blood pressure, temperature, electromyography, behavior (bright/quiet, alert, responsive, depressed, unresponsive, plasma concentration of substance P and cortisol, vocalization, lameness, and axon reflex vasodilatation by laser Doppler imaging have been used to assess pain, but none of these evaluations have proved entirely satisfactory. It is necessary to identify new methods for evaluating pain in large animals (particularly pigs, because of their similarities to humans. This could lead to improved assessment of pain and improved analgesic treatment for both humans and laboratory animals.Keywords: pain assessment, experimental model, translational research

The next generation of process engineers will face a new set of challenges, with the need to devise new bioprocesses, with high selectivity for pharmaceutical manufacture, and for lower value chemicals manufacture based on renewable feedstocks. In this paper the current and predicted future roles of process system engineering and life cycle inventory and assessment in the design, development and improvement of sustainable bioprocesses are explored. The existing process systems engineering software tools will prove essential to assist this work. However, the existing tools will also require further development such that they can also be used to evaluate processes against sustainability metrics, as well as economics as an integral part of assessments. Finally, property models will also be required based on compounds not currently present in existing databases. It is clear that many new opportunities...

The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
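Turning marginal parameter PDFs into a small scenario set that spans the joint uncertainty might be sketched as below. The normal distributions for AEEI and GDP growth, and the quantile-matching selection rule, are assumptions made for illustration, not the study's actual method or numbers.

```python
import random

def draw_parameters(n, seed=0):
    """Draw joint samples of two illustrative scenario parameters:
    AEEI (%/yr) and GDP growth (%/yr). Distributions are invented."""
    rng = random.Random(seed)
    return [(rng.gauss(1.0, 0.4), rng.gauss(2.5, 1.0)) for _ in range(n)]

def span_scenarios(samples, quantiles=(0.05, 0.5, 0.95)):
    """Pick a small scenario set spanning the joint uncertainty by taking
    matching quantiles of each marginal (a crude stand-in for a designed set)."""
    aeei = sorted(s[0] for s in samples)
    gdp = sorted(s[1] for s in samples)
    n = len(samples)
    return [(aeei[int(q * (n - 1))], gdp[int(q * (n - 1))]) for q in quantiles]

scenarios = span_scenarios(draw_parameters(1000))   # low / central / high
```

A designed scenario set would account for correlation between parameters rather than pairing marginal quantiles, which is precisely the "usefully span the joint uncertainty space" problem the abstract raises.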

showed some unexpected results, where climate models predicting the largest increase in net precipitation did not result in the largest increase in groundwater heads. This was found to be the result of different initial conditions (1990-2010) for the various climate models. In some areas, a combination of a high initial groundwater head and an increase in precipitation towards 2021-2050 resulted in a rise in groundwater head that reached the drainage or the surface water system. This will increase the exchange from the groundwater to the surface water system, but reduce the rise in groundwater heads. An alternative climate model, with a lower initial head, can thus predict a higher increase in the groundwater head, although the increase in precipitation is lower. This illustrates an extra dimension in the uncertainty assessment, namely the climate models' capability of simulating the current climatic conditions in a way that can reproduce the observed hydrological response. Højberg, AL, Troldborg, L, Stisen, S, et al. (2012) Stakeholder driven update and improvement of a national water resources model - http://www.sciencedirect.com/science/article/pii/S1364815212002423 Seaby, LP, Refsgaard, JC, Sonnenborg, TO, et al. (2012) Assessment of robustness and significance of climate change signals for an ensemble of distribution-based scaled climate projections (submitted) Journal of Hydrology Stisen, S, Højberg, AL, Troldborg, L et al., (2012): On the importance of appropriate rain-gauge catch correction for hydrological modelling at mid to high latitudes - http://www.hydrol-earth-syst-sci.net/16/4157/2012/

This paper reviews the work carried out at the University of Liverpool to assess the use of CFD methods for aircraft flight dynamics applications. Three test cases are discussed in the paper, namely, the Standard Dynamic Model, the Ranger 2000 jet trainer and the Stability and Control Unmanned Combat Air Vehicle. For each of these, a tabular aerodynamic model based on CFD predictions is generated along with validation against wind tunnel experiments and flight test measurements. The main purpose of the paper is to assess the validity of the tables of aerodynamic data for the force and moment prediction of realistic aircraft manoeuvres. This is done by generating a manoeuvre based on the tables of aerodynamic data, and then replaying the motion through a time-accurate computational fluid dynamics calculation. The resulting forces and moments from these simulations were compared with predictions from the tables. As the latter are based on a set of steady-state predictions, the comparisons showed perfect agreement for slow manoeuvres. As manoeuvres became more aggressive some disagreement was seen, particularly during periods of large rates of change in attitudes. Finally, the Ranger 2000 model was used on a flight simulator.
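The tabular aerodynamic model at the heart of this comparison is essentially interpolation in tables of steady-state CFD results. A one-dimensional sketch with an invented lift-coefficient table (not the Standard Dynamic Model data):

```python
from bisect import bisect_left

def interp_table(xs, ys, x):
    """Piecewise-linear lookup in a 1-D aerodynamic table (quasi-steady model).
    xs must be sorted ascending; values beyond the ends extrapolate linearly."""
    i = max(1, min(bisect_left(xs, x), len(xs) - 1))
    x0, x1 = xs[i - 1], xs[i]
    frac = (x - x0) / (x1 - x0)
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

# Illustrative CFD-derived lift coefficient vs angle of attack (deg);
# numbers are invented for the sketch.
alpha_deg = [0.0, 4.0, 8.0, 12.0, 16.0]
cl_table  = [0.10, 0.45, 0.80, 1.05, 1.15]

cl = interp_table(alpha_deg, cl_table, 6.0)   # quasi-steady CL at alpha = 6 deg
```

Because every lookup returns a steady-state value, such a model cannot represent unsteady effects during rapid attitude changes, which is exactly why the paper's replayed time-accurate CFD diverges from the tables in aggressive manoeuvres.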

Full Text Available In this proof-of-concept study we focus on linking large-scale climate and permafrost simulations to small-scale engineering projects, bridging the gap between climate and permafrost sciences on the one hand and, on the other, technical recommendations for the adaptation of planned infrastructure to climate change in a region generally underlain by permafrost. We present the current and future state of permafrost in Greenland as modelled numerically with the GIPL model driven by HIRHAM climate projections up to 2080. We develop a concept called Permafrost Thaw Potential (PTP), defined as the potential active layer increase due to climate warming and surface alterations. PTP is then used in a simple risk assessment procedure useful for engineering applications. The modelling shows that climate warming will result in continuing wide-spread permafrost warming and degradation in Greenland, in agreement with present observations. We provide examples of application of the risk zone assessment approach for the two towns of Sisimiut and Ilulissat, both classified with high PTP.

Vulnerability, as the product of exposure and susceptibility, is a key factor of the flood risk equation. Furthermore, the estimation of flood loss is very sensitive to the choice of the vulnerability model. Still, in contrast to elaborate hazard simulations, vulnerability is often considered in a simplified manner concerning the spatial resolution and geo-location of exposed objects as well as the susceptibility of these objects at risk. Usually, area specific potential flood loss is quantified on the level of aggregated land-use classes, and both hazard intensity and resistance characteristics of affected objects are represented in highly simplified terms. We investigate the potential of 3D City Models and spatial features derived from remote sensing data to improve the differentiation of vulnerability in flood risk assessment. 3D City Models are based on CityGML, an application scheme of the Geography Markup Language (GML), which represents the 3D geometry, 3D topology, semantics and appearance of objects on different levels of detail. As such, 3D City Models offer detailed spatial information which is useful to describe the exposure and to characterize the susceptibility of residential buildings at risk. This information is further consolidated with spatial features of the building stock derived from remote sensing data. Using this database a spatially detailed flood vulnerability model is developed by means of data-mining. Empirical flood damage data are used to derive and to validate flood susceptibility models for individual objects. We present first results from a prototype application in the city of Dresden, Germany. The vulnerability modeling based on 3D City Models and remote sensing data is compared i) to the generally accepted good engineering practice based on area specific loss potential and ii) to a highly detailed representation of flood vulnerability based on a building typology using urban structure types. Comparisons are drawn in terms of
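One common way to encode object-level susceptibility, which the data-mining step described above could refine, is a stage-damage curve per urban structure type. The curves, type names, and exposure value below are invented for illustration:

```python
# Illustrative stage-damage curves per building type: (depth in m, damage
# fraction of building value). Both types and numbers are invented.
DAMAGE_CURVES = {
    "detached": [(0.0, 0.00), (0.5, 0.10), (1.0, 0.25), (2.0, 0.45)],
    "row":      [(0.0, 0.00), (0.5, 0.15), (1.0, 0.35), (2.0, 0.60)],
}

def damage_fraction(building_type, depth_m):
    """Linear interpolation on a type-specific stage-damage curve."""
    curve = DAMAGE_CURVES[building_type]
    if depth_m <= curve[0][0]:
        return curve[0][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if depth_m <= d1:
            return f0 + (f1 - f0) * (depth_m - d0) / (d1 - d0)
    return curve[-1][1]          # depths beyond the curve saturate

# Exposure (building value) would come from the 3D City Model attributes
loss = damage_fraction("row", 0.75) * 250000.0
```

In the approach the abstract describes, the building type and value would be read from CityGML semantics and remote sensing features, so each object gets its own curve instead of an aggregated land-use class value.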

Full Text Available Evapotranspiration is one of the major components of the water balance and has been identified as a key factor in hydrological modelling. For this reason, several methods have been developed to calculate the reference evapotranspiration (ET0). In modelling reference evapotranspiration it is inevitable that both model and data input will present some uncertainty. Whatever model is used, the errors in the input will propagate to the output of the calculated ET0. Neglecting information about estimation uncertainty, however, may lead to improper decision-making and water resources management. One geostatistical approach to spatial analysis is stochastic simulation, which draws alternative, equally probable realizations of a regionalized variable. Differences between the realizations provide a measure of spatial uncertainty and allow an error propagation analysis to be carried out. Among the evapotranspiration models, the Hargreaves-Samani model was used.

The aim of this paper was to assess the spatial uncertainty of a monthly reference evapotranspiration model resulting from the uncertainties in the input attributes (mainly temperature) at the regional scale. A case study was presented for the Calabria region (southern Italy). Temperature data were jointly simulated by conditional turning bands simulation with elevation as external drift, and 500 realizations were generated.

The ET0 was then estimated for each of the 500 realizations of the input variables, and the ensemble of the model outputs was used to infer the reference evapotranspiration probability distribution function. This approach allowed us to delineate the areas characterized by greater uncertainty and to improve supplementary sampling strategies and ET0 value predictions.
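The Hargreaves-Samani model used in the study above has a compact standard form, ET0 = 0.0023 Ra (Tmean + 17.8) sqrt(Tmax - Tmin), which is straightforward to apply to each simulated temperature realization. In this sketch the extraterrestrial radiation Ra is supplied in evaporation-equivalent units (mm/day) and the example values are illustrative:

```python
import math

def hargreaves_samani_et0(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day).
    ra is extraterrestrial radiation in mm/day evaporation equivalent;
    0.0023 is the standard published coefficient. Temperatures in deg C."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Illustrative monthly values (not Calabria data)
et0 = hargreaves_samani_et0(t_mean=20.0, t_max=25.0, t_min=15.0, ra=12.0)
```

Running this function over all 500 jointly simulated temperature fields yields the per-pixel ET0 ensemble from which the probability distribution and uncertainty maps are derived.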

This study aimed to assess the reliability of measurements performed on three-dimensional (3D) virtual models of maxillary defects obtained using cone-beam computed tomography (CBCT) and 3D optical scanning. Mechanical cavities simulating maxillary defects were prepared on the hard palate of nine cadavers. Images were obtained using a CBCT unit at three different fields of view (FOVs) and voxel sizes: 1) 60 × 60 mm FOV, 0.125 mm³ (FOV60); 2) 80 × 80 mm FOV, 0.160 mm³ (FOV80); and 3) 100 × 100 mm FOV, 0.250 mm³ (FOV100). Superimposition of the images was performed using software called VRMesh Design. Automated volume measurements were conducted, and differences between surfaces were demonstrated. Silicone impressions obtained from the defects were also scanned with a 3D optical scanner. Virtual models obtained using VRMesh Design were compared with impressions obtained by scanning the silicone models. Gold standard volumes of the impression models were then compared with the CBCT and 3D scanner measurements. Further, the general linear model was used, and the significance level was set to p = 0.05. A comparison of the results obtained by the observers and methods revealed p values smaller than 0.05, suggesting that the measurement variations were caused by both methods and observers, along with the different cadaver specimens used. The 3D scanner measurements were closer to the gold standard measurements than the CBCT measurements. In the assessment of artificially created maxillary defects, the 3D scanner measurements were more accurate than the CBCT measurements.

Full Text Available Abstract. Background: Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient's class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods: We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results: A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions: (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
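A standard concordance metric for the survival-time prediction approach is Harrell's C-index, sketched below in its simplest form: pairs of patients are comparable only when the patient with the shorter time has an observed event, and concordance means the higher predicted risk went to the shorter survival. The toy data are invented.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: fraction of comparable patient pairs in which the higher
    predicted risk corresponds to the shorter survival time.
    events[i] is 1 for an observed event, 0 for a censored time."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if patient i had an event before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# Invented example: 3 patients, last one censored; risks perfectly ordered
c = concordance_index([5, 10, 15], [1, 1, 0], [0.9, 0.5, 0.2])
```

C = 0.5 corresponds to random risk scores and C = 1 to perfect concordance; unlike a binary classifier, the metric needs no survival-time threshold, which is the contrast the abstract draws.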

Due to the high importance of biofilms in river ecosystems, assessment of pesticides' adverse effects is necessary but is impaired by the high variability and poor reproducibility of both natural biofilms and those developed in the laboratory. We constructed a model biofilm to evaluate the effects of pesticides, consisting of the cultured microbial strains Pedobacter sp. 7-11, Aquaspirillum sp. T-5, Stenotrophomonas sp. 3-7, Achnanthes minutissima N71, Nitzschia palea N489, and/or Cyclotella meneghiniana N803. Microbial cell numbers, esterase activity, chlorophyll-a content, and the community structure of the model biofilm were examined and found to be useful as biological factors for evaluating pesticide effects. The model biofilm was formed through the cooperative interaction of bacteria and diatoms, and a preliminary experiment using the herbicide atrazine, which inhibits diatom growth, indicated that the adverse effect on diatoms indirectly inhibited bacterial growth and activity and, thus, the formation of the model biofilm. Toxicological tests using model biofilms could be useful for evaluating pesticide effects and complementary to studies on actual river biofilms.

The projected service life of weapons in the US nuclear stockpile will exceed the original design life of their critical components. Interim metrics are needed to describe weapon states for use in simulation models of the nuclear weapons complex. The authors present an approach to this problem based upon the theory of approximate reasoning (AR) that allows meaningful assessments to be made in an environment where reliability models are incomplete. AR models are designed to emulate the inference process used by subject matter experts. The emulation is based upon a formal logic structure that relates evidence about components. This evidence is translated using natural language expressions into linguistic variables that describe membership in fuzzy sets. The authors introduce a metric that measures the acceptability of a weapon to nuclear deterrence planners. Implication rule bases are used to draw a series of forward chaining inferences about the acceptability of components, subsystems and individual weapons. They describe each component in the AR model in some detail and illustrate its behavior with a small example. The integration of the acceptability metric into a prototype model to simulate the weapons complex is also described.
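The approximate-reasoning machinery described, linguistic variables, fuzzy membership functions, and forward-chaining implication rules, can be sketched as follows. The linguistic terms, membership shapes, and rules are invented stand-ins, not the actual stockpile acceptability model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, with support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def acceptability(corrosion_score, age_years):
    """Toy forward-chaining inference: min combines conditions within a rule
    (fuzzy AND); each rule's strength is its antecedent membership.
    All terms and thresholds are invented for illustration."""
    good_condition = tri(corrosion_score, -0.5, 0.0, 0.6)   # "low corrosion"
    young = tri(age_years, -10.0, 0.0, 40.0)                # "within design life"
    old = tri(age_years, 20.0, 60.0, 100.0)
    rule_accept = min(good_condition, young)     # low corrosion AND young -> accept
    rule_review = min(1.0 - good_condition, old) # corroded AND old -> review
    return {"acceptable": rule_accept, "needs_review": rule_review}

verdict = acceptability(corrosion_score=0.1, age_years=25)
```

The point of this structure is the one the abstract makes: natural-language evidence about components maps onto graded memberships, so meaningful inferences remain possible even where quantitative reliability models are incomplete.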

Full Text Available This study presents promising variants of genetic programming (GP), namely linear genetic programming (LGP) and multi expression programming (MEP), to evaluate the liquefaction resistance of sandy soils. Generalized LGP- and MEP-based relationships were developed between the strain energy density required to trigger liquefaction (capacity energy) and the factors affecting the liquefaction characteristics of sands. The correlations were established based on well-established and widely dispersed experimental results obtained from the literature. To verify the applicability of the derived models, they were employed to estimate the capacity energy values for the part of the test results that was not included in the analysis. The external validation of the models was verified using statistical criteria recommended by researchers. Sensitivity and parametric analyses were performed for further verification of the correlations. The results indicate that the proposed correlations are effectively capable of capturing the liquefaction resistance of a number of sandy soils. The developed correlations provide significantly better prediction performance than the models found in the literature. Furthermore, the best LGP and MEP models outperform the optimal traditional GP model. The verification phases confirm the efficiency of the derived correlations for general application to the assessment of the strain energy at the onset of liquefaction.

The PropFan Test Assessment (PTA) program includes flight tests of a propfan power plant mounted on the left wing of a modified Gulfstream II testbed aircraft. A static balance boom is mounted on the right wing tip for lateral balance. Flutter analyses indicate that these installations reduce the wing flutter speed and that torsional stiffening and the installation of a flutter-stabilizing tip boom are required on the left wing for adequate flutter safety margins. Wind tunnel tests of a 1/9th-scale high-speed flutter model of the testbed aircraft were conducted. The test program included the design, fabrication, and testing of the flutter model and the correlation of the flutter test data with analysis results. Excellent correlation with the test data was achieved in posttest flutter analysis using actual model properties. It was concluded that the flutter analysis method used was capable of accurate flutter predictions for both the (symmetric) twin-propfan configuration and the (unsymmetric) single-propfan configuration. The flutter analysis also revealed that the differences between the tested model configurations and the current aircraft design caused the (scaled) model flutter speed to be significantly higher than that of the aircraft, at least for the single-propfan configuration without a flutter boom. Verification of the aircraft final design should, therefore, be based on flutter predictions made with the test-validated analysis methods.

The life cycle of technology is one of the most important indexes for weighing the risk of investment in new technology. Because the life cycle is conditioned by many factors, it carries substantial uncertainty, and traditional assessment methods cannot produce a rational forecast. This paper therefore gives comprehensive consideration to the factors that influence production, makes corresponding modifications to the production function, and establishes a technology life cycle assessment model using the methods of fuzzy mathematics, thereby quantifying the investment risk. The model can serve as one foundational index for investment decision making.

The five-factor model (FFM) of personality is obtaining construct validation, recognition, and practical consideration across a broad domain of fields, including clinical psychology, industrial-organizational psychology, and health psychology. As a result, an array of instruments have been developed and existing instruments are being modified to assess the FFM. In this article, we present an overview and critique of five such instruments (the Goldberg Big Five Markers, the revised NEO Personality Inventory, the Interpersonal Adjective Scales-Big Five, the Personality Psychopathology-Five, and the Hogan Personality Inventory), focusing in particular on their representation of the lexical FFM and their practical application.

To assess the seismic hazard with temporal change in Taiwan, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress-triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than other published models and thus offers decision-makers and public officials an adequate basis for rapid evaluation of and response to future emergency scenarios such as victim relocation and sheltering.
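The long-term component of such a model rests on the BPT (inverse Gaussian) renewal distribution. As a hedged sketch, the conditional probability that a fault quiet for T years ruptures within the next window can be computed by numerically integrating the BPT density; the mean recurrence, aperiodicity, and elapsed time below are hypothetical, not TEM fault parameters.

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time density with mean recurrence mu and aperiodicity alpha."""
    if t <= 0:
        return 0.0
    return math.sqrt(mu / (2 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha, n=20000):
    """CDF by trapezoidal integration (adequate for a sketch)."""
    if t <= 0:
        return 0.0
    h = t / n
    s = 0.5 * (bpt_pdf(0, mu, alpha) + bpt_pdf(t, mu, alpha))
    s += sum(bpt_pdf(i * h, mu, alpha) for i in range(1, n))
    return s * h

def conditional_rupture_prob(elapsed, window, mu, alpha):
    """P(rupture within `window` years | quiet for `elapsed` years)."""
    f_t = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f_t) / (1.0 - f_t)

# hypothetical fault: 200-yr mean recurrence, alpha = 0.5, 150 yr elapsed, 50-yr window
p = conditional_rupture_prob(150, 50, 200, 0.5)
```

The short-term Coulomb term would then perturb this probability after a nearby large event; that coupling is beyond this sketch.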

We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formation. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure. For roughly the same number of
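The maximin LHS design mentioned for the statistical-learning approach builds on plain Latin hypercube sampling, which can be sketched as follows; the maximin distance criterion is omitted here for brevity, and the sample counts are arbitrary.

```python
import random

def latin_hypercube(n_samples, n_dims, seed=42):
    """Plain LHS on the unit hypercube: each dimension is split into
    n_samples equal-probability strata, one sample per stratum, with the
    strata randomly permuted across dimensions."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        perm = list(range(n_samples))
        rng.shuffle(perm)
        for i in range(n_samples):
            # place the point uniformly at random inside stratum perm[i]
            samples[i][d] = (perm[i] + rng.random()) / n_samples
    return samples

design = latin_hypercube(10, 3)
```

A maximin variant would generate many such designs and keep the one maximizing the minimum pairwise distance between points.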

Current measures used to estimate the risks of toxic chemicals are not relevant to the goals of the environmental protection process, and thus ecological risk assessment (ERA) is not used as extensively as it should be as a basis for cost-effective management of environmental resources. Appropriate population models can provide a powerful basis for expressing ecological risks that better inform the environmental management process and thus are more likely to be used by managers. Here we provide at least five reasons why population modeling should play an important role in bridging the gap between what we measure and what we want to protect. We then describe six actions needed for its implementation into management-relevant ERA.

Results are presented of the 1973 NASA Mission Model Analysis. The purpose was to obtain an economic assessment of using the Shuttle to accommodate the payloads and requirements as identified by the NASA Program Offices and the DoD. The 1973 Payload Model represents a baseline candidate set of future payloads which can be used as a reference base for planning purposes. The cost of implementing these payload programs utilizing the capabilities of the shuttle system is analyzed and compared with the cost of conducting the same payload effort using expendable launch vehicles. There is a net benefit of 14.1 billion dollars as a result of using the shuttle during the 12-year period as compared to using an expendable launch vehicle fleet.

Photocatalysis employing titanium dioxide is a useful method to degrade a wide variety of organic and inorganic pollutants from water and air. However, the application of this advanced oxidation process at industrial scale requires the development of mathematical models to design and scale-up photocatalytic reactors. In the present work, intrinsic kinetic expressions previously obtained in a laboratory reactor are employed to predict the performance of a bench scale reactor of different configuration and operating conditions. 4-Chlorophenol was chosen as the model pollutant. The toxicity and biodegradability of the irradiated mixture in the bench photoreactor was also assessed. Good agreement was found between simulation and experimental data. The root mean square error of the estimations was 9.9%. The photocatalytic process clearly enhances the biodegradability of the reacting mixture, and the initial toxicity of the pollutant was significantly reduced by the treatment.
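The reported 9.9% error can in principle be reproduced with a relative root-mean-square error between predicted and measured concentrations. The normalization by the mean of the observations is one common convention (the abstract does not state its exact definition), and the concentration values below are hypothetical.

```python
import math

def rmse_percent(predicted, observed):
    """RMSE of predictions as a percentage of the mean observed value."""
    n = len(observed)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# hypothetical 4-chlorophenol concentrations (mg/L) over irradiation time
obs  = [1.00, 0.72, 0.51, 0.36, 0.25]
pred = [1.00, 0.70, 0.55, 0.33, 0.27]
err = rmse_percent(pred, obs)
```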

Urban planning solutions and decisions have large-scale significance for ecological sustainability (eco-efficiency): the consumption of energy and other natural resources, the production of greenhouse gas and other emissions, and the costs caused by urban form. Climate change brings new and growing challenges for urban planning. The EcoBalance model was developed to assess the sustainability of urban form and has been applied at various planning levels: regional plans, local master plans and detailed plans. The EcoBalance model estimates the total consumption of energy and other natural resources, the production of emissions and wastes, and the costs caused directly and indirectly by urban form on a life cycle basis. The results of the case studies provide information about the ecological impacts of various solutions in urban development. (orig.)

Assessing agreement is often of interest in biomedical sciences to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We investigate the agreement structure for a class of frailty models that are commonly used for analyzing correlated survival outcomes. Conditional on the shared frailty, bivariate survival times are assumed to be independent with Weibull baseline hazard distribution. We present the analytic expressions for the concordance correlation coefficient (CCC) for several commonly used frailty distributions. Furthermore, we develop a time-dependent CCC for measuring agreement between survival times among subjects who survive beyond a specified time point. We characterize the temporal pattern in the time-dependent CCC for various frailty distributions. Our results provide a better understanding of the agreement structure implied by different frailty models.
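For intuition, the sample version of Lin's concordance correlation coefficient (as opposed to the analytic, frailty-distribution-specific expressions derived in the paper) can be computed directly from paired measurements:

```python
def concordance_cc(x, y):
    """Lin's sample concordance correlation coefficient between paired
    measurements: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```

Perfect agreement gives 1, perfect reversal gives -1, and any location or scale shift between raters pulls the value below the Pearson correlation.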

A new computer model, the GCR Event-based Risk Model code (GERMcode), was developed to describe biophysical events from high-energy protons and high charge and energy (HZE) particles that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the GERMcode, the biophysical description of the passage of HZE particles in tissue and shielding materials is made with a stochastic approach that includes both particle track structure and nuclear interactions. The GERMcode accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections. For NSRL applications, the GERMcode evaluates a set of biophysical properties, such as the Poisson distribution of particles or delta-ray hits for a given cellular area and particle dose, the radial dose on tissue, and the frequency distribution of energy deposition in a DNA volume. By utilizing the ProE/Fishbowl ray-tracing analysis, the GERMcode will be used as a bi-directional radiation transport model for future spacecraft shielding analysis in support of Mars mission risk assessments. Recent radiobiological experiments suggest the need for new approaches to risk assessment that include time-dependent biological events due to the signaling times for activation and relaxation of biological processes in cells and tissue. Thus, the tracking of the temporal and spatial distribution of events in tissue is a major goal of the GERMcode in support of the simulation of biological processes important in GCR risk assessments. In order to validate our approach, basic radiobiological responses such as cell survival curves, mutation, chromosomal
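The Poisson distribution of hits per cellular area mentioned above follows directly from the particle fluence and the cell's cross-sectional area. A sketch with hypothetical beam values, not actual NSRL parameters:

```python
import math

def poisson_hit_distribution(fluence_per_cm2, cell_area_cm2, kmax=10):
    """P(k particle traversals of one cell) for mean lambda = fluence * area,
    the standard counting model for hits per cellular area at a given dose."""
    lam = fluence_per_cm2 * cell_area_cm2
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(kmax + 1)]

# e.g. fluence 1e6 particles/cm^2 through a 100 um^2 (1e-6 cm^2) nucleus -> lambda = 1
probs = poisson_hit_distribution(1e6, 1e-6)
```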

The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash-flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following characteristics analogous to the Yucca Mountain repository site: (1) Analogous source--UO2 uranium ore deposit = spent nuclear fuel in the repository; (2) Analogous geology--fractured, welded, and altered rhyolitic ash-flow tuffs; (3) Analogous climate--semiarid to arid; (4) Analogous setting--volcanic tuffs overlying carbonate rocks; (5) Analogous geochemistry--oxidizing conditions; and (6) Analogous hydrogeology--the ore deposit lies in the unsaturated zone above the water table.

The Fundamentals of Laparoscopic surgery (FLS) is a validated program for the teaching and evaluation of the basic knowledge and skills required to perform laparoscopic surgery. The educational component includes didactic, Web-based material and a simple, affordable physical simulator with specific tasks and a recommended curriculum. FLS certification requires passing a written multiple-choice examination and a proctored manual skills examination in the FLS simulator. The metrics for the FLS program have been rigorously validated to meet the highest educational standards, and certification is now a requirement for the American Board of Surgery. This article summarizes the validation process and the FLS-related research that has been done to date. The Fundamentals of Endoscopic Surgery is a program modeled after FLS with a similar mission for flexible endoscopy. It is currently in the final stages of development and will be launched in April 2010. The program also includes learning and assessment components, and is undergoing the same meticulous validation process as FLS. These programs serve as models for the creation of simulation-based tools to teach skills and assess competence with the intention of optimizing patient safety and the quality of surgical education.

Previous studies have suggested that species responded individualistically to the climate change of the last glaciation, expanding and contracting their ranges independently. Consequently, many researchers have concluded that community composition is plastic over time. Here I quantitatively assess changes in community composition over broad timescales and assess the effect of range shifts on community composition. Data on Pleistocene mammal assemblages from the FAUNMAP database were divided into four time periods (preglacial, full glacial, postglacial, and modern). Simulation analyses were designed to determine whether the degree of change in community composition is consistent with independent range shifts, given the distribution of range shifts observed. Results indicate that many of the communities examined in the United States were more similar through time than expected if individual range shifts were completely independent. However, in each time transition examined, there were areas of nonanalogue communities. I conducted sensitivity analyses to explore how the results were affected by the assumptions of the null model. Conclusions about changes in mammalian distributions and community composition are robust with respect to the assumptions of the model. Thus, whether because of biotic interactions or because of common environmental requirements, community structure through time is more complex than previously thought.
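The simulation logic can be sketched with a simple null model: if species shift their ranges independently, a later assemblage behaves like a random draw of equal richness from the regional pool, and observed between-period similarity can be compared against that expectation. Species names and pool size below are hypothetical, not FAUNMAP data.

```python
import random

def jaccard(a, b):
    """Jaccard similarity between two species assemblages."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def null_similarity(community, species_pool, n_iter=1000, seed=0):
    """Mean similarity expected under fully independent range shifts:
    the later assemblage is a random equal-richness draw from the pool."""
    rng = random.Random(seed)
    sims = [jaccard(community, rng.sample(species_pool, len(community)))
            for _ in range(n_iter)]
    return sum(sims) / n_iter

pool = [f"sp{i}" for i in range(50)]
obs = jaccard(["sp1", "sp2", "sp3"], ["sp1", "sp2", "sp4"])  # 2 shared of 4 total
null = null_similarity(["sp1", "sp2", "sp3"], pool)
```

Observed similarity well above the null, as in the study, suggests communities were more conserved than independent shifts would allow.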

Full Text Available Visual quality measurement is one of the fundamental and important issues in numerous applications of image and video processing. In this paper, based on the assumption that the human visual system is sensitive to image structures (edges) and image local luminance (light stimulation), we propose a new perceptual image quality assessment (PIQA) measure based on the total variation (TV) model (TVPIQA) in the spatial domain. The proposed measure compares TVs between a distorted image and its reference image to represent the loss of image structural information. Because of the good performance of the TV model in describing edges, the proposed TVPIQA measure captures image structure information very well. In addition, the energy of enclosed regions in the difference image between the reference image and its distorted version is used to measure the missing luminance information, to which the human visual system is sensitive. Finally, we validate the performance of the TVPIQA measure on the Cornell-A57, IVC, TID2008, and CSIQ databases and show that it outperforms recent state-of-the-art image quality assessment measures.
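The core TV computation behind such a measure is straightforward. Below is a sketch of the anisotropic total variation of a grayscale image; the paper's exact discretization is not specified here, and comparing the TV of a distorted image against that of its reference is the quality-relevant step.

```python
def total_variation(img):
    """Anisotropic total variation of a 2-D grayscale image (list of rows):
    sum of absolute horizontal and vertical neighbour differences."""
    h, w = len(img), len(img[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                tv += abs(img[y + 1][x] - img[y][x])
    return tv

flat = [[5, 5], [5, 5]]   # no structure -> TV = 0
edge = [[0, 9], [0, 9]]   # one vertical edge -> large TV
```

Blurring lowers TV relative to the reference, while noise raises it, which is why the TV difference tracks structural distortion.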

Full Text Available Bankruptcy risk has been the subject of many research studies that aim at identifying the time of bankruptcy, the factors that contribute to this state, and the indicators that best express this orientation. Threats to enterprises require that managers continually know the economic and financial situation and the vulnerable areas with development potential. Managers need to identify and properly manage the threats that would prevent them from achieving their targets. The methods known in the literature for assessing and evaluating bankruptcy risk include static, functional, strategic, scoring, and nonfinancial models. This article addresses the Altman and Conan-Holder models, known internationally, as well as a model developed at the national level by two professors from prestigious universities in our country: the Robu-Mironiuc model. Those models are applied to profit and loss account and balance sheet data of the Turism Covasna company, over which the bankruptcy risk analysis is performed. The results of the analysis are interpreted while trying to formulate solutions for the economic and financial viability of the entity.
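For reference, the original Altman (1968) Z-score for publicly traded manufacturers combines five financial ratios with fixed weights; the input figures in the sketch below are hypothetical, not the Turism Covasna data.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_liabilities, sales, total_assets):
    """Original Altman (1968) Z-score: Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5."""
    x1 = working_capital / total_assets    # liquidity
    x2 = retained_earnings / total_assets  # cumulative profitability
    x3 = ebit / total_assets               # operating efficiency
    x4 = market_equity / total_liabilities # leverage
    x5 = sales / total_assets              # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    """Conventional cut-offs: distress below 1.81, safe above 2.99, grey between."""
    return "distress" if z < 1.81 else ("safe" if z > 2.99 else "grey")

z = altman_z(100, 200, 150, 400, 500, 1000, 1000)  # hypothetical balance-sheet items
```

The Conan-Holder and Robu-Mironiuc models follow the same weighted-ratio pattern with different ratios and coefficients.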

Intermediate results from an ongoing health technology assessment exercise of a simulation model of paediatric cardiomyopathy are reported. Comprehensive data on paediatric cardiomyopathy/heart failure, treatment options, incidence and prevalence, prognoses for different outcomes to be expected were collected. Based on this knowledge, a detailed clinical pathway model was developed and validated against the clinical workflow in a tertiary paediatric care hospital. It combines three disease stages and various treatment options with estimates of the probabilities of a child moving from one stage to another. To reflect the complexity of initial decision taking by clinicians, a three-stage Markov model was combined with a decision tree approach - a Markov decision process. A Markov Chain simulation tool was applied to compare estimates of transition probabilities and cost data of present standard of care treatment options for a cohort of children over ten years with expected improvements from using a clinical decision support tool based on the disease model under development. Early results indicate a slight increase of overall costs resulting from the extra cost of using such a tool in spite of some savings to be expected from improved care. However, the intangible benefits in life years saved of severely ill children and the improvement in QoL to be expected for moderately ill ones should more than compensate for this.
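The three-stage Markov cohort logic can be sketched as follows; the transition probabilities and stage costs are hypothetical placeholders, not the study's estimates, and the decision-tree layer on top is omitted.

```python
def markov_cohort(trans, costs_per_stage, start, n_cycles):
    """Cohort simulation over a clinical Markov model.
    trans[i][j] is the per-cycle probability of moving from stage i to j;
    returns the final stage distribution and accumulated expected cost."""
    state = list(start)
    total_cost = 0.0
    for _ in range(n_cycles):
        total_cost += sum(p * c for p, c in zip(state, costs_per_stage))
        state = [sum(state[i] * trans[i][j] for i in range(len(state)))
                 for j in range(len(state))]
    return state, total_cost

# stages: 0 = moderate, 1 = severe, 2 = dead (absorbing); 10 yearly cycles
T = [[0.85, 0.10, 0.05],
     [0.20, 0.70, 0.10],
     [0.00, 0.00, 1.00]]
final, cost = markov_cohort(T, [1000.0, 5000.0, 0.0], [1.0, 0.0, 0.0], 10)
```

Comparing two such runs, one with standard-of-care probabilities and one with the decision-support tool's improved probabilities plus its extra cost, gives the incremental cost estimate described above.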

Agricultural droughts are often characterized by soil moisture in the root zone of the soil, but crop needs are rarely factored into the analysis. Since water needs vary with crops, agricultural drought incidences in a region can be characterized better if crop responses to soil water deficits are also accounted for in the drought index. This study investigates agricultural droughts driven by plant stress due to soil moisture deficits using crop stress functions available in the literature. Crop water stress is assumed to begin at the soil moisture level corresponding to incipient stomatal closure, and reaches its maximum at the crop's wilting point. Using available location-specific crop acreage data, a weighted crop water stress function is computed. A new probabilistic agricultural drought index is then developed within a hidden Markov model (HMM) framework that provides model uncertainty in drought classification and accounts for time dependence between drought states. The proposed index allows probabilistic classification of the drought states and takes due cognizance of the stress experienced by the crop due to soil moisture deficit. The capabilities of HMM model formulations for assessing agricultural droughts are compared to those of current drought indices such as standardized precipitation evapotranspiration index (SPEI) and self-calibrating Palmer drought severity index (SC-PDSI). The HMM model identified critical drought events and several drought occurrences that are not detected by either SPEI or SC-PDSI, and shows promise as a tool for agricultural drought studies.
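A common piecewise-linear form of the crop water stress function, aggregated by acreage as described above, can be sketched like this; the moisture thresholds and acreages are hypothetical, and the HMM classification layer is not shown.

```python
def crop_stress(theta, theta_critical, theta_wilt):
    """Water stress for one crop: 0 above incipient stomatal closure,
    rising linearly to 1 at the wilting point (one common functional form)."""
    if theta >= theta_critical:
        return 0.0
    if theta <= theta_wilt:
        return 1.0
    return (theta_critical - theta) / (theta_critical - theta_wilt)

def weighted_stress(theta, crops):
    """Acreage-weighted stress; crops = [(acreage, theta_critical, theta_wilt), ...]."""
    total_area = sum(a for a, _, _ in crops)
    return sum(a * crop_stress(theta, tc, tw) for a, tc, tw in crops) / total_area

# hypothetical: corn on 600 ac, soybean on 400 ac, soil moisture theta = 0.18
crops = [(600, 0.25, 0.10), (400, 0.20, 0.08)]
s = weighted_stress(0.18, crops)
```

The resulting stress series is what the HMM would then segment into probabilistic drought states.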

CORSIM is a large simulator for vehicular traffic, and is being studied with respect to its ability to successfully model and predict behavior of traffic in a 36 block section of Chicago. Inputs to the simulator include information about street configuration, driver behavior, traffic light timing, turning probabilities at each corner and distributions of traffic ingress into the system. This work is described in more detail in the article Fast Simulators for Assessment and Propagation of Model Uncertainty also in these proceedings. The focus of this conference poster is on the computational aspects of this problem. In particular, we address the description of the full conditional distributions needed for implementation of the MCMC algorithm and, in particular, how the constraints can be incorporated; details concerning the run time and convergence of the MCMC algorithm; and utilisation of the MCMC output for prediction and uncertainty analysis concerning the CORSIM computer model. As this last is the ultimate goal, it is worth emphasizing that the incorporation of all uncertainty concerning inputs can significantly affect the model predictions. (Author)

Full Text Available A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model, one must trust that the software it is built from is built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of defect reports and defect fixes in several versions of leading global climate models by collecting defect data from bug tracking systems and version control repository comments. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. We discuss the implications of our findings for the assessment of climate model software trustworthiness.
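Defect density comparisons of this kind reduce to defects per thousand source lines of code (KLOC); the counts below are hypothetical, not the study's measurements.

```python
def defect_density(defects, sloc):
    """Defects per thousand source lines of code (KLOC)."""
    return 1000.0 * defects / sloc

# hypothetical counts: a climate model vs. a similarly sized open-source project
model_dd = defect_density(24, 400_000)
oss_dd   = defect_density(2000, 400_000)
```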

Full Text Available This study aims to develop an instrument of affective assessment to measure the social competence of elementary school students in the learning process in schools. This study used the development model of Borg & Gall's approach, which was modified into five phases: needs analysis, development of a draft of the product by experts, development of the affective assessment instrument, tryout of the affective assessment instrument by teachers of primary education in Yogyakarta, and dissemination and implementation of the developed affective assessment instrument. The subjects were elementary school students whose school implemented Curriculum 2013 in the academic year of 2013/2014. The validity and reliability of each construct of the affective instrument were established using the PLS SEM Wrap PLS 3.0 analysis program. The study finds the following results. First, the constructs of Honesty, Discipline, Responsibility, Decency, Care, and Self-Confidence in the limited, main, and extended testing were supported by empirical data. Second, the validity of Honesty, Discipline, Responsibility, Decency, Care, and Self-Confidence in the limited, main, and extended testing meets the criterion above 0.70 for each indicator loading factor and the criterion below 0.50 for each indicator cross-loading factor score. Third, the reliability of Honesty, Discipline, Responsibility, Decency, Care, and Self-Confidence in the limited, main, and extended testing meets the criterion above 0.70 for both composite reliability and Cronbach's alpha scores. Fourth, the number of indicators at pre-research was 53; 10 indicators were rejected in the limited testing, four in the main testing, and one in the extended testing.
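The retention criteria reported above (loadings above 0.70, cross-loadings below 0.50, reliability above 0.70) amount to simple threshold checks; Cronbach's alpha is sketched here from a raw respondents-by-items score matrix, whereas the study worked with PLS estimates.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha from a respondents x items score matrix."""
    n_items = len(item_scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in item_scores]) for i in range(n_items)]
    total_var = var([sum(row) for row in item_scores])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

def indicator_ok(loading, max_cross_loading):
    """Retention rule used in the study: loading > 0.70 and cross-loadings < 0.50."""
    return loading > 0.70 and max_cross_loading < 0.50
```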

Full Text Available A total of 17404 soil samples (2003-2009) were analysed in eastern Croatia. The largest number of soil samples belongs to the Osijek-Baranya county, which, together with both eastern sugar beet factories (Osijek and Županja), conducts soil fertility control (~4200 samples/yr). The computer model for crop suitability assessment, supported by GIS, proved to be fast, efficient, and sufficiently reliable in terms of the number of analyzed soil samples. It allows the visualization of the agricultural area and prediction of its production properties for the purposes of analysis, planning and rationalization of agricultural production. With more precise data about the soil (soil, climate) and a reliable Digital Soil Map of Croatia, the model could be acceptable not only for evaluating the suitability for growing different crops but also for estimating their need for fertilizer, necessary machinery, repairs (liming), and other measures of organic matter input. The aim is to eliminate or reduce the effects of limiting factors in primary agricultural production. Assessment of the relative suitability of soil by the computer model for crop production and the geostatistical kriging method in the Osijek-Baranya county showed: (1) average soil suitability of 60.06 percent; (2) kriging predicted that 51751 ha (17.16%) are of limited suitability (N1) for growing crops, whereas (a) 86142 ha (28.57%) of land are marginally suitable (S3), (b) 132789 ha (44.04%) are moderately suitable (S2), and (c) 30772 ha (10.28%) are of excellent fertility (S1). The large body of eastern Croatian land data showed that the computer-geostatistical model for determining soil suitability for growing crops was automated, fast and simple to use, suitable for the implementation of GIS, and capable of automatically downloading the necessary suitability indicators from the input base (land, analytical and climate) as well as data from the digital soil maps, able to: (a) visualize the suitability for soil tillage, (b) predict the

This report contains a first attempt at introducing the environmental impacts associated with amines and derivatives in a life cycle assessment (LCA) of gas power production with carbon capture and comparing these with other environmental impacts associated with the production system. The report aims to identify data gaps and methodological challenges connected both to modelling the toxicity of amines and derivatives and to weighting of environmental impacts. A scenario-based modelling exercise was performed on a theoretical gas power plant with carbon capture, where emission levels of nitrosamines were varied from zero (gas power without CCS) to a worst-case level (outside the probable range of actual carbon capture facilities). Because of extensive research and development in the areas of solvents and emissions from carbon capture facilities in recent years, the data used in the exercise may be outdated and the results should therefore not be taken at face value. The results from the exercise showed the following. According to USEtox, emissions of nitrosamines are less important than emissions of formaldehyde with regard to toxicity related to operation of (i.e. both inputs to and outputs from) a carbon capture facility. If characterisation factors for emissions of metals are included, these outweigh all other toxic emissions in the study. None of the most recent weighting methods in LCA include characterisation factors for nitrosamines, and these are therefore not part of the environmental ranking. These results show that the EDecIDe project has an important role to play in developing LCA methodology useful for assessing the environmental performance of amine-based carbon capture in particular and CCS in general. The EDecIDe project will examine the toxicity models used in LCA in more detail, specifically USEtox. The applicability of the LCA compartment models and site-specificity issues for a Norwegian/Arctic situation will be explored. This applies to the environmental compartments

Human health risk assessment of sites contaminated by volatile hydrocarbons involves site-specific evaluations of soil or groundwater contaminants and development of Australian soil health-based investigation levels (HILs). Exposure assessment of vapors arising from subsurface sources includes the use of overseas-derived commercial models to predict indoor air concentrations. These indoor vapor intrusion models commonly consider steady-state assumptions, infinite sources, limited soil biodegradation, negligible free phase, and equilibrium partitioning into air and water phases to represent advective and diffusive processes. Regional model construct influences and input parameters affect model predictions while steady-state assumptions introduce conservatism and jointly highlight the need for Australian-specific indoor vapor intrusion assessment. An Australian non-steady-state indoor vapor intrusion model has been developed to determine cumulative indoor human doses (CIHDs) and to address these concerns by incorporating Australian experimental field data to consider mixing, dilution, ventilation, sink effects and first-order soil and air degradation. It was used to develop provisional HILs for benzene, toluene, ethylbenzene, and xylene (BTEX), naphthalene, and volatile aliphatic and aromatic total petroleum hydrocarbon (TPH) ≤ EC16 fractions for crawl space dwellings. This article summarizes the current state of knowledge and discusses proposed research for differing exposure scenarios based on Australian dwelling and subsurface influences, concurrent with sensitivity analyses of input parameters and in-field model validation.

Cross-fertilising environmental, economic and geographical modelling to improve the environmental assessment of biofuel...

The FORTRAN77 ecological risk computer model--ECORSK.5--has been used to estimate the potential toxicity of surficial deposits of radioactive and non-radioactive contaminants to several threatened and endangered (T and E) species at the Los Alamos National Laboratory (LANL). These analyses to date include preliminary toxicity estimates for the Mexican spotted owl, the American peregrine falcon, the bald eagle, and the southwestern willow flycatcher. This work has been performed as required for the Record of Decision for the construction of the Dual Axis Radiographic Hydrodynamic Test (DARHT) Facility at LANL as part of the Environmental Impact Statement. The model is dependent on the use of the geographic information system and associated software--ARC/INFO--and has been used in conjunction with LANL's Facility for Information Management and Display (FIMAD) contaminant database. The integration of FIMAD data and ARC/INFO using ECORSK.5 allows the generation of spatial information from a gridded area of potential exposure called an Ecological Exposure Unit. ECORSK.5 was used to simulate exposures using a modified Environmental Protection Agency Quotient Method. The model can handle a large number of contaminants within the home range of T and E species. This integration results in the production of hazard indices which, when compared to risk evaluation criteria, estimate the potential for impact from consumption of contaminants in food and ingestion of soil. The assessment is considered a Tier-2 type of analysis. This report summarizes and documents the ECORSK.5 code, the mathematical models used in the development of ECORSK.5, and the input and other requirements for its operation. Other auxiliary FORTRAN 77 codes used for processing and graphing output from ECORSK.5 are also discussed. The reader may refer to reports cited in the introduction to obtain greater detail on past applications of ECORSK.5 and assumptions used in deriving model parameters.
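The Quotient Method step described above can be illustrated with a short sketch: the hazard index (HI) for a receptor is the sum of per-contaminant hazard quotients, HQ = estimated dose divided by a toxicity reference value (TRV). The contaminants, doses and TRVs below are hypothetical, not taken from ECORSK.5 or the FIMAD database.

```python
def hazard_index(doses_mg_per_kg_day, trvs_mg_per_kg_day):
    """Sum hazard quotients (dose/TRV) over all contaminants in the exposure unit."""
    hi = 0.0
    for contaminant, dose in doses_mg_per_kg_day.items():
        trv = trvs_mg_per_kg_day[contaminant]
        hi += dose / trv  # HQ for this contaminant
    return hi

doses = {"Pb": 0.02, "Hg": 0.001}  # hypothetical ingestion doses (mg/kg/day)
trvs = {"Pb": 0.1, "Hg": 0.01}     # hypothetical toxicity reference values
print(round(hazard_index(doses, trvs), 2))  # → 0.3
```

An HI above 1, compared against the risk evaluation criteria, would flag a potential for impact from food and soil ingestion.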

Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model includes a variable cut-off parameter that allows selection of the desired trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals). Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and highest inter-rater agreement [kappa (κ): 0
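One way such a Bayesian consensus with a tunable cut-off can work is sketched below. It treats the tools as conditionally independent and combines their binary predictions via each tool's sensitivity and specificity; this is a simplifying assumption for illustration, not necessarily how the published ensemble is built, and the tool names and performance figures are invented.

```python
import math

def consensus(predictions, sens, spec, prior=0.5, cutoff=0.5):
    """Naive-Bayes consensus over binary tool predictions (True = carcinogenic)."""
    logit = math.log(prior / (1.0 - prior))
    for tool, positive in predictions.items():
        if positive:
            # likelihood ratio of a positive call: sensitivity vs false-positive rate
            logit += math.log(sens[tool] / (1.0 - spec[tool]))
        else:
            logit += math.log((1.0 - sens[tool]) / spec[tool])
    prob = 1.0 / (1.0 + math.exp(-logit))
    return prob >= cutoff, prob

sens = {"toolA": 0.8, "toolB": 0.7}  # hypothetical per-tool sensitivities
spec = {"toolA": 0.9, "toolB": 0.8}  # hypothetical per-tool specificities
flag, p = consensus({"toolA": True, "toolB": True}, sens, spec)
print(flag)  # two concordant positives -> consensus positive at cut-off 0.5
```

Raising `cutoff` towards 1 demands stronger evidence before a positive call, trading sensitivity for specificity, which is the regulatory knob the abstract describes.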

The main objective of this study was to provide a basis for illustrations of yearly dose rates to the most exposed individual from hypothetical leakages of radionuclides from a deep bedrock repository for spent nuclear fuel and other radioactive waste. The results of this study will be used in the safety assessment SR 97 and in a study on the design and long-term safety of a repository planned to contain long-lived low and intermediate level waste. The repositories will be designed to isolate the radionuclides for several hundred thousand years. In the SR 97 study, however, hypothetical scenarios for leakage are postulated. Radionuclides are hence assumed to be transported in the geosphere by groundwater and eventually to discharge into the biosphere. This may occur in several types of ecosystems. A number of categories of such ecosystems were identified, and turnover of radionuclides was modelled separately for each ecosystem. Previous studies had focused on generic models for wells, lakes and coastal areas. These models were, in this study, developed further to use site-specific data. In addition, flows of groundwater, containing radionuclides, to agricultural land and peat bogs were considered. All these categories are referred to as modules in this report. The forest ecosystems were not included, due to a general lack of knowledge of biospheric processes in connection with discharge of groundwater in forested areas. Examples of each type of module were run with the assumption of a continuous annual release into the biosphere of 1 Bq for each radionuclide during 10 000 years. The results are presented as ecosystem specific dose conversion factors (EDFs) for each nuclide at the year 10 000, assuming stationary ecosystems and prevailing living conditions and habits. All calculations were performed with uncertainty analyses included. Simplifications and assumptions in the modelling of biospheric processes are discussed. The use of modules may be seen as a step

Much knowledge in chemistry exists at a molecular level, inaccessible to direct perception. Chemistry instruction should therefore include multiple visual representations, such as molecular models and symbols. This study describes the implementation and assessment of a learning unit designed for 12th grade chemistry honors students. The organic…

Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients t

One of the major recommendations of the National Academy of Science to the USEPA, NMFS and USFWS was to utilize probabilistic methods when assessing the risks of pesticides to federally listed endangered and threatened species. The Terrestrial Investigation Model (TIM, version 3....

Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover regularities in this database. In this paper, based on network theories and data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers actual defects and the possible risks of defects gained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper is a further development of accident causation network modelling methods and can provide guidance for specific maintenance measures.
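The frequent-pattern mining step can be sketched as a single support-counting pass over (structure, defect) pairs. This is a simplified level-2 fragment of Apriori, not the paper's improved algorithm (a full run would first prune infrequent single items before generating candidate pairs), and the inspection records are invented for illustration.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Return item pairs whose support (fraction of records) meets min_support."""
    n = len(transactions)
    counts = Counter()
    for items in transactions:
        for pair in combinations(sorted(set(items)), 2):
            counts[pair] += 1
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

# hypothetical inspection records: each lists structure types and observed defects
records = [
    {"lining", "crack"},
    {"lining", "crack", "leakage"},
    {"drainage", "leakage"},
    {"lining", "crack"},
]
print(frequent_pairs(records, min_support=0.5))  # → {('crack', 'lining'): 0.75}
```

The surviving frequent pairs would then feed the RRT construction and the defect-based safety assessment described above.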

In a smart grid, data and information are transported, transmitted, stored, and processed with various stakeholders having to cooperate effectively. Furthermore, personal data is the key to many smart grid applications and therefore privacy impacts have to be taken into account. For an effective smart grid, well integrated solutions are crucial and for achieving a high degree of customer acceptance, privacy should already be considered at design time of the system. To assist system engineers in early design phase, frameworks for the automated privacy evaluation of use cases are important. For evaluation, use cases for services and software architectures need to be formally captured in a standardized and commonly understood manner. In order to ensure this common understanding for all kinds of stakeholders, reference models have recently been developed. In this paper we present a model-driven approach for the automated assessment of such services and software architectures in the smart grid that builds on the standardized reference models. The focus of qualitative and quantitative evaluation is on privacy. For evaluation, the framework draws on use cases from the University of Southern California microgrid.

Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to living species. In this work, environmental risk was assessed addressing the need to study the relationship between the organism and xenobiotics. Therefore, five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from the textile, metal-mechanic, and pulp and paper industries. The results have shown that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation in the organisms tested. These results, analyzed together through the WTox Model, allowed classification of the environmental risk of industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

Precast concrete elements are widely used within United Kingdom house building, offering ease of assembly and added value such as structural integrity and sound and thermal insulation; the most common concrete components include walls, beams, floors, panels, lintels, stairs, etc. A lack of respect for the manufacturer's instructions during assembly, however, may induce cracking and short/long-term loss of bearing capacity. GPR is a well-established non-destructive technique employed in the assessment of structural elements because of real-time imaging, speed of data collection and the ability to discriminate the finest structural details. In this work, GPR has been used to investigate two different precast elements: precast reinforced concrete planks constituting the roof slab of a school, and precast wood-cement blocks with insulation material pre-fitted used to build a perimeter wall of a private building. Visible cracks affected both constructions. For the assessment surveys, a GSSI 2.0 GHz GPR antenna was used because of the high resolution required and the small size of the antenna case (155 by 90 by 105 mm), enabling scanning up to 45 mm from any obstruction. Finite-Difference Time-Domain (FDTD) numerical modelling was also performed to build a scenario of the expected GPR signal response for a preliminary real-time interpretation and to help resolve uncertainties due to complex reflection patterns: simulated radargrams were built using Reflex Software v. 8.2, reproducing the same GPR pulse used for the surveys in terms of wavelet, nominal frequency, sample frequency and time window. Model geometries were derived from the design projects available both for the planks and the blocks; the electromagnetic properties of the materials (concrete, reinforcing bars, air-filled void, insulation and wooden concrete) were inferred both from values reported in the literature and from a preliminary interpretation of radargrams where internal layer interfaces were clearly recognizable and

Full Text Available Abstract Background No validated model exists to explain the learning effects of assessment, a problem when designing and researching assessment for learning. We recently developed a model explaining the pre-assessment learning effects of summative assessment in a theory teaching context. The challenge now is to validate this model. The purpose of this study was to explore whether the model was operational in a clinical context as a first step in this process. Methods Given the complexity of the model, we adopted a qualitative approach. Data from in-depth interviews with eighteen medical students were subjected to content analysis. We utilised a code book developed previously using grounded theory. During analysis, we remained alert to data that might not conform to the coding framework and open to the possibility of deploying inductive coding. Ethical clearance and informed consent were obtained. Results The three components of the model, i.e., assessment factors, mechanism factors and learning effects, were all evident in the clinical context. Associations between these components could all be explained by the model. Interaction with preceptors was identified as a new subcomponent of assessment factors. The model could explain the interrelationships of the three facets of this subcomponent, i.e., regular accountability, personal consequences and emotional valence of the learning environment, with previously described components of the model. Conclusions The model could be utilized to analyse and explain observations in an assessment context different to that from which it was derived. In the clinical setting, the (negative) influence of preceptors on student learning was particularly prominent. In this setting, learning effects resulted not only from the high-stakes nature of summative assessment but also from personal stakes, e.g. for esteem and agency. The results suggest that to influence student learning, consequences should accrue from

The low bioavailability of nutrients and oxygen in the soil environment has hampered successful expression of biodegradation/biocontrol genes that are driven by promoters highly active under routine laboratory conditions of high nutrient and oxygen availability. Hence, in the present study, expression of the gus-tagged genes in 12 Tn5-gus mutants of the soil microbe Pseudomonas putida PNL-MK25 was examined under various conditions chosen to mimic the soil environment: low carbon, phosphate, nitrate, or oxygen, and in the rhizosphere. Based on their expression profiles, three nutrient-responsive mutant (NRM) strains, NRM5, NRM7, and NRM17, were selected for identification of the tagged genes. In the mutant strain NRM5, expression of the glutamate dehydrogenase (gdhA) gene was increased between 4.9- and 26.4-fold under various low nutrient conditions. In NRM7, expression of the novel NADPH:quinone oxidoreductase-like (nql) gene was consistently amongst the highest and was synergistically upregulated by low nutrient and anoxic conditions. The cyoD gene in NRM17, which encodes the fourth subunit of the cytochrome o ubiquinol oxidase complex, had decreased expression in low nutrient conditions but its absolute expression level was still amongst the highest. Additionally, it was independent of oxygen availability, in contrast to that in E. coli.

A majority of microorganisms in dark, nutrient-poor, subsurface habitats live in biofilms attached to mineral surfaces. As a result, microorganisms have likely adapted and evolved to take advantage of specific minerals that support a variety of biogeochemical processes. Using biofilm reactors inoculated with a diverse microbial biomat from a sulfidic cave, we found that specific microorganisms colonize specific minerals according to their metabolic/nutritional requirements as well as their environmental tolerances in order to increase survival in unfavorable environments. In a neutral pH, carbon (C)- and phosphate (P)-limited (unfavorable) reactor, highly-buffering carbonates were colonized by nearly identical communities of neutrophilic sulfur-oxidizing (acid-generating) bacteria (SOB), which intensely corroded the carbonates. Non-buffering quartz was colonized by acid-generating acidophiles, while feldspars (containing potentially toxic aluminum) were colonized largely by aluminotolerant microbes. The SOB Thiothrix unzii demonstrated a clear affinity for basalt, and it is commonly found on basaltic rocks in mid-ocean ridge environments. In an identical reactor amended with acetate, heterotrophic sulfur-reducing bacteria (SRB) dominated on most surfaces. The metabolism of the SRB causes an increase in both alkalinity and pH, nearly eliminating the need for buffering minerals and resulting in carbonate precipitation. However, SRB were not dominant on quartz, which was again colonized by acidophiles and acid-tolerant microorganisms, or on basalt, which hosted a complex consortium similar to those found on natural basalt outcrops. These organisms have been shown to weather basalts to access mineral nutrients, especially when provided a carbon source. In both the C&P-limited and acetate-amended reactors, significantly greater biomass accumulated on minerals with high P content.
When abundant P was added and the pH was buffered to 8.3, mineral selectivity was eliminated and every surface accumulated similar total biomass and nearly identical communities (primarily SOB and alkalitrophs). These experiments suggest that in unfavorable environments microbial survival, growth, and community structure are closely linked to mineral chemistry and reactions at the microbe/mineral interface.

Errors in spreadsheet applications and models are alarmingly common (some authorities, with justification cite spreadsheets containing errors as the norm rather than the exception). Faced with this body of evidence, the auditor can be faced with a huge task - the temptation may be to launch code inspections for every spreadsheet in an organisation. This can be very expensive and time-consuming. This paper describes risk assessment based on the "SpACE" audit methodology used by H M Customs & Excise's tax inspectors. This allows the auditor to target resources on the spreadsheets posing the highest risk of error, and justify the deployment of those resources to managers and clients. Since the opposite of audit risk is audit assurance the paper also offers an overview of some elements of good practice in the use of spreadsheets in business.

Lack of understanding about workflow can impair health IT system adoption. Observational techniques can provide valuable information about clinical workflow. A pilot study using direct observation was conducted in an outpatient chronic disease clinic. The goals of the study were to assess workflow and information flow and to develop a general model of workflow and information behavior. Over 55 hours of direct observation showed that the pilot site utilized many of the features of the informatics systems available to them, but also employed multiple non-electronic artifacts and workarounds. Gaps existed between clinic workflow and informatics tool workflow, as well as between institutional expectations of informatics tool use and actual use. Concurrent use of both paper-based and electronic systems resulted in duplication of effort and inefficiencies. A relatively short period of direct observation revealed important information about workflow and informatics tool adoption.

In a binary solution unidirectionally solidified from below, the bulk melt and the eutectic solid are separated by a dendritic mushy zone. The mathematical formulation governing the fluid motion thus consists of the equations in the bulk melt and the mushy zone together with the associated boundary conditions. In the bulk melt, assuming that the melt is a Newtonian fluid, the governing equations are the continuity equation, the Navier-Stokes equations, the heat conservation equation, and the solute conservation equation. In the mushy layer, however, the formulation of the momentum equation and the associated boundary conditions differs among previous investigations. In this paper, we discuss three mathematical models that have previously been applied to study the flow induced by the solidification of binary solutions cooled from below. The assessment is given on the basis of the stability characteristics of the convective flow and the comparison between numerical and experimental results.
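For reference, the bulk-melt equations listed above are commonly written out in Boussinesq form (a sketch assuming constant properties and linearised density in the buoyancy term; the three models compared here may differ in detail):

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho_0}\nabla p + \nu \nabla^2 \mathbf{u}
    + \mathbf{g}\left[\beta_T (T - T_0) - \beta_C (C - C_0)\right],
```
```latex
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \kappa \nabla^2 T, \qquad
\frac{\partial C}{\partial t} + \mathbf{u}\cdot\nabla C = D \nabla^2 C.
```

In the mushy layer the momentum equation is typically replaced by a Darcy-type law, $\mathbf{u} = -(K(\chi)/\mu)(\nabla p - \rho\mathbf{g})$, with permeability $K$ depending on the local solid fraction $\chi$; it is precisely this closure and its boundary conditions on which the previous formulations diverge.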

A Digital Elevation Model (DEM) is a digital representation of ground surface topography or terrain, with different accuracies for different application fields. DEMs have been applied to a wide range of civil engineering and military planning tasks. A DEM is obtained using a number of techniques such as photogrammetry, digitizing, laser scanning, radar interferometry, classical survey and GPS techniques. This paper presents an assessment study of DEMs generated using GPS (Stop&Go) and kinematic techniques, compared with classical survey. The results show that a DEM generated from the (Stop&Go) GPS technique has the highest accuracy, with an RMS error of 9.70 cm. The RMS error of the DEM derived by kinematic GPS is 12.00 cm.
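The accuracy figure used above is a straightforward root-mean-square of elevation differences at checkpoints; a minimal sketch follows, with hypothetical checkpoint elevations rather than the survey data of the study.

```python
import math

def rms_error(dem_heights, reference_heights):
    """RMS of elevation differences between DEM values and survey checkpoints."""
    diffs = [d - r for d, r in zip(dem_heights, reference_heights)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

# hypothetical checkpoint elevations in metres (DEM vs classical survey)
dem = [102.10, 98.55, 100.02]
ref = [102.00, 98.50, 100.10]
print(round(rms_error(dem, ref), 3))  # → 0.079
```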

This paper presents the status of the physical modelling in present codes used for Nuclear Reactor Thermalhydraulics (TRAC, RELAP 5, CATHARE, ATHLET,...) and attempts to list the unresolved or partially resolved issues. First, the capabilities and limitations of present codes are presented. They are mainly known from a synthesis of the assessment calculations performed for both separate effect tests and integral effect tests. It is also instructive to list all the assumptions and simplifications which were made in the establishment of the system of equations and of the constitutive relations. Many of the present limitations are associated with physical situations where these assumptions are not valid. Recommendations are then proposed to extend the capabilities of these codes.

Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc., are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
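The start-of-step versus end-of-step distinction can be made concrete with a one-parameter linear reservoir (a minimal sketch, not any particular hydrological model): evaluating the outflow flux at the end of the step gives an implicit, unconditionally stable update, while the start-of-step flux gives an explicit update that oscillates or goes negative once k·dt exceeds 1.

```python
def linear_reservoir(s0, precip, k, dt, nsteps, end_of_step=True):
    """Storage S with dS/dt = P - k*S, discretised two ways.

    end_of_step=True:  implicit update S_new = (S + P*dt) / (1 + k*dt)
    end_of_step=False: explicit update S_new = S + (P - k*S)*dt
    """
    s, series = s0, []
    for _ in range(nsteps):
        if end_of_step:
            s = (s + precip * dt) / (1 + k * dt)
        else:
            s = s + (precip - k * s) * dt
        series.append(s)
    return series

implicit = linear_reservoir(1.0, 0.0, k=1.5, dt=1.0, nsteps=5)
explicit = linear_reservoir(1.0, 0.0, k=1.5, dt=1.0, nsteps=5, end_of_step=False)
print(all(s > 0 for s in implicit), any(s < 0 for s in explicit))  # → True True
```

Both schemes solve the same equations, yet the explicit variant's oscillating storage is exactly the kind of implementation artifact that degrades calibration and sensitivity analysis.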

Full Text Available Companies pursue their goals and operate in an environment full of risks and uncertainties. One of the major principles in accounting is the presumption that a company will continue indefinitely, which is called “the going concern assumption”. Any company surrounded by many risks must adapt to the rapidly changing conditions of the business environment, recognize and manage those risks, and build core competencies to continue as a going concern. COSO internal control, with its practical application tools for companies, is one of the generally accepted frameworks that aims to enable companies to build, manage and develop an internal control structure as a tool to reach sustainable success. One of the five COSO components is “risk assessment”, covering the recognition and assessment of the potential risks that the company faces and the management of those risks with regard to their materiality. This study aims to explain the COSO internal control model with its five components, stressing the risk assessment component, supported by examples.

Consumer exposure to chemicals from products and articles is rarely monitored. Since an assessment of consumer exposure has become particularly important under the European REACH Regulation, dedicated modelling approaches with exposure assessment tools are applied. The results of these tools are critically dependent on the default input values embedded in the tools. These inputs were therefore compiled for three lower tier tools (ECETOC TRA (version 3.0), EGRET and REACT) and benchmarked against a higher tier tool (ConsExpo (version 4.1)). Mostly, conservative input values are used in the lower tier tools. Some cases were identified where the lower tier tools used less conservative values than ConsExpo. However, these deviations only rarely resulted in less conservative exposure estimates compared to ConsExpo, when tested in reference scenarios. This finding is mainly due to the conservatism of (a) the default value for the thickness of the product layer (with complete release of the substance) used for the prediction of dermal exposure and (b) the complete release assumed for volatile substances (i.e. substances with a vapour pressure ⩾10Pa) for inhalation exposure estimates. The examples demonstrate that care must be taken when changing critical defaults in order to retain conservative estimates of consumer exposure to chemicals.
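The thin-film dermal assumption in (a) can be sketched as follows: the entire substance mass contained in a product layer of default thickness on the skin is assumed to be released. This is a generic illustration of the conservative lower-tier approach; all parameter names and values here are hypothetical, not the actual tool defaults.

```python
def dermal_dose_mg_per_kg(layer_cm, density_g_cm3, fraction, area_cm2, bw_kg):
    """Thin-film dermal estimate assuming complete release from the product layer."""
    mass_mg = layer_cm * density_g_cm3 * fraction * area_cm2 * 1000.0  # g -> mg
    return mass_mg / bw_kg

# hypothetical scenario: 0.01 cm film, density 1 g/cm3, 5 % substance content,
# 430 cm2 of exposed skin, 60 kg adult
print(round(dermal_dose_mg_per_kg(0.01, 1.0, 0.05, 430.0, 60.0), 2))  # → 3.58
```

Because the whole layer is assumed absorbed, halving the default layer thickness halves the estimate, which is why that single default drives much of the tools' conservatism.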

Accurate and effective assessment of the strategic alternatives of an organization directly affects the decision-making and execution of its development strategy. In evaluating strategic alternatives, relevant elements from both the internal and external environments of an organization must be considered. In this paper we use a strategic assessment model (SAM) to evaluate the strategic alternatives of an air-conditioning company. Strategic objectives and alternatives of the company are developed through analysis of the competitive environment, key competitors and internal conditions. The environment factors are classified into internal, task, and general opportunities and threats. Analytic hierarchy process, subjective probabilities, the entropy concept, and utility theory are used to enhance the decision-maker's ability to evaluate strategic alternatives. The evaluation results show that the most effective strategic alternative for the company is to reduce the range of products, concentrate its effort on producing window-type and cupboard-type air-conditioners, enlarge the production scale, and pre-empt the market. The company has made great progress by implementing this alternative. We conclude that SAM is an appropriate tool for evaluating strategic alternatives.

To model a given time series $F(t)$ with fractional Brownian motions (fBms), it is necessary to have appropriate error assessment for related quantities. Usually the fractal dimension $D$ is derived from the Hurst exponent $H$ via the relation $D=2-H$, and the Hurst exponent can be evaluated by analyzing the dependence of the rescaled range $\langle|F(t+\tau)-F(t)|\rangle$ on the time span $\tau$. For fBms, the error of the rescaled range not only depends on data sampling but also varies with $H$ due to the presence of long-term memory. This error for a given time series therefore cannot be assessed without knowing the fractal dimension. We carry out extensive numerical simulations to explore the error of the rescaled range of fBms and find that for $0
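The Hurst estimation described above amounts to fitting the slope of $\log\langle|F(t+\tau)-F(t)|\rangle$ against $\log\tau$. The sketch below does this for an ordinary random walk (H close to 0.5); generating a general fBm with arbitrary H is a separate problem not attempted here.

```python
import math
import random

def hurst_exponent(series, lags=(1, 2, 4, 8, 16, 32)):
    """Least-squares slope of log <|F(t+tau)-F(t)|> versus log tau."""
    xs, ys = [], []
    for tau in lags:
        diffs = [abs(series[i + tau] - series[i]) for i in range(len(series) - tau)]
        xs.append(math.log(tau))
        ys.append(math.log(sum(diffs) / len(diffs)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

random.seed(7)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))
print(hurst_exponent(walk))  # ordinary Brownian motion: estimate near H = 0.5
```

The estimate's scatter around the true H is exactly the error the abstract argues depends on H itself, since long-term memory correlates the increments entering each average.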

Full Text Available The prediction of the wind field near the wind turbines of a wind farm has a significant effect on the safety as well as the economy of wind power generation. To assess the wind resource distribution within complex terrain, a computational fluid dynamics (CFD) based microscale wind farm forecast model is developed. The model uses the Reynolds-Averaged Navier-Stokes (RANS) equations to characterize the turbulence. By using the results of the Weather Research and Forecasting (WRF) mesoscale weather forecast model as the input of the CFD model, a coupled CFD-WRF model is established. A special method is used for the treatment of the information interchange on the lateral boundary between the two models. The coupled model is applied to predicting the wind field near a wind turbine in Hong Gang-zi, Jilin, China. The results from this simulation are compared to real measured data. On this basis, the accuracy and efficiency of turbulence characterization schemes are discussed. The results indicate that this coupling system is easy to implement and lets the two separate models work in parallel. The CFD model coupled with WRF has the advantages of high accuracy and fast speed, which make it valid for wind power generation.

Full Text Available The amount and concentration of N in catchment runoff is strongly controlled by a number of hydrological influences, such as leaching rates and the rate of transport of N from the land to surface water bodies. This paper describes how the principal hydrological controls at a catchment scale have been represented within the Nitrogen Risk Assessment Model for Scotland (NIRAMS); it demonstrates their influence through application of the model to eight Scottish catchments, contrasting in terms of their land use, climate and topography. Calculation of N leaching rates, described in the preceding paper (Dunn et al., 2004), is based on soil water content determined by application of a weekly water balance model. This model uses national scale datasets and has been developed and applied to the whole of Scotland using five years of historical meteorological data. A catchment scale transport model, constructed from a 50 m digital elevation model, routes flows of N through the sub-surface and groundwater to the stream system. The results of the simulations carried out for eight different catchments demonstrate that the NIRAMS model is capable of predicting time-series of weekly stream flows and N concentrations to an acceptable degree of accuracy. The model provides an appropriate framework for risk assessment applications requiring predictions in ungauged catchments and at a national scale. Analysis of the model behaviour shows that streamwater N concentrations are controlled both by the rate of supply of N from leaching and by the rate of transport of N from the land to the water. Keywords: nitrogen, diffuse pollution, hydrology, model, transport, catchment
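The weekly water balance driving the leaching calculation can be illustrated with a minimal bucket model: drainage below the root zone, the flow that carries leached N, is the storage excess over available capacity. This is a generic sketch under simple assumptions, not the NIRAMS formulation, and all values are hypothetical.

```python
def weekly_drainage(precip_mm, pet_mm, capacity_mm, s0_mm=0.0):
    """Bucket water balance: weekly drainage is the storage excess over capacity."""
    s, drainage = s0_mm, []
    for p, e in zip(precip_mm, pet_mm):
        s = max(s + p - e, 0.0)        # evapotranspiration cannot overdraw the store
        d = max(s - capacity_mm, 0.0)  # excess drains and can leach nitrate
        s -= d
        drainage.append(d)
    return drainage

# hypothetical weekly precipitation and potential evapotranspiration (mm)
print(weekly_drainage([30, 5, 60, 10], [10, 15, 5, 20], capacity_mm=50))
# → [0.0, 0.0, 15.0, 0.0]
```

Only the wet third week generates drainage, which is why leaching in such models is episodic and tightly coupled to the soil water state.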

It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process cannot be inferred without bias. In the behavioural and social science literature, very little attention has so far been given to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up.
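
In the simple linear setting the abstract describes, the index for moderated mediation is the product of the moderator-by-treatment coefficient in the mediator model and the mediator's coefficient in the outcome model. A minimal sketch, assuming hypothetical variable names and plain ordinary least squares (this is a textbook illustration, not the authors' estimation code):

```python
import numpy as np

def moderated_mediation_index(X, W, M, Y):
    """Index of moderated mediation a3 * b under the linear models
        M = a0 + a1*X + a2*W + a3*X*W + e_M
        Y = b0 + b*M + c1*X + c2*W + c3*X*W + e_Y
    i.e., how much the indirect effect through M changes per unit of W.
    """
    n = len(X)
    # Mediator regression: recover a3, the moderation of the X -> M path
    Xm = np.column_stack([np.ones(n), X, W, X * W])
    a = np.linalg.lstsq(Xm, M, rcond=None)[0]
    # Outcome regression: recover b, the M -> Y path
    Xy = np.column_stack([np.ones(n), M, X, W, X * W])
    b = np.linalg.lstsq(Xy, Y, rcond=None)[0]
    return a[3] * b[1]
```

With simulated data generated from these two equations, the function recovers the product of the true coefficients.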

One of the goals of current education is to ensure that graduates can act as independent lifelong learners. Graduates need to be able to assess their own learning and interpret assessment results. The central question in this article is how to acquire sustainable assessment skills, enabling students to assess their performance and learning…

Full Text Available The aims of the study were to apply and test the deterministic simulation models SIMWASER and CERES-Wheat and to demonstrate their ability to compute soil-water balance components, percolation losses, groundwater recharge and capillary rise. Two case studies were chosen: the assessment of percolation losses from irrigated carrots to deep groundwater at Obersiebenbrunn in the Marchfeld (Austria), and groundwater recharge and capillary rise from shallow groundwater in grass lysimeters at Berlin-Dahlem (Germany), together with two test sites with similar climatic conditions and soil-water storage potential but with (Grossenzesdorf, Austria) and without (Zabcice, Czech Republic) groundwater influence in a semi-arid agricultural area in central Europe. At Obersiebenbrunn, simulated percolation and evapotranspiration were 183 and 629 mm, while the respective measured values amounted to 198 and 635 mm. Up to 42% (194 mm) of evapotranspiration was supplied by groundwater at Grossenzesdorf, and only 126 mm in the worst case compared with the observed data. These results showed both models to be suitable tools for representing crop-soil-water relations. However, the availability and management of soil-water reserves will remain important, especially if extreme events such as droughts occur more frequently and annual soil-water and groundwater recharge decrease.

Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations and scaled to a large data set of observations on microplastic from surface trawls was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal to assess the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at a 45% capture efficiency from these locations, compared to only 17% when the 29 plastic collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, while sinks in the North Pacific can only reduce the overlap by 14%. These results are an indication that oceanic plastic removal might be more effective in removing a greater microplastic mass and in reducing potential harm to marine life when closer to shore than inside the plastic accumulation zones in the centers of the gyres.

Full Text Available The purpose of this research is to develop a conceptual model for assessing the impact of the gender aspect on economic policy at the macro- and microeconomic levels. The research methodology is based on analysing scientific approaches to the gender aspect in economics and gender-responsive budgeting, and on determining the impact of the gender aspect on GDP, foreign trade, the state budget and the labour market. The major findings are, first, that a socio-economic picture of society can be considered complete only when, alongside the public and private sectors, it includes the care/reproductive sector, which is dominated by women and creates added value in the form of educated human resources; and second, that macroeconomics is not neutral with respect to gender equality. Gender asymmetry is manifested not only at the microeconomic level (the labour market and business) but also at the macroeconomic level (GDP, the state budget and foreign trade), which has a negative impact on economic growth and state budget revenues. Economic decisions must therefore be made according to the principles of gender equality, and the gender aspect must also be implemented at the macroeconomic level.

The paper presents the results of a study concerning the use of the Ishikawa diagram in analysing the causes of errors in the evaluation of parts precision in the machine construction field. The studied problem was "errors in the evaluation of parts precision", and this constitutes the head of the Ishikawa diagram skeleton. All the possible main and secondary causes that could generate the studied problem were identified. The best-known Ishikawa models are 4M, 5M and 6M, the initials standing for materials, methods, man, machines, mother nature and measurement. The paper shows the potential causes of the studied problem, which were first grouped into three categories: causes that lead to errors in assessing dimensional accuracy, causes that determine errors in the evaluation of shape and position deviations, and causes of errors in roughness evaluation. We took into account the main components of parts precision in the machine construction field. For each of the three categories, potential secondary causes were distributed among the M groups (man, methods, machines, materials, environment). We opted for a new model of Ishikawa diagram, resulting from the composition of three fish skeletons corresponding to the main categories of parts accuracy.

In recent years there has been significant interest in modelling cumulative effects and the population consequences of individual changes in cetacean behaviour and physiology due to disturbance. One potential source of disturbance that has garnered particular interest is whale-watching. Though perceived as ‘green’ or eco-friendly tourism, there is evidence that whale-watching can result in statistically significant and biologically meaningful changes in cetacean behaviour, raising the question of whether whale-watching is in fact a sustainable long-term activity. However, an assessment of the impacts of whale-watching on cetaceans requires an understanding of the potential behavioural and physiological effects, data to effectively address the question, and suitable modelling techniques. Here, we review the current state of knowledge on the viability of long-term whale-watching, as well as logistical limitations and potential opportunities. We conclude that an integrated, coordinated approach will be needed to further understanding of the possible effects of whale-watching on cetaceans.

Full Text Available Accurate mean areal precipitation (MAP) estimates are essential input forcings for hydrologic models. However, the selection of the most accurate method to estimate MAP can be daunting because there are numerous methods to choose from (e.g., proximate gauge, direct weighted average, surface-fitting, and remotely sensed methods). Multiple methods (n = 19) were used to estimate MAP with precipitation data from 11 distributed monitoring sites and 4 remotely sensed data sets. Each method was validated against the hydrologic model simulated stream flow using the Soil and Water Assessment Tool (SWAT). SWAT was validated using a split-site method and the observed stream flow data from five nested-scale gauging sites in a mixed-land-use watershed of the central USA. Cross-validation results showed the error associated with surface-fitting and remotely sensed methods ranging from −4.5 to −5.1% and −9.8 to −14.7%, respectively. Split-site validation results showed percent bias (PBIAS) values that ranged from −4.5 to −160%. Second-order polynomial functions in particular overestimated precipitation and the subsequent stream flow simulations (PBIAS = −160%) in the headwaters. The results indicated that using an inverse-distance weighted, linear polynomial interpolation or multiquadric function method to estimate MAP may improve SWAT model simulations. Collectively, the results highlight the importance of spatially distributed observed hydroclimate data for precipitation and subsequent stream flow estimations. The MAP methods demonstrated in the current work can be used to reduce hydrologic model uncertainty caused by watershed physiographic differences.
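
As an illustration of one of the MAP methods named above, inverse-distance weighting interpolates each grid point from the gauges and then averages over the area. A minimal sketch (function name and data layout are assumptions, not the preprocessing code used in the study):

```python
import math

def idw_map(gauges, grid_points, power=2.0):
    """Inverse-distance-weighted mean areal precipitation (MAP).

    gauges: list of (x, y, precip) tuples
    grid_points: list of (x, y) locations covering the watershed
    Returns the areal mean of the interpolated precipitation field.
    """
    interpolated = []
    for gx, gy in grid_points:
        num = den = 0.0
        for x, y, p in gauges:
            d = math.hypot(gx - x, gy - y)
            if d == 0.0:          # grid point coincides with a gauge
                num, den = p, 1.0
                break
            w = 1.0 / d ** power  # closer gauges get larger weights
            num += w * p
            den += w
        interpolated.append(num / den)
    return sum(interpolated) / len(interpolated)
```

With two gauges reporting 10 and 20 mm and grid points at the gauge locations, the areal mean is 15 mm, as expected.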

Integrated assessment modelling has evolved to support policy development in relation to air pollutants and greenhouse gases by providing integrated simulation tools able to produce quick and realistic representations of emission scenarios and their environmental impacts without the need to re-run complex atmospheric dispersion models. The UK Integrated Assessment Model (UKIAM) has been developed to investigate strategies for reducing UK emissions by bringing together information on projected UK emissions of SO2, NOx, NH3, PM10 and PM2.5, atmospheric dispersion, criteria for protection of ecosystems, urban air quality and human health, and data on potential abatement measures to reduce emissions, which may subsequently be linked to associated analyses of costs and benefits. We describe the multi-scale model structure ranging from continental to roadside, UK emission sources, atmospheric dispersion of emissions, implementation of abatement measures, integration with European-scale modelling, and environmental impacts. The model generates outputs from a national perspective which are used to evaluate alternative strategies in relation to emissions, deposition patterns, air quality metrics and ecosystem critical load exceedance. We present a selection of scenarios in relation to the 2020 Business-As-Usual projections and identify potential further reductions beyond those currently being planned.

Since Korean Air began flying the polar route from Seoul/ICN airport to New York/JFK airport in August 2006, there has been explosive public demand in South Korea for estimating and predicting the cosmic radiation exposure of Korean aircrew and passengers. To keep pace with this demand, the Korean government enacted a law in 2013 on safety standards and management of cosmic radiation for flight attendants and pilots. Funded by the Korea Meteorological Administration (KMA), we began last year to develop our own Korean Radiation Exposure Assessment Model (KREAM) for aviation route doses. The GEANT4 model is used to calculate the transport of energetic particles in the atmosphere, and the NRLMSISE-00 model to obtain the background atmospheric neutral densities as a function of altitude. To predict the radiation exposure on many routes under various space weather conditions, we constructed a database from pre-arranged simulations using all possible combinations of the R, S, and G space weather scales provided by the National Oceanic and Atmospheric Administration (NOAA). To obtain the solar energetic particle spectrum at 100 km altitude, which we set as the top of the atmospheric layers in KREAM, we use proton flux observations from the ACE and GOES satellites. We compare the results of KREAM with those of other cosmic radiation estimation programs, such as CARI-6M provided by the Federal Aviation Administration (FAA). We also validate KREAM's results against measurements from a Liulin-6K LET spectrometer on board Korean commercial flights and Korean Air Force reconnaissance flights.

This article explored the application of the posterior predictive model checking (PPMC) method in assessing fit for unidimensional polytomous item response theory (IRT) models, specifically the divide-by-total models (e.g., the generalized partial credit model). Previous research has primarily focused on using PPMC in model checking for unidimensional and multidimensional IRT models for dichotomous data, and has paid little attention to polytomous models. A Monte Carlo simulation was conducted to investigate the performance of PPMC in detecting different sources of misfit for the partial credit model family. Results showed that the PPMC method, in combination with appropriate discrepancy measures, had adequate power in detecting different sources of misfit for the partial credit model family. Global odds ratio and item total correlation exhibited specific patterns in detecting the absence of the slope parameter, whereas Yen's Q1 was found to be promising in the detection of misfit caused by the constant category intersection parameter constraint across items.

Hydrological models have usually been used to simulate variations in water storage compartments resulting from changes in fluxes (i.e., precipitation, evapotranspiration) within physical or conceptual frameworks. In an effort to improve the simulation of storage compartments, this research investigated the benefits of assimilating Gravity Recovery and Climate Experiment (GRACE)-derived terrestrial water storage (TWS) anomalies for 2009 into the AWRA (Australian Water Resource Assessment) model using an ensemble Kalman filter (EnKF) approach. The Murray-Darling Basin (MDB), Australia's biggest river system, was selected for the assimilation. Our investigations address (i) the optimal implementation of the EnKF, including sensitivity to ensemble size, localization length scale, observation error correlations, inflation and stochastic parameterization of forcing terms, and (ii) the best strategy for assimilating GRACE data, which are available only at coarse spatial resolutions (a few hundred kilometres). Our motivation for selecting the EnKF was its promising performance in previous studies in dealing with the nonlinearity and high dimensionality of hydrological models. However, the small size of the ensembles can be a critical issue for its success, since the statistical state of the system might not be well represented. Therefore, in this study, we analysed the relation between ensemble size and the performance of the assimilation process. Previous studies have demonstrated that GRACE can be used to enhance the performance of models; however, its relatively low spatial resolution is difficult to handle, and assimilating GRACE TWS measurements at different spatial resolutions may result in different degrees of improvement. Attempts were therefore made to find an optimal resolution for assimilating GRACE TWS observations into AWRA over the MDB. Eventually, a localization approach was applied to modify the error covariance
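
The EnKF analysis step at the heart of such an assimilation can be sketched in its generic perturbed-observations form (a textbook illustration with assumed array shapes, not the AWRA implementation; localization and inflation, discussed above, are omitted):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H):
    """One EnKF analysis step (perturbed-observations form).

    ensemble: (n_state, n_members) forecast states
    obs: (n_obs,) observation vector (e.g., GRACE TWS anomalies)
    obs_err_var: scalar observation-error variance
    H: (n_obs, n_state) linear observation operator
    """
    n_state, n_mem = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    Pf = X @ X.T / (n_mem - 1)                        # sample forecast covariance
    R = obs_err_var * np.eye(len(obs))
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    rng = np.random.default_rng(0)
    analysis = np.empty_like(ensemble)
    for j in range(n_mem):
        # each member sees an independently perturbed observation
        y_pert = obs + rng.normal(0.0, np.sqrt(obs_err_var), len(obs))
        analysis[:, j] = ensemble[:, j] + K @ (y_pert - H @ ensemble[:, j])
    return analysis
```

With an accurate observation (small `obs_err_var`), the analysis ensemble mean is pulled strongly toward the observed value, as expected from the gain.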

Integrated Assessment Models (IAMs) link representations of the regionally disaggregated global economy, energy system, agriculture and land-use, terrestrial carbon cycle, oceans and climate in an internally consistent framework. These models are often used as science-based decision-support tools for evaluating the consequences of climate, energy, and other policies, and their use in this framework is likely to increase in the future. Additionally, these models are used to develop future scenarios of emissions and land cover for use in climate models (e.g., RCPs and CMIP5). Land use is strongly influenced by assumptions about population, income, diet, ecosystem productivity change, and climate policy. Population, income, and diet determine the amount of food production needed in the future. Assumptions about future changes in crop yields due to agronomic developments influence the amount of land needed to produce food crops. Climate policy has implications for land when land-based mitigation options (e.g., afforestation and bioenergy) are considered. IAM models consider each of these factors in their computation of land use in the future. As each of these factors is uncertain in the future, IAM models use scenario analysis to explore the implications of each. For example, IAMs have been used to explore the effect of different mitigation policies on land cover. These models can quantify the trade-offs in terms of land cover, energy prices, food prices, and mitigation costs of each of these policies. Furthermore, IAMs are beginning to explore the effect of climate change on land productivity, and the implications that changes in productivity have on mitigation efforts. In this talk, we describe the implications for future land use and land cover of a variety of socioeconomic, technological, and policy drivers in several IAM models. Additionally, we will discuss the effects of future land cover on climate and the effects of climate on future land cover, as simulated

Cognitive diagnosis models (CDMs) are psychometric models developed mainly to assess examinees' specific strengths and weaknesses in a set of skills or attributes within a domain. By adopting the Generalized-DINA model framework, the recently developed general modeling framework, we attempted to retrofit the PISA reading assessments, a…

A better understanding of the current and future availability of water resources is essential for the implementation of the recently agreed Sustainable Development Goals (SDGs). Long-term, efficient strategies for coping with current and potential future water-related challenges are urgently required. Although Representative Concentration Pathways (RCPs) and Shared Socioeconomic Pathways (SSPs) were developed for the impact assessment of climate change, very few assessments have yet used the SSPs to assess water resources. The IIASA Water Futures and Solutions Initiative (WFaS) therefore developed a set of water use scenarios consistent with the RCPs and SSPs, applying the latest climate change scenarios. Here we focus on results for Asian countries for the period 2010-2050. We present three conceivable future pathways of Asian water resources, determined by feasible combinations of two RCPs and three SSPs. Such a scenario approach provides valuable insights towards identifying appropriate strategies by exposing gaps between a "scenario world" and reality. In addition, a multi-criteria analysis is applied for the assessment of future water resources: a classification system for countries and watersheds that consists of two broad dimensions, (i) economic and institutional adaptive capacity and (ii) hydrological complexity. The latter is composed of several sub-indexes, including total renewable water resources per capita, the ratio of water demand to renewable water resources, variability of runoff, and the dependency ratio on external water resources. Furthermore, this analysis uses a multi-model approach to estimate runoff and discharge using 5 GCMs and 5 global hydrological models (GHMs). Three of these GHMs calculate water use, in addition to water availability, based on a consistent set of scenarios. As a result, we have projected hot spots of water scarcity in Asia and their spatial and temporal change. For example, in a scenario based on SSP2 and RCP6.0, by 2050, in total 2.1 billion people
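
One of the hydrological-complexity sub-indexes, the ratio of water demand to renewable water resources, is commonly turned into a stress classification. An illustrative sketch (the threshold values here are common in the water-scarcity literature but are assumptions, not the WFaS criteria):

```python
def water_stress_class(withdrawal, renewable):
    """Classify water stress by the use-to-availability (criticality) ratio.

    withdrawal and renewable are annual volumes in the same units
    (e.g., km^3/yr for a country or watershed).
    """
    r = withdrawal / renewable
    if r < 0.1:
        return "low"
    if r < 0.2:
        return "moderate"
    if r < 0.4:
        return "high"
    return "severe"
```

For example, a basin withdrawing 50 of its 100 available units per year would be classed as severely stressed under these thresholds.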

The modelling of organic pollutants in the environment is burdened by many uncertainties. Not only parameter values are uncertain, but often also the mass and timing of pesticide application. Introducing transformation products (TPs) into the modelling is likely to add further uncertainty, arising from the dependence of these substances on their parent compounds and from the introduction of new model parameters. The purpose of this study was to investigate the behaviour of a parsimonious catchment-scale model for assessing river concentrations of the insecticide Chlorpyrifos (CP) and two of its TPs, Chlorpyrifos Oxon (CPO) and 3,5,6-trichloro-2-pyridinol (TCP), under the influence of uncertain input parameter values. Parameter uncertainty and pesticide application uncertainty in particular were investigated by Global Sensitivity Analysis (GSA) and the Generalized Likelihood Uncertainty Estimation (GLUE) method, based on Monte Carlo sampling. GSA revealed that half-lives and sorption parameters, as well as half-lives and transformation parameters, were correlated with each other; that is, the concepts used to model sorption and degradation/transformation were correlated. It may thus be difficult in modelling studies to optimize parameter values for these modules. Furthermore, we could show that erroneous pesticide application mass and timing were compensated during Monte Carlo sampling by changes in the half-life of CP. However, introducing TCP into the calculation of the objective function enhanced the identifiability of the pesticide application mass. The GLUE analysis showed that CP and TCP were modelled successfully, but CPO modelling failed, with high uncertainty and insensitive parameters. We assume a structural error of the model, which was especially important for CPO assessment. This shows that a chemical and some of its TPs can be modelled successfully by a specific model structure, while for other TPs, the model

The Argonne Coal Market Model was developed as part of the National Coal Utilization Assessment, a comprehensive study of coal-related environmental, health, and safety impacts. The model was used to generate long-term coal market scenarios that became the basis for comparing the impacts of coal-development options. The model has a relatively high degree of regional detail concerning both supply and demand. Coal demands are forecast by a combination of trend and econometric analysis and then input exogenously into the model. Coal supply in each region is characterized by a linearly increasing function relating increments of new mine capacity to the marginal cost of extraction. Rail-transportation costs are econometrically estimated for each supply-demand link. A quadratic programming algorithm is used to calculate flow patterns that minimize consumer costs for the system.
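
The economic structure described above, linearly increasing marginal extraction costs plus linear rail costs, makes the cost-minimizing flow problem a quadratic program. For two supply regions serving one demand region, the first-order condition is that marginal delivered costs are equal, which a toy sketch (illustrative only, not the Argonne model's algorithm or data) can solve in closed form:

```python
def allocate_demand(D, supply, rail):
    """Split demand D between two supply regions so that marginal delivered
    costs are equal (first-order condition of the cost-minimizing QP).

    supply: [(c1, b1), (c2, b2)] with marginal extraction cost c_i + b_i * q_i
    rail:   [t1, t2] per-unit rail transport costs to the demand region
    Returns (q1, q2) with q1 + q2 == D.
    """
    (c1, b1), (c2, b2) = supply
    t1, t2 = rail
    # Solve c1 + b1*q1 + t1 = c2 + b2*(D - q1) + t2 for q1
    q1 = (c2 + t2 - c1 - t1 + b2 * D) / (b1 + b2)
    q1 = min(max(q1, 0.0), D)  # corner solution if one region is priced out
    return q1, D - q1
```

With symmetric costs the demand is split evenly; when one region is far cheaper at every output level, the allocation collapses to the corner solution.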

Land reclamation is a complex marine environmental engineering undertaking with a huge impact on the social, economic, and physical environment. Reclamation environmental impact assessment (REIA) is likewise a complicated task, covering the assessment of the socio-economic background, ocean engineering, coastal geomorphology, sediment transport, marine hydrodynamics, the marine ecosystem and so on. A large number of reclamation projects have been completed or are under construction along the coastal zone; it is therefore necessary to build a REIA framework to evaluate and quantify environmental changes, to inform reclamation programs, to reduce marine environmental disasters, and to sustain the development of the coastal zone. This article focuses on REIA framework theory and puts forward a REIA model for evaluating reclaimed land, applying this assessment system to Shenzhen City, a highly developed coastal city with an expectation of further land reclamation. Using Remote Sensing (RS) and Geographic Information System (GIS) techniques, together with topographic maps and in situ surveys of the reclamation area, it is concluded that a total of 2680 hectares had been reclaimed in Shenzhen City by the end of 2000. Reclamation is usually applied to meet the needs of infrastructure, such as harbors, industry and highways, in Shenzhen City. However, some serious negative impacts on the coastal environment are clearly visible in the following aspects. First, reclamation caused dramatic changes of the tidal flats and channels along the western coast and made this area more unstable, which is threatening the function of the harbor in this area. Second, the tidal prism has decreased rapidly: during 20 years of reclamation it has been reduced by 20-30% along the western coast of the Lingdingyang Estuary, and by 15.6% in Shenzhen Bay. As a result, the velocity of the tidal current

The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology.

Stormwater pollution is linked to stream ecosystem degradation. In predicting stormwater pollution, various types of modelling techniques are adopted. The accuracy of predictions provided by these models depends on the data quality, appropriate estimation of model parameters, and the validation undertaken. It is well understood that available water quality datasets in urban areas span only relatively short time scales unlike water quantity data, which limits the applicability of the developed models in engineering and ecological assessment of urban waterways. This paper presents the application of leave-one-out (LOO) and Monte Carlo cross validation (MCCV) procedures in a Monte Carlo framework for the validation and estimation of uncertainty associated with pollutant wash-off when models are developed using a limited dataset. It was found that the application of MCCV is likely to result in a more realistic measure of model coefficients than LOO. Most importantly, MCCV and LOO were found to be effective in model validation when dealing with a small sample size which hinders detailed model validation and can undermine the effectiveness of stormwater quality management strategies.
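
The difference between LOO and MCCV lies only in how the train/test splits are drawn: MCCV repeatedly samples random splits with a fixed test fraction. A generic sketch, with hypothetical `fit` and `loss` callables standing in for the wash-off model calibration and error measure:

```python
import random

def mccv(data, fit, loss, n_splits=200, test_frac=0.3, seed=1):
    """Monte Carlo cross-validation: repeated random train/test splits.

    fit(train) -> fitted model; loss(model, test) -> scalar error.
    Returns the list of test errors, one per split, whose spread is a
    measure of model (coefficient) uncertainty under limited data.
    """
    rng = random.Random(seed)
    n_test = max(1, int(len(data) * test_frac))
    errors = []
    for _ in range(n_splits):
        shuffled = data[:]
        rng.shuffle(shuffled)
        test, train = shuffled[:n_test], shuffled[n_test:]
        errors.append(loss(fit(train), test))
    return errors
```

For example, with `fit` returning the training mean and `loss` the mean squared error on the held-out set, a constant dataset yields zero error on every split.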

The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The Pena Blanca Natural Analogue Model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash-flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following characteristics compared with the Yucca Mountain repository site: (1) analogous source: the UO2 uranium ore deposit corresponds to spent nuclear fuel in the repository; (2) analogous geologic setting: fractured, welded, and altered rhyolitic ash-flow tuffs overlying carbonate rocks; (3) analogous climate: semiarid to arid; (4) analogous geochemistry: oxidizing conditions; and (5) analogous hydrogeology: the ore deposit lies in the unsaturated zone above the water table. The Nopal I deposit is approximately 8 ± 0.5 million years old and has been exposed to oxidizing conditions during the last 3.2 to 3.4 million years. The Pena Blanca Natural Analogue Model considers that the uranium oxide and uranium silicates in the ore deposit were originally analogous to uranium-oxide spent nuclear fuel. The Pena Blanca site has been characterized using field and laboratory investigations of its fault and fracture distribution, mineralogy, fracture fillings, seepage into the mine adits, regional hydrology, and mineralization that shows the extent of radionuclide migration. Three boreholes were drilled at the Nopal I mine site in 2003 and have provided samples for lithologic characterization, water-level measurements, and water samples for laboratory

We developed a generic hydroeconomic model able to confront future water supply and demand on a large scale, taking into account man-made reservoirs. The assessment is done at the scale of river basins, using only globally available data; the methodology can thus be generalized. On the supply side, we evaluate the impacts of climate change on water resources. The available quantity of water at each site is computed using the following information: runoff is taken from the outputs of the CNRM climate model (Dubois et al., 2010), reservoirs are located using Aquastat, and the sub-basin flow-accumulation area of each reservoir is determined from a Digital Elevation Model (HYDRO1k). On the demand side, agricultural and domestic demands are projected in terms of both quantity and economic value. For the agricultural sector, globally available data on irrigated areas and crops are combined in order to determine the localization of irrigated crops. Crop irrigation requirements are then computed for the different stages of the growing season using the Allen (1998) method with Hargreaves potential evapotranspiration. The economic value of irrigation water is based on a yield comparison between rainfed and irrigated crops. Potential irrigated and rainfed yields are taken from LPJmL (Blondeau et al., 2007), or from FAOSTAT by making simple assumptions on yield ratios. For the domestic sector, we project the combined effects of demographic growth, economic development and water cost evolution on future demands. The method consists of building three-block inverse demand functions in which the volume limits of the blocks evolve with the level of GDP per capita. The value of water along the demand curve is determined from price-elasticity, price and demand data from the literature, using the point-expansion method, and from water cost data. The projected demands are then confronted with future water availability. Operating rules of the reservoirs and water allocation between demands are based on
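
The three-block inverse demand function described above can be sketched as a simple step function (block limits and marginal values below are illustrative placeholders; in the model the limits evolve with GDP per capita and the values come from price-elasticity data):

```python
def inverse_demand(q, limits, prices):
    """Three-block inverse demand: marginal value of water at volume q.

    limits: ascending cumulative volume thresholds [l1, l2, l3]
    prices: descending marginal values per block [p1, p2, p3]
    """
    for limit, price in zip(limits, prices):
        if q <= limit:
            return price
    return 0.0  # demand saturated beyond the last block
```

The first block (e.g., essential domestic use) carries the highest marginal value, and the value steps down as cumulative volume grows.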

River water quality models can be valuable tools for the assessment and management of receiving water body quality. However, such models require accurate calibration in order to specify model parameters. Reliable calibration requires an extensive array of water quality data, which is generally rare and resource-intensive to collect, both economically and in terms of human resources. In the case of small rivers, such data are scarce because these rivers are generally considered too insignificant, from a practical and economic viewpoint, to justify the investment of such considerable time and resources. As a consequence, the literature contains very few studies on water quality modelling for small rivers, and those that have been published are fairly limited in scope. In this paper, a simplified river water quality model is presented. The model is an extension of the Streeter-Phelps model and takes into account the physico-chemical and biological processes most relevant to modelling the quality of receiving water bodies (i.e., degradation of dissolved carbonaceous substances, ammonium oxidation, algal uptake and denitrification, and the dissolved oxygen balance, including depletion by degradation processes and supply by physical reaeration and photosynthetic production). The model has been applied to an Italian case study, the Oreto river (IT), which has been the object of an Italian research project aimed at assessing the river's water quality. For this reason, several monitoring campaigns had previously been carried out to collect water quantity and quality data on this river system. In particular, twelve river cross sections were monitored, and both flow and water quality data were collected for each cross section. The results of the calibrated model show satisfactory agreement with the measured data, and the results reveal important differences between the parameters used to model small rivers as compared to
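The Streeter-Phelps model that the paper extends reduces to a closed-form expression for the dissolved-oxygen deficit along the river; a minimal sketch, assuming first-order kinetics and a reaeration rate different from the deoxygenation rate:

```python
import math

def streeter_phelps_deficit(t, L0, D0, kd, ka):
    """Dissolved-oxygen deficit D(t) [mg/L] from the classic
    Streeter-Phelps model (the base model extended in the paper).

    t  : travel time downstream [d]
    L0 : initial BOD concentration [mg/L]
    D0 : initial oxygen deficit [mg/L]
    kd : deoxygenation rate [1/d]
    ka : reaeration rate [1/d]; assumed ka != kd here
    """
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)
```

The deficit rises from D0 to a maximum at the critical point and then decays as reaeration outpaces deoxygenation; the extended model in the paper adds nitrogen and algal terms to this balance.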

Full Text Available The well-known historical tsunami in the Makran Subduction Zone (MSZ region was generated by the earthquake of November 28, 1945 in Makran Coast in the North of Oman Sea. This destructive tsunami killed over 4,000 people in Southern Pakistan and India, caused great loss of life and devastation along the coasts of Western India, Iran and Oman. According to the report of "Remembering the 1945 Makran Tsunami", compiled by the Intergovernmental Oceanographic Commission (UNESCO/IOC, the maximum inundation of Chabahar port was 367 m toward the dry land, which had a height of 3.6 meters from the sea level. In addition, the maximum amount of inundation at Pasni (Pakistan reached to 3 km from the coastline. For the two beaches of Gujarat (India and Oman the maximum run-up height was 3 m from the sea level. In this paper, we first use Makran 1945 seismic parameters to simulate the tsunami in generation, propagation and inundation phases. The effect of tsunami on Chabahar port is simulated using the ComMIT model which is based on the Method of Splitting Tsunami (MOST. In this process the results are compared with the documented eyewitnesses and some reports from researchers for calibration and validation of the result. Next we have used the model to perform risk assessment for Chabahar port in the south of Iran with the worst case scenario of the tsunami. The simulated results showed that the tsunami waves will reach Chabahar coastline 11 minutes after generation and 9 minutes later, over 9.4 Km2 of the dry land will be flooded with maximum wave amplitude reaching up to 30 meters.

This paper focuses on uncertainties in model output used to assess accidents. We begin by reviewing the historical development of assessment models and the associated interest in uncertainties as these evolutionary processes occurred in the United States. This is followed by a description of the sources of uncertainty in assessment calculations. Types of models appropriate for the assessment of accidents are identified. We then summarize results from our analysis of uncertainty in results obtained with current methodology for assessing routine and accidental radionuclide releases to the environment. We conclude with a discussion of preferred procedures and suggested future directions to improve the state of the art of radiological assessments.

The Utility of Social Modeling for Proliferation Assessment project (PL09-UtilSocial) investigates the use of social and cultural information to improve nuclear proliferation assessments, including nonproliferation assessments, Proliferation Resistance (PR) assessments, safeguards assessments, and other related studies. These assessments often use and create technical information about a host State and its posture towards proliferation, the vulnerability of a nuclear energy system (NES) to an undesired event, and the effectiveness of safeguards. The objective of this project is to find and integrate social and technical information by explicitly considering the role of cultural, social, and behavioral factors relevant to proliferation, and to describe and demonstrate if and how social science modeling has utility in proliferation assessment. This report describes a modeling approach and how it might be used to support a location-specific PR assessment of a particular NES. The report demonstrates the use of social modeling to enhance an existing assessment process that relies primarily on technical factors. This effort builds on a literature review and preliminary assessment performed as the first stage of the project and compiled in PNNL-18438. This report describes an effort to answer questions about whether it is possible to incorporate social modeling into a PR assessment in such a way that we can determine the effects of social factors on a primarily technical assessment. This report provides: (1) background information about the relevant social factors literature; (2) background information about a particular PR assessment approach relevant to this demonstration; (3) a discussion of social modeling undertaken to find and characterize social factors that are relevant to the PR assessment of a nuclear facility in a specific location; and (4) a description of an enhancement concept that integrates social factors into an existing, technically

The alteration of forest cover and the replacement of native vegetation with buildings, roads, exotic vegetation, and other urban features pose one of the greatest threats to global biodiversity. As more land becomes slated for urban development, identifying effective urban forest wildlife management tools becomes paramount to ensure the urban forest provides habitat to sustain bird and other wildlife populations. The primary goal of this study was to integrate wildlife suitability indices to an existing national urban forest assessment tool, i-Tree. We quantified available habitat characteristics of urban forests for ten northeastern U.S. cities, and summarized bird habitat relationships from the literature in terms of variables that were represented in the i-Tree datasets. With these data, we generated habitat suitability equations for nine bird species representing a range of life history traits and conservation status that predicts the habitat suitability based on i-Tree data. We applied these equations to the urban forest datasets to calculate the overall habitat suitability for each city and the habitat suitability for different types of land-use (e.g., residential, commercial, parkland) for each bird species. The proposed habitat models will help guide wildlife managers, urban planners, and landscape designers who require specific information such as desirable habitat conditions within an urban management project to help improve the suitability of urban forests for birds.
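As a sketch of how the per-variable habitat measures described above can be rolled up into an overall score per city or land-use type, a geometric mean is a common aggregation rule for habitat suitability indices; whether the paper's equations use this exact rule is an assumption.

```python
def overall_suitability(indices):
    """Combine per-variable habitat suitability indices (each scaled to
    [0, 1], e.g. canopy cover, tree species richness) into one score
    via a geometric mean. The geometric mean is a classical HSI
    aggregation choice: any single unsuitable variable (index 0)
    drives the overall score to 0.
    """
    prod = 1.0
    for v in indices:
        prod *= v
    return prod ** (1.0 / len(indices))
```

A city's score for a species would then be this aggregate evaluated on the i-Tree-derived variables for each land-use stratum.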

Chemicals provide many key building blocks that are converted into end-use products or used in industrial processes to make products that benefit society. Ensuring the safety of chemicals and their associated products is a key regulatory mission. Current processes and procedures for evaluating and assessing the impact of chemicals on human health, wildlife, and the environment were, in general, designed decades ago. These procedures depend on generation of relevant scientific knowledge in the laboratory and interpretation of this knowledge to refine our understanding of the related potential health risks. In practice, this often means that estimates of dose-response and time-course behaviors for apical toxic effects are needed as a function of relevant levels of exposure. In many situations, these experimentally determined functions are constructed using relatively high doses in experimental animals. In the absence of experimental data, the application of computational modeling is necessary to extrapolate risk or safety guidance values for human exposures at low but environmentally relevant levels.

There is a growing need for investments in hospital facilities to improve the efficiency and quality of health services. In recent years, publicly financed hospital organisations in many countries have utilised private finance arrangements, variously called private finance initiatives (PFIs), public-private partnerships (PPPs) or P3s, to address their capital requirements. However, such projects have become more difficult to implement since the onset of the global financial crisis, which has led to a reduction in the supply of debt capital and an increase in its price. In December 2012, the government of the United Kingdom outlined a comprehensive set of reforms to the private finance model in order to revive this important source of capital for hospital investments. This article provides a critical assessment of the 'Private Finance 2' reforms, focusing on their likely impact on the supply and cost of capital. It concludes that constraints in supply are likely to continue, in part due to regulatory constraints facing both commercial banks and institutional investors, while the cost of capital is likely to increase, at least in the short term.

Explores the process of needs assessment within the In-Service Education and Training (INSET) program for unqualified primary teachers in Namibia and presents the various stages of an effective needs assessment model. Summarizes the study from which the model emerged and reviews literature related to needs assessment for INSET. (CMK)

Classroom assessment, especially formative assessment, is one of the most challenging areas for new teachers, so it is imperative that teacher educators model effective classroom assessment practices. This article describes the use of rubrics in formative assessment, to support candidates in their progress toward mastery of course outcomes and to…

Intended as a simple, economical method of needs assessment, this needs-assessment model presents four primary tasks: goal definition, program assessment, needs identification, and decision-making. Each step is explained in detail with sample instruments, sample preplans, education goals, and a questionnaire. The needs-assessment concept is…

We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of seismogenic sources. To evaluate ground-shaking behaviour in different tectonic regimes, we conducted a series of tests matching the modelled ground motions to the felt intensities of earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equation (GMPE) of Atkinson and Boore (2003) best fits the behaviour of subduction events. Likewise, the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPE of Akkar and Cagnan (2010) fits crustal earthquakes best. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear-wave velocity down to 30 m depth), derived from analysis of topographic slope and microtremor array measurements, to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, as the seismic sources there produce earthquakes at short intervals and/or their last events occurred a long time ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazards for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.
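The Gutenberg-Richter recurrence relation underlying the source models above has a one-line form; the a and b values used below are illustrative, not the fitted parameters for Myanmar's sources:

```python
def gr_annual_rate(m, a, b):
    """Annual rate of earthquakes with magnitude >= m from the
    Gutenberg-Richter relation log10 N(>=m) = a - b*m.
    The reciprocal of the rate is the mean recurrence interval.
    """
    return 10.0 ** (a - b * m)
```

For example, with the placeholder values a = 4.0 and b = 1.0, `gr_annual_rate(6.0, 4.0, 1.0)` gives 0.01 events per year, i.e. a mean recurrence interval of about 100 years for M >= 6 events in that source.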

This paper describes the Assessment Practices Framework and how I used it to study a high school Chemistry teacher as she designed, implemented, and learned from a chemistry lab report. The framework consists of exploring three teacher-centered components of classroom assessment (assessment beliefs, practices, and reflection) and analyzing…

It has become axiomatic that assessment impacts powerfully on student learning. However, surprisingly little research has been published emanating from authentic higher education settings about the nature and mechanism of the pre-assessment learning effects of summative assessment. Less still emanat

Full Text Available This study focused on forest ecosystem dynamics assessment and predictive modelling of deforestation and forest cover in a part of north-eastern India, i.e. forest areas along the West Bengal, Bhutan, Arunachal Pradesh and Assam border in the Eastern Himalaya, using temporal satellite imagery from 1975, 1990 and 2009, and predicted the forest cover for 2028 using a Cellular Automata Markov Model (CAMM). The exercise highlighted large-scale deforestation in the study area in both the 1975–1990 and 1990–2009 forest cover vectors. A net loss of 2,334.28 km2 of forest cover was noticed between 1975 and 2009, and at the current rate of deforestation, a forest area of 4,563.34 km2 will be lost by 2028. The annual rate of deforestation worked out to 0.35 and 0.78% during 1975–1990 and 1990–2009, respectively. Bamboo forest increased by 24.98% between 1975 and 2009 due to the opening up of the forests. Forests in the Kokrajhar, Barpeta, Darrang, Sonitpur, and Dhemaji districts in Assam were noticed to be worst affected, while Lower Subansiri, West and East Siang, Dibang Valley, Lohit and Changlang in Arunachal Pradesh were severely affected. Among the different forest types, the maximum loss was seen for sal forest (37.97% between 1975 and 2009), which is expected to deplete further by 60.39% by 2028. Tropical moist deciduous forest was the next category, which decreased from 5,208.11 km2 to 3,447.28 km2 (33.81%) during the same period, with further likely depletion to 2,288.81 km2 (56.05%) by 2028. The study noted progressive loss of forests in the study area between 1975 and 2009 through 1990 and predicted that, unless checked, the area faces further depletion of the invaluable climax forests in the region, especially sal and moist deciduous forests. The exercise demonstrated the high potential of remote sensing and geographic information systems for forest ecosystem dynamics assessment and the efficacy of CAMM in predicting forest cover change.
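Annual deforestation rates such as those quoted above are typically computed with a compound (logarithmic) rate formula; using this exact formulation is an assumption, since the paper reports only the resulting rates:

```python
import math

def annual_change_rate(area_start, area_end, years):
    """Compound annual rate of forest-cover change in % per year:
    r = ln(A2/A1) / t * 100 (the standardized rate of Puyravaud).
    Negative values indicate net forest loss."""
    return math.log(area_end / area_start) / years * 100.0
```

For instance, a decline from 100 units of forest to 90 over 10 years gives roughly -1.05% per year, the same order as the 0.35-0.78% annual losses reported in the study.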

Full Text Available Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests, in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work, with characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed using "classical" and Bayesian approaches. Four methods of data analysis based on classical inference are presented. We then show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set on train wheel degradation.
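One classical treatment of degradation data fits a degradation path per unit and extrapolates it to the failure threshold, yielding a "pseudo failure time"; the minimal linear-path sketch below illustrates the idea and is not necessarily one of the paper's four methods:

```python
def pseudo_failure_time(times, degradation, threshold):
    """Fit a linear degradation path y = beta * t for one unit
    (least squares through the origin) and solve for the time at
    which the path crosses the failure threshold.

    times       : measurement times
    degradation : observed degradation at those times (same length)
    threshold   : degradation level defined as failure
    """
    # Least-squares slope through the origin: beta = sum(t*y) / sum(t*t).
    num = sum(t * y for t, y in zip(times, degradation))
    den = sum(t * t for t in times)
    beta = num / den
    return threshold / beta
```

Repeating this per unit (e.g. per train wheel) gives a sample of pseudo failure times to which a lifetime distribution can then be fitted.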

As part of studies into the siting of a deep repository for nuclear waste, Swedish Nuclear Fuel and Waste Management Company (SKB) has commissioned the Alternative Models Project (AMP). The AMP is a comparison of three alternative modeling approaches for geosphere performance assessment for a single hypothetical site. The hypothetical site, arbitrarily named Aberg, is based on parameters from the Aespoe Hard Rock Laboratory in southern Sweden. The Aberg model domain, boundary conditions and canister locations are defined as a common reference case to facilitate comparisons between approaches. This report presents the results of a discrete fracture pathways analysis of the Aberg site, within the context of the SR 97 performance assessment exercise. The Aberg discrete fracture network (DFN) site model is based on consensus Aberg parameters related to the Aespoe HRL site. Discrete fracture pathways are identified from canister locations in a prototype repository design to the surface of the island or to the sea bottom. The discrete fracture pathways analysis presented in this report is used to provide the following parameters for SKB's performance assessment transport codes FARF31 and COMP23: * F-factor: flow-wetted surface normalized with regard to flow rate (yields an appreciation of the contact area available for diffusion and sorption processes) [TL{sup -1}]. * Travel Time: advective transport time from a canister location to the environmental discharge [T]. * Canister Flux: Darcy flux (flow rate per unit area) past a representative canister location [LT{sup -1}]. In addition to the above, the discrete fracture pathways analysis in this report also provides information about: additional pathway parameters such as pathway length, pathway width, transport aperture, reactive surface area and transmissivity, percentage of canister locations with pathways to the surface discharge, spatial pattern of pathways and pathway discharges, visualization of pathways, and
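The F-factor defined above (flow-wetted surface normalized by flow rate) can be accumulated segment by segment along a discrete fracture pathway; this is a definitional sketch, not SKB's implementation in the DFN code or FARF31:

```python
def f_factor(segments):
    """F-factor [T/L] along a discrete fracture pathway: the
    flow-wetted surface of each segment divided by the flow rate
    through it, summed over the pathway.

    Each segment is a tuple (length, width, flow_rate) in consistent
    units; the factor 2 counts both walls of the fracture.
    """
    return sum(2.0 * length * width / q for (length, width, q) in segments)
```

A high F-factor means a large contact area per unit flow, hence more opportunity for matrix diffusion and sorption to retard transported radionuclides.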

Recent guidance identified toxicokinetic-toxicodynamic (TK-TD) modeling as a relevant approach for risk assessment refinement. Yet, its added value compared to other refinement options is not detailed, and how to conduct the modeling appropriately is not explained. This case study addresses these issues through 2 examples of individual-level risk assessment for 2 hypothetical plant protection products: 1) evaluating the risk for small granivorous birds and small omnivorous mammals of a single application, as a seed treatment in winter cereals, and 2) evaluating the risk for fish after a pulsed treatment in the edge-of-field zone. Using acute test data, we conducted the first tier risk assessment as defined in the European Food Safety Authority (EFSA) guidance. When first tier risk assessment highlighted a concern, refinement options were discussed. Cases where the use of models should be preferred over other existing refinement approaches were highlighted. We then practically conducted the risk assessment refinement by using 2 different models as examples. In example 1, a TK model accounting for toxicokinetics and relevant feeding patterns in the skylark and in the wood mouse was used to predict internal doses of the hypothetical active ingredient in individuals, based on relevant feeding patterns in an in-crop situation, and identify the residue levels leading to mortality. In example 2, a TK-TD model accounting for toxicokinetics, toxicodynamics, and relevant exposure patterns in the fathead minnow was used to predict the time-course of fish survival for relevant FOCUS SW exposure scenarios and identify which scenarios might lead to mortality. Models were calibrated using available standard data and implemented to simulate the time-course of internal dose of active ingredient or survival for different exposure scenarios. Simulation results were discussed and used to derive the risk assessment refinement endpoints used for decision. Finally, we compared the
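The TK side of such models is often a one-compartment balance between uptake and elimination; the following is a generic sketch under constant exposure, not the calibrated skylark, wood mouse, or fathead minnow model from the case study:

```python
import math

def internal_concentration(t, c_ext, ku, ke):
    """One-compartment toxicokinetic model under constant external
    exposure: dC/dt = ku * c_ext - ke * C with C(0) = 0, which gives
    C(t) = (ku / ke) * c_ext * (1 - exp(-ke * t)).

    t     : time since exposure start
    c_ext : constant external concentration (e.g. residue on food)
    ku    : uptake rate constant
    ke    : elimination rate constant
    """
    return (ku / ke) * c_ext * (1.0 - math.exp(-ke * t))
```

The internal dose rises toward the steady state (ku/ke) * c_ext; pulsed or time-varying exposure patterns, as in the FOCUS SW scenarios, are handled by solving the same differential equation piecewise.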

Risk Assessment of Power Systems addresses the regulations and functions of risk assessment with regard to its relevance in system planning, maintenance, and asset management. Brimming with practical examples, this edition introduces the latest risk information on renewable resources, the smart grid, voltage stability assessment, and fuzzy risk evaluation. It is a comprehensive reference of a highly pertinent topic for engineers, managers, and upper-level students who seek examples of risk theory applications in the workplace.

Forced outages and boiler unavailability in coal-fired fossil plants are most often caused by fire-side corrosion of boiler waterwalls and tubing. Reliable coatings are required for ultrasupercritical (USC) applications to mitigate corrosion, since these boilers will operate at much higher temperatures and pressures than supercritical (565 C {at} 24 MPa) boilers. Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems to produce stable nanocrystalline coatings that form a protective, continuous scale of either Al{sub 2}O{sub 3} or Cr{sub 2}O{sub 3}. The computational modeling results identified a new series of Fe-25Cr-40Ni, with or without 10 wt.% Al, nanocrystalline coatings that maintain long-term stability by forming a diffusion-barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of a continuous Al{sub 2}O{sub 3} scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed and, among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best quality coating, with a minimum number of shallow defects; the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al{sub 2}O{sub 3} scale, than widely used MCrAlY coatings. However, the ultra-fine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al

This study aimed to critically appraise translational research models for suitability in performance assessment of cancer centers. Process models, such as the Process Marker Model and Lean and Six Sigma applications, seem to be suitable for performance assessment of cancer centers. However, they must be thoroughly tested in practice.

Pain assessment in animal models of osteoarthritis is integral to interpretation of a model's utility in representing the clinical condition, and enabling accurate translational medicine. Here we describe two methods for behavioral pain assessments available for use in animal models of experimental osteoarthritic pain: Von Frey filaments and spontaneous activity monitoring.

Ecological Models for Regulatory Risk Assessments of Pesticides: Developing a Strategy for the Future provides a coherent, science-based view on ecological modeling for regulatory risk assessments. It discusses the benefits of modeling in the context of registrations, identifies the obstacles that p

A rockfall is a mass instability process frequently observed in road cuts, open pit mines and quarries, steep slopes and cliffs. It is frequently observed that the detached rock mass becomes fragmented when it impacts the slope surface. Consideration of the fragmentation of the rockfall mass is critical for the calculation of the blocks' trajectories and impact energies, and thus for assessing their potential to cause damage and designing adequate preventive structures. We present here the performance of the RockGIS model, a GIS-based tool that stochastically simulates the fragmentation of rockfalls based on a lumped-mass approach. In RockGIS, fragmentation initiates with the disaggregation of the detached rock mass through the pre-existing discontinuities just before impact with the ground. An energy threshold is defined in order to determine whether the impacting blocks break or not. The distribution of the initial mass between a set of newly generated rock fragments is carried out stochastically following a power law. The trajectories of the new rock fragments are distributed within a cone. The model requires calibration of both the runout of the resultant blocks and the spatial distribution of the volumes of fragments generated by breakage during propagation. As this is a coupled process controlled by several parameters, a set of performance criteria to be met by the simulation has been defined. The criteria include: the position of the centre of gravity of the whole block distribution, the histogram of block runout, the extent and boundaries of the young debris cover over the slope surface, the lateral dispersion of trajectories, the total number of blocks generated after fragmentation, the volume distribution of the generated fragments, the number of blocks and volume passages past a reference line, and the maximum runout distance. Since the number of parameters to fit increases significantly when considering fragmentation, the
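The power-law partition of the detached mass among fragments can be sketched by inverse-CDF sampling from a truncated power law and rescaling so the fragments conserve the original volume; all parameter values below are illustrative assumptions, not RockGIS's calibration:

```python
import random

def fragment_volumes(total_volume, n_fragments, exponent=1.5,
                     v_min=0.01, v_max=1.0, seed=0):
    """Draw `n_fragments` relative volumes from a truncated power-law
    distribution p(v) ~ v**(-exponent) on [v_min, v_max] via inverse-CDF
    sampling, then rescale so the fragments sum to `total_volume`.
    A sketch of the stochastic partition step, not the full RockGIS model.
    """
    rng = random.Random(seed)
    a = 1.0 - exponent
    raw = []
    for _ in range(n_fragments):
        u = rng.random()
        # Inverse CDF of the truncated power law on [v_min, v_max].
        v = (u * (v_max ** a - v_min ** a) + v_min ** a) ** (1.0 / a)
        raw.append(v)
    scale = total_volume / sum(raw)
    return [v * scale for v in raw]
```

Each fragment would then be propagated independently, with its launch direction drawn inside the dispersion cone described above.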

In-stent restenosis (ISR) remains a major public health concern associated with an increased morbidity, mortality, and health-related costs. Drug-eluting stents (DES) have reduced ISR, but generate healing-related issues or hypersensitivity reactions, leading to an increased risk of late acute stent thrombosis. Assessments of new DES are based on animal models or in vitro release systems, which have several limitations. The role of flow and shear stress on endothelial cell and ISR has also been emphasized. The aim of this work was to design and first evaluate an original bioreactor, replicating ex vivo hemodynamic and biological conditions similar to human conditions, to further evaluate new DES. This bioreactor was designed to study up to 6 stented arteries connected in bypass, immersed in a culture box, in which circulated a physiological systolo-diastolic resistive flow. Two centrifugal pumps drove the flow. The main pump generated pulsating flows by modulation of rotation velocity, and the second pump worked at constant rotation velocity, ensuring the counter pressure levels and backflows. The flow rate, the velocity profile, the arterial pressure, and the resistance of the flow were adjustable. The bioreactor was placed in an incubator to reproduce a biological environment. A first feasibility experience was performed over a 24-day period. Three rat aortic thoracic arteries were placed into the bioreactor, immersed in cell culture medium changed every 3 days, and with a circulating systolic and diastolic flux during the entire experimentation. There was no infection and no leak. At the end of the experimentation, a morphometric analysis was performed confirming the viability of the arteries. We designed and patented an original hemodynamic ex vivo model to further study new DES, as well as a wide range of vascular diseases and medical devices. This bioreactor will allow characterization of the velocity field and drug transfers within a stented artery with new

Massive Open Online Courses (MOOCs) are becoming an increasingly popular choice for education but, to reach their full extent, they require the resolution of new issues like assessing students at scale. A feasible approach to tackle this problem is peer assessment, in which students also play the role of assessor for assignments submitted by…

An important goal of the SIGMEA project is to develop computer-based decision support systems (DSS) for the assessment of the…

A cognitive diagnostic model uses information from educational experts to describe the relationships between item performances and posited proficiencies. When the cognitive relationships can be described using a fully Bayesian model, Bayesian model checking procedures become available. Checking models tied to cognitive theory of the domains…

In this paper we present a method of modeling and analysis that permits the extraction and quantitative display of detailed information about the effects of instruction on a class's knowledge. The method relies on a cognitive model that represents student thinking in terms of mental models. Students frequently fail to recognize the relevant conditions that lead to appropriate uses of their models, and as a result they can use multiple models inconsistently. Once the most common mental models have been determined by qualitative research, they can be mapped onto a multiple-choice test. Model analysis permits the interpretation of such a situation. We illustrate the use of our method by analyzing results from the Force Concept Inventory (FCI).

We describe the assessment of computational modeling in a ninth-grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects in a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). The impact of teaching computation is evaluated through a proctored assignment that asks the students to complete a provided program so that it represents the correct motion. Using questions isomorphic to the Force Concept Inventory, we gauge students' understanding of force in relation to the simulation. The students are given an open-ended essay question that asks them to explain the steps they would use to model a physical situation. We also investigate the attitudes and prior experiences of each student using the Computation Modeling in Physics Attitudinal Student Survey (COMPASS) developed at Georgia Tech, as well as a prior computational experiences survey.

A comprehensive assessment method based on gray system theory and gray relational grade analysis was put forward to optimize water consumption forecasting models. The method provides better accuracy for the assessment and optimal selection of water consumption forecasting models. The results show that a forecasting model built on this comprehensive assessment method presents better self-adaptability and accuracy in forecasting.
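The gray relational grade that underlies such an assessment compares each model's forecast sequence against the observed reference sequence; a minimal sketch with the customary distinguishing coefficient rho = 0.5, assuming the sequences are pre-normalized and not identical:

```python
def gray_relational_grade(reference, comparison, rho=0.5):
    """Gray relational grade between a reference sequence (observed
    water consumption) and a comparison sequence (a model's forecast):
    the mean of the pointwise gray relational coefficients.
    Assumes the sequences differ somewhere (otherwise max delta is 0
    and the coefficient is undefined).
    """
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Candidate forecasting models are then ranked by their grade against the observations, and the model with the highest grade is selected.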

issues has been used in educational public settings to affect public understanding of science. After a theoretical background discussion, our approach is threefold. First, we provide an overview, a "map," of DOE-funded outreach programs within the overall ELSI context, to identify the importance of the educational component and to present the criteria we used to select relevant and representative case studies. Second, we document the history of the case studies. Finally, we explore an intertwined set of research questions: (1) to identify what we can expect such projects to accomplish, in other words, to determine the goals that can reasonably be achieved by different types of outreach; (2) to point out how the case study approach could be useful for DOE-ELSI outreach as a whole; and (3) to use the case study approach as a basis to test theoretical models of science outreach, in order to assess to what extent those models accord with real-world outreach activities. For this last goal, we aim to identify which practices among ELSI outreach activities contribute most to dissemination or to participation, in other words, in which cases outreach materials spark public participation in decisions about scientific issues.

This report contains a revised descriptive model of porphyry copper deposits (PCDs), the world's largest source (about 60 percent) and resource (about 65 percent) of copper and a major source of molybdenum, gold and silver. Despite relatively low grades (average 0.44 percent copper in 2008), PCDs have significant economic and societal impacts due to their large size (commonly hundreds of millions to billions of metric tons), long mine lives (decades), and high production rates (billions of kilograms of copper per year). The revised model describes the geotectonic setting of PCDs, and provides extensive regional- to deposit-scale descriptions and illustrations of geological, geochemical, geophysical, and geoenvironmental characteristics. Current genetic theories are reviewed and evaluated, knowledge gaps are identified, and a variety of exploration and assessment guides are presented. A summary is included for users seeking overviews of specific topics.

Phylogenetic inference is widely used to investigate the relationships between homologous sequences. RNA molecules have played a key role in these studies because they are present throughout life and tend to evolve slowly. Phylogenetic inference has been shown to be dependent on the substitution model used. A wide range of models have been developed to describe RNA evolution, either with 16 states describing all possible base pairs or with 7 states where the 10 mismatched pairs are reduced to a single state. Formal model selection has become a standard practice for choosing an inferential model and works well for comparing models of a specific type, such as comparisons within nucleotide models or within amino acid models. Model selection cannot function across different-sized state spaces, however, because the likelihoods are conditioned on different data. Here, we introduce statistical state-space projection methods that allow the direct comparison of likelihoods between nucleotide models and 7-state and 16-state RNA models. To demonstrate the general applicability of our new methods, we extract 287 RNA families from genomic alignments and perform model selection. We find that in 281/287 families, RNA models are selected in preference to nucleotide models, with simple 7-state RNA models selected for more conserved families with shorter stems and more complex 16-state RNA models selected for more divergent families with longer stems. Other factors, such as the function of the RNA molecule or the GC-content, have limited impact on model selection. Our models and model selection methods are freely available in the open-source PHASE 3.0 software.

In the last three decades, the Life Cycle Assessment (LCA) framework has grown to establish itself as the leading tool for assessing the environmental impacts of product systems. LCA studies are now conducted globally both within and outside academia and are also used as a basis for policy

paper addresses an important question of modeling stream dynamics: how may numerical models of braided stream morphodynamics be rigorously and objectively evaluated against a real case study? Using simulations from the Cellular Automaton Evolutionary Slope and River (CAESAR) reduced-complexity model (RCM) of a 33 km reach of a large gravel-bed river (the Tagliamento River, Italy), this paper aims to (i) identify a sound strategy for calibration and validation of RCMs, (ii) investigate the effectiveness of multi-performance model assessments, and (iii) assess the potential of using CAESAR at mesospatial and mesotemporal scales. The approach has three main steps: first sensitivity analysis (using a screening method and a variance-based method), then calibration, and finally validation. This approach allowed us to analyze 12 input factors initially and then to focus calibration only on the factors identified as most important. Sensitivity analysis and calibration were performed on a 7.5 km subreach using a hydrological time series of 20 months, while validation was performed on the whole 33 km study reach over a period of 8 years (2001-2009). CAESAR was able to reproduce the macromorphological changes of the study reach and gave good results for annual bed-load sediment estimates, which were consistent with measurements in other large gravel-bed rivers, but showed a poorer performance in reproducing the characteristics of the braided channel (e.g., braiding intensity). The approach developed in this study can be effectively applied in other similar RCM contexts, allowing RCMs to be used not only in an explorative manner but also to obtain quantitative results and scenarios.
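The paper's variance-based method is not specified beyond the name; as an illustrative sketch of the underlying idea, the first-order sensitivity index S_i = Var(E[Y|X_i]) / Var(Y) can be estimated by brute-force conditioning on a toy stand-in function (a real morphodynamic model run would replace `model` here):

```python
import random

# First-order sensitivity index S_i = Var(E[Y | X_i]) / Var(Y), estimated by
# brute-force conditioning: fix X_i, average the model over the other inputs.
random.seed(0)

def model(x1, x2, x3):
    # toy stand-in for a model run; x1 dominates the output variance
    return 4.0 * x1 + 1.0 * x2 + 0.2 * x3

def first_order_index(i, n_outer=200, n_inner=200):
    cond_means = []
    for _ in range(n_outer):
        xi = random.random()                 # one fixed value of factor i
        total = 0.0
        for _ in range(n_inner):
            x = [random.random() for _ in range(3)]
            x[i] = xi
            total += model(*x)
        cond_means.append(total / n_inner)   # estimate of E[Y | X_i = xi]
    mean = sum(cond_means) / n_outer
    var_cond = sum((m - mean) ** 2 for m in cond_means) / n_outer

    ys = [model(random.random(), random.random(), random.random())
          for _ in range(n_outer * n_inner)]
    mu = sum(ys) / len(ys)
    var_y = sum((y - mu) ** 2 for y in ys) / len(ys)
    return var_cond / var_y

s = [first_order_index(i) for i in range(3)]
print(s[0] > s[1] > s[2])  # x1 contributes most, x3 least
```

In practice more efficient estimators (e.g., Saltelli sampling) are used; the brute-force version above is only meant to show what "the fraction of output variance attributable to factor i" means, which is how such indices let a study focus calibration on the most important input factors.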

The Panel has interpreted the Terms of Reference as a stepwise analysis of issues relevant to both the development and the evaluation of models to assess ecological effects of pesticides. The regulatory model should be selected or developed to address the relevant specific protection goal. The basis of good modelling practice must be knowledge of the relevant processes and the availability of data of sufficient quality. The opinion identifies several critical steps in setting models within risk assessment, namely: problem formulation, considering the specific protection goals for the taxa or functional groups of concern; model domain of applicability, which drives the species and scenarios to model; species (and life stage) selection, considering relevant life-history traits and toxicological/toxicokinetic characteristics of the pesticide; and selection of the environmental scenario, which is defined by a combination of abiotic, biotic and agronomic parameters to provide a realistic worst-case situation. Model development should follow the modelling cycle, in which every step has to be fully documented: (i) problem definition; (ii) model formulation, i.e. design of a conceptual model; (iii) model formalisation, in which variables and parameters are linked together into mathematical equations or algorithms; (iv) model implementation, in which a computer code is produced and verified; (v) model setup, including sensitivity analysis, uncertainty analysis and comparison with observed data, which delivers the regulatory model; (vi) prior to actual use in risk assessment, evaluation of the regulatory model for relevance to the specific protection goals; (vii) feedback from the risk assessor, with possible recommendations for model improvement. Model evaluation by regulatory authorities should consider each step of the modelling cycle: the opinion identifies points of particular attention for the use of mechanistic effect models in pesticide risk assessment.

This study evaluated the correlation between the risk of febrile neutropenia (FN) estimated by physicians and the risk of severe neutropenia or FN predicted by a validated multivariate model in patients with nonmyeloid malignancies receiving chemotherapy. Before patient enrollment, physician and site characteristics were recorded, and physicians self-reported the FN risk at which they would typically consider granulocyte colony-stimulating factor (G-CSF) primary prophylaxis (FN risk intervention threshold). For each patient, physicians electronically recorded their estimated FN risk, orders for G-CSF primary prophylaxis (yes/no), and patient characteristics for model predictions. Correlations between physician-assessed FN risk and model-predicted risk (primary endpoints) and between physician-assessed FN risk and G-CSF orders were calculated. Overall, 124 community-based oncologists registered; 944 patients initiating chemotherapy with intermediate FN risk enrolled. Median physician-assessed FN risk over all chemotherapy cycles was 20.0%, and median model-predicted risk was 17.9%; the correlation was 0.249 (95% CI, 0.179-0.316). The correlation between physician-assessed FN risk and subsequent orders for G-CSF primary prophylaxis (n = 634) was 0.313 (95% CI, 0.135-0.472). Among patients with a physician-assessed FN risk ≥ 20%, 14% did not receive G-CSF orders. G-CSF was not ordered for 16% of patients at or above their physician's self-reported FN risk intervention threshold (median, 20.0%) and was ordered for 21% below the threshold. Physician-assessed FN risk and model-predicted risk correlated weakly; however, there was moderate correlation between physician-assessed FN risk and orders for G-CSF primary prophylaxis. Further research and education on FN risk factors and appropriate G-CSF use are needed.

Animal models have been used extensively in diabetes research. Studies on animal models have contributed to the discovery and purification of insulin, the development of new therapeutic approaches, and progress in fundamental and clinical research. However, conventional rodent and large-animal mammalian models face ethical, practical, or technical limitations. Therefore, it would be beneficial to develop an alternative model for diabetes research that overcomes these limitations. Amongst ot...

The plant disease triangle consists of the host plant, the pathogen, and the environment, but their interaction has not been considered in climate change adaptation policy. Our objective is to predict changes in coniferous forests, pine wood nematodes (Bursaphelenchus xylophilus), and pine sawyer beetles (Monochamus spp.), which together cause pine wilt disease in the Republic of Korea. We analyzed the impact of climate change on pine wilt disease using a species distribution model (SDM) and the CLIMEX model. The area of coniferous forest is projected to decline and shift to northern and high-altitude areas, while pine wood nematodes and pine sawyer beetles are expected to spread because they will encounter a more favorable environment in the future. Coniferous forests are expected to be highly vulnerable because of their decreasing area and the increasing risk of pine wilt disease. Climate change will thus greatly affect forest ecosystems in the future, and if effective and appropriate prevention and control policies are not implemented, coniferous forests will be severely damaged. An adaptation policy should be created to protect coniferous forests from the viewpoint of biodiversity, so the impact assessment of climate change must be considered when establishing an effective adaptation policy. The impact assessment of pine wilt disease using the plant disease triangle yielded suitable results to support climate change adaptation policy.
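Neither the SDM nor the CLIMEX configuration is given in the abstract; as a minimal, hypothetical sketch of the species-distribution-modelling idea, presence/absence records can be related to a single climate variable with a logistic model fitted by gradient descent (all data below are synthetic, and real SDMs use many predictors and richer algorithms):

```python
import math, random

# Minimal species-distribution sketch: logistic regression of presence/absence
# on one climate variable, fitted by batch gradient descent (synthetic data).
random.seed(1)

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Return intercept b0 and slope b1 minimizing the logistic loss."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y                 # gradient of the logistic loss
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# synthetic survey: the species favors cooler sites (threshold near 12 C)
temps = [random.uniform(5.0, 20.0) for _ in range(300)]
presence = [1 if t < 12.0 + random.gauss(0.0, 1.5) else 0 for t in temps]

b0, b1 = fit_logistic([t - 12.0 for t in temps], presence)  # centered predictor

def suitability(t):
    """Predicted probability of presence at mean temperature t (C)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * (t - 12.0))))

print(suitability(8.0) > suitability(18.0))  # cooler site predicted more suitable
```

Projecting such a fitted suitability surface onto future climate layers is what produces the kind of range-shift prediction the study reports for coniferous forests and pine wilt vectors.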

The generalized estimating equation (GEE), a distribution-free, or semi-parametric, approach for modeling longitudinal data, is used in a wide range of behavioral, psychotherapy, pharmaceutical drug safety, and healthcare-related research studies. Most popular methods for assessing model fit are based on the likelihood function for parametric models, rendering them inappropriate for the distribution-free GEE. One rare exception is a score statistic initially proposed by Tsiatis (1980) for logistic regression and later extended to GEE by Barnhart and Williamson (1998). Because GEE only provides valid inference under the missing-completely-at-random assumption, and missing values arising in most longitudinal studies do not follow such a restricted mechanism, this GEE-based score test has very limited application in practice. We propose extensions of this goodness-of-fit test to address missing data under the missing-at-random assumption, a more realistic model that applies to most studies in practice. We examine the performance of the proposed tests using simulated data and demonstrate their utility with data from a real study on geriatric depression and associated medical comorbidities.

Ecosystem health assessment is one of the main research topics and urgent tasks of ecosystem science in the 21st century. An operational definition of ecosystem health and an all-sided, simple, easily operated, and standardized index system, which together form the foundation of ecosystem health assessment, are necessary for obtaining a simple and applicable assessment theory and method. Taking the Korean pine and broadleaved mixed forest ecosystem as an example, this paper puts forward an original conception of ecosystem health, based on the ideas of a model ecosystem set and of forest ecosystem health, together with its assessment. This conception helps clarify what ecosystem health is. Finally, a formula is derived for a new health assessment method, health distance (HD), proposed here for the first time in China. In addition, aiming at the characteristics revealed by status analysis and substantive health questions, a health index system for the Korean pine and broadleaved mixed forest ecosystem is put forward, treating it as a compound ecosystem with natural, economic, and social properties. The sub-indices are concrete enough to measure, so the system provides the foundation for assessing the ecosystem health of Korean pine and broadleaved mixed forests in subsequent research.
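The paper's HD formula is not reproduced in the abstract; purely as a hypothetical illustration of the idea, a "health distance" can be sketched as a weighted distance between an ecosystem's measured indicators and a healthy reference state, with smaller values meaning healthier (the indicator names, weights, and distance form below are all illustrative assumptions, not the paper's definition):

```python
import math

# Hypothetical "health distance" (HD): a weighted Euclidean distance between
# measured indicators and a healthy reference state, after scaling each
# indicator by its reference value.  Smaller HD = closer to healthy.
def health_distance(indicators, reference, weights):
    return math.sqrt(sum(w * (x / r - 1.0) ** 2
                         for x, r, w in zip(indicators, reference, weights)))

# toy indicators: stand biomass (t/ha), species richness, soil organic matter (%)
reference = [250.0, 40.0, 6.0]   # healthy reference state
weights   = [0.5, 0.3, 0.2]      # relative importance, summing to 1
degraded  = [150.0, 22.0, 3.5]
near_ref  = [240.0, 38.0, 5.8]

print(health_distance(near_ref, reference, weights)
      < health_distance(degraded, reference, weights))  # nearer state scores lower
```

Any distance-to-reference measure of this shape gives HD = 0 for the reference state itself and grows as indicators depart from it, which is the property a health-distance index needs.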

The present paper proposes a source-receptor model to schematically describe inhalation exposure to help understand the complex processes leading to inhalation of hazardous substances. The model considers a stepwise transfer of a contaminant from the source to the receptor. The conceptual model is c

The objective of this Deliverable D3.2 is to describe the models developed in BASE, that is, the experimental setup for the sectoral modelling. The model development described in this deliverable will then be implemented in the adaptation and economic analysis in WP6 in order to integrate adaptati...

Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the
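AIC and BIC themselves are simple functions of the maximized log-likelihood, the number of free parameters k, and (for BIC) the sample size n; a minimal illustration with hypothetical fit results:

```python
import math

# AIC = 2k - 2 ln L ; BIC = k ln n - 2 ln L.  Lower is better; BIC penalizes
# extra parameters more heavily than AIC once n exceeds e^2 (about 7.4).
def aic(log_lik, k):
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    return k * math.log(n) - 2 * log_lik

# two hypothetical models fitted to the same n = 100 observations
simple = {"log_lik": -250.0, "k": 3}
rich   = {"log_lik": -247.5, "k": 8}

print(aic(simple["log_lik"], simple["k"]), aic(rich["log_lik"], rich["k"]))
print(bic(simple["log_lik"], simple["k"], 100) < bic(rich["log_lik"], rich["k"], 100))
```

In this hypothetical comparison the richer model's small log-likelihood gain does not pay for its five extra parameters, so both criteria favor the simpler model; with a larger likelihood gain the ranking could flip, and AIC would flip before BIC.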
