A ubiquitous issue in research is that of selecting a representative sample from the study population. While random sampling strategies are the gold standard, in practice random sampling of participants is neither always feasible nor necessarily optimal. In our case, a selection must be made of 12 hospitals (out of 89 Dutch hospitals in total). With this selection of 12 hospitals, it should be possible to estimate blood use in the remaining hospitals as well. In this paper, we evaluate both random and purposive strategies for the case of estimating blood use in Dutch hospitals. Available population-wide data on hospital blood use and number of hospital beds are used to simulate five sampling strategies: (1) select only the largest hospitals, (2) select the largest and the smallest hospitals ('maximum variation'), (3) select hospitals randomly, (4) select hospitals from as many different geographic regions as possible, (5) select hospitals from only two regions. Simulations of each strategy result in different selections of hospitals, each of which is used to estimate blood use in the remaining hospitals. The estimates are compared to the actual population values; the resulting prediction errors indicate the quality of the sampling strategy. The strategy leading to the lowest prediction error in the case study was maximum variation sampling, followed by random, regional variation and two-region sampling, with sampling the largest hospitals resulting in the worst performance. Maximum variation sampling led to a hospital-level prediction error of 15%, whereas random sampling led to a prediction error of 19% (95% CI 17%-26%). While lowering the sample size reduced the differences between the maximum variation and random strategies, increasing the sample size to n = 18 did not change the ranking of the strategies and led to only slightly better predictions. The optimal strategy for estimating blood use was maximum variation sampling. When proxy data
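
The simulation design described above can be sketched on synthetic data. This is a minimal illustration, not the study's actual code: the hospital sizes, per-bed blood-use rates, the ratio estimator, and the error metric are all assumptions chosen to make the comparison runnable.

```python
import random

def ratio_estimate(sample_beds, sample_use, other_beds):
    """Estimate blood use in non-sampled hospitals with a ratio estimator:
    blood use per bed in the sample, scaled by each other hospital's beds."""
    rate = sum(sample_use) / sum(sample_beds)
    return [rate * b for b in other_beds]

def prediction_error(beds, use, sample_idx):
    """Mean absolute relative error over the non-sampled hospitals."""
    rest = [i for i in range(len(beds)) if i not in sample_idx]
    preds = ratio_estimate([beds[i] for i in sample_idx],
                           [use[i] for i in sample_idx],
                           [beds[i] for i in rest])
    return sum(abs(p - use[i]) / use[i] for p, i in zip(preds, rest)) / len(rest)

# Synthetic population: 89 hospitals, blood use roughly proportional to beds.
random.seed(1)
beds = [random.randint(100, 1200) for _ in range(89)]
use = [b * random.uniform(8, 12) for b in beds]

order = sorted(range(89), key=lambda i: beds[i])
max_variation = set(order[:6] + order[-6:])    # 6 smallest + 6 largest
largest_only = set(order[-12:])                # strategy (1)
rand_sample = set(random.sample(range(89), 12))  # strategy (3)

for name, s in [("max variation", max_variation),
                ("largest only", largest_only),
                ("random", rand_sample)]:
    print(name, round(prediction_error(beds, use, s), 3))
```

Repeating the random strategy many times would give the distribution behind a confidence interval like the one reported in the abstract.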

Video contents are inherently heterogeneous. To exploit different feature modalities in a diverse video collection for video summarization, we propose to formulate the task as a multiview representative-selection problem. The goal is to select visual elements that are representative of a video consistently across different views (i.e., feature modalities). We present in this paper the multiview sparse dictionary selection with centroid co-regularization method, which optimizes the representative selection in each view and enforces the view-specific selections to be similar by regularizing them towards a consensus selection. We also introduce a diversity regularizer to favor a selection of diverse representatives. The problem can be efficiently solved by alternating minimization with the fast iterative shrinkage-thresholding algorithm. Experiments on synthetic data and benchmark video datasets validate the effectiveness of the proposed approach for video summarization, in comparison with other video summarization methods and representative-selection methods such as K-medoids, sparse dictionary selection, and multiview clustering.

When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
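
The scheme described above is straightforward to sketch. This is a minimal two-arm illustration with assumed block sizes of 4, 6, and 8; real trial software would also handle stratification and audit trails.

```python
import random

def blocked_randomization(n_participants, block_sizes=(4, 6, 8), seed=None):
    """Allocate participants to arms A/B in randomly sized balanced blocks.

    Each block holds equal numbers of A and B in shuffled order, so the
    running allocation never drifts far from 1:1, while the random block
    size keeps the sequence unpredictable even to an unblinded observer.
    Block sizes must be even. Truncating the final block can leave a
    small residual imbalance (at most half the largest block size).
    """
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        size = rng.choice(block_sizes)
        block = ["A"] * (size // 2) + ["B"] * (size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

seq = blocked_randomization(24, seed=42)
print(seq)
print(seq.count("A"), seq.count("B"))
```

With a fixed block size of 4, an unblinded investigator who has seen three assignments in a block can predict the fourth; drawing the block size at random removes that certainty.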

This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses the extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in the population, instead of one point as is usually done in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
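
The mutation operator described above can be sketched in isolation. This is an assumed reconstruction from the abstract alone (the step size, degeneracy threshold, and sampling points are my choices, not the paper's): fit a parabola to the cost at three points along a random line and jump to its extremum.

```python
import random

def quadratic_vertex(t0, f0, t1, f1, t2, f2):
    """Vertex of the parabola interpolating (t0,f0), (t1,f1), (t2,f2).
    Returns None when the points are (nearly) collinear, i.e. the
    quadratic is almost degenerate."""
    d1 = (f1 - f0) / (t1 - t0)          # first divided difference
    d2 = (f2 - f1) / (t2 - t1)
    a = (d2 - d1) / (t2 - t0)           # curvature coefficient
    if abs(a) < 1e-12:
        return None
    b = d1 - a * (t0 + t1)
    return -b / (2 * a)                 # extremum of a t^2 + b t + c

def mutate(point, cost, rng, step=1.0):
    """Mutate `point` by moving to the extremum of a quadratic fitted to
    the cost function along a randomly oriented line through the point."""
    direction = [rng.gauss(0, 1) for _ in point]
    line = lambda t: [x + t * d for x, d in zip(point, direction)]
    ts = (-step, 0.0, step)
    fs = [cost(line(t)) for t in ts]
    t_star = quadratic_vertex(ts[0], fs[0], ts[1], fs[1], ts[2], fs[2])
    return line(t_star) if t_star is not None else point

rng = random.Random(0)
sphere = lambda x: sum(v * v for v in x)   # convex test cost
p = mutate([2.0, -3.0], sphere, rng)
print(p)
```

On an exactly quadratic cost such as the sphere function, the fit is exact, so one mutation lands on the line's minimizer and never increases the cost.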

The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for updating the target template. In general, the holistic bounding boxes that contain tracked results are selected as the positive samples. However, when the objects are occluded, this simple strategy easily introduces noise into the training data set and the target template, and then causes the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. The experimental results on the Object Tracking Benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
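
The non-negative least squares step can be illustrated in isolation. The sketch below is an assumption-laden toy, not the tracker itself: template patches are stacked as dictionary columns, a candidate patch is expressed as a non-negative combination of them, and the solver is a simple projected-gradient NNLS rather than an optimized routine.

```python
import numpy as np

def nnls_pg(D, y, iters=500, tol=1e-10):
    """Non-negative least squares by projected gradient descent:
    minimize ||D w - y||^2 subject to w >= 0."""
    w = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = D.T @ (D @ w - y)
        w_new = np.maximum(w - grad / L, 0.0)   # gradient step, then project
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w

# Toy dictionary of template patches (columns) and a candidate patch built
# from two of them; the recovered weights flag the representative samples.
rng = np.random.default_rng(0)
D = rng.random((16, 5))
w_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
y = D @ w_true
w = nnls_pg(D, y)
print(np.round(w, 3))
```

The non-negativity constraint is what makes the weights interpretable: templates that do not contribute are driven exactly to zero rather than cancelling each other with signed coefficients.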

This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that is, it should minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.

Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
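
The standard sample-size formula for a proportion follows directly from the factors listed above. The sketch below uses the common normal-approximation formula n = z²p(1−p)/d² with an optional finite-population correction; the z value of 1.96 corresponds to 95% confidence.

```python
import math

def sample_size_proportion(p, margin, z=1.96, population=None):
    """Required sample size to estimate a proportion p within +/- margin
    at ~95% confidence (z = 1.96), with an optional finite-population
    correction when the study population itself is small."""
    n = z ** 2 * p * (1 - p) / margin ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)   # finite-population correction
    return math.ceil(n)

print(sample_size_proportion(0.5, 0.05))                  # worst case p = 0.5
print(sample_size_proportion(0.5, 0.05, population=2000)) # small population
```

Using p = 0.5 maximizes p(1−p) and thus gives the conservative (largest) sample size when the expected proportion is unknown.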

Background The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective Our objective was to explore the representativeness of a self-selected sample of online gamers using online players’ virtual characters (avatars). Methods All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars’ characteristics were defined using various games’ scores, reported on the WoW’s official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples in online surveys is warranted.

Background Randomized controlled trials (RCTs) are the gold standard for trials assessing the effects of therapeutic interventions; therefore it is important to understand how they are conducted. Our objectives were to provide an overview of a representative sample of pediatric RCTs published in 2007 and assess the validity of their results. Methods We searched the Cochrane Central Register of Controlled Trials using a pediatric filter and randomly selected 300 RCTs published in 2007. We extracted data on trial characteristics; outcomes; methodological quality; reporting; and registration and protocol characteristics. Trial registration and protocol availability were determined for each study based on the publication, an Internet search and an author survey. Results Most studies (83%) were efficacy trials, 40% evaluated drugs, and 30% were placebo-controlled. Primary outcomes were specified in 41%; 43% reported on adverse events. At least one statistically significant outcome was reported in 77% of trials; 63% favored the treatment group. Trial registration was declared in 12% of publications and 23% were found through an Internet search. Risk of bias (ROB) was high in 59% of trials, unclear in 33%, and low in 8%. Registered trials were more likely to have low ROB than non-registered trials (16% vs. 5%; p = 0.008). Effect sizes tended to be larger for trials at high vs. low ROB (0.28, 95% CI 0.21-0.35 vs. 0.16, 95% CI 0.07-0.25). Among survey respondents (50% response rate), the most common reason for trial registration was a publication requirement and, for non-registration, a lack of familiarity with the process. Conclusions More than half of this random sample of pediatric RCTs published in 2007 were at high ROB and three quarters of trials were not registered. There is an urgent need to improve the design, conduct, and reporting of child health research.

An exhaustive evaluation of state-of-the-art random number generators with several well-known suites of tests provides the basis for selection of suitable random number generators for use in stochastic simulations...

Micropollutants remain of concern in drinking water, and there is a broad interest in the ability of different treatment processes to remove these compounds. To gain a better understanding of treatment effectiveness for structurally diverse compounds and to be cost effective, it is necessary to select a small set of representative micropollutants for experimental studies. Unlike other approaches to date, in this research micropollutants were systematically selected based solely on their physico-chemical and structural properties that are important in individual water treatment processes. This was accomplished by linking underlying principles of treatment processes such as coagulation/flocculation, oxidation, activated carbon adsorption, and membrane filtration to compound characteristics and corresponding molecular descriptors. A systematic statistical approach not commonly used in water treatment was then applied to a compound pool of 182 micropollutants (identified from the literature) and their relevant calculated molecular descriptors. Principal component analysis (PCA) was used to summarize the information residing in this large dataset. D-optimal onion design was then applied to the PCA results to select structurally representative compounds that could be used in experimental treatment studies. To demonstrate the applicability and flexibility of this selection approach, two sets of 22 representative micropollutants are presented. Compounds in the first set are representative when studying a range of water treatment processes (coagulation/flocculation, oxidation, activated carbon adsorption, and membrane filtration), whereas the second set shows representative compounds for ozonation and advanced oxidation studies. Overall, selected micropollutants in both lists are structurally diverse, have wide-ranging physico-chemical properties and cover a large spectrum of applications. The systematic compound selection approach presented here can also be adjusted to fit
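
The PCA step, followed by a spread-maximizing pick in the reduced space, can be sketched as follows. Note the selection rule here is a greedy maximin heuristic standing in for the D-optimal onion design used in the study, and the descriptor matrix is random toy data; both are assumptions for illustration.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project a descriptor matrix X (compounds x descriptors) onto its
    leading principal components via SVD of the centered, scaled data."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_components].T

def maximin_select(scores, k):
    """Greedy maximin selection in PCA space: start from the compound
    farthest from the centroid, then repeatedly add the compound farthest
    from everything already chosen. A simple stand-in for the D-optimal
    onion design, favoring structural spread."""
    d0 = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
    chosen = [int(np.argmax(d0))]
    while len(chosen) < k:
        d = np.min(
            [np.linalg.norm(scores - scores[c], axis=1) for c in chosen],
            axis=0)
        d[chosen] = -1.0          # never re-pick a chosen compound
        chosen.append(int(np.argmax(d)))
    return chosen

rng = np.random.default_rng(7)
X = rng.normal(size=(182, 12))    # 182 compounds, 12 toy molecular descriptors
idx = maximin_select(pca_scores(X), 22)
print(sorted(idx))
```

An onion design additionally layers the selection across concentric shells of the score space, so interior compounds are represented too, not only the hull.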

The precondition for a comprehensive evaluation of embankment safety is the selection of a representative unit embankment. On the basis of dividing the levee into unit embankments, the influencing factors and the classification of the unit embankments are drafted. Based on rough set-fuzzy clustering, the influence factors of the unit embankments are measured by quantitative and qualitative indexes. A fuzzy similarity matrix of the standard embankments is constructed, and the fuzzy equivalence matrix is then computed from it by the square method. By setting a threshold on the fuzzy equivalence matrix, the unit embankments are clustered, and the representative unit embankment is selected from each resulting class.

Capturing the technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high-time-resolution (e.g., 1-minute), long-duration (e.g., 1-year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down-select representative time segments (e.g., days), but typically these are best suited for lower time resolutions or consider only a single data stream (e.g., PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping to select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.
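
The stratified day-selection and annual reassembly can be sketched as follows. This is a simplified assumption-based illustration: days are stratified by a single daily summary metric, and the bootstrap step (resampling the selected days to attach confidence intervals to the annual estimate) is omitted.

```python
import random
import statistics

def stratified_day_sample(daily_metric, n_strata=4, per_stratum=3, seed=0):
    """Stratify the days of a year by a daily summary metric (e.g. PV
    energy), sample days from each stratum, and scale the sampled-day
    means back up to an annual estimate."""
    rng = random.Random(seed)
    order = sorted(range(len(daily_metric)), key=lambda d: daily_metric[d])
    size = len(order) // n_strata
    strata = [order[i * size:(i + 1) * size] for i in range(n_strata - 1)]
    strata.append(order[(n_strata - 1) * size:])   # last stratum takes the remainder
    sampled, estimate = [], 0.0
    for stratum in strata:
        days = rng.sample(stratum, per_stratum)
        sampled.extend(days)
        # each sampled day stands in for len(stratum) / per_stratum days
        estimate += statistics.mean(daily_metric[d] for d in days) * len(stratum)
    return sampled, estimate

random.seed(1)
daily = [max(0.0, random.gauss(20, 6)) for _ in range(365)]  # toy daily PV energy
days, annual_est = stratified_day_sample(daily)
print(len(days), round(annual_est), round(sum(daily)))
```

Because each stratum groups days with similar behavior, only 12 simulated days already reproduce the annual total far more accurately than 12 days drawn at random from the whole year.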

Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random connectivity, driven by random projections from an input layer of stimulus selective neurons. In this architecture, the stimulus-to-stimulus and neuron-to-neuron modulation of total synaptic input is weak compared to the mean input. Surprisingly, we show that in the balanced state the network can still support high stimulus selectivity and sparse population response. In the balanced state, strong synapses amplify the variation in synaptic input and recurrent inhibition cancels the mean. Functional specificity in connectivity emerges due to the inhomogeneity caused by the generative statistical rule used to build the network. We further elucidate the mechanism behind and evaluate the effects of model parameters on population sparseness and stimulus selectivity. Network response to mixtures of stimuli is investigated. It is shown that a balanced state with unselective inhibition can be achieved with densely connected input to inhibitory population. Balanced networks exhibit the "paradoxical" effect: an increase in excitatory drive to inhibition leads to decreased inhibitory population firing rate. We compare and contrast selectivity and sparseness generated by the balanced network to randomly connected unbalanced networks. Finally, we discuss our results in light of experiments.

Spent fuel (SF) integrity evaluation is a regulatory requirement described in 10 CFR 71 (transportation) and 10 CFR 72 (storage) of the U.S. NRC licensing requirements. NRC regulation states that retrievability of SF after storage should be ensured, and SF integrity under normal conditions must be guaranteed during the transportation and handling processes entailed before/during/after interim storage. SF integrity evaluation under hypothetical accident conditions is a core technology element for assessments of criticality, shielding, and containment. In this paper, an SF integrity evaluation system suitable for the domestic situation is suggested, and the necessity of representative SF selection and its method are described. The ultimate goal of SF integrity evaluation is to evaluate the safety margin during transportation, handling, and storage of SF. This means that retrievability of SF after storage should be assured and SF integrity must be guaranteed under normal conditions in the transportation and handling processes accompanying interim storage. In Korea, an SF integrity evaluation system has not been established to date. In particular, the representative SF selection technology that is essential to SF integrity evaluation has not yet been developed. To address this situation effectively, the methodology and technology of overseas agencies need to be benchmarked. In this paper, an overseas SF integrity evaluation system is analyzed, and an evaluation system suitable for the domestic situation is suggested. The necessity of representative SF selection and its method are also described.

Employing ANOVA, factorial experimental analysis, and the theory of error, reliability studies were conducted on the assessment of the drug product chloroquine phosphate tablets. The G-Study employed equal numbers of the factors for uniform control, and involved three analysts (randomly selected final-year Pharmacy ...

Partial least squares regression (PLSR), a chemometric method, has been applied on NIR spectroscopy data for the determination of the nitrogen (N) concentration in these grass samples. The sample selection methods based on NIR spectral data proposed by Puchwein and the CADEX (computer aided design of experiments) ... and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best with the lowest RMSEP, followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set ...

Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
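
The proper orthogonal decomposition underlying the first basis-selection procedure can be sketched directly. This is a generic POD-via-SVD illustration on synthetic snapshots, with an assumed 99.9% energy-retention criterion, not the paper's specific implementation.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition: SVD of the mean-removed snapshot
    matrix (DOFs x time snapshots), keeping just enough left singular
    vectors to capture the requested fraction of the snapshot energy."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s

# Toy snapshot matrix: 200 DOFs, 50 snapshots dominated by 3 spatial modes.
rng = np.random.default_rng(0)
modes = rng.normal(size=(200, 3))
amps = rng.normal(size=(3, 50)) * np.array([[10.0], [5.0], [2.0]])
snapshots = modes @ amps + 0.01 * rng.normal(size=(200, 50))
basis, s = pod_basis(snapshots)
print(basis.shape)
```

A nonlinear random response analysis would then be projected onto these few columns instead of the full set of physical degrees of freedom; the smooth orthogonal decomposition variants additionally weight the decomposition by velocity information.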

Selective reporting is wasteful, leads to bias in the published record and harms the credibility of science. Studies on potential determinants of selective reporting currently lack a shared taxonomy and a causal framework. Our aim was to develop a taxonomy of determinants of selective reporting in science, through inductive qualitative content analysis of a random selection of the pertinent literature, including empirical research and theoretical reflections. Using search terms for bias and selection combined with terms for reporting and publication, we systematically searched the PubMed, Embase, PsycINFO and Web of Science databases up to January 8, 2015. Of the 918 articles identified, we screened a 25 percent random selection. From eligible articles, we extracted phrases that mentioned putative or possible determinants of selective reporting, which we used to create meaningful categories. We stopped when no new categories emerged in the most recently analyzed articles (saturation). Saturation was reached after analyzing 64 articles. We identified 497 putative determinants, of which 145 (29%) were supported by empirical findings. The determinants represented 12 categories (leaving 3% unspecified): focus on preferred findings (36%), poor or overly flexible research design (22%), high-risk area and its development (8%), dependence upon sponsors (8%), prejudice (7%), lack of resources including time (3%), doubts about reporting being worth the effort (3%), limitations in reporting and editorial practices (3%), academic publication system hurdles (3%), unfavorable geographical and regulatory environment (2%), relationship and collaboration issues (2%), and potential harm (0.4%). We designed a taxonomy of putative determinants of selective reporting consisting of 12 categories. The taxonomy may help develop theory about causes of selection bias and guide policies to prevent selective reporting.

STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification.
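
The core operation, drawing a random sample of items from a stratum for each measurement method, amounts to simple random sampling without replacement. The sketch below is a generic illustration; the item naming, sample sizes, and the "NDA"/"DA" method labels are hypothetical, not taken from STR-224.

```python
import random

def select_items(stratum_ids, sample_size, seed=None):
    """Draw a simple random sample without replacement from a stratum's
    item identifiers, e.g. items to verify with one measurement method."""
    rng = random.Random(seed)
    return sorted(rng.sample(stratum_ids, sample_size))

# Hypothetical stratum of 200 items: verify 20 by one method (say NDA),
# then 5 of the remaining items by a second, more precise method (say DA).
items = [f"ITEM-{i:04d}" for i in range(200)]
nda_sample = select_items(items, 20, seed=11)
remaining = [i for i in items if i not in nda_sample]
da_sample = select_items(remaining, 5, seed=12)
print(len(nda_sample), len(da_sample))
```

Recording the seed alongside the selection makes the draw reproducible for inspectors, which is the practical reason verification procedures specify the random-number generation step explicitly.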

Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a

This paper reports the findings of a systematic study using Monte Carlo experiments and a real dataset aimed at comparing the performance of various ways of specifying random taste heterogeneity in a discrete choice model. Specifically, the analysis compares the performance of two recent advanced...... distributions. Both approaches allow the researcher to increase the number of parameters as desired. The paper provides a range of evidence on the ability of the various approaches to recover various distributions from data. The two advanced approaches are comparable in terms of the likelihoods achieved...

This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$ dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution for the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and sustain the efficiency of the proposed blind methods in reaching the performances of channel-aware algorithms.
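
A greedy measurement-selection loop of the kind mentioned above can be sketched generically. The objective here is an assumed D-optimality surrogate (maximize log det of the information matrix), standing in for whichever exact or asymptotic error measure the paper's greedy algorithm targets.

```python
import numpy as np

def greedy_select(A, k):
    """Greedily pick k rows (measurements) of A that maximize
    log det of the accumulated information matrix sum_i a_i a_i^T,
    a standard D-optimality surrogate for estimation error."""
    n, m = A.shape
    chosen = []
    M = 1e-9 * np.eye(m)               # tiny ridge keeps the det positive
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            cand = M + np.outer(A[i], A[i])
            gain = np.linalg.slogdet(cand)[1]   # log|det|, numerically stable
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        M += np.outer(A[best], A[best])
    return chosen

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 4))           # 40 candidate sensors, 4 parameters
sel = greedy_select(A, 8)
print(sel)
```

This brute-force greedy step costs O(n) determinant evaluations per selection; rank-one determinant update formulas (or the closed-form asymptotic error measures the paper derives) are what make the approach cheap at scale.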

In this paper we suggest a method for selecting primitive polynomials of a special type. Such polynomials can be used efficiently as characteristic polynomials of linear feedback shift registers in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree, and applying primitivity tests to obtain the primitive ones. Finally, two primitive polynomials found by the proposed method were used in the pseudo-random number generator based on fuzzy logic (FRNG) previously suggested by the authors. The sequences generated by the new version of FRNG have low correlation magnitude, high linear complexity and lower power consumption, are better balanced, and have better statistical properties.
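
The role of a primitive characteristic polynomial can be illustrated with a short sketch (ours, not the authors' FRNG code): an LFSR whose characteristic polynomial is primitive attains the maximal period 2^n - 1 from any nonzero seed.

```python
def lfsr_period(coeffs, seed):
    """Period of the GF(2) sequence s[t+n] = c[n-1]*s[t+n-1] ^ ... ^ c[0]*s[t]
    produced by an LFSR with characteristic polynomial
    x^n + c[n-1]*x^(n-1) + ... + c[1]*x + c[0]."""
    state = list(seed)
    start = tuple(state)
    period = 0
    while True:
        new = 0
        for c, s in zip(coeffs, state):
            new ^= c & s
        state = state[1:] + [new]
        period += 1
        if tuple(state) == start:
            return period

# x^4 + x + 1 is primitive over GF(2), so any nonzero seed gives the
# maximal period 2^4 - 1 = 15; coeffs are (c0, c1, c2, c3) = (1, 1, 0, 0)
print(lfsr_period(coeffs=[1, 1, 0, 0], seed=[1, 0, 0, 0]))   # 15
```

An irreducible but non-primitive polynomial of the same degree would yield a shorter period, which is why the second (primitivity-testing) step of the method matters.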

The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
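
A minimal sketch of the RGS idea (our own simplification: one-sided cuts on a single variable, scored by the common figure of merit s/sqrt(s+b); the actual RGS code supports more variables and cut types):

```python
import random

def random_grid_search(signal, background, n_points=200, seed=1):
    """RGS-style cut optimization: candidate one-sided cuts are taken from
    randomly chosen signal events (so the grid adapts to where the signal
    lives), and each cut is scored by the figure of merit s / sqrt(s + b)."""
    rng = random.Random(seed)
    best_cut, best_fom = None, -1.0
    for _ in range(n_points):
        cut = rng.choice(signal)                 # cut point drawn from signal
        s = sum(x > cut for x in signal)
        b = sum(x > cut for x in background)
        fom = s / (s + b) ** 0.5 if s + b > 0 else 0.0
        if fom > best_fom:
            best_cut, best_fom = cut, fom
    return best_cut, best_fom

rng = random.Random(0)
signal = [rng.gauss(2.0, 1.0) for _ in range(1000)]      # toy signal variable
background = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # toy background
cut, fom = random_grid_search(signal, background)
print(round(cut, 2), round(fom, 1))
```

Sampling cut points from the signal events themselves, rather than a fixed uniform grid, is the feature that makes the search "random grid" rather than brute-force.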

Although it has been suggested that selective decontamination of the digestive tract (SDD) decreases postoperative aerobic Gram-negative and fungal infections in orthotopic liver transplantation (OLT), no controlled trials exist in pediatric patients. This prospective, randomized controlled study of 36 pediatric OLT patients examines the effect of short-term SDD on postoperative infection and digestive tract flora. Patients were randomized into two groups. The control group received perioperative parenteral antibiotics only. The SDD group received in addition polymyxin E, tobramycin, and amphotericin B enterally and by oropharyngeal swab postoperatively until oral intake was tolerated (6 +/- 4 days). Indications for operation, preoperative status, age, and intensive care unit and hospital length of stay were no different in SDD (n = 18) and control (n = 18) groups. A total of 14 Gram-negative infections (intraabdominal abscess 7, septicemia 5, pneumonia 1, urinary tract 1) developed in the 36 patients studied. Mortality was not significantly different in the two groups. However, there were significantly fewer patients with Gram-negative infections in the SDD group: 3/18 patients (11%) vs. 11/18 patients (50%) in the control group, P < 0.001. There was also significant reduction in aerobic Gram-negative flora in the stool and pharynx in patients receiving SDD. Gram-positive and anaerobic organisms were unaffected. We conclude that short-term postoperative SDD significantly reduces Gram-negative infections in pediatric OLT patients.

Selective mutism (SM) is a rare disorder in children, classified by DSM-5 as an anxiety disorder. Despite the disabling nature of the disease, there is still no specific treatment. The aims of this study were to verify the efficacy of a six-month standard psychomotor treatment and the positive changes in lifestyle in a population of children affected by SM. Randomized controlled trial registered in the European Clinical Trials Registry (EudraCT 2015-001161-36). University third-level Centre (Child and Adolescent Neuropsychiatry Clinic). The study population was composed of 67 children in group A (psychomotor treatment) (35 M, mean age 7.84±1.15) and 71 children in group B (behavioral and educational counseling) (37 M, mean age 7.75±1.36). Psychomotor treatment was administered by trained child therapists in residential settings three times per week. Each child was treated for the whole period by the same therapist, and all the therapists shared the same protocol. The standard psychomotor session lasted 45 minutes. At T0 and after 6 months of treatment (T1), patients underwent a behavioral and SM severity assessment. To verify the effects of the psychomotor management, the Child Behavior Checklist questionnaire (CBCL) and Selective Mutism Questionnaire (SMQ) were administered to the parents. After 6 months of psychomotor treatment, SM children showed a significant reduction in CBCL scores, including social relations, anxious/depressed, social problems and total problems, suggesting a benefit in selective mutism, even if further studies are needed. The present study identifies psychomotricity as a safe and effective therapy for pediatric selective mutism.

The Random Forest method is a multivariate algorithm that can be used for both classification and regression. The Random Forest implementation in the RapidMiner learning environment has been used for training and validation on data and Monte Carlo simulations of the IceCube neutrino telescope. The latest results are presented.

This article addresses visual language in architecture and spatial disciplines, using it as a means of communicating and conveying information, knowledge and ideas about space that are permeated by their interdisciplinary character. We focus in particular on the transmission of messages between professionals and the general public, arguing that this process aids the long-term formation of a responsible and critical public, which is then able to take an active part in sustainable planning and design practices. The article highlights some findings of an empirical study of 245 people that tested the effectiveness of selected presentation techniques in communicating spatial messages to the general public and placing them in the framework of existing knowledge.

The screening process for DG interconnection procedures needs to be improved in order to increase the PV deployment level on the distribution grid. A significant improvement in the current screening process could be achieved by finding a method to classify the feeders in a utility service territory and determine the sensitivity of particular groups of distribution feeders to the impacts of high PV deployment levels. This report describes the utility distribution feeder characteristics in California for a large dataset of 8,163 feeders and summarizes the California feeder population, including the range of characteristics identified as most important to hosting capacity. The report describes the set of feeders identified for modeling and analysis, as well as the feeders identified for the control group. It presents a method for separating a utility's distribution feeders into unique clusters using the k-means clustering algorithm, an approach for choosing the feeder variables to be utilized in the clustering process, and a method for determining the optimal number of representative clusters.
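
The clustering step can be sketched as follows (a toy illustration with made-up feeder features, not the report's dataset or code): cluster the feeders, then take the feeder nearest each cluster center as that cluster's representative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm). The final centers also yield one
    representative feeder per cluster: the feeder nearest each center."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# toy "feeders": columns could stand for normalized length, peak load, PV capacity
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(30, 3)) for c in (0.0, 1.0, 2.0)])
labels, centers = kmeans(X, k=3)
representatives = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in centers]
print(sorted(int(l) for l in set(labels)), len(representatives))
```

Choosing the number of clusters (and which feeder variables to feed in) is exactly the model-selection question the report's method addresses.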

We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
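
The Birth-death update described above can be sketched as a Monte-Carlo estimate of the fixation probability (our own illustration, not the paper's code; on the complete graph the estimate should match the classical Moran prediction):

```python
import random

def fixation_probability(adj, r=1.1, trials=2000, seed=0):
    """Monte-Carlo fixation probability of a single mutant of fitness r under
    Birth-death updating: an individual reproduces with probability
    proportional to fitness and its offspring replaces a random neighbor."""
    rng = random.Random(seed)
    n = len(adj)
    fixed = 0
    for _ in range(trials):
        mutants = {rng.randrange(n)}
        while 0 < len(mutants) < n:
            weights = [r if v in mutants else 1.0 for v in range(n)]
            v = rng.choices(range(n), weights=weights)[0]   # who reproduces
            u = rng.choice(adj[v])                          # who is replaced
            if v in mutants:
                mutants.add(u)
            else:
                mutants.discard(u)
        fixed += len(mutants) == n
    return fixed / trials

# sanity check on the complete graph, where the classical Moran result holds
n, r = 8, 1.1
adj = [[u for u in range(n) if u != v] for v in range(n)]
p_sim = fixation_probability(adj, r)
p_moran = (1 - 1 / r) / (1 - 1 / r ** n)
print(round(p_sim, 2), round(p_moran, 2))
```

Running the same estimator on a random graph versus the complete graph is the basic experiment behind the amplifier/suppressor classification.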

Most clinical trials on medical interventions are sponsored by the industry. The choice of comparators shapes the accumulated evidence. We aimed to assess how often major companies sponsor trials that involve only their own products. Studies were identified by searching ClinicalTrials.gov for trials registered in 2006. We focused on randomized trials involving the 15 companies that had sponsored the largest number of registered trials in ClinicalTrials.gov in that period. Overall, 577 randomized trials were eligible for analysis and 82% had a single industry sponsor [89% (166/187) of the placebo-control trials, 87% (91/105) of trials comparing different doses or ways of administration of the same intervention, and 78% (221/285) of other active control trials]. The compared intervention(s) belonged to a single company in 67% of the trials (89%, 81% and 47% in the three categories respectively). All 15 companies strongly preferred to run trials where they were the only industry sponsor or even the only owner of the assessed interventions. Co-sponsorship typically reflected co-ownership of the same intervention by both companies. Head-to-head comparison of different active interventions developed by different companies occurred in only 18 trials with two or more industry sponsors. Each company generates a clinical research agenda that is strongly focused on its own products, while comparisons involving different interventions from different companies are uncommon. This diminishes the ability to understand the relative merits of different interventions for the same condition.

Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

In order to decrease the simulation time of morphodynamic models, often complex wave climates are reduced to a few representative wave conditions (RWCs). When applied to embayed beaches, a test of whether a reduced wave climate is representative is to see whether it can recreate the observed equilibrium (long-term averaged) bathymetry of the bay. In this study, the wave climate experienced at Milagro Beach, Tarragona, Spain was discretized into 'average' and 'extreme' RWCs. Process-based morphodynamic simulations were sequenced and merged based on 'persistent' and 'transient' forcing conditions, the results of which were used to estimate the equilibrium bathymetry of the bay. Results show that extreme wave events appeared to have less influence on the equilibrium of the bay than average conditions of longer overall duration. Additionally, the persistent seasonal variation of the wave climate produces pronounced beach rotation and tends to accumulate sediment at the extremities of the beach, rather than in the central sections. It is therefore important to account for directional variability and persistence in the selection and sequencing of representative wave conditions, as it is essential for accurately capturing the effects of beach rotation events.

... 40 CFR Protection of Environment (2010-07-01), § 761.308 (referenced at § 761.79(b)(3)): Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...

Background: Randomization is considered to be a key feature to protect against bias in randomized clinical trials. Randomization induces comparability with respect to known and unknown covariates, mitigates selection bias, and provides a basis for inference. Although various randomization procedures have been proposed, no single procedure performs uniformly best. In the design phase of a clinical trial, the scientist has to decide which randomization procedure to use, taking into account the practical setting of the trial with respect to the potential of bias. Less emphasis has been placed on this important design decision than on analysis, and less support has been available to guide the scientist in making this decision. Methods: We propose a framework that weights the properties of the randomization procedure with respect to practical needs of the research question to be answered by the clinical trial. In particular, the framework assesses the impact of chronological and selection bias on the probability of a type I error. The framework is applied to a case study with a 2-arm parallel group, single center randomized clinical trial with continuous endpoint, no interim analysis, 1:1 allocation and no adaptation in the randomization process. Results: We derive scientific arguments for the selection of an appropriate randomization procedure and develop a template which is illustrated in parallel by a case study. Possible extensions are discussed. Conclusion: The proposed ERDO framework guides the investigator through a template for the choice of a randomization procedure, and provides easy-to-use tools for the assessment. The barriers to thorough reporting and assessment of randomization procedures could be further reduced in the future if regulators and pharmaceutical companies employed similar, standardized frameworks for the choice of a randomization procedure.
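
The kind of selection-bias potential such a framework assesses can be illustrated with a toy computation (our own sketch, not the ERDO tools): sequences from permuted blocks are more predictable to a "convergence strategy" guesser than completely randomized ones.

```python
import random

def guess_accuracy(sequence):
    """Fraction of allocations correctly predicted by the 'convergence
    strategy' (guess the arm assigned less often so far; flip a coin on
    ties), a standard proxy for selection-bias potential."""
    rng = random.Random(0)
    correct = a = b = 0
    for arm in sequence:
        guess = 'A' if a < b else 'B' if b < a else rng.choice('AB')
        correct += guess == arm
        a += arm == 'A'
        b += arm == 'B'
    return correct / len(sequence)

def permuted_blocks(n, block=4, seed=1):
    """1:1 allocation in randomly permuted blocks."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n // block):
        blk = ['A'] * (block // 2) + ['B'] * (block // 2)
        rng.shuffle(blk)
        seq += blk
    return seq

def complete_randomization(n, seed=1):
    """Fair-coin allocation for each participant."""
    rng = random.Random(seed)
    return [rng.choice('AB') for _ in range(n)]

trials = 500
pb = sum(guess_accuracy(permuted_blocks(40, seed=s)) for s in range(trials)) / trials
cr = sum(guess_accuracy(complete_randomization(40, seed=s)) for s in range(trials)) / trials
print(round(pb, 2), round(cr, 2))   # blocked sequences are more predictable
```

Trading off this predictability against the chronological-bias protection that restricted randomization provides is precisely the design decision the framework supports.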

The development of lidar systems, especially those incorporating high-resolution camera components, has shown great potential for urban classification. However, automatically selecting the best features for land-use classification is challenging. Random Forests, a relatively recently developed machine learning algorithm, is receiving considerable attention in the field of image classification and pattern recognition, in particular because it provides a measure of variable importance. In this study the performance of Random Forests-based feature selection for urban areas was therefore explored. First, we extract features from lidar data, including height-based and intensity-based GLCM measures; other spectral features can be obtained from imagery, such as the Red, Green and Blue bands and GLCM-based measures. Finally, Random Forests is used to automatically select the optimal and uncorrelated features for land-use classification. 0.5-meter resolution lidar data and aerial imagery are used to assess the feature selection performance of Random Forests in the study area, located in Mannheim, Germany. The results clearly demonstrate that using the features selected by Random Forests can improve the classification performance.
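
The importance-based selection step can be sketched with a simplified, numpy-only stand-in for Random Forests (depth-1 trees with impurity-decrease importance; the function names and synthetic data are ours, not the study's):

```python
import numpy as np

def gini(y):
    """Gini impurity of a 0/1 label vector."""
    p = np.bincount(y, minlength=2) / len(y)
    return 1.0 - float(np.sum(p ** 2))

def stump_forest_importance(X, y, n_trees=200, seed=0):
    """Variable importance from a forest of depth-1 trees ('stumps'): each
    stump sees a bootstrap sample and a random feature subset, and a
    feature's importance is the impurity decrease of its best median split,
    accumulated over the forest. A simplified stand-in for Random Forests'
    mean-decrease-in-impurity importance."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    imp = np.zeros(d)
    mtry = max(1, int(np.sqrt(d)))
    for _ in range(n_trees):
        idx = rng.integers(0, n, n)                     # bootstrap sample
        feats = rng.choice(d, size=mtry, replace=False)
        Xb, yb = X[idx], y[idx]
        parent = gini(yb)
        best_f, best_gain = None, 0.0
        for f in feats:
            mask = Xb[:, f] <= np.median(Xb[:, f])
            left, right = yb[mask], yb[~mask]
            if len(left) == 0 or len(right) == 0:
                continue
            child = (len(left) * gini(left) + len(right) * gini(right)) / n
            if parent - child > best_gain:
                best_f, best_gain = f, parent - child
        if best_f is not None:
            imp[best_f] += best_gain
    return imp / imp.sum()

# two informative columns (stand-ins for, e.g., height and intensity) + noise
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
imp = stump_forest_importance(X, y)
top2 = sorted(np.argsort(imp)[-2:].tolist())
print(top2)   # indices of the two most important features
```

Ranking features by such an importance score and keeping the top, mutually uncorrelated ones is the selection scheme the abstract describes.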

Anonymous survey methods appear to promote greater disclosure of sensitive or stigmatizing information compared to non-anonymous methods. Higher disclosure rates have traditionally been interpreted as being more accurate than lower rates. We examined the impact of 3 increasingly private mailed survey conditions, ranging from potentially identifiable to completely anonymous, on survey response and on respondents' representativeness of the underlying sampling frame, completeness in answering sensitive survey items, and disclosure of sensitive information. We also examined the impact of 2 incentives ($10 versus $20) on these outcomes. A 3 x 2 factorial, randomized controlled trial of 324 representatively selected, male Gulf War I era veterans who had applied for United States Department of Veterans Affairs (VA) disability benefits. Men were asked about past sexual assault experiences, childhood abuse, combat, other traumas, mental health symptoms, and sexual orientation. We used a novel technique, the pre-merged questionnaire, to link anonymous responses to administrative data. Response rates ranged from 56.0% to 63.3% across privacy conditions (p = 0.49) and from 52.8% to 68.1% across incentives (p = 0.007). Respondents' characteristics differed by privacy and by incentive assignments, with completely anonymous respondents and $20 respondents appearing least different from their non-respondent counterparts. Survey completeness did not differ by privacy or by incentive. No clear pattern of disclosing sensitive information by privacy condition or by incentive emerged. For example, although all respondents came from the same sampling frame, estimates of sexual abuse ranged from 13.6% to 33.3% across privacy conditions, with the highest estimate coming from the intermediate privacy condition (p = 0.007). Greater privacy and larger incentives do not necessarily result in higher disclosure rates of sensitive information than lesser privacy and lower incentives. Furthermore

Cluster algorithms play an important role in diversity-related tasks of modern chemoinformatics, with the widest applications being in pharmaceutical industry drug discovery programs. The performance of these grouping strategies depends on various factors such as molecular representation, mathematical method, algorithmic technique, and the statistical distribution of data. For this reason, introduction and comparison of new methods are necessary in order to find the model that best fits the problem at hand. Earlier comparative studies report Ward's algorithm using fingerprints for molecular description as generally superior in this field. However, problems still remain: other types of numerical descriptions have been little exploited, the current descriptor selection strategy is trial-and-error-driven, and no previous comparative studies considering a broader domain of the combinatorial methods for grouping chemoinformatic data sets have been conducted. In this work, a comparison between combinatorial methods is performed, with five of them being novel in chemoinformatics. The experiments are carried out using eight data sets that are well established and validated in the medicinal chemistry literature. Each drug data set was represented by real molecular descriptors selected by machine learning techniques, which are consistent with the neighborhood principle. Statistical analysis of the results demonstrates that the pharmacological activities of the eight data sets can be modeled with a few families of 2D and 3D molecular descriptors, avoiding classification problems associated with the presence of nonrelevant features. Three out of five of the proposed cluster algorithms show superior performance over most classical algorithms and are similar (or slightly superior in the most optimistic sense) to Ward's algorithm. The usefulness of these algorithms is also assessed in a comparative experiment against potent QSAR and machine learning classifiers, where they perform

Wavelength selection is a critical step for producing better prediction performance when applied to spectral data. Considering the fact that vibrational and rotational spectra have continuous spectral bands, we propose a novel method of wavelength interval selection based on random frog, called interval random frog (iRF). To obtain all the possible continuous intervals, the spectra are first divided into intervals by moving a window of fixed width over the whole spectrum. These overlapping intervals are ranked by applying random frog coupled with PLS, and the optimal ones are chosen. The method has been applied to two near-infrared spectral datasets, displaying higher efficiency in wavelength interval selection than other methods. The source code of iRF can be freely downloaded for academic research at: http://code.google.com/p/multivariate-calibration/downloads/list.
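
The interval-generation and ranking steps can be sketched as follows (our own simplification: the random-frog/PLS ranking is replaced here by a plain correlation score, purely to illustrate the moving-window structure):

```python
import numpy as np

def moving_window_intervals(n_wavelengths, width):
    """All contiguous intervals from sliding a fixed-width window across
    the spectrum, as in the first step of iRF."""
    return [(s, s + width) for s in range(n_wavelengths - width + 1)]

def rank_intervals(X, y, width):
    """Rank intervals by the mean |correlation| of their wavelengths with y,
    a lightweight stand-in for the random-frog/PLS ranking used by iRF."""
    scores = []
    for s, e in moving_window_intervals(X.shape[1], width):
        r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(s, e)]
        scores.append(((s, e), float(np.mean(r))))
    return sorted(scores, key=lambda t: -t[1])

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 30))                              # 80 spectra, 30 "wavelengths"
y = X[:, 10:15].sum(axis=1) + 0.1 * rng.normal(size=80)    # informative band 10..14
best_interval, best_score = rank_intervals(X, y, width=5)[0]
print(best_interval)   # should overlap the informative band
```

Because whole intervals rather than single wavelengths are scored, the selection respects the continuous-band structure of vibrational spectra.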

Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold-standard design, but by design it forces participants to comply with a randomly assigned intervention regardless of their preference. Therefore, the randomized clinical trial may impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences if they are expressed, and to maintain randomization, we propose an alternative design that allows participants' preference after randomization, which we call a "preference option randomized design" (PORD). In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed consent process before randomization. Specifically, the consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered to not have an alternate preference. In sum, the preference option randomized design intends to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme makes it possible to define five effects that are interconnected through common design parameters: the comparative, preference, selection, intent-to-treat, and overall/as-treated effects, which collectively guide decision making between interventions. Statistical power functions for testing

Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed

The purpose of this study was to assess the effects of schema-broadening instruction (SBI) on second graders' word-problem-solving skills and their ability to represent the structure of word problems using algebraic equations. Teachers (n = 18) were randomly assigned to conventional word-problem instruction or SBI word-problem instruction, which taught students to represent the structural, defining features of word problems with overarching equations. Intervention lasted 16 weeks. We pretested and posttested 270 students on measures of word-problem skill; analyses that accounted for the nested structure of the data indicated superior word-problem learning for SBI students. Descriptive analyses of students' word-problem work indicated that SBI helped students represent the structure of word problems with algebraic equations, suggesting that SBI promoted this aspect of students' emerging algebraic reasoning.

In this paper, we evaluate the performance of various user selection protocols under the impact of hardware impairments. In the considered protocols, a Base Station (BS) selects one of the available Users (USs) to serve, while the remaining USs harvest energy from the Radio Frequency (RF) signal transmitted by the BS. We assume that all of the USs appear randomly around the BS. In the Random Selection Protocol (RAN), the BS randomly selects a US to transmit the data. In the second proposed protocol, named the Minimum Distance Protocol (MIND), the US that is nearest to the BS is chosen. In the Optimal Selection Protocol (OPT), the US providing the highest channel gain between itself and the BS is served. For performance evaluation, we derive exact and asymptotic closed-form expressions of the average Outage Probability (OP) over Rayleigh fading channels. We also consider the average harvested energy per US. Finally, Monte-Carlo simulations are performed to verify the theoretical results.
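
The RAN and OPT protocols can be checked against their well-known Rayleigh-fading closed forms with a short Monte-Carlo sketch (our own illustration without hardware impairments; MIND is omitted because it requires user positions):

```python
import math
import random

def outage_probability(protocol, n_users=5, gamma_th=1.0, trials=20000, seed=0):
    """Monte-Carlo outage probability over Rayleigh fading, i.e. exponential
    channel gains with unit mean. RAN serves a random user; OPT serves the
    user with the highest channel gain."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        gains = [rng.expovariate(1.0) for _ in range(n_users)]
        g = rng.choice(gains) if protocol == "RAN" else max(gains)
        outages += g < gamma_th
    return outages / trials

op_ran = outage_probability("RAN")
op_opt = outage_probability("OPT")
# closed forms: RAN -> 1 - exp(-gamma_th); OPT -> (1 - exp(-gamma_th))**n_users
print(round(op_ran, 2), round(op_opt, 2))
```

The gap between the two estimates is the multi-user selection gain that makes OPT attractive despite its higher channel-state-information requirements.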

Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern prediction methods of pollutant concentration are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model to estimate carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

The paper presents a simulation study on the performance of a target tracker using a selective track splitting filter algorithm through a random scenario implemented on a digital signal processor. In a typical track splitting filter, all the observations which fall inside a likelihood ellipse... are used for the update; in our proposed selective track splitting filter, however, a smaller number of observations is used for the track update. Much of the previous performance work [1] has been done on specific (deterministic) scenarios. One of the reasons for considering the specific scenarios, which were... ...performance assessment. Therefore, a random target motion scenario is adopted. Its implementation, in particular for testing the proposed selective track splitting algorithm using Kalman filters, is investigated through a number of performance parameters which give the activity profile of the tracking scenario...

This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

Classic studies of lateral geniculate nucleus (LGN) and visual cortex (V1) in carnivores and primates have found that a majority of neurons in LGN exhibit a center-surround organization, while V1 neurons exhibit strong orientation selectivity and, in many species, direction selectivity. Recent work in the mouse and the monkey has discovered previously unknown classes of orientation- and direction-selective neurons in LGN. Furthermore, some recent studies in the mouse report that many LGN cells exhibit pronounced orientation biases that are of comparable strength to the subthreshold inputs to V1 neurons. These results raise the possibility that, in rodents, orientation biases of individual LGN cells make a substantial contribution to cortical orientation selectivity. Alternatively, the size and contribution of orientation- or direction-selective channels from LGN to V1 may vary across mammals. To address this question, we examined orientation and direction selectivity in LGN and V1 neurons of a highly visual diurnal rodent: the gray squirrel. In the representation of central vision, only a few LGN neurons exhibited strong orientation or direction selectivity. Across the population, LGN neurons showed weak orientation biases and were much less selective for orientation compared with V1 neurons. Although direction selectivity was weak overall, LGN layers 3abc, which contain neurons that express calbindin, exhibited elevated direction selectivity index values compared with LGN layers 1 and 2. These results suggest that, for central visual fields, the contribution of orientation- and direction-selective channels from the LGN to V1 is small in the squirrel. As in other mammals, this small contribution is elevated in the calbindin-positive layers of the LGN.

We study the evolution of cooperation in the prisoner's dilemma game, whereby a coevolutionary rule is introduced that molds the random topology of the interaction network in two ways. First, existing links are deleted whenever a player adopts a new strategy or its degree exceeds a threshold value; second, new links are added randomly after a given number of game iterations. These coevolutionary processes correspond to the generic formation of new links and deletion of existing links that, especially in human societies, appear frequently as a consequence of ongoing socialization, change of lifestyle or death. Due to the counteraction of deletions and additions of links the initial heterogeneity of the interaction network is qualitatively preserved, and thus cannot be held responsible for the observed promotion of cooperation. Indeed, the coevolutionary rule evokes the spontaneous emergence of a powerful multilevel selection mechanism, which despite the sustained random topology of the evolving network, maintains cooperation across the whole span of defection temptation values.
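The coevolutionary rule described above (delete a link when a player changes strategy or its degree exceeds a threshold; add random links periodically) can be sketched on an adjacency-set network. This is a toy illustration, not the authors' simulation code; all parameter names are ours.

```python
import random

def coevolve_step(adj, changed, k_max, add_links=0):
    """One coevolutionary update on an undirected adjacency-set network.

    adj: dict node -> set of neighbors; changed: nodes that just adopted
    a new strategy; k_max: degree threshold above which links are pruned.
    """
    # Delete one random link of every player that adopted a new strategy
    # or whose degree exceeds the threshold k_max.
    for i in list(adj):
        if (i in changed or len(adj[i]) > k_max) and adj[i]:
            j = random.choice(sorted(adj[i]))
            adj[i].discard(j)
            adj[j].discard(i)
    # Add new links between randomly chosen, distinct node pairs.
    nodes = list(adj)
    for _ in range(add_links):
        i, j = random.sample(nodes, 2)
        if j not in adj[i]:
            adj[i].add(j)
            adj[j].add(i)
    return adj
```

Because deletions target active or over-connected players while additions are uniformly random, the network's overall random topology is preserved, matching the abstract's observation that heterogeneity cannot explain the promotion of cooperation.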

Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
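A proper-orthogonal-decomposition basis of the kind used by the first method can be sketched via the SVD of a snapshot matrix. This is a generic POD sketch, not the paper's implementation; the energy threshold and snapshot layout are illustrative assumptions.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix (a sketch).

    snapshots: (n_dof, n_snapshots) array of response samples.
    Returns the fewest left singular vectors whose cumulative squared
    singular values capture the requested `energy` fraction.
    """
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1
    return U[:, :k], s
```

Projecting the full-order equations onto such a basis yields the reduced-order model whose accuracy the paper benchmarks against the full analysis in physical degrees of freedom.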

The purpose of this paper is to introduce certain models of topology-selective stochastic jamming and examine their impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time-selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.

We present an approach for identifying genes under natural selection using polymorphism and divergence data from synonymous and non-synonymous sites within genes. A generalized linear mixed model is used to model the genome-wide variability among categories of mutations and estimate its functional consequence. We demonstrate how the model's estimated fixed and random effects can be used to identify genes under selection. The parameter estimates from our generalized linear model can be transformed to yield population genetic parameter estimates for quantities including the average selection coefficient for new mutations at a locus, the synonymous and non-synonymous mutation rates, and species divergence times. Furthermore, our approach incorporates stochastic variation due to the evolutionary process and can be fit using standard statistical software. The model is fit in both the empirical Bayes and Bayesian settings using the lme4 package in R, and Markov chain Monte Carlo methods in WinBUGS. Using simulated data we compare our method to existing approaches for detecting genes under selection: the McDonald-Kreitman test, and two versions of the Poisson random field based method MKprf. Overall, we find our method universally outperforms existing methods for detecting genes subject to selection using polymorphism and divergence data.
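The McDonald-Kreitman test used as a baseline above is a 2x2 contingency test on fixed differences versus polymorphisms at non-synonymous and synonymous sites. A minimal sketch (not the paper's GLMM), using the classic Drosophila Adh counts as example input:

```python
from scipy.stats import fisher_exact

def mcdonald_kreitman(Dn, Ds, Pn, Ps):
    """McDonald-Kreitman test (a sketch).

    Dn/Ds: fixed non-synonymous/synonymous differences between species;
    Pn/Ps: non-synonymous/synonymous polymorphisms within a species.
    Returns the neutrality index (NI < 1 suggests positive selection)
    and the two-sided Fisher exact p-value for the 2x2 table.
    """
    ni = (Pn / Ps) / (Dn / Ds)
    _, p = fisher_exact([[Dn, Ds], [Pn, Ps]])
    return ni, p
```

Under strict neutrality the ratio of non-synonymous to synonymous changes should be the same within and between species, which is exactly what the table tests.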

Background: One of the main topics in the development of quantitative structure-property relationship (QSPR) predictive models is the identification of the subset of variables that represent the structure of a molecule and are predictors for a given property. There are several automated feature selection methods, ranging from backward, forward or stepwise procedures to more elaborate methodologies such as evolutionary programming. The problem lies in selecting the minimum subset of descriptors that can predict a certain property with good performance, in a computationally efficient and robust way, since the presence of irrelevant or redundant features can cause poor generalization capacity. In this paper an alternative selection method, based on Random Forests to determine variable importance, is proposed in the context of QSPR regression problems, with an application to a manually curated dataset for predicting standard enthalpy of formation. The subsequent predictive models are trained with support vector machines, introducing the variables sequentially from a ranked list based on variable importance. Results: The model generalizes well even with a high-dimensional dataset and in the presence of highly correlated variables. The feature selection step was shown to yield lower prediction errors, with RMSE values 23% lower than without feature selection, albeit using only 6% of the total number of variables (89 of the original 1485). The proposed approach further compared favourably with other feature selection methods and with dimension reduction of the feature space. The predictive model was selected using a 10-fold cross-validation procedure and, after selection, was validated with an independent set to assess its performance when applied to new data; the results were similar to those obtained for the training set, supporting the robustness of the proposed approach. Conclusions: The proposed methodology seemingly improves the prediction
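The rank-then-add-sequentially scheme described above can be sketched with scikit-learn. This is an illustrative toy on synthetic data, not the paper's enthalpy dataset; model hyperparameters are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Synthetic stand-in for a descriptor matrix: 50 features, 5 informative.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=0.1, random_state=0)

# Rank descriptors by Random Forest variable importance (descending).
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
rank = np.argsort(rf.feature_importances_)[::-1]

# Grow an SVM model by adding variables one at a time from the top of
# the ranking, scoring each candidate subset by cross-validation.
scores = []
for k in range(1, 11):
    cols = rank[:k]
    scores.append(cross_val_score(SVR(kernel="rbf", C=100.0),
                                  X[:, cols], y, cv=5).mean())
best_k = int(np.argmax(scores)) + 1  # smallest subset with the best CV score
```

The selected subset size is where the cross-validated score peaks, mirroring the paper's choice of 89 descriptors out of 1485.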

Selection bias and non-participation bias are major methodological concerns which impact external validity. Cluster-randomized controlled trials are especially prone to selection bias as it is impractical to blind clusters to their allocation into intervention or control. This study assessed the impact of selection bias in a large cluster-randomized controlled trial. The Improved Cardiovascular Risk Reduction to Enhance Rural Primary Care (ICARE) study examined the impact of a remote pharmacist-led intervention in twelve medical offices. To assess eligibility, a standardized form containing patient demographics and medical information was completed for each screened patient. Eligible patients were approached by the study coordinator for recruitment. Both the study coordinator and the patient were aware of the site's allocation prior to consent. Patients who consented or declined to participate were compared across control and intervention arms for differing characteristics. Statistical significance was determined using a two-tailed, equal variance t-test and a chi-square test with adjusted Bonferroni p-values. Results were adjusted for random cluster variation. There were 2749 completed screening forms returned to research staff with 461 subjects who had either consented or declined participation. Patients with poorly controlled diabetes were found to be significantly more likely to decline participation in intervention sites compared to those in control sites. A higher mean diastolic blood pressure was seen in patients with uncontrolled hypertension who declined in the control sites compared to those who declined in the intervention sites. However, these findings were no longer significant after adjustment for random variation among the sites. After this adjustment, females were now found to be significantly more likely to consent than males (odds ratio = 1.41; 95% confidence interval = 1.03, 1.92). Though there appeared to be a higher consent rate for females

A method is proposed to find representative soil moisture measurement points for a steep hillslope located in the northeastern part of South Korea. We analyzed time series data of soil moisture for 49 points between May and November 2013 in the study area to characterize temporal and spatial variation patterns. The factor analysis showed monthly characteristics of the index of temporal stability (ITS), which could be classified into 3 distinct groups. A dendrogram was useful for characterizing spatial and topographic patterns of soil moisture. The performance of the proposed method was compared with the existing ITS approach in terms of the coefficient of determination, showing better representative potential for soil moisture measurements. The results of this pattern approach can be used to interpolate missing data with high accuracy, which was made possible by addressing characteristics of topography and rainfall events depending on seasonal classification.
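Temporal-stability ranking of the kind underlying an ITS analysis is commonly built on the mean relative difference of each point from the spatial mean. The sketch below shows one common single-index form; it is a generic illustration, not the paper's exact index.

```python
import numpy as np

def temporal_stability(theta):
    """Rank measurement points by temporal stability (a sketch).

    theta: (n_times, n_points) array of soil-moisture readings. For
    each point the mean relative difference (MRD) from the spatial
    mean and its standard deviation (SDRD) are computed; the point
    whose combined index is smallest tracks the field average most
    faithfully and is a candidate representative location.
    """
    spatial_mean = theta.mean(axis=1, keepdims=True)
    rel_diff = (theta - spatial_mean) / spatial_mean
    mrd = rel_diff.mean(axis=0)
    sdrd = rel_diff.std(axis=0)
    its = np.sqrt(mrd ** 2 + sdrd ** 2)  # one common single-index form
    return mrd, sdrd, int(np.argmin(its))
```

A point with MRD near zero and small SDRD both matches the field average and does so consistently over time, which is what makes it useful for gap-filling missing data.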

This indicator uses population trends of selected bird and tree species as a surrogate measure of genetic diversity. Population decreases, especially associated with small populations, can lead to decreases in genetic diversity, and contribute to increased risk of extinction. Many forest-associated species rely on some particular forest structure, vegetation...

Congress of the U.S., Washington, DC. House Select Committee on Narcotics Abuse and Control.

This annual report describes the activities of the House Select Committee on Narcotics Abuse and Control in 1983 and makes recommendations to the House of Representatives to control the worldwide problem of drug abuse and drug trafficking. An initial section of the report describes the jurisdiction, authority, funding, and organization of the…

In Germany, the guardianship system provides adults who are no longer able to handle their own affairs a court-appointed legal representative, for support without restriction of legal capacity. Although these representatives only rarely are qualified in healthcare, they nevertheless play decisive roles in the decision-making processes for people with dementia. Previously, we developed an education program (PRODECIDE) to address this shortcoming and tested it for feasibility. Typical autonomy-restricting decisions in the care of people with dementia, namely the use of percutaneous endoscopic gastrostomy (PEG) or physical restraints (PR), or the prescription of antipsychotic drugs (AP), were the subject areas trained. The training course aims to enhance the competency of legal representatives in informed decision-making. In this study, we will evaluate the efficacy of the PRODECIDE education program. A randomized controlled trial with a six-month follow-up will be conducted to compare the PRODECIDE education program with standard care, enrolling legal representatives (N = 216). The education program lasts 10 h and comprises four modules: A, decision-making processes and methods; and B, C and D, evidence-based knowledge about PEG, PR and AP, respectively. The primary outcome measure is knowledge, which is operationalized as the understanding of decision-making processes in healthcare affairs and in setting realistic expectations about benefits and harms of PEG, PR and AP in people with dementia. Secondary outcomes are sufficient and sustainable knowledge and the percentage of persons concerned affected by PEG, PR or AP. A qualitative process evaluation will be performed. Additionally, to support implementation, a concept for translating the educational contents into e-learning modules will be developed. The study results will show whether the efficacy of the education program could justify its implementation into the regular training curricula for legal representatives.

Background: The risk of long-term unequal contribution of mating pairs to the gene pool is that deleterious recessive genes can be expressed. Such consequences could be alleviated by appropriately designing and optimizing breeding schemes, i.e. by improving selection and mating procedures. Methods: We studied the effect of mating designs (random, minimum coancestry and minimum covariance of ancestral contributions) on the rate of inbreeding and genetic gain for schemes with different information sources, i.e. sib test or own performance records, different genetic evaluation methods, i.e. BLUP or genomic selection, and different family structures, i.e. factorial or pair-wise. Results: Substantial differences in rates of inbreeding due to mating design were present under schemes with a pair-wise family structure, for which minimum coancestry turned out to be more effective at generating lower rates of inbreeding. Specifically, substantial reductions in rates of inbreeding were observed in schemes using sib test records and BLUP evaluation. However, with a factorial family structure, differences in rates of inbreeding due to mating designs were minor. Moreover, non-random mating had only a small effect in breeding schemes that used genomic evaluation, regardless of the information source. Conclusions: Minimum coancestry remains an efficient mating design when BLUP is used for genetic evaluation or when the size of the population is small, whereas the effect of non-random mating is smaller in schemes using genomic evaluation.

The relative contributions of natural selection and random genetic drift are a major source of debate in the study of gene expression evolution, which is hypothesized to serve as a bridge from molecular to phenotypic evolution. It has been suggested that the conflict between views is caused by the lack of a definite model of the neutral hypothesis, which can describe the long-run behavior of evolutionary change in mRNA abundance. Therefore previous studies have used inadequate analogies with the neutral prediction of other phenomena, such as amino acid or nucleotide sequence evolution, as the null hypothesis of their statistical inference. In this study, we introduced two novel theoretical models, one based on neutral drift and the other assuming natural selection, by focusing on a common property of the distribution of mRNA abundance among a variety of eukaryotic cells, which reflects the result of long-term evolution. Our results demonstrated that (1) our models can reproduce two independently found phenomena simultaneously: the time development of gene expression divergence and Zipf's law of the transcriptome; (2) cytological constraints can be explicitly formulated to describe long-term evolution; (3) the model assuming that natural selection optimized relative mRNA abundance was more consistent with previously published observations than the model of optimized absolute mRNA abundances. The models introduced in this study give a formulation of evolutionary change in the mRNA abundance of each gene as a stochastic process, on the basis of previously published observations. This model provides a foundation for interpreting observed data in studies of gene expression evolution, including identifying an adequate time scale for discriminating the effect of natural selection from that of random genetic drift of selectively neutral variations.

This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...
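The decoding side of the trade-off mentioned above can be illustrated for binary RLNC: a receiver can decode once its random coefficient vectors span the source space, so each extra coded packet buys reliability at the cost of transmission overhead. A toy sketch (our own, over GF(2); real deployments often use larger fields):

```python
import numpy as np

def rlnc_decodable(k, n, rng):
    """Check whether n random GF(2) coded packets over k sources decode.

    Draws an n x k random binary coefficient matrix and returns True
    iff its rank over GF(2) is k, i.e. Gaussian elimination can
    recover all k source packets.
    """
    M = rng.integers(0, 2, size=(n, k))
    rank, row = 0, 0
    for col in range(k):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(row, n) if M[r, col]), None)
        if pivot is None:
            continue
        M[[row, pivot]] = M[[pivot, row]]
        # eliminate the column from all other rows (XOR = addition mod 2)
        for r in range(n):
            if r != row and M[r, col]:
                M[r] = (M[r] + M[row]) % 2
        rank, row = rank + 1, row + 1
    return rank == k
```

With n = k packets the success probability over GF(2) is only about 0.29; a handful of extra packets drives it close to 1, which is exactly the overhead-versus-reliability trade-off the abstract refers to.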

Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
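The backward elimination procedure the study evaluates can be sketched as repeated refitting with importance-based pruning, tracking out-of-bag accuracy. This is a generic sketch on synthetic data, not the StreamCat analysis; the drop fraction and stopping size are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def backward_eliminate(X, y, keep=5, drop_frac=0.2, random_state=0):
    """Backward variable elimination for a random forest (a sketch).

    Repeatedly refits the forest and drops the least important
    fraction of the remaining predictors, recording each candidate
    subset together with its out-of-bag (OOB) accuracy.
    """
    cols = np.arange(X.shape[1])
    history = []
    while len(cols) >= keep:
        rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                                    random_state=random_state)
        rf.fit(X[:, cols], y)
        history.append((cols.copy(), rf.oob_score_))
        n_drop = max(1, int(drop_frac * len(cols)))
        order = np.argsort(rf.feature_importances_)  # ascending importance
        cols = cols[order[n_drop:]]                  # keep the rest
    return history
```

Note that because the same observations drive both the pruning and the OOB score, this loop exhibits exactly the optimistic-accuracy bias the abstract warns about; honest assessment requires validation folds external to the selection, as the authors do.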

Di Zhao, Jian Song, Xuan Gao, Fei Gao, Yupeng Wu, Yingying Lu, Kai Hou (The First and Second Hospitals of Hebei Medical University and the Hebei Provincial Procurement Centers for Medical Drugs and Devices, Shijiazhuang, People's Republic of China; Di Zhao and Jian Song contributed equally to this work). Background: Selective digestive decontamination (SDD) and selective oropharyngeal decontamination (SOD) are associated with reduced mortality and infection rates among patients in intensive care units (ICUs); however, whether SOD has a superior effect to SDD remains uncertain. Hence, we conducted a meta-analysis of randomized controlled trials (RCTs) to compare SOD with SDD in terms of clinical outcomes and antimicrobial resistance rates in patients who were critically ill. Methods: RCTs published in PubMed, Embase, and Web of Science were systematically reviewed to compare the effects of SOD and SDD in patients who were critically ill. Outcomes included day-28 mortality, length of ICU stay, length of hospital stay, duration of mechanical ventilation, ICU-acquired bacteremia, and prevalence of antibiotic-resistant Gram-negative bacteria. Results were expressed as risk ratios (RRs) with 95% confidence intervals (CIs), and weighted mean differences (WMDs) with 95% CIs. Pooled estimates were computed using a fixed-effects model or random-effects model, depending on the heterogeneity among studies. Results: A total of four RCTs involving 23,822 patients met the inclusion criteria and were included in this meta-analysis. Among patients whose admitting specialty was surgery, cardiothoracic surgery (57.3%) and neurosurgery (29.7%) were the two main types of surgery being performed. Pooled results showed that SOD had similar effects as SDD in day-28 mortality (RR = 1

A novel frequency selective surface (FSS) for reducing radar cross section (RCS) is proposed in this paper. This FSS is based on the random distribution method, so it can be called a random surface. In this paper, stacked patches serving as periodic elements are employed for RCS reduction. Previous work has demonstrated the efficiency of utilizing microstrip patches, especially for the reflectarray. First, the relevant theory of the method is described. Then a sample of a three-layer variable-sized stacked patch random surface with a dimension of 260 mm × 260 mm is simulated, fabricated, and measured in order to demonstrate the validity of the proposed design. For normal incidence, an 8-dB RCS reduction is achieved in both simulation and measurement over 8-13 GHz. Oblique incidence at 30° is also investigated, in which a 7-dB RCS reduction is obtained over 8-14 GHz.

We present the Representative Temperature and Precipitation (T&P) GCM Subsetting Approach developed within the Agricultural Model Intercomparison and Improvement Project (AgMIP) to select a practical subset of global climate models (GCMs) for regional integrated assessment of climate impacts when resource limitations do not permit the full ensemble of GCMs to be evaluated given the need to also focus on impacts sector and economics models. Subsetting inherently leads to a loss of information but can free up resources to explore important uncertainties in the integrated assessment that would otherwise be prohibitive. The Representative T&P GCM Subsetting Approach identifies five individual GCMs that capture a profile of the full ensemble of temperature and precipitation change within the growing season while maintaining information about the probability that basic classes of climate changes (relatively cool/wet, cool/dry, middle, hot/wet, and hot/dry) are projected in the full GCM ensemble. We demonstrate the selection methodology for maize impacts in Ames, Iowa, and discuss limitations and situations when additional information may be required to select representative GCMs. We then classify 29 GCMs over all land areas to identify regions and seasons with characteristic diagonal skewness related to surface moisture as well as extreme skewness connected to snow-albedo feedbacks and GCM uncertainty. Finally, we employ this basic approach to recognize that GCM projections demonstrate coherence across space, time, and greenhouse gas concentration pathway. The Representative T&P GCM Subsetting Approach provides a quantitative basis for the determination of useful GCM subsets, provides a practical and coherent approach where previous assessments selected solely on availability of scenarios, and may be extended for application to a range of scales and sectoral impacts.
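The five-class selection idea can be sketched by partitioning the ensemble around the median temperature and precipitation changes. This is our own simplified illustration of the concept, not AgMIP's published selection rules (which use percentile thresholds of the growing-season changes).

```python
import numpy as np

def representative_subset(dT, dP):
    """Pick up to 5 GCMs spanning the T/P change space (a sketch).

    dT, dP: per-GCM growing-season temperature and precipitation
    changes. Classifies models against the ensemble medians and
    returns one representative per class: middle, cool/wet, cool/dry,
    hot/wet, hot/dry.
    """
    dT, dP = np.asarray(dT, float), np.asarray(dP, float)
    tmid, pmid = np.median(dT), np.median(dP)
    dist = np.hypot(dT - tmid, dP - pmid)  # distance from ensemble centroid
    classes = {
        "cool/wet": np.where((dT <= tmid) & (dP >= pmid))[0],
        "cool/dry": np.where((dT <= tmid) & (dP < pmid))[0],
        "hot/wet":  np.where((dT > tmid) & (dP >= pmid))[0],
        "hot/dry":  np.where((dT > tmid) & (dP < pmid))[0],
    }
    picks = {"middle": int(np.argmin(dist))}  # closest to the centroid
    for name, idx in classes.items():
        if idx.size:  # farthest member best represents the class corner
            picks[name] = int(idx[np.argmax(dist[idx])])
    return picks
```

Because class membership fractions in the full ensemble are retained alongside the picks, downstream impact results can still be weighted by how probable each class of climate change is.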

Background: Recent studies have shown that when individuals are grouped on the basis of genetic similarity, group membership corresponds closely to continental origin. There has been considerable debate about the implications of these findings in the context of larger debates about race and the extent of genetic variation between groups. Some have argued that clustering according to continental origin demonstrates the existence of significant genetic differences between groups and that these differences may have important implications for differences in health and disease. Others argue that clustering according to continental origin requires the use of large amounts of genetic data or specifically chosen markers and is indicative only of very subtle genetic differences that are unlikely to have biomedical significance. Results: We used small numbers of randomly selected single nucleotide polymorphisms (SNPs) from the International HapMap Project to train naïve Bayes classifiers for prediction of ancestral continent of origin. Predictive accuracy was tested on two independent data sets. Genetically similar groups should be difficult to distinguish, especially if only a small number of genetic markers are used. The genetic differences between continentally defined groups are sufficiently large that one can accurately predict ancestral continent of origin using only a minute, randomly selected fraction of the genetic variation present in the human genome. Genotype data from only 50 random SNPs was sufficient to predict ancestral continent of origin in our primary test data set with an average accuracy of 95%. Genetic variations informative about ancestry were common and widely distributed throughout the genome. Conclusion: Accurate characterization of ancestry is possible using small numbers of randomly selected SNPs. The results presented here show how investigators conducting genetic association studies can use small numbers of arbitrarily
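The core experiment, training a naïve Bayes classifier on a handful of random SNP genotypes, can be sketched on synthetic data. The allele frequencies below are invented for illustration; the study itself used HapMap genotypes.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_snps = 50  # a small random SNP panel, as in the abstract
f1 = rng.uniform(0.1, 0.9, n_snps)  # population-1 minor-allele freqs
f2 = rng.uniform(0.1, 0.9, n_snps)  # population-2 minor-allele freqs

def genotypes(freqs, n):
    """Draw n individuals as 0/1/2 minor-allele counts per SNP."""
    return rng.binomial(2, freqs, size=(n, len(freqs)))

X_train = np.vstack([genotypes(f1, 100), genotypes(f2, 100)])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.vstack([genotypes(f1, 50), genotypes(f2, 50)])
y_test = np.array([0] * 50 + [1] * 50)

clf = GaussianNB().fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

Even though each individual SNP is only weakly informative, the naïve Bayes log-likelihoods accumulate over the panel, which is why accuracy climbs quickly with only dozens of markers.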

Spectrum sharing systems have been recently introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a predetermined/acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a primary link composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes select a beam, among a set of power-optimized random beams, that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the SINR statistics as well as the capacity and bit error rate (BER) of the secondary link.
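The selection step described above, choosing among random beams only those meeting the primary interference constraint, can be sketched as follows. This is a simplified single-interferer sketch that maximizes the secondary SNR over admissible beams, not the paper's full SINR analysis with quantized feedback levels.

```python
import numpy as np

def select_beam(h_s, h_p, n_beams, q_max, noise=1.0, rng=None):
    """Interference-aware random beam selection (a simplified sketch).

    h_s: channel vector from the multi-antenna secondary transmitter
    to its receiver; h_p: cross channel to the primary receiver.
    Draws n_beams random unit-norm beams, discards those whose
    interference power at the primary receiver exceeds q_max, and
    returns the admissible beam with the highest secondary SNR
    (or None if no beam satisfies the constraint).
    """
    if rng is None:
        rng = np.random.default_rng()
    m = len(h_s)
    beams = (rng.standard_normal((n_beams, m))
             + 1j * rng.standard_normal((n_beams, m)))
    beams /= np.linalg.norm(beams, axis=1, keepdims=True)
    interference = np.abs(beams @ np.conj(h_p)) ** 2
    snr = np.abs(beams @ np.conj(h_s)) ** 2 / noise
    ok = interference <= q_max
    if not ok.any():
        return None  # secondary stays silent this slot
    best = np.flatnonzero(ok)[np.argmax(snr[ok])]
    return beams[best], float(snr[best]), float(interference[best])
```

Tightening q_max shrinks the admissible beam set and hence the secondary link's rate, which is the protection-versus-performance trade-off the statistical analysis in the paper quantifies.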

Critical data selection is essential for determining representative baseline levels of atmospheric trace gases even at remote measurement sites. Different data selection techniques have been used around the world, which could potentially lead to reduced compatibility when comparing data from different stations. This paper presents a novel statistical data selection method named adaptive diurnal minimum variation selection (ADVS) based on CO2 diurnal patterns typically occurring at elevated mountain stations. Its capability and applicability were studied on records of atmospheric CO2 observations at six Global Atmosphere Watch stations in Europe, namely, Zugspitze-Schneefernerhaus (Germany), Sonnblick (Austria), Jungfraujoch (Switzerland), Izaña (Spain), Schauinsland (Germany), and Hohenpeissenberg (Germany). Three other frequently applied statistical data selection methods were included for comparison. Among the studied methods, our ADVS method resulted in a lower fraction of data selected as a baseline with lower maxima during winter and higher minima during summer in the selected data. The measured time series were analyzed for long-term trends and seasonality by a seasonal-trend decomposition technique. In contrast to unselected data, mean annual growth rates of all selected datasets were not significantly different among the sites, except for the data recorded at Schauinsland. However, clear differences were found in the annual amplitudes as well as the seasonal time structure. Based on a pairwise analysis of correlations between stations on the seasonal-trend decomposed components by statistical data selection, we conclude that the baseline identified by the ADVS method is a better representation of lower free tropospheric (LFT) conditions than baselines identified by the other methods.
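The diurnal-minimum-variation idea behind ADVS can be sketched as selecting, per day, the calmest contiguous window of the CO2 record. This is a much-simplified illustration of the concept, not the published adaptive algorithm (which adapts its thresholds to the station).

```python
import numpy as np

def diurnal_min_variation(hourly_co2, window=6, max_std=0.5):
    """Baseline data selection by diurnal minimum variation (a sketch).

    hourly_co2: (n_days, 24) array of hourly CO2 in ppm. For each day,
    finds the contiguous `window`-hour span with the smallest standard
    deviation and keeps it as baseline if that deviation is below
    `max_std` ppm. Returns (day, start_hour, end_hour) tuples.
    """
    selected = []
    for day, series in enumerate(hourly_co2):
        stds = [series[s:s + window].std() for s in range(24 - window + 1)]
        s = int(np.argmin(stds))
        if stds[s] < max_std:
            selected.append((day, s, s + window))
    return selected
```

At elevated mountain stations the calm window typically falls at night, when upslope transport of boundary-layer air ceases, so the retained data approximate lower-free-tropospheric conditions.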

This roadside study is the Danish part of the EU-project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme stratified by time, season, and road type. The oral fluid samples were screened for 29 illegal and legal psychoactive substances and metabolites as well as ethanol. Fourteen (0.5%) drivers were positive for ethanol (alone or in combination with drugs) at concentrations above 0.53 g/l (0.5 mg/g), which is the Danish legal limit. … drugs detected above the limit of quantitation (LOQ), while codeine, tramadol, zopiclone, and benzodiazepines were the most frequent legal drugs. Middle-aged men (median age 47.5 years) dominated the drunk-driving group, while the drivers positive for illegal drugs consisted mainly of young men (median … ) … of narcotic drugs. It can be concluded that driving under the influence of drugs is as serious a road safety problem as drunk driving.

Random forest (RF) is a machine-learning ensemble method with high predictive performance. Majority voting in RF uses the discrimination results of numerous decision trees produced from bootstrapped data. For the same dataset, the bootstrapping process yields different predictive capacities in each generation. As participants in the Toxicology in the 21st Century (Tox21) Data Challenge 2014, we produced numerous RF models for predicting the structures of compounds that can activate each toxicity-related pathway, and then selected the model with the highest predictive ability. Half of the compounds in the training dataset supplied by the competition organizer were allocated to the validation dataset. The remaining compounds were used in model construction. The charged and uncharged forms of each molecule were calculated using the molecular operating environment (MOE) software. Subsequently, the descriptors were computed using MOE, MarvinView, and Dragon. These combined methods yielded over 4,071 descriptors for model construction. Using these descriptors, pattern recognition analyses were performed by RF implemented in JMP Pro (a statistical software package). One hundred to two hundred RF models were generated for each pathway. The predictive performance of each model was tested against the validation dataset, and the best-performing model was selected. In the competition, the model selected in this way best predicted the structures of compounds that activate the estrogen receptor ligand-binding domain (ER-LBD).
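The model-selection protocol described above (bootstrap-train many models, score each against the held-out half, keep the best) can be sketched with a toy learner standing in for the random forest; the threshold-stump learner and the synthetic data are illustrative assumptions, not the paper's actual pipeline:

```python
import random

def train_stump(data):
    # data: list of (x, label); pick the threshold minimizing training error
    best = (None, 1.0)
    for x, _ in data:
        err = sum(((xi > x) != yi) for xi, yi in data) / len(data)
        if err < best[1]:
            best = (x, err)
    return best[0]

def accuracy(thr, data):
    return sum((x > thr) == y for x, y in data) / len(data)

def select_best_model(train, valid, n_models, rng):
    """Bootstrap-train n_models models, keep the one scoring highest on
    the held-out validation set - the selection protocol described in
    the abstract, with a stump standing in for the RF learner."""
    best_thr, best_acc = None, -1.0
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        thr = train_stump(boot)
        acc = accuracy(thr, valid)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr, best_acc
```

Because each bootstrap sample yields a different model, validation-set scoring is what makes the generations comparable.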

Are randomized controlled trials (RCTs) on IVF and ICSI subject to selective outcome reporting and is this related to sponsorship? There are inconsistencies, independent of sponsorship, in the reporting of primary outcome measures in the majority of IVF and ICSI trials, indicating selective outcome reporting. RCTs are subject to bias at various levels. Of these biases, selective outcome reporting is particularly relevant to IVF and ICSI trials since there is a wide variety of outcome measures to choose from. An established cause of reporting bias is sponsorship. It is, at present, unknown whether RCTs in IVF/ICSI are subject to selective outcome reporting and whether this is related to sponsorship. We systematically searched RCTs on IVF and ICSI published between January 2009 and March 2016 in MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials and the publisher subset of PubMed. We analysed 415 RCTs. Per included RCT, we extracted data on the impact factor of the journal, sample size, power calculation, and trial registry, and thereafter data on the primary outcome measure, the direction of trial results, and sponsorship. Of the 415 identified RCTs, 235 were excluded from our primary analysis because the sponsorship was not reported. Of the 180 RCTs included in our analysis, 7 trials did not report on any primary outcome measure, and 107 of the remaining 173 trials (62%) reported on surrogate primary outcome measures. Of the 114 registered trials, 21 trials (18%) provided primary outcomes in their manuscript that differed from those in the trial registry. This indicates selective outcome reporting. We found no association between selective outcome reporting and sponsorship. We ran additional analyses including the trials that had not reported sponsorship and found no outcomes that differed from our primary analysis. Since the majority of the trials did not report on sponsorship, there is a risk of sampling bias. IVF and ICSI trials are subject to selective outcome reporting, independent of sponsorship.

The structural genomics project is an international effort to determine the three-dimensional shapes of all important biological macromolecules, with a primary focus on proteins. Target proteins should be selected according to a strategy which is medically and biologically relevant, of good value, and tractable. As an option to consider, we present the Pfam5000 strategy, which involves selecting the 5000 most important families from the Pfam database as sources for targets. We compare the Pfam5000 strategy to several other proposed strategies that would require similar numbers of targets. These include complete solution of several small to moderately sized bacterial proteomes, partial coverage of the human proteome, and random selection of approximately 5000 targets from sequenced genomes. We measure the impact that successful implementation of these strategies would have upon structural interpretation of the proteins in Swiss-Prot, TrEMBL, and 131 complete proteomes (including 10 eukaryotes) from the Proteome Analysis database at EBI. Solving the structures of proteins from the 5000 largest Pfam families would allow accurate fold assignment for approximately 68 percent of all prokaryotic proteins (covering 59 percent of residues) and 61 percent of eukaryotic proteins (40 percent of residues). More fine-grained coverage which would allow accurate modeling of these proteins would require an order of magnitude more targets. The Pfam5000 strategy may be modified in several ways, for example to focus on larger families, bacterial sequences, or eukaryotic sequences; as long as secondary consideration is given to large families within Pfam, coverage results vary only slightly. In contrast, focusing structural genomics on a single tractable genome would have only a limited impact on structural knowledge of other proteomes: a significant fraction (about 30-40 percent of the proteins, and 40-60 percent of the residues) of each proteome is classified in small families.
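The family-coverage arithmetic underlying the Pfam5000 comparison — solve the largest families first and count the fraction of proteins that become interpretable — can be illustrated with a small helper; the family counts in the example are invented, not real Pfam statistics:

```python
def coverage(family_sizes, n_selected):
    """Fraction of proteins covered when structures are solved for the
    n_selected largest families.  family_sizes lists the number of
    proteins per family (illustrative counts, not real Pfam data)."""
    ranked = sorted(family_sizes, reverse=True)  # largest families first
    total = sum(ranked)
    return sum(ranked[:n_selected]) / total
```

The same greedy ranking explains why focusing on large families changes coverage only slightly once the head of the distribution is included.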

To assess radiographic methods and diagnostically sufficient images used before removal of mandibular third molars among randomly selected general dental clinics. Furthermore, to assess factors predisposing for an additional radiographic examination. Two observers visited 18 randomly selected clinics in Denmark and studied patient files, including radiographs of patients who had their mandibular third molar(s) removed. The radiographic unit and type of receptor were registered. A diagnostically sufficient image was defined as one in which the whole tooth and the mandibular canal were displayed in the radiograph (yes/no). Overprojection between the tooth and mandibular canal (yes/no) and patient-reported inferior alveolar nerve sensory disturbances (yes/no) were recorded. Regression analyses tested whether overprojection between the third molar and the mandibular canal and an insufficient intraoral image predisposed for additional radiographic examination(s). 1500 mandibular third molars had been removed; 1090 had intraoral, 468 had panoramic and 67 had CBCT examination. 1000 teeth were removed after an intraoral examination alone, 433 after panoramic examination and 67 after CBCT examination. 90 teeth had an additional examination after intraoral examination. Overprojection between the tooth and the mandibular canal was a significant factor (p …

Highlights:
• A model based on random forests for short-term load forecasting is proposed.
• An expert feature selection is added to refine the inputs.
• Special attention is paid to customer behaviour, load profiles and special holidays.
• The model is flexible and able to handle complex load signals.
• A technical comparison is performed to assess forecast accuracy.

Abstract: The electrical load forecast has become more and more important in recent years due to electricity market deregulation and the integration of renewable resources. To overcome the incoming challenges and ensure accurate power prediction for different time horizons, sophisticated intelligent methods are elaborated. Utilization of intelligent forecast algorithms is among the main characteristics of smart grids, and is an efficient tool to face uncertainty. Several crucial tasks of power operators, such as load dispatch, rely on the short-term forecast, so it should be as accurate as possible. To this end, this paper proposes a short-term load predictor able to forecast the next 24 h of load. Using random forest, characterized by immunity to parameter variations and internal cross-validation, the model is constructed following an online learning process. The inputs are refined by expert feature selection using a set of if-then rules, in order to include the user's own specifications about the country's weather or market and to generalize the forecast ability. The proposed approach is tested on a real historical dataset from the Tunisian Power Company, and the simulation shows accurate and satisfactory results one day in advance, with an average error rarely exceeding 2.3%. The model is validated for regular working days and weekends, and special attention is paid to moving holidays that follow a non-Gregorian calendar.
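The expert refinement stage described above — lagged load inputs plus if-then calendar rules — might look as follows; the feature names and the "moving holiday behaves like Sunday" rule are illustrative assumptions, not the paper's actual rule set:

```python
def build_forecast_features(history, hour, weekday, is_moving_holiday):
    """Assemble inputs for a 24 h-ahead load forecast: lagged loads plus
    calendar flags.  One expert if-then rule is shown: a moving
    (non-Gregorian) holiday is treated like a Sunday.  All names and the
    rule itself are illustrative assumptions."""
    feats = {
        "lag_24h": history[-24],    # load at the same hour yesterday
        "lag_168h": history[-168],  # load at the same hour last week
        "hour": hour,
        "weekday": weekday,         # 0 = Monday ... 6 = Sunday
    }
    if is_moving_holiday:           # expert rule: holiday behaves like Sunday
        feats["weekday"] = 6
    return feats
```

The resulting dictionary would be the per-sample input row fed to the random-forest regressor.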

Development of multidimensional NMR is essential to many applications, for example in high-resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, improving signal resolution, whilst also allowing identification of new connectivities. However, these advantages come at a significant cost. The Fourier transform theorem requires acquisition of a grid of regularly spaced points to satisfy the Nyquist criterion, while frequency discrimination and acquisition of a pure phase spectrum require acquisition of both quadrature components for each time point in every indirect (non-acquisition) dimension, adding a factor of 2^(N−1) to the number of free-induction decays which must be acquired, where N is the number of dimensions. Compressed sensing (CS) ℓ1-norm minimisation in combination with non-uniform sampling (NUS) has been shown to be extremely successful in overcoming the Nyquist criterion. Previously, maximum entropy reconstruction has also been used to overcome the limitation of frequency discrimination, processing data acquired with only one quadrature component at a given time interval, known as random phase detection (RPD), allowing a factor of two reduction in the number of points for each indirect dimension (Maciejewski et al. 2011 PNAS 108 16640). However, whilst this approach can be easily applied in situations where the quadrature components are acquired as amplitude-modulated data, the same principle is not easily extended to phase-modulated (P-/N-type) experiments where data are acquired in the form exp(iωt) or exp(−iωt), and which make up many of the multidimensional experiments used in modern NMR. Here we demonstrate a modification of the CS ℓ1-norm approach to allow random coherence order selection (RCS) for phase-modulated experiments; we generalise the nomenclature for RCS and RPD as random quadrature detection (RQD). With this method, the power of RQD can be extended…
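The CS ℓ1-norm machinery referred to above rests on FISTA iterations built from a soft-thresholding (shrinkage) step. A minimal pure-Python sketch for the generic problem min 0.5·||Ax − y||² + λ·||x||₁ (not the NMR-specific RQD reconstruction) is:

```python
def soft_threshold(v, t):
    """Proximal operator of the l1 norm - the shrinkage step applied at
    every FISTA iteration."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def fista(A, y, lam, step, n_iter=100):
    """FISTA for min 0.5*||Ax - y||^2 + lam*||x||_1 (pure-Python sketch;
    'step' must be <= 1/L, with L the largest eigenvalue of A^T A)."""
    n = len(A[0])
    x = [0.0] * n
    z = x[:]           # accelerated (momentum) point
    t = 1.0
    for _ in range(n_iter):
        r = [ri - yi for ri, yi in zip(matvec(A, z), y)]   # residual A z - y
        cols = list(zip(*A))                               # columns of A
        grad = [sum(a * b for a, b in zip(col, r)) for col in cols]
        x_new = [soft_threshold(zi - step * gi, step * lam)
                 for zi, gi in zip(z, grad)]               # prox-gradient step
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2           # momentum schedule
        z = [xn + (t - 1) / t_new * (xn - xo)
             for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x
```

For an identity measurement matrix the iteration reduces to elementwise shrinkage of y, which makes the solver easy to sanity-check.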

Handbooks denote representative authority, which gives their content normative value and through which editors and authors can emphasize certain views and orientations within a field. The representative authority of a handbook is reinforced in various ways, both obvious and subtle. The "SAGE Handbook of Curriculum and Instruction" is no exception…

Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
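The category-wise selection procedure can be sketched directly; site identifiers and category names are illustrative:

```python
import random

def stratified_select(sites, n_per_category, rng):
    """sites: list of (site_id, category).  Select n sites at random
    from each category, mirroring the stratified network design: one
    category at a time, with a per-category sample size."""
    by_cat = {}
    for sid, cat in sites:
        by_cat.setdefault(cat, []).append(sid)
    selection = {}
    for cat, ids in by_cat.items():
        k = min(n_per_category.get(cat, 0), len(ids))
        selection[cat] = rng.sample(ids, k)  # simple random sample per stratum
    return selection
```

The second approach in the report (cells within a category) would simply apply the same sampling one cell at a time.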

Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
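A minimal sketch of the selective Kanerva coding idea — activate a fixed number of the prototypes closest to the current state, rather than all prototypes within a fixed radius — under the assumption of a max-norm distance and invented prototype values:

```python
def selective_kanerva(state, prototypes, k):
    """Return a binary feature vector activating the k prototypes
    closest to the state.  The max-norm distance and the prototype
    values used in examples are illustrative assumptions."""
    dists = [(max(abs(s - p) for s, p in zip(state, proto)), i)
             for i, proto in enumerate(prototypes)]
    active = {i for _, i in sorted(dists)[:k]}  # k nearest prototypes
    return [1 if i in active else 0 for i in range(len(prototypes))]
```

Keeping exactly k features active bounds the per-step computation, which is the property that matters on a resource-limited prosthesis controller.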

A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households.

The analysis of natural phenomena applied to architectural planning and design confronts the most fascinating and elusive of the four dimensions through which man attempts to define life within the universe: time. We all know what time is, said St. Augustine, but nobody knows how to describe it. Within architectural projects and representations, time rarely appears in explicit form. This paper presents the results of research conducted by students of NABA and of the Polytechnic of Milan with the purpose of representing time as a key element within architectural projects. Students investigated new approaches and methodologies to represent time using the two-dimensional support of a sheet of paper.

To help computers make better decisions, it is desirable to describe all our knowledge in computer-understandable terms. This is easy for knowledge described in terms of numerical values: we simply store the corresponding numbers in the computer. This is also easy for knowledge about precise (well-defined) properties which are either true or false for each object: we simply store the corresponding “true” and “false” values in the computer. The challenge is how to store information about imprecise properties. In this paper, we overview different ways to fully store the expert information about imprecise properties. We show that in the simplest case, when the only source of imprecision is disagreement between different experts, a natural way to store all the expert information is to use random sets; we also show how fuzzy sets naturally appear in such a random-set representation. We then show how the random-set representation can be extended to the general (“fuzzy”) case when, in addition to disagreements, experts are also unsure whether some objects satisfy certain properties or not. PMID:25386045
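The random-set-to-fuzzy construction mentioned above can be made concrete in a few lines: the membership degree of an object is the fraction of experts whose set contains it. The function name and the example sets are illustrative:

```python
def membership_from_expert_sets(expert_sets, x):
    """Fuzzy membership degree induced by the random-set
    (expert-disagreement) model: the fraction of experts whose
    set contains x."""
    return sum(x in s for s in expert_sets) / len(expert_sets)
```

An object all experts agree on gets membership 1.0; one only a third of them include gets 1/3, which is exactly how a fuzzy set "naturally appears" from the random-set representation.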

BACKGROUND: Selective cyclooxygenase-2 inhibitors and conventional non-selective non-steroidal anti-inflammatory drugs (nsNSAIDs) have been associated with adverse cardiovascular (CV) effects. We compared the CV safety of switching to celecoxib vs. continuing nsNSAID therapy in a European setting. … infarction or other biomarker-positive acute coronary syndrome, non-fatal stroke or CV death, analysed using a Cox model with a pre-specified non-inferiority limit of 1.4 for the hazard ratio (HR). RESULTS: In total, 7297 participants were randomized. During a median 3-year follow-up, fewer subjects than…

Representing Development presents the different social representations that have formed the idea of development in Western thinking over the past three centuries. Offering an acute perspective on the current state of developmental science and providing constructive insights into future pathways, … and development, addressing their contemporary enactments and reflecting on future theoretical and empirical directions. The first section of the book provides an historical account of early representations of development that, coming from the life sciences, have shaped the way in which developmental science has approached development. Section two focuses upon the contemporary issues of developmental psychology, neuroscience and developmental science at large. The final section offers a series of commentaries pointing to the questions opened by the previous chapters, looking to outline the future lines…

RNA interference (RNAi) is a mechanism for interfering with gene expression through the action of small, non-coding RNAs. We previously constructed a short-hairpin-loop RNA (shRNA) encoding library that is random at the nucleotide level [1]. In this library, the stems of the hairpin are completely complementary. To improve the potency of initial hits, and therefore signal-to-noise ratios in library screening, as well as to simplify hit-sequence retrieval by PCR, we constructed a second-generation library in which we introduced random mismatches between the two halves of the stem of each hairpin, on a random template background. In a screen for shRNAs that protect an interleukin-3 (IL3) dependent cell line from IL3 withdrawal, our second-generation library yielded hit sequences with significantly higher potencies than those from the first-generation library in the same screen. Our method of random mutagenesis was effective for a random template and is likely suitable, therefore, for any DNA template of interest. The improved potency of our second-generation library expands the range of possible unbiased screens for small-RNA therapeutics and biologic tools.

Representative subsets selected from within larger data sets are useful in many chemoinformatics applications including the design of information-rich compound libraries, the selection of compounds for biological evaluation, and the development of reliable quantitative structure-activity relationship (QSAR) models. Such subsets can overcome many of the problems typical of diverse subsets, most notably the tendency of the latter to focus on outliers. Yet only a few algorithms for the selection of representative subsets have been reported in the literature. Here we report on the development of two algorithms for the selection of representative subsets from within parent data sets based on the optimization of a newly devised representativeness function either alone or simultaneously with the MaxMin function. The performances of the new algorithms were evaluated using several measures representing their ability to produce (1) subsets which are, on average, close to data set compounds; (2) subsets which, on average, span the same space as spanned by the entire data set; (3) subsets mirroring the distribution of biological indications in a parent data set; and (4) test sets which are well predicted by qualitative QSAR models built on data set compounds. We demonstrate that for three data sets (containing biological indication data, logBBB permeation data, and Plasmodium falciparum inhibition data), subsets obtained using the new algorithms are more representative than subsets obtained by hierarchical clustering, k-means clustering, or the MaxMin optimization at least in three of these measures.
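The MaxMin component of the optimization described above can be sketched as a standard greedy farthest-point selection (the paper's own representativeness function is not reproduced here):

```python
def maxmin_select(dist, n_select, start=0):
    """Greedy MaxMin subset selection: repeatedly add the point whose
    minimum distance to the current subset is largest.  'dist' is a full
    pairwise distance matrix (list of lists)."""
    chosen = [start]
    while len(chosen) < n_select:
        best, best_d = None, -1.0
        for i in range(len(dist)):
            if i in chosen:
                continue
            d = min(dist[i][j] for j in chosen)  # distance to the subset
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen
```

This illustrates the outlier-seeking behaviour of diversity selection that the abstract contrasts with representativeness: the farthest point is always an extreme one.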

Protein evolution is not a random process. Views which attribute randomness to molecular change, deleterious nature to single-gene mutations, insufficient geological time, or population size for molecular improvements to occur, or invoke "design creationism" to account for complexity in molecular structures and biological processes, are unfounded. Scientific evidence suggests that natural selection tinkers with molecular improvements by retaining adaptive peptide sequence. We used slot-machine probabilities and ion channels to show biological directionality on molecular change. Because ion channels reside in the lipid bilayer of cell membranes, their residue location must be in balance with the membrane's hydrophobic/philic nature; a selective "pore" for ion passage is located within the hydrophobic region. We contrasted the random generation of DNA sequence for KcsA, a bacterial two-transmembrane-domain (2TM) potassium channel, from Streptomyces lividans, with an under-selection scenario, the "jackprot," which predicted much faster evolution than by chance. We wrote a computer program in JAVA APPLET version 1.0 and designed an online interface, The Jackprot Simulation http://faculty.rwu.edu/cbai/JackprotSimulation.htm, to model a numerical interaction between mutation rate and natural selection during a scenario of polypeptide evolution. Winning the "jackprot," or highest-fitness complete-peptide sequence, required cumulative smaller "wins" (rewarded by selection) at the first, second, and third positions in each of the 161 KcsA codons ("jackdons" that led to "jackacids" that led to the "jackprot"). The "jackprot" is a didactic tool to demonstrate how mutation rate coupled with natural selection suffices to explain the evolution of specialized proteins, such as the complex six-transmembrane (6TM) domain potassium, sodium, or calcium channels. Ancestral DNA sequences coding for 2TM-like proteins underwent nucleotide "edition" and gene duplications to generate the 6TM channels.

Machine learning methods and in particular random forests (RFs) are a promising alternative to standard single SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score for several runs of RF and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false-positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
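A minimal sketch of the r2VIM selection rule as described above — importances scaled by the absolute value of the minimal observed importance in each run, with a variable kept only if its relative importance is large in every run. The threshold handling and the requirement of a negative minimum per run are assumptions of this sketch:

```python
def r2vim_select(importance_runs, factor=1.0):
    """importance_runs: one list of RF variable importances per run.
    Divide each importance by |min importance| of its run (assumed
    negative for null variables) and keep the variables whose relative
    importance is >= factor in every run."""
    n_vars = len(importance_runs[0])
    keep = []
    for v in range(n_vars):
        rel = [run[v] / abs(min(run)) for run in importance_runs]
        if all(r >= factor for r in rel):
            keep.append(v)
    return keep
```

Requiring the criterion in all runs is what suppresses the run-to-run noise of a single RF importance ranking.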

Congress of the U.S., Washington, DC. House Select Committee on Aging.

This document contains prepared statements and witness testimony from the Congressional hearing on the elderly and alcohol and drug use. Opening statements are given by Committee on Aging representatives Edward Roybal and Michael Bilirakis. Witness testimony is given by representatives of the University of South Florida Gerontology Center; the…

The impact of selection bias on the results of clinical trials has been analyzed extensively for trials of two treatments, yet its impact in multi-arm trials is still unknown. In this paper, we investigate selection bias in multi-arm trials by its impact on the type I error probability. We propose two models for selection bias, so-called biasing policies, that both extend the classic guessing strategy by Blackwell and Hodges. We derive the distribution of the F-test statistic under the misspecified outcome model and provide a formula for the type I error probability under selection bias. We apply the presented approach to quantify the influence of selection bias in multi-arm trials with increasing number of treatment groups using a permuted block design for different assumptions and different biasing strategies. Our results confirm previous findings that smaller block sizes lead to a higher proportion of sequences with inflated type I error probability. Astonishingly, our results also show that the proportion of sequences with inflated type I error probability remains constant when the number of treatment groups is increased. Realizing that the impact of selection bias cannot be completely eliminated, we propose a bias adjusted statistical model and show that the power of the statistical test is only slightly deflated for larger block sizes.
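The classic Blackwell-Hodges convergence (guessing) strategy that both biasing policies extend can be simulated in a few lines for a two-arm permuted block design; the simulation estimates how often an investigator who always guesses the under-allocated arm anticipates the next assignment:

```python
import random

def guess_accuracy(block_size, n_blocks, rng):
    """Fraction of allocations correctly anticipated by the convergence
    (Blackwell-Hodges) strategy under a two-arm permuted block design:
    guess the arm allocated less often so far in the block, guessing at
    random on ties."""
    correct = total = 0
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)                 # one random permuted block
        counts = {"A": 0, "B": 0}
        for actual in block:
            if counts["A"] < counts["B"]:
                guess = "A"
            elif counts["B"] < counts["A"]:
                guess = "B"
            else:
                guess = rng.choice("AB")   # tie: pure guess
            correct += guess == actual
            counts[actual] += 1
            total += 1
    return correct / total
```

For block size 4 the exact expected accuracy works out to about 0.71, well above the 0.5 of blind guessing, which is why smaller blocks inflate the type I error under selection bias.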

DNA is constantly exposed to exogenous agents that randomly damage the genetic code. However, external perturbations might also be used to induce malignant cell death if the mutation processes are controlled. The present communication reports a set of parameters allowing DNA mutation through the use of intense external electric fields. This is a step towards the design of pulsed electric field therapy for genetic diseases.

Fault detection and diagnosis is the most important technology in condition-based maintenance (CBM) system for rotating machinery. This paper experimentally explores the development of a random forest (RF) classifier, a recently emerged machine learning technique, for multi-class mechanical fault diagnosis in bearing of an induction motor. Firstly, the vibration signals are collected from the bearing using accelerometer sensor. Parameters from the vibration signal are extracted in the form of...

This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant...... of Huang et al. (2008). Furthermore, the number of relevant variables is allowed to be larger than the sample size....

The choice of negative training data for machine learning is a little-explored issue in chemoinformatics. In this study, the influence of alternative sets of negative training data and different background databases on support vector machine (SVM) modeling and virtual screening has been investigated. Target-directed SVM models have been derived on the basis of differently composed training sets containing confirmed inactive molecules or randomly selected database compounds as negative training instances. These models were then applied to search background databases consisting of biological screening data or randomly assembled compounds for available hits. Negative training data were found to systematically influence compound recall in virtual screening. In addition, different background databases had a strong influence on the search results. Our findings also indicated that typical benchmark settings lead to an overestimation of SVM-based virtual screening performance compared to search conditions that are more relevant for practical applications.

This paper presents a new wrapper-based feature selection method for multilayer perceptron (MLP) neural networks. It uses a feature ranking criterion to measure the importance of a feature by computing the aggregate difference, over the feature space, of the probabilistic outputs of the MLP with and without the feature. Thus, a score of importance with respect to every feature can be provided using this criterion. Based on the numerical experiments on several artificial and real-world data sets, the proposed method performs, in general, better than several selected feature selection methods for MLP, particularly when the data set is sparse or has many redundant features. In addition, as a wrapper-based approach, the computational cost for the proposed method is modest.
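The ranking criterion described above can be sketched as follows. The sketch is simplified and hypothetical: instead of fully retraining the MLP without a feature, it neutralizes the feature by replacing it with its mean, and a one-unit logistic model stands in for the MLP.

```python
import math

def feature_scores(model, X):
    """Score each feature by the mean absolute change in the model's
    probabilistic output when that feature is neutralized (replaced by
    its mean), a simple stand-in for 'the MLP without the feature'."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    scores = []
    for j in range(d):
        diff = 0.0
        for row in X:
            masked = list(row)
            masked[j] = means[j]
            diff += abs(model(row) - model(masked))
        scores.append(diff / n)
    return scores

# Toy probabilistic model: a single logistic unit that ignores feature 1.
model = lambda x: 1.0 / (1.0 + math.exp(-2.0 * x[0]))

X = [[-2.0, 5.0], [-1.0, -3.0], [1.0, 4.0], [2.0, -6.0]]
scores = feature_scores(model, X)
print(scores)  # feature 0 scores high; feature 1 scores exactly 0
```

Features the model genuinely uses perturb its probabilistic output when removed; redundant or ignored features do not, which is what lets the criterion rank them.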

The aim of this study was to determine whether right arm peripherally inserted central catheters (PICCs) experienced fewer complications while controlling for gender, hand dominance, history of malignancy, dwell time and catheter size. This was an intention-to-treat randomized controlled trial conducted in an academic medical center at two different sites between September 2012 and September 2015. All patients older than 18 years of age without a known history of a previous central line or a contraindication to the use of a specific arm, or hospitalized in the intensive care unit, regardless of coagulation status, were considered for the study. Participants were randomized to the left or right arm group and were followed until catheter removal. Data collected included: PICC characteristics, insertion details, gender, arm dominance, history of malignancy, reason for insertion/removal, incidence of complications and total dwell time. One-tailed hypothesis testing using univariate logistic regression with odds ratio (OR) calculation was used to analyze the results. There were 202 patients randomly assigned, totaling 7657 catheter-days; 103 patients to the right-side group and 99 patients to the left-side group. Participants in both groups were statistically equivalent for right-handedness, gender, oncologic status, average dwell time and total catheter days. The overall incidence of complications on the right side was 23% versus 34% on the left side, confirming the hypothesis that right-sided insertions led to fewer complications (p = 0.046). The risk of a complication was reduced by 40% with right-sided insertion (OR 0.58; CI: 0.31-1.09). This study indicated fewer complications with right-sided insertion irrespective of hand dominance.

The display of peptide sequences on the surface of bacteria is a technology that offers exciting applications in biotechnology and medical research. Type 1 fimbriae are surface organelles of Escherichia coli which mediate D-mannose-sensitive binding to different host surfaces by virtue of the FimH adhesin. FimH is a component of the fimbrial organelle that can accommodate and display a diverse range of peptide sequences on the E. coli cell surface. In this study we have constructed a random peptide library in FimH. The library, consisting of approximately 40 million individual clones, was screened… that completely novel Zn2+-binding peptide sequences had been isolated. By changing the protein scaffold system, we demonstrated that the Zn2+-binding seems to be uniquely mediated by the peptide insert and to be independent of the sequence of the carrier protein. These findings might be applied in the design…

Privacy-preserving data queries for wireless sensor networks (WSNs) have drawn much attention recently. This paper proposes a privacy-preserving MAX/MIN query processing approach based on random secure comparator selection in a two-tiered sensor network, denoted RSCS-PMQ. The secret comparison model is built on the basis of the secure comparator, which is defined by 0-1 encoding and HMAC. The algorithm MaxRSC for generating the minimal set of highest secure comparators is proposed, which is the key to realizing RSCS-PMQ. In the data collection procedure, the sensor node randomly selects a generated secure comparator to encode its maximum data into ciphertext, which is submitted to the nearby master node. In the query processing procedure, the master node utilizes the MaxRSC algorithm to determine the corresponding minimal set of candidate ciphertexts containing the query results and returns it to the base station. The base station then obtains the plaintext query result through decryption. The theoretical analysis and experimental results indicate that RSCS-PMQ can preserve the privacy of sensor data and query results from master nodes even if they are compromised, and that it outperforms existing approaches in network communication cost.
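The "0-1 encoding and HMAC" building block mentioned above is a known private-comparison trick: x > y exactly when the 1-encoding of x and the 0-encoding of y share an element, and HMACing both sets lets an untrusted party test the intersection without seeing the values. A minimal sketch follows; the key and the 8-bit width are illustrative assumptions, and this is not the full RSCS-PMQ protocol.

```python
import hmac, hashlib

def bits(x, n):
    return format(x, f'0{n}b')

def one_encoding(x, n):
    """All prefixes of x's binary form that end at a 1 bit."""
    b = bits(x, n)
    return {b[:i + 1] for i in range(n) if b[i] == '1'}

def zero_encoding(x, n):
    """For each 0 bit, the preceding prefix with that bit flipped to 1."""
    b = bits(x, n)
    return {b[:i] + '1' for i in range(n) if b[i] == '0'}

def mac_set(strings, key):
    """HMAC every element so the comparison can run on hashed values only."""
    return {hmac.new(key, s.encode(), hashlib.sha256).hexdigest() for s in strings}

def greater_than(x, y, n=8, key=b'shared-secret'):
    """x > y iff the 1-encoding of x intersects the 0-encoding of y."""
    return bool(mac_set(one_encoding(x, n), key) & mac_set(zero_encoding(y, n), key))

print(greater_than(9, 5), greater_than(5, 9), greater_than(7, 7))
# -> True False False
```

Because only HMAC digests are exchanged, a compromised intermediate node can test whether two encodings intersect without learning the underlying readings, which is the property the MAX/MIN query relies on.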

Since the solution to many public health problems depends on research, it is critical for scientific progress and for the well-being of patients that we can trust the scientific literature. Misconduct and poor laboratory practice in science threaten scientific progress, lead to loss of productivity and increased healthcare costs, and endanger the lives of patients. Data duplication may represent one of the challenges related to these problems. In order to estimate the frequency of data duplication in the life science literature, a systematic screen of 120 original scientific articles published in three cancer-related journals of differing journal impact factor (IF) was completed. The study revealed a surprisingly high proportion of articles containing data duplication. For the IF 20 journals, 25% of the articles were found to contain data duplications. The IF 5-10 journal showed a comparable proportion (22.5%). The proportion of articles containing duplicated data was comparable between the three journals and no significant correlation with journal IF was found. The editorial offices of the journals included in this study and the individual authors of the detected articles were contacted to clarify the individual cases. The editorial offices did not reply and only 1 out of 29 cases was apparently clarified by the authors, although no supporting data were supplied. This study questions the reliability of the life science literature: it illustrates that data duplications are widespread and independent of journal impact factor, and calls for a reform of the current peer review and retraction processes of scientific publishing.

Study design: randomized clinical trial. Objective: to determine whether age affects outcomes from differing treatments in patients with spinal metastases. Recently, class I data were published supporting surgery with radiation over radiation alone for patients with malignant epidural spinal cord compression (MESCC). However, the criteria to properly select candidates for surgery remain controversial and few independent variables that predict success after treatment have been identified. Data for this study were obtained in a randomized clinical trial comparing surgery versus radiation for MESCC. Hazard ratios were determined for the effect of age and the interaction between age and treatment. Age estimates at which prespecified relative risks could be expected were calculated with greater than 95% confidence to suggest possible age cut points for further stratification. Multivariate models and Kaplan-Meier curves were tested using stratified cohorts for both treatment groups in the randomized trial, each divided into 2 age groups. Secondary data analysis with age stratification demonstrated a strong interaction between age and treatment (hazard ratio = 1.61, P = 0.01), such that as age increases, the chance of surgery being equal to radiation alone increases. The best estimate for the age at which surgery is no longer superior to radiation alone was calculated to be between 60 and 70 years of age (95% CI), using sequential prespecified relative risk ratios. Multivariate modeling and Kaplan-Meier curves for stratified treatment groups showed that there was no difference in outcome between treatments for patients ≥65 years of age, whereas ambulation preservation was significantly prolonged in younger patients treated surgically. Age is thus an important variable in predicting preservation of ambulation and survival for patients being treated for spinal metastases. Our results provide compelling evidence for the first time that particular age cut points may help in selecting patients for surgical or nonsurgical intervention based on outcome.

Congress of the U.S., Washington, DC. House Select Committee on Narcotics Abuse and Control.

This document contains witness testimonies and prepared statements from the Congressional hearing called to examine drug abuse in the workplace, how the public and private sectors are dealing with this problem, and the issue of urine testing. Opening statements are included from Representatives Charles Rangel, Benjamin Gilman, Frank Guarini, Mel…

In response to Representative Edward Roybal's concern that aging organizations have used direct mail in a manner that might frighten, threaten, or coerce the elderly into contributing money or buying products from these organizations, the General Accounting Office (GAO) agreed to: (1) identify federal agencies with jurisdiction in reviewing the…

Congress of the U.S., Washington, DC. House Select Committee on Children, Youth, and Families.

This document presents witness testimonies and prepared statements from the Congressional hearing called to examine the effects of homelessness on children and families. In their opening statements, Representatives George Miller and Dan Coats emphasize that homelessness threatens the physical health and safety of children, places them at risk of…

Congress of the U.S., Washington, DC. House Committee on Education and Labor.

The transcript of the 1986 House of Representatives hearings on deaf education programs contains verbatim testimony and committee questions, prepared statements, letters, and supplemental material. The Hearings concerned legislation already passed by the Senate which would provide for the continuation of Gallaudet College; would combine the…

Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, has several deficiencies. Firstly, assigning the fault detection task only to the manager node puts the whole network out of balance and quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a simple random sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.
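The two mechanisms described above, simple random sampling of probe stations and dynamic adjustment of the probing frequency, can be sketched as follows. The node names, interval bounds and the halving/doubling rule are illustrative assumptions, not the paper's exact parameters.

```python
import random

def select_probe_stations(nodes, k, rng=random):
    """Simple random sampling of k probe stations, spreading the
    detection workload instead of always burdening the manager node."""
    return rng.sample(nodes, k)

def adjust_interval(interval, faults_found, min_s=5, max_s=300):
    """Probe more often when faults are detected, less often when the
    network looks healthy, clipped to [min_s, max_s] seconds."""
    interval = interval / 2 if faults_found else interval * 2
    return max(min_s, min(max_s, interval))

rng = random.Random(42)
nodes = [f"node-{i}" for i in range(100)]
probes = select_probe_stations(nodes, 10, rng)
print(len(probes), len(set(probes)))                      # -> 10 10

interval = 60
interval = adjust_interval(interval, faults_found=False)  # healthy: back off to 120
interval = adjust_interval(interval, faults_found=True)   # fault seen: tighten to 60
print(interval)
```

Re-sampling the probe stations each round rotates the energy cost of probing across the network, which is what prolongs overall lifetime.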

The aim of the study was to determine which kind of physical activity could be useful to inmate populations to improve their health status and fitness levels. A repeated-measures design was used to evaluate the effects of two different training protocols on subjects in a state of detention, tested pre- and post-experimental protocol. Seventy-five male subjects were enrolled in the study and randomly allocated to three groups: the cardiovascular plus resistance training protocol group (CRT) (n = 25; mean age 30.9 ± 8.9 years), the high-intensity strength training protocol group (HIST) (n = 25; mean age 33.9 ± 6.8 years), and a control group (C) (n = 25; mean age 32.9 ± 8.9 years) receiving no treatment. All subjects underwent a clinical assessment and fitness tests. MANOVA revealed significant multivariate effects of group (p < 0.01) and group-training interaction (p < 0.05). The CRT protocol proved the most effective protocol for reaching the best outcome in fitness tests. Both the CRT and HIST protocols produced significant gains in functional capacity (cardio-respiratory capacity) and decreases in cardiovascular disease risk in incarcerated males. The significant gains obtained in functional capacity reflect the great potential of supervised exercise interventions for improving the health status of incarcerated people.

Considerable criticism has lately been raised by the media regarding the quality of Swiss medical expertises. The present investigation was therefore undertaken to assess the professional quality of Swiss medical expertises. The study was part of a market analysis of medical expertises (MGS study). A sample of 97 anonymised expertises, randomly chosen from a total of 3165 collected in the MGS study over a period of 3 months, was evaluated by an international board of medical experts and reviewers using a stepwise-developed questionnaire. Each expertise was independently evaluated by two experts. Data were then tested for plausibility (obvious errors and misunderstandings). The main outcome was the overall quality rating of the expertise, graded from 1 (very poor) to 6 (excellent) in analogy to the Swiss school grading system. For analysis and interpretation the grades were divided into sufficient (grades >= 4) and insufficient (grades < 4). Overall, 19.6% (95% confidence interval: 13.1%; 28.3%) of the expertises were rated to be of insufficient quality. The quality was inversely related to the number of involved medical disciplines and the time elapsed since injury, and positively related to the difficulty of the expertise. In addition, expertises in the French and Italian languages were rated superior to those in German. Our results confirm recent criticisms that the professional quality of expertises does not suffice. This is hardly acceptable in the face of the financial and personal consequences. There is an obvious need for further research using larger samples and for educational programmes at all levels.

MicroRNAs (miRNAs) are key gene expression regulators in plants and animals. miRNAs are therefore involved in several biological processes, making the study of these molecules one of the most relevant topics of molecular biology nowadays. However, characterizing miRNAs in vivo is still a complex task. As a consequence, in silico methods have been developed to predict miRNA loci. A common ab initio strategy to find miRNAs in genomic data is to search for sequences that can fold into the typical hairpin structure of miRNA precursors (pre-miRNAs). The current ab initio approaches, however, have selectivity issues, i.e., a high number of false positives is reported, which can lead to laborious and costly attempts to provide biological validation. This study presents an extension of the ab initio method miRNAFold, with the aim of improving selectivity through machine learning techniques, namely random forest combined with the SMOTE procedure, which copes with imbalanced datasets. By comparing our method, termed Mirnacle, with other important approaches in the literature, we demonstrate that Mirnacle substantially improves selectivity without compromising sensitivity. For the three datasets used in our experiments, our method achieved at least 97% sensitivity and could deliver a two-fold, 20-fold, and 6-fold increase in selectivity, respectively, compared with the best results of current computational tools. The extension of miRNAFold by the introduction of machine learning techniques significantly increases selectivity in pre-miRNA ab initio prediction, which optimally contributes to advanced studies on miRNAs, as the need for biological validation is diminished. Hopefully, new research, such as studies of severe diseases caused by miRNA malfunction, will benefit from the proposed computational tool.
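The SMOTE step used to cope with imbalanced datasets generates synthetic minority samples by interpolating between a minority sample and one of its k nearest minority neighbours. A minimal stdlib sketch of that idea (not Mirnacle's implementation; the toy data are arbitrary):

```python
import random, math

def smote(minority, n_new, k=3, rng=random):
    """Generate n_new synthetic minority samples: pick a minority
    sample, pick one of its k nearest minority neighbours, and place
    a new point at a random position on the segment between them."""
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation weight in [0, 1)
        synthetic.append([xi + lam * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic

rng = random.Random(0)
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
new = smote(minority, n_new=5, rng=rng)
print(len(new))  # -> 5
```

Because new points lie between existing minority samples rather than being duplicated, the classifier sees a denser but still plausible minority region, which reduces the bias towards the majority class.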

Abnormalities of the renin-angiotensin system have been reported in patients with diabetes mellitus and with diabetic complications. In this study, plasma concentrations of prorenin, renin, and aldosterone were measured in a stratified random sample of 110 insulin-dependent (Type 1) diabetic patients attending our outpatient clinic. Fifty-four age- and sex-matched control subjects were also examined. Plasma prorenin concentration was higher in patients without complications than in control subjects when upright (geometric mean (95% confidence interval, CI): 75.9 (55.0-105.6) vs 45.1 (31.6-64.3) mU l-1, p < 0.05). There was no difference in plasma prorenin concentration between patients without and with microalbuminuria, or between patients without and with background retinopathy. Plasma renin concentration, both when supine and upright, was similar in control subjects, in patients without complications, and in patients with varying degrees of diabetic microangiopathy. Plasma aldosterone was suppressed in patients without complications in comparison to control subjects (74 (58-95) vs 167 (140-199) ng l-1, p < 0.001) and was also suppressed in patients with microvascular disease. Plasma potassium was significantly higher in patients than in control subjects (mean +/- standard deviation: 4.10 +/- 0.36 vs 3.89 +/- 0.26 mmol l-1; p < 0.001) and plasma sodium was significantly lower (138 +/- 4 vs 140 +/- 2 mmol l-1; p < 0.001). We conclude that plasma prorenin is not a useful early marker for diabetic microvascular disease. Despite apparently normal plasma renin concentrations, plasma aldosterone is suppressed in insulin-dependent diabetic patients.

The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the feature selection result. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
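The two central steps, permutation importance and the sequential backward search over PI values, can be sketched with a toy linear model standing in for the RF; the model, data and stopping rule here are illustrative, not the paper's.

```python
import random

W = [1.0, 0.0, 0.0]  # toy stand-in for the trained model: y depends on x0 only
model = lambda row, cols: sum(W[j] * row[j] for j in cols)

def mse(model, X, y, cols):
    return sum((model(row, cols) - t) ** 2 for row, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, cols, rng):
    """PI of a feature = increase in error after shuffling that feature's
    column while leaving everything else untouched."""
    base = mse(model, X, y, cols)
    pi = {}
    for j in cols:
        col = [row[j] for row in X]
        rng.shuffle(col)
        Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        pi[j] = mse(model, Xp, y, cols) - base
    return pi

def backward_search(model, X, y, cols, rng):
    """Repeatedly drop the lowest-PI feature while the error does not worsen."""
    cols = list(cols)
    while len(cols) > 1:
        pi = permutation_importance(model, X, y, cols, rng)
        worst = min(cols, key=lambda j: pi[j])
        reduced = [c for c in cols if c != worst]
        if mse(model, X, y, reduced) <= mse(model, X, y, cols):
            cols = reduced
        else:
            break
    return cols

rng = random.Random(1)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
y = [row[0] for row in X]            # only feature 0 matters
subset = backward_search(model, X, y, [0, 1, 2], rng)
print(subset)  # -> [0]
```

Shuffling a useful feature inflates the error (high PI), while shuffling an irrelevant one leaves it unchanged (PI near zero), so the backward search prunes the irrelevant columns first.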

Swimming pools have become major recreation facilities for leisure and sports in cities across the world, but standard guidelines, particularly in developing countries, are not adhered to because little is known about the contaminants in the pools and the possible health risks involved. This study provides a survey of the bacterial quality of water from swimming pools in Kampala. A total of 26 water samples were collected from 13 outdoor swimming pools in Kampala between January and June 2016 and analysed for total aerobic plate count (TPC), Escherichia coli, coliforms, and Salmonella. The heterotrophic bacterial load ranged between 0 and 6.35 × 10^5 cfu/ml, where 6.35 × 10^5 cfu/ml was the highest load and 3 × 10^1 cfu/ml the lowest. The highest average TPC was 6.19 × 10^5 cfu/ml and the lowest 5.07 × 10^3 cfu/ml. 30.8% of the pools had TPC within acceptable limits (≤5 × 10^2 cfu/ml), whereas 69.2% were highly contaminated and did not conform to the Uganda National Water and Sewerage Corporation standards of recreational water quality for both treated (0 cfu/100 ml) and untreated (10 cfu/100 ml) water. Although no positive results were obtained for E. coli, coliforms, or Salmonella, the TPC indicated the presence of heterotrophic bacteria, which are often implicated in opportunistic infections.

The loss, fragmentation and degradation of habitat everywhere on Earth prompts increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least cost path, LCP) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP), which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movement (as θ approaches infinity, RSP converges to the LCP) and random walk (as θ → 0, RSP converges to current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration. Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP

Introduction: The mental foramen usually marks the anterior limit of the inferior dental canal and is located in the body of the mandible between the inferior and alveolar margins. Accurate identification of the position of the mental foramen is important for both diagnostic and clinical procedures on the mandible. Objectives: To determine the most common type and position of the mental foramen in a selected population of Maharashtra. Materials and Methods: A total of 448 orthopantomographs showing the mental foramen bilaterally were considered for this study. The type, position, and symmetry of the mental foramen on contralateral sides were noted for both genders. The frequency and percentage of each type, position, and symmetry of the mental foramen were calculated statistically. Results: The majority of mental foramina were of the separate type (n = 554, 61.8%), followed by the continuous type (n = 342, 38.2%). The most common position of the mental foramen was position 4 (n = 554, 61.8%), followed by position 3 (n = 289, 32.2%). The mental foramina were bilaterally symmetrical (n = 246, 54.9%) and asymmetrical (n = 202, 45.1%) in the radiographs. Significant differences in position between the right and left sides were observed in both genders. Conclusion: The separate type of mental foramen was most predominant and the most common location was position 4, followed by position 3.

Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and


We optimized a panel of microsatellite markers from cat and tiger genetic data for efficient genetic monitoring and used it to analyze the genetic structure of an outbred cat stock in China. We selected a set of highly polymorphic microsatellite loci from 131 cat microsatellite loci and 3 Sumatran tiger microsatellite loci using agarose gel electrophoresis. Next, the set of optimized genetic markers was used to analyze the genetic variation in an outbred population of orange tabby cats in China by simple tandem repeat scanning. Thirty-one loci rich in polymorphisms were selected, and the highest allele number at a single locus was 8. Analysis of the orange tabby cat population showed that the average observed number of alleles, mean effective allele number, mean Shannon's information index, mean expected heterozygosity, and mean observed heterozygosity were 3.8387, 2.4027, 0.9787, 0.5565, and 0.5528, respectively. The 31 microsatellite markers used were polymorphic and suitable for analyzing the genetic structure of cats. The population of orange tabby cats was confirmed to be a well-outbred stock.
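The diversity statistics reported above are standard functions of allele frequencies: effective allele number Ne = 1/Σp², Shannon's information index I = −Σp ln p, and expected heterozygosity He = 1 − Σp². A worked example on a hypothetical locus (the allele counts are illustrative, not the study's data):

```python
import math

def locus_stats(allele_counts):
    """Per-locus diversity measures from allele counts:
    Ne = 1 / sum(p^2), I = -sum(p * ln p), He = 1 - sum(p^2)."""
    total = sum(allele_counts)
    p = [c / total for c in allele_counts]
    sum_p2 = sum(pi ** 2 for pi in p)
    ne = 1.0 / sum_p2
    shannon = -sum(pi * math.log(pi) for pi in p if pi > 0)
    he = 1.0 - sum_p2
    return ne, shannon, he

# Hypothetical locus with four alleles observed 40, 30, 20 and 10 times.
ne, shannon, he = locus_stats([40, 30, 20, 10])
print(round(ne, 3), round(shannon, 3), round(he, 3))  # -> 3.333 1.28 0.7
```

Averaging these per-locus values over all 31 loci yields the population-level means quoted in the abstract.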

DNA-binding proteins are fundamentally important in cellular processes. Several computational-based methods have been developed to improve the prediction of DNA-binding proteins in previous years. However, insufficient work has been done on the prediction of DNA-binding proteins from protein sequence information. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and the binding propensity of DNA-binding residues and non-binding propensities of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during the model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggested that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at http://www.cbi.seu.edu.cn/DNABP/.
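The mRMR criterion used during model construction scores each candidate feature as its relevance to the target minus its mean redundancy with the already-selected features; IFS then evaluates classifiers on growing prefixes of the resulting ranking. A simplified sketch of the greedy mRMR ordering, using absolute Pearson correlation for both terms rather than the paper's mutual-information formulation:

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def mrmr_rank(features, target):
    """Greedy mRMR ordering: at each step add the feature maximizing
    relevance (|corr with target|) minus mean redundancy (|corr| with
    already-selected features)."""
    remaining = list(range(len(features)))
    selected = []
    while remaining:
        def score(j):
            rel = abs(pearson(features[j], target))
            red = (sum(abs(pearson(features[j], features[s])) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

x0 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x1 = list(x0)                          # perfectly redundant copy of x0
x2 = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]   # weaker but non-redundant signal
y  = [a + 2.0 * b for a, b in zip(x0, x2)]
order = mrmr_rank([x0, x1, x2], y)
print(order)  # -> [0, 2, 1]: the copy of x0 is pushed last as redundant
```

Note how the duplicate feature, despite being as relevant as the original, is ranked last: its redundancy penalty cancels its relevance, which is exactly the behaviour mRMR is designed for.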

Nitrogen inputs into surface waters from diffuse sources are still unduly high, and the assessment of mitigation measures is associated with large uncertainties. The objective of this paper is to investigate the effect of selected agricultural management scenarios on nitrogen loads and to assess the impact of differing catchment characteristics in central Germany. A new modelling approach, which simulates spatially distributed N-transport and transformation processes in soil and groundwater, was applied to three meso-scale catchments with strongly differing climate, soil, and topography. The approach uses the integrated modelling framework JAMS to link an agro-ecosystem model, a rainfall-runoff model, and a groundwater nitrogen transport model. Different agricultural management measures with differing levels of acceptance were analysed in the three study catchments. N-leaching rates in all three catchments varied with soil type, the lowest leaching rates being obtained for the loess-soil catchment (18.5 kg nitrate-N ha(-1) yr(-1)) and the highest for the sandy-soil catchment (41.2 kg nitrate-N ha(-1) yr(-1)). The simulated baseflow nitrogen concentrations varied between the catchments from 1 to 6 mg N l(-1), reflecting the nitrogen reduction capacity of the subsurface. The management scenarios showed that the highest reduction in N leaching could be achieved by well site-adapted agricultural management options. Nitrogen retention in the subsurface did not alter the ranking of the management scenarios calculated as losses from the soil zone. The reduction effect depended strongly on site-specific conditions, especially climate, soil type, and the regional composition of the crop rotations.

Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will become greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites and two features of Random Forests: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables, we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
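The variable-importance idea can be illustrated with a self-contained permutation-importance sketch. The bands, labels, and stand-in classifier below are all synthetic, and the paper itself uses Random Forests' built-in (Gini) importance rather than this stand-in:

```python
import random

random.seed(0)
N_BANDS = 6

def make_sample():
    x = [random.gauss(0, 1) for _ in range(N_BANDS)]
    y = 1 if x[1] + x[4] > 0 else 0   # only bands 1 and 4 carry signal
    return x, y

data = [make_sample() for _ in range(400)]

def predict(x):
    # Stand-in for a trained model that has learned the informative bands.
    return 1 if x[1] + x[4] > 0 else 0

def accuracy(permute=None):
    # Optionally shuffle one band's column to break its link with the labels.
    rows = [list(x) for x, _ in data]
    if permute is not None:
        col = [r[permute] for r in rows]
        random.shuffle(col)
        for r, v in zip(rows, col):
            r[permute] = v
    return sum(predict(r) == y for r, (_, y) in zip(rows, data)) / len(data)

base = accuracy()
importance = [base - accuracy(permute=j) for j in range(N_BANDS)]
ranked = sorted(range(N_BANDS), key=lambda j: -importance[j])
print(sorted(ranked[:2]))  # the informative bands rank first: [1, 4]
```

Selecting the top-ranked half of the bands, as in the case study, would then simply keep `ranked[:N_BANDS // 2]`.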

A personalized treatment policy requires defining the optimal treatment for each patient based on their clinical and other characteristics. Here we consider a situation commonly encountered in practice when analyzing data from observational cohorts: there are auxiliary variables that affect both the treatment and the outcome, yet these variables are not of primary interest to be included in a generalizable treatment strategy. Furthermore, there is not enough prior knowledge of the effect of the treatments or of the importance of the covariates to explicitly specify the dependency between the outcome and the different covariates, so we choose a model that is flexible enough to accommodate the possibly complex association between the outcome and the covariates. We consider observational studies with a survival outcome and propose to use Random Survival Forest with Weighted Bootstrap (RSFWB) to model the counterfactual outcomes while marginalizing over the auxiliary covariates. By maximizing the restricted mean survival time, we estimate the optimal regime for a target population based on a selected set of covariates. Simulation studies illustrate that the proposed method performs reliably across a range of different scenarios. We further apply RSFWB to a prostate cancer study.
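The quantity being maximized, the restricted mean survival time (RMST), is the area under the survival curve up to a truncation time tau; a minimal sketch with an illustrative step curve:

```python
# RMST = integral of S(t) from 0 to tau, for a step survival curve.
def rmst(times, surv, tau):
    """times: ascending event times; surv[i]: S(t) just after times[i]; S(0)=1."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(times, surv):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)   # rectangle up to the next drop
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)     # final rectangle up to tau
    return area

# Illustrative curve: S drops to 0.8 at t=2, 0.5 at t=5, 0.2 at t=8.
print(round(rmst([2, 5, 8], [0.8, 0.5, 0.2], tau=10), 2))  # 6.3
```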

The Vehicle Routing Problem (VRP) often occurs when a manufacturer needs to distribute its product to customers/outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor. This type of VRP is also known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve a CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and defining chromosome gender for parents undergoing the crossover operation. The performance of the established algorithms was then compared to a heuristic procedure for solving the soft drink distribution case. Several findings are revealed: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) yields a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) could further improve on the heuristic method, although it tends to yield worse results than the standard BRKGA.
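The core random-key idea can be sketched in a few lines: each customer gets a key in [0, 1), the sorted keys define a visiting order, and the order is split into capacity-feasible routes. The customers, demands, and capacity below are hypothetical, and time windows are omitted for brevity:

```python
import random

random.seed(1)
customers = ["A", "B", "C", "D", "E"]
chromosome = [random.random() for _ in customers]   # one random key per customer

# Decode: visiting order = customers sorted by their key values.
order = [c for _, c in sorted(zip(chromosome, customers))]

# Split the decoded order into routes under a hypothetical vehicle capacity.
demand = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 6}
capacity = 10
routes, current, load = [], [], 0
for c in order:
    if load + demand[c] > capacity:    # vehicle full: start a new route
        routes.append(current)
        current, load = [], 0
    current.append(c)
    load += demand[c]
routes.append(current)
print(routes)
```

In a full BRKGA, many such chromosomes evolve by biased crossover (each gene copied from the elite parent with probability greater than 0.5), and the decoder above maps any chromosome to a feasible solution.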

Background: This study investigated the long-term effectiveness of Preventure, a selective personality-targeted prevention program, in reducing the uptake of alcohol, harmful use of alcohol, and alcohol-related harms over a 3-year period. Methods: A cluster randomized controlled trial was conducted to assess the effectiveness of Preventure.…

Background and objective: Medical record documentation is often used to protect patients' legal rights; it also provides information for medical research, general studies, education of health care staff, and qualitative surveys. There is a need to control the amount of data entered in patients' medical record sheets, considering that these sheets are often completed after service delivery to the patient. Therefore, this study analyzed the completeness of medical history, operation report, and physician order sheets by different documenters in Jahrom teaching hospitals during 2009. Methods and Materials: In this descriptive, retrospective study, 400 medical record sheets of patients from two teaching hospitals affiliated with Jahrom medical university were randomly selected. The data collection tool was a checklist based on the content of the medical history sheet, operation report, and physician order sheets. The data were analyzed with SPSS (version 10) and Microsoft Office Excel 2003. Results: The average proportions of personal (demographic) data entered in the medical history, physician order, and operation report sheets by departments' secretaries were 32.9, 35.8, and 40.18 percent, respectively. The average proportion of clinical data entered by physicians in the medical history sheet was 38 percent. Surgical data entered by the surgeon in the operation report sheet reached 94.77 percent, while data entered by the operating room nurse in the operation report sheet averaged 36.78 percent; physician order data entered by physicians in the physician order sheet averaged 99.3 percent. Conclusion: According to this study, the rate of completion of the reviewed record sheets by documenters in Jahrom teaching hospitals was not desirable, and in some cases was very weak and incomplete. This deficiency was due to various reasons, such as negligence by medical record documenters, lack of adequate education for documenters, and high work…

Background: Studies addressing the effects of aerobic exercise and a prudent diet on lipid and lipoprotein concentrations in adults have reached conflicting conclusions. The purpose of this study was to determine the effects of aerobic exercise combined with a prudent diet on lipid and lipoprotein concentrations in adults. Methods: Studies were located by searching nine electronic databases, cross-referencing, and expert review. Two independent reviewers selected studies that met the following criteria: (1) randomized controlled trials; (2) aerobic exercise combined with diet recommendations (saturated/trans fat intake less than 10% of total calories and cholesterol less than 300 mg/day, and/or fiber intake ≥25 g/day in women and ≥35 g/day in men); (3) intervention ≥4 weeks; (4) humans ≥18 years of age; (5) published studies, including dissertations and Master's theses; (6) studies published in any language; (7) studies published between January 1, 1955 and May 1, 2009; (8) assessment of one or more of the following lipid and lipoprotein concentrations: total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), ratio of TC to HDL-C, non-HDL-C, low-density lipoprotein cholesterol (LDL-C) and triglycerides (TG). Two reviewers independently extracted all data. Random-effects models that account for heterogeneity, with 95% confidence intervals, were used to pool findings. Results: Of the 1,401 citations reviewed, six studies representing 16 groups (8 intervention, 8 control) and up to 559 men and women (282 intervention, 277 control) met the criteria for analysis. Statistically significant intervention-minus-control reductions were found for TC (-15.5 mg/dl, 95% CI -20.3 to -10.7), TC:HDL-C (-0.4, 95% CI -0.7 to -0.2), LDL-C (-9.2 mg/dl, 95% CI -12.7 to -5.8) and TG (-10.6 mg/dl, 95% CI -17.2 to -4.0), but not HDL-C (-0.5 mg/dl, 95% CI -4.0 to 3.1). Changes were equivalent to reductions of 7.5%, 6.6%, 7.2% and 18.2%, respectively.
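The random-effects pooling referred to above is typically the DerSimonian-Laird estimator; a hedged sketch with illustrative per-study effects and variances (not the included studies' actual data):

```python
import math

# Illustrative per-study mean differences (mg/dl) and their variances.
effects = [-12.0, -18.0, -15.0, -16.5]
variances = [4.0, 6.0, 5.0, 3.5]

# Fixed-effect weights and Cochran's Q for heterogeneity.
w = [1 / v for v in variances]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)          # DerSimonian-Laird between-study variance

# Random-effects pooled estimate and 95% CI.
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(round(pooled, 2), tuple(round(x, 2) for x in ci))
```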

The invention relates to virus-like particles of bacteriophage MS2 (MS2 VLPs) displaying peptide epitopes, or peptide mimics of epitopes, of the Nipah Virus envelope glycoprotein that elicit an immune response against Nipah Virus upon vaccination of humans or animals. Affinity selection on Nipah Virus-neutralizing monoclonal antibodies using random-sequence peptide libraries on MS2 VLPs selected peptides with sequence similarity to peptide sequences found within the envelope glycoprotein of Nipah itself, thus identifying the epitopes the antibodies recognize. The selected peptide sequences themselves are not necessarily identical in all respects to a sequence within the Nipah Virus glycoprotein, and therefore may be referred to as epitope mimics. VLPs displaying these epitope mimics can serve as a vaccine. On the other hand, display of the corresponding wild-type sequence derived from Nipah Virus, corresponding to the epitope mapped by affinity selection, may also be used as a vaccine.

Background: Regularized regression methods such as principal component or partial least squares regression perform well in learning tasks on high-dimensional spectral data, but cannot explicitly eliminate irrelevant features. The random forest classifier with its associated Gini feature importance, on the other hand, allows for an explicit feature elimination, but may not be optimally adapted to spectral data due to the topology of its constituent classification trees, which are based on orthogonal splits in feature space. Results: We propose to combine the best of both approaches, and evaluated the joint use of a feature selection based on recursive feature elimination using the Gini importance of random forests together with regularized classification methods, on spectral data sets from medical diagnostics, chemotaxonomy, biomedical analytics, food science, and synthetically modified spectral data. Here, feature selection using the Gini feature importance with regularized classification by discriminant partial least squares regression performed as well as or better than filtering according to different univariate statistical tests, or using regression coefficients in a backward feature elimination. It outperformed the direct application of the random forest classifier, and the direct application of the regularized classifiers on the full set of features. Conclusion: The Gini importance of the random forest provided superior means for measuring feature relevance on spectral data, but, on an optimal subset of features, the regularized classifiers might be preferable over the random forest classifier, in spite of their limitation to modelling linear dependencies only. A feature selection based on Gini importance, however, may precede a regularized linear classification to identify this optimal subset of features, and to earn a double benefit of both dimensionality reduction and the elimination of noise from the classification task.
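The Gini importance that drives the feature elimination is the impurity decrease a feature achieves at tree splits; a minimal single-split illustration on toy two-class data (not the paper's spectra):

```python
# Rank features by the Gini impurity decrease of their best single split,
# the quantity a tree accumulates into its "Gini importance".
def gini(labels):
    """Gini impurity for binary 0/1 labels: 2*p*(1-p)."""
    p1 = sum(labels) / len(labels)
    return 2 * p1 * (1 - p1)

def best_split_decrease(values, labels):
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    parent = gini(labels)
    best = 0.0
    for i in range(1, n):                       # try every split point
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        child = (i / n) * gini(left) + ((n - i) / n) * gini(right)
        best = max(best, parent - child)
    return best

# Tiny synthetic "spectra": feature 0 separates the classes, feature 1 is noise.
X = [[0.1, 0.9], [0.2, 0.1], [0.3, 0.8], [0.8, 0.2], [0.9, 0.7], [1.0, 0.3]]
y = [0, 0, 0, 1, 1, 1]
decreases = [best_split_decrease([row[j] for row in X], y) for j in range(2)]
print(decreases[0] > decreases[1])  # True: the informative feature ranks higher
```

Recursive feature elimination then repeatedly drops the lowest-ranked features and re-fits, keeping the subset with the best validation performance.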

In E. coli cells, small ribosomal subunit biogenesis is regulated by RNA-protein interactions involving protein S7. S7 initiates subunit assembly by interacting with 16S rRNA. During a shift-down in the level of rRNA synthesis, free S7 inhibits its own translation by interacting with a specific 96-nucleotide region of the streptomycin (str) mRNA between the S12 and S7 cistrons (the intercistron). Many bacteria do not have this extended intercistron, which challenges the development of specific approaches for finding putative mRNA regulatory regions able to interact with proteins. This paper describes the application of the SERF approach (Selection of Random RNA Fragments) to reveal regulatory regions of the str mRNA. A set of random DNA fragments was generated from the str operon by random hydrolysis and then transcribed into RNA; the fragments able to bind protein S7 ('serfamers') were selected over iterative rounds. S7 binds to a single serfamer, 109 nucleotides long (RNA109), derived from the intercistron. After multiple rounds of copying and selection, the intercistronic mutant RNA109 was isolated; it has enhanced affinity for S7. RNA109 binds the protein better than the authentic intercistronic str mRNA; the apparent dissociation constants are 26 +/- 5 and 60 +/- 8 nM, respectively. The location of the S7 binding site on the mRNA, as well as a putative mode of regulation of the coupled translation of the S12 and S7 cistrons, are hypothesized.
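The affinity difference implied by the two dissociation constants can be made concrete with the standard single-site binding relation, theta = [P] / ([P] + Kd); the free protein concentration below is hypothetical:

```python
# Fraction of RNA bound at a given free protein concentration,
# from the apparent dissociation constant (single-site binding model).
def fraction_bound(p_nm, kd_nm):
    return p_nm / (p_nm + kd_nm)

# Reported apparent Kd values: 26 nM (RNA109) vs 60 nM (authentic intercistron);
# the 30 nM protein concentration is illustrative only.
print(round(fraction_bound(30, 26), 2), round(fraction_bound(30, 60), 2))  # 0.54 0.33
```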

Data concerning the virulence and pathogenesis of South African strains of Staphylococcus aureus are limited. We investigated host-pathogen interactions of randomly selected clinical S. aureus isolates representing various clones. We characterized the ability of isolates to adhere to fibronectin,

This paper analyzes the effect of school vouchers on student sorting - defined as a flight to private schools by high-income and committed public-school students - and whether vouchers can be designed to reduce or eliminate it. Much of the existing literature investigates sorting in cases where private schools can screen students. However, publicly funded U.S. voucher programs require a private school to accept all students unless it is oversubscribed and to pick students randomly if it is ov...

This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were...

An intercomparative analysis of the concentrations of heavy metals (zinc, cadmium, lead, copper, mercury, iron and calcium) in head hair of a randomly selected sample of Kenyan people was undertaken using the techniques of atomic absorption spectrophotometry (AAS) and differential pulse anodic stripping voltammetry (DPASV). The percent relative standard deviation for each sample analysed shows good sensitivity and good correlation between the techniques. DPASV was found to be slightly more sensitive than the AAS instrument used. The recalculated body burden ratios of Cd to Zn and Pb to Fe reveal no unusual health impairment symptoms and suggest a relatively clean environment in Kenya.
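The percent relative standard deviation used to compare the two techniques is a one-line statistic; the replicate readings below are hypothetical:

```python
import statistics

# %RSD = 100 * (sample standard deviation / mean) for replicate measurements.
def percent_rsd(values):
    return 100 * statistics.stdev(values) / statistics.mean(values)

aas = [2.10, 2.04, 2.15]     # hypothetical replicate Zn readings (ug/g), AAS
dpasv = [2.08, 2.09, 2.06]   # hypothetical replicates, DPASV
print(round(percent_rsd(aas), 2), round(percent_rsd(dpasv), 2))
```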

We model the time-resolved and time-integrated photoluminescence of a single InAs/GaAs quantum dot (QD) using a random population description. We reproduce the joint power dependence of the single-QD exciton complexes (neutral exciton, neutral biexciton and charged trions). We use the model to investigate the selective optical pumping phenomenon, a predominance of the negative trion observed when the optical excitation is resonant with a non-intentional impurity level. Our experiments and simulations determine that the negative charge confined in the QD after resonant excitation of the impurity level escapes in 10 ns.

Random subcloning strategies are commonly employed for analyzing pieces of DNA that are too large for direct analysis. Such strategies are applicable to gene finding, physical mapping, and DNA sequencing. Random subcloning refers to the generation of many small, directly analyzable fragments of DNA that represent random fragments of a larger whole, such as a genome. Following analysis of these fragments, a map or sequence of the original target may be reconstructed. Mathematical modeling is useful in planning such strategies and in providing a reference for their evaluation, both during execution and following completion. The statistical theory necessary for constructing these models has been developed independently over the last century. This paper brings this theory together into a statistical model for random subcloning strategies. This mathematical model retains its utility even at the high subclone redundancies necessary for project completion. The discussion here centers on shotgun sequencing, a random subcloning strategy envisioned as the method of choice for sequencing the human genome.
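The statistical theory referred to here builds on the classic Lander-Waterman expectations: at redundancy c, the expected covered fraction is 1 - e^(-c) and the expected number of gaps is N·e^(-c). A minimal sketch with illustrative genome and read parameters (not the paper's full model):

```python
import math

# Illustrative shotgun-sequencing parameters.
genome_len = 3_000_000
read_len = 500
n_reads = 30_000

c = n_reads * read_len / genome_len      # redundancy (coverage) = 5x
covered_fraction = 1 - math.exp(-c)      # expected fraction of target covered
expected_gaps = n_reads * math.exp(-c)   # expected number of gaps remaining

print(round(c, 1), round(covered_fraction, 4), round(expected_gaps, 1))
```

This illustrates the point about high redundancies: even at 5x coverage, hundreds of gaps are expected, so project completion requires substantially more redundancy than simple coverage arguments suggest.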

- and which sampling scheme will be optimal (random, stratified random or systematic selection). In addition variography will delineate cyclic behaviors as well as long-term trends thereby ensuring that future sampling will not accidentally be performed with a sampling rate coincident with the frequency...

Efficacy of pre-trauma prevention for post-traumatic stress disorder (PTSD) has not yet been established in a randomized controlled trial. Attention bias modification training (ABMT), a computerized intervention, is thought to mitigate stress-related symptoms by targeting disruptions in threat monitoring. We examined the efficacy of ABMT delivered before combat in mitigating risk for PTSD following combat. We conducted a double-blind, four-arm randomized controlled trial of 719 infantry soldiers to compare the efficacy of eight sessions of ABMT (n = 179), four sessions of ABMT (n = 184), four sessions of attention control training (ACT; n = 180), or no-training control (n = 176). Outcome symptoms were measured at baseline, 6-month follow-up, 10 days following combat exposure, and 4 months following combat. Primary outcome was PTSD prevalence 4 months post-combat determined in a clinical interview using the Clinician-Administered PTSD Scale. Secondary outcomes were self-reported PTSD and depression symptoms, collected at all four assessments. PTSD prevalence 4 months post-combat was 7.8% in the no-training control group, 6.7% with eight-session ABMT, 2.6% with four-session ABMT, and 5% with ACT. Four sessions of ABMT reduced risk for PTSD relative to the no-training condition (odds ratio 3.13, 95% confidence interval 1.01-9.22, p < 0.05, number needed to treat = 19.2). No other between-group differences were found. The results were consistent across a variety of analytic techniques and data imputation approaches. Four sessions of ABMT, delivered prior to combat deployment, mitigated PTSD risk following combat exposure. Given its low cost and high scalability potential, and observed number needed to treat, research into larger-scale applications is warranted. The ClinicalTrials.gov identifier is NCT01723215.
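The reported number needed to treat can be reproduced from the two prevalences; the odds ratio computed this way differs slightly from the published 3.13, which presumably comes from the exact event counts:

```python
# Effect measures from the reported PTSD prevalences:
# 7.8% (no-training control) vs 2.6% (four-session ABMT).
def odds(p):
    return p / (1 - p)

p_control, p_abmt = 0.078, 0.026
odds_ratio = odds(p_control) / odds(p_abmt)
nnt = 1 / (p_control - p_abmt)        # number needed to treat = 1 / ARR

print(round(odds_ratio, 2), round(nnt, 1))  # 3.17 19.2
```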

Little is known about the long-term effect of a chef-enhanced menu on healthier food selection and consumption in school lunchrooms. In addition, it remains unclear if extended exposure to other strategies to promote healthier foods (eg, choice architecture) also improves food selection or consumption. To evaluate the short- and long-term effects of chef-enhanced meals and extended exposure to choice architecture on healthier school food selection and consumption. A school-based randomized clinical trial was conducted during the 2011-2012 school year among 14 elementary and middle schools in 2 urban, low-income school districts (intent-to-treat analysis). Included in the study were 2638 students in grades 3 through 8 attending participating schools (38.4% of eligible participants). Schools were first randomized to receive a professional chef to improve school meal palatability (chef schools) or to a delayed intervention (control group). To assess the effect of choice architecture (smart café), all schools after 3 months were then randomized to the smart café intervention or to the control group. School food selection was recorded, and consumption was measured using plate waste methods. After 3 months, vegetable selection increased in chef vs control schools (odds ratio [OR], 1.75; 95% CI, 1.36-2.24), but there was no effect on the selection of other components or on meal consumption. After long-term or extended exposure to the chef or smart café intervention, fruit selection increased in the chef (OR, 3.08; 95% CI, 2.23-4.25), smart café (OR, 1.45; 95% CI, 1.13-1.87), and chef plus smart café (OR, 3.10; 95% CI, 2.26-4.25) schools compared with the control schools, and consumption increased in the chef schools (OR, 0.17; 95% CI, 0.03-0.30 cups/d). Vegetable selection increased in the chef (OR, 2.54; 95% CI, 1.83-3.54), smart café (OR, 1.91; 95% CI, 1.46-2.50), and chef plus smart café schools (OR, 7.38, 95% CI, 5.26-10.35) compared with the control schools

Pennsylvania State Dept. of Education, Harrisburg. Bureau of Planning and Evaluation.

Step-by-step instructions for the school representative responsible for Educational Quality Assessment in Pennsylvania are provided. The representative, who is expected to attend Quality Assessment Workshops, is given information about how to schedule the administration of the questionnaire, how to collect district and school data, and how to…

Group acupuncture is a growing and cost-effective method for delivering acupuncture in the United States and is the practice model in China. However, group acupuncture has not been tested in a research setting. The aim was to test the treatment effect of group acupuncture vs group education in persons with fibromyalgia, in a random-allocation two-group study with repeated measures, conducted in a group clinic in an academic health center in Portland, Oregon. Participants were women with a confirmed diagnosis of fibromyalgia (American College of Rheumatology 1990 criteria) and moderate to severe pain levels, who received twenty treatments of manualized acupuncture based on Traditional Chinese Medicine diagnosis, or group education, over 10 weeks (both 900 minutes total). The Revised Fibromyalgia Impact Questionnaire (FIQR) and Global Fatigue Index were assessed weekly at baseline, five weeks, and 10 weeks, with a four-week follow-up. Thirty women were recruited, with 78% reporting symptoms for longer than 10 years. Mean attendance was 810 minutes for acupuncture and 861 minutes for education. FIQR total, FIQR pain, and Global Fatigue Index all showed clinically and statistically significant improvement at end of treatment and four weeks post-treatment in the group receiving acupuncture, but not in participants receiving group education. Compared with education, group acupuncture improved global symptom impact, pain, and fatigue. Furthermore, it was a safe and well-tolerated treatment option, benefiting a broader proportion of patients than current pharmaceutical options.

Department of Veterans Affairs — VA accreditation is for the sole purpose of providing representation services to claimants before VA and does not imply that a representative is qualified to provide...

Fractals provide an extreme test of representing fine detail in terms of band-limited functions, i.e. by superoscillations. We show that this is possible, using the example of the Weierstrass nondifferentiable fractal. If this is truncated at an arbitrarily fine scale, it can be expressed to any desired accuracy by a simple superoscillatory function. In illustrative simulations, fractals truncated with fastest frequency 2^16 are easily represented by superoscillations with fastest Fourier frequency 1.
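The truncated fractal in question is a partial sum of the Weierstrass series; a minimal sketch with conventional parameter choices (a = 0.5, b = 3; not necessarily the paper's values):

```python
import math

def weierstrass(x, a=0.5, b=3, n_terms=16):
    """Truncated Weierstrass function: W(x) = sum a^n * cos(b^n * pi * x).

    With 0 < a < 1, b > 1 and a*b >= 1, the infinite series is continuous
    but nowhere differentiable; truncating at n_terms fixes the finest scale.
    """
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(n_terms))

# At x = 0 every cosine equals 1, so the sum is the geometric series of a^n.
print(round(weierstrass(0.0), 4))  # 2.0
```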

Psychological and developmental research has been critiqued for the lack of diversity of research samples. Because differences in culture, race, and ethnicity can influence participant behavior, limited diversity limits the generalizability of the findings. These differences may also impact how participants behave in response to recruitment attempts, which suggests that recruitment itself may be leveraged to increase sample diversity. The goal of the current study was to determine what factors, within a recruitment interaction, could be leveraged to increase success and diversity when recruiting families with children for developmental research. Study 1 found that three factors influenced success: (1) recruitment was more successful when other potential participants were also interested (i.e., recruiters were busy), (2) recruiters of particular races were more successful than recruiters of other races, and (3) differences in success were related to what the recruiter said to engage the potential participant (i.e., the script). The latter two factors interacted, suggesting some recruiters were using less optimal scripts. To improve success rates, study 2 randomly assigned scripts to recruiters and encouraged them to recruit more vigorously during busy periods. Study 2 found that two factors influenced success: (1) some scripts were more successful than others and (2) we were more successful at recruiting non-White potential participants than White participants. These two factors interacted, with some scripts being more successful with White families and other scripts more successful with non-White families. This intervention significantly increased the recruitment success rate by 8.1% and the overall number of families recruited by 15.3%. These findings reveal that empirically evaluating and tailoring recruitment efforts based on the most successful strategies is effective in boosting diversity through increased participation of children from non-White families. PMID:25972829

Yearly, 600,000 people complete suicide in low- and middle-income countries, accounting for 75% of the world's burden of suicide mortality. The highest regional rates are in South and East Asia. Nepal has one of the highest suicide rates in the world; however, few investigations exploring patterns surrounding both male and female suicides exist. This study used psychological autopsies to identify common factors, precipitating events, and warning signs in a diverse sample. Psychological autopsies were conducted for 39 completed suicide cases randomly sampled from 302 police case reports over 24 months in one urban and one rural region of Nepal. In the total police sample (n = 302), 57.0% of deaths were male. Over 40% of deaths were 25 years or younger, including 65% of rural and 50.8% of female suicide deaths. We estimate the crude urban and rural suicide rates to be 16.1 and 22.8 per 100,000, respectively. Within our psychological autopsy sample, 38.5% met criteria for depression, and only 23.1% of informants believed that the deceased had thoughts of self-harm or suicide before death. Important warning signs include recent geographic migration, alcohol abuse, and family history of suicide. Suicide prevention strategies in Nepal should account for the lack of awareness about suicide risk among family members and the early age of suicide completion, especially in rural and female populations. Given the low rates of ideation disclosure to friends and family, educating the general public about other signs of suicide may help prevention efforts in Nepal.

Diffusion approximations are ascertained from a two-time-scale argument in the case of a group-structured diploid population with scaled viability parameters depending on the individual genotype and the group type at a single multi-allelic locus under recurrent mutation, and applied to the case of random pairwise interactions within groups. The main step consists in proving global and uniform convergence of the distribution of the group types in an infinite population in the absence of selection and mutation, using a coalescent approach. An inclusive fitness formulation with coefficient of relatedness between a focal individual J affecting the reproductive success of an individual I, defined as the expected fraction of genes in I that are identical by descent to one or more genes in J in a neutral infinite population, given that J is allozygous or autozygous, yields the correct selection drift functions. These are analogous to the selection drift functions obtained with pure viability selection in a population with inbreeding. They give the changes of the allele frequencies in an infinite population without mutation that correspond to the replicator equation with fitness matrix expressed as a linear combination of a symmetric matrix for allozygous individuals and a rank-one matrix for autozygous individuals. In the case of no inbreeding, the mean inclusive fitness is a strict Lyapunov function with respect to this deterministic dynamics. Connections are made between dispersal with exact replacement (proportional dispersal), uniform dispersal, and local extinction and recolonization. The timing of dispersal (before or after selection, before or after mating) is shown to have an effect on group competition and the effective population size.
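For reference, the replicator equation mentioned above takes its standard textbook form (this formula is a well-known identity, not reproduced from the paper), with type frequencies x_i and fitness matrix A:

```latex
\dot{x}_i = x_i\left[(Ax)_i - x^{\top} A x\right], \qquad i = 1,\dots,n,
```

where, per the abstract, A is expressed as a linear combination of a symmetric matrix (for allozygous individuals) and a rank-one matrix (for autozygous individuals).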

There have been relatively few attempts to represent vision or blindness ontologically. This is unsurprising as the related phenomena of sight and blindness are difficult to represent ontologically for a variety of reasons. Blindness has escaped ontological capture at least in part because: blindness or the employment of the term 'blindness' seems to vary from context to context, blindness can present in a myriad of types and degrees, and there is no precedent for representing complex phenomena such as blindness. We explore current attempts to represent vision or blindness, and show how these attempts fail at representing subtypes of blindness (viz., color blindness, flash blindness, and inattentional blindness). We examine the results found through a review of current attempts and identify where they have failed. By analyzing our test cases of different types of blindness along with the strengths and weaknesses of previous attempts, we have identified the general features of blindness and vision. We propose an ontological solution to represent vision and blindness, which capitalizes on resources afforded to one who utilizes the Basic Formal Ontology as an upper-level ontology. The solution we propose here involves specifying the trigger conditions of a disposition as well as the processes that realize that disposition. Once these are specified we can characterize vision as a function that is realized by certain (in this case) biological processes under a range of triggering conditions. When the range of conditions under which the processes can be realized are reduced beyond a certain threshold, we are able to say that blindness is present. We characterize vision as a function that is realized as a seeing process and blindness as a reduction in the conditions under which the sight function is realized. This solution is desirable because it leverages current features of a major upper-level ontology, accurately captures the phenomenon of blindness, and can be

The college experience is often the first time when young adults live independently and make their own lifestyle choices. These choices affect dietary behaviors, exercise habits, techniques to deal with stress, and decisions on sleep time, all of which direct the trajectory of future health. There is a need for effective strategies that will encourage healthy lifestyle choices in young adults attending college. This preliminary randomized controlled trial tested the effect of coaching and text messages (short message service, SMS) on self-selected health behaviors in the domains of diet, exercise, stress, and sleep. A second analysis measured the ripple effect of the intervention on health behaviors not specifically selected as a goal by participants. Full-time students aged 18-30 years were recruited by word of mouth and campuswide advertisements (flyers, posters, mailings, university website) at a small university in western Pennsylvania from January to May 2015. Exclusions included pregnancy, eating disorders, chronic medical diagnoses, and prescription medications other than birth control. Of 60 participants, 30 were randomized to receive a single face-to-face meeting with a health coach to review results of behavioral questionnaires and to set a health behavior goal for the 8-week study period. The face-to-face meeting was followed by SMS text messages designed to encourage achievement of the behavioral goal. A total of 30 control subjects underwent the same health and behavioral assessments at intake and program end but did not receive coaching or SMS text messages. The texting app showed that 87.31% (2187/2505) of messages were viewed by intervention participants. Furthermore, 28 of the 30 intervention participants and all 30 control participants provided outcome data. Among intervention participants, 22 of 30 (73%) showed improvement in health behavior goal attainment, with the whole group (n=30) showing a mean improvement of 88% (95% CI 39-136). Mean

A general theory is presented for the origin of a self-replicating chemical system, termed an autogen, which is capable of both crude replication and translation (protein synthesis). The theory requires the availability of free energy and monomers to the system, a significant background low-yield synthesis of kinetically stable oligopeptides and oligonucleotides, the localization of the oligomers, crude oligonucleotide selectivity of amino acids during oligopeptide synthesis, crude oligonucleotide replication, and two short peptide families which catalyze replication and translation, to produce a localized group of at least one copy each of two protogenes and two protoenzymes. The model posits a process of random oligomerization, followed by the random nucleation of functional components and the rapid autocatalytic growth of the functioning autogen to macroscopic amounts, to account for the origin of the first self-replicating system. Such a process contains steps of such high probability and short time periods that it is suggested that the emergence of an autogen in a laboratory experiment of reasonable time scale may be possible.

Video game training with older adults potentially enhances aspects of cognition that decline with aging and could therefore offer a promising training approach. Although previously published studies suggest that training can produce transfer, many of them have certain shortcomings. This randomized controlled trial (RCT; Clinicaltrials.gov ID: NCT02796508) tried to overcome some of these limitations by incorporating an active control group and the assessment of motivation and expectations. Seventy-five older volunteers were randomly assigned to an experimental group trained for 16 sessions with non-action video games from Lumosity, a commercial platform (http://www.lumosity.com/), or to an active control group trained for the same number of sessions with simulation strategy games. The final sample included 55 older adults (30 in the experimental group and 25 in the active control group). Participants were tested individually before and after training to assess working memory (WM) and selective attention, and also reported their perceived improvement, motivation and engagement. The results showed improved performance across the training sessions. The main results were: (1) the experimental group did not show greater improvements in measures of selective attention and working memory than the active control group (the opposite occurred in the oddball task); (2) a marginal training effect was observed for the N-back task, but not for the Stroop task, while both groups improved in the Corsi Blocks task. Based on these results, one can conclude that training with non-action games provides modest benefits for untrained tasks. The effect is not specific to that kind of training, as a similar effect was observed for strategy video games. Groups did not differ in motivation, engagement or expectations. PMID:29163136

The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials and B-spline functions, and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled either with B-spline functions using quadratic or cubic polynomials for each individual segment (2 to 4 segments), or with Legendre polynomials for age with orders of fit ranging from 2 to 4. Residual variances were grouped in four age classes. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials underestimated the residual variance. Lower heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW at the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in genetic evaluation of breeding programs.
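As a sketch of the mechanics behind a random-regression covariance function: with a Legendre basis φ(t) evaluated at standardized ages and a coefficient (co)variance matrix G, the covariance between ages t1 and t2 is φ(t1)ᵀ G φ(t2). The order, age range and the matrix G below are hypothetical illustrations, not estimates from the study.

```python
import numpy as np
from numpy.polynomial import legendre as L


def legendre_basis(age, amin, amax, order):
    # Standardize age to [-1, 1], the domain of the Legendre polynomials.
    t = 2.0 * (age - amin) / (amax - amin) - 1.0
    # Evaluate P_0 .. P_{order-1} at t: [0]*k + [1] selects coefficient 1 on P_k.
    return np.array([L.legval(t, [0] * k + [1]) for k in range(order)])


def covariance(age1, age2, G, amin=0.0, amax=35.0):
    # K(t1, t2) = phi(t1)' G phi(t2) for coefficient covariance matrix G.
    p1 = legendre_basis(age1, amin, amax, G.shape[0])
    p2 = legendre_basis(age2, amin, amax, G.shape[0])
    return float(p1 @ G @ p2)


# Hypothetical 3x3 additive-genetic coefficient covariance matrix (order 3 fit).
G = np.array([[4.0, 0.5, 0.1],
              [0.5, 1.0, 0.05],
              [0.1, 0.05, 0.2]])

print(covariance(0.0, 35.0, G))   # genetic covariance between hatch and 35 d
```

A symmetric G yields a symmetric covariance function, and heritabilities at each age follow by dividing the age-specific genetic variance K(t, t) by the total variance at t.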

John FP Bridges,1,2 Norah L Crossnohere,2 Anne L Schuster,1 Judith A Miller,3 Carolyn Pastorini,3,† Rebecca A Aslakson2,4,5 1Department of Health Policy and Management, The Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, 2Department of Health, Behavior, and Society, The Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, 3Patient-Centered Outcomes Research Institute (PCORI) Project, Baltimore, MD, 4Department of Anesthesiology and Critical Care Medicine, The Johns Hopkins School of Medicine, Baltimore, MD, 5Armstrong Institute for Patient Safety and Quality, The Johns Hopkins School of Medicine, Baltimore, MD, USA †Carolyn Pastorini passed away on August 24, 2015. Background: Despite a movement toward patient-centered outcomes, best practices on how to gather and refine patients’ perspectives on research endpoints are limited. Advance care planning (ACP) is inherently patient centered and would benefit from patient prioritization of endpoints for ACP-related tools and studies. Objective: This investigation sought to prioritize patient-centered endpoints for the content and evaluation of an ACP video being developed for patients undergoing major surgery. We also sought to highlight an approach using complementary engagement and research strategies to document priorities and preferences of patients and other stakeholders. Materials and methods: Endpoints identified from a previously published environmental scan were operationalized following rating by a caregiver co-investigator, refinement by a patient co-investigator, review by a stakeholder committee, and validation by patients and family members. Finalized endpoints were taken to a state fair, where members of the public who indicated that they or a loved one had undergone major surgery prioritized their most relevant endpoints and provided comments. Results: Of the initial 50 ACP endpoints identified from the review, 12 endpoints were selected for public

This annotated bibliography provides a representative sample of the available literature on mentoring. It reviews both qualitative and quantitative research, and covers specific mentoring programs, program implementation, and testimonials to the benefits of mentoring. Materials covered include 40 journal articles, conference papers, books, and…

In Federalist 10 James Madison drew a functional distinction between “parties” (advocates for factional interests) and “judgment” (decision-making for the public good), and warned of the corrupting effect of combining both functions in a “single body of men.” This paper argues that one way of overcoming “Madisonian corruption” would be by restricting political parties to an advocacy role, reserving the judgment function to an allotted (randomly selected) microcosm of the whole citizenry, who would determine the outcome of parliamentary debates by secret ballot—a division of labour suggested by James Fishkin’s experiments in deliberative polling. The paper then defends this radical constitutional proposal against Bernard Manin’s (1997) claim that an allotted microcosm could not possibly fulfil the “consent” requirement of Natural Right theory. Not only does the proposal challenge Manin’s thesis, but a 28th Amendment implementing it would finally reconcile the competing visions that have bedevilled representative democracy since the Constitutional Convention of 1787.

Purpose: Complexity-based approaches to treatment have been gaining popularity in domains such as phonology and aphasia but have not yet been tested in child morphological acquisition. In this study, we examined whether beginning treatment with easier-to-inflect (easy first) or harder-to-inflect (hard first) verbs led to greater progress in the production of regular past-tense -ed by children with developmental language disorder. Method: Eighteen children with developmental language disorder (ages 4–10) participated in a randomized controlled trial (easy first, n = 10; hard first, n = 8). Verbs were selected on the basis of frequency, phonological complexity, and telicity (i.e., the completedness of the event). Progress was measured by the duration of therapy, the number of verb lists trained to criterion, and pre/post gains in accuracy for trained and untrained verbs on structured probes. Results: The hard-first group made greater gains in accuracy on both trained and untrained verbs but did not have fewer therapy visits or train to criterion on more verb lists than the easy-first group. Treatment fidelity, average recasts per session, and verbs learned did not differ across conditions. Conclusion: When targeting grammatical morphemes, it may be most efficient for clinicians to select harder rather than easier exemplars of the target. PMID:28796874

Purpose: The aim of the study was to test the hypothesis that aerobic Gram-negative bacteria (AGNB) play a crucial role in the pathogenesis of radiation-induced mucositis; consequently, selective elimination of these bacteria from the oral flora should result in a reduction of the mucositis. Methods and Materials: Head-and-neck cancer patients, when scheduled for treatment by external beam radiation therapy (EBRT), were randomized for prophylactic treatment with an oral paste containing either a placebo or a combination of the antibiotics polymyxin E, tobramycin, and amphotericin B (PTA group). Weekly, the objective and subjective mucositis scores and microbiologic counts of the oral flora were noted. The primary study endpoint was the mucositis grade after 3 weeks of EBRT. Results: Seventy-seven patients were evaluable. No statistically significant difference for the objective and subjective mucositis scores was observed between the two study arms (p=0.33). The percentage of patients with positive cultures of AGNB was significantly reduced in the PTA group (p=0.01). However, complete eradication of AGNB was not achieved. Conclusions: Selective elimination of AGNB from the oral flora did not result in a reduction of radiation-induced mucositis and therefore does not support the hypothesis that these bacteria play a crucial role in the pathogenesis of mucositis.

Title: Representing Distance, Consuming Distance. Abstract: Distance is a condition for corporeal and virtual mobilities, for desired and actual travel, yet it has received relatively little attention as a theoretical entity in its own right. Understandings of and assumptions about distance are being consumed in the contemporary society, in the same way as places, media, cultures and status are being consumed (Urry 1995, Featherstone 2007). An exploration of distance and its representations through contemporary consumption theory could expose what role distance plays in forming…

Antidepressant side effects are a significant public health issue, associated with poor adherence, premature treatment discontinuation, and, rarely, significant harm. Older adults assume the largest and most serious burden of medication side effects. We investigated the association between antidepressant side effects and genetic variation in the serotonin system in anxious, older adults participating in a randomized, placebo-controlled trial of the selective serotonin reuptake inhibitor (SSRI) escitalopram. Adults (N = 177) aged ≥ 60 years were randomized to active treatment or placebo for 12 weeks. Side effects were assessed using the Udvalg for Kliniske Undersøgelser side-effect rating scale. Genetic polymorphisms were putative functional variants in the promoters of the serotonin transporter and 1A and 2A receptors (5-HTTLPR [L/S + rs25531], HTR1A rs6295, and HTR2A rs6311, respectively). Four significant drug-placebo side-effect differences were found: increased duration of sleep, dry mouth, diarrhea, and diminished sexual desire. Analyses using putative high- versus low-transcription genotype groupings revealed six pharmacogenetic effects: greater dry mouth and decreased sexual desire for the low- and high-expressing serotonin transporter genotypes, respectively, and greater diarrhea with the 1A receptor low-transcription genotype. Diminished sexual desire was experienced significantly more by high-expressing genotypes in the serotonin transporter, 1A, or 2A receptors. There was no significant relationship between drug concentration and side effects, nor a mean difference in drug concentration between low- and high-expressing genotypes. Genetic variation in the serotonin system may predict who develops common SSRI side effects and why. More work is needed to further characterize this genetic modulation and to translate research findings into strategies useful for more personalized patient care.

Congress of the U.S., Washington, DC. House Select Committee on Aging.

This document contains the prepared statements and panel testimony from the Congressional hearing on over-the-counter (OTC) drug use by the elderly. Opening statements are given by Representatives Claude Pepper (chairman), Ralph Regula, Mary Rose Oakar, Michael Bilirakis, Tom Lantos, and Hal Daub. Topics which are covered include the incidence and…

Congress of the U.S., Washington, DC. House Committee on Education and Labor.

The transcript of the 1986 House of Representatives hearings on amendments to the Rehabilitation Act of 1973 contains verbatim testimony and committee questions, prepared statements, letters, and supplemental material. The Amendments require state plans to address rehabilitation engineering services, the development of mechanisms to provide…

The decommissioning of the Vandellos-I nuclear power plant was a big challenge for the host community of Vandellos i l'Hospitalet de l'Infant and the close-by region. Closing down of the facility resulted in a rise of unemployment and a decrease of municipal income. The public was concerned with three issues: safety, transparency and information about the decommissioning, and economic future. Therefore, from the very beginning, municipal governments entered into negotiations with ENRESA on socio-economic benefits, including local employment in dismantling activities, and other types of financial and non-financial compensation. The ADE business association, i.e. a network of business organisations, was created that guided the allotment of work to local firms. To satisfy public demand, local municipalities focused on the triad of safety, dialogue and local development, considered the three 'pillars of trust'. A Municipal Monitoring Commission was created, made up of representatives of affected municipalities, the regional government, the ADE business association, trade unions, the local university, the NPP management and ENRESA, to monitor the dismantling process and regularly inform the local public. Items that were handled by this Commission included: work process monitoring; workers; materials control; conventional and radioactive or contaminated waste management; emanation waste management (liquid and gas); safety (training and accidents); surveillance (radiological and environmental: dust, noise); effects; and fulfillment of agreed conditions. A number of communication tools and channels were used, e.g., public information meetings, an information centre, the municipal magazine, the municipal radio station, and meetings with representatives of the local press. Particularly innovative was the idea to ask academics from the University of Tarragona to help with 'translating' technical information into language that could

Rationale, aims and objectives: The aim of this study is to investigate the prevalence of patients seeking care due to different musculoskeletal disorders (MSDs) at primary health care centres (PHCs), to chart factors such as symptoms, diagnosis and actions prescribed for patients who visited the PHCs due to MSD, and to make comparisons regarding differences due to gender, age and rural or urban PHC. Methods: Patient records (2000) for patients of working age were randomly selected, equally distributed between one rural and one urban PHC. A 3-year period was reviewed retrospectively. For all patient records, background data, cause of the visit and diagnosis were registered. For visits due to MSD, the type and location of symptoms and the actions taken to resolve the patients' problems were registered. Data were analysed using cross-tabulation and multidimensional chi-squared tests. Results: The prevalence of MSD was high; almost 60% of all patients were seeking care due to MSD. Upper and lower limb problems were most common. Symptoms were most prevalent in the young and middle age groups. The patients received a variety of different diagnoses, and between 13 and 35% of the patients did not receive an MSD diagnosis despite having MSD symptoms. There was great variation in how the cases were handled. Conclusions: The present study points out some weaknesses regarding diagnostics and management of MSD in primary care. PMID:27538347
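The cross-tabulation analysis mentioned here can be illustrated with a Pearson chi-square test on a 2x2 table, e.g. MSD vs. non-MSD visits at the rural vs. urban PHC. The counts below are invented for illustration, not data from the study.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den


# Hypothetical counts: MSD vs. non-MSD visits at a rural and an urban PHC.
rural_msd, rural_other = 310, 220
urban_msd, urban_other = 290, 180

stat = chi2_2x2(rural_msd, rural_other, urban_msd, urban_other)
print(round(stat, 3))
```

With one degree of freedom, the statistic is compared against the chi-square critical value 3.84 at the 5% level; values below it (as here) indicate no significant rural/urban difference in the invented counts.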

To evaluate the effect of a method of Selective Decontamination of the Digestive Tract (SDD) on colonization, nosocomial infection (NI), bacterial resistance, mortality and economic costs. Randomized, double blind, placebo controlled study. Polyvalent intensive care unit (ICU) of a tertiary care hospital with 27 beds. 101 patients with > 3 days of mechanical ventilation and > 5 days of stay, without infection at the start of the study. 47 belonged to the Treated Group (TG) and 54 to the Placebo Group (PG). The TG was given Cefotaxime i.v. (6 g/day) for the first four days and an association of Polymyxin E, Tobramycin and Amphotericin B at the oropharyngeal and gastrointestinal level throughout the whole stay. In the TG, colonization by gram-negative agents at oropharyngeal, tracheal and gastrointestinal level fell significantly. There was a significant drop in the overall, respiratory and urinary NI (26% vs 63%, p < 0.001; 15% vs 46%, p < 0.001; 9% vs 31%, p < 0.01). The overall mortality and NI-related mortality were less in the TG (21% vs 44%, p < 0.05; 2% vs 20%, p < 0.01). The economic costs, mechanical ventilation time and length of stay were similar. The percentage of bacterial isolations resistant to Cefotaxime and Tobramycin was greater in the TG (38% vs 15% and 38% vs 9%, p < 0.001). Colonization by gram-negative bacilli, NI and the mortality related to it can be modified by SDD. Continuous bacteriological surveillance is necessary.

There is currently no evidence that treatment of unruptured aneurysms is beneficial. Confronted with the uncertainty, many clinicians are attracted by an individual calculus of risks using numbers extracted from subgroup statistics of observational studies or natural history data. The so-called natural history of unruptured aneurysms refers to a purely man-made ratio of events divided by the number of untreated patients identified by imaging, a ratio heavily influenced by referral patterns and arbitrary clinical decisions. Available studies lacked prespecified hypotheses, exposing all analyses to sampling error and bias, and sample sizes were too small to provide reliable subgroup statistics. Far from being "natural kinds" of aneurysms, subgroups were post-hoc creations. Resulting data-driven statistics can only be exploratory, the error too uncontrollable to serve for clinical decisions. A randomized trial is in order, but selection according to fixed size criteria is ill-advised, given the imprecision of imaging, the influence of other factors such as location, previous history, multiplicity of lesions, risks of treatment, age and the danger of arbitrarily excluding from a long trial a large segment of the population with aneurysms for whom the research question is most pertinent.

This document presents testimony delivered before the House Select Committee on Children, Youth, and Families on the economic status of Hispanic children and families in the United States. The speaker, a senior policy analyst at the National Council of La Raza, focuses on the strengths of Hispanic families, the economic challenges they face, and…

Congress of the U.S., Washington, DC. House Select Committee on Children, Youth, and Families.

The House select committee met to hear testimony from parents, children, service providers, and researchers concerning the effects of job and income loss on families in central Illinois. The testimony of the first panel consisted of personal narratives. A high school student whose parents may move the family in order to find better paying jobs…

Congress of the U.S., Washington, DC. House Select Committee on Hunger.

This document presents oral and written testimony concerning the effectiveness of federal assistance programs in reducing infant mortality. In opening statements, members of the House Select Committee on Hunger voiced their concern over the persistence of high infant death rates among minorities, the rural poor, and urban populations, despite…

Antisocial personality is a common adult problem that imposes a major public health burden, but for which there is no effective treatment. Affected individuals exhibit persistent antisocial behavior and pervasive antisocial character traits, such as irritability, manipulativeness, and lack of remorse. Prevention of antisocial personality in childhood has been advocated, but evidence for effective interventions is lacking. The authors conducted two follow-up studies of randomized trials of group parent training. One involved 120 clinic-referred 3- to 7-year-olds with severe antisocial behavior for whom treatment was indicated, 93 of whom were reassessed between ages 10 and 17. The other involved 109 high-risk 4- to 6-year-olds with elevated antisocial behavior who were selectively screened from the community, 90 of whom were reassessed between ages 9 and 13. The primary psychiatric outcome measures were the two elements of antisocial personality, namely, antisocial behavior (assessed by a diagnostic interview) and antisocial character traits (assessed by a questionnaire). Also assessed were reading achievement (an important domain of youth functioning at work) and parent-adolescent relationship quality. In the indicated sample, both elements of antisocial personality were improved in the early intervention group at long-term follow-up compared with the control group (antisocial behavior: odds ratio of oppositional defiant disorder=0.20, 95% CI=0.06, 0.69; antisocial character traits: B=-4.41, 95% CI=-8.64, -1.12). Additionally, reading ability improved (B=9.18, 95% CI=0.58, 18.0). Parental expressed emotion was warmer (B=0.86, 95% CI=0.20, 1.41) and supervision was closer (B=-0.43, 95% CI=-0.75, -0.11), but direct observation of parenting showed no differences. Teacher-rated and self-rated antisocial behavior were unchanged. In contrast, in the selective high-risk sample, early intervention was not associated with improved long-term outcomes. Early intervention with

Background: Single embryo transfer (SET) remains underutilized as a strategy to reduce multiple gestation risk in IVF, and its overall lower pregnancy rate underscores the need for improved techniques to select one embryo for fresh transfer. This study explored use of comprehensive chromosomal screening by array CGH (aCGH) to provide this advantage and improve pregnancy rate from SET. Methods: First-time IVF patients with a good prognosis (age Results: For patients in Group A (n = 55), 425 blastocysts were biopsied and analyzed via aCGH (7.7 blastocysts/patient). Aneuploidy was detected in 191/425 (44.9%) of blastocysts in this group. For patients in Group B (n = 48), 389 blastocysts were microscopically examined (8.1 blastocysts/patient). Clinical pregnancy rate was significantly higher in the morphology + aCGH group compared to the morphology-only group (70.9 and 45.8%, respectively; p = 0.017); ongoing pregnancy rates for Groups A and B were 69.1 vs. 41.7%, respectively (p = 0.009). There were no twin pregnancies. Conclusion: Although aCGH followed by frozen embryo transfer has been used to screen at-risk embryos (e.g., known parental chromosomal translocation or history of recurrent pregnancy loss), this is the first description of aCGH fully integrated with a clinical IVF program to select single blastocysts for fresh SET in good-prognosis patients. The observed aneuploidy rate (44.9%) among biopsied blastocysts highlights the inherent imprecision of SET when conventional morphology is used alone. Embryos randomized to the aCGH group implanted with greater efficiency, resulted in clinical pregnancy more often, and yielded a lower miscarriage rate than those selected without aCGH. Additional studies are needed to verify our pilot data and confirm a role for on-site, rapid aCGH for IVF patients contemplating fresh SET.
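The clinical-pregnancy comparison above (70.9% of 55 vs. 45.8% of 48; event counts of 39 and 22 inferred from the reported rates) can be checked with a pooled two-proportion z-test. This is only a sketch of the mechanics; the paper's own test (with its corrections) gives p = 0.017, slightly different from the plain z-test below.

```python
from math import sqrt, erf


def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    pval = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, pval


# Group A (morphology + aCGH): 39/55; Group B (morphology only): 22/48.
z, p = two_prop_ztest(39, 55, 22, 48)
print(round(z, 2), round(p, 4))
```

The uncorrected z-test rejects equality of the two pregnancy rates at the 5% level, consistent with the significance reported in the abstract.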

Background: Alcohol misuse amongst young people is a serious concern. The need for effective prevention is clear, yet there appear to be few evidence-based programs that prevent alcohol misuse and none that target both high- and low-risk youth. The CAP study addresses this gap by evaluating the efficacy of an integrated approach to alcohol misuse prevention, which combines the effective universal internet-based Climate Schools program with the effective selective personality-targeted Preventure program. This article describes the development and protocol of the CAP study, which aims to prevent alcohol misuse and related harms in Australian adolescents. Methods/Design: A cluster randomized controlled trial (RCT) is being conducted with Year 8 students aged 13 to 14 years old from 27 secondary schools in New South Wales and Victoria, Australia. Blocked randomisation was used to assign schools to one of four groups: Climate Schools only, Preventure only, CAP (Climate Schools and Preventure), or Control (alcohol, drug and health education as usual). The primary outcomes of the trial will be the uptake and harmful use of alcohol and alcohol-related harms. Secondary outcomes will include alcohol- and cannabis-related knowledge, cannabis-related harms, intentions to use, and mental health symptomatology. All participants will complete assessments on five occasions: baseline; immediately post intervention; and at 12, 24 and 36 months post baseline. Discussion: This study protocol presents the design and current implementation of a cluster RCT to evaluate the efficacy of the CAP study, an integrated universal and selective approach to prevent alcohol use and related harms among adolescents. Compared to students who receive the stand-alone universal Climate Schools program or alcohol and drug education as usual (Controls), we expect the students who receive the CAP intervention to have significantly less uptake of alcohol use, a reduction in average

Abstract Background: Quality hospital care is important in ensuring that the needs of severely ill children are met to avert child mortality. However, the quality of hospital care for children in developing countries has often been found poor. As the first step of a country road map for improving hospital care for children, we assessed the baseline situation with respect to the quality of care provided to children under five years of age in district- and sub-district-level hospitals in Bangladesh. Methods: Using adapted World Health Organization (WHO) hospital assessment tools and standards, an assessment of 18 randomly selected district (n = 6) and sub-district (n = 12) hospitals was undertaken. Teams of trained assessors used direct case observation, record review, interviews, and Management Information System (MIS) data to assess the quality of clinical case management and monitoring; infrastructure, processes and hospital administration; and essential hospital and laboratory supports, drugs and equipment. Results: Findings demonstrate that the overall quality of care provided in these hospitals was poor. No hospital had a functioning triage system to prioritise those children most in need of immediate care. Laboratory supports and essential equipment were deficient. Only one hospital had all of the essential drugs for paediatric care. Less than a third of hospitals had a back-up power supply, and just under half had functioning arrangements for safe drinking water. Clinical case management was found to be sub-optimal for prevalent illnesses, as was the quality of neonatal care. Conclusion: Action is needed to improve the quality of paediatric care in hospital settings in Bangladesh, with a particular need to invest in improving newborn care.

Structurally similar short peptides often serve as signals in diverse signaling systems. Similar peptides affect diverse physiological pathways in different species or even within the same organism. Assuming that signals provide information, and that this information is tested by the structure of the signal, it is curious that highly similar signaling peptides appear to provide information relevant to very different metabolic processes. Here we suggest a solution to this problem: the synthesis of the propeptide, and its post-translational modifications that are required for its cleavage and the production of the mature peptide, provide information on the phenotypic state of the signaling cell. The mature peptide, due to its chemical properties which render it harmful, serves as a stimulant that forces cells to respond to this information. To support this suggestion, we present cases of signaling peptides in which the sequence and structure of the mature peptide is similar yet provides diverse information. The sequence of the propeptide and its post-translational modifications, which represent the phenotypic state of the signaling cell, determine the quantity and specificity of the information. We also speculate on the evolution of signaling peptides. We hope that this perspective will encourage researchers to reevaluate pathological conditions in which the synthesis of the mature peptide is abnormal.

Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
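The distinction the activity targets (who gets *into* the study vs. who gets *which* treatment) can be sketched in a few lines of code. This is a minimal illustration, not part of the described classroom activity; the population size, sample size, and seeds below are arbitrary:

```python
import random

def random_selection(population, n, seed=None):
    """Random selection: draw a simple random sample of n units from the population."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def random_assignment(sample, seed=None):
    """Random assignment: split an already-selected sample into two equal groups."""
    rng = random.Random(seed)
    shuffled = list(sample)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

population = list(range(1, 101))                    # e.g. 100 students, labeled 1..100
sample = random_selection(population, 20, seed=42)  # supports generalization
treatment, control = random_assignment(sample, seed=7)  # supports causal comparison
```

Random selection justifies generalizing from sample to population; random assignment justifies attributing group differences to the treatment. The two are independent design choices, which is exactly the pitfall such activities highlight.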

Acute kidney injury (AKI) is a highly morbid condition in critically ill patients that is associated with high mortality. Previous clinical studies have demonstrated the safety and efficacy of the Selective Cytopheretic Device (SCD) in the treatment of AKI requiring continuous renal replacement therapy in the intensive care unit (ICU). In a randomized, controlled trial of 134 ICU patients with AKI, 69 received continuous renal replacement therapy (CRRT) alone and 65 received SCD therapy. No significant difference in 60-day mortality was observed between the treated (27/69; 39%) and control patients (21/59; 36%), with six patients lost to follow-up in the intention-to-treat (ITT) analysis. Of the 19 SCD subjects (CRRT + SCD) and 31 control subjects (CRRT alone) who maintained a post-filter ionized calcium (iCa) level in the protocol's recommended range (≤ 0.4 mmol/L) for ≥ 90% of the therapy time, 60-day mortality was 16% (3/19) in the SCD group compared to 41% (11/27) in the CRRT-alone group (p = 0.11). Dialysis dependency showed a borderline statistically significant difference between the SCD-treated versus control CRRT-alone patients maintained for ≥ 90% of the treatment in the protocol's recommended (riCa) target range of ≤ 0.4 mmol/L, with values of 0% (0/16) and 25% (4/16), respectively (p = 0.10). When the riCa treated and control subgroups were compared on a composite index of 60-day mortality and dialysis dependency, the percentage was 16% among SCD-treated subjects versus 58% among control subjects (p < 0.01). The incidence of serious adverse events did not differ between the treated (45/69; 65%) and control groups (40/65; 63%; p = 0.86). SCD therapy may improve mortality and reduce dialysis dependency in a tightly controlled regional hypocalcaemic environment in the perfusion circuit. ClinicalTrials.gov NCT01400893 http://clinicaltrials.gov/ct2/show/NCT01400893.

The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…

Data on absolute risks of outcomes and patterns of drug use in cost-effectiveness analyses are often based on randomised clinical trials (RCTs). The objective of this study was to evaluate the external validity of published cost-effectiveness studies by comparing the data used in these studies (typically based on RCTs) to observational data from actual clinical practice. Selective Cox-2 inhibitors (coxibs) were used as an example. The UK General Practice Research Database (GPRD) was used to estimate the exposure characteristics and individual probabilities of upper gastrointestinal (GI) events during current exposure to nonsteroidal anti-inflammatory drugs (NSAIDs) or coxibs. A basic cost-effectiveness model was developed evaluating two alternative strategies: prescription of a conventional NSAID or a coxib. Outcomes included upper GI events as recorded in GPRD and hospitalisation for upper GI events recorded in the national registry of hospitalisations (Hospital Episode Statistics) linked to GPRD. Prescription costs were based on the prescribed number of tablets as recorded in GPRD and the 2006 cost data from the British National Formulary. The study population included over 1 million patients prescribed conventional NSAIDs or coxibs. Only a minority of patients used the drugs long-term and daily (34.5% of conventional NSAID users and 44.2% of coxib users), whereas coxib RCTs required daily use for at least 6-9 months. The mean cost of preventing one upper GI event as recorded in GPRD was US$104k (ranging from US$64k with long-term daily use to US$182k with intermittent use), and US$298k for hospitalizations. The mean costs (for GPRD events) over calendar time were US$58k during 1990-1993 and US$174k during 2002-2005. Using RCT data rather than GPRD data for event probabilities, the mean cost was US$16k with the VIGOR RCT and US$20k with the CLASS RCT. The published cost-effectiveness analyses of coxibs lacked external validity and did not represent patients in actual …

The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

BACKGROUND: Vasopressin is widely used for vasopressor support in septic shock patients, but experimental evidence suggests that selective V1A agonists are superior. The initial pharmacodynamic effects, pharmacokinetics, and safety of selepressin, a novel V1A-selective vasopressin analogue, was e...

OBJECTIVE: The objective of the study was to compare barusiban with placebo in threatened preterm labor. STUDY DESIGN: This was a randomized, double-blind, placebo-controlled, multicenter study. One hundred sixty-three women at 34-35 weeks plus 6 days, and with 6 or more contractions of 30 seconds...

... designated representative is selected shall include a procedure for the owners and operators of the source and affected units at the source to authorize the alternate designated representative to act in lieu...) In the event of a conflict, any action taken by the designated representative shall take precedence...

A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
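The linear congruential form described above (a prime modulus with a primitive root as multiplier) can be sketched as follows. Note the parameters are the widely published Lewis-Goodman-Miller "minimal standard" values, used here only as an illustration; the abstract does not give the Sigma 5 word length or the primes actually used in S:RANDOM1:

```python
M = 2**31 - 1   # Mersenne prime 2147483647, the largest prime fitting a 32-bit signed word
A = 16807       # 7**5, a primitive root modulo M (Lewis-Goodman-Miller multiplier)

def lcg(seed=1):
    """Multiplicative congruential generator: x_{n+1} = (A * x_n) mod M.
    Because A is a primitive root mod the prime M, the sequence visits
    every value in 1..M-1 before repeating (full period M-1)."""
    x = seed
    while True:
        x = (A * x) % M
        yield x

gen = lcg(seed=1)
uniforms = [next(gen) / M for _ in range(5)]   # uniform variates in (0, 1)
```

A standard self-check for this parameter pair: starting from seed 1, the 10,000th raw value is 1043618065, which is how such generators are commonly validated after porting to a new machine.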

We propose two variations of the non-cooperative bargaining model for games in coalitional form, introduced by Hart and Mas-Colell (1996a). These strategic games implement, in the limit, two new NTU-values: The random marginal and the random removal values. The main characteristic of these proposals is that they always select a unique payoff allocation in NTU-games. The random marginal value coincides with the Consistent NTU-value (Maschler and Owen, 1989) for hyperplane games, and with the S...

… dividers, the Boerner Divider, the 'spoon method', alternate/fractional shoveling and grab sampling. Only devices based on riffle-splitting principles (static or rotational) pass the ultimate representativity test (with minor, but significant, relative differences). Grab sampling, the overwhelmingly … always be representative in the full Theory of Sampling (TOS) sense. This survey also allows empirical verification of the merits of the famous 'Gy's formula' for order-of-magnitude estimation of the Fundamental Sampling Error (FSE).
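Gy's formula lends itself to exactly the order-of-magnitude use the survey describes. The sketch below assumes the common multiplicative form sigma² = c·f·g·l·d³·(1/Ms − 1/ML); the default factor values (mineralogical factor c, shape factor f, granulometric factor g, liberation factor l) are illustrative placeholders and must be estimated for any real material:

```python
import math

def gy_fse_variance(d_cm, m_sample_g, m_lot_g, c=1.0, f=0.5, g=0.25, l=1.0):
    """Relative variance of the Fundamental Sampling Error per Gy's formula:
    sigma^2 = c*f*g*l * d^3 * (1/Ms - 1/ML),
    with d the nominal top particle size in cm and masses in grams."""
    return c * f * g * l * d_cm**3 * (1.0 / m_sample_g - 1.0 / m_lot_g)

# Example: 1 mm top size, 100 g sample drawn from a 1-tonne lot
var = gy_fse_variance(0.1, 100.0, 1.0e6)
rsd = math.sqrt(var)   # relative standard deviation of the FSE
```

The formula makes the two practical levers explicit: the FSE falls with the cube of the top particle size (comminute before splitting) and inversely with sample mass, which is why grab sampling's tiny increments cannot be representative for heterogeneous lots.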

Background: Pain is a common complication after surgery. The aim of this study was to evaluate the effect of aromatherapy with Rosa damascena Mill. on postoperative pain in children. Materials and Methods: In a double-blind, placebo-controlled clinical trial, we selected 64 children of 3-6 years of age through convenience sampling and divided them randomly into two groups. Patients in group A were given inhalation aromatherapy with R. damascena Mill., and in group B, the patients were …

BACKGROUND & AIMS: Although widely prescribed, the evidence for the use of antidepressants for the treatment of irritable bowel syndrome (IBS) is limited. In this study, we hypothesized that fluoxetine (Prozac), a selective serotonin reuptake inhibitor, has visceral analgesic properties, leading to

Conclusion: Our results suggest that point-selective effects among adjacent, distal, or a combination of acupoints are hardly associated with pain intensity or palpation index in participants with TMDs. Larger sample size trials are required to overcome the shortcomings of the study.

Interdisciplinary physical therapy together with pharmacological treatment constitutes the conservative treatment strategy for low back pain (LBP). There is still a lack of high-quality studies aimed at an objective evaluation of physiotherapeutic procedures according to their effectiveness in LBP. The aim of this study was to carry out a prospective, randomized, single-blinded, and placebo-controlled clinical trial to evaluate the effectiveness of magnetic fields in discopathy-related LBP. A group of 177 patients was assessed for eligibility based on inclusion and exclusion criteria. In the end, 106 patients were randomly assigned into 5 comparative groups: A (n = 23; magnetic therapy: 10 mT, 50 Hz); B (n = 23; magnetic therapy: 5 mT, 50 Hz); C (n = 20; placebo magnetic therapy); D (n = 20; magnetic stimulation: 49.2 μT, 195 Hz); and E (n = 20; placebo magnetic stimulation). All patients were assessed using tests for pain intensity, degree of disability and range of motion. Also, postural stability was assessed using a stabilographic platform. In this study, positive changes in all clinical outcomes were demonstrated in group A (p < 0.05). It was determined that the application of magnetic therapy (10 mT, 50 Hz, 20 min) significantly reduces pain symptoms and leads to an improvement of functional ability in patients with LBP.

The aim of this study was to investigate the effectiveness of short wave diathermy (SWD) in patients with subacromial impingement syndrome. In this double-blinded, randomized, placebo-controlled trial, 57 patients (aged 35-65 yrs) were classified into night pain positive (NP[+]) (n = 28) and night pain negative (NP[-]) (n = 29) groups. Both groups were randomly assigned to SWD (NP[+], n = 14; NP[-], n = 14) and sham (NP[+], n = 15; NP[-], n = 14) subgroups. Visual analog scale, Constant-Murley Scale (CS), and Shoulder Disability Questionnaire (SDQ) scores were used for evaluation. There was only a significant difference in pain with activity at the 1-mo (mean difference [MD], -1.65; 95% confidence interval, -3.01 to -0.28) and 2-mo evaluations (MD, -2.1; 95% confidence interval, -3.51 to -0.69) between the SWD and sham groups. In the NP(+) SWD group, the CS pain score was significantly higher than in the NP(+) sham group at all evaluations after treatment. At 1 mo, the NP(-) SWD group showed significantly better pain, strength, total CS, and SDQ scores than the NP(-) sham group. At 2 mos, the pain, range of motion, strength, and total CS and SDQ scores were better in the NP(-) SWD group than in the NP(-) sham group (P < 0.05) … impingement syndrome without NP.

Promotion of healthy pregnancies has gained high priority in the Netherlands because of relatively unfavorable perinatal outcomes. In response, a nationwide study, 'Healthy Pregnancy 4 All' (HP4ALL), has been initiated. Part of this study involves systematic and broadened antenatal risk assessment (the Risk Assessment substudy). Risk selection in current clinical practice is mainly based on medical risk factors. Despite the increasing evidence for the influence of non-medical risk factors (social status, lifestyle or ethnicity) on perinatal outcomes, these risk factors remain largely unaddressed. Systematic risk selection, combined with customized care pathways to reduce or treat detected risks, and regular and structured consultation between community midwives, gynecologists and other care providers such as social workers, is part of this study. Neighborhoods in 14 municipalities with adverse perinatal outcomes above national and municipal averages are selected for participation. The study concerns a cluster randomized controlled trial. Municipalities are randomly allocated to intervention (n = 3,500 pregnant women) and control groups (n = 3,500 pregnant women). The intervention consists of systematic risk selection with the Rotterdam Reproductive Risk Reduction (R4U) scorecard in pregnant women at the booking visit, and referral to corresponding care pathways. A risk score above a predefined threshold, based on weighted risk factors derived from the R4U, determines structured multidisciplinary consultation. Primary outcomes of this trial are dysmaturity (birth weight …

The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
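The parameter-vetting role that RANCYCLE/ARITH play can be illustrated (not reproduced; their FORTRAN sources are not shown in the abstract) with the Hull-Dobell theorem, the standard criterion for a mixed LCG x_{n+1} = (a·x_n + c) mod m to attain its maximal period m:

```python
from math import gcd

def prime_factors(n):
    """Return the set of distinct prime factors of n by trial division."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def has_full_period(a, c, m):
    """Hull-Dobell theorem: the mixed LCG x -> (a*x + c) mod m has period m
    iff (1) gcd(c, m) == 1, (2) a - 1 is divisible by every prime factor
    of m, and (3) a - 1 is divisible by 4 whenever m is."""
    if gcd(c, m) != 1:
        return False
    if any((a - 1) % p != 0 for p in prime_factors(m)):
        return False
    if m % 4 == 0 and (a - 1) % 4 != 0:
        return False
    return True
```

For example, the well-known parameters a = 1664525, c = 1013904223, m = 2³² satisfy all three conditions, while most casually chosen triples fail at least one, which is why parameter-selection aids of the RANCYCLE kind are worth having.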

Polymeric aerogels (PA-xx) were synthesized via room-temperature reaction of an aromatic triisocyanate (tris(4-isocyanatophenyl)methane) with pyromellitic acid. Using solid-state CPMAS 13C and 15N NMR, it was found that the skeletal framework of PA-xx was a statistical copolymer of polyamide, polyurea, polyimide, and of the primary condensation product of the two reactants, a carbamic-anhydride adduct. Stepwise pyrolytic decomposition of those components yielded carbon aerogels with both open and closed microporosity. The open micropore surface area increased from … capacity for CO2 (up to 4.9 mmol g−1), and selectivity toward other gases (via Henry's law). The selectivity for CO2 versus H2 (up to 928:1) is suitable for precombustion fuel purification. Relevant to postcombustion CO2 capture and sequestration (CCS), the selectivity for CO2 versus N2 was in the 17:1 to 31:1 range. In addition to typical factors involved in gas sorption (kinetic diameters, quadrupole moments and polarizabilities of the adsorbates), it is also suggested that CO2 is preferentially engaged by surface pyridinic and pyridonic N on carbon (identified with XPS) in an energy-neutral surface reaction. Relatively high uptake of CH4 (2.16 mmol g−1 at 0 °C/1 bar) was attributed to its low polarizability, and that finding paves the way for further studies on adsorption of higher (i.e., more polarizable) hydrocarbons. Overall, high CO2 selectivities, in combination with attractive CO2 adsorption capacities, low monomer cost, and the innate physicochemical stability of carbon render the materials of this study reasonable candidates for further practical consideration.

The editor of Representing Landscape Architecture, Marc Treib, argues that there is good reason to evaluate the standard practices of representation that landscape architects have been using for so long. In the rush to the promised land of computer design these practices are now in danger of being left by the wayside. The 14 often both fitting and well-crafted contributions of this publication offer an approach to how landscape architecture has been and is currently represented: in the design study, in presentation, in criticism, and in the creation of landscape architecture.

The Theory of Sampling (TOS) provides a description of all errors involved in sampling of heterogeneous materials as well as all necessary tools for their evaluation, elimination and/or minimization. This tutorial elaborates on, and illustrates, selected central aspects of TOS. The theoretical aspects are illustrated with many practical examples of TOS at work in typical scenarios, presented to yield a general overview. TOS provides a full scientific definition of the concept of sampling correctness, an attribute of the sampling process that must never be compromised. For this purpose the Fundamental Sampling Principle (FSP) also receives special attention. TOS provides the first complete scientific definition of sampling representativeness. Only correct (unbiased) mass reduction will ensure representative sampling. It is essential to induct scientific and technological professions in the TOS …

Representativity requirements are discussed for various wind data users. It is shown that most applications can be dealt with by using data from wind stations when these are made to conform with WMO specifications. Methods to achieve this WMO normalization are reviewed, giving minimum specifications

To summarize the previously published results of a multicenter randomized phase III clinical trial of afobazole (INN: fabomotizole) versus diazepam in the treatment of patients with generalized anxiety disorder (GAD) and adjustment disorders (AD). Five investigating centers included 150 patients aged 18 to 60 years (60 patients with GAD and 90 with AD) with a simple structure of anxiety disorders without concurrent mental, neurological or somatic disorders. Patients were randomized to take afobazole (30 mg/day; n = 100) or diazepam (30 mg/day; n = 50) for 30 days. Prior to drug administration, patients susceptible to placebo were excluded according to the results of its 7-day use. Withdrawal syndrome was evaluated within 10 days after completion of active therapy. The primary efficacy endpoint was the change in Hamilton Anxiety Rating Scale (HAMA) total score. The scores of the Clinical Global Impression (CGI) Scale and the Sheehan Scale were analyzed as secondary efficacy endpoints. Drug safety was evaluated by assessment of adverse events. Afobazole and diazepam caused a significant reduction of the HAMA total score. In the afobazole group, the reduction of anxiety exceeded that in the diazepam group (the difference in total score changes was 2.93 [0.67; 5.19]; p = 0.01). The proportion of patients with reduction of disease severity was 72% in the afobazole group and 58% in the diazepam group. After therapy completion, the proportion of patients with no or mild disorder in the afobazole group was significantly higher than that in the diazepam group (69 and 44%, respectively; χ2 = 12.46; p = 0.014). There was a trend toward a higher subjective patient-rated estimate of the afobazole effect using the Sheehan scale. There were a total of 15 and 199 adverse events in the afobazole and diazepam groups, respectively. No manifestations of afobazole withdrawal syndrome were found. Diazepam withdrawal syndrome was observed in 34 (68%) patients. Afobazole is an …

Highlights: • We compared 24-gauge side-hole and conventional 22-gauge end-hole catheters in MDCT. • The 24-gauge side-hole catheter is noninferior to the 22-gauge end-hole catheter. • The 24-gauge side-hole catheter is safe and facilitates optimal enhancement quality. • The 24-gauge side-hole catheter is suitable for patients with narrow or fragile veins. - Abstract: Purpose: To compare the 24-gauge side-holes catheter and conventional 22-gauge end-hole catheter in terms of safety, injection pressure, and contrast enhancement on multi-detector computed tomography (MDCT). Materials & methods: In a randomized single-center study, 180 patients were randomized to either the 24-gauge side-holes catheter or the 22-gauge end-hole catheter groups. The primary endpoint was safety during intravenous administration of contrast material for MDCT, using a non-inferiority analysis (lower limit of the 95% CI greater than the −10% non-inferiority margin for the group difference). The secondary endpoints were injection pressure and contrast enhancement. Results: A total of 174 patients were analyzed for safety during intravenous contrast material administration for MDCT. The overall extravasation rate was 1.1% (2/174 patients); 1 (1.2%) minor episode occurred in the 24-gauge side-holes catheter group and 1 (1.1%) in the 22-gauge end-hole catheter group (difference: 0.1%, 95% CI: −3.17% to 3.28%, non-inferiority P = 1). The mean maximum pressure was higher with the 24-gauge side-holes catheter than with the 22-gauge end-hole catheter (8.16 ± 0.95 kg/cm² vs. 4.79 ± 0.63 kg/cm², P < 0.001). The mean contrast enhancement of the abdominal aorta, celiac artery, superior mesenteric artery, and pancreatic parenchyma in the two groups were not significantly different. Conclusion: In conclusion, our study showed that the 24-gauge side-holes catheter is safe and suitable for delivering iodine with a concentration of 300 mg/mL at a flow-rate of 3 mL/s, and it may contribute to …

Our understanding of the evolution of genes of the major histocompatibility complex (MHC) is rapidly increasing, but there are still enigmatic questions remaining, particularly regarding the maintenance of high levels of MHC polymorphism in small, isolated populations. Here, we analyze the genetic variation at eight microsatellite loci and sequence variation at exon 2 of the MHC class IIB (DAB) genes in two wild populations of the Trinidadian guppy, Poecilia reticulata. We compare the genetic variation of a small (Ne ≈ 100) and relatively isolated upland population to that of its much larger (Ne ≈ 2400) downstream counterpart. As predicted, microsatellite diversity in the upland population is significantly lower and highly differentiated from the population further downstream. Surprisingly, however, these guppy populations are not differentiated by MHC genetic variation and show very similar levels of allelic richness. Computer simulations indicate that the observed level of genetic variation can be maintained with overdominant selection acting at three DAB loci. The selection coefficients differ dramatically between the upland (s ≥ 0.2) and lowland (s …) guppies in the upland habitat, which has resulted in high levels of MHC diversity being maintained in this population despite considerable genetic drift.

Phase II clinical trials are conducted to determine the optimal dose of the study drug for use in Phase III clinical trials while also balancing efficacy and safety. In conducting these trials, it may be important to consider subpopulations of patients grouped by background factors such as drug metabolism and kidney and liver function. Determining the optimal dose, as well as maximizing the effectiveness of the study drug by analyzing patient subpopulations, requires a complex decision-making process. In extreme cases, drug development has to be terminated due to inadequate efficacy or severe toxicity. Such a decision may be based on a particular subpopulation. We propose a Bayesian utility approach (BUART) to randomized Phase II clinical trials which uses a first-order bivariate normal dynamic linear model for efficacy and safety in order to determine the optimal dose and study population in a subsequent Phase III clinical trial. We carried out a simulation study under a wide range of clinical scenarios to evaluate the performance of the proposed method in comparison with a conventional method separately analyzing efficacy and safety in each patient population. The proposed method showed more favorable operating characteristics in determining the optimal population and dose.
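The utility idea behind such designs can be sketched in a drastically simplified form. This is not the BUART model itself (no dynamic linear model, no posterior updating); the weights, toxicity cap, doses, and probabilities below are invented for illustration only: given point estimates of efficacy and toxicity per dose, pick the admissible dose maximizing a linear efficacy-toxicity trade-off.

```python
def select_dose(doses, p_eff, p_tox, w_eff=1.0, w_tox=1.5, tox_cap=0.30):
    """Return the dose maximizing utility = w_eff*p_eff - w_tox*p_tox,
    skipping any dose whose estimated toxicity exceeds the cap.
    Returns (None, -inf) when no dose is admissible."""
    best, best_u = None, float("-inf")
    for d, pe, pt in zip(doses, p_eff, p_tox):
        if pt > tox_cap:          # safety constraint: dose is inadmissible
            continue
        u = w_eff * pe - w_tox * pt
        if u > best_u:
            best, best_u = d, u
    return best, best_u

# Hypothetical dose-finding scenario: the 40 mg dose is efficacious but too toxic
dose, util = select_dose([10, 20, 40], [0.20, 0.45, 0.60], [0.05, 0.10, 0.35])
```

In a full Bayesian design, p_eff and p_tox would be posterior summaries updated as data accrue, and the utility would be averaged over the posterior (possibly per subpopulation) rather than computed from point estimates.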

The goal of this systematic review is to evaluate the efficacy and safety of paricalcitol versus active non-selective vitamin D receptor activators (VDRAs) for secondary hyperparathyroidism (SHPT) management in chronic kidney disease (CKD) patients. PubMed, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), clinicaltrials.gov (inception to September 2015), and the ASN Web site were searched for relevant studies. A meta-analysis of randomized controlled trials (RCTs) and quasi-RCTs that assessed the effects and adverse events of paricalcitol and active non-selective VDRAs in adult CKD patients with SHPT was performed using Review Manager 5.2. A total of 10 trials involving 734 patients were identified for this review. The quality of included trials was limited, and very few trials reported all-cause mortality or cardiovascular calcification, without any differences between the two groups. Compared with active non-selective VDRAs, paricalcitol showed no significant difference in either PTH reduction (MD -7.78, 95% CI -28.59 to 13.03, P = 0.46) or the proportion of patients who achieved the target reduction of PTH (OR 1.27, 95% CI 0.87-1.85, P = 0.22). In addition, no statistical differences were found in terms of serum calcium, episodes of hypercalcemia, serum phosphorus, calcium × phosphorus products, and bone metabolism indices. Current evidence is insufficient to show that paricalcitol is superior to active non-selective VDRAs in lowering PTH or reducing the burden of mineral loading. Further trials are required to prove the tissue-selective effect of paricalcitol and to overcome the limitations of current research.

The purpose and aim of this research was to (1) identify the factors that contribute towards job burnout in sales service representatives, (2) determine the relationships among these factors, and (3) empirically test the relationships of the determinants relating to burnout in customer service representatives. Based on a literature survey, six different variables related to burnout were identified: (1) emotional exhaustion, (2) reduced personal accomplishment, (3) job-induced tension, (4) job satisfaction, (5) workload, and (6) depersonalization. Each of the variables contained 3 sub-variables. Five different hypotheses were developed and tested through techniques such as the Z-test, F-test and regression analysis. The questionnaire administered for the study contained 15 questions including personal data. The subjects were Moblink company customer sales service representatives in Karachi. The valid sample size was 98, drawn through a multi-cluster technique. Techniques such as measures of dispersion and measures of central tendency were used for analyzing the data. Regression, the Z-test, and the F-test were used for testing the developed hypotheses. According to the respondents' opinions, reduced personal accomplishment had the highest rating with a mean of 3.75 and job-induced tension had the lowest mean of 3.58. The standard deviation of respondents' opinions was highest for the dimension depersonalization and least for the dimension workload. This indicates that there is high polarization of the respondents' opinions on the dimension depersonalization and least on the dimension workload. The skewnesses for all the dimensions were negative except for the determinants emotional exhaustion and workload. This indicates that the majority of respondents' opinions on all the dimensions were below the mean except in the case of emotional exhaustion and workload. Five hypotheses were developed and tested: (a) the hypothesis relating to low level of burnout in customers …

A few simple problems relating to random magnetic systems are presented. Translational symmetry is assumed for these systems only on the macroscopic scale. On the microscopic scale, a random set of parameters for the various regions of these systems is assumed, obeying a probability distribution for the randomness. Knowledge of the form of these probability distributions is assumed in all cases.

Background To cope at home, community-dwelling older people surviving a hip fracture need a sufficient amount of functional ability and mobility. There is a lack of evidence on the best practices supporting recovery after hip fracture. The purpose of this article is to describe the design, intervention and demographic baseline results of a study investigating the effects of a rehabilitation program aiming to restore mobility and functional capacity among community-dwelling participants after hip fracture. Methods/Design A population-based sample of over 60-year-old community-dwelling men and women operated on for hip fracture (n = 81, mean age 79 years, 78% women) participated in this study and were randomly allocated into control (Standard Care) and ProMo intervention groups on average 10 weeks post fracture and 6 weeks after discharge to home. Standard Care included a written home exercise program with 5-7 exercises for the lower limbs. Of all participants, 12 got a referral to physiotherapy. After discharge to home, only 50% adhered to Standard Care. None of the participants were followed up for Standard Care adherence or mobility recovery. The ProMo intervention included Standard Care and a year-long program including evaluation/modification of environmental hazards, guidance for safe walking, pain management, a progressive home exercise program and physical activity counseling. Measurements included a comprehensive battery of laboratory tests and self-reports on mobility limitation, disability, physical functional capacity and health, as well as assessments of the key prerequisites for mobility, disability and functional capacity. All assessments were performed blinded at the research laboratory. No significant differences were observed between intervention and control groups in any of the demographic variables. Discussion Ten weeks post hip fracture, only half of the participants were compliant with Standard Care. No follow-up for Standard Care or mobility recovery was provided.

We examined the sensitivity of four representative cephalometric angles to the detection of different vectors of craniofacial growth. Landmark coordinate data from a stratified random sample of 48 adolescent subjects were used to calculate conventional values for changes between the pretreatment and end-of-treatment lateral cephalograms. By modifying the end-of-treatment coordinate values appropriately, the angular changes could be recalculated to reflect three hypothetical situations: Case 1, what if there were no downward landmark displacement between timepoints? Case 2, what if there were no forward landmark displacement between timepoints? Case 3, what if there were no Nasion change? These questions were asked for four representative cephalometric angles: SNA, ANB, NAPg and UI-SN. For Case 1, the associations (r) between the baseline and the modified measure for the three angles were very highly significant (P < 0.001), with r² values no lower than 0.94. For Case 2, however, the associations were much weaker and no r value reached significance. These angular measurements are thus less sensitive for measuring downward landmark displacement than for measuring forward landmark displacement.
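The "what if" recalculation described above amounts to recomputing a cephalometric angle after suppressing one component of a landmark's displacement. A minimal numeric sketch follows; all landmark coordinates and displacements are invented for illustration and are not data from the study:

```python
import numpy as np

def angle(a, b, c):
    """Angle ABC in degrees from 2D landmark coordinates (vertex at b)."""
    u, v = np.asarray(a, float) - b, np.asarray(c, float) - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical pretreatment landmarks (x forward, y downward): Sella, Nasion, A point.
S, N, A = np.array([0.0, 0.0]), np.array([70.0, -20.0]), np.array([65.0, 35.0])
sna_before = angle(S, N, A)

# End-of-treatment: A point displaced 2 units forward and 4 units downward.
A_after = A + np.array([2.0, 4.0])
sna_after = angle(S, N, A_after)

# "Case 1": recalculate with the downward component of displacement removed.
A_no_down = A + np.array([2.0, 0.0])
sna_no_down = angle(S, N, A_no_down)

print(sna_before, sna_after, sna_no_down)
```

Comparing the recalculated change (`sna_no_down - sna_before`) with the observed change (`sna_after - sna_before`) indicates how much of the angular change is driven by the downward component of growth.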

As scholars involved with the Search for Extraterrestrial Intelligence (SETI) have contemplated how we might portray humankind in any messages sent to civilizations beyond Earth, one of the challenges they face is adequately representing the diversity of human cultures. For example, in a 2003 workshop in Paris sponsored by the SETI Institute, the International Academy of Astronautics (IAA) SETI Permanent Study Group, the International Society for the Arts, Sciences and Technology (ISAST), and the John Templeton Foundation, a varied group of artists, scientists, and scholars from the humanities considered how to encode notions of altruism in interstellar messages. Though the group represented 10 countries, most were from Europe and North America, leading to the group's recommendation that subsequent discussions on the topic should include more globally representative perspectives. As a result, the IAA Study Group on Interstellar Message Construction and the SETI Institute sponsored a follow-up workshop in Santa Fe, New Mexico, USA in February 2005. The Santa Fe workshop brought together scholars from a range of disciplines including anthropology, archaeology, chemistry, communication science, philosophy, and psychology. Participants included scholars familiar with interstellar message design as well as specialists in cross-cultural research who had participated in the Symposium on Altruism in Cross-cultural Perspective, held just prior to the workshop during the annual conference of the Society for Cross-cultural Research. The workshop included discussion of how cultural understandings of altruism can complement and critique the more biologically based models of altruism proposed for interstellar messages at the 2003 Paris workshop. This paper, written by the chair of both the Paris and Santa Fe workshops, will explore the challenges of communicating concepts of altruism that draw on both biological and cultural models.

By the use of both perturbative and non-perturbative solutions of the reduced Rayleigh equation, we present a detailed study of the scattering of light from two-dimensional weakly rough dielectric films. It is shown that for several rough film configurations, Selényi interference rings exist in the diffusely scattered light. For film systems supported by dielectric substrates where only one of the two interfaces of the film is weakly rough and the other planar, Selényi interference rings are observed at angular positions that can be determined from simple phase arguments. For such single-rough-interface films, we find and explain by a single scattering model that the contrast in the interference patterns is better when the top interface of the film (the interface facing the incident light) is rough than when the bottom interface is rough. When both film interfaces are rough, Selényi interference rings exist but a potential cross-correlation of the two rough interfaces of the film can be used to selectively enhance some of the interference rings while others are attenuated and might even disappear. This feature may in principle be used in determining the correlation properties of interfaces of films that otherwise would be difficult to access.

In this article, concept, one of the principal notions of cognitive linguistics, is investigated. Considering concept as a cultural phenomenon that has language realization and ethnocultural peculiarities, a description of the concept “happiness” is presented. The lexical and semantic paradigm of the concept of happiness correlates with a great number of lexical and semantic variants. Semantic representatives of the concept of happiness, covering supreme spiritual values, are revealed, and a semantic interpretation of their functioning in Biblical discourse is given.

Objective: To explore the feasibility of a random-digit dial (RDD) cellular phone survey in order to reach a national and representative sample of college students. Methods: Demographic distributions from the 2011 National Young Adult Health Survey (NYAHS) were benchmarked against enrollment numbers from the Integrated Postsecondary Education…

The article deals with the similarities between conspicuous waste and the representativeness heuristic. Conspicuous waste is analyzed according to the classic Veblenian interpretation as a strategy to increase social status through conspicuous consumption and conspicuous leisure. In "The Theory of the Leisure Class," Veblen introduced two different types of utility: conspicuous and functional. The article focuses on the possible benefits of analyzing conspicuous utility not only in terms of institutional economic theory, but also in terms of behavioral economics. To this end, the representativeness heuristic is considered, on the one hand, as a way to optimize the decision-making process, which allows examining it in comparison with Simon's procedural rationality. On the other hand, it is also analyzed as a cognitive bias within Kahneman and Tversky's approach. The article provides an analysis of the patterns in deviations from the rational behavior strategy that can be observed in cases of conspicuous waste, both in modern market economies in the form of conspicuous consumption and in archaic economies in the form of gift exchange. The article also focuses on marketing strategies for advertising luxury consumption. It highlights the impact of symbolic capital (in Bourdieu's interpretation) on the social and symbolic payoffs that actors get from the act of conspicuous waste. This makes it possible to analyze conspicuous consumption both as a rational way to obtain a particular kind of payoff and, at the same time, as a form of institutionalized cognitive bias.

Background Knee osteoarthritis is a major cause of pain and functional limitation. Complementary and alternative medical approaches have been employed to relieve symptoms and to avoid the side effects of conventional medication. Moxibustion has been widely used to treat patients with knee osteoarthritis. Our past research suggested that heat-sensitive moxibustion might be superior to conventional moxibustion. Our objective is to investigate the effectiveness of heat-sensitive moxibustion compared with conventional moxibustion or conventional drug treatment. Methods This study consists of a multi-centre (four centers in China), randomised, controlled trial with three parallel arms (A: heat-sensitive moxibustion; B: conventional moxibustion; C: conventional drug group). The moxibustion locations differ between groups A and B. Group A selects a heat-sensitization acupoint from the region consisting of Yin Lingquan (SP9), Yang Lingquan (GB34), Liang Qiu (ST34), and Xue Hai (SP10). Meanwhile, fixed acupoints are used in group B, namely Xi Yan (EX-LE5) and He Ding (EX-LE2). The conventional drug group is treated with intra-articular Sodium Hyaluronate injection. The outcome measures will be assessed before the treatment, at 30 days after the last moxibustion session, and at 6 months after the last moxibustion session. Discussion This trial will utilize high quality trial methodologies in accordance with CONSORT guidelines. It will provide evidence for the effectiveness of moxibustion as a treatment for moderate and severe knee osteoarthritis. Moreover, the result will clarify the rules of heat-sensitive moxibustion location to improve the therapeutic effect of suspended moxibustion, and propose a new concept and a new theory of moxibustion to guide clinical practices. Trial Registration The trial is registered at Controlled Clinical Trials: ChiCTR-TRC-00000600.

To observe the effect of the soothing-liver and nourishing-heart acupuncture method on the onset of selective serotonin reuptake inhibitor (SSRI) treatment effect in patients with depressive disorder, and on related neuroimmunological indicators. Overall, 126 patients with depressive disorder were randomly divided into a medicine group and an acupuncture-medicine group using a random number table. Patients were treated for 6 consecutive weeks. The two groups were evaluated by the Montgomery-Asberg Depression Rating Scale (MADRS) and Side Effects Rating Scale (SERS) to assess the effect of the soothing-liver and nourishing-heart acupuncture method on early onset of the SSRI treatment effect. Changes in serum 5-hydroxytryptamine (5-HT) and inflammatory cytokines before and after treatment were recorded and compared between the medicine group and the acupuncture-medicine group. The acupuncture-medicine group had significantly lower MADRS scores at weeks 1, 2, 4, and 6 after treatment compared with the medicine group (P < 0.05). The anti-inflammatory cytokines IL-4 and IL-10 were significantly higher in the acupuncture-medicine group than in the medicine group (P < 0.05). Soothing-liver and nourishing-heart acupuncture can accelerate the onset of the SSRI treatment effect in patients with depressive disorder and can significantly reduce the adverse reactions of SSRIs. Moreover, acupuncture can enhance serum 5-HT and regulate the balance of pro-inflammatory and anti-inflammatory cytokines.

Our current work concerns the development of a decision support system for the software selection problem. The main idea is to utilize expert knowledge to help the user in selecting the best software / method / computational resource to solve a computational problem. Obviously, this involves multicriterial decision making and the key open question is: which method to choose. The context of the work is provided by the Agents in Grid (AiG) project, where the software selection (and thus multicriterial analysis) is to be realized when all information concerning the problem, the hardware and the software is ontologically represented. Initially, we have considered the Analytical Hierarchy Process (AHP), which is well suited for the hierarchical data structures (e.g., such that have been formulated in terms of ontologies). However, due to its well-known shortcomings, we have decided to extend our search for the multicriterial analysis method best suited for the problem in question. In this paper we report results of our search, which involved: (i) TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), (ii) PROMETHEE, and (iii) GRIP (Generalized Regression with Intensities of Preference). We also briefly argue why other methods have not been considered as valuable candidates.
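Of the candidate methods named above, TOPSIS is the most direct to sketch: alternatives are ranked by closeness to an ideal solution. The decision matrix, weights, and criteria below are hypothetical placeholders, not data from the AiG project:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS closeness to the ideal solution.

    matrix  : (n_alternatives, n_criteria) performance table
    weights : criterion weights summing to 1
    benefit : True for benefit criteria (higher is better), False for cost
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    # Vector-normalize each criterion column, then apply the weights.
    v = (m / np.linalg.norm(m, axis=0)) * w
    # Ideal and anti-ideal points per criterion.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    # Relative closeness: 1 = ideal alternative, 0 = anti-ideal.
    return d_neg / (d_pos + d_neg)

# Three hypothetical software packages scored on speed (benefit) and cost.
scores = topsis([[250, 16], [200, 20], [300, 12]],
                weights=[0.5, 0.5], benefit=[True, False])
print(scores.argmax())  # index of the best alternative (here: 2)
```

The third alternative dominates on both criteria, so it coincides with the ideal point and gets closeness 1; PROMETHEE and GRIP would rank by pairwise preference flows instead.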

Big data from the Internet of Things may create big challenges for data classification. Most active learning approaches select either uncertain or representative unlabeled instances to query their labels. Although several active learning algorithms have been proposed to combine the two criteria for query selection, they are usually ad hoc in finding unlabeled instances that are both informative and representative, and they fail to take the diversity of instances into account. We address this challenge by presenting a new active learning framework which considers uncertainty, representativeness, and diversity. The proposed approach provides a systematic way of measuring and combining the uncertainty, representativeness, and diversity of an instance. First, instances' uncertainty and representativeness are used to constitute the most informative set; then the kernel k-means clustering algorithm is used to filter out redundant samples, and the resulting samples are queried for labels. Extensive experimental results show that the proposed approach outperforms several state-of-the-art active learning approaches.
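The two-stage pipeline (informativeness scores, then a clustering pass for diversity) can be sketched roughly as follows. This toy version substitutes plain k-means for the kernel k-means step and invents a small random pool and model probabilities, so it illustrates the structure of the approach rather than the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_queries(X, proba, n_candidates=20, n_queries=5, n_iter=20):
    """Pick query points that are uncertain, representative, and diverse.

    X     : (n, d) unlabeled pool
    proba : (n,) model probability for the positive class (assumed given)
    """
    # Uncertainty: highest near proba = 0.5 for a binary classifier.
    uncertainty = 1.0 - 2.0 * np.abs(proba - 0.5)
    # Representativeness: mean RBF similarity to the rest of the pool.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    representativeness = np.exp(-d2 / d2.mean()).mean(axis=1)
    info = uncertainty * representativeness
    # Stage 1: keep the most informative candidates.
    cand = np.argsort(info)[-n_candidates:]
    # Stage 2: k-means over the candidates (stand-in for kernel k-means),
    # then query the most informative member of each cluster.
    C = X[cand][rng.choice(len(cand), n_queries, replace=False)]
    for _ in range(n_iter):
        lab = ((X[cand][:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(n_queries):
            if np.any(lab == k):
                C[k] = X[cand][lab == k].mean(axis=0)
    picks = []
    for k in range(n_queries):
        members = cand[lab == k]
        if len(members):
            picks.append(members[info[members].argmax()])
    return np.array(picks)

X = rng.normal(size=(100, 2))
proba = rng.uniform(size=100)
q = select_queries(X, proba)
print(len(q))  # up to 5 diverse, informative query indices
```

Taking one representative per cluster is what removes redundant near-duplicates from the query batch.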

We analyze dynamic local interaction in population games where the local interaction structure (modeled as a graph) can change over time: a stochastic process generates a random sequence of graphs. This contrasts with models where the initial interaction structure (represented by a deterministic graph or the realization of a random graph) cannot change over time.

In this study, the Computed Tomography (CT) and gamma-ray attenuation (GRA) techniques were used in the investigation of representative sample sizes for attributes related to soil structure. First of all, the representative elementary length (REL) for experimental measurements of the soil mass attenuation coefficient (μes), of samples from a sandy and a clayey soil, was analyzed. The study was conducted with two radioactive sources (241Am and 137Cs), three collimators (2-4 mm diameters), and 14 sample thicknesses (x) (2-5 cm). From these analyses, it was possible to identify an ideal thickness range for each of the studied sources (2-4 cm and 12-15 cm for the 241Am and 137Cs sources, respectively). The application of these results in representative elementary area evaluations, in clayey soil clods via CT, indicated that experimental soil mass attenuation coefficient average values obtained for x > 4 cm and the 241Am source might induce the use of samples which are not large enough for soil bulk density evaluations. Subsequently, μCT images with a total volume of 39×39×33 mm³ and a spatial resolution of 60 μm were used for macroporous system morphological characterization of a Rhodic Ferralsol with clayey texture, under no-till (NT) and conventional till (CT) systems. Attributes such as macroporosity (MAP), number of macropores (NMAP), tortuosity (τ) and connectivity (C) of the pores were assessed. The C degree was estimated based on the Euler-Poincaré characteristic (EPC). Since 3D images enable the study of these attributes in different sample volumes, the proposed study is ideal for the analysis of representative elementary volume (REV). Usually, the selection of subvolumes for REV analysis occurs concentrically around a small volume or in adjacent positions. Here, we introduce a new method for selecting the positions of subvolumes, which are randomly chosen within the total image volume (random selection). It was observed that higher fluctuations in amplitude of each
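The random-selection REV idea, cubic subvolumes placed at random positions within the total image volume, with fluctuations shrinking as the subvolume grows, can be sketched on a synthetic binary image. The image and its porosity below are invented stand-ins for the μCT data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 3D binary image: 1 = macropore voxel (hypothetical stand-in
# for a segmented muCT stack with ~15% macroporosity).
image = (rng.random((120, 120, 120)) < 0.15).astype(np.uint8)

def random_subvolume_porosity(image, side, n_samples=30):
    """Macroporosity of cubic subvolumes placed at random positions."""
    nz, ny, nx = image.shape
    vals = []
    for _ in range(n_samples):
        z = rng.integers(0, nz - side + 1)
        y = rng.integers(0, ny - side + 1)
        x = rng.integers(0, nx - side + 1)
        sub = image[z:z + side, y:y + side, x:x + side]
        vals.append(sub.mean())  # fraction of macropore voxels
    return np.array(vals)

# REV check: the spread of porosity estimates shrinks as the subvolume grows.
for side in (10, 30, 60):
    p = random_subvolume_porosity(image, side)
    print(side, round(p.mean(), 3), round(p.std(), 4))
```

The REV is reached once enlarging the subvolume no longer changes the attribute's mean appreciably and the between-subvolume fluctuations fall below a chosen tolerance.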

Back in the 1970s a number of ethnomusicologists started to elaborate a theoretical reflection on performance as a central issue in the study of music making. This forced them to develop other ways of visualizing music for their analytical purposes. This article deals with how performance has been represented in ethnomusicological studies. I shall discuss how the graphic rendition of a sound recording is simply the mirror of what a scholar perceives, or the consequence of his/her will to emphasise a specific aspect, mediated through the possibilities offered by (and the limits of) the Western semiographic system. After presenting a series of examples of how various scholars chose to graphically visualize musical performance, this paper shows how contemplating the strategies used to visualize performance in ethnomusicological studies can be a fruitful way of reflecting upon various topics, namely (1) the impassable limits of score transcription for understanding music as a performative phenomenon; (2) the analysis of the graphic solutions adopted by the ethnomusicologist as a way to better understand their idea of what the performance is; (3) the role played by technology in promoting new analytical approaches and methodologies; and (4) analysis in ethnomusicology as an "artisanal process".

During my tenure as an AGU Congressional Science Fellow, which began in September 2010 and continues until November 2011, my time has been shared between working with the U.S. House of Representatives Natural Resource Committee Democratic staff and in the office of Rep. Ed Markey (D-Mass., ranking Democrat on the committee). I appreciate getting to work with staff, fellows, and interns who inspire me, make me laugh, and know their issues cold. Much of my work on the committee is related to fish, wildlife, oceans, lands, and water issues and is directly related to my background in ecology and evolutionary biology (I studied zebra ecology and behavior in Kenya). My assignments have included asking the Environmental Protection Agency (EPA) about why it has not changed the allowed usage of certain pesticides that the National Marine Fisheries Service has found to jeopardize the recovery of endangered Pacific salmon; helping to identify research needs and management options to combat the swiftly spreading and catastrophic white nose syndrome in North American bats; and inquiring as to whether a captive-ape welfare bill, if passed without amendment, could thwart development of a vaccine to stop the Ebola virus from continuing to cause mass mortality in endangered wild apes.

The public administration is closely linked to the political power of the state, which implies that its relationship of subordination is not to political parties or government coalitions, but rather to the political power democratically and constitutionally formed at the level of the state's representative bodies. Subordination and separation are not dichotomous; they are not incompatible, but form what philosophy knows as inseparable and complementary opposites. Adopting an essentially etymological definition (demos = people and kratos = power), democracy is the people's government. The personalization of political life increases with the proliferation of monocratic institutions as part of the state. Under these conditions, elections, particularly after the party list has been renounced, tend to transform into a ritual, a procedure for identifying people, which breaks down into an almost unconditional delegation of authority. So, beware of politicians who wish, under the reasoning of the elective investment, to be players in the democracy game while ignoring the rules established in the constitution.

The purpose of this study was to compare the efficacy and safety of aripiprazole versus bupropion augmentation in patients with major depressive disorder (MDD) unresponsive to selective serotonin reuptake inhibitors (SSRIs). This is the first randomized, prospective, open-label, direct comparison study between aripiprazole and bupropion augmentation. Participants had at least moderately severe depressive symptoms after 4 weeks or more of SSRI treatment. A total of 103 patients were randomized to either aripiprazole (n = 56) or bupropion (n = 47) augmentation for 6 weeks. Concomitant use of psychotropic agents was prohibited. Montgomery Asberg Depression Rating Scale, 17-item Hamilton Depression Rating scale, Iowa Fatigue Scale, Drug-Induced Extrapyramidal Symptoms Scale, Psychotropic-Related Sexual Dysfunction Questionnaire scores were obtained at baseline and after 1, 2, 4, and 6 weeks of treatment. Overall, both treatments significantly improved depressive symptoms without causing serious adverse events. There were no significant differences in the Montgomery Asberg Depression Rating Scale, 17-item Hamilton Depression Rating scale, and Iowa Fatigue Scale scores, and response rates. However, significant differences in remission rates between the 2 groups were evident at week 6 (55.4% vs 34.0%, respectively; P = 0.031), favoring aripiprazole over bupropion. There were no significant differences in adverse sexual events, extrapyramidal symptoms, or akathisia between the 2 groups. The present study suggests that aripiprazole augmentation is at least comparable to bupropion augmentation in combination with SSRI in terms of efficacy and tolerability in patients with MDD. Both aripiprazole and bupropion could help reduce sexual dysfunction and fatigue in patients with MDD. Aripiprazole and bupropion may offer effective and safe augmentation strategies in patients with MDD who are unresponsive to SSRIs. Double-blinded trials are warranted to confirm the present findings.

Therapeutic selective nerve root blocks (SNRBs) are a common intervention for patients with sciatica. Patients often are referred to physical therapy after SNRBs, although the effectiveness of this intervention sequence has not been investigated. This study was a preliminary investigation of the effectiveness of SNRBs, with or without subsequent physical therapy, in people with low back pain and sciatica. This investigation was a pilot randomized controlled clinical trial. The settings were spine specialty and physical therapy clinics. Forty-four participants (64% men; mean age=38.5 years, SD=11.6 years) with low back pain, with clinical and imaging findings consistent with lumbar disk herniation, and scheduled to receive SNRBs participated in the study. They were randomly assigned to receive either 4 weeks of physical therapy (SNRB+PT group) or no physical therapy (SNRB alone [SNRB group]) after the injections. All participants received at least 1 SNRB; 28 participants (64%) received multiple injections. Participants in the SNRB+PT group attended an average of 6.0 physical therapy sessions over an average of 23.9 days. Outcomes were assessed at baseline, 8 weeks, and 6 months with the Low Back Pain Disability Questionnaire, a numeric pain rating scale, and the Global Rating of Change. Significant reductions in pain and disability occurred over time in both groups, with no differences between groups at either follow-up for any outcome. Nine participants (5 in the SNRB group and 4 in the SNRB+PT group) underwent surgery during the follow-up period. The limitations of this study were a relatively short-term follow-up period and a small sample size. A physical therapy intervention after SNRBs did not result in additional reductions in pain and disability or perceived improvements in participants with low back pain and sciatica.

Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

This article discusses some ethical principles for distributing pandemic influenza vaccine and other indivisible goods. I argue that a number of principles for distributing pandemic influenza vaccine recently adopted by several national governments are morally unacceptable because they put too much emphasis on utilitarian considerations, such as the ability of the individual to contribute to society. Instead, it would be better to distribute vaccine by setting up a lottery. The argument for this view is based on a purely consequentialist account of morality; i.e. an action is right if and only if its outcome is optimal. However, unlike utilitarians I do not believe that alternatives should be ranked strictly according to the amount of happiness or preference satisfaction they bring about. Even a mere chance to get some vaccine matters morally, even if it is never realized.

Application-oriented Wireless Sensor Networks (WSNs) promise to be one of the most useful technologies of this century. However, secure communication between nodes in WSNs is still an unresolved issue. In this context, we propose two protocols, Optimal Secure Path (OSP) and Sub-optimal Secure Path (SSP), to minimize the outage probability of secrecy capacity in the presence of multiple eavesdroppers. We consider dissimilar fading at the main and wiretap links and provide a detailed evaluation of the impact of the Nakagami-m and Rician-K factors on the secrecy performance of WSNs. Extensive simulations are performed to validate our findings. Although the optimal scheme ensures more security, the sub-optimal scheme proves to be a more practical approach to securing wireless links.
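The quantity being minimized, the outage probability of the secrecy capacity under dissimilar fading, can be estimated by Monte Carlo simulation. The sketch below assumes unit-mean Nakagami-m (main link) and Rician-K (wiretap link) power gains and hypothetical SNRs and target rate; it does not reproduce the OSP/SSP path-selection logic itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def secrecy_outage(m_main, K_eve, snr_main, snr_eve, rate_s, n=200_000):
    """Monte Carlo outage probability P(Cs < rate_s).

    Main link:    Nakagami-m fading -> power gain ~ Gamma(m, 1/m), mean 1.
    Wiretap link: Rician-K fading   -> power gain via scaled noncentral
                  chi-square with 2 degrees of freedom, mean 1.
    """
    g_main = rng.gamma(shape=m_main, scale=1.0 / m_main, size=n)
    g_eve = rng.noncentral_chisquare(df=2, nonc=2 * K_eve, size=n) \
        / (2 * (K_eve + 1))
    # Secrecy capacity: main-link rate minus eavesdropper rate, floored at 0.
    c_s = np.maximum(0.0, np.log2(1 + snr_main * g_main)
                     - np.log2(1 + snr_eve * g_eve))
    return np.mean(c_s < rate_s)

# A stronger main link lowers the secrecy outage probability.
print(secrecy_outage(m_main=2, K_eve=3, snr_main=100, snr_eve=10, rate_s=1.0))
print(secrecy_outage(m_main=2, K_eve=3, snr_main=10, snr_eve=10, rate_s=1.0))
```

A path-selection protocol in this spirit would evaluate such an outage estimate per candidate relay path and forward over the path with the smallest value.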

We present the 5-year clinical outcomes according to treatment strategy with additional risk stratification of the ICTUS (Invasive versus Conservative Treatment in Unstable coronary Syndromes) trial. Long-term outcomes may be relevant to decide treatment strategy for patients presenting with non-ST-segment elevation acute coronary syndromes (NSTE-ACS) and elevated troponin T. We randomly assigned 1,200 patients to an early invasive or selective invasive strategy. The outcomes were the composite of death or myocardial infarction (MI) and its individual components. Risk stratification was performed with the FRISC (Fast Revascularization in InStability in Coronary artery disease) risk score. At 5-year follow-up, revascularization rates were 81% in the early invasive and 60% in the selective invasive group. Cumulative death or MI rates were 22.3% and 18.1%, respectively (hazard ratio [HR]: 1.29, 95% confidence interval [CI]: 1.00 to 1.66, p = 0.053). No difference was observed in mortality (HR: 1.13, 95% CI: 0.80 to 1.60, p = 0.49) or MI (HR: 1.24, 95% CI: 0.90 to 1.70, p = 0.20). After risk stratification, no benefit of an early invasive strategy was observed in reducing death or spontaneous MI in any of the risk groups. In patients presenting with NSTE-ACS and elevated troponin T, we could not demonstrate a long-term benefit of an early invasive strategy in reducing death or MI. (Invasive versus Conservative Treatment in Unstable coronary Syndromes [ICTUS]; ISRCTN82153174). Copyright 2010 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

Pain is the common complication after a surgery. The aim of this study was to evaluate the effect of aromatherapy with Rosa damascena Mill. on the postoperative pain in children. In a double-blind, placebo-controlled clinical trial, we selected 64 children of 3-6 years of age through convenient sampling and divided them randomly into two groups. Patients in group A were given inhalation aromatherapy with R. damascena Mill., and in group B, the patients were given almond oil as a placebo. Inhalation aromatherapy was used at the first time of subjects' arrival to the ward and then at 3, 6, 9, and 12 h afterward. Common palliative treatments to relieve pain were used in both groups. Thirty minutes after aromatherapy, the postoperative pain in children was evaluated with the Toddler Preschooler Postoperative Pain Scale (TPPPS). Data were statistically analyzed using Chi-square test, one-way analysis of variance (ANOVA), and repeated measures ANOVA. There was no significant difference in pain scores at the first time of subjects' arrival to the ward (before receiving any aromatherapy or palliative care) between the two groups. After each time of aromatherapy and at the end of treatment, the pain score was significantly reduced in the aromatherapy group with R. damascena Mill. compared to the placebo group. According to our results, aromatherapy with R. damascena Mill. can be used in postoperative pain in children, together with other common treatments without any significant side effects.

original study design. We purposively selected facilities in the districts/regions, though originally the study clusters were to be randomly selected. Lifelong antiretroviral therapy for all HIV-positive pregnant and lactating women, Option B+, was implemented in the three countries during the study period, with the potential for a differential impact by study arm. Implementation, however, was rolled out rapidly across the districts/regions, so this potential confounding is unlikely. We developed a system for monitoring and documenting potentially confounding activities or actions, and these data will be incorporated into analyses at the conclusion of the project. Strengths of the study are that it tests multilevel interventions, utilizes programme data as well as study-specific and individual data, and is conducted under "real conditions", leading to more robust findings. Limitations of the protocol include the lack of a true control arm and inadequate control for the potential effect of Option B+, such as the intensification of messages about the importance of early ANC and male partner testing. ClinicalTrials.gov (study ID: NCT01971710). Protocol version 5, 30 July 2013; registered 13 August 2013.

The purpose of this standard is to help ensure that DOE Facility Representatives are selected based on consistently high standards and from the best qualified candidates, that they receive the necessary training, and that their duties are well understood and documented. The standard defines the duties, responsibilities, and qualifications for Facility Representatives, based on facility hazard classification; risks to workers, the public, and the environment; and the operational activity level. Guidance provided includes: (1) an approach for determining the required facility coverage; (2) the duties, responsibilities, and authorities of a Facility Representative; (3) training and qualifications expected of a Facility Representative; and (4) elements necessary for successful Facility Representative Programs at DOE Field Offices. This guidance was written primarily to address nuclear facilities. 12 refs., 2 tabs.

Discusses the importance of science process skills and describes ways to select sets of random numbers for selection of subjects for a research study in an unbiased manner. Presents an activity appropriate for grades 5-12. (JRH)

Randomized trials and meta-analyses demonstrated that a routine invasive strategy improves outcomes in patients with non-ST-elevation acute coronary syndrome (NSTE-ACS) compared to a selective invasive strategy. Benefit was driven primarily by a reduction in the risk of myocardial infarction. However, the impact of either strategy on long-term mortality is unknown. Trials that compared a routine invasive strategy versus a selective invasive strategy in patients with NSTE-ACS and reported data on all-cause mortality ≥1 year were included. Summary odds ratios (OR) were constructed using Peto's model for all-cause mortality using the longest available follow-up data. Subgroup analysis was performed for follow-up at 1 to ≤5 years and >5 years. Eight trials with 6,657 patients were available for analysis. At a mean of 10.3 years, the risk of all-cause mortality was similar with both strategies (28.5% vs 28.5%; OR 1.00, 95% confidence interval [CI] 0.90 to 1.12, p = 0.97). This effect was similar on subgroup analysis for follow-up at 1 to ≤5 years (OR 0.89, 95% CI 0.77 to 1.04, p = 0.15) and >5 years (OR 1.02, 95% CI 0.90 to 1.14, p = 0.79). There was no difference in treatment effect across various study-level covariates such as age, gender, diabetes, and positive troponin (all P for interaction >0.05). In conclusion, in patients with NSTE-ACS, both routine invasive and selective invasive strategies have a similar risk of all-cause mortality at ∼10 years. This illustrates there are still opportunities to change the trajectory of mortality events among invasively treated patients with NSTE-ACS. Published by Elsevier Inc.

Objective: Metabolic Syndrome (MetSyn) describes a cluster of metabolic disorders and is considered a risk factor for the development of cardiovascular disease. Although a high prevalence is commonly assumed in Germany, data about the degree of its occurrence in the population and in subgroups are still missing. The aim of this study was to assess the prevalence of the MetSyn according to the NCEP ATP-III (National Cholesterol Education Program Adult Treatment Panel III) criteria in persons aged ≥18 years attending a general practitioner in Germany. Here we describe in detail the methods used and the feasibility of determining the MetSyn in a primary health care setting. Research design and methods: The Germany-wide cross-sectional study was performed during two weeks in October 2005. Blood samples were analyzed in a central laboratory. Waist circumference and blood pressure were assessed, and data on smoking, lifestyle, fasting status, socio-demographic characteristics and core information from non-participants were collected. Quality control procedures included telephone monitoring and random on-site visits. In order to achieve a maximal number of fasting blood samples with a minimal need for follow-up appointments, a stepwise approach was developed. Basic descriptive statistics were calculated, and the Taylor expansion method was used to estimate the standard errors needed for calculating confidence intervals for clustered observations. Results: In total, 1511 randomly selected general practices from 397 out of 438 German cities and administrative districts enrolled 35,869 patients (age range: 18-99; women 61.1%). More than 50,000 blood samples were taken. Fasting blood samples were available for 49% of the participants. Of the participating patients, 99.3% returned questionnaires to the GP; only 12% were not filled out completely. The overall prevalence of the MetSyn (NCEP/ATP III 2001) was found to be 19.8%, with men showing higher prevalence rates than women (22

Depression is a global health challenge. Although there are effective psychological and pharmaceutical interventions, our best treatments achieve remission rates of less than one-third and limited sustained recovery. Underpinning this efficacy gap is limited understanding of how complex psychological interventions for depression work. Recent reviews have argued that the active ingredients of therapy need to be identified so that therapy can be made briefer and more potent, and to improve scalability. This in turn requires the use of rigorous study designs that test the presence or absence of individual therapeutic elements, rather than standard comparative randomised controlled trials. One such approach is the Multiphase Optimization Strategy, which uses efficient experimentation such as factorial designs to identify active factors in complex interventions. This approach has been successfully applied to behavioural health but not yet to mental health interventions. A Phase III randomised, single-blind, balanced fractional factorial trial, based in England and conducted on the internet, randomised at the level of the patient, will investigate the active ingredients of internet cognitive-behavioural therapy (CBT) for depression. Adults with depression (operationalized as a PHQ-9 score ≥ 10), recruited directly from the internet and from a UK National Health Service Improving Access to Psychological Therapies service, will be randomized across seven experimental factors, each reflecting the presence versus absence of specific treatment components (activity scheduling, functional analysis, thought challenging, relaxation, concreteness training, absorption, self-compassion training), using a 32-condition balanced fractional factorial design (a resolution-IV 2^(7-2) design). The primary outcome is symptoms of depression (PHQ-9) at 12 weeks. Secondary outcomes include symptoms of anxiety and process measures related to hypothesized mechanisms. Better understanding of the active ingredients of

disconnection of community members with both abstract and absolute representations of points, paths and areas. From this we discuss how the local concepts of space and time as frames of reference cannot be represented adequately with our current selection of contextual data, and how we are engaging...

The 'ingredients' which control a phase transition in well-defined systems as well as in random ones (e.g. random magnetic systems) are listed and discussed within a somewhat unifying perspective. Among these 'ingredients' are the couplings and elements responsible for the cooperative phenomenon, the topological connectivity as well as possible topological incompatibilities, the influence of new degrees of freedom, the order parameter dimensionality, the ground state degeneracy and finally the 'quanticity' of the system. The general trends, though illustrated in magnetic systems, essentially hold for all phase transitions, and give a basis for connecting this area with field theory, the theory of dynamical systems, etc. (Author)

In colorectal cancer (CRC), unresectable liver metastases are associated with a poor prognosis. The FOXFIRE (an open-label randomized phase III trial of 5-fluorouracil, oxaliplatin, and folinic acid +/- interventional radioembolization as first-line treatment for patients with unresectable liver-only or liver-predominant metastatic colorectal cancer), SIRFLOX (randomized comparative study of FOLFOX6m plus SIR-Spheres microspheres versus FOLFOX6m alone as first-line treatment in patients with nonresectable liver metastases from primary colorectal carcinoma), and FOXFIRE-Global (assessment of overall survival of FOLFOX6m plus SIR-Spheres microspheres versus FOLFOX6m alone as first-line treatment in patients with nonresectable liver metastases from primary colorectal carcinoma in a randomized clinical study) clinical trials were designed to evaluate the efficacy and safety of combining first-line chemotherapy with selective internal radiation therapy (SIRT) using yttrium-90 resin microspheres, also called transarterial radioembolization. The aim of this analysis is to prospectively combine clinical data from 3 trials to allow adequate power to evaluate the impact of chemotherapy with SIRT on overall survival. Eligible patients are adults with histologically confirmed CRC and unequivocal evidence of liver metastases which are not treatable by surgical resection or local ablation with curative intent at the time of study entry. Patients may also have limited extrahepatic metastases. Final analysis will take place when all participants have been followed up for a minimum of 2 years. Efficacy and safety estimates derived using individual participant data (IPD) from SIRFLOX, FOXFIRE, and FOXFIRE-Global will be pooled using 2-stage prospective meta-analysis. Secondary outcome measures include progression-free survival (PFS), liver-specific PFS, health-related quality of life, response rate, resection rate, and adverse event profile. The large study population will

Random insertion mutagenesis is a widely used technique for the identification of bacterial virulence genes. Most strategies for random mutagenesis involve cloning in Escherichia coli for passage of plasmids or for phenotypic selection. This can result in biased selection due to restriction or

Approximately 20 % of hepatocellular carcinoma (HCC) patients diagnosed in the early stages may benefit from potentially curative ablative therapies such as surgical resection, transplantation or radiofrequency ablation. For patients not eligible for such options, prognosis is poor. Sorafenib and Selective Internal Radiation Therapy (SIRT) are clinically proven treatment options in patients with unresectable HCC, and this study aims to assess overall survival following either SIRT or Sorafenib therapy for locally advanced HCC patients. This investigator-initiated, multi-centre, open-label, randomized, controlled trial will enrol 360 patients with locally advanced HCC, as defined by Barcelona Clinic Liver Cancer stage B or stage C, without distant metastases, and which is not amenable to immediate curative treatment. Exclusion criteria include previous systemic therapy, metastatic disease, complete occlusion of the main portal vein, or a Child-Pugh score of >7. Eligible patients will be randomised 1:1 and stratified by centre and presence or absence of portal vein thrombosis to receive either a single administration of SIRT using yttrium-90 resin microspheres (SIR-Spheres®, Sirtex Medical Limited, Sydney, Australia) targeted at HCC in the liver by the trans-arterial route or continuous oral Sorafenib (Nexavar®, Bayer Pharma AG, Berlin, Germany) at a dose of 400 mg twice daily until disease progression, no further response, complete regression or unacceptable toxicity. Patients for both the Sorafenib and SIRT arms will be followed-up every 4 weeks for the first 3 months and 12 weekly thereafter. Overall survival is the primary endpoint, assessed for the intention-to-treat population. Secondary endpoints are tumour response rate, time-to-tumour progression, progression free survival, quality of life and down-staging to receive potentially curative therapy. Definitive data comparing these two therapies will help to determine clinical practice in the large group of

The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model. (orig.)

Vermont Center for Geographic Information — (Link to Metadata) This coverage represents the results of an analysis of landscape diversity in Vermont. Polygons in the dataset represent as much as possible (in a...

28 Judicial Administration, § 104.4 Personal Representative (2010-07-01): ..., determine that the Personal Representative for purposes of compensation by the Fund is the first person in... under this provision. The claim forms shall require that the purported Personal Representative certify...

In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal (Gaussian) random noise, increasing the variance can also enhance the robustness of the system, under the condition that the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is less than the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active oscillators and inactive oscillators.

The primary objective of this research was to investigate the selection process used by consumers when choosing a restaurant at which to dine. This study examined literature on consumer behaviour, restaurant selection, and decision-making, underpinning the contention that service quality is linked to the consumer's selection of a restaurant. It supports utility theories holding that consumers buy bundles of attributes that, combined, represent a certain level of service quality at a certain p...

In this paper, we develop efficient multiscale methods for flows in heterogeneous media. We use the generalized multiscale finite element (GMsFEM) framework. GMsFEM approximates the solution space locally using a few multiscale basis functions. This approximation selects an appropriate snapshot space and a local spectral decomposition, e.g., the use of oversampled regions, in order to achieve an efficient model reduction. However, the successful construction of snapshot spaces may be costly if too many local problems need to be solved in order to obtain these spaces. We use a moderate quantity of local solutions (or snapshot vectors) with random boundary conditions on oversampled regions with zero forcing to deliver an efficient methodology. Motivated by the randomized algorithm presented in [P. G. Martinsson, V. Rokhlin, and M. Tygert, A Randomized Algorithm for the approximation of Matrices, YALEU/DCS/TR-1361, Yale University, 2006], we consider a snapshot space which consists of harmonic extensions of random boundary conditions defined in a domain larger than the target region. Furthermore, we perform an eigenvalue decomposition in this small space. We study the application of randomized sampling for GMsFEM in conjunction with adaptivity, where local multiscale spaces are adaptively enriched. Convergence analysis is provided. We present representative numerical results to validate the method proposed.
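The snapshot construction above builds on the randomized matrix-approximation algorithm of Martinsson, Rokhlin and Tygert cited in the abstract. As an illustrative sketch (not the paper's GMsFEM implementation; the matrix sizes, rank, and sample count below are made-up test values), probing a low-rank operator with a few random vectors and orthonormalizing the responses already yields an accurate reduced basis:

```python
import numpy as np

# Minimal sketch of the randomized range-finding idea: probe a matrix
# with a few random vectors, orthonormalize the responses, and use the
# resulting basis as a reduced approximation space.
rng = np.random.default_rng(0)

n, true_rank, n_samples = 200, 5, 10
# Rank-5 test matrix standing in for the fine-scale operator.
A = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n))

# "Snapshots": responses of A to random inputs (loosely analogous to
# harmonic extensions of random boundary conditions in the GMsFEM setting).
Omega = rng.standard_normal((n, n_samples))
Q, _ = np.linalg.qr(A @ Omega)          # orthonormal snapshot basis

A_approx = Q @ (Q.T @ A)                # project A onto the snapshot space
rel_err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
print(f"relative error: {rel_err:.2e}")
```

Because the number of random samples exceeds the true rank, the snapshot space captures the range of the operator almost surely, and the projection error is at the level of machine precision.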

A previous paper suggested that humans can generate genuinely random numbers. I tested this hypothesis by repeating the experiment with a larger number of highly numerate subjects, asking them to call out a sequence of digits selected from 0 through 9. The resulting sequences were substantially non-random, with an excess of sequential pairs of numbers and a deficit of repeats of the same number, in line with previous literature. However, the previous literature suggests that humans generate random numbers with substantial conscious effort, and that distractions which reduce that effort reduce the randomness of the numbers. I reduced my subjects' concentration by asking them to call out in another language, and with alcohol - neither affected the randomness of their responses. This suggests that the ability to generate random numbers is a 'basic' function of the human mind, even if those numbers are not mathematically 'random'. I hypothesise that there is a 'creativity' mechanism which, while not truly random, provides novelty as part of the mind's defence against closed programming loops, and that testing for the effects seen here in people more or less familiar with numbers or with spontaneous creativity could identify more features of this process. It is possible that training to perform better at simple random generation tasks could help to increase creativity, by training people to reduce the conscious mind's suppression of the 'spontaneous', creative response to new questions.
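The two non-randomness signatures reported above (excess of sequential pairs, deficit of repeats) can be counted directly from a digit sequence. A small sketch follows; the sequence and function name are hypothetical illustrations, not the study's data or analysis code:

```python
def sequence_stats(digits):
    """Count adjacent repeats (d, d) and ascending steps (d, d+1 mod 10)
    among consecutive digit pairs, as simple non-randomness probes."""
    pairs = list(zip(digits, digits[1:]))
    repeats = sum(1 for a, b in pairs if a == b)
    steps = sum(1 for a, b in pairs if (a + 1) % 10 == b)
    n = len(pairs)
    # For uniform random digits 0-9, each pattern is expected in 10% of pairs.
    return {"pairs": n, "repeats": repeats, "steps": steps,
            "expected_each": 0.1 * n}

# Hypothetical human-called sequence: no repeats, many +1 steps.
seq = [1, 2, 3, 7, 8, 4, 5, 9, 0, 1, 6, 7, 2, 3, 8]
print(sequence_stats(seq))
```

A genuinely random sequence would show both counts near the 10% expectation; the pattern described in the abstract is steps well above it and repeats well below it.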

Bullying prevalence studies are limited by varied measurement methods and a lack of representative samples. This study estimated the national prevalence of bullying victimisation, perpetration and combined victim-perpetration experiences in a representative population-based sample of Australian youth. The relationships between the three types of bullying involvement and a range of mental health symptoms and diagnoses were also examined. A randomly selected, nationally representative sample aged 11-17 years (N = 2967; mean age = 14.6 years; 51.6% male) completed the youth component of the Second Australian Child and Adolescent Survey of Mental Health and Wellbeing (Young Minds Matter). Parents or carers also completed a structured face-to-face interview that asked questions about a single randomly selected child in the household. The youth survey comprised self-reported bullying victimisation and perpetration (Olweus Bully-Victim Questionnaire-adapted), psychological distress (K10), emotional and behavioural problems (Strengths and Difficulties Questionnaire), as well as self-harm, suicide attempts and substance use. Modules from the Diagnostic Interview Schedule for Children Version IV were administered to all youth and parents to assess for mental disorder diagnoses (major depressive disorder, any anxiety disorder and any externalising disorder [attention-deficit hyperactivity disorder, oppositional defiant disorder and conduct disorder]). The 12-month prevalence of bullying victimisation was 13.3%, perpetration 1.6% and victim-perpetration 1.9%. Logistic regression models showed all forms of involvement in bullying were associated with increased risk of psychological distress, emotional and behavioural problems, substance use, self-harm and attempted suicide. Victimisation and victim-perpetration were associated with youth-reported major depressive disorder. There were also significant associations between bullying involvement and parent-reported diagnoses of major

Despite the integration of advanced functions that enable Femto Access Points (FAPs) to be deployed in a plug-and-play manner, the femtocell concept still raises several open issues. One of them is the assignment of Physical Cell Identifiers (PCIs) to FAPs. This paper analyses a random-based assignment algorithm in LTE systems operating in diverse femtocell scenarios. The performance of the algorithm is evaluated by comparing the number of confusions for various femtocell densities, PCI ranges and degrees of knowledge of the vicinity. Simulation results show that better knowledge of the vicinity can significantly reduce the number of confusion events.
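The "confusion" metric evaluated above can be illustrated with a toy simulation: assign PCIs uniformly at random and count cells that see two distinct neighbours carrying the same PCI. This is a made-up sketch (random neighbour lists rather than a real LTE topology; all parameters are illustrative), not the paper's simulator:

```python
import random

def count_confusions(n_cells, n_pci, neighbors_per_cell, seed=0):
    """Assign PCIs uniformly at random and count 'confusion' events:
    a cell observing two distinct neighbors that share the same PCI."""
    rng = random.Random(seed)
    pci = [rng.randrange(n_pci) for _ in range(n_cells)]
    confusions = 0
    for cell in range(n_cells):
        # Toy topology: pick this cell's neighbors at random.
        nbrs = rng.sample([c for c in range(n_cells) if c != cell],
                          neighbors_per_cell)
        seen = set()
        for nb in nbrs:
            if pci[nb] in seen:
                confusions += 1  # duplicate PCI among this cell's neighbors
            seen.add(pci[nb])
    return confusions

# Shrinking the PCI range (or densifying the deployment) raises confusions.
print(count_confusions(n_cells=500, n_pci=504, neighbors_per_cell=8))
print(count_confusions(n_cells=500, n_pci=20, neighbors_per_cell=8))
```

With the full LTE range of 504 PCIs, collisions among eight neighbours are rare; with only 20 identifiers they become frequent, which mirrors the density/range trade-off the abstract evaluates.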

The Perceived Stress Scale (PSS; Cohen, J Health Soc Behav 24:385-96, 1983) is a widely used and well-established self-report scale measuring perceived stress. However, the German version of the PSS-10 has not yet been validated. Thus, the purposes of this representative study were to psychometrically evaluate the PSS-10 and to provide norm values for the German population. The PSS-10 and standardized scales of depression, anxiety, fatigue, procrastination and life satisfaction were administered to a representative, randomly selected German community sample consisting of 1315 female and 1148 male participants in the age range from 14 to 90 years. The results demonstrated good internal consistency and construct validity. Perceived stress was consistently associated with depression, anxiety, fatigue, procrastination and reduced life satisfaction. Confirmatory factor analysis revealed a bi-dimensional structure with two related latent factors. Regarding demographic variables, women reported a higher level of stress than men. Perceived stress decreased with higher education, income and employment status. Older and married participants felt less stressed than younger and unmarried participants. The PSS-10 is a reliable, valid and economical instrument for assessing perceived stress. As psychological stress is associated with an increased risk of disease, identifying subpopulations with higher levels of stress is essential. Due to the dependency of the perceived stress level on demographic variables, particularly age and sex, differentiated norm values are needed; these are provided in this paper.

DNA nanoparticles of approximately 250 nm were produced by rolling circle replication of circular oligonucleotide templates, which results in highly condensed DNA particulates presenting concatemeric sequence repeats. Using templates containing randomized sequences, high-diversity libraries of particles were produced. A biopanning method that iteratively screens for binding and uses PCR to recover selected particles was developed. The initial application of this technique was the selection of particles that bound to human dendritic cells (DCs). Following 9 rounds of selection, the population of particles was enriched for particles that bound DCs, and individual binding clones were isolated and confirmed by flow cytometry and microscopy. This process, which we have termed DeNAno, represents a novel library technology akin to aptamer and phage display, but unique in that the selected moiety is a multivalent nanoparticle whose activity is intrinsic to its sequence. Cell-targeted DNA nanoparticles may have applications in cell imaging, cell sorting, and cancer therapy. PMID:19963022

Represented speech refers to speech in which we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians' workplace learning and second from a case study on nurses' apprenticeship learning. The aim of the case is to guide the qualitative researcher to use their own and others' voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...

Agomelatine is a new antidepressant with unique melatonin receptor type 1A (MTNR1A) and 1B (MTNR1B) agonism and serotonergic 5-hydroxytryptamine receptor 2C (5-HT2C) antagonism. Several studies of patients with major depressive disorder (MDD) have confirmed the superior efficacy and safety of agomelatine in comparison with established treatments, such as selective serotonin reuptake inhibitors (SSRIs) or serotonin-norepinephrine reuptake inhibitors (SNRIs). This meta-analysis comprehensively examines the efficacy, acceptability, and safety of agomelatine in comparison with SSRIs and SNRIs used as antidepressants in MDD. Comprehensive electronic database searches were performed to identify reports of head-to-head randomized controlled trials that compared agomelatine with SSRIs or SNRIs in terms of efficacy/effectiveness in treating MDD. Response and remission rates at both the acute (6-12 weeks) and follow-up (24 weeks) phases, Clinical Global Impression-Improvement Scale response and remission rates, changes in depression scale scores, improvements in subjective sleep, dropout rates, and side-effect rates were extracted and analysed. The meta-analysis included six head-to-head trials involving 1871 patients. In the acute phase, agomelatine had higher response rates (relative risk (RR) 1.08, 95% confidence interval (CI) 1.02-1.15) compared to SSRIs and SNRIs. In the remission analysis, only acute remission rates (RR 1.12, 95% CI 1.01-1.24) significantly differed. The action of agomelatine was superior on the Leeds Sleep Evaluation Questionnaire-Quality of Sleep score (mean difference 4.05, 95% CI 0.61-7.49). Discontinuation due to inefficacy did not differ between agomelatine and SSRIs/SNRIs (RR 0.74, 95% CI 0.42-1.28). Compared to SSRIs and SNRIs, however, agomelatine revealed a lower rate of discontinuation due to side effects (RR 0.38, 95% CI 0.25-0.57). Agomelatine has significantly higher efficacy and potential acceptability compared to SSRIs and
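The relative risks with 95% confidence intervals quoted above follow the standard log-scale construction (Katz method). A minimal sketch with hypothetical event counts, not the trial data, is:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of group A vs group B with a 95% CI computed
    on the log scale (standard Katz method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts for illustration only:
rr, lo, hi = relative_risk(events_a=60, n_a=100, events_b=50, n_b=100)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An interval that excludes 1.0 corresponds to a statistically significant difference at the 5% level, which is how results such as RR 1.08 (95% CI 1.02-1.15) are read in the abstract.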

Uncertain parameters in modeling are usually represented by probability distributions reflecting either the objective uncertainty of the parameters or the subjective belief held by the model builder. This approach is particularly suited for representing the statistical nature or variance of uncertain parameters. Monte Carlo simulation is readily used for practical calculations. However, an alternative approach is offered by possibility theory, making use of possibility distributions such as intervals and fuzzy intervals. This approach is well suited to represent lack of knowledge or imprecision...

Random walk polynomials and random walk measures play a prominent role in the analysis of a class of Markov chains called random walks. Without any reference to random walks, however, a random walk polynomial sequence can be defined (and will be defined in this paper) as a polynomial sequence {P_n(x)}

Abstract. Random walks as well as diffusions in random media are considered. Methods are developed that allow one to establish large-deviation results for both the 'quenched' and the 'averaged' case. Keywords. Large deviations; random walks in a random environment.

The harm of screening (unnecessary biopsies and overdiagnosis) generally outweighs the benefit of reducing prostate cancer (PCa) mortality in men aged ≥70 yr. Patient selection for biopsy using risk stratification and magnetic resonance imaging (MRI) may improve this benefit-to-harm ratio. To assess the potential of a risk-based strategy including MRI to selectively identify men aged ≥70 yr with high-grade PCa. Three hundred and thirty-seven men with prostate-specific antigen ≥3.0 ng/ml at a fifth screening (71-75 yr) in the European Randomized study of Screening for Prostate Cancer Rotterdam were biopsied. One hundred and seventy-nine men received six-core transrectal ultrasound biopsy (TRUS-Bx), while 158 men received MRI, 12-core TRUS-Bx, and fusion TBx in case of Prostate Imaging Reporting and Data System ≥3 lesions. The primary outcome was the overall, low-grade (Gleason Score 3+3) and high-grade (Gleason Score ≥ 3+4) PCa rate. Secondary outcome was the low- and high-grade PCa rate detected by six-core TRUS-Bx, 12-core TRUS-Bx, and MRI ± TBx. Tertiary outcome was the reduction of biopsies and low-grade PCa detection by upfront risk stratification with the Rotterdam Prostate Cancer Risk Calculator 4. Fifty-five percent of men were previously biopsied. The overall, low-grade, and high-grade PCa rates in biopsy naïve men were 48%, 27%, and 22%, respectively. In previously biopsied men these PCa rates were 25%, 20%, and 5%. Sextant TRUS-Bx, 12-core TRUS-Bx, and MRI ± TBx had a similar high-grade PCa rate (11%, 12%, and 11%) but a significantly different low-grade PCa rate (17%, 28%, and 7%). Rotterdam Prostate Cancer Risk Calculator 4-based stratification combined with 12-core TRUS-Bx ± MRI-TBx would have avoided 65% of biopsies and 68% of low-grade PCa while detecting an equal percentage of high-grade PCa (83%) compared with a TRUS-Bx all men approach (79%). After four repeated screens and ≥1 previous biopsies in half of men, a significant

Congress of the U.S., Washington, DC. House Select Committee on Aging.

This document contains transcripts of witness testimony and prepared statements from the Congressional hearing called to review the need for a national health care policy for long-term care. Opening statements are presented from committee chairman Claude Pepper and from Representatives Sala Burton and Barbara Boxer. Testimonies are presented from…

Full Text Available Childhood obesity has become a global public health problem, and epidemiological studies are important to identify its determinants in different populations. This study aimed to investigate factors associated with obesity in a representative sample of children in Neishabour, Iran. The study was conducted among 1500 randomly selected 6–12-year-old students from urban areas of Neishabour, northeast of Iran. Then, through a case-control study, 114 obese children (BMI ≥ 95th percentile of the Iranian reference) were selected as the case group and were compared with 102 controls (15th ≤ BMI < 85th percentile). Factors suggested to be associated with weight status were investigated, for example, parental obesity, child physical activity levels, socio-economic status (SES), and so forth. The analysis was conducted using univariate and multivariate logistic regression (MLR) in SPSS version 16. In the univariate logistic regression model, birth weight, birth order, family extension, TV watching, sleep duration, physical activity, parents' job, parents' education, parental obesity history, and SES were significantly associated with children's obesity. After MLR analysis, physical activity and parental obesity history remained statistically significant in the model. Our findings showed that physical activity and parental obesity history are the most important determinants of childhood obesity in our population. This finding should be considered in the implementation of preventive interventions.

Written by the creator of the modern theory of random tensors, this book is the first self-contained introductory text to this rapidly developing theory. Starting from notions familiar to the average researcher or PhD student in mathematical or theoretical physics, the book presents in detail the theory and its applications to physics. The recent detections of the Higgs boson at the LHC and gravitational waves at LIGO mark new milestones in physics, confirming long-standing predictions of Quantum Field Theory and General Relativity. These two experimental results only reinforce the need to find an underlying common framework for the two: the elusive theory of Quantum Gravity. Over the past thirty years, several alternatives have been proposed as theories of Quantum Gravity, chief among them String Theory. While these theories are yet to be tested experimentally, key lessons have already been learned. Whatever the theory of Quantum Gravity may be, it must incorporate random geometry in one form or another....

International Series of Monographs in Natural Philosophy, Volume 32: Random Functions and Turbulence focuses on the use of random functions as mathematical methods. The manuscript first offers information on the elements of the theory of random functions. Topics include determination of statistical moments by characteristic functions; functional transformations of random variables; multidimensional random variables with spherical symmetry; and random variables and distribution functions. The book then discusses random processes and random fields, including stationarity and ergodicity of random

Testimony of Associate Attorney General Stephen S. Trott on the federal government's present and future efforts in drug law enforcement before the Congressional Select Committee on Narcotics Abuse and Control is presented in this document. Three topic areas are included in the testimony. The first topic of management initiatives discusses…

Because of the considerable variability in the oxygen dissociation curves for chickens reported in the literature, the respiratory physiologist studying avian gas exchange is faced with the dilemma of which curve is representative for the chicken. In order to arrive at a representative curve, data from eight reported curves were compiled and adjusted to the same set of standard conditions of temperature (T), pH, and partial pressure of carbon dioxide (PCO2): T = 42 C, pH = 7.5, PCO2 = 40 Torr. The mean PO2 (mean +/- SD) versus percent saturation of hemoglobin curve was then determined. The mean data were fitted to an equation representing the oxygen dissociation curve so that for any selected partial pressure of oxygen (PO2) the percent saturation (%SAT) of oxyhemoglobin may be computed. The P50 values for the mean literature and equation curves, respectively, were 47.4 +/- 9.8 and 45.3 Torr. The mean curve with its standard deviations provides a chicken oxygen dissociation curve representative of the literature data to which experimental data may be compared. The equation for the curve enables rapid referral to the representative curve to compute the %SAT, given the PO2 adjusted to the standard conditions.
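The abstract refers to "an equation representing the oxygen dissociation curve" without stating its form. A Hill-type curve anchored at the reported equation P50 of 45.3 Torr is one plausible sketch; the Hill coefficient used below is an assumed value, not taken from the paper.

```python
def percent_saturation(po2_torr, p50=45.3, n=2.8):
    """Hill-type oxygen dissociation curve.

    p50 is the reported equation-curve P50 (45.3 Torr); the Hill
    coefficient n = 2.8 is an assumed illustrative value, not a
    parameter from the paper.
    """
    return 100.0 * po2_torr**n / (p50**n + po2_torr**n)

# At PO2 = P50 the curve returns 50% saturation by construction.
print(round(percent_saturation(45.3), 1))  # 50.0
```

Whatever the paper's actual functional form, it must satisfy the same anchor: %SAT = 50 at PO2 = P50 under the stated standard conditions.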

The purpose of this handbook is to provide guidance to Bechtel Hanford, Inc. Subcontract Representatives in their assignments. It is the intention of this handbook to ensure that subcontract work is performed in accordance with the subcontract documents

The attached clarification by a spokesman of the Iraqi Ministry of Foreign Affairs is being circulated for the information of Member States pursuant to a request made by the Resident Representative of Iraq

Policy innovation is a key aspect of public innovation that has been largely overlooked. Political leadership, competition and collaboration are key drivers of policy innovation. A barrier in traditional models of representative democracy is that they provide weak conditions for collaboration. Two Danish case studies indicate that collaboration between politicians and relevant and affected stakeholders can promote policy innovation, but also that a redesign of representative democracy is needed in order to establish a productive combination of political leadership, competition...

In celebration of the bicentenary of the publication of Wessel's paper on the geometric interpretation of complex numbers, it is described how Wessel used complex numbers to represent directions in surveying, at least as early as 1787....

The package features a comprehensive selection of probabilistic distributions. Monte Carlo simulations are resorted to whenever the systems studied are not amenable to deterministic analyses or when direct experimentation is not feasible. Random numbers having certain specified distribution characteristics are an integral part of such simulations. The package consists of a collection of "pseudorandom" number generators for use in Monte Carlo simulations.
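A minimal sketch of how such a generator produces numbers with a specified distribution, here via the inverse-transform method for the exponential distribution (illustrative code, not taken from the package):

```python
import math
import random

def sample_exponential(rate, rng):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    F^{-1}(U) = -ln(1 - U) / rate is Exponential(rate)."""
    u = rng.random()
    return -math.log(1.0 - u) / rate

rng = random.Random(42)
draws = [sample_exponential(0.5, rng) for _ in range(50_000)]
mean = sum(draws) / len(draws)
print(f"sample mean ≈ {mean:.2f} (theory: 2.00)")
```

The same inverse-CDF recipe extends to any distribution whose quantile function is available, which is why such packages bundle many generators behind one uniform source.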

Due to the randomness of radioactive decay and nuclear reactions, the signals from detectors are random in time, whereas a normal pulse generator produces periodic pulses. To measure the performance of nuclear electronic devices under random inputs, a random pulse generator is necessary. Types of random pulse generator are reviewed, and two digital random pulse generators are introduced. (authors)
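The principle behind such a generator can be sketched as follows (an assumed design for illustration, not one of the generators described here): decay-like pulse trains are produced by drawing exponentially distributed inter-arrival times, i.e. a Poisson process.

```python
import random

def random_pulse_times(mean_rate_hz, duration_s, seed=1):
    """Generate pulse arrival times mimicking radioactive decay:
    inter-arrival times are exponentially distributed, so the
    resulting pulse train is a Poisson process."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(mean_rate_hz)  # random gap to next pulse
        if t > duration_s:
            return times
        times.append(t)

pulses = random_pulse_times(mean_rate_hz=1000.0, duration_s=1.0)
print(f"{len(pulses)} pulses in 1 s (expected ≈ 1000)")
```

Feeding such a train to a device under test exercises pile-up and dead-time behavior that a periodic generator cannot reveal.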

with Clinical Outcome (SORT OUT) IV trial was designed as a prospective, multi-center, open-label, all-comer, two-arm, randomized, non-inferiority study comparing the everolimus-eluting stent with the sirolimus-eluting stent in the treatment of atherosclerotic coronary artery lesions. Based on a non...

Full Text Available In this paper, first, we study random best approximations to random sets, using fixed point techniques, obtaining in this way stochastic analogues of earlier deterministic results by Browder-Petryshyn, Ky Fan and Reich. Then we prove two fixed point theorems for random multifunctions with stochastic domain that satisfy certain tangential conditions. Finally we consider a random differential inclusion with upper semicontinuous orientor field and establish the existence of random solutions.

The dominant congenital disorders Apert syndrome, achondroplasia and multiple endocrine neoplasia (caused by specific missense mutations in the FGFR2, FGFR3 and RET proteins, respectively) represent classical examples of paternal age-effect mutation, a class that arises at particularly high frequencies in the sperm of older men. Previous analyses of DNA from randomly selected cadaveric testes showed that the levels of the corresponding FGFR2, FGFR3 and RET mutations exhibit very uneven spatial d...

of Escherichia coli representative of those found in the Danish pig population, we compared the growth of 50 randomly selected strains. The observed net growth rates were used to describe the in vitro pharmacodynamic relationship between drug concentration and net growth rate, based on an Emax model with three parameters: maximum net growth rate (αmax); concentration for a half-maximal response (EC50); and the Hill coefficient (γ). The net growth rate in the absence of antibiotic did not differ between susceptible and resistant isolates (P = 0.97). The net growth rate decreased with increasing tetracycline... [the net growth rate] between susceptible and resistant strains in the absence of a drug was not different. EC50 increased linearly with MIC on a log-log scale, and γ was different between susceptible and resistant strains. The in vitro model parameters described the inhibition effect of tetracycline on E. coli when...
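The Emax relationship described above can be sketched as follows. This is one common three-parameter sigmoid form with illustrative, unfitted parameter values; the paper's exact parameterization may differ.

```python
def net_growth_rate(conc, alpha_max, ec50, gamma):
    """Sigmoidal Emax pharmacodynamic model (a common 3-parameter
    form, assumed here for illustration): the net growth rate
    declines from alpha_max toward zero as the drug concentration
    rises past EC50, with steepness set by the Hill coefficient."""
    inhibition = conc**gamma / (ec50**gamma + conc**gamma)
    return alpha_max * (1.0 - inhibition)

# Illustrative (not fitted) parameter values.
print(net_growth_rate(0.0, alpha_max=1.2, ec50=2.0, gamma=1.5))  # 1.2 (no drug)
print(net_growth_rate(2.0, alpha_max=1.2, ec50=2.0, gamma=1.5))  # 0.6 (half-maximal)
```

Fitting αmax, EC50 and γ per strain, as the study does, then lets the drug-free growth rates and dose-response steepness be compared between susceptible and resistant isolates.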

The association between personality and gambling has been explored previously. However, few studies are based on representative populations. This study aimed at examining the association between risk gambling and personality in a representative Swedish population. A random Swedish sample (N = 19,530) was screened for risk gambling using the Lie/Bet questionnaire. The study sample (N = 257) consisted of those screening positive on Lie/Bet and completing a postal questionnaire about gambling and personality (measured with the NODS-PERC and the HP5i respectively). Risk gambling was positively correlated with Negative Affectivity (a facet of Neuroticism) and Impulsivity (an inversely related facet of Conscientiousness), but all associations were weak. When taking age and gender into account, there were no differences in personality across game preference groups, though preferred game correlated with level of risk gambling. Risk gamblers scored lower than the population norm data with respect to Negative Affectivity, but risk gambling men scored higher on Impulsivity. The association between risk gambling and personality found in previous studies was corroborated in this study using a representative sample. We conclude that risk and problem gamblers should not be treated as a homogeneous group, and prevention and treatment interventions should be adapted according to differences in personality, preferred type of game and the risk potential of the games.

Full Text Available Many governments in the European Union recognize that agrotourism and rural tourism represent one way to sustain agriculture, and that in the coming years rural tourism and agrotourism will become representative elements of the rural area; the arguments for this representativeness are the purpose of this paper. The rural area offers great opportunities for the development of agrotourism, the practice of which is necessary in the current period. In the majority of rural settlements, the defining emblem is multiple: the quality of the landscape and the warmth of the inhabitants, works of art and popular technique, traditional occupations, costumes, customs, traditions, cuisine, resources, etc. To these is added the awareness, by small farmers, of the need to diversify agricultural activity, both inside and outside the farm, by engaging in other, non-agricultural activities, of which agrotourism is one of the most widespread.

Full Text Available Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model. Unfortunately, Bao’s original presentation of the model plot did not include a way to represent uncertainty in these measurements. I present details of a method to add error bars to model plots by expanding the work of Sommer and Lindell. I also provide a template for generating model plots with error bars.

Full Text Available In an assessment of representative democracy in Australian local government, this paper considers long-run changes in forms of political representation, methods of vote counting, franchise arrangements, numbers of local government bodies and elected representatives, as well as the thorny question of constitutional recognition. This discussion is set against the background of ongoing tensions between the drive for economic efficiency and the maintenance of political legitimacy, along with more deep-seated divisions emerging from the legal relationship between local and state governments and the resultant problems inherent in local government autonomy versus state intervention.

As computers and software systems move beyond the desktop and into the physical environments we live and work in, the systems are required to adapt to these environments and the activities taking place within them. Making applications context-aware and representing context information alongside... application data can be a challenging task. This paper describes how digital context traditionally has been represented in hypermedia data models and how this representation can scale to also represent physical context. The HyCon framework and data model, designed for the development of mobile context...

Full Text Available In this work, we aim to formalize the inception of representative bubbles, giving the condition under which they may arise. We find that representative bubbles may start at any time, depending on the definition of a behavioral component. This result is at odds with the theory of classic rational bubbles, i.e., models that rely on the fulfillment of the transversality condition, by which a bubble in a financial asset can arise only at its first trade. This means that a classic rational bubble (differently from our model) cannot follow a cycle, since if a bubble exists, it will burst by definition and never arise again.

This document deals with the quantification of the minimum thermal power level for a demonstrator and the definition of the physical criteria which define the representativeness of a demonstrator with respect to a power reactor. Solutions allowing an acceptable flow to be kept in an industrial core have also been studied. The document is divided into three parts: the representativeness elements, the considered solutions, and the characterization of the neutron flux at the interfaces and the dose rates at the outer surface of the vessel. (A.L.B.)

for adaptation. Owing to the stochastic nature of climate change temporally and spatially, the acquired knowledge and practices need to be contextually relevant and responsive to the immediate needs of the ... simple random selection of households from each randomly selected cluster and, thereafter, respondents were ...
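The two-stage design mentioned here (random selection of clusters, then simple random selection of households within each chosen cluster) can be sketched as follows; the data structure and names are illustrative, not the study's actual frame.

```python
import random

def two_stage_sample(clusters, n_clusters, n_households, seed=0):
    """Two-stage cluster sampling: first randomly select clusters,
    then draw a simple random sample of households within each
    selected cluster.  `clusters` maps a cluster id to its list of
    household ids (hypothetical structure for illustration)."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return {c: rng.sample(clusters[c], n_households) for c in chosen}

# Toy frame: 6 clusters of 20 households each.
frame = {f"cluster_{i}": [f"hh_{i}_{j}" for j in range(20)] for i in range(6)}
selection = two_stage_sample(frame, n_clusters=3, n_households=5)
print({c: len(h) for c, h in selection.items()})
```

The design keeps fieldwork concentrated in a few clusters while preserving a known selection probability for every household.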

Affilin® molecules represent a new class of so-called scaffold proteins. The concept of scaffold proteins is to use stable and versatile protein structures that can be endowed with de novo binding properties and specificities by introducing mutations at surface-exposed amino acid residues. Complex variations and combinations are generated by genetic methods of randomization, resulting in large cDNA libraries. The selection of candidates binding to a desired target can be executed by display methods, especially the very robust and flexible phage display. Here, we describe the construction of ubiquitin-based Affilin® phage display libraries and their use in biopanning experiments for the identification of novel protein ligands.

To develop a multivariate method for quantifying the population representativeness across related clinical studies and a computational method for identifying and characterizing underrepresented subgroups in clinical studies. We extended a published metric named Generalizability Index for Study Traits (GIST) to include multiple study traits for quantifying the population representativeness of a set of related studies, by assuming independence and equal importance among all study traits. On this basis, we compared the effectiveness of GIST and multivariate GIST (mGIST) qualitatively. We further developed an algorithm called "Multivariate Underrepresented Subgroup Identification" (MAGIC) for constructing optimal combinations of distinct value intervals of multiple traits to define underrepresented subgroups in a set of related studies. Using Type 2 diabetes mellitus (T2DM) as an example, we identified and extracted frequently used quantitative eligibility criteria variables in a set of clinical studies. We profiled the T2DM target population using the National Health and Nutrition Examination Survey (NHANES) data. According to the mGIST scores for four example variables, i.e., age, HbA1c, BMI, and gender, the included observational T2DM studies had better population representativeness than the interventional T2DM studies. Among the interventional T2DM studies, Phase I trials had better population representativeness than Phase III trials. People at least 65 years old with an HbA1c value between 5.7% and 7.2% were particularly underrepresented in the included T2DM trials. These results confirmed well-known knowledge and demonstrated the effectiveness of our methods in population representativeness assessment. mGIST is effective at quantifying population representativeness of related clinical studies using multiple numeric study traits. MAGIC identifies underrepresented subgroups in clinical studies. Both data-driven methods can be used to improve the transparency of
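A hedged sketch of a GIST-like score under the stated assumptions of trait independence and equal weighting; this is an illustration of the idea, not the published formula, and the population and intervals below are toy data.

```python
def mgist_score(population, criteria):
    """Illustrative GIST-like representativeness score: for each
    trait, the fraction of the target population whose value falls
    inside the studies' eligibility interval, with traits assumed
    independent and equally weighted (per the paper's assumptions).
    Not the published mGIST formula."""
    per_trait = []
    for trait, (lo, hi) in criteria.items():
        values = [p[trait] for p in population]
        covered = sum(1 for v in values if lo <= v <= hi)
        per_trait.append(covered / len(values))
    return sum(per_trait) / len(per_trait)

# Toy target population and eligibility intervals (illustrative numbers).
pop = [{"age": a, "hba1c": h} for a, h in
       [(45, 6.5), (70, 7.0), (68, 5.9), (52, 8.1), (75, 6.8)]]
score = mgist_score(pop, {"age": (18, 65), "hba1c": (6.0, 9.0)})
print(round(score, 2))  # 0.6
```

A low per-trait coverage (here, age) flags exactly the kind of underrepresented subgroup that MAGIC is designed to characterize by interval combinations.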

Building systems with the robustness of human reasoning capabilities requires inspiration from cognitive science. The primary objective of this study is to investigate the possibility of representing some basic principles of cognitive semantics' Mental Spaces Theory, such as domain construction, the reality status of domains and their elements, and mental attitudes, in a knowledge representation framework, for the purpose of developing cognitively plausible knowledge representation systems. The model used as the basis of representation is the extended version of conventional semantic networks, namely Multi-Layered Extended Semantic Networks (MultiNet). The data used in this study have been selected from English expressions and have been represented in MWR, MultiNet's knowledge representation software. Results obtained from the analysis of the represented data and their comparison to the principles of Mental Spaces Theory show that theoretical constructs such as domain construction, the reality status of domains and their elements, and mental attitudes can be formally represented in the MultiNet framework.

Over the last twenty-five years random motions in random media have been intensively investigated and some new general methods and paradigms have by now emerged. Random walks in random environment constitute one of the canonical models of the field. However in dimension bigger than one they are still poorly understood and many of the basic issues remain to this day unresolved. The present series of lectures attempt to give an account of the progresses which have been made over the last few years, especially in the study of multi-dimensional random walks in random environment with ballistic behavior. (author)

Full Text Available Abstract Background Evidence suggests that poor recruitment into clinical trials rests on a patient 'deficit' model – an inability to comprehend trial processes. Poor communication has also been cited as a possible barrier to recruitment. A qualitative patient interview study was included within the feasibility stage of a phase III non-inferiority Randomized Controlled Trial (RCT) (SPARE, CRUK/07/011) in muscle-invasive bladder cancer. The aim was to illuminate problems in the context of randomization. Methods The qualitative study used a 'Framework Analysis' that included 'constant comparison', in which semi-structured interviews are transcribed, analyzed, compared and contrasted both between and within transcripts. Three researchers coded and interpreted data. Results Twenty-four patients agreed to enter the interview study; 10 decliners of randomization and 14 accepters, of whom 2 subsequently declined their allocated treatment. The main theme applying to the majority of the sample was confusion and ambiguity. There was little indication that confusion directly impacted on decisions to enter the SPARE trial. However, confusion did appear to impact on ethical considerations surrounding 'informed consent', as well as cause a sense of alienation between patients and health personnel. Sub-optimal communication in many guises accounted for the confusion, together with the logistical elements of a trial that involved treatment options delivered in a number of geographical locations. Conclusions These data highlight the difficulty of providing balanced and clear trial information within the UK health system, despite best intentions. Involvement of multiple professionals can impact on communication processes with patients who are considering participation in RCTs. Our results led us to question the 'deficit' model of patient behavior. It is suggested that health professionals might consider facilitating a context in which patients

Aircraft passenger spaces designed without proper anthropometric analyses can create serious problems for obese passengers, including: possible denial of boarding, excessive body pressures and contact stresses, postural fixity and related health hazards, and increased risks of emergency evacuation failure. In order to help address the obese passenger's accommodation issues, this study developed male and female manikin families that represent obese US airline passengers. Anthropometric data of obese individuals obtained from the CAESAR anthropometric database were analyzed through PCA-based factor analyses. For each gender, a 99% enclosure cuboid was constructed, and a small set of manikins was defined on the basis of each enclosure cuboid. Digital human models (articulated human figures) representing the manikins were created using a human CAD software program. The manikin families were utilized to develop design recommendations for selected aircraft seat dimensions. The manikin families presented in this study would greatly facilitate anthropometrically accommodating large airline passengers.
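The PCA-based boundary-case approach can be sketched as follows, under assumed choices of component count and coverage level; this is an illustration of the general technique, not the paper's exact procedure or data.

```python
import numpy as np

def boundary_manikins(data, n_components=2, z=2.58):
    """Sketch of PCA-based boundary-case design: project the
    anthropometric data onto the leading principal components and
    place 'manikins' at the corners of a box spanning +/- z standard
    deviations along each component, then map them back to
    measurement space.  Component count and z are assumptions."""
    mean = data.mean(axis=0)
    centered = data - mean
    # Principal axes from the covariance eigendecomposition.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    axes = eigvecs[:, order]          # columns: leading PC directions
    sds = np.sqrt(eigvals[order])     # spread along each PC
    # Corners of the enclosure box in PC space: (+z,+z), (+z,-z), ...
    corners = np.array([[s1, s2] for s1 in (z, -z) for s2 in (z, -z)])
    return mean + (corners * sds) @ axes.T

rng = np.random.default_rng(0)
# Toy stand-in for correlated measurements (e.g. stature, weight, hip breadth).
base = rng.normal(size=(500, 1))
data = np.hstack([170 + 8 * base,
                  80 + 15 * base + 5 * rng.normal(size=(500, 1)),
                  38 + 3 * base + rng.normal(size=(500, 1))])
manikins = boundary_manikins(data)
print(manikins.shape)  # (4, 3): four boundary manikins, three measurements
```

Designing to the corner cases rather than the mean is what lets a small manikin family accommodate most of the enclosed population.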

Randomized controlled trials generate high-quality medical evidence. However, the use of unjustified inclusion/exclusion criteria may compromise the external validity of a study. We have introduced a method to assess the population representativeness of related clinical trials using electronic health record (EHR) data. As EHR data may not perfectly represent the real-world patient population, in this work, we further validated the method and its results using the National Health and Nutrition Examination Survey (NHANES) data. We visualized and quantified the differences in the distributions of age, HbA1c, and BMI among the target population of Type 2 diabetes trials, diabetics in NHANES databases, and a convenience sample of patients enrolled in selected Type 2 diabetes trials. The results are consistent with the previous study.

Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model.…

This paper will attempt to outline a third way that neither retreats into nostalgia nor caves in to policy pressure. Through the metaphor of teaching as standing as a representative of the world, and a pedagogy of exemplification, an alternative form of legitimisation of teaching is formulated. This way deals...

We present a framework for representing the trajectories of moving objects and the time-varying results of operations on moving objects. This framework supports the realization of discrete data models of moving objects databases, which incorporate representations of moving objects based on non...

Examined adjustment following physical disability using the representative case method with two persons with quadriplegia. Results highlighted the importance of previously established coping styles as well as the role of the environment in adjustment. Willingness to mourn aided in later growth. (JAC)

Eighteen high school science students were involved in a study to determine what attributes in the problem statement they need when representing a typical osmosis problem. In order to realize this goal students were asked to solve problems aloud and to explain their answers. Included as a part of the results are the attributes that the students…

A set of symbols is presented, along with logical operators that represent the possible manipulations of the linear model. These symbols and operators serve to simplify the representation of analysis of variance models, correlation models, and factor analysis models. (Author)

Representative national surveys in dwellings are important to evaluate, without bias, the exposure of the general population to radon. In Italy, a representative national survey was conducted from 1989 to 1996, involving about 5600 dwellings in 232 towns. Later on, some Regions carried out more detailed surveys, but a new national survey in dwellings is necessary in order to obtain a more thorough estimate of the radon concentration distribution over the Italian territory. The need to conduct this survey in an affordable way led to the implementation of a new approach based on collaboration between the Istituto Superiore di Sanità and a national company with workplaces and employees' homes throughout the country. The intent is to carry out a proxy of a population-representative survey by measuring radon concentration in the homes of a random sample of the company's employees. The realisation of this survey was affordable thanks to the availability of corporate e-mail for each employee, an intranet service, and the company's internal mail service. A dedicated web procedure and e-questionnaires allowed the contact with employees and the collection of their data to be managed automatically, which was both cost- and time-saving. Using this e-mail contact approach, 53% of contacted employees consented to participate in the survey. Radon concentration passive measuring devices were distributed to about 7000 dwellings, using about 14000 CR-39 detectors (two measured rooms per dwelling). In order to reduce costs, the devices were exposed for 12 months instead of two consecutive 6-month periods (as in the former national survey). A first check of the actual representativeness of the sample was done by comparing characteristics of dwellings and occupants in the sample with corresponding data from the latest National Census. This was accomplished thanks to the fact that the questions in the survey questionnaire were tailored to the categories adopted for the Census questionnaire. A preliminary
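The comparison of sample characteristics against census data can be illustrated with a simple goodness-of-fit check; the counts and categories below are toy numbers, not the survey's actual data or analysis.

```python
def chi_square_stat(observed, expected_props):
    """Goodness-of-fit statistic for checking whether sample category
    counts (e.g. dwelling types) match census proportions.
    An illustrative check, not the survey's published method."""
    total = sum(observed)
    stat = 0.0
    for obs, p in zip(observed, expected_props):
        exp = total * p
        stat += (obs - exp) ** 2 / exp
    return stat

# Toy data: sampled dwellings by type vs census shares.
sample_counts = [420, 310, 270]        # e.g. apartment, detached, other
census_shares = [0.40, 0.32, 0.28]
stat = chi_square_stat(sample_counts, census_shares)
print(f"chi-square = {stat:.2f} (df = 2; 5% critical value ≈ 5.99)")
```

A statistic well below the critical value is consistent with the sample matching the census composition on that characteristic; a large value would flag a representativeness problem.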

Interfaces are created to separate two distinct phases in a situation in which phase coexistence occurs. This book discusses randomly fluctuating interfaces in several different settings and from several points of view: discrete/continuum, microscopic/macroscopic, and static/dynamic theories. The following four topics in particular are dealt with in the book. Assuming that the interface is represented as a height function measured from a fixed-reference discretized hyperplane, the system is governed by the Hamiltonian of gradient of the height functions. This is a kind of effective interface model called ∇φ-interface model. The scaling limits are studied for Gaussian (or non-Gaussian) random fields with a pinning effect under a situation in which the rate functional of the corresponding large deviation principle has non-unique minimizers. Young diagrams determine decreasing interfaces, and their dynamics are introduced. The large-scale behavior of such dynamics is studied from the points of view of the hyd...

Gibbs states of a spin system with the single-spin space S = R^m and unbounded pair interactions are studied. The spins are attached to the points of a realization γ of a random point process in R^n. Under certain conditions on the model parameters we prove that, for almost all γ, the set G(S^γ) of all Gibbs states is nonempty and its elements have support properties, explicitly described in the paper. We also show the existence of measurable selections γ → ν_γ ∈ G(S^γ) (random Gibbs measures) and derive the corresponding averaged moment estimates.

This study evaluates the prevalence of HIV stigma in Spain and analyzes some variables that may affect its existence. In 2008, we conducted a computer-assisted telephone survey of 1607 people, representative of the Spanish population. Two-wave random stratified sampling was performed, first selecting the home and then the person, depending on the rates of age and sex. About 50% of the population feels discomfort about potential contact with people with HIV and tries to avoid it and 20% advocate discriminatory policies involving physical or social segregation of people with HIV. The belief that HIV is easily transmitted through social contact (15%) and blaming people with HIV for their disease (19.3%) are associated with stigmatization. Degree of proximity to people with HIV, political ideology, educational level, and age are also associated with the degree of stigmatization. According to these results, we suggest that, in order to reduce stigma, we need to modify the erroneous beliefs about the transmission pathways, decrease attributions of blame to people with HIV, and increase contact with them. These interventions should particularly target older people, people with a low educational level, and people with a more conservative political ideology.

OBJECTIVE To estimate factors associated with illicit drug use among patients with mental illness in Brazil, according to gender. METHODS A cross-sectional representative sample of psychiatric patients (2,475 individuals) was randomly selected from 11 hospitals and 15 public mental health outpatient clinics. Data on self-reported illicit drug use and sociodemographic, clinical and behavioral characteristics were obtained from face-to-face interviews. Logistic regression was used to estimate associations with recent illicit drug use. RESULTS The prevalence of any recent illicit drug use was 11.4%. Men had a higher prevalence than women for all substances (17.5% and 5.6%, respectively). Lower education, history of physical violence, and history of homelessness were associated with drug use among men only; not professing a religion was associated with drug use in women only. For both men and women, younger age, current hospitalization, alcohol and tobacco use, history of incarceration, younger age at sexual debut, and more than one sexual partner were statistically associated with illicit drug use. CONCLUSIONS Recent illicit drug use among psychiatric patients is higher than in the general Brazilian population and is associated with multiple factors, including markers of psychiatric severity. Our data indicate the need for the development of gender-based drug-use interventions among psychiatric patients in Brazil. Integration of substance use treatment strategies with mental health treatment should be a priority.

This paper attempts to analyze dissociation from a phenomenological perspective, as an experience of persons engaged in activities involving high physical and/or psychological stress. Dissociation is usually correlated with the so-called freeze reaction in life-threatening situations, which, along with the reactions of "fight or flight", appears both in humans and in representatives of the animal world ("fight, flight or freeze"). However, unlike animals, humans are often able to act purposefully in dissociative states, or to enter them at will. The specific features and diversity of manifestations of dissociation in humans are determined by the linguistic nature of human consciousness, which makes it logical to appeal to philosophers of the phenomenological direction, for whom consciousness is the subject matter of research. Based on the concepts of Henri Bergson and Gilles Deleuze, various manifestations of dissociation are identified: from the grave symptoms of PTSD (the so-called intrusion symptoms) to the controlled, voluntarily selected dissociative strategies of athletes. Dissociative experiences of experts in extreme careers are considered: law enforcement officers who participated in missions in "hot spots", and EMERCOM psychologists. A mechanism of dissociation is defined in phenomenological terms. The development and application of adequate diagnostic tools, and psychological work with athletes to regulate the focus of attention during competition, are expected to contribute to the achievement of high athletic results.

The artifice of an infinitely-lived representative agent is commonly invoked to balance the present costs and future benefits of climate stabilization policies. Since actual economies are populated by overlapping generations of finite-lived persons, this approach raises important questions of welfare aggregation. This paper compares the results of representative agent and overlapping generations models that are numerically calibrated based on standard assumptions regarding climate-economy interactions. Under two social choice rules - Pareto efficiency and classical utilitarianism - the models generate closely similar simulation results. In the absence of policies to redistribute income between present and future generations, efficient rates of carbon dioxide emissions abatement rise from 15 to 20% between the years 2000 and 2105. Under classical utilitarianism, in contrast, optimal control rates rise from 48 to 79% over this same period.

Four experiments were conducted to assess whether or not rhesus macaques (Macaca mulatta) could represent the unperceived movements of a stimulus. Subjects were tested on 2 computerized tasks, HOLE (monkeys) and LASER (humans and monkeys), in which subjects needed to chase or shoot at, respectively, a moving target that either remained visible or became invisible for a portion of its path of movement. Response patterns were analyzed and compared between target-visible and target-invisible conditions. Results of Experiments 1, 2, and 3 demonstrated that the monkeys are capable of extrapolating movement. That this extrapolation involved internal representation of the target's invisible movement was suggested but not confirmed. Experiment 4, however, demonstrated that the monkeys are capable of representing the invisible displacements of a stimulus.

Mental abacus (MA) is a system for performing rapid and precise arithmetic by manipulating a mental representation of an abacus, a physical calculation device. Previous work has speculated that MA is based on visual imagery, suggesting that it might be a method of representing exact number nonlinguistically, but given the limitations on visual working memory, it is unknown how MA structures could be stored. We investigated the structure of the representations underlying MA in a group of children in India. Our results suggest that MA is represented in visual working memory by splitting the abacus into a series of columns, each of which is independently stored as a unit with its own detailed substructure. In addition, we show that the computations of practiced MA users (but not those of control participants) are relatively insensitive to verbal interference, consistent with the hypothesis that MA is a nonlinguistic format for exact numerical computation.

The AASC (Assembly of Agency Staff Committee) held its 27th Meeting of the specialized European Agencies on 26 and 27 May on the premises of the OHIM (Office for Harmonization in the Internal Market) in Alicante, Spain. Two representatives of the CERN Staff Association, in charge of External Relations, attended as observers. This participation is a useful complement to regular contacts we have with FICSA (Federation of International Civil Servants' Associations), which groups staff associations of the UN Agencies, and the annual CSAIO conferences (Conference of Staff Associations of International Organizations), where each Autumn representatives of international organizations based in Europe meet to discuss themes of common interest to better promote and defend the rights of the international civil servants. All these meetings allow us to remain informed on items that are directly or indirectly related to employment and social conditions of our colleagues in other international and Europ...

Data structures and apparatuses to represent knowledge are disclosed. The processes can comprise labeling elements in a knowledge signature according to concepts in an ontology and populating the elements with confidence values. The data structures can comprise knowledge signatures stored on computer-readable media. The knowledge signatures comprise a matrix structure having elements labeled according to concepts in an ontology, wherein the value of the element represents a confidence that the concept is present in an information space. The apparatus can comprise a knowledge representation unit having at least one ontology stored on a computer-readable medium, at least one data-receiving device, and a processor configured to generate knowledge signatures by comparing datasets obtained by the data-receiving devices to the ontologies.
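The matrix-of-confidences idea can be sketched in a few lines of Python; everything here (the class name, the concept labels, the `cam1` information space) is hypothetical and for illustration only, not taken from the patent text:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeSignature:
    """Sparse matrix of confidence values: rows are ontology concepts,
    columns are information spaces (e.g., data-receiving devices)."""
    concepts: list
    spaces: list
    values: dict = field(default_factory=dict)  # (concept, space) -> confidence

    def set(self, concept, space, confidence):
        assert concept in self.concepts and space in self.spaces
        assert 0.0 <= confidence <= 1.0
        self.values[(concept, space)] = confidence

    def get(self, concept, space):
        return self.values.get((concept, space), 0.0)  # absent = no evidence

sig = KnowledgeSignature(concepts=["vehicle", "person"], spaces=["cam1"])
sig.set("vehicle", "cam1", 0.85)
```

Storing only the non-zero entries keeps the signature compact when most concepts are absent from a given information space.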

This anthology, containing contributions from 19 sociologists, is a systematic investigation of the locality, the possibilities and the effective radius of citizens' initiatives under the functional conditions of the parliamentary-representative system. It examines the intellectual and political surroundings, the sociological context, and the institutional, political and juridical framework conditions, as well as the consequences of this movement for the political system of the Federal Republic of Germany as a whole.

The Montgeron geothermal plant was realized between 1980 and 1983. It belongs to the Seimaroise, a low-cost housing agency, and provides floor-panel heating to three building units totalling 3042 dwelling-equivalents. This plant is one of the most representative of the Paris region and may also serve as a model with regard to its realization by a private company and its economics.

The primary objective of Project Activity ORD-FY04-012, 'Yucca Mountain Climate Technical Support Representative', was to provide the Office of Civilian Radioactive Waste Management (OCRWM) with expertise on past, present, and future climate scenarios and to support the technical elements of the Yucca Mountain Project (YMP) climate program. The Climate Technical Support Representative was to explain, defend, and interpret the YMP climate program to the various audiences during Site Recommendation and License Application. This technical support representative was to support DOE management in the preparation and review of documents, and to participate in comment response for the Final Environmental Impact Statement, the Site Recommendation Hearings, the NRC Sufficiency Comments, and other forums as designated by DOE management. Because the activity was terminated 12 months early and experienced a 27% reduction in budget, it was not possible to complete all components of the tasks as originally envisioned. Activities not completed include the qualification of climate datasets and the production of a qualified technical report. The following final report is an unqualified summary of the activities that were completed given the reduced time and funding.

In the past few weeks many of you have filled out the questionnaire for preparing the upcoming Five-yearly review. Similarly, Staff Association members have elected their delegates to the Staff Council for the coming two years. Once again we would like to thank all those who have taken the time and effort to make their voice heard on these two occasions. Elections to the Staff Council Below we publish the new Staff Council with its forty delegates who will represent in 2014 and 2015 all CERN staff in the discussions with Management and Member States in the various bodies and committees. Therefore it is important that the Staff Council represents as far as possible the diversity of the CERN population. By construction, the election process with its electoral colleges and two-step voting procedure guarantees that all Departments, even the small ones, and the various staff categories are correctly represented. Figure 1 shows the participation rate in the elections. The average rate is just above 52 %, with ...

We investigate a network growth model in which the genealogy controls the evolution. In this model, a new node selects a random target node and links either to this target node, or to its parent, or to its grandparent, etc.; all nodes from the target node to its most ancient ancestor are equiprobable destinations. The emerging random ancestor tree is very shallow: the fraction g_n of nodes at distance n from the root decreases super-exponentially with n, g_n = e^{-1}/(n-1)!. We find that a macroscopic hub at the root coexists with highly connected nodes at higher generations. The maximal degree of a node at the nth generation grows algebraically as N^{1/β_n}, where N is the system size. We obtain the series of nontrivial exponents which are roots of transcendental equations: β_1 ≈ 1.351746, β_2 ≈ 1.682201, etc. As a consequence, the fraction p_k of nodes with degree k has an algebraic tail, p_k ~ k^{-γ}, with γ = β_1 + 1 = 2.351746.
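The growth rule described above is easy to simulate; the sketch below (our own code, with an illustrative size of 20,000 nodes) reproduces the shallowness of the random ancestor tree:

```python
import random
from collections import Counter

def grow_ancestor_tree(n_nodes, seed=0):
    """Each new node picks a uniform random target, then attaches to a
    node chosen uniformly from the path target -> parent -> ... -> root."""
    rng = random.Random(seed)
    parent = {0: None}   # node 0 is the root
    depth = {0: 0}
    for new in range(1, n_nodes):
        target = rng.randrange(new)
        path, node = [], target
        while node is not None:          # target and all its ancestors
            path.append(node)
            node = parent[node]
        dest = rng.choice(path)          # equiprobable destinations
        parent[new] = dest
        depth[new] = depth[dest] + 1
    return depth

depth = grow_ancestor_tree(20000)
g = Counter(depth.values())
frac1 = g[1] / len(depth)        # theory predicts g_1 = e^{-1} ~ 0.368
max_depth = max(depth.values())  # stays small despite N = 20000
```

With g_n = e^{-1}/(n-1)!, the fraction at distance 1 should land near 0.37 and the maximal depth remains in single digits even for large N.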

The acquisition or designation of new protected areas is usually based on criteria for representation of different ecosystems or land-cover classes, and it is unclear how well threatened species are conserved within protected-area networks. Here, we assessed how Australia's terrestrial protected-area system (89 million ha, 11.6% of the continent) overlaps with the geographic distributions of threatened species and compared this overlap against a model that randomly placed protected areas across the continent and a spatially efficient model that placed protected areas across the continent to maximize threatened species' representation within the protected-area estate. We defined the minimum area needed to conserve each species on the basis of the species' range size. We found that although the current configuration of protected areas met targets for representation of a given percentage of species' ranges better than a random selection of areas, 166 (12.6%) threatened species occurred entirely outside protected areas and target levels of protection were met for only 259 (19.6%) species. Critically endangered species were among those with the least protection; 12 (21.1%) species occurred entirely outside protected areas. Reptiles and plants were the most poorly represented taxonomic groups, and amphibians the best represented. Spatial prioritization analyses revealed that an efficient protected-area system of the same size as the current protected-area system (11.6% of the area of Australia) could meet representation targets for 1272 (93.3%) threatened species. Moreover, the results of these prioritization analyses showed that by protecting 17.8% of Australia, all threatened species could reach target levels of representation, assuming all current protected areas are retained. Although this amount of area theoretically could be protected, existing land uses and the finite resources available for conservation mean land acquisition may not be possible or even effective.

We introduce a new approach to competing risks using random forests. Our method is fully non-parametric and can be used for selecting event-specific variables and for estimating the cumulative incidence function. We show that the method is highly effective for both prediction and variable selection...

This paper develops a computational model for stochastic analysis and reliability assessment of vehicle-track systems subject to earthquakes and track random irregularities. In this model, the earthquake is expressed as a non-stationary random process simulated by spectral representation and random functions, and the track random irregularities, with ergodic properties in amplitudes, wavelengths and probabilities, are characterized by a track irregularity probabilistic model; the number theoretical method (NTM) is then applied to efficiently select representative samples of earthquakes and track random irregularities. Furthermore, a vehicle-track coupled model is presented to obtain the time-domain dynamic responses of vehicle-track systems due to the earthquakes and track random irregularities, and the probability density evolution method (PDEM) is introduced to describe the evolution of probability from excitation input to response output, treating the vehicle-track system as a probabilistically conservative system; this lays the foundation for reliability assessment of vehicle-track systems. The effectiveness of the proposed model is validated by comparison with Monte-Carlo results from a statistical viewpoint. As an illustrative example, the random vibrations of a high-speed railway vehicle running on track slabs excited by lateral seismic waves and track random irregularities are analyzed, from which some significant conclusions can be drawn: e.g., track irregularities additionally amplify the dynamic influence of earthquakes, especially the maximum values and dispersion of responses; the characteristic frequencies or frequency ranges governed by earthquakes and by track random irregularities differ greatly; moreover, the lateral seismic waves dominate or even change the characteristic frequencies of some lateral dynamic response indices at low frequency.
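The spectral-representation step for a non-stationary excitation can be illustrated as follows; this is a generic sketch with made-up PSD and envelope parameters (a low-pass spectrum and a rise-and-decay envelope), not the paper's actual earthquake model:

```python
import math, random

def nonstationary_process(duration, dt, psd, envelope, n_freq=200, w_max=80.0, seed=0):
    """Spectral representation: a stationary sum of cosines with random
    phases, shaped by a deterministic time envelope (uniform modulation)."""
    rng = random.Random(seed)
    dw = w_max / n_freq
    freqs = [(k + 0.5) * dw for k in range(n_freq)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_freq)]
    amps = [math.sqrt(2.0 * psd(w) * dw) for w in freqs]   # amp^2/2 = S(w) dw
    samples = []
    for i in range(int(duration / dt)):
        t = i * dt
        x = sum(a * math.cos(w * t + p) for a, w, p in zip(amps, freqs, phases))
        samples.append(envelope(t) * x)
    return samples

# illustrative low-pass PSD and an envelope peaking at t = 2 s
psd = lambda w: 1.0 / (1.0 + (w / 15.0) ** 4)
env = lambda t: (t / 2.0) * math.exp(1.0 - t / 2.0)
acc = nonstationary_process(10.0, 0.01, psd, env)
```

The envelope makes the variance time-dependent while the underlying cosine sum keeps the target spectral content, which is the standard uniform-modulation simplification.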

An algorithm that integrates Karhunen-Loeve expansion (KLE) and the finite element method (FEM) is proposed to perform non-stationary random vibration analysis of structures under excitations represented by multiple random processes that are correlated in both time and spatial domains. In KLE, the auto-covariance functions of random excitations are discretized using orthogonal basis functions. The KLE for multiple correlated random excitations relies on expansions in terms of correlated sets of random variables reflecting the cross-covariance of the random processes. During the response calculations, the eigenfunctions of KLE used to represent excitations are applied as forcing functions to the structure. The proposed algorithm is applied to a 2DOF system, a 2D cantilever beam and a 3D aircraft wing under both stationary and non-stationary correlated random excitations. Two methods are adopted to obtain the structural responses: a) the modal method and b) the direct method. Both methods provide the statistics of the dynamic response with sufficient accuracy. The structural responses under the same type of correlated random excitations are bounded by the responses obtained with perfectly correlated and uncorrelated random excitations. The structural response increases with a decrease in the correlation length and with an increase in the correlation magnitude. The proposed methodology can be applied for the analysis of any complex structure under any type of random excitation.
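A minimal discrete KLE for a single random excitation might look like this (our own sketch with an assumed exponential covariance kernel; the paper's correlated multi-process expansion is more involved):

```python
import numpy as np

def kle_sample_paths(cov_func, t, n_terms, n_samples, seed=0):
    """Discrete KLE: eigendecompose the covariance matrix on the grid t,
    keep the leading modes, and drive them with i.i.d. standard normals."""
    rng = np.random.default_rng(seed)
    T, S = np.meshgrid(t, t, indexing="ij")
    C = cov_func(T, S)                    # C[i, j] = Cov(X(t_i), X(t_j))
    lam, phi = np.linalg.eigh(C)          # eigenvalues in ascending order
    lam = np.clip(lam[::-1][:n_terms], 0.0, None)   # leading modes
    phi = phi[:, ::-1][:, :n_terms]
    xi = rng.standard_normal((n_terms, n_samples))  # KLE coefficients
    return phi @ (np.sqrt(lam)[:, None] * xi)

t = np.linspace(0.0, 1.0, 200)
cov = lambda s, u: np.exp(-np.abs(s - u) / 0.3)     # assumed exponential kernel
X = kle_sample_paths(cov, t, n_terms=20, n_samples=5000)
var_est = X.var(axis=1).mean()   # approaches C(t, t) = 1 as terms grow
```

Truncating to a few modes is what makes KLE attractive: for this kernel and correlation length, 20 terms already capture most of the process variance.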

Background Promotion of healthy pregnancies has gained high priority in the Netherlands because of relatively unfavorable perinatal outcomes. In response, a nationwide study, 'Healthy Pregnancy 4 All' (HP4ALL), has been initiated. Part of this study involves systematic and broadened antenatal risk assessment (the Risk Assessment substudy). Risk selection in current clinical practice is mainly based on medical risk factors. Despite the increasing evidence for the influence of nonmedical risk f...

Background: Chronic non-specific low back pain is a common disorder that often has no clear mechanism. Exercise therapy is an effective and safe method for the treatment of chronic musculoskeletal disorders such as back pain. Pilates and Williams exercises are two distinct types of exercise used for the treatment of back pain, but there is no evidence in the literature for the advantage of one over the other. Therefore, the aim of this study was to investigate and compare the effects of selective Pilates and Williams exercises on back flexibility and back pain in men with chronic non-specific low back pain. Methods: Forty men with chronic non-specific low back pain were divided into two equal groups who participated in 10 sessions (over two weeks) of a treatment program. Patients in group 1 received an electrotherapy treatment followed by selective Pilates exercises. Patients in group 2 received Williams exercises after the same electrotherapy treatment. Back flexibility and pain level were measured before and after the interventions. A pain follow-up was also done four weeks later. Statistical analysis was done using Mann-Whitney, Wilcoxon, mixed ANOVA and Friedman tests. Results: The results showed that back flexibility increased and pain decreased in both groups (P=0.001). Selective Pilates exercises were more effective in enhancing back flexibility and reducing back pain (P=0.001). Conclusion: Comparison of these two types of therapeutic exercise showed that selective Pilates exercises are more effective in the treatment of patients with chronic non-specific low back pain.

We propose an original model for noise analysis, transformation, and synthesis: the CNSS model. Noisy sounds are represented with short-time sinusoids whose frequencies and phases are random variables. This spectral and statistical model represents information about the spectral density of frequencies. This perceptually relevant property is modeled by three mathematical parameters that define the distribution of the frequencies. This model also represents the spectral envelope. The mathematical parameters are defined and the analysis algorithms to extract these parameters from sounds are introduced. Then algorithms for generating sounds from the parameters of the model are presented. Applications of this model include tools for composers, psychoacoustic experiments, and pedagogy.
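A bare-bones version of the synthesis idea - short-time sinusoids with random frequencies and phases - can be sketched as follows (illustrative parameter values; the CNSS frequency-distribution parameters are replaced here by a plain uniform draw):

```python
import math, random

def noisy_frame(n_partials, frame_len, sr, f_lo, f_hi, seed=0):
    """One short-time frame: a sum of sinusoids whose frequencies are
    drawn from a distribution (uniform here) and whose phases are
    uniform on [0, 2*pi); amplitude is normalized to keep power fixed."""
    rng = random.Random(seed)
    freqs = [rng.uniform(f_lo, f_hi) for _ in range(n_partials)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_partials)]
    norm = 1.0 / math.sqrt(n_partials)
    return [norm * sum(math.sin(2.0 * math.pi * f * n / sr + p)
                       for f, p in zip(freqs, phases))
            for n in range(frame_len)]

frame = noisy_frame(50, 512, 44100, 200.0, 8000.0)
power = sum(x * x for x in frame) / len(frame)   # expected near 0.5
```

Redrawing frequencies and phases per frame yields noise; narrowing or reshaping the frequency distribution is what moves the result along the noise-to-tone continuum.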

Total Hip Arthroplasty (THA) is being used more commonly in younger, higher-demand patients. The purpose of this randomized pilot study was to explore the feasibility of comprehensive postoperative rehabilitation compared to usual care following primary THA, with subjects assigned to either a rehabilitation program (Intervention) or usual postoperative care (Control). Subjects were assessed preoperatively, six weeks postoperatively (Pre-intervention) and four and 12 months postoperatively (Post-intervention). Self-report measures were the Western Ontario McMaster Osteoarthritis Index (WOMAC) and Rand 36-Item Health Survey (RAND-36). Performance-based measures included lower extremity strength, walking speed and endurance, and gait laboratory assessment. Ten Control and 11 Intervention subjects with an average age of 53.4 (SD 9.3) years were randomized. All Intervention subjects completed the program without adverse effects. Although no statistically significant results were found, four months postoperatively Intervention subjects had clinically important differences (CID) in strength compared with Control subjects. Walking endurance, WOMAC and RAND scores improved significantly, with no CID noted between groups. Ten (48%) subjects reported a ceiling effect on the WOMAC (9 (43%) subjects on Pain; 1 (5%) subject on Function). No group CID were noted in gait measures. We recommend that performance-based strength measures be considered as the primary outcome in this younger cohort. Because of the ceiling effects with WOMAC Pain, a different pain measure is indicated. Other more challenging functional performance-based tests should be considered, such as a more prolonged endurance test. There is merit in one-year follow-up, as strength improved after four months in both groups.

This article analyzes the phenomenon of global consciousness and substantiates its cultural, historical and civilizational dimensions. It is shown that the concepts of planetary consciousness, global thinking and the noosphere were first described in the philosophy of cosmism. In modern conditions, however, the ideas of the representatives of the naturalistic philosophical direction of cosmism have not lost their heuristic potential; they can be reconsidered within the context of emerging anthropomorphic (human-dimension) networks. It is argued that global consciousness is a component of the social and cultural potential of global information networks, defining the vectors and prospects of human progress in the 21st century. Relying on the methodology of structural and functional analysis, the author arrives at the conclusion that global networks are obtaining the status of representatives of global consciousness. It is in networks that all relevant information is concentrated - from statistical data to scientific and technical information. Access to these data is limited by human abilities and is realized in the form of discrete requests using heuristic algorithms of information processing. It is suggested that modern society, being a self-organized system, seeks to attain a stable condition. Anthropomorphic networks are a means of decreasing social entropy, which grows as a result of any kind of human intervention in social processes. Thus, for the first time, humans are challenged by their own intellect and their ability to create, discover and control.

How can we find a general way to choose the most suitable samples for training a classifier, even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role in constructing a refined training set to improve the classification performance in a variety of applications, such as text analysis, image recognition, social network modeling, etc. Although combining representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well only under certain data structures. Can we then find a way to fuse the two active sampling criteria without any assumption on the data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are elaborately designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertainty measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed, which exploits a radial basis function together with the estimated probabilities to construct the triple measures, and a modified best-versus-second-best strategy to construct the uncertainty measure. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over the state-of-the-art active learning algorithms.
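One way such a fusion can be sketched is the toy scorer below; the scoring formula, weights, and data are our own illustration, not the paper's exact triple measures:

```python
import math

def rbf(a, b, gamma=1.0):
    """Radial basis function similarity between two feature vectors."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def query_index(unlabeled, labeled, probs, alpha=0.5):
    """Fuse (representativeness of the unlabeled pool) * (diversity
    w.r.t. the labeled set) with a best-versus-second-best uncertainty
    term; return the index of the best sample to query next."""
    best_i, best_score = -1, -float("inf")
    for i, x in enumerate(unlabeled):
        rep = sum(rbf(x, u) for u in unlabeled) / len(unlabeled)
        div = 1.0 - max((rbf(x, l) for l in labeled), default=0.0)
        p = sorted(probs[i], reverse=True)
        uncertainty = 1.0 - (p[0] - p[1])   # small margin = very uncertain
        score = alpha * rep * div + (1.0 - alpha) * uncertainty
        if score > best_score:
            best_i, best_score = i, score
    return best_i

unlabeled = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
labeled = [(5.0, 5.1)]                          # one point already labeled
probs = [(0.5, 0.5), (0.9, 0.1), (0.55, 0.45)]  # current classifier outputs
pick = query_index(unlabeled, labeled, probs)   # -> 0
```

Here the third point is uncertain but redundant with the labeled set, and the second is representative but confidently classified, so the scorer picks the first: representative, far from the labeled data, and maximally uncertain.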

Selective mutism is an uncommon disorder of young children, in which a child consistently fails to speak in certain social situations while speaking easily in others. Many etiologies have been proposed for selective mutism, including psychodynamic, behavioral and familial ones. A developmental etiology that incorporates insights from all of the above is gaining support. Accordingly, a mild language impairment in a child with an anxiety trait may be at the root of developing selective mutism. The behavior is then reinforced by an avoidant pattern in the family. Early treatment and follow-up for children with selective mutism is important. Treatment includes non-pharmacological therapy (psychodynamic, behavioral and familial) and pharmacologic therapy - mainly selective serotonin reuptake inhibitors (SSRIs).

Human beings are selectively fatalistic. Some risks appear as "background noise," whereas other, quantitatively identical risks cause enormous concern. This essay explores the reasons for selective fatalism and possible legal responses. Sometimes selective fatalism is a product of distributional issues, as people focus especially on risks that face particular groups; sometimes people adapt their preferences and beliefs so as to reduce concern with risks that they perceive themselves unable to...

Canada is a large nation with forested ecosystems that occupy over 60% of the national land base, and knowledge of the patterns of Canada's land cover is important to proper environmental management of this vast resource. To this end, a circa 2000 Landsat-derived land cover map of the forested ecosystems of Canada has created a new window into understanding the composition and configuration of land cover patterns in forested Canada. Strategies for summarizing such large expanses of land cover are increasingly important, as land managers work to study and preserve distinctive areas, as well as to identify representative examples of current land-cover and land-use assemblages. Meanwhile, the development of extremely efficient clustering algorithms has become increasingly important in the world of computer science, in which billions of pieces of information on the internet are continually sifted for meaning for a vast variety of applications. One recently developed clustering algorithm quickly groups large numbers of items of any type in a given data set while simultaneously selecting a representative - or "exemplar" - from each cluster. In this context, the availability of both advanced data processing methods and a nationally available set of landscape metrics presents an opportunity to identify sets of representative landscapes to better understand landscape pattern, variation, and distribution across the forested area of Canada. In this research, we first identify and provide context for a small, interpretable set of exemplar landscapes that objectively represent land cover in each of Canada's ten forested ecozones. Then, we demonstrate how this approach can be used to identify flagship and satellite long-term study areas inside and outside protected areas in the province of Ontario. These applications aid our understanding of Canada's forest while augmenting its management toolbox, and may signal a broad range of applications for this versatile approach.
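The exemplar idea can be illustrated with a plain k-medoids sketch (our own minimal implementation on toy 2-D points, not the clustering algorithm used in the study): the returned medoids are actual data items, so each one serves directly as its cluster's exemplar.

```python
import random

def k_medoids(points, k, dist, n_iter=100, seed=0):
    """Alternating k-medoids: assign each point to its nearest medoid,
    then pick as new medoid the member minimizing total within-cluster
    distance; the final medoids are the exemplars."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for i, p in enumerate(points):
            j = min(range(k), key=lambda m: dist(p, points[medoids[m]]))
            clusters[j].append(i)
        new_medoids = list(medoids)
        for j, members in enumerate(clusters):
            if members:
                new_medoids[j] = min(
                    members,
                    key=lambda c: sum(dist(points[c], points[i]) for i in members))
        if new_medoids == medoids:
            break                      # converged
        medoids = new_medoids
    return medoids

sq = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
exemplars = sorted(k_medoids(pts, 2, sq))   # one index per cluster
```

Any dissimilarity function works in place of `sq`, which is what makes medoid-style exemplar selection applicable to landscape-metric vectors as easily as to points.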

Structural analysis of proteins and nucleic acids is complicated by their inherent flexibility, conferred, for example, by linkers between their contiguous domains. Therefore, the macromolecule needs to be represented by an ensemble of conformations instead of a single conformation. Determining this ensemble is challenging because the experimental data are a convoluted average of contributions from multiple conformations. As the number of ensemble degrees of freedom generally greatly exceeds the number of independent observables, directly deconvolving experimental data into a representative ensemble is an ill-posed problem. Recent developments in sparse approximations and compressive sensing have demonstrated that useful information can be recovered from underdetermined (ill-posed) systems of linear equations by using sparsity regularization. Inspired by these advances, we designed the Sparse Ensemble Selection (SES) method for recovering multiple conformations from a limited number of observations. SES is more general and accurate than previously published minimum-ensemble methods, and we use it to obtain representative conformational ensembles of Lys48-linked di-ubiquitin, characterized by residual dipolar coupling data measured at several pH conditions. These representative ensembles are validated against NMR chemical shift perturbation data and compared to maximum-entropy results. The SES method reproduced and quantified the previously observed pH dependence of the major conformation of Lys48-linked di-ubiquitin, and revealed lesser-populated conformations that are pre-organized for binding known di-ubiquitin receptors, thus providing insights into possible mechanisms of receptor recognition by polyubiquitin. SES is applicable to any experimental observables that can be expressed as a weighted linear combination of data for individual states. PMID:24093873
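The sparsity-regularized recovery at the heart of such methods can be sketched with a small ISTA loop; the data matrix, weights, and regularization value below are made up for illustration and this is a generic sparse solver, not the SES algorithm itself:

```python
def ista_nonneg(A, b, lam=0.01, n_iter=2000):
    """ISTA for sparse non-negative weights with A w ~= b: a gradient
    step on the least-squares term, then a non-negative soft-threshold.
    Step size uses the Frobenius bound on the Lipschitz constant."""
    m, n = len(A), len(A[0])
    step = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    w = [0.0] * n
    for _ in range(n_iter):
        r = [sum(A[i][j] * w[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        w = [max(0.0, wj - step * (gj + lam)) for wj, gj in zip(w, g)]
    return w

# five hypothetical states, each predicting six observables (columns of A);
# the "measurement" b is an exact 70/30 mix of the first two states
cols = [[1, 0, 0, 1, 0, 0], [0, 1, 0, 0, 1, 0], [0, 0, 1, 0, 0, 1],
        [1, 0, 0, 0, 0, 1], [0, 1, 1, 0, 0, 0]]
A = [[cols[j][i] for j in range(5)] for i in range(6)]
b = [0.7, 0.3, 0.0, 0.7, 0.3, 0.0]
w = ista_nonneg(A, b)   # ~[0.7, 0.3, 0, 0, 0] up to the l1 shrinkage
```

The l1 penalty drives the weights of the three unused states to exactly zero, which is the sense in which sparsity regularization selects a small representative subset of states.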

Completely random signed measures are defined, characterized and related to Lévy random measures and Lévy bases.

In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability: (i) the RGG is connected, (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
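The push protocol is straightforward to simulate; the sketch below (our own code) runs it on a cycle graph rather than an RGG, chosen because there the broadcast time is necessarily at least the diameter:

```python
import random

def broadcast_rounds(adj, start=0, seed=0):
    """Push protocol: every round, each informed node informs a uniformly
    random neighbor; return the number of rounds until all nodes know."""
    rng = random.Random(seed)
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        for u in list(informed):
            informed.add(rng.choice(adj[u]))
        rounds += 1
    return rounds

n = 100
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # diameter n/2
r = broadcast_rounds(cycle)   # >= 50: news moves at most 1 step per side per round
```

On the cycle, each frontier node passes the news onward with probability 1/2 per round, so the broadcast time concentrates around a small constant times the diameter, illustrating the Θ(diam(G)) behavior.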

The Selenium and Vitamin E Cancer Prevention Trial (SELECT) was a randomized, double-blind, placebo-controlled prostate cancer prevention study funded by the National Cancer Institute (NCI) and conducted by the Southwest Oncology Group (SWOG). A total of 35,533 men were assigned randomly to one of the four treatment groups (vitamin E + placebo, selenium + placebo, vitamin E + selenium, and placebo + placebo). The independent Data and Safety Monitoring Committee (DSMC) recommended the discontinuation of study supplements because of the lack of efficacy for risk reduction and because futility analyses demonstrated no possibility of benefit of the supplements to the anticipated degree (25% reduction in prostate cancer incidence) with additional follow-up. Study leadership agreed that the randomized trial should be terminated but believed that the cohort should be maintained and followed as the additional follow-up would contribute important information to the understanding of the biologic consequences of the intervention. Since the participants no longer needed to be seen in person to assess acute toxicities or to be given study supplements, it was determined that the most efficient and cost-effective way to follow them was via a central coordinated effort. A number of changes were necessary at the local Study Sites and SELECT Statistical Center to transition to following participants via a Central Coordinating Center. We describe the transition process from a randomized clinical trial to the observational Centralized Follow-Up (CFU) study. The process of transitioning SELECT, implemented at more than 400 Study Sites across the United States, Canada, and Puerto Rico, entailed many critical decisions and actions including updates to online documents such as the SELECT Workbench and Study Manual, a protocol amendment, reorganization of the Statistical Center, creation of a Transition Committee, development of materials for SELECT Study Sites, development of procedures

Previous studies of the vacuum polarization on de Sitter have demonstrated that there is a simple, noncovariant representation of it in which the physics is transparent. There is also a cumbersome, covariant representation in which the physics is obscure. Despite being unwieldy, the latter form has a powerful appeal for those who are concerned about de Sitter invariance. We show that nothing is lost by employing the simple, noncovariant representation because there is a closed form procedure for converting its structure functions to those of the covariant representation. We also present a vastly improved technique for reading off the noncovariant structure functions from the primitive diagrams. And we discuss the issue of representing the vacuum polarization for a general metric background.

The need to acquire a representative periphytic diatom sample for river water quality monitoring has been recognised in the development of existing diatom indices, important in the development and employment of diatom monitoring tools for the Water Framework Directive. In this study, a nested design with replication is employed to investigate the magnitude of variation in diatom biomass, composition and Trophic Diatom Index at varying scales within a small chalk river. The study shows that the use of artificial substrates may not result in diatom communities that are typical of the surrounding natural substrates. Periphytic diatom biomass and composition varies between artificial and natural substrates, riffles and glides and between two stretches of the river channel. The study also highlights the existence of high variation in diatom frustule frequency and biovolume at the individual replicate scale which may have implications for the use of diatoms in routine monitoring.

This article entitled Statistical Models of Representing Intellectual Capital approaches and analyses the concept of intellectual capital, as well as the main models which can support entrepreneurs/managers in evaluating and quantifying the advantages of intellectual capital. Most authors examine intellectual capital from a static perspective and focus on the development of its various evaluation models. In this chapter we surveyed the classical static models: Sveiby, Edvinsson, Balanced Scorecard, as well as the canonical model of intellectual capital. Among the group of static models for evaluating organisational intellectual capital the canonical model stands out. This model enables the structuring of organisational intellectual capital in: human capital, structural capital and relational capital. Although the model is widely spread, it is a static one and can thus create a series of errors in the process of evaluation, because all the three entities mentioned above are not independent from the viewpoint of their contents, as any logic of structuring complex entities requires.

Over 80 years ago Samuel Wilks proposed that the “generalized variance” of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the “Wilks standard deviation” (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the “uncorrelation index” (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: “randomness measures” and “independence indices” of random vectors. In turn, these general notions give rise to “randomness diagrams”: tangible planar visualizations that answer the question, how random is a random vector? The notion of “independence indices” yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
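The determinant-based quantities are one-liners in NumPy. Note that the "uncorrelation index" below is implemented as sqrt(det(correlation matrix)), which equals 1 for uncorrelated components and shrinks toward 0 as components become linearly dependent; treat that formula as one plausible reading of the abstract, not the paper's exact definition.

```python
import numpy as np

def wilks_std(x):
    """Wilks standard deviation: square root of the generalized
    variance, i.e. sqrt(det(sample covariance)) of the rows of x."""
    return float(np.sqrt(np.linalg.det(np.cov(x, rowvar=False))))

def uncorrelation_index(x):
    """sqrt(det(correlation matrix)): 1 for uncorrelated components,
    approaching 0 as components become linearly dependent.
    (An assumed reading of the paper's index, for illustration.)"""
    c = np.cov(x, rowvar=False)
    return float(np.sqrt(np.linalg.det(c) / np.prod(np.diag(c))))

rng = np.random.default_rng(0)
ind = rng.standard_normal((4000, 3))              # independent components
dep = ind.copy()
dep[:, 1] = 0.95 * dep[:, 0] + 0.05 * dep[:, 1]   # strongly correlated pair
```

By Hadamard's inequality the index never exceeds 1, which is what makes it usable as an overall-correlation gauge.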

To help resolve the problem of site selection for the proposed 300 GeV machine, the Council selected "three wise men" (left to right, J H Bannier of the Netherlands, A Chavanne of Switzerland and L K Boggild of Denmark).

Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.

The suitability of European networks to check compliance with air quality standards and to assess exposure of the population was investigated. An air quality model (URBIS) was applied to estimate and compare the spatial distribution of the concentration of nitrogen dioxide (NO2) in ambient air in four large cities. The concentrations calculated at the location of the monitoring stations compared well with the concentrations measured at the stations, indicating that the models worked well. Therefore the calculated concentration distributions were used as a proxy for the actual concentration distributions across the cities. The distributions of these proxy concentrations across the city populations were determined and cumulative population distribution curves were estimated. The calculated annual mean values at the monitoring network stations were located on the population distribution curves to estimate the fractions of the populations that the monitoring network stations represent. This macro scale procedure is used to evaluate which subgroups of the monitoring stations can be reliably used to decide on compliance or to estimate the concentration the population is exposed to. In addition, the CAR model and Computational Fluid Dynamics (CFD) models are used to investigate the effect of micro scale siting of the monitoring stations within the streets. The following observations were made:
- Berlin and London networks cover the distribution of concentrations to which the population is exposed rather well, while Stuttgart and Barcelona have stations at sites with mainly the higher concentrations, so the exposure is covered less well.
- The networks in London and Berlin, with a substantial number of urban background stations, seem fit to monitor the average population exposure, contrary to those in Stuttgart and Barcelona with only a limited number of these stations.
- The concentrations measured at street stations hardly reflect the calculated differences in street

We develop a white noise theory for Poisson random measures associated with a Lévy process. The starting point of this theory is a chaos expansion with kernels of polynomial type. We use this to construct the white noise of a Poisson random measure, which takes values in a certain distribution space. Then we show, how a Skorohod/Itô integral for point processes can be represented by a Bochner integral in terms of white noise of the random measure and a Wick product. Further, we apply these co...

The European Geosciences Union is a bottom-up-organisation, in which its members are represented by their respective scientific divisions, committees and council. In recent years, EGU has embarked on a mission to reach out to its numerous 'younger' members by giving awards to outstanding young scientists and the setting up of Early Career Scientists (ECS) representatives. The division representative's role is to engage in discussions that concern students and early career scientists. Several meetings between all the division representatives are held throughout the year to discuss ideas and Union-wide issues. One important impact ECS representatives have had on EGU is the increased number of short courses and workshops run by ECS during the annual General Assembly. Another important contribution of ECS representatives was redefining 'Young Scientist' to 'Early Career Scientist', which avoids discrimination due to age. Since 2014, the Seismology Division has its own ECS representative. In an effort to more effectively reach out to young seismologists, a blog and a social media page dedicated to seismology have been set up online. With this dedicated blog, we'd like to give more depth to the average browsing experience by enabling young researchers to explore various seismology topics in one place while making the field more exciting and accessible to the broader community. These pages are used to promote the latest research especially of young seismologists and to share interesting seismo-news. Over the months the pages proved to be popular, with hundreds of views every week and an increased number of followers. An online survey was conducted to learn more about the activities and needs of early career seismologists. We present the results from this survey, and the work that has been carried out over the last two years, including detail of what has been achieved so far, and what we would like the ECS representation for Seismology to achieve. Young seismologists are

We investigated the spontaneous brain electric activity of 13 skeptics and 16 believers in paranormal phenomena; they were university students assessed with a self-report scale about paranormal beliefs. 33-channel EEG recordings during no-task resting were processed as sequences of momentary potential distribution maps. Based on the maps at peak times of Global Field Power, the sequences were parsed into segments of quasi-stable potential distribution, the 'microstates'. The microstates were clustered into four classes of map topographies (A-D). Analysis of the microstate parameters time coverage, occurrence frequency and duration as well as the temporal sequence (syntax) of the microstate classes revealed significant differences: Believers had a higher coverage and occurrence of class B, tended to decreased coverage and occurrence of class C, and showed a predominant sequence of microstate concatenations from A to C to B to A that was reversed in skeptics (A to B to C to A). Microstates of different topographies, putative "atoms of thought", are hypothesized to represent different types of information processing. The study demonstrates that personality differences can be detected in resting EEG microstate parameters and microstate syntax. Microstate analysis yielded no conclusive evidence for the hypothesized relation between paranormal belief and schizophrenia.

In order to develop primary care research by general practice university lecturers, it was necessary to evaluate the representativeness of this group of lecturers at the Angers Faculty of Medicine. Declarative study based on self-administered questionnaires filled in by 216 university lecturers. The questionnaire was derived from that of the regional panel of the Research, studies, evaluation and statistics directorate of 2007, investigating the sociodemographic characteristics, professional organization, activities and certain professional practices of general practitioners. University lecturers were compared to the population of the panel by means of a Chi-square test of conformity. A total of 181 university lecturers participated in the survey, 65% of whom were men. The proportion of women was higher among university lecturers and the 45-54 years age-group was underrepresented. The university lecturers group was significantly different from the panel in terms of predominantly group practice and shorter weekly working hours. No significant difference was observed for the type of town of practice and the continuing medical education participation rate. University lecturers present certain specificities, partly related to the reference population used. The development of research based on such a network appears to be feasible in terms of representativeness, provided these specificities are clearly described.

This paper deals with an investigation of breakdates in agricultural prices. A structural break has occurred if at least one of the model parameters has changed at some date. This date is a breakdate. Ignoring structural breaks in time series can lead to serious problems with economic models of time series. The aim is to determine the number and date of the breakdates in individual time series and connect them with changes in the market and economic environment. The time series of agricultural prices relating to animal production, namely the prices of pork, beef, chicken, milk and eggs, are analyzed for the period from January 1996 to December 2011. The autoregressive (AR) model of Box-Jenkins methodology and stability testing according to Quandt or Wald statistics are used for the purposes of this paper. Multiple breakdates are found in the case of eggs (September 1998, May 2004), milk (October 1999, December 2007) and chicken (October 2002, February 2005) prices. One breakdate was detected in the prices of beef (April 2002) and none in the case of pork prices. The results show the importance of multiple breakdate testing. The Quandt statistic provides one possible way of applying a multiple approach. All breakdates which were confirmed using these statistics can be associated with changes in the agri-food market and economic environment. Information about the date of changes in the time series can be used for other unbiased modelling in more complex models.
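A Quandt-style scan can be sketched in a few lines: compute an F statistic for a parameter shift at every admissible breakdate and keep the maximizer. This toy version tests only a break in the mean of a white-noise series, whereas the paper works with AR models; the trimming fraction and the simulated series are illustrative assumptions.

```python
import numpy as np

def sup_f_breakdate(y, trim=0.15):
    """Quandt-style sup-F scan for a single break in the mean: fit a
    one-mean model vs. a two-mean model at every admissible breakdate
    and return the date maximising the F statistic."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    rss1 = np.sum((y - y.mean()) ** 2)          # restricted model: no break
    best_f, best_k = -np.inf, None
    for k in range(lo, hi):
        a, b = y[:k], y[k:]
        rss2 = np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)
        f = (rss1 - rss2) / (rss2 / (n - 2))    # F with one restriction
        if f > best_f:
            best_f, best_k = f, k
    return best_k, best_f

# Simulated monthly price-like series with a mean shift at observation 120.
rng = np.random.default_rng(1)
y = rng.standard_normal(300)
y[120:] += 2.0
k, f = sup_f_breakdate(y)
```

Because the breakdate is estimated, the sup-F statistic does not follow the standard F distribution; critical values must come from the Quandt-Andrews tables rather than an ordinary F test.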

Kinetic Monte Carlo methods such as the Gillespie algorithm model chemical reactions as random walks in particle number space. The interreaction times are exponentially distributed under the assumption that the system is well mixed. We introduce an arbitrary interreaction time distribution, which may account for the impact of incomplete mixing on chemical reactions, and in general stochastic reaction delay, which may represent the impact of extrinsic noise. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical master equation. This leads naturally to a generalization of the Gillespie algorithm. Based on this formalism, we determine the modified chemical rate laws for different interreaction time distributions. This framework traces Michaelis-Menten-type kinetics back to finite-mean delay times, and predicts time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local nonequilibrium.
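A minimal sketch of such a generalized simulation step: the reaction channel is still chosen proportionally to its propensity, but the waiting time is drawn from a user-supplied distribution, so plugging in the exponential recovers the classic Gillespie algorithm. All function names and the decay example are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def generalized_gillespie(rates, update, state, t_end, draw_delay, seed=0):
    """Gillespie-style kinetic Monte Carlo in which the reaction channel
    is chosen proportionally to its propensity, but the waiting time
    comes from an arbitrary distribution draw_delay(total, rng) instead
    of the classic Exponential(total)."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, tuple(state))]
    while t < t_end:
        props = [r(state) for r in rates]
        total = sum(props)
        if total == 0:                     # no reaction can fire any more
            break
        t += draw_delay(total, rng)        # generalized interreaction time
        u, acc, chosen = rng.random() * total, 0.0, 0
        for i, p in enumerate(props):      # propensity-weighted channel pick
            acc += p
            if u <= acc:
                chosen = i
                break
        state = update(state, chosen)
        traj.append((t, tuple(state)))
    return traj

# Example: pure decay A -> 0 at rate 0.5 per molecule, with the
# exponential delay that recovers the classic Gillespie algorithm.
rates = [lambda s: 0.5 * s[0]]
update = lambda s, i: [s[0] - 1]
exp_delay = lambda total, rng: -math.log(1.0 - rng.random()) / total
traj = generalized_gillespie(rates, update, [50], 1000.0, exp_delay)
```

Swapping `exp_delay` for a heavy-tailed delay distribution is how one would probe the non-Markovian, time-nonlocal kinetics discussed in the abstract.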

... have a family history of selective mutism, extreme shyness, or anxiety disorders, which may increase their risk ... well Inability to speak in certain social situations Shyness This pattern must be seen for at least ...

Selective enumeration is an approach to pruning search trees with the goal of preventing the generation of extraneous paths in the search tree, rather than generating paths that will later be pruned...

Musculoskeletal disorders (MSDs) are a leading cause of work-related ill health. Existing literature indicates that pharmaceutical sales representatives (PSRs) report a high prevalence of MSDs, possibly exacerbated by the nature of work (prolonged driving and manual handling). In addition, they experience difficulty in accessing occupational health services. To assess the prevalence of musculoskeletal symptoms and associated risk factors among PSRs in order to assist their occupational health management through raising risk awareness. A self-completed questionnaire distributed to 205 PSRs within a UK pharmaceutical company was used to assess the prevalence of musculoskeletal symptoms, psychosocial factors, work tasks undertaken and company car use. To assist understanding of work tasks and organizational factors, semi-structured interviews were undertaken with a sample of 12 key personnel. The questionnaire response rate was 68%. PSRs reported high mileage and 100% reported working from the car in a typical day. Forty-seven per cent reported both manual handling for ≥ 4 h/day and 'often' or 'sometimes' working from the car. Fifty-seven per cent reported low back symptoms in the last 12 months. Interview data revealed issues relating to car choice, storage in the boot and working from the car, which should be considered when developing priorities for preventive management of MSDs. Musculoskeletal symptoms appear to be a problem for PSRs, with risk factors reported as prolonged driving, sitting in the car, working from the car and manual handling. Interventions to facilitate their occupational health management should focus on raising awareness of the risks of prolonged driving and working from the car.

Surface air temperature is an essential variable for monitoring the atmosphere, and it is generally acquired at meteorological stations that can provide information about only a small area within a radius of r meters (the r-neighborhood) of the station, which is called the representable radius. In studies on a local scale, ground-based observations of surface air temperatures obtained from scattered stations are usually interpolated using a variety of methods without ascertaining their effectiveness. Thus, it is necessary to evaluate the spatial representativeness of ground-based observations of surface air temperature before conducting studies on a local scale. The present study used remote sensing data to estimate the spatial distribution of surface air temperature using the advection-energy balance for air temperature (ADEBAT) model. Two target stations in the study area were selected to conduct an analysis of spatial representativeness. The results showed that one station (AWS 7) had a representable radius of about 400 m with a possible error of less than 1 K, while the other station (AWS 16) had a representable radius of about 250 m. The representable radius was large when the heterogeneity of land cover around the station was small.

LifeLines is a large prospective population-based three generation cohort study in the north of the Netherlands. Different recruitment strategies were adopted: recruitment of an index population via general practitioners, subsequent inclusion of their family members, and online self-registration. Our aim was to investigate the representativeness of the adult study population at baseline and to evaluate differences in the study population according to recruitment strategy. Demographic characteristics of the LifeLines study population, recruited between 2006-2013, were compared with the total adult population in the north of the Netherlands as registered in the Dutch population register. Socioeconomic characteristics, lifestyle, chronic diseases, and general health were further compared with participants of the Permanent Survey of Living Conditions within the region (2005-2011, N = 6,093). Differences according to recruitment strategy were assessed. Compared with the population of the north of the Netherlands, LifeLines participants were more often female, middle aged, married, living in a semi-urban place and Dutch native. Adjusted for differences in demographic composition, in LifeLines a smaller proportion had a low educational attainment (5% versus 14%) or had ever smoked (54% versus 66%). Differences in the prevalence of various chronic diseases and low general health scores were mostly smaller than 3%. The age profiles of the three recruitment groups differed due to age related inclusion criteria of the recruitment groups. Other differences according to recruitment strategy were small. Our results suggest that, adjusted for differences in demographic composition, the LifeLines adult study population is broadly representative for the adult population of the north of the Netherlands. The recruitment strategy had a minor effect on the level of representativeness. These findings indicate that the risk of selection bias is low and that risk estimates in Life

Selection, the tendency of some traits to become more frequent than others under the influence of some (natural or artificial) agency, is a key component of Darwinian evolution and countless other natural and social phenomena. Yet a general theory of selection, analogous to the Fisher-Tippett-Gnedenko theory of extreme events, is lacking. Here we introduce a probabilistic definition of selection and show that selected values are attracted to a universal family of limiting distributions which generalize the log-normal distribution. The universality classes and scaling exponents are determined by the tail thickness of the random variable under selection. Our results provide a possible explanation for skewed distributions observed in diverse contexts where selection plays a key role, from molecular biology to agriculture and sport.

The advent of molecular genetics comprises a true revolution of far-reaching consequences for humankind, which evolved into a specialized branch of modern-day biochemistry. The analysis of specific genomic information is gaining wide-ranging interest because of its significance to the early diagnosis of disease, and the discovery of modern drugs. In order to take advantage of a wide assortment of signal processing (SP) algorithms, the primary step of modern genomic SP involves converting symbolic DNA sequences into complex-valued signals. How to represent the genetic code? Despite being extensively known, the DNA mapping into proteins is one of the relevant discoveries of genetics. The genetic code (GC) is revisited in this work, addressing other descriptions of it, which can be worthy for genomic SP. Three original representations are discussed. The inner-to-outer map builds on the unbalanced role of nucleotides of a codon. A two-dimensional Gray genetic representation is offered as a structured map that can help in interpreting DNA spectrograms or scalograms. These are among the powerful visual tools for genome analysis, which depend on the choice of the genetic mapping. Finally, the world chart for the GC is investigated. Evoking the cyclic structure of the genetic mapping, it can be folded by joining the left-right borders and the top-bottom frontiers. As a result, the GC can be drawn on the surface of a sphere resembling a world map. Eight parallels of latitude are required (four in each hemisphere) as well as four meridians of longitude associated with four corresponding anti-meridians. The tropic circles have 11.25°, 33.75°, 56.25°, and 78.5° (North and South). Starting from an arbitrary Greenwich meridian, the meridians of longitude can be plotted at 22.5°, 67.5°, 112.5°, and 157.5° (East and West). Each triplet is assigned to a single point on the surface that we named Nirenberg-Kohamas Earth. Despite being valuable, usual representations for the GC can be

... attention of the Safety and Health Inspector any unsafe or unhealthful working condition which the employee... EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH PROGRAMS AND RELATED MATTERS Inspection and Abatement § 1960.27... knowledge of any existing or potential unsafe or unhealthful working conditions. The representative of...

Reliability analyses are performed for three Japanese vertical wall breakwaters in this chapter. Only the geotechnical failure modes described in chapter 3 are investigated. For none of the breakwaters detailed data are available for the wave climate and for the soil conditions. Therefore represe...

Demonstrations of interactions between diverse selective forces on bright coloration in defended species are rare. Recent work has suggested that not only do the bright colours of Neotropical poison frogs serve to deter predators, but they also play a role in sexual selection, with females preferring males similar to themselves. These studies report an interaction between the selective forces of mate choice and predation. However, evidence demonstrating phenotypic discrimination by potential predators on these polymorphic species is lacking. The possibility remains that visual (avian) predators possess an inherent avoidance of brightly coloured diurnal anurans and purifying selection against novel phenotypes within populations is due solely to non-random mating. Here, we examine the influence of predation on phenotypic variation in a polymorphic species of poison frog, Dendrobates tinctorius. Using clay models, we demonstrate a purifying role for predator selection, as brightly coloured novel forms are more likely to suffer an attack than both local aposematic and cryptic forms. Additionally, local aposematic forms are attacked, though infrequently, indicating ongoing testing/learning and a lack of innate avoidance. These results demonstrate predator-driven phenotypic purification within populations and suggest colour patterns of poison frogs may truly represent a 'magic trait'.

The objective of this study was to evaluate the effect of selection for body weight on the genetic variability and diversity in broiler lines. Two paternal broiler lines (LL and LLc) were used. The LL line was selected for 12 generations for growth and carcass and reproduction characteristics. The LLc line was established from the LL line in 1985 and mated at random. Blood samples from six chickens per line were collected and used for molecular analysis. Also, a DNA pool was made for each line to compare effects between lines. Data were analyzed considering the collected information on the presence or absence of DNA bands. Band sharing scores were calculated using the Dice coefficient. The pattern of the 21 most representative bands was used. DNA fingerprinting (DFP) showed 90.48% polymorphic bands for both lines. Difference between lines was not due to the presence or absence of bands, but to the frequency of such bands in each genotype. Considering that both lines had the same genetic background, changes in band frequency were probably due to selection. Selection for body weight had an effect on the band frequency as evaluated by DFP, and for this reason this technique could be used as a tool in the selection process. Results also suggest that bands 4, 5 and 19 were linked to body weight traits, and bands 9, 10, 12, 13 and 21 were linked to reproductive traits such as egg production.

NASA as well as a number of other space agencies now recognize that the current recommended CCSDS randomizer used for telemetry (TM) is too short. When multiple applications of the PN8 Maximal Length Sequence (MLS) are required in order to fully cover a channel access data unit (CADU), spectral problems in the form of elevated spurious discretes (spurs) appear. Originally the randomizer was called a bit transition generator (BTG) precisely because it was thought that its primary value was to insure sufficient bit transitions to allow the bit/symbol synchronizer to lock and remain locked. We, NASA, have shown that the old BTG concept is a limited view of the real value of the randomizer sequence and that the randomizer also aids in signal acquisition as well as minimizing the potential for false decoder lock. Under the guidelines we considered here there are multiple maximal length sequences under GF(2) which appear attractive in this application. Although there may be mitigating reasons why another MLS sequence could be selected, one sequence in particular possesses a combination of desired properties which offsets it from the others.
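For concreteness, an MLS can be generated directly from its feedback polynomial with a short linear recurrence over GF(2). The polynomial used below, x^8 + x^7 + x^5 + x^3 + 1, is the commonly cited CCSDS PN8 randomizer polynomial (treat that attribution as an assumption of this sketch); with an all-ones seed the recurrence yields a balanced sequence of period 255.

```python
def mls_from_poly(exponents, nbits, length):
    """Generate the output of a GF(2) linear recurrence defined by the
    feedback polynomial x^nbits + sum(x^e for e in exponents) + 1.
    For a primitive polynomial and a nonzero seed the result is a
    maximal length sequence (MLS) with period 2**nbits - 1."""
    lags = sorted({nbits - e for e in exponents} | {nbits})
    out = [1] * nbits                      # all-ones seed
    while len(out) < length:
        bit = 0
        for lag in lags:                   # o_n = XOR of the o_{n-lag}
            bit ^= out[-lag]
        out.append(bit)
    return out[:length]

# x^8 + x^7 + x^5 + x^3 + 1; generate two full 255-bit periods.
seq = mls_from_poly([7, 5, 3], 8, 2 * 255)
```

XOR-ing a CADU with repeated copies of `seq` is the randomization/derandomization operation; candidate replacement polynomials can be screened by swapping the exponent list.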

Effective immunizations require a thorough, multi-step process, yet few studies have comprehensively addressed issues around vaccination management. To assess variations in vaccination management and vaccination errors in primary care, a cross-sectional, web-based questionnaire survey was performed among 1157 primary care physicians from North Rhine-Westphalia, Germany: a representative 10% random sample of general practitioners (n = 946) and all teaching physicians from the University Duisburg-Essen (n = 211). Four quality aspects with three items each were included: patient-related quality (patient information, patient consent, strategies to increase immunization rates), vaccine-related quality (practice vaccine spectrum, vaccine pre-selection, vaccination documentation), personnel-related quality (recommendation of vaccinations, vaccine application, personnel qualification) and storage-related quality (storage device, temperature log, vaccine storage control). For each of the four quality aspects, "good quality" was reached if all three criteria per quality aspect were fulfilled. Good vaccination management was defined as fulfilling all twelve items. Additionally, physicians' experiences with errors and nearby-errors in vaccination management were obtained. More than 20% of the physicians participated in the survey. Good vaccination management was reached by 19% of the practices. Patient-related quality was good in 69% of the practices, vaccine-related quality in 73%, personnel-related quality in 59% and storage-related quality in 41% of the practices. No predictors for error reporting and good vaccination management were identified. We identified good results for vaccine- and patient-related quality but need to improve issues that revolve around vaccine storage.

Spectral Feature Selection for Data Mining introduces a novel feature selection technique that establishes a general platform for studying existing feature selection algorithms and developing new algorithms for emerging problems in real-world applications. This technique represents a unified framework for supervised, unsupervised, and semisupervised feature selection. The book explores the latest research achievements, sheds light on new research directions, and stimulates readers to make the next creative breakthroughs. It presents the intrinsic ideas behind spectral feature selection, its th

We estimated the proportion of the nation's public school districts that have high school grades in which random drug testing is conducted. We collected data in spring 2005 from 1343 drug prevention coordinators in a nationally representative sample of school districts with schools that have high school grades; of these districts, 14% conducted random drug testing. Almost all districts randomly tested athletes, and 65% randomly tested other students engaged in extracurricular activities; 28% randomly tested all students, exceeding the current sanction of the US Supreme Court.

Random forests consisting of an ensemble of regression trees with equal weights are frequently used for the design of predictive models. In this article, we consider an extension of the methodology by representing the regression trees in the form of probabilistic trees and analyzing the nature of heteroscedasticity. The probabilistic tree representation allows for analytical computation of confidence intervals (CIs), and the tree weight optimization is expected to provide stricter CIs with comparable performance in mean error. We approached the prediction of the ensemble of probabilistic trees from two perspectives: as a mixture distribution and as a weighted sum of correlated random variables. We applied our methodology to the drug sensitivity prediction problem on synthetic data and the Cancer Cell Line Encyclopedia dataset and illustrated that tree weights can be selected to reduce the average length of the CI without an increase in mean error.
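The weighted-sum view above can be made concrete with a toy two-tree sketch (hypothetical covariance numbers, not the authors' method or the CCLE data): the CI half-width scales with the square root of the ensemble variance, so re-weighting correlated trees can shorten the interval at the same mean.

```python
# Hypothetical 2-tree example: correlated tree predictions with covariance
# entries s11, s22 (variances) and s12 (covariance). Illustrative numbers only.
s11, s22, s12 = 1.0, 2.0, 0.5

def ensemble_variance(w1, w2):
    # Variance of a weighted sum of two correlated random variables:
    # Var(w1*Y1 + w2*Y2) = w1^2*s11 + 2*w1*w2*s12 + w2^2*s22.
    return w1 * w1 * s11 + 2 * w1 * w2 * s12 + w2 * w2 * s22

# Equal weights: the plain random-forest average.
var_equal = ensemble_variance(0.5, 0.5)

# Minimum-variance weights subject to w1 + w2 = 1 (closed form for 2 trees).
w1 = (s22 - s12) / (s11 + s22 - 2 * s12)
var_opt = ensemble_variance(w1, 1 - w1)

# For a normal predictor, the 95% CI half-width is 1.96 * sqrt(variance),
# so lower variance gives a strictly shorter interval.
half_equal = 1.96 * var_equal ** 0.5
half_opt = 1.96 * var_opt ** 0.5
```

With these numbers the optimized weights put more mass on the lower-variance tree, reducing the ensemble variance from 1.0 to 0.875.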

Abstract Background: Disgust sensitivity is defined as a predisposition to experiencing disgust, which can be measured with the Disgust Scale and its German version, the Questionnaire for the Assessment of Disgust Sensitivity (QADS). In various studies, different factor structures were reported for either instrument. The differences are most likely due to the selected factor analysis estimation methods and the small, non-representative samples. Consequently, the aims of this study were to explore and confirm a theory-driven and statistically coherent QADS factor structure in a large representative sample and to present its standard values. Methods: The QADS was answered by N = 2473 healthy subjects. The respective households and participants were selected using the random-route sampling method. Afterwards, the collected sample was compared with information from the Federal Statistical Office to ensure that it was representative of the German residential population. With these data, an exploratory Promax-rotated principal axis factor analysis as well as comparative confirmatory factor analyses with robust maximum likelihood estimation were computed. Possible socio-demographic influences were quantified as effect sizes. Results: The data-driven and theoretically sound solution with the three highly interrelated factors Animal Reminder Disgust, Core Disgust, and Contamination Disgust led to a moderate model fit. All QADS scales had very good reliabilities (Cronbach's alpha from .90 to .95). No age differences were found among the participants; however, the female participants showed remarkably higher disgust ratings. Conclusions: Based on the representative sample, the QADS factor structure was revised. Gender-specific standard percentages permit a population-based assessment of individual disgust sensitivity. The differences between the original QADS, the new solution, and the Disgust Scale - Revised are discussed.

An experiment was conducted to investigate the effect of short-term selection for 4-week breast weight (4wk BRW) and to estimate genetic parameters of body weight and carcass traits. A selection (S) line and a control (C) line were randomly selected from a base population. Data were collected over...

This article investigates the question: why has Danish minority policy shown such remarkable selectiveness with regard to Europeanization? This question is particularly pertinent given that Denmark is typically seen as an otherwise very efficient and keen complier, especially with EU norms and ru...

A review and discussion is presented of the characteristics and factors that relate activity and selectivity in the catalytic and non-catalytic partial oxidation of methane, and of the effect of variables such as temperature and pressure on the conversion of methane to methanol. The use of modified zeolites for the catalytic oxidation of natural gas is also considered.

Reach is a key factor in translating research to practical application. This study examined the reach and representativeness of a multi-city, randomized controlled community health trial in African American (AA) and Hispanic or Latina (HL) women. Participants completed measures of demographics, body mass index (BMI), percent body fat, resting heart rate, and blood pressure, followed by a run-in procedure and a randomization meeting. AA women were more likely to be screened out initially; HL women were more likely to drop out. Participation did not differ by city or recruitment method. Women who completed the post-intervention assessment were more likely to be AA, older, and of higher socioeconomic status. Representativeness can change over the course of the study and impact the practicality of translating research to practice.

We review recent progress on ancestral processes related to mutation-selection models, both in the deterministic and the stochastic setting. We mainly rely on two concepts, namely, the killed ancestral selection graph and the pruned lookdown ancestral selection graph. The killed ancestral selection graph gives a representation of the type of a random individual from a stationary population, based upon the individual's potential ancestry back until the mutations that define the individual's type. The pruned lookdown ancestral selection graph allows one to trace the ancestry of individuals from a stationary distribution back into the distant past, thus leading to the stationary distribution of ancestral types. We illustrate the results by applying them to a prototype model for the error threshold phenomenon.

To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class by using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also offer an explanation for the controversial results obtained in previous investigations.

I.E. Tamm is one of the great figures of 20th-century physics and the mentor of the late A.D. Sakharov. Together with I.M. Frank, he received the Nobel Prize in 1958 for the explanation of the Cherenkov effect. This book contains an annotated selection of his most important contributions to physics literature and essays on his contemporaries - Mandelstam, Einstein, Landau and Bohr as well as his contributions to the Pugwash conferences. About a third of the selections originally appeared in Russian and are now available to Western readers. This volume includes a preface by Sir Rudolf Peierls, a biography compiled by Tamm's former students, V.Ya. Frenkel and B.M. Bolotovskii, and a complete bibliography. This monograph on quantum theory, science history, particles and fields and the Cherenkov effect is intended for students, researchers, mathematicians and natural scientists in general.

Focusing on the mathematics that lies at the intersection of probability theory, statistical physics, combinatorics and computer science, this volume collects together lecture notes on recent developments in the area. The common ground of these subjects is perhaps best described by the three terms in the title: Random Walks, Random Fields and Disordered Systems. The specific topics covered include a study of Branching Brownian Motion from the perspective of disordered (spin-glass) systems, a detailed analysis of weakly self-avoiding random walks in four spatial dimensions via methods of field theory and the renormalization group, a study of phase transitions in disordered discrete structures using a rigorous version of the cavity method, a survey of recent work on interacting polymers in the ballisticity regime and, finally, a treatise on two-dimensional loop-soup models and their connection to conformally invariant systems and the Gaussian Free Field. The notes are aimed at early graduate students with a mod...

This short book provides a unified view of the history and theory of random sets and fuzzy random variables, with special emphasis on their use for representing higher-order non-statistical uncertainty about statistical experiments. The authors lay bare the existence of two streams of work using the same mathematical ground, but differing in their use of sets, according to whether they represent objects of interest naturally taking the form of sets, or imprecise knowledge about such objects. Random (fuzzy) sets can be used in many fields ranging from mathematical morphology, economics, artificial intelligence, and information processing to statistics per se, especially in areas where the outcomes of random experiments cannot be observed with full precision. This book also emphasizes the link between random sets and fuzzy sets with some techniques related to the theory of imprecise probabilities. This small book is intended for graduate and doctoral students in mathematics or engineering, but also provides an i...

We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it, and the strength of a vertex as the sum of weights of the multiple-edges attached to it. The network evolves according to a vertex-strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also acquires an exponential edge weight distribution; that is, a scale-free distribution and an exponential distribution coexist.
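The conservation constraints of such a non-growth model can be sketched as strength-preferential rewiring on a fixed edge set (illustrative parameters and unit edge weights; a simplified stand-in, not the authors' multiple-edge implementation):

```python
import random

rng = random.Random(1)
N, E = 20, 60  # fixed vertex and single-edge counts: the non-growth constraint

# Start from a random multigraph: each single-edge joins two distinct vertices.
edges = [tuple(rng.sample(range(N), 2)) for _ in range(E)]

def strength(v, current_edges):
    # Vertex strength: number of single-edge endpoints at v (unit weights
    # assumed, so a multiple-edge of total weight k contributes k here).
    return sum(v in e for e in current_edges)

for _ in range(500):
    # Detach a uniformly chosen single-edge from one endpoint...
    i = rng.randrange(E)
    u = edges[i][0]
    # ...and reattach it with probability proportional to vertex strength
    # (+1 keeps zero-strength vertices reachable).
    weights = [strength(v, edges) + 1 for v in range(N)]
    w = u
    while w == u:
        w = rng.choices(range(N), weights=weights)[0]
    edges[i] = (u, w)
```

Throughout the evolution, both the number of vertices and the number of single-edges are conserved, while high-strength vertices accumulate further connections.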

Changes in gene expression may represent an important mode of human adaptation. However, to date, there are relatively few known examples in which selection has been shown to act directly on levels or patterns of gene expression. In order to test whether single nucleotide polymorphisms (SNPs) that affect gene expression in cis are frequently targets of positive natural selection in humans, we analyzed genome-wide SNP and expression data from cell lines associated with the International HapMap Project. Using a haplotype-based test for selection that was designed to detect incomplete selective sweeps, we found that SNPs showing signals of selection are more likely than random SNPs to be associated with gene expression levels in cis. This signal is significant in the Yoruba (which is the population that shows the strongest signals of selection overall) and shows a trend in the same direction in the other HapMap populations. Our results argue that selection on gene expression levels is an important type of human adaptation. Finally, our work provides an analytical framework for tackling a more general problem that will become increasingly important: namely, testing whether selection signals overlap significantly with SNPs that are associated with phenotypes of interest.

Comparing drug-induced driving impairments with the effects of benchmark blood alcohol concentrations (BACs) is an approved approach to determine the clinical relevance of findings for traffic safety. The present study aimed to collect alcohol calibration data to validate findings of clinical trials that were derived from a representative test course in a dynamic driving simulator. The driving performance of 24 healthy volunteers under placebo and with 0.05% and 0.08% BACs was measured in a double-blind, randomized, crossover design. Trained investigators assessed the subjects' driving performance and registered their driving errors. Various driving parameters recorded during the simulation were also analyzed. Generally, the participants performed worse on the test course under the influence of alcohol, as reflected in the investigators' assessment. Consistent with the relevant literature, lane-keeping performance parameters were sensitive to the investigated BACs. There were significant differences between the alcohol and placebo conditions in most of the parameters analyzed. However, the total number of errors was the only parameter discriminating significantly between all three BAC conditions. In conclusion, the data show that the present experimental setup is suitable for future psychopharmacological research. For each drug to be investigated, we recommend assessing a profile of various parameters that address different levels of driving. On the basis of this performance profile, the total number of driving errors is recommended as the primary endpoint. However, this overall endpoint should be complemented by a specifically sensitive parameter chosen according to the effect known to be induced by the tested drug. PMID:25689289

We therefore studied the anthropometry and the coverage of the Expanded Programme on Immunization (EPI) vaccines in randomly selected rural communities of Sokoto State. Design: Cross-sectional randomized study. Method: One hundred and fifteen mothers of children present at the randomly chosen sites on the ...

The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
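The unbiased filtering step described above can be sketched as follows (a simplified stand-in using integers for candidate mode combinations; not the published emsampler code):

```python
import random

def filter_step(candidates, target_size, rng):
    # Unbiased filtering: every candidate is retained with the same
    # probability, so the surviving subset is a random sample of the new
    # combinations rather than a systematically biased selection.
    p = min(1.0, target_size / max(1, len(candidates)))
    return [c for c in candidates if rng.random() < p]

rng = random.Random(42)
candidates = list(range(10_000))  # stand-ins for new combinations of modes
sample = filter_step(candidates, 500, rng)
```

Because each candidate faces the same retention probability, the expected sample size equals the target while the selection stays uniform over candidates, which is what keeps the growth of the working set bounded across iterations.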

Recent experiments have presented listeners with complex tonal stimuli consisting of components with values (i.e., intensities or frequencies) randomly sampled from probability distributions [e.g., R. A. Lutfi, J. Acoust. Soc. Am. 86, 934--944 (1989)]. In the present experiment, brief tones were presented at intervals corresponding to the intensity of a random process. Specifically, the intervals between tones were randomly selected from exponential probability functions. Listeners were asked to decide whether tones presented during a defined observation interval represented a "noise" process alone or the "noise" with a "signal" process added to it. The number of tones occurring in any observation interval is a Poisson variable; receiver operating characteristics (ROCs) arising from Poisson processes have been considered by Egan [Signal Detection Theory and ROC Analysis (Academic, New York, 1975)]. Several sets of noise and signal intensities and observation interval durations were selected which were expected to yield equivalent performance. Rating ROCs were generated based on subjects' responses in a single-interval, yes-no task. The performance levels achieved by listeners and the effects of intensity and duration are compared to those predicted for an ideal observer.
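The link between exponential inter-tone intervals and Poisson counts in an observation interval can be illustrated with a short simulation (hypothetical rate and window values, not the study's stimulus parameters):

```python
import random

def tones_in_window(rate, window, rng):
    # Tone onsets separated by exponential inter-arrival times form a Poisson
    # process, so the tone count in a window of fixed length is a Poisson
    # variable with mean rate * window.
    t = rng.expovariate(rate)
    n = 0
    while t <= window:
        n += 1
        t += rng.expovariate(rate)
    return n

rng = random.Random(0)
counts = [tones_in_window(rate=5.0, window=2.0, rng=rng) for _ in range(20_000)]
mean_count = sum(counts) / len(counts)  # close to rate * window = 10
```

A "signal" condition in the experiment corresponds to adding a second Poisson process, which simply raises the rate; the detection problem is then discriminating two Poisson means.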

Examines 11 randomly selected English handbooks to determine the currently published guidelines for avoiding sexist language. Reveals that major differences exist among these handbooks when dealing with equality in language and suggests that feminist teachers exert leadership when selecting handbooks. Ranks five handbooks from offensive to…

Presents results of the degradation kinetics project and describes a general approach for calculating and selecting representative half-life values from soil and aquatic transformation studies for risk assessment and exposure modeling purposes.

The quality of randomization of Chinese randomized trials on herbal medicines for hepatitis B was assessed. The search strategy and inclusion criteria were based on the published protocol. One hundred and seventy-six randomized clinical trials (RCTs) involving 20,452 patients with chronic hepatitis B virus (HBV) infection were identified that tested Chinese medicinal herbs. They were published in 49 Chinese journals. Only 10% (18/176) of the studies reported the method by which they randomized patients. Only two reported allocation concealment and were considered adequate. Twenty percent (30...

Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...

A ground water sampling tool, the HydroPunch trademark, was tested at the Department of Energy's Savannah River Site in South Carolina to determine if representative ground water samples could be obtained without installing monitoring wells. Chemical analyses of ground water samples collected with the HydroPunch trademark from various depths within a borehole were compared with chemical analyses of ground water from nearby monitoring wells. The site selected for the test was in the vicinity of a large coal storage pile and a coal pile runoff basin that was constructed to collect the runoff from the coal storage pile. Existing monitoring wells in the area indicate the presence of a ground water contaminant plume that: (1) contains elevated concentrations of trace metals; (2) has an extremely low pH; and (3) contains elevated concentrations of major cations and anions. Ground water samples collected with the HydroPunch trademark provide an excellent estimate of ground water quality at discrete depths. Ground water chemical data collected from various depths using the HydroPunch trademark can be averaged to simulate what a screen zone in a monitoring well would sample. The averaged depth-discrete data compared favorably with the data obtained from the nearby monitoring wells.

In this paper we continue our earlier studies (Diao et al 2011 J. Phys. A: Math. Theor. 44 405202, Diao et al J. Phys. A: Math. Theor. 45 275203) on the generation methods of random equilateral polygons confined in a sphere. The first half of this paper is concerned with the generation of confined equilateral random walks. We show that if the selection of a vertex is uniform subject to the position of its previous vertex and the confining condition, then the distributions of the vertices are not uniform, although there exists a distribution such that if the initial vertex is selected following this distribution, then all vertices of the random walk follow this same distribution. Thus in order to generate a confined equilateral random walk, the selection of a vertex cannot be uniform subject to the position of its previous vertex and the confining condition. We provide a simple algorithm capable of generating confined equilateral random walks whose vertex distribution is almost uniform in the confinement sphere. In the second half of this paper we show that any process generating confined equilateral random walks can be turned into a process generating confined equilateral random polygons with the property that the vertex distribution of the polygons approaches the vertex distribution of the walks as the polygons get longer and longer. In our earlier studies, the starting point of the confined polygon is fixed at the center of the sphere. The new approach here allows us to move the starting point of the confined polygon off the center of the sphere. (paper)
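The vertex-selection rule discussed in the first half of the paper, uniform subject to the previous vertex and the confining condition, can be sketched by rejection sampling (parameters are illustrative; note the paper's point that this naive scheme does not yield a uniform vertex distribution):

```python
import math
import random

def confined_equilateral_walk(n_steps, radius, rng):
    # Unit-step walk started at the sphere's center; each step direction is
    # uniform on the unit sphere, resampled until the new vertex stays inside
    # the confinement sphere ("uniform subject to the position of the
    # previous vertex and the confining condition").
    walk = [(0.0, 0.0, 0.0)]
    for _ in range(n_steps):
        x, y, z = walk[-1]
        while True:
            # Uniform direction via normalized Gaussian coordinates.
            dx, dy, dz = (rng.gauss(0.0, 1.0) for _ in range(3))
            norm = math.sqrt(dx * dx + dy * dy + dz * dz)
            nxt = (x + dx / norm, y + dy / norm, z + dz / norm)
            if math.sqrt(nxt[0] ** 2 + nxt[1] ** 2 + nxt[2] ** 2) <= radius:
                break
        walk.append(nxt)
    return walk

rng = random.Random(7)
walk = confined_equilateral_walk(100, radius=3.0, rng=rng)
```

Every step has unit length and every vertex stays inside the confinement sphere; the paper's contribution is precisely that a different, non-uniform per-step distribution is needed to make the resulting vertex distribution (almost) uniform.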

A simple analysis of gamma spectra, selected to represent the performance of different detection systems or, for a single system, different operation modes or stages of development, allows one to compare the relative average sensitivities of the represented systems as operated in the selected cases. The obtained SP figure of merit takes into account and correlates the main parameters commonly used to estimate the performance of a system. An example of application is given.

The purpose of this study was to investigate the cross-sectional association of employment contract, company size, and occupation with psychological distress using a nationally representative sample of the Japanese population. From June through July 2007, a total of 9,461 male and 7,717 female employees living in the community were randomly selected and surveyed using a self-administered questionnaire and an interview including questions about occupational class variables, psychological distress (K6 scale), treatment for mental disorders, and other covariates. Among males, part-time workers had a significantly higher prevalence of psychological distress than permanent workers. Among females, temporary/contract workers had a significantly higher prevalence of psychological distress than permanent workers. Among males, those who worked at companies with 300-999 employees had a significantly higher prevalence of psychological distress than those who worked at the smallest companies (with 1-29 employees). Company size was not significantly associated with psychological distress among females. Additionally, occupation was not significantly associated with psychological distress among males or females. Similar patterns were observed when the analyses were conducted for those who had psychological distress and/or received treatment for mental disorders. Working as part-time workers, for males, and as temporary/contract workers, for females, may be associated with poor mental health in Japan. No clear gradient in mental health along company size or occupation was observed in Japan.

BACKGROUND: With the increasing prevalence of chronic noncommunicable diseases, patient education is becoming important to strengthen disease prevention and control. We aimed to systematically determine the extent to which registered, ongoing randomized controlled trials (RCTs) evaluating an educational intervention focus on patient-important outcomes (i.e., outcomes measuring patient health status and quality of life). METHODS: On May 6, 2009, we searched for all ongoing RCTs registered in the World Health Organization International Clinical Trials Registry Platform. We used a standardized data extraction form to collect data and determined whether the outcomes assessed were (1) patient-important outcomes, such as clinical events, functional status, pain, or quality of life, or (2) surrogate outcomes, such as biological outcomes, treatment adherence, or patient knowledge. PRINCIPAL FINDINGS: We selected 268 of the 642 potentially eligible studies and assessed a random sample of 150. Patient-important outcomes represented 54% (178 of 333) of all primary outcomes and 46% (286 of 623) of all secondary outcomes. Overall, 69% of trials (104 of 150) used at least one patient-important outcome as a primary outcome and 66% (99 of 150) as a secondary outcome. Finally, for 31% of trials (46 of 150), primary outcomes were only surrogate outcomes. The results varied by medical area. In neuropsychiatric disorders, patient-important outcomes represented 84% (51 of 61) of primary outcomes, compared with 54% (32 of 59) in malignant neoplasm and 18% (4 of 22) in diabetes mellitus trials. In addition, only 35% assessed the long-term impact of interventions (i.e., >6 months). CONCLUSIONS: There is a need to improve the relevance of outcomes and to assess the long-term impact of educational interventions in RCTs.

Natural selection is crucial for the adaptation of populations to their environments. Here, we present the first global study of natural selection in the Hominidae (humans and great apes) based on genome-wide information from population samples representing all extant species (including most subspecies). Combining several neutrality tests we create a multi-species map of signatures of natural selection covering all major types of natural selection. We find that the estimated efficiency of bot...

At the nanoscale, measurements can move from mass-scale analogue calibration to counters of discrete units. The shift redefines the possible levels of control that can be achieved in a system if adequate selectivity can be imposed. For example, as ionic substances pass through nanoscale pores, the quantity of ions is low enough that the pore can contain either negative or positive ions. Yet precise control over this selectivity still raises difficulties. In this issue researchers address the challenge of how to regulate the ionic selectivity of negative and positive charges with the use of an external charge. The approach may be useful for controlling the behaviour, properties and chemical composition of liquids and has possible technical applications for nanofluidic field effect transistors [1]. Selectivity is a critical advantage in the administration of drugs. Nanoparticles functionalized with targeting moieties can allow delivery of anti-cancer drugs to tumour cells, whilst avoiding healthy cells and hence reducing some of the debilitating side effects of cancer treatments [2]. Researchers in Belarus and the US developed a new theranostic approach (combining therapy and diagnosis) to support the evident benefits of cellular selectivity that can be achieved when nanoparticles are applied in medicine [3]. Their process uses nanobubbles of photothermal vapour, referred to as plasmonic nanobubbles, generated by plasmonic excitations in gold nanoparticles conjugated to diagnosis-specific antibodies. The intracellular plasmonic nanobubbles are controlled by laser fluence so that the response can be tuned in individual living cells. Lower fluence allows non-invasive, highly sensitive imaging for diagnosis, and higher fluence can disrupt the cellular membrane for treatments. The selective response of carbon nanotubes to different gases has led to their use in various types of sensors, as summarized in a review by researchers at the University of

In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time; e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits. Interestingly, randomization has also been proposed to improve the security of learning algorithms against evasion attacks, as it results in hiding information about the classifier from the attacker. Recent work has proposed game-theoretical formulations to learn secure classifiers, by simulating different evasion attacks and modifying the classification function accordingly. However, both the classification function and the simulated data manipulations have been modeled in a deterministic manner, without accounting for any form of randomization. In this paper, we overcome this limitation by proposing a randomized prediction game, namely, a noncooperative game-theoretic formulation in which the classifier and the attacker make randomized strategy selections according to some probability distribution defined over the respective strategy set. We show that our approach allows one to improve the tradeoff between attack detection and false alarms with respect to state-of-the-art secure classifiers, even against attacks that differ from those hypothesized during design, on application examples including handwritten digit recognition, spam, and malware detection.

A simulation based method for the numerical solution of PDE with random coefficients is presented. By the Feynman-Kac formula, the solution can be represented as conditional expectation of a functional of a corresponding stochastic differential equation driven by independent noise. A time discretization of the SDE for a set of points in the domain and a subsequent Monte Carlo regression lead to an approximation of the global solution of the random PDE. We provide an initial error and complexity analysis of the proposed method along with numerical examples illustrating its behaviour.
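The Feynman-Kac representation underlying the method can be illustrated in its simplest form (a plain Monte Carlo average for a constant-coefficient heat-type equation, without the time discretization or regression steps of the full method; all parameters are hypothetical):

```python
import math
import random

def feynman_kac_mc(x, horizon, sigma, terminal, n_paths, rng):
    # For u_t + 0.5 * sigma^2 * u_xx = 0 with u(T, .) = terminal, the
    # Feynman-Kac formula gives u(0, x) = E[terminal(x + sigma * W_T)],
    # estimated here by a Monte Carlo average over Brownian endpoints.
    total = 0.0
    for _ in range(n_paths):
        w_T = rng.gauss(0.0, math.sqrt(horizon))
        total += terminal(x + sigma * w_T)
    return total / n_paths

rng = random.Random(3)
# terminal(y) = y^2 admits the closed form u(0, x) = x^2 + sigma^2 * T,
# i.e. 1.25 for x = 1, sigma = 0.5, T = 1, which the estimate should approach.
estimate = feynman_kac_mc(x=1.0, horizon=1.0, sigma=0.5,
                          terminal=lambda y: y * y, n_paths=200_000, rng=rng)
```

The full method replaces the exact Brownian endpoint with a time-discretized SDE whose coefficients carry the randomness of the PDE, and replaces the pointwise average with a regression over many spatial points to recover a global solution.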

is not included in the Danish list of narcotic drugs. Conclusion: The present study gives valuable data on drugged and drunk driving. Driving under the influence constitutes a major risk in traffic, and the relatively high frequency of young men driving while taking drugs is worrying. Reference: K.W. Simonsen, A. … stratified by time, season, and road type. The oral fluid samples were screened for 29 illegal and legal psychoactive substances and metabolites as well as ethanol. Results: Fourteen (0.5%) drivers were positive for ethanol (alone or in combination with drugs) at concentrations above 0.53 g/l, which…

…the most frequent illicit drugs detected above the limit of quantitation (LOQ), while codeine, tramadol, zopiclone, and benzodiazepines were the most frequent legal drugs. Middle-aged men (median age 47.5 years) dominated the drunk-driving group, while the drivers positive for illegal drugs consisted… is not included in the Danish list of narcotic drugs. Conclusion: The present study gives valuable data on drugged and drunk driving. Driving under the influence constitutes a major risk in traffic, and the relatively high frequency of young men driving while taking drugs is worrying. Reference: K.W. Simonsen, A. … Introduction: Driving under the influence of alcohol and drugs is a global problem. In Denmark, as in other countries, there is an increasing focus on impaired driving. Little is known about the occurrence of psychoactive drugs in general traffic. Therefore the European Commission…

Considering cell cycle dependent cytotoxicity, intercalation of chemotherapy and an epidermal growth factor receptor (EGFR) tyrosine kinase inhibitor (TKI) may be a treatment option in non-small cell lung cancer (NSCLC). This randomized phase 2 study compared the efficacy of paclitaxel and carboplatin (PC) intercalated with gefitinib (G) versus PC alone in a selected, chemotherapy-naïve population of advanced NSCLC patients with a history of smoking or wild-type EGFR. Eligible patients were chemotherapy-naïve advanced NSCLC patients with Eastern Cooperative Oncology Group performance status of 0-2. Non-smoking patients with adenocarcinoma or patients with an activating EGFR mutation were excluded because they could benefit from gefitinib alone. Eligible patients were randomized to one of the following treatment arms: PCG, P 175 mg/m2 and C AUC 5 administered intravenously on day 1, intercalated with G 250 mg orally on days 2 through 15 every 3 weeks for four cycles, followed by G 250 mg orally until progressive disease; or PC, the same dosing schedule for four cycles only. The primary endpoint was the objective response rate (ORR), and the secondary endpoints included progression-free survival (PFS), overall survival (OS), and toxicity profile. A total of 90 patients participated in the study. The ORRs were 41.9% (95% confidence interval (CI) 27.0–57.9%) for the PCG arm and 39.5% (95% CI 25.0–55.6%) for the PC arm (P = 0.826). No differences in PFS (4.1 vs. 4.1 months, P = 0.781) or OS (9.3 vs. 10.5 months, P = 0.827) were observed between the PCG and PC arms. Safety analyses showed a similar incidence of drug-related grade 3/4 toxicity. Rash and pruritus were more frequent in the PCG arm than in the PC arm. PCG did not improve ORR, PFS, or OS compared to PC chemotherapy alone for NSCLC in a clinically selected population excluding non-smoking adenocarcinoma or mutated EGFR. The study is registered with ClinicalTrials.gov (NCT01196234). Registration date: 08/09/2010.

characteristics was carried out using segmental content analysis. K. Muller's sign-analysis technique was used for the nonverbal characteristics. Statistical analysis of the data was based on Fisher's angular transformation (φ), the correlation coefficient φ, and cluster analysis. Results and scientific novelty: Three multimodal clusters were identified, each comprising verbal and non-verbal characteristics of the oral narrative of happiness together with individual characteristics (gender, age, and the personality trait extraversion–introversion). It is shown that these clusters are invariant and are manifested in the representation of the state of happiness regardless of culture. Variability in the oral narrative of happiness is manifested in differences in the frequency of appeal to the categories represented in the identified clusters, as well as in minor age-related changes in cluster structure. In particular, the Russian students more often appealed to categories describing the values of family and family well-being, and used non-verbal units showing an effort to restrain their emotional state. The American students were more inclined to choose verbal units connected with achievements in activity and non-verbal categories (peripheral gestures) indicating the importance of relations with other people. Verbal and non-verbal units reflecting interaction with a social context were equally represented in the samples of the Russian and American respondents. Practical significance: The materials of the present publication can be taken into account when choosing methods of work within positive psychotherapy, as well as in developing training programs aimed at raising levels of subjective well-being, happiness, and satisfaction with life. The obtained results are the basis for the development of practical recommendations for improving the effectiveness of

PURPOSE: Patients with gliomas often experience cognitive deficits, including problems with attention and memory. This randomized, controlled trial evaluated the effects of a multifaceted cognitive rehabilitation program (CRP) on cognitive functioning and selected quality-of-life domains in patients

A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.

Since reflection or transmission of a quantum particle at a beamsplitter is an inherently random quantum process, a device built on this principle does not suffer from the drawbacks of either pseudo-random computer generators or classical noise sources. Nevertheless, a number of physical conditions necessary for high-quality random number generation must be satisfied; fortunately, in a quantum-optics realization they can be well controlled. We present a simple random number generator based on the division of weak light pulses at a beamsplitter. The randomness of the generated bit stream is supported by passing the data through a series of 15 statistical tests. The device generates at a rate of 109.7 kbit/s.
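As a hedged illustration (not the authors' implementation), the scheme can be mimicked classically: each weak pulse is assumed to yield a detection at the transmitted (1) or reflected (0) output with probability 1/2, and the resulting stream is screened with a frequency (monobit) test of the kind found in standard statistical batteries. The function names, the seed, and the ideal 50:50 split are all illustrative assumptions:

```python
import math
import random

def beamsplitter_bits(n, p_transmit=0.5, seed=1234):
    """Idealized simulation: each weak pulse yields one detection,
    at the transmitted (1) or reflected (0) beamsplitter output."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_transmit else 0 for _ in range(n)]

def monobit_test(bits):
    """Frequency (monobit) test: p-value for the 0/1 balance of the
    stream; a strongly biased stream gives a p-value near 0."""
    s = sum(2 * b - 1 for b in bits)          # map bits to +/-1 and sum
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

bits = beamsplitter_bits(100_000)
print(f"monobit p-value: {monobit_test(bits):.3f}")
```

A real device would replace `beamsplitter_bits` with detector clicks; the monobit test is only the first of the 15-test battery the abstract mentions.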

Vermont Center for Geographic Information — (Link to Metadata) This coverage represents the results of an analysis of landscape diversity in Vermont. Polygons in the dataset represent as much as possible (in a...

Obtaining an adequate, representative sample of ecological communities to make taxon richness (TR) or compositional comparisons among sites is a continuing challenge. Sample representativeness literally means the similarity in species composition and relative abundance between a ...
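One standard way to make that similarity concrete is an abundance-based index; the sketch below computes a Bray-Curtis similarity (1 minus the Bray-Curtis dissimilarity) on relative abundances. The taxa and counts are hypothetical, introduced only for illustration:

```python
def relative_abundance(sample):
    """Convert {taxon: count} into proportions that sum to 1."""
    total = sum(sample.values())
    return {taxon: count / total for taxon, count in sample.items()}

def bray_curtis_similarity(a, b):
    """Similarity in composition and relative abundance between two
    samples: the shared proportion, i.e. 1 - Bray-Curtis dissimilarity."""
    pa, pb = relative_abundance(a), relative_abundance(b)
    taxa = set(pa) | set(pb)
    return sum(min(pa.get(t, 0.0), pb.get(t, 0.0)) for t in taxa)

site      = {"mayfly": 40, "caddisfly": 25, "midge": 35}
subsample = {"mayfly": 20, "caddisfly": 10, "midge": 20}
print(round(bray_curtis_similarity(site, subsample), 3))  # -> 0.95
```

A sample would be judged representative when this similarity to the full community approaches 1.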

Net-centric networking environments are often faced with limited resources and must use bandwidth as efficiently as possible. In networking environments that span wide areas, data transmission has to be efficient, without redundant or extraneous metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data-encryption methods. Randomizing the data's byte stream adds an extra layer to existing data-protection methods, making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in the data using the unbiased, memory-efficient, in-place Fisher-Yates shuffle. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, since the byte stream must be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software can distribute the data to N nodes in an environment. Each piece of the randomized, distributed data is a separate entity, unreadable in its own right, but when all N pieces are combined the data can be reconstructed into one. Reconstruction requires possession of the key used for randomizing the bytes, which leads to generation of the same cryptographically secure random sequence of numbers used to randomize the data. A cornerstone capability of this software is its ability to generate the same cryptographically secure sequence on different machines and at different times, allowing it to be used more heavily in net-centric environments where data-transfer bandwidth is limited.
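A minimal sketch of the core mechanism: a seed-driven in-place Fisher-Yates shuffle whose swap sequence can be replayed from the seed and undone to reconstruct the data. One loud assumption: Python's `random.Random` is used here purely for reproducibility and is *not* cryptographically secure, whereas the described software derives its sequence from a cryptographically secure seed; all names are illustrative:

```python
import random

def randomize(data: bytes, seed: int) -> bytes:
    """In-place Fisher-Yates shuffle of the byte stream, driven by a
    seeded PRNG so the permutation is reproducible from the seed alone."""
    buf = bytearray(data)
    rng = random.Random(seed)
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randint(0, i)                  # unbiased index in [0, i]
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def reconstruct(shuffled: bytes, seed: int) -> bytes:
    """Replay the swap sequence from the same seed, then undo the swaps
    in reverse order to recover the original byte stream."""
    rng = random.Random(seed)
    swaps = [(i, rng.randint(0, i)) for i in range(len(shuffled) - 1, 0, -1)]
    buf = bytearray(shuffled)
    for i, j in reversed(swaps):
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

payload = b"sensitive payload"
scrambled = randomize(payload, seed=0xC0FFEE)
assert reconstruct(scrambled, seed=0xC0FFEE) == payload
```

Without the seed, only the multiset of byte values survives; the structural information needed to read the file is gone.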

Congress of the U.S., Washington, DC. House Committee on Education and Labor.

This report is on the hearings before the House Select Subcommittee on Education, conducted on October 7 and 8, 1974, to consider the creation of the American Film Institute as an independent agency. Witnesses testifying before the subcommittee included: Maya Angelou, Ellen Burstyn, John Culkin, Ed Emshwiller, John Hancock, Nancy Hanks, Charlton…

'Philosophy is written in this great book which is continually open before our eyes - I mean the universe...' Galileo's astronomical discoveries changed the way we look at the world, and our place in the universe. Threatened by the Inquisition for daring to contradict the literal truth of the Bible, Galileo ignited a scientific revolution when he asserted that the Earth moves. This generous selection from his writings contains all the essential texts for a reader to appreciate his lasting significance. Mark Davie's new translation renders Galileo's vigorous Italian prose into clear modern English, while William R. Shea's version of the Latin Sidereal Message makes accessible the book that created a sensation in 1610 with its account of Galileo's observations using the newly invented telescope. All Galileo's contributions to the debate on science and religion are included, as well as key documents from his trial before the Inquisition in 1633. A lively introduction and clear notes give an overview of Galileo's...

Following our detailed review of the granulation reports and additional conversations with process and development personnel, we have reached a consensus position on granulator selection. At this time, we recommend going forward with implementation of the tumbling-granulator approach (GEMCO), based on our assessment of the tested granulation techniques against the established criteria. The basis for this selection is summarized in the following sections, followed by our recommendations for proceeding with implementation of the tumbling-granulation approach. All five granulation technologies produced granulated products that can be made into acceptable sintered pucks. A possible exception is the product from the fluidized-bed granulator: in this series of tests, this material was more difficult to press into uniform pucks without subsequent cracking during the sintering cycle. This problem may be an artifact of the conditions of the particular granulation demonstration run involved, but earlier results have also been mixed. All granulators made acceptable granulated feed from the standpoint of transfer and press feeding, though the roller-compactor and fluidized-bed products were dustier than the rest. There was also differentiation among the granulators in the operational areas of (1) potential for process upset, (2) plant implementation and operational complexity, and (3) maintenance concerns. These considerations are discussed further in the next section. Note that concerns also exist regarding extension of the granulation processes to powders containing actinides. Only the method involving tumbling and moisture addition has been tested with uranium, and in that instance significant differences were found in the granulation behavior of the powders.

In studies of population variability, particular attention must be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of chemical content in the initial sample, using a whitebark pine population as an example. The statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) in an initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95 %. Determining the lower limit of representative sample size that guarantees satisfactory reliability of generalization proved very important for achieving cost efficiency of the research. [Projekat Ministarstva nauke Republike Srbije, br. OI-173011, br. TR-37002 i br. III-43007]
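The kind of calculation described here can be sketched, as a hedged approximation, with the common normal-approximation rule n = (z·s / (E·x̄))², where s and x̄ are the standard deviation and mean of a characteristic in the initial sample and E is the tolerated relative error. The terpene values below are hypothetical, not the study's data:

```python
import math
from statistics import mean, stdev

def required_sample_size(values, rel_error=0.10, z=1.96):
    """Smallest n whose sample mean estimates the population mean of a
    characteristic within rel_error (relative to the mean) at ~95 %
    confidence: n = (z * s / (rel_error * mean))^2, rounded up."""
    m, s = mean(values), stdev(values)
    return math.ceil((z * s / (rel_error * m)) ** 2)

# hypothetical terpene content (%) measured on the 10 initial trees
alpha_pinene = [28.1, 33.5, 25.9, 40.2, 31.0, 27.4, 36.8, 30.2, 24.7, 34.9]
print(required_sample_size(alpha_pinene))  # -> 10
```

In a study like this one, the rule would be applied to each of the 19 characteristics, and the largest resulting n would fix the sample size.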

22 CFR 23.4 (Foreign Relations, 2010-04-01): Representative value in exchange. Representative value in exchange for the collection of a fee means foreign currency equivalent to the prescribed United States dollar fee at the current rate of exchange at the time…

Evidence from research findings seems to corroborate the assertion that students' persistent difficulties in comprehending English-language texts may stem from the limited visual representativeness of most of the recommended textbooks in use in Junior Secondary Schools in Ebonyi State, Nigeria. Textbooks equip teachers and learners with broad, factual information about what learners should learn, helping them perform well in the learning environment and in external examinations. This study investigated the types of visual representativeness in Junior Secondary School English-language textbooks in Ebonyi State vis-à-vis the curriculum content. An exploratory-interpretive design was used. A total of 9 textbooks were selected out of the 18 English-language textbooks used in Ebonyi State. The visual representativeness in the textbooks was analyzed using content and qualitative analysis. Simple percentages and frequency counts were used in the interpretation, taking the frequency of visuals in each textbook against the curriculum's content specification of visuals for each topic. The findings showed that the English-language textbooks used in junior secondary schools in Ebonyi State adopted decorational, representational, and transformational visuals. It was also found that the visual representativeness in some of the textbooks was inadequate. For instance, Intensive English (books I, II, and III) had 14.2%, 8%, and 19.2% under-represented visuals respectively, and 85.7%, 92%, and 80.8% adequately represented visuals respectively; in New Concept English (books I, II, and III) we had 14.3%, 23.8%, and 29.4% under-represented visuals respectively, and 85.7%, 77.1%, and 70.5% adequately represented visuals respectively; and Junior

It was hypothesized that color selection consists of two stages. The first stage represents a feature-specific selection in neural populations specialized in processing color. The second stage constitutes a feature-non-specific selection, related to executive attentional processes and/or motor

This survey was conducted in the Al-Kalakla Fishery (KF) and the Jabel Awlia Dam Fishery (JADF) on the White Nile River, Khartoum State, to identify the selective and non-selective fishing gear. The results showed that the selective fishing gear was represented by gill-nets and seine nets (beach nets) in both fisheries, with clear variation in ...

A random access memory (RAM) uses n bits to randomly address N=2^n distinct memory cells. A quantum random access memory (qRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust qRAM algorithm, as it in general requires entanglement among exponentially l...

As the numbers and complexity of nuclear facilities increase, limitations on resources for international safeguards may restrict attainment of safeguards goals. One option for improving the efficiency of limited resources is to expand the current inspection regime to include random allocation of the amount and frequency of inspection effort to material strata or to facilities. This paper identifies the changes in safeguards policy, administrative procedures, and operational procedures that would be necessary to accommodate randomized inspections and identifies those situations where randomization can improve inspection efficiency and those situations where the current nonrandom inspections should be maintained. 9 refs., 1 tab

The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators.
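One concrete criterion of the sort such a model supports is a goodness-of-fit comparison against the uniform distribution. The sketch below, offered as an illustrative assumption rather than the report's own criteria, applies Pearson's chi-square statistic to the byte outputs of a sound and a deliberately biased generator:

```python
import random
from collections import Counter

def chi_square_uniform(samples, k=256):
    """Pearson chi-square statistic of samples against a uniform
    distribution over k categories (approx. k - 1 degrees of freedom)."""
    expected = len(samples) / k
    counts = Counter(samples)
    return sum((counts.get(c, 0) - expected) ** 2 / expected for c in range(k))

rng = random.Random(7)
good   = [rng.randrange(256) for _ in range(100_000)]
biased = [rng.randrange(200) for _ in range(100_000)]  # never emits 200..255

print(chi_square_uniform(good))    # typically near 255, the d.o.f.
print(chi_square_uniform(biased))  # huge: the missing byte values are detected
```

Ranking candidate generators by how consistently such statistics stay within their expected distribution is one way to operationalize "comparison and evaluation".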

Objective: To describe the involvement of patients or their representatives in quality management (QM) functions and to assess associations between levels of involvement and the implementation of patient-centred care strategies. Design: A cross-sectional, multilevel study that surveyed quality managers and department heads, combined with data from an organizational audit. Setting: Randomly selected hospitals (n = 74) from seven European countries (the Czech Republic, France, Germany, Poland, Portugal, Spain and Turkey). Participants: Hospital quality managers (n = 74) and heads of clinical departments (n = 262) in charge of four patient pathways (acute myocardial infarction, stroke, hip fracture and deliveries) participated in the data collection between May 2011 and February 2012. Main Outcome Measures: Four items reflecting essential patient-centred care strategies, based on an on-site hospital visit: (1) a formal survey seeking the views of patients and carers, (2) written policies on patients' rights, (3) patient information literature including guidelines and (4) fact sheets for post-discharge care. The main predictors were patient involvement in QM at (i) the hospital level and (ii) the pathway level. Results: Current levels of involving patients and their representatives in QM functions in European hospitals are low at the hospital level (mean score 1.6 on a scale of 0 to 5, SD 0.7), and even lower at the departmental level (mean 0.6, SD 0.7). We did not detect associations between levels of involving patients and their representatives in QM functions and the implementation of patient-centred care strategies; however, the smallest hospitals were more likely to have implemented patient-centred care strategies. Conclusions: There is insufficient evidence that involving patients and their representatives in QM leads to establishing or implementing strategies and procedures that facilitate patient-centred care; however, lack of evidence should not be

This study focuses on high school students' views on STEM representatives and the impact these representatives have on the students' potential identities. It shows that the students' preferred STEM representatives resembled themselves in some aspects (primarily social and health aspects) and fit their perceptions of a typical person working in STEM in other aspects (knowledge-seeking, hard-working, etc.). At least two different… …it is important to introduce high school students to good STEM representatives to make possible the development of potential STEM identities. A potential identity within a specific subject area relies on at least a situation-bound relationship to the subject area or to the person representing it. Some representatives transmit information and are thereby definers, whereas other representatives serve as personal examples and are thereby models.

Cal Elgot was a very serious and thoughtful researcher, who with great determination attempted to find basic explanations for certain mathematical phenomena, as the selection of papers in this volume well illustrates. His approach was, for the most part, rather finitist and constructivist, and he was inevitably drawn to studies of the process of computation. It seems to me that his early work on decision problems relating automata and logic, starting with his thesis under Roger Lyndon and continuing with joint work with Büchi, Wright, Copi, Rutledge, and Mezei, and then later with Rabin, set the stage for his attack on the theory of computation through the abstract treatment of the notion of a machine. This is also apparent in his joint work with A. Robinson reproduced here and in his joint papers with John Shepherdson. Of course, in the light of subsequent work on decision problems by Büchi, Rabin, Shelah, and many, many others, the subject has been placed on a completely different plane from what it was whe…