A matrix approach is described for assessing the variance of effects in incomplete diallel designs. The method is illustrated with simulated complete and incomplete diallels using different combinations of constraints, average degree of dominance and, for the incomplete diallel, number of hybrids. Our results showed that caution should be exercised when working with incomplete diallels under conditions of overdominance, because the rank of the genotypes changed when the excluded hybrid had parents with a low frequency of the favorable allele (i.e., the allele which increases expression of a character). The expression described in this paper is a rapid and safe approach for estimating the variances and covariances of the effects of contrasts in incomplete diallels. Copyright by the Brazilian Society of Genetics.
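As a rough illustration of the matrix approach (a sketch, not the paper's exact expression), the variance of an estimable contrast c'β in a fixed-effects diallel model y = Xβ + e with Var(e) = σ²I is σ²·c'(X'X)⁻c, where a generalized inverse handles the singular X'X that arises when hybrids are excluded; all names below are hypothetical.

```python
import numpy as np

def contrast_variance(X, c, sigma2):
    """Var(c'beta_hat) = sigma2 * c'(X'X)^- c for the model y = X beta + e,
    Var(e) = sigma2 * I.  The Moore-Penrose inverse serves as the
    generalized inverse, which is needed when missing hybrids make X'X
    singular; c'beta must be an estimable contrast."""
    XtX_pinv = np.linalg.pinv(X.T @ X)
    return float(sigma2 * c @ XtX_pinv @ c)
```

The covariance of two contrasts follows the same pattern, σ²·c₁'(X'X)⁻c₂, so comparisons between genotypes can be evaluated from a single generalized-inverse computation.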

Analysis of genetic main effects and genotype × environment (GE) interaction effects for the fruit shape traits fruit length and fruit circumference in the sponge gourd (Luffa cylindrica (L.) Roem., Violales, Cucurbitaceae) was conducted for diallel cross data from two planting seasons. A genetic model including fruit direct effects and maternal effects, together with unconditional and conditional variance analysis, was used to evaluate fruit development at four maturation stages. The variance analysis results indicated that fruit length and circumference were simultaneously affected by fruit direct genetic effects and maternal effects as well as GE interaction effects. Fruit direct genetic effects were relatively more important for both fruit shape traits during the whole developmental period. Gene activation was mostly due to additive effects at the first maturation stage, while dominance effects were mainly active during the other three stages. The fruit shape trait correlation coefficients due to different genetic effects and the phenotypic correlation coefficients varied significantly across the maturation stages. The results indicate that it is relatively easy to improve the two fruit shape traits for market purposes by carefully selecting the parents at the first maturation stage (3 days after flowering) instead of at fruit economic maturation.

Community detection helps us simplify the complex configuration of networks, but communities are reliable only if they are statistically significant. To detect statistically significant communities, a common approach is to resample the original network and analyze the communities. But resampling assumes independence between samples, while the components of a network are inherently dependent. Therefore, we must understand how breaking dependencies between resampled components affects the results of the significance analysis. Here we use scientific communication as a model system to analyze this effect. Our dataset includes citations among articles published in journals in the years 1984–2010. We compare parametric resampling of citations with non-parametric article resampling. While citation resampling breaks link dependencies, article resampling maintains such dependencies. We find that citation resampling underestimates the variance of link weights. Moreover, this underestimation explains most of the differences in the significance analysis of ranking and clustering. Therefore, when only link weights are available and article resampling is not an option, we suggest a simple parametric resampling scheme that generates link-weight variances close to the link-weight variances of article resampling. Nevertheless...
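A minimal sketch of the two resampling schemes being compared (the data structures and function names are assumptions, not the study's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def citation_resample(link_weights):
    """Parametric resampling of individual citations: redraw each
    journal-to-journal link weight independently, here Poisson around
    the observed weight.  This breaks dependencies between citations
    that come from the same article."""
    return {link: rng.poisson(w) for link, w in link_weights.items()}

def article_resample(articles):
    """Non-parametric bootstrap of whole articles: resample articles
    with replacement and rebuild link weights, keeping each article's
    citations together and thus preserving their dependence.
    `articles[i]` is the list of (citing_journal, cited_journal) links
    contributed by article i."""
    idx = rng.integers(0, len(articles), size=len(articles))
    weights = {}
    for i in idx:
        for link in articles[i]:
            weights[link] = weights.get(link, 0) + 1
    return weights
```

Comparing each link's weight variance across many replicates from the two schemes makes the underestimation visible: when citations from the same article are positively dependent, redrawing them independently yields systematically narrower weight distributions.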

This article examines repeated measures analysis of variance (RMANOVA). Within-subject repeated measurements are unavoidable in clinical and experimental investigation, and between- and within-subject variability should be treated separately. Only through proper use and meticulous interpretation can ethical and scientific integrity be guaranteed. The first half of this text describes the philosophical background of, and knowledge pertaining to, RMANOVA. The latter half discusses the sphericity assumption and associated issues. The final section presents summary-measure analysis, an approach often neglected by P value-dependent interpreters.
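For concreteness, a minimal RMANOVA in Python with statsmodels, on made-up long-format data (subjects, time points, and scores are illustrative only):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per subject x time-point measurement (long format).
data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["t1", "t2", "t3"] * 4,
    "score":   [5.1, 5.9, 6.3, 4.8, 5.5, 6.0, 5.4, 6.1, 6.8, 4.9, 5.7, 6.2],
})

# RMANOVA treats between-subject variability (subjects) separately from
# the within-subject effect of time, as the article recommends.
res = AnovaRM(data, depvar="score", subject="subject", within=["time"]).fit()
print(res.anova_table)
```

Before trusting the uncorrected F test, the sphericity assumption should be checked (e.g., with Mauchly's test) and a correction such as Greenhouse-Geisser applied when it fails.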

The healthcare industry is expanding swiftly, and total healthcare expenditures are expected to reach 18% of GDP by 2008. However, there are steep variances in quality of care and a high incidence of medical error. This has given impetus to efforts at progressively evolving the healthcare delivery system. The role of information technology (IT) is seen as central to cost reduction and quality improvement in healthcare delivery, with efficiency gains expected to realize approximately 20% in cost reductions. However, there are significant challenges associated with widespread adoption of IT by healthcare providers. Despite vendor and technology maturity, implementation rates for clinical patient record systems were only 35% in 2006. This study addresses the problem of low IT adoption in hospitals through a three-pronged analysis methodology. A Clockspeed analysis revealed a dichotomy between the maturity levels of technology and vendors on the one hand and delivery processes on the other, resulting in lower business value being realized from IT investments by healthcare providers. To address the concern of low business value realization from IT investments, a workflow and Social Network Analysis was conducted on the surgery patient flow process of a prominent Boston area hospital. It was demonstrated that productivity gains could be achieved through redesign of social networks at the workplace and by inculcating an enterprise-wide process orientation. This would generate greater business value from existing IT investments and thereby impact the rate of IT adoption. Furthermore...

The State of Rio Grande do Sul (RS) has an economy directly dependent on the agriculture and livestock sectors, which different studies report as dependent on the variability of certain climatological elements; for RS, water is regarded as the fundamental element. We conducted a study of monthly total rainfall over 60 years (1948-2007), collected from 31 meteorological stations (EMs) distributed geographically across the state, in the interest of helping the local society to predict possible shortages and of supporting the development of public policies for the use of water resources in urban and rural areas.
In order to obtain a model that approximates the behavior of the average rainfall for each of the six homogeneous regions defined in the literature (Marques, 2005), a harmonic analysis was performed on data previously adjusted to 30-day months. Before the analysis, the data were checked for normality, homogeneity of variance and stationarity. The data did not pass the normality and variance-homogeneity tests satisfactorily, so a transformation was applied, generating new data sets that met the conditions of homogeneity of variance and normality. The relative increase in the trend...
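A sketch of that workflow in Python; the specific transformation and the number of harmonics are assumptions for illustration, since the abstract does not state them:

```python
import numpy as np
from scipy import stats

def check_and_transform(x, alpha=0.05):
    """Pre-analysis step: test normality (Shapiro-Wilk) and, if it is
    rejected, apply a log transform (one common choice; the paper's
    actual transform is not specified here)."""
    x = np.asarray(x, dtype=float)
    _, p = stats.shapiro(x)
    if p < alpha:
        x = np.log(x + 1.0)  # +1 guards against zero-rainfall months
    return x

def fit_harmonics(x, n_harmonics=2, period=12):
    """Least-squares fit of the first harmonics of the annual cycle:
    x_t ~ a0 + sum_k [a_k cos(2 pi k t / 12) + b_k sin(2 pi k t / 12)],
    for monthly data already adjusted to 30-day months."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    cols = [np.ones_like(t, dtype=float)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t / period),
                 np.sin(2 * np.pi * k * t / period)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef, A @ coef  # coefficients and fitted seasonal curve
```

Homogeneity of variance between groups (e.g., stations within a homogeneous region) can be checked analogously with scipy.stats.levene before and after the transformation.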

The author has attempted to illustrate in a somewhat brief manner the application of certain statistical techniques to the analysis of core sampling data. The statistical areas of frequency distributions, analysis of variance, and, to a lesser degree, sampling provide the basis for the study.
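As a minimal illustration of the kind of analysis involved (the data are entirely hypothetical, not the author's core samples):

```python
from scipy import stats

# Hypothetical assay values from core samples taken at three locations.
loc_a = [2.1, 2.4, 1.9, 2.2]
loc_b = [2.8, 3.1, 2.6, 2.9]
loc_c = [2.0, 2.3, 2.1, 1.8]

# One-way analysis of variance: does the mean grade differ by location?
f_stat, p_value = stats.f_oneway(loc_a, loc_b, loc_c)
print(f_stat, p_value)
```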

MBA Professional Report; For six of the past eight years, naval aviation depot-level maintenance activities have encountered operating losses that were not anticipated in the Navy Working Capital Fund (NWCF) budgets. These unanticipated losses resulted in increases or surcharges to the stabilized rates as an offset. This project conducts a variance analysis to uncover possible causes of the unanticipated losses. The variance analysis between budgeted (projected) and actual financial results was performed on financial data collected on the E-2C aircraft program from Fleet Readiness Center Southwest (FRCSW), located in San Diego, California. The results of the variance analysis are interpreted and discussed in terms of labor sales quantity, mix, and rate variances; material sales variance; material expense variance; and labor, production overhead, and general and administrative rate/spending and quantity variances. The results of this project reveal the factors that created the greatest variance in FRCSW's net operating results. The variance analysis suggests that the factors having the greatest effect on the operating results were the material sales variances, material expense variances, and the variances due to the quantity of work. Additionally...
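The rate and quantity components the report interprets follow the standard flexible-budget decomposition; a minimal sketch with hypothetical figures (not FRCSW data):

```python
def rate_variance(actual_qty, actual_rate, budget_rate):
    """Rate variance: the effect of realizing a different rate than
    budgeted, evaluated at the actual quantity of work."""
    return (actual_rate - budget_rate) * actual_qty

def quantity_variance(actual_qty, budget_qty, budget_rate):
    """Quantity variance: the effect of performing more or less work
    than budgeted, evaluated at the budgeted rate."""
    return (actual_qty - budget_qty) * budget_rate

# Hypothetical labor-sales figures (hours, $/hour):
print(rate_variance(10_000, 95.0, 100.0))        # -50,000: rate below budget
print(quantity_variance(10_000, 12_000, 100.0))  # -200,000: volume shortfall
```

A mix variance further splits the quantity component across work types by comparing the actual mix of hours to the budgeted mix, both priced at budgeted rates.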

This work is an attempt to explain wide variations in drug licensing deal value by using regression modeling to describe and predict the relationship between oncology drug deal characteristics and their licensing deal values. Although the reasons for large variances in value between deals may not be immediately apparent, it was hypothesized that objective independent variables, such as a molecule's phase, its target market size, and the size of the acquiring/licensor company, could explain a significant portion of the variation in cancer drug values. This model, although not predictive when used independently, could be used to supplement discounted cash flow and market-based techniques to help assess the worth of incipient oncology therapies. Using regression analysis to study drug licensing deals is not novel: a study published by Loeffler et al. in 2002 attempted to assess the impact of multiple variables on deal value across a wide range of pharmaceutical indications. The independent variables in Loeffler's work explained less than 50% of the differences in deal values. It was expected that refining the model could lead to an improved regression R-squared coefficient and, potentially, a useful tool for managers. This current work is based on the 2002 Loeffler paper...
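A minimal sketch of such a regression in Python (the variables, units, and figures are hypothetical stand-ins, not the study's data set):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical oncology licensing deals.
deals = pd.DataFrame({
    "deal_value_musd":   [30, 120, 450, 80, 600, 25, 210, 340],
    "phase":             [1, 2, 3, 1, 3, 1, 2, 3],  # clinical phase at signing
    "market_size_busd":  [1.2, 3.5, 8.0, 0.9, 10.5, 0.7, 4.2, 6.8],
    "licensor_rev_busd": [0.3, 2.1, 15.0, 0.5, 22.0, 0.2, 5.5, 9.0],
})

# A log response is a common choice when deal values span orders of magnitude.
model = smf.ols(
    "np.log(deal_value_musd) ~ C(phase) + market_size_busd + licensor_rev_busd",
    data=deals,
).fit()
print(model.rsquared)  # share of deal-value variation explained
```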

A Bayesian method of moments/instrumental variable (BMOM/IV) approach is developed and applied in the analysis of the important mean and multiple regression models. Given a single set of data, it is shown how to obtain posterior and predictive moments without the use of likelihood functions, prior densities and Bayes' Theorem. The posterior and predictive moments, based on a few relatively weak assumptions, are then used to obtain maximum entropy densities for parameters, realized error terms and future values of variables. Posterior means for parameters and realized error terms are shown to be equal to certain well known estimates and rationalized in terms of quadratic loss functions. Conditional maxent posterior densities for means and regression coefficients given scale parameters are in the normal form, while scale parameters' maxent densities are in the exponential form. Marginal densities for individual regression coefficients, realized error terms and future values are in the Laplace or double-exponential form, with heavier tails than normal densities with the same means and variances. It is concluded that these results will be very useful, particularly when there is difficulty in formulating appropriate likelihood functions and prior densities needed in traditional maximum likelihood and Bayesian approaches.
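The heavier-tails claim can be made concrete with the standard moments of the Laplace density (a textbook result, not a derivation from the paper):

```latex
f(x \mid \mu, b) = \frac{1}{2b}\,\exp\!\left(-\frac{|x-\mu|}{b}\right),
\qquad \mathbb{E}[x] = \mu, \qquad \operatorname{Var}(x) = 2b^{2}.
```

Matching a normal with variance σ² requires b = σ/√2, and the resulting Laplace density has excess kurtosis 3 (versus 0 for the normal), i.e. strictly heavier tails at the same mean and variance.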

When n replicates are available from a factorial experiment, several methods exist for testing the validity of the assumption of equal variances within the "cells" or treatment combinations of the experiment. A new test is proposed for variances of random samples believed to be from normal populations. This new test combines the familiar graphical analysis of means for treatment effects (ANOME) with an analysis of the logarithms of the within-group variances to produce a graphical display of the test for variance homogeneity. To determine the robustness of the proposed test against departures from the underlying normality assumption, the new test is also evaluated for non-normal populations. Another analysis-of-means-type test was developed by Wludyka and Nelson, which utilizes Dirichlet distributions and specially constructed tables. The new test proposed herein has an advantage in that it relies solely on critical values developed for the analysis-of-means procedure. As an added simplification, only those critical values corresponding to infinite degrees of freedom are required. A ln ANOME analysis of Nelson's data (used to demonstrate the ln ANOVA procedure) yielded the same conclusion. Also, simulation results indicate that when the underlying assumption of normality is not feasible...
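A sketch of the proposed graphical test under two stated assumptions: the large-sample approximation Var[ln s²] ≈ 2/(n−1), and an analysis-of-means critical value h_crit taken from the infinite-degrees-of-freedom tables (function and variable names are hypothetical):

```python
import numpy as np

def anome_log_variance(groups, h_crit):
    """Compare ln(s_i^2) for each of k cells against ANOM-style decision
    limits around the grand mean of the log variances; points outside
    the limits signal variance heterogeneity."""
    k = len(groups)
    n = len(groups[0])  # equal replicates per cell assumed
    log_s2 = np.log([np.var(g, ddof=1) for g in groups])
    center = log_s2.mean()
    half_width = h_crit * np.sqrt(2.0 / (n - 1)) * np.sqrt((k - 1) / k)
    lo, hi = center - half_width, center + half_width
    flagged = [i for i, v in enumerate(log_s2) if not lo <= v <= hi]
    return center, (lo, hi), flagged
```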

Daniel Krige's influence on soil science, and on soil survey in particular, has been profound. From the 1920s onwards soil surveyors made their maps by classifying the soils and drawing boundaries between the classes they recognized. By the 1960s many influential pedologists were convinced that if one knew to which class of soil a site belonged then one would be able to predict the soil's properties there. At the same time, engineers began to realize that prediction from such maps was essentially a statistical matter and to apply classical sampling theory. Such methods, though sound, proved inefficient because they failed to take account of the spatial dependence within the classes. Matters changed dramatically in the 1970s when soil scientists learned of the work of Daniel Krige and Georges Matheron's theory of regionalized variables. Statistical pedologists (pedometricians) first linked R.A. Fisher's analysis of variance to regionalized variables via spatial hierarchical designs to estimate spatial components of variance. They then applied the mainstream geostatistical methods of spatial analysis and kriging to map plant nutrients, trace elements, pollutants, salt, and agricultural pests in soil, which has led to advances in modern precision agriculture. They were among the first Earth scientists to use nonlinear statistical estimation for modelling variograms and to make the programmed algorithms publicly available. More recently...