Sample records for stand-alone statistical algorithm

In this paper, we develop a simple algorithm to determine the required numbers of wind-turbine generators and photovoltaic arrays, and the associated storage capacity, for a stand-alone hybrid microgrid. The algorithm is based on the observation that the state of charge of the battery should be periodically invariant. The optimal sizing of the hybrid microgrid is given in the sense that the life-cycle cost of the system is minimized while the given load power demand is satisfied without load rejection. We also report a case study to show the efficacy of the developed algorithm.
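The periodic-invariance criterion above can be sketched as a feasibility check over one load/generation cycle. This is a minimal illustration under assumed hourly profiles and a simple charge-efficiency model, not the paper's actual algorithm; all names and figures are hypothetical.

```python
# Sketch of the sizing criterion: a candidate configuration is feasible when
# the battery state of charge (SoC) never forces load rejection and is
# periodically invariant, i.e. SoC at the end of the cycle >= SoC at the start.
# (Illustrative profiles and parameters; not the paper's data.)

def simulate_soc(n_wt, n_pv, capacity_kwh, wind_kwh, pv_kwh, load_kwh,
                 soc0=0.5, eff=0.9):
    """Return (feasible, final_soc_fraction) for one load/generation cycle."""
    soc = soc0 * capacity_kwh
    for w, s, d in zip(wind_kwh, pv_kwh, load_kwh):
        net = n_wt * w + n_pv * s - d          # surplus (+) or deficit (-) per step
        if net >= 0:
            soc = min(capacity_kwh, soc + eff * net)   # charge with losses
        else:
            soc += net / eff                           # discharge with losses
            if soc < 0:                                # load rejection -> infeasible
                return False, soc / capacity_kwh
    return soc >= soc0 * capacity_kwh, soc / capacity_kwh

# Toy 4-step cycle (kWh per step): windy night, sunny noon, calm evening, windy night.
wind = [1.0, 0.2, 0.0, 1.0]
pv   = [0.0, 2.0, 0.1, 0.0]
load = [0.8, 0.8, 0.8, 0.8]

feasible, soc_end = simulate_soc(n_wt=1, n_pv=1, capacity_kwh=5.0,
                                 wind_kwh=wind, pv_kwh=pv, load_kwh=load)
```

Sizing would then scan candidate (n_wt, n_pv, capacity) triples, keep the feasible ones, and pick the one with minimum life-cycle cost.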

Highlights: • A new method to improve the performance of renewable power management is proposed. • The proposed method is based on Fuzzy Logic optimized by the Water Cycle Algorithm. • The characteristics of the proposed method are compared with those of two other methods. • The comparisons confirm that the proposed method is robust and effective. - Abstract: This paper aims to improve the power management system of a Stand-alone Hybrid Green Power generation system using a Fuzzy Logic Controller optimized by the Water Cycle Algorithm. The proposed Stand-alone Hybrid Green Power system consists of wind energy conversion and photovoltaic systems as primary power sources and a battery, a fuel cell, and an electrolyzer as energy storage systems. Hydrogen is produced from surplus power generated by the wind energy conversion and photovoltaic systems and stored in a hydrogen storage tank for later use by the fuel cell when the power generated by the primary sources is lower than the load demand. The proposed optimized Fuzzy Logic Controller based power management system determines the power generated by the fuel cell or consumed by the electrolyzer. In a hybrid system, the operation and maintenance cost and the reliability of the system are important issues that should be considered. In this regard, the Water Cycle Algorithm is used to optimize the membership functions in order to simultaneously minimize the Loss of Power Supply Probability and the operation and maintenance cost. The results are compared with those of particle swarm optimization and of an un-optimized Fuzzy Logic Controller power management system to show that the proposed method is robust and effective. Reductions in the Loss of Power Supply Probability and in the operation and maintenance cost are the main advantages of the proposed method. Moreover, the State of Charge of the battery under the proposed method is higher than under the other methods, which leads to increased battery lifetime.

The LHCb upgrade detector project foresees a scintillating fibre tracker (SciFi) to be used during LHC Run III, starting in 2020. The instantaneous luminosity will be increased to $2\times10^{33}$ cm$^{-2}$ s$^{-1}$, five times larger than in Run II, and a full software event reconstruction will be performed at the full bunch-crossing rate by the trigger. The new running conditions, and the tighter timing constraints in the software trigger, represent a big challenge for track reconstruction. This poster presents the design and performance of a novel algorithm developed to reconstruct track segments using solely hits from the SciFi. This algorithm is crucial for the reconstruction of tracks originating from long-lived particles such as $K_{S}^{0}$ and $\Lambda$, and greatly enhances the physics potential and capabilities of the LHCb upgrade compared to the previous implementation.

Introduction: Adult thoracolumbar degeneration is an increasing challenge in the aging population. With age, the progressive degeneration of the discs leads to an asymmetric collapse and a thoracolumbar coronal-plane deformity, a degenerative scoliosis (DS). Aim: To evaluate the complication rate and clinical/radiological results in 22 patients treated with the XLIF procedure for DS or degenerative disc disease (DDD). Material and methods: 22 consecutive patients with DS underwent surgery with the stand-alone XLIF procedure, with follow-up of 24 months. Clinical outcome scores were collected. Complications... ...-year follow-up, with a 31.8% revision rate. Due to the high revision rate, we recommend supplementary posterior instrumentation to achieve a higher fusion rate. When considering the stand-alone XLIF procedure for DS or DDD without supplemental posterior instrumentation, only single-level disease should...

One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony Algorithm (ABC). The proposed methodology is applied to optimize the cost of the PV system, including photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life-cycle cost (LCC), which is to be minimized. The analysis is performed based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and the Genetic Algorithm (GA) is made. Another location, Zagazig city, is selected to check the validity of the ABC algorithm in any location. The ABC results are better than those of the GA. The results encourage the use of PV systems to electrify the rural sites of Egypt.
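The ABC metaheuristic named above can be sketched compactly. This is a simplified one-variable version (greedy employed/onlooker moves plus a scout phase) applied to a hypothetical life-cycle cost curve; the paper's actual cost model and constraints are richer, and the function `lcc` below is an assumption for illustration only.

```python
import random

# Minimal artificial bee colony (ABC) sketch for a one-variable sizing problem.
# (Onlooker phase simplified: no fitness-proportional selection.)

def lcc(x):
    # Hypothetical life-cycle cost: oversizing raises capital cost,
    # undersizing raises the penalty for unmet load; minimum near x = 4.
    return (x - 4.0) ** 2 + 10.0

def abc_minimize(f, lo, hi, n_food=10, limit=20, iters=200, seed=1):
    rng = random.Random(seed)
    foods = [rng.uniform(lo, hi) for _ in range(n_food)]
    trials = [0] * n_food
    best = min(foods, key=f)
    for _ in range(iters):
        for _phase in ("employed", "onlooker"):
            for i in range(n_food):
                k = rng.randrange(n_food - 1)
                k = k if k < i else k + 1                  # partner != i
                phi = rng.uniform(-1, 1)
                cand = foods[i] + phi * (foods[i] - foods[k])
                cand = min(hi, max(lo, cand))
                if f(cand) < f(foods[i]):                  # greedy selection
                    foods[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
        for i in range(n_food):                            # scout phase
            if trials[i] > limit:
                foods[i], trials[i] = rng.uniform(lo, hi), 0
        best = min(best, min(foods, key=f), key=f)
    return best

x_best = abc_minimize(lcc, 0.0, 10.0)
```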

The first Autonomous Heliostat has been developed by CIEMAT at the PSA facilities in Almeria. This heliostat is an innovative approach to reducing the civil engineering costs in heliostat fields of central tower plants. Channels, cables and other electric elements have been eliminated in the new heliostat. Thus, one 70-m², classical T glass/metal heliostat has been adapted to include all the new stand-alone concept components. A PV system is able to drive two sun-tracking DC motors between 5 and 24 Vdc, 0 and 15 A. The heliostat communicates with the control room 400 m away by using a radio-modem working at 9600 baud. An anemometer, a wind switch, and light and ambient temperature sensors have been installed on the heliostat for self-protection decision-making. A PV panel integrated into the heliostat reflecting surface eliminates cabling and other elements required for a conventional power supply. Communication lines between master control and local control have been replaced by radio-modem. Testing has validated the technical feasibility of the prototype and quantified the real consumption and efficiencies of the new elements. The extra costs produced by the autonomous concepts are compared with the cost of civil work in a conventional heliostat field. (Author) 8 refs

In order to improve the transient stability of frequency in a small stand-alone microgrid (SSM), this paper considers an SSM composed of a direct-drive permanent magnet synchronous generator (D-PMSG) and a micro gas turbine (MGT), and uses wind turbine generator (WTG) virtual inertia (VI) to participate in primary (short-term) system frequency regulation. First, the paper constructs a grid-connected model composed of a WTG and an MGT, analyzes the WTG virtual inertia frequency regulation mechanism, and explains the principle of proportional-differential (PD) virtual inertia control (VIC) and its shortcomings. Second, it introduces the structure and principle of n-th-order active disturbance rejection control (ADRC) and derives the design of a second-order ADRC-VIC. Finally, through simulation and experimental verification comparing the frequency response without VIC, with PD-VIC, and with ADRC-VIC, it is concluded that both PD-VIC and ADRC-VIC can use the WTG virtual inertia to participate in primary frequency regulation. The frequency regulation effect of ADRC-VIC is better than that of PD-VIC: ADRC-VIC can extend the rotor speed recovery time and avoid overshoot, its frequency fluctuation amplitude and settling time are markedly improved, and it effectively avoids overshoot of the MGT output power.
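The PD-VIC law that the abstract contrasts with ADRC-VIC can be sketched as a generic proportional-differential response to the frequency deviation. The gains and the backward-difference derivative below are illustrative assumptions, not the paper's tuned design.

```python
# Sketch of PD virtual inertia control (PD-VIC): the WTG adds a power
# reference proportional to the frequency deviation and its derivative.
# (Coefficient values and sampling are illustrative.)

def pd_vic_power(freq_hz, dt, kp=20.0, kd=5.0, f_nom=50.0):
    """Return the extra power command (per unit) for each frequency sample."""
    out, prev_df = [], 0.0
    for f in freq_hz:
        df = f - f_nom                       # frequency deviation
        ddf = (df - prev_df) / dt            # its derivative (backward difference)
        out.append(-(kp * df + kd * ddf))    # inject power when frequency drops
        prev_df = df
    return out

# A dip from 50 Hz to 49.8 Hz should produce positive (supporting) power commands.
p = pd_vic_power([50.0, 49.9, 49.8, 49.8], dt=0.1)
```

The derivative term is what emulates inertia; the abstract's point is that a fixed PD law can overshoot, which the ADRC formulation is designed to avoid.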

The IEA Photovoltaic Power Systems Programme (PVPS) is one of the collaborative R and D agreements established within the IEA. The objective of Task III is to promote and facilitate the exchange of information and experiences in the field of PV Systems in Stand-alone and Island Applications (SAPV). The book focuses on the practical experiences gained, and does not aim to provide a complete manual on SAPV. When Task III started its activities in 1993, a collection of 50 'State of the art' projects was published in the book 'Examples of Stand-Alone Photovoltaic Systems'. This publication marked the base line for the work of the task. Now, in 1998, the showcases from each country demonstrate the lessons learned in five years of cooperation. The book consists of two parts. The first part contains eight chapters dealing with a specific aspect of stand-alone PV. The second part introduces 14 national showcase projects in a systematic presentation. Each chapter and showcase can be read independently from the rest of the book. Chapter 2, contributed by The Netherlands, analyses the market for stand-alone PV systems. It gives an overview of the 'traditional' application of stand-alone PV, which is the electrification of remote buildings and which has been addressed in depth in other publications. The focus is on the market niches of service applications that are also interesting for more densely populated areas, e.g. in industrialised countries. The United Kingdom illustrates the economic aspects in Chapter 3. Cost comparisons are made, but more important is the illustration of the non-financial considerations that make PV the preferred choice as a power source for many applications. Switzerland explores in Chapter 4 (financing aspects) different financing mechanisms, and financial policies used to overcome the initial cost barrier. Most of these approaches have been applied in developing countries rather than in the western world. Using various examples from all over the

This paper analyzes the operation of a stand-alone wind farm with variable-speed turbines, permanent magnet synchronous generators (PMSGs), and a system for converting wind energy during wind-speed variations. The design and modeling of a wind system which uses PMSGs to provide the power required by a hydrogen gas electrolyzer system is discussed. The wind farm consists of three wind turbines, boost DC-DC converters, diode full-bridge rectifiers, permanent magnet synchronous generators, MPPT control, and a hydrogen gas electrolyzer system. The MPPT controller based on fuzzy logic is designed to adjust the duty ratio of the boost DC-DC converters to extract maximum power. The proposed fuzzy logic controller is combined with the power signal feedback (PSF) MPPT algorithm, which is generally used to extract maximum power from paralleled wind turbines, and the energy is stored in the form of hydrogen gas. The system is modeled and its behavior is studied using the MATLAB software.

Stand-alone photovoltaic power systems are natural options for application in electrification of remote areas which are not served by the grid electricity supply system. An ampere-hour method of sizing a stand-alone PV system for application in any remote location has been presented. The design which is for both ac and dc ...
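The ampere-hour method mentioned above can be sketched in a few lines: convert the daily load to ampere-hours at the system voltage, size the battery bank from the autonomy requirement, and size the array from the peak sun hours. All figures and the derating assumptions below are illustrative, not the paper's data.

```python
# Minimal ampere-hour sizing sketch for a stand-alone PV system
# (illustrative assumptions throughout).

def ah_sizing(daily_load_wh, system_v, autonomy_days, dod, batt_eff,
              peak_sun_hours, derate):
    daily_ah = daily_load_wh / system_v                     # daily load in Ah
    batt_ah = daily_ah * autonomy_days / (dod * batt_eff)   # battery bank capacity
    array_a = daily_ah / (peak_sun_hours * derate)          # required array current
    return daily_ah, batt_ah, array_a

# Example: 1.2 kWh/day at 24 V, 3 days of autonomy, 70 % depth of discharge,
# 85 % battery efficiency, 5 peak sun hours, 80 % array derating.
daily_ah, batt_ah, array_a = ah_sizing(1200, 24, 3, 0.7, 0.85, 5, 0.8)
```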

This paper presents a study of a stand-alone photovoltaic (PV) system for domestic applications. Because changes in solar radiation and temperature reduce the power output of a photovoltaic module, the design and control of a DC-DC buck converter is proposed for providing power to the load from a photovoltaic source. The control of this converter is carried out with an integrated MPPT (Maximum Power Point Tracking) algorithm, which ensures maximum energy is generated by the PV arrays. Moreover, the output stage is composed of a battery energy storage system, a dc-ac inverter, and an LCL filter, which enables higher efficiency, low-distortion ac waveforms, and low leakage currents. The control strategy adopted is a cascade control composed of two regulation loops. Simulations performed with the PSIM software validated the control system. The realization and testing of the photovoltaic system were achieved in the Photovoltaic laboratory of the Centre for Research and Energy Technologies at the Technopark Borj Cedria. Experimental results verify the efficiency of the proposed system.

This report presents a number of models for modelling and simulation of a stand-alone photovoltaic (PV) system with a battery bank, verified against a system installed at Risø National Laboratory. The work has been supported by the Danish Ministry of Energy, as a part of the activities in the Solar Energy Centre Denmark. The study is carried out at Risø National Laboratory with the main purpose of establishing a library of simple mathematical models for each individual element of a stand-alone PV system, namely solar cells, battery, controller, inverter and load. The models for PV module and battery...

This paper deals with a permanent magnet brushless DC generator (PMBLDCG) based stand-alone wind energy conversion system (WECS) for small scale power generation. A buck-boost DC-DC converter is used for controlling the PMBLDCG speed to achieve optimum energy output from the wind turbine without sensing ...

The rule of decreasing serial cost sharing defined in de Frutos [1] over the class of concave cost functions may violate the important stand-alone test. Sufficient conditions for the test to be satisfied are given, in terms of individual rationality as well as coalitional stability...

A study to identify and quantify the market for stand-alone renewable energy supplies of power (photovoltaics, wind and micro-hydro electricity systems) was described. The study focused on small systems, generally less than a few kW installed capacity. It was suggested that in the UK, the emphasis on grid-connected renewable energy technologies (RETs) has blurred the fact that it is 'off-grid' renewable systems that can offer more immediate real commercial markets for the renewables business. With the likelihood of a significant increase in demand for renewables world wide over the next ten years, the UK needs to make a special effort to become involved

The converter control scheme plays an important role in the performance of maximum power point tracking (MPPT) algorithms. In this paper, an input voltage control with a double loop for a stand-alone photovoltaic system is designed and tested. The inner current control loop with high crossover frequency avoids perturbations in the load being propagated to the photovoltaic panel and thus deviating the operating point. Linearization of the photovoltaic panel and converter state-space modeling is performed. In order to achieve stable operation under all operating conditions, the photovoltaic panel...

The NASA Lewis Research Center (LeRC) is managing stand-alone photovoltaic (PV) system activities sponsored by the U.S. Department of Energy (DOE) and the U.S. Agency for International Development (AID). The DOE project includes village PV power demonstration projects in Gabon (four sites) and the Marshall Islands, PV-powered medical refrigerators in six countries, PV system microprocessor control development activities, and PV-hybrid system assessments. The AID project includes a large village system in Tunisia, a water pumping/grain grinding project in Upper Volta, five medical clinics in four countries, and a PV-powered remote earth station application. These PV activities are reviewed and significant findings to date are summarized.

A feasibility study is carried out on an integral-type small PWR with stand-alone safety. It is designed to have the following features. (1) The coolant does not leak out under any accident condition. (2) Fuel failure never occurs, whereas it is assumed to occur in a large-scale PWR at the design-basis accident. (3) Under any accident condition, safety is secured without any support from the outside (stand-alone safety). (4) It has self-regulating characteristics and easy controllability. These features can be satisfied by integrating the steam generator and CRDM into the reactor vessel, whereas pipe-line breaks have to be considered in a conventional PWR. Several countermeasures are planned to satisfy the above features. Economic competitiveness is also attained by several simplifications, such as (1) elimination of the main coolant piping and the pressurizer by integration of the primary cooling system and self-pressurization, (2) elimination of the RCP by application of a natural circulation system, (3) elimination of the ECCS and accumulator by application of a static safety system, (4) a large reduction in the volume of the containment vessel by application of the integrated primary cooling system, and (5) elimination of boric acid treatment by deletion of the chemical shim. A long operation period, such as 10 years, can be attained by the application of Gd fuel in one-batch refueling. The construction period can be shortened by standardizing the design and introducing a modular component system. Furthermore, the applicability of a reduced-moderation core is also considered. (K. Tsuchihashi)

Traditional LC filters can't work stably in a small rating stand-alone power grid, so the active power filter (APF) is becoming an important tool to solve the power quality problem in such grids. Most current detection algorithms for APFs need a synchronizing signal. Firstly, it is shown that synchronization based on zero-cross detection can't work effectively in a small rating stand-alone power grid. Then a soft phase locked loop with an additional filter is proposed. It can lock the phase angle onto the positive sequence of the fundamental voltage accurately and rapidly, ensuring the performance of the APF applied in the small rating stand-alone power grid. Moreover, the soft phase locked loop is easy to implement in a Digital Signal Processor (DSP). Simulation and experimental results validate that the soft phase locked loop has satisfactory performance.

The optimized dispatch of different distributed generations (DGs) in a stand-alone microgrid (MG) is of great significance to the reliability and economy of operation, especially in the context of the energy crisis and environmental pollution. Based on a controllable load (CL) and a combined cooling-heating-power (CCHP) model of a micro gas turbine (MT), a multi-objective optimization model with relevant constraints to optimize the generation cost, load-cut compensation, and environmental benefit is proposed in this paper. The MG studied in this paper consists of photovoltaic (PV), wind turbine (WT), fuel cell (FC), diesel engine (DE), MT, and energy storage (ES) units. Four typical scenarios were designed according to different day types (workday or weekend) and weather conditions (sunny or rainy) in view of the uncertainty of renewable energy in variable situations and load fluctuation. A modified dispatch strategy for CCHP is presented to further improve the operation economy without reducing consumer comfort. Chaotic optimization and an elite retention strategy are introduced into basic particle swarm optimization (PSO) to obtain a modified chaos particle swarm optimization (MCPSO) whose search capability and convergence speed are greatly improved. Simulation results validate the correctness of the proposed model and the effectiveness of the MCPSO algorithm in the optimized operation of a stand-alone MG.
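The two MCPSO ingredients named above, chaotic initialisation and elite retention, can be sketched on a toy one-dimensional dispatch-cost function. The logistic map stands in for the chaotic sequence, and the cost curve and coefficients are illustrative assumptions, not the paper's multi-objective model.

```python
import random

# Sketch of the MCPSO idea: logistic-map (chaotic) initialisation plus elite
# retention on top of standard PSO. (Toy objective; coefficients illustrative.)

def cost(x):
    return (x - 2.0) ** 2 + 1.0     # hypothetical generation-cost curve

def mcpso(f, lo, hi, n=12, iters=150, w=0.6, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    z = rng.random()
    pos = []
    for _ in range(n):              # chaotic initialisation via logistic map
        z = 4.0 * z * (1.0 - z)
        pos.append(lo + z * (hi - lo))
    vel = [0.0] * n
    pbest = pos[:]
    gbest = min(pos, key=f)         # elite: best solution kept across iterations
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])
                      + c2 * rng.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(gbest, min(pbest, key=f), key=f)   # elite retention
    return gbest

x_best = mcpso(cost, -10.0, 10.0)
```

Chaotic initialisation spreads particles more evenly than uniform sampling, and retaining the elite guarantees the best dispatch found is never lost between iterations.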

The introduction of a stand-alone bone bank in our regional orthopaedic hospital has improved the availability of femoral head allograft. Benninger et al. (Bone Joint J 96-B:1307-1311, 2014) demonstrated their institution's bank to be cost-effective despite a 30 % discard rate for harvested allograft. We sought to audit our own discard rates and the subsequent cost-effectiveness of our bone bank. Donor recruitment: before approaching a potential donor, our establishment's nurse specialists review the donor's clinical notes and biochemical laboratory results, available on a regional Electronic Care Record. They view femoral head architecture on radiographs against set criteria, using the Picture Archive and Communication System (SECTRA, Sweden). In total, 1383 femoral heads were harvested and 247 were discarded, giving an overall rate of 17.9 %. The most common reason for discard of harvested graft was a positive microbiology/bacteriology result, n = 96 (38.9 %). After a rise in discard rates in 2007, we have steadily reduced our discard rates: 2006/2007 (28.2 %), 2008/2009 (17 %), 2010/2011 (14.8 %), and finally 10.3 % in 2012/2013. In the current financial year, our cost to harvest, test, store and release a femoral head is £ 610. With a structured donor recruitment process and unique pre-operative radiographic analysis, we have successfully reduced our discard rates bi-annually, making our bone bank increasingly cost-effective.

The practical applicability of the considerations made in a previous paper to characterize energy balances in stand-alone photovoltaic systems (SAPV) is presented. Given that energy balances were characterized based on monthly estimations, the method is appropriate for sizing installations with variable monthly demands and variable monthly panel tilt (for seasonal estimations). The method presented is original in that it is the only method proposed for this type of demand. The method is based on the rational utilization of daily solar radiation distribution functions. When exact mathematical expressions are not available, approximate empirical expressions can be used. The more precise the statistical characterization of the solar radiation on the receiver module, the more precise the sizing method, given that the characterization will depend solely on the distribution function of the daily global irradiation on the tilted surface, H_{gβi}. This method, like previous ones, uses the concept of loss of load probability (LLP) as a parameter to characterize system design and includes information on the standard deviation of this parameter (σ_LLP) as well as two new parameters: the annual number of system failures (f) and the standard deviation of the annual number of system failures (σ_f). This paper therefore provides an analytical method for evaluating and sizing stand-alone PV systems with variable monthly demand and panel inclination. The sizing method has also been applied in a practical manner. (author)
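The LLP figure of merit used above has a simple empirical counterpart: the fraction of demanded energy that the system fails to supply over the evaluation period. The series below are illustrative; the paper works analytically from irradiation distribution functions rather than from a sample series.

```python
# Sketch of the loss-of-load probability (LLP) as an energy-deficit fraction:
# LLP = (sum of unmet energy) / (sum of demanded energy).
# (Illustrative daily series, not the paper's data.)

def llp(generated_kwh, demanded_kwh):
    deficit = sum(max(d - g, 0.0) for g, d in zip(generated_kwh, demanded_kwh))
    return deficit / sum(demanded_kwh)

gen    = [4.0, 5.0, 2.0, 6.0, 3.0]   # daily PV energy delivered to the load
demand = [4.0, 4.0, 4.0, 4.0, 4.0]   # daily demand

p = llp(gen, demand)
```

A sizing method then searches for the smallest (and cheapest) array/battery pair whose simulated or analytically derived LLP stays below the design target.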

The behavior of a stand-alone microgrid is analyzed under the condition of faults on distribution feeders. Since the battery is not able to maintain the dc-link voltage within limits during a fault, a resistive dump load control is presented to do so. An inverter control is proposed to maintain balanced voltages at the PCC under unbalanced load conditions and to reduce the voltage unbalance factor (VUF) at load points. The proposed inverter control also protects itself from high fault current. The existing maximum power point tracking (MPPT) algorithm is modified to limit the speed of the generator during a fault. Extensive simulation results using MATLAB/SIMULINK establish that the performance of the controllers is quite satisfactory under different fault conditions as well as unbalanced load conditions.

This book discusses the computational approach in modern statistical physics, adopting simple language and an attractive format with many illustrations, tables and printed algorithms. The discussion of key subjects in classical and quantum statistical physics will appeal to students, teachers and researchers in physics and related sciences. The focus is on orientation, with implementation details kept to a minimum. The book presents the computational approach in a clear and accessible way and demonstrates its close relation to other approaches in theoretical physics.

Energy storage devices are required for power balance and power quality in standalone wind energy systems. A Vanadium Redox Flow Battery (VRB) system has many features which make its integration with a stand-alone wind energy system attractive. This paper proposes the integration of a VRB system...

Several power sources, such as PV solar arrays and a battery, are often used to manage the power flow of a photovoltaic (PV) based stand-alone power system, due to the fluctuating nature of the solar energy resource, and to deliver continuous power to the users in an appropriate form. Traditionally, three...... for PV and battery stand-alone systems....

To demonstrate the non-inferiority of synthetic image (SI) mammography versus full-field digital mammography (FFDM) in digital breast tomosynthesis (DBT) examinations. An observational, retrospective, single-centre, multireader blinded study was performed, using 2384 images to directly compare SI and FFDM based on Breast Imaging Reporting and Data System (BIRADS) categorisation and visibility of radiological findings. Readers had no access to DBT slices. Multiple reader, multiple case (MRMC) receiver operating characteristic (ROC) methodology was used to compare the diagnostic performance of SI and FFDM images. The kappa statistic was used to estimate the inter-reader and intra-reader reliability. The area under the ROC curves (AUC) revealed the non-inferiority of SI versus FFDM based on BIRADS categorisation [difference between AUCs (ΔAUC), -0.014] and lesion visibility (ΔAUC, -0.001), but the differences were not statistically significant (p=0.282 for BIRADS; p=0.961 for lesion visibility). On average, 77.4% of malignant lesions were detected with SI versus 76.5% with FFDM. Sensitivity and specificity of SI are superior to FFDM for malignant lesions scored as BIRADS 5 and breasts categorised as BIRADS 1. SI is not inferior to FFDM when DBT slices are not available during image reading. SI can replace FFDM, reducing the dose by 45%. • Stand-alone SI demonstrated performance not inferior to FFDM for lesion visibility. • Stand-alone SI demonstrated performance not inferior to FFDM for lesion BIRADS categorisation. • Synthetic images provide important dose savings in breast tomosynthesis examinations.

Highlights: • Analyzes the performance of ANN and ANFIS MPPT algorithms in a stand-alone PV system. • ISSBC with ANFIS can provide higher overall efficiency than ANN. • CHBMLI integrated with the SHE ANN modulation technique improves output voltage quality. • Simulation and hardware results show the ANFIS algorithm is more efficient than the ANN algorithm. - Abstract: This paper presents a unique combination of an interleaved soft-switched boost converter (ISSBC) fed by a set of two photovoltaic (PV) panels with distributed MPPT, suitable to guarantee MPPT even under partially shadowed conditions, managed by an adaptive neuro-fuzzy inference system trained with training data derived from a particle swarm optimization (PSO–ANFIS) unit. The ISSBC is followed by a single-phase cascaded H-bridge five-level inverter (CHI) driven by the individual DC outputs of the ISSBC, with a selective harmonic elimination scheme, typically to eliminate the seventh-order harmonics. A comparison of different intelligent distributed maximum power point tracking (MPPT) algorithms for photovoltaic (PV) systems under partial shadow conditions is carried out. The use of the ISSBC guarantees mitigation of ripple, and it is meant to handle higher currents with minimal switching losses. Simulation was carried out in the Matlab Simulink environment, and experimental verification with a scaled-down model validated the proposed scheme. It has thus been established, by both simulation and experimental verification, that the PSO–ANFIS model of the distributed MPPT control scheme outperforms the other MPPT control schemes.

As a part of a Hybrid Intelligent Algorithm, a model based on an ANN (artificial neural network) is proposed in this paper to represent hybrid system behaviour considering the uncertainty related to wind speed and solar radiation, battery bank lifetime, and fuel prices. The Hybrid Intelligent Algorithm combines probabilistic analysis based on a Monte Carlo simulation approach with artificial neural network training embedded in a genetic algorithm optimisation model. The installation of a typical hybrid system was analysed. Probabilistic analysis was used to generate an input-output dataset of 519 samples that was later used to train the ANNs, reducing the computational effort required. The generalisation ability of the ANNs was measured in terms of the RMSE (Root Mean Square Error), MBE (Mean Bias Error), MAE (Mean Absolute Error), and R-squared estimators using another data group of 200 samples. The results obtained from the estimation of the expected energy not supplied, the probability of a given reliability level, and the estimation of the expected value of the net present cost show that the presented model is able to represent the main characteristics of a typical hybrid power system under uncertain operating conditions. - Highlights: • This paper presents a probabilistic model for a stand-alone hybrid power system. • The model considers the main sources of uncertainty related to renewable resources. • The Hybrid Intelligent Algorithm is applied to represent hybrid system behaviour. • The installation of a typical hybrid system was analysed. • The results obtained from the case study validate the presented model.
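The four generalisation estimators named above have standard definitions, written out explicitly here for a pair of observed/predicted series (the sample values are illustrative).

```python
# RMSE, MBE, MAE and R-squared for observed vs. predicted series,
# following the usual definitions (errors taken as predicted - observed).

def metrics(obs, pred):
    n = len(obs)
    err = [p - o for o, p in zip(obs, pred)]
    rmse = (sum(e * e for e in err) / n) ** 0.5       # root mean square error
    mbe = sum(err) / n                                # mean bias error
    mae = sum(abs(e) for e in err) / n                # mean absolute error
    mean_o = sum(obs) / n
    ss_res = sum(e * e for e in err)
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot                        # coefficient of determination
    return rmse, mbe, mae, r2

rmse, mbe, mae, r2 = metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

MBE distinguishes systematic over- or under-prediction (which RMSE and MAE hide), which is why the four are reported together when judging an ANN surrogate.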

Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal, and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy make the penetration of PV implementations difficult and complicate the system design. An optimal sizing of these systems proves to be essential. This paper recommends a techno-economic optimization model to optimally determine the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with respect to reliability and cost indicators, namely the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is defined analytically for the two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. Besides long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. Calculating the optimal tilt angle at yearly, seasonal, and monthly frequencies results in higher system efficiency. It is therefore suggested to change the tilt angle regularly in order to maximize the solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey.

Full Text Available Background: One of the most common surgical operations for the treatment of cervical spondylosis is anterior cervical discectomy with fusion (ACDF). In order to achieve stable fusion after discectomy and avoid dysphagia, artificial stand-alone zero-profile cages with integrated screws were developed and introduced into clinical practice. Outcomes and complications after ACDF with such cages have not yet been adequately assessed. Methods: We analyzed 20 consecutive patients with cervical spondylosis treated in our institution with ACDF using the stand-alone zero-profile cage Zero-P. Before and after surgery and then 6, 12 and 24 months after surgery, we assessed the level of pain with the VAS scale, the severity of myelopathy with the mJOA scale, and dysphagia with a four-level scale. Treatment outcome was assessed after 2 years according to Odom's criteria. Results: No complications occurred during surgery or recovery after surgery. The VAS scores after surgery and then after 6, 12 and 24 months were statistically significantly lower than before surgery (p<0.05). The mJOA scores 6, 12 and 24 months after surgery were statistically significantly higher than before surgery (p<0.05). Transient and mild dysphagia was present after surgery in 15% (3/20) of patients and at 6, 12 or 24 months after surgery in none. Outcome after 2 years was excellent in 9 patients and good in 11 patients. Conclusions: Operative treatment of symptomatic cervical spondylosis with ACDF using a stand-alone zero-profile cage with integrated screws is safe and efficient. The incidence of dysphagia after surgery is low and generally transient.

Highlights: ► This paper presents a methodology for the installation capacity optimization. ► Hybrid generation system is optimized by application of adaptive genetic algorithm. ► A cost investigation is made under various conditions and component characteristics. ► The optimization scheme is validated to meet the annual power load demand. -- Abstract: The aim of this work is to present an optimization methodology for the installation capacity of a stand-alone hybrid generation system, taking into consideration the cost and reliability. Firstly, on the basis of derived steady state models of a wind generator (WG), a photovoltaic array (PV), a battery and an inverter, the hybrid generation system is modeled for the purpose of capacity optimization. Secondly, the power system is analyzed for determining both the system structure and the operation control strategy. Thirdly, according to hourly weather database of wind speed, temperature and solar irradiation, annual power generation capacity is estimated for the system match design in order that an annual power load demand can be met. The capacity determination of a hybrid generation system becomes complicated as a result of the uncertainty in the renewable energy together with load demand and the nonlinearity of system components. Aimed at the power system reliability and the cost minimization, the capacity of a hybrid generation system is optimized by application of an adaptive genetic algorithm (AGA) to individual power generation units. A total cost investigation is made under various conditions, such as wind generator power curves, battery discharge depth and the loss of load probability (LOLP). At the end of this work, the capacity of a hybrid generation system is optimized at two installation sites, namely the offshore Orchid Island and Wuchi in Taiwan. The optimization scheme is validated to optimize power capacities of a photovoltaic array, a battery and a wind turbine generator with a relative
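A plain genetic algorithm (not the paper's adaptive variant) for this kind of capacity sizing might look like the sketch below: integer counts of PV modules and batteries are evolved to minimize cost under an LOLP constraint enforced by a penalty. All profiles and prices are invented for illustration.

```python
import random

# Toy GA-based capacity sizing sketch.  Profiles, prices and ratings
# are invented; the paper's adaptive GA and system models differ.

SUN = [0, 0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] * 30
LOAD = [1.0] * len(SUN)                      # flat 1 kW load, 30 days hourly

def lolp(n_pv, n_batt, batt_kwh=2.0):
    """Loss-of-load probability: fraction of hours the load is unserved."""
    soc, cap, lost = 0.0, n_batt * batt_kwh, 0
    for sun, load in zip(SUN, LOAD):
        net = n_pv * 0.25 * sun - load       # 0.25 kW per module per sun unit
        soc = min(cap, soc + net) if net >= 0 else soc + net
        if soc < 0:
            lost += 1
            soc = 0.0
    return lost / len(SUN)

def cost(n_pv, n_batt):
    return 400 * n_pv + 150 * n_batt         # invented unit prices

def fitness(ind, target=0.05):
    return cost(*ind) + (1e6 if lolp(*ind) > target else 0.0)

def ga(pop_size=30, gens=40, seed=1):
    rnd = random.Random(seed)
    pop = [(rnd.randint(1, 20), rnd.randint(1, 20)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rnd.sample(elite, 2)
            child = (a[0], b[1])             # one-point crossover
            if rnd.random() < 0.3:           # mutation: nudge each count by 1
                child = (max(1, child[0] + rnd.choice([-1, 1])),
                         max(1, child[1] + rnd.choice([-1, 1])))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

The adaptive variant in the paper additionally adjusts crossover and mutation rates during the run; this fixed-rate sketch only illustrates the penalty-constrained sizing loop.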

The aim of this study was to find a proper method for assessing subsidence using a radiologic measurement following anterior cervical discectomy and fusion (ACDF) with a stand-alone polyetheretherketone (PEEK) Solis™ cage. Forty-two patients who underwent ACDF with the Solis™ cage were selected. With a minimum follow-up of 6 months, the retrospective investigation covered 37 levels in 32 patients. The mean follow-up period was 18.9 months. The total intervertebral height (TIH) of the two fused vertebral bodies was measured on digital radiographs with built-in software. The degree of subsidence (ΔTIH) was reflected by the difference between the immediate postoperative and follow-up TIH. The change in postoperative disc space height (CT-MR ΔTIH) was reflected by the difference between the TIH of the preoperative mid-sagittal 2D CT and that of the preoperative mid-sagittal T1-weighted MRI. Compared to preoperative findings, postoperative disc height was increased in all cases and subsidence was observed in only 3 cases. To compare the subsidence and non-subsidence groups, the TIH and CT-MR ΔTIH of each group were analyzed. There was no statistically significant difference in TIH and CT-MR ΔTIH between the groups at 4 and 8 weeks, but a difference was observed in TIH at the last follow-up (p=0.0497). ACDF with the Solis™ cage was associated with relatively good long-term radiologic results. Fusion was achieved in 94.5% and subsidence occurred in 8.1% by radiologic assessment. Statistical analysis revealed that subsidence was seen later than 8 weeks after surgery and that its development did not correlate statistically with the change in postoperative disc space height.

Energy storage devices are required for power balance and power quality in standalone wind energy systems. A Vanadium Redox Flow Battery (VRB) system has many features which make its integration with a stand-alone wind energy system attractive. This paper proposes the integration of a VRB system...... with a typical stand-alone wind energy system during wind speed variation as well as transient performance under variable load. The investigated system consists of a variable speed wind turbine with permanent magnet synchronous generator (PMSG), diode rectifier bridge, buck-boost converter, bidirectional charge...... controller, transformer, inverter, ac loads and VRB (to store a surplus of wind energy and to supply power during a wind power shortage). The main purpose is to supply domestic appliances through a single phase 230V, 50Hz inverter. Simulations are accomplished in order to validate the stability of the supply....

The conventional stand-alone brushless doubly-fed generator (BDFG) control strategies need the feedback from the rotor position or speed sensors, which can reduce system reliability and increase the cost and axial volume of the machine. In this paper, a sensorless direct voltage control (DVC......) strategy is presented for the stand-alone BDFG. The satisfactory dynamic performance is verified by experimental results under four kinds of typical operation conditions. Besides, the proposed control strategy is robust due to no generator parameters being required....

In this paper, a current-sensorless MPPT control method for a stand-alone-type PV generation system is proposed. This control method offers the advantages of a simplified hardware configuration and low cost by using only one sensor, which measures the PV output voltage. In a stand-alone application with a battery load, the experimental results show that the estimated values of PV output current are accurate, and that the use of the proposed MPPT control increases the PV generated energy by 16.3% compared to the conventional system. Furthermore, it is clarified that the proposed method has an extremely high UUF (useful utilization factor) of 98.7%.

A photovoltaic (PV) module is an indispensable part of a stand-alone PV system. In this paper, a one-diode equivalent-circuit-based versatile simulation model, in the form of a masked-block PV module, is proposed. The model allows estimating the behaviour of a PV module with respect to changes in irradiance intensity, ambient temperature and PV module parameters. In addition, the model supports Maximum Power Point Tracking (MPPT), which can be used in the dynamic simulation of stand-alone PV systems.

A control strategy to regulate the frequency and voltage of a stand-alone wound rotor induction machine is presented. This strategy allows the machine to work as a generator in stand-alone systems (without grid connection) with variable rotor speed. A stator flux-oriented control is proposed using the rotor voltages as actuation variables. Two cascade control loops are used to regulate the stator flux and the rotor currents. A closed loop observer is designed to estimate the machine flux which is necessary to implement these control loops. The proposed control strategy is validated through simulations with satisfactory results. (author)

This paper considers the pros and cons of stand-alone displays, analog (e.g. billboards, blackboards, whiteboards, large pieces of paper etc.) as well as digital (e.g. large shared screens, digital whiteboards or similar), as tools for time management processes in a creativity-driven learning...... to storing information digitally. The findings could indicate a possible market for stand-alone, interactive digital displays combining the ‘touch and feel’ character of an analog board with the convenience of digital data storage....

The following is an implementation of a stand-alone system for solar energy harvesting and electrical energy storage for use in off-grid housing applications. The principal aim of this project was to construct a compact and affordable system for an off-grid house and to monitor its efficiency throughout the year.

and enable control of frequency and voltage independently on both the grid side and the generator side. The prototype has been installed at Risø. The paper will present results from test runs of the system both operating stand-alone supplying a single load and in parallel operation with a diesel genset....

The purpose of this dissertation is to offer a multifaceted overview of stand-alone literature reviews. These texts, literature reviews published unattached to research articles, have existed for centuries but remained largely unstudied by linguists. Thus, the goal of this project is to present these reviews' situational, grammatical, and…

This paper discusses the modeling, design and operation of a PV powered stand-alone system, which includes a PV array, a battery bank, power electronic converters and the associated control system. The design considerations are analyzed and a design platform is presented. Furthermore the operatio...

low back pain resulting from degenerative disc disease. ALIF surgery has previously been linked with certain high risk complications and unfavorable long term fusion results. Newer studies suggest that stand-alone ALIF can possibly be advantageous compared to other types of posterior instrumented...

This paper investigates the climatic effects and environmental variations on the performance of a stand-alone photovoltaic system. The effects of partial shading with different climate conditions and load resistance variations were examined. A survey of some of the work done in this field of environmental effect on solar panels was ...

This paper presents a performance evaluation and comparison of state-of-the-art low voltage Si MOSFETs for a stand-alone photovoltaic-LED Light to Light (LtL) system. The complete system is formed by two cascaded converters that will be optimized for a determined solar irradiation and LED...

Several power sources such as PV solar arrays and battery are often used to manage the power flow for a photovoltaic (PV) based stand-alone power system due to the fluctuation nature of solar energy resource, and deliver a continuous power to the users in an appropriate form. Traditionally, three...

This report describes the development of a stand-alone version of the 11kW Gaia wind turbine. Various possible configurations are investigated and a configuration using a back-to-back converter is chosen. A model is developed for controller design of the fast controllers of the unit. Controllers...... assessment and controller design a dynamic performance assessment model has been developed....

A maximum power point tracking scheme for a 1 kW stand-alone solar-energy-based power supply. ... This paper elucidates one of the tracking schemes for photovoltaic (PV) systems using a Cuk converter operating in discontinuous inductor current mode (DICM) as an interface. A method for efficiently maximizing the output ...

The common practice for controlling the stand-alone voltage source inverters (VSIs) is to transform abc voltage and current signals to DC signals using the dq transformation, which makes it possible to control the new DC voltage and current signals just using simple proportional-integral controll...

In this paper, a new control method for maximum power point tracking (MPPT) in stand-alone-type PV generation systems is proposed. In this control method, the operations of detecting the maximum power point and tracking that point are carried out alternately using a step-up DC-DC converter. This method requires neither the measurement of temperature and insolation level nor a PV array model. In a stand-alone-type application with a battery load, the design method for the boost inductance L of the step-up DC-DC converter is described, and the experimental results show that the use of the proposed MPPT control increases the PV generated energy by 14.8% compared to the conventional system.
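A classic model-free hill-climbing (perturb-and-observe) loop illustrates the family this method belongs to, although the paper's alternating detect/track scheme differs in detail. The toy P-V curve below is an assumption, not a panel model from the paper.

```python
# Minimal perturb-and-observe MPPT sketch on an invented P-V curve.

def pv_power(v):
    """Toy unimodal P-V curve with its maximum near v = 17.0 V."""
    return max(0.0, 60.0 - 0.6 * (v - 17.0) ** 2)

def mppt_po(v=10.0, step=0.2, iters=200):
    """Perturb the operating voltage; reverse direction when power drops."""
    p_prev, direction = pv_power(v), +1
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power dropped: perturb the other way
            direction = -direction
        p_prev = p
    return v                      # oscillates within one step of the MPP
```

Like the paper's method, this loop needs no temperature or insolation measurement and no array model; it only observes the power response to its own perturbations.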

OBJECTIVE Lateral lumbar interbody fusion (LLIF) is a less invasive surgical option commonly used for a variety of spinal conditions, including in high-risk patient populations. LLIF is often performed as a stand-alone procedure, and may be complicated by graft subsidence, the clinical ramifications of which remain unclear. The aim of this study was to further characterize the sequelae of graft subsidence following stand-alone LLIF. METHODS A retrospective review of prospectively collected data was conducted on consecutive patients who underwent stand-alone LLIF between July 2008 and June 2015; 297 patients (623 levels) met inclusion criteria. Imaging studies were examined to grade graft subsidence according to Marchi criteria, and compared between those who required revision surgery and those who did not. Additional variables recorded included levels fused, DEXA (dual-energy x-ray absorptiometry) T-score, body mass index, and routine demographic information. The data were analyzed using the Student t-test, chi-square analysis, and logistic regression analysis to identify potential confounding factors. RESULTS Of 297 patients, 34 (11.4%) had radiographic evidence of subsidence and 18 (6.1%) required revision surgery. The median subsidence grade for patients requiring revision surgery was 2.5, compared with 1 for those who did not. Chi-square analysis revealed a significantly higher incidence of revision surgery in patients with high-grade subsidence compared with those with low-grade subsidence. Seven of 18 patients (38.9%) requiring revision surgery suffered a vertebral body fracture. High-grade subsidence was a significant predictor of the need for revision surgery. CONCLUSIONS Patients with high-grade subsidence following stand-alone LLIF often required revision surgery. When evaluating patients for LLIF, supplemental instrumentation should be considered during the index surgery in patients with a significant risk of graft subsidence.

Three-port converter (TPC) topologies for renewable energy systems aim to provide higher efficiency and power density than conventional cascaded structures. This work proposes an analytical comparison of different TPC topologies for a photovoltaic LED lamp stand-alone system. A comparison using...... component stress factor (CSF) is performed, which gives a quantitative measure of the performance of the converter. The candidate topologies are compared to each other according to a defined LED lighting strategy and a solar irradiation profile....

The properties of a secure stand-alone positive personnel identity verification system are detailed. The system is designed to operate without the aid of a central computing facility and the verification function is performed in the absence of security personnel. Security is primarily achieved by means of data encryption on a magnetic stripe badge. Several operational configurations are discussed. Advantages and disadvantages of this system compared to a central computer driven system are detailed

Full Text Available Stand-alone renewable energy systems based on photovoltaics accompanied by battery storage are beginning to play an important role around the world in supplying power to remote areas. The objective of the study reported in this paper is to elaborate and design a bond graph model for sizing stand-alone domestic solar photovoltaic electricity systems and simulating the performance of the systems in a tropical climate. The systems modelled consist of an array of PV modules, a lead-acid battery, and a number of direct-current appliances. This paper proposes the combination of a lead-acid battery system with a typical stand-alone photovoltaic energy system under variable loads. The main aim of this work is to establish library graphical models for each individual component of a stand-alone photovoltaic system. A control strategy has been considered to achieve a permanent power supply to the load via photovoltaic/battery operation based on the power available from the sun. The complete model was simulated under two test cases, sunny and cloudy conditions. Simulation of the system using the Symbols software was performed, and the results show a stable control system and high efficiency. These results have been contrasted with real measured data from a measurement campaign carried out at the electrical engineering laboratory of Grenoble using various interconnection schemes.

Full Text Available Stand-alone photovoltaic (SAPV) systems are widely used in rural areas where there is no national grid, or as a precaution against power outages. In this study, a technical and economic analysis of a SAPV system was carried out using meteorological data for 75 province centers in the seven geographical regions of Turkey. Results obtained for each province center were grouped by geographical region, and the averages of the centers in each region are taken as output. A calculation algorithm based on MS Excel was established for these operations. The analyses made with the developed algorithm are repeated for five different scenarios covering periods when a constant heavy load is active in each season (winter, spring, summer, and autumn) and all year round. The developed algorithm calculates the life-cycle cost, the unit energy cost, the electrical capacity utilization rate, the amount of generated/excess energy per month, and the initial investment/replacement and operating and maintenance (O&M) costs of each element. As a result, the geographical regions of Turkey are compared graphically in terms of these outputs. Further investigations may include the sale of excess generated energy, the cost factors of small-scale grid-parallel PV systems, and the effects of government incentives.
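The life-cycle-cost bookkeeping such an algorithm performs can be sketched in a few lines: discount O&M and replacement outlays to present value, then divide by the discounted lifetime energy to get a unit energy cost. The rates, horizon and replacement schedule below are placeholders; the paper's spreadsheet formulation may differ.

```python
# Sketch of life-cycle cost (LCC) and unit energy cost arithmetic.
# Discount rate, lifetime and the sample figures are placeholders.

def present_value(amount, year, discount_rate):
    return amount / (1.0 + discount_rate) ** year

def life_cycle_cost(initial, annual_om, replacements, lifetime=25, rate=0.06):
    """replacements: list of (year, cost) pairs, e.g. battery swaps."""
    lcc = initial
    lcc += sum(present_value(annual_om, y, rate) for y in range(1, lifetime + 1))
    lcc += sum(present_value(c, y, rate) for y, c in replacements)
    return lcc

def unit_energy_cost(lcc, annual_kwh, lifetime=25, rate=0.06):
    """Levelized cost: LCC divided by discounted lifetime energy."""
    energy = sum(present_value(annual_kwh, y, rate) for y in range(1, lifetime + 1))
    return lcc / energy
```

For example, `life_cycle_cost(10000, 100, [(10, 2000), (20, 2000)])` adds two discounted battery replacements to the initial outlay before the O&M stream.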

In order to embed ESD in the EE curriculum, several approaches have been introduced and practiced in higher education institutions. One of the approaches is to introduce a new ESD course as an add-on to the existing curriculum, being either compulsory or elective and either designed for a single...... and reported examples of other ESD courses of the same kind. The presented conceptual framework is put into practice, characterising the AAU course as a stand-alone interdisciplinary course with a consensual approach. The conclusion is that the conceptual framework can provide an awareness of the design features...

Materials management information systems (MMISs) incorporate information tools that hospitals can use to automate certain business processes, increase staff compliance with these processes, and identify opportunities for cost savings. Recently, there has been a push by hospital administration to purchase enterprise resource planning (ERP) systems, information systems that promise to integrate many more facets of healthcare business. We offer this article to help materials managers, administrators, and others involved with information system selection understand the changes that have taken place in materials management information systems, decide whether they need a new system and, if so, whether a stand-alone MMIS or an ERP system will be the best choice.

An introduction to photoconductivity, semiconductors, and solar photovoltaic cells is included along with a demonstration of specific applications and application identification. Small solar cell power system design engineering is discussed. Solar PV power system applications involve classical direct electrical energy conversion and electric power system analysis and synthesis. Presentations and examples involve a variety of disciplines including structural analysis, electric power and load analysis, reliability, sizing and optimization; and, installation, operation and maintenance. Four specific system designs are demonstrated: water pumping; domestic uses; navigational and aircraft aids; and telecommunications. All of the applications discussed are for small power requirement (under 2 kilowatts), standalone systems to be used in remote locations.

Few data are available on the occurrence after stand-alone lateral lumbar interbody fusion (LLIF) of implant subsidence, whose definition and incidence vary across studies. The primary objective of this work was to determine the incidence of subsidence 1 year postoperatively, using an original measurement method, whose validity was first assessed. The secondary objective was to assess the clinical impact of subsidence. Implant subsidence after stand-alone LLIF is a common complication that can adversely affect clinical outcomes. Of 69 included patients who underwent stand-alone LLIF, 67 (97%) were re-evaluated at least 1 year later. Furthermore, 63 (91%) patients had two available computed tomography (CT) scans for assessing subsidence, one performed immediately after surgery and the other 1 year later. Reproducibility of the original measurement method was assessed in a preliminary study. Subsidence was defined as at least 4 mm loss of fused space height. The incidence of subsidence was 32% (20 patients). Subsidence was global in 7 (11%) patients and partial in 13 (21%) patients. Mean loss of height was 5.5 ± 1.5 mm. Subsidence predominated anteriorly in 50% of cases. The lordotic curvature of the fused segment was altered in 50% of patients, by a mean of 8° ± 3°. Fusion was achieved in 67/69 (97%) patients. The Oswestry score and visual analogue scale scores for low-back and nerve-root pain were significantly improved after 1 year in the overall population and in the groups with and without subsidence. Reproducibility of our measurement method was found to be excellent. Subsidence was common but without significant clinical effects after 1 year. Nevertheless, subsidence can be associated with pain and can result in loss of lumbar lordosis, which is a potential risk factor for degenerative disease of the adjacent segments. A score for predicting the risk of subsidence will now be developed by our group as a tool for improving patient selection to stand-alone LLIF.

Full Text Available Access to electricity can have a positive psychological impact by lessening the sense of exclusion and vulnerability often felt by orphanages. This paper presented a simulation and optimization study of a stand-alone photovoltaic power system that produced the desired power needs of an orphanage. Solar resources for the design of the system were obtained from the National Aeronautics and Space Administration (NASA) Surface Meteorology and Solar Energy website at a location of 6°51′N latitude and 7°35′E longitude, with an annual average solar radiation of 4.92 kWh/m2/d. This study is based on the modeling, simulation, and optimization of the energy system in the orphanage. The patterns of load consumption within the orphanage were studied and suitably modeled for optimization. The Hybrid Optimization Model for Electric Renewables (HOMER) software was used to analyze and design the proposed stand-alone photovoltaic power system model. The model was designed to provide an optimal system configuration based on hour-by-hour data for energy availability and demand. A detailed design, description, and expected performance of the system were presented in this paper.

Full Text Available Interest in DC microgrids is rapidly increasing along with the improvement of DC power technology because of its advantages. To support the integration process of DC microgrids with the existing AC utility grids, the form of hybrid AC/DC microgrids is considered for higher power conversion efficiency, lower component cost and better power quality. In the system, AC and DC portions are connected through interlink bidirectional AC/DC converters (ICs) with a proper control system and power management. In the stand-alone operation mode of AC/DC hybrid microgrids, the control of power injection through the IC is crucial in order to maintain the system security. This paper mainly deals with a coordination control strategy of the IC and a battery energy storage system (BESS) converter under stand-alone operation. A coordinated control strategy for the IC, which considers the state of charge (SOC) level of the BESS and the load shedding scheme as the last resort, is proposed to obtain better power sharing between AC and DC subgrids. The scheme will be tested with a hybrid AC/DC microgrid, using the PSCAD/EMTDC software.
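One way to read the proposed coordination is as a priority list: balance the two subgrids through the interlinking converter first, absorb the remainder with the BESS within its SOC limits, and shed load only as a last resort. The rule set below is our own simplification with invented thresholds and ratings, not the paper's controller.

```python
# Illustrative priority-rule dispatch for a stand-alone hybrid AC/DC microgrid.
# SOC limits and the BESS power rating are invented placeholders.

def dispatch(p_ac_surplus, p_dc_surplus, soc, soc_min=0.2, soc_max=0.9, p_bess_max=5.0):
    """Returns (interlink transfer AC->DC, BESS charge power, shed load).
    Positive surplus means excess generation in that subgrid; kW throughout."""
    # 1) Interlinking converter balances the two subgrids first.
    if p_ac_surplus > 0 > p_dc_surplus:
        transfer = min(p_ac_surplus, -p_dc_surplus)      # AC -> DC
    elif p_dc_surplus > 0 > p_ac_surplus:
        transfer = -min(p_dc_surplus, -p_ac_surplus)     # DC -> AC
    else:
        transfer = 0.0
    residual = p_ac_surplus + p_dc_surplus               # net surplus (+) or deficit (-)
    # 2) BESS absorbs or covers the remainder, within SOC and power limits.
    if residual >= 0:
        bess = min(residual, p_bess_max) if soc < soc_max else 0.0   # charge
        shed = 0.0
    else:
        bess = -min(-residual, p_bess_max) if soc > soc_min else 0.0  # discharge
        shed = max(0.0, -residual + bess)                # 3) shed uncovered deficit
    return transfer, bess, shed
```

For instance, an AC surplus with a DC deficit is resolved entirely by the interlinking converter, while a system-wide deficit with a depleted battery triggers load shedding.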

Photovoltaic energy nowadays has increased importance in electrical power applications, since it is considered an essentially inexhaustible and broadly available energy resource. However, the output power provided by the photovoltaic conversion process depends on solar irradiation and temperature. Therefore, to maximize the efficiency of the photovoltaic energy system, it is necessary to track the maximum power point of the PV array. The present paper proposes a maximum power point tracking (MPPT) method, based on a fuzzy logic controller (FLC), applied to a stand-alone photovoltaic system. It uses sampled measurements of the PV array power and voltage, then determines the optimal increment required to reach the optimal operating voltage which permits maximum power tracking. This method achieves high accuracy around the optimum point when compared to the conventional one. The stand-alone photovoltaic system used in this paper includes two bidirectional DC/DC converters and a lead-acid battery bank to cover periods of scarcity. One converter works as an MPP tracker, while the other regulates the batteries' state of charge and compensates the power deficit to provide continuous delivery of energy to the load. The obtained simulation results show the effectiveness of the proposed fuzzy logic controller.
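The FLC's effect, a large voltage increment far from the MPP and a small one near it, can be crudely approximated with a crisp variable-step rule in which the step is proportional to the measured power slope. The gain and the toy PV curve below are invented; a real fuzzy rule base would shape the step nonlinearly rather than linearly.

```python
# Crisp stand-in for a fuzzy variable-step MPPT rule: step size shrinks
# as the measured power slope flattens near the MPP.  Curve and gain invented.

def pv_power(v):
    return max(0.0, 80.0 - 0.5 * (v - 30.0) ** 2)   # toy curve, MPP at 30 V

def mppt_variable_step(v=20.0, gain=0.2, iters=60):
    p = pv_power(v)
    dv = 1.0
    for _ in range(iters):
        p_new = pv_power(v + dv)
        slope = (p_new - p) / dv if dv else 0.0     # measured dP/dV
        v, p = v + dv, p_new
        dv = gain * slope            # big step far from MPP, small step near it
        if abs(dv) < 1e-4:           # converged: slope (hence step) is ~zero
            break
    return v
```

Compared with the fixed-step loop of a conventional tracker, the shrinking step removes most of the steady-state oscillation around the optimum point.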

The final result of an international assessment of the market for stand-alone photovoltaic systems in cottage industry applications is reported. Nonindustrialized countries without centrally planned economies were considered. Cottage industries were defined as small rural manufacturers, employing less than 50 people, producing consumer and simple products. The data to support this analysis were obtained from secondary and expert sources in the U.S. and in-country field investigations of the Philippines and Mexico. The near-term market for photovoltaics for rural cottage industry applications appears to be limited to demonstration projects and pilot programs, based on an in-depth study of the nature of cottage industry, its role in the rural economy, the electric energy requirements of cottage industry, and a financial analysis of stand-alone photovoltaic systems as compared to their most viable competitor, diesel driven generators. Photovoltaics are shown to be a better long-term option only for very low power requirements. Some of these uses would include clay mixers, grinders, centrifuges, lathes, power saws and lighting of a workshop.

With reference to a stand-alone power plant configuration, a photovoltaic (PV) system, integrated with an electrolytic hydrogen production and fuel cell reconversion line, is examined with the aim of checking the possibility of assuring energy supply for a given load with time continuity. In order to size the PV array as a function of the annual load diagram for a given site, a method which utilizes the monthly average values of solar energy density collected on a horizontal surface as input data is presented. The results demonstrate that a PV-H{sub 2} system is suitable for a modular stand-alone solar power station able to supply each generic load continuously. This technical result has to be considered the first necessary condition for photovoltaics to overcome the limit of intermittent operation and become a real energy option. An example of an application for a small Mediterranean island, Vulcano Island, permanently inhabited by 469 people, is also discussed. This typical case actually shows the technical viability of the PV-H{sub 2} solution in powering a small community with the required reliability.
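Sizing the PV array from monthly average insolation reduces, in its simplest reading, to finding the smallest array whose worst month still covers that month's load. The sketch below assumes a flat system efficiency and invented data; the paper's method, which must also size the hydrogen buffer, is more detailed.

```python
# Back-of-envelope PV area sizing from monthly average insolation.
# The flat system efficiency and all data are invented placeholders.

MONTH_DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def required_pv_area(monthly_insolation_kwh_m2_day, monthly_load_kwh,
                     system_efficiency=0.10):
    """Smallest area (m^2) whose worst month still meets that month's load."""
    areas = [load / (h * days * system_efficiency)
             for h, days, load in zip(monthly_insolation_kwh_m2_day,
                                      MONTH_DAYS, monthly_load_kwh)]
    return max(areas)               # the worst month dictates the array size
```

Seasonal hydrogen storage, as in the PV-H{sub 2} scheme, relaxes exactly this worst-month constraint by shifting summer surplus into winter.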

This paper presents a new robust fuzzy energy management strategy for stand-alone hybrid power systems. It consists of two levels: a centralized fuzzy supervisory control, which generates the power references for each decentralized robust fuzzy control. The hybrid power system comprises a photovoltaic panel and a wind turbine as renewable sources, a micro turbine generator and a battery storage system. The proposed control strategy is able to satisfy the load requirements based on a fuzzy supervisor controller and to manage power flows between the different energy sources and the storage unit while respecting the state of charge and the variation of wind speed and irradiance. The centralized controller is designed based on If-Then fuzzy rules to manage and optimize the hybrid power system production by generating the reference power for the photovoltaic panel and wind turbine. The decentralized controller is based on the Takagi-Sugeno fuzzy model; it stabilizes each photovoltaic panel and wind turbine in the presence of disturbances and parametric uncertainties and optimizes tracking of the reference given by the centralized controller level. Sufficient stability conditions are formulated as linear matrix inequalities using Lyapunov stability theory. The effectiveness of the proposed strategy is finally demonstrated on a SAHPS (stand-alone hybrid power system). (paper)

Photovoltaic (PV) stand-alone systems need to achieve multiple energy conversion modes, i.e. the energy conversion from PV to a local energy storage as well as energy conversion from the energy storage to the load. This paper documents the practical design considerations for the development of a three-port-converter for this purpose, optimized for the specifications for driving an Organic Light Emitting Diode (OLED) panel intended for lighting purposes. By using a three-port-converter, featuring shared components for each conversion mode, the converter reaches 97 % efficiency at 1.8 W during conversion from photovoltaic panel to the battery, and 97 % in the area 1.4 W to 2 W for power delivery to the OLED.

This paper documents the well-established health risks of fuel-based lighting (kerosene lamps and fuel-powered generators) and proposes a design of a stand-alone solar PV system for sustainable home lighting in rural Nigeria. The design was done for three different patterns of electricity consumption with energy-efficient lightings (EELs), using two different battery types (Rolls Surrette 6CS25PS and Hoppecke 10 OpzS 1000): (i) judicious power consumption, (ii) normal power consumption, and (iii) excess power consumption; these were compared with incandescent light bulb consumption. The stand-alone photovoltaic energy systems were designed to match rural Nigerian sunlight and weather conditions and to meet the required lighting of the household. The objective function and constraints for the design models were formulated, and optimization procedures were used to find the best solution (reliability at the lowest lifecycle cost). Initial capital costs as well as annualized costs over 5, 10, 15, 20, and 25 years were quantified and documented. The design identified the most cost-effective and reliable solar and battery array among the patterns of electricity consumption with EEL options (judicious, normal, and excess power consumption).
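
The sizing logic described (an objective function with constraints, minimizing lifecycle cost subject to reliability) can be sketched as a brute-force search. Every price, capacity, and load figure below is an illustrative assumption, not a value from the study:

```python
# Hypothetical brute-force PV sizing sketch: choose the cheapest
# panel/battery combination that meets a daily load and a required
# number of days of storage autonomy. All figures are illustrative.

def size_system(daily_load_wh, panel_wh_per_day, batt_wh,
                panel_cost, batt_cost, autonomy_days=2):
    """Return (n_panels, n_batteries, cost) minimizing capital cost
    subject to: generation covers the daily load, and storage covers
    the required days of autonomy."""
    best = None
    for n_p in range(1, 50):
        if n_p * panel_wh_per_day < daily_load_wh:
            continue                      # not enough generation
        for n_b in range(1, 50):
            if n_b * batt_wh < autonomy_days * daily_load_wh:
                continue                  # not enough storage
            cost = n_p * panel_cost + n_b * batt_cost
            if best is None or cost < best[2]:
                best = (n_p, n_b, cost)
            break  # smallest feasible battery count is cheapest
    return best
```

A real design would replace the capital-cost objective with the full annualized lifecycle cost the paper optimizes, but the feasibility-plus-minimization structure is the same.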

Studies are performed to develop a simulation program for a stand-alone photovoltaic power generation system equipped with a lead-acid battery. In this system, the load is connected in shunt with the solar cell array output through the intermediary of a lead-acid battery and an inverter. In the program, the solar cell model is built taking parallel resistance into account, and the temperature dependence of the constants is described using approximations obtained experimentally by the Solar Techno Center of JQA (Japan Quality Assurance Organization), Hamamatsu. Insolation data for the model are taken from METPV, compiled by the Japan Weather Association, and load data are taken from measurements made at Shizuoka. The program is compared with operating data from Hamamatsu, and the agreement is largely satisfactory. Simulations are conducted at five typical locations in Japan using this program, and it is found that the array load matching correction factor depends on seasonal changes rather than locality, and that the battery contribution rate does not change much throughout the year and is not dependent on locality. 5 refs., 7 figs., 3 tabs.
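
A solar cell model that includes parallel (shunt) resistance, as used in the program above, leads to an implicit equation for the cell current. A minimal illustrative sketch follows; all parameter values are assumptions, not the JQA-fitted constants:

```python
import math

# Single-diode PV cell model with series resistance Rs and parallel
# resistance Rp. The cell current satisfies the implicit equation
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp
# solved here by damped fixed-point iteration. Parameters are
# illustrative, not fitted values.

def cell_current(v, iph=3.0, i0=1e-9, rs=0.01, rp=10.0, n_vt=0.05):
    i = iph                         # start from the photocurrent
    for _ in range(200):
        vd = v + i * rs             # diode junction voltage
        i_new = iph - i0 * math.expm1(vd / n_vt) - vd / rp
        i = 0.5 * i + 0.5 * i_new   # damped update for stability
    return i
```

Near short circuit the current is close to the photocurrent minus the shunt leakage; as the voltage rises toward the knee of the I-V curve, the diode term takes over and the current falls.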

The Indonesian government is pushing to optimize the use of renewable energy resources in electricity generation. One of its efforts is the Independent Energy Village plan, a program that aims to fulfill the electricity needs of isolated or remote villages in Indonesia. To support the penetration of renewable energy resources in electricity generation, a hybrid power generation system is developed. The simulation in this research is based on the availability of renewable energy resources at Brumbun beach, Tulungagung, East Java. Initially, electricity was supplied through stand-alone generators installed at each house, so supplying electricity between 5 p.m. and 9 p.m. incurred high operational costs. Given this problem, this research designs a stand-alone hybrid electricity generation system that may consist of diesel, wind, and photovoltaic sources. The design is done using the HOMER software to optimize the use of electricity from renewable resources and to reduce the operation of diesel generation. The combination of renewable energy resources in electricity generation resulted in an NPC of 44,680, a COE of 0.268, and CO2 emissions of 0.038 %, much lower than the use of a diesel generator only.
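
The two economic figures quoted, net present cost (NPC) and cost of energy (COE), follow standard definitions used by HOMER-style tools. A minimal sketch of how they are computed; the discount rate and cash flows below are illustrative assumptions:

```python
# NPC: capital cost plus all future annual costs discounted to the
# present. COE: the NPC annualized with the capital recovery factor
# (CRF) and divided by the annual energy served. Values illustrative.

def npc(capital, annual_cost, rate, years):
    """Net present cost of a project."""
    return capital + sum(annual_cost / (1 + rate) ** t
                         for t in range(1, years + 1))

def coe(total_npc, rate, years, annual_energy_kwh):
    """Levelized cost of energy per kWh served."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return total_npc * crf / annual_energy_kwh
```

With no annual costs the NPC is just the capital outlay; with a one-year horizon the CRF reduces to (1 + rate), which is a quick sanity check on the formulas.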

The purposes of the present study are to evaluate the subsidence and nonunion that occurred after anterior cervical discectomy and fusion using a stand-alone intervertebral cage and to analyze the risk factors for these complications. Thirty-eight patients (47 segments) who underwent anterior cervical fusion using a stand-alone polyetheretherketone (PEEK) cage and an autologous cancellous iliac bone graft from June 2003 to August 2008 were enrolled in this study. The anterior and posterior segmental heights and the distance from the anterior edge of the upper vertebra to the anterior margin of the cage were measured on plain radiographs. Subsidence was defined as a ≥ 2 mm (minor) or ≥ 3 mm (major) decrease of the segmental height at the final follow-up compared to that measured in the immediate postoperative period. Nonunion was evaluated according to whether the instability was ≥ 2 mm in the interspinous distance on flexion-extension lateral radiographs. The anterior and posterior segmental heights decreased from the immediate postoperative period to the final follow-up by 1.33 ± 1.46 mm and 0.81 ± 1.27 mm, respectively. Subsidence of ≥ 2 mm and ≥ 3 mm was observed in 12 segments (25.5%) and 7 segments (14.9%), respectively. Among the expected risk factors for subsidence, a smaller anteroposterior (AP) diameter (14 mm vs. 12 mm) of cages (p = 0.034; odds ratio [OR], 0.017) and larger intraoperative distraction (p = 0.041; OR, 3.988) carried a significantly higher risk of subsidence. Intervertebral nonunion was observed in 7 segments (7/47, 14.9%). Compared with the union group, the nonunion group had a significantly higher ratio of two-level fusions to one-level fusions (p = 0.001). Anterior cervical fusion using a stand-alone cage with a large AP diameter, while preventing intraoperative anterior over-distraction, will be helpful to prevent cage subsidence. Two-level cervical fusion might require more careful attention to avoid nonunion.

Highlights: • A Li-ion battery based stand-alone a-Si PV system was designed, composed of three a-Si panels with an efficiency of 7% and 40 cells of LFP batteries. • The effects of solar radiation and environmental temperature on a-Si panels have been investigated for three cities: Istanbul, Ankara, and Adana. • Using translation formulas, BSPV outputs are predictable for any location outside standard test conditions. - Abstract: The number of photovoltaic (PV) system installations is increasing rapidly. As more people learn about this versatile and often cost-effective power option, this trend will accelerate. This document presents a recommended design for a battery based stand-alone photovoltaic (BSPV) system. A BSPV system can be applied in different areas, including warning signals, lighting, refrigeration, communication, residential water pumping, remote sensing, and cathodic protection. The presented calculation method gives a proper idea of a system sizing technique. Based on the application load, different scenarios are possible for designing a BSPV system. In this study, a battery based stand-alone system was designed. The electricity generation part consists of three a-Si panels connected in parallel, and for the storage part an LFP (lithium iron phosphate) battery was used. The high power LFP battery pack comprises 40 cells configured 8S5P (8 series, 5 parallel). Each individual pack weighs 0.5 kg and is rated at 25.6 V. To evaluate the efficiency of the a-Si panels with respect to temperature and solar irradiation, the cities of Istanbul, Ankara, and Adana in Turkey were selected. Temperature and solar irradiation data were gathered from reliable sources, and by using translation equations the current and voltage outputs of the panels were calculated. From these calculations, current and energy outputs were computed by considering an average effective solar irradiation time per day in Turkey. The calculated power values were inserted into a battery cycler system.

This paper presents the structure design, microfabrication processes, calibration techniques and experimental results of differential capacitance force sensors with features of sub-nano-newton sensitivity, up to 10 000 Hz sampling rate, and applicability as stand-alone devices. The representative sensor demonstrates a force resolution of 0.11 nN at a 19 Hz sampling rate or 1.47 nN at 10 000 Hz. The proposed novel asymmetric differential capacitance structure results in a remarkable increase in the ratio of measurement range to resolution in comparison with the traditional symmetric structure. In addition, stiction between silicon and glass caused by capillary force during dicing is eliminated by a hydrophobization treatment. Such a treatment is essential to successfully fabricate structures with a large ratio of overlapped area to gap in silicon/glass anodic bonding processes.

This work focuses on a special type of wireless sensor network (WSN) that we refer to as a standalone network. These networks operate in harsh and extreme environments where data collection is done only occasionally. Typical examples include habitat monitoring systems, monitoring systems in chemical plants, etc. Given the resource-constrained operation of a sensor network, where the nodes are battery powered and buffer sizes are limited, efficient methods for in-network data storage and its subsequent fast and reliable transmission to a gateway are desirable. To save scarce resources and to prolong the lifetime of the whole network, a lossy data aggregation method can be applied. It is especially viable in networks where several sensors are measuring the same physical phenomenon and only average values of sensor readings are of interest. In this paper we present a method for efficient lossy data aggregation.
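
The average-only aggregation idea above can be sketched simply: instead of buffering and forwarding every raw reading toward the gateway, relays combine readings into a (count, sum) summary, from which the gateway still recovers the mean. Node names and readings below are illustrative, not from the paper:

```python
# Lossy in-network aggregation sketch: per-node reading buffers are
# collapsed into one constant-size (count, sum) pair. Individual
# readings are lost (hence "lossy"), but the average survives.

def aggregate(readings_per_node):
    """Combine per-node reading lists into one (count, sum) summary."""
    count = sum(len(r) for r in readings_per_node.values())
    total = sum(sum(r) for r in readings_per_node.values())
    return count, total

def gateway_average(summary):
    """Recover the network-wide mean reading at the gateway."""
    count, total = summary
    return total / count
```

The design choice here is the classic aggregation trade-off: a summary of two numbers replaces arbitrarily many buffered readings, cutting both storage and radio transmissions at the cost of losing per-sensor detail.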

This paper presents a model of a stand-alone solar energy conversion system based on a synchronous machine working as a synchronous condenser (SC) in an overexcited state. The proposed model consists of a synchronous condenser and a DC/DC boost converter whose output feeds the field of the SC. The boost converter is supplied by the modelled solar panel, and a daytime-variable irradiance is fed to the panel during the simulation. The model also has an alternate source of rechargeable batteries for times when irradiance falls below a threshold value. The excess power produced when irradiance is ample is divided in two parts: one is fed to the boost converter while the other is used to recharge the batteries. A simulation is done in MATLAB-SIMULINK, and the obtained results show that supplying reactive power with such a model is feasible.

Use of GPS for road pricing has often been suggested as a way of creating more efficient charging strategies than existing systems based on cordon lines or time use. In Denmark, Copenhagen participated with the AKTA project in the PRoGRESS programme, sponsored by the EU. A major part of the AKTA project was to equip 500 cars with GPS receivers. The paper presents the methods and results from a study of GPS quality in relation to road pricing in a dense urban area. The data collected from 500 cars over a two-year period in the Copenhagen region were analyzed in order to determine whether stand-alone GPS quality and reliability are adequate for implementation of an operational road pricing system in Copenhagen. The results from the analysis show that the satellite availability in Copenhagen is not sufficient to form the basis for a reliable operational road pricing system. The narrow...

A stand-alone renewable-energy system employing a hydrogen-based energy store is now being commissioned within the HaRI project at West Beacon Farm, Leicestershire, UK. The interconnection of the various generators, loads and storage system is made through a central DC busbar: an arrangement that is believed to be unique within systems of this type and scale. The rotating generators, such as the wind turbines, are connected through standard industrial drives operating in regenerative mode, while the DC devices - electrolyser, fuel cell and solar photovoltaic array - employ custom DC-DC converters. This paper reviews the design philosophy of the electrical system and the various converters required. Modelling and simulation of the system are discussed along with practical lessons learnt from its implementation, and some initial results are presented. (author)

Conventional construction of digital dynamic system simulations often involves collecting differential equations that model each subsystem, arranging them into a standard form, and obtaining their numerical solution as a single coupled, total-system simultaneous set. Simulation by numerical coupling of independent stand-alone subsimulations is a fundamentally different approach that is attractive because, among other things, the architecture naturally facilitates high fidelity, broad scope, and discipline independence. Recursive feedback is defined and discussed as a candidate approach to multidiscipline dynamic system simulation by numerical coupling of self-contained, single-discipline subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Distributed and centralized implementations of coupling have been considered. Numerical results are evaluated by direct comparison with a standard total-system, simultaneous-solution approach.
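
As a toy illustration of the recursive-feedback idea (a deliberately simple stand-in, not the satellite example), two self-contained Euler subsimulations of a linear two-state system can be coupled by repeatedly exchanging whole trajectories until they stop changing:

```python
# Recursive-feedback coupling sketch: each subsimulation solves its
# own ODE over the full interval, driven by the other's previously
# stored trajectory; passes over the interval repeat until the
# coupled solution settles. Test system: dx/dt = -x + y,
# dy/dt = -y + 0.5*x (purely illustrative).

def simulate(deriv, x0, forcing, dt, steps):
    """Stand-alone explicit-Euler subsimulation driven by a stored
    forcing trajectory from the other subsystem."""
    xs = [x0]
    for k in range(steps):
        xs.append(xs[-1] + dt * deriv(xs[-1], forcing[k]))
    return xs

def couple(dt=0.01, steps=100, passes=30):
    ys = [0.0] * (steps + 1)          # initial guess for y(t)
    for _ in range(passes):
        xs = simulate(lambda x, y: -x + y, 1.0, ys, dt, steps)
        ys = simulate(lambda y, x: -y + 0.5 * x, 0.0, xs, dt, steps)
    return xs[-1], ys[-1]             # states at t = 1
```

The fixed point of this iteration is exactly the explicit-Euler solution of the fully coupled system, so on a finite interval the repeated passes converge to the same answer a simultaneous total-system solve would give, while each subsimulation stays self-contained.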

This report describes the development of a stand-alone version of the 11kW Gaia wind turbine. Various possible configurations are investigated and a configuration using a back-to-back converter is chosen. A model is developed for controller design of the fast controllers of the unit. Controllers are designed and a prototype is built for testing. The report documents the performance of the prototype through measurements done on the full scale prototype installed in a test facility where it has been tested both as a standalone unit and in parallel with a diesel genset. For system wide power quality assessment and controller design a dynamic performance assessment model has been developed. (au)

An energy management system for a stand-alone microgrid composed of diesel generators, a wind turbine generator, a biomass generator and an ESS (energy storage system) is proposed in this paper. Different operation objectives are achieved by a hierarchical control structure with different time scales. Firstly, the optimal schedules of the diesel generators, wind turbine generator, biomass generator and ESS are determined fifteen minutes ahead according to super short-term forecasts of load and wind speed in the optimal scheduling layer. A comprehensive analysis that takes the uncertainty of load and wind speed into account is conducted in this layer to minimize the operation cost of the system and ensure a desirable range for the state of charge of the ESS. Secondly, the operating points of each unit are regulated dynamically in the real-time control layer to guarantee real-time power balance and a safe range of diesel generation, improving the response capability under significant forecast deviations and other emergencies, e.g. sudden load increases. Finally, the effectiveness of the proposed energy management strategy is verified on an RT_Lab based real-time simulation platform, and the economic performance with different types of ESS is analyzed as well. - Highlights: • A hierarchical control strategy is proposed for a stand-alone microgrid. • The uncertainties of load and wind speed have been considered. • Better economic performance and high reliability of the system can be achieved. • The influences of different energy storage systems have been analyzed.
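
The balancing task of the real-time layer can be sketched as a simple priority dispatch: serve any shortfall from the ESS within its state-of-charge limits, then from diesel within its safe range, and absorb any surplus into the ESS. All limits and capacities below are assumptions, not values from the paper:

```python
# Illustrative real-time dispatch rule for a stand-alone microgrid.
# ess_power > 0 means the storage is discharging; soc is the state
# of charge as a fraction of capacity. All parameters are made up.

def dispatch(load, renewable, soc, soc_min=0.2, soc_max=0.9,
             ess_cap=100.0, ess_pmax=30.0, diesel_max=50.0):
    """Return (ess_power, diesel_power, new_soc) for one interval."""
    net = load - renewable
    if net >= 0:                        # shortfall: discharge ESS first
        avail = min(ess_pmax, (soc - soc_min) * ess_cap)
        ess = min(net, max(avail, 0.0))
        diesel = min(net - ess, diesel_max)
    else:                               # surplus: charge the ESS
        room = min(ess_pmax, (soc_max - soc) * ess_cap)
        ess = -min(-net, max(room, 0.0))
        diesel = 0.0
    return ess, diesel, soc - ess / ess_cap
```

A real EMS would layer the fifteen-minute-ahead economic schedule on top of this rule; the sketch only shows the instantaneous power-balance logic.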

Full Text Available Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions and to identify the specific change techniques that lead to improvement in body image. The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions.
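
Pooled effect sizes like the d+ values above are typically inverse-variance weighted means of the per-study standardized mean differences. A minimal fixed-effect sketch; the study values in the test are made up, not taken from the review:

```python
# Fixed-effect meta-analysis sketch: each study contributes its
# effect size d and sampling variance; studies with smaller variance
# (larger samples) get proportionally more weight.

def pooled_d(effects):
    """effects: list of (d, variance) pairs. Returns the
    inverse-variance weighted mean effect size."""
    weights = [1.0 / v for _, v in effects]
    num = sum(w * d for (d, _), w in zip(effects, weights))
    return num / sum(weights)
```

A full review like the one above would additionally use a random-effects model and bias corrections; this sketch shows only the basic weighting step.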

A new and refreshingly different approach to presenting the foundations of statistical algorithms, Foundations of Statistical Algorithms: With References to R Packages reviews the historical development of basic algorithms to illuminate the evolution of today's more powerful statistical algorithms. It emphasizes recurring themes in all statistical algorithms, including computation, assessment and verification, iteration, intuition, randomness, repetition and parallelization, and scalability. Unique in scope, the book reviews the upcoming challenge of scaling many of the established techniques.

The hypothesis of this work is that there are social, financial, technical, managerial, institutional, and political key factors that may either support or prevent the success of small standalone energy systems in rural areas. This research aims to contribute to the identification of such factors and to study their relevance to the performance and sustainability of standalone energy systems in rural areas. To meet this purpose, a wide range of literature was reviewed, including rural electrification programmes and projects, research and development projects on access to electricity in rural areas, impact studies, and others, and a field research survey was done in the Andes and Upper Jungle regions of Peru. Nineteen possible critical factors were identified: thirteen relevant at the local context (the community or village), and six relevant at the national (or wider) context. From the literature review it was found that the possible local critical factors were relevant to only four of the six categories of factors considered initially (i.e. social, financial, technological, and managerial); the other two categories, political and institutional, were found to be more relevant to the national context and were therefore included in the group of possible critical factors of wider context. A series of questionnaires was designed to collect field data, which was later used to analyse and establish the relation of each identified factor with the success of the systems studied. The survey research was implemented in 14 villages, 7 with small diesel sets and 7 with small hydropower schemes, all spread across the Andes and Upper Jungle of Peru, carefully selected to be representative of regions with isolated standalone systems and with different socioeconomic backgrounds. Of the 13 possible critical factors of local context, it was found that only 3 are really critical; the others are important but not critical; one of them (technical

Anterior cervical discectomy and fusion (ACDF) constitutes the conventional treatment of cervical disc herniation due to degenerative disc disease (DDD). ACDF with plating presents a variety of complications postoperatively, and stand-alone cages are thought to be a promising alternative. The aim of this study was, firstly, to analyze prospectively collected data from a sample of patients treated with single ACDF using the C-Plus self-locking stand-alone PEEK cage system, without the use of plates or screws, in order to evaluate patients' pain levels, utilizing the Neck and Arm Pain scale as an expression of the visual analogue scale (VAS). Secondly, we aimed to evaluate health-related quality of life, via the Short Form 36 (SF-36) and Neck Disability Index (NDI). Thirty-six patients (19 male and 17 female) with mean age 49.6±7 years who underwent successful single ACDF using a self-locking stand-alone PEEK cage for symptomatic cervical DDD were selected for the study. Neck and Arm pain, as well as SF-36 and NDI, were estimated preoperatively and 1, 3, 6, and 12 months postoperatively. Patients underwent preoperative and postoperative clinical, neurological and radiological evaluation. The clinical and radiological outcomes were satisfactory after a minimum 1-year follow-up. All results were statistically significant (P<0.05), excluding improvement in NDI measured between 6 and 12 months. SF-36, Neck Pain, as well as Arm Pain featured gradual and constant improvement during follow-up, with best scores presenting at 12 months after surgery, while NDI reached its best at 6 months postoperatively. Generally, all scores showed improvement postoperatively during the different phases of the follow-up. Consequently, ACDF using the C-Plus cervical cage constitutes an effective method for cervical disc herniation treatment, in terms of postoperative improvement in pain levels and health-related quality of life, and a safe alternative to the conventional method of treatment for cervical DDD.

The number of photovoltaic (PV) system installations is increasing rapidly. As more people learn about this versatile and often cost-effective power option, this trend will accelerate. This document presents a recommended design for a battery based stand-alone photovoltaic (BSPV) system. A BSPV system can be applied in different areas, including warning signals, lighting, refrigeration, communication, residential water pumping, remote sensing, and cathodic protection. The presented calculation method gives a proper idea of a system sizing technique. Based on the application load, different scenarios are possible for designing a BSPV system. In this study, a battery based stand-alone system was designed. The electricity generation part consists of three a-Si panels connected in parallel, and for the storage part an LFP (lithium iron phosphate) battery was used. The high power LFP battery pack comprises 40 cells configured 8S5P (8 series, 5 parallel). Each individual pack weighs 0.5 kg and is rated at 25.6 V. To evaluate the efficiency of the a-Si panels with respect to temperature and solar irradiation, the cities of Istanbul, Ankara and Adana in Turkey were selected. Temperature and solar irradiation data were gathered from reliable sources, and by using translation equations the current and voltage outputs of the panels were calculated. From these calculations, current and energy outputs were computed by considering an average effective solar irradiation time per day in Turkey. The calculated power values were inserted into a battery cycler system, and the behavior of high power LFP batteries in a time sequence of 7.2 h was evaluated. The charging and discharging cycles were obtained and their behavior was discussed. According to the results, Istanbul has the lowest peak-month energy, followed by Ankara, and Adana has the highest number of peak months and energy storage. It was observed during the tests that values up to 4 A was

Indirect decompression of the neural structures through interbody distraction and fusion in the lumbar spine is feasible, but cage subsidence may limit maintenance of the initial decompression. The influence of interbody cage size on subsidence and symptoms in minimally invasive lateral interbody fusion is heretofore unreported. The authors report the rate of cage subsidence after lateral interbody fusion, examine the clinical effects, and present a subsidence classification scale. The study was performed as an institutional review board-approved prospective, nonrandomized, comparative, single-center radiographic and clinical evaluation. Stand-alone short-segment (1- or 2-level) lateral lumbar interbody fusion was investigated with 12 months of postoperative follow-up. Two groups were compared. Forty-six patients underwent treatment at 61 lumbar levels with standard interbody cages (18 mm anterior/posterior dimension), and 28 patients underwent treatment at 37 lumbar levels with wide cages (22 mm). Standing lateral radiographs were used to measure segmental lumbar lordosis, disc height, and rate of subsidence. Subsidence was classified using the following scale: Grade 0, 0%-24% loss of postoperative disc height; Grade I, 25%-49%; Grade II, 50%-74%; and Grade III, 75%-100%. Fusion status was assessed on CT scanning, and pain and disability were assessed using the visual analog scale and Oswestry Disability Index. Complications and reoperations were recorded. Pain and disability improved similarly in both groups. While significant gains in segmental lumbar lordosis and disc height were observed overall, the standard group experienced less improvement due to the higher rate of interbody graft subsidence. A difference in the rate of subsidence between the groups was evident at 6 weeks (p = 0.027), 3 months (p = 0.042), and 12 months (p = 0.047). At 12 months, 70% in the standard group and 89% in the wide group had Grade 0 or I subsidence, and 30% in the standard group
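
The subsidence classification scale above maps percentage loss of postoperative disc height directly to a grade, which can be encoded verbatim:

```python
# Direct encoding of the subsidence scale from the study:
# Grade 0: 0%-24% loss of postoperative disc height,
# Grade I: 25%-49%, Grade II: 50%-74%, Grade III: 75%-100%.

def subsidence_grade(height_loss_pct):
    """Return the subsidence grade (0-3) for a given percentage loss
    of postoperative disc height."""
    if not 0 <= height_loss_pct <= 100:
        raise ValueError("loss must be a percentage in [0, 100]")
    if height_loss_pct < 25:
        return 0
    if height_loss_pct < 50:
        return 1
    if height_loss_pct < 75:
        return 2
    return 3
```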

This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connected modes. The stand-alone control features a complex output voltage controller capable of handling nonlinear loads and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of an isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase...

A stand-alone micro-hydropower station was presented. The plant comprised a squirrel cage induction machine coupled to a Kaplan water turbine. Power converters were used to control the variable frequency and voltage outputs of the generator caused by variations in water flow. The hydropower plant was installed at a farm in the KwaZulu-Natal region of South Africa and was designed to provide electricity in relation to the low power demand of users in the region as well as according to the site's hydrology and topology. Load forecasts for the 8 houses using the system were conducted. A generator with a higher output than the average power needed to feed the load was selected in order to ensure load supply during peak demand. The system was designed to store energy generated during off-peak periods in batteries. An AC-DC-AC converter was used as an interface between the generator and the load in order to ensure voltage and frequency stabilization. Simulations of plant components were conducted to demonstrate output power supply during water flow variations. Results of the modelling study indicated that power converters are needed to stabilize generator outputs. The hydropower design is a cost-effective means of supplying power to low-income households. 10 refs., 2 tabs., 7 figs.

This seminar material was developed primarily to provide solar photovoltaic (PV) applied engineering technology to the Federal community. An introduction to photoconductivity, semiconductors, and solar photovoltaic cells is included along with a demonstration of specific applications and application identification. The seminar details general systems design and incorporates most known information from industry, academia, and Government concerning small solar cell power system design engineering, presented in a practical and applied manner. Solar PV power system applications involve classical direct electrical energy conversion and electric power system analysis and synthesis. Presentations and examples involve a variety of disciplines including structural analysis, electric power and load analysis, reliability, sizing and optimization; and installation, operation and maintenance. Four specific system designs are demonstrated: water pumping, domestic uses, navigational and aircraft aids, and telecommunications. All of the applications discussed are for small-power-requirement (under 2 kilowatts), stand-alone systems to be used in remote locations. Also presented are practical lessons gained from currently installed and operating systems, problems at sites and their resolution, a logical progression through each major phase of system acquisition, as well as thorough design reviews for each application.

Full Text Available This study presents a stand-alone excitation synchronous wind power generator (SESWPG) with a power flow management strategy (PFMS). The rotor speed of the excitation synchronous generator tracks the utility grid frequency by using servo motor tracking technologies. The automatic voltage regulator governs the exciting current of the generator to achieve the control goal of stable voltage. When wind power is less than the needs of the consumptive loading, the proposed PFMS increases motor torque to provide a positive power output for the loads, while keeping the generator speed constant. Conversely, during periods when wind power is greater than the output load, the redundant power produced by the generator is charged to the battery pack and the motor speed remains constant with very low power consumption. The advantage of the proposed SESWPG is that the generator can directly output stable alternating current (AC) electricity without using additional DC–AC converters. The operation principles and software simulation of the system are described in detail. Experimental results from a laboratory prototype are shown to verify the feasibility of the system.

In order to evaluate the simulation results of a photovoltaic power generation system, an operation simulation was carried out using actual measured data from a stand-alone PV system in Miyakojima, Okinawa Prefecture, for comparison with the actual operation data. Electric power was supplied to 250 houses and primary/junior high schools in the surrounding villages, with an average demand load of approximately 90 kW and a maximum of approximately 200 kW. Power was supplied by PV generation during sunshine hours, with excess power charged into storage batteries and then supplied from the batteries at night. The array capacity was set at 750 kWp; characteristic-type models were used for the output current and the storage batteries, and an actual efficiency curve was used for the inverter. The weather data used were the actual inclined insolation quantity and outside air temperature for the one-month period of November. Power charged in excess of 100% of battery capacity was termed overflow power. When the state of charge was 30% or less, a diesel generator was run at rated operation for one hour, the power of which was termed backup power. As a result, the simulation was found to be nearly in agreement with the actual measurements. 5 refs., 7 figs., 2 tabs.
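
The overflow and backup bookkeeping described above can be sketched as a simple state-of-charge loop; the capacity, backup energy, and profiles below are made-up illustrations, not the Miyakojima data:

```python
# Storage bookkeeping sketch: PV surplus charges the battery, charge
# beyond 100% state of charge is counted as overflow, and a backup
# generator contributes a fixed energy block whenever the state of
# charge drops to the backup threshold (30%) or below.

def simulate_storage(pv, load, cap_kwh, soc0=0.5,
                     backup_kwh=90.0, soc_backup=0.30):
    """Return (final_soc_kwh, overflow_kwh, backup_kwh_used) after
    stepping through per-interval PV generation and load."""
    soc = soc0 * cap_kwh
    overflow = backup = 0.0
    for gen, dem in zip(pv, load):
        soc += gen - dem
        if soc > cap_kwh:                # excess charge is overflow
            overflow += soc - cap_kwh
            soc = cap_kwh
        if soc <= soc_backup * cap_kwh:  # run the diesel backup
            soc += backup_kwh
            backup += backup_kwh
    return soc, overflow, backup
```

The real simulation additionally models inverter efficiency curves and battery characteristics; this sketch shows only the energy-balance accounting behind the overflow and backup terms.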

Full Text Available This paper presents a simple control strategy for the operation of a variable speed stand-alone wind turbine with a permanent magnet synchronous generator (PMSG). The PMSG is connected to a three-phase resistive load through a switch-mode rectifier and a voltage source inverter. Control of the generator-side converter is used to extract maximum power from the available wind. Control of the bidirectional DC-DC buck-boost converter, connected between the battery bank and the DC link, maintains the DC-link voltage at a constant value; it also allows the battery bank to store surplus wind energy and to supply this energy to the load during a wind power shortage. The load-side voltage source inverter uses a relatively complex vector control scheme to control the amplitude and frequency of the output load voltage. The control strategy works under wind speed variation as well as with variable load. Extensive simulations have been performed using MATLAB/SIMULINK.

A novel electrochemical scheme to convert a stand-alone supply of aqueous hydrogen peroxide into a fuel cell-ready stream of hydrogen gas plus aqueous hydrogen peroxide is described. The electrochemical cell, consisting of a solid base and a solid acid electrocatalyst together with a proton exchange membrane, comprises the system that converts aqueous hydrogen peroxide into separate gas streams of oxygen and hydrogen. Aqueous hydrogen peroxide is contained in the anode compartment only, the region where oxygen gas is formed, whereas the cathode compartment, where hydrogen gas is generated, exists in a reduced state. A near-zero theoretical over-potential can be achieved by the choice of basicity and acidity of the electrode materials. The primary costs of the electrochemical cell are electrode construction and the aqueous hydrogen peroxide energy storage compound. Additional research effort is required to experimentally validate the concept and explore the full economic impact, should initial studies based on the design presented here prove promising. (author)

Full Text Available Batteries are promising storage technologies for stationary applications because of their maturity, and the ease with which they are designed and installed compared to other technologies. However, they pose threats to the environment and human health. Several studies have discussed the various battery technologies and applications, but evaluating the environmental impact of batteries in electrical systems remains a gap that requires concerted research efforts. This study first presents an overview of batteries and compares their technical properties such as the cycle life, power and energy densities, efficiencies and the costs. It proposes an optimal battery technology sizing and selection strategy, and then assesses the environmental impact of batteries in a typical renewable energy application by using a stand-alone photovoltaic (PV system as a case study. The greenhouse gas (GHG impact of the batteries is evaluated based on the life cycle emission rate parameter. Results reveal that the battery has a significant impact in the energy system, with a GHG impact of about 36–68% in a 1.5 kW PV system for different locations. The paper discusses new batteries, strategies to minimize battery impact and provides insights into the selection of batteries with improved cycling capacity, higher lifespan and lower cost that can achieve lower environmental impacts for future applications.
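The life-cycle emission rate parameter used above reduces to a simple ratio, as does the battery's share of system emissions. A minimal sketch follows; all numeric values and function names are illustrative assumptions, not figures from the study:

```python
def lifecycle_emission_rate(embodied_kg_co2eq: float,
                            lifetime_energy_kwh: float) -> float:
    """Life-cycle emission rate: kg CO2-eq per kWh of energy delivered."""
    return embodied_kg_co2eq / lifetime_energy_kwh

def battery_ghg_share(battery_kg_co2eq: float, system_kg_co2eq: float) -> float:
    """Fraction of the system's total GHG emissions due to the battery."""
    return battery_kg_co2eq / system_kg_co2eq

# Illustrative only: a battery bank embodying 900 kg CO2-eq within a
# PV system whose full life cycle embodies 1800 kg CO2-eq
rate = lifecycle_emission_rate(900.0, 12_000.0)   # 0.075 kg CO2-eq/kWh
share = battery_ghg_share(900.0, 1_800.0)         # 0.5 (i.e. 50%)
```

Comparing such rates across battery chemistries is one way the selection strategy described above can rank candidate technologies.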

Accurate sizing is one of the most important aspects to take into consideration when designing a stand-alone photovoltaic (SAPV) system. Various methods, which differ in terms of their simplicity or reliability, have been developed for this purpose. Analytical methods, which seek functional relationships between the variables of interest to the sizing problem, are one such approach. A series of rational considerations are presented in this paper with the aim of shedding light upon the basic principles and results of various sizing methods proposed by different authors. These considerations set the basis for a new analytical method designed for systems with variable monthly energy demands. Following previous approaches, the proposed method is based on the concept of loss of load probability (LLP), a parameter used to characterize system design. The method includes information on the standard deviation of the loss of load probability (σ_LLP) and on two new parameters: the annual number of system failures (f) and the standard deviation of the annual number of failures (σ_f). The method proves useful for sizing a PV system in a reliable manner and serves to explain the discrepancies found in the research on systems with LLP < 10⁻². We demonstrate that reliability depends not only on the sizing variables and on the distribution function of solar radiation, but also on the minimum total solar radiation on the receiver surface that occurs at a given location for a given monthly average clearness index. (author)
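To make the LLP definition concrete, it can be computed numerically from a daily battery energy balance, as sketched below. The paper's analytical method derives LLP without simulation; this numerical version, with assumed function name and unit conventions, is only meant to illustrate what the parameter measures:

```python
def loss_of_load_probability(daily_sun_hours, daily_load_kwh,
                             array_kw, storage_kwh):
    """LLP = unmet energy / total demand, from a daily battery balance.

    daily_sun_hours: equivalent peak sun hours per day,
    array_kw: PV array rating, storage_kwh: usable battery capacity.
    """
    soc = storage_kwh                      # start with a full battery
    deficit = 0.0
    for sun, load in zip(daily_sun_hours, daily_load_kwh):
        soc += array_kw * sun - load       # net daily energy balance
        if soc < 0.0:                      # battery empty: load is shed
            deficit -= soc
            soc = 0.0
        soc = min(soc, storage_kwh)        # battery full: surplus is lost
    return deficit / sum(daily_load_kwh)

# A generously sized system meets the load with no failures
assert loss_of_load_probability([5.0] * 365, [10.0] * 365, 2.0, 5.0) == 0.0
```

The sizing problem then amounts to finding the cheapest (array, storage) pair whose LLP stays below the target, e.g. 10⁻².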

Highlights: • Economic evaluation of three renewable hybrid power plants for off-grid operation. • The high electricity cost of remote regions increases the competitiveness of renewable energy. • The proposed plants are economically viable when compared to the existing situation. • The zero direct emissions of the plants constitute an additional advantage. - Abstract: In recent years, ever more examples are emerging of regions that have achieved, or oriented themselves toward, renewable energy sufficiency. However, actions to create energy autonomy are mainly the result of isolated activities and are rarely driven by fully organized movements. In addition, total energy independence without the support of a centralized electrical grid is yet to be achieved. The objectives of this work are to investigate the associated costs of stand-alone renewable hybrid power plants on a Greek island and compare them to the cost of the currently used fossil-fuel-based conventional plant. The plants examined here are designed to fully cover the electricity needs of the island. Islands may face numerous energy problems and rely heavily on foreign and environmentally harmful fuels. It is shown that the relatively high cost of electricity in such a remote region can increase the competitiveness, and promote the wider incorporation, of technologies based on renewable energy sources that may otherwise seem economically inferior to business-as-usual energy solutions.

This paper presents concepts and test results on a flexible sensorless control strategy for a PMSG driven by a small wind turbine with back-to-back power converters, capable of functioning in both stand-alone and grid-connected mode. A new automatic seamless transfer method, based on phase-locked...

The declaration of 2014-2024 as the Decade of Sustainable Energy for All has catalyzed actions towards achieving universal electricity access. The high costs of building electric infrastructure are a major impediment to improved access, making stand-alone photovoltaic (PV) systems an attractive

This article analyzes the extent to which Australian and New Zealand marketing educators use dedicated or stand-alone courses to equip students with alternative views of business. A census of marketing programs in degree-granting universities was conducted. Program brochures were obtained via the Internet and were content analyzed. This study…

... 49 CFR § 1109.4 — Mandatory mediation in rate cases to be considered under the stand-alone cost methodology. (a) A... methodology must engage in non-binding mediation of its dispute with the railroad upon filing a formal...

The phase-locked loop (PLL) based on conventional synchronous reference frame, i.e. dqPLL, is usually employed in grid-connected variable speed constant frequency (VSCF) power generation systems (PGSs). However, the voltage amplitude drop of stand-alone PGSs is often greater than that of the grid...

Bioinformatics skills have become essential for many research areas; however, the availability of qualified researchers is usually lower than the demand, and training to increase the number of able bioinformaticians is an important task for the bioinformatics community. When conducting training or hands-on tutorials, the lack of control over the analysis tools and repositories often results in undesirable situations during training, as unavailable online tools or version conflicts may delay, complicate, or even prevent the successful completion of a training event. The eBioKit is a stand-alone educational platform that hosts numerous tools and databases for bioinformatics research and allows training to take place in a controlled environment. A key advantage of the eBioKit over other existing teaching solutions is that all the required software and databases are locally installed on the system, significantly reducing the dependence on the internet. Furthermore, the architecture of the eBioKit has proven to be an excellent balance between portability and performance, making the eBioKit not only an exceptional educational tool but also a platform that small research groups can use to incorporate bioinformatics analysis in their research. As a result, the eBioKit has formed an integral part of training and research performed by a wide variety of universities and organizations such as the Pan African Bioinformatics Network (H3ABioNet) as part of the initiative Human Heredity and Health in Africa (H3Africa), the Southern Africa Network for Biosciences (SAnBio) initiative, the Biosciences eastern and central Africa (BecA) hub, and the International Glossina Genome Initiative.

Stand-alone systems supplied only by a photovoltaic generator need an energy storage unit to be fully self-sufficient. Lead-acid batteries are commonly used to store energy because of their low cost, despite several operational constraints. A hydrogen-based energy storage unit (HESU), comprising an electrolyser, a fuel cell and a hydrogen tank, could be another candidate. However, many efforts are still needed for this technology to reach an industrial stage; in particular, market outlets must be clearly identified. The study of small stationary applications (a few kW) is performed by numerical simulation. A simulator, developed in the Matlab/Simulink environment, is mainly composed of a photovoltaic field and a storage unit (lead-acid batteries, HESU, or hybrid HESU/battery storage). The system components are sized to ensure complete system autonomy over a whole year of operation. The simulator is tested with 160 load profiles (1 kW as a yearly mean value) and three locations (Algeria, France and Norway). Two coefficients are defined to quantify the correlation between the power consumption of the end user and the renewable resource availability at both daily and yearly scales. Among the tested cases, a limit value of the yearly correlation coefficient emerged, making it possible to recommend the storage best adapted to a given case. There are cases for which using an HESU instead of lead-acid batteries can increase the system efficiency, decrease the size of the photovoltaic field and improve the exploitation of the renewable resource. In addition, hybridization of the HESU with batteries always leads to system enhancements in sizing and performance, with an efficiency increase of 10 to 40% depending on the location. The good agreement between the simulation data and field data gathered on real systems enabled the validation of the models used in this study. (author)
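The study does not spell out how its load/resource correlation coefficients are computed; a plain Pearson correlation between the two profiles is one plausible form. The sketch below uses assumed example profiles:

```python
import math

def correlation(resource, load):
    """Pearson correlation between a resource profile and a load profile."""
    n = len(resource)
    mr, ml = sum(resource) / n, sum(load) / n
    cov = sum((r - mr) * (l - ml) for r, l in zip(resource, load))
    sr = math.sqrt(sum((r - mr) ** 2 for r in resource))
    sl = math.sqrt(sum((l - ml) ** 2 for l in load))
    return cov / (sr * sl)

# Consumption that tracks the solar resource correlates positively;
# night-time-dominated consumption correlates negatively.
sun         = [0, 3, 9, 10, 4, 0]   # hourly resource availability (assumed)
midday_load = [0, 2, 8, 9, 3, 0]
night_load  = [9, 5, 1, 0, 4, 8]
```

At the yearly scale the same statistic can be applied to monthly totals instead of hourly means, giving the two correlation coefficients the study refers to.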

Noncoding small RNAs play diverse, important biological roles through gene expression regulation. However, their low expression levels make it difficult to identify new small RNA species and study their functions, calling for detection schemes with greater simplicity, sensitivity, and specificity. Herein, we report a straightforward assay that combines stand-alone rolling circle amplification (RCA) with capillary electrophoresis (CE) for specific and sensitive detection of small RNAs in biological samples. To enhance the overall reaction efficiency and simplify the procedure, RCA was not preceded by ligation, and a preformed circular probe was employed as the template for the target small RNA-primed isothermal amplification. The long RCA product was digested and analyzed by CE. Two DNA polymerases, Phi29 and Bst, were compared for their detection performance. Bst is superior in specificity, procedural simplicity, and reproducibility, while Phi29 yields a 5-fold lower detection limit and can detect as little as 35 amol of the target small RNA. Coamplification of an internal standard with the target and an RNase A digestion step allow accurate and reproducible quantification of low amounts of small RNA targets spiked into hundreds of nanograms of plant total RNA extract, with a recovery below 110% using either enzyme. Our assay can be adapted to a capillary array system for high-throughput screening of small RNA expression in biological samples. Moreover, the one-step isothermal process has the potential to conveniently amplify very limited amounts of RNA, e.g., RNA extracted from only a few cells, inside the capillary column or on a microchip.

Full Text Available The purpose of this paper was to investigate stand-alone lateral interbody fusion as a minimally invasive option for the treatment of low-grade degenerative spondylolisthesis with a minimum 24-month follow-up. Prospective nonrandomized observational single-center study. 52 consecutive patients (67.6±10 y/o; 73.1% female; 27.4±3.4 BMI) with single-level grade I/II degenerative spondylolisthesis without significant spine instability were included. Fusion procedures were performed as retroperitoneal lateral transpsoas interbody fusions without screw supplementation. The procedures took 73.2 minutes on average with less than 50 cc blood loss. VAS and Oswestry scores showed lasting improvements in clinical outcomes (60% and 54.5% change, resp.). The vertebral slippage was reduced in 90.4% of cases, from a mean of 15.1% preoperatively to 7.4% at the 6-week follow-up (P<0.001), and was maintained through 24 months (7.1%, P<0.001). Segmental lordosis (P<0.001) and disc height (P<0.001) were improved in postoperative evaluations. Cage subsidence occurred in 9/52 cases (17%) and 7/52 (13%) spine levels needed revision surgery. At the 24-month evaluation, solid fusion was observed in 86.5% of the levels treated. The minimally invasive lateral approach has been shown to be a safe and reproducible technique to treat low-grade degenerative spondylolisthesis.

Full Text Available Recently, the family Midichloriaceae has been described within the bacterial order Rickettsiales. It includes a variety of bacterial endosymbionts detected in different metazoan host species belonging to Placozoa, Cnidaria, Arthropoda and Vertebrata. Representatives of Midichloriaceae are also considered possible etiological agents of certain animal diseases. Midichloriaceae have also been found in protists such as ciliates and amoebae. The present work describes a new bacterial endosymbiont, "Candidatus Fokinia solitaria", retrieved from three different strains of a novel Paramecium species isolated from a wastewater treatment plant in Rio de Janeiro (Brazil). Symbionts were characterized through the full-cycle rRNA approach: SSU rRNA gene sequencing and fluorescence in situ hybridization (FISH) with three species-specific oligonucleotide probes. In electron micrographs, the tiny rod-shaped endosymbionts (1.2 x 0.25-0.35 μm in size) were not surrounded by a symbiontophorous vacuole and were located in the peripheral host cytoplasm, stratified in the host cortex in between the trichocysts or just below them. Frequently, they occurred inside autolysosomes. Phylogenetic analyses of Midichloriaceae apparently show different evolutionary pathways within the family. Some genera, such as "Ca. Midichloria" and "Ca. Lariskella", have been retrieved frequently and independently in different hosts and environmental surveys. On the contrary, others, such as Lyticum, "Ca. Anadelfobacter", "Ca. Defluviella" and the presently described "Ca. Fokinia solitaria", have been found only occasionally and associated with specific host species. The latter are so far the only representatives in their own branches. The present data do not allow us to infer whether these genera, which we named "stand-alone lineages", are an indication of poorly sampled organisms, thus underrepresented in GenBank, or represent fast-evolving, highly adapted evolutionary lineages.

This paper presents a low-power stand-alone tongue drive system (sTDS) that lets individuals with severe disabilities control their environment, such as a computer, smartphone, or wheelchair, using voluntary tongue movements. A low-power local processor is proposed that performs signal processing on the sTDS wearable headset, converting raw magnetic sensor signals to user-defined commands rather than sending all raw data to a PC or smartphone. The proposed sTDS significantly reduces the transmitter power consumption and subsequently increases the battery life. Assuming the sTDS user issues one command every 20 ms, the proposed local processor reduces the data volume that needs to be wirelessly transmitted by a factor of 64, from 9.6 to 0.15 kb/s. The proposed processor consists of three main blocks: a serial peripheral interface bus for receiving raw data from the magnetic sensors, an external magnetic interference attenuation block that removes the external magnetic field from the raw magnetic signal, and a machine learning classifier for command detection. A proof-of-concept prototype sTDS has been implemented and tested with a low-power IGLOO-nano field-programmable gate array (FPGA), Bluetooth Low Energy, a battery and magnetic sensors on a headset. At a clock frequency of 20 MHz, the processor takes 6.6 s and consumes 27 nJ to detect a command with a detection accuracy of 96.9%. To further reduce power consumption, an application-specific integrated circuit processor for the sTDS is implemented at the post-layout level in 65-nm CMOS technology with a 1-V power supply; it consumes 0.43 mW, 10× lower than the FPGA power consumption, and occupies an area of only 0.016 mm².

Full Text Available The aim of this study is to examine the possibility of using a stand-alone photovoltaic system (SAPVS) for electricity generation in urban areas in Southern Mexico. In Mexico, an urban area is defined as one with more than 2500 inhabitants. Due to constant migration from the countryside to the cities, the number of inhabitants of urban localities has been increasing. Global horizontal irradiation (GHI) data were recorded every 10 min during 2014–2016 in Coatzacoalcos, in the state of Veracruz, located at 18°08′09″ N and 94°27′48″ W. In this study, batteries represented 77% of the total cost, 12 PV panels of 310 W could export 5.41 MWh to the grid, and an inverter with an integrated controller and charger was selected, which decreased the initial cost. The city of Coatzacoalcos was chosen because its average annual temperature is 28 °C, with an average relative humidity of 75% and an average irradiance of 5.3 kWh/m²/day. Using a greenhouse gas (GHG) emission factor of 0.505 tCO2/MWh for the power system, the net annual GHG reduction would be 11 tCO2 and a financial revenue of 36.951 × 10³ $/tCO2 would be obtained. Financial parameters such as a 36.3% Internal Rate of Return (IRR) and a 3.4-year payback show the financial viability of this investment. SAPVSs in urban areas in Mexico could be a benefit as long as the housing has a high consumption of electricity.

Desalination of brackish water as a viable option to cope with water scarcity and to overcome the water deficit in Jordan is assessed. A stand-alone reverse osmosis (RO) desalination unit powered by photovoltaic (PV) solar energy is proposed, and a computer code in C++ was written to simulate the process and to predict the water production at 10 selected sites, based on the available solar radiation data, sunshine hours and salinity of the feed water (TDS of 3000, 5000, 7000, and 10,000 mg/L). Most of the selected sites proved favorable for the proposed system in Jordan; Tafila, Queira, Ras Muneef, H-4, and H-5 are the most favorable. With a TDS of 7000 mg/L, the highest annual water production of 1679 m³/year was observed in Tafila, followed by Queira with 1473 m³/year. Ras Muneef, H-4, and H-5 showed similar production levels of 1363, 1345, and 1340 m³/year, respectively. Among the most favorable sites, Ras Muneef was found to be the best in terms of the daily amount of water produced during the driest months of the year (May-September): its production during these months forms about 65% of its total daily water production over a one-year cycle, while each of the other most favorable sites (Tafila, Queira, H-4, and H-5) produced 61% during the same period. (author)
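The core relationship behind such a simulation is that permeate volume scales with the solar energy delivered to the RO unit, divided by a specific energy consumption (SEC) that grows with feed salinity. A minimal sketch; the SEC figures below are assumptions for illustration, not values from the study:

```python
def daily_water_production_m3(pv_energy_kwh: float, sec_kwh_per_m3: float) -> float:
    """Permeate volume obtainable from the energy delivered to the RO unit."""
    return pv_energy_kwh / sec_kwh_per_m3

# Specific energy consumption rises with feed salinity.
# These SEC figures are illustrative assumptions only (kWh/m3).
sec_by_feed_tds = {3000: 1.2, 5000: 1.6, 7000: 2.0, 10000: 2.6}

daily = daily_water_production_m3(10.0, sec_by_feed_tds[7000])   # 5.0 m3/day
annual = 365 * daily                                             # 1825.0 m3/year
```

Summing such daily figures over site-specific irradiation data is what distinguishes the annual totals reported for Tafila, Queira, and the other sites.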

Full Text Available The current study presents the concept of a stand-alone solar organic Rankine cycle (ORC) water pumping system for rural Nepalese areas. Experimental results for this technology are presented based on a prototype. The economic viability of the system was assessed based on solar radiation data for different Nepalese geographic locations. The mechanical power produced by the solar ORC is coupled with a water pumping system for various applications, such as drinking and irrigation. The thermal efficiency of the system was found to be 8% at an operating temperature of 120 °C. The hot water produced by the unit has a temperature of 40 °C. Economic assessment was done for 1-kW and 5-kW solar ORC water pumping systems. These systems use different types of solar collectors: a parabolic trough collector (PTC) and an evacuated tube collector (ETC). The economic analysis showed that the costs of water are $2.47/m³ (highest) and $1.86/m³ (lowest) for the 1-kW system and a 150-m pumping head. In addition, the cost of water is reduced when the size of the system is increased and the pumping head is reduced. The minimum volumes of water pumped are 2190 m³ and 11,100 m³ yearly for 1 kW and 5 kW, respectively. The payback period is eight years with a profitability index of 1.6. The system is highly feasible and promising in the context of Nepal.

Highlights: • Hybrid stand-alone wind–solar–fossil power system is analyzed. • Measurement data are used to evaluate system performance. • The proposed system can generate about 70% from renewables. • Such a hybrid plant is very promising for remote regions in Algeria. - Abstract: There is great interest in the development of renewable power technologies in Algeria, and more particularly in the hybrid concept. The present paper investigates the performance of a hybrid PV–Wind–Diesel–Battery configuration based on hourly measurements of the Adrar climate (southern Algeria). Data on global solar radiation, ambient temperature and wind speed for a period of one year have been used. Firstly, the proposed hybrid system has been optimized by means of the HOMER software. The optimization has been carried out taking into account the renewable resource potential and the energy demand, with the aim of maximizing renewable electricity use and fuel saving. In the second step, a mathematical model has been developed to ensure efficient energy management on the basis of various operation strategies. The analysis has shown that the renewable energy system (PV–Wind) is able to supply about 70% of the demand. Wind power ranked first with 43% of the annual total electricity production, followed by the diesel generator (31%), while the remaining fraction comes from the PV panels. In this context, 69% of the fossil fuel can be saved by using the proposed hybrid configuration instead of the diesel generators currently installed in most remote regions of Algeria. Such a concept is very promising for meeting the goals of the renewable energy program announced in 2011.

Highlights: ► A fast optimal sizing method under climate change. ► It provides a method of creating climate cycles for PV system applications. ► It proposes a new reliability indicator to denote the upper-bound probability of a PV application. ► It presents a first economic optimization for such an approach. ► It gives examples to illustrate the proposed approach. -- Abstract: This paper proposes a novel and fast sizing method, under a constant daily load profile, for sizing a stand-alone PV system. The term “efficient sizing” means that the approach obtains results as good as those of simulation-based methods without running simulations, making it more efficient than the alternatives. Traditionally, the solar irradiation profile of a typical day or a typical year is employed for the sizing task. However, facing the global warming crisis, and given that no two years have the same weather at a single site, this approach statistically models the year-by-year trend of climate change and incorporates it into the sizing formula, so that the results are optimal both for the current weather conditions and for the future. Hence, the suitable PV array size and number of batteries are obtained purely by computation. This differs from the traditional sizing-curve method. Although the traditional sizing-curve methods are satisfactory in normal cases, they may fail under extreme climate conditions. This paper characterizes the behavior of the extreme climate over at least 20 years, so the derived system has statistical confidence for at least 20 years of operation. A new reliability index, the Loss of Power Probability (LPP), based on Extreme Value Theory is introduced. The LPP provides an upper bound on reliability for the application and rich information about many extreme events. A technological and economical comparison among the traditional daily energy balance method, the sizing-curve method and this approach is conducted and shows the

prfectBLAST is a multiplatform graphical user interface (GUI) for the stand-alone BLAST+ suite of applications. It allows researchers to do nucleotide or amino acid sequence similarity searches against public (or user-customized) databases that are locally stored. It does not require any dependencies or installation and can be used from a portable flash drive. prfectBLAST is implemented in Java version 6 (SUN) and runs on all platforms that support Java and for which National Center for Biotechnology Information has made available stand-alone BLAST executables, including MS Windows, Mac OS X, and Linux. It is free and open source software, made available under the GNU General Public License version 3 (GPLv3) and can be downloaded at www.cicy.mx/sitios/jramirez or http://code.google.com/p/prfectblast/.

This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connected mode. The stand-alone control features a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of an isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and on-line tuning is used to implement and test the different control strategies. The back...

A simple methodology to estimate photovoltaic system size and life-cycle costs in stand-alone applications is presented. It is designed to assist engineers at Government agencies in determining the feasibility of using small stand-alone photovoltaic systems to supply AC or DC power to the load. Photovoltaic system design considerations are presented, as well as the equations for sizing the flat-plate array and the battery storage to meet the required load. Cost effectiveness of a candidate photovoltaic system is based on comparison with the life-cycle cost of alternative systems. Examples of alternative systems addressed are batteries, diesel generators, the utility grid, and other renewable energy systems.
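Sizing equations of the kind referred to above generally take the following form. The derate, depth-of-discharge and efficiency values here are common rule-of-thumb assumptions, not the report's own numbers:

```python
def pv_array_size_kw(daily_load_kwh, peak_sun_hours, derate=0.75):
    """Flat-plate array rating needed to replace the daily load,
    allowing for system losses via the derate factor."""
    return daily_load_kwh / (peak_sun_hours * derate)

def battery_size_kwh(daily_load_kwh, autonomy_days,
                     depth_of_discharge=0.5, battery_efficiency=0.85):
    """Nominal storage capacity for the required days of autonomy,
    allowing for the usable discharge fraction and round-trip losses."""
    return daily_load_kwh * autonomy_days / (depth_of_discharge * battery_efficiency)

# Example: 6 kWh/day load, 5 peak sun hours, 3 days of autonomy
array_kw = pv_array_size_kw(6.0, 5.0)        # 1.6 kW
battery_kwh = battery_size_kwh(6.0, 3.0)     # ~42.4 kWh nominal
```

The resulting sizes feed directly into the life-cycle cost comparison against diesel, grid-extension, or other alternatives.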

Stand-alone photovoltaic (PV) systems comprise one of the most promising electrification solutions for covering the demand of remote consumers. However, such systems are strongly questioned due to extreme life-cycle (LC) energy requirements. For similar installations to be considered environmentally sustainable, their LC energy content must be compensated by the respective useful energy production, i.e. their energy pay-back period (EPBP) should be less than their service period. In this context, an optimum sizing methodology is developed, based on the criterion of minimum embodied energy. Various energy-autonomous stand-alone PV-lead-acid battery systems are examined for two cases: a high solar potential area and a medium solar potential area. By considering the PV-battery (PV-Bat) system's useful energy production equal to the remote consumer's electricity consumption, optimum cadmium telluride (CdTe) based systems yield the minimum EPBP (15 years). If the net PV energy production can be fully exploited, however, the EPBP is found to be less than 20 years for all PV types. Finally, the most interesting finding is that in all cases examined the contribution of the battery component exceeds 27% of the system's LC energy requirements, reflecting the difference between grid-connected and stand-alone configurations.
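The energy pay-back criterion above reduces to a simple ratio and an inequality, sketched here with purely illustrative numbers (the study's actual embodied-energy inventory is not reproduced):

```python
def energy_payback_period_years(embodied_kwh: float,
                                annual_useful_kwh: float) -> float:
    """Years for useful energy production to repay the life-cycle
    (embodied) energy content of the installation."""
    return embodied_kwh / annual_useful_kwh

def is_energy_sustainable(embodied_kwh, annual_useful_kwh, service_years):
    """Sustainable only if the EPBP is shorter than the service period."""
    return energy_payback_period_years(embodied_kwh, annual_useful_kwh) < service_years

# Illustrative case: EPBP of 15 years within a 25-year service life
epbp = energy_payback_period_years(30_000.0, 2_000.0)   # 15.0 years
ok = is_energy_sustainable(30_000.0, 2_000.0, 25.0)     # True
```

Because the battery's embodied energy enters the numerator but adds nothing to the useful output, a large battery share (here, over 27%) directly lengthens the EPBP.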

This paper investigates a variable speed wind turbine based on a permanent magnet synchronous generator and a full-scale power converter in a stand-alone system. An energy storage system (ESS), comprising a battery and a fuel cell-electrolyzer combination, is connected to the DC link of the full-scale power converter through a power electronics interface. Wind is the primary power source of the system; the battery and FC-electrolyzer combination is used as a backup and a long-term storage system to provide or absorb power in the stand-alone system, respectively. In this paper, a control strategy is proposed for the operation of this variable speed wind turbine in a stand-alone system, where the generator-side converter and the ESS operate together to meet the demand of the loads. This control strategy can support variations of the loads or wind speed while limiting the DC-link voltage of the full...

The main objective of this work was to investigate, techno-economically, the opportunity for integrated gasification-based biomass-to-methanol production in an existing chemical pulp and paper mill. Three different system configurations using pressurized entrained-flow biomass gasification (PEBG) technology were studied: a stand-alone plant, one where the mill's bark boiler was replaced by a PEBG unit, and one with a co-integrated black liquor gasifier operated in parallel with a PEBG unit. The cases were analysed in terms of overall energy efficiency (calculated as electricity equivalents) and process economics. The economics was assessed under current as well as possible future energy market conditions. Economic policy support was found to be necessary to make the methanol production competitive under all market scenarios. In a future energy market, integrating a PEBG unit to replace the bark boiler was economically the most beneficial case: the methanol production cost was reduced by 11–18 Euro per MWh compared to the stand-alone case, and the overall plant efficiency increased by approximately 7 percentage points compared to the original operation of the mill and the non-integrated stand-alone case. With co-integration of the two parallel gasifiers, an equal increase in system efficiency was achieved, but the economic benefit was not as apparent. Under conditions similar to the current market, and when methanol was sold to replace fossil gasoline, co-integration of the two parallel gasifiers was the best alternative based on the resulting IRR. - Highlights: • Techno-economic results regarding integration of methanol synthesis processes in a pulp and paper mill are presented. • The overall energy efficiency increases in integrated methanol production systems compared to stand-alone production units. • The economics of the integrated system improves compared to stand-alone alternatives. • Tax

Norms clarification has been identified as an effective component of college student drinking interventions, prompting research on norms clarification as a single-component intervention known as Personalized Normative Feedback (PNF). Previous reviews have examined PNF in combination with other components but not as a stand-alone intervention. The aims were to investigate the degree to which computer-delivered, stand-alone personalized normative feedback interventions reduce alcohol consumption and alcohol-related harms among college students, and to compare gender-neutral and gender-specific PNF. Electronic databases were searched systematically through November 2014. Reference lists were reviewed manually, and forward and backward searches were conducted. Eligible outcome studies compared a computer-delivered, stand-alone PNF intervention with an assessment-only, attention-matched, or active treatment control and reported alcohol use and harms among college students. Between-group effect sizes were calculated as the standardized mean difference in change scores between treatment and control groups divided by the pooled standard deviation. Within-group effect sizes were calculated as the raw mean difference between baseline and follow-up divided by the pooled within-groups standard deviation. Eight studies (13 interventions) with a total of 2,050 participants were included. Compared to control participants, students who received gender-neutral (d_between = 0.291, 95% CI [0.159, 0.423]) and gender-specific PNF (d_between = 0.284, 95% CI [0.117, 0.451]) reported greater reductions in drinking from baseline to follow-up. Students who received gender-neutral PNF reported 3.027 (95% CI [2.171, 3.882]) fewer drinks per week at first follow-up, and those who received gender-specific PNF reported 3.089 (95% CI [0.992, 5.186]) fewer drinks. Intervention effects were small for harms (d_between = 0.157, 95% CI [0.037, 0.278]). Computer-delivered PNF is an effective stand-alone approach for reducing college student drinking and
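The effect-size computation described above (standardized mean difference in change scores, divided by a pooled standard deviation) can be sketched as follows; the function names and the sample numbers are illustrative only, not taken from the reviewed studies:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def d_between(change_tx, sd_tx, n_tx, change_ctl, sd_ctl, n_ctl):
    """Between-group effect size: standardized mean difference in change
    scores (treatment minus control) over the pooled SD."""
    return (change_tx - change_ctl) / pooled_sd(sd_tx, n_tx, sd_ctl, n_ctl)

# Hypothetical example: treatment drops 4 drinks/week, control drops 1.
d = d_between(change_tx=-4.0, sd_tx=6.0, n_tx=100,
              change_ctl=-1.0, sd_ctl=6.0, n_ctl=100)  # -0.5
```

Negative d here simply reflects that drinking decreased more in the treatment group; review conventions often report the magnitude.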

Highlights: • We present an energy management system for a stand-alone WT/PV/hydrogen/battery hybrid system. • Hierarchical control composed of master and slave control strategies. • Control assures reliable electricity supply for stand-alone applications subject to technical and economic criteria. - Abstract: This paper presents an energy management system (EMS) for stand-alone hybrid systems composed of photovoltaic (PV) solar panels and a wind turbine (WT) as primary energy sources and two energy storage systems, a hydrogen system and a battery. The hydrogen system is composed of a fuel cell (FC), an electrolyzer and a hydrogen storage tank. The EMS is a hierarchical control composed of a master control strategy and a slave control strategy. On the one hand, the master control generates the reference powers to meet several premises (such as satisfying the load power demand and maintaining the hydrogen tank level and the state of charge (SOC) of the battery within their target margins), also taking into account economic aspects to discriminate between using the battery or the hydrogen system. On the other hand, the slave control modifies the reference powers generated by the master control according to the dynamic limitations of the energy sources, and maintains the DC bus voltage at its reference value. The models, implemented in the MATLAB-Simulink environment, have been developed from commercially available components. To check the viability of the proposed EMS, two kinds of simulations were carried out: (1) a long-term simulation of 25 years (the expected lifetime of the system) with a sample time of one hour to validate the master control of the EMS; and (2) a short-term simulation with sudden net power variations to validate the slave control of the EMS

An estimation of a stand-alone solar PV and wind hybrid system for distributed power generation has been made based on the resources available at Sagar Island, a remote area far from grid operation. Optimization and sensitivity analyses have been carried out to evaluate the feasibility and size of the power generation unit. The different modes of the hybrid system have been compared. It has been estimated that the Solar PV-Wind-DG hybrid system provides a lower per-unit electricity cost. Capital investment is observed to be lower when the system runs with Wind-DG compared to Solar PV-DG.

This paper discusses a Model Predictive Control (MPC) scheme for a Voltage Source Converter (VSC) where the aim is to minimize losses and Total Harmonic Distortion (THD) compared to the conventional MPC scheme. Different Cost Functions (CFs) are applied to the stand-alone VSC. A comparison between the conventional and improved controllers is conducted in terms of losses. The switching and conduction losses for the VSC are calculated using MATLAB/SIMULINK and PLECS software. A power level up to 1 kW is considered for the conventional and the improved schemes. All simulation and experimental results...

In this paper, the dynamic behavior and performance of a fuel cell power plant (FCPP) operating in parallel with a battery bank is tested under classified load conditions: mostly resistive, mostly inductive, resistive-inductive and non-linear loads. Thereafter, voltage stability analysis is performed using the dynamic response of the FCPP for stand-alone residential applications. Simulation results are obtained using the MATLAB® and Simulink® software packages, based on the mathematical and dynamic electrical models of the system. Using the experimental results, a validated model has been realized, and voltage stability analysis is performed through this model.

This study proposes a new optimization-based approach for the design and analysis of stand-alone hybrid energy supply systems using renewable energy sources (RES). In the energy supply system, we include multiple energy production technologies such as photovoltaics (PV), wind turbines, and a fossil-fuel-based AC generator, along with different types of energy storage and conversion technologies such as a battery and an inverter. We then select six different regions of Korea to represent the varying characteristics of RES potentials and demand profiles. We finally design and analyze the optimal stand-alone RES energy supply system in the selected regions using a multiobjective optimization (MOOP) technique with two objective functions: minimum cost and minimum CO₂ emission. In addition, we discuss the feasibility and expected benefits of the systems by comparison with the conventional systems of Korea. As a result, the region with the highest RES potential showed the possibility of remarkably reducing CO₂ emissions compared to the conventional system. The levelized cost of electricity (LCOE) of the RES-based energy system is identified to be slightly higher than that of the conventional energy system: 0.35 and 0.46 $/kWh, respectively. However, the total life-cycle CO₂ emission (LCECO₂) can be reduced to 470 gCO₂/kWh from the 490 gCO₂/kWh of the conventional systems.
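As a rough illustration of the LCOE metric used in the comparison, here is a minimal discounted-cash-flow sketch; the function name, the discount rate, and all inputs are assumptions for illustration, not values from the study:

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, lifetime_years):
    """Levelized cost of electricity ($/kWh): discounted lifetime cost
    divided by discounted lifetime energy production."""
    discount = [(1 + discount_rate) ** t for t in range(1, lifetime_years + 1)]
    cost = capex + sum(annual_opex / d for d in discount)
    energy = sum(annual_energy_kwh / d for d in discount)
    return cost / energy

# Hypothetical system: $50,000 upfront, $1,200/yr O&M, 18 MWh/yr, 5%, 25 yr.
cost_per_kwh = lcoe(50_000, 1_200, 18_000, 0.05, 25)
```

Note that when energy output is also discounted (the common convention), the same discount factors appear in numerator and denominator, so a zero-capex system's LCOE reduces to opex divided by annual energy.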

When gasified black liquor is used for hydrogen production, significant amounts of biomass must be imported. This paper compares two alternative options for producing hydrogen from biomass: (A) pulp-mill-integrated hydrogen production from gasified black liquor; and (B) stand-alone production of hydrogen from gasified biomass. The comparison assumes that the same amount of biomass imported in Alternative A is supplied to a stand-alone hydrogen production plant, and that the gasified black liquor in Alternative B is used in a black liquor gasification combined cycle (BLGCC) CHP unit. The comparison is based upon equal amounts of black liquor fed to the gasifier, and identical steam and power requirements for the pulp mill. The two systems are compared on the basis of total CO₂ emission consequences, under different assumptions for the reference energy system that reflect different societal CO₂ emissions reduction target levels. Ambitious targets are expected to lead to a more CO₂-lean reference energy system, in which case hydrogen production from gasified black liquor (Alternative A) is best from a CO₂ emissions perspective, whereas with high CO₂ emissions associated with electricity production, hydrogen from gasified biomass and electricity from gasified black liquor (Alternative B) is preferable. (author)

As an important means to realize energetic complementarity and improve the efficiency of renewable resources, the stand-alone microgrid (SAMG) system has gained increasing attention, especially on islands and in remote areas. In this paper, considering the conflicting interests of the distribution company and the distributed generation owner, a new multi-objective optimal planning model is formulated for medium-voltage SAMGs. Besides, to avoid constraining the power of distributed generation (DG) when over-limit voltage occurs, a novel two-step power dispatch control method including a voltage regulation strategy is proposed, in which the absorption of distributed power by the energy storage system (ESS) and reactive power adjustment through its power control system are used to regulate voltage. The goal of this paper is to find the Pareto-optimal front of the site and capacity of DG as well as the contract price between both parties, and thus provide effective references for practical planning of SAMGs. Considering the high cost of ESS, an investment analysis of ESS is also discussed in the paper. - Highlights: • A multi-objective planning model based on different benefit subjects is proposed. • A two-step power dispatch method including voltage regulation is proposed. • The economic efficiency of the proposed model is analyzed. • An effective reference for stand-alone microgrid planning is provided.

There is a growing call, within the scientific community, for solid theoretical frameworks and usable indices/models to assess sediment connectivity. Connectivity plays a significant role in characterizing structural properties of the landscape and, when considered in combination with forcing processes (e.g., rainfall-runoff modelling), can represent a valuable analysis for improved landscape management. In this work, the authors present the development and application of SedInConnect: a free, open-source and stand-alone application for the computation of the Index of Connectivity (IC), as expressed in Cavalli et al. (2013), with the addition of specific innovative features. The tool is intended to have a wide variety of users, both from the scientific community and from authorities involved in environmental planning. Thanks to its open-source nature, the tool can be adapted and/or integrated according to users' requirements. Furthermore, with its easy-to-use interface and stand-alone nature, the tool can help management experts in the quantitative assessment of sediment connectivity in the context of hazard and risk assessment. An application to a sample dataset and an overview of up-to-date applications of the approach and of the tool show the development potential of such analyses. The modelled connectivity, in fact, appears suitable not only for characterizing sediment dynamics at the catchment scale but also for integration with prediction models and as an aid to geomorphological interpretation.

Full Text Available This paper proposes a control approach and supplementary controllers for the operation of a hybrid stand-alone system composed of a wind generation unit and a conventional generation unit based on a synchronous generator (CGU). The proposed controllers allow the islanded or isolated operation of small power systems with a predominance of wind generation. As an advantage and a paradigm shift, the DC-link voltage of the wind unit is controlled by means of a conventional synchronous generator connected to the AC grid of the system. Two supplementary controllers, added to a diesel generator (DIG) and to a DC dump load (DL), are proposed to control the DC-link voltage. The wind generation unit operates in V-f control mode and the DIG operates in PQ control mode, which allows the stand-alone system to operate either in wind-diesel (WD) mode or in wind-only (WO) mode. The strong influence of wind turbine speed variations on the DC-link voltage is mitigated by a low-pass filter added to the speed control loop of the wind turbine. The proposed control approach does not require the use of a battery bank or ultra-capacitor to control the DC-link voltage in wind generation units based on a fully rated converter.

Highlights: • Heat recovery exchanger is designed based on practical conditions of a hybrid power system. • Off-the-grid electricity system modeling and analysis using micro-grid analysis software HOMER. • NSGA-II is used for the multi-objective design optimization task. • A new local search is proposed to incorporate the engineering knowledge in NSGA-II. • The proposed approach outperforms the existing ones. - Abstract: Integration of solar power and diesel generators (DGs) together with battery storage has proven to be an efficient choice for stand-alone power systems (SAPS). For higher energy efficiency, heat recovery from exhaust gas of the DG can also be employed to supply all or a portion of the thermal energy demand. Although the design of such heat recovery systems (HRSs) has been studied, the effect of solar power integration has not been taken into account. In this paper, a new approach for practical design of these systems based on varying engine loads is presented. Fast and elitist non-dominated sorting genetic algorithm (NSGA-II) equipped with a novel local search was used for the design process, considering conflicting objectives of annual energy recovery and total cost of the system, and six design variables. An integrated power system, designed for a remote SAPS, was used to evaluate the design approach. The optimum power supply system was first designed using the commercial software Hybrid Optimization of Multiple Energy Resources (HOMER), based on power demand and global solar energy in the region. Heat recovery design was based on the outcome of HOMER for DG hourly load, considering different power scenarios. The proposed approach improves the annual heat recovery of the PV/DG/battery system by 4%, PV/battery by 1.7%, and stand-alone DG by 1.8% when compared with a conventional design based on nominal DG load. The results prove that the proposed approach is effective and that load calculations should be taken into account prior to
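NSGA-II ranks candidate designs by Pareto dominance. A minimal sketch of the dominance test and the extraction of the first non-dominated front, the core of the algorithm's sorting step, might look like this, assuming both objectives (e.g., total cost and negated annual energy recovery) are minimized; the names and sample points are illustrative:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset: the first front in NSGA-II's
    non-dominated sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical designs as (cost, -energy_recovery) pairs:
designs = [(3.0, -5.0), (2.0, -4.0), (4.0, -4.0), (2.5, -5.5)]
front = pareto_front(designs)  # trade-off solutions no design improves upon
```

The full algorithm repeats this sorting over successive fronts and adds crowding-distance selection; this sketch shows only the dominance relation it is built on.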

In this study, an algorithm is developed for precise positioning in a dynamic environment using carrier phase data from a single geodetic GNSS receiver. In this method, users start the measurement on a known point near the project area for a couple of seconds, making use of a single dual-frequency geodetic-grade receiver. The technique employs ionosphere-free carrier phase observations with precise products. The equation of the algorithm is S_m(t_(i+1)) = S_C(t_i) + [Φ_IF(t_(i+1)) - Φ_IF(t_i)], where S_m(t_(i+1)) is the phase-range between the satellites and the receiver, S_C(t_i) is the initial range computed from the known initial point coordinates and the satellite coordinates, and Φ_IF is the ionosphere-free phase measurement (in meters). Tropospheric path delays are modelled using the standard tropospheric model. To accomplish the process, an in-house program was coded and some functions were adopted from Easy-Suite, available at http://kom.aau.dk/~borre/easy. In order to assess the performance of the introduced algorithm in a dynamic environment, a dataset from a kinematic test measurement in Istanbul, Turkey was used. In the test measurement, a geodetic dual-frequency GNSS receiver, an Ashtech Z-Xtreme, was set up on a known point on the shore and a couple of epochs were recorded for initialization. The receiver was then moved to a vessel, data were collected for approximately 2.5 hours, and the measurement was finalized on a known point on the shore. While the kinematic measurements on the vessel were being carried out, another GNSS receiver was set up on a geodetic point with known coordinates on the shore, and data were collected in static mode to calculate the reference trajectory of the vessel using the differential technique. The coordinates of the vessel were calculated for each measurement epoch with the introduced method. With the purpose of obtaining more robust results, all coordinates were calculated
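The epoch-to-epoch phase-range propagation in the equation above can be sketched as follows; the function name and sample values are illustrative, and cycle slips, tropospheric corrections and clock terms are ignored in this simplification:

```python
def phase_ranges(initial_range_m, phi_if_series_m):
    """Propagate the phase-range S_m from an initial known geometric range
    S_C(t_0) using epoch-to-epoch differences of the ionosphere-free
    carrier phase Phi_IF (all quantities in meters).

    phi_if_series_m: Phi_IF observations at epochs t_0, t_1, t_2, ...
    Returns S_m at each epoch, anchored at S_m(t_0) = initial_range_m."""
    ranges = [initial_range_m]
    for prev, curr in zip(phi_if_series_m, phi_if_series_m[1:]):
        ranges.append(ranges[-1] + (curr - prev))  # S_m(t_{i+1}) = S_m(t_i) + dPhi
    return ranges

# Hypothetical anchor range of 20,000 km and three phase observations:
sm = phase_ranges(20_000_000.0, [5.0, 7.5, 6.0])
```

In practice this recursion would run per satellite, and the propagated phase-ranges would then feed a position solution each epoch.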

Full Text Available Distributed power systems with renewable energy sources have become very popular in recent years due to the rapid depletion of conventional energy sources. Reasonable sizing of such power systems can improve power supply reliability and reduce the annual system cost. The goal of this work is to optimize the size of a stand-alone hybrid photovoltaic (PV)/wind turbine (WT)/battery (B)/hydrogen system (a hybrid system based on battery and hydrogen, HS-BH) for reliable and economic supply. Two objectives, the minimum annual system cost and maximum system reliability, described by the loss of power supply probability (LPSP), have been addressed for sizing the HS-BH from a more comprehensive perspective, considering the basic load demand, the profit from hydrogen produced by the HS-BH, and an effective energy storage strategy. An improved ant colony optimization (ACO) algorithm has been presented to solve the sizing problem of the HS-BH. Finally, a simulation experiment has been done to demonstrate the developed results, in which some comparisons have been made to emphasize the advantage of the HS-BH, with the aid of data from an island of Zhejiang, China.
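The reliability objective used here, the loss of power supply probability, is commonly defined as the ratio of unmet demand to total demand over the simulation horizon. A minimal sketch under that common definition (the exact formulation in the paper may differ):

```python
def lpsp(demand_kw, supply_kw):
    """Loss of Power Supply Probability over a time series with uniform
    time steps: unserved energy divided by total demanded energy."""
    deficit = sum(max(d - s, 0.0) for d, s in zip(demand_kw, supply_kw))
    total = sum(demand_kw)
    return deficit / total if total > 0 else 0.0

# Hypothetical hourly profile: one hour has a 5 kW shortfall out of 30 kWh demanded.
r = lpsp(demand_kw=[10.0, 10.0, 10.0], supply_kw=[10.0, 5.0, 10.0])
```

A sizing optimizer such as the improved ACO would evaluate this metric for each candidate (PV, WT, battery, hydrogen) configuration and trade it off against annual cost.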

The results of the CMS tracker alignment analysis are presented using data from cosmic tracks, optical survey information, and the laser alignment system at the Tracker Integration Facility at CERN. During several months of operation in the spring and summer of 2007, about five million cosmic track events were collected with a partially active CMS Tracker. This allowed us to perform a first alignment of the active silicon modules with the cosmic tracks using three different statistical approaches, validate the survey and laser alignment system performance, and test the stability of Tracker structures under various stresses and temperatures ranging from +15 °C to -15 °C. Comparison with simulation shows that the achieved alignment precision in the barrel part of the tracker leads to residual distributions similar to those obtained with a random misalignment of 50 (80) microns in the outer (inner) part of the barrel.

Highlights: • Method for optimising the daily operation of photovoltaic-wind-diesel-battery systems. • Weather forecasts of hourly wind speed, irradiation, temperature and load are used. • Each day, five control variables are optimised for the control of the system. • Operating cost includes real ageing of the batteries and the diesel generator. • Results show that the optimal control strategy used for each day led to cost savings. - Abstract: This article presents a method for optimising the daily operation (minimising the total operating cost) of a hybrid photovoltaic-wind-diesel-battery system using model predictive control. The model uses actual weather forecasts of hourly values of wind speed, irradiation, temperature and load. Five control variables are optimised, and their optimal set-point values determine the optimal control strategy for each day. This involves the use of an accurate model for estimating the degradation of the batteries by considering the capacity loss due to corrosion and degradation. The model considers the extra costs of maintaining and replacing the diesel generator due to operation outside its optimal conditions. The optimisation is carried out by means of genetic algorithms. An example of application compares the total operating cost obtained using the optimal control strategy for each day with the cost of using the optimal control strategy found for the whole year, obtaining savings of up to 7.8%. The comparison with the cost of using the “load following” control strategy is also analysed, obtaining savings of up to 37.7%.

Asynchronous communication was established between a host computer (FACOM M-340) and a personal computer (OLIBETTIE S-2250) to obtain the patient information required for RIA test registration. The retrieval system consists of a keyboard input of six numeric codes, the patient's ID, and a real-time reply containing six parameters for the patient: the patient's name, sex, date of birth (including area), department, and out- or inpatient status. By linking this program to the RIA registration program for each patient, the operator can input the name of the RIA test requested. Our simple retrieval program provides a useful data network between different types of host and stand-alone personal computers, and enables accurate and labor-saving registration for RIA tests. (author)

The energetic and environmental life cycle assessment of a 4.2 kWp stand-alone photovoltaic system (SAPV) at the University of Murcia (south-east Spain) is presented. The PV modules and batteries are the energetically and environmentally most expensive elements. The energy pay-back time was found to be 9.08 years and the specific CO₂ emissions were calculated as 131 g/kWh. The SAPV system has been environmentally compared with other supply options (a diesel generator and the Spanish grid), showing lower impacts in both cases. The results show the CO₂-emission reduction potential of SAPV systems in southern European countries and point out the critical environmental issues in these systems. (author)

The European study entitled 'Market Potential Analysis for Introduction of Hydrogen Energy Technology in Stand-Alone Power Systems (H-SAPS)' aimed to establish a broad understanding of the market potential for H-SAPS and provide a basis for promoting new technological applications on a wide scale. The scope of the study was limited to small and medium installations, up to a few hundred kW power rating, based on renewable energy (RE) as the primary energy source. The potential for hydrogen technology in SAPS was investigated through an assessment of the technical potential for hydrogen, a market analysis and an evaluation of external factors. The results are mostly directed towards action by governments and the research community, but industry involvement is also identified. The results include targeted market research, establishment of individual cost targets, regulatory changes to facilitate alternative grid solutions, information and capacity building, focused technology research and bridging the technology gaps. (author)

Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the stand-alone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, so comparing two CADe systems involves multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods, both in terms of FWER and power.
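For concreteness, the Bonferroni correction and a Hochberg-style step-up procedure (one common form of the step-up method) can be sketched as follows; the adjusted step-up method in the study additionally plugs estimated correlations into the critical values, which is not shown here:

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject H_i whenever p_i <= alpha / m; controls the FWER under
    arbitrary dependence, at the cost of being conservative."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def hochberg_step_up_reject(p_values, alpha=0.05):
    """Hochberg's step-up procedure: with p-values sorted ascending,
    find the largest rank k with p_(k) <= alpha / (m - k + 1) and
    reject the k hypotheses with the smallest p-values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices, smallest p first
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= alpha / (m - rank + 1):
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

With p-values (0.01, 0.04, 0.03) and alpha = 0.05, Bonferroni rejects only the first hypothesis, while the step-up procedure rejects all three, illustrating its greater power.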

Full Text Available Purpose: Traditional maneuvers aiming to decrease retroperitoneal bleeding in hemodynamically unstable multi-trauma patients with unstable pelvic fractures are reportedly successful in only approximately 50% of cases. The life-saving effect of extra-peritoneal pressure packing (EPPP) is based on direct compression and control of both venous and arterial retroperitoneal bleeders. This study describes the safety and efficacy of emergent EPPP employment as a stand-alone surgical treatment, that is, carried out without external pelvic fixation or emergent angiography. Materials and Methods: A retrospective chart review of all hemodynamically unstable, multi-trauma patients with mechanically unstable pelvic fractures treated by the EPPP technique at our medical center between the years 2005 and 2011. Survival rates and clinical and physiological outcomes were followed prospectively. Results: Twenty-five of the 181 pelvic fracture patients had biomechanically unstable fractures that required surgical fixation. Fourteen of those 25 patients had deteriorating hemodynamic instability from massive pelvic bleeding that was resistant to resuscitation, and they underwent EPPP as a stand-alone treatment. The procedure successfully achieved hemodynamic stability in all 14 patients and obviated the early mortality associated with massive pelvic bleeding. Three of these patients eventually succumbed to their multiple injuries. Conclusion: Implementation of EPPP improved all measured physiological outcome parameters and the survival rates of hemodynamically unstable multi-trauma patients with unstable pelvic fractures in our trauma center. It provided the unique advantage of directly compressing the life-threatening retroperitoneal bleeders, causing a tamponade effect that stanches venous and arterial pelvic blood flow and obviates the early mortality associated with massive pelvic bleeding.

The aim of this study was to define the extent to which leadership and leadership skills are taught in dental hygiene degree completion programs by comparing stand-alone leadership courses/hybrid programs with programs that infuse leadership skills throughout the curricula. The study involved a mixed-methods approach using qualitative and quantitative data. Semi-structured interviews were conducted with program directors and faculty members who teach a stand-alone leadership course, a hybrid program, or leadership-infused courses in these programs. A quantitative comparison of course syllabi determined differences in the extent of leadership content and experiences between stand-alone leadership courses and leadership-infused curricula. Of the 53 U.S. dental hygiene programs that offer degree completion programs, 49 met the inclusion criteria, and 19 programs provided course syllabi. Of the program directors and faculty members who teach a stand-alone leadership course or leadership-infused curriculum, 16 participated in the interview portion of the study. The results suggested that competencies related to leadership were not clearly defined or measurable in current teaching. Reported barriers to incorporating a stand-alone leadership course included overcrowded curricula, limited qualified faculty, and lack of resources. The findings of this study provide a synopsis of leadership content and gaps in leadership education for degree completion programs. Suggested changes included defining a need for leadership competencies and providing additional resources to educators such as courses provided by the American Dental Education Association and the American Dental Hygienists' Association.

There is a paucity of diagnostic and therapeutic facilities in Nigeria to confirm coronary artery disease and offer appropriate interventional therapy. There is now a private cardiac catheterization laboratory in Lagos, but as there are no sustained open heart surgery programmes, percutaneous coronary interventions are currently being performed without surgical backup. This study was designed to assess the results of stand-alone percutaneous coronary intervention (PCI) as currently practiced in Lagos, Nigeria. This cross-sectional study was conducted between July 2009 and July 2012 and included all patients who underwent PCI in Lagos. Data were extracted from a prospectively maintained database. Coronary artery disease was confirmed in 80 (52.6%) of 152 Nigerians referred with a diagnosis of ischaemic heart disease. There were 53 males (66.2%) and 27 females (33.8%). The average age was 60.3 ± 9.6 years and the average EuroSCORE was 4.5 ± 3.1. Of the 80 patients, 77 (96.3%) had significant stenoses and were candidates for revascularization. The distribution of significant stenoses was one in 32 patients (41.5%), two in 11 patients (14.3%), three in 19 patients (24.7%), four in 13 patients (16.9%) and five in 2 patients (2.6%). PCI was performed in 48 (62.3%) of the patients eligible for revascularization, as the coronary anatomy in the remaining patients was not suitable for PCI. The indication for PCI was myocardial infarction or unstable angina in 39 patients (81.2%). PCI was performed with PTCA plus stenting in 41 patients (85.4%) and with PTCA alone in 7 patients (14.6%), with good angiographic results. Overall, 29 of the 48 patients (60.4%) had complete revascularization of significant stenoses. Complications of PCI were bleeding that required blood transfusion in 1 patient (2.1%), minor femoral haematomas in 2 patients (4.2%), and a major adverse clinical event in 1 patient (2.1%). A stand-alone PCI programme has been developed in Lagos, Nigeria. Both elective

There is an increased need to analyse the effect of atmospheric variables on photovoltaic (PV) production and performance. The outputs from different PV cells under different atmospheric conditions, such as irradiation and temperature, differ from each other, evidencing a knowledge deficiency in PV systems [14]. Maximum power point tracking (MPPT) methods are used to maximize the PV array output power by continuously tracking the maximum power point (MPP). Among all MPPT methods in the literature, perturb and observe (P&O) is the most commonly used for its simplicity and ease of implementation; however, it presents drawbacks such as slow response speed, oscillation around the MPP in steady state, and even tracking in the wrong direction under rapidly changing atmospheric conditions. In order to allow operation around the optimal point Mopt, we have inserted a DC-DC (buck-boost) converter for better matching between the PV array and the load. In this paper, we study maximum power point tracking using adaptive intelligent fuzzy logic control and conventional P&O control for a stand-alone photovoltaic array system. In particular, the performance of the controllers is analyzed under varying weather conditions, with constant temperature and variable irradiation. The proposed system is simulated using MATLAB-SIMULINK. According to the results, the fuzzy logic controller shows better performance during the optimization.
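The P&O logic the record describes is short enough to sketch. The following is a minimal illustrative implementation, not the paper's controller: the PV power curve passed in is a hypothetical stand-in for a real measurement, and the step size and step count are arbitrary.

```python
def perturb_and_observe(power_at, v_start=10.0, dv=0.1, steps=200):
    """Minimal perturb-and-observe MPPT sketch.

    power_at: function mapping panel voltage to output power
    (a stand-in for a real measurement). Returns the operating
    voltage after `steps` perturbations.
    """
    v, direction = v_start, +1.0
    p_prev = power_at(v)
    for _ in range(steps):
        v += direction * dv  # perturb the operating voltage
        p = power_at(v)      # observe the resulting power
        if p < p_prev:       # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Hypothetical PV curve with its maximum power point at 17 V.
mpp = perturb_and_observe(lambda v: -(v - 17.0) ** 2 + 120.0)
```

The steady-state oscillation around the MPP that the abstract mentions is visible here: once near 17 V the voltage keeps stepping back and forth by `dv` rather than settling.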

In recent decades, generation of electricity from solar arrays has increased to meet the world's growing energy demand. However, the utilization rate of the power conditioner in a grid-tied solar power system is low because the operation of solar panels depends on sunlight. Thus, we studied a method by which a small-scale wind power generating system, from a few hundred watts to two or three kilowatts, can be connected to the grid-tied power conditioner of a residential solar power system with a low power rating (single phase, limited to 10 kW) by emulating the characteristic of the solar panel. In this paper, we introduce the application of the grid-tied PV cell emulating system in stand-alone mode to improve the utilization rate of the power conditioner. The simulation and experimental test results verify that the PV cell emulating system can operate the power conditioner of the grid-tied solar power system.

The design of the automation system and the implemented operation control strategy in a stand-alone power system in Greece are fully analyzed in the present study. A photovoltaic array and three wind generators serve as the system's main power sources and meet a predefined load demand. A lead-acid accumulator is used to compensate the inherent power fluctuations (excess or shortage) and to regulate the overall system operation, based on a developed power management strategy. Hydrogen is produced from system excess power in a proton exchange membrane (PEM) electrolyzer and is further stored in pressurized cylinders for subsequent use in a PEM fuel cell in cases of power shortage. A diesel generator complements the integrated system and is employed only in emergency cases, such as subsystem failures. The performance of the automatic control system is evaluated through the real-time operation of the power system, where data from the various subsystems are recorded and analyzed using a supervised data acquisition unit. Various network protocols were used to integrate the system devices into one central control system, thereby compensating for the differences between the chemical and electrical subunits. One of the main advantages is the ability to monitor the process remotely, whereby users can make changes to the system's principal variables. Furthermore, the performance of the implemented power management strategy is evaluated through simulated scenarios, including a case study analysis of the system's ability to meet higher than expected electrical load demands.
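The priority order described in such power management strategies (battery first, then electrolyzer or fuel cell, diesel only in emergencies) can be sketched as a rule table. This is an illustrative toy dispatcher in the spirit of the record, not the implemented Greek system; all thresholds and unit names are assumptions.

```python
def dispatch(p_renewable, p_load, soc, soc_min=0.3, soc_max=0.9):
    """Toy rule-based power management for a hybrid stand-alone system.

    Surplus power charges the battery, then feeds the electrolyzer;
    shortage discharges the battery, then runs the fuel cell. Returns
    a dict of setpoints in the same (arbitrary) power units as the
    inputs; positive battery values mean charging.
    """
    out = {"battery": 0.0, "electrolyzer": 0.0, "fuel_cell": 0.0, "diesel": 0.0}
    balance = p_renewable - p_load
    if balance >= 0:                       # surplus
        if soc < soc_max:
            out["battery"] = balance       # charge the accumulator
        else:
            out["electrolyzer"] = balance  # make hydrogen with the excess
    else:                                  # shortage
        need = -balance
        if soc > soc_min:
            out["battery"] = -need         # discharge
        else:
            out["fuel_cell"] = need        # burn stored hydrogen
            # an emergency case (e.g. fuel cell offline) would set
            # out["diesel"] instead
    return out
```

A real controller layers converter dynamics, hydrogen tank limits and device failure handling on top of rules like these.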

This thesis gives a systematic review of the fundamentals of energy systems, the governing physical and chemical laws related to energy, the inherent characteristics of energy systems, and the availability of the earth's energy. It shows clearly why solar-hydrogen systems are one of the most viable options for the future. The main subject discussed is the modelling of SAPS (Stand-Alone Power Systems), with a focus on photovoltaic-hydrogen energy systems. Simulation models for a transient simulation program are developed for PV-H{sub 2} components, including models for photovoltaics, water electrolysis, hydrogen storage, fuel cells, and secondary batteries. A PV-H{sub 2} demonstration plant in Juelich, Germany, is studied as a reference plant and the models are validated against data from this plant. Most of the models developed were found to be sufficiently accurate to perform short-term system simulations, while all were more than accurate enough to perform long-term simulations. Finally, the verified simulation models are used to find the optimal operation and control strategies of an existing PV-H{sub 2} system. The main conclusion is that the simulation methods can be successfully used to find optimal operation and control strategies for a system with fixed design, and similar methods could be used to find alternative system designs. 148 refs., 78 figs., 31 tabs.

Implementation of a hybrid energy system (HES) is generally considered a promising way to satisfy the electrification requirements of remote areas. In the present study, a novel decision-making methodology is proposed to identify the best compromise configuration of a HES from a set of feasible combinations obtained from HOMER. For this purpose, a multi-objective function, which comprises four crucial and representative indices, is formulated by applying the weighted sum method. The entropy weight method is employed as a quantitative methodology for calculating the weighting factors, to enhance the objectivity of the decision-making. Moreover, the optimal design of a stand-alone PV/wind/battery/diesel HES in Yongxing Island, China, is conducted as a case study to validate the effectiveness of the proposed method. Both the simulation and optimization results indicate that the optimization method is able to identify the best trade-off configuration among system reliability, economy, practicability and environmental sustainability. Several useful conclusions are drawn by analyzing the operation of the best configuration.
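The entropy weight method mentioned above follows a standard recipe: normalise each criterion column to proportions, compute its Shannon entropy, and weight criteria by how far their entropy falls below the maximum (criteria that vary little across alternatives get little weight). A minimal sketch of that textbook formula, with a made-up decision matrix:

```python
import math

def entropy_weights(matrix):
    """Entropy weight method sketch for m alternatives x n criteria.

    Entries are assumed to be positive, benefit-type values already
    brought to comparable scales. Criteria whose values vary more
    across alternatives carry more information and get larger weights.
    """
    m = len(matrix)
    n = len(matrix[0])
    k = 1.0 / math.log(m)  # normalises entropy to [0, 1]
    d = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        d.append(1.0 - e)  # degree of divergence of criterion j
    s = sum(d)
    return [dj / s for dj in d]
```

Combined with the weighted sum method, the resulting weights multiply the (normalised) index values of each candidate configuration to give a single score.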

An experimental program to develop and build a dielectric-based slab-symmetric structure (the micro-accelerator platform, or MAP) for generating and accelerating low-energy electrons is underway at UCLA and Manhattanville College. This optical acceleration structure is effectively a resonant cavity powered by a side-coupled laser, and has applications as a radiation source for medicine or industry. We present recent experimental and computational results on the accelerator, and progress toward its incorporation into a self-contained particle source. Such a particle source would incorporate a micron-scale electron emitter and a non-relativistic capture region to enable self-injection into the synchronous field within the accelerator. A prototype of the accelerator itself has been constructed from candidate dielectric materials using micromanufacturing techniques; the current status of the testing program is described. A novel electron emitter incorporating pyroelectric crystals with field-enhancing tips has been demonstrated to produce steady currents; the results are dependent on tip geometry, and appear suitable for injection into a microstructure. Extension of the MAP concept to non-relativistic velocities, as in the stand-alone source, requires a tapered structure that gives rise to numerous complications including beam defocusing and manufacturing challenges; approaches for addressing these complications are mentioned.

In this paper, a life cycle analysis has been carried out to evaluate the overall performance of a given rated stand-alone solar photovoltaic (SAPV) system in terms of basic energy matrices, life cycle cost analysis, and earned carbon credit. Further, the experimentally calculated actual on-field life cycle performance results of an existing outdoor SAPV system (almost 20 years old) have been presented with respect to the potential (maximum) performance of the same SAPV system, estimated under the same environmental conditions of solar intensity, ambient temperature and PV operating temperature as obtained during the actual on-field performance evaluation. This new approach of overall performance evaluation, which considers the on-field SAPV system installation as new (i.e. with potential/maximum performance) and old (i.e. with actual performance) under the same environmental conditions, provides an inclusive comparative life cycle assessment of the on-field PV system. - Highlights: • We present a comparative life cycle assessment methodology for an outdoor PV system. • We evaluate on-field PV system life performance by considering it as new and old. • We examine the fall in actual on-field PV performance compared to potential performance. • PV system techno-economic performance reduces with long term exposure, or aging. • We observe a fall in earned carbon credit and a rise in cost/kWh as the PV system ages.
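The basic energy matrices referred to above reduce to simple ratios. The sketch below uses the standard textbook definitions of energy payback time (EPBT) and energy production factor, plus an undiscounted cost per kWh; all input numbers are hypothetical, not the paper's data.

```python
def pv_life_cycle_metrics(embodied_energy_kwh, annual_yield_kwh,
                          life_cycle_cost, lifetime_years):
    """Back-of-the-envelope energy matrices for a stand-alone PV system.

    Illustrative formulas only (no discounting): energy payback time
    (EPBT), energy production factor (EPF) and unit cost of electricity
    over the system lifetime.
    """
    epbt = embodied_energy_kwh / annual_yield_kwh  # years to repay input energy
    epf = lifetime_years / epbt                    # lifetime output per input
    cost_per_kwh = life_cycle_cost / (annual_yield_kwh * lifetime_years)
    return epbt, epf, cost_per_kwh
```

The paper's observation follows directly from the last formula: as aging lowers `annual_yield_kwh`, the cost per kWh rises even with the life cycle cost held fixed.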

On the way to simulating the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which makes it difficult to define a fitness. The main statistical effects of the adaptive condition result from the tendency toward small change; to appear, they only need a statistically correct size of the damage initiated by an evolutionary change of the system. This observation allows one to cut the feedback loops and, in effect, to obtain a particular statistically correct state instead of a long circular attractor, which in the quenched model is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e. the statistical connections of structural parameters of the initial change with the size of the resulting damage. It is a reversed-annealed method: functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are depicted correctly, in comparison to the Derrida annealed approximation, which expects equilibrium levels for large networks; the algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.
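The "each damaged node counted only once" idea above amounts to a reachability walk on the network. The sketch below is a generic damage-spreading toy, not the author's Pascal program: the network, spread probability and trial counts are all made up for illustration.

```python
import random

def damage_size(out_links, start, spread_prob, rng):
    """Sketch of damage spreading on a directed network.

    Starting from one perturbed node, damage propagates along each
    outgoing connection with probability `spread_prob`; every node is
    visited at most once, which mimics cutting feedback loops so that
    each damaged node is counted only once.
    """
    damaged = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for nbr in out_links.get(node, []):
            if nbr not in damaged and rng.random() < spread_prob:
                damaged.add(nbr)
                frontier.append(nbr)
    return len(damaged)

rng = random.Random(1)
# Hypothetical 100-node network with two random out-links per node.
net = {i: [rng.randrange(100) for _ in range(2)] for i in range(100)}
sizes = [damage_size(net, rng.randrange(100), 0.5, rng) for _ in range(200)]
```

Collecting `sizes` over many initial perturbations gives exactly the kind of damage-size statistics the abstract relates to structural parameters of the network.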

We report the case of a 73-year-old female with severe degenerative scoliosis and back and leg pain that was successfully treated with stand-alone cages via an extreme lateral transpsoas approach. This patient had declined open surgery and instrumentation due to her advanced age and concerns about potential side effects.

Developments in power electronics have enabled the widespread application of Pulse Width Modulation (PWM) inverters, notably for obtaining electricity from renewable systems. This paper critically reviews previous studies on designing a stand-alone inverter and models the inverter with two control loops to improve and provide high-quality power from a stand-alone inverter. Multi-loop control techniques are utilised: the first loop is a capacitor current control that provides active damping and improves transient and steady-state inverter performance. Capacitor current control is cheaper than inductor current control, since a small current-sensing resistor is used. The second loop is an output voltage control that improves system performance and also regulates the output voltage. The power quality of the off-grid system is measured experimentally and compared with the grid power, showing the power quality of the off-grid system to be better than that of the grid. Suggestions and key findings for designing the stand-alone inverter are given based on the reviewed publications and the literature's point of view.

Highlights: • A stand-alone solar driven Organic Rankine Cycle is optimized parametrically. • The system is optimized energetically and financially. • Nine working fluids are tested, with cyclohexane being the most suitable. • A collecting area of 25,000 m² of parabolic trough collectors is the optimum solution. • The maximum IRR is 13.46% and the payback period is about 9 years. - Abstract: The use of solar thermal energy for electricity production is a clean and sustainable way to cover the increasing energy needs of our society. The most mature technology for capturing solar energy at high temperature levels is the parabolic trough collector (PTC). In this study, an Organic Rankine Cycle (ORC) coupled with PTC is analyzed parametrically in order to be optimized financially and energetically. The first step is the thermodynamic investigation of the ORC by using various working fluids. The second step is the energetic and financial investigation of the total system, which includes the solar field, the storage tank and the ORC module. By testing many combinations of collecting areas and storage tank volumes, cyclohexane finally proved to be the most suitable working fluid for producing 1 MWel with PTC. Specifically, in the optimum situation a solar field of 25,000 m² with a storage tank of about 300 m³ leads to a payback period of 9 years and to an internal rate of return (IRR) equal to 13.46%. Moreover, an economic comparison for different commercial collectors is presented, with Eurotrough ET-150 being the financially optimum solution for this case study.
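The financial figures of merit used above (NPV, IRR, payback period) are standard and easy to reproduce in miniature. A minimal sketch with hypothetical cash flows; the investment and revenue numbers below are illustrative, not the paper's:

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-8):
    """Internal rate of return by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def simple_payback(cashflows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical plant: 10 M$ investment, 1.5 M$ net revenue for 25 years.
flows = [-10.0] + [1.5] * 25
```

In a parametric study like the one described, `flows` would be recomputed for each combination of collecting area and storage volume, and the configuration maximising IRR selected.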

In this paper we use an adaptive Artificial Neural Network (ANN) to find a suitable model for sizing Stand-Alone Photovoltaic (SAPV) systems based on a minimum of input data. The model combines a Radial Basis Function (RBF) network and an Infinite Impulse Response (IIR) filter in order to accelerate the convergence of the network. For the sizing of a photovoltaic (PV) system, we need to determine the optimal sizing coefficients (K_PV, K_B). These coefficients allow us to determine the number of solar panels and storage batteries necessary to satisfy a given consumption, especially in isolated sites where global solar radiation data are not always available, and they are considered the most important parameters for sizing a PV system. Results obtained by classical models (analytical, numerical, analytical-numerical, B-spline function) and new models like feed-forward (MLP), radial basis function (RBF), MLP-IIR and RBF-IIR have been compared with experimental sizing coefficients in order to illustrate the accuracy of the newly developed model. The model has been trained using 200 known optimal sizing coefficients corresponding to 200 locations in Algeria. In this way, the adaptive model was trained to accept and even handle a number of unusual cases; the unknown validation sizing-coefficient set produced very accurate estimates, and a correlation coefficient of 98% was obtained between the calculated coefficients and those estimated by the RBF-IIR model. This result indicates that the proposed method can be successfully used for the estimation of optimal sizing coefficients of SAPV systems for any location in Algeria, and the methodology can be generalized to different locations over the world. (author)

The paper presents the results of a technical and economic analysis of three stand-alone hybrid power systems based on renewable energy sources which supply a specific group of low-power consumers. This particular case includes measuring sensors and obstacle lights on a meteorological mast for wind measurements requiring an uninterrupted power supply in cold climate conditions. Although these low-power (100 W) measuring sensors and obstacle lights use little energy, their energy consumption does not match the available solar energy on a daily or seasonal basis. In the paper, the complementarity of renewable energy sources was analysed, as well as short-term lead-acid battery-based storage and seasonal, hydrogen-based (electrolyser, H2 tank, and fuel cells) storage. These relatively complex power systems were proposed earlier for high-power consumers only, while this study specifically highlights the role of the hydrogen system in supplying low-power consumers. The analysis employed a numerical simulation method using the HOMER software tool. The results of the analysis suggest that solar and wind-solar systems, under the meteorological conditions referred to in this paper, include a relatively large number of lead-acid batteries. Additionally, the analysis suggests that the use of hydrogen power systems for supplying low-power consumers is entirely justifiable, as it significantly reduces the number of batteries (two at minimum in this particular case). It was shown that the increase in costs induced by the hydrogen system is acceptable.

In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results are normally distributed. We demonstrate that in this case error analysis can be carried out in a simple way, and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)
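The accept/reject rule under study is the same whether the target is an SU(3) action or a one-dimensional density. As a far simpler stand-in (not the paper's lattice implementation; step size, burn-in and seed are arbitrary), here is a Metropolis sampler for a standard normal target, whose output statistics can then be checked for normality in the spirit of the abstract:

```python
import math, random, statistics

def metropolis_gaussian(n_samples, step=1.0, burn_in=1000, seed=7):
    """Metropolis sampler with a standard normal target density.

    Uniform symmetric proposals; acceptance probability
    min(1, p(proposal)/p(x)) for p(x) proportional to exp(-x^2/2).
    """
    rng = random.Random(seed)
    x, out = 0.0, []
    for i in range(burn_in + n_samples):
        proposal = x + rng.uniform(-step, step)
        # accept with probability min(1, exp((x^2 - proposal^2)/2))
        if rng.random() < math.exp(min(0.0, 0.5 * (x * x - proposal * proposal))):
            x = proposal
        if i >= burn_in:
            out.append(x)
    return out

samples = metropolis_gaussian(20000)
```

Because successive samples are correlated, a naive standard error of the mean underestimates the true error; that interaction between measurement strategy and error analysis is precisely the paper's point.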

Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
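The selection-mutation mechanism above can be illustrated on a static problem rather than on trajectories: the "last particle" variant of adaptive multilevel splitting estimates a Gaussian tail probability by repeatedly killing the lowest particle and regrowing it above the current level. This is a simplified sketch under that static assumption, not the reactive-trajectory algorithm of the paper; the particle count, MCMC step count and seed are arbitrary.

```python
import math, random

def ams_tail_probability(target, n_particles=100, mcmc_steps=20, seed=3):
    """Last-particle adaptive multilevel splitting sketch.

    Estimates P(X > target) for X ~ N(0, 1). Each iteration kills the
    minimal particle, clones a random survivor and mutates the clone
    with a short Metropolis run restricted to values above the killed
    level; the estimator is (1 - 1/N)^k after k iterations.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    k = 0
    while min(particles) < target:
        level = min(particles)
        i = particles.index(level)
        j = rng.randrange(n_particles - 1)   # pick a random survivor
        if j >= i:
            j += 1
        x = particles[j]
        for _ in range(mcmc_steps):          # Metropolis above the level
            prop = x + rng.uniform(-0.5, 0.5)
            if prop > level and rng.random() < math.exp(
                    min(0.0, 0.5 * (x * x - prop * prop))):
                x = prop
        particles[i] = x
        k += 1
    return (1.0 - 1.0 / n_particles) ** k
```

Here the reaction coordinate is trivially the particle value itself; the paper's phase-transition pathologies arise precisely when a multidimensional system is filtered through a non-optimal choice of that coordinate.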

The paper tackles the problem of the economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm for a technological system, performed using statistical tests and taking account of the reliability index, allows one to estimate the level of technical excellence of the machinery and to assess the efficiency of its design reliability against its performance. The economic feasibility of its application is determined on the basis of the service quality of the technological system, with further forecasting of the volumes and range of spare parts supply.

This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.

Single-phase stand-alone PV systems are suitable for household applications in remote areas. Hybrid battery/ultra-capacitor energy storage can reduce charge and discharge cycles and avoid deep discharges of the battery. This paper proposes a compact seven-switch structure for a stand-alone PV system, which otherwise needs a nine-switch configuration: one switch for the boost converter, four switches for the single-phase inverter and four switches for the two DC/DC converters of the battery and ultra-capacitor. It is well known that a bulky DC-link capacitor is always required to absorb the second-order harmonic current caused by the single-phase inverter. In the proposed compact topology, a small DC-link capacitor can achieve the same function through charging/discharging control of the ultra-capacitor to mitigate the second-order ripple current. Simulation results are provided to validate the effectiveness of the proposed topology.

Study Design: A systematic review. Objective: The objective of this study was to determine the safety and efficacy of stand-alone anterior lumbar interbody fusion (sa-ALIF) for the treatment of symptomatic isthmic spondylolisthesis of L5-S1 by assessing the level of available clinical and radiographic evidence. Methods: A systematic review utilizing Medline, Embase, and Scopus online databases was undertaken. Clinical, radiographic, and adverse outcome data were extracted for the relevant ist...

Question: Is a combination of standing, electrical stimulation and splinting more effective than standing alone for the management of ankle contractures after severe brain injury? Design: A multi-centre randomised trial with concealed allocation, assessor blinding and intention-to-treat analysis. Participants: Thirty-six adults with severe traumatic brain injury and ankle plantarflexion contractures. Intervention: All participants underwent a 6-week program. The experimental group received tilt table standing, electrical stimulation and ankle splinting. The control group received tilt table standing alone. Outcome measures: The primary outcome was passive ankle dorsiflexion with a 12 Nm torque. Secondary outcomes included: passive dorsiflexion with lower torques (3, 5, 7 and 9 Nm); spasticity; the walking item of the Functional Independence Measure; walking speed; global perceived effect of treatment; and perceived treatment credibility. Outcome measures were taken at baseline (Week 0), end of intervention (Week 6), and follow-up (Week 10). Results: The mean between-group differences (95% CI) for passive ankle dorsiflexion at Week 6 and Week 10 were –3 degrees (–8 to 2) and –1 degrees (–6 to 4), respectively, in favour of the control group. There was a small mean reduction of 1 point in spasticity at Week 6 (95% CI 0.1 to 1.8) in favour of the experimental group, but this effect disappeared at Week 10. There were no differences in the other secondary outcome measures except the physiotherapists' perceived treatment credibility. Conclusion: Tilt table standing with electrical stimulation and splinting is not better than tilt table standing alone for the management of ankle contractures after severe brain injury. Trial registration: ACTRN12608000637347. [Leung J, Harvey LA, Moseley AM, Whiteside B, Simpson M, Stroud K (2014) Standing with electrical stimulation and splinting is no better than standing alone for management of ankle plantarflexion

Observations by present and future X-ray telescopes include a large number of serendipitous sources of unknown types. They are a rich source of knowledge about X-ray dominated astronomical objects, their distribution, and their evolution. The large number of these sources does not permit their individual spectroscopic follow-up and classification. Here we use Chandra Multi-Wavelength public data to investigate a number of statistical algorithms for the classification of X-ray sources with optical imaging follow-up. We show that, up to statistical uncertainties, each class of X-ray sources has specific photometric characteristics that can be used for its classification. We assess the relative and absolute performance of the classification methods and measured features by comparing the behaviour of physical quantities for statistically classified objects with what is obtained from spectroscopy. We find that, among the methods we have studied, the multi-dimensional probability distribution is the best for classifying both source type and redshift, but it needs a sufficiently large input (learning) data set. In the absence of such data, a mixture of various methods can give a better final result. We discuss some potential applications of the statistical classification and the enhancement of information obtained in this way. We also assess the effect of the classification methods and the input data set on astronomical conclusions such as the distribution and properties of X-ray selected sources.

Highlights: ► Backup sizing analyses for a PV–wind energy supplied stand-alone house are completed. ► Source and demand side dynamics are considered for the first time in backup sizing. ► Backup size is reduced by 10% compared to the backup size found with hourly values. ► The importance of data resolution in sizing studies for such systems is shown. - Abstract: The use of solar and wind energy to supply the electrical demand of a stand-alone residential house is investigated. Combining solar and wind energy sources provides a more reliable power source for stand-alone applications since they complement each other. Backup units (battery/supercapacitor) are also needed for uninterrupted energy. For proper backup sizing in such systems, high resolution load, wind speed and solar radiation data must be used, as opposed to the hourly averaged data found in the literature. In this study, high resolution data on both the load side and the source side are collected experimentally. The collected data are then used as input to system simulations in Matlab/Simulink for sizing the backup in the considered hybrid power system. Backup state of charge (SOC) is used as the decision criterion. It is shown that, when load and source dynamics are considered, approximately 10% less backup size is required compared to the backup size found with hourly averaged values. The study shows the importance of data resolution for backup sizing in such systems and could be a guide for renewable energy system designers.
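The SOC-based sizing loop described above can be sketched as a time-stepping energy balance. This is an illustrative stand-in for the Matlab/Simulink simulation, not the authors' model: the function names, the single round-trip efficiency and the full-at-start assumption are all simplifications.

```python
def backup_survives(gen_w, load_w, capacity_wh, dt_s, eff=0.9):
    """Check whether a backup of `capacity_wh` rides through a power trace.

    gen_w / load_w: equal-length generation and load samples in watts,
    spaced dt_s seconds apart (stand-ins for the measured high
    resolution data). The battery starts full; returns False if the
    state of charge ever reaches empty.
    """
    soc_wh = capacity_wh
    for g, l in zip(gen_w, load_w):
        balance_wh = (g - l) * dt_s / 3600.0
        if balance_wh >= 0:
            soc_wh = min(capacity_wh, soc_wh + eff * balance_wh)  # charge
        else:
            soc_wh += balance_wh / eff                            # discharge
        if soc_wh <= 0:
            return False
    return True

def minimum_backup(gen_w, load_w, dt_s, step_wh=10.0):
    """Smallest capacity (to step_wh resolution) that survives the trace."""
    cap = step_wh
    while not backup_survives(gen_w, load_w, cap, dt_s):
        cap += step_wh
    return cap
```

Running `minimum_backup` on one-second traces versus hourly averaged versions of the same traces exposes exactly the resolution effect the study quantifies: averaging smooths out the short deficits that actually set the required capacity.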

This master's thesis analyses and implements a stand-alone photovoltaic system based on a decentralized 'Multi String' topology. The proposed system is composed of a set of DC-DC converters linked to the PV arrays of panels, a bidirectional converter to control the charge/discharge process of the battery bank and ensure the specifications of the DC link, and a full-bridge inverter that feeds the AC loads. All the operation modes in which the stand-alone PV system can work are presented and analyzed. As the chief aim is to ensure the energy balance of the stand-alone PV system, independent control loops for each converter of the PV system are presented, along with a proposed supervisory control that, based on information about the conditions of the DC link and the battery bank, defines which operation mode should be active, in order to maximize the power extracted from the PV arrays and the life cycle of the battery bank while ensuring the uninterrupted feeding of energy to the loads. Finally, simulation and experimental results validate the operation of the proposed system under different load and solar radiation conditions. (author)

It remains debatable whether cervical spine fusion cages should be filled with any kind of bone or bone substitute. Cortical and subcortical bone from the anterior and posterior osteophytes of the segment could be used to fill the cage. The purposes of the present study are to evaluate the clinical and radiological outcomes, including bone fusion and subsidence, after anterior cervical discectomy and fusion using a stand-alone cage packed with local autobone graft. Thirty-one patients who underwent anterior cervical fusion using a stand-alone polyetheretherketone (PEEK) cage packed with local autobone graft from July 2009 to December 2011 were enrolled in this study. Bone fusion was assessed by cervical plain radiographs and computed tomographic scan. Nonunion was evaluated according to the absence of a bony bridge on computed tomographic scan. Subsidence was defined as a ≥2 mm decrease of the interbody height at the final follow-up compared to that measured at the immediate postoperative period. Subsidence was observed in 7 patients (22.6%). Of the 7 patients with subsidence greater than 2 mm, nonunion developed in 3. In three patients, subsidence greater than 2 mm was related to endplate damage during intraoperative endplate preparation. Solid bone fusion was achieved in 28 out of 31 patients (90.3%). With proper patient selection and careful endplate preparation, anterior cervical discectomy and fusion (ACDF) using a stand-alone PEEK cage packed with local autobone graft could be a good alternative to standard ACDF techniques with plating.

Many studies attest to the excellent results achieved using anterior lumbar interbody fusion (ALIF) for degenerative spondylolisthesis. The purpose of this report is to document a rare instance of L-4 vertebral body fracture following use of a stand-alone interbody fusion device for L3-4 ALIF. The patient, a 55-year-old man, had suffered intractable pain of the back, right buttock, and left leg for several weeks. Initial radiographs showed Grade I degenerative spondylolisthesis, with instability in the sagittal plane (upon 15° rotation) and stenosis of central and both lateral recesses at the L3-4 level. Anterior lumbar interbody fusion of the affected vertebrae was subsequently conducted using a stand-alone cage/plate system. Postoperatively, the severity of spondylolisthesis diminished, with resolution of symptoms. However, the patient returned 2 months later with both leg weakness and back pain. Plain radiographs and CT indicated device failure due to anterior fracture of the L-4 vertebral body, and the spondylolisthesis had recurred. At this point, bilateral facetectomies were performed, with reduction/fixation of L3-4 by pedicle screws. Again, degenerative spondylolisthesis improved postsurgically and symptoms eased, with eventual healing of the vertebral body fracture. This report documents a rare instance of L-4 vertebral body fracture following use of a stand-alone device for ALIF at L3-4, likely as a consequence of angular instability in degenerative spondylolisthesis. Under such conditions, additional pedicle screw fixation is advised.

El Oyameyo is an ecological site located to the south-west of the Topilejo town, D.F., at 19°25' North latitude, 99°5' West longitude, and at an altitude of 3100 m. At present, there are 10 families living at this place. They have energy generators to produce their own electricity by means of solar or wind energy, using photovoltaic (PV) technology and eolic systems, respectively. There are three different configurations of energy generators: DC regulated PV systems, AC regulated PV systems and one PV-wind hybrid system. The electrical power installed for the stand-alone PV systems ranges from 48 Wp up to 768 Wp. Among these, 4 PV systems are configured as DC regulated systems, and the other 6 as AC regulated systems. All these systems use lead-acid battery (sealed or vented) banks to store the energy produced daily by the systems. The PV-wind hybrid system is formed, at present, by a 5.0 kW wind generator, a PV array of 768 Wp, a 37.8 kWh storage battery bank and a 5.0 kW DC/AC inverter. In this work, we report the electricity generated, the load pattern and the overall system performance of the photovoltaic-wind hybrid system. The technical characteristics, energy tests on the hybrid system and the experience obtained from energy handling and system maintenance for all the systems are presented. We found that all the systems have shown good performance and users' satisfaction.

The stand-alone treatment of degenerative cervical spine pathologies is a proven method in clinical practice. However, its impact on subsidence, the resulting changes to the profile of the cervical spine and the possible influence on clinical results compared to treatment with additive plate osteosynthesis remain under discussion. This study was designed as a retrospective observational cohort study to test the hypothesis that radiographic subsidence of cervical cages is not associated with adverse clinical outcomes. 33 cervical segments in 17 patients (11 female, 6 male; mean age 56 years, range 33-82 years) were treated surgically by ACDF with a stand-alone cage and re-examined after a mean of eight and twenty-six months by means of radiology and score assessment (Medical Outcomes Study Short Form (MOS-SF 36), Oswestry Neck Disability Index (ONDI), painDETECT questionnaire and the visual analogue scale (VAS)). Subsidence was observed in 54.5% of segments (18/33) and 70.6% of patients (12/17). Subsidence was observed in 36.4% of segments (12/33) at the first follow-up (mean eight months). At the second follow-up (mean 26 months), full radiographic fusion was seen in 100%. MOS-SF 36, ONDI and VAS did not show any significant difference between cases with and without subsidence in the two-sample t-test. Only in one score (the painDETECT questionnaire) did a statistically significant difference emerge between the two groups (p = 0.03; α = 0.05). However, preoperative painDETECT scores differed significantly between patients with subsidence (13.3, falling to 12.6) and patients without subsidence (7.8, falling to 6.3). The radiological findings indicated 100% healing after stand-alone treatment with ACDF. Subsidence occurred in about half of the segments treated. No impact on the clinical results was detected in the medium-term study period.

JChainsAnalyser is a Java-based program for the analysis of two-dimensional images of magneto-rheological fluids (MRF) at low particle concentration obtained using the video-microscopy technique. MRF are colloidal dispersions of micron-sized polarizable particles in a carrier fluid with medium to low viscosity. When a magnetic field is applied to the suspension, the particles aggregate, forming chains or clusters. Aggregation dynamics [P. Domínguez-García, S. Melle, J.M. Pastor, M.A. Rubio, Phys. Rev. E 76 (2007) 051403] and the morphology of the aggregates [P. Domínguez-García, S. Melle, M.A. Rubio, J. Colloid Interface Sci. 333 (2009) 221-229] have been studied by capturing images of the fluid and analyzing them with this software. The program automatically analyzes the MRF images by means of a suitable combination of different imaging methods, while magnitudes and statistics are calculated and saved in data files. It is possible to run the program on a desktop computer, using the GUI (graphical user interface), or on a cluster of processors or a remote computer by means of command-line instructions. Program summary: Program title: JChainsAnalyser. Catalogue identifier: AEDT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 79 071. No. of bytes in distributed program, including test data, etc.: 4 367 909. Distribution format: tar.gz. Programming language: Java 2. Computer: Any computer with Java Runtime Environment (JRE) installed. Operating system: Any OS with Java Runtime Environment (JRE) installed. RAM: Typically, 3.3 MB. Classification: 23. External routines: ImageJ, ij-imageIO, jdom, L2FProd. Nature of problem: The video-microscopy technique usually produces a large number of images to analyze

Anterior lumbar interbody fusion (ALIF) is an established treatment for structural instability associated with symptomatic disk degeneration (SDD). Stand-alone ALIF offers many advantages; however, it may increase the risk of non-union. Recombinant human bone morphogenetic protein-2 (BMP-2) may enhance the fusion rate but is associated with postoperative complications. The optimal dose of BMP-2 remains unclear. This study assessed the fusion and subsidence rates of stand-alone ALIF using the SynFix-LR interbody cage with 6 mg/level of BMP-2. Thirty-two ALIF procedures were performed by a single surgeon in 25 patients. Twenty-five procedures were performed for SDD without spondylolisthesis (SDD group) and seven procedures were performed for SDD with grade-I olisthesis (SDD-olisthesis group). Patients were followed up for a mean of 17 ± 6 months. Solid fusion was achieved in 29 cases (90.6%) within 6 months postoperatively. Five cases of implant subsidence were observed (16%). Four of these occurred in the SDD-olisthesis group and one in the SDD group (57% vs. 4%, respectively; p = 0.004). Three cases of subsidence failed to fuse and required revision. The body mass index of patients with olisthesis who developed subsidence was higher than that of those who did not (29 ± 2.6 vs. 22 ± 6.5, respectively; p = 0.04). No BMP-2-related complications occurred. The overall fusion rate of stand-alone ALIF using the SynFix-LR system with BMP-2 was 90.6%, comparable with other published series. No BMP-2-related complication occurred at a dose of 6 mg/level. Degenerative spondylolisthesis and obesity seemed to increase the rate of implant subsidence, and thus we believe that adding posterior fusion should be considered in these cases.

Retrospective comparative study. Two polyetheretherketone (PEEK) cages of different designs were compared in terms of the postoperative segmental kyphosis after anterior cervical discectomy and fusion. Segmental kyphosis occasionally occurs after the use of a stand-alone cage for anterior cervical discectomy and fusion. Although PEEK material seems to have less risk of segmental kyphosis compared with other materials, the occurrence of segmental kyphosis for PEEK cages has been reported to be from 0% to 29%. There have been a few reports that addressed the issue of PEEK cage design. A total of 41 consecutive patients who underwent single-level anterior discectomy and fusion with a stand-alone cage were included. Either a round tube-type (Solis; 18 patients, S-group) or a trapezoidal tube-type (MC+; 23 patients, M-group) cage was used. The contact area between the cage and the vertebral body is larger in MC+ than in Solis, and anchoring pins were present in the Solis cage. The effect of the cage type on the segmental angle (SA) (lordosis vs. kyphosis) at postoperative month 24 was analyzed. Preoperatively, segmental lordosis was present in 12/18 S-group and 16/23 M-group patients (P=0.84). The SA was more lordotic than the preoperative angle in both groups just after surgery, with no difference between groups (P=0.39). At 24 months, segmental lordosis was observed in 9/18 S-group and 20/23 M-group patients (P=0.01). The patients in M-group were 7.83 times more likely than patients in S-group (P=0.04; odds ratio, 7.83; 95% confidence interval, 1.09-56.28) not to develop segmental kyphosis. The design of the PEEK cage used may influence the SA, and this association needs to be considered when using stand-alone PEEK cages.

Australia's National Mental Health Strategy's statement of rights and responsibilities states that children and adolescents admitted to a mental health facility or community program have the right to be separated from adult patients and provided with programs suited to their developmental needs. However, in rural Australia, where a lack of healthcare services, financial constraints, greater service delivery areas and fewer mental healthcare specialists represent the norm, Child and Adolescent Mental Health Services (CAMHS) are sometimes co-located with adult mental health services. The aim of the present study was to evaluate the impact of a recent relocation of a regional CAMHS in Victoria from co-located to stand-alone premises. Six CAMHS clinicians who had experienced service delivery at the co-located setting and the current stand-alone CAMHS setting were interviewed about their perceptions of the impact of the relocation on service delivery. An exploratory interviewing methodology was used owing to the lack of previous research in this area. Interview data were transcribed and analysed using interpretative phenomenological analysis techniques. Findings indicated a perception that the relocation was positive for clients due to the family-friendly environment at the new setting and the separation of CAMHS from adult psychiatric services. However, the impact of the relocation on clinicians was marked by a perceived loss of social capital with adult psychiatric service clinicians. These results provide an increased understanding of the effects of service relocation and the influence of co-located versus stand-alone settings on mental health service delivery, an area where little prior research exists.

The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed in our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced thermodynamic quantities such as the partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T) into AIT. We then discovered that, in this interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T ∈ (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument at the same level of mathematical rigor as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which realizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
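For reference, the partition function at the core of this interpretation can be written as follows (a sketch of the standard definition from the cited works; U denotes an optimal prefix-free machine and |p| the length of a program p):

```latex
Z(T) \;=\; \sum_{p \,\in\, \operatorname{dom} U} 2^{-|p|/T}
```

At temperature T = 1 this reduces to Chaitin's halting probability Ω, the prototypical partially random real, which motivates reading the compression rate, via partial randomness, as a temperature.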

such as Fischer-Tropsch (FT) diesel. The embedding of the FT plant into stand-alone, power-mode-based plants for the production of a synthetic fuel is a promising practice, which requires an extensive adaptation of conventional techniques to the special chemical needs found in gasified biomass. Because there are currently no plans to engage the FT process in Thailand, the authors have chosen to focus this work on improving FT configurations in existing biomass gasification facilities (10 MWth). A process simulation model for calculating extended unit operations in a demonstrative context is designed …

The influence of state feedback coupling on the dynamic performance of power converters for stand-alone microgrids is investigated. Computation and PWM delays are the main factors that limit the achievable bandwidth of current regulators in digital implementations. In particular, the performance … provided. A proportional resonant voltage controller is designed according to the Nyquist criterion, taking into account application requirements. For this purpose, a mathematical expression based on root locus analysis is proposed to find the minimum value of the fundamental resonant gain. Experimental tests …

… it should be useful to install many probes on the same site to obtain permittivity measurements over a large area. To reach this goal, the probes should communicate with each other to send data to a recording device. Furthermore, measurements need to be recorded over a long time period (many months) to study in-situ dielectric soil property variations according to changing weather conditions and seasonal trends. The goal of the research work presented here is to develop a dielectric sensor system based on end-effect probes able to communicate data using wireless technology. It must be stand-alone from both an electrical and a data-recording point of view, so it must integrate a VNA circuit instead of the ANRITSU VNA used at present. The LoRa wireless technology has been selected because of its low power consumption and the large range available between devices. LoRaWAN™ is a Low Power Wide Area Network specification intended for wireless battery-operated devices. LoRaWAN data rates range from 0.3 kbps to 50 kbps, which is sufficient for our probes' data exchanges. We will present the work done to develop the VNA and the LoRa communication board, as well as the work done to improve the probes and the permittivity computation algorithm.

To determine risk factors for subsidence in patients treated with anterior cervical discectomy and fusion (ACDF) and stand-alone polyetheretherketone (PEEK) cages. Records of patients with degenerative spondylosis or traumatic disc herniation resulting in radiculopathy or myelopathy between C2 and C7 who underwent ACDF with stand-alone PEEK cages were retrospectively reviewed. Cages were filled with autogenous cancellous bone harvested from iliac crest or hydroxyapatite. Subsidence was defined as a decrease of 3 mm or more of anterior or posterior disc height from that measured on the postoperative radiograph. Eighty-two patients (32 males, 50 females; 182 treatment levels) were included in the analysis. Most patients had 1-2 treatment levels (62.2 %), and 37.8 % had 3-4 treatment levels. Treatment levels were from C2-7. Of the 82 patients, cage subsidence occurred in 31 patients, and at 39 treatment levels. Multivariable analysis showed that subsidence was more likely to occur in patients with more than two treatment levels, and more likely to occur at treatment levels C5-7 than at levels C2-5. Subsidence was not associated with postoperative alignment change but associated with more disc height change (relatively oversized cage). Subsidence is associated with a greater number of treatment levels, treatment at C5-7 and relatively oversized cage use.
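The subsidence criterion used above (a decrease of 3 mm or more in anterior or posterior disc height relative to the postoperative radiograph) can be sketched as a simple check; the function names and the pair layout of the measurements are illustrative, not taken from the study:

```python
def subsided(postop_height_mm, followup_height_mm, threshold_mm=3.0):
    # Subsidence: the disc height has dropped by >= 3 mm since the postoperative film.
    return (postop_height_mm - followup_height_mm) >= threshold_mm

def level_subsided(postop, followup):
    # postop / followup: (anterior_mm, posterior_mm) pairs for one treatment level;
    # the criterion is met if either height shows a >= 3 mm loss.
    return any(subsided(a, b) for a, b in zip(postop, followup))
```

Checking anterior and posterior heights independently mirrors the "anterior or posterior disc height" wording of the definition.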

Recently, there has been a rapid increase in the use of cervical spine interbody fusion cages, differing in design and biomaterial used, in competition with autologous iliac bone graft and bone cement (PMMA). Limited biomechanical differences in primary stability, as well as the advantages and disadvantages of each cage or material, have been investigated in studies using an in vitro human cervical spine model. 20 human cervical spine specimens were tested after discectomy and fusion with either a cubical stand-alone interbody fusion cage manufactured from a new porous TiO2/glass composite (Ecopore) or PMMA. Non-destructive biomechanical testing was performed, including flexion/extension and lateral bending, using a spine testing apparatus. Three-dimensional segmental range of motion (ROM) was evaluated using an ultrasound measurement system. ROM increased more in flexion/extension and lateral bending after PMMA fusion (26.5%/36.1%) than after implantation of the Ecopore cage (8.1%/7.8%). In this first biomechanical in vitro examination of a new porous ceramic bone replacement material, a) the feasibility and reproducibility of biomechanical cadaveric cervical examination and its applicability were demonstrated, b) the stability of the ceramic cage as a stand-alone interbody cage was confirmed in vitro, and c) basic information and knowledge for our intended biomechanical and histological in vivo testing, after implantation of Ecopore in cervical sheep spines, were obtained.

The detection and quantification of fusion transcripts has both biological and clinical implications. RNA sequencing technology provides a means for unbiased and high-resolution characterization of fusion transcript information in tissue samples. We evaluated two fusion-detection algorithms, …

... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

Grid parity for alternative energy resources occurs when the cost of electricity generated from the source is lower than or equal to the purchasing price of power from the electricity grid. This thesis aims to quantitatively analyze the evolution of hybrid stand-alone microgrids in the US, Germany, Pakistan and South Africa to determine grid parity for a solar PV/Diesel/Battery hybrid system. The Energy System Model (ESM) and NREL's Hybrid Optimization of Multiple Energy Resources (HOMER) software are used to simulate the microgrid operation and determine a Levelized Cost of Electricity (LCOE) figure for each location. This cost per kWh is then compared with two distinct estimates of future retail electricity prices at each location to determine grid parity points. Analysis results reveal that future estimates of LCOE for such hybrid stand-alone microgrids range within the 35-55 cents/kWh over the 25 year study period. Grid parity occurs earlier in locations with higher power prices or unreliable grids. For Pakistan grid parity is already here, while Germany hits parity between the years 2023-2029. Results for South Africa suggest a parity time range of the years 2040-2045. In the US, places with low grid prices do not hit parity during the study period. Sensitivity analysis results reveal the significant impact of financing and the cost of capital on these grid parity points, particularly in developing markets of Pakistan and South Africa. Overall, the study helps conclude that variations in energy markets may determine the fate of emerging energy technologies like microgrids. However, policy interventions have a significant impact on the final outcome, such as the grid parity in this case. Measures such as eliminating uncertainty in policies and improving financing can help these grids overcome barriers in developing economies, where they may find a greater use much earlier in time.
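The parity comparison described above can be sketched as a minimal LCOE calculation plus a crossover test against an escalating grid tariff; this is an illustration of the underlying definitions only, not the ESM or HOMER methodology, and all parameter names are hypothetical:

```python
def lcoe(capex, annual_opex, annual_kwh, discount_rate, years):
    # Levelized cost of electricity: discounted lifetime costs / discounted lifetime energy.
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
    energy = sum(annual_kwh / (1 + discount_rate) ** t for t in range(1, years + 1))
    return costs / energy

def parity_year(microgrid_lcoe, grid_price_now, escalation, horizon=25):
    # First year (0 = today) at which the escalating retail tariff meets or
    # exceeds the microgrid LCOE; None if parity is not reached within the horizon.
    for t in range(horizon + 1):
        if grid_price_now * (1 + escalation) ** t >= microgrid_lcoe:
            return t
    return None
```

Locations with high tariffs or fast escalation reach parity early, which matches the qualitative ordering reported for Pakistan, Germany, South Africa and the US; the sensitivity to `discount_rate` likewise reflects the thesis's finding on the cost of capital.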

Alignment of amino acid sequences is the main sequence comparison method used in computational molecular biology. The selection of the amino acid substitution matrix best suitable for a given alignment problem is one of the most important decisions the user has to make. In a conventional amino acid substitution matrix all elements are fixed and their values cannot be easily adjusted. Moreover, most existing amino acid substitution matrices account for the average (dis)similarities between amino acid types and do not distinguish the contribution of a specific biochemical property to these (dis)similarities. PR2ALIGN is a stand-alone software program and a web-server that provide the functionality for implementing flexible user-specified alignment scoring functions and aligning pairs of amino acid sequences based on the comparison of the profiles of biochemical properties of these sequences. Unlike the conventional sequence alignment methods that use 20x20 fixed amino acid substitution matrices, PR2ALIGN uses a set of weighted biochemical properties of amino acids to measure the distance between pairs of aligned residues and to find an optimal minimal distance global alignment. The user can provide any number of amino acid properties and specify a weight for each property. The higher the weight for a given property, the more this property affects the final alignment. We show that in many cases the approach implemented in PR2ALIGN produces better quality pair-wise alignments than the conventional matrix-based approach. PR2ALIGN will be helpful for researchers who wish to align amino acid sequences by using flexible user-specified alignment scoring functions based on the biochemical properties of amino acids instead of the amino acid substitution matrix. To the best of the authors' knowledge, there are no existing stand-alone software programs or web-servers analogous to PR2ALIGN. The software is freely available from http://pr2align.rit.albany.edu.
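As a sketch of the idea, not of PR2ALIGN's actual implementation, the following is a cost-minimizing global alignment (Needleman-Wunsch) in which the per-residue cost is a weighted distance over property profiles; the two-residue property table (rough hydropathy values) and all names are illustrative:

```python
def prop_distance(a, b, props, weights):
    # Weighted absolute difference between the property profiles of residues a and b.
    return sum(w * abs(props[a][k] - props[b][k]) for k, w in weights.items())

def align_distance(seq1, seq2, props, weights, gap=1.0):
    # Minimal-distance global alignment via dynamic programming (cost-minimizing
    # Needleman-Wunsch); returns the optimal total distance.
    n, m = len(seq1), len(seq2)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j - 1] + prop_distance(seq1[i - 1], seq2[j - 1], props, weights),
                D[i - 1][j] + gap,
                D[i][j - 1] + gap,
            )
    return D[n][m]
```

Raising the weight of a property makes mismatches in that property costlier, which is the mechanism by which user-specified weights steer the final alignment.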

The present investigation is concerned with the formulation of energy management strategies for stand-alone photovoltaic (PV) systems, taking into account a basic control algorithm for a possible predictive, (and adaptive) controller. The control system controls the flow of energy in the system according to the amount of energy available, and predicts the appropriate control set-points based on the energy (insolation) available by using an appropriate system model. Aspects of adaptation to the conditions of the system are also considered. Attention is given to a statistical analysis technique, the analysis inputs, the analysis procedure, and details regarding the basic control algorithm.

Many event detection mechanisms in spark ignition automotive engines are based on the comparison of engine signals to detection threshold values. Different signal qualities for new and aged engines necessitate the development of an adaptation algorithm for the detection thresholds … remains constant regardless of engine age and changing detection threshold values. This, in turn, guarantees the same event detection performance for new and aged engines/sensors. Adaptation of the engine knock detection threshold is given as an example. Publication date: 2008 …

To evaluate the clinical efficacy and radiological outcomes of anterior cervical discectomy and fusion (ACDF) using self-locking polyetheretherketone (PEEK) cages for the treatment of three-level cervical degenerative spondylopathy. Twenty-eight patients who underwent three-level ACDF using self-locking stand-alone PEEK cages (group A) and 26 patients who underwent three-level ACDF using cages and plate fixation (group B) were reviewed retrospectively. Clinical efficacy was evaluated by pre- and post-operative Japanese Orthopedic Association (JOA) scores and the Neck Disability Index (NDI). The operation time, blood loss, surgical results according to Odom's criteria and post-operative dysphagia status were also recorded. Radiological outcomes including fusion, cervical Cobb's lordosis, fused segment angle, disc height, and cage subsidence were assessed. Clinical outcome measures such as dysphagia and fusion rate and the results of surgery evaluated according to Odom's criteria were not statistically significantly different (P > 0.05) between groups. The operation time was shorter and blood loss was less in group A (P < 0.05). Post-operative cage subsidence and the loss of disc height, cervical lordosis and fused segment angle were relatively higher in group A than in group B (P < 0.05). ACDF using self-locking stand-alone cages showed similar clinical results to ACDF using cages and plate fixation for the treatment of three-level cervical degenerative spondylopathy. However, potential long-term problems such as cage subsidence and post-operative loss of cervical lordosis and fused segment angle were shown to be associated with patients who underwent ACDF using self-locking stand-alone cages.

Stand-alone photovoltaic systems are a suitable alternative for rural electrification. However, there are still problems to be solved, mainly related to system design and the technical quality of the equipment and facilities, which impact system reliability. To determine the factors that affect the reliability of these systems, the most common configurations and their associated failures were studied. Laboratory experimental research, together with an extensive literature review, shows the basic technical problems that occur in each element of the installation and the dependence between them. These studies have shown that, considering system reliability and economy, the storage system is the weakest element due to the decrease in its storage capacity. This led to making storage systems the focus of this study and, through the analysis of their behavior, to developing a procedure to size systems with high reliability, lower cost and appropriate configuration. The impact of batteries on the technical reliability and economic viability of photovoltaic systems is determined. This was achieved through experimental testing and the development and adjustment of mathematical models. These models were implemented in preexisting software called PVSize. The improved software allows the calculation of different system configurations and the determination of the loss-of-load probability and the figures of merit associated with the chosen economic-financial project. In this work, a photovoltaic system was installed and a battery testing system was developed. The values measured in these systems make it possible to verify the mathematical models that describe the behavior of each device and to characterize the components of the system. Experimental analysis of the behavior of a battery bank over a year showed that connecting batteries in parallel accelerates the battery degradation process, and this degradation has a differentiated impact on the loss of …

We describe a new two-step modeling framework for investigating the impact of climate change on human comfort in outdoor urban environments. In the first step, climate change scenarios for air temperature and solar radiation (global, diffuse, direct components) are created using a change-factor algorithm. The change factors are calculated by comparing ranked daily regional climate model outputs for a future period and a present-day period, and then changes consistent with these daily change factors are applied to historical hourly climate observations. In the second step, the mean radiant temperature (Tmrt) is calculated using the SOLWEIG (SOlar and LongWave Environmental Irradiance Geometry) model. Tmrt, which describes the radiant heat exchange between a person and their surroundings, is one of the most important meteorologically derived parameters governing human energy balance and outdoor thermal comfort, especially during warm and sunny days. We demonstrate that change factors can be applied independently to maximum air temperature and daily global solar radiation, and show that the outputs from the algorithm, when aggregated to daily values, are consistent with the driving regional climate model. Finally, we demonstrate how to obtain quantitative information from the scenarios regarding the potential impact of climate change on outdoor thermal comfort, by calculating changes in the distribution of hourly summer daytime Tmrt and changes in the number of hours with Tmrt > 55 °C.
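The first step's ranked change-factor idea can be sketched as follows; this simplified version works on daily series of equal length and supports both additive factors (for temperature) and multiplicative ones (for radiation), whereas the actual algorithm maps daily factors onto hourly observations:

```python
def change_factors(present_daily, future_daily, additive=True):
    # Rank both model series and compare day by day: the k-th factor is the change
    # between the k-th smallest present-day value and the k-th smallest future value.
    p, f = sorted(present_daily), sorted(future_daily)
    if additive:
        return [fi - pi for pi, fi in zip(p, f)]
    return [fi / pi for pi, fi in zip(p, f)]

def apply_factors(obs_daily, factors, additive=True):
    # Perturb each observed day by the factor matching that day's rank, so the
    # perturbed observations inherit the model's ranked climate-change signal.
    order = sorted(range(len(obs_daily)), key=lambda i: obs_daily[i])
    out = list(obs_daily)
    for rank, i in enumerate(order):
        out[i] = obs_daily[i] + factors[rank] if additive else obs_daily[i] * factors[rank]
    return out
```

Because factors are matched by rank rather than by date, extreme observed days receive the model's change signal for extreme days, which is the point of ranking.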

Ontologies encode relationships within a domain in robust data structures that can be used to annotate data objects, including scientific papers, in ways that ease tasks such as search and meta-analysis. However, the annotation process requires significant time and effort when performed by humans. Text mining algorithms can facilitate this process, but they render an analysis mainly based upon keyword, synonym and semantic matching. They do not leverage information embedded in an ontology's structure. We present a probabilistic framework that facilitates the automatic annotation of literature by indirectly modeling the restrictions among the different classes in the ontology. Our research focuses on annotating human functional neuroimaging literature within the Cognitive Paradigm Ontology (CogPO). We use an approach that combines the stochastic simplicity of naïve Bayes with the formal transparency of decision trees. Our data structure is easily modifiable to reflect changing domain knowledge. We compare our results across naïve Bayes, Bayesian Decision Trees, and Constrained Decision Tree classifiers that keep a human expert in the loop, in terms of the F1-micro quality measure. Unlike traditional text mining algorithms, our framework can model the knowledge encoded by the dependencies in an ontology, albeit indirectly. We successfully exploit the fact that CogPO has explicitly stated restrictions, and implicit dependencies in the form of patterns in the expert-curated annotations.
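The F1-micro score used for the comparison pools true-positive, false-positive and false-negative counts over all annotation labels before computing F1; a minimal multi-label sketch (the set-based interface is illustrative):

```python
def micro_f1(true_sets, pred_sets):
    # Pool TP/FP/FN counts over all documents and labels, then compute F1 once.
    tp = sum(len(t & p) for t, p in zip(true_sets, pred_sets))
    fp = sum(len(p - t) for t, p in zip(true_sets, pred_sets))
    fn = sum(len(t - p) for t, p in zip(true_sets, pred_sets))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Micro-averaging weights frequent labels more heavily than macro-averaging, which matters when ontology classes are used with very different frequencies.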

This research paper proposes a hybrid of the ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for choosing relevant features from customer review datasets. Information gain (IG), the genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process has statistically shown that the ACO-KNN algorithm significantly outperforms the baseline algorithms. In addition, the experimental results have shown that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a quality, optimal feature subset that can represent the actual data in customer review data.

This paper deals with an experimental outdoor annual performance evaluation of a 2.32 kWp photovoltaic (PV) power system located at a solar energy park under the composite climatic conditions of New Delhi. This PV system supplies a daily electrical load of nearly 10 kWh/day, which comprises various applications such as the electric air blower of an earth-to-air heat exchanger (EAHE) used for heating/cooling an adobe house, a ceiling fan, a fluorescent tube-light, a computer, a submersible water pump, etc. The outdoor efficiencies and the power generated and lost in PV system components were determined using hourly experimental data measured for one year on a typical clear day in each month. These realistic data are useful for design engineers in the outdoor assessment of PV system components. The energy conservation, mitigation of CO2 emissions and carbon credit potential of the existing PV-integrated EAHE system are presented in this paper. Also, the energy payback time (EPBT) and unit cost of electricity were determined for both stand-alone PV (SAPV) and building roof-integrated PV (BIPV) systems.

Digital radiography is increasingly being applied in the fabrication industry. Compared to film-based radiography, digitally radiographed images can be acquired in less time and with fewer exposures. However, noise can easily occur in the digital image, resulting in a low-quality result. Due to this and the system's complexity, the sensitivity of its parameters, and environmental effects, the results can be difficult to interpret, even for a radiographer. Therefore, the need for an application tool to improve and evaluate the image is becoming urgent. In this research, a user-friendly tool for image processing and image quality measurement was developed. The resulting tool contains important components needed by radiograph inspectors in analyzing defects and recording the results. This tool was written using the image processing toolbox and the graphical user interface development environment and compiler (GUIDE) available in Matrix Laboratory (MATLAB R2008a). For image processing, contrast adjustment, noise removal and edge detection were applied. For image quality measurement, mean square error (MSE), peak signal-to-noise ratio (PSNR), modulation transfer function (MTF), normalized signal-to-noise ratio (SNRnorm), sensitivity and unsharpness were used to measure the image quality. The graphical user interface (GUI) was then compiled to build a Windows stand-alone application that enables this tool to be executed independently without the installation of MATLAB.
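Two of the quality measures listed, MSE and PSNR, are standard and can be sketched directly; this pure-Python version over nested-list grayscale images is only an illustration of the definitions (the tool itself is written in MATLAB):

```python
import math

def mse(img1, img2):
    # Mean square error between two equal-size grayscale images (lists of rows).
    n = sum(len(row) for row in img1)
    return sum((a - b) ** 2
               for r1, r2 in zip(img1, img2)
               for a, b in zip(r1, r2)) / n

def psnr(img1, img2, max_val=255.0):
    # Peak signal-to-noise ratio in dB; identical images give infinite PSNR.
    e = mse(img1, img2)
    if e == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / e)
```

Higher PSNR means the processed image is closer to the reference, which is why it is a convenient single-number check after noise removal.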

A statistical algorithm has been developed and implemented on a minicomputer system for on-line surveillance applications. Power spectral density (PSD) measurements on process signals are the performance signatures that characterize the ''health'' of the monitored equipment. Statistical methods provide a quantitative basis for automating the detection of anomalous conditions. The surveillance algorithm has been tested on signals from neutron sensors, proximeter probes, and accelerometers to determine its potential for monitoring nuclear reactors and rotating machinery.
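
The general scheme can be sketched as: estimate a PSD signature, then flag frequency bins that deviate statistically from an ensemble of baseline signatures. The mean + k·sigma rule below is an illustrative choice, not the report's exact statistical test.

```python
import numpy as np

def psd(x, fs=1.0):
    """One-sided periodogram estimate of the power spectral density."""
    x = np.asarray(x, dtype=float)
    return np.abs(np.fft.rfft(x - x.mean())) ** 2 / (fs * len(x))

def anomaly_bins(baseline_psds, test_psd, k=3.0):
    """Flag frequency bins where the test PSD exceeds the baseline
    signatures by more than k standard deviations above their mean."""
    base = np.asarray(baseline_psds, dtype=float)
    return np.flatnonzero(test_psd > base.mean(axis=0) + k * base.std(axis=0))
```

A narrowband vibration component absent from the baseline ensemble then shows up as flagged bins around its frequency.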

The EORTC Quality of Life Group has just completed the final phase (field-testing and validation) of an international project to develop a stand-alone measure of spiritual well-being (SWB) for palliative cancer patients. Participants (n = 451), from 14 countries on four continents; 54% female; 188

This study examined the effects of two types of course delivery systems (learning community classroom environments versus stand-alone classroom environments) on the achievement of students who were simultaneously enrolled in remedial and college-level social science courses at an inner city open-enrollment public community college. This study was…

The image edge gradient direction not only contains important information about shape, but also has simple, low-complexity characteristics. Considering that edge gradient direction histograms and the edge direction autocorrelogram are not rotation invariant, we propose an image retrieval algorithm based on an edge gradient orientation statistical code (hereinafter referred to as EGOSC), which transfers the statistical treatment of eight-neighborhood chain codes of edge direction to the statistics of the edge gradient direction. Firstly, we construct the n-direction vector and impose a maximal-summation constraint on the EGOSC to make the algorithm effectively rotation invariant. Then, we use the Euclidean distance between edge gradient direction entropies to measure shape similarity, so that the method is not sensitive to scaling, color, and illumination changes. The experimental results and the algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and gives good retrieval results.
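
A stripped-down sketch of the core idea: a gradient-orientation histogram made rotation invariant by circularly shifting it to a canonical start bin (a maximal-bin rule standing in for the paper's maximal-summation constraint; the chain-code statistics and entropy distance of the full EGOSC are omitted).

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Magnitude-weighted histogram of gradient directions, circularly
    shifted so the largest bin comes first; this makes the descriptor
    invariant to rotation up to the bin resolution."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    idx = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    hist = np.roll(hist, -int(np.argmax(hist)))  # canonical start bin
    s = hist.sum()
    return hist / s if s > 0 else hist
```

Rotating an image by 90 degrees shifts the raw histogram by exactly two bins (with 8 bins), so after the canonical shift the two descriptors coincide.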

Wall-to-wall remotely sensed data are increasingly available to monitor landscape dynamics over large geographic areas. However, statistical monitoring programs that use post-stratification cannot fully utilize those sensor data. The Kalman filter (KF) is an alternative statistical estimator. I develop a new KF algorithm that is numerically robust with large numbers of...
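
The estimator family involved can be sketched for the simplest case, a scalar random-walk state observed in noise (an illustrative textbook filter, not the report's new multivariate KF algorithm):

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = x_{t-1} + w (var q),
    observed as z_t = x_t + v (var r)."""
    x, p, est = x0, p0, []
    for zt in np.asarray(z, dtype=float):
        p = p + q                    # predict: state variance grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (zt - x)         # update with the innovation
        p = (1.0 - k) * p            # posterior variance
        est.append(x)
    return np.array(est)
```

The post-stratified estimators mentioned above use strata membership directly; the KF instead fuses each new observation with the running prediction, which is what lets it absorb wall-to-wall sensor data.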

Standalone systems supplied only by a photovoltaic generator need an energy storage unit to be fully self-sufficient. Lead-acid batteries are commonly used to store energy because of their low cost, despite several operational constraints. A hydrogen-based energy storage unit (HESU), comprising an electrolyser, a fuel cell and a hydrogen tank, could be another candidate. However, many efforts still need to be made for this technology to reach an industrial stage. In particular, market outlets must be clearly identified. The study of small stationary applications (a few kW) is performed by numerical simulations. A simulator is developed in the Matlab/Simulink environment. It is mainly composed of a photovoltaic field and a storage unit (lead-acid batteries, HESU, or hybrid HESU/battery storage). The system components are sized to ensure complete system autonomy over a whole year of operation. The simulator is tested with 160 load profiles (1 kW as a yearly mean value) and three locations (Algeria, France and Norway). Two coefficients are defined to quantify the correlation between the power consumption of the end user and the renewable resource availability at both daily and yearly scales. Among the tested cases, a limiting value of the yearly correlation coefficient emerged, making it possible to recommend the most suitable storage for a given case. There are cases for which using an HESU instead of lead-acid batteries can increase the system efficiency, decrease the size of the photovoltaic field and improve the exploitation of the renewable resource. In addition, hybridization of the HESU with batteries always leads to system enhancements regarding sizing and performance, with an efficiency increase of 10 to 40% depending on the considered location. The good agreement between the simulation data and field data gathered on real systems enabled the validation of the models used in this study. (author)

Oscillometric central blood pressure (CBP) monitors have emerged as a new technology for blood pressure (BP) measurements. With a newly proposed diagnostic threshold for CBP, we investigated the diagnostic performance of a novel CBP monitor. We recruited a consecutive series of 138 subjects (aged 30-93 years) without previous use of antihypertensive agents for simultaneous invasive and noninvasive measurements of BP in a catheterization laboratory. With the cutoff (CBP ≥130/90 mm Hg) for high blood pressure (HBP), the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the novel CBP monitor were calculated with reference to the measured CBP. In comparison, the diagnostic performance of the conventional cuff BP was also evaluated. The noninvasive CBP for detecting HBP in a sample with a prevalence of 52% showed a sensitivity of 93% (95% confidence interval (CI) = 91-95), specificity of 95% (95% CI = 94-97), PPV of 96% (95% CI = 94-97), and NPV of 93% (95% CI = 90-95). In contrast, with cuff BP and the traditional HBP criterion (cuff BP ≥140/90 mm Hg), the sensitivity, specificity, PPV, and NPV were 49% (95% CI = 44-53), 94% (95% CI = 92-96), 90% (95% CI = 86-93), and 63% (95% CI 59-66), respectively. A stand-alone oscillometric CBP monitor may provide CBP values with acceptable diagnostic accuracy. However, with reference to invasively measured CBP, cuff BP had low sensitivity and NPV, which could render possible management inaccessible to a considerable proportion of HBP patients, who may be identifiable through noninvasive CBP measurements from the CBP monitor.
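
The four reported figures derive directly from confusion-matrix counts. The counts below are hypothetical, chosen only to roughly reproduce the reported CBP percentages for a 52% prevalence among 138 subjects; they are not the study's data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: ~72 true positives + false negatives (52% of 138)
# and ~66 negatives, tuned to land near the reported 93/95/96/93%.
m = diagnostic_metrics(tp=67, fp=3, tn=63, fn=5)
```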

Preoperative parameters including the T1 slope (T1S) and C2-C7 sagittal vertical axis (SVA) have been recognized as predictors of kyphosis after laminoplasty, which is accompanied by posterior neck muscle damage. The importance of preoperative parameters has been underestimated in anterior cervical discectomy and fusion (ACDF) because there is no posterior neck muscle damage. We aimed to determine whether postoperative subsidence and pseudarthrosis could be predicted according to specific parameters on preoperative plain radiographs. We retrospectively analyzed 41 consecutive patients (male:female, 22:19; mean age, 51.15±9.25 years) who underwent ACDF with a stand-alone polyether-ether-ketone (PEEK) cage (>1 year follow-up). Parameters including SVA, T1S, segmental angle and range of motion (ROM), C2-C7 cervical angle and ROM, and segmental inter-spinous distance were measured on preoperative plain radiographs. Risk factors for subsidence and pseudarthrosis were determined using multivariate logistic regression. Fifty-five segments (27 single-segment and 14 two-segment fusions) were included. The subsidence and pseudarthrosis rates based on the number of segments were 36.4% and 29.1%, respectively. Demographic data and fusion level were unrelated to subsidence. A greater T1S was associated with a lower risk of subsidence (p=0.017, odds ratio=0.206). A cutoff value of T1S predicted subsidence (sensitivity: 70%, specificity: 68.6%). There were no preoperative predictors of pseudarthrosis except old age. A lower T1S was associated with subsidence following ACDF. Surgeons need to be aware of this risk factor and should consider various supportive procedures to reduce the subsidence rates for such cases.

Full Text Available A multilayered armor system (MAS) is intended for personal protection against high kinetic energy ammunition. MAS layers are composed of materials such as a front ceramic and a back composite that must show both high impact resistance and low weight, usually conflicting characteristics. Synthetic fiber fabrics, such as Kevlar™ and Dyneema™, are the favorite materials to back the front ceramic, due to their high strength, high modulus and relatively low weight. Recently, composites reinforced with natural fibers have been considered as the MAS second layer owing to their good performance associated with other advantages, such as being cheaper and environmentally friendly. Among the natural fibers, those extracted from the leaves of the Ananas erectifolius plant, known as curaua, stand out due to their exceptionally high strength and high modulus. Thus, the objective of the present work is to evaluate the performance of curaua fiber-reinforced polyester composites subjected to the ballistic impact of high energy 7.62 mm ammunition. Composites reinforced with 0, 10, 20 and 30 vol.% of curaua fibers were produced and stand-alone tested as armor targets to evaluate the absorbed energy. Analysis of variance (ANOVA) and Tukey's honest significant difference (HSD) test made it possible to compare the results to Kevlar™ laminates. Among the tested materials, the 30 vol.% fiber composites were found to be the best alternative to Kevlar™. Keywords: Composite, Natural fiber, Curaua fiber, Ballistic test

Next generation sequencing (NGS) is widely used in metagenomic and transcriptomic analyses in biodiversity. The ease of data generation provided by NGS platforms has allowed researchers to perform these analyses on their particular study systems. In particular, the 454 platform has become the preferred choice for PCR amplicon based biodiversity surveys because it generates the longest sequence reads. Nevertheless, the handling and organization of massive amounts of sequencing data poses a major problem for the research community, particularly when multiple researchers are involved in data acquisition and analysis. An integrated and user-friendly tool, which performs quality control, read trimming, PCR primer removal, and data organization is desperately needed, therefore, to make data interpretation fast and manageable. We developed CANGS DB (Cleaning and Analyzing Next Generation Sequences DataBase), a flexible, standalone and user-friendly integrated database tool. CANGS DB is specifically designed to organize and manage the massive amount of sequencing data arising from various NGS projects. CANGS DB also provides an intuitive user interface for sequence trimming and quality control, taxonomy analysis and rarefaction analysis. Our database tool can be easily adapted to handle multiple sequencing projects in parallel with different sample information, amplicon sizes, primer sequences, and quality thresholds, which makes this software especially useful for non-bioinformaticians. Furthermore, CANGS DB is especially suited for projects where multiple users need to access the data. CANGS DB is available at http://code.google.com/p/cangsdb/. CANGS DB provides a simple and user-friendly solution to process, store and analyze 454 sequencing data. Being a local database that is accessible through a user-friendly interface, CANGS DB provides the perfect tool for collaborative amplicon based biodiversity surveys without requiring prior bioinformatics skills.

Full Text Available Plasma-derived vesicles hold promising potential for use in biomedical applications. Two major challenges, however, hinder their implementation into translational tools: (a) the incomplete characterization of the protein composition of plasma-derived vesicles, in the size range of exosomes, as mass spectrometric analysis of plasma sub-components is recognizably troublesome, and (b) the limited reach of vesicle-based studies in settings where the infrastructural demand of ultracentrifugation, the most widely used isolation/purification methodology, is not available. In this study, we have addressed both challenges by carrying out mass spectrometry (MS) analyses of plasma-derived vesicles, in the size range of exosomes, from healthy donors obtained by two alternative methodologies: size-exclusion chromatography (SEC) on sepharose columns and Exo-Spin™. No exosome markers, as opposed to the most abundant plasma proteins, were detected by Exo-Spin™. In contrast, exosomal markers were present in the early fractions of SEC, where the most abundant plasma proteins have been largely excluded. Noticeably, after a cross-comparative analysis of all published studies using MS to characterize plasma-derived exosomes from healthy individuals, we also observed a paucity of “classical exosome markers.” Independent of the isolation method, however, we consistently identified 2 proteins, CD5 antigen-like (CD5L) and galectin-3-binding protein (LGALS3BP), whose presence was validated by a bead-exosome FACS assay. Altogether, our results support the use of SEC as a stand-alone methodology to obtain preparations of extracellular vesicles, in the size range of exosomes, from plasma, and suggest the use of CD5L and LGALS3BP as more suitable markers of plasma-derived vesicles in MS.

With an aging population, it is critical that nurses are educated and prepared to offer quality healthcare to this client group. Incorporating gerontology content into nursing curricula and addressing students' perceptions and career choices in relation to working with older adults are important faculty concerns. To examine the impact of a stand-alone course in gerontological nursing on undergraduate nursing students' perceptions of working with older adults and career intentions, a quasi-experimental, pre- and post-test design was used at a medium-sized state university in the Midwestern United States. Data were collected from three student cohorts during the spring semesters of 2012 (n=98), 2013 (n=80) and 2014 (n=88), for a total of N=266, with an average response rate of 85%. A survey instrument was administered via Qualtrics and completed by students prior to, and following, completion of the course. There was an overall significant increase (p=0.000) in positive perceptions of working with older adults among nursing students following completion of the course. The majority of participants (83.5%) reported having previous experience with older adults. Those with previous experience had higher perception scores at pre-test than those without (p=0.000). Post-test scores showed no significant difference between these two groups, with both groups having increased perception scores (p=0.120). Student preferences for working with different age groups suggested an overall increase in preference for working with older adults following the course. A course in gerontological nursing, incorporating learning partnerships with community-dwelling older adults, promotes positive perceptions of working with older adults, independently of the quality of prior experience. There was some evidence that students changed their preferences of working with different age groups in favor of working with older adults. Further research should be conducted to determine the mechanisms through

ARL-TR-8272 ● JAN 2018 ● US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques.

Full Text Available During the last 40 years, many studies have been carried out to investigate the different phenomena occurring during a Severe Accident (SA) in a Nuclear Power Plant (NPP). Such efforts have been supported by the execution of different experimental campaigns, and the integral Phébus FP tests were probably some of the most important experiments in this field. In these tests, the degradation of a Pressurized Water Reactor (PWR) fuel bundle was investigated employing different control rod materials and burn-up levels in strongly or weakly oxidizing conditions. From the findings of these and previous tests, numerical codes such as ASTEC and MELCOR have been developed to analyze the evolution of an SA in real NPPs. After the termination of the Phébus FP campaign, these two codes have been further improved to implement the more recent findings coming from different experimental campaigns. Therefore, continuous verification and validation is still necessary to check that the new improvements introduced in such codes also allow a better prediction of these Phébus tests. The aim of the present work is to re-analyze the Phébus FPT-2 test employing the updated ASTEC and MELCOR code versions. The analysis focuses on the stand-alone containment aspects of this test, and three different spatial nodalizations of the containment vessel (CV) have been developed. The paper summarizes the main thermal-hydraulic results and presents different sensitivity analyses carried out on the aerosol and fission product (FP) behavior. When possible, a comparison between the results obtained during this work and by different authors in previous work is also performed. This paper is part of a series of publications covering the four Phébus FP tests using a PWR fuel bundle: FPT-0, FPT-1, FPT-2, and FPT-3, excluding the FPT-4 test, which is related to the study of the release of low-volatility FP and transuranic elements from a debris bed and a pool of melted fuel. Keywords: Safety

Full Text Available Unlike conventional blind source separation (BSS), which deals with independent, identically distributed (i.i.d.) sources, this paper addresses separation from mixtures of sources with temporal structure, such as linear autocorrelations. Many sequential extraction algorithms have been reported, with inevitable accumulated errors introduced by the deflation scheme. We propose a robust separation algorithm that recovers the original sources simultaneously, through a joint diagonalizer of several averaged delayed covariance matrices at positions of the optimal time delay and its integer multiples. The proposed algorithm is computationally simple and efficient, since it is based on second-order statistics only. Extensive simulation results confirm the validity and high performance of the algorithm. Compared with related extraction algorithms, its separation signal-to-noise ratio for a desired source can reach 20 dB higher, and it seems rather insensitive to estimation error in the time delay.
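
The second-order idea behind such methods can be sketched with a single lagged covariance, in the style of the classical AMUSE algorithm: whiten the mixtures, then eigendecompose one symmetrized time-lagged covariance. This is a simplified baseline under that assumption, not the paper's multi-lag joint diagonalization.

```python
import numpy as np

def amuse(x, tau=1):
    """AMUSE-style second-order BSS for x of shape (n_sources, n_samples):
    whiten, then diagonalize the symmetrized lag-tau covariance."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the zero-lag covariance
    d, e = np.linalg.eigh(np.cov(x))
    w = e @ np.diag(1.0 / np.sqrt(d)) @ e.T
    z = w @ x
    # Symmetrized lagged covariance of the whitened signals
    c = z[:, tau:] @ z[:, :-tau].T / (z.shape[1] - tau)
    c = (c + c.T) / 2.0
    _, u = np.linalg.eigh(c)
    return u.T @ z  # estimated sources, up to order/sign/scale
```

Separation works when the sources have distinct lag-tau autocorrelations, which is exactly the temporal structure the abstract exploits.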

In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and intended to provide a starting point for the second phase, the Monte Carlo EM algorithm.

We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters, which correspond to interactions and external fields of spin systems, are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP. The GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result is to obtain the statistical average of the trajectory of the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with the true values of the hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from analytical calculations.

An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and it can be used to monitor statistical convergence and estimate the error/residual in the quantity, which is useful for uncertainty quantification as well. Today, data may be generated at an overwhelming rate by numerical simulations and by proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212). Pébay's algorithms are recast into a converging delta formulation with provably favorable properties. The mean, variance, covariances and arbitrary higher-order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high-Reynolds-number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.
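
A minimal single-pass version of such statistics, with the pairwise merge needed for parallel streams, can be sketched with Welford/Pébay-style delta updates (a sketch stopping at the variance; the SPOCS paper extends this to covariances and arbitrary higher moments):

```python
class RunningStats:
    """One-pass streaming mean/variance via delta updates, with a merge
    operation for combining partial results from parallel streams."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        """Incorporate one new sample in O(1) time and memory."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def merge(self, other):
        """Combine two partial statistics (Chan/Pébay pairwise formula)."""
        n = self.n + other.n
        delta = other.mean - self.mean
        self.mean += delta * other.n / n
        self.m2 += other.m2 + delta * delta * self.n * other.n / n
        self.n = n

    @property
    def variance(self):
        """Unbiased sample variance of everything pushed so far."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

Because `merge` is exact, partial statistics accumulated on separate processes can be combined at any time without a second pass over the data.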

The statistical analysis of processing tree models is advanced by showing how parameter estimation and hypothesis testing, based on the likelihood functions, can be accomplished by adapting the expectation-maximization (EM) algorithm. The adaptation makes it easy to program a personal computer to accomplish the stages of statistical…

In this paper we propose a statistical approach for describing the self-assembly of sub-micronic polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of the surfaces. In the first step, greyscale images of the surface covered by the polystyrene beads are obtained. Further, an adaptive thresholding method was applied to obtain binary images. The next step consisted of the automatic identification of polystyrene bead dimensions, using the Hough transform algorithm, according to bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the 2-dimensional Fast Fourier Transform (2-D FFT) was applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials with features regularly distributed on the surface under SEM examination.
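
The uniformity check via the squared modulus of the 2-D FFT can be sketched as follows (illustrative NumPy code, not the authors' tool): a periodic arrangement of beads produces dominant spectral peaks at its spatial frequency.

```python
import numpy as np

def power_spectrum_2d(img):
    """Squared modulus of the 2-D FFT, with the mean (DC term) removed,
    for detecting periodicity in a surface image."""
    img = np.asarray(img, dtype=float)
    f = np.fft.fft2(img - img.mean())
    return np.abs(f) ** 2
```

For an N-pixel image of a pattern with period p pixels, the strongest peak sits at frequency index N/p (and its mirror), so the sharpness and position of such peaks quantify how regular the self-assembled layer is.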

In this article, we present a new voice activity detection (VAD) algorithm that is based on statistical models and an empirical rule-based energy detection algorithm. Specifically, it uses two steps to separate speech segments from background noise. In the first step, the VAD detects possible speech endpoints efficiently using the empirical rule-based energy detection algorithm. However, the possible endpoints are not accurate enough when the signal-to-noise ratio is low. Therefore, in the second step, we propose a new Gaussian mixture model-based multiple-observation log likelihood ratio algorithm to align the endpoints to their optimal positions. Several experiments are conducted to evaluate the proposed VAD on both accuracy and efficiency. The results show that it achieves better performance than the six referenced VADs in various noise scenarios.
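
The first, rule-based step can be sketched with a short-time-energy detector (an illustrative simplification using a median noise-floor estimate; the paper's empirical rules and the GMM-based second step are not reproduced here):

```python
import numpy as np

def energy_vad(x, frame=160, k=3.0):
    """Step-1 sketch: flag frames whose short-time energy exceeds a
    noise-floor estimate (the median frame energy) by a factor k."""
    x = np.asarray(x, dtype=float)
    n = len(x) // frame
    e = (x[: n * frame].reshape(n, frame) ** 2).sum(axis=1)
    return e > k * np.median(e)
```

The boundaries of the flagged frame runs are the candidate endpoints that the statistical second step would then refine.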

Synthetic Aperture Radar (SAR) is significantly important for polar remote sensing since it can provide continuous observations day and night and in all weather. SAR can be used to extract surface roughness information characterized by the variance of dielectric properties and different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd Chinese National Antarctic Research Expedition (CHINARE) cruise set sail into the Antarctic sea ice zone. An accurate spatial distribution of leads in the sea ice zone is essential for routine ship navigation planning. In this study, the semantic relationship between leads and sea ice categories is described by a Conditional Random Field (CRF) model, and lead characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed by considering the contextual information and the statistical characteristics of sea ice to improve lead detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probability estimated from the statistical distributions. For parameter estimation, the Method of Logarithmic Cumulants (MoLC) is exploited for the single statistical distributions, and an iterative Expectation-Maximization (EM) algorithm is used to calculate the parameters of the mixture-distribution-based CRF model. In the posterior probability inference, a graph-cut energy minimization method is adopted for the initial lead detection. Post-processing procedures, including an aspect-ratio constraint and spatial smoothing, are applied to improve the visual result. The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a

Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829
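
Among the statistical methods for designs without a reference standard, agreement analysis between two algorithms' measurements is a standard ingredient. A Bland-Altman sketch (an illustrative building block, not the paper's full framework):

```python
import numpy as np

def bland_altman(a, b):
    """Agreement between two algorithms measuring the same cases:
    mean bias and 95% limits of agreement (bias +/- 1.96 sd of differences)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrow limits of agreement indicate that the two QIB algorithms could be used interchangeably; a large bias indicates a systematic offset between them.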

The accurate identification of gait asymmetry is very beneficial to the assessment of at-risk gait in clinical applications. This paper investigated the application of a classification method based on a statistical learning algorithm to quantify gait symmetry, based on the assumption that the degree of intrinsic change in the dynamical system of gait is associated with the different statistical distributions of gait variables between the left and right lower limbs; that is, discriminating the small difference in similarity between lower limbs is treated as the recognition of their different probability distributions. The kinetic gait data of 60 participants were recorded using a strain gauge force platform during normal walking. The classification method is designed based on an advanced statistical learning algorithm, the support vector machine for binary classification, and is adopted to quantitatively evaluate gait symmetry. The experimental results showed that the proposed method could capture more of the intrinsic dynamic information hidden in gait variables and recognize right-left gait patterns with superior generalization performance. Moreover, the proposed technique could identify the small significant differences between lower limbs when compared to the traditional symmetry index method for gait. The proposed algorithm could become an effective tool for early identification of gait asymmetry in the elderly in clinical diagnosis. PMID:25705672
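
For contrast, the traditional symmetry index the authors compare against has a simple closed form. A common formulation is shown below; the paper's exact definition may differ.

```python
def symmetry_index(left, right):
    """Robinson-style symmetry index in percent; 0 means perfect symmetry
    of a gait variable (e.g. peak ground reaction force) across limbs."""
    denom = 0.5 * (left + right)
    return 0.0 if denom == 0 else abs(left - right) / denom * 100.0
```

Because it reduces each limb's data to one summary value, this index can miss the small distributional differences that a statistical-learning classifier picks up.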

This paper gives a brief introduction to the characteristic statistical algorithm (CSA), a new global optimization algorithm for the problem of PWR in-core fuel management optimization. CSA is modified by the adoption of a back-propagation neural network and fast local adjustment. The modified CSA is then applied to PWR equilibrium cycle reloading optimization, and the corresponding optimization code CSA-DYW is developed. CSA-DYW is used to optimize the 18-month equilibrium reloading cycle of Unit 1 of the Daya Bay nuclear plant. The results show that CSA-DYW has high efficiency and good global performance on PWR equilibrium cycle reloading optimization. (authors)

The special features of pulsed reactor power stabilization are considered for the reactor-only regime and for the joint reactor-and-injector regime. Statistically optimal regulation algorithms are obtained that minimize the mean-square deviation of the energy of the upcoming power pulse from its base value, using information obtained from previous pulses and taking the rate of information aging into account. It is shown that, after justified simplification, the optimal algorithm is realized by a regulator acting as an integrating element.
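The closing claim, that the optimal regulator reduces to an integrating element, can be illustrated with a toy pulse-to-pulse loop. All gains and the linear plant below are hypothetical, chosen only to show the integral action driving the pulse energy to its base value:

```python
def regulate_pulses(e_ref, e0, gain, k_i, n_pulses):
    """Pulse-to-pulse integral control sketch: the regulator position u is
    updated by the integrated error, and the pulse energy responds through
    a hypothetical linear plant e = gain * u. All numbers illustrative."""
    u = e0 / gain            # initial regulator position
    energies = []
    for _ in range(n_pulses):
        e = gain * u                # energy of the current pulse
        energies.append(e)
        u += k_i * (e_ref - e)      # integrating element acting on the error
    return energies

pulses = regulate_pulses(e_ref=100.0, e0=60.0, gain=2.0, k_i=0.3, n_pulses=10)
```

With a loop gain of `gain * k_i = 0.6 < 1`, the pulse energy contracts geometrically toward the base value of 100.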

This report describes the work done during five years of the second phase of Task 3 of the photovoltaic power systems programme of the International Energy Agency (IEA-PVPS). Task 3 activities concentrated on stand-alone photovoltaic systems, with the main effort on improving the quality and reducing the cost of these systems. The work was divided into two sub-tasks, the first concentrating on quality assurance schemes and the second on technical recommendations drawn from practical experience. Twelve original reports have been published, covering topics in four categories. The first is dedicated to quality issues, with a review of existing standards in the participating countries and a two-part paper giving quality assurance recommendations on project management together with examples of applying these rules in practical cases. The second category dwells on photovoltaic systems, with papers on charge controllers, lightning protection and the monitoring of systems. The third category presents studies on the storage of energy, which remains the main area where improvements are needed to lower the cost of energy; four papers describe the management and test procedures of lead-acid batteries, how to choose a lead-acid battery, and whether there are alternatives to lead-acid batteries for the storage of photovoltaic electricity. The last category covers loads and users of renewable energy and conveys a large amount of experience with loads, how to choose them and how energy can be used more effectively through demand-side management. (author)

A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed, and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks, and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749
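The abstract does not state which scoring rule was used. The logarithmic score is one proper scoring rule with the stated property of being sensitive to variation in probabilities near 1, and weighting by outbreak size is one plausible adaptation; both are assumptions here, not the paper's definition:

```python
import math

def weighted_log_score(probs, outcomes, sizes):
    """Mean logarithmic score, optionally weighted by outbreak size.
    `probs` are predicted outbreak probabilities, `outcomes` are 0/1
    indicators, `sizes` weight each week (use 1 for non-outbreak weeks).
    Lower is better; the log score diverges as a confident forecast
    (p near 0 or 1) turns out wrong, so it is sensitive near 1."""
    eps = 1e-12  # guard against log(0)
    total = sum(sizes)
    score = 0.0
    for p, y, w in zip(probs, outcomes, sizes):
        p = min(max(p, eps), 1 - eps)
        score += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return score / total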

Objective: To assess whether the performance of a computer-assisted detection (CAD) algorithm for acute pulmonary embolism (PE) differs in pulmonary CT angiographies acquired at various institutions. Methods: In this retrospective study, we included 40 consecutive scans with and 40 without PE from 3

Several existing information systems of urban passenger transport (UPT) are considered, and the author's UPT network model is presented. A new service is offered to passengers: the best path from one stop to another at a specified time. The algorithm and its software implementation for finding the optimal path are presented; the algorithm uses the current UPT schedule. The article also describes an algorithm for statistical analysis of trip payments made with electronic E-cards. The algorithm yields the density of passenger traffic during the day; this density is independent of the network topology and UPT schedules. The resulting traffic-flow density can be used to solve a number of practical problems, in particular forecasting the overcrowding of passenger transport during rush hours, quantitatively comparing different transport network topologies, and constructing the best UPT timetable. The efficiency of the proposed integrated approach is demonstrated on a model town of arbitrary dimensions.
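The optimal-path service described above can be sketched as an earliest-arrival search over the timetable. The timetable format and stop names below are invented for illustration; the production system would operate on the real UPT schedule:

```python
import heapq

def earliest_arrival(timetable, origin, destination, start_time):
    """Earliest-arrival search over a public-transport timetable.
    `timetable` is a list of scheduled runs (from_stop, to_stop,
    departure_time, arrival_time) in minutes; a Dijkstra-like
    label-setting search finds the earliest arrival at `destination`
    when leaving `origin` no earlier than `start_time`.
    Transfer times are assumed to be zero for simplicity."""
    best = {origin: start_time}
    heap = [(start_time, origin)]
    while heap:
        t, stop = heapq.heappop(heap)
        if stop == destination:
            return t
        if t > best.get(stop, float("inf")):
            continue  # stale heap entry
        for frm, to, dep, arr in timetable:
            if frm == stop and dep >= t and arr < best.get(to, float("inf")):
                best[to] = arr
                heapq.heappush(heap, (arr, to))
    return best.get(destination)

# Hypothetical runs: A->B at 10 arriving 20, B->C at 25 arriving 40, A->C direct at 15 arriving 55
runs = [("A", "B", 10, 20), ("B", "C", 25, 40), ("A", "C", 15, 55)]
arrival = earliest_arrival(runs, "A", "C", start_time=9)
```

Leaving at 9, the transfer via B (arrive 40) beats the direct run (arrive 55); leaving at 12, the A-to-B run has already departed and only the direct run remains.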

Tensor-Based Morphometry (TBM) is an increasingly popular method for group analysis of brain MRI data. The main steps in the analysis consist of a nonlinear registration to align each individual scan to a common space, and a subsequent statistical analysis to determine morphometric differences, or differences in fiber structure, between groups. Recently, we implemented the Statistically-Assisted Fluid Registration Algorithm, or SAFIRA [1], which is designed for tracking morphometric differences among populations. To this end, SAFIRA allows the inclusion of statistical priors extracted from the populations being studied as regularizers in the registration. This flexibility and degree of sophistication limit the tool to expert use, even more so considering that SAFIRA was initially implemented in command-line mode. Here, we introduce a new, intuitive, easy-to-use, Matlab-based graphical user interface for SAFIRA's multivariate TBM. The interface also generates different choices for the TBM statistics, including both the traditional univariate statistics on the Jacobian matrix and comparison of the full deformation tensors [2]. This software will be freely disseminated to the neuroimaging research community.

A photovoltaic (PV) power system can be used to fully replace a 650 VA generator for household electricity generation in Nigeria. This paper presents a feasibility analysis of load data and a simulation study of a stand-alone PV power system that meets the electrical needs of a household. The study is based on designing a PV energy system for household use. The patterns of load consumption within the household were studied and suitably modeled for simulation. The simulation study indicates that the energy requirement equivalent to a 650 VA generator for household use in Nigeria can be met by a 520 W solar PV array, a battery of 2312 Ah nominal capacity, and a 1 kW DC/AC inverter. This would be suitable for deployment of 100% clean energy for environmental sustainability and uninterruptible power in the household. The results of this research show that, with low-power-consuming appliances, it is possible to meet the entire annual electricity demand of a single household solely through a stand-alone PV energy supply. Installing solar panels in most Nigerian homes could significantly reduce reliance on grid power and thereby reduce the strain on the current capacity of the power generation infrastructure. A detailed design and description of the system are presented in the paper.
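The component figures quoted (520 W array, 2312 Ah battery, 1 kW inverter) come from the paper's simulation. As a rough illustration of the kind of sizing arithmetic involved, a first-pass rule-of-thumb calculation looks like this; all inputs and derating factors below are illustrative assumptions, not the paper's data or its actual design procedure:

```python
def size_pv_system(daily_load_wh, sun_hours, system_voltage,
                   autonomy_days, depth_of_discharge, derate=0.75):
    """First-pass stand-alone PV sizing (illustrative rules of thumb):
    array watts from daily energy and peak sun hours with a derating
    factor; battery amp-hours from the days of autonomy and the allowed
    depth of discharge."""
    array_w = daily_load_wh / (sun_hours * derate)
    battery_ah = daily_load_wh * autonomy_days / (system_voltage * depth_of_discharge)
    return array_w, battery_ah

# Hypothetical household: 1800 Wh/day, 5 peak sun hours, 12 V bus,
# 2 days of autonomy, 50% allowed depth of discharge
array_w, battery_ah = size_pv_system(
    daily_load_wh=1800, sun_hours=5.0, system_voltage=12,
    autonomy_days=2, depth_of_discharge=0.5)
```

A real design would add inverter efficiency, temperature derating and load diversity, which is why the paper's simulated figures differ from such back-of-envelope numbers.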

As integrated circuit devices scale into the deep sub-micron regime, ion implantation will continue to be the primary means of introducing dopant atoms into silicon. Different types of impurity profiles such as ultra-shallow profiles and retrograde profiles are necessary for deep submicron devices in order to realize the desired device performance. A new algorithm to reduce the statistical noise in three-dimensional ion implant simulations both in the lateral and shallow/deep regions of the profile is presented. The computational effort in BCA Monte Carlo ion implant simulation is also reduced.

Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in real time at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained by processing this multilook data from the high-resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.

The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of past U.S. responses to anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample locations for a variety of sampling plans applied to an actual release site. Most of the sampling designs validated are

Full Text Available The rising global demand for energy and the limited resources of fossil fuels call for new technologies in renewable energies such as solar cells. Silicon solar cells offer good efficiency but suffer from high production costs. A promising alternative is polymer solar cells, owing to potentially low production costs and the high flexibility of the panels. In this paper, the nanostructure of organic–inorganic composites is investigated, which can be used as photoactive layers in hybrid polymer solar cells. These materials consist of a polymeric (OC1C10-PPV) phase with CdSe nanoparticles embedded therein. On the basis of 3D image data with high spatial resolution, gained by electron tomography, an algorithm is developed to automatically extract the CdSe nanoparticles from grayscale images, where we model them as spheres. The algorithm is based on a modified version of the Hough transform, where a watershed algorithm is used to separate the image data into basins such that each basin contains exactly one nanoparticle. After their extraction, neighboring nanoparticles are connected to form a 3D network that is related to the transport of electrons in polymer solar cells. A detailed statistical analysis of the CdSe network morphology is accomplished, which allows deeper insight into the hopping percolation pathways of electrons.
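The sphere-extraction step can be illustrated in two dimensions with a toy fixed-radius Hough transform. The paper's actual algorithm works on 3D grayscale tomograms and uses a watershed step to separate particles; everything below (grid size, direction count, the synthetic boundary points) is an illustrative assumption:

```python
import math
from collections import Counter

def hough_circle_centre(points, radius, grid=1.0):
    """Toy fixed-radius Hough transform in 2-D: every boundary point votes
    for candidate centres one radius away along a coarse sweep of
    directions; the accumulator peak is taken as the detected centre."""
    acc = Counter()
    for x, y in points:
        for k in range(12):  # coarse sweep of candidate directions
            a = 2 * math.pi * k / 12
            cx = round((x + radius * math.cos(a)) / grid) * grid
            cy = round((y + radius * math.sin(a)) / grid) * grid
            acc[(cx, cy)] += 1
    return acc.most_common(1)[0][0]

# Synthetic boundary: 24 points on a circle of radius 5 centred at (10, 10)
points = [(10 + 5 * math.cos(t), 10 + 5 * math.sin(t))
          for t in (2 * math.pi * i / 24 for i in range(24))]
centre = hough_circle_centre(points, radius=5)
```

Votes pointing back toward the true centre pile up in one accumulator cell, while all other votes scatter, which is why the peak identifies the particle centre.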

This paper describes a new generalized and efficient model for performance analysis of a six-phase self-excited induction generator (SPSEIG) with three capacitor excitation topologies; simple shunt, short shunt and long shunt. Mathematical model of SPSEIG is formulated using nodal admittance method based on graph theory. Attention is focused on the influence of the different capacitor connections on the generator overload and output power capabilities. The generator voltage with simple shunt excitation connection collapses when it is overloaded while with either the short shunt or long shunt excitation connection; generator is able to sustain the load at a lower operating voltage and larger load current. The matrix equation developed by nodal admittance method is solved by Genetic Algorithm (GA) technique to predetermine the steady-state performance of SPSEIG. The experimental and theoretical results are found to be in good agreement.

Full Text Available Land-surface albedo plays a critical role in studies of the Earth's radiant energy budget. Satellite remote sensing provides an effective approach to acquiring regional and global albedo observations. Owing to cloud coverage, seasonal snow and sensor malfunctions, spatiotemporally continuous albedo datasets are often inaccessible. The Global LAnd Surface Satellite (GLASS) project aims at providing a suite of key land-surface parameter datasets with high temporal resolution and high accuracy for global change studies. The GLASS preliminary albedo datasets are global daily land-surface albedo generated by an angular bin algorithm (Qu et al., 2013). Like other products, the GLASS preliminary albedo datasets are affected by large areas of missing data; besides, sharp fluctuations exist in the time series of the GLASS preliminary albedo due to data noise and algorithm uncertainties. Based on Bayesian theory, a statistics-based temporal filter (STF) algorithm is proposed in this paper to fill data gaps, smooth albedo time series, and generate the GLASS final albedo product. The results of the STF algorithm are smooth and gapless albedo time series, with uncertainty estimates. The performance of the STF method was tested on one tile (H25V05) and three ground stations. Results show that the STF method has greatly improved the integrity and smoothness of the GLASS final albedo product. Seasonal trends in albedo are well depicted by the GLASS final albedo product. Compared with the MODerate resolution Imaging Spectroradiometer (MODIS) product, the GLASS final albedo product has a higher temporal resolution and more competence in capturing surface albedo variations. It is recommended that the quality flag always be checked before using the GLASS final albedo product.
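The core STF idea, combining a prior derived from the albedo statistics with noisy daily retrievals and falling back to the prior inside gaps, can be sketched with a scalar inverse-variance update. This is a simplified assumption of how such a Bayesian filter works, not the published GLASS algorithm:

```python
def stf_fill(series, prior_mean, prior_var, obs_var):
    """Statistics-based temporal filter sketch (assumed simplification):
    each day's albedo is the inverse-variance-weighted combination of a
    climatological prior and the observation; gaps (None) fall back to
    the prior, giving a gapless, smoothed series."""
    filled = []
    for obs in series:
        if obs is None:
            filled.append(prior_mean)  # no observation: posterior = prior
        else:
            w_prior = 1.0 / prior_var
            w_obs = 1.0 / obs_var
            filled.append((w_prior * prior_mean + w_obs * obs)
                          / (w_prior + w_obs))
    return filled

# Illustrative daily retrievals with two gaps; prior statistics invented
daily = [0.20, None, 0.30, None, 0.22]
smoothed = stf_fill(daily, prior_mean=0.24, prior_var=0.01, obs_var=0.01)
```

With equal prior and observation variances, each filled value is the midpoint of prior and retrieval; the real filter would also propagate a per-day uncertainty estimate, as the abstract describes.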

Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting the whole patch, rather than its pixels/objects, in one or the other set would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel- and object-based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
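The proposed fix, bootstrapping whole training patches instead of individual pixels, can be sketched as follows; the function and variable names are ours, not from the paper:

```python
import random

def patch_bootstrap(patch_ids, seed=None):
    """Bootstrap at the patch level: sample patches with replacement and
    put every pixel of a sampled patch in-bag together, so spatially
    correlated pixels from one patch never straddle the in-bag /
    out-of-bag split (the source of the bias discussed in the text).
    `patch_ids` gives, for each pixel, the id of the patch it belongs to;
    returns in-bag and out-of-bag pixel indices."""
    rng = random.Random(seed)
    unique = sorted(set(patch_ids))
    drawn = [rng.choice(unique) for _ in unique]  # patches, with replacement
    in_bag = [i for i, p in enumerate(patch_ids) if p in drawn]
    out_of_bag = [i for i, p in enumerate(patch_ids) if p not in drawn]
    return in_bag, out_of_bag

# Five pixels from patch "a", three from patch "b", four from patch "c"
labels = ["a"] * 5 + ["b"] * 3 + ["c"] * 4
in_bag, oob = patch_bootstrap(labels, seed=1)
```

Every patch ends up entirely in-bag or entirely out-of-bag, which is exactly the property that removes the optimistic bias of per-pixel bootstrapping.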

Clinical outcomes of the stand-alone cage have been encouraging when used in anterior cervical discectomy and fusion (ACDF), but concerns remain regarding its complications, especially cage subsidence. This retrospective study was undertaken to investigate the long-term radiological and clinical outcomes of the stand-alone titanium cage and to evaluate the incidence of cage subsidence in relation to the clinical outcome in the surgical treatment of degenerative cervical disc disease. A total of 57 consecutive patients (68 levels) who underwent ACDF using a titanium box cage for the treatment of cervical radiculopathy and/or myelopathy were reviewed for radiological and clinical outcomes. They were followed for at least 5 years. Radiographs were obtained before and after surgery, 3 months postoperatively, and at the final follow-up to determine the presence of fusion and cage subsidence. The Cobb angle of C2-C7 and the vertebral bodies adjacent to the treated disc were measured to evaluate the cervical sagittal alignment and local lordosis. The disc height was measured as well. The clinical outcomes were evaluated using the Japanese Orthopaedic Association (JOA) score for cervical myelopathy, before and after surgery, and at the final follow-up. The recovery rate of the JOA score was also calculated. The Visual Analogue Scale (VAS) scores of neck and radicular pain were evaluated as well. The fusion rate was 95.6% (65/68) 3 months after surgery. Successful bone fusion was achieved in all patients at the final follow-up. Cage subsidence occurred in 13 cages (19.1%) at the 3-month follow-up; however, there was no relation between fusion and cage subsidence. Cervical and local lordosis improved after surgery, with the improvement preserved at the final follow-up. The preoperative disc height of both subsidence and non-subsidence patients was similar; however, the postoperative posterior disc height (PDH) of the subsidence group was significantly greater than that of the non-subsidence group.

Statistical tomography to obtain local field variables from non-intrusive line-of-sight measurements in turbulent flows has been an intriguing subject for some time. In this study, a novel algorithm is presented to obtain statistical information on the local scalar field in axisymmetric turbulent flows. The algorithm uses line-of-sight transverse deflection angle measurements in only one view direction to greatly simplify the optical configuration. The validity of the algorithm is examined using noise-free synthetically generated scalar data that simulate the concentration field of a turbulent helium jet. Results show that the proposed algorithm provides excellent reconstruction of the integral length scale and variance of the refractive index difference, which can be related to scalar physical properties such as density, temperature and/or species concentrations. Good reconstruction accuracy and the need for only a simple optical configuration make the proposed algorithm a promising method for characterizing the scalar field in turbulent flows using path-integrated measurements.

Full Text Available We analyse the asymptotic behaviour of random instances of the maximum set packing (MSP) optimization problem, also known as maximum matching or maximum strong independent set on hypergraphs. We give an analytic prediction of the MSP size using the 1RSB cavity method from the statistical mechanics of disordered systems. We also propose a heuristic algorithm, a generalization of the celebrated Karp-Sipser one, which allows us to rigorously prove that the replica symmetric cavity method prediction is exact for certain problem ensembles and breaks down when a core survives the leaf removal process. The e-phenomenon threshold discovered by Karp and Sipser, marking the onset of core emergence and of replica symmetry breaking, is elegantly generalized to C_s = e/(d-1) for one of the ensembles considered, where d is the size of the sets.
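For ordinary graphs (sets of size d = 2), the Karp-Sipser leaf-removal heuristic that the paper generalizes can be sketched in a few lines. The fallback branch is where the heuristic phase, and with it the breakdown of replica symmetry described above, begins:

```python
def karp_sipser_matching(adj):
    """Karp-Sipser leaf-removal heuristic for matching on a graph given
    as {vertex: set(neighbours)}. While a degree-1 vertex (leaf) exists,
    it is matched to its unique neighbour, which is provably safe; if
    only the 2-core remains, an arbitrary edge is taken (the heuristic
    phase)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # defensive copy
    matching = []
    while any(adj.values()):
        leaf = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if leaf is not None:
            u, v = leaf, next(iter(adj[leaf]))
        else:  # a core survives: fall back to an arbitrary edge
            u = next(v for v, ns in adj.items() if ns)
            v = next(iter(adj[u]))
        matching.append((u, v))
        for w in (u, v):  # delete both matched endpoints from the graph
            for n in adj[w]:
                adj[n].discard(w)
            adj[w] = set()
    return matching

# A path 1-2-3-4: leaf removal alone forces the optimal matching {1-2, 3-4}
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
m = karp_sipser_matching(path)
```

On trees and other core-free instances the leaf rule alone is optimal; the rigorous analysis in the paper concerns exactly when the fallback is ever needed.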

Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6

A new online, unsupervised voice activity detection (VAD) method is proposed. The method is based on a feature derived from high-order statistics (HOS), enhanced by a second metric based on normalized autocorrelation peaks to improve its robustness to non-Gaussian noises. This feature is also oriented for discriminating between close-talk and far-field speech, thus providing a VAD method in the context of human-to-human interaction independent of the energy level. The classification is done by an online variation of the Expectation-Maximization (EM) algorithm, to track and adapt to noise variations in the speech signal. Performance of the proposed method is evaluated on an in-house data and on CENSREC-1-C, a publicly available database used for VAD in the context of automatic speech recognition (ASR). On both test sets, the proposed method outperforms a simple energy-based algorithm and is shown to be more robust against the change in speech sparsity, SNR variability and the noise type.
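The online EM variant is the paper's contribution; for orientation, the batch EM it adapts, fitted here to a two-component 1-D Gaussian mixture of frame features, looks roughly like this. The initialisation scheme and the toy data are our assumptions:

```python
import math

def em_two_gaussians(xs, iters=50):
    """Batch EM for a two-component 1-D Gaussian mixture, the offline
    analogue of the online EM used for speech/non-speech separation.
    Returns (weight, mean, variance) for each component."""
    xs = list(xs)
    m1, m2 = min(xs), max(xs)                 # crude initialisation
    v1 = v2 = (m2 - m1) ** 2 / 4 + 1e-6
    w1 = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each sample
        r = []
        for x in xs:
            p1 = w1 * math.exp(-(x - m1) ** 2 / (2 * v1)) / math.sqrt(v1)
            p2 = (1 - w1) * math.exp(-(x - m2) ** 2 / (2 * v2)) / math.sqrt(v2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weights, means and variances
        n1 = sum(r)
        w1 = n1 / len(xs)
        m1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - n1)
        v1 = sum(ri * (x - m1) ** 2 for ri, x in zip(r, xs)) / n1 + 1e-6
        v2 = sum((1 - ri) * (x - m2) ** 2
                 for ri, x in zip(r, xs)) / (len(xs) - n1) + 1e-6
    return (w1, m1, v1), (1 - w1, m2, v2)

# Toy frame features: noise-like values near 1.0, speech-like values near 5.0
frames = [0.9, 1.0, 1.1, 1.05, 0.95, 4.8, 5.0, 5.2, 5.1, 4.9]
comp1, comp2 = em_two_gaussians(frames)
```

The online version in the paper replaces the batch M-step with incremental updates so the model can track noise variations as the signal arrives.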

Full Text Available In an automobile, the brake system is an essential part responsible for control of the vehicle. Any failure in the brake system impacts the vehicle's motion and can generate frequent catastrophic effects on vehicle and passenger safety. Thus the brake system plays a vital role in an automobile, and condition monitoring of the brake system is essential. Vibration-based condition monitoring using machine learning techniques is gaining momentum. This study is one such attempt to perform condition monitoring of a hydraulic brake system through vibration analysis. In this research, the performance of a Clonal Selection Classification Algorithm (CSCA) for brake fault diagnosis is reported. A hydraulic brake system test rig was fabricated. Under good and faulty conditions of the brake system, vibration signals were acquired using a piezoelectric transducer. Statistical parameters were extracted from the vibration signal, and the best feature set was identified for classification using an attribute evaluator. The selected features were then classified using the CSCA. The classification accuracy of this artificial intelligence technique has been compared with other machine learning approaches and discussed. The Clonal Selection Classification Algorithm performs better and gives the maximum classification accuracy (96%) for the fault diagnosis of a hydraulic brake system.

One of the main challenges when working with modern climate model ensembles is the increasingly large size of the data produced, and the consequent difficulty in storing large amounts of spatio-temporally resolved information. Many compression algorithms can be used to mitigate this problem, but since they are designed to compress generic scientific data sets, they do not account for the nature of climate model output, and they compress only individual simulations. In this work, we propose a different, statistics-based approach that explicitly accounts for the space-time dependence of the data for annual global three-dimensional temperature fields in an initial condition ensemble. The set of estimated parameters is small (compared to the data size) and can be regarded as a summary of the essential structure of the ensemble output; therefore, it can be used to instantaneously reproduce the temperature fields in an ensemble with a substantial saving in storage and time. The statistical model exploits the gridded geometry of the data and parallelization across processors. It is therefore computationally convenient and makes it possible to fit a non-trivial model to a data set of one billion data points with a covariance matrix comprising 10^18 entries.

Analyzing data sets collected in experiments or by observation is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach with symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems.

The computational complexity of solving random 3-Satisfiability (3-SAT) problems is investigated using statistical physics concepts and techniques related to phase transitions, growth processes and (real-space) renormalization flows. 3-SAT is a representative example of hard computational tasks; it consists of determining whether a set of αN randomly drawn logical constraints involving N Boolean variables can be satisfied altogether or not. Widely used solving procedures, such as the Davis-Putnam-Loveland-Logemann (DPLL) algorithm, perform a systematic search for a solution through a sequence of trials and errors represented by a search tree. The size of the search tree accounts for the computational complexity, i.e. the amount of computational effort required to achieve resolution. In the present study, we identify, using theory and numerical experiments, easy (size of the search tree scaling polynomially with N) and hard (exponential scaling) regimes as a function of the ratio α of constraints per variable. The typical complexity is explicitly calculated in the different regimes, in very good agreement with numerical simulations. Our theoretical approach is based on the analysis of the growth of the branches in the search tree under the operation of DPLL. On each branch, the initial 3-SAT problem is dynamically turned into a more generic 2+p-SAT problem, where p and 1 - p are the fractions of constraints involving three and two variables respectively. The growth of each branch is monitored by the dynamical evolution of α and p and is represented by a trajectory in the static phase diagram of the random 2+p-SAT problem. Depending on whether or not the trajectories cross the boundary between satisfiable and unsatisfiable phases, single branches or full trees are generated by DPLL, resulting in easy or hard resolutions. Our picture for the origin of complexity can be applied to other computational problems solved by branch and bound algorithms.
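The branching search analysed above can be sketched compactly. The following is a minimal, illustrative DPLL in Python (the clause representation and the variable-choice rule are simplifications; production solvers add branching heuristics and clause learning):

```python
# Compact DPLL sketch. A formula is a list of clauses; a clause is a list
# of nonzero ints (positive = variable, negative = its negation).

def dpll(clauses):
    """Return True iff the clause set is satisfiable."""
    # Unit propagation: repeatedly satisfy single-literal clauses.
    while any(len(c) == 1 for c in clauses):
        unit = next(c[0] for c in clauses if len(c) == 1)
        clauses = [[l for l in c if l != -unit]
                   for c in clauses if unit not in c]
        if [] in clauses:              # empty clause: contradiction
            return False
    if [] in clauses:
        return False
    if not clauses:                    # every clause satisfied
        return True
    # Branch on a variable: the two children of a search-tree node.
    v = abs(clauses[0][0])
    return dpll(clauses + [[v]]) or dpll(clauses + [[-v]])

print(dpll([[1, 2], [-1, 2], [-2, 3]]))   # satisfiable
print(dpll([[1], [-1]]))                  # unsatisfiable
```

The recursive calls are exactly the branches whose growth the statistical-physics analysis tracks; the number of calls is the size of the search tree.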

Full Text Available [b]Abstract[/b]. The LSB algorithm is one of the most studied steganographic algorithms. Several types of attacks, such as the chi-square attack and RS analysis, can detect the fact that covert communication is taking place. This paper presents a modification of the LSB algorithm which introduces fewer changes to the carrier than the original LSB algorithm. The modified algorithm uses a compression function, which significantly hinders the detection process. This paper also includes a description of the main steganalytic methods along with their application to the proposed modification of the LSB algorithm.[b]Keywords[/b]: steganography, cyclic code, error correction codes, LSB, BCH, chi-square, steganalysis
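For orientation, the original (unmodified) LSB scheme that the paper improves on can be sketched in a few lines: each payload bit simply replaces the least significant bit of one carrier byte. This sketch omits the paper's compression-function modification.

```python
# Minimal LSB embedding/extraction sketch; illustrative only.

def lsb_embed(carrier, bits):
    """Overwrite the LSB of carrier[i] with bits[i]."""
    assert len(bits) <= len(carrier)
    out = list(carrier)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b     # clear the LSB, then set it to b
    return out

def lsb_extract(carrier, n):
    """Read back the first n embedded bits."""
    return [byte & 1 for byte in carrier[:n]]

stego = lsb_embed([200, 101, 54, 33], [1, 0, 1, 1])
print(lsb_extract(stego, 4))  # → [1, 0, 1, 1]
```

It is precisely the statistical footprint of these LSB overwrites that the chi-square and RS attacks exploit, which motivates embedding schemes that change fewer carrier bytes.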

Tool for OPerations on Catalogues And Tables, or TOPCAT, is a graphical viewer for table data. It offers a variety of ways to work with data tables, including a browser for the cell data, viewers for table and column metadata, dataset visualization, and even analysis. We discuss a small subset of TOPCAT's functionality in this chapter. TOPCAT was originally developed as part of the Starlink program in the United Kingdom. It is now maintained by AstroGrid. The program is written in pure Java and available under the GNU General Public License. It is available for download, and a version is included in the software distribution accompanying this book. TOPCAT is a GUI interface on top of the STIL library. A command line interface to this library, STILTS, described in Chapter 21, provides scriptable access to many of the capabilities described here. The purpose of this tutorial is to provide an overview of TOPCAT for the novice user. The best place to look for and learn about TOPCAT is the web page maintained by Mark B. Taylor. There, TOPCAT documentation is provided in HTML, PDF, via screen shots, etc. In this chapter we take the user through a few examples that give a general idea of how TOPCAT works. The majority of TOPCAT's functionality is not covered in this short tutorial. Our goal is to lead the reader through an exercise that results in a publication-quality figure (e.g. for a journal article). Specifically, we will use TOPCAT to show how the color-magnitude relation of a galaxy cluster compares to that of all galaxies in the Sloan Digital Sky Survey (York et al. 2000). This diagnostic is used not only in cluster finding; its linear fit can also provide insight into the age and/or metallicity of the oldest galaxies in galaxy clusters (which are some of the oldest galaxies in the Universe).
The data we need for this exercise are: 1) the entire spectroscopic galaxy catalog from the SDSS, with galaxy positions, galaxy redshifts, and galaxy magnitudes and 2) galaxy members of a known galaxy cluster. For the former, we will download data directly from the SDSS servers to our local machine for analysis. For the latter, we will use TOPCAT's ability to call live cone search services.

Literacy is usually considered the ability to read at a basic level. Now it is beginning to be defined more broadly to include applying reading, writing, and mathematical skills to obtain and use information and solve problems at levels of proficiency necessary to function in society, to achieve one's goals and develop one's ...

The objective of this project is research, development and demonstration of innovative thermal management concepts that reduce the cell or battery weight, complexity (component count) and/or cost by at least 20%. The project addresses two issues that are common problems with current state-of-the-art lithium ion battery packs used in vehicles: low power at cold temperatures and reduced battery life when exposed to high temperatures. Typically, battery packs are "oversized" to satisfy the two issues mentioned above. The first phase of the project was spent building a battery pack simulation model using AMESim software. The battery pack used as a benchmark was from the Fiat 500EV. FCA and NREL provided vehicle data and cell data that allowed an accurate model to be created that matched the electrical and thermal characteristics of the actual battery pack. The second phase involved using the battery model from the first phase to evaluate different thermal management concepts. In the end, a gas injection heat pump system was chosen as the dedicated thermal system to both heat and cool the battery pack. Based on the simulation model, the heat pump system could use 50% less energy to heat the battery pack in -20°C ambient conditions, and by keeping the battery cooler in hot climates, the battery pack size could be reduced by 5% and still meet the warranty requirements. During the final phase, the actual battery pack and heat pump system were installed in a test bench at DENSO to validate the simulation results. During this phase the system was also moved to NREL, where further testing was done to validate the results. In conclusion, the heat pump system can improve "fuel economy" (for an electric vehicle) by 12% on average in cold climates. Also, the battery pack size, or capacity, could be reduced by 5%, or if pack size is kept constant, the pack life could be increased by two years.
Finally, the total battery pack and thermal system cost could be reduced by 5%, but only if the system is integrated with the vehicle cabin air conditioning system. The reason we were unable to achieve the 20% reduction target is the natural decay of the battery cell due to the number of cycles. Newer battery chemistries that are less sensitive to cycling may have more potential for reducing battery size through thermal management.

In this feature, leading researchers in the field of microbial biotechnology speculate on the technical and conceptual developments that will drive innovative research and open new vistas over the next few years.

One of the quickest means of tsunami evacuation is transfer to higher ground soon after strong and long ground shaking. Ground shaking itself is a good trigger for evacuation from a disastrous tsunami. Longer-period seismic waves are considered to be more strongly correlated with earthquake magnitude. We investigated the possible application of this to a tsunami hazard alarm using single-site ground motion observation. Information from the mass media is sometimes unavailable due to power failure soon after a large earthquake. Even when an official alarm is available, multiple information sources for tsunami alerts would help people become aware of the coming risk of a tsunami. Thus, a device that indicates the risk of a tsunami without requiring other data would be helpful to those who should evacuate. Since the sensitivity of a low-cost MEMS (microelectromechanical systems) accelerometer is sufficient for this purpose, tsunami alarm equipment for home use may be easily realized. The amplitude of long-period (20 s cutoff) displacement was proposed as the threshold for the alarm, based on empirical relationships among magnitude, tsunami height, hypocentral distance, and peak ground displacement of seismic waves. Application of this method to recent major earthquakes indicated that such equipment could effectively alert people to the possibility of a tsunami.
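The alarm logic described above can be sketched as: double-integrate the single-site acceleration record, low-pass it with a 20 s cutoff period to keep only the long-period displacement, and trip when a peak threshold is crossed. The threshold value and the one-pole filter below are illustrative choices, not the paper's actual parameters.

```python
import math

def alarm(accel, dt, threshold_m=0.1, cutoff_s=20.0):
    """Return True if long-period displacement exceeds threshold_m (metres).

    accel: acceleration samples in m/s^2, sampled every dt seconds.
    """
    vel = disp = smooth = 0.0
    # One-pole low-pass with time constant T_c / (2*pi); illustrative filter.
    alpha = dt / (dt + cutoff_s / (2 * math.pi))
    peak = 0.0
    for a in accel:
        vel += a * dt                       # acceleration -> velocity
        disp += vel * dt                    # velocity -> displacement
        smooth += alpha * (disp - smooth)   # keep the long-period component
        peak = max(peak, abs(smooth))
    return peak >= threshold_m

# A sustained 0.02 m/s^2 pulse over 30 s integrates to metres of
# displacement, so the alarm trips; a quiet record does not.
print(alarm([0.02] * 3000, dt=0.01))
print(alarm([0.0] * 1000, dt=0.01))
```

A real device would also need baseline correction and drift handling, which the MEMS-accelerometer literature treats in detail.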

In this work, we present an extension of the forward-reverse representation introduced in "Simulation of forward-reverse stochastic representations for conditional diffusions", a 2014 paper by Bayer and Schoenmakers, to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditional on their values at the extremes of given time intervals. We then employ this SRN bridge-generation technique for the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equation approximations; then, during phase II, we apply the Monte Carlo version of the Expectation-Maximization algorithm to the phase I output. By selecting a set of over-dispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.

Phase retrieval is a technique for extracting quantitative phase information from X-ray propagation-based phase-contrast tomography (PPCT). In this paper, the performance of different single-distance phase retrieval algorithms is investigated. The algorithms are herein called the phase-attenuation duality Born algorithm (PAD-BA), phase-attenuation duality Rytov algorithm (PAD-RA), phase-attenuation duality modified Bronnikov algorithm (PAD-MBA), phase-attenuation duality Paganin algorithm (PAD-PA), and phase-attenuation duality Wu algorithm (PAD-WA). They are all based on the phase-attenuation duality property and on weak absorption of the sample, and they employ PPCT data acquired at only a single distance. In this paper, they are investigated via simulated noise-free PPCT data considering the fulfillment of the PAD property and weakly absorbing conditions, and via experimental PPCT data of a mixture sample containing absorbing and weakly absorbing materials, and of a polymer sample considering different degrees of statistical and structural noise. The simulation shows that all algorithms can quantitatively reconstruct the 3D refractive index of a quasi-homogeneous weakly absorbing object from noise-free PPCT data. When the weakly absorbing condition is violated, PAD-RA and PAD-PA/WA obtain better results than PAD-BA and PAD-MBA, as shown in both the simulation and the mixture sample results. When considering statistical noise, the contrast-to-noise ratio decreases as the photon number is reduced. The structural noise study shows that the result is progressively corrupted by ring-like artifacts as the structural noise (i.e. phantom thickness) increases. PAD-RA and PAD-PA/WA achieve better density resolution than PAD-BA and PAD-MBA in both the statistical and structural noise studies.

Full Text Available Recently, wireless sensor networks (WSNs) have drawn great interest due to their outstanding monitoring and management potential in medical, environmental and industrial applications. Most of the applications that employ WSNs demand that all of the sensor nodes run on a common time scale, a requirement that highlights the importance of clock synchronization. The clock synchronization problem in WSNs is inherently related to parameter estimation. The accuracy of clock synchronization algorithms depends essentially on the statistical properties of the parameter estimation algorithms. Recently, studies dedicated to the estimation of synchronization parameters, such as clock offset and skew, have begun to emerge in the literature. The aim of this article is to provide an overview of the state-of-the-art clock synchronization algorithms for WSNs from a statistical signal processing point of view. This article focuses on describing the key features of the class of clock synchronization algorithms that exploit the traditional two-way message (signal) exchange mechanism. Upon introducing the two-way message exchange mechanism, the main clock offset estimation algorithms for pairwise synchronization of sensor nodes are first reviewed, and their performance is compared. The class of fully-distributed clock offset estimation algorithms for network-wide synchronization is then surveyed. The paper concludes with a list of open research problems pertaining to clock synchronization of WSNs.
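The two-way message exchange referenced above works from four timestamps: node A sends at T1 (A's clock), B receives at T2 and replies at T3 (both B's clock), and A receives at T4. Assuming a symmetric link, the classical estimator for offset and delay is:

```python
# Classical two-way timestamp exchange estimator (as used in NTP-style
# synchronization); a sketch of the mechanism the survey builds on.

def two_way_estimate(t1, t2, t3, t4):
    """Estimate (offset of B's clock relative to A, one-way delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# B is 5 units ahead of A; the true one-way delay is 2 units.
print(two_way_estimate(t1=10, t2=17, t3=18, t4=15))  # → (5.0, 2.0)
```

The statistical-signal-processing view enters when the two one-way delays are random and asymmetric: the estimator above is then only one member of a family whose properties depend on the delay distribution.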

Full Text Available The role of bearings is significant in reducing the downtime of all rotating machinery. The increasing trend of bearing failures in recent times has highlighted the need for and importance of deploying condition monitoring. Multiple factors are associated with a bearing failure while it is in operation; hence, a predictive strategy is required to evaluate the current state of the bearings in operation. In the past, predictive models with regression techniques were widely used for bearing lifetime estimation. The objective of this paper is to estimate the remaining useful life of bearings through a machine learning approach, with the ultimate aim of strengthening predictive maintenance. The present study used a classification approach following the concepts of machine learning, and a predictive model was built to calculate the residual lifetime of bearings in operation. Vibration signals were acquired on a continuous basis from an experiment in which new bearings were run at pre-defined load and speed conditions until they failed naturally. In the present work, statistical features were extracted, feature selection was carried out using a J48 decision tree, and the selected features were used to develop the prognostic model. The K-Star classification algorithm, a supervised machine learning technique, was used to build a predictive model that estimates the lifetime of bearings. The performance of the classifier was cross-validated with separate data. The results show that the K-Star classification model gives 98.56% classification accuracy with the selected features.
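The statistical features commonly extracted from vibration signals before feature selection include time-domain quantities such as RMS, peak, crest factor and kurtosis. The exact feature set used in the study is not specified here, so the following Python sketch is illustrative:

```python
import math

def vibration_features(signal):
    """Compute a few standard time-domain features of a vibration record."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    peak = max(abs(x) for x in signal)
    kurtosis = (sum((x - mean) ** 4 for x in signal) / n) / std ** 4
    return {"mean": mean, "rms": rms, "peak": peak,
            "crest_factor": peak / rms, "kurtosis": kurtosis}

# A spiky record (one large impulse) has a high crest factor, a classic
# early symptom of localized bearing defects.
feats = vibration_features([0.1, -0.2, 0.15, -0.1, 1.2, -0.05])
print(feats["crest_factor"])
```

A feature table of this kind, computed per time window, is what a decision-tree selector and a K-Star classifier would consume downstream.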

Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions to and reductions in rainfall accumulation, caused by spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine if a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.

have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.
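Euclid's algorithm, mentioned above, is itself the canonical example of expressing an algorithm as a program:

```python
# Euclid's algorithm: greatest common divisor by repeated remainder.

def gcd(a, b):
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, a mod b)
    return a

print(gcd(252, 105))  # → 21
```

Each line is a command, and the sequence of commands realizes the algorithm, just as the activities above do informally.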

Purpose: In order to enable a magnetic resonance imaging (MRI) only workflow in radiotherapy treatment planning, methods are required for generating Hounsfield unit (HU) maps (i.e., synthetic computed tomography, sCT) for dose calculations directly from MRI. The Statistical Decomposition Algorithm (SDA) is a method for automatically generating sCT images from a single MR image volume, based on automatic tissue classification in combination with a model trained using a multimodal template material. This study compares dose calculations between sCT generated by the SDA and conventional CT in the male pelvic region. Methods: The study comprised ten prostate cancer patients, for whom a 3D T2-weighted MRI and a conventional planning CT were acquired. For each patient, sCT images were generated from the acquired MRI using the SDA. In order to decouple the effect of variations in patient geometry between imaging modalities from the effect of uncertainties in the SDA, the conventional CT was nonrigidly registered to the MRI to assure that their geometries were well aligned. For each patient, a volumetric modulated arc therapy plan was created for the registered CT (rCT) and recalculated for both the sCT and the conventional CT. The results were evaluated using several methods, including mean absolute error (MAE), a set of dose-volume histogram parameters, and a restrictive gamma criterion (2% local dose/1 mm). Results: The MAE within the body contour was 36.5 ± 4.1 (1 s.d.) HU between sCT and rCT. The average mean absorbed dose difference to target was 0.0% ± 0.2% (1 s.d.) between sCT and rCT, whereas it was −0.3% ± 0.3% (1 s.d.) between CT and rCT. The average gamma pass rate was 99.9% for sCT vs rCT, whereas it was 90.3% for CT vs rCT. Conclusions: The SDA enables a highly accurate MRI only workflow in prostate radiotherapy planning. The dosimetric uncertainties originating from the SDA appear negligible and are notably lower than the uncertainties

The utilization of syngas shows high potential to improve the economics of stand-alone gasification-based power plants as well as to help meet the growing demand for transportation fuels. The thermochemical conversion of biomass via gasification to heat and power generation from...

For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004.) The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO 2 -emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
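The maximum-likelihood, Poisson-noise setting described above underlies the classical Richardson-Lucy iteration. The following one-dimensional, single-frame Python sketch shows only that core multiplicative update (the paper's multi-frame, regularized algorithm is considerably more elaborate); circular convolution keeps the example short.

```python
# Richardson-Lucy deconvolution sketch: iterative ML estimation under a
# Poisson model, using circular (wrap-around) convolution. Illustrative.

def conv(x, psf):
    """Circular convolution of signal x with kernel psf."""
    n, m = len(x), len(psf)
    return [sum(x[(i - j) % n] * psf[j] for j in range(m)) for i in range(n)]

def correlate(x, psf):
    """Adjoint of conv: circular correlation."""
    n, m = len(x), len(psf)
    return [sum(x[(i + j) % n] * psf[j] for j in range(m)) for i in range(n)]

def richardson_lucy(observed, psf, iters=200):
    est = [sum(observed) / len(observed)] * len(observed)   # flat start
    for _ in range(iters):
        blurred = conv(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        est = [e * c for e, c in zip(est, correlate(ratio, psf))]
    return est

psf = [0.25, 0.5, 0.25]                              # normalized blur kernel
observed = conv([0.0, 0.0, 4.0, 0.0, 0.0], psf)      # blurred point source
restored = richardson_lucy(observed, psf)
print([round(v, 3) for v in restored])  # → [0.0, 0.0, 4.0, 0.0, 0.0]
```

The multiplicative update conserves total flux and keeps the estimate non-negative, which is why this family of iterations suits photon-counting AO data.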

Full Text Available The epidemics of obesity and diabetes have aroused great interest in the analysis of energy balance, with the use of organisms ranging from nematode worms to humans. Although generating energy-intake or -expenditure data is relatively straightforward, the most appropriate way to analyse the data has been an issue of contention for many decades. In the last few years, a consensus has been reached regarding the best methods for analysing such data. To facilitate using these best-practice methods, we present here an algorithm that provides a step-by-step guide for analysing energy-intake or -expenditure data. The algorithm can be used to analyse data from either humans or experimental animals, such as small mammals or invertebrates. It can be used in combination with any commercial statistics package; however, to assist with analysis, we have included detailed instructions for performing each step for three popular statistics packages (SPSS, MINITAB and R). We also provide interpretations of the results obtained at each step. We hope that this algorithm will assist in the statistically appropriate analysis of such data, a field in which there has been much confusion and some controversy.
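The consensus the authors refer to is commonly understood to centre on ANCOVA-style analysis: comparing group energy intake or expenditure adjusted for body mass as a covariate, rather than dividing by mass (this characterization is an assumption here; the paper's step-by-step guide gives the authoritative procedure). The underlying model fit can be sketched in pure Python, with made-up data for illustration:

```python
# Least-squares fit of the ANCOVA-style model y = b0 + b1*mass + b2*group,
# solved via the 3x3 normal equations. Illustrative data; real analyses
# should use a statistics package (SPSS, MINITAB, R) for proper inference.

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def ancova_fit(mass, group, y):
    X = [[1.0, m, g] for m, g in zip(mass, group)]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    return solve3(A, b)   # [intercept, mass slope, adjusted group effect]

mass = [20, 25, 30, 22, 27, 32]
group = [0, 0, 0, 1, 1, 1]            # 0 = control, 1 = treatment
y = [10, 12.5, 15, 13, 15.5, 18]      # constructed as 0.5*mass + 2*group
print(ancova_fit(mass, group, y))
```

Here the coefficient on `group` is the mass-adjusted group difference, the quantity of interest when heavier animals trivially eat and expend more.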

Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects on image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using 1-, 5-, and 10-yr-old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.

In the previous articles, we have discussed various common data structures such as arrays, lists, queues and trees, and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

For the year 2002, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees on energy products.

For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.

For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees.

Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need to correct for scene deviations from the basic inverse-square law governing detection rates, even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by fusing the two sensors' output data. (authors)
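The inverse-square correction described above can be sketched in a few lines. This is an illustrative example under our own assumptions, not the cited system's code; the function name, reference distance, and count values are invented.

```python
# Illustrative sketch (not the cited system's code): scale raw gamma
# count rates to a common reference distance using the inverse-square
# law, with per-object distances supplied by the vision tracker.

def normalize_counts(raw_counts, distances_m, ref_distance_m=1.0):
    """Rate at d scales as 1/d**2, so the rate at d_ref is rate*(d/d_ref)**2."""
    return [c * (d / ref_distance_m) ** 2
            for c, d in zip(raw_counts, distances_m)]

# A source of constant activity moving away: raw counts fall as 1/d**2,
# but the normalized rates stay (approximately) constant.
raw = [400.0, 100.0, 44.4]     # counts/s observed at 1 m, 2 m, 3 m
dist = [1.0, 2.0, 3.0]
print(normalize_counts(raw, dist))
```

Residual deviations of the normalized rates from a constant are precisely the scene-dependent departures from the inverse-square law that the calibration algorithms must account for.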

In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three-dimensional wavelet transform domain. We design the watermark sequence by solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator whose seed is the secret key, known to the embedder and the receiver but unknown to the public. We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.

Full Text Available Abstract We propose in this paper a detection algorithm based on a cost function that jointly tests the correlation induced by the cyclic prefix and the fact that this correlation is time-periodic. In the first part of the paper, the cost function is introduced and some analytical results are given. In particular, the noise and multipath channel impacts on its values are theoretically analysed. In the second part of the paper, some asymptotic results are derived. A first exploitation of these results is used to build a detection test based on the false alarm probability. These results are also used to evaluate the impact of the number of cycle frequencies taken into account in the cost function on the detection performance. Using numerical estimations, we find that the proposed algorithm detects DVB-T signals at an SNR of −12 dB. As a comparison, and in the same context, the detection algorithm proposed by the 802.22 WG in 2006 is able to detect these signals at an SNR of −8 dB.

Full Text Available We propose in this paper a detection algorithm based on a cost function that jointly tests the correlation induced by the cyclic prefix and the fact that this correlation is time-periodic. In the first part of the paper, the cost function is introduced and some analytical results are given. In particular, the noise and multipath channel impacts on its values are theoretically analysed. In a second part of the paper, some asymptotic results are derived. A first exploitation of these results is used to build a detection test based on the false alarm probability. These results are also used to evaluate the impact of the number of cycle frequencies taken into account in the cost function on the detection performances. Thanks to numerical estimations, we have been able to estimate that the proposed algorithm detects DVB-T signals with an SNR of −12 dB. As a comparison, and in the same context, the detection algorithm proposed by the 802.22 WG in 2006 is able to detect these signals with an SNR of −8 dB.

In the program shown in Figure 1, we have repeated the algorithm M times and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...
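The fixed-point idea in the excerpt can be made concrete with a short sketch. This is our own illustration (a Q8 format with 8 fractional bits), not code from the original text:

```python
# Fixed-point representation: plain integers store the values, with the
# binary point assumed F bits from the right (scale factor 2**F).
# Addition works directly on the integers; multiplication produces 2F
# fractional bits, so the product is shifted back by F.

F = 8                      # fractional bits (Q8); scale factor 256
SCALE = 1 << F

def to_fixed(x):           # real -> fixed-point integer
    return int(round(x * SCALE))

def to_real(q):            # fixed-point integer -> real
    return q / SCALE

def fx_mul(a, b):          # rescale the double-precision product
    return (a * b) >> F

a, b = to_fixed(1.5), to_fixed(2.25)
print(to_real(a + b))          # 3.75: integer addition needs no adjustment
print(to_real(fx_mul(a, b)))   # 3.375 == 1.5 * 2.25
```

Because the binary point is only a convention, the same integer arithmetic units serve both ordinary integer and fixed-point operations, which is the point the excerpt is making.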

Full Text Available The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to determining the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With the increasing availability of high-resolution (nearly global) image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with an available manually determined CSFD, such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 in size, offset by a step width of 74 m. Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to

The classical statistical energy analysis (SEA) theory is a common approach for vibroacoustic analysis of coupled complex structures, being efficient to predict high-frequency noise and vibration of engineering systems. There are, however, some limitations in applying the conventional SEA. ... To demonstrate the performance of the proposed strategy, the SEA model updating of a railway passenger coach is carried out. First, a sensitivity analysis is carried out to select the most sensitive parameters of the SEA model. For the selected parameters of the model, prior probability density functions are then taken ...

Full Text Available Multiple linear regression (MLR) and machine learning techniques in pharmacogenetic algorithm-based warfarin dosing have been reported. However, the performances of these algorithms in racially diverse groups have never been objectively evaluated and compared. In this literature-based study, we compared the performances of eight machine learning techniques with those of MLR in a large, racially diverse cohort. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied in warfarin dose algorithms in a cohort from the International Warfarin Pharmacogenetics Consortium database. Covariates obtained by stepwise regression from 80% of randomly selected patients were used to develop the algorithms. To compare the performances of these algorithms, the mean percentage of patients whose predicted dose fell within 20% of the actual dose (mean percentage within 20%) and the mean absolute error (MAE) were calculated in the remaining 20% of patients. The performances of these techniques in different races, as well as across the dose ranges of therapeutic warfarin, were compared. Robust results were obtained after 100 rounds of resampling. BART, MARS and SVR were statistically indistinguishable and significantly outperformed all the other approaches in the whole cohort (MAE: 8.84-8.96 mg/week, mean percentage within 20%: 45.88%-46.35%). In the White population, MARS and BART showed a higher mean percentage within 20% and a lower MAE than those of MLR (all p values < 0.05). In the Asian population, SVR, BART, MARS and LAR performed the same as MLR. MLR and LAR performed optimally in the Black population. When patients were grouped in terms of warfarin dose range, all machine learning techniques except ANN and LAR showed a significantly higher mean percentage within
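The two evaluation metrics used above are easy to make concrete. A minimal sketch with invented dose values (mg/week), not data from the study:

```python
# Sketch of the two comparison metrics described above, with invented
# dose values (mg/week); not data from the warfarin study.

def mae(predicted, actual):
    """Mean absolute error of the dose predictions."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def pct_within_20(predicted, actual):
    """Percentage of patients whose prediction is within 20% of the actual dose."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= 0.2 * a)
    return 100.0 * hits / len(actual)

pred = [30.0, 50.0, 21.0, 70.0]
true = [28.0, 35.0, 20.0, 80.0]
print(mae(pred, true))            # 7.0 mg/week average error
print(pct_within_20(pred, true))  # 75.0: three of four within the band
```

Both metrics would be averaged over the 100 resampling rounds when comparing algorithms, as in the study design described above.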

Identification of point correspondences between shapes is required for statistical analysis of differences in organ shapes. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) were then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory.
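The idea of correspondence probabilities can be sketched with a Gaussian soft assignment between two point sets, loosely in the spirit of the EM-ICP step above. The kernel width and point coordinates are our own illustrative choices, not values from the paper:

```python
# Minimal sketch of correspondence *probabilities* between two point
# sets: each point of one set is softly assigned to all points of the
# other via a Gaussian kernel, with rows normalized to sum to 1.

import math

def correspondence_probs(src, dst, sigma=1.0):
    probs = []
    for x in src:
        w = [math.exp(-((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)
                      / (2 * sigma ** 2))
             for y in dst]
        s = sum(w)
        probs.append([v / s for v in w])   # row-stochastic: probabilities
    return probs

src = [(0.0, 0.0), (1.0, 0.0)]
dst = [(0.1, 0.0), (0.9, 0.0), (5.0, 5.0)]
P = correspondence_probs(src, dst)
# Each source point gets a probability distribution over target points
# instead of a single hard one-to-one match.
print(P[0])
```

Unlike hard ICP matching, every source point distributes probability mass over all target points; a mean-shape step can then weight target points by these probabilities.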

Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

We study the question of how to represent or summarize raw laboratory data taken from an electronic health record (EHR) using parametric model selection to reduce or cope with biases induced through clinical care. It has been previously demonstrated that the health care process (Hripcsak and Albers, 2012, 2013), as defined by measurement context (Hripcsak and Albers, 2013; Albers et al., 2012) and measurement patterns (Albers and Hripcsak, 2010, 2012), can influence how EHR data are distributed statistically (Kohane and Weber, 2013; Pivovarov et al., 2014). We construct an algorithm, PopKLD, which is based on information criterion model selection (Burnham and Anderson, 2002; Claeskens and Hjort, 2008), is intended to reduce and cope with health care process biases and to produce an intuitively understandable continuous summary. The PopKLD algorithm can be automated and is designed to be applicable in high-throughput settings; for example, the output of the PopKLD algorithm can be used as input for phenotyping algorithms. Moreover, we develop the PopKLD-CAT algorithm that transforms the continuous PopKLD summary into a categorical summary useful for applications that require categorical data such as topic modeling. We evaluate our methodology in two ways. First, we apply the method to laboratory data collected in two different health care contexts, primary versus intensive care. We show that the PopKLD preserves known physiologic features in the data that are lost when summarizing the data using more common laboratory data summaries such as mean and standard deviation. Second, for three disease-laboratory measurement pairs, we perform a phenotyping task: we use the PopKLD and PopKLD-CAT algorithms to define high and low values of the laboratory variable that are used for defining a disease state. We then compare the relationship between the PopKLD-CAT summary disease predictions and the same predictions using empirically estimated mean and standard deviation to a
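The information-criterion model selection at the heart of a PopKLD-style summary can be sketched as follows. This is a hedged illustration under our own assumptions: we use AIC over just two candidate families (normal and lognormal) with synthetic data; the actual PopKLD candidate set and criterion may differ.

```python
# Hedged sketch of information-criterion model selection in the spirit
# of PopKLD: among candidate parametric families, keep the one with the
# lowest AIC for a batch of laboratory values. The candidate set here
# (normal vs. lognormal) and the synthetic data are our own choices.

import math, random

def aic_normal(x):
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1)  # Gaussian MLE
    return 2 * 2 - 2 * loglik                              # k = 2 params

def aic_lognormal(x):
    logs = [math.log(v) for v in x]
    # lognormal log-likelihood = normal log-likelihood of log(x) - sum(log x)
    return aic_normal(logs) + 2 * sum(logs)

random.seed(0)
data = [random.lognormvariate(0.0, 1.0) for _ in range(500)]  # skewed labs
scores = {"normal": aic_normal(data), "lognormal": aic_lognormal(data)}
best = min(scores, key=scores.get)
print(best)   # the skewed sample should favor the lognormal family
```

The selected family's fitted parameters then serve as the continuous summary of the laboratory variable; a PopKLD-CAT-style step would further discretize that summary into categories.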

Sea ice export through Fram Strait is a key component of the Arctic climate system. The East Greenland Current (EGC) carries most of the sea ice southwards until it melts. Lagrangian methods using sea ice buoys have been used to map ice features in polar regions; however, their spatial and temporal coverage is limited. Satellite data can provide a better tool to map sea ice flow and its variability. Here, an automated sea ice floe detection algorithm uses ice floes as tracers for surface ocean currents. We process Moderate Resolution Imaging Spectroradiometer satellite images to track ice floes (length scale 5-10 km) in the north-eastern Greenland Sea region. Our MATLAB-based routines effectively filter out clouds and adaptively modify the images to segment and identify ice floes. Ice floes were tracked based on persistent surface features common in successive images throughout 2016. Their daily centroid locations were extracted, and the resulting trajectories are used to describe surface circulation and its variability using differential kinematic parameters. We will discuss the application of this method to a longer time series and larger spatial coverage. This enables us to derive the inter-annual variability of mesoscale features along the eastern coast of Greenland. Supported by a UCR Mechanical Engineering Departmental Fellowship.

Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate pre-operative localization of the targets on pre-operative MRI sequences is challenging, as these are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the location of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best result for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we have qualitatively and quantitatively compared the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes, using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study leads to a total of 1800 deformable registrations, and the results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads with marginal differences in their shape. Among the six methods studied, Diffeomorphic Demons produces the largest spreads and centroids that are, in general, the furthest apart from the others. Among the three atlases, one atlas consistently outperforms the other two with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant, across different algorithms or across different atlases.

Current methods for identification of potential triplex-forming sequences in genomes and similar sequence sets rely primarily on detecting homopurine and homopyrimidine tracts. Procedures capable of detecting sequences supporting imperfect, but structurally feasible intramolecular triplex structures are needed for better sequence analysis. We modified an algorithm for detection of approximate palindromes, so as to account for the special nature of triplex DNA structures. From available literature, we conclude that approximate triplexes tolerate two classes of errors. One, analogical to mismatches in duplex DNA, involves nucleotides in triplets that do not readily form Hoogsteen bonds. The other class involves geometrically incompatible neighboring triplets hindering proper alignment of strands for optimal hydrogen bonding and stacking. We tested the statistical properties of the algorithm, as well as its correctness when confronted with known triplex sequences. The proposed algorithm satisfactorily detects sequences with intramolecular triplex-forming potential. Its complexity is directly comparable to palindrome searching. Our implementation of the algorithm is available at http://www.fi.muni.cz/lexa/triplex as source code and a web-based search tool. The source code compiles into a library providing searching capability to other programs, as well as into a stand-alone command-line application based on this library. lexa@fi.muni.cz Supplementary data are available at Bioinformatics online.
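As a rough illustration of the approximate-symmetry search described above (our own sketch, not the authors' implementation): intramolecular triplexes arise at approximate mirror repeats, so a toy detector can scan windows for mirror symmetry while tolerating a bounded number of mismatched positions. Window size, tolerance, and the sequence are invented; real triplex detection additionally scores Hoogsteen-bond compatibility and neighboring-triplet geometry, which this sketch omits.

```python
# Toy mirror-repeat scan, loosely analogous to approximate-palindrome
# search. A window is reported if it differs from its own reversal in
# at most max_mismatch symmetric position pairs.

def mirror_mismatches(seq):
    """Number of symmetric position pairs at which seq differs from its reverse."""
    return sum(1 for a, b in zip(seq, reversed(seq)) if a != b) // 2

def find_mirror_repeats(dna, window=10, max_mismatch=1):
    hits = []
    for i in range(len(dna) - window + 1):
        w = dna[i:i + window]
        if mirror_mismatches(w) <= max_mismatch:
            hits.append((i, w))
    return hits

# 'GAAAGGGAAAG' reads the same backwards: a perfect mirror repeat in a
# homopurine tract, the classical triplex-prone motif.
print(find_mirror_repeats("TTTGAAAGGGAAAGTTT", window=11, max_mismatch=1))
```

The mismatch budget plays the role of the first error class discussed above (triplets that do not readily form Hoogsteen bonds); handling the second class, geometrically incompatible neighboring triplets, requires the fuller scoring the paper describes.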

Full Text Available Control charts have been widely utilized for monitoring process variation in numerous applications. Abnormal patterns exhibited by control charts imply certain potentially assignable causes that may deteriorate the process performance. Most of the previous studies are concerned with the recognition of single abnormal control chart patterns (CCPs). This paper introduces an intelligent hybrid model for recognizing mixture CCPs that includes three main aspects: feature extraction, classifier, and parameter optimization. In the feature extraction, statistical and shape features of the observation data are used as the data input to obtain effective data for the classifier. A multiclass support vector machine (MSVM) is applied to recognize the mixture CCPs. Finally, a genetic algorithm (GA) is utilized to optimize the MSVM classifier by searching for the best values of the parameters of the MSVM and its kernel function. The performance of the hybrid approach is evaluated by simulation experiments, and simulation results demonstrate that the proposed approach is able to effectively recognize mixture CCPs.
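The feature-extraction step can be illustrated with a toy sketch under our own assumptions (this is not the paper's exact feature set): simple statistical features (mean, standard deviation) plus shape features (least-squares slope, mean-crossing count) computed from a window of chart observations.

```python
# Illustrative statistical + shape features from a window of control
# chart observations; the feature set is our own simplification.

def chart_features(x):
    n = len(x)
    mean = sum(x) / n
    std = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    # least-squares slope against the sample index 0..n-1
    tbar = (n - 1) / 2
    slope = (sum((t - tbar) * (v - mean) for t, v in enumerate(x))
             / sum((t - tbar) ** 2 for t in range(n)))
    # how often consecutive samples straddle the mean
    crossings = sum(1 for a, b in zip(x, x[1:])
                    if (a - mean) * (b - mean) < 0)
    return {"mean": mean, "std": std, "slope": slope, "crossings": crossings}

# An upward-trend pattern: strong positive slope, few mean crossings.
print(chart_features([1.0, 2.0, 3.0, 4.0, 5.0]))
```

Feature vectors of this kind would feed the MSVM classifier, whose parameters and kernel the GA then tunes.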

One of the main challenges when working with modern climate model ensembles is the increasingly large size of the data produced, and the consequent difficulty in storing large amounts of spatio-temporally resolved information. Many compression algorithms can be used to mitigate this problem, but since they are designed to compress generic scientific datasets, they do not account for the nature of climate model output and they compress only individual simulations. In this work, we propose a different, statistics-based approach that explicitly accounts for the space-time dependence of the data for annual global three-dimensional temperature fields in an initial condition ensemble. The set of estimated parameters is small (compared to the data size) and can be regarded as a summary of the essential structure of the ensemble output; therefore, it can be used to instantaneously reproduce the temperature fields in an ensemble with a substantial saving in storage and time. The statistical model exploits the gridded geometry of the data and parallelization across processors. It is therefore computationally convenient and allows fitting a nontrivial model to a dataset of 1 billion data points with a covariance matrix comprising 10^18 entries. Supplementary materials for this article are available online.

Full Text Available Abstract Background The development of effective environmental shotgun sequence binning methods remains an ongoing challenge in algorithmic analysis of metagenomic data. While previous methods have focused primarily on supervised learning involving extrinsic data, a first-principles statistical model combined with a self-training fitting method has not yet been developed. Results We derive an unsupervised, maximum-likelihood formalism for clustering short sequences by their taxonomic origin on the basis of their k-mer distributions. The formalism is implemented using a Markov Chain Monte Carlo approach in a k-mer feature space. We introduce a space transformation that reduces the dimensionality of the feature space and a genomic fragment divergence measure that strongly correlates with the method's performance. Pairwise analysis of over 1000 completely sequenced genomes reveals that the vast majority of genomes have sufficient genomic fragment divergence to be amenable for binning using the present formalism. Using a high-performance implementation, the binner is able to classify fragments as short as 400 nt with accuracy over 90% in simulations of low-complexity communities of 2 to 10 species, given sufficient genomic fragment divergence. The method is available as an open source package called LikelyBin. Conclusion An unsupervised binning method based on statistical signatures of short environmental sequences is a viable stand-alone binning method for low complexity samples. For medium and high complexity samples, we discuss the possibility of combining the current method with other methods as part of an iterative process to enhance the resolving power of sorting reads into taxonomic and/or functional bins.
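The k-mer feature space mentioned above can be illustrated with a minimal sketch (our own, not LikelyBin code): each fragment maps to a normalized vector of k-mer frequencies, shown here for k = 2. The actual feature space and its dimensionality-reducing transform are more elaborate.

```python
# Map a sequence fragment to a normalized dinucleotide-frequency vector,
# a minimal stand-in for the k-mer feature space described above.

from itertools import product

def kmer_vector(seq, k=2):
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        w = seq[i:i + k]
        if w in counts:          # skip windows with ambiguous bases
            counts[w] += 1
    total = sum(counts.values()) or 1
    return [counts[km] / total for km in kmers]   # normalized frequencies

v = kmer_vector("ACGTACGTACGT", k=2)
print(len(v))    # 16 dinucleotide dimensions
print(sum(v))    # frequencies sum to 1
```

Fragments from taxa with sufficiently divergent k-mer statistics land in separable regions of this space, which is what makes the maximum-likelihood clustering above feasible.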

Hail statistics in Western Europe based on a hybrid cell-tracking algorithm combining radar signals with hailstone observations. Elody Fluck¹, Michael Kunz¹, Peter Geissbühler², Stefan P. Ritz². With hail damage for a single event estimated in the billions of euros (e.g., hailstorm Andreas on 27/28 July 2013), hail constitutes one of the major atmospheric risks in various parts of Europe. The project HAMLET (Hail Model for Europe), in cooperation with the insurance company Tokio Millennium Re, aims at estimating hail probability, hail hazard and, combined with vulnerability, hail risk for several European countries (Germany, Switzerland, France, Netherlands, Austria, Belgium and Luxembourg). Hail signals are obtained from radar reflectivity, since this proxy is available with a high temporal and spatial resolution. The focus in the first step is on Germany and France for the periods 2005-2013 and 1999-2013, respectively. In the next step, the methods will be transferred and extended to other regions. The cell-tracking algorithm TRACE2D was adjusted and applied to two-dimensional radar reflectivity data from different radars operated by European weather services such as the German Weather Service (DWD) and the French weather service Météo-France. Strong convective cells are detected by considering 3 connected pixels over 45 dBZ (reflectivity cores, RCs) in a radar scan. Afterwards, the algorithm tries to find the same RCs in the next 5-minute radar scan and thus tracks the RC centers over time and space. Additional information about hailstone diameters provided by the ESWD (European Severe Weather Database) is used to determine the hail intensity of the detected hail swaths. Maximum hailstone diameters are interpolated along and close to the individual hail tracks, giving an estimation of mean diameters for the detected hail swaths. Furthermore, a stochastic event set is created by randomizing the parameters obtained from the
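The reflectivity-core criterion described above (3 connected pixels over 45 dBZ) can be sketched with a simple flood fill. This stands in for the TRACE2D internals, which we do not reproduce here, and the reflectivity field is invented for illustration.

```python
# Illustrative sketch of the reflectivity-core (RC) criterion: threshold
# a 2-D reflectivity field at 45 dBZ and keep 4-connected regions of at
# least 3 pixels. Not the TRACE2D implementation.

def find_rcs(field, thresh=45.0, min_pixels=3):
    rows, cols = len(field), len(field[0])
    seen, cores = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or field[r][c] < thresh:
                continue
            stack, comp = [(r, c)], []
            seen.add((r, c))
            while stack:                      # 4-connected flood fill
                y, x = stack.pop()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and field[ny][nx] >= thresh):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            if len(comp) >= min_pixels:
                cores.append(sorted(comp))
    return cores

refl = [[30, 48, 47, 20],
        [20, 50, 30, 20],
        [20, 20, 20, 46]]    # dBZ: one 3-pixel core, one isolated pixel
print(find_rcs(refl))
```

Tracking then amounts to re-identifying each core's centroid in the next 5-minute scan, which is the step TRACE2D performs across consecutive radar images.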

Full Text Available Genome-wide gene expression profiling has become standard for assessing potential liabilities as well as for elucidating mechanisms of toxicity of drug candidates under development. Analysis of microarray data is often challenging due to the lack of a statistical model that is amenable to biological variation in a small number of samples. Here we present a novel non-parametric algorithm that requires minimal assumptions about the data distribution. Our method for determining differential expression consists of two steps: (1) we apply a nominal threshold on fold change and platform p-value to designate whether a gene is differentially expressed in each treated and control sample relative to the averaged control pool, and (2) we compare the number of samples satisfying the criteria in step 1 between the treated and control groups to estimate the statistical significance based on a null distribution established by sample permutations. The method captures the group effect without being too sensitive to anomalies, as it allows tolerance for potential non-responders in the treatment group and outliers in the control group. Performance and results of this method were compared with the Significance Analysis of Microarrays (SAM) method. These two methods were applied to investigate hepatic transcriptional responses of wild-type (PXR(+/+)) and pregnane X receptor-knockout (PXR(-/-)) mice after 96 h exposure to CMP013, an inhibitor of β-secretase (β-site of amyloid precursor protein cleaving enzyme 1, or BACE1). Our results showed that CMP013 led to transcriptional changes in hallmark PXR-regulated genes and induced a cascade of gene expression changes that explained the hepatomegaly observed only in PXR(+/+) animals. Comparison of concordant expression changes between PXR(+/+) and PXR(-/-) mice also suggested a PXR-independent association between CMP013 and perturbations to cellular stress, lipid metabolism and biliary transport.
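The two-step logic, a per-sample fold-change call followed by a permutation null on the count of calls, can be sketched as follows. This is a simplified stand-in for the published method: the platform p-value criterion of step 1 is omitted, and the 1.5-fold threshold is an assumed example value, not one taken from the paper.

```python
import random

def is_responder(value, control_pool_mean, fc_thresh=1.5):
    """Step 1: nominal fold-change call against the averaged control pool
    (the platform p-value criterion of the full method is omitted here)."""
    fc = value / control_pool_mean
    return fc >= fc_thresh or fc <= 1.0 / fc_thresh

def permutation_p_value(treated, control, n_perm=2000, seed=0):
    """Step 2: significance of the excess of responder calls in the
    treated group, under a null built by permuting the group labels."""
    pool_mean = sum(control) / len(control)
    calls = [is_responder(v, pool_mean) for v in treated + control]
    n_t = len(treated)
    observed = sum(calls[:n_t]) - sum(calls[n_t:])
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(calls)
        if sum(calls[:n_t]) - sum(calls[n_t:]) >= observed:
            hits += 1
    return hits / n_perm
```

Because the statistic is a count of per-sample calls rather than a group mean, one or two non-responders in the treated group shift the count by only one, which is the tolerance property the abstract describes.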

Full Text Available In many cases, an $\overline{X}$ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using the surrogate. In this paper, we present a model for the economic statistical design of a VSI (variable sampling interval) $\overline{X}$ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on the performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts with single or multiple assignable causes, such as the VSS (variable sample size) $\overline{X}$ control chart and EWMA and CUSUM charts.
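The VSI policy itself is simple: sample again soon when the plotted statistic is near the control limits, and relax the interval when it is near the center line. A minimal sketch, in which the warning/control limits and the two interval lengths are illustrative choices rather than the paper's optimized design parameters:

```python
def next_sampling_interval(z, warning=1.0, control=3.0,
                           h_long=2.0, h_short=0.5):
    """VSI rule for a standardized sample mean `z`:
    inside the warning limits  -> relaxed (long) interval,
    between warning and control limits -> tightened (short) interval,
    beyond the control limits  -> out-of-control alarm (None).
    Intervals are in hours; all limits here are illustrative."""
    if abs(z) > control:
        return None  # signal: stop sampling and search for the assignable cause
    return h_long if abs(z) <= warning else h_short
```

In the paper's design, these limits and intervals (and the sample size) are the decision variables a genetic algorithm tunes to maximize expected net income, with `z` computed from the surrogate rather than the performance variable.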

This book contains a collection of the presentations that were given in October 2013 at the Les Houches Autumn School on statistical physics, optimization, inference, and message-passing algorithms. In the last decade, there has been increasing convergence of interest and methods between theoretical physics and fields as diverse as probability, machine learning, optimization, and inference problems. In particular, much theoretical and applied work in statistical physics and computer science has relied on the use of message-passing algorithms and their connection to the statistical physics of glasses and spin glasses. For example, both the replica and cavity methods have led to recent advances in compressed sensing, sparse estimation, and random constraint satisfaction, to name a few. This book’s detailed pedagogical lectures on statistical inference, computational complexity, the replica and cavity methods, and belief propagation are aimed particularly at PhD students, post-docs, and young researchers desir...

The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed.
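The conditional path simulation the sampler needs can be illustrated with a naive rejection scheme: simulate forward from the start state and keep the first path that lands in the required end state. This is only a stand-in for the exact path sampling algorithm the abstract refers to (rejection becomes inefficient when the endpoint is unlikely), and it ignores the neighbor dependence.

```python
import random

def sample_ctmc_path(Q, start, end, T, rng, max_tries=10000):
    """Endpoint-conditioned CTMC path by naive rejection: simulate forward
    from `start` with rate matrix Q (rows sum to zero) and keep the first
    path found in state `end` at time T. Returns [(jump_time, state), ...]."""
    n = len(Q)
    for _ in range(max_tries):
        t, s, path = 0.0, start, [(0.0, start)]
        while True:
            rate = -Q[s][s]                     # total exit rate of state s
            t += rng.expovariate(rate) if rate > 0 else float("inf")
            if t >= T:
                break
            # choose the next state proportional to the off-diagonal rates
            r = rng.random() * rate
            for j in range(n):
                if j != s:
                    r -= Q[s][j]
                    if r <= 0:
                        s = j
                        break
            path.append((t, s))
        if s == end:
            return path
    raise RuntimeError("rejection sampler failed to hit the end state")
```

An exact (non-rejection) sampler, such as uniformization conditioned on the endpoints, avoids discarding paths; the rejection version above is merely the easiest way to see what "conditional on the beginning and ending states" means.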

Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

2014-04-15

To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements from each dataset were compared: total lung volume, emphysema index (EI), airway measurements of lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms produced different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
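The emphysema index compared across the three reconstructions is a simple density-mask statistic: the percentage of lung voxels with attenuation below the threshold. A minimal sketch of that computation (the study's actual segmentation and software pipeline is not shown):

```python
def emphysema_index(lung_hu, threshold=-950):
    """Emphysema index: percentage of segmented lung voxels whose CT
    attenuation (in Hounsfield units) falls below `threshold`,
    here the -950 HU used in the study."""
    low = sum(1 for v in lung_hu if v < threshold)
    return 100.0 * low / len(lung_hu)
```

Because IR algorithms smooth noise differently, the same lung yields fewer voxels below -950 HU under MBIR than under FBP, which is exactly the EI ordering the study reports.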

Several metrological applications base their measurement principle on the sum or difference of the phases of two patterns, one original s(r,φ) and another modified t(r,φ+Δφ). Additive or differential phase shifting algorithms directly recover the sum 2φ+Δφ or the difference Δφ of the phases without requiring prior calculation of the individual phases. These algorithms can be constructed, for example, from a suitable combination of known phase shifting algorithms. Little has been written on the design, analysis and error compensation of these new two-stage algorithms. Previously, we have used computer simulation to study, in a linear approach or with a filter process in reciprocal space, the response of several families of them to the main error sources. In this work we present an error analysis that uses Monte Carlo simulation to achieve results in good agreement with those obtained with spatial and temporal methods.
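As a concrete building block, the classical four-step algorithm recovers a phase from four frames shifted by π/2; combining two such recoveries gives a (naive) differential scheme. This sketch illustrates the principle only, not the authors' two-stage algorithms or their Monte Carlo error analysis:

```python
from math import atan2, cos, pi

def four_step_phase(I):
    """Recover phi from four frames I_k = A + B*cos(phi + k*pi/2):
    I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi),
    so phi = atan2(I4 - I2, I1 - I3)."""
    i1, i2, i3, i4 = I
    return atan2(i4 - i2, i1 - i3)

def frames(phi, a=1.0, b=0.5):
    """Synthesize the four pi/2-shifted intensity frames for a known phase."""
    return [a + b * cos(phi + k * pi / 2) for k in range(4)]
```

A differential algorithm in the paper's sense recovers Δφ directly from combined frames of s and t; subtracting two independently recovered phases, as below, is the two-pass alternative it avoids.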

Shorn of all subtlety and led naked out of the protec­ tive fold of educational research literature, there comes a sheepish little fact: lectures don't work nearly as well as many of us would like to think. -George Cobb (1992) This book contains activities that guide students to discover statistical concepts, explore statistical principles, and apply statistical techniques. Students work toward these goals through the analysis of genuine data and through inter­ action with one another, with their instructor, and with technology. Providing a one-semester introduction to fundamental ideas of statistics for college and advanced high school students, Warkshop Statistics is designed for courses that employ an interactive learning environment by replacing lectures with hands­ on activities. The text contains enough expository material to standalone, but it can also be used to supplement a more traditional textbook. Some distinguishing features of Workshop Statistics are its emphases on active learning, conceptu...

Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactive...

Mislevy (1984, 1985) introduced an EM algorithm for estimating the parameters of a latent distribution model that is used extensively by the National Assessment of Educational Progress. Second order asymptotic corrections are derived and applied along with more common first order asymptotic corrections to approximate the expectations required by…

This presentation provides an overview of open-source software packages addressing two challenging classes of astrostatistics problems. (1) CUDAHM is a C++ framework for hierarchical Bayesian modeling of cosmic populations, leveraging graphics processing units (GPUs) to enable applying this computationally challenging paradigm to large datasets. CUDAHM is motivated by measurement error problems in astronomy, where density estimation and linear and nonlinear regression must be addressed for populations of thousands to millions of objects whose features are measured with possibly complex uncertainties, potentially including selection effects. An example calculation demonstrates accurate GPU-accelerated luminosity function estimation for simulated populations of $10^6$ objects in about two hours using a single NVIDIA Tesla K40c GPU. (2) Time Series Explorer (TSE) is a collection of software in Python and MATLAB for exploratory analysis and statistical modeling of astronomical time series. It comprises a library of stand-alone functions and classes, as well as an application environment for interactive exploration of time series data. The presentation will summarize key capabilities of this emerging project, including new algorithms for analysis of irregularly-sampled time series.

U.S. Department of Health & Human Services — This is a Public Use File for Prescription Drug Events drawn from the 2008 Beneficiary Summary File of Medicare beneficiaries enrolled during the calendar year 2008,...

In October 1988, Congress passed the Defense Authorization Amendments and Base Closure and Realignment Act, Public Law 100-526. This legislation provided the framework for making decisions about military base closures and realignments. The overall objective of the legislation was to close and realign bases to maximize savings without impairing the Army's overall military mission. In December 1988, the Defense Secretary's ad hoc Commission on Base Realignment and Closure issued its final report nominating candidate installations. Among the installations affected by the commission's recommendations, which were subsequently approved by Congress, were 53 military housing areas. The paper recounts the process by which each of the 53 housing areas was assessed to determine if any environmental restoration was to be performed prior to property sale. Each phase of the program is addressed from the Enhanced Preliminary Assessment through the issuance of a 'Statement of Condition' which, in accordance with Army real property regulations, is required prior to property transfer. The unique challenges presented by the housing areas are highlighted, where materials such as asbestos, radon and fuel oil predominated, as opposed to laboratory chemicals and hazardous waste from industrial operations. In addition, attention is given to the myriad disciplines which interface in preparing a housing area for closure. Forty-three Statements of Condition have been issued to date, with the remainder to be prepared upon completion of remediation activities at each site

It is essential for people conducting fishing, leisure, or research activities at the coasts to have timely and handy tidal information. Although tidal information can be found easily on the internet or using mobile device applications, this information is applicable only to certain specific locations, not anywhere on the coast, and it requires an internet connection. We have developed an application for Android devices, which allows the user to obtain hourly tidal height anywhere on the coast for the next 24 hours without an internet connection. All the information needed for the tidal height calculation is stored in the application. To develop this application, we first simulate tides in the Taiwan Sea using the hydrodynamic model (MIKE21 HD) developed by the DHI. The simulation domain covers the whole coast of Taiwan and the surrounding seas with a grid size of 1 km by 1 km. This grid size allows us to calculate tides with high spatial resolution. The boundary conditions for the simulation domain were obtained from the Tidal Model Driver of Oregon State University, using its tidal constants of eight constituents: M2, S2, N2, K2, K1, O1, P1, and Q1. The simulation calculates tides for 183 days so that the tidal constants for the above eight constituents of each water grid can be extracted by harmonic analysis. Using the calculated tidal constants, we can predict the tides in each grid of our simulation domain, which is useful when one needs the tidal information for any location in the Taiwan Sea. However, for the mobile application, we only store the eight tidal constants for the water grids on the coast. Once the user activates the application, it reads the longitude and latitude from the GPS sensor in the mobile device and finds the nearest coastal grid which has our tidal constants. Then, the application calculates the tidal height variation based on harmonic analysis.
The application also allows the user to input location and time to obtain tides for any historic or future dates for the input location. The predicted tides have been verified with the historic tidal records of certain tidal stations. The verification shows that the tides predicted by the application match the measured record well.
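The prediction step described above is harmonic synthesis: each stored constituent contributes a cosine of known angular speed, with a per-grid amplitude and phase lag. A minimal sketch follows; the angular speeds are the standard values for these eight constituents (rounded), while the amplitude and phase values in the test are placeholders, not the app's stored constants.

```python
from math import cos, radians

# Standard angular speeds of the eight constituents (degrees per hour, rounded).
SPEEDS = {"M2": 28.984, "S2": 30.000, "N2": 28.440, "K2": 30.082,
          "K1": 15.041, "O1": 13.943, "P1": 14.959, "Q1": 13.399}

def tidal_height(constants, hours, mean_level=0.0):
    """Harmonic synthesis: h(t) = z0 + sum_i A_i * cos(omega_i * t - g_i).
    `constants` maps constituent name -> (amplitude in m, phase lag in deg);
    `hours` is time elapsed since the phase reference epoch."""
    h = mean_level
    for name, (amp, phase) in constants.items():
        h += amp * cos(radians(SPEEDS[name] * hours - phase))
    return h

def next_24h(constants, mean_level=0.0):
    """Hourly tidal heights for the next 24 hours, as the app reports."""
    return [tidal_height(constants, t, mean_level) for t in range(24)]
```

In the app, `constants` would come from the nearest coastal grid found via the GPS fix; astronomical nodal corrections, which real tide tables apply, are omitted here for brevity.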

International Journal of Engineering, Science and Technology, Vol 4, No 1 (2012).

This paper presents the performance evaluation of a standalone hybrid Photovoltaic (PV)-Wind generator system at the Faculty of Electrical Engineering (FKE), UTeM. The hybrid PV-Wind system at UTeM combines a wind turbine with a solar system, and the energy generated is used to charge the battery and supply the LED street lighting load. The purpose of this project is to evaluate the performance of the PV-Wind hybrid generator. A solar radiation meter was used to measure solar radiation and an anemometer was used to measure wind speed. The effectiveness of the PV-Wind system was assessed based on the various data that were collected and compared. The results show that the hybrid system has greater reliability. For the solar subsystem, the correlation coefficient shows a strong relationship between radiation and current: the output current follows the fluctuations of solar radiation. However, the correlation coefficient shows only a moderate relationship between wind speed and voltage. Hence, the wind turbine at FKE does not operate consistently enough to produce energy for this hybrid system compared to the PV subsystem. When the wind system does not fully operate due to an inconsistent energy source, the PV system operates and supplies the load to balance the extra load demand.
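The strong/moderate relationships the study reports are Pearson correlation coefficients between the measured variable pairs (radiation vs. current, wind speed vs. voltage). A stdlib sketch of that computation, with made-up sample readings in the usage below:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near +1 (radiation vs. PV current) indicates the strong relationship reported for the solar subsystem; a mid-range value corresponds to the moderate wind-speed/voltage relationship.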

Photosynthetic bacteria and green algae photoproduce H2, but do so utilizing different catalysts and substrates. Green algae use reductant generated mostly by water oxidation to catalyze the reduction of protons to H2 gas, while photosynthetic bacteria catalyze H2 production from organic acids using the nitrogenase enzyme. Moreover, these two organisms utilize different regions of the solar spectrum to perform photosynthesis: green algae's light-harvesting antenna is comprised of chlorophyll molecules that absorb mostly blue and red light; photosynthetic bacteria harvest blue and far-red light through their light-harvesting pigments to run their non-oxygenic photosynthetic reactions. There is thus an opportunity to increase the range of the solar spectrum used to photoproduce H2 by combining the light-harvesting and catalytic properties of these two organisms in a single process. In the current manuscript, we describe an experimental system that validates this hypothesis and demonstrates quantitatively the advantages of a two-organism process for producing higher amounts of H2, thus achieving higher solar conversion efficiencies.

This article describes the activities of film director Shireen Pasha in promoting truth in the mass media in Pakistan. Pasha is described as one who finds it inexcusable in a state-subsidized system that national problems of poverty are not aired openly. Pasha has pursued the goal of exposing the real lives of Pakistanis on film in contrast to the publicly aired segments of "pretty girls in nice drawing rooms." Foreign channels available through satellite communications technology are viewed by Pasha as inappropriate with regard to people's needs and uncreative. Pakistan began with one channel, PTV, which recently refused to air her documentary on living conditions in Pakistan's rural areas "The Travelogue Pakistan." "The Walled City of Lahore" was her film about life in the old city. Both films poetically depicted the honor of humans and their struggle to stay alive. Some of her documentaries are made to show the value of indigenous skills, centuries old know-how, and traditions, regardless of the poverty. Pasha is described as fighting with PTV management over use of resources. Pasha desires to invest in training people to do documentaries or be more field-oriented rather than investing in equipment. Pasha joined PTV in 1975 and left in 1990. Pasha is recognized for her isolation as a woman in the business world, her commitment to exposing remote cultures and truth, and the odds she must confront in attaining her goals. Pasha is committed to doing extensive research, usually conducted during the summer months, in order to construct a credible story line that is produced usually during the winter months. One model of film story line is defined as one where women are portrayed as starting from an indigenous skill or knowledge and shifting to a greater position of power and control over their lives. Pasha believes that people who make films have the responsibility to evoke a reaction in people and to offer solutions. 
Two acclaimed films, which were supported by USAID and the government, were "Before It's Too Late" and "Only One Way." Both deal with resource issues and the environment. She is currently director of her own film house, The Film Makers, in Lahore. After graduating from the National College of Arts in 1968, she furthered her education in the US in the history of art.

At inception of the programme, systems were either undersized, or over utilised as clients did not reveal their usage needs in an attempt to reduce costs. This resulted in battery failures and other system breakdowns. The NPVREP categorised systems in order to alleviate the problem of under sizing and/or over utilization of ...

A modified hydrometeor classification algorithm (HCA) is developed in this study for Chinese polarimetric radars. The algorithm is based on the U.S. operational HCA, and a methodology of statistics-based optimization is proposed, including calibration checking, dataset selection, membership function modification, computation threshold modification, and effect verification. These procedures were applied to the Zhuhai radar, the first operational polarimetric radar in South China. The systematic calibration bias is corrected; analysis shows that the reliability of the radar measurements deteriorates when the signal-to-noise ratio is low and that the correlation coefficient within the melting layer is usually lower than that of the U.S. WSR-88D radar. Through modification based on statistical analysis of the polarimetric variables, a localized HCA specific to Zhuhai is obtained, and it performs well over a one-month test through comparison with sounding and surface observations. The algorithm is then utilized for analysis of a squall line process on 11 May 2014 and is found to provide reasonable details with respect to horizontal and vertical structures, and the HCA results, especially in the mixed rain-hail region, can reflect the life cycle of the squall line. In addition, the kinematic and microphysical processes of cloud evolution and the differences between radar-detected hail and surface observations are also analyzed. The results of this study provide evidence for the improvement of this HCA developed specifically for China.
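Fuzzy-logic HCAs of this family score each hydrometeor class by aggregating membership functions, typically trapezoidal, over the polarimetric variables; "membership function modification" in the abstract means refitting those breakpoints to local statistics. A minimal sketch follows; the breakpoints and weights in the usage are hypothetical, not values from the U.S. or Zhuhai algorithms.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, rising to 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def class_score(obs, memberships, weights):
    """Weighted aggregation of per-variable memberships for one hydrometeor
    class; the class with the highest score wins the pixel."""
    num = sum(weights[v] * trapezoid(obs[v], *memberships[v]) for v in obs)
    return num / sum(weights[v] for v in obs)
```

Statistics-based optimization, in these terms, shifts the (a, b, c, d) breakpoints per class and variable so that the scores agree with local sounding and surface observations.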

with additional weeds perpendicular to the maize rows. In total, 299 parcels were randomly assigned to the 28 different treatment combinations. In the statistical analysis, bootstrapping was used for balancing the number of replicates. The achieved potential herbicide savings were found to be 70% to 95% depending...

Full text: A new on-line digital signal processing (DSP) based Dynamic Compensation (DyCom) algorithm has been developed [Das 1999] to speed up the response time of slow-responding vanadium self-powered neutron detectors. The algorithm is based on the principles of direct inversion and on-line nonlinear (median) filtering. The algorithm was tested at different research and power reactors on a PC-based platform with encouraging results [Das 2001]. This paper discusses the stand-alone FPGA implementation of the DyCom algorithm, with experimental results obtained with prototype FPGA boards.
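The two principles named, direct inversion and median filtering, can be sketched for the common first-order model of a slow detector, y' = (x - y)/tau, whose inversion is x = y + tau*dy/dt. This is a generic illustration of the idea, not the published DyCom algorithm or its FPGA realization; the discretization and window length are our assumptions.

```python
def compensate(y, tau, dt, window=3):
    """Direct inversion of a first-order detector lag, x = y + tau*dy/dt
    (backward difference), followed by a moving-median filter to suppress
    the noise that differentiation amplifies. `tau` is the detector time
    constant and `dt` the sample period, in the same units."""
    raw = [y[0]] + [y[n] + tau * (y[n] - y[n - 1]) / dt
                    for n in range(1, len(y))]
    half = window // 2
    out = []
    for n in range(len(raw)):
        lo, hi = max(0, n - half), min(len(raw), n + half + 1)
        out.append(sorted(raw[lo:hi])[(hi - lo) // 2])  # window median
    return out
```

The median filter is the nonlinear stage: unlike a linear low-pass filter, it removes impulsive noise spikes without re-smearing the step response that the inversion just recovered.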

A new algorithm is presented for computing a canonical rank-R tensor approximation that has minimal distance to a given tensor in the Frobenius norm, where the canonical rank-R tensor consists of the sum of R rank-one components. Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process, for which we use alternating least squares (ALS). In the second step, an accelerated iterate is generated by a nonlinear g...
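The stand-alone ALS step mentioned above can be sketched for a 3-way tensor: each sweep fixes two factor matrices and solves a linear least-squares problem for the third via the Khatri-Rao product. This shows only the basic ALS iterate, not the paper's acceleration step, and the function names are ours.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product of (J x R) and (K x R) matrices."""
    R = B.shape[1]
    return np.stack([np.kron(B[:, r], C[:, r]) for r in range(R)], axis=1)

def cp_als(T, R, n_iter=100, seed=0):
    """Rank-R CP approximation T ~ sum_r a_r (outer) b_r (outer) c_r by
    alternating least squares: each sweep updates A, B, C in turn."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), T1.T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T2.T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T3.T, rcond=None)[0].T
    return A, B, C

def reconstruct(A, B, C):
    """Assemble the rank-R tensor from its factor matrices."""
    return np.einsum("ir,jr,kr->ijk", A, B, C)
```

Each ALS update is itself optimal for its factor, yet the overall iteration can stagnate ("swamps"), which is precisely why the paper wraps this one-step process inside an acceleration scheme.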

Full Text Available In recent years, multi-temporal imagery from spaceborne sensors has provided a fast and practical means for surveying and assessing changes in terrain surfaces. Owing to its all-weather imaging capability, polarimetric synthetic aperture radar (PolSAR) has become a key tool for change detection. Change detection methods include both unsupervised and supervised methods. Supervised change detection, which needs some human intervention, is generally ineffective and impractical. Due to this limitation, unsupervised methods are widely used in change detection. The traditional unsupervised methods use only a part of the polarization information, and the required thresholding algorithms are independent of the multi-temporal data, which results in change detection maps that are ineffective and inaccurate. To solve these problems, a novel method of change detection using a test statistic based on the likelihood ratio test and an improved Kittler and Illingworth (K&I) minimum-error thresholding algorithm is introduced in this paper. The test statistic is used to generate the comparison image (CI) of the multi-temporal PolSAR images, and the improved K&I, using a generalized Gaussian model, simulates the distribution of the CI. As a result of these advantages, we can obtain the change detection map using an optimum threshold. The efficiency of the proposed method is demonstrated by the use of multi-temporal PolSAR images acquired by RADARSAT-2 over Wuhan, China. The experimental results show that the proposed method is effective and highly accurate.
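The thresholding stage can be illustrated with the classical K&I criterion, which models the comparison-image histogram as a two-Gaussian mixture and picks the threshold minimizing a classification-error cost. Note this sketch is the original Gaussian version; the paper's improvement replaces the Gaussians with a generalized Gaussian model.

```python
import math

def kittler_illingworth(hist):
    """Classical Kittler-Illingworth minimum-error threshold for a
    histogram `hist` (counts per gray level). Returns the level t that
    minimizes J(t) = 1 + P1*ln(v1) + P2*ln(v2) - 2*(P1*ln P1 + P2*ln P2),
    where Pi, vi are the prior and variance of the class on each side."""
    L = len(hist)
    total = sum(hist)
    best_t, best_j = None, float("inf")
    for t in range(1, L - 1):
        w1 = sum(hist[:t + 1])
        w2 = total - w1
        if w1 == 0 or w2 == 0:
            continue
        m1 = sum(i * hist[i] for i in range(t + 1)) / w1
        m2 = sum(i * hist[i] for i in range(t + 1, L)) / w2
        v1 = sum(hist[i] * (i - m1) ** 2 for i in range(t + 1)) / w1
        v2 = sum(hist[i] * (i - m2) ** 2 for i in range(t + 1, L)) / w2
        if v1 <= 0 or v2 <= 0:
            continue  # degenerate class: no valid Gaussian fit
        p1, p2 = w1 / total, w2 / total
        j = (1 + p1 * math.log(v1) + p2 * math.log(v2)
             - 2 * (p1 * math.log(p1) + p2 * math.log(p2)))
        if j < best_j:
            best_j, best_t = j, t
    return best_t
```

Because the criterion is driven by the histogram of the actual comparison image, the threshold adapts to the multi-temporal data, which is the dependence the abstract says traditional fixed thresholds lack.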

In recent decades, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, and supervised image classification techniques play a central role in it. Hence, using a high-resolution WorldView-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, namely bagged CART, stochastic gradient boosting and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with linear kernel. To do so, each method was run ten times and three validation techniques were used to estimate the accuracy statistics, consisting of cross-validation, independent validation and validation with the total training data. Moreover, using ANOVA and the Tukey test, the statistical significance of the differences between the classification methods was assessed. In general, the results showed that random forest, with a marginal difference compared to bagged CART and stochastic gradient boosting, is the best performing method, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.

Raman microspectroscopy can be applied to the urinary bladder for highly accurate classification and diagnosis of bladder cancer. This technique can be applied in vitro to bladder epithelial cells obtained from urine cytology or in vivo as an "optical biopsy" to provide results in real time with higher sensitivity and specificity than current clinical methods. However, there exists a high degree of variability across experimental parameters which needs to be standardised before this technique can be utilized in an everyday clinical environment. In this study, we investigate different laser wavelengths (473 nm and 532 nm), sample substrates (glass, fused silica and calcium fluoride) and multivariate statistical methods in order to gain insight into how these various experimental parameters impact the sensitivity and specificity of Raman cytology.

Despite technological advances in seismic instrumentation, the assessment of the intensity of an earthquake using an observational scale as given, for example, by the modified Mercalli intensity scale is highly useful for practical purposes. In order to link the qualitative numbers extracted from the acceleration record of an earthquake and other instrumental data such as peak ground velocity, epicentral distance, and moment magnitude on the one hand and the modified Mercalli intensity scale on the other, simple statistical regression has been generally employed. In this paper, we will employ three methods of nonlinear regression, namely support vector regression, multilayer perceptrons, and genetic programming in order to find a functional dependence between the instrumental records and the modified Mercalli intensity scale. The proposed methods predict the intensity of an earthquake while dealing with nonlinearity and the noise inherent to the data. The nonlinear regressions with good estimation results have been performed using the "Did You Feel It?" database of the US Geological Survey and the database of the Center for Engineering Strong Motion Data for the California region.

Matrices began in the 2nd century BC with the Chinese, and one can find traces going back to the 4th century BC with the Babylonians. The text "Nine Chapters on the Mathematical Art", written during the Han Dynasty in China, gave the first known example of matrix methods. They were used to solve simultaneous linear equations (more in http://math.nie.edu.sg/bwjyeo/it/MathsOnline_AM/livemath/the/IT3AMMatricesHistory.html). The first ideas of maximum likelihood estimation (MLE) were introduced by Laplace (1749-1827) and by Gauss (1777-1855); the likelihood was defined by Thorvald Thiele (1838-1910). Why do we still use matrices? The matrix data format is more than 2200 years old, but our world is multi-dimensional! Why not introduce a more appropriate data format, and why not reformulate the MLE method for it? In this work we utilize low-rank tensor formats for multi-dimensional functions, which appear in spatial statistics.

The study of sea clutter characteristics is very important for radar signal processing, detection of targets on the sea surface, and remote sensing. The sea state is complex at low grazing angle (LGA), and simulation is difficult because of the large irradiated area and the great number of simulation facets. A practical and efficient model to obtain radar clutter of a dynamic sea under different sea conditions is proposed, based on the physical mechanism of interaction between electromagnetic waves and sea waves. The classical analysis method for sea clutter is based on amplitude and spectrum distributions, treating the clutter as a random process, which is ambiguous in its physical mechanism. To obtain the electromagnetic field from the sea surface, a modified phase from the facets is considered, and the backscattering coefficient is calculated by Wu's improved two-scale model, which can solve the statistical sea backscattering problem below a 5-degree grazing angle, accounting for the effects of the surface-slope joint probability density, the shadowing function, the skewness of sea waves, and the curvature of the surface on the backscattering from the ocean surface. We assume that the scattering contribution of each facet is independent, so the total field is the superposition of each facet in the receiving direction. Such data characteristics are well suited to computation on GPU threads, so we can make the best use of GPU resources. We achieved a speedup of 155-fold for S band and 162-fold for Ku/X band on the Tesla K80 GPU as compared with an Intel® Core™ CPU. In this paper we mainly study high-resolution data with millisecond time resolution, so we may have thousands of time points, and we analyze the amplitude probability density distribution of the radar clutter.

Networks have permeated everyday life through realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).

In this paper, we present an implementation of image file encryption using hybrid cryptography. We chose the ElGamal algorithm for the asymmetric encryption and Double Playfair for the symmetric encryption. Our objective is to show that these algorithms are capable of encrypting an image file with an acceptable running time and encrypted file size while maintaining the level of security. The application was built using the C# programming language and runs as a stand-alone desktop application under the Windows operating system. Our tests show that the system is capable of encrypting an image with a resolution of 500×500 to a size of 976 kilobytes with an acceptable running time.
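The asymmetric half of such a hybrid scheme can be sketched with a toy ElGamal implementation. The tiny prime and parameters below are purely illustrative (a real deployment uses large safe primes or elliptic curves), and in the hybrid design the encrypted message would be the symmetric (Double Playfair) key rather than the image itself:

```python
import random

def elgamal_keypair(p=467, g=2, rng=random):
    """Toy ElGamal key generation over a tiny prime (illustrative only)."""
    x = rng.randrange(2, p - 1)           # private key
    return (p, g, pow(g, x, p)), x        # (public key, private key)

def elgamal_encrypt(pub, m, rng=random):
    p, g, h = pub
    k = rng.randrange(2, p - 1)           # fresh ephemeral secret per message
    return pow(g, k, p), (m * pow(h, k, p)) % p

def elgamal_decrypt(pub, x, ct):
    p, _, _ = pub
    c1, c2 = ct
    # c2 * c1^(-x) mod p, using Fermat's little theorem for the inverse
    return (c2 * pow(c1, p - 1 - x, p)) % p

# In the hybrid scheme, m would encode the symmetric session key
pub, priv = elgamal_keypair()
ct = elgamal_encrypt(pub, 42)
print(elgamal_decrypt(pub, priv, ct))  # -> 42
```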

The parallel microfluidic cytometer (PMC) is an imaging flow cytometer that operates on statistical analysis of low-pixel-count, one-dimensional (1D) line scans. It is highly efficient in data collection and operates on suspension cells. In this article, we present a supervised automated pipeline for the PMC that minimizes operator intervention by incorporating multivariate logistic regression for data scoring. We test the self-tuning statistical algorithms in a human primary T-cell activation assay in flow, using nuclear factor of activated T cells (NFAT) translocation as a readout, and readily achieve an average Z' of 0.55 and a strictly standardized mean difference of 13 with standard phorbol myristate acetate/ionomycin induction. To implement the tests, we routinely load 4 µL samples and can read out 3000 to 9000 independent conditions from 15 mL of primary human blood (buffy coat fraction). We conclude that the new technology will support primary-cell protein-localization assays and "on-the-fly" data scoring at a sample throughput of more than 100,000 wells per day and that it is, in principle, consistent with a primary pharmaceutical screen.
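The two assay-quality statistics quoted above have simple closed forms. A minimal sketch, with invented well readings (not data from the paper):

```python
import statistics

def z_prime(positives, negatives):
    """Z'-factor = 1 - 3*(sd_p + sd_n)/|mean_p - mean_n|.
    Values above 0.5 are conventionally taken as an excellent assay."""
    mu_p, mu_n = statistics.mean(positives), statistics.mean(negatives)
    sd_p, sd_n = statistics.stdev(positives), statistics.stdev(negatives)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

def ssmd(positives, negatives):
    """Strictly standardized mean difference:
    (mean_p - mean_n) / sqrt(var_p + var_n)."""
    return (statistics.mean(positives) - statistics.mean(negatives)) / \
        (statistics.variance(positives) + statistics.variance(negatives)) ** 0.5

pos = [10, 11, 9, 10]   # e.g. induced wells (illustrative numbers)
neg = [1, 2, 0, 1]      # e.g. uninduced wells
print(round(z_prime(pos, neg), 3), round(ssmd(pos, neg), 3))  # -> 0.456 7.794
```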

Graphical abstract: SynBiofuel production through existing gasification plants in Thailand, using waste agricultural biomass as raw material, was studied. The process design was initiated conceptually in the area of the gas-phase reaction system via Fischer-Tropsch (FT) synthesis. The development of FT configurations for syngas conversion to transportation fuels (e.g., the diesel range) was investigated. In order to develop a techno-economic assessment, three different capacities corresponding to 1 MW, 2 MW and 3 MW of thermal input of syngas were evaluated. A once-through FT concept was proposed in which the unconverted syngas is combusted with air in an externally fired gas turbine (EFGT) to produce surplus electricity. The results of the process simulation are discussed, including the overall plant design, energy efficiency, preliminary economics, and some site-specific situations under which additional capital cost savings on existing infrastructure were realized. - Highlights: • Experimental results were used and integrated with a reactor model for SynBiofuel. • Process simulation with the lumped reaction rate was used to achieve accurate results. • Process simulation was performed using ASPEN Plus to design FT configurations. • Maximum FT energy efficiency was approximately 37%. • Economic potential was computed by ROI and PBP, resulting in attractive solutions. - Abstract: The utilization of syngas shows high potential to improve the economics of stand-alone power-unit-based gasification plants as well as meeting the growing demand for transportation fuels. The thermochemical conversion of biomass via gasification to heat and power generation from the earlier study is further enhanced by integrating Fischer–Tropsch (FT) synthesis with the existing gasification pilot scale studied previously. To support the potential and perspectives in major economies due to scaling up in developing countries such as Thailand

Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics has been lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
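Two of the stand-alone quality metrics discussed above can be computed directly from an edge list. A minimal sketch on a toy graph of two triangles joined by a single bridge edge (standard definitions; the toy graph is illustrative):

```python
def degrees(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def conductance(edges, cluster):
    """cut(S) / min(vol(S), vol(V \\ S)) -- lower is better."""
    cluster = set(cluster)
    deg = degrees(edges)
    cut = sum(1 for u, v in edges if (u in cluster) != (v in cluster))
    vol_in = sum(deg[n] for n in cluster)
    vol_out = sum(deg.values()) - vol_in
    return cut / min(vol_in, vol_out)

def modularity(edges, clusters):
    """Newman modularity: sum over clusters of e_c/m - (vol_c/2m)^2."""
    deg = degrees(edges)
    m = len(edges)
    q = 0.0
    for c in clusters:
        c = set(c)
        e_c = sum(1 for u, v in edges if u in c and v in c)
        vol_c = sum(deg[n] for n in c)
        q += e_c / m - (vol_c / (2 * m)) ** 2
    return q

# Two triangles joined by one bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(conductance(edges, {0, 1, 2}), 3))            # -> 0.143
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 3))  # -> 0.357
```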

Microbial contamination of groundwater used for drinking water can affect public health and is of major concern to local water authorities and water suppliers. Potential hazards need to be identified in order to protect raw water resources. We propose a non-parametric data mining technique for exploring the presence of total coliforms (TC) in a groundwater abstraction well and its relationship to readily available, continuous time series of hydrometric monitoring parameters (seven-year records of precipitation, river water levels, and groundwater heads). The original monitoring parameters were used to create an extensive generic dataset of explanatory variables by considering different accumulation or averaging periods, as well as temporal offsets of the explanatory variables. A classification tree based on the Chi-Squared Automatic Interaction Detection (CHAID) recursive partitioning algorithm revealed statistically significant relationships between precipitation and the presence of TC in both a production well and a nearby monitoring well. Different secondary explanatory variables were identified for the two wells. Elevated water levels and short-term water table fluctuations in the nearby river were found to be associated with TC in the observation well. The presence of TC in the production well was found to relate to elevated groundwater heads and fluctuations in groundwater levels. The generic variables created proved useful for increasing significance levels. The tree-based model was used to predict the occurrence of TC on the basis of hydrometric variables.
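The chi-squared split selection at the heart of CHAID can be sketched as follows. The variable names and records below are hypothetical, and full CHAID additionally merges categories and applies Bonferroni-adjusted significance tests; this sketch only shows how the most predictive binary variable is picked for the first split:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def best_split(records, variables, target):
    """Pick the binary explanatory variable whose contingency table
    against the target yields the largest chi-squared statistic."""
    def table(var):
        a = sum(1 for r in records if r[var] and r[target])
        b = sum(1 for r in records if r[var] and not r[target])
        c = sum(1 for r in records if not r[var] and r[target])
        d = sum(1 for r in records if not r[var] and not r[target])
        return a, b, c, d
    return max(variables, key=lambda v: chi2_2x2(*table(v)))

# Hypothetical monitoring records: 1 = above threshold / TC detected
records = [
    {"rain_30d": 1, "river_high": 1, "tc": 1},
    {"rain_30d": 1, "river_high": 0, "tc": 1},
    {"rain_30d": 1, "river_high": 1, "tc": 1},
    {"rain_30d": 1, "river_high": 0, "tc": 1},
    {"rain_30d": 1, "river_high": 1, "tc": 0},
    {"rain_30d": 0, "river_high": 0, "tc": 0},
    {"rain_30d": 0, "river_high": 1, "tc": 0},
    {"rain_30d": 0, "river_high": 0, "tc": 0},
    {"rain_30d": 0, "river_high": 1, "tc": 0},
    {"rain_30d": 0, "river_high": 0, "tc": 1},
]
print(best_split(records, ["rain_30d", "river_high"], "tc"))  # -> rain_30d
```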

The existing local chi2 alignment approach for the ATLAS SCT detector was extended to the alignment of the ATLAS Pixel detector. This approach is linear, aligns modules separately, and uses distance of closest approach residuals and iterations. The derivation and underlying concepts of the approach are presented. To show the feasibility of the approach for Pixel modules, a simplified, stand-alone track simulation, together with the alignment algorithm, was developed with the ROOT analysis software package. The Pixel alignment software was integrated into Athena, the ATLAS software framework. First results and the achievable accuracy for this approach with a simulated dataset are presented.

The objective of this project is to develop a computerized statistical model with the Integrated Neural-Genetic Algorithm (INGA) for predicting the probability of long-term leakage of wells in CO2 sequestration operations. This objective has been accomplished by conducting research in three phases: 1) data mining of CO2-exposed wells, 2) INGA computer model development, and 3) evaluation of the predictive performance of the computer model with data from field tests. Data mining was conducted for 510 wells in two CO2 sequestration projects in the Texas Gulf Coast region: the Hastings West field and Oyster Bayou field in southern Texas. Missing wellbore integrity data were estimated using an analytical and Finite Element Method (FEM) model. The INGA was first tested for convergence and computing efficiency on the obtained high-dimensional data set. It was concluded that the INGA can handle the gathered data set with good accuracy and reasonable computing time after a reduction of dimension with a grouping mechanism. A computerized statistical model with the INGA was then developed based on data pre-processing and grouping. Comprehensive training and testing of the model were carried out to ensure that the model is accurate and efficient enough for predicting the probability of long-term leakage of wells in CO2 sequestration operations. The Cranfield site in southern Mississippi was selected as the test site. Observation wells CFU31F2 and CFU31F3 were used for pressure testing, formation logging, and cement sampling. Tools run in the wells include the Isolation Scanner, Slim Cement Mapping Tool (SCMT), Cased Hole Formation Dynamics Tester (CHDT), and Mechanical Sidewall Coring Tool (MSCT). Analyses of the obtained data indicate no leakage of CO2 across the cap zone, while it is evident that the well cement sheath was invaded by CO2 from the storage zone. This observation is consistent

To alleviate greenhouse gas emissions and the dependence on fossil fuels, Plug-in Hybrid Electric Vehicles (PHEVs) have gained increasing popularity in recent decades. Because of fluctuating electricity prices in the power market, the charging schedule strongly influences driving cost. Although next-day electricity prices can be obtained in a day-ahead power market, a driving plan is not easily made in advance. PHEV owners can input a next-day plan into a charging system, e.g., an aggregator, a day ahead, but this is a tedious task to do every day, and the driving plan may not be very accurate. To address this problem, in this paper we analyze energy demands from a PHEV owner's historical driving records and build a personalized statistical driving model. Based on this model and the electricity spot prices, a rolling optimization strategy is proposed to help make a charging decision in the current time slot. On the one hand, by employing a heuristic algorithm, the schedule is made according to the situation in the following time slots. On the other hand, after the current time slot, the schedule is remade according to the next tens of time slots. Hence, the schedule is produced by a dynamic rolling optimization, but it only fixes the charging decision for the current time slot. In this way, both the fluctuation of electricity prices and the driving routine are involved in the scheduling, and it is no longer necessary for PHEV owners to input a day-ahead driving plan. Simulation results demonstrate that the proposed method helps owners save charging costs while still meeting their driving requirements.
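A much-simplified receding-horizon charging heuristic in the spirit of the rolling optimization described above can be sketched as follows. The prices, demand, and charge rate are invented for illustration, and the paper's method additionally uses the statistical driving model; here each slot simply charges only if its price ranks among the cheapest slots still needed within the lookahead window:

```python
import math

def rolling_charge_schedule(prices, demand_kwh, rate_kw, horizon):
    """Receding-horizon heuristic: in each time slot, charge only if the
    current price is among the cheapest slots needed to meet the
    remaining demand within the lookahead horizon. Only the current
    slot's decision is committed; the rest is re-planned next slot."""
    remaining = demand_kwh
    schedule = []
    for t, price in enumerate(prices):
        charged = 0.0
        if remaining > 0:
            window = sorted(prices[t:t + horizon])
            need = min(math.ceil(remaining / rate_kw), len(window))
            if price <= window[need - 1]:       # among the cheapest needed
                charged = min(rate_kw, remaining)
        remaining -= charged
        schedule.append(charged)
    return schedule

# 6 one-hour slots, 6 kWh to charge at 3 kW: picks the two cheapest slots
print(rolling_charge_schedule([5, 3, 6, 2, 4, 7], 6.0, 3.0, 4))
# -> [0.0, 3.0, 0.0, 3.0, 0.0, 0.0]
```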

PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using IPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.

-processing tasks proper of basic GIS-like software. The result is a predictive Bayesian multi-modal Gaussian model, SOSim (Sunken Oil Simulator) Version 1.0rcl, operational for use with limited, randomly sampled, available subjective and numeric data on sunken oil concentrations and locations in relatively flat-bottomed bays. The SOSim model represents a new approach, coupling a Lagrangian modeling technique with predictive Bayesian capability for computing unconditional probabilities of mass as a function of space and time. The approach addresses the current need to rapidly deploy modeling capability without readily accessible information on ocean bottom currents. Contributions include (1) the development of apparently the first pollutant transport model for computing unconditional relative probabilities of pollutant location as a function of time based on limited available field data alone; (2) the development of a numerical method for computing concentration profiles subject to curved, continuous or discontinuous boundary conditions; (3) the development of combinatorial algorithms to compute unconditional multimodal Gaussian probabilities not amenable to analytical or Markov-Chain Monte Carlo integration due to high dimensionality; and (4) the development of software modules, including a core module containing the developed Bayesian functions, a wrapping graphical user interface, a processing and operating interface, and the necessary programming components that lead to an open-source, stand-alone, executable computer application (SOSim -- Sunken Oil Simulator). Extensions and refinements are recommended, including the capability to accept available information on bathymetry and possibly bottom currents as Bayesian prior information, the capability to model continuous oil releases, and the extension to tracking of suspended oil (3-D).
Keywords: sunken oil, Bayesian, Gaussian, model, stochastic, emergency response, recovery, statistical model, multimodal.

An algorithm to search for isolated high pT tracks in the ATLAS precision tracker suitable for use in the LVL2 trigger is described. A histogramming method employing a Hough transform is used to select sets of points forming track candidates. For each selected set of points, fits are performed to all combinations of one point per detection plane. The best candidate is chosen on the basis of the fit residuals. The algorithm has been implemented in the t2scFex package of the ATRIG trigger simulation program and also for benchmarking studies in a stand-alone program SCTFEX. Efficiency, fake track rate and timing measurements are presented for a number of algorithm options.
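The histogramming idea can be sketched as a coarse Hough transform over pairs of hits. The binning and hit coordinates below are illustrative, not the t2scFex implementation: every hit pair votes for a quantized (slope, intercept) cell, and the most-voted cell is the track candidate.

```python
from collections import Counter

def hough_line_peak(points, m_step, c_step):
    """Coarse Hough transform in slope/intercept space: each pair of
    hits votes for the quantized line through it; the cell with the
    most votes is returned as the track candidate."""
    votes = Counter()
    pts = list(points)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            if x1 == x2:
                continue  # vertical pair; a real tracker uses (rho, theta)
            m = (y2 - y1) / (x2 - x1)
            c = y1 - m * x1
            cell = (round(m / m_step) * m_step, round(c / c_step) * c_step)
            votes[cell] += 1
    return votes.most_common(1)[0][0]

# Five collinear hits on y = 2x + 1 plus two noise hits
hits = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9), (1, 8), (3, 2)]
print(hough_line_peak(hits, 0.1, 0.5))  # -> (2.0, 1.0)
```

After this histogramming step, the per-plane combinatorial fits described above would refine the candidate and select the best set of points by residuals.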

Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

In this paper, estimation of distribution algorithms (EDAs) are used to solve nuclear reactor fuel management optimisation (NRFMO) problems. Like typical population-based optimisation algorithms, e.g. genetic algorithms (GAs), EDAs maintain a population of solutions and evolve them during the optimisation process. Unlike GAs, new solutions are suggested by sampling the distribution estimated from all the solutions evaluated so far. We have developed new algorithms based on the EDA approach, which are applied to maximize the effective multiplication factor (K_eff) of the CONSORT research reactor of Imperial College London. In the new algorithms, a new 'elite-guided' strategy and the 'stand-alone' K_eff with fuel coupling are used as heuristic information to improve the optimisation. A detailed comparison study between the EDAs and GAs with previously published crossover operators is presented. A trained three-layer feed-forward artificial neural network (ANN) was used as a fast approximate model to replace the three-dimensional finite element reactor simulation code EVENT in predicting K_eff. Results from the numerical experiments have shown that the EDAs used provide accurate, efficient and robust algorithms for the test case studied here. This encourages further investigation of the performance of EDAs on more realistic problems.
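The EDA loop described above (sample from an estimated distribution, select, re-estimate) can be sketched with a univariate marginal distribution algorithm on the OneMax toy objective, which stands in here for the expensive K_eff evaluation; all parameters are illustrative:

```python
import random

def umda_onemax(n_bits=20, pop=60, keep=20, iters=40, seed=1):
    """Univariate marginal distribution algorithm (a simple EDA):
    sample a population from per-bit probabilities, select the best
    solutions, and re-estimate the per-bit distribution from them.
    The OneMax fitness (count of 1-bits) replaces the K_eff objective."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                       # initial marginal probabilities
    popn = []
    for _ in range(iters):
        popn = [[1 if rng.random() < pi else 0 for pi in p]
                for _ in range(pop)]
        popn.sort(key=sum, reverse=True)     # select by fitness
        elite = popn[:keep]
        # re-estimate marginals from the elite, clamped to avoid lock-in
        p = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / keep))
             for i in range(n_bits)]
    return sum(popn[0])                      # best fitness in last population

print(umda_onemax())  # typically reaches the optimum of 20
```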

ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web or a number of different shared file systems. In order to analyze this data, the user can chose out of a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advanced statistical tools. Multivariate classification methods based on machine learning techniques are available via the TMVA package. A central piece in these analysis tools are the histogram classes which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like Postscript and PDF or in bitmap formats like JPG or GIF. The result can also be stored into ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g. data mining in HEP - by using PROOF, which will take care of optimally

Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

A new hyperactive alkalophilic bacterial strain (Bacillus sp. BGS) was isolated from soil samples that received the effluent of a milk-processing industry located in Madurai, Tamil Nadu, India, and this strain was used for the production of alkaline protease. Four of the eight variables examined, namely molasses, peptone, pH, and inoculum size, were identified through a Plackett-Burman (PB) design and used for alkaline protease production. These significant variables were further optimized through a hybrid system of response surface methodology (RSM) followed by a genetic algorithm (GA). The optimal combination of media components and culture conditions for maximal protease production was found to be 16.827 g/L of peptone, 1.128% (v/v) of molasses, a pH value of 11, and 2% (v/v) of inoculum size. A 6.36-fold increase in protease production was achieved through the RSM-GA hybrid system. The protease activity increased significantly with the optimized medium (2,992.75 U/mL) as opposed to the unoptimized basal medium (470.35 U/mL).

The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second-order polynomial formula, which converges faster than existing MPPT algorithms. The hardware implementation takes advantage of a real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
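The quadratic update itself is a small calculation: fit a parabola through three (duty cycle, power) samples and jump to its vertex. A sketch with an invented power curve (the Lagrange-style coefficient formulas are standard; the sample values are not from the paper):

```python
def quadratic_mppt_step(d1, p1, d2, p2, d3, p3):
    """Fit P(d) = a*d^2 + b*d + c through three (duty cycle, power)
    samples and return the vertex -b/(2a) as the next duty-cycle guess."""
    denom = (d1 - d2) * (d1 - d3) * (d2 - d3)
    a = (d3 * (p2 - p1) + d2 * (p1 - p3) + d1 * (p3 - p2)) / denom
    b = (d3**2 * (p1 - p2) + d2**2 * (p3 - p1) + d1**2 * (p2 - p3)) / denom
    return -b / (2 * a)

# Samples taken from the invented curve P(d) = 1 - (d - 0.5)^2,
# whose true maximum power point is at duty cycle 0.5
print(quadratic_mppt_step(0.2, 0.91, 0.4, 0.99, 0.8, 0.91))
# -> 0.5 (up to floating point)
```

In a real controller this step would be iterated: the new duty cycle replaces the worst of the three samples and the parabola is refit until the vertex converges.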

Molecular simulation aims at simulating particles in interaction, describing a physico-chemical system. When considering Markov Chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with Molecular Dynamics for the simulation of complex molecules (polymers, for example). The search for a correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e. the ability to rapidly provide uncorrelated states covering all of configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE). AE algorithms form a class of stochastic optimization algorithms inspired by Darwinian evolution. Efficiency measures that can be turned into efficiency criteria were identified first, before identifying parameters that could be optimized. Relative frequencies for each type of Monte Carlo move, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of samples with respect to the chosen criterion. The same algorithm has been applied to improve the Parallel Tempering technique, in order to simultaneously optimize the relative frequencies of Monte Carlo moves and the relative frequencies of swapping between sub-systems simulated at different temperatures. Finally, hints for further research on optimizing the choice of additional temperatures are given. (author)
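For the Parallel Tempering part, the standard replica-swap acceptance probability that the tuned swap frequencies act upon is a one-liner:

```python
import math

def swap_probability(beta_i, beta_j, e_i, e_j):
    """Parallel-tempering swap acceptance between replicas at inverse
    temperatures beta_i, beta_j with current energies e_i, e_j:
        A = min(1, exp((beta_i - beta_j) * (e_i - e_j)))."""
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))

# Moving the lower-energy state to the colder replica is always accepted;
# the reverse move is accepted only with probability < 1.
print(swap_probability(1.0, 0.5, 2.0, 1.0))            # -> 1.0
print(round(swap_probability(1.0, 0.5, 1.0, 2.0), 4))  # -> 0.6065
```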

Many physicists are not aware that they can solve their problems by applying optimization algorithms. Because the number of such algorithms is steadily increasing, many of the newer ones have not yet been presented comprehensively. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application; the algorithms selected cover concepts and methods ranging from statistical physics to optimization problems emerging in theoretical computer science.

The errors on statistics measured in finite galaxy catalogs are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi (1996) is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly nonlinear to weakly nonlinear scales. The final analytic formu...

Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th

The Advanced Very High Resolution Radiometer (AVHRR) carried on board the National Oceanic and Atmospheric Administration (NOAA) and the Meteorological Operational Satellite (MetOp) polar orbiting satellites is the only instrument offering more than 25 years of satellite data for analysing aerosols on a daily basis. The present study assessed a modified AVHRR aerosol optical depth τa retrieval over land for Europe. The algorithm might also be applied to other parts of the world with surface characteristics similar to Europe; only the aerosol properties would have to be adapted to a new region. The initial approach used a relationship between Sun photometer measurements from the Aerosol Robotic Network (AERONET) and the satellite data to post-process the retrieved τa. Herein a quasi-stand-alone procedure, which is more suitable for the pre-AERONET era, is presented. In addition, the estimation of surface reflectance, the aerosol model, and other processing steps have been adapted. The method's cross-platform applicability was tested by validating τa from NOAA-17 and NOAA-18 AVHRR at 15 AERONET sites in Central Europe (40.5° N–50° N, 0° E–17° E) from August 2005 to December 2007. Furthermore, the accuracy of the AVHRR retrieval was compared with products from two newer instruments, the Medium Resolution Imaging Spectrometer (MERIS) on board the Environmental Satellite (ENVISAT) and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board Aqua/Terra. Considering the linear correlation coefficient R, the AVHRR results were similar to those of MERIS, with an even lower root mean square error (RMSE). Not surprisingly, MODIS, with its high spectral coverage, gave the highest R and lowest RMSE. Regarding monthly averaged τa, the results were ambiguous. Focusing on small-scale structures, R was reduced for all sensors, whereas the RMSE solely for MERIS

In general, condition-based maintenance refers to a maintenance program that introduces additional sensors, such as vibration, acoustic, ultrasonic and infrared sensors, to monitor equipment condition. The methodology that uses only process variables already installed in the system to detect anomalies, efficiency degradation or sensor malfunction has also been studied. Among these, the commercialization potential of a process/sensor anomaly detection system using an empirical model is considered high because it imposes no additional system requirements on complex systems. However, some problems have been pointed out as these solutions have been introduced: for example, (1) missing important signals during variable selection and (2) variable duplicity in the empirical model. This paper proposes a complementary framework and detailed methodologies, and performs experimental validation using a heat conduction experimental device that allows flexible fault injection.

Full Text Available The target of a stand-alone hybrid power generation system is to supply the load demand with high reliability and as economically as possible. To meet these criteria, the optimal design of the proposed configuration should be carried out using an intelligent optimization technique. This study utilized the Genetic Algorithm to determine the optimal capacities of hydrogen, wind turbines and a micro hydro unit according to the minimum-cost objective function that relates these two factors. In this study, the cost objective function included the annual capital cost, annual operation and maintenance cost, annual replacement cost and annual customer damage cost. The proposed method was tested on the hybrid power generation system located in Leuwijawa village in Central Java, Indonesia. Simulation results showed that the optimum configuration can be achieved using 19.85 tons of hydrogen tanks, 21 x 100 kW wind turbines and a 610 kW micro hydro unit.
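The cost-minimizing sizing described above can be sketched as follows. An exhaustive search stands in for the Genetic Algorithm, and all unit costs, capacity ranges and the reliability constraint are invented placeholders, not values from the study:

```python
# Sketch of the kind of cost objective a GA would minimize when sizing hybrid
# components. Every figure below is an illustrative placeholder.
import itertools

def annual_cost(n_wind, hydro_kw, h2_tons,
                wind_cap=12000.0, hydro_per_kw=150.0, h2_per_ton=900.0):
    # capital + O&M + replacement folded into simple annualized unit costs
    return n_wind * wind_cap + hydro_kw * hydro_per_kw + h2_tons * h2_per_ton

def meets_demand(n_wind, hydro_kw, h2_tons, peak_kw=2500.0):
    # toy reliability constraint: installed capacity must cover peak demand
    return n_wind * 100.0 + hydro_kw + h2_tons * 50.0 >= peak_kw

# exhaustive search over candidate capacities, standing in for the GA
best = min(
    (c for c in itertools.product(range(0, 31), range(0, 701, 10), range(0, 21))
     if meets_demand(*c)),
    key=lambda c: annual_cost(*c),
)
print(best, annual_cost(*best))  # → (15, 0, 20) 198000.0
```

A real GA would replace the exhaustive scan with selection, crossover and mutation over the same objective and constraint.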

Version 11 of XTALOPT, an evolutionary algorithm for crystal structure prediction, has now been made available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. Whereas the previous versions of XTALOPT were published under the GNU General Public License (GPL), the current version is made available under the 3-Clause BSD License, which is an open source license that is recognized by the Open Source Initiative. Importantly, the new version can be executed via a command line interface (i.e., it does not require the use of a Graphical User Interface). Moreover, the new version is written as a stand-alone program, rather than an extension to AVOGADRO.

Highlights: • An X-ray microbeam line for irradiation of living cultured cells was constructed. • A step-by-step explanation of working principles with engineering details, procedures and calculations is presented. • A model of beam and cell interaction is presented. • A method of uniform irradiation of living cells with an exact dose per cell is presented. • Results of preliminary experiments are presented. - Abstract: The article describes a stand-alone X-ray microbeam facility dedicated to the irradiation of living cultured cells. The article can serve as a guide for the construction of such facilities, as it proceeds from engineering details, through mathematical modeling and experimental procedures, to preliminary experimental results and conclusions. The presented system consists of an open-type X-ray tube with microfocusing down to about 2 μm, an X-ray focusing system with optical elements arranged in the nested Kirkpatrick–Baez (or Montel) geometry, a sample stand and an optical microscope with a scientific digital CCD camera. For beam visualisation an X-ray sensitive CCD camera and a spectral detector are used, as well as a scintillator screen combined with the microscope. A method of precise one-by-one irradiation of previously chosen cells is presented, as well as a fast method of uniform irradiation of a chosen sample area. Mathematical models of the beam and cell, with calculations of kerma and dose, are presented. Experiments on the dose-effect relationship, the kinetics of DNA double-strand break repair, and micronuclei observation were performed on PC-3 (prostate cancer) cultured cells. The cells were seeded and irradiated on Mylar foil, which covered a hole drilled in the Petri dish. DNA lesions were visualised with the γ-H2AX marker combined with Alexa Fluor 488 fluorescent dye.

The present paper addresses the AC microgrid control issue using a hierarchical control structure and droop controllers for load sharing. Since the droop controllers impose operation with frequency and voltage deviations that depend on the load and droop parameters, a hierarchical control structure must be added to change the droop controller operating points. The hierarchical controllers operate with local measurements and signals shared over communication links among the distributed generation systems connected to the microgrid. Depending on the geographical size of the microgrid, the communication links can be economically unviable. This paper thus proposes a fuzzy secondary controller for AC microgrids that reduces the dependency on communication links by using only local measurements. The simulation results show that the deviations that occur with the conventional secondary controllers can…
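The droop deviation and the secondary correction that the hierarchical layer provides can be sketched as follows. The nominal frequency, droop gain and load are illustrative numbers, and the fuzzy logic itself is not modelled here:

```python
# Sketch of P-f droop plus a secondary correction term, the mechanism that a
# secondary controller (conventional or fuzzy) implements. Gains and
# set-points below are illustrative, not from the paper.

F_NOM = 50.0      # Hz, nominal frequency
DROOP_M = 0.002   # Hz per kW, droop gain

def droop_frequency(p_kw, secondary_offset=0.0):
    """Primary droop: frequency sags with load; secondary shifts it back."""
    return F_NOM - DROOP_M * p_kw + secondary_offset

f_loaded = droop_frequency(500.0)            # droop alone: deviation to 49.0 Hz
offset = F_NOM - f_loaded                    # secondary controller output
f_restored = droop_frequency(500.0, offset)  # operating point shifted to 50.0 Hz
print(f_loaded, f_restored)  # → 49.0 50.0
```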

This paper reports the findings of a longitudinal investigation into the effectiveness of skills education programmes within business and management undergraduate degree courses. During the period between 2005 and 2011, a large business school in the south-west of England developed and implemented two distinct approaches to skills education.…

In this article, detailed modeling, simulation, and analysis of an isolated wind-hydrogen hybrid energy system is presented. Dynamic nonlinear models of all the major subsystems are developed based on sets of empirical and physical relationships. The performance of the integrated hybrid energy system is then analyzed through digital simulation. Design of dynamic controllers and supervisory control schemes are also presented. Expected behaviors during sudden load variation, wind speed change and hydrogen pressure drop are observed under both stochastic and step-variation conditions. MATLAB/Simulink™ is employed for dynamic system modeling. This exercise, in essence, outlines a process of wind-hydrogen off-grid system control synthesis and performance evaluation. Finally, results of the analysis are summarized, limitations of the simulation study are identified, and scope for future work is indicated.

PV-powered lighting systems, light-to-light (L2L) systems, offer outdoor lighting where it is otherwise cumbersome to provide lighting. Application of these systems at high latitudes, where the difference in day length between summer and winter is large and the solar energy is low, requires smart dimming functions for reliable lighting. A barrier to exploiting stand-alone solar lighting in the urban environment seems to be a lack of knowledge and of available tools for proper dimensioning. In this work the development of a powerful dimensioning tool is described and initial measurements…

We describe a general-purpose dryer designed for continuous sampling of atmospheric aerosol, where a specified relative humidity (RH) of the sample flow (lower than the atmospheric humidity) is required. It is often prescribed to measure the properties of dried aerosol, for instance for monitoring networks. The specific purpose of our dryer is to dry cloud droplets (maximum diameter approximately 25 μm, highly charged, up to 5 × 10² charges). One criterion is to minimise losses from the droplet size distribution entering the dryer as well as on the residual dry particle size distribution exiting the dryer. This is achieved by using a straight vertical downwards path from the aerosol inlet mounted above the dryer, and removing humidity to a dry, closed-loop airflow on the other side of a semi-permeable GORE-TEX membrane (total area 0.134 m²). The water vapour transfer coefficient, k, was measured to be 4.6 × 10⁻⁷ kg m⁻² s⁻¹ %RH⁻¹ in the laboratory (temperature 294 K) and is used for design purposes. A net water vapour transfer rate of up to 1.2 × 10⁻⁶ kg s⁻¹ was achieved in the field. This corresponds to drying a 5.7 L min⁻¹ (0.35 m³ h⁻¹) aerosol sample flow from 100% RH to 27% RH at 293 K (with a drying air total flow of 8.7 L min⁻¹). The system was used outdoors from 9 May until 20 October 2010, on the mountain Brocken (51.80° N, 10.67° E, 1142 m a.s.l.) in the Harz region in central Germany. Sample air relative humidity of less than 30% was obtained 72% of the time period. The total availability of the measurement system was >94% during these five months.
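The reported numbers can be cross-checked with a short calculation. The saturation vapour density at 293 K (roughly 17.3 g m⁻³) is a standard table value assumed here, not a figure from the text:

```python
# Cross-check of the dryer figures: drying 5.7 L/min from 100% to 27% RH at
# 293 K should remove water at about the reported 1.2e-6 kg/s. The saturation
# vapour density is an assumed textbook value.

sat_density = 17.3e-3        # kg m^-3, saturation water vapour density at 293 K
flow = 5.7 / 1000 / 60       # sample flow, 5.7 L/min converted to m^3/s
rh_in, rh_out = 100.0, 27.0  # % RH

# water removed per second when drying the sample flow
removal_rate = (rh_in - rh_out) / 100.0 * sat_density * flow
print(f"{removal_rate:.2e} kg/s")  # close to the reported 1.2e-6 kg/s

# membrane side: rate = k * area * mean RH difference across the membrane
k, area = 4.6e-7, 0.134      # kg m^-2 s^-1 %RH^-1, and m^2, from the abstract
mean_drh = removal_rate / (k * area)
print(f"implied mean RH difference across membrane: {mean_drh:.0f} %RH")
```

The first figure reproduces the abstract's stated correspondence; the second shows what mean RH difference across the membrane the measured k would require to sustain it.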

Drawing upon ethnographic fieldwork in Seke, a semi-rural area outside Harare, Zimbabwe, this paper explores the social mechanism behind the seeming invisibility of children left on their own and how this form of 'invisibility' challenges established notions of childhood, parenthood, kinship, and community. It argues that ...

Highlights: • The heating-up and cooling-down processes of the HT-PEMFC are the main topics of interest. • The dynamic behaviour, power and energy demand of heating and cooling were studied. • A hybrid system based on a LiFeYPO₄ battery for heating and cooling is built and tested. • The concept of combining different heating sources is recommended. - Abstract: One key issue pertaining to the cold start of a high-temperature PEM fuel cell (HT-PEMFC) is the large amount of thermal energy required to heat the stack to a temperature of 120 °C or above before it can generate electricity. Furthermore, cooling the stack down to a certain temperature (e.g. 50 °C) is necessary before stopping. In this study, the dynamic behaviour, power and energy demand of a 6 kW liquid-cooled HT-PEMFC stack during heating-up, operation and cooling-down were investigated experimentally, with the dynamic behaviour of the fuel cell during the heating-up and cooling-down processes being the main topic of interest. A hybridisation of the HT-PEMFC with a Li-ion battery was then conducted and validated for feasibility, demonstrating the synergistic effect on dynamic behaviour. Finally, the concept of combining different heating sources to reduce the heating time of the HT-PEMFC is analysed.

This paper discusses the operational assessment of IFCI 6.0 against a small suite of experiments representative of the existing fuel-coolant interaction (FCI) database. The simulations should shake down any obvious problems and demonstrate the functionality of the models contained within IFCI for all phases of FCI phenomena. It was anticipated that these simulations should reasonably represent the experimental data; the IFCI 6.0 simulations were not expected, or required, to exactly reproduce the experimental results. IFCI 6.0 was assessed against a generic FITS-type pouring mode experiment, a FARO quenching experiment, and the IET-8 A & B experiments to: (1) demonstrate that the code can reliably reproduce the results of the previous versions of the IFCI code; (2) demonstrate the capability of qualitatively simulating all phases of fuel-coolant interactions (FCIs), including explosive cases; and (3) identify shortcomings and areas for code enhancement.

Full Text Available On remote islands, photovoltaic (PV) panels with battery energy storage systems (BESSs) supply electric power to customers in parallel operation with engine generators (EGs) to reduce fuel consumption and environmental burden. A BESS operates in voltage control mode when it supplies power to loads alone, while it operates in current control mode when it supplies power to loads in parallel with the EG. This paper proposes a smooth mode change of the BESS from current control to voltage control by setting the initial value at the output of the integral part of the voltage controller, and a smooth mode change from voltage control to current control by tracking the EG output voltage with the BESS output voltage using a phase-locked loop (PLL). The feasibility of the proposed scheme was verified through computer simulations and experiments with a scaled prototype.

Risk-based inspection (RBI) has been developed in order to identify risky equipment that can cause major accidents or damage in large-scale plants. This assessment evaluates the equipment's risk, categorizes priorities based on risk level, and then determines the urgency of maintenance or allocates maintenance resources. An earlier version of the risk-based assessment software is already installed within the equipment management system; however, the assessment is based on examination by an inspector, and the results can be influenced by the inspector's subjective judgment rather than being based on failure probability. Moreover, the system is housed within a server, which limits the inspector's work space and time, and such a system can be used only on site. In this paper, the development of independent risk-based assessment software is introduced; this software calculates the failure probability by an analytical method, and analyzes the field inspection results as well as inspection effectiveness. It can also operate on site, since it can be installed on an independent platform, and it provides an I/O function for the field inspection results regarding the optimum maintenance cycle. This program will provide useful information not only to the field users who participate in maintenance, but also to the engineers who need to decide whether to extend the life cycle of the power machinery or replace only specific components.
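The risk-categorization step that RBI assessments generally perform can be sketched as follows. The 1-5 scales and the thresholds are invented placeholders, not values from the software described above:

```python
# Generic sketch of the risk-matrix step in risk-based inspection:
# risk score = probability-of-failure class x consequence-of-failure class.
# Scales and thresholds below are illustrative placeholders.

def risk_level(pof_class, cof_class):
    """Map a PoF/CoF pair (each on a 1-5 scale) to a maintenance priority."""
    score = pof_class * cof_class
    if score >= 15:
        return "high"      # inspect/maintain urgently
    if score >= 6:
        return "medium"    # schedule within the normal cycle
    return "low"           # candidate for extending the inspection interval

print(risk_level(4, 5), risk_level(3, 2), risk_level(2, 2))  # → high medium low
```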

One quarter of all 134 HEC modules are tested with electron, pion and muon beams: two "partial HEC wheels", three HEC1 modules and three HEC2 modules are used in a standard setup using the HEC cryostat in the H6 beam line. The picture shows a view of the set-up in the cryostat during the installation. MC results show that in this setup the energy leakage is well under control - well below 5%. In addition, the other three quarters of the modules are tested in technical cold tests. Using calibration signals, a detailed test of the cabling, cold electronics, crosstalk and noise performance is being done. The beam tests started with four prototype modules per run in '97, when technological optimization was still the key issue. From '98 onwards, modules of the "module 0" type have been tested, typically in two run periods per year. Finally, in '99 the series production started, with the first beam test of series modules in 2000. Since then 57 series modules have been cold tested, 24 of them in beam tests. T...

Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing its utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fusion...

Westinghouse Hanford Company (WHC) has developed computer-based training (CBT) materials for radiation and industrial safety. The first course, released for general use at the Fast Flux Test Facility in November 1984, has now been taken by nearly 350 people; completion times for new personnel average around eight hours. The next project undertaken was the construction of a Radiation Worker Safety course generic enough for use by all contractors at the Hanford site. The design process for the Hanford site course indicated that the quantity of ''DOE common material'' may be sufficient to warrant consideration of a larger target population. Specifically, the course will be designed to run on an IBM-PC or compatible computer having 256K RAM, a standard IBM color graphics card or equivalent, a color graphics monitor, and two floppy disk drives or one hard disk. The target student population includes those who routinely work in Radiation Areas, especially crafts people. We are not targeting Health Physics personnel, except possibly for introductory training, nor are we directing the course toward ''casual'' or escorted workers.

quality and reliability is adequate for implementation of a road pricing system. The GPS log files were imported into ArcGIS and analyzed in relation to the digital road network and the density of the high-rise areas in order to examine where the high buildings and narrow street canyons cause too many…

Because of the capability of free movement in the treatment room, we recently introduced a Hercules treatment couch on one of our linear accelerators. One of the advantages of this couch is that it allows for a more flexible way of patient setup and that it can be moved entirely out of the way to

Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples, faces severe challenges concerning data analysis and results interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer .

This paper proposes design steps for sizing a stand-alone photovoltaic system with hydrogen storage using an intuitive method. The main advantage of this method is that it uses a direct mathematical approach to find the system size based on daily load consumption and average irradiation data. The keys of the system design are to satisfy a pre-determined load requirement and to maintain the hydrogen storage's state of charge during periods of low solar irradiation. To test the effectiveness of the proposed method, a case study is conducted using Kuala Lumpur's generated meteorological data and a rural area's typical daily load profile of 2.215 kWh. In addition, an economic analysis is performed to appraise the feasibility of the proposed system. The finding shows that the levelized cost of energy for the proposed system is RM 1.98 per kWh. However, based on sizing results obtained using a published method with an AGM battery as back-up supply, that system's cost is lower and more economically viable. The feasibility of a PV system with hydrogen storage can be improved if the efficiency of hydrogen storage technologies increases significantly in the future. Hence, a sensitivity analysis is performed to verify the effect of electrolyzer and fuel cell efficiencies on the levelized cost of energy. The efficiencies of electrolyzers and fuel cells available in the current market are validated using laboratory experimental data. This finding is needed to envisage the applicability of photovoltaic systems with hydrogen storage as a future power supply source in Malaysia.
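The levelized-cost arithmetic behind a result like this can be sketched as follows. Only the 2.215 kWh daily load and the RM 1.98/kWh figure come from the abstract; the annualized cost is back-derived here purely for illustration:

```python
# Sketch of the levelized-cost-of-energy (LCOE) calculation:
# LCOE = annualized life-cycle cost / annual energy served.
# The annualized cost below is an assumed figure chosen to reproduce the
# abstract's RM 1.98/kWh result; it is not taken from the study.

daily_load_kwh = 2.215
annual_energy = daily_load_kwh * 365     # kWh served per year

annualized_cost = 1600.78                # RM/year (assumed for illustration)
lcoe = annualized_cost / annual_energy
print(f"LCOE = RM {lcoe:.2f}/kWh")
```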

MicroRNAs (miRNAs) are small non-coding RNA molecules of ∼22 nucleotides which regulate large numbers of genes by binding to seed sequences in the 3′-untranslated region of target gene transcripts. The target mRNA is then usually degraded or its translation is inhibited, resulting in posttranscriptional down-regulation of gene expression at the mRNA and/or protein level. Due to the bioinformatic difficulties in predicting functional miRNA binding sites, several publicly available d...

An important element of a hybrid photovoltaic (PV) + diesel system is battery storage, whose size plays a role in the optimum operation of the hybrid system, so emphasis needs to be placed on this issue. In this perspective, hourly solar radiation data recorded at Dhahran, Saudi Arabia, for the period 1986–93 have been analyzed to investigate the optimum size of battery storage capacity for hybrid (PV + diesel) power systems, and various sizing configurations have been simulated. The monthly average daily values of solar global radiation range from 3.61 to 7.96 kWh/m². As a case study, the hybrid systems considered in the present analysis consist of a 225 m² PV array (panels/modules) supplemented with a battery storage unit and diesel backup generators (to meet the load requirements of a typical residential building with an annual electrical energy demand of 35,200 kWh). The monthly average energy generated from the aforementioned hybrid system is presented for different scenarios. More importantly, the study explores the influence of variation of battery storage capacity on hybrid power generation. The results exhibit a trade-off between the size of the storage capacity and the diesel power to be generated to cope with the annual load distribution. Concurrently, the energy to be generated by the diesel generator and the number of operational hours of the diesel system needed to meet the load demand are also addressed. The study shows that for optimum operation of the diesel system, a storage capacity equivalent to 12–18 h of the maximum monthly average hourly demand should be used. It has been found that in the absence of a battery bank, ~58% of the load needs to be provided by the diesel system; however, use of 12 h of battery storage (autonomy) reduces diesel energy generation by ~49%, and the number of hours of operation of the diesel system is reduced by about ~82%.
The findings of this study can be employed as a tool for sizing battery storage for PV/diesel systems in other regions having climates similar to the location considered in the study.
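The autonomy-based sizing rule above (12-18 h of maximum monthly average hourly demand) can be sketched as follows. The peak hourly demand figure is a hypothetical input, since the study does not state it; only the 35,200 kWh annual demand comes from the abstract:

```python
# Sketch of autonomy-based battery sizing:
# capacity = autonomy hours x maximum monthly-average hourly demand.
# The 5.5 kWh peak hourly figure is an assumed illustrative value.

annual_demand_kwh = 35_200
avg_hourly = annual_demand_kwh / 8760     # ~4.0 kWh, a lower bound on the peak
max_monthly_avg_hourly = 5.5              # kWh, assumed for illustration

def battery_capacity(autonomy_h, peak_hourly_kwh):
    """Storage capacity needed to ride through the given autonomy window."""
    return autonomy_h * peak_hourly_kwh

for hours in (12, 18):
    print(hours, "h autonomy ->", battery_capacity(hours, max_monthly_avg_hourly), "kWh")
```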

The low energetic efficiency of photovoltaic panels is well known; in addition, due to the use of linear regulators, which dissipate an important part of the generated energy, the efficiency of photovoltaic systems is still lower. Moreover, the I-V characteristic curve of photovoltaic modules depends on the solar radiation and the module's own temperature; consequently, the maximum power point changes permanently. Producing electricity with photovoltaic panels is therefore very expensive, yet for the sake of preserving the environment this technology is widely used. With the purpose of optimizing the amount of energy produced by the photovoltaic system, two complementary methods are used: one is the Maximum Power Point Tracker (MPPT) system and the other is the solar tracker system. The objective of this project is to reduce that cost by increasing the amount of energy produced by the solar panels using a Maximum Power Point Tracker system. This device consists of a high-performance DC/DC buck converter controlled by a PIC 16F873 microcontroller, which carries out the conversion of the analog signals of the solar array to digital signals (ADC), converts the PIC output digital signals to the PWM control of the power FET (DAC), and calculates the duty cycle (D) for the point of the I-V curve where the product of current and voltage becomes maximum. Measurements for different loads and battery charges were made. With the obtained results, comparisons with a conventional system were made, and a greater transfer of energy to the load was observed. The main conclusion of this work is: by using an MPPT device to keep the PV module operating as long as possible near the maximum power point, the efficiency of photovoltaic systems can be increased.
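The duty-cycle search an MPPT performs is commonly implemented as perturb-and-observe. The following sketch illustrates that idea on a toy power curve; the curve shape, step size and peak location are invented, and this is not the controller firmware described above:

```python
# Perturb-and-observe MPPT sketch: nudge the converter duty cycle, observe
# whether output power P = V*I rose or fell, and reverse direction on a drop.
# The quadratic power curve (peak at duty = 0.6) is an illustrative stand-in
# for a real PV module's behaviour.

def pv_power(duty):
    """Toy PV output power vs. converter duty cycle, peaking at duty = 0.6."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

def perturb_and_observe(duty=0.3, step=0.01, iters=200):
    p_prev = pv_power(duty)
    direction = 1.0
    for _ in range(iters):
        duty += direction * step
        p = pv_power(duty)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return duty

d = perturb_and_observe()
print(round(d, 2))  # settles into a small oscillation around the MPP near 0.6
```

The steady-state oscillation around the peak is the characteristic trade-off of perturb-and-observe: a smaller step reduces ripple but slows tracking.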

Experience with dc photovoltaic systems (without backup power) and ranging in output from 23 to 3,500 peak watts, in a wide range of environmental conditions and with a wide range of insolation, is described. Cooperation of NASA with other government agencies resulted in the installation of an air pollution monitor in New Jersey, a seismic sensor in Hawaii, power for lookout towers in national forests in California, an electric power system for a Papago Indian village in Arizona, and a power system for a grain mill and water pump in Tangaye, Upper Volta. Significant operational results are discussed and system reliability is assessed for the 20 experimental systems installed since 1976. Additional systems to be installed overseas are highlighted, and economic factors are considered.

Distributed Artificial Intelligence (DAI) systems, in which multiple problem-solving agents cooperate to achieve a common objective, are a rapidly emerging and promising technology. However, as yet, there have been relatively few reported cases of such systems being employed to tackle real-world problems in realistic domains. One of the reasons for this is that DAI researchers have given virtually no consideration to the process of incorporating pre-existing systems into a community of cooperating agents. Yet reuse is a primary consideration for any organisation with a large software base. To redress the balance, this paper reports on an experiment undertaken at the CERN laboratories, in which two pre-existing and standalone expert systems for diagnosing faults in a particle accelerator were transformed into a community of cooperating agents. The experiences and insights gained during this process provide a valuable first step towards satisfying the needs of potential users of DAI technology - identifying the ty...

The present study investigates the relation between sketching and communication in teams during the idea generation process in early concept generation. A quasi-experiment study has been conducted with Masters students of Industrial Design Engineering at Delft University of Technology, Netherlands.

In this paper, the disturbance rejection performance of the cascaded control strategy for a UPS system is investigated, and the closed-loop system performance of the conventional cascaded control (CCC) strategy and the state-decoupling cascaded control (SDCC) strategy is compared. In order to further increase the load current disturbance rejection capability of the state decoupling in the UPS system, a feedforward control strategy is proposed. In addition, the design principles for the current and voltage regulators are discussed. Simulation and experimental results are provided…

This program profile explains and illustrates a pedagogical application of Rhetorical Genre Studies (RGS) to a one-semester, upper-division online Professional Writing course. We explain our use of a heuristic, which we liken to "night-vision goggles," that enables students to systematically analyze field data that they gather from a…

The main goal of this study is to contribute to the design of an optimized hybrid renewable power plant for the island of Agios Efstratios. The initial step is to analyze the attributes and applications of various energy generation and storage technologies and focus on the most suitable ones for ...

North Carolina's Early College model is the subject of an IES-funded eleven-year longitudinal experimental study that utilized a lottery process to assign early college applicants to either treatment or control groups. This paper presents findings related to high school outcomes. The primary goal of the early college model is to increase the…

This paper presents results from the longitudinal experimental study conducted on North Carolina's early college model described in an earlier paper. The primary purpose of this paper is to present the impact of the early college model on outcomes related to postsecondary enrollment. The specific research questions driving this study include: (1)…

The unpredictable nature of wind energy makes its integration into the electric grid highly challenging. However, these challenges can be addressed by incorporating storage devices (batteries) in the system. We perform an overall assessment of a single domestic power system with a wind

Full Text Available Coastal structures rely on breakwaters to protect them, and therefore the study of breakwater structures and armour units (for example, dolosse) is of importance in coastal engineering. Traditionally, these studies involved the building of three... single armour unit [5]. A recent study used a model containing a relatively large number of interlocking armour units [6]. This paper describes the development of a method that allows individual armour units to move relative to each other. Such a...

A single-crate CAMAC system was configured to control a negative ion source development facility at ORNL and control software was written for the crate microcomputer. The software uses inputs from a touch panel and a shaft encoder to control the various operating parameters of the test facility and uses the touch panel to display the operating status. Communication to and from the equipment at ion source potential is accomplished over optical fibers from an ORNL-built CAMAC module. A receiver at ion source potential stores the transmitted data, and some of these stored values are then used to control discrete parameters of the ion source (i.e., power supply on or off). Other stored values are sent to a multiplexed digital-to-analog converter to provide analog control signals. A transmitter at ion source potential transmits discrete status information and several channels of analog data from an analog-to-digital converter back to the ground-potential receiver, where it is stored to be read and displayed by the software.
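The discrete/analog split described above can be sketched in miniature: on/off parameters packed into a status word, and analog setpoints tagged with a DAC channel number for the multiplexer. The word layout, bit widths, and function names below are illustrative assumptions, not the format of the ORNL CAMAC module.

```python
# Hypothetical sketch of the data path: discrete on/off parameters packed
# into one status word, and analog setpoints tagged with a channel number
# before transmission. Bit layout is an assumption, not the actual format.

def pack_discrete(flags):
    """Pack a list of booleans (power supply on/off, etc.) into one status word."""
    word = 0
    for bit, on in enumerate(flags):
        if on:
            word |= 1 << bit
    return word

def pack_analog(channel, value, full_scale=10.0, bits=12):
    """Tag a 12-bit DAC code with a channel number for the multiplexed DAC."""
    code = max(0, min((1 << bits) - 1, round(value / full_scale * ((1 << bits) - 1))))
    return (channel << bits) | code

status = pack_discrete([True, False, True])   # supplies 0 and 2 on
setpoint = pack_analog(channel=3, value=5.0)  # roughly half scale on channel 3
```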

Full Text Available This paper addresses the design and analysis of a voltage and frequency control (VFC) strategy for full converter (FC)-based wind energy conversion systems (WECSs) and its applicability to the supply of an isolated load. When supplying an isolated load, the role of the back-to-back converters in the FC must change with respect to a grid-connected application. Voltage and frequency are established by the FC line side converter (LSC), while the generator side converter (GSC) is responsible for maintaining constant voltage in the DC link. Thus, the roles of the converters in the WECS are inverted. Under such control strategies, the LSC will automatically supply the load power; hence, in order to maintain stable operation of the WECS, the wind turbine (WT) power must also be controlled in a load-following strategy. The proposed VFC is fully modelled and a stability analysis is performed. Then, the operation of the WECS under the proposed VFC is simulated and tested on a real-time test bench, demonstrating the performance of the VFC for the isolated operation of the WECS.
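The inverted converter roles can be illustrated with a toy control loop: the GSC holds the DC-link voltage with PI feedback while the load power passes through the LSC. A minimal sketch, assuming an idealized capacitor energy-balance model and invented parameters (not the paper's plant):

```python
# Toy sketch of GSC DC-link voltage regulation: a PI controller on the
# voltage error sets the generator-side power reference on top of a load
# feedforward. Capacitance, gains, and setpoints are illustrative assumptions.

def simulate_dc_link(v_ref=700.0, v0=650.0, c=0.01, p_load=5000.0,
                     kp=50.0, ki=400.0, dt=1e-3, steps=3000):
    v, integ = v0, 0.0
    for _ in range(steps):
        err = v_ref - v
        integ += err * dt
        p_gen = p_load + kp * err + ki * integ   # load feedforward + PI correction
        # capacitor energy balance: C * v * dv/dt = p_gen - p_load
        v += (p_gen - p_load) / (c * v) * dt
    return v

v_final = simulate_dc_link()   # settles close to the 700 V reference
```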

Teaching critical thinking, an educational goal widely discussed in the last 30 years (Halpern, 1993), is an essential element of professional and higher education as it promotes reasoned judgments under "conditions of uncertainty," a hallmark of professionalism (Levine, 2010; Shulman, 2005; Perry, 1970). In this study, the researchers…

Reflectors for bifacial PV-cells are simulated and prototyped in this work. The aim is to optimize the reflector for specific latitudes, particularly northern latitudes. Specifically, by using minimum semiconductor area the reflector must be able to deliver the required electrical power ... at the condition of minimum solar travel above the horizon, worst weather conditions, etc. We will test a bifacial PV-module with a retroreflector, and compare the output with simulations combined with local solar data.

climatic and environmental effects on PV systems. A discussion of results obtained from simulations is also ... abundant solar energy available throughout the year with a reserve estimate of 3.5 - 7.0 kW/m2/day [1]. In order to ... based MPPT method [4], fractional open-circuit ... Nigerian Journal of Technology, Vol. 31, No. 1, March ...

A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.
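As a minimal illustration of where the memory footprint comes from (not any of the assemblers compared in the report), a naive exact k-mer table grows with the number of distinct k-mers in the reads; memory-efficient methods replace it with disk partitioning or probabilistic structures such as Bloom filters:

```python
# Illustrative sketch only: counting all k-mers of the reads in an exact
# in-memory table. This is the structure whose footprint dominates naive
# assembly, and what memory-efficient assemblers avoid or compress.

from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTACGT", "CGTACGTA"]
table = kmer_counts(reads, 4)
# "ACGT" occurs twice in the first read and once in the second
```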

A simple control strategy has been implemented for maintaining the set values of voltage magnitude and frequency at the stator terminals of DFIG, which serve as a virtual grid for connecting ac loads and SCIM. Based on the availability of power in the wind, PHSP and battery, various operating modes of the proposed ...

Full Text Available A capacity for self-regulated learning (SRL) has long been recognised as an important factor in successful studies. Although educational researchers have started to investigate the concept of SRL in the context of online education, very little is yet known about SRL in relation to massive open online courses (MOOCs) or about appropriate strategies to foster SRL skills in MOOC learners. Self-regulation is particularly important in MOOC-based study, which demands effective independent learning, and where widely acknowledged high dropout rates are observed. This study reports an investigation and assessment of the concept of SRL using a novel MOOC platform (eLDa) that provides study options (either self-directed learning or instructor-led learning) through a novel learning tool. The research presents a general description of self-regulated learning and explores the various existing dimensions used to expose learners' SRL skills. Drawing comparisons within the online tool, the results and findings of the data were analysed. The study discusses how the various dimensions contributed to the knowledge representation of the self-regulated learning abilities shown by the learners. We present how these SRL dimensions, captured using the measuring instrument, contribute to our growing understanding of the distinctive features of the individual learner's self-regulated learning. MOOC success requires well-developed self-regulated learning abilities, yet at present few platforms have demonstrated this degree of support for SRL skills. This paper presents a preliminary evaluation of a novel e-learning tool, known as 'eLDa', developed to implement this investigation of self-regulation of learning. The research applied a modified online self-regulated learning questionnaire (OSLQ) as the instrument to measure SRL skills.
The modified questionnaire, known as the MOOC OSLQ (MOSLQ), was developed with a 19-item scale that exposes the six SRL dimensions used in this study.

The subsystems of the CMS silicon strip tracker were integrated and commissioned at the Tracker Integration Facility (TIF) in the period from November 2006 to July 2007. As part of the commissioning, large samples of cosmic ray data were recorded under various running conditions in the absence of a magnetic field. Cosmic rays detected by scintillation counters were used to trigger the readout of up to 15% of the final silicon strip detector, and over 4.7 million events were recorded. This document describes the cosmic track reconstruction and presents results on the performance of track and hit reconstruction obtained from dedicated analyses.

Highlights: • A complete state feedback controller for the voltage conditioning stage of a hybrid power plant is proposed. • The controller explicitly handles the state and controller constraints. • The developed control methodology can be applied to various power electronics architectures. - Abstract: In this paper, a complete control architecture is proposed for the voltage conditioning stage of a hybrid power generation system composed of a Stirling engine coupled with a supercapacitor. Such a solar energy-based generation system aims at providing electricity to off-grid regions. The novelty of the proposed architecture is that it completely handles constraints on all the state variables of the electric stage while providing near-optimal performance in terms of settling time. The derivation of the control law enables a deep understanding of the main issues involved in the success of the closed-loop control. Moreover, the resulting feedback laws are real-time compatible and are given in a complete explicit form.
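The constraint-handling idea can be conveyed with a much-simplified sketch: a saturated linear state feedback on a double-integrator stand-in for the voltage conditioning stage. The model, gains, and bounds are illustrative assumptions, not the paper's plant or its explicit control law (which also constrains the states, not only the input):

```python
# Much-simplified sketch: saturated linear state feedback u = sat(-K x) on a
# double integrator, stepped with explicit Euler. Conveys the real-time
# explicit-feedback flavour; all numbers are illustrative assumptions.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def step(x, k=(2.0, 1.0), u_min=-1.0, u_max=1.0, dt=0.01):
    """One Euler step of a double integrator under saturated state feedback."""
    pos, vel = x
    u = clamp(-(k[0] * pos + k[1] * vel), u_min, u_max)
    return (pos + vel * dt, vel + u * dt)

x = (1.0, 0.0)
for _ in range(5000):
    x = step(x)
# the state converges to the origin despite the input saturating early on
```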

Highlights: • An X-ray microbeam line for irradiation of living cultured cells was constructed. • A step by step explanation of working principles with engineering details, procedures and calculations is presented. • A model of beam and cell interaction is presented. • A method of uniform irradiation of living cells with an exact dose per cell is presented. • Results of preliminary experiments are presented. - Abstract: The article describes an X-ray microbeam standalone facility dedicated for irradiation of living cultured cells. The article can serve as a guide for the construction of such facilities, as it begins with engineering details, continues through mathematical modeling and experimental procedures, and ends with preliminary experimental results and conclusions. The presented system consists of an open type X-ray tube with microfocusing down to about 2 μm, an X-ray focusing system with optical elements arranged in the nested Kirkpatrick-Baez (or Montel) geometry, a sample stand and an optical microscope with a scientific digital CCD camera. For the beam visualisation an X-ray sensitive CCD camera and a spectral detector are used, as well as a scintillator screen combined with the microscope. A method of precise one-by-one irradiation of previously chosen cells is presented, as well as a fast method of uniform irradiation of a chosen sample area. Mathematical models of beam and cell with calculations of kerma and dose are presented. The experiments on dose-effect relationship, kinetics of DNA double strand breaks repair, as well as micronuclei observation were performed on PC-3 (Prostate Cancer) cultured cells. The cells were seeded and irradiated on Mylar foil, which covered a hole drilled in the Petri dish. DNA lesions were visualised with γ-H2AX marker combined with Alexa Fluor 488 fluorescent dye.
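The kerma calculation mentioned in the modelling section can be sketched for a monoenergetic beam as fluence times photon energy times the mass energy-transfer coefficient. The numbers below are round illustrative values, not the facility's calibration data:

```python
# Hedged sketch of a monoenergetic kerma estimate:
# kerma [Gy] = fluence [1/m^2] * photon energy [J] * (mu_tr / rho) [m^2/kg].
# Inputs are illustrative round numbers, not the facility's actual values.

def kerma_gray(fluence_per_m2, energy_keV, mu_tr_over_rho_m2_per_kg):
    energy_J = energy_keV * 1e3 * 1.602e-19   # keV -> J
    return fluence_per_m2 * energy_J * mu_tr_over_rho_m2_per_kg

# e.g. 1e15 photons/m^2 at 10 keV with mu_tr/rho ~ 0.5 m^2/kg (roughly water)
k = kerma_gray(1e15, 10.0, 0.5)   # about 0.8 Gy
```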

Full Text Available This paper presents an experimental work to design and size a diesel generator (DG). The basic system is equipped with a 1.5 kW self-excited induction generator (SEIG), a diesel motor (DM), a static voltage compensator (SVC) and controllers. A proportional integral controller is used to meet the requirement of the SEIG frequency regulation. A controlled voltage source is performed by using an SVC with a fuzzy controller, which adjusts voltage by varying the amount of the injected reactive power. An experimental set-up is used to identify the SEIG parameters and select the convenient bank of capacitors that minimize the SEIG start-up time and fix the convenient voltage margin. The system has been tested by simulation using models implemented by Matlab/Simulink software. The simulation results confirm the efficiency of the proposed strategy of voltage regulation. Keywords: Diesel motor, SEIG, SVC, Voltage regulation, Frequency regulation
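The proportional integral frequency loop can be sketched with a toy first-order model of the diesel-SEIG shaft: the PI controller on the frequency error drives the throttle command against a constant load. Gains, time constant, and load are invented for illustration, not taken from the experimental set-up:

```python
# Toy sketch of PI frequency regulation: first-order shaft/frequency response
# to the throttle command, with a constant load disturbance. All parameters
# are illustrative assumptions.

def regulate_frequency(f_ref=50.0, f0=48.0, load=1.6, kp=0.8, ki=2.0,
                       tau=0.5, dt=0.01, steps=2000):
    f, integ = f0, 0.0
    for _ in range(steps):
        err = f_ref - f
        integ += err * dt
        u = kp * err + ki * integ      # PI throttle command
        f += (u - load) / tau * dt     # first-order frequency dynamics
    return f

f_final = regulate_frequency()   # the integral term removes the steady-state error
```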

This book provides a clear introduction to this important area of statistics. The author provides wide coverage of different kinds of multilevel models, and of how to interpret different statistical methodologies and algorithms applied to such models. This 4th edition reflects the growth of interest in this area and is updated to include new chapters on multilevel models with mixed response types, smoothing and multilevel data, models with correlated random effects and modeling with variance.

Intensity Modulated Radiation Therapy (I.M.R.T.) is currently considered as a technique of choice to increase the local control of the tumour while reducing the dose to surrounding organs at risk. However, its routine clinical implementation is partially held back by the excessive amount of work required to prepare the patient treatment. In order to increase the efficiency of the treatment preparation, two axes of work have been defined. The first axis concerned the automatic optimisation of beam orientations. We integrated the simplex algorithm in the treatment planning system. Starting from the dosimetric objectives set by the user, it can automatically determine the optimal beam orientations that best cover the target volume while sparing organs at risk. In addition to time saving, the simplex results of three patients with a cancer of the oropharynx showed that the quality of the plan is also increased compared to a manual beam selection. Indeed, for an equivalent or even a better target coverage, it reduces the dose received by the organs at risk. The second axis of work concerned the optimisation of pre-treatment quality control. We used an industrial method: Statistical Process Control (S.P.C.) to retrospectively analyse the absolute dose quality control results performed using an ionisation chamber at Centre Alexis Vautrin (C.A.V.). This study showed that S.P.C. is an efficient method to reinforce treatment security using control charts. It also showed that our dose delivery process was stable and statistically capable for prostate treatments, which implies that a reduction of the number of controls can be considered for this type of treatment at the C.A.V. (author)
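The S.P.C. analysis of dose QC results rests on control charts: individual measurements are checked against limits set at plus or minus three standard deviations of a reference period. A minimal individuals-chart sketch follows, using invented dose-deviation data rather than the C.A.V. measurements:

```python
# Minimal Shewhart individuals-chart sketch: control limits at mean +/- 3 sd
# of a reference period, then flag later points outside the limits.
# The dose-deviation values (%) are invented for illustration.

def control_limits(reference):
    n = len(reference)
    mean = sum(reference) / n
    sd = (sum((x - mean) ** 2 for x in reference) / (n - 1)) ** 0.5
    return mean - 3 * sd, mean + 3 * sd

def out_of_control(series, limits):
    lo, hi = limits
    return [i for i, x in enumerate(series) if not lo <= x <= hi]

reference = [0.5, -0.2, 0.1, 0.4, -0.3, 0.0, 0.2, -0.1]   # in-control period
limits = control_limits(reference)
alarms = out_of_control([0.1, 0.3, 2.5, -0.2], limits)    # the 2.5% point alarms
```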

The forensic fingerprint community has faced increasing amounts of criticism by scientific and legal commentators, challenging the validity and reliability of fingerprint evidence due to the lack of an empirically demonstrable basis to evaluate and report the strength of the evidence in a given case. This paper presents a method, developed as a stand-alone software application, FRStat, which provides a statistical assessment of the strength of fingerprint evidence. The performance was evaluated using a variety of mated and non-mated datasets. The results show strong performance characteristics, often with values supporting specificity rates greater than 99%. This method provides fingerprint experts the capability to demonstrate the validity and reliability of fingerprint evidence in a given case and report the findings in a more transparent and standardized fashion with clearly defined criteria for conclusions and known error rate information thereby responding to concerns raised by the scientific and legal communities. Published by Elsevier B.V.

Earth science data are being collected for various science needs and applications, processed using different algorithms at multiple resolutions and coverages, and then archived at different archiving centers for distribution and stewardship causing difficulty in data discovery. Curation, which typically occurs in museums, art galleries, and libraries, is traditionally defined as the process of collecting and organizing information around a common subject matter or a topic of interest. Curating data sets around topics or areas of interest addresses some of the data discovery needs in the field of Earth science, especially for unanticipated users of data. This paper describes a methodology to automate search and selection of data around specific phenomena. Different components of the methodology including the assumptions, the process, and the relevancy ranking algorithm are described. The paper makes two unique contributions to improving data search and discovery capabilities. First, the paper describes a novel methodology developed for automatically curating data around a topic using Earth science metadata records. Second, the methodology has been implemented as a stand-alone web service that is utilized to augment search and usability of data in a variety of tools.
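A hypothetical miniature of the curation step (not the paper's actual relevancy ranking algorithm): score each metadata record by its overlap with a topic vocabulary, then keep and rank the records above a threshold:

```python
# Hypothetical keyword-overlap scorer for curating records around a topic.
# The scoring scheme, records, and topic terms are illustrative assumptions.

def relevance(record, topic_terms):
    words = record.lower().split()
    return sum(words.count(t) for t in topic_terms)

def curate(records, topic_terms, threshold=1):
    scored = [(relevance(r, topic_terms), r) for r in records]
    return [r for s, r in sorted(scored, reverse=True) if s >= threshold]

records = [
    "Hurricane wind speed observations from buoys",
    "Soil moisture retrievals over croplands",
    "Hurricane precipitation estimates from satellite radar",
]
ranked = curate(records, ["hurricane", "wind", "precipitation"])
# only the two hurricane-related records survive the threshold
```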

This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a

Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

A newly developed test statistic for equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, has been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments) approach, which is a merging algorithm for single channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR. The results clearly show an improved segmentation performance for the full polarimetric algorithm compared to single channel approaches.
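For equal numbers of looks n in both segments, a likelihood-ratio test statistic for equality of two Wishart-distributed covariance matrices takes the form ln Q = n(2p ln 2 + ln|X| + ln|Y| - 2 ln|X + Y|), with Q = 1 (ln Q = 0) when the sample covariances are identical and ln Q < 0 otherwise. A sketch of that special case, using real symmetric 2x2 matrices as simplified stand-ins for the complex Hermitian case:

```python
# Sketch of the equal-looks likelihood-ratio statistic for equality of two
# covariance matrices: ln Q = n(2p ln 2 + ln|X| + ln|Y| - 2 ln|X+Y|).
# Real symmetric 2x2 matrices stand in for complex Hermitian ones here.

import math

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def ln_q(x, y, n, p=2):
    s = [[x[i][j] + y[i][j] for j in range(p)] for i in range(p)]
    return n * (2 * p * math.log(2) + math.log(det2(x)) + math.log(det2(y))
                - 2 * math.log(det2(s)))

x = [[2.0, 0.1], [0.1, 1.0]]
value = ln_q(x, x, n=16)   # identical covariances give the maximum, ln Q = 0
```

In a merge-based segmentation, a large Q (ln Q near 0) supports merging two segments, while a strongly negative ln Q argues for keeping them separate.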

Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
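Two of the fundamental algorithms named above, sketched briefly in Python (the book itself implements its algorithms in C++):

```python
# The sieve of Eratosthenes and the Euclidean algorithm, two of the
# classical algorithms the book covers, as short self-contained sketches.

def sieve(n):
    """Sieve of Eratosthenes: all primes up to and including n."""
    is_prime = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

def gcd(a, b):
    """Euclidean algorithm for the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a

primes = sieve(30)   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
g = gcd(252, 105)    # 21
```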

Full Text Available Abstract Background Target genes of the transcription factor (TF) Pou5f1 (Oct3/4 or Oct4), which is essential for pluripotency maintenance and self-renewal of embryonic stem (ES) cells, have previously been identified based on their response to Pou5f1 manipulation and the occurrence of chromatin-immunoprecipitation (ChIP) binding sites in promoters. However, many responding genes with binding sites may not be direct targets, because the response may be mediated by other genes and a ChIP-binding site may not be functional in terms of transcription regulation. Results To reduce the number of false positives, we propose to separate responding genes into groups according to direction, magnitude, and time of response, and to apply the false discovery rate (FDR) criterion to each group individually. Using this novel algorithm with a stringent FDR criterion applied to Pou5f1 suppression data and published ChIP data, we identified 420 tentative target genes (TTGs) for Pou5f1. The majority of TTGs (372) were down-regulated after Pou5f1 suppression, indicating that Pou5f1 functions as an activator of gene expression when it binds to promoters. Interestingly, many activated genes are potent suppressors of transcription, which include polycomb genes, zinc finger TFs, chromatin remodeling factors, and suppressors of signaling. Similar analysis showed that Sox2 and Nanog also function mostly as transcription activators in cooperation with Pou5f1. Conclusion We have identified the most reliable sets of direct target genes for key pluripotency genes (Pou5f1, Sox2, and Nanog), and found that they predominantly function as activators of downstream gene expression. Thus, most genes related to cell differentiation are suppressed indirectly.
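The group-wise FDR idea can be sketched with the standard Benjamini-Hochberg procedure applied separately within each response group rather than to the pooled p-values. The procedure itself is standard; the group names and p-values below are invented:

```python
# Benjamini-Hochberg FDR control applied per response group, as a sketch of
# the grouping strategy described above. P-values and groups are invented.

def bh_reject(pvalues, alpha=0.05):
    """Benjamini-Hochberg: indices of hypotheses rejected at FDR level alpha."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    m = len(pvalues)
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k = rank   # largest rank passing the stepped-up threshold
    return sorted(order[:k])

groups = {
    "down_early": [0.001, 0.02, 0.8],
    "up_late": [0.04, 0.5, 0.9],
}
hits = {g: bh_reject(p) for g, p in groups.items()}
# the down-regulated group yields two hits; the up-regulated group yields none
```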

We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
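A harmonic Poisson process (intensity c/x) can be simulated on an interval [a, b] by the time change x = a * exp(t / c) applied to a unit-rate homogeneous Poisson process, since the integrated intensity is c * ln(x / a); the scale invariance of harmonic statistics shows up as log-uniformly spread points. A sketch with illustrative parameters:

```python
# Simulation sketch of a Poisson process with harmonic intensity c/x on
# [a, b], via the time change x = a * exp(t / c) of a unit-rate homogeneous
# Poisson process. Parameters are illustrative.

import math
import random

def harmonic_poisson(a, b, c, rng):
    """Ordered points of a Poisson process with intensity c/x on [a, b]."""
    points, t = [], 0.0
    t_max = c * math.log(b / a)   # total integrated intensity on [a, b]
    while True:
        t += rng.expovariate(1.0)
        if t > t_max:
            return points
        points.append(a * math.exp(t / c))

rng = random.Random(1)
pts = harmonic_poisson(1.0, 100.0, 50.0, rng)
# expected number of points: 50 * ln(100), roughly 230
```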

This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field. Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit ... by accounting for the significance of the materials and the equipment that enter into the production of statistics. Key words: Reversible statistics, diverse materials, constructivism, economics, science, and technology.

Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for

Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co

This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics, which up until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even not applicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...

In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.
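One concrete candidate for such a statistic, offered purely as an illustration (the record does not name one), is the Shannon entropy of the pixel histogram: flat, uninformative images score low, while varied images score high and can be ranked first:

```python
# Illustrative image-interest statistic: Shannon entropy of the pixel
# histogram. Toy 1-D pixel lists stand in for images here.

import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy (bits) of the pixel value histogram."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

flat = [7] * 64            # uniform image: zero entropy, no information
busy = list(range(64))     # every pixel different: maximal entropy (6 bits)
ranked = sorted([flat, busy], key=entropy, reverse=True)   # busy ranks first
```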

The drawbacks of using 19th-century mathematics in physics and astronomy are illustrated. To continue with the expansion of the knowledge about the cosmos, the scientists will have to come in terms with modern statistics. Some researchers have deliberately started importing techniques that are used in medical research. However, the physicists need to identify the brand of statistics that will be suitable for them, and make a choice between the Bayesian and the frequentists approach. (Edited abstract).

The lightning jump algorithm has a robust history of correlating upward trends in lightning to severe and hazardous weather occurrence. The algorithm exploits the physical link between an updraft's ability to produce the microphysical and kinematic conditions conducive to electrification and that same updraft's role in the development of severe weather. Recent work has demonstrated that the lightning jump algorithm concept holds significant promise in the operational realm, aiding in the identification of thunderstorms that have the potential to produce severe or hazardous weather. However, in spite of these positive results, a large amount of work still needs to be completed. The total lightning jump algorithm is not a stand-alone concept that can be used independently of other meteorological measurements, parameters, and techniques. For example, the algorithm is highly dependent upon thunderstorm tracking to build lightning histories on convective cells. Current tracking methods show that thunderstorm cell tracking is most reliable, and cell histories are most accurate, when radar information is incorporated with lightning data. In the absence of radar data, cell tracking is somewhat less reliable, but the value added by the lightning information is much greater. For optimal application, the algorithm should be integrated with other measurements that assess storm-scale properties (e.g., satellite, radar). Therefore, the recent focus of this research effort has been assessing the lightning jump's relation to thunderstorm tracking, meteorological parameters, and its potential uses in operational meteorology. Furthermore, the algorithm must be tailored for the optically based GOES-R Geostationary Lightning Mapper (GLM), as what has been observed using Very High Frequency Lightning Mapping Array (VHF LMA) measurements will not translate exactly to what will be observed by GLM, due to resolution and other instrument differences. Herein, we present some of
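The core "jump" detection can be sketched as follows. This is a simplified, hypothetical illustration of the sigma-level idea behind such algorithms (flag a cell when its latest flash-rate increase stands out against its own recent history), not the operational GLM or LMA implementation, and the threshold factor and history length are assumptions for the example.

```python
from statistics import mean, stdev

def lightning_jump(flash_rates, sigma_level=2.0):
    """Flag a 'lightning jump' in a cell's total flash-rate history.

    flash_rates: per-interval total flash rates (e.g. flashes/min)
    for one tracked convective cell, oldest first. Returns True when
    the most recent rate increase exceeds the mean plus
    sigma_level * stdev of the cell's earlier rate changes.
    Simplified sketch; real systems also need cell tracking and QC.
    """
    if len(flash_rates) < 6:
        return False  # not enough history on this cell yet
    # Interval-to-interval changes in flash rate (a crude DFRDT).
    changes = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
    history, current = changes[:-1], changes[-1]
    threshold = mean(history) + sigma_level * stdev(history)
    return current > threshold

# A cell with a steady flash rate, then a sudden surge:
rates = [2, 3, 2, 4, 3, 3, 15]
print(lightning_jump(rates))
```

This also makes the abstract's dependency on tracking concrete: the function only works if `flash_rates` really belongs to one cell, which is why radar-aided cell tracking matters so much upstream of the jump test.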

The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...

This book on statistical mechanics is self-sufficient and written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is developed gradually and thoroughly. All three types of statistical distribution functions are derived separately, with their range of applications and limitations. The non-interacting ideal Bose gas and Fermi gas are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed matter physics, and transport phenomena - thermal and electrical conductivity, the Hall effect, magnetoresistance, viscosity, diffusion, etc. - are discussed. A basic understanding of the Ising model is given to explain phase transitions. The book ends with detailed coverage of the method of ensembles (namely microcanonical, canonical, and grand canonical) and their applications. Various numerical and conceptual problems ar...

Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses. Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody

All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep