Most classical scheduling formulations assume a fixed, known duration for each activity. In this paper, we weaken this assumption, requiring instead only that each duration be representable as an independent random variable with known mean and variance. The best solutions are those with a high probability of achieving a good makespan. We first develop a theoretical framework, formally showing how Monte Carlo simulation can be combined with deterministic scheduling algorithms to solve this problem. We propose an associated deterministic scheduling problem whose solution is proved, under certain conditions, to be a lower bound for the probabilistic problem. We then propose and investigate a number of techniques for solving such problems, based on combinations of Monte Carlo simulation, solutions to the associated deterministic problem, and either constraint programming or tabu search. Our empirical results demonstrate that a combination of the use of the associated deterministic problem and Monte Carlo sim...

Time-Triggered Controller Area Network (TTCAN) is widely accepted as a viable solution for real-time communication systems such as in-vehicle networks. However, although TTCAN has been designed to support both periodic and sporadic real-time messages, previous studies have mostly focused on providing deterministic real-time guarantees for periodic messages while barely addressing the performance of sporadic messages. In this paper, we present an O(n^2) scheduling algorithm that minimizes the maximum duration of the exclusive windows occupied by periodic messages, thereby minimizing the worst-case scheduling delays experienced by sporadic messages.

In recent years, the joint distribution properties of drought characteristics (e.g. severity, duration and intensity) have been widely evaluated using copulas. However, the history of copulas in modelling drought characteristics obtained from streamflow data is still short, especially in semi-arid regions such as Turkey. In this study, unlike previous studies, drought events are characterized by annual maximum severity (AMS) and corresponding duration (CD), extracted from daily streamflow at seven gauge stations located in the Çoruh Basin, Turkey. After evaluation of various univariate distributions, the Exponential, Weibull and Logistic distributions are identified as marginal distributions for the AMS and CD series. Archimedean copulas, namely Ali-Mikhail-Haq, Clayton, Frank and Gumbel-Hougaard, are then employed to model the joint distribution of the AMS and CD series. With respect to the Anderson-Darling and Cramér-von Mises statistical tests and the tail dependence assessment, the Gumbel-Hougaard copula is identified as the most suitable model for joint modelling of the AMS and CD series at each station. Furthermore, the fitted Gumbel-Hougaard copulas are used to derive the conditional and joint return periods of the AMS and CD series, which can be useful for the design and management of reservoirs in the basin.
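
The Gumbel-Hougaard copula selected above has a simple closed-form CDF, C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)) with theta >= 1. The sketch below evaluates that standard formula only; it is an illustration, not the authors' code, and the theta value would in practice be estimated from the AMS-CD data.

```python
import math

def gumbel_hougaard(u, v, theta):
    """Joint CDF C(u, v) of the Gumbel-Hougaard copula (theta >= 1).

    theta = 1 gives independence (C(u, v) = u * v); larger theta gives
    stronger upper-tail dependence, which suits jointly extreme drought
    severity and duration."""
    if not (0.0 < u <= 1.0 and 0.0 < v <= 1.0):
        raise ValueError("u and v must lie in (0, 1]")
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))
```

Setting v = 1 recovers the marginal (C(u, 1) = u), a quick sanity check when fitting.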

We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server, and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor, and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/eps^2)-competitive with (1+eps)-speed for unit-sized pages and with (2+eps)-speed for different-sized pages. This improves on the algorithm in [12], which required (2+eps)-speed and (4+eps)-speed respectively. In addition we show that the algori...
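
For intuition about the FIFO policy analyzed above, the following sketch simulates FIFO broadcast scheduling and reports the maximum response time. It is a simplified illustration, not the paper's model: it assumes unit-sized pages, integer arrival times, and one broadcast per time slot.

```python
def fifo_max_response(requests):
    """FIFO broadcast scheduling with unit-sized pages.

    requests: list of (arrival_time, page) pairs with integer arrivals.
    At each slot the server broadcasts the page whose oldest outstanding
    request arrived earliest; one broadcast satisfies every outstanding
    request for that page.  Returns the maximum response time."""
    requests = sorted(requests)
    i, t, max_resp = 0, 0, 0
    pending = {}  # page -> arrival times of outstanding requests
    while i < len(requests) or pending:
        # admit everything that has arrived by time t
        while i < len(requests) and requests[i][0] <= t:
            arr, page = requests[i]
            pending.setdefault(page, []).append(arr)
            i += 1
        if not pending:
            t = requests[i][0]  # idle until the next arrival
            continue
        # FIFO: broadcast the page with the earliest waiting request
        page = min(pending, key=lambda q: min(pending[q]))
        finish = t + 1  # unit-sized page takes one slot
        for arr in pending.pop(page):
            max_resp = max(max_resp, finish - arr)
        t = finish
    return max_resp
```

With two pages requested at time 0 the second broadcast finishes at time 2, so the maximum response time is 2.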

Chemical accidents often involve releases of a total mass, Q, of stored material in a tank over a time duration, td, of less than a few minutes. The value of td is usually uncertain because of lack of knowledge of key information, such as the size and location of the hole and the pressure and temperature of the chemical. In addition, it is rare that eyewitnesses or video cameras are present at the time of the accident. For inhalation hazards, serious health effects (such as damage to the respiratory system) are determined by short-term averages of the concentration, C. Examples of pressurized liquefied chlorine releases from tanks are given, focusing on scenarios from the Jack Rabbit I (JR I) field experiment. The analytical calculations and the predictions of the SLAB dense gas dispersion model agree that the ratio of maximum C for two different td's is greatest (as much as a factor of ten) near the source. At large distances (beyond a few km for the JR I scenarios), where the travel time tt exceeds both td's, the ratio of maximum C approaches unity.

Two experiments explored preference and resistance to change in concurrent chains in which the terminal links were variable-interval schedules that ended either after a single reinforcer had been delivered (variable duration) or after a fixed period of access to the schedule (constant duration). In Experiment 1, pigeons' preference between the same pair of terminal links overmatched relative reinforcement rate when the terminal links were of constant duration, but not when they were of variable duration. Responding during the richer terminal link decreased less, relative to baseline, when response-independent food was presented during the initial links according to a variable-time schedule. In Experiment 2, all subjects consistently preferred a terminal link that consisted of 20-s access to a variable-interval 20-s schedule over a terminal link that ended after one reinforcer had been delivered by the same schedule. Results of resistance-to-change tests corresponded to preference, as responding during the constant-duration terminal link decreased less, relative to baseline, when disrupted by both response-independent food during the initial links and prefeeding. Overall, these data extend the general covariation of preference and resistance to change seen in previous studies. However, they suggest that reinforcement numerosity, including variability in the number of reinforcers per terminal-link entry, may sometimes affect preference and resistance to change in ways that are difficult to explain in terms of current models.

In two experiments, demand curves were generated by exposing rats to a sequence of fixed-duration schedules in which the response requirement doubled each experimental session. Holding down the response lever for the requisite amount of time resulted in the delivery of sweetened condensed milk. Response durations shorter than those required for reinforcer delivery did not result in any programmed consequences, nor were cumulative durations across multiple presses applied towards the duration requirements. The number of reinforcer deliveries decreased as a function of reinforcer requirements. Reinforcer delays alone also decreased consumption, but to a lesser extent than increasing duration requirements. Results are congruent with previous research demonstrating that parameters of reinforcement schedules may have similar effects on both continuous and discrete dimensions of operant behavior. Hursh and Silberberg's (2008) exponential demand equation provided a good fit for several of the data sets.
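
Hursh and Silberberg's (2008) exponential demand equation cited above has the standard form log10(Q) = log10(Q0) + k(e^(-alpha*Q0*C) - 1). A minimal sketch of the formula follows; the parameter values in the test are hypothetical, not the paper's fitted values.

```python
import math

def exponential_demand(C, Q0, alpha, k):
    """Predicted consumption Q at price (response requirement) C under the
    exponential demand equation of Hursh and Silberberg (2008):
        log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * C) - 1)
    Q0: consumption at zero price; alpha: rate of decline in consumption
    (essential value); k: span of the consumption data in log10 units."""
    return 10.0 ** (math.log10(Q0) + k * (math.exp(-alpha * Q0 * C) - 1.0))
```

At C = 0 the exponential term vanishes and Q equals Q0; consumption then declines monotonically as price rises.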

We present a novel two-stage dynamic scheduling approach for earth observation satellites that supports emergency response by making full use of the duration of imaging task execution. In the first stage, the multiobjective genetic algorithm NSGA-II is used to produce an optimal satellite imaging schedule that is robust to dynamic adjustment as emergent events occur. In the second stage, when emergent events do occur, a dynamic adjusting heuristic algorithm (CTM-DAHA) is applied to insert new tasks into the robust imaging schedule. Unlike existing dynamic scheduling methods, the imaging duration is embedded in both stages to make full use of current satellite resources. In the robust scheduling stage, total task execution time is used as a robustness indicator to obtain a satellite schedule with less imaging time; in other words, more imaging time is preserved for future emergent events. In the dynamic adjustment stage, a compact task merging strategy is applied to combine existing tasks and emergency tasks into a composite task with the least imaging time. Simulated experiments indicate that the proposed method produces a more robust and effective satellite imaging schedule.

It is well known that downlink traffic will be much greater than uplink traffic in 3G and beyond. High Speed Downlink Packet Access (HSDPA) is the solution for high-speed downlink packet service in UMTS, and Maximum C/I scheduling is one of the important algorithms for its performance enhancement. An improved scheme, the Thorough Maximum C/I scheduling algorithm, is presented in this article, in which every transmitted frame has the maximum C/I. The simulation results show that the new Maximum C/I scheme outperforms the conventional scheme in both throughput and delay performance, and that the FER decreases faster as the maximum number of retransmissions increases.

This paper investigates a single-machine two-agent scheduling problem of minimizing the maximum cost with position-dependent processing times. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our setting, the actual processing time of a job is given by a variable function of the position of the job in the sequence. Each agent wants to minimize the maximum cost of its own jobs. We develop a feasible method that generates all Pareto optimal points in polynomial time.

Scheduling in onsite construction is based on commitments. Unmet commitments result in non-completions, which lead to waste. Moreover, it is important that commitments are realistic, to avoid both positive and negative variation in duration. Negative variation is destructive to plans and schedules and results in delays, while positive variation is destructive to productivity by creating unexploited gaps between activities, thus inducing unexploited capacity. By registering non-completions at three construction sites, the magnitude of activities inducing negative variation has been mapped. In total, 5,424 activities were registered, of which 1,450 ended as non-completions. Thus, 27% of the scheduled activities did not finish on schedule. Both positive and negative variation can be minimized by improving the quality of the commitments. Moreover, positive variation can be exploited by: (a...

Previous research on preference between variable-interval terminal links in concurrent chains has most often used variable-duration terminal links ending with a single reinforcer. By contrast, most research on resistance to change in multiple schedules has used constant-duration components that include variable numbers of reinforcers in each presentation. Grace and Nevin (1997) examined both preference and resistance in variable-duration components; here, preference and resistance were examined in constant-duration components. Reinforcer rates were varied across eight conditions, and a generalized-matching-law analysis showed that initial-link preference strongly over-matched terminal-link reinforcer ratios. In multiple schedules, baseline response rates were unaffected by reinforcer rates, but resistance to intercomponent food, to extinction, and to intercomponent food plus extinction was greater in the richer component. The between-component difference in resistance to change exhibited additive effects for the three resistance tests, and was systematically related to reinforcer ratios. However, resistance was less sensitive to reinforcer ratios than was preference. Resistance to intercomponent food and to intercomponent food plus extinction was more sensitive to reinforcer ratios in the present study than in Grace and Nevin (1997). Thus, relative to variable-duration components, constant-duration components increased the sensitivity of both preference and relative resistance, supporting the proposition that these are independent and convergent measures of the effects of a history of reinforcement.
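
The generalized-matching-law analysis mentioned above fits log(B1/B2) = a * log(r1/r2) + log c, where a sensitivity a > 1 indicates overmatching. The sketch below is a minimal least-squares version of that fit; the data in the test are synthetic, not from the study.

```python
import math

def matching_sensitivity(behavior_ratios, reinforcer_ratios):
    """Least-squares fit of the generalized matching law
        log10(B1/B2) = a * log10(r1/r2) + log10(c).
    Returns (a, log10_c): a is sensitivity (a > 1 = overmatching),
    log10_c is log bias."""
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx
```

Feeding it behavior ratios that equal the reinforcer ratios raised to the 1.5 power recovers a = 1.5 with zero bias, the overmatching pattern described above.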

In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected as the one with the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to identify the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
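
Once a GEV has been fitted to an annual maximum series, return values like those derived above follow from the standard quantile formula z_T = mu + (sigma/xi) * [(-ln(1 - 1/T))^(-xi) - 1], with the Gumbel limit at xi = 0. The sketch below implements that formula with illustrative parameters only; the study's fitted values are not reproduced here.

```python
import math

def gev_return_level(T, loc, scale, shape):
    """T-year return level for a GEV(loc, scale, shape) fitted to annual
    maxima.  Convention: shape > 0 is the heavy-tailed (Frechet) case,
    shape < 0 the bounded (Weibull) case."""
    y = -math.log(1.0 - 1.0 / T)        # reduced variate for return period T
    if abs(shape) < 1e-12:              # Gumbel limit as shape -> 0
        return loc - scale * math.log(y)
    return loc + (scale / shape) * (y ** (-shape) - 1.0)
```

For the standard Gumbel case (loc 0, scale 1) the 100-year level is about 4.60, and return levels grow with T, more steeply for heavier tails.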

This paper investigates a scheduling problem on a single machine with maintenance, in which the starting time of the maintenance is given in advance but its duration depends on the load of the machine before the maintenance. The goal is to minimize the makespan. We formulate it as an integer programming model and show that it is NP-hard in the ordinary sense. Then, we propose an FPTAS and point out that a special case is polynomially solvable. Finally, we design fast heuristic algorithms to solve the scheduling problem. Numerical experiments are implemented to evaluate the performance of the proposed heuristic algorithms. The results show that the proposed heuristic algorithms are effective.

Knowing the properties of precipitation (amount, duration, intensity, spatial and temporal variation, etc.), the primary input of water resources, is required for planning, design, construction and operation studies in sectors such as water resources, agriculture, urbanization, drainage, flood control and transportation. For these practices, reliable and realistic estimates based on existing observations should be made, and the first step of making a reliable estimate is to test the reliability of the existing observations. In this study, the Kolmogorov-Smirnov, Anderson-Darling and Chi-square goodness-of-fit tests were applied to determine which distribution the measured standard-duration maximum precipitation values (1929-2005) fit at the meteorological stations operated by the Turkish State Meteorological Service (DMİ) in the city and town centers of the Aegean Region. While all observations fit the GEV distribution according to the Anderson-Darling test, short, mid-term and long duration precipitation observations generally fit the GEV, Gamma and Log-normal distributions according to the Kolmogorov-Smirnov and Chi-square tests. To estimate the parameters of the chosen probability distributions, the maximum likelihood (LN2, LN3, EXP2, Gamma3), probability-weighted moments (LP3, Gamma2), L-moments (GEV) and least squares (Weibull2) methods were used, depending on the distribution.

Two different models for analyzing extreme hydrologic events, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto distribution for modeling threshold exceedances, corresponding to a generalized extreme value distribution for annual maxima. The performance of the two models in terms of the uncertainty of the T-year event estimator is evaluated in the cases of estimation with, respectively, the maximum likelihood (ML) method, the method of moments (MOM), and the method of probability weighted moments (PWM) ... model with ML estimation for large positive shape parameters. Since heavy-tailed distributions, corresponding to negative shape parameters, are by far the most common in hydrology, the PDS model generally is to be preferred for at-site quantile estimation.

Decreased maximum oxygen consumption (VO2max) during and after space flight may impair a crewmember's ability to perform mission-critical work that is high intensity and/or long duration in nature (Human Research Program Integrated Research Plan Risk 2.1.2: Risk of Reduced Physical Performance Capabilities Due to Reduced Aerobic Capacity). When VO2max was measured in Space Shuttle experiments, investigators reported that it did not change during short-duration space flight but decreased immediately after flight. Similar conclusions, based on the heart rate (HR) response of Skylab crewmembers, were made previously concerning long-duration space flight. Specifically, no change in the in-flight exercise HR response in 8 of 9 Skylab crewmembers indicated that VO2max was maintained during flight, but the elevated exercise HR after flight indicated that VO2max was decreased after landing. More recently, a different pattern of in-flight exercise HR response, and assumed changes in VO2max, emerged from routine testing of International Space Station (ISS) crewmembers. Most ISS crewmembers experience an elevated in-flight exercise HR response early in their mission, with a gradual return toward preflight levels as the mission progresses. Similar to previous reports, exercise HR is elevated after ISS missions and returns to preflight levels by 30 days after landing. VO2max has not been measured either during or after long-duration space flight. The purposes of the ISS VO2max experiment are (1) to measure VO2max during and after long-duration spaceflight, and (2) to determine if submaximal exercise test results can be used to accurately estimate VO2max.

We consider a single-machine scheduling problem with multiple maintenance activities, where the maintenance duration function is of the linear form f(t) = a + bt with a >= 0 and b > 1. We propose an approximation algorithm, FFD-LS2I, with a worst-case bound of 2 for the problem. We also show that there is no polynomial-time approximation algorithm with a worst-case bound less than 2 for the problem with b >= 0 unless P = NP, which implies that, from the worst-case-bound point of view, FFD-LS2I is the best possible algorithm for the case b > 1, and that the FFD-LS algorithm proposed in the literature is the best possible algorithm for the case b <= 1.
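
To make the linear maintenance-duration form concrete: if jobs are grouped into batches and a maintenance of duration f(t) = a + b*t follows each batch with load t, the makespan of a given batch assignment can be computed directly. The sketch below is a hypothetical illustration of that cost structure, not the FFD-LS2I algorithm itself; the batch assignment is an input, not optimized.

```python
def makespan_with_maintenance(batches, a, b):
    """Makespan when jobs run in batches separated by maintenance whose
    duration is f(t) = a + b * t, where t is the total processing time of
    the batch just completed (the linear form from the abstract).

    batches: list of lists of job processing times, in execution order."""
    total = 0.0
    for i, batch in enumerate(batches):
        load = sum(batch)
        total += load
        if i < len(batches) - 1:  # no maintenance needed after the last batch
            total += a + b * load
    return total
```

For batches [[2, 3], [4]] with a = 1 and b = 0.5, the makespan is 5 + (1 + 2.5) + 4 = 12.5; because b multiplies the batch load, balancing loads across batches directly shapes the total maintenance time.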

Rainfall intensity-frequency-duration (IFD) relationships are commonly required for the design and planning of water supply and management systems around the world. Currently, IFD information is based on the "stationary climate assumption" that weather at any point in time will vary randomly and that the underlying climate statistics (including both averages and extremes) will remain constant irrespective of the period of record. However, the validity of this assumption has been questioned over the last 15 years, particularly in Australia, following an improved understanding of the significant impact of climate variability and change occurring on interannual to multidecadal timescales. This paper provides evidence of regime shifts in annual maximum rainfall time series (1913-2010) using 96 daily rainfall stations and 66 sub-daily rainfall stations across Australia. Furthermore, the effect of these regime shifts on the resulting IFD estimates is explored for three long-term (1913-2010) sub-daily rainfall records (Brisbane, Sydney, and Melbourne) utilizing insights into multidecadal climate variability. It is demonstrated that IFD relationships may under- or over-estimate the design rainfall depending on the length and time period spanned by the rainfall data used to develop the IFD information. It is recommended that regime shifts in annual maximum rainfall be explicitly considered and appropriately treated in the ongoing revisions of the Engineers Australia guide to estimating and utilizing IFD information, Australian Rainfall and Runoff (ARR), and that clear guidance needs to be provided on how to deal with the issue of regime shifts in extreme events (irrespective of whether this is due to natural or anthropogenic climate change). The findings of our study also have important implications for other regions of the world that exhibit considerable hydroclimatic variability and where IFD information is based on relatively short data sets.

... Corporation found that increases in schedule effort tend to be the reason for increases in the cost of acquiring a new weapons system ... in-depth finance and schedule data for selected programs (Brown et al., 2015). We also give extra focus on Research, Development, Test & Evaluation ...

BACKGROUND: With rapid urbanization accompanied by lifestyle changes, children and adolescents living in metropolitan areas are faced with many time use choices that compete with sleep. This study reports on the sleep hygiene of urban Chinese school students, and investigates the relationship between habitual after-school activities and sleep duration, schedule and quality on a regular school day. METHODS: Cross-sectional, school-based survey of school children (Grades 4-8) living in Shanghai, China, conducted in 2011. Self-reported data were collected on students' sleep duration and timing, sleep quality, habitual after-school activities (i.e. homework, leisure-time physical activity, recreational screen time and school commuting time), and potential correlates. RESULTS: Mean sleep duration of this sample (mean age: 11.5 years; 48.6% girls) was 9 hours. Nearly 30% of students reported daytime tiredness. On school nights, girls slept less (p<0.001) and went to bed later (p<0.001), a sex difference that was more pronounced in older students. Age by sex interactions were observed for both sleep duration (p=0.005) and bedtime (p=0.002). Prolonged time spent on homework and mobile phone playing was related to shorter sleep duration and later bedtime. Adjusting for all other factors, with each additional hour of mobile phone playing, the odds of daytime tiredness and having difficulty maintaining sleep increased by 30% and 27% among secondary students, respectively. CONCLUSION: There are sex differences in sleep duration, schedule and quality. Habitual activities had small but significant associations with sleep hygiene outcomes, especially among secondary school students. Intervention strategies such as limiting children's use of electronic screen devices after school are implicated.

Shoulder lesions are caused by tissue breakdown of the skin and/or underlying tissue as a result of long-lasting pressure. The lesions are commonly seen in sows during lactation and contribute to poor animal welfare as well as affecting consumers' attitudes towards the swine industry. The aim of this study was to investigate the correlation between prolonged recumbency during early lactation, in particular the lying bout time, and the development of shoulder lesions. Eighteen sows of Swedish Landrace were observed for 24 hours on the day of farrowing and on days 2, 4, 9 and 11 after farrowing in May 2009. The data were analysed for correlations between the duration of the longest observed uninterrupted lying bout and the prevalence of shoulder lesions recorded at weaning (week 5). In the study, shoulder lesions were observed in eight of the eighteen sows at the time of weaning. The total lying time of the sows was highest on day 0 and day 2, when the proportion of time spent in lateral recumbency over the 24-hour period was on average 80 percent. The longest lying bout had an average duration of 6.3 hours (right side) and 7.2 hours (left side). A significant correlation (Spearman rank coefficient = 0.88; P

Estimation of Distribution Algorithm (EDA) is a class of population-based evolutionary algorithms: a probability distribution model is built from statistics of the best individuals in the current population, and the model is then sampled to produce the next generation. To address the NP-hard problem of EDA searching for an optimal network structure, a new Maximum Entropy Distribution Algorithm (MEEDA) is provided. The algorithm takes Jaynes' principle as its basis, using the maximum entropy of random variables to estimate their minimum-bias probability distribution, which then serves as the evolution model of the algorithm and produces optimal or near-optimal solutions. This paper then presents a rough programming model for job shop scheduling under uncertain information. The method overcomes the defects of traditional methods, which need pre-set authorized characteristics or attribute descriptions, designs a multi-objective optimization mechanism, and expands the application space of rough sets in job shop scheduling under uncertain information. Due to the complexity of the proposed model, traditional algorithms have low capability in producing a feasible solution, so we use MEEDA to obtain a solution within a reasonable amount of time. We assume machine flexibility in processing operations to decrease the complexity of the proposed model. Muth and Thompson's benchmark problems are used to verify and validate the proposed rough programming model and its algorithm. The computational results obtained by MEEDA are compared with a GA, and the comparison demonstrates the effectiveness of MEEDA on the job shop scheduling problem under uncertain information.
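
Jaynes' maximum entropy principle invoked above can be illustrated on a small case: among all distributions on {0, ..., n} with a fixed mean, the entropy maximizer has the form p_i proportional to exp(lam * i), with lam chosen to match the mean. The self-contained sketch below solves for lam by bisection; it is not MEEDA itself, which applies this idea inside an EDA loop.

```python
import math

def maxent_with_mean(n, mean, tol=1e-10):
    """Maximum-entropy distribution on {0, 1, ..., n} with a given mean
    (Jaynes).  The solution is p_i proportional to exp(lam * i); lam is
    found by bisection, since the mean is increasing in lam.  mean = n/2
    gives lam = 0, i.e. the uniform distribution."""
    def mean_of(lam):
        w = [math.exp(lam * i) for i in range(n + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in enumerate(w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_of(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * i) for i in range(n + 1)]
    z = sum(w)
    return [wi / z for wi in w]
```

The constraint-matching behavior is the point: the least biased model consistent with the known statistics is the one an EDA samples from at each generation.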

Background: Previous studies indicated that measurement of sleep only by duration and quality may be biased. This study aimed to investigate the interactive association of self-reported sleep duration, quality and shift-work schedule with hypertension prevalence in Chinese adult males. Methods: A total of 4519 Chinese adult males (≥18 years) were enrolled into the cross-sectional survey. Sleep attributes were measured from the responses to the standard Pittsburgh Sleep Quality Index and relevant questions in a structured questionnaire survey. The association of sleep duration, quality and shift-work schedule with hypertension prevalence was analyzed using multivariate logistic regression, considering the interaction between them or not. Results: Taking the potential interaction of the three aspects of sleep into consideration, only short sleep duration combined with poor sleep quality was found to be related to hypertension prevalence in Chinese adult males (odds ratio (OR): 1.74, 95% confidence interval (CI): 1.31–2.31), which could be modified by occasional and frequent shift-work schedule (OR: 1.43, 95% CI: 1.05–1.95; OR: 1.97, 95% CI: 1.40–2.79). Conclusions: Short sleep duration was not associated with the prevalence of hypertension in Chinese adult males unless poor sleep quality exists, which could be further modified by shift-work schedule. Assessment of sleep by measuring sleep duration only was not sufficient when exploring the association of sleep with hypertension. PMID:28230809

This paper studies the unrelated parallel machine scheduling problem with three minimization objectives: makespan, maximum earliness, and maximum tardiness (MET-UPMSP). The last two objectives together relate to the just-in-time (JIT) performance of a solution. Three hybrid algorithms are presented to solve the MET-UPMSP: reactive GRASP with path relinking, a dual-archived memetic algorithm (DAMA), and SPEA2. In order to improve solution quality, min-max matching is included in the decoding scheme of each algorithm. An experiment is conducted to evaluate the performance of the three algorithms, using 100 (jobs) x 3 (machines) and 200 x 5 problem instances with three combinations of two due-date factors (tightness and range). The numerical results indicate that DAMA performs best and GRASP second for most problem instances on three performance metrics: HVR, GD, and Spread. The experimental results also show that incorporating min-max matching into the decoding scheme significantly improves solution quality for the two population-based algorithms. It is worth noting that the solutions produced by DAMA with matching decoding can be used as benchmarks to evaluate the performance of other algorithms.

This paper presents a mathematical model for minimizing the maximum lateness on a single machine when deteriorating jobs are delivered to each customer in various-size batches. In practice, this situation may arise in a supply chain in which delivering goods to customers entails cost, so holding completed jobs and delivering them in batches may reduce delivery costs. In the batch-scheduling literature, minimizing the maximum lateness is known to be NP-hard; the present problem, which adds delivery costs to that objective, therefore remains NP-hard. To solve the proposed model, a simulated-annealing metaheuristic is used, whose parameters are calibrated by the Taguchi approach, and the results are compared to globally optimal values generated by the Lingo 10 software. Furthermore, to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed with respect to the effective factors of the problem. A computational study validates the efficiency and accuracy of the presented model.
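
As a minimal illustration of the simulated-annealing approach, the sketch below applies SA to plain single-machine maximum lateness; the batch delivery, deterioration, and delivery-cost features of the paper's model are omitted, and all parameter values are hypothetical.

```python
import math
import random

def simulated_annealing_lmax(p, d, iters=2000, t0=10.0, cooling=0.995, seed=0):
    """Simulated-annealing skeleton for single-machine maximum lateness.

    p: processing times, d: due dates.  Neighborhood: random pairwise
    swap; worsening moves are accepted with Boltzmann probability
    exp(-delta / temperature), and the temperature decays geometrically."""
    rng = random.Random(seed)
    n = len(p)

    def lmax(order):
        t, worst = 0.0, float("-inf")
        for j in order:
            t += p[j]
            worst = max(worst, t - d[j])  # lateness of job j
        return worst

    cur = list(range(n))
    cur_val = lmax(cur)
    best, best_val = cur[:], cur_val
    temp = t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = lmax(cand) - cur_val
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur, cur_val = cand, cur_val + delta
            if cur_val < best_val:
                best, best_val = cur[:], cur_val
        temp *= cooling
    return best, best_val
```

Because the best-so-far solution is tracked separately, the returned objective can never be worse than that of the initial sequence.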

An algorithm based on an alternative scheduling approach for iterative acyclic and cyclic DFGs (data-flow graphs) with limited resources that exploits inter- and intra-iteration parallelism is presented. The method is based on guiding the scheduling algorithm with the information supplied by a

Two regional estimation schemes, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto (GP) distribution for modeling threshold exceedances corresponding to a generalized extreme value (GEV) distribution for annual maxima. First, the accuracy of PDS/GP and AMS/GEV regional index-flood T-year event estimators are compared using Monte Carlo simulations. For estimation in typical regions assuming a realistic degree of heterogeneity, the PDS/GP index-flood model is more efficient. The regional PDS and AMS procedures are subsequently applied to flood records from 48 catchments in New Zealand. To identify homogeneous groupings of catchments, a split-sample regionalization approach based on catchment characteristics is adopted. The defined groups are more homogeneous for PDS data than for AMS data; a two-way grouping based on annual average rainfall is sufficient to attain homogeneity for PDS, whereas a further partitioning is necessary for AMS. In determination of the regional parent distribution using L-moment ratio diagrams, PDS data, in contrast to AMS data, provide an unambiguous interpretation, supporting a GP distribution.

Massive flood basalt volcanism in the NE Atlantic 56 million years ago can be related to the initial manifestation of the Iceland plume and ensuing continental rifting, and has been correlated with a short (c. 200,000 years) global warming period, the Paleocene-Eocene thermal maximum (PETM). One hypothesis is that magmatic sills emplaced into organic-rich sediments on the Norwegian margin triggered rapid release of greenhouse gases. However, the largest exposed volcanic succession in the region, the E Greenland flood basalts, provides additional details. The alkaline Ash-17 provides a regional correlation between continental volcanism and perturbation of the oceanic environment. In E Greenland, Ash-17 is interbedded with the uppermost part of the flood basalt succession. In the marine sections of Denmark, Ash-17 postdates the PETM, most likely by 300,000 to 400,000 years. While radiometric ages bracket the duration of the main flood basalt event to less than a million years, the subsidence history of the Skaergaard intrusion due to flood basalt emplacement indicates it took less than 300,000 years. It is therefore possible that the main flood basalts in E Greenland postdate the PETM. This is supported by a scarcity of ash layers within the PETM interval. Continental flood basalt provinces represent some of the highest sustained volcanic outputs preserved within the geologic record. Recent studies have focused on estimating the atmospheric loading of volatile elements and have led to the suggestion that they may be associated with significant global climate changes and mass extinctions. Estimates suggest that c. 400,000 km3 of basaltic lava erupted in E Greenland and the Faeroe Islands. Based on measurements of melt inclusions and solubility models, approximately 3000 Gt of SO2 and 220 Gt of HCl were released by these basalts. Calculated yearly fluxes approach 10 Mt/y SO2 and 0.7 Mt/y HCl. Refinements of these estimates, based largely on further melt inclusion measurements, are proceeding. Our

The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation for the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful for estimating suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
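The statistical PMP estimation alluded to in the closing sentence follows the Hershfield-type form PMP = mean + k_m * s (assumed here from the abstract's use of a frequency factor with the mean and standard deviation); a minimal sketch with hypothetical station values:

```python
# Hershfield-type statistical PMP estimate; all station values below
# (mean, standard deviation, k_m) are hypothetical illustrations.
def pmp_estimate(mean_amax, std_amax, k_m):
    """PMP = mean + k_m * standard deviation of the annual-maximum series."""
    return mean_amax + k_m * std_amax

# hypothetical 24-h annual maxima: mean 60 mm, std 25 mm, envelope k_m = 15
print(pmp_estimate(60.0, 25.0, 15.0))  # → 435.0
```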

Approved for public release; distribution is unlimited. Earned value management is a project management tool that integrates project scope with cost, schedule, and performance elements for optimum project planning and control. Earned value management is required by the Department of Defense for cost and incentive-type contracts equal to or greater than $20 million as part of a comprehensive approach to improving critical acquisitions. It is used to forecast the program's schedule performance us...

This study investigated the effects of positive and negative reinforcement on superstitious behaviors. Participants were instructed to produce the word "GOOD" on a computer display (positive reinforcement condition) or to remove the word "BAD" (negative reinforcement condition) by pressing any of six keys. The words GOOD and BAD were presented at fixed-time intervals regardless of the participant's responses. In Experiment 1, only participants exposed to the negative reinforcement condition acquired superstitious behaviors. However, the observed asymmetry may not have been due to the polarity of the consequences (positive vs. negative) but instead to the duration of the goal states, because the period of absence of BAD was longer than the period of presence of GOOD. Experiment 2 varied the duration of word presentations to match the goal-state periods between the positive and negative reinforcement conditions, and found that participants acquired superstitious behaviors equally under the two conditions. These results indicate that the duration of a consequence, rather than its polarity, is a critical factor controlling superstitious behaviors. The theoretical relationship between superstitious behavior and the illusion of control is discussed.

We used a bio-economic model to analyze the role that alternative seeding-harvesting schedules, temperature, dissolved oxygen, stocking density, and duration of cultivation play in the economic performance of semi-intensive shrimp cultivation in Mexico. The highest production was predicted for the May-August schedule (1130-2300 kg ha-1), while the lowest yields were obtained for the March-June schedule (949-1300 kg ha-1). The highest net revenues were projected for the August-November schedule (US$354-1444 ha-1), while the lowest were projected for the May-August schedule (US$330-923 ha-1). The highest annual net revenues were predicted for the combination of the March-June and August-November schedules (US$1432-2562 ha-1). Sensitivity analysis indicated that temperature and dissolved oxygen were the most important factors determining net revenues in the March-June schedule. For the May-August and August-November schedules, stocking density was the most important factor. Duration of cultivation was the least sensitive variable. Break-even production analysis confirmed that the combination of the March-June and August-November schedules was more efficient from an economic perspective. We recommend testing some ponds with higher stocking density in the March-June and August-November schedules and, in the latter case, seeding in June or July rather than August.

Previous studies with concurrent-chains procedures have shown that preference for a terminal-link signaling a higher reinforcement rate decreases as initial-link durations increase. Using a concurrent-chains procedure, the present experiment examined the effects of manipulating initial-link duration on preference and resistance to disruption with rats nose poking for different rates of food reinforcement in the terminal links. Consistent with previous findings, preference for a terminal link with a higher reinforcement rate decreased with longer initial links. Conversely, relative resistance to disruption in the terminal link with a higher reinforcement rate increased with longer initial links. These findings are counter to the prediction of behavioral momentum theory that preference and resistance to change should be positively related.

A surgical operation scheduling problem with significant uncertainty in surgery durations was studied, and a mathematical model was proposed based on an absolute robustness strategy that jointly considers hospital cost and patient satisfaction. Furthermore, a two-loop hybrid algorithm integrating a partheno-genetic algorithm and the interior point method was designed to optimize the worst-case performance of a schedule over the range of surgery durations. The outer loop of the algorithm determines the sequence of surgical operations on each operating bed, and the interior point method searches for the duration scenario with worst-case performance for a given sequence. Simulation results on a large set of random instances show that the proposed strategy is effective compared with a deterministic scheduling strategy based on expected surgery durations.

Background: Hepatitis A is mostly a self-limiting disease but causes a substantial economic burden. Consequently, the United States Advisory Committee on Immunization Practices recommends inactivated hepatitis A vaccination for all children beginning at age 1 year and for high-risk adults. The hepatitis A vaccine is highly effective, but the duration of protection is unknown. Methods: We examined the proportion of children with protective hepatitis A antibody levels (anti-HAV ≥20 mIU/mL) as well as the geometric mean concentration (GMC) of anti-HAV in a cross-sectional convenience sample of individuals aged 12-24 years who had been vaccinated with a two-dose schedule in childhood, with the initial dose at least 5 years earlier. We compared a subset of data from persons vaccinated with two doses (720 EL.U.) at age 3-6 years with a demographically similar prospective cohort that received a three-dose (360 EL.U.) schedule and has been followed for 17 years. Results: No significant differences were observed when comparing GMC between the two cohorts at 10 (P = 0.467), 12 (P = 0.496), and 14 (P = 0.175) years post-immunization. For the three-dose cohort, protective antibody levels have remained for 17 years and have leveled off over the past 7 years. Conclusion: The two- and three-dose schedules provide similar protection >14 years after vaccination, indicating a booster dose is not needed at this time. Plateauing anti-HAV GMC levels suggest protective antibody levels may persist long-term. PMID:23470239

The addition of major amounts of carbon to the exogenic carbon pool caused rapid climate change and faunal turnover during the Paleocene-Eocene Thermal Maximum (PETM) around 56 million years ago. Constraints are still needed on the duration of the onset, main body, and recovery of the event. The Bighorn Basin in Wyoming provides expanded terrestrial sections spanning the PETM and lacking the carbonate dissolution present in many marine records. Here we provide new carbon isotope records for the Polecat Bench and Head of Big Sand Coulee sections, two parallel sites in the northern Bighorn Basin, at unprecedented resolution. Cyclostratigraphic analysis of these fluvial sediment records using descriptive sedimentology and proxy records allows subdivision into intervals dominated by avulsion deposits and intervals dominated by overbank deposits. These sedimentary sequences alternate in a regular fashion and are related to climatic precession. Correlation of the two, 8-km-spaced sections shows that the avulsion-overbank cycles are laterally consistent. The presence of longer-period alternations, related to modulation by the 100-kyr eccentricity cycle, corroborates the precession influence on the sediments. Sedimentary cyclicity is then used to develop a floating precession-scale age model for the PETM carbon isotope excursion (CIE). We find a CIE body encompassing 95 kyrs aligning with marine cyclostratigraphic age models. The duration of the CIE onset is estimated at 5 kyrs, but difficult to determine because sedimentation rates vary at the sub-precession scale. The CIE recovery starts with a 2 to 4 per mille step and lasts 40 or 90 kyrs, depending on what is considered the carbon isotope background state.

Considering that some activities in block erection scheduling have elastic durations, definitions of rigid and elastic activity durations were proposed. The impacts of elastic activity durations on resource configuration and total project cost were studied, and the complexity and research significance of block erection scheduling with minimization of total project cost as the optimization objective were analyzed. A planning model of block erection scheduling considering elastic activity durations was constructed, and a search algorithm based on a tabu search strategy was used to solve it. Experiments were conducted on randomly generated small-scale and large-scale instances as well as on part of a real block erection schedule. The results verified the feasibility and efficiency of the proposed algorithm.

A new approach for determining the magnitude by using the displacement amplitude (A), epicentral distance (Δ) and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or the P wave on teleseismic seismograms in the 10-60 second range. In this research, a new approach has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near earthquakes. The duration of high frequency is determined using half the period of the P wave on the displacement seismogram. This is necessary because the rupture process of a near earthquake is very complex: the P wave mixes with other waves (the S wave) before the duration runs out, so it is difficult to separate or determine the end of the P wave. Applying the method to 68 earthquakes recorded by the CISI station, Garut, West Java, the following relationship is obtained: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in m, Δ in km and t in seconds. The moment magnitude from this new approach is quite reliable and faster to compute, which makes it useful for early warning.
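The reported regression can be evaluated directly; the sketch below plugs hypothetical values of A, Δ and t into the published relationship:

```python
import math

def moment_magnitude(A, delta, t):
    """Empirical Mw from displacement amplitude A (m), epicentral
    distance delta (km) and high-frequency duration t (s),
    using the regression reported for the CISI station data."""
    return (0.78 * math.log10(A)
            + 0.83 * math.log10(delta)
            + 0.69 * math.log10(t)
            + 6.46)

# hypothetical inputs: A = 1e-4 m, delta = 100 km, t = 10 s
print(round(moment_magnitude(1e-4, 100.0, 10.0), 2))  # → 5.69
```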

idle hours, thereby minimizing the operational costs of public transportation companies. In the first phase of this study, assuming that the timetables are already divided into long- and short-duration schedules, the short schedules can be combined to make up an employee's workday. This combination is done by a maximum-weight matching algorithm, in which schedules are represented by vertices of a graph and the maximum weight is attributed to combinations of schedules that lead to neither overtime nor idle hours. In the second phase, a weekend schedule is assigned to every weekly work schedule. Based on these two phases, the weekly work schedules of bus drivers and bus fare collectors can be arranged at minimal cost. The third and final phase of this study consists of assigning a weekly work schedule to each bus driver and collector, considering his/her preferences; the maximum-weight matching algorithm is also used in this phase. The method was applied in three public transportation companies in Curitiba, state of Paraná, which had until then used old heuristic algorithms based solely on managerial experience.
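The pairing step can be illustrated with a brute-force maximum-weight matching on a toy instance (real instances would use a proper matching algorithm on much larger graphs); the schedule names, durations and workday length below are hypothetical:

```python
# Hypothetical short schedules (durations in hours). Two schedules may be
# combined into one workday only if they cause no overtime; the weight of
# a valid pair is its total worked time, so fuller days (less idle time)
# score higher.
schedules = {"s1": 3.5, "s2": 4.5, "s3": 5.0, "s4": 3.0}
WORKDAY = 8.0

def weight(a, b):
    total = schedules[a] + schedules[b]
    return total if total <= WORKDAY else None  # None: overtime, forbidden

def best_pairing(names):
    """Brute-force maximum-weight matching; fine for small instances."""
    if len(names) < 2:
        return 0.0, []
    a, rest = names[0], names[1:]
    best_w, best_pairs = best_pairing(rest)        # leave `a` unmatched
    for b in rest:
        w = weight(a, b)
        if w is None:
            continue
        sub_w, sub_pairs = best_pairing([n for n in rest if n != b])
        if w + sub_w > best_w:
            best_w, best_pairs = w + sub_w, [(a, b)] + sub_pairs
    return best_w, best_pairs

print(best_pairing(list(schedules)))  # → (16.0, [('s1', 's2'), ('s3', 's4')])
```

Here s1+s2 and s3+s4 each fill an 8-hour day exactly, so that pairing wins.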

The DSN (Deep Space Network) Scheduling Engine targets all space missions that use DSN services. It allows clients to issue scheduling, conflict identification, conflict resolution, and status requests in XML over a Java Message Service interface. The scheduling requests may include new requirements that represent a set of tracks to be scheduled under some constraints. This program uses a heuristic local search to schedule a variety of schedule requirements, and is being infused into the Service Scheduling Assembly, a mixed-initiative scheduling application. The engine resolves conflicting schedules of resource allocation according to a range of existing and possible requirement specifications, including optional antennas; start of track and track duration ranges; periodic tracks; locks on track start, duration, and allocated antenna; MSPA (multiple spacecraft per aperture); arraying/VLBI (very long baseline interferometry)/delta DOR (differential one-way ranging); continuous tracks; segmented tracks; gap-to-track ratio; and override or block-out of requirements. The scheduling models now include conflict identification for SOA (start of activity), BOT (beginning of track), RFI (radio frequency interference), and equipment constraints. This software will search through all possible allocations while providing a best-effort solution at any time. The engine reschedules to accommodate individual emergency tracks in 0.2 seconds, and emergency antenna downtime in 0.2 seconds. The software handles doubling of one mission's track requests over one week (to 42 total) in 2.7 seconds. Further tests will be performed in the context of actual schedules.

U.S. Department of Health & Human Services — A fee schedule is a complete listing of fees used by Medicare to pay doctors or other providers-suppliers. This comprehensive listing of fee maximums is used to...

Existing research methods for scheduling problems with uncertain activity durations cannot correctly and clearly describe changes in activity states and can only be solved approximately. To address this, a modeling method based on an extended Petri net is proposed for resource-constrained multi-project scheduling problems with both deterministic and variable durations. The method classifies tokens into logic tokens and resource tokens, whose transfers represent task execution and resource allocation, respectively. By classifying places and transitions and assigning times to places, places are divided into activity places, resource places, waiting places, and final places, and transitions are divided into coordination transitions, resource scheduling transitions, and resource releasing transitions. Waiting places and activity places connected through coordination transitions reflect the temporal relations among tasks, while resource places, resource scheduling transitions, and resource releasing transitions reflect the competition for, occupation of, and release of resources among tasks. For variable-duration problems, additional place and transition types and modified transition firing rules are introduced to describe the actual system. Using instances from the PSPLIB library, analyses of the model's solution success rate and solution fitness, together with comparative experiments, show that the proposed method solves the problem better than other methods; its effectiveness is further validated on a real engineering application.

In the Universal Soil Loss Equation (USLE), erosivity is the factor related to rain that expresses its potential to cause soil erosion; calculating it requires the rain's kinetic energy and the maximum rainfall intensity over a 30-min duration. The aim of this study was to verify and quantify the impact of the rain duration considered, 15 versus 30 min, on the USLE erosivity factor. To this end, 863 rain gauge records from 1983 to 1998 for the city of Pelotas, RS, were used, obtained from the Agrometeorological Station (EMBRAPA/UFPel-INMET Covenant; 31°51′S, 52°21′W, altitude 13.2 m). From these records, erosivity values were estimated from the maximum rain intensities over the durations considered. The average annual erosivity values were 2551.3 MJ ha-1 h-1 yr-1 and 1406.1 MJ ha-1 h-1 yr-1, for average intensities of 6.40 mm h-1 and 3.74 mm h-1, at durations of 15 and 30 min, respectively. The results of this study show that erosive rainfall accounted for 91.0% of total precipitation, and that erosivity was influenced by the duration over which the maximum rain intensity was measured.

Despite several attempts to accurately predict the duration and cost of software projects, initial plans still do not reflect real-life situations. Since commitments with customers are usually decided based on these initial plans, software companies frequently fail to deliver on time, and many projects overrun both their budget and time. To improve the quality of initial project plans, we show in this paper the importance of (1) reflecting features' priorities/risk in task schedules and (2) considering uncertainties related to human factors in plan schedules. To make simulation tasks reflect features' priority as well as multimodal team allocation, enhanced project schedules (EPS), to which remedial action scenarios (RAS) are added, were introduced. They reflect potential schedule modifications in case of uncertainties and promote a dynamic sequencing of involved tasks rather than the static conventional...

This work addresses the refinery scheduling problem using mathematical programming techniques. The solution adopted was to decompose the entire refinery model into a crude oil scheduling problem and a product scheduling problem. The envelope for the crude oil scheduling problem is composed of a terminal, a pipeline and the crude area of a refinery, including the crude distillation units. The solution method adopted includes a decomposition technique based on the topology of the system. The envelope for the product scheduling comprises all tanks, process units and products found in a refinery. Once crude scheduling decisions are available, the product scheduling problem is solved using a rolling horizon algorithm. All models were tested with real data from PETROBRAS' REFAP refinery, located in Canoas, Southern Brazil. (author)

Ankara : Department of Industrial Engineering and the Institute of Engineering and Science of Bilkent Univ., 1999. Thesis (Master's) -- Bilkent University, 1999. Includes bibliographical references. Distributed Scheduling (DS) is a new paradigm that enables local decision-makers to make their own schedules by considering local objectives and constraints within the boundaries and overall objective of the whole system. Local schedules from different parts of the system are...

Personnel scheduling can become a particularly difficult optimisation problem due to human factors. And yet, people working in healthcare, transportation and other round-the-clock service regimes perform their duties based on schedules that are often manually constructed. The unrewarding manual scheduling task deserves more attention from the timetabling community so as to support computation of fair and good-quality results. The present abstract touches upon a set of particular characterist...

In accordance with the present disclosure, embodiments of an exemplary scheduling controller module or device implement an improved scheduling process such that the targeted reduction in schedule length can be achieved while incurring a minimal energy penalty, by allowing for a large rate (or duration) selection alphabet.

The purpose of this study was to examine the use of percentile schedules as a method of quantifying the shaping procedure in an educational setting. We compared duration of task engagement during baseline measurements for 4 students to duration of task engagement during a percentile schedule. As a secondary purpose, we examined the influence on…
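One common formulation of a percentile schedule (assumed here as an illustration, not taken from the study) reinforces a response when its duration exceeds a chosen quantile of the last m observed durations:

```python
from collections import deque

def make_percentile_schedule(m=10, w=0.5):
    """Reinforce a task-engagement duration when it beats the (1-w)
    quantile of the last m durations; permissive until m observations
    have been collected. Parameters m and w are hypothetical choices."""
    history = deque(maxlen=m)
    def reinforce(duration):
        if len(history) < m:
            history.append(duration)
            return True
        criterion = sorted(history)[int((1 - w) * m)]
        history.append(duration)        # oldest observation drops out
        return duration > criterion
    return reinforce

sched = make_percentile_schedule()
for d in range(1, 11):                  # build up history: durations 1..10
    sched(d)
print(sched(5), sched(11))              # → False True
```

As performance improves, the criterion rises with the recent history, which is what makes the shaping procedure quantifiable.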

no task is scheduled with overlap. Let num_pi be the total number of preemptions and idle slots of size at most t_0 that are introduced. We see that if no usable block remains on Q_{m-k}, then num_pi ≤ m-k. Otherwise, num_pi ≤ m-k-1. If j > n when this procedure terminates, then all tasks have been scheduled

The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...

In the context of future global warming induced by human activities, it is essential to assess the role of natural climatic variations. Precise knowledge of the duration of past interglacial periods is fundamental to the understanding of the potential future evolution of the Holocene. Past ice age cycles provide a natural laboratory for exploring the progression and duration of interglacial climate. Palaeorecords from ice, land and oceans extend over the last 800 ka, revealing eight glacial-interglacial cycles, with a range of insolation and greenhouse gas influences. The interglacials display a correspondingly large variety of intensity and duration, thus providing an opportunity for major insights into the mechanisms involved in the behaviour of interglacial climates. A comparison of the duration of these interglacials, however, is often difficult, as the definition of an interglacial depends on the archive that is considered. Therefore, to compare interglacial length and climate conditions from different archives, a consistent definition of interglacial conditions is required, ideally one that is not bound to the method nor to the archive under consideration. Here we present a method to identify interglacials and to calculate their length by means of a simple statistical approach. We based our method on ~ 400 ka windows of time to determine mean climatic conditions while allowing for the possibility of long term evolution of the climatic baseline. For our study of interglacials of the past 800 ka, we used two windows that largely align with the pre- (800-430 ka ago) and post- (430-0 ka ago) mid-Brunhes event (MBE), although the resulting conclusions are not sensitive to this particular division. We applied this method to the last 800 ka of a few palaeoclimate records: the deuterium ice core (EDC) record as a climatic proxy, the benthic δ18O stack (LR04) as a proxy for sea level/ice volume, ice core (Vostok, EDC) atmospheric CO2 and additional records. Although

In this paper, we study a class of simple and easy-to-construct shop schedules, known as dense schedules. We present tight bounds on the maximum deviation in makespan of dense flow-shop and job-shop schedules from their optimal ones. For dense open-shop schedules, we do the same for the special case of four machines and thus add a stronger supporting case for proving a standing conjecture.

Earth observation satellites play a significant role in rapid responses to emergent events on the Earth's surface, for example, earthquakes. In this paper, we propose a robust satellite scheduling model to address a sequence of emergency tasks, in which both the profit and the robustness of the schedule are simultaneously maximized in each stage. Both the multiobjective genetic algorithm NSGA2 and a rule-based heuristic algorithm are employed to obtain solutions of the model. NSGA2 is used to obtain a flexible and highly robust initial schedule. When each set of emergency tasks arrives, a combined algorithm called HA-NSGA2 is used to adjust the initial schedule. The heuristic algorithm (HA) is designed to insert these tasks dynamically into the waiting queue of the initial schedule. Then NSGA2 is employed to find the optimal solution with maximum revenue and robustness. Meanwhile, to improve revenue and resource utilization, we adopt a compact task-merging strategy in the heuristic algorithm that considers the duration of task execution. Several experiments are used to evaluate the performance of HA-NSGA2; all simulation experiments show that its performance is significantly improved.

The Proportional Scheduler was recently proposed as a scheduling algorithm for multi-hop switch networks. For these networks, the BackPressure scheduler is the classical benchmark. For networks with fixed routing, the Proportional Scheduler is maximum stable, myopic and, furthermore, will alleviate

We consider an online preemptive scheduling problem where jobs with deadlines arrive sporadically. A commitment requirement is imposed such that the scheduler has to either accept or decline a job immediately upon arrival. The scheduler's decision to accept an arriving job constitutes a contract with the customer; if the accepted job is not completed by its deadline as promised, the scheduler loses the value of the corresponding job and has to pay an additional penalty depending on the amount of unfinished workload. The objective of the online scheduler is to maximize the overall profit, i.e., the total value of the admitted jobs completed before their deadlines less the penalty paid for the admitted jobs that miss their deadlines. We show that the maximum competitive ratio is $3-2\sqrt{2}$ and propose a simple online algorithm to achieve it. The optimal scheduling combines a threshold admission policy with a greedy scheduling policy. The proposed algorithm has direct applications to the chargin...
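The bound 3 − 2√2 ≈ 0.172 can be checked numerically, together with the identity 3 − 2√2 = (√2 − 1)², which is how such ratios often arise from solving a quadratic in the threshold parameter:

```python
from math import sqrt

# Numeric value of the competitive ratio reported in the abstract.
ratio = 3 - 2 * sqrt(2)
print(round(ratio, 4))                              # → 0.1716

# Same quantity written as a perfect square: (sqrt(2) - 1)^2 = 3 - 2*sqrt(2).
assert abs(ratio - (sqrt(2) - 1) ** 2) < 1e-12
```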

This paper addresses the problem of operating room scheduling at the tactical level of hospital planning and control. Hospitals repetitively construct operating room schedules, which is a time-consuming, tedious and complex task. The stochasticity of the durations of surgical procedures complicates t

This "how-to-do-it" manual on the intricacies of school scheduling offers both technical information and common sense advice about the process of secondary school scheduling. The first of six chapters provides an overview of scheduling; chapter 2 examines specific considerations for scheduling; chapter 3 surveys the scheduling models and their…

Depending on the intensity, duration and type of physical exercise, equine metabolism has to adapt to nervous, cardiovascular, endocrine and respiratory system requirements. In horses, exercise and training are known to have considerable effects on the mechanisms of the hemostatic system, including platelet activity. The aim of the present study was to evaluate the effect of different training schedules on platelet aggregation in 15 Italian Saddle jumping horses. The animals were divided into three equal groups: group A was subjected to a high-intensity training program; group B to a light training program; group C consisted of sedentary horses. From each animal, blood samples were collected by jugular venipuncture at rest on the 1st, 3rd and 5th days and, afterwards, once a week, for a total of 5 weeks of data recording, in order to assess the maximum degree of platelet aggregation and the initial velocity (slope) of platelet aggregation. Two-way analysis of variance (ANOVA) showed a significant effect of the different training schedules on the studied parameters. The results revealed that both the degree and the initial velocity of platelet aggregation changed under the different training schedules, which could represent a different protective endothelial mechanism. These findings could play an important role in a clearer knowledge of the physiological reference values of platelet aggregation and in a better interpretation of these variations during training.

Pigeons' keypecking was maintained under two- and three-component chained schedules of food presentation. The component schedules were all fixed-interval schedules of either 1- or 2-min duration. Across conditions the presence of houselight illumination within each component schedule was manipulated. For each pigeon, first-component response rates…

We consider the problem of scheduling n jobs on m identical parallel machines to minimize a regular cost function. The standard list scheduling algorithm converts a list into a feasible schedule by focusing on the job start times. We prove that list schedules are dominant for this type of problem.
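As a minimal illustration of greedy list scheduling on identical machines (a sketch under our own assumptions; the job list and durations below are hypothetical, and the paper's dominance result concerns start times rather than this particular construction):

```python
import heapq

def list_schedule(durations, m):
    """Greedy list scheduling: take jobs in list order and start each one
    as early as possible on the machine that becomes free first.
    (Hypothetical helper; the job data below are made up.)"""
    free = [(0.0, i) for i in range(m)]  # (time machine becomes free, machine id)
    heapq.heapify(free)
    schedule = []
    for p in durations:
        t, i = heapq.heappop(free)
        schedule.append((i, t, t + p))   # (machine, start, end)
        heapq.heappush(free, (t + p, i))
    return schedule

jobs = [3.0, 2.0, 2.0, 1.0]              # processing times, in list order
makespan = max(end for _, _, end in list_schedule(jobs, m=2))
```

Here each job starts as early as possible on the first machine to free up, so completion times are determined entirely by the list order, which is the sense in which the abstract says list schedules are dominant for regular cost functions.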

We propose a novel job scheduling approach for homogeneous cluster computing platforms. Its key feature is the use of virtual machine technology to share fractional node resources in a precise and controlled manner. Other VM-based scheduling approaches have focused primarily on technical issues or on extensions to existing batch scheduling systems, while we take a more aggressive approach and seek to find heuristics that maximize an objective metric correlated with job performance. We derive absolute performance bounds and develop algorithms for the online, non-clairvoyant version of our scheduling problem. We further evaluate these algorithms in simulation against both synthetic and real-world HPC workloads and compare our algorithms to standard batch scheduling approaches. We find that our approach improves over batch scheduling by orders of magnitude in terms of job stretch, while leading to comparable or better resource utilization. Our results demonstrate that virtualization technology coupled with light...

Robust scheduling aims at constructing proactive schedules capable of dealing with multiple disruptions during project execution. Inserting a time buffer before an activity's start time is a method to improve the robustness (stability) of a baseline schedule. In this paper, we introduce new heuristics for inserting time buffers in a given baseline schedule when the project due date is predefined and stochastic activity durations are considered. Computational results obtained from a set of benchmark projects show that the proposed heuristics are capable of generating proactive schedules with acceptable quality and solution robustness.
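A toy version of the idea can be sketched as follows (an illustrative heuristic of our own, not one of the paper's heuristics; it spreads the slack before the due date across serial activities in proportion to their duration standard deviations):

```python
def insert_buffers(activities, due_date):
    """Toy buffer-insertion heuristic: distribute the slack between the
    baseline finish and the due date as time buffers placed before each
    activity's start, in proportion to the activity's duration standard
    deviation. (Illustrative only; activities are (mean, std) in serial
    order.)"""
    total_mean = sum(mean for mean, _ in activities)
    slack = max(0.0, due_date - total_mean)
    total_std = sum(std for _, std in activities) or 1.0
    t, starts = 0.0, []
    for mean, std in activities:
        t += slack * std / total_std  # buffer in front of this activity
        starts.append(t)
        t += mean
    return starts

# Two serial activities with mean 4 and std 1 each, due date 10:
starts = insert_buffers([(4.0, 1.0), (4.0, 1.0)], due_date=10.0)
```

More variable activities receive larger buffers, so disruptions are more likely to be absorbed locally instead of propagating to successors.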

Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for the samples, the total length of CC needed in the design of an SFCL can be determined.

The article presents the changes to the Immunization Schedule of Russia introduced in 2014. The new Schedule includes vaccination against pneumococcal disease for all children at 2 months of age, with revaccination at 4 and 15 months; the second revaccination against tuberculosis has been excluded from the Schedule. Pregnant women and persons subject to military conscription have been added to the risk groups to be vaccinated against influenza. Recommendations are given on the use of vaccines containing antigens relevant for the Russian Federation, to provide maximum effectiveness of immunization, and of vaccines that do not contain preservatives in children under 1 year of age. The creation of a council of experts in the field of vaccinology and vaccine prevention is proposed.

Explains that favorable market and working conditions influence the scheduling of school construction projects. Facility planners, architects, and contractors are advised to develop a realistic time schedule for the entire project. (MLF)

The goal of this research is to apply reinforcement learning methods to real-world problems like scheduling. In this preliminary paper, we show that learning to solve scheduling problems such as the Space Shuttle Payload Processing and the Automatic Guided Vehicle (AGV) scheduling can be usefully studied in the reinforcement learning framework. We discuss some of the special challenges posed by the scheduling domain to these methods and propose some possible solutions we plan to implement.

Scheduling is essential when activities need to be allocated to scarce resources over time. Motivated by the problem of scheduling barges along container terminals in the Port of Rotterdam, this thesis designs and analyzes algorithms for various on-line and off-line scheduling problems.

Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

This paper describes the Complex product development project scheduling problem (CPDPSP), with a great number of activities and complicated resource, precedence and calendar constraints. By converting the precedence constraint relations, the CPDPSP is simplified. Then, according to the predictive control principle, we propose a new scheduling algorithm based on prediction (the BoP-procedure). In order to capture the problem characteristics arising from the resource status and precedence constraints at the scheduling time, a sub-project is constructed on the basis of a sub-AoN (activity-on-node) graph of the project. We then use the modified GDH-procedure to solve the sub-project scheduling problem and obtain the maximum feasible active subset, which determines the activity group that satisfies the resource, precedence and calendar constraints and has the highest scheduling priority at the scheduling time. Additionally, we carry out a great number of numerical computations and compare the performance of the BoP-procedure with that of other scheduling algorithms. The results show that the BoP-procedure is more suitable for the CPDPSP. Finally, we briefly discuss future research on the CPDPSP.

Title 44 US Code, "Public Printing and Documents," regulations issued by the General Services Administration (GSA) in 41 CFR Chapter 101, Subchapter B, "Management and Use of Information and Records," and regulations issued by the National Archives and Records Administration (NARA) in 36 CFR Chapter 12, Subchapter B, "Records Management," require each agency to prepare and issue a comprehensive records disposition schedule that contains the NARA-approved records disposition schedules for records unique to the agency and NARA's General Records Schedules for records common to several or all agencies. The approved records disposition schedules specify the appropriate duration of retention and the final disposition for records created or maintained by the NRC. NUREG-0910, Rev. 3, contains "NRC's Comprehensive Records Disposition Schedule" and the original authorized approved citation numbers issued by NARA. Rev. 3 incorporates NARA-approved changes and additions to the NRC schedules that have been implemented since the last revision dated March 1992, reflects recent organizational changes implemented at the NRC, and includes the latest version of NARA's General Records Schedule (dated August 1995).

Duration intervals measure the dynamic impact of advertising on sales. More precisely, the p per cent duration interval measures the time lag between the advertising impulse and the moment that p per cent of its effect has decayed. In this paper, we derive an expression for the duration

Rats' lever pressing produced tokens according to a 20-response fixed-ratio schedule. Sequences of token schedules were reinforced under a second-order schedule by presentation of periods when tokens could be exchanged for food pellets. When the exchange period schedule was a six-response fixed ratio, patterns of completing the component token schedules were bivalued, with relatively long and frequent pauses marking the initiation of each new sequence. Altering the exchange period schedule to a six-response variable ratio resulted in sharp reductions in the frequency and duration of these initial pauses, and increases in overall rates of lever pressing. These results are comparable to those ordinarily obtained under simple fixed-ratio and variable-ratio schedules.

Tardiness from scheduled start times in a surgical suite is a common source of frustration for both operating room personnel and patients. Data from two surgical suites were used to investigate the relative importance of various factors that contribute to tardiness, including average case duration, time of day, prolonged turnovers, whether a surgeon follows himself or another surgeon, the potential for starting cases early, concurrency (e.g., number of residents supervised simultaneously), expected under-utilized or over-utilized time, and case duration bias. Average tardiness per case did not depend on the individual durations of preceding cases or on the relative numbers of long and short cases. In contrast, the total duration of preceding cases was important in determining tardiness. Tardiness per case grew larger as the day progressed because the total duration of preceding cases increased, but began to decline for cases scheduled to commence 6 h after the start of the workday. Tardiness was not affected by prolonged turnovers, differences in average case duration among services, or whether a surgeon followed himself or another surgeon in the same operating room. Tardiness was affected by expected under-utilized or over-utilized time at the end of the workday and by case duration bias. Factors associated with the largest numbers of cases had the biggest influence on tardiness. Greater understanding of these factors aided in the development of several mathematical interventions to reduce tardiness in the two surgical suites. These interventions and their applicability for reducing tardiness are described in a companion article. At two surgical suites, tardiness from scheduled start times did not depend on average case duration or prolonged turnovers. Tardiness did depend on the total duration of preceding cases, expected under-utilized or over-utilized time at the end of the day, and case duration bias.

A fungicide-spray scheduling scheme for tomatoes called TOM-CAST (tomato forecaster) was adapted for use with operational weather data in order to increase the number of users by eliminating the need for in-field measurements of hourly temperature and leaf wetness duration. Such schemes reduce cost, environmental risk, and the development of resistance to the fungicide. Duration of wetness was estimated as the length of time that the dewpoint depression (T - Td) remained between two specified limits, indicating the onset and offset of wetness. Several methods of obtaining the necessary temperature and dewpoint data were investigated. The preferred method, considering accuracy and simplicity, involved synthesis of hourly temperatures from locally observed daily maximum and minimum temperatures, and estimation of dewpoints from two Environment Canada hourly weather stations. With appropriate calibration, the scheme was able to match the number of sprays required by TOM-CAST exactly or within one spray.
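The estimation scheme described above can be sketched roughly as follows (the sine-curve temperature synthesis and the threshold values are stand-in assumptions, not the calibrated details of the study):

```python
import math

def hourly_temps(tmin, tmax):
    """Synthesize 24 hourly temperatures from daily min/max with a sine
    curve (an illustrative stand-in for the synthesis method in the study;
    assumes the minimum near 03:00 and the maximum near 15:00)."""
    mean, amp = (tmax + tmin) / 2, (tmax - tmin) / 2
    return [mean + amp * math.sin(math.pi * (h - 9) / 12) for h in range(24)]

def wetness_hours(temps, dewpoints, onset=2.0, offset=3.0):
    """Estimate leaf wetness duration from the dewpoint depression T - Td:
    wetness starts when the depression falls below `onset` and ends when it
    rises above `offset`. The thresholds are placeholders, not the
    calibrated limits from the study."""
    wet, hours = False, 0
    for t, td in zip(temps, dewpoints):
        depression = t - td
        if not wet and depression < onset:
            wet = True
        elif wet and depression > offset:
            wet = False
        hours += wet
    return hours

temps = hourly_temps(10.0, 20.0)
```

Using separate onset and offset limits gives the hysteresis the abstract describes: a small rise in dewpoint depression does not immediately end a wet period.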

In this thesis various routing and scheduling problems will be presented. The topics covered will be routing from an origin to a destination on a predefined network, the routing and scheduling of vessels in a liner shipping network given a demand forecast to be covered, the routing of manpower and vehicles transporting disabled passengers in an airport, and the vehicle routing with time windows, where one version studied includes edge … It may be that the objects routed have an availability time window and a delivery time window, or that locations on the path have a service time window. When routing moving transportation objects such as vehicles and vessels, schedules are made in connection with the routing. Such schedules represent the time for the presence of a connection between two locations. This could be an urban bus schedule where busses are routed, and this routing creates a bus schedule which the passengers between locations use.

To formally reason about the temporal quality of systems, discounting was introduced to CTL and LTL. However, these logics are discrete and cannot express duration properties. In this work we introduce discounting for a variant of Duration Calculus. We prove decidability of model checking for a useful fragment of discounted Duration Calculus formulas on timed automata under mild assumptions. Further, we provide an extensive example to show the usefulness of the fragment.

… the supply of resources in each component. We specifically investigate two different techniques to widen the set of provably schedulable systems: 1) a new supplier model; 2) restricting the potential task offsets. We also provide a way to estimate the minimum resource supply (budget) that a component …

In interval scheduling, not only the processing times of the jobs but also their starting times are given. This article surveys the area of interval scheduling and presents proofs of results that have been known within the community for some time. We first review the complexity and approximability of …

Driven by stable or declining financial resources many school districts are considering the costs and benefits of a seven-period day. While there is limited evidence that any particular scheduling model has a greater impact on student learning than any other, it is clear that the school schedule is a tool that can significantly impact teacher…

Based on the empirical analysis of data contained in the International Software Benchmarking Standards Group (ISBSG) repository, this paper presents software engineering project duration models based on project effort. Duration models are built for the entire dataset and for subsets of projects developed for personal computer, mid-range and mainframe platforms. Duration models are also constructed for projects requiring fewer than 400 person-hours of effort and for projects requiring more than 400 person-hours of effort. The usefulness of adding the maximum number of assigned resources as a second independent variable to explain duration is also analyzed. The opportunity to build duration models directly from project functional size in function points is investigated as well.
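A common functional form for such effort-duration models is a power law fitted by least squares on log-transformed data; the following sketch uses made-up project data, not ISBSG values:

```python
import math

def fit_power_law(efforts, durations):
    """Least-squares fit of log(D) = log(a) + b*log(E), i.e. a power-law
    duration model D = a * E^b. One common functional form for
    effort-duration models; the coefficients here are illustrative."""
    xs = [math.log(e) for e in efforts]
    ys = [math.log(d) for d in durations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return math.exp(my - b * mx), b  # (a, b)

# Synthetic projects lying exactly on D = 2 * E^0.5 (effort in person-hours):
a, b = fit_power_law([100, 400, 900], [20.0, 40.0, 60.0])
```

Fitting in log space makes the multiplicative error structure typical of effort data additive, which is why this form is widely used for benchmarking repositories.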

Grids are facing the challenge of seamless integration of the Grid power into everyday use. One critical component for this integration is responsiveness, the capacity to support on-demand computing and interactivity. Grid scheduling is involved at two levels in order to provide responsiveness: the policy level and the implementation level. The main contributions of this paper are as follows. First, we present a detailed analysis of the performance of the EGEE Grid with respect to responsiveness. Second, we examine two user-level schedulers located between the general scheduling layer and the application layer. These are the DIANE (distributed analysis environment) framework, a general-purpose overlay system, and a specialized, embedded scheduler for gPTM3D, an interactive medical image analysis application. Finally, we define and demonstrate a virtualization scheme, which achieves guaranteed turnaround time, schedulability analysis, and provides the basis for differentiated services. Both methods target a br...

In this study, developmental changes in duration of the icon (visual sensory store) were investigated with three converging tachistoscopic tasks. (1) Stimulus interruption detection (SID), a variation of the two-flash threshold method, was performed by 29 first- and 32 fifth-graders, and 32 undergraduates. Icon duration was estimated by stimulus…

Although scheduling multiple tasks in motor learning to maximize long-term retention of performance is of great practical importance in sports training and motor rehabilitation after brain injury, it is unclear how to do so. We propose here a novel theoretical approach that uses optimal control theory and computational models of motor adaptation to determine schedules that maximize long-term retention predictively. Using Pontryagin's maximum principle, we derived a control law that determines the trial-by-trial task choice that maximizes overall delayed retention for all tasks, as predicted by the state-space model. Simulations of a single session of adaptation with two tasks show that when task interference is high, there exists a threshold in relative task difficulty below which the alternating schedule is optimal. Only for large differences in task difficulties do optimal schedules assign more trials to the harder task. However, over the parameter range tested, alternating schedules yield long-term retention performance that is only slightly inferior to performance given by the true optimal schedules. Our results thus predict that in a large number of learning situations wherein tasks interfere, intermixing tasks with an equal number of trials is an effective strategy in enhancing long-term retention.

A plant's reproductive allocation (RA) schedule describes the fraction of surplus energy allocated to reproduction as it increases in size. While theorists use RA schedules as the connection between life history and energy allocation, little is known about RA schedules in real vegetation. Here we review what is known about RA schedules for perennial plants using studies either directly quantifying RA or collecting data from which the shape of an RA schedule can be inferred. We also briefly review theoretical models describing factors by which variation in RA may arise. We identified 34 studies from which aspects of an RA schedule could be inferred. Within those, RA schedules varied considerably across species: some species abruptly shift all resources from growth to reproduction; most others gradually shift resources into reproduction, but under a variety of graded schedules. Available data indicate that the maximum fraction of energy allocated to reproduction ranges from 0.1 to 1 and that shorter-lived species tend to have higher initial RA and increase their RA more quickly than do longer-lived species. Overall, our findings indicate that little data exist about RA schedules in perennial plants. Available data suggest a wide range of schedules across species. Collection of more data on RA schedules would enable a tighter integration between observation and a variety of models predicting optimal energy allocation, plant growth rates, and biogeochemical cycles.

This study examined the effects of a tri-schedule on the academic achievement of students in a high school. The tri-schedule consists of traditional, 4x4 block, and hybrid schedules running at the same time in the same high school. Effectiveness of the schedules was determined from the state-mandated test of basic skills in reading, language, and mathematics. Students who were in a particular schedule their freshman year were tested at the beginning of their sophomore year. A statistical ANCOVA test was performed using the schedule types as independent variables and cognitive skill index and GPA as covariates. For reading and language, there was no statistically significant difference in test results. There was a statistically significant difference in mathematics computation. Block mathematics is an ideal format for obtaining more credits in mathematics, but the block format does little for mathematics achievement and conceptual understanding. The results have content-specific implications for schools, administrations, and school boards that are considering block scheduling adoption.

The present general purpose automated planner/scheduler generates parallel plans aimed at the achievement of goals having imposed time constraints, with both durations and start time windows being specifiable for sets of goal conditions. Deterministic durations of such parallel plan activities as actions, events triggered by circumstances, inferences, and scheduled events entirely outside the actor's control, are explicitly modeled and may be any computable function of the activity variables. The final plan network resembles a PERT chart. Examples are given from the traditional 'blocksworld', and from a realistic 'Spaceworld' in which an autonomous spacecraft photographs objects in deep space and transmits the information to earth.

The purpose of schedule management is to provide the framework for time-phasing, resource planning, coordination, and communicating the necessary tasks within a work effort. The intent is to improve schedule management by providing recommended concepts, processes, and techniques used within the Agency and private industry. The intended function of this handbook is two-fold: first, to provide guidance for meeting the scheduling requirements contained in NPR 7120.5, NASA Space Flight Program and Project Management Requirements, NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project Requirements, NPR 7120.8, NASA Research and Technology Program and Project Management Requirements, and NPD 1000.5, Policy for NASA Acquisition. The second function is to describe the schedule management approach and the recommended best practices for carrying out this project control function. With regards to the above project management requirements documents, it should be noted that those space flight projects previously established and approved under the guidance of prior versions of NPR 7120.5 will continue to comply with those requirements until project completion has been achieved. This handbook will be updated as needed, to enhance efficient and effective schedule management across the Agency. It is acknowledged that most, if not all, external organizations participating in NASA programs/projects will have their own internal schedule management documents. Issues that arise from conflicting schedule guidance will be resolved on a case by case basis as contracts and partnering relationships are established. It is also acknowledged and understood that all projects are not the same and may require different levels of schedule visibility, scrutiny and control. Project type, value, and complexity are factors that typically dictate which schedule management practices should be employed.

Different assumptions about travelers' scheduling preferences yield different measures of the cost of travel time variability. Only a few forms of scheduling preferences provide non-trivial measures which are additive over links in transport networks where link travel times are arbitrarily … of car drivers' route and mode choice under uncertain travel times. Our analysis exposes some important methodological issues related to complex non-linear scheduling models: one issue is identifying the point in time where the marginal utility of being at the destination becomes larger than the marginal …

… were formed. Four groups were subjected to short-term strength tests, and four groups were subjected to long-term tests. Creep and time to failure were monitored. Time to failure as a function of stress level was established and the reliability of stress level assessment was discussed. A significant mechanosorptive effect was demonstrated both in terms of increased creep and shortening of time to failure. The test results were employed for the calibration of four existing duration-of-load models. The effect of long-term loading was expressed as the stress level SL50 to cause failure after 50 years of loading … and of the short-term and long-term strengths. For permanent and imposed library loads, reliability-based estimation of the load duration factor gave almost the same results as direct, deterministic calibration. Keywords: Creep, damage models, duration of load, equal rank assumption, load duration factor, matched …

U.S. Department of Health & Human Services — The list contains the fee schedule amounts, floors, and ceilings for all procedure codes and payment category, jurisdication, and short description assigned to each...

… time by maximising expected total utility over the day, their departure times are conditional on rates of utility derived at these locations. For forecasting and economic evaluation of planning alternatives, it is desirable to have simple forms of utility rates with few parameters. Several forms … When the travel time is random, Noland and Small (1995) suggested using expected utility theory to derive the reduced form of expected travel time cost that includes the cost of TTV. For the α-β-γ formulation of scheduling preferences and exponential or uniform distribution of travel time, Noland and Small (1995) … The purpose of this paper is to explore how well these scheduling preferences explain behaviour, compared to other possible scheduling models, and whether empirical estimation of the more complex exponential scheduling preferences is feasible. We use data from a stated preference survey conducted among car …

The CERN Council held its 125th session on 20 June. Highlights of the meeting included confirmation that the LHC is on schedule for a 2007 start-up, and the announcement of a new organizational structure in 2004.

Typically, ground staff scheduling is centrally planned for each terminal in an airport. The advantage of this is that the staff is efficiently utilized, but a disadvantage is that staff spends considerable time walking between stands. In this paper a decentralized approach for ground staff scheduling is investigated. The airport terminal is divided into zones, where each zone consists of a set of stands geographically next to each other. Staff is assigned to work in only one zone and the staff scheduling is planned decentralized for each zone. The advantage of this approach is that the staff work in a smaller area of the terminal and thus spend less time walking between stands. When planning decentralized, the allocation of stands to flights influences the staff scheduling, since the workload in a zone depends on which flights are allocated to stands in the zone. Hence solving the problem …

U.S. Department of Health & Human Services — Outpatient clinical laboratory services are paid based on a fee schedule in accordance with Section 1833(h) of the Social Security Act. The clinical laboratory fee...

U.S. Department of Health & Human Services — This website is designed to provide information on services covered by the Medicare Physician Fee Schedule (MPFS). It provides more than 10,000 physician services,...

Given a set J of jobs, where each job j is associated with a release date r_j, deadline d_j and processing time p_j, our goal is to schedule all jobs using the minimum possible number of machines. Scheduling a job j requires selecting an interval of length p_j between its release date and deadline, and assigning it to a machine, with the restriction that each machine executes at most one job at any given time. This is one of the basic settings in resource-minimization job scheduling, and the classical randomized rounding technique of Raghavan and Thompson provides an O(log n / log log n)-approximation for it. This result has recently been improved to an O(√(log n))-approximation, and moreover an efficient algorithm for scheduling all jobs on O(OPT²) machines has been shown. We build on this prior work to obtain a constant factor approximation algorithm for the problem.
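For intuition, in the tight-window special case where d_j = r_j + p_j each job's interval is fixed, and the minimum number of machines can then be found exactly by the classical greedy interval-partitioning argument (a sketch with hypothetical data; the general problem in the abstract, with flexible windows, is much harder):

```python
import heapq

def min_machines_fixed(intervals):
    """Minimum machines for fixed intervals (interval partitioning):
    sweep jobs by start time, reusing a machine whose current job has
    already finished, otherwise opening a new machine. The intervals
    below are made up for illustration."""
    finish = []    # min-heap of machine finish times
    machines = 0
    for start, end in sorted(intervals):
        if finish and finish[0] <= start:
            heapq.heapreplace(finish, end)  # reuse the earliest-free machine
        else:
            heapq.heappush(finish, end)     # open a new machine
            machines += 1
    return machines
```

The answer equals the maximum number of intervals overlapping at any point, a lower bound that the greedy sweep attains; the flexibility of choosing each job's interval within [r_j, d_j] is exactly what makes the general problem hard.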

The schedule for the installation of the PAR slurry loop experiment in the South Facility of the ORR has been reviewed and revised. The design, fabrication, and installation are approximately two weeks behind schedule at this time due to many factors; however, indications are that this time can be made up. Design is estimated to be 75% complete, fabrication 32% complete, and installation 12% complete.

In this paper, we present the definition of maximum loop speedup, which is the metric of parallelism hidden in a loop body. We also study the classes of Do-loops and their dependences, as well as the parallelism they contain. How can such parallelism be exploited under a heterogeneous computing environment? The paper proposes several approaches: eliminating serial bottlenecks by means of heterogeneous computing, heterogeneous Do-all-loop scheduling, and heterogeneous Do-across scheduling. We find that, not only in theoretical analysis but also in experimental results, these schemes achieve better performance than in homogeneous computing.

The goal during the last few months has been to freeze and baseline as much as possible the schedules of various ATLAS systems and activities. The main motivations for the re-baselining of the schedules have been the new LHC schedule aiming at first collisions in early 2006 and the encountered delays in civil engineering as well as in the production of some of the detectors. The process was started by first preparing a new installation schedule that takes into account all the new external constraints and the new ATLAS staging scenario. The installation schedule version 3 was approved in the March EB and it provides the Ready For Installation (RFI) milestones for each system, i.e. the date when the system should be available for the start of the installation. TCn is now interacting with the systems aiming at a more realistic and resource loaded version 4 before the end of the year. Using the new RFI milestones as driving dates a new summary schedule has been prepared, or is under preparation, for each system....

Tardiness from scheduled start times is a common source of frustration for both operating room (OR) personnel and patients. Factors that influence tardiness were quantified in a companion paper and have been used to develop interventions that have the potential for reducing tardiness. Data from two surgical suites were used to compare the effectiveness of several interventions to reduce tardiness, including i) moving cases to different ORs on the afternoon of surgery, ii) recalculating the OR schedule when it is published to correct for average lateness in first cases of the day, iii) recalculating the OR schedule when it is published to correct for average service-specific case duration bias, and iv) scheduling a gap (time buffer) before the cases of a "to follow" surgeon if the day is expected to end early. These last three interventions involve creation of a modified schedule with revised start times that are more accurate for both patient and "to follow" surgeon. The surgeon performing the first case of the day would not be affected. Moving cases to different ORs when a room was running late produced a 50%-70% reduction in the tardiness for those cases that were moved. However, overall tardiness in each suite was reduced by only 6%-9%, because few cases were moved. Scheduling a gap between surgeons if the day was expected to end early reduced tardiness by more than 50% for those cases that were preceded by gaps. However, overall tardiness in each suite was reduced by only 4%-8%, because few gaps could be scheduled. In contrast, correcting for the combination of lateness in first cases of the day and service-specific case duration bias reduced overall tardiness in each suite by 30%-35%. Interventions which involve small numbers of cases have little potential to reduce overall tardiness. Generating a modified or auxiliary OR schedule that compensates for known causes of tardiness can significantly reduce patient and "to follow" surgeon waiting times. Modifying

The current schedule of data collection for the routine environmental surveillance program at the Hanford Site is provided. Questions about specific entries should be referred to the authors since modifications to the schedule are made during the year and special areas of study, usually of short duration, are not scheduled. The environmental surveillance program objectives are to evaluate the levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in Manual Chapter 0513, and to monitor Hanford operations for compliance with applicable environmental criteria given in Manual Chapter 0524 and Washington State Water Quality Standards. Air quality data obtained in a separate program are also reported. The collection schedule for potable water is shown but it is not part of the routine environmental surveillance program. Schedules are presented for the following subjects: air, Columbia River, sanitary water, surface water, ground water, foodstuffs, wildlife, soil and vegetation, external radiation measurement, portable instrument surveys, and surveillance of waste disposal sites.

A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.

In the collaborative product development project planning phase, a reasonable allocation plan can shorten project duration. Static scheduling is a useful way to obtain the allocation result. Matching degree and resource categories are introduced into the standard model of the multi-mode resource-constrained project scheduling problem to reflect three resource characteristics: (1) human resources differ from technology resources, (2) different technology resources influence project duration differently, and (3) technology resource quantities also influence duration. For the two static scheduling models, a simple genetic algorithm and a two-layer parthenogenetic algorithm are designed. Finally, a case study is presented to validate the method.
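
The standard decoding step behind such static scheduling models is a schedule-generation scheme. The sketch below is a textbook serial scheme for a single-mode, single-resource project, a much simpler setting than the multi-mode, multi-category model of the abstract; all activity names and numbers are invented for illustration.

```python
def serial_sgs(order, durations, preds, demand, capacity, horizon=10_000):
    """Serial schedule-generation scheme: place activities in a
    precedence-feasible order, each at the earliest time with enough
    remaining resource capacity throughout its duration."""
    usage = [0] * horizon          # resource units in use at each time step
    start = {}
    for a in order:
        # Earliest precedence-feasible start time.
        t = max((start[p] + durations[p] for p in preds.get(a, [])), default=0)
        # Shift right until capacity suffices for the whole activity.
        while any(usage[u] + demand[a] > capacity
                  for u in range(t, t + durations[a])):
            t += 1
        for u in range(t, t + durations[a]):
            usage[u] += demand[a]
        start[a] = t
    return start

# Three activities on one resource of capacity 3; C must follow A.
starts = serial_sgs(
    order=["A", "B", "C"],
    durations={"A": 2, "B": 2, "C": 2},
    preds={"C": ["A"]},
    demand={"A": 2, "B": 2, "C": 1},
    capacity=3,
)
# B cannot run alongside A (2 + 2 > 3), so it waits: {'A': 0, 'B': 2, 'C': 2}
```

A genetic algorithm of the kind mentioned in the abstract would evolve the `order` list (and, in the multi-mode case, a mode vector) and use such a scheme to decode each chromosome into a schedule.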

Artificial-intelligence software that automates scheduling developed in Operations Mission Planner (OMP) research project. Software used in both generation of new schedules and modification of existing schedules in view of changes in tasks and/or available resources. Approach based on iterative refinement. Although project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, also applicable to such terrestrial problems as scheduling production in factory.

A new proppant schedule is suggested to obtain maximum propping surface area and maximum fracture conductivity near the borehole. The small and middle size sands are suspended in the fracturing fluids under fracturing conditions to prop the total fracture surface area created by fracturing. The coarse sands pumped into the fracture later deposit at the dynamic width near the borehole during fracturing and reach the equilibrium height in the time interval of pumping coarse sands. Thus, fracture conductivity near the borehole is increased. Computer calculations of the sand program, the corresponding fluid program and pumping rates are also presented.

This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Satellite Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resource lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumption that one can assign money to customers. This assumption is reasonable; a

The Duration Calculus (abbreviated DC) represents a logical approach to the formal design of real-time systems, where real numbers are used to model time and Boolean-valued functions over time are used to model states and events of real-time systems. Since its introduction, DC has been applied to many...

The author considers the problem of the duration of development and its consequences for development assistance, in developing as well as developed countries. Emphasis is given to the influence of development aid and it is argued that the time dimension has important policy implicati

Recent work has shown that adapting to a visual or auditory stimulus of a particular duration leads to a repulsive distortion of the perceived duration of a subsequently presented test stimulus. This distortion seems to be modality-specific and manifests itself as an expansion or contraction of perceived duration dependent upon whether the test stimulus is longer or shorter than the adapted duration. It has been shown (Berger et al 2003, Journal of Vision 3, 406–412) that perceived events can be as effective as actual events in inducing improvements in performance. In light of this, we investigated whether an illusory visual duration was capable of inducing a duration after-effect in a visual test stimulus that was actually no different in duration from the adaptor. Pairing a visual stimulus with a concurrent auditory stimulus of subtly longer or shorter duration expands or contracts the duration of the visual stimulus. We mapped out this effect and then chose two auditory durations (one long, one short) that produced the maximum distortion in the perceived duration of the visual stimulus. After adapting to this bimodal stimulus, our participants were asked to reproduce a visual duration. Group data showed that participants, on average, reproduced the physical duration of the visual test stimulus accurately; in other words, there was no consistent effect of adaptation to an illusory duration.

Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...

Marijuana smokers are frequently observed to hold the smoke in their lungs for prolonged periods (10-15 sec) apparently in the belief that prolonged breathholding intensifies the effects of the drug. The actual influence of breathhold duration on response to marijuana smoke has not been studied. The present study examined the effects of systematic manipulation of breathhold duration on the physiological, cognitive and subjective response to marijuana smoke in a group of eight regular marijuana smokers. Subjects were exposed to each of three breathhold duration conditions (0, 10 and 20 sec) on three occasions, scheduled according to a randomized block design. A controlled smoking procedure was used in which the number of puffs, puff volume and postpuff inhalation volume were held constant. Expired air carbon monoxide levels were measured before and after smoking to monitor smoke intake. Typical marijuana effects (increased heart rate, increased ratings of "high" and impaired memory performance) were observed under each of the breathhold conditions, but there was little evidence that response to marijuana was a function of breathhold duration.

In the next years, processor architectures based on much larger numbers of cores will be most likely the model to continue 'Moore's Law' style throughput gains. This not only results in many more jobs in parallel running the LHC Run 1 era monolithic applications, but also the memory requirements of these processes push the workernode architectures to the limit. One solution is parallelizing the application itself, through forking and memory sharing or through threaded frameworks. CMS is following all of these approaches and has a comprehensive strategy to schedule multicore jobs on the GRID based on the glideinWMS submission infrastructure. The main component of the scheduling strategy, a pilot-based model with dynamic partitioning of resources that allows the transition to multicore or whole-node scheduling without disallowing the use of single-core jobs, is described. This contribution also presents the experiences made with the proposed multicore scheduling schema and gives an outlook of further developments working towards the restart of the LHC in 2015.

This report draws on the fundamental regularity exhibited by age profiles of migration all over the world to develop a system of hypothetical model schedules that can be used in multiregional population analyses carried out in countries that lack adequate migration data.

We describe an assignment problem particular to the personnel scheduling of organisations such as laboratories. Here we have to assign tasks to employees. We focus on the situation where this assignment problem reduces to constructing maximal matchings in a set of interrelated bipartite graphs. We d

The coordination of activities and resources in order to establish an effective production flow is central to the management of construction projects. The traditional technique for coordination of activities and resources in construction projects is CPM scheduling, which has been the predominant scheduling method since it was introduced in the late 1950s. Over the years, CPM has proven to be a very powerful technique for planning, scheduling and controlling projects, which among other things is indicated by the development of a large number of CPM-based software applications available on the market. However, CPM is primarily an activity-based method that takes the activity as the unit of focus, and criticism has been raised, specifically in the case of construction projects, that the method deals deficiently with the management of construction work and the continuous flow of resources. To seek solutions...

Presented is a computer program written in BASIC that generates round-robin schedules for team matches in competitions. The program was originally created to help teams in a tennis league play one match against every other team. Part of the creation of the program involved the use of modulo arithmetic.
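
The modulo-arithmetic construction behind such round-robin generators is commonly known as the circle method: one team stays fixed while the others rotate. A minimal Python sketch (not the original BASIC program):

```python
def round_robin(n):
    """Round-robin schedule for n teams (n even) via the circle method:
    team n-1 stays fixed while teams 0..n-2 rotate modulo n-1."""
    rounds = []
    for r in range(n - 1):
        # Rotate the first n-1 teams by r positions, append the fixed team.
        rotated = [(i + r) % (n - 1) for i in range(n - 1)] + [n - 1]
        # Pair the i-th team from the front with the i-th from the back.
        pairs = [(rotated[i], rotated[n - 1 - i]) for i in range(n // 2)]
        rounds.append(pairs)
    return rounds

schedule = round_robin(6)
# 5 rounds of 3 matches; every team meets every other team exactly once.
```

Because n-1 is odd, the index sums of the pairs sweep every residue class modulo n-1, which is why each pair of teams meets exactly once across the n-1 rounds.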

The area of personnel scheduling is very broad. Here we focus on the ‘shift assignment problem’. Our aim is to discuss how ORTEC HARMONY handles this planning problem. In particular we go into the structure of the optimization engine in ORTEC HARMONY, which uses techniques from genetic algorithms, l

Rats' lever pressing produced tokens according to a 20-response fixed-ratio schedule. Sequences of token schedules were reinforced under a second-order schedule by presentation of periods when tokens could be exchanged for food pellets. When the exchange period schedule was a six-response fixed ratio, patterns of completing the component token schedules were bivalued, with relatively long and frequent pauses marking the initiation of each new sequence. Altering the exchange period schedule to a six-response variable ratio resulted in sharp reductions in the frequency and duration of these initial pauses, and increases in overall rates of lever pressing. These results are comparable to those ordinarily obtained under simple fixed-ratio and variable-ratio schedules.

Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed "strong motion duration" has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report concentrates on energy-based strong motion duration definitions.
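
One widely used energy-based definition (not necessarily the one the report adopts) is the 5-95% significant duration: the time for the normalized cumulative squared acceleration, proportional to Arias intensity, to grow from 5% to 95% of its final value. A sketch with a synthetic record:

```python
import numpy as np

def significant_duration(accel, dt, lo=0.05, hi=0.95):
    """5-95% significant duration of an acceleration time series:
    time span over which the normalized cumulative squared
    acceleration rises from `lo` to `hi` of its total."""
    energy = np.cumsum(accel ** 2) * dt
    energy /= energy[-1]                      # normalize to [0, 1]
    t = np.arange(len(accel)) * dt
    t_lo = t[np.searchsorted(energy, lo)]
    t_hi = t[np.searchsorted(energy, hi)]
    return t_hi - t_lo

# Synthetic record: noise modulated by an envelope peaking near t = 4 s,
# so most of the energy (and hence the significant duration) is concentrated there.
rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.01)
envelope = np.exp(-0.5 * ((t - 4.0) / 1.0) ** 2)
accel = envelope * rng.standard_normal(t.size)
d595 = significant_duration(accel, 0.01)
```

The low-amplitude tail of the record contributes almost nothing to the cumulative energy, which is exactly why such definitions discard it.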

PURPOSE: To study the relationship between the duration of symptoms before the start of radiotherapy and treatment outcome in Stage I-III glottic cancer. METHODS AND MATERIALS: From 1965 to 1997, 611 glottic cancer patients from the Southern Region of Denmark were treated with primary radiotherapy. A total of 544 patients fulfilled the criteria for inclusion in the study (Stage I-III glottic cancer, a duration of symptoms of 36 months or less, primary radiotherapy with at least 50 Gy and sufficient data for analysis). The total radiation dose ranged from 50.0 to 71.6 Gy in 22 to 42 ... of symptoms was a significant factor (p ...) ... symptoms was statistically...

Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amo...

A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.

In the two-step process of student scheduling, the initial phase of course selection is the most important. At Chesterton High School in Indiana, student self-scheduling is preferred over computer loading.

Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
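
For the Mean Energy Model mentioned above, the maximum-entropy distribution over finitely many states is the Gibbs form p_i ∝ exp(-βE_i), with β chosen so the mean energy matches the constraint. A small numerical sketch (the bisection bracket is an assumption, not part of the theory):

```python
import math

def maxent_mean_energy(energies, target_mean, tol=1e-10):
    """Maximum-entropy distribution over finite states subject to a
    mean-energy constraint. The solution has the Gibbs form
    p_i proportional to exp(-beta * E_i); beta is found by bisection,
    using the fact that <E> is decreasing in beta."""
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z

    lo, hi = -50.0, 50.0          # assumed bracket for beta
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_at(mid) > target_mean:
            lo = mid              # mean too high: increase beta
        else:
            hi = mid
    beta = (lo + hi) / 2
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w], beta

p, beta = maxent_mean_energy([0.0, 1.0, 2.0], 1.0)
# The target equals the unweighted mean, so beta ~ 0 and p is uniform:
# the unconstrained maximum-entropy distribution.
```

In the Code Length Game reading, this Gibbs distribution is simultaneously the entropy maximizer and the basis of the optimal coding strategy against the constraint set.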

This paper presents another formal proof for the correctness of the Deadline Driven Scheduler (DDS). This proof is given in terms of Duration Calculus which provides abstraction for random preemption of processor. Compared with other approaches, this proof relies on many intuitive facts. Therefore this proof is more intuitive, while it is still formal.

In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
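
The robustness argument rests on the shape of the correntropy objective: with a Gaussian kernel, a sample's contribution saturates as its error grows, so outliers are effectively down-weighted. A minimal sketch of the empirical correntropy (the kernel width is an assumed parameter):

```python
import math

def correntropy(y_true, y_pred, sigma=1.0):
    """Empirical correntropy: mean Gaussian-kernel similarity between
    labels and predictions. Each term is bounded by 1, so a single
    large error cannot dominate the objective, unlike squared loss."""
    return sum(math.exp(-((a - b) ** 2) / (2 * sigma ** 2))
               for a, b in zip(y_true, y_pred)) / len(y_true)

clean = correntropy([1, 1, 1, 1], [1, 1, 1, 1])      # perfect fit: 1.0
one_bad = correntropy([1, 1, 1, 10], [1, 1, 1, 1])   # one gross outlier
# The outlier's kernel term is ~0, so the objective drops only to ~0.75,
# whereas mean squared error would jump from 0 to 20.25.
```

Maximizing this quantity over predictor parameters (plus a regularizer, as in the paper) yields the robust learning problem described above.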

The effects of conjugate reinforcement on the responding of 13 college students were examined in three experiments. Conjugate reinforcement was provided via key presses that changed the clarity of pictures displayed on a computer monitor in a manner proportional to the rate of responding. Experiment 1, which included seven parameters of clarity change per response, revealed that responding decreased as the percentage clarity per response increased for all five participants. These results indicate that each participant's responding was sensitive to intensity change, which is a parameter of conjugate reinforcement schedules. Experiment 2 showed that responding increased during conjugate reinforcement phases and decreased during extinction phases for all four participants. Experiment 3 also showed that responding increased during conjugate reinforcement and further showed that responding decreased during a conjugate negative punishment condition for another four participants. Directions for future research with conjugate schedules are briefly discussed.

Automobile manufacturers are controlled by stringent government regulations for safety and fuel emissions and are motivated towards adding more advanced features and sophisticated applications to the existing electronic system. Ever increasing customers' demands for a high level of comfort also necessitate providing even more sophistication in the vehicle electronics system. All these directly make the vehicle software system more complex and computationally more intensive. In turn, this demands very high computational capability of the microprocessor used in the electronic control unit (ECU). In this regard, multicore processors have already been implemented in some of the task-rigorous ECUs such as power train, image processing and infotainment. To achieve greater performance from these multicore processors, parallelized ECU software needs to be efficiently scheduled by the underlying operating system for execution, to utilize all the computational cores to the maximum extent possible and meet the real-time constraints. In this paper, we propose a dynamic task scheduler for a multicore engine control ECU that provides maximum CPU utilization, minimized preemption overhead and minimum average waiting time, with all tasks meeting their real-time deadlines, compared to the static priority scheduling suggested by the Automotive Open Systems Architecture (AUTOSAR).
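
Background for the static-versus-dynamic comparison: for independent periodic tasks with deadlines equal to periods, dynamic-priority EDF is schedulable up to 100% utilization, while the classic sufficient bound for static rate-monotonic priorities is lower. A sketch of the two tests (the task set is invented; the paper's scheduler additionally models preemption overhead and waiting time):

```python
def edf_ok(tasks):
    """Liu-Layland result: implicit-deadline periodic tasks are
    schedulable under dynamic-priority EDF iff U = sum(C/T) <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

def rm_bound_ok(tasks):
    """Sufficient (not necessary) utilization bound for static
    rate-monotonic priorities: U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    return sum(c / t for c, t in tasks) <= n * (2 ** (1 / n) - 1)

tasks = [(3, 5), (1, 4), (1, 10)]   # (worst-case execution time, period)
# U = 0.6 + 0.25 + 0.1 = 0.95: passes the EDF test but exceeds the
# rate-monotonic sufficient bound of about 0.78 for three tasks.
```

This utilization headroom is one standard motivation for preferring dynamic scheduling over fixed priorities on heavily loaded ECUs.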

We consider a version of multiprocessor scheduling with the special feature that jobs may be rejected for a certain penalty. An instance of the problem is given by m identical parallel machines and a set of n jobs, each job characterized by a processing time and a penalty. In the on-line version the jobs arrive one by one and we have to schedule or reject a job before we have any information about future jobs. The objective is to minimize the makespan of the schedule for accepted jobs plus the sum of the penalties of rejected jobs. The main result is a 1 + φ ≈ 2.618 competitive algorithm for the on-line version of the problem, where φ is the golden ratio. A matching lower bound shows that this is the best possible algorithm working for all m. For fixed m we give improved bounds; in particular, for m = 2 we give an optimal φ ≈ 1.618 competitive algorithm. For the off-line problem we present a fully polynomial approximation scheme for fixed m and an approximation algorithm which runs in time O(n log n) for arbitrary m and guarantees a 2 - 1/m approximation ratio.
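
To make the objective concrete, here is a simple greedy rule in the spirit of the problem, not the paper's competitive algorithm: on arrival, reject a job if its penalty is less than the makespan increase it would cause on the least-loaded machine, otherwise schedule it there.

```python
import heapq

def greedy_schedule_with_rejection(jobs, m):
    """Illustrative online greedy heuristic (no competitive-ratio claim):
    each job (processing_time, penalty) goes to the least-loaded of the
    m machines unless rejecting it is cheaper than the makespan growth
    it causes. Returns makespan of accepted jobs + penalties paid."""
    loads = [0.0] * m          # min-heap of machine loads
    makespan = 0.0
    penalty_paid = 0.0
    for p, w in jobs:
        least = heapq.heappop(loads)
        increase = max(least + p, makespan) - makespan
        if w < increase:                   # rejecting is cheaper
            penalty_paid += w
            heapq.heappush(loads, least)
        else:
            heapq.heappush(loads, least + p)
            makespan = max(makespan, least + p)
    return makespan + penalty_paid

jobs = [(4, 1), (3, 5), (2, 0.5)]          # (processing time, penalty)
obj = greedy_schedule_with_rejection(jobs, m=2)
# Job (4, 1) is rejected (penalty 1 < growth 4); the rest are accepted,
# giving makespan 3 plus penalty 1: objective 4.
```

The paper's algorithms refine exactly this accept/reject trade-off, with thresholds tuned via the golden ratio to obtain the stated competitive guarantees.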

In the next years, processor architectures based on much larger numbers of cores will most likely be the model to continue 'Moore's Law' style throughput gains. This not only results in many more jobs running in parallel the LHC Run 1 era monolithic applications; the memory requirements of these processes also push the worker-node architectures to the limit. One solution is parallelizing the application itself, through forking and memory sharing or through threaded frameworks. CMS is following all of these approaches and has a comprehensive strategy to schedule multi-core jobs on the GRID based on the glideinWMS submission infrastructure. We will present the individual components of the strategy, from special site-specific queues used during provisioning of resources and implications for scheduling, to dynamic partitioning within a single pilot to allow the transition to multi-core or whole-node scheduling at the site level without disallowing single-core jobs. In this presentation, we will present the experiences mad...

Almost all of the current process scheduling algorithms which are used in modern operating systems (OS) have their roots in the classical scheduling paradigms which were developed during the 1970s. But modern computers have different types of software loads and user demands. We think it is important to run what the user wants at the current moment. A user can be a human, sitting in front of a desktop machine, or it can be another machine sending a request to a server through a network connection. We think that the OS should become intelligent enough to distinguish between different processes and allocate resources, including CPU, to those processes which need them most. In this work, as a first step to make the OS aware of the current state of the system, we consider process dependencies and interprocess communications. We are developing a model that considers the need to satisfy interactive users and other possible remote users or customers, by making scheduling decisions based on process dependencies and interproce...

Citing literature that supports the benefits of flexible scheduling on student achievement, the author exhorts readers to campaign for flexible scheduling in their library media centers. She suggests tips drawn from the work of Graziano (2002), McGregor (2006) and Stripling (1997) for making a smooth transition from fixed to flexible scheduling:…

. This holds in the classical context of arbitrary schedulers, but it has been argued that this class of schedulers is unrealistically powerful. This paper studies a strictly coarser notion of bisimilarity, which still enjoys these properties in the context of realistic subclasses of schedulers: Trace...

In this paper the authors propose a modified proportional-share scheduling algorithm that takes into account the characteristics of continuous media, namely continuity and time dependency. The proposed scheduling algorithm degrades gracefully in overloaded situations and reduces the number of context switches. It is evaluated using several numerical tests under various conditions, especially overload.
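
A standard baseline for proportional-share scheduling, which proposals like the one above typically modify, is stride scheduling: each task advances by a stride inversely proportional to its weight, and the task with the smallest cumulative "pass" runs next. A minimal sketch (this is the plain textbook mechanism, not the authors' modified algorithm):

```python
import heapq

def stride_schedule(weights, slots):
    """Plain stride scheduling: stride = K / weight; at every slot the
    task with the smallest pass value runs and its pass advances by its
    stride, so received CPU time converges to the weight ratios."""
    K = 10_000.0
    heap = [(K / w, K / w, name) for name, w in weights.items()]
    heapq.heapify(heap)                    # entries: (pass, stride, name)
    counts = {name: 0 for name in weights}
    for _ in range(slots):
        pas, stride, name = heapq.heappop(heap)
        counts[name] += 1
        heapq.heappush(heap, (pas + stride, stride, name))
    return counts

counts = stride_schedule({"A": 3, "B": 1}, 8)
# A receives three times the slots of B: {'A': 6, 'B': 2}
```

Overload handling and context-switch reduction, the concerns of the paper, would then be layered on top of such a proportional-share core.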

The JAC Observation Management Project (OMP) provides software for the James Clerk Maxwell (JCMT) and the United Kingdom Infrared (UKIRT) telescopes that manages the life-cycle of flexibly scheduled observations. Its aim is to increase observatory efficiency under flexible (queue) scheduled observing, without depriving the principal investigator (PI) of the flexibility associated with classical scheduling.

The relation between the duration of prior wakefulness and EEG power density during sleep in humans was assessed by means of a study of naps. The duration of prior wakefulness was varied from 2 to 20 hr by scheduling naps at 1000 hr, 1200 hr, 1400 hr, 1600 hr, 1800 hr, 2000 hr, and 0400 hr. In

Each phase of software design consumes some resources and hence has a cost associated with it. In most cases the cost will vary to some extent with the amount of time consumed by each phase. The total cost of the project, which is the aggregate of the activity costs and also depends upon the project duration, can be cut down to some extent. The aim is always to strike a balance between cost and time and to obtain an optimum software project schedule. An optimum minimum-cost project schedule implies the lowest possible cost and the associated time for software project management. In this research an attempt has been made to solve the cost and schedule problem of a software project using a PERT network showing the details of the activities to be carried out for software project development/management, with the help of crashing: reducing the software project duration at minimum cost by locating a minimal cut in the duration of an activity of the original project design network. This minimal cut is then utilized to identify the project phases which should experience a duration modification in order to achieve the total software duration reduction. Crashing PERT networks can save a significant amount of money in crashing and overrun costs of a company. Even if there are no direct costs in the form of penalties for late completion of projects, there are likely to be intangible costs because of reputation damage.
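
Crashing always operates on the critical path: only shortening activities on a longest path can shorten the project. The sketch below is a textbook greedy heuristic that crashes the cheapest critical activity one unit at a time; the paper's minimal-cut method is more sophisticated, and the example network and costs are invented.

```python
def crash_schedule(durations, preds, min_dur, cost, budget):
    """Greedy one-unit crashing of an activity-on-node network.
    Repeatedly shorten the cheapest crashable activity lying on a
    critical path, until no affordable option remains.
    Returns (final project duration, money spent)."""
    durations = dict(durations)
    succs = {a: [] for a in durations}
    for a, ps in preds.items():
        for p in ps:
            succs[p].append(a)

    def longest(edges):
        """Longest path 'through' each activity, following `edges`."""
        memo = {}
        def f(a):
            if a not in memo:
                memo[a] = durations[a] + max(
                    (f(n) for n in edges.get(a, [])), default=0)
            return memo[a]
        return {a: f(a) for a in durations}

    spent = 0.0
    while True:
        head = longest(preds)   # longest path ending at a (inclusive)
        tail = longest(succs)   # longest path starting at a (inclusive)
        T = max(head.values())  # project duration
        critical = [a for a in durations
                    if head[a] + tail[a] - durations[a] == T]
        options = [a for a in critical
                   if durations[a] > min_dur[a] and spent + cost[a] <= budget]
        if not options:
            return T, spent
        a = min(options, key=lambda x: cost[x])
        durations[a] -= 1
        spent += cost[a]

T, spent = crash_schedule(
    durations={"A": 2, "B": 3, "C": 1, "D": 2},
    preds={"B": ["A"], "C": ["A"], "D": ["B", "C"]},
    min_dur={"A": 1, "B": 1, "C": 1, "D": 2},
    cost={"A": 3, "B": 1, "C": 1, "D": 5},
    budget=3,
)
# B, the cheapest critical activity, is crashed twice: duration 7 -> 5.
```

Note how the heuristic stops once a second path (through C) also becomes critical and nothing affordable can shorten both, which is exactly the situation the minimal-cut formulation is designed to handle.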

This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. The detector, called the equalized near-maximum-likelihood detector, combines a nonlinear equalizer with a near-maximum-likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near-maximum-likelihood detector.

A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
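Classic MaxEnt point probabilities can be computed directly for a finite support with a single mean constraint. The sketch below uses the standard loaded-die illustration (support 1..6, mean fixed at 4.5), a textbook example rather than the case worked in this paper, and finds the Lagrange multiplier by bisection.

```python
import math

def maxent_probs(values, mean):
    """Maximum-entropy distribution over `values` subject to a fixed mean:
    p_i proportional to exp(-lam * x_i), with lam chosen so that E[x] = mean."""
    def mean_at(lam):
        w = [math.exp(-lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    lo, hi = -50.0, 50.0  # mean_at is decreasing in lam on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) > mean:
            lo = mid  # mean too high => lam must increase
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_probs([1, 2, 3, 4, 5, 6], 4.5)
```

The generalized density described in the abstract would then follow by propagating a distribution over the constraint value through this function, e.g. sampling the mean from a Gaussian and calling `maxent_probs` on each sample to obtain a Monte Carlo picture of the induced uncertainty over the MaxEnt probabilities.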

Background: Our understanding of the transmission dynamics of respiratory syncytial virus (RSV) infection will be better informed with improved data on the patterns of shedding in cases not limited only to hospital admissions. Methods: In a household study, children testing RSV positive by direct immunofluorescent antibody test (DFA) were enrolled. Nasal washings were scheduled right away, then every three days until day 14, every 7 days until day 28 and every 2 weeks until a maximum of 16 weeks, or until the first DFA-negative RSV specimen. The relationship between host factors, illness severity and viral shedding was investigated using Cox regression methods. Results: From 151 families a total of 193 children were enrolled with a median age of 21 months (range 1-164 months), 10% infants and 46% male. The rate of recovery from infection was 0.22/person/day (95% CI 0.19-0.25), equivalent to a mean duration of shedding of 4.5 days (95% CI 4.0-5.3), with a median duration of shedding of 4 days (IQR 2-6, range 1-14). Children with a history of RSV infection had a 40% increased rate of recovery, i.e. shorter duration of viral shedding (hazard ratio 1.4, 95% CI 1.01-1.86). The rate of cessation of shedding did not differ significantly between males and females, by severity of infection or by age. Conclusion: We provide evidence of a relationship between the duration of shedding and history of infection, which may have a bearing on the relative role of primary versus re-infections in RSV transmission in the community.

This paper deals with the representation of scheduling results and introduces a new tool for visualization and simulation in time scheduling called VISIS. The purpose of this tool is to provide an environment for visualization, e.g. in production line scheduling. The tool also provides a way to simulate the influence of a schedule on a user-defined system, e.g. for designing filters in digital signal processing. VISIS arises from representing scheduling results using the well-known Gantt chart. The application is implemented in the Matlab programming environment using Simulink and the Virtual Reality toolbox.

Scheduling is a fundamental function of the operating system. For scheduling, system resources are shared among the processes to be executed. CPU scheduling is a technique by which processes are allocated to the CPU for a specific time quantum. In this paper different scheduling algorithms are reviewed against parameters such as running time, burst time and waiting time. The reviewed algorithms are First Come First Serve, Shortest Job First, Round Robin, and Priority scheduling.
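The waiting-time comparison such reviews perform can be reproduced in a few lines. The sketch below is a minimal illustration with invented burst times and all jobs arriving at time zero: FCFS serves jobs in arrival order, while SJF serves the shortest first, which provably minimizes average waiting time in this setting.

```python
def avg_waiting_time(bursts):
    """Average waiting time for a given service order (all jobs arrive at t=0)."""
    wait, t = 0, 0
    for b in bursts:
        wait += t  # this job waits for everything served before it
        t += b
    return wait / len(bursts)

bursts = [6, 8, 7, 3]                 # illustrative CPU burst times
fcfs = avg_waiting_time(bursts)        # First Come First Serve: arrival order
sjf = avg_waiting_time(sorted(bursts)) # Shortest Job First: ascending bursts
```

Round Robin and Priority scheduling would additionally need a time quantum and per-process priorities, respectively, which this sketch omits.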

Long duration flights (LDF) require special management to take the best decisions in terms of ballast consumption and instant of separation. In contrast to short duration flights, where meteorological conditions are relatively well known, for LDF we need to include the accuracy of the meteorological model in trajectory simulations. Dispersions in the model fields (wind, temperature and IR fluxes) could make the mission incompatible with safety rules, authorized zones and other flight requirements. Recent CNES developments for LDF act on three main axes: 1. Although the ECMWF-NCEP forecast allows generating simulations from a 4D point (altitude, latitude, longitude and UT time), the result is deterministic, not statistical. To take model dispersion into account, a meteorological NCEP database was analyzed and a comparison between Analysis (AN) and Forecast (FC) for the same time frame was performed. The results of this work allow implementing wind and temperature dispersions in the balloon flight simulator. 2. For IR fluxes, NCEP does not provide ascending IR fluxes in AN mode but only in FC mode. To obtain the IR fluxes for each time frame, satellite images are used; a comparison between FC and satellite measurements was performed, and its results allow implementing flux dispersions in the balloon flight simulator. 3. An improved cartography containing a vast database has been included in the balloon flight simulator. Combining these three points with balloon flight dynamics, we have obtained two new tools for observing balloon evolution and risk: ASTERISK (Statistic Tool for Evaluation of Risk) for calculations and OBERISK (Observing Balloon Evolution and Risk) for visualization. Depending on the balloon type (super pressure, zero pressure or MIR), the relevant information for the flight manager differs. The goal is to take the best decision according to the global situation to obtain the longest possible flight duration.

This new edition of the well-established text Scheduling: Theory, Algorithms, and Systems provides up-to-date coverage of important theoretical models in the scheduling literature as well as important scheduling problems that appear in the real world. The accompanying website includes supplementary material in the form of slide-shows from industry as well as movies that show actual implementations of scheduling systems. The main structure of the book, as per previous editions, consists of three parts. The first part focuses on deterministic scheduling and the related combinatorial problems. The second part covers probabilistic scheduling models; in this part it is assumed that processing times and other problem data are random and not known in advance. The third part deals with scheduling in practice; it covers heuristics that are popular with practitioners and discusses system design and implementation issues. All three parts of this new edition have been revamped, streamlined, and extended. The reference...

For most scheduling problems the set of machines is fixed initially and remains unchanged for the duration of the problem. Recently Imreh and Noga proposed to add the concept of machine cost to scheduling problems and considered the so-called List Model problem. An online algorithm with a competitive ratio of 1.618 was given, while the lower bound is 4/3. In this paper, two different semi-online versions of this problem are studied. In the first case, it is assumed that the processing time of the largest job is known a priori. A semi-online algorithm is presented with competitive ratio at most 1.5309, while the lower bound is 4/3. In the second case, it is assumed that the total processing time of all jobs is known in advance. A semi-online algorithm is presented with competitive ratio at most 1.414, while the lower bound is 1.161. It is shown that the additional partial information about the jobs leads to the possibility of constructing a schedule with a smaller competitive ratio than that of online algorithms.
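For context, the online baseline in this line of work is Graham-style list scheduling: each arriving job is assigned to the currently least-loaded machine. The sketch below shows only that baseline on a fixed number of machines; the machine-cost aspect of the List Model and the paper's semi-online rules are omitted, and the job sizes are invented.

```python
import heapq

def online_list_schedule(jobs, m):
    """Greedy online rule: each arriving job goes to the least-loaded machine.
    Returns the resulting makespan. Loads are kept in a min-heap."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in jobs:
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
    return max(loads)

makespan = online_list_schedule([2, 3, 4, 6, 2, 2], m=2)
```

Semi-online variants improve on this by exploiting advance knowledge, e.g. reserving capacity once the largest job size or the total processing time is known up front.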

This study analyzed 363 Clark County Department of Public Works (CCDPW) projects to determine construction cost and schedule overruns in various types and sizes of projects. The sample projects were constructed from 1991 to 2008, with a total construction cost of $1.85 billion, equivalent to 2012 cost. A one-factor ANOVA test was conducted to determine whether construction cost and schedule overruns varied significantly based on types and sizes of the projects. The study showed that large, long-duration projects had significantly higher cost and schedule overruns than smaller, short-duration projects.

There seems to be a significant gap between the theoretical and the practical aspects of scheduling problems in the job shop environment. Theoretically, scheduling systems are designed on the basis of an optimum approach to the scheduling model. In practice, however, the optimum that is built into scheduling applications faces challenges when dealing with the dynamic character of a scheduling system, for instance machine breakdowns or changes of orders. Scheduling systems have become quite complex in the past few years. Competitive business environments and shorter product life cycles are the imminent challenges faced by many companies these days. These challenges push companies to adopt a demand-driven supply chain in their business environment. A demand-driven supply chain incorporates the customer view into the supply chain processes; as a consequence, scheduling, as a core process of the demand-driven supply chain, must also reflect the customer view. In addition, other approaches to solving scheduling problems, for instance approaches based on human factors, prefer the scheduling system to be more flexible in both design and implementation. After discussing these factors, the authors propose the integration of a different set of criteria for the development of scheduling systems which not only offers better flexibility but also an increased customer focus.

This schedule is available for the contract purchase of Firm Power to be used within the Pacific Northwest (PNW). Priority Firm (PF) Power may be purchased by public bodies, cooperatives, and Federal agencies for resale to ultimate consumers, for direct consumption, and for Construction, Test and Start-Up, and Station Service. Rates in this schedule are in effect beginning October 1, 2006, and apply to purchases under requirements Firm Power sales contracts for a three-year period. The Slice Product is only available for public bodies and cooperatives who have signed Slice contracts for the FY 2002-2011 period. Utilities participating in the Residential Exchange Program (REP) under Section 5(c) of the Northwest Power Act may purchase Priority Firm Power pursuant to the Residential Exchange Program. Rates under contracts that contain charges that escalate based on BPA's Priority Firm Power rates shall be based on the three-year rates listed in this rate schedule in addition to applicable transmission charges. This rate schedule supersedes the PF-02 rate schedule, which went into effect October 1, 2001. Sales under the PF-07 rate schedule are subject to BPA's 2007 General Rate Schedule Provisions (2007 GRSPs). Products available under this rate schedule are defined in the 2007 GRSPs. For sales under this rate schedule, bills shall be rendered and payments due pursuant to BPA's 2007 GRSPs and billing process.

Scheduling with learning effects has been widely studied. However, there are situations where the learning effect might accelerate. In this paper, we propose a new model where the learning effect accelerates as time goes by. We derive the optimal solutions for the single-machine problems to minimize the makespan, total completion time, total weighted completion time, maximum lateness, maximum tardiness, and total tardiness.
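A common way to make such results concrete is the position-based learning-effect model, in which the job in position r takes p·r^a time with a < 0 (due to Biskup; simpler than the accelerating time-dependent effect proposed in the paper). Under that model it is known that SPT order minimizes total completion time, which a brute-force check over an invented instance confirms:

```python
from itertools import permutations

def total_completion_time(seq, a=-0.3):
    """Total completion time under a position-based learning effect:
    the job in position r takes p * r**a time (a < 0 => later positions are faster)."""
    t, total = 0.0, 0.0
    for r, p in enumerate(seq, start=1):
        t += p * r**a
        total += t
    return total

jobs = [5, 2, 8, 4]  # illustrative processing times
best = min(permutations(jobs), key=total_completion_time)  # exhaustive search
```

The exhaustive search recovers the SPT (ascending) sequence, matching the classical result; whether the same simple priority rules survive in the paper's time-accelerating model is exactly what its analysis establishes.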

The Large Synoptic Survey Telescope (LSST) is a complex system of systems with demanding performance and operational requirements. The nature of its scientific goals requires a special Observatory Control System (OCS) and particularly a very specialized automatic Scheduler. The OCS Scheduler is an autonomous software component that drives the survey, selecting the detailed sequence of visits in real time, taking into account multiple science programs, the current external and internal conditions, and the history of observations. We have developed a SysML model for the OCS Scheduler that fits coherently in the OCS and LSST integrated model. We have also developed a prototype of the Scheduler that implements the scheduling algorithms in the simulation environment provided by the Operations Simulator, where the environment and the observatory are modeled with real weather data and detailed kinematics parameters. This paper expands on the Scheduler architecture and the proposed algorithms to achieve the survey goals.

DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.

We investigated the limits of the number of events observers can simultaneously time. For single targets occurring in one of eight positions, sensitivity to duration was improved for spatially pre-cued items as compared to post-cued items, indicating that exogenously driven attention can improve duration discrimination. Sensitivity to duration for pre-cued items was also marginally better for single items as compared to eight items, indicating that even after the allocation of focal attention, distracter items can interfere with the encoding of duration. For an eight-item array, discrimination was worse for post-cued locations as compared to pre-cued locations, indicating both that attention can improve duration discrimination performance and that it was not possible to access a perfect memory trace of the duration of eight elements. The interference from the distracters in the pre-cued eight-item array may reflect some mandatory averaging of target and distracter events. To further explore duration averaging we asked subjects to explicitly compare average durations of multiple-item arrays against a single-item standard duration. Duration discrimination thresholds were significantly lower for single elements as compared to multiple elements, showing that averaging, either automatic or intentional, impairs duration discrimination. There was no set-size effect: performance was the same for averages of two and eight items, but performance with even an average of two items was worse than for one item. This was also true for sequential presentation, indicating that poor performance was not due to limits on the division of attention across items. Rather, performance appears to be limited by an inability to remember or aggregate duration information from two or more items. Although it is possible to manipulate perceived duration locally, there appears to be no perceptual mechanism for aggregating local durations across space.

Instruction scheduling algorithms are used in compilers to reduce run-time delays for the compiled code by the reordering or transformation of program statements, usually at the intermediate language or assembly code level. Considerable research has been carried out on scheduling code within the scope of basic blocks, i.e., straight-line sections of code, and very effective basic block schedulers are now included in most modern compilers, especially for pipeline processors. In previous work (Golumbic and Rainis, IBM J. Res. Dev., Vol. 34, pp. 93-97, 1990) we presented code replication techniques for scheduling beyond the scope of basic blocks that provide reasonable improvements in the running time of the compiled code, but which still leave room for further improvement. In this article we present a new method for scheduling beyond basic blocks called SHACOOF. This new technique takes advantage of a conventional, high-quality basic block scheduler by first suppressing selected subsequences of instructions and then scheduling the modified sequence of instructions using the basic block scheduler. A candidate subsequence for suppression can be found by identifying a region of the program control flow graph, called an S-region, which has a unique entry and a unique exit and meets predetermined criteria. This enables scheduling of a sequence of instructions beyond basic block boundaries, with only minimal changes to an existing compiler, by identifying beneficial opportunities to cover delays that would otherwise have been beyond its scope.

The Large Hadron Collider is now entering its final phase before receiving beam, and the activities at CERN between 2007 and 2008 have shifted from installation work to the commissioning of the technical systems ("hardware commissioning"). Due to the unprecedented complexity of this machine, all the systems are or will be tested as far as possible before the cool-down starts. Systems are first tested individually before being globally tested together. The architecture of the LHC, which is partitioned into eight cryogenically and electrically independent sectors, allows commissioning on a sector-by-sector basis. When a sector reaches nominal cryogenic conditions, commissioning of the magnet powering system to nominal current for all magnets can be performed. This paper briefly describes the different activities to be performed during the powering tests of the superconducting magnet system and presents the scheduling issues raised by co-activities as well as the management of resources.

In today’s globalized society, transport contributes to our daily life in many different ways. The production of the parts for a shelf ready product may take place on several continents and our travel between home and work, vacation travel and business trips has increased in distance the last…, the effectiveness of the network is of importance aiming at satisfying as many costumer demands as possible at a low cost. Routing represent a path between locations such as an origin and destination for the object routed. Sometimes routing has a time dimension as well as the physical paths. This may… to a destination on a predefined network, the routing and scheduling of vessels in a liner shipping network given a demand forecast to be covered, the routing of manpower and vehicles transporting disabled passengers in an airport and the vehicle routing with time windows where one version studied includes edge…

Public transportation schedules are designed by agencies to optimize service quality under multiple constraints. However, real service usually deviates from the plan. Therefore, transportation analysts need to identify, compare and explain both eventual and systemic performance issues that must be addressed so that better timetables can be created. The purely statistical tools commonly used by analysts pose many difficulties due to the large number of attributes at trip- and station-level for planned and real service. Also challenging is the need for models at multiple scales to search for patterns at different times and stations, since analysts do not know exactly where or when relevant patterns might emerge and need to compute statistical summaries for multiple attributes at different granularities. To aid in this analysis, we worked in close collaboration with a transportation expert to design TR-EX, a visual exploration tool developed to identify, inspect and compare spatio-temporal patterns for planned and real transportation service. TR-EX combines two new visual encodings inspired by Marey's Train Schedule: Trips Explorer for trip-level analysis of frequency, deviation and speed; and Stops Explorer for station-level study of delay, wait time, reliability and performance deficiencies such as bunching. To tackle overplotting and to provide a robust representation for large numbers of trips and stops at multiple scales, the system supports variable kernel bandwidths to achieve the level of detail required by users for different tasks. We justify our design decisions based on specific analysis needs of transportation analysts. We provide anecdotal evidence of the efficacy of TR-EX through a series of case studies that explore NYC subway service, which illustrate how TR-EX can be used to confirm hypotheses and derive new insights through visual exploration.

This paper addresses an integrated decision on production scheduling and delivery operations, one of the most important issues in supply chain scheduling. We study a model in which a set of jobs ordered by a single customer and a set of decentralized manufacturers located at different locations are considered. Specifically, each job must be assigned to one of the decentralized manufacturers to be processed on its single-machine facility. The job is then delivered to the customer directly in a batch, without intermediate inventory. The objective is to find a joint schedule of production and distribution that optimizes the customer service level and delivery cost. We discuss this problem under two different measures of the customer service level: in the first, customer service is measured by the maximum arrival time; in the second, by the total arrival time. For each situation, we develop a dynamic programming algorithm. Moreover, we identify a special case of the latter situation and present its corresponding solutions.
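The dynamic programs in such models typically optimize over batch boundaries. The sketch below is a stripped-down single-machine version with invented data: one manufacturer, jobs processed in a fixed order, each batch delivered when its last job completes, a per-delivery transit time T, and the total arrival time as the objective. The paper's model with decentralized manufacturers and delivery costs is more involved.

```python
def min_total_arrival(p, T):
    """DP over batch boundaries. Jobs run in the given order on one machine;
    a batch departs when its last job finishes and arrives T time units later.
    f[i] = minimum total arrival time for the first i jobs."""
    n = len(p)
    P = [0] * (n + 1)            # prefix sums of processing times
    for i in range(n):
        P[i + 1] = P[i] + p[i]
    f = [float("inf")] * (n + 1)
    f[0] = 0
    for i in range(1, n + 1):
        for k in range(i):       # last batch covers jobs k+1..i
            f[i] = min(f[i], f[k] + (i - k) * (P[i] + T))
    return f[n]

cost = min_total_arrival([2, 3, 4], T=5)
```

The O(n^2) recurrence reflects the usual trade-off: larger batches share a delivery but delay every job in the batch until the last one finishes.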

This paper considers the nonpreemptive scheduling of a given set of jobs on several identical, parallel machines. Each job must be processed on one of the machines. Prior to processing, a job must be loaded (setup) by a single server onto the relevant machine. The server may be a human operator, a robot, or a piece of specialized equipment. We study a number of classical scheduling objectives in this environment, including makespan, maximum lateness, the sum of completion times, the number of late jobs, and total tardiness, as well as weighted versions of some of these. The number of machines may be constant or arbitrary. Setup times may be unit, equal, or arbitrary. Processing times may be unit or arbitrary. For each problem considered, we attempt to provide either an efficient algorithm, or a proof that such an algorithm is unlikely to exist. Our results provide a mapping of the computational complexity of these problems. Included in these results are generalizations of the classical algorithms of Moore, of Lawler and Moore, and of Lawler. In addition, we describe two heuristics for makespan scheduling in this environment, and provide an exact analysis of their worst-case performance.
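The classical Moore algorithm generalized here minimizes the number of late jobs on a single machine (no server or setups). A standard sketch, with invented job data:

```python
import heapq

def moore_hodgson(jobs):
    """Moore-Hodgson algorithm: minimize the number of late jobs on one machine.
    jobs: list of (processing_time, due_date). Returns the count of on-time jobs."""
    jobs = sorted(jobs, key=lambda j: j[1])  # EDD (earliest due date) order
    t = 0
    kept = []  # max-heap of processing times of tentatively scheduled jobs (negated)
    for p, d in jobs:
        t += p
        heapq.heappush(kept, -p)
        if t > d:                     # due date missed: drop the longest job so far
            t += heapq.heappop(kept)  # popped value is negative, so t decreases
    return len(kept)

on_time = moore_hodgson([(2, 6), (3, 7), (2, 8), (5, 9)])
```

Dropping the longest scheduled job whenever a due date is missed is what makes the greedy EDD pass optimal; the paper's setting adds the single loading server, which is where the complexity landscape changes.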

Packet scheduling algorithms effectively enhance the packet delivery rate in wireless networks and help to improve their quality of service. Many algorithms have been deployed in the area of packet scheduling in wireless networks, but little attention has been paid to security. Algorithms that do offer security often compromise performance measures such as schedulability, which is not desirable, and this problem worsens when the system is under heavy load. In this paper we propose a Round-robin-based Secure-Aware Packet Scheduling algorithm (RSAPS) for wireless networks which focuses on secure scheduling. RSAPS is an adaptive algorithm that gives priority to schedulability when the system is under heavy load; under light load, RSAPS provides maximum security for the incoming packets. Simulations of the proposed method were compared with the existing algorithms SPSS and ISPAS, and RSAPS shows excellent scheduling quality while holding the security levels.
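The adaptive trade-off RSAPS makes can be caricatured in a few lines: degrade the security level as the load factor rises so that packets remain schedulable. The rule, thresholds, and level names below are invented for illustration and are not the actual RSAPS algorithm.

```python
def pick_security_level(queue_len, capacity, levels=("high", "medium", "low")):
    """Toy adaptive rule (illustrative only): choose a per-packet security level
    from the current load factor, trading security for schedulability under load."""
    load = queue_len / capacity
    if load < 0.5:
        return levels[0]  # light load: strongest security
    if load < 0.8:
        return levels[1]  # moderate load: intermediate security
    return levels[2]      # heavy load: prioritize schedulability
```

A round-robin dispatcher would then call such a rule per packet before enqueuing it for transmission.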

Multi-core homogeneous processors have been widely used to deal with computation-intensive embedded applications. However, with the continuous downscaling of CMOS technology, within-die variations in the manufacturing process lead to a significant spread in the operating speeds of cores within homogeneous multi-core processors. Task scheduling approaches, which do not consider such heterogeneity caused by within-die variations, can lead to an overly pessimistic result in terms of performance. To realize an optimal performance according to the actual maximum clock frequencies at which cores can run, we present a heterogeneity-aware schedule refining (HASR) scheme by fully exploiting the heterogeneities of homogeneous multi-core processors in embedded domains. We analyze and show how the actual maximum frequencies of cores are used to guide the scheduling. In the scheme, representative chip operating points are selected and the corresponding optimal schedules are generated as candidate schedules. During the booting of each chip, according to the actual maximum clock frequencies of cores, one of the candidate schedules is bound to the chip to maximize the performance. A set of applications are designed to evaluate the proposed scheme. Experimental results show that the proposed scheme can improve the performance by an average value of 22.2%, compared with the baseline schedule based on the worst case timing analysis. Compared with the conventional task scheduling approach based on the actual maximum clock frequencies, the proposed scheme also improves the performance by up to 12%.

This report describes a program for the exploration of block scheduling. The targeted population consists of high school students in a growing, middle-class community, located in a suburban setting of a large mid-western city. The historical background of block scheduling is documented through data gathered using attendance reports, student…

Nontraditional work schedules for pharmacists at three institutions are described. The demand for pharmacists and health care in general continues to increase, yet significant material changes are occurring in the pharmacy work force. These changing demographics, coupled with historical vacancy rates and turnover trends for pharmacy staff, require an increased emphasis on workplace changes that can improve staff recruitment and retention. At William S. Middleton Memorial Veterans Affairs Hospital in Madison, Wisconsin, creative pharmacist work schedules and roles are now mainstays to the recruitment and retention of staff. The major challenge that such scheduling presents is the 8 hours needed to prepare a six-week schedule. Baylor Medical Center at Grapevine in Dallas, Texas, has a total of 45 pharmacy employees, and slightly less than half of the 24.5 full-time-equivalent staff work full-time, with most preferring to work one, two, or three days per week. As long as the coverage needs of the facility are met, Envision Telepharmacy in Alpine, Texas, allows almost any scheduling arrangement preferred by individual pharmacists or the pharmacist group covering the facility. Staffing involves a great variety of shift lengths and intervals, with shifts ranging from 2 to 10 hours. Pharmacy leaders must be increasingly aware of opportunities to provide staff with unique scheduling and operational enhancements that can provide for a better work-life balance. Compressed workweeks, job-sharing, and team scheduling were the most common types of alternative work schedules implemented at three different institutions.

AOSS is a highly efficient scheduling application that uses various tools to schedule astronauts' weekly appointment information. This program represents an integration of many technologies into a single application to facilitate schedule sharing and management. It is a Windows-based application developed in Visual Basic. Because the NASA standard office automation load environment is Microsoft-based, Visual Basic provides AOSS developers with the ability to interact with Windows collaboration components by accessing object models from applications like Outlook and Excel. This also gives developers the ability to create newly customizable components that perform specialized tasks pertaining to scheduling reporting inside the application. With this capability, AOSS can perform various asynchronous tasks, such as gathering/sending/managing astronauts' schedule information directly to their Outlook calendars at any time.

When studying safety properties of (formal) protocol models, it is customary to view the scheduler as an adversary: an entity trying to falsify the safety property. We show that in the context of security protocols, and in particular of anonymizing protocols, this gives the adversary too much power; for instance, the contents of encrypted messages and internal computations by the parties should be considered invisible to the adversary. We restrict the class of schedulers to a class of admissible schedulers which better model adversarial behaviour. These admissible schedulers base their decision solely on the past behaviour of the system that is visible to the adversary. Using this, we propose a definition of anonymity: for all admissible schedulers the identity of the users and the observations of the adversary are independent stochastic variables. We also develop a proof technique for typical cases that can be used to prove anonymity: a system is anonymous if it is possible to `exchange' the behaviour of two...

The Robert C. Byrd Green Bank Telescope (GBT) is implementing a new Dynamic Scheduling System (DSS) designed to maximize the observing efficiency of the telescope while ensuring that none of the flexibility and ease of use of the GBT is harmed and that the data quality of observations is not adversely affected. To accomplish this, the GBT DSS schedules observers, rather than running scripts. The DSS works by breaking each project into one or more sessions which have associated observing criteria such as RA, Dec, and frequency. Potential observers may also enter dates when members of their team will not be available for either on-site or remote observing. The scheduling algorithm uses those data, along with the predicted weather, to determine the most efficient schedule for the GBT. The DSS provides all observers at least 24 hours' notice of their upcoming observing. This paper describes the DSS project, including the ranking and scheduling algorithms for the sessions, the generation of scheduling probabilities, the web framework for the system, and an overview of the results from the beta testing held from June to September 2008.

Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings, such as relatively high operating cost and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The temperature at which a machine is currently operating can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.


Introduction The endgame of CMS installation in the underground cavern is in full swing, with several major milestones having been passed since the last CMS week. The Tracker was installed inside the Vactank just before the CERN end-of-year shutdown. Shortly after the reopening in 2008, the two remaining endcap disks, YE-2 and YE-1, were lowered, marking the completion of eight years of assembly in the surface building SX5. The remaining tasks, before the detector can be closed for the Cosmic Run At Four Tesla (CRAFT), are the installation of the thermal shields, the cabling of the negative endcap, the cabling of the tracker and the beam pipe installation. In addition to these installation tasks, a test closure of the positive endcap is planned just before the installation of the central beam pipe. The schedule is tight and complicated but the goal to close CMS by the end of May for a cosmic test with magnetic field remains feasible. Safety With all large components now being underground, the shortage...

The final edition (Nos 51-52/2011 and 1-2-3/2012) of the Bulletin this year will be published on Friday 16 December and will cover events at CERN from 19 December 2011 to 19 January 2012. Announcements for publication in this issue should reach the Communication Group or the Staff Association, as appropriate, by noon on Tuesday 13 December. Bulletin publication schedule for 2012 The table below lists the 2012 publication dates for the paper version of the Bulletin and the corresponding deadlines for the submission of announcements. Please note that all announcements must be submitted by 12.00 noon on Tuesdays at the latest. Bulletin No. Week number Submission of announcements (before 12.00 midday) Bulletin Web version Bulletin Printed version 4-5 Tuesday 17 January Fridays 20 and 27 January Wednesday 25 January 6-7 Tuesday 31 January Fridays 3 and 10 February Wednesday 8 February 8-9 Tuesday 14 February Fridays 17 and 24 February Wednesday 22 Februa...

Gain scheduling controllers are considered in this paper. The gain scheduling problem where the scheduling parameter vector cannot be measured directly, but needs to be estimated, is considered. An estimation of the scheduling vector has been derived by using the Youla parameterization. The use ... in connection with H_inf gain scheduling controllers ...

The primary scheduling tool in use during the Spacelab Life Sciences (SLS-1) planning phase was the operations research (OR) based, tabular-form Experiment Scheduling System (ESS) developed by NASA Marshall. PLAN-IT is an artificial intelligence based interactive graphic timeline editor for ESS developed by JPL. The PLAN-IT software was enhanced for use in the scheduling of Spacelab experiments to support the SLS missions. The enhanced software, the SLS-PLAN-IT System, was used to support the real-time reactive scheduling task during the SLS-1 mission. SLS-PLAN-IT is a frame-based blackboard scheduling shell which, from scheduling input, creates resource-requiring event duration objects and resource-usage duration objects. The blackboard structure keeps track of the effects of event duration objects on the resource usage objects. Various scheduling heuristics are coded in procedural form and can be invoked at any time at the user's request. The system architecture is described along with what was learned from the SLS-PLAN-IT project.

The aim of this study was to evaluate the effects of various practice schedules on learning a novel speech task. Forty healthy Cantonese speakers were asked to learn to produce a Cantonese phrase with two target utterance durations (2500 and 3500 milliseconds). They were randomly assigned to one of four learning conditions, each completing a different practice schedule, namely Blocked only, Random only, Blocked-then-Random, and Random-then-Blocked. Two retention tests (one immediate and one delayed) and a transfer test were administered. The four groups of participants showed different patterns of learning, but achieved comparable levels of performance at the end of the acquisition phase. However, participants in the Blocked only condition were less able to differentiate the two target durations than those in the Random only condition during retention. Furthermore, participants who received both blocked and random practice were less adversely affected by the secondary task during the transfer test than those who received either blocked or random practice alone. These findings suggest that mixed practice schedules are more effective than either blocked or random practice, especially in transferring the acquired speech motor skills to a cognitively demanding situation. The results have clinical implications regarding optimal practice schedules for treatment intervention.

Earned value management (EVM) was originally developed for cost management and has not been widely used for forecasting project duration. In addition, EVM-based formulas for cost or schedule forecasting are still deterministic and do not provide any information about the range of possible outcomes or the probability of meeting the project objectives. The objective of this paper is to develop three models to forecast the estimated duration at completion. Two of these models are deterministic: the earned value (EV) and earned schedule (ES) models. The third model is probabilistic and is developed based on the Kalman filter algorithm and earned schedule management. The accuracies of the EV, ES and Kalman Filter Forecasting Model (KFFM) through the different project periods are assessed and compared with other forecasting methods such as the Critical Path Method (CPM), which makes the time forecast at activity level by revising the actual reporting data for each activity at a certain data date. A case study project is used to validate the results of the three models, and the best model is selected based on the lowest average percentage of error. The results showed that the KFFM developed in this study provides probabilistic prediction bounds of project duration at completion and can be applied through the different project periods with smaller errors than those observed in the EV and ES forecasting models.
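The abstract does not reproduce the KFFM equations. As a rough illustration of the mechanics behind a Kalman-filter duration forecast, a single scalar predict/update step might look as follows; the random-walk process model and the noise parameters Q and R are assumptions for illustration, not taken from the paper.

```python
def kalman_update(x, P, z, Q, R):
    """One scalar predict/update step of a Kalman filter.

    x, P : current estimate of the forecast quantity and its variance
    z    : new measurement (e.g. a duration estimate from earned schedule)
    Q, R : process-noise and measurement-noise variances (assumed known)
    """
    # Predict: random-walk model, so the estimate carries over and
    # uncertainty grows by the process noise.
    P = P + Q
    # Update: the Kalman gain blends prediction and measurement.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

# Example: prior duration estimate of 100 days (variance 25),
# new measurement of 110 days.
x, P = kalman_update(100.0, 25.0, 110.0, Q=1.0, R=9.0)
```

Iterating this step over successive reporting periods yields both a point forecast and a shrinking variance, which is what enables the probabilistic prediction bounds described in the abstract.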

Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...

Any sports league needs a schedule of play, and such a schedule can be important, as it may influence the outcome of the sports competition itself and since it may have an impact on the interests of all parties involved. As in many other sports leagues and countries, the interest for Belgian soccer has increased over the last years. This paper describes our experiences in scheduling the highest Belgian soccer league. We describe how we automated and improved the way in which the calendar is ...

Lean construction has recently been applied to the construction industry. The best performance of a project can be achieved through the precise definition of the construction product, a rational work breakdown structure, a lean supply chain, reduced waste of resources, objective control, and so forth. Referring to the characteristics of schedule planning of construction projects and the lean construction philosophy, we propose an optimizing methodology for real-time, dynamic schedules of construction projects based on lean construction. The basis of the methodology is process reorganization and lean supply in construction enterprises. The traditional method of estimating activity durations is fuzzy and random; the newly proposed lean forecasting method employs multi-component linear regression, back-propagation artificial neural networks and learning curves. Taking account of the limited resources and the fixed duration of a project, the optimizing method for the real-time, dynamic schedule adopts the concept of resource driving. To optimize the schedule of a construction project in a timely and effective way, an intelligent schedule management system was developed. It can work out the initial schedule, optimize the real-time, dynamic schedule, and display the schedule with a Gantt chart, a network graph and a space-time line chart. A case study is also presented to explain the proposed method.

Planning and scheduling significantly influence organizational performance, but literature that pays attention to how organizations could or should organize and assess their planning processes is limited. We extend planning and scheduling theory with a categorization of scheduling performance

The purpose of this paper is to present examples for the sometimes surprisingly different behavior of deterministic and stochastic scheduling problems. In particular, it demonstrates some seemingly counterintuitive properties of optimal scheduling policies for stochastic machine scheduling problems.


Since the dynamicity and inhomogeneity of resources complicate scheduling, it is not possible to use exact scheduling algorithms. Therefore, many studies focus on heuristic algorithms like the artificial bee colony algorithm. Since the artificial bee colony algorithm searches the problem space locally and has poor performance in global search, global search algorithms like genetic algorithms should also be used to overcome this drawback. This study proposes a scheduling algorithm that combines the genetic and artificial bee colony algorithms for the independent scheduling problem in a computing grid, with the aim of reducing the maximum total scheduling time. Simulation results indicate that the proposed algorithm reduces the maximum execution time (makespan) by 10% in comparison to the compared methods.
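The hybrid GA/bee-colony algorithm itself is not reproduced in the abstract. As a point of reference, the makespan objective for independent grid tasks, together with the classic Min-Min baseline heuristic (not the paper's algorithm), can be sketched as follows; the ETC-matrix representation of task costs is a standard modelling assumption, not taken from the paper.

```python
def makespan(assignment, etc):
    """Makespan = completion time of the most loaded machine.
    etc[t][m] is the expected time to compute task t on machine m."""
    loads = {}
    for task, machine in enumerate(assignment):
        loads[machine] = loads.get(machine, 0) + etc[task][machine]
    return max(loads.values())

def min_min(etc, n_machines):
    """Classic Min-Min heuristic: repeatedly schedule the task with the
    smallest earliest completion time onto its best machine."""
    ready = [0.0] * n_machines            # time each machine becomes free
    unassigned = set(range(len(etc)))
    assignment = [None] * len(etc)
    while unassigned:
        # pick the (task, machine) pair minimizing completion time
        _, t, m = min((ready[m] + etc[t][m], t, m)
                      for t in unassigned for m in range(n_machines))
        assignment[t] = m
        ready[m] += etc[t][m]
        unassigned.remove(t)
    return assignment
```

A metaheuristic such as the GA/bee-colony hybrid would search over `assignment` vectors with `makespan` as the fitness function, using a heuristic like Min-Min only as a baseline for comparison.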

We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.

This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

The Robert C. Byrd Green Bank Telescope (GBT) Dynamic Scheduling System (DSS), in use since September 2009, was designed to maximize observing efficiency while preserving telescope flexibility and data quality without creating undue adversity for the observers. Using observing criteria; observer availability and qualifications for remote observing; three-dimensional weather forecasts; and telescope state, the DSS software optimally schedules observers 24 to 48 hours in advance for a telescope that has a wide range of capabilities and a geographical location with variable weather patterns. The DSS project was closed October 28, 2011, and will now enter a continuing maintenance and enhancement phase. Recent improvements include a new resource calendar for incorporating telescope maintenance activities, a sensitivity calculator that leverages the scheduling algorithms to provide consistent tools for proposal preparation, improved support for monitoring observations, scheduling of high-frequency continuum and spectral line observations for both sparse and fully sampled array receivers, and additional session parameters for observations having special requirements.

Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents

The problem of scheduling the commercial advertisements in the television industry is investigated. Each advertiser client demands that the multiple airings of the same brand advertisement should be as spaced as possible over a given time period. Moreover, audience rating requests have to be taken into account in the scheduling. This is the first time this hard decision problem is dealt with in the literature. We design two mixed integer linear programming (MILP) models. Two constructive heur...

... The parameters in these models are fitted by the Maximum Likelihood Method using data relevant for Danish structural timber, and the statistical uncertainty is quantified. The reliability is evaluated using representative short- and long-term limit states, and the load duration factor kmod is estimated using...

With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator.

Duration Calculus is a logic for reasoning about requirements for real-time systems at a high level of abstraction from operational detail, which qualifies it as an interesting starting point for embedded controller design. Such a design activity is generally thought to aim at a control device ... the physical behaviours of which satisfy the requirements formula, i.e. the refinement relation between requirements and implementations is taken to be trajectory inclusion. Due to the abstractness of the vocabulary of Duration Calculus, trajectory inclusion between control requirements and controller designs ... for embedded controller design and exploit this fact for developing an automatic procedure for controller synthesis from specifications formalized in Duration Calculus. As far as we know, this is the first positive result concerning feasibility of automatic synthesis from dense-time Duration Calculus.

A small backpack, for use by Naval aviators, containing a long-duration emergency oxygen system and a separate humidifier for the aircraft's oxygen supply, has been devised and a feasibility model built. (Author)

In this paper, we investigate the problem of scheduling a 6 DOF robotic arm to carry out a sequence of spray painting tasks. The duration of any given painting task is process dependent and fixed, but the duration of an "intertask", corresponding to the process of relocating and reorienting ... the robot arm from one painting task to the next one, is influenced by the order of tasks and must be minimized by the scheduler. There are multiple solutions for reaching any given painting task and tasks can be performed in either of two different directions. Further complicating the problem ... are characteristics of the painting process application itself. Unlike spot-welding, painting tasks require movement of the entire robot arm. In addition to minimizing intertask duration, the scheduler must strive to maximize painting quality and the problem is formulated as a multi-objective optimization problem ...

Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.

In universities, scheduling curriculum activity is an essential job. Primarily, scheduling is a distribution of limited resources under interrelated constraints. The set of hard constraints demands the highest priority and should not be violated at any cost, while maximum satisfaction of the soft constraints raises the quality of the solution. In this research paper, a novel two-phase approach is introduced that comprises a Genetic Algorithm (GA) and backtracking recursive search. The employed technique deals with hard and soft constraints successively. The first phase focuses on eliminating all hard-constraint violations and produces a partial solution for the subsequent step. The second phase then draws the best possible solution from the search space. Promising results are obtained by implementation on a real dataset. The key points of the approach are the guaranteed removal of hard-constraint violations from the dataset and the reduction of the GA's computational time by initializing it with a pre-processed set of chromosomes.

We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

There are many functions provided by an operating system, such as process management, memory management, file management, input/output management, networking, protection, and the command interpreter. Among these, process management is the most important, because the operating system is a system program and processes interact with the hardware at run time. Therefore, to improve the efficiency of a CPU we need to manage all processes, and for managing processes we use various types of scheduling algorithms. Many algorithms are available for CPU scheduling, but each has its own deficiencies and limitations. In this paper, I propose a new approach to the round robin scheduling algorithm which helps to improve the efficiency of the CPU.

The main objective of this paper is to develop a new approach to round robin scheduling which helps to improve CPU efficiency in real-time and time-sharing operating systems. There are many algorithms available for CPU scheduling, but they cannot be implemented in real-time operating systems because of high context switch rates, large waiting times, large response times, large turnaround times and low throughput. The proposed algorithm addresses these drawbacks of the simple round robin architecture. The authors also give a comparative analysis of the proposed algorithm against simple round robin scheduling, and therefore strongly feel that the proposed architecture solves the problems encountered in the simple round robin architecture by decreasing the performance parameters to a desirable extent and thereby increasing system throughput.
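The proposed round-robin variant is not detailed in the abstract. For context, a minimal simulation of classic round robin (assuming, for simplicity, that all processes arrive at time 0) that computes the waiting and turnaround times such papers seek to reduce:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate simple round-robin CPU scheduling for processes that all
    arrive at time 0; returns per-process waiting and turnaround times."""
    n = len(burst_times)
    remaining = list(burst_times)
    finish = [0] * n
    ready = deque(range(n))      # FIFO ready queue of process indices
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)      # preempted: go to the back of the queue
        else:
            finish[i] = clock    # process completes
    turnaround = finish                  # arrival time is 0 for all
    waiting = [t - b for t, b in zip(turnaround, burst_times)]
    return waiting, turnaround

# Textbook example: bursts of 24, 3 and 3 time units with quantum 4.
w, t = round_robin([24, 3, 3], quantum=4)
```

Varying `quantum` in this simulation makes the trade-off visible: a small quantum raises the context-switch count, while a large one degenerates toward FCFS with long waiting times.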

Gain scheduling controllers are considered in this paper. The gain scheduling problem where the scheduling parameter vector theta cannot be measured directly, but needs to be estimated, is considered. An estimation of the scheduling vector theta has been derived by using the Youla parameterization ...

This study applied engineering techniques to develop a nurse scheduling model that, while maintaining the highest level of service, simultaneously minimized hospital staffing costs and equitably distributed overtime pay. In the mathematical model, the objective function was the sum of the overtime payments to all nurses and the standard deviation of the total overtime payment that each nurse received. Input data distributions were analyzed in order to formulate a simulation model to determine the optimal demand for nurses that met the hospital's service standards. To obtain the optimal nurse schedule with the number of nurses acquired from the simulation model, we proposed a genetic algorithm (GA) with two-point crossover and random mutation. After running the algorithm, we compared the expenses and number of nurses between the existing and proposed nurse schedules. For January 2013, the nurse schedule obtained by the GA could save 12% in staffing expenses per month and 13% in the number of nurses when compared with the existing schedule, while more equitably distributing overtime pay among all nurses.
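The abstract names the GA operators (two-point crossover, random mutation) without giving their details. A generic sketch under those operators might look as follows; the truncation selection scheme, the encoding of a schedule as a list of shift assignments, and the user-supplied `fitness` cost function (standing in for the staffing/overtime objective) are illustrative assumptions, not the paper's exact design.

```python
import random

def two_point_crossover(a, b):
    """Swap the segment between two random cut points of two parents."""
    i, j = sorted(random.sample(range(1, len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(sched, shifts, rate=0.05):
    """Reassign each gene to a random shift with probability `rate`."""
    return [random.choice(shifts) if random.random() < rate else g
            for g in sched]

def genetic_algorithm(fitness, shifts, n_genes, pop_size=40, generations=200):
    """Minimize `fitness` over schedules encoded as lists of shifts."""
    pop = [[random.choice(shifts) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # lower cost = better
        parents = pop[:pop_size // 2]         # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            c1, c2 = two_point_crossover(a, b)
            children += [mutate(c1, shifts), mutate(c2, shifts)]
        pop = parents + children[:pop_size - len(parents)]
    return min(pop, key=fitness)
```

For nurse scheduling, `fitness` would combine total overtime payment with the standard deviation of per-nurse overtime, so that minimizing it pursues both cost and equity at once.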

The Network Queueing System (NQS) was designed to schedule jobs based on limits within queues. As systems obtained more memory, the number of queues increased to take advantage of the added memory resource. The problem now becomes too many queues. Having a large number of queues gives users the ability to gain an unfair advantage over other users by tailoring their jobs to fit in an empty queue. Additionally, the large number of queues becomes confusing to the user community. The High Speed Processors group at the Numerical Aerodynamics Simulation (NAS) Facility at NASA Ames Research Center developed a new approach to batch job scheduling. This new method reduces the number of queues required by eliminating the need for queues based on resource limits. The scheduler examines each request for necessary resources before initiating the job. Additional user limits at the complex level were also added to provide fairness to all users. Additional tools, including user job reordering, are under development to work with the new scheduler. This paper discusses the objectives, design and implementation results of this new scheduler.
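The core idea described above is admission by direct resource check rather than by per-limit queues. A toy sketch of that idea follows; the job and resource field names are invented for illustration and do not correspond to actual NQS data structures.

```python
def schedule(jobs, free):
    """Admit jobs in submission order whenever their requested resources
    fit the currently free pool, instead of routing them through
    per-limit queues. Returns (started, waiting) job ids."""
    started, waiting = [], []
    for job in jobs:
        need = job["need"]
        if all(need[r] <= free.get(r, 0) for r in need):
            for r in need:               # reserve the resources
                free[r] -= need[r]
            started.append(job["id"])
        else:
            waiting.append(job["id"])    # held until resources free up
    return started, waiting

free = {"cpus": 8, "mem": 16}
jobs = [{"id": "a", "need": {"cpus": 4, "mem": 8}},
        {"id": "b", "need": {"cpus": 8, "mem": 4}},
        {"id": "c", "need": {"cpus": 4, "mem": 8}}]
started, waiting = schedule(jobs, free)
```

Note that a single resource check like this replaces an entire family of limit-based queues; the fairness limits mentioned in the abstract would be additional per-user checks layered on top.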

The Feature-based Scheduler offers a sequencing strategy for ground-based telescopes. This scheduler is designed in the framework of a Markovian Decision Process (MDP) and consists of a sub-linear online controller and an offline supervisory control optimizer. The online control law is computed at the moment of decision for the next visit, and the supervisory optimizer trains the controller with simulation data. The choice of the Differential Evolution (DE) optimizer and the introduction of a reduced state space of the telescope system yield an efficient and parallelizable optimization algorithm. In this study, we applied the proposed scheduler to the problem of the Large Synoptic Survey Telescope (LSST). Preliminary results for a simplified model of LSST are promising in terms of both optimality and computational cost.

It has been recently shown that linearly indexed Assignment Codes can be efficiently used for coding several problems especially in signal processing and matrix algebra. In fact, mathematical expressions for many algorithms are directly in the form of linearly indexed codes, and examples include the formulas for matrix multiplication, any m-dimensional convolution/correlation, matrix transposition, and solving matrix Lyapunov's equation. Systematic procedures for converting linearly indexed Assignment Codes to localized algorithms that are closely related to Regular Iterative Algorithms (RIAs) have also been developed. These localized algorithms can be often efficiently scheduled by modeling them as RIAs; however, it is not always efficient to do so. In this paper we shall analyze and develop systematic procedures for determining efficient schedules directly for the linearly indexed ACs and the localized algorithms. We shall also illustrate our procedures by determining schedules for examples such as matrix transposition and Gauss-Jordan elimination algorithm.

Java finalizers perform clean-up and finalisation of objects at garbage collection time. In real-time Java profiles the use of finalizers is either discouraged (RTSJ, Ravenscar Java) or even disallowed (JSR-302), mainly because of the unpredictability of finalizers and in particular their impact ... on the schedulability analysis. In this paper we show that a controlled scoped memory model results in a structured and predictable execution of finalizers, more reminiscent of C++ destructors than Java finalizers. Furthermore, we incorporate finalizers into a (conservative) schedulability analysis for Predictable Java ... programs. Finally, we extend the SARTS tool for automated schedulability analysis of Java bytecode programs to handle finalizers in a fully automated way.

In the replacement scheduling problem, a system is composed of n processors drawn from a pool of p. The processors can become faulty while in operation and faulty processors never recover. A report is issued whenever a fault occurs. This report states only the existence of a fault but does not indicate its location. Based on this report, the scheduler can reconfigure the system and choose another set of n processors. The system operates satisfactorily as long as, upon report of a fault, the scheduler chooses n non-faulty processors. We provide a randomized protocol maximizing the expected number of faults the system can sustain before the occurrence of a crash. The optimality of the protocol is established by considering a closely related dual optimization problem. The game-theoretic technical difficulties that we solve in this paper are very general and encountered whenever proving the optimality of a randomized algorithm in parallel and distributed computation.

Dynamic models of congestion so far rely on exogenous scheduling preferences of travelers, based for example on the disutility of deviation from a preferred departure or arrival time for a trip. This paper provides a more fundamental view in which travelers derive utility just from consumption and leisure, but agglomeration economies at home and at work lead to scheduling preferences forming endogenously. Using bottleneck congestion technology, we obtain an equilibrium queuing pattern consistent with a general version of the Vickrey bottleneck model. However, the policy implications are different. Compared to the predictions of an analyst observing the untolled equilibrium and taking scheduling preferences as exogenous, we find that both the optimal capacity and the marginal external cost of congestion have changed. The benefits of tolling are greater, and the optimal time-varying toll is different.

Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL)(a) for the US Department of Energy (DOE). This document contains the planned 1997 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. In addition, Section 3.0, Biota, also reflects a rotating collection schedule identifying the year a specific sample is scheduled for collection. The purpose of these monitoring projects is to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5400.1, General Environmental Protection Program, and DOE Order 5400.5, Radiation Protection of the Public and the Environment. The sampling methods will be the same as those described in the Environmental Monitoring Plan, US Department of Energy, Richland Operations Office, DOE/RL91-50, Rev. 1, US Department of Energy, Richland, Washington.

We put forward an optimal disk schedule with n disk requests and prove its optimality mathematically. Generalizing the idea of an optimal disk schedule, we remove the limit of n requests and, at the same time, consider a dynamic-arrival model of disk requests to obtain an algorithm, shortest path first-fit first (SPFF). The algorithm is based on the shortest path of disk head motion constructed over all pending requests. In terms of head-moving distance, it has stronger globality than SSTF; in terms of head-moving direction, it has better flexibility than SCAN. SPFF therefore keeps the advantage of SCAN while absorbing the strength of SSTF. The SPFF algorithm not only outperforms other scheduling policies, but also has higher adjustability to meet a computer system's varying demands.
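A brute-force sketch of the contrast this abstract draws between SPFF's global view and SSTF's greedy view. The cylinder numbers are invented, and exhaustive search over pending requests stands in for the paper's shortest-path construction; it is only feasible for small request sets:

```python
from itertools import permutations

def seek_distance(head, order):
    """Total head travel when requests are served in the given order."""
    total = 0
    for cyl in order:
        total += abs(cyl - head)
        head = cyl
    return total

def sstf(head, requests):
    """Greedy shortest-seek-time-first: always serve the nearest pending request."""
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def spff(head, requests):
    """Globally shortest head path over all pending requests (brute force)."""
    return min(permutations(requests), key=lambda o: seek_distance(head, o))

head, requests = 50, [10, 60, 90, 95]
greedy_cost = seek_distance(head, sstf(head, requests))   # locally nearest-first
optimal_cost = seek_distance(head, spff(head, requests))  # globally shortest path
```

Here SSTF drifts right toward the nearby cluster and pays a long final seek back to cylinder 10, while the shortest-path schedule clears the left side first, illustrating the "stronger globality" claim.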

In this paper we present Safe Self-Scheduling (SSS), a new scheduling scheme that schedules parallel loops with variable-length iteration execution times not known at compile time. The scheme assumes a shared memory space. SSS combines static scheduling with dynamic scheduling and draws favorable advantages from each. First, it reduces the dynamic scheduling overhead by statically scheduling a major portion of loop iterations. Second, the workload is balanced with a simple and efficient self-scheduling scheme by applying a new measure, the smallest critical chore size. Experimental results comparing SSS with other scheduling schemes indicate that SSS surpasses them. In the experiment on Gauss-Jordan, an application that is well suited to static scheduling schemes, SSS is the only self-scheduling scheme that outperforms the static scheduling scheme. This indicates that SSS achieves a balanced workload with a very small amount of overhead.

Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)

We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.

In this paper we initiate the study of minimizing power consumption in the broadcast scheduling model. In this setting there is a wireless transmitter. Over time, requests arrive at the transmitter for pages of information. Multiple requests may be for the same page. When a page is transmitted, all requests for that page receive the transmission simultaneously. The speed at which the transmitter sends data can be dynamically scaled to conserve energy. We consider the problem of minimizing flow time plus energy, the most popular scheduling metric considered in the standard scheduling model when the scheduler is energy aware. We assume that the power consumed is modeled by an arbitrary convex function. For this problem there is an $\Omega(n)$ lower bound. Due to the lower bound, we consider the resource augmentation model of Gupta et al. [GuptaKP10]. Using resource augmentation, we give a scalable algorithm. Our result also gives a scalable non-clairvoyant algorithm for minimizing weighted flow time plus energy.

This high angle overall view shows the top side components of the Extended Duration Orbiter (EDO) Waste Collection System (WCS) scheduled to fly aboard NASA's Endeavour, Orbiter Vehicle (OV) 105, for the STS-54 mission. Detailed Test Objective 662, Extended duration orbiter WCS evaluation, will verify the design of the new EDO WCS under microgravity conditions for a prolonged period. OV-105 has been modified with additional structures in the waste management compartment (WMC) and additional avionics to support/restrain the EDO WCS. Among the advantages the new IWCS is hoped to have over the current WCS are greater dependability, better hygiene, virtually unlimited capacity, and more efficient preparation between shuttle missions. Unlike the previous WCS, the improved version will not have to be removed from the spacecraft to be readied for the next flight. The WCS was documented in JSC's Crew Systems Laboratory Bldg 7.

Based on pooled register data from Norway and Sweden, we find that differences in unemployment duration patterns reflect dissimilarities in unemployment insurance (UI) systems in a way that convincingly establishes the link between economic incentives and job search behaviour. Specifically, UI...

After a short review of gamma ray bursts (GRBs), we discuss the physical implications of strong statistical correlations seen among some of the parameters of short-duration bursts (T90 < 2 s). Finally, we conclude with a brief sketch of a new unified model for long and short GRBs.

We consider the problems of scheduling deteriorating jobs with release dates on a single machine (or parallel machines), where jobs can be rejected by paying penalties. The processing time of a job is a simple linear increasing function of its starting time. For the single-machine model, the objective is to minimize the maximum lateness of the accepted jobs plus the total penalty of the rejected jobs. We show that the problem is NP-hard in the strong sense and present a fully polynomial time approximation scheme to solve it when all jobs have agreeable release dates and due dates. For the parallel-machine model, the objective is to minimize the maximum delivery completion time of the accepted jobs plus the total penalty of the rejected jobs. When the jobs have identical release dates, we first propose a fully polynomial time approximation scheme to solve it. Then, we present a heuristic algorithm for the case where all jobs have to be accepted and evaluate its efficiency by computational experiments.
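A small sketch of the deterioration model this abstract describes, assuming the common form p_j = α_j · s_j (processing time proportional to start time). The deterioration rates, due dates, and start epoch t0 = 1 are invented for illustration; note how the makespan is order-independent while the maximum lateness is not:

```python
def run_schedule(alphas, due_dates, t0=1.0):
    """Completion times and max lateness when p_j = alpha_j * (start time s_j).

    Each job's completion is C_j = s_j + alpha_j * s_j = s_j * (1 + alpha_j),
    so the makespan is t0 * prod(1 + alpha_j) regardless of the order.
    """
    t, completions = t0, []
    for a in alphas:
        t = t * (1.0 + a)
        completions.append(t)
    max_lateness = max(c - d for c, d in zip(completions, due_dates))
    return completions, max_lateness

# Same two jobs in two orders: job A (alpha=0.5, due 2), job B (alpha=1.0, due 4).
c1, lmax1 = run_schedule([0.5, 1.0], [2.0, 4.0])  # A then B
c2, lmax2 = run_schedule([1.0, 0.5], [4.0, 2.0])  # B then A
```

Both orders finish at time 3.0, but serving job A first keeps every job early (Lmax = -0.5), while serving B first makes A late (Lmax = 1.0), which is why sequencing matters for the lateness objective above.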

... to be considered. These two cases are maximum load level exceeding load-carrying capacity and damage accumulation (caused by the load and its duration) leading to failure. The effect of both load intensity and load duration on the capacity of timber has been an area of large interest over the last decades... Models for the representation of damage accumulation in timber are considered. The parameters of these models are calibrated by the use of test data, whereas uncertainties associated with the model formulation and limited sample size are considered. The DOL effect is usually taken into account in code...

is available. The CHI'13 Interactive Schedule helps attendees navigate this wealth of video content in order to identify events they would like to attend. It consists of a number of large display screens throughout the conference venue which cycle through a video playlist of events. Attendees can interact...

The astronomical scheduling problem involves several external conditions that change dynamically during observations, such as weather conditions (humidity, temperature, wind speed, opacity, etc.) and target visibility conditions (target above the horizon, Sun/Moon blocking the target). Therefore, dynamic re-scheduling is needed. An astronomical project is scheduled as one or more Scheduling Blocks (SBs), the atomic units of astronomical observation. We propose a mixed integer linear programming (MILP) solution that selects the best SBs, favors SBs with high scientific value, and thus maximizes the number of completed observation projects. The data content of Atacama Large Millimeter/Submillimeter Array (ALMA) projects from cycle 0 and cycle 1 was analyzed, and a synthetic set of test instances based on the real ones was created: one configuration of 5,000 SBs over a 3-month season and another of 10,000 SBs over a 6-month season. These instances were evaluated with excellent results. The tests show that the MILP proposal yields optimal solutions.
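A toy stand-in for the SB-selection step described above, using brute-force subset search in place of the paper's MILP. The SB durations, scientific values, and the time budget are invented; this only illustrates the "maximize scientific value within available observing time" objective, not the full model with dynamic conditions:

```python
from itertools import combinations

def select_sbs(sbs, time_budget):
    """Pick the subset of scheduling blocks (duration, value) that
    maximizes total scientific value within the available time."""
    best_value, best_set = 0.0, ()
    for r in range(len(sbs) + 1):
        for subset in combinations(sbs, r):
            duration = sum(d for d, _ in subset)
            value = sum(v for _, v in subset)
            if duration <= time_budget and value > best_value:
                best_value, best_set = value, subset
    return best_value, best_set

# Hypothetical SBs as (hours, scientific value) pairs, 8 hours available.
sbs = [(3, 10.0), (5, 14.0), (2, 7.0), (4, 9.0)]
value, chosen = select_sbs(sbs, time_budget=8)
```

A real instance with thousands of SBs needs the MILP formulation precisely because this exhaustive search grows exponentially in the number of blocks.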

The use of mobile devices is often limited by the lifetime of their batteries. For devices that have multiple batteries or that have the option to connect an extra battery, battery scheduling, thereby exploiting the recovery properties of the batteries, can help to extend the system lifetime. Due to

This study presents crop coefficient (Kc) values of the TMV 1-ST maize variety ... given time from planting to the time it is harvested. ... real-time irrigation scheduling for high-frequency and non-frequent water ... 10 m, and the average soil bulk density was 1420 kg/m³. ... Performance Evaluation of Fadama Irrigation Practice.

Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL) for the US Department of Energy (DOE). This document contains the planned 1996 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project.

This thesis deals with various models of cooperation in networks and scheduling. The main focus is how the benefits of this cooperation should be divided among the participating individuals. A major part of this analysis is concerned with stability of the cooperation. In addition, allocation rules a

The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented as well. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.

In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.

This research developed worker schedules by using administrative controls and a computer programming model to reduce the likelihood of worker hearing loss. By rotating the workers through different jobs during the day it was possible to reduce their exposure to hazardous noise levels. Computer simulations were made based on data collected in a real setting. Worker schedules currently used at the site are compared with proposed worker schedules from the computer simulations. For the worker assignment plans found by the computer model, the authors calculate a significant decrease in time-weighted average (TWA) sound level exposure. The maximum daily dose that any worker is exposed to is reduced by 58.8%, and the maximum TWA value for the workers is reduced by 3.8 dB from the current schedule.
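The TWA reduction reported above follows the standard OSHA noise-dose arithmetic (permissible time halves for every 5 dB above 90 dB, and TWA = 90 + 16.61·log10(dose/100)). A sketch with hypothetical exposure levels and rotation hours, not the study's data:

```python
import math

def permitted_hours(level_db):
    """OSHA permissible exposure time at a sound level (90 dB -> 8 h, 95 dB -> 4 h)."""
    return 8.0 / 2.0 ** ((level_db - 90.0) / 5.0)

def twa(exposures):
    """8-hour time-weighted average sound level from (hours, dB) segments."""
    dose = 100.0 * sum(hours / permitted_hours(db) for hours, db in exposures)
    return 90.0 + 16.61 * math.log10(dose / 100.0)

fixed_job = twa([(8, 95)])          # whole shift at 95 dB
rotated = twa([(4, 95), (4, 85)])   # half shift rotated into a quieter job
```

Rotating the worker into the 85 dB job for half the shift cuts the daily dose from 200% to 125% and lowers the TWA by roughly 3.4 dB, the same mechanism behind the reductions the study reports.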

Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine whether there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large, untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers either be brutally honest about the uncertainty of M estimates or find a way to decrease their influence on the estimated hazard.

Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.

The installed energy savings for advanced residential hot water systems can depend greatly on detailed occupant use patterns. Quantifying these patterns is essential for analyzing measures such as tankless water heaters, solar hot water systems with demand-side heat exchangers, distribution system improvements, and recirculation loops. This paper describes the development of an advanced spreadsheet tool that can generate a series of year-long hot water event schedules consistent with realistic probability distributions of start time, duration and flow rate variability, clustering, fixture assignment, vacation periods, and seasonality. This paper also presents the application of the hot water event schedules in the context of an integral-collector-storage solar water heating system in a moderate climate.

This report represents the schedule contingency evaluation done on the FY-93 Major System Acquisition (MSA) Baseline for the Idaho National Engineering Laboratory's (INEL) Environmental Restoration Program (ERP). A Schedule Contingency Evaluation Team (SCET) was established to evaluate schedule contingency on the MSA Baseline for the INEL ERP associated with completing work within milestones established in the baseline. Baseline schedules had been established considering enforceable deadlines contained in the Federal Facilities Agreement/Consent Order (FFA/CO), the agreement signed in 1992 by the State of Idaho, Department of Health & Welfare, the U.S. Environmental Protection Agency, Region 10, and the U.S. Department of Energy, Idaho Operations Office. The evaluation was based upon the application of standard schedule risk management techniques to the specific problems of the INEL ERP. The schedule contingency evaluation was designed to provide early visibility for potential schedule delays impacting enforceable deadlines. The focus of the analysis was on the duration of time needed to accomplish all required activities to achieve completion of the milestones in the baseline corresponding to the enforceable deadlines. Additionally, the analysis was designed to identify and control high-probability, high-impact schedule risk factors.

The topic of this book is known as dynamic scheduling, a term used to refer to three dimensions of project management and scheduling: the construction of a baseline schedule, the analysis of a project schedule's risk, and project control during project progress. This dynamic scheduling point of view implicitly assumes that the usability of a project's baseline schedule is rather limited and that it only acts as a point of reference in the project life cycle.

In order to achieve high performance, we need an efficient scheduling of a parallel program onto the processors in a multiprocessor system that minimizes the entire execution time. This multiprocessor scheduling problem can be stated as finding a schedule for a general task graph to be executed on a multiprocessor system so that the schedule length is minimized [10]. This scheduling problem is known to be NP-hard. In multiprocessor task scheduling, we have a number of CPUs on ...

This paper selects the flexible job-shop scheduling problem as its research object and constructs a mathematical model aimed at minimizing the maximum makespan. Taking the transmission reverse-gear production line of a transmission corporation as an example, a genetic algorithm is applied to the flexible job-shop scheduling problem to obtain the specific optimal scheduling results with MATLAB. DELMIA/QUEST, based on 3D discrete event simulation, is applied to construct the physical model of the production workshop. On the basis of the optimal scheduling results, the logical links of the physical model for the production workshop are established; in addition, the appropriate process parameters are imported to run a virtual simulation of the production workshop. Finally, analysis of the simulated results shows that the scheduling results are effective and reasonable.

The authors present a semi-definite relaxation algorithm for the scheduling problem with controllable times on a single machine. Their approach shows how to relate this problem to the maximum vertex-cover problem with kernel constraints (MKVC). The established relationship enables transferring approximate solutions of MKVC into approximate solutions for the scheduling problem. Then, they show how to obtain an integer approximate solution for MKVC based on semi-definite relaxation and a randomized rounding technique.

In this paper we consider a single-machine scheduling model with deteriorating jobs and simultaneous learning, and we introduce polynomial solutions for single-machine makespan minimization, total flow time minimization, and maximum lateness minimization corresponding to the first and second special cases of our model under some agreeable conditions. However, corresponding to the third special case of our model, we show that the optimal schedules may differ from those of the classical version for the above objective functions.

Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize

The energy utilization of sensor nodes in large-scale wireless sensor networks points to the crucial need for scalable and energy-efficient clustering protocols. Since sensor nodes usually operate on batteries, the maximum utility of the network is greatly dependent on ideal usage of the energy remaining in these sensor nodes. In this paper, we propose an Energy Efficient Cluster Based Scheduling Scheme for wireless sensor networks that balances sensor network lifetime and energy efficiency. In the first phase of our proposed scheme, the cluster topology is discovered and the cluster head is chosen based on remaining energy level. The cluster head monitors the network energy threshold value to identify the energy drain rate of all its cluster members. In the second phase, a scheduling algorithm is presented to allocate time slots to cluster-member data packets; congestion is avoided entirely. In the third phase, an energy consumption model is proposed to maintain the maximum residual energy level across the network. Moreover, we also propose a new packet format used by all cluster member nodes. The simulation results show that the proposed scheme achieves maximum network lifetime, high energy efficiency, reduced overhead, and maximum delivery ratio.
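A minimal sketch of the first two phases described above: electing the member with the most residual energy as cluster head, then handing out collision-free TDMA slots round-robin. Node IDs, energy levels, and the frame length are invented, and the threshold monitoring and packet format from the paper are omitted:

```python
def elect_cluster_head(nodes):
    """Phase 1: choose the node with maximum remaining energy as cluster head."""
    return max(nodes, key=lambda n: n["energy"])

def allocate_slots(members, frame_slots):
    """Phase 2: round-robin TDMA slot allocation to cluster members,
    so no two members ever transmit in the same slot (congestion-free)."""
    return {slot: members[slot % len(members)]["id"] for slot in range(frame_slots)}

cluster = [{"id": "n1", "energy": 0.42},
           {"id": "n2", "energy": 0.87},
           {"id": "n3", "energy": 0.55}]
head = elect_cluster_head(cluster)
members = [n for n in cluster if n is not head]
slots = allocate_slots(members, frame_slots=4)
```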

Modern manufacturing systems are constantly increasing in complexity and becoming more agile in nature. In such systems it has become crucial to check the feasibility of machine scheduling and sequencing, because effective scheduling and sequencing can increase productivity through maximum utilization of available resources. However, as the number of machines increases, traditional scheduling methods, e.g. Johnson's rule, become ineffective. Due to the limitations involved in exhaustive enumeration, meta-heuristics have become the preferred choice for solving such NP-hard problems because of their multi-solution and strong neighbourhood-search capability in a reasonable time.

Cloud computing is a new computing paradigm in which applications, data and IT services are provided across dynamic and geographically dispersed organizations. The job scheduling problem is a core and demanding issue in cloud computing. How to utilize cloud computing resources proficiently and gain maximum profit through job scheduling is one of the cloud service providers' ultimate objectives. In this paper we use a credit-based scheduling decision to evaluate the entire group of tasks in the task queue and find the minimal completion time of all tasks. Here a cost matrix is generated as the fair tendency of a task to be assigned to a resource.

The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
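The Kirchhoff index can be computed from the Laplacian spectrum via the standard identity $Kf(G) = n \sum_i 1/\mu_i$ over the nonzero Laplacian eigenvalues $\mu_i$. A quick numerical check on two small cacti (a triangle, whose single block is a cycle, and a 3-vertex path, whose blocks are edges); the graphs are chosen only for illustration:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigenvalues = np.linalg.eigvalsh(laplacian)
    nonzero = eigenvalues[eigenvalues > 1e-9]  # drop the zero eigenvalue
    return len(adj) * float(np.sum(1.0 / nonzero))

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # one cycle block
path3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]     # two edge blocks

kf_triangle = kirchhoff_index(triangle)  # three pairs at resistance 2/3 each
kf_path = kirchhoff_index(path3)         # pairwise resistances 1, 1 and 2
```

The values agree with summing resistance distances directly: 3 · (2/3) = 2 for the triangle and 1 + 1 + 2 = 4 for the path.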

The fundamental problem of local scale selection is addressed by means of a novel principle based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.

SUMMARY Forty-five patients with psychotic depression have been compared with 22 neurotic depressives on the SADD schedule regarding sociodemographic and clinical variables. The two groups differed on variables such as age, duration of the present attack, past history and family history of psychiatric illness, and precipitating factors. PMID:21927089

Conclusion: A shorter fractionation schedule is very effective in preventing recurrent breast cancer, and it provides a high level of patient satisfaction while reducing cost and overall treatment time. Its shorter duration offers the added advantage of more efficient use of resources and greater patient convenience. [Int J Res Med Sci 2014; 2(2): 536-540]

It is common practice in the hydropower industry to either shorten the maintenance duration or to postpone maintenance tasks in a hydropower system when there is expected unserved energy based on current water storage levels and forecast storage inflows. It is therefore essential that a maintenance scheduling optimizer can incorporate the options of shortening the maintenance duration and/or deferring maintenance tasks in the search for practical maintenance schedules. In this article, an improved ant colony optimization-power plant maintenance scheduling optimization (ACO-PPMSO) formulation that considers such options in the optimization process is introduced. As a result, both the optimum commencement time and the optimum outage duration are determined for each of the maintenance tasks that need to be scheduled. In addition, a local search strategy is presented in this article to boost the robustness of the algorithm. When tested on a five-station hydropower system problem, the improved formulation is shown to be capable of allowing shortening of maintenance duration in the event of expected demand shortfalls. In addition, the new local search strategy is also shown to have significantly improved the optimization ability of the ACO-PPMSO algorithm.

This chapter considers admission control and scheduling rules for a single-machine production environment. Orders arrive at a single machine and can be grouped into several product families. Each order has a family-dependent due date, production duration, and reward. When an order cannot be served

Several studies have shown that children prefer contingent reinforcement (CR) rather than yoked noncontingent reinforcement (NCR) when continuous reinforcement is programmed in the CR schedule. Preference has not, however, been evaluated for practical schedules that involve CR. In Study 1, we assessed 5 children's preference for obtaining social interaction via a multiple schedule (periods of fixed-ratio 1 reinforcement alternating with periods of extinction), a briefly signaled delayed reinforcement schedule, and an NCR schedule. The multiple schedule promoted the most efficient level of responding. In general, children chose to experience the multiple schedule and avoided the delay and NCR schedules, indicating that they preferred multiple schedules as the means to arrange practical schedules of social interaction. In Study 2, we evaluated potential controlling variables that influenced 1 child's preference for the multiple schedule and found that the strong positive contingency was the primary variable.

Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since power is quadratically related to voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and (ii) the behavioral level. At the system level, the voltage of a variable-voltage processor is dynamically changed with the work load. Processors with limited-sized buffers as well as those with very large buffers are considered. Given the task arrival times, deadlines, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to the optimal energy assignment (0.1% error) and the optimal peak-power assignment (1% error). Next, on-line and off-line minimum-energy task scheduling algorithms are developed for processors with limited-sized buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size, given information about task size (maximum, minimum), execution time (best case, worst case) and deadlines, is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned
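The quadratic voltage-power relation that motivates this line of work can be sketched numerically; the capacitance value and cycle counts below are illustrative assumptions, not figures from the dissertation:

```python
# Dynamic switching power is commonly modeled as P = C_eff * V**2 * f, so the
# energy spent on a fixed number of cycles scales with the square of the
# supply voltage. Constants here are illustrative only.
def switching_energy(c_eff, v, cycles):
    """Energy (joules) to execute `cycles` cycles at supply voltage `v`."""
    return c_eff * v ** 2 * cycles

# Halving the voltage cuts switching energy 4x (ignoring the resulting slowdown):
ratio = switching_energy(1.0, 2.0, 1) / switching_energy(1.0, 1.0, 1)
```

This is why dynamic voltage scaling trades speed for energy: the slowdown at lower voltage is roughly linear, while the energy saving is quadratic.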

This paper provides a classification of real scheduling problems, and various ways of examining and describing them are presented. Scheduling faces tremendous challenges and difficulties in meeting the preferences of the consumer; dealing with scheduling problems is complicated, inefficient and time-consuming. This study aims to develop a mathematical model for scheduling the operating theatre during peak and off-peak times. Scheduling is a well-known optimization problem, and the goal is to find an optimal solution. In this paper, we use integer linear programming for scheduling at a high level of synthesis, with both time- and resource-constrained scheduling. An optimal result was obtained using the software GLPK/AMPL. This model can be adapted to solve other scheduling problems, such as lecture theatres, cinemas and work shifts.

Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.

The paper describes the procedures and the system to build and maintain the schedules needed to manage time, resources, and progress of the CMS project. The system is based on the decomposition of the project into work packages, which can be each considered as a complete project with its own structure. The system promotes the distribution of the decision making and responsibilities to lower levels in the organisation by providing a state-of-the-art system to formalise the external commitments of the work packages without limiting their ability to modify their internal schedules to best meet their commitments. The system lets the project management focus on the interfaces between the work packages and alerts the management immediately if a conflict arises. The proposed system simplifies the planning and management process and eliminates the need for a large, centralised project management system.

This LDRD project was a campus exec fellowship to fund (in part) Donald Nguyen’s PhD research at UT-Austin. His work has focused on parallel programming models, and scheduling irregular algorithms on shared-memory systems using the Galois framework. Galois provides a simple but powerful way for users and applications to automatically obtain good parallel performance using certain supported data containers. The naïve user can write serial code, while advanced users can optimize performance by advanced features, such as specifying the scheduling policy. Galois was used to parallelize two sparse matrix reordering schemes: RCM and Sloan. Such reordering is important in high-performance computing to obtain better data locality and thus reduce run times.

This system addresses the problem of online scheduling of sequential data objects with the principle of periodicity in the context of dynamic information dissemination. Many modern information applications disseminate dynamically generated data objects and answer complex queries that retrieve multiple data objects. In dynamic environments, data streams need to be processed online rather than stored and later retrieved to answer queries. In particular, data objects are produced dynamically by the information providers, interleaved and disseminated efficiently by the broadcasting servers, and assembled sequentially on the client side. The proposed algorithm, with a well-specified gain measure function, prominently outperforms the FIFO schedule and is able to minimize the mean service access time to an extent close to the theoretical optimum.

The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.

Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy (DOE). This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at the Hanford Site and surrounding communities. The responsibility for monitoring onsite drinking water falls outside the scope of the SESP. PNL conducts the drinking water monitoring project concurrently with the SESP to promote efficiency and consistency, utilize expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control, and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site. Sampling is indicated as annual, semi-annual, quarterly, or monthly in the sampling schedule. Some samples are collected and analyzed as part of ground-water monitoring and characterization programs at Hanford (e.g. Resource Conservation and Recovery Act (RCRA), Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), or Operational). The number of samples planned by other programs is identified in the sampling schedule by a number in the analysis column and a project designation in the Cosample column. Well sampling events may be merged to avoid redundancy in cases where sampling is planned by both environmental surveillance and another program.

Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. A main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance in resource selection for task scheduling. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to compute the average of the sorted list of completion times of each task. Then, the maximum average is obtained, and the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
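A minimal sketch of the Sort-Mid steps as described above; the completion-time matrix layout and tie-breaking are assumptions, and the published algorithm may differ in detail:

```python
# Sort-Mid sketch: repeatedly pick the unallocated task with the maximum
# average completion time and assign it to its minimum-completion-time machine.
# completion_time[t][m] = estimated completion time of task t on machine m.
def sort_mid(completion_time):
    tasks = set(range(len(completion_time)))
    assignment = {}
    while tasks:
        # Average of each remaining task's (sorted) completion-time list.
        averages = {t: sum(completion_time[t]) / len(completion_time[t])
                    for t in tasks}
        # Task with the maximum average is allocated first ...
        t = max(tasks, key=lambda x: averages[x])
        # ... to the machine with its minimum completion time.
        m = min(range(len(completion_time[t])),
                key=lambda j: completion_time[t][j])
        assignment[t] = m
        tasks.remove(t)  # delete the allocated task and repeat
    return assignment
```

For example, `sort_mid([[3, 5], [10, 2]])` allocates task 1 (average 6) to machine 1 and task 0 (average 4) to machine 0.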

In this paper, we consider the scheduling of jobs that may be competing for mutually exclusive resources. We model the conflicts between jobs with a conflict graph, so that all concurrently running jobs must form an independent set in the graph. We believe that this model is natural and general enough to have applications in a variety of settings; however, we are motivated by the following two specific applications: traffic intersection control and session scheduling in high speed local area networks with spatial reuse. In both of these applications, guaranteeing the best turnaround time to any job entering the system is important. Our results focus on two special classes of graphs motivated by our applications: bipartite graphs and interval graphs. Although the algorithms for bipartite and interval graphs are quite different, the bounds they achieve are the same: we prove that for any sequence of jobs in which the maximum completion time of a job in the optimal schedule is bounded by A, the algorithm can complete every job in time O(n^3 A^2), where n is the number of nodes in the conflict graph. We also show that the best competitive ratio achievable by any online algorithm for the maximum completion time on interval or bipartite graphs is Ω(n).
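The conflict-graph model itself is easy to state in code. The sketch below runs jobs in greedy rounds so that concurrently running jobs always form an independent set; it only illustrates the model and is not the paper's O(n^3 A^2) algorithm:

```python
# Naive round-based scheduler for the conflict-graph model: each slot greedily
# admits (in arrival order) every remaining job that conflicts with no job
# already in the slot, so each slot is an independent set of the conflict graph.
def schedule_rounds(jobs, conflicts):
    """jobs: job ids in arrival order; each job takes one slot.
    conflicts: set of frozenset({a, b}) pairs that may not run together.
    Returns a list of slots, each a set of concurrently running jobs."""
    remaining = list(jobs)
    slots = []
    while remaining:
        slot = []
        for j in remaining:
            if all(frozenset({j, k}) not in conflicts for k in slot):
                slot.append(j)
        remaining = [j for j in remaining if j not in slot]
        slots.append(set(slot))
    return slots
```

With jobs 1, 2, 3 and a single conflict between 1 and 2, jobs 1 and 3 share the first slot and job 2 runs in the second.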

The learning curve shows the relationship between time and the cumulative number of units produced, giving a mathematical description of the performance of workers carrying out repetitive work. The problem addressed in this study is the difference in labor performance before and after the break, which affects the company's production scheduling. The study was conducted in the garment industry, and its aim is to predict the company's production schedule using the learning curve and the forgetting curve. By implementing the learning curve and forgetting curve, this paper contributes to improving labor performance: the maximum output in the 3 productive hours before the break is 15 units, with a learning-curve percentage of 93.24%. Meanwhile, with the forgetting curve, the maximum output in the 3 productive hours after the break is 11 units, with a forgetting-curve percentage of 92.96%. The resulting 26 units per working day of productive hours is used as the basis for production scheduling.
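A standard way to turn a learning-curve percentage such as 93.24% into per-unit times is Wright's model, where the time for the n-th unit is T1 * n**b with b = log(p)/log(2). The first-unit time and horizon below are illustrative assumptions, not figures from the study:

```python
import math

# Wright's learning-curve model: each doubling of cumulative output multiplies
# the per-unit time by the learning percentage p (e.g. 0.9324 for 93.24%).
def unit_time(t1, n, p):
    """Time for the n-th unit, given first-unit time t1 and learning rate p."""
    b = math.log(p) / math.log(2)
    return t1 * n ** b

def units_in_horizon(t1, p, horizon):
    """Count whole units that fit into `horizon` minutes of productive time."""
    n, elapsed = 0, 0.0
    while elapsed + unit_time(t1, n + 1, p) <= horizon:
        n += 1
        elapsed += unit_time(t1, n, p)
    return n
```

With t1 = 10 minutes and p = 0.9324, the second unit takes 9.324 minutes, and three units fit into a 30-minute horizon; a scheduler can use such counts per productive block, as the study does with its 15- and 11-unit blocks.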

We investigate the scheduling of a common resource between several concurrent users when the feasible transmission rate of each user varies randomly over time. Time is slotted, and users arrive and depart upon service completion. This may model, for example, the flow-level behavior of end-users on a narrowband HDR wireless channel (CDMA 1xEV-DO). As performance criteria we consider the stability of the system and the mean delay experienced by the users. Given the complexity of the problem, we investigate the fluid-scaled system, which allows us to obtain important results and insights for the original system: (1) We characterize for a large class of scheduling policies the stability conditions and identify a set of maximum stable policies, giving in each time slot preference to users being in their best possible channel condition. We find in particular that many opportunistic scheduling policies like Score-Based, Proportionally Best or Potential Improvement are stable under the maximum stability conditions, whereas ...

Decommissioning has recently become a highlighted issue in Korea due to the Permanent Shutdown (PS) of the Kori-1 plant. Since the Korea Hydro and Nuclear Power (KHNP) Company decided on the PS of Kori-1 instead of continued operation, Kori-1 will be the first commercial reactor to be decommissioned in Korea. The Korean regulatory authority demands an Initial Decommissioning Plan (IDP) for all plants in operation and under construction. In addition, decommissioning should be considered for the completion of the life cycle of NPPs. To date, Korea has no experience with decommissioning a commercial reactor, and many uncertainties are expected due to site-specific factors. However, an optimized decommissioning process schedule is indispensable for the safety and economic efficiency of the project. Unlike the USA, Korea has no experience or know-how in operation and site management for decommissioning. Hence, in Korea, establishing the decommissioning schedule has to give more weight to safety than precedent cases did. A more economical and rational schedule will be composed by collecting and analyzing experience data and site-specific data and information as the decommissioning progresses. In the long term, KHNP, having the capability for NPP decommissioning, will pursue decommissioning business in Korea and foreign countries.

Four pigeons were trained in a series of two-component multiple schedules. Reinforcers were scheduled with random-interval schedules. The ratio of arranged reinforcer rates in the two components was varied over 4 log units, a much wider range than previously studied. When performance appeared stable, prefeeding tests were conducted to assess…

We consider a model for scheduling under uncertainty. In this model, we combine the main characteristics of online and stochastic scheduling in a simple and natural way. Job processing times are assumed to be stochastic, but in contrast to traditional stochastic scheduling models, we assume that

Research on patients' experiences of wait time for scheduled surgery has centered predominantly on the relative tolerability of perceived wait time and impacts on quality of life. We explored patients' experiences of time while waiting for three types of surgery with varied wait times--hip or knee replacement, shoulder surgery, and cardiac surgery. Thirty-two patients were recruited by their surgeons. We asked participants about their perceptions of time while waiting in two separate interviews. Using interpretative phenomenological analysis (IPA), we discovered connections between participant suffering, meaningfulness of time, and agency over the waiting period and the lived duration of time experience. Our findings reveal that chronological duration is not necessarily the most relevant consideration in determining the quality of waiting experience. Those findings helped us create a conceptual framework for lived wait time. We suggest that clinicians and policy makers consider the complexity of wait time experience to enhance preoperative patient care.

In this paper we analyse how rent control affects the duration of individual unemployment. In a theoretical search model we distinguish between two effects of rent control. On one hand, rent control reduces housing mobility and hence mobility in the labour market. On the other hand, to maintain rent control benefits, unemployed individuals are more likely to accept job offers in the local labour market. Based on a rich Danish data set, we find that the probability of finding a local job increases with the rent control intensity of the housing unit, whereas the probability of finding a job outside...

Based on pooled register data from Norway and Sweden, we find that differences in unemployment duration patterns reflect dissimilarities in unemployment insurance (UI) systems in a way that convincingly establishes the link between economic incentives and job search behaviour. Specifically, UI benefits are relatively more generous for low-income workers in Sweden than in Norway, leading to relatively longer unemployment spells for low-income workers in Sweden. Based on the between-countries variation in replacement ratios, we find that the elasticity of the outflow rate from insured unemployment...

The main goal of this article is to define the problem of vowel duration in Civili (H12a). It shows that the so-called Civili vowel-length desperately needs to be re-examined, because previous works on the sound system of this language hardly explain a number of phonological phenomena, such as vowel lengthening, on the basis of data at hand. Demonstrating the problem in question, the author first reviews previous works that all identify a vowel lengthening in Civili. From different analyses t...

A plant construction project always involves many activities. Precise information about activity durations is unfortunately unavailable due to uncertain resource capacity. The fuzzy program evaluation and review technique (PERT) has been widely applied to solve the fuzzy project scheduling problem. This paper presents an extended fuzzy PERT approach with four major improvements to support construction project scheduling management: (1) evaluating fuzzy operation times based on available working volumes, resource quantities and fuzzy resource capacities; (2) adopting a maximal alpha_i-level cut method to compare the fuzzy times of precedent activities and determine the reasonable earliest starting time of each activity; (3) using a fuzzy algebra method instead of a fuzzy subtraction method to compute the fuzzy latest starting times; and (4) developing a project scheduling risk index (PSRI) to assist the decision maker in evaluating project scheduling risk. Simulation experiments are conducted and demonstrate satisfactory results.

In this paper, we propose joint user-and-hop scheduling over dual-hop block-fading broadcast channels in order to exploit multi-user diversity gains and multi-hop diversity gains all together. To achieve this objective, the first and second hops are scheduled opportunistically based on the channel state information, and as a prerequisite we assume that the relay, which is half-duplex and operates using decode-and-forward, is capable of storing the received packets from the source until the channel condition of the destination user becomes good enough for it to be scheduled. We formulate the joint scheduling problem as maximizing the weighted sum of the long-term achievable rates of the users under a stability constraint, which means that in the long term the rate received by the relay should equal the rate transmitted by it, in addition to constant or variable power constraints. We show that this problem is equivalent to a single-hop broadcast channel by treating the source as a virtual user with an optimal priority weight that maintains the stability constraint. We show how to obtain the source weight either off-line based on channel statistics or in real time based on channel measurements. Furthermore, we consider special cases including the maximum sum rate scheduler and the proportional fair scheduler. We demonstrate via numerical results that our proposed joint scheduling scheme enlarges the rate region as compared with a scheme that employs multi-user scheduling alone.

Channel-aware and opportunistic scheduling algorithms exploit channel knowledge and fading to increase the average throughput. Alternatively, each user could be served equally in order to maximize fairness. Obviously, there is a tradeoff between average throughput and fairness in the system. In this paper, we study four representative schedulers, namely the maximum throughput scheduler (MTS), the proportional fair scheduler (PFS), the (relative) opportunistic round robin scheduler (ORS), and the round robin scheduler (RRS), for a space-time coded multiple antenna downlink system. The system applies TDMA-based scheduling and exploits the multiple antennas in terms of spatial diversity. We show that the average sum rate performance and the average worst-case delay depend strongly on the user distribution within the cell. MTS gains from asymmetrically distributed users whereas the other three schedulers suffer. On the other hand, the average fairness of MTS and PFS decreases with asymmetrical user distribution. The key contribution of this paper is to put these tradeoffs and observations on a solid theoretical basis. Both the PFS and the ORS provide a reasonable performance in terms of throughput and fairness. However, PFS outperforms ORS for symmetrical user distributions, whereas ORS outperforms PFS for asymmetrical user distributions.

We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.

In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
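The regularizer's central quantity, the mutual information between classification responses and true labels, can be estimated from joint counts. This sketch shows the simple plug-in estimate; the paper uses its own entropy estimation, so treat this as illustrative:

```python
import math

# Plug-in estimate of mutual information I(response; label) in nats, from a
# matrix counts[i][j] of joint occurrences of true label i and response j.
def mutual_information(counts):
    n = sum(sum(row) for row in counts)
    rows = [sum(row) for row in counts]          # marginal counts of labels
    cols = [sum(col) for col in zip(*counts)]    # marginal counts of responses
    mi = 0.0
    for i, row in enumerate(counts):
        for j, c in enumerate(row):
            if c:
                mi += (c / n) * math.log((c * n) / (rows[i] * cols[j]))
    return mi
```

A perfect two-class classifier with balanced classes attains I = log 2 nats, while a response independent of the label gives I = 0; the regularizer pushes learning toward the former.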

In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.

Reference desk scheduling is one of the most challenging tasks in the organizational structure of an academic library. The ability to turn this challenge into a workable and effective function lies with the scheduler and indirectly the cooperation of all librarians scheduled for reference desk service. It is the scheduler's sensitivity to such…

A method for rate-optimal scheduling of recursive DSP algorithms is presented. The approach is based on the determination of the scheduling window of each operation and the construction of a scheduling-range chart. The information in the chart is used during scheduling to optimize some quality criterion.

The purpose of this study was to adjust equations that establish relationships between rainfall events of different durations using data from weather stations in the state of Santa Catarina, Brazil. The relationships between heavy rainfalls of different durations from 13 weather stations of Santa Catarina were analyzed. From series of annual maximum rainfalls, and using the Gumbel-Chow distribution, the maximum rainfall for durations between 5 min and 24 h was estimated for return periods from 2 to 100 years. The fit of the data to the Gumbel-Chow model was verified by the Kolmogorov-Smirnov test at 5% significance. The coefficients of Bell's equation were adjusted to estimate the relationship between rainfall duration t (min) and return period T (years) relative to the maximum rainfall with a duration of 1 hour and a 10-year return period. Likewise, the coefficients of Bell's equation were adjusted based on the maximum rainfall with a duration of 1 day and a 10-year return period. The results showed that these relationships are viable for estimating short-duration rainfall events at locations where there are no rainfall records.
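For reference, the general shape of the relationship being refitted can be sketched with Bell's (1969) classical coefficients; the coefficients fitted for Santa Catarina in the study above are not reproduced here, so this is an illustrative sketch only, nominally valid for 5 ≤ t ≤ 120 min and 2 ≤ T ≤ 100 years.

```python
import math

def bell_rainfall(t_min, T_yr, p60_10):
    """Estimate the maximum rainfall depth for duration t (min) and
    return period T (years) from the 1-h, 10-yr rainfall p60_10,
    using Bell's classical coefficients (illustrative values, not
    the coefficients fitted in the study above)."""
    return (0.21 * math.log(T_yr) + 0.52) * (0.54 * t_min ** 0.25 - 0.50) * p60_10
```

By construction the scaling factor is close to 1 at the reference point (t = 60 min, T = 10 yr), so `bell_rainfall(60, 10, p)` returns approximately `p`.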

Problem statement: A ratio scheduling algorithm to solve the allocation of jobs on the shop floor was proposed. The problem was to find an optimal schedule that minimizes the maximum completion time and the sum of distinct earliness and tardiness penalties from a given common due date d. Approach: The objective of the proposed algorithm was to reduce the early and late penalties and to increase the overall profit of the organization. The proposed method was discussed with different possible instances. Results: The test results showed that the algorithm was robust and simple and can be applied to problems of any job size. Conclusion: The proposed algorithm gave encouraging results for the benchmark instances when the due date is less than half of the total processing time.

In this paper, we consider the schedule-based network localization concept, which does not require synchronization among nodes and does not involve communication overhead. The concept makes use of a common transmission sequence, which enables each node to perform self-localization and to localize the entire network, based on noisy propagation-time measurements. We formulate the schedule-based localization problem as an estimation problem in a Bayesian framework. This provides robustness with respect to uncertainty in such system parameters as anchor locations and timing devices. Moreover, we derive a sequential approximate maximum a posteriori (AMAP) estimator. The estimator is fully decentralized and copes with varying noise levels. By studying the fundamental constraints given by the considered measurement model, we provide a system design methodology which enables a scalable solution. Finally, we evaluate the performance of the proposed AMAP estimator by numerical simulations emulating an impulse-radio ultra-wideband (IR-UWB) wireless network.

Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network for the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.

Patient satisfaction with botulinum toxin treatment is a key success factor in aesthetic procedures and is governed by the interaction of numerous variables. Duration of effect is important because it influences retreatment intervals as well as affecting cost and convenience to the patient. In order to review the evidence on the duration of benefit associated with various commercial formulations of botulinum toxin, MEDLINE was searched using the following terms: 'botulinum' and 'duration'/'retreatment' (limits: 'clinical trials,' 'meta-analyses,' 'English'). I also searched my existing reference files, reference lists of identified articles, and meeting/conference abstracts to ensure completeness. The focus was on clinical medicine and aesthetic trials. To be eligible for the analysis, studies had to include efficacy assessments at multiple timepoints. To estimate duration of benefit, the following outcomes were examined and summarized: responder rates, mean wrinkle severity scores at various timepoints (with or without changes from baseline), and relapse rates. Duration at both repose and maximum attempted muscle contraction was considered when provided. Where possible, duration was assessed by formulation and dose. The initial search yielded 164 articles. Of these, 35 included an adequate measure of duration in aesthetic indications. The majority of these (22) were on the glabellar area. Study designs and endpoints were highly heterogeneous, and duration of effect varied between studies. Several studies with the BOTOX Cosmetic (onabotulinumtoxinA; Allergan, Inc., Irvine, CA, USA) formulation of botulinum toxin type A (BoNTA) included relapse rates, defined conservatively as return to baseline levels of line severity for two consecutive visits approximately 30 days apart (at repose and maximum contraction). In these studies, duration of effect ranged from 3 to 5 months in female patients and from 4 to 6 months in male patients. Individual patients had longer

The acoustic propagation speed under water poses significant challenges to the design of underwater sensor networks and their medium access control protocols. As in the air, scheduling transmissions under water has a significant impact on throughput, energy consumption, and reliability. In this paper we present an extended set of simplified scheduling constraints which allows easy scheduling of underwater acoustic communication. We also present two algorithms for scheduling communications: a centralized scheduling approach and a distributed scheduling approach. The centralized approach achieves the highest throughput, while the distributed approach aims to minimize the computation and communication overhead. We further show how the centralized scheduling approach can be extended with transmission dependencies to reduce the end-to-end delay of packets. We evaluate the performance of the centralized and distributed scheduling approaches using simulation. The centralized approach outperforms the distributed approach in terms of throughput; however, we also show the distributed approach has significant benefits in terms of the communication and computational overhead required to set up the schedule. We propose a novel way of estimating the performance of scheduling approaches using the ratio of modulation time to propagation delay. We show the performance is largely dictated by this ratio, although the number of links to be scheduled also has a minor impact on the performance.
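The ratio that the abstract identifies as the key performance driver is straightforward to compute; a minimal sketch, assuming a nominal underwater sound speed of 1500 m/s (the function and parameter names are illustrative, not from the paper):

```python
SOUND_SPEED_WATER = 1500.0  # m/s, nominal value for seawater

def mod_prop_ratio(packet_bits, bit_rate_bps, distance_m,
                   sound_speed=SOUND_SPEED_WATER):
    """Ratio of modulation (transmission) time to propagation delay.
    Small values mean propagation dominates -- the regime, unusual in
    radio networks, that makes underwater scheduling distinctive."""
    t_mod = packet_bits / bit_rate_bps      # seconds on the channel
    t_prop = distance_m / sound_speed       # seconds in flight
    return t_mod / t_prop
```

For example, a 512-bit packet at 1 kbit/s over a 1000 m link gives a modulation time of 0.512 s against a propagation delay of about 0.67 s, i.e. a ratio below 1, so propagation delay dominates.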

Scheduling is an important operation in process industries for improving resource utilization resulting in direct economic benefits. It has a two-fold objective of fulfilling customer orders within the specified time as well as maximizing the plant profit. Unexpected disturbances such as machine breakdown, arrival of rush orders and cancellation of orders affect the schedule of the plant. Reactive scheduling is generation of a new schedule which has minimum deviation from the original schedule in spite of the occurrence of unexpected events in the plant operation. Recently, Shaik & Floudas (2009) proposed a novel unified model for short-term scheduling of multipurpose batch plants using unit-specific event-based continuous time representation. In this paper, we extend the model of Shaik & Floudas (2009) to handle reactive scheduling.

This document provides Bechtel Hanford, Inc. (BHI) and Westinghouse Hanford Company (WHC) a schedule of monitoring and sampling routines for the Operational Environmental Monitoring (OEM) program during calendar year (CY) 1995. Every attempt will be made to consistently follow this schedule; any deviation from this schedule will be documented by an internal memorandum (DSI) explaining the reason for the deviation. The DSI will be issued by the scheduled performing organization and directed to Near-Field Monitoring. The survey frequencies for particular sites are determined by the technical judgment of Near-Field Monitoring and may depend on the site history, radiological status, use and general conditions. Additional surveys may be requested at irregular frequencies if conditions warrant. All radioactive waste sites are scheduled to be surveyed at least annually. Any newly discovered waste sites not documented by this schedule will be included in the revised schedule for CY 1995.

Real-world scheduling environments include machine breakdowns, uncertain processing times, workers getting sick, materials being delayed and the appearance of new jobs. These possible environmental changes mean that a schedule which was optimal for the information available at the time of scheduling can end up being highly suboptimal when it is implemented and subjected to the uncertainty of the real world. For this reason it is very important to find methods capable of creating robust schedules (schedules expected to perform well after a minimal amount of modification when the environment changes) or flexible schedules (schedules expected to perform well after some degree of modification when the environment changes). This thesis presents two fundamentally different approaches for scheduling job shops facing machine breakdowns. The first method is called neighbourhood based robustness and is based on an idea of minimising…

Lifetime prediction techniques developed by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) are described. These techniques were developed to predict the Solar Maximum Mission (SMM) spacecraft orbit, which is decaying due to atmospheric drag, with reentry predicted to occur before the end of 1989. Lifetime predictions were also performed for the Long Duration Exposure Facility (LDEF), which was deployed on the 1984 SMM repair mission and is scheduled for retrieval on another Space Transportation System (STS) mission later this year. Concepts used in the lifetime predictions were tested on the San Marco spacecraft, which reentered the Earth's atmosphere on December 6, 1988. Ephemerides predicting the orbit evolution of the San Marco spacecraft until reentry were generated over the final 90 days of the mission, when the altitude was less than 380 kilometers. The errors in the predicted ephemerides are due to errors in the prediction of atmospheric density variations over the lifetime of the satellite. To model the time dependence of the atmospheric densities, predictions of the solar flux at the 10.7-centimeter wavelength were used in conjunction with Harris-Priester (HP) atmospheric density tables. Orbital state vectors, together with the spacecraft mass and area, are used as input to the Goddard Trajectory Determination System (GTDS). Propagations proceed in monthly segments, with the nominal atmospheric drag model scaled for each month according to the predicted monthly average value of F10.7. Calibration propagations are performed over a period of known orbital decay to obtain the effective ballistic coefficient. Propagations using plus or minus 2 sigma solar flux predictions are also generated to estimate the dispersion in expected reentry dates. Definitive orbits are compared with these predictions as time elapses. As updated vectors are received, these are also propagated to reentry to continually update the lifetime predictions.

An oil-adjuvant Moraxella bovis bacterin was administered to weanling calves, using different vaccination schedules. Calves were given a booster vaccination after 3 weeks and were challenge exposed 2 weeks later with virulent M bovis recovered from calves with clinical infectious bovine keratoconjunctivitis (IBK). The effects of different routes of vaccination and of homologous and heterologous challenge exposure on the incidence, severity, and duration of induced IBK were evaluated. All calves given a placebo developed clinical IBK. Calves vaccinated subcutaneously in the neck had the shortest duration of M bovis infection, the lowest incidence and the shortest duration of acute IBK, and the lowest disease severity score, compared with effects in calves given a placebo or vaccinated subconjunctivally. Calves challenge exposed with the homologous strain of M bovis had more infected eyes, more eyes with acute IBK, longer duration of infection, and higher disease severity and duration scores.

In complex work domains and organizations, understanding scheduling dynamics can ensure objectives are reached and delays are mitigated. In the current paper, we examine the scheduling dynamics for NASA's Exploration Flight Test 1 (EFT-1) activities. For this examination, we specifically modeled…

Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.

An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
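The unit conversion quoted in the abstract can be checked directly, taking the commonly used solar mass value of 1.989 × 10^30 kg:

```python
SOLAR_MASS_KG = 1.989e30   # commonly quoted solar mass, kg

core_mass_kg = 2.69e30     # maximum iron-core mass from the abstract
core_mass_solar = core_mass_kg / SOLAR_MASS_KG
print(round(core_mass_solar, 2))  # 1.35, matching the quoted value
```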

In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
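For contrast with the sophisticated algorithms discussed above, the simplest correct baseline for the bipartite case is Kuhn's augmenting-path algorithm, which runs in $O(VE)$ time; a minimal sketch (not the paper's MCMC method, nor Micali-Vazirani):

```python
def max_bipartite_matching(adj, n_right):
    """Maximum-cardinality matching in a bipartite graph via repeated
    augmenting-path search (Kuhn's algorithm, O(V*E)). adj[u] lists the
    right-side neighbours of left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left partner of v, or -1

    def try_augment(u, seen):
        # Try to match u, possibly re-routing an existing match.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```

Each call to `try_augment` either finds an augmenting path (growing the matching by one) or proves none exists from that vertex, which is what makes the greedy outer loop correct.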

In modern logistics operations, large-scale logistics companies, besides actively participating in profit-seeking commercial business, also play an essential role during an emergency relief process by dispatching urgently required materials to disaster-affected areas. Therefore, an issue that has been widely addressed by logistics practitioners and has attracted increasing attention from researchers is how logistics companies can achieve maximum commercial profit on the condition that emergency tasks are performed effectively and satisfactorily. In this paper, two vehicle scheduling models are proposed to solve the problem. One is a prediction-related scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profit; the other is a priority-directed scheme, which first groups commercial and emergency business according to priority grades and then schedules both types of business jointly and simultaneously by maximizing the total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results testify to the feasibility and effectiveness of the proposed models.

In this paper we study the interactions of TCP and IEEE 802.11 MAC in Wireless Mesh Networks (WMNs). We use a Markov chain to capture the behavior of TCP sessions, particularly the impact on network throughput due to the effect of queue utilization and packet relaying. A closed form solution is derived to numerically determine the throughput. Based on the developed model, we propose a distributed MAC protocol called Timestamp-ordered MAC (TMAC), aiming to alleviate the unfairness problem in WMNs. TMAC extends CSMA/CA by scheduling data packets based on their age. Prior to transmitting a data packet, a transmitter broadcasts a request control message appended with a timestamp to a selected list of neighbors. It can proceed with the transmission only if it receives a sufficient number of grant control messages from these neighbors. A grant message indicates that the associated data packet has the lowest timestamp of all the packets pending transmission at the local transmit queue. We demonstrate that a loose ordering of timestamps among neighboring nodes is sufficient for enforcing local fairness, subsequently leading to flow rate fairness in a multi-hop WMN. We show that TMAC can be implemented using the control frames in IEEE 802.11, and thus can be easily integrated in existing 802.11-based WMNs. Our simulation results show that TMAC achieves excellent resource allocation fairness while maintaining over 90% of maximum link capacity across a large number of topologies.
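The core grant rule of TMAC described above can be paraphrased in a few lines; this is a schematic sketch under the stated loose-ordering semantics, with illustrative names (the paper's actual control-frame format and 802.11 integration details are not reproduced):

```python
def grant(request_ts, local_queue_ts):
    """A neighbour grants a transmission request iff the requested
    packet's timestamp is no later than every packet pending in its
    own transmit queue (ordering packets by age)."""
    return all(request_ts <= ts for ts in local_queue_ts)

def may_transmit(request_ts, neighbour_queues, quorum):
    """The transmitter proceeds only after collecting a sufficient
    number of grant messages from the selected neighbours."""
    grants = sum(grant(request_ts, q) for q in neighbour_queues)
    return grants >= quorum
```

An empty local queue grants unconditionally, and only a quorum (not unanimity) of grants is required, which is what makes the ordering "loose" rather than a strict global timestamp order.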

The scheduling of both absorption-cycle and vapour-compression-cycle chillers in trigeneration plants is investigated in this work. Many trigeneration plants use absorption-cycle chillers only, but there are potential performance advantages to be gained by using a combination of absorption and compression chillers, especially in situations where the building electrical demand to be met by the combined heat and power (CHP) plant is variable. Simulation models of both types of chiller are developed, together with a simple model of a variable-capacity CHP engine developed by curve-fitting to supplier's data. The models are linked to form an optimisation problem in which the contribution of both chiller types is determined at a maximum value of operating cost (or carbon emission) saving. Results show that an optimum operating condition arises at moderately high air conditioning demands and moderately low power demands, when the air conditioning demand is shared between both chillers, all recovered heat is utilised, and the contribution arising from the compression chiller results in an increase in CHP power generation and, hence, engine efficiency.

Maritime transportation is the backbone of world trade and is accountable for around 3% of the world's CO2 emissions. We present the Vessel Schedule Recovery Problem (VSRP) to evaluate a given disruption scenario and to select a recovery action balancing the trade-off between increased bunker consumption and the impact on the remaining network and the customer service level. The model is applied to 4 real cases from Maersk Line. Solutions are comparable or superior to those chosen by operations managers. Cost savings of up to 58% may be achieved.

Scheduling is the task of assigning resources to operations. When the resources are mobile vehicles, they describe routes through the served stations. To emphasize such aspect, this problem is usually referred to as the routing problem. In particular, if vehicles are aircraft and stations are airports, the problem is known as aircraft routing. This paper describes the solution to such a problem developed in OMAR (Operative Management of Aircraft Routing), a system implemented by Bull HN for Alitalia. In our approach, aircraft routing is viewed as a Constraint Satisfaction Problem. The solving strategy combines network consistency and tree search techniques.

A substantial proportion of schizophrenia-spectrum patients exhibit a cognitive impairment at illness onset. However, the long-term course of neurocognition, and a possible neurotoxic effect of time spent in active psychosis, is a topic of controversy. Furthermore, it is of importance to find out what predicts the long-term course of neurocognition. Duration of untreated psychosis (DUP), accumulated time in psychosis the first year after start of treatment, relapse rates and symptoms are potential predictors of the long-term course. In this study, 261 first-episode psychosis patients were… relationship between psychosis before (DUP) or after start of treatment and the composite score was found, providing no support for the neurotoxicity hypothesis, and indicating that psychosis before start of treatment has no significant impact on the course and outcome in psychosis. We found no association…

Research finds that many impoverished urban Black adults engage in a pattern of partnering and family formation involving a succession of short cohabitations yielding children, a paradigm referred to as transient domesticity. Researchers have identified socioeconomic status, cultural adaptations, and urbanicity as explanations for aspects of this pattern. We used longitudinal data from the 2001 Survey of Income and Program Participation to analyze variation in cohabitation and marriage duration by race/ethnicity, income, and urban residence. Proportional hazards regression indicated that separation risk is greater among couples that are cohabiting, below 200% of the federal poverty line, and Black but is not greater among urban dwellers. This provides empirical demographic evidence to support the emerging theory of transient domesticity and suggests that both socioeconomic status and race explain this pattern. We discuss the implications of these findings for understanding transient domesticity and make recommendations for using the Survey of Income and Program Participation to further study this family formation paradigm.

Despite their advantages in analysis, 4D NMR experiments are still infrequently used as a routine tool in protein NMR projects due to the long duration of the measurement and limited digital resolution. Recently, new acquisition techniques for speeding up multidimensional NMR experiments, such as nonlinear sampling, in combination with non-Fourier transform data processing methods have been proposed to be beneficial for 4D NMR experiments. Maximum entropy (MaxEnt) methods have been utilised for reconstructing nonlinearly sampled multi-dimensional NMR data. However, the artefacts arising from MaxEnt processing, particularly, in NOESY spectra have not yet been clearly assessed in comparison with other methods, such as quantitative maximum entropy, multidimensional decomposition, and compressed sensing. We compared MaxEnt with other methods in reconstructing 3D NOESY data acquired with variously reduced sparse sampling schedules and found that MaxEnt is robust, quick and competitive with other methods. Next, nonlinear sampling and MaxEnt processing were applied to 4D NOESY experiments, and the effect of the artefacts of MaxEnt was evaluated by calculating 3D structures from the NOE-derived distance restraints. Our results demonstrated that sufficiently converged and accurate structures (RMSD of 0.91Å to the mean and 1.36Å to the reference structures) were obtained even with NOESY spectra reconstructed from 1.6% randomly selected sampling points for indirect dimensions. This suggests that 3D MaxEnt processing in combination with nonlinear sampling schedules is still a useful and advantageous option for rapid acquisition of high-resolution 4D NOESY spectra of proteins.
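Generating a nonuniform sampling schedule like the 1.6% one mentioned above is simple to sketch; the following picks grid points uniformly at random, which is a generic illustration (real NMR schedules are often biased, e.g. exponentially weighted toward early evolution times, and the paper's actual schedules are not reproduced here):

```python
import random

def sparse_schedule(shape, fraction, seed=0):
    """Select `fraction` of the indirect-dimension grid points of an
    nD experiment at random, returning sorted index tuples."""
    rng = random.Random(seed)
    total = 1
    for n in shape:
        total *= n
    k = max(1, round(fraction * total))
    points = []
    for idx in rng.sample(range(total), k):  # k distinct flat indices
        pt = []
        for n in reversed(shape):            # unflatten index -> tuple
            pt.append(idx % n)
            idx //= n
        points.append(tuple(reversed(pt)))
    return sorted(points)
```

For an 8 × 8 × 8 indirect grid, a 1.6% schedule keeps only 8 of the 512 points, which is the kind of reduction that makes MaxEnt-style reconstruction necessary.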

A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
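The two-hypothesis comparison underlying the MLE tool can be illustrated with a toy Poisson counts model; this is a schematic stand-in for the Sherpa-based fitting described above (the model names and the use of a plain likelihood ratio are illustrative assumptions, not the CSC pipeline's actual statistic):

```python
import math

def poisson_loglike(counts, expected):
    """Sum of Poisson log-likelihoods ln P(n_i | mu_i) over pixels/bins."""
    return sum(n * math.log(mu) - mu - math.lgamma(n + 1)
               for n, mu in zip(counts, expected))

def detection_statistic(counts, bkg_model, src_model):
    """-2 * log-likelihood ratio of the background-only hypothesis
    against background-plus-source; larger values favour a real source."""
    ll_bkg = poisson_loglike(counts, bkg_model)
    ll_src = poisson_loglike(counts,
                             [b + s for b, s in zip(bkg_model, src_model)])
    return -2.0 * (ll_bkg - ll_src)
```

When the observed counts exceed the background model by roughly the source model, the statistic is positive (source favoured); when the counts match the background alone, adding the source term only hurts the fit and the statistic goes negative.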

Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at Hanford Site and surrounding communities. The responsibility for monitoring the onsite drinking water falls outside the scope of the SESP. The Hanford Environmental Health Foundation is responsible for monitoring the nonradiological parameters as defined in the National Drinking Water Standards while PNL conducts the radiological monitoring of the onsite drinking water. PNL conducts the drinking water monitoring project concurrent with the SESP to promote efficiency and consistency, utilize the expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site.

In this paper, we address the scheduling problem with rejection and non-identical job arrivals, in which we may choose not to process certain jobs and each rejected job incurs a penalty. Our goal is to minimize the sum of the total penalties of the rejected jobs and the maximum completion time of the processed ones. For the off-line variant, we prove its NP-hardness and present a PTAS, and for the on-line special case with two job arrivals, we design a best possible algorithm with competitive ratio (√5+1)/2.
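For intuition about the objective, tiny instances of the off-line variant can be brute-forced; the sketch below assumes a single machine (so the makespan of the accepted jobs is simply their total processing time), which is an illustrative simplification rather than the paper's setting:

```python
from itertools import combinations

def best_rejection_schedule(jobs):
    """jobs: list of (processing_time, rejection_penalty) pairs.
    Exhaustively choose the rejected subset minimising
    total rejection penalty + makespan of accepted jobs, on a single
    machine where makespan = sum of accepted processing times.
    Exponential time -- for intuition only; the paper gives a PTAS."""
    n = len(jobs)
    best = None
    for k in range(n + 1):
        for rejected in combinations(range(n), k):
            rej = set(rejected)
            cost = (sum(jobs[i][1] for i in rej) +
                    sum(jobs[i][0] for i in range(n) if i not in rej))
            if best is None or cost < best[0]:
                best = (cost, rej)
    return best
```

On one machine the jobs decouple: a job is worth rejecting exactly when its penalty is below its processing time, so the optimum equals `sum(min(p, w) for p, w in jobs)`; the hardness and the (√5+1)/2 competitive ratio arise in the richer settings the paper studies.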

In this study, P wave duration and dispersion (PWD) were measured in 30 heavy smoker, 30 light smoker and 30 nonsmoker subjects. There were no significant differences among heavy smokers, light smokers and nonsmokers with respect to maximum P wave duration and PWD (117±9 ms, 116±8 ms, 115±6 ms, one-way ANOVA p=0.78, and 48±10 ms, 45±11 ms, 43±8 ms, one-way ANOVA p=0.14, respectively). Minimum P wave duration was also similar in the three groups (69±10 ms, 70±13 ms, 72±9 ms, one-way ANOVA p=0.51). We also found no dose-dependent relation between the duration of smoking, the number of cigarettes smoked, and P wave duration. Habitual cigarette smoking alone does not alter P wave duration and PWD in otherwise healthy young subjects.

BACKGROUND: Clonidine is an α2-adrenoreceptor agonist that has been shown to effectively prolong the duration of analgesia when administered intrathecally or in the epidural space along with a local anaesthetic. AIMS AND OBJECTIVE: This study was designed to evaluate the effect of two different doses of intrathecal clonidine (37.5 μg and 75 μg) on the duration of analgesia and the side effects produced by hyperbaric bupivacaine 0.5%. MATERIALS AND METHODS: A prospective, hospital-based, randomized and double-blind study. Seventy-five patients scheduled for elective below-umbilical surgeries were randomly allocated to one of three groups. Group I (n=25, control group) received 3 ml hyperbaric bupivacaine, Group II (n=25) 3 ml hyperbaric bupivacaine + 37.5 μg clonidine, and Group III (n=25) 3 ml hyperbaric bupivacaine + 75 μg clonidine intrathecally. The total volume (4 ml) remained constant by adding sterile water. Data were analyzed using SPSS software ver. 18. RESULTS: The (mean±SD) duration of analgesia was 171.3±6.37 mins in Group I, 217.7±7.01 mins in Group II and 257.1±6.50 mins in Group III (p<0.05). This shows that 37.5 μg and 75 μg intrathecal clonidine increase the duration of analgesia of 15 mg hyperbaric bupivacaine by about 46 mins and 86 mins, respectively. The addition of intrathecal clonidine up to 75 μg does not cause any significant major side effect except mild sedation, without an increase in the incidence of hypotension, bradycardia or respiratory depression. CONCLUSION: Intrathecal clonidine (37.5 μg and 75 μg) as an adjuvant to hyperbaric bupivacaine 0.5% prolongs the duration of analgesia in a dose-dependent manner without an increase in the incidence of significant side effects.

The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is, however, still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM... overlap of the design and construction processes. Consequently, the overall scheduling is primarily based on intuition and personal experiences, rather than well-founded figures of the specific project. Finally, the overall schedule is comprehensive and complex, and consequently, difficult to overview and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve...

Mine countermeasures (MCM) missions entail planning and operations in very dynamic and uncertain operating environments, which pose considerable risk to personnel and equipment. Frequent schedule repairs that consider the latest operating conditions are needed to keep the mission on target. Presently no decision support tools are available for the challenging task of MCM mission rescheduling. To address this capability gap, we have developed the CARPE system to assist operation planners. CARPE constantly monitors the operational environment for changes and recommends alternative repaired schedules in response. It includes a novel schedule repair algorithm called Case-Based Local Schedule Repair (CLOSR) that automatically repairs broken schedules while satisfying the requirement of minimal operational disruption. It uses a case-based approach to represent repair strategies and apply them to new situations. Evaluation of CLOSR on simulated MCM operations demonstrates the effectiveness of the case-based strategy. Schedule repairs are generated rapidly, ensure the elimination of all mines, and achieve required levels of clearance.

The traditional method for planning, scheduling and controlling activities and resources in construction projects is CPM scheduling, which has been the predominant scheduling method since its introduction in the late 1950s. Over the years, CPM has proven to be a very powerful technique for planning, scheduling and controlling projects. However, criticism has been raised of the CPM method, specifically in the case of construction projects, for deficient management of construction work and discontinuous flow of resources. Alternative scheduling techniques, often called repetitive or linear... that will be used in this study. LBS is a scheduling method that rests upon the theories of line-of-balance and which uses the graphic representation of a flowline chart. As such, LBS is adapted for planning and management of workflows and, thus, may provide a solution to the identified shortcomings of CPM. Even...

In this paper we consider greedy scheduling algorithms in wireless networks, i.e., the schedules are computed by adding links greedily based on some priority vector. Two special cases are considered: 1) Longest Queue First (LQF) scheduling, where the priorities are computed using queue lengths, and 2) Static Priority (SP) scheduling, where the priorities are pre-assigned. We first propose a closed-form lower bound stability region for LQF scheduling, and discuss the tightness result in some scenarios. We then propose a lower bound stability region for SP scheduling with multiple priority vectors, as well as a heuristic priority assignment algorithm, which is related to the well-known Expectation-Maximization (EM) algorithm. The performance gain of the proposed heuristic algorithm is finally confirmed by simulations.
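The greedy LQF rule can be sketched as follows, under the usual conflict-graph interference model (the function and data layout are illustrative, not taken from the paper):

```python
def lqf_schedule(queues, conflicts):
    """Greedy Longest-Queue-First: visit links in decreasing order of
    queue length and activate each link that has a nonempty queue and
    does not conflict with an already activated link.
    `queues`    : dict link -> queue length
    `conflicts` : dict link -> set of links it interferes with
    Returns the set of activated links."""
    active = set()
    for link in sorted(queues, key=queues.get, reverse=True):
        if queues[link] > 0 and not (conflicts.get(link, set()) & active):
            active.add(link)
    return active
```

With queues {a: 5, b: 3, c: 4} and b conflicting with both a and c, the rule activates a first, then c, and skips b, yielding the maximal schedule {a, c}.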

We consider the problem of designing a fair scheduling algorithm for discrete-time constrained queuing networks. Each queue has dedicated exogenous packet arrivals. There are constraints on which queues can be served simultaneously. This model effectively describes important special instances like network switches, interference in wireless networks, bandwidth sharing for congestion control and traffic scheduling in road roundabouts. Fair scheduling is required because it provides isolation to different traffic flows; isolation makes the system more robust and enables providing quality of service. Existing work on fairness for constrained networks concentrates on flow based fairness. As a main result, we describe a notion of packet based fairness by establishing an analogy with the ranked election problem: packets are voters, schedules are candidates and each packet ranks the schedules based on its priorities. We then obtain a scheduling algorithm that achieves the described notion of fairness by drawing upon ...

Scheduling is crucial to the operation of a logistics service supply chain (LSSC), so a scientific performance evaluation method is required to evaluate scheduling performance. Unlike general project performance evaluation, scheduling activities are usually continuous and multiperiod. Therefore, the weights of the scheduling performance evaluation indexes are not fixed but vary dynamically. In this paper, the factors that influence scheduling performance are analyzed on three levels: strategic environment, operating process, and scheduling results. Based on these three levels, the scheduling performance evaluation index system of the LSSC is established. The new performance evaluation method proposed here, based on dynamic index weights, has three innovations. Firstly, a multiphase dynamic interaction method is introduced to improve the quality of quantification. Secondly, due to the large number of second-level indexes and the requirement of dynamic weight adjustment, the maximum attribute deviation method is introduced to determine the weights of the second-level indexes, which removes the uncertainty of subjective factors. Thirdly, an adjustment coefficient method based on set-valued statistics is introduced to determine the first-level index weights. Finally, an application example from a logistics company in China is given to illustrate the effectiveness of the proposed method.

In this study, we present a literature review, classification schemes and an analysis of methodology for scheduling problems on a Batch Processing (BP) machine with both processing time and job size constraints, which is also regarded as Two-Dimensional (TD) scheduling. Special attention is given to scheduling problems with non-identical job sizes and processing times, with details of the basic algorithms and other significant results.

The Spent Nuclear Fuel Integrated Schedule Plan establishes the organizational responsibilities, the rules for developing, maintaining, and reporting the status of the SNF integrated schedule, and an implementation plan for the integrated schedule. The mission of the SNFP on the Hanford site is to provide safe, economic, environmentally sound management of Hanford SNF in a manner which stages it to final disposition. This particularly involves K Basin fuel.

Approved for public release; distribution unlimited. In this thesis we study the Marine Corps Tactical Aerial Reconnaissance Vehicle routing and scheduling problem. The present method of routing and scheduling is presented, along with possible implications for routing and scheduling when future expansion of vehicle assets becomes available. A review of current literature is given and comparisons are drawn between our problem and recent work. A model for the problem, which we call the Multi...

Some dominance rules are proposed for the problem of scheduling N jobs on a single machine with due dates, sequence-dependent setup times and no preemption. Two algorithms based on Ragatz's branch-and-bound scheme are developed incorporating the dominance rules, where the objective is to minimize the maximum tardiness or the total tardiness. Computational experiments demonstrate the effectiveness of the dominance rules.

... of home appliances into priority classes and the definition of a maximum power consumption threshold which is not allowed to be exceeded during peak hours. According to the bandwidth demand and priority of each class, the reversible fair scheduling algorithm delays some of the appliances and prolongs...

The School Reinforcement Survey Schedule (SRSS) was administered to 2,828 boys and girls in middle schools in the United States and an Italian translation was administered to 342 boys and girls in middle schools in Northern Italy. An exploratory factor analysis using half the American data set was performed using maximum likelihood estimation with…

Lamb's hydrostatic adjustment problem for the linear response of an infinite, isothermal atmosphere to an instantaneous heating of infinite horizontal extent is generalized to include the effects of heating of finite duration. Three different time sequences of the heating are considered: a top hat, a sine, and a sine-squared heating. The transient solution indicates that heating of finite duration generates broader but weaker acoustic wave fronts. However, it is shown that the final equilibrium is the same regardless of the heating sequence provided the net heating is the same. A Lagrangian formulation provides a simple interpretation of the adjustment. The heating generates an entropy anomaly that is initially realized completely as a pressure excess with no density perturbation. In the final state the entropy anomaly is realized as a density deficit with no pressure perturbation. Energetically, the heating generates both available potential energy and available elastic energy. The former remains in the heated layer while the latter is carried off by the acoustic waves. The wave energy generation is compared for the various heating sequences. In the instantaneous case, 28.6% of the total energy generation is carried off by waves. This fraction is the ratio of the ideal gas constant R to the specific heat at constant pressure c_p. For the heatings of finite duration considered, the amount of wave energy decreases monotonically as the heating duration increases and as the heating thickness decreases. The wave energy generation approaches zero when (i) the duration of the heating is comparable to or larger than the acoustic cutoff period, 2π/N_A ≈ 300 s, and (ii) the thickness of the heated layer approaches zero. The maximum wave energy occurs for a thick layer of heating of small duration and is the same as that for the instantaneous case. The effect of a lower boundary is also considered.
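The quoted 28.6% wave-energy fraction follows directly from the stated ratio R/c_p: for a diatomic ideal gas such as air, c_p = 7R/2, so R/c_p = 2/7. A one-line check:

```python
# R/cp for a diatomic ideal gas (air): cp = 7R/2, hence R/cp = 2/7.
R_over_cp = 2.0 / 7.0
print(round(100 * R_over_cp, 1))  # -> 28.6 (percent), matching the abstract
```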

We present a network-aware HEFT. The original HEFT does not account for parallel network flows when designing its schedule for a computational environment in which computing nodes are physically at distant locations. In the proposed mechanism, such data transfers are stretched to their realistic completion times. A HEFT schedule with stretched data transfers exhibits the realistic makespan of the schedule. It is shown how misleading a schedule can be if the impact of parallel data transfers that share a bottleneck is ignored. A network-aware HEFT can be used to yield a benefit for Grid applications.
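A minimal sketch of the stretching idea, assuming the simplest fair-sharing model in which k concurrent flows through one bottleneck link each receive 1/k of its bandwidth (the paper's actual network model may differ):

```python
def stretched_transfer_times(sizes, bandwidth):
    """Stretch rule under equal fair sharing: k flows on one bottleneck
    of capacity `bandwidth` each get bandwidth/k, so a transfer of size
    s finishes in s * k / bandwidth seconds, instead of the naive
    s / bandwidth a network-oblivious HEFT would assume."""
    k = len(sizes)
    return [s * k / bandwidth for s in sizes]
```

For two transfers of 100 and 200 units over a shared 100-unit/s link, the naive estimates are 1 s and 2 s, while the stretched (shared-bottleneck) estimates are 2 s and 4 s, which is the gap the network-aware schedule accounts for.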

Grid computing is a high-performance computing paradigm used to meet large-scale computational demands. Task scheduling is a major issue in grid computing systems and is an NP-hard problem, for which heuristic approaches such as the ant colony algorithm can provide near-optimal solutions. The existing ant colony algorithm, however, takes more time to schedule the tasks. In this paper, the ant colony algorithm is improved by enhancing the pheromone updating rule so that it schedules tasks efficiently with better resource utilization. The simulation results show that the proposed method reduces the execution time of tasks compared to the existing ant colony algorithm.
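For reference, a generic ACO pheromone update of the evaporate-then-deposit form (this is the textbook rule, not the paper's enhanced variant; names and parameters are illustrative):

```python
def update_pheromone(tau, assignment, makespan, rho=0.1, q=1.0):
    """Generic ACO update: first evaporate every trail by factor
    (1 - rho), then deposit q/makespan on each (task, resource) edge
    used by the chosen tour, so shorter makespans reinforce more.
    `tau`        : dict (task, resource) -> pheromone level
    `assignment` : dict task -> resource for the tour being rewarded."""
    for key in tau:
        tau[key] *= (1.0 - rho)                      # evaporation
    for task, resource in assignment.items():
        tau[(task, resource)] = tau.get((task, resource), 0.0) + q / makespan
    return tau
```

With rho = 0.1, a trail at 1.0 on an edge used by a tour of makespan 2.0 becomes 0.9 + 0.5 = 1.4; an enhanced rule like the paper's would change how the deposit term is computed.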

This paper presents the scheduling process in a furniture manufacturing unit and describes a fuzzy logic application in a flexible manufacturing system (FMS), the production system of the furniture manufacturing unit. The FMS consists of several multipurpose numerically controlled machines. In this project, scheduling in the FMS has been done using the fuzzy logic tool in MATLAB. The fuzzy-logic-based scheduling model in this paper deals with job selection and best alternative route selection under multiple machine criteria, with rules for job sequencing and routing. This model is applicable to the scheduling of any manufacturing industry.

In view of staff shortages and the huge inventory of products in the current market, we put forward a personnel scheduling model whose objective is to come close to the delivery date while considering parallelism. We then designed a scheduling algorithm based on a genetic algorithm and proposed a flexible parallel decoding method which makes full use of personnel capacity. Case study results indicate that flexible personnel scheduling that considers order-shop scheduling, machine automation capabilities and personnel flexibility, with the objective of coming close to the delivery date, optimizes the allocation of human resources and maximizes efficiency.

The Cassini Uplink Scheduler (CASSIUS) is cross-platform software used to generate a radiation sequence plan for commands being sent to the Cassini spacecraft. Because signals must travel through varying amounts of Earth's atmosphere, several different modes of constant telemetry rates have been devised. These modes guarantee that the spacecraft and the Deep Space Network agree with respect to the data transmission rate. However, the memory readout of a command will be lost if it occurs on a telemetry mode boundary. Given a list of spacecraft message files as well as the available telemetry modes, CASSIUS can find an uplink sequence that ensures safe transmission of each file. In addition, it can predict when the two on-board solid state recorders will swap. CASSIUS prevents data corruption by making sure that commands are not planned for memory readout during telemetry rate changes or a solid state recorder swap.

This paper deals with the scheduling analysis of hard real-time streaming applications. These applications are mapped onto a heterogeneous multiprocessor system-on-chip (MPSoC), where we must jointly meet the timing requirements of several jobs. Each job is independently activated and processes streams at its own rate. The dynamic starting and stopping of jobs necessitates the usage of self-timed schedules (STSs). By modeling job implementations using multirate data flow (MRDF) graph semantics, real-time analysis can be performed. Traditionally, temporal analysis of STSs for MRDF graphs only aims at evaluating the average throughput. It does not cope well with latency, and it does not take into account the temporal behavior during the initial transient phase. In this paper, we establish an important property of STSs: the initiation times of actors in an STS are bounded by the initiation times of the same actors in any static periodic schedule of the same job. Based on this property, we show how to guarantee strictly periodic behavior of a task within a self-timed implementation; we then provide useful bounds on maximum latency for jobs with periodic, sporadic, and bursty sources, as well as a technique to check latency requirements. We present two case studies that exemplify the application of these techniques: a simplified channel equalizer and a wireless LAN receiver.

In this paper, we consider a dynamical model of computer networks and derive a synthesis method for congestion control. First, we show a model of TCP/AQM (Transmission Control Protocol/Active Queue Management) as a dynamical model of computer networks. The dynamical model of TCP/AQM networks consists of models of TCP window size, queue length and AQM mechanisms. Second, we propose to describe the dynamical model of TCP/AQM networks as linear systems with self-scheduling parameters, which also depend on information delay. Here we focus on the constraints on the maximum queue length and TCP window-size, which are the network resources in TCP/AQM networks. We derive TCP/AQM networks as the LPV system (linear parameter varying system) with information delay and self-scheduling parameter. We design a memoryless state feedback controller of the LPV system based on a gain-scheduling method. Finally, the effectiveness of the proposed method is evaluated by using MATLAB and the well-known ns-2 (Network Simulator Ver.2) simulator.

We consider the "Offline Ad Slot Scheduling" problem, where advertisers must be scheduled to "sponsored search" slots during a given period of time. Advertisers specify a budget constraint, as well as a maximum cost per click, and may not be assigned to more than one slot for a particular search. We give a truthful mechanism under the utility model where bidders try to maximize their clicks, subject to their personal constraints. In addition, we show that the revenue-maximizing mechanism is not truthful, but has a Nash equilibrium whose outcome is identical to our mechanism. As far as we can tell, this is the first treatment of sponsored search that directly incorporates both multiple slots and budget constraints into an analysis of incentives. Our mechanism employs a descending-price auction that maintains a solution to a certain machine scheduling problem whose job lengths depend on the price, and hence is variable over the auction. The price stops when the set of bidders that can afford that price pack exa...

During the last few years, users all over the world have become more and more familiar with the availability of broadband access. When users want broadband Internet service, they are generally restricted to a DSL (Digital Subscriber Line) or cable-modem-based connection. Proponents are advocating Worldwide Interoperability for Microwave Access (WiMAX), a technology based on an evolving standard for point-to-multipoint wireless networking. Scheduling algorithms that support Quality of Service (QoS) differentiation and guarantees for wireless data networks are crucial to the deployment of broadband wireless networks. Performance-affecting parameters such as fairness, bandwidth allocation, throughput and latency were studied, and it was found that none of the conventional algorithms performs effectively for both fairness and bandwidth allocation simultaneously. An efficient scheduling algorithm with a better trade-off between these two parameters is therefore essential, so we propose a novel scheduling algorithm using fuzzy logic and artificial neural networks that addresses these aspects simultaneously. Initial results show that a fair degree of fairness is attained while keeping priorities intact. Results also show that maximum channel utilization is achieved with a negligible increase in processing time.

Operating Room (OR) scheduling is crucial to allow efficient use of ORs. Currently, the predicted durations of surgical procedures are unreliable and the OR schedulers have to follow the progress of the procedures in order to update the daily planning accordingly. The OR schedulers often acquire the needed information through verbal communication with the OR staff, which causes undesired interruptions of the surgical process. The aim of this study was to develop a system that predicts in real-time the remaining procedure duration and to test this prediction system for reliability and usability in an OR. The prediction system was based on the activation pattern of one single piece of equipment, the electrosurgical device. The prediction system was tested during 21 laparoscopic cholecystectomies, in which the activation of the electrosurgical device was recorded and processed in real-time using pattern recognition methods. The remaining surgical procedure duration was estimated and the optimal timing to prepare the next patient for surgery was communicated to the OR staff. The mean absolute error was smaller for the prediction system (14 min) than for the OR staff (19 min). The OR staff doubted whether the prediction system could take all relevant factors into account but were positive about its potential to shorten waiting times for patients. The prediction system is a promising tool to automatically and objectively predict the remaining procedure duration, and thereby achieve optimal OR scheduling and streamline the patient flow from the nursing department to the OR.

Twelve office workers participated in a study investigating the effects of four sit/stand schedules (90-min sit/30-min stand, 80/40, 105/15, and 60/60) via several objective and subjective measures (muscle fatigue, foot swelling, spinal shrinkage, and self-reported discomfort). Results showed that there were no significant differences in shoulder and low back static muscle activities between sitting and standing. Muscle fatigue developed during the workday under all schedules. The longest standing schedule seemed to have a tendency to reduce muscle fatigue. None of the schedules helped or worsened foot swelling and spinal shrinkage. More active break-time activities seemed to reduce muscle fatigue and foot swelling. While the self-reported bodily discomfort levels were generally low, the preferred schedules among the participants varied, although the least standing schedule was the least preferred. We may conclude that the effect of using a sit-stand workstation to improve musculoskeletal health may be limited, but promoting more active break-time activities can help. Practitioner Summary: Sit-stand workstations are used to reduce work-related musculoskeletal disorders. This study shows that office workers prefer sit/stand durations in the range between 1:1 and 3:1. Longer standing may have the potential to reduce muscle fatigue. However, active break-time activities may be more effective in reducing muscle fatigue and foot swelling.

Anodized-aluminum pressure-sensitive paint (AA-PSP) uses the dipping deposition method to apply a luminophore on a porous anodized-aluminum surface. We study the dipping duration, one of the parameters of dipping deposition related to the characterization of AA-PSP. The dipping duration was varied from 1 to 100,000 s. The properties characterized are the pressure sensitivity, temperature dependency, and signal level. The maximum pressure sensitivity of 65% is obtained at a dipping duration of 100 s, the minimum temperature dependency at a duration of 1 s, and the maximum signal level at a duration of 1,000 s. Among these characteristics, the dipping duration most influences the signal level, which changes by a factor of 8.4. By introducing a weight coefficient, an optimum dipping duration can be determined. Among all the dipping parameters, such as the dipping duration, dipping solvent, and luminophore concentration, the pressure sensitivity and signal level are most influenced by the dipping solvent.

The value of a health state may depend on how long an individual has had to endure the health state (i.e. hedonic load). In this paper, we test the constant proportionality (CP) assumption and determine the sign of relationship between duration and health state value for 42 health states using the nationally representative data from the United Kingdom Measurement and Valuation of Health study. The results reject the CP assumption and suggest that the relationship is negative for optimal health (i.e. fair innings argument) and that the relationship is positive for poorer health states (i.e. adaptation). We find no evidence of the maximum endurable time hypothesis using these data. This evidence on the duration effect has important implications for outcomes research and the economic evaluation of interventions.

Since second generation pneumococcal conjugate vaccines (PCVs) targeting 10 and 13 serotypes became available in 2010, the number of national policy makers considering these vaccines has steadily increased. An important consideration for a national immunization program is the timing and number of doses—the schedule—that will best prevent disease in the population. Data on disease epidemiology and the efficacy or effectiveness of PCV schedules are typically considered when choosing a schedule. Practical concerns, such as the existing vaccine schedule, and vaccine program performance are also important. In low-income countries, pneumococcal disease and deaths typically peak well before the end of the first year of life, making a schedule that provides PCV doses early in life (eg, a 6-, 10- and 14-week schedule) potentially the best option. In other settings, a schedule including a booster dose may address disease that peaks in the second year of life or may be seen to enhance a schedule already in place. A large and growing body of evidence from immunogenicity studies, as well as clinical trials and observational studies of carriage, pneumonia and invasive disease, has been systematically reviewed; these data indicate that schedules of 3 or 4 doses all work well, and that the differences between these regimens are subtle, especially in a mature program in which coverage is high and indirect (herd) effects help enhance protection provided directly by a vaccine schedule. The recent World Health Organization policy statement on PCVs endorsed a schedule of 3 primary doses without a booster or, as a new alternative, 2 primary doses with a booster dose. While 1 schedule may be preferred in a particular setting based on local epidemiology or practical considerations, achieving high coverage with 3 doses is likely more important than the specific timing of doses. PMID:24336059

Assessment Centres are used as a tool for psychologists and coaches to observe a number of dimensions of a person's behaviour and test his/her potential within a number of chosen focus areas. This is done in an intense course, with a number of different exercises which expose each participant... Centres usually last two days and involve 3-6 psychologists or trained coaches as assessors. An entire course is composed of a number of rounds, with each round having its individual duration. In each round, the participants are divided into a number of groups with prespecified pairing of group sizes...

Events can sometimes appear longer or shorter in duration than other events of equal length. For example, in a repeated presentation of auditory or visual stimuli, an unexpected object of equivalent duration appears to last longer. Illusions of duration distortion raise an important question of time representation: when durations dilate or contract, does time in general slow down or speed up during that moment? In other words, what entailments do duration distortions have with respect to other timing judgments? We show here that when a sound or visual flicker is presented in conjunction with an unexpected visual stimulus, neither the pitch of the sound nor the frequency of the flicker is affected by the apparent duration dilation. This demonstrates that subjective time in general is not slowed; instead, duration judgments can be manipulated with no concurrent impact on other temporal judgments. Like spatial vision, time perception appears to be underpinned by a collaboration of separate neural mechanisms that usually work in concert but are separable. We further show that the duration dilation of an unexpected stimulus is not enhanced by increasing its saliency, suggesting that the effect is more closely related to prediction violation than to enhanced attention. Finally, duration distortions induced by violations of progressive number sequences implicate the involvement of high-level predictability, suggesting the involvement of areas higher than primary visual cortex. We suggest that duration distortions can be understood in terms of repetition suppression, in which neural responses to repeated stimuli are diminished.

Maximum entropy deconvolution is presented as a method for estimating the receiver function, using maximum entropy as the criterion for determining the auto-correlation and cross-correlation functions. The Toeplitz equations and the Levinson algorithm are used to derive the iterative formula for the error-prediction filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. Maximizing the entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective time-domain method for measuring receiver functions.
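
The Levinson recursion at the core of the method can be sketched as follows. This is the generic Levinson-Durbin algorithm for Toeplitz normal equations, not the authors' receiver-function code; the AR(1) autocorrelation in the demo is an illustrative input.

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations by Levinson recursion,
    returning the prediction-error filter [1, a1, ..., a_order]
    and the final prediction-error power."""
    a = [1.0]
    err = r[0]
    for k in range(1, order + 1):
        # reflection coefficient from the current filter and autocorrelation
        acc = sum(a[j] * r[k - j] for j in range(k))
        ref = -acc / err
        # |ref| < 1 keeps the recursion (and the resulting filter) stable
        a = [1.0] + [a[j] + ref * a[k - j] for j in range(1, k)] + [ref]
        err *= 1.0 - ref * ref
    return a, err

# Autocorrelation of an AR(1) process with coefficient 0.5:
# the recursion should recover the whitening filter [1, -0.5, 0].
coeffs, power = levinson_durbin([1.0, 0.5, 0.25], order=2)
```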

This study was undertaken to improve the performance of a chemotherapy treatment unit by increasing throughput and reducing the average patient waiting time. To achieve this objective, a scheduling template was built: a simple tool that can be used to schedule patients' arrivals at the clinic. A simulation model of the system was built, and several scenarios that match the arrival pattern of the patients to resource availability were designed and evaluated. After detailed analysis, one scenario provided the best system performance, and a scheduling template was developed based on it. After implementing the new scheduling template, 22.5% more patients can be served. CancerCare Manitoba is a provincially mandated cancer care agency dedicated to providing quality care to those who have been diagnosed with and are living with cancer. The MacCharles Chemotherapy Unit was specially built to provide chemotherapy treatment to the cancer patients of Winnipeg. In order to maintain an excellent service, it tries to ensure that patients get their treatment in a timely manner. That goal is challenging to maintain because of the lack of a proper roster, the workload distribution and inefficient resource allotment. To maintain the satisfaction of patients and healthcare providers by serving the maximum number of patients in a timely manner, it is necessary to develop an efficient scheduling template that matches the required demand with the availability of resources. This goal can be reached using simulation modelling, which has proven to be an excellent tool: building computer models that represent real-world or hypothetical systems and experimenting with these models to study system behaviour under different scenarios [1, 2]. A study was undertaken at the Children's Hospital of Eastern Ontario to identify the issues behind the long waiting

Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current of maximum power. These quantities are determined by finding the maximum of the power equation using differentiation. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power are each plotted as a function of the time of day.
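
The differentiation step can be sketched numerically. The single-diode I-V model and its parameters below are illustrative assumptions, not the panel model used in the project; the maximum power point is located by bisecting on dP/dV = 0.

```python
import math

def max_power_point(i_sc=8.0, i_0=1e-9, v_t=0.7):
    """Locate the maximum power point of a simplified single-diode
    I-V model (illustrative parameters) by bisecting on dP/dV = 0."""
    current = lambda v: i_sc - i_0 * (math.exp(v / v_t) - 1.0)
    power = lambda v: v * current(v)
    # central-difference estimate of dP/dV
    dp = lambda v, h=1e-6: (power(v + h) - power(v - h)) / (2.0 * h)
    lo = 0.0
    hi = v_t * math.log(i_sc / i_0 + 1.0)   # open-circuit voltage: current(hi) = 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dp(mid) > 0.0:
            lo = mid          # power still increasing: MPP lies to the right
        else:
            hi = mid
    v_mp = 0.5 * (lo + hi)
    return v_mp, current(v_mp), power(v_mp)

v_mp, i_mp, p_mp = max_power_point()
```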

This thesis presents research on scheduling in an uncertain environment, which forms a part of the rolling stock life cycle logistics applied research and development program funded by Dutch railway industry companies. The focus therefore lies on scheduling of maintenance operations on rolling stock

In this paper we describe the use of a set covering model with additional constraints for scheduling train drivers and conductors for the Dutch railway operator NS Reizigers. The schedules were generated according to new rules originating from the project "Destination: Customer"

In long-term evolution (LTE) downlink transmission, the modified least weighted delay first (MLWDF) scheduler is a quality of service (QoS) aware scheduling scheme for real-time (RT) services. Nevertheless, MLWDF performs suboptimally in the trade-off between the strict delay and loss constraints of RT and non-RT traffic flows, respectively. This is worsened further by the implementation of hybrid automatic repeat request (HARQ). As these constraints grow with the increasing number of user demands, the performance of MLWDF degrades further. To ameliorate this situation, the variations in user demands and the HARQ implementation need to be incorporated directly as parameters of the MLWDF scheduler. In this work, an improvement to the MLWDF scheduler is proposed, adding two novel parameters that characterise user demand and HARQ implementation. The scheduler was tested using three classes of service from the QoS class identifier (QCI) table standardised by the Third Generation Partnership Project for LTE networks to characterise different services, and was also tested on the basis of packet prioritisation. The proposed scheduler was simulated with the LTE-SIM simulator and compared with the MLWDF and proportional fairness schedulers. In terms of delay, throughput and packet loss ratio, the proposed scheduler improved overall system performance.
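
For reference, the baseline M-LWDF metric that the proposal extends can be sketched as follows. The two extra parameters proposed in the paper are not specified in the abstract and are therefore not modelled here; the flow figures are hypothetical.

```python
import math

def mlwdf_priority(hol_delay, inst_rate, avg_rate, delay_target, max_loss_prob):
    """Baseline M-LWDF priority: a_i * W_i * r_i / R_i, where W_i is the
    head-of-line delay, r_i the instantaneous achievable rate, R_i the
    average throughput, and a_i = -log(delta_i) / tau_i encodes the QoS
    targets (max loss probability delta_i, delay budget tau_i)."""
    a = -math.log(max_loss_prob) / delay_target
    return a * hol_delay * inst_rate / avg_rate

# two hypothetical RT flows with identical QoS targets; the second has a
# larger head-of-line delay, so it wins the resource despite a worse channel
flows = [
    dict(hol_delay=0.02, inst_rate=5e6, avg_rate=2e6,
         delay_target=0.1, max_loss_prob=1e-2),
    dict(hol_delay=0.08, inst_rate=3e6, avg_rate=2e6,
         delay_target=0.1, max_loss_prob=1e-2),
]
scheduled = max(range(len(flows)), key=lambda i: mlwdf_priority(**flows[i]))
```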

The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.

This thesis concerns scheduling of network traffic in grid context. Grid computing consists of a number of geographically distributed computers, which work together for solving large problems. The computers are connected through a network. When scheduling job execution in grid computing, data...

This article investigates resource block scheduling in Long Term Evolution (LTE). LTE is one of the evolutions of the Universal Mobile Telecommunications System (UMTS). It provides internet access to mobile users through smartphones, laptops and other Android devices, and offers high-speed data and multimedia services, supporting data rates up to 100 Mbps in the downlink and 50 Mbps in the uplink. Our investigation was aimed at downlink scheduling. We considered the Best CQI and Round Robin scheduling algorithms; their implementation, analysis and comparison were performed with the MATLAB simulator. We analysed the impact of both scheduling schemes on throughput and fairness, and we propose a new scheduling algorithm that achieves a compromise between the two. Orthogonal Frequency Division Multiplexing (OFDM) has been adopted as the downlink transmission scheme. We considered the impact of channel delay on throughput. In addition, MIMO transceiver systems were implemented to increase the throughput.
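
The two baseline schedulers compared in the article can be sketched as follows. This is a minimal sketch rather than the MATLAB implementation used in the study, and the CQI values are illustrative: Best CQI maximizes throughput at the cost of fairness, while Round Robin is fair but channel-blind.

```python
from itertools import cycle

def best_cqi_schedule(cqi):
    """Best CQI: each resource block goes to the user reporting the
    highest channel quality indicator on it (cqi[user][rb])."""
    n_rb = len(cqi[0])
    return [max(range(len(cqi)), key=lambda u: cqi[u][rb]) for rb in range(n_rb)]

def round_robin_schedule(n_users, n_rb):
    """Round Robin: resource blocks are handed out in a fixed cyclic
    order, ignoring channel state entirely."""
    turn = cycle(range(n_users))
    return [next(turn) for _ in range(n_rb)]

# user 0 reports the better channel on RB 0, user 1 on RB 1
cqi = [[15, 3],
       [4, 12]]
```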

High performance routers are the basic building blocks of the Internet. Most high performance routers built today use crossbars and a centralized scheduler. Due to their high scheduling complexity, crossbar-based routers are not scalable and cannot keep pace with the explosive growth of the Internet

The existence of a well-designed scheduling procedure is a major condition for an effective integration of a flexible manufacturing cell (FMC) in the material flow of a firm. This paper shows the presence and relative importance of three parameter types in the scheduling of operations on a flexible

Synchronous dataflow graphs (SDFGs) are used extensively to model streaming applications. An SDFG can be extended with scheduling decisions, allowing SDFG analysis to obtain properties, such as throughput or buffer sizes for the scheduled graphs. Analysis times depend strongly on the size of the

We present an approach to the analysis and optimization of heterogeneous distributed embedded systems. The systems are heterogeneous not only in terms of hardware components, but also in terms of communication protocols and scheduling policies. When several scheduling policies share a resource...

Properly scheduling maintenance for Air Force C-130 aircraft involves preventing mismatches between the availability of the aircraft and of the new wing kits that would prolong their use. Scheduling maintenance problems include the designation of wing kits in the production sequence to match for a

Two important characteristics encountered in many real-world scheduling problems are heterogeneous processors and a certain degree of uncertainty about the processing times of jobs. In this paper we address both, and study for the first time a scheduling problem that combines the classical unrelated

We consider machine scheduling on unrelated parallel machines with the objective to minimize the schedule makespan. We assume that, in addition to its machine dependence, the processing time of any job is dependent on the usage of a discrete renewable resource, e.g. workers. A given amount of that

McSweeney and Weatherly (1998) argued that differential habituation to the reinforcer contributes to the behavioral interactions observed during multiple schedules. The present experiment confirmed that introducing dishabituators into one component of a multiple schedule increases response rate in the other, constant, component. During baseline,…

We address the problem of scheduling n identical jobs on m uniform parallel machines to optimize scheduling criteria that are nondecreasing in the job completion times. It is well known that this can be formulated as a linear assignment problem, and subsequently solved in O(n³) time. We
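
For the total-completion-time criterion in particular, the assignment structure collapses into picking the n cheapest (machine, position) slots, which a heap does directly. This is a minimal sketch of that well-known reduction under stated assumptions, not the authors' general O(n³) formulation.

```python
import heapq

def min_total_completion(n_jobs, proc_time, speeds):
    """Identical jobs on uniform machines: the job in the k-th-from-last
    position on machine i contributes k * p / s_i to the total completion
    time, so greedily take the n cheapest (machine, position) slots."""
    heap = [(proc_time / s, i, 1) for i, s in enumerate(speeds)]
    heapq.heapify(heap)
    total = 0.0
    loads = [0] * len(speeds)            # number of jobs per machine
    for _ in range(n_jobs):
        cost, i, k = heapq.heappop(heap)
        total += cost
        loads[i] += 1
        # the next slot on machine i is one position earlier, costing more
        heapq.heappush(heap, (proc_time * (k + 1) / speeds[i], i, k + 1))
    return total, loads

# 3 identical jobs of length 2 on machines with speeds 1 and 2
total, loads = min_total_completion(3, 2.0, [1.0, 2.0])
```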

The planning and scheduling of the deicing and anti-icing activities is an important and challenging part of airport departure planning. Deicing planning has to be done in a highly dynamic environment involving several autonomous and self-interested parties. Traditional centralized scheduling approa

Considering the complex constraints between operations in the nonstandard job shop scheduling problem (NJSSP), the critical path of the job manufacturing tree is determined according to a constructed priority scheduling function. With the idea of segmentation, operations are divided into dependent and independent operations, and a corresponding scheduling strategy is put forward according to the characteristics of the operations in each segment and the interchangeability of identical-function machines. For dependent operations, a forward greedy rule is mainly adopted so that each operation is placed in the right position on the selected machine and can be processed as early as possible, shortening the total processing time of the job as much as possible. For independent operations, an optimal insertion rule is mainly adopted: the insertion position is determined by the gap obtained by subtracting the operation's processing time from the machine's idle time, and the operation is inserted at the position with the minimal gap. Experiments show that, under the same conditions, scheduling operations according to the constructed objective function and the adopted strategy gives better results than scheduling according to the efficiency scheduling algorithm.
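
The minimal-gap insertion rule for independent operations can be sketched as follows; the function name and interval representation are illustrative, not taken from the paper.

```python
def best_gap_insert(idle_intervals, duration):
    """Choose the machine idle interval whose length exceeds the
    operation's duration by the smallest amount (minimal leftover gap),
    and return the start time of the inserted operation."""
    feasible = [(end - start - duration, start)
                for start, end in idle_intervals
                if end - start >= duration]
    if not feasible:
        return None          # no idle interval can hold the operation
    gap, start = min(feasible)
    return start             # operation occupies [start, start + duration]

# idle intervals of lengths 5, 2 and 3; a 2-unit operation fits (8, 10) exactly
start = best_gap_insert([(0, 5), (8, 10), (12, 15)], 2)
```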

In the United States most teacher compensation issues are decided at the school district level. However, a group of states have chosen to play a role in teacher pay decisions by instituting statewide teacher salary schedules. Education Commission of the States has found that 17 states currently make use of teacher salary schedules. This education…

Sleep duration has been identified as a risk factor for obesity already in children. Besides investigating the role of fat mass (FM), this study addressed the question whether endocrine mechanisms act as intermediates in the association between sleep duration and overweight/obesity. Within......-specific measure of sleep duration was derived to account for alteration in sleep duration during childhood/period of growth. Multivariate linear regression and quantile regression models confirmed an inverse relationship between sleep duration and measures of overweight/obesity. The estimate for the association...... of sleep duration and body mass index (BMI) was approximately halved after adjustment for FM, but remained significant. The strength of this association was also markedly attenuated when adjusting for insulin mainly for the upper BMI quantiles (Q80, β = −0.36 vs. β = −0.26; Q95, β = −0.87 vs. β = −0...

In previous work on garbage collection (GC) models, scheduling analysis was based on the assumption that there were no aperiodic mutator tasks. However, this does not hold in practical real-time systems. A GC algorithm that can schedule aperiodic tasks is proposed, and the variance of live memory is analyzed. In this algorithm, active tasks are deferred for GC processing until they become inactive, and the saved sporadic-server time can be used to schedule aperiodic tasks. Scheduling sample task sets demonstrates that the algorithm can schedule aperiodic tasks and decrease GC work. The proposed GC algorithm is thus more flexible and portable.

The generalized matching equation provides a good description of response allocation in concurrent schedules of positive reinforcement in nonhumans as well as in humans. The present experiment was conducted to further investigate the allocation of responding under concurrent schedules of negative reinforcement (i.e., timeouts from pressing a force cell) in humans. Each of three participants was exposed to different reinforcement ratios (9:1, 1:1 and 1:9) in the terminal links of a concurrent-chains schedule of negative reinforcement. The allocation of responding under this schedule was well described by the generalized matching equation, for each participant. These results replicate previous findings obtained with nonhumans and humans under concurrent schedules of positive reinforcement. In addition, they extend the results reported by Alessandri and Rivière (2013) showing that human behavior maintained by timeouts from an effortful response is sensitive to changes in relative reinforcement ratios as well as relative delays of reinforcement.

The aim of this research is to minimize makespan in the flexible job shop environment by the use of genetic algorithms and scheduling rules. Software is developed using genetic algorithms and scheduling rules based on certain constraints such as non-preemption of jobs, recirculation, set-up times, non-breakdown of machines, etc. The purpose of the software is to develop a schedule for the flexible job shop environment, which is a special case of the job shop scheduling problem. The scheduling algorithm used in the software is verified and tested using MT10 as a benchmark problem, presented in the flexible job shop environment at the end. LEKIN(R) software results are also compared with the results of the developed software on the MT10 benchmark problem to show that the latter is a practical software tool that can be used successfully at the BIT Training Workshop.

In hardware virtualization a hypervisor provides multiple Virtual Machines (VMs) on a single physical system, each executing a separate operating system instance. The hypervisor schedules execution of these VMs much as the scheduler in an operating system does, balancing factors such as fairness and I/O performance. As in an operating system, the scheduler may be vulnerable to malicious behavior on the part of users seeking to deny service to others or maximize their own resource usage. Recently, publicly available cloud computing services such as Amazon EC2 have used virtualization to provide customers with virtual machines running on the provider's hardware, typically charging by wall clock time rather than resources consumed. Under this business model, manipulation of the scheduler may allow theft of service at the expense of other customers, rather than merely reallocating resources within the same administrative domain. We describe a flaw in the Xen scheduler allowing virtual machines to consume almost...

ISAPS is a scheduling and planning tool for shop floor personnel working in a Flexible Manufacturing System (FMS) environment. The ISAP system has two integrated components: the Predictive Scheduler (PS) and the Reactive Scheduler (RS). These components work cooperatively to satisfy the four goals of the ISAP system, which are: (G1) meet production due dates, (G2) maximize machining center utilization, (G3) minimize cutting tool migration, and (G4) minimize product flow time. The PS is used to establish schedules for new production requirements. The RS is used to adjust the schedules produced by the PS for unforeseen events that occur during production operations. The PS and RS subsystems have been developed using IntelliCorp's Knowledge Engineering Environment (KEE), an expert system development shell, and Common LISP. Software Quality Assurance (SQA) techniques have been incorporated throughout the development effort to assure the ISAP system meets the manufacturing goals and end user requirements. 5 refs., 4 figs.

The Pk|fix|Cmax problem is a new scheduling problem based on multiprocessor parallel jobs, and it is proved to be NP-hard when k≥3. This paper focuses on the case k=3. Some new observations and techniques for the P3|fix|Cmax problem are offered. The concept of semi-normal schedules is introduced, and a very simple linear-time algorithm, the Semi-normal Algorithm, for constructing semi-normal schedules is developed. Using the method of classical Graham list scheduling, a thorough analysis of the optimal schedule on a special instance is provided, which shows that the algorithm is an approximation algorithm with ratio 9/8 for any instance of the P3|fix|Cmax problem, improving the previous best ratio of 7/6 due to M.X. Goemans.
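
For background, classical Graham list scheduling, which the 9/8 analysis builds on, can be sketched for ordinary single-processor jobs (the multiprocessor-job variant with fixed processor sets is not reproduced here):

```python
import heapq

def list_schedule(jobs, m):
    """Graham list scheduling: each job, in list order, is assigned to
    the machine that becomes free earliest; returns the makespan."""
    free = [0.0] * m            # next-free times of the m machines
    heapq.heapify(free)
    for p in jobs:
        t = heapq.heappop(free)
        heapq.heappush(free, t + p)
    return max(free)

# list order matters: for [2, 3, 4, 5] on 2 machines the list schedule
# gives makespan 8, while the optimum pairs {2, 5} and {3, 4} for 7
makespan = list_schedule([2, 3, 4, 5], 2)
```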

This document provides the Environmental Restoration Contractor (ERC) and the Project Hanford Management Contractor (PHMC) a schedule, in accordance with WHC-CM-7-5, Environmental Compliance, and BHI-EE-02, Environmental Requirements, of monitoring and sampling routines for the Near-Field Monitoring (NFM) program during calendar year (CY) 1997. Every attempt will be made to consistently follow this schedule; any deviation from this schedule will be documented by an internal memorandum (DSI) explaining the reason for the deviation. The DSI will be issued by the scheduled performing organization and directed to Near-Field Monitoring. The survey frequencies for particular sites are determined by the technical judgment of Near-Field Monitoring and may depend on the site history, radiological status, use, and general conditions. Additional surveys may be requested at irregular frequencies if conditions warrant. All radioactive waste sites are scheduled to be surveyed at least annually. Any newly discovered waste sites not documented by this schedule will be included in the revised schedule for CY 1998. The outside perimeter road surveys of the 200 East and West Areas and the rail survey from the 300 Area to Columbia Center will be performed in the year 2000 per agreement with the Department of Energy, Richland Field Office. This schedule does not discuss staffing needs, nor does it list the monitoring equipment to be used in completing specific routines. Personnel performing routines to meet this schedule shall communicate any need for assistance in completing these routines to Radiological Control management and Near-Field Monitoring. After each routine survey is completed, a copy of the survey record, maps, and data sheets will be forwarded to Near-Field Monitoring. These routine surveys will not be considered complete until this documentation is received. At the end of each month, the ERC and PHMC radiological control organizations shall forward a copy of the Routine

In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns about temporal patterns of encounters between IoT devices and predicts future arrival and departure times, and therefore future contact durations. By relying on such predictions, a neighbour discovery scheduler is proposed, capable of jointly optimizing discovery latency and power consumption in order to maximize communication time when contacts are expected with high probability and, at the same time, saving power when contacts are expected with low probability. A comprehensive performance evaluation with different sets of synthetic and real-world traces shows that ADTP performs favourably with respect to the previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers optimizing not only neighbour discovery, but the entire communication process.

In this paper, we discuss project scheduling with conflicting activity resources. Several project activities require the same resources but may be scheduled with a certain lapse of time, so the same kind of resources is used repeatedly to execute dissimilar activities. Because the same resources are frequently used multiple times, expenditure becomes more expensive and the project duration extends. The problem is to find the activities that develop implicit relations among them. We propose a solution by introducing the TVS (Transparent View of Scheduling) model. First, we analyse and list activities according to the required resources and categorize them; then we segregate dependent and independent activities by indicating a value. A dependency test is performed on the activities using Pearson's correlation coefficient (PCC) to calculate the rate of relations among the ordered activities for similar resources. Using this model, we can reschedule activities to avoid confusion and disordering of resources without consuming additional time and capital.
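
The Pearson dependency test used above is the standard correlation coefficient; a minimal sketch (function name is illustrative):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series:
    covariance divided by the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# perfectly correlated resource-usage series give r = 1,
# perfectly anti-correlated series give r = -1
r_pos = pearson([1, 2, 3], [2, 4, 6])
r_neg = pearson([1, 2, 3], [6, 4, 2])
```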

This paper presents a comprehensive proposal for project scheduling and control by applying fuzzy earned value. It goes a step further than the existing literature: in the formulation of the fuzzy earned value we consider not only duration, but also cost and production, and alternatives in the scheduling between the earliest and latest times. The mathematical model is implemented in a prototypical construction project with all the estimated values taken as fuzzy numbers. Our findings suggest that different possible schedules and fuzzy arithmetic provide more objective results in uncertain environments than the traditional methodology. The proposed model allows the vagueness of the environment to be controlled through adjustment of the α-cut, adapting it to the specific circumstances of the project.
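
The fuzzy arithmetic and the α-cut adjustment mentioned above can be sketched with triangular fuzzy numbers; the cost figures are illustrative, not taken from the paper's case study.

```python
def tfn_add(a, b):
    """Componentwise addition of triangular fuzzy numbers (low, mode, high),
    e.g. aggregating fuzzy activity costs or durations."""
    return tuple(x + y for x, y in zip(a, b))

def alpha_cut(tfn, alpha):
    """Crisp interval of a triangular fuzzy number at level alpha:
    alpha = 0 keeps the full support, alpha = 1 collapses to the mode."""
    low, mode, high = tfn
    return (low + alpha * (mode - low), high - alpha * (high - mode))

# fuzzy cost of two activities with estimates (8, 10, 13) and (4, 5, 7)
total = tfn_add((8, 10, 13), (4, 5, 7))
```

Raising α narrows the interval returned by `alpha_cut`, which is how the model tightens or relaxes the vagueness it admits.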

Activity recognition has become a key issue in smart home environments. The problem involves learning high level activities from low level sensor data. Activity recognition can depend on several variables; one such variable is duration of engagement with sensorised items or duration of intervals between sensor activations that can provide useful information about personal behaviour. In this paper a probabilistic learning algorithm is proposed that incorporates episode, time and duration information to determine inhabitant identity and the activity being undertaken from low level sensor data. Our results verify that incorporating duration information consistently improves the accuracy.

... PLACEMENT (GENERAL) Federal Employment Priority Consideration Program for Displaced Employees of the District of Columbia Department of Corrections § 330.1102 Duration. This program terminates 1 year...

Introduction Despite the holiday season affecting available manpower, many key internal milestones have been passed over the summer, thanks to the dedication and commitment of the team at point 5. In particular, the installation on, and within, YB0 has progressed steadily through several potentially difficult phases. The v36 planning contingency of lowering YB-1 and YB-1 wheels on schedule in October, before Tracker installation, will be executed in order to give more time to complete YB0 services work, whilst still being consistent with completion of heavy lowering by the end of 2007. Safety In the underground areas the peak level of activity and parallel work has been reached and this will continue for the coming months. Utmost vigilance is required of everybody working underground and this must be maintained. However, it is encouraging to note that the compliance with safety rules is, in general, good. More and more work will be carried out from scaffolding and mobile access platforms. (cherry-picke...

Introduction and Schedule After nearly seven months of concentrated effort, the installation of services on YB0 moved off the CMS critical path in late November. In line with v36 planning provisions, the additional time needed to finish this challenging task was accommodated by reducing sequential dependencies between assembly tasks, putting more tasks (especially heavy logistic movements) in parallel with activities on, or within, the central wheel. Thus the lowering of wheels YB-1 and YB-2 and of disk YE-3 is already complete, the latter made possible, in the shadow of YB0 work, by inverting the order of the 3 endcap disks in the surface building. Weather conditions permitting, the Tracker will be transported to point 5 during CMS week for insertion in EB before CERN closes. The lowering of the last two disks will take place mid- and end-of January, respectively. Thus central beampipe installation can be confidently planned to start in February as foreseen, allowing closure of CMS in time for CRA...

Introduction Since the lowering of YB0 in February, less spectacular but nonetheless crucial progress has been made along the critical path to CMS completion. The YB0 has been aligned with the beamline to a fraction of a mm, and the HCAL has been fully installed. Cabling scaffolding for YB0 services has been built and one half (-z end) of the ECAL barrel has been installed. The YB0 services installation has begun, with two of the major technical challenges delaying bulk installation, namely PP1 detailed design, manufacture and installation plus Tracker cooling pipe insulation, now apparently solved. Significant difficulties in detailed design, integration and procurement of cable ducts remain. Despite this, the design of the +end is close to complete, and Tracker power cable installation on two sectors of the +end is well advanced. A new master schedule, v36.0, is being prepared to account for the updated actual situation at point 5 and for the revised LHC machine planning. Safety The enormous amount of...

Differences in the duration of interglacials have long been apparent in palaeoclimate records of the Late and Middle Pleistocene. However, a systematic evaluation of such differences has been hampered by the lack of a metric that can be applied consistently through time and by difficulties in separating the local from the global component in various proxies. This, in turn, means that a theoretical framework with predictive power for interglacial duration has remained elusive. Here we propose that the interval between the terminal oscillation of the bipolar seesaw and three thousand years (kyr) before its first major reactivation provides an estimate that approximates the length of the sea-level highstand, a measure of interglacial duration. We apply this concept to interglacials of the last 800 kyr by using a recently-constructed record of interhemispheric variability. The onset of interglacials occurs within 2 kyr of the boreal summer insolation maximum/precession minimum and is consistent with the canonical view of Milankovitch forcing pacing the broad timing of interglacials. Glacial inception always takes place when obliquity is decreasing and never after the obliquity minimum. The phasing of precession and obliquity appears to influence the persistence of interglacial conditions over one or two insolation peaks, leading to shorter (~13 kyr) and longer (~28 kyr) interglacials. Glacial inception occurs approximately 10 kyr after peak interglacial conditions in temperature and CO2, representing a characteristic timescale of interglacial decline. Second-order differences in duration may be a function of stochasticity in the climate system, or small variations in background climate state and the magnitude of feedbacks and mechanisms contributing to glacial inception, and as such, difficult to predict. On the other hand, the broad duration of an interglacial may be determined by the phasing of astronomical parameters and the history of

The active schedule is one of the most basic and popular concepts in production scheduling research. For identical parallel machine scheduling with dynamic job arrivals, tight performance bounds of active schedules under four popular objectives are given in this paper. A similar analysis method and similar conclusions can be generalized to the static identical parallel machine and single machine scheduling problems.

In this paper a survey of predictive and reactive scheduling methods is conducted in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time To Failure and Mean Time To Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules when a bottleneck failure occurs before, at the beginning of, or after planned maintenance actions? The efficiency of predictive schedules is evaluated using the criteria makespan, total tardiness, flow time and idle time. The efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper is a continuation of the research conducted in [1], where the survey of predictive and reactive scheduling methods was done only for small-size scheduling problems.

Effective sleep/wake schedules for space operations must balance severe time constraints with allocating sufficient time for sleep in order to sustain high levels of neurobehavioral performance. Developing such schedules requires knowledge about the relationship between scheduled "time in bed" (TIB) and actual physiological sleep obtained. A ground-based laboratory study in N=93 healthy adult subjects was conducted to investigate physiological sleep obtained in a range of restricted sleep schedules. Eighteen different conditions with restricted nocturnal anchor sleep, with and without diurnal naps, were examined in a response surface mapping paradigm. Sleep efficiency was found to be a function of total TIB per 24 h regardless of how the sleep was divided among nocturnal anchor sleep and diurnal nap sleep periods. The amounts of sleep stages 1+2 and REM showed more complex relationships with the durations of the anchor and nap sleep periods, while slow-wave sleep was essentially preserved among the different conditions of the experiment. The results of the study indicated that when sleep was chronically restricted, sleep duration was largely unaffected by whether the sleep was placed nocturnally or split between nocturnal anchor sleep periods and daytime naps. Having thus assessed that split-sleep schedules are feasible in terms of obtaining physiological sleep, further research will reveal whether these schedules and the associated variations in the distribution of sleep stages may be advantageous in mitigating neurobehavioral performance impairment in the face of limited time for sleep.

The specific nature of yard work requires particularly careful treatment of scheduling and budgeting in production planning. The article presents a method for analysing the assembly sequence that takes into account the duration of individual activities and the demand for resources. The critical path method and resource budgeting were used. The assembly was modelled using acyclic graphs. It has been shown that assembly sequences can have very different feasible budget regions. The proposed model is applied to the assembly processes of large-scale welded structures, including ship hulls. The presented computational examples have a simulation character; they show the usefulness of the model and the possibility of using it in a variety of analyses.
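The forward pass of the critical path method used in the article can be sketched as follows; the activity names and durations are invented, not taken from the paper:

```python
# Sketch: critical-path analysis of an assembly sequence modelled as an
# acyclic graph. A forward pass over a topological order yields earliest
# finish times; the project length is the latest finish.

def critical_path(durations, preds):
    """durations: activity -> duration; preds: activity -> predecessor list."""
    order, seen = [], set()
    def visit(a):                      # depth-first topological ordering
        if a in seen:
            return
        seen.add(a)
        for p in preds.get(a, []):
            visit(p)
        order.append(a)
    for a in durations:
        visit(a)
    finish = {}
    for a in order:                    # earliest finish = max pred finish + dur
        start = max((finish[p] for p in preds.get(a, [])), default=0)
        finish[a] = start + durations[a]
    return max(finish.values()), finish

durations = {"cut": 3, "weld": 5, "paint": 2, "inspect": 1}
preds = {"weld": ["cut"], "paint": ["weld"], "inspect": ["paint"]}
length, finish = critical_path(durations, preds)
```

Resource budgeting would then attach a resource profile to each activity and accumulate demand over the computed start/finish windows.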

We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how should the capacity vector of a dynamic network be changed as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow? After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm which uses two maximum dynamic flow algorithms is then proposed to solve the problem.
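The proposed algorithm builds on maximum-flow computations. As a self-contained illustration (for static rather than dynamic flows, on an invented network), here is a standard Edmonds-Karp max-flow sketch of the kind such methods call as a subroutine:

```python
# Illustrative building block only: the paper concerns *dynamic* flows,
# but its algorithm invokes maximum-flow routines. Below is a standard
# static Edmonds-Karp sketch (BFS augmenting paths on residual capacities).
from collections import deque

def max_flow(cap, s, t):
    """cap: dict-of-dicts of residual capacities (mutated); returns flow value."""
    flow = 0
    while True:
        parent = {s: None}                 # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                    # no augmenting path left
        path, v = [], t                    # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:                  # push flow, update residuals
            cap[u][v] -= aug
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += aug
        flow += aug

cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
value = max_flow(cap, "s", "t")
```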

Transport of farm animals gives rise to concern about their welfare. Specific attention has been given to the duration of animal transport, and maximum journey durations are used in legislation that seeks to minimise any negative impact of transport on animal welfare. This paper reviews the relatively few scientific investigations into the effects of transport duration on animal welfare in cattle, sheep, horses, pigs and poultry. From the available literature, we attempt to distinguish between aspects which will impair welfare on journeys of any duration, such as those associated with loading, and those aspects that may be exacerbated by journey time. We identify four aspects of animal transport which have increasing impact on welfare as transport duration increases. These relate to (i) the physiological and clinical state of the animal before transport; and - during transport - to (ii) feeding...

Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality. Better implementation of staff preferences in the scheduling problem might therefore not only improve the work-life balance of doctors and nurses but also result in better patient care. This paper focuses on the optimization of medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize the violations of soft constraints, which are related to the quality of work, psychological well-being, and work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final value of the objective function optimized by the proposed algorithm has more than 18 times fewer soft-constraint violations than the initially generated random schedule that satisfied the hard constraints. Research Limitation/Implication: Even though the global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm is able to successfully generate schedules regarding hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and is able to reschedule effectively thanks to the local neighborhood search characteristics of simulated annealing.
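A minimal simulated-annealing sketch in the spirit of the described scheduler; the roster encoding, the single soft constraint, and all parameters are invented, and a real implementation would also enforce or repair hard constraints on each move:

```python
# Sketch: simulated annealing over a binary staff/day roster.
# Soft cost penalizes consecutive working days (a stand-in for the
# work-life-balance soft constraints described; all values invented).
import math, random

random.seed(1)
STAFF, DAYS = 4, 7

def soft_cost(roster):
    """Count pairs of consecutive working days per person."""
    cost = 0
    for person in roster:
        for d in range(1, DAYS):
            cost += person[d - 1] and person[d]
    return cost

def neighbour(roster):
    """Flip one person/day cell; a real scheduler would repair hard constraints."""
    r = [row[:] for row in roster]
    i, d = random.randrange(STAFF), random.randrange(DAYS)
    r[i][d] = 1 - r[i][d]
    return r

current = [[1] * DAYS for _ in range(STAFF)]   # start: everyone works every day
temp = 5.0
for _ in range(2000):
    cand = neighbour(current)
    delta = soft_cost(cand) - soft_cost(current)
    # Metropolis criterion: always accept improvements, sometimes accept worse
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        current = cand
    temp *= 0.995                              # geometric cooling
```

The geometric cooling schedule and acceptance rule are the generic textbook choices, not necessarily those of the paper.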

Schedule Tribes (STs) are Indian population groups that are explicitly recognized by the Constitution of India Order, 1950. The order lists 744 tribes across 22 states in its first schedule. In Andhra Pradesh, 33 types of Schedule Tribes live in 8 districts, making up 6.6% of the total population of the state. They have a rich heritage along with their innocent life style. As they live in hill areas and forests, they have some peculiar characteristics such as indications of primitive traits, a distinctive culture, shyness of contact with other communities, geographical isolation, and backwardness. For their development, the central and state governments have therefore been implementing different programmes and schemes since 1951. After the Ministry of Tribal Affairs was constituted in 1999, there has been more focus on the development of Schedule Tribes in Indian society, especially in Andhra Pradesh. Persisting problems such as low literacy and high drop-out rates, inadequate health services, lack of nutritious food, extreme poverty, and ineffective implementation of schemes keep them away from economic development. Hence, there should be more commitment by the central and state governments and local bodies to develop the Schedule Tribes in society. As literacy is only 37%, NGOs and other voluntary organizations have to play a key role in bringing awareness among the Schedule Tribes regarding programmes and schemes for their development. Awareness and participation of the Schedule Tribes in the implementation of policies lead to the prosperity of the ST community in the state as well as the country.

The operating system's role in a computer system is to manage the various resources. One of these resources is the Central Processing Unit. It is managed by a component of the operating system called the CPU scheduler. Schedulers are optimized for typical workloads expected to run on the platform. However, a single scheduler may not be appropriate for all workloads. That is, a scheduler may schedule a workload such that the completion time is minimized, but when another type of workload is run on the platform, scheduling and therefore completion time will not be optimal; a different scheduling algorithm, or a different set of parameters, may work better. Several approaches to solving this problem have been proposed. The objective of this survey is to summarize the approaches based on data mining, which are available in the literature. In addition to solutions that can be directly utilized for solving this problem, we are interested in data mining research in related areas that have potential for use in operat...

We present the design of a complete and practical scheduler for the 3GPP Long Term Evolution (LTE) downlink by integrating recent results on resource allocation, fast computational algorithms, and scheduling. Our scheduler has low computational complexity. We define the computational architecture and describe the exact computations that need to be done at each time step (1 millisecond). Our computational framework is very general and can be used to implement a wide variety of scheduling rules. For LTE, we provide quantitative performance results for our scheduler for full-buffer traffic, streaming video (with loose delay constraints), and live video (with tight delay constraints). Simulations are performed by selectively abstracting the PHY layer, accurately modeling the MAC layer, and following established network evaluation methods. The numerical results demonstrate that queue- and channel-aware QoS schedulers can and should be used in an LTE downlink to offer QoS to a diverse mix of traffic, including delay-sensitive flows. Through these results and via theoretical analysis, we illustrate the various design tradeoffs that need to be made in the selection of a specific queue- and channel-aware scheduling policy. Moreover, the numerical results show that in many scenarios strict prioritization across traffic classes is suboptimal.
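One simple member of the queue- and channel-aware family such schedulers implement is the max-weight rule: each 1 ms TTI, serve the user maximizing queue length times achievable rate. The user data below are invented, and this is a generic textbook rule rather than the paper's exact policy:

```python
# Sketch: max-weight scheduling decision for one TTI.
# Each user is scored by (bits waiting) x (achievable rate this TTI),
# so a deep queue or a good channel both raise a user's priority.

def max_weight_pick(queues, rates):
    """Return the index of the user with the largest queue*rate product."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

queues = [5000, 200, 8000]      # bits waiting per user (invented)
rates = [2.0, 10.0, 1.0]        # achievable Mbit/s this TTI (invented)
chosen = max_weight_pick(queues, rates)
```

Delay-sensitive variants replace queue length with head-of-line delay in the weight, which is one way to realize the QoS differentiation discussed above.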

We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
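The linear-quadratic model underlying this schedule optimization can be illustrated with the standard biologically effective dose (BED) formula; the fractionation numbers below are invented for illustration and are not taken from the paper:

```python
# Sketch: the standard LQ-model biologically effective dose,
# BED = n * d * (1 + d / (alpha/beta)),
# for n fractions of dose d and tissue-specific alpha/beta ratio.
# Parameter values are invented; patient-specific uncertainty in
# alpha/beta is exactly what the stochastic formulation addresses.

def bed(n, d, alpha_beta):
    """Biologically effective dose for n fractions of d Gy."""
    return n * d * (1.0 + d / alpha_beta)

# Same 60 Gy physical dose, conventional vs. hypofractionated delivery:
conventional = bed(n=30, d=2.0, alpha_beta=10.0)
hypo = bed(n=20, d=3.0, alpha_beta=10.0)
```

Comparing such BED values for tumor and organ-at-risk alpha/beta ratios is the basic trade-off a radiation scheduling optimizer navigates; under parameter uncertainty the constraints become probabilistic, as described above.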

Purpose: To quantitate the level of difficulty and determine the consistency of hemodynamic responses with various expiratory strain (ES) durations. Methods: Thirty-four healthy subjects performed the Valsalva maneuver (VM) with an ES duration of 10, 12, and 15 seconds in random order. The level of difficulty after each trial was rated 1 to 10, with 10 being the most difficult. Blood pressure and heart rate (HR) were recorded continuously and non-invasively. The parameters studied were the Valsalva ratio (VR), early phase II (IIE), late phase II (IIL), tachycardia latency (TL), bradycardia latency (BL), and overshoot latency (OV-L). Consistency of responses was calculated. Results: Difficulty increased significantly with increased ES duration: 5.1±0.1 (mean±SEM) at 10 seconds, 5.9±0.1 at 12 seconds, and 6.8±0.1 at 15 seconds (p<0.001). The phase IIE, TL, BL, OV-L, and VR responses did not differ statistically with increasing ES durations, and there were no differences in variability. The phase IIL response increased significantly with increasing ES duration. Phase IIL was poorly delineated in 14 of 102 trials with the 10-second ES duration. Conclusions: An ES duration of 10 seconds created a low level of difficulty in healthy individuals. This strain duration produced a consistent hemodynamic response for all parameters tested except the IIL phase. The absence of the IIL phase with a 10-second ES should not be interpreted as an indicator of sympathetic vasoconstrictor failure.

This paper analyzes the wage effects of unemployment duration and frequency for different regional labor market situations in The Netherlands using a simultaneous equations approach. The main finding is that unemployment duration has a significant negative effect and the frequency of unemployment a

In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the `accentedness' of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172

In this paper, we study a model of joint scheduling and subcontracting decisions, in which jobs (orders) can either be processed on parallel machines at the manufacturer in-house or be subcontracted to a subcontractor. The manufacturer needs to determine which jobs should be produced in-house and which jobs should be subcontracted. Furthermore, it needs to determine a production schedule for the jobs produced in-house. We discuss five classical scheduling objectives as production costs. For each problem with a different objective function, we give optimality conditions and propose dynamic programming algorithms.
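The flavour of dynamic program involved can be sketched for a simplified single-machine case (the paper treats parallel machines and five objectives): each job is either processed in-house, contributing to total completion time, or subcontracted at a fixed cost. All job data are invented.

```python
# Sketch: DP for "schedule in-house or subcontract" on one machine,
# in-house objective = total completion time (SPT order is optimal).
# In an SPT sequence a job's processing time is paid once for itself and
# once for every *longer* in-house job, so we scan jobs longest-first and
# let the DP state be the number of in-house jobs committed so far.

def schedule_or_subcontract(jobs):
    """jobs: list of (processing_time, subcontract_cost); returns min cost."""
    jobs = sorted(jobs, reverse=True)          # longest processing time first
    dp = {0: 0.0}                              # in-house count -> min cost
    for p, s in jobs:
        nxt = {}
        for k, c in dp.items():
            # Option 1: subcontract this job at fixed cost s.
            nxt[k] = min(nxt.get(k, float("inf")), c + s)
            # Option 2: in-house; p is counted for this job and the k longer ones.
            nxt[k + 1] = min(nxt.get(k + 1, float("inf")), c + p * (k + 1))
        dp = nxt
    return min(dp.values())

cost = schedule_or_subcontract([(4, 9), (1, 5), (2, 6)])
```

Here producing all three jobs in-house (SPT completions 1 + 3 + 7 = 11) beats every mixed or all-subcontract option.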

WiDom is a wireless prioritized medium access control (MAC) protocol which offers a very large number of priority levels. Hence, it brings the potential for employing non-preemptive static-priority scheduling and schedulability analysis for a wireless channel assuming that the overhead of WiDom is modeled properly. One schedulability analysis for WiDom has already been proposed but recent research has created a new version of WiDom with lower overhead (we call it: WiDo...
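The kind of schedulability analysis referred to can be sketched with the classic fixed-point response-time recurrence for static-priority tasks; the blocking term below merely stands in for protocol overhead such as WiDom's (the real analysis models that overhead precisely), and the task set is invented:

```python
# Sketch: response-time analysis for static-priority tasks.
# R_i = C_i + B + sum over higher-priority j of ceil(R_i / T_j) * C_j,
# iterated to a fixed point; B stands in for blocking/protocol overhead.

def response_time(tasks, i, blocking=0.0):
    """tasks: list of (C, T), sorted highest priority first; analyse task i."""
    C_i = tasks[i][0]
    r = C_i + blocking
    while True:
        # ceil(r / T) via negated floor division (works for int and float r)
        interference = sum(-(-r // T) * C for C, T in tasks[:i])
        nxt = C_i + blocking + interference
        if nxt == r:
            return r            # fixed point: worst-case response time
        if nxt > tasks[i][1]:
            return None         # exceeds period (= implicit deadline)
        r = nxt

tasks = [(1, 4), (2, 6), (1, 12)]       # (C, T), highest priority first
r2 = response_time(tasks, 2, blocking=0.5)
```

A non-preemptive analysis like WiDom's would derive the blocking term from the longest lower-priority message already on the channel.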

Event-B is a refinement-based formal method that has been shown to be useful in developing concurrent and distributed programs. Large models can be decomposed into sub-models that can be refined semi-independently and executed in parallel. In this paper, we show how to introduce explicit control flow for the concurrent sub-models in the form of event schedules. We explore how schedules can be designed so that their application results in a correctness-preserving refinement step. For practical application, two patterns for schedule introduction are provided, together with their associated proof obligations. We demonstrate our method by applying it on the dining philosophers problem.

This paper deals with the problem of simultaneously scheduling machines and a number of autonomous mobile robots in a flexible manufacturing system (FMS). Besides their capability of transporting materials between machines, the considered mobile robots differ from other material handling devices in their advanced ability to perform tasks at machines by using their manipulation arms. The mobile robots thus have to be scheduled in relation to the scheduling of machines so as to increase the efficiency of the overall system. The performance criterion is to minimize the time required to complete all...

To help in planning Fenton Hill experimental operations in concert with preparations for the Long-Term Flow Test (LTFT) next summer, the following schedule is proposed. This schedule fits some of the realities of the next few months, including the Laboratory closure during the Holidays, the seismic monitoring tests in Roswell, and the difficulties of operating during the winter months. Whenever possible, cyclic pumping operations during the colder months will be scheduled so that the pump will be on during the late evening and early morning hours to prevent freezeup.

Task scheduling is the most challenging problem in parallel computing: inappropriate scheduling will reduce, or even cancel out, the benefit of parallelization. The genetic algorithm (GA) has been successfully applied to solve the scheduling problem. Fitness evaluation is the most CPU-time-consuming GA operation, and it therefore governs GA performance. The proposed synchronous master-slave algorithm outperforms the sequential algorithm for complex problems with a high number of generations.
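The master-slave pattern described can be sketched as follows, with a thread pool standing in for the slave processors that evaluate fitness in parallel while the master performs selection and mutation; the task-to-machine encoding and all GA parameters are invented:

```python
# Sketch: synchronous master-slave GA for task-to-machine scheduling.
# The master ranks and breeds; fitness (makespan) evaluation is farmed
# out via a thread pool, the step a master-slave GA parallelizes.
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(7)
TASKS = [4, 7, 2, 5, 3, 6]          # task durations (invented)
MACHINES = 3

def makespan(assign):
    """Fitness: load of the most loaded machine (lower is better)."""
    loads = [0] * MACHINES
    for dur, m in zip(TASKS, assign):
        loads[m] += dur
    return max(loads)

def mutate(assign):
    """Reassign one random task to a random machine."""
    child = assign[:]
    child[random.randrange(len(child))] = random.randrange(MACHINES)
    return child

pop = [[random.randrange(MACHINES) for _ in TASKS] for _ in range(20)]
with ThreadPoolExecutor(max_workers=4) as pool:      # the "slaves"
    for _ in range(30):
        fitness = list(pool.map(makespan, pop))      # parallel evaluation
        ranked = [a for _, a in sorted(zip(fitness, pop))]
        # elitist survival of the best half, plus mutated offspring
        pop = ranked[:10] + [mutate(random.choice(ranked[:10])) for _ in range(10)]
best = min(makespan(a) for a in pop)
```

Here the 27 units of work admit a perfectly balanced makespan of 9, which the elitist search approaches quickly; with heavier fitness functions the parallel evaluation step dominates, which is the scenario the paper targets.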

Risk control of the project schedule has long been a focal problem in both academia and practice. Much research on schedule risk control has been carried out, and many results have appeared in recent decades. This paper reviews the literature on techniques for controlling schedule uncertainty. A summary analysis of those achievements, such as CPM, PERT, MC, and BBN, is presented, and on its basis the advantages and disadvantages of existing research are discussed in depth, so that researchers can continue to refine their work.

Planning research in Artificial Intelligence (AI) has often focused on problems where there are cascading levels of action choice and complex interactions between actions. In contrast, scheduling research has focused on much larger problems where there is little action choice, but the resulting ordering problem is hard. In this paper, we give an overview of AI planning and scheduling techniques, focusing on their similarities, differences, and limitations. We also argue that many difficult practical problems lie somewhere between planning and scheduling, and that neither area has the right set of tools for solving these vexing problems.

Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.

In the initial link of a complex schedule, one discriminative stimulus was presented and lever pressing produced tokens on fixed-ratio schedules. In the terminal link, signalled by a second discriminative stimulus, deposits of the tokens produced food. With two rats, the terminal link was presented after each sixth component schedule of token reinforcement was completed. With the other two rats, the terminal link was presented following the first component schedule completed after a fixed interval. During the terminal link, each token deposit initially produced food. The schedule of food presentation was subsequently increased such that an increasing number of token deposits in the terminal link was required for each food presentation. Rates of lever pressing in the initial link were inversely related to the schedule of food presentation in the terminal link. These results are similar to those of experiments that have varied schedules of food presentation in chained schedules. Rates and patterns of responding controlled throughout the initial link were more similar to those ordinarily controlled by second-order brief-stimulus schedules than to those controlled by comparable extended chained schedules. PMID:16811869

Reinforcement schedules are considered in relation to applied behavior analysis by examining several recent laboratory experiments with humans and other animals. The experiments are drawn from three areas of contemporary schedule research: behavioral history effects on schedule performance, the role of instructions in schedule performance of humans, and dynamic schedules of reinforcement. All of the experiments are discussed in relation to the role of behavioral history in current schedule pe...

Through theoretical analysis, we show how the aligning pulse duration affects the degree and the time-rate slope of nitrogen field-free alignment at a fixed pulse intensity. It is found that both the degree and the slope first increase, then saturate, and finally decrease with increasing pump duration. The optimal durations for the maximum degree and the maximum slope of the alignment are found to be different. Additionally, they are found to depend mainly on the molecular rotational period, and are affected by the temperature and the aligning pump intensities. The mechanism of molecular alignment is also discussed.

The purpose of the study was to determine how long humans could sustain the discharge of single motor units during a voluntary contraction. The discharge of motor units in first dorsal interosseus of subjects (27.8 ± 8.1 years old) was recorded for as long as possible. The task was terminated when the isolated motor unit stopped discharging action potentials, despite the ability of the individual to sustain the abduction force. Twenty-three single motor units were recorded. Task duration was 21.4 ± 17.8 min. When analysed across discharge duration, mean discharge rate (10.6 ± 1.8 pulses s⁻¹) and mean abduction force (5.5 ± 2.8% maximum) did not change significantly (discharge rate, P = 0.119; and abduction force, P = 0.235). In contrast, the coefficient of variation for interspike interval during the initial 30 s of the task was 22.2 ± 6.0% and increased to 31.9 ± 7.0% during the final 30 s (P < 0.001). All motor units were recruited again after 60 s of rest. Although subjects were able to sustain a relatively constant discharge rate, the cessation of discharge was preceded by a gradual increase in discharge variability. The findings also showed that the maximal duration of human motor unit discharge exceeds that previously reported for the discharge elicited in motor neurons by intracellular current injection in vitro.

Decades ago two classes of gamma-ray bursts were identified and delineated as having durations shorter and longer than about 2 s. Subsequently indications also supported the existence of a third class. Using maximum likelihood estimation we analyze the duration distribution of 888 Swift BAT bursts observed before October 2015. Fitting three log-normal functions to the duration distribution of the bursts provides a better fit than two log-normal distributions, with 99.9999% significance. Similarly to earlier results, we found that a fourth component is not needed. The relative frequencies of the distribution of the groups are 8% for short, 35% for intermediate and 57% for long bursts which correspond to our previous results. We analyse the redshift distribution for the 269 GRBs of the 888 GRBs with known redshift. We find no evidence for the previously suggested difference between the long and intermediate GRBs' redshift distribution. The observed redshift distribution of the 20 short GRBs differs with high si...
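The mixture-fitting principle used above can be sketched as maximum-likelihood estimation via EM on log-durations; for brevity this illustration fits two components to invented data, whereas the study fits three log-normal components to the Swift BAT T90 distribution:

```python
# Sketch: EM maximum-likelihood fit of a two-component log-normal mixture.
# "data" are invented log10-durations with a short and a long population;
# the paper fits three components to 888 observed burst durations.
import math, random

random.seed(0)
data = ([random.gauss(-0.5, 0.3) for _ in range(100)] +    # "short" bursts
        [random.gauss(1.5, 0.4) for _ in range(200)])      # "long" bursts

def pdf(x, mu, sig):
    """Normal density (log-normal in the original duration variable)."""
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

w, mu, sig = [0.5, 0.5], [-1.0, 2.0], [1.0, 1.0]           # initial guesses
for _ in range(50):                                        # EM iterations
    # E-step: responsibility of each component for each point
    resp = []
    for x in data:
        p = [w[k] * pdf(x, mu[k], sig[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: re-estimate weights, means, and standard deviations
    for k in range(2):
        n_k = sum(r[k] for r in resp)
        w[k] = n_k / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / n_k
        sig[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                               for r, x in zip(resp, data)) / n_k)
```

Model selection between two and three components would then compare maximized likelihoods (e.g. via a likelihood-ratio or information-criterion test), which is how the significance quoted above is assessed.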

Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam

... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...

The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously

We discuss a special class of generalized divergence measures defined by the use of generator functions. Any divergence measure in the class separates into the difference between a cross entropy and a diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
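In generic notation (ours, not necessarily the paper's), the cross/diagonal decomposition reads:

```latex
% Divergence as cross entropy minus diagonal entropy (notation ours):
D(p, q) = C(p, q) - H(p), \qquad H(p) = C(p, p).
% In the Boltzmann-Gibbs-Shannon case this recovers the KL divergence:
C(p, q) = -\sum_i p_i \log q_i, \qquad
H(p) = -\sum_i p_i \log p_i, \qquad
D(p \,\|\, q) = \sum_i p_i \log \frac{p_i}{q_i} \ \ge\ 0 .
```

Replacing the logarithm by a deformed (e.g. Tsallis) logarithm as generator yields the other members of the class the abstract refers to.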

The European research project SEMIAH aims at designing a scalable infrastructure for residential demand response. This paper presents the progress towards a centralized load scheduling algorithm for controlling home appliances, taking power grid constraints and the satisfaction of consumers into account.

We consider an integrated job scheduling and network routing problem which appears in Grid Computing and production planning. The problem is to schedule a number of jobs on a finite set of machines such that the overall profit of the executed jobs is maximized. Each job demands a number of resources which must be sent to the executing machine through a network with limited capacity. A job cannot start before all of its resources have arrived at the machine. The scheduling problem is formulated as a Mixed Integer Program (MIP) and proved to be NP-hard. An exact solution approach using Dantzig... instances with 1,000 jobs and 1,000 machines covering 24 hours of scheduling activity on a Grid network. The algorithm is also compared to simulations of a real-life Grid, and the results show that the solution quality significantly increases when solving the problem to optimality. The promising results...

National Aeronautics and Space Administration — Scheduling the daily activities of the crew on a human space mission is currently a cumbersome job performed by a large team of operations experts on the ground....

We assess the impact of the introduction of schedules of non-economic damages (i.e. tiered caps systems) on the behavior of insurers operating in the medical liability market for hospitals, while controlling for the performance of the judicial system, measured as court backlog. Using a difference-in-differences strategy on Italian data, we find that the introduction of schedules increases the presence of insurers (i.e. medical liability market attractiveness) only in inefficient judicial districts. In the same way, court inefficiency is attractive to insurers for average values of schedules penetration of the market, with an increasing positive impact of inefficiency as the territorial coverage of schedules increases. Finally, no significant impact is registered on paid premiums. Our analysis sheds light on a complex set of elements affecting the decisions of insurers in malpractice markets. The analysis...

Scheduling in the job shop is important for the efficient utilization of machines in the manufacturing industry. A number of algorithms are available for scheduling jobs, depending on the machine tools, indirect consumables, and the jobs to be processed. In this paper a case study is presented for the scheduling of jobs processed on the available machines. Through a time and motion study, setup time and operation time were measured as the total processing time for a variety of products with different manufacturing processes. Based on due dates, different priority levels are assigned to the jobs, and the jobs are scheduled on the basis of priority. In view of the measured processing times, the processing times of some new jobs are estimated, and an algorithm for the efficient utilization of the available machines is proposed and validated.
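A due-date-driven priority rule of the kind described can be sketched as an earliest-due-date ordering over measured setup and operation times; the job names and all numbers are invented:

```python
# Sketch: priority scheduling by due date on a single machine.
# Jobs with the earliest due dates get the highest priority; processing
# time is the measured setup + operation time from the time study.

def priority_schedule(jobs):
    """jobs: name -> (setup, operation, due); returns (name, finish, on_time)."""
    order = sorted(jobs, key=lambda j: jobs[j][2])   # earliest due date first
    t, plan = 0, []
    for name in order:
        setup, op, due = jobs[name]
        t += setup + op                              # sequential processing
        plan.append((name, t, t <= due))
    return plan

plan = priority_schedule({"gear": (1, 4, 6),
                          "shaft": (2, 3, 12),
                          "flange": (1, 2, 7)})
```

The on-time flag makes the due-date conflicts visible, which is the point at which a shop would re-prioritize or add capacity.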

An efficient feedback scheduling scheme based on the proposed Feed-Forward Neural Network (FFNN) is employed to improve the overall control performance while minimizing the overhead of feedback scheduling; the scheme is derived from optimal solutions obtained offline by mathematical optimization methods. The FFNN is employed to adapt online the sampling periods of concurrent control tasks with respect to changes in computing resource availability. The proposed intelligent scheduler is examined with different optimization algorithms. An inverted pendulum cost function is used in these experiments. The simulation of three inverted pendulums as an intelligent real-time system (RTS) is then described in detail. Numerical simulation results demonstrate that the proposed scheme can reduce the computational overhead significantly while delivering almost the same overall control performance as optimal feedback scheduling.

National Aeronautics and Space Administration — The allocation and scheduling of limited communication assets to an increasing number of satellites and other spacecraft remains a complex and challenging problem....

Project Physics is taught at Gibault High School (Waterloo, IL) using a modular schedule and learning activity packets. A description of the course, instructional strategies used, and the learning activity packets is provided. (JN)

This document contains the CY2000 schedules for the routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. Each section includes sampling locations, sample types, and analyses to be performed.

Over a period of about four months, the IVS Coordinating Center (IVSCC) each year composes the Master Schedule for the IVS observing program of the next calendar year. The process begins in early July when the IVSCC contacts the IVS Network Stations to request information about available station time as well as holiday and maintenance schedules for the upcoming year. Going through various planning stages and a review process with the IVS Observing Program Committee (OPC), the final version of the Master Schedule is posted by early November. We describe the general steps of the composition and illustrate them with the example of the planning for the Master Schedule of the 2010 observing year.

...) Inservice training plans for enforcement personnel will be developed within 18 months of plan approval. (f... Inspection Scheduling System will be fully implemented and in operation March 31, 1975....

The Robert C. Byrd Green Bank Telescope's (GBT) Dynamic Scheduling System (DSS), in production use since September 2009, was designed to maximize observing efficiency while maintaining the GBT's flexibility, improving data quality, and minimizing any undue adversity for the observers. Using observing criteria, observer availability and qualifications, three-dimensional weather forecasts, and telescope state, the DSS software is capable of optimally scheduling observers 24 to 48 hours in advance on a telescope having a wide range of capabilities in a geographical location with variable weather patterns. Recent improvements for the GBT include expanded frequency coverage (0.390-90 GHz), proper treatment of fully sampled array receivers, increasingly diverse observing criteria, the ability to account for atmospheric instability from clouds, and new tools for scheduling staff to control and interact with generated schedules and the underlying database.

The functioning of the Deep Space Network Operations Scheduling, Jet Propulsion Laboratory, CA is reviewed. The primary objectives of the Operations Scheduling are: to schedule the worldwide global allocation of ground communications, tracking facilities, and equipment; and to provide deep space telecommunications for command, tracking, telemetry, and control in support of flight mission operations and tests. Elements of the earth set are Deep Space Stations (DSS) which provide the telecommunications link between the earth and spacecraft; NASA Communications Network; Network Data Processing Area; Network Operations Control Area which provides operational direction to the DSS; Mission Control and Computing systems; and Mission Support areas which provide flight control of the spacecraft. Elements of the space set include mission priorities and requirements which determine the spacecraft queue for allocating network resources. Scheduling is discussed in terms of long-range (3 years), mid-range (8 weeks), and short-range (2 weeks).

This paper considers parallel machine scheduling with special jobs. Normal jobs can be processed on any of the parallel machines, while special jobs can only be processed on one machine. The problem is analyzed for various manufacturing conditions and service requirements. The off-line scheduling problem is transformed into a classical parallel machine scheduling problem. The on-line scheduling uses the FCFS (first come, first served), SWSC (special window for special customers), and FFFS (first fit, first served) algorithms to satisfy the various requirements. Furthermore, this paper proves that FCFS has a competitive ratio of m, where m is the number of parallel machines, and that this bound is asymptotically tight; SWSC has a competitive ratio of 2 and FFFS has a competitive ratio of , and these bounds are tight.
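As a minimal sketch of the online setting described above, FCFS can be read as greedy list scheduling: each arriving job goes to the earliest-free machine it is eligible for. The function name, the `(duration, is_special)` job encoding, and the convention that special jobs run on machine 0 are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of FCFS for an assumed model: m identical parallel
# machines, where "special" jobs may only run on one machine
# (machine 0 here, by convention).
def fcfs_schedule(jobs, m):
    """jobs: list of (duration, is_special) in arrival order.

    Assigns each job greedily to the earliest-free eligible machine
    and returns the resulting makespan.
    """
    free_at = [0.0] * m  # time at which each machine becomes idle
    for duration, is_special in jobs:
        if is_special:
            k = 0  # special jobs are restricted to a single machine
        else:
            k = min(range(m), key=lambda i: free_at[i])
        free_at[k] += duration
    return max(free_at)

# Example: on two machines, the special jobs pile up on machine 0.
print(fcfs_schedule([(3, False), (2, True), (4, False), (1, True)], 2))  # -> 6.0
```

The competitive ratio of m arises because, in the worst case, all the load can be forced onto the single machine that accepts special jobs.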

...traditionally resolved by consulting an STM-level contention manager. Consequently, the contention managers of these "conventional" TM implementations suffer from a lack of precision and often fail to ensure reasonable performance in high-contention workloads. Recently, scheduling-based TM contention management ... has been proposed for increasing TM efficiency under high contention [2, 5, 19]. However, only user-level schedulers have been considered. In this work, we propose, implement and evaluate several novel kernel-level scheduling support mechanisms for TM contention management. We also investigate ... different strategies for efficient communication between the kernel and the user-level TM library. To the best of our knowledge, our work is the first to investigate kernel-level support for TM contention management. We have introduced kernel-level TM scheduling support into both the Linux and Solaris...

The optimization of space operations is examined in the light of optimization heuristics for computer algorithms and iterative search techniques. Specific attention is given to the search concepts known collectively as intelligent perturbation algorithms (IPAs) and their application to crew/resource allocation problems. IPAs iteratively examine successive schedules which become progressively more efficient, and the characteristics of good perturbation operators are listed. IPAs can be applied to aerospace systems to efficiently utilize crews, payloads, and resources in the context of systems such as Space-Station scheduling. A program is presented called the MFIVE Space Station Scheduling Worksheet which generates task assignments and resource usage structures. The IPAs can be used to develop flexible manifesting and scheduling for the Industrial Space Facility.
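The iterative-improvement idea behind IPAs can be illustrated with a small sketch: repeatedly apply a perturbation operator to a candidate schedule and keep the perturbed schedule only if it does not worsen the objective. The swap operator, the greedy acceptance rule, and the list-scheduling makespan model below are illustrative assumptions, not the MFIVE implementation.

```python
import random

def makespan(order, durations, m):
    """Greedy list-scheduling makespan of task `order` on m machines."""
    free_at = [0.0] * m
    for task in order:
        k = min(range(m), key=lambda i: free_at[i])
        free_at[k] += durations[task]
    return max(free_at)

def perturb_search(durations, m, iters=1000, seed=0):
    """Iteratively perturb a task ordering, keeping improving swaps."""
    rng = random.Random(seed)
    order = list(range(len(durations)))
    best = makespan(order, durations, m)
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)  # swap perturbation
        order[i], order[j] = order[j], order[i]
        cost = makespan(order, durations, m)
        if cost <= best:
            best = cost  # keep the non-worsening perturbation
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return best

# Example: five tasks on two machines; successive perturbations
# drive the makespan down from the initial ordering's value.
print(perturb_search([5, 3, 4, 2, 6], 2))
```

Real IPA operators are of course richer than a random swap; the point of the sketch is only the "perturb, evaluate, keep if better" loop that makes successive schedules progressively more efficient.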

Cloud Computing offers avant-garde services that are too attractive for any cloud user to ignore. With its growing application and popularization, IT companies are rapidly deploying distributed data centers globally, posing numerous challenges for the scheduling of resources under different administrative domains. This perspective brings out certain vital factors for efficient scheduling of resources spanning a wide genre of characteristics, diverse service-level agreements, and user-contingent elasticity. In this paper, a comprehensive survey of research related to various aspects of cloud resource scheduling is provided. A comparative analysis of various resource scheduling techniques is also presented, focusing on key performance parameters such as energy efficiency, virtual machine allocation and migration, cost-effectiveness, and service-level agreements.

The present study had 2 main objectives: (1) examine the effect of Parkinson's disease (PD) on vowel and consonant duration in French read speech and (2) investigate whether the durational contrasts of consonants and vowels are maintained or compromised. The data indicated that consonant durations were shortened in Parkinsonian speech (PS) compared to control speech (CS). However, this shortening was consonant dependent: unvoiced occlusives and fricatives were significantly shortened compared to other consonant categories. All vowels were slightly longer in PS than in CS; however, the observed differences were below the level of significance. Despite significant shortening of some consonant categories, the general pattern of intrinsic duration was maintained in PS. There was slightly less agreement for vowels with the normal contrast of intrinsic durations, possibly because vowel durational contrasts are more sensitive to PD disorders. Most PD patients tended to maintain the intrinsic duration contrasts of both vowels and consonants, suggesting that low-level articulatory constraints operate in a similar way and with the same weight in PS and CS. Copyright 2009 S. Karger AG, Basel.

Distributed systems require effective mechanisms to manage the reliable provisioning of computational resources from different and distributed providers. Moreover, the dynamic environment that affects the behaviour of such systems and the complexity of these dynamics demand autonomous capabilities to ensure the behaviour of distributed scheduling platforms and to achieve business and user objectives. In this paper we propose a self-adaptive distributed scheduling platf...

This Springer Brief covers emerging maritime wideband communication networks and how they facilitate applications such as maritime distress, urgency, safety and general communications. It provides valuable insight on the data transmission scheduling and protocol design for the maritime wideband network. This brief begins with an introduction to maritime wideband communication networks including the architecture, framework, operations and a comprehensive survey on current developments. The second part of the brief presents the resource allocation and scheduling for video packet transmission wit

The IVS scheduled a special astrometric VLBI session for the International Year of Astronomy 2009 (IYA09) commemorating 400 years of optical astronomy and 40 years of VLBI. The IYA09 session is the most ambitious geodetic session to date in terms of network size, number of sources, and number of observations. We describe the process of designing, coordinating, scheduling, pre-session station checkout, correlating, and analyzing this session.

Grid computing is concerned with coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations. Efficient scheduling of complex applications in a grid environment poses several challenges due to its high heterogeneity, dynamic behavior, and space-shared utilization. The objectives of scheduling algorithms are to increase system throughput and efficiency and to reduce task completion time. The main focus of this paper is to highlight the merits of resource and task selection techniques based on certain heuristics.
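One widely known heuristic of this kind is Min-Min, which repeatedly schedules the task with the smallest best-case completion time. The sketch below is a generic illustration under an assumed ETC (expected time to compute) matrix, not the specific heuristics discussed in the paper.

```python
# Hedged sketch of the Min-Min task/resource selection heuristic.
# etc[t][r] = assumed expected execution time of task t on resource r.
def min_min(etc):
    n_tasks, n_res = len(etc), len(etc[0])
    ready = [0.0] * n_res          # when each resource becomes free
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # For each unscheduled task, find its earliest-completing resource.
        best = {
            t: min(range(n_res), key=lambda r: ready[r] + etc[t][r])
            for t in unscheduled
        }
        # Schedule the task whose best completion time is smallest.
        t = min(unscheduled, key=lambda t: ready[best[t]] + etc[t][best[t]])
        r = best[t]
        ready[r] += etc[t][r]
        assignment[t] = r
        unscheduled.remove(t)
    return assignment, max(ready)

# Example: three tasks, two resources; task 2 runs much faster on
# resource 1, so Min-Min schedules it there first.
assignment, ms = min_min([[3, 5], [2, 4], [6, 1]])
print(assignment, ms)  # -> {2: 1, 1: 0, 0: 0} 5.0
```

Heuristics like Min-Min trade optimality for low overhead, which suits the high heterogeneity and dynamic behavior of grid environments noted above.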

This document presents the revised data taking schedule of NA61 with ion beams. The revision takes into account limitations due to the new LHC schedule as well as final results concerning the physics performance with secondary ion beams. It is proposed to take data with primary Ar and Xe beams in 2012 and 2014, respectively, and to test and use for physics a secondary B beam from primary Pb beam fragmentation in 2010, 2011 and 2013.

The Advanced Reactors Transition (ART) Resource Loaded Schedule (RLS) provides a cost and schedule baseline for managing the project elements within the ART Program. The Fast Flux Test Facility (FFTF) activities are delineated through the end of FY 2000, assuming continued standby. The Nuclear Energy (NE) Legacies and Plutonium Recycle Test Reactor (PRTR) activities are delineated through the end of the deactivation process. This revision reflects the 19 Oct 1999 baseline.