Sample records for time series models from the National Library of Energy Beta (NLEBeta)

Note: This page contains sample records for the topic "time series models" from the National Library of Energy Beta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.

This paper presents the Bayesian analysis of a general multivariate exponential smoothing model that allows us to forecast time series jointly, subject to correlated random disturbances. The general multivariate model, which can be formulated as a seemingly unrelated regression model, includes the previously studied homogeneous multivariate Holt-Winters model as a special case when all of the univariate series share a common structure. MCMC simulation techniques are required in order to approach the non-analytically tractable posterior distribution of the model parameters. The predictive distribution is then estimated using Monte Carlo integration. A Bayesian model selection criterion is introduced into the forecasting scheme for selecting the most adequate multivariate model for describing the behaviour of the time series under study. The forecasting performance of this procedure is tested using some real examples.

To bridge the gap between a physical Langevin equation and a stochastic equation used in time-series analysis, and to clarify the physical foundations of the latter, the time-series model is derived from the Langevin equation with the aid of two manipulations: elimination of irrelevant variables and projection of state variables onto a space spanned by observed quantities. The order of the two manipulations is shown to be important for arriving at an equation called the Kalman filter in control theory. All the results are summarized in a concise schematic diagram which relates various models and equations established so far in different fields.
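The Kalman filter that this derivation arrives at can be illustrated with a minimal one-dimensional sketch. The random-walk state model and the noise variances below are illustrative assumptions for the sketch, not the paper's derivation:

```python
def kalman_1d(observations, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter for an assumed random-walk state model.

    State model (illustrative assumption):
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (process noise)
        z_t = x_t + v_t,      v_t ~ N(0, r)   (observation noise)
    Returns the sequence of filtered state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        # Predict: under a random walk the mean carries over, variance grows
        p = p + q
        # Update: blend prediction and observation via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

With a noisy measurement of a constant level, the estimates settle near the true level as the gain balances prediction against observation.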

This paper presents a novel clustering model for mining patterns from imprecise electric load time series. The model consists of three components. First, it contains a process that deals with representation an...

Interval-valued time series are interval-valued data that are collected in a chronological sequence over time. This paper introduces three approaches to forecasting interval-valued time series. The first two approaches are based on multilayer perceptron (MLP) neural networks and Holt's exponential smoothing methods, respectively. In Holt's method for interval-valued time series, the smoothing parameters are estimated by using techniques for non-linear optimization problems with bound constraints. The third approach is based on a hybrid methodology that combines the MLP and Holt models. The practicality of the methods is demonstrated through simulation studies and applications using real interval-valued stock market time series.
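For reference, the scalar form of Holt's method that the interval-valued version builds on can be sketched as follows. Applying it to interval data (e.g. separately to the lower and upper bounds) and choosing the smoothing constants by constrained optimization are the paper's contributions and are not reproduced here; the smoothing constants below are illustrative:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear (trend-corrected) exponential smoothing, scalar form.

    alpha smooths the level, beta smooths the trend; both values here
    are illustrative assumptions, not fitted parameters.
    """
    # Initialize level and trend from the first two observations
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    # h-step-ahead forecast extrapolates the final level and trend
    return level + horizon * trend
```

On a perfectly linear series the level tracks the data exactly and the one-step forecast continues the line.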

Publisher Summary This chapter provides an overview of time series. A time series is a set of observations of a variable made at different points of time and arranged in chronological order, each observation representing the value of the variable either at a given moment or during the interval of time between this observation and the preceding one. In general, the observations forming a time series are considered to be made at equidistant intervals of time. The factors affecting time series may be recurring or nonrecurring, evolutionary, periodic, or random. The method of moving averages consists in determining the average value for a certain number of terms of a time series and taking this average as the trend normal value for the middle of the period covered in the calculation of the average, that is, the period extent of the moving average.
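The method of moving averages described above can be sketched directly; the odd window is an assumption of the sketch to keep the centering trivial (an even window would need the usual double moving average):

```python
def centered_moving_average(series, window):
    """Trend estimation by the method of moving averages.

    Each average is assigned to the middle of the period it covers,
    as described in the text. This sketch assumes an odd window.
    """
    if window % 2 == 0:
        raise ValueError("this sketch assumes an odd window")
    half = window // 2
    trend = []
    # The first and last `half` points have no full window and are skipped
    for i in range(half, len(series) - half):
        segment = series[i - half : i + half + 1]
        trend.append(sum(segment) / window)
    return trend
```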

High-frequency (less than monthly) time-series data provide valuable information for designing an adequate yield policy for the organisation. However, it is not easy to extract this information from raw data; although the evolution of the series is usually induced by stable patterns of behaviour of the economic agents, these patterns are so complex that simple smoothing techniques or subjective forecasting cannot consider all underlying factors. In this paper, we discuss time-series models as a tool for carrying out a full and efficient analysis. The main ideas are illustrated with an application to Spanish daily electricity consumption.

...Inc., Clifton Park, NY, USA (arslan.basharat@kitware.com); Mubarak Shah, University of Central Florida, Orlando, FL, USA (shah@cs.ucf.edu). Abstract: We use concepts from chaos theory in order to model nonlinear dynamical systems that exhibit deterministic behavior. Observed time series from such a system can be em...

Time-series analysis is an important domain of machine learning and a plethora of methods have been developed for the task. This paper proposes a new representation of time series, which in contrast to existing approaches, decomposes a time-series dataset ... Keywords: Data mining, Time-series classification, Time-series factorization

This study reports a statistical analysis of the monthly sunspot number time series and observes non-homogeneity and asymmetry within it. Using the Mann-Kendall test, a linear trend is revealed. After identifying stationarity within the time series, we generate autoregressive (AR(p)) and autoregressive moving average (ARMA(p,q)) models. Based on minimization of the AIC, we find 3 and 1 to be the best values of p and q, respectively. In the next phase, an autoregressive neural network (AR-NN(3)) is generated by training a generalized feedforward neural network (GFNN). Assessing the model performances by means of Willmott's index of second order and the coefficient of determination, the performance of AR-NN(3) is identified to be better than that of AR(3) and ARMA(3,1).
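The AR(p) fitting and AIC-minimizing order selection used in this kind of study can be sketched for the pure autoregressive case (ARMA and neural-network models need iterative estimation beyond this sketch; the particular AIC form below is a common penalized-likelihood variant and an assumption, since the abstract does not state which one was used):

```python
import math

def _solve(a, b):
    """Solve a small linear system a x = b by Gaussian elimination."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))  # partial pivot
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def fit_ar(series, p):
    """Least-squares fit of x_t = sum_i phi_i x_{t-i} + e_t.

    Returns (coefficients, aic) with AIC = n*log(RSS/n) + 2*(p+1).
    """
    n = len(series) - p
    # Normal equations X'X phi = X'y for the lagged design matrix
    xtx = [[sum(series[t - i - 1] * series[t - j - 1] for t in range(p, len(series)))
            for j in range(p)] for i in range(p)]
    xty = [sum(series[t - i - 1] * series[t] for t in range(p, len(series)))
           for i in range(p)]
    phi = _solve(xtx, xty)
    rss = sum((series[t] - sum(phi[i] * series[t - i - 1] for i in range(p))) ** 2
              for t in range(p, len(series)))
    return phi, n * math.log(rss / n) + 2 * (p + 1)
```

Comparing the AIC of `fit_ar(series, p)` across candidate orders p mirrors the order-selection step described in the abstract.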

Abstract A new hybrid model for forecasting the electric power load several months ahead is proposed. To allow for distinct responses from individual load sectors, this hybrid model, which combines dynamic (i.e., air-temperature dependency of power load) and fuzzy time-series approaches, is applied separately to the household, public, service, and industrial sectors. The hybrid model is tested using actual load data from the Seoul metropolitan area, and its predictions are compared with those from two typical dynamic models. Our investigation shows that, in the case of four-month forecasting, the proposed model reproduces the actual monthly power load of every sector with less than 3% absolute error and a satisfactory reduction of forecasting errors compared to other models from previous studies.

...series analyses of air pollution and health attracted the attention of the scientific community ... uncertainty in time-series studies of air pollution and health ... six "criteria" air pollutants at a level that protects the public's health (Environmental Protection...

Combined forecasters have been in the vanguard of stochastic time-series modeling. In this way it has been usual to suppose that each single model generates a residual or prediction error like white noise. However, mostly because of disturbances not ... Keywords: Artificial neural network hybrid systems, Linear combination of forecasts, Maximum likelihood estimation, Time-series forecasters, Unbiased forecasters

Benchmarking consists of the adjustment of time-series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
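Benchmarking in its simplest textbook form, pro-rata adjustment, scales each sub-annual span so it agrees with its more aggregated benchmark. This generic sketch is neither the EIA procedure nor the report's recommended first-differencing approach; it only illustrates what "agreement with the benchmark" means:

```python
def prorate_benchmark(monthly, annual_benchmarks, months_per_year=12):
    """Pro-rata benchmarking: scale each year's monthly values so they
    sum to that year's (more aggregated, presumed more accurate) benchmark.

    Methods such as first-difference (Denton-type) adjustment additionally
    smooth the step changes this simple scaling introduces at year
    boundaries; they are not sketched here.
    """
    adjusted = []
    for year, bench in enumerate(annual_benchmarks):
        chunk = monthly[year * months_per_year : (year + 1) * months_per_year]
        factor = bench / sum(chunk)       # one multiplicative factor per year
        adjusted.extend(v * factor for v in chunk)
    return adjusted
```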

A general approach for modeling wind speed and wind power is described. Because wind power is a function of wind speed, the methodology is based on the development of a model of wind speed. Values of wind power are estimated by applying the ...

...time indoors, 7.2% in or near vehicles, and only 5.6% outdoors. Consequently ... rather than individual, data in an ecological analysis like ours. It will likely ... and policies of the EPA, or motor vehicle or engine manufacturers. The authors...

The price forecasts embody crucial information for generators when planning bidding strategies to maximise profits. Therefore, generation companies need accurate price-forecasting tools. Comparison of neural network and autoregressive integrated moving average (ARIMA) models for forecasting commodity prices in previous research showed that artificial neural network (ANN) forecasts were considerably more accurate than those of traditional ARIMA models. This paper provides an accurate and efficient tool for short-term price forecasting based on the combination of ANN and ARIMA. Firstly, input variables for the ANN are determined by time-series analysis. This model relates the current prices to the values of past prices. Secondly, the ANN is used for one-day-ahead price forecasting. A three-layered feed-forward neural network algorithm is used for forecasting next-day electricity prices. The ANN model is then trained and tested using data from the electricity market of Iran. According to previous studies, in the case of neural networks and ARIMA models, historical demand data do not significantly improve predictions. The results show that the combined ANN-ARIMA model forecasts prices with high accuracy for short-term periods. Also, it is shown that policy-making strategies would be enhanced due to increased precision and reliability.

Time series arise frequently in many science and engineering applications, including finance, digital audio, motion capture, network security, and transportation. In this work, we propose a technique for discovering anomalies in time series that takes ...

Recently, a rigorous yet concise formula has been derived to evaluate the information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing three types of fundamental mechanisms that govern the marginal entropy change of the flow recipient. A normalized, or relative, flow measures its importance relative to other mechanisms. In analyzing realistic series, both absolute and relative information flows need to be taken into account, since the normalizers for a pair of reverse flows belong to two different entropy balances; it is quite normal that two identical flows may differ a lot in relative importance in their respective balances. We have reproduced these results with several autoregressive models. We have also shown applications to a climate change problem and a financial analysis problem. For the former, reconfirmed is the role of the Indian Ocean Dipole as ...
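An absolute information-flow estimate of this kind can be sketched with the linear estimator from X. S. Liang's work on information flow between time series; the abstract does not spell out the estimator, so treating the series as governed by linear dynamics, and the specific covariance formula below, are assumptions of the sketch. The normalization into a relative flow requires further entropy-balance terms that are not shown:

```python
def liang_flow(x1, x2, dt=1.0):
    """Estimate the absolute rate of information flow T_{2->1} from
    series x2 to series x1 using a linear (covariance-based) estimator.

    The forward difference of x1 stands in for its time derivative.
    """
    n = len(x1) - 1
    d1 = [(x1[t + 1] - x1[t]) / dt for t in range(n)]  # dx1/dt estimate
    a, b = x1[:n], x2[:n]

    def cov(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        return sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / (len(u) - 1)

    c11, c22, c12 = cov(a, a), cov(b, b), cov(a, b)
    c1d, c2d = cov(a, d1), cov(b, d1)
    # Linear information-flow estimator (assumed form)
    return (c11 * c12 * c2d - c12 ** 2 * c1d) / (c11 ** 2 * c22 - c11 * c12 ** 2)
```

For two independent series the estimated flow should be near zero, reflecting the absence of causal influence.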

Time Series Analysis, by James D. Hamilton. Since its publication just over ten years ago, James Hamilton's Time Series Analysis has taken its place in the canon of modern technical economic literature ... Hamilton's book enjoyed popularity among econometricians in seminars in Europe and North

Can biomass time series be reliably assessed from CPUE time series data only? Francis Laloë ... to abundance. This means (i) that catchability is constant and (ii) that all the biomass is catchable. If so, relative variations in CPUE indicate the same relative variations in biomass. Myers and Worm consider

The method of surrogates is one of the key concepts of nonlinear data analysis. Here, we demonstrate that commonly used algorithms for generating surrogates often fail to generate truly linear time series. Rather, they create surrogate realizations with Fourier phase correlations leading to nondetections of nonlinearities. We argue that reliable surrogates can only be generated if one tests separately for static and dynamic nonlinearities.
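The standard phase-randomized surrogate construction the abstract critiques can be sketched as follows; a naive O(n^2) DFT keeps the sketch dependency-free, and note the abstract's point that this common construction can still leave residual phase correlations:

```python
import cmath, random

def phase_surrogate(series, rng=None):
    """One Fourier (phase-randomized) surrogate of a real-valued series.

    The amplitude spectrum is preserved; phases of the positive
    frequencies are drawn uniformly and mirrored so that the inverse
    transform is real.
    """
    rng = rng or random.Random()
    n = len(series)
    spec = [sum(series[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    new = [complex(0)] * n
    new[0] = spec[0]                         # keep the DC component
    for k in range(1, (n + 1) // 2):
        phase = rng.uniform(0, 2 * cmath.pi)
        amp = abs(spec[k])
        new[k] = amp * cmath.exp(1j * phase)
        new[n - k] = amp * cmath.exp(-1j * phase)   # conjugate mirror
    if n % 2 == 0:
        new[n // 2] = abs(spec[n // 2])      # Nyquist bin stays real
    # Inverse DFT; imaginary parts vanish up to rounding
    return [sum(new[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

Because only phases are altered, the surrogate preserves the mean and total power of the original series exactly (up to floating-point error), which is what tests against the linear null hypothesis rely on.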

A concept called the balanced estimator of diffusion entropy is proposed to detect quantitatively scalings in short time series. The effectiveness is verified by detecting successfully scaling properties for a large number of artificial fractional Brownian motions. Calculations show that this method can give reliable scalings for short time series with length ~10^2. It is also used to detect scalings in the Shanghai Stock Index, five stock catalogs, and a total of 134 stocks collected from the Shanghai Stock Exchange Market. The scaling exponent for each catalog is significantly larger compared with that for the stocks included in the catalog. Selecting a window of size 650, the evolution of scaling for the Shanghai Stock Index is obtained by sliding the window along the series. Global patterns in the evolutionary process are captured from the smoothed evolutionary curve. By comparing the patterns with the list of important events in the history of the considered stock market, the evolution of scaling is matched with the stock index series. We find that the important events fit very well with global transitions of the scaling behaviors.

The identification of the rainfallrunoff relationship is a significant precondition for surfaceatmosphere process research and operational flood forecasting, especially in inadequately monitored basins. Based on an information diffusion model (...

Publisher Summary This chapter discusses the seasonal fluctuations in time series. Seasonal variations are defined either to be studied separately or, more often, to be removed, thus allowing concentration on the remaining variation. If the annual data are not strongly influenced by trend movements or cyclical changes, it is possible to compare seasonal data, usually monthly data, with averages not adjusted for trend and express them as percentages of these averages. In reality, the amplitude and period of seasonal movements vary in most cases from year to year, being affected by seasonal as well as by cyclical, random, and other nonseasonal factors. The averages, expressed in percentages, thus obtained are preliminary seasonal indexes. Seasonal indexes based on the fitting of curves to monthly ratios to moving averages, expressed as percentages, are called moving seasonal indexes.
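The preliminary seasonal indexes described above can be sketched with the classical ratio-to-moving-average procedure; the centered double moving average and the normalization to a mean of 100 follow the chapter's description, while the period-4 usage below is just for illustration:

```python
def seasonal_indexes(series, period=12):
    """Preliminary seasonal indexes by the ratio-to-moving-average method.

    Steps: (1) a centered double moving average estimates the trend-cycle;
    (2) each observation is expressed as a percentage of that average;
    (3) the percentages are averaged by season; (4) the averages are
    normalized so the indexes mean 100.
    """
    half = period // 2
    ratios = [[] for _ in range(period)]
    for i in range(half, len(series) - half):
        # Centered moving average over an even period: half weight on ends
        window = (series[i - half] / 2 + sum(series[i - half + 1 : i + half]) +
                  series[i + half] / 2) / period
        ratios[i % period].append(100.0 * series[i] / window)
    raw = [sum(r) / len(r) for r in ratios]
    mean_raw = sum(raw) / period
    return [100.0 * v / mean_raw for v in raw]
```

For a series with a flat trend and fixed multiplicative seasonal factors, the procedure recovers those factors exactly.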

Commonly used statistical tests of hypothesis, also termed inferential tests, that are available to meteorologists and climatologists all require independent data in the time series to which they are applied. However, most of the time series that ...

Physical (e.g. astrophysical, geophysical, meteorological) data may appear as the output of an experiment, or may contain sociological, economic or biological information. Whatever the source of a time-series data set, some amount of noise is always expected to be embedded in it. Analysis of such data in the presence of noise may often fail to give accurate information. Although textbook data-filtering theory is primarily concerned with the presence of random, zero-mean errors, in reality errors in data are often systematic rather than random. In the present paper we produce different models of systematic error in time-series data. This will help to trace the systematic error present in the data, so that it can be removed as far as possible to make the data compatible for further study.

SUMMARY Series of electric organ discharges of Gnathonemus petersii were recorded by means of a pair of symmetrical electrode assemblies on each side of the fish. The spontaneous EODs in the stationary phase were differentiated (into two groups) according to amplitude variations: the left-dominant and the right-dominant groups. It is concluded that the electric organs are discharging alternately with a monomodal interval distribution on each side. A small distance between bilateral electric organs which would produce temporal EOD slipping-off is far more effective for detection of the electric field disturbances.

This work describes an approach devised by the authors for time-series classification. In our approach genetic programming is used in combination with a serial processing of data, where the last output is the result of the classification. The use of ... Keywords: Classification, genetic programming, real-world applications, serial data processing, time series

This paper presents a new method integrating fuzzy time series with the exponential smoothing method to forecast university enrolments. The data of historical enrolments of the University of Alabama shown in Liu et al. (2011) are adopted to illustrate the forecasting process of the proposed method. A comparison has been made with five previous fuzzy time-series models. Meanwhile, the mean squared error has also been calculated as the evaluation criterion to illustrate the performance of the proposed method. The empirical analysis shows that the proposed model reflects the fluctuations in fuzzy time series better and provides better overall forecasting results than the five listed previous models.

A method for creating scenarios of time series of monthly mean surface temperature at a specific site is developed. It is postulated that surface temperature can be specified as a linear combination of regional and local temperature components, ...

Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time, or corrupt measurements, for example, or may be inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. This paper aims at improving the accuracy of common statistical parameters for the characterization of irregularly sampled signals. The uneven representation of time series, often including clumps of measurements and gaps with no data, can severely disrupt the values of estimators. A weighting scheme adapting to the sampling density and noise level of the signal is formulated. Its application to time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the sugg...

...of Figure 1 we present a time series resulting from the NYMEX light sweet crude oil futures price data set ... sweet crude oil futures curves ... demonstrate that it contains significant advantages over FPCA ... a modification of the usual PCA is quite simple: futures price data on a given day

Trade and Income: Exploiting Time Series in Geography. James Feyrer, Dartmouth College, October 23 ... Rodriguez and Rodrik (2000) show that these results are not robust to controlling for omitted variables ... conferences for helpful comments. All errors are my own. james.feyrer@dartmouth.edu, Dartmouth College

Time series of a CME blasting out from the Sun. Composite image of the Sun in UV light ... with the naked eye, the Sun seems static, placid, constant. From the ground, the only noticeable variations in the Sun are its location (where will it rise and set today?) and its color (will clouds cover

Time-series forecasting (TSF) is an important tool to support decision making (e.g., planning production resources). Artificial neural networks (ANNs) are innate candidates for TSF due to advantages such as nonlinear learning and noise tolerance. However, ... Keywords: Estimation distribution algorithm, Multilayer perceptron, Regression, Time series

Online time-series change detection is a critical component of many monitoring systems, such as space- and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time-series change detection methods are not directly applicable to such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time-series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using a Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirements of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time-series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
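The predict-then-control-chart structure of such an algorithm can be sketched with a much simpler predictor: a seasonal-naive forecast (the value one period ago) stands in here for the paper's Gaussian-process model, and the 3-sigma threshold is the classical control-chart choice, both assumptions of the sketch:

```python
def detect_change(series, period, threshold=3.0, warmup=3):
    """Online change detection via a control chart on prediction residuals.

    A seasonal-naive predictor (value one period ago) generates residuals;
    a residual more than `threshold` standard deviations from the running
    residual mean flags a change.  Returns the index of the first flagged
    change, or None.
    """
    resid = []
    for t in range(period, len(series)):
        r = series[t] - series[t - period]   # seasonal-naive residual
        if len(resid) >= warmup * period:    # wait for a stable baseline
            mean = sum(resid) / len(resid)
            var = sum((x - mean) ** 2 for x in resid) / (len(resid) - 1)
            std = var ** 0.5 or 1e-12        # guard a degenerate baseline
            if abs(r - mean) / std > threshold:
                return t
        resid.append(r)
    return None
```

On a clean periodic signal with an injected level shift, the chart flags the first post-shift observation.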

A series of precision shock-timing experiments has been performed on NIF. These experiments continue to adjust the laser pulse shape and employ the adjusted cone fraction (CF) in the picket (the first 2 ns of the laser pulse) as determined from the re-emit experiment series. The NIF ignition laser pulse is precisely shaped and consists of a series of four impulses, which drive a corresponding series of shock waves of increasing strength to accelerate and compress the capsule ablator and fuel layer. To optimize the implosion, not only the strength (or power) but also the timing of the shock waves is tuned, to sub-nanosecond accuracy. In a well-tuned implosion, the shock waves work together to compress and heat the fuel. For the shock-timing experiments, a re-entrant cone is inserted through both the hohlraum wall and the capsule ablator, allowing a direct optical view of the propagating shocks in the capsule interior using the VISAR (Velocity Interferometer System for Any Reflector) diagnostic from outside the hohlraum. To emulate the DT ice of an ignition capsule, the inside of the cone and the capsule are filled with liquid deuterium.

Gaia is an ESA cornerstone mission, successfully launched in December 2013; it commenced operations in July 2014. Within the Gaia Data Processing and Analysis Consortium, Coordination Unit 7 (CU7) is responsible for the variability analysis of over a billion celestial sources and nearly 4 billion associated time series (photometric, spectrophotometric, and spectroscopic), encoding information in over 800 billion observations during the 5 years of the mission, resulting in a petabyte-scale analytical problem. In this article, we briefly describe the solutions we developed to address the challenges of time-series variability analysis: from the structure for a distributed data-oriented scientific collaboration to architectural choices and specific components used. Our approach is based on Open Source components with a distributed, partitioned database as the core to handle incrementally: ingestion, distributed processing, analysis, results and export in a constrained time window.

Ensembles have been shown to provide better generalization performance than single models. However, the creation, selection and combination of individual predictors is critical to the success of an ensemble, as each individual model needs to be both ... Keywords: Ensembles, Hybrid multi-objective evolutionary algorithms, Recurrent neural networks, Selection, Time-series prediction

The use of memory kernels stemming from a Mori-Zwanzig approach to time-series analysis is discussed. We show that despite its success in determining properties from an analytical model, the kernel itself is not easily interpreted. We consider a recently introduced discretization of the kernel and show that its properties can be quite different from its continuous counterpart. We provide a rigorous analysis of the discrete case and show for several analytically calculated memory kernels of simple time-series processes that their features are not readily detectable in the kernel. We show furthermore that practically relevant Mori-Zwanzig models with a finite kernel form a true subclass of the autoregressive moving average (ARMA) models. The fact that this approach already veils the properties of these simple time series gives rise to severe doubts about its applicability in more complex situations.

This study aimed to assess individual and gender differences in power spectra of the body sway time series and sway velocity time series during static upright standing posture using 30 preschool children and...

The 2013 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2010. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2010 were published earlier (Boden et al. 2013). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on

Abstract How well do existing ocean observation programs monitor the oceans through space and time? A meta-analysis of ocean observation programs in the Pacific Ocean was carried out to determine where and how key parameters defining the physics, chemistry, and biology of the oceans were measured. The analysis indicates that although the chemistry and physics of the Pacific Ocean are reasonably well monitored, ecological monitoring remains largely ad hoc, patchy, unsystematic, and inconsistent. The California Cooperative Oceanic Fisheries Investigations (CalCOFI), for example, is the only Pacific Ocean program in which the zooplankton and micronekton are resolved to species with consistent time series of more than 20 years' duration. Several studies now indicate massive changes to nearshore, mesopelagic and other fish communities of the southern California Current, but available time series do not allow these potential changes to be examined more widely. Firm commitment from the global community to sustained, representative, quantitative marine observations at the species level is required to adequately assess the ecological status of the oceans.

A stationary time series {X_t} is said to have long memory when the long-memory parameter d is between 0 and 0.5. Many methods of estimating the long-memory parameter based on the decay rate of the autocorrelation or the behavior of the spectral density around zero have been ... For consistency, some people consider the original data X_t as v_{0,t}. Due to the decimating property, we have N/2^j wavelet and scaling coefficients at level j. The constraint on sample size, N = 2^J, can be relaxed by considering a partial discrete wavelet transform. 2...

This report deals specifically with changes made to the survey forms in January 1981 and the resulting changes to the data time series. Naturally, when a series has changed at some time point, the data after the change are no longer comparable to those before. In many cases, though, comparisons are desired that use pre- and post-intervention data as a series. It is thus necessary to have a methodology for updating the older data so that such comparisons can be made validly. To produce this methodology, the particular intervention must be modeled. However, when attempting to analyze one particular intervention, other types of interventions must be considered also. If effects of the other interventions can be modeled, the overall variability of the series can be reduced and the intervention of interest can be better isolated. Thus, in the following, we discuss (in addition to the format modifications of the forms) the trends and changes noted in the JPRS from January 1976 to December 1982. The year 1976 was chosen since it corresponds to the first year for which microdata are computerized in a universal format in the JPRS master files. We discuss, in particular, changes to the data series for inventories of: (a) motor gasoline, (b) distillate oil, (c) residual fuel oil, and (d) crude oil. These are the series studied in detail in subsequent sections of this report.

In this work, we propose a novel method to transform a time series into a weighted and directed network. For a given time series, we first generate a set of segments via a sliding window, and then use a doubly symbolic scheme to characterize every windowed segment by combining absolute amplitude information with an ordinal pattern characterization. Based on this construction, a network can be directly constructed from the given time series: segments corresponding to different symbol pairs are mapped to network nodes and the temporal succession between nodes is represented by directed links. With this conversion, the dynamics underlying the time series is encoded into the network structure. We illustrate the potential of our networks with a well-studied dynamical model as a benchmark example. Results show that network measures for characterizing global properties can detect the dynamical transitions in the underlying system. Moreover, we employ a random walk algorithm to sample loops in our networks, and find that time series with different dynamics exhibit distinct cycle structure. That is, the relative prevalence of loops with different lengths can be used to identify the underlying dynamics.
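The doubly symbolic construction can be sketched directly; the bin count, window size, and the choice of the window mean as the amplitude summary below are illustrative assumptions, while the structure (amplitude bin plus ordinal pattern per window, directed edges between consecutive windows) follows the description above:

```python
def series_to_network(series, window=3, bins=2):
    """Map a time series to a weighted, directed network.

    Each sliding window is labeled by a pair (amplitude bin of its mean,
    ordinal pattern of its values); a directed edge links the symbols of
    consecutive windows, with weights counting transitions.
    Returns a dict {(node_from, node_to): weight}.
    """
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0          # guard a constant series

    def symbol(seg):
        # Amplitude part: which bin the window mean falls into
        amp = min(int((sum(seg) / len(seg) - lo) / width), bins - 1)
        # Ordinal part: the permutation sorting the window's values
        pattern = tuple(sorted(range(len(seg)), key=lambda i: seg[i]))
        return (amp, pattern)

    edges = {}
    prev = None
    for start in range(len(series) - window + 1):
        node = symbol(series[start : start + window])
        if prev is not None:
            edges[(prev, node)] = edges.get((prev, node), 0) + 1
        prev = node
    return edges
```

A short rise-and-fall series produces three windows and hence two directed transitions, each between distinct symbol pairs.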

. Applications include a discussion of the timing and potential causes of the British Industrial Revolution, convergence, long memory, and graphical modelling. JEL classifications: N33, O47, O

Distributed photovoltaic (PV) projects must go through an interconnection study process before connecting to the distribution grid. These studies are intended to identify the likely impacts and mitigation alternatives. In the majority of the cases, system impacts can be ruled out or mitigation can be identified without an involved study, through a screening process or a simple supplemental review study. For some proposed projects, expensive and time-consuming interconnection studies are required. The challenges to performing the studies are twofold. First, every study scenario is potentially unique, as the studies are often highly specific to the amount of PV generation capacity that varies greatly from feeder to feeder and is often unevenly distributed along the same feeder. This can cause location-specific impacts and mitigations. The second challenge is the inherent variability in PV power output which can interact with feeder operation in complex ways, by affecting the operation of voltage regulation and protection devices. The typical simulation tools and methods in use today for distribution system planning are often not adequate to accurately assess these potential impacts. This report demonstrates how quasi-static time series (QSTS) simulation and high time-resolution data can be used to assess the potential impacts in a more comprehensive manner. The QSTS simulations are applied to a set of sample feeders with high PV deployment to illustrate the usefulness of the approach. The report describes methods that can help determine how PV affects distribution system operations. The simulation results are focused on enhancing the understanding of the underlying technical issues. The examples also highlight the steps needed to perform QSTS simulation and describe the data needed to drive the simulations.
The goal of this report is to make the methodology of time series power flow analysis readily accessible to utilities and others responsible for evaluating potential PV impacts.

Time series methods were introduced to improve the forecasting made by statistical methods on vague or imprecise data and on time series with few samples available. However, the integration of these concepts is a little e...

An application of multiway spectral clustering with out-of-sample extensions towards clustering time series is presented. The data correspond to power load time series acquired from substations in the ... eigenve...

This contribution discusses some recent applications of time-series analysis in Random Matrix Theory (RMT), and applications of RMT in the statistical analysis of eigenspectra of correlation matrices of multivariate time series.

The correlation between wind speed and the failure rate (FR) of wind turbines is analyzed with a time series approach. The time series of the power index (PI) and FR of wind turbines are established based on historical data, which are pretreated by singularity processing, stationarity processing, and wavelet de-noising. The trend variations of the time series are analyzed in both the time domain and the frequency domain by extracting indicator functions including the auto-correlation function, cross-correlation function, and spectral density function. A case study is presented to verify the validity of the model and the method, based on wind speed and failure data from January 1995 to December 2002 in Nordjylland, Denmark. The auto-correlation function and spectral density function show that the time series of PI and FR have strong seasonal characteristics and quite similar periodicity, while the cross-correlation function shows they keep high consistency and strong correlation. The results indicate that by calculating and monitoring PI, the failure rule of wind turbines can be forecast, which provides a theoretical basis for preventive maintenance of wind turbines.
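
The indicator functions named above are standard sample statistics. As a reminder of what is being extracted, here is a minimal stdlib-Python sketch of lag-k auto- and cross-correlation (textbook estimators, not the authors' pretreatment pipeline):

```python
from statistics import mean

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag (1.0 at lag 0)."""
    m = mean(x)
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[t] - m) * (x[t + lag] - m) for t in range(len(x) - lag))
    return cov / var

def crosscorr(x, y, lag):
    """Sample cross-correlation between x and y at the given lag."""
    mx, my = mean(x), mean(y)
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    cov = sum((x[t] - mx) * (y[t + lag] - my) for t in range(len(x) - lag))
    return cov / (sx * sy)
```

For a seasonal series, the autocorrelation peaks again at lags equal to the seasonal period, which is exactly the signature the abstract reports for PI and FR.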

This paper describes the development of a Bayesian procedure to analyse and forecast positive demand time-series data with a proportion of zero values and a high level of variability for the non-zero data. The resulting forecasts play decisive roles in organisational planning, budgeting, and performance monitoring. Exponential smoothing methods are widely used as forecasting techniques in industry and business. However, they can be unsuitable for the analysis of non-negative demand time-series data with the aforementioned features. In this paper, an unconstrained latent demand underlying the observed demand is introduced into the linear heteroscedastic model associated with the Holt-Winters model. Accurate forecasts for the observed demand can readily be derived from those obtained with exponential smoothing for the latent demand. The performance of the proposed procedure is illustrated using a simulation study and two real time-series datasets which correspond to tourism demand and book sales. [Received 4 November 2010; Revised 7 September 2011, 10 April 2012; Accepted 10 May 2012]

SumTime-Turbine: A Knowledge-Based System to Communicate Gas Turbine Time-Series Data (Jin Yu). SumTime-Turbine produces textual summaries of archived time-series data from gas turbines. These summaries should help ... evaluated. 1 Introduction: In order to get the most out of gas turbines, TIGER [2] has been developed

The Living Planet Index: using species population time series to track trends in the world's biodiversity over time. It uses time-series data to calculate average rates of change in a large number of populations of terrestrial, freshwater and marine vertebrate species. The dataset contains

Abstract: The objective of this paper is to assess the use of a series DC motor in an electric car together with its rotation-speed controller, and to evaluate its performance under different running cases of the electric car with different loads. The mathematical equations modelling the series DC motor and electronic inverter in the dynamic state, in the d-q reference frame, were considered. A computer model of these equations was implemented using MATLAB/SIMPOWER facilities, obtaining a complete model for the motor and controller. A series DC motor is considered and its parameters were used for simulation. The electronic controller operates based on the PWM control technique. Simulation of the series DC motor performance was conducted under presumptions of changing car load and different resistant torques. Changing loads were realised by changing the number of passengers when the electric car is running on normal streets, when running on streets with a slope, when the car accelerates to reach a steady speed, and when the car changes speed abruptly and suddenly while running on a country road with some holes and small slopes. Some conclusions and remarks about the performance and behaviour of the series DC motor were drawn. The simulated series DC motor was tested and mounted experimentally in a small truck in the Faculty of Mechanical & Electrical Engineering at Damascus University.

procedure can also be used to produce prediction intervals. When the Y(·) process is Gaussian, these prediction intervals should perform no better than the interval in (D.14). However, when working with real-world data, the assumptions of a Gaussian... series representation given by φ(x) = ∑_{k=0}^{∞} d_k (x − μ(s₀))^k, x ∈ ℝ (E.19), for some d₀, d₁, ... ∈ ℝ. Further, suppose that E[Ẑ_n(s₀)]² = O(1) and that for some k₁ ∈ (0, ∞), ∑_{k=1}^{∞} ∑_{j=1}^{∞} kj |d_k d_j| 2^{(k+j−2)/2} Γ((k+j−1)/2)...

applications. Examples of time series include historical price and trading volume obtained from financial stocks ... judgment. In technical analysis, time-series data such as historical price, volume, and other statistics ... the movement of the price. Analysts may also take into account other factors such as government policies

. environmental air pollution, which has gained rapidly increasing attention by many European research projects ... million time series, each representing the daily course of air pollution parameters. It is important ... data mining in time series databases is essential in many application domains, as for instance

This study forecasts the monthly peak demand of electricity in the northern region of India using univariate time-series techniques, namely the Multiplicative Seasonal Autoregressive Integrated Moving Average (MSARIMA) model and Holt-Winters Multiplicative Exponential Smoothing (ES), for seasonally unadjusted monthly data spanning April 2000 to February 2007. In-sample forecasting reveals that the MSARIMA model outperforms the ES model in terms of the lower root mean square error, mean absolute error, and mean absolute percent error criteria. ARIMA(2,0,0)(0,1,1)12 is found to be the best-fitted model for explaining the monthly peak demand of electricity, and it has been used to forecast the monthly peak demand in northern India 15 months ahead from February 2007. This will help the Northern Regional Load Dispatch Centre make the necessary arrangements a priori to meet future peak demand.

We detect long-range correlations and trends in time series extracted from the data of seismic events that occurred from 1973 to 2011 in a rectangular region containing essentially all of the continental part of Colombia. The long-range correlations are detected by calculating the Hurst exponents for the time series of interevent intervals, separation distances, depth differences, and magnitude differences. By using a geometrical modification of the classical R/S method, developed to detect long-range correlations in short time series, we find the existence of persistence for all the time series considered. We also find, by using DFA up to the third order, that the time series of interevent intervals, separation distances, and depth differences are influenced by quadratic trends, while the time series of magnitude differences is influenced by a linear trend. Finally, for the time series of interevent intervals, we present an analysis of the Hurst exponent as a function of the time and the minim...
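
For readers unfamiliar with the R/S statistic mentioned above, a minimal stdlib-Python sketch of the classical (unmodified) rescaled-range estimate of the Hurst exponent follows. The segment sizes and the plain least-squares slope are illustrative choices, not the authors' geometrical modification for short series:

```python
from math import log
from statistics import mean, pstdev

def rescaled_range(x):
    """R/S statistic of one segment: range of cumulative deviations
    from the mean, rescaled by the standard deviation."""
    m = mean(x)
    cum, s = [], 0.0
    for v in x:
        s += v - m
        cum.append(s)
    sd = pstdev(x)
    return (max(cum) - min(cum)) / sd if sd > 0 else 0.0

def hurst_rs(x, sizes=(8, 16, 32, 64)):
    """Hurst exponent: slope of log(mean R/S) versus log(n)."""
    pts = []
    for n in sizes:
        rs_vals = [rescaled_range(x[i:i + n])
                   for i in range(0, len(x) - n + 1, n)]
        pts.append((log(n), log(mean(rs_vals))))
    mx = mean(p[0] for p in pts)
    my = mean(p[1] for p in pts)
    num = sum((px - mx) * (py - my) for px, py in pts)
    den = sum((px - mx) ** 2 for px, _ in pts)
    return num / den
```

A persistent (trending) series yields a slope near 1, an anti-persistent one a slope near 0, and uncorrelated noise about 0.5 — which is how the persistence claim in the abstract is read off.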

One major acknowledged challenge in modelling daily precipitation is the inability to capture extreme events in the spectrum of events. These extreme events are rare but may cause large losses. How to realistically simulate the extreme behavior of daily...

We describe methods of estimating the entire Lyapunov spectrum of a spatially extended system from multivariate time-series observations. Provided that the coupling in the system is short range, the Jacobian has a banded structure and can be estimated using spatially localised reconstructions in low embedding dimensions. This circumvents the "curse of dimensionality" that prevents the accurate reconstruction of high-dimensional dynamics from observed time series. The technique is illustrated using coupled map lattices as prototype models for spatio-temporal chaos and is found to work even when the coupling is not strictly local but only exponentially decaying.

An approach to the simulation of time series of storms and weather ... on their frequency is described. Using the results of wind reanalysis for the Norwegian, Barents, and ... values. Based on the revealed regulari...

Time-series data from wide-field sensors, acquired for
the period of a growing season or longer, capitalize on
phenological changes in vegetation and make it possible to
identify vegetated land cover types in greater ...

THE RELATION BETWEEN BRAZILIAN AND CHICAGO BOARD OF TRADE SOYBEAN PRICES: A TIME-SERIES TEST. A Thesis by BRUNO MELCHER, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 1991. Major Subject: Agricultural Economics. Approved as to style and content by: ... David...

This paper concerns the forecasting of seasonal intraday time series that exhibit repeating intraweek and intraday cycles. A recently proposed exponential smoothing method involves smoothing a different intraday cycle for each distinct type of day of the week. Similar days are allocated identical intraday cycles. A limitation is that the method allows only whole days to be treated as identical. We introduce a new exponential smoothing formulation that allows parts of different days of the week to be treated as identical. The result is a method that involves the smoothing and initialisation of fewer terms. We evaluate forecasting up to a day ahead using two empirical studies. For electricity load data, the new method compares well with a range of alternatives. The second study involves a series of arrivals at a call centre that is open for a shorter duration at the weekends than on weekdays. Among the variety of methods considered, the new method is the only one that can satisfactorily model this situation, where the number of periods on each day of the week is not the same.
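
The idea of letting several time slots share one smoothed seasonal term can be illustrated with a deliberately simplified additive scheme. This is a stdlib-Python sketch of the general mechanism, not the authors' formulation; `cycle_of` is a hypothetical mapping from each period to a shared seasonal index:

```python
def seasonal_es(y, cycle_of, n_cycles, alpha=0.2, gamma=0.1):
    """Additive seasonal exponential smoothing where observation t is
    assigned a seasonal term via cycle_of(t). Mapping several time slots
    to the same index is what lets 'parts of different days' share a
    single smoothed cycle, so fewer terms need smoothing and initialising.
    """
    level = y[0]
    seas = [0.0] * n_cycles
    fitted = []
    for t, obs in enumerate(y):
        c = cycle_of(t)
        f = level + seas[c]       # one-step-ahead forecast
        fitted.append(f)
        err = obs - f
        level += alpha * err      # shared level update
        seas[c] += gamma * err    # update only the shared seasonal term
    return level, seas, fitted
```

Mapping, say, weekday evenings and Saturday mornings to the same index is what "treating parts of different days as identical" amounts to in this toy version.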

Folded conformations of proteins in thermodynamically stable states have long lifetimes. Before it folds into a stable conformation, or after unfolding from a stable conformation, the protein will generally stray from one random conformation to another, leading to rapid fluctuations. Brief structural changes therefore occur before folding and unfolding events. These short-lived movements are easily overlooked in studies of folding/unfolding, for they represent momentary excursions of the protein to explore conformations in the neighborhood of the stable conformation. The present study looks for precursory signatures of protein folding/unfolding within these rapid fluctuations through a combination of three techniques: (1) ultrafast shape recognition, (2) time-series segmentation, and (3) time-series correlation analysis. The first procedure measures the differences between statistical distance distributions of atoms in different conformations by calculating shape similarity indices from molecular dynamics simulation trajectories. The second procedure is used to discover the times at which the protein makes transitions from one conformation to another. Finally, we employ the third technique to exploit spatial fingerprints of the stable conformations; this procedure is to map out the sequences of changes preceding the actual folding and unfolding events, since strongly correlated atoms in different conformations are different due to bond and steric constraints. The aforementioned high-frequency fluctuations are therefore characterized by distinct correlational and structural changes that are associated with rate-limiting precursors that translate into brief segments. Guided by these technical procedures, we choose a model system, a fragment of the protein transthyretin, for identifying in this system not only the precursory signatures of transitions associated with the α helix and β hairpin, but also the important role played by weaker correlations in such protein folding dynamics.

In this paper, we present an application of Artificial Neural Networks (ANNs) in the renewable energy domain. We particularly look at the Multi-Layer Perceptron (MLP) network, which has been the most used ANN architecture both in the renewable energy domain and in time-series forecasting. We have used an MLP and an ad hoc time-series pre-processing to develop a methodology for the daily prediction of global solar radiation on a horizontal surface. First results are promising, with nRMSE ≈ 21% and RMSE ≈ 3.59 MJ/m². The optimized MLP gives predictions similar to or even better than conventional and reference methods such as ARIMA techniques, Bayesian inference, Markov chains and k-Nearest-Neighbours. Moreover, we found that the proposed data pre-processing approach can significantly reduce forecasting errors, by about 6% compared to conventional prediction methods such as Markov chains or Bayesian inference. The proposed simulator has been obtained using 19 years of available data from the meteorological station of Ajaccio (Corsica Island, France, 41°55'N, 8°44'E, 4 m above mean sea level). The whole prediction methodology has been validated on a 1.175 kWc mono-Si PV power grid. Six prediction methods (ANN, clear sky model, combination...) allow predicting the best daily DC PV power production at horizon d + 1. The cumulated DC PV energy over a 6-month period shows great agreement between simulated and measured data (R² > 0.99 and nRMSE < 2%). (author)

The time series of data from a Radiation Portal Monitor (RPM) system are evaluated for the presence of point sources by isolating the contribution of anomalous radiation. Energy-windowed background spectra taken from the RPM are compared with the observed spectra at each time step during a vehicle drive-through. The total signal is turned into a spectral distance index using this method. This provides a time series with reduced systematic fluctuations due to background attenuation by the vehicle, and allows for point source detection by time-series analyses. The anomalous time series is reanalyzed by using a wavelet filter function of similar size to the expected source profile. A number of real drive-through data sets taken at a U.S. port of entry are analyzed in this way. A set of isotopes are injected into the data set, and the resultant benign and injected data sets are analyzed with gross-counting, spectral-ratio, and time-based algorithms. Spectral and time methods together offer a significant increase in detection performance.

We use the methodologies of singular spectrum analysis (SSA), principal component analysis (PCA), and multi-fractal detrended fluctuation analysis (MFDFA) to investigate characteristics of vibration time-series data from a friction brake. SSA and PCA are used to study the long time-scale characteristics of the time series. MFDFA is applied to investigate all time scales down to the smallest recorded one. It turns out that the long time-scale dynamics, presumably dominated by the structural dynamics of the brake system, involves very few active dimensions and can be well understood in terms of low-dimensional chaotic attractors. The multi-fractal analysis shows that the fast dynamical processes originating in the friction interface are, in turn, truly multi-scale in nature.

The goal of this project is to build and test a Time-Series Submersible Incubation Device (TS-SID) capable of the autonomous in situ measurement of phytoplankton production and other rate processes for a period of at least three months. The instrument is conceptually based on a recently constructed Submersible Incubation Device (SID). The TS-SID is to possess the ability to periodically incubate samples in the presence of an appropriate tracer, and to store 94 chemically fixed subsamples for later analysis. The TS-SID has been designed to accurately simulate the natural environment, and to avoid trace metal contamination and physical damage to cells. Devices for biofouling control of internal and external surfaces are to be incorporated into the instrument. After the time-series capabilities of the instrument have been successfully evaluated by medium-term coastal time-series studies (up to one month), longer-term coastal time-series studies (2-3 months) will be conducted to evaluate the biofouling prevention measures that have been used with the instrument.

Time-Series Measurements of Temperature, Current Velocity, and Sediment Resuspension in Saginaw Bay ... and verification. These measurements will be made as part of this project. ... sediment resuspension in the bay during the spring. Measurements of sediment resuspension are important

, Aline Jaimes, Craig Tweedie, and Vladik Kreinovich. Abstract: Time series come from measurements, and measurements are never absolutely accurate. Traditionally, when we deal with an individual measurement or with a sample of measurement results, we subdivide a measurement error into random and systematic components

Recognising Visual Patterns to Communicate Gas Turbine Time-Series Data (Jin Yu, Jim Hunter, Ehud ...). Analogue channels are sampled once per second and archived by the Tiger system for monitoring gas turbines ... it is very important to identify such patterns in any attempt at summarisation. In the gas turbine domain

DAMAGE DETECTION IN A WIND TURBINE BLADE BASED ON TIME-SERIES METHODS (Simon Hoell, Piotr Omenzetter) ... the consequences are growing sizes of wind turbines (WTs) and erections in remote places, such as off... in the past years, thus efficient energy harvesting becomes more important. For the sector of wind energy

Time-series comparisons of MIPAS Level 2 products with climatology (V. Payne, A. Dudhia, C. Piccolo) ... for the calculation of the means. Here we compare these monthly means with reference climatologies for each ... that MIPAS has been operating. The reference climatologies used in these comparisons are the COSPAR Reference

Time-Series Methods for Forecasting Electricity Market Pricing (Zoran Obradovic, Kevin Tomsovic) ... the predictability of electricity price under new market regulations and the engineering aspects of large scale ... of traditional commodities, such as oil or agricultural products. Clearly, assessing the effectiveness

We consider the problem of providing integrity of the aggregate result in the presence of an untrusted data aggregator who may introduce errors into data fusion, causing the final aggregate result to deviate far from the true result determined by participating ... Keywords: aggregate authentication, computation over authenticated data, time series

The 30-year TAMSAT African Rainfall Climatology And Time-series (TARCAT) Dataset. Key points: development of a satellite-based 30-year rainfall dataset for Africa; the dataset has been designed to be temporally consistent; the dataset skilfully captures interannual

Analysis of Brain States from Multi-Region LFP Time-Series (Kyle Ulrich, David E. Carlson) ... The local field potential (LFP) is a source of information about the broad patterns of brain activity. It is believed that these regions may jointly constitute a "brain state," relating to cognition and behavior

Time-series validation of MODIS land biophysical products in a Kalahari woodland, Africa (K. F...) ... MODIS variables are produced from the same algorithm. Solar zenith angle effects and differences between ... (counts versus energy) were examined and rejected as explanations for the discrepancies between MODIS

sets: stock prices, air and sea temperatures, and wind speeds. Keywords: Compression, indexing. (...ics.uci.edu). Wind speeds: We have used daily wind speeds at twelve sites in Ireland, from 1961 to 1978, obtained ... Indexing: The indexing of a time-series database is based on the notion of major inclines, illustrated

transmission studies. Norbert Wiener was able to combine these two sources, and the hybrid has been shown to follow these lines. In the first instance, consider a geographical distribution to be completely static ... cited resulted in one observation every mile of the 2900-mile-long route. In time series it is usually

Biomass monitoring, specifically detecting changes in the biomass or vegetation of a geographical region, is vital for studying the carbon cycle of the system and has significant implications in the context of understanding climate change and its impacts. Recently, several time series change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While the Gaussian process (GP) has been widely used as a kernel-based learning method for regression and classification, its applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. In this paper we address the scalability of GP-based time series change detection. Specifically, we exploit the special structure of the covariance matrix generated for GP analysis to come up with methods that can efficiently estimate the hyper-parameters associated with the GP as well as identify changes in the time series, while requiring a memory footprint that is linear in the size of the input data, as compared to the traditional method, which involves solving a linear system of equations for the Cholesky decomposition of the quadratic-sized covariance matrix. Experimental results show that our proposed method achieves significant speedups, as high as 1000, when processing long time series, while maintaining a small memory footprint. To further improve the computational complexity of the proposed method, we provide a parallel version which can concurrently process multiple input time series using the same set of hyper-parameters. The parallel version exploits the natural parallelization potential of the serial algorithm and is shown to perform significantly better than the serial version, with speedups as high as 10.
Finally, we demonstrate the effectiveness of the proposed change detection method in identifying changes in Normalized Difference Vegetation Index (NDVI) data. Moreover, we show that the scalable solution is able to process NDVI data for the entire Iowa region significantly faster than the standard method.
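
The flavour of online change detection on a time series can be conveyed with a much smaller stand-in. The sketch below flags points that deviate strongly from a running-window prediction; it is stdlib Python with illustrative names, and it replaces the GP predictive distribution described above with a crude windowed mean and standard deviation. Only the online, bounded-memory character is shared with the GP method:

```python
from math import sqrt

def detect_changes(series, window=20, thresh=4.0):
    """Flag index t when series[t] deviates from the mean of the previous
    `window` points by more than `thresh` windowed standard deviations.
    Memory use is linear in `window`, independent of series length."""
    flags = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        m = sum(hist) / window
        var = sum((v - m) ** 2 for v in hist) / window
        sd = sqrt(var) if var > 0 else 1e-12
        if abs(series[t] - m) / sd > thresh:
            flags.append(t)
    return flags
```

A GP-based detector plays the same game with a principled predictive variance; the scalability contribution of the paper is making that predictive computation affordable for millions of pixels.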

Harris and McQuiston (1988) developed conduction transfer function (CTF) coefficients corresponding to 41 representative wall assemblies and 42 representative roof assemblies for use with the transfer function method (TFM). They also developed a grouping procedure that allows design engineers to determine the correct representative wall or roof assembly that most closely matches a specific wall or roof assembly. The CTF coefficients and the grouping procedure have been summarized in the ASHRAE Handbook--Fundamentals (1989, 1993, 1997) and the ASHRAE Cooling and Heating Load Calculation Manual, second edition. More recently, a new, simplified design cooling load calculation procedure, the radiant time series method (RTSM), has been developed. The RTSM uses periodic response factors to model transient conductive heat transfer. While not a true manual load calculation procedure, it is quite feasible to implement the RTSM in a spreadsheet. To be useful in such an environment, it would be desirable to have a pre-calculated set of periodic response factors. Accordingly, a set of periodic response factors has been calculated and is presented in this paper.
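
Applying a pre-calculated set of periodic response factors is a small periodic convolution, which is why the RTSM fits comfortably in a spreadsheet. The stdlib-Python sketch below assumes hourly (24-term) factors driven by sol-air temperature, with illustrative names; consult the RTSM literature for the exact sign and unit conventions:

```python
def rts_conduction(prf, sol_air, t_inside):
    """Hourly conductive heat gain per unit area from 24 periodic response
    factors (PRFs): q(t) = sum_j Y_j * (T_solair(t - j) - T_inside),
    with hour indices wrapping around the day."""
    q = []
    for t in range(24):
        gain = sum(prf[j] * (sol_air[(t - j) % 24] - t_inside)
                   for j in range(24))
        q.append(gain)
    return q
```

For a constant sol-air temperature the sum collapses to (sum of the factors) times the temperature difference, a convenient sanity check on any factor set.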

We present a novel algorithm aimed at identifying peaks within a uniformly sampled time series affected by uncorrelated Gaussian noise. The algorithm, called "MEPSA" (multiple excess peak search algorithm), essentially scans the time series at different timescales by comparing a given peak candidate with a variable number of adjacent bins. While it was originally conceived for the analysis of gamma-ray burst (GRB) light curves, its usage can be readily extended to other astrophysical transient phenomena whose activity is recorded through different surveys. We tested and validated it through simulated featureless profiles as well as simulated GRB time profiles. We showcase the algorithm's potential by comparing with the popular algorithm by Li and Fenimore that is frequently adopted in the literature. Thanks to its high flexibility, the mask of excess patterns used by MEPSA can be tailored and optimised to the kind of data to be analysed without modifying the code. The C code is made publicly availabl...
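
A much-reduced sketch of the multi-timescale comparison idea, in stdlib Python, follows. This is not MEPSA's published mask of excess patterns; it flags a bin as a peak when, at some scale, it exceeds the bins that many steps to each side by a significance threshold:

```python
def excess_peaks(counts, errors, scales=(1, 2, 4), thresh=3.0):
    """Flag bin i as a peak if at some scale k it exceeds both bins k
    steps away by more than `thresh` combined standard errors."""
    peaks = []
    for i in range(len(counts)):
        for k in scales:
            lo, hi = i - k, i + k
            if lo < 0 or hi >= len(counts):
                continue
            err_lo = (errors[i] ** 2 + errors[lo] ** 2) ** 0.5
            err_hi = (errors[i] ** 2 + errors[hi] ** 2) ** 0.5
            if ((counts[i] - counts[lo]) / err_lo > thresh and
                    (counts[i] - counts[hi]) / err_hi > thresh):
                peaks.append(i)
                break   # one detection per bin is enough
    return peaks
```

MEPSA's real masks compare each candidate against configurable patterns of neighbours at each rebinning timescale; the single symmetric comparison here is the simplest member of that family.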

Fourier series analysis is eminently suitable for modeling strongly periodic data. Weather independent energy use such as lighting and equipment load in commercial buildings is strongly periodic and is thus appropriate for Fourier series treatment...
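
As a concrete illustration of the Fourier series treatment, the coefficients for one period of equally spaced data can be computed directly. This is a standard discrete Fourier fit in stdlib Python; nothing in it is specific to the paper:

```python
from math import cos, sin, pi

def fourier_fit(y, n_harmonics):
    """Fit a0 + sum_k [a_k cos(2*pi*k*t/N) + b_k sin(2*pi*k*t/N)] to one
    period of N equally spaced samples by direct coefficient formulas."""
    N = len(y)
    a0 = sum(y) / N
    coeffs = []
    for k in range(1, n_harmonics + 1):
        ak = 2.0 / N * sum(y[t] * cos(2 * pi * k * t / N) for t in range(N))
        bk = 2.0 / N * sum(y[t] * sin(2 * pi * k * t / N) for t in range(N))
        coeffs.append((ak, bk))
    return a0, coeffs

def fourier_eval(a0, coeffs, t, N):
    """Evaluate the fitted series at (possibly fractional) time t."""
    return a0 + sum(a * cos(2 * pi * (k + 1) * t / N) +
                    b * sin(2 * pi * (k + 1) * t / N)
                    for k, (a, b) in enumerate(coeffs))

# One day of a purely periodic "lighting schedule": the mean and the
# first-harmonic amplitude are recovered exactly.
day = [5 + 3 * cos(2 * pi * t / 24) for t in range(24)]
a0, coeffs = fourier_fit(day, 2)
```

For strongly periodic, weather-independent loads of the kind described above, a handful of harmonics typically reproduces the daily profile.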

Topology based analysis of time-series data from dynamical systems is powerful: it potentially allows for computer-based proofs of the existence of various classes of regular and chaotic invariant sets for high-dimensional dynamics. Standard methods are based on a cubical discretization of the dynamics, and use the time series to construct an outer approximation of the underlying dynamical system. The resulting multivalued map can be used to compute the Conley index of isolated invariant sets of cubes. In this paper we introduce a discretization that uses, by contrast, a simplicial complex constructed from a witness-landmark relationship. The goal is to obtain a natural discretization that is more tightly connected with the invariant density of the time series itself. The time-ordering of the data also directly leads to a map on this simplicial complex that we call the witness map. We obtain conditions under which this witness map gives an outer approximation of the dynamics, and thus can be used to compute the Conley index of isolated invariant sets. The method is illustrated by a simple example using data from the classical Hénon map.

Time series analysis of AERI radiances for GCM testing and improvement. Dykema, John (Harvard University); Leroy, Stephen (Harvard University); Anderson, James (Harvard University); Tobin, David (University of Wisconsin-Madison); Knuteson, Robert (University of Wisconsin); Revercomb, Henry (University of Wisconsin-Madison). Category: Radiation. High resolution infrared radiances measured by the Atmospheric Emitted Radiance Interferometer (AERI) contain detailed information about the structure and dynamics of temperature, water vapor, and clouds below 3 km. Infrared radiances also contain the signature of radiative forcing by well-mixed gases that constitutes the greenhouse effect. Direct comparison of these radiance observations to similar radiances calculated from output

One of the tasks of the Kepler Asteroseismic Science Operations Center (KASOC) is to provide asteroseismic analyses on Kepler Objects of Interest (KOIs). However, asteroseismic analysis of planetary host stars presents some unique complications with respect to data preprocessing, compared to pure asteroseismic targets. If not accounted for, the presence of planetary transits in the photometric time series often greatly complicates or even hinders these asteroseismic analyses. This drives the need for specialised methods of preprocessing data to make them suitable for asteroseismic analysis. In this paper we present the KASOC Filter, which is used to automatically prepare data from the Kepler/K2 mission for asteroseismic analyses of solar-like planet host stars. The methods are very effective at removing unwanted signals of both instrumental and planetary origins and produce significantly cleaner photometric time series than the original data. The methods are automated and can therefore easily be applied to a ...

High and Low Temperature Series Estimates for the Critical Temperature of the 3D Ising Model. Zaher. Abstract: We have analysed low and high temperature series expansions for the three-dimensional Ising model. The critical temperature of the three-dimensional (3d) Ising model on the simple cubic lattice has been exhaustively

Systematic error is a major issue in the quantitative analysis of fossil biodiversity data in paleontology. I present results of time series analysis of a new and expanded data set (the Paleobiology Database) controlled and corrected for systematic error, and find that periodicities at approximately 62 and 150 Myr reported from previous data emerge at higher significance than before. This provides increased confidence that the periodicities are not collection, sampling, or binning artifacts. Both of these timescales are interestingly close to dynamical timescales of Solar motion in the Milky Way galaxy.
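Periodicities like those reported above are conventionally located with a periodogram, i.e. the power of the series at each trial frequency. A minimal classical-periodogram sketch for an evenly sampled, mean-subtracted series (the paper's actual analysis of binned, error-corrected fossil data is considerably more involved):

```python
import math

def periodogram(y):
    """Classical DFT periodogram for an evenly sampled series.
    Returns (period_in_samples, power) pairs for each nonzero frequency."""
    n = len(y)
    mean = sum(y) / n
    out = []
    for k in range(1, n // 2 + 1):
        # projection of the detrended series onto cos/sin at frequency k/n
        c = sum((y[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        s = sum((y[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        out.append((n / k, (c * c + s * s) / n))
    return out
```

The dominant period is the entry with the largest power; significance against a null (e.g. shuffled or red-noise surrogates) must still be assessed separately, as the abstract emphasizes.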

The use of remote sensing is necessary for monitoring forest carbon stocks at large scales. Optical remote sensing, although not the most suitable technique for the direct estimation of stand biomass, offers the advantage of providing large temporal and spatial datasets. In particular, information on canopy structure is encompassed in stand reflectance time series. This study focused on the example of Eucalyptus forest plantations, which have recently attracted much attention as a result of their high expansion rate in many tropical countries. Stand scale time series of Normalized Difference Vegetation Index (NDVI) were obtained from MODIS satellite data after a procedure involving un-mixing and interpolation, on about 15,000 ha of plantations in southern Brazil. The comparison of the planting date of the current rotation (and therefore the age of the stands) estimated from these time series with real values provided by the company showed that the root mean square error was 35.5 days. Age alone explained more than 82% of stand wood volume variability and 87% of stand dominant height variability. Age variables were combined with other variables derived from the NDVI time series and simple bioclimatic data by means of linear (Stepwise) or nonlinear (Random Forest) regressions. The nonlinear regressions gave r-square values of 0.90 for volume and 0.92 for dominant height, and an accuracy of about 25 m3/ha for volume (15% of the average volume) and about 1.6 m for dominant height (8% of the average height). The improvement from including NDVI and bioclimatic data comes from the fact that the cumulative NDVI since the planting date integrates the interannual variability of leaf area index (LAI), light interception by the foliage, and growth due, for example, to variations in seasonal water stress.
The accuracy of biomass and height predictions was strongly improved by using the NDVI integrated over the first two years after planting, which are critical for stand establishment. These results open perspectives for cost-effective monitoring of biomass at large scales in intensively-managed plantation forests.

In this paper we present one application included in the Project @DAN, an AdvANced and highly secure mobile platform to support the digital economy, which started in November 2001 and is financed by the EU IST-2001-...

The application of high-resolution radar waveform and interferometric principles recently led to the development of a microwave interferometer suitable for simultaneously measuring the (static or dynamic) deflection of several points on a large structure. From the technical standpoint, the sensor is a Stepped Frequency Continuous Wave (SF-CW) coherent radar operating in the Ku frequency band. In the paper, the main procedures adopted to extract the deflection time series from raw radar data and to assess the quality of the data are addressed, and the MATLAB toolbox developed is described. Subsequently, other functions implemented in the software tool (e.g. evaluation of the spectral matrix of the deflection time-histories, identification of natural frequencies and evaluation of operational mode shapes) are described, and the application to data recorded on full-scale bridges is exemplified.

Ignition implosions on the National Ignition Facility (NIF) [Lindl et al., Phys. Plasmas 11, 339 (2004)] are driven with a very carefully tailored sequence of four shock waves that must be timed to very high precision in order to keep the fuel on a low adiabat. The first series of precision tuning experiments on NIF have been performed. These experiments use optical diagnostics to directly measure the strength and timing of all four shocks inside the hohlraum-driven, cryogenic deuterium-filled capsule interior. The results of these experiments are presented demonstrating a significant decrease in the fuel adiabat over previously un-tuned implosions. The impact of the improved adiabat on fuel compression is confirmed in related deuterium-tritium (DT) layered capsule implosions by measurement of fuel areal density (ρR), which show the highest fuel compression (ρR ≈ 1.0 g/cm²) measured to date.

Semi-empirical models for series fan-powered variable air volume terminal units (FPTUs) were developed based on models of the primary, plenum, fan airflow and the fan power consumption. The experimental setups and test procedures were developed...

is the activity coefficient (db/dz) for the annual mass balance calculation. Comparison between measured velocity, the mass balance calculated from the area outline of exposed ice, and the ELA time series. The output

A tourism destination is a complex dynamic system. As such it requires specific methods and tools to be analyzed and understood in order to better tailor governance and policy measures for steering the destination along an evolutionary growth path. Many proposals have been put forward for the investigation of complex systems and some have been successfully applied to tourism destinations. This paper uses a recent suggestion, that of transforming a time series into a network, and analyzes it with the objective of uncovering the structural and dynamic features of a tourism destination. The algorithm, called visibility graph, is simple and its implementation straightforward, yet it is able to provide a number of interesting insights. An example is worked out using data from two destinations: Italy as a country and the island of Elba, one of its best-known areas.
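The visibility graph mentioned above has a compact definition: two observations are linked if every observation between them lies strictly below the straight line joining them. A direct quadratic-time Python sketch (the edge-set representation is my choice, not the paper's):

```python
def visibility_graph(y):
    """Natural visibility graph of a time series: bins a and b see each
    other if every intermediate bin lies strictly below the straight line
    joining (a, y[a]) and (b, y[b]).  Returns edges as (a, b) with a < b."""
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            # value of the line through (a, y[a]) and (b, y[b]) at c
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

Properties of the resulting graph (degree distribution, clustering) then serve as structural descriptors of the original series.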

Wind data at time scales from 10 min to 1 h are an important input for modeling the performance of wind farms and their impact on many countries' national electricity systems. Planners need long-term realistic (i.e., meteorologically spatially and ...

Analysis of over 36 yr of time series data from the NSO/AFRL/Sac Peak K-line monitoring program elucidates five components of the variation of the seven measured chromospheric parameters: (a) the solar cycle (period ≈ 11 yr), (b) quasi-periodic variations (periods ≈ 100 days), (c) a broadband stochastic process (wide range of periods), (d) rotational modulation, and (e) random observational errors, independent of (a)-(d). Correlation and power spectrum analyses elucidate periodic and aperiodic variation of these parameters. Time-frequency analysis illuminates periodic and quasi-periodic signals, details of frequency modulation due to differential rotation, and in particular elucidates the rather complex harmonic structure of (a) and (b) at timescales in the range ≈ 0.1-10 yr. These results using only full-disk data suggest that similar analyses will be useful for detecting and characterizing differential rotation in stars from stellar light curves such as those being produced by NASA's Kepler observatory. Component (c) consists of variations over a range of timescales, in the manner of a 1/f random process with a power-law slope index that varies in a systematic way. A time-dependent Wilson-Bappu effect appears to be present in the solar cycle variations (a), but not in the more rapid variations of the stochastic process (c). Component (d) characterizes differential rotation of the active regions. Component (e) is of course not characteristic of solar variability, but the fact that the observational errors are quite small greatly facilitates the analysis of the other components. The data analyzed in this paper can be found at the National Solar Observatory Web site http://nsosp.nso.edu/cak_mon/, or by file transfer protocol at ftp://ftp.nso.edu/idl/cak.parameters.

of numerical time-series data. The modern world is being flooded with such data. For example, a typical gas-turbine ... summaries of data currently must be written by people. The goal of SumTime was to develop technology ... worked in three domains: weather forecasts, summaries of gas-turbine sensor data, and summaries of sensor

Comparing modern and Pleistocene ENSO-like influences in NW Argentina using nonlinear time series ... of 10^6 m^3 occurred in the arid to semiarid intra-Andean basins of northwestern Argentina (Strecker ... Argentina are not well known for the period at around 30,000 14C years ago. Marine and terrestrial records

Human reaction time has a substantial effect on modeling of human behavior at a microscopic level. Drivers and pedestrians do not react to an event instantaneously; rather, they take time to perceive the event, process the ...

......optimum value through a grid-search algorithm...method outperformed TD for estimating the aggregate data series...variable, there is no benefit of forecasting each subaggregate...forecasting strategies in estimating the `component'-level...WILLEMAIN, T. R., SMART, C. N., SHOCKOR......

......of independent arguments can help to decide about the stochastic or deterministic nature of the data source. We have to pro-ceed with caution when dealing with empirical data as the corresponding series is finite in length, sampled at a finite rate, contaminated......

NASA/TM-2012-104606/Vol 30, Technical Report Series on Global Modeling, 20771, December 2012. Since its founding, NASA has been dedicated to the advancement of aeronautics and space science. The NASA scientific and technical information (STI) program plays a key part

F2010-B-107 MODELING OF THE THS-II SERIES/PARALLEL POWER TRAIN AND ITS ENERGY MANAGEMENT SYSTEM - Hybrid power train, power-split eCVT, rule-based control strategy, Toyota Hybrid System, driver ... the challenge of minimizing the consumption of road transport. Although hybrid power train technologies did

for the series and parallel units, with coefficients varying by size and manufacturer. Statistical modeling utilized SAS software (2002). Fan power and airflow data were collected at downstream static pressures over a range from 0.1 to 0.5 in. w.g. (25 to 125 Pa...

MODEL STUDY OF SHORELINE CHANGES DUE TO A SERIES OF OFFSHORE BREAKWATERS. A Thesis by DONALD ALAN CORDS. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 1986. Major Subject: Ocean Engineering. Approved as to style and content by: ... (Chairman), ... (Member), ...

of the (random) data matrix. In [1], the idea of modeling Big Data with (large) random matrices was proposed in the framework of Big Data. This series of works [1]-[3] clearly articulated the ideas and treated the necessary ... Modeling Massive Amounts of Experimental Data with Large Random Matrices in Real-Time UWB

Abstract--In this study, we analyzed a dataset of time-series vital-signs data collected, USAMRMC, Frederick, MD 21702, and with the Nuclear Engineering Department of the University of Tennessee and diagnostic decision-aid methods, we reviewed a dataset of time-series vital-signs data measured by a standard

Propagation of monochromatic extraordinary light in a hyperbolic metamaterial is identical to propagation of massive particles in a three dimensional effective Minkowski spacetime, in which the role of a timelike variable is played by one of the spatial coordinates. We demonstrate that this analogy may be used to build a metamaterial model of a time crystal, which has been recently suggested by Wilczek and Shapere. It is interesting to note that the effective single-particle energy spectrum in such a model does not contain a static ground state, thus providing a loophole in the proof of time crystal non-existence by P. Bruno.

In this study, a hybrid synergy model integrating exponential smoothing and a neural network is proposed for financial ... attempts to incorporate the linear characteristics of an exponential smoothing model and no...
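The linear component referred to above can be illustrated with simple exponential smoothing, the most basic member of the exponential smoothing family (the hybrid model in the record is more elaborate; this sketch is mine):

```python
def exp_smooth_forecast(y, alpha):
    """Simple exponential smoothing: the level is an exponentially
    weighted average of past observations, and the one-step-ahead
    forecast equals the current level.  alpha in (0, 1] controls how
    quickly old observations are forgotten."""
    level = y[0]  # initialise the level at the first observation
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level
```

In a hybrid scheme of the kind described, a model like this would capture the linear, slowly varying level, while a neural network is fitted to the residuals.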

This paper analyzes world oil production data as a population/resource growth model. Both US and world oil production data are analyzed in terms of ... , is not a suitable model for world oil production. A flexib...

Existing recommender systems model user interests and the social influences independently. In reality, user interests may change over time, and as the interests change, new friends may be added while old friends grow apart and the new friendships formed ... Keywords: collaborative filtering, personalization, recommendation, social trust

......exponential time to failure data NANCY R. MANN FRANK E. GRUBBS Rocketdyne, North American Rockwell Corporation U.S. Army Aberdeen...reliability for exponential time to failure data BY NANCY R. MANN Rocketdyne, North American Rockwell Corporation AND FRANK E. GRUBBS......

This paper deals with the scheduling of trains so as to minimise train trip times, whilst maximising reliability of arrival times. The amount of risk of delay associated with a schedule is used as the reliability component of a constrained schedule optimisation model. The paper outlines the model developed to quantify the risk of delays to individual trains, as well as to specific track segments and to the schedule as a whole. The risk model, which deals with single track operations, can be used to estimate the likely impact on reliability of arrival times of changes in train frequencies and operating practices; track and station infrastructure investment strategies; and train technology upgrading. An application of the model to the optimisation of schedules on a track corridor is described. The results obtained using the model are compared with the schedules used by train operations planning staff, in terms of overall delay and timetable reliability. The results highlight the significance of including a measure of timetable reliability, such as risk of delays, in the objective function for scheduling optimisation.

We present an improved quantum defect theory model for the s, p, d, and f Rydberg series of CaF. The model, which is the result of an exhaustive fit of high-resolution spectroscopic data, parameterizes the electronic ...

An actuation system for flexible control of an advanced turbocharging system is studied. It incorporates a vacuum pump and tank that are connected to pulse width modulation controlled vacuum valves. A methodology for modeling the entire boost pressure actuation system is developed. Emphasis is placed on developing component models that are easily identified from measured data, without the need for expensive measurements. The models have physical interpretations that enable handling of varying surrounding conditions. The component models and integrated system are evaluated on a two stage series sequential turbo system with three actuators having different characteristics. Several applications of the developed system model are presented, including a nonlinear compensator for voltage disturbance rejection where the performance of the compensator is demonstrated on an engine in a test cell. The applicability of the complete system model for control and diagnosis of the vacuum system is also discussed.

PM2.5 concentration at a resolution of 1° x 1°. The GEOS-Chem chemical transport model (CTM) is used to relate ... aerosol optical depth (AOD) satellite retrievals, of global chemical transport models (CTMs) and of ground ... and respiratory morbidity. The WHO air quality guideline (AQG) for PM2.5 of 10 µg m-3 is surpassed in most

The traditional influence coefficient dynamic balancing method for multi-rotor series shafting, such as turbine-generator sets, gas turbines, compressor trains and others, usually needs many startups using trial weights along the rotor. Based on finite element model analysis of the multi-rotor series shafting, a virtual dynamic balancing methodology is developed in this paper which only needs vibration response data collected at operating speed, without trial weights. According to the shafting structure and operating parameters, the dynamic finite element model was built using rotor dynamics theory and finite element simulation technology. The shafting dynamic characteristics and the weighted influence coefficient matrix can be obtained by applying a virtual unbalance force at the corresponding balance plane. The effectiveness and flexibility of the proposed method have been illustrated by solving a shafting dynamic balancing example with no trial-weight requirements. It is believed that the new methods developed in this work will help in reducing the time and cost of equipment-manufacturer or field dynamic balancing procedures.

We investigate the properties of the FLRW flat cosmological models in which the vacuum energy density evolves with time, $\\Lambda(t)$. Using different versions of the $\\Lambda(t)$ model, namely quantum field vacuum, power series vacuum and power law vacuum, we find that the main cosmological functions such as the scale factor of the universe, the Hubble expansion rate $H$ and the energy densities are defined analytically. Performing a joint likelihood analysis of the recent supernovae type Ia data, the Cosmic Microwave Background (CMB) shift parameter and the Baryonic Acoustic Oscillations (BAOs) traced by the Sloan Digital Sky Survey (SDSS) galaxies, we put tight constraints on the main cosmological parameters of the $\\Lambda(t)$ scenarios. Furthermore, we study the linear matter fluctuation field and the growth rate of clustering of the above vacuum models. Finally, we derived the theoretically predicted dark-matter halo mass function and the corresponding distribution of cluster-size halos for all the models studied. Their expected redshift distribution indicates that it will be difficult to distinguish the closely resembling models (constant vacuum, quantum field and power-law vacuum), using realistic future X-ray surveys of cluster abundances. However, cluster surveys based on the Sunyaev-Zeldovich detection method give some hope to distinguish the closely resembling models at high redshifts.

We have measured the fluorescence spectrum for fluorescein solution in ethanol with concentration 1 × 10^-3 mol/liter at different temperatures from room temperature down to the freezing point of the solvent (T = 153, 183, 223, 253, and 303 K) using liquid nitrogen. The TableCurve 2D version 5.01 program has been used to determine the fitting curve and fitting equation for each fluorescence spectrum. A Fourier series (3 × 2) was the most suitable fitting equation for all spectra. The theoretical fluorescence spectrum of fluorescein in ethanol at T = 183 K was calculated and compared with the experimental fluorescence spectrum at the same temperature. There is good similarity between them.

, Phragmites australis, Remote sensing, SPOT-5. Abstract: The Camargue, the Rhône river delta in the south of France ... to model the presence of common reed (Phragmites australis) stands in the Camargue. The development

and relatively clean sites through the time range before and during their clean-up periods to see how the air quality may affect the precipitation amount. By comparing the annual precipitation amount between two polluted sites with different elevations we...

High precision uranium isotope measurements of marine clastic sediments are used to measure the transport and storage time of sediment from source to site of deposition. The approach is demonstrated on fine-grained, late Pleistocene deep-sea sediments from Ocean Drilling Program Site 984A on the Bjorn Drift in the North Atlantic. The sediments are siliciclastic with up to 30 percent carbonate, and dated by δ18O of benthic foraminifera. Nd and Sr isotopes indicate that provenance has oscillated between a proximal source (volcanic rocks from Iceland) during the last three interglacial periods and a distal continental source during glacial periods. An unexpected finding is that the 234U/238U ratios of the silicate portion of the sediment, isolated by leaching with hydrochloric acid, are significantly less than the secular equilibrium value and show large and systematic variations that are correlated with glacial cycles and sediment provenance. The 234U depletions are inferred to be due to alpha-recoil loss of 234Th, and are used to calculate "comminution ages" of the sediment: the time elapsed between the generation of the small (...) particles and the time of deposition on the seafloor. Transport times, the difference between comminution ages and depositional ages, vary from less than 10 ky to about 300 to 400 ky for the Site 984A sediments. Long transport times may reflect prior storage in soils, on continental shelves, or elsewhere on the seafloor. Transport time may also be a measure of bottom current strength. During the most recent interglacial periods the detritus from distal continental sources is diluted with sediment from Iceland that is rapidly transported to the site of deposition. The comminution age approach could be used to date Quaternary non-marine sediments, soils, and atmospheric dust, and may be enhanced by concomitant measurement of 226Ra/230Th, 230Th/234U, and cosmogenic nuclides.

and for model comparisons that would never have been possible from the available ship data the dynamic nature of the northeastern Caribbean, underscoring the significant effect of periodic intrusions inaccuracies in ocean color algorithms for particular regions. In this paper a large set of ship and satellite

The solar influence on global climate is nonstationary. Processes such as the Schwabe and Gleissberg cycles of the Sun, or the many intrinsic atmospheric oscillation modes, yield a complex pattern of interaction with multiple time scales. In addition, emissions of greenhouse gases, aerosols, or volcanic dust perturb the dynamics of this coupled system to different and still uncertain extents. Here we show, using two independent driving force reconstruction techniques, that the combined effect of greenhouse gases and aerosol emissions has been the main external driver of global climate during the past decades.

We provide a free, open-source toolbox for nonlinear time series analyses. The major goal of this project was to provide a toolbox for nonlinear ... . The toolbox can be run within the MATLAB environment, but al...

Cosines The data this time will be the Motorcycle Acceleration Data: A data frame giving a series of measurements of head acceleration in a simulated motorcycle accident, used to test crash helmets. Usage: data

From the inversion of a time series of high resolution slit spectrograms obtained from the quiet Sun, the spatial and temporal distribution of the thermodynamical quantities and the vertical flow velocity is derived as a function of logarithmic optical depth and geometrical height. Spatial coherence and phase shift analyses between temperature and vertical velocity depict the height variation of these physical quantities for structures of different size. An average granular cell model is presented, showing the granule-intergranular lane stratification of temperature, vertical velocity, gas pressure and density as a function of logarithmic optical depth and geometrical height. Studies of a specific small and a specific large granular cell complement these results. A strong decay of the temperature fluctuations with increasing height together with a less efficient penetration of smaller cells is revealed. The T-T coherence at all granular scales is broken already at log τ = -1 or z ≈ 170 km. At the layers beyon...

An acoustic input is recognized from inferred articulatory movements output by a learned relationship between training acoustic waveforms and articulatory movements. The inferred movements are compared with template patterns prepared from training movements when the relationship was learned to regenerate an acoustic recognition. In a preferred embodiment, the acoustic articulatory relationships are learned by a neural network. Subsequent input acoustic patterns then generate the inferred articulatory movements for use with the templates. Articulatory movement data may be supplemented with characteristic acoustic information, e.g. relative power and high frequency data, to improve template recognition.

Predicting Time-Delays under Real-Time Scheduling for Linear Model Predictive Control. Zhenwu Shi ... prediction of time-delays caused by real-time scheduling. Then, a model predictive controller is designed ... the interaction between real-time scheduling and control design has received interest in the literature

This study estimates global time-series consumption-based GHG emissions by region from 1990 to 2005, including both CO2 and non-CO2 GHG emissions. Estimations are conducted for the whole economy and for two specific sectors: manufacturing and agriculture. Especially in the agricultural sector, it is important to include non-CO2 GHG emissions because these are the major emissions present. In most of the regions examined, the improvements in GHG intensities achieved in the manufacturing sector are larger than those in the agricultural sector. Compared with developing regions, most developed regions have consistently larger per-capita consumption-based GHG emissions over the whole economy, as well as higher production-based emissions. In the manufacturing sector, differences calculated by subtracting production-based emissions from consumption-based GHG emissions are determined by the regional economic level while, in the agricultural sector, they are dependent on regional production structures that are determined by international trade competitiveness. In the manufacturing sector, these differences are consistently and increasingly positive for the U.S., EU15 and Japan but negative for developing regions. In the agricultural sector, the differences calculated for the major agricultural importers like Japan and the EU15 are consistently positive while those of exporters like the U.S., Australia and New Zealand are consistently negative.
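The differences discussed above follow from the standard consumption-based accounting identity: consumption-based emissions equal production-based emissions plus emissions embodied in imports minus emissions embodied in exports. A minimal sketch (the function name and example numbers are illustrative, not from the study):

```python
def consumption_based_emissions(production, embodied_in_imports, embodied_in_exports):
    """Consumption-based GHG emissions for a region: emissions released
    by domestic production, plus emissions embodied in imported goods,
    minus emissions embodied in exported goods (all in the same units,
    e.g. Mt CO2e)."""
    return production + embodied_in_imports - embodied_in_exports
```

The "difference" the study reports (consumption-based minus production-based) therefore reduces to net emissions embodied in trade, which is why it is positive for net importers like Japan and the EU15 and negative for net exporters.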

A time series of burned land areas was generated for a 23 year period (1984-2006) using 10-day composites of AVHRR data. The study area covers 1.6 million km² of boreal forest in western Canada. The algorithm was intended to be consistent throughout the study period and region, and to avoid commission errors, so as to obtain a reliable sample of temporal trends in burned area in the region. The algorithm relies on temporal comparisons of several spectral indices (GEMI, BAI), as well as near infrared reflectance. It emphasizes the stability of the post-fire signal, to avoid false detections associated with cloud, cloud shadows, missed data and radiometric or geometric calibration differences between AVHRR sensors. Final results show a very consistent temporal adjustment to official statistics and fire perimeters, with very low commission error (
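A toy version of the temporal-comparison-plus-stability test described above can be sketched with the Burned Area Index (BAI). The reference reflectances follow the commonly cited Chuvieco and Martín formulation of BAI; the jump factor and persistence window are illustrative assumptions, not the paper's calibrated thresholds:

```python
def bai(red, nir):
    """Burned Area Index: inverse squared distance in (red, NIR)
    reflectance space to a reference burn point (red=0.1, nir=0.06);
    higher values are more burn-like."""
    return 1.0 / ((red - 0.1) ** 2 + (nir - 0.06) ** 2)

def burned(series, jump=2.0, persist=3):
    """Flag a pixel as burned at composite t when its index jumps by a
    factor `jump` over the previous composite AND the post-fire signal
    stays elevated for `persist` composites (the stability test that
    suppresses cloud/shadow false detections).  Returns t or None."""
    for t in range(1, len(series) - persist + 1):
        baseline = series[t - 1]
        if series[t] >= jump * baseline:
            if all(s >= jump * baseline for s in series[t:t + persist]):
                return t
    return None
```

A transient cloud shadow produces a one-composite spike that fails the persistence check, whereas a real burn scar keeps the index elevated for many composites.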

Introduction to Ocean Station Time Series CD-ROM. The National Oceanographic Data Center (NODC) ocean station data, including density and nutrients, are now available on one CD-ROM (originally distributed in 1993). The CD-ROM contents include a complete record layout of the SD2 format and the code tables used with it. This CD-ROM also holds

Three solvable models are set out in some detail in reviewing different types of phase transitions. Two of these relate directly to emergent critical phenomena, viz. melting and magnetic transitions in heavy rare-earth metals, and secondly, via the $3d$ Ising model, to critical behaviour in an insulating ferromagnet such as CrBr$_3$. The final `transition', however, concerns ionization of an electron in an isoelectronic series with $N$ electrons as the atomic number $Z$ is reduced below that of the neutral atom. These solvable models are, throughout, brought into contact either with experiment, or with very precise numerical modelling on real materials.

Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms* Chenyang Lu, John A. ...@virginia.edu. Abstract: This paper presents a feedback control real-time scheduling (FCS) framework for adaptive real-time systems. In particular, we establish a dynamic model and performance analysis of several feedback control scheduling algorithms.

The free fermionic construction of the heterotic string in four dimensions produced a large space of three generation models with the underlying $SO(10)$ embedding of the Standard Model states. The $SO(10)$ symmetry is broken to a subgroup directly at the string scale. Over the past few years free fermionic models with the Pati-Salam and flipped $SU(5)$ subgroups have been classified. In this paper we extend this classification program to models in which the $SO(10)$ symmetry is broken at the string level to the $SU(4)\\times SU(2)_L\\times U(1)_R$ (SU421) subgroup. The subspace of free fermionic models that we consider corresponds to symmetric ${\\mathbb{Z}}_2 \\times {\\mathbb{Z}}_2$ orbifolds. We provide a general argument that shows that this class of SU421 free fermionic models cannot produce viable three generation models.

Model Checking Timed CSP. Philip Armstrong, Gavin Lowe, Joël Ouaknine, A.W. Roscoe, Oxford University Department of Computer Science. Abstract: Though Timed CSP was developed 25 years ago, no model-checking tool previously supported it. In this paper we report on the creation of such a version, based on the digitisation results

Abstract The stationarity properties of natural gas consumption are essential for predicting the impacts of exogenous shocks on energy demand, which can help in modeling the energy-growth nexus. This paper therefore investigates the panel unit root properties of natural gas consumption for 48 countries over the period 1971-2010. We apply the Harvey et al. [69] linearity test in order to determine the appropriate type of unit root test (the Kruse (2010) nonlinear unit root test or LM (Lagrange Multiplier) linear unit root tests). Our results show that stationarity of natural gas consumption cannot be rejected for more than 60% of the countries. To provide corroborating evidence, we employed not only first- and second-generation panel unit root tests, but also the recent LM panel unit root test developed by Im et al. [28], which allows for structural breaks in both intercept and slope. The empirical findings support stationarity of natural gas consumption for all panels. These results indicate that any shock to natural gas consumption has a transitory impact for almost all countries, implying that energy consumption will revert to its time trend.
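The distinction between transitory and permanent shocks can be illustrated with a toy AR(1) impulse response; this generic sketch is not the Harvey, Kruse, or Im et al. test used in the paper. A stationary series (|phi| < 1) forgets a shock geometrically, while a unit-root series (phi = 1) carries it forever:

```python
# Impulse response of an AR(1) process y_t = phi * y_{t-1} + shock_t.
# For |phi| < 1 a one-off unit shock decays (transitory); for phi = 1 it persists.
def impulse_response(phi, horizon):
    """Effect of a unit shock at t=0 on y_t, for t = 0..horizon-1."""
    return [phi ** t for t in range(horizon)]

transitory = impulse_response(0.6, 10)   # decays toward zero
permanent = impulse_response(1.0, 10)    # stays at 1 forever
```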

Higher variability in rainfall and river discharge could be of major importance in landslide generation in the north-western Argentine Andes. Annual layered (varved) deposits of a landslide dammed lake in the Santa Maria Basin (26 deg S, 66 deg W) with an age of 30,000 14C years provide an archive of precipitation variability during this time. The comparison of these data with present-day rainfall observations tests the hypothesis that increased rainfall variability played a major role in landslide generation. A potential cause of such variability is the El Nino/Southern Oscillation (ENSO). The causal link between ENSO and local rainfall is quantified by using a new method of nonlinear data analysis, the quantitative analysis of cross recurrence plots (CRP). This method seeks similarities in the dynamics of two different processes, such as an ocean-atmosphere oscillation and local rainfall. Our analysis reveals significant similarities in the statistics of both modern and palaeo-precipitation data. The similarities in the data suggest that an ENSO-like influence on local rainfall was present at around 30,000 14C years ago. Increased rainfall, which was inferred from a lake balance modeling in a previous study, together with ENSO-like cyclicities could help to explain the clustering of landslides at around 30,000 14C years ago.
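A cross recurrence plot of the kind used here marks the times at which two series visit similar states; a minimal sketch (the series and threshold are arbitrary toy values, not the rainfall data):

```python
# Cross recurrence plot: R[i][j] = 1 when series x at time i and series y
# at time j are within a distance threshold eps of each other.
def cross_recurrence(x, y, eps):
    return [[1 if abs(xi - yj) <= eps else 0 for yj in y] for xi in x]

x = [0.0, 1.0, 2.0, 1.0]
y = [0.1, 1.1, 2.2, 0.9]
R = cross_recurrence(x, y, eps=0.3)
# Dense diagonal structure indicates similar dynamics in the two series;
# the recurrence rate summarizes the overall density of matches.
rate = sum(map(sum, R)) / (len(x) * len(y))
```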

to produce theoretical moment maps, which allow for the study of radar characteristics and limitations given signal-processing techniques, and which help identify and scrutinize factors that may have been overlooked corresponding to a given spectral shape. Numerous statistical studies were made possible using this simulation

studies that our proposed method performs well. Finally, we apply our method to a dataset of seismic waves recorded near the Russian nuclear test facility in Novaya Zemlya, addressing the problem of discriminating between nuclear explosions and earthquakes. This latter problem is of critical importance for monitoring

change in the volatility of five Asian and U.S. stock markets is examined during the post-liberalization period (1990-2005) in the Asian financial markets, using the Sup LM test. Four Asian financial markets (Hong Kong, Japan, Korea, and Singapore...

Health monitoring of operating wind turbines is challenging, as those structures are characterized by complex dynamics. Several vibration-based fault detection methods are analyzed and compared on the problem of fault detection for operating wind turbines.

This paper discusses a new cellular neural network model of the time-coding pathway of sound localization. The key feature of the model is lateral inhibition, which is supposed to play a crucial role in sound localization. The possible role of this inhibition ...
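Lateral inhibition of the kind invoked above lets each unit suppress its neighbours, sharpening contrasts; a generic one-dimensional sketch (the weight and input signal are arbitrary, not the paper's CNN template):

```python
# One-dimensional lateral inhibition: each unit's output is its input
# minus a fraction w of its neighbours' inputs, which sharpens edges.
def lateral_inhibition(x, w=0.4):
    out = []
    for i, xi in enumerate(x):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < len(x) - 1 else 0.0
        out.append(xi - w * (left + right))
    return out

# A step input: the response overshoots at the edge and dips just before it,
# the classic edge-enhancement signature of lateral inhibition.
signal = [0.0, 0.0, 1.0, 1.0, 1.0]
response = lateral_inhibition(signal)
```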

We discuss the matrix model in a class of 11D time-dependent supersymmetric backgrounds as obtained in hep-th/0508191. We construct the matrix model action through the matrix regularization of the membrane action in the background. We show that the action is exact to all orders in the fermionic coordinates. Furthermore, we discuss the fuzzy sphere solutions in this background.

A computational method for performance analysis of the switched reluctance motor has been developed. Most papers present static characteristics, such as static torque, based on a finite element method analysis. This method, however, is based on a Fourier series expansion with step-by-step, current-dependent adjustable coefficients. Analytical expressions for the calculation of instantaneous phase inductance, flux linkage, co-energy, and electromagnetic torque as functions of rotor position and winding current are derived. Finally, the winding inductance variation with position is represented by approximating the actual inductance profile.
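A truncated Fourier-series representation of phase inductance versus rotor position, as described, can be sketched as follows (the coefficients and pole count are hypothetical placeholders; in the paper the coefficients are additionally current-dependent):

```python
import math

# Phase inductance as a truncated Fourier series in rotor position theta:
#   L(theta) = L0 + sum_k Lk * cos(k * Nr * theta),  Nr = number of rotor poles.
# L0 and the coefficients Lk below are made-up values for illustration only.
def inductance(theta, l0=0.010, coeffs=(0.004, 0.001), rotor_poles=8):
    return l0 + sum(
        lk * math.cos((k + 1) * rotor_poles * theta)
        for k, lk in enumerate(coeffs)
    )

aligned = inductance(0.0)            # all cosines = 1: maximum inductance
unaligned = inductance(math.pi / 8)  # fundamental flips sign: lower inductance
```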

The purpose of this exploratory case study was to describe an expert teacher's decision-making system during interactive instruction using teacher self-report information, classroom observation data, and physiological recordings. Timed recordings...

The independent test data were used to measure omission rates, and 100 minus the omission rate was used as a measure of model quality. For each species, Kruskal-Wallis ANOVA tests were used to test differences between control and experimental groups and among... error. Error bars depict standard error. For explanation of model names (X axis), see Tables 1 and 2. For Aphelocoma californica, the topography-only and the topography-plus-climate groups of experiments were significantly different (Kruskal-Wallis...

of the 3D Ising Model. Zaher Salman and Joan Adler, Department of Physics. Our analysis of Butera and Comi's new 32-term series... The critical temperature of the three-dimensional (3d) Ising model on the simple cubic lattice has been exhaustively

Several models exist to predict the time-dependent behavior of buoyant puffs that result from explosions. This paper presents a new model derived from the strong conservative form of the conservation partial differential equations, which are integrated over space to yield a coupled system of time-dependent nonlinear ordinary differential equations. This model permits the cloud to evolve from an initial spherical shape rather than an ellipsoidal shape. It ignores the Boussinesq approximation, and treats the turbulence generated by the puff itself and the ambient atmospheric turbulence as separate mechanisms in determining the puff history. The puff cloud rise history was found to depend not only on the mass and initial temperature of the explosion, but also on the stability conditions of the ambient atmosphere. This model was calibrated by comparison with the Roller Coaster experiments.

Plant safety as well as plant availability can be significantly improved if functions such as data validation, plant state verification, and fault identification are automated. A methodology for automation of these functions was presented in an earlier paper. To implement this methodology, plant models that run significantly faster than real transient time are needed. Such models for the intermediate heat exchanger and a once-through liquid-metal fast breeder reactor (LMFBR) steam generator have been presented. This paper discusses the modeling of LMFBR core transients. It is shown that, with a proper choice of shape functions, a nodal approximation of the coolant, cladding, and fuel temperature distributions leads to adequately accurate power and temperature predictions, as well as adequately short computation times. From the point of view of operational safety, it is desirable to terminate a transient before sodium boiling is initiated in the core. Thus, only the modeling of the preboiling phase of core transients is discussed.

In this paper, we analyze a two-coupled-fluids model by investigating several solutions for an accelerated universe in flat FRW space-time. One of the fluids can be identified with the matter, and the model also possesses the standard matter solution. Beyond the removal of the coincidence problem, we will see how the coupling may change the description of the energy contents of the universe and which features can be acquired with respect to the standard decoupled cases.

There is considerable interest within government agencies and the energy industries across the globe to further advance the clean and economical conversion of coal into liquid fuels to reduce our dependency on imported oil. To date, advances in these areas have been largely based on experimental work. Although there are some detailed systems-level performance models, little work has been done on numerical modeling of the component-level processes. If accurate models are developed, then significant R&D time might be saved, new insights into the process might be gained, and good predictions of process performance can be made. One such area is the characterization of slag deposition and flow on the gasifier walls. Understanding slag rheology and slag-refractory interactions is critical to the design and operation of gasifiers with extended refractory lifetimes, and also to better control of operating parameters so that overall gasifier performance with extended service life can be optimized. In the present work, the literature on slag flow modeling was reviewed and a model similar to Seggiani's was developed to simulate the time-varying slag accumulation and flow on the walls of a Prenflo coal gasifier. This model was further extended and modified to simulate a refractory-wall gasifier, including heat transfer through the refractory wall with flowing slag in contact with the refractory. The model was used to simulate temperature-dependent slag flow using rheology data from our experimental slag testing program. These modeling results as well as experimental validation are presented.

We generalize a recently proposed small-energy expansion for one-dimensional quantum-mechanical models. The original approach was devised to treat symmetric potentials, and here we show how to extend it to non-symmetric ones. The present approach is based on matching the logarithmic derivatives of the left and right solutions to the Schr\\"odinger equation at the origin (or any other point chosen conveniently). As in the original method, each logarithmic derivative can be expanded in a small-energy series by straightforward perturbation theory. We test the new approach on four simple models, one of which is not exactly solvable. The perturbation expansion converges in all the illustrative examples, so that one obtains the ground-state energy with an accuracy determined by the number of available perturbation corrections.

The goal of the previous research during 1987-1990 within the DOE (Department of Energy) Shelf Edge Exchange Processes (SEEP) program in the Mid-Atlantic Bight was to understand the physical and biogeochemical processes affecting the diffusive exchange of the proxies of energy-related by-products associated with particulate matter between estuarine, shelf, and slope waters on this continental margin. As originally envisioned in the SEEP program plan, SEEP-III would take place at Cape Hatteras to study the advective exchange of materials by a major boundary current. One problem of continuing interest is the determination of the local assimilative capacity of slope waters and sediments off the eastern seaboard of the US to lengthen the pathway between potentially harmful energy by-products and man. At basin scales, realistic specification of the lateral transport by western boundary currents of particulate matter is a necessary input to global models of carbon/nitrogen cycling. Finally, at these global scales, the generic role of continental margins in cycling greenhouse gases, e.g. CO{sub 2}, CH{sub 4}, and N{sub 2}O, is now of equal interest. This continuing research of model construction and evaluation within the SEEP program focuses on all three questions at local, regional, and basin scales. Results from SEEP-I and II are discussed as well as plans for SEEP-III. 14 figs., 3 tabs.

Abstract In water, chlorine reacts with nitrogen-containing compounds to produce disinfection by-products such as nitrogen trichloride, which induces ocular and respiratory irritation in swimming pool workers. This study proposes a model to predict variations in NCl3 concentration over time in a traditional indoor swimming pool as a function of its operating parameters and attendance. The model was developed taking into consideration the reaction mechanisms, thermodynamic equilibria, physico-chemical properties, and transfer mechanisms occurring at the pool's surface. It was validated through a robust series of experiments over two days and two nights in a real swimming pool, and was found to satisfactorily predict variations over time in the concentrations of the chemical species investigated, including nitrogen trichloride. The work presented constitutes a first step toward extending the model to different swimming pools. This approach may also be used to study the influence of the main operating parameters and to evaluate the impact of installing water treatment systems on nitrogen trichloride concentration.

Abstract In this paper, the first of a series of papers on battery fuel gauges (BFG), we present a real-time parameter estimation strategy for robust state of charge (SOC) tracking. The proposed parameter estimation scheme has the following novel features: it models hysteresis as an error in the open circuit voltage (OCV) and employs a combination of real-time, linear parameter estimation and SOC tracking to compensate for it. This obviates the need to model hysteresis as a function of SOC and load current. We identify the presence of correlated noise, which has so far been ignored in the literature, and use it to enhance the accuracy of model identification. As a departure from the conventional one-model-fits-all strategy, we identify four different equivalent models of the battery that represent four modes of typical battery operation and develop a framework for seamless SOC tracking by switching among them. The proposed parameter estimation approach enables a robust initialization/re-initialization strategy for continuous operation of the BFG. The performance of the online parameter estimation scheme was first evaluated through simulated data. Then, the proposed algorithm was validated using hardware-in-the-loop (HIL) data collected from commercially available Li-ion batteries.
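The flavour of linear parameter estimation for a battery model can be shown with a toy example; this is a generic least-squares fit of the static relation v = OCV - R0 * i, not the paper's four-model recursive scheme, and all numbers are hypothetical:

```python
# Batch least-squares identification of a simple battery model
#     v_k = OCV - R0 * i_k
# from terminal voltage/current samples. OCV and R0 are the unknowns.
def identify_ocv_r0(currents, voltages):
    n = len(currents)
    si = sum(currents)
    sv = sum(voltages)
    sii = sum(i * i for i in currents)
    siv = sum(i * v for i, v in zip(currents, voltages))
    # Solve the 2x2 normal equations for intercept (OCV) and slope (-R0).
    det = n * sii - si * si
    ocv = (sii * sv - si * siv) / det
    slope = (n * siv - si * sv) / det
    return ocv, -slope

# Synthetic noiseless data with OCV = 3.7 V and R0 = 0.05 ohm.
i_data = [0.0, 1.0, 2.0, 4.0]
v_data = [3.7 - 0.05 * i for i in i_data]
ocv, r0 = identify_ocv_r0(i_data, v_data)   # recovers 3.7 and 0.05 exactly
```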

We investigate the virialization of cosmic structures in the framework of flat FLRW cosmological models in which the vacuum energy density evolves with time. In particular, our analysis focuses on the study of spherical matter perturbations as they decouple from the background expansion, "turn around", and finally collapse. We generalize the spherical collapse model to the case when the vacuum energy is a running function of the Hubble rate, $\\Lambda=\\Lambda(H)$. A particularly well motivated model of this type is the so-called quantum field vacuum, in which $\\Lambda(H)$ is a quadratic function, $\\Lambda(H)=n_0+n_2\\,H^2$.

Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network are often a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to achieve linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing of VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation, which showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.

To establish a standard for the distinction of reptation from other modes of polymer diffusion, we analytically and numerically study the displacement of the central bead of a chain diffusing through an ordered obstacle array. Our analytically solvable model furthermore predicts a very short transient for the fourth moment. This is verified by computer experiment.

A simple upwind discretization of the highly coupled non-linear differential equations which define the hydrodynamic model for semiconductors is given in full detail. The hydrodynamic model is able to describe inertia effects which play an increasing role in different fields of opto- and microelectronics. A silicon $n^+ - n - n^+$ - structure is simulated, using the energy-balance model and the full hydrodynamic model. Results for stationary cases are then compared, and it is pointed out where the energy-balance model, which is implemented in most of today's commercial semiconductor device simulators, fails to describe accurately the electron dynamics. Additionally, a GaAs $n^+ - n - n^+$-structure is simulated in time-domain in order to illustrate the importance of inertia effects at high frequencies in modern submicron devices.
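The character of an upwind discretization is easiest to see for scalar linear advection; this generic first-order sketch is not the paper's coupled hydrodynamic system:

```python
# First-order upwind scheme for u_t + c * u_x = 0 with c > 0:
# information travels from the left, so the spatial difference is one-sided.
def upwind_step(u, c, dt, dx):
    nu = c * dt / dx   # CFL number; the scheme is stable for nu <= 1
    # Periodic boundary: u[-1] wraps to the last cell.
    return [u[i] - nu * (u[i] - u[i - 1]) for i in range(len(u))]

u = [0.0, 0.0, 1.0, 0.0, 0.0]
u_next = upwind_step(u, c=1.0, dt=0.1, dx=0.1)   # nu = 1: exact shift right
```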

A discussion is presented of the principle of black hole complementarity. It is argued that this principle could be viewed as a breakdown of general relativity, or alternatively, as the introduction of a time variable with multiple `sheets' or `branches'. A consequence of the theory is that the stress-energy tensor as viewed by an outside observer is not simply the Lorentz transform of the tensor viewed by an ingoing observer. This can serve as a justification of a new model for the black hole atmosphere, recently re-introduced. It is discussed how such a model may lead to a dynamical description of the black hole quantum states.

We study the noise activated dynamics of a model {\\it autapse} neuron system that consists of a subcritical Hopf oscillator with a time delayed nonlinear feedback. The coherence of the noise driven pulses of the neuron exhibits a novel double peaked structure as a function of the noise amplitude. The two peaks correspond to separate optimal noise levels for excitation of single spikes and multiple spikes (bursts) respectively. The relative magnitudes of these peaks are found to be a sensitive function of time delay. The physical significance of our results and its practical implications in various real life systems are discussed.

Electric Load vs Time: Complicated World, Complicated Model. Speaker(s): Phillip Price. Date: March 7, 2013 - 12:00pm. Location: 90-3122. Seminar Host/Point of Contact: Phillip Price. "How much energy did I save by changing the operation of my building yesterday?" That turns out to be a very hard question to answer: you need to know how much energy you would have used under normal operations (the "baseline"), a number you can predict but not measure. In this talk we focus specifically on electrical energy ("electric load") in commercial buildings. Often the load can be broken down into several components that are superimposed on each other: a recurring weekly pattern, an effect of outdoor air temperature, and so on. Some buildings have patterns that are
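The decomposition described in the talk abstract can be caricatured as an additive baseline model; this sketch is purely illustrative (profile, slope, and balance point are invented) and is not the speaker's actual method:

```python
# Additive baseline: predicted load = recurring weekly profile + temperature
# response. hour_of_week indexes a 168-value profile; the temperature term is
# linear above a balance point (cooling load). All numbers are made up.
def predicted_load(hour_of_week, temp_c, profile, slope=1.5, balance=18.0):
    base = profile[hour_of_week % 168]
    cooling = slope * max(0.0, temp_c - balance)
    return base + cooling

profile = [50.0] * 168                       # flat 50 kW profile, for brevity
mild = predicted_load(10, 15.0, profile)     # below balance point: base only
hot = predicted_load(10, 28.0, profile)      # adds 1.5 * 10 = 15 kW of cooling
```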

The use of time-delay gravitational lenses to examine the cosmological expansion introduces a new standard ruler with which to test theoretical models. The sample suitable for this kind of work now includes 12 lens systems, which have thus far been used solely for optimizing the parameters of $\\Lambda$CDM. In this paper, we broaden the base of support for this new, important cosmic probe by using these observations to carry out a one-on-one comparison between {\\it competing} models. The currently available sample indicates a likelihood of $\\sim 70-80\\%$ that the $R_{\\rm h}=ct$ Universe is the correct cosmology versus $\\sim 20-30\\%$ for the standard model. This possibly interesting result reinforces the need to greatly expand the sample of time-delay lenses, e.g., with the successful implementation of the Dark Energy Survey, the VST ATLAS survey, and the Large Synoptic Survey Telescope. In anticipation of a greatly expanded catalog of time-delay lenses identified with these surveys, we have produced synthetic sa...

A counting process $\\{N(t), t \\ge 0\\}$ with interoccurrence times $X_1, X_2, \\ldots$ is an $\\alpha$-series process if there exists a real number $\\alpha$ such that $(k^{\\alpha} X_k)_{k=1,2,\\ldots}$ forms a renewal process. The nonparametric inference problem in an $\\alpha$-series process ... Keywords: $\\alpha$-series process, Linear regression, Trend
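The defining rescaling is easy to state in code; a minimal sketch with deterministic placeholder data (not an inference procedure):

```python
# An alpha-series process has interoccurrence times X_1, X_2, ... such that
# (k**alpha * X_k) forms a renewal process, i.e. i.i.d. rescaled times.
def rescale(interoccurrence_times, alpha):
    return [(k ** alpha) * x
            for k, x in enumerate(interoccurrence_times, start=1)]

# If X_k = k**(-alpha) * E_k with E_k i.i.d., rescaling recovers the E_k.
# Here E_k is a constant 2.0 purely for a deterministic illustration.
alpha = 0.5
x = [k ** (-alpha) * 2.0 for k in range(1, 6)]
rescaled = rescale(x, alpha)   # all (numerically) equal to 2.0
```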

To probe both the Mechanical Non-Equilibrium (MNE) and Thermodynamic Non-Equilibrium (TNE) in the combustion process, a two-dimensional Multiple-Relaxation-Time (MRT) version of the Lattice Boltzmann Kinetic Model (LBKM) for combustion phenomena is presented. The chemical energy released in the course of combustion is dynamically coupled into the system by adding a chemical term to the LB kinetic equation. The LB model is required to recover the Navier-Stokes equations with chemical reaction in the hydrodynamic limit. To that end, we construct a discrete velocity model with $24$ velocities divided into $3$ groups. In each group a flexible parameter is used to control the size of the discrete velocities, and a second parameter is used to describe the contribution of the extra degrees of freedom. The current model works for both subsonic and supersonic flows with or without chemical reaction. In this model both the specific-heat ratio and the Prandtl number are flexible, and the TNE effects are naturally presented in...

The project objective was to detail better ways to assess and exploit intelligent oil and gas field information through improved modeling, sensor technology, and process control to increase ultimate recovery of domestic hydrocarbons. To meet this objective we investigated the use of permanent downhole sensor systems (Smart Wells) whose data is fed real-time into computational reservoir models that are integrated with optimized production control systems. The project utilized a three-pronged approach: (1) a value of information analysis to address the economic advantages, (2) reservoir simulation modeling and control optimization to prove the capability, and (3) evaluation of new generation sensor packaging to survive the borehole environment for long periods of time. The Value of Information (VOI) decision tree method was developed and used to assess the economic advantage of using the proposed technology; the VOI demonstrated the increased subsurface resolution through additional sensor data. Our findings show that the VOI studies are a practical means of ascertaining the value associated with a technology, in this case application of sensors to production. The procedure acknowledges the uncertainty in predictions but nevertheless assigns monetary value to the predictions. The best aspect of the procedure is that it builds consensus within interdisciplinary teams. The reservoir simulation and modeling aspect of the project was developed to show the capability of exploiting sensor information both for reservoir characterization and to optimize control of the production system. Our findings indicate history matching is improved as more information is added to the objective function, clearly indicating that sensor information can help in reducing the uncertainty associated with reservoir characterization. Additional findings and approaches used are described in detail within the report.
The next generation sensors aspect of the project evaluated sensors and packaging survivability issues. Our findings indicate that packaging represents the most significant technical challenge associated with application of sensors in the downhole environment for long periods (5+ years) of time. These issues are described in detail within the report. The impact of successful reservoir monitoring programs and coincident improved reservoir management is measured by the production of additional oil and gas volumes from existing reservoirs, revitalization of nearly depleted reservoirs, possible re-establishment of already abandoned reservoirs, and improved economics for all cases. Smart Well monitoring provides the means to understand how a reservoir process is developing and to provide active reservoir management. At the same time it also provides data for developing high-fidelity simulation models. This work has been a joint effort with Sandia National Laboratories and UT-Austin's Bureau of Economic Geology, Department of Petroleum and Geosystems Engineering, and the Institute of Computational and Engineering Mathematics.

An offshore oil and gas structure will be decommissioned and removed from service at the end of its productive life, depending upon operator preferences, legislative requirements, and strategic opportunities. The basic aim of decommissioning is to render all wells permanently safe and remove most, if not all, surface/seabed signs of production activity. The purpose of this paper is to characterize the timing decisions associated with abandoning offshore oil and gas structures. Three models are developed, ranging from a production-based forecast to a risked, net present value approach. Functions that describe how the age of the structure upon abandonment is related to system parameters are constructed using meta-modeling simulation and illustrated on a generic field development scenario.
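The NPV-style timing rule in the third model can be sketched in a few lines. This is a hypothetical illustration (the decline rate, price, and operating cost are invented parameters, not the paper's meta-model): abandon in the first year in which discounted net cash flow turns negative.

```python
# Hypothetical sketch of an NPV-style abandonment timing rule (the
# decline rate, price, and operating cost below are invented
# parameters, not the paper's): abandon in the first year in which
# discounted net cash flow turns negative.

def abandonment_year(q0, decline, price, opex, rate, horizon=50):
    """First year in which discounted annual net revenue is negative."""
    for year in range(1, horizon + 1):
        q = q0 * (1.0 - decline) ** year        # declining production
        net = q * price - opex                  # annual net cash flow
        if net / (1.0 + rate) ** year < 0.0:    # discounted value
            return year
    return horizon

# 100 units/yr initial rate, 15%/yr decline, fixed operating cost.
year = abandonment_year(q0=100.0, decline=0.15, price=60.0, opex=2000.0, rate=0.10)
```

Since discounting never changes the sign of a cash flow, the rule effectively finds the year production revenue first falls below operating cost; a risked version would average this over uncertain price and decline scenarios.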


We use the AdS/CFT correspondence to study the resummation of a perturbative genus expansion appearing in the type II superstring dual of ABJM theory. Although the series is Borel summable, its Borel resummation does not agree with the exact non-perturbative answer due to the presence of complex instantons. The same type of behavior appears in the WKB quantization of the quartic oscillator in Quantum Mechanics, which we analyze in detail as a toy model for the string perturbation series. We conclude that, in these examples, Borel summability is not enough for extracting non-perturbative information, due to non-perturbative effects associated with complex instantons. We also analyze the resummation of the genus expansion for topological string theory on local $\mathbb{P}^1 \times \mathbb{P}^1$, which is closely related to ABJM theory. In this case, the non-perturbative answer involves membrane instantons computed by the refined topological string, which are crucial to produce a well-defined result. We give evidence that the Borel resummation of the perturbative series requires such a non-perturbative sector.

The control of spatially distributed systems is often complicated by significant uncertainty about system inputs, both time-varying exogenous inputs and time-invariant parameters. Spatial variations of uncertain parameters ...

Lawrence Berkeley National Laboratory, Environmental Energy Technologies Division Distinguished Lecture Series videos: "Long Fuse, Big Bang: Thomas Edison, Electricity, and the Locus of Innovation" (Andrew Hargadon, October 22, 2012); "Climate Change Hits Home: Impacts on the Built Environment and Health" (John Spengler, June 18, 2012); "High Comfort-Low Impact, From Buildings to Cities" (Matthias Schuler, April 30, 2012); "Emissions Trading and Climate Finance: Is 2012 the Dead End or the Crossroads?" (Marc Stuart, January 27, 2012); "Advances in Global Climate Modeling for Scientific Understanding and Predictability" (V. Ramaswamy, October 7, 2011); "How is Building Energy Use Related to Occupant Behaviors and Building Usage

Real-time social trails that reflect the digital footprints of crowds of real-time web users in response to real-world events or online phenomena. These digital footprints correspond to the artifacts strewn across the real-time web, like postings of messages...

Examples: controller of an airplane, railway crossing, robot controllers, steel production controllers, communication protocols. The next-operator "measures" time passage. Two time units after being red, the light is green: □(red → ○○green). Within two time units after red, the light is green: □(red → (green ∨ ○green ∨ ○○green)).


A New Approach to Fuzzy Modeling and Control of Discrete-Time Systems. Michael Margaliot and Gideon Langholz. Abstract: We present a new approach to fuzzy modeling and control of discrete-time systems; the advantage of fuzzy logic in modeling and control is the ability to combine modeling (constructing

A discounted-cost, continuous-time, infinite-horizon version of a flexible manufacturing and operator scheduling model is solved. The solution procedure is to convexify the discrete operator-assignment constraints to obtain a linear program, and then to regain the discreteness and obtain an approximate manufacturing schedule by deconvexification of the solution of the linear program over time. The strong features of the model are the accommodation of linear inequality relations among the manufacturing activities and the discrete manufacturing scheduling, whereas the weak features are intra-period relaxation of inventory availability constraints, and the absence of inventory costs, setup times, and setup charges.
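The deconvexification step can be illustrated with a minimal sketch (not the paper's actual procedure): given fractional per-period assignment levels from the relaxed linear program, produce a 0/1 schedule whose cumulative assignment tracks the fractional one.

```python
# Minimal sketch (not the paper's actual procedure) of deconvexification:
# round fractional per-period assignment levels from the relaxed LP to a
# 0/1 schedule whose cumulative assignment tracks the fractional one.

def deconvexify(fractions):
    """Round fractions to 0/1, keeping the running total within one
    unit of the fractional running total."""
    schedule, carry = [], 0.0
    for f in fractions:
        carry += f
        if carry >= 1.0:          # enough accumulated fraction: assign
            schedule.append(1)
            carry -= 1.0
        else:
            schedule.append(0)
    return schedule

# A half-time assignment becomes an alternating discrete schedule.
print(deconvexify([0.5, 0.5, 0.5, 0.5]))   # -> [0, 1, 0, 1]
```

The running-sum rounding keeps discrete assignments within one period's worth of the relaxed LP solution, which is the sense in which discreteness is "regained" over time.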

SERIES EXPANSIONS FOR THE SPHERICAL AND ISING MODELS WITH LARGE LATTICE DIMENSIONALITY, by S. Milosevic. The spherical model can be regarded either as an exactly-soluble approximation to the Ising model, or as a model of isotropically interacting spins when the

International Airport (BOS) for the 22L, 27 | 22L, 22R runway configuration in 2011. ... applies to both the modeling of airport operations and the estimation of airport capacities. ... threshold [8]. The Airport Surface Detection Equipment - Model X (ASDE-X) system combines data from surface

manufacturers. Funding for Francesca Dominici was provided by a grant from the Health Effects Institute (HEI), an organization jointly funded by the Environmental Protection Agency (EPA)... R@jhsph.edu. Acknowledgments: Research described in this article was partially supported by a contract and grant from the Health Effects Institute

To predict short-term power load in an effective and fast way, the ... are also determined. Then the continuous power load data are transformed into a data matrix by using the theory of phase- ... LSSVM is used...

The objective of this paper is to realize the real-time visualization of hydroelectric projects. Based on object-oriented graphics modeling technology, we construct three kinds of graphics models organized by hierarchy---unit model, process model, ... Keywords: visualization, hydroelectric project, simulation, object-oriented graphics modeling technology, interaction

sufficient secondary reserve and the resulting decrease in energy efficiency and increase in SOx and NOx emissions at such part-load operation of Combined Cycle Gas Turbines (CCs) and coal power stations outside of their optimal generation point. A model... power stations. First, as base-load power stations, combined cycle gas turbines are widely expected to dominate the picture and are therefore the suggested option. Secondly, for investment in peaking capacity, open cycle gas turbines are modelled...

A time-dependent model that simulates the interaction of a thunderstorm with its electrical environment is introduced. The model solves the continuity equation of the Maxwell current density that includes conduction, displacement, and source ...

A MODEL FOR THE FLEET SIZING OF DEMAND RESPONSIVE TRANSPORTATION SERVICES WITH TIME WINDOWS. Marco ... a demand responsive transit service with a predetermined quality for the user in terms of waiting time. Keywords: ... models; continuous approximation models; paratransit services; demand responsive transit systems.

TIME-VARYING LINEAR MODEL APPROXIMATION: APPLICATION TO THERMAL AND AIRFLOW BUILDING SIMULATION. ... the computing time is still an open challenge. After spatial discretisation, the thermal model of a building ... is demonstrated by its application to the simulation of a multi-zone building. THERMAL AND AIRFLOW MODELS

... messages. Since the general STPN is not good at expressing these various messages ... rules of accessing web services on the Internet, the interval time between any two requests ... oriented computing environment. IEEE Internet Comput. 10, 43-49. [10] Zhang, W. ...

Information-theoretic entropy measures are useful tools for quantifying the spreading of quantum states in phase space. In the present paper, we compare the time evolution of the joint entropy for three simple quantum systems: (i) a free Gaussian wave packet, (ii) a wave packet in a monochromatic electromagnetic field, and (iii) a wave packet tunneling through a δ barrier. As initial conditions, maximal classical states are used, which minimize the Heisenberg uncertainty and the entropy. It is found that, in all three cases, the joint entropy increases in time.
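For the free-packet case (i), the growth described above can be reproduced in closed form. A sketch in units where ħ = m = 1 (an assumption made for illustration):

```python
import math

# Closed-form sketch of case (i), the free Gaussian wave packet, in
# units hbar = m = 1 (an assumption for illustration).

def position_width(t, sigma0=1.0):
    """Width of a freely spreading Gaussian packet at time t."""
    return sigma0 * math.sqrt(1.0 + (t / (2.0 * sigma0**2))**2)

def joint_entropy(t, sigma0=1.0):
    """Sum of position and momentum Shannon entropies, in nats."""
    sigma_x = position_width(t, sigma0)
    sigma_p = 1.0 / (2.0 * sigma0)   # momentum width stays constant
    s_x = 0.5 * math.log(2.0 * math.pi * math.e * sigma_x**2)
    s_p = 0.5 * math.log(2.0 * math.pi * math.e * sigma_p**2)
    return s_x + s_p

# As the packet spreads, the joint entropy grows monotonically.
entropies = [joint_entropy(t) for t in (0.0, 1.0, 2.0, 4.0)]
```

At t = 0 the state saturates the minimum-uncertainty bound, giving the minimal joint entropy ln(πe); the growth for t > 0 comes entirely from the spreading position width.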

The Ohio dataset has been of particular interest because of the suggestion that a nuclear facility ... of additional risk from a known putative source, such as a nuclear installation. For some further comments ... factors during the time period of the study. In the present paper, we use for illustration a dataset

functions were limited to steps, ramps, and sinusoids. This limited class of inputs and delays defines the scope of this thesis, and the results are to be interpreted as such. The methodology adopted to identify the basic underpinnings of models was system...

, the problem often becomes NP-complete.) We propose a simple model for measuring energy usage on a parallel machine ... consumes a fixed amount of energy per active timeslot, regardless of the number of jobs scheduled ... to finding a triangle-free 2-matching on a special graph. We extend the algorithm of Babenko et al. [3]

A one-dimensional time-dependent cumulonimbus model is designed that, unlike in previous one-dimensional models, simulates cloud-top heights, vertical velocities, and water contents that are reasonably consistent with those observed in real ...

Ocean models that are able to provide accurate and real-time prediction of ocean currents will improve the performance of glider navigation. ... a novel approach to compute a model for ocean currents at higher re...

This paper presents the theoretical development and numerical implementation of a new modeling approach for representing the groundwater pathway in risk assessment or performance assessment models of a contaminant transport system. The model developed ... Keywords: Groundwater pathway, Mixing model, Performance assessment, Residence time distribution

Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model, and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant value C_φ = 12. A comparison of the simulation results with available measurements shows that only the EMST model accurately calculates the mean and variance of the reaction progress variable. An evaluation of the structure of the PDFs of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various C_φ values ranging from 2 to 50 combined with the three scalar mixing models have been performed. The observed deficiencies of the IEM and CD models persisted for all C_φ values considered. The value C_φ = 12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice of C_φ, more sophisticated models for the scalar time scale have been used in simulations using the EMST model. A new model for the scalar time scale, based on a linear blending between a model for flamelet combustion and a model for distributed combustion, is developed. The new model has proven to be very promising as a scalar time scale model which can be applied from flamelet to distributed combustion. (author)

In such a case, the power of the test does not tend to one in spite of large sample sizes. On the other hand, consistent nonparametric tests avoid this problem. To test the correctness of a parametric model, say $Y_i = l(x_{t_i}; \theta) + e_i$, we can consider... In practice, we use $\hat{e}_i$ in lieu of $e_i$, where $\hat{e}_i = Y_i - l(x_{t_i}; \hat{\theta})$ is a residual, $\hat{\theta}$ is an OLS estimator of $\theta$, and $Y_i$ is a response variable. Using the leave-one-out kernel estimator $\frac{1}{nh} \sum_{j \neq i} \hat{e}_j \, k\big(\frac{x_{t_j} - x_{t_i}}{h}\big)$, the test statistic stems from the following...

Abstract: Detailed information on the spatial and temporal distribution of land cover is required to evaluate the effects of land cover change on environmental processes. The development of temporally consistent land cover time series (LCTS) from satellite-based earth observation has proven difficult because multi-year observations are acquired under different conditions, resulting in high inter-annual reflectance variability. This leads to spurious differences in land cover when standard approaches for image classification are applied to generate multi-year land cover data. To reduce this effect, a common solution has been to first detect change and update a base map for only these change areas. As long as the change commission error is low, this approach will ensure high consistency between maps in the time series. Here we present an approach for change-based LCTS development following from previous research, but with significant advancements in change detection, training, classification, and evidence-based refinement. The method was applied to generate an annual LCTS covering Canada spanning 2000-2011 that is consistent between years and can be used to identify dominant change transitions. Assessment of the LCTS was challenging because multiple maps needed to be evaluated, which can be prohibitive, particularly for annual time series covering several years. Three approaches were undertaken involving visual examination, comparison with a reference sample derived from Landsat, and comparison with the MODIS Global LCTS V5.1. Visual assessment revealed high inter-map consistency and logical temporal change trajectories of land cover classes. Comparison with the reference sample showed an accuracy of 70% at the 19-class thematic resolution. Accounting for mixed pixels by considering the first or second reference land cover label as correct increased the accuracy to 80%. Comparison with the MODIS Global LCTS showed that the Canada LCTS achieved higher inter-map consistency and accuracy, as expected for a national relative to a global land cover product.

dynamic modeling in a supermarket refrigeration system. Keywords: system identification, FOPDT, time ... in a refrigeration system. The TV-FOPDT model is an extension of the standard FOPDT that allows the system parameters to vary in time. The proposed approaches can simultaneously estimate the time-dependent system parameters, as well

the derivation of a system of demands for activity participation by applying microeconomic theory in a time-price ... to be of discrete choices (e.g., Train et al. 1987); many models of jointly estimated demand responses lack ... time demand elasticities, values of time, and other behavioral properties. METHODOLOGY: Microeconomic

We present a parareal-in-time algorithm for the simulation of a neutron diffusion transient model. The method is made efficient by means of a coarse solver defined with large time steps and a steady control rods model. Using finite elements for the space discretization, our implementation provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner (LMW) benchmark [1].
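The parareal idea, a cheap coarse propagator corrected iteratively by an accurate fine propagator, can be illustrated on a scalar decay equation (a toy stand-in for the neutron diffusion model; the solvers below are plain Euler steps, not the paper's):

```python
# Toy parareal iteration for dy/dt = -lam*y on [0, T]; an illustrative
# sketch with the same structure the paper applies to neutron diffusion.
# Coarse propagator: one Euler step; fine propagator: many Euler steps.

def coarse(y, lam, dt):
    return y * (1.0 - lam * dt)                 # single explicit Euler step

def fine(y, lam, dt, substeps=100):
    h = dt / substeps
    for _ in range(substeps):
        y *= (1.0 - lam * h)
    return y

def parareal(y0, lam, T, slices=10, iters=5):
    dt = T / slices
    U = [y0]                                    # initial guess: coarse sweep
    for n in range(slices):
        U.append(coarse(U[-1], lam, dt))
    for _ in range(iters):
        F = [fine(U[n], lam, dt) for n in range(slices)]    # parallelizable
        G = [coarse(U[n], lam, dt) for n in range(slices)]
        V = [y0]
        for n in range(slices):                 # sequential correction sweep
            V.append(coarse(V[-1], lam, dt) + F[n] - G[n])
        U = V
    return U[-1]
```

After as many iterations as time slices, the parareal solution reproduces the serial fine solution (up to rounding); the payoff is that the expensive fine solves within one iteration can run in parallel.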

Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked, subject of model validation is discussed, and summary statistics for judging the model's ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.
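As a minimal stand-in for the WinBUGS examples (not the report's actual models), a Metropolis sampler for the failure rate of exponentially distributed times-to-failure, with a conjugate Gamma prior so the result can be checked analytically:

```python
import math, random

# A Metropolis sampler (a stand-in sketch, not WinBUGS) for the failure
# rate lam of exponentially distributed times-to-failure with a
# Gamma(a, b) prior; the posterior is Gamma(a + n, b + sum(t)), so the
# sampler can be checked against the analytic posterior mean.

def log_post(lam, times, a=1.0, b=1.0):
    """Log posterior density of lam, up to an additive constant."""
    if lam <= 0.0:
        return -math.inf
    return (a + len(times) - 1.0) * math.log(lam) - (b + sum(times)) * lam

def metropolis(times, steps=20000, step_size=0.2, seed=1):
    rng = random.Random(seed)
    lam, samples = 1.0, []
    for _ in range(steps):
        prop = lam + rng.gauss(0.0, step_size)      # random-walk proposal
        if math.log(rng.random()) < log_post(prop, times) - log_post(lam, times):
            lam = prop                              # accept
        samples.append(lam)
    return samples[steps // 2:]                     # drop burn-in

failure_times = [0.8, 1.5, 0.3, 2.2, 1.1, 0.6, 1.9, 0.4]  # invented data
draws = metropolis(failure_times)
posterior_mean = sum(draws) / len(draws)
# Analytic posterior mean: (1 + 8) / (1 + 8.8) = 0.918...
```

The same posterior draws also support the validation step mentioned above: simulating replicate datasets from each sampled rate yields the posterior predictive distribution against which observed data can be compared.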

Dynamic model reduction in power systems is necessary for improving computational efficiency. Traditional model reduction using linearized models or offline analysis is not adequate to capture power system dynamic behaviors, especially as the new mix of intermittent generation and intelligent consumption makes the power system more dynamic and non-linear. Real-time dynamic model reduction emerges as an important need. This paper explores the use of clustering techniques to analyze real-time phasor measurements to determine generator groups and representative generators for dynamic model reduction. Two clustering techniques -- graph clustering and evolutionary clustering -- are studied in this paper. Various implementations of these techniques are compared and also compared with a previously developed Singular Value Decomposition (SVD)-based dynamic model reduction approach. The various methods exhibit different levels of accuracy when comparing the reduced model simulation against the original model. But some ...

One of the main obstacles that prevent model checking from being widely used in industrial control systems is the complexity of building formal models out of PLC programs, especially when timing aspects need to be integrated. This paper addresses this obstacle by proposing a methodology to model and verify timing aspects of PLC programs. Two approaches are proposed to allow users to balance the trade-off between the complexity of the model, i.e. its number of states, and the set of specifications that can be verified. A tool supporting the methodology, which produces models for different model checkers directly from PLC programs, has been developed. Verification of timing aspects for real-life PLC programs is presented in this paper using NuSMV.

Power Series. 16.4 Introduction. In this section we consider power series. These are examples of infinite series where each term contains a variable, x, raised to a positive integer power. We use the ratio test to obtain the radius of convergence, R, of the power series and state the important result
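The ratio test mentioned here, R = lim |a_n / a_(n+1)|, can be checked numerically; the two series below are standard textbook examples, not taken from this section:

```python
import math

# Numerical illustration of the ratio test: the radius of convergence is
# R = lim |a_n / a_(n+1)|. The two series below are standard textbook
# examples, not taken from this section.

def ratio_estimate(coef, n):
    """Estimate R from the consecutive coefficients a_n and a_(n+1)."""
    return abs(coef(n) / coef(n + 1))

# sum n x^n: the ratio n/(n+1) -> 1, so R = 1.
r_geometric = ratio_estimate(lambda n: n, 10**6)

# sum x^n / n!: the ratio (n+1) -> infinity, so R is infinite.
r_exponential = ratio_estimate(lambda n: 1.0 / math.factorial(n), 20)
```

For the factorial series the estimate at index n is exactly n + 1, which grows without bound, matching the infinite radius of convergence of the exponential series.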

It has been shown in previous work that the Kalman Filter and Linear Smoother produce optimal estimates of inventory and loss from a material balance area. The Kalman Filter/Linear Smoother approach assumes no correlation between inventory measurement errors, nor does it allow for serial correlation in these measurement errors. The purpose of this report is to extend the previous results by relaxing these assumptions to allow for correlation of measurement errors. The results show how to account for correlated measurement errors in the linear system model of the Kalman Filter/Linear Smoother. An algorithm is also included for calculating the required error covariance matrices.
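One standard way to handle serially correlated measurement errors, state augmentation, can be sketched for a scalar inventory that follows a random walk and is measured with an AR(1) error (an illustrative construction, not the report's algorithm; all parameters are invented):

```python
import random

# Sketch of handling serially correlated measurement errors via state
# augmentation (illustrative, not the report's algorithm). State:
# inventory x (random walk) and AR(1) measurement error v. The
# augmented measurement y = x + v then carries no extra noise.

def kalman_correlated(ys, rho, q, s):
    """Filter measurements ys; q: state noise var, s: AR(1) innovation var."""
    x, v = 0.0, 0.0
    p00, p01, p11 = 1.0, 0.0, s / (1.0 - rho**2)    # covariance of [x, v]
    estimates = []
    for y in ys:
        # Predict: x -> x, v -> rho*v; process noise diag(q, s).
        v = rho * v
        p00, p01, p11 = p00 + q, rho * p01, rho**2 * p11 + s
        # Update with H = [1, 1] and no independent measurement noise.
        innov = y - (x + v)
        S = p00 + 2.0 * p01 + p11                   # innovation variance
        k0, k1 = (p00 + p01) / S, (p01 + p11) / S   # Kalman gain
        x, v = x + k0 * innov, v + k1 * innov
        p00, p01, p11 = (p00 - k0 * (p00 + p01),
                         p01 - k0 * (p01 + p11),
                         p11 - k1 * (p01 + p11))
        estimates.append(x)
    return estimates

# Demo on synthetic data: the filtered estimate should beat the raw
# measurement, which carries the correlated error.
rng = random.Random(0)
truth, v, xs, ys = 0.0, 0.0, [], []
for _ in range(300):
    truth += rng.gauss(0.0, 0.1)              # random-walk inventory
    v = 0.8 * v + rng.gauss(0.0, 0.3)         # AR(1) measurement error
    xs.append(truth)
    ys.append(truth + v)
est = kalman_correlated(ys, rho=0.8, q=0.01, s=0.09)
```

Because the AR(1) error is part of the augmented state, its covariance with the inventory is tracked explicitly, which is the role the report's error covariance matrices play.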

This paper is a tutorial that presents a new method of modeling the probabilistic description of failure mechanisms in complex, time-dependent systems. The method of modeling employs a state vector differential equation representation of cumulative failure probabilities derived from Markov models associated with certain generic fault trees, and the method automatically includes common cause/common mode statistical dependencies, as well as time-related dependencies not considered in the literature previously. Simulations of these models employ a population dynamics representation of a probability space involving probability particle transitions among the Markov disjoint states. The particle transitions are governed by a random, Monte Carlo selection process.
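The state-vector differential equation idea can be shown in its smallest instance, a single component with constant failure rate, where the Markov model reduces to one ODE with a known closed form (an illustrative toy, not the paper's generic fault-tree models):

```python
import math

# Smallest instance of the state-vector ODE idea: one component with
# constant failure rate lam. The Markov model reduces to
# dP_ok/dt = -lam * P_ok, with closed form P_fail(t) = 1 - exp(-lam*t).

def failure_probability(lam, t, steps=10000):
    p_ok = 1.0                        # probability mass in the working state
    h = t / steps
    for _ in range(steps):
        p_ok += h * (-lam * p_ok)     # explicit Euler step of the ODE
    return 1.0 - p_ok

p = failure_probability(lam=0.5, t=2.0)   # closed form: 1 - exp(-1)
```

Larger fault trees extend this by making p_ok a vector over the Markov states and the rate a matrix, integrated the same way.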

This paper presents a probabilistic influence model for smartphone usage; it applies a latent group model to social influence. The probabilistic model is built on the assumption that a time series of students...

The latent class structure of autism symptoms from the time of diagnosis to age 6 years was examined in a sample of 280 children with autism spectrum disorder. Factor mixture modeling was performed on 26 algor...

The authors demonstrate a statistical model for the time it takes a manuscript to be accepted for publication. The manuscript received and accepted dates from published manuscripts with the term hurricane in the title are obtained from the American ...

The finite difference method has been widely used in seismic modeling and reverse time migration. However, it generally has two issues: large computational cost and numerical dispersion. Recently, a nearly-analytic discrete ...

A time-delay approach for the modeling and control of plasma instabilities in thermonuclear fusion plasmas. Indeed, advanced plasma confinement scenarios, such as the ones considered

Modeling Full-Envelope Aerodynamics of Small UAVs in Real Time. Prof. Michael Selig, Applied Aerodynamics Group and Subsonic Aerodynamics Research Lab, Department of Aerospace Engineering. The work will focus on the development of a full six-degree-of-freedom aerodynamics modeling environment for small UAVs

... et al. proposed that removal rates of raised and down areas converge exponentially to the removal rate of an unpatterned dielectric sheet film (blanket removal rate) as polish time increases [6]. However, both these models lack a clear connection to density. In addition, the model in [6] assumes the pad is always

A common technique for detection of gravitational-wave signals is searching for excess power in frequency-time maps of gravitational-wave detector data. In the event of a detection, model selection and parameter estimation will be performed in order to explore the properties of the source. In this paper, we develop a Bayesian statistical method for extracting model-dependent parameters from observed gravitational-wave signals in frequency-time maps. We demonstrate the method by recovering the parameters of model gravitational-wave signals added to simulated advanced LIGO noise. We also characterize the performance of the method and discuss prospects for future work.

Prediction of shocks' arrival times (SATs) at the Earth is very important for space weather forecasting. There is a well-known SAT model, STOA, which is widely used in space weather forecasting. However, the shock transit time from the STOA model usually has a relatively large error compared to real measurements. In addition, STOA tends to yield too many `yes' predictions, which causes a large number of false alarms. Therefore, in this work we modify the STOA model. First, we give a new method to calculate the shock transit time by modifying the way the solar wind speed is used in the STOA model. Second, we develop new criteria for deciding whether a shock will arrive at the Earth, with the help of sunspot numbers and the angular distances of the flare events. It is shown that our work improves SAT prediction significantly, especially the prediction of flare events without shocks arriving at the Earth.

Bayesian phylogenetic methods require the selection of prior probability distributions for all parameters of the model of evolution. These distributions allow one to incorporate prior information into a Bayesian analysis, ...

The interevent time of terrorist attack events is investigated by empirical data and model analysis. Empirical evidence shows it follows a scale-free property. In order to understand the dynamic mechanism of this statistical feature, an opinion dynamics model with memory effects is proposed on a two-dimensional lattice network. The model mainly highlights the role of individual social conformity and self-affirmation psychology. An attack event occurs when the order parameter of the system reaches a critical value. Ultimately, the model reproduces the same statistical property as the empirical data and gives a good understanding of terrorist attacks.

Radiation portal monitors (RPMs) screen cargo and personal vehicle traffic at international border crossings to detect and interdict illicit sources which may be present in the commerce stream. One difficulty faced by RPM systems is the prospect of false alarms: undesired alarms due to background fluctuation or Naturally-Occurring Radioactive Material (NORM) sources in the commerce stream. In general, NORM alarms represent a significant fraction of the nuisance alarms at international border crossings, particularly with Polyvinyl-Toluene (PVT) RPM detectors, which have only very weak spectral differentiation capability. With PVT detectors, the majority of detected photon events fall within the Compton continuum of the material, allowing very little spectral information to be preserved [1]. Previous work has shown that these detectors can be used for limited spectroscopy, utilizing around 8 spectral bins to further differentiate some NORM and other nuisance sources [2]. NaI-based systems achieve much more detailed spectral resolution from each measurement of a source, but still combine all measurements over a vehicle's occupancy in order to arrive at a spectrum to be analyzed.

We develop a Regional Seismic Travel Time (RSTT) model and methods to account for the first-order effect of the three-dimensional crust and upper mantle on travel times. The model parameterization is a global tessellation of nodes with a velocity profile at each node. Interpolation of the velocity profiles generates a 3-dimensional crust and laterally variable upper mantle velocity. The upper mantle velocity profile at each node is represented as a linear velocity gradient, which enables travel time computation in approximately 1 millisecond. This computational speed allows the model to be used in routine analyses in operational monitoring systems. We refine the model using a tomographic formulation that adjusts the average crustal velocity, mantle velocity at the Moho, and the mantle velocity gradient at each node. While the RSTT model is inherently global and our ultimate goal is to produce a model that provides accurate travel time predictions over the globe, our first RSTT tomography effort covers Eurasia and North Africa, where we have compiled a data set of approximately 600,000 Pn arrivals that provide path coverage over this vast area. Ten percent of the tomography data are randomly selected and set aside for testing purposes. Travel time residual variance for the validation data is reduced by 32%. Based on a geographically distributed set of validation events with epicenter accuracy of 5 km or better, epicenter error using 16 Pn arrivals is reduced by 46% from 17.3 km (ak135 model) to 9.3 km after tomography. Relative to the ak135 model, the median uncertainty ellipse area is reduced by 68% from 3070 km² to 994 km², and the number of ellipses with area less than 1000 km², which is the area allowed for onsite inspection under the Comprehensive Nuclear Test Ban Treaty, is increased from 0% to 51%.
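The computational advantage of the linear velocity gradient can be seen in one dimension: the vertical travel time through v(z) = v0 + g*z has a closed form, so no numerical integration along the path is needed (an illustrative 1-D sketch, not the RSTT ray tracer; the velocities below are invented):

```python
import math

# Illustrative 1-D sketch (not the RSTT ray tracer; velocities invented):
# with a linear gradient v(z) = v0 + g*z, the vertical travel time has
# the closed form t = ln(1 + g*Z/v0) / g, so no numerical integration
# is needed, which is part of what makes the computation fast.

def travel_time_closed(v0, g, Z):
    return math.log(1.0 + g * Z / v0) / g

def travel_time_numeric(v0, g, Z, steps=100000):
    """Midpoint-rule integration of dz / v(z), to check the closed form."""
    h = Z / steps
    return sum(h / (v0 + g * (i + 0.5) * h) for i in range(steps))

t1 = travel_time_closed(8.0, 0.002, 100.0)   # v0 = 8 km/s, 100 km path
t2 = travel_time_numeric(8.0, 0.002, 100.0)
```

Replacing a numerical quadrature with a closed form at every node is the kind of saving that brings per-prediction cost down to the millisecond scale quoted above.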

A modified lattice Boltzmann model with multiple relaxation times (MRT) for the convection-diffusion equation (CDE) is proposed. By modifying the relaxation matrix, as well as choosing the corresponding equilibrium distribution functions properly, the present model can recover the CDE with an anisotropic diffusion coefficient without any deviation terms, even when the velocity vector varies with space or time, as shown through a Chapman-Enskog analysis. The model is first validated by simulating the diffusion of a Gaussian hill, which demonstrates that it can handle the anisotropic diffusion problem correctly. It is then adopted to calculate the longitudinal dispersion coefficient of Taylor-Aris dispersion. Numerical results show that elimination of the deviation terms helps to reduce the numerical errors under the condition of a non-zero velocity vector, especially when the dimensionless relaxation time is relatively large.

In this article we study finite temperature and chemical potential effects in a nonlocal Nambu-Jona-Lasinio (nNJL) model in the real-time formalism. We make the usual Wick rotation to get from the imaginary- to the real-time formalism. In doing so, we need to define our regulator in the complex q^2 plane. This definition will be crucial in our later analysis. We study the poles in the propagator of this model and conclude that only some of them are of interest to us. Once we have a well-defined model in the real-time formalism, we look at the chiral condensate to find the temperature at which chiral symmetry restoration occurs. We find a second-order phase transition that turns into a first-order one for high enough values of the chemical potential.

Purpose: The hematopoietically active tissues of the skeleton are an important target tissue for dosimetric analysis, both in terms of diagnostic risk optimization and evaluating treatment efficacy. In the work presented here, a recently published dosimetry model of the adult is extended to all pediatric ages of the ICRP reference series. Methods: NURBS/PM-based computational phantoms of the ICRP 89 reference newborn, 1-year, 5-year, 10-year, and 15-year male and female were constructed from image segmentation of age- and gender-matched CT images. Bone samples were subsequently acquired from autopsy harvest of two female newborns and one 18-year male subject. Individual bones were collected and segmented following high-resolution ex-vivo CT to yield fractional volumes of cortical bone and spongiosa. Cored samples of spongiosa were later imaged under microCT to yield fractional volumes of bone trabeculae and marrow tissues and to provide a 3D geometry for radiation transport. Previously acquired pathlength distributions of trabecular spongiosa for a 1.7-year and 9-year child were used to supplement the dataset. Results: A comprehensive set of absorbed fractions of energy for internally emitted electrons is presented for active and shallow marrow targets in all bones, all ages, and over the energies 1 keV to 10 MeV. These electron absorbed fractions were then used to assemble photon fluence-to-dose response functions, permitting detailed marrow dosimetry for both externally incident (e.g., CT) and internally emitted (e.g., nuclear medicine) photons by bone site and subject age. Techniques and issues for patient-specific adjustments are discussed. Conclusions: Marrow dosimetry is a critical component of nuclear medicine risk assessment and therapy treatment planning. This work provides state-of-the-art methods for pediatric marrow dosimetry that supplant those developed previously for simpler stylized models of the pediatric skeleton. R01 CA116743, R01 CA96441, DE-FG07-06ID14773.

A real-time assimilation and forecasting system for coastal currents is presented. The purpose of the system is to deliver current analyses and forecasts based on assimilation of high-frequency radar surface current measurements. The local Vessel Traffic Service monitoring the ship traffic to two oil terminals on the coast of Norway received the analyses and forecasts in real time. A new assimilation method based on optimal interpolation is presented in which spatial covariances derived from an ocean model are used instead of simplified mathematical formulations. An array of high-frequency radar antennae provides the current measurements, and a suite of nested ocean models comprises the model system. The observing system is found to yield good analyses and short-range forecasts that are significantly improved compared to a model twin without assimilation. The system is fast; analyses and six-hour forecasts are ready at the Vessel Traffic Service 45 minutes after acquisition of the radar measurements.
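The optimal-interpolation step underlying such a system can be sketched in a few lines. This is the generic textbook update, not the paper's operational implementation; the toy covariance matrix and all numbers are made up for illustration:

```python
import numpy as np

def oi_update(xb, B, H, R, y):
    """One optimal-interpolation analysis step:
    xa = xb + K (y - H xb),  with gain K = B H^T (H B H^T + R)^(-1)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Toy example: 3 grid points, one radar observation of the first point.
xb = np.zeros(3)                    # background (model) state
B = np.array([[1.0, 0.5, 0.0],      # background error covariance: neighbouring
              [0.5, 1.0, 0.5],      # points are correlated, so one observation
              [0.0, 0.5, 1.0]])     # also corrects the unobserved points
H = np.array([[1.0, 0.0, 0.0]])     # observation operator
R = np.array([[0.25]])              # observation error variance
y = np.array([1.0])
xa = oi_update(xb, B, H, R, y)
```

The point of using model-derived covariances in B, as the abstract describes, is visible here: the off-diagonal terms spread the single observation's correction to neighbouring grid points.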

We investigate different accretion disk models and viscosity prescriptions in order to provide a basic explanation for the exotic temporal behavior in GRS 1915+105. Based on the fact that the overall cycle times are very much longer than the rise/fall time scales in GRS 1915, we rule out the geometry of an ADAF or a hot quasi-spherical region plus a cold outer disk for this source. We thus concentrate on geometrically thin Shakura-Sunyaev type disks (Shakura & Sunyaev 1973; hereafter SS73). We have devised a modified viscosity law that has a quasi-stable upper branch. Via numerical simulations, we show that the model does account for several gross observational features of GRS 1915+105. On the other hand, the rise/fall time scales are not short enough, and no rapid oscillations on time scales ≲ 10 s emerge naturally from the model. We then consider and numerically test a more elaborate model that includes the cold disk, a corona, and plasma ejections from the inner disk region, and show that this model allows us to reproduce several additional observed features of GRS 1915+105. We conclude that the most likely structure of the accretion flow in this source is that of a cold disk with a modified viscosity prescription, plus a corona that accounts for much of the X-ray emission, and unsteady plasma ejections that occur when the luminosity of the source is high.


INCIDENT DETECTION USING THE STANDARD NORMAL DEVIATE MODEL AND TRAVEL TIME INFORMATION FROM PROBE VEHICLES A Thesis by CHRISTOPHER EUGENE MOUNTAIN Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment... of the requirement for the degree of MASTER OF SCIENCE December 1993 Major Subject: Civil Engineering INCIDENT DETECTION USING THE STANDARD NORMAL DEVIATE MODEL AND TRAVEL TIME INFORMATION FROM PROBE VEHICLES A Thesis by CHRISTOPHER EUGENE MOUNTAIN Submitted...

EXPERIMENTS WITH A TIME-DEPENDENT, ZONALLY AVERAGED, SEASONAL, ENERGY BALANCE CLIMATIC MODEL A Thesis by STARLEY LEE THOMPSON Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree... of MASTER OF SCIENCE December 1977 Major Subject: Meteorology EXPERIMENTS WITH A TIME-DEPENDENT, ZONALLY AVERAGED, SEASONAL, ENERGY BALANCE CLIMATIC MODEL A Thesis by STARLEY LEE THOMPSON Approved as to style and content by: (Chairman of Committee...

In this paper, we study half-supersymmetric time-dependent configurations in M-theory and their matrix models. We find a large class of 11D supergravity solutions that preserve sixteen supersymmetries. Furthermore, we investigate the isometries of these configurations and show that in general they have no supernumerary supersymmetries. We also define the matrix models in these backgrounds following the Discrete Light-Cone Quantization (DLCQ) prescription.

We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as to provide gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes, without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model on a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is O(10)-O(10{sup 2}) less than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological models in question, it may be possible to omit the offline creation and caching of surrogate models prior to their use in an inverse problem.
The technique is demonstrated on synthetic data as well as observations from the 1918 influenza pandemic collected at Camp Custer, Michigan.
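The projection step can be sketched with a one-parameter toy problem. Here `pc_surrogate` and the exponential stand-in "model" are illustrative assumptions, not the paper's epidemic simulator; the idea shown is the same, though: project the expensive model onto an orthogonal polynomial basis once, then evaluate only the polynomial:

```python
import numpy as np
from numpy.polynomial import legendre

def pc_surrogate(model, order, nquad=32):
    """Project model(theta), theta ~ U(-1, 1), onto Legendre polynomials.
    Returns coefficients c_k; the surrogate is sum_k c_k P_k(theta)."""
    nodes, weights = legendre.leggauss(nquad)
    vals = np.array([model(x) for x in nodes])
    coeffs = []
    for k in range(order + 1):
        Pk = legendre.Legendre.basis(k)(nodes)
        # c_k = <f, P_k> / <P_k, P_k>, with <P_k, P_k> = 2/(2k+1) on [-1, 1]
        coeffs.append(np.sum(weights * vals * Pk) / (2.0 / (2 * k + 1)))
    return np.array(coeffs)

# Stand-in for an expensive simulator output as a function of one parameter.
model = lambda th: np.exp(0.5 * th)
c = pc_surrogate(model, order=6)
surrogate = lambda th: legendre.legval(th, c)   # cheap polynomial evaluation
```

Inside an MCMC loop, every likelihood evaluation would then call `surrogate` instead of the simulator, which is where the reported speedup comes from.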

The Subcritical Collapse of Predator Populations in Discrete-Time Predator-Prey Models. MICHAEL G... ... undergo a subcritical flip bifurcation with a concomitant crash in the predator's population. We review a technique for distinguishing between subcritical and supercritical flip bifurcations and provide examples ...

Three-dimensional finite difference time domain modeling of the Schumann resonance parameters. ... referred to as Schumann resonances and are excited by lightning discharges. The detection of such resonances on other ... frequency propagation is employed to study the Schumann resonance problems on Titan, Venus, and Mars ...

Modelling and Verification of Automated Transit Systems, Using Timed Automata, Invariants and Simulations. Nancy Lynch, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge. ... in automated transit systems. The problems we consider are inspired by design work in the Personal Rapid ...

Modelling Planning and Scheduling Problems with Time and Resources* ROMAN BARTÁK, Department of ... sequencing operators to achieve some goal. In STRIPS-like planning, the operator is defined by pre-conditions and effects, i.e., the pre-conditions must be satisfied to use the operator, and the effects hold after using ...

LINEAR TIME PERIODIC MODELLING OF POWER ELECTRONIC DEVICES FOR POWER SYSTEM HARMONIC ANALYSIS ... by simulation. 1. INTRODUCTION The variety and the widespread use of power electronic devices in power networks is due to their diverse and multiple functions: compensation, protection and interface ...

... this point, the geosteering team expected changes in log characteristics and signs of greater apparent dip angle. Real-time data ... earth model and correlations on a Web-based page. This Intranet page could be accessed by any team member connected to the ...

Simulator Generation Using an Automaton Based Pipeline Model for Timing Analysis. Rola Kassem, Mika... ... the description of the pipeline. The description is transformed into an automaton and a set of resources which ... The blocks communicate and synchronise with each other in order to handle the pipeline hazards. A pipeline ...

In this paper, the basis of the models of the condensate water and the air-cooled condenser is presented. The models are part of a full-scope simulator of a 450 MW combined-cycle power plant. The simulator is executed in real time and is intended to support the training of the operators of the Comisión Federal de Electricidad (the Mexican utility company). The simulator is presently in the final acceptance tests stage and is scheduled to be in commercial operation in 2010. Included here are a summary of the modelling methodology used to develop these models and the mathematical foundations used to obtain the main equations. The tendencies of selected variables during a transient are displayed and analysed in order to demonstrate the validity of the new generic models.

A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) can provide higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction.
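The impedance bookkeeping behind such a transformer is simple to illustrate. The abstract does not give these formulas explicitly, so the following is a minimal sketch of the standard parallel/series argument for n matched lines:

```python
def stlt_impedances(z0, n):
    """Input/output impedance of n matched transmission lines of
    characteristic impedance z0, connected in parallel at one end
    and in series at the other."""
    z_in = z0 / n            # n equal lines in parallel
    z_out = z0 * n           # the same n lines in series
    ratio = z_out / z_in     # impedance transformation ratio = n**2
    return z_in, z_out, ratio

# Two 50-ohm cables: 25 ohms in, 100 ohms out, a 4:1 impedance ratio.
z_in, z_out, ratio = stlt_impedances(50.0, 2)
```

Cascading such stages, as the record describes, compounds the per-stage n**2 ratio, which is how higher overall impedance ratios are reached.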


Experimental results of the temperature, field, and time dependence of the magnetization in high-temperature superconductors displaying the paramagnetic Meissner effect are compared with numerical results from model calculations. In experiments the relaxation rate of the zero-field-cooled magnetization exhibits novel field-dependent properties and the field-cooled magnetization is found to increase with time. A model based on an ensemble of superconducting loops, each loop containing an ordinary Josephson junction or a π junction, is shown to be able to account for most of the experimental results. The time-dependent magnetization is explained by thermally activated flipping of spontaneous orbital magnetic moments, a dynamical process which is fundamentally different from the flux-creep phenomenon usually observed in type-II superconductors.

This paper addresses the issue of reconstructing the unknown field of absorption and scattering coefficients from time-resolved measurements of diffused light in a computationally efficient manner. The intended application is optical tomography, which has generated considerable interest in recent times. The inverse problem is posed in the Bayesian framework. The maximum a posteriori (MAP) estimate is used to compute the reconstruction. We use an edge-preserving generalized Gaussian Markov random field to model the unknown image. The diffusion model used for the measurements is solved forward in time using a finite-difference approach known as the alternating-directions implicit method. This method requires the inversion of a tridiagonal matrix at each time step and is therefore of O(N) complexity, where N is the dimensionality of the image. Adjoint differentiation is used to compute the sensitivity of the measurements with respect to the unknown image. The novelty of our method lies in the computation of the sensitivity since we can achieve it in O(N) time as opposed to O(N{sup 2}) time required by the perturbation approach. We present results using simulated data to show that the proposed method yields superior quality reconstructions with substantial savings in computation.
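The tridiagonal inversion that keeps each ADI time step at O(N) is the classic Thomas algorithm. A minimal sketch (generic, not the paper's specific code):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d in O(N) with the Thomas algorithm;
    a, b, c are the sub-, main-, and super-diagonals (a[0], c[-1] unused).
    One O(N) solve per sweep is what keeps each ADI time step linear
    in the number of unknowns."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)                            # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For diagonally dominant systems, such as those arising from implicit diffusion discretizations, this elimination is stable without pivoting.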

There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the ubiquitous Hidden Markov Model for learning from sequential and time-series ...

Colloquium Series. Special Colloquium, December 13, 2012: "Pathways to Complex Matter Far-Away-From Equilibrium: Developing Spatiotemporal Tools," by Gopal Shenoy, Argonne National Laboratory, hosted by Daniel Lopez. Abstract: From the Big Bang to the coming of humankind, every manifestation of nature has exhibited processes far-away-from equilibrium leading to increasingly complex structural orders from geological to atomic length and time scales. Examples include the evolution of galaxies, hurricanes, stars, and planets; prebiotic reactions; cyclical reactions; photosynthesis; and life itself. The organizational spatiotemporal evolution in soft, hard, and biological matter also follows the same path. It begins from a far-from-equilibrium state and develops over time into organizations with length scales between atoms and small molecules on the one hand and mesoscopic matter on the other.

The paper presents the mathematical modeling of space-time kinetics phenomena in the Advanced Heavy Water Reactor (AHWR), a 920 MW (thermal), vertical pressure-tube-type, thorium-based nuclear reactor. The physical dimensions and the internal feedback effects of the AHWR are such that it is susceptible to xenon-induced spatial oscillations. For the study of spatial effects and the design of a suitable control strategy, a mathematical model of moderate order is needed. In this paper, a mathematical model of the reactor is derived within the framework of nodal modeling, with the two-group neutron diffusion equation as the basis. A linear model in standard state-space form is formulated from the set of equations so obtained. It is shown that comparison of linear system properties can be helpful in deciding upon an appropriate nodalization scheme and thus in obtaining a reasonably accurate model. For validation, the transient response of the simplified model has been compared with that from a rigorous finite-difference model.

We perform an extensive numerical investigation of the retrieval dynamics of the synchronous Hopfield model, also known as the Little-Hopfield model, up to sizes of 2{sup 18} neurons. Our results correct and extend much of the early simulations on the model. We find that the average convergence time has a power-law behavior for a wide range of system sizes, whose exponent depends both on the network loading and on the initial overlap with the memory to be retrieved. Surprisingly, we also find that the variance of the convergence time grows as fast as its average, making it a non-self-averaging quantity. Based on the simulation data we differentiate between two definitions of memory retrieval time: one that is mathematically strict, τ{sub c}, the number of updates needed to reach the attractor whose properties we just described, and a second corresponding to the time τ{sub ε} when the network stabilizes within a tolerance threshold ε, such that the difference of two consecutive overlaps with a stored memory is smaller than ε. We show that the scaling relationships between τ{sub c} and τ{sub ε} and the typical network parameters, such as the memory load α or the size of the network N, vary greatly, with τ{sub ε} relatively insensitive to system size and loading. We propose τ{sub ε} as the physiologically realistic measure of the typical attractor network response.
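The strict retrieval time, the number of parallel updates to reach a fixed point, can be measured with a small simulation. A minimal sketch at a far smaller size and low load than the study above (all parameter values are illustrative):

```python
import numpy as np

def retrieval_time(N=200, P=5, flip=0.1, max_steps=1000, seed=0):
    """Synchronous (Little-Hopfield) retrieval: store P random patterns with
    the Hebb rule, start from a noisy copy of pattern 0, and return the
    number of parallel updates needed to reach a fixed point."""
    rng = np.random.default_rng(seed)
    xi = rng.choice([-1, 1], size=(P, N))     # stored patterns
    J = (xi.T @ xi) / N                       # Hebbian couplings
    np.fill_diagonal(J, 0.0)
    s = xi[0].copy()
    s[rng.random(N) < flip] *= -1             # corrupt a fraction of spins
    for t in range(1, max_steps + 1):
        s_new = np.sign(J @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):          # fixed point reached
            return t
        s = s_new
    return max_steps

tau_c = retrieval_time()
```

Note that synchronous dynamics can also settle into 2-cycles rather than fixed points; at low load and small initial corruption, as here, a fixed point is the typical outcome.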

The geometry of ray paths through realistic Earth models can be extremely complex due to the vertical and lateral heterogeneity of the velocity distribution within the models. Calculation of high fidelity ray paths and travel times through these models generally involves sophisticated algorithms that require significant assumptions and approximations. To test such algorithms it is desirable to have available analytic solutions for the geometry and travel time of rays through simpler velocity distributions against which the more complex algorithms can be compared. Also, in situations where computational performance requirements prohibit implementation of full 3D algorithms, it may be necessary to accept the accuracy limitations of analytic solutions in order to compute solutions that satisfy those requirements. Analytic solutions are described for the geometry and travel time of infinite frequency rays through radially symmetric 1D Earth models characterized by an inner sphere where the velocity distribution is given by the function V(r) = A - Br{sup 2}, optionally surrounded by some number of spherical shells of constant velocity. The mathematical basis of the calculations is described, sample calculations are presented, and results are compared to the TauP Toolkit of Crotwell et al. (1999). These solutions are useful for evaluating the fidelity of sophisticated 3D travel time calculators and in situations where performance requirements preclude the use of more computationally intensive calculators. It should be noted that most of the solutions presented are only quasi-analytic. Exact, closed form equations are derived but computation of solutions to specific problems generally require application of numerical integration or root finding techniques, which, while approximations, can be calculated to very high accuracy. Tolerances are set in the numerical algorithms such that computed travel time accuracies are better than 1 microsecond.
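For the simplest case of a purely radial ray through V(r) = A - Br{sup 2}, the travel time integral has a closed-form antiderivative, which is the kind of analytic benchmark the record describes. A sketch (the function and numbers are illustrative, not the paper's full quasi-analytic machinery):

```python
import math

def radial_time(A, B, r1, r2):
    """Travel time of a purely radial ray between radii r1 < r2 through
    V(r) = A - B*r**2, using the antiderivative of 1/V:
    F(r) = ln((sqrt(A) + sqrt(B)*r) / (sqrt(A) - sqrt(B)*r)) / (2*sqrt(A*B)).
    Valid while V(r) > 0 over [r1, r2]."""
    s = math.sqrt(A * B)
    F = lambda r: math.log((math.sqrt(A) + math.sqrt(B) * r) /
                           (math.sqrt(A) - math.sqrt(B) * r)) / (2.0 * s)
    return F(r2) - F(r1)

# Illustrative profile: A = 13 km/s, B = 1e-6 (km/s)/km^2, radii in km.
t = radial_time(13.0, 1e-6, 100.0, 1000.0)
```

Comparing such closed forms against a full 3D calculator is exactly the validation strategy motivated in the abstract.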

Mean First-Passage Time Calculations for the Coil-to-Helix Transition: The Active Helix Ising Model ... The kinetics and thermodynamics of the coil-to-helix transition is studied using a one-dimensional Zimm-Bragg Ising model. ... 4. Mean First-Passage Time for the Active-Helix Ising Model ...

In order to analyze the singularities of a power series function P(t) on the boundary of its convergent disc, we introduced the space Ω(P) of opposite power series in the opposite variable s=1/t, where P(t) was, mainly, the growth function (Poincare ...

We discuss a model for non-linear quantum evolution based on the idea of time displaced entanglement, produced by taking one member of an entangled pair on a round trip at relativistic speeds, thus inducing a time-shift between the pair. We show that decoherence of the entangled pair is predicted. For non-maximal entanglement this then implies the ability to induce a non-unitary, non-linear quantum evolution. Although exhibiting unusual characteristics, we show that these evolutions cannot be dismissed on the basis of entropic or causal arguments.

This paper considers approaches to assessing the validity of the overall structure of macro-econometric models, specifically the Area Wide Model of the European Central Bank. By structure, we refer to the dynamic (business-cycle) and steady-state features that the model purports to capture. This leads to two types of tests. The first, drawing on the DSGE literature, is concerned with whether the model matches business-cycle (high-frequency) data characteristics. This is implemented by a Cholesky bootstrap whereby the steady state of the model is stochastically simulated using historically consistent covariances. The generated data are analysed for stylised-facts fitting and, similarly, using the model's implied spectral characteristics, for congruence with the data in terms of persistence, periodicity and spectral fit. Moment matching, however, is only one aspect of overall model evaluation. Consequently, we move to tests that combine high-frequency aspects (short-horizon forecasts) with long-run features (such as the existence and identification of steady states and trends). Recursive forecasting tests form the second part. The forecasts attempt to measure the accuracy of model-based forecasts both simulated out-of-sample and in an in-sample exercise. The out-of-sample exercise analyses the 1- to 8-step-ahead forecasting ability of the model. For this, the model is re-estimated each time on a subset of the original

models that can accommodate trade-offs between time and space: 1) AND/OR Adaptive Caching (AOC(i)); 2) ... show that AOC(i) is better than the vanilla versions of both VEC(i) and TDC(i), and use the guiding principles of AOC(i) to improve the other two schemes. Finally, we show that the improved versions of VEC ...

Here we consider the time evolution of a one-dimensional quantum system with a double barrier given by a couple of two repulsive Dirac's deltas. In such a "pedagogical" model we give, by means of the theory of quantum resonances, the explicit expression of the dominant terms of $\langle \psi, e^{-itH} \phi \rangle$, where $H$ is the double-barrier Hamiltonian operator and where $\psi$ and $\phi$ are two test functions.

We studied the statistical properties of interoccurrence times, i.e., the time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and found that these statistics can be classified into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of the interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime. However, the fitting accuracy of this distribution tends to worsen as the system changes from the subcritical state to the supercritical state. Because the critical phase of a fault system in nature changes from the subcritical state to the supercritical state prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can serve as another precursory measure associated with large earthquakes.
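The two ingredients of such an analysis, an empirical survivor function and a Zipf-Mandelbrot-type law to fit it against, can be sketched directly. The functional form below is a common normalization choice and an assumption on our part, not a formula quoted from the record:

```python
import numpy as np

def empirical_survivor(times):
    """Empirical survivor function of interoccurrence times:
    S(t_k) = fraction of observed intervals strictly greater than t_k,
    evaluated at the sorted sample values."""
    t = np.sort(np.asarray(times, dtype=float))
    s = 1.0 - np.arange(1, t.size + 1) / t.size
    return t, s

def zipf_mandelbrot(t, c, p):
    """Zipf-Mandelbrot-type survivor law S(t) = (c / (t + c))**p,
    normalized so that S(0) = 1; c and p would be fitted to the data."""
    return (c / (t + c)) ** p
```

Comparing the empirical curve against the fitted law, and tracking how the fit quality degrades, is the precursory measure the abstract proposes.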

A real-time simulation model for a 440 t/h circulating fluidized bed (CFB) boiler is presented. The dynamic mathematical model ... predicts the static and dynamic characteristics of the CFB boiler, on the basis of principle an...


Wind energy is becoming a top contributor to the renewable energy mix, which raises potential reliability issues for the grid due to the fluctuating nature of its source. To achieve adequate reserve commitment and to promote market participation, it is necessary to provide models that can capture daily patterns in wind power production. This paper presents a cyclic inhomogeneous Markov process, which is based on a three-dimensional state-space (wind power, speed and direction). Each time-dependent transition probability is expressed as a Bernstein polynomial. The model parameters are estimated by solving a constrained optimization problem: The objective function combines two maximum likelihood estimators, one to ensure that the Markov process long-term behavior reproduces the data accurately and another to capture daily fluctuations. A convex formulation for the overall optimization problem is presented and its applicability demonstrated through the analysis of a case-study. The proposed model is capable of r...
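The Bernstein-polynomial representation of a time-dependent transition probability is easy to sketch. The coefficients below are hypothetical, standing in for one (i, j) entry of the wind-power transition matrix; keeping them in [0, 1] guarantees the polynomial stays in [0, 1]:

```python
import math

def bernstein(t, coeffs):
    """Evaluate a Bernstein polynomial at t in [0, 1]:
    p(t) = sum_k c_k * C(n, k) * t**k * (1 - t)**(n - k)."""
    n = len(coeffs) - 1
    return sum(c * math.comb(n, k) * t**k * (1 - t) ** (n - k)
               for k, c in enumerate(coeffs))

# Hypothetical diurnal profile for one transition probability; t is the
# fraction of the day elapsed, so the endpoints give the midnight values.
coeffs = [0.1, 0.6, 0.2, 0.4]
p_noon = bernstein(0.5, coeffs)
```

A convenient property visible here is that the endpoint values equal the first and last coefficients, which makes boundary constraints in the estimation problem linear in the coefficients.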

Abstract A four-stage probabilistic damage model is proposed on the basis of cross-scale damage processes to deal with local corrosion cracking of oil pipelines. First, some key parameters for lifetime prediction are determined; then the probabilistic damage model is formulated and numerically calculated using Monte Carlo simulation (MCS). Furthermore, the model is applied to an example in order to check its validity. The results show that the life-span of this pipeline is nearly 20.55 years, and that the pipe wall thickness, operating pressure difference, and corrosion electric current density are the three key parameters determining the life-span of this pipeline; the longest examination and repair period should be less than 4.71 years for safety when a surface crack length of 10 mm can be detected reliably.
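The Monte Carlo step can be illustrated with a deliberately simplified caricature: a crack growing at a random rate until it penetrates the wall. All numbers and the linear-growth assumption are ours, not the paper's calibrated four-stage model:

```python
import random

random.seed(1)

def mc_lifetime(n_samples=20000):
    """Toy Monte Carlo lifetime estimate: sample a growth rate, compute the
    time for the defect to cross the wall, and summarize the distribution."""
    wall = 10.0                               # wall thickness, mm (illustrative)
    lifetimes = []
    for _ in range(n_samples):
        rate = random.gauss(0.5, 0.1)         # mm/year growth rate (illustrative)
        rate = max(rate, 1e-3)                # guard against non-physical rates
        lifetimes.append(wall / rate)
    lifetimes.sort()
    mean = sum(lifetimes) / n_samples
    p5 = lifetimes[int(0.05 * n_samples)]     # 5th percentile: a "design" life
    return mean, p5

mean_life, design_life = mc_lifetime()
```

The gap between the mean life and a low percentile is what drives conclusions like the paper's inspection-interval recommendation: the safe interval is set by the pessimistic tail, not the average.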

InfiniBand (IB) has established itself as a promising network infrastructure for high-end cluster computing systems as evidenced by its usage in the Top500 supercomputers today. While the IB standard describes multiple communication models (including reliable-connection (RC), and unreliable datagram (UD)), most of its promising features such as remote direct memory access (RDMA), hardware atomics and network fault tolerance are only available for the RC model which requires connections between communicating process pairs. In the past, several researchers have proposed on-demand connection management techniques that establish connections when there is a need to communicate, and not before. While such techniques work well for algorithms and applications that only communicate with a small set of processes in their life-time, there exists a broad set of applications that do not follow this trend. For example, applications that perform dynamic load balancing and adaptive work stealing have a small set of communicating neighbors at any given time, but over time the total number of neighbors can be very high; in some cases, equal to the entire system size. In this paper, we present a dynamic time-variant connection management approach that establishes connections on-demand like previous approaches, but further intelligently tears down some of the unused connections as well. While connection tear-down itself is relevant for any programming model, different models have different complexities. In this paper, we study the Global Arrays (GA) PGAS model for two reasons: (1) the simple one-sided communication primitives provided by GA and other PGAS models ensure that connection requests are always initiated by the origin process without explicit synchronization with the target process---this makes connection tear-down simpler to handle; and (2) GA supports several applications that demonstrate this behavior making it an obvious first target for the proposed enhancements. 
We evaluate our proposed approach using several micro-benchmarks as well as the NWChem computational chemistry application on more than 6000 processes, and show that our approach can significantly reduce the memory requirements of the communication library while maintaining its performance. To the best of our knowledge, this is the first design, implementation and evaluation of connection tear-down protocols over InfiniBand.
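The tear-down policy can be sketched independently of InfiniBand: connections open on demand and the least-recently-used one is evicted when a capacity budget is hit. This is only the cache-policy idea; `open_fn`/`close_fn` are hypothetical stand-ins for the transport's connect and tear-down calls, and the real GA/IB protocol involves more machinery:

```python
from collections import OrderedDict

class ConnectionCache:
    """Sketch of time-variant connection management: on-demand establishment
    plus LRU tear-down when the number of live connections hits capacity."""
    def __init__(self, capacity, open_fn, close_fn):
        self.capacity = capacity
        self.open_fn = open_fn
        self.close_fn = close_fn
        self.conns = OrderedDict()              # peer -> connection, LRU order

    def get(self, peer):
        if peer in self.conns:
            self.conns.move_to_end(peer)        # mark as recently used
            return self.conns[peer]
        if len(self.conns) >= self.capacity:
            old_peer, old_conn = self.conns.popitem(last=False)
            self.close_fn(old_peer, old_conn)   # tear down the LRU connection
        conn = self.open_fn(peer)
        self.conns[peer] = conn
        return conn

opened, closed = [], []
cache = ConnectionCache(2,
                        lambda p: (opened.append(p), p)[1],
                        lambda p, c: closed.append(p))
for peer in [0, 1, 0, 2]:   # peer 1 is least recently used when 2 arrives
    cache.get(peer)
```

The one-sided-communication point made above matters here: because only the origin initiates a connection, an eviction never needs to be negotiated with the target before the origin reconnects later.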

GOES-R: The Geostationary Operational Environmental Satellite R-Series Program Frequently Asked Questions. GOES-R is a satellite series carrying six instruments designed to enhance weather forecasting, severe ... The GOES-R series of satellites will fly improved spacecraft and instrument technologies, for more timely ...

Thermal photons from the photosphere may be the primary source of the observed prompt emission of gamma-ray bursts (GRBs). In order to produce the observed non-thermal spectra, some kind of dissipation mechanism near the photosphere is required. In this paper we numerically simulate the evolution of the photon spectrum in a relativistically expanding shell with a time-dependent numerical code. We consider two basic models. One is a leptonic model, where a dissipation mechanism heats the thermal electrons maintaining their high temperature. The other model involves a cascade process induced by pp(pn)-collisions which produce high-energy electrons, modify the thermal spectrum, and emit neutrinos. The qualitative properties of the photon spectra are mainly determined by the optical depth at which the dissipation mechanism sets in. Too large optical depths lead to a broad and curved spectrum contradicting the observations, while for optical depths smaller than unity the spectral hardness becomes softer than observed. A significant shift of the spectral peak energy to higher energies due to a large energy injection can lead to an overly broad spectral shape. We show ideal parameter ranges for which these models are able to reproduce the observed spectra. For the pn-collision model, the neutrino fluence in the 10-100 GeV range is well above the atmospheric neutrino fluence, but its detection is challenging for presently available detectors.

Critical relaxation from a low-temperature fully ordered state of Fe{sub 2}/V{sub 13} iron-vanadium magnetic superlattice models has been studied using the method of short-time dynamics. Systems with three variants of the ratio R of inter- to intralayer exchange coupling have been considered. Particles with N = 262144 spins have been simulated with periodic boundary conditions. Calculations have been performed using the standard Metropolis algorithm of the Monte Carlo method. The static critical exponents of magnetization and correlation radius, as well as the dynamic critical exponent, have been calculated for three R values. It is established that a small decrease in the exchange ratio (from R = 1.0 to 0.8) does not significantly influence the character of the short-time dynamics in the models studied. A further significant decrease in this ratio (to R = 0.01), for which a transition from three-dimensional to quasi-two-dimensional magnetism is possible, leads to significant changes in the dynamic behavior of iron-vanadium magnetic superlattice models.

A model-based technique for real-time estimation of absolute fluorine concentration in a CF4/Ar plasma ... for quantitative interpretation of actinometric data to deduce bulk plasma fluorine concentration in a CF4/Ar plasma, for application of real-time feedback control to plasma etching. Based upon a model of CF4 chemistry reaction ...

Initial Simulation Results of Storm-Time Ring Current in a Self-Consistent Magnetic Field Model. A strong and time-dependent perturbation of the magnetospheric magnetic field B is considered; we assume for simplicity a model for B such that magnetic field lines lie in meridional planes ...

The multivariate model testing procedure of Frankignoul et al. has been extended to the general time-series case, thus making it possible to test the ability of ocean models to simulate the interannual variability. The method aims at distinguishing between model ...

Absorbing boundary conditions for waveguide ports in time domain are important elements of transient approaches to treat RF structures. A successful way to implement these termination conditions is the decomposition of the transient fields in the absorbing plane in terms of modal field patterns. The absorbing condition is then accomplished by transferring the wave impedances (or admittances) of the modes to time domain, which leads to convolution operations involving Bessel functions and integrals of Bessel functions. This paper presents a new alternative approach: the convolution operations are approximated by appropriate state-space models whose system responses can be conveniently computed by standard integration schemes. These schemes are indispensable for transient simulations anyhow. Sufficiently far away from the cutoff frequency, a wideband match is achieved.
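
The key idea, replacing an expensive convolution with a cheap state-space recursion that standard time-integration schemes can advance, can be sketched for a toy exponential kernel (the paper's actual kernels involve Bessel functions and their integrals; the function names and parameters below are ours):

```python
import math

def convolve_direct(u, dt, a=-2.0, b=1.0, c=1.0):
    """Direct O(N^2) evaluation of the convolution
    y[n] = dt * sum_{k<=n} h((n-k)*dt) * u[k]
    for the impulse response h(t) = c * exp(a*t) * b."""
    return [dt * sum(c * math.exp(a * (n - k) * dt) * b * u[k]
                     for k in range(n + 1))
            for n in range(len(u))]

def convolve_state_space(u, dt, a=-2.0, b=1.0, c=1.0):
    """The same convolution via a one-state linear system x' = a*x + b*u,
    y = c*x, advanced one step at a time in O(N) total work."""
    x, y = 0.0, []
    for un in u:
        x += b * dt * un            # B-matrix: inject the current input
        y.append(c * x)             # C-matrix: observe the state
        x *= math.exp(a * dt)       # A-matrix: exact one-step decay
    return y

u = [1.0, 0.0, 0.5, 2.0, 0.0]
direct = convolve_direct(u, dt=0.1)
recursive = convolve_state_space(u, dt=0.1)
```

For this scalar kernel the recursion reproduces the direct sum exactly; approximating a Bessel-function kernel would require a few such states fitted to the kernel, which is the approximation the paper proposes.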


We further investigate, both analytically and numerically, the properties of the fractal two-compartment model introduced by Fuite et al. [J. Fuite, R. Marsh, and J. Tuszynski, Phys. Rev. E 66, 021904 (2002)]. Specifically, we look at the effects of the fractal exponent of the elimination rate coefficient on the long-time behavior of the pharmacokinetic clearance tail. For small exponent values, the tail exhibits exponential behavior, while for larger values, there is a transition to a power law. The theory is applied to seven data sets simulating drugs taken from the pharmacological literature.
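
The qualitative transition described, exponential-like tails for small fractal exponents and power-law tails for larger ones, can be illustrated with a one-compartment caricature whose elimination rate coefficient decays as a power of time. This is only a sketch; the parameter names and values are ours, not those of Fuite et al.:

```python
import math

def clearance(t, c0=1.0, k=0.5, alpha=0.3):
    """Concentration under a time-dependent elimination rate k(t) = k * t**(-alpha).

    Solving dC/dt = -k * t**(-alpha) * C gives, for alpha < 1,
    C(t) = c0 * exp(-k * t**(1 - alpha) / (1 - alpha)), a stretched
    exponential; in the limiting case alpha = 1 the tail becomes the
    power law C(t) = c0 * t**(-k).
    """
    if alpha == 1.0:
        return c0 * t ** (-k)
    return c0 * math.exp(-k * t ** (1.0 - alpha) / (1.0 - alpha))

# Small exponent: essentially exponential decay of the clearance tail;
# exponent near one: a much slower power-law tail at the same late time.
fast = clearance(100.0, alpha=0.1)
slow = clearance(100.0, alpha=1.0)
```

Comparing the two values at t = 100 shows how strongly the exponent controls how much drug remains in the long-time tail.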

The growing size and complexity of many websites have made navigation through these sites increasingly difficult. Attempting to automatically predict the next page a website user will visit has many potential benefits, for example in site navigation, automatic tour generation, adaptive web applications, recommendation systems, web server optimisation, web search, and web pre-fetching. This paper describes an approach to link prediction using a Markov chain model based on an exponentially smoothed transition probability matrix which incorporates site usage statistics collected over multiple time periods. The improved performance of this approach compared to earlier methods is also discussed.
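
The core idea, exponentially smoothing per-period transition probabilities so recent usage outweighs old usage, can be sketched as follows (an illustrative reading of the approach, not the paper's exact formulation; the page names are invented):

```python
def smooth_transitions(period_counts, alpha=0.7):
    """Exponentially smoothed transition-probability matrix for link prediction.

    period_counts: list (oldest first) of dicts {(page_i, page_j): count}
    giving observed page-to-page transitions in each time period.  Newer
    periods receive geometrically larger weight via the smoothing constant.
    """
    smoothed = {}
    for counts in period_counts:                  # oldest -> newest
        # Row-normalise this period's counts into transition probabilities.
        row_totals = {}
        for (i, j), c in counts.items():
            row_totals[i] = row_totals.get(i, 0) + c
        probs = {(i, j): c / row_totals[i] for (i, j), c in counts.items()}
        # Standard exponential smoothing update: S <- alpha*x + (1-alpha)*S.
        keys = set(smoothed) | set(probs)
        smoothed = {k: alpha * probs.get(k, 0.0)
                       + (1 - alpha) * smoothed.get(k, 0.0)
                    for k in keys}
    return smoothed

def predict_next(smoothed, page):
    """Most probable next page from `page` under the smoothed matrix."""
    candidates = {j: p for (i, j), p in smoothed.items() if i == page}
    return max(candidates, key=candidates.get) if candidates else None

# Usage shifted recently from "docs" to "blog"; smoothing tracks the shift.
history = [{("home", "docs"): 8, ("home", "blog"): 2},
           {("home", "docs"): 3, ("home", "blog"): 7}]
matrix = smooth_transitions(history, alpha=0.7)
```

Because the newest period dominates, the predicted next page from "home" follows the recent shift rather than the larger historical count.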

Although the mechanisms responsible for heating the Sun's corona and accelerating the solar wind are still being actively investigated, it is largely accepted that photospheric motions provide the energy source and that the magnetic field must play a key role in the process. \\citet{2010ApJ...708L.116V} presented a model for heating and accelerating the solar wind based on the turbulent dissipation of Alfv\\'en waves. We first use a time-dependent model of the solar wind to reproduce one of \\citeauthor{2010ApJ...708L.116V}'s solutions; then we extend its application to the case when the energy equation includes thermal conduction and radiation losses, and the upper chromosphere is part of the computational domain. Using this model, we explore parameter space and describe the characteristics of a fast-solar-wind solution. We discuss how this formulation may be applied to a 3D MHD model of the corona and solar wind \\citep{2009ApJ...690..902L}.

We use the AdS/CFT correspondence to study the resummation of a perturbative genus expansion appearing in the type II superstring dual of ABJM theory. Although the series is Borel summable, its Borel resummation does not agree with the exact non-perturbative answer due to the presence of complex instantons. The same type of behavior appears in the WKB quantization of the quartic oscillator in Quantum Mechanics, which we analyze in detail as a toy model for the string perturbation series. We conclude that Borel summability is not enough for extracting non-perturbative information, and one has to add explicit non-perturbative effects associated to complex instantons. We also analyze the resummation of the genus expansion for topological string theory on local P1xP1, which is closely related to ABJM theory. In this case, the non-perturbative answer involves membrane instantons computed by the refined topological string, which are crucial to produce a well-defined result. We give evidence that the Borel resummation of the perturbative series requires such a non-perturbative sector.

Mean First-Passage Time Calculations for the Coil-to-Helix Transition: The Active Helix. The coil-to-helix transition is studied using a one-dimensional "Zimm-Bragg" Ising model. Mean first-passage times are calculated for the one-dimensional Ising model for arbitrary spin-spin coupling (J) and external field (H), where J and H are expressed ...


A Self-assembly Model of Time-Dependent Glue Strength. Sudheer Sahu, Peng Yin, and John H. Reif. Abstract: Self-assembly is a ubiquitous process in which small objects self-organize into larger structures ... model for theoretical studies of self-assembly. We propose a refined self-assembly model in which ...

For the purposes of both traffic-light control and the design of roadway layouts, it is important to understand pedestrian street-crossing behavior because it is not only crucial for improving pedestrian safety but also helps to optimize vehicle flow. This paper explores the mechanism of pedestrian street crossings during the red-man phase of traffic light signals and proposes a model for pedestrians' waiting times at signalized intersections. We start from a simplified scenario for a particular pedestrian under specific traffic conditions. Then we take into account the interaction between vehicles and pedestrians via statistical unconditioning. We show that this in general leads to a U-shaped distribution of the pedestrian's intended waiting time. This U-shaped distribution characterizes the nature of pedestrian street-crossing behavior, showing that in general there are a large proportion of pedestrians who cross the street immediately after arriving at the crossing point, and a large proportion of pedestrians who are willing to wait for the entire red-man phase. The U-shaped distribution is shown to reduce to a J-shaped or L-shaped distribution for certain traffic scenarios. The proposed statistical model was applied to analyze real field data.
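
A toy simulation makes the U-shape plausible: if each pedestrian either complies (waiting out the whole red-man phase) or crosses at the first acceptable vehicle gap (a short random wait), the waiting-time distribution piles up at both ends. All parameter values below are illustrative assumptions, not fitted to the paper's field data:

```python
import random

def simulate_waits(n, red_phase=60.0, p_comply=0.45, gap_rate=0.5, seed=0):
    """Toy model of intended waiting times at a signalized crossing.

    With probability p_comply a pedestrian waits the full red-man phase;
    otherwise they cross at an exponentially distributed vehicle gap
    (capped at the phase length).  Mixing the two groups yields a
    U-shaped histogram: mass near zero and mass at the phase end.
    """
    rng = random.Random(seed)
    waits = []
    for _ in range(n):
        if rng.random() < p_comply:
            waits.append(red_phase)                       # waits it out
        else:
            waits.append(min(rng.expovariate(gap_rate), red_phase))
    return waits

waits = simulate_waits(2000)
frac_short = sum(w < 5.0 for w in waits) / len(waits)     # mass near zero
frac_full = sum(w == 60.0 for w in waits) / len(waits)    # mass at phase end
```

Most of the probability mass sits at the two extremes, with little in between, which is the qualitative signature the paper reports.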

Simulations of a high-confinement-mode (H-mode) tokamak discharge with infrequent giant type-I ELMs are performed by the multi-fluid, multi-species, two-dimensional transport code UEDGE-MB, which incorporates the Macro-Blob approach for intermittent non-diffusive transport due to filamentary coherent structures observed during the Edge Localized Modes (ELMs) and simple time-dependent multi-parametric models for cross-field plasma transport coefficients and working gas inventory in material surfaces. Temporal evolutions of pedestal plasma profiles, divertor recycling, and wall inventory in a sequence of ELMs are studied and compared to the experimental time-dependent data. Short- and long-time-scale variations of the pedestal and divertor plasmas where the ELM is described as a sequence of macro-blobs are discussed. It is shown that the ELM recovery includes the phase of relatively dense and cold post-ELM divertor plasma evolving on a several ms scale, which is set by the transport properties of the H-mode barrier. The global gas balance in the discharge is also analyzed. The calculated rates of working gas deposition during each ELM and wall outgassing between ELMs are compared to the ELM particle losses from the pedestal and neutral-beam-injection fueling rate, respectively. A sensitivity study of the pedestal and divertor plasmas to model assumptions for gas deposition and release on material surfaces is presented. The performed simulations show that the dynamics of pedestal particle inventory is dominated by the transient intense gas deposition into the wall during each ELM followed by continuous gas release between ELMs at roughly a constant rate.


Modeling the Transitory Behavior of Speech Using a Time-varying Transmission-line Model. Amit S. Abstract: In this study, a transmission-line model of speech production (1) is modified so that changes ... equivalent transmission line comprising Resistors (R), Capacitors (C), and Inductors (L). A radiation ...

In previous studies, 11 elements (Al, As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Se, and Zn) were determined in 30-minute aerosol samples collected with the University of Maryland Semicontinuous Elements in Aerosol Sampler (SEAS; Kidwell and Ondov, 2001, 2004; SEAS-II) in several locations in which air quality is influenced by emissions from coal- or oil-fired power plants. At this time resolution, plumes from stationary high temperature combustion sources are readily detected as large excursions in ambient concentrations of elements emitted by these sources (Pancras et al.). Moreover, the time-series data contain intrinsic information on the lateral diffusion of the plume (e.g., {sigma}{sub y}), which Park et al. (2005 and 2006) have exploited in their Pseudo-Deterministic Receptor Model (PDRM), to calculate emission rates of SO{sub 2} and 11 elements (mentioned above) from four individual coal- and oil-fired power plants in the Tampa Bay area. In the current project, we proposed that the resolving power of source apportionment methods might be improved by expanding the set of marker species and that there exists some optimum set of marker species that could be used. The ultimate goal was to determine the utility of using additional elements to better identify and isolate contributions of individual power plants to ambient levels of PM and its constituents. And, having achieved better resolution, to also achieve better emission-rate estimates. In this study, we optimized sample preparation and instrumental protocols for simultaneous analysis of 28 elements in dilute slurry samples collected with the SEAS with a new state-of-the-art Thermo-Systems, Inc., X-series II Inductively Coupled Plasma Mass Spectrometer (ICP-MS), and reanalyzed the samples previously collected in Tampa during the modeling period studied by Park et al. (2005) in which emission rates from four coal- and oil-fired power plants affected air quality at the sampling site. In the original model, Park et al. 
(2005), included 6 sources. Herein, we reassessed the number of contributing sources in light of the new data. A comprehensive list of sources was prepared and both our Gaussian Plume model and PMF were used to identify and predict the relative strengths of source contributions at the receptor sites. Additionally, PDRM was modified to apply National Inventory Emissions, Toxic Release Inventory, and Chemical Mass Balance source profile data to further constrain solutions. Both the original Tampa data set (SO{sub 2} plus 11 elements) and the new expanded data set (SO{sub 2} plus 23 elements) were used to resolve the contributions of particle constituents and PM to sources using Positive Matrix Factorization (PMF) and PDRM.

At the high temperatures found in the modified Claus reaction furnace, the thermal decomposition and oxidation of H[sub 2]S yields large quantities of desirable products, gaseous hydrogen (H[sub 2]) and sulfur (S[sub 2]). However, as the temperature of the gas stream is lowered in the waste heat boiler (WHB) located downstream of the furnace, the reverse reaction occurs, leading to reassociation of H[sub 2] and S[sub 2] molecules. To examine the reaction quenching capabilities of the WHB, a rigorous computer model was developed incorporating recently published intrinsic kinetic data. A sensitivity study performed with the model demonstrated that WHBs have a wide range of operation, with gas mass flux in the tubes from 4 to 24 kg/(m[sup 2] [center dot] s). Most important, the model showed that it was possible to operate WHBs such that quench times could be decreased to 40 ms, which is a reduction by 60% compared to a base case scenario. Furthermore, hydrogen production could be increased by over 20% simply by reconfiguring the WHB tubes.

Boost your knowledge on how to implement an energy management system through this four-part webinar series from the Superior Energy Performance program. Each webinar introduces various elements of the ISO 50001 energy management standard, based on the Plan-Do-Check-Act approach, and the associated steps of DOE's eGuide for ISO 50001 software tool.

Bay Area Global Health Seminar Series. "Moving beyond millennium targets in global health: The challenges of investing in health and universal health coverage." Although targets can help to focus global health efforts, they can also detract attention from deeper underlying challenges in global health ...

Bay Area Global Health Seminar Series. Monday, January 27, 2014, 2:30pm–4:00pm (Reception to follow at the Center for Health Policy and the Woods Institute for the Environment.) He studies how economic, political, and natural environments affect population health in developing countries using a mix of experimental ...

... techniques. We demonstrate the system on a model of a coal-fired power plant composed of more than 15 million triangles using such techniques. Choosing a 15-million-triangle model of a coal-fired electric power plant (Image 1) as our ... (Graphics data structures), I.3.7 Three-Dimensional Graphics and Realism (Virtual reality), J.2 Physical ...

Short-term forecasting of travel time is essential for the success of intelligent transportation systems. In this paper, we review the state of the art in short-term traffic forecasting models and outline the basic ideas, related work, advantages, and disadvantages of each model. An improved adaptive exponential smoothing (IAES) model is also proposed to overcome the drawbacks of the previous adaptive exponential smoothing model. Comparative experiments are then carried out under normal and abnormal traffic conditions to evaluate the performance of four main branches of forecasting models on direct travel-time data obtained by license plate matching (LPM). The results show that each model has its own strengths and weaknesses. The forecasting performance of IAES is superior to the other models over shorter forecasting horizons (one- and two-step forecasting), and IAES is capable of dealing with all kinds of traffic conditions.
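
The general idea of adaptive exponential smoothing can be sketched with the classical Trigg–Leach scheme, where the smoothing constant is re-set each step from a tracking signal so it grows when forecasts are persistently biased (e.g. under abnormal traffic) and shrinks under stable conditions. This is a generic textbook scheme, not the paper's IAES model:

```python
def adaptive_ses(series, beta=0.2):
    """Adaptive simple exponential smoothing (Trigg-Leach style sketch).

    alpha is re-set each step to |E|/M, the ratio of the smoothed error E
    to the smoothed absolute error M, so the level reacts quickly to a
    genuine regime shift but stays calm under mere noise.
    """
    level = series[0]
    E = M = 0.0
    forecasts = [level]
    for x in series[1:]:
        err = x - level
        E = beta * err + (1 - beta) * E          # smoothed (signed) error
        M = beta * abs(err) + (1 - beta) * M     # smoothed absolute error
        alpha = abs(E) / M if M > 1e-12 else beta
        level += alpha * err                     # adaptive level update
        forecasts.append(level)
    return forecasts

# Travel times jump from ~10 to ~20 minutes; alpha snaps to 1 at the shift.
data = [10, 10, 10, 10, 20, 20, 20, 20]
fc = adaptive_ses(data)
```

With a fixed small alpha the forecast would lag the jump for many steps; the adaptive constant locks onto the new level in a single step here.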

Context: Chemical models of dense cloud cores often utilize the so-called pseudo-time-dependent approximation, in which the physical conditions are held fixed and uniform as the chemistry occurs. In this approximation, the initial abundances chosen, which are totally atomic in nature except for molecular hydrogen, are artificial. A more detailed approach to the chemistry of dense cold cores should include the physical evolution during their early stages of formation. Aims: Our major goal is to investigate the initial synthesis of molecular ices and gas-phase molecules as cold molecular gas begins to form behind a shock in the diffuse interstellar medium. The abundances calculated as the conditions evolve can then be utilized as reasonable initial conditions for a theory of the chemistry of dense cores. Methods: Hydrodynamic shock-wave simulations of the early stages of cold core formation are used to determine the time-dependent physical conditions for a gas-grain chemical network. We follow the cold post-sho...

Discrete maps have been extensively used to model 2-dimensional chaotic transport in plasmas and fluids. Here we focus on area-preserving maps describing finite Larmor radius (FLR) effects on ${\\bf E} \\times {\\bf B}$ chaotic transport in magnetized plasmas with zonal flows perturbed by electrostatic drift waves. FLR effects are included by gyro-averaging the Hamiltonians of the maps which, depending on the zonal flow profile, can have monotonic or non-monotonic frequencies. In the limit of zero Larmor radius, the monotonic frequency map reduces to the standard Chirikov-Taylor map, and, in the case of non-monotonic frequency, the map reduces to the standard nontwist map. We show that in both cases FLR leads to chaos suppression, changes in the stability of fixed points, and robustness of transport barriers. FLR effects are also responsible for changes in the phase space topology and zonal flow bifurcations. Dynamical systems methods based on recurrence time statistics are used to quantify the dependence on the Larmor radius of the threshold for the destruction of transport barriers.
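
In the monotonic-frequency case, gyro-averaging amounts to multiplying the kick amplitude of the Chirikov-Taylor standard map by the Bessel factor J0(rho), so finite Larmor radius suppresses the effective nonlinearity. A minimal sketch of that map (our own parameter names; the series-based J0 is an approximation adequate for moderate arguments):

```python
import math

def j0(x, terms=20):
    """Bessel function J0 via its power series (fine for moderate |x|)."""
    return sum((-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def gyroaveraged_standard_map(theta, p, K, rho):
    """One iteration of the standard map with the kick gyro-averaged over a
    finite Larmor radius rho: K -> K * J0(rho).  At rho = 0 this reduces to
    the usual Chirikov-Taylor map."""
    p_new = p + K * j0(rho) * math.sin(theta)
    theta_new = (theta + p_new) % (2 * math.pi)
    return theta_new, p_new

# FLR suppresses the kick: the effective nonlinearity shrinks as rho grows.
k_eff_zero = 1.0 * j0(0.0)   # rho = 0: the bare K
k_eff_flr = 1.0 * j0(2.0)    # rho = 2: strongly reduced amplitude
```

Since chaos in the standard map sets in above a critical kick strength, shrinking the effective K with rho is one concrete way FLR effects can suppress chaos and strengthen transport barriers.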



An approach to test climate models with observations is presented. In this approach, it is possible to directly observe the longwave feedbacks of the climate system in time series of annual average outgoing longwave spectra. Tropospheric ...

This work introduces a new functional series for expanding an analytic function in terms of an arbitrary analytic function. It is generally applicable and straightforward to use, and is also suitable for approximating the behavior of a function with a few terms. A new expression is presented for the nth derivative of a composite function. This work also treats the inverse-composite method.

Colloquium Series. The Center for Nanoscale Materials holds a regular biweekly colloquium on alternate Wednesday afternoons at 4:00 p.m. in Bldg. 440, Room A105/106. The goal of the series is to provide a forum for topical multidisciplinary talks in areas of interest to the CNM and also to offer a mechanism for fostering interactions with potential facility users. Refreshments will be served at 3:45. January 15, 2013: "Friction, Brownian Motion, and Energy Dissipation Mechanisms in Adsorbed Molecules and Molecularly Thin Films: Heating, Electrostatic and Magnetic Effects," by Jacquelin Krim, North Carolina State University, hosted by Diana Berman. Abstract: In the study of friction at the nanoscale, phononic, electrostatic, conduction electron, and magnetic effects all contribute to the dissipation mechanisms. Electrostatic and magnetic contributions are increasingly alluded to in the current literature, but they remain poorly characterized. I will first give an overview of the nature of these various contributions, and then report on our observations of magnetic and electrostatic contributions to friction for various systems in the presence and absence of external fields. I will also report on the use of a quartz crystal microbalance with a graphene/Ni(111) electrode to probe frictional heating effects in Kr monolayers sliding on the microbalance electrode in response to its oscillatory motion.

Real-Time Track Prediction of Tropical Cyclones over the North Indian Ocean Using the ARW Model. ... of Technology Bhubaneswar, Odisha, India; A. Routray, National Centre for Medium Range Weather Forecasting, Noida. The performance of the Advanced Research version of the Weather Research and Forecasting (ARW) model in real ...

Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pron. L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10{sup 5} calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, thereby risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density by applying adiabatic invariants including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculation using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly due to the long computing time of accurate L* values. Without them, real-time applications are limited in accuracy. 
Reanalysis applications of past conditions in the inner magnetosphere are used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data suddenly becomes feasible.


Power System Load models have a wide range of application in the electric power industry including applications involving: (i) load management policy monitoring; (ii) assisting with ... A method that has been uti...

Using a bootstrap analysis of solar irradiation time series, we model solar farms which sell their power output at ... one used in the province of Ontario, Canada. We show that the feed-in tariff ... remove the f...

Free-form surface machining is a fundamental but time-consuming process in modern manufacturing. The central question we ask in this thesis is how to reduce the time that it takes for a 5-axis CNC (Computer Numerical ...

This paper considers the protein structure prediction problem for lattice and off-lattice protein folding models that explicitly represent side chains. Lattice models of proteins have proven extremely useful tools for reasoning about protein folding in unrestricted continuous space through analogy. This paper provides the first illustration of how rigorous algorithmic analyses of lattice models can lead to rigorous algorithmic analyses of off-lattice models. The authors consider two side chain models: a lattice model that generalizes the HP model (Dill 85) to explicitly represent side chains on the cubic lattice, and a new off-lattice model, the HP Tangent Spheres Side Chain model (HP-TSSC), that generalizes this model further by representing the backbone and side chains of proteins with tangent spheres. They describe algorithms for both of these models with mathematically guaranteed error bounds. In particular, the authors describe a linear time performance guaranteed approximation algorithm for the HP side chain model that constructs conformations whose energy is better than 86% of optimal in a face centered cubic lattice, and they demonstrate how this provides a 70% performance guarantee for the HP-TSSC model. This is the first algorithm in the literature for off-lattice protein structure prediction that has a rigorous performance guarantee. The analysis of the HP-TSSC model builds on the work of Dancik and Hannenhalli, who have developed a 16/30 approximation algorithm for the HP model on the hexagonal close packed lattice. Further, the analysis provides a mathematical methodology for transferring performance guarantees on lattices to off-lattice models. These results partially answer the open question of Karplus et al. concerning the complexity of protein folding models that include side chains.
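
For readers unfamiliar with the HP model, its energy function simply rewards lattice-adjacent contacts between non-consecutive hydrophobic (H) residues. A minimal sketch on the square lattice (the paper works on cubic/FCC lattices with explicit side chains, so this illustrates only the basic energy function):

```python
def hp_energy(sequence, coords):
    """Energy of an HP-model conformation: -1 per non-bonded H-H contact.

    sequence: string of 'H'/'P' residues; coords: list of lattice points
    (tuples), one per residue, forming a self-avoiding walk on the square
    lattice.  Lower (more negative) energy means a better fold.
    """
    assert len(set(coords)) == len(coords), "walk must be self-avoiding"
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):    # skip chain neighbours
            if sequence[i] == sequence[j] == 'H':
                # Manhattan distance 1 means the residues touch on the lattice.
                dist = sum(abs(a - b) for a, b in zip(coords[i], coords[j]))
                if dist == 1:
                    energy -= 1
    return energy

# A 4-residue U-turn bringing the two terminal H's into contact:
e = hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)])
```

An approximation algorithm with a performance guarantee, as in the paper, constructs conformations whose `hp_energy` is provably within a fixed fraction of the optimum over all self-avoiding walks.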

This report briefly describes experimental validation of a computer model used to analyze LMFBR type core transients. This model is used to predict coolant, cladding, and fuel temperature distributions during transient overpower accidents. (JDH)


Next Brookhaven Lecture Next Brookhaven Lecture JAN 22 Wednesday Brookhaven Lecture "491st Brookhaven Lecture: Juergen Thieme of Photon Sciences Directorate" Presented by Juergen Thieme, Brookhaven Lab's Photon Sciences Directorate 4 pm, Berkner Hall Auditorium Wednesday, January 22, 2014, 4:00 pm Hosted by: Allen Orville Refreshments will be served before and after the lecture. Brookhaven Lectures are free and open to the Public. Visitors to the Laboratory age 16 and older must bring photo ID. About the Brookhaven Lecture Series Gertrude Scharff-Goldhaber Gertrude Scharff-Goldhaber The Brookhaven Lectures, held by and for the Brookhaven staff, are meant to provide an intellectual meeting ground for all scientists of the Laboratory. In this role they serve a double purpose: they are to acquaint

A trajectory-free walking control scheme is proposed for an actuated biped robot using the NMPC method, in order to carry out real-time gait programming. The basic feature of the proposed strategy is the use of an iterative on-line optimization approach to compute ... Keywords: NMPC, biped robot, real-time gait programming


Exploration risk can be decreased by high-grading areas where the timing of structural events and maturation of source rocks are nearly coincident. Knowledge of migration fairways further aids in focusing exploration. Four burial-history models have been constructed to accommodate (1) a rift-fill sequence in excess of 24,000 ft, (2) a hypothetical Fairfield basin model, (3) a model using a deep well, and (4) a model on the Sparta shelf. These complex models, which use several variables including compaction, thermal conductivity, kerogen kinetics, and multiple unconformities, indicate a possibility for multiple hydrocarbon-generative events and show that linear geothermal gradients are ineffective in explaining maturation in Illinois. Periods of oil generation determined from the models can be compared with known timing of structural events to predict trapping potential. Depths to the oil phase-out zone are also significant. Exploration risk can be reduced in Illinois by using a simple migration model that uses the basal Upper Devonian Sylamore Sandstone in central and western Illinois as a migration conduit and the New Albany Group as a source. Other migration conduits in the basin are discussed including faults associated with structures and fracture systems such as the Wabash Valley fault system.

Sub-national TIMES model for analyzing regional future use of biomass and biofuels in France. Introduction: Renewable energy sources such as biomass and biofuels are increasingly being seen as important ... of biofuels on the final consumption of energy in transport should be 10%. The long-term target is to reduce

Peatland carbon cycle responses to hydrological change at time scales from years to centuries: Impacts on model simulations and regional carbon budgets By Benjamin N. Sulman A dissertation submitted to the long-term storage of carbon in peat, these ecosystems contain a significant fraction of the global

Two models in particular are of great interest to mathematicians, namely the Ising model of a magnet and the percolation model of a porous solid. These models in turn are part of the unifying framework of the random-cluster representation, a model...

The objective of this work is to develop an optimization model for the medium-term planning of single-stage continuous multiproduct plants. Several types of SKUs (Stock Keeping Units) are produced. Customers place orders that represent multiples of SKUs, and these orders must be delivered at the end of each week. When different SKU types are processed, sequence-dependent changeover times and costs are incurred. The problem is represented as a mixed-integer linear programming (MILP) model with a hybrid time representation. The objective is to maximize profit, which involves sales revenues, production costs, product changeover costs, inventory costs and late-delivery penalties. The proposed optimization-based model is validated in a real-world polymer processing plant.
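
As a toy illustration of the trade-off such a model captures, the sketch below enumerates production sequences for three hypothetical SKU types and picks the one maximizing revenue minus sequence-dependent changeover cost. The SKU names, revenues, and changeover costs are all invented, and brute-force enumeration stands in for the paper's MILP formulation:

```python
from itertools import permutations

# Hypothetical data for three SKU types (illustrative, not from the paper)
revenue = {"A": 120.0, "B": 90.0, "C": 150.0}
# Sequence-dependent changeover cost: changeover[(from_sku, to_sku)]
changeover = {("A", "B"): 5, ("A", "C"): 12, ("B", "A"): 7,
              ("B", "C"): 4, ("C", "A"): 10, ("C", "B"): 6}

def profit(seq):
    """Revenue of every SKU produced minus the changeover costs incurred."""
    p = sum(revenue[s] for s in seq)
    p -= sum(changeover[(a, b)] for a, b in zip(seq, seq[1:]))
    return p

# Enumerate all production sequences and keep the most profitable one
best_seq = max(permutations(revenue), key=profit)
best_profit = profit(best_seq)
```

A real instance with many SKUs and weekly delivery constraints needs the MILP solver; the enumeration only illustrates why changeover ordering matters.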

Sample records for time series models from the National Library of Energy Beta (NLEBeta)

Note: This page contains sample records for the topic "time series models" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.

Using time-dependent density functional theory (TDDFT), we examine the energy-, angular-, and time-resolved photoelectron spectra (TRPES) of ethylene in a pump-probe setup. To simulate TRPES we expose ethylene to an ultraviolet (UV) femtosecond pump pulse, followed by a time-delayed extreme ultraviolet (XUV) probe pulse. Studying the photoemission spectra as a function of this delay gives us direct access to the dynamic evolution of the molecule's electronic levels. Further, by including the nuclei's motion, we provide direct insight into the chemical reactivity of ethylene. These results show how angular- and energy-resolved TRPES could be used to directly probe electron and nuclear dynamics in molecules.

(e.g. porosity, permeability). More recently, the availability of repeated seismic surveys over the time scale of years (i.e., 4D seismic) has shown promising results for the qualitative determination of changes in fluid phase distributions...

A general framework for performance optimization of continuous-time OTA-C (Operational Transconductance Amplifier-Capacitor) filters is proposed. Efficient procedures for evaluating nonlinear distortion and noise valid for any filter of arbitrary...

Human factors are very important for the reliability of a nuclear power plant. Human behavior has essentially time-dependent nature. The details of thinking and decision making processes are important for deta...

There is growing demand for high-reliability embedded systems that operate robustly and autonomously in the presence of tight real-time constraints. For robotic spacecraft, robust plan execution is essential during ...

Recent advances in autonomy have enabled a future vision of single operator control of multiple heterogeneous Unmanned Vehicles (UVs). Real-time scheduling for multiple UVs in uncertain environments will require the ...

Scalable thermal runaway models for cook-off of energetic materials (EMs) require realistic temperature- and pressure-dependent chemical reaction rates. The Sandia Instrumented Thermal Ignition apparatus was developed to provide in situ small-scale test data that address this model requirement. Spatially and temporally resolved internal temperature measurements have provided new insight into the energetic reactions occurring in PBX 9501, LX-10-2, and PBXN-109. The data have shown previously postulated reaction steps to be incorrect and suggest previously unknown reaction steps. Model adjustments based on these data have resulted in better predictions at a range of scales.

Model Predictive Control (MPC) is a control strategy that is suitable for optimizing the performance of constrained systems. Constraints are present in all control systems due to the physical and environmental limits on ...

One-dimensional advectiondiffusion and advectiondiffusiondilution (or leaky-pipe) models have been widely used to interpret a variety of geophysical phenomena. For example, in the ocean these tools have been...

This paper investigates issues in modeling of current-mode control in converters. The effects of the current-sampling intrinsic to current-mode control are analyzed, and inadequately recognized limitations of linear ...

are investigated. A doubly-fed induction generator (DFIG)-based DG unit and a series capacitor (SC) and a thyristor DFIG units. The converter of the DFIG is modeled as an unbalanced harmonic-generating source

This paper presents a probabilistic influence model for smartphone usage; it applies a latent group model to social influence. The probabilistic model is built on the assumption that a time series of students' application downloads and activations can ... Keywords: NMF, behavior prediction, latent structure analysis, matrix factorization, mobile application, recommendation, user influence

It has been found that, for the Supernova Legacy Survey three-year (SNLS3) data, there is strong evidence for the redshift-evolution of the color-luminosity parameter $\\beta$. In previous studies, only dark energy (DE) models were used to explore the effects of a time-varying $\\beta$ on parameter estimation. In this paper, we extend the discussion to the case of modified gravity (MG), by considering the Dvali-Gabadadze-Porrati (DGP) model, a power-law type $f(T)$ model and an exponential type $f(T)$ model. In addition to the SNLS3 data, we also use the latest Planck distance priors data, the galaxy clustering (GC) data extracted from Sloan Digital Sky Survey (SDSS) data release 7 (DR7) and the Baryon Oscillation Spectroscopic Survey (BOSS), as well as the direct measurement of the Hubble constant $H_0$ from the Hubble Space Telescope (HST) observation. We find that, for both cases of using the supernova (SN) data alone and using the combination of all data, adding a parameter of $\\beta$ can reduce $\\chi^2$ by $\\sim$ 36 for all the MG models, showing that a constant $\\beta$ is ruled out at 6$\\sigma$ confidence level (CL). Moreover, we find that a time-varying $\\beta$ always yields a larger fractional matter density $\\Omega_{m0}$ and a smaller reduced Hubble constant $h$; in addition, it significantly changes the shapes of the 1$\\sigma$ and 2$\\sigma$ confidence regions of various MG models, and thus corrects systematic bias in the parameter estimation. These conclusions are consistent with the results of DE models, showing that $\\beta$'s evolution is completely independent of the background cosmological model. Therefore, our work highlights the importance of considering the evolution of $\\beta$ in cosmology fits.
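
The quoted significance can be sanity-checked with a one-line application of Wilks' theorem: the chi-squared improvement from one extra free parameter is (asymptotically) chi-squared distributed with one degree of freedom, so a rough Gaussian-equivalent significance is the square root of the improvement. This is a rule of thumb under the theorem's regularity assumptions, not the paper's own calculation:

```python
import math

# Abstract quotes delta chi^2 ~ 36 from adding one parameter (beta_1).
# For 1 extra degree of freedom, sqrt(delta_chi2) approximates the
# Gaussian-equivalent number of sigmas.
delta_chi2 = 36.0
significance_sigma = math.sqrt(delta_chi2)
```

The result, 6 sigma, matches the confidence level quoted in the record.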

We propose a unified physical framework for transport in variably saturated porous media. This approach allows fluid flow and solute migration to be treated as ensemble averages of fluid and solute particles, respectively. We consider the cases of homogeneous and heterogeneous porous materials. Within a fractal mobile-immobile continuous-time random-walk framework, the heterogeneity is characterized by algebraically decaying particle retention times. We derive the corresponding (nonlinear) continuum-limit partial differential equations and compare their solutions to Monte Carlo simulation results. The proposed methodology is fairly general and can be used to track fluid and solute particle trajectories for a variety of initial and boundary conditions.
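
A minimal Monte Carlo sketch of the continuous-time random-walk idea, assuming a plain one-dimensional lattice walk with Pareto-distributed waiting times as a stand-in for algebraically decaying retention times. All parameter values (tail exponent, horizon, particle count) are invented for illustration:

```python
import random

random.seed(42)

def ctrw_positions(n_particles=2000, t_max=100.0, alpha=0.7):
    """Continuous-time random walk: heavy-tailed (Pareto) waiting times
    with tail exponent alpha < 1 mimic algebraically decaying retention
    times; jumps are unit +/-1 steps. Illustrative only."""
    positions = []
    for _ in range(n_particles):
        t, x = 0.0, 0
        while True:
            # Pareto waiting time via inverse transform: w = u**(-1/alpha)
            w = random.random() ** (-1.0 / alpha)
            if t + w > t_max:
                break
            t += w
            x += random.choice((-1, 1))
        positions.append(x)
    return positions

pos = ctrw_positions()
mean_disp = sum(pos) / len(pos)          # should be near zero (unbiased walk)
msd = sum(x * x for x in pos) / len(pos) # ensemble mean-squared displacement
```

Comparing how `msd` grows with `t_max` against the linear growth of a Gaussian walk is the standard way such simulations exhibit anomalous (sub)diffusion.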

Eighty planetary systems of two or more planets are known to orbit stars other than the Sun. For most, the data can be sufficiently explained by non-interacting Keplerian orbits, so the dynamical interactions of these systems have not been observed. Here we present four sets of lightcurves from the Kepler spacecraft, each showing multiple planets transiting the same star. Departure of the timing of these transits from strict periodicity indicates the planets are perturbing each other: the observed timing variations match the forcing frequency of the other planet. This confirms that these objects are in the same system. Next we limit their masses to the planetary regime by requiring that the system remain stable for astronomical timescales. Finally, we report dynamical fits to the transit times, yielding possible values for the planets' masses and eccentricities. As the timespan of timing data increases, dynamical fits may allow detailed constraints on the systems' architectures, even in cases for which high-precision Doppler follow-up is impractical.
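
The detection principle, departure of transit times from strict periodicity, can be sketched with synthetic data: fit a linear ephemeris t_n = t0 + P*n by least squares and inspect the O-C (observed minus calculated) residuals. The period, epoch, and sinusoidal perturbation below are invented, not Kepler values:

```python
import math

# Synthetic transit times: linear ephemeris plus a small sinusoidal
# perturbation standing in for the forcing by a companion planet.
P_true, t0_true, amp = 10.0, 5.0, 0.02   # days; illustrative numbers
n = list(range(20))
t_obs = [t0_true + P_true * k + amp * math.sin(2 * math.pi * k / 7.0)
         for k in n]

# Closed-form least-squares fit of the straight line t = t0 + P*n
N = len(n)
sx, sy = sum(n), sum(t_obs)
sxx = sum(k * k for k in n)
sxy = sum(k * t for k, t in zip(n, t_obs))
P_fit = (N * sxy - sx * sy) / (N * sxx - sx * sx)
t0_fit = (sy - P_fit * sx) / N

# O-C residuals: a coherent oscillation here is the TTV signal
o_minus_c = [t - (t0_fit + P_fit * k) for k, t in zip(n, t_obs)]
```

In the real analysis the residuals are then compared against the forcing frequency expected from the other transiting planet.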

This paper develops a novel approach by which to identify the price of oil at the time of depletion; the so-called terminal price of oil. It is shown that while the terminal price is independent of both GDP growth and the price elasticity of energy...

Technology mediated healthcare services designed to stimulate patients' self-efficacy are widely regarded as a promising paradigm to reduce the burden on the healthcare system. The promotion of healthy, active living is a topic of growing interest in ... Keywords: Personalization, Physical activity, Real time coaching, Tailoring, Telemedicine, eHealth

Science, Washington University Electrical and Computer Engineering Dept. St. Louis, MO 63130, USA 2000 Conference, IFIP/ACM, Palisades, New York, April 3-7, 2000. Abstract With the recent adoption recently, however, there were no CORBA ORBs that targeted high-performance and real-time systems, which

systems, such as e-healthcare and smart grids, have been drawing increasing attention in both industry is evaluated at packet level (e.g., packet send/delivery ratio [8], the number of jammed packets [11 metrics cannot be readily adapted to measure the jamming impact on time-critical appli- cations. Further


their exploration and development investment timing decisions, positive information externalities and negative ... production decisions and profits depend on the decisions of firms owning neighboring tracts of land ... in econometrics, in industrial organization, in microeconomic theory, in energy economics, and in environm...

The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. 
Customers will be given access to OSPREY to use and evaluate the model.
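
One quantity the report mentions, bed capacity derived from breakthrough data, can be illustrated with a hedged sketch: integrate (1 - C/C0) over time up to saturation to get the stoichiometric time, then multiply by the feed rate. The curve, time points, and feed rate below are invented toy numbers, not OSPREY outputs:

```python
# Hypothetical breakthrough curve C/C0 vs. time for a krypton-like
# adsorbate (all values made up for illustration)
times = [0, 10, 20, 30, 40, 50, 60]            # minutes (assumed)
c_over_c0 = [0.0, 0.0, 0.05, 0.3, 0.7, 0.95, 1.0]

def trapz(y, x):
    """Trapezoidal integration of y over abscissae x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

feed_rate = 2.0  # mmol of adsorbate fed per minute (assumed)
# Area above the breakthrough curve = stoichiometric time of the bed
stoich_time = trapz([1.0 - c for c in c_over_c0], times)   # minutes
bed_capacity = feed_rate * stoich_time                     # mmol retained
```

Capacity obtained this way is what would then be used to size adsorption columns, as the record describes.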

a sparse set of parameters describing the energy consumption of a refrigerator and enables us to predict ... in the time series. Our motivating example is the problem of modeling and predicting energy consumption ... learning the parameters of multiple linear systems as well as the change points that describe when

...on a desktop computer. This is compared...features of the system. The important...resistance. The analysis also identified...and uncertainty analysis: applications to large-scale systems, vol. 2. Boca...for sensitivity analysis of large models...European Symp. on Computer Aided Process...

Online Social Networks (OSNs) have, in recent years, emerged as a new way to communicate, diffuse information, coordinate people, establish relationships, among other possibilities. In this context, being able to understand and predict how users behave ... Keywords: microblogging, modeling, online social network, simulation

-year response lag, of aquifers which run dry by the end of the dry season, and of subsurface runoff. The surface increases and slow recessions. These features prevent the use of ARMA-type models [Box and Jenkins, 1976] ... acts as the input of a system that is representative of the transformations operated by the watershed

and recovery in Wireless Sensor Networks (WSNs) have utilized the spatio-temporal statistics of real world signals in order to achieve good performance in terms of energy savings and improved signal reconstruction address this gap by devising a mathematical model for real world signals that are correlated in space

......approaches (SDH, BS, WT, HPNK) all...correlations between the nuclear SFR and BHAR than...albeit at the cost of the smaller...excluding ONB) is BS with beta1...by events in the nuclear region, and the...for models SDH, BS, WT, and HPNK...between a high nuclear SFR and a small......

To assess the climate impacts of historical and projected land cover change in the Community Climate System Model, version 4 (CCSM4), new time series of transient Community Land Model, version 4 (CLM4) plant functional ...

... Yusong Cao and Dr. Xiaobo Chen for their kind help and valuable comments on my dissertation topic. I deeply appreciate the help from my classmates and other group members, including Liqing Huang, Maopeng Fang, John Bandas, Zhiyong Su and Amitava Guha... Contents: ...-craft (p. 89); 5.2 RAOs of the T-craft (p. 90); 5.3 Time-domain Responses of the Three-body Floating System (p. 95); VI Conclusions and Future Work...

Multielectron excited states have become a hot topic in many cutting-edge research fields, such as the photophysics of polyenes and the possibility of multiexciton generation in quantum dots for the purpose of increasing solar cell efficiency. However, obtaining multielectron excited states has been a major obstacle, as it is often done with multiconfigurational methods, which involve formidable computational cost for large systems. Although they are computationally much cheaper than multiconfigurational wave-function-based methods, linear-response adiabatic time-dependent Hartree-Fock (TDHF) and density functional theory (TDDFT) are generally considered incapable of obtaining multielectron excited states. We have developed a real-time TDHF and adiabatic TDDFT approach that is beyond the perturbative regime. We show that TDHF/TDDFT is able to simultaneously excite two electrons from the ground state to the doubly excited state and that real-time TDHF/TDDFT implicitly includes double excitation within a superposition state. We also present a multireference linear response theory to show that the real-time electron density response corresponds to a superposition of perturbative linear responses of the S0 and S2 states. As a result, the energy of the two-electron doubly excited state can be obtained with several different approaches. This is done within the adiabatic approximation of TDDFT, a realm in which the doubly excited state has been deemed missing. We report results on simple two-electron systems, including the energies and dipole moments for the two-electron excited states of H2 and HeH+. These results are compared to those obtained with the full configuration interaction method.

The objective of this modeling and simulation study was to establish the role of stress wave interactions in the genesis of traumatic brain injury (TBI) from exposure to explosive blast. A high resolution (1 mm^3 voxels), five-material model of the human head was created by segmentation of color cryosections from the Visible Human Female dataset. Tissue material properties were assigned from literature values. The model was inserted into the shock physics wave code, CTH, and subjected to a simulated blast wave of 1.3 MPa (13 bars) peak pressure from anterior, posterior and lateral directions. Three dimensional plots of maximum pressure, volumetric tension, and deviatoric (shear) stress demonstrated significant differences related to the incident blast geometry. In particular, the calculations revealed focal brain regions of elevated pressure and deviatoric (shear) stress within the first 2 milliseconds of blast exposure. Calculated maximum levels of 15 kPa deviatoric, 3.3 MPa pressure, and 0.8 MPa volumetric tension were observed before the onset of significant head accelerations. Over a 2 msec time course, the head model moved only 1 mm in response to the blast loading. Doubling the blast strength changed the resulting intracranial stress magnitudes but not their distribution. We conclude that stress localization, due to early time wave interactions, may contribute to the development of multifocal axonal injury underlying TBI. We propose that a contribution to traumatic brain injury from blast exposure, and most likely blunt impact, can occur on a time scale shorter than previous model predictions and before the onset of linear or rotational accelerations traditionally associated with the development of TBI.

The numerical simulation of multiphase flows in Light Water (Nuclear) Reactors, LWRs, for normal, accident, and off-normal operation, and for operational optimization, must cover a huge disparity of transient time durations, from milliseconds to years. In addition, our recent work has shown that the application of classical Riemann approaches, which pervade modern computational fluid dynamics (CFD), suffers numerical accuracy degradation, especially for compressible liquid flows. In this setting, all-speed or Mach-uniform methods are needed which can be accurately and efficiently integrated over a very large range of time scales. Thus we need a multi-time-scale integration approach to complement our previously documented multi-spatial-scale approach to multiphase flow modeling [1]. This report briefly summarizes our efforts in these areas.

We propose a new method for imaging activation time within three-dimensional (3D) myocardium by means of a heart-excitation model. The activation time is estimated from body surface electrocardiograms by minimizing multiple objective functions of the measured body surface potential maps (BSPMs) and the heart-model-generated BSPMs. Computer simulation studies have been conducted to evaluate the proposed 3D myocardial activation time imaging approach. Single-site pacing at 24 sites throughout the ventricles, as well as dual-site pacing at 12 pairs of sites in the vicinity of atrio-ventricular ring, was performed. The present simulation results show that the average correlation coefficient (CC) and relative error (RE) for single-site pacing were 0.9992 ± 0.0008/0.9989 ± 0.0008 and 0.05 ± 0.02/0.07 ± 0.03, respectively, when 5 µV/10 µV Gaussian white noise (GWN) was added to the body surface potentials. The average CC and RE for dual-site pacing were 0.9975 ± 0.0037 and 0.08 ± 0.04, respectively, when 10 µV GWN was added to the body surface potentials. The present simulation results suggest the feasibility of noninvasive estimation of activation time throughout the ventricles from body surface potential measurement, and suggest that the proposed method may become an important alternative in imaging cardiac electrical activity noninvasively.
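
The two validation metrics quoted in the record, correlation coefficient (CC) and relative error (RE), can be computed as sketched below. The activation-time values are toy numbers, not from the simulation study:

```python
import math

def cc_and_re(true_map, est_map):
    """Correlation coefficient (CC) and relative error (RE) between a
    true and an estimated activation-time map, the two validation
    metrics quoted in the abstract."""
    n = len(true_map)
    mt = sum(true_map) / n
    me = sum(est_map) / n
    num = sum((t - mt) * (e - me) for t, e in zip(true_map, est_map))
    den = math.sqrt(sum((t - mt) ** 2 for t in true_map)
                    * sum((e - me) ** 2 for e in est_map))
    cc = num / den
    re = (math.sqrt(sum((t - e) ** 2 for t, e in zip(true_map, est_map)))
          / math.sqrt(sum(t ** 2 for t in true_map)))
    return cc, re

# Toy activation times (ms) at a handful of myocardial sites (made up)
true_at = [10.0, 20.0, 30.0, 40.0, 50.0]
est_at = [11.0, 19.0, 31.0, 39.0, 51.0]
cc, re = cc_and_re(true_at, est_at)
```

Values of CC near 1 and RE near 0, as in the abstract's 0.999/0.05-level results, indicate close agreement between estimated and true activation maps.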

A three-dimensional transient model for time-domain (modulated) free-carrier absorption (FCA) measurement was developed to describe the transport dynamics of photo-generated excess carriers in silicon (Si) wafers. With the developed transient model, numerical simulations were performed to investigate the dependence of the waveforms of the transient FCA signals on the electronic transport parameters of Si wafers and the geometric parameters of the FCA experiment. Experimental waveforms of FCA signals of both n- and p-type Si wafers with resistivity ranging from 1 to 38 Ω·cm were then fitted to the three-dimensional transient model to extract simultaneously and unambiguously the transport parameters of the Si wafers, namely the carrier lifetime, the carrier diffusion coefficient, and the front-surface recombination velocity, via multi-parameter fitting. Basic agreement between the extracted parameter values and the literature values was obtained.

Chapter 6: Power Series. Power series are one of the most useful types of series in analysis ... functions (and many other less familiar functions). 6.1 Introduction. A power series (centered at 0) ... coefficients. If all but finitely many of the a_n are zero, then the power series is a polynomial function

to minimize the parameter directly, and sometimes necessary to accomplish regularization through a different measure of m. With higher-order Tikhonov, it is desirable to regularize the first or second derivative of m. Here the roughening matrix L ... the first derivative. Higher-order Tikhonov can be formulated as a damped least squares problem, $\\min \\|Gm - d\\|_2^2 + \\alpha^2 \\|Lm\\|_2^2$ (3.15), where in the case of first-order Tikhonov regularization the roughening matrix L is defined as $L = \\begin{bmatrix} -1 & 1 & 0 & \\cdots & 0 \\\\ 0 & -1 & 1 & & \\vdots \\\\ & & \\ddots & \\ddots & \\end{bmatrix}$...
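
A self-contained sketch of first-order Tikhonov (damped least squares) regularization: solve the normal equations (G^T G + alpha^2 L^T L) m = G^T d with a first-difference roughening matrix L. The small G, data d, and alpha below are invented illustrative values (a near-identity forward operator with one extra summing row):

```python
# Pure-Python first-order Tikhonov sketch:
#   min ||G m - d||^2 + alpha^2 ||L m||^2
# solved via the normal equations (G^T G + alpha^2 L^T L) m = G^T d.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

G = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
d = [1.0, 2.0, 3.0, 6.1]                   # data, slightly inconsistent
L = [[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]]   # first-difference roughening matrix
alpha = 0.1

Gt, Lt = transpose(G), transpose(L)
A = matmul(Gt, G)
P = matmul(Lt, L)
A = [[A[i][j] + alpha ** 2 * P[i][j] for j in range(3)] for i in range(3)]
rhs = [sum(Gt[i][k] * d[k] for k in range(4)) for i in range(3)]
m_reg = solve(A, rhs)   # regularized model estimate
```

Larger alpha trades data misfit for smoothness of m; alpha = 0 recovers ordinary least squares.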

Exponential smoothing is often used to forecast lead-time demand (LTD) for inventory control. In this paper, formulae are provided for calculating means and variances of LTD for a wide variety of exponential smoothing methods. A feature of many of the formulae is that variances, as well as the means, depend on trends and seasonal effects. Thus, these formulae provide the opportunity to implement methods that ensure that safety stocks adjust to changes in trend or changes in season. An example using weekly sales shows how safety stocks can be seriously underestimated during peak sales periods.
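
A minimal sketch of the safety-stock idea, not the paper's formulae: estimate the demand level by simple exponential smoothing, then, assuming independent per-period forecast errors with standard deviation sigma, take the lead-time-demand (LTD) standard deviation as sqrt(L)*sigma. The sales data, lead time, sigma, and service factor are all invented:

```python
import math

def ses_level(demand, alpha=0.2):
    """Simple exponential smoothing; returns the final level estimate."""
    level = demand[0]
    for y in demand[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

weekly_sales = [100, 120, 90, 110, 105, 130, 95, 115]  # made-up data
L_weeks, sigma, z = 3, 12.0, 1.645   # lead time, error sd, ~95% service factor

level = ses_level(weekly_sales)
ltd_mean = L_weeks * level                        # mean demand over lead time
safety_stock = z * math.sqrt(L_weeks) * sigma     # buffer against forecast error
reorder_point = ltd_mean + safety_stock
```

The paper's point is that with trend and seasonality the LTD variance itself changes over time, so a fixed sigma like the one above understates safety stock in peak periods.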


This paper is part of student cooperation in the AKTION project (Austria-Czech). The Taylor series method for solving differential equations represents a non-traditional way of obtaining a numerical solution. Even though this method is not much preferred in the literature, experimental calculations done at the Department of Intelligent Systems of the Faculty of Information Technology of TU Brno have verified that the accuracy and stability of the Taylor series method exceed those of the currently used algorithms for numerically solving differential equations. The paper deals with possibilities of numerical solution of Initial Value Problems of Ordinary Differential Equations (ODEs) using the Taylor series method with automatic computation of higher Taylor series terms. The explicit and implicit schemes of the Taylor series method are compared with numerical solvers implemented in the MATLAB software [1]. The computation time and accuracy of our approach are compared with those of the MATLAB ODE solvers on a set of ODE test examples [2].
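
For the test equation y' = y the "automatic computation of higher Taylor series terms" is especially simple, since every higher derivative equals y, so each Taylor term is the previous one times h/k. A sketch (step size, order, and interval are arbitrary choices, not the paper's settings):

```python
import math

def taylor_step(y, h, order=10):
    """One explicit Taylor-series step for the test equation y' = y.
    Every derivative equals y, so the k-th Taylor term is the previous
    term times h/k -- automatic higher-term generation in miniature."""
    term, total = y, y
    for k in range(1, order + 1):
        term *= h / k
        total += term
    return total

# Integrate y' = y, y(0) = 1 from t = 0 to t = 1 with step 0.1
y, h = 1.0, 0.1
for _ in range(10):
    y = taylor_step(y, h)
error = abs(y - math.e)   # exact solution is y(1) = e
```

With ten Taylor terms per step the truncation error is far below floating-point noise, which is the accuracy advantage the paper attributes to the method.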


) for any piecewise continuous, uniformly bounded reference input signal r(t). In order to meet the control objective, we make the following standard assumptions [26, 53] concerning the plant Gp(s) and the reference model Wm(s): (A1) Rp(s) is a monic polynomial of degree n. (A2) Zp(s) is a monic Hurwitz polynomial of degree m ≤ n. (A3) The sign of kr and an upper bound k̄ > 0 on |kr| are known. Without any loss of generality, kr is assumed to be positive; thus kr ≤ k̄. (A4) The relative degree n...

CASTLE was an atmospheric nuclear weapons test series held in the Marshall Islands at Enewetak and Bikini atolls in 1954. This is a report on DOD personnel in CASTLE, with an emphasis on operations and radiological safety.

It has been found that, for the Supernova Legacy Survey three-year (SNLS3) data, there is strong evidence for the redshift-evolution of color-luminosity parameter $\\beta$. In this paper, adopting the $w$-cold-dark-matter ($w$CDM) model and considering its interacting extensions (with three kinds of interaction between dark sectors), we explore the evolution of $\\beta$ and its effects on parameter estimation. In addition to the SNLS3 data, we also take into account the Planck distance priors data of the cosmic microwave background (CMB), the galaxy clustering (GC) data extracted from SDSS DR7 and BOSS, as well as the direct measurement of Hubble constant from the Hubble Space Telescope (HST) observation. We find that, for all the interacting dark energy (IDE) models, adding a parameter of $\\beta$ can reduce $\\chi^2$ by $\\sim$ 34, indicating that $\\beta_1 = 0$ is ruled out at 5.8$\\sigma$ confidence level (CL). Furthermore, it is found that varying $\\beta$ can significantly change the fitting results of various ...

ACE Learning Series - Adoption, Compliance, and Enforcement. Buildings account for almost 40% of the energy used in the United States and, as a direct result of that use, our environment and economy are impacted. Building energy codes and standards provide an effective response. The Building Energy Codes Program (BECP) designed the ACE Learning Series for those in the building industry having the greatest potential to influence the adoption of and compliance with building energy codes and standards. The Learning Series consists of:

To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority (BA) locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to capture these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
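
A stripped-down stand-in for the seasonal ARMA simulation, an AR(1) with innovation variance chosen to hit a target series standard deviation, shows how a simulated series can be checked against target statistics. The autoregressive coefficient, target sigma, and series length are invented, not fitted values from the paper:

```python
import random, math

random.seed(7)

def simulate_ar1(n, phi, sigma):
    """Simulate a zero-mean AR(1) series e_t = phi*e_{t-1} + w_t, with the
    innovation sd chosen so the stationary series sd is sigma -- a minimal
    stand-in for the seasonal ARMA model described in the abstract."""
    innov_sd = sigma * math.sqrt(1.0 - phi * phi)
    e = [random.gauss(0.0, sigma)]
    for _ in range(n - 1):
        e.append(phi * e[-1] + random.gauss(0.0, innov_sd))
    return e

series = simulate_ar1(20000, phi=0.8, sigma=100.0)  # MW-scale, illustrative

# Check that the simulation reproduces the target statistics
mean = sum(series) / len(series)
var = sum((x - mean) ** 2 for x in series) / len(series)
lag1 = (sum((series[i] - mean) * (series[i + 1] - mean)
            for i in range(len(series) - 1)) / (len(series) - 1)) / var
```

Verifying sample mean, standard deviation, and lag-1 autocorrelation against their targets is exactly the kind of validation the paper applies to its seasonal ARMA simulations.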

Science Series Video Archive. Couldn't make it to the last Science Series lecture? Did you like a lecture so much that you just had to see it again? Not to worry! Past lectures are now available on demand. "The Higgs Boson and Our Life": On July 4th, 2012, the ATLAS and CMS experiments operating at the CERN Large Hadron Collider (LHC) announced the discovery of a new particle compatible with the Higgs boson (hunted for almost 50 years), which is a crucial piece for our understanding of fundamental physics and thus the structure and evolution of the universe. This lecture describes the unprecedented instruments and challenges that have allowed such an accomplishment, the meaning and relevance of this discovery to physics... April 30, 2013

stitching and occlusion filling for human motion. In partic- ular, we provide a metric for evaluating-efficient way, which can save a significant percentage of electric power consumption in data centers. #12;vi #12

...On September 28, a 200,000-person Liberty Loan Drive took place on the streets...September 27, one day before the notorious Liberty Loan parade, and seven days before the...research group meetings at which this project was begun. The authors declare no conflict...

A growth-based Bayesian inverse method is presented for deriving emissions of atmospheric trace species from temporally sparse measurements of their mole fractions. This work is motivated by many recent studies that have ...

Continuously operating Global Positioning System (GPS) networks record station position changes with millimeter-level accuracy and have revealed transient deformations on various spatial and temporal scales. However, the ...

Demand-side energy management has become an important issue. In order to support energy planning and policy decisions, forecasting future demand is very important. Thus, forecasting the f...