In finding shortest paths, the operation of successively finding a minimum from a list of numbers may require more work than the remainder of the algorithm. It is shown how algorithms from the sorting literature can be used to accomplish this part of the shortest-path algorithm. Bounds on the largest possible amount of work are established, and results of a

Given a number of obstacles in a plane, the problem of computing a geodesic (or the shortest path) between two points has been studied extensively. However, the case where the obstacles are circular discs has not been explored as much as it deserves. In this paper, we present an algorithm to compute a geodesic among a set of mutually disjoint

to compute and store a partial shortest-path tree (PSPT) for each node. The PSPTs have the property of being an extremely small fraction of the entire network; hence, PSPTs can be stored efficiently and each shortest path can be computed extremely quickly. For a real network with 5 million nodes and 69 million

You know you're in for a real treat when a lecture starts off with "I just happen to have with me today this bucket filled with soap solution, water, and some glycerin." That happens to be the opening line from a talk given by Professor Michael Dorff at the Mathematical Association of America (MAA). Dorff's talk was quite hands-on and it included a number of skeletal Zometool creations and deconstructed Slinkies, among other items. The title of the talk was "Shortest Paths, Soap Films, and Minimal Surfaces" and it is available here in its entirety. In the lecture, Dorff discusses (and demonstrates) the shortest distance between four points, neighborhood accessibility, and a number of other fascinating topics.

Abstract. The on-line shortest-path problem is considered under partial monitoring scenarios. At each round, a decision maker has to choose a path between two distinguished vertices of a weighted directed acyclic graph whose edge weights can change in an arbitrary (adversarial) way, such that the loss of the chosen path (defined as the sum of the weights of its

We present a new speedup technique for route planning that exploits the hierarchy inherent in real-world road networks. Our algorithm preprocesses the eight-digit number of nodes needed for maps of the USA or Western Europe in a few hours using linear space. Shortest (i.e. fastest) path queries then take around eight milliseconds to produce exact shortest paths. This

We conduct an extensive computational study of shortest-path algorithms, including some very recent algorithms. We also suggest new algorithms motivated by the experimental results and prove interesting theoretical results suggested by the experimental data. Our computational study is based on several natural problem classes which identify strengths and weaknesses of various algorithms. These problem classes and algorithm implementations form

tree, whose child nodes are all leaves, the associated player can reach a decision by simply ... moved to its parent node. In this way, we can ... towards t, since the cost of the path from s to v does not influence the decision in v.

This paper deals with the complete characterization of the shortest paths for a car-like robot. Previous works have shown that the search for a shortest path may be limited to a simple family of trajectories. Our work completes this study by providing a way to select, inside this family, an optimal path linking any two configurations. We combine the

The fractal structure of shortest paths depends strongly on the inter-residue interaction cutoff distance. Taking the cutoff distance as a variable, the paths are self-similar above 6.8 Å with a fractal dimension of 1.12, remarkably close to the Euclidean dimension. Below 6.8 Å, paths are multifractal. The number of steps to traverse a shortest path is a discontinuous function of cutoff size at short wavelengths. An algorithm is introduced to determine the residues on a given shortest path. The Shannon entropy of information transport between two residues along a shortest path is lower than the entropies along longer paths between the same two points, leading to the conclusion that communication over shortest paths yields the most efficient lossless encoding.

In recent years we have seen a growing interest in irregular network topologies for cluster interconnects. One problem related to such topologies is that combining shortest-path and deadlock-free routing is difficult. As a result, the existing solutions for routing in irregular networks either guarantee shortest paths relative to some constraint (like up*/down*),

The Routing Continuum from Shortest-path to All-path: A Unifying Theory. Yanhua Li, Zhi-Li Zhang. In data and sensor networks, routing strategies such as shortest-path, multi-path and potential-based ("all-path") routing have been developed. Based on the connection between routing and flow optimization

Solving shortest-path problems inside simple polygons is a very classical problem in motion planning. To date, it has usually relied on triangulation of the polygons. The question: "Can one devise a simple O(n) time algorithm for computing the shortest path between two points in a simple polygon (with n vertices), without resorting to a (complicated) linear-time triangulation algorithm?" raised by J. S. B. Mitchell in Handbook of Computational Geometry (J. Sack and J. Urrutia, eds., Elsevier Science B.V., 2000), is still open. The aim of this paper is to show that convexity contributes to the design of efficient algorithms for solving some versions of shortest-path problems (namely, computing the convex hull of a finite set of points and the convex rope on rays in 2D, and computing an approximate shortest path between two points inside a simple polygon) without triangulation of the entire polygons. New algorithms are implemented in C and numerical examples are presented.

The on-line shortest-path problem is considered under various models of partial monitoring. Given a weighted directed acyclic graph whose edge weights can change in an arbitrary (adversarial) way, a decision maker has to choose in each round of a game a path between two distinguished vertices such that the loss of the chosen path (defined as the sum of

The shortest-path problem on a network with fixed weights is a well-studied problem with applications to many diverse areas such as transportation and telecommunications. We are particularly interested in the scenario where a nuclear material smuggler tries to successfully reach her/his target by identifying the most likely path to the target. The identification of the path relies on reliabilities (weights) associated with each link and node in a multi-modal transportation network. In order to account for the adversary's uncertainty and to perform sensitivity analysis, we introduce random reliabilities. We perform some controlled experiments on the grid and present the distributional properties of the resulting stochastic shortest paths.
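The "most likely path" in the abstract above can be found with a standard reduction: maximizing a product of independent link reliabilities is equivalent to minimizing the sum of their negative logarithms, after which any shortest-path routine applies. A minimal sketch (the graph, node names, and reliability values are invented for illustration; the paper's multi-modal network and random reliabilities are not modeled):

```python
import heapq
import math

def most_reliable_path(graph, source, target):
    """Most-probable path when each edge carries an independent
    reliability in (0, 1]: maximize the product of reliabilities,
    i.e., minimize the sum of -log(reliability) with Dijkstra."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, rel in graph.get(u, []):
            nd = d - math.log(rel)  # -log turns products into sums
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path and recover the product of reliabilities.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    path.reverse()
    return path, math.exp(-dist[target])

# Toy network: two routes from "s" to "t" with per-link reliabilities.
net = {
    "s": [("a", 0.9), ("b", 0.5)],
    "a": [("t", 0.9)],
    "b": [("t", 1.0)],
}
path, reliability = most_reliable_path(net, "s", "t")
```

Here the route via "a" wins with reliability 0.9 × 0.9 = 0.81, beating the 0.5 bottleneck link via "b".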

The question of whether an arbitrary set of routes can be represented as shortest paths, and if yes, then how, has been a rather scarcely investigated problem up until now. In turn, an algorithm that, given an arbitrary set of traffic-engineered paths, can efficiently compute OSPF link weights so as to map the given paths to shortest paths may be of

Modeling wildfire propagation with Delaunay triangulation and shortest-path algorithms. Alexander. In this paper, a methodology for modeling surface wildfire propagation through a complex landscape is presented. Keywords: wildfire modeling. 1 Introduction. During the years 2000-2004, the National Interagency Fire Center (NIFC

Algorithms for finding shortest paths are presented which are faster than previously known algorithms on networks which are relatively sparse in arcs. Known results which the results of this paper extend are surveyed briefly and analyzed. A new implementation for priority queues is employed, and a class of arc-set partition algorithms is introduced. For the single-source problem on

Traffic congestion is a very serious problem in large cities. With the number of vehicles increasing rapidly, especially in cities whose economy is booming, the situation is getting even worse. In this paper, by leveraging the techniques of Vehicular Ad hoc Networks (VANETs) we present a dynamic navigation protocol called VAN for individual vehicles to find the shortest-time paths toward

Efficient Algorithms for the Minimum Shortest-Path Steiner Arborescence Problem with Applications. ... is a shortest path in G. Given a triple (G, N, r), the Minimum Shortest-Path Steiner Arborescence (MSPSA) problem seeks

Shortest path is among the classical problems of computer science. The problems are solved by hundreds of algorithms, silicon computing architectures and novel, unconventional computing substrates and devices. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms were designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches well with existing Physarum-inspired approaches yet outperforms them in the number of iterations executed and the total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and Dijkstra's algorithm. PMID:24982960

Shortest-path algorithms are a key element of many graph problems. They are used in such applications as online direction finding and navigation, as well as modeling of traffic for large-scale simulations of major metropolitan areas. As the shortest-path algorithms are an execution bottleneck, it is beneficial to move their execution to parallel hardware such as Field-Programmable Gate Arrays (FPGAs). Hardware implementation is accomplished through the use of a small A* core replicated on the order of 20 times on an FPGA device. The objective is to maximize the use of on-board random-access memory bandwidth through the use of multi-threaded latency tolerance. Each shortest-path core is responsible for one shortest-path calculation, and when it is finished it outputs its result and requests the next source from a queue. One of the innovations of this approach is the use of a small bubble-sort core to produce the extract-min function. While bubble sort is not usually considered an appropriate algorithm for any non-trivial usage, it is appropriate in this case as it can produce a single minimum out of the list in O(n) cycles, where n is the number of elements in the vertex list. The cost of this min operation does not impact the running time of the architecture, because the queue depth for fetching the next set of edges from memory is roughly equivalent to the number of cores in the system. Additionally, this work provides a collection of simulation results that model the behavior of the node queue in hardware. The results show that a hardware queue, implementing a small bubble-type minimum function, need only be on the order of 16 elements to provide both correct and optimal paths. Because the graph database size is measured in the hundreds of megabytes, the Cray SRAM memory is insufficient.
In addition to the A* cores, they have developed a memory management system allowing round-robin servicing of the nodes as well as virtual memory managed over the HyperTransport bus. With support for a DRAM graph store with SRAM-based caching on the FPGA, the system provides a speedup of roughly 8.9x over the CPU-based implementation.
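The bubble-type extract-min described above reduces, in software terms, to a single O(n) sweep over a small queue that surfaces the current minimum. A toy sketch of that operation (the hardware pipelining and the 16-element queue sizing are not modeled):

```python
def extract_min(queue):
    """Single-pass O(n) minimum extraction over a small queue of
    (priority, payload) pairs, analogous to the bubble-type hardware
    min core: one sweep of adjacent comparisons surfaces the minimum."""
    if not queue:
        raise IndexError("empty queue")
    best = 0
    for i in range(1, len(queue)):
        if queue[i][0] < queue[best][0]:
            best = i
    return queue.pop(best)

# Small node queue of (distance, vertex) entries, as an A* core might hold.
q = [(7, "c"), (3, "a"), (5, "b")]
item = extract_min(q)
```

For a queue of bounded size (16 elements in the simulations above), this linear scan is cheap enough that it never dominates the memory-fetch latency.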

This paper presents a heuristic for guiding A* search for finding the shortest distance path between two vertices in a connected, undirected, and explicitly stored graph. The heuristic requires a small amount of data to be stored at each vertex. The heuristic has application to quickly detecting relationships between two vertices in a large information or knowledge network. We compare the performance of this heuristic with breadth-first search on graphs with various topological properties. The results show that one or more orders of magnitude improvement in the number of vertices expanded is possible for large graphs, including Poisson random graphs.
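The abstract does not specify the per-vertex data the heuristic stores; one common scheme with the same flavor keeps each vertex's precomputed distance to a landmark, which by the triangle inequality yields an admissible lower bound for A*. A sketch under that assumption (the graph and the landmark choice are invented for illustration, not taken from the paper):

```python
import heapq

def astar(adj, source, target, h):
    """A* on an explicitly stored graph; h(v) must be a lower bound on
    the remaining distance from v to the target (admissible heuristic)."""
    g = {source: 0}
    pq = [(h(source), source)]
    done = set()
    while pq:
        f, u = heapq.heappop(pq)
        if u == target:
            return g[u]
        if u in done:
            continue
        done.add(u)
        for v, w in adj.get(u, []):
            ng = g[u] + w
            if ng < g.get(v, float("inf")):
                g[v] = ng
                heapq.heappush(pq, (ng + h(v), v))
    return None  # target unreachable

# Small undirected unit-weight graph.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append((v, 1))
    adj.setdefault(v, []).append((u, 1))

# Per-vertex stored data: distance to a landmark (vertex 1 here).
# Triangle inequality gives the lower bound |d(v,L) - d(t,L)| <= d(v,t).
dist_to_landmark = {1: 0, 2: 1, 3: 1, 4: 2, 5: 3}
h = lambda v: abs(dist_to_landmark[v] - dist_to_landmark[5])
shortest = astar(adj, 1, 5, h)
```

With h ≡ 0 this degenerates to Dijkstra/BFS; the stored landmark distances are what let A* expand fewer vertices, matching the order-of-magnitude reductions reported above.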

The constrained shortest-path (CSP) problem has been widely used in transportation optimization, crew scheduling, network routing and so on. It remains an open issue since it is an NP-hard problem. In this paper, we propose an innovative method based on the internal mechanism of the adaptive amoeba algorithm. The proposed method is divided into two parts. In the first part, we employ the original amoeba algorithm to solve the shortest-path problem in directed networks. In the second part, we combine the Physarum algorithm with a bio-inspired rule to deal with the CSP. Finally, by comparing the results with another method on examples of the DCLC problem, we demonstrate the accuracy of the proposed method. PMID:24959603

Large networked systems are constantly exposed to local damages and failures that can alter their functionality. The knowledge of the structure of these systems is, however, often derived through sampling strategies whose effectiveness at damage detection has not been thoroughly investigated so far. Here, we study the performance of shortest-path sampling for damage detection in large-scale networks. We define appropriate metrics to characterize the sampling process before and after the damage, providing statistical estimates for the status of nodes (damaged, not damaged). The proposed methodology is flexible and allows tuning the trade-off between the accuracy of the damage detection and the number of probes used to sample the network. We test and measure the efficiency of our approach considering both synthetic and real networks data. Remarkably, in all of the systems studied, the number of correctly identified damaged nodes exceeds the number of false positives, allowing us to uncover the damage precisely.

The problem of corridor location can be found in a number of fields including power transmission, highways, and pipelines. It involves the placement of a corridor or right-of-way that traverses a landscape starting at an origin and ending at a destination. Since most systems are subject to environmental review, it is important to generate competitive but different alternatives. This paper addresses the problem of generating efficient, spatially different alternatives to the corridor location problem. We discuss the weaknesses in current models and propose a new approach which is designed to overcome many of these problems. We present an application of this model to a real landscape and compare the results to past work. Overall, the new model, called the multi-gateway shortest-path problem, can generate a wide variety of efficient alignments, which eclipse what could be generated by past work.

Weight of a link in a shortest-path tree and the Dedekind Eta function. Piet Van Mieghem, Delft University of Technology, September 11, 2009. Abstract: The weight of a randomly chosen link in the shortest-path tree on the complete graph with exponential i.i.d. link weights is studied. The corresponding

This paper addresses one of the potential graph-based problems that arises when an optimal shortest-path solution, or a near-optimal solution, is acceptable, namely the Single-Source Shortest Path (SSSP) problem. To this end, a novel Heuristic Genetic Algorithm (HGA) to solve the SSSP problem is developed and evaluated. The proposed algorithm employs knowledge from deterministic techniques and the genetic

Shortest-path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS). Each router needs to recompute a new SPT rooted at itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one from scratch using static algorithms such as Dijkstra's algorithm. Such recomputation of an entire SPT is inefficient; it may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm based on the M-PCNNs is proposed to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach. PMID:23144039
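The static recomputation criticized above is simply Dijkstra's algorithm rerun from the root after every link-state change; the dynamic methods try to reuse the previous tree instead. A minimal sketch of the static baseline (router names and link weights are invented; this is not the M-PCNN model):

```python
import heapq

def shortest_path_tree(adj, root):
    """Static SPT computation via Dijkstra: the full from-scratch
    recomputation that dynamic SPT algorithms try to avoid after
    small link-state changes. Returns distances and parent pointers
    (the parent map *is* the tree)."""
    dist = {root: 0}
    parent = {root: None}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, parent

# Hypothetical link-state database of one small area.
links = {"r": [("a", 1), ("b", 4)], "a": [("b", 2), ("c", 5)], "b": [("c", 1)]}
dist, parent = shortest_path_tree(links, "r")
```

After a single link-weight change, most of `parent` typically survives; exploiting that is exactly what the dynamic algorithms in the abstract do.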

In this paper, we apply geodesic paths (the shortest paths, an important object in the task of 3D shape analysis) to the description of natural textures. We first introduce an approach called the propagation algorithm to estimate the shortest paths on discrete surfaces and then derive a set of texture features (LR, PLR, SDR, DIF and 1c) from them. The results of the experiments indicate that these features can reflect texture surface patterns in various aspects and are suitable for texture description.

The problem to be analyzed follows: given a starting point s, an ending point t and a set of n weighted faces (or regions) in a 2-dimensional space, find the best path from s to t, where the length of the path is defined as the weighted sum of the Euclidean lengths of the subpaths inside each region. Let

The shortest-path problem is a classic problem, and it is unlikely that an efficient algorithm for solving it directly will be found. It is applied broadly in practice; thus, rapidly and effectively solving the shortest-path problem has great practical value. The Genetic Algorithm (GA) is a kind of heuristic global optimization search algorithm that simulates the biological evolutionary system. The problem is resolved efficiently by this improved Genetic Algorithm. In this paper, a SoPC-based GA framework is proposed. The experimental results show that the improved Genetic Algorithm performs considerably better in the same environment.

Existing wireless ad hoc routing protocols typically find routes with the minimum hop-count. This paper presents experimental evidence from two wireless test-beds which shows that there are usually multiple minimum hop-count paths, many of which have poor throughput. As a result, minimum-hop-count routing often chooses routes that have significantly less capacity than the best paths that exist in the network.

A partitioned, priority-queue algorithm for solving the single-source best-path problem is defined and evaluated. Finding single-source paths for sparse graphs is notable because of its definite lack of parallelism: no known algorithm is scalable. Qualitatively, they discuss the close relationship between the algorithm and previous work by Quinn, Chikayama, and others. Performance measurements of variations of the algorithm, implemented both in concurrent and imperative programming languages on a shared-memory multiprocessor, are presented. This quantitative analysis of the algorithms provides insights into the trade-offs between complexity and overhead in graph searching executed in high-level parallel languages with automatic task scheduling.

Given an undirected graph G = (V, E) with positive edge weights (lengths) w : E → ℝ+, a shortest-path Steiner arborescence (simply called an arborescence in the following) is a Steiner tree rooted at r spanning all terminals in N such that every source-to-sink

The quick construction of the Shortest-Path Tree (SPT) is essential to achieve fast routing speed for an interior network using link-state protocols, such as OSPF and IS-IS. Whenever the network topology changes, the old SPT must be updated. In a network with a large number of nodes, the technology with the whole-SPT re-computation by traditional

Fast Shortest-path Distance Queries on Road Networks by Pruned Highway Labeling. Takuya Akiba, Yoichi ... referred to as the highway-based labeling framework, and a preprocessing algorithm for the framework named pruned highway labeling

Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest-path distance is developed to characterize the impact of the complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health. PMID:24798197
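The final substitution step, IDW with the Euclidean distance replaced by a precomputed shortest wind-field path distance, can be sketched as follows (the wind-field cost surface and path-distance computation are omitted; the monitor values and distances below are invented for illustration):

```python
def idw(sample_values, path_dist, power=2.0):
    """Inverse Distance Weighting where path_dist[i] is the shortest
    wind-field path distance from monitor i to the prediction point,
    standing in for the usual Euclidean distance."""
    num = den = 0.0
    for value, d in zip(sample_values, path_dist):
        if d == 0.0:
            return value  # prediction point coincides with a monitor
        w = 1.0 / d ** power  # closer (along the wind-field) => heavier
        num += w * value
        den += w
    return num / den

# Hypothetical PM concentrations at three monitors and their shortest
# wind-field path distances to an unmonitored location.
estimate = idw([80.0, 40.0, 60.0], [1.0, 2.0, 4.0])
```

Only the distance inputs change relative to plain IDW, which is what lets the method inherit the wind-flow structure without altering the interpolation formula itself.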

k-Link Shortest Paths in Weighted Subdivisions. Ovidiu Daescu, Joseph S.B. Mitchell, Simeon ... nonnegative weight. The weighted length of a line segment ab joining two points a and b within the same

cost path between two nodes s and t such that each node of G is visited ..... In order to compare MCF and GCS, we need to project out also the q-variables .... generated by building a connected component including all the nodes, and then.

Background Identification of genes that modulate longevity is a major focus of aging-related research and an area of intense public interest. In addition to facilitating an improved understanding of the basic mechanisms of aging, such genes represent potential targets for therapeutic intervention in multiple age-associated diseases, including cancer, heart disease, diabetes, and neurodegenerative disorders. To date, however, targeted efforts at identifying longevity-associated genes have been limited by a lack of predictive power, and useful algorithms for candidate-gene identification have also been lacking. Methodology/Principal Findings We have utilized a shortest-path network analysis to identify novel genes that modulate longevity in Saccharomyces cerevisiae. Based on a set of previously reported genes associated with increased life span, we applied a shortest-path network algorithm to a pre-existing protein-protein interaction dataset in order to construct a shortest-path longevity network. To validate this network, the replicative aging potential of 88 single-gene deletion strains corresponding to predicted components of the shortest-path longevity network was determined. Here we report that the single-gene deletion strains identified by our shortest-path longevity analysis are significantly enriched for mutations conferring either increased or decreased replicative life span, relative to a randomly selected set of 564 single-gene deletion strains or to the current data set available for the entire haploid deletion collection. Further, we report the identification of previously unknown longevity genes, several of which function in a conserved longevity pathway believed to mediate life span extension in response to dietary restriction.
Conclusions/Significance This work demonstrates that shortest-path network analysis is a useful approach toward identifying genetic determinants of longevity and represents the first application of network analysis of aging to be extensively validated in a biological system. The novel longevity genes identified in this study are likely to yield further insight into the molecular mechanisms of aging and age-associated disease. PMID:19030232

A complete edge-weighted directed graph on vertices 1, 2, ..., n that assigns cost c[i,j] to the edge (i,j) is called Monge if its edge costs form a Monge array, i.e., for all i < k and j < l, c[i,j] + c[k,l] ≤ c[i,l] + c[k,j]. One reason Monge graphs are interesting is that shortest paths can be computed quite quickly in such graphs. In particular, Wilber showed that the shortest path from vertex 1 to vertex n of a Monge graph can be computed in O(n) time, and Aggarwal, Klawe, Moran, Shor, and Wilber showed that the shortest d-edge 1-to-n path (i.e., the shortest path among all 1-to-n paths with exactly d edges) can be computed in O(dn) time. This paper's contribution is a new algorithm for the latter problem. Assuming 0 ≤ c[i,j] ≤ U and c[i,j+1] + c[i+1,j] - c[i,j] - c[i+1,j+1] ≥ L > 0 for all i and j, our algorithm runs in O(n(1 + lg(U/L))) time. Thus, when d ≫ 1 + lg(U/L), our algorithm represents a significant improvement over Aggarwal et al.'s O(dn)-time algorithm. We also present several applications of our algorithm; they include length-limited Huffman coding, finding the maximum-perimeter d-gon inscribed in a given convex n-gon, and a digital-signal-compression problem.
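For context, the Monge condition only needs to be checked on adjacent rows and columns, and the d-edge shortest-path problem can be solved for small instances by a naive O(dn²) dynamic program over edge counts. A sketch of both (this is a brute-force baseline for illustration, not Wilber's O(n) algorithm or the paper's new algorithm):

```python
def is_monge(c):
    """Check the Monge condition c[i][j] + c[k][l] <= c[i][l] + c[k][j]
    for all i < k and j < l; it suffices to check adjacent rows and
    columns, since the general inequality follows by summation."""
    n = len(c)
    return all(
        c[i][j] + c[i + 1][j + 1] <= c[i][j + 1] + c[i + 1][j]
        for i in range(n - 1)
        for j in range(n - 1)
    )

def d_edge_shortest_path(c, d):
    """Naive O(d*n^2) DP for the shortest path from vertex 0 to vertex
    n-1 using exactly d edges; edges go from lower to higher index."""
    n = len(c)
    INF = float("inf")
    dist = [INF] * n
    dist[0] = 0.0
    for _ in range(d):  # one relaxation round per edge of the path
        dist = [
            min((dist[i] + c[i][j] for i in range(j)), default=INF)
            for j in range(n)
        ]
    return dist[n - 1]

# Costs c[i][j] = (j - i)^2: a convex function of j - i, hence Monge.
c = [[(j - i) ** 2 for j in range(5)] for i in range(5)]
```

On this array the shortest 2-edge path 0 → 2 → 4 costs 4 + 4 = 8, beating the unbalanced splits (1 + 9 or 9 + 1), which is exactly the structure Monge-based algorithms exploit.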

We investigate the problem of constructing a multicast tree in ad-hoc networks. In particular, we address the issue of the power consumption, that is, the overall energy that the stations must spend to implement such a tree. We focus on two extreme cases of multicast: broadcast (one-to-all) and unicast (one-to-one). Minimum Spanning Trees (MSTs) and Shortest-Path Trees (SPTs) yield optimal

Arrival-time-dependent shortest-path finding is an important function in the field of traffic information systems or telematics. However, the large number of mobile objects on the road network results in a scalability problem for frequently updating and handling their real-time locations. In this paper, we propose a query processing method in a MANET (Mobile Ad-hoc Network) environment to find an arrival

We present new and improved methods for efficient shortest-path query processing. Our methods are tailored to work for two specific classes of graphs: graphs with small tree-width and complex networks. Seemingly unrelated at first glance, these two classes of graphs have some commonalities: complex networks are known to have a core-fringe structure with a dense core and a tree-like fringe.

A new algorithm is presented to compute the shortest path on a graph when the node transition costs depend on the prior history of the path to the current node. The algorithm is applied to solve path-planning problems with curvature constraints.

Endovascular interventional procedures are being used more frequently in cardiovascular surgery. Unfortunately, procedural failure, e.g., vessel dissection, may occur and is often related to improper guidewire and/or device selection. To support the surgeon's decision process, and because of the importance of the guidewire in positioning devices, we propose a method to determine the guidewire path prior to insertion using a model of its elastic potential energy coupled with a representative graph construction. The 3D vessel centerline and sizes are determined for a specified vessel. Points in planes perpendicular to the vessel centerline are generated. For each pair of consecutive planes, a vector set is generated which joins all points in these planes. We construct a graph representing these vector sets as nodes. The nodes representing adjacent vector sets are joined by edges with weights calculated as a function of the angle between the corresponding vectors (nodes). The optimal path through this weighted directed graph is then determined using shortest-path algorithms, such as a topological-sort-based shortest-path algorithm or Dijkstra's algorithm. Volumetric data of an internal carotid artery phantom (Ø 3.5 mm) were acquired. Several independent guidewire (Ø 0.4 mm) placements were performed, and the 3D paths were determined using rotational angiography. The average RMS distance between the actual and the average simulated guidewire path was 0.7 mm; the computation time to determine the path was 3 seconds. The ability to predict the guidewire path inside vessels may facilitate calculation of vessel-branch access and force estimation on devices and the vessel wall.
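Because the perpendicular planes order the vector-set nodes into layers, the graph just described is a DAG, and the topological-sort-based method amounts to relaxing edges in topological order. A minimal sketch (node names and the angle-derived weights are invented for illustration):

```python
def dag_shortest_path(order, adj, source):
    """Shortest path in a DAG: relax every outgoing edge of each node
    in topological order; each edge is examined exactly once, so the
    run time is O(V + E) with no priority queue needed."""
    dist = {v: float("inf") for v in order}
    prev = {source: None}
    dist[source] = 0.0
    for u in order:
        if dist[u] == float("inf"):
            continue  # unreachable so far
        for v, w in adj.get(u, []):
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                prev[v] = u
    return dist, prev

# Tiny layered graph standing in for the plane-to-plane vector sets;
# weights would come from the bending angle between adjacent vectors.
order = ["s", "u1", "u2", "t"]
adj = {"s": [("u1", 0.2), ("u2", 0.5)], "u1": [("t", 0.4)], "u2": [("t", 0.3)]}
dist, prev = dag_shortest_path(order, adj, "s")
```

For a layered graph like this one, the plane order itself is a valid topological order, so no separate sorting pass is required.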

are of capital importance in a variety of problems, from robot path planning to maze solving. Path planning [16] is a well-known problem in the robotics community, described by [26] as "checking the consequences. This distribution, though peaked around optimal paths, lets the random walker take a random transition according

Characterized by using minimum hard (structural) and soft (computational) resources, a novel parameter-free minimal resource neural network (MRNN) framework is proposed for solving a wide range of single-source shortest-path (SP) problems for various graph types. The problems are the k-shortest-time-path problems with any combination of three constraints: time, hop, and label constraints; and the graphs can be directed, undirected, or bidirected with symmetric and/or asymmetric traversal times, which can be real-valued and time-dependent. Isomorphic to the graph where the SP is to be sought, the network is activated by generating an autowave at the source neuron, and the autowave travels automatically along the paths at the speed of one hop per iteration. Properties of the network are studied, algorithms are presented, and computational complexity is analyzed. The framework guarantees globally optimal solutions of a series of problems during the iteration process of the network, which provides insight even when the SP found so far is still too long to be satisfactory. The network facilitates very-large-scale integrated circuit implementation and adapts to very-large-scale problems due to its massively parallel processing and minimum resource utilization. When implemented on a sequentially processing computer, experiments on synthetic graphs, road maps of cities of the USA, and vehicle routing with time windows indicate that the MRNN is especially efficient for large sparse graphs and even dense graphs with some constraints; e.g., the CPU time taken and the iteration number used for the road maps of cities of the USA are even less than about 2% and 0.5% of those of Dijkstra's algorithm. PMID:25050952

With rapid economic and social development, the problem of traffic congestion is becoming more and more serious. Accordingly, network traffic models have attracted extensive attention. In this paper, we introduce a shortest-remaining-path-first queuing strategy into a network traffic model on Barabási-Albert scale-free networks under an efficient routing protocol, where a packet's delivery priority is related to its current distance to the destination. Compared with the traditional first-in-first-out queuing strategy, although the network capacity shows no evident change, some other indexes reflecting transportation efficiency are significantly improved in the congested state. Extensive simulation results and discussions are presented to explain the phenomena. Our work may be helpful for the design of optimal networked-traffic systems.
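
The core idea of a shortest-remaining-path-first buffer, replacing FIFO with a priority queue keyed on remaining distance, can be sketched as follows (a sketch only; the class and field names are hypothetical, and the paper's model also involves routing and capacity dynamics not shown here):

```python
import heapq
import itertools

class SRPFQueue:
    """Shortest-remaining-path-first buffer: the packet currently closest to
    its destination is forwarded first (instead of first-in-first-out).
    Assumes the remaining distance is known when the packet is enqueued."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break among equal distances

    def push(self, packet, remaining_distance):
        heapq.heappush(self._heap, (remaining_distance, next(self._seq), packet))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = SRPFQueue()
q.push("pkt-far", remaining_distance=5)
q.push("pkt-near", remaining_distance=1)
q.push("pkt-mid", remaining_distance=3)
print(q.pop())  # pkt-near
```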

We illustrate the use of the techniques of modern geometric optimal control theory by studying the shortest paths for a model of a car that can move forwards and backwards. This problem was discussed in recent work by Reeds and Shepp, who showed, by special methods, (a) that shortest-path motion could always be achieved by means of trajectories of

We derive shortest-path constraints from graph models of structure adjacency relations and introduce them in a joint centroidal Voronoi image clustering and Graph Cut multiobject semiautomatic segmentation framework. The vicinity prior model thus defined is a piecewise-constant model incurring multiple levels of penalization, capturing the spatial configuration of structures in multiobject segmentation. Qualitative and quantitative analyses, and comparison with a Potts prior-based approach and our previous contribution, on synthetic, simulated, and real medical images show that the vicinity prior allows for the correct segmentation of distinct structures having identical intensity profiles and improves the precision of segmentation boundary placement while being fairly robust to clustering resolution. The clustering approach we take to simplify images prior to segmentation strikes a good balance between boundary adaptivity and cluster compactness criteria, furthermore allowing the trade-off to be controlled. Compared with a direct application of segmentation on voxels, the clustering step improves the overall runtime and memory footprint of the segmentation process by up to an order of magnitude without compromising the quality of the result. PMID:23807445

Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons containing that node, whose weighted edges are scaled by a factor w. We derive closed-form expressions for the average weighted shortest-path length (AWSP). In a large network, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and trapping problem on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than unweighted expanded Koch networks at receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
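
For a small graph, the AWSP quantity discussed above can be computed directly (the paper derives it in closed form for Koch networks; this is only a generic numeric sketch using Floyd-Warshall on a dense weight matrix, with a made-up example graph):

```python
import math

def average_weighted_shortest_path(weights):
    """Average weighted shortest-path length over all ordered node pairs,
    via Floyd-Warshall on a dense weight matrix (math.inf = no edge)."""
    n = len(weights)
    d = [row[:] for row in weights]
    for i in range(n):
        d[i][i] = 0.0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    pairs = [d[i][j] for i in range(n) for j in range(n) if i != j]
    return sum(pairs) / len(pairs)

inf = math.inf
# Weighted triangle: the direct 5.0 edge is bypassed via the middle node.
w = [[inf, 1.0, 5.0],
     [1.0, inf, 1.0],
     [5.0, 1.0, inf]]
print(average_weighted_shortest_path(w))  # 1.3333333333333333
```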

The traditional grid/cell-based wavefront expansion algorithms, such as the shortest-path algorithm, can only find the first arrivals or multiply reflected (or mode-converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path from Fermat's principle into the multistage irregular shortest-path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.

Since statistical relationships between HIV load and CD4+ T cell loss have been demonstrated to be weak, searching for host factors contributing to the pathogenesis of HIV infection becomes a key point for both understanding the disease pathology and developing treatments. We applied the Maximum Relevance Minimum Redundancy (mRMR) algorithm to a set of microarray data generated from the CD4+ T cells of viremic non-progressors (VNPs) and rapid progressors (RPs) to identify host factors associated with the different responses to HIV infection. Using the mRMR algorithm, 147 genes were identified. Furthermore, we constructed a weighted molecular interaction network with the existing protein-protein interaction data from the STRING database and identified 1331 genes on the shortest paths among the genes identified with mRMR. Functional analysis shows that functions relating to apoptosis play important roles during the pathogenesis of HIV infection. These results bring new insight into HIV progression. PMID:24244287

One of the most important and challenging problems in biomedicine and genomics is how to identify disease genes. In this study, we developed a computational method to identify colorectal cancer-related genes based on (i) the gene expression profiles, and (ii) the shortest-path analysis of functional protein association networks. The former has been used to select differentially expressed genes as disease genes for quite a long time, while the latter has been widely used to study the mechanisms of diseases. With the existing protein-protein interaction data from STRING (Search Tool for the Retrieval of Interacting Genes), a weighted functional protein association network was constructed. By means of the mRMR (Maximum Relevance Minimum Redundancy) approach, six genes were identified that can distinguish the colorectal tumors and normal adjacent colonic tissues from their gene expression profiles. Meanwhile, according to the shortest-path approach, we further found an additional 35 genes, of which some have been reported to be relevant to colorectal cancer and some are very likely to be relevant to it. Interestingly, the genes we identified from both the gene expression profiles and the functional protein association network include more cancer genes than the genes identified from the gene expression profiles alone. Besides, these genes also had greater functional similarity with the reported colorectal cancer genes than the genes identified from the gene expression profiles alone. All these indicate that our method as presented in this paper is quite promising. The method may become a useful tool, or at least play a complementary role to the existing methods, for identifying colorectal cancer genes. It has not escaped our notice that the method can be applied to identify the genes of other diseases as well. PMID:22496748
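
The shortest-path gene-finding idea, collecting the genes that lie on pairwise shortest paths between seed genes in an interaction network, can be sketched as below. This is a simplification under stated assumptions: the paper uses a weighted STRING network, while this sketch uses unweighted BFS on a toy graph, and all names are illustrative:

```python
from collections import deque
from itertools import combinations

def bfs_path(graph, s, t):
    """One shortest path between s and t in an unweighted graph given as
    {node: set(neighbors)}, or None if they are disconnected."""
    prev, frontier = {s: None}, deque([s])
    while frontier:
        u = frontier.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph.get(u, ()):
            if v not in prev:
                prev[v] = u
                frontier.append(v)
    return None

def shortest_path_genes(graph, seeds):
    """Genes lying on pairwise shortest paths between seed genes,
    excluding the seeds themselves."""
    found = set()
    for s, t in combinations(seeds, 2):
        path = bfs_path(graph, s, t)
        if path:
            found.update(path[1:-1])  # interior nodes only
    return found - set(seeds)

# Hypothetical toy network: seeds A, B, C connected through X and Y
net = {"A": {"X"}, "X": {"A", "B"}, "B": {"X", "Y"}, "Y": {"B", "C"}, "C": {"Y"}}
print(sorted(shortest_path_genes(net, ["A", "B", "C"])))  # ['X', 'Y']
```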

experiments in the aircraft navigation scenario with real wind data demonstrate that our framework can
Keywords: Planning; Canadian Traveler Problem; Gaussian Process; Aircraft Routing
Abstract: In a variety of real-world problems, from robot navigation to logistics, agents face the challenge of path optimization on a graph

The K shortest paths problem has been extensively studied for many years. Efficient methods have been devised, and many practical applications are known. Shortest hyperpath models have been proposed for several problems in different areas, for example in relation to routing in dynamic networks. However, the K shortest hyperpaths problem has not yet been investigated. In this paper
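
For the ordinary (non-hyperpath) K shortest paths problem mentioned above, a compact illustration is exhaustive best-first search over partial paths. This is not the method of the paper, just a baseline sketch: it is correct for loopless paths with non-negative weights but exponential in the worst case, which is why dedicated methods such as Yen's algorithm exist:

```python
import heapq

def k_shortest_paths(graph, s, t, k):
    """The k cheapest simple s-t paths in non-decreasing cost order, by
    best-first search over partial paths. graph: {node: [(nbr, weight), ...]}."""
    heap, out = [(0.0, [s])], []
    while heap and len(out) < k:
        cost, path = heapq.heappop(heap)
        u = path[-1]
        if u == t:
            out.append((cost, path))
            continue
        for v, w in graph.get(u, []):
            if v not in path:  # keep paths loopless
                heapq.heappush(heap, (cost + w, path + [v]))
    return out

# Hypothetical toy graph with three s-t paths
g = {"s": [("a", 1), ("b", 2)], "a": [("t", 2)], "b": [("t", 1), ("a", 1)], "t": []}
for cost, path in k_shortest_paths(g, "s", "t", 3):
    print(cost, "->".join(path))  # 3.0 s->a->t, 3.0 s->b->t, 5.0 s->b->a->t
```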

strategies in the presence of significant sensor uncertainty. The navigation environments are modeled is continually updated during subsequent navigation. All motion paths are planned from landmark to landmark can be detected. These estimates are based on the history of all observations made by the robot

The theorem of extremum entropy generation is related to the stochastic order of the paths inside the phase space; indeed, the system evolves, from one indistinguishable configuration to another, along the most probable path with respect to the paths' stochastic order. The result is that, at the stationary state, the entropy generation is maximal, and this maximum value is a consequence of the stochastic order of the paths in the phase space. Conversely, the stochastic order of the paths in the phase space is a consequence of the maximum of the entropy generation for open systems at stationary states.

Over the last several years there has been renewed interest in the use of open-path Fourier transform infrared (FTIR) spectroscopy for a variety of air monitoring applications. This interest has been motivated by the need for new technology to address the regulatory requirements of the Clean Air Act Amendments of 1990. Interest has been expressed in exploring the applications of this technology to locate fugitive-source emissions and to measure total emissions from industrial facilities.

Methane (CH4) fluxes observed with the eddy-covariance technique using an open-path analyzer and a closed-path analyzer in a rice paddy field were evaluated with an emphasis on the flux correction methodology. A comparison of the fluxes obtained by the analyzers revealed that both the open-path and closed-path techniques were reliable, provided that appropriate corrections were applied. For the open-path approach, the influence of fluctuations in air density and the line shape variation in laser absorption spectroscopy (hereafter, the spectroscopic effect) was significant, and the relative importance of these corrections would increase when observing small fluxes. A new procedure proposed by Li-Cor Inc. enabled us to accurately adjust for these effects. The high-frequency loss of the open-path analyzer was relatively large (11 % of the uncorrected covariance) at an observation height of 2.5 m above the canopy owing to its longer physical path length, and this correction should be carefully applied before correcting for the influence of fluctuations in air density and the spectroscopic effect. Uncorrected fluxes observed with the closed-path analyzer were substantially underestimated (37 %) due to high-frequency loss because an undersized pump was used in the observation. Both the bandpass and transfer function approaches successfully corrected this flux loss. Careful determination of the bandpass frequency range or the transfer function and the cospectral model is required for the accurate calculation of fluxes with the closed-path technique.

Shortest-path query processing not only serves as a long-established routine for numerous applications in the past but also is of increasing popularity in supporting novel graph applications in very large databases nowadays. For a large graph, there is a new scenario of querying intensively against arbitrary nodes, asking to quickly return node distances or even shortest

In July 1997 the Republic of Korea became the 15th country to exceed 10-million registered motor vehicles. The number of cars has been increasing exponentially in Korea for the past 12 years, opening an era of one car per household in this nation with a population of 44 million. The air quality effects of the growth of increasingly congested motor vehicle traffic in Seoul, home to more than one-fourth of the entire population, are of great concern to Korea's National Institute of Environmental Research (NIER). AIL's Open-Path FTIR air quality monitor, RAM 2000TM, has been used to quantify the ozone increase over the course of a warm summer day. The RAM 2000 instrument was set up on the roof of the 6-story NIER headquarters. The retroreflector was sited 180 m away across a major highway, where it was tripod-mounted on top of the 6-story Korean National Institute of Health facility. During the Open-Path FTIR data taking, the NIER Air Physics Division research team periodically tethered an airborne balloon containing a pump and a potassium iodide solution to obtain absolute ozone concentration results, which indicated that the ambient ozone level was 50 ppb when the Open-Path FTIR measurements began. Total ozone concentrations exceeded 120 ppb for five hours between 11:30 AM and 4:30 PM. The peak ozone concentration measured was 199 ppb at 12:56 PM. The averaged concentration for five and a half hours of data collection was 145 ppb. Ammonia concentrations were also measured.

Atmospheric analysis by open-path Fourier-transform infrared (OP/FT-IR) spectrometry has been possible for over two decades but has not been widely used because of the limitations of the software of commercial instruments. In this paper, we describe the current state-of-the-art of the hardware and software that constitutes a contemporary OP/FT-IR spectrometer. We then describe advances that have been made in our laboratory that have enabled many of the limitations of this type of instrument to be overcome. These include not having to acquire a single-beam background spectrum that compensates for absorption features in the spectra of atmospheric water vapor and carbon dioxide. Instead, an easily measured "short path-length" background spectrum is used for calculation of each absorbance spectrum that is measured over a long path-length. To accomplish this goal, the algorithm used to calculate the concentrations of trace atmospheric molecules was changed from classical least-squares regression (CLS) to partial least-squares regression (PLS). For calibration, OP/FT-IR spectra are measured in pristine air over a wide variety of path-lengths, temperatures, and humidities, ratioed against a short-path background, and converted to absorbance; the reference spectrum of each analyte is then multiplied by randomly selected coefficients and added to these background spectra. Automatic baseline correction for small molecules with resolved rotational fine structure, such as ammonia and methane, is effected using wavelet transforms. A novel method of correcting for the effect of the nonlinear response of mercury cadmium telluride detectors is also incorporated. Finally, target factor analysis may be used to detect the onset of a given pollutant when its concentration exceeds a certain threshold. In this way, the concentration of atmospheric species has been obtained from OP/FT-IR spectra measured at intervals of 1 min over a period of many hours with no operator intervention. 
PMID:18946664
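
The CLS step that the abstract above says was replaced by PLS can be illustrated in a few lines: a measured absorbance spectrum is modeled as a linear mix of known reference spectra, A = R·c, and the concentrations c are recovered by least squares. A synthetic sketch only (random "spectra", hypothetical analytes); real OP/FT-IR work must also handle baselines and interferents, which is what motivates PLS:

```python
import numpy as np

# Classical least-squares (CLS): measured = refs @ concentrations + noise
rng = np.random.default_rng(0)
wavenumbers = 200
refs = np.abs(rng.normal(size=(wavenumbers, 3)))   # 3 hypothetical analytes
true_conc = np.array([0.5, 2.0, 0.1])
measured = refs @ true_conc + rng.normal(scale=0.01, size=wavenumbers)

# Recover the concentrations by ordinary least squares
est, *_ = np.linalg.lstsq(refs, measured, rcond=None)
print(np.round(est, 2))  # close to [0.5, 2.0, 0.1]
```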

A transmissometer is an optical instrument that measures the transmitted intensity of monochromatic light over a fixed path length. A prototype of a simple laser transmissometer has been developed for transmission (or extinction) measurements through suspended absorbers and scatterers in the atmosphere over tens of meters. The instrument consists of a continuous-wave green diode-pumped solid-state laser, transmission optics, photodiode detectors and A/D data acquisition components. A modulated laser beam is transmitted and subsequently reflected and returned to the unit by a retroreflecting mirror assembly placed several tens of meters away. Results from an open-path field measurement with the instrument are described.
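
The extinction retrieval behind such an instrument follows the Beer-Lambert law: with a retroreflector, the beam traverses the path twice, so the extinction coefficient is -ln(I/I0) divided by twice the one-way distance. A minimal sketch with made-up numbers (the function name and values are illustrative, not from the instrument description):

```python
import math

def extinction_coefficient(i_transmitted, i_reference, one_way_length_m):
    """Beer-Lambert extinction coefficient (1/m) from a retroreflected
    transmissometer reading; the beam travels the path out and back."""
    path = 2.0 * one_way_length_m
    return -math.log(i_transmitted / i_reference) / path

# e.g. 80% of the light returns over a 50 m one-way path
print(extinction_coefficient(0.80, 1.00, 50.0))  # ~0.00223 per metre
```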

This dissertation is comprised of three different Fourier transform infrared (FT-IR) monitoring configurations that utilize active and passive open-path Fourier transform infrared (OP-FT-IR) spectrometry. The configurations include: active monitoring using a SiC source (chapters 2 and 3); passive monitoring using the ambient background and sample emission and absorption (chapters 4 and 7); and a monitoring configuration that is a hybrid of these two, utilizing on-site structures at above-ambient temperatures often found at industrial sites as sources of IR radiation (chapters 5 and 6). Chapter 1 gives an introduction to the dissertation. Chapter 2 is an introduction to OP-FT-IR monitoring and is broken into four sections: IR, FT-IR, OP-FT-IR, and laboratory calibrations. This chapter includes an extensive suggested reading section including OP-FT-near-IR spectrometry. Chapter 3 describes some active OP-FT-IR spectrometry done with the EPA at a coal gasification Superfund site in Fairfield, Iowa. Chapter 4 is an introduction to passive FT-IR spectrometry. Chapters 5 and 6 explore the technique using on-site structures at above-ambient temperatures at an agricultural chemical plant as sources for OP-FT-IR monitoring. Chapter 7 describes the analysis of passive FT-IR interferograms obtained in the laboratory for the detection of benzene.

In this paper we present a prototype instrument for remote open-path detection of nitrous oxide. The sensor is based on a 4.53 μm quantum cascade laser and uses the chirped laser dispersion spectroscopy (CLaDS) technique for molecular concentration measurements. To the best of our knowledge this is the first demonstration of open-path laser-based trace-gas detection using a molecular dispersion measurement. The prototype sensor achieves a detection limit down to the single-ppbv level and exhibits excellent stability and robustness. The instrument characterization, field deployment performance, and the advantages of applying dispersion sensing to sensitive trace-gas detection in a remote open-path configuration are presented. PMID:23443389

Remote sensing of enemy installations or their movements by trace gas detection is a critical but challenging military objective. Open-path measurements over ranges of a few meters to many kilometers with sensitivity in the parts-per-million or -billion regime are crucial in anticipating the presence of a threat. Previous approaches to detect ground-level chemical plumes, explosive constituents, or combustion have relied on low-resolution, short-range Fourier transform infrared spectrometers (FTIR), or low-sensitivity near-infrared differential optical absorption spectroscopy (DOAS). As mid-infrared quantum cascade laser (QCL) sources have improved in cost and performance, systems based on QCLs that can be tailored to monitor multiple chemical species in real time are becoming a viable alternative. We present the design of a portable, high-resolution, multi-kilometer open-path trace gas sensor based on QCL technology. Using a tunable (1045-1047 cm-1) QCL, a modeled atmosphere and a link-budget analysis with commercial component specifications, we show that with this approach, parts-per-billion accuracy for ozone or ammonia can be obtained in seconds at path lengths up to 10 km. We have assembled an open-path QCL sensor based on this theoretical approach at City College of New York, and we present preliminary results demonstrating the potential of QCLs in open-path sensing applications.

A series of nine large-scale, open fires was conducted in the Intermountain Fire Sciences Laboratory (IFSL) controlled-environment combustion facility. The fuels were pure pine needles or sagebrush or mixed fuels simulating forest-floor ground fires, crown fires, broadcast burns, and slash pile burns. Mid-infrared spectra of the smoke were recorded throughout each fire by open-path Fourier transform infrared (FTIR) spectroscopy

We have performed a series of experiments to determine the tradeoff in detection sensitivity for implementing design features for an Open-Path Fourier Transform Infrared (OP-FTIR) chemical analyzer that would be quick to deploy under emergency response conditions. The fast-deplo...

Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications both for predicting the size of epidemics and for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is. PMID:22587148
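
The contrast between paths (1) and (2) above can be reproduced in miniature: compare the BFS shortest path with the path an SI-style stochastic spread actually takes. A sketch under stated assumptions (a toy graph with a 2-hop and a 3-hop route, a simple per-round transmission probability), not the authors' exact simulation model:

```python
import random
from collections import deque

def bfs_distance(graph, s, t):
    """Hop count of the topologically shortest path in an unweighted graph."""
    dist, frontier = {s: 0}, deque([s])
    while frontier:
        u = frontier.popleft()
        if u == t:
            return dist[u]
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return None

def spreading_path_length(graph, s, t, p=0.5, seed=42):
    """Hops in the stochastic path an SI-style spread takes from s to t:
    each round, every infected node transmits along each edge with
    probability p. Assumes the graph is connected."""
    rng = random.Random(seed)
    parent, infected = {s: None}, {s}
    while t not in infected:
        new = {}
        for u in list(infected):
            for v in graph[u]:
                if v not in infected and v not in new and rng.random() < p:
                    new[v] = u  # v was infected via u
        infected.update(new)
        parent.update(new)
    hops, u = 0, t
    while parent[u] is not None:  # walk the infection chain back to s
        hops, u = hops + 1, parent[u]
    return hops

# Two routes from s to t: a 2-hop route via h and a 3-hop route via a-b;
# the stochastic spread sometimes arrives via the longer route.
g = {"s": ["h", "a"], "h": ["s", "t"], "a": ["s", "b"],
     "b": ["a", "t"], "t": ["h", "b"]}
print(bfs_distance(g, "s", "t"), spreading_path_length(g, "s", "t"))
```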

We have examined the potential of using a closed-path sensor to accurately measure eddy fluxes of CO2. Five inlet tube flow configurations were employed in the experimental setup. The fluxes of CO2 were compared against those measured with an open-path sensor. Sampling air through an intake tube causes a loss of flux, due to the attenuation of CO2 density fluctuations. Adjustments

Most protocols used for open-path Fourier transform infrared spectrometry (OP/FT-IR) require that spectra be measured at a resolution of 1 cm-1 and that the concentrations of the analytes be calculated by classical least-squares regression (CLS). These specifications were largely developed for monitoring light molecules with easily resolvable rotational fine structure. For most volatile organic compounds in air, the rotational

An FTIR spectroradiometer working in a short open-path configuration has been implemented, coupled to a Mass Loss Calorimeter, to measure in situ and simultaneously the concentrations of the main gaseous carbon-related by-products from the combustion of forest fuels. A proper methodology to retrieve accurate values of CO and CO2 concentrations has been developed. This methodology includes the determination of the

The design and characterization of a near-IR Chirped Laser Dispersion Spectroscopy (CLaDS) sensor for atmospheric methane detection are reported. The near-IR CLaDS system exhibits the benefits of the prior mid-IR CLaDS systems implemented for open-path sensing while taking advantage of the robust fiber-optic components available in the near-IR. System noise, long-term stability, and comparison with existing technology for methane detection are presented.

Polysiloxanes represent an important class of industrial polymers. Traditionally, poly(dimethylsiloxane) (PDMS) can be prepared by base-catalyzed ring-opening of cyclic dimethylsiloxanes. Ab initio electronic calculations were conducted to examine a reaction path for the KOH-catalyzed ring-opening polymerization of hexamethylcyclotrisiloxane (D3). The overall picture that emerges is initial side-on attack by KOH on a Si-O bond in the D3 ring leading to a stable addition complex with a 5-fold coordinated Si atom. The reaction path leads to a five-coordinate transition state, then to a stable insertion product (KOH inserts into the ring). The relative stability of a ring-opened product HO[Si(CH3)2O]3K is also considered. The energy along the reaction path was modeled both in the gas phase and in a moderately polar solvent (tetrahydrofuran, THF). The solvation energy was calculated using a recent implementation of an electrostatic model, where the solute molecule is placed in a non-spherical cavity in a dielectric continuum. The effect of basis set and electron correlation on the gas-phase energy and the effect of basis set on the solvation energy were studied. 34 refs., 5 figs., 4 tabs.

In recent years open-path FTIR systems (active and passive) have demonstrated great potential and success for monitoring air pollution, industrial stack emissions, and trace gas constituents in the atmosphere. However, most of the studies were focused mainly on monitoring gaseous species and very few studies have investigated the feasibility of detecting bio-aerosols and dust by passive open-path FTIR measurements. The goal of the present study was to test the feasibility of detecting a cloud of toxic aerosols by a passive mode open-path FTIR. More specifically, we are focusing on the detection of toxic organophosphorous nerve agents for which we use Tri-2-ethyl-hexyl-phosphate as a model compound. We have determined the compounds' optical properties, which were needed for the radiative calculations, using a procedure developed in our laboratory. In addition, measurements of the aerosol size distribution in an airborne cloud were performed, which provided the additional input required for the radiative transfer model. This allowed simulation of the radiance signal that would be measured by the FTIR instrument and hence estimation of the detection limit of such a cloud. Preliminary outdoor measurements have demonstrated the possibility of detecting such a cloud using two detection methods. However, even in a simple case consisting of the detection of a pure airborne cloud, detection is not straightforward and reliable identification of the compound would require more advanced methods than simple correlation with spectral library.

Open-path Fourier transform infrared (OP/FTIR) spectrometry was used to measure the concentrations of ammonia, methane, and other atmospheric gases around an integrated industrial swine production facility in eastern North Carolina. Several single-path measurements were made ove...

Stimulated by the recent discovery of the 1 yr recurrence period nova M31N 2008-12a, we examined the shortest recurrence periods of hydrogen shell flashes on mass-accreting white dwarfs (WDs). We discuss the mechanism that yields a finite minimum recurrence period for a given WD mass. Calculating the unstable flashes for various WD masses and mass accretion rates, we identified a shortest recurrence period of about two months for a non-rotating 1.38 M☉ WD with a mass accretion rate of 3.6 × 10⁻⁷ M☉ yr⁻¹. A 1 yr recurrence period is realized for very massive (≳1.3 M☉) WDs with very high accretion rates (≳1.5 × 10⁻⁷ M☉ yr⁻¹). We revised our stability limit of hydrogen shell burning, which will be useful for binary evolution calculations toward Type Ia supernovae.

The Steiner problem asks for the shortest network of line segments that will interconnect a set of given points. The Steiner problem cannot be solved by simply drawing lines between the given points, but it can be solved by adding new ones, called Steiner points, that serve as junctions in a shortest network. To determine the location and number of Steiner points, mathematicians and computer scientists have developed algorithms, or precise procedures. Yet even the best of these algorithms running on the fastest computers cannot provide a solution for a large set of given points because the time it would take to solve such a problem is impractically long. Furthermore, the Steiner problem belongs to a class of problems for which many computer scientists now believe an efficient algorithm may never be found. Approximate solutions to the shortest-network problem are computed routinely for numerous applications, among them designing integrated circuits, determining the evolutionary tree of a group of organisms and minimizing materials used for networks of telephone lines, pipelines and roadways.
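The benefit of adding a Steiner point is easy to see in a toy case. The sketch below (a numerical illustration of the idea, not one of the algorithms referred to above; all names are mine) compares connecting the corners of a unit equilateral triangle directly against routing through a single junction, which for this triangle is its centroid (the Fermat point):

```python
import math

# Corners of an equilateral triangle with unit sides.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Without Steiner points, the best network is two of the three sides:
# a spanning tree of total length 2.
spanning_length = dist(pts[0], pts[1]) + dist(pts[1], pts[2])

# With one Steiner point (here the centroid, which is the Fermat point
# of an equilateral triangle), the three spokes total sqrt(3) ≈ 1.732.
steiner = (sum(x for x, _ in pts) / 3, sum(y for _, y in pts) / 3)
star_length = sum(dist(steiner, p) for p in pts)
```

The junction shortens the network by about 13%; this equilateral case is, in fact, conjectured to be the worst-case gain for points in the plane (Steiner ratio √3/2).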

Accurate determination of the gas concentrations emitted during thermal degradation (pyrolysis) of biomass in forest fires is one of the key points in recent research on physics-based fire spread models. However, it is a very cumbersome task not well solved by classical invasive sensors and procedures. In this work, a methodology using open-path Fourier transform infrared (OP-FTIR) spectrometry has been applied as a remote sensing technique that permits in situ, non-intrusive and simultaneous measurements. The main gaseous by-products (CO, CO2, CH4 and NH3) have been measured and quantified in terms of path-integrated concentrations. Different emission ratios have been determined for the species under study. These results can help to simplify the modelling of pyrolysis processes inside physics-based fire spread models.

We present a new, very simple to use, easy to align, inexpensive, robust, mono-static optical hygrometer based on tunable diode laser absorption spectroscopy (TDLAS) that uses very inexpensive reflective foils as scattering targets at the distant side of the absorption path. Various alternative foils were examined for their reflective behaviour and their suitability for TDLAS applications. Using a micro-prismatic reflection tape as the optimum scattering target, we determined absolute water vapour concentrations employing open-path TDLAS. With the reflection tape at a distance of 75 cm to 1 m (i.e., absorption path lengths between 1.5 and 2 m), we detected ambient H2O concentrations of up to 12,300 ppmv with detectivities of 1 ppmv, corresponding to length- and bandwidth-normalized H2O detection limits of up to 0.9 ppmv·m/√Hz, only a factor of 2 worse than our previous bi-static TDLAS setups (Hunsmann et al., Appl. Phys. B 92:393-401). This small sensitivity disadvantage is well compensated by the simplicity of the spectrometer setup and particularly by its extreme tolerance towards misalignment of the scattering target.

Based on tunable diode laser absorption spectroscopy (TDLAS) with second-harmonic detection, a long open-path TDLAS system using a 1.65 μm InGaAsP distributed feedback laser was developed for detecting pipeline leakage. In this system, a cost-effective Fresnel lens serves as the receiving optics: it collects the laser beam reflected by a solid corner-cube reflector and focuses it onto the InGaAs detector. The influence of light-intensity fluctuations on the concentration measurement was taken into account and eliminated by normalizing the light intensity, reducing the resulting measurement error to less than 1%. A simulated natural gas leakage experiment gave a detection sensitivity of 0.1 × 10⁻⁶ (by volume) over a total path of 320 m. From the light-collection efficiency of the optical system and the minimum detectable light intensity of the detector, the maximum detectable optical path of the system was calculated to be 2000 m. The experimental results show that a Fresnel lens is a feasible choice for the receiving optics and can satisfy the demands of natural gas leakage detection. PMID:19455840

A novel instrument design combines the sensing paths of an open-path gas analyzer and a 3-D sonic anemometer and integrates the sensors in a single aerodynamic body. Common electronics provide fast-response, synchronized measurements of wind vector, sonic temperature, CO2 and H2O densities, and atmospheric pressure. An instantaneous CO2 mixing ratio, relative to dry air, is computed in real time. The synergy of combined sensors offers an alternative to the traditional density-based flux calculation method historically used for standalone open-path analyzers. A simple method is described for a direct, in-situ, mixing-ratio-based flux calculation. The method consists of: (i) correcting sonically derived air temperature for humidity effects using instantaneous water vapor density and atmospheric pressure measurements, (ii) computing water vapor pressure based on water-vapor density and humidity-corrected sonic temperature, (iii) computing fast-response CO2 mixing ratio based on CO2 density, sonic temperature, water vapor, and atmospheric pressures, and (iv) computing CO2 flux from the covariance of the vertical wind speed and the CO2 mixing ratio. Since CO2 mixing ratio is a conserved quantity, the proposed method simplifies the calculations and eliminates the need for corrections in post-processing by accounting for temperature, water-vapor, and pressure-fluctuation effects on the CO2 density. A field experiment was conducted using the integrated sensor to verify performance of the mixing-ratio method and to quantify the differences with density-derived CO2 flux corrected for sensible and latent-heat fluxes. The pressure term of the density corrections was also included in the comparison. 
Results suggest that the integrated sensor with co-located sonic and gas sensing paths and the mixing-ratio-based method minimize or eliminate the following uncertainties in the measured CO2 flux: (i) correcting for frequency-response losses due to spatial separation of measured quantities, (ii) correcting sonically-derived, sensible-heat flux for humidity, (iii) correcting latent-heat flux for sensible-heat flux and water-vapor self-dilution, (iv) correcting CO2 flux for sensible- and latent-heat fluxes, (v) correcting CO2 flux for pressure-induced density fluctuations.
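The four-step mixing-ratio calculation can be sketched directly. This is a minimal illustration under ideal-gas relations, using the common sonic-temperature humidity correction Ts ≈ T(1 + 0.32 e/p); the constants, the iteration, and all names are my assumptions, not details taken from the paper:

```python
import numpy as np

RV = 461.5   # J kg^-1 K^-1, gas constant for water vapor
RD = 287.05  # J kg^-1 K^-1, gas constant for dry air

def co2_flux_mixing_ratio(w, ts, rho_v, rho_c, p, n_iter=3):
    """Sketch of steps (i)-(iv): w is vertical wind (m/s), ts sonic
    temperature (K), rho_v water-vapor density (kg/m^3), rho_c CO2 density
    (kg/m^3), p pressure (Pa); all numpy arrays of equal length."""
    # (i) humidity-correct the sonic temperature, ts ≈ T (1 + 0.32 e/p),
    # iterating because the vapor pressure e itself depends on T
    t = np.asarray(ts, dtype=float).copy()
    for _ in range(n_iter):
        e = rho_v * RV * t                  # (ii) vapor pressure (ideal gas)
        t = ts / (1.0 + 0.32 * e / p)
    e = rho_v * RV * t
    # (iii) CO2 mixing ratio relative to dry air (kg CO2 per kg dry air)
    rho_d = (p - e) / (RD * t)
    chi_c = rho_c / rho_d
    # (iv) flux as covariance of vertical wind and the conserved mixing ratio
    w_p = w - w.mean()
    chi_p = chi_c - chi_c.mean()
    return rho_d.mean() * np.mean(w_p * chi_p)   # kg CO2 m^-2 s^-1
```

Because chi_c is conserved under temperature, humidity, and pressure fluctuations, no WPL-style density corrections are applied afterwards, which is the simplification the abstract describes.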

Atmospheric trace-gas sensing with quantum cascade laser (QCL) spectroscopy offers the potential for highly sensitive, fast, selective mid-infrared absorption measurements of atmospheric species such as ammonia (NH3). As the third most abundant nitrogen species and the most abundant gaseous base in the atmosphere, ammonia plays important roles in neutralizing acidic species and as a gas-phase precursor to ammoniated fine particulate matter. High-precision gas-phase measurements are necessary to constrain highly uncertain emission sources and sinks, with implications for understanding how chemical components of fine particulate matter affect air quality and climate, as well as nitrogen deposition to ecosystems. Conventional ammonia sensors employing chemical ionization, denuder or filter techniques are labor-intensive, not gas-selective, and exhibit low time resolution. As an advantageous alternative to conventional measurement techniques, we develop an open-path quantum cascade laser-based ammonia sensor operating at 9.06 μm for ground-based measurements. A continuous-wave, thermoelectrically cooled quantum cascade laser is used to perform wavelength modulation spectroscopy (WMS). Room-temperature, unattended operation with minimal surface adsorption effects due to the open-path configuration represents a significant improvement over cryogenically cooled, closed-path systems. The feasibility of a cylindrical mirror multi-pass optical cell for achieving path lengths near 50 m in a compact design is also assessed. Meaningful ammonia measurements require fast sub-ppbv detection limits due to ammonia's large dynamic range and temporal and spatial atmospheric variability. When fully developed, our instrument will achieve high time resolution (up to 10 Hz) measurements with ammonia detection limits in the 100 pptv range. Initial results include ambient laboratory ammonia detection at 58 ppbv relative to a 0.4% ammonia reference cell, based on the WMS signal integrated area. 
We estimate a limit of detection of ~400 pptv NH3 based on our signal-to-noise ratio. Non-cryogenic, unattended operation of this compact sensor offers the potential for applications in particulate matter gas-phase precursor monitoring networks. Future sensor measurements can also be utilized for evaluation of, and data assimilation into, air quality and aerosol forecast models, of particular importance for regions where ammonia plays a critical role in fine particulate matter formation.

Generalized Greedy Algorithm for Shortest Superstring. Zhengjun Cao, Lihua Liu, and Olivier ... University, China. Abstract: In the primitive greedy algorithm for shortest superstring, if a pair of strings ... of optimal set ... and generalize the primitive greedy algorithm. The generalized algorithm can be reduced

This research report summarizes a solution for two-dimensional path planning with obstacle avoidance in a workspace with stationary obstacles. The solution finds the shortest path for the end effector of a manipulator arm. The program uses an overhead image of the robot workspace and the starting and ending positions of the manipulator arm end effector to generate a search graph, which is used to find the shortest path through the work area. The solution was originally implemented in VAX Pascal but was later converted to VAX C.

We prove the equivalence, up to a small polynomial approximation factor √(n/log n), of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the relationship between uSVP and the more standard GapSVP, as well as the BDD problem commonly used in

Path Dependency, Interorganizational Systems, Internet, Web EDI ... in path dependency. With the Internet used as ... the Internet IOS. Within one unified model, these variables allow us to test network effects and path dependency

The advantages of measuring open-path Fourier transform infrared (OP/FT-IR) spectra at low resolution are discussed from both a theoretical and an experimental viewpoint. In general, the optimum combination of selectivity and sensitivity is found when the resolution is approximately equal to the average full-width at half height (FWHH) of the analytical bands. The FWHH of many bands in the vapor-phase spectra of molecules of medium size, such as chlorinated organic solvents, is approximately 20 cm⁻¹, so a resolution of 16 cm⁻¹ is often found to yield the most accurate analytical results. The low baseline noise level found when spectra are measured at low resolution can allow room-temperature deuterated triglycine sulfate pyroelectric detectors to be used instead of liquid-nitrogen-cooled mercury cadmium telluride photodetectors for OP/FT-IR measurements.

During the cleanup of dioxin-contaminated soils from the Times Beach, Missouri, Superfund site, workers excavating a trench near the old City Park became nauseated from fumes emanating from the excavation ditch. Investigations by US EPA and the Missouri Department of Natural Resources found that approximately 12,000 square feet of soil was contaminated by toluene, ethylbenzene, and xylenes. During the remediation of this site, open-path FTIR was used to monitor the perimeter of the excavation and stockpile areas. The air monitoring data were collected and screened on a near real-time basis to ensure that off-site workers and the public were protected. This paper outlines the air monitoring procedures used during the project and the difficulties encountered while sampling at the site.

It is widely known that methane, together with carbon dioxide, is one of the most effective greenhouse gases contributing to global climate change. According to the EMEP/CORINAIR Emission Inventory Guidebook, around 25% of global CH4 emissions originate from animal husbandry, especially from enteric fermentation. However, uncertainties in the CH4 emission factors provided by EMEP/CORINAIR are around 30%. For this reason, studies that determine emissions experimentally are important for improving estimates of livestock emissions and for calculating emission factors not included in this inventory. FTIR spectroscopy has frequently been used in different methodologies to measure emission rates in many environmental problems. Some of these methods are based on dispersion modelling techniques, wind data, micrometeorological measurements or the release of a tracer gas. In this work, a new method for calculating emission rates from livestock buildings using open-path FTIR spectroscopy is proposed. The method is inspired by the accumulation chamber method used for CO2 flux measurements in volcanic areas and for CH4 flux in wetlands and aquatic ecosystems. The procedure is as follows: the livestock is kept outside the building, which is ventilated to reduce concentrations to ambient level. Once the livestock has been brought inside, the building is completely closed and the concentrations of the gases emitted by the livestock begin to increase. The open-path system measures the concentration evolution of gases such as CO2, CH4, NH3 and H2O. The slope of the concentration evolution, dC/dt, at the initial time is directly proportional to the flux of the corresponding gas. The method has been applied in a cow shed in the surroundings of La Laguna (Tenerife, Spain). As expected, the evolution of the gas concentrations shows that the livestock building behaves like an accumulation chamber. 
Preliminary results show that the CH4 emission factor is lower than that proposed by the Emission Inventory.
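The dC/dt step above can be sketched in a few lines. This minimal illustration fits a line to the first samples after the building is closed; the function name, units, and the volume-to-floor-area scaling are my assumptions (the abstract only states that the initial slope is proportional to the flux):

```python
import numpy as np

def emission_flux(t, conc, volume_m3, floor_area_m2, n_initial=10):
    """Accumulation-chamber sketch: fit dC/dt over the first n_initial
    samples after closing the building, then scale by enclosed volume per
    unit floor area. t in s, conc in mg/m^3 -> flux in mg m^-2 s^-1."""
    slope, _ = np.polyfit(t[:n_initial], conc[:n_initial], 1)  # dC/dt at t ~ 0
    return slope * volume_m3 / floor_area_m2
```

Restricting the fit to the initial points matters: as concentrations build up, leakage and reduced concentration gradients flatten the curve and would bias the slope low.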

Saint Martin d'Hères campus ... satellite remote sensing, image processing, classification, mathematical ... generation of satellites provide images with a very high spatial resolution, down to less than 1 meter per ... Directional mathematical morphology and path openings: application to the analysis of satellite

of planning reliable landmark-based robot navigation strategies in the presence of significant sensor ... and constructs their visibility graph. This graph is continually updated during subsequent navigation. All motion ... can be detected. These estimates are based on the history of all observations made by the robot

Summary form only given. Broadcast operations are commonly used in a large variety of applications such as video-conferencing, television, etc. These applications need a high level of QoS. Moreover, in such applications, each receiver has to pay to receive data. In the particular case of broadcast, the price paid by a given receiver is determined by multiple parameters like its location in

We derive a new fiber tracking algorithm for DT-MRI that parts with the locally 'greedy' paradigm intrinsic to conventional tracking algorithms. We demonstrate the ability to precisely reconstruct a diverse range of fiber trajectories in authentic and computer-generated DT-MRI data, for which well-known conventional tracking algorithms are shown to fail. Our approach is to pose fiber tracking

and Computer Science on May 21, 1999, in partial fulfillment of the requirements for the degrees of Bachelor of Science in Electrical Engineering and Computer Science and Master of Engineering in Computer

in time O(n/ε + n log n). The data structure is associated with a new variety of Voronoi diagram. Given [...]. As for approximation algorithms, Papadimitriou [Pap85] has shown that O(n³(L + log(n/ε))²/ε) time suffices ... in this paper is faster than that of [Pap85] when nε³ is large. Also, the number of arithmetic operations

Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths. These gap areas represent areas on the site that are more than a specified distance from a primary path. The added parallel paths needed, optionally, to be connected together into a single path, the shortest one possible. The paths also needed, optionally, to be attached to existing primary paths, again with the shortest possible connection. Finally, the process had to be repeatable and predictable, so that the same inputs (primary paths, specified distance, and path options) would result in the same set of new paths every time. This methodology was developed to meet those specifications.

Ammonia (NH3) is a key precursor to atmospheric fine particulate matter, with strong implications for regional air quality and global climate change. Despite the importance of atmospheric ammonia, its spatial and temporal variation is poorly characterized, and knowledge of its sources, sinks, and transport is severely limited. Existing measurements suggest that traffic exhaust may provide significant amounts of ammonia in urban areas, with correspondingly greater impacts on particulate matter formation and urban air quality. To capture the spatial and temporal variation of ammonia emissions, a portable, low-power sensor with high time resolution is necessary. We have developed a portable open-path ammonia sensor with a detection limit of 0.5 ppbv ammonia for 1 s measurements. The sensor has a power consumption of about 60 W and is capable of running on a car battery continuously for 24 hours. An additional laser has been coupled to the sensor to yield concurrent N2O and CO measurements as tracers for distinguishing sources. The overall sensor prototype fits on a 60 cm × 20 cm aluminum breadboard. Roadside measurements indicated NH3/CO emission ratios of 4.1±5.4 ppbv/ppmv from a fleet of 320 vehicles, which agree with existing on-ramp measurements. Urban measurements in the Baltimore and Washington, DC metropolitan areas have shown significant ammonia mixing ratios concurrent with carbon monoxide levels from the morning and evening rush hours. On-road measurements with our open-path sensor have also been performed continuously from the Midwest to Princeton, NJ, including urban areas such as Pittsburgh, tunnels, and relatively clean conditions. The emission ratios of ammonia against CO and/or CO2 help identify the sources and amounts of both urban and agricultural ammonia emissions. 
Preliminary data from both spatial mapping, monitoring, and vehicle exhaust measurements suggest that urban ammonia emissions from fossil fuel combustion are significant and may provide an unrecognized source in the atmospheric ammonia budget. Ongoing efforts include spatial mapping of ammonia and other tracers in the New York City and Philadelphia metropolitan areas. Further comparison with TES satellite ammonia retrieval will help to put the measurements into a larger geographical and temporal context.

The stable isotopes in atmospheric water vapor contain rich information on hydrologic cycles and on gaseous exchange processes between the biosphere and the atmosphere. A roughly one-week field experiment was conducted to continuously measure the isotopic composition of water vapor in ambient air using an open-path FTIR system. Mixing ratios of H₂¹⁶O and HD¹⁶O were measured simultaneously. Analysis of the water vapor isotopes revealed that the variations of H₂¹⁶O and HD¹⁶O were highly correlated. Mixing ratios of both isotopologues varied considerably on a daily timescale and between days, with no obvious diurnal cycle, whereas the deuterium composition δD showed a clear diel cycle. The results showed that the correlation between δD and the H2O mixing ratio was relatively weak, which was also demonstrated by a Keeling plot analysis of the whole data set. However, Keeling analysis on a daily timescale displayed a more obvious linear relationship between δD and the total H2O concentration. Daily isotopic values of the evapotranspiration source were obtained, ranging between -113.93±10.25 and -245.63±17.61 over the observation period.
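A Keeling plot estimates the source signature as the intercept of δD regressed against the reciprocal of the water vapor mixing ratio. A minimal sketch (function name and input conventions are assumed, not from the paper):

```python
import numpy as np

def keeling_intercept(h2o, delta_d):
    """Keeling-plot sketch: regress delta-D against 1/[H2O]; for two-member
    mixing of background air with an evapotranspiration source, the
    intercept of the fitted line estimates the source's isotopic value."""
    slope, intercept = np.polyfit(1.0 / np.asarray(h2o), np.asarray(delta_d), 1)
    return intercept
```

The two-member mixing assumption is why the daily-window fits in the study behave better than the whole-record fit: over a week, the background and source compositions themselves drift, breaking the single straight-line model.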

Active open-path FTIR sensors provide more sensitive detection of chemical agents than passive FTIRs, such as the M21 RSCAAL and JSLSCAD, and at the same time identify and quantify toxic industrial chemicals (TICs). Passive FTIRs are bistatic sensors relying on infrared sources of opportunity. Reliance on earth-based sources of opportunity limits the source temperatures available to passive chemical-agent FTIR sensors to about 300 K. Active FTIR chemical-agent sensors utilize silicon carbide sources, which can be operated at 1500 K. The higher source temperature provides more than an 80-fold increase in the infrared radiant flux emitted per unit area in the 7 to 14 micron spectral fingerprint region. Minimum detection limits are better than 5 μg/m³ for GA, GB, GD, GF and VX. Active FTIR sensors can (1) assist first responders and emergency response teams in their assessment of and reaction to a terrorist threat, (2) provide information on the identity of the TICs present and their concentrations and (3) contribute to the understanding and prevention of debilitating disorders analogous to the Gulf War Syndrome for military and civilian personnel.

Combining tunable diode laser absorption spectroscopy (TDLAS) with long open-path technology, the design of a CO2 on-line monitoring system based on near-infrared TDLAS is discussed in detail, and an instrument for large-range measurement was set up. Choosing the CO2 absorption line at 1.57 μm, whose line strength is strong and suitable for measurement, ambient atmospheric CO2 was measured continuously with a 30 s temporal resolution at a suburban site in the autumn of 2007. The diurnal variations of atmospheric CO2 and the continuous monitoring results are presented. The results show that the CO2 concentration has an obvious diurnal periodicity in suburbs where the air is free of interference and contamination. The general characteristic of the diurnal variation is that the concentration is low in the daytime and high at night, matching the photosynthesis trend. The instrument can detect gas concentration online with high resolution, sensitivity, and precision and short response time; the monitoring requires no gas sampling, calibration is easy, and the detection limit is about 4.2 × 10⁻⁷. The system and measurement approach have been shown to be feasible, making this an effective method for continuous online monitoring of gas flux over large ranges in ecosystems based on TDLAS. PMID:19385195

CETREL (Empresa de Proteção Ambiental) is an environmental engineering company owned by the member companies of the Camaçari Petrochemical Complex, the largest petrochemical complex in Brazil. CETREL operates a centralized waste treatment plant, treatment and disposal facilities, an incineration unit, and groundwater and air quality monitoring networks. The air monitoring network was designed based on mathematical modeling, and the results showed that the monitoring of hydrocarbons is important not just within the complex but also in the surrounding area. There are presently no regulations for hydrocarbons in Brazil; however, they are monitored due to concerns about health problems arising from human exposure. The network has eight multiparameter monitoring stations, located at the nearby villages, where hydrocarbons are sampled with Summa canisters and subsequently analyzed with a GC/MS using a cryogenic trap at the interface. Open-path FTIR is used to monitor at the individual plants and in the areas in between because it is more efficient and costs less than attempting to achieve the same level of coverage with canisters. Ten locations were selected based on mathematical modeling and knowledge of the likely emission sources. Since August 1993, there have been five different measurement campaigns.

The automated quantification of three greenhouse gases (ammonia, methane, and nitrous oxide) in the vicinity of a large dairy farm by open-path Fourier transform infrared (OP/FT-IR) spectrometry at intervals of 5 min is demonstrated. Spectral pretreatment, including the automated detection and correction of the effect of interruption of the infrared beam by a moving object and the automated correction for nonlinear detector response, is applied to the measured interferograms. Two ways of obtaining quantitative data from OP/FT-IR spectra are described. The first, which is installed in a recently acquired commercial OP/FT-IR spectrometer, is based on classical least-squares (CLS) regression; the second is based on partial least-squares (PLS) regression. It is shown that CLS regression only gives accurate results if the absorption features of the analytes are located in very short spectral intervals where lines due to atmospheric water vapor are absent or very weak; of the three analytes examined, only ammonia fell into this category. PLS regression, on the other hand, allowed what appeared to be accurate results to be obtained for all three analytes. PMID:20879801
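The CLS step amounts to a linear least-squares fit of the measured spectrum to unit-concentration reference spectra under the Beer-Lambert law. A minimal sketch (not the commercial implementation described in the abstract; names and shapes are assumed):

```python
import numpy as np

def cls_concentrations(a_mixture, k_pure):
    """Classical least-squares (CLS) sketch: a_mixture is the measured
    absorbance spectrum (n_points,), and k_pure (n_points, n_analytes)
    holds unit-concentration reference spectra. Beer-Lambert gives
    a ≈ K @ c, solved for the concentration vector c by least squares."""
    c, *_ = np.linalg.lstsq(k_pure, a_mixture, rcond=None)
    return c
```

This also shows why CLS struggles outside narrow water-free windows: any absorber not represented as a column of k_pure (such as atmospheric water vapor) has its absorbance forced onto the analyte coefficients, biasing the result.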

Ammonia (NH3) is a key precursor species to atmospheric fine particulate matter with strong implications for regional air quality and global climate change. NH3 from vehicles accounts for a significant fraction of total emissions of NH3 in urban areas. A mobile platform is developed to measure NH3, CO, and CO2 from the top of a passenger car. The mobile platform conducted 87 h of on-road measurements, covering 4500 km in New Jersey and California. The average on-road emission factor (EF) in CA is 0.49 ± 0.06 g NH3 per kg fuel and agrees with previous studies in CA (0.3-0.8 g/kg). The mean on-road NH3:CO emission ratio is 0.029 ± 0.005, and there is no systematic difference between NJ and CA. On-road NH3 EFs increase with road gradient by an enhancement of 53 mg/kg fuel per percentage of gradient. On-road NH3 EFs show higher values in both stop-and-go driving conditions and freeway speeds with a minimum near 70 km/h. Consistent with prior studies, the on-road emission ratios suggest a highly skewed distribution of NH3 emitters. Comparisons with existing NJ and CA on-road emission inventories indicate that there may be an underestimation of on-road NH3 emissions in both NJ and CA. We demonstrate that mobile, open-path measurements provide a unique tool to help quantitatively understand the on-road NH3 emissions in urban and suburban settings. PMID:24517544
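Fuel-based emission factors like those above are commonly derived with a carbon-balance calculation. The formulation below is my assumption (the abstract does not spell out its method), including the 0.85 carbon mass fraction taken as typical for gasoline:

```python
M_C, M_NH3 = 12.011, 17.031  # g/mol
W_C = 0.85                   # assumed carbon mass fraction of gasoline

def ef_nh3_carbon_balance(d_nh3_ppb, d_co_ppm, d_co2_ppm):
    """Carbon-balance sketch: grams of NH3 emitted per kg of fuel burned,
    from above-background enhancements measured in a vehicle plume.
    Assumes essentially all fuel carbon leaves the tailpipe as CO2 + CO."""
    nh3_ppm = d_nh3_ppb * 1e-3            # mole-basis enhancement, to ppm
    carbon_ppm = d_co_ppm + d_co2_ppm     # total carbon enhancement
    return (nh3_ppm / carbon_ppm) * (M_NH3 / M_C) * W_C * 1000.0
```

The NH3:CO2 ratio dominates the result, which is why co-measuring CO2 (and CO as a combustion tracer) on the same open path makes the emission factor insensitive to plume dilution.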

We investigate the statistics of extremal paths (both the shortest and the longest) from the root to the bottom of a Cayley tree. The lengths of the edges are assumed to be independent identically distributed random variables drawn from a distribution ρ(l). In addition, the number of branches from any node is also random. Exact results are derived for an arbitrary distribution

We present methods for extracting optimal paths from motion planning roadmaps. Our system enables any combination of optimization criteria, such as collision detection, kinematic/dynamic constraints, or minimum clearance, and relaxed definitions of the goal state, to be used when selecting paths from roadmaps. Our algorithm is an augmented version of Dijkstra's shortest path algorithm which allows edge weights to
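A pluggable edge-weight function is the natural way to let one shortest-path routine serve several optimization criteria. The sketch below is a generic version of this idea, not the authors' roadmap system; the graph layout and attribute names are invented for illustration:

```python
import heapq

def dijkstra(graph, start, goal, weight=lambda attrs: attrs["length"]):
    """Dijkstra's shortest-path algorithm with a pluggable edge-weight
    callable. graph maps node -> [(neighbor, attrs_dict), ...]; weight
    turns an edge's attribute dict into a nonnegative cost."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, attrs in graph.get(u, []):
            nd = d + weight(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking predecessors back from the goal
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return dist[goal], path[::-1]
```

Swapping the weight callable, for example length plus a clearance penalty, reroutes the search toward wider corridors without touching the algorithm itself.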

One of the essential characteristics of LCLS and other FELs currently in operation or under development is their ultrashort pulse duration, further enhanced just recently by novel schemes of electron bunch compression, which opens up unprecedented opportunities for the detailed investigation of reaction dynamics. However, to date there is no measuring device or concept able to determine precisely the duration of the generated X-ray pulses. By overlapping the FEL with a synchronized optical laser in a gas target and measuring the energy of the IR-laser-dressed photoelectrons ('streaking spectroscopy'), we were able to determine the duration of the shortest FEL pulses available at LCLS to be not more than 4 fs. In addition, an analysis of the pulse substructure yields an estimate for the length of the underlying single spikes on the order of 600 as.

Biomass samples from a diverse range of ecosystems were burned in the Intermountain Fire Sciences Laboratory open combustion facility. Mid-infrared spectra of the nascent emissions were acquired at several heights above the fires with a Fourier transform infrared spectrometer (FTIR) coupled to an open multipass cell. In this report, the results from smoldering combustion during 24 fires are presented, including production

When introduced into a novel environment, mammals establish in it a preferred place marked by the highest number of visits and highest cumulative time spent in it. Examination of exploratory behavior in reference to this "home base" highlights important features of its organization. It might therefore be fruitful to search for other types of marked places in mouse exploratory behavior and examine their influence on overall behavior. Examination of path curvatures of mice exploring a large empty arena revealed the presence of circumscribed locales marked by the performance of tortuous paths full of twists and turns. We term these places knots, and the behavior performed in them, knot-scribbling. There is typically no more than one knot per session; it has distinct boundaries and it is maintained both within and across sessions. Knots are mostly situated in the place of introduction into the arena, which here was away from walls. Knots are not characterized by the features of a home base, except for a high speed during inbound and a low speed during outbound paths. The establishment of knots is enhanced by injecting the mouse with saline and placing it in an exposed portion of the arena, suggesting that stress and the arousal associated with it consolidate a long-term contingency between a particular locale and knot-scribbling. In an environment devoid of proximal cues, mice mark a locale associated with arousal by twisting and turning in it. This creates a self-generated, often centrally located landmark. The tortuosity of the path traced during the behavior implies almost concurrent multiple views of the environment. Knot-scribbling could therefore function as a way to obtain an overview of the entire environment, allowing re-calibration of the mouse's locale map and compass directions. The rich vestibular input generated by scribbling could improve the interpretation of the visual scene. PMID:20090825

We develop a compact, mid-infrared quantum cascade (QC) laser based sensor to perform high-precision measurements of N2O and CO simultaneously. Since CO is a good tracer of anthropogenic emissions, simultaneous measurements of CO and N2O allow us to correlate the sources of N2O emissions. The thermoelectrically (TE) cooled, continuous-wave QC laser enables room-temperature and unattended operation. The laser is scanned over the absorption features of N2O and CO near 4.54 μm by laser current modulation. A novel cylindrical multi-pass optical cell terminated at the (N/2)th spot is used to simplify the optical configuration by separating the laser and the TE-cooled detector. Our systems are open-path and non-cryogenic, avoiding the need for a vacuum pump and liquid nitrogen. This configuration enables a future design of a non-intrusive, compact (shoe-box size), and low-power (10 W) sensor. Wavelength modulation spectroscopy (WMS) is used to enhance measurement sensitivity. Higher-harmonic detection (4f and 6f) is performed to improve the resolution between the nearly overlapping N2O and CO lines. Atmospherically relevant N2O and CO concentrations are measured, with noise-equivalent detection limits of 0.3 ppbv for N2O and 2 ppbv for CO for 1 s averaging. We also develop an open-path, high-sensitivity atmospheric ammonia (NH3) sensor using a very similar instrument design. A 9.06 μm QC laser is used to probe absorption features of NH3. Open-path detection of NH3 is even more beneficial due to the surface adsorption effects of NH3 and its tendency to readily partition into condensed phases. The NH3 sensor was deployed at the CalNex 2010 field campaign. The entire system was stable throughout the campaign and acquired data with 10 s time resolution under adverse ambient temperatures and dusty conditions. The measurements were in general agreement with other NH3 and trace gas sensors. 
Both the N2O/CO and NH3 sensors will be deployed in a local eddy-covariance station to examine long term stability and detection limit in the field. Future sensor applications include characterizing urban and agricultural N2O and NH3 emission sources and quantifying their respective fluxes.

Ab-initio predictions of nuclei with masses up to A~100 or more are becoming possible thanks to novel advances in computations and in the formalism of many-body physics. Some of the most fundamental issues include how to deal with many-nucleon interactions, how to calculate degenerate (open-shell) systems, and how to pursue ab-initio approaches to reaction theory. Self-consistent Green's function (SCGF) theory is a natural approach to address these challenges. Its formalism has recently been extended to three- and many-body interactions and reformulated within the Gorkov framework to reach semi-magic open shell isotopes. These exciting developments, together with the predictive power of chiral nuclear Hamiltonians, are opening the path to understanding large portions of the nuclear chart, especially within the $sd$ and $pf$ shells. The present talk reviews the most recent advances in ab-initio nuclear structure and many-body theory that have been possible through the SCGF approach.

This paper describes a modification of the Free-Space and Hata formulae for the prediction of received signal power, PR, and propagation path loss, LP, in two cellular mobile radio systems (CMRS) in Northern Nigeria. Measurements of PR were taken with a Cellular Mobile Radio Test Receiver (Sagem OT 160) in some selected open/rural environments, when the receiver was being

With the advent of cloud computing, it becomes desirable to utilize cloud computing to efficiently process complex operations on large graphs without compromising their sensitive information. This paper studies shortest distance computing in the cloud, which aims at the following goals: i) preventing outsourced graphs from neighborhood attack, ii) preserving shortest distances in outsourced graphs, iii) minimizing overhead on the

The Greedy Algorithm for Shortest Superstrings. Haim Kaplan, Nira Shafrir. August 26, 2004. There is a natural greedy algorithm for the shortest superstring problem, which we refer to as GREEDY. The GREEDY algorithm maintains a set of strings, initialized to be equal to S. At each iteration
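The greedy scheme described above can be made concrete. Below is a minimal Python sketch (illustrative, not the paper's exact formulation): at each iteration the pair of strings with the maximum suffix-prefix overlap is merged, until a single superstring remains.

```python
def overlap(a, b):
    # Length of the longest suffix of a that is a prefix of b.
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    # Sketch of GREEDY: repeatedly merge the pair with maximum overlap.
    s = list(dict.fromkeys(strings))  # de-duplicate, keep order
    # Drop strings already contained in another string.
    s = [x for x in s if not any(x != y and x in y for y in s)]
    while len(s) > 1:
        best = (-1, None, None)
        for i in range(len(s)):
            for j in range(len(s)):
                if i != j:
                    k = overlap(s[i], s[j])
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = s[i] + s[j][k:]
        s = [x for t, x in enumerate(s) if t not in (i, j)] + [merged]
    return s[0]
```

With zero overlap the merge degenerates to concatenation, so the loop always terminates with a valid superstring.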

A new technique for the satellite remote sensing of greenhouse gases in the atmosphere via the absorption of short-wave infrared laser signals transmitted between counter-rotating satellites in low Earth orbit has recently been proposed; this would enable the acquisition of a long-term, stable, global set of altitude-resolved concentration measurements. We present the first ground-based experimental demonstration of this new infrared-laser occultation method, in which the atmospheric absorption of CO2 near 2.1 µm was measured over a ~144 km path length between two peaks in the Canary Islands (at an altitude of ~2.4 km), using relatively low power diode lasers (~4 to 10 mW). The retrieved CO2 volume mixing ratio of 400 ppm (±15 ppm) is consistent within experimental uncertainty with simultaneously recorded in situ validation measurements. We conclude that the new method has a sound basis for monitoring CO2 in the free atmosphere; other greenhouse gases such as methane, nitrous oxide and water vapour can be monitored in the same way.

The problem of optimally scheduling the read/write requests in a disk storage system is considered. A new class of algorithms for the disk scheduling problem is presented, and the relations between this problem and the shortest Hamiltonian path problem on asymmetric graphs are investigated. The problem of deriving realistic upper bounds for the disk utilization factor, one of the main

We report on the development of an optical instrument based on incoherent broadband cavity-enhanced absorption spectroscopy (IBBCEAS) for simultaneous open-path measurements of nitrous acid (HONO) and nitrogen dioxide (NO2) in ambient air using a UV light emitting diode operating at 366 nm. Detection limits of 430 pptv for HONO and 1 ppbv for NO2 were achieved with an optimum acquisition time of 90 s, determined by an Allan variance analysis. Based on a 1.85 m long high optical finesse open-path cavity, an effective optical path length of 2.8 km was realized in aerosol-free samples or in an urban environment at modest aerosol levels. Such a kilometer-long optical absorption path is comparable to that achieved in the well established differential optical absorption spectroscopy (DOAS) technology while keeping the instrument very compact. The open-path detection configuration allows one to avoid absorption cell wall losses and sampling-induced artifacts. The demonstrated sensitivity and specificity show the high potential of this cost-effective and compact instrument for future field applications with high spatial resolution.

We investigate the statistics of extremal paths (both the shortest and the longest) from the root to the bottom of a Cayley tree. The lengths of the edges are assumed to be independent identically distributed random variables drawn from a distribution ρ(l). Besides, the number of branches from any node is also random. Exact results are derived for arbitrary distribution
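For intuition, the extremal-path statistics in such a random tree are easy to sample numerically. The toy Python sketch below assumes, purely for illustration, uniform edge lengths and a small random branching number per node (neither is the paper's distribution):

```python
import random

def extremal_paths(depth=8, branch_choices=(1, 2, 3), seed=0):
    """Toy sketch: grow a random tree where each node gets a random
    number of children and each edge an i.i.d. Uniform(0,1) length;
    return the shortest and longest root-to-leaf path lengths at `depth`.
    """
    rng = random.Random(seed)
    # frontier holds cumulative path lengths at the current generation
    frontier = [0.0]
    for _ in range(depth):
        nxt = []
        for d in frontier:
            for _ in range(rng.choice(branch_choices)):
                nxt.append(d + rng.random())
        frontier = nxt
    return min(frontier), max(frontier)
```

Averaging these extremes over many seeds would estimate the distributions whose exact forms the abstract refers to.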

Because of the high travel speed, the complex flow dynamics around an aircraft and the complex dependency of the fluid dynamics on numerous airborne parameters, it is quite difficult to obtain accurate pressure values at a specific instrument location on an aircraft's fuselage. Complex simulations using computational fluid dynamics (CFD) models can in theory computationally "transfer" pressure values from one location to another. However, for long flight patterns, this process is inconvenient and cumbersome. Furthermore, these CFD transfer models require local experimental validation, which is rarely available. In this paper, we describe an integrated approach for a spectroscopic, calibration-free, in-flight pressure determination in an open-path White cell on an aircraft fuselage using ambient, atmospheric water vapour as the "sensor species". The presented measurements are realized with the HAI (Hygrometer for Atmospheric Investigations) instrument, built for multiphase water detection via calibration-free TDLAS (tunable diode laser absorption spectroscopy). The pressure determination is based on raw data used for H2O concentration measurement, but with a different post-flight evaluation method, and can therefore be conducted at deferred time intervals on any desired flight track. The spectroscopic pressure is compared in-flight with the static ambient pressure of the aircraft avionic system and a micro-mechanical pressure sensor, located next to the open-path cell, over a pressure range from 150 hPa to 800 hPa, and a water vapour concentration range of more than three orders of magnitude. The correlation between the micro-mechanical pressure sensor measurements and the spectroscopic pressure measurements shows an average deviation from linearity of only 0.14% and a small offset of 9.5 hPa. For the spectroscopic pressure evaluation we derive measurement uncertainties of 3.2% under laboratory conditions and 5.1% during in-flight operation on the HALO airplane.
Under certain flight conditions we quantified, for the first time, stalling-induced dynamic pressure deviations of up to 30% (at 200 hPa) between the avionic sensor and the optical and mechanical pressure sensors integrated in HAI. Such severe local deviations from the usually used avionic pressure are important to take into account for other airborne sensors employed on fast-flying platforms such as the HALO aircraft.

The continued development of computational and synthetic methods has enabled the enumeration or preparation of a nearly endless universe of chemical structures. Nevertheless, the ability of this chemical universe to deliver small molecules that can both modulate biological targets and have drug-like physicochemical properties continues to be a topic of interest to the pharmaceutical industry and academic researchers alike. The chemical space described by public, commercial, in-house and virtual compound collections has been interrogated by multiple approaches including biochemical, cellular and virtual screening, diversity analysis, and in-silico profiling. However, current drugs and known chemical probes derived from these efforts are contained within a remarkably small volume of the predicted chemical space. Access to more diverse classes of chemical scaffolds that maintain the properties relevant for drug discovery is certainly needed to meet the increasing demands for pharmaceutical innovation. The Lilly Open Innovation Drug Discovery platform (OIDD) was designed to tackle barriers to innovation through the identification of novel molecules active in relevant disease biology models. In this article we will discuss several computational approaches towards describing novel, biologically active, drug-like chemical space and illustrate how the OIDD program may facilitate access to previously untapped molecules that may aid in the search for innovative pharmaceuticals. PMID:24283973

An open-path tunable diode laser absorption spectroscopy (OP-TDLAS) detector was applied to detect the methane emission from a biogas plant on a dairy farm. Two OP-TDLAS scanning systems were built according to the maximum likelihood expectation maximization (MLEM) and smooth basis function minimization (SBFM) algorithms to reconstruct two-dimensional (2-D) distribution maps. Six reconstruction maps with a resolution of 30×80 were obtained by the MLEM algorithm with the "grid translation method", and three reconstruction maps were obtained by the SBFM algorithm with a 2-D Gaussian model. The maximum mixing ratio in the first result was between 0.85 and 1.30 ppm, while it was between 1.14 and 1.30 ppm in the second result. The average mixing ratio in the first result was between 0.49 and 0.54 ppm, and between 0.56 and 0.65 ppm in the second result. The reconstruction results validated that the two algorithms could effectively reflect the methane mixing ratio distribution within the target area. However, with its simpler optical rays and lower equipment requirements, the OP-TDLAS scanning system based on the SBFM algorithm provides a useful tool for monitoring methane emissions in agricultural production.
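The MLEM step used for such path-integral tomography can be sketched in a few lines. The following Python/NumPy sketch shows the standard multiplicative MLEM update; the system matrix shape, grid size, and iteration count are illustrative assumptions, not this study's configuration:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Minimal MLEM sketch for path-integral tomography.
    A: m x n system matrix (path i's length through pixel j),
    y: m measured path-integrated values.
    Iterates x <- x / (A^T 1) * A^T (y / (A x)), keeping x nonnegative.
    """
    m, n = A.shape
    x = np.ones(n)                      # nonnegative starting image
    norm = A.T @ np.ones(m)             # sensitivity normalization
    for _ in range(n_iter):
        proj = A @ x
        proj = np.where(proj > 0, proj, 1e-12)   # avoid divide-by-zero
        x *= (A.T @ (y / proj)) / np.where(norm > 0, norm, 1e-12)
    return x
```

For consistent, noise-free data the iteration converges to a nonnegative solution that reproduces the measured path integrals.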

In many path-finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by applications such as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and a destination for a traveler whose mode choice is specified as a context-free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth-bounded graphs, they show that (i) the problem of finding a regular-language-constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context-free-language-constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
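The regular-language case rests on the standard product construction: run a shortest-path search over pairs (graph vertex, automaton state). A minimal Python sketch follows; the data-structure encodings and function name are illustrative, not the authors' implementation:

```python
import heapq

def rl_constrained_shortest_path(edges, dfa_trans, dfa_start, dfa_accept,
                                 src, dst):
    """Dijkstra over the product of a labeled graph and a DFA.
    edges: dict vertex -> list of (neighbor, label, weight)
    dfa_trans: dict (dfa_state, label) -> dfa_state (missing = disallowed)
    Returns the shortest distance whose label sequence the DFA accepts,
    or None if no such path exists.
    """
    dist = {(src, dfa_start): 0.0}
    pq = [(0.0, src, dfa_start)]
    while pq:
        d, v, q = heapq.heappop(pq)
        if d > dist.get((v, q), float("inf")):
            continue                      # stale queue entry
        if v == dst and q in dfa_accept:
            return d
        for w, label, c in edges.get(v, []):
            q2 = dfa_trans.get((q, label))
            if q2 is None:
                continue                  # label not allowed in this state
            nd = d + c
            if nd < dist.get((w, q2), float("inf")):
                dist[(w, q2)] = nd
                heapq.heappush(pq, (nd, w, q2))
    return None
```

The product graph has |V| × |Q| states, which is why the regular-language version stays polynomial.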

In this article, a case study was conducted to correlate the odor index and possible pollutants from a pharmaceutical plant based on the odor threshold and the Open-Path Fourier Transform Infrared (OP-FTIR) technique, modeling the results using the American Meteorological Society/Environmental Protection Agency Regulatory Dispersion Model (AERMOD). Although nine different pollutants were obtained from OP-FTIR, the contribution to the detected

The path taken by a car with a given minimum turning radius has a lower bound on its radius of curvature at each point, but the path has cusps if the car shifts into or out of reverse gear. What is the shortest such path a car can travel between two points if its starting and ending directions are specified?

We present observations of the dwarf novae GW Lib, V844 Her, and DI UMa. Radial velocities of Hα yield orbital periods of 0.05332 ± 0.00002 d (= 76.78 min) for GW Lib and 0.054643 ± 0.000007 d (= 78.69 min) for V844 Her. Recently, the orbital period of DI UMa was found to be only 0.054564 ± 0.000002 d (= 78.57 min) by Fried et al. (1999), so these are the three shortest orbital periods among dwarf novae with normal-abundance secondaries. GW Lib has attracted attention as a cataclysmic binary showing apparent ZZ Ceti-type pulsations of the white dwarf primary. Its spectrum shows sharp Balmer emission flanked by strong, broad Balmer absorption, indicating a dominant contribution by white-dwarf light. Analysis of the Balmer absorption profiles is complicated by the unknown residual accretion luminosity and lack of coverage of the high Balmer lines. Our best-fit model atmospheres are marginally hotter than the ZZ Ceti instability strip, in rough agreement with recent ultraviolet results from HST. The spectrum and outburst behavior of GW Lib make it a near twin of WZ Sge, and we estimate it to have a quiescent V absolute magnitude of 12. Comparison with archival data reveals a proper motion of 65 ± 12 mas/yr. The mean spectrum of V844 Her is typical of SU UMa dwarf novae. We detected superhumps in the 1997 May superoutburst with a superhump period of 0.05597 ± 0.00005 d. The spectrum of DI UMa appears normal for a dwarf nova near minimum light. These three dwarf novae have nearly identical short periods but completely dissimilar outburst characteristics. We discuss possible implications.

The paper discusses several versions of the method of shortest residuals, a specific variant of the conjugate gradient algorithm, first introduced by Lemaréchal and Wolfe and discussed by Hestenes in the quadratic case. In the paper we analyze the global convergence of the versions considered. Numerical comparison of these versions of the method of shortest residuals and an implementation of

In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

This study introduced a quantitative method that can be used to measure the concentration of analytes directly from a single-beam spectrum of open-path Fourier Transform Infrared Spectroscopy (OP-FTIR). The peak shapes of the analytes in a single-beam spectrum were gradually canceled (i.e., "titrated") by dividing by an aliquot of a standard transmittance spectrum with a known concentration, and the sum of the squared differential synthetic spectrum was calculated as an indicator for the end point of this titration. The quantity of the standard transmittance spectrum needed to reach the end point can be used to calculate the concentrations of the analytes. A NIST traceable gas standard containing six known compounds was used to compare the quantitative accuracy of both this titration method and that of classical least squares (CLS) using a closed-cell FTIR spectrum. Continuous FTIR analysis of an industrial exhaust stack showed that concentration trends were consistent between the CLS and titration methods. The titration method allowed the quantification to be performed without the need for a clean single-beam background spectrum, which was beneficial for the field measurement of OP-FTIR. Persistent constituents of the atmosphere, such as NH3, CH4 and CO, were successfully quantified using the single-beam titration method with OP-FTIR data that is normally inaccurate when using the CLS method due to the lack of a suitable background spectrum. Also, the synthetic spectrum at the titration end point contained virtually no peaks of analytes, but it did contain the remaining information needed to provide an alternative means of obtaining an ideal single-beam background for OP-FTIR.
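The titration idea can be illustrated with a toy model. The sketch below is not the authors' exact procedure: it assumes a Beer-Lambert-style model in which the measured single beam equals a background times the standard transmittance raised to the (unknown) relative concentration, and it uses the summed squared first differences as the smoothness indicator for the end point.

```python
import numpy as np

def titrate_concentration(single_beam, t_std, c_std, steps=2001):
    """Illustrative single-beam 'titration': divide the measured spectrum
    by increasing fractional powers of the standard transmittance t_std,
    and take the fraction x at which the synthetic spectrum is smoothest
    (analyte peaks canceled) as the end point. Returns x * c_std.
    """
    best_x, best_rough = 0.0, np.inf
    for x in np.linspace(0.0, 2.0, steps):
        # dividing by t_std**x removes x "aliquots" of the standard absorber
        synthetic = single_beam / np.power(t_std, x)
        rough = np.sum(np.diff(synthetic) ** 2)
        if rough < best_rough:
            best_rough, best_x = rough, x
    return best_x * c_std  # concentration in the units of the standard
```

In this toy setting the smoothness indicator vanishes exactly when the analyte's contribution has been fully divided out.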

Telomere length can be either shortened or elongated by an enzyme called telomerase after each cell division. Interestingly, the shortest telomere is involved in controlling the ability of a cell to divide. Yet, its dynamics remains elusive. We present here a stochastic approach where we model this dynamics using a Markov jump process. We solve the forward Fokker-Planck equation to obtain the steady state distribution and the statistical moments of telomere lengths. We focus specifically on the shortest one and estimate its length difference from the second shortest telomere. After extracting key parameters such as elongation and shortening dynamics from experimental data, we compute the length of telomeres in yeast and obtain as a possible prediction the minimum concentration of telomerase required to ensure a proper cell division. PMID:24329474
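A toy version of such a jump model is easy to simulate. The sketch below uses illustrative parameters (not the values extracted from the experimental data): every telomere shortens by a fixed amount per division and, with some probability, telomerase adds an elongation burst; we then track the shortest telomere.

```python
import random

def simulate_shortest_telomere(n_telomeres=32, steps=1000, shorten=1.0,
                               elongate=10.0, p_elong=0.1, init=100.0,
                               seed=1):
    """Toy Markov jump sketch: at each division every telomere loses
    `shorten` units and, with probability `p_elong`, gains `elongate`
    units from telomerase. Returns the trajectory of the shortest
    telomere length (floored at zero).
    """
    rng = random.Random(seed)
    lengths = [init] * n_telomeres
    shortest = []
    for _ in range(steps):
        for i in range(n_telomeres):
            lengths[i] -= shorten
            if rng.random() < p_elong:
                lengths[i] += elongate
            lengths[i] = max(lengths[i], 0.0)
        shortest.append(min(lengths))
    return shortest
```

Sweeping `p_elong` (a proxy for telomerase concentration) and checking when the shortest telomere stays above zero mimics, in spirit, the paper's minimum-telomerase prediction.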

Graph expansion has proved to be a powerful general tool for analyzing the behavior of routing algorithms and the interconnection networks on which they run. We develop new routing algorithms and structural results for bounded-degree expander graphs. Our results are unified by the fact that they are all based upon, and extend, a body of work asserting that expanders are rich in short, disjoint paths. In particular, our work has consequences for the disjoint paths problem, multicommodity flow, and graph minor containment. We show: (i) A greedy algorithm for approximating the maximum disjoint paths problem achieves a polylogarithmic approximation ratio in bounded-degree expanders. Although our algorithm is both deterministic and on-line, its performance guarantee is an improvement over previous bounds in expanders. (ii) For a multicommodity flow problem with arbitrary demands on a bounded-degree expander, there is a (1 + ε)-optimal solution using only flow paths of polylogarithmic length. It follows that the multicommodity flow algorithm of Awerbuch and Leighton runs in nearly linear time per commodity in expanders. Our analysis is based on establishing the following: given edge weights on an expander G, one can increase some of the weights very slightly so the resulting shortest-path metric is smooth, i.e., the min-weight path between any pair of nodes uses a polylogarithmic number of edges. (iii) Every bounded-degree expander on n nodes contains every graph with O(n/log^{O(1)} n) nodes and edges as a minor.

The ordinary, low intensity activity of Stromboli volcano is sporadically interrupted by more energetic events termed, depending on their intensity, "major explosions" and "paroxysms". These short-lived energetic episodes represent a potential risk to visitors to the highly accessible summit of Stromboli. Observations made at Stromboli over the last decade have shown that the composition of gas emitted from the summit craters may change prior to such explosions, allowing the possibility that such changes may be used to forecast these potentially dangerous events. In 2008 we installed a novel, remote-controlled, open-path FTIR scanning system called Cerberus at the summit of Stromboli, with the objective of measuring gas compositions from individual vents within the summit crater terrace of the volcano with high temporal resolution and for extended periods. In this work we report the first results from the Cerberus system, collected in August-September 2009, November 2009 and May-June 2010. We find significant, fairly consistent intra-crater variability for CO2/SO2 and H2O/CO2 ratios, and relatively homogeneous SO2/HCl ratios. In general, the southwest crater is richest in CO2, and the northeast crater poorest, while the central crater is richest in H2O. It thus appears that during the measurement period the southwest crater had somewhat more direct connection to a primary, deep degassing system while the central and northeast craters reflect a slightly more secondary degassing nature, with a supplementary, shallow H2O source for the central crater, probably related to puffing activity. Such water-rich emissions from the central crater can account for the lower crystal content of its eruption products, and emphasise the role of continual magma supply to the shallowest levels of Stromboli's plumbing system. 
Our observations of heterogeneous crater gas emissions and high H2O/CO2 ratios do not agree with models of CO2-flushing, and we show that simple depressurisation during magma ascent to the surface is a more likely model for H2O loss at Stromboli. We highlight that alternative explanations other than CO2 flushing are required to explain distributions of H2O and CO2 amounts dissolved in melt inclusions. We detected fairly systematic increases in CO2/SO2 ratio some weeks prior to major explosions, and some evidence of a decrease in this ratio in the days immediately preceding the explosions, with periods of low, stable CO2/SO2 ratios between explosions otherwise. Our measurements, therefore, confirm the medium term (~ weeks) precursory increases previously observed with MultiGas instruments, and, in addition, reveal new short-term precursory decreases in CO2/SO2 ratios immediately prior to the major explosions. Such patterns, if shown to be systematic, may be of great utility for hazard management at Stromboli's summit. Our results suggest that intra-crater CO2/SO2 variability may produce short-term peaks and troughs in CO2/SO2 time series measured with in-situ MultiGas instruments, due simply to variations in wind direction.

To reduce the cost of fault management in all-optical networks, it is a promising approach to detect the degradation of optical signal quality solely at the terminal points of all-optical monitoring paths. The all-optical monitoring paths must be routed so that all single-link failures can be localized using the route information of the monitoring paths on which signal quality degradation is detected. However, route computation for all-optical monitoring paths that satisfy this condition is time-consuming. This paper proposes a procedure for deriving lower bounds on the required number of monitoring paths to localize all single-link failures, and proposes an efficient monitoring path computation method based on the derived lower bounds. The proposed method repeats the route computation for the monitoring paths until feasible routes can be found, while the assumed number of monitoring paths increases, starting from the lower bounds. With the proposed method, the minimum number of monitoring paths with the overall shortest routes can be obtained quickly by solving several small-scale integer linear programming problems when the possible terminal nodes of monitoring paths are arbitrarily given. Thus, the proposed method can minimize the required number of monitors for detecting the degradation of signal quality and the total overhead traffic volume transferred through the monitoring paths.

The melting of permafrost soils in arctic regions is one of the effects of climate change. It is recognized that climatically relevant gases are emitted during the thawing process, and that they may lead to a positive atmospheric feedback [1]. For a better understanding of these developments, a quantification of the gases emitted from the soil would be required. Extractive sensors with local point-wise gas sampling are currently used for this task, but are hampered by the complex spatial structure of the soil surface, which makes finding a representative gas sampling point difficult. For this situation, a sensor detecting 2D concentration fields of e.g. water vapor (and, in the mid-term, also methane or carbon dioxide) directly in the soil-atmosphere boundary layer of permafrost soils would be much preferred. However, it also has to be kept in mind that field measurements over long time periods in such a harsh environment require very sturdy instrumentation, preferably without the need for sensor calibration. Therefore we are currently developing a new, robust TDLAS (tunable diode laser absorption spectroscopy) spectrometer based on cheap reflective foils [2]. The spectrometer is easily transportable, requires hardly any alignment and consists of industrially available, very stable components (e.g. diode lasers and glass fibers). Our measurement technique, open-path TDLAS, allows for calibration-free measurements of absolute H2O concentrations. The static instrument for sampling open-path H2O concentrations consists of a joint sending and receiving optics at one side of the measurement path and a reflective element at the other side. The latter is very easy to align, since it is a foil usually applied for traffic purposes that retro-reflects the light to its origin even for large angles of misalignment (up to 60°).
With this instrument, we achieved normalized detection limits of up to 0.9 ppmv·m/√Hz. For absorption path lengths of up to 2 m and a time resolution of 0.2 s, we attained detection limits of 1 ppmv. Furthermore we realized a wide dynamic range covering concentrations between 200 ppmv and 12300 ppmv. The static spectrometer will now be extended to a spatially scanning TDL sensor using rapidly rotating polygon mirrors. In combination with tomographic reconstruction methods, spatially resolved 2D fields will be measured and retrieved. The aim is to capture concentration fields with at least 1 m² spatial coverage at a detection rate faster than 1 Hz. We simulated various measurements from typical concentration distributions ("phantoms") and used Algebraic Reconstruction Techniques (ART) to compute the corresponding 2D fields. The reconstructions look very promising and demonstrate the potential of the measurement method. In the presentation we will describe and discuss the optical setup of the stationary instrument and explain the concept of extending this instrument to a spatially scanning tomographic TDL instrument for soil studies. Further we present first results evaluating the capabilities of the selected ART reconstruction on tomographic phantoms. [1] E. Schuur, J. G. Vogel, K. G. Crummer, H. Lee, J. O. Sickman, and T. E. Osterkamp, "The effect of permafrost thaw on old carbon release and net carbon exchange from tundra," Nature, vol. 459, no. 7246, pp. 556-559, May 2009. [2] A. Seidel, S. Wagner, and V. Ebert, "TDLAS-based open-path laser hygrometer using simple reflective foils as scattering targets," Applied Physics B, vol. 109, no. 3, pp. 497-504, Oct. 2012.

Open-path FT-IR spectra were measured while fireworks were emitting smoke and incandescent particles into the infrared beam. These conditions were designed to simulate the appearance of smoke and explosions in a battlefield. Diethyl ether was used to simulate the vapor-phase spectra of G agents such as sarin. The measured interferograms were corrected by a high-pass filter and were rejected when interfering features were of such high frequency that they could not be removed by application of this filter. The concentration of diethyl ether was calculated correctly by partial least squares regression in the absence of fireworks but significant errors were encountered when the spectra of the oxide particles were not included in the calibration set. Target factor analysis allowed the presence of the analyte to be detected even when the incandescent particles were present in the beam. PMID:20401469

, its power management system must be charge sustaining over the drive cycle, meaning that the battery of the test. During the test cycle, the power management system is free to vary the battery SOC so problem associated with the design of the power management system because it allows deviations of battery

) [Mi3] and O(n log^{1.5} n) [CKV, Wi]), but before this work the question of obtaining an optimal case of polygonal obstacles). In more recent work, [Mi2, Mi3] have used a continuous Dijkstra algorithm report [Mi3] erroneously claimed that [BG] had proved that g(n) is bounded by O(n log n log log n

This thesis aims at the development of faster Dynamic Traffic Assignment (DTA) models to meet the computational efficiency required by real world applications. A DTA model can be decomposed into several sub-models, of which ...

problem and maximum area contained triangle problem have applications to robotics and stock-cutting. The results of this paper will appear also in [33]. Key Words and Phrases: robotics, stock-cutting

]. The rectilinear version of the MSPSA problem is called the Minimum Rectilinear Steiner Arborescence (MRSA) problem. Let G_H(N) be the induced Hanan grid graph [10] of N. It can be shown that an MSPSA of (G_H(N), N, r) is an MRSA of N. Exact methods for the MRSA problem can be classified into (1) dynamic programming, (2) integer

Open Shortest Path First (OSPF) is the most commonly used intra-domain internet routing protocol. Traffic flow is routed along shortest paths, splitting flow at nodes where several outgoing links are on shortest paths to the destination. The weights of the links, and thereby the shortest-path routes, can be changed by the network operator. The weights could be set
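
The splitting behaviour described above can be sketched in a few lines: compute shortest-path distances to the destination with Dijkstra's algorithm on the reversed graph, then at each node divide incoming flow evenly over every outgoing link that lies on a shortest path (ECMP-style splitting). The topology, weights, and function names below are invented for illustration; this is a hedged sketch, not the paper's implementation.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src):
    """Shortest distances from src in a weighted digraph adj[u] = {v: w}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def ecmp_split(adj, src, dst, demand=1.0):
    """Route `demand` units from src to dst, splitting flow evenly at each
    node over all outgoing links that lie on a shortest path to dst."""
    # Distances *to* dst: run Dijkstra on the reversed graph.
    radj = defaultdict(dict)
    for u in adj:
        for v, w in adj[u].items():
            radj[v][u] = w
    dist = dijkstra(radj, dst)
    load = defaultdict(float)              # flow placed on each directed link
    node_flow = {src: demand}
    # Sweep nodes from farthest to nearest so inflow is known before splitting.
    for u in sorted(dist, key=lambda n: -dist[n]):
        f = node_flow.get(u, 0.0)
        if f == 0.0 or u == dst:
            continue
        nexts = [v for v, w in adj.get(u, {}).items()
                 if v in dist and abs(w + dist[v] - dist[u]) < 1e-9]
        for v in nexts:                    # even (ECMP-style) split
            load[(u, v)] += f / len(nexts)
            node_flow[v] = node_flow.get(v, 0.0) + f / len(nexts)
    return dict(load)
```

On a toy topology with two equal-cost s→t paths, each link ends up carrying half the demand, which is exactly the node-level splitting the abstract describes.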

This website catalogs all the tornado paths in the United States since 1950. The tornado path data is overlaid onto a Google Maps base for easy browsing and manipulation of the map view. Clicking on individual tornados provides the user with information such as its Fujita rating, the amount of damage caused by the tornado, the size of the path that the tornado made, and the length of time the tornado was on the ground.

We report trace-gas emission factors from three pine-understory prescribed fires in South Carolina, US measured during the fall of 2011. The fires were more intense than many prescribed burns because the fuels included mature pine stands that had not been subjected to prescribed fire in decades and that were lit following an extended drought. The emission factors were measured with a fixed open-path Fourier transform infrared (OP-FTIR) system that was deployed on the fire control lines. We compare these emission factors to those measured with a roving, point sampling, land-based FTIR and an airborne FTIR that were deployed on the same fires. We also compare to emission factors measured by a similar OP-FTIR system deployed on savanna fires in Africa. The data suggest that the method used to sample smoke can strongly influence the relative abundance of the emissions that are observed. The majority of the fire emissions were lofted in the convection column and they were sampled by the airborne FTIR along with the downwind chemistry. The roving, ground-based, point sampling FTIR measured the contribution of actively located individual residual smoldering combustion fuel elements scattered throughout the burn site. The OP-FTIR provided a ~30 m path-integrated sample of emissions transported to the fixed path via complex ground-level circulation. The OP-FTIR typically probed two distinct combustion regimes, "flaming-like" (immediately after adjacent ignition and before the adjacent plume achieved significant vertical development) and "smoldering-like." These two regimes are denoted "early" and "late", respectively. The emission factors from all three systems were plotted versus modified combustion efficiency and for some species (e.g. CH4 and CH3OH) they fit a single trend, suggesting that the different emission factors for these species were mainly due to the specific mix of flaming and smoldering that each system sampled.
For other species, the different fuels sampled also likely contributed to platform differences in emission factors. The path-integrated sample of the ground-level smoke layer adjacent to the fire provided by the OP-FTIR also provided our best estimate of fire-line exposure to smoke for wildland fire personnel. We provide a table of estimated fire-line exposures for numerous known air toxics based on synthesizing results from several studies. Our data suggest that peak exposures are more likely to challenge permissible exposure limits for wildland fire personnel than shift-average (8 h) exposures.

We report trace-gas emission factors from three pine-understory prescribed fires in South Carolina, US measured during the fall of 2011. The fires were more intense than many prescribed burns because the fuels included mature pine stands that had not been subjected to prescribed fire in decades and that were lit following an extended drought. Emission factors were measured with a fixed open-path Fourier transform infrared (OP-FTIR) system that was deployed on the fire control lines. We compare these emission factors to those measured with a roving, point sampling, land-based FTIR and an airborne FTIR deployed on the same fires. We also compare to emission factors measured by a similar OP-FTIR system deployed on savanna fires in Africa. The data suggest that the method used to sample smoke can strongly influence the relative abundance of the emissions that are observed. The majority of fire emissions were lofted in the convection column and were sampled by the airborne FTIR. The roving, ground-based, point sampling FTIR measured the contribution of individual residual smoldering combustion fuel elements scattered throughout the burn site. The OP-FTIR provided a ~ 30 m path-integrated sample of emissions transported to the fixed path via complex ground-level circulation. The OP-FTIR typically probed two distinct combustion regimes, "flaming-like" (immediately after adjacent ignition and before the adjacent plume achieved significant vertical development) and "smoldering-like." These two regimes are denoted "early" and "late", respectively. The path-integrated sample of the ground-level smoke layer adjacent to the fire from the OP-FTIR provided our best estimate of fire-line exposure to smoke for wildland fire personnel. We provide a table of estimated fire-line exposures for numerous known air toxics based on synthesizing results from several studies.
Our data suggest that peak exposures are more likely to challenge permissible exposure limits for wildland fire personnel than shift-average (8 h) exposures.
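
The modified combustion efficiency used to organize the emission factors above is conventionally defined, in molar units, as MCE = ΔCO2 / (ΔCO2 + ΔCO), where the Δ quantities are excess (background-subtracted) mixing ratios. A minimal helper makes the flaming/smoldering interpretation concrete; the 0.9 threshold quoted in the comment is a common rule of thumb, not a value from these papers.

```python
def modified_combustion_efficiency(d_co2, d_co):
    """MCE = ΔCO2 / (ΔCO2 + ΔCO), with excess (background-subtracted)
    mixing ratios of CO2 and CO in the same molar units. Values near 1
    indicate flaming combustion; values below roughly 0.9 indicate a
    substantial smoldering contribution."""
    return d_co2 / (d_co2 + d_co)
```

For example, an excess of 90 ppmv CO2 alongside 10 ppmv CO gives an MCE of 0.9, right at the conventional flaming/smoldering boundary.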

We used an airborne pulsed integrated path differential absorption lidar to make spectroscopic measurements of the pressure-induced line broadening and line center shift of atmospheric carbon dioxide at the 1572.335 nm absorption line. We scanned the lidar wavelength over 13 GHz (110 pm) and measured the absorption lineshape at 30 discrete wavelengths in the vertical column between the aircraft and ground. A comparison of our measured absorption lineshape to calculations based on HIgh-resolution TRANsmission molecular absorption database shows excellent agreement with the peak optical depth accurate to within 0.3%. Additionally, we measure changes in the line center position to within 5.2 MHz of calculations and the absorption linewidth to within 0.6% of calculations. These measurements highlight the high precision of our technique, which can be applied to suitable absorption lines of any atmospheric gas.

This paper evaluates three inter-domain redundancy path computation methods based on PCE (Path Computation Element). Some inter-domain paths carry traffic that must be assured of high quality and high reliability transfer such as telephony over IP and premium virtual private networks (VPNs). It is, therefore, important to set inter-domain redundancy paths, i.e. primary and secondary paths. The first scheme utilizes an existing protocol and the basic PCE implementation. It does not need any extension or modification. In the second scheme, PCEs make a virtual shortest path tree (VSPT) considering the candidates of primary paths that have corresponding secondary paths. The goal is to reduce blocking probability; corresponding secondary paths may be found more often after a primary path is decided; no protocol extension is necessary. In the third scheme, PCEs make a VSPT considering all candidates of primary and secondary paths. Blocking probability is further decreased since all possible candidates are located, and the sum of primary and secondary path cost is reduced by choosing the pair with minimum cost among all path pairs. Numerical evaluations show that the second and third schemes offer only a few percent reduction in blocking probability and path pair total cost, while the overheads imposed by protocol revision and increase of the amount of calculation and information to be exchanged are large. This suggests that the first scheme, the most basic and simple one, is the best choice.

The Kentucky Department for Environmental Protection has been developing on-site monitoring capability for the measurement of air pollutants. The department has purchased a mobile laboratory equipped with a GC/MS for point monitoring and a long-path Fourier transform infrared (FT-IR) remote sensor unit for monitoring air pollutants at different locations in the State. Prior to deploying the FT-IR instrument in the field, the instrument has been evaluated for precision and accuracy with 15 certified gases (CO, NO, NH3, COS, CS2, SO2, (CH3)2S, acetone, benzene, CH3OH, CH4, CCl4, CCl3H, C2H5OH, and H2S) against the vendor provided calibration spectra by using a 15 cm quality control internal cell. Results of this study are presented. Some other studies include the cases of strong spectral overlaps and structured spectral features. Results of some short-term field study at Calvert City, Western Kentucky are also presented.

We demonstrate that a generalized nonlinear Schroedinger equation (NSE), which includes dispersion of the intensity-dependent group velocity, allows for exact solitary solutions. In the limit of a long pulse duration, these solutions naturally converge to a fundamental soliton of the standard NSE. In particular, the peak pulse intensity times squared pulse duration is constant. For short durations, this scaling gets violated and a cusp of the envelope may be formed. The limiting singular solution determines then the shortest possible pulse duration and the largest possible peak power. We obtain these parameters explicitly in terms of the parameters of the generalized NSE.

Since the delay of a circuit is determined by the delay of its longest sensitizable paths (such paths are called critical paths), the problem of estimating the delay of a circuit is called the critical path problem. One important aspect of the critical path problem is to decide whether a path is sensitizable. A framework which allows various previously proposed path

Ground-based optical remote sensing has become an essential technology for quantifying pollutant or greenhouse gas (GHG) emissions from point or area sources and for the validation of airborne or satellite remote sensing data. Extensive studies have shown the capability of both ground and airborne surveys in meeting the necessary requirements for large-scale monitoring programs of atmospheric gas variations, e.g. in urban environments or regions with variable land use intensity. Open-path instruments (such as infrared or laser spectrometers) that can rapidly scan in ambient air over significant distances are especially useful tools when it comes to detecting any GHG concentration variations (e.g. carbon dioxide CO2, nitrous oxide N2O, methane CH4) that are above normal background levels. Fourier-transform infrared spectroscopy has proven to be a powerful and non-invasive technique that can be used for online monitoring of fugitive emissions for industrial, environmental and health applications. We applied ground-based OP-FTIR spectroscopy as part of a hierarchical monitoring concept to investigate path-averaged atmospheric composition on a large scale, in terms of identifying areas with higher emission rates that subsequently require further detailed meso-scale investigations. A mobile passive and a bistatic active OP-FTIR spectrometer system (Bruker) were installed and a survey of column abundances of CO2 and several other trace gases was performed, allowing a maximum spatial coverage area of several square km to be mapped. In this presentation, we show results of a feasibility study investigating various scenarios (such as a Central European urban region, an agricultural landscape and a natural CO2 degassing area). The data were analysed and compared with accompanying in-situ geophysical, soil gas and micro-meteorological investigation results. 
Here, we present the significant spatial and temporal variability of CO2 emissions related to local anomalies, temporal events, and/or any correlations with rapidly changing environmental conditions.

Most mobile robot path planning aims to reach a predetermined goal along the shortest path while avoiding obstacles. This paper is a survey of path planning algorithms from current research and existing systems for Unmanned Ground Vehicles (UGVs), and of the challenges these vehicles face in becoming intelligent autonomous robots. The focus is a set of short reviews of individual papers on UGVs in known environments. Methods and algorithms in path planning for autonomous robots are discussed. From the reviews, we find that the proposed algorithms are appropriate for particular cases such as single or multiple obstacles, static or moving obstacles, and optimal shortest paths. This paper also describes pros and cons of every reviewed paper with a view toward algorithm improvements in future work.

A tunable diode laser absorption spectroscopy (TDLAS) device fiber coupled to a pair of 12.5 in. telescopes was used to study atmospheric propagation for open-path lengths of 100-1,000 meters. More than 50 rotational lines in the molecular oxygen A-band (X ³Σg⁻ → b ¹Σg⁺) transition near 760 nm were observed. Temperatures were determined from the Boltzmann rotational distribution to within 1.3 % (less than ±2 K). Oxygen concentration was obtained from the integrated spectral area of the absorption features to within 1.6 % (less than ±0.04 × 10^18 molecules/cm3). Pressure was determined independently from the pressure-broadened Voigt lineshapes to within 10 %. A Fourier transform interferometer (FTIR) was also used to observe the absorption spectra at 1 cm-1 resolution. The TDLAS approach achieves a minimum observable absorbance of 0.2 %, whereas the FTIR instrument is almost 20 times less sensitive. Applications include atmospheric characterization for high energy laser propagation and validation of monocular passive ranging.

Open source software is often seen as a path to reproducibility in computational science. In practice there are many obstacles, even when the code is freely available, but open source policies should at least lead to better quality code.

Assume that G=(V,E) is a simple undirected graph, and C is a nonempty subset of V. For every v ∈ V, we define I_r(v) = {u ∈ C : d_G(u,v) ≤ r}, where d_G(u,v) denotes the number of edges on any shortest path between u and v. If the sets I_r(v) for v ∉ C are pairwise different, and none of them is the empty set, we say that C is an
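
Based on the definition quoted above, the sets I_r(v) are easy to compute with breadth-first search. The snippet is truncated, so exactly which vertices must have distinct sets is an assumption here; the sketch below checks the condition over all vertices, and all names are illustrative.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph adj[u] = list of neighbours."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def identifying_sets(adj, C, r):
    """I_r(v) = { u in C : d_G(u, v) <= r } for every vertex v."""
    dist_from = {c: bfs_dist(adj, c) for c in C}
    return {v: frozenset(c for c in C
                         if dist_from[c].get(v, float("inf")) <= r)
            for v in adj}

def is_valid_code(adj, C, r):
    """True iff the sets I_r(v) are all nonempty and pairwise distinct
    (checked over all vertices; the truncated definition may restrict this)."""
    vals = list(identifying_sets(adj, C, r).values())
    return all(vals) and len(set(vals)) == len(vals)
```

On the path graph 1-2-3, the code C = {1, 3} with r = 1 yields I(1) = {1}, I(2) = {1, 3}, I(3) = {3}, which are nonempty and distinct, whereas C = {2} fails because every vertex gets the same set {2}.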

Pokémon Cards and the Shortest Common Superstring Mark Stamp Austin E Stamp June 12, 2003 Abstract Evidence is presented that certain sequences of Pokémon cards are determined by selecting consecutive (SCS), i.e., the shortest string that contains each of the Pokémon card sequences as a consecutive

Savanna fires contribute approximately 40-50% of total global annual biomass burning carbon emissions. Recent comparisons of emission factors from different savanna regions have highlighted the need for a regional approach to emission factor development, and better assessment of the drivers of the temporal and spatial variation in emission factors. This paper describes the results of open-path Fourier Transform Infrared (OP-FTIR) spectroscopic field measurements at twenty-one fires occurring in the tropical savannas of the Northern Territory, Australia, within different vegetation assemblages and at different stages of the dry season. Spectra of infrared light passing through a long (22-70 m) open-path through ground-level smoke released from these fires were collected using an infrared lamp and a field-portable FTIR system. The IR spectra were used to retrieve the mole fractions of fourteen different gases present within the smoke, and these measurements used to calculate the emission ratios and emission factors of the various gases emitted by the burning. Only a handful of previous emission factor measures are available specifically for the tropical savannas of Australia and here we present the first reported emission factors for methanol, acetic acid, and formic acid for this biome. Given the relatively large sample size, it was possible to study the potential causes of the within-biome variation of the derived emission factors. We find that the emission factors vary substantially between different savanna vegetation assemblages; with a majority of this variation being mirrored by variations in the modified combustion efficiency (MCE) of different vegetation classes. 
We conclude that a significant majority of the variation in the emission factor for trace gases can be explained by MCE, irrespective of vegetation class, as illustrated by variations in the calculated methane emission factor for different vegetation classes using data subsetted by different combustion efficiencies. Therefore, the selection of emission factors for emissions modelling purposes need not necessarily require detailed fuel type information, if data on MCE (e.g. from future spaceborne total column measurements) or a correlated variable were available. From measurements at twenty-one fires, we recommend the following emission factors for Australian tropical savanna fires (in grams of gas emitted per kilogram of dry fuel burned) which are our mean measured values: 1674 g kg-1 of carbon dioxide; 87 g kg-1 of carbon monoxide; 2.1 g kg-1 of methane; 0.11 g kg-1 of acetylene; 0.49 g kg-1 of ethylene; 0.08 g kg-1 of ethane; 1.57 g kg-1 of formaldehyde; 1.06 g kg-1 of methanol; 1.54 g kg-1 of acetic acid; 0.16 g kg-1 of formic acid; 0.53 g kg-1 of hydrogen cyanide; and 0.70 g kg-1 of ammonia.

Savanna fires contribute approximately 40-50% of total global annual biomass burning carbon emissions. Recent comparisons of emission factors from different savanna regions have highlighted the need for a regional approach to emission factor development, and better assessment of the drivers of the temporal and spatial variation in emission factors. This paper describes the results of open-path Fourier transform infrared (OP-FTIR) spectroscopic field measurements at 21 fires occurring in the tropical savannas of the Northern Territory, Australia, within different vegetation assemblages and at different stages of the dry season. Spectra of infrared light passing through a long (22-70 m) open-path through ground-level smoke released from these fires were collected using an infrared lamp and a field-portable FTIR system. The IR spectra were used to retrieve the mole fractions of 14 different gases present within the smoke, and these measurements used to calculate the emission ratios and emission factors of the various gases emitted by the burning. Only a handful of previous emission factor measures are available specifically for the tropical savannas of Australia and here we present the first reported emission factors for methanol, acetic acid, and formic acid for this biome. Given the relatively large sample size, it was possible to study the potential causes of the within-biome variation of the derived emission factors. We find that the emission factors vary substantially between different savanna vegetation assemblages; with a majority of this variation being mirrored by variations in the modified combustion efficiency (MCE) of different vegetation classes. 
We conclude that a significant majority of the variation in the emission factor for trace gases can be explained by MCE, irrespective of vegetation class, as illustrated by variations in the calculated methane emission factor for different vegetation classes using data sub-set by different combustion efficiencies. Therefore, the selection of emission factors for emissions modelling purposes need not necessarily require detailed fuel type information, if data on MCE (e.g. from future spaceborne total column measurements) or a correlated variable were available. From measurements at 21 fires, we recommend the following emission factors for Australian tropical savanna fires (in grams of gas emitted per kilogram of dry fuel burned), which are our mean measured values: 1674 ± 56 g kg-1 of carbon dioxide; 87 ± 33 g kg-1 of carbon monoxide; 2.1 ± 1.2 g kg-1 of methane; 0.11 ± 0.04 g kg-1 of acetylene; 0.49 ± 0.22 g kg-1 of ethylene; 0.08 ± 0.05 g kg-1 of ethane; 1.57 ± 0.44 g kg-1 of formaldehyde; 1.06 ± 0.87 g kg-1 of methanol; 1.54 ± 0.64 g kg-1 of acetic acid; 0.16 ± 0.07 g kg-1 of formic acid; 0.53 ± 0.31 g kg-1 of hydrogen cyanide; and 0.70 ± 0.36 g kg-1 of ammonia. In a companion paper, similar techniques are used to characterise the emissions from Australian temperate forest fires.
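
Emission factors of this kind are commonly derived with the carbon mass-balance method: EF_X = F_c × 1000 × (MW_X / 12.011) × ER_X / Σ_j (NC_j × ER_j), where F_c is the fuel carbon mass fraction, ER_j the excess molar mixing ratio of carbonaceous species j, and NC_j its carbon atoms per molecule. The sketch below is a hedged illustration of that formula only; the F_c = 0.5 default and the two-species mix are invented, not values from this paper.

```python
ATOMIC_MASS_C = 12.011  # g/mol

def emission_factor(species, mw, nc, excess, fuel_carbon_frac=0.5):
    """Carbon mass-balance emission factor in g of species per kg dry fuel.

    mw:     molar mass (g/mol) per species
    nc:     carbon atoms per molecule, for the carbonaceous species only
    excess: excess (background-subtracted) molar mixing ratios, any common unit
    fuel_carbon_frac: assumed carbon mass fraction of the fuel (~0.5 is a
    typical literature assumption, not this paper's measured value)."""
    total_carbon = sum(nc[s] * excess[s] for s in nc)
    return (fuel_carbon_frac * 1000.0 * (mw[species] / ATOMIC_MASS_C)
            * excess[species] / total_carbon)
```

With only CO2 and CO in the plume in a 90:10 molar ratio, the CO2 emission factor comes out near 1650 g kg-1, the same order as the savanna values tabulated above.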

A set of differential operators acting by continuous deformations on path dependent functionals of open and closed curves is introduced. Geometrically, these path operators are interpreted as infinitesimal generators of curves in the base manifold of the gauge theory. They furnish a representation with the action of the group of loops having a fundamental role. We show that the path derivative, which is covariant by construction, satisfies the Ricci and Bianchi identities. Also, we provide a geometrical derivation of covariant Taylor expansions based on particular deformations of open curves. The formalism includes, as special cases, other path dependent operators such as end point derivatives and area derivatives.

Open-path Fourier transform infrared (OP-FTIR) spectroscopy is a new air monitoring technique that can be used to measure concentrations of air contaminants in real or near-real time. OP-FTIR spectroscopy has been used to monitor workplace gas and vapor exposures, emissions from hazardous waste sites, and to track emissions along fence lines. This paper discusses a statistical process control technique that can be used with air monitoring data collected with an OP-FTIR spectrometer to detect departures from normal operating conditions in the workplace or along a fence line. Time series data, produced by plotting consecutive air sample concentrations in time, were analyzed. Autocorrelation in the time series data was removed by fitting dynamic models. Control charts were used with the residuals of the model fit data to determine if departures from defined normal operating conditions could be rapidly detected. Shewhart and exponentially weighted moving average (EWMA) control charts were evaluated for use with data collected under different room air flow and mixing conditions. Under rapidly changing conditions the Shewhart control chart was able to detect a leak in a simulated process area. The EWMA control chart was found to be more sensitive to drifts and slowly changing concentrations in air monitoring data. The time series and statistical process control techniques were also applied to data obtained during a field study at a chemical plant. A production area of an acrylonitrile, 1,3-butadiene, and styrene (ABS) polymer process was monitored in near-real time. Decision logic based on the time series and statistical process control techniques introduced suggests several applications in workplace and environmental monitoring. These applications might include signaling of an alarm or warning, increasing levels of worker respiratory protection, or evacuation of a community, when gas and vapor concentrations are determined to be out-of-control. PMID:8012765
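
The chart logic described above, remove autocorrelation with a dynamic model, then chart the residuals with an EWMA, can be sketched as follows. The AR(1) fit, λ = 0.2, and 3σ control limits are illustrative textbook choices, not the study's exact parameters, and the function names are invented.

```python
def ar1_residuals(x, phi=None):
    """Remove lag-1 autocorrelation: fit x[t] = phi * x[t-1] + e[t] by
    least squares and return (residuals, phi)."""
    if phi is None:
        num = sum(a * b for a, b in zip(x[1:], x[:-1]))
        den = sum(a * a for a in x[:-1])
        phi = num / den
    return [x[t] - phi * x[t - 1] for t in range(1, len(x))], phi

def ewma_alarms(resid, lam=0.2, L=3.0):
    """EWMA chart on residuals: z[t] = lam*r[t] + (1-lam)*z[t-1]; alarm
    when |z - mean| exceeds L times the asymptotic EWMA standard deviation
    sigma_z = sigma * sqrt(lam / (2 - lam)). Returns alarm indices."""
    mean = sum(resid) / len(resid)
    var = sum((r - mean) ** 2 for r in resid) / (len(resid) - 1)
    sigma_z = (var * lam / (2.0 - lam)) ** 0.5
    z, alarms = mean, []
    for t, r in enumerate(resid):
        z = lam * r + (1.0 - lam) * z
        if abs(z - mean) > L * sigma_z:
            alarms.append(t)
    return alarms
```

Because the EWMA accumulates small shifts, a sustained drift trips the chart a few samples after it starts, which matches the paper's finding that EWMA charts respond well to drifts and slowly changing concentrations.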

Open-path tunable diode-laser absorption spectroscopy (OP-TDLAS) is a promising technique to detect low concentrations of possible biogenic gases on Mars. This technique finds the concentration of a gas by measuring the amount of laser light absorbed by gaseous molecules at a specific wavelength. One of the major factors limiting sensitivity in TDLAS systems operating at low modulation frequencies is 1/f noise. 1/f noise is minimized in many spectroscopy systems by the use of high frequency modulation techniques. However, these techniques require complex instruments that include reference cells and other devices for calibration, making them relatively large and bulky. We are developing a spectroscopy system for space applications that requires small, low mass and low power instrumentation, making the high frequency techniques unsuitable. This paper explores a new technique using two laser beams to reduce the effect of 1/f noise and increase the signal strength for measurements made at lower frequencies. The two lasers are excited at slightly different frequencies. An algorithm is used to estimate the noise in the second harmonic from the combined spectra of both lasers. This noise is subtracted from the signal to give a more accurate measurement of gas concentration. The error in estimation of 1/f noise is negligible as it corresponds to the noise level at much higher frequencies. Simulation results using ammonia gas and two lasers operating at 500 and 510 Hz respectively show that this technique is able to decrease the error in estimation of gas concentration to 1/6 of its normal value.

Biomass burning releases trace gases and aerosol particles that significantly affect the composition and chemistry of the atmosphere. Australia contributes approximately 8% of gross global carbon emissions from biomass burning, yet there are few previous measurements of emissions from Australian forest fires available in the literature. This paper describes the results of field measurements of trace gases emitted during hazard reduction burns in Australian temperate forests using open-path Fourier transform infrared spectroscopy. In a companion paper, similar techniques are used to characterise the emissions from hazard reduction burns in the savanna regions of the Northern Territory. Details of the experimental methods are explained, including both the measurement set-up and the analysis techniques employed. The advantages and disadvantages of different ways to estimate whole-fire emission factors are discussed and a measurement uncertainty budget is developed. Emission factors for Australian temperate forest fires are measured locally for the first time for many trace gases. Where ecosystem-relevant data are required, we recommend the following emission factors for Australian temperate forest fires (in grams of gas emitted per kilogram of dry fuel burned) which are our mean measured values: 1620 ± 160 g kg-1 of carbon dioxide; 120 ± 20 g kg-1 of carbon monoxide; 3.6 ± 1.1 g kg-1 of methane; 1.3 ± 0.3 g kg-1 of ethylene; 1.7 ± 0.4 g kg-1 of formaldehyde; 2.4 ± 1.2 g kg-1 of methanol; 3.8 ± 1.3 g kg-1 of acetic acid; 0.4 ± 0.2 g kg-1 of formic acid; 1.6 ± 0.6 g kg-1 of ammonia; 0.15 ± 0.09 g kg-1 of nitrous oxide and 0.5 ± 0.2 g kg-1 of ethane.

Students follow several pathways using anatomical directions on a simulated "body" produced from a copy of a school building's fire evacuation plan. The main hallways are designated as major blood vessels and the various areas of the school, the head, chest, abdomen, etc. Students complete several pathways using anatomical terms as directions. For example, one of my paths begins, "Ex- ot-, ad- superior, ecto- derm-, peri-frontal, circum- rhino-, " which loosely means, exit the ear, go to the superior region, outside the skin, around the frontal region, around the nose. At the end of each path I leave a clue that lets me know the students actually made it. The combined clues form a sentence.

Atmospheric ammonia (NH3) is an important fine aerosol gas-phase precursor, with implications for regional air quality and climate change. Atmospheric methane (CH4) is an important greenhouse gas, with high uncertainties in the partitioning of various emission sources. Ammonia and methane agricultural emissions are highly variable in space and time and are highly uncertain, with a lack of widespread, in-situ measurements. We characterize the spatial variability of dairy livestock emissions by performing high resolution (5 Hz), in-situ, on-road mobile measurements of NH3, CH4, CO2, N2O, CO and H2O simultaneously with open-path sensors mounted on a passenger vehicle. This suite of multiple trace gas measurements allows for emission ratio calculations and separation of agricultural, petrochemical and combustion emission signatures. Mobile measurements were performed in the Tulare County dairy farm region (~120 dairy farms sampled downwind) in the Central Valley, California during NASA DISCOVER-AQ in winter 2013. We calculate the ΔNH3/ΔCH4 and ΔNH3/ΔCO2 emission ratios for each dairy farm sampled downwind. Emission plumes from individual farms are isolated based on known dairy farm locations and high resolution (1 km) surface wind field simulations. Background concentrations are subtracted to calculate the emission ratios. We find high spatial variability of ammonia and methane concentrations, with localized maxima of >1 ppmv NH3 downwind of individual dairy farms. The spatial extent of individual farm emission plumes is evaluated for NH3, CH4 and CO2, which all show well-defined enhancements localized to the dairy farms near the roadside (typical sampling proximity of ~50 m). The NH3 concentrations are correlated with the distance from each dairy farm. The observed median concentration within 100 m downwind of the dairy farms is 63 ppbv NH3, with the 95th percentile at 417 ppbv NH3 and decreases to background conditions at ~500 m distance downwind. 
The diurnal variability of NH3 and CH4 background concentrations at the same locations sampled on multiple days is also evaluated, including a case study of a strong morning temperature inversion. Finally, we find the NH3/CH4 ratios at the sub-farm scale vary by at least a factor of two due to spatially heterogeneous farming practices. These results highlight the need for widespread, in-situ spatial and temporal sampling of agricultural regions to further characterize these heterogeneous emissions. Future analyses will inform emission inventories and regional air quality modeling efforts.
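
A background-subtracted emission ratio such as ΔNH3/ΔCH4 can be sketched as below: subtract a background estimate from each species, then take the slope of a least-squares fit of one enhancement against the other, forced through the origin. The median-background estimate and through-origin fit are plausible illustrative choices, not necessarily this study's exact procedure.

```python
def _median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def emission_ratio(x_plume, y_plume, x_bg, y_bg):
    """Excess-over-background emission ratio ΔY/ΔX.

    x_plume, y_plume: in-plume mixing ratios of the two species
    x_bg, y_bg:       background samples for each species
    Returns the slope of an ordinary least-squares fit of ΔY on ΔX
    constrained through the origin: sum(Δx Δy) / sum(Δx²)."""
    dx = [x - _median(x_bg) for x in x_plume]
    dy = [y - _median(y_bg) for y in y_plume]
    return sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)
```

If one species' enhancement is exactly twice the other's at every in-plume sample, the fitted ratio is 2, as expected for a common-source plume.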

For the next two exercises, we will break up into groups of four. Each member of the group will represent one of four waves leaving the source: direct wave, ground roll, reflected wave, and head wave. All four "waves" will leave the source at the same time and travel at a particular speed and path as directed by the instructor. ALL students will record the arrival time of each "wave" at each geophone until all 12 geophones have been used. Plot arrival time versus distance for each "wave". Do any of the time versus distance curves fit a straight line? Do any of them not fit a straight line? Explain why they do or don't fit a straight line.

is formed that connects the food sources through the shortest route. When the light- avoiding organism risk as the experimentally measurable rate of light-avoiding movement, the minimum-risk path maintaining sufficient connectivity to permit intracellular communication. Such behavior in a primitive

Background: Despite evidence that environmental features are related to physical activity, the association between the built environment and bicycling for transportation remains a poorly investigated subject. The aim of the study was to improve our understanding of the environmental determinants of bicycling as a means of transportation in urban European settings by comparing the spatial differences between the routes actually used by bicyclists and the shortest possible routes. Methods: In the present study we examined differences in the currently used and the shortest possible bicycling routes, with respect to distance, type of street, and environmental characteristics, in the city of Graz, Austria. The objective measurement methods of a Global Positioning System (GPS) and a Geographic Information System (GIS) were used. Results: Bicycling routes actually used were significantly longer than the shortest possible routes. Furthermore, the following attributes were also significantly different between the used route compared to the shortest possible route: Bicyclists often used bicycle lanes and pathways, flat and green areas, and they rarely used main roads and crossings. Conclusion: The results of the study support our hypothesis that bicyclists prefer bicycle pathways and lanes instead of the shortest possible routes. This underlines the importance of a well-developed bicycling infrastructure in urban communities. PMID:24597725

In this paper, we investigate the diameter and average path length (APL) of the Sierpinski pentagon based on its recursive construction and self-similar structure. We find that the diameter of the Sierpinski pentagon is just the shortest path length between two nodes of generation 0. Deriving and solving the linear homogeneous recurrence relation that the diameter satisfies, we obtain a rigorous solution for the diameter. We also obtain an approximate solution for the APL of the Sierpinski pentagon; both diameter and APL grow approximately as a power-law function of the network order $N(t)$, with exponent equal to $\frac{\ln(1+\sqrt{3})}{\ln(5)}$. Although the solution for the APL is approximate, it can be trusted because we have calculated all terms of the APL accurately except for the compensation ($\Delta_{t}$) of the total distance between non-adjacent branches ($\Lambda_t^{1,3}$), which is obtained approximately by least-squares curve fitting. The compensation ($\Delta_{t}$) is only a small part of the total distance between non-adjacent branches ($\Lambda_t^{1,3}$) and has little effect on the APL. Furthermore, using the data obtained by iteration to test the fitting results, we find the relative error for $\Delta_{t}$ is less than $10^{-7}$, hence the approximate solution for the average path length is almost exact.

Under the Open Shortest Path First (OSPF) protocol, traffic flow in an Internet Protocol (IP) network is routed on the shortest paths between each source and destination. The shortest path is calculated based on pre-assigned weights on the network links. The OSPF weight setting problem is to determine a set of weights such that, if the flow is routed based
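
Under OSPF, each router runs a shortest-path computation (typically Dijkstra's algorithm) over the configured link weights. A minimal sketch of that computation; the toy network and its weights are illustrative, not taken from the paper:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over non-negative link weights.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network with OSPF-style weights on directed links
net = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
       "C": [("D", 1)], "D": []}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Changing a single link weight can reroute many source-destination pairs at once, which is what makes the weight setting problem hard.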

Data aggregation is a fundamental task in multihop wireless sensor networks. Minimum-latency aggregation scheduling (MLAS) seeks to minimize the number of scheduled time slots to perform an aggregation. In this paper, we present the first work on a solvable mathematical formulation of the MLAS problem. The optimal solution of small example networks suggests that an optimal scheduling can

In a double slit interference experiment, the wave function at the screen with both slits open is not exactly equal to the sum of the wave functions with the slits individually open one at a time. The three scenarios represent three different boundary conditions and as such, the superposition principle should not be applicable. However, most well-known text books in quantum mechanics implicitly and/or explicitly use this assumption that is only approximately true. In our present study, we have used the Feynman path integral formalism to quantify contributions from nonclassical paths in quantum interference experiments that provide a measurable deviation from a naive application of the superposition principle. A direct experimental demonstration for the existence of these nonclassical paths is difficult to present. We find that contributions from such paths can be significant and we propose simple three-slit interference experiments to directly confirm their existence. PMID:25279612

An integrative overview of the algorithmic characteristics of three well-known polynomial-time heuristics for the undirected Steiner minimum tree problem: the shortest path heuristic (SPH), the distance network heuristic (DNH), and the average distance heuristic (ADH) is given. The performance of these single-pass heuristics (and some variants) is compared and contrasted with several heuristics based on repetitive applications of the SPH. It is shown that two of these repetitive
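
The SPH grows a tree from one terminal and repeatedly attaches the nearest remaining terminal along a shortest path. A sketch under simple assumptions (unweighted adjacency-list input; `sph_steiner` and the toy graph are illustrative names, not from the paper):

```python
import heapq

def sph_steiner(graph, terminals):
    """Shortest-path heuristic (SPH) sketch for the Steiner tree problem:
    start from one terminal and repeatedly attach the closest remaining
    terminal along a shortest path to the tree built so far.
    graph: dict node -> list of (neighbor, weight)."""
    def multi_source_shortest(sources):
        # Dijkstra from the whole tree at once (distance 0 at every tree node)
        dist = {s: 0 for s in sources}
        prev = {}
        heap = [(0, s) for s in sources]
        heapq.heapify(heap)
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    prev[v] = u
                    heapq.heappush(heap, (d + w, v))
        return dist, prev

    tree_nodes = {terminals[0]}
    tree_edges = set()
    remaining = set(terminals[1:])
    while remaining:
        dist, prev = multi_source_shortest(tree_nodes)
        t = min(remaining, key=lambda x: dist.get(x, float("inf")))
        node = t
        while node not in tree_nodes:  # walk the path back into the tree
            tree_edges.add(frozenset((node, prev[node])))
            tree_nodes.add(node)
            node = prev[node]
        remaining.discard(t)
    return tree_edges

g = {"A": [("B", 1)], "B": [("A", 1), ("C", 1)],
     "C": [("B", 1), ("D", 1)], "D": [("C", 1)]}
print(sph_steiner(g, ["A", "D"]))  # the three edges of the A-B-C-D path
```

The repetitive variants discussed in the paper rerun this single pass from different starting terminals and keep the cheapest tree found.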

Approximation of greedy algorithms for Max-ATSP, Maximal Compression, and Shortest Cyclic Cover: we study the approximation of the associated greedy algorithms for these problems, and show that they reach a ratio of 1. For these problems, greedy algorithms are easier to implement and often faster than existing approximation algorithms

, these can be noisy (that is, have a high Monte Carlo error). We derive an optimal weighting strategy with lower Monte Carlo error than empirical shortest intervals. We implement the new method in an R package (SPIn) such as the normal and gamma but are more difficult to compute from simulations. We would like to go back

The absence of telomerase in many eukaryotes leads to the gradual shortening of telomeres, causing replicative senescence. In humans, this proliferation barrier constitutes a tumor suppressor mechanism and may be involved in cellular aging. Yet the heterogeneity of the senescence phenotype has hindered the understanding of its onset. Here we investigated the regulation of telomere length and its control of senescence heterogeneity. Because the length of the shortest telomeres can potentially regulate cell fate, we focus on their dynamics in Saccharomyces cerevisiae. We developed a stochastic model of telomere dynamics built on the protein-counting model, where an increasing number of protein-bound telomeric repeats shift telomeres into a nonextendable state by telomerase. Using numerical simulations, we found that the length of the shortest telomere is well separated from the length of the others, suggesting a prominent role in triggering senescence. We evaluated this possibility using classical genetic analyses of tetrads, combined with a quantitative and sensitive assay for senescence. In contrast to mitosis of telomerase-negative cells, which produces two cells with identical senescence onset, meiosis is able to segregate a determinant of senescence onset among the telomerase-negative spores. The frequency of such segregation is in accordance with this determinant being the length of the shortest telomere. Taken together, our results substantiate the length of the shortest telomere as being the key genetic marker determining senescence onset in S. cerevisiae. PMID:23733785

We investigate the statistics of extremal paths (both the shortest and the longest) from the root to the bottom of a Cayley tree. The lengths of the edges are assumed to be independent identically distributed random variables drawn from a distribution rho(l). Besides, the number of branches from any node is also random. Exact results are derived for arbitrary distribution rho(l). In particular, for the binary 0,1 distribution rho(l) = p delta(l,1) + (1-p) delta(l,0), we show that as p increases, the minimal length undergoes an unbinding transition from a "localized" phase to a "moving" phase at the critical value p = p_c = 1 - b^(-1), where b is the average branch number of the tree. As the height n of the tree increases, the minimal length saturates to a finite constant in the localized phase (p < p_c), but grows linearly as v_min(p)n in the moving phase (p > p_c), where the velocity v_min(p) is determined via a front selection mechanism. At p = p_c, the minimal length grows with n in an extremely slow double-logarithmic fashion. The length of the maximal path, on the other hand, increases linearly as v_max(p)n for all p. The maximal and minimal velocities satisfy a general duality relation, v_min(p) + v_max(1-p) = 1, which is also valid for directed paths on finite-dimensional lattices. PMID:11138046
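
The unbinding transition can be probed numerically. The sketch below samples minimal root-to-leaf path lengths on a fixed b-ary tree (the paper also allows a random branch number, which this simplification omits); below p_c = 1 - 1/b the minimal length stays O(1), above it the length grows with the height:

```python
import random

def min_path_length(height, p, b=2):
    """Recursively sample the minimal root-to-leaf path length on a b-ary
    tree whose edge lengths are 1 with probability p and 0 otherwise."""
    if height == 0:
        return 0
    return min((1 if random.random() < p else 0) + min_path_length(height - 1, p, b)
               for _ in range(b))

random.seed(0)
# For b = 2 the critical point is p_c = 1 - 1/b = 0.5.
low = sum(min_path_length(12, 0.2) for _ in range(50)) / 50   # localized phase
high = sum(min_path_length(12, 0.9) for _ in range(50)) / 50  # moving phase
print(low, high)  # low stays near a small constant; high scales with height
```

A more careful experiment would vary the height n and fit v_min(p) from the slope, but even this crude comparison separates the two phases clearly.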

In this paper, a two step path-planning algorithm for UAVs is proposed. The algorithm generates a stealthy path through a set of enemy radar sites of known location, and provides an intuitive way to trade-off stealth versus path length. In the first step, a suboptimal rough-cut path is generated through the radar sites by constructing and searching a graph based

Emergency events have attracted global attention from governments and the public, and they can easily trigger a series of serious social problems if not supervised effectively in the dissemination process. In the Internet world, people communicate with each other and form various virtual communities based on social networks, which leads to a complex and fast information spread pattern for emergency events. This paper collects Internet data based on data acquisition and topic detection technology, analyzes the process of information spread on social networks, describes the diffusion and impact of that information from the perspective of random graphs, and finally seeks the key paths through an improved IBF algorithm. Application cases have shown that this algorithm can search the shortest spread paths efficiently, which may help us to guide and control the information dissemination of emergency events for early warning. PMID:24600323

Hard paths, soft paths or no paths? Cross-cultural perceptions of water solutions. Drew Blasco. ... to the availability of clean, safe water. In this study we examined cross-cultural preferences for soft path vs. hard path solutions: 1) How do people conceptualize water solutions (hard paths, soft paths, no paths) cross-culturally? 2) What role does development

One of the most important aspects of the modelling of musculoskeletal systems is the determination of muscle moment arms which are dependent upon the paths of the muscles. These paths are often required to wrap around passive structures that can be modelled as simple geometric shapes. A novel technique for the prediction of the paths of muscles modelled as strings when wrapping around smooth analytical surfaces is presented. The theory of geodesics is used to calculate the shortest path of the string on the surface and a smoothness constraint is used to determine the correct solutions for the string path between insertions. The application of the technique to tapered cylinders and ellipsoids is presented as an extension of previous work on right-circular cylinders and spheres. The technique is assessed with reference to a particular biomechanical scenario; string lengths and moment arms are calculated and compared with alternative approximate methods. This illustrates the potential of the technique to provide more accurate muscle moment arm predictions. PMID:18335718
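
For the special case of a right-circular cylinder, the geodesic used for such string paths has a closed form: unrolling the cylinder into a plane turns the geodesic into a straight line. An illustrative sketch of that base case, not the paper's general tapered-cylinder/ellipsoid method:

```python
import math

def cylinder_geodesic_length(radius, theta1, z1, theta2, z2):
    """Shortest path on a right-circular cylinder between two surface
    points given in cylindrical coordinates (angle, height). Unrolling
    the cylinder maps the geodesic to a straight line in the plane."""
    dtheta = (theta2 - theta1) % (2 * math.pi)
    dtheta = min(dtheta, 2 * math.pi - dtheta)   # wrap the short way around
    arc = radius * dtheta                        # circumferential offset
    return math.hypot(arc, z2 - z1)

# Quarter turn plus a unit rise on a unit-radius cylinder
print(cylinder_geodesic_length(1.0, 0.0, 0.0, math.pi / 2, 1.0))  # ≈ 1.862
```

For tapered cylinders and ellipsoids no such unrolling exists, which is why the paper integrates the geodesic equations numerically instead.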

Walden's Paths enables users of digital document collections (e.g. the Web) to exploit these documents by reusing them for previously unintended audiences in an academic setting. Authors of paths (usually educators) overlay a linear, directed meta-structure over the Web documents and recontextualize these by adding explanatory text to achieve their curricular goals. Paths do not modify the structure or content of the Web resources that they include. The creation of a path over pre-organized content (e.g. books, Web pages) to reorganize and associate related information serves to facilitate easy retrieval and communication. Walden's Paths displays the information that the path points to in conjunction with the textual annotations added by the author of the path.

Path Relaxation is a method of planning safe paths around obstacles for mobile robots. It works in two steps: a global grid search that finds a rough path, followed by a local relaxation step that adjusts each node on the path to lower the overall path cost. The representation used by Path Relaxation allows an explicit tradeoff among length of
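
The relaxation step can be sketched as a simple local search that nudges each interior waypoint toward positions of lower total cost. This toy version uses 1-D waypoints and a made-up segment cost; the original method operates on a 2-D grid:

```python
def relax_path(path, cost, iterations=50, step=0.5):
    """Local relaxation in the spirit of Path Relaxation (illustrative):
    nudge each interior waypoint toward whichever neighboring position
    lowers the total path cost. cost(p, q) prices one segment."""
    path = list(path)
    for _ in range(iterations):
        moved = False
        for i in range(1, len(path) - 1):
            best = path[i]
            best_c = cost(path[i-1], path[i]) + cost(path[i], path[i+1])
            for cand in (path[i] - step, path[i] + step):
                c = cost(path[i-1], cand) + cost(cand, path[i+1])
                if c < best_c:
                    best, best_c, moved = cand, c, True
            path[i] = best
        if not moved:
            break   # converged: no waypoint can improve locally
    return path

# A cost that penalizes being far from y = 0 (an "obstacle-free corridor")
seg_cost = lambda p, q: abs(q - p) + abs(p) + abs(q)
print(relax_path([0.0, 3.0, 0.0], seg_cost))  # [0.0, 0.0, 0.0]
```

As in the original, the grid search supplies a topologically correct rough path and relaxation only polishes it; relaxation alone can get stuck in local minima.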

Real-time applications dealing with a large amount of data often require timely service of disk I/O requests. This paper presents a new real-time disk scheduling method called Urgent Group and Shortest Seek Time First (UG-SSTF) for soft real-time systems. According to this algorithm, the urgent group is formed by identifying urgent requests awaiting the disk service in a queue, among which
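
A minimal sketch of the selection rule described above, with hypothetical request and window parameters (the actual UG-SSTF queue management is more involved):

```python
def ug_sstf(head, requests, now, urgency_window):
    """UG-SSTF-style selection (illustrative): requests whose deadlines
    fall inside the urgency window form the urgent group and are served
    first by Shortest Seek Time First; the rest wait.
    requests: list of (cylinder, deadline) tuples."""
    urgent = [r for r in requests if r[1] - now <= urgency_window]
    pool = urgent if urgent else requests
    # SSTF within the chosen pool: minimize seek distance from the head
    return min(pool, key=lambda r: abs(r[0] - head))

reqs = [(100, 50), (20, 5), (180, 8), (90, 400)]
print(ug_sstf(head=95, requests=reqs, now=0, urgency_window=10))  # (20, 5)
```

Note how the urgent request at cylinder 20 wins even though cylinder 100 is a much shorter seek; deadlines trump seek time whenever the urgent group is nonempty.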

This qualitative study describes how 12 Haudenosaunee (Six Nations Iroquois Confederacy) college graduates constructed pathways to degree completion. The participants related their experiences on this path through open-ended interviews. The pathways were found to be complex owing to their unique cultural grounding and dedication to family. The

Using both literature (a book featuring a path, such as Little Red Riding Hood) and satellite images, students will identify paths, observe and analyze them from different altitudes, and distinguish natural paths from those made by humans. Students will learn how images can inform the building, use and maintenance of paths. The URL opens to the investigation directory, with links to teacher and student materials, lesson extensions, resources, teaching tips, and assessment strategies. This is Investigation 2 of four found in the Grades K-4 Module 4 of Mission Geography. The Mission Geography curriculum integrates data and images from NASA missions with the National Geography Standards. Each of the four investigations in Module 4, while related, can be done independently. Please see Investigation 1 of this module for a two-page module overview and list of all standards addressed.

Background The doctor-patient encounter (DPE) and associated patient expectations are potential confounders in open-label randomized trials of treatment efficacy. It is therefore important to evaluate the effects of the DPE on study outcomes. Methods Four hundred participants with chronic low back pain (LBP) were randomized to four dose groups: 0, 6, 12, or 18 sessions of spinal manipulation from a chiropractor. Participants were treated three times per week for six weeks. They received light massage control at visits when manipulation was not scheduled. Treating chiropractors were instructed to have equal enthusiasm for both interventions. A path analysis was conducted to determine the effects of dose, patient expectations of treatment success, and DPE on LBP intensity (100-point scale) at the end of care (6 weeks) and primary endpoint (12 weeks). Direct, indirect, and total standardized effects (β_total) were computed. Expectations and DPE were evaluated on Likert scales. The DPE was assessed as patient-rated perception of chiropractor enthusiasm, confidence, comfort with care, and time spent. Results The DPE was successfully balanced across groups, as were baseline expectations. The principal finding was that the magnitude of the effects of DPE on LBP at 6 and 12 weeks (|β|_total = 0.22 and 0.15, p < .05) were comparable to the effects of dose of manipulation at those times (|β|_total = 0.11 and 0.12, p < .05). In addition, baseline expectations had no notable effect on follow-up LBP. Subsequent expectations were affected by LBP, DPE, and dose (p < .05). Conclusions The DPE can have a relatively important effect on outcomes in open-label randomized trials of treatment efficacy. Therefore, attempts should be made to balance the DPE across treatment groups and report degree of success in study publications. We balanced the DPE across groups with minimal training of treatment providers. Trial registration ClinicalTrials.gov NCT00376350 PMID:24410959

This paper confirms the importance of path dependency in the accumulation of firm-specific technological competencies. It shows that firms are guided by the selective logic of path dependency in their innovation processes, even if management has no part in decisions to invest in a new business idea. The research focuses on the output of bootlegging, defined as research in which

This site features an interactive applet that models the Sun's path from a geocentric view. It calculates and visualizes the position of the Sun based on latitude and time, and allows students to simulate the Sun's position and path for an hour, a day, a month or a year.

Nitrous oxide (N2O) is an important greenhouse gas with an atmospheric lifetime of ~120 years and a global warming potential ~300 times that of CO2. Atmospheric N2O concentrations have increased from ~270 ppbv during pre-industrial times to ~330 ppbv today. Anthropogenic emissions are a major source of atmospheric N2O and about half of global anthropogenic emissions are from the agricultural sector. N2O emissions from soils exhibit high spatial and temporal variability. Estimation of N2O emissions from agricultural soils is particularly challenging because N2O fluxes are affected by fertilizer type and application rates, land-use history and management, as well as soil biological activity. We studied ecosystem level N2O emissions from agricultural lands using a combination of static chamber methods and continuous N2O exchange measured by a quantum cascade laser-based, open-path analyzer coupled with an eddy-covariance system. We also compared N2O emissions between different static chamber methods, using both laboratory-based gas chromatography (GC) and an in situ quantum cascade (QC) laser for N2O analyses. Finally, we compared emissions estimated by the two static chamber methods to those estimated by eddy-covariance. We examined pre- and post-fertilization N2O fluxes from soils in two no-till continuous corn fields with distinct land-use histories: one field converted from permanent grassland (CRP-C) and the other from conventional corn-soybean rotation (AGR-C). Both fields were fertilized with ~160 kg urea-N ha-1. We compared N2O emissions from these fields to those from an unmanaged grassland (REF). In addition, we examined the potential effect of post-fertilization precipitation on N2O emissions by applying 50 mm of artificial rainfall to the static chambers at all three locations. Measurements of N2O emissions using both GC and QC laser methods with static chambers were in good agreement (R2 = 0.96).
Even though average soil N2O fluxes before fertilization were low, they still exhibited high temporal and spatial variability. Fluxes from the CRP-C site were higher than fluxes from the AGR-C site, and fluxes from the REF site were lowest, ranging from 2 - 22, 1 - 3, and ~1 g N2O-N ha-1 day-1, respectively. Post-fertilization fluxes were minor as well due to very dry soil conditions in 2012. However, after applying artificial rain, soil N2O fluxes were distinctly higher in all systems, increasing to 106 - 208 g N2O-N ha-1 day-1 at the CRP-C site, to 36 g N2O-N ha-1 day-1 at AGR-C, and to 5 g N2O-N ha-1 day-1 at the REF site. Fluxes decreased to pre-rain levels 1-2 days after wetting. This single rain event resulted in total emissions of 5, 43, and 251 g N2O-N ha-1 from REF, AGR-C, and CRP-C systems, respectively. A comparison between static chambers and the open-path method at the CRP-C system revealed similar diurnal trends in N2O fluxes and similar cumulative N2O-N emissions. Overall, we found a strong relationship between land-use history and soil N2O emissions: soils with higher organic carbon content (CRP-C) exhibited greater fluxes. In addition, we found that N2O emissions increased significantly after a post-fertilization rain event, accounting for a significant proportion of typical total annual emission from these no-till corn fields. We also present the first measurements of ecosystem level N2O fluxes using an open-path N2O analyzer and show the potential of this novel system to study ecosystem level N2O fluxes.

A collision-free path is a path which an industrial robot can physically take while traveling from one location to another in an environment containing obstacles. Usually the obstacles are expanded to compensate for the body width of the robot. For robots with a prismatic joint, which allows only a translational motion along its axis, additional problems created by the long boom are handled by means of pseudoobstacles which are generated by real obstacle's edges and faces. The environment is then modified by the inclusion of pseudoobstacles which contribute to the forbidden regions. This process allows the robot itself again to be represented by a point specifying the location of its end effector in space. An algorithm for determining the shortest distance collision-free path given a sequence of edges to be traversed has been developed for the case of stationary obstacles.

A non-planar, tortuous path chemical preconcentrator has a high internal surface area having a heatable sorptive coating that can be used to selectively collect and concentrate one or more chemical species of interest from a fluid stream that can be rapidly released as a concentrated plug into an analytical or microanalytical chain for separation and detection. The non-planar chemical preconcentrator comprises a sorptive support structure having a tortuous flow path. The tortuosity provides repeated twists, turns, and bends to the flow, thereby increasing the interfacial contact between sample fluid stream and the sorptive material. The tortuous path also provides more opportunities for desorption and readsorption of volatile species. Further, the thermal efficiency of the tortuous path chemical preconcentrator is comparable or superior to the prior non-planar chemical preconcentrator. Finally, the tortuosity can be varied in different directions to optimize flow rates during the adsorption and desorption phases of operation of the preconcentrator.

We introduce a reweighting scheme for the path ensembles in the transition interface sampling framework. The reweighting allows for the analysis of free energy landscapes and committor projections in any collective variable space. We illustrate the reweighting scheme on a two dimensional potential with a nonlinear reaction coordinate and on a more realistic simulation of the Trp-cage folding process. We suggest that the reweighted path ensemble can be used to optimize possible nonlinear reaction coordinates. PMID:21054008

The paper discusses the concepts behind recent developments in optical remote sensing (ORS) and the results from experiments. Airborne fugitive and fine particulate matter (PM) from various sources contribute to exceedances of state and federal PM and visibility standards. Recent...

The General Educational Development program, or GED, is undergoing the biggest revamping in its 69-year history, driven by mounting recognition that young adults' future success depends on getting more than a high-school-level education. Potent forces have converged to stoke the GED's redesign. A labor market that increasingly seeks some

In this paper we give, for the first time in the open literature, sufficient conditions so that a one-dimensional iterative logic array (ILA) is C-testable, taking into account the path delay fault model. We also give a method for path selection so that all the selected paths can be tested by a constant number of test-vector pairs. The delay of all other paths

The demand to move large amounts of data between sites is ubiquitous and growing, particularly among the signal/image processing (SIP) community. The Department of Energy (DOE) Sandia Labs has made available a tool called multiple path secure copy (MPSCP), which can move data 4 to 90 times faster than OpenSSH's secure copy (SCP) by using parallel TCP streams. We have enhanced

Let G=(V,E) be a graph and let r ≥ 1 be an integer. For a set D ⊆ V, define N_r[x] = {y ∈ V : d(x,y) ≤ r} and D_r(x) = N_r[x] ∩ D, where d(x,y) denotes the number of edges in any shortest path between x and y. D is known as an r-identifying code (r-locating-dominating set, respectively), if for all vertices x ∈ V (x ∈ V∖D, respectively), the sets D_r(x) are all nonempty and different. Roberts and Roberts
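
The defining condition can be checked directly with breadth-first search: compute each vertex's r-ball, intersect it with D, and require the resulting signatures to be nonempty and pairwise distinct. A sketch (function and graph names are illustrative):

```python
from collections import deque

def is_r_identifying(adj, D, r):
    """Check whether D is an r-identifying code: every vertex must have a
    nonempty r-ball intersection with D, and all those intersections must
    be pairwise distinct. adj: dict vertex -> iterable of neighbors."""
    def signature(x):
        # BFS out to distance r, then intersect the ball with D
        dist = {x: 0}
        q = deque([x])
        while q:
            u = q.popleft()
            if dist[u] == r:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return frozenset(dist) & frozenset(D)

    sigs = [signature(x) for x in adj]
    return all(sigs) and len(set(sigs)) == len(sigs)

# Path on 4 vertices 0-1-2-3: taking D = V gives distinct 1-ball signatures
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_r_identifying(path4, {0, 1, 2, 3}, 1))  # True
```

On the same path, D = {0, 2, 3} fails for r = 1 because vertices 2 and 3 get the identical signature {2, 3}.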

The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, Artificial Intelligence and especially in Bioinformatics. Due to its NP-hardness, we cannot expect to efficiently solve this problem using conventional exact techniques. This paper presents a heuristic to tackle this problem based on the use at different levels of a probabilistic variant of a classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but required execution times may increase considerably. PMID:23300667
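
A probabilistic beam search for the shortest common supersequence can be sketched as follows; this is an illustrative variant, not the paper's exact algorithm (the scoring function, beam width, and sampling rule here are assumptions):

```python
import random

def is_supersequence(s, t):
    """True if t is a subsequence of s."""
    it = iter(s)
    return all(ch in it for ch in t)

def beam_search_scs(strings, beam_width=10, sample_size=3, seed=0):
    """Probabilistic beam search sketch for the shortest common
    supersequence: grow candidates one symbol at a time, keep the most
    promising states, and randomly sample a few extra to diversify."""
    rng = random.Random(seed)
    beam = [tuple(0 for _ in strings)]      # per-string matched positions
    prefix = {beam[0]: ""}
    while True:
        done = [p for p in beam
                if all(p[i] == len(strings[i]) for i in range(len(strings)))]
        if done:
            return min((prefix[p] for p in done), key=len)
        candidates = {}
        for p in beam:
            # Try every symbol that advances at least one string
            for ch in {strings[i][p[i]] for i in range(len(strings))
                       if p[i] < len(strings[i])}:
                q = tuple(p[i] + (p[i] < len(strings[i]) and strings[i][p[i]] == ch)
                          for i in range(len(strings)))
                if q not in candidates:
                    candidates[q] = prefix[p] + ch
        # Score by total progress; sample a few lower-ranked states as well
        ranked = sorted(candidates, key=lambda q: -sum(q))
        keep = ranked[:beam_width] + rng.sample(
            ranked[beam_width:], min(sample_size, max(0, len(ranked) - beam_width)))
        prefix = {q: candidates[q] for q in keep}
        beam = keep

result = beam_search_scs(["abcb", "bca"])
print(result, all(is_supersequence(result, t) for t in ["abcb", "bca"]))
```

Each step advances the total matched positions by at least one, so the search always terminates, and any returned string is a valid supersequence by construction.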

We address the problem of sampling double-ended diffusive paths. The ensemble of paths is expressed using a symmetric version of the Onsager-Machlup formula, which only requires evaluation of the force field and which, upon direct time discretization, gives rise to a symmetric integrator that is accurate to second order. Efficiently sampling this ensemble requires avoiding the well-known stiffness problem associated with sampling infinitesimal Brownian increments of the path, as well as a different type of stiffness associated with sampling the coarse features of long paths. The fine-features sampling stiffness is eliminated with the use of the fast sampling algorithm (FSA), and the coarse-feature sampling stiffness is avoided by introducing the sliding and sampling (S&S) algorithm. A key feature of the S&S algorithm is that it enables massively parallel computers to sample diffusive trajectories that are long in time. We use the algorithm to sample the transition path ensemble for the structural interconversion of the 38-atom Lennard-Jones cluster at low temperature.

The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA are also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
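
Tournament selection, one of the techniques examined, can be sketched in a few lines; the distance matrix and fitness function below are toy stand-ins for the transporter's actual path cost:

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Tournament selection: draw k individuals at random and keep the
    fittest (here, the one with the smallest path length)."""
    contenders = rng.sample(population, k)
    return min(contenders, key=fitness)

def path_length(order, dist):
    """Total length of visiting points in the given order."""
    return sum(dist[order[i]][order[i + 1]] for i in range(len(order) - 1))

rng = random.Random(1)
# Toy symmetric distance matrix between 4 points on a structure
dist = [[0, 2, 9, 4], [2, 0, 6, 3], [9, 6, 0, 1], [4, 3, 1, 0]]
population = [rng.sample(range(4), 4) for _ in range(20)]
winner = tournament_select(population, lambda o: path_length(o, dist), rng=rng)
print(winner, path_length(winner, dist))
```

Larger tournament sizes k increase selection pressure; the paper's finding that tournament selection works well here reflects this easy-to-tune pressure knob.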

Prins cyclization of bis-homoallylic alcohols with aldehydes catalyzed by iron(III) salts shows excellent cis selectivity and yields to form 2,7-disubstituted oxepanes. The iron(III) is able to catalyze this process with unactivated olefins. This cyclization was used as the key step in the shortest total synthesis of (+)-isolaurepan. PMID:23167915

DeWitt's covariant formulation of path integration [B. DeWitt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations": there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987). © 1997 American Institute of Physics.

Schaefer, J. [Department of Mathematics, State University of New York at Stony Brook, Stony Brook, New York 11794-3651 (United States)]

The authors elucidate what they saw as three important challenges to overcome along the path to becoming elementary school mathematics teacher leaders: marginal interest in math, low self-confidence, and teaching in isolation. To illustrate how these challenges were mitigated, they focus on the stories of two elementary school teachers--Laura and

Describes "Off the Beaten Path", a program that takes at-risk students out of the traditional classroom and puts them into a camping atmosphere in order to increase academic achievement, improve self-esteem, and promote better social skills. (WRM)

A gas path seal suitable for use with a turbine engine or compressor is described. A shroud wearable or abradable by the abrasion of the rotor blades of the turbine or compressor shrouds the rotor blades. A compliant backing surrounds the shroud. The backi...

In examining the world nuclear energy paths, the following assumptions were adopted: the world economy will grow somewhat more slowly than in the past, leading to reductions in electricity demand growth rates; national and international political impediments to the deployment of nuclear power will gradually disappear over the next few years; further development of nuclear power will proceed steadily, without

The proximal relationship of two objects is of interest for many reasons, e.g., the interference detection problem in robot motion planning. When the mathematical representations of two objects are separated, the natural measure of the proximal relationship is the shortest Euclidean distance. However, much less is known when the representations intersect. In this dissertation, one of the main foci is the characterization and computation of measures of the proximal relationship between two intersecting objects. We call these measures penetration distances. A formal exposition of penetration distances and their mathematical properties is given. A penetration distance is defined by the least 'movement' needed to separate the two objects. In general, 'movement' involves both rotation and translation. Several ways of measuring the degree of rotation and translation are introduced, and each yields a different definition of penetration distance. In the special case of convex objects, it is shown that the various penetration distances are the same and are determined from translational motion alone. The above-mentioned penetration distances are difficult to compute. An important contribution of this thesis is the development of a new penetration distance based on ideas of 'growing' the mathematical representations of objects. It is called the growth distance and can be computed easily for a pair of convex objects. The mathematical properties and computational aspects of the growth distance are developed at length. For instance, its relationship to the other penetration distances is determined. When two objects are separated, the growth distance is also a measure of separation. An important application of the growth distance is to path finding for robotic systems in the presence of obstacles. Our approach is to convert the path finding problem into an optimization problem. This novel approach involves searching among collision paths.
Problems unique to this formulation are discussed. Our approach has several advantages over existing approaches and has performed well in several examples.
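For a pair of convex objects the growth distance admits a closed form in simple cases. A minimal sketch for two circles (a hypothetical example, not taken from the dissertation): growing both circles about their centres by a common factor g, they first touch when g(r1 + r2) equals the centre distance, so g > 1 indicates separation and g < 1 indicates penetration.

```python
import math

def growth_distance_circles(c1, r1, c2, r2):
    """Growth factor g at which both circles, scaled about their centres
    (the seed points), first touch: g * (r1 + r2) = |c1 - c2|.
    g > 1 means the circles are separated; g < 1 means they penetrate."""
    return math.dist(c1, c2) / (r1 + r2)

print(growth_distance_circles((0, 0), 1.0, (4, 0), 1.0))  # 2.0 (separated)
print(growth_distance_circles((0, 0), 1.0, (1, 0), 1.0))  # 0.5 (penetrating)
```

Because the same scalar serves as both a separation and a penetration measure, an optimizer can search across colliding configurations without a special case at contact.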

Free scalar field theory on a flat spacetime can be cast into a generally covariant form known as parametrised field theory in which the action is a functional of the scalar field as well as the embedding variables which describe arbitrary, in general curved, foliations of the flat spacetime. We construct the path integral quantization of parametrised field theory in order to analyse issues at the interface of quantum field theory and general covariance in a path integral context. We show that the measure in the Lorentzian path integral is non-trivial and is the analog of the Fradkin-Vilkovisky measure for quantum gravity. We construct Euclidean functional integrals in the generally covariant setting of parametrised field theory using key ideas of Schleich and show that our constructions imply the existence of non-standard 'Wick rotations' of the standard free scalar field 2-point function. We develop a framework to study the problem of time through computations of scalar field 2-point functions. We illustrate our ideas through explicit computation for a time independent 1+1 dimensional foliation. Although the problem of time seems to be absent in this simple example, the general case is still open. We discuss our results in the contexts of the path integral formulation of quantum gravity and the canonical quantization of parametrised field theory.

A triggerable opening switch for a very high voltage and current pulse includes a transmission line extending from a source to a load and having an intermediate switch section including a plasma for conducting electrons between transmission line conductors and a magnetic field for breaking the plasma conduction path and magnetically insulating the electrons when it is desired to open the switch.

The primary functions of the Lander Flight Path Analysis Team (LFPAT) were to (1) design the Viking Lander (VL) descent trajectory and compute the descent guidance parameters for command transmission to the Viking Lander and Viking Orbiter (VO), (2) reconstruct the VL trajectory from separation to touchdown using data transmitted from the VL to Earth via the VO during descent, and (3) predict the VL/VO relay link system performance during descent and post touchdown. The preflight VL capability, the history of proposed descent trajectory designs as the site selection process evolved, and the final trajectory design and guidance parameters for each vehicle are addressed along with the trajectory reconstruction process, including the overall reconstructed VL flight path summary and a detailed discussion of the entry trajectory and atmosphere reconstruction results. The postland relay link prediction function is discussed.

Let $G=(V,E)$ be a graph and let $r\ge 1$ be an integer. For a set $D \subseteq V$, define $N_r[x] = \{y \in V: d(x, y) \leq r\}$ and $D_r(x) = N_r[x] \cap D$, where $d(x,y)$ denotes the number of edges in any shortest path between $x$ and $y$. $D$ is known as an $r$-identifying code ($r$-locating-dominating set, respectively) if, for all vertices $x\in V$ ($x \in V\backslash D$, respectively), the sets $D_r(x)$ are all nonempty and distinct. In this paper, we provide complete results for $r$-identifying codes in paths and odd cycles; we also give complete results for 2-locating-dominating sets in cycles.
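The definition above is easy to check directly on a path $P_n$. A minimal sketch in Python (the candidate codes and parameters are illustrative, not taken from the paper):

```python
def ball(n, x, r):
    # closed ball N_r[x] in the path P_n with vertices 0..n-1
    return set(range(max(0, x - r), min(n, x + r + 1)))

def is_r_identifying(n, D, r):
    """D is an r-identifying code in P_n iff every signature
    D_r(x) = N_r[x] ∩ D is nonempty and the signatures are pairwise distinct."""
    sigs = [frozenset(ball(n, x, r) & D) for x in range(n)]
    return all(sigs) and len(set(sigs)) == n

print(is_r_identifying(6, {1, 2, 3, 5}, 1))  # True
print(is_r_identifying(6, {0, 2, 3, 5}, 1))  # False: x=2 and x=3 share {2, 3}
```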

Planning of the shortest/optimal route is essential for efficient operation of an autonomous mobile robot or vehicle. In this paper, Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, is implemented to solve the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First, we frame an objective function that satisfies the conditions of obstacle avoidance and the target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.

It has been suggested that neural systems across several scales of organization show optimal component placement, in which any spatial rearrangement of the components would lead to an increase of total wiring. Using extensive connectivity datasets for diverse neural networks combined with spatial coordinates for network nodes, we applied an optimization algorithm to the network layouts, in order to search for wire-saving component rearrangements. We found that optimized component rearrangements could substantially reduce total wiring length in all tested neural networks. Specifically, total wiring among 95 primate (Macaque) cortical areas could be decreased by 32%, and wiring of neuronal networks in the nematode Caenorhabditis elegans could be reduced by 48% on the global level, and by 49% for neurons within frontal ganglia. Wiring length reductions were possible due to the existence of long-distance projections in neural networks. We explored the role of these projections by comparing the original networks with minimally rewired networks of the same size, which possessed only the shortest possible connections. In the minimally rewired networks, the number of processing steps along the shortest paths between components was significantly increased compared to the original networks. Additional benchmark comparisons also indicated that neural networks are more similar to network layouts that minimize the length of processing paths, rather than wiring length. These findings suggest that neural systems are not exclusively optimized for minimal global wiring, but for a variety of factors including the minimization of processing steps. PMID:16848638

A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model and the preliminary evaluation capability prepared for WISAP, including the enhancements made as a result of the authors' experience with the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS; it is written for users who have little or no experience with computers. Appendix C is for the programmer; it contains information on how input parameters are passed between programs in the system, along with program listings and a test case listing. Appendix D is a definition of terms.

The South East coastal region experiences hurricane threat for almost six months in every year. To improve the accuracy of hurricane forecasts, meteorologists would need the storm paths of both the present and the past. A hurricane path can be established if we could identify the correct position of the storm at different times right from its birth to the end. We propose a method based on both spatial and temporal image correlations to locate the position of a storm from satellite images. During the hurricane season, the satellite images of the Atlantic ocean near the equator are examined for the hurricane presence. This is accomplished in two steps. In the first step, only segments with more than a particular value of cloud cover are selected for analysis. Next, we apply image processing algorithms to test the presence of a hurricane eye in the segment. If the eye is found, the coordinate of the eye is recorded along with the time stamp of the segment. If the eye is not found, we examine adjacent segments for the existence of hurricane eye. It is probable that more than one hurricane eye could be found from different segments of the same period. Hence, the above process is repeated till the entire potential area for hurricane birth is exhausted. The subsequent/previous position of each hurricane eye will be searched in the appropriate adjacent segments of the next/previous period to mark the hurricane path. The temporal coherence and spatial coherence of the images are taken into account by our scheme in determining the segments and the associated periods required for analysis.
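The two-step segment test described above might be sketched as follows. All thresholds and the synthetic segment are hypothetical, and a real detector would use more robust image processing than a single darkest-pixel test:

```python
import numpy as np

def cloud_cover_fraction(segment, cloud_thresh=0.6):
    # fraction of pixels brighter than the (assumed) cloud threshold
    return (segment > cloud_thresh).mean()

def has_eye(segment, cloud_thresh=0.6, min_cover=0.7):
    """Two-step test sketched from the text: (1) keep only segments with
    enough cloud cover; (2) look for a small clear region (the eye)
    inside an otherwise cloudy segment. Thresholds are hypothetical."""
    if cloud_cover_fraction(segment, cloud_thresh) < min_cover:
        return None
    # step 2: the candidate eye is the clearest (darkest) pixel
    y, x = np.unravel_index(np.argmin(segment), segment.shape)
    if segment[y, x] < cloud_thresh:
        return (int(y), int(x))
    return None

# synthetic segment: uniform cloud with a clear "eye" at (8, 8)
seg = np.full((16, 16), 0.9)
seg[8, 8] = 0.1
print(has_eye(seg))  # (8, 8)
```

Linking the coordinates returned for successive time stamps, as the abstract describes, then traces out the storm path.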

JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a Swiss Army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.

The Ohio State University's Library web site notes: "As a navigational aviator, Byrd pioneered in the technology that would be the foundation for modern polar exploration and investigation. As a decorated and much celebrated hero, Byrd drew popular attention to areas of the world that became focal points of scientific investigation in numerous disciplines." More information about Admiral Richard E. Byrd can be found at http://www.lib.ohio-state.edu/arvweb/polar/byrd/byrd.htm. The next animation, #1001, shows Byrd's plane as it follows the flight path presented in this animation.

of scheduling problems that involve SDS times (costs), an important consideration in many practical applications. It focuses on papers published within the last decade, addressing a variety of machine configurations - single machine, parallel machine, flow shop...

We have spectroscopically confirmed a brown dwarf mass companion to the hydrogen atmosphere white dwarf NLTT 5306. The white dwarf's atmospheric parameters were measured using the Sloan Digital Sky Survey and X-shooter spectroscopy as Teff = 7756 ± 35 K and log(g) = 7.68 ± 0.08, giving a mass for the primary of MWD = 0.44 ± 0.04 M☉ at a distance of 71 ± 4 pc with a cooling age of 710 ± 50 Myr. The existence of the brown dwarf secondary was confirmed through the near-infrared arm of the X-shooter data and a spectral type of dL4-dL7 was estimated using standard spectral indices. Combined radial velocity measurements from the Sloan Digital Sky Survey, X-shooter and the Hobby-Eberly Telescope's High Resolution Spectrograph of the white dwarf give a minimum mass of 56 ± 3 MJup for the secondary, confirming the substellar nature. The period of the binary was measured as 101.88 ± 0.02 min using both the radial velocity data and i'-band variability detected with the Isaac Newton Telescope. This variability indicates `day' side heating of the brown dwarf companion. We also observe Hα emission in our higher resolution data in phase with the white dwarf radial velocity, indicating that this system is in a low level of accretion, most likely via a stellar wind. This system represents the shortest period white dwarf+brown dwarf binary and the secondary has survived a stage of common envelope evolution, much like its longer period counterpart, WD 0137-349. Both systems likely represent bona fide progenitors of cataclysmic variables with a low-mass white dwarf and a brown dwarf donor.

Analysis of 746 new V-band observations of the RR Lyrae star AH Cam obtained during 1989 - 1992 clearly show that its light curve cannot be described by a single period. In fact, at first glance, the Fourier spectrum of the photometry resembles that of a double-mode pulsator, with peaks at a fundamental period of 0.3686 d and an apparent secondary period of 0.2628 d. Nevertheless, the dual-mode solution is a poor fit to the data. Rather, we believe that AH Cam is a single-mode RR Lyrae star undergoing the Blazhko effect: periodic modulation of the amplitude and shape of its light curve. What was originally taken to be the period of the second mode is instead the 1-cycle/d alias of a modulation sidelobe in the Fourier spectrum. The data are well described by a modulation period of just under 11 d, which is the shortest Blazhko period reported to date in the literature and confirms the earlier suggestion by Goranskii. A low-resolution spectrum of AH Cam indicates that it is relatively metal rich, with ΔS ≤ 2. Its high metallicity and short modulation period may provide a critical test of at least one theory for the Blazhko effect. Moskalik's internal resonance model makes specific predictions of the growth rate of the fundamental mode vs fundamental period. AH Cam falls outside the regime of other known Blazhko variables and resonance model predictions, but these are appropriate for metal-poor RR Lyrae stars. If the theory matches the behavior of AH Cam for a metal-rich stellar model, this would bolster the resonance hypothesis.

Shortest-term wind power predictions with forecast horizons of less than 12 hours benefit from observed data, so comparison between Synop observations and numerical weather predictions can serve as an assessment of the available forecasts. This ad-hoc analysis depends very much on the observed data, whose spatial distribution is very patchy in some areas. A powerful interpolation method helps to create a correct representation of the observed fields. The Kriging method finds a correlation function which best describes the variation of the spatial field. An unknown data point is estimated by weighted observation points, taking distance and clustering effects into account. We use mean sea level pressure data measured at Synop and ship stations in Europe and the Atlantic to assess the quality of the predicted mean sea level pressure from Numerical Weather Prediction models. Those models serve as the basis of wind power predictions and influence the prediction error considerably. If this error can be estimated, a rating of the different Numerical Weather Prediction models under consideration can follow. The choice of the area that represents the area of interest in terms of prediction errors depends on the moving direction of the low pressure systems. First calculations show that errors of the ad-hoc analysis are in good agreement with the ECMWF analysis. The evaluation of the ad-hoc analysis in an area over Great Britain/North Sea and the comparison with available ECMWF forecasts over Germany reveals phase shifts for some time intervals. This supports the assumption of prediction errors moving along with the low pressure systems.
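As a sketch of the interpolation step, here is simple kriging with an assumed exponential covariance model (the covariance function, its parameters, and the observation values are illustrative; the study does not specify them):

```python
import numpy as np

def simple_kriging(xy, z, x0, mean, sill=1.0, rng=200.0):
    """Simple-kriging estimate at point x0 from observations (xy, z),
    using an exponential covariance C(h) = sill * exp(-h / rng)."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-d / rng)        # covariance between observations
    d0 = np.linalg.norm(xy - x0, axis=-1)
    c0 = sill * np.exp(-d0 / rng)      # covariance to the target point
    w = np.linalg.solve(C, c0)         # kriging weights
    return mean + w @ (z - mean)

obs = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])   # station positions, km
p = np.array([1012.0, 1008.0, 1015.0])                     # MSL pressure, hPa
print(simple_kriging(obs, p, np.array([50.0, 50.0]), mean=1010.0))
```

At an observation point the weights collapse to a unit vector, so the estimate reproduces the observation exactly; this exact-interpolation property is what makes kriging attractive for building the analysis fields described above.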

AM CVn stars are ultracompact binaries (P_{orb} < 65 min) where a hydrogen-deficient low-mass, degenerate donor star overfills its Roche lobe and transfers matter to a companion white dwarf via an accretion disc. SDSS J0926+36 is currently the only eclipsing AM CVn star and also the shortest period eclipsing binary known. Its light curve displays deep (~2 mag) eclipses every 28.3 min, which last for ~2 min, as well as ~2 mag amplitude outbursts every ~100-200 d. Superhumps were seen in its quiescent light curve on some occasions, probably as a reminiscence of a (in some cases undetected) previous outburst. Its eclipsing nature allows a unique opportunity to disentangle the emission from several different light sources, and to map the surface brightness distribution of its hydrogen-deficient accretion disc with the aid of maximum entropy eclipse mapping techniques. Here we report the eclipse mapping analysis of optical light curves of SDSS J0926+36, collected with the 2.4 m Liverpool Robotic Telescope, covering 20 orbits of the binary over 5 nights of observations between 2012 February and March. The object was in quiescence in all runs. Our data show no evidence of superhumps nor of orbital modulation due to anisotropic emission from a bright spot at the disc rim. Accordingly, the average out-of-eclipse flux level is consistent with that of the superhump-subtracted previous light curves. We combined all runs to obtain an orbital light curve of improved S/N. The corresponding eclipse map shows a compact source at disc centre (T_b ≈ 17000 K), a faint, cool accretion disc (~4000 K) plus enhanced emission along the gas stream (~6000 K) beyond the impact point at the outer disc rim, suggesting the occurrence of gas stream overflow at that epoch.

[11]Cycloparaphenylene ([11]CPP) selectively encapsulates La@C82 to form the shortest possible metallofullerene-carbon nanotube (CNT) peapod, La@C82⊂[11]CPP, in solution and in the solid state. Complexation in solution was affected by the polarity of the solvent and was 16 times stronger in the polar solvent nitrobenzene than in the nonpolar solvent 1,2-dichlorobenzene. Electrochemical analysis revealed that the redox potentials of La@C82 were negatively shifted upon complexation from free La@C82. Furthermore, the shifts in the redox potentials increased with the polarity of the solvent. These results are consistent with formation of a polar complex, (La@C82)(δ-)⊂[11]CPP(δ+), by partial electron transfer from [11]CPP to La@C82. This is the first observation of such an electronic interaction between a fullerene pea and CPP pod. Theoretical calculations also supported partial charge transfer (0.07) from [11]CPP to La@C82. The structure of the complex was unambiguously determined by X-ray crystallographic analysis, which showed the La atom inside the C82 near the periphery of the [11]CPP. The dipole moment of La@C82 was projected toward the CPP pea, nearly perpendicular to the CPP axis. The position of the La atom and the direction of the dipole moment in La@C82⊂[11]CPP were significantly different from those observed in La@C82⊂CNT, thus indicating a difference in orientation of the fullerene peas between fullerene-CPP and fullerene-CNT peapods. These results highlight the importance of pea-pea interactions in determining the orientation of the metallofullerene in metallofullerene-CNT peapods. PMID:25224281

Summary. The propagator of a particle inside a sector of opening angle ? is calculated exactly by the approach of path integrals of Feynman. The suitably normalized wave functions are then derived. Special cases of propagators are considered.

We review our investigations on Gibbs measures relative to Brownian motion, in particular the existence of such measures and their path properties, uniqueness, resp. non-uniqueness. For the case when the energy only depends on increments, we present a functional central limit theorem. We also explain connections with other work and state open problems of interest.

The minimum-time manipulator control problem is solved for the case when the path is specified and the actuator torque limitations are known. The optimal open-loop torques are found, and a method is given for implementing these torques with a conventional linear feedback control system. The algorithm allows bounds on the torques that may be arbitrary functions of the joint angles

While resources for the gifted are not abundant, many schools do offer classes, programs, services, and/or clubs that broaden student learning beyond the curriculum. What can educators do to expand the horizons of gifted children--to open their minds to new worlds of knowledge and understanding? Programs for gifted students, particularly those

This paper surveys recent results in the area of virtual path layout in ATM networks. We focus on the one-to-all (or broadcast) and the all-to-all problems. We present a model for theoretical studies of these layouts, which amounts to covering the network with simple paths, under various constraints. The constraints are the hop count (the number of paths traversed between

This is an activity to help students visualize the relationship of motion, time, and space as it relates to objects orbiting the earth. They will be able to track the path of an orbiting object on a globe, plot the path of an orbiting object on a flat world map, and explain that an object orbiting the earth on a plane will produce a flight path which appears as wavy lines on the earth's surface.

Path Loss Measurements were obtained on three (3) GPS equipped 757 aircraft. Systems measured were Marker Beacon, LOC, VOR, VHF (3), Glide Slope, ATC (2), DME (2), TCAS, and GPS. This data will provide the basis for assessing the EMI (Electromagnetic Interference) safety margins of comm/nav (communication and navigation) systems to portable electronic device emissions. These Portable Electronic Devices (PEDs) include all devices operated in or around the aircraft by crews, passengers, servicing personnel, as well as the general public in the airport terminals. EMI assessment capability is an important step in determining if one system-wide PED EMI policy is appropriate. This data may also be used comparatively with theoretical analysis and computer modeling data sponsored by NASA Langley Research Center and others.

Path and path deviation equations for neutral, charged, spinning and spinning charged test particles, using a modified Bazanski Lagrangian, are derived. We extend this approach to strings and branes. We show how the Bazanski Lagrangian for charged point particles and charged branes arises à la Kaluza-Klein from the Bazanski Lagrangian in 5-dimensions.

Multihoming Intelligent Route Control (IRC) plays a significant role in improving the performance of Internet accesses. However, in a competitive environment, IRC systems may introduce persistent route oscillations, causing significant performance degradation. In this study, three design alternatives to cope with this issue are investigated: Randomized Path Monitoring, Randomized Path Switching and History-aware Path Switching. The simulation results show that

In this paper, we adapt the heuristic of Fortz and Thorup for optimizing the weights of Shortest Path First protocols such as Open Shortest Path First (OSPF) or Intermediate System-Intermediate System (IS-IS), in order to take into account failure scenarios. More precisely, we want to find a set of weights that is robust to all single link failures. A direct
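The single-link failure scenarios can be enumerated mechanically once shortest paths are computable for any weight setting. A toy sketch (plain Dijkstra plus link pruning; this is not the Fortz-Thorup local-search heuristic itself, and the example network is hypothetical):

```python
import heapq

def dijkstra(adj, src):
    """adj: {node: [(neighbour, weight), ...]}. Returns shortest distances."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def distances_under_failures(adj, src):
    """Re-run shortest paths with each single link removed: the failure
    scenarios a robust weight setting must cope with."""
    out = {}
    links = [(u, v) for u, nbrs in adj.items() for v, _ in nbrs]
    for u, v in links:
        pruned = {a: [(b, w) for b, w in nbrs if (a, b) != (u, v)]
                  for a, nbrs in adj.items()}
        out[(u, v)] = dijkstra(pruned, src)
    return out

net = {"a": [("b", 1), ("c", 3)], "b": [("c", 1)], "c": []}
print(dijkstra(net, "a"))                        # {'a': 0, 'b': 1, 'c': 2}
print(distances_under_failures(net, "a")[("b", "c")])  # {'a': 0, 'b': 1, 'c': 3}
```

A robustness-aware weight search would score each candidate weight vector over all such pruned topologies rather than over the intact network alone.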

In accordance with Fermat's Variation Principle, a ray path connecting two arbitrary points in a scene via multiple reflectors is given by a non-linear system. If we fix one of the two points and let the other change, the system can be considered as a function relating the reflection points along the path to the varying point. In this paper,

This report defines the problem of crossing path crashes in the United States. This crash type involves one moving vehicle that cuts across the path of another when their initial approach comes from either lateral or opposite directions and they typically...

In this study, we examine how development status and water scarcity shape people's perceptions of "hard path" and "soft path" water solutions. Based on ethnographic research conducted in four semi-rural/peri-urban sites (in Bolivia, Fiji, New Zealand, and the US), we use content analysis to conduct statistical and thematic comparisons of interview data. Our results indicate clear differences based on development status and, to a lesser extent, water scarcity. People in less developed sites were more likely to suggest hard path solutions, less likely to suggest soft path solutions, and more likely to see no path to solutions than people in more developed sites. Thematically, people in less developed sites envisioned solutions that involve small-scale water infrastructure and decentralized, community based solutions, while people in more developed sites envisioned solutions that involve large-scale infrastructure and centralized, regulatory water solutions. People in water-scarce sites were less likely to suggest soft path solutions and more likely to see no path to solutions (but no more likely to suggest hard path solutions) than people in water-rich sites. Thematically, people in water-rich sites seemed to perceive a wider array of unrealized potential soft path solutions than those in water-scarce sites. On balance, our findings are encouraging in that they indicate that people are receptive to soft path solutions in a range of sites, even those with limited financial or water resources. Our research points to the need for more studies that investigate the social feasibility of soft path water solutions, particularly in sites with significant financial and natural resource constraints.

221A Lecture Notes: Path Integral. 1. Feynman's Path Integral Formulation. Feynman's formulation of quantum mechanics using the so-called path integral is arguably the most elegant: each path contributes with a weight factor given by the classical action for that path, hence the name path integral.
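The weight factor mentioned in the notes enters the standard propagator formula (textbook form):

```latex
K(x_f, t_f; x_i, t_i) = \int \mathcal{D}x(t)\; e^{\,i S[x]/\hbar},
\qquad S[x] = \int_{t_i}^{t_f} L\big(x(t), \dot{x}(t)\big)\, dt .
```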

The problem of automatic collision-free path planning is central to mobile robot applications. An approach to automatic path planning based on a quadtree representation is presented. Hierarchical path-searching methods are introduced, which make use of this multiresolution representation, to speed up the path planning process considerably. The applicability of this approach to mobile robot path planning is discussed.
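The multiresolution representation can be sketched as a recursive decomposition of an occupancy grid into homogeneous square cells; the planner then searches coarse free cells first and refines only where needed. A minimal sketch (the grid and cell encoding are illustrative, not from the paper):

```python
def build_quadtree(grid, x, y, size):
    """Recursively decompose an occupancy grid (1 = obstacle) into
    homogeneous square cells -- the multiresolution representation
    that hierarchical path search operates on."""
    cells = [grid[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    if all(cells) or not any(cells):
        # homogeneous region: stop subdividing, store one leaf
        return {"x": x, "y": y, "size": size, "occupied": bool(cells[0])}
    h = size // 2
    return {"x": x, "y": y, "size": size, "children": [
        build_quadtree(grid, x, y, h),
        build_quadtree(grid, x + h, y, h),
        build_quadtree(grid, x, y + h, h),
        build_quadtree(grid, x + h, y + h, h),
    ]}

grid = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 1, 1],
        [0, 0, 1, 1]]
tree = build_quadtree(grid, 0, 0, 4)
# the lower-right quadrant collapses to a single occupied leaf
print(tree["children"][3])  # {'x': 2, 'y': 2, 'size': 2, 'occupied': True}
```

Large free regions collapse to single leaves, which is what lets a hierarchical search traverse them in one step instead of cell by cell.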

A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously comprising an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output. Each processor is also capable of through-putting an input as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic bits is generated according to the processing of the first processable value. A second set of arithmetic bits may be generated from a second processing operation. The selection of the arithmetic status bits is performed by an arithmetic-status-bit multiplexer, which selects the desired set of arithmetic status bits from among the first and second sets of arithmetic status bits. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating the arithmetic status bits.
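The output-control logic described above (arithmetic status bits combined with a logical mask to drive the multiplexer select) can be sketched as follows; the bit layout and the select convention are assumptions for illustration, not taken from the patent text:

```python
def conditional_mux(a, b, status_bits, mask):
    """Sketch of the conditional multiplexer: the output control command
    is the logical AND of the arithmetic status bits with a mask; here a
    nonzero result selects the first input, otherwise the second.
    (The select polarity is an assumed convention.)"""
    select = status_bits & mask
    return a if select else b

# hypothetical status-bit layout
ZERO, NEG, CARRY, OVF = 0b0001, 0b0010, 0b0100, 0b1000

print(conditional_mux(10, 20, status_bits=NEG | CARRY, mask=NEG))  # 10
print(conditional_mux(10, 20, status_bits=ZERO, mask=NEG))         # 20
```

Varying only the mask reconfigures which arithmetic condition steers the data path, without changing the wiring of the processing element.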

... Technology Systems Interactions and Whole House Approaches. PATH's mission is to advance technology ... technology arena. Far-reaching exploratory research that can lead to breakthrough technologies and ...

Through the combined use of regression techniques, we will learn models of the uncertainty propagation efficiently and accurately to replace computationally intensive Monte- Carlo simulations in informative path planning. ...

The World Wide Web contains rich collections of digital materials that can be used in education and learning settings. The collaborative authoring prototype of Walden's Paths targets two groups of users: educators and learners. From the perspective...

We consider the path space of a curved manifold on which a point particle is introduced in a conservative physical system with constant total energy to formulate its action functional and geodesic equation together with breaks on the path. The second variation of the action functional is exploited to yield the geodesic deviation equation and to discuss the Jacobi fields on the curved manifold. We investigate the topology of the path space using the action functional on it and its physical meaning by defining the gradient of the action functional, the space of bounded flow energy solutions and the moduli space associated with the critical points of the action functional. We also consider the particle motion on the $n$-sphere $S^{n}$ in the conservative physical system to discuss explicitly the moduli space of the path space and the corresponding homology groups.

A new OSI (Open Systems Interconnection)-based model is described that can be used for the classification of residential gateways (RG). It is applied to analyze current gateway solutions and to draw evolutionary paths for the mid-to-long term. It is concluded that set-top boxes and broadband modems especially, in contrast to game consoles and PCs, have a strong potential to evolve

Absolutely calibrated in-situ measurements of tropospheric hydroxyl radicals, formaldehyde, sulfur dioxide, and naphthalene (C10H8) were performed by long-path laser absorption spectroscopy during the field campaign POPCORN. The absorption light path was folded into an open optical multiple reflection cell with a mirror separation of 38.5 m. Using a light path length of 1848 m and an integration time of 200

Natural motion of virtual characters is crucial in games and simulations. The quality of such motion strongly depends on the path the character walks and the animation of the character's locomotion. Therefore, much work has been done on path planning and character animation. However, the combination of the two fields has received less attention. Combining path planning and motion synthesis introduces several problems. A path generated by a path planner is often a simplification of the character's movement. This raises the question of which (body) part of the character should follow the path generated by the path planner, and to what extent it should follow it closely. We will show that forcing the pelvis to follow the path leads to unnatural animations, and that our proposed solution, using path abstractions, generates significantly better animations.

Improved gas-path seals are needed for better fuel economy, longer performance retention, and lower maintenance, particularly in advanced, high-performance gas turbine engines. Problems encountered in gas-path sealing are described, as well as new blade-tip sealing approaches for high-pressure compressors and turbines. These include a lubricant coating for conventional, porous-metal, rub-strip materials used in compressors. An improved hot-press metal alloy shows promise to increase the operating surface temperatures of high-pressure-turbine, blade-tip seals to 1450 K (2150 F). Three ceramic seal materials are also described that have the potential to allow much higher gas-path surface operating temperatures than are possible with metal systems.

This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
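The robustness objective above, minimizing the expected landing error over sampled atmospheric and entry-condition perturbations, can be illustrated with a toy genetic algorithm. Everything here is a generic sketch, not the paper's GA: the operator choices (truncation selection, one-point crossover, Gaussian mutation), the function names, and the parameter ranges are all my assumptions.

```python
import random

def robust_fitness(candidate, perturbations, landing_error):
    """Average landing error of one candidate over sampled perturbations:
    a simple proxy for the robustness objective in the abstract."""
    return sum(landing_error(candidate, p) for p in perturbations) / len(perturbations)

def genetic_minimise(landing_error, dim, perturbations,
                     pop=40, gens=60, lo=-1.0, hi=1.0, mut=0.1, seed=0):
    """Tiny GA: truncation selection, one-point crossover, Gaussian mutation."""
    rng = random.Random(seed)
    popn = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    key = lambda c: robust_fitness(c, perturbations, landing_error)
    for _ in range(gens):
        elite = sorted(popn, key=key)[: pop // 4]        # keep the best quarter
        popn = [list(e) for e in elite]
        while len(popn) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = list(a[:cut] + b[cut:]) if dim > 1 else list(a)
            child[rng.randrange(dim)] += rng.gauss(0.0, mut)  # mutate one gene
            popn.append(child)
    return min(popn, key=key)
```

With a one-dimensional error model `(x - p)**2` over perturbation samples `p`, the GA converges toward the mean of the samples, which is exactly the parameter value that minimizes the expected squared landing error.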

is that the distribution of mitochondria varies as a function of fiber size. In juveniles, mitochondria are uniformly distributed throughout the fibers and the population is equally divided between subsarcolemmal and intermyofibrillar regions. Muscle fibers typically fall within a size range of 10–100 µm along the shortest axis.

The relationship between utility judgments of subtask paths and the utility of the task as a whole was examined. The convergent validation procedure is based on the assumption that measurements of the same quantity made with different methods should covary. The utility measures of the subtasks were obtained during the performance of an aircraft flight-controller navigation task. Analyses helped decide among various models of subtask utility combination, whether the utility ratings of subtask paths predict the whole task's utility rating, and, indirectly, whether judgmental models need to include the equivalent of cognitive noise.

A polymer density functional theory (P-DFT) has been extended to the case of quantum statistics within the framework of Feynman path integrals. We start with the exact P-DFT formalism for an ideal open chain and adapt its efficient numerical solution to the case of a ring. We show that, similarly, the path integral problem can, in principle, be solved exactly by making use of the two-particle pair correlation function (2p-PCF) for the ends of an open polymer, half of the original. This way the exact data for one-dimensional quantum harmonic oscillator are reproduced in a wide range of temperatures. The exact solution is not, though, reachable in three dimensions (3D) because of a vast amount of storage required for 2p-PCF. In order to treat closed paths in 3D, we introduce a so-called "open ring" approximation which proves to be rather accurate in the limit of long chains. We also employ a simple self-consistent iteration so as to correctly account for the interparticle interactions. The algorithm is speeded up by taking convolutions with the aid of fast Fourier transforms. We apply this approximate path integral DFT (PI-DFT) method to systems within spherical symmetry: 3D harmonic oscillator, atoms of hydrogen and helium, and ions of He and Li. Our results compare rather well to the known data, while the computational effort (some seconds or minutes) is about 100 times less than that with Monte Carlo simulations. Moreover, the well-known "sign problem" is expected to be considerably reduced within the reported PI-DFT, since it allows for a direct estimate of the corresponding partition functions. PMID:16383563

Immigration: Rubio's path to the presidency? In a media blitz retorting conservative critics, he aims ... Of the four Democratic and four Republican senators who wrote the immigration reform proposal, Republicans, both in Congress and nationwide, need more convincing on immigration reform than Democrats. And Rubio

A triangular path inverting interferometer is described with application to the study of thermal 'schlieren'. This is practically free of any vibration and coherence troubles, and possesses the unique feature that either differential or total shear may be obtained only with proper positioning of the object; once aligned, the optical components need not be disturbed further. This simple and stable

production; Nuclear power operations and control • Plasma sciences; Applied plasma physics; Nuclear fusion. NPRE at Illinois: Three Paths. Students choose from three concentrations: • Plasma and Fusion • Power and interactions of radiation with matter • Applications of nuclear processes • Nuclear fission for electric power

A complex series of evolutionary steps, contingent upon a dynamic environmental context and a long biological heritage, have led to the ascent of Homo sapiens as a dominant component of the modern biosphere. In a field where missing links still abound and new discoveries regularly overturn theoretical paradigms, our understanding of the path of human evolution has made tremendous advances

For many years, GE has been conducting research to understand better the loss mechanisms that degrade the aerodynamic performance of steam turbine stages, and to develop new computational fluid dynamics (CFD) computer programs to predict these losses accurately. This paper describes a number of new steam path design features that have been introduced in the GE steam turbine product line

Career paths, current and future, in the environmental sciences will be discussed, based on experiences and observations during the author's 40 + years in the field. An emphasis will be placed on the need for integrated, transdisciplinary systems thinking approaches toward achie...

COMPUTER SCIENCE: MISCONCEPTIONS, CAREER PATHS AND RESEARCH CHALLENGES. School of Computing, Florida International University. The slides address misconceptions about introductory computer science ("How much can I make out of college?") using data from the Bureau of Labor Statistics.

This paper examines various career paths leading to deanship and considers the implications of the findings for women and minorities who aspire to this position. The paper is part of a larger study of academic deanship conducted by the Center for Academic Leadership at Washington State University between October 1996 and January 1997. Data for the

A model of d-pairing for superconducting and superfluid Fermi systems has been formulated within the path integration technique. By path integration over "fast" and "slow" Fermi fields, the action functional (which determines all properties of the model system) has been obtained. This functional can be used for the determination of different superconducting (superfluid) states, for calculation of the transition temperatures of these states, and for calculation of the collective mode spectrum for HTSC as well as for heavy-fermion superconductors.

Delay testing of combinational logic in a clocked environment is analyzed. A model based upon paths is introduced for delay faults. Any path with a total delay exceeding the clock interval is called a …

It has long been recognized that chemotaxis is the primary means by which nematodes locate host plants. Nonetheless, chemotaxis has received scant attention. We show that chemotaxis is predicted to take nematodes to a source of a chemo-attractant via the shortest possible routes through the labyrinth of air-filled or water-filled channels within a soil through which the attractant diffuses. There are just two provisos: (i) all of the channels through which the attractant diffuses are accessible to the nematodes and (ii) nematodes can resolve all chemical gradients no matter how small. Previously, this remarkable consequence of chemotaxis had gone unnoticed. The predictions are supported by experimental studies of the movement patterns of the root-knot nematodes Meloidogyne incognita and Meloidogyne graminicola in modified Y-chamber olfactometers filled with Pluronic gel. By providing two routes to a source of the attractant, one long and one short, our experiments, the first to demonstrate the routes taken by nematodes to plant roots, serve to test our predictions. Our data show that nematodes take the most direct route to their preferred hosts (as predicted) but often take the longest route towards poor hosts. We hypothesize that a complex of repellent and attractant chemicals influences the interaction between nematodes and their hosts. PMID:20880854

We propose consideration of at least two possible evolutionary paths for the emergence of intelligent life with the potential for technical civilization. The first is the path via encephalization of homeothermic animals; the second is the path to swarm intelligence of so-called superorganisms, in particular the social insects. The path to each appears to be facilitated by environmental change: for homeothermic animals, by decreased climatic temperature; for swarm intelligence, by increased oxygen levels.

As more data-path stacks are integrated into systems-on-a-chip (SOC), the data path is becoming a critical part of whole giga-scale integration (GSI) designs. The traditional layout design methodology cannot satisfy data-path performance requirements because it has no knowledge of the data-path bit-sliced structure and the strict performance (such as timing, coupling, and crosstalk) constraints. In this paper, we

Here, we represent protein structures as residue interacting networks, which are assumed to involve a permanent flow of information between amino acids. By removal of nodes from the protein network, we identify fold centrally conserved residues, which are crucial for sustaining the shortest pathways and thus play key roles in long-range interactions. Analysis of seven protein families (myoglobins, G-protein-coupled receptors, the trypsin class of serine proteases, hemoglobins, oligosaccharide phosphorylases, nuclear receptor ligand-binding domains and retroviral proteases) confirms that experimentally many of these residues are important for allosteric communication. The agreement between the centrally conserved residues, which are key in preserving short path lengths, and residues experimentally suggested to mediate signaling further illustrates that topology plays an important role in network communication. Protein folds have evolved under constraints imposed by function. To maintain function, protein structures need to be robust to mutational events. On the other hand, robustness is accompanied by an extreme sensitivity at some crucial sites. Thus, here we propose that centrally conserved residues, whose removal increases the characteristic path length in protein networks, may relate to the system fragility. PMID:16738564
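The probe described above, remove a residue and watch the characteristic path length of the contact network grow, can be sketched with standard breadth-first search. This is a generic toy illustration, not the authors' code: the adjacency-dict representation, the helper names, and the example ring network are all invented here.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path lengths from `source` (adjacency dict of sets)."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def characteristic_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs of residues."""
    total = pairs = 0
    for u in adj:
        for v, d in bfs_distances(adj, u).items():
            if v != u:
                total += d
                pairs += 1
    return total / pairs

def remove_node(adj, n):
    """Delete residue `n` and all its contacts from the network."""
    return {u: {v for v in nbrs if v != n} for u, nbrs in adj.items() if u != n}
```

On a five-residue contact ring, deleting any one residue turns the ring into a chain and lengthens the remaining shortest paths, which is the signature the abstract uses to flag centrally conserved residues.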

In this paper we study path integral for a single spinless particle on a star graph with N edges, whose vertex is known to be described by U(N) family of boundary conditions. After carefully studying the free particle case, both at the critical and off-critical levels, we propose a new path integral formulation that correctly captures all the scale-invariant subfamily of boundary conditions realized at fixed points of boundary renormalization group flow. Our proposal is based on the folding trick, which maps a scalar-valued wave function on star graph to an N-component vector-valued wave function on half-line. All the parameters of scale-invariant subfamily of boundary conditions are encoded into the momentum independent weight factors, which appear to be associated with the two distinct path classes on half-line that form the cyclic group Z2. We show that, when bulk interactions are edge-independent, these weight factors are generally given by an N-dimensional unitary representation of Z2. Generalization to momentum dependent weight factors and applications to worldline formalism are briefly discussed.

Flexible lifelong learning requires that learners can compare and select learning paths that best meet individual needs, not just in terms of learning goals, but also in terms of planning, costs etc. To this end a learning path specification was developed, which describes both the contents and the structure of any learning path, be it formal,

Although the Friis formula is widely used to calculate the free-space path loss of narrowband communications, it considers only a single frequency. Therefore, it should be extended to calculate the free-space path loss of ultra-wideband (UWB) communications by taking the frequency bandwidth into account. In this paper, the free-space path loss of UWB communications is studied. The Friis'
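The single-frequency Friis loss is (4πdf/c)², and one natural wideband extension, which may differ from the paper's exact formulation, is to average the linear loss across the occupied band. The sketch below assumes a flat transmit spectrum; the function names and the midpoint-rule integration are my own choices.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def fspl_db(d_m, f_hz):
    """Single-frequency Friis free-space path loss in dB:
    FSPL = (4*pi*d*f/c)^2, expressed logarithmically."""
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / C)

def uwb_fspl_db(d_m, f_low_hz, f_high_hz, steps=1000):
    """Band-averaged free-space path loss for a flat-spectrum UWB signal:
    average the *linear* loss across the band (midpoint rule), then take dB.
    One plausible wideband extension of Friis, not necessarily the paper's."""
    df = (f_high_hz - f_low_hz) / steps
    acc = 0.0
    for i in range(steps):
        f = f_low_hz + (i + 0.5) * df      # midpoint of the i-th sub-band
        acc += (4.0 * math.pi * d_m * f / C) ** 2
    return 10.0 * math.log10(acc / steps)
```

Averaging the linear loss weights the high end of the band more heavily, so for the 3.1–10.6 GHz UWB band the wideband figure sits slightly above the loss evaluated at the band's arithmetic centre frequency.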

Loomis, Klatzky, Avraamides, Lippa & Golledge (2007) suggest that, when it comes to spatial information, verbal description and perceptual experience are nearly functionally equivalent with respect to the cognitive representations they produce. We tested this idea for the case of spatial memory for complex paths. Paths consisted entirely of unit-length segments followed by 90-degree turns, thus assuring that a path

Yearly, monthly, and time-of-day fade statistics are presented and characterized. A 19.04 GHz yearly fade distribution, corresponding to a second COMSTAR beacon frequency, is predicted using the concept of effective path length, disdrometer, and rain rate results. The yearly attenuation and rain rate distributions follow, to a good approximation, log-normal variations for most fade and rain rate levels. Attenuations were exceeded for the longest and shortest periods of time for all fades in August and February, respectively. The eight-hour time periods showing the maximum and minimum number of minutes over the year for which fades exceeded 12 dB were approximately 1600 to 2400 and 0400 to 1200 hours, respectively. In employing the predictive method for obtaining the 19.04 GHz fade distribution, it is demonstrated theoretically that the ratio of attenuations at two frequencies is minimally dependent on raindrop size distribution, provided these frequencies are not widely separated.

A detailed analysis of experimentally obtained curvilinear crack path trajectories formed in a heterogeneous stress field is presented. Experimental crack path trajectories were used as data for numerical simulations, recreating the actual stress field governing the development of the crack path. Thus, the current theories of crack curving and kinking could be examined by comparing them with the actual stress field parameters as they develop along the experimentally observed crack path. The experimental curvilinear crack path trajectories were formed in the tensile specimens with a hole positioned in the vicinity of a potential crack path. The numerical simulation, based on the solution of equivalent boundary value problems with the possible perturbations of the crack path, is presented here.

Open-access journals, which provide access to their scholarly articles freely and without limitations, are at a systematic disadvantage relative to traditional closed-access journal publishing and its subscription-based business model. A simple, cost-effective remedy to this inequity could put open-access publishing on a path to become a sustainable, efficient system. PMID:19652697

A learning path is proposed starting from the characterization of a sound wave, showing how human beings emit articulate sounds in language, and introducing psychoacoustics, i.e. how sound interacts with the ears and is transduced into an electrical signal for transmission to the brain. What is perceived as noise is presented, and the concept is extended to physical measurements. The interdisciplinary teaching process is focused on active learning through activities at school and outside, performed with open-source software that allows recording sounds and analyzing their spectral components.

In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others and, generally, the image's main meaning appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; successively, the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure generating image decomposition based on the detection of visual attention regions. A new clustering algorithm taking advantage of the Delaunay-Voronoi diagrams for achieving the decomposition target is proposed. By applying that algorithm recursively, starting from the whole image, a transformation of the image into a tree of related meaningful regions is obtained (Attention Tree). Successively, a semantic interpretation of the leaf nodes is carried out by using a structure of Neural Networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from leaf nodes, paths toward the root node across the Attention Tree are attempted. The task of the path consists of relating the semantics of each child-parent node pair and, consequently, in merging the corresponding image regions. The relationship detected in this way between two tree nodes generates, as a result, the extension of the interpreted image area through each step of the path.
The construction of several Attention Trees has been performed and partial results will be shown.

We report on the discovery of the shortest-period binary comprising a hot subdwarf star (CD-30 11223, GALEX J1411-3053) and a massive unseen companion. Photometric data from the All Sky Automated Survey show ellipsoidal variations of the hot subdwarf primary, and a spectroscopic series revealed an orbital period of 70.5 minutes. The large velocity amplitude suggests the presence of a massive white dwarf in the system (M₂ ≳ 0.77 M☉) assuming a canonical mass for the hot subdwarf (0.48 M☉), although a white dwarf mass as low as 0.75 M☉ is allowable by postulating a subdwarf mass as low as 0.44 M☉. The amplitude of ellipsoidal variations and a high rotation velocity impose a high inclination on the system (i ≳ 68°) and, possibly, observable secondary transits (i ≳ 74°). At the lowest permissible inclination and assuming a subdwarf mass of ≈0.48 M☉, the total mass of the system reaches the Chandrasekhar mass limit at 1.35 M☉ and would exceed it for a subdwarf mass above 0.48 M☉. The system should be considered, like its sibling KPD 1930+2752, a candidate progenitor for a Type Ia supernova. The system should become semi-detached and initiate mass transfer within ≈30 Myr.

Methods and systems for using one or more radio frequency identification devices (RFIDs), or other suitable signal transmitters and/or receivers, to provide a sensor information communication path, to provide location and/or spatial orientation information for an emergency service worker (ESW), to provide an ESW escape route, to indicate a direction from an ESW to an ES appliance, to provide updated information on a region or structure that presents an extreme environment (fire, hazardous fluid leak, underwater, nuclear, etc.) in which an ESW works, and to provide accumulated thermal load or thermal breakdown information on one or more locations in the region.

The present invention relates to a dispersive spectrometer. The spectrometer allows detection of multiple orders of light on a single focal plane array by splitting the orders spatially using a dichroic assembly. A conventional dispersion mechanism, such as a diffraction grating, disperses the light spectrally. As a result, multiple wavelength orders can be imaged on a single focal plane array of limited spectral extent, doubling (or more) the number of spectral channels as compared to a conventional spectrometer. In addition, this is achieved in a common-path device.

The California Partners for Advanced Transit and Highways (PATH) researches methods for increasing highway safety, reducing congestion, and minimizing pollution and energy consumption. Intellimotion is one of its publications that highlights some of the current projects. Although it is labeled as a quarterly newsletter, Intellimotion is released on a very irregular basis. The 2002 issue covers several stories, including a project that makes vehicle navigation with the Global Positioning System extremely accurate. Another article looks at intelligent transportation systems and the issues regarding Bus Rapid Transit. Many past issues of Intellimotion are available on this Web site. This site is also reviewed in the October 25, 2002 Scout Report.

As part of a Superfund Innovative Technology Evaluation (SITE) field program, a Fourier transform infrared (FTIR) spectrometer was used to make open-path measurements of volatile organic compounds in the New Castle, Delaware, area. The SITE program requires that new technologies b...

Visitors to the FDR Memorial in Washington, D.C., enter the area through ceremonial openings: from the pathway around the reflecting pond of the Jefferson Memorial, or across a small shaded plaza reached from a roadway parallel to the Potomac River. The FDR Memorial itself cannot be seen at the start of either of these paths. It is out there

Over 100 million gallons of radioactive and toxic waste materials generated in weapons materials production are stored in 322 tanks buried within large areas at DOE sites. Toxic vapors occur in the tank headspace due to the solvents used and chemical reactions within the tanks. To prevent flammable or explosive concentrations of volatile vapors, the headspaces are vented, either manually or automatically, to the atmosphere when the headspace pressure exceeds preset values. Furthermore, 67 of the 177 tanks at the DOE Hanford Site are suspected or known to be leaking into the ground. These underground storage tanks are grouped into tank farms which contain closely spaced tanks in areas as large as 1 km². The objective of this program is to protect DOE personnel and the public by monitoring the air above these tank farms for toxic air pollutants without the monitor entering the tank farms, which can be radioactive. A secondary objective is to protect personnel by monitoring the air above buried 50-gallon drums containing moderately low-level radioactive materials which could also emit toxic air pollutants.

Students will complete several activities in which they will describe, draw, examine and explore paths. Activities range from describing, drawing and exploring local paths (roads/sidewalks to school, hiking trails, trails in the local school environment, etc.) to comparing and contrasting larger-scale paths (streets, bridges, runways, rivers) on maps and in satellite images of three major world cities. NASA satellite images of Boston, Paris and Houston are included in the lesson. This investigation also introduces students to the need for "ground truthing." The URL opens to the investigation directory, with links to teacher and student materials, lesson extensions, resources, teaching tips, and assessment strategies. The teacher's guide will begin with a two-page module overview and list of all standards addressed. This is Investigation 1 of four found in the Grades K-4 Module 4 of Mission Geography. The Mission Geography curriculum integrates data and images from NASA missions with the National Geography Standards. Each of the four investigations in Module 4, while related, can be done independently.

Path entanglement is a key resource for quantum metrology. Using path-entangled states, the standard quantum limit can be beaten, and the Heisenberg limit can be achieved. However, the preparation and detection of such states scales unfavourably with the number of photons. Here we introduce sequential path entanglement, in which photons are distributed across distinct time bins with arbitrary separation, as a resource for quantum metrology. We demonstrate a scheme for converting polarization Greenberger-Horne-Zeilinger entanglement into sequential path entanglement. We observe the same enhanced phase resolution expected for conventional path entanglement, independent of the delay between consecutive photons. Sequential path entanglement can be prepared comparably easily from polarization entanglement, can be detected without using photon-number-resolving detectors, and enables novel applications.

We construct the path integral for determining the potential on any dendritic tree described by a linear cable equation. This is done by generalizing Brownian motion from a line to a tree. We also construct the path integral for dendritic structures with spatially varying and/or time-dependent membrane conductivities due, for example, to synaptic inputs. The path integral allows novel computational techniques

Given two metastable states A and B of a biomolecular system, the problem is to calculate the likely paths of the transition from A to B. Such a calculation is more informative and more manageable if done for a reduced set of collective variables chosen so that paths cluster in collective variable space. The computational task becomes that of computing the center of such a cluster. A good way to define the center employs the concept of a committor, whose value at a point in collective variable space is the probability that a trajectory at that point will reach B before A. The committor foliates the transition region into a set of isocommittors. The maximum flux transition path is defined as a path that crosses each isocommittor at a point which (locally) has the highest crossing rate of distinct reactive trajectories. This path is based on the same principle as the minimum resistance path of Berkowitz et al (1983), but it has two advantages: (i) the path is invariant with respect to a change of coordinates in collective variable space and (ii) the differential equations that define the path are simpler. It is argued that such a path is nearer to an ideal path than others that have been proposed with the possible exception of the finite-temperature string method path. To make the calculation tractable, three approximations are introduced, yielding a path that is the solution of a nonsingular two-point boundary-value problem. For such a problem, one can construct a simple and robust algorithm. One such algorithm and its performance is discussed. PMID:20890401

Efforts to give an improved mathematical meaning to Feynman's path integral formulation of quantum mechanics started soon after its introduction and continue to this day. In the present paper, one common thread of development is followed over many years, with contributions made by various authors. The present version of this line of development involves a continuous-time regularization for a general phase space path integral and provides, in the author's opinion at least, perhaps the optimal formulation of the path integral.

Recently, it has been shown that Absolute Parallelism (AP) geometry admits paths that are naturally quantized. These paths have been used to describe the motion of spinning particles in a background gravitational field. In the weak static gravitational field limit, the paths have been applied successfully to interpret the discrepancy in the motion of thermal neutrons in the Earth's gravitational field (the COW experiment). The aim of the present work is to explore the properties of the deviation equations corresponding to these paths. The deviation equations are derived and compared to the geodesic deviation equation of Riemannian geometry.

There are many possibilities for rewarding careers in the geosciences, including positions in academia, government, industry, and other parts of the private sector. How do you choose the right path to meet your goals and needs and find the right career? What are the tradeoffs and strategic moves that you should make at different stages in your career? Some of the pros and cons among soft-money research, government research, and management and industry positions are discussed from a personal perspective. In addition, this presentation will provide some perspective on different career choices as seen by program managers in funding agencies. The competing priorities between work life and private life are discussed, with some thoughts on compromising between "having it all" and finding what works for you.

The classical notions of continuity and mechanical causality are abandoned in order to reformulate the Quantum Theory starting from two principles: I) the intrinsic randomness of quantum processes at the microphysical level, II) the projective representations of symmetries of the system. The second principle determines the geometry and then a new logic for describing the history of events (Feynman's paths) that modifies the rules of classical probabilistic calculus. The notion of a classical trajectory is replaced by a history of spontaneous, random and discontinuous events. The theory is thus reduced to determining the probability distribution for such histories in accordance with the symmetries of the system. The representation of the logic in terms of amplitudes leads to Feynman's rules and, alternatively, its representation in terms of projectors results in the Schwinger trace formula.

Graph search represents a cornerstone of computer science and is employed when the best algorithmic solution to a problem consists in performing an analysis of a search space representing computational possibilities. Typically, in such problems it is crucial to determine the sequence of transitions performed that led to certain states. In this work we discuss how to adapt generic quantum search procedures, namely quantum random walks and Grover's algorithm, in order to obtain computational paths. We then compare these approaches in the context of tree graphs. In addition, we demonstrate that in a best-case scenario both approaches differ, performance-wise, by a constant-factor speedup of two, whilst still providing a quadratic speedup relative to their classical equivalents. We discuss the different scenarios that are better suited for each approach.
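
A classical state-vector simulation makes the quadratic speedup concrete. The sketch below (plain Python, with an illustrative problem size; it is not the paper's path-finding adaptation) runs standard Grover iterations, oracle sign flip followed by reflection about the mean, and shows the marked item dominating after about (π/4)·√N steps instead of the ~N/2 probes a classical scan would need.

```python
import math

def grover_success_probability(n_items, target):
    """Simulate Grover's algorithm on a classical state vector and
    return the probability of measuring the marked item."""
    amp = [1.0 / math.sqrt(n_items)] * n_items              # uniform superposition
    iterations = round((math.pi / 4) * math.sqrt(n_items))  # ~O(sqrt N) oracle queries
    for _ in range(iterations):
        amp[target] = -amp[target]                # oracle: flip the marked amplitude
        mean = sum(amp) / n_items                 # diffusion: reflect about the mean
        amp = [2 * mean - a for a in amp]
    return amp[target] ** 2

p = grover_success_probability(1024, target=7)
print(round(p, 3))  # close to 1.0 after ~25 iterations, vs ~512 expected classical probes
```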

Path and path deviation equations for charged, spinning and spinning charged objects in different versions of Kaluza-Klein (KK) theory using a modified Bazanski Lagrangian have been derived. The significance of motion in five dimensions, especially for a charged spinning object, has been examined. We have also extended the modified Bazanski approach to derive the path and path deviation equations of a test particle in a version of non-symmetric KK theory.

This paper describes the system design and performance of an optical path cross-connect (OPXC) system based on the wavelength path concept. The OPXC is designed to offer 16 sets of input and output fiber ports, with each fiber transporting eight multiwavelength signals for optical paths. Each optical path has a capacity of 2.5 Gb/s. Consequently, the total system throughput is 8 × 16 × 2.5 Gb/s = 320 Gb/s.

Algorithms to construct optimal-path maps for single isolated homogeneous-cost convex-polygonal regions are discussed. Assuming the ability to construct optimal paths for a certain set of key points, a complete analysis is given of one of the four possible single-region situations, showing how to partition the map into regions of similar path behavior. An algorithm is then proposed for constructing optimal-path

The optical path (OP) technology, which employs both wavelength-division multiplexing and wavelength routing, will be the key to enhanced network integrity and a ubiquitous broadband integrated services digital network (B-ISDN) in the future. To construct the OP network, a path accommodation design that can simultaneously solve the problems of path routing and wavelength assignment must be established. Since optical wavelengths are

Unmanned ground vehicles (UGVs) will play an increasingly important role in future battlefields. How to automatically guide and control UGVs under varying environmental conditions represents a challenging issue. This paper presents a novel approach to achieving path planning and path tracking of UGVs under dynamic environments. We apply topology theory to find the optimal path given any starting

We present a path perturbation algorithm which can maximize users' location privacy given a quality-of-service constraint. This work concentrates on a class of applications that continuously collect location samples from a large group of users, where merely removing user identifiers from all samples is insufficient because an adversary could use trajectory information to track paths and follow users.

This interactive visual 'path finder' from the Concord Consortium allows users to explore the component pieces of the Next Generation Science Standards. After selecting the appropriate practices, core ideas, and crosscutting concepts, the path finder will suggest relevant resources from the Concord Consortium's collection.

This paper proposes two kinds of control-path oriented workflow knowledge analysis approaches, which are applied to a workflow intelligence and quality improvement framework aiming at a high degree of workflow traceability and rediscoverability. The framework needs two kinds of algorithms: one is for generating the total sequences of the control-paths from a workflow model,

This document describes an evaluation of the baseline and two alternative disposition paths for the final disposition of the calcine wastes stored at the Idaho Nuclear Technology and Engineering Center at the Idaho National Engineering and Environmental Laboratory. The pathways are evaluated against a prescribed set of criteria and a recommendation is made for the path forward.

In this paper, we propose a novel path-based control method for generating realistic smoke animations. Our method allows an animator to specify a 3D curve for the smoke to follow. Path control is then achieved using a linear (closed) feedback loop to match the velocity field obtained from a 3D flow simulation with a target velocity field. The target velocity

In this paper we analyze the efficacy of basic path loss models at predicting median path loss in urban environments. We attempt to bound the practical error of these models and look at how they may hinder practical wireless applications, and in particular dynamic spectrum access networks. This analysis is made using a large set of measurements from production networks

From the humble to the spectacular, naturally occurring objects exhibiting dendritic shape include lichens, coral, trees, river systems, lightning, and crystals. Such structures can be grown as least-cost paths through a lattice, with multiple paths emanating from a single starting location (or generator

This short note introduces an interesting random walk on a circular path with numbered cards. Using high-school probability theory, it is proved that, under some assumptions on the number of cards, the probability that a walker will return to a fixed position tends to one as the length of the circular path tends to infinity.
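
The card-based walk itself is defined in the note; as a hedged stand-in, the sketch below estimates the return probability for a plain symmetric walk on a circular path by Monte Carlo. The path length, step budget, and trial count are illustrative assumptions, not parameters from the note.

```python
import random

def return_probability(n_positions, n_steps, n_trials, seed=1):
    """Estimate the probability that a symmetric random walk on a
    circular path of n_positions returns to its start within n_steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(n_trials):
        pos = 0
        for _ in range(n_steps):
            pos = (pos + rng.choice((-1, 1))) % n_positions
            if pos == 0:               # walker is back at the fixed position
                returned += 1
                break
    return returned / n_trials

print(return_probability(12, 500, 2000))  # close to 1: the finite walk is recurrent
```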

We present a novel test generation technique for path delay faults, based on the growth (G) and disappearance (D) faults of programmable logic arrays (PLAs). The circuit is modeled as a PLA that is prime and irredundant with respect to every output. Certain tests for G faults, generated by using known efficient methods, are transformed into tests for path delay

Extinction appears ubiquitously in many fields, including chemical reactions, population biology, evolution and epidemiology. Even though extinction as a random process is a rare event, its occurrence is observed in large finite populations. Extinction occurs when fluctuations owing to random transitions act as an effective force that drives one or more components or species to vanish. Although there are many random paths to an extinct state, there is an optimal path that maximizes the probability of extinction. In this paper, we show that the optimal path is associated with the dynamical-systems idea of having maximal sensitive dependence on initial conditions. Using the equivalence between the sensitive dependence and the path to extinction, we show that the dynamical-systems picture of extinction evolves naturally towards the optimal path in several stochastic models of epidemics. PMID:21571943

Today's Internet uses a path vector routing protocol, BGP, for global routing. After a connectivity change, a path vector protocol tends to explore a potentially large number of alternative paths before converging on new stable paths. Several techniques for improving path vector convergence have been proposed; however, there has been no comparative analysis to judge the relative merit of each

Complex real-world action and its prediction and control has escaped analysis by the classical methods of psychological research. The reason is that psychologists have no procedures to parse complex tasks into their constituents. Where such a division can be made, based say on expert judgment, there is no natural scale to measure the positive or negative values of the components. Even if we could assign numbers to task parts, we lack rules, i.e., a theory, to combine them into a total task representation. We compare here two plausible theories for the amalgamation of the value of task components. Both of these theories require a numerical representation of motivation, for motivation is the primary variable that guides choice and action in well-learned tasks. We address this problem of motivational quantification and performance prediction by developing psychophysical scales of the desirability or aversiveness of task components based on utility scaling methods (Galanter 1990). We modify methods used originally to scale sensory magnitudes (Stevens and Galanter 1957), and that have been applied recently to the measure of task 'workload' by Gopher and Braune (1984). Our modification uses utility comparison scaling techniques which avoid the unnecessary assumptions made by Gopher and Braune. Formulas for the utility of complex tasks based on the theoretical models are used to predict decision and choice of alternate paths to the same goal.

The quantum theory of cosmological perturbations in single-field inflation is formulated in terms of a path integral. Starting from a canonical formulation, we show how the free propagators can be obtained from the well-known gauge-invariant quadratic action for scalar and tensor perturbations, and determine the interactions to arbitrary order. This approach does not require the explicit solution of the energy and momentum constraints, a novel feature which simplifies the determination of the interaction vertices. The constraints and the necessary imposition of gauge conditions is reflected in the appearance of various commuting and anticommuting auxiliary fields in the action. These auxiliary fields are not propagating physical degrees of freedom but need to be included in internal lines and loops in a diagrammatic expansion. To illustrate the formalism we discuss the tree-level three-point and four-point functions of the inflaton perturbations, reproducing the results already obtained by the methods used in the current literature. Loop calculations are left for future work.

The combination of automatic image acquisition and automatic image analysis of premature chromosome condensation (PCC) spreads was tested as a rapid biodosimeter protocol. Human peripheral lymphocytes were irradiated with 60Co gamma rays in a single dose of between 1 and 20 Gy, stimulated with phytohaemagglutinin and incubated for 48 h, division-blocked with Colcemid, and PCC-induced by Calyculin A. Images of chromosome spreads were captured and analysed automatically by combining the Metafer 4 and CellProfiler platforms. Automatic measurement of chromosome lengths allows the calculation of the length ratio (LR) of the longest and the shortest piece, which can be used for dose estimation since this ratio is correlated with ionizing radiation dose. The LR of the longest and the shortest chromosome pieces showed the best goodness-of-fit to a linear model in the dose interval tested. The application of the automatic analysis increases the potential use of the PCC method for triage in the event of massive radiation casualties. PMID:24789085

We present an optical method based on fluorescence spectroscopy for measuring chromophore concentrations in vivo. Fluorescence differential path length spectroscopy (FDPS) determines chromophore concentration based on the fluorescence intensity corrected for absorption. The concentration of the photosensitizer m-THPC (Foscan®) was studied in vivo in normal rat liver, which is highly vascularized and therefore highly absorbing. Concentration estimates of m-THPC measured by FDPS on the liver are compared with chemical extraction. Twenty-five rats were injected with 0.3 mg/kg m-THPC. In vivo optical concentration measurements were performed on tissue 3, 24, 48, and 96 h after m-THPC administration to yield a 10-fold variation in tissue concentration. After the optical measurements, the liver was harvested for chemical extraction. FDPS showed good correlation with chemical extraction. FDPS also showed a correlation between m-THPC fluorescence and blood volume fraction at the two shortest drug-light intervals. This suggests different compartmental localization of m-THPC for different drug-light intervals that can be resolved using fluorescence spectroscopy. Differences in measured m-THPC concentration between FDPS and chemical extraction are related to the interrogation volume of each technique; ~0.2 mm³ and ~10² mm³, respectively. This indicates intra-animal variation in m-THPC distribution in the liver on the scale of the FDPS sampling volume.

Robot path planning can refer either to a mobile vehicle such as a Mars Rover, or to an end effector on an arm moving through a cluttered workspace. In both instances there may exist many solutions, some of which are better than others, either in terms of distance traversed, energy expended, or joint angle or reach capabilities. A path planning program has been developed based upon a genetic algorithm. This program assumes global knowledge of the terrain or workspace, and provides a family of good paths between the initial and final points. Initially, a set of valid random paths is constructed. Successive generations of valid paths are obtained using one of several possible reproduction strategies similar to those found in biological communities. A fitness function is defined to describe the goodness of the path, in this case including length, slope, and obstacle avoidance considerations. It was found that with some reproduction strategies, the average value of the fitness function improved for successive generations, and that by saving the best paths of each generation, one could quite rapidly obtain a collection of good candidate solutions.
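
A minimal sketch of this style of genetic path planner follows. The grid size, obstacle wall, waypoint encoding, and reproduction strategy (elitist selection, one-point crossover, waypoint mutation) are all illustrative assumptions; this is an illustration of the approach, not the program described above.

```python
import random

GRID = 10                                   # hypothetical 10x10 workspace
OBSTACLES = {(4, y) for y in range(2, 9)}   # a wall with a gap
START, GOAL = (0, 0), (9, 9)

def fitness(path):
    """Lower is better: Manhattan path length plus a heavy obstacle penalty."""
    pts = [START] + path + [GOAL]
    length = sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(pts, pts[1:]))
    penalty = 100 * sum(p in OBSTACLES for p in pts)
    return length + penalty

def random_path(rng, n_waypoints=4):
    return [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(n_waypoints)]

def evolve(generations=60, pop_size=40, seed=3):
    rng = random.Random(seed)
    pop = [random_path(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # elitist selection: keep the best
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.3:               # mutation: perturb one waypoint
                i = rng.randrange(len(child))
                child[i] = (rng.randrange(GRID), rng.randrange(GRID))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the best paths of each generation are retained, the best fitness is non-increasing over generations, matching the behavior reported in the abstract.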

The personalized urban multi-criteria quasi-optimum path problem (PUMQPP) is a branch of multi-criteria shortest-path problems (MSPPs) and is classified as an NP-hard problem. To solve the PUMQPP while considering dependent criteria in route selection, there is a need for approaches that achieve the best compromise among possible solutions/routes. Recently, the invasive weed optimization (IWO) algorithm was introduced and used as a novel algorithm to solve many continuous optimization problems. In this study, a modified IWO algorithm was designed, implemented, evaluated, and compared with the genetic algorithm (GA) to solve the PUMQPP in a directed urban transportation network. In comparison with the GA, the results show the significant superiority of the proposed modified IWO algorithm in exploring a discrete search space of the urban transportation network. In this regard, the proposed modified IWO algorithm has reached better results in fitness-function, quality-metric and running-time values in comparison with those of the GA.

A complex series of evolutionary steps, contingent upon a dynamic environmental context and a long biological heritage, have led to the ascent of Homo sapiens as a dominant component of the modern biosphere. In a field where missing links still abound and new discoveries regularly overturn theoretical paradigms, our understanding of the path of human evolution has made tremendous advances in recent years. Two major trends characterize the development of the hominin clade subsequent to its origins with the advent of upright bipedalism in the Late Miocene of Africa. One is a diversification into two prominent morphological branches, each with a series of 'twigs' representing evolutionary experimentation at the species or subspecies level. The second important trend, which in its earliest manifestations cannot clearly be ascribed to one or the other branch, is the behavioral complexity of an increasing reliance on technology to expand upon limited inherent morphological specializations and to buffer the organism from its environment. This technological dependence is directly associated with the expansion of hominin range outside Africa by the genus Homo, and is accelerated in the sole extant form Homo sapiens through the last 100 Ka. There are interesting correlates between the evolutionary and behavioral patterns seen in the hominin clade and environmental dynamics of the Neogene. In particular, the tempo of morphological and behavioral innovation may be tracking major events in Neogene climatic development as well as reflecting intervals of variability or stability. Major improvements in analytical techniques, coupled with important new collections and a growing body of contextual data are now making possible the integration of global, regional and local environmental archives with an improved biological understanding of the hominin clade to address questions of coincidence and causality.

Transports along paths in fibre bundles are axiomatically introduced. Their general functional form and some of their simple properties are investigated. The relationships between transports along paths and liftings of paths are studied.

A simulation of a mathematical model to compute path discrepancies between great circle and rhumb line flight paths is presented. The model illustrates that the path errors depend on the latitude, the bearing, and the trip length of the flight.
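
The model's core computation can be sketched with the standard spherical formulas: the haversine great-circle distance versus the loxodrome (rhumb-line) distance, whose difference grows with latitude, bearing, and trip length as the abstract notes. The coordinates below (roughly New York to London) and the mean Earth radius are illustrative assumptions; both formulas are the usual spherical approximations.

```python
import math

R = 6371.0  # mean Earth radius, km

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance on a sphere."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def rhumb_line_km(lat1, lon1, lat2, lon2):
    """Constant-bearing (loxodrome) distance on a sphere."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    # Stretched latitude difference from the Mercator projection.
    dpsi = math.log(math.tan(math.pi / 4 + p2 / 2) / math.tan(math.pi / 4 + p1 / 2))
    q = dp / dpsi if abs(dpsi) > 1e-12 else math.cos(p1)  # east-west limit
    return R * math.hypot(dp, q * dl)

gc = great_circle_km(40.7, -74.0, 51.5, -0.1)
rl = rhumb_line_km(40.7, -74.0, 51.5, -0.1)
print(round(gc), round(rl))  # the rhumb line is never shorter than the great circle
```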

The equilibrial path concept is further developed. Special attention is paid to symmetry conservation along equilibrial paths and to symmetry-breaking. Symmetry-breaking can occur only at singular points. The simple singular points of an equilibrial path are valley-ridge inflection points. In contrast to intrinsic reaction paths and gradient extremal paths, equilibrial paths make it possible to describe the branching of reaction

The performance of single-dish radio antennas or telescopes depends on the surface accuracy of the reflectors in the beam path and on the focus/pointing errors induced by deviations/misalignment of the reflectors from a desired direction. For multiple-dish VLBI arrays an additional mechanical effect, the path length stability, is a further source of performance degradation. For application at higher frequencies, environmental influences such as wind and temperature have to be considered in addition to the usually required manufacturing and alignment accuracies. Active measurement ("metrology") of the antenna deformations and their compensation by "active optics" (AO) or "flexible body compensation" (FBC) are established methods. For path length errors, however, AO and FBC are not yet established methods. The paper describes how to handle the path length errors and the related metrology analogously to the established methods used for surface and focus/pointing error corrections.

Recently, link prediction in complex networks has seen a surge of research, among which similarity-based link prediction has gained considerable success, especially similarity in terms of paths. In investigating path-based similarity, we find that the effective influence of endpoints and strong connectivity make paths contribute more similarity between two unconnected endpoints, leading to more accurate link prediction. Accordingly, we propose a so-called effective path index (EP) in this paper to leverage the effective influence of endpoints and strong connectivity in the similarity calculation. To demonstrate the merit of our index, comparisons with six mainstream indices were performed on 15 real datasets, and the results show a great improvement in performance via our index.
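
The paper's EP index is not reproduced here, but the general idea of path-based similarity can be sketched as a generic local path score: count length-2 paths between an unconnected pair and add length-3 paths damped by a small ε. The graph and the ε value below are illustrative assumptions.

```python
def path_similarity(adj, x, y, eps=0.01):
    """Score an unconnected pair (x, y) by counting length-2 and
    length-3 paths between them; longer paths are damped by eps.
    A generic local-path score, not the paper's EP index."""
    paths2 = sum(1 for m in adj[x] if y in adj[m])
    paths3 = sum(1 for m in adj[x] for n in adj[m]
                 if n not in (x, y) and y in adj[n])
    return paths2 + eps * paths3

# Tiny illustrative undirected graph: edges 0-1, 0-3, 1-2, 1-3, 2-3.
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
print(path_similarity(adj, 0, 2))  # two length-2 paths plus damped length-3 paths
```

Ranking all unconnected pairs by such a score and predicting the top-ranked ones as future links is the standard evaluation setup the abstract refers to.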

A unified classical path theory of pressure broadening is derived using only elementary concepts. It is shown that the theory of Smith, Cooper and Vidal is only correct at all frequencies to first order in the number density of perturbers.

Demonstrates erroneous results obtained if change in a vector under parallel transport about a closed path in Riemannian spacetime is made in a complete circuit rather than just half a circuit. (Author/SL)

A specific proposal is discussed for how to identify decohering paths in a wavefunction of the universe. The emphasis is on determining the correlations among subsystems and then considering how these correlations evolve. The proposal is similar to earlier ideas of Schroedinger and of Zeh, but in other ways it is closer to the decoherence functional of Griffiths, Omnes, and Gell-Mann and Hartle. There are interesting differences with each of these which are discussed. Once a given coarse-graining is chosen, the candidate paths are fixed in this scheme, and a single well defined number measures the degree of decoherence for each path. The normal probability sum rules are exactly obeyed (instantaneously) by these paths regardless of the level of decoherence. Also briefly discussed is how one might quantify some other aspects of classicality. The important role that concrete calculations play in testing this and other proposals is stressed.

destination following weighted time-optimal and effort-optimal control paths. Simulation of this herding problem is accomplished through dynamic programming by utilizing the SNOPT software in the MATLAB environment. The numerical solution gives us the optimal...

development for entertainment applications and many classes of simulations. We present a novel behavioral [figure caption: observed paths in black on the floor; a library exit (a) and a university hallway (b); a realistic walking]

The simple physics of a free particle reveals important features of the path-integral formulation of relativistic quantum theories. The exact quantum-mechanical propagator is calculated here for a particle described by the simple relativistic action proportional to its proper time. This propagator is nonvanishing outside the light cone, implying that spacelike trajectories must be included in the path integral. The propagator matches the WKB approximation to the corresponding configuration-space path integral far from the light cone; outside the light cone that approximation consists of the contribution from a single spacelike geodesic. This propagator also has the unusual property that its short-time limit does not coincide with the WKB approximation, making the construction of a concrete skeletonized version of the path integral more complicated than in nonrelativistic theory.

Discusses an alternate path to teaching introductory stoichiometry based on research findings. The recommendation is to use problems that can be solved easily by rapid mental calculation as well as by pure logic. (AIM)

Setting up a new lab is an exciting but challenging prospect. We discuss our experiences in finding a path to tackle some of the key current questions in cell biology and the hurdles that we have encountered along the way.

Galileo is reported to have stated, "Measure what is measurable and make measurable what is not so." My group's trajectory in cell biology has closely followed this philosophy, although it took some searching to find this path. PMID:24174456

Galileo is reported to have stated, Measure what is measurable and make measurable what is not so. My group's trajectory in cell biology has closely followed this philosophy, although it took some searching to find this path. PMID:24174456

exhaust, fuel to heat the steam is not a direct cost to producing power in the steam turbine. Increasing turbine efficiency, therefore, benefits the Lyondell facility by increasing power output. DESCRIPTION OF THE STEAM PATH AUDIT: The basic approach... National Industrial Energy Technology Conference, Houston, TX, April 22-23, 1992. ...steam path after the initial turbine inlet. The extraction valves cause a large pressure drop between the exhaust of stage 2 and the inlet to stage 3...

Many least-squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to infinity, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382

A description of the generation and evolution of ionospheric oxygen-ion conic distributions by electromagnetic ion-cyclotron-resonance heating is formulated in terms of a path integral. All of the relevant physics is contained in this path integral, which may be used to calculate measurable properties of the conic distribution. Although the presentation is applied to this specific ionospheric context, the treatment may be generalized to treat other diffusion problems of interest.

The popular Friis transmission formula is used to evaluate the amount of path loss in free space between the transmit and receive antennas for the design of wireless transceivers. It could also be used to estimate the path loss for the link of the on-body network when there is no barrier and no body surface in between. Paying special

An experiment conducted with the ATS-6 satellite to determine the additional path loss over free-space loss experienced by land-mobile communication links is described. This excess path loss is measured as a function of 1) local environment, 2) vehicle heading, 3) link frequency, 4) satellite elevation angle, and 5) street side. A statistical description of excess loss developed from the data

In this paper we investigate the potential benefits of coordinated congestion control for multipath data transfers, and contrast it with uncoordinated control. For static random path selections, we show the worst-case throughput performance of uncoordinated control behaves as if each user had but a single path (scaling like log(log(N))/log(N), where N is the system size, measured in number

WALDEN'S PATHS QUIZ: SYSTEM DESIGN AND IMPLEMENTATION. A Thesis by AVITAL JAYANT ARORA. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE...

Non-sensory (cognitive) inputs can play a powerful role in monitoring one's self-motion. Previously, we showed that access to spatial memory dramatically increases response precision in an angular self-motion updating task [1]. Here, we examined whether spatial memory also enhances a particular type of self-motion updating: angular path integration. "Angular path integration" refers to the ability to maintain an estimate of self-location after a rotational displacement by integrating internally-generated (idiothetic) self-motion signals over time. It was hypothesized that remembered spatial frameworks derived from vision and spatial language should facilitate angular path integration by decreasing the uncertainty of self-location estimates. To test this we implemented a whole-body rotation paradigm with passive, non-visual body rotations (ranging from 40 to 140 degrees) administered about the yaw axis. Prior to the rotations, visual previews (Experiment 1) and verbal descriptions (Experiment 2) of the surrounding environment were given to participants. Perceived angular displacement was assessed by open-loop pointing to the origin (0 degrees). We found that within-subject response precision significantly increased when participants were provided a spatial context prior to whole-body rotations. The present study goes beyond our previous findings by first establishing that memory of the environment enhances the processing of idiothetic self-motion signals. Moreover, we show that knowledge of one's immediate environment, whether gained from direct visual perception or from indirect experience (i.e., spatial language), facilitates the integration of incoming self-motion signals. PMID:20448337

Although the rules for interpreting local quantum theory imply discretization of process, Lorentz covariance is usually regarded as precluding time quantization. Nevertheless a time-discretized quantum representation of a redshifting spatially-homogeneous universe may be based on discrete-step Feynman paths carrying causal Lorentz-invariant action--paths that not only propagate the wave function but provide a phenomenologically-promising elementary-particle Hilbert-space basis. In a model under development, local path steps are at Planck scale while, at a much larger "wave-function scale", global steps separate successive wave-functions. Wave-function spacetime is but a tiny fraction of path spacetime. Electromagnetic and gravitational actions are "at a distance" in the Wheeler-Feynman sense while strong (color) and weak (isospin) actions, as well as the action of particle motion, are "local" in a sense paralleling the action of local field theory. "Nonmaterial" path segments and "trivial events" collaborate to define energy and gravity. Photons coupled to conserved electric charge enjoy privileged model status among elementary fermions and vector bosons. Although real path parameters provide no immediate meaning for "measurement", the phase of the complex wave function allows significance for "information" accumulated through "gentle" electromagnetic events involving charged matter and "soft" photons. Through its soft-photon content the wave function is an "information reservoir".

This paper examines a new building block for next-generation networks: SNAPP, or Stateless Network-Authenticated Path Pinning. SNAPP-enabled routers securely embed their routing decisions in the packet headers of a stream of traffic, effectively pinning a flow's path between sender and receiver. A sender can use the pinned path (even if routes subsequently change) by including the path embedding in later packet headers. This architectural building block decouples routing from forwarding, which greatly enhances the availability of a path in the face of routing misconfigurations or malicious attacks. To demonstrate the extreme flexibility of SNAPP, we show how it can support a wide range of applications, including sender-controlled paths, expensive route lookups, sender anonymity, and sender accountability. Our analysis shows that SNAPP's overhead is low, and the system is easily implemented in hardware. We believe that SNAPP is a worthy addition to the network architect's toolbox, enabling a variety of new designs and trade-offs.
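The chained-MAC idea behind such path pinning can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not SNAPP's actual header format: the per-router keys, the 8-byte truncated tag, and both function names are hypothetical.

```python
import hmac
import hashlib

def pin_hop(router_key: bytes, flow_id: bytes, next_hop: str, upstream: bytes) -> bytes:
    """A router extends the embedding with a MAC over its routing decision.
    The router keeps no per-flow state; the embedding itself is the proof."""
    msg = flow_id + next_hop.encode() + upstream
    return upstream + hmac.new(router_key, msg, hashlib.sha256).digest()[:8]

def verify_hop(router_key: bytes, flow_id: bytes, next_hop: str,
               upstream: bytes, embedding: bytes) -> bool:
    """On later packets, the router re-derives its slice of the sender-supplied
    embedding and compares in constant time."""
    expected = pin_hop(router_key, flow_id, next_hop, upstream)
    return hmac.compare_digest(expected, embedding[:len(expected)])
```

A path A -> B -> C accumulates one tag per router; replaying the final embedding lets each router re-validate its own earlier decision without any routing lookup, which is one way to read "decoupling routing from forwarding".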

Effective navigation requires planning extended routes to remembered goal locations. Hippocampal place cells have been proposed to play a role in navigational planning but direct evidence has been lacking. Here, we show that prior to goal-directed navigation in an open arena, the hippocampus generates brief sequences encoding spatial trajectories strongly biased to progress from the subject's current location to a known goal location. These sequences predict immediate future behavior, even in cases when the specific combination of start and goal locations is novel. These results suggest that hippocampal sequence events previously characterized in linearly constrained environments as replay are also capable of supporting a goal-directed, trajectory-finding mechanism, which identifies important places and relevant behavioral paths, at specific times when memory retrieval is required, and in a manner which could be used to control subsequent navigational behavior. PMID:23594744

Holographic devices are expected to have a much larger capacity than conventional optical storage systems such as CD, DVD or blue diode laser based HD-DVD or BD. Recent developments in the field of dedicated recording materials and advanced optical enabling technologies are now opening the door for the realization of commercial products. One of the major technical challenges is the development of a robust and reliable system concept, which allows easy exchangeability of the medium. We developed a holographic tester system with common paths for the reference and the signal beams based on a single mode blue laser diode and a commercial CMOS detector. The system will be used to evaluate various multiplexing schemes, to investigate the influence of system tolerances on the reading performance and to estimate fundamental system limitations.

Nitrogenase catalyzed nitrogen fixation is the process by which life converts dinitrogen gas into fixed nitrogen in the form of bioavailable ammonia. The most common form of nitrogenase today requires a complex metal cluster containing molybdenum (Mo), although alternative forms exist which contain vanadium (V) or only iron (Fe). It has been suggested that Mo-independent forms of nitrogenase (V and Fe) were responsible for N(2) fixation on early Earth because oceans were Mo-depleted and Fe-rich. Phylogenetic- and structure-based examinations of multiple nitrogenase proteins suggest that such an evolutionary path is unlikely. Rather, our results indicate an evolutionary path whereby Mo-dependent nitrogenase emerged within the methanogenic archaea and then gave rise to the alternative forms suggesting that they arose later, perhaps in response to local Mo limitation. Structural inferences of nitrogenase proteins and related paralogs suggest that the ancestor of all nitrogenases had an open cavity capable of binding metal clusters which conferred reactivity. The evolution of the nitrogenase ancestor and its associated bound metal cluster was controlled by the availability of fixed nitrogen in combination with local environmental factors that influenced metal availability until a point in Earth's geologic history where the most desirable metal, Mo, became sufficiently bioavailable to bring about and refine the solution (Mo-nitrogenase) we see perpetuated in extant biology. PMID:22065963

Autonomy, often associated with an open and reflective evaluation of experience, is sometimes confused with reactance, which indicates resistance to persuasion attempts. Two studies examined a path model in which autonomy and reactance predicted motivation following the provision of anonymous or source-identified health-risk information, via the mediation of perceived threat to decision-making freedom and of perceived informational value. Study 1

A triggerable plasma opening switch for connecting a megavolt, megampere power supply to a load is described comprising: cathode means having an input end, an output end, and a switch portion between the ends; anode means having an input end, an output end, and a switch portion between the ends and spaced from the switch portion of the cathode means by a gap; whereby the power supply is connectable between the input ends and the load is connectable between the output ends; plasma source means for filling the gap with a plasma for providing a current path for shorting current from the load; and triggering means for generating a magnetic field for controllably moving the plasma away from one of the anode or the cathode to generate an insulating gap and to block the electron flow across the gap, thereby opening the switch and permitting current to flow from the power supply to the load.

When a covariance structure model is misspecified, parameter estimates will be affected. It is important to know which estimates are systematically affected and which are not. The approach of analyzing the path is both intuitive and informative for such a purpose. Different from path analysis, analyzing the path uses path tracing and elementary

An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.

In this paper, exact algorithms are proposed for addressing multicriteria adaptive path problems, where arc attributes are stochastic and time-varying. Adaptive paths comprise a set of path strategies that enable the traveler to select a direction among all Pareto-optimal solutions at each node in response to knowledge of the arrival time at the intermediate nodes. Such paths can be viewed
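The per-node ingredient of such adaptive strategies is a Pareto (dominance) filter over labels. A minimal sketch for two deterministic criteria, with hypothetical (cost, time) tuples standing in for the stochastic, time-varying arc attributes of the paper:

```python
def pareto_filter(labels):
    """Keep only non-dominated (cost, time) labels: after sorting by cost,
    a label survives only if it strictly improves the best time seen so far."""
    frontier, best_time = [], float("inf")
    for cost, time in sorted(set(labels)):
        if time < best_time:
            frontier.append((cost, time))
            best_time = time
    return frontier
```

An adaptive traveler would keep such a frontier at every node and pick among its entries on arrival, once the realized arrival time is known.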

We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 <= t <= 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: iterating Newton's method on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors.
These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance ||b - a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores. PMID:15133624
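To make the ingredients concrete, here is a hedged numerical sketch in the spirit of an elementary distance: it integrates the entropy H along the straight chord from a to b by the midpoint rule. Since the true metric is the infimum over all probability paths, this chord value is only an upper bound on it; the function name and step count are assumptions, and this is not the paper's Matlab procedure.

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i, with 0 log 0 = 0."""
    return -sum(x * math.log(x) for x in p if x > 0)

def chord_entropy_distance(a, b, steps=1000):
    """Midpoint-rule approximation of the path integral of H along the
    straight segment from probability vector a to b (an upper bound on the
    infimum over all admissible paths)."""
    length = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        p = [x + t * (y - x) for x, y in zip(a, b)]
        total += entropy(p)
    return total / steps * length
```

The chord stays inside the simplex because it is a convex combination of two probability vectors, so every intermediate p(t) is itself a valid distribution.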

Given an arbitrary Lagrangian function on R{sup d} and a choice of classical path, one can try to define Feynman's path integral supported near the classical path as a formal power series parameterized by 'Feynman diagrams', although these diagrams may diverge. We compute this expansion and show that it is (formally, if there are ultraviolet divergences) invariant under volume-preserving changes of coordinates. We prove that if the ultraviolet divergences cancel at each order, then our formal path integral satisfies a 'Fubini theorem' expressing the standard composition law for the time evolution operator in quantum mechanics. Moreover, we show that when the Lagrangian is inhomogeneous quadratic in velocity such that its homogeneous-quadratic part is given by a matrix with constant determinant, then the divergences cancel at each order. Thus, by 'cutting and pasting' and choosing volume-compatible local coordinates, our construction defines a Feynman-diagrammatic 'formal path integral' for the nonrelativistic quantum mechanics of a charged particle moving in a Riemannian manifold with an external electromagnetic field.

To obtain a well-defined path integral one often employs discretizations. In the case of gravity and reparametrization-invariant systems, the latter of which we consider here as a toy example, discretizations generically break diffeomorphism and reparametrization symmetry, respectively. This has severe implications, as these symmetries determine the dynamics of the corresponding system. Indeed we will show that a discretized path integral with reparametrization-invariance is necessarily also discretization independent and therefore uniquely determined by the corresponding continuum quantum mechanical propagator. We use this insight to develop an iterative method for constructing such a discretized path integral, akin to a Wilsonian RG flow. This allows us to address the problem of discretization ambiguities and of an anomaly-free path integral measure for such systems. The latter is needed to obtain a path integral that can act as a projector onto the physical states, satisfying the quantum constraints. We will comment on implications for discrete quantum gravity models, such as spin foams.

Path analysis, a form of general linear structural equation models, is used in studies of human genetics data to discern genetic, environmental, and cultural factors contributing to familial resemblance. It postulates a set of linear and additive parametric relationships between phenotypes and genetic and cultural variables and then essentially uses the assumption of multivariate normality to estimate and perform tests of hypothesis on parameters. Such an approach has been advocated for the analysis of genetic epidemiological data by D. C. Rao, N. Morton, C. R. Cloninger, L. J. Eaves, and W. E. Nance, among others. This paper reviews and evaluates the formulations, assumptions, methodological procedures, interpretations, and applications of path analysis. To give perspective, we begin with a discussion of path analysis as it occurs in the form of general linear causal models in several disciplines of the social sciences. Several specific path analysis models applied to lipoprotein concentrations, IQ, and twin data are then reviewed to keep the presentation self-contained. The bulk of the critical discussion that follows is directed toward the following four facets of path analysis: (1) coherence of model specification and applicability to data; (2) plausibility of modeling assumptions; (3) interpretability and utility of the model; and (4) validity of statistical and computational procedures. In the concluding section, a brief discussion of the problem of appropriate model selection is presented, followed by a number of suggestions of essentially model-free alternative methods of use in the treatment of complex structured data such as occurs in genetic epidemiology. PMID:6349335

The paper presents two path planners suitable for planetary rovers. The first is based on a fuzzy description of the terrain and a genetic algorithm to find a traversable path in rugged terrain. The second planner uses a global optimization method with a cost function equal to the path distance divided by the velocity limit obtained from consideration of the rover's static and dynamic stability. A description of both methods is provided, and results are given for the paths produced, which show the effectiveness of the path planners in finding near-optimal paths. The features of the methods and their suitability and application for rover path planning are compared.

A finite element method (FEM)-based surface wave propagation prediction simulator is developed. The simulator is tested and calibrated against analytical ray-mode models that also take into account the Millington recovery effects. It successfully calculates path losses over multimixed propagation paths at MF and HF frequency bands where the surface wave contribution is significant.

PATHS: Analysis of PATH Duration Statistics and their Impact on Reactive MANET Routing Protocols. ABSTRACT: We develop a detailed approach to study how mobility impacts the performance of reactive MANET routing protocols. Mobile Ad hoc NETworks (MANETs) are attracting a lot of attention from the research community

We have been developing path planning techniques to look for paths that balance the utility and risk associated with different routes through a minefield. Such methods will allow a battlegroup commander to evaluate alternative route options while searching for low risk paths. Extending previous years' efforts, we have implemented a generalized path planning framework to allow rapid evaluation and integration of new path planning algorithms. We have also implemented a version of Rapidly-Exploring Random Trees (RRTs) for mine path planning which integrates path risk, path time, and dynamic and kinematic concerns. Several variants of the RRT algorithm and our existing path planning algorithms were quantitatively evaluated using the generalized path planning framework and an algorithm-dynamic evaluation graphical user interface.
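A bare-bones version of the RRT family referred to above, stripped of the risk, time, and dynamic terms, can be sketched as follows; the 10 x 10 workspace, the circular mine "risk zones", the goal bias, and all parameter values are illustrative assumptions:

```python
import math
import random

def rrt(start, goal, mines, step=0.5, iters=4000, seed=1):
    """Minimal RRT: grow a tree by steering from the nearest node toward
    random samples, rejecting steps that land inside a mine's risk radius.
    mines is a list of (x, y, radius) tuples."""
    random.seed(seed)
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        # 10% goal bias, otherwise a uniform sample in the workspace
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if any(math.dist(new, (mx, my)) < r for mx, my, r in mines):
            continue  # step would enter a risk zone
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:  # close enough: walk parents back
            path = [goal, new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```

The framework in the excerpt would add risk and time terms to the extension test and a kinematic steering function in place of the straight-line step.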

This document presents a performance comparison between the per-domain path computation method and the Path Computation Element (PCE) Architecture based Backward Recursive Path Computation (BRPC) procedure. Metrics to capture the significant performance aspects are identified and detailed simulations are carried out on realistic scenarios. A performance analysis for each of the path computation methods is then undertaken.

We report molecular simulations of diffusion in confinement showing a phenomenon that we denote as molecular path control (MPC); depending on loading, molecules follow a preferred pathway. MPC raises the important question of the extent to which loading may affect the molecular trajectories in nanoporous materials. Through MPC one is able to manually adjust the ratio of the diffusivities through different types of pores, and as an application one can direct the flow of diffusing particles in membranes forward or sideward by simply adjusting the pressure, without the need for mechanical parts like valves. We show that the key ingredient of MPC is the anisotropic nature of the nanoporous material that results in a complex interplay between different diffusion paths as a function of loading. These paths may be controlled by changing the loading, either through a change in pressure or temperature. PMID:16109769

W. C. Gardiner observed that achieving understanding through combustion modeling is limited by the ability to recognize the implications of what has been computed and to draw conclusions about the elementary steps underlying the reaction mechanism. This difficulty can be overcome in part by making better use of reaction path analysis in the context of multidimensional flame simulations. Following a survey of current practice, an integral reaction flux is formulated in terms of conserved scalars that can be calculated in a fully automated way. Conditional analyses are then introduced, and a taxonomy for bidirectional path analysis is explored. Many examples illustrate the resulting path analysis and uncover some new results about nonpremixed methane-air laminar jets.

The quickest path problem deals with the transmission of a message of size σ from a source to a destination with the minimum end-to-end delay over a network with bandwidth and delay constraints on the links. The authors consider four basic modes and two variations for the message delivery at the nodes, reflecting mechanisms such as circuit switching, Internet protocol, and their combinations. For each of the first three modes, they present an O(m^2 + mn log n) time algorithm to compute the quickest path for a given message size σ. For the last mode, the quickest path can be computed in O(m + n log n) time.
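The structure exploited by such algorithms is that a path's end-to-end delay is the sum of its link delays plus σ divided by its bottleneck bandwidth. One can therefore try each distinct bandwidth value b, restrict the network to links with bandwidth >= b, and run an ordinary minimum-delay computation. The sketch below follows that classical reduction in spirit only; the function name and the undirected edge-list encoding are assumptions.

```python
import heapq

def quickest_path_delay(n, links, s, t, sigma):
    """links: (u, v, delay, bandwidth) undirected edges on nodes 0..n-1.
    For each bandwidth threshold b, keep only links with bandwidth >= b,
    find the minimum total delay s -> t by Dijkstra, and minimize
    delay + sigma / b over all thresholds."""
    best = float("inf")
    for b in {bw for _, _, _, bw in links}:
        adj = [[] for _ in range(n)]
        for u, v, d, bw in links:
            if bw >= b:
                adj[u].append((v, d))
                adj[v].append((u, d))
        dist = [float("inf")] * n
        dist[s] = 0.0
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist[u]:
                continue  # stale queue entry
            for v, d in adj[u]:
                if du + d < dist[v]:
                    dist[v] = du + d
                    heapq.heappush(pq, (dist[v], v))
        if dist[t] < float("inf"):
            best = min(best, dist[t] + sigma / b)
    return best
```

Note how the optimum shifts with σ: a large message favors the high-bandwidth path even at higher hop delay, a small message favors the low-delay path.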

Abstract. A smooth-primitive constrained-optimization-based path-tracking algorithm for mobile robots that compensates for rough terrain, predictable vehicle dynamics, and vehicle mobility constraints has been developed, implemented, and tested on the DARPA LAGR platform. Traditional methods for the geometric path following control problem involve trying to meet position constraints at fixed or velocity-dependent look-ahead distances using arcs. We have reformulated the problem as an optimal control problem, using a trajectory generator that can meet arbitrary boundary state constraints. The goal state along the target path is determined dynamically by minimizing a utility function based on corrective trajectory feasibility and cross-track error. A set of field tests compared the proposed method to an implementation of the pure pursuit algorithm and showed that the smooth corrective trajectory constrained optimization approach exhibited higher performance than pure pursuit, achieving roughly four times lower average cross-track error and two times lower heading error.
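For reference, the pure pursuit baseline steers along a circular arc through a look-ahead point, and its curvature command reduces to one line: curvature = 2 * lateral offset / L^2, where L is the distance to the look-ahead point. The pose convention (x, y, heading in radians, world frame) is an assumption of this sketch:

```python
import math

def pure_pursuit_curvature(pose, lookahead_point):
    """Curvature of the arc from the vehicle through the look-ahead point:
    2 * y_L / L^2, with y_L the point's lateral offset in the vehicle frame."""
    x, y, heading = pose
    dx, dy = lookahead_point[0] - x, lookahead_point[1] - y
    # rotate the target into the vehicle frame to get its lateral offset
    lateral = -math.sin(heading) * dx + math.cos(heading) * dy
    L2 = dx * dx + dy * dy
    return 2.0 * lateral / L2
```

The excerpt's method replaces this fixed one-parameter arc family with corrective trajectories that can satisfy full boundary-state constraints, which is where the cross-track improvement comes from.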

Path integration is a navigation strategy widely observed in nature where an animal maintains a running estimate, called the home vector, of its location during an excursion. Evidence suggests it is both ancient and ubiquitous in nature, and has been studied for over a century. In that time, canonical and neural network models have flourished, based on a wide range of assumptions, justifications and supporting data. Despite the importance of the phenomenon, consensus and unifying principles appear lacking. A fundamental issue is the neural representation of space needed for biological path integration. This paper presents a scheme to classify path integration systems on the basis of the way the home vector records and updates the spatial relationship between the animal and its home location. Four extended classes of coordinate systems are used to unify and review both canonical and neural network models of path integration, from the arthropod and mammalian literature. This scheme demonstrates analytical equivalence between models which may otherwise appear unrelated, and distinguishes between models which may superficially appear similar. A thorough analysis is carried out of the equational forms of important facets of path integration including updating, steering, searching and systematic errors, using each of the four coordinate systems. The type of available directional cue, namely allothetic or idiothetic, is also considered. It is shown that on balance, the class of home vectors which includes the geocentric Cartesian coordinate system, appears to be the most robust for biological systems. A key conclusion is that deducing computational structure from behavioural data alone will be difficult or impossible, at least in the absence of an analysis of random errors. Consequently it is likely that further theoretical insights into path integration will require an in-depth study of the effect of noise on the four classes of home vectors. PMID:19962387
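In the geocentric Cartesian class that the analysis favors, the home-vector update is plain world-frame integration of idiothetic speed and heading. A deliberately noise-free sketch (the excerpt's point being that real comparisons must model exactly the noise this sketch omits):

```python
import math

def update_home_vector(hx, hy, speed, heading, dt):
    """Geocentric Cartesian home vector: accumulate the displacement implied
    by idiothetic speed and (allothetic or idiothetic) heading. The vector
    pointing home is then simply (-hx, -hy)."""
    hx += speed * math.cos(heading) * dt
    hy += speed * math.sin(heading) * dt
    return hx, hy
```

Egocentric polar variants store the same information as (distance, bearing) relative to the animal and must rotate the vector on every turn, which is one source of the systematic errors the classification helps compare.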

Gas path seals are discussed with emphasis on sealing clearance effects on engine component efficiency, compressor pressure ratio, and stall margin. Various case-rotor relative displacements, which affect gas path seal clearances, are identified. Forces produced by nonuniform sealing clearances and their effect on rotor stability are examined qualitatively, and recent work on turbine-blade-tip sealing for high temperatures is described. The need for active clearance control and for engine structural analysis is discussed. The functions of the internal-flow system and its seals are reviewed.

We discuss the integration of the SANDROS path planner into a general robot simulation and control package with the inclusion of a fast geometry engine for distance calculations. This creates a single system that allows the path to be computed, simulated, and then executed on the physical robot. The architecture and usage procedures are presented. Also, we present examples of its usage in typical environments found in our organization. The resulting system is as easy to use as the general simulation system (which is in common use here) and is fast enough (example problems are solved in seconds) to be used interactively on an everyday basis.

Conceived by the British Labor Government in the 1960's the Open University was viewed as a way to extend higher education to Britain's working class, but enrollment figures in classes that represent traditional academic disciplines show that the student population is predominantly middle class. Bringing education into the home presents numerous

Optimal-path maps tell robots or people the best way to reach a goal point from anywhere in a known terrain area, eliminating most of the need to plan during travel. The authors address the construction of optimal-path maps for two-dimensional polygonal weighted-region terrain, terrain partitioned into polygonal areas such that the cost per unit of distance traveled is homogeneous and isotropic within each area. This is useful for overland route planning across varied ground surfaces and vegetation. The authors propose a new algorithm that recursively partitions terrain into regions of similar optimal-path behavior, and defines corresponding path subspaces for these regions. This process constructs a piecewise-smooth function of terrain position whose gradient direction is everywhere the optimal-path direction, permitting quick path finding. The algorithm used is more complicated than the current path-caching and wavefront-propagation algorithms, but it gives more accurate maps requiring less space to represent. Experiments with an implementation confirm the practicality of the authors' algorithm.

Intradomain traffic engineering aims to make more efficient use of network resources within an autonomous system. Interior Gateway Protocols such as OSPF (Open Shortest Path First) and IS-IS (Intermediate System-Intermediate System) are commonly used to select the paths along which traffic is routed within an autonomous system. These routing protocols direct traffic based on link weights assigned by
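Both protocols reduce path selection to a shortest-path computation over the assigned link weights, which is why the weights are the traffic-engineering control knob. A Dijkstra sketch (the adjacency-dict encoding and the function name are assumptions, not OSPF's actual SPF implementation):

```python
import heapq

def spf_distances(adj, src):
    """Shortest-path-first computation over administrator-assigned link
    weights; adj maps a router to a list of (neighbor, weight) pairs."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

In a toy topology where A reaches D via B (weights 1 + 1) or via C (weights 2 + 2), raising the A-B weight from 1 to 5 silently diverts all A-to-D traffic onto the A-C-D path; choosing weights so such shifts balance load well is the optimization problem the excerpt alludes to.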

OpenMP lacks essential features for developing mission-critical software. In particular, it has no support for detecting and handling errors or even a concept of them. In this paper, the OpenMP Error Model Subcommittee reports on solutions under consideration for this major omission. We identify issues with the current OpenMP specification and propose a path to extend OpenMP with error-handling capabilities. We add a construct that cleanly shuts down parallel regions as a first step. We then discuss two orthogonal proposals that extend OpenMP with features to handle system-level and user-defined errors.

Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP 430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, this idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model that is derived for the problem at hand, coming up with a set of partial differential equations and boundary value conditions to solve these equations. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using the Ziegler-Nichols proportional-integral-derivative (PID) controller tuning technique. Finally, a brute force look-up table based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
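The Ziegler-Nichols step mentioned above maps an experimentally observed ultimate gain Ku and oscillation period Tu to PID gains via the classic table (Kp = 0.6 Ku, Ti = Tu/2, Td = Tu/8). A minimal sketch with a discrete PID loop for context; this is the generic tuning rule, not the thesis's quad-rotor controller:

```python
def zn_pid_gains(ku, tu):
    """Classic Ziegler-Nichols table for a full PID loop: Kp = 0.6 Ku,
    Ki = Kp / Ti = 1.2 Ku / Tu, Kd = Kp * Td = 0.075 Ku * Tu."""
    return 0.6 * ku, 1.2 * ku / tu, 0.075 * ku * tu

class PID:
    """Textbook discrete PID controller with rectangular integration."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Ku and Tu come from the tuning experiment itself: the proportional gain is raised until the closed loop sits at the edge of sustained oscillation, and that gain and oscillation period are recorded.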

Extensive work has been conducted on SPE analysis efforts: fault effects; non-uniform weathered layer analysis; MUNROU (material library incorporation, parallelization, and development of non-locking tets); and development of a unique continuum-based visco-plastic strain-rate-dependent material model. With corrected SPE data, the path is now set for a multipronged approach to fully understand experimental series shot effects.

Purpose: The purpose of this paper is to present qualitative evidence on the processes and forces that shape school administrator career paths. Design/methodology/approach: An embedded case study approach is used to understand more than 100 administrator career transitions within the Delaware education system. Semi-structured interview data were

Pickup and delivery problems discussed in the literature often allow for only particularly simple solutions in terms of the sequence of visited locations. We study the very simplest pickup and delivery paths, which are concatenations of short patterns visiting one or two requests. This restricted variant, still NP-hard, is close to the traveling salesman problem with

Derivation of a unified classical path theory of pressure broadening, using only elementary concepts. It is shown that the theory of Smith, Cooper and Vidal (1969) is only correct at all frequencies to first order in the number density of perturbers.

In this article we examine the career paths of top-level managers in the arts. By analysing the training and work history of 23 managers in a variety of arts organisations we evaluate the utility of several existing theories for understanding careers that are characterised by low levels of initial knowledge, the absence of a clear method of entry

The Lorentz gas is a model for a cloud of point particles (electrons) in a distribution of scatterers in space. The scatterers are often assumed to be spherical with a fixed diameter d, and the point particles move with constant velocity between the scatterers, and are specularly reflected when hitting a scatterer. There is no interaction between point particles. An interesting question concerns the distribution of free path lengths, i.e. the distance a point particle moves between the scattering events, and how this distribution scales with scatterer diameter, scatterer density and the distribution of the scatterers. It is by now well known that in the so-called Boltzmann-Grad limit, a Poisson distribution of scatterers leads to an exponential distribution of free path lengths, whereas if the scatterer distribution is periodic, the free path length distribution asymptotically behaves as a power law. This paper considers the case when the scatterers are distributed on a quasi crystal, i.e. non-periodically, but with a long range order. Simulations of a one-dimensional model are presented, showing that the quasi crystal behaves very much like a periodic crystal, and in particular, the distribution of free path lengths is not exponential.
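The Poisson-versus-periodic contrast is easy to reproduce in a one-dimensional toy model; a proper quasicrystal would need a quasiperiodic point set (e.g. a cut-and-project chain), which this sketch omits. The density, segment length, and placement rules are illustrative assumptions:

```python
import random

def free_path_lengths(density, length, seed=0, periodic=False):
    """1D toy Lorentz gas: drop point scatterers on a segment and return the
    gaps a particle travels between collisions. Uniform (Poisson-like)
    placement gives approximately exponential gaps with mean 1/density; a
    periodic lattice gives a single fixed gap."""
    n = int(density * length)
    if periodic:
        pts = [i / density for i in range(n)]
    else:
        rng = random.Random(seed)
        pts = sorted(rng.uniform(0, length) for _ in range(n))
    return [b - a for a, b in zip(pts, pts[1:])]
```

For the random placement the coefficient of variation of the gaps is near 1 (the exponential signature); for the lattice it is 0, and the quasicrystal case studied above sits with the lattice rather than with the Poisson gas.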

To steer a course through the world, people are almost entirely dependent on visual information, of which a key component is optic flow. In many models of locomotion, heading is described as the fundamental control variable; however, it has also been shown that fixating points along or near one's future path could be the basis of an efficient

DeWitt's covariant formulation of path integration [B. De Witt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations": there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987).

An on-axis, vibration insensitive, polarization Fizeau interferometer is realized through the use of a novel pixelated mask spatial carrier phase shifting technique in conjunction with a low coherence source and a polarization path matching mechanism. In this arrangement, coherence is used to effectively separate out the orthogonally polarized test and reference beam components for interference. With both the test and

Applications of the partial least squares (PLS) path modeling approach, which have gained increasing dissemination in business research, usually build on the assumption that the data stem from a single population. However, in empirical applications, this assumption of homogeneity is unrealistic. Analyses on the aggregate data level ignore the existence of groups with substantial differences and more often than not result in

California PATH Working Paper UCB-ITS-PWP-2001, prepared in cooperation with the State of California Business, Transportation, and Housing Agency, Department of Transportation: Part X, Freeway Service Patrols. David Levinson and Pavithra Kandadai Parthasarathi.

In robotics, path planning refers to finding a short, collision-free path from an initial robot configuration to a desired configuration. It has to be fast to support real-time task-level robot programming. Unfortunately, current planning techniques are still too slow to be effective, as they often require several minutes, if not hours, of computation. To remedy this situation, we present and analyze a learning algorithm that uses past experience to increase future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a speedup-learning framework in which a slow but capable planner may be improved both cost-wise and capability-wise by a faster but less capable planner coupled with experience. The basic algorithm is suitable for stationary environments, and can be extended to accommodate changing environments with on-demand experience repair and object-attached experience abstraction. To analyze the algorithm, we characterize the situations in which the adaptive planner is useful, provide quantitative bounds to predict its behavior, and confirm our theoretical results with experiments in path planning of manipulators. Our algorithm and analysis are sufficiently general that they may also be applied to other planning domains in which experience is useful.
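The speedup-learning loop described above can be sketched in a few lines (class and helper names are invented for illustration, not taken from the paper): a slow but capable planner answers queries the learned roadmap cannot, and each of its solutions is folded into a sparse graph of configurations that serves later queries.

```python
from collections import defaultdict

class ExperiencePlanner:
    """Illustrative sketch -- names are mine, not the paper's. A slow
    but capable planner provides solutions, which are learned into a
    sparse roadmap of useful configurations for faster planning."""

    def __init__(self, slow_planner, can_connect):
        self.slow_planner = slow_planner   # expensive fallback planner
        self.can_connect = can_connect     # cheap local connectivity test
        self.graph = defaultdict(set)      # learned sparse roadmap

    def plan(self, start, goal):
        path = self._search_roadmap(start, goal)
        if path is not None:
            return path, "roadmap"         # fast: reuse past experience
        path = self.slow_planner(start, goal)
        self._learn(path)                  # remember for future queries
        return path, "slow"

    def _learn(self, path):
        for a, b in zip(path, path[1:]):
            self.graph[a].add(b)
            self.graph[b].add(a)

    def _search_roadmap(self, start, goal):
        entries = [n for n in self.graph if self.can_connect(start, n)]
        exits = {n for n in self.graph if self.can_connect(n, goal)}
        seen, frontier = set(entries), [[e] for e in entries]
        while frontier:                    # BFS over learned configurations
            path = frontier.pop(0)
            if path[-1] in exits:
                full = [start] + path + [goal]
                # drop adjacent duplicates (start/goal may be roadmap nodes)
                return [p for i, p in enumerate(full)
                        if i == 0 or p != full[i - 1]]
            for nxt in sorted(self.graph[path[-1]] - seen):
                seen.add(nxt)
                frontier.append(path + [nxt])
        return None

# Toy demo: 1-D integer configurations, adjacency as the local connector
planner = ExperiencePlanner(
    slow_planner=lambda s, g: list(range(s, g + 1)),
    can_connect=lambda a, b: abs(a - b) <= 1,
)
path1, how1 = planner.plan(0, 5)   # first query: falls back to the slow planner
path2, how2 = planner.plan(0, 5)   # repeat query: answered from the roadmap
```

The on-demand repair and experience abstraction of the paper would act on `self.graph`, invalidating or generalizing edges as the environment changes.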

This study investigated the career paths of 625 university graduates who prepared to be secondary school teachers in Oman, their assessment of their current work situation, and the extent to which their initial commitment to teaching was related to their subsequent career satisfaction and intention to remain in teaching. While nearly all graduates

This article furthers research and theory on the initiation and development of service-learning partnerships. It identifies three paths of engagement between university and community agencies: tentative engagement, aligned engagement, and committed engagement. This conceptualization helps to understand how service-learning partnerships evolve over

This article reviews the path of funding higher education in Hungary, where funding cuts have resulted in understaffing, escalating tuition, growing student debt, and declining enrollment. Graduation rates are low, government policies favor vocational disciplines, and the system of preparation and access gives preference to students from wealthier

The authors consider the problem of planning paths for a robot which has a minimum turning radius. This is a first step towards accurately modeling a robot with the kinematics of a car. The technique used is to define a set of canonical trajectories which satisfy the nonholonomic constraints imposed. A configuration space can be constructed for these trajectories in

Fusion Development Path Panel Preliminary Report Summary for NRC BPAC Panel (Focus on MFE of a demonstration power plant in approximately 35 years. The plan should recognize the capabilities of all fusion facilities around the world, and include both magnetic fusion energy (MFE) and inertial fusion energy (IFE

In this paper, we address the problem of air path variables estimation for an HCCI engine. Two observers are proposed. Both rely on physical assumptions on the combustion, but use different sensors. After proving convergence in the two cases, we carry out comparisons based on simulation results. We stress the impact of two particular additional sensors

Increased utilization of transmission critical path by wide area real time dynamic rating using Internet based weather data is presented. Economic analysis shows significant benefit of operating transmission lines at higher ratings by taking advantage of favorable weather conditions during overload conditions. A new line current profile based dynamic line rating system has been implemented at Idaho power by real

The assumption that there is an oculomotor plant, a fixed relationship between motoneuron firing rate and eye position, is disproved by brainstem recording studies showing that this relationship depends on which supernuclear subsystem determines firing rate. But it remains possible that there is a final common path (FCP), a fixed relationship between firing rate and muscle force. But then,

In this paper, an optimized cell planning for path loss reduction has been proposed. A 10 by 10 km square block is considered as the traffic required area (TRA), which consists of a number of base stations (BSs) and mobile stations (MSs). The optimization algorithm, Tabu search, is used to optimize the position of base stations in such a way that the

To study carrier frequency effects on path loss, measurements have been conducted at four discrete frequencies in the range 460-5100 MHz. The transmitter was placed on the roof of a 36-meter-tall building and the receive antennas were placed on the roof of a van. Both urban and suburban areas were included in the measurement campaign. The results

This article provides an overview of the intellectual and sociopolitical roots of Iran's tortuous path toward Islamic liberalism and reform. It analyzes the shift in the ideological orientation of a major faction within the political elite from a radical to a relatively moderate and liberal interpretation of Islam. The authors trace the roots of this ideological shift to a series

Argues against community colleges opting for a quick fix in difficult times, a tactic designed for continuity rather than change. Adds that financial restrictions confronting community colleges call for drastic change in order to spur sustainable growth. Recommends five paths for strategic resource management and future advantage in the market,

Behavioral Path Analysis is both a theory and a methodology for studying person-environment interactions. It is designed to be applicable to the evaluation of both environments in use and proposed designed environments. This paper presents the basics of the theory, and some examples of recent applications that have guided its development. The

Paths to Creativity in Security Careers. Privacy, Security and Trust 2006, Dr. Gregory Newby. How can you encourage creative thinking? Is it obvious why this is key? These people need to be as creative, skilled and innovative as their adversaries.

Whenever we move the hand across a surface, tactile signals provide information about the relative velocity between the skin and the surface. If the system were able to integrate the tactile velocity information over time, cutaneous touch could provide an estimate of the relative displacement between the hand and the surface. Here, we asked whether humans are able to form a reliable representation of the motion path from tactile cues only, integrating motion information over time. To address this issue, we conducted three experiments using tactile motion and asked participants (1) to estimate the length of a simulated triangle, (2) to reproduce the shape of a simulated triangular path, and (3) to estimate the angle between two line segments. Participants were able to accurately indicate the length of the path, whereas the perceived direction was affected by a direction bias (inward bias). The response pattern was thus qualitatively similar to the ones reported in classical path integration studies involving locomotion. However, we explain the directional biases as the result of a tactile motion aftereffect. PMID:25151621

An adaptive optical system used to correct horizontal beam propagation paths has been demonstrated. This system utilizes an interferometric wave-front sensor and a large-actuator-number MEMS-based spatial light modulator to correct the aberrations incurred by the beam after propagation along the path. Horizontal path correction presents a severe challenge to adaptive optics systems due to the short atmospheric transverse coherence length and the high degree of scintillation incurred by laser propagation along these paths. Unlike wave-front sensors that detect phase gradients, however, the interferometric wave-front sensor measures the wrapped phase directly. Because the system operates with nearly monochromatic light and uses a segmented spatial light modulator, it does not require that the phase be unwrapped to provide a correction and it also does not require a global reconstruction of the wave-front to determine the phase as required by gradient detecting wave-front sensors. As a result, issues with branch points are eliminated. Because the atmospheric probe beam is mixed with a large amplitude reference beam, it can be made to operate in a photon noise limited regime making its performance relatively unaffected by scintillation. The MEMS-based spatial light modulator in the system contains 1024 pixels and is controlled to speeds in excess of 800 Hz, enabling its use for correction of horizontal path beam propagation. In this article results are shown of both atmospheric characterization with the system and open loop horizontal path correction of a 1.53 micron laser by the system. To date Strehl ratios of greater than 0.5 have been achieved.

All gravitationally bound clusters expand, due to both gas loss from their most massive members and binary heating. All are eventually disrupted tidally, either by passing molecular clouds or the gravitational potential of their host galaxies. However, their interior evolution can follow two very different paths. Only clusters of sufficiently large initial population and size undergo the combined interior contraction and exterior expansion that leads eventually to core collapse. In all other systems, core collapse is frustrated by binary heating. These clusters globally expand for their entire lives, up to the point of tidal disruption. Using a suite of direct N-body calculations, we trace the `collapse line' in rv-N space that separates these two paths. Here, rv and N are the cluster's initial virial radius and population, respectively. For realistic starting radii, the dividing N-value is from 10^4 to over 10^5. We also show that there exists a minimum population, Nmin, for core collapse. Clusters with N < Nmin tidally disrupt before core collapse occurs. At the Sun's Galactocentric radius, R_G = 8.5 kpc, we find Nmin ≈ 300. The minimum population scales with Galactocentric radius as R_G^{-9/8}. The position of an observed cluster relative to the collapse line can be used to predict its future evolution. Using a small sample of open clusters, we find that most lie below the collapse line, and thus will never undergo core collapse. Most globular clusters, on the other hand, lie well above the line. In such a case, the cluster may or may not go through core collapse, depending on its initial size. We show how an accurate age determination can help settle this issue.

Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
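The predictor-corrector structure with the tangent-orthogonality condition can be illustrated on the smallest possible example: tracking the circle F(x, y) = x^2 + y^2 - 1 = 0. This is a sketch only; with two unknowns the augmented system can be solved directly by Cramer's rule, whereas the abstract's contribution concerns imposing the condition within Krylov iterations for large systems.

```python
import math

def F(x, y):                       # one equation, two unknowns: a solution curve
    return x * x + y * y - 1.0

def gradF(x, y):                   # Jacobian of F (a 1 x 2 row)
    return 2.0 * x, 2.0 * y

def follow(steps=100, h=0.05):
    """Predictor-corrector continuation on F(x, y) = 0. The Newton
    corrector equation gradF . d = -F is underdetermined; it is
    augmented with the orthogonality condition t . d = 0 against the
    approximate tangent t, then solved as a 2x2 linear system."""
    x, y = 1.0, 0.0
    points = [(x, y)]
    for _ in range(steps):
        gx, gy = gradF(x, y)
        nrm = math.hypot(gx, gy)
        tx, ty = -gy / nrm, gx / nrm       # unit tangent (orthogonal to gradF)
        px, py = x + h * tx, y + h * ty    # predictor: cheap Euler step
        for _ in range(5):                 # corrector: constrained Newton
            gx, gy = gradF(px, py)
            f = F(px, py)
            det = gx * ty - gy * tx        # Cramer's rule on the 2x2 system
            px += -f * ty / det
            py += f * tx / det
        x, y = px, py
        points.append((x, y))
    return points

pts = follow()
print(max(abs(F(x, y)) for x, y in pts))   # residuals stay near machine precision
```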

Connection oriented packet networks such as IP/MPLS can better meet the QoS guarantees in terms of delay jitter, bandwidth, etc. required by premium traffic (PT). These connection oriented networks are more vulnerable to failures, which can be classified into link/path and degraded failures. The virtual path hopping concept eliminates all terminations of data communications due to degraded-type failures detected

We investigate the correspondence between a non-equilibrium ensemble defined via the distribution of phase-space paths of a Hamiltonian system, and a system driven into a steady-state by non-equilibrium boundary conditions. To discover whether the non-equilibrium path ensemble adequately describes the physics of a driven system, we measure transition rates in a simple one-dimensional model of rotors with Newtonian dynamics and purely conservative interactions. We compare those rates with known properties of the non-equilibrium path ensemble. In doing so, we establish effective protocols for the analysis of transition rates in non-equilibrium quasi-steady states. Transition rates between potential wells and also between phase-space elements are studied, and found to exhibit distinct properties, the more coarse-grained potential wells being effectively further from equilibrium. In all cases the results from the boundary-driven system are close to the path-ensemble predictions, but the question of equivalence of the two remains open.

to a victim path, speeding up crosstalk pattern generation. In order to induce maximum crosstalk slowdown along a path, aggressors are prioritized based on their potential delay increase and timing alignment. The test generation engine introduces...

A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of the true minimum energy path using some method of choice for evaluating the energy and atomic forces, for example by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to the true minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. Th...
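The core of the construction can be sketched for a toy system (helper names are mine; the 1/d^4 weighting that emphasizes short, chemically relevant distances is an assumption of this sketch): pairwise distances are interpolated linearly between the endpoint geometries, and each image is scored against its interpolated targets. The real method then relaxes the images on this surface with the nudged elastic band.

```python
import math

def _pair_dists(coords):
    """All pairwise distances {(i, j): d_ij} for a list of 3-D points."""
    n = len(coords)
    return {(i, j): math.dist(coords[i], coords[j])
            for i in range(n) for j in range(i + 1, n)}

def idpp_targets(r_init, r_final, n_images):
    """Linearly interpolated pairwise-distance targets for each image --
    the ingredient from which the IDPP surface is built."""
    d0, d1 = _pair_dists(r_init), _pair_dists(r_final)
    out = []
    for k in range(n_images):
        s = k / (n_images - 1)
        out.append({ij: (1.0 - s) * d0[ij] + s * d1[ij] for ij in d0})
    return out

def idpp_objective(coords, target):
    """S = sum_{i<j} (d_ij - d_ij^target)^2 / d_ij^4 for one image."""
    d = _pair_dists(coords)
    return sum((d[ij] - dt) ** 2 / d[ij] ** 4 for ij, dt in target.items())

# Two atoms separating from distance 1.0 to 3.0: the middle image's
# target distance is 2.0, and a geometry realizing it scores zero.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
targets = idpp_targets(a, b, n_images=3)
print(targets[1][(0, 1)])                                              # 2.0
print(idpp_objective([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], targets[1]))  # 0.0
```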

Knowledge of phonon mean free path (MFP) distribution is critically important to engineering size effects. Phenomenological models of phonon relaxation times can give us some sense about the mean free path distribution, ...

A Flight Path Generator is defined as the module of an automated Air Traffic Control system which plans aircraft trajectories in the terminal area with respect to operational constraints. The flight path plans have to be ...

Astronomical observations of the interstellar medium often struggle to measure fundamental physical properties of the gas on small scales because most observations are averaged along the line of sight, leading to difficulties in evaluating pressure equilibrium, turbulence, magnetic field structure, and volume density. The local ISM has helped in this regard by providing relatively simple ISM absorption profiles over short path lengths, with low column densities only detectable with strong transitions in the UV. On August 25, 2012, the first human-made object, the Voyager 1 spacecraft, crossed the heliosphere, effectively leaving the solar system and entering the galactic interstellar environment. Voyager 2 is expected to do the same in the coming years, and over the next decade both spacecraft will continue to make daily measurements of fundamental physical properties. We propose to make the first observations of nearby stars along the same line of sight as the current locations of the Voyager spacecraft in order to measure the same interstellar material. The proposed observations are of the very closest stars in these directions and will provide measurements of the kinematic structure, electron density, temperature and turbulence, elemental abundances and small-scale structure by comparing two closely spaced sight lines. With both HST and the Voyager spacecraft approaching the end of long and fruitful missions, we have the opportunity to acquire a unique dataset which synthesizes the independent and complementary in situ observations with the shortest possible line-of-sight observations, to provide an unprecedented study of the galactic ISM surrounding the Sun.

The Hamiltonian counterpart of classical Lagrangian field theory is covariant Hamiltonian field theory where momenta correspond to derivatives of fields with respect to all world coordinates. In particular, classical Lagrangian and covariant Hamiltonian field theories are equivalent in the case of a hyperregular Lagrangian, and they are quasi-equivalent if a Lagrangian is almost-regular. In order to quantize covariant Hamiltonian field theory, one usually attempts to construct and quantize a multisymplectic generalization of the Poisson bracket. In the present work, the path integral quantization of covariant Hamiltonian field theory is suggested. We use the fact that a covariant Hamiltonian field system is equivalent to a certain Lagrangian system on a phase space which is quantized in the framework of perturbative field theory. We show that, in the case of almost-regular quadratic Lagrangians, path integral quantizations of associated Lagrangian and Hamiltonian field theories are equivalent.

Simultaneous measurements from the ground of the spectral optical thickness and the atmospheric path radiance from over 30 sites located in many parts of the world and affected by several different aerosol types are reported. These measurements are used to derive the relationship between the optical thickness and the path radiance for a single viewing and illumination geometry and to discuss its implications on remote sensing observations. It is shown that simple measurements performed from the ground can yield empirical relationships that can be used to check some of the common but not validated assumptions about the particle homogeneity, sphericity, composition, and size distribution used in remote sensing models and in estimates of the radiative effects of aerosol. The results are used to test concepts of atmospheric corrections and remote sensing of aerosol from space.

This paper describes the development and testing of a path model of aircraft noise annoyance by using noise and social survey data collected in the vicinity of Toronto International Airport. Path analysis is used to estimate the direct and indirect effects of seventeen independent variables on individual annoyance. The results show that the strongest direct effects are for speech interference, attitudes toward aircraft operations, sleep interruption and personal sensitivity to noise. The strongest indirect effects are for aircraft Leq(24) and sensitivity. Overall the model explains 41 percent of the variation in the annoyance reported by the 673 survey respondents. The findings both support and extend existing statements in the literature on the antecedents of annoyance.

Path planning needs to be fast to facilitate real-time robot programming. Unfortunately, current planning techniques are still too slow to be effective, as they often require several minutes, if not hours of computation. To overcome this difficulty, we present an adaptive algorithm that uses past experience to speed up future performance. It is a learning algorithm suitable for automating flexible manufacturing in incrementally-changing environments. The algorithm allows the robot to adapt to its environment by having two experience manipulation schemes: For minor environmental change, we use an object-attached experience abstraction scheme to increase the flexibility of the learned experience; for major environmental change, we use an on-demand experience repair scheme to retain those experiences that remain valid and useful. Using this algorithm, we can effectively reduce the overall robot planning time by re-using the computation result for one task to plan a path for another.

The molecular mechanism of a reaction in solution is reflected in its transition-state ensemble and transition paths. We use a Bayesian formula relating the equilibrium and transition-path ensembles to identify transition states, rank reaction coordinates, and estimate rate coefficients. We also introduce a variational procedure to optimize reaction coordinates. The theory is illustrated with applications to protein folding and the dipole reorientation of an ordered water chain inside a carbon nanotube. To describe the folding of a simple model of a three-helix bundle protein, we variationally optimize the weights of a projection onto the matrix of native and nonnative amino acid contacts. The resulting one-dimensional reaction coordinate captures the folding transition state, with formation and packing of helix 2 and 3 constituting the bottleneck for folding. PMID:15814618
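The Bayesian relation can be illustrated with a toy discrete-state trajectory (a sketch of the basic idea, not the paper's variational machinery): estimate p(TP|x) as the fraction of visits to x that occur on a direct transition path between the two basins; its maximum locates transition-state-like states.

```python
from collections import Counter

def on_transition_path(traj, in_A, in_B):
    """Flag each frame that lies strictly between a visit to one basin
    and the next visit to the *other* basin (a direct A->B or B->A path)."""
    basin = ["A" if in_A(x) else "B" if in_B(x) else None for x in traj]
    nxt, last = [None] * len(traj), None
    for i in range(len(traj) - 1, -1, -1):   # basin next reached after frame i
        if basin[i] is not None:
            last = basin[i]
        nxt[i] = last
    flags, last = [], None
    for b, n in zip(basin, nxt):
        flags.append(b is None and last is not None
                     and n is not None and n != last)
        if b is not None:
            last = b
    return flags

def p_tp_given_x(traj, in_A, in_B):
    """Empirical p(TP|x): the fraction of visits to x on a transition path."""
    flags = on_transition_path(traj, in_A, in_B)
    visits = Counter(traj)
    tp = Counter(x for x, f in zip(traj, flags) if f)
    return {x: tp[x] / visits[x] for x in visits}

# 'm' is an intermediate state; its third visit is an A->A excursion,
# so it counts as on-path in 2 of its 3 visits.
traj = ["A", "A", "m", "B", "m", "A", "A", "m", "A"]
p = p_tp_given_x(traj, lambda s: s == "A", lambda s: s == "B")
```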

This paper explores the use of Path Integral Methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm in multidimensional movement parametrized policy learning. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods in the learning of optimal policy parameters, according to different cost functions that inherently encode movement objectives. Additionally we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and iterative final cost, which leads to an increased interest in its application to more complex robotic problems.

The (parallel) linear transports along paths in vector bundles are axiomatically described. Their general form and certain properties are found. It is shown that these transports are locally (i.e., along every fixed path) always Euclidean, in the sense that there exist frames in which their matrices are unit. The investigated transports along paths are described in terms of their local coefficients, as well as in terms of derivations along paths.

We show how the time-continuous coherent state path integral breaks down for both the single-site Bose-Hubbard model and the spin-path integral. Specifically, when the Hamiltonian is quadratic in a generator of the algebra used to construct coherent states, the path integral fails to produce correct results following from an operator approach. As suggested by previous authors, we note that the problems do not arise in the time-discretized version of the path integral.

Some time-dependent physical systems do not admit, in the general case, either an invariant or auxiliary equation. The study of these systems is then in general made easier by space-time transformation of coordinates. This is true for the case for a rectangular well with a moving wall, which generalized canonical transformations reduce, in the path-integral formalism, to the case of

We discuss the time-continuous path integration in the coherent states basis in a way that is free from inconsistencies. Employing this notion we reproduce known and exact results working directly in the continuum. Such a formalism can set the basis to develop perturbative and non-perturbative approximations already known in the quantum field theory community. These techniques can be proven useful in a great variety of problems where bosonic Hamiltonians are used.

Social media services, such as Twitter, enable commercial businesses to participate actively in online word-of-mouth communication. In this project, we examined the potential influences of business engagement in online word-of-mouth communication on the level of consumers' engagement and investigated the trajectories of a business's online word-of-mouth message diffusion in the Twitter community. We used path analysis to examine 164,478 tweets

This paper addresses policy issues raised by recent recommendations (e.g., Holmes Group, 1986) to improve the quality of teacher education by moving to fifth-year certification. It describes career patterns of beginning teachers with different undergraduate preparation paths, using data from the 1979 follow-up of the National Longitudinal Study of the High School Class of 1972 (NLS-72) (Riccobono, Henderson, Burkheimer, Place,

A directed path from the origin in the square lattice, and confined to a wedge, exerts a net entropic force on the wedge. If the wedge is formed by the Y-axis and the line Y = rX, then the moment of the force on the line Y = rX about the origin is given by {\cal M}_\alpha = \frac{-\log\cot\alpha}{(1+\cot\alpha)^2} \quad \text{if } 0 \leq \alpha \leq \pi/4, where α is the vertex angle of the wedge formed by the lines X = 0 and Y = rX in the square lattice. If α ∈ [π/4, π/2), then the moment about the origin is zero. This model is closely related to a model of a descending directed path crossing a wedge from the point (0, N) to the point (pM, qM) on the line Y = (q/p)X. If lengths in this model are rescaled by pM, while N = ⌊βqMr⌋ and (q/p) → r, where r is an irrational number, then a limiting model of a path crossing the wedge from the point (0, β) to the point (1, r) on the line Y = rX is obtained. The limiting path exerts a force on the line Y = rX, and the moment of this force about the origin is {\cal M}_\alpha = \frac{-(\beta-1)\log((\beta-1)\cot\alpha)}{(1+(\beta-1)\cot\alpha)^2} if β > 1, where α ∈ [0, π/2] is the vertex angle of the wedge.
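The abstract's first formula is easy to evaluate numerically, and a quick check (a sketch; signs follow the formula exactly as stated) confirms that the two regimes join continuously at α = π/4, where cot α = 1 makes the logarithm vanish.

```python
import math

def moment(alpha):
    """M_alpha = -log(cot(alpha)) / (1 + cot(alpha))^2 for 0 < alpha < pi/4,
    and 0 for alpha in [pi/4, pi/2), as stated in the abstract."""
    if alpha >= math.pi / 4:
        return 0.0
    c = 1.0 / math.tan(alpha)              # cot(alpha)
    return -math.log(c) / (1.0 + c) ** 2

print(moment(math.pi / 4))                 # 0.0
print(abs(moment(math.pi / 4 - 1e-6)))     # tiny: the regimes join continuously
```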

AbstractTor is currently the most,popular low,latency anonymizing overlay network for TCP-based applications. How- ever, it is well understood that Tors path selection algorithm is vulnerable to end-to-end traffic correlation attacks since it chooses Tor routers in proportion to their perceived bandwidth capabilities. Prior work has shown that the fraction of malicious routers and the amount,of adversary-controlled bandwidth are significant factors

Since the Indian Ocean tsunami of December 26, 2004, scientists have tried to retrace the path of the giant waves to learn how and why the water moved in unexpected directions, even turning corners and producing simultaneous wavefronts coming from different directions. This radio broadcast describes efforts to measure the strength, distance traveled inland, and height of the tsunami, as well as mapping its route. The clip is 4 minutes in length.

In this activity, "space cadets" (learners) use writing and sequencing skills in addition to directional words/ordered pairs to guide Buzz Lightyear (from the movie "Toy Story") through a grid to reach his orbiter. Learners are encouraged to avoid all obstacles and use as many or as few steps as needed. Once completed, learners give their flight path to a friend and ask him/her to follow the directions to help Buzz reach his orbiter.

A new line current profile based dynamic line rating system has been implemented at Idaho Power Company by real-time exchange of data with Energy Management System and real-time weather data from the Internet. Increased utilization of transmission critical path is made possible by wide-area real-time dynamic rating using Internet based weather data. Economic analysis shows significant benefit of operating transmission

Systems with constraints pose problems when they are quantized. Moreover, the Dirac procedure of quantization prior to reduction is preferred. The projection operator method of quantization, which can be most conveniently described by coherent state path integrals, enables one to directly impose a regularized form of the quantum constraints. This procedure also overcomes conventional difficulties with normalization and second class constraints that invalidate conventional Dirac quantization procedures.

Empirical path-loss formulas for microcells in low-rise and high-rise environments are established from measurements conducted in the San Francisco Bay area. Using the 1-km intercepts and slope indexes of the least square fit lines to the measurements at cellular and personal communication services (PCS) frequencies for three base station heights, simple analytic expressions are obtained. Separate formulas are presented for
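The two fitted quantities named in the abstract, the 1-km intercept and the slope index, come from a least-squares line in dB versus log-distance. A minimal sketch (illustrative only; the published formulas additionally fold in base station height and the cellular/PCS frequency band):

```python
import math

def fit_path_loss(d_km, pl_db):
    """Least-squares fit of the standard empirical model
    PL(d) = A + 10 n log10(d / 1 km), returning the 1-km intercept A (dB)
    and the slope index n."""
    xs = [10.0 * math.log10(d) for d in d_km]
    mx, my = sum(xs) / len(xs), sum(pl_db) / len(pl_db)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, pl_db))
    n = sxy / sxx                       # slope index
    return my - n * mx, n               # intercept at d = 1 km (x = 0 there)

# Recover the parameters from noiseless synthetic "measurements"
d = [0.1, 0.2, 0.5, 1.0, 2.0]
pl = [120.0 + 10.0 * 3.8 * math.log10(x) for x in d]
A, n = fit_path_loss(d, pl)
print(round(A, 6), round(n, 6))         # 120.0 3.8
```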

In this paper, we propose a computer simulation model for the study of the large-scale effects on narrowband wireless transmission systems. The development of the path loss model is based on the ray tracing technique. This study concentrates on the first-order scattering effects, namely each multipath signal is a two-hop signal that involves a single scattering object. The first hop

A class of optimization problems in networks of intersecting diffusion domains of a special form of thin paths has been considered. The system of equations describing stationary solutions is equivalent to an electrical circuit built of intersecting conductors. The solution of an optimization problem has been obtained and extended to the analogous electrical circuit. The interest in this network arises from, among other applications, an application to wave-particle diffusion through resonant interactions in plasma.

We illustrate some of the static and dynamic relations discovered by Cohen, Crooks, Evans, Jarzynski, Kirkwood, Morriss, Searles, and Zwanzig. These relations link nonequilibrium processes to equilibrium isothermal free energy changes and to dynamical path probabilities. We include ideas suggested by Dellago, Geissler, Oberhofer, and Schoell-Paschinger. Our treatment is intended to be pedagogical, for use in an updated version of our book: Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.

Disclosed is a vertical flight path angle steering system for aircraft, utilizing a digital flight control computer which processes pilot control inputs and aircraft response parameters into suitable elevator commands and control information for display to the pilot on a cathode ray tube. The system yields desirable airplane control handling qualities and responses as well as improvements in pilot workload and safety during airplane operation in the terminal area and under windshear conditions.

It is well established that volatility has a memory of the past; moreover, volatility correlations are found to be long-ranged. As a consequence, the volatility cannot be characterized by a single correlation time. Recent empirical work suggests that the volatility correlation functions of various assets actually decay as a power law. In this paper we show that it is possible to derive the path integral for a non-Gaussian option pricing model that can capture fat tails. We aim to find the most probable path contributing to the action functional, which describes the dynamics of the entire system, by finding its local minima. We obtain a second-order differential equation for the functional return. This paper reviews our current progress and the remaining open questions.
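The most-probable-path step is an instance of the standard variational argument; the following generic sketch (not the paper's specific model) shows why it yields a second-order differential equation. For an action functional

```latex
S[x] = \int_0^T L\bigl(x(t), \dot{x}(t)\bigr)\, dt ,
```

a local minimum of $S$ satisfies the Euler-Lagrange equation

```latex
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 ,
```

which, for a Lagrangian depending on $x$ and $\dot{x}$, is a second-order ODE in the path $x(t)$.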

Laboratory soil column experiments were conducted to study the distribution of preferential flow paths resulting from removal of fine-size clay particles. These experiments specifically studied the influence of clay (kaolinite) percentage in sand-clay mixtures and the effect of hydraulic gradients on pore evolution. Analysis of the effluent during the experiments indicated that clay particles were removed from the soil column, accompanied by an increase in porosity and hydraulic conductivity. Dye experiments were conducted on the same columns to stain the pathways where clay particle removal occurred. It was observed that pore formation was fairly uniform in some cases, while other cases showed distinct preferential flow path formation. A physically-based model was used to identify a dimensionless parameter, G, which expresses the ratio of detachment and deposition forces at any space-time location. A model, based on equivalent media theory, is proposed to describe the hydraulic conductivity of soils with preferential flow paths. Future work will test the theoretical expressions for conductivity with experimental results, and investigate the relationship between G and the equivalent conductivity for such soils.

To address the need for a fast path planner, we present a learning algorithm that improves path planning by using past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving, sparse network of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a framework in which a slow but effective planner may be improved, both cost-wise and capability-wise, by a faster but less effective planner coupled with experience. We analyze the algorithm by formalizing the concept of improvability and deriving conditions under which a planner can be improved within the framework. The analysis is based on two stochastic models, one pessimistic (on task complexity), the other randomized (on experience utility). Using these models, we derive quantitative bounds to predict the learning behavior. We use these estimation tools to characterize the situations in which the algorithm is useful and to provide bounds on the training time. In particular, we show how to predict the maximum achievable speedup. Additionally, our analysis techniques are elementary and should be useful for studying other types of probabilistic learning as well.
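The idea of distilling past solutions into a sparse set of useful configurations can be sketched as follows. This is an illustrative toy (the class name, coverage-radius rule, and methods are our own, not the paper's algorithm): configurations from solved paths are retained only if no stored configuration already covers them.

```python
import math

class ExperienceRoadmap:
    """Sparse collection of robot configurations distilled from past
    solution paths. A sketch of the general idea only."""

    def __init__(self, radius=0.5):
        self.nodes = []        # retained configurations (tuples)
        self.radius = radius   # coverage radius: nearby configs are redundant

    def learn(self, path):
        # Keep only configurations not already covered by a stored node,
        # so the roadmap stays a small fraction of the experience seen.
        for q in path:
            if all(math.dist(q, n) > self.radius for n in self.nodes):
                self.nodes.append(q)

    def nearest(self, q):
        # Seed a new planning query with the closest stored configuration.
        return min(self.nodes, key=lambda n: math.dist(n, q)) if self.nodes else None

# A dense 21-point path collapses to just two stored configurations.
rm = ExperienceRoadmap()
rm.learn([(i * 0.05, 0.0) for i in range(21)])
```

The slow planner supplies the paths fed to `learn`; the fast planner then starts from `nearest` seeds instead of planning from scratch.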

This article considers the AS path prepending approach to engineer inbound traffic for multihomed ASs. The AS path prepending approach artificially inflates the length of the AS path attribute on one of the links in hopes of diverting some of the traffic to other links. Unlike the current practice that determines the prepending length in a trial-and-error way, we propose
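The prepending mechanism itself looks like the following generic Cisco-style fragment (illustrative only, with a documentation-reserved AS number and test-net neighbor address; not the article's proposed method for choosing the length):

```
route-map PREPEND-OUT permit 10
 set as-path prepend 65001 65001 65001
!
router bgp 65001
 neighbor 192.0.2.1 route-map PREPEND-OUT out
```

Remote ASs see an AS path three hops longer through this neighbor and, absent overriding local policy, prefer routes through the AS's other links.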

A new system is being developed for the measurement of visibility and scattering characteristics at visible and NIR wavelengths over extended paths, with the aim of better understanding transmission properties over long horizontal near-surface propagation paths over the ocean. The instrument, a Multispectral Scattering Imager, is designed to acquire calibrated radiance images in several wavelengths over extended paths. From