Sample records for parallel symplectic surfaces

Let X be a symplectic homotopy K3 surface with G = S5 acting on X symplectically. In this paper, we give a weak classification of the G-action on X by discussing the fixed-point set structure. In addition, we analyse the exoticness of smooth structures of X under the action of G. Keywords. K3 surfaces; symplectic actions; exotic ...

We give a comparative description of the Poisson structures on the moduli spaces of flat connections on real surfaces and holomorphic Poisson structures on the moduli spaces of holomorphic bundles on complex surfaces. The symplectic leaves of the latter are classified by restrictions of the bundles to certain divisors. This can be regarded as fixing a "complex analogue of the holonomy" of a connection along a "complex analogue of the boundary" in analogy with the real case.

The tomographic-probability description of quantum states is reviewed. The symplectic tomography of quantum states with continuous variables is studied. The symplectic entropy of states with continuous variables is discussed, and its relation to Shannon entropy and information is elucidated. The known entropic uncertainty relations for the probability distributions in position and momentum of a particle are extended, and new uncertainty relations for symplectic entropy are obtained. The particular case of symplectic entropy, the optical entropy of quantum states, is considered. The entropy associated with the optical tomogram is shown to satisfy the new entropic uncertainty relation. The example of Gaussian states of the harmonic oscillator is studied, and the entropic uncertainty relations for optical tomograms of the Gaussian state are shown to minimize the uncertainty relation.
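As a concrete check of the kind of entropic uncertainty relation discussed above, the Białynicki-Birula–Mycielski bound S_x + S_p ≥ ln(πe) (with ħ = 1) is saturated by Gaussian states. A minimal numerical sketch, assuming the harmonic-oscillator ground state; all grid sizes and values are illustrative:

```python
import numpy as np

# Differential (Shannon) entropy S = -∫ p ln p dx of a density sampled on a
# grid, approximated by a simple Riemann sum.
def entropy(p, x):
    dx = x[1] - x[0]
    integrand = np.where(p > 0, -p * np.log(p), 0.0)
    return np.sum(integrand) * dx

# Harmonic-oscillator ground state (hbar = m = omega = 1): the position and
# momentum distributions are both Gaussian with sigma = 1/sqrt(2).
x = np.linspace(-10.0, 10.0, 20001)
sigma = 1.0 / np.sqrt(2.0)
p_x = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
p_p = p_x.copy()                  # same Gaussian shape in momentum space

S_x, S_p = entropy(p_x, x), entropy(p_p, x)

# Entropic uncertainty bound S_x + S_p >= ln(pi*e) ~ 2.1447, saturated here.
print(S_x + S_p, np.log(np.pi * np.e))
```

For this minimum-uncertainty state the two entropies sum exactly to the bound; any non-Gaussian state would give a strictly larger sum.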

We prove relative versions of the symplectic capping theorem and of the sufficiency of Giroux's criterion for Stein fillability, and use these to study the 4-genus of knots. More precisely, suppose we have a symplectic 4-manifold with convex boundary and a symplectic surface in it whose boundary is a transverse knot in the boundary of the 4-manifold.

In a recent article, the authors introduced a new geometric structure which they proposed to call co-symplectic geometry. This structure is based on a symmetric bilinear form of signature zero and leads to a geometry that is, in many respects, analogous to symplectic geometry. Its usefulness lies principally in the fact that it provides scope for the geometrization of a number of familiar structures in physics which are not so easily amenable to the methods of symplectic geometry. These include the angular momentum operators of quantum theory and the Dirac operators in relativistic quantum field theory. It is anticipated that, in conjunction with the more familiar symplectic geometry, the co-symplectic geometry will go some way to providing the tools necessary for a full geometrization of physics. In this paper, a co-symplectic structure on the cotangent bundle T * X of an arbitrary manifold X is defined, and the notion of associated symplectic and co-symplectic structures is introduced. By way of example, the two-dimensional case is considered in some detail. The general case is investigated, and some implications of these results for polarizations in geometric quantization are considered.

The goal of these notes is to provide a fast introduction to symplectic geometry for graduate students with some knowledge of differential geometry, de Rham theory and classical Lie groups. This text addresses symplectomorphisms, local forms, contact manifolds, compatible almost complex structures, Kaehler manifolds, hamiltonian mechanics, moment maps, symplectic reduction and symplectic toric manifolds. It contains guided problems, called homework, designed to complement the exposition or extend the reader's understanding. There are by now excellent references on symplectic geometry, a subset of which is in the bibliography of this book. However, the most efficient introduction to a subject is often a short elementary treatment, and these notes attempt to serve that purpose. This text provides a taste of areas of current research and will prepare the reader to explore recent papers and extensive books on symplectic geometry where the pace is much faster. For this reprint numerous corrections and cl...

This book arises from the INdAM Meeting "Complex and Symplectic Geometry", which was held in Cortona in June 2016. Several leading specialists, including young researchers, in the field of complex and symplectic geometry, present the state of the art of their research on topics such as the cohomology of complex manifolds; analytic techniques in Kähler and non-Kähler geometry; almost-complex and symplectic structures; special structures on complex manifolds; and deformations of complex objects. The work is intended for researchers in these areas.

We construct symplectic integrators for Lie–Poisson systems. The integrators are standard symplectic (partitioned) Runge–Kutta methods. Their phase space is a symplectic vector space equipped with a Hamiltonian action with momentum map J whose range is the target Lie–Poisson manifold, and their Hamiltonian is collective, that is, it is the target Hamiltonian pulled back by J. The method yields, for example, a symplectic midpoint rule expressed in 4 variables for arbitrary Hamiltonians on so(3) ∗ . The method specializes in the case that a sufficiently large symmetry group acts on the fibres of J, and generalizes to the case that the vector space carries a bifoliation. Examples involving many classical groups are presented.
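For readers unfamiliar with the midpoint rule, a minimal sketch of why it is symplectic on a linear Hamiltonian system (this illustrates the general mechanism only, not the Lie–Poisson collective construction of the paper; all values are illustrative):

```python
import numpy as np

# Implicit midpoint rule applied to a linear Hamiltonian system
# z' = J S z with H(z) = z^T S z / 2.  For linear systems the midpoint
# update is the Cayley-type map M = (I - h/2 JS)^(-1) (I + h/2 JS),
# which satisfies M^T J M = J exactly, i.e. it is symplectic.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
S = np.eye(2)                  # harmonic oscillator: H = (q^2 + p^2) / 2
h = 0.1
A = J @ S
I = np.eye(2)
M = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

# Deviation from the symplectic condition is at round-off level:
print(np.max(np.abs(M.T @ J @ M - J)))
```

The same Cayley structure underlies why the midpoint rule preserves quadratic invariants for nonlinear systems as well.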

By now symplectic integration has been applied to many problems in classical mechanics. It is my conviction that the field of particle simulation in circular rings is ideally suited for the application of symplectic integration. In this paper, I present a short description of symplectic tools in circular storage rings.

Symplectic and contact geometry naturally emerged from the mathematical description of classical physics. The discovery of new rigidity phenomena and properties satisfied by these geometric structures launched a new research field worldwide. The intense activity of many European research groups in this field is reflected by the ESF Research Networking Programme "Contact And Symplectic Topology" (CAST). The lectures of the Summer School in Nantes (June 2011) and of the CAST Summer School in Budapest (July 2012) provide a nice panorama of many aspects of the present status of contact and symplectic topology. The notes of the minicourses offer a gentle introduction to topics which have developed in an amazing speed in the recent past. These topics include 3-dimensional and higher dimensional contact topology, Fukaya categories, asymptotically holomorphic methods in contact topology, bordered Floer homology, embedded contact homology, and flexibility results for Stein manifolds.

We construct a class of symplectic non-Kaehler and complex non-Kaehler string theory vacua, extending and providing evidence for an earlier suggestion by Polchinski and Strominger. The class admits a mirror pairing by construction. Comparing hints from a variety of sources, including ten-dimensional supergravity and KK reduction on SU(3)-structure manifolds, suggests a picture in which string theory extends Reid's fantasy to connect classes of both complex non-Kaehler and symplectic non-Kaehler manifolds.

In the first part we considered the quantum phase space in terms of noncommutative differential geometry. Following relevant literature, a short introduction to vector fields and differential forms on the differential vector space M{sub N}(C) was given. Special emphasis has been laid on the construction of a canonical symplectic form analogous to the one known from classical mechanics. The canonical choice of this form has been shown to be just the (scaled) commutator of two matrices. Using the Schwinger basis, the symplectic form derived in the first sections has been further examined by calculating concrete expressions for products of general matrices and their commutators, which, as noted above, give just the symplectic form. Subsequently, a discrete analog of the continuous theory has been developed, in which the lattice of the quantum phase space forms the base space, and the Heisenberg group including the Schwinger elements is identified with the fiber space. In the continuum limit it could be shown that the discrete theory seamlessly passes into the commonly known continuous theory of connection forms on fiber bundles. The connection form and its exterior covariant derivative, the curvature form, have been calculated. It has been found that the curvature form can even be pulled back to the symplectic form by the section defined by the Schwinger elements.

The gyrocenter dynamics of charged particles in time-independent magnetic fields is a non-canonical Hamiltonian system. The canonical description of the gyrocenter has both theoretical and practical importance. We provide a general procedure for gyrocenter canonicalization, which is expressed as a series in a small variable ϵ depending only on the parallel velocity u and can be constructed recursively. We prove that truncating the series at any given order generates a set of exact canonical coordinates for a system whose Lagrangian approximates that of the original gyrocenter system to the same order. If flux surfaces exist for the magnetic field, the series terminates at second order and an exact canonical form of the gyrocenter system is obtained. With the canonicalization schemes, the canonical symplectic simulation of gyrocenter dynamics is realized for the first time. The canonical symplectic algorithm has the advantage of good conservation properties and long-term numerical accuracy, while avoiding numerical instability. It is worth mentioning that explicitly expressing the canonical Hamiltonian in new coordinates is usually difficult and impractical. We give an iteration procedure that is easy to implement in the original coordinates associated with the coordinate transformation. This is crucial for modern large-scale simulation studies in plasma physics. The dynamics of gyrocenters in the dipole magnetic field and in toroidal geometry are simulated using the canonical symplectic algorithm and compared with a higher-order non-symplectic Runge–Kutta scheme. The clear superiority of the symplectic method for the gyrocenter system is demonstrated.

In this paper the authors apply to the zeros of families of L-functions with orthogonal or symplectic symmetry the method that Conrey and Snaith (Correlations of eigenvalues and Riemann zeros, 2008) used to calculate the n-correlation of the zeros of the Riemann zeta function. This method uses the Ratios Conjectures (Conrey, Farmer, and Zirnbauer, 2008) for averages of ratios of zeta or L-functions. Katz and Sarnak (Zeroes of zeta functions and symmetry, 1999) conjecture that the zero statistics of families of L-functions have an underlying symmetry relating to one of the classical compact groups U(N), O(N) and USp(2N). Here the authors complete the work already done with U(N) (Conrey and Snaith, Correlations of eigenvalues and Riemann zeros, 2008) to show how new methods for calculating the n-level densities of eigenangles of random orthogonal or symplectic matrices can be used to create explicit conjectures for the n-level densities of zeros of L-functions with orthogonal or symplectic symmetry, including al...

This is a book on symplectic topology, a rapidly developing field of mathematics which originated as a geometric tool for problems of classical mechanics. Since the 1980s, powerful methods such as Gromov's pseudo-holomorphic curves and Morse–Floer theory on loop spaces gave rise to the discovery of unexpected symplectic phenomena. The present book focuses on function spaces associated with a symplectic manifold. A number of recent advances show that these spaces exhibit intriguing properties and structures, giving rise to an alternative intuition and new tools in symplectic topology. The book provides an essentially self-contained introduction to these developments along with applications to symplectic topology, algebra and geometry of symplectomorphism groups, Hamiltonian dynamics and quantum mechanics. It will appeal to researchers and students from the graduate level onwards. I like the spirit of this book. It formulates concepts clearly and explains the relationship between them. The subject matter is i...

A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase-space structure and reduce non-physical effects in long-term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both a single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation saves more than a factor of two in total computing time compared with the CPU implementation.
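The data-parallel character that makes such tracking GPU-friendly can be sketched with a toy one-turn map in numpy (swapping in cupy would move the same code to a GPU); the map and all parameter values below are illustrative, not the model of the paper:

```python
import numpy as np

# One turn of a toy symplectic tracking map applied to 100k particles at
# once: a linear rotation (phase advance mu per turn) followed by a thin
# sextupole kick.  Each step is exactly symplectic, and the update is
# element-wise over the particle arrays, i.e. data-parallel -- the property
# that makes this style of tracking well suited to GPUs.
rng = np.random.default_rng(0)
N = 100_000
x  = rng.normal(0.0, 1e-3, N)      # transverse position
px = rng.normal(0.0, 1e-3, N)      # transverse momentum

mu = 2 * np.pi * 0.123             # phase advance per turn
k2 = 10.0                          # integrated sextupole strength
c, s = np.cos(mu), np.sin(mu)

def one_turn(x, px):
    x, px = c * x + s * px, -s * x + c * px   # rotation (symplectic)
    return x, px - k2 * x**2                  # thin kick (symplectic)

for _ in range(1000):
    x, px = one_turn(x, px)
```

Because every particle is updated independently by the same arithmetic, the map vectorizes trivially; real tracking codes differ mainly in the lattice model, not in this parallel structure.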

Symplectic Quantum Mechanics (SQM) considers a non-commutative algebra of functions on a phase space Γ and an associated Hilbert space HΓ to construct a unitary representation of the Galilei group. From this unitary representation the Schrödinger equation is rewritten in phase-space variables and the Wigner function can be derived without the use of the Liouville-von Neumann equation. In this article we extend the methods of supersymmetric quantum mechanics (SUSYQM) to SQM. With a view to applications in quantum systems, the factorization method of the quantum mechanical formalism is then set within supersymmetric SQM. A hierarchy of simpler Hamiltonians is generated, leading to new computational tools for solving the eigenvalue problem in SQM. We illustrate the results by computing the states and spectra of the problem of a charged particle in a homogeneous magnetic field as well as the corresponding Wigner function.

We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. When the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.


We consider a general theory of curvatures of discrete surfaces equipped with edgewise parallel Gauss images, where mean and Gaussian curvatures of faces are derived from the faces' areas and mixed areas. Remarkably, these notions are capable

We introduce the notion of regular symplectomorphism and graded regular symplectomorphism between singular phase spaces. Our main concern is to exhibit examples of unitary torus representations whose symplectic quotients cannot be graded regularly symplectomorphic to the quotient of a symplectic...

The surface tree languages obtained by top-down finite state transformation of monadic trees are exactly the frontier-preserving homomorphic images of sets of derivation trees of ETOL systems. The corresponding class of tree transformation languages is therefore equal to the class of ETOL languages.

The Lagrangian, multi-dimensional, ideal, compressible gas dynamic equations are written in multi-symplectic form, in which the Lagrangian fluid labels m_i (the Lagrangian mass coordinates) and time t are the independent variables, and the Eulerian position of the fluid element x = x(m, t) and the entropy S = S(m, t) are the dependent variables. Constraints in the variational principle are incorporated by means of Lagrange multipliers. The constraints are: the entropy advection equation S_t = 0; the Lagrangian map equation x_t = u, where u is the fluid velocity; and the mass continuity equation, which has the form J = τ, where J = det(x_ij) is the Jacobian of the Lagrangian map, x_ij = ∂x_i/∂m_j, and τ = 1/ρ is the specific volume of the gas. The internal energy per unit volume of the gas, ε = ε(ρ, S), corresponds to a non-barotropic gas. The Lagrangian is used to define multi-momenta and to develop de Donder–Weyl Hamiltonian equations. The de Donder–Weyl equations are cast in multi-symplectic form. The pullback conservation laws and the symplecticity conservation laws are obtained. One class of symplecticity conservation laws gives rise to vorticity and potential vorticity type conservation laws, and another class of symplecticity laws is related to derivatives of the Lagrangian energy conservation law with respect to the Lagrangian mass coordinates m_i. We show that the vorticity-symplecticity laws can be derived by a Lie dragging method, and also by using Noether's second theorem and a fluid relabelling symmetry which is a divergence symmetry of the action. We obtain the Cartan–Poincaré form describing the equations and we discuss a set of differential forms representing the equation system.

We give a self-contained algebraic description of a formal symplectic groupoid over a Poisson manifold M. To each natural star product on M we then associate a canonical formal symplectic groupoid over M. Finally, we construct a unique formal symplectic groupoid ‘with separation of variables’ over an arbitrary Kähler-Poisson manifold.

This is a short tract on the essentials of differential and symplectic geometry together with a basic introduction to several applications of this rich framework: analytical mechanics, the calculus of variations, conjugate points & Morse index, and other physical topics. A central feature is the systematic utilization of Lagrangian submanifolds and their Maslov–Hörmander generating functions. Following this line of thought, first introduced by Włodzimierz Tulczyjew, geometric solutions of Hamilton–Jacobi equations, Hamiltonian vector fields and canonical transformations are described by suitable Lagrangian submanifolds belonging to distinct well-defined symplectic structures. This unified point of view has been particularly fruitful in symplectic topology, which is the modern Hamiltonian environment for the calculus of variations, yielding sharp sufficient existence conditions. This line of investigation was initiated by Claude Viterbo in 1992; here, some primary consequences of this theory are exposed in...

A particle in a particle accelerator can often be considered a Hamiltonian system, and when that is the case, its motion obeys the constraints of the Symplectic Condition. This tutorial monograph derives the condition from the requirement that a canonical transformation must yield a new Hamiltonian system from an old one. It then explains some of the consequences of symplecticity and discusses examples of its applications, touching on symplectic matrices, phase space and Liouville's Theorem, Lagrange and Poisson brackets, Lie algebra, Lie operators and Lie transformations, symplectic maps and symplectic integrators.
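The Symplectic Condition for a transfer matrix, M^T J M = J, and its Liouville consequence det M = 1 can be checked in a few lines; the drift and thin-lens values below are illustrative:

```python
import numpy as np

# Symplectic condition M^T J M = J for 2x2 transfer matrices in (x, x').
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def is_symplectic(M, tol=1e-12):
    return bool(np.allclose(M.T @ J @ M, J, atol=tol))

drift = lambda L: np.array([[1.0, L], [0.0, 1.0]])         # field-free drift
lens  = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])  # thin lens

M = drift(2.0) @ lens(0.5) @ drift(1.0)   # a small beamline (illustrative)
print(is_symplectic(M))                   # True
print(np.linalg.det(M))                   # ~1: Liouville, area preserved

D = 0.9 * np.eye(2)                       # uniform damping, not symplectic
print(is_symplectic(D))                   # False
```

Composing symplectic elements always yields a symplectic map, which is why lattice codes build one-turn maps element by element.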

For an arbitrary Lie group of canonical transformations on a symplectic manifold, collective coordinates are introduced. They describe the motion of the dynamical system as a whole under the group transformations. Some properties of Lie groups of canonical transformations are considered.

Workpiece surface profile, texture and roughness can be predicted by modeling the topography of the wheel surface and the kinematics of the grinding process, which constitute an important part of precision grinding process theory. Parallel grinding technology is an important method for machining nonaxisymmetric aspheric lenses, but there are few reports on relevant simulations. In this paper, a simulation method based on parallel grinding for precision machining of aspheric lenses is proposed. The method combines modeling the random surface of the wheel with modeling the single-grain track based on arc wheel contact points. A mathematical algorithm for surface topography is then proposed and applied under different machining parameters. The consistency between the results of simulation and test proves that the algorithm is correct and efficient.

The numerical integration of Hamiltonian systems by symplectic and trigonometrically symplectic methods is considered in this Letter. We construct new symplectic and trigonometrically symplectic methods of second and third order. We apply our new methods, as well as other existing methods, to the numerical integration of the harmonic oscillator, the 2D harmonic oscillator with an integer frequency ratio, and an orbit problem studied by Stiefel and Bettis.

We describe a reduction process for symplectic principal R-bundles in the presence of a momentum map. These types of structures play an important role in the geometric formulation of non-autonomous Hamiltonian systems. We apply this procedure to the standard symplectic principal R-bundle associated with a fibration π:M→R. Moreover, we show a reduction process for non-autonomous Hamiltonian systems on symplectic principal R-bundles. We apply these reduction processes to several examples.

"Symplectic Geometry Algorithms for Hamiltonian Systems" will be useful not only for numerical analysts but also for those in theoretical physics, computational chemistry, celestial mechanics, etc. The book generalizes and develops the generating function and Hamilton-Jacobi equation theory from the perspective of symplectic geometry and symplectic algebra. It will be a useful resource for engineers and scientists in the fields of quantum theory, astrophysics, atomic and molecular dynamics, climate prediction, oil exploration, etc.


Using the example of the helical wiggler proposed for the KEK photon factory, we show how to integrate the equation of motion through the wiggler. The integration is performed in Cartesian coordinates. For the usual expanded Hamiltonian (without square root), we derive a first-order symplectic integrator for the purpose of tracking through a wiggler in a ring. We also show how to include classical radiation for the computation of the damping decrement.

We describe a method for numerical construction of a symplectic map for particle propagation in a general accelerator lattice. The generating function of the map is obtained by integrating the Hamilton-Jacobi equation as an initial-value problem on a finite time interval. Given the generating function, the map is put in explicit form by means of a Fourier inversion technique. We give an example which suggests that the method has promise.
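The generating-function idea can be illustrated with the simplest mixed-variable generating function F2(q, P) = qP + h H(q, P), which yields the first-order symplectic Euler map; this sketch uses an illustrative pendulum Hamiltonian, not the accelerator-lattice construction of the paper:

```python
import math

# F2(q, P) = q P + h H(q, P) generates p = dF2/dq, Q = dF2/dP.  For the
# separable Hamiltonian H = p^2/2 + V(q) this gives the explicit
# first-order symplectic map (symplectic Euler):
#     P = p - h V'(q),   Q = q + h P.
# Pendulum, V(q) = -cos(q); step size and initial condition illustrative.
h = 0.05

def step(q, p):
    P = p - h * math.sin(q)     # V'(q) = sin(q)
    return q + h * P, P

def energy(q, p):
    return 0.5 * p * p - math.cos(q)

q, p = 1.0, 0.0
E0 = energy(q, p)
drift = 0.0
for _ in range(20_000):
    q, p = step(q, p)
    drift = max(drift, abs(energy(q, p) - E0))

# Energy error stays bounded (no secular drift), the hallmark of a
# symplectic integrator:
print(drift)
```

Because the map comes from a generating function, it is exactly symplectic regardless of step size; only the energy error, not the phase-space structure, depends on h.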

The authors consider a curve of Fredholm pairs of Lagrangian subspaces in a fixed Banach space with continuously varying weak symplectic structures. Assuming vanishing index, they obtain intrinsically a continuously varying splitting of the total Banach space into pairs of symplectic subspaces. Using such decompositions the authors define the Maslov index of the curve by symplectic reduction to the classical finite-dimensional case. The authors prove the transitivity of repeated symplectic reductions and obtain the invariance of the Maslov index under symplectic reduction while recovering all the standard properties of the Maslov index. As an application, the authors consider curves of elliptic operators which have varying principal symbol, varying maximal domain and are not necessarily of Dirac type. For this class of operator curves, the authors derive a desuspension spectral flow formula for varying well-posed boundary conditions on manifolds with boundary and obtain the splitting formula of the spectral f...

The monograph is a study of the local bifurcations of multiparameter symplectic maps of arbitrary dimension in the neighborhood of a fixed point. The problem is reduced to a study of critical points of an equivariant gradient bifurcation problem, using the correspondence between orbits of a symplectic map and critical points of an action functional. New results on singularity theory for equivariant gradient bifurcation problems are obtained and then used to classify singularities of bifurcating period-q points. Of particular interest is that a general framework for analyzing group-theoretic aspects and singularities of symplectic maps (particularly period-q points) is presented. Topics include: bifurcations when the symplectic map has spatial symmetry and a theory for the collision of multipliers near rational points with and without spatial symmetry. The monograph also includes 11 self-contained appendices, each with a basic result on symplectic maps. The monograph will appeal to researchers and graduate student...

The surfaces of many bodies are weakened by shallow enigmatic cracks that parallel the surface. A re-formulation of the static equilibrium equations in a curvilinear reference frame shows that a tension perpendicular to a traction-free surface can arise at shallow depths even under the influence of gravity. This condition occurs if σ11k1 + σ22k2 > ρg cosβ, where k1 and k2 are the principal curvatures (negative if convex) at the surface, σ11 and σ22 are tensile (positive) or compressive (negative) stresses parallel to the respective principal curvature arcs, ρ is material density, g is gravitational acceleration, and β is the surface slope. The curvature terms do not appear in equilibrium equations in a Cartesian reference frame. Compression parallel to a convex surface thus can cause subsurface cracks to open. A quantitative test of the relationship above accounts for where sheeting joints (prominent shallow surface-parallel fractures in rock) are abundant and for where they are scarce or absent in the varied topography of Yosemite National Park, resolving key aspects of a classic problem in geology: the formation of sheeting joints. Moreover, since the equilibrium equations are independent of rheology, the relationship above can be applied to delamination or spalling caused by surface-parallel cracks in many materials.
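A worked evaluation of the criterion above, with illustrative values for a convex granite dome (the numbers are assumptions for the sketch, not data from the study):

```python
import math

# Criterion for surface-parallel tension at shallow depth:
#     sigma11 * k1 + sigma22 * k2 > rho * g * cos(beta)
rho, g = 2700.0, 9.81              # density (kg/m^3), gravity (m/s^2)
beta = math.radians(20.0)          # surface slope
k1, k2 = -1 / 500.0, -1 / 800.0    # principal curvatures; negative = convex (1/m)
sigma11, sigma22 = -15e6, -5e6     # compressive surface-parallel stresses (Pa)

lhs = sigma11 * k1 + sigma22 * k2  # curvature term (Pa/m)
rhs = rho * g * math.cos(beta)     # gravitational term (Pa/m)

# Compression along a convex surface can open subsurface cracks:
print(lhs, rhs, lhs > rhs)
```

Both sides have units of Pa/m (a stress gradient with depth); with these values the curvature term exceeds the gravitational term, so the criterion predicts sheeting joints can open.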

A generalization of the multi-symplectic form for Hamiltonian systems to self-adjoint systems with dissipation terms is studied. These systems can be expressed as multi-symplectic Birkhoffian equations, which leads to a natural definition of Birkhoffian multi-symplectic structure. The concept of Birkhoffian multi-symplectic integrators for Birkhoffian PDEs is investigated. The Birkhoffian multi-symplectic structure is constructed by the continuous variational principle, and the Birkhoffian multi-symplectic integrator by the discrete variational principle. As an example, two Birkhoffian multi-symplectic integrators for the equation describing a linear damped string are given.

We find a complete set of local invariants of singular symplectic forms with the structurally stable Martinet hypersurface on a 2n-dimensional manifold. In the C-analytic category this set consists of the Martinet hypersurface Σ_2, the restriction of the singular symplectic form ω to TΣ_2, and the kernel of ω^(n-1) at the point p ∈ Σ_2. In the R-analytic and smooth categories this set contains one more invariant: the canonical orientation of Σ_2. We find the conditions to determine the kernel of ω^(n-1) at p by the other invariants. In dimension 4 we find sufficient conditions to determine the equivalence class of a singular symplectic form-germ with the structurally smooth Martinet hypersurface by the Martinet hypersurface and the restriction of the singular symplectic form to it. We also study the singular symplectic forms with singular Martinet hypersurfaces. We prove that the equivalence class of such a singular symplectic form-germ is determined by the Martinet hypersurface, the canonical orientation of its regular part and the restriction of the singular symplectic form to its regular part if the Martinet hypersurface is a quasi-homogeneous hypersurface with an isolated singularity.

We prove a new symplectic analogue of Kashiwara's equivalence from D-module theory. As a consequence, we establish a structure theory for module categories over deformation-quantizations that mirrors, at a higher categorical level, the Białynicki-Birula stratification of a variety with an action of the multiplicative group Gm. The resulting categorical cell decomposition provides an algebro-geometric parallel to the structure of Fukaya categories of Weinstein manifolds. From it, ...

This paper studies transformations for conjoined bases of symplectic difference systems $Y_{i+1}=\mathcal{S}_{i}Y_{i}$ with symplectic coefficient matrices $\mathcal{S}_i$. For an arbitrary symplectic transformation matrix $P_{i}$ we formulate the most general sufficient conditions on $\mathcal{S}_{i}$ and $P_{i}$ which guarantee that $P_{i}$ preserves the oscillatory properties of conjoined bases $Y_{i}$. We present examples which show that our new results extend the applicability of the discrete transformation theory.

We formulate a consistent multiparametric differential calculus on the quadratic coordinate algebra of the quantum vector space and use this as a tool to obtain a deformation of the associated symplectic phase space involving n(n-1)/2 + 1 deformation parameters. A consistent calculus on the relation subspace is also constructed. This is achieved with the help of a restricted ansatz, solving the consistency conditions to arrive directly at the main commutation structures without any reference to the R-matrix. However, the non-standard R-matrices for GL_{r,q_ij}(n) and Sp_{r,q_ij}(2n) can be easily read off from the commutation relations involving coordinates and derivatives.

Symplectic numerical methods are applied to the Extended Kalman Filter (EKF) algorithm to give the SKF, which outperforms the standard EKF in the presence of nonlinearity and low measurement noise in the 1-D case...

The aim of this study was to introduce a constructive method to compute a symplectic singular value decomposition (an SVD-like decomposition) of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. The approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute eigenvalues of structured matrices such as the Hamiltonian matrix JAA^T.

We study the topology of integrable Hamiltonian systems, giving the main attention to the affine structure of their orbit spaces. In particular, we develop some aspects of Fomenko's theory on the topological classification of integrable non-degenerate systems, and consider some relations between such systems and "pure" contact and symplectic geometry. We give a notion of integrable surgery and use it to obtain some interesting symplectic structures.

We prove that a map A ∈ sp(σ, R), the set of infinitesimally symplectic maps, is strongly stable if and only if its centralizer C(A) in sp(σ, R) contains only semisimple elements. Using the theorem that every B in sp(σ, R) close to A is conjugate by a real symplectic map to an element of C(A), we give a new ...

Although the formation of a capillary bridge between two parallel surfaces has been extensively studied, the majority of research has described only symmetric capillary bridges between two smooth surfaces. In this work, an instrument was built to form a capillary bridge by squeezing a liquid drop on one surface with another surface. An analytical solution that describes the shape of symmetric capillary bridges joining two smooth surfaces has been extended to bridges that are asymmetric about the midplane and to rough surfaces. The solution, given by elliptical integrals of the first and second kind, is consistent with a constant Laplace pressure over the entire surface and has been verified for water, Kaydol, and dodecane drops forming symmetric and asymmetric bridges between parallel smooth surfaces. This solution has been applied to asymmetric capillary bridges between a smooth surface and a rough fabric surface as well as symmetric bridges between two rough surfaces. These solutions have been experimentally verified, and good agreement has been found between predicted and experimental profiles for small drops where the effect of gravity is negligible. Finally, a protocol for determining the profile from the volume and height of the capillary bridge has been developed and experimentally verified.
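
The constant-Laplace-pressure condition that the elliptic-integral solution enforces is just the Young-Laplace relation. A toy sketch in the common toroidal sign convention, where the meridional (in-plane) curvature of a concave bridge is subtracted; the function name and numbers are illustrative, not from the paper:

```python
def laplace_pressure(gamma, r_neck, r_meridional):
    """Young-Laplace pressure for an axisymmetric capillary bridge,
    Delta_p = gamma * (1/r_neck - 1/r_meridional), taking the concave
    meridional curvature with a negative sign (toroidal approximation).
    gamma in N/m, radii in metres, result in Pa."""
    return gamma * (1.0 / r_neck - 1.0 / r_meridional)

# Water bridge (gamma = 0.072 N/m), neck radius 1 mm, meridional radius 2 mm:
print(laplace_pressure(0.072, 1e-3, 2e-3))  # 36.0 Pa
```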

We find that the coherent state projection operator representation of symplectic transformations constitutes a faithful group representation of the symplectic group. The result of successively applying squeezing operators on a number state can be easily derived.

We consider the special type of field-theoretical symplectic structures called weakly nonlocal. Structures of this type are, in particular, very common for integrable systems such as KdV or NLS. We introduce here a special class of weakly nonlocal symplectic structures which we call weakly nonlocal symplectic structures of hydrodynamic type. We then investigate the connection of such structures with the Whitham averaging method and propose a procedure of 'averaging' the weakly nonlocal symplectic structures. The averaging procedure gives the weakly nonlocal symplectic structure of hydrodynamic type for the corresponding Whitham system. The procedure also gives 'action variables' corresponding to the wave numbers of m-phase solutions of the initial system, which provide additional conservation laws for the Whitham system.

We consider nonlinear recurrences generated from the iteration of maps that arise from cluster algebras. More precisely, starting from a skew-symmetric integer matrix, or its corresponding quiver, one can define a set of mutation operations, as well as a set of associated cluster mutations that are applied to a set of affine coordinates (the cluster variables). Fordy and Marsh recently provided a complete classification of all such quivers that have a certain periodicity property under sequences of mutations. This periodicity implies that a suitable sequence of cluster mutations is precisely equivalent to iteration of a nonlinear recurrence relation. Here we explain briefly how to introduce a symplectic structure in this setting, which is preserved by a corresponding birational map (possibly on a space of lower dimension). We give examples of both integrable and non-integrable maps that arise from this construction. We use algebraic entropy as an approach to classifying integrable cases. The degrees of the iterates satisfy a tropical version of the map.

An algorithm has been developed for converting an "order-by-order symplectic" Taylor map that is truncated to an arbitrary order (thus not exactly symplectic) into a Courant-Snyder matrix and a symplectic implicit Taylor map for symplectic tracking. This algorithm is implemented using differential algebras, and it is numerically stable and fast. Thus, lifetime charged-particle tracking for large hadron colliders, such as the Superconducting Super Collider, is now made possible.

The symplectic numerical integration of finite-dimensional Hamiltonian systems is a well-established subject and has led to a deeper understanding of existing methods as well as to the development of new, very efficient and accurate schemes, e.g., for rigid-body, constrained, and molecular dynamics. The numerical integration of infinite-dimensional Hamiltonian systems or Hamiltonian PDEs is much less explored. In this Letter, we suggest a new theoretical framework for generalizing symplectic numerical integrators for ODEs to Hamiltonian PDEs in R²: time plus one space dimension. The central idea is that symplecticity for Hamiltonian PDEs is directional: the symplectic structure of the PDE is decomposed into distinct components representing space and time independently. In this setting PDE integrators can be constructed by concatenating uni-directional ODE symplectic integrators. This suggests a natural definition of a multi-symplectic integrator as a discretization that conserves a discrete version of the conservation of symplecticity for Hamiltonian PDEs. We show that this approach leads to a general framework for geometric numerical schemes for Hamiltonian PDEs, which have remarkable energy and momentum conservation properties. Generalizations, including development of higher-order methods, application to the Euler equations in fluid mechanics, application to perturbed systems, and extension to more than one space dimension, are also discussed.
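
The uni-directional ODE building block referred to above can be as simple as the symplectic (semi-implicit) Euler method. A minimal sketch for a separable Hamiltonian H = p²/2 + V(q), illustrative only and not the paper's multi-symplectic construction:

```python
def symplectic_euler(q, p, dt, dVdq, steps):
    """Symplectic (semi-implicit) Euler for H = p^2/2 + V(q):
    kick the momentum with the force, then drift the position."""
    for _ in range(steps):
        p -= dt * dVdq(q)  # kick
        q += dt * p        # drift with the updated momentum
    return q, p

# Harmonic oscillator V(q) = q^2/2: the energy error stays bounded
# (it oscillates) instead of drifting secularly as with explicit Euler.
q, p = symplectic_euler(1.0, 0.0, 0.05, lambda x: x, 10000)
energy = 0.5 * (p * p + q * q)
print(abs(energy - 0.5) < 0.05)
```

The bounded energy oscillation is the hallmark of symplecticity that the multi-symplectic framework carries over, direction by direction, to PDE discretizations.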

In this paper, we construct finite blow-up examples for symplectic mean curvature flows and we study symplectic translating solitons. We prove that there are no translating solitons with |α| ≤ α₀ for the symplectic mean curvature flow or the almost calibrated Lagrangian mean curvature flow, for some α₀.

Given a formal symplectic groupoid G over a Poisson manifold (M, π₀), we define a new object, an infinitesimal deformation of G, which can be thought of as a formal symplectic groupoid over the manifold M equipped with an infinitesimal deformation π₀ + επ₁ of the Poisson bivector field π₀. To any pair of natural star products (∗, ∗̃) having the same formal symplectic groupoid G we relate an infinitesimal deformation of G. We call it the deformation groupoid of the pair (∗, ∗̃). To each star product with separation of variables ∗ on a Kähler-Poisson manifold M we relate another star product with separation of variables ∗̂ on M. We build an algorithm for calculating the principal symbols of the components of the logarithm of the formal Berezin transform of a star product with separation of variables ∗. This algorithm is based upon the deformation groupoid of the pair (∗, ∗̂).

Quasipolynomial (or QP) mappings constitute a wide generalization of the well-known Lotka-Volterra mappings, of importance in different fields such as population dynamics, physics, chemistry or economy. In addition, QP mappings are a natural discrete-time analogue of the continuous QP systems, which have been extensively used in different pure and applied domains. After presenting the basic definitions and properties of QP mappings in a previous paper, the purpose of this work is to focus on their characterization by considering the existence of symplectic QP mappings. In what follows such QP symplectic maps are completely characterized. Moreover, use of the QP formalism can be made in order to demonstrate that all QP symplectic mappings have an analytical solution that is explicitly and generally constructed. Examples are given.

We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.

This book presents a collection of papers on two related topics: the topology of knots and knot-like objects (such as curves on surfaces) and the topology of Legendrian knots and links in 3-dimensional contact manifolds. Featured is the work of international experts in knot theory ("quantum" knot invariants, knot invariants of finite type), in symplectic and contact topology, and in singularity theory. The interplay of diverse methods from these fields makes this volume unique in the study of Legendrian knots and knot-like objects such as wave fronts. A particularly enticing feature of the volume is ...

Given a formal symplectic groupoid $G$ over a Poisson manifold $(M, \pi_0)$, we define a new object, an infinitesimal deformation of $G$, which can be thought of as a formal symplectic groupoid over the manifold $M$ equipped with an infinitesimal deformation $\pi_0 + \epsilon \pi_1$ of the Poisson bivector field $\pi_0$. The source and target mappings of a deformation of $G$ are deformations of the source and target mappings of $G$. To any pair of natural star products $(\ast, \tilde\ast)$ ha...

Recently Kontsevich solved the classification problem for deformation quantizations of all Poisson structures on a manifold. In this paper we study those Poisson structures for which the explicit methods of Fedosov can be applied, namely the Poisson structures coming from symplectic Lie algebroids, as well as holomorphic symplectic structures. For deformations of these structures we prove classification theorems and a general index theorem.

We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.

To the integral symplectic group Sp(2g, Z) we associate two posets of which we prove that they have the Cohen-Macaulay property. As an application we show that the locus of marked decomposable principally polarized abelian varieties in the Siegel space of genus g has the homotopy type of a bouquet

In this paper, we construct an invariant metric in the space of homogeneous polynomials of a given degree (≥ 3). The homogeneous polynomials specify a nonlinear symplectic map which in turn represents a Hamiltonian system. By minimizing the norm constructed out of this metric as a function of system parameters, we ...

In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
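
The time transformation described above can be written out explicitly. Introducing a fictive time τ via dt = Δ(q, p) dτ gives, for a Hamiltonian H(q, p),

```latex
\frac{dq}{d\tau} = \Delta(q,p)\,\frac{\partial H}{\partial p},
\qquad
\frac{dp}{d\tau} = -\,\Delta(q,p)\,\frac{\partial H}{\partial q},
```

which is in general no longer a Hamiltonian system. For orientation, the extended-phase-space route mentioned above is usually built on the standard Poincaré time transformation: one evolves K(q, p) = Δ(q, p)(H(q, p) − E) on the level set K = 0, where E is the initial energy, so that the Hamiltonian flow of K traces the original orbits with time rescaled by Δ. This is the textbook construction, sketched here for context; the paper's novel method uses a non-canonical mixed-variable generating function instead.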

In this paper, a classical system of ordinary differential equations is built to describe a kind of n-dimensional quantum system. The absorption spectrum and the density of states for the system are defined from both the quantum and the classical points of view. From the Birkhoffian form of the equations, a Birkhoffian symplectic scheme is derived for solving the n-dimensional equations by using the generating function method. Besides being Birkhoffian structure-preserving, the new scheme is proven to preserve the discrete local energy conservation law of the system with zero vector f. Some numerical experiments for a 3-dimensional example show that the new scheme can simulate the general Birkhoffian system better than the implicit midpoint scheme, which is well known to be a symplectic scheme for Hamiltonian systems.

A variety of insertion devices (IDs), wigglers and undulators, linearly or elliptically polarized, are widely used as high-brightness radiation sources at modern light source rings. Long and high-field wigglers have also been proposed as the main source of radiation damping at next-generation damping rings. As a result, it becomes increasingly important to understand the impact of IDs on the charged particle dynamics in the storage ring. In this paper, we report our recent development of a general explicit symplectic model for IDs with the paraxial ray approximation. High-order explicit symplectic integrators are developed to study real-world insertion devices with a number of wiggler harmonics and arbitrary polarizations.

Applying the spectral element method (SEM) based on the Gauss-Lobatto-Legendre (GLL) polynomial to discretize Maxwell's equations, we obtain a Poisson system or a Poisson system with at most a perturbation. For the system, we prove that any symplectic partitioned Runge-Kutta (PRK) method preserves the Poisson structure and its implied symplectic structure. Numerical examples show the high accuracy of SEM and the benefit of conserving energy due to the use of symplectic methods.

The main purpose of this work is to describe the quantum analogue of the usual classical symplectic geometry and then to formulate the quantum mechanics as a (quantum) non-commutative symplectic geometry. In this first part, we define the quantum symplectic structure in the context of the matrix differential geometry by using the discrete Weyl-Schwinger realization of the Heisenberg group. We also discuss the continuous limit and give an expression of the quantum structure constants.

A stochastic deformation of a thermodynamic symplectic structure is studied. The stochastic deformation procedure is analogous to the deformation of an algebra of observables, as in deformation quantization, but for an imaginary deformation parameter (the Planck constant). Gauge symmetries of thermodynamics and the corresponding stochastic mechanics, which describes fluctuations of a thermodynamic system, are revealed and gauge fields are introduced. A physical interpretation of the gauge transform...

Symplectic geometry originated in physics, but it has flourished as an independent subject in mathematics, together with its offspring, symplectic topology. Symplectic methods have even been applied back to mathematical physics. Noncommutative geometry has developed an alternative mathematical quantization scheme based on a geometric approach to operator algebras. Deformation quantization, a blend of symplectic methods and noncommutative geometry, approaches quantum mechanics from a more algebraic viewpoint, as it addresses quantization as a deformation of Poisson structures. This volume contains seven chapters based on lectures given by invited speakers at two May 2010 workshops held at the Mathematical Sciences Research Institute: Symplectic and Poisson Geometry in Interaction with Analysis, Algebra and Topology (honoring Alan Weinstein, one of the key figures in the field) and Symplectic Geometry, Noncommutative Geometry and Physics. The chapters include presentations of previously unpublished results and ...

Let M be a smooth manifold with a regular foliation F and a 2-form ω which induces closed forms on the leaves of F in the leaf topology. A smooth map f : (M, F) ⟶ (N, σ) into a symplectic manifold (N, σ) is called a foliated symplectic immersion if f restricts to an immersion on each leaf of the foliation and further, the ...

Using the basic concepts of the chain-by-chain method we show that the symplectic analysis, which was claimed to be equivalent to the usual Dirac method, fails when second-class constraints are present. We propose a modification of the symplectic analysis that solves the problem.

We recall the Chernoff-Marsden definition of weak symplectic structure and give a rigorous treatment of the functional analysis and geometry of weak symplectic Banach spaces. We define the Maslov index of a continuous path of Fredholm pairs of Lagrangian subspaces in continuously varying Banach...

For two-dimensional self-dual fields, the symplectic structure on the space of solutions is given. It is shown that this structure is Poincaré invariant. The Lagrangian of the two-dimensional self-dual field is invariant under an infinite one-component conformal group, so this symplectic structure is also invariant under the conformal group. The conserved currents are also obtained in the geometrical formalism.

The authors propose explicit symplectic integrators of molecular dynamics (MD) algorithms for rigid-body molecules in the canonical and isobaric-isothermal ensembles. They also present a symplectic algorithm in the constant-normal-pressure and lateral-surface-area ensemble, and one combined with the Parrinello-Rahman algorithm. Employing the symplectic integrators for MD algorithms, there is a conserved quantity which is close to the Hamiltonian. Therefore, they can perform an MD simulation more stably than with conventional nonsymplectic algorithms. They applied this algorithm to a TIP3P pure water system at 300 K and compared the time evolution of the Hamiltonian with those given by the nonsymplectic algorithms. They found that the Hamiltonian was conserved well by the symplectic algorithm even for a time step of 4 fs. This time step is longer than the typical values of 0.5-2 fs used by conventional nonsymplectic algorithms.

In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.

These notes are concerned with the formulation of a new conceptual framework for classical field theories. Although the formulation is based on fairly advanced concepts of symplectic geometry, these notes cannot be viewed as a reformulation of known structures in more rigorous and elegant terms. Our intention is rather to communicate to theoretical physicists a set of new physical ideas. We have chosen for this purpose the language of local coordinates, which is more elementary and more widely known than the abstract language of modern differential geometry. Our emphasis is directed more to physical intentions than to mathematical rigour. We start with a symplectic analysis of statics. Both discrete and continuous systems are considered on a largely intuitive level. The notions of reciprocity and potentiality of the theory are discussed. Chapter II is a presentation of particle dynamics together with more rigorous definitions of the geometric structure. Lagrangian submanifolds and their generating functions are defined and the time evolution of particle states is studied. Chapter III forms the main part of these notes. Here we describe the construction of canonical momenta and discuss the field dynamics in finite domains of space-time. We also establish the relation between our symplectic framework and the geometric formulation of the calculus of variations of multiple integrals. In the following chapter we give a few examples of field theories, selected to illustrate various features of the new approach. A new formulation of the theory of gravity consists of using the affine connection in space-time as the field configuration. In the last section we present an analysis of hydrodynamics within our framework which reveals a formal analogy with electrodynamics. The discovery of potentials for hydrodynamics and the subsequent formulation of a variational principle provides an excellent example of the fruitfulness of the new approach to field theory. A short review of ...

We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of force gradient in addition to three evaluations of the force, when iterated to higher order, yielded algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10³, 10⁴, 10⁴, and 10⁵ better.
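
For context, the standard Forest-Ruth scheme used above as the baseline is a three-force-evaluation splitting. A minimal sketch for H = p²/2 + V(q); this is the baseline algorithm, not the force-gradient variant the abstract reports on:

```python
def forest_ruth_step(q, p, dt, dVdq):
    """One step of the standard 4th-order Forest-Ruth splitting for
    H = p^2/2 + V(q), with theta = 1/(2 - 2^(1/3))."""
    theta = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    drifts = (theta / 2, (1 - theta) / 2, (1 - theta) / 2, theta / 2)
    kicks = (theta, 1 - 2 * theta, theta, 0.0)
    for c, d in zip(drifts, kicks):
        q += c * dt * p           # drift: advance position
        if d:                     # kick: advance momentum (3 force calls/step)
            p -= d * dt * dVdq(q)
    return q, p

# Harmonic oscillator test: 1000 steps of dt = 0.1.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = forest_ruth_step(q, p, 0.1, lambda x: x)
energy = 0.5 * (p * p + q * q)
print(abs(energy - 0.5) < 1e-2)  # energy error is bounded and O(dt^4)
```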

We discuss symplectic cutting for Hamiltonian actions of non-Abelian compact groups. By using a degeneration based on the Vinberg monoid we give, in good cases, a global quotient description of a surgery construction introduced by Woodward and Meinrenken, and show it can be interpreted in algebro-geometric terms. A key ingredient is the 'universal cut' of the cotangent bundle of the group itself, which is identified with a moduli space of framed bundles on chains of projective lines recently introduced by the authors.

Background and aims: The paper presents the results of an investigation of the acoustic performance of tilted-profile parallel barriers with quadratic residue diffuser (QRD) tops and faces. Methods: A 2D boundary element method (BEM) is used to predict the barrier insertion loss. Results for rigid barriers and barriers with absorptive coverage are also calculated for comparison. Using QRDs on the top surface and faces of all tilted-profile parallel barrier models introduced here is found to improve the efficiency of the barriers compared with the equivalent rigid parallel barrier at the examined receiver positions. Results: Applying a QRD with a design frequency of 400 Hz on a 5-degree tilted parallel barrier improves the overall performance of its equivalent rigid barrier by 1.8 dB(A). Increasing the treated surfaces with reactive elements shifts the effective performance toward lower frequencies. It is found that by tilting the barriers from 0 to 10 degrees in a parallel setup, the degradation effects in parallel barriers are reduced, but the absorption effect of fibrous materials and also the diffusivity of the quadratic residue diffuser are reduced significantly. In this case all the designed barriers perform better with 10 degrees of tilt in a parallel setup. Conclusion: The most economical traffic-noise parallel barrier with significantly high performance is achieved by covering the top surface of the barrier closest to the receiver with a QRD with a design frequency of 400 Hz and a tilting angle of 10 degrees. The average A-weighted insertion loss of this barrier is predicted to be 16.3 dB(A).

A method of stress analysis is presented for a two-dimensional crack which is subjected to internal gas pressure and situated parallel to a free surface of a material. It is based on the concept of continuously distributed edge dislocations of two kinds, i.e. one with Burgers vector normal to the free surface and the other with Burgers vector parallel to it. Stress fields of individual dislocations are chosen so as to satisfy stress-free boundary conditions at the free surface, by taking account of image dislocations. Distributions of both kinds of dislocations in the crack are derived so as to give the internal gas pressure and, at the same time, to satisfy the shear-stress-free boundary condition on the crack surface. The stress fields σ_xx, σ_yy and σ_xy in the sub-surface layer are then determined from them. They have square-root singularities at the crack tip.

Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high-performance computing. We describe parallel computation of viscous, incompressible, free-surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully coupled governing conservation equations, and a "pseudo-solid" mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Also discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to equally distribute computational work across an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large-scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free-surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed-ups for a fixed problem size, a class of problems of immediate practical importance.

An X-ray diffraction method is described for studying thin polycrystalline and amorphous films and surface layers in an extremely asymmetric diffraction scheme with parallel grazing rays, using a DRON-3.0 diffractometer. The minimum grazing angles correspond to diffraction under conditions of total external reflection and a layer depth of ∼2.5-8 nm.

Poisson disk sampling has excellent spatial and spectral properties and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported on the problem of generating Poisson disks on surfaces, due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space; this feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating a spatially varying density function, we can easily obtain adaptive sampling.
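The priority-based conflict rule described above can be emulated serially in a few lines. This is only a sketch under stated assumptions: it uses the Euclidean distance instead of the intrinsic surface metric the paper works with, and the helper name is hypothetical.

```python
import math
import random

def poisson_disk_priority(candidates, r):
    """Serial emulation of priority-based parallel dart throwing
    (hypothetical helper; Euclidean metric used for brevity).
    Each candidate gets a random, unique priority; a candidate survives
    only if no conflicting candidate of higher priority survives, so
    threads could decide candidates concurrently by comparing priorities."""
    # A random shuffle is equivalent to assigning random unique priorities
    # and visiting candidates in decreasing priority order.
    order = candidates[:]
    random.shuffle(order)
    accepted = []
    for c in order:
        # Accept c only if it conflicts with no higher-priority sample.
        if all(math.dist(c, a) >= r for a in accepted):
            accepted.append(c)
    return accepted
```

Processing in priority order guarantees the result is independent of thread scheduling: every conflict is decided the same way regardless of which thread examines it first.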

Fast parallel femtosecond laser surface micro-structuring is demonstrated using a spatial light modulator (SLM). The Gratings and Lenses algorithm, which is simple and computationally fast, is used to calculate computer generated holograms (CGHs) producing diffractive multiple beams for the parallel processing. The results show that the finite laser bandwidth can significantly alter the intensity distribution of diffracted beams at higher angles resulting in elongated hole shapes. In addition, by synchronisation of applied CGHs and the scanning system, true 3D micro-structures are created on Ti6Al4V.

The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen-space surface LIC algorithm for use in distributed-memory, data-parallel, sort-last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen-space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high-performance computing systems using data from turbulent plasma simulations.

We present a derivation of the von Neumann entropy and mutual information of arbitrary two-mode Gaussian states, based on the explicit determination of the symplectic eigenvalues of a generic covariance matrix. The key role of the symplectic invariants in such a determination is pointed out. We show that the von Neumann entropy depends on two symplectic invariants, while the purity (or the linear entropy) is determined by only one invariant, so that the two quantities provide two different hierarchies of mixed Gaussian states. A comparison between mutual information and entanglement of formation for symmetric states is considered, taking note of the crucial role of the symplectic eigenvalues in qualifying and quantifying the correlations present in a generic state. (letter to the editor)
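The symplectic eigenvalues and the resulting entropy can be computed directly from a covariance matrix. A minimal sketch, assuming the convention in which the vacuum covariance matrix is the identity and the mode ordering is (x1, p1, x2, p2); the function names are hypothetical:

```python
import numpy as np

def symplectic_eigenvalues(sigma):
    """Symplectic eigenvalues nu_k of a 2n x 2n covariance matrix sigma
    (mode ordering x1, p1, x2, p2, ...; vacuum normalized to sigma = I).
    They are the moduli of the eigenvalues of i * Omega * sigma."""
    n = sigma.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    ev = np.linalg.eigvals(1j * omega @ sigma)
    return np.sort(np.abs(ev))[::2]      # each nu appears as a +/- pair

def gaussian_entropy(sigma):
    """Von Neumann entropy (in bits) of the Gaussian state with
    covariance matrix sigma, as a function of its symplectic eigenvalues."""
    def f(nu):
        if nu <= 1.0 + 1e-12:            # a pure mode contributes zero
            return 0.0
        a, b = (nu + 1.0) / 2.0, (nu - 1.0) / 2.0
        return a * np.log2(a) - b * np.log2(b)
    return float(sum(f(nu) for nu in symplectic_eigenvalues(sigma)))
```

For example, the two-mode vacuum (sigma = I) has all symplectic eigenvalues equal to 1 and zero entropy, while a thermal mode with nu = 3 contributes f(3) = 2 bits.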

In this note we prove a correspondence between the Wess-Zumino-Novikov-Witten model of the Lie supergroup GL(1|1) and a free model consisting of two scalars and a pair of symplectic fermions. This model was discussed earlier by LeClair. Vertex operators for the symplectic fermions include twist fields, and correlation functions of GL(1|1) agree with the known results for the scalars and symplectic fermions. We perform a detailed study of boundary states for symplectic fermions and apply them to branes in GL(1|1). This allows us to compute new amplitudes of strings stretching between branes of different types and to confirm Cardy's condition.

Although the definition of symplectic field theory suggests that one has to count holomorphic curves in cylindrical manifolds R × V equipped with a cylindrical almost complex structure J, it is already well known from Gromov-Witten theory that, due to the presence of multiply covered curves, we in general cannot achieve transversality for all moduli spaces even for generic choices of J. In this thesis we treat the transversality problem of symplectic field theory in two important cases. In the first part of this thesis we are concerned with the rational symplectic field theory of Hamiltonian mapping tori, which is also called the Floer case. For this, observe that in the general geometric setup for symplectic field theory the contact manifolds can be replaced by mapping tori M_φ of symplectic manifolds (M, ω_M) with symplectomorphisms φ. While the cylindrical contact homology of M_φ is given by the Floer homologies of powers of φ, the other algebraic invariants of symplectic field theory for M_φ provide natural generalizations of symplectic Floer homology. For symplectically aspherical M and Hamiltonian φ we study the moduli spaces of rational curves and prove a transversality result, which does not need the polyfold theory of Hofer, Wysocki and Zehnder and allows us to compute the full contact homology of M_φ ≅ S¹ × M. The second part of this thesis is devoted to the branched covers of trivial cylinders over closed Reeb orbits, which are the trivial examples of punctured holomorphic curves studied in rational symplectic field theory. Since all moduli spaces of trivial curves with virtual dimension one cannot be regular, we use obstruction bundles in order to find compact perturbations making the Cauchy-Riemann operator transversal to the zero section and show that the algebraic count of elements in the resulting regular moduli spaces is zero. Once the analytical foundations of symplectic field theory are established, our result implies that the

Using Fedosov's approach we give a geometric construction of a formal symplectic groupoid over any Poisson manifold endowed with a torsion-free Poisson contravariant connection. In the case of Kähler-Poisson manifolds this construction provides, in particular, the formal symplectic groupoids with separation of variables. We show that the dual of a semisimple Lie algebra does not admit torsion-free Poisson contravariant connections.

Let M be a smooth manifold with a regular foliation F and a 2-form ω which induces closed forms on the leaves of F in the leaf topology. A smooth map f : (M, F) → (N, σ) into a symplectic manifold (N, σ) is called a foliated symplectic immersion if f restricts to an immersion on each leaf of the foliation and further, the.

There is constructed a family of Lie algebras that act in a Hamiltonian way on the symplectic affine space of linear symplectic connections on a symplectic manifold. The associated equivariant moment map is a formal sum of the Cahen-Gutt moment map, the Ricci tensor, and a translational term. The critical points of a functional constructed from it interpolate between the equations for preferred symplectic connections and the equations for critical symplectic connections. The commutative algebra of formal sums of symmetric tensors on a symplectic manifold carries a pair of compatible Poisson structures, one induced from the canonical Poisson bracket on the space of functions on the cotangent bundle polynomial in the fibers, and the other induced from the algebraic fiberwise Schouten bracket on the symmetric algebra of each fiber of the cotangent bundle. These structures are shown to be compatible, and the required Lie algebras are constructed as central extensions of their linear combinations restricted to formal sums of symmetric tensors whose first order term is a multiple of the differential of its zeroth order term.

A new scheme for the temporal discretization of the seismic wave equation is constructed based on symplectic geometric theory and a modified strategy. The ordinary differential equation in terms of time, which is obtained after spatial discretization via the spectral-element method, is transformed into a Hamiltonian system. A symplectic partitioned Runge-Kutta (PRK) scheme is used to solve the Hamiltonian system. A term related to the multiplication of the spatial discretization operator with the seismic wave velocity vector is added into the symplectic PRK scheme to create a modified symplectic PRK scheme. The symplectic coefficients of the new scheme are determined via Taylor series expansion. The positive coefficients of the scheme indicate that its long-term computational capability is more powerful than that of conventional symplectic schemes. An exhaustive theoretical analysis reveals that the new scheme is highly stable and has low numerical dispersion. The results of three numerical experiments demonstrate the high efficiency of this method for seismic wave modeling.
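The modified symplectic PRK scheme of the abstract is not reproduced here, but the simplest symplectic partitioned Runge-Kutta method, Störmer-Verlet (leapfrog), illustrates the structure such schemes share. A minimal sketch for the test equation q'' = -ω²q, with hypothetical names:

```python
def leapfrog(q, p, omega2, dt, steps):
    """Stoermer-Verlet / leapfrog: the simplest symplectic partitioned
    Runge-Kutta scheme, applied to q'' = -omega2 * q (unit mass).
    Being symplectic, it keeps the energy error bounded over long runs
    rather than letting it drift."""
    for _ in range(steps):
        p -= 0.5 * dt * omega2 * q   # half kick (momentum update)
        q += dt * p                  # full drift (position update)
        p -= 0.5 * dt * omega2 * q   # half kick
    return q, p
```

For the harmonic oscillator with q(0) = 1, p(0) = 0, the energy E = (p² + ω²q²)/2 oscillates within an O(dt²) band around its initial value for arbitrarily many steps, which is the long-term stability property the abstract exploits for seismic wave modeling.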

A new approach to integration of magnetic field lines in divertor tokamaks is proposed. In this approach, an analytic equilibrium generating function (EGF) is constructed in natural canonical coordinates (ψ,θ) from experimental data from a Grad-Shafranov equilibrium solver for a tokamak. ψ is the toroidal magnetic flux and θ is the poloidal angle. Natural canonical coordinates (ψ,θ,φ) can be transformed to physical position (R,Z,φ) using a canonical transformation. (R,Z,φ) are cylindrical coordinates. Another canonical transformation is used to construct a symplectic map for integration of magnetic field lines. Trajectories of field lines calculated from this symplectic map in natural canonical coordinates can be transformed to trajectories in real physical space. Unlike in magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)], the symplectic map in natural canonical coordinates can integrate trajectories across the separatrix surface, and at the same time, give trajectories in physical space. Unlike symplectic maps in physical coordinates (x,y) or (R,Z), the continuous analog of a symplectic map in natural canonical coordinates does not distort trajectories in toroidal planes intervening the discrete map. This approach is applied to the DIII-D tokamak [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The EGF for the DIII-D gives quite an accurate representation of equilibrium magnetic surfaces close to the separatrix surface. This new approach is applied to demonstrate the sensitivity of stochastic broadening using a set of perturbations that generically approximate the size of the field errors and statistical topological noise expected in a poloidally diverted tokamak. Plans for future application of this approach are discussed.
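The EGF-based map above is specific to DIII-D equilibrium data, but the Chirikov standard map is the generic area-preserving (symplectic) map commonly used to illustrate field-line behavior in perturbed tokamak equilibria, in action-angle-like coordinates (ψ, θ). A sketch, not the paper's map:

```python
import numpy as np

def standard_map(psi, theta, K, n):
    """Iterate the Chirikov standard map, a generic area-preserving
    (symplectic) map: psi' = psi + K sin(theta), theta' = theta + psi'.
    For K = 0 the flux-like variable psi is conserved (intact surfaces);
    increasing K destroys surfaces and produces stochastic field lines."""
    traj = [(psi, theta)]
    for _ in range(n):
        psi = psi + K * np.sin(theta)
        theta = (theta + psi) % (2.0 * np.pi)
        traj.append((psi, theta))
    return traj
```

Plotting many trajectories for increasing K shows the transition from nested invariant surfaces to the stochastic broadening that the abstract studies near the separatrix.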

Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating a spatially varying density function, we can easily obtain adaptive sampling.

Theoretical derivations are made for the induced potential and the stopping power of a charged particle moving close and parallel to the surface of a solid. It is shown that the induced potential produced by the interaction of the particle with the solid depends not only on the present velocity but also on the velocity of the particle before its last inelastic interaction. In other words, the particle keeps a memory of its previous velocity, v′, in determining the stopping power for the particle of velocity v. Based on the dielectric response theory, formulas are derived for the induced potential and the stopping power with memory effect. An extended Drude dielectric function with spatial dispersion is used in applying these formulas to a proton moving parallel to a Si surface. It is found that the induced potential with memory effect lies between the induced potentials without memory effect for constant velocities v and v′. The memory effect becomes manifest as the proton changes its velocity in the previous inelastic interaction; it also reduces the stopping power of the proton. The formulas derived in the present work can be applied to any solid surface and to a charged particle moving with an arbitrary parallel trajectory either inside or outside the solid.

This paper presents a case study in the design strategy used in building a graphics computer for drawing very complex 3D geometric surfaces. The goal is to build a PC-based computer system capable of handling surfaces built from about 2 million triangles, and to be able to render a perspective view of these on a computer display at interactive frame rates, i.e. processing around 50 million triangles per second. The paper presents a hardware/software architecture called HPGA (Hybrid Parallel Graphics Architecture) which is likely to be able to carry out this task. The case study focuses on techniques to increase...

We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
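The efficiency statement above, that utilization equals the density of local minima of the simulated-time surface, can be checked in a toy model. A sketch under stated assumptions: a ring of processing elements with unit-rate Poisson (exponential) time increments, and a hypothetical function name.

```python
import random

def pdes_utilization(n, sweeps, seed=0):
    """Toy model of the conservative parallel discrete-event scheme:
    site i may advance its local virtual time tau[i] (by an exponential
    increment, i.e. a Poisson arrival) only when tau[i] is a local
    minimum relative to its ring neighbors.  Returns the average
    fraction of sites that advance per sweep (the utilization)."""
    rng = random.Random(seed)
    tau = [0.0] * n
    active = 0
    for _ in range(sweeps):
        # A site can safely process its next event only at a local minimum.
        is_min = [tau[i] <= tau[(i - 1) % n] and tau[i] <= tau[(i + 1) % n]
                  for i in range(n)]
        active += sum(is_min)
        for i in range(n):
            if is_min[i]:
                tau[i] += rng.expovariate(1.0)
    return active / (n * sweeps)
```

After an initial transient the utilization settles to a finite constant (roughly a quarter in one dimension), independent of n, which is the asymptotic scalability claimed in the abstract.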

Highlights:
• The Zeeman effect shifts the superconducting gaps of the sub-band system towards pair-breaking.
• Higher-level sub-bands become normal-state-like electronic states under magnetic fields.
• The magnetic-field dependence of the zero-energy DOS reflects multi-gap superconductivity.
Abstract: We study paramagnetic pair-breaking in electric-field-induced surface superconductivity when a magnetic field is applied parallel to the surface. The calculation is performed with Bogoliubov-de Gennes theory with s-wave pairing, including the screening effect of the electric field by the induced carriers near the surface. Due to the Zeeman shift caused by the applied field, electronic states at higher-level sub-bands become normal-state-like. Therefore, the magnetic-field dependence of the Fermi-energy density of states reflects the multi-gap structure of the surface superconductivity.

We study Riemann-Roch expected curves on $\mathbb{P}^1 \times \mathbb{P}^1$ in the context of the Nagata-Biran conjecture. This conjecture predicts that, for a sufficiently large number of points, multi-point Seshadri constants of an ample line bundle on an algebraic surface are maximal. Biran gives an effective lower bound $N_0$. We construct examples showing that the assertions of the Nagata-Biran conjecture cannot hold for a small number of points. We discuss cases where our construction fails. We also observe that there exists a strong relation between Riemann-Roch expected curves on $\mathbb{P}^1 \times \mathbb{P}^1$ and the symplectic packing problem. Biran relates the packing problem to the existence of solutions of certain Diophantine equations. We construct such solutions for any ample line bundle on $\mathbb{P}^1 \times \mathbb{P}^1$ and a relatively small number of points. The solutions geometrically correspond to Riemann-Roch expected curves. Finally, we discuss in how far the Biran number $N_0$ is optimal in the case of $\mathbb{P}^1 \times \mathbb{P}^1$. In fact, we conjecture that it can be replaced by a lower number, and we provide evidence justifying this conjecture.

We consider a simple one-dimensional model to study the effects of the beam-beam force on the coherent dynamics of colliding beams. The key ingredient is a linearized beam-beam kick. We study only the quadrupole modes, with the dynamical variables being the 2nd-order moments of the canonical variables q, p. Our model is self-consistent in the sense that no higher order moments are generated by the linearized beam-beam kicks, and that the only source of violation of symplecticity is the radiation. We discuss the round beam case only, in which vertical and horizontal quantities are assumed to be equal (though they may be different in the two beams). Depending on the values of the tune and beam intensity, we observe steady states in which otherwise identical bunches have sizes that are equal, or unequal, or periodic, or behave chaotically from turn to turn. Possible implications of luminosity saturation with increasing beam intensity are discussed. Finally, we present some preliminary applications to an asymmetric collider.

Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.

An X-ray diffraction method is proposed for investigating thin films and near-surface layers under conditions of total external reflection (TER) and in the geometry of parallel glancing rays. Experimental realization of the method using the DRON-3.0 diffractometer is described. A calculation of the required aperture width of the Soller slit system is presented. The described diffraction scheme is used to investigate thin-film crystal structure at glancing angles ranging from TER up to 8-10 deg. The thickness of the investigated layer in this case changes from 2.5-8 nm up to 10³ nm. The suggested diffraction method with parallel glancing rays is especially important when investigating films thinner than 1000-2000 Å.

We consider symplectic time integrators in numerical general relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Stoermer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamilton constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Stoermer-Verlet method. The behavior of the methods is illustrated on two effectively (1+1)-dimensional versions of Einstein's equations, which allow us to investigate a perturbed Minkowski problem and the Schwarzschild spacetime. We compare symplectic and non-symplectic integrators for free evolution, showing very different numerical behavior for nearly-conserved quantities in the perturbed Minkowski problem. Further we compare free and constrained evolution, demonstrating in our examples that enforcing the momentum constraints can turn an unstable free evolution into a stable constrained evolution. This is demonstrated in the stabilization of a perturbed Minkowski problem with Dirac gauge, and in the suppression of the propagation of boundary instabilities into the interior of the domain in Schwarzschild spacetime
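As an illustration of the constrained scheme mentioned above, here is RATTLE (the constraint-preserving variant of Störmer-Verlet) applied not to Einstein's equations but to the toy problem of a planar pendulum holonomically constrained to the unit circle. A minimal sketch with hypothetical names; the constraint multipliers play the role the momentum constraints play in the abstract.

```python
import numpy as np

def rattle_pendulum(q, p, g, dt, steps):
    """RATTLE for a unit-mass point on the unit circle |q| = 1 with
    force F = (0, -g).  Each step enforces the position constraint
    |q| = 1 (multiplier lam) and the hidden velocity constraint
    q . p = 0 (multiplier mu) exactly, while remaining symplectic."""
    F = np.array([0.0, -g])
    for _ in range(steps):
        # Position update: q_new = a - lam * b must satisfy |q_new| = 1.
        a = q + dt * p + 0.5 * dt**2 * F
        b = dt**2 * q                      # direction of constraint force
        ab, bb = a @ b, b @ b
        lam = (ab - np.sqrt(ab**2 - bb * (a @ a - 1.0))) / bb
        q_new = a - lam * b
        p_half = p + 0.5 * dt * F - lam * dt * q
        # Velocity update: project so that q_new . p_new = 0.
        p_new = p_half + 0.5 * dt * F
        mu = (q_new @ p_new) / dt
        p_new = p_new - mu * dt * q_new
        q, p = q_new, p_new
    return q, p
```

The constraint is satisfied to machine precision at every step, and the energy E = |p|²/2 + g q_y stays within a bounded O(dt²) band, mirroring the stabilization the abstract reports for constrained evolution.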

Generalized Zakharov-Kuznetsov equation, a typical nonlinear wave equation, was studied based on the multi-symplectic theory in Hamilton space. The multi-symplectic formulations of generalized Zakharov-Kuznetsov equation with several conservation laws are presented. The multi-symplectic Preissmann method is used to discretize the formulations. The numerical experiment is given, and the results verify the efficiency of the multi-symplectic scheme.

In this paper a detailed Hamiltonian analysis of three-dimensional gravity without dynamics, proposed by V. Hussain, is performed. We report the complete structure of the constraints, and the Dirac brackets are explicitly computed. In addition, the Faddeev–Jackiw symplectic approach is developed; we report the complete set of Faddeev–Jackiw constraints and the generalized brackets, and then show that the Dirac and the generalized Faddeev–Jackiw brackets coincide with each other. Finally, the similarities and relative advantages of the Faddeev–Jackiw and Dirac formalisms are briefly discussed.
Highlights:
• We report the symplectic analysis for three-dimensional gravity without dynamics.
• We report the Faddeev–Jackiw constraints.
• A pure Dirac analysis is performed.
• The complete structure of the Dirac constraints is reported.
• We show that the symplectic and Dirac brackets coincide with each other.

Vacuum expectation value of the surface energy-momentum tensor is evaluated for a massive scalar field with general curvature coupling parameter subject to Robin boundary conditions on two parallel branes located in a (D+1)-dimensional anti-de Sitter bulk. The general case of different Robin coefficients on the separate branes is considered. As a regularization procedure the generalized zeta function technique is used, in combination with contour integral representations. The surface energies on the branes are presented in the form of sums of single-brane and second-brane-induced parts. For the geometry of a single brane both regions, on the left (L-region) and on the right (R-region) of the brane, are considered. The surface densities for the separate L- and R-regions contain pole and finite contributions. For an infinitely thin brane, taking these regions together, in odd spatial dimensions the pole parts cancel and the total surface energy is finite. The parts of the surface densities generated by the presence of the second brane are finite for all nonzero values of the interbrane separation. It is shown that for large distances between the branes the induced surface densities give rise to an exponentially suppressed cosmological constant on the brane. In the Randall-Sundrum braneworld model, for interbrane distances solving the hierarchy problem between the gravitational and electroweak mass scales, the cosmological constant generated on the visible brane is of the right order of magnitude compared with the value suggested by cosmological observations.

Let X be a complex manifold equipped with a projective structure P. There is a holomorphic principal C*-bundle L′_P over X associated with P. We show that the holomorphic cotangent bundle of the total space of L′_P, equipped with the Liouville symplectic form, has a canonical deformation quantization. This generalizes the construction in the work of Ben-Zvi and Biswas [''A quantization on Riemann surfaces with projective structure,'' Lett. Math. Phys. 54, 73 (2000)] done under the assumption that dim_C X = 1.

Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems: the nonlinear Landau damping and the electron Bernstein wave.

We study the Lorentz force equation of charged particle dynamics by considering its K-symplectic structure. As the Hamiltonian of the system can be decomposed into four parts, we are able to construct numerical methods that preserve the K-symplectic structure based on a Hamiltonian splitting technique. The newly derived numerical methods are explicit, and are shown in numerical experiments to be stable over long-term simulation. The error convergence as well as the long-term energy conservation of the numerical solutions is also analyzed by means of the Darboux transformation.
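The splitting idea described above can be illustrated generically. The sketch below is not the K-symplectic scheme of the abstract; it is a minimal, assumed example of building an explicit integrator by composing exactly solvable sub-flows (Strang splitting), shown for the harmonic oscillator H = p^2/2 + q^2/2:

```python
def strang_step(q, p, dt):
    # Split H = T(p) + V(q) with T = p^2/2 and V = q^2/2.
    # Each sub-flow is solved exactly; the symmetric composition
    # (Strang splitting, here equivalent to velocity Verlet) is an
    # explicit, second-order, symplectic method.
    p = p - 0.5 * dt * q   # exact flow of V for dt/2: dp/dt = -dV/dq
    q = q + dt * p         # exact flow of T for dt:   dq/dt = p
    p = p - 0.5 * dt * q   # exact flow of V for dt/2
    return q, p

q, p, dt = 1.0, 0.0, 0.05
E0 = 0.5 * (p * p + q * q)
for _ in range(100000):
    q, p = strang_step(q, p, dt)
drift = abs(0.5 * (p * p + q * q) - E0)
```

Because each sub-flow is an exact Hamiltonian flow, the composition is symplectic and the energy error stays bounded over the whole run instead of drifting secularly.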

Symplectic integration has been adopted for orbital motion tracking in the code SimTrack. SimTrack has been used extensively for dynamic aperture calculation with beam-beam interaction for the Relativistic Heavy Ion Collider (RHIC). Recently, proton spin tracking has been implemented on top of the symplectic orbital motion in this code. In this article, we explain the implementation of spin motion based on the Thomas-BMT equation, and the benchmarking against other spin tracking codes currently used for RHIC. Examples of calculating the spin closed orbit and spin tunes are also presented.

Symplectic mappings are discrete-time analogs of Hamiltonian systems. They appear in many areas of physics, including, for example, accelerators, plasmas, and fluids. Integrable mappings, a subclass of symplectic mappings, are equivalent to a twist map, with a rotation number that is constant along each phase trajectory. In this letter, we propose a succinct expression to determine the rotation number and present two examples. Analogous to the period of bounded motion in Hamiltonian systems, the rotation number is the most fundamental property of integrable maps, and it provides a way to analyze the phase-space dynamics.
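The rotation number of a given integrable map is also easy to estimate numerically. The sketch below (an assumed illustration, not the letter's succinct expression) averages the polar-angle advance along an orbit of a linear symplectic rotation, for which the exact answer is known:

```python
import numpy as np

def rotation_number(map_step, z0, n=10000):
    """Estimate the rotation number of a 2D map by averaging the
    polar-angle advance per iteration around the origin.
    Assumes the per-step advance is less than one full turn."""
    z = np.asarray(z0, dtype=float)
    total = 0.0
    for _ in range(n):
        z_new = map_step(z)
        da = np.arctan2(z_new[1], z_new[0]) - np.arctan2(z[1], z[0])
        total += da % (2 * np.pi)   # unwrap to a positive advance
        z = z_new
    return total / (2 * np.pi * n)

# Linear symplectic rotation with known tune nu = 0.137
nu = 0.137
c, s = np.cos(2 * np.pi * nu), np.sin(2 * np.pi * nu)
M = np.array([[c, -s], [s, c]])
est = rotation_number(lambda z: M @ z, [1.0, 0.0])
# est recovers nu for this exactly integrable example
```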

Analytic expressions are given for the major shell centroids of the collective potential V(β, γ) and the shape observable β² in the Sp(3,R) symplectic model. The tools of statistical spectroscopy are shown to be useful, firstly, in translating the requirement that the underlying shell structure be preserved into constraints on the parameters of the collective potential and, secondly, in giving a reasonable estimate for a truncation of the infinite-dimensional symplectic model space from experimental B(E2) transition strengths. Results based on the centroid information are shown to compare favorably with results from exact calculations in the case of 20Ne. (orig.)

Highlights: • We investigate the transverse vibration of an FGM pipe conveying fluid. • The FGM pipe conveying fluid can be classified into two cases. • The variations of the frequency with the power law exponent are obtained. • “Case 1” is relatively more reasonable than “case 2”. - Abstract: Problems related to the transverse vibration of a pipe conveying fluid made of functionally graded material (FGM) are addressed. Based on the inside and outside surface material compositions of the pipe, the FGM pipe conveying fluid can be classified into two cases. It is hypothesized that the physical parameters of the material vary through the pipe wall thickness according to a simple power law. A differential equation of motion expressed in non-dimensional quantities is derived by using Hamilton's principle for systems of changing mass. Using the assumed modes method, the pipe deflection function is expanded into a series in which each term is an admissible function multiplied by a generalized coordinate. The differential equation of motion is then discretized into second-order differential equations expressed in the generalized coordinates. Based on symplectic elastic theory and the introduction of a dual system and dual variables, Hamilton's dual equations are derived, and the original problem is reduced to an eigenvalue and eigenvector problem in the symplectic space. Finally, a symplectic method is employed to analyze the vibration and stability of the FGM pipe conveying fluid. For a clamped–clamped FGM pipe conveying fluid in “case 1” and “case 2”, the dimensionless critical flow velocity for first-mode divergence and the critical coupled-mode flutter flow velocity are obtained, and the variations of the real and imaginary parts of the dimensionless complex frequency with fluid velocity, mass ratio, and the power law exponent (or graded index, volume fraction) are analyzed.

A Hamiltonian structure on a finite-dimensional manifold can be introduced either by endowing it with a (pre)symplectic structure, or by describing the Poisson bracket with the help of a tensor with two upper indices, called the Poisson structure. Under the assumption of nondegeneracy, the Poisson structure is simply the inverse of the symplectic structure. In the degenerate case, too, the distinction between the two approaches is almost insignificant, because both presymplectic and Poisson structures split into symplectic structures on the leaves of appropriately chosen foliations. Hamiltonian structures that arise in the theory of evolution equations demonstrate something new in this respect: when one tries to operate in local terms, one is led to develop both approaches independently. Hamiltonian operators, being the infinite-dimensional counterparts of Poisson structures, were the first to become the subject of investigation. A considerable period of time passed before papers initiated research in the theory of symplectic operators, the counterparts of presymplectic structures. In what follows, we focus on the main achievements in this field.
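In the nondegenerate case, the statement that the Poisson structure is the inverse of the symplectic structure can be checked directly in coordinates. A small sketch for the canonical constant structures on R^{2n} (an assumed illustration, not part of the original text):

```python
import numpy as np

n = 3  # number of degrees of freedom
# Canonical symplectic matrix omega on R^{2n} in (q, p) coordinates
omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])

# The Poisson tensor (two upper indices) is the inverse matrix
pi = np.linalg.inv(omega)

# Since omega @ omega = -I, the inverse is just -omega
assert np.allclose(pi, -omega)
assert np.allclose(pi @ omega, np.eye(2 * n))
```

The sign of the resulting bracket depends on the convention chosen for omega, but the inverse relationship itself is convention-independent.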

Collective and microscopic pictures of nuclear dynamics are related in the framework of the time-dependent variational principle on symplectic trial manifolds. For symmetry-breaking systems such manifolds are constructed by cranking, and applied to study the nuclear isovector collective excitations. (author)

In the past few years there has been a substantial amount of research on symplectic integration. The subject is only part of a program concerned with numerically preserving a system's inherent geometrical structures. Volume preservation, reversibility, local conservation laws for elliptic equations, and systems with integral invariants are but a few examples of such invariant structures. In many cases one requires a numerical method to stay in the smallest possible appropriate group of phase-space maps. It is not the authors' opinion that symplecticity, for example, automatically makes a numerical method superior to all others, but it is their opinion that it should be taken seriously and that a conscious, informed decision be made in that regard. The authors present here a survey of open problems in symplectic integration, including other problems from the larger program. This is not intended as a review of symplectic integration, and it naturally derives from the authors' own research interests. At present the survey is incomplete, but the authors hope, with the help of colleagues, to include a more comprehensive survey in the proceedings of this conference. Many of the problems mentioned here call for numerical experimentation, some for application of suggested but untested methods, some for new methods, some for theorems, and some envisage large research programs.
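One experiment this viewpoint suggests is to compare a symplectic method with a non-symplectic one of the same order. The sketch below (an assumed illustration, not taken from the survey) integrates the pendulum with explicit Euler and with symplectic Euler; only the latter keeps the energy error bounded:

```python
import numpy as np

def explicit_euler(q, p, dt):
    # Non-symplectic: both updates use the old state.
    return q + dt * p, p - dt * np.sin(q)

def symplectic_euler(q, p, dt):
    # Symplectic: update the momentum first, then the position
    # with the new momentum.
    p = p - dt * np.sin(q)
    q = q + dt * p
    return q, p

def energy(q, p):
    return 0.5 * p * p - np.cos(q)   # pendulum Hamiltonian

dt, steps = 0.1, 5000
qe, pe = 1.0, 0.0   # explicit Euler state
qs, ps = 1.0, 0.0   # symplectic Euler state
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, dt)
    qs, ps = symplectic_euler(qs, ps, dt)
drift_euler = abs(energy(qe, pe) - energy(1.0, 0.0))
drift_sympl = abs(energy(qs, ps) - energy(1.0, 0.0))
```

On runs like this the explicit-Euler energy tends to drift steadily, while the symplectic-Euler energy error stays small and oscillatory; this bounded-error behavior, rather than pointwise accuracy, is what symplecticity buys.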

We find that, with a uniform mesh, the numerical schemes derived from the finite element method preserve a symplectic structure in the one-dimensional case and a multisymplectic structure in the two-dimensional case. These results are in fact the intrinsic reason why numerical experiments show such finite element algorithms to be accurate in practice.

Matrices with entries that are differential operators, endowing the phase space of an evolution system with a (pre)symplectic structure, are considered. Special types of such structures are explicitly described. Links with integrability, the geometry of loop spaces, and Baecklund transformations are traced.

We generalize Voisin's theorem on deformations of pairs of a symplectic manifold and a Lagrangian submanifold to the case of Lagrangian normal crossing subvarieties. Partial results are obtained for arbitrary Lagrangian subvarieties. We apply our results to the study of singular fibers of Lagrangian fibrations.

The significance of the Radar Cross Section (RCS) in military applications makes its prediction an important problem. This paper uses large-scale parallel Physical Optics (PO) to realize fast computation of the RCS of electrically large targets, which are modeled by Non-Uniform Rational B-Spline (NURBS) surfaces and coated with dielectric materials. Some numerical examples are presented to validate the method. In addition, 1024 CPUs were used at the Shanghai Supercomputer Center (SSC) to perform the simulation of a model with a maximum electrical size of 1966.7 λ for the first time in China, from which it can be seen that the method greatly speeds up the calculation and is capable of solving real-life RCS prediction problems.

The parallels between an actual Antarctic South Pole re-supply traverse conducted by the National Science Foundation (NSF) Office of Polar Programs in 2009 and the latest mission architecture concepts being generated by the United States National Aeronautics and Space Administration (NASA) for lunar and Mars surface-system scenarios have been studied. The challenges faced by both endeavors are similar, since each must deliver equipment and supplies to support operations in an extreme environment with little margin for error in order to be successful. By carefully and closely monitoring the manifesting and operational support equipment lists which enable this South Pole traverse, functional areas have been identified. The equipment required to support these functions will be listed with relevant properties such as mass, volume, spare parts, and maintenance schedules. This equipment will be compared to space systems currently in use and projected to be required to support equivalent and parallel functions in lunar and Mars missions, in order to provide a level of realistic benchmarking. Space operations have historically required significant amounts of support equipment and tools to operate and maintain the space systems that are the primary focus of the mission. By gaining insight and expertise in Antarctic South Pole traverses, space missions can use the experience gained over the last half century of Antarctic operations in order to design for operations, maintenance, dual use, robustness, and safety, which will result in a more cost effective, user friendly, and lower risk surface system on the Moon and Mars. It is anticipated that the U.S. Antarctic Program (USAP) will also realize benefits from this interaction with NASA in at least two areas: an understanding of how NASA plans and carries out its missions, and possible improved efficiency through factors such as weight savings, alternative technologies, or modifications in training and

This paper focuses on the application of symplectic integrators to numerical fluid analysis. For this purpose, we introduce Hamiltonian particle dynamics to simulate fluid behavior. The method is based on both the Hamiltonian formulation of a system and particle methods, and is therefore called Hamiltonian Particle Dynamics (HPD). In this paper, an example HPD application, namely the behavior of an incompressible inviscid fluid, is solved. In order to improve the spatial accuracy of HPD, it is combined with CIVA, a highly accurate interpolation method, but the combined method suffers from the problem that the invariants of the system are not conserved in long-time computation. To solve this problem, symplectic time integrators are introduced and their effectiveness is confirmed by numerical analyses. (author)

Extending Griffiths’ classical theory of period mappings for compact Kähler manifolds, this book develops and applies a theory of period mappings of “Hodge-de Rham type” for families of open complex manifolds. The text consists of three parts. The first part develops the theory. The second part investigates the degeneration behavior of the relative Frölicher spectral sequence associated to a submersive morphism of complex manifolds. The third part applies the preceding material to the study of irreducible symplectic complex spaces. The latter notion generalizes the idea of an irreducible symplectic manifold, dubbed an irreducible hyperkähler manifold in differential geometry, to possibly singular spaces. The three parts of the work are of independent interest, but intertwine nicely.

This work reviews the gauge embedding of both commutative and noncommutative (NC) theories using the symplectic formalism framework. To sum up the main features of the method: during the process of embedding, the infinitesimal gauge generators of the gauge-embedded theory are easily and directly chosen. Among other advantages, this enables greater control over the final Lagrangian and sheds some light on the so-called ''arbitrariness problem''. This alternative embedding formalism also presents a way to obtain a set of dynamically dual equivalent embedded Lagrangian densities after a finite number of steps in the iterative symplectic process, in contrast to the result obtained using the BFFT formalism. On the other hand, we will see precisely that the symplectic embedding formalism can be seen as an alternative and efficient procedure to the standard introduction of the Moyal product in order to produce an NC theory in a natural way. In order to construct a pedagogical explanation of the method for the nonspecialist, we exemplify the formalism by showing that the massive NC U(1) theory is embedded in a gauge theory using this alternative systematic path based on the symplectic framework. Further, as other applications of the method, we describe exactly how to obtain a Lagrangian description for the NC version of some systems, reproducing well-known theories. To name some of them, we apply the procedure to the Proca model, the irrotational fluid model, and the noncommutative self-dual model in order to obtain dual equivalent actions for these theories. To illustrate the process of introducing noncommutativity, we use the chiral oscillator and nondegenerate mechanics.

Starting from generic bilinear Hamiltonians, constructed from covariant vector, bivector or tensor fields, it is possible to derive a general symplectic structure which leads to holonomic and anholonomic formulations of the Hamilton equations of motion directly related to a hydrodynamic picture. This feature is gauge free and seems to be a deep link common to all interactions, electromagnetism and gravity included. This scheme could lead toward a full canonical quantization.

It is shown that the symplectic Sp(4) and Sp(6) and the exceptional G2 gauge field theories with complete spontaneous symmetry breaking through the Higgs mechanism are not asymptotically free. This, together with earlier results for other groups, hints at the existence of a general theorem according to which it would no longer be possible for asymptotic freedom to coexist with the absence of infrared divergences. (author)

The marginal distribution for two types of nonclassical states of a trapped ion, squeezed and correlated states and squeezed even and odd coherent states (squeezed Schroedinger cat states), is studied. The marginal distribution obtained for the two types of states is shown to satisfy a classical dynamical equation equivalent to the standard quantum evolution equation for the density matrix (wave function) derived in the symplectic tomography scheme. (author). 20 refs

We construct families of Lagrangian 3-torus fibrations resembling the topology of some of the singularities in Topological Mirror Symmetry. We perform a detailed analysis of the affine structure on the base of these fibrations near their discriminant loci. This permits us to classify the aforementioned families up to fibre-preserving symplectomorphism. The kinds of degenerations we investigate give rise to a large number of symplectic invariants. (author)

Variational symplectic algorithms have recently been developed for carrying out long-time simulation of charged particles in magnetic fields [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008); H. Qin, X. Guan, and W. Tang, Phys. Plasmas (2009); J. Li, H. Qin, Z. Pu, L. Xie, and S. Fu, Phys. Plasmas 18, 052902 (2011)]. As a direct consequence of their derivation from a discrete variational principle, these algorithms have very good long-time energy conservation, as well as exactly preserving discrete momenta. We present stability results for these algorithms, focusing on understanding how explicit variational integrators can be designed for this type of system. It is found that for explicit algorithms, an instability arises because the discrete symplectic structure does not become the continuous structure in the t→0 limit. We examine how a generalized gauge transformation can be used to put the Lagrangian in the “antisymmetric discretization gauge,” in which the discrete symplectic structure has the correct form, thus eliminating the numerical instability. Finally, it is noted that the variational guiding center algorithms are not electromagnetically gauge invariant. By designing a model discrete Lagrangian, we show that the algorithms are approximately gauge invariant as long as A and φ are relatively smooth. A gauge invariant discrete Lagrangian is very important in a variational particle-in-cell algorithm where it ensures current continuity and preservation of Gauss’s law [J. Squire, H. Qin, and W. Tang (to be published)].

This paper deals mainly with the application of the second-order symplectic implicit midpoint rule and its symmetric compositions to a post-Newtonian Hamiltonian formulation with canonical spin variables in relativistic compact binaries. The midpoint rule, as a basic algorithm, is directly used to integrate the completely canonical Hamiltonian system. On the other hand, there are symmetric composite methods based on a splitting of the Hamiltonian into two parts: the Newtonian part associated with a Kepler motion, and a perturbation part involving the orbital post-Newtonian and spin contributions, where the Kepler flow has an analytic solution and the perturbation can be calculated by the midpoint rule. An example is the second-order mixed leapfrog symplectic integrator with one stage integration of the perturbation flow and two semistage computations of the Kepler flow at every integration step. Also, higher-order composite methods such as the Forest-Ruth fourth-order symplectic integrator and its optimized algorithm are applicable. Various numerical tests including simulations of chaotic orbits show that the mixed leapfrog integrator is always superior to the midpoint rule in energy accuracy, while both of them are almost equivalent in computational efficiency. Particularly, the optimized fourth-order algorithm compared with the mixed leapfrog scheme provides good precision and needs no expensive additional computational time. As a result, it is worth performing a more detailed and careful examination of the dynamical structure of chaos and order in the parameter windows and phase space of the binary system.
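As a minimal sketch of the basic algorithm named above, the implicit midpoint rule can be implemented with fixed-point iteration for the implicit stage. The example below is assumed here for illustration (it uses a harmonic oscillator rather than the post-Newtonian spin Hamiltonian of the paper) and shows the rule's excellent long-term energy behavior:

```python
import numpy as np

def midpoint_step(z, f, dt, iters=8):
    # Implicit midpoint rule: z1 = z0 + dt * f((z0 + z1)/2).
    # The implicit equation is solved by fixed-point iteration,
    # which converges for dt small enough; the converged rule is
    # second order and symplectic for Hamiltonian vector fields f.
    z1 = z + dt * f(z)                 # explicit Euler predictor
    for _ in range(iters):
        z1 = z + dt * f(0.5 * (z + z1))
    return z1

# Hamiltonian vector field of H = p^2/2 + q^2/2, with z = [q, p]
f = lambda z: np.array([z[1], -z[0]])

z = np.array([1.0, 0.0])
E0 = 0.5 * np.dot(z, z)
for _ in range(10000):
    z = midpoint_step(z, f, 0.05)
E = 0.5 * np.dot(z, z)
```

For this linear system the midpoint rule conserves the quadratic energy essentially exactly, so the deviation E - E0 stays at roundoff level over the whole run.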

It was found that any homogeneous polynomial can be written as a sum of integrable polynomials of the same degree, whose Lie transformations can be evaluated exactly. By utilizing symplectic integrators, an integrable-polynomial factorization is developed to convert a symplectic map in the form of a Dragt-Finn factorization into a product of Lie transformations associated with integrable polynomials. A small number of factorization bases of integrable polynomials enables one to use high-order symplectic integrators so that the high-order spurious terms can be greatly suppressed. A symplectic map can thus be evaluated with the desired accuracy.

A new symplectic chaos synchronization of chaotic systems with uncertain chaotic parameters is studied. Traditional chaos synchronizations are special cases of symplectic chaos synchronization. A sufficient condition is given for the asymptotic stability of the null solution of the error dynamics and a parameter difference. Symplectic chaos synchronization with uncertain chaotic parameters may be applied to the design of secure communication systems. Finally, numerical results are studied for symplectic chaos synchronized from two identical Lorenz-Stenflo systems in three different cases.

Particle size was found to be an important factor in air bubble-induced detachment of colloidal particles from collector surfaces in a parallel plate flow chamber: in general, polystyrene particles with a diameter of 806 nm detached less than particles with a diameter of 1400 nm.

By allowing an air bubble to pass through a parallel plate flow chamber with negatively charged, colloidal polystyrene particles adhering to the bottom collector plate of the chamber, the detachment of adhering particles stimulated by surface tension forces induced by the passage of a liquid-air interface was studied.

We present a simple explicit construction of hyper-Kaehler and hyper-symplectic (also known as neutral hyper-Kaehler or hyper-para-Kaehler) metrics in 4D using the Bianchi type groups of class A. The construction underlies a correspondence between hyper-Kaehler and hyper-symplectic structures of dimension 4. (paper)

We present a family of trigonometrically fitted partitioned Runge-Kutta symplectic methods of fourth order with six stages. The solution of the one-dimensional time-independent Schroedinger equation by trigonometrically fitted symplectic integrators is considered. The Schroedinger equation is first transformed into a Hamiltonian canonical equation. Numerical results are obtained for the one-dimensional harmonic oscillator and the exponential potential.

We present a collection of examples borrowed from celestial mechanics and projective dynamics. In these examples symplectic structures with singularities arise naturally from regularization transformations, Appell's transformation, or classical changes of coordinates such as McGehee coordinates, which end up blowing up the symplectic structure or lowering its rank at certain points. The resulting geometrical structures that model these examples are no longer symplectic but symplectic with singularities, mainly of two types: bm-symplectic and m-folded symplectic structures. These examples include the three-body problem as a non-integrable example and some integrable reincarnations such as the two fixed-center problem. Given that the geometrical and dynamical properties of bm-symplectic manifolds and folded symplectic manifolds are well understood [10-12,9,15,13,14,24,20,22,25,28], we envisage that this new point of view on this collection of examples can shed some light on classical long-standing problems concerning the dynamical properties of these systems seen from the Poisson viewpoint.

We propose a new scheme for the generalized Kadomtsev-Petviashvili (KP) equation. The multi-symplectic conservation property of the new scheme is proved. Backward error analysis shows that the new multi-symplectic scheme has second-order accuracy in space and time. Numerical applications to the KPI equation and the KPII equation are presented in detail.

In this paper, using homotopy components of symplectic matrices, and basic properties of the Maslov-type index theory, we establish precise iteration formulae of the Maslov-type index theory for any path in the symplectic group starting from the identity. (author)

Four consecutive ferroelectric polarization switchings and an abnormal ring-like domain pattern can be induced by a single tip bias of a piezoresponse force microscope in a (010) triglycine sulfate (TGS) crystal. An external electric field anti-parallel to the original polarization induces the first polarization switching; however, the surface charges of TGS can move toward the tip location and induce a second polarization switching once the tip bias is removed. These two switchings create a ring-like pattern composed of a central domain with downward polarization and an outer domain with upward polarization. Once the two domains disappear gradually as a result of depolarization, the other two polarization switchings occur one by one where the tip contacts the TGS. However, the backswitching phenomenon does not occur when the external electric field is parallel to the original polarization. These results can be explained in terms of the surface charges rather than the charges injected inside.

Mean field theory is given a geometrical interpretation as a Hamiltonian dynamical system. The Hartree-Fock phase space is the Grassmann manifold, a symplectic submanifold of the projective space of the full many-fermion Hilbert space. The integral curves of the Hartree-Fock vector field are the time-dependent Hartree-Fock solutions, while the critical points of the energy function are the time-independent states. The mean field theory is generalized beyond determinants to coadjoint orbit spaces of the unitary group; the Grassmann variety is the minimal coadjoint orbit

Within the Langlands program, endoscopy is a fundamental process for relating automorphic representations of one group with those of another. In this book, Arthur establishes an endoscopic classification of automorphic representations of orthogonal and symplectic groups G. The representations are shown to occur in families (known as global L-packets and A-packets), which are parametrized by certain self-dual automorphic representations of an associated general linear group GL(N). The central result is a simple and explicit formula for the multiplicity in the automorphic discrete spectrum of G

Orthogonal, symplectic and unitary representations of finite groups lie at the crossroads of two more traditional subjects of mathematics, namely linear representations of finite groups and the theory of quadratic, skew-symmetric and Hermitian forms, and thus inherit some of the characteristics of both. This book is written as an introduction to the subject and not as an encyclopaedic reference text. The principal goal is an exposition of the known results on the equivalence theory, and related matters such as the Witt and Witt-Grothendieck groups, over the "classical" fields: algebraically closed, rea

We present evidence that second-order matrix-based beam optics programs violate the symplectic condition. A simple method to avoid this difficulty, based on a generating function approach to evaluating transfer maps, is described. A simple example illustrating the non-symplecticity of second-order matrix methods, and the effectiveness of our solution to the problem, is provided. We conclude that it is in fact possible to bring second-order matrix optics methods to a canonical form. The procedure for doing so has been implemented in the program DIMAT, and could be implemented in programs such as TRANSPORT and TURTLE, making them useful in multiturn applications. 15 refs
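Whether a given transfer matrix satisfies the symplectic condition can be tested mechanically: M is symplectic iff M^T J M = J, with J the canonical antisymmetric unit matrix. A small check of this kind (an assumed sketch, unrelated to the actual DIMAT implementation), using thin-lens optics elements:

```python
import numpy as np

def is_symplectic(M, tol=1e-12):
    # A 2n x 2n transfer matrix M is symplectic iff M^T J M = J.
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    return np.allclose(M.T @ J @ M, J, atol=tol)

# A drift of length L following a thin-lens quadrupole of focal
# length f, in one transverse plane (coordinates x, x'):
L, f = 2.0, 5.0
drift = np.array([[1.0, L], [0.0, 1.0]])
quad  = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
assert is_symplectic(drift @ quad)

# Perturbing a map, e.g. by rescaling, breaks the condition:
assert not is_symplectic(1.01 * drift)
```

In the 2x2 case the condition reduces to det M = 1, which is why truncating or rescaling a transfer map destroys symplecticity.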

The performance of many colliding storage rings is limited by the beam-beam interaction. A particle feels a nonlinear force produced by the encountering bunch at the collision. This beam-beam force acts mainly in the transverse directions, so the longitudinal effects have scarcely been studied, except for the case of a collision with a crossing angle. Recently, however, high luminosity machines are being considered where the beams are focused so strongly at the interaction point (IP) that the beam sizes can vary significantly within the bunch length. Krishnagopal and Siemann have shown that the bunch length effect should not be neglected in this case. The transverse kick depends on the longitudinal position as well as on the transverse position. If this effect is included, however, the action-reaction principle implies, at the same time, an energy change which depends on the transverse coordinates. Such an effect is reasonably understood from the fact that the beam-beam force is partly due to the electric field, which can change the energy. The action-reaction principle comes from the symplecticity of the interaction: the electromagnetic influence on a particle is described by a Hamiltonian. Symplecticity is one of the most fundamental requirements when studying beam dynamics; a nonsymplectic approximation can easily lead to unphysical results. In this paper, the authors propose a simple, approximate but symplectic mapping for the beam-beam interaction which includes the energy change as well as the bunch-length effect. In the next section, the mapping is proposed in Hamiltonian form, which directly assures its symplecticity. Then in Section 3, the nature of the mapping is studied by interpreting its consequences. The mapping itself is quite general and can be applied to any distribution function. Section 4 shows how it appears when the distribution function is Gaussian in the transverse directions. The mapping is applied to the

The formation of sheeting joints has been an outstanding problem in geology. New observations and analyses indicate that sheeting joints develop in response to a near-surface tension induced by compressive stresses parallel to a convex slope (hypothesis 1) rather than by removal of overburden by erosion, as conventionally assumed (hypothesis 2). Opening-mode displacements across the joints, together with the absence of mineral precipitates within the joints, mean that sheeting joints open in response to a near-surface tension normal to the surface rather than a pressurized fluid. Consideration of a plot of this tensile stress as a function of depth normal to the surface reveals that a true tension must arise in the shallow subsurface if the rate of that tensile stress change with depth is positive at the surface. Static equilibrium requires this rate (derivative) to equal P22 k2 + P33 k3 − ρg cosβ, where k2 and k3 are the principal curvatures of the surface, P22 and P33 are the respective surface-parallel normal stresses along the principal curvatures, ρ is the material density, g is gravitational acceleration, and β is the slope. This derivative will be positive, and sheeting joints can open, if at least one principal curvature is sufficiently convex (negative) and the surface-parallel stresses are sufficiently compressive (negative). At several sites with sheeting joints (e.g., Yosemite National Park in California), the measured topographic curvatures and the measured surface-parallel stresses of about −10 MPa combine to meet this condition. In apparent violation of hypothesis 1, sheeting joints occur locally at the bottom of Tenaya Canyon, one of the deepest glaciated, U-shaped (concave) canyons in the park. The canyon-bottom sheeting joints only occur, however, where the canyon is convex downstream, a direction that nearly coincides with the direction of the most compressive stress measured in the vicinity. The most compressive stress acting along the convex

An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems and preserves the symplectic property of the original continuous Hamiltonian system. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits, on one hand, exponential convergence as the number of collocation points increases with a fixed number of sub-intervals and, on the other hand, linear convergence as the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with an hp strategy based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

The Faddeev-Jackiw (FJ) method is a geometrically motivated approach based on the symplectic structure of the phase space. Its first-order character allows one to obtain the Hamiltonian equations of motion from a variational principle, and the geometric structure of the Hamiltonian phase space is obtained directly from the equations of motion via the inverse of the so-called symplectic two-form, if the inverse exists. A few years after its publication, the FJ formalism was extended, and through the years it has been applied to different systems. Gauge invariance is one of the most well established concepts in theoretical physics and one of the main ingredients of the Standard Model. However, we can ask whether it could have an alternative origin connected to another theory or principle. With this motivation in mind, we show in this paper that gauge invariance could be considered an emergent concept having its origin in the algebraic formalism of a well known method that deals with constrained systems, namely the FJ technique. Of course the idea of gauge invariance is older than the FJ method, but the results obtained here show that the SU(3) and SU(3) × SU(2) × U(1) gauge groups, which are fundamental to important theories like QCD and the Standard Model, can be obtained through the FJ formalism.

Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x, p) = p_i f(x) or H(x, p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superior conservation properties and efficiency.
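
For intuition, here is a simplified one-dimensional sketch (not the authors' generating-function construction): for a product-separable H(x, p) = p f(x), the x-equation dx/dt = f(x) decouples from p, and conservation of H along the flow gives p(t) = p(0) f(x(0))/f(x(t)) explicitly. The choice f(x) = 1 + x² below is an arbitrary illustrative example; if the x-advance is computed accurately, the resulting map is area-preserving:

```python
import numpy as np

def f(x):
    return 1.0 + x * x        # example f(x); H(x, p) = p * f(x)

def advance(x, p, t, nsub=200):
    """Explicit map for H = p*f(x): advance x by dx/dt = f(x) (RK4),
    then recover p from conservation of H along the flow."""
    x0 = x
    h = t / nsub
    for _ in range(nsub):     # RK4 on the decoupled x-equation
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x, p * f(x0) / f(x)   # H = p*f(x) is conserved

# numerical check that the map preserves phase-space area (det J = 1)
eps, t = 1e-5, 0.3
x0, p0 = 0.4, 0.7
xp_x = advance(x0 + eps, p0, t); xm_x = advance(x0 - eps, p0, t)
xp_p = advance(x0, p0 + eps, t); xm_p = advance(x0, p0 - eps, t)
J = np.array([[(xp_x[0] - xm_x[0]) / (2*eps), (xp_p[0] - xm_p[0]) / (2*eps)],
              [(xp_x[1] - xm_x[1]) / (2*eps), (xp_p[1] - xm_p[1]) / (2*eps)]])
print(np.linalg.det(J))   # ≈ 1
```

The determinant equals 1 exactly when the x-flow is exact, since ∂x₁/∂x₀ = f(x₁)/f(x₀) cancels the momentum factor f(x₀)/f(x₁).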

The Schläfli identity, which is important in Regge calculus and loop quantum gravity, is examined from a symplectic and semiclassical standpoint in the special case of flat, three-dimensional space. In this case a proof is given, based on symplectic geometry. A series of symplectic and Lagrangian manifolds related to the Schläfli identity, including several versions of a Lagrangian manifold of tetrahedra, are discussed. Semiclassical interpretations of the various steps are provided. Possible generalizations to three-dimensional spaces of constant (nonzero) curvature, involving Poisson-Lie groups and q-deformed spin networks, are discussed.

In this paper, we examine time scale symplectic (or Hamiltonian) systems and the associated quadratic functionals which contain a forward shift in the time variable. Such systems and functionals have a close connection to Jacobi systems for calculus of variations and optimal control problems on time scales. Our results, among which we consider the Reid roundabout theorem, generalize the corresponding classical theory for time-reversed discrete symplectic systems and complete the recently developed theory of time scale symplectic systems.

A symplectic Poisson solver calculates numerically the potential and fields due to a 2D distribution of particles in a way that assures symplecticity and smoothness automatically. Such a code, based on Fast Fourier Transformation combined with bicubic interpolation, is developed for use in multi-turn particle simulation in circular accelerators. Besides that, it may have a number of applications where the computation of space charge forces should obey a symplecticity criterion. Detailed computational schemes of all algorithms are outlined to facilitate practical programming.
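
The FFT core of such a solver can be sketched as a minimal periodic 2-D Poisson solve (the paper's scheme additionally enforces smooth, symplectic kicks via bicubic interpolation, which is omitted here). The manufactured solution φ = sin x · cos 2y, for which ∇²φ = −5φ, is a hypothetical test case:

```python
import numpy as np

n, L = 64, 2.0 * np.pi
x = np.arange(n) * L / n
X, Y = np.meshgrid(x, x, indexing="ij")

phi_exact = np.sin(X) * np.cos(2 * Y)   # manufactured potential
rho = -5.0 * phi_exact                  # since laplacian(phi) = -5*phi here

# solve laplacian(phi) = rho with periodic boundaries via FFT
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
rho_hat = np.fft.fft2(rho)
phi_hat = np.zeros_like(rho_hat)
phi_hat[k2 > 0] = -rho_hat[k2 > 0] / k2[k2 > 0]   # zero-mean convention
phi = np.fft.ifft2(phi_hat).real

err = np.abs(phi - phi_exact).max()
print(err)   # spectrally exact for a single Fourier mode
```

Because the test potential is a single Fourier mode, the spectral solve recovers it to machine precision; fields for the particle kicks would then come from smooth differentiation/interpolation of φ.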

Twisted products play an important role in quantum mechanics. A distinction is introduced between Vey *_γ products and strong Vey *_γ products, and it is proved that each *_γ product is equivalent to a Vey *_γ product. If b_3(W) = 0, the symplectic manifold (W,F) admits strong Vey *_γ products. If b_2(W) = 0, all *_γ products are equivalent, as are the Vey Lie algebras. In the general case, the formal Lie algebras generated by a *_γ product are characterized, and it is proved that the existence of a *_γ product is equivalent to the existence of a formal Lie algebra infinitesimally equivalent to a Vey Lie algebra at first order.

We have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for local chromatic compensation. As a result, the cell becomes nearly perfect up to third order in the momentum deviation.
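
A toy version of the underlying idea (a thin-lens FODO cell with assumed strengths, not the authors' map formalism): the transfer matrices depend on the momentum deviation δ through the quadrupole strength k/(1+δ), and differentiating the resulting phase advance gives the chromaticity, which is negative before any sextupole compensation:

```python
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(kl):
    return np.array([[1.0, 0.0], [-kl, 1.0]])

def tune(delta, kl=0.6, L=1.0):
    """Horizontal tune of a thin-lens FODO cell at momentum deviation delta."""
    f = kl / (1.0 + delta)           # chromatic quad strength
    M = thin_quad(f / 2) @ drift(L) @ thin_quad(-f) @ drift(L) @ thin_quad(f / 2)
    tr = np.trace(M)
    assert abs(tr) < 2.0, "cell must be stable"
    return np.arccos(tr / 2.0) / (2.0 * np.pi)

d = 1e-4
chroma = (tune(d) - tune(-d)) / (2 * d)   # dQ/d(delta): natural chromaticity
print(tune(0.0), chroma)                   # chroma < 0 for an uncorrected cell
```

In the map formalism of the abstract, this finite-difference derivative is replaced by transporting the parameter dependence analytically through the element maps.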

We start from Wootters' construction of discrete phase spaces and Wigner functions for qubits and, more generally, for finite-dimensional Hilbert spaces. We look at this framework from a non-commutative space perspective and focus on the Moyal product and the differential calculus on these discrete phase spaces. In particular, the qubit phase space provides the simplest example of a four-point non-commutative phase space. We give an explicit expression of the Moyal bracket as a differential operator. We then compare the quantum dynamics encoded by the Moyal bracket to the classical dynamics: we show that the classical Poisson bracket does not satisfy the Jacobi identity, thus leaving the Moyal bracket as the only consistent symplectic structure. We finally generalize our analysis to Hilbert spaces of prime dimension d and their associated d × d phase spaces.

We discuss the symplectic diffeomorphisms of a class of supermanifolds and the structure of the underlying infinite dimensional superalgebras. We construct a Chern-Simons (CS) gauge theory in 2+1 dimensions for these algebras. There exists a finite dimensional supersymmetric truncation which is the (2n−1)-dimensional Hamiltonian superalgebra H-tilde(n). With a central charge added, it is a superalgebra, C(n), associated with a Clifford algebra. We find an embedding of the d=3, N=2 anti-de Sitter superalgebra OSp(2|2)+OSp(2|2) in C(4), and construct a CS action for its infinite dimensional extension. We also discuss the construction of a CS action for the infinite dimensional extension of the d=3, N=2 superconformal algebra OSp(2,4).

The numerical model is a flexible-order accurate finite difference method that is known to be efficient and scalable on a CPU core (single thread). To achieve parallel performance of the relatively complex numerical model, we investigate a new trend in high-performance computing where many-core GPUs are utilized as high-throughput co-processors to the CPU. We describe and demonstrate how this approach makes it possible to do fast desktop computations for large nonlinear wave problems in numerical wave tanks (NWTs) with close to 50/100 million total grid points in double/single precision with 4 GB global device memory available. A new code base has been developed in C++ and Compute Unified Device Architecture (CUDA) C and is found to improve the runtime by more than an order of magnitude in double precision arithmetic for the same accuracy over an existing CPU (single thread) Fortran 90 code when executed on a single modern GPU.

A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of the sea surface for different polarizations and the bistatic scattering from a sea surface with a large incident angle and a large wind speed.
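
A minimal illustration of the FDTD core (1-D Yee updates in free space, normalized units; the UPML truncation and MPI decomposition that are the paper's actual contributions are omitted). With the "magic" time step c·Δt = Δz, a pulse propagates exactly one cell per step without numerical dispersion; the grid sizes and source parameters below are arbitrary choices for the sketch:

```python
import numpy as np

n, steps, src = 400, 120, 150
S = 1.0                      # Courant number c*dt/dz (exact propagation in 1-D)
Ex = np.zeros(n)
Hy = np.zeros(n)

for t in range(steps):
    Hy[:-1] += S * (Ex[1:] - Ex[:-1])            # H update (normalized units)
    Ex[1:-1] += S * (Hy[1:-1] - Hy[:-2])         # E update
    Ex[src] += np.exp(-((t - 30.0) / 8.0) ** 2)  # additive Gaussian source

# the right-going pulse should be centered near src + (steps - 30) cells
peak = np.argmax(Ex[n // 2:]) + n // 2
print(peak)   # close to 240
```

In the paper's setting, each MPI rank would own a slab of this grid, exchange one layer of boundary fields per step, and apply UPML loss terms near the outer walls instead of the bare truncation used here.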

This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system
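
For reference, a generic sketch of the integrator in question (not the paper's value-function analysis): a symplectic Euler step updates one variable using the already-updated other variable. For the harmonic oscillator H = (p² + q²)/2 it keeps the energy error bounded for all time, whereas explicit Euler's energy grows without bound:

```python
h, steps = 0.05, 2000
q_s, p_s = 1.0, 0.0    # symplectic Euler state
q_e, p_e = 1.0, 0.0    # explicit Euler state

for _ in range(steps):
    # symplectic Euler: kick with old q, then drift with the new p
    p_s -= h * q_s
    q_s += h * p_s
    # explicit Euler: both updates use old values
    q_e, p_e = q_e + h * p_e, p_e - h * q_e

E0 = 0.5
E_sym = 0.5 * (p_s**2 + q_s**2)
E_exp = 0.5 * (p_e**2 + q_e**2)
print(E_sym, E_exp)   # E_sym stays near 0.5; E_exp grows geometrically
```

The bounded energy error is what makes the scheme attractive for the long-horizon Hamiltonian systems arising from optimal control.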

Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

Two new solutions are obtained for the symplecticity conditions of the explicit third-order partitioned Runge-Kutta time integration method. One of them has a larger stability limit and better dispersion properties than Ruth's method.
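
For context, Ruth's original third-order scheme can be sketched for a separable Hamiltonian H = p²/2 + V(q) using the commonly quoted coefficients c = (1, −2/3, 2/3) and d = (−1/24, 3/4, 7/24) in the drift-then-kick convention (an illustration, not one of the paper's new solutions):

```python
import math

C = (1.0, -2.0 / 3.0, 2.0 / 3.0)          # drift coefficients
D = (-1.0 / 24.0, 3.0 / 4.0, 7.0 / 24.0)  # kick coefficients

def ruth3_step(q, p, h, dVdq):
    for c, d in zip(C, D):
        q += c * h * p          # drift
        p -= d * h * dVdq(q)    # kick with the updated q
    return q, p

# harmonic oscillator V(q) = q^2/2: exact solution q(t) = cos(t)
h, T = 0.01, 10.0
q, p = 1.0, 0.0
for _ in range(int(round(T / h))):
    q, p = ruth3_step(q, p, h, lambda x: x)

err_traj = abs(q - math.cos(T))
err_energy = abs(0.5 * (p * p + q * q) - 0.5)
print(err_traj, err_energy)   # small trajectory error, bounded energy error
```

Each substep is an exact shear in phase space, so the composition is symplectic regardless of the coefficients; the coefficients only set the order and, as the abstract notes, the stability limit and dispersion behavior.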

Orbit propagation algorithms for satellite relative motion relying on Runge-Kutta integrators are non-symplectic, a situation that leads to incorrect global behavior and degraded accuracy. Thus, attempts have been made to apply symplectic methods to integrate satellite relative motion. However, so far none of these symplectic propagation schemes has taken into account the effect of atmospheric drag. In this paper, drag-generalized symplectic and variational algorithms for satellite relative orbit propagation are developed in different reference frames, and numerical simulations with and without the effect of atmospheric drag are presented. It is also shown that high-order versions of the newly developed variational and symplectic propagators are more accurate and significantly faster than Runge-Kutta-based integrators, even in the presence of atmospheric drag.

We show how the classical model for the Dirac electron of Barut and coworkers can be obtained as a Hamiltonian theory by constructing an exact symplectic form on the total space of the spin bundle over spacetime.

To each natural star product on a Poisson manifold $M$ we associate an antisymplectic involutive automorphism of the formal neighborhood of the zero section of the cotangent bundle of $M$. If $M$ is symplectic, this mapping is shown to be the inverse mapping of the formal symplectic groupoid of the star product. The construction of the inverse mapping involves modular automorphisms of the star product.

The calculation of Dirac brackets (DB) using a symplectic matrix approach, but in a Hamiltonian framework, is discussed, and the calculation of the DB for the supersymmetric extension of QED (super-QED) is shown. The relation between the zero-mode of the pre-symplectic matrix and the gauge transformations admitted by the model is verified. A general prescription for constructing Lagrangians linear in the velocities is also presented.

The symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy, for both stable motion and chaotic motion. The results are compared with those of the fourth-order Runge-Kutta algorithm and a fourth-order symplectic algorithm, which shows that SADA has higher accuracy than the others in long-term calculations of the CR3BP.

Maxillofacial traumas are common, secondary to road traffic accidents, sports injuries, and falls, and require sophisticated radiological imaging to diagnose precisely. Direct surgical reconstruction is complex and requires clinical expertise. Bio-modelling helps in reconstructing a surface model from 2D contours. In this manuscript we construct the 3D surface using 2D Computerized Tomography (CT) scan contours. The fractured part of the cranial vault is reconstructed using a GC1 rational cubic Ball curve with three free parameters; the 2D contours are then lifted into 3D with an equidistant z component. The constructed surface is represented by a contour-blending interpolant. At the end of this manuscript a case report of a parietal bone fracture is also illustrated by employing this method, with a Graphical User Interface (GUI) illustration.

A variational symplectic integrator for the guiding center motion of charged particles in general magnetic fields is developed to enable accurate long-time simulation studies of magnetized plasmas. Instead of discretizing the differential equations of the guiding center motion, the action of the guiding center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The variational symplectic integrator conserves exactly a discrete Lagrangian symplectic structure and globally bounds the numerical error in energy by a small number for all simulation time steps. Compared with standard integrators, such as the fourth order Runge-Kutta method, the variational symplectic integrator has superior numerical properties over long integration time. For example, in a two-dimensional tokamak geometry, the variational symplectic integrator is able to guarantee the accuracy for both the trapped and transit particle orbits for arbitrarily long simulation time. This is important for modern large-scale simulation studies of fusion plasmas where it is critical to use algorithms with long-term accuracy and fidelity. The variational symplectic integrator is expected to have a wide range of applications.
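
The discrete-variational idea can be shown in miniature (a pendulum with assumed Lagrangian L = q̇²/2 + cos q, not the guiding-center Lagrangian of the paper): discretize the action with a rectangle rule, L_d(q_k, q_{k+1}) = h[((q_{k+1} − q_k)/h)²/2 − V(q_k)], and set the discrete Euler-Lagrange equations D₂L_d(q_{k−1}, q_k) + D₁L_d(q_k, q_{k+1}) = 0. This yields the recursion q_{k+1} = 2q_k − q_{k−1} − h²V′(q_k), whose energy error stays bounded over arbitrarily many steps:

```python
import math

def Vp(q):                 # pendulum: V(q) = -cos(q), so V'(q) = sin(q)
    return math.sin(q)

h, steps = 0.05, 20000
q_prev = 1.0                              # q_0 (released from rest)
q = q_prev - 0.5 * h * h * Vp(q_prev)     # q_1 from a Taylor start, p_0 = 0

E0 = -math.cos(1.0)
max_dev = 0.0
for _ in range(steps):
    q_next = 2.0 * q - q_prev - h * h * Vp(q)   # discrete Euler-Lagrange step
    p = (q_next - q_prev) / (2.0 * h)           # centered velocity at q_k
    E = 0.5 * p * p - math.cos(q)
    max_dev = max(max_dev, abs(E - E0))
    q_prev, q = q, q_next

print(max_dev)   # bounded energy error over all 20000 steps
```

Because the update comes from a discrete action principle rather than from discretizing the ODE, it inherits a discrete symplectic structure, which is the same mechanism that bounds the energy error in the guiding-center integrator described above.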

This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.

We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus scattering and azimuthal angles are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.

The first use of a surface coil to obtain a 31P NMR spectrum from an intact rat by Ackerman and colleagues initiated a revolution in magnetic resonance imaging (MRI) and spectroscopy (MRS). Today, we take it for granted that one can detect signals in regions external to an RF coil; at the time, however, this concept was most unusual. In the approximately four decade long period since its introduction, this simple idea gave birth to an increasing number of innovations that has led to transformative changes in the way we collect data in an in vivo magnetic resonance experiment, particularly with MRI of humans. These innovations include spatial localization and/or encoding based on the non-uniform B1 field generated by the surface coil, leading to new spectroscopic localization methods, image acceleration, and unique RF pulses that deal with B1 inhomogeneities and even reduce power deposition. Without the surface coil, many of the major technological advances that define the extraordinary success of MRI in clinical diagnosis and in biomedical research, as exemplified by projects like the Human Connectome Project, would not have been possible.

Quantum dots (QDs) have been considered to be promising probes for biosensing, bioimaging, and diagnosis. However, their toxicity issues caused by heavy metals in QDs remain to be addressed, in particular for their in vivo biomedical applications. In this study, a parallel comparative investigation in vitro and in vivo is presented to disclose the impact of synthetic methods and the subsequent surface modifications on the toxicity of QDs. Cellular assays after exposure to QDs were conducted, including cell viability assessment, DNA breakage study at the single-cell level, intracellular reactive oxygen species (ROS) measurement, and transmission electron microscopy, to evaluate their toxicity in vitro. Mice experiments after QD administration, including analysis of hemobiological indices, pharmacokinetics, histological examination, and body weight, were further carried out to evaluate their systematic toxicity in vivo. Results show that QDs fabricated by the thermal decomposition approach in organic phase and encapsulated by an amphiphilic polymer (denoted as QDs-1) present the least toxicity in acute damage, compared with QDs surface-engineered by glutathione-mediated ligand exchange (denoted as QDs-2) and those prepared by the coprecipitation approach in aqueous phase capped with mercaptopropionic acid (denoted as QDs-3). With the extension of the investigation time of mice respectively injected with QDs, we found that the damage caused by QDs to the organs can be

A big challenge for standard interferogram analysis methods such as Temporal Phase Shifting or Fourier Transform is a parasitic set of fringes which may occur in the analyzed fringe pattern intensity distribution. It is encountered, for example, when transparent glass plates with quasi-parallel surfaces are tested in Fizeau or Twyman-Green interferometers. Besides the beams reflected from the plate front surface and the interferometer reference, the beam reflected from the plate rear surface also plays an important role; its amplitude is comparable with the amplitudes of the other beams. As a result we face three families of high-contrast fringes which cannot be easily separated. Earlier we proposed a competitive solution for flatness measurements which relies on eliminating one of those fringe sets from the three-beam interferogram and separating the two remaining ones with the use of the 2D Continuous Wavelet Transform (CWT). In this work we cover the case when the intensity of the reference beam is significantly higher than the intensities of the two object beams. The main advantage of differentiating beam intensities is the change in contrast of the individual fringe families. Processing of such three-beam interferograms is modified but still takes advantage of the 2D CWT. We show how to implement this method in Twyman-Green and Fizeau setups and compare this processing path and measurement procedures with previously proposed solutions.

Amperometry is a powerful method to record quantal release events from chromaffin cells and is widely used to assess how specific drugs modify quantal size, kinetics of release, and early fusion pore properties. Surface-modified CMOS-based electrochemical sensor arrays allow simultaneous recordings from multiple cells. A reliable, low-cost technique is presented here for efficient targeting of single cells specifically to the electrode sites. An SU-8 microwell structure is patterned on the chip surface to provide insulation for the circuitry as well as cell trapping at the electrode sites. A shifted electrode design is also incorporated to increase the flexibility of the dimension and shape of the microwells. The sensitivity of the electrodes is validated by a dopamine injection experiment. Microwells with dimensions slightly larger than the cells to be trapped ensure excellent single-cell targeting efficiency, increasing the reliability and efficiency for on-chip single-cell amperometry measurements. The surface-modified device was validated with parallel recordings of live chromaffin cells trapped in the microwells. Rapid amperometric spikes with no diffusional broadening were observed, indicating that the trapped and recorded cells were in very close contact with the electrodes. The live cell recording confirms in a single experiment that spike parameters vary significantly from cell to cell but the large number of cells recorded simultaneously provides the statistical significance.

A parallel-plate flow chamber was used to measure the attachment and detachment rates of Escherichia coli to a glass surface at various fluid velocities. The effect of flagella on adhesion was investigated by performing experiments with several E. coli strains: AW405 (motile); HCB136 (nonmotile mutant with paralyzed flagella); and HCB137 (nonmotile mutant without flagella). We compared the total attachment rates and the fraction of bacteria retained on the surface to determine how the presence and movement of the flagella influence transport to the surface and adhesion strength in this dynamic system. At the lower fluid velocities, there was no significant difference in the total attachment rates for the three bacterial strains; nonmotile strains settled at a rate that was of the same order of magnitude as the diffusion rate of the motile strain. At the highest fluid velocity, the effect of settling was minimized to better illustrate the importance of motility, and the attachment rates of both nonmotile strains were approximately five times slower than that of the motile bacteria. Thus, different processes controlled the attachment rate depending on the parameter regime in which the experiment was performed. The fractions of motile bacteria retained on the glass surface increased with increasing velocity, whereas the opposite trend was found for the nonmotile strains. This suggests that the rotation of the flagella enables cells to detach from the surface (at the lower fluid velocities) and strengthens adhesion (at higher fluid velocities), whereas nonmotile cells detach as a result of shear. There was no significant difference in the initial attachment rates of the two nonmotile species, which suggests that merely the presence of flagella was not important in this stage of biofilm development. Copyright 2002 Wiley Periodicals, Inc.

In this paper, the capability of a specific cable-driven parallel mechanism to interact with a variety of surfaces is investigated. This capability could be of use in, for example, the cleaning of large building surfaces. A method is presented to investigate the workspace for which the cables do not

In the mid-20th century, Dr. Donald Stookey identified the importance and usability of nucleating agents and mechanisms for the development of glass-ceramic materials. Today, a number of internal and surface mechanisms, as well as combinations thereof, have been established in the production of glass-ceramic materials. In order to create new innovative material properties, the present study focuses on the precipitation of CaMgSi2O6 as a minor phase in Li2Si2O5-based glass-ceramics. In the base glass of the SiO2-Li2O-P2O5-Al2O3-K2O-MgO-CaO system, P2O5 serves as nucleating agent for the internal precipitation of Li2Si2O5 crystals, while a mechanical activation of the glass surface by means of ball milling is necessary to nucleate the minor CaMgSi2O6 crystal phase. For a successful precipitation of CaMgSi2O6, a minimum ratio of MgO and CaO in the range between 1.4 mol% and 2.9 mol% in the base glasses was determined. The nucleation and crystallization of both crystal phases take place during sintering of a powder compact. Depending on the quality of the sintering process, the dense Li2Si2O5-CaMgSi2O6 glass-ceramics show a mean biaxial strength of up to 392 ± 98 MPa. The microstructure of the glass-ceramics is formed by large (5-10 µm) bar-like CaMgSi2O6 crystals randomly embedded in a matrix of small (≤ 0.5 µm) plate-like Li2Si2O5 crystals arranged in an interlocking manner. While there is no significant influence of the minor CaMgSi2O6 phase on the strength of the material, the translucency of the material decreases upon precipitation of the minor phase.

Because of stability requirements on the numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better stress accuracy.

We solve the complex extension of the chiral Gaussian symplectic ensemble, defined as a Gaussian two-matrix model of chiral non-Hermitian quaternion real matrices. This leads to the appearance of Laguerre polynomials in the complex plane, and we prove their orthogonality. Alternatively, a complex eigenvalue representation of this ensemble is given for general weight functions. All k-point correlation functions of complex eigenvalues are given in terms of the corresponding skew orthogonal polynomials in the complex plane for finite N, where N is the matrix size or number of eigenvalues, respectively. We also allow for an arbitrary number of complex conjugate pairs of characteristic polynomials in the weight function, corresponding to massive quark flavours in applications to field theory. Explicit expressions are given in the large-N limit at both weak and strong non-Hermiticity for the weight of the Gaussian two-matrix model. This model can be mapped to the complex Dirac operator spectrum with non-vanishing chemical potential. It belongs to the symmetry class of either the adjoint representation or two colours in the fundamental representation using staggered lattice fermions.

Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation can become unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations of the symplectic FFD method for seismic wavefield modeling in isotropic and anisotropic media, and use the BP salt model and the BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used for seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.
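As a simplified illustration of the underlying idea (symplectic time stepping combined with Fourier differentiation in space), the sketch below applies a Störmer-Verlet step to the 1-D acoustic wave equation with a spectrally evaluated Laplacian. The grid, velocity, and standing-wave test problem are illustrative choices, not the paper's BP models.

```python
import numpy as np

def symplectic_fourier_step(p, v, c, dx, dt):
    """One Stormer-Verlet (second-order symplectic) step for the 1-D
    acoustic wave equation p_tt = c^2 p_xx, with the Laplacian evaluated
    spectrally: symplectic in time, Fourier in space."""
    n = p.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)       # spectral wavenumbers
    lap = lambda f: np.real(np.fft.ifft(-(k**2) * np.fft.fft(f)))
    v_half = v + 0.5 * dt * c**2 * lap(p)           # half kick
    p_new = p + dt * v_half                         # drift
    v_new = v_half + 0.5 * dt * c**2 * lap(p_new)   # half kick
    return p_new, v_new

# standing-wave test: p(x,0) = sin(x), exact solution sin(x) cos(c t)
n, L, c = 128, 2.0 * np.pi, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
dx, dt = L / n, 1e-3
p, v = np.sin(x), np.zeros(n)
for _ in range(1000):                               # integrate to t = 1
    p, v = symplectic_fourier_step(p, v, c, dx, dt)
err = np.max(np.abs(p - np.sin(x) * np.cos(c * 1.0)))
```

The spectral Laplacian removes spatial dispersion for resolved modes, so the remaining error comes only from the second-order time stepping.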

We refine the recently developed fourth-order extended phase space explicit symplectic-like methods for inseparable Hamiltonians, using Yoshida's triple product combined with a midpoint permuted map. The midpoint between the original variables and their corresponding extended variables at every integration step is readjusted as the initial value of both sets of variables for the next integration step. The triple-product construction is clearly superior in computational efficiency to the composition of two triple products. Above all, the new midpoint permutations are more effective in maintaining the equality of the original variables and their corresponding extended ones at each integration step than the existing sequent permutations of momenta and coordinates. As a result, our new construction shares the benefit of implicit symplectic integrators in conserving the second post-Newtonian Hamiltonian of spinning compact binaries. In particular, it works well for the chaotic case, where the existing sequent permuted algorithm fails. When dissipative effects from the gravitational radiation reaction are included, the new symplectic-like method exhibits a secular drift in the energy error of the dissipative system for orbits that are regular in the absence of radiation, as an implicit symplectic integrator does. In spite of this, it is superior to the same-order implicit symplectic integrator in accuracy and efficiency. The new method is particularly useful for studying the long-term evolution of inseparable Hamiltonian problems.
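A minimal sketch of the extended-phase-space idea with a midpoint permutation, for a toy bounded inseparable Hamiltonian. This is a second-order leapfrog in the doubled phase space, not the paper's fourth-order triple-product construction; the Hamiltonian and step size are invented for illustration.

```python
import numpy as np

def extended_leapfrog_step(q, p, Q, P, dHdq, dHdp, dt):
    """One step in the extended phase space (q, p) + (Q, P).  Sub-flow A
    uses H(q, P), so q and P stay fixed while p and Q move; sub-flow B
    uses H(Q, p), so Q and p stay fixed while P and q move.  Each
    sub-flow is exactly solvable, and the midpoint permutation resets
    both copies to their mean."""
    p = p - 0.5 * dt * dHdq(q, P)          # sub-flow A, half step
    Q = Q + 0.5 * dt * dHdp(q, P)
    P = P - dt * dHdq(Q, p)                # sub-flow B, full step
    q = q + dt * dHdp(Q, p)
    p = p - 0.5 * dt * dHdq(q, P)          # sub-flow A, half step
    Q = Q + 0.5 * dt * dHdp(q, P)
    q = Q = 0.5 * (q + Q)                  # midpoint permutation
    p = P = 0.5 * (p + P)
    return q, p, Q, P

# bounded inseparable test Hamiltonian H = (p^2 + q^2 + p^2 q^2) / 2
H    = lambda q, p: 0.5 * (p**2 + q**2 + p**2 * q**2)
dHdq = lambda q, p: q * (1.0 + p**2)
dHdp = lambda q, p: p * (1.0 + q**2)

q = p = Q = P = 0.5
E0 = H(q, p)
for _ in range(20000):                     # integrate to t = 20
    q, p, Q, P = extended_leapfrog_step(q, p, Q, P, dHdq, dHdp, 1e-3)
drift = abs(H(q, p) - E0)
```

The permutation is what keeps the two copies from drifting apart, which is the role the paper's midpoint permutations refine.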

A general symplectic method for the random response analysis of infinitely periodic structures subjected to stationary/non-stationary random excitations is developed using symplectic mathematics in conjunction with variable separation and the pseudo-excitation method (PEM). Starting from the equation of motion for a single loaded substructure, symplectic analysis is first used to eliminate the dependent degrees of freedom through condensation. A Fourier expansion of the condensed equation of motion is then applied to separate the variables of time and wave number, enabling the necessary recurrence scheme to be developed. The random response is finally determined by implementing PEM. The proposed method is validated by comparison with results available in the literature and is then applied to a more complicated time-dependent coupled system.

A detailed development of the symplectic geometry formalism for a general Lagrangian field theory is presented. Special attention is paid to the theories with constraints and/or gauge degrees of freedom. Special cases of Yang-Mills theory, general relativity and Witten's string field theory are studied and the generators of (super-) Poincare transformations are derived using their respective symplectic forms. The formalism extends naturally to theories formulated in the superspace. The second part of the thesis deals with issues in covariant quantization. By studying the symplectic geometry of the Green-Schwarz covariant superstring action, we elucidate some aspects of its covariant quantization. We derive the on-shell gauge-fixed action and the equations of motion for all the fields. Finally, turning to Siegel's version of the superparticle action, we perform its BRST quantization

Given an arbitrary symplectic tracking code, one can construct a full-turn symplectic map that approximates the result of the code to high accuracy. The map is defined implicitly by a mixed-variable generating function. The implicit definition is no great drawback in practice, thanks to an efficient use of Newton's method to solve for the explicit map at each iteration. The generator is represented by a Fourier series in angle variables, with coefficients given as B-spline functions of action variables. It is constructed by using results of single-turn tracking from many initial conditions. The method has been applied to a realistic model of the SSC in three degrees of freedom. Orbits can be mapped symplectically for 10^7 turns on an IBM RS6000 model 320 workstation in a run of about one day.
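The implicit evaluation can be sketched as follows: given a mixed-variable generating function F2(q, P) with p = dF2/dq and Q = dF2/dP, Newton's method solves the first relation for P, after which Q follows explicitly. The generator below is a hypothetical one-degree-of-freedom toy, not the Fourier/B-spline generator of the paper.

```python
import numpy as np

def map_from_generator(q, p, dF2dq, d2F2dqdP, dF2dP, tol=1e-12):
    """Evaluate the symplectic map defined implicitly by F2(q, P):
    p = dF2/dq(q, P) is solved for P by 1-D Newton iteration, then
    Q = dF2/dP(q, P) gives the new coordinate."""
    P = p                              # identity is a good first guess
    for _ in range(50):
        f = dF2dq(q, P) - p
        P -= f / d2F2dqdP(q, P)        # Newton update
        if abs(f) < tol:
            break
    return dF2dP(q, P), P              # (Q, P)

# toy generator F2(q, P) = q P + dt (P^2/2 - (k/4) cos 2q)
dt, k = 0.1, 0.3
dF2dq    = lambda q, P: P + dt * 0.5 * k * np.sin(2.0 * q)
d2F2dqdP = lambda q, P: 1.0
dF2dP    = lambda q, P: q + dt * P

Q, P = map_from_generator(0.4, 0.2, dF2dq, d2F2dqdP, dF2dP)
```

Because the generator is evaluated exactly, the resulting map is symplectic to machine precision regardless of how accurately F2 approximates the tracking code.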

The Faddeev-Jackiw symplectic formalism for constrained systems is applied to analyze the dynamical content of a model describing two massive relativistic particles with interaction, which can also be interpreted as a bigravity model in one dimension. We systematically investigate the nature of the physical constraints, for which we also determine the zero-mode structure of the corresponding symplectic matrix. After identifying the whole set of constraints, we determine the transformation laws, encoded in the remaining zero modes, for the full set of dynamical variables under the gauge symmetries. In addition, we use an appropriate gauge-fixing procedure, the conformal gauge, to compute the quantization brackets (Faddeev-Jackiw brackets) and obtain the number of physical degrees of freedom. Finally, we argue that this symplectic approach can be helpful for assessing physical constraints and understanding the gauge structure of theories of interacting spin-2 fields.

In this paper, we propose a variational integrator for nonlinear Schrödinger equations with variable coefficients. It is shown that our variational integrator is naturally multi-symplectic. The discrete multi-symplectic structure of the integrator is presented by a multi-symplectic form formula that can be derived from the discrete Lagrangian boundary function. As two examples of nonlinear Schrödinger equations with variable coefficients, cubic nonlinear Schrödinger equations and Gross–Pitaevskii equations are extensively studied by the proposed integrator. Our numerical simulations demonstrate that the integrator is capable of preserving the mass, momentum, and energy conservation during time evolutions. Convergence tests are presented to verify that our integrator has second-order accuracy both in time and space.
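As a small stand-in illustrating structure preservation for the cubic NLS, the sketch below uses a split-step Fourier scheme rather than the paper's variational integrator: both sub-flows (an exact nonlinear phase rotation and the exact linear spectral flow) preserve the mass exactly, so the discrete mass is conserved to round-off.

```python
import numpy as np

def split_step_nls(u, dx, dt, steps):
    """Split-step Fourier scheme for the cubic NLS
    i u_t + u_xx + |u|^2 u = 0.  The nonlinear step multiplies u by a
    unit-modulus phase (|u| preserved pointwise); the linear step
    multiplies the spectrum by a unit-modulus factor (Parseval), so the
    mass sum(|u|^2) is invariant in both sub-steps."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    lin = np.exp(-1j * k**2 * dt)
    for _ in range(steps):
        u = u * np.exp(1j * np.abs(u)**2 * dt)   # nonlinear phase step
        u = np.fft.ifft(lin * np.fft.fft(u))     # exact linear step
    return u

n, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
u0 = (1.0 / np.cosh(x)).astype(complex)          # sech-shaped pulse
u = split_step_nls(u0.copy(), L / n, 1e-3, 1000)
mass0 = np.sum(np.abs(u0)**2)
mass = np.sum(np.abs(u)**2)
```

The same conservation check (mass, and analogously momentum and energy) is what the paper's convergence and preservation tests verify for its multi-symplectic integrator.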

With the advance of nanofabrication, the capability of nanoscale metallic structure fabrication opens a whole new field of study in nanoplasmonics, defined as the investigation of photon-electron interaction in the vicinity of nanoscale metallic structures. The strong oscillation of free electrons at the interface between metal and surrounding dielectric material, caused by propagating surface plasmon resonance (SPR) or localized surface plasmon resonance (LSPR), enables a variety of new applications in different areas, especially biological sensing techniques. One of the promising biological sensing applications of surface plasmon polaritons is surface enhanced Raman spectroscopy (SERS), which reinforces the feeble signal of traditional Raman scattering by at least 10^4 times. It enables highly sensitive and precise molecule identification with the assistance of a SERS substrate. Until now, the design of new SERS substrate fabrication processes has remained a thriving area, since no dominant design has emerged yet. The ideal process should achieve both high sensitivity and low device cost in a simple and reliable way. In this thesis two promising approaches for fabricating nanostructured SERS substrates are proposed: the thermal dewetting technique and the nanoimprint replica technique. These two techniques are demonstrated to be capable of fabricating high-performance SERS substrates in a reliable and cost-efficient fashion. In addition, these two techniques have their own unique characteristics and can be integrated with other sensing techniques to build a serial or parallel sensing system. Such a combined system with different sensing techniques overcomes the inherent limitations of SERS detection and leverages it to a whole new level of systematic sensing. The development of a sensing platform based on the thermal dewetting technique is covered in the first half of this thesis. The process optimization, selection of substrate material

Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
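For the harmonic oscillator the Nth-order Taylor-series idea can be sketched in a few lines, since the time derivatives obey the closed recursion d^(k+2)x = -d^k x. This is illustrative only; the general algorithm handles arbitrary right-hand sides.

```python
import numpy as np

def taylor_step(x, v, dt, order):
    """One step of an Nth-order Taylor-series ("algebraic dynamics")
    integrator for x'' = -x.  The derivative pair (d^k x, d^k v)
    advances by (dx, dv) -> (dv, -dx), so the truncated Taylor series
    of the exact solution is summed term by term."""
    dx, dv = x, v                  # k-th derivatives of x and v (k = 0)
    x_new, v_new, fact = 0.0, 0.0, 1.0
    for k in range(order + 1):
        x_new += dx * dt**k / fact
        v_new += dv * dt**k / fact
        dx, dv = dv, -dx           # derivative recursion
        fact *= (k + 1)
    return x_new, v_new

x, v, dt = 1.0, 0.0, 0.1
for _ in range(100):               # integrate to t = 10 at 8th order
    x, v = taylor_step(x, v, dt, 8)
err = abs(x - np.cos(10.0))
```

Raising the truncation order N lowers the local error to O(dt^(N+1)), which is the "controllable precision" referred to above.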

We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large-scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long-time trajectories of a 1000-unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are identified. We also discuss certain variations on the basic symplectic integration technique.
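The workhorse among such explicit symplectic integrators is velocity Verlet; a minimal sketch on a small fixed-end harmonic chain (a stand-in for the polyethylene chain, with invented parameters) shows the bounded energy error that motivates its use in molecular dynamics.

```python
import numpy as np

def velocity_verlet(q, p, force, dt, steps):
    """Velocity Verlet: kick-drift-kick with one force evaluation per
    step, O(dt^2) accurate, symplectic, so the energy error stays
    bounded instead of drifting secularly."""
    f = force(q)
    for _ in range(steps):
        p = p + 0.5 * dt * f       # half kick
        q = q + dt * p             # drift (unit masses)
        f = force(q)
        p = p + 0.5 * dt * f       # half kick
    return q, p

def force(q):
    qe = np.concatenate(([0.0], q, [0.0]))     # fixed-end harmonic chain
    return qe[:-2] - 2.0 * q + qe[2:]

def energy(q, p):
    qe = np.concatenate(([0.0], q, [0.0]))
    return 0.5 * np.sum(p**2) + 0.5 * np.sum(np.diff(qe)**2)

rng = np.random.default_rng(0)
q0, p0 = 0.1 * rng.normal(size=20), np.zeros(20)
E0 = energy(q0, p0)
q, p = velocity_verlet(q0.copy(), p0.copy(), force, 0.05, 20000)
drift = abs(energy(q, p) - E0) / E0
```

A nonsymplectic method of the same order (e.g. explicit Euler variants) would show a systematic energy drift over the same trajectory length.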

A differential algebraic integration algorithm is developed for symplectic mapping through a three-dimensional (3-D) magnetic field. The self-consistent reference orbit in phase space is obtained by making a canonical transformation to eliminate the linear part of the Hamiltonian. Transfer maps from the entrance to the exit of any 3-D magnetic field are then obtained through slice-by-slice symplectic integration. The particle phase-space coordinates are advanced by using the integrable polynomial procedure. This algorithm is a powerful tool for obtaining nonlinear maps for insertion devices in synchrotron light sources or complicated magnetic fields in the interaction regions of high-energy colliders.

An infinite dimensional canonical symplectic structure and structure-preserving geometric algorithms are developed for the photon-matter interactions described by the Schrödinger-Maxwell equations. The algorithms preserve the symplectic structure of the system and the unitary nature of the wavefunctions, and bound the energy error of the simulation for all time-steps. This new numerical capability enables us to carry out first-principle based simulation study of important photon-matter interactions, such as the high harmonic generation and stabilization of ionization, with long-term accuracy and fidelity.

We present experimental and theoretical results on noise-induced attractor hopping between dynamical states found in a single transverse mode vertical-cavity surface-emitting laser (VCSEL) subject to parallel optical injection. These transitions involve dynamical states with different polarizations of the light emitted by the VCSEL. We report an experimental map identifying, in the injected power-frequency detuning plane, regions where attractor hopping between two, or even three, different states occurs. The transition between these behaviors is characterized by using residence time distributions. We find multistability regions that are characterized by heavy-tailed residence time distributions, following a power law with exponent -1.83 ± 0.17. Between these regions we find coherence enhancement of noise-induced attractor hopping, in which transitions between states occur regularly. Simulation results show that frequency detuning variations and spontaneous emission noise play a role in causing switching between attractors. We also find attractor hopping between chaotic states with different polarization properties. In this case, simulation results show that spontaneous emission noise inherent to the VCSEL is enough to induce this hopping.

The purpose of this paper is to develop a high-efficiency air-cleaning system for volatile organic compounds (VOCs) present in the workshop of a chemical factory. A novel parallel surface/packed-bed discharge (PSPBD) reactor, which combines surface discharge (SD) plasma with packed-bed discharge (PBD) plasma, was designed and employed for VOC removal in a closed vessel. In order to optimize the structure of the PSPBD reactor, the discharge characteristics, benzene removal efficiency, and energy yield were compared for different discharge lengths, quartz tube diameters, shapes of the external high-voltage electrode, packed-bed discharge gaps, and packing pellet sizes. In the circulation test, 52.8% of the benzene was removed and the energy yield reached 0.79 mg/kJ after a 210 min discharge treatment in the PSPBD reactor, which was 10.3% and 0.18 mg/kJ higher, respectively, than in the SD reactor, and 21.8% and 0.34 mg/kJ higher, respectively, than in the PBD reactor at 53 J/l. The improved performance in benzene removal and energy yield can be attributed to the plasma chemistry effect of the sequential processing in the PSPBD reactor. The VOC mineralization and organic intermediates generated during discharge treatment were followed by COx selectivity and FT-IR analyses. The experimental results indicate that the PSPBD plasma process is an effective and energy-efficient approach for VOC removal in an indoor environment.

This thesis is concerned with ''N pairs of symplectic fermions'', which are examples of logarithmic conformal field theories in two dimensions. The mathematical language of two-dimensional conformal field theories (on Riemann surfaces of genus zero) is that of vertex operator algebras. The representation category Rep V_ev of the even part of the symplectic fermion vertex operator super-algebra is conjecturally a factorisable finite ribbon tensor category. This determines an isomorphism of projective representations between two SL(2,Z)-actions associated to V_ev. The first action is obtained by modular transformations on the space of so-called pseudo-trace functions of a vertex operator algebra; for V_ev this was developed by A.M. Gaberdiel and I. Runkel. The second action uses the conjecture that Rep V_ev is a factorisable finite ribbon tensor category and thus carries a projective SL(2,Z)-action on a certain Hom-space [Ly1,Ly2,KL]. To compare the two, we calculate the SL(2,Z)-action on the representation category of a general factorisable quasi-Hopf algebra. We then show that Rep V_ev is conjecturally ribbon equivalent to Rep Q, for Q a factorisable quasi-Hopf algebra, and calculate the SL(2,Z)-action explicitly on Rep Q. The result is that the two SL(2,Z)-actions indeed agree. This provides the first example of such a comparison for logarithmic conformal field theories.

This paper first applies finite impulse response (FIR) filter theory combined with the fast Fourier transform (FFT) method to generate two-dimensional Gaussian rough surfaces. Using the electric field integral equation (EFIE), it applies the method of moments (MOM) with RWG vector basis functions and Galerkin's method to investigate electromagnetic beam scattering by a two-dimensional PEC Gaussian rough surface on personal computer (PC) clusters. The details of the parallel conjugate gradient method (CGM) for solving the matrix equation are also presented, and the numerical simulations are carried out through the message passing interface (MPI) platform on the PC clusters. Significantly, the parallel MOM provides an effective technique for solving two-dimensional rough surface electromagnetic-scattering problems. The influences of the root-mean-square height, the correlation length, and the polarization on the beam scattering characteristics of two-dimensional PEC Gaussian rough surfaces are finally discussed.
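The conjugate gradient iteration at the heart of the parallel solver can be sketched in serial form; in the MPI version the matrix-vector product is the step that is distributed over ranks. Note that plain CG assumes a symmetric positive-definite system, whereas EFIE matrices are complex, so in practice a normal-equation or biconjugate variant is used; the test matrix below is invented.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Plain CG for A x = b with A symmetric positive definite.  The
    only operation touching the full matrix is A @ p, which is what a
    parallel MoM code distributes across processes."""
    x = np.zeros_like(b)
    r = b - A @ x
    p, rs = r.copy(), r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # new search direction
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50.0 * np.eye(50)       # SPD test matrix
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
residual = np.linalg.norm(A @ x - b)
```

Because each iteration needs only one matrix-vector product, CG-type methods avoid the O(N^3) cost and full-matrix factorization of direct solvers, which is what makes the parallel MoM attractive for large rough-surface problems.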

The detachment of polystyrene particles adhering to collector surfaces with different electrostatic charge and hydrophobicity by attachment to a passing air bubble has been studied in a parallel plate flow chamber. Particle detachment decreased linearly with increasing air bubble velocity and

Electrostatic interactions between colloidal particles and collector surfaces were found to be important in particle detachment as induced by the passage of air bubbles in a parallel-plate flow chamber. Electrostatic interactions between adhering particles and passing air bubbles, however, were

A complete exact classification of Hamiltonian systems with one degree of freedom and a Morse Hamiltonian is carried out. As this is a main part of the trajectory classification of integrable Hamiltonian systems with two degrees of freedom, the corresponding generalization is considered. The dual problem of classifying a symplectic form together with a Morse foliation is carried out as well.

A new construction of authentication codes with arbitration and multireceiver from singular symplectic geometry over finite fields is given. The parameters are computed. Assuming that the encoding rules are chosen according to a uniform probability distribution, the probabilities of success for different types of deception are also computed.

The ideal separatrix of divertor tokamaks is a degenerate manifold where the stable and unstable manifolds coincide. Non-axisymmetric magnetic perturbations remove the degeneracy and split the separatrix manifold. This creates an extremely complex topological structure called homoclinic tangles. The unstable manifold intersects the stable manifold and creates alternating inner and outer lobes at successive homoclinic points. The Hamiltonian system must preserve the symplectic topological invariance, and this controls the size and radial extent of the lobes. Very recently, lobes near the X-point have been experimentally observed in MAST [A. Kirk et al, PRL 108, 255003 (2012)]. We have used the DIII-D map [A. Punjabi, NF 49, 115020 (2009)] to calculate symplectic homoclinic tangles of the ideal separatrix of DIII-D from the type I ELMs represented by the peeling-ballooning modes (m,n)=(30,10)+(40,10). The DIII-D map is symplectic, accurate, and formulated in natural canonical coordinates that are invertible to physical coordinates [A. Punjabi and H. Ali, POP 15, 122502 (2008)]. To our knowledge, we are the first to symplectically calculate these tangles in physical space. Homoclinic tangles of the separatrix can cause radial displacement of mobile passing electrons and create sheared radial electric fields and currents, resulting in radial flows, drifts, differential spinning, reduction in turbulence, and other effects. This work is supported by the grants DE-FG02-01ER54624 and DE-FG02-04ER54793.
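The symplectic (area-preserving) property that constrains the lobes can be illustrated with the Chirikov standard map, a textbook stand-in for the DIII-D map: its one-step Jacobian has determinant exactly 1, which is the invariance that fixes the lobe areas of a tangle.

```python
import numpy as np

def standard_map(theta, p, K):
    """One iteration of the Chirikov standard map, a minimal symplectic
    map exhibiting separatrix splitting and homoclinic tangles."""
    p_new = p + K * np.sin(theta)
    theta_new = (theta + p_new) % (2.0 * np.pi)
    return theta_new, p_new

# Symplecticity check: the Jacobian of one step.
# d(theta', p')/d(theta, p) = [[1 + K cos(theta), 1], [K cos(theta), 1]]
K, th, p = 1.2, 0.7, 0.3
J = np.array([[1.0 + K * np.cos(th), 1.0],
              [K * np.cos(th),       1.0]])
det = np.linalg.det(J)
```

Any numerical map used for tangle calculations must preserve this unit determinant exactly, not just approximately, which is why a symplectic map rather than direct field-line integration is used.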

We will present a consistent description of Hamiltonian dynamics on the 'symplectic extended phase space' that is analogous to that of a time-independent Hamiltonian system on the conventional symplectic phase space. The extended Hamiltonian H_1 and the pertaining extended symplectic structure that establish the proper canonical extension of a conventional Hamiltonian H will be derived from a generalized formulation of Hamilton's variational principle. The extended canonical transformation theory then naturally permits transformations that also map the time scales of the original and destination system, while preserving the extended Hamiltonian H_1, and hence the form of the canonical equations derived from H_1. The Lorentz transformation, as well as time scaling transformations in celestial mechanics, will be shown to represent particular canonical transformations in the symplectic extended phase space. Furthermore, the generalized canonical transformation approach allows us to directly map explicitly time-dependent Hamiltonians into time-independent ones. An 'extended' generating function that defines transformations of this kind will be presented for the time-dependent damped harmonic oscillator and for a general class of explicitly time-dependent potentials. In the appendix, we will re-establish the proper form of the extended Hamiltonian H_1 by means of a Legendre transformation of the extended Lagrangian L_1.
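A quick numerical check of the construction: autonomizing a driven oscillator H(q,p,t) = p^2/2 + q^2/2 + eps*q*cos(t) by adjoining the time t and its conjugate momentum p_t gives an extended Hamiltonian H_1 = H + p_t that is conserved along the flow. The integrator below is plain RK4, used only to verify the conservation law; eps and the initial data are illustrative.

```python
import numpy as np

def rhs(y, eps):
    """Canonical equations from H_1(q, t, p, p_t) = H(q, p, t) + p_t,
    with the evolution parameter s playing the role of 'time'."""
    q, t, p, pt = y
    return np.array([p,                      # dq/ds  =  dH_1/dp
                     1.0,                    # dt/ds  =  dH_1/dp_t
                     -q - eps * np.cos(t),   # dp/ds  = -dH_1/dq
                     eps * q * np.sin(t)])   # dp_t/ds = -dH_1/dt

def rk4_step(y, h, eps):
    k1 = rhs(y, eps)
    k2 = rhs(y + 0.5 * h * k1, eps)
    k3 = rhs(y + 0.5 * h * k2, eps)
    k4 = rhs(y + h * k3, eps)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

eps, h = 0.3, 0.01
H1 = lambda y: 0.5 * y[2]**2 + 0.5 * y[0]**2 + eps * y[0] * np.cos(y[1]) + y[3]
y = np.array([1.0, 0.0, 0.0, 0.0])
y[3] = -H1(y)                  # choose p_t so that H_1 = 0 initially
for _ in range(2000):          # integrate to s = 20
    y = rk4_step(y, h, eps)
err = abs(H1(y))
```

The conserved H_1 plays exactly the role that a time-independent Hamiltonian plays on the conventional phase space, which is the point of the construction described above.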

We establish the concept of the principal and nonprincipal solution for the so-called symplectic dynamic systems on time scales. We also present a brief survey of the history of these concepts for differential and difference equations.

Based on the eigenvalues of the real symplectic ABCD-matrix that characterizes the linear canonical integral transformation, a classification of this transformation and the associated ABCD-system is proposed and some nuclei (i.e. elementary members) in each class are described. In the

Using methods of geometry and cohomology developed recently, we study the Monge-Ampère equation, arising as the first nontrivial equation in the associativity equations, or WDVV equations. We describe Hamiltonian and symplectic structures as well as recursion operators for this equation in its

An optimized iterative formulation is presented for directly transforming a Taylor map of a symplectic system into a Deprit-type Lie transformation, that is, a composition of a linear transfer matrix and a single Lie transformation, to an arbitrary order.

Data from orbits of a symplectic integrator can be interpolated so as to construct an approximation to the generating function of a Poincare map. The time required to compute an orbit of the symplectic map induced by the generator can be much less than the time to follow the same orbit by symplectic integration. The construction has been carried out previously for full-turn maps of large particle accelerators, and a big saving in time (for instance a factor of 60) has been demonstrated. A shortcoming of the work to date arose from the use of canonical polar coordinates, which precluded map construction in small regions of phase space near coordinate singularities. This paper shows that Cartesian coordinates can also be used, thus avoiding singularities. The generator is represented in a basis of tensor product B-splines. Under weak conditions the spline expansion converges uniformly as the mesh is refined, approaching the exact generator of the Poincare map as defined by the symplectic integrator, in some parallelepiped of phase space centered at the origin

PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries as the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes of up to 2 billion degrees of freedom. Written in object-oriented Fortran 90, the code requires recent compilers compatible with Fortran 2003; at the time of this writing this means gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into the memory allotted to a single processor core. The current limitation on the problem size PFLOTRAN can handle is the restriction of the HDF5 file format used for parallel I/O to 32-bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.
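The quoted 32-bit index limit translates into a concrete size ceiling; a back-of-the-envelope check (assuming 8 bytes per degree of freedom for a double-precision state vector, which is an illustrative assumption):

```python
# The 32-bit index caps the number of addressable entries in a
# parallel-HDF5 dataset; with one IEEE-754 double per degree of
# freedom, a single state vector at that limit occupies 32 GiB.
max_dof = 2**32                          # 4,294,967,296 indexable entries
bytes_per_dof = 8                        # one double each (assumption)
state_vector_gib = max_dof * bytes_per_dof / 2**30
```

This is why the roughly 4.3-billion-entry figure is quoted above as the practical ceiling on problem size until HDF5 parallel I/O supports 64-bit indices.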

Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term integration. Therefore, explicit symplectic algorithms are much preferable to non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. We then give explicit symplectic algorithms based on generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology is not new; it has been applied to the non-relativistic dynamics of charged particles. The algorithm for relativistic dynamics, however, has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
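A toy electrostatic analogue of the splitting strategy: H = sqrt(1 + p^2) + V(x) (with c = m = 1) splits into two exactly solvable pieces, giving an explicit symplectic kick-drift-kick scheme. The potential and parameters are invented for illustration; the paper's algorithms additionally handle full electromagnetic fields via generating functions.

```python
import numpy as np

def relativistic_leapfrog(x, p, dVdx, dt, steps):
    """Split H = sqrt(1 + p^2) + V(x): the kick (p updated by the
    force, x fixed) and the drift (x advanced at the relativistic
    velocity p/gamma, p fixed) are each exact flows, so their
    kick-drift-kick composition is explicit and symplectic."""
    for _ in range(steps):
        p = p - 0.5 * dt * dVdx(x)             # half kick
        x = x + dt * p / np.sqrt(1.0 + p**2)   # drift, p constant
        p = p - 0.5 * dt * dVdx(x)             # half kick
    return x, p

H = lambda x, p: np.sqrt(1.0 + p**2) + 0.5 * x**2
x, p = 1.5, 0.0
E0 = H(x, p)
x, p = relativistic_leapfrog(x, p, lambda x: x, 1e-2, 10000)
drift = abs(H(x, p) - E0)
```

Because each sub-flow is exact, the energy error stays bounded over arbitrarily long runs, which is the property needed for secular simulations such as runaway-electron studies.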

A detailed chemistry model has been adapted and developed for surface chemistry and heat and mass transfer between H2/CO/air/steam/CO2 mixtures and vertical parallel Pt-coated surfaces. The model is based on a simplified Deutschmann reaction scheme for methane surface combustion and on the analysis by Elenbaas for buoyancy-induced heat transfer between parallel plates. Mass transfer is treated by the heat and mass transfer analogy. The proposed model is able to simulate the H2/CO recombination phenomena characteristic of parallel-plate Passive Autocatalytic Recombiners (PARs), which have been proposed and implemented as a promising hydrogen-control strategy for the safety of nuclear power stations and other industries. The transient model is able to reproduce the warm-up phase of the PAR and its shut-down, as well as dynamic changes within the surrounding atmosphere. The model has been implemented within the MELCOR code and assessed against results of the Battelle Model Containment tests of the Zx series. Results show accurate predictions and a better performance than the traditional methods in integral codes, i.e. empirical correlations, which are also very case-specific. The influence of CO present in the mixture on PAR performance is also addressed in this paper.

A parallel-plate flow chamber is developed in order to study cellular adhesion phenomena. An image analysis system is used to observe individual cells exposed to flow in situ and to determine area, perimeter, and shape of these cells as a function of time and shear stress. With this flow system the

Yang–Baxter R operators symmetric with respect to the orthogonal and symplectic algebras are considered in a uniform way. Explicit forms for the spinorial and metaplectic R operators are obtained. L operators, obeying the RLL relation with the orthogonal or symplectic fundamental R matrix, are considered in the interesting cases where their expansion in inverse powers of the spectral parameter is truncated. Unlike the case of special linear algebra symmetry, the truncation results in additional conditions on the Lie algebra generators from which the L operators are built, which can be fulfilled in distinguished representations only. Further, generalized L operators, obeying the modified RLL relation with the fundamental R matrix replaced by the spinorial or metaplectic one, are considered in the particular case of linear dependence on the spectral parameter. It is shown how, by fusion with respect to the spinorial or metaplectic representation, these first-order spinorial L operators reproduce the ordinary L operators with second-order truncation.

The dynamical structure of topologically massive gravity is studied in the context of the Faddeev-Jackiw symplectic approach. It is shown that this method allows us to avoid some of the ambiguities that arise in studying the gauge structure via the Dirac formalism. In particular, the complete set of constraints and the generators of the gauge symmetry of the theory are obtained straightforwardly via the zero modes of the symplectic matrix. In order to obtain the generalized Faddeev-Jackiw brackets and count the local physical degrees of freedom of this model, an appropriate gauge-fixing procedure is introduced. Finally, the similarities and relative advantages of the Faddeev-Jackiw method compared with Dirac's formalism are briefly discussed.

This is the third part of a series of talks in which the authors present applications of methods of wavelet analysis to polynomial approximations for a number of accelerator physics problems. They consider the generalization of the variational wavelet approach for nonlinear polynomial problems to the case of Hamiltonian systems, for which they need to preserve the underlying symplectic, Poissonian or quasicomplex structures in any type of calculation. They use the approach for the problem of explicit calculation of Arnold-Weinstein curves via the Floer variational approach from symplectic topology. The loop solutions are parameterized by the solutions of a reduced algebraic problem, the matrix Quadratic Mirror Filter equations. They also consider a wavelet approach to the calculation of Melnikov functions in the theory of homoclinic chaos in perturbed Hamiltonian systems.

The software SixTrack provides symplectic proton tracking over a large number of turns. The code is used for the tracking of beam halo particles and the simulation of their interaction with the collimators, in order to study the efficiency of the LHC collimation system. Tracking simulations for heavy-ion beams require taking into account the mass-to-charge ratio of each particle, because heavy ions can be subject to fragmentation at their passage through the collimators. In this paper we present the derivation of a Hamiltonian for multi-isotopic heavy-ion beams and symplectic tracking maps derived from it. The resulting tracking maps were implemented in the tracking software SixTrack. With this modification, SixTrack can be used to natively track heavy-ion beams of multiple isotopes through a magnetic accelerator lattice.

An explicit fourth-order finite-difference time-domain (FDTD) scheme using the symplectic integrator is applied to electromagnetic simulation. A feasible numerical implementation of the symplectic FDTD (SFDTD) scheme is specified. In particular, new strategies for the air-dielectric interface treatment and the near-to-far-field (NFF) transformation are presented. By using the SFDTD scheme, both the radiation and the scattering of three-dimensional objects are computed. Furthermore, the energy-conserving property of the SFDTD scheme is verified in long-term simulation. Numerical results suggest that the SFDTD scheme is more efficient than the traditional FDTD method and other high-order methods, and can save computational resources.
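
As an aside, the staggered leapfrog update at the heart of any FDTD scheme is itself the simplest member of the symplectic family referred to here. The following toy 1-D sketch (second-order, not the fourth-order SFDTD of the paper; grid size, Courant number and function names are illustrative choices of ours) shows the bounded-energy behaviour being exploited:

```python
import numpy as np

def fdtd_1d(steps=600, n=200, courant=0.5):
    """Staggered leapfrog update of E and H with reflecting (PEC) walls."""
    ez = np.zeros(n)        # E field at integer grid points
    hy = np.zeros(n - 1)    # H field at half-integer grid points
    energy = []
    for t in range(steps):
        hy += courant * np.diff(ez)          # half-cell-shifted H update
        ez[1:-1] += courant * np.diff(hy)    # E update; ez[0] = ez[-1] = 0
        if t < 60:                           # short Gaussian source, then free run
            ez[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)
        energy.append(0.5 * (ez @ ez) + 0.5 * (hy @ hy))
    return np.array(energy)
```

After the source is switched off, the recorded field energy stays in a bounded band rather than drifting, which is the long-term property the fourth-order SFDTD scheme strengthens further.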

This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.

In this paper, we consider the multi-symplectic Runge-Kutta (MSRK) methods applied to the nonlinear Dirac equation in relativistic quantum physics, based on a discovery of the multi-symplecticity of the equation. In particular, the conservation of energy, momentum and charge under MSRK discretizations is investigated by means of numerical experiments and comparisons with non-MSRK methods. The experiments reveal that MSRK methods applied to the nonlinear Dirac equation exactly preserve the conservation laws of charge and momentum, and conserve energy to the numerical accuracy of the method utilized. It is verified numerically that MSRK methods are stable and convergent with respect to the conservation laws of energy, momentum and charge, and that they preserve not only the inner geometric structure of the equation but also some crucial conservative properties in quantum physics. A remarkable advantage of MSRK methods applied to the nonlinear Dirac equation is the precise preservation of the charge conservation law.

The Feynman path integral is used to quantize the symplectic leaves of the Poisson-Lie group SU(2)*. In this way we obtain the unitary representations of Uq(su(2)). This is achieved by finding explicit Darboux coordinates and then using a phase space path integral. I discuss the *-structure of SU(2)* and give a detailed description of its leaves using various parameterizations, and also compare the results with the path integral quantization of spin.

A systematic approach to the C*-Weyl algebra W(E,σ) over a possibly degenerate pre-symplectic form σ on a real vector space E of possibly infinite dimension is elaborated in an almost self-contained manner. The construction is based on the theory of Kolmogorov decompositions for σ-positive-definite functions on involutive semigroups and their associated projective unitary group representations. The σ-positive-definite functions also provide the C*-norm of W(E,σ), the latter being shown to be *-isomorphic to the twisted group C*-algebra of the discrete vector group E. The connections to related constructions are indicated. The treatment of the fundamental symmetries is outlined for arbitrary pre-symplectic σ. The construction method is applied especially to the trivial symplectic form σ=0, leading to the commutative Weyl algebra over E, which is shown to be isomorphic to the C*-algebra of the almost periodic continuous functions on the topological dual E′_τ of E with respect to an arbitrary locally convex Hausdorff topology τ on E. It is demonstrated that the almost periodic compactification aE′_τ of E′_τ is independent of the chosen locally convex topology τ on E, and that aE′_τ is continuously group-isomorphic to the character group of E. Applications of the results to the procedures of strict and continuous deformation quantization are mentioned in the outlook.

Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives, since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK method. The numerical results verify the new methods and are superior to those generated by conventional methods in seismic wave modeling.

In the absence of a longitudinal magnetic field, symplectic tracking can be achieved by replacing the magnets by a series of point magnets and drift spaces. To treat the case when a longitudinal magnetic field is also present, this procedure is modified in this paper by replacing the drift space by a solenoidal drift, which is defined as the motion of a particle in a uniform longitudinal magnetic field. A symplectic integrator can be obtained by subdividing each magnet into pieces and replacing each magnet piece by point magnets, with only transverse fields, and solenoidal drift spaces. The reference orbit used here is made up of arcs of circles and straight lines which join smoothly with each other. For this choice of reference orbit, the results required to track particles are obtained: the transfer functions and the transfer times for the different elements. It is shown that these results provide a symplectic integrator, and that they are exact in the sense that, as the number of magnet pieces is increased, the particle motion converges to that of the exact equations of motion.
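
A minimal sketch of the kick/drift idea in 2-D transverse phase space (x, px), with a thin quadrupole standing in for the point magnets; the longitudinal-field (solenoidal drift) part is omitted, and all names and parameters are illustrative choices of ours:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # symplectic form on (x, px)

def drift(L):
    """Field-free drift of length L."""
    return np.array([[1.0, L], [0.0, 1.0]])

def kick(k):
    """Thin (point) quadrupole kick of integrated strength k."""
    return np.array([[1.0, 0.0], [-k, 1.0]])

def split_magnet(k_total, L_total, n_slices):
    """Replace one thick magnet by n point kicks sandwiched between drifts."""
    M = np.eye(2)
    dk, dL = k_total / n_slices, L_total / n_slices
    for _ in range(n_slices):
        M = drift(0.5 * dL) @ kick(dk) @ drift(0.5 * dL) @ M
    return M
```

Each factor has unit determinant, so any product of kicks and drifts is exactly symplectic regardless of how coarsely the magnet is sliced; finer slicing improves accuracy, not symplecticity, which mirrors the exactness statement in the abstract.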

This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
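
A toy sketch of one of the ideas surveyed (image-space data decomposition followed by image assembly), using a trivial gradient "shader" in place of a real rasterizer; the tiling scheme and function names are ours:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile):
    """Render one tile independently; here just a toy gradient shader."""
    x0, y0, w, h = tile
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]
    return x0, y0, ((xs + ys) % 256).astype(np.uint8)

def render(width=256, height=256, tile=64):
    """Decompose the image into tiles, render them in parallel, reassemble."""
    tiles = [(x, y, min(tile, width - x), min(tile, height - y))
             for y in range(0, height, tile)
             for x in range(0, width, tile)]
    image = np.zeros((height, width), np.uint8)
    with ThreadPoolExecutor() as pool:          # image-space (sort-first) parallelism
        for x0, y0, data in pool.map(render_tile, tiles):
            h, w = data.shape
            image[y0:y0 + h, x0:x0 + w] = data  # image assembly
    return image
```

In a real renderer the per-tile cost varies with scene content, which is exactly where the task-granularity and load-balancing concerns discussed in the article enter.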

Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed. Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techniques.

In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures have been impacted by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest to forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

A combinatorial computation is used to describe the Casimir operators of the symplectic Lie algebra. This result is applied to determine the center of the enveloping algebra of the semidirect product of the Heisenberg Lie algebra and the symplectic Lie algebra.
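
As a concrete illustration (ours, not the paper's combinatorial construction), the quadratic Casimir of sl(2) ≅ sp(2) can be checked numerically to commute with all generators and to act as a scalar on an irreducible representation:

```python
import numpy as np

# sl(2) ≅ sp(2) generators in the spin-1 (3-dimensional) irreducible
# representation, normalized so that [h, e] = 2e, [h, f] = -2f, [e, f] = h.
s = np.sqrt(2.0)
h = np.diag([2.0, 0.0, -2.0])
e = np.array([[0.0, s, 0.0],
              [0.0, 0.0, s],
              [0.0, 0.0, 0.0]])
f = e.T

# Quadratic Casimir element C = h^2/2 + ef + fe, evaluated in this representation.
C = 0.5 * h @ h + e @ f + f @ e
```

By Schur's lemma, C must be a multiple of the identity on an irreducible representation; here it equals 2j(j+1)·I with j = 1.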

Polycyclic aromatic hydrocarbons (PAHs), a large group of persistent organic pollutants (POPs), have caused widespread environmental pollution and ecological effects. Chromophoric dissolved organic matter (CDOM), which consists of complex compounds, is regarded as a proxy of water quality. An attempt was made to understand the relationships of CDOM absorption parameters and parallel factor analysis (PARAFAC) components with PAHs under seasonal variation in the riverine, reservoir, and urban waters of the Yinma River watershed in 2016. These different types of water bodies provided wide CDOM and PAH concentration ranges, with CDOM absorption coefficients at a wavelength of 350 nm (a_CDOM(350)) of 1.17-20.74 m^-1 and total PAHs of 0-1829 ng/L. Two fluorescent components of the CDOM excitation-emission matrix (EEM), terrestrial humic-like (C1) and tryptophan-like (C2), were identified using PARAFAC. Tryptophan-like, protein-associated fluorescence often dominates the EEM signatures of sewage samples. Our finding that seasonal CDOM EEM-PARAFAC components and PAH concentrations showed a consistent tendency indicates that PAHs are non-negligible pollutants. However, the disparities in seasonal CDOM-PAH relationships relate to the similar sources of CDOM and PAHs, and to the proportion of PAHs in CDOM. Though overlooked and poorly appreciated, quantifying the relationship between CDOM and PAHs has important implications, because these results simplify ecological and health-based risk assessment of pollutants compared to traditional chemical measurements.

In-depth comprehension of human joint function requires complex mathematical models, which are particularly necessary in applications of prosthesis design and surgical planning. Kinematic models of the knee joint, based on one-degree-of-freedom equivalent mechanisms, have been proposed to replicate the passive relative motion between the femur and tibia, i.e., the joint motion in virtually unloaded conditions. In the mechanisms analysed in the present work, some fibres within the anterior and posterior cruciate and medial collateral ligaments were taken as isometric during passive motion, and articulating surfaces as rigid. The shapes of these surfaces were described with increasing anatomical accuracy, i.e. from planar to spherical and general geometry, which consequently led to models with increasing complexity. Quantitative comparison of the results obtained from three models, featuring an increasingly accurate approximation of the articulating surfaces, was performed by using experimental measurements of joint motion and anatomical structure geometries of four lower-limb specimens. Corresponding computer simulations of joint motion were obtained from the different models. The results revealed a good replication of the original experimental motion by all models, although the simulations also showed that a limit exists beyond which description of the knee passive motion does not benefit considerably from further approximation of the articular surfaces.

The mechanical and dynamical properties of a model Au(111)/thiol surface system were investigated by using localized atomic-type orbital density functional theory in the local density approximation. Relaxing the system gives a configuration where the sulfur atom forms covalent bonds to two adjacent gold atoms as the lowest energy structure. Investigations based on ab initio molecular dynamics simulations at 300, 350 and 370 K show that this tethering system is stable. The rupture behaviour between the thiol and the surface was studied by displacing the free end of the thiol. Calculated energy profiles show a process of multiple successive ruptures that account for experimental observations. The process features successive ruptures of the two Au-S bonds followed by the extraction of one S-bonded Au atom from the surface. The force required to rupture the thiol from the surface was found to be dependent on the direction in which the thiol was displaced, with values comparable with AFM measurements. These results aid the understanding of failure dynamics of Au(111)-thiol-tethered biosurfaces in microfluidic devices where fluidic shear and normal forces are of concern.

"The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad view of the subject."

Based on an analysis of reduced geometric structures on fibered manifolds, invariant under the action of a certain symmetry group, we construct the symplectic structures associated with connection forms on suitable principal fiber bundles. The application to the non-standard Hamiltonian analysis of Maxwell and Yang-Mills type dynamical systems is presented. A symplectic reduction theory of the classical Maxwell electromagnetic field equations is formulated; the important Lorentz condition, ensuring the existence of electromagnetic waves, is naturally included in the Hamiltonian picture, thereby solving the well-known Dirac, Fock and Podolsky problem. The symplectically reduced Poissonian structures and the related classical minimal interaction principle, concerning the Yang-Mills type equations, are considered.

Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models.

“Symplectic” schemes for stochastic Hamiltonian dynamical systems are formulated through the “composition methods” (or operator-splitting methods) proposed by Misawa (2001). In the proposed methods, a symplectic map, which is given by the solution of a stochastic Hamiltonian system, is approximated by composition of the stochastic flows derived from simpler Hamiltonian vector fields. The global error orders of the numerical schemes derived from the stochastic composition methods are provided. To examine the superiority of the new schemes, some illustrative numerical simulations based on the proposed schemes are carried out for a stochastic harmonic oscillator system.
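
The deterministic core of the composition idea can be sketched as follows; the harmonic oscillator example and names are ours, and the stochastic schemes of the paper add noise increments to these sub-flows:

```python
import numpy as np

def strang_step(q, p, h):
    """Compose exact sub-flows of V = q**2/2 (kick) and T = p**2/2 (drift)."""
    p -= 0.5 * h * q   # half flow of the V part: p' = -dV/dq
    q += h * p         # full flow of the T part: q' = dT/dp
    p -= 0.5 * h * q   # half flow of the V part
    return q, p

def integrate(q, p, h, steps):
    for _ in range(steps):
        q, p = strang_step(q, p, h)
    return q, p
```

Each step is a composition of exact symplectic sub-flows and hence itself symplectic, which is why the energy error of such schemes remains bounded over long runs.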

A beautiful and comprehensive introduction to this important field. (Dusa McDuff, Barnard College, Columbia University) This excellent book gives a detailed, clear, and wonderfully written treatment of the interplay between the world of Stein manifolds and the more topological and flexible world of Weinstein manifolds. Devoted to this subject with a long history, the book serves as a superb introduction to this area and also contains the authors' new results. (Tomasz Mrowka, MIT) This book is devoted to the interplay between complex and symplectic geometry in affine complex manifolds.

We find that the coherent state projection operator representation of the two-mode squeezing operator constitutes a faithful group representation of the symplectic group, which is a remarkable property of the coherent state. As a consequence, the resultant effect of successively applying two-mode squeezing operators is equivalent to a single squeezing in the two-mode Fock space. Generalization of this property to the 2n-mode case is also discussed. This project was supported by the National Natural Science Foundation of China under Grant No. 10575057.
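
The composition property can be checked directly at the level of the 4×4 symplectic matrices that two-mode squeezing induces on the quadratures (x1, p1, x2, p2); this phase-space sketch is our illustration, not the paper's coherent-state calculation:

```python
import numpy as np

def two_mode_squeeze(r):
    """4x4 symplectic matrix of two-mode squeezing on (x1, p1, x2, p2)."""
    c, s = np.cosh(r), np.sinh(r)
    Z = np.diag([1.0, -1.0])
    return np.block([[c * np.eye(2), s * Z],
                     [s * Z, c * np.eye(2)]])

# Symplectic form for the ordering (x1, p1, x2, p2): one J block per mode.
Omega = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
```

Because cosh and sinh obey the addition formulas, composing two squeezes with parameters r1 and r2 yields the single squeeze with parameter r1 + r2, matching the statement in the abstract.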

Using the asymptotic boundary condition, the time-dependent Schroedinger equations with initial conditions in infinite space can be transformed into a problem with initial and boundary conditions, which can further be discretized into inhomogeneous canonical equations. Symplectic algorithms to solve the inhomogeneous canonical equations have been developed and adopted to compute the high-order harmonics of one-dimensional hydrogen in a laser field. We find that there is a saturation intensity for generating high-order harmonics, in agreement with previous results, and that there is a relationship between the harmonics and the bound-state probabilities.

The aim of this paper is to construct six-dimensional symplectic thin-lens transport maps for the tracking program SIXTRACK, continuing an earlier report by using another method, which consists of applying Lie series and exponentiation as described by W. Groebner and, for canonical systems, by A.J. Dragt. We first use an approximate Hamiltonian obtained by a series expansion of the square root. Furthermore, nonlinear crossing terms due to the curvature in bending magnets are neglected. An improved Hamiltonian, excluding solenoids, is introduced in Appendix A by using the unexpanded square root mentioned above, but again neglecting nonlinear crossing terms...

Beginning with a tracking code for the LHC, we construct the canonical generator of the full-turn map in polar coordinates. For very fast mapping we adopt a model in which the momentum is modulated sinusoidally with a period of 130 turns (very close to the synchrotron period). We achieve symplectic mapping of 10^7 turns in 3.6 hours on a workstation. Quasi-invariant tori are constructed on the Poincaré section corresponding to multiples of the synchrotron period. The possible use of quasi-invariants in deriving long-term bounds on the motion is discussed.

Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
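As a minimal illustration of why symplectic methods suit long-term orbit propagation, here is a velocity-Verlet (leapfrog) two-body propagator in normalized units (mu = 1); this is a generic sketch of ours, not any specific astrodynamics library:

```python
import numpy as np

def accel(r):
    """Two-body point-mass gravity, normalized so that mu = 1."""
    return -r / np.linalg.norm(r) ** 3

def propagate(r, v, h, steps):
    """Velocity-Verlet (leapfrog), a second-order symplectic method."""
    r, v = r.astype(float).copy(), v.astype(float).copy()
    a = accel(r)
    for _ in range(steps):
        v += 0.5 * h * a      # half kick
        r += h * v            # drift
        a = accel(r)
        v += 0.5 * h * a      # half kick
    return r, v
```

For a circular reference orbit the specific orbital energy stays within a bounded band over many revolutions, whereas a non-symplectic method of the same order would typically exhibit secular energy drift.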

We present a microscopic description of nuclei in the intermediate-mass region, including the proximity to the proton drip line, based on a no-core shell model with a schematic many-nucleon long-range interaction with no parameter adjustments. The outcome confirms the essential role played by the symplectic symmetry to inform the interaction and the winnowing of shell-model spaces. We show that it is imperative that model spaces be expanded well beyond the current limits up through 15 major shells to accommodate particle excitations, which appear critical to highly deformed spatial structures and the convergence of associated observables.

Charged-particle dynamics in the magnetosphere is multiscale in both time and space; therefore, numerical accuracy over a long integration time is required. A variational symplectic integrator (VSI) [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008) and H. Qin, X. Guan, and W. M. Tang, Phys. Plasmas 16, 042510 (2009)] for the guiding-center motion of charged particles in general magnetic fields is applied to study the dynamics of charged particles in the magnetosphere. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The VSI conserves exactly a discrete Lagrangian symplectic structure and has better numerical properties over a long integration time, compared with standard integrators such as the standard and adaptive fourth-order Runge-Kutta (RK4) methods. Applying the VSI method to guiding-center dynamics in the inner magnetosphere, we can accurately calculate the particles' orbits for an arbitrarily long simulation time with good conservation properties. When a time-independent convection and corotation electric field is considered, the VSI method gives the accurate single-particle orbit, while the RK4 method gives an incorrect orbit due to its intrinsic error accumulation over a long integration time.

We define a quantum generalization of the algebra of functions over an associated vector bundle of a principal bundle. Here the role of a quantum principal bundle is played by a Hopf-Galois extension. Smash products of an algebra times a Hopf algebra H are particular instances of these extensions, and in these cases we are able to define a differential calculus over their associated vector bundles without requiring the use of a (bicovariant) differential structure over H. Moreover, if H is coquasitriangular, it coacts naturally on the associated bundle, and the differential structure is covariant. We apply this construction to the case of the finite quotient of the SL_q(2) function Hopf algebra at a root of unity (q^3 = 1) as the structure group, and a reduced 2-dimensional quantum plane as both the 'base manifold' and fibre, getting an algebra which generalizes the notion of classical phase space for this quantum space. We also build explicitly a differential complex for this phase space algebra, and find that levels 0 and 2 support a (co)representation of the quantum symplectic group. On this phase space we define vector fields, and with the help of the Sp_q structure we introduce a symplectic form relating 1-forms to vector fields. This leads naturally to the introduction of Poisson brackets, a necessary step for doing 'classical' mechanics on a quantum space, the quantum plane.

This paper addresses the problem of developing an extension of the Marsden–Weinstein reduction process to symplectic-like Lie algebroids, and in particular to the case of the canonical cover of a fiberwise linear Poisson structure, whose reduction process is the analog of cotangent bundle reduction in the context of Lie algebroids. Dedicated to the memory of Jerrold E. Marsden.

Highlights: • The wetting behavior of four metallic materials as a function of surface roughness has been studied. • A model to predict the relationship between abrasive particle size and water/oil contact angles is proposed. • The active wetting regime is determined for different materials using the proposed model. Abstract: In the present study, the wetting behavior of surfaces of various common metallic materials used in the water industry, including C84400 brass, commercially pure aluminum (99.0% pure), nickel–molybdenum alloy (Hastelloy C22), and 316 stainless steel, was examined: the surfaces were prepared by mechanical abrasion and their contact angles were then measured. A model to estimate the roughness factor, R_f, and the fraction of solid/oil interface, ƒ_so, for surfaces prepared by mechanical abrasion is proposed, based on the assumption that abrasive particles acting on a metallic surface produce scratches parallel to each other, each with a semicircular cross-section. The model geometrically describes the relation between sandpaper particle size and the water/oil contact angle predicted by both the Wenzel and Cassie–Baxter contact types, which can then be compared with experimental data to find which regime is active. Results show that brass and Hastelloy followed Cassie–Baxter behavior, aluminum followed Wenzel behavior, and stainless steel exhibited a transition from Wenzel to Cassie–Baxter. Microstructural studies have also been performed to rule out effects beyond the Wenzel and Cassie–Baxter theories, such as the size of structural details.

Slow flows of a rarefied gas between two plane parallel walls with nonuniform surface properties are studied based on kinetic theory. It is assumed that one wall is a diffuse reflection boundary and the other wall is a Maxwell-type boundary whose accommodation coefficient varies periodically in the direction perpendicular to the flow. The time-independent Poiseuille, thermal transpiration and Couette flows are considered. The flow behavior is numerically studied based on the linearized Bhatnagar-Gross-Krook-Welander model of the Boltzmann equation. The flow field, the mass and heat flow rates in the gas, and the tangential force acting on the wall surface are studied over a wide range of the gas rarefaction degree and the parameters characterizing the distribution of the accommodation coefficient. The locally convex velocity distribution is observed in Couette flow of a highly rarefied gas, similarly to Poiseuille flow and thermal transpiration. The reciprocity relations are numerically confirmed over a wide range of the flow parameters.

A variational symplectic integrator for the guiding-center motion of charged particles in general magnetic fields is developed for long-time simulation studies of magnetized plasmas. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The variational symplectic integrator conserves exactly a discrete Lagrangian symplectic structure, and has better numerical properties over long integration time, compared with standard integrators, such as the standard and variable time-step fourth order Runge-Kutta methods.
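A minimal sketch of the variational idea, discretizing an action rather than the equations of motion: for the toy Lagrangian below (a plain harmonic oscillator, not the guiding-center Lagrangian), the discrete Euler-Lagrange equations reduce to the Stormer-Verlet recurrence, and the resulting update exhibits the bounded long-time energy error the abstract describes.

```python
def variational_step(q_prev, q_curr, h, dV):
    """Discrete Euler-Lagrange update for the discrete action
    S_d = sum_k h * [((q_{k+1}-q_k)/h)**2 / 2 - V(q_k)]   (toy sketch only).
    Extremizing S_d gives q_{k+1} = 2*q_k - q_{k-1} - h**2 * V'(q_k)."""
    return 2.0 * q_curr - q_prev - h * h * dV(q_curr)

dV = lambda q: q                          # harmonic potential V(q) = q**2 / 2
h, steps = 0.05, 2000
q_prev, q_curr = 1.0, 1.0 - 0.5 * h * h   # start consistent with q(0)=1, p(0)=0
for _ in range(steps):
    q_prev, q_curr = q_curr, variational_step(q_prev, q_curr, h, dV)

# Central-difference momentum at q_curr; the energy error stays bounded,
# unlike a non-symplectic integrator whose energy drifts secularly.
q_next = variational_step(q_prev, q_curr, h, dV)
p = (q_next - q_prev) / (2.0 * h)
energy = 0.5 * p * p + 0.5 * q_curr * q_curr
print(abs(energy - 0.5) < 1e-2)
```

The guiding-center case replaces this toy Lagrangian with the guiding-center action, but the pattern — discretize the action, then extremize — is the same.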

The work in the field of parallel processing has developed as research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For applications utilizing the GEANT code, development and improvement work was done on parts simulating low energy physical phenomena like radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program of neutron tracking in the range of low energies down to the thermal region has been developed. It is coupled to the GEANT code and permits in a single pass the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications like, for instance, development of biological probes, evaluation and characterization of gamma cameras (collimators, crystal thickness), as well as methods for dosimetric calculations. In particular, these calculations are suited for a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works mentioned in the same field refer to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

A new finite element (FE) scheme, based on the decomposition of a second order differential equation into a set of first order symplectic (Hamiltonian) equations, is presented and tested on a one-dimensional, driven Sturm-Liouville problem. Error analysis shows improved cubic convergence in the energy norm for piecewise linear 'tent' elements, as compared to quadratic convergence for the standard and hybrid FE methods. The convergence deteriorates in the presence of a regular singular point, but can be recovered by appropriate mesh node packing. Optimal mesh packing exponents are derived to ensure cubic (respectively quadratic) convergence with minimal numerical error. A further suppression of the numerical error, by a factor proportional to the square of the leading exponent of the singular solution, is achieved for a model problem based on determining the nonideal magnetohydrodynamic stability of a fusion plasma. (author) 7 figs., 14 refs.

We derive second and higher order explicit symplectic integrators for charged particle motion in an s-dependent magnetic field within the paraxial approximation. The Hamiltonian of such a system takes the form H = Σ_k (p_k − a_k(q, s))² + V(q, s). This work solves a long-standing problem for modeling s-dependent magnetic elements. Important applications of this work include studies of charged particle dynamics in a storage ring with strong-field wigglers, arbitrarily polarized insertion devices, and superconducting magnets with strong fringe fields. Consequently, this work will have a significant impact on the optimal use of the above magnetic devices in light source rings as well as in next-generation linear collider damping rings.

A map to describe propagation of particles through any section of a nonlinear lattice may be represented as a Taylor expansion about the origin in phase space. Although the technique to compute the Taylor coefficients has been improved recently, the expansion may fail to provide adequate accuracy in regions where nonlinear effects are substantial. A representation of the map in angle-action coordinates, with the angle dependence given by a Fourier series and the action dependence by polynomials in I^{1/2}, may be more successful. Maps of this form are easily constructed by taking Fourier transforms of results from an arbitrary symplectic tracking code. Examples are given of one-turn and two-turn maps for the SLC North Damping Ring in a strongly nonlinear region. Results for accuracy and speed of evaluation of the maps are quite encouraging. It seems feasible to make accurate maps for the SSC by this method. 9 refs., 1 tab.
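The construction can be sketched in miniature: sample a map's angle dependence at uniform angles, take an FFT to obtain the Fourier coefficients, then evaluate the truncated series at any angle. The `one_turn_angle_kick` function below is an invented stand-in for tracking-code output, not the SLC lattice.

```python
import numpy as np

# Toy "tracking code" output: the one-turn angle advance as a function of angle
# (the functional form here is assumed for illustration, not accelerator data).
def one_turn_angle_kick(theta):
    return 0.17 + 0.02 * np.sin(theta) + 0.005 * np.cos(2 * theta)

# Sample the kick at N uniform angles and FFT to get the Fourier coefficients.
N = 64
thetas = 2 * np.pi * np.arange(N) / N
coeffs = np.fft.rfft(one_turn_angle_kick(thetas)) / N

def kick_from_series(theta, coeffs):
    """Evaluate the truncated Fourier series at an arbitrary angle."""
    k = np.arange(len(coeffs))
    terms = coeffs * np.exp(1j * k * theta)
    terms[1:] *= 2.0          # rfft stores only the non-negative frequencies
    return terms.sum().real

theta_test = 1.2345
print(abs(kick_from_series(theta_test, coeffs) - one_turn_angle_kick(theta_test)) < 1e-12)
```

Because the toy kick is band-limited, the reconstruction is exact to roundoff; for real tracking data the series is truncated at whatever harmonic content the lattice produces.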

It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

A new algorithm for heat exchange between thermally coupled diffusely radiating interfaces is presented, which can be applied to closed and half-open transparent radiating cavities. Interfaces between opaque and transparent materials are automatically detected and subdivided into elementary radiation surfaces named tiles. Contrary to the classical view factor method, the fixed unit sphere area subdivision oriented along the normal tile direction is projected onto the surrounding radiation mesh and not vice versa. Then, the total incident radiating flux of the receiver is approximated as a direct sum of radiation intensities of representative “senders” with the same weight factor. A hierarchical scheme for the space angle subdivision is selected in order to minimize the total memory and the computational demands during thermal calculations. Direct visibility is tested by means of a voxel-based ray tracing method accelerated by means of the anisotropic Chebyshev distance method, which reuses the computational grid as a Chebyshev one. The ray tracing algorithm is fully parallelized using MPI and takes advantage of the balanced distribution of all available tiles among all CPUs. This approach allows tracing of each particular ray without any communication. The algorithm has been implemented in a commercial casting process simulation software. The accuracy and computational performance of the new radiation model for heat treatment, investment and ingot casting applications is illustrated using industrial examples. (paper)

Adhesive interactions between yeasts and bacteria are important in the maintenance of infectious mixed biofilms on natural and biomaterial surfaces in the human body. In this study, the extended DLVO (Derjaguin-Landau-Verwey-Overbeek) approach has been applied to explain adhesive interactions between C. albicans ATCC 10261 and S. gordonii NCTC 7869 adhering on glass. Contact angles with different liquids and the zeta potentials of both the yeasts and bacteria were determined and their adhesive interactions were measured in a parallel-plate flow chamber. Streptococci were first allowed to adhere to the bottom glass plate of the flow chamber to different seeding densities, and subsequently deposition of yeasts was monitored with an image analysis system, yielding the degree of initial surface aggregation of the adhering yeasts and their spatial arrangement in a stationary end point. Irrespective of growth temperature, the yeast cells appeared uncharged in TNMC buffer, but yeasts grown at 37 degrees C were intrinsically more hydrophilic and had a more electron-donating character than cells grown at 30 degrees C. All yeasts showed surface aggregation due to attractive Lifshitz-van der Waals forces. In addition, acid-base interactions between yeasts, yeasts and the glass substratum, and yeasts and the streptococci were attractive for yeasts grown at 30 degrees C, but yeasts grown at 37 degrees C only had favorable acid-base interactions with the bacteria, explaining the positive relationship between the surface coverage of the glass by streptococci and the surface aggregation of the yeasts. Copyright 1999 Academic Press.

FILMPAR is a highly efficient and portable parallel multigrid algorithm for solving a discretised form of the lubrication approximation to three-dimensional, gravity-driven, continuous thin film free-surface flow over substrates containing micro-scale topography. While generally applicable to problems involving heterogeneous and distributed features, for illustrative purposes the algorithm is benchmarked on a distributed memory IBM BlueGene/P computing platform for the case of flow over a single trench topography, enabling direct comparison with complementary experimental data and existing serial multigrid solutions. Parallel performance is assessed as a function of the number of processors employed and shown to lead to super-linear behaviour for the production of mesh-independent solutions. In addition, the approach is used to solve for the case of flow over a complex inter-connected topographical feature and a description provided of how FILMPAR could be adapted relatively simply to solve for a wider class of related thin film flow problems. Program summary. Program title: FILMPAR. Catalogue identifier: AEEL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 530 421. No. of bytes in distributed program, including test data, etc.: 1 960 313. Distribution format: tar.gz. Programming language: C++ and MPI. Computer: Desktop, server. Operating system: Unix/Linux, Mac OS X. Has the code been vectorised or parallelised?: Yes. Tested with up to 128 processors. RAM: 512 MBytes. Classification: 12. External routines: GNU C/C++, MPI. Nature of problem: Thin film flows over functional substrates containing well-defined single and complex topographical features are of enormous significance, having a wide variety of engineering

Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship. ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

The concept of tangent vector is made more precise to meet the specific nature of the Sturm-Liouville problem, and on this basis a Poisson bracket that is modified compared with the Gardner form by special boundary terms is derived from the Zakharov-Faddeev symplectic form. This bracket is nondegenerate, and in it the variables of the discrete and continuous spectra are separated

Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

We give the global mathematical formulation of a class of generalized four-dimensional theories of gravity coupled to scalar matter and to Abelian gauge fields. In such theories, the scalar fields are described by a section of a surjective pseudo-Riemannian submersion π over space-time, whose total space carries a Lorentzian metric making the fibers into totally-geodesic connected Riemannian submanifolds. In particular, π is a fiber bundle endowed with a complete Ehresmann connection whose transport acts through isometries between the fibers. In turn, the Abelian gauge fields are "twisted" by a flat symplectic vector bundle defined over the total space of π. This vector bundle is endowed with a vertical taming which locally encodes the gauge couplings and theta angles of the theory and gives rise to the notion of twisted self-duality, of crucial importance to construct the theory. When the Ehresmann connection of π is integrable, we show that our theories are locally equivalent to ordinary Einstein-Scalar-Maxwell theories and hence provide a global non-trivial extension of the universal bosonic sector of four-dimensional supergravity. In this case, we show using a special trivializing atlas of π that global solutions of such models can be interpreted as classical "locally-geometric" U-folds. In the non-integrable case, our theories differ locally from ordinary Einstein-Scalar-Maxwell theories and may provide a geometric description of classical U-folds which are "locally non-geometric".

A highly accurate calculation of the magnetic field line Hamiltonian in DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)] is made from piecewise analytic equilibrium fit data for shot 115467 at 3000 ms. The safety factor calculated from this Hamiltonian has a logarithmic singularity at an ideal separatrix. The logarithmic region inside the ideal separatrix contains 2.5% of the toroidal flux inside the separatrix. The logarithmic region is symmetric about the separatrix. An area-preserving map for the field line trajectories is obtained in magnetic coordinates from the Hamiltonian equations of motion for the lines and a canonical transformation. This map is used to calculate trajectories of magnetic field lines in DIII-D. The field line Hamiltonian in DIII-D is used as the generating function for the map and to calculate stochastic broadening from field errors and spatial noise near the separatrix. A very negligible amount (0.03%) of magnetic flux is lost from inside the separatrix due to these nonaxisymmetric fields. It is quite easy to add magnetic perturbations to generating functions and calculate trajectories for maps in magnetic coordinates. However, it is not possible to integrate across the separatrix. It is also difficult to find the physical position corresponding to magnetic coordinates. For open field lines, periodicity in the poloidal angle is assumed, which is not satisfactory. The goal of this paper is to demonstrate the efficacy of the symplectic mapping approach rather than using realistic DIII-D parameters or modeling specific experimental results.
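For a self-contained illustration of an area-preserving map of the kind used here, the Chirikov standard map is the usual toy model; it is not the DIII-D field-line map, and the stochasticity parameter K and initial condition below are arbitrary.

```python
import math

def standard_map(theta, p, K=0.9):
    """Chirikov standard map: the prototypical area-preserving (symplectic) map,
    often used as a stand-in for field-line maps near a separatrix."""
    p_new = p + K * math.sin(theta)
    theta_new = (theta + p_new) % (2.0 * math.pi)
    return theta_new, p_new

# The Jacobian d(theta', p')/d(theta, p) = [[1 + K cos(theta), 1],
#                                           [K cos(theta),     1]]
# has determinant 1, so phase-space area is preserved exactly:
K, theta, p = 0.9, 0.5, 0.1
jac_det = (1 + K * math.cos(theta)) * 1 - 1 * (K * math.cos(theta))
print(abs(jac_det - 1.0) < 1e-12)

# Iterating gives a trajectory; regular and chaotic orbits coexist at this K.
for _ in range(1000):
    theta, p = standard_map(theta, p)
```

Adding a perturbation to the generating function of such a map, as described in the abstract, simply modifies the kick term while keeping the map area-preserving.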

While symplectic integration methods based on operator splitting are well established in many branches of science, high order methods for Hamiltonian systems that split in more than two parts have not been studied in great detail. Here, we present several high order symplectic integrators for Hamiltonian systems that can be split in exactly three integrable parts. We apply these techniques, as a practical case, for the integration of the disordered, discrete nonlinear Schrödinger equation (DDNLS) and compare their efficiencies. Three part split algorithms provide effective means to numerically study the asymptotic behavior of wave packet spreading in the DDNLS – a hotly debated subject in current scientific literature.
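A hedged sketch of a three-part split: compose the exact flows of parts A, B and C symmetrically, e^{τA/2} e^{τB/2} e^{τC} e^{τB/2} e^{τA/2}, which is second-order accurate and symplectic. The toy Hamiltonian below is split artificially (its two potential kicks even commute, unlike the DDNLS parts), so it only illustrates the composition pattern.

```python
def flow_A(q, p, t):   # kinetic part A = p**2/2: exact drift
    return q + t * p, p
def flow_B(q, p, t):   # potential piece B = q**2/4: exact kick
    return q, p - t * q / 2.0
def flow_C(q, p, t):   # potential piece C = q**2/4: exact kick
    return q, p - t * q / 2.0

def abc_step(q, p, tau):
    """Symmetric three-part split e^{tau A/2} e^{tau B/2} e^{tau C}
    e^{tau B/2} e^{tau A/2}; each factor is an exact flow, so the
    composition is symplectic and second-order accurate."""
    q, p = flow_A(q, p, tau / 2)
    q, p = flow_B(q, p, tau / 2)
    q, p = flow_C(q, p, tau)
    q, p = flow_B(q, p, tau / 2)
    q, p = flow_A(q, p, tau / 2)
    return q, p

q, p, tau = 1.0, 0.0, 0.05
for _ in range(4000):
    q, p = abc_step(q, p, tau)
energy = 0.5 * p * p + 0.5 * q * q   # H = p^2/2 + q^2/2, split as A + B + C
print(abs(energy - 0.5) < 1e-2)      # energy error stays bounded over long times
```

Higher-order three-part schemes of the kind the abstract studies are built by composing such symmetric steps with suitably chosen sub-step sizes.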

In this paper, we present an object-oriented three-dimensional parallel particle-in-cell code for beam dynamics simulation in linear accelerators. A two-dimensional parallel domain decomposition approach is employed within a message passing programming paradigm along with a dynamic load balancing. Implementing object-oriented software design provides the code with better maintainability, reusability, and extensibility compared with conventional structure based code. This also helps to encapsulate the details of communications syntax. Performance tests on SGI/Cray T3E-900 and SGI Origin 2000 machines show good scalability of the object-oriented code. Some important features of this code also include employing symplectic integration with linear maps of external focusing elements and using z as the independent variable, typical in accelerators. A successful application was done to simulate beam transport through three superconducting sections in the APT linac design
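The "linear maps of external focusing elements" can be illustrated with standard 2x2 transfer matrices in one transverse plane (generic textbook optics, not the APT linac lattice); the check below verifies the symplectic condition M^T J M = J.

```python
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L (z as independent variable)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Transfer matrix of a thin-lens quadrupole of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
M = drift(0.5) @ thin_quad(2.0) @ drift(0.5)   # drift-quad-drift cell (arbitrary lengths)
print(bool(np.allclose(M.T @ J @ M, J)))       # True: the composed map is symplectic
```

Because each element's matrix is symplectic, any product of them is too, which is why pushing particles through such maps preserves phase-space structure over many lattice periods.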

Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
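One of the classic linear-array algorithms such books cover, odd-even transposition sort, can be sketched as follows (sequentially simulated here; each phase's compare-swaps act on disjoint pairs and so could all run in parallel, and n phases suffice for n elements).

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort for a linear array of processors.
    Even phases compare pairs (0,1), (2,3), ...; odd phases compare
    (1,2), (3,4), .... Pairs within a phase are disjoint -> parallelizable."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2
        for i in range(start, n - 1, 2):   # each pair independent of the others
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```

On an actual linear array, each processor holds one element and exchanges with alternating neighbors, giving O(n) parallel time with n processors.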

To support the development and characterization of chromophores with targeted photophysical properties, excited-state electronic structure calculations should rapidly and accurately predict how derivatization of a chromophore will affect its excitation and emission energies. This paper examines whether a time-independent excited-state density functional theory (DFT) approach meets this need through a case study of BODIPY chromophore photophysics. A restricted open-shell Kohn-Sham (ROKS) treatment of the S1 excited state of BODIPY dyes is contrasted with linear-response time-dependent density functional theory (TDDFT). Vertical excitation energies predicted by the two approaches are remarkably different due to overestimation by TDDFT and underestimation by ROKS relative to experiment. Overall, ROKS with a standard hybrid functional provides the more accurate description of the S1 excited state of BODIPY dyes, but excitation energies computed by the two methods are strongly correlated. The two approaches also make similar predictions of shifts in the excitation energy upon functionalization of the chromophore. TDDFT and ROKS models of the S1 potential energy surface are then examined in detail for a representative BODIPY dye through molecular dynamics sampling on both model surfaces. We identify the most significant differences in the sampled surfaces and analyze these differences along selected normal modes. Differences between ROKS and TDDFT descriptions of the S1 potential energy surface for this BODIPY derivative highlight the continuing need for validation of widely used approximations in excited state DFT through experimental benchmarking and comparison to ab initio reference data.

A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

We give a systematic derivation of the local expressions of the NS H-flux, geometric F- as well as non-geometric Q- and R-fluxes in terms of bivector β- and two-form B-potentials including vielbeins. They are obtained using a supergeometric method on QP-manifolds by twist of the standard Courant algebroid on the generalized tangent space without flux. Bianchi identities of the fluxes are easily deduced. We extend the discussion to the case of the double space and present a formulation of T-duality in terms of canonical transformations between graded symplectic manifolds. Thus, we find a unified description of geometric as well as non-geometric fluxes and T-duality transformations in double field theory. Finally, the construction is compared to the formerly introduced Poisson Courant algebroid, a Courant algebroid on a Poisson manifold, as a model for R-flux.

A differential-algebraic approach to studying the Lax-type integrability of the generalized Riemann-type hydrodynamic hierarchy, proposed recently by O. D. Artemovych, M. V. Pavlov, Z. Popowicz and A. K. Prykarpatski, is developed. In addition to the Lax-type representation, found before by Z. Popowicz, a closely related representation is constructed in exact form by means of a new differential-functional technique. The bi-Hamiltonian integrability and compatible Poisson structures of the generalized Riemann type hierarchy are analyzed by means of the symplectic and gradient-holonomic methods. An application of the devised differential-algebraic approach to other Riemann and Vakhnenko type hydrodynamic systems is presented.

Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
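The force-decomposition idea, dividing the pair list rather than the particles or the simulation space among workers, can be sketched with a toy Lennard-Jones energy sum. The positions and the use of threads below are illustrative choices, not a production MD scheme.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def lj_energy(r2, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy as a function of squared distance."""
    s6 = (sigma * sigma / r2) ** 3
    return 4.0 * eps * (s6 * s6 - s6)

def partial_energy(positions, pairs):
    """Energy of one worker's share of the pair list."""
    e = 0.0
    for i, j in pairs:
        r2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
        e += lj_energy(r2)
    return e

positions = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (0.0, 1.2, 0.0), (1.0, 1.0, 1.0)]
pairs = list(combinations(range(len(positions)), 2))
nworkers = 2
chunks = [pairs[k::nworkers] for k in range(nworkers)]   # round-robin pair split

with ThreadPoolExecutor(max_workers=nworkers) as ex:
    total = sum(ex.map(lambda c: partial_energy(positions, c), chunks))

serial = partial_energy(positions, pairs)
print(abs(total - serial) < 1e-12)   # decomposition reproduces the serial sum
```

Replicated-data and spatial decompositions differ only in what is divided (all data copied everywhere vs. subdomains of space); the pair split shown here is the force-decomposition variant.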

The purpose of this split-mouth study was to compare macro- and microstructure implant surfaces at the marginal bone level during a stress-free healing period and under functional loading. From January to February 2006, 133 implants (70 rough-surfaced microthreaded implants and 63 machined-neck implants) were inserted in the mandible of 34 patients with Kennedy Class I residual dentitions and followed until February 2008. The marginal bone level was radiographically determined, using digitized panoramic radiographs, at four time points: at implant placement (baseline level), after the healing period, after 6 months of functional loading, and at the end of follow-up. The median follow-up time was 1.9 (range: 1.9-2.1) years. The machined-neck group had a mean crestal bone loss of 0.5 mm (range: 0-2.3) after the healing period, 0.8 mm after 6 months (range: 0-2.4), and 1.1 mm (range: 0-3) at the end of follow-up. The rough-surfaced microthreaded implant group had a mean bone loss of 0.1 mm (range: -0.4-2) after the healing period, 0.4 mm (range: 0-2.1) after 6 months, and 0.5 mm (range: 0-2.1) at the end of follow-up. The two implant types showed significant differences in marginal bone levels (healing period: P=0.01; end of follow-up: P…). The results showed that implants with the microthreaded design caused minimal changes in crestal bone levels during healing (stress-free) and under functional loading.

The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

In this paper, we propose a family of symplectic structure-preserving numerical methods for the coupled Klein-Gordon-Schroedinger (KGS) system. The Hamiltonian formulation is constructed for the KGS system. We discretize the Hamiltonian system in space first with a family of canonical difference methods which convert an infinite-dimensional Hamiltonian system into a finite-dimensional one. Next, we discretize the finite-dimensional system in time by a midpoint rule which preserves the symplectic structure of the original system. The conservation laws of the schemes are analyzed in succession, including the charge conservation law and the residual of the energy conservation law. We analyze the truncation errors and global errors of the numerical solutions to conclude the theoretical analysis. Extensive numerical tests show agreement between the theoretical and numerical results.
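To illustrate the time discretization described above, consider the implicit midpoint rule applied to the simplest Hamiltonian system, a harmonic oscillator with H = (p^2 + q^2)/2. This toy sketch is our own illustration (not the paper's scheme for KGS); because the system is linear, the implicit step can be solved in closed form, and the resulting map conserves the quadratic energy exactly:

```python
def midpoint_step(q, p, h):
    """One implicit midpoint step for H = (p^2 + q^2)/2 (harmonic oscillator).

    The implicit equations  q1 = q + h*(p + p1)/2,  p1 = p - h*(q + q1)/2
    are solved in closed form, which is possible here because the flow is linear.
    The resulting linear map has unit Jacobian determinant, i.e. it is symplectic.
    """
    a = h / 2.0
    denom = 1.0 + a * a
    q1 = ((1 - a * a) * q + 2 * a * p) / denom
    p1 = ((1 - a * a) * p - 2 * a * q) / denom
    return q1, p1


# Long-time integration: the energy H stays at its initial value to round-off.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = midpoint_step(q, p, 0.1)
energy_error = abs(0.5 * (q * q + p * p) - 0.5)
```

For this quadratic Hamiltonian the midpoint rule conserves energy exactly; for nonlinear systems such as KGS it instead keeps the energy error bounded, which is the behavior the paper's residual analysis quantifies.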

Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many (e.g., 10^9) degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.

We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of ... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
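As a concrete instance of one of the patterns named above (the prefix scan), here is a minimal sketch of the Hillis-Steele inclusive scan. It is written serially for clarity (the function name and layout are our own illustration, not from the presentation), but within each round all updates are independent and could execute concurrently, which is exactly what makes it a parallel pattern:

```python
def inclusive_scan(xs):
    """Hillis-Steele inclusive prefix scan.

    Each while-loop round reads only the previous array, so all elements in a
    round can be updated in parallel; O(log n) rounds suffice.
    """
    xs = list(xs)
    stride = 1
    while stride < len(xs):
        xs = [xs[i] + (xs[i - stride] if i >= stride else 0)
              for i in range(len(xs))]
        stride *= 2
    return xs
```

For example, `inclusive_scan([1, 2, 3, 4])` yields the running totals `[1, 3, 6, 10]` after two rounds.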

The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes a heterogeneous collection of networked computers.) APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.

The present publication contains a special collection of research and review articles on deformations of surface singularities, that put together serve as an introductory survey of results and methods of the theory, as well as open problems, important examples and connections to other areas of mathematics. The aim is to collect material that will help mathematicians already working or wishing to work in this area to deepen their insight and eliminate the technical barriers in this learning process. This also is supported by review articles providing some global picture and an abundance of examples. Additionally, we introduce some material which emphasizes the newly found relationship with the theory of Stein fillings and symplectic geometry. This links two main theories of mathematics: low dimensional topology and algebraic geometry. The theory of normal surface singularities is a distinguished part of analytic or algebraic geometry with several important results, its own technical machinery, and several op...

We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.

In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, achieves speedups of 65 and 81 for black-oil and EOS simulations, respectively, on the CRAY C-90.

Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

An account is given of the Caltech Concurrent Computation Program (C^3P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
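The block decomposition underlying such parallel sieves can be sketched in a few lines. The version below simulates p "processors" serially, each marking composites in its own contiguous block of the range using the base primes up to sqrt(n); the function name and block layout are our own illustration, not the paper's code:

```python
import math

def sieve_blocks(n, p):
    """Block-decomposed Sieve of Eratosthenes up to n, simulating p processors.

    Each rank's block is independent of the others, so the per-rank loop is the
    part that would run concurrently on a real parallel machine.
    """
    limit = math.isqrt(n)
    # Sequential base sieve up to sqrt(n); every rank would hold a copy.
    base = [True] * (limit + 1)
    base[0:2] = [False, False]
    for i in range(2, math.isqrt(limit) + 1):
        if base[i]:
            for j in range(i * i, limit + 1, i):
                base[j] = False
    base_primes = [i for i, ok in enumerate(base) if ok]

    primes = list(base_primes)
    block = (n - limit + p - 1) // p  # size of each rank's block
    for rank in range(p):            # embarrassingly parallel across ranks
        lo = limit + 1 + rank * block
        hi = min(lo + block, n + 1)
        if lo >= hi:
            continue
        mark = [True] * (hi - lo)
        for q in base_primes:
            start = max(q * q, ((lo + q - 1) // q) * q)
            for j in range(start, hi, q):
                mark[j - lo] = False
        primes.extend(i for i in range(lo, hi) if mark[i - lo])
    return primes
```

For instance, `sieve_blocks(30, 4)` returns the ten primes up to 30; the trade-off the abstract mentions shows up in choosing the block size and in replicating the base primes on every rank.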

Adhesive interactions between yeasts and bacteria are important in the maintenance of infectious mixed biofilms on natural and biomaterial surfaces in the human body. In this study, the extended DLVO (Derjaguin-Landau-Verwey-Overbeek) approach has been applied to explain adhesive interactions

Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.

We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were ...

This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
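The seeding step the abstract describes is compact enough to sketch directly. The following one-dimensional Python version (our own illustration, not the authors' C++ code) shows the D² weighting: after a uniform first seed, each subsequent seed is drawn with probability proportional to its squared distance to the nearest existing seed. The per-point distance computation is the part that each of the three platforms parallelizes:

```python
import random

def kmeanspp_seeds(points, k, rng=random.Random(0)):
    """k-means++ seed selection for 1-D data (illustrative sketch).

    First seed: uniform at random.  Each later seed: drawn with probability
    proportional to the squared distance to the nearest seed chosen so far.
    The d2 list comprehension is the per-point work that parallel versions
    distribute across GPU threads, OpenMP threads, or XMT streams.
    """
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        d2 = [min((x - s) ** 2 for s in seeds) for x in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for x, w in zip(points, d2):
            acc += w
            if acc >= r:          # roulette-wheel selection by D^2 weight
                seeds.append(x)
                break
    return seeds
```

With two well-separated clumps of points, the D² weighting makes it very likely that the second seed lands in the clump the first seed missed, which is the intuition behind the method's improved clusterings.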

A 5×3 cm² (timing only) and a 15×5 cm² (timing and position) parallel plate avalanche counters (PPACs) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters.

This article describes a method devised for efficient evaluation of arbitrary static magnetic and electric fields in a source-free region, needed for long-time tracking of charged particles. Field values given on the boundary of the region of interest are reproduced inside by an arrangement of hypothetical magnetic or electric monopoles surrounding the boundary surface. The vector and scalar potentials are obtained by summing the contributions of each monopole. The second step of the method improves the evaluation speed of the potentials and their derivatives by orders of magnitude. This comprises covering the region of interest by overlapping spheres, then calculating the spherical harmonic expansion of the potentials on each sphere. During tracking, field values are evaluated by calculating the solid harmonics and their derivatives inside a sphere containing the particle. Software has been developed to test and demonstrate the method on a small particle accelerator. To our knowledge, there is no other meth...

The coordinates of the area-preserving map equations for integration of magnetic field line trajectories in tokamaks can be any coordinates for which a transformation to (ψ,θ,φ) coordinates exists [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. ψ is toroidal magnetic flux, θ is poloidal angle, and φ is toroidal angle. This freedom is exploited to construct a map that represents the magnetic topology of double-null divertor tokamaks. For this purpose, the generating function of the simple map [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is slightly modified. The resulting map equations for the double-null divertor tokamaks are: x1 = x0 - k*y0*(1 - y0^2), y1 = y0 + k*x1, where k is the map parameter. It represents the generic topological effects of toroidal asymmetries. The O-point is at (0,0). The X-points are at (0,±1). The equilibrium magnetic surfaces are calculated. These surfaces are symmetric about the x- and y-axes. The widths of the stochastic layer near the X-points in the principal plane, and the fractal dimensions of the magnetic footprints on the inboard and outboard sides of the upper and lower X-points, are calculated from the map. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.
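The map equations quoted above can be iterated directly; the following tiny sketch (our own, for illustration) encodes one iteration and confirms the stated fixed points. Note the semi-implicit structure: y1 is computed from the already-updated x1, which is what makes the map area-preserving.

```python
def dn_map(x0, y0, k):
    """One iteration of the double-null divertor map:
       x1 = x0 - k*y0*(1 - y0**2),  y1 = y0 + k*x1.
    The second equation uses the updated x1 (semi-implicit update),
    so the map has unit Jacobian and preserves area.
    """
    x1 = x0 - k * y0 * (1 - y0 ** 2)
    y1 = y0 + k * x1
    return x1, y1
```

The O-point (0,0) and the X-points (0,±1) are fixed points for any value of the map parameter k, as a one-line check shows.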

Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
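The first act described above, dividing the grid (and the object set) into n near-equal parts, is the standard block decomposition; a minimal sketch of that step follows (the helper name and chunking scheme are our own illustration, not the embodiment's code):

```python
def partition(items, n):
    """Split items into n near-equal contiguous chunks, one per processor.

    The first len(items) % n chunks get one extra element, so chunk sizes
    differ by at most one -- a simple static load-balancing choice.
    """
    q, r = divmod(len(items), n)
    out, start = [], 0
    for i in range(n):
        size = q + (1 if i < r else 0)
        out.append(items[start:start + size])
        start += size
    return out
```

In the method described, one such partition is applied to the grid portions and a second, independent partition to the objects, so that the bounding-determination and population phases can each proceed with every processor working on its own distinct share.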

A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

More parallel, please is the result of the work of an inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, language(s). As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give...

Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, the Stewart platform stands out as the oldest such system: fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile elements are then recorded by a rotation-matrix method. If a structural motoelement consists of two moving elements which translate relative to each other, then for the drive train, and especially for the dynamics, it is more convenient to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, plus the mobile platform as part 7) and one fixed part.

This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
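The rule behind such tables is 1/R_total = 1/R1 + 1/R2 + ... + 1/Rn. A quick sketch (our own, using exact rational arithmetic so that whole-number totals come out exact rather than as floats):

```python
from fractions import Fraction

def parallel_resistance(*rs):
    """Total resistance of resistors in parallel: 1/R = sum of 1/Ri.

    Fraction keeps the arithmetic exact, so combinations chosen to give
    whole-number totals (as in the tables described) return exact integers.
    """
    return 1 / sum(Fraction(1, r) for r in rs)
```

For example, 6 Ω and 3 Ω in parallel give exactly 2 Ω, and 12 Ω, 6 Ω and 4 Ω together also give 2 Ω, the kind of whole-number entries the tables are built from.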

The object of this research is tooling to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

In this paper, we study parallel I/O-efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest ... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items in one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires dramatically increasing the number of MC particles. 2014 Kun Zhou et al.
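
The operator-splitting scheme described above can be sketched as follows: per time step, a deterministic sub-step applies surface growth to every MC particle, then a stochastic sub-step performs a coagulation event that merges a random pair. The growth rate, time step, and population are invented placeholders, not the paper's actual model.

```python
import random

def step(volumes, growth_rate, dt, rng):
    # Deterministic sub-step: surface growth increases every particle volume.
    volumes = [v + growth_rate * dt for v in volumes]
    # Stochastic sub-step: one coagulation event merges two random particles,
    # conserving total volume while reducing the particle count by one.
    i, j = rng.sample(range(len(volumes)), 2)
    merged = volumes[i] + volumes[j]
    return [v for k, v in enumerate(volumes) if k not in (i, j)] + [merged]

rng = random.Random(42)
vols = [1.0] * 100
vols = step(vols, growth_rate=0.01, dt=1.0, rng=rng)
# One particle fewer; total volume = 100 * 1.01 is conserved by coagulation.
print(len(vols), sum(vols))
```

In the real algorithm the population is distributed over MPI ranks and the stochastic sub-step samples coagulation pairs according to the Marcus-Lushnikov kernel rather than uniformly.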

The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

Using methods of Kersten et al (2004 J. Geom. Phys. 50 273-302) and Krasil'shchik and Kersten (2000 Symmetries and Recursion Operators for Classical and Supersymmetric Differential Equations (Dordrecht: Kluwer)), we accomplish an extensive study of the N = 1 supersymmetric Korteweg-de Vries (KdV) equation. The results include a description of local and nonlocal Hamiltonian and symplectic structures, five hierarchies of symmetries, the corresponding hierarchies of conservation laws, recursion operators for symmetries, and generating functions of conservation laws. We stress that the main point of the paper is not just the results on the super-KdV equation itself, but rather the exposition of the efficiency of the geometrical approach and of the computational algorithms based on it.

The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated as a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser with a digital micromirror device and subsequently beam-combining them. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics, with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort expended. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution, and collaborative systems.

This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a reusable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles, and it must be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.

The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti... ... about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. It was also shown that nucleobase Y made good stacking and binding contacts with the other nucleobases in the TFO and the Watson-Crick duplex, respectively. In contrast, nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which ...

A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
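
The consensual combination step described above can be sketched in a few lines: each stage network classifies its own transformed view of the input, and the stage outputs are combined with weights before taking the final decision. Here the weights are fixed by hand, whereas the paper obtains them by optimization; all numbers are made up for illustration.

```python
def consensual_decision(stage_outputs, weights):
    """Weighted sum of per-stage class-score vectors; the argmax is the decision."""
    n_classes = len(stage_outputs[0])
    combined = [sum(w * out[c] for w, out in zip(weights, stage_outputs))
                for c in range(n_classes)]
    return max(range(n_classes), key=combined.__getitem__), combined

# Three stage networks scoring three classes on differently transformed inputs.
stages = [[0.2, 0.5, 0.3],
          [0.1, 0.7, 0.2],
          [0.6, 0.3, 0.1]]
weights = [0.5, 0.3, 0.2]   # a more reliable stage gets a larger weight
label, scores = consensual_decision(stages, weights)
print(label)  # class 1 wins the weighted consensus
```

The benefit of the consensus scheme is that errors of individual stage networks, trained on independent transforms, tend to cancel in the weighted combination.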

Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population-based global optimizer, the Particle Swarm...

Essential reading for understanding patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider particular additional design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin...

... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

The problem of parallel import is a pressing issue today. Legalization of parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
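
The speedup limit described above can be captured by a toy cost model: per cycle, the computation time shrinks as work/N, but the end-of-cycle rendezvous (collecting the fission source and the multiplication factor across processors) adds a synchronization cost that grows with the number of processors N. The constants below are illustrative, not measured values from the paper.

```python
def cycle_time(n_procs, work=1.0, sync_per_proc=0.001):
    compute = work / n_procs                 # perfectly parallel part
    rendezvous = sync_per_proc * n_procs     # gather/broadcast cost grows with N
    return compute + rendezvous

times = {n: cycle_time(n) for n in (1, 2, 4, 8, 16, 32, 64, 128)}
best = min(times, key=times.get)
print(best)  # execution time is minimized well before the largest N
```

Past the optimum, adding processors makes each cycle slower, which is exactly the behavior reported for criticality calculations on tens of processors.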

Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message-passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance, but not as good as a fully vectorized bucket sort on the Cray YMP.
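
A minimal sequential sketch of the barrel-sort idea: keys are binned into per-processor value ranges ("barrels"), each barrel is sorted locally, and the results are concatenated. In the real algorithm each barrel lives on a different processor and the binning step is a message exchange; here the barrels are plain lists for illustration.

```python
def barrel_sort(keys, n_barrels, key_min, key_max):
    width = (key_max - key_min + 1) / n_barrels
    barrels = [[] for _ in range(n_barrels)]
    for k in keys:
        idx = min(int((k - key_min) / width), n_barrels - 1)
        barrels[idx].append(k)          # "send" the key to its destination barrel
    out = []
    for b in barrels:                   # each processor sorts its barrel locally
        out.extend(sorted(b))
    return out

data = [42, 7, 99, 0, 13, 77, 55, 21]
print(barrel_sort(data, n_barrels=4, key_min=0, key_max=99))
```

Because barrel ranges are ordered, concatenating the locally sorted barrels yields a globally sorted result with a single all-to-all communication, which suits architectures with high per-message overhead.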

A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
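
The rsync-style comparison at the heart of the template-based checkpoint can be sketched as follows: the node's checkpoint data is split into fixed-size blocks, each block's checksum is compared with the previously saved template, and only the blocks that differ are stored. The block size and helper names are illustrative, not taken from the patent.

```python
import hashlib

BLOCK = 4  # tiny block size for illustration; real checkpoints use much larger blocks

def checksums(data):
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta_checkpoint(template, current):
    """Return (index, block) pairs for blocks that changed since the template."""
    old, new = checksums(template), checksums(current)
    return [(i, current[i * BLOCK:(i + 1) * BLOCK])
            for i, (a, b) in enumerate(zip(old, new)) if a != b]

template = b"AAAABBBBCCCCDDDD"
current  = b"AAAAXXXXCCCCDDDD"   # only the second block changed
print(delta_checkpoint(template, current))  # [(1, b'XXXX')]
```

Only the changed blocks and their indices need to be transmitted and stored, which is where the claimed reduction in checkpoint data volume comes from.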

In the history of education, single-sex education and coeducation have long been discussed as the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel, or they confuse the parallel model with coeducation, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to +- 20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior, and easy control; hence its range of application domains is expanding. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematics solution and the limitation of link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but changes its position.
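
The workspace-search idea can be sketched numerically for a toy planar parallel mechanism: a platform point is reachable only if every leg length (the distance from a fixed base anchor to the point, i.e. the inverse kinematics solution) stays within the actuator limits. The anchors and limits below are invented for illustration.

```python
import math

BASES = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]   # fixed base anchors of three legs
L_MIN, L_MAX = 1.5, 3.5                         # admissible leg-length interval

def reachable(x, y):
    return all(L_MIN <= math.dist((x, y), b) <= L_MAX for b in BASES)

# Boundary search: scan a grid and keep reachable cells; the workspace grows
# or shrinks as the admissible leg-length interval (branch length) is changed.
pts = [(x / 10, y / 10) for x in range(0, 41) for y in range(0, 31)
       if reachable(x / 10, y / 10)]
print(len(pts) > 0, reachable(2.0, 1.0))
```

Scanning finer grids, or sweeping the limits `L_MIN`/`L_MAX`, reproduces the paper's qualitative finding that branch length is the main lever on workspace size.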

Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
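
The straw analogy maps directly onto the standard formulas: resistances add in series, while in parallel the conductances (reciprocals) add, just as wider or extra straws make it easier to drink.

```python
def series(*rs):
    # Series: total resistance is the sum of the individual resistances.
    return sum(rs)

def parallel(*rs):
    # Parallel: conductances add, so the total is the reciprocal of the sum
    # of reciprocals.
    return 1.0 / sum(1.0 / r for r in rs)

print(series(10.0, 20.0))    # 30.0 ohms
print(parallel(10.0, 10.0))  # two equal paths halve the resistance
```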

A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encodes and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

The paper considers the monitoring of parallel computations for the detection of abnormal events. It is assumed that the computations are organized according to an event model, and that monitoring is based on specific test sequences.

Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of results. Second, parallel algorithms for Fourier transforms over finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques and were used in the investigation of various physical phenomena.
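
A miniature of the symmetry-exploitation theme (specifically, orbit decomposition): when a problem is invariant under a group action, it suffices to compute on one representative per orbit and replicate the result. Here the group is the reflection x -> -x, which halves the number of expensive evaluations; the function is a placeholder, not one from the thesis.

```python
def expensive_even_function(x, counter):
    counter[0] += 1           # count actual evaluations
    return x * x + 1          # even: f(-x) == f(x)

def evaluate_with_symmetry(xs, counter):
    cache = {}
    out = []
    for x in xs:
        rep = abs(x)          # orbit representative under x -> -x
        if rep not in cache:
            cache[rep] = expensive_even_function(rep, counter)
        out.append(cache[rep])
    return out

calls = [0]
vals = evaluate_with_symmetry([-3, -2, -1, 0, 1, 2, 3], calls)
print(vals, calls[0])  # [10, 5, 2, 1, 2, 5, 10] with only 4 evaluations
```

For larger groups (dihedral, symmetric), the same idea is what turns a full group-equivariant matrix multiplication into work on a much smaller set of orbit representatives.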

Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a large amount of communication when run in parallel. To overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that both are adequate for handling large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.

A new derivation of surface charges for 3 + 1 gravity coupled to electromagnetism is obtained. Gravity theory is written in the tetrad-connection variables. The general derivation starts from the Lagrangian, and uses the covariant symplectic formalism in the language of forms. For gauge theories, surface charges disentangle physical from gauge symmetries through the use of Noether identities and the exactness symmetry condition. The surface charges are quasilocal, explicitly coordinate independent, gauge invariant and background independent. For a black hole family solution, the surface charge conservation implies the first law of black hole mechanics. As a check, we show the first law for an electrically charged, rotating black hole with an asymptotically constant curvature (the Kerr–Newman (anti-)de Sitter family). The charges, including the would-be mass term appearing in the first law, are quasilocal. No reference to the asymptotic structure of the spacetime nor the boundary conditions is required and therefore topological terms do not play a rôle. Finally, surface charge formulae for Lovelock gravity coupled to electromagnetism are exhibited, generalizing the one derived in a recent work by Barnich et al Proc. Workshop ‘ About Various Kinds of Interactions’ in honour of Philippe Spindel (4–5 June 2015, Mons, Belgium) C15-06-04 (2016 (arXiv:1611.01777 [gr-qc])). The two different symplectic methods to define surface charges are compared and shown equivalent.

Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

Second derivative parallel block backward differentiation type formulas for stiff ODEs. ... The methods are inherently parallel and can be distributed over parallel processors. They are ...

The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression, and presents experimental results comparing sequential and parallel algorithms in terms of both the coding and decoding times achieved and the effectiveness of parallelization.

Research into medical imaging using general-purpose parallel processing architectures is described, and a review of the performance of previous medical imaging machines is provided. Results demonstrating that general-purpose parallel architectures can achieve performance comparable to other, specialized, medical imaging machine architectures are presented. A new back-to-front hidden-surface removal algorithm is described. Results demonstrating the computational savings obtained by using the modified back-to-front hidden-surface removal algorithm are presented. Performance figures for forming a full-scale medical image on a mesh-interconnected multiprocessor are presented.

The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, indicating under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
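
The Davidon-Fletcher-Powell update itself is standard; a minimal plain-Python sketch of one inverse-Hessian update follows (illustrative only, not the transputer implementation the paper discusses).

```python
def dfp_update(H, s, y):
    """One DFP update of a symmetric inverse-Hessian estimate H (list of lists):
    H <- H + (s s^T)/(s^T y) - (H y y^T H)/(y^T H y),
    where s is the step taken and y is the change in gradient."""
    n = len(s)
    sy = sum(si * yi for si, yi in zip(s, y))
    Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
    yHy = sum(y[i] * Hy[i] for i in range(n))
    return [[H[i][j] + s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
             for j in range(n)] for i in range(n)]

# For a quadratic with Hessian diag(2, 4), a step s gives y = Hessian @ s,
# and the updated H satisfies the secant condition H1 @ y == s.
H0 = [[1.0, 0.0], [0.0, 1.0]]
s, y = [1.0, 0.0], [2.0, 0.0]
H1 = dfp_update(H0, s, y)
print(H1)  # [[0.5, 0.0], [0.0, 1.0]]
```

The rank-two structure of the update is what the paper's parallel decomposition can exploit: the two correction terms are independent once `Hy` is known.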

This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
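
As a sketch of the kind of routine such a report describes, here is a serial sparse matrix-vector product in compressed sparse row (CSR) form; the row loop is the natural place to split work among MPI ranks or OpenMP threads, since rows are independent. This is a generic illustration, not the report's C++ classes.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x, with A stored in CSR form.
    Each row's dot product is independent of the others, which is what
    makes the outer loop easy to distribute across processes or threads."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A = [[4, 0, 1],
#      [0, 2, 0]] stored as CSR:
values, col_idx, row_ptr = [4.0, 1.0, 2.0], [0, 2, 1], [0, 2, 3]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0]
```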

The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e.g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
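
The divide-and-conquer paradigm mentioned above can be illustrated with pairwise summation: the two half-sums are independent, so on a parallel machine they could run concurrently, reducing the dependency depth from O(n) to O(log n). A minimal serial sketch (illustrative, not from the source):

```python
def pairwise_sum(xs):
    """Divide-and-conquer reduction over a non-empty list: the two halves
    are independent subproblems, so a parallel machine could evaluate
    them concurrently, giving O(log n) depth instead of O(n)."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

print(pairwise_sum(list(range(1, 9))))  # 36
```

This also illustrates the caution in the abstract: naive divide and conquer can copy data at every level, so a careless translation of an O(n) serial loop can cost more total work than it saves.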

A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the output.
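
Under the usual idealization, n lines of characteristic impedance Z driven in parallel at the input and stacked in series at the output give a voltage gain of n and an impedance transformation of n². A small generic sketch of this bookkeeping (not from the source):

```python
def tlt_ratios(n_lines, z_line):
    """Idealized n-stage transmission line transformer: n matched lines of
    impedance z_line, fed in parallel at the input and connected in
    series at the output."""
    z_in = z_line / n_lines          # parallel connection at the input
    z_out = z_line * n_lines         # series connection at the output
    voltage_gain = n_lines           # output voltages add in series
    impedance_ratio = z_out / z_in   # equals n_lines ** 2
    return z_in, z_out, voltage_gain, impedance_ratio

print(tlt_ratios(4, 50.0))  # (12.5, 200.0, 4, 16.0)
```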

Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized, and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high-performance multicomputers that have emerged over the last decade, which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine, and results in the programming of multicomputers being widely acknowledged as a difficult, time-consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving the data dependences, finding components amenable to parallelization and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various parallel computers, such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained.

Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly enhanced parallelism. A number of compilers have been updated to address the challenges of synchronization and threading. Appropriate program and algorithm classifications are of great advantage to software engineers seeking opportunities for effective parallelization. In the present work we investigate current species-based classifications of algorithms, discuss related work on classification, and compare the issues that challenge classification. A set of algorithms is chosen whose structures match different issues and perform the given tasks. We have tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We have added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms. We implemented the new theory in the tool, enabling automatic characterization of program code.

The effect of parallel electric fields on whistler mode wave propagation has been studied. To account for the parallel electric fields, the dispersion equation has been analyzed, and refractive index surfaces for magnetospheric plasma have been constructed. The presence of parallel electric fields deforms the refractive index surfaces, which diffuses the energy flow and produces defocusing of the whistler mode waves. The parallel electric field induces an instability in the whistler mode waves propagating through the magnetosphere. The growth or decay of the whistler mode instability depends on the direction of the parallel electric fields. It is concluded that the analyses of whistler wave records received on the ground should account for the role of parallel electric fields.

This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators. This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1. Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to reduce the number of iterations when solving the normal equations. A brand new workflow of bundle adjustment is developed to utilize GPU parallel computing technology. Our method avoids the storage and inversion of the big normal matrix, and computes the normal matrix in real time. The proposed method not only largely decreases the memory requirement of the normal matrix, but also largely improves the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
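
The conjugate gradient core is standard; a minimal plain-Python sketch on a small symmetric positive-definite system follows (illustrative only — the paper's GPU variant adds preconditioning and inexact Newton steps, which are omitted here).

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=50):
    """Plain conjugate gradient for a symmetric positive-definite matrix A
    (list of lists). Only matrix-vector products are needed, which is why
    the method maps well to GPUs and avoids forming an explicit inverse."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                         # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)         # exact solution is [1/11, 7/11]
```

For an n-by-n SPD system, CG converges in at most n iterations in exact arithmetic; preconditioning (as in the paper) reduces the iteration count further when A is ill-conditioned.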

By a new modification of the parallel-plate analyzer, second-order focusing is obtained at an arbitrary injection angle. Such an analyzer with a small injection angle will have the advantage of a small operating voltage, compared to the Proca and Green analyzer, where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for precise energy measurement of high-energy particles in the MeV range.

This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented with integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec.

The essay examines the parallels between Molé Liston's studies on labor and precarity in Italy and the United States' anthropology job market. Probing the way economic shifts reshaped the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

Magnetic resonance imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use the Landweber-Kaczmarz iteration, and we impose some additional sparsity constraints in order to improve the overall results.
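
A plain Landweber iteration for a linear problem, x ← x + ω·Aᵀ(b − Ax), illustrates the derivative-free flavor of this family: only applications of A and Aᵀ are needed. This is a generic sketch, not the Landweber-Kaczmarz variant with sparsity constraints used above.

```python
def landweber(A, b, omega, iters):
    """Landweber iteration x <- x + omega * A^T (b - A x) for a linear
    system, starting from x = 0. Converges for 0 < omega < 2 / ||A||^2."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        r = [b[i] - Ax[i] for i in range(m)]       # current residual
        for j in range(n):                         # x += omega * A^T r
            x[j] += omega * sum(A[i][j] * r[i] for i in range(m))
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 1.0]
x = landweber(A, b, omega=0.4, iters=200)
print(x)  # converges toward the solution [1.0, 1.0]
```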

The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues.

The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to cope with high-dimensional data and delivering good response times. The proposed system is able to find statistically significant biological markers that discriminate classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.

Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the Connection Machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented.
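
The spatial binning behind a bucket-sort contact search can be sketched generically: nodes are hashed into cells of a uniform 3-D grid, so candidate contact pairs only need to be sought in the same or neighbouring cells. This is a hypothetical helper for illustration, not the CM-200 implementation.

```python
def bucket_nodes(coords, cell):
    """Bin 3-D node coordinates into spatial buckets of edge length `cell`.
    Contact search then only compares nodes in the same or adjacent
    buckets, instead of all O(n^2) node pairs."""
    buckets = {}
    for idx, (x, y, z) in enumerate(coords):
        key = (int(x // cell), int(y // cell), int(z // cell))
        buckets.setdefault(key, []).append(idx)
    return buckets

nodes = [(0.1, 0.2, 0.0), (0.4, 0.1, 0.3), (2.5, 0.0, 0.0)]
b = bucket_nodes(nodes, cell=1.0)
print(sorted(b))  # [(0, 0, 0), (2, 0, 0)] -- nodes 0 and 1 share a bucket
```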

This work arises from the introduction of a parallel iterative solver to a large structural analysis finite element code. The code is called FEX and it was developed at Hitachi's Mechanical Engineering Laboratory. The FEX package can deal with a large range of structural analysis problems using a large number of finite element techniques. FEX can solve either stress or thermal analysis problems of a range of different types from plane stress to a full three-dimensional model. These problems can consist of a number of different materials which can be modelled by a range of material models. The structure being modelled can have the load applied at either a point or a surface, or by a pressure, a centrifugal force or just gravity. Alternatively a thermal load can be applied with a given initial temperature. The displacement of the structure can be constrained by having a fixed boundary or by prescribing the displacement at a boundary.

The saturation stage of a multipactor discharge is considered of interest, since it can guide towards a criterion to assess the multipactor onset. The electron cloud under multipactor regime within a parallel-plate waveguide is modeled by a thin continuous distribution of charge, and the equations of motion are calculated taking into account the space charge effects. The saturation is identified by the interaction of the electron cloud with its image charge. The stability of the electron population growth is analyzed, and two mechanisms of saturation are identified that explain the steady-state multipactor for voltages just above the onset threshold. The impact energy in the collision against the metal plates decreases during the electron population growth due to the attraction between the electron sheet and its image charge through the initial plate. When this growth remains stable until the impact energy reaches the first cross-over point, the electron surface density tends to a constant value. When the stability is broken before reaching the first cross-over point, the surface charge density oscillates chaotically, bounded within a certain range. In this case, an expression to calculate the maximum electron surface charge density is found, whose predictions agree with the simulations when the voltage is not too high.

Over more than three decades, a number of numerical landscape evolution models (LEMs) have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (an acronym for BAsin anD LANdscape DynamicS) is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

This beautifully written book deals with one shining example: the Hilbert schemes of points on algebraic surfaces ... The topics are carefully and tastefully chosen ... The young person will profit from reading this book. --Mathematical Reviews The Hilbert scheme of a surface X describes collections of n (not necessarily distinct) points on X. More precisely, it is the moduli space for 0-dimensional subschemes of X of length n. Recently it was realized that Hilbert schemes originally studied in algebraic geometry are closely related to several branches of mathematics, such as singularities, symplectic geometry, representation theory--even theoretical physics. The discussion in the book reflects this feature of Hilbert schemes. One example of the modern, broader interest in the subject is a construction of the representation of the infinite-dimensional Heisenberg algebra, i.e., Fock space. This representation has been studied extensively in the literature in connection with affine Lie algebras, conformal field...

In this paper, we investigate biharmonic submanifolds in pseudo-Euclidean spaces with arbitrary index and dimension. We give a complete classification of biharmonic spacelike submanifolds with parallel mean curvature vector in pseudo-Euclidean spaces. We also determine all biharmonic Lorentzian surfaces with parallel mean curvature vector field in pseudo-Euclidean spaces.

The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

For many parallel applications, performance relies not on instruction-level parallelism but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler's ability to generate loop-parallel code. We use this compilation system to modify two sequential benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

Parallel Kinematics- Type, Kinematics, and Optimal Design presents the results of 15 year's research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

This paper deals with the simulation of 3-D rotating flows based on the velocity-vorticity formulation of the Navier-Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two-step process ... is that of solving a singular, large, sparse, over-determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers.

In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof of the fact that a planar body can only have polynomial parallel volume if it is convex. Extensions to Minkowski spaces and random sets are also discussed.
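
For convex planar bodies the parallel volume is exactly polynomial in the distance r, by the classical Steiner formula A(K_r) = A(K) + r·L(K) + πr², where L(K) is the perimeter. A quick numeric check of this background fact (not the paper's asymptotic result):

```python
import math

def parallel_area_convex(area, perimeter, r):
    """Steiner formula for the area of the r-parallel body of a planar
    convex set K: A(K_r) = A(K) + r * L(K) + pi * r^2."""
    return area + r * perimeter + math.pi * r * r

# Unit square at distance r = 2: area 1, perimeter 4.
# The parallel body is the square grown by 2, with quarter-disc corners.
print(parallel_area_convex(1.0, 4.0, 2.0))  # 1 + 8 + 4*pi ~ 21.566
```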

The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression, and presents experimental results comparing sequential and parallel algorithms from the point of view of both coding and decoding time and the effectiveness of parallelization.

Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

.... This has been performed without the need for silicon nitride layers or multi-layered resists. (2) We have conducted experiments using a closed-loop MM to measure the coefficient of thermal expansion...

The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
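The quicksort example above illustrates the book's theme: randomizing the pivot choice turns the O(n^2) worst case into an expected O(n log n) bound on every input. A minimal sketch (illustrative Python, not taken from the book):

```python
import random

def randomized_quicksort(a):
    """Quicksort with a uniformly random pivot: expected O(n log n)
    comparisons on any input, since no fixed input can consistently
    force unbalanced partitions."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

A deterministic quicksort that always picks the first element degrades to O(n^2) on already-sorted input; the random pivot removes that dependence on the input distribution.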

This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

We have studied the energy flow patterns of the radiation emitted by an electric dipole located in between parallel mirrors. It appears that the field lines of the Poynting vector (the flow lines of energy) can have very intricate structures, including many singularities and vortices. The flow line patterns depend on the distance between the mirrors, the distance of the dipole to one of the mirrors and the angle of oscillation of the dipole moment with respect to the normal of the mirror surfaces. Even for the simplest case of a dipole moment oscillating perpendicular to the mirrors, singularities appear at regular intervals along the direction of propagation (parallel to the mirrors). For a parallel dipole, vortices appear in the neighbourhood of the dipole. For a dipole oscillating under a finite angle with the surface normal, the radiation tends to swirl around the dipole before travelling off parallel to the mirrors. For relatively large mirror separations, vortices appear in the pattern. When the dipole is off-centred with respect to the midway point between the mirrors, the flow line structure becomes even more complicated, with numerous vortices in the pattern, and tiny loops near the dipole. We have also investigated the locations of the vortices and singularities, and these can be found without any specific knowledge about the flow lines. This provides an independent means of studying the propagation of dipole radiation between mirrors.

PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

A quantum molecular dynamics simulation code has been developed at the Kansai Research Establishment for the analysis of the thermalization of photon energies in molecules and materials. The simulation code is parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing particle groups among the processor units. By distributing work to the processor units not only by particle group but also by the fine-grained calculations for individual particles, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)

The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow have been developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies are obtained: 95.1% for the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar parallel calculation on 100 processors with an 800x800 grid. Performance models are developed to analyze the performance of the LB codes. These models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the available memory size of one processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method in general has a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)
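The communication drawback of 1-D decomposition mentioned above can be seen with a back-of-the-envelope halo-exchange count (a sketch under simple assumptions; the function names are illustrative and not from the paper):

```python
import math

def halo_cells_1d(n, p):
    """Ghost cells exchanged per processor for a 1-D (slab) decomposition
    of an n x n grid: each interior slab trades two full rows of n cells,
    independent of the processor count p."""
    return 2 * n

def halo_cells_2d(n, p):
    """Ghost cells per processor for a 2-D (block) decomposition on a
    sqrt(p) x sqrt(p) process grid: four edges of n/sqrt(p) cells each,
    shrinking as p grows."""
    side = n / math.sqrt(p)
    return 4 * side

# For the 800x800 grid on 100 processors quoted in the abstract:
print(halo_cells_1d(800, 100))  # 1600
print(halo_cells_2d(800, 100))  # 320.0
```

At small processor counts the gap is modest, which is consistent with the paper's conclusion that the vectorization benefit of the 1-D layout can outweigh its communication cost.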

The need for high-performance computing, together with the increasing trend from single-processor to parallel computer architectures, has driven the adoption of parallel computing. To benefit from parallel computing power, parallel algorithms are usually defined that can be mapped and executed

To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...

Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, sensor heads in the plurality of sensor heads generate sensor data in parallel.

Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contributions, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments Empowering Multi-Sensorial Things.

A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

As is well known, non-context-free grammars for generating formal languages have a certain intrinsic computational power that presents serious difficulties both for efficient parsing algorithms and for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for the investigation of the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB --> A_1 ... A_n b_1 ... b_m. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automata and their 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are in their general form equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.

This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

We give a physicist oriented survey of Poisson-Lie symmetries of classical systems. We consider finite-dimensional geometric actions and the chiral WZNW model as examples for the general construction. An essential point is the appearance of quadratic Poisson brackets for group-like variables. It is believed that upon quantization they lead to quadratic exchange algebras. ((orig.))

There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL). Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform consisting of an eight-node PC cluster under Linux and a tiled display was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices

Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

A typical multigrid algorithm applied to well-behaved linear elliptic partial differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
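The effect of load imbalance on efficiency discussed above can be sketched numerically (illustrative Python; communication cost is ignored and the names are our own, not from the paper):

```python
def imbalanced_time(work_per_proc):
    """With communication cost ignored, a parallel step finishes only
    when the most heavily loaded processor does."""
    return max(work_per_proc)

def parallel_efficiency(serial_time, p, parallel_time):
    """Efficiency E = speedup / p = T_1 / (p * T_p)."""
    return serial_time / (p * parallel_time)

# 100 units of work on 4 processors, perfectly balanced:
print(parallel_efficiency(100.0, 4, imbalanced_time([25.0] * 4)))  # 1.0
# The same work with one overloaded processor:
print(parallel_efficiency(100.0, 4, imbalanced_time([40.0, 20.0, 20.0, 20.0])))  # 0.625
```

For multigrid this matters because coarse grids hold few points per processor, so a mapping that balances the fine grid can leave the coarse levels badly imbalanced.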

We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

Volatility is a measurement of the risk of financial products. A stock will hit new highs and lows over time, and if these highs and lows fluctuate wildly, then it is considered a highly volatile stock. Such a stock is considered riskier than a stock whose volatility is low. Although highly volatile stocks are riskier, the returns that they generate for investors can be quite high. Of course, with a riskier stock also comes the chance of losing money and yielding negative returns. In this project, we will use historic stock data to help us forecast volatility. Since the financial industry usually uses the S&P 500 as the indicator of the market, we will use the S&P 500 as a benchmark to compute the risk. We will also use artificial neural networks as a tool to predict volatilities for a specific time frame that will be set when we configure the neural network. There have been reports that neural networks with different numbers of layers and different numbers of hidden nodes may generate varying results. In fact, we may be able to find the best configuration of a neural network to compute volatilities. We will implement this system using the parallel approach. The system can be used as a tool for investors in allocating and hedging assets.
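The abstract does not specify its network configuration, but the historical volatility such a system would take as input can be sketched as follows (illustrative Python with made-up prices, not real S&P 500 data):

```python
import math

def log_returns(prices):
    """Daily log returns from a series of closing prices."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def historical_volatility(prices, periods_per_year=252):
    """Annualized historical volatility: sample standard deviation of
    daily log returns, scaled by the square root of trading days per year."""
    r = log_returns(prices)
    mean = sum(r) / len(r)
    var = sum((x - mean) ** 2 for x in r) / (len(r) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

# Illustrative closing prices only:
prices = [100.0, 101.0, 99.5, 100.5, 102.0, 101.0]
print(historical_volatility(prices))
```

A sliding window of such volatility values over past days is a typical feature vector for the forecasting network; wildly fluctuating highs and lows show up directly as a larger standard deviation of returns.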

A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).

In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled-shape memory alloy actuators. State observation and feedback is accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.

We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
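The auto-correlative statistic the engine computes can be sketched in a few lines. This serial version is illustrative only (it is not the VTK C++ API); it shows the quantity whose aggregation the parallel engine distributes:

```python
def autocorrelation(x, lag):
    """Sample autocorrelation of x at the given lag: the lagged covariance
    of the mean-centered sequence, normalized by its variance."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag)) / (n - lag)
    return cov / var

# A roughly periodic sequence correlates positively with itself at lag 1
data = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0, 3.0, 4.0]
print(round(autocorrelation(data, 1), 3))  # -> 0.397
```

In a parallel engine, each process would compute partial sums (count, Σx, Σx², lagged cross-products) over its block of the data, and a reduction would combine them; the sketch above folds that into one pass for clarity.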

We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)
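In symbols, the defining conditions quoted above (a direct paraphrase of the abstract, with $\mathcal{K}\subset TM$ the null line bundle) read:

\[
g|_{\mathcal{K}\times\mathcal{K}} = 0, \qquad
\nabla_X\,\Gamma(\mathcal{K}) \subseteq \Gamma(\mathcal{K}) \ \ \text{for all } X \in \Gamma(TM), \qquad
\mathrm{Ric}(Y,Z) = 0 \ \ \text{for all } Y, Z \in \Gamma(\mathcal{K}^{\perp}).
\]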

...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
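The reproducibility issue mentioned above stems from floating-point addition not being associative: a parallel reduction that groups partial sums differently can return a different global sum. Compensated summation (Neumaier's variant of Kahan's algorithm, shown here as a generic sketch rather than the published LANL work) is one standard way to make the result far less sensitive to ordering:

```python
def neumaier_sum(values):
    """Compensated summation: carry the rounding error of each addition
    in a separate low-order accumulator and fold it back in at the end."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            comp += (total - t) + v      # low-order bits of v were lost
        else:
            comp += (v - t) + total      # low-order bits of total were lost
        total = t
    return total + comp

vals = [1e100, 1.0, -1e100]
print(sum(vals))           # naive left-to-right sum: the 1.0 is lost -> 0.0
print(neumaier_sum(vals))  # compensated sum recovers it -> 1.0
```

A fully reproducible parallel sum additionally needs a fixed reduction order or an exact (fixed-point) accumulator, but compensation already removes most of the run-to-run drift.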

Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of the…

This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the algorithm (a parallel-E, parallel-M scheme). Examples presented in this report include item response…
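The decomposition that makes both steps parallel is visible in a toy version: for a mixture model, the E step reduces each data chunk to a small tuple of sufficient statistics, and the M step needs only their sum. The one-dimensional, two-component Gaussian mixture below is an illustrative sketch, not the report's code; each chunk could be handed to a separate worker, with only the statistic triples reduced across workers:

```python
import math

def e_step_chunk(chunk, params):
    """E step over one data chunk: per component, accumulate
    (sum of responsibilities, sum of r*x, sum of r*x^2)."""
    stats = [[0.0, 0.0, 0.0] for _ in params]
    for x in chunk:
        dens = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                for (w, m, v) in params]
        total = sum(dens)
        for k, d in enumerate(dens):
            r = d / total  # responsibility of component k for x
            stats[k][0] += r
            stats[k][1] += r * x
            stats[k][2] += r * x * x
    return stats

def m_step(stats, n):
    """M step from the reduced statistics: updated (weight, mean, variance)."""
    return [(N / n, S1 / N, max(S2 / N - (S1 / N) ** 2, 1e-12))
            for (N, S1, S2) in stats]

data = [0.1, 0.2, -0.1, 5.0, 5.2, 4.9]
params = [(0.5, 0.0, 1.0), (0.5, 4.0, 1.0)]    # initial (weight, mean, var)
chunks = [data[:3], data[3:]]                  # each chunk -> one worker
partial = [e_step_chunk(c, params) for c in chunks]
combined = [[sum(p[k][j] for p in partial) for j in range(3)] for k in range(2)]
params = m_step(combined, len(data))           # means move toward ~0.07 and ~5.0
```

Because chunks never interact during the E step, the map over `chunks` is where a process pool (or MPI ranks) would slot in, followed by a small all-reduce of `combined`.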

A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

Introduction: Marginal adaptation is the most critical factor in the long-term prognosis of single crowns. This study aimed to assess the marginal quality, as well as the discrepancies in marginal integrity, of some PFM single crowns on posterior teeth by employing parallel radiography in Shiraz Dental School, Shiraz, Iran. Methods: In this descriptive study, parallel radiographs were taken of 200 fabricated PFM single crowns of posterior teeth after cementation and before discharging the patient. To calculate the magnification of the images, a metallic sphere with a thickness of 4 mm was placed in the direction of the crown margin on the occlusal surface. Thereafter, the horizontal and vertical gaps between the crown margin and the preparation margin, as well as the vertical distance between the crown margin and the bone crest, were measured using digital radiological software. Results: Descriptive analysis of the data revealed that 75.5% and 60% of the cases exceeded the acceptable gap (50 µm) in the vertical (130±20 µm) and horizontal (90±15 µm) dimensions, respectively. Moreover, 85% of patients were found to have either a horizontal or a vertical gap. The crown margins invaded the biologic width on the mesial surface in 77% of cases and on the distal surface in 70%. Conclusion: Parallel radiography can be expedient at the framework try-in stage, yielding important information that cannot be obtained by routine clinical evaluation, and may improve the treatment prognosis.
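The role of the 4 mm reference sphere is to calibrate image magnification so that gap measurements can be corrected to true size. A minimal sketch of that correction (the values and function are illustrative, not the study's software):

```python
REFERENCE_MM = 4.0  # known size of the metallic reference sphere

def true_size_um(measured_feature_um, imaged_sphere_mm):
    """Divide a measurement by the magnification factor implied by the
    imaged size of the reference sphere."""
    magnification = imaged_sphere_mm / REFERENCE_MM
    return measured_feature_um / magnification

# e.g. the sphere images at 4.4 mm (10% magnification),
# and a vertical gap measures 143 um on the radiograph:
print(round(true_size_um(143.0, 4.4)))  # true gap is ~130 um
```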

Marching-on-in-time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time-advancing electric surface current densities, iteratively solving sparse systems of equations at all time steps. Contrary to their finite difference and finite element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in parallel implementations of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain (PWTD) algorithm on modern manycore computer architectures, including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedup compared to serial implementations, while the distributed parallel implementation is highly scalable to thousands of compute nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.

We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.

A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a…

We discuss the main results obtained in a study of a mathematical model of synchronously parallel Boltzmann machines. We present supporting evidence for the conjecture that a synchronously parallel Boltzmann machine maximizes a consensus function that consists of a weighted sum of the regular…
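The consensus function referred to in both abstracts is conventionally a weighted sum over pairs of connected units plus bias terms. A small sketch (the weights and biases below are chosen purely for illustration):

```python
from itertools import product

def consensus(state, weights, biases):
    """Consensus of a 0/1 state vector: the bias terms plus the
    weighted sum over unit pairs that are both on."""
    n = len(state)
    c = sum(biases[i] * state[i] for i in range(n))
    c += sum(weights[i][j] * state[i] * state[j]
             for i in range(n) for j in range(i + 1, n))
    return c

weights = [[0, 2, -1],
           [2, 0, 1],
           [-1, 1, 0]]
biases = [0.5, -0.5, -0.25]

# A Boltzmann machine searches stochastically for a maximum of this
# function; for three units we can simply enumerate all states.
best = max(product([0, 1], repeat=3), key=lambda s: consensus(s, weights, biases))
print(best, consensus(best, weights, biases))  # -> (1, 1, 0) 2.0
```

The point of the two papers above is what happens to this objective when many units update simultaneously rather than one at a time.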

Memory system efficiency is crucial for any processor to achieve high performance, especially in the case of data parallel machines. Processing capabilities of parallel lanes will be wasted, when data requests are not accomplished in a sustainable and timely manner. Irregular vector memory accesses

The present paper explores the implications of parallel narrative structure in Paul Harding's "Tinkers" (2009). Besides primarily recounting the two sets of parallel narratives, "Tinkers" also comprises of seemingly unrelated fragments such as excerpts from clock repair manuals and diaries. The main stories, however, told…

The paradigm of nested data parallelism (NDP) allows a variety of semi-regular computation tasks to be mapped onto SIMD-style hardware, including GPUs and vector units. However, some care is needed to keep down space consumption in situations where the available parallelism may vastly exceed...
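The core of flattening in NDP is replacing a nested structure with a flat value array plus a segment descriptor, so that irregular nested loops become regular data-parallel passes. A minimal sketch (the segmented sum is written serially here; in an NDP runtime it would be a single parallel primitive):

```python
def segmented_sum(values, seg_lengths):
    """Sum each segment of the flat values array; seg_lengths is the
    segment descriptor produced by flattening the nested structure."""
    sums, i = [], 0
    for length in seg_lengths:
        sums.append(sum(values[i:i + length]))
        i += length
    return sums

nested = [[1, 2, 3], [], [4, 5]]
flat = [x for seg in nested for x in seg]   # flat data array: [1, 2, 3, 4, 5]
lengths = [len(seg) for seg in nested]      # segment descriptor: [3, 0, 2]
print(segmented_sum(flat, lengths))         # -> [6, 0, 9]
```

The space-consumption caveat in the abstract arises exactly here: the flat representation materializes every element of every inner sequence at once, which can vastly exceed what a lazier evaluation strategy would hold in memory.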

In photoelectrical tracking systems, Bayer images are traditionally decompressed on the CPU. However, this becomes too slow when images grow large, for example to 2K×2K×16 bit. To accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) that support the CUDA architecture. The decoding procedure divides into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the two-dimensional serial IDWT as one-dimensional parallel IDWTs. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part runs more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed; experimental results show that it achieves a 3 to 5 times speedup compared to the serial CPU method.
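The rewrite of the 2-D IDWT into 1-D transforms relies on the wavelet transform being separable: an inverse pass over every row followed by one over every column, where each row and each column is independent and can map to its own GPU work item. A CPU sketch of the idea using a single-level Haar transform (illustrative only; the paper does not specify the Haar wavelet):

```python
def inverse_haar_1d(coeffs):
    """coeffs = [averages | details]; reconstruct each pair as (a + d, a - d)."""
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out += [a + d, a - d]
    return out

def inverse_haar_2d(image):
    """Separable 2-D inverse: 1-D passes over rows, then over columns.
    Every row/column call is independent -> one GPU thread each."""
    rows = [inverse_haar_1d(row) for row in image]
    cols = [inverse_haar_1d(list(col)) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

# Single-level Haar coefficients of the 2x2 image [[1, 2], [3, 4]]:
coeffs = [[2.5, -0.5],
          [-1.0, 0.0]]
print(inverse_haar_2d(coeffs))  # -> [[1.0, 2.0], [3.0, 4.0]]
```

On a GPU, `rows` and `cols` each become one kernel launch over independent 1-D transforms, which is the restructuring the paper credits for most of its speedup.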

This report reflects my work on parallelization of TMVA machine learning algorithms, integrated into the ROOT Data Analysis Framework, during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.