Sample records for potential inversion revisited

The inverse Faraday effect is usually associated with circularly polarized laser beams. However, it was recently shown that it can also occur for linearly polarized radiation [1]. The quasi-static axial magnetic field generated by a laser beam propagating in plasma can be calculated by considering both the spin and the orbital angular momenta of the laser pulse. A net spin is present when the radiation is circularly polarized, and a net orbital angular momentum is present if there is any deviation from perfect rotational symmetry. This orbital angular momentum has recently been discussed in the plasma context [2], and can give an additional contribution to the axial magnetic field, thus enhancing or reducing the inverse Faraday effect. As a result, this effect, usually attributed to circular polarization, can also be excited by linearly polarized radiation if the incident laser propagates in a Laguerre-Gauss mode carrying a finite amount of orbital angular momentum.

[1] S. Ali, J. R. Davies, and J. T. Mendonça, Phys. Rev. Lett. 105, 035001 (2010).
[2] J. T. Mendonça, B. Thidé, and H. Then, Phys. Rev. Lett. 102, 185005 (2009).

The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.

Heisenberg offered an interpretation of the quantum state which made use of a quantitative version of an earlier notion of Aristotle's, both referring to it by its Latin name, potentia, and identifying its qualitative aspect with Aristotle's original concept. The relationship between this use and Aristotle's notion was not made by Heisenberg in full detail, beyond noting their common character: that of signifying the system's objective capacity to be found later to possess a property in actuality. For such actualization, Heisenberg required measurement to have taken place, an interaction with external systems that disrupts the otherwise independent, natural evolution of the quantum system. The notion of state actualization was later taken up by others, including Shimony, in the search for a law-like measurement process. Yet, the relation of quantum potentiality to Aristotle's original notion has been viewed as mainly terminological, even by those who used it thus. Here, I reconsider the relation of Heisenberg's notion to Aristotle's and show that it can be explicated in greater specificity than Heisenberg did. This is accomplished through careful consideration of the role of potentia in physical causation and explanation, and is done in order to provide a fuller understanding of this aspect of Heisenberg's approach to quantum mechanics. Most importantly, it is pointed out that Heisenberg's requirement of an external intervention during measurement that disrupts the otherwise independent, natural evolution of the quantum system is in accord with Aristotle's characterization of spontaneous causation. Thus, the need for a teleological understanding of the actualization of potentia, an often assumed requirement that has left this fundamental notion neglected, is seen to be spurious. This article is part of the themed issue 'Second quantum revolution: foundational questions'.

Marchenko inversion is used to determine local, energy-independent but channel-dependent potential matrices from optimum sets of experimental phase shifts. The ³SD₁ and ³PF₂ channels of nucleon-nucleon systems contain the tensor force explicitly in their off-diagonal potential matrix elements for T = 0 and T = 1 isospin. We obtain, together with single channels, complete sets of quantitative nucleon-nucleon potential results which are ready for application in nuclear structure and reaction analyses. The historic coupled-channels inversion result of Newton and Fulton is revisited.
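
As a reminder of the scheme, the single-channel (s-wave) Marchenko equations take the standard textbook form below; the coupled-channel inversion referred to above replaces the scalar quantities by matrices.

```latex
% Input kernel built from the S-matrix S(k) = e^{2i\delta(k)} and the
% bound-state data (binding momenta \kappa_j, norming constants c_j):
F(r,r') = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[1 - S(k)\right]
          e^{ik(r+r')}\,dk \;+\; \sum_j c_j^{2}\, e^{-\kappa_j (r+r')}

% Marchenko integral equation for the translation kernel K:
K(r,r') + F(r,r') + \int_r^{\infty} K(r,s)\,F(s,r')\,ds = 0,
\qquad r' > r

% Local potential recovered from the diagonal:
V(r) = -2\,\frac{d}{dr}\,K(r,r)
```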

One of the most famous, and most derided, arguments against the morality of abortion is the argument from potential, which maintains that the fetus's potential to become a person and enjoy the valuable life common to persons entails that its destruction is prima facie morally impermissible. In this paper, I will revisit and offer a defense of the argument from potential. First, I will criticize the classical arguments proffered against the importance of fetal potential, specifically the arguments put forth by philosophers Peter Singer and David Boonin, by carefully unpacking the claims made in these arguments and illustrating why they are flawed. Secondly, I will maintain that fetal potential is morally relevant when it comes to the morality of abortion, but that it must be accorded a proper place in the argument. This proper place, however, cannot be found until we first answer a very important and complex question: we must first address the issue of personal identity, and when the fetus becomes the type of being who is relevantly identical to a future person. I will illustrate why the question of fetal potential can only be meaningfully addressed after we have first answered the question of personal identity and how it relates to the human fetus.

An exact theory of irreversibility was proposed by Misra, Prigogine and Courbage, based on non-unitary similarity transformations Λ that intertwine reversible dynamics and irreversible ones. This supports the idea that irreversible behavior originates at the microscopic level. Reversible evolutions with an internal time operator have the intertwining property. Recently the inverse intertwining problem has been answered in the negative: not every unitary evolution allowing such a Λ-transformation has an internal time. This work contributes new results in this direction.

We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
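
To illustrate how a depth-related constraint enters the objective function as a weighting, here is a minimal damped least-squares sketch. The toy point-mass kernel, grid, damping value and β = 2 choice are assumptions for demonstration only, not the authors' actual parametrization.

```python
import numpy as np

# Minimal depth-weighting sketch: a vertical density column under a short
# gravity profile. Kernel, grid, lam and beta = 2 are illustrative.
x = np.linspace(-10.0, 10.0, 9)            # surface observation points
z = np.linspace(1.0, 15.0, 30)             # cell depths (underdetermined)
G = z[None, :] / (x[:, None] ** 2 + z[None, :] ** 2) ** 1.5  # point-mass kernel

m_true = np.zeros(z.size)
m_true[np.argmin(np.abs(z - 8.0))] = 1.0   # single source at ~8 depth units
d = G @ m_true

lam = 1e-3
# Plain damped least squares: recovered mass hugs the surface
m_plain = np.linalg.solve(G.T @ G + lam * np.eye(z.size), G.T @ d)

# Depth weighting w(z) = (z + z0)^(-beta/2) makes deep cells cheaper in the
# model norm, counteracting the kernel's decay with depth
beta, z0 = 2.0, 0.5
w = (z + z0) ** (-beta / 2.0)
m_depth = np.linalg.solve(G.T @ G + lam * np.diag(w ** 2), G.T @ d)

def centroid(m):
    """Depth centroid of the absolute model."""
    return float(np.sum(z * np.abs(m)) / np.sum(np.abs(m)))
```

With the weighting in place the recovered mass sits deeper, closer to the true source depth, which is the effect the constraints above are designed to produce.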

Using the fact that the continuous time random walk (CTRW) scheme is a random process subordinated to a simple random walk under the operational time given by the number of steps taken by the walker up to a given time, we revisit the problem of strongly dispersive transport in disordered media, which first led Scher and Montroll to introduce power-law waiting time distributions. The subordination approach disentangles the complexity of the problem, separating the solution of the boundary value problem (which is solved on the level of normal diffusive transport) from the influence of the waiting times. This allows for the solution of the direct problem in the whole time domain (including short times, out of reach of the initial approach) and strongly simplifies the analysis of the inverse problem. This analysis shows that the current traces do not contain sufficient information for unique restoration of the waiting time probability densities, but define a single-parametric family of functions that can be restored, all leading to the same photocurrent forms. The members of the family have power-law tails which differ only by a prefactor, but may look astonishingly different in their bodies. The same applies to the multiple trapping model, mathematically equivalent to a special limiting case of the CTRW. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
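
The subordination picture is easy to state in code: the parent walk is generated independently of the waiting times, and the operational time n(t) links the two. A minimal sketch, assuming unit steps and Pareto-type waiting times (both choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_position(t_grid, n_steps=10_000, alpha=0.7, rng=rng):
    """CTRW as a simple random walk subordinated to the operational
    time n(t) = number of steps completed by physical time t."""
    steps = rng.choice([-1, 1], size=n_steps)        # parent (normal) walk
    walk = np.concatenate(([0], np.cumsum(steps)))   # S_0 .. S_n
    # Power-law waiting times with tail ~ t^-(1+alpha), 0 < alpha < 1
    waits = (1.0 - rng.random(n_steps)) ** (-1.0 / alpha)
    arrival = np.concatenate(([0.0], np.cumsum(waits)))
    # Operational time: how many steps have been completed by each t
    n_of_t = np.searchsorted(arrival, t_grid, side="right") - 1
    return walk[n_of_t], n_of_t
```

The boundary-value part of a transport problem would act only on the parent walk; the anomalous behavior enters solely through n(t), exactly the separation exploited above.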

The inverse problem is studied in a system with mixed spectrum, i.e. the continuous part of the spectrum coincides with that of a repulsive δ-potential and the discrete part coincides with that of an attractive δ-potential.
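
For context, the two textbook one-dimensional δ-potential spectra being combined are:

```latex
% Attractive delta potential V(x) = -\lambda\,\delta(x), \lambda > 0:
% a single discrete level plus a positive-energy continuum,
E_0 = -\frac{m\lambda^2}{2\hbar^2}, \qquad E > 0\ \text{(continuous)}

% Repulsive delta potential V(x) = +\lambda\,\delta(x):
% purely continuous spectrum E = \hbar^2 k^2 / 2m > 0, with
% reflection coefficient
R(k) = \frac{1}{1 + \left(\hbar^{2} k / (m\lambda)\right)^{2}}
```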

Purpose: Entrepreneurship is shaped by a male norm, which has been widely demonstrated in qualitative studies. The authors strive to complement these methods by a quantitative approach. First, gender role stereotypes were measured in entrepreneurship. Second, the explicit notions of participants were captured when they described entrepreneurs. Therefore, this paper aims to revisit gender role stereotypes among young adults. Design/methodology/approach: To measure stereotyping, participants were asked to describe entrepreneurs in general and either women or men in general. The Schein… Findings: The images of men and entrepreneurs show a high and significant congruence (r = 0.803), mostly in those adjectives that are untypical for men and entrepreneurs. The congruence of women and entrepreneurs was low (r = 0.152) and insignificant. Contrary to the participants' beliefs, their explicit notions did…

In this thesis we present the development of new techniques for the interpretation of potential field data (gravity and magnetic data), which are the most widespread economic geophysical methods used for oil and mineral exploration. These new techniques help to address the long-standing issue in the interpretation of potential fields, namely the intrinsic non-uniqueness of the inversion of these types of data. The thesis takes the form of three papers (four including the Appendix), which have been published, or are soon to be published, in respected international journals. The purpose of the thesis is to introduce new methods based on 3D stochastic approaches for: 1) inversion of potential field data (magnetic), 2) multiscale inversion using surface and borehole data and 3) joint inversion of geophysical potential field data. We first present a stochastic inversion method based on a geostatistical approach to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. We evaluate the uncertainty on the parameter model by using geostatistical unconditional simulations. The realizations are post-conditioned by cokriging to the observation data. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. Then, we introduce an algorithm for multiscale inversion, which has the capability of inverting data on multiple supports. The method involves four main steps: i. upscaling of borehole parameters (density or susceptibility) to block parameters, ii. selection of blocks to use as constraints based on a threshold on kriging variance, iii. inversion of observation data with the selected block parameters as constraints, and iv. downscaling of inverted parameters to small prisms. Two modes of application are presented: estimation and simulation. Finally, a novel
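
The four multiscale steps can be sketched in a few lines. The toy sensitivity matrix and the use of within-block variance as a simple stand-in for the kriging variance are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

fine = rng.normal(2.5, 0.1, 40)            # borehole density log, 40 cells
fine[20:30] += rng.normal(0.0, 0.8, 10)    # a noisy, poorly known interval

# i. upscale: average 4 fine cells into each of 10 blocks
blocks = fine.reshape(10, 4)
block_mean = blocks.mean(axis=1)
block_var = blocks.var(axis=1)             # stand-in for kriging variance

# ii. keep as constraints only the blocks with low variance
fixed = block_var < 0.05
c = block_mean[fixed]

# iii. constrained damped least squares: eliminate the fixed blocks
G = rng.random((6, 10))                    # toy sensitivity matrix
d = G @ block_mean                         # synthetic observations
free = ~fixed
d_red = d - G[:, fixed] @ c                # move known blocks to the data side
A = G[:, free]
lam = 1e-2
m_free = np.linalg.solve(A.T @ A + lam * np.eye(free.sum()), A.T @ d_red)

m = np.empty(10)
m[fixed], m[free] = c, m_free

# iv. downscale: spread each block value back onto its 4 fine cells
m_fine = np.repeat(m, 4)
```

The elimination in step iii enforces the selected block values exactly, which is the sense in which the borehole information constrains the inversion.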

Analytical continuation of the solution of the Schroedinger equation for the inverse square potential, together with a modified method of variation of constants, makes it possible to construct admissible self-adjoint extensions and to completely analyze the respective scattering problem along the entire line. In this case, current density conservation and wave function continuity when passing through the singular point x=0 require that a δ-shaped induced potential be introduced into the Schroedinger equation. The relevant calculations show that the potential x⁻² can be either absolutely penetrable or absolutely impenetrable.

Classical motion in an inverse square potential is shown to be equivalent to free motion on a hyperbola. The existence of a classical splitting between the q>0 and q<0 regions of motion is demonstrated. We show that this last property may be regarded as the classical counterpart of the superselection rule occurring in the corresponding quantum problem. We solve the quantum problem in momentum space, finding that there is no way of quantizing its energy but that the eigenfunctions suffice to describe the single renormalized bound state of the system. The dynamical symmetry of the classical problem is found to be O(1,1). Both this symmetry and the symmetry of inversion through the origin are found to be broken.

A method to obtain the crystal potential from the intensities of the diffracted beams in high energy electron diffraction is proposed. It is based on a series of measurements for specific well determined orientations of the incident beam which determine the moduli of all elements of the scattering matrix. Using unitarity and the specific form of the scattering matrix (including symmetries) an overdetermined set of non-linear equations is obtained from these data. Solution of these equations yields the required phase information and allows the determination of a (projected) crystal potential by inversion which is unique up to an arbitrary shift of the origin. The reconstruction of potentials from intensities is illustrated for two realistic examples, a [111] systematic row case in ZnS and a [110] zone axis orientation in GaAs (both noncentrosymmetric crystals)

Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…

Identification of the trophic pathway that dominates a given planktonic assemblage is generally based on the distribution of biomasses among food-web compartments, or better, the flows of materials or energy among compartments. These flows are obtained by field observations and a posteriori analyses, including the linear inverse approach. In the present study, we re-analysed carbon flows obtained by inverse analysis at 32 stations in the global ocean and one large lake. Our results do not support two "classical" views of plankton ecology, i.e. that the herbivorous food web is dominated by mesozooplankton grazing on large phytoplankton, and the microbial food web is based on microzooplankton significantly consuming bacteria; our results suggest instead that phytoplankton are generally grazed by microzooplankton, of which they are the main food source. Furthermore, we identified the "phyto-microbial food web", where microzooplankton largely feed on phytoplankton, in addition to the already known "poly-microbial food web", where microzooplankton consume more or less equally various types of food. These unexpected results led to a (re)definition of the conceptual models corresponding to the four trophic pathways we found to exist in plankton, i.e. the herbivorous, multivorous, and two types of microbial food web. We illustrated the conceptual trophic pathways using carbon flows that were actually observed at representative stations. The latter can be calibrated to correspond to any field situation. Our study also provides researchers and managers with operational criteria for identifying the dominant trophic pathway in a planktonic assemblage, these criteria being based on the values of two carbon ratios that could be calculated from flow values that are relatively easy to estimate in the field.

We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
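
The comparison of regularization functionals can be illustrated on a small ill-posed test problem. The smoothing kernel and damping value below are illustrative assumptions; the paper's finite element forward model is far larger:

```python
import numpy as np

n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2)   # severely smoothing operator
x_true = np.sin(2 * np.pi * t)
b = A @ x_true                                        # noise-free test data

def tikhonov(A, b, L, lam):
    """Minimize ||A x - b||^2 + lam ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

# Two candidate regularization functionals:
L0 = np.eye(n)                          # zeroth order: penalize amplitude
L1 = np.diff(np.eye(n), axis=0)         # first order: penalize roughness
x0 = tikhonov(A, b, L0, 1e-4)
x1 = tikhonov(A, b, L1, 1e-4)
```

In practice the two solutions are compared against the a priori expectations (amplitude versus smoothness), which is the numerical experiment the abstract describes on the 2D test model.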

The classical motion of a particle in a 3D inverse square potential with negative energy, E, is shown to be geodesic, i.e., equivalent to the particle's free motion on a non-compact phase space manifold, irrespective of the sign of the coupling constant. We thus establish that all its classical orbits with E < 0 are unbounded. To analyse the corresponding quantum problem, the Schrödinger equation is solved in momentum space. No discrete energy levels exist in the unrenormalized case, and the system shows a complete "fall to the center" with an energy spectrum unbounded from below. This behavior corresponds to the non-existence of bound classical orbits. The symmetry of the problem is SO(3) × SO(2, 1), corroborating previously obtained results.
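
The fall-to-the-center behavior follows from the standard effective-potential reduction (a textbook sketch, not the paper's geodesic construction):

```latex
% Conservation of angular momentum L reduces the motion to one radial
% degree of freedom with an effective inverse-square potential:
E = \tfrac{1}{2}\, m\dot r^{2} + \frac{L^{2}}{2mr^{2}} - \frac{c}{r^{2}}
  = \tfrac{1}{2}\, m\dot r^{2} + \frac{A}{r^{2}},
\qquad A = \frac{L^{2}}{2m} - c

% For A < 0, i.e. coupling c > L^2/2m, nothing prevents r -> 0: the
% radial speed grows without bound as r decreases and the particle
% reaches the centre in finite time -- the classical counterpart of the
% quantum ``fall to the center''.
```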

Global climate models (GCMs) tend to simulate too few intense extratropical cyclones (ETCs) in the Northern Hemisphere (NH) under historic climate conditions. This bias may arise from the interactions of multiple drivers, including surface temperature gradients, latent heating in the lower troposphere, and the upper-level jet stream. Previous attempts to quantify the importance of these drivers include idealized model experiments or statistical approaches. The first method, however, cannot easily be implemented for a multi-GCM ensemble, and the second approach neither disentangles the interactions among drivers nor proves causality. An alternative method that overcomes these limitations is piecewise potential vorticity inversion (PPVI). PPVI derives the wind and geopotential height fields by inverting potential vorticity (PV) for discrete atmospheric levels. Despite being a powerful diagnostic tool, PPVI has primarily been used to study the dynamics of individual events only. This study presents the first PPVI climatology for the 5% most intense NH ETCs that occurred from 1980 to 2016. Applying PPVI to 3273 ETC tracks identified in the ERA-Interim reanalysis, we quantified the contributions from three atmospheric layers to ETC intensity. The respective layers are the surface (1000 hPa), a lower atmospheric level (700-850 hPa) and an upper atmospheric level (100-500 hPa), associated with the contributions from surface temperature gradients, latent heating, and the jet stream, respectively. Results show that contributions are dominated by the lower level (40%), followed by the upper level (20%) and the surface (17%), while the remaining 23% are associated with the background flow. Contributions from the surface and the lower level are stronger in the western ocean basins owing to the presence of warm ocean currents, while contributions from the upper level are stronger in the eastern basins. Vertical cross sections of ETC-centered composites show an
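
The core of any PV inversion is solving an elliptic equation for the balanced flow given the PV anomaly in each layer; contributions from separate layers then add by linearity, which is what makes the "piecewise" attribution possible. A minimal doubly periodic, quasi-geostrophic-style sketch (a toy stand-in for the full PPVI operator):

```python
import numpy as np

def invert_pv(q, L=2 * np.pi):
    """Invert the 2-D Poisson equation  lap(psi) = q  spectrally on a
    doubly periodic square domain (mean mode set to zero)."""
    n = q.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    q_hat = np.fft.fft2(q)
    psi_hat = np.zeros_like(q_hat)
    nz = k2 > 0
    psi_hat[nz] = -q_hat[nz] / k2[nz]
    return np.real(np.fft.ifft2(psi_hat))

# Two "layers" of PV anomaly; their flow contributions superpose linearly
x1d = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x1d, x1d, indexing="ij")
q_low = np.sin(X) * np.cos(Y)       # lower-layer PV anomaly
q_up = np.sin(2 * X)                # upper-layer PV anomaly
psi_total = invert_pv(q_low + q_up)
```

Linearity of the inversion is exactly what lets the total flow be partitioned into the surface, lower-level and upper-level contributions quantified above.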

The role of the inverse scattering method is illustrated to examine the connection between the multi-soliton solutions of the Korteweg-de Vries (KdV) equation and the discrete eigenvalues of the Schrödinger equation. The necessity of normalization of the Schrödinger wave functions, which are constructed purely from supersymmetric considerations, is pointed out.

Some aspects of the N-dimensional isotropic harmonic plus inverse quadratic potential are discussed. The hyperradial equation for the isotropic harmonic oscillator plus inverse quadratic potential is solved by transformation into the confluent hypergeometric equation to obtain the normalized hyperradial solution. Together with the hyperangular solutions (hyperspherical harmonics), these form the complete energy eigenfunctions of the N-dimensional isotropic harmonic oscillator plus inverse quadratic potential, and the energy eigenvalues are also obtained. These are dimensionally dependent. The dependence of the radial solution on the dimension or potential strength and the degeneracy of the energy levels are discussed.
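
For orientation, the three-dimensional special case is the familiar one; as a hedged sketch, the N-dimensional result follows by replacing ℓ + 1/2 with ℓ + (N − 2)/2 in the square root, which is the source of the dimensional dependence noted above.

```latex
% Radial problem for V(r) = \tfrac{1}{2} m\omega^{2} r^{2} + b/r^{2}
% (three dimensions): the 1/r^2 term only shifts the effective angular
% momentum,
\lambda(\lambda+1) = \ell(\ell+1) + \frac{2mb}{\hbar^{2}}
\;\Longrightarrow\;
\lambda = -\tfrac{1}{2}
        + \sqrt{\left(\ell + \tfrac{1}{2}\right)^{2} + \frac{2mb}{\hbar^{2}}}

% so the oscillator spectrum is reproduced with \ell replaced by \lambda:
E_{n\ell} = \hbar\omega\left(2n + \lambda + \tfrac{3}{2}\right),
\qquad n = 0, 1, 2, \dots
```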

We consider the problem of recovering a smooth, compactly supported potential on ℝ³ from its backscattering data. We show that if two such potentials have the same backscattering data and the difference of the two potentials has controlled angular derivatives, then the two potentials are identical. In particular, if two potentials differ by a finite linear combination of spherical harmonics with radial coefficients and have the same backscattering data, then the two potentials are identical.

During the past few years a considerable interest has been focused on the inverse boundary value problem for the Schroedinger operator with a scalar (electric) potential. The popularity gained by this subject seems to be due to its connection with the inverse scattering problem at fixed energy, the inverse conductivity problem and other important inverse problems. This paper deals with an inverse boundary value problem for the Schroedinger operator with vector (electric and magnetic) potentials. As in the case of the scalar potential, results of this study would have immediate consequences in the inverse scattering problem for magnetic field at fixed energy. On the other hand, inverse boundary value problems for elliptic operators are of independent interest. The study is partly devoted to the understanding of the inverse boundary value problem for a class of general elliptic operator of second order. Note that a self-adjoint elliptic operator of second order with Δ as its principal symbol can always be written as a Schroedinger operator with vector potentials

The atomic interactions of PuC with B1 structure were described by Chen–Möbius lattice inversion combined with first-principles calculations. In order to obtain the inversion potential parameters of PuC, three different structures including two virtual crystals were built, and the Morse function plus a modified term was adopted to fit the pair-potential curves. The reliability of the inversion potential was tested by checking the stability of the transition of PuC from the disordered to the ordered state and by comparing the calculated and experimental physical and thermal properties of PuC. All the results show that the inversion potential gives a stable and accurate description of the atomic interactions in PuC, and the physical and thermal properties of PuC are well reproduced by the potential

The modular neural network (MNN) inversion method has been used for inversion of self-potential (SP) data anomalies caused by 2D inclined sheets of infinite horizontal extent. The analysed parameters are the depth (h), the half-width (a), the inclination (α), the zero distance from the origin (x₀) and the polarization amplitude (k). The MNN inversion has been first tested on a synthetic example and then applied to two field examples from the Surda area of Rakha mines, India, and Kalava fault zone, India. The effect of random noise has been studied, and the technique showed satisfactory results. The inversion results show good agreement with the measured field data compared with other inversion techniques in use
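
A forward model for the inclined-sheet SP anomaly, using one widely quoted closed form for 2-D sheets of infinite strike extent; the exact parametrization below is an assumption for illustration, not necessarily the one used to train the MNN:

```python
import numpy as np

def sp_sheet(x, k, h, a, alpha, x0=0.0):
    """SP anomaly of a 2-D inclined sheet (one common closed form).

    k     : polarization amplitude
    h     : depth to the sheet centre
    a     : half-width
    alpha : inclination in radians
    x0    : horizontal position of the centre
    """
    u = x - x0
    num = (u - a * np.cos(alpha)) ** 2 + (h - a * np.sin(alpha)) ** 2
    den = (u + a * np.cos(alpha)) ** 2 + (h + a * np.sin(alpha)) ** 2
    return k * np.log(num / den)
```

In an inversion test, synthetic profiles generated by such a forward model (with the five parameters h, a, α, x₀, k) provide the training and validation data for the network.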

By using the Nikiforov-Uvarov (NU) method, the Schrödinger equation has been solved for the interaction of inversely quadratic Hellmann (IQHP) and inversely quadratic potential (IQP) for any angular momentum quantum number, l. The energy eigenvalues and their corresponding eigenfunctions have been obtained in terms of Laguerre polynomials. Special cases of the sum of these potentials have been considered and their energy eigenvalues also obtained

Inversion of large-scale potential-field anomalies, aimed at determining density or magnetization, is usually made in the Fourier domain. The commonly adopted geometry is based on a layer of constant thickness, characterized by a bottom surface at a fixed distance from the top surface…

In this paper, we clarify the fundamental solutions for Schrödinger operators given as (Formula presented.), where the potential V is a general inverse square potential in (Formula presented.) with (Formula presented.). In particular, letting (Formula presented.),(Formula presented.) where (Formula presented.), we discuss the existence and nonexistence of positive fundamental solutions for Hardy operator (Formula presented.), which depend on the parameter t.

Harmonic pumping tests consist in stimulating an aquifer by means of hydraulic stimulations at some discrete frequencies. The inverse problem consisting in retrieving the hydraulic properties is inherently ill posed and is usually underdetermined when considering the number of well head data available in field conditions. To better constrain this inverse problem, we add self-potential data recorded at the ground surface to the head data. The self-potential method is a passive geophysical method. Its signals are generated by the groundwater flow through an electrokinetic coupling. We showed using a 3-D saturated unconfined synthetic aquifer that the self-potential method significantly improves the results of the harmonic hydraulic tomography. The hydroelectric forward problem is obtained by solving first the Richards equation, describing the groundwater flow, and then using the result in an electrical Poisson equation describing the self-potential problem. The joint inversion problem is solved using a reduction model based on the principal component geostatistical approach. In this method, the large prior covariance matrix is truncated and replaced by its low-rank approximation, thus allowing for notable computational time and storage savings. Three test cases are studied to assess the validity of our approach. In the first test, we show that when the number of harmonic stimulations is low, combining the harmonic hydraulic and self-potential data does not improve the inversion results. In the second test, where enough harmonic stimulations are performed, a significant improvement of the hydraulic parameters is observed. In the last synthetic test, we show that the electrical conductivity field required to invert the self-potential data can be determined with enough accuracy using an electrical resistivity tomography survey with the same electrode configuration as used for the self-potential investigation.
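
The low-rank covariance truncation at the heart of the principal component geostatistical approach can be sketched directly; grid size, correlation length, and ranks below are illustrative assumptions:

```python
import numpy as np

n = 200
s = np.linspace(0.0, 1.0, n)
# Exponential prior covariance with correlation length 0.2
Q = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.2)

evals, evecs = np.linalg.eigh(Q)          # eigenvalues in ascending order

def low_rank(r):
    """Rank-r approximation keeping the r largest eigenpairs."""
    lam = evals[-r:]
    U = evecs[:, -r:]
    return (U * lam) @ U.T                 # U diag(lam) U^T

def rel_err(r):
    """Relative Frobenius error of the rank-r approximation."""
    return np.linalg.norm(Q - low_rank(r)) / np.linalg.norm(Q)
```

Storing the r leading eigenpairs costs r(n + 1) numbers instead of n² for the full matrix, which is the source of the time and storage savings mentioned above.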

The IP S-matrix-to-potential inversion procedure is applied to phase shifts for selected partial waves over a range of energies below the inelastic threshold for α-¹²C scattering. The phase shifts were determined by Plaga et al. Potentials found by Buck and Rubio to fit the low-energy alpha-cluster resonances need only an increased attraction in the surface to accurately reproduce the phase-shift behaviour. Substantial differences between the potentials for odd and even partial waves are necessary. The surface tail of the potential is postulated to be a threshold effect.

Based on the Chen–Möbius lattice inversion and a series of pseudopotential total-energy curves, a different method is presented to derive the ab initio interionic pair potentials for B1-type ionic crystals. Compared with the experimental data, the static properties of B1- and B2-type NaCl are well reproduced by the interionic potentials. Moreover, the phase stability of B1-NaCl has been described by the energy minimizations from the globally deformed and disturbed states. The molecular-dynamics simulations for molten NaCl indicate that the calculated mean-square displacements, radial distribution function, and diffusion coefficients are in good agreement with the experimental results. It can be concluded that the inversion pair potentials are valid over a wide range of interionic separations for describing the structural properties of B1-type ionic crystals
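
The inversion idea is most transparent for a 1-D chain, where the cohesive-energy sum E(x) = Σₙ φ(nx) is unwound exactly by the Möbius function: φ(x) = Σₙ μ(n) E(nx). The exponential pair potential below is a stand-in for the Morse-type fits used for real B1 crystals:

```python
import numpy as np

def mobius(n):
    """Möbius function mu(n) via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

N = 100                            # truncation of both lattice sums
phi_true = lambda r: np.exp(-r)    # toy pair potential

def energy(x):
    """Cohesive-energy sum of the 1-D chain at spacing x."""
    return sum(phi_true(n * x) for n in range(1, N + 1))

def phi_inverted(x):
    """Recover the pair potential from the energy curve by inversion."""
    return sum(mobius(n) * energy(n * x) for n in range(1, N + 1))
```

For a rapidly decaying potential the truncated inversion reproduces φ essentially exactly, which mirrors how the total-energy curves above are converted into pair potentials.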

Mapping the redox potential of shallow aquifers impacted by hydrocarbon contaminant plumes is important for the characterization and remediation of such contaminated sites. The redox potential of groundwater is indicative of the biodegradation of hydrocarbons and is important in delineating the shapes of contaminant plumes. The self-potential method was used to reconstruct the redox potential of groundwater associated with an organic-rich contaminant plume in northern France. The self-potential technique is a passive technique consisting in recording the electrical potential distribution at the surface of the Earth. A self-potential map is essentially the sum of two contributions, one associated with groundwater flow referred to as the electrokinetic component, and one associated with redox potential anomalies referred to as the electroredox component (thermoelectric and diffusion potentials are generally negligible). A groundwater flow model was first used to remove the electrokinetic component from the observed self-potential data. Then, a residual self-potential map was obtained. The source current density generating the residual self-potential signals is assumed to be associated with the position of the water table, an interface characterized by a change in both the electrical conductivity and the redox potential. The source current density was obtained through an inverse problem by minimizing a cost function including a data misfit contribution and a regularizer. This inversion algorithm allows the determination of the vertical and horizontal components of the source current density taking into account the electrical conductivity distribution of the saturated and non-saturated zones obtained independently by electrical resistivity tomography. The redox potential distribution was finally determined from the inverted residual source current density. A redox map was successfully built and the estimated redox potential values correlated well with in

Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO has seldom been used to invert gravity and magnetic data. On the basis of a continuous, multi-dimensional objective function for optimization inversion of potential field data, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. This allows the search results to be analyzed in real time and promotes the rate of convergence and the precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
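The node-partition idea can be caricatured in a few lines. The sketch below is an assumption-laden stand-in, not the NP-ACO code of the abstract: the two-variable quadratic misfit, the Gaussian-style pheromone deposit exp(-f/f_best), and all parameter values are illustrative choices. It discretizes each continuous model variable into nodes, lets ants pick nodes with pheromone-proportional probabilities, and maps the objective value to the pheromone deposit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in misfit with known minimum at (3, 7); a real use
# would evaluate a gravity/magnetic forward model here.
def misfit(m):
    return np.sum((m - np.array([3.0, 7.0])) ** 2)

n_vars, n_nodes, n_ants, n_iter = 2, 50, 20, 100
nodes = np.linspace(0.0, 10.0, n_nodes)   # node partition of each variable
tau = np.ones((n_vars, n_nodes))          # pheromone trails
rho = 0.1                                 # evaporation rate

best_m, best_f = None, np.inf
for _ in range(n_iter):
    for _ in range(n_ants):
        # each ant picks one node per variable, probability ∝ pheromone
        idx = [rng.choice(n_nodes, p=tau[v] / tau[v].sum()) for v in range(n_vars)]
        m = nodes[idx]
        f = misfit(m)
        if f < best_f:
            best_m, best_f = m, f
        # Gaussian-style mapping: smaller misfit deposits more pheromone
        deposit = np.exp(-f / (best_f + 1e-12))
        for v, i in enumerate(idx):
            tau[v, i] += deposit
    tau *= (1.0 - rho)                    # evaporation after each iteration

print(best_m, best_f)
```

With the pheromone concentrating on well-fitting nodes, the search typically locks onto the grid points nearest the true model within a few dozen iterations.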

The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography requires neither a priori information nor large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially those whose distribution is complex and irregular. We therefore attempt to use a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their own directions, and this characteristic is also present in their probability tomography results. We therefore use some rules to combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y and ∂ΔΤ/∂z into a new result from which a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples with and without a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges with higher resolution. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M

The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of the groundwater. We can therefore apply the self-potential method to recover non-intrusively some information regarding the groundwater flow. We first solve the forward problem starting with the solution of the groundwater flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduced an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and weighting depth constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
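The shape of the inversion step described above can be sketched in a few lines. The sketch below is not SP2DINV: the dense linear operator G stands in for the finite-element forward solve, and the depth-weighting exponent, regularization weight, and geometry are arbitrary assumptions; it only illustrates a Tikhonov solve with smoothness and depth weighting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward operator standing in for the SP Poisson solve
n_data, n_model = 30, 50
depths = np.linspace(1.0, 5.0, n_model)
offsets = np.linspace(0.0, 4.0, n_data)
G = 1.0 / (depths[None, :] ** 2 + offsets[:, None] ** 2)

m_true = np.zeros(n_model)
m_true[20:30] = 1.0                          # compact source at depth
d = G @ m_true + 0.01 * rng.standard_normal(n_data)

# Regularizer: first-difference smoothness with a crude depth weighting
L = np.diff(np.eye(n_model), axis=0)         # (n_model-1, n_model)
W = np.diag(depths[:-1] ** 1.5)              # depth weighting (assumed exponent)
lam = 1e-2

# Tikhonov normal equations: (GᵀG + λ LᵀWᵀW L) m = Gᵀ d
A = G.T @ G + lam * L.T @ W.T @ W @ L
m_est = np.linalg.solve(A, G.T @ d)

print(np.linalg.norm(G @ m_est - d))         # data misfit near the noise level
```

The depth weighting counteracts the natural decay of sensitivity with depth, which otherwise concentrates the recovered sources near the surface.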

We consider the Schrödinger equation with an additional quadratic potential on the entire axis and use the transformation operator method to study the direct and inverse problems of the scattering theory. We obtain the main integral equations of the inverse problem and prove that the basic equations are uniquely solvable.

Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it requires neither computing gradients of models nor "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As in other optimization approaches, the search efficiency of a genetic algorithm is vital for finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect the search process during evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly when searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes of larger scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through the mutation operator and improves genetic algorithms for resolving complicated geophysical inverse problems. Another significant
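A minimal sketch of the hybrid-encoding idea follows: mutation acts in the decimal code while multi-point crossover acts in the binary code. The one-unknown misfit, the 16-bit gene width, and all GA parameters are illustrative assumptions, not the HEGA of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

BITS = 16
SCALE = (1 << BITS) - 1

def encode(x, lo, hi):
    """Map a decimal gene in [lo, hi] onto a 16-bit integer (binary code)."""
    return int(round((x - lo) / (hi - lo) * SCALE))

def decode(b, lo, hi):
    return lo + (hi - lo) * b / SCALE

def crossover(x, y, lo, hi, n_points=2):
    """Multi-point crossover performed in the binary code."""
    bx, by = encode(x, lo, hi), encode(y, lo, hi)
    cuts = set(rng.choice(np.arange(1, BITS), size=n_points, replace=False).tolist())
    child, take_x = 0, True
    for i in range(BITS):
        if i in cuts:
            take_x = not take_x          # switch parent at each cut point
        child |= (bx if take_x else by) & (1 << i)
    return decode(child, lo, hi)

def mutate(x, lo, hi, sigma=0.05):
    """Gaussian mutation performed in the decimal code (wider reach than bit flips)."""
    return float(np.clip(x + sigma * (hi - lo) * rng.standard_normal(), lo, hi))

# Hypothetical one-unknown objective standing in for a geophysical misfit
def misfit(x):
    return (x - 2.5) ** 2

lo, hi = 0.0, 10.0
pop = list(rng.uniform(lo, hi, 20))
for _ in range(60):
    pop.sort(key=misfit)
    parents = pop[:10]                   # elitist selection keeps the best genes
    children = [mutate(crossover(float(rng.choice(parents)),
                                 float(rng.choice(parents)), lo, hi), lo, hi)
                for _ in range(10)]
    pop = parents + children

best = min(pop, key=misfit)
print(best)
```

The decimal mutation explores large neighbourhoods of a parent, while the binary crossover recombines bit segments; elitism guarantees that the best individual is never lost between generations.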

We consider the Hamiltonian H = (p - A(x))²/(2m) + V(x) of a quantum particle in a magnetic field B = rot A and a potential V in space dimensions ν ≥ 2. If V is of short range, then the high-velocity limit of the scattering operator uniquely determines the magnetic field B and the potential V. If, in addition, long-range potentials V_l are present, some knowledge of (the far-out tail of) V_l is needed to define a modified Dollard wave operator and a scattering operator S_D. Again, its high-velocity limit uniquely determines B and V = V_s + V_l. Moreover, we give explicit error bounds which are inversely proportional to the velocity. Copyright 1997 American Institute of Physics.

The relations between the many-particle problem with an inverse square potential on the line and meromorphic eigenfunctions of the Schrödinger operator are presented. This gives a new type of Bäcklund transformations for the many-particle problem.

Applications of the self-potential method in the fields of hydrogeology and environmental sciences have had significant developments during the last two decades, with a strong use in the identification of groundwater flows. Although only a few authors deal with the solution of the forward problem, especially in the geophysics literature, different inversion procedures are currently being developed, but in most cases they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem based on the finite element method, using St. Venant's principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and head potentials, velocity fields, and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilizing operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov random fields (MRF) was constructed. For all implemented methods, the results of the direct and inverse models were contrasted in two ways: 1) shape and distribution of the vector field, and 2) histogram of magnitudes. Finally, it was concluded that inversion procedures are improved when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than confined ones. McMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.

The Cox-Thompson inverse scattering method at fixed energy has been generalized to treat complex phase shifts derived from experiments. New formulae relating phase shifts to shifted angular momenta are derived. The method is applied to phase shifts of known potentials in order to test its quality and stability and, further, it is used to invert experimental n-α and n-¹²C phase shifts.

We study a tachyon cosmological model based on the dynamics of a 3-brane in the bulk of the second Randall-Sundrum model extended to more general warp functions. A well-known prototype of such a generalization is the bulk with a self-interacting scalar field. As a consequence of a generalized bulk geometry, the cosmology on the observer brane is modified by a scale-dependent four-dimensional gravitational constant. In particular, we study a power-law warp factor which generates an inverse power-law potential V ∝ φ^(-n) of the tachyon field φ. We find a critical power n_cr that divides two subclasses with distinct asymptotic behaviors: a dust universe for n > n_cr and a quasi-de Sitter universe for 0 < n ≤ n_cr.

In potential-field inversion problems, it can be difficult to obtain reliable information about the source distribution with respect to depth. Moreover, the spatial resolution of the reconstructions decreases with depth; in fact, the more ill-posed the problem, and the more noisy the data, the less reliable the depth information. Based on earlier work using the singular value decomposition, we introduce a tool, ApproxDRP, which uses approximations of the singular vectors obtained by the iterative Lanczos bidiagonalization algorithm, making it well suited for large-scale problems. This tool allows ... successfully show the limitations of depth resolution resulting from noise in the data. This allows a reliable analysis of the retrievable depth information and effectively guides the user in choosing the optimal number of iterations for a given problem.
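ApproxDRP itself is not reproduced here, but the Golub-Kahan (Lanczos) bidiagonalization that supplies the approximate singular vectors can be sketched as follows. The smoothing kernel A and the start vector are arbitrary stand-ins for a potential-field matrix and its data vector; no reorthogonalization is performed, as in the basic iteration:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization of A, started
    from the data vector b, without reorthogonalization."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    v_prev = np.zeros(n)
    for j in range(k):
        r = A.T @ U[:, j] - beta[j] * v_prev
        alpha[j] = np.linalg.norm(r)
        V[:, j] = r / alpha[j]
        p = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(p)
        U[:, j + 1] = p / beta[j + 1]
        v_prev = V[:, j]
    # lower-bidiagonal projection B satisfying A V_k = U_{k+1} B
    B = np.vstack([np.diag(alpha), np.zeros((1, k))])
    for j in range(k):
        B[j + 1, j] = beta[j + 1]
    # SVD of the small B yields approximate singular triplets of A
    Ub, s, Vbt = np.linalg.svd(B, full_matrices=False)
    return U @ Ub, s, V @ Vbt.T

rng = np.random.default_rng(3)
n = 60
x = np.linspace(0.0, 1.0, n)
# smooth, ill-conditioned kernel standing in for a potential-field operator
A = np.exp(-(x[None, :] - x[:, None]) ** 2 / 0.01)
b = A @ rng.standard_normal(n)

Uk, s, Vk = golub_kahan(A, b, 12)
print(s[:3])
print(np.linalg.svd(A, compute_uv=False)[:3])
```

Because the singular values of such smooth kernels decay rapidly, a handful of Lanczos steps already reproduces the leading singular values and vectors, which is what makes a large-scale approximate depth-resolution analysis feasible.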

We present a conditionally integrable potential, belonging to the bi-confluent Heun class, for which the Schrödinger equation is solved in terms of the confluent hypergeometric functions. The potential involves an attractive inverse square root term x^(-1/2) with arbitrary strength and a repulsive centrifugal barrier core x^(-2) with the strength fixed to a constant. This is a potential well defined on the half-axis. Each of the fundamental solutions composing the general solution of the Schrödinger equation is written as an irreducible linear combination, with non-constant coefficients, of two confluent hypergeometric functions. We present the explicit solution in terms of the non-integer order Hermite functions of scaled and shifted argument and discuss the bound states supported by the potential. We derive the exact equation for the energy spectrum and approximate that by a highly accurate transcendental equation involving trigonometric functions. Finally, we construct an accurate approximation for the bound-state energy levels.
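A quick numerical cross-check of such bound states (not the paper's analytic construction) is to diagonalize a finite-difference Hamiltonian for V(x) = -a x^(-1/2) + c x^(-2) on a truncated half-axis; the strengths a and c and the grid below are illustrative assumptions, with units hbar = 2m = 1:

```python
import numpy as np

a, c = 1.0, 0.75              # illustrative strengths (c is fixed in the paper)
N, L = 1500, 100.0
x = np.linspace(L / N, L, N)  # grid avoiding the x = 0 singularity
h = x[1] - x[0]
V = -a / np.sqrt(x) + c / x ** 2

# Kinetic term -d²/dx² as a tridiagonal matrix, Dirichlet walls at 0 and L
H = (np.diag(2.0 / h ** 2 + V)
     + np.diag(-np.ones(N - 1) / h ** 2, 1)
     + np.diag(-np.ones(N - 1) / h ** 2, -1))

E = np.linalg.eigvalsh(H)
print(E[:3])                  # lowest levels; E < 0 indicates bound states
```

The attractive x^(-1/2) tail decays slowly, so the well supports bound states regardless of the repulsive core; the lowest finite-difference eigenvalues come out negative and above the potential minimum, as the variational principle requires.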

During the past four years, the number of earthquakes with magnitudes greater than three has substantially increased in the southern section of the Western Canada Sedimentary Basin (WCSB). While some of these events are likely associated with tectonic forces, especially along the foothills of the Canadian Rockies, a significant fraction occurred in previously quiescent regions and has been linked to wastewater disposal or hydraulic fracturing. A proper assessment of the origin and source properties of these 'induced earthquakes' requires careful analysis and modeling of regional broadband data, which have steadily improved during the past 8 years due to the recent establishment of regional broadband seismic networks such as CRANE, RAVEN and TD. Several earthquakes, especially those close to fracking activities (e.g., near the town of Fox Creek, Alberta), are analyzed. Our preliminary full moment tensor inversion results show maximum horizontal compressional orientations (P-axes) along the northeast-southwest orientation, which agree with the regional stress directions from borehole breakout data and the P-axes of historical events. The decomposition of those moment tensors shows evidence of strike-slip mechanisms with near-vertical fault plane solutions, which are comparable to the focal mechanisms of injection-induced earthquakes in Oklahoma. Minimal isotropic components have been observed, while a modest percentage of compensated-linear-vector-dipole (CLVD) components, which have been linked to fluid migration, may be required to match the waveforms. To further evaluate the non-double-couple components, we compare the outcomes of full, deviatoric and pure double-couple (DC) inversions using multiple frequency ranges and phases. Improved location and depth information from a novel grid search greatly assists the identification and classification of earthquakes in potential connection with fluid injection or extraction. Overall, a systematic comparison of the source attributes of

We pursue the analysis of the Schrödinger operator on the unit interval in inverse spectral theory initiated in the work of Amour and Raoux ["Inverse spectral results for Schrödinger operators on the unit interval with potentials in Lp spaces," Inverse Probl. 23, 2367 (2007)]. While the potentials in the work of Amour and Raoux belong to L1 with their difference in Lp (1 ≤ p ≤ +∞), here we consider potentials in Wk,1 spaces having their difference in Wk,p, where 1 ≤ p ≤ +∞, k ∈ {0,1,2}. It is proved that two potentials in Wk,1([0,1]) being equal on [a,1] are also equal on [0,1] if their difference belongs to Wk,p([0,a]) and if the number of their common eigenvalues is sufficiently high. Naturally, this number decreases as the parameter a decreases and as the parameters k and p increase.

In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects, and for the application of the preconditioner. Extending the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient but is independent of the resolution of the PDE discretization, and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
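The cross-gradient component mentioned above couples the two property models through t = ∇m1 × ∇m2, which vanishes wherever the gradients are parallel, i.e. where the models are structurally consistent. A minimal sketch follows; the Gaussian models and grid are illustrative, and the paper's FEM/PDE machinery is not reproduced:

```python
import numpy as np

nx, nz = 64, 64
x = np.linspace(0.0, 1.0, nx)
z = np.linspace(0.0, 1.0, nz)
X, Z = np.meshgrid(x, z, indexing="ij")

# Two structurally identical property models (e.g. density-like and
# susceptibility-like); m2 is a linear function of m1, so their gradients
# are parallel everywhere.
m1 = np.exp(-((X - 0.5) ** 2 + (Z - 0.5) ** 2) / 0.02)
m2 = 2.0 * m1 + 1.0

g1x, g1z = np.gradient(m1, x, z)
g2x, g2z = np.gradient(m2, x, z)
t = g1x * g2z - g1z * g2x      # 2D cross product (scalar per cell)

print(np.abs(t).max())         # ~0 for structurally consistent models
```

In a joint inversion, penalizing the norm of t drives the two recovered models toward common structural boundaries without forcing any particular petrophysical relationship between their values.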

We present a conditionally exactly solvable singular potential for the one-dimensional Schrödinger equation which involves the exactly solvable inverse square root potential. Each of the two fundamental solutions that compose the general solution of the problem is given by a linear combination, with non-constant coefficients, of two confluent hypergeometric functions. Discussing the bound-state wave functions vanishing both at infinity and at the origin, we derive the exact equation for the energy spectrum, which is written using two Hermite functions of non-integer order. In appropriate auxiliary variables this equation becomes a purely mathematical equation that no longer refers to the specific physical context discussed. In the two-dimensional space of these auxiliary variables, the roots of this equation draw a countably infinite set of open curves with hyperbolic asymptotes. We present an analytic description of these curves by a transcendental algebraic equation for the involved variables. The intersections of the curves thus constructed with a certain cubic curve provide a highly accurate description of the energy spectrum. - Highlights: • We present a conditionally exactly solvable singular potential for the 1D Schrödinger equation. • Each of the two fundamental solutions is given by a linear combination with non-constant coefficients of two confluent hypergeometric functions. • The exact equation for the energy spectrum is written using two Hermite functions that do not reduce to polynomials.

In this article, attention is paid to the development of students' scientific and cognitive potential when teaching inverse problems for differential equations. Students come to realize that mathematical models of inverse problems for differential equations find application in economics, industry, ecology, sociology, biology, chemistry, mathematics and physics, and in studies of processes and phenomena occurring in water, soil, air and space. The reader's attention is drawn to the fact that training in inverse problems for differential equations develops students' scientific outlook; their logical, algorithmic and informational thinking; their creative activity; and their independence and ingenuity. Students acquire the skills to apply knowledge from many physical and mathematical disciplines, to analyze the obtained solution of an inverse problem, and to formulate logical conclusions of an applied character. Solving inverse problems for differential equations, students acquire new knowledge in applied and computational mathematics, informatics, the natural sciences and other fields.

The Schrödinger solutions for a three-dimensional central potential system whose Hamiltonian is composed of a time-dependent harmonic plus an inverse harmonic potential are investigated. Because of the time dependence of the parameters, we cannot obtain the Schrödinger solutions relying only on the conventional method of separation of variables. To overcome this difficulty, special mathematical methods, namely the invariant operator method, the unitary transformation method, and the Nikiforov-Uvarov method, are used to derive solutions of the Schrödinger equation for the system. In particular, the Nikiforov-Uvarov method with an appropriate coordinate transformation enabled us to reduce the eigenvalue equation of the invariant operator, which is a second-order differential equation, to a hypergeometric-type equation that is convenient to treat. Through this procedure, we derived exact Schrödinger solutions (wave functions) of the system. It is confirmed that the wave functions are represented in terms of time-dependent radial functions, spherical harmonics, and general time-varying global phases. Such wave functions are useful for studying various quantum properties of the system. As an example, the uncertainty relations for position and momentum are derived by taking advantage of the wave functions.

Multiparameter full waveform inversion (FWI) applied to an elastic orthorhombic model description of the subsurface requires in theory a nine-parameter representation of each pixel of the model. Even with optimal acquisition on the Earth's surface that includes large offsets, full azimuth, and multicomponent sensors, the potential for trade-off between the elastic orthorhombic parameters is large. The first step to understanding such trade-off is analysing the scattering potential of each parameter and, specifically, its scattering radiation patterns. We investigate such radiation patterns for diffraction and for scattering from a horizontal reflector considering a background isotropic model. The radiation patterns show considerable potential for trade-off between the parameters and potentially limited resolution in their recovery. The radiation patterns of C11, C22, and C33 are well separated, so we expect to recover these parameters with limited trade-offs. However, the resolution of their recovery, represented by the recovered range of model wavenumbers, varies between these parameters. We can only invert for the short-wavelength components (reflection) of C33, while we can mainly invert for the long-wavelength components (transmission) of the elastic coefficients C11 and C22 if we have large enough offsets. The elastic coefficients C13, C23, and C12 suffer from strong trade-offs with C55, C44, and C66, respectively. The trade-offs between C13 and C55, as well as C23 and C44, can be partially mitigated if we acquire P-SV and SV-SV waves. However, to reduce the trade-offs between C12 and C66, we require credible SH-SH waves. The analytical radiation patterns of the elastic constants are supported by numerical gradients of these parameters.

This article presents methodological aspects of teaching inverse problems for differential equations to university students in physics, mathematics and natural-science programmes. Attention is paid to the expediency of developing in students a scientific outlook that allows them to acquire fundamental knowledge of the methods and methodology for investigating mathematical models of inverse problems, to master the principles of organizing theoretical and practical research on inverse problems, and to form an idea of inverse problems as universal tools for understanding the surrounding world. It is noted that developing a scientific outlook through training in inverse problems for differential equations leads students to a deeper understanding of the integrity of the world and supports the assimilation of applied mathematics and of disciplines from other domains; in the course of such training, elements of humanitarization also take root. Students acquire the skills to analyze the obtained solutions of inverse problems for differential equations, to formulate logical conclusions about the ecological state of the air, soil or water environment, and to apply the results of solving inverse problems for differential equations in the humanitarian analysis of applied research.

Singular potentials (the inverse-square potential, for example) arise in many situations and their quantum treatment leads to well-known ambiguities in choosing boundary conditions for the wave-function at the position of the potential’s singularity. These ambiguities are usually resolved by developing a self-adjoint extension of the original problem; a non-unique procedure that leaves undetermined which extension should apply in specific physical systems. We take the guesswork out of this picture by using techniques of effective field theory to derive the required boundary conditions at the origin in terms of the effective point-particle action describing the physics of the source. In this picture ambiguities in boundary conditions boil down to the allowed choices for the source action, but casting them in terms of an action provides a physical criterion for their determination. The resulting extension is self-adjoint if the source action is real (and involves no new degrees of freedom), and not otherwise (as can also happen for reasonable systems). We show how this effective-field picture provides a simple framework for understanding well-known renormalization effects that arise in these systems, including how renormalization-group techniques can resum non-perturbative interactions that often arise, particularly for non-relativistic applications. In particular we argue why the low-energy effective theory tends to produce a universal RG flow of this type and describe how this can lead to the phenomenon of reaction catalysis, in which physical quantities (like scattering cross sections) can sometimes be surprisingly large compared to the underlying scales of the source in question. We comment in passing on the possible relevance of these observations to the phenomenon of the catalysis of baryon-number violation by scattering from magnetic monopoles.

Possibilities of using the method of the inverse scattering problem for describing simultaneously the two-nucleon and the low-energy three-nucleon data in the S-interaction approximation are examined. 20 refs., 3 figs., 1 tab

A theoretical discussion and experimental assays of irreversible phenomena applied to electrokinetics and reverse osmosis are presented. The experimental assays were made on simple equipment to evidence the occurrence of coupled irreversible phenomena between electric current flow and global mass flow. The coupling of these two phenomena allowed us to conclude that the operating costs of reverse osmosis equipment could be reduced, owing to an increase of the saline solution flow of between 12% and 20%.

In potential-field inversion, careful management of singular value decomposition components is crucial for obtaining information about the source distribution with respect to depth. In principle, the depth-resolution plot provides a convenient visual tool for this analysis, but its computational ... on memory and computing time. We used the ApproxDRP to study retrievable depth resolution in inversion of the gravity field of the Neapolitan Volcanic Area. Our main contribution is the combined use of the Lanczos bidiagonalization algorithm, established in the scientific computing community, and the depth...

A quantal inversion of the ¹⁶O-¹⁶O scattering data at 350 MeV yields an optical potential which gives an excellent fit (χ²/F = 1.65) to the measured cross-section. The real part of this potential is shallower than any potential used by others for distances between 2 and 6 fm. The imaginary potential is also relatively weak. This potential does not favour a rainbow interpretation of the structure observed in the data at large scattering angles. 12 refs., 1 tab., 4 figs

Pumping tests are usually employed to predict the hydraulic conductivity field from the inversion of head measurements. Nevertheless, the inverse problem is strongly underdetermined, and reliable imaging requires a considerable number of wells. We propose to add more information to the inversion of the heads by including (non-intrusive) streaming potential (SP) data. The SP corresponds to perturbations in the local electrical field caused directly by the flow of the groundwater. These SP data are obtained with a set of non-polarizing electrodes installed at the ground surface. We developed a geostatistical method for the estimation of the hydraulic conductivity field from measurements of hydraulic heads and SP during pumping and injection experiments. We use the adjoint-state method, and a recent petrophysical formulation of the streaming potential problem, in which the streaming coupling coefficient is derived from the hydraulic conductivity, allowed us to reduce the number of unknown parameters. The geostatistical inverse framework is applied to three synthetic case studies with different numbers of wells and electrodes used to measure the hydraulic heads and the streaming potentials. To evaluate the benefit of incorporating the streaming potentials into the hydraulic data, we compared cases in which the data are coupled or not to map the hydraulic conductivity. The results of the inversion revealed that a dense distribution of electrodes can be used to infer the heterogeneities in the hydraulic conductivity field. Incorporating the streaming potential information into the hydraulic head data improves the estimate of the hydraulic conductivity field, especially when the number of piezometers is limited.

Electrokinetic phenomena play an important role in the electrical characterization of surfaces. For planar or porous substrates, streaming potential and/or streaming current measurements can be used to determine the zeta potential of the substrates in contact with aqueous electrolytes. In this work, we perform electrical impedance spectroscopy measurements to infer the electrical resistance in a microchannel under the same conditions as in a streaming potential experiment. Novel correlations are derived to relate the streaming current and streaming potential to the Reynolds number of the channel flow. Our results not only quantify the influence of surface conductivity, and here especially the contribution of the stagnant layer, but also reveal that the channel resistance, and therefore the zeta potential, are influenced by the flow in the case of low ionic strengths. We conclude that convection can have a significant impact on the electrical double layer configuration, which is reflected by changes in the surface conductivity.

A distorted-wave version of the renormalization group is applied to scattering by an inverse-square potential and to three-body systems. In attractive three-body systems, the short-distance wavefunction satisfies a Schroedinger equation with an attractive inverse-square potential, as shown by Efimov. The resulting oscillatory behaviour controls the renormalization of the three-body interactions, with the renormalization-group flow tending to a limit cycle as the cut-off is lowered. The approach used here leads to single-valued potentials with discontinuities as the bound states are cut off. The perturbations around the cycle start with a marginal term whose effect is simply to change the phase of the short-distance oscillations, or the self-adjoint extension of the singular Hamiltonian. The full power counting in terms of the energy and two-body scattering length is constructed for short-range three-body forces.
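The strength of the Efimov inverse-square attraction for three identical bosons, and hence the period of the limit cycle, follows from a single transcendental equation. A short numerical check (standard result, not taken from this abstract's calculation):

```python
import math

# For three identical bosons the Efimov parameter s0 solves
#   s0*cosh(pi*s0/2) = (8/sqrt(3))*sinh(pi*s0/6),
# and the discrete-scaling (limit-cycle) factor is exp(pi/s0) ~ 22.7.
def f(s):
    return s * math.cosh(math.pi * s / 2) - (8 / math.sqrt(3)) * math.sinh(math.pi * s / 6)

lo, hi = 0.5, 1.5              # f changes sign on this bracket
for _ in range(60):            # plain bisection to machine precision
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
s0 = 0.5 * (lo + hi)
scale = math.exp(math.pi / s0)   # period of the renormalization-group limit cycle
print(s0, scale)                 # ~1.00624 and ~22.7
```

The factor exp(pi/s0) is the ratio of cut-offs (or binding momenta) after which the three-body coupling returns to the same value on the cycle.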

Background: The frequent occurrence of ferret badger-associated human rabies cases in southeast China highlights the lack of laboratory-based surveillance and urges revisiting the potential importance of this animal in rabies transmission. To determine whether ferret badgers actually contribute to human and dog rabies cases, and the possible origin of ferret badger-associated rabies in the region, an active rabies survey was conducted to determine the frequency of rabies infection and seroprevalence in dogs and ferret badgers. Methods: A retrospective survey on rabies epidemics was performed in Zhejiang, Jiangxi and Anhui provinces in southeast China. Brain tissues from ferret badgers and dogs were assayed by the fluorescent antibody test. Rabies virus was isolated and sequenced for phylogenetic analysis. Sera from ferret badgers and dogs were titrated using the rabies virus neutralizing antibody (VNA) test. Results: The ferret badgers presented a higher percentage of rabies seroconversion than dogs did in the endemic region, reaching a maximum of 95% in the collected samples. Nine ferret badger-associated rabies viruses were isolated, sequenced, and phylogenetically clustered as a separate group. Nucleotide sequences revealed 99.4-99.8% homology within the ferret badger isolates, and 83-89% homology to the dog isolates in the nucleoprotein and glycoprotein genes in the same rabies-endemic regions. Conclusions: Our data suggest ferret badger-associated rabies has likely formed an independent enzootic originating from dogs during the long-term rabies epidemic in southeast China. The eventual role of ferret badger rabies in public health remains unclear. However, management of ferret badger bites, rabies awareness and control in the affected regions are immediate needs.

Selective androgen receptor modulators (SARMs) bind to the androgen receptor and demonstrate anabolic activity in a variety of tissues; however, unlike testosterone and other anabolic steroids, these nonsteroidal agents are able to induce bone and muscle growth while also shrinking the prostate. The potential of SARMs is to maximise the positive attributes of steroidal androgens while minimising the negative effects, thus providing therapeutic opportunities in a variety of diseases, including muscle wasting associated with burns, cancer, end-stage renal disease, osteoporosis, frailty and hypogonadism. This review summarises androgen physiology, the current status of the R&D of SARMs and potential therapeutic indications for this emerging class of drugs.

Combining measurements of atmospheric CO2 and its radiocarbon (14CO2) fraction with transport modeling in atmospheric inversions offers a way to derive improved estimates of CO2 emitted from fossil fuels (FFCO2). In this study, we solve for the monthly FFCO2 emission budgets at regional scale (i.e., the size of a medium-sized country in Europe) and investigate the performance of different observation networks and sampling strategies across Europe. The inversion system is built on the LMDZv4 global transport model at 3.75° × 2.5° resolution. We conduct Observing System Simulation Experiments (OSSEs) and use two types of diagnostics to assess the potential of the observation and inverse modeling frameworks. The first relies on the theoretical computation of the uncertainty in the estimate of emissions from the inversion, known as the posterior uncertainty, and on the uncertainty reduction compared to the uncertainty in the inventories of these emissions, which are used as prior knowledge by the inversion (called the prior uncertainty). The second is based on comparisons of prior and posterior estimates of the emissions to synthetic true emissions, when these true emissions are used beforehand to generate the synthetic fossil fuel CO2 mixing ratio measurements that are assimilated in the inversion. With 17 stations currently measuring 14CO2 across Europe using 2-week integrated sampling, the uncertainty reduction for monthly FFCO2 emissions in a country where the network is rather dense, like Germany, is larger than 30 %. With the 43 14CO2 measurement stations planned in Europe, the uncertainty reduction for monthly FFCO2 emissions is increased for the UK, France, Italy, eastern Europe and the Balkans, depending on the configuration of the prior uncertainty. Further increasing the number of stations or the sampling frequency improves the uncertainty reduction (up to 40 to 70 %) in high-emitting regions, but the performance of the inversion remains limited over low-emitting regions.
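The first OSSE diagnostic (posterior uncertainty and uncertainty reduction) can be sketched for a linear Gaussian inversion. Dimensions and matrices below are illustrative toys, not the paper's network configurations or transport operator:

```python
import numpy as np

# Linear Gaussian inversion: posterior covariance A = (B^-1 + H^T R^-1 H)^-1,
# and uncertainty reduction UR = 1 - sigma_post / sigma_prior per region.
rng = np.random.default_rng(0)
n_regions, n_obs = 3, 12
H = rng.normal(size=(n_obs, n_regions))   # synthetic transport operator
B = np.eye(n_regions)                     # prior (inventory) error covariance
R = 0.1 * np.eye(n_obs)                   # observation error covariance

A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
ur = 1.0 - np.sqrt(np.diag(A)) / np.sqrt(np.diag(B))
print(ur)   # each entry lies in (0, 1); more or better observations push it up
```

Because H.T @ R^-1 @ H is positive semi-definite, the posterior variance never exceeds the prior variance, so the uncertainty reduction is always between 0 and 1; denser networks (more rows in H) or smaller R drive it toward the 40-70 % values quoted above.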

We extend the definition of the electronic chemical potential (μ_e) and chemical hardness (η_e) to finite temperatures by considering a reactive chemical species as a true open system to the exchange of electrons, working exclusively within the framework of the grand canonical ensemble. As in the zero-temperature derivation of these descriptors, the response of a chemical reagent to electron transfer is determined by the response of the (average) electronic energy of the system, and not by intrinsic thermodynamic properties like the chemical potential of the electron reservoir, which is, in general, different from the electronic chemical potential, μ_e. Although the dependence of the electronic energy on electron number qualitatively resembles the piecewise-continuous straight-line profile for low electronic temperatures (up to ca. 5000 K), the introduction of the temperature as a free variable smooths this profile, so that derivatives (of all orders) of the average electronic energy with respect to the average electron number exist and can be evaluated analytically. Assuming a three-state ensemble, well-known results for the electronic chemical potential at negative (−I), positive (−A), and zero values of the fractional charge (−(I + A)/2) are recovered. Similarly, in the zero-temperature limit, the chemical hardness is formally expressed as a Dirac delta function in the particle number and satisfies the well-known reciprocity relation with the global softness.
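The three-state grand canonical construction is easy to reproduce numerically. The sketch below uses hydrogen-like I and A values as an assumed example; it verifies that at μ = −(I + A)/2 the average electron number sits exactly at the neutral value (zero fractional charge), because the N0−1 and N0+1 states acquire equal Boltzmann weights:

```python
import numpy as np

k_B = 8.617e-5   # Boltzmann constant, eV/K
I, A = 13.6, 0.75   # assumed ionization energy and electron affinity, eV

def averages(mu, T, N0=1, E0=0.0):
    """Grand-canonical <N> and <E> over the three states (N0-1, N0, N0+1)."""
    N = np.array([N0 - 1, N0, N0 + 1], dtype=float)
    E = np.array([E0 + I, E0, E0 - A])
    w = np.exp(-(E - mu * N) / (k_B * T))   # grand-canonical weights
    w /= w.sum()
    return (w * N).sum(), (w * E).sum()

# At mu = -(I + A)/2, E - mu*N is identical for the N0-1 and N0+1 states,
# so their weights cancel in <N> and the fractional charge vanishes.
mu0 = -(I + A) / 2
navg, eavg = averages(mu0, 5000.0)
print(navg, eavg)
```

At 5000 K the profile of ⟨E⟩ versus ⟨N⟩ is already smooth, and its slope at ⟨N⟩ = N0 reproduces the familiar Mulliken value −(I + A)/2.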

Approximate solutions of the Dirac equation with position-dependent mass are presented for the inversely quadratic Yukawa potential and a Coulomb-like tensor interaction by using the asymptotic iteration method. The energy eigenvalues and the corresponding normalized eigenfunctions are obtained for the case of position-dependent mass and arbitrary spin-orbit quantum number k, using an approximation for the spin-orbit coupling term.

The solution of the Grad-Shafranov equation determines the stationary behavior of a fusion plasma inside a tokamak. To solve the equation it is necessary to know the toroidal current density profile. Recent works show that it is possible to determine a magnetohydrodynamic (MHD) equilibrium with reversed current density (RCD) profiles that presents magnetic islands. In this work we show an analytical MHD equilibrium with an RCD profile and analyze the structure of the vacuum vector potential associated with these equilibria using the virtual casing principle.

To improve the reliability of sectoral mitigation potential and cost analysis, this paper makes an in-depth exploration of the thermal efficiency and inner structure of China's electricity sector. It is found that, unlike what much of the literature portrays, China is actually among the world's leaders in coal-fired power plant generating efficiency. Besides, although there are still numerous small and inefficient generating units in the current generation fleet, many of them in fact play important roles in supporting local economic development, meeting peak load needs, balancing heat and electricity supply and providing job opportunities to the local economy; therefore their existence does not necessarily mean low-cost mitigation potential. Given the efficiency and structural characteristics of China's electricity sector, it is pointed out that other mitigation options, such as demand-side management, IGCC and renewable energy, as well as the breakthrough of CCS technology, may play an even more important role in emission reduction. Considering the significant lock-in effects in the electricity sector, it is warned that China, if it continues to put the majority of investment into large and advanced coal-fired generating units, will face another round of catching up with new and advanced renewable generation technologies. Therefore China should put more effort into renewable generation technologies now.

We discuss the possibility of realizing metal-insulator transitions with ultracold atoms in two-dimensional optical lattices in the presence of artificial gauge potentials. For Abelian gauges, such transitions occur when the magnetic flux penetrating the lattice plaquette is an irrational multiple of the magnetic flux quantum. Here we present the first study of these transitions for non-Abelian U(2) gauge fields. In contrast to the Abelian case, the spectrum and the localization transition in the non-Abelian case are strongly influenced by atomic momenta. In addition to determining the localization boundary, the momentum fragments the spectrum. Other key characteristics of the non-Abelian case include the absence of localization for certain states, satellite fringes around the Bragg peaks in the momentum distribution, and an interesting possibility that the transition can be tuned by the atomic momenta.
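The Abelian benchmark behind this discussion is the Harper (Aubry-Andre) problem: with an irrational flux, all states localize once the quasiperiodic potential strength exceeds twice the hopping. A small illustrative calculation (a standard textbook model, not the paper's non-Abelian U(2) computation) makes the transition visible through the inverse participation ratio:

```python
import numpy as np

N = 233                           # Fibonacci-like size suits a golden-ratio flux
alpha = (np.sqrt(5) - 1) / 2      # irrational flux per plaquette
n = np.arange(N)

def ipr(lam):
    """Inverse participation ratio of the ground state:
    ~1/N for extended states, O(1) for localized ones."""
    H = (np.diag(lam * np.cos(2 * np.pi * alpha * n))   # quasiperiodic potential
         + np.eye(N, k=1) + np.eye(N, k=-1))            # nearest-neighbor hopping
    _, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0]              # normalized ground state
    return np.sum(psi**4)

print(ipr(1.0), ipr(3.0))         # below vs above the critical strength lam = 2
```

In the non-Abelian case described above, the analogous diagnostic depends additionally on the atomic momentum, which is what fragments the spectrum and shifts the localization boundary.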

An approximate method for the solution of the inverse scattering problem (ISP) at fixed energy for complex spherically symmetric potentials decreasing faster than 1/r is considered. The method is based on a generalized WKB approximation. For the potential to be determined, V(r), a sufficiently ''close'' reference potential V₀(r) is chosen. For both potentials the S-matrix elements (ME) are calculated and the inversion procedure is carried out. S-ME are calculated for integer and intermediate angular momentum values, and are presented in graphical form for the reference and restored potentials for proton scattering at E_p = 49.48 MeV on 12C nuclei. The restoration is the better, the ''closer'' the sought-for potential is to the reference one. This allows one to refine the potential by means of iterations: the restored potential can be used as a reference one, and so on. An operation of smoothing the restored potential before the following iteration is introduced. Drawbacks and advantages of the ISP solution method under consideration are pointed out. The method's application is strongly limited by the requirement that the energy should be higher than a certain ''critical'' one. The method is applicable in a wider region of particle energies (towards lower energies) than the ordinary WKB method, and is simpler to implement for complex potentials. The investigations of the proposed ISP solution method at fixed energy for complex spherically symmetric potentials allow the conclusion that the method can be successfully applied to refine the central part of the interaction of nucleons, α-particles and heavy ions of intermediate and high energies with atomic nuclei.

Snake venoms are sources of molecules with proven and potential therapeutic applications. However, most activities assayed in venoms (or their components) are of hemorrhagic, hypotensive, edematogenic, neurotoxic or myotoxic natures. Thus, other relevant activities might remain unknown. Using functional genomics coupled to the connectivity map (C-map) approach, we undertook a wide-ranging indirect search for biological activities within the venom of the South American pit viper Bothrops jararaca. To that effect, venom was incubated with a human breast adenocarcinoma cell line (MCF7), followed by RNA extraction and gene expression analysis. A list of 90 differentially expressed genes was submitted to biosimilar drug discovery based on pattern recognition. Among the 100 highest-ranked positively correlated drugs, only the antihypertensive, antimicrobial (both antibiotic and antiparasitic) and antitumor classes had been previously reported for B. jararaca venom. The majority of drug classes identified were related to (1) antimicrobial activity; (2) treatment of neuropsychiatric illnesses (Parkinson's disease, schizophrenia, depression, and epilepsy); (3) treatment of cardiovascular diseases; and (4) anti-inflammatory action. The C-map results also indicated that B. jararaca venom may have components that target G-protein-coupled receptors (muscarinic, serotonergic, histaminergic, dopaminergic, GABA, and adrenergic) and ion channels. Although validation experiments are still necessary, the C-map correlation to drugs with activities previously linked to snake venoms supports the efficacy of this strategy as a broad-spectrum approach for biological activity screening, and rekindles the snake venom-based search for new therapeutic agents.

4,4'-Diaminodiphenylsulphone (dapsone) is widely used for a variety of infectious, immune and hypersensitivity disorders, with indications ranging from Hansen's disease, inflammatory disease and insect bites, all of which may be seen as manifestations in certain occupational diseases. However, the use of dapsone may be associated with a plethora of adverse effects, some of which may involve the pulmonary parenchyma. Methemoglobinemia with resultant cyanosis, bone marrow aplasia and/or hemolytic anemia, peripheral neuropathy and the potentially fatal dapsone hypersensitivity syndrome (DHS), the focus of this review, may all occur individually or in combination. DHS typically presents with a triad of fever, skin eruption, and internal organ (lung, liver, neurological and other systems) involvement, occurring several weeks to as late as 6 months after the initial administration of the drug. In this sense, it may resemble DRESS syndrome (Drug Rash with Eosinophilia and Systemic Symptoms). DHS must be promptly identified since, left untreated, the disorder can be fatal. Moreover, the pulmonary/systemic manifestations may be mistaken for other disorders. Eosinophilic infiltrates, pneumonitis, pleural effusions and interstitial lung disease may be seen. This syndrome is best approached with the immediate discontinuation of the offending drug and prompt administration of oral or intravenous glucocorticoids. An immunological-inflammatory basis for the syndrome can be envisaged, based on the pathological picture and the excellent response to anti-inflammatory therapy. Since dapsone is used for various indications, physicians from all specialties may encounter DHS and need to familiarize themselves with its salient features and management.

New inversion formulas are obtained for the classical scattering of a charged particle by a spherical or axisymmetric electric or magnetic field at a fixed impact parameter or angular momentum. For different cases, focusing fields are obtained similar to those previously considered for scattering by an electric field at a given energy, viz., of the backscattering (cat's eye), Maxwell fish eye, or Luneberg lens type. A magnetoelectric analogy is formulated, namely the existence of equivalent axisymmetric electric and magnetic fields that scatter charged particles in identical fashion.

Transient hydraulic tomography is used to image the heterogeneous hydraulic conductivity and specific storage fields of shallow aquifers using time series of hydraulic head data. Such an ill-posed and non-unique inverse problem can be regularized using spatial geostatistical characteristics of the two fields. In addition to hydraulic head changes, the flow of water during pumping tests generates an electrical field of electrokinetic nature. These electrical field fluctuations can be passively recorded at the ground surface using a network of non-polarizing electrodes connected to a high-impedance (> 10 MOhm), sensitive (0.1 mV) voltmeter, a method known in geophysics as the self-potential method. We perform a joint inversion of the self-potential and hydraulic head data to image the hydraulic conductivity and specific storage fields. We work on a 3D synthetic confined aquifer and use the adjoint-state method to compute the sensitivities of the hydraulic parameters to the hydraulic head and self-potential data in both steady-state and transient conditions. The inverse problem is solved using the geostatistical quasi-linear algorithm framework of Kitanidis. When the number of piezometers is small, the record of the transient self-potential signals provides useful information to characterize the hydraulic conductivity and specific storage fields. These results show that the self-potential method reveals the heterogeneities of some areas of the aquifer that could not be captured by the tomography based on the hydraulic heads alone. In our analysis, the improvement in the hydraulic conductivity and specific storage estimations was based on perfect knowledge of the electrical resistivity field. This implies that electrical resistivity will need to be jointly inverted with the hydraulic parameters in future studies and the impact of its uncertainty assessed with respect to the final tomograms of the hydraulic parameters.

The linear and nonlinear optical absorption in a disk-shaped quantum dot (DSQD) with a parabolic potential plus an inverse squared potential in the presence of a static magnetic field are theoretically investigated within the framework of the compact-density-matrix approach and iterative method. The energy levels and the wave functions of an electron in the DSQD are obtained by using the effective mass approximation. Numerical calculations are presented for a typical GaAs/AlAs DSQD. It is found that the optical absorption coefficients are strongly affected not only by the static magnetic field, but also by the strength of the external field, the confinement frequency and the incident optical intensity.

…that includes large offsets, full azimuth, and multicomponent sensors, the potential for trade-off between the elastic orthorhombic parameters is large. The first step to understanding such trade-offs is analysing the scattering potential of each parameter.

Background and purpose: Three-dimensional fluid-attenuated inversion recovery (3D FLAIR) may demonstrate high signal in the inner ears of patients with idiopathic sudden sensorineural hearing loss (ISSNHL), but the correlations of this finding with outcomes are still controversial. Here we compared four 3D MRI sequences with the outcomes of patients with ISSNHL. Materials and methods: 77 adult patients with ISSNHL underwent MRI with pre-contrast FLAIR, fast imaging employing steady-state acquisition (FIESTA-C), post-contrast T1WI and post-contrast FLAIR. The extent and degree of high signal in both cochleas were evaluated in all patients, and asymmetry ratios between the affected ears and the normal ones were calculated. The relationships among MRI findings, including the extent and asymmetry of abnormal cochlear high signals and the degree of FLAIR enhancement, and clinical information, including age, vestibular symptoms, baseline hearing loss, and final hearing outcomes, were analyzed. Results: 54 patients (28 men; age, 52.1 ± 15.5 years) were included in our study. Asymmetric cochlear signal intensities were more frequently observed on pre-contrast and post-contrast FLAIR (79.6% and 68.5%) than on FIESTA-C (61.1%) and T1WI (51.9%) (p < 0.001). Age, baseline hearing loss, extent of high signal and asymmetry ratios of pre-contrast and post-contrast FLAIR were all correlated with final hearing outcomes. In multivariate analysis, age and the extent of high signals were the most significant predictors of final hearing outcomes. Conclusion: 3D FLAIR provides a higher sensitivity in detecting asymmetric cochlear signal abnormality. More asymmetric FLAIR signals and the presence of high signals beyond the cochlea indicated a poorer prognosis.

The problem of constructing confining potentials (increasing without bound at infinity) of the central field from a given energy spectrum is discussed. For these potentials the radial Schroedinger equation has a purely discrete spectrum with an infinite number of levels. The problem is solved using the Gel'fand-Levitan equation with a certain reference potential V(r) whose spectral characteristics differ from the given ones only in a finite number of elements. The regular solutions Φ_l(E, r) of the Schroedinger equation for the reference potential V(r) are supposed to be known. The initial potential and the regular solutions of the Schroedinger equation are restored from the reference potential V(r) and the regular functions Φ_l(E, r) by means of known formulas. It follows from the paper's results that confining potentials with any type of spectrum can be restored. The choice of a corresponding reference potential providing the Fredholm nature of the Gel'fand-Levitan equation is the basic problem in this case.

Revisits and reviews Imre Lakatos' ideas on "Falsification and the Methodology of Scientific Research Programmes." Suggests that Lakatos' framework offers an insightful way of looking at the relationship between theory and research that is relevant not only for evaluating research programs in theoretical physics, but in the social…

The one-dimensional Schroedinger equation y⁺'' + [k² − V⁺(k,x)]y⁺ = 0, x ∈ R, was previously considered when the potential V⁺(k,x) depends on the energy k² in the following way: V⁺(k,x) = U(x) + 2kQ(x), with (U(x), Q(x)) belonging to a large class of pairs of real potentials admitting no bound state. The two systems of differential and integral equations then introduced are solved. Then, investigating the inverse scattering problem, it is found that a necessary and sufficient condition for one of the functions S⁺(k) and S₋₁⁺(k) to be the scattering matrix associated with a pair (U(x), Q(x)) is that S⁺(k) (or, equivalently, S₋₁⁺(k)) belongs to the class S introduced. This pair is the only one admitting this function as its scattering matrix. Investigating the inverse reflection problem, it is found that a necessary and sufficient condition for a function S₂₁⁺(k) to be the reflection coefficient to the right associated with a pair (U(x), Q(x)) is that S₂₁⁺(k) belongs to the class R introduced. This pair is the only one admitting this function as its reflection coefficient to the right.

Controlling the time evolution of the populations of two states in cavity quantum electrodynamics requires tuning the modified Rabi frequency, in which the extra classical effect of the electromagnetic field is taken into account. The theoretical explanation of the perturbation of the potential in the spatial regime of the Bloch sphere uses the Bagrov-Baldiotti-Gitman-Shamshutdinova-Darboux transformations [Bagrov et al., 'Darboux transformation for two-level system', Ann. Phys. 14, 390 (2005)] on the electromagnetic field potential in the one-dimensional stationary Dirac model, in which the Pauli matrices are the central parameters for controlling the collapse and revival of the Rabi oscillations. It is shown that choosing σ₁ in the transformation generates the parabolic potential causing the total collapse of the oscillations, while (σ₂, σ₃) yield the harmonic oscillator potentials ensuring the coherence of the qubits.

For the AKNS operator on L²([0,1], C²) it is well known that the data of two spectra uniquely determine the corresponding potential φ a.e. on [0,1] (a Borg-type theorem). We prove that, in the case where φ is a priori known on [a,1], only a part (depending on a) of the two spectra determines φ on [0,1]. Our results include generalizations for Dirac systems of classical results obtained by Hochstadt and Lieberman for the Sturm-Liouville case, where they showed that half of the potential and one spectrum determine the whole potential function. An important ingredient in our strategy is the link between the rate of growth of an entire function and the distribution of its zeros.

The retrieval of a unique crystal potential from the scattering matrix S in high energy transmission electron diffraction is discussed. It is shown that, in general, data taken at a single orientation are not sufficient to determine all the elements of S. Additional measurements with tilted incident beam are required for the determination of the whole S-matrix. An algorithm for the extraction of the crystal potential from the S-matrix measured at a single energy and thickness is presented. The limiting case of thin crystals is discussed. Several examples with simulated data are considered

We study the Dirac equation for spinor wavefunctions minimally coupled to an external field, from the perspective of an algebraic system of linear equations for the vector potential. By analogy with the method in electromagnetism, which has been well studied and leads to classical solutions of the Maxwell–Dirac equations, we set up the formalism for non-Abelian gauge symmetry, with the SU(2) group and the case of four-spinor doublets. An extended isospin-charge conjugation operator is defined, enabling the hermiticity constraint on the gauge potential to be imposed in a covariant fashion, and rendering the algebraic system tractable. The outcome is an invertible linear equation for the non-Abelian vector potential in terms of bispinor current densities. We show that, via application of suitable extended Fierz identities, the solution of this system for the non-Abelian vector potential is a rational expression involving only Pauli scalar and Pauli triplet, Lorentz scalar, vector and axial vector current densities, albeit in the non-closed form of a Neumann series.

This investigation explores using the beta function formalism to calculate analytic solutions for the observable parameters in rolling scalar field cosmologies. The beta function in this case is the derivative of the scalar φ with respect to the natural log of the scale factor a, β(φ) = dφ/d ln(a). Once the beta function is specified, modulo a boundary condition, the evolution of the scalar φ as a function of the scale factor is completely determined. A rolling scalar field cosmology is defined by its action, which can contain a range of physically motivated dark energy potentials. The beta function is chosen so that the associated "beta potential" is an accurate, but not exact, representation of the appropriate dark energy model potential. The basic concept is that the action with the beta potential is so similar to the action with the model potential that solutions using the beta action are accurate representations of solutions using the model action. The beta function provides an extra equation for calculating analytic functions of the cosmology's parameters as a function of the scale factor that are not calculable using only the model action. As an example, this investigation uses a quintessence cosmology to demonstrate the method for power law and inverse power law dark energy potentials. An interesting result of the investigation is that the Hubble parameter H is almost completely insensitive to the power of the potentials, and that ΛCDM is part of the family of quintessence cosmology power law potentials, with a power of zero.
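The central mechanical step, recovering φ(a) once β(φ) is specified, can be sketched by direct integration. The linear choice β(φ) = β0·φ below is a toy assumption (not one of the paper's dark energy potentials), picked because it has the closed-form check φ(a) = φ0·a^β0:

```python
import numpy as np

# Integrate d(phi)/d(ln a) = beta(phi) forward in ln(a) from a = 1.
beta0, phi0 = 0.1, 1.0
lna = np.linspace(0.0, 2.0, 2001)   # ln(a) grid, a from 1 to e^2
dx = lna[1] - lna[0]

phi = np.empty_like(lna)
phi[0] = phi0                       # boundary condition at a = 1
for i in range(1, lna.size):        # forward-Euler step in ln(a)
    phi[i] = phi[i - 1] + beta0 * phi[i - 1] * dx

exact = phi0 * np.exp(beta0 * lna[-1])   # analytic solution for this beta
```

For the power law and inverse power law beta potentials discussed above, the same quadrature runs in closed form, which is what yields the analytic expressions for the observables as functions of the scale factor.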

The aim of this paper is to identify the determinants of liquidity of commercial banks in the Republic of Serbia, observing macroeconomic and bank-specific (microeconomic) indicators, which were analyzed by descriptive statistics, correlation and regression analysis from 2008 to 2014. The correlation for the observed variables is calculated from 140 samples for internal and external independent variables affecting the dependent variable: liquidity, measured by an indicator of deposits. The subject of the research is the optimization of the model by reducing the liquidity factors to the variables with the most significant impact on the liquidity indicator measured by deposit potential. Results of the model show that bank liquidity is dominantly determined by the size of a bank's assets: with growth of assets, banks are exposed to a greater liquidity risk. An increase in the capital adequacy ratio has a positive effect on bank liquidity. Net interest margin is positively correlated with the deposit-potential indicator, which indicates a negative impact on bank liquidity, as does the ratio of operating expenses to operating income.

We critique and extend theory on organizational sensemaking around three themes. First, we investigate sense arising non-productively and so beyond any instrumental relationship with things; second, we consider how sense is experienced through mood as well as our cognitive skills of manipulation ...... research by revisiting Weick’s seminal reading of Norman Maclean’s book surrounding the tragic events of a 1949 forest fire at Mann Gulch, USA....

Much effort and research has been invested into understanding and bridging the ‘gaps’ which many students experience in terms of contents and expectations as they begin university studies with a heavy component of mathematics, typically in the form of calculus courses. We have several studies...... of bridging measures, success rates and many other aspects of these “entrance transition” problems. In this paper, we consider the inverse transition, experienced by university students as they revisit core parts of high school mathematics (in particular, calculus) after completing the undergraduate...... mathematics courses which are mandatory to become a high school teacher of mathematics. To what extent does the “advanced” experience enable them to approach the high school calculus in a deeper and more autonomous way? To what extent can “capstone” courses support such an approach? How could it be hindered...

Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies originated by polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, SP anomalies observed over a copper belt (India), graphite deposits (Germany) and metallic sulfide deposits (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies the solutions by GA, PSO and SA were consistent with one another, and a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms. (paper)
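To make the meta-heuristic procedure concrete, the sketch below inverts a synthetic SP anomaly with simulated annealing. The forward model (a simplified polarized-source response), the parameter bounds, the cooling schedule and the random start are all illustrative assumptions rather than the study's actual configuration:

```python
import math
import random

def forward(x, K, h):
    # simplified SP response of a buried polarized source (illustrative form)
    return K * h / (x * x + h * h) ** 1.5

def sa_invert(xs, data, bounds, iters=20000, seed=1):
    """Simulated annealing over model m = [K, h] within box bounds."""
    rng = random.Random(seed)
    def misfit(m):
        return sum((forward(x, m[0], m[1]) - d) ** 2 for x, d in zip(xs, data))
    m = [rng.uniform(lo, hi) for lo, hi in bounds]  # random start: no good
    err = err0 = misfit(m)                          # initial model is needed
    best, best_err = m[:], err
    for i in range(iters):
        T = max(0.999 ** i, 1e-12)                  # geometric cooling
        sigma = 0.5 * T + 0.01                      # step shrinks with T
        trial = [min(max(v + rng.gauss(0.0, sigma * (hi - lo)), lo), hi)
                 for v, (lo, hi) in zip(m, bounds)]
        e = misfit(trial)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if e < err or rng.random() < math.exp(-(e - err) / T):
            m, err = trial, e
            if err < best_err:
                best, best_err = m[:], err
    return best, best_err, err0

xs = [i - 20 for i in range(41)]
true_K, true_h = 500.0, 6.0
data = [forward(x, true_K, true_h) for x in xs]
model, best_err, err0 = sa_invert(xs, data, bounds=[(10.0, 2000.0), (1.0, 20.0)])
```

The occasional acceptance of uphill moves is what lets the search escape local minima, which is why, as the study reports, a good starting model is not required.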

Nicotine is a psychoactive substance that is commonly consumed in the context of music. However, the reason why music and nicotine are co-consumed is uncertain. One possibility is that nicotine affects cognitive processes relevant to aspects of music appreciation in a beneficial way. Here we investigated this possibility using Event-Related Potentials. Participants underwent a simple decision-making task (to maintain attentional focus), responses to which were signalled by auditory stimuli. Unlike previous research looking at the effects of nicotine on auditory processing, we used complex tones that varied in pitch, a fundamental element of music. In addition, unlike most other studies, we tested non-smoking subjects to avoid withdrawal-related complications. We found that nicotine (4.0 mg, administered as gum) increased P2 amplitude in the frontal region. Since a decrease in P2 amplitude and latency is related to habituation processes, and an enhanced ability to disengage from irrelevant stimuli, our findings suggest that nicotine may cause a reduction in habituation, resulting in non-smokers being less able to adapt to repeated stimuli. A corollary of that decrease in adaptation may be that nicotine extends the temporal window during which a listener is able and willing to engage with a piece of music.

The analytic solution of the radial Schroedinger equation is studied by using the tight coupling condition of several positive-power and inverse-power potential functions in this article. Furthermore, the precise analytic solutions, and the conditions that determine the existence of an analytic solution, have been sought for the potential V(r) = α₁r⁸ + α₂r³ + α₃r² + β₃r⁻¹ + β₂r⁻³ + β₁r⁻⁴. Generally speaking, there is only an approximate solution, not an analytic solution, for the Schroedinger equation with a superposition of several potentials. However, the conditions that determine the existence of an analytic solution have been found, and the analytic solution and its energy level structure are obtained for the Schroedinger equation with the potential mentioned above. According to the single-valued, finite and continuous standard of the wave function in a quantum system, the authors first solve for the asymptotic solution as r → ∞ and r → 0; secondly, they match the asymptotic solutions with the series solutions in the neighborhood of the irregular singularities; then they compare the power series coefficients and deduce a series of analytic solutions of the stationary-state wave function and the corresponding energy level structure by tight coupling among the coefficients of the potential functions for the radial Schroedinger equation; and lastly, they discuss the solutions and draw conclusions. (general)

Two key ideas stand out as crucial to understanding atmosphere-ocean dynamics, and the dynamics of other planets including the gas giants. The first key idea is the invertibility principle for potential vorticity (PV). Without it, one can hardly give a coherent account of even so important and elementary a process as Rossby-wave propagation, going beyond the simplest textbook cases. Still less can one fully understand nonlinear processes like the self-sharpening or narrowing of jets – the once-mysterious "negative viscosity" phenomenon. The second key idea, also crucial to understanding jets, might be summarized in the phrase "there is no such thing as turbulence without waves", meaning Rossby waves especially. Without this idea one cannot begin to make sense of, for instance, momentum budgets and eddy momentum transports in complex large-scale flows. Like the invertibility principle the idea has long been recognized, or at least adumbrated. However, it is worth articulating explicitly if only because it can be forgotten when, in the usual way, we speak of "turbulence" and "turbulence theory" as if they were autonomous concepts. In many cases of interest, such as the well-studied terrestrial stratosphere, reality is more accurately described as a highly inhomogeneous "wave-turbulence jigsaw puzzle" in which wavelike and turbulent regions fit together and crucially affect each other's evolution. This modifies, for instance, formulae for the Rhines scale interpreted as indicating the comparable importance of wavelike and turbulent dynamics. Also, weakly inhomogeneous turbulence theory is altogether inapplicable. For instance there is no scale separation. Eddy scales are not much smaller than the sizes of the individual turbulent regions in the jigsaw. Here I review some recent progress in clarifying these ideas and their implications.
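The invertibility principle is easiest to see in the simplest barotropic setting, where the PV anomaly q and the streamfunction ψ are linked by q = ∇²ψ: knowing q everywhere determines ψ, and hence the flow, by solving a Poisson problem. A minimal sketch, with the grid size, zero Dirichlet boundary and Gauss-Seidel solver chosen purely for illustration:

```python
def invert_pv(q, h=1.0, iters=4000):
    """Solve the discrete Poisson problem q = laplacian(psi) for psi,
    with psi = 0 on the boundary, by Gauss-Seidel iteration."""
    n = len(q)
    psi = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                psi[i][j] = 0.25 * (psi[i - 1][j] + psi[i + 1][j] +
                                    psi[i][j - 1] + psi[i][j + 1]
                                    - h * h * q[i][j])
    return psi

def laplacian(psi, h=1.0):
    """Five-point discrete Laplacian on the interior of the grid."""
    n = len(psi)
    lap = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            lap[i][j] = (psi[i - 1][j] + psi[i + 1][j] + psi[i][j - 1] +
                         psi[i][j + 1] - 4.0 * psi[i][j]) / (h * h)
    return lap

# point PV anomaly in the middle of a 17 x 17 grid
n = 17
q = [[0.0] * n for _ in range(n)]
q[n // 2][n // 2] = 1.0
psi = invert_pv(q)
```

The recovered ψ is the "far-field" flow induced by the localized PV anomaly; this action-at-a-distance character of inversion is exactly what makes Rossby-wave propagation and jet self-sharpening intelligible.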

The steadily increasing number of explosive threat classes, including home-made explosives (HMEs), liquids, amorphous and gels (LAGs), is forcing up the false-alarm rates of security screening equipment. This development can best be countered by increasing the number of features available for classification. X-ray diffraction intrinsically offers multiple features for both solid and LAGs explosive detection, and is thus becoming increasingly important for false-alarm and cost reduction in both carry-on and checked baggage security screening. Following a brief introduction to X-ray diffraction imaging (XDI), which synthesizes in a single modality the image-forming and material-analysis capabilities of X-rays, the Multiple Inverse Fan Beam (MIFB) XDI topology is described. Physical relationships obtaining in such MIFB XDI components as the radiation source, collimators and room-temperature detectors are presented with experimental performances that have been achieved. Representative X-ray diffraction profiles of threat substances measured with a laboratory MIFB XDI system are displayed. The performance of Next-Generation (MIFB) XDI relative to that of the 2nd Generation XRD 3500™ screener (Morpho Detection Germany GmbH) is assessed. The potential of MIFB XDI, both for reducing the exorbitant cost of false alarms in hold baggage screening (HBS), as well as for combining 'in situ' liquid and solid explosive detection in carry-on luggage screening is outlined. - Highlights: ► X-ray diffraction imaging (XDI) synthesizes analysis and imaging in one x-ray modality. ► A novel XDI beam topology comprising multiple inverse fan-beams (MIFB) is described. ► The MIFB topology is technically easy to realize and has high photon collection efficiency. ► Applications are envisaged in checkpoint, hold baggage and cargo screening.

In this paper, the problem of the charged harmonic plus inverse harmonic oscillator with time-dependent mass and frequency in a time-dependent electromagnetic field is investigated. It is reduced to the problem of the inverse harmonic oscillator with time-independent parameters, and the exact wave function is obtained.

Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them......, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...
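Retrograde analysis, mentioned above, is a backward induction from the terminal positions of the game graph; positions never reached by the induction can be prolonged forever and are therefore draws. A minimal sketch (the node names, and the convention that terminals in `terminal_loss` are lost for the player to move, are illustrative choices):

```python
from collections import deque

def retrograde(succ, terminal_loss):
    """Label every position WIN/LOSS/DRAW for the player to move.
    succ maps each position to its successor list (terminals map to []);
    terminal_loss is the set of terminals lost for the player to move.
    Positions never labeled by the backward induction are draws."""
    pred = {}
    for v, ws in succ.items():
        for w in ws:
            pred.setdefault(w, []).append(v)
    remaining = {v: len(ws) for v, ws in succ.items()}
    value = {t: 'LOSS' for t in terminal_loss}
    queue = deque(terminal_loss)
    while queue:
        w = queue.popleft()
        for v in pred.get(w, []):
            if v in value:
                continue
            if value[w] == 'LOSS':
                value[v] = 'WIN'       # some move reaches a lost position
                queue.append(v)
            else:
                remaining[v] -= 1
                if remaining[v] == 0:  # every move reaches a won position
                    value[v] = 'LOSS'
                    queue.append(v)
    return {v: value.get(v, 'DRAW') for v in succ}

# tiny example: chain c -> b -> a -> t plus a 2-cycle d <-> e
succ = {'t': [], 'a': ['t'], 'b': ['a'], 'c': ['b'], 'd': ['e'], 'e': ['d']}
labels = retrograde(succ, terminal_loss={'t'})
```

Each edge is examined once, so the labeling runs in time linear in the size of the game graph; the almost-linear algorithm for DGGs studied in the paper generalizes this to arbitrary zero-sum payoffs.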

The present limited retrospective study was performed to assess MR imaging of lipomatous tumours of the musculoskeletal system and to evaluate the potential of the T2 short tau inversion-recovery (STIR) technique for differentiating lipomas from liposarcomas. Magnetic resonance images of 12 patients with lipomatous tumours of the musculoskeletal system (eight benign lipomas, three well-differentiated liposarcomas and one myxoid liposarcoma) were reviewed. Benign lipomas were usually superficial and appeared homogeneous on T1- and T2-weighted spin echo sequences, with full suppression readily demonstrated at T2-STIR. In contrast, the liposarcomas in the present series were all deep-seated. Two well-differentiated liposarcomas appeared homogeneous at long and short repetition times (TR) but failed to show complete suppression at T2-STIR. One case of well-differentiated (dedifferentiated) liposarcoma and one of myxoid liposarcoma showed mild and moderate heterogeneity at T1 and T2, respectively, and posed no difficulty in being diagnosed correctly. In conclusion, short and long TR sequences in combination with T2-STIR show promise in differentiating benign from malignant lipomatous tumours of the musculoskeletal system, when taken in combination with the position of the tumour. Copyright (1999) Blackwell Science Pty Ltd

This study developed a multicomponent geochemical model to interpret responses of water chemistry to introduction of CO2 into six water-rock batches with sedimentary samples collected from representative potable aquifers in the Gulf Coast area. The model simulated CO2 dissolution in groundwater, aqueous complexation, mineral reactions (dissolution/precipitation), and surface complexation on clay mineral surfaces. An inverse method was used to estimate mineral surface area, the key parameter for describing kinetic mineral reactions. Modeling results suggested that reductions in groundwater pH were more significant in the carbonate-poor aquifers than in the carbonate-rich aquifers, resulting in potential groundwater acidification. Modeled concentrations of major ions showed overall increasing trends, depending on the mineralogy of the sediments, especially carbonate content. The geochemical model confirmed that mobilization of trace metals was likely caused by mineral dissolution and surface complexation on clay mineral surfaces. Although dissolved inorganic carbon and pH may be used as indicative parameters in potable aquifers, selection of geochemical parameters for CO2 leakage detection is site-specific, and a stepwise procedure may be followed. A combined study of the geochemical models with the laboratory batch experiments improves our understanding of the mechanisms that dominate responses of water chemistry to CO2 leakage and also provides a frame of reference for designing a monitoring strategy in potable aquifers.

Aluminum nitride (AlN) has a polar crystal structure that is susceptible to electric dipolar interactions. The inversion domains in AlN, similar to those in GaN and other wurtzite-structure materials, decrease the energy associated with the electric dipolar interactions at the expense of inversion-domain boundaries, whose interface energy has not been quantified. We study the atomic structures of six different inversion-domain boundaries in AlN, and compare their interface energies from density functional theory calculations. The low-energy interfaces have atomic structures with similar bonding geometry as those in the bulk phase, while the high-energy interfaces contain N-N wrong bonds. We calculate the formation energy of an inversion domain using the interface energy and dipoles' electric-field energy, and find that the distribution of the inversion domains is an important parameter for the microstructures of AlN films. Using this thermodynamic model, it is possible to control the polarity and microstructure of AlN films by tuning the distribution of an inversion-domain nucleus and by selecting the low-energy synthesis methods.

hybridus). He was able to get early stages before twinning occurred and show it was preceded by inversion of the germ layers. By the primitive streak stage there were separate embryonic shields and partition of the amnion. There was, however, a single exocoelom and all embryos were enclosed in a common set...

Epicardial potentials (EPs) derived from the body surface potential map (BSPM) improve acute myocardial infarction (AMI) diagnosis. In this study, we compared EPs derived from the 80-lead BSPM using a standard thoracic volume conductor model (TVCM) with those derived using a patient-specific torso model (PSTM) based on body mass index (BMI). Consecutive patients presenting to both the emergency department and pre-hospital coronary care unit between August 2009 and August 2011 with acute ischaemic-type chest pain at rest were enrolled. At first medical contact, 12-lead electrocardiograms and BSPMs were recorded. The BMI for each patient was calculated. Cardiac troponin T (cTnT) was sampled 12 hours after symptom onset. Patients were excluded from analysis if they had any ECG confounders to interpretation of the ST-segment. A cardiologist assessed the 12-lead ECG for ST-segment elevation myocardial infarction by Minnesota criteria and the BSPM. BSPM ST-elevation (STE) was ⩾0.2 mV in anterior, ⩾0.1 mV in lateral, inferior, right ventricular or high right anterior and ⩾0.05 mV in posterior territories. To derive EPs, the BSPM data were interpolated to yield values at 352 nodes of a Dalhousie torso. Using an inverse solution based on the boundary element method, EPs at 98 cardiac nodes positioned within a standard TVCM were derived. The TVCM was then scaled to produce a PSTM using a model developed from computed tomography in 48 patients of varying BMIs, and EPs were recalculated. EPs >0.3 mV defined STE. A cardiologist blinded to both the 12-lead ECG and BSPM interpreted the EP map. AMI was defined as cTnT ⩾0.1 µg/L. Enrolled were 400 patients (age 62 ± 13 years; 57% male); 80 patients had exclusion criteria. Of the remaining 320 patients, the mean BMI was 27.8 ± 5.6 kg/m². Of these, 180 (56%) had AMI. Overall, 132 had Minnesota STE on ECG (sensitivity 65%, specificity 89%) and 160 had BSPM STE (sensitivity 81%, specificity 90%)

Regarding the acceleration of renewable energy diffusion in Indonesia as well as achieving the national energy mix target, a renewable energy map is essential to provide useful information for building renewable energy systems. This work aims at updating the renewable energy potential map, i.e. hydro and solar energy potential, with a revised model based on global climate data. The renewable energy map is intended to assist the design of off-grid systems based on hydropower plants or photovoltaic systems, particularly for rural electrification. Specifically, the hydro energy map enables the stakeholders to determine the suitable on-site hydro energy technology (from pico-hydro, micro-hydro and mini-hydro to large hydropower plants). Meanwhile, the solar energy map depicts not only seasonal solar energy potential but also the estimated energy output from a photovoltaic system.

The calculation of the second and third harmonic generation coefficients is carried out within the framework of the effective mass approximation in two-dimensional GaAs quantum discs under the combined effect of an external magnetic field and parabolic and inverse square confining potentials. Due to the electric dipole selection rules, the system is shown to have second harmonic generation coefficient identically zero for all the values of incident frequency. The generation of third optical harmonics is significantly dependent on the values of the different input parameters, with the presence of resonant peak blueshifts associated with the magnitudes of the parabolic confinement and the applied magnetic field. -- Highlights: ► One-electron conduction states in a two-dimensional quantum dot. ► Magnetic field and an inverse square repulsive potential. ► Generation of second harmonics is always null. ► Magnetic field induces a blueshift of the resonant peaks. ► The inverse square potential induces a reduction of the peak intensities

A global ab initio potential energy surface is proposed for the water molecule by energy-switching/merging a highly accurate isotope-dependent local potential function reported by Polyansky et al. [Science 299, 539 (2003)] with a global form of the many-body expansion type suitably adapted to account explicitly for the dynamical correlation and parametrized from extensive accurate multireference configuration interaction energies extrapolated to the complete basis set limit. The new function also mimics the complicated Σ/Π crossing that arises at linear geometries of the water molecule.

A method for subsurface recognition of blind geological bodies is presented using combined surface constraints and 3-D structural modelling that incorporates constraints from detailed mapping, and potential-field inversion modelling. This method is applied to the Mount Painter Province and demonstrates that addition of low density material is required to reconcile the gravity signature of the region. This method may be an effective way to construct 3-D models in regions of excellent structural control, and can be used to assess the validity of surface structures with 3-D architecture. Combined geological and potential-field constrained inversion modelling of the Mount Painter Province was conducted to assess the validity of the geological models of the region. Magnetic susceptibility constrained stochastic property inversions indicate that the northeast to southwest structural trend of the relatively magnetic meta-sedimentary rocks of the Radium Creek Group in the Mount Painter Inlier is reconcilable with the similar, northeast to southwest trending positive magnetic anomalies in the region. Radium Creek Group packages are the major contributor to the total magnetic response of the region. However, field mapping and the results of initial density constrained stochastic property inversion modelling do not correlate with a large residual negative gravity anomaly central to the region. Further density constrained inversion modelling indicates that an additional large body of relatively low density material is needed within the model space to account for this negative density anomaly. Through sensitivity analysis of multiple geometrical and varied potential-field property inversions, the best-fitting model records a reduction in gravity rms misfit from 21.9 to 1.69 mGal, representing a reduction from 56 to 4.5 per cent with respect to the total dynamic range of 37.5 mGal of the residual anomaly. This best-fitting model incorporates a volumetrically significant source

Tourism is an experience-intensive sector in which customers seek and pay for experiences above everything else. Remembering past tourism experiences is also crucial for an understanding of the present, including the predicted behaviours of visitors to tourist destinations. We adopt a longitudinal...... approach to memory data collection from psychological science, which has the potential to contribute to our understanding of tourist behaviour. In this study, we examine the impact of remembered tourist experiences in a safari park. In particular, using matched survey data collected longitudinally and PLS...... path modelling, we examine the impact of positive affect tourist experiences on the development of revisit intentions. We find that longer-term remembered experiences have the strongest impact on revisit intentions, more so than predicted or immediate memory after an event. We also find that remembered...

Photoelectron spectroscopy is regarded as a most powerful technique, since it can measure the occupied electron states almost completely. Inverse photoelectron spectroscopy, by contrast, measures the unoccupied electron states by using the inverse process of photoemission, and in principle, experiments analogous to photoelectron spectroscopy become feasible. The experimental technology for inverse photoelectron spectroscopy has been developed energetically by many research groups. At present, work is under way on improving the resolution of inverse photoelectron spectroscopy and on developing inverse photoelectron spectrometers with tunable photon energy, but no inverse photoelectron spectrometer for the vacuum ultraviolet region is commercially available. In this report, the principle of inverse photoelectron spectroscopy and the present state of the instrumentation are described, and the direction of future development is explored. As experimental equipment, electron guns, photon detectors and so on are explained. As examples of experiments, the inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)

Objective: To determine the frequency, causes, clinical presentations, management and maternal mortality associated with acute puerperal inversion of the uterus. Materials and Methods: All the patients who developed acute puerperal inversion of the uterus either in or outside the JPMC were included in the study. Patients of chronic uterine inversion were not included in the present study. Abdominal and vaginal examination was done to confirm and classify inversion into first, second or third degrees. Results: 57036 deliveries and 36 acute uterine inversions occurred during the study period, so the frequency of uterine inversion was 1 in 1584 deliveries. Mismanagement of third stage of labour was responsible for uterine inversion in 75% of patients. Majority of the patients presented with shock, either hypovolemic (69%) or neurogenic (13%) in origin. Manual replacement of the uterus under general anaesthesia with 2% halothane was successfully done in 35 patients (97.5%). Abdominal hysterectomy was done in only one patient. There were three maternal deaths due to inversion. Conclusion: Proper education and training regarding placental delivery, diagnosis and management of uterine inversion must be imparted to the maternity care providers especially to traditional birth attendants and family physicians to prevent this potentially life-threatening condition. (author)

This paper describes a new method for tracing paleo-shear zones of the continental crust by self-potential (SP) data inversion. The method falls within the deterministic inversion framework, and it is exclusively applicable for the interpretation of SP anomalies measured along a profile over sheet-type structures such as conductive thin films of interconnected graphite precipitations formed on shear planes. The inverse method fits a residual SP anomaly by a single thin sheet and recovers the characteristic parameters (depth to the top h, extension in depth a, amplitude coefficient k, and amount and direction of dip θ) of the sheet. The method minimizes an objective functional in the space of the logarithmed and non-logarithmed model parameters (log(h), log(a), log(k), and θ) successively by the steepest descent (SD) and Gauss-Newton (GN) techniques in order to maintain the stability and convergence of the inversion. Prior to applying the method to real data, its accuracy, convergence, and stability are successfully verified on numerical examples with and without noise. The method is then applied to SP profiles from the German Continental Deep Drilling Program (Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland - KTB), Rittsteig, and Grossensees sites in Germany for tracing paleo-shear planes coated with graphitic deposits. The comparisons of geologic sections constructed in this paper (based on the proposed deterministic approach) against the existing published interpretations (obtained based on trial-and-error modeling) for the SP data of the KTB and Rittsteig sites have revealed that the deterministic approach suggests some new details that are of some geological significance. The findings of the proposed inverse scheme are supported by available drilling and other geophysical data. Furthermore, the real SP data of the Grossensees site have been interpreted (apparently for the first time ever) by the deterministic inverse
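The stabilizing idea of minimizing in the space of logarithmed parameters can be sketched generically: iterating in (log k, log h) keeps the physical parameters positive and evens out their scales. The forward model below is a deliberately simplified two-parameter SP response rather than the paper's thin-sheet formula, and a damped Gauss-Newton step stands in for the SD-then-GN sequence; everything here is an illustrative assumption:

```python
import math

def forward(x, k, h):
    # simplified SP response with two positive parameters (illustrative)
    return k * h / (x * x + h * h)

def misfit(p, xs, data):
    k, h = math.exp(p[0]), math.exp(p[1])
    return sum((forward(x, k, h) - d) ** 2 for x, d in zip(xs, data))

def invert_log_space(xs, data, k0, h0, iters=100):
    """Damped Gauss-Newton in (log k, log h); positivity is automatic."""
    p, lam = [math.log(k0), math.log(h0)], 1e-3
    for _ in range(iters):
        k, h = math.exp(p[0]), math.exp(p[1])
        r = [forward(x, k, h) - d for x, d in zip(xs, data)]
        eps = 1e-6
        J = []                          # numeric Jacobian w.r.t. log-params
        for x in xs:
            row = []
            for j in (0, 1):
                q = p[:]
                q[j] += eps
                row.append((forward(x, math.exp(q[0]), math.exp(q[1]))
                            - forward(x, k, h)) / eps)
            J.append(row)
        a11 = sum(g[0] * g[0] for g in J)
        a12 = sum(g[0] * g[1] for g in J)
        a22 = sum(g[1] * g[1] for g in J)
        b1 = sum(g[0] * ri for g, ri in zip(J, r))
        b2 = sum(g[1] * ri for g, ri in zip(J, r))
        while lam < 1e9:                # damped 2 x 2 normal equations
            d11, d22 = a11 * (1 + lam), a22 * (1 + lam)
            det = d11 * d22 - a12 * a12
            step = [(-b1 * d22 + b2 * a12) / det,
                    (-b2 * d11 + b1 * a12) / det]
            trial = [p[0] + step[0], p[1] + step[1]]
            if misfit(trial, xs, data) < misfit(p, xs, data):
                p, lam = trial, lam * 0.3
                break
            lam *= 10.0                 # step rejected: increase damping
    return math.exp(p[0]), math.exp(p[1])

xs = [i - 20 for i in range(41)]
data = [forward(x, 100.0, 5.0) for x in xs]
k_fit, h_fit = invert_log_space(xs, data, k0=30.0, h0=2.0)
```

Because the update is accepted only when the misfit decreases, the iteration is monotone, which is the practical sense in which working in log-parameter space "maintains stability and convergence".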

Schroedinger inverse scattering uses scattering coefficients and bound state data to compute underlying potentials. Inverse scattering has been studied extensively for isolated potentials q(x), which tend to zero as |x| → ∞. Inverse scattering for isolated impurities in backgrounds p(x) that are periodic, are Heaviside steps, are constant for x > 0 and periodic for x < 0, or that tend to zero as x → ∞ and tend to ∞ as x → -∞, has also been studied. This paper identifies literature for the five inverse problems just mentioned, and for four other inverse problems. Heaviside-step backgrounds are discussed at length. (orig.)

Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters, along with an appendix containing background material, the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen

The data are revisited for objective mapping of the temperature fields using the Stochastic Inverse Method. Hourly reciprocal transmissions were carried out with a time lag of 30 minutes between each direction. From the multipath arrival patterns, significant peaks...

understand the existing baseline subsurface resistivity structure at the Newberry site prior to well stimulation, magnetotelluric (MT) data will be collected in late July 2012 using two long period (1 Hz sampling) Narod Geophysics NIMS MT instruments along with EarthScope MT data aligned in a ~210 km long N-S profile centered on the stimulation zone. A 2-D inverse model will be obtained from the MT data set. The goal of this investigation is to determine the variations in the electrical resistivity in the mid-to-lower crust beneath the western flank of the caldera, providing a deeper view of putative heat sources than existing studies in this

The number of beta-turns in a representative set of 426 protein three-dimensional crystal structures selected from the recent Protein Data Bank has nearly doubled and the number of gamma-turns in a representative set of 320 proteins has increased over seven times since the previous analysis. Beta-turns (7153) and gamma-turns (911) extracted from these proteins were used to derive a revised set of type-dependent amino acid positional preferences and potentials. Compared with previous results, the preference for proline, methionine and tryptophan has increased and the preference for glutamine, valine, glutamic acid and alanine has decreased for beta-turns. Certain new amino acid preferences were observed for both turn types, and individual amino acids showed turn-type dependent positional preferences. The rationale for the new amino acid preferences is discussed in the light of hydrogen bonds and other interactions involving the turns. Where main-chain hydrogen bonds of the type NH(i + 3) → CO(i) were not observed for some beta-turns, other main-chain hydrogen bonds or solvent interactions were observed that possibly stabilize such beta-turns. A number of unexpected isolated beta-turns with proline at the i + 2 position were also observed. The NH(i + 2) → CO(i) hydrogen bond was observed for almost all gamma-turns. Nearly 20% of classic gamma-turns and 43% of inverse gamma-turns are isolated turns.
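A positional preference of the kind derived here is conventionally computed as a propensity: the frequency of an amino acid at a given turn position divided by its background frequency, with values above 1 indicating preference. A toy sketch of that bookkeeping (the miniature turn set and background sequence are invented for illustration and unrelated to the 426-protein data set):

```python
from collections import Counter

def positional_propensities(turns, background_seq):
    """turns: list of equal-length turn sequences (e.g. 4 residues for a
    beta-turn, positions i..i+3). Returns {(pos, aa): propensity}, where
    propensity = observed frequency at pos / background frequency."""
    bg = Counter(background_seq)
    bg_total = sum(bg.values())
    n_pos = len(turns[0])
    props = {}
    for pos in range(n_pos):
        col = Counter(t[pos] for t in turns)
        for aa, count in col.items():
            observed = count / len(turns)
            expected = bg[aa] / bg_total
            props[(pos, aa)] = observed / expected
    return props

# invented miniature data set: proline-rich at position i+1
turns = ["GPNG", "APNG", "GPDG", "VPNG"]
background = "GPNGAPNGGPDGVPNGAAAAVVVVDDNN"
props = positional_propensities(turns, background)
```

In the toy data, proline occurs at position i+1 in every turn but in only 4 of the 28 background residues, so its propensity there is 1.0 / (4/28) = 7; type-dependent tables of exactly this quantity are what the study tabulates per turn type and position.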

The fundamentals of oxidative phosphorylation and photophosphorylation are revisited. New experimental data on the involvement of succinate and malate anions respectively in oxidative phosphorylation and photophosphorylation are presented. These new data offer a novel molecular mechanistic...

Understanding the influence of coupled biological, chemical, and hydrological processes on subsurface contaminant behavior at multiple scales is a prerequisite for developing effective remedial approaches, whether they are active remediation or natural attenuation strategies. To develop this understanding, methods are needed that can measure critical components of the natural system in real time. The self-potential method corresponds to the passive measurement of the distribution of the electrical potential at the surface of the Earth or in boreholes. This method is very complementary to other geophysical methods such as DC resistivity and induced polarization. In this report, we summarize research efforts to advance the theory of low-frequency geoelectrical methods and their applications to the contaminant plumes in the vicinity of the former S-3 settling basins at Oak Ridge, TN.

The identification of potential recharge areas and estimation of recharge rates to the confined semi-fossil Ohangwena II Aquifer (KOH-2) is crucial for its future sustainable use. The KOH-2 is located within the endorheic transboundary Cuvelai-Etosha-Basin (CEB), shared by Angola and Namibia. The main objective was the development of a strategy to tackle the problem of data scarcity, which is a well-known problem in semi-arid regions. In a first step, conceptual geological cross sections were created to illustrate the possible geological setting of the system. Furthermore, groundwater travel times were estimated by simple hydraulic calculations. A two-dimensional numerical groundwater model was set up to analyze flow patterns and potential recharge zones. The model was optimized against local observations of hydraulic heads and groundwater age. The sensitivity of the model against different boundary conditions and internal structures was tested. Parameter uncertainty and recharge rates were estimated. Results indicate that groundwater recharge to the KOH-2 mainly occurs from the Angolan Highlands in the northeastern part of the CEB. The sensitivity of the groundwater model to different internal structures is relatively small in comparison to changing boundary conditions in the form of influent or effluent streams. Uncertainty analysis underlined previous results, indicating groundwater recharge originating from the Angolan Highlands. The estimated recharge rates are less than 1% of mean yearly precipitation, which are reasonable for semi-arid regions.

of hypersusceptibility, multiple causes of underestimated toxicity, and the continuous presence of uncertainty, even in regard to otherwise well-studied mercury compounds. Further, the wealth of industrial chemicals now challenges the 'untested-chemical assumption', that the lack of documentation means that toxic potentials can be ignored. Unfortunately, in its ambition to provide solid evidence, toxicology has been pushed into almost endless replications, as evidenced by the thousands of toxicology publications every year that focus on toxic metals, including mercury, while less well-known hazards are ignored. From a public health viewpoint, toxicology needs to provide better guidance on decision-making under ever-present uncertainty. In this role, we need to learn from the stalwart Paracelsus the insistence on relying on facts rather than authority alone to protect against chemical hazards.

It is shown that emulsions stabilized by ionic surfactants can be inverted by controlling the electrical potential across the oil-water interface. The potential dependent partitioning of sodium dodecyl sulfate (SDS) was studied by cyclic voltammetry at the 1,2-dichlorobenzene|water interface. In the emulsion the potential control was achieved by using a potential-determining salt. The inversion of a 1,2-dichlorobenzene-in-water (O/W) emulsion stabilized by SDS was followed by conductometry as a function of added tetrapropylammonium chloride. A sudden drop in conductivity was observed, indicating the change of the continuous phase from water to 1,2-dichlorobenzene, i.e. a water-in-1,2-dichlorobenzene emulsion was formed. The inversion potential agrees well with that predicted by the hydrophilic-lipophilic deviation if the interfacial potential is appropriately accounted for.

Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike diagonalization type methods, the chemical potential often cannot be directly read off from the result of a single evaluation of the Fermi operator. Hence multiple evaluations must be performed sequentially to compute the chemical potential and ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration, but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential converges along the SCF iteration. Instead of evaluating the Fermi operator multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for a single evaluation of the Fermi operator, even when the initial guess is far from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating character, as well as results from ab initio molecular dynamics.
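
The bracket-update idea rests on the electron count being monotone in the chemical potential. A minimal sketch, assuming a fixed toy eigenvalue spectrum (plain sequential bisection, not the parallel two-level PEXSI strategy; all names are illustrative):

```python
import numpy as np

def electron_count(eigs, mu, beta):
    # Number of electrons from Fermi-Dirac occupations of a fixed spectrum
    return float(np.sum(1.0 / (1.0 + np.exp(beta * (eigs - mu)))))

def find_mu(eigs, n_target, beta=10.0, lo=-10.0, hi=10.0, tol=1e-8):
    # Bisection on the (monotone) electron count; [lo, hi] must bracket mu
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if electron_count(eigs, mid, beta) < n_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The method described in the abstract replaces such strictly sequential bracket updates with evaluations of the Fermi operator at several trial potentials in parallel; the sketch only shows the underlying monotonicity that makes safe bracketing possible.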

Inverse kinematics is the process of converting a Cartesian point in space into a set of joint angles so as to move the end effector of a robot efficiently to a desired position and orientation. This project investigates the inverse kinematics of a robotic hand with fingers under various scenarios. Assuming the parameters of a provided robot, a general equation for the end effector point was calculated and used to plot the region of space that it can reach. Further, the benefits obtained from the addition of a prismatic joint versus an extra variable angle joint were considered. The results confirmed that having more movable parts, such as prismatic joints and changing angles, increases the effective reach of a robotic hand.
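
As a concrete instance of the Cartesian-point-to-joint-angles conversion, here is the standard closed-form solution for a planar two-link arm (a textbook case, not the specific robot hand of the study; the forward-kinematics function is included only to verify the solution):

```python
import math

def ik_two_link(x, y, l1, l2):
    # Closed-form inverse kinematics for a planar 2-link arm (one branch)
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))   # clamp for round-off
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def fk(theta1, theta2, l1, l2):
    # Forward kinematics, used to check the inverse solution
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

The acos branch picks one of the two elbow configurations; a real solver would also return the mirrored branch and check joint limits.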

Computer Monte Carlo simulations occupy an increasingly important place between theory and experiment. This paper introduces a global protocol for the comparison of model simulations with experimental results. The correlated distributions of the model parameters are determined using an original recursive inversion procedure. Multivariate analysis techniques are used in order to optimally synthesize the experimental information with a minimum number of variables. This protocol is relevant in all fields of physics dealing with event generators and multi-parametric experiments. (authors)

New methods are required to combine the information contained in passive electrical and seismic signals to detect, localize and monitor hydromechanical disturbances in porous media. We present a field experiment showing how passive seismic and electrical data can be combined to detect a preferential flow path associated with internal erosion in an earth dam. Continuous passive seismic and electrical (self-potential) monitoring data were recorded during a 7-d full-scale levee (earthen embankment) failure test, conducted in Booneschans, Netherlands in 2012. Spatially coherent acoustic emission events and the development of a self-potential anomaly, associated with induced concentrated seepage and internal erosion phenomena, were identified and imaged near the downstream toe of the embankment, in an area that subsequently developed a series of concentrated water flows and sand boils, and where liquefaction of the embankment toe eventually developed. We present a new 4-D grid-search algorithm for localizing acoustic emissions in both time and space, and apply the localization results to add spatially varying constraints to time-lapse 3-D modelling of self-potential data in terms of source current localization. Seismic signal localization results are used to build a set of time-invariant yet spatially varying model weights for the inversion of the self-potential data. The combination of these two passive techniques yields results that are more consistent with visual observations of focused groundwater flow on the embankment. This approach to geophysical monitoring of earthen embankments provides an improved means of early detection and imaging of developing embankment defects associated with concentrated seepage and internal erosion, and can be used to detect various types of hydromechanical disturbances at larger scales.
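
A grid search over source location and origin time can be sketched in stripped-down form (2-D and single-velocity here, unlike the 4-D field algorithm; sensor layout, velocity and names are all illustrative assumptions):

```python
import numpy as np

def locate_source(sensors, arrivals, v, grid):
    # Brute-force search: pick the grid point minimizing arrival-time
    # residuals; the origin time t0 is fit analytically at each point
    best_pt, best_cost = None, np.inf
    for pt in grid:
        t_travel = np.linalg.norm(sensors - pt, axis=1) / v
        t0 = np.mean(arrivals - t_travel)       # least-squares origin time
        cost = np.sum((arrivals - t0 - t_travel) ** 2)
        if cost < best_cost:
            best_pt, best_cost = pt, cost
    return best_pt
```

Fitting t0 in closed form at each candidate point is what keeps the search over location only, rather than over a full space-time grid.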

Our paper revisits Okun's relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985-2013. We find that the

Our article revisits the Okun relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985–2013. We

Bounded intention planning provides a pruning technique for optimal planning that was proposed several years ago. In addition, partial order reduction techniques based on stubborn sets have recently been investigated for this purpose. In this paper we revisit bounded intention planning in view of stubborn sets.

This paper revisits a well-known hydrostatic paradox, observed when turning upside down a glass partially filled with water and covered with a sheet of light material. The phenomenon is studied in its most general form by including the mass of the cover. A historical survey of this experiment shows that a common misunderstanding of the phenomenon…

This paper is the second in a series revisiting the (effect of) Faraday rotation. We formulate and prove the thermodynamic limit for the transverse electric conductivity of Bloch electrons, as well as for the Verdet constant. The main mathematical tool is a regularized magnetic and geometric...

We revisit the sensitivity study of the Tokai-to-Kamioka-and-Korea (T2KK) and Tokai-to-Kamioka-and-Oki (T2KO) proposals, where a water Cerenkov detector with a 100 kton fiducial volume is placed in Korea (L = 1000 km) and on Oki island (L = 653 km) in Japan, respectively, in addition to Super-Kamiokande, for determination of the neutrino mass hierarchy and the leptonic CP phase (δ_CP). We systematically study the running ratio of the ν_μ and anti-ν_μ focusing beams with dedicated background estimation for the ν_e appearance and ν_μ disappearance signals, especially improving the treatment of the neutral-current π⁰ backgrounds. Using a ν_μ : anti-ν_μ beam ratio between 3:2 and 2.5:2.5 (in units of 10^21 POT with a proton energy of 40 GeV), mass-hierarchy determination with a median sensitivity of 3-5σ by the T2KK and 1-4σ by the T2KO experiment is expected when sin²θ_23 = 0.5, depending on the mass-hierarchy pattern and CP phase. These sensitivities are enhanced (reduced) by 30-40% in Δχ² when sin²θ_23 = 0.6 (0.4). The CP phase is measured with an uncertainty of 20°-50° by the T2KK and T2KO using a ν_μ : anti-ν_μ focusing beam ratio between 3.5:1.5 and 1.5:3.5. These findings indicate that inclusion of the anti-ν_μ focusing beam improves the sensitivities of the T2KK and T2KO experiments to both the mass-hierarchy determination and the leptonic CP phase measurement simultaneously, with the preferred beam ratio being between 3:2 and 2.5:2.5 (× 10^21 POT). (orig.)

In this article we revisit, with the help of images, those classic signs in chest radiography described by Dr Benjamin Felson himself, or other illustrious radiologists of his time, cited and discussed in 'Chest Roentgenology'. We briefly describe the causes of the signs, their utility and the differential diagnosis to be considered when each sign is seen. Wherever possible, we use CT images to illustrate the basis of some of these classic radiographic signs.

In this paper we revisit our joint work with Antonio Siconolfi on time functions. We will give a brief introduction to the subject. We will then show how to construct a Lipschitz time function in a simplified setting. We will end with a new result showing that the Aubry set is not an artifact of our proof of existence of time functions for stably causal manifolds.

It has been 15 years since the original presentation by Frank Halasz at Hypertext'87 on seven issues for the next generation of hypertext systems. These issues are: search and query; composites; virtual structures; computation in/over the hypertext network; versioning; collaborative work; and extensibility and tailorability. Since that time, these issues have formed the nucleus of multiple research agendas within the Hypertext community. Befitting this direction-setting role, the issues have been revisited ...

We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.

This paper discusses steps towards a methodology for creating perceptual shifts in virtual reality (VR) environments. A perceptual shift is a cognitive recognition of having experienced something extra-marginal, on the boundaries of normal awareness, outside of conditioned attenuation. Definitions of perceptual shifts draw on a historical tradition of wonder at devices, as well as on various categories of sensory and optical illusions. Neuroscience and cognitive science attempt to explain perceptual shifts through biological and perceptual mechanisms. This paper explores perspective, illusion and projection to situate an artistic process in terms of perceptual shifts. Most VR environments rely on a single perceptual shift, while enormous potential for perceptual shifts in VR remains. Examples of artwork and VR environments develop and present this idea.

An improved inverse simulated annealing method is presented to determine the structure of complex disordered systems from first principles in agreement with available experimental data or desired predetermined target properties. The effectiveness of this method is demonstrated by revisiting the structure of amorphous InSb. The resulting network is mostly tetrahedral and in excellent agreement with available experimental data.

We revisit the bottomonium spectrum motivated by recent exciting experimental progress in the observation of new bottomonium states, both conventional and unconventional. Our framework is a nonrelativistic constituent quark model which has been applied to a wide range of hadronic observables from the light to the heavy quark sector, so the model parameters are completely constrained. Beyond the spectrum, we provide a large number of electromagnetic, strong and hadronic decays in order to discuss the quark content of the bottomonium states and give more insight into how best to determine their properties experimentally.

We revisited the brachiopod fold hypothesis and investigated metamorphosis in the craniiform brachiopod Novocrania anomala. Larval development is lecithotrophic and the dorsal (brachial) valve is secreted by dorsal epithelia. We found that the juvenile ventral valve, which consists only of a thin … brachiopods during metamorphosis to cement their pedicle to the substrate. N. anomala is therefore not initially attached by a valve but by material corresponding to pedicle cuticle. This is different from previous descriptions, which had led to speculations about a folding event in the evolution of Brachiopoda…

This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least square fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given
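
The pseudo-inverse solutions mentioned above can be sketched via a truncated singular value decomposition, the generic recipe for stabilizing an ill-posed linear inversion (the truncation tolerance is an illustrative regularization choice, not taken from the report):

```python
import numpy as np

def svd_solve(G, d, tol=1e-10):
    # Least-squares model estimate m = G^+ d; singular values below
    # tol * s_max are discarded to suppress unstable model components
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ d))
```

Zeroing the small singular values is what makes the estimate a *pseudo*-inverse: components of the model that the data cannot resolve are set to zero rather than blown up.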

As indicated by Gibbs and made explicit by Guggenheim, the electrical potential difference between two regions of different chemical composition cannot be measured. The Gibbs-Guggenheim Principle restricts the use of classical electrostatics in electrochemical theories as thermodynamically unsound, with a few approximate exceptions, notably for dilute electrolyte solutions and concomitant low potentials where the linear limit for the exponential of the relevant Boltzmann distribution applies. The Principle invalidates the widespread use of forms of the Poisson-Boltzmann equation which do not include the non-electrostatic components of the chemical potentials of the ions. From a thermodynamic analysis of the parallel plate electrical condenser, employing only measurable electrical quantities and taking into account the chemical potentials of the components of the dielectric and their adsorption at the surfaces of the condenser plates, an experimental procedure to provide exceptions to the Principle had been proposed. This procedure is now reconsidered and rejected. No other related experimental procedures circumvent the Principle. Widely-used theoretical descriptions of electrolyte solutions, charged surfaces and colloid dispersions which neglect the Principle are briefly discussed. MD methods avoid the limitations of the Poisson-Boltzmann equation. Theoretical models which include the non-electrostatic components of the inter-ion and ion-surface interactions in solutions and colloid systems assume the additivity of dispersion and electrostatic forces. An experimental procedure to test this assumption is identified from the thermodynamics of condensers at microscopic plate separations. The available experimental data from Kelvin probe studies are preliminary, but tend against additivity. A corollary to the Gibbs-Guggenheim Principle is enunciated, and the Principle is restated that for any charged species, neither the difference in electrostatic potential nor the

We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.

Aug 18, 2016 … health care research, it is therefore pertinent to revisit the state of nursing research in the country. … platforms, updated libraries with electronic resource … benchmarks for developing countries of 26%, [17] the amount is still …

The hypervirial and Hellmann-Feynman theorems are used in the methods of 1/N expansion to construct Rayleigh-Schroedinger perturbation expansion for bound-state energy eigenvalues of spherical symmetric potentials. A new iteration procedure of calculating correction terms of arbitrarily high orders is obtained for any kind of 1/N expansion. The recurrence formulas for three variants of the 1/N expansion are considered in this work, namely, the 1/n expansion, the shifted and unshifted 1/N expansions which are applied to the Gaussian and Patil potentials. As a result, their credibility could be reliably judged when account is taken of high order terms of the eigenenergies. It is also found that there is a distinct advantage in using the shifted 1/N expansion over the two other versions. However, the shifted 1/N expansion diverges for s states and in certain cases is not applicable as far as complicated potentials are concerned. In an effort to solve these problems we have incorporated the principle of minimal sensitivity in the shifted 1/N expansion as a first step toward extending the scope of applicability of that technique, and then we have tested the obtained approach to some unfavorable cases of the Patil and Hellmann potentials. The agreement between our numerical calculations and reference data is quite satisfactory. (author)

The complexity of life in 21st century society requires new models for leading and managing change. With that in mind, this paper revisits the model for Advanced Change Theory (ACT) as presented by Quinn, Spreitzer, and Brown in their article, “Changing Others Through Changing Ourselves: The Transformation of Human Systems” (2000). The authors present ACT as a potential model for facilitating change in complex organizations. This paper presents a critique of the article and summarizes opportunities for further exploring the model in the light of current trends in developmental and integral theory.

The Paraná basin, in the central-south region of the South American Plate, is one of the largest South American intracratonic basins. It is composed of Paleozoic and Mesozoic sediments, covered by the enormous Cretaceous flood basalts associated with the rifting of Gondwana and the opening of the South Atlantic Ocean. Its depocenter region, with a maximum estimated depth of just over 7000 m, was crossed by three magnetotelluric (MT) profiles proposed by the Brazilian Petroleum Agency (ANP), aimed at better characterizing its geological structure, as the seismic images are very poor. The data include about 350 MT broadband soundings spanning from 1000 Hz down to 2,000 s. The MT data were processed using robust techniques and remote reference. Static shifts observed at some stations were corrected based on transient electromagnetic (TEM) measurements at each site. The models were integrated with existing gravity, magnetic and seismic data for a more comprehensive interpretation of the region. A pilot 3-D model has also been constructed on a crustal scale covering the study area, using four frequencies per decade in the 3-D inversion scheme proposed by Siripunvaraporn et al. (2005). The inversion scheme produced a reliable model and the observations were adequately reproduced, with the fit notably better for the deeper basement-related structures than in the 2-D results. The main features in the conductivity model correspond to known geological features. These include the conductivity structures obtained for the upper crust, i.e. the sedimentary sequences, underlain by more resistive material assumed to be basement. Local resistive features near the surface are associated with volcanic basalts covering the sediments. Some highly resistive horizontal and vertical bodies were associated with volcanic intrusions such as dikes and sills. We observed depressions in the basement consistent with half-graben structures, possibly filled with sandstones.

The derivation of the life quality index (LQI) is revisited for a revision. This revision takes into account the unpaid but necessary work time needed to stay alive in clean and healthy conditions to be fit for effective wealth producing work and to enjoyable free time. Dimension analysis...... at birth should not vary between countries. Finally the distributional assumptions are relaxed as compared to the assumptions made in an earlier work by the author. These assumptions concern the calculation of the life expectancy change due to the removal of an accident source. Moreover a simple public...... consistency problems with the standard power function expression of the LQI are pointed out. It is emphasized that the combination coefficient in the convex differential combination between the relative differential of the gross domestic product per capita and the relative differential of the expected life...

We revisit the quantum two-person duel. In this problem, Alice and Bob each possess a spin-1/2 particle which models the dead and alive states of each player. We review the Abbott and Flitney result, now considering non-zero α₁ and α₂ in order to decide whether it is better for Alice to shoot or not the second time, and we also consider a duel where players do not necessarily start alive. This simple assumption allows us to explore several interesting special cases, namely how a dead player can win the duel shooting just once, how Bob can revive Alice after one shot, and the better strategy for Alice, being either alive or in a superposition of alive and dead states, fighting a dead opponent. (paper)

In January 1994, the two geostationary satellites known as Anik-E1 and Anik-E2, operated by Telesat Canada, failed one after the other within 9 hours, leaving many northern Canadian communities without television and data services. The outage, which shut down much of the country's broadcast television for hours and cost Telesat Canada more than $15 million, generated significant media attention. Lam et al. used publicly available records to revisit the event; they looked at failure details, media coverage, recovery effort, and cost. They also used satellite and ground data to determine the precise causes of those satellite failures. The researchers traced the entire space weather event from conditions on the Sun through the interplanetary medium to the particle environment in geostationary orbit.

Purpose – The purpose of this paper is to learn more about logistics innovation processes and their implications for the focal organization as well as the supply chain, especially suppliers. Design/methodology/approach – The empirical basis of the study is a longitudinal action research project that was triggered by the practical needs of new ways of handling material flows of a hospital. This approach made it possible to revisit theory on the logistics innovation process. Findings – Apart from the tangible benefits reported to the case hospital, five findings can be extracted from this study: the logistics innovation process model may include not just customers but also suppliers; logistics innovation in buyer-supplier relations may serve as an alternative to outsourcing; logistics innovation processes are dynamic and may improve supplier partnerships; logistics innovations in the supply chain are as dependent …

The successful practice of dentistry involves a good combination of technical skills and soft skills. Soft skills or communication skills are not taught extensively in dental schools, and they can be challenging to learn and to apply when treating dental patients. Guiding the child's behavior in the dental operatory is one of the preliminary steps to be taken by the pediatric dentist, and one who can successfully modify behavior can pave the way for a lifetime of comprehensive oral care. This article revisits a simple behavior guidance technique, reframing, and explains the possible psychological perspectives behind it for better use in clinical practice.

The authors present some preliminary results of numerical simulation to infer the sound velocity distribution in the solar interior from the oscillation data of the Sun as the inverse problem. They analyze the acoustic potential itself by taking account of some factors other than the sound velocity, and infer the sound velocity distribution in the deep interior of the Sun

In addition to their applicability as biopesticides, Bacillus thuringiensis (Bt) Cry1Ac spore-crystals are being researched in the immunology field for their potential as adjuvants in mucosal and parenteral immunizations. We aimed to investigate the hematotoxicity and genotoxicity of Bt spore-crystals genetically modified to express Cry1Ac individually, administered orally (p.o.) or as a single intraperitoneal (i.p.) injection 24 h before euthanasia, to simulate the routes of mucosal and parenteral immunizations in Swiss mice. Blood samples were used to perform a hemogram, and bone marrow was used for the micronucleus test. Cry1Ac presented cytotoxic effects on the erythroid lineage by both routes, more severe for the i.p. route, which also showed genotoxic effects. The greater severity noted for this route, mainly at 6.75 mg/kg, as well as the intermediate effects at 13.5 mg/kg and the very low hematotoxicity at 27 mg/kg, suggested a possible inverse agonism. The higher immunogenicity of the p.o. route, particularly at 27 mg/kg, suggested that at this dose Cry1Ac could potentially be used as a mucosal adjuvant (but not in parenteral immunizations, due to the genotoxic effects observed). This potential should be investigated further, including an evaluation of the proposed inverse agonism and cytokine profiling.

The present status of the three-dimensional inverse-scattering method with supersymmetric transformations is reviewed for the coupled-channel case. We first revisit in a pedagogical way the single-channel case, where the supersymmetric approach is shown to provide a complete, efficient and elegant solution to the inverse-scattering problem for the radial Schrödinger equation with short-range interactions. A special emphasis is put on the differences between conservative and non-conservative transformations, i.e. transformations that do or do not conserve the behaviour of solutions of the radial Schrödinger equation at the origin. In particular, we show that for the zero initial potential, a non-conservative transformation is always equivalent to a pair of conservative transformations. These single-channel results are illustrated on the inversion of the neutron–proton triplet eigenphase shifts for the S- and D-waves. We then summarize and extend our previous works on the coupled-channel case, i.e. on systems of coupled radial Schrödinger equations, and stress remaining difficulties and open questions of this problem by putting it in perspective with the single-channel case. We mostly concentrate on two-channel examples to illustrate general principles while keeping mathematics as simple as possible. In particular, we discuss the important difference between the equal-threshold and different-threshold problems. For equal thresholds, conservative transformations can provide non-diagonal Jost and scattering matrices. Iterations of such transformations in the two-channel case are studied and shown to lead to practical algorithms for inversion. A convenient particular technique where the mixing parameter can be fitted without modifying the eigenphases is developed with iterations of pairs of conjugate transformations. This technique is applied to the neutron–proton triplet S–D scattering matrix, for which exactly-solvable matrix potential models are constructed

The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.

Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
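The sequential Monte Carlo machinery described above is needed because the partition function is intractable; for a system small enough to enumerate, the same "interactions from ensemble properties" inversion can be carried out exactly. A minimal sketch (not the authors' algorithm; the tiny 1D Ising chain, the coupling value, and the bisection solver are all illustrative assumptions):

```python
import itertools
import math

def nn_sum(state):
    # sum of nearest-neighbour spin products on a periodic chain
    n = len(state)
    return sum(state[i] * state[(i + 1) % n] for i in range(n))

def mean_correlation(J, n=5):
    # exact ensemble average of the nn sum at coupling J
    # (the partition function Z is tractable only because n is tiny)
    states = list(itertools.product([-1, 1], repeat=n))
    weights = [math.exp(J * nn_sum(s)) for s in states]
    Z = sum(weights)
    return sum(w * nn_sum(s) for w, s in zip(weights, states)) / Z

J_true = 0.7
target = mean_correlation(J_true)   # the "observed" ensemble property

# invert by bisection: the mean correlation is monotone increasing in J
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mean_correlation(mid) < target:
        lo = mid
    else:
        hi = mid
J_est = 0.5 * (lo + hi)
print(J_est)  # recovers ~0.7
```

The monotonicity exploited here (the derivative of the mean with respect to J is a variance, hence non-negative) is what a sampling-based estimator must recover statistically when enumeration is impossible.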

High-transmission attenuated phase-shift masks (Hi-T PSM) have been successfully applied in volume manufacturing for certain memory devices. Moreover, numerous studies have shown the potential benefits of Hi-T PSM for specific lithography applications. In this paper, the potential for extending Hi-T PSM to logic devices is revisited, with an emphasis on understanding layout, transmission, and manufacturing of Hi-T PSM versus the traditional 6% embedded attenuated phase-shift mask (EAPSM). Simulations on various layouts show that Hi-T PSM has an advantage over EAPSM in low-duty-cycle line patterns and high-duty-cycle space patterns. The overall process window can be enhanced when Hi-T PSM is combined with optimized optical proximity correction (OPC), sub-resolution assist features (SRAF), and source illumination. Therefore, Hi-T PSM may be a viable and lower-cost alternative to other complex resolution enhancement technology (RET) approaches. Aerial image measurement system (AIMS) results on test masks, based on an inverse lithography technology (ILT) generated layout, confirm the simulation results. New advancements in high-transmission blanks also make low-topography Hi-T PSM a reality, which can minimize scattering effects in high-NA lithography.

The neutron population in a prototype model of nuclear reactor can be described in terms of a collection of particles confined in a box and undergoing three key random mechanisms: diffusion, reproduction due to fissions, and death due to absorption events. When the reactor is operated at the critical point, and fissions are exactly compensated by absorptions, the whole neutron population might in principle go to extinction because of the wild fluctuations induced by births and deaths. This phenomenon, which has been named critical catastrophe, is nonetheless never observed in practice: feedback mechanisms acting on the total population, such as human intervention, have a stabilizing effect. In this work, we revisit the critical catastrophe by investigating the spatial behaviour of the fluctuations in a confined geometry. When the system is free to evolve, the neutrons may display a wild patchiness (clustering). On the contrary, imposing a population control on the total population acts also against the local fluctuations, and may thus inhibit the spatial clustering. The effectiveness of population control in quenching spatial fluctuations will be shown to depend on the competition between the mixing time of the neutrons (i.e. the average time taken for a particle to explore the finite viable space) and the extinction time
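The competition between free fluctuations and feedback control can be illustrated with a toy critical birth-death process (no spatial structure; the branching rule, population size, and feedback scheme below are illustrative assumptions, not the paper's reactor model):

```python
import random

random.seed(12345)

def branch(population):
    # one generation of a critical birth-death process:
    # each particle dies or splits into two with equal probability,
    # so births exactly compensate deaths on average
    return sum(2 for _ in range(population) if random.random() < 0.5)

def controlled_branch(population, target):
    # same process, but a feedback mechanism (cf. human intervention)
    # restores the population to its target size whenever it survives
    n = branch(population)
    return target if n > 0 else 0

N, generations = 30, 3000
free, ctrl = N, N
extinct_at = None
for g in range(generations):
    free = branch(free)
    ctrl = controlled_branch(ctrl, N)
    if free == 0 and extinct_at is None:
        extinct_at = g

print("free population extinct at generation:", extinct_at)
print("controlled population size:", ctrl)
```

The free trajectory typically wanders and dies out (the "critical catastrophe"), while the controlled one is pinned near its target; the paper's point is that such global control also quenches the local, spatial clustering of the fluctuations.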

Consideration of core polarization, isobar currents and meson-exchange processes gives a satisfactory understanding of the ground-state magnetic moments in closed-shell-plus (or minus)-one nuclei, A = 3, 15, 17, 39 and 41. Ever since the earliest days of the nuclear shell model the understanding of magnetic moments of nuclear states of supposedly simple configurations, such as doubly closed LS shells ±1 nucleon, has been a challenge for theorists. The experimental moments, which in most cases are known with extraordinary precision, show a small yet significant departure from the single-particle Schmidt values. The departure, however, is difficult to evaluate precisely since, as will be seen, it results from a sensitive cancellation between several competing corrections, each of which can be as large as the observed discrepancy. This, then, is the continuing fascination of magnetic moments. In this contribution, we revisit the subject principally to identify the role played by isobar currents, which are of much concern at this conference. But in so doing we warn quite strongly of the dangers of considering just isobar currents in isolation; equal consideration must be given to competing processes, which in this context are the mundane nuclear structure effects, such as core polarization, and the more popular meson-exchange currents.

We revisit here the naturalness problem of Lorentz invariance violations on a simple toy model of a scalar field coupled to a fermion field via a Yukawa interaction. We first review some well-known results concerning the low-energy percolation of Lorentz violation from high energies, presenting some details of the analysis not explicitly discussed in the literature and discussing some previously unnoticed subtleties. We then show how a separation between the scale of validity of the effective field theory and that of Lorentz invariance violations can hinder this low-energy percolation. While such a protection mechanism was previously considered in the literature, we provide here a simple illustration of how it works and of its general features. Finally, we consider a case in which dissipation is present, showing that the dissipative behaviour does not generically percolate to lower-mass-dimension operators, although dispersion does. Moreover, we show that a scale separation can also protect from unsuppressed low-energy percolation in this case.

Rank-three and rank-four separable ³S₁–³D₁ potentials have been constructed which reproduce the experimental phase shifts and a realistic deuteron wave function. The off-shell behaviour has been investigated and triton binding energies were calculated.

A successful full-waveform inversion implementation updates the low-wavenumber model components first, for a proper description of the wavefield propagation, and slowly adds the high-wavenumber, potentially scattering parts of the model. The low...

The goal of the paper is to revisit and analyze key contributions to the understanding of leadership and management. As a part of the discussion, a role perspective that allows for additional and/or integrated leader dimensions, including a change-centered one, will be outlined. Seemingly, a major...

We revisit the idea of ``inter-genre similarity'' (IGS) for machine learning in general, and music genre recognition in particular. We show analytically that the probability of error for IGS is higher than that of naive Bayes classification with zero-one loss (NB). We show empirically that IGS does not perform well, even for data that satisfies all its assumptions...

...crystallization and grain boundary migration occurring simultaneously in different minerals from several regions in the Himalayas; the process appears to be repeated in cyclic order during progressive deformation. R. Islam and others found ophiolitic... may have fine-tuned the geomagnetic inversion of the metamorphic sequence, advocating the theory of post-metamorphic tectonic modification. Considering the large-scale tectonic activities and associated metamorphism...

We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.
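As a small numerical illustration of the PPT criterion the abstract builds on (a standard check, not the authors' new results), one can test positivity of the partially transposed covariance matrix of a two-mode squeezed vacuum; the squeezing value below is an arbitrary example:

```python
import numpy as np

r = 0.5                                  # squeezing parameter (example value)
c, s = np.cosh(2 * r), np.sinh(2 * r)

# covariance matrix of a two-mode squeezed vacuum, ordering (x1, p1, x2, p2)
V = np.array([[c, 0, s, 0],
              [0, c, 0, -s],
              [s, 0, c, 0],
              [0, -s, 0, c]], dtype=float)

# symplectic form: one antisymmetric 2x2 block per mode
w = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.block([[w, np.zeros((2, 2))], [np.zeros((2, 2)), w]])

def min_eig(M):
    return np.linalg.eigvalsh(M).min()

# physicality (uncertainty relation): V + i*Omega must be positive semidefinite
physical = min_eig(V + 1j * Omega)

# partial transposition flips the sign of the second mode's momentum
Lam = np.diag([1.0, 1.0, 1.0, -1.0])
ppt = min_eig(Lam @ V @ Lam + 1j * Omega)

print(physical, ppt)   # ppt < 0: PPT is violated, so the state is entangled
```

For 1-versus-n-mode Gaussian states such as this one, the abstract's result (i) says this PPT test is not merely necessary but also sufficient for separability.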

Much of what we know about the initiation of earthquakes comes from the temporal and spatial relationship of foreshocks to the initiation point of the mainshock. The 1999 Mw 7.6 Izmit, Turkey, earthquake was preceded by a 44-minute-long foreshock sequence. Bouchon et al. (Science, 2011) analyzed the foreshocks using a single seismic station, UCG, located to the north of the east-west fault, and concluded on the basis of waveform similarity that the foreshocks repeatedly re-ruptured the same fault patch, driven by slow slip at the base of the crust. We revisit the foreshock sequence using seismograms from 9 additional stations that recorded the four largest foreshocks (Mw 2.0 to 2.8) to better characterize the spatial and temporal evolution of the foreshock sequence and their relationship to the mainshock hypocenter. Cross-correlation timing and hypocentroid location with hypoDD reveal a systematic west-to-east propagation of the four largest foreshocks toward the mainshock hypocenter. Foreshock rupture dimensions estimated using spectral ratios imply no major overlap for the first three foreshocks. The centroid of the 4th and largest foreshock continues the eastward migration, but lies within the circular source area of the 3rd. The 3rd, however, has a low stress drop and strong directivity to the west. The mainshock hypocenter locates on the eastern edge of foreshock 4. We also re-analyzed waveform similarity of all 18 foreshocks recorded at UCG by removing the common-mode signal and clustering the residual seismograms using the correlation coefficient as the distance metric. The smaller foreshocks cluster with the larger events in time order, sometimes as foreshocks and more commonly as aftershocks. These observations show that the Izmit foreshock sequence is consistent with a stress-transfer driven cascade, moving systematically to the east along the fault, and that there is no observational requirement for creep as a driving mechanism.

In this project I describe the status of inverse kinematics research, with the focus firmly on the methods that solve the core problem. An overview of the different methods is presented. Three common methods used in inverse kinematics computation have been chosen as subjects for closer inspection...
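The excerpt does not name the three methods it inspects; as one standard representative of the Jacobian-based family such surveys cover, here is a damped least-squares sketch for a hypothetical two-link planar arm (link lengths, damping factor, and target are all illustrative assumptions):

```python
import numpy as np

L1, L2 = 1.0, 1.0   # link lengths of a hypothetical 2-link planar arm

def forward(q):
    # forward kinematics: joint angles -> end-effector position
    t1, t2 = q
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(q):
    # analytic Jacobian of the forward map
    t1, t2 = q
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

def ik_damped(target, q0, lam=0.1, iters=500):
    # damped least-squares (Levenberg-Marquardt style) IK iteration:
    # the lam**2 * I term keeps the step bounded near singular poses
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = target - forward(q)
        J = jacobian(q)
        q += np.linalg.solve(J.T @ J + lam**2 * np.eye(2), J.T @ e)
    return q

target = np.array([1.2, 0.8])          # reachable: 0 < |target| < L1 + L2
q = ik_damped(target, q0=[0.3, 0.3])
print(forward(q))                      # close to target
```

Other families commonly discussed alongside this one include cyclic coordinate descent and analytic closed-form solutions, but which three the project actually treats is not stated in the excerpt.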

The method of reconstruction of the interaction from scattering data is formulated in the frame of the R-matrix theory, in which the potential is determined by the positions of the resonances E_λ and their reduced widths γ²_λ. In the finite-difference approximation for the Schroedinger equation this new approach makes the logic of the inverse problem (IP) clearer. A possibility of applications of the IP formalism to various nuclear systems is discussed. (author)

The possibility of applying, for analytical purposes, the electrodissolution of precipitated tellurium from the surfaces of graphite and mercury-graphite electrodes under cathodic scanning of the potential is shown. By comparing direct and inversion voltammetry with cathodic and anodic scanning of the potential, variants of a voltammetric method for tellurium determination in artificial solutions are developed and, taking the developed method of layer-by-layer analysis of GaAsTe films as an example, the advantage of the mercury-graphite electrode with cathodic scanning over the graphite electrode with cathodic scanning of the potential is shown. The reproducibility of the GaAs film analysis results from the anodic and cathodic tellurium peaks is satisfactory. The maximum deviation between the results obtained from the tellurium oxidation and reduction peaks does not exceed 15 rel. %. Thus, for tellurium concentrations exceeding 5×10⁻⁶ g-ion/l, both anodic and cathodic scanning of the potential can be used, though the error in tellurium determination from the cathodic peaks is 1.5-2.0 times higher. At tellurium amounts below 5×10⁻⁶ g-ion/l the determination should be carried out using the peaks of anodic tellurium oxidation from the surface of the graphite electrode or the peaks of cathodic tellurium reduction from the surface of the mercury-graphite electrode.

We present a theoretical study of the distribution of Al atoms in zeolite ZSM-5 with Si/Al=47, where we focus on the role of Al–Al interactions rather than on the energetics of Al/Si substitutions at individual sites. Using interatomic potential methods, we evaluate the energies of the full set of symmetrically independent configurations of Al siting in a Si₉₄Al₂O₁₉₂ cell. The equilibrium Al distribution is determined by the interplay of two factors: the energetics of the Al/Si substitution at an individual site, which tends to populate particular T sites (e.g., the T14 site), and the Al–Al interaction, which at this Si/Al ratio maximises Al–Al distances, in general agreement with Dempsey’s rule. However, it is found that the interaction energy changes approximately as the inverse of the square of the distance between the two Al atoms, rather than the inverse of the distance expected if this were merely charge repulsion. Moreover, we find that the anisotropic nature of the framework density plays an important role in determining the magnitude of the interactions, which are not simply dependent on Al–Al distances. - Graphical abstract: Role of Al–Al interactions in high-silica ZSM-5 is shown to be anisotropic in nature and not dependent solely on Coulombic interactions. Highlights: ► Si–Al distribution in ZSM-5 is revisited, stressing the role of the Al–Al interaction. ► Coulomb interactions are not the key factors controlling the Al siting. ► Anisotropy of the framework is identified as a source of departure from Dempsey’s rule.
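The distinction the abstract draws between a 1/d (bare Coulomb) and a 1/d² dependence can be checked on tabulated interaction energies with a simple log-log fit. The sketch below uses synthetic data generated from an assumed inverse-square law; the prefactor and distances are illustrative, not the paper's values:

```python
import math

# hypothetical Al-Al interaction energies tabulated against distance,
# generated here from an assumed pure inverse-square law E = A / d**2
A = 5.0
distances = [4.0, 5.5, 7.0, 8.5, 10.0]
energies = [A / d**2 for d in distances]

# a power law E = A * d**p is linear in log-log space: log E = log A + p log d,
# so the least-squares slope of (log d, log E) estimates the exponent p
xs = [math.log(d) for d in distances]
ys = [math.log(e) for e in energies]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(slope)  # -2: inverse-square, not the -1 of bare Coulomb repulsion
```

On real computed energies the fitted exponent would scatter around -2, and the paper's further point is that the residual scatter is anisotropic rather than purely distance-dependent.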

We revisit gravitino production following inflation. As a first step, we review the standard calculation of gravitino production in the thermal plasma formed at the end of post-inflationary reheating, when the inflaton has completely decayed. Next we consider gravitino production prior to the completion of reheating, assuming that the inflaton decay products thermalize instantaneously while they are still dilute. We then argue that instantaneous thermalization is in general a good approximation, and also show that the contribution of non-thermal gravitino production via the collisions of inflaton decay products prior to thermalization is relatively small. Our final estimate of the gravitino-to-entropy ratio is approximated well by a standard calculation of gravitino production in the post-inflationary thermal plasma assuming total instantaneous decay and thermalization at a time $t \simeq 1.2/\Gamma_\phi$. Finally, in light of our calculations, we consider potential implications of upper limits on the gravitin...

Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but unfortunately it is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that sparse approximate inverses could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrices, the inverse entries typically show piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrices.

In this paper, the authors suggest a revised version of the Superconducting Super Collider (SSC) that employs the planned SSC first stage machine as an injector of 0.5 TeV protons into a power laser accelerator. The recently developed Non-linear Amplification of Inverse Bremsstrahlung Acceleration (NAIBA) concept dictates the scenario of the next stage of acceleration. Post Star Wars lasers, available at several laboratories, can be used for the purpose. The 40 TeV CM energy, a target of the SSC, can be obtained with a new machine which can be 20 times smaller than the planned SSC

To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source, compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate and the spatial resolution of MWI are, respectively, faster and more accurate than those of FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. A possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.

Dynamic Topography Revisited
Dynamic topography is usually considered to be one of the trinity of contributing causes to the Earth's non-hydrostatic topography, along with the long-term elastic strength of the lithosphere and isostatic responses to density anomalies within the lithosphere. Dynamic topography, thought of this way, is what is left over when other sources of support have been eliminated. An alternate and explicit definition of dynamic topography is that deflection of the surface which is attributable to creeping viscous flow. The problem with the first definition of dynamic topography is 1) that the lithosphere is almost certainly a visco-elastic / brittle layer with no absolute boundary between flowing and static regions, and 2) the lithosphere is a thermal / compositional boundary layer in which some buoyancy is attributable to immutable, intrinsic density variations and some is due to thermal anomalies which are coupled to the flow. In each case, it is difficult to draw a sharp line between each contribution to the overall topography. The second definition of dynamic topography does seem cleaner / more precise, but it suffers from the problem that it is not measurable in practice. On the other hand, this approach has resulted in a rich literature concerning the analysis of large-scale geoid and topography and the relation to buoyancy and mechanical properties of the Earth [e.g. refs 1,2,3]. In convection models with viscous, elastic, brittle rheology and compositional buoyancy, however, it is possible to examine how the surface topography (and geoid) are supported and how different ways of interpreting the "observable" fields introduce different biases. This is what we will do. References (a.k.a. homework): [1] Hager, B. H., R. W. Clayton, M. A. Richards, R. P. Comer, and A. M. Dziewonski (1985), Lower mantle heterogeneity, dynamic topography and the geoid, Nature, 313(6003), 541-545, doi:10.1038/313541a0. [2] Parsons, B., and S. Daly (1983), The

The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
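The SEARCH/TREND/INVERT/AVERAGE pipeline itself is not reproduced here, but the core of a stabilized linear inversion of the kind described can be sketched as damped (Tikhonov) least squares on a toy forward operator, with a covariance analysis of the solution in the spirit of the AVERAGE step (the matrix sizes, damping, and noise level are illustrative assumptions, not the code's actual kernel):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy linear forward model d = G m (a stand-in for the Bouguer gravity kernel)
n_data, n_model = 40, 20
G = rng.normal(size=(n_data, n_model))
m_true = np.zeros(n_model)
m_true[8:12] = 1.0                       # a simple subsurface "anomaly"
sigma = 0.01                             # assumed data noise level
d = G @ m_true + sigma * rng.normal(size=n_data)

# stabilized linear inverse: minimize |G m - d|^2 + eps^2 |m|^2
eps = 0.1
A = G.T @ G + eps**2 * np.eye(n_model)
m_est = np.linalg.solve(A, G.T @ d)

# covariance analysis of the solution (cf. the AVERAGE step):
# m_est = Ginv @ d, so Cov(m_est) = sigma^2 * Ginv @ Ginv.T
Ginv = np.linalg.inv(A) @ G.T
covariance = sigma**2 * Ginv @ Ginv.T

print(np.max(np.abs(m_est - m_true)))
```

The damping term eps**2 plays the stabilizing role that iterating TREND and INVERT addresses in the actual code: it keeps the ill-posed normal equations well-conditioned at the cost of a small bias in the recovered topography.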

Mutual fund manager excess performance should be measured relative to their self-reported benchmark rather than the return of a passive portfolio with the same risk characteristics. Ignoring the self-reported benchmark introduces biases in the measurement of stock selection and timing components of excess performance. We revisit baseline empirical evidence in mutual fund performance evaluation utilizing stock selection and timing measures that address these biases. We introduce a new factor e...

We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user...

Inverse scattering theories, algebraic scattering theory and exactly solvable scattering potentials are diverse ways by which scattering potentials can be defined from S-functions specified by fits to fixed-energy, quantal scattering data. Applications have been made in nuclear (heavy-ion and nucleon-nucleus scattering), atomic and molecular (electron scattering from simple molecules) systems. Three inverse scattering approaches are considered in detail: first the semiclassical WKB and fully quantal Lipperheide-Fiedeldey methods, then algebraic scattering theory as applied to heavy-ion scattering, and finally the exactly solvable Ginocchio potentials. Some nuclear results are ambiguous, but the atomic and molecular inversion potentials are in good agreement with postulated forms. 21 refs., 12 figs.

The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...

Adjoint functions have been used with forward functions to compute gradients in implicit (iterative) solution methods for inverse problems in optical tomography, geoscience, thermal science, and other fields, but only once has this approach been used for inverse solutions to the Boltzmann transport equation. In this paper, this approach is used to develop an inverse method that requires only angle-independent flux measurements, rather than angle-dependent measurements as was done previously. The method is applied to a simplified form of the transport equation that does not include scattering. The resulting procedure uses measured values of gamma-ray fluxes of discrete, characteristic energies to determine interface locations in a multilayer shield. The method was implemented with a Newton-Raphson optimization algorithm, and it worked very well in numerical one-dimensional spherical test cases. A more sophisticated optimization method would better exploit the potential of the inverse method.
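The essence of the method — Newton-Raphson on scattering-free (uncollided) attenuation to locate an interface from a measured flux — can be sketched for a hypothetical two-layer slab (the attenuation coefficients and geometry are illustrative, not the paper's multilayer shield model, and the real method optimizes via adjoint-computed gradients rather than this analytic derivative):

```python
import math

# hypothetical two-layer slab of total thickness L with attenuation
# coefficients mu1, mu2 and an unknown interface position x
mu1, mu2, L, I0 = 0.5, 1.5, 10.0, 1.0
x_true = 3.0

# "measured" uncollided flux for a characteristic gamma-ray line
measured = I0 * math.exp(-(mu1 * x_true + mu2 * (L - x_true)))

def f(x):
    # misfit between the predicted and measured uncollided flux
    return I0 * math.exp(-(mu1 * x + mu2 * (L - x))) - measured

def fprime(x):
    # analytic derivative of the misfit with respect to the interface position
    return (mu2 - mu1) * I0 * math.exp(-(mu1 * x + mu2 * (L - x)))

x = 5.0                       # initial guess: mid-slab
for _ in range(30):
    x -= f(x) / fprime(x)     # Newton-Raphson update
print(x)  # converges to 3.0
```

With several characteristic energies (hence several independent attenuation equations) the same Newton-Raphson machinery resolves multiple interface locations simultaneously, which is the multilayer problem the paper actually solves.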

Basin inversion is an intermediate-scale manifestation of continental intraplate deformation, which produces earthquake activity in the interior of continents. The sedimentary basins of central Europe, inverted in the Late Cretaceous-Paleocene, represent a classic example of this phenomenon. It is known that inversion of these basins occurred in two phases: an initial one of transpressional shortening involving reverse activation of former normal faults, and a subsequent one of uplift of the earlier developed inversion axis and a shift of sedimentary depocentres, and that this is a response to changes in the regional intraplate stress field. This European intraplate deformation is considered in the context of a new model of the present-day stress field of Europe (and the North Atlantic) caused by lithospheric potential energy variations. Stresses causing basin inversion of Europe must have been...

This book begins with the fundamentals of the generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, imbedding method, finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of the generalized inverses with an undergraduate-level understanding of linear algebra.

At the current historical juncture in which differences and inequalities are surfacing greater than ever in the world, societies, and schools, the main goal of this essay is to revisit the aspects of structuralism that can potentially contribute productively to understanding the invisible structures and forces that everyone carries (mostly…

The application of supersymmetric quantum mechanics to the inverse scattering problem is reviewed. The main difference with standard treatments of the inverse problem lies in the simple and natural extension to potentials with singularities at the origin and with a Coulomb behaviour at infinity. The most general form of potentials which are phase-equivalent to a given potential is discussed. The use of singular potentials allows adding or removing states from the bound spectrum without contradicting the Levinson theorem. Physical applications of phase-equivalent potentials in nuclear reactions and in three-body systems are described. Derivation of a potential from the phase shift at fixed orbital momentum can also be performed with the supersymmetric inversion by using a Bargmann-type approximation of the scattering matrix or phase shift. A unique singular potential without bound states can be obtained from any phase shift. A limited number of bound states depending on the singularity can then be added. This inversion procedure is illustrated with nucleon-nucleon scattering

due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...

A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
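In the linear-Gaussian setting described, the posterior expectation and covariance are indeed available in closed form, which is what makes the inversion fast. A minimal sketch with a toy forward operator (all sizes, covariances, and noise levels are illustrative assumptions, not the convolutional Zoeppritz-based model of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# tiny linear-Gaussian model: d = G m + e, prior m ~ N(m0, Sm), noise e ~ N(0, Se)
n_d, n_m = 30, 5
G = rng.normal(size=(n_d, n_m))          # stand-in for the linearized forward map
m_prior = np.zeros(n_m)
S_m = np.eye(n_m)                        # prior covariance
S_e = 0.05**2 * np.eye(n_d)              # noise covariance

m_true = rng.normal(size=n_m)
d = G @ m_true + 0.05 * rng.normal(size=n_d)

# explicit Gaussian posterior: no sampling needed in the linear case
Se_inv = np.linalg.inv(S_e)
S_post = np.linalg.inv(G.T @ Se_inv @ G + np.linalg.inv(S_m))
m_post = S_post @ (G.T @ Se_inv @ d + np.linalg.inv(S_m) @ m_prior)

# exact 95% prediction intervals straight from the posterior covariance
half_width = 1.96 * np.sqrt(np.diag(S_post))
print(half_width)
```

When the model becomes nonlinear or the priors non-Gaussian (as in the Heidrun example with wavelet and noise uncertainty), this closed form is lost, which is exactly where the Gibbs-sampler exploration described above takes over.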

The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function while constraining the $\ell_1$ norm of the solution, i.e. of the inverse data. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal.
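A common way to realize such a sparsity-promoting ($\ell_1$-constrained least-squares) inversion is iterative soft thresholding (ISTA). The following is a generic sketch under assumed toy data, not the authors' production algorithm for inverse-data-space multiple removal:

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=2000):
    """Minimize 0.5*||A x - b||_2^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L             # gradient step on the misfit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# toy sparse recovery: 10 unknowns, only 2 nonzero
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
x_hat = ista(A, A @ x_true)
```

The soft-threshold step drives most coefficients to exactly zero, which is the mechanism by which the $\ell_1$ constraint yields a sparse solution.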

Schroedinger's original quantization procedure is revisited in the light of Nelson's stochastic framework of quantum mechanics. It is clarified why Schroedinger's proposal of a variational problem led to a true description of quantum mechanics.

The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.

Purpose - The overall purpose of this study is to explore tourists' perceptions and their intention to revisit Norway. The aim is to find out which factors drive the overall satisfaction, the willingness to recommend and the revisit intention of international tourists who spend their holiday in Norway. Design-Method-Approach - The Theory of Planned Behavior (Ajzen, 1991) is used as a framework to investigate tourists' intention and behavior towards Norway as a destination. The o...

The complexity of life in 21st century society requires new models for leading and managing change. With that in mind, this paper revisits the model for Advanced Change Theory (ACT) as presented by Quinn, Spreitzer, and Brown in their article, “Changing Others Through Changing Ourselves: The Transformation of Human Systems” (2000). The authors present ACT as a potential model for facilitating change in complex organizations. This paper presents a critique of the article and summarizes opportunities for further exploring the model in the light of current trends in developmental and integral theory.

We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known to have experimental geometries that are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and only local model space exploration. McMC techniques are used for exhaustive sampling of the model space without the need of prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows us to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow us to derive a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test
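The core of such an McMC inversion is repeated proposal of candidate models, acceptance by the Metropolis rule, and computation of means and standard deviations from the resulting ensemble. A minimal one-parameter sketch (a single constant slowness along straight rays; all numbers here are hypothetical toy values, not the Namibia survey):

```python
import numpy as np

def metropolis(logpost, x0, n_steps=20000, step=0.01, seed=0):
    """Random-walk Metropolis sampler; returns the chain of model samples."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    lp = logpost(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=x.shape)   # random-walk proposal
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# toy "traveltime" problem: t = slowness * ray length, Gaussian misfit
lengths = np.array([10.0, 20.0, 30.0])
s_true, sigma = 0.5, 0.01
data = s_true * lengths                              # noise-free traveltimes
logpost = lambda s: -0.5 * np.sum((data - s * lengths) ** 2) / sigma**2
chain = metropolis(logpost, [0.2])
s_mean = chain[5000:].mean()                         # discard burn-in, then average
s_std = chain[5000:].std()                           # uncertainty estimate
```

The ensemble mean plays the role of the reference solution and the ensemble standard deviation the role of the error estimate discussed in the abstract.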

Absorptive capacity has mostly been perceived as a 'passive' outcome of R&D investments. Recently, however, a growing interest in its 'proactive' potentials has emerged. This paper taps into this development and proposes a dynamic model for conceptualizing the determinants of the complementary learning processes of absorptive capacity, which comprise combinative and adaptive capabilities. Drawing on survey data (n=169), the study concludes that combinative capabilities primarily enhance transformative and exploratory learning processes, while adaptive capabilities strengthen all three learning...

Applying two different versions of QCD sum rules, we rigorously reanalyze the rich spectroscopy of mesons and baryons built from charm and beauty quarks. An improved determination of the masses and the leptonic decay constants of B_c(bc-bar), B_c*(bc-bar), and Λ(bcu) is presented. Our optimal results, constrained by stability criteria, are consistent in both versions and support the general pattern common to potential-model predictions.

Evidence from regional stratigraphical patterns in Santonian−Campanian chalk is used to infer the presence of a very broad channel system (5 km across) with a depth of at least 50 m, running NNW−SSE across the eastern Isle of Wight; only the western part of the channel wall and fill is exposed. W...−Campanian chalks in the eastern Isle of Wight, involving penecontemporaneous tectonic inversion of the underlying basement structure, are rejected.

This research thesis reports on the use of water-in-oil micro-emulsions as a reaction medium. Only the 'inverse micelles' domain of the ternary mixture (water/AOT/isooctane) has been studied. The main issues addressed were: the disturbance of the micro-emulsion in the presence of reactants, the determination of reactant distribution and the resulting kinetic theory, the effect of the interface on electron transfer reactions, and finally protein solubilization.

A new method for laser acceleration is proposed based upon the inverse process of transition radiation. The laser beam intersects an electron beam traveling between two thin foils. The principle of this acceleration method is explored in terms of its classical and quantum bases and its inverse process. A closely related concept based on the inverse of diffraction radiation is also presented; this concept has the significant advantage that apertures are used to allow free passage of the electron beam. These concepts can produce net acceleration because they do not satisfy the conditions under which the Lawson-Woodward theorem applies (no net acceleration in an unbounded vacuum). Finally, practical constraints such as damage limits of the optics are used to find an optimized set of parameters. For reasonable assumptions, an acceleration gradient of 200 MeV/m requiring a laser power of less than 1 GW is projected. An interesting approach to multi-staging the acceleration sections is also presented. copyright 1997 American Institute of Physics

Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above by 50. The best fitting structure is dominantly one dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons.
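The flavor of the algebraic approach can be seen on a toy system (unrelated to the magnetotelluric application): a Groebner basis exposes the structure of the ideal, and the number of solutions is finite and bounded by the Bezout number, the product of the equation degrees.

```python
from sympy import symbols, groebner, solve_poly_system

x, y = symbols('x y')
# a small polynomial system of the kind produced by discretization
eqs = [x**2 + y**2 - 2, x - y]

gb = groebner(eqs, x, y)            # Groebner basis reveals the ideal's structure
sols = solve_poly_system(eqs, x, y) # here: the two points (1, 1) and (-1, -1)
n = len(sols)                       # finite, bounded by the Bezout number 2 * 1 = 2
```

For the magnetotelluric problem the same machinery is applied to a much larger system, where the bound on the solution count (50 in the abstract) is what makes an exhaustive characterization feasible.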

Satellite potentials in the outer plasmasphere range from near zero to +5 to +10 V. Under such conditions ion measurements may not include the low energy core of the plasma population. In eclipse, the photoelectron current drops to zero, and the spacecraft potential can drop to near zero volts. In regions where the ambient plasma density is below 100 cm^-3, previously unobserved portions of the ambient plasma distribution function can become visible in eclipse. A survey of the data obtained from the retarding ion mass spectrometer (RIMS) on Dynamics Explorer 1 shows that the RIMS detector generally measured the isotropic background in both sunlight and eclipse in the plasmasphere. Absolute density measurements for the "hidden" ion population are obtained for the first time using the plasma wave instrument observations of the upper hybrid resonance. Agreement in total density is found in sunlight and eclipse measurements at densities above 80 cm^-3. In eclipse, agreement is found at densities as low as 20 cm^-3. The isotropic plasma composition is primarily H^+, with approximately 10% He^+, and 0.1 to 1.0% O^+. A low energy field-aligned ion population appears in eclipse measurements outside the plasmasphere, which is obscured in sunlight. These field-aligned ions can be interpreted as field-aligned flows with densities of a few particles per cubic centimeter, flowing at 5-20 km/s. The problem in measuring these field-aligned flows in sunlight is the masking of the high energy tail of the field-aligned distribution by the isotropic background. Effective measurement of the core of the magnetospheric plasma distribution awaits satellites with active means of controlling the satellite potential.

Cosmological phase transitions are examined using a new approach based on the dynamical analysis of the equations of motion of quantum fields rather than on static effective-potential considerations. In many models the universe enters a period of exponential expansion required for an inflationary cosmology. Analytical methods show that this will be the case if the interaction rate due to quantum field nonlinearities is small compared to the expansion rate of the universe. The authors derive a heuristic criterion for the maximal value of the coupling constant for which inflation is expected. The prediction is in good agreement with numerical results.

Alley in order to avoid the bullets of the Bosnian Serbian snipers positioned around the city. Based on a close reading of Sala’s work, this article will scrutinize how subjectivating techniques of power, during times of war, affectively work to create boundaries between those excluded from and those included within humanity. Conversely, focusing on how these techniques are being questioned within the work, I will discuss the resistance potential of what I will refer to as practices of subjectivization. Eventually, I will seek to position the “war-critical” strategy of the work within a broader context of the late modern war paradigm.

We present a scheme for playing quantum repeated 2 × 2 games based on Marinatto and Weber’s approach to quantum games. As a potential application, we study the twice-repeated Prisoner’s Dilemma game. We show that results not available in the classical game can be obtained when the game is played in the quantum way. Before we present our idea, we comment on the previous scheme for playing quantum repeated games proposed by Iqbal and Toor. We point out the drawbacks that make their results unacceptable.

As a state-of-the-art method, the rainflow counting technique is now applied throughout fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is mostly used as a mere data-reduction technique, disregarding some of the inherent information in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process.

At the Children's Hospital of Georgia (CHOG), we found that outpatient revisits for pediatric asthma were significantly above national norms. According to the NIH, costly hospital revisits for asthma can be prevented through guidelines-based self-management of asthma, central to which is the use of a written Asthma-Action Plan (AAP). The asthma services literature has emphasized the role of the healthcare provider in promoting asthma self-management using the AAP, to prevent hospital revisits. On the other hand, the asthma policy literature has emphasized the need for community-based interventions to promote asthma self-management. A gap remains in understanding the extent of leverage that healthcare providers may have in preventing hospital revisits for asthma through effective communication of the AAP in the outpatient setting. Our study sought to address this gap. We conducted a 6-month intervention to implement "patient-and-family-centered communication of the AAP" in CHOG outpatient clinics, based on the "change-management" theoretical framework. Provider communication of the AAP was assessed through a survey of "Parent Understanding of the Child's AAP." A quasi-experimental approach was used to measure outpatient revisits for pediatric asthma, pre- and post-intervention. Survey results showed that provider communication of the AAP was unanimously perceived highly positively by parents of pediatric asthma patients, across various metrics of patient-centered care. However, there were no statistically significant differences in outpatient "revisit behavior" for pediatric asthma between pre- and post-intervention periods after controlling for several demographic variables. Additionally, revisits remained significantly above national norms. Results suggest limited potential of "effective provider communication of the AAP" in reducing outpatient revisits for pediatric asthma, and indicate the need for broader community-based interventions to address patient life variables.

Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010. Nowadays, earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data.

The incentives for separating and eliminating various elements from radioactive waste prior to final geologic disposal were investigated. Exposure pathways to humans were defined, and potential radiation doses to an individual living within the region of influence of the underground storage site were calculated. The assumed radionuclide source was 1/5 of the accumulated high-level waste from the US nuclear power economy through the year 2000. The repository containing the waste was assumed to be located in a reference salt site geology. The study required numerous assumptions concerning the transport of radioactivity from the geologic storage site to man. The assumptions used maximized the estimated potential radiation doses, particularly in the case of the intrusion water well scenario, where hydrologic flow field dispersion effects were ignored. Thus, incentives for removing elements from the waste tended to be maximized. Incentives were also maximized by assuming that elements removed from the waste could be eliminated from the earth without risk. The results of the study indicate that for reasonable disposal conditions, incentives for partitioning any elements from the waste in order to minimize the risk to humans are marginal at best

The aim of all current forms of treatment of achalasia is to enable the patient to eat without disabling symptoms such as dysphagia, regurgitation, coughing or choking. Historically, this has been accomplished by mechanical disruption of the lower esophageal sphincter fibres, either by means of pneumatic dilation (PD) or by open surgical myotomy. The addition of laparoscopic myotomy and botulinum toxin (BTX) injection to the therapeutic armamentarium has triggered a recent series of reviews to determine the optimal therapeutic approach. Both PD and BTX have excellent short-term (less than three months) efficacy in the majority of patients. New data have been published suggesting that PD and BTX (with repeat injections) can potentially achieve long-term efficacy. PD is still considered the first-line treatment by most physicians; its main disadvantage is the risk of perforation. BTX injection is evolving as an excellent, safe option for patients who are considered high risk for more invasive procedures. Laparoscopic myotomy with combined antireflux surgery is an increasingly attractive option in younger patients with achalasia, but long-term follow-up studies are required to establish its efficacy and the potential for reflux-related sequelae.

We revisit gravitino production following inflation. As a first step, we review the standard calculation of gravitino production in the thermal plasma formed at the end of post-inflationary reheating when the inflaton has completely decayed. Next we consider gravitino production prior to the completion of reheating, assuming that the inflaton decay products thermalize instantaneously while they are still dilute. We then argue that instantaneous thermalization is in general a good approximation, and also show that the contribution of non-thermal gravitino production via the collisions of inflaton decay products prior to thermalization is relatively small. Our final estimate of the gravitino-to-entropy ratio is approximated well by a standard calculation of gravitino production in the post-inflationary thermal plasma assuming total instantaneous decay and thermalization at a time t ≅ 1.2/Γ_φ. Finally, in light of our calculations, we consider potential implications of upper limits on the gravitino abundance for models of inflation, with particular attention to scenarios for inflaton decays in supersymmetric Starobinsky-like models.

The survival processing paradigm is designed to explore the adaptive nature of memory functioning. The mnemonic advantage of processing information in fitness-relevant contexts, as has been demonstrated using this paradigm, is now well established, particularly in young adults; this phenomenon is often referred to as the “survival processing effect.” In the current experiment, we revisited the investigation of this effect in children and tested it in a new cultural group, using a procedure that differs from the existing studies with children. A group of 40 Portuguese children rated the relevance of unrelated words to a survival scenario and a new moving scenario. This encoding task was followed by a surprise free-recall task. Akin to what is typically found, survival processing produced better memory performance than the control condition (moving). These data put on firmer ground the idea that a mnemonic tuning to fitness-relevant encodings is present early in development. The theoretical importance of this result to the adaptive memory literature is discussed, as well as potential practical implications of this kind of approach to the study of memory in children.

Inverse design can be a useful strategy for discovering interactions that drive particles to spontaneously self-assemble into a desired structure. Here, we extend an inverse design methodology—relative entropy optimization—to determine isotropic interactions that promote assembly of targeted multicomponent phases, and we apply this extension to design interactions for a variety of binary crystals ranging from compact triangular and square architectures to highly open structures with dodecagonal and octadecagonal motifs. We compare the resulting optimized (self- and cross) interactions for the binary assemblies to those obtained from optimization of analogous single-component systems. This comparison reveals that self-interactions act as a "primer" to position particles at approximately correct coordination shell distances, while cross interactions act as the "binder" that refines and locks the system into the desired configuration. For simpler binary targets, it is possible to successfully design self-assembling systems while restricting one of these interaction types to be a hard-core-like potential. However, optimization of both self- and cross interaction types appears necessary to design for assembly of more complex or open structures.

Has the orthodoxy of progressive pedagogy, or what many praise as student-centered learning, become a means of an overall managerial turn that erodes students’ freedom to learn? This is the main question in Bruce Macfarlane’s book Freedom to Learn - The Threat to Student Academic Freedom and Why it Needs to be Reclaimed (2017). In eight well-written chapters, Macfarlane explores an often-overlooked paradox in higher education teaching and learning: the idea of student-centered learning, deriving from humanist psychology and progressive pedagogy, has been hijacked by increased and continuous demands of bodily, cognitive and emotional performance that restrict students’ freedom to develop as autonomous adults. Macfarlane’s catch-22 is, however, that his heritage from humanist psychology, i.e. the idea that we as humans are born with an inner potential that we should be free to realise through education...

We all take special care when holding a tiny baby. This is partly because we know that a baby's head is particularly vulnerable, as it is still 'soft' and the protective skull is yet forming. Skull growth continues until late adolescence and its proper functioning is crucial. Craniosynostosis, an inherited genetic condition, is characterized by the premature closure of sutures of the skull, with effects that are wide-ranging and potentially devastating. Normally, sutures and fontanelles allow the bones of the cranial vault to overlap during birth, thus acting as an expansion joint, enabling the bone to enlarge evenly as the brain grows and resulting in a symmetrically shaped skull. However, craniosynostosis occurs due to mutations in the homeobox genes MSX2 and ALX4 or in the fibroblast growth factor receptor genes (FGFR1, FGFR2, FGFR3), thus explaining its association with Apert, Crouzon, Saethre-Chotzen, Pfeiffer and Carpenter syndromes.

It's going to be a hot summer at CERN. At least in the Main Building, where from 13 July to 20 August an exhibition is being hosted on nuclear fusion, the energy of the Stars. Nuclear fusion is the engine driving the stars but also a potential source of energy for mankind. The exhibition shows the different nuclear fusion techniques and research carried out on the subject in Europe. Inaugurated at CERN in 1993, following collaboration between Lausanne's CRPP-EPFL and CERN, with input from Alessandro Pascolini of Italy's INFN, this exhibition has travelled round Europe before being revamped and returning to CERN. 'Fusion, Energy of the Stars', from 13 July onwards, Main Building

This paper revisits the Lévy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns, and the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lens of the Lévy sections theorem one can find common patterns in otherwise very different data sets.
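The sectioning idea can be sketched as follows: cut the series into variable-length sections whose accumulated squared returns reach a fixed threshold; the section sums then lose the excess kurtosis induced by volatility variation. A toy illustration with synthetic randomly switching volatility (not the actual exchange-rate data):

```python
import numpy as np

def levy_sections(returns, target_var):
    """Split a return series into sections whose realized variance reaches
    target_var, and return the sum of each section."""
    sums, acc, var = [], 0.0, 0.0
    for r in returns:
        acc += r
        var += r * r
        if var >= target_var:         # section complete: variance budget reached
            sums.append(acc)
            acc, var = 0.0, 0.0
    return np.array(sums)

def excess_kurtosis(x):
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

# fat-tailed toy returns: Gaussian with volatility jumping between 0.5 and 2.0
rng = np.random.default_rng(2)
vol = np.where(rng.random(200000) < 0.5, 0.5, 2.0)
r = vol * rng.normal(size=200000)
k_raw = excess_kurtosis(r)                          # clearly positive (fat tails)
k_sec = excess_kurtosis(levy_sections(r, 200.0))    # close to zero (near-Gaussian)
```

Sections with equal realized variance play the role of the Lévy sections; their sums are approximately Gaussian even though the raw returns are strongly leptokurtic.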

Whereas digital technologies are often depicted as being capable of disrupting long-standing power structures and facilitating new governance mechanisms, the power reinforcement framework suggests that information and communications technologies tend to strengthen existing power arrangements within public organizations. This article revisits the 30-year-old power reinforcement framework by means of an empirical analysis of the use of mobile technology in a large-scale programme in Danish public-sector home care. It explores whether and to what extent administrative management has controlled decision-making and gained most benefits from mobile technology use, relative to the effects of the technology on the street-level workers who deliver services. Current mobile technology-in-use might be less likely to be power reinforcing because it is far more decentralized and individualized than the mainly expert...

Histamine (or scombroid) fish poisoning (HFP) is reviewed in a risk-assessment framework in an attempt to arrive at an informed characterisation of risk. Histamine is the main toxin involved in HFP, but the disease is not uncomplicated histamine poisoning. Although it is generally associated with high levels of histamine (≥ 50 mg/100 g) in bacterially contaminated fish of particular species, the pathogenesis of HFP has not been clearly elucidated. Various hypotheses have been put forward to explain why histamine consumed in spoiled fish is more toxic than pure histamine taken orally, but none has proved totally satisfactory. Urocanic acid, like histamine, an imidazole compound derived from histidine in spoiling fish, may be the "missing factor" in HFP. cis-Urocanic acid has recently been recognised as a mast cell degranulator, and endogenous histamine from mast cell degranulation may augment the exogenous histamine consumed in spoiled fish. HFP is a mild disease, but is important in relation to food safety and international trade. Consumers are becoming more demanding, and litigation following food poisoning incidents is becoming more common. Producers, distributors and restaurants are increasingly held liable for the quality of the products they handle and sell. Many countries have set guidelines for maximum permitted levels of histamine in fish. However, histamine concentrations within a spoiled fish are extremely variable, as is the threshold toxic dose. Until the identity, levels and potency of possible potentiators and/or mast-cell-degranulating factors are elucidated, it is difficult to establish regulatory limits for histamine in foods on the basis of potential health hazard. Histidine decarboxylating bacteria produce histamine from free histidine in spoiling fish. Although some are present in the normal microbial flora of live fish, most seem to be derived from post-catching contamination on board fishing vessels, at the processing plant or in the

Using the inversion with respect to a sphere in coordinate space, the equivalence between Schroedinger equations with different potentials is established. It is shown that the zero-energy equation for a spherically symmetric potential is equivalent to the equation with an axially symmetric potential of a special form. Particular exact solutions of the zero-energy problem are found for the motion of a particle in the field of two Maxwell "fish-eye" potentials and of potentials with two Coulomb singularities.

The problem of identifying multiphonon states in vibrational nuclei is discussed. It is shown that an examination of the excitation patterns provides an adequate filter for selecting good or potentially good vibrational nuclei, since the global nuclear properties (such as the level energies) are less strongly perturbed by the presence of additional structures than the local properties (such as the wave functions and the transition probabilities). The energies of the first 2+ states are systematically low by about 15% with respect to the values expected from the global nuclear properties. This appears to contradict the general belief that these states have a high purity. The experimental results are compared with the predictions of the Brink model. The conclusion is that the predictions are quite good, but that the one-phonon energy must be renormalized, i.e. increased by about 15%. Since the modified Brink method involves only the use of a virtual 2_1^+ energy and no level fit, a problem of weights cannot be invoked. The calculations confirm the existence of multiphonon states at high excitation energies and the persistence of the symmetry properties well inside regions where one would expect the appearance of disorder.

When a high-voltage direct current is applied to two beakers filled with water or a polar liquid dielectric, a horizontal bridge forms between the two beakers. This experiment was first carried out by Lord Armstrong in 1893 and then forgotten until recently. Such bridges are stabilized by electrohydrodynamic (EHD) forces caused by electric field gradients counteracting gravity. Due to these gradients, a permanent pumping of liquid from one beaker into the other is observed. At the macroscopic scale, several properties of a horizontal water bridge can be explained by modern electrohydrodynamics, which analyzes the motion of fluids in electric fields. Whereas on the molecular scale water can be described by quantum mechanics, there is a conceptual gap at the mesoscopic scale that is bridged by a number of theories, including quantum-mechanical entanglement and coherent structures in water - theories that we discuss here. Much of the phenomenon is already understood, but even more can still be learned from it, since such "floating" liquid bridges resemble a small high-voltage laboratory of their own: the physics of liquids in electric fields of some kV/cm can be studied, and even long-duration experiments like neutron or light scattering are feasible, since the bridge is in a steady-state equilibrium and can be kept stable for hours. It is also an electrochemical reactor in which compounds are transported by the EHD flow, enabling the study of electrochemical reactions under potentials that are otherwise not easily accessible. Last but not least, the bridge provides the experimental biologist with the opportunity to expose living organisms such as bacteria to electric fields without killing them, but with a significant influence on their behavior and possibly even on their genome.

A complete, isostructural series of lanthanide complexes (except Pm) with the ligand TREN-1,2-HOIQO has been synthesized and structurally characterized by means of single-crystal X-ray analysis. All complexes are 1D-polymeric species in the solid state, with the lanthanide being in an eight-coordinate, distorted trigonal-dodecahedral environment with a donor set of eight unique oxygen atoms. This series constitutes the first complete set of isostructural lanthanide complexes with a ligand of denticity greater than two. The geometric arrangement of the chelating moieties deviates slightly across the lanthanide series, as analyzed by a shape parameter metric based on the comparison of the dihedral angles along all edges of the coordination polyhedron. The apparent lanthanide contraction in the individual Ln-O bond lengths deviates considerably from the expected quadratic decrease that was found previously in a number of complexes with ligands of low denticity. The sum of all bond lengths around the trivalent metal cation, however, is more regular, showing an almost ideal quadratic behavior across the entire series. The quadratic nature of the lanthanide contraction is derived theoretically from Slater's model for the calculation of ionic radii. In addition, the sum of all distances along the edges of the coordination polyhedron shows exactly the same quadratic dependence as the Ln-X bond lengths. The universal validity of this coordination sphere contraction, concomitant with the quadratic decrease in Ln-X bond lengths, was confirmed by reexamination of four other, previously published, almost complete series of lanthanide complexes. Due to the importance of multidentate ligands for the chelation of rare-earth metals, this result provides a significant advance for the prediction and rationalization of the geometric features of the corresponding lanthanide complexes, with great potential impact for all aspects of lanthanide coordination.

medical treatment can reverse infertility. When an unassisted pregnancy is not achieved, assisted reproductive techniques ranging from intrauterine insemination to in vitro fertilization to the acquisition of viable sperm from the ejaculate or directly from the testes through testicular sperm extraction or testicular microdissection can also be used, depending on the woman's potential for pregnancy and the quality and quantity of the sperm. PMID:23503957

The importance of radiolytic oxidation in graphite-moderated CO2-cooled reactors has long been recognised, especially in the Advanced Gas-Cooled Reactors (AGRs), where potential rates are higher because of the higher gas pressure and ratings than in the earlier Magnox designs. In all such reactors, the rate of oxidation is partly inhibited by the CO produced in the reaction and, in the AGR, further reduced by the deliberate addition of CH4. Significant roles are also played by H2 and H2O. This paper briefly reviews the mechanisms of these processes and the data on which they are based. However, operational experience has demonstrated that these basic principles are unsatisfactory in a number of respects. Gilsocarbon graphites produced by different manufacturers have shown a significant difference in oxidation rate despite a similar specification and apparent equivalence in their pore size and distribution, considered to be the dominant influence on oxidation rate for a given coolant-gas composition. Separately, the inhibiting influence of CH4, which for many years had been considered to arise from the formation of a sacrificial deposit on the pore walls, cannot adequately be explained by the actual quantities of such deposits found in monitoring samples, which frequently contain far less deposited carbon than do samples from Magnox reactors, where the only source of such deposits is the CO. The paper also describes the current status of moderator weight-loss predictions for Magnox and AGR moderators and the validation of the POGO and DIFFUSE6 codes respectively.

It is a common occurrence in the balance function laboratory to evaluate patients in the post-acute period following unilateral vestibular system impairment. It is important to be able to differentiate spontaneous nystagmus (SN) emanating from peripheral vestibular system impairments from asymmetric gaze-evoked nystagmus (GEN) that originates from central ocular motility impairment. The purpose of this report is to describe the three elements of Alexander's Law (AL) that have been used to define SN from unilateral peripheral impairment. Additionally, a fourth element is described (i.e., augmentation of spontaneous nystagmus from unilateral peripheral vestibular system impairment) that differentiates nystagmus of peripheral vestibular system origin from nystagmus that originates from a central eye movement disorder. The design was a series of case reports; case data were obtained from two patients, both showing a nystagmus that followed AL, and no interventions were administered. Videonystagmography (VNG), rotational, vestibular evoked myogenic potential (VEMP), and neuro-imaging studies are presented for each patient. The nystagmus in Case 1 occurred as a result of a unilateral, peripheral, vestibular system impairment. The nystagmus was direction-fixed and intensified in the vision-denied condition. The nystagmus in Case 2, identical in appearance to that in Case 1, was an asymmetric gaze-evoked nystagmus originating from a space-occupying lesion in the cerebello-pontine angle. Unlike Case 1, this nystagmus did not augment in the vision-denied condition. Although nystagmus following AL usually occurs in acute peripheral vestibular system impairment, it can occur in cases of central eye movement impairment. The key element is whether the SN that follows AL is attenuated or augmented in the vision-denied condition. The SN from a unilateral peripheral vestibular system impairment should augment in the vision-denied condition. An asymmetric GEN will either not augment, decrease in magnitude, or disappear entirely in the vision-denied condition.

Inversion polymorphisms constitute an evolutionary puzzle: they should increase embryo mortality in heterokaryotypic individuals but still they are widespread in some taxa. Some insect species have evolved mechanisms to reduce the cost of embryo mortality but humans have not. In birds, a detailed analysis is missing although intraspecific inversion polymorphisms are regarded as common. In Australian zebra finches (Taeniopygia guttata), two polymorphic inversions are known cytogenetically and we set out to detect these two and potentially additional inversions using genomic tools and study their effects on embryo mortality and other fitness-related and morphological traits. Using whole-genome SNP data, we screened 948 wild zebra finches for polymorphic inversions and describe four large (12-63 Mb) intraspecific inversion polymorphisms with allele frequencies close to 50 %. Using additional data from 5229 birds and 9764 eggs from wild and three captive zebra finch populations, we show that only the largest inversions increase embryo mortality in heterokaryotypic males, with surprisingly small effect sizes. We test for a heterozygote advantage on other fitness components but find no evidence for heterosis for any of the inversions. Yet, we find strong additive effects on several morphological traits. The mechanism that has carried the derived inversion haplotypes to such high allele frequencies remains elusive. It appears that selection has effectively minimized the costs associated with inversions in zebra finches. The highly skewed distribution of recombination events towards the chromosome ends in zebra finches and other estrildid species may function to minimize crossovers in the inverted regions.

Phase wrapping in the frequency domain (or cycle skipping in the time domain) is the major cause of the local-minima problem in waveform inversion. The unwrapped phase has the potential to provide us with a robust and reliable waveform inversion.

A fundamental problem which must be resolved in virtually all non-trivial robotic operations is the well-known inverse kinematic question. More specifically, most of the tasks which robots are called upon to perform are specified in Cartesian (x, y, z) space, such as simple tracking along one or more straight-line paths or following a specified surface with compliant force sensors and/or visual feedback. In all cases, control is actually implemented through coordinated motion of the various links which comprise the manipulator, i.e., in link space. As a consequence, the control computer of every sophisticated anthropomorphic robot must contain provisions for solving the inverse kinematic problem which, in the case of simple, non-redundant position control, involves the determination of the first three link angles, θ1, θ2, and θ3, which produce a desired wrist origin position (P_xw, P_yw, P_zw) at the end of link 3 relative to some fixed base frame. Researchers outline a new inverse kinematic solution and demonstrate its potential via some recent computer simulations. They also compare it to current inverse kinematic methods and outline some of the remaining problems which will be addressed in order to render it fully operational. Also discussed are a number of practical consequences of this technique beyond its obvious use in solving the inverse kinematic question.
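The flavor of such closed-form solutions can be illustrated on the textbook two-link planar arm: once the base rotation θ1 = atan2(y, x) is factored out, positioning the wrist of a three-link anthropomorphic arm reduces to a planar two-link problem. This is a generic sketch of that standard geometry, not the new method the abstract describes; link lengths and targets are made up.

```python
import math

# Closed-form inverse kinematics for a 2-link planar arm.
# Returns joint angles (t1, t2) that place the end effector at (x, y).
def two_link_ik(x, y, l1, l2, elbow_up=True):
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    s2 = math.sqrt(1.0 - c2 * c2)
    if elbow_up:                 # the two elbow configurations differ
        s2 = -s2                 # only in the sign of sin(t2)
    t2 = math.atan2(s2, c2)
    # Shoulder angle: target bearing minus the offset caused by link 2.
    t1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return t1, t2

# Verify the solution by running it back through forward kinematics.
t1, t2 = two_link_ik(1.2, 0.5, l1=1.0, l2=1.0)
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
print(round(fx, 6), round(fy, 6))  # → 1.2 0.5
```

The round-trip through forward kinematics is the usual sanity check for any inverse kinematic routine, closed-form or iterative.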

This paper investigates the impact of a new metric recently published [R. Plamondon and C. Ouellet-Plamondon, in On Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories, edited by K. Rosquist, R. T. Jantzen, and R. Ruffini (World Scientific, Singapore, 2015), p. 1301] for studying the space-time geometry of a static symmetric massive object. This metric depends on a complementary error function (erfc) potential that characterizes the emergent gravitation field predicted by the model. This results in two types of deviations as compared to computations made on the basis of a Newtonian potential: a constant and a radial outcome. One key feature of the metric is that it postulates the existence of an intrinsic physical constant σ, the massive object-specific proper length that scales measurements in its surroundings. Although σ must be evaluated experimentally, we use a heuristic to estimate its value and point out some latent relationships between the Hubble constant, the secular increase in the astronomical unit, and the Pioneer delay. Indeed, highlighting the systematic errors that emerge when the effect of σ is neglected, one can link the Hubble constant H0 to σ_Sun and the secular increase V_AU to σ_Earth. The accuracy of the resulting numerical predictions, H0 = 74.42(0.02) (km/s)/Mpc and V_AU ≅ 7.8 cm/yr, calls for more investigations of this new metric by specific experts. Moreover, we investigate the expected impacts of the new metric on the flyby anomalies, and we revisit the Pioneer delay. It is shown that both phenomena could be partly taken into account within the context of this unifying paradigm, with quite accurate numerical predictions. A correction for the osculating asymptotic velocity at the perigee of the order of 10 mm/s and an inward radial acceleration of 8.34 × 10^-10 m/s^2 affecting the Pioneer spacecraft could be explained by this new model.

Quark potential models with an energy-independent central potential have been successful in describing the conventional charmonium states, especially below the open-charm threshold. In general, however, the interquark potential is energy-dependent, and this dependence should become stronger for higher-lying states. Confirming whether the interquark potential is energy-independent is therefore also important for verifying the validity of the quark potential models. In this talk, we examine the energy dependence of the charmonium potential, which can be determined from the Bethe-Salpeter (BS) amplitudes of cc̅ mesons in lattice QCD. We first calculate the BS amplitudes of the radially excited charmonium states ηc(2S) and ψ(2S) using the variational method, and then determine both the quark kinetic mass and the charmonium potential within the HAL QCD method. Through a direct comparison of charmonium potentials determined from the 1S and 2S states, we confirm that neither the central nor the spin-spin potential shows visible energy dependence at least up to the 2S states.

Inverse fusion PCR cloning (IFPC) is an easy, PCR-based, three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with a free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows fusion with the vector by an overlap-extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and the number of experimental steps is reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as the template for the insertion, and clean-up of the insert fragment is not strictly required. The whole cloning procedure can be performed within a minimal hands-on time and results in the generation of hundreds to tens of thousands of positive colonies, with a minimal background.

We illustrate in some detail a 2D inverse-equilibrium solver that was constructed to analyze tokamak configurations and stellarators (the latter in the context of the average method). To ensure that the method is suitable not only for determining equilibria, but also for providing appropriately represented data for existing stability codes, it is important to be able to control the Jacobian, J̃ ≡ ∂(R,Z)/∂(ρ,θ). The form chosen is J̃ = J0(ρ) R^l ρ, where ρ is a flux-surface label and l is an integer. The initial implementation is for a fixed conducting-wall boundary, but the technique can be extended to a free-boundary model.

This thesis presents the extension of a mono-component seismic pre-stack stratigraphic inversion method to multicomponent data, with the objective of improving the determination of reservoir elastic parameters. In addition to the PP pressure waves, the PS converted waves have proved their value for imaging under gas clouds, and their potential is highly significant for the characterization of lithologies, fluids, fractures... Nevertheless, the simultaneous use of PP and PS data remains problematic because of their different time scales. To jointly use the information contained in PP and PS data, we propose a method in three steps: first, mono-component stratigraphic inversions of the PP and then the PS data; second, estimation of the PP-to-PS time-conversion law; third, multicomponent stratigraphic inversion. For the second step, the estimation of the PP-to-PS conversion law is based on minimizing the difference between the S impedances obtained from the PP and PS mono-component stratigraphic inversions. The pre-stack mono-component stratigraphic inversion was adapted to the case of multicomponent data by leaving each type of data in its own time scale in order to avoid distortion of the seismic wavelet. The results obtained on a realistic synthetic PP-PS case show, on the one hand, that determining the PP-to-PS conversion law (from the mono-component inversion results) is feasible and, on the other hand, that the joint inversion of PP and PS data with this conversion law improves the results compared to the mono-component inversions. Although this is presented within the framework of PP and PS multicomponent data, the developed methodology adapts directly to PP and SS data, for example. (author)

The main fundamental principles characterizing the vacuum field structure are formulated, and the modeling of the related vacuum medium and charged point particle dynamics by means of devised field-theoretic tools is analyzed. The Maxwell electrodynamic theory is revisited and newly derived from the suggested vacuum field structure principles, and the classical special relativity relationship between the energy and the corresponding point particle mass is revisited and newly obtained. The Lorentz force expression with respect to arbitrary non-inertial reference frames is revisited and discussed in detail, and some new interpretations of the relations between special relativity theory and quantum mechanics are presented. The famous quantum-mechanical Schroedinger-type equations for a relativistic point particle in external potential and magnetic fields are obtained within the quasiclassical approximation, as the Planck constant (h/2π) → 0 and the light velocity c → ∞. (author)

The Cyclone Global Navigation Satellite System (CYGNSS) is a space-borne GNSS-R (GNSS-Reflectometry) mission that launched December 15, 2016 for ocean surface wind speed measurements. CYGNSS includes 8 small satellites in the same LEO orbit, so that the mission provides wind speed products having unprecedented coverage both in time and space to study multi-temporal behaviors of oceanic winds. The nature of CYGNSS coverage results in some locations on Earth experiencing multiple wind speed measurements within a short period of time (a "clump" of observations in time resulting in a "rapid revisit" series of measurements). Such observations could seemingly provide indications of regions experiencing rapid changes in wind speeds, and therefore be of scientific utility. Temporally "clumped" properties of CYGNSS measurements are investigated using early CYGNSS L1/L2 measurements, and the results show that clump durations and spacing vary with latitude. For example, the duration of a clump can extend as long as a few hours at higher latitudes, with gaps between clumps ranging from 6 to as high as 12 hours depending on latitude. Examples are provided to indicate the potential of changes within a clump to produce a "rapid revisit" product for detecting convective activity. Also, we investigate detector design for identifying convective activities. Results from analyses using recent CYGNSS L2 winds will be provided in the presentation.
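The clump-finding step described above reduces to a simple grouping pass: observations of one location belong to the same clump whenever the gap to the previous observation falls below a threshold. The sketch below is a minimal illustration of that idea; the observation times and the 1-hour gap threshold are invented for the example, not actual CYGNSS sampling parameters.

```python
# Group revisit times (hours) at one location into "clumps": runs of
# observations separated by gaps no larger than max_gap_h.
def find_clumps(times_h, max_gap_h=1.0):
    clumps, current = [], [times_h[0]]
    for t in times_h[1:]:
        if t - current[-1] <= max_gap_h:
            current.append(t)        # still inside the current clump
        else:
            clumps.append(current)   # gap too large: close the clump
            current = [t]
    clumps.append(current)
    return clumps

# Hypothetical observation times at one grid cell: two clumps ~8 h apart.
obs = [0.0, 0.3, 0.7, 1.2, 9.5, 9.8, 10.4]
for c in find_clumps(obs):
    print(f"clump: start={c[0]:.1f} h, duration={c[-1] - c[0]:.1f} h, n={len(c)}")
```

Wind-speed changes computed within a single clump would then feed the "rapid revisit" product, while the gap between clumps sets the latitude-dependent revisit statistics the abstract discusses.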

Taking tourists’ perspective rather than destination offerings as its core concept, this study introduces “perceived destination brand worldness” as a variable. Perceived destination brand worldness is defined as the positive perception that a tourist has of a country that is visited by tourists from all over the world. The relationship between perceived destination brand worldness and intention to revisit is then analyzed using partial least squares regression. This empirical study selects Taiwanese tourists as its sample, and the results show that perceived destination brand worldness is a direct predictor of intention to revisit. In light of these empirical findings and observations, practical and theoretical implications are discussed.

A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We use the quadratic rank transmutation map (QRTM) to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base distribution and introducing a new parameter that offers more distributional flexibility. Various structural properties including explicit expression...
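The QRTM construction itself is compact: given a base CDF F, the transmuted CDF is G(x) = (1 + λ)F(x) − λF(x)², with |λ| ≤ 1, and λ = 0 recovers the base distribution. The sketch below assumes one common parameterization of the generalized inverse Weibull CDF, F(x) = exp(−γ(α/x)^β), which may differ from the paper's; the parameter values are illustrative only.

```python
import math

# Base distribution: one common parameterization of the generalized
# inverse Weibull CDF (an assumption, not necessarily the paper's form).
def giw_cdf(x, alpha, beta, gamma):
    return math.exp(-gamma * (alpha / x) ** beta)

# Quadratic rank transmutation map: G = (1 + lam) * F - lam * F**2.
# For |lam| <= 1 this is a valid, monotone CDF; lam reshapes the tails.
def transmuted_cdf(x, alpha, beta, gamma, lam):
    f = giw_cdf(x, alpha, beta, gamma)
    return (1.0 + lam) * f - lam * f * f

# lam = 0 recovers the base distribution; lam != 0 shifts probability mass.
for lam in (-0.5, 0.0, 0.5):
    print(lam, round(transmuted_cdf(2.0, alpha=1.0, beta=2.0, gamma=1.0, lam=lam), 4))
```

Monotonicity holds because dG/dF = 1 + λ − 2λF is non-negative on F ∈ [0, 1] whenever |λ| ≤ 1, which is exactly the constraint the QRTM imposes on the extra parameter.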

In this paper, we numerically investigate an inverse problem of recovering the potential term in a fractional Sturm-Liouville problem from one spectrum. The qualitative behaviors of the eigenvalues and eigenfunctions are discussed, and numerical

The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from

A world-wide radiation health scare was created in the late 1950s to stop the testing of atomic bombs and block the development of nuclear energy. In spite of the large amount of evidence that contradicts the cancer predictions, this fear continues. It impairs the use of low radiation doses in medical diagnostic imaging and radiation therapy. This brief article revisits the second of two key studies, which revolutionized radiation protection, and identifies a serious error that was missed. This error in analyzing the leukemia incidence among the 195,000 survivors in the combined exposed populations of Hiroshima and Nagasaki invalidates use of the LNT model for assessing the risk of cancer from ionizing radiation. The threshold acute dose for radiation-induced leukemia, based on about 96,800 humans, is identified to be about 50 rem, or 0.5 Sv. It is reasonable to expect that the thresholds for other cancer types are higher than this level. No predictions or hints of excess cancer risk (or any other health risk) should be made for an acute exposure below this value until there is scientific evidence to support the LNT hypothesis. (author)

While holist views such as ecocentrism have considerable intuitive appeal, arguing for the moral considerability of ecological wholes such as ecosystems has turned out to be a very difficult task. In the environmental ethics literature, individualist biocentrists have persuasively argued that individual organisms - but not ecological wholes - are properly regarded as having a good of their own. In this paper, I revisit those arguments and contend that they are fatally flawed. The paper proceeds in five parts. First, I consider some problems brought about by climate change for environmental conservation strategies and argue that these problems give us good pragmatic reasons to want a better account of the welfare of ecological wholes. Second, I describe the theoretical assumptions from normative ethics that form the background of the arguments against holism. Third, I review the arguments given by individualist biocentrists in favour of individualism over holism. Fourth, I review recent work in the philosophy of biology on the units-of-selection problem, work in medicine on the human biome, and work in evolutionary biology on epigenetics and endogenous viral elements. I show how these developments undermine both the individualist arguments described above and the distinction between individuals and wholes as it has been understood by individualists. Finally, I consider five possible theoretical responses to these problems.

Many grand unified theory (GUT) models conserve the difference between the baryon and lepton number, B−L. These models can create baryon and lepton asymmetries from heavy Higgs or gauge boson decays with B+L ≠ 0 but with B−L = 0. Since the sphaleron processes violate B+L, such GUT-generated asymmetries will finally be washed out completely, making GUT baryogenesis scenarios incapable of reproducing the observed baryon asymmetry of the Universe. In this work, we revisit the idea to revive GUT baryogenesis, proposed by Fukugita and Yanagida, where right-handed neutrinos erase the lepton asymmetry before the sphaleron processes can significantly wash out the original B+L asymmetry, and in this way one can prevent a total washout of the initial baryon asymmetry. By solving the Boltzmann equations numerically for baryon and lepton asymmetries in a simplified 1+1 flavor scenario, we can confirm the results of the original work. We further generalize the analysis to a more realistic scenario of three active and two right-handed neutrinos to highlight flavor effects of the right-handed neutrinos. Large regions in the parameter space of the Yukawa coupling and the right-handed neutrino mass featuring successful baryogenesis are identified.
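The washout logic can be stated compactly: sphalerons erase B+L but conserve B−L, so the surviving baryon asymmetry is fixed by B−L alone. The standard equilibrium relation (quoted here for the Standard Model field content of three generations and one Higgs doublet; it is background knowledge, not a result of this paper) is:

```latex
% Sphaleron equilibrium ties the final baryon asymmetry to the
% primordial B-L (Standard Model: N_f = 3 generations, 1 Higgs doublet):
B_{\text{final}} \;=\; \frac{28}{79}\,(B-L)_{\text{initial}}
% Hence a GUT-generated asymmetry with B - L = 0 is washed out entirely,
% unless, as in the Fukugita-Yanagida mechanism, right-handed neutrino
% interactions first erase part of L, leaving an effective B - L \neq 0.
```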

An experiment which discussed the appearance of an internal wave attractor in a uniformly stratified, free-surface fluid [Maas, L.R.M., Benielli, D., Sommeria, J., Lam, F.-P.A., 1997. Observation of an internal wave attractor in a confined, stably stratified fluid. Nature 388(6642), 557-561] is revisited. This is done in order to give a more detailed and more accurate description of the underlying focusing process. Evolution of the attractor can now be quantified. For the tank with one sloping sidewall, and for the parameter regime (density stratification, forcing frequency) studied, the inverse exponential growth rate determined at several locations in the fluid turns out to be 122 s always. Only the start and duration of the growth differed: away from the attractor region it appeared later and of shorter duration. Here, these features are interpreted by employing a new theoretical basis that incorporates an external forcing via a surface boundary condition (an infinitesimal barotropic seiche) and that describes the solution in terms of propagating waves.

The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to deal with conceptual model uncertainty (notably spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as it is sometimes done in other fields. Instead, it should be viewed as one step in the process of understanding aquifer behavior. In fact, it is shown that actual parameter estimation methods do not differ from each other in the essence, though they may differ in the computational details. It is argued that there is ample room for improvement in groundwater inversion: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and different types of data (temperature, occurrence and concentration of isotopes, age, etc.), proper accounting of uncertainty, etc. Despite this, even with existing codes, automatic calibration facilitates enormously the task of modeling. Therefore, it is contended that its use should become standard practice.

The Hansen solubility parameter approach is revisited by implementing the thermodynamics of dissolution and mixing. Hansen's pragmatic approach has earned its spurs in predicting solvents for polymer solutions, but for molecular solutes improvements are needed. By going into the details of entropy

This paper revisits the landmark CEE series, "The Future of Engineering Education," published in 2000 (available free in the CEE archives on the internet) to examine the predictions made in the original paper as well as the tools and approaches documented. Most of the advice offered in the original series remains current. Despite new…

One of the core problems in soft computing is dealing with uncertainty in data. In this paper, we revisit the formal foundation of a class of probabilistic databases with the purpose to (1) obtain data model independence, (2) separate metadata on uncertainty and probabilities from the raw data, (3)

of the logic PCTL\\x, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness...

A contingent of weakly calcified coccolithophorid genera and species were described from polar regions almost 40 years ago. In the interim period a few additional findings have been reported, enlarging the realm of some of the species. The genus Wigwamma is revisited here with the purpose of provi...... appearance of the coccolith armour of the cell...

This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. At zero temperature and zero frequency...

This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse...

An interpretation in terms of the cranking model is presented to explain why signature inversion occurs for positive values of the axially asymmetric deformation parameter γ and why it emerges in specific orbitals. By introducing a continuous variable, the eigenvalue equation can be reduced to a one-dimensional Schrödinger equation, by means of which one can easily understand the cause of signature inversion. (author)

This book addresses selected topics in the theory of generalized inverses. Following a discussion of the “reverse order law” problem and certain problems involving completions of operator matrices, it subsequently presents a specific approach to solving the problem of the reverse order law for {1}-generalized inverses. Particular emphasis is placed on the existence of Drazin invertible completions of an upper triangular operator matrix; on the invertibility and different types of generalized invertibility of a linear combination of operators on Hilbert spaces and Banach algebra elements; on the problem of finding representations of the Drazin inverse of a 2×2 block matrix; and on selected additive results and algebraic properties for the Drazin inverse. In addition to the clarity of its content, the book discusses the relevant open problems for each topic discussed. Comments on the latest references on generalized inverses are also included. Accordingly, the book will be useful for graduate students, Ph...

by-products, especially drinking water, aquaculture and mariculture, can easily translate into billions of dollars in business opportunities. The current status of the OTEC system definitely deserves to be carefully revisited. This paper will examine recent major advancements in technology, evaluate costs and effectiveness, assess the overall market environment of the OTEC system, and describe its great renewable energy potential and overall benefits to the nations of the world.

This article investigates the impact of eWOM on intention to revisit and destination trust, and the moderating role of gender, in the medical tourism industry. Results from structural equation modeling (n=240) suggest the following: (1) that eWOM influences intention to revisit and destination trust; (2) that destination trust influences intention to revisit; (3) that the impact of eWOM on intention to revisit is about 1.3 times higher in men; (4) that the impact of eWOM on destination trust is ab...

The final report describes work performed to investigate inverse Cherenkov acceleration (ICA) as a promising method for laser particle acceleration. In particular, an improved configuration of ICA is being tested in an experiment presently underway on the Accelerator Test Facility (ATF). In the experiment, the high peak power (∼10 GW) linearly polarized ATF CO2 laser beam is converted to a radially polarized beam. This beam is focused with an axicon at the Cherenkov angle onto the ATF 50-MeV e-beam inside a hydrogen gas cell, where the gas acts as the phase-matching medium of the interaction. An energy gain of ∼12 MeV is predicted assuming a delivered laser peak power of 5 GW. The experiment is divided into two phases. The Phase I experiments, which were completed in the spring of 1992, were conducted before the ATF e-beam was available and involved several successful tests of the optical systems. Phase II experiments are with the e-beam and laser beam, and are still in progress. The ATF demonstrated delivery of the e-beam to the experiment in Dec. 1992. A preliminary ''debugging'' run with the e-beam and laser beam occurred in May 1993. This revealed the need for some experimental modifications, which have been implemented. The second run is tentatively scheduled for October or November 1993. In parallel with the experimental efforts has been ongoing theoretical work to support the experiment and investigate improvements and/or offshoots. One exciting offshoot has been theoretical work showing that free-space laser acceleration of electrons is possible using a radially polarized, axicon-focused laser beam, but without any phase-matching gas. The Monte Carlo code used to model the ICA process has been upgraded and expanded to handle different types of laser beam input profiles

The study of the INVERSE FREE ELECTRON LASER, as a potential mode of electron acceleration, is being pursued at Brookhaven National Laboratory. Recent studies have focused on the development of a low energy, high gradient, multi-stage linear accelerator. The elementary ingredients for the IFEL interaction are the 50 MeV Linac e-beam and the 10^11 W CO2 laser beam of BNL's Accelerator Test Facility (ATF), Center for Accelerator Physics (CAP), and a wiggler. The latter element is designed as a fast excitation unit making use of alternating stacks of Vanadium Permendur (VaP) ferromagnetic laminations, periodically interspersed with conductive, nonmagnetic laminations, which act as eddy-current-induced field reflectors. Wiggler parameters and field distribution data will be presented for a prototype wiggler in a constant period and in a ∼1.5 %/cm tapered period configuration. The CO2 laser beam will be transported through the IFEL interaction region by means of a low loss, dielectric coated, rectangular waveguide. Short waveguide test sections have been constructed and tested using a low power cw CO2 laser. Preliminary results on guide attenuation and mode selectivity will be given, together with a discussion of the optical issues for the IFEL accelerator. The IFEL design is supported by the development and use of 1D and 3D simulation programs. The results of simulation computations, including wiggler errors, for a single-module accelerator and for a multi-module accelerator will be presented

Breast intensity-modulated radiation therapy (IMRT) improves dose distribution homogeneity within the whole breast. Previous publications report the use of inverse or forward dose optimization algorithms. Because the inverse technique is not widely available in commercial treatment planning systems, it is important to compare the two algorithms. The goal of this work is to compare them on a prospective cohort of 30 patients. Dose distributions were evaluated on differential dose-volume histograms using the volumes receiving more than 105% (V105) and 110% (V110) of the prescribed dose, and on the maximum dose (Dmax), or hot spot, and the sagittal dose gradient (SDG), being the gradient between the dose on the inframammary crease and the dose prescribed. The data were analyzed using the Wilcoxon signed rank test. The inverse planning significantly improves the V105 (mean value 9.7% vs. 14.5%, p = 0.002) and the V110 (mean value 1.4% vs. 3.2%, p = 0.006). However, the SDG is not statistically significantly different for either algorithm. Looking at the potential impact on acute skin reaction, although there is a significant reduction of V110 using an inverse algorithm, it is unlikely this 1.6% volume reduction will present a significant clinical advantage over a forward algorithm. Both algorithms are equivalent in removing the hot spots on the inframammary fold, where acute skin reactions occur more frequently using a conventional wedge technique. Based on these results, we recommend that both forward and inverse algorithms be considered for breast IMRT planning
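Metrics such as V105 and V110 are simply the fractions of the evaluated volume receiving more than the stated percentage of the prescribed dose; a minimal sketch of that computation on a voxel dose array (the array and prescription values here are illustrative, not data from the study):

```python
import numpy as np

def hot_volume_fraction(dose, prescribed, threshold_pct):
    """Fraction of voxels receiving more than threshold_pct % of the
    prescribed dose, e.g. threshold_pct=105 gives V105."""
    return float(np.mean(dose > prescribed * threshold_pct / 100.0))

# Toy example: a roughly homogeneous 50 Gy plan with a mild hot spot.
rng = np.random.default_rng(0)
dose = rng.normal(loc=50.0, scale=1.5, size=10_000)  # voxel doses in Gy
v105 = hot_volume_fraction(dose, 50.0, 105)          # fraction above 105%
v110 = hot_volume_fraction(dose, 50.0, 110)          # fraction above 110%
```

By construction V110 can never exceed V105, which matches the ordering of the mean values reported in the abstract.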

In the present article, we introduce a generalization of the spherical inversion. In particular, we define an inversion with respect to an ellipsoid, and prove several properties of this new transformation. The inversion in an ellipsoid is the generalization of the elliptic inversion to the three-dimensional space. We also study the inverse images…
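For orientation, the classical inversion in a sphere that this construction generalizes can be written as follows; the ellipsoidal analogue replaces the sphere by a quadric (this is a standard textbook form, and the paper's exact definition may differ in normalization):

```latex
% Inversion in the sphere of centre c and radius r, for x \neq c:
\iota_{c,r}(x) \;=\; c + r^{2}\,\frac{x - c}{\lVert x - c \rVert^{2}} .
% Inversion in an ellipsoid replaces the sphere \lVert x - c \rVert = r
% by the quadric (x - c)^{\mathsf{T}} A (x - c) = 1, with A symmetric
% positive definite.
```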

Positive tectonic inversion structures are common features that have been recognized in many deformed sedimentary basins (Lowell, 1995). They are characterized by a two-phase fault evolution, where initial normal faulting was followed by reverse faulting along the same fault, accompanied by the development of hanging wall deformation. Analysing the evolution of such inversion structures is important for understanding the tectonics of sedimentary basins and the formation of hydrocarbon traps. We used a 2D tectonic forward modelling approach to simulate the stepwise structural evolution of inversion structures in cross-section. The modelling was performed with the software FaultFold Forward v. 6, which is based on trishear kinematics (Zehnder and Allmendinger, 2000). A key aspect of the study was to derive the controlling factors for the geometry of inversion structures. The simulation results show that the trishear approach is able to reproduce the geometry of tectonic inversion structures in a realistic way. This implies that inversion structures are simply fault-related folds that initiated as extensional fault-propagation folds, which were subsequently transformed into compressional fault-propagation folds when the stress field changed. The hanging wall deformation is a consequence of the decrease in slip towards the tip line of the fault. Trishear angle and propagation-to-slip ratio are the key controlling factors for the geometry of the fault-related deformation. We tested trishear angles in the range of 30-60° and propagation-to-slip ratios between 1 and 2 in increments of 0.1. Small trishear angles and low propagation-to-slip ratios produced tight folds, whereas large trishear angles and high propagation-to-slip ratios led to more open folds with concentric shapes. This has a direct effect on the size and geometry of potential hydrocarbon traps. The 2D simulations can be extended to a pseudo-3D approach, where a set of parallel cross-sections is used to describe

Current efforts to utilize full waveform inversion (FWI) as a tool beyond acoustic imaging applications, for example for reservoir analysis, face inherent limitations on resolution and also on the potential trade-off between elastic model parameters. Adding rock physics constraints does help to mitigate these issues. However, current approaches to add such constraints are based on averaged type rock physics regularization terms. Since the true earth model consists of different facies, averaging over those facies naturally leads to smoothed models. To overcome this, we propose a novel way to utilize facies based constraints in elastic FWI. A so-called confidence map is calculated and updated at each iteration of the inversion using both the inverted models and the prior information. The numerical example shows that the proposed method can reduce the cross-talks and also can improve the resolution of inverted elastic properties.

Image inversion interferometers have the potential to significantly enhance the lateral resolution and light efficiency of scanning fluorescence microscopes. Self-interference of a point source's coherent point spread function with its inverted copy leads to a reduction in the integrated signal for off-axis sources compared to sources on the inversion axis. This can be used to enhance the resolution in a confocal laser scanning microscope. We present a simple image inversion interferometer relying solely on reflections off planar surfaces. Measurements of the detection point spread function for several types of light sources confirm the predicted performance and suggest its usability for scanning confocal fluorescence microscopy.
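The off-axis signal reduction described here can be illustrated with a 1-D toy model: superpose a Gaussian detection amplitude with its copy inverted through the axis and integrate the resulting intensity. This is a schematic sketch under an idealized Gaussian-PSF assumption, not the paper's optical model:

```python
import numpy as np

def interferometer_signal(x0, sigma=1.0):
    """Integrated signal after self-interference of a Gaussian amplitude
    centred at x0 with its copy inverted through the axis (x -> -x)."""
    x = np.linspace(-10.0, 10.0, 4001)
    dx = x[1] - x[0]
    psf = np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2))           # coherent PSF
    psf_inverted = np.exp(-(x + x0) ** 2 / (2.0 * sigma ** 2))  # inverted copy
    intensity = 0.25 * (psf + psf_inverted) ** 2  # |(a + a_inv)/2|^2
    return float(intensity.sum() * dx)

on_axis = interferometer_signal(0.0)   # source on the inversion axis
off_axis = interferometer_signal(2.0)  # off-axis source: reduced signal
```

For the on-axis source the two amplitudes coincide and interfere fully constructively; as the source moves off axis the overlap term decays, which is the contrast mechanism the abstract describes.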

Here, using ab initio molecular dynamics (AIMD) simulations, we elucidate the role of the umbrella inversion mode of the hydronium in proton transfer (PT) in liquid water. The hydrophobic face of the hydronium oxygen experiences asymmetries in the solvent potential along the inversion coordinate and this has a rather drastic effect on the barrier for proton transfer. This behavior is coupled to the fluctuations of voids or cavities in the vicinity of the hydronium in the water network. The peculiar inversion mode can either trap or release the proton from different parts of the water network.

Inverse scattering theory is used to determine local, energy-independent, coordinate-space nucleon-nucleon potentials. Inversions are made of phase shifts obtained from analyses of data and from meson exchange theory, in particular the Paris and the Bonn parametrizations. Half off-shell T-matrices are generated to compare the exact meson-theoretical results with those of inversion, and it is found that phase-equivalent interactions have essentially the same off-shell behaviour for any physically significant range of momenta. 8 refs., 8 figs

Fixed energy inverse scattering theory has been used to analyse the differential cross-sections for the elastic scattering of electrons from water molecules. Both semiclassical (WKB) and fully quantal inversion methods have been used with data taken in the energy range 100 to 1000 eV. Constrained to be real, the local inversion potentials are found to be energy dependent, a dependence that can be interpreted as the local equivalence of true nonlocality in the actual interaction. 14 refs., 4 tabs., 8 figs

of the interior of an object from electrical boundary measurements. One part of this thesis concerns statistical approaches for solving, possibly non-linear, inverse problems. Thus inverse problems are recast in a form suitable for statistical inference. In particular, a Bayesian approach for regularisation...... problem is given in terms of probability distributions. Posterior inference is obtained by Markov chain Monte Carlo methods and new, powerful simulation techniques based on e.g. coupled Markov chains and simulated tempering are developed to improve the computational efficiency of the overall simulation......Inverse problems arise in many scientific disciplines and pertain to situations where inference is to be made about a particular phenomenon from indirect measurements. A typical example, arising in diffusion tomography, is the inverse boundary value problem for non-invasive reconstruction...
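In the Bayesian formulation sketched here, the solution of the inverse problem is a posterior distribution explored by MCMC. A generic random-walk Metropolis sketch for a toy 1-D inverse problem is shown below; the forward model G(m) = m², the flat prior, and the noise level are illustrative assumptions, not details from the thesis:

```python
import math
import random

def log_posterior(m, d_obs, sigma=0.1):
    """Gaussian log-likelihood for the toy forward model G(m) = m**2,
    with an (improper) flat prior on m."""
    r = d_obs - m ** 2
    return -0.5 * (r / sigma) ** 2

def metropolis(d_obs, n_steps=5000, step=0.2, seed=1):
    """Random-walk Metropolis sampling of the posterior over m."""
    random.seed(seed)
    m, lp = 1.0, log_posterior(1.0, d_obs)
    samples = []
    for _ in range(n_steps):
        m_new = m + random.gauss(0.0, step)          # symmetric proposal
        lp_new = log_posterior(m_new, d_obs)
        if math.log(random.random() + 1e-300) < lp_new - lp:  # accept/reject
            m, lp = m_new, lp_new
        samples.append(m)
    return samples

samples = metropolis(d_obs=4.0)  # posterior concentrates near m = +/-2
```

The chain started at m = 1 settles into the mode near m = 2; the coupled-chain and tempering techniques developed in the thesis address exactly the multimodality that a plain random walk like this cannot cross.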

Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements are available, we try to detect some information on the embedded

We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained
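The misfit described above is a least-squares objective over wavenumber samples of the two dispersion curves; a minimal sketch (the curves and frequency sampling are illustrative, assuming both are sampled at the same frequencies):

```python
import numpy as np

def dispersion_misfit(k_pred, k_obs):
    """Sum of squared differences between predicted and observed
    wavenumbers along the dispersion curves."""
    k_pred = np.asarray(k_pred, dtype=float)
    k_obs = np.asarray(k_obs, dtype=float)
    return float(np.sum((k_pred - k_obs) ** 2))

# Toy example: curves k(omega) = omega / c for two phase velocities.
omega = np.linspace(10.0, 100.0, 10)  # angular frequencies, rad/s
k_obs = omega / 300.0                 # observed curve, c = 300 m/s
k_pred = omega / 280.0                # predicted curve, c = 280 m/s
misfit = dispersion_misfit(k_pred, k_obs)
```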

Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, the assumed fault geometry and velocity structure, and the chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

With its uncommon presentation of instructional material regarding mathematical modeling, measurements, and solution of inverse problems, Thermal Measurements and Inverse Techniques is a one-stop reference for those dealing with various aspects of heat transfer. Progress in mathematical modeling of complex industrial and environmental systems has enabled numerical simulations of most physical phenomena. In addition, recent advances in thermal instrumentation and heat transfer modeling have improved experimental procedures and indirect measurements for heat transfer research of both natural phe

Inverse cascades of magnetic quantities for turbulent incompressible magnetohydrodynamics are reviewed, for two and three dimensions. The theory is extended to the Strauss equations, a description intermediate between two and three dimensions appropriate to tokamak magnetofluids. Consideration of the absolute equilibrium Gibbs ensemble for the system leads to a prediction of an inverse cascade of magnetic helicity, which may manifest itself as a major disruption. An agenda for computational investigation of this conjecture is proposed

Most primary cells use Zn or Li as the anode, a metallic oxide as the cathode, and an acidic or alkaline solution or moist paste as the electrolytic solution. In this paper, highly ordered polypyrrole (PPy) inverse opals have been successfully synthesized in an acetonitrile solution containing [bmim]PF₆. PPy films were prepared under the same experimental conditions. Cyclic voltammograms of the PPy film and the PPy inverse opal in neutral phosphate buffer solution (PBS) were recorded. The X-ray photoelectron spectroscopy technique was used to investigate the surface structure of the PPy films and the PPy inverse opals. It is found that the PF₆⁻ anions kept de-doping from the PPy films during the potential scanning process, resulting in electrochemical inactivity. Although PF₆⁻ anions also kept de-doping from the PPy inverse opals, the PO₄³⁻ anions from the PBS could dope into the inverse opal, explaining why the PPy inverse opals kept their electrochemical activity. An environmentally friendly cell prototype was constructed, using the PPy inverse opal as the anode. The electrolytes in both the cathodic and anodic half-cells were neutral PBSs. The open-circuit potential of the cell prototype reached 0.487 V and showed stable output over several hundred hours

Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.

Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity model and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for prior model 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we hold confidence that significant and practical advances in this direction have been accomplished.

Inverse methods of statistical mechanics are becoming productive tools in the design of materials with specific microstructures or properties. While initial studies have focused on solid-state design targets (e.g., assembly of colloidal superlattices), one can alternatively design fluid states with desired morphologies. This work addresses the latter and demonstrates how a simple iterative Boltzmann inversion strategy can be used to determine the isotropic pair potential that reproduces the ra...
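The iterative Boltzmann inversion strategy mentioned here updates the pair potential by the logarithmic mismatch between the current and target radial distribution functions, V_{i+1}(r) = V_i(r) + kT ln[g_i(r)/g_target(r)]. A minimal sketch of that update step follows; in practice g_i(r) comes from a simulation with the current potential, while here the RDF arrays are illustrative:

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0, eps=1e-12):
    """One iterative Boltzmann inversion step:
    V_new(r) = V(r) + kT * ln(g_current(r) / g_target(r)).
    RDF values are clipped away from zero to keep the log finite."""
    g_current = np.clip(np.asarray(g_current, dtype=float), eps, None)
    g_target = np.clip(np.asarray(g_target, dtype=float), eps, None)
    return np.asarray(V, dtype=float) + kT * np.log(g_current / g_target)

# Where the current RDF overshoots the target, the potential is raised
# (made more repulsive), pushing the next simulation toward the target.
V = np.zeros(5)
g_cur = np.array([0.0, 0.5, 1.2, 1.0, 1.0])
g_tgt = np.array([0.0, 0.8, 1.0, 1.0, 1.0])
V_new = ibi_update(V, g_cur, g_tgt)
```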

Cementoblastoma is a rare benign odontogenic neoplasm characterized by the proliferation of cellular cementum. Diagnosis of cementoblastoma is challenging because of its protracted clinical course, radiographic features, and bland histological appearance; it is most often confused with other lesions originating from cementum and bone. The aim of this article is to revisit the diagnostic approach to cementoblastoma and to present a unique radiographic appearance of a cementoblastoma lesion associated with an impacted tooth.

Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full waveform inversion (FWI), the large number of parameters describing orthorhombic media exerts a considerable trade-off and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization could be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. Analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters, denoted εd, ηd, and δd, are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters consists of keeping the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters to which the data are sensitive, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of three parameters (out of the six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ε1 which provides scattering mainly near

We outline an inverse approach for investigating dendritic function-structure relationships by optimizing dendritic trees for a priori chosen computational functions. The inverse approach can be applied in two different ways. First, we can use it as a `hypothesis generator', in which we optimize dendrites for a function of general interest. The optimization yields an artificial dendrite that is subsequently compared to real neurons. This comparison potentially allows us to propose hypotheses about the function of real neurons. In this way, we investigated dendrites that optimally perform input-order detection. Second, we can use it for `function confirmation' by optimizing dendrites for functions hypothesized to be performed by classes of neurons. If the optimized, artificial dendrites resemble the dendrites of real neurons, the artificial dendrites corroborate the hypothesized function of the real neuron. Moreover, properties of the artificial dendrites can lead to predictions about yet unmeasured properties. In this way, we investigated wide-field motion integration performed by the VS cells of the fly visual system. In outlining the inverse approach and two applications, we also elaborate on the nature of dendritic function. We furthermore discuss the role of optimality in assigning functions to dendrites and point out interesting future directions.

Motivated by recent experimental work on magnetic properties of Si-MOSFETs, we report a calculation of magnetisation and susceptibility of electrons in an inversion layer, taking into account the co-ordinate dependence of electron wave function in the direction perpendicular to the plane. It is assumed that the inversion-layer carriers interact via a contact repulsive potential, which is treated at a mean-field level, resulting in a self-consistent change of profile of the wave functions. We find that the results differ significantly from those obtained in the pure 2DEG case (where no provision is made for a quantum motion in the transverse direction). Specifically, the critical value of interaction needed to attain the ferromagnetic (Stoner) instability is decreased and the Stoner criterion is therefore relaxed. This leads to an increased susceptibility and ultimately to a ferromagnetic transition deep in the high-density metallic regime. In the opposite limit of low carrier densities, a phenomenological treatment of the in-plane correlation effects suggests a ferromagnetic instability above the metal–insulator transition. Results are discussed in the context of the available experimental data. - Highlights: • Stoner-type mean field theory for electrons in an inversion layer is constructed. • Wave function change under an in-plane magnetic field is taken into account. • Tendency toward ferromagnetism is strengthened in comparison with a usual Stoner theory. • In-plane correlations at low densities are taken into account phenomenologically.
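For context, the textbook Stoner criterion behind the "critical value of interaction" discussed here can be stated as follows (standard mean-field form; the paper's point is that the self-consistent change of the transverse wave-function profile effectively relaxes this condition):

```latex
% Stoner instability criterion: U is the contact (repulsive) interaction
% strength and N(E_F) the density of states at the Fermi level.
U \, N(E_F) \;>\; 1 .
```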

Motivated by rising atmospheric CO2 levels and recent developments in sequestration and seismic processing technologies, studies addressing the feasibility of offshore carbon sequestration are ongoing. The subsurface off the US east coast offers a few potential storage reservoirs including sedimentary layers as well as buried Mesozoic rift basins. Marine seismic reflection data first identified these features in the 1970s and are now being revisited as potential sequestration reservoirs. The rift basins are of particular interest as storage reservoirs for CO2 in light of recent work showing the efficacy of mineralizing injected carbon in basaltic formations. The use of these data presents unique challenges, particularly due to their vintage. However, new data processing capabilities and seismic prestack waveform inversion techniques elevate the potential of the legacy data. Using state of the art processing techniques we identify previously un-imaged rift basins off the US east coast between Delaware and Massachusetts and update mapping related to the areal and volumetric extent of basaltic fill. Applying prestack waveform inversion to the reprocessed seismic data, we show that each rift basin has different basaltic properties and thereby distinct utilities as carbon storage reservoirs.

Background: RNA exhibits a variety of structural configurations. Here we consider a structure to be tantamount to the noncrossing Watson-Crick and G-U base pairings (secondary structure) and additional cross-serial base pairs. The latter interactions are called pseudoknots and are observed across the whole spectrum of RNA functionalities. In the context of studying natural RNA structures, searching for new ribozymes and designing artificial RNA, it is of interest to find RNA sequences folding into a specific structure and to analyze their induced neutral networks. Since the established inverse folding algorithms RNAinverse, RNA-SSD and INFO-RNA are limited to RNA secondary structures, we present the inverse folding algorithm Inv, which can deal with 3-noncrossing, canonical pseudoknot structures. Results: We give a detailed analysis of Inv, including pseudocodes. We show that Inv allows one to design, in particular, 3-noncrossing nonplanar RNA pseudoknot structures, a class that is difficult to construct via dynamic programming routines. Inv is freely available at http://www.combinatorics.cn/cbpc/inv.html. Conclusions: The algorithm Inv extends inverse folding capabilities to RNA pseudoknot structures. In comparison with RNAinverse it uses new ideas, for instance considering sets of competing structures. As a result, Inv is not only able to find novel sequences even for RNA secondary structures, it does so in the context of competing structures that potentially exhibit cross-serial interactions.

The inverse scattering problem of reconstructing an unknown potential with compact support in the 3-D Schrödinger equation is considered. Only the modulus of the complex-valued scattered wave field is known, whereas the phase is unknown. It is shown that the unknown potential can be reconstructed via the inverse Radon transform. Therefore, a long-standing problem posed in 1977 by K. Chadan and P.C. Sabatier in their book "Inverse Problems in Quantum Scattering Theory" is solved.

We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
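The Landweber iteration mentioned above for recovering the spatial function has a simple generic form. A minimal sketch for a linear forward operator, using a small stand-in matrix rather than the paper's elastodynamic source-to-data map:

```python
import numpy as np

def landweber(A, y, n_iter=200, tau=None):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (y - A x_k).

    A is a stand-in linear forward operator (a plain matrix), not the
    elastodynamic operator of the paper; convergence requires
    0 < tau < 2 / ||A||_2^2.
    """
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # ||A||_2 = largest singular value
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)
    return x

# toy check: recover x_true from noiseless data y = A @ x_true
A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([1.0, -2.0])
x_rec = landweber(A, A @ x_true)
print(np.allclose(x_rec, x_true))
```

For noisy data, stopping the iteration early acts as regularization, which is one reason the scheme is popular for ill-posed source problems.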

We define the star transform as a generalization of the broken ray transform introduced in our previous work. The advantages of using the star transform include the possibility to reconstruct the absorption and the scattering coefficients of the medium separately and simultaneously (from the same data) and the possibility to utilize scattered radiation which, in the case of conventional x-ray tomography, is discarded. In this paper, we derive the star transform from physical principles, discuss its mathematical properties and analyze the numerical stability of inversion. In particular, it is shown that stable inversion of the star transform can be obtained only for configurations involving an odd number of rays. Several computationally efficient inversion algorithms are derived and tested numerically.

There are currently two radiation mechanisms being considered for gamma-ray bursts: thermal synchrotron and inverse comptonization. They are mutually exclusive, since thermal synchrotron requires a magnetic field of approximately 10^12 gauss, whereas inverse comptonization cannot produce a monotonic spectrum if the field is larger than 10^11 gauss and is too inefficient relative to thermal synchrotron unless the field is less than 10^9 gauss. Neither mechanism can explain completely the observed characteristics of gamma-ray bursts. However, we conclude that thermal synchrotron is more consistent with the observations if the sources are approximately 40 kpc away, whereas inverse comptonization is more consistent if they are approximately 300 pc away. Unfortunately, the source distance is still not known and, thus, the radiation mechanism is still uncertain.

Understanding the itinerant-localised bonding role of the 5f electrons in the light actinides will afford an insight into their unusual physical and chemical properties. In recent years, the combination of core and valence band electron spectroscopies with theoretical modelling has already made significant progress in this area. However, information on the unoccupied density of states is still scarce. Compared to forward photoemission techniques, measurements of the unoccupied states suffer from significantly lower sensitivity and lower resolution. In this paper, we report on our experimental apparatus, which is designed to measure the inverse photoemission spectra of the light actinides. Inverse photoemission spectra of UO2 and UO2.2, along with the corresponding core and valence electron spectra, are presented. UO2 has been reported previously; its inclusion here allows us to compare and contrast results from our experimental apparatus with previous Bremsstrahlung Isochromat Spectroscopy and Inverse Photoemission Spectroscopy investigations.

The nonlinear inversion of geophysical data in general does not yield a unique solution, but a single model representing the investigated field is preferred for an easy geological interpretation of the observations. The analyzed region is constituted by a number of sub-regions where the multi-valued nonlinear inversion is applied, which leads to a multi-valued solution. Therefore, combining the values of the solution in each sub-region, many acceptable models are obtained for the entire region, and this complicates the geological interpretation of geophysical investigations. This paper presents new methodologies capable of selecting, among all acceptable models, one that satisfies different criteria of smoothness in the explored space of solutions. In this work we focus on the nonlinear inversion of surface-wave dispersion curves, which gives structural models of shear-wave velocity versus depth, but the basic concepts have a general validity.

We examine the characteristics of NDI (negative degree inversion) and its relation with other inversion phenomena such as SVI (subject-verb inversion) and SAI (subject-auxiliary inversion). The negative element in the NDI construction may be "not," a negative adverbial, or a negative verb. In this respect, NDI has similar licensing…

This paper introduces the transmuted new generalized inverse Weibull distribution by using the quadratic rank transmutation map (QRTM) scheme studied by Shaw et al. (2007). The proposed model contains twenty-three lifetime distributions as special sub-models. Some mathematical properties of the new distribution are formulated, such as the quantile function, Rényi entropy, mean deviations, moments, moment generating function and order statistics. The method of maximum likelihood is used for estimating the model parameters. We illustrate the flexibility and potential usefulness of the new distribution using reliability data.
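The QRTM referenced in this record combines a baseline CDF G into F(x) = (1 + λ)G(x) − λG(x)², |λ| ≤ 1. A minimal sketch with an illustrative inverse-Weibull baseline; the parameter names a and b are hypothetical, not the paper's notation:

```python
import numpy as np

def transmuted_cdf(G, lam):
    """Quadratic rank transmutation map (QRTM):
    F(x) = (1 + lam) * G(x) - lam * G(x)**2, with |lam| <= 1."""
    def F(x):
        g = G(x)
        return (1.0 + lam) * g - lam * g ** 2
    return F

def inv_weibull_cdf(x, a=1.0, b=2.0):
    """Illustrative baseline: an inverse-Weibull CDF G(x) = exp(-(a/x)^b)
    for x > 0; parameters a, b are placeholders."""
    x = np.asarray(x, dtype=float)
    return np.exp(-((a / np.maximum(x, 1e-300)) ** b))

F = transmuted_cdf(inv_weibull_cdf, lam=0.5)
xs = np.linspace(0.01, 50.0, 1000)
vals = F(xs)
# a valid CDF: nondecreasing and approaching 1 for large x
print(bool(np.all(np.diff(vals) >= 0.0)), float(F(1e9)))
```

For |λ| ≤ 1 the map preserves monotonicity of G, so F is again a valid CDF; λ = 0 recovers the baseline distribution.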

The inverse scattering problem for the radial Schrödinger equation, consisting in determining the potential from the scattering phase shifts, is considered. The problem of restoring the potential from phase shifts specified with a fixed error in a finite range is solved by a regularization method based on minimization of Tikhonov's smoothing functional. The regularization method is used for solving the problem of restoring the neutron-proton potential from the scattering phase shifts. The determined potentials are given in the table.
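Tikhonov's smoothing functional, in its simplest discrete linear form, is minimized by solving regularized normal equations. A generic sketch under that simplification (the paper's actual functional acts on the nonlinear phase-to-potential map, so everything below is illustrative):

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Minimiser of the Tikhonov functional ||A x - y||^2 + alpha * ||x||^2,
    obtained from the regularized normal equations
    (A^T A + alpha * I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# toy ill-conditioned problem: alpha trades data fit against solution size
A = np.vander(np.linspace(0.0, 1.0, 20), 8, increasing=True)  # nearly collinear columns
x_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])
y = A @ x_true
x_small = tikhonov(A, y, alpha=1e-8)   # near the unregularized solution
x_large = tikhonov(A, y, alpha=1e2)    # heavily damped, biased toward zero
print(np.linalg.norm(x_large) < np.linalg.norm(x_small))
print(np.linalg.norm(A @ x_small - y) < np.linalg.norm(A @ x_large - y))
```

Increasing alpha suppresses the noise-amplifying small singular values of A at the cost of a larger data residual, which is the trade-off the smoothing functional controls.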

Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.

Using the Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas are developed. The required basic definitions, lemmas, and theorems of fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion in more than one unknown function is generalized. To extend the treatment to higher dimensions, some relevant vector and tensor definitions and notations are presented. A fractional Taylor expansion of a function of N-dimensional polyadics is derived. A fractional N-dimensional Lagrange inversion theorem is proved.
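For orientation, the classical (integer-order) Lagrange expansion that this record fractionalizes reads:

```latex
% Classical Lagrange expansion: for w defined implicitly by w = a + z*phi(w),
% with phi analytic and phi(a) != 0, an analytic f admits
f(w) \;=\; f(a) \;+\; \sum_{n=1}^{\infty} \frac{z^{n}}{n!}
  \,\frac{d^{\,n-1}}{da^{\,n-1}} \left[ f'(a)\, \phi(a)^{n} \right] .
```

The fractional extension replaces the integer-order derivatives d^{n-1}/da^{n-1} by Riemann-Liouville operators.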

Darwin's theory of evolution by natural selection unifies the world of physics with the world of meaning and purpose by proposing a deeply counterintuitive "inversion of reasoning" (according to a 19th century critic): "to make a perfect and beautiful machine, it is not requisite to know how to make it" [MacKenzie RB (1868) (Nisbet & Co., London)]. Turing proposed a similar inversion: to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is. Together, these ideas help to explain how we human intelligences came to be able to discern the reasons for all of the adaptations of life, including our own.

Inverse transport consists of reconstructing the optical properties of a domain from measurements performed at the domain's boundary. This review concerns several types of measurements: time-dependent, time-independent, angularly resolved and angularly averaged measurements. We review recent results on the reconstruction of the optical parameters from such measurements and the stability of such reconstructions. Inverse transport finds applications e.g. in medical imaging (optical tomography, optical molecular imaging) and in geophysical imaging (remote sensing in the Earth's atmosphere).

Introduction: Patients with mental health conditions frequently use emergency medical services. Many suffer from substance use and homelessness. If they use the emergency department (ED) as their primary source of care, potentially preventable frequent ED revisits and hospital readmissions can worsen an already crowded healthcare system. However, the magnitude to which homelessness affects health service utilization among patients with mental health conditions remains unclear in the medical community. This study assessed the impact of homelessness on 30-day ED revisits and hospital readmissions among patients presenting with mental health conditions in an urban, safety-net hospital. Methods: We conducted a secondary analysis of administrative data on all adult ED visits in 2012 in an urban safety-net hospital. Patient demographics, mental health status, homelessness, insurance coverage, level of acuity, and ED disposition per ED visit were analyzed using multilevel modeling to control for multiple visits nested within patients. We performed multivariate logistic regressions to evaluate whether homelessness moderated the likelihood of mental health patients' 30-day ED revisits and hospital readmissions. Results: The study included 139,414 adult ED visits from 92,307 unique patients (43.5±15.1 years, 51.3% male, 68.2% Hispanic/Latino). Nearly 8% of patients presented with mental health conditions, while 4.6% were homeless at any time during the study period. Among patients with mental health conditions, being homeless contributed to an additional 28.0% increase in likelihood (4.28 to 5.48 odds) of 30-day ED revisits and a 38.2% increase in likelihood (2.04 to 2.82 odds) of hospital readmission, compared to non-homeless, non-mental-health (NHNM) patients as the base category. Adjusted predicted probabilities showed that homeless patients presenting with mental health conditions have a 31.1% chance of returning to the ED within 30 days post-discharge and a 3

In this presentation the company Energía Renovable De México S.A. de C.V. (ERDM), a manufacturer of and consultant on photovoltaic modules, describes both the objectives it has achieved and those it aims to achieve. The first part gives a description of the beginnings of the company, the marketing strategy implemented, the agreement signed between ERDM and Q-CELLS AG of Germany, the construction of the San Andrés Tuxtla office and of the PV module, and the reasons why this company is considered a leader not only in Mexico but also in Latin America. The company's mission is then briefly explained; it is mainly focused on the grid-connected systems currently allowed under Mexican law. The key factors that have made the company's success possible are mentioned, as are its plans for Mexico, which include the use of both photovoltaic systems and wind turbines to feed the electric grid. These plans aim to reduce the cost of energy in Mexico and to open a profitable market to potential investors. Finally, future plans that will support the company's expansion and improve some energy-related issues are mentioned.

The ability to reconstruct an unknown radioactive object based on its passive gamma-ray and neutron signatures is very important in homeland security applications. Often in the analysis of unknown radioactive objects, for simplicity or speed, or because there is no other information, they are modeled as spherically symmetric regardless of their actual geometry. In this presentation we discuss the accuracy and implications of this approximation for decay gamma rays and for neutron-induced gamma rays. We discuss an extension of spherical raytracing (for uncollided fluxes) that allows it to be used when the exterior shielding is flat or cylindrical. We revisit some early results in boundary perturbation theory, showing that the Roussopoulos estimate is the correct one to use when the quantity of interest is the flux or leakage on the boundary. We apply boundary perturbation theory to problems in which spherically symmetric systems are perturbed in asymmetric, nonspherical ways. We apply mesh adaptive direct search (MADS) algorithms to object reconstructions. We present a benchmark test set that may be used to quantitatively evaluate inverse detection methods.

Background: At the Children's Hospital of Georgia (CHOG), we found that outpatient revisits for pediatric asthma were significantly above national norms. According to the NIH, costly hospital revisits for asthma can be prevented through guidelines-based self-management of asthma, central to which is the use of a written Asthma Action Plan (AAP). Purpose: The asthma services literature has emphasized the role of the healthcare provider in promoting asthma self-management using the AAP to prevent hospital revisits. On the other hand, the asthma policy literature has emphasized the need for community-based interventions to promote asthma self-management. A gap remains in understanding the extent of leverage that healthcare providers may have in preventing hospital revisits for asthma through effective communication of the AAP in the outpatient setting. Our study sought to address this gap. Methods: We conducted a 6-month intervention to implement "patient-and-family-centered communication of the AAP" in CHOG outpatient clinics, based on the "change-management" theoretical framework. Provider communication of the AAP was assessed through a survey of "Parent Understanding of the Child's AAP." A quasi-experimental approach was used to measure outpatient revisits for pediatric asthma, pre- and post-intervention. Results: Survey results showed that provider communication of the AAP was unanimously perceived highly positively by parents of pediatric asthma patients, across various metrics of patient-centered care. However, there were no statistically significant differences in outpatient "revisit behavior" for pediatric asthma between pre- and post-intervention periods after controlling for several demographic variables. Additionally, revisits remained significantly above national norms. Conclusions: Results suggest limited potential of "effective provider communication of the AAP" in reducing outpatient revisits for pediatric asthma; and indicate need for

Type-II superconducting behavior was observed in highly periodic three-dimensional lead inverse opal prepared by infiltration of melted Pb into blue (D = 160 nm), green (D = 220 nm) and red (D = 300 nm) opals, followed by extraction of the SiO2 spheres by chemical etching. The onset of a broad phase transition (ΔT = 0.3 K) was shifted from Tc = 7.196 K for bulk Pb to Tc = 7.325 K. The upper critical field Hc2 (3150 Oe) measured from high-field hysteresis loops exceeds the critical field for bulk lead (803 Oe) fourfold. Two well resolved peaks observed in the hysteresis loops were ascribed to flux penetration into the cylindrical void space that can be found in the inverse opal structure and into the periodic structure of Pb nanoparticles. The red inverse opal shows pronounced oscillations of the magnetic moment in the mixed state at low temperatures. A modulation of the resistivity at T > 0.9Tc has been observed for all of the samples studied. The magnetic field periodicity of the resistivity modulation is in good agreement with the lattice parameter of the inverse opal structure. We attribute the failure to observe more pronounced modulation in the magneto-resistive measurements to difficulties in the precise orientation of the sample along the magnetic field.

We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained from Rayleigh waves recorded by vertical-component geophones. Similar to wave-equation traveltime tomography, the complicated surface wave arrivals in traces are skeletonized as simpler data, namely the picked dispersion curves in the phase-velocity and frequency domains. Solutions to the elastic wave equation and an iterative optimization method are then used to invert these curves for 2-D or 3-D S-wave velocity models. This procedure, denoted as wave-equation dispersion inversion (WD), does not require the assumption of a layered model and is significantly less prone to the cycle-skipping problems of full waveform inversion. The synthetic and field data examples demonstrate that WD can approximately reconstruct the S-wave velocity distributions in laterally heterogeneous media if the dispersion curves can be identified and picked. The WD method is easily extended to anisotropic data and the inversion of dispersion curves associated with Love waves.
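The skeletonized WD misfit described above (squared differences of wavenumbers along the predicted and observed dispersion curves) can be sketched as follows; the curves and sampling below are illustrative, not field data:

```python
import numpy as np

def dispersion_misfit(k_pred, k_obs):
    """WD skeletonized misfit: sum of squared differences between predicted
    and observed wavenumbers along picked dispersion curves, assuming both
    curves are sampled at the same frequencies."""
    return 0.5 * np.sum((k_pred - k_obs) ** 2)

# hypothetical picked phase-velocity curves c(f) on shared frequencies f;
# the wavenumber along the curve is k(f) = 2*pi*f / c(f)
f = np.linspace(5.0, 30.0, 26)          # Hz
c_obs = 800.0 + 10.0 * (f - 5.0)        # m/s, illustrative only
c_pred = c_obs * 1.02                   # predicted velocities 2% too fast
k_obs = 2.0 * np.pi * f / c_obs
k_pred = 2.0 * np.pi * f / c_pred
print(dispersion_misfit(k_pred, k_obs) > 0.0)
print(dispersion_misfit(k_obs, k_obs) == 0.0)
```

Because the misfit is defined on smooth picked curves rather than on oscillatory waveforms, its gradient does not suffer from the cycle-skipping ambiguity that affects full waveform inversion.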

We revisit two-color, two-flavor chiral perturbation theory at finite isospin and baryon density. We investigate the phase diagram obtained varying the isospin and the baryon chemical potentials, focusing on the phase transition occurring when the two chemical potentials are equal and exceed the pion mass (which is degenerate with the diquark mass). In this case, there is a change in the order parameter of the theory that does not lend itself to the standard picture of first order transitions. We explore this phase transition both within a Ginzburg-Landau framework valid in a limited parameter space and then by inspecting the full chiral Lagrangian in all the accessible parameter space. Across the phase transition between the two broken phases the order parameter becomes an SU(2) doublet, with the ground state fixing the expectation value of the sum of the magnitude squared of the pion and the diquark fields. Furthermore, we find that the Lagrangian at equal chemical potentials is invariant under global SU(2) transformations and construct the effective Lagrangian of the three Goldstone degrees of freedom by integrating out the radial fluctuations.

Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

Moment tensors are key parameters for characterizing CO2-injection-induced microseismic events. Elastic-waveform inversion has the potential to provide accurate estimates of moment tensors. Microseismic waveforms contain information on the source moment tensors and on the wave propagation velocity along the wavepaths. We develop an elastic-waveform inversion method to jointly invert the seismic velocity model and moment tensor. We first use our adaptive moment-tensor joint inversion method to estimate moment tensors of microseismic events. Our adaptive moment-tensor inversion method jointly inverts multiple microseismic events with similar waveforms within a cluster to reduce inversion uncertainty for microseismic data recorded using a single borehole geophone array. We use this inversion result as the initial model for our elastic-waveform inversion to minimize the cross-correlation-based data misfit between observed and synthetic data. We verify our method using synthetic microseismic data and obtain improved results for both moment tensors and the seismic velocity model. We apply our new inversion method to microseismic data acquired at a CO2-enhanced oil recovery field in Aneth, Utah, using a single borehole geophone array. The results demonstrate that our new inversion method significantly reduces the data misfit compared to the conventional ray-theory-based moment-tensor inversion.

The Rayleigh Principle states that the minimum separation between two reflectors that allows them to be visually separated is the separation where the wavelet maxima from the two superimposed reflections combine into one maximum. This happens around Δt_res = λ_b/8, where λ_b is the predominant … lower vertical resolution of reflection seismic data. In the following we will revisit the thin-layer model and demonstrate that there is in practice no limit to the vertical resolution using the parameterization of Widess (1973), and that the vertical resolution is limited by the noise in the data…

A Galileon field is one which obeys a spacetime generalization of the non-relativistic Galilean invariance. Such a field may possess non-canonical kinetic terms, but ghost-free theories with a well-defined Cauchy problem exist, constructed using a finite number of relevant operators. The interactions of this scalar with matter are hidden by the Vainshtein effect, causing the Galileon to become weakly coupled near heavy sources. We revisit estimates of the fifth force mediated by a Galileon field, and show that the parameters of the model are less constrained by experiment than previously supposed.

Recently there has been progress in the computation of the anomalous dimensions of gauge theory operators at strong coupling by making use of the AdS/CFT correspondence. On the string theory side they are given by dispersion relations in the semiclassical regime. We revisit the problem of a large-charge expansion of the dispersion relations for simple semiclassical strings in an [Formula: see text] background. We present the calculation of the corresponding anomalous dimensions of the gauge theory operators to an arbitrary order using three different methods. Although the results of the three methods look different, power series expansions show their consistency.

The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.

The task of the paper and the seminar was to revisit some of Nicholas Garnham’s ideas, writings and contributions to the study of the Political Economy of Communication and to reflect on the concepts, history, current status and perspectives of this field and the broader study of political economy today. The topics covered include Raymond Williams’ cultural materialism, Pierre Bourdieu’s sociology of culture, the debate between Political Economy and Cultural Studies, information society theory, Karl Marx’s theory and the critique of capitalism.

Phase wrapping in the frequency domain or cycle skipping in the time domain is the major cause of the local minima problem in the waveform inversion when the starting model is far from the true model. Since the phase derivative does not suffer from the wrapping effect, its inversion has the potential of providing a robust and reliable inversion result. We propose a new waveform inversion algorithm using the phase derivative in the frequency domain along with the exponential damping term to attenuate reflections. We estimate the phase derivative, or what we refer to as the instantaneous traveltime, by taking the derivative of the Fourier-transformed wavefield with respect to the angular frequency, dividing it by the wavefield itself and taking the imaginary part. The objective function is constructed using the phase derivative and the gradient of the objective function is computed using the back-propagation algorithm. Numerical examples show that our inversion algorithm with a strong damping generates a tomographic result even for a high ‘single’ frequency, which can be a good initial model for full waveform inversion and migration.
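The instantaneous traveltime described above (imaginary part of the angular-frequency derivative of the Fourier-transformed wavefield divided by the wavefield itself) can be sketched numerically; the names are hypothetical and the exponential damping term of the abstract is omitted:

```python
import numpy as np

def instantaneous_traveltime(trace, dt):
    """Im(dU/domega / U) for the Fourier-transformed trace U(omega).

    With NumPy's e^{-i omega t} transform convention the imaginary part of
    (dU/domega)/U equals minus the traveltime, hence the sign flip below.
    The angular-frequency derivative is exact: dU/domega = FT of (-i*t*u(t)).
    """
    n = len(trace)
    t = np.arange(n) * dt
    U = np.fft.fft(trace)
    dU = np.fft.fft(-1j * t * trace)
    eps = 1e-12 * np.max(np.abs(U))  # guard against division by a tiny spectrum
    return -np.imag(dU / (U + eps))

# a unit spike delayed by t0 should give an instantaneous traveltime of t0
dt = 0.004
trace = np.zeros(128)
trace[32] = 1.0                      # t0 = 32 * dt = 0.128 s
tau = instantaneous_traveltime(trace, dt)
print(np.allclose(tau, 32 * dt, atol=1e-6))
```

Unlike the phase itself, this quantity is unwrapped by construction, which is why the abstract's objective function avoids the local minima caused by phase wrapping.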

The article addresses specific questions of teaching inverse problems of mathematical physics to students. In such teaching, students are brought to understand that, from a philosophical point of view, inverse problems of mathematical physics are problems of determining unknown causes from known consequences, and that the search for their solutions has great scientific and educational potential. The causes are specified as unknown coefficients, right-hand sides, or initial conditions of the mathematical model of the inverse problem, and the consequences are functionals of the solution of this mathematical model. Teaching inverse problems of mathematical physics thus focuses on the philosophical aspects of the phenomenon of information and on identifying cause-effect relations. It is emphasized that in the process of logical analysis, of both an applied and a humanitarian character, students realize that information is always related to fundamental philosophical questions, and that analysis of the applied and humanitarian aspects of the results obtained for an inverse problem of mathematical physics allows students to draw appropriate inferences about the process under study and, ultimately, to obtain new information, study its properties and understand its value. A philosophical understanding of the notion of information opens up to students new methodological opportunities to comprehend the world and helps them to reinterpret existing theories in science and philosophy related to the interrelationship of all phenomena of reality.

A reexamination of the collapse of 4f and 5f electrons in the lanthanide and actinide series is presented. The calculations show the well-known collapse of the f electron density at the thresholds of these series, along with an f^2 collapse between thorium and protactinium. The collapse is sensitive to the choice of model for the exchange-correlation potential and to the behavior of the potential at large radius.

We here consider inverse photon-photon processes, i.e. AB → γγX (where A, B are hadrons, in particular protons or antiprotons), at high energies. As regards the production of a γγ continuum, we show that, under specific conditions, the study of such processes might provide some information on the subprocess gg → γγ, involving a quark box. It is also suggested to use those processes in order to systematically look for heavy C = + structures (quarkonium states, gluonia, etc.) showing up in the γγ channel. Inverse photon-photon processes might thus become a new and fertile area of investigation in high-energy physics, provided the difficult problem of discriminating between direct photons and indirect ones can be handled in a satisfactory way.

The RAE-1 spacecraft inversion performed October 31, 1972 is described based upon the in-orbit dynamical data in conjunction with results obtained from previously developed computer simulation models. The computer simulations used are predictive of the satellite dynamics, including boom flexing, and are applicable during boom deployment and retraction, inter-phase coast periods, and post-deployment operations. Attitude data, as well as boom tip data, were analyzed in order to obtain a detailed description of the dynamical behavior of the spacecraft during and after the inversion. Runs were made using the computer model and the results were analyzed and compared with the real time data. Close agreement between the actual recorded spacecraft attitude and the computer simulation results was obtained.

A system of nonlinear equations is presented for the solution of the Cox-Thompson inverse scattering problem (1970 J. Math. Phys. 11 805) at fixed energy. From a given finite set of phase shifts for physical angular momenta, the nonlinear equations determine related sets of asymptotic normalization constants and nonphysical (shifted) angular momenta from which all quantities of interest, including the inversion potential itself, can be calculated. As a first application of the method we use input data consisting of a finite set of phase shifts calculated from Woods-Saxon and box potentials representing interactions with diffuse or sharp surfaces, respectively. The results for the inversion potentials, their first moments and asymptotic properties are compared with those provided by the Newton-Sabatier quantum inversion procedure. It is found that in order to achieve inversion potentials of similar quality, the Cox-Thompson method requires a smaller set of phase shifts than the Newton-Sabatier procedure.

Elastic full waveform inversion (FWI) provides a better description of the subsurface than that given by the acoustic assumption. However, it suffers from a more serious cycle skipping problem compared with the latter. Reflection waveform inversion

We give sufficient conditions when a topological inverse $\lambda$-polycyclic monoid $P_{\lambda}$ is absolutely $H$-closed in the class of topological inverse semigroups. For every infinite cardinal $\lambda$ we construct the coarsest inverse semigroup topology $\tau_{mi}$ on $P_\lambda$ and give an example of a topological inverse monoid $S$ which contains the polycyclic monoid $P_2$ as a dense discrete subsemigroup.

Automatic differentiation (AD) is the technique whereby output variables of a computer code evaluating any complicated function (e.g. the solution to a differential equation) can be differentiated with respect to the input variables. Often AD tools take the form of source to source translators and produce computer code without the need for deriving and hand coding of explicit mathematical formulae by the user. The power of AD lies in the fact that it combines the generality of finite difference techniques and the accuracy and efficiency of analytical derivatives, while at the same time eliminating `human' coding errors. It also provides the possibility of accurate, efficient derivative calculation from complex `forward' codes where no analytical derivatives are possible and finite difference techniques are too cumbersome. AD is already having a major impact in areas such as optimization, meteorology and oceanography. Similarly it has considerable potential for use in non-linear inverse problems in geophysics where linearization is desirable, or for sensitivity analysis of large numerical simulation codes, for example, wave propagation and geodynamic modelling. At present, however, AD tools appear to be little used in the geosciences. Here we report on experiments using a state of the art AD tool to perform source to source code translation in a range of geoscience problems. These include calculating derivatives for Gibbs free energy minimization, seismic receiver function inversion, and seismic ray tracing. Issues of accuracy and efficiency are discussed.
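The idea behind forward-mode AD can be illustrated with dual numbers, where the derivative is carried alongside the value through every operation. This is only a toy sketch of the principle; it is unrelated to the source-to-source AD tools discussed above, and all names are illustrative:

```python
# Forward-mode automatic differentiation via dual numbers (a + b*eps,
# with eps**2 == 0): the dual part b propagates the exact derivative.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule falls out of (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for an elementary function applied to a dual number
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)
    return math.sin(x)

def derivative(f, x):
    """Evaluate df/dx at x by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).der

# d/dx [x * sin(x)] = sin(x) + x*cos(x), obtained to machine precision
d = derivative(lambda x: x * sin(x), 1.2)
```

Unlike a finite-difference approximation, no step size is involved, which is the accuracy advantage mentioned above.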

Using the basic spectrometer trajectory equation for motion in an ideal 1/r potential derived in Eq. (101) of part I [T.J.M. Zouros, E.P. Benis, J. Electron Spectrosc. Relat. Phenom. 125 (2002) 221], the operational characteristics of a hemispherical deflector analyser (HDA) such as dispersion, energy resolution, energy calibration, input lens magnification and energy acceptance window are investigated from first principles. These characteristics are studied as a function of the entry point R_0 and the nominal value of the potential V(R_0) at entry. Electron-optics simulations and actual laboratory measurements are compared to our theoretical results for an ideal biased paracentric HDA using a four-element zoom lens and a two-dimensional position sensitive detector (2D-PSD). These results should be of particular interest to users of modern HDAs utilizing a PSD.

The steady states of a Dirac particle under an external electromagnetic potential, described by A_μ and its derivatives, are presented. Coupling constants are given by quantum electrodynamics. Through a Lagrangian density, a Dirac equation is obtained with terms which describe the Lamb shift and the interaction of the anomalous magnetic moment with the nuclear magnetic moment. These effects are treated as perturbations, with the condition Zα

The article sums up a number of points made by the author concerning the response to Darwinism in the late nineteenth and early twentieth centuries, and repeats the claim that a proper understanding of the theory's impact must take account of the extent to which what are now regarded as the key aspects of Darwin's thinking were evaded by his immediate followers. Potential challenges to this position are described and responded to.

The transformation assigning to every point its inverse with respect to a circle with given radius and center is called an inversion. Discusses inversion with respect to points, circles, angles, distances, space, and the parallel postulate. Exercises related to these topics are included. (MDH)
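The defining relation of an inversion is that a point P maps to the point P' on the ray from the center O through P with |OP| · |OP'| = r². A minimal sketch in coordinates (function name and data are illustrative):

```python
# Inversion of a point in a circle with given center and radius:
# P' lies on the ray OP and satisfies |OP| * |OP'| = r^2.
import math

def invert(point, center, radius):
    px, py = point
    cx, cy = center
    dx, dy = px - cx, py - cy
    d2 = dx * dx + dy * dy          # |OP|^2; the center itself has no image
    scale = radius * radius / d2    # chosen so that |OP| * |OP'| = r^2
    return (cx + scale * dx, cy + scale * dy)

# A point at distance 2 from the unit circle's center maps to distance 1/2.
p = invert((2.0, 0.0), (0.0, 0.0), 1.0)  # -> (0.5, 0.0)
```

Applying `invert` twice returns the original point, reflecting the fact that an inversion is an involution.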

Iran's new government has not yet made a final decision about the fate of that country's once ambitious nuclear power programme. If the programme is kept alive, it will be limited to the completion of at most one or two of the reactors that were already well underway when the revolution broke out. The author traces the origins and growth of the Iranian nuclear power programme between 1974 and 1978, summarizes the principal economic, infrastructural, and political criticisms of the programme as originally planned, discusses the potential for greater use of natural gas as an alternative and, finally, recommends a long, detailed reassessment of Iran's energy options. (author)

A model is proposed for pulsar optical and gamma-ray emission in which relativistic electron beams: (i) scatter the blackbody photons from the polar cap surface, giving inverse Compton gamma-rays, and (ii) produce synchrotron optical photons in the light cylinder region which are then inverse Compton scattered, giving other gamma-rays. The model is applied to the Vela pulsar, explaining the first gamma-ray pulse by inverse Compton scattering of synchrotron photons near the light cylinder, and the second gamma-ray pulse partly by inverse Compton scattering of synchrotron photons and partly by inverse Compton scattering of the thermal blackbody photons near the star surface. (author)

A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.

The GPS meteorology (GPS/MET) experiment, led by the University Corporation for Atmospheric Research (UCAR), consists of a GPS receiver aboard a low earth orbit (LEO) satellite which was launched on 3 April 1995. During a radio occultation, the LEO satellite rises or sets relative to one of the 24 GPS satellites at the Earth's horizon. The atmospheric layers are thereby successively sounded by radio waves which propagate from the GPS satellite to the LEO satellite. From the observed phase path increases, which are due to refraction of the radio waves by the ionosphere and the neutral atmosphere, the atmospheric parameters refractivity, density, pressure and temperature are calculated with high accuracy and resolution (0.5–1.5 km). In the present study, practical aspects of the GPS/MET data analysis are discussed. The retrieval is based on the Abelian integral inversion of the atmospheric bending angle profile into the refractivity profile. The problem of the upper boundary condition of the Abelian integral is illustrated by examples. The statistical optimization approach applied to the data above 40 km and the use of topside bending angle profiles from model atmospheres stabilize the inversion. The retrieved temperature profiles are compared with corresponding profiles which have already been calculated by scientists of UCAR and the Jet Propulsion Laboratory (JPL), likewise using Abelian integral inversion. The comparison shows that in some cases large differences occur (5 K and more), probably due to different treatment of the upper boundary condition, data outliers and noise. Several temperature profiles with wavelike structures at tropospheric and stratospheric heights are shown. While the periodic structures at upper stratospheric heights could be caused by residual errors of the ionospheric correction method, the periodic temperature fluctuations at heights below 30 km are most likely caused by atmospheric waves (vertically

We revisit the visibility problem, traditionally known in the computer graphics and vision fields as the process of computing the (potentially) visible set of primitives in the computational model of a scene. We propose a hybrid solution that uses a dry structure (in the sense of data reduction), a triangulation of type J1a, to accelerate the task of searching for visible primitives. The solution is useful for real-time, on-line, interactive applications such as 3D visualization, in which the main goal is to load as few primitives from the scene as possible during the rendering stage. For this purpose, our algorithm performs culling using a hybrid paradigm based on viewing-frustum, back-face culling and occlusion models. Results have shown substantial improvement over these traditional approaches applied separately. This novel approach can be used on devices with no dedicated graphics processors or with low processing power, such as cell phones or embedded displays, or to visualize data over the Internet, as in virtual museum applications.

Being the second most abundant molecule in the ISM, CO has been well observed and studied as a tracer for many astrophysical processes. Highly rovibrationally excited CO emission is used to reveal features in intense UV-irradiated regions such as the inner rim of protoplanetary disks, carbon star envelopes, and star forming regions. Collisional rate coefficients are crucial for non-local thermodynamic equilibrium (NLTE) molecular analysis in such regions, while data for high rovibrational levels of CO were previously unavailable. Here we revisit CO excitation properties with comprehensive collisional data including high rovibrational states (up to v = 5 and J = 40) colliding with H2, H and He, in various NLTE astrophysical environments, using the spectral modeling packages RADEX and Cloudy. We studied line ratio diagnostics between low- and high-vibrational transitions with RADEX. Using Cloudy, we investigated molecular properties in complex environments, such as photodissociation regions and the outflow of the carbon star IRC+10216, illustrating the potential for utilizing high rovibrational NLTE analysis in future astrophysical modeling. This work was supported by NASA Grants NNX15AI61G and NNX16AF09G.

Evans blue (EB) dye has had a long history as a biological dye and diagnostic agent since its first staining application by Herbert McLean Evans in 1914. Due to its high water solubility and slow excretion, as well as its tight binding to serum albumin, EB has been widely used in biomedicine, including in estimating blood volume and vascular permeability, detecting lymph nodes, and localizing tumor lesions. Recently, a series of EB derivatives have been labeled with PET isotopes and can be used as theranostics with broad potential due to their improved half-life in the blood and reduced release. Some EB derivatives have even reached translational applications in the clinic. In addition, a novel necrosis-avid feature of EB has recently been reported in preclinical animal studies. Given all these interesting and important advances, this review comprehensively revisits EB and its biomedical applications.

The study of the Inverse Free-Electron Laser, as a potential mode of electron acceleration, has been pursued at Brookhaven National Laboratory for a number of years. More recent studies focused on the development of a low energy (few GeV), high gradient, multistage linear accelerator. The authors are presently designing a short accelerator module which will make use of the 50 MeV linac beam and high power (2 × 10^11 W) CO2 laser beam of the Accelerator Test Facility (ATF) at the Center for Accelerator Physics (CAP), Brookhaven National Laboratory. These elements will be used in conjunction with a fast excitation (300 μs pulse duration) variable period wiggler, to carry out an accelerator demonstration stage experiment.

Context: If ankle joint cryotherapy impairs the ability of the ankle musculature to counteract potentially injurious forces, the ankle is left vulnerable to injury. Objective: To compare peroneal reaction to sudden inversion following ankle joint cryotherapy. Design: Repeated measures design with independent variables treatment (cryotherapy and control) and time (baseline, immediately post treatment, 15 minutes post treatment, and 30 minutes post treatment). Setting: University research laboratory. Participants: Twenty-seven healthy volunteers. Intervention: An ice bag was secured to the lateral ankle joint for 20 minutes. Main Outcome Measures: The onset and average root mean square amplitude of EMG activity in the peroneal muscles was calculated following the release of a trap door mechanism causing inversion. Results: There was no statistically significant change from baseline for peroneal reaction time or average peroneal muscle activity at any post treatment time. Conclusions: Cryotherapy does not affect peroneal muscle reaction following sudden inversion perturbation.

Inversions are DNA rearrangements that are essential for plant gene evolution and adaptation to environmental changes. We demonstrate the creation of targeted inversions and previously reported targeted deletion mutations via delivery of a pair of RNA-guided endonucleases (RGENs) of CRISPR/Cas9. The efficiencies of the targeted inversions were 2.6% and 2.2% in the Arabidopsis FLOWERING TIME (AtFT) and TERMINAL FLOWER 1 (AtTFL1) loci, respectively. Thus, we successfully established an approach that can potentially be used to introduce targeted DNA inversions of interest for functional studies and crop improvement.

Biomarkers of replicative senescence can be defined as those ultrastructural and physiological variations, as well as molecules whose changes in expression, activity or function correlate with aging, as a result of the gradual exhaustion of replicative potential and a state of permanent cell cycle arrest. The biomarkers that characterize the path to an irreversible state of cell cycle arrest due to proliferative exhaustion may also be shared by other forms of senescence-inducing mechanisms. Validation of senescence markers is crucial in circumstances where quiescence or temporary growth arrest may be triggered or is thought to be induced. Pre-senescence biomarkers are also important to consider, as their presence indicates that induction of aging processes is taking place. The bona fide pathway leading to replicative senescence that has been extensively characterized is a consequence of gradual reduction…

Motivated by recent experimental results and ongoing measurements, we review the chiral perturbation theory prediction for K_L → π^∓ e^± ν_e γ decays. Special emphasis is given to the stability of the inner-bremsstrahlung-dominated relative branching ratio versus the K_e3 form factors, and to the separation of the structure-dependent amplitude in differential distributions over the phase space. For the structure-dependent terms, an assessment of the order p^6 corrections is given; in particular, a full next-to-leading order calculation of the axial component is performed. The experimental analysis of the photon energy spectrum is discussed, and other potentially useful distributions are introduced. (orig.)

Dispatchability is an important property for the efficient execution of temporal plans where the temporal constraints are represented as a Simple Temporal Network (STN). It has been shown that every STN may be reformulated as a dispatchable STN, and dispatchability ensures that the temporal constraints need only be satisfied locally during execution. Recently it has also been shown that Simple Temporal Networks with Uncertainty, augmented with wait edges, are Dynamically Controllable provided every projection is dispatchable. Thus, the dispatchability property has both theoretical and practical interest. One thing that hampers further work in this area is the underdeveloped theory. The existing definitions are expressed in terms of algorithms, and are less suitable for mathematical proofs. In this paper, we develop a new formal theory of dispatchability in terms of execution sequences. We exploit this to prove a characterization of dispatchability involving the structural properties of the STN graph. This facilitates the potential application of the theory to uncertainty reasoning.
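A precondition for dispatchability is plain consistency of the STN, which can be checked on its distance graph with an all-pairs shortest-path computation: the network is consistent iff that graph has no negative cycle. A minimal sketch (the events and constraints below are hypothetical examples, not taken from the paper):

```python
# STN consistency check via Floyd-Warshall on the distance graph.
# Each constraint t_v - t_u <= w becomes a weighted edge u -> v.
INF = float("inf")

def is_consistent(n, edges):
    """n time points, edges: list of (u, v, w) meaning t_v - t_u <= w."""
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # A negative diagonal entry signals a negative cycle -> inconsistent.
    return all(d[i][i] >= 0 for i in range(n))

# The interval constraint t1 - t0 in [1, 3] is encoded as two edges:
# t1 - t0 <= 3 and t0 - t1 <= -1.
ok = is_consistent(2, [(0, 1, 3), (1, 0, -1)])    # satisfiable
bad = is_consistent(2, [(0, 1, 1), (1, 0, -2)])   # t1-t0 <= 1 and t1-t0 >= 2
```

Dispatchability is a stronger property than this (it concerns which constraints must be checked locally at execution time), but the same distance-graph view underlies both.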

We present in detail an extended resolvent approach for investigating linear problems associated to 2+1 dimensional integrable equations. Our presentation is based as an example on the nonstationary Schrödinger equation with potential being a perturbation of the one-soliton potential by means of a decaying two-dimensional function. Modification of the inverse scattering theory as well as properties of the Jost solutions and spectral data as follows from the resolvent approach are given.

Highlights: • The paper applies Extreme Learning Machines (ELMs) to inverse reactor problems. • A multi-group transport model is used for the inversion, as opposed to point kinetics. • ELMs are compared against Artificial Neural Networks (ANNs). • Various options are tested to improve the reliability of the estimation. • Results highlight the potential of the ELM approach. - Abstract: The paper presents the application of Extreme Learning Machines (ELMs) for inverse reactor kinetics applications. ELMs were proposed by Huang and co-workers (2004, 2006a,b, 2015), who showed their enhanced capabilities in terms of training speed and generalization with respect to classical Artificial Neural Networks (ANNs). ELMs are here implemented for reactivity determination as an alternative to ANNs (e.g. Picca et al. (2008)) and Gaussian Processes (Picca and Furfaro, 2012). After a review of the main features of ELMs, their application to inverse kinetic problems is proposed. The ELM performance is tested on a typical accelerator-driven system configuration (the Yalina reactor) and the inversion is carried out on an accurate kinetic model (multi-group transport).
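The ELM idea itself is simple: the hidden-layer weights are drawn at random and frozen, and only the linear readout is fitted, which reduces training to a single least-squares solve. The following is a pure-Python toy sketch of that scheme (not the reactor application); sizes, seed, and the target function are invented for illustration:

```python
# Toy Extreme Learning Machine: random frozen tanh hidden layer,
# linear output weights fitted by solving the normal equations.
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def elm_fit(xs, ys, hidden=10, seed=0):
    rng = random.Random(seed)
    W = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(hidden)]

    def features(x):                      # random, never-trained hidden layer
        return [math.tanh(w * x + b) for w, b in W]

    H = [features(x) for x in xs]         # design matrix, n x hidden
    # Normal equations (H^T H) beta = H^T y, with a tiny ridge for stability.
    A = [[sum(H[n][i] * H[n][j] for n in range(len(xs))) for j in range(hidden)]
         for i in range(hidden)]
    for i in range(hidden):
        A[i][i] += 1e-8
    b = [sum(H[n][i] * ys[n] for n in range(len(xs))) for i in range(hidden)]
    beta = solve(A, b)
    return lambda x: sum(c * f for c, f in zip(beta, features(x)))

# Fit y = sin(x) on a coarse grid; only the output layer is "trained".
xs = [i / 10 for i in range(-20, 21)]
ys = [math.sin(x) for x in xs]
model = elm_fit(xs, ys)
```

Because the only trainable parameters enter linearly, there is no iterative backpropagation, which is the source of the training-speed advantage the authors cite.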

Inverse treatment planning starts with a treatment objective and obtains the solution by optimizing an objective function. The clinical objectives are usually multifaceted and potentially incompatible with one another. A set of importance factors is often incorporated in the objective function to parametrize trade-off strategies and to prioritize the dose conformality in different anatomical structures. Whereas the general formalism remains the same, different sets of importance factors characterize plans of obviously different flavour and thus critically determine the final plan. Up to now, the determination of these parameters has been a 'guessing' game based on empirical knowledge because the final dose distribution depends on the parameters in a complex and implicit way. The influence of these parameters is not known until the plan optimization is completed. In order to compromise properly the conflicting requirements of the target and sensitive structures, the parameters are usually adjusted through a trial-and-error process. In this paper, a method to estimate these parameters computationally is proposed and an iterative computer algorithm is described to determine these parameters numerically. The treatment plan selection is done in two steps. First, a set of importance factors are chosen and the corresponding beam parameters (e.g. beam profiles) are optimized under the guidance of a quadratic objective function using an iterative algorithm reported earlier. The 'optimal' plan is then evaluated by an additional scoring function. The importance factors in the objective function are accordingly adjusted to improve the ranking of the plan. For every change in the importance factors, the beam parameters need to be re-optimized. This process continues in an iterative fashion until the scoring function is saturated. The algorithm was applied to two clinical cases and the results demonstrated that it has the potential to improve significantly the existing method of

We revisit a special model of gauge mediated supersymmetry breaking, the “R-invariant direct gauge mediation.” We pay particular attention to whether the model is consistent with the minimal model of the μ-term, i.e., a simple mass term of the Higgs doublets in the superpotential. Although the incompatibility is highlighted in view of the current experimental constraints on the superparticle masses and the observed Higgs boson mass, the minimal μ-term can be consistent with the R-invariant gauge mediation model via a careful choice of model parameters. We derive an upper limit on the gluino mass from the observed Higgs boson mass. We also discuss whether the model can explain the 3σ excess of the Z+jets+E_T^miss events reported by the ATLAS collaboration.

This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse conductivity can be explicitly computed and coincides with the classical result. In the general case, using magnetic perturbation theory, the conductivity tensor is expanded in powers of the strength of the magnetic field $B$. Then the linear term in $B$ of this expansion is written down in terms of the zero magnetic field Green function and the zero field current operator. In the periodic case, the linear term in $B$ of the conductivity tensor is expressed in terms of zero magnetic field Bloch functions and energies. No derivatives with respect to the quasimomentum appear and thereby all ambiguities are removed, in contrast to earlier work.

We revisit the calculation of instanton effects in correlation functions in N=4 SYM involving the Konishi operator and operators of twist two. Previous studies revealed that the scaling dimensions and the OPE coefficients of these operators do not receive instanton corrections in the semiclassical approximation. We go beyond this approximation and demonstrate that, while operators belonging to the same N=4 supermultiplet ought to have the same conformal data, the evaluation of quantum instanton corrections for one operator can be mapped into a semiclassical computation for another operator in the same supermultiplet. This observation allows us to compute explicitly the leading instanton correction to the scaling dimension of operators in the Konishi supermultiplet as well as to their structure constants in the OPE of two half-BPS scalar operators. We then use these results, together with crossing symmetry, to determine instanton corrections to scaling dimensions of twist-four operators with large spin.

New physics contributions to the Z penguin are revisited in the light of the recently reported discrepancy of the direct CP violation in K → ππ. Interference effects between the standard model and new physics contributions to ΔS = 2 observables are taken into account. Although these effects are overlooked in the literature, they make the experimental bounds significantly more severe. It is shown that the new physics contributions must be tuned to enhance B(K_L → π^0 ν ν̄) if the discrepancy of the direct CP violation is to be explained while satisfying the experimental constraints. The branching ratio can be as large as 6 × 10^−10 when the contributions are tuned at the 10% level.

We revisit the derivation of the density of states of sparse random matrices. We derive a recursion relation that allows one to compute the spectrum of the matrix of incidence for finite trees that determines completely the low concentration limit. Using the iterative scheme introduced by Biroli and Monasson [J. Phys. A 32, L255 (1999)] we find an approximate expression for the density of states expected to hold exactly in the opposite limit of large but finite concentration. The combination of the two methods yields a very simple geometric interpretation of the tails of the spectrum. We test the analytic results with numerical simulations and we suggest an indirect numerical method to explore the tails of the spectrum. (author)

A coupling between a light scalar field and neutrinos has been widely discussed as a mechanism for linking (time varying) neutrino masses and the present energy density and equation of state of dark energy. However, it has been pointed out that the viability of this scenario in the non-relativistic neutrino regime is threatened by the strong growth of hydrodynamic perturbations associated with a negative adiabatic sound speed squared. In this paper we revisit the stability issue in the framework of linear perturbation theory in a model independent way. The criterion for the stability of a model is translated into a constraint on the scalar-neutrino coupling, which depends on the ratio of the energy densities in neutrinos and cold dark matter. We illustrate our results by providing meaningful examples both for stable and unstable models. (orig.)

Systems biology is a new discipline built upon the premise that an understanding of how cells and organisms carry out their functions cannot be gained by looking at cellular components in isolation. Instead, consideration of the interplay between the parts of systems is indispensable for analyzing, modeling, and predicting systems' behavior. Studying biological processes under this premise, systems biology combines experimental techniques and computational methods in order to construct predictive models. Both in building and utilizing models of biological systems, inverse problems arise at several occasions, for example, (i) when experimental time series and steady state data are used to construct biochemical reaction networks, (ii) when model parameters are identified that capture underlying mechanisms or (iii) when desired qualitative behavior such as bistability or limit cycle oscillations is engineered by proper choices of parameter combinations. In this paper we review principles of the modeling process in systems biology and illustrate the ill-posedness and regularization of parameter identification problems in that context. Furthermore, we discuss the methodology of qualitative inverse problems and demonstrate how sparsity enforcing regularization allows the determination of key reaction mechanisms underlying the qualitative behavior. (topical review)
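The ill-posedness of parameter identification mentioned in case (ii) can be illustrated with a toy Tikhonov (ridge) regularization example: when two fitted parameters enter the model almost interchangeably, the unregularized least-squares estimate is wildly unstable, while a small penalty restores sensible values. The two-parameter model and all numbers below are invented for illustration:

```python
# Toy parameter identification: y ~ k1*a + k2*b with nearly collinear
# regressors a and b, solved via the 2x2 normal equations, optionally
# with a Tikhonov (ridge) penalty added to the diagonal.

def fit_2param(rows, ys, ridge=0.0):
    s11 = sum(a * a for a, b in rows) + ridge
    s12 = sum(a * b for a, b in rows)
    s22 = sum(b * b for a, b in rows) + ridge
    r1 = sum(a * y for (a, b), y in zip(rows, ys))
    r2 = sum(b * y for (a, b), y in zip(rows, ys))
    det = s11 * s22 - s12 * s12
    return ((s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det)

# a and b are almost identical, so only k1 + k2 is well determined by the
# (slightly noisy) data, which satisfy roughly y = a + b.
rows = [(1.0, 1.001), (2.0, 2.001), (3.0, 2.999)]
ys = [2.0, 4.01, 5.98]
naked = fit_2param(rows, ys)              # unstable, large opposite-sign values
ridged = fit_2param(rows, ys, ridge=0.1)  # both parameters close to 1
```

The penalty biases the estimate slightly but collapses the huge variance along the poorly determined direction, which is the essence of the regularization methods reviewed in the paper.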

The seismic reflection method seeks to extract maps of the Earth's sedimentary crust from transient near-surface recording of echoes, stimulated by explosions or other controlled sound sources positioned near the surface. Reasonably accurate models of seismic energy propagation take the form of hyperbolic systems of partial differential equations, in which the coefficients represent the spatial distribution of various mechanical characteristics of rock (density, stiffness, etc). Thus the fundamental problem of reflection seismology is an inverse problem in partial differential equations: to find the coefficients (or at least some of their properties) of a linear hyperbolic system, given the values of a family of solutions in some part of their domains. The exploration geophysics community has developed various methods for estimating the Earth's structure from seismic data and is also well aware of the inverse point of view. This article reviews mathematical developments in this subject over the last 25 years, to show how the mathematics has both illuminated innovations of practitioners and led to new directions in practice. Two themes naturally emerge: the importance of single scattering dominance and compensation for spectral incompleteness by spatial redundancy. (topical review)

It is rarely taught in an undergraduate or even graduate curriculum that the only conformal maps in Euclidean space of dimension greater than two are those generated by similarities and inversions in spheres. This is in stark contrast to the wealth of conformal maps in the plane. The principal aim of this text is to give a treatment of this paucity of conformal maps in higher dimensions. The exposition includes both an analytic proof in general dimension and a differential-geometric proof in dimension three. For completeness, enough complex analysis is developed to prove the abundance of conformal maps in the plane. In addition, the book develops inversion theory as a subject, along with the auxiliary theme of circle-preserving maps. A particular feature is the inclusion of a paper by Carathéodory with the remarkable result that any circle-preserving transformation is necessarily a Möbius transformation; not even the continuity of the transformation is assumed. The text is at the level of advanced undergr...

The LHC is enjoying a confluence of twos. This morning (Friday 5 August) we passed 2 inverse femtobarns delivered in 2011; the peak luminosity is now just over 2 × 10^33 cm^-2 s^-1; and recently fill 2000 was in for nearly 22 hours and delivered around 90 inverse picobarns, almost twice 2010's total. In order to increase the luminosity we can increase the number of bunches, increase the number of particles per bunch, or decrease the transverse beam size at the interaction point. The beam size can be tackled in two ways: either reduce the size of the injected bunches or squeeze harder with the quadrupole magnets situated on either side of the experiments. Having increased the number of bunches to 1380, the maximum possible with a 50 ns bunch spacing, a one day meeting in Crozet decided to explore the other possibilities. The size of the beams coming from the injectors has been reduced to the minimum possible. This has brought an increase in the peak luminosity of about 50% and the 2 × 10^33 cm...

Experimental developments principally concerning electron sources for inverse photoemission are presented. The specifications of the electron beam are derived from experiment requirements, taking into account the limitations encountered (space charge divergence). For a wave vector resolution of 0.2 Å^-1, the maximum current is 25 microA at 20 eV. The design of a gun providing such a beam in the range 5 to 50 eV is presented. Angle-resolved inverse photoemission experiments show angular effects at 30 eV. For an energy of 10 eV, angular effects should be stronger, but the low efficiency of the spectrometer in this range makes the experiments difficult. The total energy resolution of 0.3 eV is the result mainly of electron energy spread, as expected. The electron sources are based on field effect electron emission from a cathode consisting of a large number of microtips. The emission arises from a few atomic cells for each tip. The ultimate theoretical energy spread is 0.1 eV. This value is not attained because of an interface resistance problem. A partial solution of this problem allows measurement of an energy spread of 0.9 eV for a current of 100 microA emitted at 60 eV. These cathodes have a further advantage in that emission can occur at a low temperature.

Inverse scattering theory has been applied to construct the interaction potentials from total cross sections as a function of energy for electrons scattered off atoms and molecules. The underlying potentials are assumed to be real and energy independent and are evaluated using the Eikonal approximation and with real phase shifts determined from the total cross sections. The inversion potentials have been determined using either a high-energy-limit approximation or a fixed-energy inversion method at select energies. These procedures have been used to analyse e⁻-CH₄, e⁻-SiH₄, e⁻-Kr and e⁻-Xe scattering data in particular. 14 refs., 1 tab., 3 figs.

In this work we develop a new version of the fuzzy bag model. The main idea is to include the conservation of energy and momentum in the model. This feature is not included in the original formulation of the fuzzy bag model, but is of paramount importance to interpret the model as being a bag model - that is, a model in which the outward pressure of the quarks inside the bag is balanced by the inward pressure of the non-perturbative vacuum outside the bag - as opposed to a relativistic potential model, in which there is no energy-momentum conservation. In the MIT bag model, as well as in the original version of the fuzzy bag model, the non-perturbative QCD vacuum is parametrized by a constant B in the Lagrangian density. One immediate consequence of including energy-momentum conservation in the fuzzy bag model is that the bag constant B will acquire a radial dependence, B = B(r). (author)

The objective of this study was to assess the clinical value of pelvimetry to predict dystocia due to cephalopelvic disproportion. 63 patients who had received an abdominal CT scan postpartum were included. Pelvimetry was performed retrospectively with these datasets on a 3D workstation; there were no CT examinations performed solely for pelvimetry, and there was no radiation exposure for study purposes. Patients were divided into three groups by the course of birth, i.e. normal vaginal delivery (A), dystocia due to cephalopelvic disproportion (B) and other patients (C). Previously described methods were evaluated for their accuracy in diagnosing cephalopelvic disproportion. The pelvimetric parameters did not show significant differences between groups A (n = 20) and B (n = 20) except for the sagittal mid-pelvic diameter (q) with 12.7 ± 0.6 cm vs. 11.9 ± 0.6 cm (p = 0.0001). The ROC analysis of the previously described methods showed areas under the curve between 0.50 and 0.67. The ROC curves for q had an area of 0.88, providing 85% sensitivity with 85% specificity. In conclusion, the sagittal mid-pelvic diameter shows potential to detect cephalopelvic disproportion with acceptable accuracy. With the information gained on the CT data, a prospective trial based on MR imaging can be set up to validate the diagnostic accuracy.

The purpose of this text is to present the theory and mathematics of inverse scattering, in a simple way, to the many researchers and professionals who use it in their everyday research. While applications range across a broad spectrum of disciplines, examples in this text will focus primarily, but not exclusively, on acoustics. The text will be especially valuable for those applied workers who would like to delve more deeply into the fundamentally mathematical character of the subject matter. Practitioners in this field comprise applied physicists, engineers, and technologists, whereas the theory is almost entirely in the domain of abstract mathematics. This gulf between the two, if bridged, can only lead to improvement in the level of scholarship in this highly important discipline. This is the book's primary focus.

Literature pertaining to the effects of cannabis use and health which has been published during the past 11 years has been reviewed. Many older concerns about adverse effects on health (chromosomal damage, 'cannabinol psychosis', endocrine abnormalities, cardiac events, impaired immunity) no longer seem to elicit much interest. Continuing concerns about the adverse cognitive effects of chronic use indicate that these can be demonstrated by proper testing; some studies suggest that they may be long-lasting. Although cannabis does not produce a specific psychosis, the possibility exists that it may exacerbate schizophrenia in persons predisposed to that disorder. However, evidence from retrospective surveys must always be questioned. Tolerance and dependence have occurred in man, confirming previous findings in many other species. Addiction tends to be mild and is probably less severe than with other social drugs. Driving under the influence of cannabis is impaired acutely; how long such impairments last is still unknown. More exacting tasks, such as flying an airplane, may be impaired for as long as 24 hours. While there is no doubt that marijuana smoke contains carcinogens, an increase in cancer among users has thus far been anecdotal. Because of the long latent period between cancer induction and initiation of cigarette smoking, the full story is yet to be told. Marijuana use during pregnancy is not advised although the consequences are usually not greater than those of smoking cigarettes, and far less than those from alcohol use. Whether smoked marijuana should become a therapeutic agent requires a cost-benefit analysis of the potential benefits versus the adverse effects of such use as we now know them.

At the root of science lie basic rules, if we can discover or deduce them. This is not an abstract project but practical; if we can understand the why then perhaps we can rationally intervene. One of the unifying unsolved problems in physics is the hypothetical "Theory of Everything." In a similar vein, we can ask whether our own field contains such hidden fundamental truths and, if so, how we can use them to develop better therapies and outcomes for our patients. Modern oncology has developed as drugs and translational science have matured over the 50 years since ASCO's founding, but almost from that beginning tumor modeling has been a key tool. Through this general approach Norton and Simon changed our understanding of cancer biology and response to therapy when they described the fit of Gompertzian curves to both clinical and animal observations of tumor growth. The practical relevance of these insights has only grown with the development of DNA sequencing promising a raft of new targets (and drugs). In that regard, Larry Norton's contribution to this year's Educational Book reminds us to always think creatively about the fundamental problems of tumor growth and metastases as well as therapeutic response. Demonstrating the creativity and thoughtfulness that have marked his remarkable career, he now incorporates a newer concept of self-seeding to further explain why Gompertzian growth occurs and, in the process, provides a novel potential therapeutic target. As you read his elegantly presented discussion, consider how this understanding, wisely applied to the modern era of targeted therapies, might speed the availability of better treatments. But even more instructive is his personal model-not only the Norton-Simon Hypothesis-of how to live and approach science, biology, patients and their families, as well as the broader community. He shows that with energy, enthusiasm, optimism, intellect, and hard work we can make the world better. Clifford A. Hudis, MD, FACP

are a reasonable model of the signal processing performed by the human cochlea. The robustness of the reconstruction from such spectrograms with regards to the properties of the cochlear model showed that, for previously documented IHC models as well as for more restrictive conditions, the TFS-related information...

Based on the concept of generalized transformation operators a new hierarchy of Dirac equations with spherically symmetric scalar and fourth-component vector potentials is presented. Within this hierarchy closed-form expressions for the solutions, the potentials and the S-matrix can be given in terms of solutions of the original Dirac equation. Using these transformations an inverse scattering scheme has been constructed for the Dirac equation which is the analogue of the rational scheme in the non-relativistic case. The given method provides for the first time an inversion scheme with closed-form expressions for the S-matrix for non-relativistic scattering problems with central and spin-orbit potentials. (author)

A successful full wavenumber inversion (FWI) implementation updates the low-wavenumber model components first for proper wavefield propagation description, and slowly adds the high-wavenumber, potentially scattering parts of the model. The low-wavenumber components can be extracted from the transmission parts of the recorded data given by direct arrivals or the transmission parts of the single- and double-scattering wavefields developed from a predicted scattered field. We develop a combined inversion of data modeled from the source and those corresponding to single and double scattering to update both the velocity model and the component of the velocity (perturbation) responsible for the single and double scattering. The combined inversion helps us access most of the potential model wavenumber information that may be embedded in the data. A scattering angle filter is used to divide the gradient of the combined inversion so that initially the high-wavenumber (low scattering angle) components of the gradient are directed to the perturbation model and the low-wavenumber (high scattering angle) components to the velocity model. As our background velocity matures, the scattering angle divide is slowly lowered to allow more of the higher wavenumbers to contribute to the velocity model.

In this paper, we mainly study a cooperative search and coverage algorithm for a given bounded rectangular region, which contains several unknown stationary targets, by a team of unmanned aerial vehicles (UAVs) with non-ideal sensors and limited communication ranges. Our goal is to minimize the search time, while gathering more information about the environment and finding more targets. For this purpose, a novel cooperative search and coverage algorithm with a controllable revisit mechanism is presented. Firstly, as the representation of the environment, the cognitive maps, which include the target probability map (TPM), the uncertain map (UM), and the digital pheromone map (DPM), are constituted. We also design a distributed update and fusion scheme for the cognitive map. This update and fusion scheme can guarantee that each one of the cognitive maps converges to the same one, which reflects the targets' true existence or absence in each cell of the search region. Secondly, we develop a controllable revisit mechanism based on the DPM. This mechanism can concentrate the UAVs to revisit sub-areas that have a large target probability or high uncertainty. Thirdly, in the frame of distributed receding horizon optimization, a path planning algorithm for the multi-UAV cooperative search and coverage is designed. In the path planning algorithm, the movement of the UAVs is restricted by the potential fields to meet the requirements of avoiding collision and maintaining connectivity constraints. Moreover, using the minimum spanning tree (MST) topology optimization strategy, we can obtain a tradeoff between search coverage enhancement and connectivity maintenance. The feasibility of the proposed algorithm is demonstrated by comparison simulations analyzing the effects of the controllable revisit mechanism and the connectivity maintenance scheme. The Monte Carlo method is employed to validate the influence of the number of UAVs, the sensing radius

NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input
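
The iterate-and-stop logic described above (perform forward-model runs and stop when the improvement in rms deviation is below 1% for two consecutive iterations) can be sketched as follows; the forward model and update rule here are illustrative stand-ins, not the Shear-APA model itself.

```python
import numpy as np

def rms(residual):
    """Root-mean-square of a residual vector."""
    return float(np.sqrt(np.mean(np.square(residual))))

def invert(forward, update, params, data, tol=0.01, max_iter=100):
    """Iterate forward-model runs; stop when the rms improvement is below
    tol (1%) for two consecutive iterations, mirroring the criterion above."""
    prev = rms(data - forward(params))
    small = 0                                # consecutive low-improvement iterations
    for _ in range(max_iter):
        params = update(params, data)
        cur = rms(data - forward(params))
        improvement = (prev - cur) / prev if prev > 0 else 0.0
        small = small + 1 if improvement < tol else 0
        if small >= 2:
            break
        prev = cur
    return params

# Toy usage: recover the slope of a line from "observed" data.
x = np.linspace(0.0, 1.0, 20)
data = 3.0 * x
forward = lambda p: p * x
update = lambda p, d: p + (d - p * x) @ x / (x @ x)   # exact least-squares step
print(invert(forward, update, 0.0, data))
```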

This paper analyzes the reconstruction of diffusion and absorption parameters in an elliptic equation from knowledge of internal data. In the application of photoacoustics, the internal data are the amount of thermal energy deposited by high frequency radiation propagating inside a domain of interest. These data are obtained by solving an inverse wave equation, which is well studied in the literature. We show that knowledge of two internal data based on well-chosen boundary conditions uniquely determines two constitutive parameters in diffusion and Schrödinger equations. Stability of the reconstruction is guaranteed under additional geometric constraints of strict convexity. No geometric constraints are necessary when 2n internal data for well-chosen boundary conditions are available, where n is spatial dimension. The set of well-chosen boundary conditions is characterized in terms of appropriate complex geometrical optics solutions

Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an "intentional stance" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a "teleological stance" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.
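
The core inversion step, computing a posterior over goals from observed actions under a noisily rational policy, can be sketched in a toy 1-D world (the world, the softmax policy, and the priors below are illustrative assumptions, far simpler than the paper's maze stimuli):

```python
import math

goals = [0, 7]                             # two candidate goal positions on a line
prior = {0: 0.5, 7: 0.5}                   # uniform prior over goals
beta = 2.0                                 # rationality: higher = more deterministic

def policy(pos, goal):
    """P(action | pos, goal) for moves -1/+1, softmax on distance reduction."""
    scores = {a: -abs((pos + a) - goal) for a in (-1, +1)}
    z = sum(math.exp(beta * s) for s in scores.values())
    return {a: math.exp(beta * s) / z for a, s in scores.items()}

def posterior(start, actions):
    """P(goal | observed action sequence), by Bayes' rule over the policy."""
    post = dict(prior)
    pos = start
    for a in actions:
        for g in goals:
            post[g] *= policy(pos, g)[a]   # likelihood of each observed move
        pos += a
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

post = posterior(start=4, actions=[+1, +1])
print(post)   # moves toward position 7 shift belief toward goal 7
```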

The corrosion of aluminum current collectors and the oxidation of solvents at a relatively high potential have been widely investigated with an aim to stabilize the electrochemical performance of lithium-ion batteries using such components. The corrosion behavior of aluminum current collectors was revisited using a home-built high-precision electrochemical measurement system, and the impact of electrolyte components and the surface protection layer on aluminum foil was systematically studied. The electrochemical results showed that the corrosion of aluminum foil was triggered by the electrochemical oxidation of solvent molecules, like ethylene carbonate, at a relatively high potential. The organic radical cations generated from the electrochemical oxidation are energetically unstable and readily undergo a deprotonation reaction that generates protons and promotes the dissolution of Al³⁺ from the aluminum foil. This new reaction mechanism can also shed light on the dissolution of transition metals at high potentials.

Online health communication has the potential to reach large audiences, with the additional advantages that it can be operational at all times and that the costs per visitor are low. Furthermore, research shows that Internet-delivered interventions can be effective in changing health behaviors. However, exposure to Internet-delivered health-communication programs is generally low. Research investigating predictors of exposure is needed to be able to effectively disseminate online interventions. In the present study, the authors used a longitudinal design with the aim of identifying demographic, psychological, and behavioral predictors of visiting, using, and revisiting an online program promoting physical activity in the general population. A webpage was created providing the public with information about health and healthy behavior. The website included a "physical activity check," which consisted of a physical activity computer-tailoring expert system where visitors could check whether their physical activity levels were in line with recommendations. Visitors who consented to participate in the present study (n = 489) filled in a questionnaire that assessed demographics, mode of recruitment, current physical activity levels, and health motivation. Immediately after, participants received tailored feedback concerning their current physical activity levels and completed a questionnaire assessing affective and cognitive user experience, attitude toward being sufficiently physically active, and intention to be sufficiently physically active. Three months later, participants received an email inviting them once more to check whether their physical activity level had changed. Analyses of visiting showed that more women (67.5%) than men (32.5%) visited the program. With regard to continued use, native Dutch participants (odds ratio [OR] = 2.81, 95% confidence interval [CI] = 1.16-6.81, P = .02) and participants with a strong motivation to be healthy (OR = 1.46, CI = 1

From 12 to 14 September 2002, the Academy of Humanities and Economics (AHE) hosted the workshop "Optimization and Inverse Problems in Electromagnetism". After this bi-annual event, a large number of papers were assembled and combined in this book. During the workshop recent developments and applications in optimization and inverse methodologies for electromagnetic fields were discussed. The contributions selected for the present volume cover a wide spectrum of inverse and optimal electromagnetic methodologies, ranging from theoretical to practical applications. A number of new optimal and inverse methodologies were proposed. There are contributions related to dedicated software. Optimization and Inverse Problems in Electromagnetism consists of three thematic chapters, covering: -General papers (survey of specific aspects of optimization and inverse problems in electromagnetism), -Methodologies, -Industrial Applications. The book can be useful to students of electrical and electronics engineering, computer sci...

Inverse interpretation is a semantics based, non-standard interpretation of programs. Given a program and a value, an inverse interpreter finds all or one of the inputs that would yield the given value as output with normal forward evaluation. The Reverse Universal Resolving Algorithm is a new variant of the Universal Resolving Algorithm for inverse interpretation. The new variant outperforms the original algorithm in several cases, e.g., when unpacking a list using inverse interpretation of a pack program. It uses inverse driving as its main technique, which has not been described in detail before. Inverse driving may find application with, e.g., supercompilation, thus suggesting a new kind of program inverter.
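
The input-finding task described above can be illustrated with a deliberately naive inverse interpreter that searches a finite candidate space by forward evaluation; the Universal Resolving Algorithm performs this search symbolically and far more efficiently, so this is only a sketch of the problem statement, with invented names.

```python
from itertools import product

def inverse_interpret(program, output, candidates, arity=1):
    """Return all input tuples from candidates^arity that forward-evaluate
    to the given output."""
    return [args for args in product(candidates, repeat=arity)
            if program(*args) == output]

# Example: invert a tiny program that sums a pair of numbers.
inputs = inverse_interpret(lambda x, y: x + y, 5, candidates=range(6), arity=2)
print(inputs)   # all pairs (x, y) with x + y == 5
```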

In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure

We present an inverse modeling approach to estimate petrophysical and elastic properties of the subsurface. The aim is to use the fully coupled geomechanics-flow model of Girault et al (2011 Math. Models Methods Appl. Sci. 21 169–213) to jointly invert surface deformation and pressure data from wells. We use a functional-analytic framework to construct a forward operator (parameter-to-output map) that arises from the geomechanics-flow model of Girault et al. Then, we follow a deterministic approach to pose the inverse problem of finding parameter estimates from measurements of the output of the forward operator. We prove that this inverse problem is ill-posed in the sense of stability. The inverse problem is then regularized with the implementation of the Newton-conjugate gradient (CG) algorithm of Hanke (1997 Numer. Funct. Anal. Optim. 18 18–971). For a consistent application of the Newton-CG scheme, we establish the differentiability of the forward map and characterize the adjoint of its linearization. We provide assumptions under which the theory of Hanke ensures convergence and regularizing properties of the Newton-CG scheme. These properties are verified in our numerical experiments. In addition, our synthetic experiments display the capabilities of the proposed inverse approach to estimate parameters of the subsurface by means of data inversion. In particular, the added value of measurements of surface deformation in the estimation of absolute permeability is quantified with respect to the standard history matching approach of inverting production data with flow models. The proposed methodology can be potentially used to invert satellite geodetic data (e.g. InSAR and GPS) in combination with production data for optimal monitoring and characterization of the subsurface. (paper)

The thesis aims to calculate the inverse kinematics for the OWI-535 robotic arm. The calculation of the inverse kinematics determines the joint parameters that provide the desired pose of the end effector. The pose consists of the position and the orientation; here, however, we focus only on the orientation. Due to the arm's limitations, we have devised our own method for calculating the inverse kinematics. We first derived it theoretically, and then we transferred the derivation into...

Automatic digital electronic control system based on inverse-model-follower concept being developed for proposed vertical-attitude-takeoff-and-landing airplane. Inverse-model-follower control places inverse mathematical model of dynamics of controlled plant in series with control actuators of controlled plant so response of combination of model and plant to command is unity. System includes feedback to compensate for uncertainties in mathematical model and disturbances imposed from without.

In a series of six lectures an elementary introduction to the theory of inverse scattering is given. The first four lectures contain a detailed theory of solitons in the framework of the KdV equation, together with the inverse scattering theory of the one-dimensional Schroedinger equation. In the fifth lecture the dressing method is described, while the sixth lecture gives a brief review of the equations soluble by the inverse scattering method. (author)

We present inverse modelling (top down) estimates of European methane (CH4) emissions for 2006-2012 based on a new quality-controlled and harmonised in situ data set from 18 European atmospheric monitoring stations. We applied an ensemble of seven inverse models and performed four inversion experiments, investigating the impact of different sets of stations and the use of a priori information on emissions. The inverse models infer total CH4 emissions of 26.8 (20.2-29.7) Tg CH4 yr-1 (mean, 10th and 90th percentiles from all inversions) for the EU-28 for 2006-2012 from the four inversion experiments. For comparison, total anthropogenic CH4 emissions reported to UNFCCC (bottom up, based on statistical data and emissions factors) amount to only 21.3 Tg CH4 yr-1 (2006) to 18.8 Tg CH4 yr-1 (2012). A potential explanation for the higher range of top-down estimates compared to bottom-up inventories could be the contribution from natural sources, such as peatlands, wetlands, and wet soils. Based on seven different wetland inventories from the Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP), total wetland emissions of 4.3 (2.3-8.2) Tg CH4 yr-1 from the EU-28 are estimated. The hypothesis of significant natural emissions is supported by the finding that several inverse models yield significant seasonal cycles of derived CH4 emissions with maxima in summer, while anthropogenic CH4 emissions are assumed to have much lower seasonal variability. Taking into account the wetland emissions from the WETCHIMP ensemble, the top-down estimates are broadly consistent with the sum of anthropogenic and natural bottom-up inventories. However, the contribution of natural sources and their regional distribution remain rather uncertain. Furthermore, we investigate potential biases in the inverse models by comparison with regular aircraft profiles at four European sites and with vertical profiles obtained during the Infrastructure for Measurement of the European Carbon

Most primary cells use Zn or Li as the anode, a metallic oxide as the cathode, and an acidic or alkaline solution or moist paste as the electrolytic solution. In this paper, highly ordered polypyrrole (PPy) inverse opals have been successfully synthesized in an acetonitrile solution containing [bmim]PF6. PPy films were prepared under the same experimental conditions. Cyclic voltammograms of the PPy film and the PPy inverse opal in neutral phosphate buffer solution (PBS) were recorded. The X-ray photoelectron spectroscopy technique was used to investigate the surface structure of the PPy films and the PPy inverse opals. It is found that the PF6- anions kept dedoping from the PPy films during the potential scanning process, resulting in electrochemical inactivity. Although PF6- anions also kept dedoping from the PPy inverse opals, the PO43- anions from the PBS could dope into the inverse opal, explaining why the PPy inverse opals kept their electrochemical activity. An environmentally friendly cell prototype was constructed, using the PPy inverse opal as the anode. The electrolytes in both the cathodic and anodic half-cells were neutral PBSs. The open-circuit potential of the cell prototype reached 0.487 V and showed a stable output over several hundred hours.

Probabilistic inversion is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make such large inverse tasks manageable with the probabilistic approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

This work describes a technique to treat the inverse kinematics of a serial manipulator. The inverse kinematics is obtained through the numerical inversion of the Jacobian matrix, which represents the equation of motion of the manipulator. The inversion is affected by numerical errors and, under certain conditions, due to the numerical nature of the solver, it does not converge to a reasonable solution. Thus a soft computing approach is adopted that mixes different traditional methods to obtain an increment in algorithmic convergence.
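
A minimal sketch of this kind of numerical Jacobian inversion, using a planar two-link arm and damped least squares (one of the traditional methods such an approach could mix; link lengths, damping factor, and iteration count are all invented for illustration):

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> (x, y)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian d(x, y)/d(q1, q2) of the forward kinematics."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_dls(target, q0, lam=0.1, iters=300):
    """Damped least-squares IK: the damping term lam^2 * I regularizes
    the Jacobian inversion near singular configurations."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = target - fk(q)
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)
        q += dq
    return q

target = np.array([1.2, 0.8])      # reachable point inside the workspace
q = ik_dls(target, [0.3, 0.5])
```

The damping trades convergence speed for robustness near singularities, which is precisely the numerical fragility the abstract attributes to plain Jacobian inversion.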

Surface layer temperature inversion (SLTI), a warm layer sandwiched between surface and subsurface colder waters, has been reported to frequently occur in conjunction with barrier layers in the Bay of Bengal (BoB), with potentially commensurable...

.... In this study, one anatomically detailed 3D FDM model of the human thorax as a volume conductor was employed for forward and inverse estimation of ECG potentials and cardiac sources, respectively...

The gradient of standard full-waveform inversion (FWI) attempts to map the residuals in the data to perturbations in the model. Such perturbations may include smooth background updates from the transmission components and high wavenumber updates from the reflection components. However, if we fix the reflection components using imaging, the gradient of what is referred to as reflected-waveform inversion (RWI) admits mainly transmission background-type updates. The drawback of existing RWI methods is that they lack an optimal image capable of producing reflections within the convex region of the optimization. Because the influence of velocity on the data was given mainly by its background (propagator) and perturbed (reflectivity) components, we have optimized both components simultaneously using a modified objective function. Specifically, we used an objective function that combined the data generated from a source using the background velocity, and that by the perturbed velocity through Born modeling, to fit the observed data. When the initial velocity was smooth, the data modeled from the source using the background velocity will mainly be reflection free, and most of the reflections were obtained from the image (perturbed velocity). As the background velocity becomes more accurate and can produce reflections, the role of the image will slowly diminish, and the update will be dominated by the standard FWI gradient to obtain high resolution. Because the objective function was quadratic with respect to the image, the inversion for the image was fast. To update the background velocity smoothly, we have combined different components of the gradient linearly through solving a small optimization problem. Application to the Marmousi model found that this method converged starting with a linearly increasing velocity, and with data free of frequencies below 4 Hz. Application to the 2014 Chevron Gulf of Mexico imaging challenge data set demonstrated the potential of the

Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm, simulated annealing (SA), in the watershed shows that although both methods deliver very similar good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
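
The sensitivity-balancing idea behind this kind of Jacobian preconditioning can be sketched in a few lines: rescale each column of the Jacobian by its norm so that parameters with very different physical scales (say, layer resistivities versus thicknesses) contribute comparably. The Jacobian below is made up for illustration; it is not from any CSAMT model.

```python
import numpy as np

# Made-up Jacobian with badly scaled columns, mimicking mixed parameter types.
J = np.array([[100.0, 0.02],
              [ 80.0, 0.01],
              [ 60.0, 0.03]])

# Column-wise weighting: divide each column by its norm (i.e. apply J W^{-1}
# with W = diag of column norms) so parameter sensitivities become uniform.
w = np.linalg.norm(J, axis=0)
Jw = J / w

cond_before = np.linalg.cond(J)
cond_after = np.linalg.cond(Jw)
```

The preconditioned system is better conditioned, which is the mechanism by which such weighting can speed up the convergence of a gradient- or Gauss-Newton-type inversion.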

Orthorhombic anisotropic model inversion is extra challenging because of the multiple parameter nature of the inversion problem. The high number of parameters required to describe the medium exerts considerable trade-off and additional nonlinearity to a full-waveform inversion (FWI) application. Choosing a suitable set of parameters to describe the model and designing an effective inversion strategy can help in mitigating this problem. Using the Born approximation, which is the central ingredient of the FWI update process, we have derived radiation patterns for the different acoustic orthorhombic parameterizations. Analyzing the angular dependence of scattering (radiation patterns) of the parameters of different parameterizations starting with the often used Thomsen-Tsvankin parameterization, we have assessed the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. The analysis led us to introduce new parameters ϵd, δd, and ηd, which have azimuthally dependent radiation patterns, but keep the scattering potential of the transversely isotropic parameters stationary with azimuth (azimuth independent). The novel parameters ϵd, δd, and ηd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. Therefore, these deviation parameters offer a new parameterization style for an acoustic orthorhombic medium described by six parameters: three vertical transversely isotropic (VTI) parameters, two deviation parameters, and one parameter describing the anisotropy in the horizontal symmetry plane. The main feature of any parameterization based on the deviation parameters, is the azimuthal independency of the modeled data with respect to the VTI parameters, which allowed us to propose practical inversion strategies based on our experience with the VTI parameters. This feature of the new parameterization style holds for even the long-wavelength components of

The disposal of radioactive waste must comply with the performance objectives set forth in 10 CFR 61 for low-level waste (LLW) and 10 CFR 60 for high-level waste (HLW). To determine probable compliance, the proposed disposal system can be modeled to predict its performance. One of the difficulties encountered in such a study is modeling the migration of radionuclides through a complex geologic medium for the long term. Although many radionuclide transport models exist in the literature, the accuracy of the model prediction is highly dependent on the model parameters used. The problem of using known parameters in a radionuclide transport model to predict radionuclide concentrations is a direct problem (DP); whereas the reverse of DP, i.e., the parameter identification problem of determining model parameters from known radionuclide concentrations, is called the inverse problem (IP). In this study, a procedure to solve IP is tested, using the regression technique. Several nonlinear regression programs are examined, and the best one is recommended. 13 refs., 1 tab
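
The direct/inverse distinction above can be illustrated with a deliberately simple stand-in for a transport model. In this hedged sketch, first-order decay replaces the radionuclide transport model, and the regression reduces to linear least squares after a log transform; the concentration and decay values are invented for illustration.

```python
import numpy as np

# Direct problem (DP): known parameters -> concentrations.
# Stand-in model: first-order decay C(t) = C0 * exp(-lam * t).
t = np.linspace(0.0, 10.0, 21)
C0_true, lam_true = 5.0, 0.3
C = C0_true * np.exp(-lam_true * t)

# Inverse problem (IP): known concentrations -> parameters, via regression.
# Taking logs linearizes the model: ln C = ln C0 - lam * t,
# so ordinary linear least squares recovers both parameters.
A = np.column_stack([np.ones_like(t), -t])
coef, *_ = np.linalg.lstsq(A, np.log(C), rcond=None)
C0_est, lam_est = np.exp(coef[0]), coef[1]
```

A realistic transport model would of course require nonlinear regression, which is why the study compares several nonlinear regression programs.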

A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal
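
The key property of adjoint differentiation, one backward sweep yielding the gradient with respect to all parameters, can be shown on a toy recurrence. This is a hedged sketch only: a scalar decay time-stepper stands in for the infrared-diffusion simulation, and all parameter values are invented.

```python
import numpy as np

def simulate(k, u0=1.0, dt=0.1):
    """Explicit time stepping u_{n+1} = (1 - k_n*dt) * u_n; full trajectory."""
    u = [u0]
    for kn in k:
        u.append((1.0 - kn * dt) * u[-1])
    return np.array(u)

def objective_and_gradient(k, data, u0=1.0, dt=0.1):
    """Adjoint differentiation of J = (u_N - data)^2: a single backward
    sweep produces dJ/dk_n for ALL parameters k_0..k_{N-1} at once."""
    u = simulate(k, u0, dt)
    J = (u[-1] - data)**2
    lam = 2.0 * (u[-1] - data)           # adjoint variable at the final time
    grad = np.zeros_like(k)
    for n in range(len(k) - 1, -1, -1):
        grad[n] = lam * (-dt * u[n])     # dJ/dk_n
        lam = lam * (1.0 - k[n] * dt)    # propagate the adjoint backwards
    return J, grad

k = np.full(20, 0.5)
J, g = objective_and_gradient(k, data=0.2)
```

The cost of the backward sweep is comparable to one forward simulation regardless of the number of parameters, which is what makes gradient-based optimization of large simulations feasible.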

Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.

We consider the reconstruction of optical parameters in a domain of interest from photoacoustic data. Photoacoustic tomography (PAT) radiates high-frequency electromagnetic waves into the domain and measures acoustic signals emitted by the resulting thermal expansion. Acoustic signals are then used to construct the deposited thermal energy map. The latter depends on the constitutive optical parameters in a nontrivial manner. In this paper, we develop and use an inverse transport theory with internal measurements to extract information on the optical coefficients from knowledge of the deposited thermal energy map. We consider the multi-measurement setting in which many electromagnetic radiation patterns are used to probe the domain of interest. By developing an expansion of the measurement operator into singular components, we show that the spatial variations of the intrinsic attenuation and the scattering coefficients may be reconstructed. We also reconstruct coefficients describing anisotropic scattering of photons, such as the anisotropy coefficient g(x) in a Henyey–Greenstein phase function model. Finally, we derive stability estimates for the reconstructions

In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
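
When the conditional-expectation minimiser is restricted to affine maps of the data, the linear Bayesian update reduces to the familiar Kalman-type gain formula. A minimal Gaussian toy sketch follows (all numbers are illustrative, and none of the paper's functional/spectral approximation machinery is reproduced):

```python
import numpy as np

def linear_update(x_f, P_f, H, R, y):
    """Linear Bayesian update x_a = x_f + K (y - H x_f): the best affine
    approximation of the conditional expectation (Kalman gain form)."""
    S = H @ P_f @ H.T + R                    # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)         # gain
    x_a = x_f + K @ (y - H @ x_f)            # updated mean
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f   # updated covariance
    return x_a, P_a

# Toy example: 2-D state with unit prior covariance, observe the first
# component with unit noise variance, measurement y = 2.
x_a, P_a = linear_update(np.zeros(2), np.eye(2),
                         np.array([[1.0, 0.0]]), np.array([[1.0]]),
                         np.array([2.0]))
```

The quadratic update discussed in the abstract enlarges this affine ansatz with quadratic terms in the data, which is where the two updates begin to differ on non-Gaussian problems like Lorenz 84.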

The pulse inversion (PI) technique can be utilized to separate and enhance harmonic components of a waveform for tissue harmonic imaging. While most ultrasound systems can perform pulse inversion, only a few image the 3rd harmonic component. PI pulse subtraction can isolate and enhance the 3rd...

We propose a new method for the quantitative resolution analysis in full seismic waveform inversion that overcomes the limitations of classical synthetic inversions while being computationally more efficient and applicable to any misfit measure. The method rests on (1) the local quadratic

Two methods of Abel inverse transformation are applied to two different test profiles. The effects of random errors in the input data, position uncertainty, and the number of input data points on the accuracy of the inverse transformation have been studied. The two methods are compared with each other.
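
One standard way to set up such a numerical Abel pair is sketched below, hedged assumptions throughout: a Gaussian test profile (which has a closed-form Abel transform, so both directions can be checked), trapezoidal quadrature after a substitution that removes the square-root singularity, and arbitrary grid sizes. This is an illustration of the transform pair, not a reconstruction of either method in the abstract.

```python
import numpy as np

def trapz(vals, h):
    """Trapezoidal rule on a uniform grid with spacing h."""
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

def abel_forward(f, y, R=6.0, n=4000):
    """F(y) = 2 * int_y^R f(r) r / sqrt(r^2 - y^2) dr.
    Substituting r^2 = y^2 + s^2 removes the singularity:
    F(y) = 2 * int_0^sqrt(R^2-y^2) f(r(s)) ds."""
    s = np.linspace(0.0, np.sqrt(R * R - y * y), n)
    r = np.sqrt(y * y + s * s)
    return 2.0 * trapz(f(r), s[1] - s[0])

def abel_inverse(F, r, R=6.0, n=4000, h=1e-5):
    """f(r) = -(1/pi) * int_r^R F'(y) / sqrt(y^2 - r^2) dy,
    with the same singularity-removing substitution y^2 = r^2 + s^2."""
    s = np.linspace(0.0, np.sqrt(R * R - r * r), n)
    y = np.sqrt(r * r + s * s)
    dF = (F(y + h) - F(y - h)) / (2.0 * h)   # numerical derivative F'(y)
    return -trapz(dF / y, s[1] - s[0]) / np.pi

# Gaussian test profile f(r) = exp(-r^2), whose analytic Abel transform is
# F(y) = sqrt(pi) * exp(-y^2); this lets us verify both directions.
f_exact = lambda r: np.exp(-r**2)
F_exact = lambda y: np.sqrt(np.pi) * np.exp(-y**2)
```

The inverse direction differentiates the data before integrating, which is exactly why the abstract's sensitivity study of random input errors and point counts matters in practice.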

In order to study human motion in biomechanical applications, a critical component is to accurately obtain the 3D joint positions of the user's body. Computer vision and inverse kinematics are used to achieve this objective without markers or special devices attached to the body. The problem of these systems is that the inverse kinematics is "blinded" with respect to the projection of body segments into the images used by the computer vision algorithms. In this paper, we present how to add image constraints to inverse kinematics in order to estimate human motion. Specifically, we explain how to define a criterion to use images in order to guide the posture reconstruction of the articulated chain. Tests with synthetic images show how the scheme performs well in an ideal situation. In order to test its potential in real situations, more experiments with task specific image sequences are also presented. By means of a quantitative study of different sequences, the results obtained show how this approach improves the performance of inverse kinematics in this application.

Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion beyond improved acoustic imaging, like in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and prior information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging taking into account the high complexity of the breast tissue. We only use data of a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and to have an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage more details about the anatomical structure of the breast interior are incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D

The inversion field-effect transistor is the basic device of modern microelectronics and is nowadays used more than a billion times on every state-of-the-art computer chip. In the future, this rigid technology will be complemented by flexible electronics produced at extremely low cost. Organic field-effect transistors have the potential to be the basic device for flexible electronics, but still need much improvement. In particular, despite more than 20 years of research, organic inversion mode transistors have not been reported so far. Here we discuss the first realization of organic inversion transistors and the optimization of organic depletion transistors by our organic doping technology. We show that the transistor parameters—in particular, the threshold voltage and the ON/OFF ratio—can be controlled by the doping concentration and the thickness of the transistor channel. Injection of minority carriers into the doped transistor channel is achieved by doped contacts, which allows forming an inversion layer. PMID:24225722

Starting from reasonable hypotheses, the magnetic moments of the baryons are revisited in the light of general space wave functions. These allow us to put very severe bounds on the quark masses as derived from the usual potential models. The experimental situation cannot be explained in the framework of such models. (author)

Inversion of electromagnetic data is a topical subject in the literature, and much time has been devoted to understanding the convergence properties of various inverse methods. The relative lack of success of electromagnetic inversion techniques is partly attributable to the difficulties in the kernel forward modeling software. These difficulties come in two broad classes: (1) Completeness and robustness, and (2) convergence, execution time and model simplicity. If such problems exist in the forward modeling kernel, it was demonstrated that inversion can fail to generate reasonable results. It was suggested that classical inversion techniques, which are based on minimizing a norm of the error between data and the simulated data, will only be successful when these difficulties in forward modeling kernels are properly dealt with. 4 refs., 5 figs.

To delineate subsurface lithology and estimate petrophysical properties of a reservoir, it is possible to use acoustic impedance (AI), which is the result of seismic inversion. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal in order to obtain a reflection series, and subsequently to transform those reflections to AI. To carry out seismic inversion correctly it is important not to assume that the seismic signal is stationary. However, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are inevitable; these are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods try to estimate the reflectivity series, because of the incorrect assumptions their estimates will not be correct, though they may be useful. Converting those reflection series to AI, merged with the low-frequency initial model, can help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI by obtaining a bias from well logs. To carry out this aim, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis and estimated wavelet properties from different windows. Dealing with different time windows gave the ability to create a time-variant kernel matrix, which was used to remove wavelet effects from the seismic data. The result was a reflection series that does not follow the stationary assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to show the ability of the introduced method. The results highlight that the computational cost of this seismic inversion is negligible compared to general Gabor inversion in the frequency domain. Also

Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on a matrix-valued differential equation, a generalization of the dynamic equations proposed earlier for nonsingular matrix inversion, Moore-Penrose inversion, and Drazin inversion, under the condition of zero initial state. The applicability of the first approach is conditioned on the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of an increased number of matrix operations. The cases corresponding to the most common generalized inverses are defined. Conditions that ensure stability of the proposed neural networks are presented. Illustrative examples present the results of numerical simulations.
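The dynamic-equation idea can be illustrated, in its simplest (nonsingular) case, by a gradient-type matrix ODE integrated with forward Euler from the zero initial state the abstract requires. The gain `gamma`, step size and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dynamic_inverse(A, gamma=10.0, dt=1e-3, steps=20000):
    # Gradient-type dynamic equation dX/dt = -gamma * A^T (A X - I):
    # its equilibrium satisfies A X = I, so X(t) -> A^{-1} for
    # nonsingular A (stable when dt*gamma*lambda_max(A^T A) < 2).
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros_like(A)          # zero initial state
    for _ in range(steps):
        X += dt * (-gamma * A.T @ (A @ X - I))
    return X

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
X = dynamic_inverse(A)            # converges to inv(A)
```

The recurrent networks in the abstract generalize exactly this construction so that the equilibrium is an outer inverse with prescribed range and null space rather than the ordinary inverse.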

The detonation of a sub-Chandrasekhar-mass white dwarf (WD) has emerged as one of the most promising Type Ia supernova (SN Ia) progenitor scenarios. Recent studies have suggested that the rapid transfer of a very small amount of helium from one WD to another is sufficient to ignite a helium shell detonation that subsequently triggers a carbon core detonation, yielding a “dynamically driven double-degenerate double-detonation” SN Ia. Because the helium shell that surrounds the core explosion is so minimal, this scenario approaches the limiting case of a bare C/O WD detonation. Motivated by discrepancies in previous literature and by a recent need for detailed nucleosynthetic data, we revisit simulations of naked C/O WD detonations in this paper. We disagree to some extent with the nucleosynthetic results of previous work on sub-Chandrasekhar-mass bare C/O WD detonations; for example, we find that a median-brightness SN Ia is produced by the detonation of a 1.0 M⊙ WD instead of a more massive and rarer 1.1 M⊙ WD. The neutron-rich nucleosynthesis in our simulations agrees broadly with some observational constraints, although tensions remain with others. There are also discrepancies related to the velocities of the outer ejecta and light curve shapes, but overall our synthetic light curves and spectra are roughly consistent with observations. We are hopeful that future multidimensional simulations will resolve these issues and further bolster the dynamically driven double-degenerate double-detonation scenario’s potential to explain most SNe Ia.

In recent years, ocean acidification has gained continuously increasing attention from scientists and a number of stakeholders and has raised serious concerns about its effects on marine organisms and ecosystems. With the increase in interest, funding resources, and the number of scientific investigations focusing on this environmental problem, increasing amounts of data and results have been produced, and a progressively growing and more rigorous understanding of this problem has begun to develop. Nevertheless, there are still a number of scientific debates, and in some cases misconceptions, that keep reoccurring at a number of forums in various contexts. In this article, we revisit four of these topics that we think require further thoughtful consideration, including: (1) surface seawater CO2 chemistry in shallow water coastal areas, (2) experimental manipulation of marine systems using CO2 gas or by acid addition, (3) net versus gross calcification and dissolution, and (4) CaCO3 mineral dissolution and seawater buffering. As a summation of these topics, we emphasize that: (1) many coastal environments experience seawater pCO2 that is significantly higher than expected from equilibrium with the atmosphere and is strongly linked to biological processes; (2) addition of acid, base or CO2 gas to seawater can all be useful techniques to manipulate seawater chemistry in ocean acidification experiments; (3) estimates of calcification or CaCO3 dissolution based on present techniques are measuring the net of gross calcification and dissolution; and (4) dissolution of metastable carbonate mineral phases will not produce sufficient alkalinity to buffer the pH and carbonate saturation state of shallow water environments on timescales of decades to hundreds of years to the extent that any potential negative effects on marine calcifiers will be avoided.

We revisit the pair creation constraint on superluminal neutrinos considered by Cohen and Glashow in order to clarify which types of superluminal models are constrained. We show that a model in which the superluminal neutrino is effectively light-like can evade the Cohen-Glashow constraint. In summary, any model for which the CG pair production process operates is excluded because such timelike neutrinos would not be detected by OPERA or other experiments. However, a superluminal neutrino which is effectively light-like with fixed p² can evade the Cohen-Glashow constraint because of energy-momentum conservation. The coincidence involved in explaining the SN1987A constraint certainly makes such a picture improbable - but it is still intrinsically possible. The light-like model is appealing in that it does not violate Lorentz symmetry in particle interactions, although one would expect Hughes-Drever tests to turn up a violation eventually. Other evasions of the CG constraints are also possible; perhaps, e.g., the neutrino takes a 'short cut' through extra dimensions or suffers anomalous acceleration in matter. Irrespective of the OPERA result, Lorentz-violating interactions remain possible, and ongoing experimental investigation of such possibilities should continue.

Substantial greenhouse gas (GHG) emissions from hydropower reservoirs have raised great concern recently, yet the significant carbon emissions from the drawdown area and the reservoir downstream (including spillways and turbines as well as river reaches below dams) have not been included in the global carbon budget. Here, we revisit GHG emissions from hydropower reservoirs by considering the reservoir surface area, the drawdown zone and the reservoir downstream. Our estimates indicate around 301.3 Tg carbon dioxide (CO2)/year and 18.7 Tg methane (CH4)/year from global hydroelectric reservoirs, which is much higher than recent observations. The sum of drawdown and downstream emissions, which is generally overlooked, represents 42% of the CO2 and 67% of the CH4 in the total emissions from hydropower reservoirs. Accordingly, the global average emissions from hydropower are estimated to be 92 g CO2/kWh and 5.7 g CH4/kWh. Nonetheless, global hydroelectricity could currently avoid approximately 2,351 Tg CO2eq/year relative to the fossil-fuel-plant alternative. These findings represent a substantial revision of the carbon emissions from global hydropower reservoirs.
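As a back-of-envelope consistency check, the quoted emission totals divided by the per-kWh intensities should imply the same global hydroelectric generation for both gases:

```python
# Figures quoted in the abstract (grams per year and grams per kWh).
co2_total_g = 301.3e12      # 301.3 Tg CO2/year
ch4_total_g = 18.7e12       # 18.7 Tg CH4/year
co2_per_kwh = 92.0          # g CO2/kWh
ch4_per_kwh = 5.7           # g CH4/kWh

# Implied annual hydro generation from each gas independently.
gen_from_co2 = co2_total_g / co2_per_kwh   # kWh/year
gen_from_ch4 = ch4_total_g / ch4_per_kwh   # kWh/year
# Both come out near 3.3e12 kWh, i.e. roughly 3300 TWh/year.
```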

In this paper, we revisit a 1986 article we published in this Journal, Meta-Analysis in Clinical Trials, where we introduced a random-effects model to summarize the evidence about treatment efficacy from a number of related clinical trials. Because of its simplicity and ease of implementation, our approach has been widely used (with more than 12,000 citations to date) and the "DerSimonian and Laird method" is now often referred to as the 'standard approach' or a 'popular' method for meta-analysis in medical and clinical research. The method is especially useful for providing an overall effect estimate and for characterizing the heterogeneity of effects across a series of studies. Here, we review the background that led to the original 1986 article, briefly describe the random-effects approach for meta-analysis, explore its use in various settings and trends over time and recommend a refinement to the method using a robust variance estimator for testing overall effect. We conclude with a discussion of repurposing the method for Big Data meta-analysis and Genome Wide Association Studies for studying the importance of genetic variants in complex diseases. Published by Elsevier Inc.
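For reference, the DerSimonian-Laird method described above amounts to a short method-of-moments computation; the study effects `y` and within-study variances `v` below are made-up numbers for illustration only.

```python
import numpy as np

def dersimonian_laird(y, v):
    # Fixed-effect weights and Cochran's Q statistic.
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)
    k = len(y)
    # Method-of-moments estimate of the between-study variance tau^2,
    # truncated at zero.
    tau2 = max(0.0, (Q - (k - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    # Random-effects weights combine within- and between-study variance.
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)        # overall effect
    se = np.sqrt(1.0 / np.sum(w_star))              # its standard error
    return mu, se, tau2

y = np.array([0.50, -0.10, 0.60, 0.00, 0.45])   # hypothetical study effects
v = np.array([0.01,  0.01, 0.02, 0.01, 0.02])   # hypothetical variances
mu, se, tau2 = dersimonian_laird(y, v)
```

A nonzero `tau2` flags heterogeneity across studies, which widens the weights and the confidence interval relative to a fixed-effect analysis; the robust variance refinement mentioned in the abstract replaces `se` with a sandwich-type estimate.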

We revisit the exact solution of the two space-time dimensional quantum field theory of a free massless boson with a periodic boundary interaction and self-dual period. We analyze the model by using a mapping to free fermions with a boundary mass term originally suggested in Ref. [J. Polchinski, L. Thorlacius, Phys. Rev. D 50 (1994) 622]. We find that the entire SL(2, C) family of boundary states of a single boson are boundary sine-Gordon states, and we derive a simple explicit expression for the boundary state in fermion variables and as a function of the sine-Gordon coupling constants. We use this expression to compute the partition function. We observe that the solution of the model has a strong-weak coupling generalization of T-duality. We then examine a class of recently discovered conformal boundary states for compact bosons with radii which are rational numbers times the self-dual radius. These have simple expressions in fermion variables. We postulate sine-Gordon-like field theories with discrete gauge symmetries for which they are the appropriate boundary states.

Ecological assessments of aquatic ecosystems depend on the ability to compare current conditions against some expectation of how they would be in the absence of significant human disturbance. The concept of a "reference condition" is often used to describe the standard or benchmark against which current condition is compared. If assessments are to be conducted consistently, then a common understanding of the definitions and complications of reference condition is necessary. A 2006 paper (Stoddard et al., 2006, Ecological Applications 16:1267-1276) made an early attempt at codifying the reference condition concept; in this presentation we will revisit the points raised in that paper (and others) and examine how our thinking has changed in a little over 10 years. Among the issues to be discussed: (1) the "moving target" created when reference site data are used to set thresholds in large-scale assessments; (2) natural vs. human disturbance and their effects on reference site distributions; (3) circularity and the use of biological data to assist in reference site identification; (4) using site-scale (in-stream or in-lake) measurements vs. landscape-level human activity to identify reference conditions.

We revisit minimal supersymmetric SU(5) grand unification (GUT) models in which the soft supersymmetry-breaking parameters of the minimal supersymmetric Standard Model (MSSM) are universal at some input scale, $M_{in}$, above the supersymmetric gauge coupling unification scale, $M_{GUT}$. As in the constrained MSSM (CMSSM), we assume that the scalar masses and gaugino masses have common values, $m_0$ and $m_{1/2}$ respectively, at $M_{in}$, as do the trilinear soft supersymmetry-breaking parameters $A_0$. Going beyond previous studies of such a super-GUT CMSSM scenario, we explore the constraints imposed by the lower limit on the proton lifetime and the LHC measurement of the Higgs mass, $m_h$. We find regions of $m_0$, $m_{1/2}$, $A_0$ and the parameters of the SU(5) superpotential that are compatible with these and other phenomenological constraints such as the density of cold dark matter, which we assume to be provided by the lightest neutralino. Typically, these allowed regions appear for $m_0$ and $m_{1/...

A recent article in which John Searle claims to refute dualism is examined from a scientific perspective. John Searle begins his recent article 'Dualism Revisited' by stating his belief that the philosophical problem of consciousness has a scientific solution. He then claims to refute dualism. It is therefore appropriate to examine his arguments against dualism from a scientific perspective. Scientific physical theories contain two kinds of descriptions: (1) descriptions of our empirical findings, expressed in an everyday language that allows us to communicate to each other our sensory experiences pertaining to what we have done and what we have learned; and (2) descriptions of a theoretical model, expressed in a mathematical language that allows us to communicate to each other certain ideas that exist in our mathematical imaginations, and that are believed to represent, within our streams of consciousness, certain aspects of reality that we deem to exist independently of their being perceived by any human observer. These two parts of our scientific description correspond to the two aspects of our general contemporary dualistic understanding of the total reality in which we are embedded, namely the empirical-mental aspect and the theoretical-physical aspect. The duality question is whether this general dualistic understanding of ourselves should be regarded as false in some important philosophical or scientific sense.

In this paper we revisit the issue of the aggregate output decline that took place in the early transition period. We propose an alternative explanation of the output decline that is applicable to Central and Eastern European countries. In the first part of the paper we develop a simple dynamic general equilibrium model that builds on work by Gomulka and Lane (2001). In particular, we consider price liberalization, interpreted as the elimination of distortionary taxation, as a trigger of the output decline. We show that price liberalization, in interaction with heterogeneous adjustment costs and non-employment benefits, leads to an aggregate output decline and a surge in wage inequality. While these patterns are consistent with the actual dynamics in CEE countries, this model cannot generate output decline in all sectors. Instead, sectors that were initially taxed even exhibit output growth. Thus, in the second part we consider an alternative general equilibrium model with only one production sector, two types of labor, and a distortion in the form of wage compression during the socialist era. The trigger for labor mobility, and consequently output decline, is wage liberalization. Assuming heterogeneity of workers in terms of adjustment costs and non-employment benefits can explain output decline in all industries.

Almost twenty years ago, in Volume 2 of Reliability Engineering (the predecessor of Reliability Engineering and System Safety), a paper by H. M. Thomas of Rolls Royce and Associates Ltd. presented a generalized approach to the estimation of piping and vessel failure probability. The 'Thomas approach' used insights from actual failure statistics to calculate the probability of leakage and conditional probability of rupture given leakage. It was intended for practitioners without access to data on the service experience with piping and piping system components. This article revisits the Thomas paper by drawing on insights from development of a new database on piping failures in commercial nuclear power plants worldwide (SKI-PIPE). Partially sponsored by the Swedish Nuclear Power Inspectorate (SKI), the R and D leading up to this note was performed during 1994-1999. Motivated by data requirements of reliability analysis and probabilistic safety assessment (PSA), the new database supports statistical analysis of piping failure data. Against the background of this database development program, the article reviews the applicability of the 'Thomas approach' in applied risk and reliability analysis. It addresses the question whether a new and expanded database on the service experience with piping systems would alter the original piping reliability correlation as suggested by H. M. Thomas.

The diffraction anomalous fine structure method has been revisited by applying this measurement technique to polycrystalline samples and using an analytical method with the logarithmic dispersion relation. The diffraction anomalous fine structure (DAFS) method, a spectroscopic analysis combined with resonant X-ray diffraction, enables the determination of the valence state and local structure of a selected element at a specific crystalline site and/or phase. This method has been improved by using a polycrystalline sample, channel-cut monochromator optics with an undulator synchrotron radiation source, an area detector and direct determination of resonant terms with a logarithmic dispersion relation. This study makes the DAFS method more convenient and saves a large amount of measurement time in comparison with the conventional DAFS method with a single crystal. The improved DAFS method has been applied to some model samples, Ni foil and Fe3O4 powder, to demonstrate the validity of the measurement and the analysis of the present DAFS method.

We propose an algorithm for computing the potential V(x) associated with the one-dimensional Schroedinger operator E ≡ −d²/dx² + V(x), −∞ < x < ∞, from knowledge of the S-matrix, more exactly, of one of the reflection coefficients. The convergence of the algorithm is guaranteed by the stability results obtained for both the direct and inverse problems.

In the 1950's and 1960's, Photoemission Spectroscopy (PES) established itself as the major technique for the study of the occupied electronic energy levels of solids. During this period the field divided into two branches: X-ray Photoemission Spectroscopy (XPS) for photon energies greater than ∼1000 eV, and Ultra-violet Photoemission Spectroscopy (UPS) for photon energies below ∼100 eV. By the 1970's XPS and UPS had become mature techniques. Like XPS, BIS (bremsstrahlung isochromat spectroscopy, performed at X-ray energies) does not have the momentum-resolving ability of UPS that has contributed much to the understanding of the occupied band structures of solids. BIS moved into a new energy regime in 1977 when Dose employed a Geiger-Mueller tube to obtain density of unoccupied states data from a tantalum sample at a photon energy of ∼9.7 eV. At similar energies, the technique has since become known as Inverse Photoemission Spectroscopy (IPS), in acknowledgment of its complementary relationship to UPS and to distinguish it from the higher energy BIS. Drawing on decades of UPS expertise, IPS has quickly moved into areas of interest where UPS has been applied: metals, semiconductors, layer compounds, adsorbates, ferromagnets, and superconductors. At La Trobe University an IPS facility has been constructed. This presentation reports on developments in the experimental and analytical techniques of IPS that have been made there. The results of a study of the unoccupied bulk and surface bands of GaAs are presented.

Background: Polymorphic inversions are a source of genetic variability with a direct impact on recombination frequencies. Given the difficulty of their experimental study, computational methods have been developed to infer their existence in a large number of individuals using genome-wide data of nucleotide variation. Methods based on haplotype tagging of known inversions attempt to classify individuals as having a normal or inverted allele. Other methods that measure differences in linkage disequilibrium attempt to identify regions with inversions but are unable to classify subjects accurately, an essential requirement for association studies. Results: We present a novel method to both identify polymorphic inversions from genome-wide genotype data and classify individuals as carrying a normal or inverted allele. Our method, a generalization of a published method for haplotype data [1], utilizes linkage between groups of SNPs to partition a set of individuals into normal and inverted subpopulations. We employ a sliding-window scan to identify regions likely to harbor an inversion, and accumulation of evidence from neighboring SNPs is used to accurately determine the inversion status of each subject. Further, our approach detects inversions directly from genotype data, thus increasing its usability in current genome-wide association studies (GWAS). Conclusions: We demonstrate the accuracy of our method to detect inversions and classify individuals on principled simulated genotypes, produced by the evolution of an inversion event within a coalescent model [2]. We applied our method to real genotype data from HapMap Phase III to characterize the inversion status of two known inversions within the regions 17q21 and 8p23 across 1184 individuals. Finally, we scan the full genomes of the European origin (CEU) and Yoruba (YRI) HapMap samples. We find population-based evidence for 9 out of 15 well-established autosomal inversions, and for 52 regions

One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of flat-spectrum radio quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the cosmic microwave background (EC/CMB) as the mechanism responsible for the high-energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work, we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high-energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts and would have been missed in all previous X-ray surveys due to selection effects.

The data obtained by the Infrared Fourier Spectrometer on board the Venera 15 Orbiter are revisited. A new database of temperature and aerosol profiles is created for the altitude range 55-100 km. The main improvement is the inclusion, in the temperature retrieval procedure, of the whole spectral range free from absorption by any gases but CO2. Besides the CO2 15 μm fundamental band, this range also includes the weak hot and isotopic CO2 bands. The HITRAN-96 spectral database was used for calculation of the gaseous absorption coefficients. The diurnal variations at the isobaric levels are investigated. At low latitudes at altitudes h > 85 km, a temperature minimum is observed in the afternoon and a maximum on the morning side. The temperature differences reach 20 K near the 0.1 mb level. The temperature difference changes its sign below the 1 mb level: in the afternoon it is warmer by more than 10 K than in the morning. The density of the clouds at all latitudes is found to be higher in the afternoon than in the morning. In the coldest parts of the 'cold collar' the clouds are found to be composed of mode 3 particles. The thermal zonal wind field reveals the presence of the midlatitude jet connected with the 'cold collar'. A low-latitude jet near 85 km, connected with the temperature inversion above this level, is observed. It is also possible that another low-latitude jet exists near the cloud tops at low latitudes.

A new approach for modelling dislocation creep during primary and secondary creep in FCC metals is proposed. The Orowan equation and dislocation behaviour at the grain scale are revisited to include the effects of different microstructures such as the grain size and solute atoms. Dislocation activity is proposed to follow a jog-diffusion law. It is shown that the activation energy for cross-slip E_cs controls dislocation mobility and the strain increments during secondary creep. This is confirmed by successfully comparing E_cs with the experimentally determined activation energy during secondary creep in 5 FCC metals. It is shown that the inverse relationship between the grain size and dislocation creep is attributed to the higher number of strain increments at the grain level dominating their magnitude as the grain size decreases. An alternative approach describing solid solution strengthening effects in nickel alloys is presented, where the dislocation mobility is reduced by dislocation pinning around solute atoms. An analysis on the solid solution strengthening effects of typical elements employed in Ni-base superalloys is also discussed. The model results are validated against measurements of Cu, Ni, Ti and 4 Ni-base alloys for wide deformation conditions and different grain sizes.

Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time and phase. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in the real seismic field, making inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. To improve the stability and quality of the results, multi-scale inversion, which reconstructs the subsurface model from low to high frequency, is applied. But the absence of very low frequencies still limits the multi-scale approach, so we combine envelope inversion in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. At the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. At the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI. The CPU/GPU heterogeneous parallel computation improves the computational speed.
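Envelope extraction, the ingredient that supplies low-frequency information when the data lack it, can be sketched with a numpy-only FFT Hilbert transform; this is an illustrative implementation, not the paper's GPU kernel.

```python
import numpy as np

def envelope(x):
    # Envelope = magnitude of the analytic signal x + i*Hilbert(x),
    # built by zeroing negative frequencies and doubling positive ones.
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

# The envelope of an oscillating carrier is smooth and low-frequency:
# this is why envelope misfits are less prone to cycle skipping than
# waveform misfits.
t = np.arange(1024) / 1024.0
trace = np.cos(2 * np.pi * 8 * t)   # 8 full cycles, periodic on the grid
env = envelope(trace)               # flat, approximately equal to 1
```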

This article revisits the theme of the clash of interests and power relations at work in participatory research which is prescribed from above. It offers a possible route toward solving conflict between adult-led research carried out by young researchers, funding requirements and organisational constraints. The article explores issues of…

Access to a fresh set of video-recordings of Sesotho praise-poetry made in the year 2000 enabled the author to revisit his adaptation of Albert Lord's definition of the formula as a dynamic compositional device that the oral poet utilizes during delivery. The basic adaptation made in 1983 pertains to heroic praises (dithoko tsa ...

Previous research on the literary origins of the term "school psychologist" is revisited, and conclusions are revised in light of new evidence. It appears that the origin of the term in the American literature occurred as early as 1898 in an article by Hugo Munsterberg, predating the usage by Wilhelm Stern in 1911. The early references to the…

The present article discusses the Neutrosophic logic view of Schrödinger's cat paradox. We argue that this paradox involves some degree of indeterminacy (unknown) which Neutrosophic logic can take into consideration, whereas other methods, including Fuzzy logic, cannot. To make this proposition clear, we revisit our previous paper by offering an illustration using a modified coin-tossing problem, known as Parrondo's game.

At the time of writing, the first community colleges in Ontario were preparing for transition to an accreditation model from an audit system. This paper revisits constructivist literature, arguing that a more pragmatic definition of constructivism effectively blends positivist and interactionist philosophies to achieve both student centred…

High precision mass measurements in Ψ and Υ families performed in 1980-1984 at the VEPP-4 collider with OLYA and MD-1 detectors are revisited. The corrections for the new value of the electron mass are presented. The effect of the updated radiative corrections has been calculated for the J/Ψ(1S) and Ψ(2S) mass measurements.

This dissertation revisits subject island effects (Ross 1967, Chomsky 1973) cross-linguistically. Controlled acceptability judgment studies in German, English, Japanese and Serbian show that extraction out of specifiers is consistently degraded compared to extraction out of complements, indicating that the Condition on Extraction domains (CED,…

We revisit a classic demonstration for surface tension in soap films and introduce a more striking variation of it. The demonstration shows how the film, pulling uniformly and normally on a loose string, transforms it into a circular arc under tension. The relationship between the surface tension and the string tension is analysed and presented in a useful graphical form. (letters and comments)
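The force balance behind the circular arc can be written out explicitly; here σ denotes the liquid's surface tension and the factor 2 accounts for the film's two surfaces (a standard textbook balance, not taken from the letter itself):

```latex
% String element subtending d\theta on an arc of radius R: the film pulls
% normally with force per unit length 2\sigma, balanced by the tension T
% at the element's two ends.
2T\sin\!\left(\tfrac{d\theta}{2}\right) \approx T\,d\theta
  = 2\sigma\,R\,d\theta
\quad\Longrightarrow\quad
T = 2\sigma R .
```

Measuring the arc radius R and the string tension T thus gives the surface tension directly, which is the graphical relationship the demonstration exploits.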

We revisit the notion of additively homomorphic encryption with a double decryption mechanism (DD-PKE), which allows for additions in the encrypted domain while having a master decryption procedure that can decrypt all properly formed ciphertexts by using a special master secret. This type of

This article revisits Goody's arguments about literacy's influence on social arrangements, culture, cognition, economics, and other domains of existence. Whereas some of his arguments tend toward technological determinism (i.e., literacy causes change in the world), other of his arguments construe literacy as a force that shapes and is shaped by…

This paper aims to present the lessons learned during a control center design project by revisiting another control center from the same company designed two and a half years before by the same project team. In light of the experience with the first project and its analysis, the designers and res...

A seminar at Chicago-Kent College of Law (Illinois) that reviews six first-year law school courses by focusing on feminist issues in course content and structure is described. The seminar functions as both a review and a shift in perspective. Courses revisited include civil procedure, contracts, criminal law, justice and the legal system,…

This spotlight revisits the dynamics and prognosis outlined in Déforestation en Afrique, published in the late 1980s. This book on deforestation in Africa utilized the statistical data available in the 1980s and was a pioneering, self-styled attempt to provide a holistic viewpoint of the ongoing trends pertaining to deforestation in ...

The literature on the exponential Fourier approach to the one-dimensional quantum harmonic oscillator problem is revised and criticized. It is shown that the solution of this problem has been built on faulty premises. The problem is revisited via the Fourier sine and cosine transform method and the stationary states are properly determined by requiring definite parity and square-integrable eigenfunctions. (paper)

The classic benchmarks for transport through a binary Markovian mixture are revisited to look at the probability distribution function of the chosen 'results': reflection, transmission and scalar flux. We argue that the knowledge of the ensemble averaged results is not sufficient for reliable predictions: a measure of the dispersion must also be obtained. An algorithm to estimate this dispersion is tested. (author)

Thorbecke Revisited: The Role of Doctrinaire Liberalism in Dutch Politics In the political history of the nineteenth century Thorbecke played a crucial role. As the architect of the 1848 liberal constitutional reform he led three cabinets. In many ways he dominated the political discourse during the

This is the third paper of a series revisiting the Faraday effect. The question of the absolute convergence of the sums over the band indices entering the Verdet constant is considered. In general, sum rules and traces per unit volume play an important role in solid-state physics, and they give...

The model of moral functioning scaffolded in the 2008 "JME" Special Issue is here revisited in response to three papers criticising that volume. As guest editor of that Special Issue I have formulated the main body of this response, concerning the dynamic systems approach to moral development, the problem of moral relativism and the role of…

Potential Theory presents a clear path from calculus to classical potential theory and beyond, with the aim of moving the reader into the area of mathematical research as quickly as possible. The subject matter is developed from first principles using only calculus. Commencing with the inverse square law for gravitational and electromagnetic forces and the divergence theorem, the author develops methods for constructing solutions of Laplace's equation on a region with prescribed values on the boundary of the region. The latter half of the book addresses more advanced material aimed at those with the background of a senior undergraduate or beginning graduate course in real analysis. Starting with solutions of the Dirichlet problem subject to mixed boundary conditions on the simplest of regions, methods of morphing such solutions onto solutions of Poisson's equation on more general regions are developed using diffeomorphisms and the Perron-Wiener-Brelot method, culminating in application to Brownian motion. In ...
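As a concrete miniature of the Dirichlet problem the book starts from, the following sketch (a standard Gauss-Seidel relaxation, not taken from the book) solves the discrete Laplace equation on a square with the value 1 prescribed on one edge and 0 on the others; grid size and sweep count are illustrative.

```python
# Discrete Dirichlet problem for Laplace's equation on an N x N grid:
# boundary values prescribed, interior relaxed to the mean of its
# four neighbours (the discrete mean-value property of harmonic functions).
N = 21
u = [[0.0] * N for _ in range(N)]
for j in range(N):
    u[0][j] = 1.0          # prescribed value 1 on one edge, 0 elsewhere

for _ in range(2000):      # Gauss-Seidel sweeps over the interior
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])

# Maximum principle: interior values lie strictly between the
# boundary extremes; by symmetry the center value is exactly 1/4.
interior = [u[i][j] for i in range(1, N - 1) for j in range(1, N - 1)]
```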

In this report an account is presented of research carried out during the period September 1, 1999-August 31, 2002 under the sponsorship of the Department of Energy, grant DE-FG02-90ER14119. The research covered several areas of modern optical physics, particularly propagation of partially coherent light and its interaction with deterministic and with random media, spectroscopy with partially coherent light, polarization properties of statistical wave fields, effects of moving diffusers on coherence and on the spectra of light transmitted and scattered by them, reciprocity inequalities involving spatial and angular correlations of partially coherent beams, spreading of partially coherent beams in random media, inverse source problems, computed and diffraction tomography and partially coherent solitons. We have discovered a new phenomenon in an emerging field of physical optics, known as singular optics; specifically, we found that the spectrum of light changes drastically in the neighborhood of points where the intensity has zero value and where, consequently, the phase becomes singular. We noted some potential applications of this phenomenon. The results of our investigations were reported in 39 publications. They are listed on pages 3 to 5. Summaries of these publications are given on pages 6-13. Scientists who have participated in this research are listed on page 14.

In mechanized tunnel drilling processes, exploration of soil structure and properties ahead of the tunnel boring machine can greatly help to lower costs and improve safety conditions during drilling. We present numerical full waveform inversion approaches in the time and frequency domains, applied to synthetic acoustic data, to detect small-scale structures representing potential obstacles in front of the tunnel boring machine. With the use of sensitivity kernels based on the adjoint wave field in the time domain and in the frequency domain, it is possible to derive satisfactory models with a manageable amount of computational load. Convergence to a suitable model is ensured by the use of iterative model improvements and gradually increasing frequencies. Results of the time- and frequency-domain approaches are compared for different obstacle and source/receiver setups. They show that the image quality strongly depends on the receiver and source positions used and increases significantly with the use of transmission waves, owing to receivers and sources installed at the surface and/or in boreholes. Transmission waves lead to clearly identified structure and position of the obstacles and give satisfactory estimates of the wave speed. Setups using only reflected waves result in blurred objects and ambiguous positions of distant objects, though they still allow heterogeneities with higher or lower wave speed to be distinguished.
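The two ingredients highlighted here, iterative model improvement and gradually increasing frequencies, can be caricatured with a single unknown. The sketch below is not the authors' solver: it recovers one wave speed from noiseless frequency-domain transmission data by gradient descent on the misfit, sweeping frequencies from low to high to avoid cycle skipping; all numbers are illustrative.

```python
import cmath
import math

# One-unknown cartoon of frequency-domain waveform inversion.
# Transmission datum for a path of length L_dist at frequency f and
# speed v: d(f, v) = exp(-2*pi*i*f*L_dist/v).
L_dist, v_true = 100.0, 2500.0

def datum(f, v):
    return cmath.exp(-2j * math.pi * f * L_dist / v)

v = 2000.0                                # deliberately poor starting model
for f in (1.0, 2.0, 4.0, 8.0):            # gradually increasing frequencies
    for _ in range(200):                  # iterative model improvement
        r = datum(f, v) - datum(f, v_true)                  # data residual
        dpred_dv = datum(f, v) * (2j * math.pi * f * L_dist / v ** 2)
        g = (r.conjugate() * dpred_dv).real                 # d(0.5*|r|^2)/dv
        v -= 4.0e5 * g                    # hand-tuned step size
# v has been driven close to v_true = 2500
```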

The inverse relation between leverage and profitability is widely regarded as a serious defect of the trade-off theory. We show that the defect is not with the theory but with the use of a leverage ratio in which profitability affects both the numerator and the denominator. Profitability directly increases the value of equity. Firms do take the predicted offsetting actions. They issue debt and repurchase equity when profitability rises, and retire debt and issue equity when profitability fall...
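The mechanical point about the leverage ratio can be seen with toy numbers (purely illustrative, not from the paper): profit raises the value of equity in the denominator, so measured leverage can fall even when the firm responds by issuing debt.

```python
# Market leverage ratio D / (D + E).  A profitable year raises E; even
# with an offsetting debt issue the ratio falls, which is the mechanical
# inverse relation the paper attributes to the ratio, not the theory.
D, E = 50.0, 50.0
lev_before = D / (D + E)        # 0.50
profit = 30.0                   # profit accrues to equity value
new_debt = 10.0                 # partial offsetting action: issue debt
D2, E2 = D + new_debt, E + profit
lev_after = D2 / (D2 + E2)      # 60 / 140, below 0.50
```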

Chromosomal rearrangements are a source of structural variation within the genome that figure prominently in human disease, where the importance of translocations and deletions is well recognized. In principle, inversions (reversals in the orientation of DNA sequences within a chromosome) should have similar detrimental potential. However, the study of inversions has been hampered by traditional approaches used for their detection, which are not particularly robust. Even with significant advances in whole genome approaches, changes in the absolute orientation of DNA remain difficult to detect routinely. Consequently, our understanding of inversions is still surprisingly limited, as is our appreciation for their frequency and involvement in human disease. Here, we introduce the directional genomic hybridization methodology of chromatid painting (a whole new way of looking at structural features of the genome) that can be employed with high resolution on a cell-by-cell basis, and demonstrate its basic capabilities for genome-wide discovery and targeted detection of inversions. Bioinformatics enabled the development of sequence- and strand-specific directional probe sets, which, when coupled with single-stranded hybridization, greatly improved the resolution and ease of inversion detection. We highlight examples of the far-ranging applicability of this cytogenomics-based approach, which include confirmation of the alignment of the human genome database and evidence that individuals themselves share similar sequence directionality, as well as use in comparative and evolutionary studies for any species whose genome has been sequenced. In addition to applications related to basic mechanistic studies, the information obtainable with strand-specific hybridization strategies may ultimately enable novel gene discovery, thereby benefitting the diagnosis and treatment of a variety of human disease states and disorders including cancer, autism, and idiopathic infertility.

A 900-kb inversion exists within a large region of conserved linkage disequilibrium (LD) on chromosome 17. CRHR1 is located within the inversion region and associated with inhaled corticosteroid response in asthma. We hypothesized that CRHR1 variants are in LD with the inversion, supporting a potential role for natural selection in the genetic response to corticosteroids. We genotyped six single nucleotide polymorphisms (SNPs) spanning chromosome 17: 40,410,565-42,372,240, including four SNPs defining inversion status. Similar allele frequencies and strong LD were noted between the inversion and a CRHR1 SNP previously associated with lung function response to inhaled corticosteroids. Each inversion-defining SNP was strongly associated with inhaled corticosteroid response in adult asthma (P values 0.002-0.005). The CRHR1 response to inhaled corticosteroids may thus be explained by natural selection resulting from inversion status or by long-range LD with another gene. Additional pharmacogenetic investigations into regions of chromosomal diversity, including copy number variation and inversions, are warranted.

Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
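The oscillating structure of inverse filters that motivates the paper's star-norm regularizer is easy to exhibit in one dimension. The following is a minimal illustration, not the paper's method: for a two-tap blur h = [1, a], the exact inverse filter is the geometric series [1, -a, a², -a³, ...], whose taps alternate in sign, and convolving the blur with a truncated inverse recovers a delta up to a small tail.

```python
# Truncated inverse filter of the two-tap blur h = [1, a]:
# 1 / (1 + a z^-1) = sum_k (-a)^k z^-k, an oscillating (sign-alternating)
# sequence of taps.
a = 0.8
h = [1.0, a]
inv = [(-a) ** k for k in range(8)]       # truncated inverse filter

# Full linear convolution h * inv; exact algebra gives
# [1, 0, ..., 0, -a^8]: a delta plus a small truncation tail.
out = [0.0] * (len(h) + len(inv) - 1)
for i, hv in enumerate(h):
    for j, gv in enumerate(inv):
        out[i + j] += hv * gv
```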

We introduce a new method of modeling and inversion of potential field data generated by a density contrast surface. Our method is based on 3D Cauchy-type integral representation of the potential fields. Traditionally, potential fields are calculated using volume integrals of the domains occupied...

This chapter revisits Jane Pilcher’s (1994) seminal work ‘Who should do the dishes? Three generations of Welsh women talking about men and housework’, which was originally published in Our Sister’s Land: the changing identities of women in Wales. As discussed in the introductory chapter, I began revisiting classic Welsh studies as part of my doctoral study Mothers and daughters on the margins: gender, generation and education (Mannay, 2012); this led to the later publication of a revisiting ...

The study aims to explore the relationship between risk perception of rockfall and revisit intention using a Structural Equation Modeling (SEM) analysis. A total of 573 valid questionnaires were collected from travelers to Taroko National Park, Taiwan. The findings show that the majority of travelers have a medium perception of rockfall risk and are willing to revisit the Taroko National Park. The intention to revisit Taroko National Park is influenced by hazardous preferences, willingness-to-pa...

This study proposes a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites using a machine-learning technique. In this method, a numerical simulation is repeated under various initial conditions, producing a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities characterizing the turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial conditions of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square difference between the true conditions and the output values of the NN. The method thus explores the empirical relationship between the numerical results and the initial conditions, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce an NN that estimates paleo-hydraulic conditions from data on ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on the 1D shallow-water equations with a correction for density-stratification effects was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. The numerical simulation was repeated 1000 times, and the resulting 1000 turbidite beds were used as training data for an NN with 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. In this test, the initial conditions of the validation data were successfully reconstructed by the NN. The estimated values show very small
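The simulate-train-invert loop described here can be sketched end to end with stand-ins for each ingredient: a toy forward model in place of the shallow-water simulation, and nearest-neighbour regression in place of the deep neural network. All names and numbers below are illustrative, not from the study.

```python
import math

# Toy forward model: a 'deposit' thickness profile at five positions,
# as a stand-in for the turbidity-current simulation.
def forward(c):
    return [c * math.exp(-x / (1.0 + c)) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]

# 1. Repeat the forward model over a grid of flow conditions to build
#    a training set of (deposit features, condition) pairs.
train = [(forward(c), c) for c in [0.1 * k for k in range(1, 101)]]

# 2. 'Learned' inverse: return the condition whose simulated deposit
#    best matches the observed one (least-squares feature distance),
#    a crude substitute for the trained NN.
def invert(observed):
    return min(train,
               key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], observed)))[1]

# 3. Validate on an independent synthetic case, as in the study.
true_c = 4.23
est = invert(forward(true_c))   # lands on the nearest grid condition, 4.2
```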

This proceedings volume is based on papers presented at the Third Annual Workshop on Inverse Problems, which was organized by the Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, and took place in May 2013 in Stockholm. The purpose of this workshop was to present new analytical developments and numerical techniques for the solution of inverse problems for a wide range of applications in acoustics, electromagnetics, optical fibers, medical imaging, geophysics, etc. The contributions in this volume reflect these themes and will be beneficial to researchers who are working in the area of applied inverse problems.