Organizers

Objectives

The proposed workshop will be composed of five different thrusts, one for each day of the event.

Thrust 1: Numerical analysis of free-energy methods.

In recent years, much progress has been made in the determination of free-energy differences in physical, chemical and biological systems, facilitated to a large extent by the emergence of a wide variety of methods that have significantly improved both the efficiency and the reliability of the computed quantities. This progress has, however, come at a price: it is increasingly difficult for practitioners to find their way through the maze of available computational schemes, and the reliability and efficiency of these techniques have not all been examined in depth. A fundamental understanding of the numerical behavior of a free-energy method is, however, pivotal, not only from a practical standpoint but also for the development of new approaches. This part of the proposed workshop sets out to address the following questions: What are the statistical error and the systematic, finite-length bias of a free-energy method? How can the efficiency and the reliability of free-energy calculations be improved? How can the quality of a computation be assessed when the exact answer is not known? How are the commonly used methods conceptually related? Which method is best suited to a specific problem? We propose to address these questions by dissecting the available methodology from a mathematical perspective, in order to reach a consensus on the best current practices for free-energy calculations.
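As a minimal illustration of how statistical error and bias can be probed in the simplest possible setting, the sketch below (a hypothetical toy example; all parameter values are arbitrary) applies the classical free-energy perturbation estimator, ΔF = -kT ln⟨exp(-ΔU/kT)⟩₀, to a pair of harmonic potentials for which the exact answer is known analytically, and assesses the statistical error by bootstrap resampling:

```python
import numpy as np

# Toy model: two harmonic potentials U_i(x) = k_i x^2 / 2, with kT = 1.
# Exact free-energy difference: dF = (kT/2) * ln(k1 / k0).
k0, k1 = 1.0, 2.0
exact_dF = 0.5 * np.log(k1 / k0)

rng = np.random.default_rng(0)
n_samples = 50_000

# Sample the reference state (Boltzmann distribution of U_0 at kT = 1).
x = rng.normal(0.0, np.sqrt(1.0 / k0), size=n_samples)
dU = 0.5 * (k1 - k0) * x**2            # energy difference U_1 - U_0

# Free-energy perturbation (exponential averaging) estimator.
fep_dF = -np.log(np.mean(np.exp(-dU)))

# Bootstrap estimate of the statistical error of the estimator.
boot = [-np.log(np.mean(np.exp(-dU[rng.integers(0, n_samples, n_samples)])))
        for _ in range(200)]
fep_err = np.std(boot)

print(f"exact dF = {exact_dF:.4f}, FEP estimate = {fep_dF:.4f} +/- {fep_err:.4f}")
```

Even in this trivial case the exponential average is dominated by rare samples with small ΔU, which is precisely the kind of finite-sampling bias the questions above refer to.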

Thrust 2: Nonequilibrium approaches to free-energy calculations.

Jarzynski recently proposed an equality relating the equilibrium free-energy difference between two states to nonequilibrium simulations, wherein the reference state is switched at constant velocity towards the target one. Previous methods, such as slow growth, converge only when the switching speed is very small, which is computationally very expensive in many cases. In contrast, the Jarzynski equality holds for arbitrary switching velocities. Detailed mathematical analyses have revealed, however, that in its current implementations this approach often suffers from large statistical errors. Several questions therefore remain open: What is the optimal choice of parameters to minimize the error? Are there cases in which this approach can be shown to be superior to equilibrium approaches? How can forward and backward trajectories be used routinely to improve its efficiency, as proposed by Crooks?
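The identity in question, exp(-ΔF/kT) = ⟨exp(-W/kT)⟩, can be demonstrated on a toy model for which ΔF is known exactly. The sketch below (an illustrative example, assuming an overdamped Langevin particle whose harmonic stiffness is switched at a constant rate; all parameters are arbitrary) generates nonequilibrium switching trajectories and compares the Jarzynski estimate with the analytical answer:

```python
import numpy as np

# Jarzynski equality: exp(-dF/kT) = < exp(-W/kT) >, averaged over
# nonequilibrium trajectories switching the system from state 0 to state 1.
# Toy model: overdamped Langevin particle in U(x) = k(t) x^2 / 2, kT = 1,
# with the stiffness switched linearly from k0 to k1 over n_steps steps.
k0, k1 = 1.0, 2.0
exact_dF = 0.5 * np.log(k1 / k0)       # known analytically for this model

rng = np.random.default_rng(1)
n_traj, n_steps, dt = 5_000, 200, 0.01

x = rng.normal(0.0, np.sqrt(1.0 / k0), size=n_traj)  # equilibrated start
work = np.zeros(n_traj)
k, dk = k0, (k1 - k0) / n_steps
for _ in range(n_steps):
    # Switching step: changing k does work dU = x^2 dk / 2 on each trajectory.
    work += 0.5 * x**2 * dk
    k += dk
    # Relaxation step: Euler-Maruyama update of the overdamped dynamics.
    x += -k * x * dt + np.sqrt(2.0 * dt) * rng.normal(size=n_traj)

jarzynski_dF = -np.log(np.mean(np.exp(-work)))
mean_work = np.mean(work)              # upper bound on dF (second law)

print(f"exact dF = {exact_dF:.4f}, Jarzynski = {jarzynski_dF:.4f}, <W> = {mean_work:.4f}")
```

Repeating the experiment with shorter switching times (smaller `n_steps`) makes the work distribution broader and the exponential average visibly noisier, which is the statistical-error issue discussed above.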

Thrust 3: Coarse-graining and reduced models.

Free energy is a pivotal quantity for building reduced models in statistical mechanics: it is, in some sense, the effective energy associated with the chosen reaction coordinate. A closely related and very important question is: How can free energy be used to build dynamically consistent reduced models? Coarse-graining techniques are required both for computational reasons (to work with smaller systems) and for theoretical purposes (to gain a better understanding of complex systems). Many schemes have been proposed in the literature, based, for example, on generalized Langevin equations, proper orthogonal decomposition or the Mori-Zwanzig projection formalism. This is a very active field of research to which mathematics can make a significant contribution. This part of the proposed workshop is intended to provide an overview of the currently available techniques and their fields of application in physics, chemistry and biology.
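To make the statement "free energy is the effective energy of the reduced coordinate" concrete, the following sketch (an illustrative toy, with all parameters chosen arbitrarily) recovers the potential of mean force F(q) = -kT ln p(q) from samples of a coordinate, and checks that for a Gaussian coordinate the effective energy is the expected harmonic well:

```python
import numpy as np

# For a reduced coordinate q with equilibrium density p(q), the free energy
# (potential of mean force) is F(q) = -kT ln p(q): the effective energy that
# a coarse-grained model of q should feel. Here q is standard Gaussian, so
# F(q) should be harmonic, F(q) = q^2 / 2 + const (with kT = 1).
rng = np.random.default_rng(2)
kT = 1.0
q = rng.normal(0.0, 1.0, size=200_000)   # samples of the reduced coordinate

hist, edges = np.histogram(q, bins=40, range=(-2.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
F = -kT * np.log(hist)                   # free-energy profile, up to a constant

# Fit a parabola to the recovered profile; the curvature should be ~0.5.
curvature = np.polyfit(centers, F, 2)[0]
print(f"recovered curvature = {curvature:.3f} (expected 0.5)")
```

The dynamically consistent question raised above is harder: gradient descent on this F(q) plus white noise reproduces the equilibrium statistics of q, but not, in general, its time correlations, which is where generalized Langevin and Mori-Zwanzig approaches enter.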

Thrust 4: Methods for ergodic sampling.

One of the most critical difficulties faced in free-energy calculations is the existence of a broad range of free-energy barriers at multiple scales, both lower and higher than the thermal energy. All calculations rely on the assumption that, during the short time span of numerical simulations, time averages are close to thermodynamic-ensemble averages. As a result of finite sampling, however, this assumption is in many cases not satisfied: regions of conformational space become effectively disconnected, and the system remains trapped in metastable regions. It is therefore of paramount importance to design sampling schemes that increase the rate of conformational sampling under such circumstances. Techniques based on adaptive biasing, for instance, have proven particularly useful; a solid mathematical understanding of their underlying assumptions is needed to ensure their efficiency and to pave the way for further improvements of classical approaches. The current methodology will be reviewed, with a glimpse into novel research areas such as the development of algorithms for sampling stationary measures of non-reversible dynamics (out-of-equilibrium systems).
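As a minimal illustration of the adaptive-biasing idea, the sketch below (a metadynamics-style toy on a one-dimensional double well; the potential, deposition schedule and all parameters are illustrative assumptions) grows a history-dependent bias where the system has already been, progressively filling the metastable well until the barrier is crossed:

```python
import numpy as np

# Toy double well U(x) = (x^2 - 1)^2, barrier height 1 at x = 0.
# At kT = 0.2 the unbiased overdamped dynamics stays trapped in one well;
# an adaptive bias built from Gaussians deposited at visited positions
# fills the well and restores barrier crossing (metadynamics-style scheme).
def grad_U(x):
    return 4.0 * x * (x**2 - 1.0)

rng = np.random.default_rng(3)
kT, dt, n_steps = 0.2, 0.01, 40_000
height, width = 0.1, 0.2                 # deposited-Gaussian parameters
centers = []                             # bias centers: history of visits

def grad_bias(x):
    """Derivative of the accumulated Gaussian bias at x."""
    if not centers:
        return 0.0
    c = np.array(centers)
    return np.sum(-height * (x - c) / width**2
                  * np.exp(-((x - c) ** 2) / (2.0 * width**2)))

x = -1.0                                 # start in the left well
traj = np.empty(n_steps)
for i in range(n_steps):
    if i % 50 == 0:
        centers.append(x)                # deposit a Gaussian at current x
    force = -(grad_U(x) + grad_bias(x))
    x += force * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
    traj[i] = x

print(f"visited range: [{traj.min():.2f}, {traj.max():.2f}]")
```

Whether, how fast, and toward what limit such a history-dependent bias converges is exactly the kind of question for which the mathematical analysis mentioned above is needed.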

Thrust 5: Transition path sampling.

Many important physical, chemical or biological processes span time scales that significantly exceed those amenable to molecular-dynamics simulations. A possible route to address this shortcoming consists in selecting a putative reaction coordinate, from which the free energy and other related quantities can be determined. In contrast, transition-path sampling is a reaction-coordinate-free method in which the ensemble of transition pathways is sampled using a Monte Carlo procedure. The result is a set of dynamical pathways that can be further analyzed to extract information about the reaction mechanism. An interesting avenue of research is the application of transition-path techniques to non-equilibrium methods; in particular, biased-path sampling techniques can overcome some of the limitations of the Jarzynski identity. More generally, numerical techniques aimed at generating equilibrium trajectories from one metastable state to another are needed both for free-energy calculations and for understanding reactive paths. These various routes will be explored from a mathematical standpoint, emphasizing the reasons for their successes and failures.
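The Monte Carlo move at the heart of this procedure, the "shooting" move, can be sketched in a few lines. The example below is a toy one-way shooting scheme for overdamped dynamics on a double well; the potential, the definitions of the end states A and B, and all parameters are illustrative assumptions rather than a prescription:

```python
import numpy as np

# Transition-path sampling on a toy double well U(x) = (x^2 - 1)^2 at kT = 1:
# a Monte Carlo walk in the space of reactive trajectories from A (x = -1)
# to B (x > 0.5), using one-way shooting moves that regenerate the tail of
# the current path with fresh noise and accept it only if it still ends in B.
def grad_U(x):
    return 4.0 * x * (x**2 - 1.0)

rng = np.random.default_rng(4)
kT, dt, n_steps = 1.0, 0.01, 300

def propagate(x0, n):
    """Overdamped Langevin segment of n steps starting from x0."""
    path = [x0]
    for _ in range(n):
        path.append(path[-1] - grad_U(path[-1]) * dt
                    + np.sqrt(2.0 * kT * dt) * rng.normal())
    return np.array(path)

in_B = lambda x: x > 0.5

# Find an initial reactive path by brute force (cheap at this temperature).
path = propagate(-1.0, n_steps)
while not in_B(path[-1]):
    path = propagate(-1.0, n_steps)

# Shooting moves: pick a random slice, regenerate the path beyond it, and
# accept the trial only if it is still reactive; otherwise keep the old path.
n_accept = 0
for _ in range(200):
    j = rng.integers(1, n_steps)
    tail = propagate(path[j], n_steps - j)
    if in_B(tail[-1]):
        path = np.concatenate([path[:j], tail])
        n_accept += 1

print(f"accepted {n_accept}/200 shooting moves; endpoint = {path[-1]:.2f}")
```

No reaction coordinate appears anywhere in the move: only the indicator functions of the end states are used, which is precisely the appeal of the method for systems where a good coordinate is unknown.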