In Nature, exceptional permeability and selectivity are achieved simultaneously: ion channels, for example, distinguish very similar ions such as sodium and potassium while maintaining high throughput. The paradigm change compared with nanoscale technology is that these biological filters operate out of equilibrium, subjected to thermal or active fluctuations, for example of the pore constriction. Here we investigate how out-of-equilibrium fluctuations of a pore affect the translocation dynamics, in particular the dispersion coefficients. Our findings demonstrate a complex interplay between transport and surface wiggling, and elucidate the impact of pore agitation in a broad range of artificial and biological porins, as well as, at larger scales, in vascular motion in fungi, intestinal contractions and microfluidic surface waves. These results open up the possibility of actively tuning transport across membranes by external stimuli, with potential applications to nanoscale pumping, osmosis and dynamic ultrafiltration.
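The dispersion coefficients mentioned above can be estimated numerically from stochastic trajectories. The sketch below is a minimal toy model, not the specific pore model of the abstract: overdamped Brownian particles in a one-dimensional potential along the channel axis that "breathes" in time, `U(x, t) = amp * sin(k x) * sin(omega t)`, as a crude stand-in for an oscillating constriction. All parameter names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dispersion_coefficient(omega, amp=1.0, k=2 * np.pi, D0=1.0,
                           n_traj=2000, n_steps=4000, dt=1e-3):
    """Estimate the long-time dispersion coefficient D_eff = Var[x(T)] / (2 T)
    for overdamped Brownian particles (bare diffusivity D0) in the
    time-oscillating potential U(x, t) = amp * sin(k x) * sin(omega t),
    a toy stand-in for a fluctuating pore constriction."""
    x = np.zeros(n_traj)
    t = 0.0
    for _ in range(n_steps):
        force = -amp * k * np.cos(k * x) * np.sin(omega * t)  # -dU/dx
        x += force * dt + rng.normal(0.0, np.sqrt(2 * D0 * dt), n_traj)
        t += dt
    T = n_steps * dt
    return np.var(x) / (2 * T)
```

As a sanity check, at `omega = 0` the potential vanishes and the estimate recovers free diffusion, `D_eff ≈ D0`; at finite frequency the ratio `D_eff / D0` probes how the pore agitation enhances or suppresses dispersion.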

We show that bringing two topologically trivial systems into proximity can give rise to a topological phase. More specifically, we study a 1D metallic nanowire proximitised by a 2D superconducting substrate with mixed s-wave and p-wave pairing, and we demonstrate both analytically and numerically that the phase diagram of such a setup is richer than previously reported. Apart from the two expected, well-known phases (in which the substrate and the wire are simultaneously trivial or simultaneously topological), we show that there exist two peculiar phases in which the nanowire is in a topological regime while the substrate is trivial, and vice versa.
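How a 1D superconducting phase diagram is mapped out can be illustrated with the Kitaev chain, the standard toy model for a 1D p-wave superconductor; this is an illustration only, not the wire–substrate model of the abstract. With the BdG Hamiltonian H(k) = (-2 t cos k - mu) tau_z + 2 Delta sin k tau_y, the pairing term vanishes at k = 0 and k = pi, so the Z2 invariant reduces to the sign of the normal-state dispersion at those two points, and the phase boundary sits at |mu| = 2|t|.

```python
import numpy as np

def kitaev_invariant(mu, t=1.0):
    """Z2 Majorana number of the Kitaev chain,
    H(k) = (-2 t cos k - mu) tau_z + 2 Delta sin k tau_y.
    Since the pairing vanishes at k = 0 and k = pi, the invariant is
    sign[h(0) * h(pi)] with h(k) = -2 t cos k - mu.
    Returns -1 (topological), +1 (trivial), or 0 on the boundary."""
    h0 = -2 * t - mu       # h(k = 0)
    hpi = 2 * t - mu       # h(k = pi)
    return int(np.sign(h0 * hpi))
```

In the abstract's setup the analogous invariants are computed independently for the wire and the substrate, which is what allows the mixed phases (topological wire over trivial substrate, and vice versa) to be identified.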

The successes and the multitude of applications of deep learning methods have spurred efforts towards quantitative modeling of the performance of deep neural networks. In particular, an information-theoretic approach has been receiving increasing interest. Nevertheless, computing entropies and mutual informations in industry-sized neural networks is computationally intractable in practice. In this talk, we will instead consider a class of models of deep neural networks for which an expression for these information-theoretic quantities can be derived from the replica method. We will examine how the mutual information between hidden and input variables can be tracked during the training of such neural networks on synthetic datasets. Finally, we will discuss the numerical results of a few training experiments.
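The replica expressions of the talk are model-specific, but the quantity being tracked can be illustrated exactly in the simplest case: a single stochastic linear layer T = W X + xi with Gaussian input X and Gaussian noise xi, for which I(X; T) has the closed form below. This is a standard baseline in this literature, used here only as a self-contained sketch; the noise and input scales are illustrative assumptions.

```python
import numpy as np

def gaussian_mutual_information(W, sigma_noise=0.1, sigma_x=1.0):
    """Exact mutual information I(X; T) in nats for the stochastic layer
    T = W X + xi, with X ~ N(0, sigma_x^2 I) and xi ~ N(0, sigma_noise^2 I):
        I(X; T) = 1/2 logdet(I + (sigma_x / sigma_noise)^2 W W^T).
    Uses slogdet for numerical stability."""
    n_hidden = W.shape[0]
    G = (sigma_x / sigma_noise) ** 2 * (W @ W.T)
    _, logdet = np.linalg.slogdet(np.eye(n_hidden) + G)
    return 0.5 * logdet
```

Evaluating this formula on the weight matrix after each training step is the linear-Gaussian analogue of the curves discussed in the talk: as the weights grow during training, I(X; T) increases monotonically.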