Neural networks

With access to large datasets, deep neural networks (DNNs) have achieved human-level accuracy in image and speech recognition tasks. In chemistry, however, datasets are inherently small and fragmented. In this work, we develop approaches that use rule-based models and physics-based simulations to train ChemNet, a transferable and generalizable pre-trained network for small-molecule property prediction that learns in a weakly supervised manner from large unlabeled chemical databases.
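The abstract does not give ChemNet's architecture, so the following is only a minimal sketch of the weakly supervised idea: a cheap rule-based labeler (here a hypothetical stand-in function, not anything from the work) generates approximate labels for a large unlabeled pool, and a small network is pre-trained on those weak labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "rule-based" labeler: a deterministic surrogate that assigns an
# approximate property to each unlabeled sample, standing in for the paper's
# rule-based models or physics-based simulations.
def rule_label(X):
    return np.tanh(X.sum(axis=1, keepdims=True))

# Large "unlabeled" pool: the weak labels come from the rule, not experiments.
X = rng.normal(size=(1000, 8))
y = rule_label(X)

# One-hidden-layer regressor pre-trained on the weak labels.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(200):
    h = np.tanh(X @ W1 + b1)            # hidden activations, (1000, 16)
    pred = h @ W2 + b2                  # predictions, (1000, 1)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagate the mean-squared error through both layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

After this weakly supervised pre-training, the learned weights would serve as initialization for fine-tuning on the small labeled chemical datasets.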

There is an urgent need to develop new ways to combat the threat of rising antibiotic resistance rates, a need highlighted explicitly in recent national action plans from the CDC and the White House. Our laboratory has found that hospitalized patients who develop an infection almost always carry the infecting bacterium on their body before they become ill, usually in the intestine but occasionally on the skin or in the mouth.

This talk investigates the role of feedback in a network of amplifiers. We start by revisiting the theory of the individual feedback amplifier. Then, taking inspiration from neuronal behaviors, we present a simple architecture that makes the network amplification zoomable, allowing versatile modulation of the resolution of the network amplifier. The network property results from a feedback localization property of the individual amplifiers.
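The classical single-amplifier starting point (the talk's specific network architecture is not spelled out here) is the standard negative-feedback relation: an open-loop gain $A$ with feedback fraction $\beta$ gives the closed-loop gain and its desensitization to variations in $A$:

```latex
A_{\mathrm{cl}} = \frac{A}{1 + A\beta},
\qquad
\frac{dA_{\mathrm{cl}}}{A_{\mathrm{cl}}} = \frac{1}{1 + A\beta}\,\frac{dA}{A}
```

For large loop gain $A\beta$, the closed-loop gain approaches $1/\beta$ and becomes insensitive to the amplifier itself, which is the property that localized feedback exploits at the network level.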

In this talk I will introduce the main mathematical questions arising in the modeling of large-scale neuronal networks involved at functional scales in the brain. Such networks are composed of multiple populations (different neuronal types), in which each neuron has stochastic dynamics and operates in a random environment. Understanding the collective dynamics of such neuronal assemblies involves mathematical tools developed in statistical physics, since most cortical activity regimes are out of equilibrium, corresponding to periodic or chaotic solutions in law.

Synaptic transmission is a central component of neural processing. Short-term depression occurs when repeated driving of a synapse reduces its efficacy, a feature that is common across the nervous system. The mechanics of synaptic discharge are well characterized and involve both probabilistic release and uptake of neurotransmitter during activity. However, many studies that consider the impact of depression on information flow use a deterministic model of depression, based on the trial-averaged response.
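The contrast can be made concrete with a Tsodyks-Markram-style sketch (parameter values below are illustrative assumptions, not taken from the abstract): the deterministic model tracks the trial-averaged resource fraction, while the stochastic model releases from discrete sites with a fixed probability, so individual trials fluctuate even though their average matches the deterministic trace.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed, not from the abstract).
U, tau_d, isi = 0.5, 0.3, 0.05      # release prob., recovery time (s), spike interval (s)
n_spikes, n_sites = 20, 100         # spikes in the train, vesicle release sites

def deterministic_train():
    """Trial-averaged model: the resource fraction x evolves deterministically."""
    x, out = 1.0, []
    for _ in range(n_spikes):
        out.append(U * x)                               # mean released fraction
        x *= 1.0 - U                                    # depletion at the spike
        x = 1.0 - (1.0 - x) * np.exp(-isi / tau_d)      # exponential recovery
    return np.array(out)

def stochastic_train():
    """Single-trial model: each docked site releases independently with prob. U."""
    docked, out = np.ones(n_sites, dtype=bool), []
    for _ in range(n_spikes):
        released = docked & (rng.random(n_sites) < U)
        out.append(released.sum() / n_sites)
        docked &= ~released
        # Empty sites refill with probability 1 - exp(-isi/tau_d).
        docked |= rng.random(n_sites) < 1.0 - np.exp(-isi / tau_d)
    return np.array(out)

det = deterministic_train()
sto = np.mean([stochastic_train() for _ in range(200)], axis=0)
```

Averaged over many trials the stochastic model reproduces the deterministic curve, but any single trial carries release-site noise that trial-averaged analyses of information flow discard.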

Inferring cause-effect relationships from observations is one of the fundamental challenges in the natural sciences and beyond. Owing to technological advances over the last decade, the amount of observations and data available to characterize complex systems and their dynamics has increased substantially, confronting scientists with this challenge in many different areas. Specific examples of general importance include seismicity as well as nerve cell cultures and even the brain.

Computation has fundamentally changed the way we study nature. Recent breakthroughs in data collection technology, such as GPS and other mobile sensors, high-definition cameras, satellite images, and genotyping, are giving biologists access to data about wild populations, from genetic to social interactions, which are orders of magnitude richer than any previously collected. Such data offer the promise of answering some of the big questions in population biology: Why do animals form social

In this talk I will describe recent results on large populations of brain connectivity networks. We analyze sex and kinship relations and their effects on brain network metrics and topologies. The reported results are obtained in collaboration between the team of Paul Thompson at UCLA and my team at the UofM, in particular Neda Jahanshad and Julio Duarte-Carvajalino.