Mathematics

This opening lecture lists some of the questions and issues propelling current research in Cell Biology and in the modelling of this field. I introduce basic features of eukaryotic cells that can crawl, and briefly explain the role of the actin cytoskeleton in cell motility. I also introduce the biochemical signalling that regulates the cytoskeleton and the concept of cell polarization. By simplifying the enormously complex signalling networks and applying tools of mathematics (nonlinear dynamics, scaling, bifurcations), we can hope to gain some understanding of a few of the basic mechanisms that are responsible for symmetry breaking, robustness, pattern formation, self-assembly, and other cell-level phenomena.

Central to Alan Turing's posthumous reputation is his work in British codebreaking during the Second World War. This relationship is not well understood, largely because it stands at the intersection of two technical fields, mathematics and cryptology, the second of which has also been shrouded in secrecy. This lecture will assess this relationship from a historical cryptological perspective. It treats the mathematization and mechanization of cryptology between 1920 and 1950 as international phenomena. It assesses Turing's role in one important phase of this process: British work at Bletchley Park in developing cryptanalytical machines for use against Enigma in 1940-41. It also focuses on his interest in and work with cryptographic machines between 1942 and 1946, and concludes that this work served as a seedbed for the development of his thinking about computers.

While Turing is best known for his abstract concept of a "Turing Machine," he also designed (but did not build) several other machines, particularly ones involved with code breaking and early computers. While Turing was a fine mathematician, he could not be trusted to construct the machines he designed: he would almost always break some delicate piece of equipment if he tried to do anything practical.
The early code-breaking machines (known as "bombes", from the Polish word for bomb, reputedly because of their loud ticking noise) were not designed by Turing, but he had a hand in several later machines known as "Robinsons" and eventually in the Colossus machines.
After the war he worked on the design of an electronic computer for the National Physical Laboratory, an innovative machine unlike the other computers being considered at the time. He left the NPL before the machine was operational, but made further contributions to early computers, such as those being constructed at Manchester University.
This talk will describe some of his ideas behind these machines.

Many scientific questions are considered solved to the best possible degree when we have a method for computing a solution. This is especially true in mathematics and those areas of science in which phenomena can be described mathematically: one only has to think of the methods of symbolic algebra used to solve equations, or the laws of physics that allow one to calculate unknown quantities from known measurements. The crowning achievement of mathematics would thus be a systematic way to compute the solution to any mathematical problem. The hope that this was possible was perhaps first articulated by the 18th-century mathematician-philosopher G. W. Leibniz. Advances in the foundations of mathematics in the early 20th century made it possible, in the 1920s, to first formulate the question of whether there is such a systematic way to find a solution to every mathematical problem. This became known as the decision problem (Entscheidungsproblem), and it was considered a major open problem in the 1920s and 1930s. Alan Turing solved it in his first, groundbreaking paper "On computable numbers" (1936). In order to show that there cannot be a systematic computational procedure that solves every mathematical question, Turing had to provide a convincing analysis of what a computational procedure is. His abstract, mathematical model of computability is the Turing machine. He showed that no Turing machine, and hence no computational procedure at all, could solve the Entscheidungsproblem.
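To make the model concrete, here is a minimal sketch of a one-tape Turing machine simulator in Python. The representation (a transition table keyed by state and symbol) and the example `flip` program are my own illustrations, not anything taken from Turing's paper.

```python
# A minimal sketch of Turing's abstract machine model (illustrative only;
# the transition table below is a made-up example, not from Turing's paper).

def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Run a one-tape Turing machine.

    program: dict mapping (state, symbol) -> (new_symbol, move, new_state),
             where move is -1 (step left) or +1 (step right).
    Halts when no transition applies; returns the tape contents as a string.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in program:
            break  # no applicable rule: the machine halts
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: flip every bit of a binary string, halting at the
# first blank cell.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}

print(run_turing_machine(flip, "1011"))  # -> 0100
```

The unsolvability result then says that no `program` of this kind can decide, for every mathematical statement fed to it on the tape, whether that statement is provable.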

Many multi-cellular organisms exhibit remarkably similar patterns of aging and mortality. Because this phenomenon appears to arise from the complex interaction of many genes, it has been a challenge to explain it quantitatively as a response to natural selection. I survey attempts by me and my collaborators to build a framework for understanding how mutation, selection and recombination acting on many genes combine to shape the distribution of genotypes in a large population. A genotype drawn at random from the population at a given time is described in our model by a Poisson random measure on the space of loci, and hence its distribution is characterized by the associated intensity measure. The intensity measures evolve according to a continuous-time, measure-valued dynamical system. I present general results on the existence and uniqueness of this dynamical system, how it arises as a limit of discrete generation systems, and the nature of its equilibria.
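A caricature of the kind of dynamics involved (my notation, indicative only of the model's general shape, not taken from the talk): write $\rho_t$ for the intensity measure on the space of loci, $\nu$ for the mutation intensity, and $S(\rho)$ for the selective cost of carrying a mutation load with intensity $\rho$. A mutation-selection dynamics of this type takes the form
$$
\frac{d\rho_t}{dt}(dm) \;=\; \nu(dm)\;-\;\rho_t(dm)\,\big(S(\rho_t+\delta_m)-S(\rho_t)\big),
$$
where the bracketed term is the marginal cost of adding a mutation at locus $m$ to the current background. Equilibria are intensity measures at which mutational inflow balances selective removal at every locus.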

After reviewing ordinary finite-dimensional Morse theory, I will explain how Morse generalized Morse theory to loop spaces, and how Floer generalized it to gauge theory on a three-manifold. Then I will describe an analog of Floer cohomology with the gauge group taken to be a complex Lie group (rather than a compact group as assumed by Floer), and how this is expected to be related to the Jones polynomial of knots and Khovanov homology.
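As a reminder of the finite-dimensional starting point (standard material, not specific to the talk): for a Morse function on a closed $n$-manifold $M$, with $c_k$ the number of critical points of index $k$ and $b_k$ the Betti numbers of $M$, the Morse inequalities read
$$
c_k \;\ge\; b_k(M), \qquad \sum_{k=0}^{n} (-1)^k c_k \;=\; \chi(M).
$$
The generalizations in the lecture replace $M$ by an infinite-dimensional space (a loop space, or a space of connections on a three-manifold) and the critical points by geometric objects such as flat connections.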

PIMS was proud to support the 'Summer at the HUB' camp which took place in July-August 2011. Focus camps included Lego Simple Machines and Math, iPad Camp and Robo Meccano. Many thanks to Britannia Centre for providing this video.

I will discuss techniques to get upper and lower bounds for moments of zeta and L-functions. The lower bounds are unconditional and the upper bounds in general rely on the Riemann Hypothesis. In several cases of low moments, one can obtain asymptotics, and I may discuss a couple of such recent cases.

I will discuss the distribution of values of zeta and L-functions when restricted to the right of the critical line. Here the values are well understood via probabilistic models involving “random Euler products”. This modelling fails on the critical line, where the L-values have a different flavour, with Selberg’s theorem on log-normality being a representative result.
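The random Euler product idea can be sketched numerically: replace each factor $p^{-it}$ in the Euler product by an independent uniform random phase. The truncation point ($p \le 100$), the choice $\sigma = 1.1$, and all function names below are my own illustrative choices, not anything from the lecture.

```python
# Illustrative sketch (not from the lecture): modelling zeta values to the
# right of the critical line by a "random Euler product".
import cmath
import math
import random

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

PRIMES = primes_up_to(100)

def random_euler_product(sigma, rng):
    """Replace each p^{-it} by an independent uniform phase e^{i*theta_p}."""
    value = 1.0 + 0.0j
    for p in PRIMES:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        value /= 1.0 - cmath.exp(1j * theta) * p ** (-sigma)
    return value

# Sanity check: with all phases set to zero, the product is the truncated
# Euler product for zeta(sigma); at sigma = 2 it should be close to pi^2/6.
truncated = 1.0
for p in PRIMES:
    truncated /= 1.0 - p ** (-2.0)
print(truncated)  # close to zeta(2) = pi^2/6 = 1.6449...

# A few random samples at sigma = 1.1, as a stand-in for values of
# zeta(1.1 + it) at a "random height" t:
rng = random.Random(0)
samples = [abs(random_euler_product(1.1, rng)) for _ in range(5)]
```

The point of the model is that, to the right of the critical line, moments and value distributions of zeta are governed by such products of independent factors; on the critical line itself this independence heuristic breaks down.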