Abstract:

Differential delay equations serve as mathematical models of numerous real-life phenomena in which aftereffects play a crucial role. Solutions to differential delay equations of simple form can exhibit quite complicated dynamics, in contrast to similar ordinary differential equations. This talk discusses several aspects of the dynamics of solutions of scalar differential delay equations, such as global asymptotic stability, instability, existence, stability, and shape of periodic solutions, and chaotic behavior. The scalar differential delay equation

x'(t) = -x(t) + f(x(t - τ)),   τ > 0

is of significant theoretical and applied interest. It finds applications in mathematical biology and physiology, physics and laser optics, and in nonlinear boundary value problems for hyperbolic partial differential equations. It has been the subject of extensive study by many authors over the past 30 years. The case of large delay allows for an exact or asymptotic reduction of its dynamics to relevant properties of interval maps. Some of the complex dynamics of the above equation then follow from well-known facts in the theory of interval maps, such as the Sharkovsky ordering. In spite of the simple form of this equation and the numerous studies of its dynamics, it remains a source of many challenging and unanswered questions. Parts of this talk are based on joint work with A. Sharkovsky.
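As a rough illustration (our own sketch, not part of the talk), the equation above can be integrated numerically with a forward Euler scheme and a stored history; the nonlinearity f and all parameter values below are hypothetical choices:

```python
# Forward-Euler integration of x'(t) = -x(t) + f(x(t - tau)).
# The feedback function f and all parameters are illustrative choices only.

def simulate(f, tau=2.0, dt=0.01, t_end=50.0, x0=0.5):
    """Integrate x' = -x + f(x(t - tau)) with constant history x(t) = x0 on [-tau, 0]."""
    lag = int(round(tau / dt))       # delay measured in time steps
    xs = [x0] * (lag + 1)            # history buffer covering [-tau, 0]
    for _ in range(int(round(t_end / dt))):
        x_now = xs[-1]
        x_delayed = xs[-1 - lag]     # x(t - tau)
        xs.append(x_now + dt * (-x_now + f(x_delayed)))
    return xs

# Example: a unimodal (hump-shaped) feedback, as in Mackey-Glass-type models.
trajectory = simulate(lambda u: 3.5 * u / (1.0 + u ** 6))
```

For unimodal f of this kind, the interplay between the hump of f and the delay τ is exactly what produces the oscillatory and chaotic regimes the talk describes.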

Anatoli Ivanov received his PhD from the Institute of Mathematics, Ukrainian Academy of Sciences, Kiev, in 1983, under the guidance of Professor Sharkovsky. He then became a member of the Institute and worked for seven years in the Department of Dynamical Systems headed by Sharkovsky. He held a postdoctoral position in mathematics at the Ludwig-Maximilians-Universität in Munich, Germany, and a two-year visiting position at the University of Rhode Island, before becoming a faculty member at the Pennsylvania State University in 1994. His general research interests are in differential equations and dynamical systems, with specialization in differential delay and difference equations.

More links on the subject:

The Period Three Theorem by Li and Yorke, frequently cited in Chaos Theory, is a special case of Sharkovsky's theorem.
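Since the Sharkovsky ordering is easy to state algorithmically, here is a small self-contained sketch (our own illustration) that compares two periods in the ordering by factoring each as 2^k · m with m odd:

```python
# The Sharkovsky ordering:
#   3 > 5 > 7 > ... > 2*3 > 2*5 > ... > 4*3 > 4*5 > ... > 8 > 4 > 2 > 1,
# where "m > n" means that a continuous interval map with a periodic point
# of period m must also have a periodic point of period n.

def sharkovsky_key(n):
    """Sort key: smaller keys come earlier (i.e., force more periods)."""
    # Factor n = 2**k * m with m odd.
    k, m = 0, n
    while m % 2 == 0:
        k += 1
        m //= 2
    if m == 1:                  # powers of two form the tail, in decreasing order
        return (float('inf'), -k)
    return (k, m)               # first by power of two, then by odd part

def sharkovsky_precedes(a, b):
    """True if period a forces period b (a comes before b in the ordering)."""
    return sharkovsky_key(a) < sharkovsky_key(b)
```

The Li-Yorke result follows immediately: 3 has the smallest key, so a period-3 point forces points of every other period.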

Multifunctional Materials for Molecular Electronics

Jaclyn Brusso

University of Ottawa

Abstract:

Since the digital revolution, information storage and global communications have transformed how we live. Essential to the continuation of technological advancement is the need for smaller, lighter, cheaper and more efficient electronic, optical and magnetic materials. Current electronic devices are typically based on inorganic materials (e.g., Si), which require a top-down lithographic approach to fabrication. Consequently, current technologies cannot be scaled down indefinitely. Future generations of molecular electronics will therefore require the development of new materials with new functionalities, as simply employing existing materials at a reduced scale is not feasible. We aim to address these challenges through the design and synthesis of new magneto-optoelectronic materials for molecular electronics (e.g., molecular wires and fibers, organic light-emitting diodes, field-effect transistors, photovoltaic cells, magnetic switches, etc.). Since the performance of electronic devices critically depends on the extent of molecular order (in addition to other factors), the rational engineering of self-organizing molecular systems with multifunctional characteristics is one of the most attractive and active fields of current research. In pursuit of this goal, we aim to develop closed- and open-shell compounds with specific molecular architectures in order to achieve dimensional control over phase separation at the nanoscale. This strategy allows us to investigate the role structure and morphology play in the overall properties of advanced functional materials so that we may understand and, ultimately, control their self-assembly. In that regard, the preparation and characterization of organo-main-group semiconductors and radicals currently being pursued within our lab will be discussed.

Dissecting the principles governing regulatory networks at transcriptional and post-transcriptional levels using genomic and systems approaches

Sarath Janga

Indiana University - Purdue University Indianapolis

Abstract:

An important notion that is emerging in post-genomic biology is that cellular components can be visualized as a network of interactions between different molecules such as proteins, RNA, DNA, metabolites and small molecules. This has led to the application of network theory to a wide range of biological problems, including understanding the regulation of gene expression, function prediction, biomarker identification and drug discovery. While my lab has been employing these approaches in a number of different contexts, in this seminar I will focus on regulatory networks controlled by transcription factors, RNA-binding proteins and regulatory RNAs. For instance, in transcriptional networks, trans-acting elements such as TFs typically form one set of nodes, and their target genes, whose activity they control, form the other set. The links between them, which run from the trans-acting elements to their target genes and are mediated by the genes' cis-regulatory elements, form a complex directional network of interactions. In the first half of my talk, I will focus on our recent understanding of the structure of transcriptional regulatory networks in eukaryotic organisms. I will then present evidence that transcriptional regulation plays a significant role in shaping the organization of genes on chromosomes in both major domains of life, bacteria and eukarya. In the second half, I will present our efforts to systematically dissect the expression dynamics of RNA-binding proteins (RBPs) in post-transcriptional networks (formed by RBPs and their target RNAs) and how RBPs are dysregulated across human cancers. Our analysis shows that RBPs generally exhibit high protein stability, translational efficiency and protein abundance, but their encoding transcripts tend to have short half-lives.
Analysis of the RBP-RNA interaction network revealed that the number of distinct targets bound by an RBP (its connectivity) is strongly correlated with its protein stability, translational efficiency and abundance. We also note that RBPs show less noise in their expression in a population of cells, with highly connected RBPs showing significantly lower noise, indicating that highly connected RBPs are likely to be tightly regulated at the protein level, as significant changes in their expression may bring about large-scale changes in global expression levels by affecting their targets. Finally, I will present our recent results demonstrating that RBPs are consistently and significantly more highly expressed than other classes of genes (non-RBPs), as well as in comparison to well-documented groups of regulatory factors such as transcription factors (TFs), miRNAs and long non-coding RNAs (lncRNAs), across all the human tissues examined. Our analysis also revealed the existence of a unique signature of ~30 RBPs that are very highly expressed across at least two-thirds of the nine cancers profiled in this study and could be labeled a strongly upregulated (SUR) set of RBPs.

Molecular Topological Insulators

Artur Izmaylov

University of Toronto

Abstract:

A conical intersection (CI) of several electronic states is one of the most common structural motifs in molecules where the Born-Oppenheimer approximation breaks down. Besides enabling electronic transitions in nuclear dynamics, CIs also change the topology of the individual electronic surfaces, so that a nuclear wave packet confined to a particular electronic surface must change sign after encircling a locus of CI. This is a manifestation of the geometric phase associated with the parametric dependence of electronic wavefunctions on nuclear coordinates. We investigated the influence of the geometric phase associated with a CI on nuclear and electronic dynamics in molecular models. It was found that topological changes associated with the geometric phase can freeze transitions between coupled electronic states and thus give rise to an insulating state. These results are highly relevant to nuclear dynamics in molecules with Jahn-Teller distortion and open possibilities to design new molecular switches based on the topologically induced insulating state.

Tensor product weight representations of the super-Virasoro algebras

Active Metamaterials:
From ‘Trapped Rainbows’ to Stopped-Light Lasing

Ortwin Hess

The Blackett Laboratory, Imperial College London

Abstract:

Metamaterials have in the last decade emerged as a new paradigm in physics, optics, engineering and nanoscience, paving the way for a multitude of unprecedented capabilities such as invisibility cloaking, perfect imaging and trapped-rainbow stopping of light. In this lecture I will give an overview of recent key developments in the field of metamaterials, highlighting theoretical, computational and modeling concepts of active metamaterials for subwavelength imaging, invisibility cloaking and trapped-rainbow stopping of light [1,2]. Describing the complex spatio-temporal interaction between plasmons, light and quantum gain media on the nanoscale, the lecture will then chart the way towards control of nanoscale quantum emitters [3], ultrathin metasurfaces [4] and stopped-light nanolasing [5].

Professor Ortwin Hess holds the Leverhulme Chair in Metamaterials in the Blackett Laboratory/Department of Physics at Imperial College London, UK, and is Co-Director of the Centre for Plasmonics & Metamaterials at Imperial College. Professor Hess studied physics at the University of Erlangen and the Technical University of Berlin. Following pre- and postdoctoral research in Edinburgh and at the University of Marburg, he was Head of the Theoretical Quantum Electronics Group at the Institute of Technical Physics in Stuttgart, Germany, from 1995 to 2003. He completed his Habilitation at the University of Stuttgart in 1997 and became Adjunct Professor of Theoretical Physics there in 1998. Since 2001 he has also been Docent of Photonics at Tampere University of Technology in Finland. He has been Visiting Professor at Stanford University (1997-1998) and the University of Munich (2000-2001). Before moving to Imperial College in 2010, he was Professor of Physics at the University of Surrey in Guildford, UK, from 2003 to 2010.

Equation-based and agent-based models of consumer behaviour

Monica Cojocaru

University of Guelph

Abstract:

The process of decision making at the individual level has been studied extensively in operations research and the management sciences, optimization, game theory, etc. The traditional approach is concerned primarily with the study of appropriately defined static (equilibrium) states and their properties, assuming that individuals make rational decisions. For constantly evolving systems, however, this is an important, yet not sufficient, approach to describing societal behavior. This is a particularly important question if one studies innovation-driven (new products) and science-driven (new information about a product, e.g., its health benefits) problems, their complex relationship with policy making, and the ever-changing population composition. In such a setting, the factors influencing individual and/or population attitudes are evolving, so the static theory cannot apply. Part of my current research is centered around several dynamic modeling approaches to population behavior incorporating both objective and subjective decision factors. We present a time-dependent extension of the standard, static model of consumer choice for differentiated products. We use both an agent-based and a PDE computational approach, and we incorporate social network effects via a prisoner's dilemma game. In this setting, an individual's choice depends not only on their characteristics (personality traits, perceived health benefits of a product, price of the product, personal income), but also on the consumption choices of others in their social network. Of central interest is how consumers react to the introduction of a new product in the market.
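A toy illustration of the coupling between intrinsic preferences and network effects (our own sketch, not the models in the talk; the ring network, update rule and all parameters are invented for illustration):

```python
import random

# Toy agent-based sketch: agents on a ring network repeatedly choose between
# two products, trading off an intrinsic preference against conformity with
# their network neighbours. Illustrative only; not the speaker's model.

def simulate(n=50, social_weight=0.6, steps=200, seed=1):
    rng = random.Random(seed)
    pref = [rng.uniform(-1, 1) for _ in range(n)]    # intrinsic taste for product 1
    choice = [1 if p > 0 else 0 for p in pref]
    for _ in range(steps):
        i = rng.randrange(n)                         # pick a random agent
        neighbours = [choice[(i - 1) % n], choice[(i + 1) % n]]
        social = sum(1 if c == 1 else -1 for c in neighbours) / 2.0
        # Utility of product 1: weighted sum of own taste and neighbours' choices.
        utility_1 = (1 - social_weight) * pref[i] + social_weight * social
        choice[i] = 1 if utility_1 > 0 else 0
    return choice

final = simulate()
```

Raising `social_weight` makes agents imitate their neighbours more strongly, producing the consensus clusters that make new-product adoption path-dependent.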

Dr. Cojocaru completed her BSc and MSc in mathematics at the University of Bucharest (Romania) and her PhD in mathematics at Queen's University in Kingston, Canada. She held an NSERC postdoctoral fellowship at the Centre de Recherches Mathématiques (CRM) in Montreal. She then received an NSERC University Faculty Award and her current position in the Mathematics & Statistics Department at the University of Guelph, Canada. She has held several visiting positions at the CRM, the Fields Institute, Harvard, and Northwestern University. She currently holds the Canada-US Fulbright Visiting Research Chair position in the Department of Mathematics at UCSB.

Contaminated Mixtures and Fractionally-Supervised Classification

Paul McNicholas

University of Guelph, University Research Chair in Computational Statistics

Abstract:

First, an alternative to trimmed clustering is presented where contaminated mixtures are used to account for outliers. This approach is introduced, developed, and illustrated on real data. The merits of the proposed approach are set in contrast to the existing tclust approach. Second, a new classification paradigm is introduced. Fractionally-supervised classification (FSC) allows a level of supervision in between unsupervised and supervised classification, thereby presenting a much more flexible approach than semi-supervised classification. Our novel FSC approach is illustrated on real data, where its performance is compared to more traditional levels of supervision.

Paul McNicholas is Professor and University Research Chair in Computational Statistics in the Department of Mathematics & Statistics at the University of Guelph. Paul's primary research interests are in clustering and classification, with a focus on mixture model-based approaches. Recent work has focused on non-Gaussian mixture models, contaminated mixtures, bioinformatics, and sensometrics. Beyond clustering and classification, Paul engages in research on big data, cure rates, data mining, and high-performance computing.

Abstract:

We will present approaches for the theoretical study of molecular dynamics in complex environments such as superfluid clusters and nano cages. We will mostly focus on path integral simulation techniques and will consider both atomic and molecular bosons as the constituent of the superfluid. We will show that the superfluid response to the rotational dynamics of a molecule can be used to explain microwave and infrared spectroscopic experiments. We will discuss the challenges associated with different types of rigid tops. Results will be presented for the case of asymmetric top molecules trapped in parahydrogen clusters and the idea of a genuine probe of superfluid response will be introduced. Recent results for quantum rotors trapped in water cages will be discussed. A perspective on future challenges including the study of quantum dynamics in flexible polyatomic molecules will be presented.

Abstract:

I will discuss computer simulations and multiscale modelling of both soft and hard materials. Computer simulations can be very good at providing even quantitative predictions, but if not executed carefully, they can also give results that are spectacular but entirely unphysical. I will discuss such cases with examples involving hard, soft and biological matter. I will also discuss some recent developments and approaches that make simulations even more reliable.

The Quantitative Characterization of Finite Simple Groups

New Invariants for Incidence Structures

Vladimir Tonchev

Michigan Technological University

Abstract:

New isomorphism invariants for incidence structures based on a connection between trace codes and Galois geometry are discussed. Using these invariants, a new Hamada type characterization of the classical finite geometry designs is proved.

Abstract:

The drastic complexity growth of modern System-on-Chip (SoC) integrated circuits has resulted in numerous architectural changes in the Design-for-Test (DFT) and Design-for-Debug (DFD) areas, collectively known as DFx. This presentation will start with an insight into the DFx area, followed by an analysis of various DFx techniques and their architectural evolution. These quantitative changes in modern multi-processor SoC systems have also introduced new challenges to the verification of the DFx logic itself. We will present an architecture- and feature-independent object-oriented verification environment specifically aimed at DFx verification. The proposed environment is based on a hierarchical multi-agent approach in which each level represents a different level of abstraction. Test patterns written in this environment correspond to the highest level of abstraction and are architecture-independent, a key advantage in a rapid SoC development process. The lower levels are almost entirely generated automatically by parsing the functional specification. A detailed description of the proposed verification environment will be presented and discussed.

Data Mining and Rare Event Targeting Performance Measure

Spin Glasses and Computational Complexity

Daniel Gottesman

Perimeter Institute for Theoretical Physics

Abstract:

A system of spins with complicated interactions between them can have many possible configurations. Many configurations will be local minima of the energy, and to get from one local minimum to another requires changing the state of very many spins. A system like this is called a spin glass, and at low temperatures tends to get caught for very long times at a local minimum of energy, rather than reaching its true ground state. Indeed, in many cases, finding the ground state energy of a spin glass is a computationally hard problem, too hard to be solved on a classical computer or even a quantum computer in any reasonable amount of time. Which types of interactions give us computationally hard problems and spin glasses? I will survey what is known as we close in on finding the simplest complex spin systems.
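As an illustration of the energy landscape described above (our own sketch, not from the talk), a brute-force enumeration of a small random ±1 spin glass counts its single-spin-flip local minima; the 2^n cost of this enumeration is precisely the obstacle for large systems:

```python
import itertools
import random

# Brute-force study of a small random +/-1 spin glass: find the ground-state
# energy and count single-spin-flip local minima. Exhaustive search over all
# 2**n states is feasible only for tiny n, which is the point.

def energy(spins, J):
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def analyse(n=8, seed=0):
    rng = random.Random(seed)
    J = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
    states = list(itertools.product([-1, 1], repeat=n))
    energies = {s: energy(s, J) for s in states}
    ground = min(energies.values())

    def is_local_min(s):
        e = energies[s]
        for i in range(n):
            flipped = s[:i] + (-s[i],) + s[i + 1:]
            if energies[flipped] < e:
                return False
        return True

    minima = sum(1 for s in states if is_local_min(s))
    return ground, minima
```

By the global spin-flip symmetry s → -s, the energy is unchanged and local minima come in pairs, so the count is always even and at least 2 (the two ground states).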

Closed-loop supply chain management, sustainable supply chain

Mean-field Modelling of the Electroencephalogram

Lennaert van Veen

University of Ontario Institute of Technology

Abstract:

Models of cortical dynamics come in two main families: network models and mean-field models. The former describe many interacting neurons and their connections, while the latter describe electrical potentials, generated collectively by hundreds of thousands of neurons, as continuous in space and time. These potentials are directly related to the signal measured by the electroencephalogram (EEG). While the simulation and analysis of a cortical sheet larger than a few square centimetres with network models is intractable, mean-field models, sometimes called neural mass models, can be analysed as spatially extended dynamical systems. Such dynamical systems describe the cortex as an excitable medium and take the form of nonlinear partial differential equations. I will discuss some of the history of mean-field modelling and introduce the approach of David Liley. I will summarise what we know about his model, going over some of the analysis of its reductions to local or spatially homogeneous dynamics, and paying attention to a notorious problem in physiological modelling: the large number of parameters, each with a large margin of uncertainty. Finally, I will discuss my current work with Kevin Green on the simulation and analysis of the full-fledged model, which allows for "brain wave" solutions and spatio-temporal chaos with time and length scales typical of brain function.

Abstract:

Systems of differential-algebraic equations (DAEs) arise in many engineering and scientific disciplines. The index of a DAE is a measure of how difficult it is to solve compared to an ordinary differential equation (ODE). Problems of index 3 and higher are considered high index and are very difficult to solve numerically.

We give an introduction to DAEs and how they differ from ODEs. We then give an overview of Pryce's structural analysis (SA) theory and its realization in the DAETS solver (Nedialkov and Pryce), a C++ package for solving high-index DAEs. This solver can deal with fully implicit DAEs of any index and arbitrary order.

We also present DAESA (Nedialkov, Pryce, and Tan), a standalone Matlab tool for the SA of DAEs. It provides convenient facilities for the rapid investigation of DAE structures. In particular, it reveals the subsystems of a DAE at a finer resolution than many other methods.

Finally, we outline current work on algorithms for analyzing large systems of DAEs by exploiting their sparsity structure.

Fluidics and Energy: Photobioreactors and Fluids Underground

David Sinton

Director, Centre for Sustainable Energy, University of Toronto

Abstract:

This talk will describe my group's research efforts in small scale fluidics for energy applications. There is a disconnect between the scale of our tools and the scale of global energy challenges. Our work in bridging this gap is informed by two examples of small-to-large scale energy impact in nature: (1) Photosynthesis - interactions of light and fluids in nanoscale membranes constitute the largest energy process on earth; and (2) Fluids underground - microporous media store the fossil fuel products of photosynthesis and carbon. Related to photosynthesis, an optofluidic photobioreactor approach is described that couples solar radiation into photosynthetic bacteria for the conversion of carbon dioxide to biofuel. The approach is enabled by combining fluids and light on the scale of the bacterium using evanescent light confined to a waveguide surface. Related to fluids underground, a chip-based approach is presented for the study of carbon dioxide transport and reactivity under reservoir temperatures and pressures. This work is motivated by the need to manage carbon emissions while developing and scaling renewable energy technologies - both areas where the fluidics community can contribute.

David Sinton is the Director of the Centre for Sustainable Energy at the University of Toronto and Associate Professor in the Department of Mechanical and Industrial Engineering. Prior to joining the U. of T., Dr. Sinton was an Associate Professor and Canada Research Chair at the University of Victoria (2003-2011) and Visiting Associate Professor at Cornell University (2009-2010). He received the 2006 Canadian Society of Mechanical Engineering I. W. Smith Award and became a Fellow of the Canadian Society of Mechanical Engineering, FCSME, in 2012.

What is the S-acts Theory of Monoids?

Solving the Pell Equation

Hugh Williams

University of Calgary

Abstract:

Let D be a positive nonsquare integer. The misnamed Pell equation is an expression of the form T^2 - D U^2 = 1, where the values of T and U are constrained to be integral. This very simple Diophantine equation seems to have been known to mathematicians for over 2000 years. It is well known that for any D it has an infinitude of solutions, which can be easily expressed in terms of a fundamental solution t, u. The problem of finding t and u has been studied since the early 7th century, when the Indian mathematician Brahmagupta discovered an ad hoc method of doing this. Unfortunately, this method and its more deterministic successors cannot be used when D becomes very large. In this case we must evaluate the regulator R of the associated real quadratic number field Q(√D). The problem of computing R can be very difficult, particularly when the field discriminant d becomes large. (Clearly, the exact value of the regulator can never be computed because it is a transcendental number; we are content with an approximate value that is within 1 of the actual value.) The best current method for computing R is Buchmann's subexponential method. Unfortunately, the correctness of the value of R produced by this technique depends on the truth of a generalized Riemann hypothesis. The best unconditional algorithm (the value of R is unconditional, not the running time) for computing the regulator of a real quadratic field is Lenstra's Las Vegas algorithm. In this mainly expository talk we describe a technique for rigorously verifying the regulator produced by the subexponential algorithm. We then show how we can record the fundamental solution by making use of a compact representation of it.
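For moderate D, the fundamental solution can be found by the classical continued-fraction expansion of √D. The following sketch implements that standard textbook algorithm (not the regulator-based methods of the talk); its output for large D also illustrates why compact representations are needed, since t and u grow enormously:

```python
import math

# Fundamental solution of the Pell equation T^2 - D*U^2 = 1 via the
# periodic continued-fraction expansion of sqrt(D). Practical only for
# moderate D; t and u can have exponentially many digits in general.

def pell(D):
    a0 = math.isqrt(D)
    if a0 * a0 == D:
        raise ValueError("D must be a nonsquare")
    # State (m, d, a) of the continued-fraction recurrence for sqrt(D).
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0          # convergent numerators
    q_prev, q = 0, 1           # convergent denominators
    while p * p - D * q * q != 1:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
    return p, q
```

For example, D = 61 is the classically notorious small case: its fundamental solution is already (1766319049, 226153980).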

Boundary Patrolling by Mobile Agents

Evangelos Kranakis

Carleton University

Abstract:

We study a problem concerning the traversal of a rectifiable curve by $k \geq 1$ mobile robots so as to minimize the idle time (i.e., the duration of time any point on the curve is left unvisited by a mobile agent). At any time during the traversal the speed of the robots cannot exceed a maximum value (set equal to $1$ for all robots). The rectifiable curve can be either closed (e.g., a cycle) or open (e.g., a segment) and consists of alternating contiguous subsegments of vital intervals (that must be traversed) and neutral intervals (that do not have to be traversed). Given such a rectifiable curve, our goal is to give algorithms describing the motion of the robots along the curve so that the idle time is minimized. Despite its apparent simplicity, designing such algorithms and proving their correctness turns out to be quite a challenging problem. We give optimal algorithms for solving this problem in the case where the rectifiable curve is either a (straight line) segment or a cycle.
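For intuition, consider the simplest special case in which the entire segment is vital (our own sketch, not the paper's general algorithms): under the classical partition strategy, each of the k unit-speed robots sweeps its own subsegment of length L/k back and forth, giving worst-case idle time 2L/k:

```python
# Idle time of the partition strategy on an all-vital segment of length L
# patrolled by k unit-speed robots, each sweeping a subsegment of length
# s = L/k as a triangle wave. A point at offset a inside a subsegment is
# revisited at alternating gaps 2*(s - a) and 2*a, so the worst case over
# all points is 2*s = 2*L/k, attained at the subsegment endpoints.

def partition_idle_time(L, k, samples=1000):
    s = L / k                       # subsegment length per robot
    worst = 0.0
    for j in range(samples + 1):
        a = s * j / samples         # offset of a sample point in a subsegment
        worst = max(worst, 2 * (s - a), 2 * a)
    return worst
```

The computed worst-case gap matches the closed-form value 2L/k, which shows how adding robots reduces idle time linearly in this simple setting.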

About the Speaker: Dr. Kranakis has published extensively in the analysis of algorithms, bioinformatics, communication and data (ad hoc and wireless) networks, computational and combinatorial geometry, distributed computing, and network security. He is the author of Primality and Cryptography (Wiley-Teubner series in Computer Science, 1986), and co-author of Boolean Functions and Computation Models with Peter Clote (Springer Verlag Texts in Theoretical Computer Science, 2002) and Principles of Ad Hoc Networking with Michel Barbeau (Wiley, 2007). He is currently CNS (Communication, Networks, and Security) Theme Leader in the MITACS NCE (Networks of Centers of Excellence).

Quantum Computation and Quantum Circuit Model

Lianao Wu

Universidad del Pais Vasco and Ikerbasque, Spain

Abstract:

The ultimate information processor in modern quantum technology is a quantum computer: a computer that uses quantum bits (qubits) and quantum circuitry to perform calculations. A quantum computer would be able to solve certain problems that are believed to be intractable for classical computers. In the coming decades, quantum computers may move out of research labs and into practical applications.

This talk will introduce quantum computation and a model of computation, the quantum circuit model, which is the counterpart of the classical circuit model.
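As a minimal illustration of the circuit model (our own sketch; the convention that qubit 0 is the least significant bit is our choice), a tiny state-vector simulator applies a Hadamard and a CNOT to |00> to prepare a Bell state:

```python
import math

# Minimal state-vector simulation of a two-gate quantum circuit:
# H on qubit 0, then CNOT(control=0, target=1), acting on |00>,
# produces the Bell state (|00> + |11>)/sqrt(2).

def apply_gate(state, gate, target, n):
    """Apply a 1-qubit gate (2x2 list of amplitudes) to qubit `target`."""
    new = [0.0] * (1 << n)
    for idx, amp in enumerate(state):
        bit = (idx >> target) & 1
        for out_bit in (0, 1):
            new_idx = (idx & ~(1 << target)) | (out_bit << target)
            new[new_idx] += gate[out_bit][bit] * amp
    return new

def apply_cnot(state, control, target, n):
    """Flip qubit `target` on the basis states where `control` is 1."""
    new = [0.0] * (1 << n)
    for idx, amp in enumerate(state):
        if (idx >> control) & 1:
            idx ^= 1 << target
        new[idx] += amp
    return new

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1.0, 0.0, 0.0, 0.0]            # |00>
state = apply_gate(state, H, 0, 2)      # Hadamard on qubit 0
state = apply_cnot(state, 0, 1, 2)      # CNOT: control qubit 0, target qubit 1
```

The circuit model consists of exactly such sequences of elementary gates; the exponential size of the state vector (2^n amplitudes) is what a classical simulation cannot escape.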

Dr. Lianao Wu is an Ikerbasque Research Professor in the Department of Theoretical Physics at the Faculty of Science and Technology of the UPV-EHU in Leioa, Spain. He has 25 years of research experience at scientific centres in Canada, China, Japan, Europe, and the United States. His main research lines are quantum control, quantum information processing, and quantum devices.

New Features in Maple 16

Juergen Gerhard

Maplesoft

Abstract:

The presentation will give a brief overview of the new features in Maple 16, including Drag-to-Solve, fast polynomial arithmetic, high-impact visualization, a smart 2D plot view, rubber-band zooming, 100 new Math Apps, and iPad support, to name only a few.

North-South convergence in the presence of global warming

John Roemer

Yale University

Abstract:

How should rights to emit carbon into the atmosphere be allocated between the global North and the global South? We postulate a two-region world comprising North (US) and South (China). We show that global CO2 emissions follow a conservative path that leads to the stabilization of atmospheric carbon concentrations at 450 ppm. North and South converge in 2075 to a path of sustained growth at 1% per year (28.2% per generation), upon which welfare per capita is equalized globally. During the transition to the steady state, North grows at 1% per year while South grows markedly faster. The transition paths require a drastic reduction of the share of emissions allocated to North, large investments in knowledge in both North and South, and large investments in education in South. Surprisingly, to sustain North's growth rate, some output must be transferred from South to North during the transition. Although subject to caveats, our results support a degree of optimism by providing evidence of the possibility of tackling climate change in a way that is fair both across generations and across regions while allowing for positive rates of human development.
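A quick check of the growth-rate conversion quoted above, under our assumption of a 25-year generation (the paper defines its own generation length):

```python
# Compounding 1% annual growth over an assumed 25-year generation:
# (1.01)**25 - 1 is approximately 0.282, i.e. 28.2% per generation.

annual_rate = 0.01
years_per_generation = 25          # assumption for this check
per_generation = (1 + annual_rate) ** years_per_generation - 1
```

This reproduces the 28.2%-per-generation figure stated in the abstract.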

John Roemer is the Elizabeth S. and A. Varick Stout Professor of Political Science and Economics at Yale University. His work concerns distributive justice, political economy, and the relationship between them. Recent books are Racism, Xenophobia, and Redistribution (Harvard UP, 2007), Democracy, education and equality (Cambridge UP, 2006), Political Competition (Harvard UP, 2001), Equality of Opportunity (Harvard UP, 1998), Theories of distributive justice (Harvard UP, 1996), and A future for socialism (Harvard UP, 1994). His current interests include distributive justice in the presence of global warming. He is a fellow of the Econometric Society, the American Academy of Arts and Sciences, a corresponding fellow of the British Academy, and a past fellow of the Guggenheim and Russell Sage Foundations.

Kantian equilibrium, externalities, and social ethos

John Roemer

Yale University

Abstract:

Although evidence accrues in biology, anthropology and experimental economics that homo sapiens is a cooperative species, the reigning assumption in economic theory is that individuals optimize in an autarkic manner (as in Nash and Walrasian equilibrium). I here postulate an interdependent kind of optimizing behavior, called Kantian. It is shown that in simple macroeconomic models, when there are negative externalities (such as congestion effects from use of a commonly owned resource) or positive externalities (such as a social ethos reflected in individuals' preferences), Kantian equilibria dominate Nash-Walras equilibria in terms of efficiency. While economists schooled in Nash equilibrium may view Kantian behavior as utopian, there is some, perhaps much, evidence that it exists. If cultures evolve through group selection, the hypothesis that Kantian behavior is more prevalent than we may think is supported by the efficiency results demonstrated here.
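To make the efficiency comparison concrete, here is a toy symmetric commons game (our illustrative payoff, not the talk's model) in which the Kantian equilibrium dominates the Nash equilibrium:

```python
# Toy commons game: n identical players choose effort x >= 0 and each
# receives x * (1 - X), where X is total effort, so every player's effort
# congests the others (a negative externality). Illustrative only.

def nash_effort(n):
    # Nash: maximize x * (1 - x - X_others) over own x, others fixed, giving
    # x = (1 - X_others) / 2; the symmetric fixed point is x = 1 / (n + 1).
    return 1.0 / (n + 1)

def kantian_effort(n):
    # Kantian: consider only a common rescaling r of everyone's effort and
    # require r = 1 to be optimal: d/dr [r*x * (1 - n*r*x)] = 0 at r = 1
    # gives x = 1 / (2n), which is also the efficient symmetric effort.
    return 1.0 / (2 * n)

def payoff(x, n):
    """Per-player payoff when all n players use effort x."""
    return x * (1 - n * x)
```

For n = 3, the Kantian payoff is 1/12 per player versus 1/16 at the Nash equilibrium: autarkic optimization over-exploits the commons, exactly the efficiency ranking stated in the abstract.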


Abstract:

Porphyrins, cyclic tetrapyrroles, are essential for almost all life. This is due in part to the fact that they can bind a range of divalent metal ions (e.g., Fe(II), Mg(II), Ni(II), Co(II)) and can have different oxidation states and ring substituents. Furthermore, the rings can be opened to form a range of linear tetrapyrroles. The resulting porphyrins and their derivatives play key roles as enzyme prosthetic groups and in ligand transport, electron transfer and light harvesting. The biosynthesis of porphyrins is a multi-step process requiring approximately 10 enzymes and can involve, for example, flavins, S-adenosyl-methionine and tRNA. Unfortunately, not all of the enzymatic mechanisms involved are well known or understood. We have used a range of computational methods, including docking, molecular dynamics, density functional theory and quantum mechanics/molecular mechanics (QM/MM) methods, to investigate the mechanisms of several key enzymes directly involved, or closely related to those involved, in porphyrin biosynthesis. In particular, some results from our recent computational studies on uroporphyrinogen III (URO-III) decarboxylase (UROD), the enzyme that catalyses the first branching point in porphyrin biosynthesis, and on aminoacyl-tRNA synthetases will be presented.

Designing light-switchable peptides and proteins

A Trilogy on a Point

Kenneth R. Meyer

University of Cincinnati

Abstract:

The point is the Lagrange equilibrium point L_4 of the restricted three-body problem for mass ratio near the critical mass ratio of Routh.
The trilogy is three papers Dr. Meyer wrote with coauthors in 1971, 2003 and 2011. The first is the Hamiltonian-Hopf theorem, which deals with the bifurcation of periodic solutions. The second treats the evolution of invariant manifolds, which leads to chaotic behavior. The third deals with the stability of the point in a limiting case. In his talk, Dr. Meyer will summarize the first two results and then he will reveal the geometry, which will obviate the proof of the third result.

Kenneth Meyer has been a leading researcher in dynamical systems and celestial mechanics since the 1960s. He received master's and PhD degrees in mathematics from the University of Cincinnati (UC) in 1962 and 1964. He returned to UC as a full professor in 1972 and remained there, serving as department head for three years and receiving the honorific Charles Phelps Taft professorship in 1984. He retired in 2003, and continues to do research as an emeritus professor at UC. He has authored over 100 publications and has served on the editorial boards of several journals. His work on dynamical systems permanently altered the direction of research in this area.

The Quasi-Morphic Property of Group

Engineering Modelling and Simulation - The Symbolic Approach

Bonnie Yue

Application Engineer, Maplesoft

Abstract:

Several key industries, including the automotive and aerospace sectors, are questioning the viability of the traditional engineering modeling and simulation toolchain for emerging design challenges. Many are concluding that meeting these challenges requires a rethinking of how we create and interact with engineering models. This seminar presents a case for a new approach that exploits some of the well-known advantages of symbolic computation in a way that accelerates the development of complex dynamic models and produces models of higher fidelity and real-time performance. The software context for this presentation will be the Maplesoft product line. Best known for its math product Maple, Maplesoft is emerging as a key player in engineering modeling software with the release of MapleSim, a new modeling environment for high-performance, multi-domain modeling of physical systems. This seminar will provide an overview of the product MapleSim and its conceptual foundation. Demonstration examples from different industries will illustrate its functionality and potential.

Reversible Group Rings Over Commutative Rings

The Curved N-Body Problem and the Shape of Physical Space

Florin Diacu

University of Victoria, Department of Mathematics & Statistics

Abstract:

In 1844, Gauss allegedly tested the nature of physical space. He computed the sum S of the angles of a triangle formed by three mountain peaks, hoping to learn whether space is hyperbolic (S < 180), Euclidean (S = 180), or elliptic (S > 180). But, due to the unavoidable measurement errors, his experiments proved inconclusive. Since we cannot reach distant stars to measure the angles of large triangles, Gauss's method is of no practical use in the cosmos either.

In some technical papers written within the past 3 years, we proposed a new method for testing the shape of space. Our avenue is the curved n-body problem, which extends Newtonian gravitation to spaces of constant curvature. For n = 2, the problem was initiated by Bolyai and Lobachevsky and studied ever since, but it had not been generalized to n > 2 until now. The study of the equations of motion leads to surprising results. We discovered that each of the hyperbolic, Euclidean, and elliptic environments exhibits characteristic orbits. So, under the reasonable assumption that space has constant curvature, the shape of the universe could be determined, at least in principle, through astronomical observations of celestial motions.

But what are these specific orbits? Are they stable? Can we observe them and decide if the universe is hyperbolic, Euclidean, or elliptic? Our talk, whose understanding requires only sophomore level mathematics, will answer these questions.

Professor Diacu obtained his doctoral degree at the University of Heidelberg in Germany with a thesis in celestial mechanics. Since 1991, he has been a professor at the University of Victoria in British Columbia, where he was the director of the Pacific Institute for the Mathematical Sciences (PIMS) between 1999 and 2003. Diacu's research is focused on qualitative aspects of the n-body problem of celestial mechanics. Diacu also obtained some important results on Saari's conjecture, which states that every solution of the n-body problem with constant moment of inertia is a relative equilibrium. Apart from his mathematics research, Florin Diacu is also an author of several successful books.

Time Varying Jump Risk Premium in Stock Market Return

An Example of Large Scale Online Ad Auction Analysis with Dremel

Oleg Golubitsky

Google, Inc.

Abstract:

How does one analyze a model generating millions of predictions every day? In this talk we will discuss online ad auctions. These auctions rely on probabilistic predictions of click-through and conversion rates. We will consider a simple metric used to analyze models for predicting these rates and show how it can be effectively computed from the vast click and conversion data with Dremel, a query system designed at Google for log analysis. We will take a quick look under the hood of Dremel to see what makes it scalable to multi-terabyte datasets.
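The metric itself is not specified in the abstract; as one plausible illustration, a calibration ratio aggregated per data slice mirrors the kind of grouped aggregation such a query system performs over logs (the records, slice keys, and function name below are hypothetical):

```python
from collections import defaultdict

def calibration_by_slice(records):
    """Calibration ratio per slice: sum of predicted click probabilities
    divided by the number of observed clicks. Values near 1.0 indicate
    a well-calibrated model on that slice."""
    predicted = defaultdict(float)
    observed = defaultdict(int)
    for slice_key, p_click, clicked in records:
        predicted[slice_key] += p_click
        observed[slice_key] += clicked
    return {k: predicted[k] / observed[k] for k in predicted if observed[k] > 0}

# Hypothetical log rows: (slice, predicted CTR, whether a click occurred).
log = [("mobile", 0.10, 1), ("mobile", 0.30, 0),
       ("desktop", 0.50, 1), ("desktop", 0.40, 1), ("desktop", 0.10, 0)]
ratios = calibration_by_slice(log)
# mobile: 0.4 predicted clicks vs. 1 observed, so the model under-predicts there.
```

In a Dremel-like system the same computation would be a grouped SUM over columnar log storage; the point of the sketch is only the shape of the aggregation.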

Quantum Cryptography

Michele Mosca

University of Waterloo, Institute for Quantum Computing

Abstract:

Cryptography is the art of using mathematical tools to provide information security objectives. The security of the mathematical tools often relies on unproven assumptions about the infeasibility of solving some mathematical problems. Other methods rely on robust information-theoretic tools. Since the world is quantum mechanical, the security of all these tools must be re-assessed in the context of quantum information processing.
One very dramatic change was the fact (discovered by Peter Shor) that factoring and finding discrete logarithms are easy on a quantum computer. This drove researchers around the world to seriously study the question of whether one can realistically build a quantum computer, and impressive progress has been made in harnessing quantum mechanical systems for information processing.
Even earlier, it was known (through the work of Wiesner, and of Bennett and Brassard) that the Uncertainty Principle allows us to achieve information theoretically secure cryptographic objectives – their security is not based on computation assumptions, but rather on fundamental features of quantum mechanics. Again, great progress has been made on implementations, particularly quantum key distribution.
In this talk, I will discuss the opportunities and challenges of deploying quantum key distribution in the next generation quantum-secure infrastructure (joint work with Ioannou, Lutkenhaus and Stebila). I will also discuss a new quantum cryptographic primitive for achieving a public-key identification scheme (joint work with Ioannou).

Professor Michele Mosca is a co-founder and the Deputy Director of the Institute for Quantum Computing, a founding member of the Perimeter Institute for Theoretical Physics and a faculty member in the Combinatorics & Optimization department of the University of Waterloo. He has made major contributions in the areas of quantum algorithms, techniques for studying the limitations of quantum computers, quantum self-testing and private quantum channels. He has won numerous academic awards including the Commonwealth Scholarship, the Premier's Research Excellence Award and a Canada Research Chair in Quantum Computation. He has been a Canadian Institute for Advanced Research (CIFAR) Fellow since 2010.

Topological Fluid Dynamics

Boris Khesin

University of Toronto

Abstract:

Topological fluid dynamics is a young mathematical discipline that studies topological features of flows with complicated trajectories and their applications to fluid motions, and develops group-theoretic and geometric points of view on various problems of hydrodynamical origin. Its main ideas can be traced back to the seminal 1966 paper by V. Arnold on the Euler equation for an ideal fluid as the geodesic equation on the group of volume-preserving diffeomorphisms. In the lecture we survey various results related to Arnold's program in hydrodynamics.

Boris Khesin received his PhD degree from Moscow State University, Russia, in 1990, under the guidance of Professor V. Arnold. He taught for several years at the University of California at Berkeley and at Yale University before coming to the University of Toronto in 1996. His awards include the Sloan Research Fellowship, the André-Aisenstadt Mathematics Prize, and the McLean and PREA Awards.
His areas of research are infinite-dimensional Lie groups, topological hydrodynamics, integrable systems and Poisson geometry.

Successes and open problems on the road to quantum gravity

Lee Smolin

Perimeter Institute for Theoretical Physics

Abstract:

I survey the results of our search for quantum gravity, both theoretical and experimental, in order to emphasize both the unexpected successes and persistent open issues. I will argue that these are explained by a wrong assumption concerning the treatment of time.

Dr. Smolin is a theoretical physicist who works mainly on the problem of quantum gravity. He was a founder of the approach called loop quantum gravity. He is also known for proposing the notion of the landscape of theories, based on his application of Darwinian methods to cosmology. He has also contributed to the foundations of quantum mechanics, elementary particle physics and theoretical biology. He also has a strong interest in philosophy, and his three books, Life of the Cosmos, Three Roads to Quantum Gravity and The Trouble with Physics, are in part philosophical explorations of issues raised by contemporary physics.

Polymatroids

Jack Edmonds

Abstract:

The talk will sketch an introduction to P, NP, coNP, LP duality, matroids, and some other foundations of combinatorial optimization theory. A predicate, p(x), is a statement with variable input x. It is said to be in NP when, for any x such that p(x) is true, there is, relative to the bit-size of x, an easy proof that p(x) is true. It is said to be in coNP when not(p(x)) is in NP. It is said to be in P when there is an easy (i.e., polynomially bounded time) algorithm for deciding whether or not p(x) is true. Of course P implies NP and coNP. Fifty years ago I speculated the converse.

Polymatroids are a linear programming construction of abstract matroids. We use them to describe large classes of concrete predicates (i.e., "problems") which turn out to be in NP, in coNP, and indeed in P.

Failures in trying to place the NP “traveling salesman predicate” in coNP, and successes in placing some closely related polymatroidal predicates in both NP and coNP and then in P, prompted me to conjecture that (1) the NP traveling salesman predicate is not in P, and (2) all predicates in both NP and coNP are in P. The conjectures have become popular, and are both used as practical axioms. I might as well conjecture that the conjectures have no proofs.

“The classes of problems which are respectively known and not known to have good algorithms are of great theoretical interest. … I conjecture that there is no good algorithm for the traveling salesman problem. My reasons are the same as for any mathematical conjecture: (1) It is a legitimate mathematical possibility, and (2) I do not know.” - Jack Edmonds, 1966, quoted by Christos Papadimitriou in his 1994 book Computational Complexity.

"Pioneered by the work of Jack Edmonds, polyhedral combinatorics has proved to be a most powerful, coherent and unifying tool throughout combinatorial optimization. …Edmonds conjectured that there is no polynomial-time algorithm for the traveling salesman problem. In language that was developed later, this is equivalent to NP≠P."- Lex Schrijver in his 2003 book Combinatorial Optimization.

The number of matrices and a random matrix with prescribed row and column sums and 0-1 entries

Alexander Barvinok

University of Michigan

Abstract:

Let us consider the set of 0-1 matrices with prescribed row and column sums as a finite probability space with the uniform measure. I will present an asymptotic formula for the number of such matrices and also describe what a random matrix is likely to look like. We'll also discuss what a random graph with a prescribed degree sequence looks like and how many such graphs there are.

This talk is partially based on joint work with J.A. Hartigan (Yale).
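For very small instances, the quantity in question can be checked by direct enumeration (a brute-force sketch for intuition only, not the asymptotic formula of the talk):

```python
from itertools import product

def count_01_matrices(row_sums, col_sums):
    """Count 0-1 matrices with prescribed row and column sums by brute-force
    enumeration; feasible only for tiny sizes, unlike an asymptotic formula."""
    n = len(col_sums)
    # For each row, list the 0-1 vectors with the required row sum.
    candidates = [[r for r in product((0, 1), repeat=n) if sum(r) == s]
                  for s in row_sums]
    count = 0
    for rows in product(*candidates):
        # zip(*rows) yields the columns; check each column sum.
        if all(sum(col) == c for col, c in zip(zip(*rows), col_sums)):
            count += 1
    return count

# 2x2 matrices with all row and column sums 1: exactly the two permutation matrices.
n_small = count_01_matrices([1, 1], [1, 1])   # -> 2
```

Enumerating and sampling such matrices uniformly this way quickly becomes infeasible, which is what motivates asymptotic counting formulas in the first place.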

QIP=PSPACE

John Watrous

University of Waterloo, Institute for Quantum Computing

Abstract:

The interactive proof system model of computation is a cornerstone of computational complexity theory, and its quantum computational variant has been studied in quantum complexity theory for the past decade. In this talk I will discuss an exact characterization of the expressive power of quantum interactive proof systems that I recently proved in collaboration with Rahul Jain, Zhengfeng Ji, and Sarvagya Upadhyay. The characterization states that the collection of computational problems having quantum interactive proof systems consists precisely of those problems solvable with an ordinary classical computer using a polynomial amount of memory (or QIP = PSPACE in complexity-theoretic terminology). This characterization implies the striking fact that quantum computing does not provide an increase in computational power over classical computing in the context of interactive proof systems.

Dr. Watrous' research has helped establish Canada as a world leader in the area of quantum computing and quantum information theory. He is an Associate Professor of Computer Science and Member of the Institute for Quantum Computing at the University of Waterloo. Dr. Watrous' research focuses on the theory of quantum information science and its applications to algorithms, complexity theory and cryptography. He has recently solved a decade-old problem in complexity, proving QIP=PSPACE.

Abstract:

We investigate integrable matrix ODEs and PDEs and the corresponding algebraic structures. In particular, we study associative multiplications in semi-simple associative algebras over C compatible with the usual one or, in other words, linear deformations of semi-simple associative algebras over C. It turns out that these deformations are in one-to-one correspondence with representations of certain algebraic structures, which we call M-structures in the matrix case and PM-structures in the case of direct sums of several matrix algebras. The classification of these PM-structures naturally leads to affine Dynkin diagrams of A, D, E-type.

The Critical Nodes Detection Problem in Networks

Panos M. Pardalos

Distinguished Professor and University of Florida Research Foundation Professor; Director, Center for Applied Optimization, Industrial and Systems Engineering Department

Abstract:

We study two problems that involve detecting critical nodes in networks. In the first problem, we seek a set of vertices of a specified cardinality whose deletion results in the maximum number of disconnected components. In an alternate version of the problem, we specify the desired amount of disconnectivity and try to minimize the number of vertices that must be deleted to achieve it. This is referred to as the critical node detection problem, and it finds applications in supply chain networks, epidemic control, identification of influential individuals in social networks, and telecommunication networks. In a supply chain network, it is important to ensure connectivity between supply and demand nodes. These nodes could be secured or made more resilient in order to retain connectivity in the network. In this talk, we review recent work in this area and provide formulations based on integer linear programming. We also discuss new complexity results and present heuristic techniques to solve the problems.

TRACDS: Temporal Relationship Among Clusters for Data Streams

Margaret Dunham

Southern Methodist University, Dallas, USA

Abstract:

In this talk we propose a new extension to clustering data streams called Temporal Relationship Among Clusters for Data Streams (TRACDS). This is not a new clustering algorithm, but rather a way to capture the temporal relationships among clusters that are inherent in the ordering of observations in the data stream. We propose to capture this ordering relationship among the clusters by overlaying clusters created by any data stream clustering algorithm with a Markov Chain (MC). The states in the Markov Chain represent the clusters and the transitions are the relationships between clusters. The TRACDS framework defines clustering/MC operations that are triggered by the underlying stream clustering algorithm. TRACDS is general enough to be built on top of any clustering algorithm. We describe and illustrate applications of TRACDS to outlier detection and prediction of future events in the stream.
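A minimal sketch of the overlay idea described above (class and method names are ours for illustration, not the TRACDS API):

```python
from collections import defaultdict

class TracdsSketch:
    """Toy version of the TRACDS idea: overlay a Markov chain on the cluster
    assignments produced by any stream clustering algorithm. States are
    clusters; transition counts capture the temporal order of arrivals."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, cluster_id):
        """Record the transition from the previously seen cluster."""
        if self.prev is not None:
            self.counts[self.prev][cluster_id] += 1
        self.prev = cluster_id

    def transition_prob(self, a, b):
        """Empirical probability of moving from cluster a to cluster b."""
        total = sum(self.counts[a].values())
        return self.counts[a][b] / total if total else 0.0

mc = TracdsSketch()
for c in ["A", "B", "A", "B", "A", "C"]:   # cluster IDs from any stream clusterer
    mc.observe(c)
# From A the stream moved to B twice and to C once.
```

A rare transition (a low `transition_prob` value) is then a natural flag for outlier detection, and the row of the current state gives a prediction of the next cluster.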

Consequences of Temporal Buybacks in the Book Industry

Yigal Gerchak

Department of Industrial Engineering, Tel Aviv University

Abstract:

Book publishers in many countries allow booksellers to return all unsold copies of a title for a full refund, although the costs of return (shipping, handling) are usually assumed by the bookseller. But, to qualify, returns (buybacks) need to take place within a certain time window from the delivery to the store. We construct a model for a given title with uncertain retail demand in which, at some point in time, the retailer can return unsold copies to the publisher (who can sell them in a secondary market), or order more copies. The time window for returns divides the selling horizon into two periods, with the possibility of returns or replenishment at the end of the first period. Most of the system parameters (e.g., second period retail price, secondary market price) depend on the relative lengths of the two periods, which is a key strategic variable in our model. We characterize the optimal replenishment policy for this temporal newsvendor problem for both centralized and decentralized systems. Moreover, our numerical comparison of the optimal length of the return time window for the two systems demonstrates that they differ both in terms of their values and behaviors. Our numerical study also reveals the impact of different system parameters on the optimal return window length, and provides managerial insights as to how that length should be tailored based on market / product / operating characteristics. In addition, we briefly consider how to coordinate the supply chain, which requires period-dependent revenue shares and buyback prices, in addition to the wholesale price.
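As background, the classic single-period newsvendor quantity, which two-period models of this kind extend, can be sketched as follows (the prices and the normal demand below are illustrative assumptions, not the paper's parameters):

```python
from statistics import NormalDist

def newsvendor_order(price, cost, salvage, demand_mean, demand_sd):
    """Classic newsvendor: order up to the critical-fractile quantile of demand.
    underage = margin lost per unit of unmet demand; overage = loss per unsold
    unit (e.g., the shortfall between cost and the buyback/salvage value)."""
    underage = price - cost
    overage = cost - salvage
    fractile = underage / (underage + overage)
    return NormalDist(demand_mean, demand_sd).inv_cdf(fractile)

# Illustrative numbers: retail 20, wholesale 8, buyback/salvage 5,
# demand ~ Normal(1000, 200). Critical fractile = 12 / 15 = 0.8.
q = newsvendor_order(price=20.0, cost=8.0, salvage=5.0,
                     demand_mean=1000.0, demand_sd=200.0)
```

The temporal twist in the talk is that the return window splits the horizon, so the effective overage and underage costs, and hence the fractile, depend on the relative lengths of the two periods.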

How Real Are Real Numbers?

Abstract:

We discuss mathematical and physical arguments against continuity and in favor of discreteness, with particular emphasis on the ideas of Emile Borel (1871-1956).

Gregory J. Chaitin is at the IBM T. J. Watson Research Center in New York, and is the discoverer of the remarkable Omega number. His work on algorithmic information develops an idea in Leibniz's Discours de metaphysique (1686), and has focused on randomness, on the limits of formal axiomatic reasoning, and, more recently, on metabiology. His two latest books are Meta Math! and Thinking about Gödel and Turing, a collection of essays.

Representations of Solutions of Second Order Linear Systems with Delay

Denis Khusainov

National University of Kiev

Abstract:

We consider scalar and matrix delay equations of the form x''(t) + Ω²x(t − τ) = 0, where Ω is either a real number or a matrix, x is a scalar or a vector, respectively, and τ > 0 is a delay. The delayed scalar and matrix sine and cosine functions are introduced as the solutions, following the classical respective cases of the equations without delay (τ = 0). Representations of solutions of initial value problems for the homogeneous and nonhomogeneous equations with delay are given by using the delayed sine and cosine functions.
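Following the standard construction in the delay-equations literature (a sketch; in the matrix case the constants 0 and 1 below become the zero and identity matrices), the delayed cosine is defined piecewise on the intervals [(k−1)τ, kτ):

```latex
\cos_{\tau}\Omega t \;=\;
\begin{cases}
0, & t < -\tau,\\[2pt]
1, & -\tau \le t < 0,\\[2pt]
\displaystyle\sum_{j=0}^{k}(-1)^{j}\,\Omega^{2j}\,
  \frac{\bigl(t-(j-1)\tau\bigr)^{2j}}{(2j)!}, & (k-1)\tau \le t < k\tau,\quad k = 1, 2, \ldots
\end{cases}
```

For τ = 0 the sum formally recovers the partial sums of the classical cosine series, which is the sense in which these functions generalize the ordinary sine and cosine.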

Professor Khusainov’s research focuses on problems of stability and optimization of dynamical systems described by differential equations, difference equations, and functional-differential equations. He made important contributions to the development of the method of optimal Lyapunov functions and Lyapunov-Krasovskii functionals. He is the author of three monographs and more than 200 journal publications.

Multiscale Simulation of Biochemical Systems

Linda Petzold

University of California Santa Barbara

Abstract:

In microscopic systems formed by living cells, the small numbers of some reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA). Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) the presence of multiple timescales (both fast and slow reactions); and (2) the need to include in the simulation both chemical species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation. We will describe several recently developed techniques for multiscale simulation of biochemical systems, and outline some of the future challenges.
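The SSA mentioned above can be sketched for a single decay reaction, simulating every reaction event as the abstract describes (the reaction, rate constant, and molecule counts below are illustrative):

```python
import random

def ssa_decay(k, n0, t_end, rng):
    """Gillespie SSA for the single reaction A -> B with rate constant k:
    simulate every reaction event, drawing exponential waiting times
    from the current total propensity."""
    t, n_a = 0.0, n0
    while n_a > 0:
        propensity = k * n_a
        t += rng.expovariate(propensity)   # time to the next reaction event
        if t > t_end:
            break
        n_a -= 1                           # fire the reaction: one A becomes B
    return n_a

rng = random.Random(0)
remaining = ssa_decay(k=1.0, n0=1000, t_end=1.0, rng=rng)
# After one mean lifetime, roughly n0/e (about 368) A molecules remain.
```

Because every event is simulated one at a time, the cost grows with the number of reaction firings, which is exactly the inefficiency that the multiscale techniques in the talk are designed to overcome.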

Dr. Linda Petzold is currently Professor in the Department of Computer Science (Chair 2003-2007) and the Department of Mechanical Engineering, and Director of the Computational Science and Engineering Program at the University of California Santa Barbara. She received her Ph.D. in Computer Science in 1978 from the University of Illinois. From 1978-1985 she was a member of the Applied Mathematics Group at Sandia National Laboratories in Livermore, California, from 1985-1991 she was Group Leader of the Numerical Mathematics Group at Lawrence Livermore National Laboratory, and from 1991-1997 she was Professor in the Department of Computer Science at the University of Minnesota. Dr. Petzold is a member of the US National Academy of Engineering. She is a Fellow of the ASME and of the AAAS. She was awarded the Wilkinson Prize for Numerical Software in 1991, the Dahlquist Prize in 1999, and the AWM/SIAM Sonia Kovalevski Prize in 2003. She served as SIAM (Society for Industrial and Applied Mathematics) Vice President at Large from 2000-2001, as SIAM Vice President for Publications from 1993-1998, and as Editor in Chief of the SIAM Journal on Scientific Computing from 1989-1993.

Kinetic theory for polymer-nanoparticle composites (PNCs)

Abstract:

Polymer-nanoparticle composites are promising novel high-performance materials. By adding a small amount of nanosized specialty particles into a conventional polymer matrix, one can significantly improve the material, chemical, and electrical properties of the composite. The extraordinary improvement in the material properties is strongly impacted by the polymer-nanoparticle surface interaction. In this talk, I will make an attempt to model the flowing polymer-nanoparticle composite using a kinetic theory. We explicitly model the excluded-volume interaction of the nanoparticles, the polymer matrix conformation, and the polymer-nanoparticle contact interaction. We then benchmark the model in monodomains. Phase diagrams in equilibrium, shear, and elongational flows will be discussed. Finally, we explore the spatio-temporal structure of inhomogeneous polymer domain PNCs.

The common theme of the members of the thrust area led by Professor Qi Wang at the USC NanoCenter is to study complex phenomena in nanoscience research through theoretical analysis and computational simulation. The current research activities in the group include: quantum chemistry, algorithm development and analysis, pattern recognition and visualization, Monte Carlo, MD, Ab initio simulation of complex material systems, multiscale modeling and simulation of molecular structures, self-assembly phenomena, and mesoscale structure in complex fluids/soft matter, high performance computing, and GPU computing. The thrust actively seeks interdisciplinary interaction with other thrust areas within the NanoCenter and outside groups.

Emerging trends and new techniques for engineering modeling and simulation

Tom Lee

Chief Evangelist, Maplesoft

Abstract:

Several key industries, including the automotive and aerospace sectors, are questioning the viability of the traditional engineering modeling and simulation toolchain for emerging design challenges. Many are concluding that meeting these challenges requires a rethinking of how we create and interact with engineering models. This seminar presents a case for a new approach that exploits some of the well-known advantages of symbolic computation in a way that accelerates the development of complex dynamic models and produces models of higher fidelity and real-time performance.

The software context for this seminar will be the Maplesoft product line. Best known for its math product Maple, Maplesoft has recently emerged as a key player in engineering modeling software with the release of the milestone product MapleSim, a new modeling environment for high-performance, multi-domain modeling of physical systems. MapleSim supports an interactive, drag-and-drop approach to system modeling and allows for rapid development of a wide range of models. Furthermore, through the application of Maple's symbolic computation algorithms, MapleSim automatically generates and provides access to all of the model equations of the system, allowing for deeper exploration and greater flexibility in parameter studies. These symbolic tools also simplify model equations, increasing simulation speeds by as much as an order of magnitude over traditional signal-flow based simulations.

This symbolic approach has already earned keen interest from automotive OEMs and a broad range of academic engineering groups. Researchers project a significant reduction in the effort required to develop model equations, see potential for advances in analytical capabilities, and are actively exploring its potential to become the new software framework for modeling research. Educators have also expressed optimism, as it provides an intuitive environment to define and simulate models without sacrificing any of the rigor in the course, since the system provides full access to the underlying model equations.

This seminar will provide an overview of the product MapleSim and its conceptual foundation. Several demonstration examples will illustrate its functionality and potential. An informal Q & A session will discuss implications in research and education.

As Chief Evangelist at Maplesoft, Tom Lee is the principal external spokesperson for the company. He holds a PhD in Mechanical Engineering (Automation and Control) from the University of Waterloo. He has been with Maplesoft since 1989.

Computational Chemistry on Gold Clusters and Complexes

Uncertainty Quantification and Management in Multiscale Systems

Nicholas Zabaras

Cornell University

Abstract:

Uncertainty quantification in multiscale systems arises from limited noisy experimental data and from the inherent randomness in the underlying physical phenomena. Sources of uncertainty may be associated with initial/boundary conditions, material properties, constitutive laws, loads, reaction constants, geometry, and topology. In this talk, we will review methodologies that account for the stochastic and multiscale nature exhibited by such systems. In particular, we will discuss: (1) a data-driven strategy to incorporate limited experimental data into the stochastic analysis, (2) effective computational strategies to solve the resulting stochastic partial differential equations in high-dimensional spaces, and (3) a stochastic variational multiscale formulation to incorporate uncertain multiscale features. A number of examples will be presented to demonstrate the potential and limitations of the various techniques discussed. These will include problems related to long-term integration and stochastic discontinuity.

Quantum Computing over the Rainbow

Steven Flammia

Perimeter Institute for Theoretical Physics

Abstract:

The standard model of quantum computing assumes that the quantum computer is capable of inducing precisely controlled dynamics on the constituent qubits. An alternative model, the one-way quantum computer, allows any quantum computation to be implemented using only measurements. Much of the hard work is consolidated into preparing the initial entangled state on which the measurements are made. Most proposals for building such "cluster" states suffer from a drawback: either the cluster must be knitted together piece by piece, which itself requires precise control, or the cluster can be prepared efficiently, but the measurements are difficult to perform because the qubits can't be individually addressed. After introducing these miraculous cluster states, I will discuss a very compact method for building them that uses a single optical cavity. The method creates large states without difficult knitting, and where all the constituents are addressable. The information is encoded in the broad spectrum of resonant frequencies of the cavity (the optical frequency comb), hence the computation takes place "over the rainbow."

Multidimensional Gaussian Distributions and Random Matrix Theory

John Nieminen

Formerly with Northern Digital Inc.

Abstract:

Since its beginning, random matrix theory (RMT) has captivated mathematicians and physicists alike. RMT goes beyond being a fundamental subject of study and has been applied to several different fields of math and physics including quantum and acoustic chaos, stochastic processes, econophysics, and number theory. This talk will focus on some new observations that relate certain statistics of random point processes to the statistics of eigenvalue spacings of random matrices. Particular attention will be given to eigenvalue spacings of two-by-two and four-by-four random matrices and how these are connected to some properties of multidimensional Gaussian distributions.
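As a small illustration of the spacing statistics the talk connects to, two-by-two GOE spacings can be sampled in a few lines (a pure-Python sketch; the GOE convention below, with off-diagonal variance 1/2, is one standard normalization):

```python
import math
import random

def goe_spacing_2x2(rng):
    """Eigenvalue spacing of a 2x2 GOE matrix [[a, b], [b, c]]: for a real
    symmetric 2x2 matrix the gap is sqrt((a - c)**2 + 4*b**2)."""
    a, c = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    b = rng.gauss(0.0, math.sqrt(0.5))   # off-diagonal variance 1/2 (GOE)
    return math.sqrt((a - c) ** 2 + 4.0 * b ** 2)

rng = random.Random(1)
raw = [goe_spacing_2x2(rng) for _ in range(20000)]
mean = sum(raw) / len(raw)
spacings = [x / mean for x in raw]       # normalize to unit mean spacing
frac_small = sum(1 for s in spacings if s < 0.1) / len(spacings)
# Level repulsion: frac_small is far below the ~0.095 a Poisson point process
# would give, matching the Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4).
```

The spacing here reduces to the Euclidean norm of a Gaussian vector, which is precisely the kind of link between multidimensional Gaussian distributions and eigenvalue spacings that the talk explores.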

Abstract:

For the study of complex synthetic and biological molecular systems by computer simulation, one is still restricted to simple model systems or to by-far-too-small time scales. To overcome this problem, multiscale techniques are being developed. However, in almost all cases the regions treated at different levels of resolution are kept fixed, and the free exchange of particles among these regions is not allowed. I here present a robust computational method and its basic theoretical framework for an efficient and flexible coupling of the different regimes. The key feature of the method is that it allows for a dynamical change of the number of molecular degrees of freedom during the course of the MD simulation by an on-the-fly switching between the atomistic and coarse-grained levels of detail. Thermodynamic equilibrium is preserved by interpreting the concept of changing resolution in terms of a "geometrically induced phase transition". This leads to the introduction of a "latent heat" of switching and to the extension of the equipartition theorem to fractional (switching) degrees of freedom. The efficiency of this approach is illustrated by applications to several systems.

Mathematics of Risk Transfer

Abstract:

In the seventies, mathematical finance emerged as an academic companion to a market revolution brought about by financial derivatives. The birth of financial risk management in the nineties solidified the role of mathematics in the financial sector. Both areas are now merging into economic activity based on risk transfer, which is transcending the financial sector. This talk will review some new mathematical challenges in this area, focusing on the role of stochastic correlation models, which were at the root of the sub-prime crisis.

How Do We Simulate Things at the Scale of Molecules and Electrons?

Tom Woo

CRC in Catalyst Modelling and Computational Chemistry, University of Ottawa

Abstract:

Computer simulations combined with high performance computing have benefited many areas of science and engineering, and chemistry is no different. Computational chemistry involves simulating systems at the atomic and electronic level. When examining individual molecules at this microscopic scale, quantum mechanical effects are important. As a result, to get the physics ‘right’ we need to solve the quantum mechanical Schrödinger equation, or an equivalent, to properly describe these systems. In this presentation, a brief introduction to the technology of computational chemistry and molecular-scale modeling will be given. Additionally, some examples from our research group of how computational chemistry has provided important insights into chemical processes will be presented: a study of how anti-wear engine oil additives function at the molecular level in automobile engines (Science, 2005, 307, 1612), and our recent efforts to search in silico for new pure-nitrogen analogues of diamond at high pressure (Phys. Rev. Lett., 2006, 97, 155503).

Some Representative Issues in Multiscale Modeling

Abstract:

This talk will focus on the methodologies for efficiently capturing the macroscale behavior of a system using microscopic models. I will try to give a candid assessment of the current status of the field, discussing both the main successes and the major difficulties. I will start by reviewing some of the classical methodologies. I will then discuss the attempts that have been made to construct general frameworks. I will present a relatively new strategy for constructing seamless techniques that do not require going back and forth between the macro and micro states of the system. If time permits, I will also discuss some typical instabilities and inconsistencies that can arise in multiscale methods. Finally, I will discuss what can be done for systems that do not have scale separation or any other special features.

Weinan E is a professor in the Department of Mathematics and the Program in Applied and Computational Mathematics at Princeton University. He received his B.Sc. from the University of Science and Technology of China in 1982, his master's degree from the Chinese Academy of Sciences in 1985, and his Ph.D. from UCLA in 1989. After visiting positions at the Courant Institute and the Institute for Advanced Study, he took a faculty position at the Courant Institute from 1994 to 1999, before moving to Princeton University. He is the recipient of a PECASE award (1997), the Feng Kang Prize (1999), and the ICIAM Collatz Prize (2003). Weinan E's current research interest centers on stochastic and/or multi-scale, multi-physics modeling. His current work addresses problems in the theory and modeling of rare events, multi-scale modeling of complex fluids and micro-fluidics, atomistic and continuum modeling of solids, as well as other types of problems with multiple time scales.

Abstract:

We consider experimentally feasible chains of trapped ions with pseudo-spin 1/2, and find models that can potentially be used to implement error-resistant quantum computation. Similar in spirit to classical neural networks, the system achieves error-resistance by encoding the qubits in states distributed over the whole system. We therefore call our system a quantum neural network, and present a quantum neural network model of quantum computation. Qubits are encoded in a few quasi-degenerate low-energy levels of the whole system, separated by a large gap from the excited states, and by large energy barriers between themselves. We investigate protocols for implementing a universal set of quantum logic gates in the system by adiabatic passage of a few low-lying energy levels of the whole system. Naturally appearing and potentially harmful distributed noise in the system leaves the fidelity of the computation virtually unchanged, provided it is not too strong. The computation is also naturally resilient to local perturbations of the spins. Reference: Phys. Rev. Lett. 98, 023003 (2007); quant-ph/0607053.

Optimization and Data Mining in Biomedicine

Distinguished Professor, Co-Director of the Center for Applied Optimization, University of Florida, ISE, MBE Departments, McKnight Brain Institute, and University of Florida Genetics Institute

Abstract:

In recent years, optimization has been widely used in many problems in biomedicine. These problems are inherently complex and very difficult to solve. In this talk we are going to focus on global optimization techniques (multi-quadratic programming) in computational neuroscience and on biclustering-based data mining approaches (nonlinear fractional optimization) in cancer research. In addition, several other applications will be briefly discussed.

A combinatorial approach to key predistribution for distributed sensor networks

Douglas R. Stinson

David R. Cheriton School of Computer Science, University of Waterloo

Abstract:

In this talk, we discuss the use of combinatorial set systems (combinatorial designs) in the design of key predistribution schemes (KPS) for sensor networks. We show that the performance of a KPS can be improved by carefully choosing a certain class of set systems as “key ring spaces.” In particular, we analyze KPS based on a type of combinatorial design known as a transversal design. We employ two types of transversal designs, represented by the set of all linear polynomials and the set of quadratic polynomials (over some finite field), respectively. These KPS turn out to be highly efficient in the shared-key discovery phase, without degrading connectivity or resiliency.
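To picture the shared-key discovery step in the linear-polynomial case, identify each node with a linear polynomial f(x) = ax + b over GF(p), and its key ring with the points (x, f(x)). Two non-parallel polynomials agree at exactly one x, found by a single modular division. The sketch below illustrates this idea under assumed conventions; it is not the exact scheme analyzed in the talk:

```python
def shared_key_position(node1, node2, p):
    """Shared-key discovery for nodes given as linear polynomials over GF(p).

    node = (a, b) represents f(x) = a*x + b (mod p); the node's key ring
    consists of the points (x, f(x)).  Distinct non-parallel nodes share
    exactly one candidate point: the x where the two polynomials agree.
    A real scheme would also check that x lies in the key-ring index set.
    """
    (a1, b1), (a2, b2) = node1, node2
    if a1 == a2:
        return None                                 # parallel: no intersection
    return ((b2 - b1) * pow(a1 - a2, -1, p)) % p    # modular inverse (Py 3.8+)
```

For example, over GF(7) the nodes (1, 2) and (3, 5) agree at x = 2, since 1*2 + 2 and 3*2 + 5 are both 4 mod 7, so that point is their shared key.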

The Monster and its Moonshine Functions

John McKay

Concordia University

Abstract:

The Monster, a group of astronomical order, is slowly yielding its secrets. It is the symmetry group of a rational conformal field theory. In this introductory talk, Dr. McKay will discuss the functions that constitute monstrous moonshine and explain the importance of the monster group and its connections with better established parts of mathematics.

A Treecode Algorithm for Regularized Particle Simulations

Robert Krasny

Arthur F. Thurnau Professor, University of Michigan

Abstract:

Consider a system of N particles interacting through a long-range potential, e.g. point charges, point masses, or point vortices. The cost of evaluating the pairwise interactions by direct summation is O(N^2), which is prohibitively expensive when N is large. A number of hierarchical algorithms have been developed to reduce the cost to O(N log N) or O(N), such as the Barnes-Hut treecode and the Greengard-Rokhlin fast multipole method. These algorithms are recursive, and they replace the particle-particle interactions by suitably chosen particle-cluster or cluster-cluster interactions. This talk will describe a version of the Barnes-Hut treecode that was recently developed for particle simulations of vortex sheet roll-up in three-dimensional fluid flow. The scheme uses Taylor expansions in Cartesian coordinates to treat the regularized Biot-Savart kernel. Adaptive techniques are employed to gain efficiency. Applications to evaluating electrostatic forces in plasma dynamics and molecular dynamics will also be briefly discussed.
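The O(N^2) baseline that these hierarchical methods accelerate is just a sum over particle pairs. The sketch below evaluates a regularized Coulomb-type kernel by direct summation; the smoothing parameter delta stands in, loosely, for the blob-style regularization mentioned above and is not Krasny's exact Biot-Savart kernel:

```python
import numpy as np

def direct_potentials(x, q, delta=0.1):
    """O(N^2) direct summation of a regularized Coulomb-type kernel.

    x: (N, 3) particle positions; q: (N,) charges or strengths.
    delta smooths out the 1/r singularity (a blob-style regularization);
    delta = 0 recovers the bare point-particle kernel.  The vectorized
    form below still does O(N^2) work and memory: exactly the cost that
    treecodes and fast multipole methods avoid.
    """
    diff = x[:, None, :] - x[None, :, :]           # (N, N, 3) pairwise offsets
    r2 = np.sum(diff ** 2, axis=-1) + delta ** 2   # regularized squared distances
    np.fill_diagonal(r2, np.inf)                   # drop self-interaction
    return (q[None, :] / np.sqrt(r2)).sum(axis=1)  # potential at each particle
```

For three unit charges at positions 0, 1, 2 on a line with delta = 0, this returns potentials 1.5, 2.0, 1.5, matching the hand computation 1/1 + 1/2, etc.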

Abstract:

An exact trajectory of a dynamical system lying close to a numerical trajectory is called a shadow. We present a general-purpose method for proving the existence of finite-time shadows of numerical ODE integrations of arbitrary dimension in which some measure of hyperbolicity is present. Much of the rigor is provided automatically by interval arithmetic and validated ODE integration software that is freely available. The method is a generalization of a previously published containment process that was applicable only to two-dimensional maps. We extend it to handle maps of arbitrary dimension and then to ODEs. For an n-dimensional system, the method involves building n-cubes around each point of the discrete numerical trajectory through which the shadow is guaranteed to pass at appropriate times. The proof consists of two steps: first, the rigorous computational verification of a simple geometric property we call the inductive containment property; and second, a simple geometric argument showing that this property implies the existence of a shadow. The computational step is almost entirely automated and easily adaptable to any ODE problem. The method allows for the rescaling of time, which is a necessary ingredient for successfully shadowing ODEs. Finally, the method is local, in the sense that it builds the shadow inductively, requiring information only from the most recent integration step, rather than the more global information typical of several other methods. The method produces shadows comparable in length and distance to all currently published results.

Activists and Political Coalitions in the United States

Norman Schofield

Washington University

Abstract:

Formal models of voting have emphasized the mean voter theorem: that all parties should rationally adopt identical positions at the electoral mean. The lack of evidence for this assertion is a paradox or contradiction in need of resolution. This paper attempts to resolve the paradox by considering an electoral model that includes “valence,” or non-policy judgements by voters of party leaders. The theorem is used to suggest that Republican success depends on balancing the opposed demands of economic and social conservatives. Democrat success in future elections depends on overcoming the policy demands of economic liberals and gaining support from cosmopolitans, the socially liberal but economically conservative potential supporters of the party.

Simulations of Complex Materials Across Multiple Length Scales

Abstract:

A variety of physical phenomena involve multiple length and time scales. Some interesting examples of multiple-scale phenomena are:

(a) the mechanical behavior of crystals and in particular the interplay of chemistry and mechanical stress in determining the macroscopic brittle or ductile response of solids;

(b) the molecular-scale forces at interfaces and their effect in
macroscopic phenomena like wetting and friction;

(c) the alteration of the structure and electronic properties of macromolecular systems due to external forces, as in stretched DNA nanowires or carbon nanotubes.

In these complex physical systems, the changes in bonding and atomic configurations at the microscopic, atomic level have profound effects on the macroscopic properties, be they of mechanical or electrical nature. Linking the processes at the two extremes of the length scale spectrum is the only means of achieving a deeper understanding of these phenomena and, ultimately, of being able to control them.

While methodologies for describing the physics at a single scale are well developed in many fields of physics, chemistry, and engineering, methodologies that couple scales remain a challenge, both conceptually and computationally. In this presentation I will discuss the development of methodologies for simulations across disparate length scales, with the aim of obtaining a detailed description of complex phenomena of the type described above. I will also present illustrative examples, including hydrogen embrittlement of metals, DNA conductivity and translocation through nanopores, and controlling the wettability of surfaces by chemical modification.

Designing Half-Metallic Materials for Spintronics –
One of the Frontiers of Physics

Ching-Yao Fong

University of California, Davis

Abstract:

We first answer the question 'What is spintronics?' A general survey of spintronic materials will be given. We explain why half metallic materials are ideal candidates for spintronic applications. It will be followed by a brief discussion of the fundamental ideas underlying methods of calculation - density functional theory and the pseudopotential method.

A Perimeter Institute, Wilfrid Laurier University and University of Waterloo Joint Event

From Disposing Arrow’s Dictator to Understanding All Those Mysteries About Voting

Don Saari

Professor of Economics and Mathematics and the Director of the Institute
for Mathematic and Behavioral Sciences, University of California, Irvine

Abstract:

Over a half century ago, Arrow's Impossibility Theorem left us with the negative sense that "no decision method is fair." Is this correct? After indicating "why" his theorem states what it states, very benign interpretations immediately follow: interpretations that show how to replace Arrow's dictator with positive conclusions. Then I show how all of the standard voting paradoxes can be understood. In this manner, we can identify the "optimal voting rule." If time permits, I will indicate how all of this extends to price dynamics.
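One of the standard paradoxes alluded to here, the Condorcet cycle, is easy to exhibit in a few lines. The three-voter profile below is the textbook example; candidate names are illustrative:

```python
def pairwise_margin(profile, a, b):
    """Net number of voters preferring candidate a to candidate b.

    profile: list of (count, ranking) pairs, each ranking a tuple of
    candidates ordered from most to least preferred.
    """
    margin = 0
    for count, ranking in profile:
        margin += count if ranking.index(a) < ranking.index(b) else -count
    return margin

# The classic cycle: majorities prefer A over B, B over C, and yet C over A,
# so pairwise majority voting produces no coherent winner.
profile = [(1, ("A", "B", "C")),
           (1, ("B", "C", "A")),
           (1, ("C", "A", "B"))]
```

Each pairwise margin is +1, so the majority relation cycles: no candidate beats both of the others.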

A Perimeter Institute, Wilfrid Laurier University and University of Waterloo Joint Event

The Chaotic Evolution of the Universe

Don Saari

Professor of Economics and Mathematics and the Director of the Institute
for Mathematic and Behavioral Sciences, University of California, Irvine

Abstract:

In this expository talk, I describe how "chaotic behavior" not only was discovered in the study of the Newtonian N-body problem, but also is responsible for several strange-appearing motions. Then, a mathematical outline of the general evolution of the universe, under Newton's laws, is provided. No prior background in dynamics or the mathematics of the N-body problem is needed to follow this lecture.

Human Walking Isn't All Hard Work, or How to Freeload on Physics

Art Kuo

University of Michigan

Abstract:

It takes considerable effort to walk: both to control the motion and to provide energy. But just how much control is needed, and where does the energy go? To answer these questions we might consider just how little control and energy are needed. Passive walking machines are two-legged mechanisms that can walk down a gentle slope with no control whatsoever and no external energy input. With a very small amount of power, these machines can also walk on level ground. Their movements look surprisingly human, and in fact it appears that humans harness the passive dynamic properties of the limbs when they walk, just as the machines do. Humans need to exert some control, but make the best use of physics to minimize their effort. We will use simple minimization principles to interpret theoretical and experimental evidence for freeloading on physics.

The GuessAndCheck Methodology

Doron Zeilberger

Rutgers University

Abstract:

The method of "Guess and Check" is taught to elementary school pupils as preparation for algebra, but in a more global sense it is the underlying methodology of the physical sciences, embodied in what Reichenbach famously called the "Context of Discovery" and the "Context of Justification".

Implicitly, this is also how mathematicians work, but you'll never know that from their finished product, whose format can be described by the regular expression:

" ((Lemma: _ _ _ : Proof: _ _ _ )* Theorem: _ _ _ Proof: _ _ _)* "

sometimes interspersed with "Remark:" and "Corollary:".

The future of mathematics will depend on how well we understand the REAL way that mathematicians arrive at their results, not by formal logic, but by some (usually implicit) ANSATZ. We should learn how to make the ansatzes EXPLICIT, and then teach it to our computers, who would eventually far surpass their masters.

Computing the Universe

Seth Lloyd

Professor of Quantum-Mechanical Engineering, MIT

Abstract:

The universe can be thought of as a giant information processor: every atom, quark, and photon registers bits of information, and every time two elementary particles interact, those bits are flipped. The universe computes. How powerful a computer is it? Recent advances in the physics of quantum computation allow us to measure both the computational power of the universe and how hard it is for a computer to reproduce the universe’s full dynamics. This talk calculates the computational capacity of the universe and describes how the dynamics of the universe – including the behaviour of elementary particles and quantum gravity – can be analyzed as a quantum computation.

Financial Innovation in a Chaotic Environment

Abstract:

Dr. Myron Scholes is the Frank E. Buck Professor of Finance Emeritus at the Stanford University Graduate School of Business. He is most noted for being the Co-originator of the famed Black-Scholes Option Pricing Model, a building block of modern finance. For his work he was awarded the Alfred Nobel Memorial Prize in Economic Sciences in 1997. He is a published author, member of several editorial boards and past president of the American Finance Association.

A Focus on Financial Mathematics: Meeting Increasing Demands in the Financial Services Industry

Getting Ahead in Financial Services: How life-long education will drive your career

Roberta Wilton

CEO, CSI Global Education Inc.

Abstract:

Dr. Roberta Wilton is President and CEO of CSI Global Education Inc. CSI offers more than 50 products designed to meet the educational needs of professionals and individuals who need to leverage financial knowledge to meet both career and personal goals. CSI is the leading educator in this sector in Canada, and has recently expanded into international markets.

Effects of Continuous Variation on the Strategic Stability of Conventions

Mike Mesterton-Gibbons

Florida State University

Abstract:

Conventions---rules for conflict avoidance---abound in animal social life and often rely on arbitrary cues. Consider, for example, the coordination problem faced by territorial neighbors: each would like to obtain more space than the other is willing to yield, yet both suffer if there is a prolonged dispute over boundary locations. A game-theoretic model shows that neighbors may be favored to place boundaries at the position of a conspicuous landmark even when the size of the resulting territory is smaller than could be obtained through escalated fighting. Another example of a convention, explored by a second model, is the age-based (older dominates younger) reproductive queue that is observed in certain species of wasp. In both cases, the shape of the distribution of a continuous variable---e.g., strength or lifespan---affects the strategic stability of a convention.

Fair Division: From Cake-Cutting to Dispute Resolution

Steven J. Brams

New York University

Abstract:

Cutting a cake, dividing up the property in an estate, determining borders in international disputes: such problems of fair division are ubiquitous. Rigorous procedures for allocating goods (or "bads" like chores), or deciding who wins on what issues in disputes, will be analyzed. Particular attention will be given to procedures that produce "envy-free" allocations, in which everybody thinks he or she received the largest portion and hence does not envy anybody else. Applications to real-life conflicts, ranging from interpersonal to international, will be discussed.
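The simplest envy-free procedure is two-person divide-and-choose, which can be sketched on a cake discretized into slices. The valuations below are invented for illustration; the procedures discussed in the talk are more general:

```python
def divide_and_choose(values_divider, values_chooser):
    """Two-person divide-and-choose on a cake discretized into slices.

    values_divider / values_chooser: each player's value for every slice.
    The divider cuts so the two sides are as equal as possible in her own
    valuation; the chooser then takes whichever side he values more.
    Returns (divider_piece, chooser_piece) as half-open index ranges.
    """
    total = sum(values_divider)
    best_cut, best_gap, running = 0, float("inf"), 0
    for i, v in enumerate(values_divider):
        running += v
        gap = abs(2 * running - total)   # |left - right| in divider's eyes
        if gap < best_gap:
            best_gap, best_cut = gap, i + 1
    left, right = (0, best_cut), (best_cut, len(values_divider))

    def value(values, piece):
        return sum(values[piece[0]:piece[1]])

    if value(values_chooser, left) >= value(values_chooser, right):
        return right, left               # chooser takes the left piece
    return left, right                   # chooser takes the right piece
```

Each player values his or her own piece at least as highly as the other's (up to the granularity of the slices, for the divider), so neither envies the other.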

Computational biology of gene regulation in Bacillus subtilis

Michiel De Hoon

Columbia University

Abstract:

In recent years, experimentally characterized gene regulatory relations in the model organisms Escherichia coli and Bacillus subtilis have been collected and annotated in their respective databases RegulonDB and DBTBS. Such databases allow us to predict gene regulation in these and related organisms on a genome-wide scale.

In bacteria, gene expression is organized in operons: groups of adjacent genes on the same strand of DNA that are transcribed into a single mRNA molecule. Transcription of an operon starts as the RNA polymerase binds to the promoter region upstream of the first gene, and continues until it reaches a terminator sequence downstream of the last gene in the operon. Sigma factors, which form a subunit of the RNA polymerase, are responsible for recognizing the correct DNA binding site of the promoter region. Most bacteria have more than one sigma factor, allowing the RNA polymerase to reprogram itself to recognize different DNA motifs by incorporating the appropriate sigma factor.

Previously, we predicted the operon structure of B. subtilis based on the genome sequence information and a collection of microarray gene expression data, using experimentally known operons as a training set. As a next step in our understanding of gene regulation in bacteria, we attempt to predict from the DNA sequence and gene expression data which sigma factor controls the expression of a given operon.

Similarly, we may attempt to predict the location of the transcriptional terminator downstream of the operon. In Bacillus subtilis, most terminators consist of an inverted repeat in the DNA sequence, followed by a stretch of several thymine residues, allowing us to predict the presence of a transcriptional terminator directly from the DNA sequence. Recently, we analyzed the statistical properties of 425 known terminators in Bacillus subtilis. We found that transcriptional terminators, and therefore the boundaries of operons, can be predicted with high accuracy (~94%) in Bacillus subtilis as well as other members of the Firmicutes phylum, even in the absence of experimentally known operons in these organisms. Interestingly, considerably fewer transcriptional terminators could be detected in bacteria (such as E. coli) belonging to other phyla, suggesting an important role for other termination mechanisms in these organisms.
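The sequence signal described, an inverted repeat followed by a run of thymines, can be caricatured with a naive scan. The stem, loop, and T-run thresholds below are invented for illustration; the method in the talk is statistical and far more accurate:

```python
def revcomp(seq):
    """Reverse complement of a DNA string."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def find_hairpin_terminators(seq, stem=6, max_loop=8, t_run=5):
    """Naive scan for rho-independent terminator candidates: an inverted
    repeat (a stem-loop hairpin in the transcript) followed by a run of
    thymines.  Returns the start indices of candidate stems."""
    hits = []
    for i in range(len(seq) - 2 * stem - t_run):
        left = seq[i:i + stem]
        for loop in range(3, max_loop + 1):
            j = i + stem + loop                       # start of right arm
            tail = seq[j + stem:j + stem + t_run]     # what follows the hairpin
            if seq[j:j + stem] == revcomp(left) and tail == "T" * t_run:
                hits.append(i)
                break
    return hits
```

On a synthetic sequence containing one planted hairpin followed by five thymines, the scan reports exactly that stem position.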

Abstract:

Multiprotein complexes play an essential role in many cellular processes, but our knowledge of the mechanisms of their formation, regulation, and lifetimes is very limited. Timely transcription of the corresponding genes is one mechanism by which protein complexes are regulated. We investigate this mechanism in multiprotein complexes in yeast by mapping gene expression data onto the manually curated set of protein complexes deposited in the MIPS database. Rather than combining all the available gene expression data, which include data measured under more than 500 different conditions, we examine the expression profiles of the genes corresponding to the components of each individual complex under different sub-groups of conditions independently. In a previous study, we identified putative regulatory sequence motifs in the upstream regions of the genes involved in individual complexes, and predicted groups of co-regulated genes in each complex on the basis of these motifs. Combining the condition-dependent gene expression analysis with the identified putative regulons allows us to obtain a dynamic picture of the condition-dependent transcriptional regulation program of protein complexes in yeast.

This is joint work with Nicolas Simonis, Chris Orsi, Didier Gonze, and Jacques van Helden.

Complex Systems: Rational Modelling Ensures Fidelity

Anthony Roberts

University of Southern Queensland

Abstract:

In fluid mechanics, dissipation across thin domains acts to organise the dynamics of thin films and dispersion along channels. We seek long-term fidelity, characterised by exponentially quick agreement between model and reality. But cross-sectional averaging is unsound. Resolving subgrid structure is the key to modelling across disparate physical length scales---for example, in discrete models of dispersion in a channel. Other multiscale modelling uses sparsely distributed microscopic simulators to build up a model of the macroscopic dynamics; but Brownian bugs provide a cautionary example of birth-induced clustering that lies outside the usual continuum models. Initial conditions also have a long-term effect and must be modelled to ensure fidelity. State-space, dynamical-systems arguments form the basis for accurate and reliable low-dimensional models of spatio-temporal dynamics.

Why Does a Psychologist Need a Supercomputer?
The Representation and Use of Knowledge in Memory

D. J. K. Mewhort

Queen's University

Abstract:

I will sketch a Distributed Holographic Model for semantic memory (DHM) and apply it to archival data from representative experiments. Conventional views distinguish active memories (all the short-term and sensory stores) from passive memory (the long-term storage).

By contrast, the DHM proposes that the long-term store is dynamic: the strength of words and of their associates changes dynamically whenever items are studied or retrieved. Using random vectors to represent words, the DHM captures basic data from free-recall experiments, including the list-strength effect, the part-list cue effect, and category-based clustering in recall.

I will conclude by sketching BEAGLE: a statistical model used to construct vectors that describe the meaning of words in terms of the word's position in a hyperspace. BEAGLE is trained by reading continuous prose. The next step is to use BEAGLE vectors in the DHM to understand the flow of ideas.
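The random-vector idea behind models of this kind can be caricatured in a few lines: words become high-dimensional random vectors, study is superposition into a single trace, and retrieval strength is a dot product. This sketch omits the holographic (convolution-based) associations of the DHM proper; the dimension and word names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_lexicon(words, dim=1024):
    """One random 'environmental' vector per word, roughly unit length."""
    return {w: rng.normal(0, 1 / np.sqrt(dim), size=dim) for w in words}

def study(memory, vec, strength=1.0):
    """Storage by superposition: studied items are summed into one trace."""
    return memory + strength * vec

def familiarity(memory, vec):
    """Echo intensity: the dot product of a probe with the memory trace."""
    return float(memory @ vec)
```

Probing a trace that holds "cat" and "dog" with "cat" yields a familiarity near 1, while probing with an unstudied word yields a value near 0: the signal survives superposition because independent random vectors are nearly orthogonal in high dimensions.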

Some results in Mathematical Biology and in Ramsey Theory

Abstract:

This talk will have two parts. One will be a review of the activities in Professor Berger's group in computational biology.

The other will concern some questions about the existence of monochromatic solutions to simple homogeneous linear equations when the integers or real numbers are colored.

We discuss the following answer to the first case of a conjecture of Rado from 1933: Consider a linear homogeneous equation in three variables with integer coefficients. If every coloring of the integers using 24 colors has a monochromatic solution to it, then every coloring with any finite number of colors has a monochromatic solution to it.

We also discuss answers to the following two questions:

Does every 3-coloring of the non-zero reals have a monochromatic solution to the equation x + 2y = 4z?

Does every coloring of the reals by positive integers have a monochromatic solution to the equation x1 + 4x2 - x3 - x4 - ... - x7 = 0, with all xi distinct?
This is joint work with Jacob Fox.
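The flavor of these coloring questions can be conveyed by brute force over a finite window of integers. This is a toy check only: the results above quantify over all colorings of all the integers or reals, which no finite search settles:

```python
from itertools import product

def has_mono_solution(coloring, coeffs):
    """Does a coloring of {1, ..., n} admit a monochromatic solution to
    a*x + b*y + c*z = 0?  Here coloring[i - 1] is the color of integer i.
    """
    a, b, c = coeffs
    n = len(coloring)
    for x, y, z in product(range(1, n + 1), repeat=3):
        if (a * x + b * y + c * z == 0
                and coloring[x - 1] == coloring[y - 1] == coloring[z - 1]):
            return True
    return False
```

For instance, any single-color coloring admits a solution to x + y - z = 0 (take x = y = 1, z = 2), while the 2-coloring 0, 1, 1, 0 of {1, ..., 4} avoids one.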

Financial Engineering on Computational Clusters

Thomas F. Coleman

Cornell University (Director, Cornell Theory Center)

Abstract:

Many of the important problems of risk management and financial engineering are computationally intensive. Computing good answers can take hours, sometimes days. Practitioners, under severe time pressure, sometimes make unwarranted assumptions or overly simplify models in order to compute an answer more quickly. Unfortunately, this approach can yield incorrect, and sometimes dangerous, computed answers.

The advent of (commodity) cluster computing, with point-and-click access from a desktop, offers convenient parallel computing power for the financial analyst or risk manager. We discuss some basic problem situations that arise in computational risk management such as portfolio management, pricing and hedging of large portfolios, evaluating complex index-linked insurance contracts, Value-at-Risk, credit risk computations, and sensitivity analysis; we demonstrate the convenience and power of cluster approaches, especially when working at the portfolio level.

The talk is organized as follows:
1) Background
2) What is the connection between web services, and high-performance (parallel) computing?
3) Why is the .NET/web services environment, front-ended by Excel, absolutely ideal for handling the compute-intensive problems of finance?
4) Examples of the effective use of this environment in financial engineering:
(a) Pricing and computing risk parameters of a large portfolio of structured bonds (in 10 minutes instead of 10 hours);
(b) Computing a rich ‘hypercube’ of future ‘what if’ scenarios for a portfolio of convertibles (in 1 minute instead of 25 minutes);
(c) Pricing a portfolio of portfolios of defaultable corporate bonds, and computing risk parameters (in 4 minutes instead of 4 hours).
5) Conclusions
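The portfolio-level computations above are typically embarrassingly parallel: each worker prices an independent batch of Monte Carlo paths and the partial results are averaged. A thread-based sketch of this decomposition for a single European call follows (the instrument, parameters, and worker counts are invented for illustration; on a cluster the same batches would be farmed out across machines):

```python
import math
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def price_batch(seed, n_paths=50_000, S0=100.0, K=100.0,
                r=0.05, sigma=0.2, T=1.0):
    """Monte Carlo estimate of a European call price from one batch of paths."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * float(np.maximum(ST - K, 0.0).mean())

def parallel_price(n_workers=4):
    """Farm independent batches out to a worker pool and average the results."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        estimates = list(pool.map(price_batch, range(n_workers)))
    return sum(estimates) / len(estimates)
```

With 4 workers and 50,000 paths each, the estimate lands within a few cents of the Black-Scholes value (about 10.45 for these parameters); on a real cluster the same pattern distributes batches across processes or machines rather than threads.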

Mathematical Modelling with Maple and Applications

Jurgen Gerhard

Maplesoft

Abstract:

Computer algebra systems such as Maple have several advantages over purely numerical mathematical software that make them particularly suitable for modelling in science and engineering. In addition to symbolic computations, with the potential to obtain, e.g., exact formulae for the dependency of the model on certain parameters and what-if analyses, such systems also offer numerical computations to arbitrary precision, visualization capabilities, and the ability to document within one framework the whole modelling process, from the development of the model to simulation and validation.

After a brief introduction to various aspects of Maple (symbolic computation, numerical computation, visualization, and documentation), the talk will demonstrate modelling applications in Maple from areas including chemistry and engineering.
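Arbitrary-precision numerics of the kind Maple's evalf provides can be illustrated with Python's standard decimal module; here the series e = sum of 1/k! is summed with guard digits (the stopping tolerance and guard-digit count are illustrative choices, not Maple's internals):

```python
from decimal import Decimal, getcontext

def e_to_precision(digits):
    """Sum the series e = sum over k of 1/k! in arbitrary-precision
    decimal arithmetic, analogous to evalf(exp(1), digits) in Maple."""
    getcontext().prec = digits + 10              # work with guard digits
    eps = Decimal(10) ** -(digits + 5)           # stop once terms are negligible
    term, total, k = Decimal(1), Decimal(0), 0
    while term > eps:
        total += term
        k += 1
        term /= k                                # next term: previous / k
    getcontext().prec = digits
    return +total                                # unary + rounds to prec digits

print(e_to_precision(50))
```

The factorial series converges fast: fewer than fifty terms already give fifty correct digits of e.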

Abstract:

The US will spend a billion dollars on nanotechnology in 2004. In Michael Crichton’s novel Prey, miniature robots escape from a secure research laboratory in the Nevada desert, threatening to engulf the world. Medical companies claim that nanotechnology could catch cancer before it spreads, seeking and destroying cells at the first sign of mutation.

In this keynote lecture for the opening ceremonies of the Laurier Science Research Centre, Dr. Sargent will describe what nanotechnology is and what it can do. “Nano” refers to one billionth of a metre: the size of a few atoms clustered together to form a molecule. Atoms and molecules are dominated by different forces and governed by different rules when they interact on the scale of the nanometer. When the semiconductor industry first faced this fact, it was viewed as a major obstacle to continued growth in performance and profit; today, the chance to exploit the distinct rules of the nanoscale game is viewed as a huge opportunity.

Dr. Sargent will draw from his own research to illustrate what can be done with nanotechnology. For example, in the biomedical arena, optical tags – light-emitting materials – can be created that produce their characteristic signatures in a spectral window in which living tissue is largely transparent. DNA wrappers on these nanoparticles can render them stable in biological systems over weeks. This discovery opens doors to applications in vivo in human patients to screen systematically for cancer – prostate tumors, for example – at its earliest stages, before it has had an opportunity to spread.

In communications and computing, service convergence demands a corresponding coming together of underlying platform technologies. Today, the physical technologies underlying wireline, wireless, and optical networks are mutually incompatible. Dr. Sargent will describe progress made toward using tailored nanomaterials to enable an optical network that can reside on silicon, the canonical platform for computing. The strategy is based on the realization of semiconductor quantum dots that can be spin- or spray-coated onto any substrate – even onto clothing. The materials have proven efficient as sources of infrared light; as optical detectors for networking and night-vision imaging; and as modulators for putting signals onto the network or for friend-foe identification in the battlefield. Wired Magazine recently covered Dr. Sargent and his colleagues’ breakthrough in making nanosized buckyballs process optical information in one picosecond, potentially enabling a Terabit-per-second network.

Professor Ted Sargent is Visiting Professor of Nanotechnology at the Microphotonics Centre at MIT. In 2003 he was named “one of the world’s top young innovators” by MIT’s Technology Review magazine (audience > 1M worldwide). Technology Review’s TR100 is a group of 100 creative individuals under age 35, drawn from a broad spectrum of fields, whose research will shape how we live and work in the future.

Dr. Sargent is the author of well over one hundred papers published in international refereed journals and presented at international conferences. Since 2003 he has given invited lectures at MIT, Stanford, UCLA, Oxford, Cambridge, and University College London. He has addressed scientific researchers at the leading technical conferences in the US, Japan, and Europe in the areas of nanotechnology, photonics, and optical networking. His upcoming speaking engagements include the opening plenary address at the First International Society for Biological Engineering Conference on Bioengineering and Nanotechnology in Singapore; the opening scientific address at Silicon Valley’s Nano Electronics and Photonics Forum; and an address to the US National Academy of Sciences’ Keck Futures Initiative Designing Nanostructures Conference.

In 2002, Dr. Sargent was honoured by the New York-based Institute of Electrical and Electronics Engineers (IEEE) “For groundbreaking research in applying new phenomena and materials from nanotechnology towards transforming fibre-optic communications systems into agile optical networks.” The IEEE is the technical professional association of electrical engineers, with more than 380,000 members in 150 countries.

This seminar was sponsored by the Faculty of Science as part of the Research Centre Opening event.

A multiple supplier inventory model with random discount offerings

A mathematical study of disinfection of
bacterial biofilms with antibiotics

Hermann Eberl

University of Guelph

Abstract:

Many bacterial infections in the human body are caused by
bacterial biofilms, ranging from fatal ones, such as the chronic lung
infections associated with cystic fibrosis, to less dangerous but more
widespread ones like dental caries. These biofilms are communities
of bacteria embedded in a polymeric network. Life in the biofilm
community offers the microorganisms protection against harmful
environmental factors, which makes them more resistant to
antibiotics than planktonic bacteria.

In this talk we derive a set of mesoscopic model equations for a
prototype biofilm and discuss the application of the model to the
study of biofilm survival in the presence of antibiotics. The
model consists of a system of semi-linear and quasi-linear
parabolic partial differential equations. Analytical results include
existence, uniqueness, boundedness, and stability of solutions.
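A system of the kind described above can be sketched in the following hedged form; this is an illustrative prototype with assumed reaction terms and rate constants (M = biomass density, S = nutrient, B = antibiotic), not necessarily the exact system of the talk:

```latex
% Illustrative sketch of a coupled biofilm-disinfection model (assumed form).
% The S and B equations are semi-linear; the M equation is quasi-linear,
% with a density-dependent diffusion coefficient D(M).
\begin{align}
  \partial_t S &= d_S \,\Delta S \;-\; k_1 \,\frac{S\,M}{k_2 + S}, \\
  \partial_t B &= d_B \,\Delta B \;-\; k_3 \, B\, M, \\
  \partial_t M &= \nabla \cdot \bigl( D(M)\, \nabla M \bigr)
                  \;+\; k_4 \,\frac{S\,M}{k_2 + S} \;-\; k_5 \, B\, M, \\
  D(M) &= \delta \,\frac{M^{a}}{(1-M)^{b}}, \qquad a, b \ge 1 .
\end{align}
```

The quasi-linear character enters through D(M): diffusion of biomass degenerates as M → 0 and blows up as M → 1, which confines growth to a sharp biofilm/liquid interface while keeping the biomass density bounded below one.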