Applied Mathematics Seminar

nov. 16

3D modeling of water displacement with shallow water equations presented by Elijah Meert, Department of Mathematics, Western Michigan University

Abstract: In this talk we discuss a class of hyperbolic partial differential equations called shallow water equations. Our discussion is based on the paper "Rapid, Stable Fluid Dynamics for Computer Graphics" by Michael Kass and Gavin Miller, which describes a numerical scheme for approximating solutions of the shallow water equations. This numerical scheme is used to create realistic water animations. The scheme is simple, stable, and reduces computational cost from cubic to quadratic.

In 1990, when this paper was published, many methods were already available for modeling water surfaces, such as stochastic subdivision, Fourier synthesis and wave tracking. Although these models could capture wave refraction with depth, they left a wide variety of water phenomena unexplored. For example, previous models neglected wave reflections, net transport of water, and boundary conditions with changing topology. The method introduced in "Rapid, Stable Fluid Dynamics for Computer Graphics" models these phenomena as well as wave refraction with depth.
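To give a flavor of the kind of scheme the paper proposes, here is a minimal 1-D sketch in the spirit of Kass and Miller: an implicit time step for the height-field wave equation h_tt = g·d·h_xx, where the implicit treatment is what makes the method stable for large animation time steps. The grid size, time step, reflecting boundaries, and fixed depth profile are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shallow_water_step(h, h_prev, depth, g=9.8, dt=0.02, dx=0.1):
    """One implicit step of a 1-D height-field water update.

    Solves (I - g*dt^2/dx^2 * D) h_new = 2*h - h_prev, where D is the
    second-difference operator weighted by the water depth averaged at
    cell interfaces, with reflecting (zero-flux) boundaries.
    """
    n = len(h)
    # depth averaged at cell interfaces; zero at the walls (reflecting)
    e = np.zeros(n + 1)
    e[1:-1] = 0.5 * (depth[:-1] + depth[1:])
    k = g * dt**2 / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + k * (e[i] + e[i + 1])
        if i > 0:
            A[i, i - 1] = -k * e[i]
        if i < n - 1:
            A[i, i + 1] = -k * e[i + 1]
    # implicit solve: unconditionally stable, unlike an explicit step
    return np.linalg.solve(A, 2.0 * h - h_prev)
```

Because the columns of the system matrix sum to one, the step conserves the total water volume exactly, which is one of the properties that makes such schemes attractive for animation.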

nov. 9

Mathematical modelling of Ebola epidemic: effects of vaccination and quarantine presented by Raed Abdullah, Department of Mathematics, Western Michigan University

Abstract: We discuss a new mathematical model describing the dynamics of an Ebola epidemic which includes both vaccination and quarantine processes. The characteristic feature of this model is the use of the Caputo fractional derivative of order α ∈ (0, 1] instead of the traditional time derivative. We provide a stability analysis of the disease-free equilibrium and discuss results of numerical simulations using both the Euler method and the Markov Chain Monte Carlo (MCMC) method.

These simulation results clearly demonstrate that quarantine and vaccination are very efficient ways to control an Ebola epidemic.

The talk is based on the paper "Modeling the effect of quarantine and vaccination on Ebola disease", Advances in Difference Equations, (2017) 2017:178, by Tulu, Tian and Wu.
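The Euler method mentioned in the abstract has a fractional counterpart for Caputo derivatives: the step y_{n+1} = y_n + (h^α / Γ(α+1)) f(t_n, y_n), which reduces to the ordinary Euler method when α = 1. The sketch below applies it to a hypothetical two-compartment susceptible-infected system with a quarantine removal rate; the compartments and parameter values are illustrative assumptions, not the full model of Tulu, Tian and Wu.

```python
import math

def frac_euler(f, y0, alpha, h, steps):
    """Fractional Euler method for the Caputo equation D^alpha y = f(t, y):
    y_{n+1} = y_n + h^alpha / Gamma(alpha + 1) * f(t_n, y_n)."""
    c = h**alpha / math.gamma(alpha + 1)
    y, t, out = list(y0), 0.0, [list(y0)]
    for _ in range(steps):
        dy = f(t, y)
        y = [yi + c * di for yi, di in zip(y, dy)]
        t += h
        out.append(list(y))
    return out

# Hypothetical S-I model with quarantine removal rate q (illustrative only)
beta, q = 0.3, 0.2
si_model = lambda t, y: [-beta * y[0] * y[1],
                         beta * y[0] * y[1] - q * y[1]]
```

Removing infected individuals at rate q makes the total population S + I nonincreasing along the trajectory, which is the mechanism by which quarantine suppresses the epidemic in such models.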

nov. 2

Micro-Doppler Effect: model and applications presented by David Sayre, Department of Mathematics, Western Michigan University

Abstract: Modern-day radars utilize a phenomenon called the Doppler Effect in order to evaluate the velocity of an object relative to the antenna's location.

Our discussion will focus on producing a model that includes both the Doppler Effect and Micro-Doppler Effect for a moving object. Additionally, we will discuss how the Micro-Doppler Effect can enhance our radar systems by allowing for object recognition.
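For a monostatic radar, the two-way Doppler shift is f_d = 2 v f_0 / c, while a part rotating on the target (a helicopter blade, a swinging arm) superimposes a sinusoidal micro-Doppler modulation on top of it. The sketch below encodes a standard simplified point-scatterer model; the talk's exact model may differ, and the parameter values are illustrative.

```python
import math

C = 3.0e8  # speed of light, m/s

def doppler_shift(v_radial, f_carrier):
    """Two-way (monostatic radar) Doppler shift for radial velocity v_radial."""
    return 2.0 * v_radial * f_carrier / C

def micro_doppler(t, f_carrier, omega, radius):
    """Instantaneous micro-Doppler shift of a point scatterer rotating at
    angular rate omega and radius r: (2 f0 / c) * omega * r * cos(omega t).
    A standard simplified model, used here for illustration."""
    return 2.0 * f_carrier * omega * radius * math.cos(omega * t) / C
```

Because the micro-Doppler signature oscillates at the rotation rate while the bulk Doppler shift does not, the two components can be separated in a time-frequency analysis, which is what enables the object recognition mentioned above.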

oct. 26

α-Concave functions and their applications presented by Matthew Stodola, Department of Mathematics, Western Michigan University

Abstract: It is well known that in optimization, concavity of a function defined on a convex set is useful because it guarantees that a local maximum of this function is, in fact, a global maximum. In this talk, we will describe a generalization of concavity known as α-concavity in the context of probabilistic optimization, noting that some useful properties of regular concave functions extend to α-concave functions as well.

This talk follows sections of the chapter "Optimization Models with Probabilistic Constraints" from the monograph Lectures on Stochastic Programming by Shapiro, Dentcheva and Ruszczynski.
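For a concrete picture: f is α-concave if f(λx + (1−λ)y) ≥ m_α(f(x), f(y); λ), where m_α is the generalized (power) mean, with the geometric mean at α = 0 (log-concavity) and the arithmetic mean at α = 1 (ordinary concavity). The sketch below checks this inequality numerically on a grid; the standard normal density is the textbook example of a function that is 0-concave but not concave.

```python
import math

def m_alpha(a, b, lam, alpha):
    """Generalized mean m_alpha(a, b; lam) for a, b > 0, from the
    definition of alpha-concavity."""
    if alpha == 0:                         # geometric mean: log-concavity
        return a**lam * b**(1 - lam)
    return (lam * a**alpha + (1 - lam) * b**alpha) ** (1.0 / alpha)

def is_alpha_concave_on_samples(f, xs, lams, alpha):
    """Numerically test f(lam*x + (1-lam)*y) >= m_alpha(f(x), f(y); lam)
    over a finite grid of points and mixing weights."""
    for x in xs:
        for y in xs:
            for lam in lams:
                lhs = f(lam * x + (1 - lam) * y)
                if lhs + 1e-12 < m_alpha(f(x), f(y), lam, alpha):
                    return False
    return True

# standard normal density: 0-concave (log-concave) but not concave
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
```

Log-concavity of densities like phi is exactly what makes α-concavity useful in probabilistic optimization: it transfers to probabilities of convex events and so guarantees well-behaved probabilistic constraints.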

oct. 19

Examples of neural networks with hidden layers presented by Melinda Koelling, Ph.D., Department of Mathematics, Western Michigan University

Abstract: At the start of this semester, we heard about the mathematical theory (or, the lack of mathematical theory) for deep learning neural networks.

In this talk, I will discuss some instructive examples of neural networks with hidden layers. We will look at a simple example where a hidden layer is necessary. Then we'll look at some examples to help us understand why it is plausible that we can approximate an arbitrary function with a neural network with a single hidden layer.
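The classic example where a hidden layer is necessary is XOR: its outputs are not linearly separable, so no single-layer perceptron can compute it, while two hidden units suffice. A minimal sketch with hand-set weights (the particular thresholds are one convenient choice, not unique):

```python
def step(z):
    """Heaviside step activation."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two hidden units: an OR-like unit and an AND-like unit.
    The output fires when OR is on but AND is off -- exactly XOR."""
    h1 = step(x1 + x2 - 0.5)   # fires on x1 OR x2
    h2 = step(x1 + x2 - 1.5)   # fires on x1 AND x2
    return step(h1 - h2 - 0.5)
```

The hidden layer remaps the four inputs into a space where a single threshold can separate the classes, which is the intuition behind the single-hidden-layer approximation results mentioned above.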

oct. 12

11 a.m.

No-Arbitrage principle in conic finance presented by Mehdi Vazifedan, Department of Mathematics, Western Michigan University

Abstract: The "No-Arbitrage" characterization has long been established in one-price financial models as the Fundamental Theorem of Asset Pricing (FTAP). In a one-price economy, FTAP establishes that no-arbitrage is equivalent to the existence of an equivalent martingale measure. In fact, such an equivalent measure can be derived as the unit normal vector of the hyperplane that separates the attainable gain subspace and the convex cone representing arbitrage opportunities. However, in a two-price financial model (where there is a bid-ask price spread), the set of attainable gains is no longer a subspace. We use convex optimization and the conic property of this region to characterize the No-Arbitrage principle in financial models with a bid-ask price spread present. This characterization will lead us to the generation of a set of ordered pairs of martingale measures and discount random variables. Given such a set, we can find the lower and upper bounds (sub-hedging and super-hedging bounds) for the price of any future cash flow. We will show that for any given cash flow whose price is outside the above range, we can build a trading strategy that provides one with an arbitrage opportunity. We will generalize this structure to any two-price finite-period financial model.
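The one-price FTAP is easiest to see in the smallest possible setting: a one-period, two-state (binomial) market, where no-arbitrage reduces to d < 1 + r < u, i.e., the risk-neutral (martingale) probability lies strictly between 0 and 1. A minimal sketch, using the standard binomial-model notation u, d, r rather than anything specific to the talk:

```python
def risk_neutral_prob(u, d, r):
    """Risk-neutral up-probability q in a one-period binomial market
    with up factor u, down factor d, and riskless rate r.
    q makes the discounted stock price a martingale:
    S0 = (q * u * S0 + (1 - q) * d * S0) / (1 + r)."""
    return (1.0 + r - d) / (u - d)

def has_arbitrage(u, d, r):
    """FTAP in miniature: arbitrage exists iff no equivalent martingale
    measure exists, i.e. iff q falls outside the open interval (0, 1)."""
    q = risk_neutral_prob(u, d, r)
    return not (0.0 < q < 1.0)
```

For instance, if d > 1 + r the stock dominates the bond in every state, so borrowing to buy stock is an arbitrage, and indeed q drops out of (0, 1). The conic-finance generalization in the talk replaces this single measure with a set of measure-discount pairs.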

Abstract: In 1962, Edmund Phelps pioneered a utility-based analysis of personal savings and consumption using methods of dynamic programming. In the 1970s, Nils Hakansson generalized Phelps' optimal investment and consumption model by allowing multiple investment vehicles for a class of utility functions. Phelps' and Hakansson's results (including their explicit formulae for the optimal consumption strategy) influenced later developments in the field and have been cited in several hundred research papers.

Recently we discovered that their original results contain some errors. In this talk we illustrate these errors using numerical computations and indicate where the proofs are incorrect. Because of the importance of the Phelps and Hakansson models, we formulate the multi-stage problem as an aggregated constrained convex programming problem (instead of the dynamic programming problem set up by Phelps and Hakansson). Then we convert the problem to an unconstrained one using the Lagrange multiplier rule. In view of its complexity, we are not able to obtain an explicit solution. However, we can use an expectation to aggregate the necessary (and sufficient) conditions for optimality and derive approximate solutions. These solutions will be explained in this talk.
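The Lagrange-multiplier route described above can be illustrated on a deliberately tiny cousin of these models: a two-period consumption problem with log utility, max ln c1 + β ln c2 subject to c2 = R(W − c1). The first-order condition 1/c1 = β/(W − c1) gives c1 = W/(1 + β), independent of the return R. This is only a toy under stated assumptions, not the Phelps-Hakansson multi-period model itself.

```python
import math

def optimal_c1(W, beta):
    """First-period consumption from the Lagrange/first-order condition
    of max ln(c1) + beta * ln(R * (W - c1)): c1 = W / (1 + beta).
    Note the answer does not depend on the gross return R."""
    return W / (1.0 + beta)

def best_c1_grid(W, beta, R, n=10000):
    """Brute-force grid search over c1 in (0, W), as a numerical check
    on the closed-form first-order condition."""
    best_u, best_c = -float("inf"), None
    for i in range(1, n):
        c1 = W * i / n
        u = math.log(c1) + beta * math.log(R * (W - c1))
        if u > best_u:
            best_u, best_c = u, c1
    return best_c
```

This kind of numerical cross-check against a closed-form candidate is exactly the style of computation that exposes errors in published explicit formulae.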

oct. 5

Abstract: In this second talk we discuss the relation between optimization and the training of Deep Learning networks, Convolutional Neural Networks, and applications of Deep Learning neural networks in computer vision, speech recognition and natural language processing.

Significant progress during the last decade in developing Machine Learning methodology (especially its Deep Learning version) for such applications as speech recognition, machine translation, computer vision, bioinformatics, etc., was manifested in such successful commercial products as Apple's Siri, Microsoft's Cortana, Amazon's Alexa and Google Assistant. Deep Learning neural networks are characterized by multi-layer structures of intermediate knowledge representations which are built as a result of training the networks on sets of raw data and consecutive testing.

sept. 28

Deep learning neural networks: paradigm and mathematical foundations I presented by Yuri Ledyaev, Ph.D., Department of Mathematics, Western Michigan University

Abstract: Significant progress during the last decade in developing Machine Learning methodology (especially its Deep Learning version) for such applications as speech recognition, machine translation, computer vision, bioinformatics, etc., was manifested in such successful commercial products as Apple's Siri, Microsoft's Cortana, Amazon's Alexa and Google Assistant. Deep Learning neural networks are characterized by multi-layer structures of intermediate knowledge representations which are built as a result of training the networks on sets of raw data and consecutive testing.

Currently, in spite of the impressive achievements of Deep Learning in many applications, there is no general mathematical theory which explains the efficiency of this approach for these applications, or its failure in some cases (for example, over-fitting phenomena).

In this series of two talks we discuss basic characteristics of the design of Deep Learning neural networks and the variety of mathematical methods used in this design.