Abstract
III-nitride based multiple quantum well structures are the active region of choice for LEDs and lasers from the ultraviolet to the visible spectral regime. For maximum efficiency, the electron and hole carrier densities need to be distributed uniformly in the quantum wells at all relevant operating conditions. We analyze the carrier injection and distribution in polar and semi-polar LEDs using a microscopic carrier transport model. It is based on drift-diffusion currents, and quantum wells are treated as scattering centers, with self-consistent k·p band-structure calculations for the quantum well carriers. Tunneling currents at heterointerfaces are included via an effective potential model. Impurities such as Si, Mg and O are activated according to a Schottky-type model. Their distribution is found from experimental SIMS data. The activation model includes both the impurity density and the local electric field (Poole-Frenkel effect). Comparison to experimental data reveals the following findings: the hole injection efficiency depends critically on the proximity of the Mg dopants to the p-side quantum well. At the same time, electron leakage is suppressed by ionized Mg atoms forming an electrostatic barrier, which is reduced with rising current due to saturation of the acceptor levels. Impurities that form shallow donors (e.g. oxygen) also hamper the hole injection efficiency as they reduce the Mg ionization rate. In UV LEDs, special care has to be taken for electron injection into the deep quantum wells. A high-performance LED therefore requires nanoscale control of the intentional and unintentional impurity distribution. Finally, it is shown that the ideality factor of the pin diode contains information on the injection homogeneity of the LED.
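For orientation, the Schottky-type acceptor activation with Poole-Frenkel field enhancement mentioned above takes, in standard textbook form, the following shape (a sketch only; the talk's exact model may differ, and g, F, and ε denote the assumed degeneracy factor, local electric field, and static permittivity):

```latex
% Incomplete ionization of Mg acceptors with Poole-Frenkel lowering of
% the effective activation energy E_A by the local field F:
N_A^- = \frac{N_A}{1 + g \exp\!\big( (E_A - \Delta E_{\mathrm{PF}} - E_F)/k_B T \big)},
\qquad
\Delta E_{\mathrm{PF}} = \sqrt{\frac{q^3 F}{\pi \varepsilon}} .
```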

Abstract
In the past few decades, power grids across the world have become dependent on markets that aim to efficiently match supply with demand at all times via a variety of pricing and auction mechanisms. These markets are based on models that capture interactions between producers, transmission and consumers. Energy producers typically maximize profits by optimally allocating and scheduling resources over time. A dynamic equilibrium aims to determine prices and dispatches that can be transmitted over the electricity grid to satisfy evolving consumer requirements for energy at different locations and times. Computation allows large scale practical implementations of socially optimal models to be solved as part of the market operation, and regulations can be imposed that aim to ensure competitive behaviour of market participants. The recent explosion in the use of renewable supply such as wind, solar and hydro has led to increased volatility in this system. We develop models that aim to ensure enough generation capacity for the long term under various constraints related to environmental concerns, and consider the recovery of costs for this enhanced infrastructure. We demonstrate how risk can impose significant costs on the system that are not modeled in the context of socially optimal power system markets and highlight the use of contracts to reduce or recover these costs. We also outline how battery storage can be used as an effective hedging instrument.

Abstract
We consider mathematical PDE models of motility of eukaryotic cells on a substrate and discuss them in a broader context of active materials. Our goal is to capture mathematically the key biological phenomena such as steady motion with no external stimuli and spontaneous breaking of symmetry. We first describe the hierarchy of PDE models of cell motility and then focus on two specific models: the phase-field model and the free boundary problem model. The phase-field model consists of the Allen-Cahn equation for the scalar phase field function coupled with a vectorial parabolic equation for the orientation of the actin filament network. The key mathematical properties of this system are (i) the presence of gradients in the coupling terms and (ii) the mass (volume) preservation constraints. These properties lead to mathematical challenges that are specific to active (out of equilibrium) systems, e.g., the fact that variational principles do not apply. Therefore, standard techniques based on the maximum principle and Gamma-convergence cannot be used, and one has to develop alternative asymptotic techniques. The free boundary problem model consists of an elliptic equation describing the flow of the cytoskeleton gel coupled with a convection-diffusion PDE for the density of myosin motors. This PDE system is of Keller-Segel type but in a free boundary setting with a nonlocal condition involving the boundary curvature. Analysis of this system allows for a reduction to a Liouville-type equation, which arises in various applications ranging from geometry to chemotaxis. This equation contains an additional term that poses a further analytical challenge. In the analysis of the above models our focus is on establishing the traveling wave solutions that are the signature of cell motility. We also study breaking of symmetry by proving existence of non-radial steady states.
Bifurcation of traveling waves from steady states is established via Schauder's fixed point theorem for the phase-field model and via Leray-Schauder degree theory for the free boundary problem model. These results were obtained in collaboration with Jan Fuhrmann, M. Potomkin, and V. Rybalko.
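A schematic of the coupled phase-field system described above, with its two key features — gradient coupling and a volume constraint — made explicit (the coefficients and scalings are illustrative placeholders, not the exact model of the talk):

```latex
% \rho: scalar phase field, P: actin orientation field, W: double-well
% potential, \lambda(t): Lagrange multiplier enforcing volume (mass)
% preservation; the advection term P . grad(rho) is the gradient coupling.
\partial_t \rho = \Delta \rho - \frac{1}{\varepsilon^2} W'(\rho)
                  - P \cdot \nabla \rho + \lambda(t),
\qquad
\partial_t P = \varepsilon \,\Delta P - \frac{1}{\delta} P
               - \beta \, \nabla \rho .
```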

Abstract
In recent years nonlinear PDE models have been used successfully in various applications in the socio-economic sciences, for example to describe opinion formation and knowledge growth in a society, or the collective dynamics of large pedestrian crowds. In this talk we focus on two PDE models for socio-economic problems: first, a Boltzmann mean-field game approach to describe knowledge and economic growth in a society; and second, a nonlinear PDE model for interacting pedestrian flows. We start by discussing the underlying microscopic modelling assumptions as well as the corresponding mean-field equations. Then we focus on the existence and linear stability of solutions in either case. Finally we construct special solutions, which relate to sustained economic growth or segregation of flows, and illustrate the dynamics of both models with numerical simulations.

Abstract
The formation of patterns in solids is often explained as the result of a competition between an elastic bulk energy and a higher order surface energy. As an example we will discuss microstructures in shape memory alloys, which are usually modeled by singularly perturbed non-convex bulk energies. The main focus will be on recent analytical results for shape memory alloys which have been found experimentally to have particularly low hysteresis.

Abstract
Mathematical modeling is the key tool that drives modern science, technology, engineering, and medicine (STEM). It comprises data about the respective situations and phenomena, and models (e.g. differential equations) that allow one to draw inferences and make predictions from the data. STEM data has seen a lot of attention to support its collection, storage, curation, and presentation by computers, and large public and private data repositories have been established. The models themselves have largely remained in the pen-and-paper world of mathematics and have been transformed into programs on a case-by-case basis. In particular, the question of how to model the models declaratively for computer support is largely unresolved, and as a consequence we do not have collections of models as a resource. In this talk we will present the knowledge-based FrameIT method, which represents mathematical models and the knowledge backing them in a theory graph whose nodes (theories) are logic-based representations of the mathematical objects, their assumptions, and properties, and whose edges (theory morphisms) are meaning-preserving mappings. The critical realization utilized by the FrameIT method is that the application of a mathematical model can be represented as an extension of the model theory graph by special theories representing the situation and phenomena, and that the application mapping naturally becomes a theory morphism. This gives an abstract framework for modelling mathematical modelling and makes the models into represented objects that can be collected, curated, presented, and reused. The FrameIT method has been implemented and tested in the context of serious games for mathematics. We will present that work in the talk and discuss how the very simple models (high-school math) used there can be scaled to realistic models in the STEM fields.
We also discuss the OMDoc/MMT infrastructure for collection, storage, curation, and presentation of theory graphs and how this can serve as a basis for a repository of mathematical models.

Abstract
This talk will cover two complementary topics: sequential sampling and uncertainty quantification. The automation of sequential decision making in an uncertain environment, also called sequential sampling, is a fundamental goal of modern artificial intelligence and machine learning. A simple example of sequential sampling is the best arm identification bandit setting [1], which is a discrete, unstructured and noisy optimisation problem. Recent results by the author [5] addressing a conjecture from [1] will be presented. They highlight the importance of uncertainty quantification for sequential sampling and for risk-aware decision making. Empirical uncertainty quantification of estimation procedures can be simple in parametric, low-dimensional situations. However, it becomes challenging and often problematic in high- and infinite-dimensional models. Indeed, adaptivity to the unknown model complexity becomes key in this case, and uncertainty quantification becomes akin to model estimation, see [6,4]. Such model-adaptive uncertainty quantification can be formalised through the concept of adaptive and honest confidence sets [2]. Recent results by the author [3,4] related to this concept will be presented.
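To make the best arm identification setting concrete, here is a minimal successive-elimination sketch (a standard textbook algorithm, not the methods of [1] or [5]; arm means, the confidence radius, and the stopping rule are illustrative choices):

```python
import numpy as np

def successive_elimination(means, delta=0.05, rng=None):
    """Toy best-arm identification: sample every surviving arm once per
    round and drop arms whose upper confidence bound falls below the
    empirical leader's lower confidence bound."""
    rng = np.random.default_rng(rng)
    k = len(means)
    active = list(range(k))
    totals = np.zeros(k)
    t = 0
    while len(active) > 1:
        t += 1
        for a in active:
            totals[a] += rng.random() < means[a]   # Bernoulli sample
        # Anytime confidence radius (crude union bound over arms/rounds).
        rad = np.sqrt(np.log(4 * k * t * t / delta) / (2 * t))
        leader = max(active, key=lambda a: totals[a])
        active = [a for a in active
                  if totals[a] / t + rad >= totals[leader] / t - rad]
    return active[0]

print(successive_elimination([0.2, 0.5, 0.9], rng=0))
```

With the gaps above the eliminations happen quickly; the algorithm's sample complexity scales with the inverse squared gaps between the best and suboptimal arms.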

Probability theory was axiomatically built on the concept of measure by A. Kolmogorov in the early 1930s, giving the probability measure and the related integral as primary objects, and random variables, i.e. measurable functions, as secondary. Not long after Kolmogorov's work, developments in operator algebras connected to quantum theory in the early 1940s led to similar results in an approach where algebras of random variables and the expectation functional are the primary objects. Historically, this picks up the view implicitly contained in the early probabilistic theory of the Bernoullis.

This algebraic approach allows extensions to more complicated concepts like non-commuting random variables and infinite dimensional function spaces, as occur e.g. in quantum field theory, random matrices, and tensor-valued random fields. It not only fully recovers the measure-theoretic approach, but can extend it considerably. For much practical and numerical work, which is often primarily concerned with random variables, expectations, and conditioning, it offers an independent theoretical underpinning. In short, it is "probability without measure theory".

This functional analytic setting also has strong connections to the spectral theory of linear operators, where analogies to integration are apparent if they are looked for. These links extend in a twofold way to the concept of weak distribution, which describes probability on infinite dimensional vector spaces. Here the random elements are represented by linear mappings, and factorisations of linear maps are intimately connected with representations and tensor products, as they appear in numerical approximations.

Taking this conceptual basis of vector spaces, algebras, linear functionals, and operators gives a fresh view on the concepts of expectation and conditioning, as they occur in applications of Bayes' theorem.

Abstract
We will report on novel nano-patterned composite materials for sensing and energy harvesting. We rely on a wealth of nano-composites composed of silicon (Si) as well as GaN nanowires (NWs) from chemical vapor deposition (CVD) based processing, including NW shaping based on reactive ion etching, plasmonic nano-structures realized by focused ion beam (FIB) patterning of thin metal layers or particles (silver, gold), and few-layer graphene from CVD processing. The NWs show large surface areas that can be chemically functionalized to provide sensing functionality. The graphene requires proper chemical treatment to permit the attachment of functional groups. The chemical routes to nano-material functionalization will be reported. Sensing mechanisms with functional nano-materials can be, e.g., of electrical, optical or mechanical nature. Examples will be given for selected cases. Emphasis will reside on optical detection based on the surface-enhanced Raman scattering (SERS) effect. We will report SERS of graphene layers transferred onto arrays of split-ring resonators (SRRs). Raman enhancement factors per area of up to 75 demonstrate the strong plasmonic coupling between graphene and the metamaterial resonances. The SRR/graphene material offers a perspective to control SERS and may pave the way towards advanced SERS substrates that could lead to the detection of single molecules attached to graphene for bio-chemical sensing. For energy harvesting applications we develop Si and GaN nanowires with which Mie modes can be controlled and efficient absorption of solar light can be tuned. The integration of NWs in thin-film solar cell concepts with an efficiency potential >>15% will be demonstrated. The same type of Si NWs and their applicability to solar fuel generation will be shown.

Abstract
A recurring mathematical challenge in developing simulation techniques to describe and predict materials properties on the computer is the occurrence of interacting many-particle systems on all relevant length and time scales. In the talk I will give a brief overview of the concepts and methods we have developed over the last years that allow us to describe materials properties from the quantum mechanical level all the way up to engineering quantities. Particular emphasis will be put on novel algorithmic strategies to coarse-grain the sampling of stochastic high-dimensional configuration spaces accurately yet computationally efficiently. Applying these strategies allows us to use highly accurate but computationally expensive ab initio calculations as input and thus to reach a predictive accuracy that was hitherto not achievable. The performance of this approach will be discussed for a few examples: the design of next-generation ultra-high-strength steels, medical implants with tailored elastic properties, and also how to address failure mechanisms that often delay or prevent the introduction of new materials.

Abstract
The main aim of the talk will be to show how to reinterpret the concept of weak solution satisfying a suitable energy conservation and entropy inequality - recently introduced by E. Feireisl and coauthors for a problem of heat conduction in fluids - in order to deal with certain classes of complex fluid dynamics. Indeed, in many cases the resulting PDE systems display high-order nonlinearities, due mainly to quadratic forcing terms. The main idea consists in replacing, in the weak formulation, these PDEs by an equality representing energy conservation, complemented with a differential inequality describing the production of entropy. In this way, the thermodynamical consistency is preserved, but the entropic formulation is more tractable mathematically. This solution notion has already been successfully applied to the analysis of non-isothermal liquid crystal models and to the case of the evolution of a non-isothermal mixture of two different viscous incompressible fluids of the same density.

Abstract
In this talk an overview is given of multirate time integration methods developed over the years for the integration of the compressible Euler equation and advection-diffusion-reaction equations. Physical and chemical processes in the atmosphere occur on different time scales ranging from seconds to hours and days: waves of different speeds in the atmosphere, turbulent diffusion, condensation of water vapor, temporal emission patterns, or photochemistry. Further time scales come into play due to the shallow nature of the atmosphere and the huge computational domain in the case of simulations of the whole globe. To handle these problems, anisotropic grids with different grid sizes in the vertical and horizontal directions are used, and additional local grid refinement is applied in the horizontal direction. In our solution strategy we follow the method of lines: first discretize in space, then solve in a second stage a huge system of ordinary differential equations (ODEs) in time. Most of the proposed time integration methods originate from the idea of source splitting and the recursive use of Runge-Kutta-like methods. At each stage of these Runge-Kutta methods an ODE again has to be solved, but now only a part of the original right-hand side is taken into account, whereas the rest acts as an additional constant source term. This idea can then be applied again to the subproblems. For these methods order conditions are derived, and different strategies for determining reliable schemes with good stability properties will be explained. Two special applications are multirate methods for the advection equation on locally refined grids and generalized split-explicit methods for the compressible Euler equation.
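The frozen-source idea above — integrating the fast part with small substeps while the slow tendency enters as a constant source — can be sketched in a few lines (an illustrative toy, not the talk's actual schemes: the outer method is a single macro step and the inner integrator is plain forward Euler):

```python
import math

def multirate_step(y, dt, f_fast, f_slow, m):
    """One macro step: the slow right-hand side is evaluated once,
    frozen over the macro step, and added as a constant source to the
    fast sub-integration (forward Euler with m substeps)."""
    s = f_slow(y)            # frozen slow tendency
    h = dt / m
    for _ in range(m):
        y = y + h * (f_fast(y) + s)
    return y

# Toy split problem: y' = -10*y (fast relaxation) + 1 (slow forcing).
f_fast = lambda y: -10.0 * y
f_slow = lambda y: 1.0
y, dt = 1.0, 0.01
for _ in range(100):                      # integrate to t = 1
    y = multirate_step(y, dt, f_fast, f_slow, m=10)
exact = 0.1 + 0.9 * math.exp(-10.0)       # exact solution at t = 1
print(abs(y - exact))
```

In real split-explicit atmospheric solvers the same structure appears with acoustic modes as the fast subsystem and advection/physics as the frozen slow source.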

Abstract
Impact cratering on planetary surfaces is one of the most important geological processes in the solar system. Cratered landscapes such as those on the Moon, Mars or Mercury testify to the importance of collision events during the evolution of planets. Although remnants of meteorite impacts are rare on Earth, it is generally accepted that impact events played an important role in the evolution of the biosphere. On the other hand, impacts pose a threat to life on Earth: 65 million years ago the dinosaurs were wiped out by the impact of a 10 km diameter asteroid striking the Earth at approx. 20 km/s. A quantitative understanding of impact processes can be obtained by the analysis of remnants of impacts from the geological record, by analogue experiments, and by numerical modelling. The latter is the main topic of this presentation. Numerical simulations of meteorite impacts require a special type of computer code, so-called "hydrocodes". A hydrocode may be loosely defined as a code designed to solve large-deformation, finite-strain transient problems that occur on a short time scale. While material strength is neglected in Eulerian codes used for gas dynamics, it is a key component of hydrocodes. In contrast to structural analysis codes, the energy equation is integrated in time, and the deviatoric and pressure terms in the stress tensor are usually modelled separately. The solution is advanced in time using an explicit integration scheme because stress waves and shocks are an important part of the solution, and they must be resolved accurately in both space and time. We will describe modelling approaches to investigate hypervelocity impact processes and shock wave propagation on different scales, ranging from millimetres to thousands of kilometres.
Besides an efficient parallelized numerical solution of partial differential equations by finite-volume techniques on Eulerian grids, the parameterisation of material properties poses the biggest challenge in the simulation of impact processes. Material modelling, in terms of thermodynamic behaviour and mechanical response under large, rapid deformation, is key for a realistic description of the processes.
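The explicit, CFL-limited Eulerian update underlying such codes can be illustrated in its simplest possible form — scalar advection on a periodic 1D finite-volume grid (a hypothetical toy: real hydrocodes solve the full conservation laws with strength models and equations of state):

```python
import numpy as np

def upwind_advect(u, c, dx, t_end, cfl=0.9):
    """First-order upwind finite-volume advection (c > 0) on a periodic
    1D Eulerian grid; the explicit time step is limited by the CFL
    condition so that waves are resolved in both space and time."""
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / abs(c), t_end - t)
        nu = c * dt / dx
        u = u - nu * (u - np.roll(u, 1))   # conservative flux difference
        t += dt
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)       # smooth initial pulse
u1 = upwind_advect(u0.copy(), c=1.0, dx=x[1] - x[0], t_end=0.25)
print(u1.sum() * (x[1] - x[0]))            # total "mass" is conserved
```

The conservative flux-difference form is what guarantees exact discrete conservation, a property shock-capturing codes depend on.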

Abstract
Within the semiconductor industry, pure leading-edge foundries serve a special mission by delivering state-of-the-art competitive logic performance with a strong focus on system-on-chip (SoC). They therefore have to support a broad portfolio of different technology options on each node, and GLOBALFOUNDRIES is an industry leader representing this particular business model. To achieve the "high performance per watt" figure of merit, technology elements like PD-SOI, strained Si, aggressive junction scaling and HKMG are needed, together with an efficient multiple-core and power-efficient design. These technology elements were developed and optimized over multiple generations, from a 180 nm down to a 32/28 nm technology. Taking the next steps of device scaling into 20 nm VLSI CMOS technologies and beyond poses several challenges, which will be discussed in the presentation. Future technologies will push planar transistor technology to its physical limits, and subsequent technologies require substantial innovations in process architecture and 3D device concepts. Two possible technology approaches will be discussed, Multi-Gate-FETs/FinFETs and Extremely Thin (ET) SOI/Ultra-Thin Box (UTB) devices, to continue More Moore and More than Moore.

Abstract
Convection driven by gravitational instability is a widespread phenomenon in geophysical fluid dynamics and in the dynamics of stars and planetary interiors. In contrast to quasi-homogeneous small-scale turbulence, convection is distinguished by the cell-like coherent structure of the flow. The absence of wind shear and the nonlinear dependence of freshwater density on temperature in the vicinity of 4°C are characteristic features of ice-covered lakes. Therefore, a variety of convective flows develop there, driven by solar radiation penetrating the ice and by salt fluxes at the ice-water and water-sediment interfaces. In our field studies on ice-covered lakes, we apply modern measurement techniques, like microstructure profiling and acoustic Doppler velocimetry, which deliver direct estimates of the mixing characteristics, comparable with the output of eddy-resolving LES and DNS models. Providing a rare example of 'pure' convection under natural conditions, ice-covered lakes serve as natural laboratories for the investigation of convective mixing and for the testing of CFD models. Apart from general geophysical studies, a number of ecological applications exist, where field studies on convection can be effectively combined with advanced mathematical modelling of convective mixing.

Further Information
Part I of the WIAS launching event for "Mathematics of Planet Earth 2013".

Abstract
Our study on electrolytes takes a thermodynamically consistent coupling between mechanics and diffusion into account. It removes some inherent deficiencies of the popular Nernst-Planck model. A boundary problem for the equilibrium processes is used to illustrate the new features of our model.

Nucleation and growth are the key kinetic mechanisms in phase transformation models arising in the crystallization of polymer materials. In each stage, the nucleation rate and the growth rate are the crucial coefficients describing the kinetics of the process as well as the properties of the specimens. Moreover, identification of these physical parameters describing the nucleation or growth mechanisms is essential for controlling the crystallization of polymers, and is thus a significant subject also from an industrial viewpoint.

In this talk, we will revisit the time cone approach of Cahn (1996), where a hyperbolic governing equation is derived for a heterogeneous nucleation rate and a spatially homogeneous growth rate. As for the inverse problem, by utilizing the eigenfunction expansion method, we investigate the identification of the growth rate for an isothermal one-dimensional specimen. Such a problem can be seen as an inverse coefficient problem for a hyperbolic equation which is highly nonlinear with respect to the observation data. A two-step Tikhonov-type regularization method is proposed to reconstruct the growth rate from final-time noisy observation data. Numerical prototype examples are presented to illustrate the validity and effectiveness of the proposed scheme.
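The stabilizing effect of Tikhonov regularization on such noisy inversions can be shown on a linear toy problem (a generic one-parameter sketch; the talk's two-step method and hyperbolic forward model are not reproduced here, and the matrix and noise level are illustrative):

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Solve min_x ||A x - b||^2 + alpha * ||x||^2 via the regularized
    normal equations (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Ill-posed toy problem: rapidly decaying singular values.
n = 8
sigma = 10.0 ** -np.arange(n)          # 1, 1e-1, ..., 1e-7
A = np.diag(sigma)
x_true = np.ones(n)
b = A @ x_true + 1e-4                  # small deterministic data error
x_naive = np.linalg.solve(A, b)        # error amplified by 1/sigma_min
x_reg = tikhonov(A, b, alpha=1e-6)     # damped, stable reconstruction
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The regularization trades a small bias on the well-determined components for suppression of the catastrophic noise amplification on the small singular values; choosing alpha against the noise level is the usual parameter-choice question.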

Prof. Dr. Vincenzo Capasso, University of Milano, Department of Mathematics:
The role of geometric randomness in the mathematical modeling of Angiogenesis
Location: Weierstraß-Institut, Mohrenstr. 39, 10117 Berlin, Erdgeschoss, Erhard-Schmidt-Hörsaal

A complete description of the geodesic curves on the Riemannian manifold of multivariate normal distributions equipped with the Fisher information metric was accomplished by Eriksen in 1987, and later, in a different manner, by Calvo and Oller in 1991. The former describes geodesic curves in terms of an exponential map in a somewhat mysterious way, while the latter obtains a solution of the differential equation of a geodesic curve explicitly by solving a much more general system of differential equations. The method Eriksen took seems to have a group-theoretic nature, though this is still unclear. The purposes of this talk are to derive the explicit formula of the geodesic curve from the result obtained by Eriksen and to clarify why such an exponential map gives geodesic curves in the one-dimensional normal distribution case.
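For the one-dimensional case discussed at the end of the talk, the Fisher information metric has a well-known closed form (a standard computation, included for reference rather than taken from the talk):

```latex
% Fisher metric on the manifold of normal distributions N(\mu, \sigma^2):
ds^2 = \frac{d\mu^2 + 2\, d\sigma^2}{\sigma^2}.
% Rescaling x = \mu/\sqrt{2},\; y = \sigma gives 2\,(dx^2 + dy^2)/y^2,
% the Poincare half-plane metric up to a constant factor; geodesics are
% therefore vertical lines and semicircles in (x, y), i.e. half-ellipses
% in the original (\mu, \sigma) coordinates.
```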

In the beginning of the talk, the speaker will briefly introduce the newly established research institute, "Institute of Mathematics for Industry, Kyushu University".

Abstract
Many mass-produced everyday products of modern technology would appear completely magical to our ancestors: mobile phones, television, computers, electric light, cars, etc. Some devices that are still perceived as magical or mysterious are about to appear in the laboratory and are not so mysterious after all. For example, the first prototype of an electromagnetic cloaking device has recently been made at Duke University. This device makes an object invisible to microwave radiation of a single frequency and polarization. At Harvard University, the first vital steps towards levitating objects on the forces of the quantum vacuum have been made. At St Andrews, we observed the first indications of artificial black holes in the laboratory, using extremely short light pulses in photonic-crystal fibres. Invisibility devices, quantum forces and optical black holes have two things in common: they represent applications of Einstein's general relativity in Maxwell's electromagnetism, and their practical demonstrations are made possible by modern metamaterials. I will try to elucidate the scientific principles acting behind the scenes of such "pure and applied magic".

Abstract
Fluid particles like drops or bubbles play a prominent role in numerous applications like multiphase chemical reactors, fuel engines, atomization, drying of liquid sprays, heat exchange and ink-jet printing, to mention just a few. One particular development towards process intensification relies on micro-systems, which further enhances the role of inter-material interfaces and requires accurate models and a profound understanding of these. This talk surveys sharp-interface models for two-phase flows with an increasing level of physico-chemical interface properties, starting from a simple dividing interface up to the case when the interface is a phase in itself, with surface viscosity and variable surface tension - the so-called Boussinesq-Scriven surface fluid. For the different levels of interfacial properties, the corresponding mathematical models together with the main analytical results are outlined. For some of the models, results of Volume-of-Fluid (VOF)-based numerical approaches are also included.
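As a baseline for the hierarchy surveyed above, the simplest member — a clean dividing interface with constant surface tension — carries the familiar jump conditions (a sketch in common notation; sign conventions for the curvature κ vary, and the Boussinesq-Scriven surface stress would add further terms on the right):

```latex
% Momentum and velocity jump conditions across the interface \Sigma with
% unit normal n_\Sigma, surface tension \sigma, mean curvature \kappa:
[\![ -p\, I + \mu \,( \nabla u + \nabla u^{\mathsf T} ) ]\!] \cdot n_\Sigma
   = \sigma \,\kappa \, n_\Sigma ,
\qquad
[\![ u ]\!] = 0 .
```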

Abstract
An overview will be presented that summarizes recent developments in amorphous/crystalline silicon (a-Si:H/c-Si) heterojunction solar cell technology and the current understanding of the fundamental device physics. In a-Si:H/c-Si cells, device performance is crucially dependent on the quality of the a-Si:H/c-Si heterojunction. Some of the main issues will be discussed that have been identified as crucial to minimize recombination at the junction and thereby maximize cell efficiency: wet-chemical pre-treatment of the c-Si surface prior to a-Si:H deposition; optimum doping, which is a trade-off between maximizing band bending and minimizing interface recombination; and thermal and plasma post-treatments of the a-Si:H/c-Si structure. By optimizing these aspects using specifically developed characterization methods such as UV-excited photoelectron spectroscopy and surface photovoltage measurements, we were able to realize (n)a-Si:H/(p)c-Si and (p)a-Si:H/(n)c-Si cells with up to 18.5% and 19.8% efficiency, respectively. In both cases, the cells were prepared without the commonly used intrinsic buffer layer.

Abstract
The talk is organized in four parts. The first part describes requirements regarding the numerical simulation of turbulent flows. It will be shown that stochastic methods are needed which generalize deterministic methods such that the closure problems of deterministic equations are solved. The second part describes the basics of stochastic methods and their implied deterministic equations. The modeling of molecular dynamics, turbulent velocity and scalar fields will be discussed. Special emphasis is placed on the question of how existing (FDF and PDF, LES and RANS) methods can be unified. The third part describes the application of stochastic and deterministic methods to both non-reacting and reacting turbulent flow simulations. The integration of RANS and LES for swirling turbulent jet flow simulations will be discussed. It will be shown how the range of applicability of hybrid RANS/PDF methods can be extended such that nonpremixed turbulent combustion simulations can be performed. The fourth part summarizes these developments and describes future activities.

Abstract
nextnano is a versatile software package for the simulation of three-dimensional nanometer-scale quantum structures and semiconductor devices. We will outline some of the basic physical concepts and numerical methods that have been developed for nextnano, an international collaborative effort involving many physicists, mathematicians, and programmers. In addition, we will present several application examples.