Mathematical Modeling in Industry X - A Workshop for Graduate Students

Content

The IMA is holding a 10-day workshop on Mathematical Modeling in Industry. The workshop is designed to give graduate students and qualified advanced undergraduates first-hand experience in industrial research.

Format

Students will work in teams of up to six under the guidance of a mentor from industry. The mentor will guide the students through the modeling process, analysis, and computational work associated with a real-world industrial problem. Each team will present a progress report during the period. In addition, each team will be expected to give a final oral presentation and submit a written report at the end of the 10-day period.

Application Procedure

Graduate students and advanced undergraduates are invited to apply. An application form must be submitted to the IMA. In addition, two letters of recommendation are required; one must be from the student's advisor, director of graduate studies, or department chair. Prerequisites vary by project, but computational skills are important.

The IMA will cover local living expenses and will pay airfare for the math modeling participants. Selection will be based on background and statement of interest, as well as geographic and institutional diversity. Women and minorities are especially encouraged to apply. Applications must be completed by April 15, 2006 for full consideration; early submissions are encouraged. Successful applicants will be notified by April 30, 2006.

Completed application forms and letters of recommendation should be addressed to "Math Modeling Committee" and emailed to mm-applications@ima.umn.edu. Text, PDF, or PostScript files are preferred.

The goal of this project is to develop a set of algorithms implemented in software (such as Matlab) that reads and analyzes a birefringence map for a glass sample after exposure to a UV laser. The purpose of the analysis is to characterize how much strain (density change) has been produced in the glass by the laser exposure. This result can be reduced to a single number (the density change) but should be accompanied by some kind of error bar or quality of fit assessment. The analysis is to be performed in several steps, each of which offers opportunities for algorithm design and optimization:

1. A baseline measurement is read from a data file. This gives the birefringence of the glass sample prior to any laser exposure.

2. An experimental data file is read in, giving the birefringence field of the same sample after laser exposure. It is necessary to align the two fields of data so that the baseline can be subtracted from the post-exposure field. The alignment involves a two-dimensional translation (no rotation or scale change), but the translation may well be a sub-pixel value. (Typically the data sets are on a uniform grid of 0.5 mm spacing, which is a little coarser than some of the features we hope to study.) After subtraction, the resulting field of data represents only the laser-induced birefringence, without artifacts due to the initial birefringence of the sample.

3. A theoretical birefringence field is read in. This has been calculated assuming a nominal fractional density change (e.g. 1 ppm) and takes into account the sample boundary conditions and exposure geometry. The theoretical birefringence field must be aligned with the subtracted field calculated above, again with a sub-pixel shift, and then the density value giving the best agreement between theory and measurement should be deduced. Theory and experiment are compared in Figure 1.
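Steps 2 and 3 above, sub-pixel alignment followed by a best-fit density with an error bar, can be sketched as follows. This is a minimal illustration, not the required implementation: the function names are ours, the alignment uses FFT cross-correlation with a parabolic refinement of the peak, and the error bar is an ordinary least-squares residual estimate.

```python
import numpy as np

def _parabolic(cm1, c0, cp1):
    # Fractional offset of the vertex of a parabola through three samples.
    denom = cm1 - 2.0 * c0 + cp1
    return 0.0 if denom == 0 else 0.5 * (cm1 - cp1) / denom

def estimate_shift(baseline, exposed):
    """Estimate the 2-D translation (dy, dx) taking `baseline` to
    `exposed`, to sub-pixel accuracy, via FFT cross-correlation and
    a parabolic fit around the correlation peak."""
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(baseline)) *
                                np.fft.fft2(exposed)))
    ny, nx = corr.shape
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    dy = py + _parabolic(corr[(py - 1) % ny, px], corr[py, px],
                         corr[(py + 1) % ny, px])
    dx = px + _parabolic(corr[py, (px - 1) % nx], corr[py, px],
                         corr[py, (px + 1) % nx])
    # Map wrap-around offsets to signed shifts.
    if dy > ny / 2:
        dy -= ny
    if dx > nx / 2:
        dx -= nx
    return dy, dx

def fit_density(measured, theory, nominal_ppm=1.0):
    """Least-squares scale between the subtracted measurement and the
    theory field computed at `nominal_ppm`; returns the density change
    in ppm and a one-sigma error bar from the fit residuals."""
    m, t = measured.ravel(), theory.ravel()
    alpha = np.dot(m, t) / np.dot(t, t)
    resid = m - alpha * t
    sigma2 = np.dot(resid, resid) / (m.size - 1)
    return alpha * nominal_ppm, np.sqrt(sigma2 / np.dot(t, t)) * nominal_ppm
```

The error bar here reflects only residual misfit under a linear model; a more careful treatment would also propagate the measurement noise and the alignment uncertainty.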

There are several features of this problem that make it mathematically more interesting:

1. Birefringence (defined as the difference in optical index of refraction for orthogonal polarizations of light) is a quantity with both magnitude and direction, but is not a vector. Manipulating and calculating birefringence fields offers some challenges.

2. Sub-pixel alignment of data sets requires some kind of interpolation scheme, such as Fourier interpolation by use of FFTs or something else. Optimizing the alignment with slightly noisy data offers some challenges.

3. The underlying physics of birefringence and why the birefringence fields look as they do (e.g. zero in the center of the exposed region, peak value just outside the exposed region) is interesting to study and understand.
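As an example of point 2, one way to apply a sub-pixel translation is the Fourier shift theorem: multiply the FFT of the field by a linear phase ramp. This sketch is our own illustration, assuming periodic boundaries and a reasonably band-limited field:

```python
import numpy as np

def fourier_shift(field, dy, dx):
    """Translate a 2-D field by (dy, dx) pixels, fractional values
    allowed, using the Fourier shift theorem. Assumes periodic
    boundaries; real input comes back real up to rounding."""
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny)[:, None]   # cycles per sample, axis 0
    kx = np.fft.fftfreq(nx)[None, :]   # cycles per sample, axis 1
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(field) * phase))
```

For integer shifts this reproduces np.roll exactly; for fractional shifts it interpolates with the full trigonometric basis, which behaves well on smooth data but can ring near sharp edges or heavy noise.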

One of the more intriguing choices of finite elements in the finite element method is B-splines. B-splines can be constructed to form a basis for any space of piecewise polynomial functions, including those which have specified continuity conditions at the junctions between the individual polynomial pieces. The classical finite element method based on B-splines for ODEs is de Boor-Swartz collocation at Gauss points. Until recently, however, extensions to more than one variable were hard to come by.
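As a small illustration of the B-spline basis (our own sketch, not part of the project code), the Cox-de Boor recursion evaluates one basis function at a time; on a clamped knot vector the functions are nonnegative and sum to one:

```python
def bspline(i, p, knots, x):
    """Value at x of the i-th B-spline of degree p on the knot
    vector `knots`, via the Cox-de Boor recursion (0/0 terms are
    treated as 0, which handles repeated knots)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:
        val += ((x - knots[i]) / (knots[i + p] - knots[i])
                * bspline(i, p - 1, knots, x))
    if knots[i + p + 1] > knots[i + 1]:
        val += ((knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1])
                * bspline(i + 1, p - 1, knots, x))
    return val
```

Repeating an interior knot lowers the continuity at that junction, which is how the "specified continuity conditions" above are encoded in the knot vector.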

This project is straightforward: we will attempt to implement a finite element method for an elliptic PDE using WEB-splines. We will test the code on a fairly simple cylindrical beam that comes from an established multi-disciplinary design optimization problem. If time permits, we will perform the actual design optimization on the given part using the WEB-spline code that we will have developed.

Required: One semester of numerical analysis, knowledge of programming.

Desired: One semester of partial differential equations.

Team 3: Cell-Foreign Particle Interactions

Mentor: Suping Lyu, Medtronic

Benjamin Cook, University of California, Los Angeles

Tanya Kazakova, University of Notre Dame

Pedro Madrid, University of Puerto Rico

Jeremy Neal, Kent State University

Miguel Pauletti, University of Maryland

Ruijun Zhao, Purdue University

The cell membrane forms a closed shell separating the cell contents (cytoplasm) from the extracellular matrix, both of which are simply aqueous solutions of electrolytes and neutral molecules. Typically, there is a net positive charge on the outside (extracellular) surface of the membrane and a net negative charge on the inside (cytoplasmic) surface, so there is a voltage drop across the membrane from the outside surface to the inside surface. The membrane itself, however, is hydrophobic and deformable. When there is an external electric field, e.g. from a charged foreign particle, the surface charge densities of the membrane can be disturbed. Because the system sits in electrolyte solutions, the electrostatic interactions need to be modeled with the Poisson-Boltzmann equation.

The problems proposed here are: (1) How are the surface charge densities of the membrane disturbed by a charged particle? What are the interactions between the particle and the membrane? (2) If the particle is smaller than the cell, when it touches the membrane surface, how does it deform the membrane, and can it pass through the membrane? Consider the following variables for the above analysis: the size and charge of the particle, the surface charge density and surface tension of the membrane, the membrane curvature and rigidity, and the particle-membrane distance. One can assume that both the particle and the cell are spheres and that the electrolyte solutions inside and outside the cell are the same. The membrane thickness (about 5 nm) is much smaller than the cell size (1 to 10 microns).
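As a warm-up for problem (1), the linearized (Debye-Hueckel) limit of the Poisson-Boltzmann equation already exposes the key length scale: charges in an electrolyte are screened over the Debye length. The sketch below is our own, for a symmetric 1:1 electrolyte and an isolated charged sphere; the full problem requires the nonlinear equation and the membrane geometry.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
N_A = 6.02214076e23          # Avogadro constant, 1/mol

def debye_length(conc_molar, eps_r=80.0, temp=298.0):
    """Debye screening length (m) of a 1:1 electrolyte at molar
    concentration `conc_molar`, in a solvent of relative
    permittivity eps_r (water ~ 80)."""
    n = conc_molar * 1000.0 * N_A          # ion pairs per m^3
    kappa_sq = 2.0 * n * E_CHARGE ** 2 / (eps_r * EPS0 * K_B * temp)
    return 1.0 / math.sqrt(kappa_sq)

def screened_potential(q, radius, r, lam, eps_r=80.0):
    """Debye-Hueckel potential (V) at distance r >= radius from the
    center of a sphere of total charge q (C), with screening
    length lam (m)."""
    prefac = q / (4.0 * math.pi * eps_r * EPS0 * (1.0 + radius / lam))
    return prefac * math.exp(-(r - radius) / lam) / r
```

At physiological ionic strength (about 0.15 M) the Debye length is under a nanometer, so particle-membrane electrostatic interactions are short-ranged compared with the cell size; the particle must approach quite closely before the membrane charges respond appreciably.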

Computerized reservoir simulation models are widely used in the industry to forecast the behavior of hydrocarbon reservoirs and connected surface facilities over long production periods. These simulation models are increasingly complex and costly to build and often use millions of individual cells in their discretization of the reservoir volume. Simulation processing time and memory requirements increase constantly and even the utilization of ever faster computers cannot stem the growth of simulation turnaround time.

On the other hand, decision makers in reservoir and field management need to quickly assess the risks associated with a certain model and production strategy and need to come up with high/low scenarios for NPV and the likelihood of these scenarios. To achieve reduced turnaround time in this difficult environment, reservoir engineers and applied mathematicians employ optimization techniques that use surrogate models (i.e. a response surface) to perform these tasks – the costly simulation model is used to seed the design space and to assist with local refinement of the surrogate model.

Task:

The project team will face an interesting and challenging task, subdivided into three steps:

The team creates a response surface model for a given reservoir using a simplified black-oil reservoir simulator to seed the design space. The challenge is to avoid factorial decomposition of the input parameters and still obtain a relevant distribution of points within the design space.
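One standard way to seed the design space without a factorial grid is Latin hypercube sampling: each parameter range is cut into n equal strata and every stratum is used exactly once. A minimal sketch of our own (real studies would add space-filling optimization of the sample):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Draw n_samples points in the box given by `bounds`, a list of
    (low, high) pairs, so that each coordinate is stratified: every
    one of the n_samples equal slices of each parameter range
    contains exactly one point."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    # One independent permutation of the strata per parameter.
    strata = np.array([rng.permutation(n_samples) for _ in range(dim)]).T
    u = (strata + rng.random((n_samples, dim))) / n_samples
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + u * (hi - lo)
```

Compared with a full factorial design, which needs k^d simulator runs for k levels in d dimensions, n runs here cover every one-dimensional stratum regardless of d.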

Once the response surface model is built, the team will use it to investigate certain scenarios and come up with P10, P50 and P90 parameter estimates. In part two of this step, the NPV will be optimized for each scenario.
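Once the surrogate is cheap to evaluate, scenario percentiles can be obtained by plain Monte Carlo over the uncertain inputs. A sketch under our own conventions (Pxx below is the xx-th percentile of the simulated distribution; the petroleum exceedance convention simply swaps the P10 and P90 labels):

```python
import numpy as np

def scenario_percentiles(surrogate, draw_inputs, n=20000, seed=None):
    """Propagate input uncertainty through a cheap surrogate model.
    `surrogate` maps an input to a scalar response (e.g. NPV);
    `draw_inputs` maps an RNG to one random input sample."""
    rng = np.random.default_rng(seed)
    vals = np.fromiter((surrogate(draw_inputs(rng)) for _ in range(n)),
                       dtype=float, count=n)
    return tuple(np.percentile(vals, [10, 50, 90]))
```

The same loop run directly on the reservoir simulator would be prohibitively slow, which is exactly why the response surface is built first.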

The last step is to use the response surface and simulator to perform a simple history match. The emphasis here is on making use of the response surface model to reduce turnaround time. Local refinement of the response surface will be necessary.

Many kinds of image degradation, including blur due to defocus or camera motion, may be modeled by convolution of the unknown original image with an appropriate point spread function (PSF). Recovery of the original image is referred to as deconvolution. The more difficult problem of blind deconvolution arises when the PSF is also unknown.
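The non-blind case fixes ideas: with a known PSF, a Wiener filter inverts the convolution while damping frequencies where the PSF has little energy. This sketch is our own and assumes periodic boundaries; a blind method must additionally estimate the PSF, e.g. by alternating updates of the image and the PSF.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Estimate the original image from `blurred`, assuming it was
    (circularly) convolved with `psf`. `nsr` is an assumed
    noise-to-signal power ratio; larger values suppress noise
    amplification at the cost of sharpness."""
    H = np.fft.fft2(psf, s=blurred.shape)       # PSF transfer function
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```

Where the transfer function H is small, the division is ill-conditioned; the nsr term regularizes exactly those frequencies, which is the essential difficulty that any deconvolution scheme, blind or not, must address.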

The goal of the project is to design and implement an effective algorithm for blind deconvolution of images degraded by motion blur (see figures). The project will consist of the following stages:

Scheduling problems occur in many industrial settings and have been studied extensively, with applications ranging from determining manufacturing schedules to allocating memory in computer systems. In this project we study the scheduling problem known as the Carpool problem: suppose that a subset of the people in a neighborhood get together to carpool to work every morning. What is the fairest way to choose the driver each day? This problem has applications to the scheduling of multiple tasks on a single resource. The goal of this project is to study various aspects of algorithms for solving the Carpool problem, including optimality and performance.
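A natural baseline is a greedy "largest debt drives" rule (this sketch is our own illustration, not a prescribed solution): when k people ride on a given day, each accrues 1/k of a unit of driving debt, and the present person with the largest accumulated debt drives, paying off one unit. The project can then ask how far this rule's imbalance can drift from perfect fairness, and whether better rules exist.

```python
def choose_driver(debts, present):
    """One day of the greedy carpool rule. `debts` maps person ->
    accumulated driving debt (mutated in place); `present` lists who
    rides today. Returns today's driver."""
    share = 1.0 / len(present)
    for person in present:
        debts[person] = debts.get(person, 0.0) + share
    # Sort first so ties break deterministically by name.
    driver = max(sorted(present), key=lambda person: debts[person])
    debts[driver] -= 1.0
    return driver
```

With the whole group present every day, the rule simply rotates the driver; the interesting analysis concerns what happens when varying subsets show up.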