ARL/ICB Crash Course in Systems Biology, August 2010

This course is geared toward biologists who want to become familiar with current computational biology software and capabilities, emphasizing quantitative applications for understanding and modeling complex biological systems. The course is taught by researchers from the Institute for Collaborative Biotechnologies (ICB) and the Army Research Laboratory (ARL).

To register and obtain lodging and transportation information for the workshop, please go to:

Schedule

The course will consist of four sessions, each lasting approximately 3.5 hours (including a break in the middle of the session).

Monday, 9 Aug

8:00 am

Registration open

8:45 am

Welcome and introductions (Ed Perkins)

9:00 am

Session 1: Modeling and Analysis using Differential Equations

12:30 pm

Lunch

2:00 pm

Session 2: Stochastic Modeling and Simulation

5:30 pm

Adjourn

Tuesday, 10 Aug

9:00 am

Session 3: Data Acquisition and Analysis

12:30 pm

Lunch

2:00 pm

Session 4: Applications

5:30 pm

Adjourn

Lecture Outline

Session 1: Modeling and Analysis using Differential Equations

This session will provide an introduction to modeling core processes in biology using differential equations. The first lecture will focus on the cell as a multi-layered feedback system. To analyze cellular complexity in a quantitative manner, scientists need to build ad hoc models. Ordinary differential equations (ODEs) are a good choice when considering high-copy-number molecules in a well-mixed environment; several transcriptional regulation pathways in bacteria, for instance, have been successfully modeled with ODEs. We will overview general methods for building macroscopic deterministic models of biological processes, using the trp operon and the iron starvation pathways as application examples. Classical control and dynamical systems analysis tools (equilibria, bifurcations, and frequency analysis) will also be reviewed. Finally, we will introduce some fundamental notions from the theory of chemical reaction networks.

The second lecture will close the modeling cycle by covering model identification theory and practice. Once a model structure (a system of equations) is proposed, its validity should be tested by means of an identifiability analysis, e.g., using sensitivity analysis tools that help identify critical and negligible parameters and establish a parameter ranking. If experimental data are available, parameter estimation is then carried out, leading to a first model; otherwise, a set of experiments must be devised by optimal experimental design and performed before the parameter estimation. The quality of the resulting estimates should be assessed by checking the correlations between them and computing their confidence intervals. This initial model must then be validated with new experiments, which in most cases will reveal a number of deficiencies. Thus, a new model structure and/or a new experimental design must be planned, and the process is repeated iteratively until the validation step is satisfactory.
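The workflow above (propose an ODE model, then estimate its parameters from data) can be sketched in a few lines. This is an illustrative example, not course material: a hypothetical one-gene negative-autoregulation model, dx/dt = beta/(1 + (x/K)^n) - gamma*x, simulated with SciPy and then re-fit from noisy synthetic "measurements" by nonlinear least squares. All parameter names and values are assumptions chosen for the sketch.

```python
# Minimal sketch: deterministic ODE model of negative autoregulation,
# followed by parameter estimation (the "identification" step).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def rhs(t, x, beta, K, n, gamma):
    # Hill-repressed production minus first-order degradation
    return beta / (1.0 + (x[0] / K) ** n) - gamma * x[0]

def simulate(t_eval, beta, K=1.0, n=2.0, gamma=1.0, x0=0.0):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [x0],
                    t_eval=t_eval, args=(beta, K, n, gamma), rtol=1e-8)
    return sol.y[0]

t = np.linspace(0.0, 10.0, 50)
true_beta = 4.0
rng = np.random.default_rng(0)
# Synthetic "experimental" data: true model plus measurement noise
data = simulate(t, true_beta) + rng.normal(0.0, 0.02, t.size)

# Estimate beta by nonlinear least squares; pcov gives a confidence measure
popt, pcov = curve_fit(lambda tt, b: simulate(tt, b), t, data, p0=[1.0])
print("estimated beta = %.2f +/- %.2f" % (popt[0], np.sqrt(pcov[0, 0])))
```

In a real identification study one would also rank parameters by sensitivity and check correlations between estimates, as described above.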

Session 2: Stochastic Modeling and Simulation

In microscopic systems formed by living cells, the small numbers of some reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA). Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multi-scale nature of the underlying problem: (1) the presence of multiple timescales (both fast and slow reactions); and (2) the need to include in the simulation both chemical species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation.
In the first half of the session, we will first describe the SSA, and then outline methods such as tau-leaping, hybrid methods, the slow-scale SSA, and finite state projection that have been developed to accelerate discrete stochastic simulation for well-mixed chemically reacting systems. We will then examine the state of the art in algorithms and software for discrete stochastic simulation of spatially-dependent biochemical systems.
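The core of Gillespie's direct-method SSA fits in a few lines. Below is a minimal illustrative sketch (not StochKit code) for a birth-death process, 0 -> X at rate k and X -> 0 at rate c*X; the parameter values are arbitrary choices for the example.

```python
# Minimal Gillespie direct-method SSA for a birth-death process.
import numpy as np

def ssa_birth_death(k=10.0, c=1.0, x0=0, t_end=50.0, seed=1):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k, c * x              # propensities of birth and death
        a0 = a1 + a2                   # total propensity (always > 0 here)
        t += rng.exponential(1.0 / a0) # exponentially distributed waiting time
        if rng.uniform() * a0 < a1:    # pick which reaction fires
            x += 1                     # birth: 0 -> X
        else:
            x -= 1                     # death: X -> 0
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = ssa_birth_death()
print("final copy number:", states[-1])  # fluctuates around k/c = 10
```

Every single reaction event is simulated, which is exactly why the SSA becomes expensive for systems with fast reactions or large populations.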
The second half of the session will focus on StochKit, a software package for simulation of stochastic models. StochKit provides command-line executables for running stochastic simulations using variants of Gillespie's Stochastic Simulation Algorithm (SSA) and tau-leaping. Among the numerous implementations of the SSA, StochKit provides solvers for the most widely used and efficient methods: the SSA Direct Method, the Optimized Direct Method [Cao et al. 2004], the Logarithmic Direct Method, and a Constant-Time Algorithm [Slepoy et al. 2008]. For tau-leaping, we provide a solver implementing an adaptive explicit tau-leaping method. To further increase computational efficiency, StochKit provides automatic parallelization and a converter for SBML files. We will give a comprehensive review of the available algorithms and illustrate how to use Matlab functions in StochKit to process output files. For advanced developers, we will briefly illustrate how to build a custom solver for specific needs.

Session 3: Data Acquisition and Analysis

Since its inception 15 years ago, the DNA microarray has become a staple experimental tool for exploring the effects of biological intervention on gene expression. By measuring the abundance of tens of thousands of mRNA transcripts at once, microarrays provide a genome-wide characterization of biological function. While the high dimensionality of microarray data provides a distinct advantage over smaller-scale experimental platforms, it also requires intelligent use of data processing and analysis techniques to control for sources of noise, systematic bias, and statistical artifacts. This session will provide an overview of the workflow required to transform raw microarray data into biological insight. In the first lecture, we will begin by introducing the R/Bioconductor analysis platform. We will then describe data preprocessing techniques including background subtraction, dye bias normalization, and scale normalization. These techniques reduce the effects of noise and systematic biases often associated with high-dimensional data. Next, we will discuss methods to identify differentially expressed genes, i.e., genes whose transcripts are expressed at different levels between experimental conditions. Through the use of sophisticated statistical methods, we will obtain subsets of genes showing reproducible expression changes across experimental replicates. These individual genes provide the first clues into the underlying biological processes acting in the experiment of interest.

In the second lecture, we will focus on functional classification and ontological analyses of genes of interest. We will apply pathway enrichment and network analysis to identify functions and networks highly enriched in a set of genes. We will also discuss approaches for functional analysis of time-series data sets and for presenting gene expression data.
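The statistical core of differential-expression testing can be sketched briefly. The course itself uses R/Bioconductor for this workflow; the NumPy/SciPy version below only illustrates the statistics, on synthetic log-expression data with a hypothetical set of truly up-regulated genes, using a per-gene t-test followed by Benjamini-Hochberg false-discovery-rate correction.

```python
# Minimal sketch: per-gene differential expression with FDR control.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_genes, n_reps = 1000, 5
# Synthetic log-expression matrices: genes x replicates
control = rng.normal(0.0, 1.0, (n_genes, n_reps))
treated = rng.normal(0.0, 1.0, (n_genes, n_reps))
treated[:50] += 4.0  # first 50 genes are truly up-regulated (assumption)

# Two-sample t-test per gene across replicates
t_stat, p = stats.ttest_ind(treated, control, axis=1)

# Benjamini-Hochberg adjusted p-values (controls false discovery rate)
order = np.argsort(p)
ranked = p[order] * n_genes / np.arange(1, n_genes + 1)
adj = np.minimum.accumulate(ranked[::-1])[::-1]
p_adj = np.empty_like(adj)
p_adj[order] = adj

significant = np.flatnonzero(p_adj < 0.05)
print("genes called significant:", significant.size)
```

Multiple-testing correction is essential here: with thousands of genes, an uncorrected 0.05 threshold would flag dozens of false positives by chance alone.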

Session 4: Applications

Lecture 7: Polarization in Yeast Mating (Mike Lawson, UCSB)

One of the best-studied examples of cell polarization is the growth of the mating projection in Saccharomyces cerevisiae. A single molecular entity located at the front of the cell, termed the polarisome, helps to organize structural, transport, and signaling proteins. We have developed a spatial stochastic model (utilizing the reaction-diffusion master equation) of polarisome formation in mating yeast, focusing on the tight localization of proteins on the membrane. Prior work has produced deterministic (PDE) mathematical models that describe the spatial dynamics of yeast cell polarization in response to spatial gradients of mating pheromone; however, these required special mechanisms (e.g., high cooperativity) to match the characteristic punctate localization of the polarisome. This new model is built on simple mechanistic components, but is able to achieve a highly polarized phenotype even in relatively shallow input gradients. Preliminary results highlight the need for spatial stochastic modeling, because deterministic simulation fails to achieve a sharp break in symmetry.

Structured Singular Value (SSV) analysis is a tool from control theory that is useful for analyzing uncertain biological models. A range of applications will be presented, focusing on its use for the analysis of fragility points in a network, drug screening, model discrimination, and model extension.