The dolfin-adjoint project automatically derives the discrete
adjoint and tangent linear models from a forward model written in
the Python interface to DOLFIN.

These adjoint and tangent linear models are key ingredients in many
important algorithms, such as data assimilation, optimal control,
sensitivity analysis, design optimisation, and error estimation. Such
models have made an enormous impact in fields such as meteorology and
oceanography, but their use in other scientific fields has been
hampered by the great practical difficulty of their derivation and
implementation. In his recent book, Naumann (2011) states that

"[T]he automatic generation of optimal (in terms of robustness and
efficiency) adjoint versions of large-scale simulation code is one
of the great open challenges in the field of High-Performance
Scientific Computing."

The dolfin-adjoint project aims to solve this problem for the case
where the model is implemented in the Python interface to DOLFIN.

dolfin-adjoint works for both steady and time-dependent problems, and for both linear and nonlinear problems.

It is easy to use: given a differentiable forward model, employing dolfin-adjoint typically
involves changing on the order of ten lines of code.
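As a concrete illustration of the "ten lines" claim, the sketch below follows the pattern in the dolfin-adjoint documentation (the Poisson forward model here is an assumed example, and exact names such as Control and compute_gradient may differ between versions of the library). Only the second import and the final line are dolfin-adjoint-specific; everything else is an ordinary DOLFIN forward model.

```python
# Sketch of adopting dolfin-adjoint in a DOLFIN forward model, based on the
# documented interface. Requires a FEniCS installation; the Poisson problem
# is an assumed example.
from dolfin import *
from dolfin_adjoint import *   # records subsequent solves on a tape

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

f = interpolate(Constant(1.0), V)      # control variable
u = Function(V)
v = TestFunction(V)
bc = DirichletBC(V, 0.0, "on_boundary")

F = inner(grad(u), grad(v))*dx - f*v*dx
solve(F == 0, u, bc)                   # recorded automatically

J = assemble(u*u*dx)                   # functional of interest
dJdf = compute_gradient(J, Control(f)) # gradient via the derived adjoint
```

The point is that the forward model itself is untouched: the adjoint is derived from the recorded solves, not hand-coded.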

The adjoint and tangent linear models exhibit optimal theoretical efficiency. If every forward
variable is stored, the adjoint takes 0.2-1.0 times the runtime of the forward model, depending
on the structure of the forward problem.

If the forward model runs in parallel, the adjoint and tangent linear models also run in parallel
with no modification.

If instructed, the adjoint model can automatically employ optimal checkpointing schemes to
mitigate storage requirements for long nonlinear runs.
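The checkpointing idea can be sketched in a few lines of plain Python: store only every k-th forward state and recompute the intermediate states from the nearest checkpoint when the adjoint sweep needs them. This is a simplified stand-in for the optimal (revolve-style) schedules dolfin-adjoint can employ; the toy time-stepper and all names here are invented for illustration.

```python
# Illustrative checkpoint-and-recompute sketch (not the optimal schedule):
# keep every k-th state, rebuild the rest on demand during the reverse sweep.

def step(x):
    # one nonlinear forward timestep of a toy model
    return x + 0.1 * x * (1.0 - x)

def forward_with_checkpoints(x0, n, k):
    """Run n steps from x0, storing a checkpoint every k steps."""
    checkpoints = {0: x0}
    x = x0
    for i in range(1, n + 1):
        x = step(x)
        if i % k == 0:
            checkpoints[i] = x
    return x, checkpoints

def restore(i, checkpoints):
    """Recompute state i from the nearest earlier checkpoint."""
    j = max(c for c in checkpoints if c <= i)
    x = checkpoints[j]
    for _ in range(i - j):
        x = step(x)
    return x

x_final, cps = forward_with_checkpoints(0.5, 100, k=10)
# The adjoint sweep visits states in reverse; each is recoverable on demand:
assert abs(restore(100, cps) - x_final) < 1e-12
```

Storage drops from 100 states to 11 checkpoints, at the price of recomputing at most k-1 steps per restored state; optimal schedules trade storage against recomputation more carefully.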

Rigorous verification routines are provided, so that users can easily verify for themselves
the correctness of the derived models.
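The verification idea is the classical Taylor remainder test: if dJ is the correct derivative of J, then |J(m + h) - J(m) - h*dJ(m)| should converge to zero at second order as h shrinks. The sketch below demonstrates this on an invented toy functional in plain Python; it is the principle behind such verification routines, not the library's own code.

```python
# Taylor-remainder verification sketch: with a correct gradient, halving h
# should divide the remainder by about four, i.e. convergence rate ~2.
import math

def J(m):
    return m**3 + 2.0*m          # toy functional (invented example)

def dJ(m):
    return 3.0*m**2 + 2.0        # its claimed derivative

def convergence_rates(J, dJ, m, h0=1e-2, levels=4):
    residuals = []
    h = h0
    for _ in range(levels):
        residuals.append(abs(J(m + h) - J(m) - h*dJ(m)))
        h /= 2.0
    # observed order between successive refinements
    return [math.log(residuals[i] / residuals[i + 1], 2.0)
            for i in range(levels - 1)]

rates = convergence_rates(J, dJ, m=1.3)
# rates approach 2.0 for a correct gradient; a wrong gradient gives ~1.0
```

An incorrect derivative degrades the observed rate to first order, which makes the test a sharp and cheap correctness check.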

The traditional approach to deriving adjoint and tangent linear models
is called algorithmic differentiation (also called automatic
differentiation). The fundamental idea of algorithmic differentiation
is to treat the model as a sequence of elementary instructions. An
elementary instruction is a simple operation such as addition,
multiplication, or exponentiation. Each one of these operations is
differentiated individually, and the derivative of the whole model is
then composed with the chain rule.
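The elementary-instruction view can be made concrete with a minimal dual-number sketch: each arithmetic operation carries its own derivative, and the chain rule composes them as the computation runs. This is forward (tangent linear) mode; adjoint mode composes the same elementary derivatives in reverse order. The class and names below are invented for illustration.

```python
# Minimal forward-mode AD via dual numbers: every elementary operation
# (here +, *, exp) is differentiated individually and composed by the
# chain rule as the program executes.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule, applied per elementary multiplication
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def exp(x):
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

# d/dx [x * exp(x) + 3x] at x = 2, assembled from elementary derivatives:
x = Dual(2.0, 1.0)
y = x * exp(x) + 3 * x
# y.dot carries the derivative exp(2) + 2*exp(2) + 3
```

Extending this machinery to a large simulation code, operation by operation, is precisely where the practical difficulty of traditional algorithmic differentiation lies.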

The dolfin-adjoint project takes a different approach: the model is
considered as a sequence of equation solves. This is analogous to the
elementary-instruction abstraction of algorithmic differentiation, but
operates at a much higher level. The idea is implemented in a software
library, libadjoint. When it is combined with the high-level abstraction
of the FEniCS system, many of the difficult problems associated with
algorithmic differentiation dissolve.
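The equation-solve abstraction can be illustrated with an invented toy example (not the libadjoint API): a forward model made of a chain of scalar linear solves a_i*x_i = c_i*x_{i-1} + m, with functional J(m) = x_N. The adjoint model sweeps through the same solves, transposed and in reverse order, and delivers dJ/dm at roughly the cost of one forward run.

```python
# Toy "sequence of equation solves" adjoint (invented example, plain Python).
# Forward: solve a_i x_i = c_i x_{i-1} + m for i = 1..N; functional J = x_N.
# Adjoint: a_i lambda_i = dJ/dx_i + c_{i+1} lambda_{i+1}, solved backwards.

a = [2.0, 3.0, 4.0]   # operator of each equation solve
c = [1.5, -0.5, 2.5]  # coupling to the previous solution

def forward(m, x0=1.0):
    xs = [x0]
    for ai, ci in zip(a, c):
        xs.append((ci * xs[-1] + m) / ai)   # solve a_i x_i = c_i x_{i-1} + m
    return xs

def adjoint_gradient():
    N = len(a)
    lam = [0.0] * N
    lam[N - 1] = 1.0 / a[N - 1]             # dJ/dx_N = 1 seeds the last solve
    for i in range(N - 2, -1, -1):
        lam[i] = c[i + 1] * lam[i + 1] / a[i]
    # each solve's right-hand side depends on m with coefficient 1
    return sum(lam)

# The adjoint gradient agrees with a finite-difference check:
h = 1e-6
fd = (forward(1.0 + h)[-1] - forward(1.0 - h)[-1]) / (2 * h)
assert abs(fd - adjoint_gradient()) < 1e-6
```

Note that the adjoint never differentiates individual arithmetic instructions: each whole solve is treated as one unit, which is what makes the higher-level abstraction tractable.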

For more technical details on libadjoint and dolfin-adjoint, see
the papers.