Computational astrophysics is the use of numerical methods to solve research problems in astrophysics on a computer.

Numerical methods are used whenever the mathematical model describing an astrophysical system is too complex to solve analytically (with
pencil and paper). Today, it is difficult to find examples
of research problems that do not use computation.

Computational versus Analytic Methods

Solutions generated by numerical methods are generally only approximations to the exact solution of the underlying
equations. However, much more complex systems of equations can be solved numerically than can be solved analytically.
Thus, approximate solutions to the exact equations found by numerical methods often provide far more insight
than exact solutions to approximate equations that can be solved analytically. For example, time-dependent numerical solutions of fluid flow in three dimensions can exhibit behavior which is not expected from one-dimensional analytic solutions to the steady-state (time independent) equations.

The increase in computing power over the last few decades has meant that an increasing share of problems in astrophysics
can be solved on a desktop computer. However, the most computationally intensive problems in astrophysics are still limited by the memory and floating-point speed of the largest high-performance computing (HPC) systems available (e.g. computers on the TOP500 list). The use of HPC is necessary to maximize the spatial, temporal, or frequency resolution of solutions, to include more
physics, or when large parameter surveys are required to understand the statistics.

Examples

Traditionally, the most important applications of computation in astrophysics have been in the areas of:

Stellar structure and evolution.

The internal structure and evolution of stars with different masses and chemical compositions were largely mapped out in the 1960s
using numerical methods to solve the equations of stellar structure. Today, the frontiers of research include calculating
multidimensional stellar models of rapidly rotating stars, modeling the effects of hydrodynamical processes
such as convection from first principles, and understanding how stars generate magnetic fields
through dynamo processes.

Radiation transfer and stellar atmospheres.

Without oversimplifying assumptions, computational methods are required to calculate the propagation of light through the outer layers of a star, including
its interaction with matter through absorption, emission, and scattering of photons. The calculation of cross sections for the interaction of light with matter for astrophysically relevant ions is itself a challenging computational problem. The construction of such
stellar atmosphere models shares many challenges with radiation transfer problems in other systems such as planets, accretion disks,
and the interstellar medium. Modern calculations improve the frequency resolution, include a better treatment of opacities, and can
treat non-hydrostatic atmospheres.

Astrophysical fluid dynamics.

Figure 1: Density in a 2D simulation of the propagation of a pulsed, supersonic, protostellar jet

The dynamics of most of the visible matter in the universe can be treated as a compressible fluid. Time-dependent and
multidimensional solutions to the fluid equations, including the effects of gravitational, magnetic, and radiation fields,
require numerical methods. A vast range of problems are addressed in this way, from convection and dynamo action in stellar
and planetary interiors, to the formation of galaxies and the large scale structure of the universe.

Gravitational N-body dynamics.

It is well known that Newton's Laws of Motion for some number of particles N interacting through their mutual gravitational attraction
do not in general have an analytic solution for N > 2. Thus, to compute the orbits of planets in the solar system, or of stars
in the Galaxy, numerical methods are required. The most challenging problems today include accurate integration of the orbits of the planets over the
age of the solar system, studying the dynamics of globular clusters including the effect of stellar evolution and the formation of binaries,
studying galaxy mergers and interaction, and computing structure formation in the universe through the gravitational clustering of collisionless dark matter.

Numerical Methods

The diverse set of mathematical models encountered in astrophysics means that a very wide range of numerical methods are
necessary. These range from basic methods for linear algebra, nonlinear root finding, and ordinary differential equations (ODEs),
to more complex methods for coupled partial differential equations (PDEs) in multi-dimensions, topics which span the entire
content of reference books such as Numerical Recipes.

However, there are several numerical methods used in astrophysics that deserve special mention, either because they have such wide use in astrophysics, or because astrophysicists have made significant contributions to their development. These methods are considered in the
following subsections.

Stellar Structure Codes

The equations of stellar structure are a set of ODEs which define a classic two-point boundary value problem. Analytic solutions to
an approximate system of equations exist in special cases (polytropes) but these are of limited application to real stars. Numerical
solutions to the full system of equations were first computed using shooting methods, in which boundary conditions are guessed at the
center and surface of the star, the equations are integrated outwards and inwards, and matching conditions are used at some interior
point to select the full solution. Shooting methods are laborious, and modern stellar structure codes instead use
relaxation schemes which find the solution to the finite-difference form of the stellar structure equations by finding the roots of
coupled, non-linear equations at each mesh point.
A good example of a
public code that uses a relaxation scheme is the EZ-code, based on Eggleton's variable mesh method. The evolution of a star is then followed
by computing a sequence of stellar models at discrete time intervals, with the chemical composition modified by nuclear reactions in the
interior.
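The polytropes mentioned above give a compact illustration of the ODE character of the problem. The sketch below (function and parameter names are illustrative) integrates the Lane-Emden equation \(\theta'' + (2/\xi)\theta' + \theta^n = 0\) with a fixed-step RK4 integrator out to the first zero of \(\theta\), which marks the dimensionless stellar surface; for n = 1 the analytic solution \(\theta = \sin\xi/\xi\) places that surface at \(\xi = \pi\).

```python
def lane_emden(n, h=1e-3):
    """Integrate the Lane-Emden equation theta'' + (2/xi)*theta' + theta**n = 0
    outward from the centre with fixed-step RK4, returning the first zero of
    theta (the dimensionless stellar surface xi_1)."""
    def rhs(xi, y):
        theta, dtheta = y
        # max(theta, 0) guards against fractional powers of a small negative theta
        return (dtheta, -2.0 * dtheta / xi - max(theta, 0.0) ** n)

    # series expansion theta = 1 - xi^2/6 + ... avoids the singularity at xi = 0
    xi = h
    y = (1.0 - xi * xi / 6.0, -xi / 3.0)
    while y[0] > 0.0:
        k1 = rhs(xi, y)
        k2 = rhs(xi + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = rhs(xi + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = rhs(xi + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        xi += h
    return xi
```

For a real star the structure equations couple pressure, mass, luminosity, and temperature, and this simple outward integration becomes the shooting method described above; relaxation schemes avoid the sensitivity of shooting to the guessed boundary values.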

Radiative Transfer Codes

Calculating the emergent intensity from an astrophysical system requires solving a multidimensional integro-differential equation, along
with level population equations to account for the interaction of the radiation with matter. In general the solution is a function of
two angles, frequency, and time. Even in static, plane-parallel atmospheres, the problem is two-dimensional (one angle and frequency). However, the most challenging aspect of the problem is that scattering couples the solutions at different angles and frequencies. As in
the stellar structure problem, relaxation schemes are used to solve the finite difference form of the transfer equations, although
specialized iteration techniques are necessary to accelerate convergence. Monte-Carlo methods, which adopt statistical techniques to
approximate the solution by following the propagation of many photon packets, are becoming increasingly important. The problem of
line-transfer in a moving atmosphere (stellar wind) is especially challenging, due to non-local coupling introduced by Doppler shifts in
the spectrum.
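The Monte-Carlo approach can be sketched in a few lines. The toy model below (all names and parameters are illustrative, not from any production code) follows photon packets through a plane-parallel slab: each packet travels an exponentially distributed optical-depth step, scatters isotropically with a given albedo, or is absorbed.

```python
import math
import random

def slab_transmission(tau_max, albedo, n_photons=200_000, seed=1):
    """Monte-Carlo transfer through a plane-parallel slab of total optical
    depth tau_max.  Photon packets enter at tau = 0 moving straight inward,
    travel exponentially distributed optical-depth steps, scatter
    isotropically with probability `albedo` at each interaction, and are
    otherwise absorbed.  Returns the transmitted fraction."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                 # optical-depth position, direction cosine
        while True:
            tau += mu * -math.log(1.0 - rng.random())   # free path ~ Exp(1)
            if tau >= tau_max:
                transmitted += 1           # escaped through the far face
                break
            if tau < 0.0:
                break                      # escaped back through the near face
            if rng.random() > albedo:
                break                      # absorbed
            mu = 2.0 * rng.random() - 1.0  # isotropic re-emission direction
    return transmitted / n_photons
```

With zero albedo the transmitted fraction converges to the analytic result \(e^{-\tau_{\max}}\), a standard sanity check; scattering raises the transmission above that value. The statistical noise of such estimates falls only as the square root of the packet count, which is why acceleration techniques matter.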

N-body Codes

There are two tasks in an N-body code: integrating the equations of motion (pushing particles), and computing the gravitational
acceleration of each particle. The former requires methods for integrating ODEs. Modern codes are based on a combination of high-order
difference approximations (e.g. Hermite integrators), and symplectic methods (which have the important property of generating solutions that
obey Liouville's Theorem, i.e. that preserve the volume of the solution in phase space). Symplectic methods are especially important for
long term time integration, because they control the accumulation of truncation error in the solution.
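A minimal illustration of a symplectic integrator (the function name and units are illustrative): the kick-drift-kick leapfrog scheme is second-order accurate and, unlike a generic Runge-Kutta method of the same order, keeps the energy error bounded over long integrations. Here it follows a test particle on a circular Kepler orbit in units where GM = 1.

```python
def leapfrog_kepler(x, y, vx, vy, dt, nsteps):
    """Kick-drift-kick leapfrog for a test particle orbiting a unit point
    mass (units with G*M = 1).  The scheme is symplectic, so the energy
    error oscillates but does not grow secularly."""
    def accel(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3

    ax, ay = accel(x, y)
    for _ in range(nsteps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                  # full drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

# circular orbit of radius 1 (speed 1, period 2*pi), followed for ~16 orbits
x, y, vx, vy = leapfrog_kepler(1.0, 0.0, 0.0, 1.0, 0.01, 10_000)
energy = 0.5 * (vx * vx + vy * vy) - 1.0 / (x * x + y * y) ** 0.5
```

After many orbits the energy remains close to its initial value of -1/2; a non-symplectic scheme of comparable cost would show a steady drift, which is fatal for integrations over the age of the solar system.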

Calculation of the gravitational acceleration is challenging because the computational cost of direct summation scales as N(N-1), where N is the number of particles. For small N, direct summation can be used. For moderate N (currently \(N \sim 10^{5-6}\)), special-purpose hardware (e.g.
GRAPE boards) can be used to accelerate the pairwise \(1/r^2\) evaluations needed for direct summation. Finally, for large N (currently \(N \geq 10^{9-10}\)), tree methods are used to approximate the force from distant particles.
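Direct summation itself is only a double loop over particle pairs. The sketch below (names illustrative; the softening length `eps` is a standard device, not part of the text above) shows the O(N^2) computation that GRAPE hardware accelerates and tree methods approximate.

```python
def accelerations(pos, mass, eps=1.0e-3):
    """Direct-summation gravitational accelerations (G = 1): an O(N^2) double
    loop over all particle pairs.  `eps` is a Plummer-type softening length
    that keeps the force finite at small separations."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps * eps
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    return acc
```

For equal-mass particles the pairwise accelerations are equal and opposite, so total momentum is conserved to round-off; a tree method trades a small, controllable force error for an O(N log N) cost.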

Codes for Astrophysical Fluid Dynamics

Solving the equations of compressible gas dynamics is a classic problem in numerical analysis which has application to many fields besides astrophysics. Thus, a large number of methods have been developed, with many important contributions being made by astrophysicists. A complete review of all the methods is beyond the scope of this discussion (see Laney 98). For solving the equations of compressible fluid dynamics, the most
popular methods include

finite-volume methods (which often use a Riemann solver to compute upwind fluxes),

operator split methods (which combine elements of both finite-differencing and finite-volume methods for different terms in the equations),

central methods (which often use simple expressions for the fluxes, combined with high-order spatial interpolation), and

particle methods such as smoothed particle hydrodynamics (SPH, which follows the flow by integrating the motion of discrete particles; see Monaghan 92).

A good technical review of many of these methods is given by LeVeque (2002).
SPH is an example of a method developed largely to solve astrophysics problems, although many of the developments in other methods (e.g.
the extension of finite-difference and finite-volume methods to magnetohydrodynamics (MHD) to include the effects of magnetic fields on the dynamics of the fluid) have also been motivated by astrophysics.
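The finite-volume idea can be shown in a few lines. The sketch below (illustrative, not from any production code) applies the first-order upwind scheme, the simplest Godunov-type method, to linear advection on a periodic grid: the flux through each cell face is taken from the upwind cell, which is exactly what a Riemann solver returns for this equation.

```python
def upwind_advect(u, c, dx, dt, nsteps):
    """First-order finite-volume upwind scheme for u_t + c*u_x = 0 with c > 0
    on a periodic grid.  The flux through each cell face is c times the value
    in the upwind (left) cell; Python's negative indexing supplies periodicity."""
    nu = c * dt / dx                       # Courant number; stable for nu <= 1
    assert 0.0 < nu <= 1.0
    u = list(u)
    for _ in range(nsteps):
        u = [u[i] - nu * (u[i] - u[i - 1]) for i in range(len(u))]
    return u
```

With Courant number 1 the scheme translates the profile exactly one cell per step; for smaller time steps it is stable but numerically diffusive, which is what motivates the high-order interpolation and more elaborate Riemann solvers used in the methods listed above.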

Relation to Other Fields

Computational astrophysics is necessarily inter-disciplinary, involving aspects of not only astrophysics, but also numerical analysis
and computer science.

Numerical Analysis

Numerical Analysis is a rigorous branch of mathematics concerned with the approximation of functions and integrals, and the approximation
of solutions to algebraic, differential, and integral equations. It provides tools to analyze errors that arise from the approximations
themselves (truncation error), and from the use of finite-precision arithmetic on a computer (round-off error). Convergence, consistency,
and stability of numerical algorithms are all essential for their use in practical applications. Thus, the development of new numerical
algorithms to solve problems in astrophysics is deeply rooted in the tools of numerical analysis.
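The competition between truncation and round-off error is easy to demonstrate. In the sketch below (illustrative), a forward difference approximates the derivative of sin x at x = 1: the error falls with the step size h until round-off, which grows roughly like machine-epsilon/h, takes over.

```python
import math

def forward_diff(f, x, h):
    """One-sided difference quotient: truncation error O(h), round-off O(eps/h)."""
    return (f(x + h) - f(x)) / h

# absolute error versus step size for f = sin; the exact derivative is cos(1)
errs = {h: abs(forward_diff(math.sin, 1.0, h) - math.cos(1.0))
        for h in (1e-2, 1e-8, 1e-12)}
```

The error is smallest at an intermediate h: shrinking the step below that point makes the answer worse, not better, because finite-precision subtraction of nearly equal numbers amplifies round-off.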

Computer Science

Computational science and computer science differ. The former is the use of numerical methods to solve scientific problems. The latter
is the study of computers and computation. Thus, computer science tends to be more oriented towards the theory of computers and
computation, while scientific computation is concerned with the practical aspects of solving science problems. Still, there is a large degree of overlap between the fields, with many computer scientists engaged in developing tools for scientific computation, and many
scientists working on software problems of interest to computer scientists. Examples of the overlap include the development
of standards for parallel processing such as the Message Passing Interface (MPI) and OpenMP, and development of parallel I/O filesystems such as Lustre.

History

Computational astrophysics has a long history, dating back to the numerical approximation of the motion of the Moon and planets in
order to compute ephemerides for tides and navigation. At first, the term computer referred to a person employed to calculate some quantity to a given level of numerical accuracy. Electronic digital computers were introduced during World War II to break encryption codes and to compute artillery
range tables, but they were quickly adapted to other purposes after the war. In astrophysics, this included computing the first stellar
structure models in the 1950s, and the development of numerical methods for fluid flow and shock-capturing at about the same time. Major advances in the decades that followed included the use of transistors instead of vacuum tubes, the development of compiled languages (Fortran in the 1950s, C in the 1970s), the adoption of the IEEE standard for the representation of floating-point numbers and arithmetic (previously every manufacturer used its own bit patterns), and the development of very-large-scale integration (VLSI), which enables the complexity of processor design enjoyed today. Modern trends in computer design are towards distributed-memory parallel processors, with
multiple cores capable of vector processing. Alongside these developments in hardware, progress in numerical analysis, computer science, and software engineering
has resulted in standardized protocols for interprocessor communication across networks, object-oriented languages such as Fortran 95 and C++, interpreted languages such as Java and Python, visualization tools such as OpenGL, and software-engineering tools such as CVS and
Subversion (SVN).