Turbulence, a scientific term to describe certain complex and unpredictable
motions of a fluid, is part of our daily experience and has been for
a long time. No telescope or
microscope is needed to contemplate the volutes of smoke from a
cigarette, the elegant arabesques of cream poured into coffee and the
vigorous eddies of a mountain stream. In an airplane we sometimes
experience bursts of "clear air turbulence". Ultrasonography can
reveal
turbulent blood flow in our arteries; satellite pictures may show
turbulent
meteorological perturbations; computer simulations reveal turbulent
fluctuations of mass in the Universe on scales of tens of
megaparsecs. Without turbulence, urban pollution would linger for
centuries, the heat produced by nuclear reactions in the interior
of stars would not be able to escape on an acceptable time scale and
meteorological phenomena would be predictable almost forever.

Actually the word "turbulence" (Latin: turbulentia) originally
refers
to the disorderly motion of a crowd (turba). In the Middle Ages it was
frequently used to mean just "trouble", a word which derives from
it. Even today "turbulent" may refer to social or personal
behaviour. Its scientific usage refers to irregular and seemingly
random motion of a fluid. This definition, which is far from exhaustive, tries
to express in a synthetic way one of the most complex and fascinating
phenomenon of natural science, from Antiquity to present days.

The subject has indeed a very long history. More than two thousand years ago Lucretius described eddy motion
in his De rerum natura. Over five centuries ago Leonardo
was probably the first to use the word turbulence (in Italian
turbolenza) with its modern meaning and to observe the slow
decay of eddies formed behind the pillars of a bridge. Just over a
quarter of a millennium back, Euler wrote the equations of incompressible ideal or
inviscid (zero-viscosity) flow in both two and three dimensions and
realized the importance of vorticity. Seventy years later Navier
generalized these equations to include viscosity. Because of further work
by Stokes, they are now known as the Navier–Stokes equations. They
constitute a set of nonlinear and nonlocal evolution equations for the
three-dimensional velocity field. In modern notation, the first equation,
which expresses Newton's law as applied to arbitrary fluid elements, reads
\[\tag{1}
\partial_t {\vec v} + {\vec v}\cdot\nabla {\vec v} = -\nabla p + \nu\nabla^2 {\vec v},\]

where \(p\) is the pressure (divided by the constant density of the
fluid) and \(\nu\) is the (kinematic) viscosity. The second equation,
due to d'Alembert, expresses incompressibility
\[\tag{2}
\nabla \cdot {\vec v} = 0.\]

Kelvin was the first to propose
studying turbulence using random solutions of the Navier–Stokes
equations. Reynolds showed that, for a given geometry of the flow,
the different regimes that can take place (laminar, turbulent, ...)
are controlled by the dimensionless number (now called the Reynolds number)
\[\tag{3}
Re = LV/\nu,\]

where \(L\) and \(V\) are a typical scale and a typical velocity of the flow.
For a lot more information on the early history of the
subject we refer the reader to Worlds of Flow by Darrigol.
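
As a concrete illustration of (3) (with round numbers of our own
choosing): water has \(\nu \approx 10^{-6}\,{\rm m^2/s}\ ,\) so a stream
with \(V = 1\,{\rm m/s}\) and \(L = 0.1\,{\rm m}\) has
\[Re = \frac{LV}{\nu} \approx \frac{0.1 \times 1}{10^{-6}} = 10^5,\]
far above the values of order \(10^3\) at which pipe flow typically
becomes turbulent.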

With the advent of aeronautics and the development of meteorology,
astrophysics, plasma physics and nuclear weapons, the understanding of
turbulent flow became a very important issue. Progress remained however quite
slow, for reasons we shall come back to. In more recent years
a paradigm shift took place: as predicted by
von Neumann 60 years ago,
it became possible to simulate turbulent flow on computers, thereby
leading to a new kind of experimentation which somewhat blurs
the traditional distinction between theory and experiments.

The aim of this review is to focus on some open questions for which
significant progress can be expected on the scale of the next decade.
We shall of course not try to review the entire field of turbulence:
it has become very large and, for lack of a unifying language, has to
some extent developed a Babel-tower diversity. We encourage the reader
to look up the other Scholarpedia articles
on fluid mechanics, particularly those focusing on specific aspects
which we could not cover in detail, e.g. direct numerical simulations [1][2].

After the great breakthrough due to Kolmogorov,
dimensional and scaling arguments seemed to provide such a language, at least
in the limit of very large Reynolds numbers (fully developed
turbulence). Although it turned out that the true story was much more
complicated, we feel that it is still useful to examine many topics arising in
turbulence from the point of view of scaling and of its shortcomings. The
topics addressed here are listed in the Contents.

Tools for turbulence

Theory

Since the basic equations are known (see the Navier–Stokes equations (1)–(2) above), the
question is: how much of a theoretical handle do we have on the
Navier–Stokes (NS) equations? The short answer is: very little. We
cannot, for example, show that the solutions of the NS equations with nice and
smooth initial conditions stay nice, smooth and unique for all times, at least
not in 3D (but in 2D, yes, we can). There will be more on this in Section 4 on blow
up. It was even speculated by Jean Leray in the thirties that the
random character of turbulence originates from non-uniqueness of the
solutions to the NS equations. Nowadays, we know enough about how chaos
can appear in deterministic dynamical systems that there is no need
to resort to non-uniqueness to explain turbulence.

When we try tackling random solutions of the NS
equations we have to face a closure problem: because the equations are
quadratically nonlinear, the time-rate of change of the correlation
functions of the velocity at \(n\) different points involves similar
correlation functions, but with \(n+1\) arguments. An infinite hierarchy
of equations is then obtained. The simplest form of closure,
introduced by Kolmogorov's student Milionshchikov, is to arbitrarily close this hierarchy by
relating fourth-order correlation functions to second-order ones as if
the velocity had Gaussian statistics. This is of course unjustified and
leads to problems such as negative values for energy-like quantities,
which are by definition non-negative. Cures for such diseases can be found
but they are frequently ad hoc with no possibility to control the errors
made with respect to the correct solutions. The main difficulty is the
absence of a small parameter which would permit a suitable
perturbative approach. The smallness of the viscosity (or equivalently
the large value of the Reynolds number) is of no use so far because
very little is understood about the Euler equations, that is
the Navier–Stokes equations with the viscosity set to zero (see Euler 250
and references therein).
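
For reference, the quasi-normal closure step amounts to factorizing
fourth-order moments as if the velocity were a zero-mean Gaussian field
(the Wick/Isserlis rule); schematically, with indices standing for points
and vector components,
\[\langle v_1 v_2 v_3 v_4 \rangle = \langle v_1 v_2 \rangle \langle v_3 v_4 \rangle
+ \langle v_1 v_3 \rangle \langle v_2 v_4 \rangle
+ \langle v_1 v_4 \rangle \langle v_2 v_3 \rangle,\]
which closes the hierarchy at the level of second-order correlation
functions.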

The most fruitful theoretical approaches have been based on scaling
arguments, that is essentially on dimensional analysis. This is presented
in Section 2. Another rather fruitful kind of approach is through the
use of toy models: after having identified certain properties of the basic
equations which are believed to play a key role in the behavior of turbulence
(for example invariance and conservation properties) one tries to find
simpler models sharing those properties and which either can be solved
analytically or at least for which numerical solutions are much simpler
than for the full 3D NS equations. Precise definitions of such models
would take up too much space, but we can give an idea of what has been
achieved with the use of some of these toy models.

The 1D Burgers equation (Frisch and Bec, Bec and Khanin)
gives a concrete
example of how energy dissipation can have a finite non-vanishing limit
when the viscosity tends to zero, in spite of the fact that the inviscid
equation formally conserves energy. As pointed out by Saffman, it also
shows what can go wrong with a naive application of dimensional arguments.
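
As a minimal numerical illustration of this point (a sketch of our own,
not taken from the cited works), one can integrate the Burgers equation
\(\partial_t u + u\,\partial_x u = \nu\,\partial_{xx} u\) pseudo-spectrally
for decreasing viscosities and monitor the dissipation rate
\(\varepsilon = \nu \langle (\partial_x u)^2 \rangle\) after a shock has
formed:

```python
import numpy as np

def burgers_dissipation(nu, N=8192, T=1.5, dt=1e-4):
    """Integrate 1D Burgers on [0, 2*pi) from u0 = sin(x) and return
    eps = nu * <(du/dx)^2> at time T (the shock forms at t = 1)."""
    k = np.arange(N // 2 + 1)                  # integer wavenumbers
    x = 2 * np.pi * np.arange(N) / N
    uh = np.fft.rfft(np.sin(x))                # Fourier coefficients of u
    visc = np.exp(-nu * k**2 * dt)             # exact viscous integrating factor
    mask = k < N // 3                          # 2/3-rule dealiasing
    for _ in range(int(T / dt)):
        u = np.fft.irfft(uh)
        nlin = -1j * k * np.fft.rfft(0.5 * u * u) * mask   # -d/dx(u^2/2)
        uh = (uh + dt * nlin) * visc           # Euler step + exact diffusion
    ux = np.fft.irfft(1j * k * uh)
    return nu * np.mean(ux**2)

for nu in [0.02, 0.01, 0.005]:
    print(nu, burgers_dissipation(nu))
# eps stays O(1) as nu decreases: a dissipative anomaly. For a smooth,
# shockless solution it would instead vanish with nu.
```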

The random coupling model of Kraichnan starts with \(N\) independent
replicas of the random NS equations and then couples them artificially
by random Gaussian coefficients chosen in such a way as to preserve
most invariance and conservation laws. In the limit \(N \to \infty\ ,\)
closed equations, called the Direct Interaction Approximation (DIA)
equations, are obtained for suitable statistical quantities (including
the two-time and two-point velocity correlation functions). The solutions
are not compatible with the Kolmogorov 1941 theory to be discussed in
Section 2; this happens not because scale invariance is broken but because
the model does not preserve a certain form of Galilean invariance.
Fortunately, this shortcoming can be repaired by resorting to a kind of
Lagrangian description (Kraichnan) or by making the coupling coefficients
scale-dependent, which leads to the Eddy Damped Quasi-Normal Markovian
(EDQNM) model (Orszag).

Shell models start from the NS equations written in spatial Fourier
space and replace all the Fourier modes in the shell of wavenumbers between
\(2^n\) and \(2^{n+1}\) by just a few degrees of freedom, typically one complex
number. The interactions between these "shell amplitudes" are of course
again chosen to preserve as many features as possible of the original
equations. Some shell models, such as GOY or SABRA (Gledzer, Ohkitani and
Yamada, Biferale), are known to display the same anomalous scaling
as the full equations, see Section 2. Unfortunately little
theoretical progress has been achieved, and the shell models' main advantage
remains their ability to reach very high Reynolds numbers using just
a workstation.
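
The following is a minimal sketch of a GOY-type shell model (our own
simplified implementation, with the standard textbook parameter choices
\(\lambda = 2\ ,\) \(\epsilon = 1/2\) and forcing on a large-scale shell;
it is not tied to any specific published code):

```python
import numpy as np

nsh, lam, eps = 19, 2.0, 0.5
k = 0.05 * lam ** np.arange(1, nsh + 1)   # shell wavenumbers k_n
nu, f = 1e-7, 5e-3 * (1 + 1j)             # viscosity, complex forcing

def rhs(u):
    """GOY right-hand side: quadratic shell interactions + viscosity."""
    up = np.zeros(nsh + 4, dtype=complex)
    up[2:-2] = u                          # boundary shells set to zero
    i = np.arange(2, nsh + 2)
    nlin = (up[i + 1] * up[i + 2]
            - (eps / lam) * up[i - 1] * up[i + 1]
            - ((1 - eps) / lam**2) * up[i - 1] * up[i - 2])
    du = 1j * k * np.conj(nlin) - nu * k**2 * u
    du[2] += f                            # force shell n = 3
    return du

rng = np.random.default_rng(0)
u = 1e-2 * rng.standard_normal(nsh) * np.exp(2j * np.pi * rng.random(nsh))
dt, nsteps, spec = 1e-3, 400_000, np.zeros(nsh)
for step in range(nsteps):                # classical RK4 time stepping
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    if step > nsteps // 2:
        spec += np.abs(u) ** 2            # accumulate |u_n|^2
spec /= nsteps - nsteps // 2
print(np.diff(np.log2(spec)))             # slopes, near -2/3 in the inertial range
```

On a typical laptop this runs in a minute or so, at an effective Reynolds
number far beyond what a 3D simulation of comparable cost could reach;
that is precisely the attraction of shell models.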

Much more drastic simplifications of the true dynamics are involved in
the multiplicative random model (Novikov and Stewart, Yaglom,
Benzi et al.)
in which the amplitudes in the
shell \(n+1\) are just obtained by multiplying the amplitude in the
shell \(n\) by a random variable with a suitable distribution
(independent identically distributed random variables are assumed for
different \(n\)). Correlation functions can then be calculated
explicitly; they display anomalous scaling and multifractality
through a mechanism involving large deviations (Varadhan).
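
To see how anomalous scaling emerges from such a multiplicative
structure, here is a self-contained sketch (the lognormal multiplier
below is an arbitrary illustrative choice of ours, tuned so that the
mean scaling mimics K41):

```python
import numpy as np

# Shell amplitude u_n = W_1 * W_2 * ... * W_n with i.i.d. multipliers
# W = 2^{-(1/3 + sigma*g)}, g standard Gaussian; shells k_n = 2^n.
rng = np.random.default_rng(1)
nlev, nreal, sigma = 12, 200_000, 0.15
g = rng.standard_normal((nreal, nlev))
logu = -np.log(2.0) * np.cumsum(1.0 / 3.0 + sigma * g, axis=1)
for p in [1, 2, 3, 4, 6]:
    mom = np.exp(p * logu).mean(axis=0)        # Monte Carlo <u_n^p>
    zeta = -np.polyfit(np.arange(nlev), np.log2(mom), 1)[0]
    print(p, round(zeta, 3), round(p / 3, 3))  # measured vs K41 value p/3
# Here zeta_p = p/3 - (ln 2 / 2) sigma^2 p^2: it bends below the K41
# line for large p, i.e. anomalous scaling driven by large deviations.
```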

Finally, purely qualitative models, such as the Swift–Richardson
flea-eddy model (Section 2), can be constructed which are stimulating
for the imagination.

Experiments

As stated before, turbulent flows abound everywhere around us. However
high-Reynolds-number flows displaying good scaling properties of the kind
discussed in Section 2 require very large scales, as is the case in
the natural atmospheric and astrophysical environments. The former keeps
changing (with the weather) and the latter is not so easily accessed.
Large-scale facilities, such as major wind tunnels, are very expensive
and it is difficult to have a fundamental experiment running there for
a duration of several days to several weeks (needed, e.g. to accumulate
good statistical data on rare violent events) when they are competing
with important industrial applications such as the testing of new designs
for cars and airplanes.

During the last ten years a new technique has been developed involving
low-temperature helium kept above the lambda point, below which it
becomes superfluid (Chavanne et al., Niemela and Sreenivasan).
It takes advantage of the fact
that helium can have a kinematic viscosity two orders of magnitude lower
than that of air to achieve fairly high Reynolds numbers with facilities
which still fit on a table. Special measurement techniques had to be
developed, since the usual hot-wire techniques cannot cope with such
experiments. It is now planned to use much larger helium facilities using
know-how developed by major high-energy centers such as CERN.

Finally, very promising results have recently been obtained in understanding
the dynamics of Lagrangian (tracer) particles in turbulent flows. Rapid
technological advances in optical particle tracking
allow scientists to measure accurately the positions, velocities and
accelerations of such tracer particles (for a review see Toschi and
Bodenschatz). Detailed comparison between experiments and numerical
simulations of Lagrangian properties of turbulent flows has opened
interesting new directions of investigation, with many applications
where transport and/or aggregation of particles is important.

Numerical simulations

In 1949 von Neumann predicted that the advent of digital
computers would revolutionize the study of turbulence, since it would
become possible to simulate the NS equations in 3D in turbulent
regimes. Actually the first genuine 3D simulations of turbulence had
to wait about twenty years, until the advent of supercomputers and the
development of (pseudo-)spectral numerical techniques taking advantage
of fast Fourier transform algorithms (Orszag and Patterson). For
homogeneous turbulence without boundaries (or more precisely, with
periodic boundary conditions) the achievable spatial resolution,
which at first was only about \(32^3\ ,\) has now reached \(4096^3\)
(Kaneda et al.).
It can be safely predicted that within a decade
or less the Reynolds numbers achievable by such simulations will be
comparable to those of the best (present-day) experimental
facilities. Numerical simulations have of course the advantage that
one has access to the whole spatial structure of the velocity, of the
vorticity, of the local energy dissipation, etc. However a single
simulation at very high Reynolds numbers may take from weeks to months of CPU
time, which can be prohibitive when exploring parameter space. Simulations
are of course rather demanding for those problems in which boundaries are
essential. Furthermore, when complex boundaries or physics are involved,
setting up suitable simulations can become difficult. One interesting way
to handle such problems is through Lattice Boltzmann simulations, which
combine the hydrodynamical aspects with the microscopic physics of the problem
while being no more demanding than, say, finite-difference techniques
(Benzi et al., Succi).
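
For concreteness, here is a minimal pseudo-spectral solver in the spirit
of the Orszag–Patterson approach, written in 2D (vorticity formulation)
to keep it short. It is a sketch of the method only, with parameters
chosen arbitrarily for illustration; real turbulence studies use heavily
optimized 3D versions of the same idea:

```python
import numpy as np

N, nu, dt = 128, 1e-3, 1e-3
kx = np.fft.fftfreq(N, 1.0 / N)                 # integer wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                  # avoid division by zero
dealias = (np.abs(KX) < N / 3) & (np.abs(KY) < N / 3)   # 2/3 rule

def rhs(wh):
    """Spectral d(omega)/dt = -u.grad(omega) + nu*laplacian(omega)."""
    psih = wh / K2                              # laplacian(psi) = -omega
    u = np.fft.ifft2(1j * KY * psih).real       # u =  d(psi)/dy
    v = np.fft.ifft2(-1j * KX * psih).real      # v = -d(psi)/dx
    wx = np.fft.ifft2(1j * KX * wh).real
    wy = np.fft.ifft2(1j * KY * wh).real
    return -np.fft.fft2(u * wx + v * wy) * dealias - nu * K2 * wh

rng = np.random.default_rng(2)
w = rng.standard_normal((N, N))                 # random initial vorticity
wh = np.fft.fft2(w - w.mean()) * dealias
for _ in range(2000):                           # midpoint (RK2) time steps
    wh = wh + dt * rhs(wh + 0.5 * dt * rhs(wh))
vort = np.fft.ifft2(wh).real
print("vorticity rms after decay:", vort.std())
```

Run for long enough, such a 2D simulation develops the long-lived
coherent vortices discussed in Section 5.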

For engineering applications, one usually needs to suitably model turbulence
while using as much as possible of our basic scientific
understanding. This can be done in a variety of ways, including
\(K-\epsilon\) theory (Mohammadi and Pironneau), Reynolds Averaged
Navier–Stokes modelling (Pope) and renormalization group methods
(Orszag et al.). Such topics are beyond the present review, which
focuses on the physics of turbulence.

K41 and intermittency

Scaling ideas have a long history in fluid mechanics,
starting with Newton's derivation of the quadratic dependence of the drag
on the relative velocity between a body and the ambient fluid.
Scaling arguments and dimensional analysis played a key role in
the first serious attempt to understand the
statistical properties of turbulent flow.

In 1941 Kolmogorov defined a conceptual framework for turbulence, now
referred to as the K41 theory
(Kolmogorov 1941a, Kolmogorov 1941b, Frisch),
which applies to
homogeneous, isotropic turbulence, that is turbulence
statistically invariant under translations and rotations, of the sort
frequently obtained at very high Reynolds numbers when there
is no large-scale shear.

A few years earlier, Richardson had proposed a qualitative vision of
the energy cascade, the process by which energy flows from larger to
smaller eddies. He proposed that it would be similar to the way blood
flows from larger to smaller fleas in a famous poem of Swift.

In K41, this is made quantitative by two postulates regarding the
large Reynolds number limit. On the one hand
Kolmogorov assumes that the energy dissipation rate \(\varepsilon\) has
a finite non-vanishing limit as the viscosity tends to zero while
keeping the scale and velocity characteristic of the production
of the turbulence fixed (for a recent experimental investigation on this point see Sreenivasan 1984). On the other hand he assumes that a statistical scale
invariance of the cascade is achieved in the limit of very large Reynolds
numbers. The former assumption, which is now generally called the existence
of a dissipative anomaly (in a laminar fluid, the dissipation goes
to zero with the viscosity), is well supported by experimental
and numerical results. The latter assumption holds only in an approximate
way (see below).

In its mathematical formulation, the invariance postulated by Kolmogorov resembles that of the Brownian motion process, in the development of
which Kolmogorov was strongly involved. If \(x(t)\) is the position
at time \(t\) of a Brownian particle, then for any \(t>0\ ,\) \(h>0\) and
\(\lambda >0\ ,\) the statistical distribution of \(x(t+\lambda h) - x(t)\)
is the same as that of \(\lambda ^{1/2}[x(t+h) -x(t)]\ .\) In plain
language, the increments of the position of the Brownian particle
scale as the square root of the time increments. In K41, temporal position
increments become spatial velocity increments and the square root becomes
a cubic root. The latter is dictated by a dimensional argument: let
\(\varepsilon\) denote the energy dissipation per unit mass of the fluid
and \(l\) the separation between two points. If we try relating
velocity increments to these quantities by a formula of the form
\[\tag{4}
\hbox{velocity increment} = C \varepsilon ^\alpha l ^ \beta,\]

where \(C\) is a dimensionless constant, we immediately find that
\[\tag{5}
\alpha = \beta = 1/3.\]
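
Indeed, writing out the dimensions (velocity in \({\rm m\,s^{-1}}\ ,\)
\(\varepsilon\) in \({\rm m^2\,s^{-3}}\) and \(l\) in \({\rm m}\)) gives
\[{\rm m\,s^{-1}} = \left({\rm m^2\,s^{-3}}\right)^\alpha {\rm m}^\beta
\quad \Longrightarrow \quad 2\alpha + \beta = 1, \qquad 3\alpha = 1,\]
whence \(\alpha = \beta = 1/3\) and velocity increments scale as
\((\varepsilon l)^{1/3}\ .\)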

Kolmogorov used the Richardson cascade
idea to equate the energy dissipation rate to the rate of energy transfer
from scale to scale, assuming that viscosity becomes
irrelevant in the inertial range. K41 scaling immediately implies scaling laws for
structure functions, that is moments of velocity increments.
The simplest instances are the longitudinal structure functions for
homogeneous isotropic turbulence, defined as
\[\tag{6}
S_p (l)\equiv \left \langle\left\{\left[{\vec v}( {\vec r} +{\vec \ell})
- {\vec v}({\vec r})\right ]\cdot\frac{\vec \ell}{l}\right\}^p\right \rangle,\]

where \(p\) is a positive integer, \({\vec \ell}\) denotes a spatial
vector
increment and \(l\) its modulus. According to K41 one should have
\[\tag{7}
S_p (l) = C_p\, (\varepsilon l)^{p/3},\]

where the \(C_p\) are dimensionless constants. All these scaling relations
are meant to apply within the inertial range, that is the range of scales
much smaller than the scales at which turbulence is produced and much
larger than the Kolmogorov dissipation scale \(\eta\) at which direct
energy dissipation into heat becomes important. K41 gives
\[\tag{8}
\eta = C_{\rm diss}\, (\nu^3/\varepsilon)^{1/4}.\]

Eq. (7) and one of its consequences, namely that the energy spectrum
of turbulence should follow a \(k^{-5/3}\) law (where \(k\) is the
wavenumber), are reasonably well supported by experimental and
numerical data. However, a careful examination of the scaling
laws, using for example the Extended Self Similarity method,
reveals small but measurable
discrepancies from K41. Indeed, structure functions at inertial-range
separations do display power-law behavior,
\(S_p(l) \propto l^{\zeta_p}\ ,\) but the
graph of the scaling exponents \(\zeta_p\) is not exactly the straight
line \(\zeta_p = p/3\) predicted by K41:
it displays curvature, as shown in Figure 2. Hence the
self-similarity assumed in K41 may actually be broken.
Presently the
scaling exponents \(\zeta_p\) are known with an accuracy of a few
percent and could well be universal, that is independent of
the mechanism by which the turbulence is driven. Obtaining better evidence
for or against universality is important.

Figure 2: The scaling exponents obtained by two independent direct numerical simulations of homogeneous isotropic turbulence at very high resolution. The discrepancy between the red circles and the green triangles gives an estimate of the error bars. Inset: the anomalous character of the scaling exponents, highlighted by plotting their ratio to the K41 value, which would be unity in K41 theory. Note that the error bars definitively rule out the dimensional prediction (courtesy L. Biferale). The purple squares refer to negative and small positive values of \(p\) obtained by Chen et al. (2005) (courtesy K.R. Sreenivasan).

This is in fact not surprising if we go back to the Swift–Richardson
picture of turbulence. Real fleas may not want to cluster in the
Swiftian fashion (he actually had in mind poets, not fleas) but there
are many natural instances of hierarchical clustering, for example in
plants, as shown in Figure 3.

Figure 3: An example of hierarchical clustering in nature.

Thanks to the work of Mandelbrot we know that such objects
are fractals. Mandelbrot was also the first to conjecture
that at infinite Reynolds numbers the energy dissipation of turbulence
concentrates on a fractal set of Hausdorff dimension less than three
(Mandelbrot 1968).

The fact that small-scale activity in high-Reynolds-number turbulence
becomes increasingly clumpy and that self-similarity is broken is
generally referred to as intermittency. It can be quantified by
measuring, for example, the flatness of velocity increments
\[\tag{9}
F(l) \equiv S_4(l) /S_2^2(l),\]

a quantity which should be independent of \(l\) in K41 and which
actually grows as \(l\) decreases.
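
A quick numerical check of this diagnostic (using a synthetic signal of
our own choosing): for a Gaussian signal the flatness of increments is
\(3\) at every scale, so any growth of \(F(l)\) as \(l\) decreases is a
signature of intermittency. The sketch below builds a Gaussian
random-phase signal with a \(k^{-5/3}\) spectrum and verifies that its
flatness stays flat:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2**20
k = np.arange(N // 2 + 1)                        # integer wavenumbers
amp = np.zeros(N // 2 + 1)
amp[1:] = k[1:] ** (-5.0 / 6.0)                  # E(k) ~ k^{-5/3} => |v_k| ~ k^{-5/6}
phase = np.exp(2j * np.pi * rng.random(N // 2 + 1))
v = np.fft.irfft(amp * phase)                    # Gaussian-like K41 signal
v /= v.std()
for l in [1, 4, 16, 64, 256, 1024]:
    dv = v[l:] - v[:-l]                          # increments at separation l
    S2, S4 = np.mean(dv**2), np.mean(dv**4)
    print(l, S4 / S2**2)                         # flatness: close to 3 everywhere
# Measured turbulence signals instead show F(l) growing at small l.
```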

Once it was realized that K41 is probably not exactly correct –
although it had already found many practical applications – a flurry
of activity started in order to understand intermittency. At first
phenomenological models were developed, such as the multiplicative
random model, which displays not only fractal dissipation but also the
phenomenon of multifractality, a concept with applications far beyond
turbulence (Benzi et al., Jaffard).
In the mid-nineties Kraichnan
conjectured that intermittency and anomalous scaling are already
present in a simple passive scalar model in which a scalar field
(say, a temperature field) is being advected by a prescribed Gaussian
random field with K41-type scaling and a very short correlation time.

Figure 4: Spatial behavior of a passive field advected by a Gaussian random field (Celani et al. 2001)

The conjecture was proven and – for the first time – a result about
intermittency was derived ab initio, that is from the basic
equations without recourse to ad hoc steps. This breakthrough made use
of techniques borrowed from quantum field theory.
Very roughly, to calculate the correlation function \(\psi_n\) of order
\(n\) of the passive scalar, one has to solve a multi-dimensional linear
partial differential equation of the form:
\[\tag{10}
L_n \psi_n = f_n,\]

where \(f_n\) is a prescribed function and \(L_n\) a prescribed linear
operator, both scale-invariant with known scaling properties. This seems
to determine the scaling properties of the solution, the exception being
those functions which sit in the null space of \(L_n\ .\) These are the
zero modes, which allow the breaking of scale invariance
(Gawedzki and Kupiainen, Chertkov, Shraiman and Siggia, Falkovich et al.).
Such zero modes, which can be
interpreted as quantities statistically conserved under transport by the
flow, are hard to calculate,
but they can be shown not to depend on initial and boundary conditions
nor on the way the scalar is fed into the flow, thus ensuring
universality of the (anomalous) scaling properties. It is not known
whether such conservation laws are associated with any symmetries, such
as the invariance properties of passive scalar transport in the absence
of molecular diffusion.
It is generally believed that a somewhat similar mechanism will guarantee
the universality of scaling for the full nonlinear problem of turbulence,
but suitable techniques to derive such results ab initio have
yet to be found.
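
Schematically, the solution of (10) is the sum of a particular (forced)
solution and of homogeneous solutions,
\[\psi_n = L_n^{-1} f_n + \sum_i c_i Z_{n,i}, \qquad L_n Z_{n,i} = 0,\]
and at small separations the correlation function can be dominated by
the zero mode with the smallest scaling exponent rather than by the
dimensionally scaling forced part; this is how scale invariance gets
broken despite the scale invariance of \(L_n\) and \(f_n\ .\)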

Bulk quantities, drag and its reduction

The K41 theory makes remarkable predictions on the statistical
properties of homogeneous and isotropic turbulence. Although the
effects of intermittency and breaking of scale invariance tell us that
K41 theory is not exactly true, Kolmogorov was able to build a
conceptual framework which enables us to understand in a quantitative way
what we mean by turbulence. One obvious question is whether one can
use the same framework to make predictions on bulk (global) quantities for
turbulent flows.

As a simple but not trivial example, consider the question of how much fluid
can be carried by a pipe of radius \(H\) for a given pressure gradient
\(\nabla p\ .\) In a laminar flow, the velocity profile is parabolic, with a maximum at the
centre of the channel equal to \(U_0 = v_*^2 H/\nu\ ,\) where \(v_*^2 \equiv H\nabla p /(2\rho)\) and
\(\rho\) is the density. Consequently, the Reynolds number of the flow is given by \(Re = U_0 H / \nu = v_*^2 H^2 / \nu^2\ .\)
For large enough \(Re\) the flow becomes turbulent and the mass throughput becomes smaller
than in the laminar case: a substantial part of the power input is spent on
maintaining the turbulent fluctuations, and the maximum average velocity at the centre of the channel, \(U_M\ ,\)
becomes substantially smaller than \(U_0\ .\) For a pipe of length \(L\ ,\) the total work done by the pressure gradient
is proportional to \(H^2 \nabla p L / \rho\) while the total average kinetic energy is proportional to \(\rho U_M^2 L\ .\)
The ratio between these two quantities is called the drag coefficient \(C_D\ ,\) namely:
\[
C_D \equiv \frac{\nabla p H^2 }{\rho U_M^2} = \frac{v_*^2}{U_M^2}.
\]
The interpretation of the drag coefficient \(C_D\) is rather intuitive: it measures how much
kinetic energy is acquired by the mean flow relative to the work done by the external forces.
For laminar flow in a pipe, \(U_M = U_0\) and \(C_D \sim 1/Re\ .\) At the onset of turbulence
\(C_D\) increases and eventually decreases very slowly
for increasing \(Re\ .\) There exists no systematic theory capable of
predicting the Reynolds-number dependence of \(C_D\ .\)
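
The laminar value quoted above follows in one line from the definitions:
using \(U_M = U_0 = v_*^2 H/\nu\ ,\)
\[C_D = \frac{v_*^2}{U_0^2} = \frac{\nu}{U_0 H} = \frac{1}{Re}.\]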

A beautiful although phenomenological approach has been developed by
von Kármán in the thirties. Using ideas from the slightly
later K41 theory, von Kármán's
approach can be recast as follows. For homogeneous pipe flow,
the velocity profile \(U\) depends only on the distance \(y\) from the
boundary and the description of the different statistical quantities becomes
easier by introducing Prandtl's dimensionless variables
\[\tag{11}
U^+(y^+) \equiv \frac{U(y)}{v_*}, \qquad y^+ \equiv \frac{y}{\delta}, \qquad \delta \equiv \frac{\nu}{v_*}.
\]

In all turbulent flows, one can always decompose the velocity into an average velocity field and turbulent fluctuations.
In homogeneous wall-bounded turbulent flows, the average velocity field has a non-zero component only in the streamwise
direction \(x\ .\) Denoting by \(\langle \ldots \rangle\) the average along the \(x\) direction, the momentum equation reads:
\[\tag{12}
\nu \frac{\partial U}{\partial y} + W = v_*^2\left(1-\frac{y^+}{H^+}\right),
\]

where \(W\) is the momentum flux towards the wall. Turbulent fluctuations take energy from the mean flow at a rate \(W\partial_y U(y)\ .\)
On average, the energy source of turbulent fluctuations must balance \(\varepsilon_t\ ,\) the rate of energy dissipation due to turbulence.
Both energy production and energy
dissipation are mostly concentrated near the boundaries. The conceptual advance made by von Kármán
was to stress that
in the range \(1 \ll y^+ \ll H^+\ ,\) one can expect the effect of viscosity to be negligible, i.e. the mean flow should not depend on \(\delta\ .\)
Then, using (12), one
obtains \(W=v_*^2\ .\) Next, using the Kolmogorov theory, one can assume that
energy dissipation is independent of the viscosity. By dimensional analysis, the most general expression for \(\varepsilon_t\) can
be written as \(v_*^3F(y^+)/\delta\ .\) Finally, by balancing energy production and dissipation, we obtain
\[\tag{13}
W \frac{\partial U}{\partial y} = \varepsilon_t(y) = \frac{v_*^3}{\delta}\, F(y^+) \quad \rightarrow \quad \frac{\partial U}{\partial y} = \frac{v_*}{\delta}\, F(y^+),
\]

where \(F\) is a universal function. As written,
the r.h.s. of the above equation depends explicitly on \(\delta\ .\) For
\(Re \rightarrow \infty\ ,\) according to the Kolmogorov theory, \(\partial_y U\) should
become independent of the viscosity, which is parameterized by the scale \(\delta\ .\) Therefore we reach the conclusion that either
\(F=0\) or \(F \sim 1/y^+\ ,\) and
the most general form of \(U^+\) must be
\[\tag{14}
U^+(y^+) = A + B\log\left(y^+\right),
\]

where \(A\) and \(B\) are universal constants. Equation (14) is
the prediction originally made by von Kármán and it allows us to
compute the value of \(C_D\) as a function of \(Re\) and the two
constants \(A\) and \(B\ .\) Note that for finite Reynolds number, we may expect
deviations (of the order of \(1/\log (Re)\))
from (14) which cannot be computed by using dimensional analysis. Existing data from laboratory experiments
and numerical simulations show remarkably good agreement with (14) (Procaccia and Sreenivasan).
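
To make the link with \(C_D\) explicit: since \(U_M/v_* = 1/\sqrt{C_D}\)
and \(H^+ = Re\sqrt{C_D}\ ,\) evaluating (14) at the centre of the channel
gives the implicit friction law \(1/\sqrt{C_D} = A + B\log(Re\sqrt{C_D})\ ,\)
which is easily solved numerically. The sketch below uses illustrative
values of \(A\) and \(B\) of our own choosing, close to the log-law
constants quoted in the literature:

```python
import numpy as np

A, B = 5.0, 2.5   # illustrative log-law constants (B ~ 1/kappa, kappa ~ 0.4)

def drag_coefficient(Re, iters=50):
    """Solve 1/sqrt(C_D) = A + B*log(Re*sqrt(C_D)) by fixed-point iteration."""
    x = 10.0                         # x = 1/sqrt(C_D), initial guess
    for _ in range(iters):
        x = A + B * np.log(Re / x)   # note Re*sqrt(C_D) = Re/x
    return 1.0 / x**2

for Re in [1e4, 1e5, 1e6, 1e7]:
    print(f"Re = {Re:.0e}   C_D = {drag_coefficient(Re):.2e}")
# C_D decreases only logarithmically with Re, as stated in the text.
```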

Let us remark that we did use the
K41 theory although the turbulent flow is certainly neither isotropic
nor homogeneous (in the \(y\) direction). There is, thus, a
hidden assumption that the basic relation of the K41 theory can be
extended beyond the framework in which the theory was proposed. This
idea can be worked out by assuming that turbulent kinetic energy and
energy dissipation are always related according to the K41 dimensional
analysis, which is the basis of the so-called \(K - \varepsilon \) model of
turbulent flow.

Within the obvious limitations of the argument so far discussed, we
want only to stress that the fundamental assumption of the K41 theory,
namely that energy dissipation is independent of the viscosity at
large Reynolds number, is the basic guideline for understanding the
dynamics of high-Reynolds-number turbulent flows, even beyond the case
of homogeneous and isotropic turbulence. It follows that flows with
boundaries do not necessarily behave in a way radically different
from the K41 picture.

According to our discussion in the previous section, we should expect
intermittency to act in wall-bounded turbulent flows as in any other
case of turbulence. Does intermittency change the von Kármán
theory? As far as bulk quantities are concerned, it seems that the
answer is no. However, if one looks at high-order moments of pressure
fluctuations at the wall (pressure is the force per unit area from which
we compute the drag coefficient \(C_D\)), then one discovers strong
intermittency and a breakdown of the predictions of dimensional analysis.
A quantitative explanation of intermittent fluctuations in wall-bounded
turbulent flows lies beyond the von Kármán as well as the K41 theory and
is a research problem under active investigation (Casciola et al., Toschi et al.).

Although
successful, the von Kármán theory of boundary layers is not "closed",
insofar as it does not predict the universal constants \(A\) and
\(B\ .\)
Clearly, scaling arguments, such as those obtained by
dimensional analysis, cannot be helpful in predicting the two
constants. Numerical simulations give estimates of \(A\) and \(B\) which
are in very good agreement with experimental data. An open question
is whether one can make further theoretical progress and predict
pure constants beyond dimensional analysis.

There are many physical problems where dimensional analysis does not provide
an answer. One example is the problem of drag reduction in
a turbulent pipe flow when a small amount of flexible polymers is
added (Lumley, Virk).
This phenomenon, also known as the Toms effect, was discovered in
the early forties and it cannot be explained by arguments
similar to the von Kármán or the Kolmogorov theory (for recent reviews see
Christopher et al., Procaccia et al.).
Since polymers can be stretched by turbulent fluctuations, a certain amount
of turbulent kinetic energy is transferred to the polymers. This mechanism decreases turbulent fluctuations,
especially near the wall. Thus the momentum flux towards the wall decreases
as well, which may eventually lead to a
decrease in the drag coefficient \(C_D\ ,\) i.e. more mass throughput for the same power input.
It is a challenging question to obtain a quantitative theory of the amount of drag reduction as a function
of the polymer concentration \(C\) and the physical properties of the polymer (notably its maximum extension length \(L_p\) and
the characteristic time \(\tau_p\) for a stretched polymer to relax its extension from \(L_p\)).
de Gennes and Tabor suggested that the
presence of polymers in high-Reynolds-number turbulence modifies the
Kolmogorov dissipation scale, leading to a different energy
balance. However, this amounts to a change in the effective Reynolds
number that can hardly affect the balance in the energy-containing
range and thus the drag. Actually, it has been argued (Procaccia et al.)
that drag reduction by addition of polymers is equivalent to
introducing a space-dependent effective viscosity which increases linearly
with the distance from the wall: the amount of drag reduction depends on
the slope of the effective viscosity, which is a rather complex function
of both \( C\) and \(\tau_p\ .\)

Polymer addition is not the only way to reduce drag. Experiments and numerical
simulations show that the addition of surfactants or of air bubbles in water can lead
to drag reduction. It is unclear whether one will eventually discover a universal
mechanism for drag reduction. The whole subject, of considerable importance for many
engineering applications, is under active investigation.

Blow up

A fundamental milestone in three-dimensional turbulence is the idea
that energy dissipation becomes Reynolds-number independent for large
Reynolds numbers. It is present both in K41 and in subsequent theories
of intermittency and in good agreement with experiments and numerical
simulations (see Section 5.2 of Frisch and references therein).
Since dissipation is proportional to the
viscosity, it is somewhat paradoxical that it can tend to a non-zero
value when the viscosity vanishes, hence the name viscous anomaly.
For the passive scalar problem discussed in Section 2, a result which is
the counterpart of the viscous anomaly can be derived rigorously. The
mathematical problem remains open for the three-dimensional
Navier–Stokes equations and is connected to some deep issues of
regularity for both the Euler and the Navier–Stokes equations.

Onsager was the first to observe that solutions to the 3D
incompressible Euler equations need not conserve energy if the
solutions lack regularity (more precisely if they are not Hölder
continuous with an exponent larger than \(1/3\)); Duchon and Robert gave
an expression of the dissipative anomaly (the local amount of
dissipation) for such solutions of the Euler equations. For such questions see
the review paper by Eyink and Sreenivasan.

The key issue, of course, is: how regular are the solutions to
the Navier–Stokes and Euler equations when the initial conditions
are sufficiently smooth? Such issues are reviewed in Majda and Bertozzi
and in Temam and, in a more elementary way, in Rose and Sulem.
In two dimensions in a bounded domain it has
been known since the thirties that Euler flow preserves (sufficient)
initial smoothness forever. In three dimensions only finite-time
regularity is guaranteed. This can be understood somewhat naively by the
following argument (part of which actually survives in much more complex
functional analysis estimates).

For 3D Euler flow, the vorticity \({\vec \omega} \equiv \nabla \times {\vec v}\)
satisfies
\[D_t {\vec \omega} \equiv \partial_t {\vec \omega} + {\vec v}\cdot\nabla {\vec \omega}
= {\vec \omega}\cdot\nabla {\vec v},\]
whose right-hand side describes the stretching of vorticity by velocity
gradients. Now, let us be sloppy: we identify the velocity gradient
\(\nabla {\vec v}\) and the vorticity (which is actually its antisymmetric
part) and furthermore blur the distinction between scalars, vectors
and tensors to rewrite the preceding equation as
\[D_t \omega \simeq \omega^2,\]
which we consider as an ordinary differential equation
(written in Lagrangian coordinates). It is obvious that if
initially \(\omega_0 \equiv \omega(0) >0\ ,\) then \(\omega(t)\)
will blow up (become infinite) at \(t_\star = 1/\omega_0 \ .\)
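
Indeed, under these drastic simplifications the equation integrates
explicitly to
\[\omega(t) = \frac{\omega_0}{1-\omega_0 t},\]
which diverges as \(t \to t_\star = 1/\omega_0\ .\)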

However our understanding of the mathematics of the Euler equations is
so limited – even a quarter of a millennium after
their introduction – that we cannot do much
better than obtaining, for various function-space norms, an upper bound
which behaves as for the Burgers equation. The numerical evidence about 3D
Euler flow is not clearly for or against finite-time blow up. There
are actually mechanisms
which could make 3D incompressible flow much tamer than compressible
flow. One of them, depletion of nonlinearity, is discussed in Section 5.

One might think that the presence of viscous dissipation makes the
problem much easier and that blow up would then be ruled out. This is
indeed conjectured by most specialists but has never been proven.
Actually the Clay Mathematics Institute has decided to devote one of its
million-dollar Millennium Prizes to precisely this issue.
Finally there is a tricky issue at the interface of the Euler and
Navier–Stokes blow-up problems. In the absence of boundaries
one can show that hypothetical all-time regularity for the Euler
equations implies the same for Navier–Stokes, but not so
with boundaries (Kato).

Increase of predictability and depletion of nonlinearity

Figure 5: Snapshot of the vorticity field in two dimensional turbulence. Red and blue colors refer to positive and negative vorticity. A few well-defined coherent structures are observed.

Turbulence in two space dimensions shows some interesting new features
with respect to the three-dimensional case discussed so far.
Vorticity conservation prevents energy dissipation from staying constant
when the viscosity is decreased. Thus the basic assumption of K41 theory
becomes inapplicable.
However enstrophy, i.e. the integral of the vorticity
squared, can be dissipated, and the energy cascade of the K41 picture
can be replaced by an enstrophy cascade, along with an inverse cascade
of energy towards large scales (Kraichnan and Montgomery).
As soon as high-resolution numerical
simulations of two-dimensional flows became available to the scientific
community, one striking feature emerged from flow visualization and,
in particular, from visualization of the vorticity field
(McWilliams, Benzi et al., Legras et al.): the
emergence of long-lived coherent structures, mostly in the form of
circular vortices. A careful analysis of the flow field shows that these
structures are stable nonlinear solutions of the Euler
equations.
As such, coherent structures
represent, locally, a depletion of nonlinearity in a turbulent
flow: within a coherent structure the enstrophy
cascade is inhibited and the structure can survive for extremely long times.
Whenever the vorticity field is dominated by a few coherent structures, the predictability time
of the large-scale dynamics increases, being no longer constrained by the small-scale fluctuations.
The above picture is quite generic and is not necessarily limited to two-dimensional turbulence,
although strong evidence of depletion of nonlinearity has been observed
mostly in the 2D case.

The increase of predictability time by depletion of nonlinearity is certainly relevant for geophysical applications.
In particular, for atmospheric and oceanic dynamics, one can show the
relevance of some form of 2D dynamics. For example, in the case of atmospheric
flows, one can prove that, in the absence of dissipation and of mechanical and
thermodynamical forcing, there exists a form of vorticity, named the
Ertel potential vorticity, which
is conserved. The Ertel potential vorticity is defined as \(\vec{\Omega}\cdot \vec{\nabla} \theta/\rho\ ,\) where \(\vec{\Omega}\) is
the absolute vorticity (the sum of the flow vorticity and the planetary vorticity of the rotating Earth) and \(\theta\) is the potential temperature (surfaces
of constant \(\theta\) are isentropic). A particular case of Ertel potential vorticity is obtained at midlatitudes, where the Coriolis force is almost
balanced by the pressure gradients (geostrophic balance).
Quasigeostrophic flows can be described using the idea of a potential enstrophy
(squared potential vorticity) cascade from large to small scales (where the
flow is assumed to be dissipated as three-dimensional turbulence). One can then expect
quasigeostrophic flows to form stable coherent structures
similar to those observed in ordinary 2D turbulence, and again expect
flows dominated by coherent structures to be more predictable than suggested by
dimensional arguments.

We want to emphasize that the above scenario, still
rather speculative at this stage, implies that there
is an enhancement of the predictability time, i.e. of
our ability to forecast weather and/or ocean circulation, because of coherent structures.
This enhancement of the predictability time could be an intermittent process, i.e. it could happen at random times, depending on whether or not
coherent structures dominate the flow.
Also, coherent structures can change the whole energy budget of
the flow; in particular, momentum and heat fluxes may be changed in
a non-trivial way, with important consequences for the long-time
behavior of the general circulation, i.e. the climate.

Overview of turbulent flows in different scientific fields

There are many important and fascinating problems in physics where turbulence plays an important role.
In many cases, one can develop arguments similar to those of the K41 theory,
but in other cases new concepts are needed.

Here is an example of a long-standing problem not easily handled by
a simplistic argument: when the density is a slowly varying function
of temperature, one can observe rather strong turbulence whenever a temperature gradient
is acting in the direction opposite to that of the gravitational field
(natural convection). In this case, one is usually interested in
predicting the heat flux in terms of the temperature difference \(\Delta
T\ ,\) the buoyancy force \(g\beta\ ,\) the temperature gradient \(\Delta
T/H\ ,\) the viscosity \(\nu\) and the thermal diffusivity \(\kappa\ .\) Here, \(\beta\)
designates the volume expansivity and \(H\) the thickness of the layer.
One can
easily show that the dimensionless turbulent heat flux \(Nu\) (the Nusselt
number) must be a function of the Rayleigh number \(Ra = g\beta \Delta T H^3/(\nu\kappa)\)
and the Prandtl number \(Pr= \nu/\kappa\ .\) From the equations of
motion, one is able to compute the energy dissipation of the
turbulent flow which must be proportional to \(Nu Ra\ .\) According to the
K41 approach, energy dissipation should also be of the order of \(U^3/H\)
where, as usual, \(U\) is a characteristic velocity of the turbulent
flow. Since the source of energy in natural convection is the
potential energy, \(g \beta \Delta T H\ ,\) we can estimate \(U\sim (g
\beta \Delta TH)^{1/2}\ ,\) i.e. \(U \sim Ra^{1/2}\ .\) It follows that the
straightforward prediction based on K41 assumption is that \(Nu \sim
Ra^{1/2}\) (Kraichnan). It so happens that the above prediction, with a suitable
estimate of constants, is also an upper bound of the turbulent heat flux for
any Prandtl number.
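
In compact form, the chain of estimates just described reads (in the
nondimensional units used above, and up to Prandtl-number-dependent
factors)
\[\varepsilon \propto Nu\,Ra, \qquad \varepsilon \sim \frac{U^3}{H}, \qquad
U \sim Ra^{1/2} \quad \Longrightarrow \quad Nu\,Ra \sim Ra^{3/2},
\quad Nu \sim Ra^{1/2}.\]
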
However, for infinite \(Pr\ ,\) a rigorous upper bound gives at most
\(Nu \sim Ra^{1/3}\ .\) When \(Pr \rightarrow \infty\ ,\)
the boundary layers are laminar for any \(Ra\) and, consequently, the turbulent heat flux can change dramatically.
Thus, a rather complex behavior of the relation between \(Nu\) and \(Ra\) must be
expected, depending on the many parameters entering the system:
one definitely must go beyond simple dimensional analysis (for
recent reviews see Ahlers et al., Procaccia and Sreenivasan and references therein).

Thermal convection, with its complex and fascinating properties, is just one
example of the many important systems which are still in need of a full systematic
scientific understanding from the theoretical, experimental and computational
points of view. Here is a short and incomplete list of interesting fields where understanding
the nature of turbulence could lead to significant scientific breakthroughs:

MHD (magnetohydrodynamic) turbulence, which plays an important role in many astrophysical and geophysical problems, such as the generation of magnetic fields in heavenly bodies, in planets and (sometimes) in large-scale industrial facilities;

Compressible turbulence, which is crucial in many engineering applications, for example in aeronautics and combustion;

Geophysical fluid dynamics and climate theory. We already briefly mentioned some interesting links between two-dimensional turbulence and the enhancement of predictability time. This is just one aspect of a major scientific problem concerning the large-scale atmospheric and oceanic circulations and climate;

Superfluid turbulence.

The above list is not exhaustive, nor does it indicate any scientific priority. It
just tells us that the understanding of turbulent flows remains a fundamental
issue in modern physics.