I am looking for the source of a photograph of George Forsythe wearing
a business suit and leaning against a tape drive in an old-fashioned
computer center with an asbestos tile ceiling. Does anyone know the
photo?

I was surprised to find two definitions. Both deal with a vector-
vector function f at a point x. The condition number is:

1) the lim sup at x of a ratio: the norm of changes to the function
divided by the norm of changes to the argument.

2) the minimum of the coefficients that appear in perturbation bounds
of the form: the norm of the change to the function at x is bounded by
a coefficient times the norm of the change to x, plus an error term
that is Landau's little o of the norm of the change to x.
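In symbols (my own notation for the two statements above, with the
change written as f(x+dx) - f(x)):

```latex
% Definition 1: lim sup of the perturbation ratio at x
\kappa_1(x) \;=\; \limsup_{\delta x \to 0}
  \frac{\| f(x+\delta x) - f(x) \|}{\| \delta x \|}

% Definition 2: smallest coefficient in a first-order perturbation bound
\kappa_2(x) \;=\; \min \Bigl\{\, c \ge 0 :
  \| f(x+\delta x) - f(x) \| \le c \,\| \delta x \| + o(\| \delta x \|) \,\Bigr\}
```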

I will not dwell on the sometimes complicated terminology in the
literature, other than to say I have ignored scalings that might be
used to make the condition numbers "normwise relative." It seems to
me that the norms can incorporate those scalings, so the statements
above encompass the relative case.

The first definition is usually attributed to Rice (1966), who gave an
expression equivalent to the lim sup definition. The second definition
might as well be attributed to Wilkinson (1973), who said he would
refer to the coefficient in an error bound as a condition number; it
is of course natural to ask for the smallest such coefficient.

I could imagine an epsilon-and-delta proof that these two definitions
are the same, except that the second definition makes me uneasy by
stating "min" rather than "inf". The textbooks I examined that use
either definition further say that the condition number of a
continuously differentiable function is the induced norm of its
Jacobian matrix.
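To connect the textbook statement with the definitions, here is a
minimal numerical sketch (the matrix A, the point x, and all variable
names are my own illustration, not from any of the texts): for the
linear map f(x) = Ax the Jacobian is A itself, so the absolute
condition number is the induced 2-norm of A, and the normwise relative
condition number follows by scaling with ||x|| / ||f(x)||.

```python
import numpy as np

# Illustrative example: f(x) = A x is continuously differentiable with
# Jacobian J(x) = A, so the absolute (normwise) condition number at x
# is the induced 2-norm of A, i.e. its largest singular value.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

def f(x):
    return A @ x

x = np.array([1.0, 1.0])

abs_cond = np.linalg.norm(A, 2)  # induced 2-norm of the Jacobian
# Scaling by ||x|| / ||f(x)|| gives the normwise relative condition number.
rel_cond = abs_cond * np.linalg.norm(x) / np.linalg.norm(f(x))
```

For this A the largest singular value, hence the absolute condition
number, is about 3.65; at this particular x the relative condition
number is a third of that, since f happens to scale this x by exactly 3.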

These definitions raise the following questions:

a) are 1 and 2 equivalent perhaps with some minor modification?

b) what is the purpose of this level of abstraction? That is, for
what non-differentiable functions are condition numbers needed in
scientific or engineering problems?

There are of course other cases to be considered such as componentwise
condition numbers and problems with indeterminate solutions.

Joseph Grcar asked "What is the general definition of the
"condition number" of a numerical analysis problem?" (NA Digest Vol.
10: Issue 26).
One of the best discussions of condition numbers for
numerical analysis problems is that given by J. H. Wilkinson, in
"Rounding Errors in Algebraic Processes", National Physical
Laboratory Notes on Applied Science No.32, HMSO, London, 1963, on
pages 29, 33, 91-94 and 135-136. He explained that "We have avoided
framing a precise definition of condition numbers so that we may use
the term freely" (p.29). Also, Wilkinson discussed the condition of a
polynomial with respect to zeros (p.38), and the condition of a
matrix with respect to computing eigenvalues (pp.136-138). He
remarked that "The term condition number seems first to have been
used by Turing in his paper on rounding errors in matrix processes
[in 1948], though the term 'ill-condition' had been in common use
among numerical analysts for some considerable time before this" (p.
33). Wilkinson gave many detailed examples in his major treatise on
"The Algebraic Eigenvalue Problem", Oxford University Press, 1965.
A noteworthy early treatment of conditioning was given by Isaac
Newton's young friend Roger Cotes (1682-1716) in his short tract
"Aestimatio Errorum in Mixta Mathesi", published in 1722. Cotes gave
complete perturbation analyses (first-order) for plane triangles and
for spherical triangles, and he explained how to conduct the
measurement of astronomical angles in such a way as to minimize the
uncertainty in the computed result. [Ronald Gowing, "Roger Cotes -
Natural Philosopher", Cambridge University Press, 1983, pp.91-109.
Reviewed in Math. Rev. 87b:01033.]
Garry J. Tee, Department of Mathematics, University of Auckland,
tee@math.auckland.ac.nz

On May 10, the first official version of the free data-assimilation
toolbox OpenDA was announced. A complete version for both Windows and
Linux, including the source code, is now available through the web site
www.openda.org and will soon be available on sourceforge.net.

OpenDA is a collection of building blocks and tools that allow rapid
implementation of data-assimilation for arbitrary (large scale)
numerical models. It includes various approximate Kalman filters and
parameter estimation methods. Software components are available to
couple the methods to your numerical model. Language bindings exist
for Fortran, C/C++ and Java.

Tools are available for using models with data-assimilation. These
include a workbench that allows you to select any of the available
data-assimilation methods and configure its parameters.

The OpenDA software supports high performance computing. The
compute-intensive data-assimilation methods have been parallelized
using MPI. The design of OpenDA allows it to handle models that have
themselves been parallelized, either with MPI or with OpenMP.

OpenDA has been developed in a joint effort by Delft University of
Technology, the water management research institute Deltares, and the
scientific software engineering company VORtech. It has been used in
various practical applications in fields such as water management and
atmospheric chemistry. Couplings have been made to the open source
wave model SWAN, the flood early warning system FEWS, and the open
source atmospheric chemistry model Chimere, as well as to several
closed-source applications.

More information can be found at www.openda.org or directly through the
e-mail
address info@openda.org. The OpenDA association is interested in
projects that
will help to spread the use of OpenDA and to develop it further. Please
communicate any initiatives to info@openda.org.

The Award Committee -- Stephan Dahlke, Universitaet Marburg, Germany,
and Josef Dick, University of New South Wales, Australia -- determined
that the following paper exhibited exceptional merit and therefore
awarded the prize to:

In modern applied mathematics, theory and rigorously analyzed
computation go
hand-in-hand. This calls for a text that discusses both in detail, and
we have
undertaken to provide one. The discussion of numerical methods strives to be
visual and comparative, based on carefully chosen examples of prototypical
methods and model problems.

A more detailed description of the approach to numerical methods, with
a link to the excerpt of the first 38 pages of the chapter on design
and analysis of computational methods, as well as several sample
figures, may be found at:

http://www.math.utah.edu/~palais/ODEMC/ODEMC-Numerical.html

This book also provides a conceptual introduction to the theory of
ordinary differential equations, concentrating on the initial value
problem for equations of evolution, with applications to the calculus
of variations and classical mechanics, along with a discussion of
chaos theory and ecological models. While the book would be suitable
as a textbook for an undergraduate or elementary graduate course in
ordinary differential equations, the authors have also designed the
text to be useful for motivated students who wish to learn the
material on their own, or who want to supplement the ODE textbook used
in a course they are taking with a more conceptual approach to the
subject.

Readers of NA Digest may also be interested in a page linked from the
web companion that discusses an inconsistency in the literature in the
definition of the region of absolute stability of a numerical method.
Examples from eight well-known books, half of which give definitions
requiring decay of solutions and half of which require only
boundedness of solutions, may be found here:

39th SPEEDUP Workshop on High Performance Computing,
September 6/7, 2010, at ETH Zurich

The intention of this workshop is to present and discuss the
state of the art in high-performance and parallel scientific computing.
Presentations will focus on algorithms, applications, and software
issues related to high-performance parallel computing. The sessions on
Monday, Sept 6 will concentrate on software environments for large
scale simulations and on issues of fault tolerance in massively
parallel systems.

Topic:
This workshop of the BMBF research network SyreNe (http://www.syrene.org)
aims to bring together researchers and users of model order reduction
techniques with special emphasis on applications in micro- and
nanoelectronics. Contributions from other areas such as computational
electromagnetics, mechanical systems, computational fluid dynamics and
related disciplines are welcome.

A workshop on 'Multiscale simulation of heterogeneous materials and
coupling of thermodynamic models' will be held in Leuven, Belgium, on
January 12-14, 2011.

Website: http://www.cs.kuleuven.be/conference/multiscale11/

Since the macroscopic properties of a large class of materials depend
on heterogeneities at micro- and mesoscopic scales, appropriate
mathematical models are needed to adequately describe the evolution of
the spatial structure and composition variations at each of these
scales. This has led to a number of modeling approaches that describe
a material’s behavior on different scales, ranging from the (sub)atomic
to the continuum level.

In this workshop, the focus will be on two closely connected themes:
computational multiscale methodology and the coupling of thermodynamic
models.

We refer to the website for a list of invited speakers. Contributed
presentations of 20 minutes are welcome on topics related to the
scope of the workshop. We also encourage each participant to bring a
poster on his/her work, which will be displayed for the whole duration
of the workshop. Please submit your contribution before August 31,
2010 via the website.

This workshop is sponsored by the Scientific Research Networks
'Advanced numerical methods for mathematical models', 'Surface
modification of materials', 'Computational modeling of materials', and
'Structural and chemical characterization of materials at the micro-
and nanoscale', the European COST MP0602 action, as well as by Res
Metallica (consisting of OCAS, Bekaert and Umicore).

With the continuing advances in high-performance computing
(HPC) the role of computational science and engineering (CSE)
has gained significant importance over the last decades.
At the same time scientific simulation faces a number of
challenges. Many of those are combinatorial in nature and
unified by a common set of abstractions, data structures, and
algorithms based on combinatorics, graphs, and hypergraphs.
CSC11 provides a forum for researchers interested in the
interaction of combinatorial mathematics and algorithms
with CSE. The workshop will follow and in part overlap with
the 2011 SIAM Conference on Optimization (OP11,
http://www.siam.org/meetings/op11/).

We invite 2-page extended abstracts for 25-minute oral
as well as for poster presentations to be submitted via
http://www.easychair.org/conferences/?conf=csc11.

The Department of Mathematical Sciences, The University of Liverpool,
UK, has 3 PhD positions. Please forward to any suitable candidates.

(1) Graduate TA Posts 1 and 2:
These are funded by the University of Liverpool in partnership with
the Xi'an Jiaotong-Liverpool University (XJTLU). Candidates can choose
any available projects within Mathematical Sciences. Each post is for
4 years, starting from 1 Sep 2010, at the research council level of
£13,590 pa. There are no residence requirements. The GTA application
deadline is 6 July 2010.
See http://www.liv.ac.uk/maths/Prosp_PG/2010JulyTeachingAssistants.pdf
(2) Post 3: EPSRC INDUSTRIAL CASE studentship
This is funded by the EPSRC, following a successful competition for
the general pool, for the project
“Blind Deblurring Techniques for Retinal Imaging”,
in collaboration with the St Paul's Eye Unit of the Royal Liverpool
and Broadgreen University Hospital, a leading eye hospital that treats
around 90,000 patients per year from all over the UK and further
afield. Essentially involving variational imaging modelling and
advanced numerical methods, the project will be supervised by Prof Ke
Chen (Math Sciences) and Prof Simon Harding (St Paul's Eye Unit).
The usual UK EPSRC rules and conditions apply.
See http://www.liv.ac.uk/maths/Prosp_PG/2010JulyCMIT_CaseStudent.pdf