In addition to the 8 postings of E&A below, click here to download the file VVCSE98_errata.doc. This is a Microsoft Word document that contains corrections to the text in markup (Track Changes) format. Turn on markup display in Word to see the changes.

The principal result of this article by Prof. Toshiyuki Hayase of Tohoku University, Sendai, Japan, is the demonstration that QUICK is superior to central differencing in yet another respect: QUICK converges monotonically for the problem of unsteady turbulent flow in a square duct (calculated without a turbulence model), whereas central differencing does not, making error estimation/banding problematic.

Additionally, the article provides further motivation for the use of Fs = 3 in the definition of the GCI (Chapter 5), and for the use of p = 2 in the QUICK method.

More often than not, papers using the GCI have confused it with an error estimator. The GCI is not an error estimator but an error band. In fact, it is defined as Fs = 3 times the (generalized) Richardson error estimator. Asymptotically, one would expect the GCI to be conservative by a factor of Fs = 3. The rationale for this conservatism is based primarily on the common practice of performing grid convergence tests using only two grids. For studies with many grids which establish conclusively a constant observed rate of convergence p, I recommend the value Fs = 1.25. (See pages 120-123.) Several good researchers have criticized me for using such a conservative value as Fs = 3, but Hayase’s results tend to confirm my approach. As he pointed out, the fine-grid GCI for mean bulk velocity with central differencing (in his Grid C) is half the real error. This would be bad enough, but bearing in mind that Fs = 3, the real error is thus 6 times the (generalized) Richardson Error Estimate. I think that this again shows that the conservatism of Fs = 3 is more than justified for the difficult problem computed.

Although the author did not mention it, his calculation of the fine-grid GCI for QUICK on Grid B, like the result for central differencing on Grid C, is less than the real error (15 < 21). However, this is due to the use of the theoretical rate of convergence (p = 3) for QUICK rather than the observed rate. For QUICK, there is ambiguity in the rate of convergence p. Although the advection term is 3rd order for 1-D problems, its order degrades in multidimensions, or at least is sensitive to the implementation. Also, other discretizations in the code may be second order. Examination of Hayase’s Figure 9(a) indicates that in fact the observed convergence rate for the QUICK calculation is precisely p = 2 for the mean bulk velocity. (Applying Equation 5.10.6.1, page 131 of Verification and Validation in Computational Science and Engineering to the values read as f3 = 1.64, f2 = 1.2, f1 = 1.09 with r = 2 gives exactly p = 2.) If the observed rate of convergence p = 2 is used, the new GCI for QUICK is 7/3 times the old value of 15, or 35, which is now conservative compared to the real error (35 > 21). Again, the GCI would not be conservative even with p = 2 unless Fs = 3, or at least 1.8.
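The arithmetic above can be checked with a few lines of code (a sketch; the function names and the use of a relative fine/medium difference in the error estimator are my choices, not notation from the text):

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed rate of convergence p from fine (f1), medium (f2), and
    coarse (f3) grid solutions with constant refinement ratio r,
    as in Eq. 5.10.6.1."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, Fs=3.0):
    """Fine-grid Grid Convergence Index: Fs times the (generalized)
    Richardson error estimator of the relative fine/medium difference.
    The GCI is an error band, not an error estimator."""
    eps = abs((f2 - f1) / f1)
    return Fs * eps / (r**p - 1.0)

# Hayase's mean bulk velocity values, read from his Figure 9(a):
p = observed_order(1.09, 1.2, 1.64, r=2)    # gives exactly p = 2
# Switching from the theoretical p = 3 to the observed p = 2 multiplies
# the GCI by (2**3 - 1)/(2**2 - 1) = 7/3, e.g. 15 -> 35.
```

Note that the 7/3 factor depends only on r and the two values of p, not on the solution values themselves.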

This note, based on Chapter 10, Section 10.23, pp. 331-335, has been published in the "Discussion" section of JFE along with the reply by H. W. Coleman and F. Stern. The authors accept my point about the validity of "trend capturing" in Figure 10.23.1, p. 332, but only for a "very special case." They also accept the need for including some required error tolerance in the definition of validation, and agree that the situation that I describe on page 335 should also be considered a successful validation even though it does not fit their original definition; however, they do not agree with my use of Eqn. 10.23.7, page 335.

E & A #5 posted 03/13/99

The following E&A are due to the courtesy of Prof. Dominique Pelletier.

Page 162. The paper by Pelletier and Ignat (1995) already considers the k-omega and k-tau models as well as the k-epsilon model for a shear layer. In all cases the fields of u, v, p were identical. The fields of k were also identical, while those of omega or tau were adjusted so as to yield the same eddy viscosity distribution.

Page 202. The argument that the ratio of specific heats -> 1 as Mach number -> 0 cannot hold, even though it is a belief commonly held by too many people. See R. L. Panton's book Incompressible Flow, John Wiley & Sons, New York, 1984, pp. 258-260. The error arises from inappropriate nondimensionalization.

Page 222. Homogeneous boundary conditions were used in the subject paper, but are not necessary to the methodology described. They are necessary only if collocated variables are used.

Page 227. My (PJR) interpretation of the example given on the inherent limitation of
the energy norm used in common single-grid error estimators is debatable. Prof. D. Pelletier maintains (as others would) that even if nodal values are exact, this does not mean that the approximation of the solution is
exact. A non-zero energy norm for the simple problem is the right answer, because in the FEM, the approximation is given by the pairing of nodal values and interpolation functions. Even if the nodal values are exact,
linear interpolation between nodes yields an error, which is measured by the energy norm; it is an interpolation error estimate. Linear interpolation is error free for only linear functions over the domain; it will
display interpolation errors for all other functions. I (PJR) am still of the opinion that the example demonstrates the inadequacy of the error measure. When all nodal values are exact for all discretizations, yet the error measure shows non-zero values, something is amiss. The interpretation hinges on a long-standing difference of approach between FDM and FEM.
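The interpolation-error behavior at issue can be made concrete with a small computation (my own illustrative choice of problem, not one from the discussion): interpolate u(x) = x**2 linearly through exact nodal values on [0, 1] and evaluate the energy (H1-seminorm) error in closed form.

```python
import math

def energy_norm_error(n):
    """Energy (H1-seminorm) error |u - u_h|_1 for piecewise-linear
    interpolation of u(x) = x**2 on [0, 1] through n+1 EXACT nodal
    values on a uniform mesh of n elements.

    On each element [a, a+h] the interpolant slope is 2a + h, so the
    derivative error is 2x - (2a + h); its squared integral over the
    element is h**3/3, and summing over the n elements gives h**2/3.
    """
    h = 1.0 / n
    return math.sqrt(n * h**3 / 3.0)   # = h / sqrt(3)

# Nodal values are exact on every mesh, yet the energy norm never
# vanishes; it only halves as h halves (first order in h):
errors = [energy_norm_error(n) for n in (4, 8, 16)]
```

This exhibits both positions: the nodal values carry no error on any mesh (the FDM view), yet the FEM approximation, nodal values paired with linear interpolants, has a non-zero energy norm that vanishes only as h -> 0.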

E & A #4 posted 01/23/99

Recent references on Verification and Validation are the following.

AIAA (1998), Guide for the Verification and Validation of Computational Fluid
Dynamics Simulations, AIAA G-077-1998. Available through

AIAA

1801 Alexander Bell Drive

Suite 500

Reston, VA 22091

for US$24.95 + shipping. 19 pages.

Oberkampf, W. L. (1998), “Bibliography for Verification and Validation in Computational Simulation,” Sandia Report SAND98-2041, September 1998. Available through

Sandia National Laboratories

P. O. Box 5800

Albuquerque, NM 87185-0825

111 pages + distribution list. This bibliography contains abstracts or brief tables of contents, a listing of authors by year, and brief evaluation notes. It is highly recommended.

Blottner, F. G. and Lopez, A. R.
(1998), Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification, Sandia Report SAND98-2222, October 1998. Available through

Sandia National Laboratories

P. O. Box 5800

Albuquerque, NM 87185-0825

77 pages + distribution list. Rigorous application of Richardson extrapolation not only for accuracy estimation and local truncation error, but also to determine whether the governing equations are well posed. Study of restrictions on non-uniform mesh variation for various families of discretization schemes, and of sequential 1-D refinement.

An early paper using rigorous application of Richardson Extrapolation in difficult 2-D turbulent boundary layer
problems, this reference was overlooked in Verification and Validation in Computational Science and Engineering. (Cited in Blottner and Lopez, 1998, above.)

E & A #3 posted 01/21/99

An additional reference for what is herein called the “Method of Manufactured Solutions” is the “Prescribed Solution Forcing Method” of Dee (1991). See also
Wang (1996).

The author suggests (1) the time-dependent differentially heated cavity, and (2) the driven cavity with a sinusoidally oscillating lid, for time-dependent benchmarks. These appear to be excellent choices, since they
are easily reproducible and contain no vorticity singularities.

The author gives a few 1-D solutions and a 2-D solution of isentropic unsteady
flow, useful for benchmarking.

E & A #1 — Chapter 3.

In retrospect, it seems that early instances of the use of what we now call the Method of Manufactured Solutions were cited in Roache (1972, pp. 363-365; see also the recent edition, Roache, 1998B, Fundamentals of CFD). These include Greenspan (1967) and Gourlay and Morris (1968A,B). Although the general method was not presented, it seems clear that these authors used the approach to generate an ad hoc exact solution for time-dependent model equations. Greenspan solved only the linear parabolic 1-D heat conduction equation with a source term, but Gourlay and Morris (1968A) solved a 1-D nonlinear advection + source PDE (with both a “manufactured” source term and a “manufactured” time-dependent nonlinear advection term) with a solution form of

(They did not include diffusion terms.) In Gourlay and Morris (1968B), they solved a 2-D nonlinear conservation-form advection + source equation
(again, no diffusion terms) with a “manufactured” source term; they compared the discrete solution only in the steady state to the exact steady state solution of

Obviously, the simple solution form was chosen first, then passed through the PDE to generate the problem. (Undoubtedly, many of the
non-infinite-series classical solutions in engineering were obtained this way, i.e. beginning with a solution form.) See also B. K. Crowley (1967), cited on p. 366 of Roache (1972).

What is strange is that the notion persisted, often repeated, that we did not have any non-trivial solutions to the full nonlinear Navier-Stokes equations,
when all we have to do is “complicate” the problem a little with the addition of a source term, and we can generate all the solutions we want. The key concept is that, for code Verification, these solutions need not be
physically realistic.
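As a minimal sketch of the recipe (the PDE, solution form, and solver here are my own illustrative choices, not those of Greenspan or Gourlay and Morris): choose the solution first, pass it through the PDE to manufacture the source term, then verify that a discrete solver reproduces the chosen solution at the expected rate.

```python
import math

# Manufactured problem for the 1-D heat equation  u_t = u_xx + Q
# on 0 <= x <= pi.  Choose the solution FIRST (physical realism is
# not required):  u(x, t) = exp(-2t) * sin(x).
# Substituting: u_t = -2u and u_xx = -u, so the manufactured source is
#   Q(x, t) = u_t - u_xx = -exp(-2t) * sin(x).

def u_exact(x, t):
    return math.exp(-2.0 * t) * math.sin(x)

def source(x, t):
    return -math.exp(-2.0 * t) * math.sin(x)

def max_error_ftcs(nx, t_end=0.1):
    """Solve the manufactured problem with explicit (FTCS) differencing
    and return the maximum pointwise error vs. the chosen solution."""
    dx = math.pi / nx
    dt = 0.25 * dx * dx                      # stable: dt/dx**2 <= 1/2
    u = [u_exact(i * dx, 0.0) for i in range(nx + 1)]
    t = 0.0
    while t < t_end - 1e-12:
        step = min(dt, t_end - t)
        new = u[:]                           # endpoints stay 0 (exact BCs)
        for i in range(1, nx):
            uxx = (u[i-1] - 2.0*u[i] + u[i+1]) / (dx * dx)
            new[i] = u[i] + step * (uxx + source(i * dx, t))
        u, t = new, t + step
    return max(abs(u[i] - u_exact(i * dx, t)) for i in range(nx + 1))
```

Halving dx should roughly quarter the error (second order in space, with dt tied to dx**2), which is exactly the kind of observed-order check discussed in the GCI note above.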