What is needed in Computer Algebra Packages for Mathematical Research!

This is to be the introductory talk for the session on computer
algebra. As such it will be a commentary on a few computer algebra
packages and what aspects of them seem most helpful to the research
mathematician. The talk will touch upon some of the packages
that will be introduced and described in subsequent presentations
and talks. The philosophy to be presented is that the role of
computer algebra packages is FIRST an aid to discovery, and
only secondarily a means of proving theorems (even though the
latter may play an essential part in the former).

In order to make this view concrete, I will provide examples
of how computer algebra packages can either impede or expedite
discoveries. The first examples chosen will concern the Omega
package (developed jointly with Paule and Riese and to be demonstrated
later in the workshop). First it is briefly explained how the
package rapidly produces generating functions for rather complicated
partitions. Next we look at the generating function for partitions
that form the sides of non-degenerate k-gons. If you let the
Omega package do too much, you fail to grasp what is happening.
This is based on Bull. Austral. Math. Soc., 64(2001), 321-329.

Our next example, based on a recent Putnam problem, illustrates
how the flexibility of computer algebra packages assists research.
Here I think we should hold as an ideal Hardy's description
of Ramanujan: "But with his memory, his patience, and his power
of calculation, he combined a power of generalisation, a feeling
for form, and a capacity for rapid modification of his hypotheses,
that were often really startling, and made him, in his own peculiar
field, without rival in his day." While most of us are vastly
inferior to Ramanujan in each of the categories named, we do
have the advantage of these wonderful machines that can assist
in emulating a few of Ramanujan's qualities. This portion of
the talk will be based on Contemp. Math., 291(2001), 11-27.

The talk will conclude with further eclectic samples of the
significant interaction of computer algebra with topics that
Ramanujan might have found interesting. In this portion we hope
to consider some of the marvelous computer algebra summation
packages.

Orthogonal Polynomials in One Variable (Friday, July 26 at 9:00 am)

The
classical orthogonal polynomials in one variable are fairly
well understood. However, when these are extended to more than
one variable, no one knows how many different ways this can
be done, and how explicitly the polynomials can be found. There
will be three talks on orthogonal polynomials. The first will
deal with the way some of the classical polynomials arise in
quantum angular momentum and its q-analogue. The 3j symbols
can be transformed into two different sets of orthogonal polynomials,
the 6j symbols into a set of orthogonal polynomials. In all
of these cases there are only finitely many polynomials and
the natural orthogonality relation is a discrete sum. Each of
these sets of polynomials becomes a set of orthogonal polynomials
with respect to an absolutely continuous measure when the parameters
are changed. It has been shown that the 9j symbols are orthogonal
polynomials in two variables, and the weight function has recently
been discovered. However, there still is no explicit representation
which can be used to make the polynomial character obvious.
It is likely that the obvious analytic continuation of the discrete
orthogonality will give the measure when parameters have been
changed, but this has not yet been shown.

The Digital Library of Mathematical Functions will include some
but far from all of the known results on special functions which
will be used in the future. The editors and authors are making
guesses about what will be used in science and engineering.
In the past, and almost surely in the future, such guesses have
been far too conservative about what will be needed in some scientific
fields. Some guesses will be given. Certain sets of orthogonal
polynomials in one and several variables and elliptic hypergeometric
functions are two examples. Some wild speculations about such
things as an integral representation of the double gamma function
will be mentioned.

Alexander Berkovich (Department of Mathematics, University of Florida)

Partitions with gap conditions: some old and new results

I start this talk by reviewing the Euler and the Rogers-Ramanujan
partition theorems. Next, I will give a unified treatment of
the Schur and Goellnitz partition theorems using a method of
colored integers. Then, I will discuss a recent four-parameter
generalization of the Goellnitz partition theorem due to Alladi,
Andrews, and Berkovich. I will describe the essential role played
by "Maple" in discovering and proving this new theorem.

A
hierarchy of functions can be defined by oscillatory integrals
constructed from the polynomial normal forms of catastrophe
theory, each with a different topology of coalescence of saddle-points
as parameters vary. Diffraction catastrophes fall outside the
more familiar hypergeometric class. They have many applications
throughout wave physics, where they describe waves (sound, light,
water, quantum) near the geometrically stable caustic singularities
of ray physics; the description is asymptotic, and gets better
as the wavelength gets smaller. Diffraction catastrophes have
interesting and beautiful mathematical properties: scaling laws,
nonlinear integral identities, bifurcation and Stokes' sets,
geometry of maxima, phase singularities, and their role as powerful
building-blocks of uniform asymptotic expansions. Their importance
was not appreciated
when the NBS Handbook of Mathematical Functions (Abramowitz
and Stegun) was written in the 1960s. Now they will feature
in a chapter being written with Christopher Howls for the new
NIST Digital Library of Mathematical Functions project.

The
Digital Library of Mathematical Functions is envisioned as a
versatile Web-based resource of information on the special functions
of applied mathematics. The primary content will be carefully
researched and verified formulas and graphs providing quick
access to the detailed properties of these functions most useful
in practical application in science and engineering. Presenting
such information in a way that is not only natural and convenient
in a Web environment, but also provides capabilities that exceed
what is available using traditional publication methods, remains
a technical challenge.

Some
of the technical issues that must be addressed in such an undertaking
include the following:

A survey of these challenges, and how they are being addressed
in the context of the DLMF, will be the main subject of this
presentation. Several will be discussed in much greater detail
by other speakers at the workshop.

One
unique tension between the desire to provide versatile interactive
Web content and the requirement to present certified standard
reference information arises in the context of graphics. I will
illustrate some of the pitfalls using a Java applet for exploring
functions of a single variable, and suggest techniques of "honest
plotting" which are necessary for their resolution. This will
also expose new needs for algorithms and software for special
functions.

Finally,
I will provide a demonstration of the current working DLMF Web
site.

This
talk will look at specific examples of the interplay between
special functions and combinatorial analysis. The intention
is to engender discussion of the role of a chapter on Combinatorial
Analysis within the Digital Library of Mathematical Functions.

Identities
for special functions are just special formulae in predicate
logic whose correctness has been established, typically, by
proofs produced by human mathematicians. Recent advances in
various areas of mathematics have made it possible to invent
and/or prove many of these identities by algorithms. In this
talk, we put these advances into the more general context of
formal, computer-supported mathematics, notably automated theorem
proving. We give an overview of the Theorema system, which aims
at providing algorithms for proving, solving, and simplifying
classes of formulae in various areas of mathematics in
a uniform logical and software-technological framework. We describe
future mathematics as the process of "mathematical knowledge
management" in which mathematical theories on a meta-level
establish individual theorems that form the basis of proving,
solving, and/or simplifying infinite collections of formulae
on a lower level. We propose to build up mathematical knowledge
bases for all areas of mathematics and to provide tools for
the formal manipulation of these knowledge bases that are essentially
based on automated proving.

We
will demonstrate the design and current capabilities of
Theorema by a couple of examples. Theorema is programmed in
Mathematica and, thus, is available on all platforms.

The six Painleve equations (PI-PVI) were first derived
around the beginning of the twentieth century in a study of
nonlinear second-order ordinary differential equations by
Painleve, Gambier and their colleagues. There has been
considerable interest in Painleve equations over the last few
years primarily due to the fact that they arise as reductions
of soliton equations solvable by inverse scattering. Further,
the Painleve equations are regarded as completely integrable
equations and possess solutions which can be expressed in terms
of the solutions of linear integral equations. Although first discovered
from strictly mathematical considerations, the Painleve equations
have appeared in a variety of important applications including
statistical mechanics, random matrices, plasma physics, nonlinear
waves, quantum gravity, quantum field theory, general relativity,
nonlinear optics and fibre optics. The Painleve equations may
also be thought of as nonlinear analogues of the classical special
functions such as Bessel functions. Their general solutions
are transcendental in the sense that they cannot be expressed
in terms of previously known functions. However, for special
values of the parameters, PII-PVI possess rational
solutions and solutions expressible in terms of special functions.
For example, there exist special solutions of PII-PVI
that are expressed in terms of Airy, Bessel, parabolic cylinder,
Whittaker and hypergeometric functions, respectively. Further
the Painleve equations admit symmetries under affine Weyl groups
which are related to the associated Backlund transformations.
In this talk I shall give an overview of some of the plethora of
remarkable properties which the Painleve equations possess (including
connection formulae, Backlund transformations, associated discrete
equations and hierarchies of exact solutions) and some of their
applications.

A flexible conversion facility in mathematical language is as
important as a dictionary in spoken languages. Such a tool
is implemented in the Maple system as a net of conversion routines
aiming at expressing any mathematical function in terms of another
one, whenever that is possible as a finite sum of terms. When
the parameters of the functions being converted depend on symbols
in a rational manner, any assumptions on these symbols - e.g.
made with the "assuming" facility - are taken into account at
the time of performing the conversions.
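
As an illustration of the kind of rewriting such conversion
routines perform, one standard identity (quoted here for
concreteness, not taken from the Maple documentation) expresses
a Chebyshev polynomial of the first kind as a terminating
hypergeometric sum:

\[ T_n(x) = {}_2F_1\!\left(-n,\, n;\, \tfrac{1}{2};\, \tfrac{1-x}{2}\right),
   \qquad n = 0, 1, 2, \ldots \]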

Most
special functions also arise as solutions to some differential
equations - ODEs or PDEs - of linear and non-linear type, polynomial
in the unknowns and their derivatives. This polynomial differential
representation of a mathematical function is the starting point
for establishing the function's properties. By composing functions
with themselves, arbitrary powers, additions and products one
obtains rather arbitrary non-polynomial objects which can also
be represented in differential (typically non-linear) polynomial
form. The routines being presented can construct these differential
polynomial representations in general, making possible the computation
of subtle identities, the solving of non-polynomial (possibly
nonlinear) differential systems using techniques for differential
polynomial ones, etc. This tool is at the root of the current
Maple PDE system solver.
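
For concreteness, a small worked example of such a differential
polynomial representation (an illustrative calculation, not
notation from the talk): the composed object y = e^{sin x}
satisfies y'/y = cos x, and since cos^2 x + sin^2 x = 1,

\[
\left(\frac{y'}{y}\right)^{2} + \left(\Bigl(\frac{y'}{y}\Bigr)'\right)^{2} = 1
\quad\Longrightarrow\quad
(y')^{2}y^{2} + \bigl(y''y - (y')^{2}\bigr)^{2} = y^{4},
\]

a nonlinear polynomial ODE representing the non-polynomial
function e^{sin x}.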

Finally,
the requirement concerning mathematical functions is not merely
computational: typically, one needs information on established
identities, alternative definitions and mathematical properties
in general. We usually look for that information in handbooks
like Abramowitz & Stegun. Part of this information is already
found in internal Maple subroutines. This motivates the idea
of a "function wizard" project, whose main purpose is to provide
access to each piece of this information through a simple interface,
including the goal of making the information complete. Such
a "computer algebra handbook of mathematical functions" - a
concept close to that of a live function wizard - is an ongoing
project expected to supersede textbooks at some point, in
that it can respond to requests by processing a (growing amount
of) mathematical information using a (growing number of)
mathematical algorithms.

A
large class of special functions and combinatorial sequences
that are implicitly represented by systems of linear functional
equations is amenable to computer algebra methods. The class
enjoys many closure properties that have recently been turned
into algorithms, now implemented in the Maple package Mgfun.
The presentation treats concrete examples with our implementation.
Applications include the evaluation of parametrized definite
integrals and sums, series and asymptotic expansions, and the
automatic proof of identities.
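
As a small illustration of the closure properties in question
(a classical computation, added here for concreteness): if y
satisfies the Airy equation y'' = xy, then the square u = y^2
again satisfies a linear ODE with polynomial coefficients,

\[ y'' = xy, \quad u = y^{2} \quad\Longrightarrow\quad
   u''' - 4x\,u' - 2u = 0, \]

and algorithms of the kind implemented in Mgfun construct such
equations automatically from the input system.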

The Dirichlet problem for the sphere requires the determination
of a function harmonic in the interior with specified boundary
values. One approach to the solution is to expand the boundary
values as a series of spherical harmonics. These are the restrictions
of harmonic homogeneous polynomials to the surface. Other partial
differential equations with spherical symmetry can be solved
by using spherical harmonics, for example the field of a point
charge in a hollow sphere, or the wave function of an electron
subject to a Coulomb potential. The classical orthogonal polynomials
of the Legendre, Gegenbauer and Jacobi families appear in specific
formulae for the harmonics. In the 60's and 70's it became apparent
that harmonic analysis, that is, the study of functions on which
there are group actions, has a natural setting on the sphere.
Perhaps the first important result was to realize Gegenbauer
polynomials as spherical functions on the sphere as a homogeneous
space of the rotation group SO(N). So for a certain period
of time the emphasis was on decomposing the space of spherical
harmonics of a given degree according to the action of parabolic
subgroups like SO(N-1) and SO(m) × SO(N-m).
This provided a setting and elegant proofs for product formulae
and addition theorems. Even in those times the importance of
finite symmetry groups was already manifest. In order to analyze
the wave functions of electrons in molecules with a crystal
structure it is necessary to use the structure of spherical
harmonics invariant under the symmetry point group of the crystal.
Later it was realized that finite reflection groups and root
systems are more fundamental than the rotation group. Gegenbauer
and Jacobi polynomials appear as the spherical harmonics associated
to reflection groups of rank 1 or 2. Now we have larger classes
of weight functions on the sphere. The classical theory of spherical
harmonics can be taken over to weight functions consisting of
products of powers of linear functions invariant under a reflection
group. In the past we used the idea of irreducible unitary representations
to produce orthogonal decompositions; nowadays we construct
commutative algebras of self-adjoint operators whose simultaneous
eigenfunctions are the orthogonal polynomials we wish to study.
This is completely successful for the weight function consisting
of powers of the coordinate functions, and gives a nice basis
of products of ordinary Jacobi polynomials. It is a more difficult
matter to find bases of polynomials with, for example, hyperoctahedral
invariance. A novel basis for symmetric functions is introduced
to solve this problem; although the obtained basis is not orthogonal
we can find the inner products and explicitly compute the determinant
of the Gram matrix. The harder problem of spherical harmonics
associated to Calogero-Moser problems will be discussed in connection
with the by now well-developed theory of symmetric and nonsymmetric
Jack polynomials.
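
For reference, the classical construction recalled at the start
of this abstract can be stated compactly (standard notation,
added for concreteness): if the boundary data on the unit sphere
S^{N-1} expands as f = \sum_k Y_k, with Y_k a spherical harmonic
of degree k, then the solution of the Dirichlet problem is

\[ u(r\xi) = \sum_{k \ge 0} r^{k}\, Y_k(\xi),
   \qquad 0 \le r \le 1,\ \xi \in S^{N-1}, \]

each term being a harmonic homogeneous polynomial written in
polar form.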

The
goal of this talk is twofold. The first and the main objective
is to present an overview of the modern theory of integrable
systems. The theory originated in the remarkable work of
Gardner, Greene, Kruskal, and Miura of 1967 on the Korteweg-de
Vries equation. Since then it has gradually transformed into
a subject which could be called the nonlinear special functions
and which overlaps now with many areas that have never been
considered before as "integrable systems."

The
focus of the talk will be on the analytic aspects of the theory
of integrable systems represented by its principal analytic
ingredient - the Riemann-Hilbert method. We will argue that
the method can be thought of as a non-commutative analog of
the method of contour integral representations. Applications
of the Riemann-Hilbert technique to the asymptotic analysis
of the Painlevé transcendents, including rigorous derivation
of the relevant connection formulae, will be discussed in detail.

The
second objective of the lecture is, to a certain extent, of
a speculative nature. We will try to use the main topic as an
opportunity to reflect on the very notion of "integrability."
In fact, we shall try to go beyond the classical definitions
of integrability in the sense of Liouville and Frobenius. An
ideal goal would be a rigorous understanding of such commonly
used terms as "explicit solution," "exact formula,"
etc. Most certainly we are, at the moment, very far from even
a rigorous formulation of the question. Still, some relevant
observations toward the goal mentioned can be made, and we will
try to do so when discussing the recent applications of the
Riemann-Hilbert method in matrix models, special functions and
combinatorics.

In
the Bateman project, special functions in connection with group
theory only figured in the (excellent) chapter 11 on spherical
harmonics. When this project was underway, some fifty years
ago, more was already known about special functions arising
in group representations. In particular, the pioneering work
of Wigner has been important. Since then there has been an increasing
interaction between special functions and group representation
theory. The profits of this interaction for special function
theory were a new way of systematizing the theory, new conceptual
proofs of old results, new results for known special functions,
and the introduction of completely new classes of special functions.
On the other hand, in group representation theory and in harmonic
analysis on groups (in particular noncompact semisimple Lie
groups) certain results essentially used hard analytic facts
about special functions occurring in that context. Applications
of the interaction between special functions and group theory
were made in particular in physics (for instance Clebsch-Gordan
coefficients), while conversely physical motivation often led
to new examples of this interaction.

Until
the late eighties, the development of this interaction went
along two tracks. A track started by Wigner and vigorously extended
by Vilenkin studied matrix elements of group representations
as special functions. A special case of this approach focused
on spherical functions. This was a source of product formulas
and addition formulas, see work by, among others, Koornwinder,
D. Stanton, Dunkl. A spin-off was the idea of special functions
associated with root systems, which led to the Heckman-Opdam
hypergeometric functions, the Dunkl operators and the Macdonald
polynomials.

A second track, having earlier roots but brought to full maturity
by W. Miller Jr., started with a family of special functions
and differential-difference operators associated with them. The
operators generate a Lie algebra and hence a local Lie group.
This again naturally implies many formulas for the special functions
one started with, for instance generating functions. In later
work, Miller focused on symmetry aspects of separation of variables.

The
introduction of quantum groups during the eighties had an enormous
impact on the theory of q-special functions, which a little
earlier had gained new momentum through, among other things, the
discovery of Askey-Wilson polynomials. While q-special functions
had until then been found to live only on p-adic groups and Chevalley
groups, they now got a very natural setting on quantum groups.
To some extent, everything which had been done earlier for special
functions in connection with Lie groups could now be repeated
in the q-case, but more exciting phenomena were met because
the q-universe is not just a dull parallel of the classical
universe, but one object in the q-universe can correspond to
different classical objects, and conversely.

During
the nineties Macdonald polynomials and Koornwinder's extension
of them found their setting in the quantum world in various
quite different ways. Cherednik's interpretation of Macdonald
polynomials in the context of affine Hecke algebras had particular impact.
Representation theory of (affine) Hecke algebras and harmonic
analysis on them became important for special function theory.

Important
recent developments involve representation theory of noncompact
quantum groups, the study of dynamical quantum groups, and work
on elliptic quantum groups.

I
shall demonstrate the main features of the Mathematica packages
HYP and HYPQ, which are designed for convenient manipulation
of binomial and q-binomial sums, and hypergeometric and basic
hypergeometric series. In particular, the packages contain an
extensive built-in list of summation and transformation formulas
for these series.
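
A typical entry in such built-in summation lists (a standard
formula, quoted here as an example rather than from the package
documentation) is the Chu-Vandermonde sum

\[ {}_2F_1\!\left(-n,\, b;\, c;\, 1\right) = \frac{(c-b)_n}{(c)_n}, \]

where (a)_n denotes the Pochhammer symbol.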

Development of a New Handbook and Web Site of Properties of
Special Functions

Abramowitz
and Stegun's Handbook of Mathematical Functions, published in
1964 by the National Bureau of Standards, is familiar to many
mathematicians. It is even more familiar to scientists who use
special functions in their daily work. But developments in mathematics
and computing since 1964 make the old handbook less and less
useful. To meet the need for a modern reference, NIST is developing
a new handbook that will be published in 2003. A Web site covering
much the same territory will be released also. This work is
supported by the National Science Foundation. Details of the
project and its current status will be described.

At the Mathematical Functions website, we have a collection
of most of the known formulas for practically all mathematical
functions. It is organized in such a way as to allow for expansion,
citation, searching, conversion of formulas, and substantial
computer support from the Mathematica system.

Currently (June 2002) the site includes more than 37,300 formulas
for the approximately 250 elementary and special functions that
are available in Mathematica. It is already the biggest
online collection of formulas presented in different format
types (notebooks, HTML, and PDF). The site
also has a well-developed, unified structure; in particular,
the organization of mathematical files and programming solutions
allows for semiautomatic updating and citation of formulas.
The site contains thousands of formulas, new or adapted to
Mathematica, for elementary and classical special functions.

In this talk we will outline some of the material at the Mathematical
Functions site and also discuss how it was put together.

Representation, display and manipulation of mathematics on the Web

Earlier in this workshop, we will have seen how MathML
provides a means to display mathematics on the web. We will
have heard about automatic verification of special function
identities using computer algebra and theorem proving software.
TeX (and LaTeX) sets the standard for quality typesetting of
mathematics. Presentation MathML
has the potential to approach its quality, as implementations
mature and consistent fonts become available, and this must
be a goal. Yet, online mathematics also offers the possibilities
of various manipulations of a formula by the user, ranging
from simple selections of alternative forms of expressions,
through broad rearrangements, to extensive computations with
and validations of formulas. What is required of a representation
of mathematics that will support these, and other foreseeable,
capabilities?

An ideal representation must be able to capture the nuances of
quality display as well as the precise meaning of the formula.
The presentation hints include the "exceptions" that an author
introduces to make its meaning more apparent to a reader, such
as the ordering of variables and terms, presence and omission
of parentheses in certain situations, choice among different
notations for the same operations, and so on. While
TeX, LaTeX and Presentation MathML
easily capture this presentation information, they sidestep
the issue of the meaning of a formula. Which F is being
referred to? What does a given prime indicate? A useful representation
must also capture the meaning in sufficient precision that formulas
can be supplied as input to computer algebra systems and theorem
provers without fear of misinterpretation. Content MathML
and OpenMath convey
the semantics (and the input languages of computer algebra systems
each convey their own version of the semantics), but, alone,
lack the presentation information.

A
third aspect, metadata, must also be considered. Irrespective
of how well a mathematical search engine can be made to work,
there are facets to a formula that cannot be inferred from the
formula itself. For example, the role of a formula as an addition
theorem, or asymptotic expansion, may best be handled by annotating
the formula, perhaps with simple textual information. Additional
information regarding the dependence of one formula on another,
or constraints on variables, or the derivation of the formula
may be better handled by structured annotations, such as provided
by the OMDoc extension
of OpenMath.

Thus
one is led to a core representation consisting of a hybrid of
presentation and content markup, along with additional annotations
for metadata. Since the representation most appropriate for
exchange, delivery and presentation in different media may be
different, the representation must support the transformation
from the core format to another. The various XML formats
(MathML and OpenMath)
are designed to support this.

But, given the verbosity of XML, these may not be the best form
for authoring. Indeed, writing being a rather personal thing,
the styles of different authors must be accommodated. The mathematical
community must therefore develop various means of authoring
for content, as well as presentation. Some may be based on XML
editors with graphical user interfaces. Others may be based on
computer algebra system worksheets.

Given the longstanding preference for LaTeX among most of the DLMF
project members, we are developing a LaTeX based approach. We
extend LaTeX by defining macros at a higher semantic level,
for example for various calculus operations; macros are defined
for each special function, and so on. By minimizing the ambiguities
inherent in purely presentational markup, an infix parser with
only minimal heuristics should be able to transform the formula
into the corresponding semantic representations.
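
A sketch of what such semantic macros might look like (the macro
names here are illustrative inventions, not the DLMF project's
actual macro set):

    \documentclass{article}
    % Hypothetical semantic macros in the spirit described above:
    % \BesselJ{nu}{z} carries the meaning "Bessel function of the
    % first kind", not merely the glyphs J_nu(z).
    \newcommand{\BesselJ}[2]{J_{#1}\!\left(#2\right)}
    % A derivative macro that records operand and variable explicitly,
    % so an infix parser need not guess what d/dx applies to.
    \newcommand{\deriv}[2]{\frac{\mathrm{d}#1}{\mathrm{d}#2}}
    \begin{document}
    A recurrence written so that the semantics remain recoverable:
    \[ \deriv{\BesselJ{\nu}{z}}{z}
       = \tfrac{1}{2}\bigl(\BesselJ{\nu-1}{z} - \BesselJ{\nu+1}{z}\bigr). \]
    \end{document}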

I
will discuss some of the most important interrelationships between
the theory of Lie groups and algebras, and special functions,
with a strong emphasis on results obtained in the 50 years after
the publication of the Bateman Project. An informal justification
for this treatment is that most functions commonly called "special"
obey symmetry properties that are best described via group theory
(the mathematics of symmetry). In particular, those special
functions that arise as explicit solutions of the partial differential
equations of mathematical physics, such as via separation of
variables, can be characterized in terms of their transformation
properties under the Lie symmetry groups and algebras of the
differential equations. (The same ideas extend to difference
and q-difference equations.) I shall treat, briefly, the following
topics:

4.
Special functions as Clebsch-Gordan coefficients for the reduction
of tensor products of irreducible group representations (the
motivation for Wilson polynomials).

In
practice, the first two items involve hypergeometric functions
predominantly and are special cases of the third item. The group
theoretic basis for variable separation allows treatment of
non-hypergeometric functions, such as those of Lamé and Heun.
The last item provides an important motivation for the construction
of the Askey-Wilson polynomials.

I
will conclude with a brief examination of special functions
(or functions that deserve to be called "special") that arise
when one restricts certain irreducible Lie group representations
to a discrete lattice subgroup. The two most important examples
are an irreducible representation of the Heisenberg group (and
its relation to the windowed Fourier transform, the Weil-Brezin-Zak
transform and theta functions), and an irreducible representation
of the affine group (and its relation to the continuous and
discrete wavelet transforms). I will describe the properties
of the Daubechies family of scaling functions, a very modern
family of "special" functions arising as solutions of difference
equations.
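
For concreteness, the difference equations in question are the
two-scale refinement equations (standard normalization, added
here for illustration)

\[ \varphi(x) = \sum_{k} c_k\, \varphi(2x - k), \]

and for the Daubechies D4 scaling function the nonzero
coefficients are c_0 = (1+\sqrt{3})/4, c_1 = (3+\sqrt{3})/4,
c_2 = (3-\sqrt{3})/4, c_3 = (1-\sqrt{3})/4.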

Current
versions of MATLAB provide a limited number of special functions.
Most of the special function library is based on old Fortran
codes by Cody and Amos. The connection through the Symbolic
Toolbox to Maple provides a richer set of functions, but this
is available to only a fraction of MATLAB users. What other
functions should we provide in MATLAB itself, and how should
we provide them?

The
Riemann Hypothesis, which predicts that all non-trivial zeros
of the Riemann zeta function lie on the critical line, is the
most famous unsolved problem in mathematics. Most of the interest
in this function has therefore been concentrated on the zeros,
and the latest results in this area will be surveyed. However,
the zeta function, and more general classes of zeta functions,
also have many other interesting properties as special functions
of mathematics, and some of the more interesting ones will be
discussed.

The new (and of necessity digital) libraries for mathematics

Traditional journals, even those available electronically, are
changing slowly. However, there is rapid evolution in scholarly
communication. Usage is moving to electronic formats. In some
areas, it appears that electronic versions of papers are being
read about as often as the printed journal versions. Although
there are serious difficulties in comparing figures from different
media, the growth rates in usage of electronic scholarly information
are sufficiently high that if they continue for a few years,
there will be no doubt that print versions will be eclipsed.
Further, much of the electronic information that is accessed
is outside the formal scholarly publication process. There is
also vigorous growth in forms of electronic communication that
take advantage of the unique capabilities of the Web, and which
simply do not fit into the traditional journal publishing format.

This
lecture will present some statistics on usage of print and electronic
information. It will also discuss some preliminary evidence
about the changing patterns of usage. It appears that much of
the online usage comes from new readers (esoteric research papers
assigned in undergraduate classes, for example) and often from
places that do not have access to print journals. Also, the
reactions to even slight barriers to usage suggest that even
high quality scholarly papers are not irreplaceable. Readers
are faced with a "river of knowledge" that allows them
to select among a multitude of sources, and to find near substitutes
when necessary. To stay relevant, scholars, publishers, and
librarians will have to make even larger efforts to make their
material easily accessible.

There
is a long history of statistical problems that served to motivate
research in linear algebra, in numerical methods, and in special
functions. We here focus on topics arising from multivariate
analysis: extremal problems, multivariate integrals and distributions,
totally positive functions, and eigenvalue problems.

Frank W.J. Olver (Institute for Physical Science and Technology,
University of Maryland and Mathematics & Computational Sciences,
National Institute of Standards and Technology (NIST))
olver@ipst.umd.edu

The
problem of simplifying complicated sum expressions arises not
only in special functions but in many mathematical fields. Nevertheless,
symbolic algorithms that assist in this task do not have a very
long history.

The
starting point of symbolic summation with the computer is Gosper's
algorithm (1978), a decision procedure for indefinite hypergeometric
summation. Despite being a first breakthrough, for a long time
its applicability was considered quite restricted, since
most hypergeometric summations arising in practice are definite
ones. Zeilberger's "creative telescoping" (1990) dissolved this
limitation. Since then symbolic summation has turned into an
active subarea of computer algebra on its own.

This
talk presents a survey on algorithms and methods along a historical
chain of missed opportunities.
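
To make the flavor of creative telescoping concrete, here is a
small check (an illustration of our own in Python, not software
from the talk) of a Wilf-Zeilberger certificate for the identity
\sum_k \binom{n}{k} = 2^n. With F(n,k) = C(n,k)/2^n and
G(n,k) = R(n,k) F(n,k), where R(n,k) = -k/(2(n-k+1)), the
relation F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k) telescopes when
summed over k, so \sum_k F(n,k) is independent of n and equals
its value 1 at n = 0.

    from math import comb

    def F(n, k):
        """Normalized summand: F(n,k) = C(n,k) / 2^n (0 when k > n)."""
        return comb(n, k) / 2**n

    def G(n, k):
        """Certificate companion G = R*F; closed form -C(n,k-1)/2^(n+1)."""
        if k < 1 or k > n + 1:
            return 0.0
        return -comb(n, k - 1) / 2**(n + 1)

    # WZ relation: F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k) for all n, k.
    for n in range(12):
        for k in range(n + 2):
            assert abs((F(n + 1, k) - F(n, k)) - (G(n, k + 1) - G(n, k))) < 1e-12
    print("WZ certificate verified for n < 12")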

The
first talk of the computer algebra session by George Andrews
focuses on the role computer algebra tools play in discovery.
The primary object of this second talk of the session is to
introduce some of the mathematical ideas which make the algorithmic
machinery work. Illustrative examples (e.g., hypergeometric
sums and multiple-sums, q-series, harmonic number identities)
should whet one's appetite to attend the software presentation
in the afternoon.

New and old addition theorems and Landen identities for Jacobian
elliptic functions: do these indeed give rise to "novel" solutions
for non-linear PDEs?

The algebra of addition formulae for theta and Jacobian elliptic
functions is indeed profuse and bewildering in its notations
and multifarious forms. Khare and Sukhatme have recently introduced
scores of identities for sums of Jacobi functions with arguments,
z, augmented by addition of 2(i-1)K/p, K being the "quarter
period" which is a complete elliptic integral of the first kind,
and i = 1,2,3,...,p. They have investigated p running from
2 to 9. Small values of p give well known results, larger values
lead to results which seem not to have been noted before. They
have subsequently found expansions for single Jacobi functions
in terms of sums of repetitions of the same function with differing
moduli (Landen type identities), also with arguments spaced
along the same series z + 2(i-1)K/p.
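
For orientation, the classical addition theorem against which
such identities are measured is (in standard form, quoted for
reference)

\[ \operatorname{sn}(u+v)
   = \frac{\operatorname{sn}u\,\operatorname{cn}v\,\operatorname{dn}v
           + \operatorname{sn}v\,\operatorname{cn}u\,\operatorname{dn}u}
          {1 - k^{2}\operatorname{sn}^{2}u\,\operatorname{sn}^{2}v}, \]

with analogous formulae for cn and dn.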

What are we to make of these new identities, which were discovered
by Khare and Sukhatme (Ref. 1 below) in "finding new solution
classes" for non-linear equations, such as the KdV equation?
A first clue is that these "new" solutions of the KdV equation
have now been found, by the present author, to be well
expressed in terms of old and well known solutions through yet
another apparently novel set of Landen type identities, this
time involving relations between squares of the Jacobian functions.

We will attempt to draw all of this together in a manner suitable
for electronic presentation: i.e. what is real, novel, and of
permanent value, and which parts of this can be systematized
in a useful manner? We also attempt to draw out their connections
to work of the ancients, namely what is their relationship to
the cubic and quartic curve invariants of Abel and Weierstrass,
which provide geometric underpinnings to the original addition
theorems?

In this talk, which is based on joint work with Richard McFarland,
we shall study two classical problems in multivariate statistical
analysis: discriminant analysis and missing data analysis.
In each of these problems it is an old and difficult question
to evaluate the exact probability distribution of certain maximum
likelihood estimators under the assumption of fixed sample sizes.
Using the theory of Bessel and confluent hypergeometric functions
of matrix argument, we first show that the probability distributions
of these maximum likelihood estimators can be described in terms
of algebraic functions of classical random variables. Using
these algebraic functions we then apply direct Monte Carlo simulation
to evaluate the probability distributions of the maximum likelihood
estimators.

We
demonstrate Mathematica packages related to special functions.
The first part deals with hypergeometric summation. We show
how Zeilberger's algorithm can be used to automatically prove
single summation identities. Furthermore, we present an algorithm
which generalizes and improves Sister Celine's technique for
proving multiple summation identities. For both algorithms we
have worked out q-analogues.

The
second part is devoted to MacMahon's Partition Analysis, a method
for solving problems related to linear diophantine equations
and inequalities. Together with George Andrews, who has algorithmized
this method, and Peter Paule we have developed the Omega package,
which will be demonstrated by means of several examples.
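
As a pointer to what Partition Analysis computes, MacMahon's
Omega operator and a textbook-style example (standard material,
included for concreteness):

\[
\Omega_{\ge} \sum_{s=-\infty}^{\infty} A_s \lambda^{s}
   := \sum_{s=0}^{\infty} A_s,
\qquad
\Omega_{\ge} \sum_{a,b \ge 0} x^{a} y^{b} \lambda^{a-b}
   = \frac{1}{(1-x)(1-xy)},
\]

the right-hand side being the generating function for pairs
a \ge b \ge 0.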

Interactive 3D Visualizations of High Level Functions in a
Mathematical Digital Library

A Web-based digital library offers an ideal medium for informative
graphical representations of high level mathematical functions.
In contrast to 2D and 3D still images in printed media, Web
capabilities make it possible to create dynamic interactive
visualizations that can help users gain a deeper understanding
of special functions.

Still, the development of effective graphics for a digital library
presents several challenges. There must be a reliable means
for computing the data. A sophisticated and intimate knowledge
of the function may be required to determine which features
should be emphasized. Furthermore, singularities, poles, branch
cuts and other complexities can make the creation of accurate
visualizations quite difficult.

This talk will discuss steps taken to address these issues,
including the use of numerical grid generation to design suitable
computational meshes to plot complicated functions. The visualizations
developed for the NIST Digital Library of Mathematical Functions
(DLMF) will be compared to graphical presentations in the original
NBS handbook and earlier references such as Jahnke and Emde.
Also, we will compare the design of graphics for the NIST DLMF
with what is available in popular computer algebra systems and
suggest some areas where future research is needed.

Special
functions play a significant role in modern numerical methods.
In particular, wavelets have become a key technology in scientific
computing for solving partial differential equations via a finite
element ansatz or boundary integral methods. These methods can
be implemented efficiently using multi-scale properties of wavelets.
Moreover, wavelets can be designed with computer algebra methods
to optimally approximate the desired solution with a small number
of wavelet ansatz functions. Similar ideas are utilized to construct
efficient methods for the numerical solution of ordinary differential
equations.

It is a fact that summation plays an important role in proving
existence of solutions of nonlinear partial differential equations,
in particular in nonlinear semigroup theory. We summarize the
role of special function techniques, like summation and generating
functions, starting from the fundamental paper of Crandall and
Liggett on proving existence of solutions of nonlinear parabolic
partial differential equations.

Finally
we present two examples, a variational problem and an inverse
problem, where we used spherical harmonics to reduce the originally
higher dimensional problems to one dimensional problems. We
were experimenting with efficient finite element software to
solve the higher dimensional variational problem. With this
software we were able to find a qualitatively correct solution;
quantitatively, however, it was wrong. For symmetric test cases,
using spherical harmonics we were able to calculate the solution
analytically and were able to rescale the higher dimensional
problem in order to get a quantitatively correct solution with
the finite element code.

Sigma is a summation package, implemented in the computer algebra
system Mathematica, that enables one to discover and prove nested
multisum identities. Based on Karr's difference field theory
(1981), this package allows one to find all solutions of parameterized
linear difference equations in a very general difference field
setting, so-called PiSigma-fields. With this difference field
machinery, nested indefinite multisums can be simplified by minimizing
the depth of nested sum-quantifiers. In addition, Sigma provides
several algorithms for discovering closed form evaluations
of definite nested multisums. Here one first tries to compute
a recurrence for a given definite sum by applying Zeilberger's
creative telescoping idea in the difference field setting. Second,
one attempts to solve this recurrence in terms of d'Alembertian
solutions, a subclass of Liouvillian solutions. Combining these
solutions, one finally may find a closed form evaluation of a
definite multisum. All these aspects will be illustrated by
various examples.
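
A tiny example of the depth reduction meant here (a classical
evaluation, shown only for illustration): with the harmonic
numbers H_k = \sum_{j=1}^{k} 1/j, the nested sum collapses to
depth one,

\[ \sum_{k=1}^{n} H_k = (n+1)\,H_n - n. \]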

Besides
discovering multisum identities, Sigma is a very powerful tool
to prove definite nested multisum identities. In this software
presentation I will illustrate how the user can be relieved of
these proving aspects by embedding Sigma as an "external
prover" in the Theorema-system. Theorema, designed by B. Buchberger,
supports various facets of mathematical proving, solving and
computing in a human readable fashion. One of our goals is to
combine these tools into a complete identity prover.

This is a brief qualitative review of the historical origins,
general structure, existing applications, and possible future
developments of the theory of hypergeometric type series built
out of the Jacobi theta functions.

Topics to be mentioned: connections between Barnes multiple gamma,
q-gamma, and elliptic gamma functions. General structure of
the plain and basic hypergeometric functions via ratios of sequential
series coefficients and its Jacobi theta functions generalization.
Elliptic function origins of the notions of balancing, well-poisedness,
and very-well-poisedness for hypergeometric type series. Modular
group invariance. Contiguous relations for a terminating very-well-poised
${}_{12}E_{11}$ elliptic hypergeometric series.
Frenkel-Turaev identities (elliptic extensions of the terminating
Jackson ${}_{8}\phi_{7}$ sum and Bailey ${}_{10}\phi_{9}$
transformation). The elliptic beta integral (an elliptic
extension of the Askey-Wilson and Nassrallah-Rahman integrals).
Elliptic analogues of the Wilson (discrete) and Rahman (continuous)
families of biorthogonal rational functions. A non-rational
functions generalization and two-index biorthogonality. An elliptic
extension of the Ramanujan-Watson-Gupta-Masson continued fraction.
Multivariable special functions (elliptic Selberg integrals
and some summation formulae). Work in progress and some further
possible developments.

(A
part of the author's results to be presented was obtained in
collaboration with J.F. van Diejen
and A.S. Zhedanov.)

The exponential formula relates the combinatorial objects for
the generating functions f(x) and exp(f(x)). Several settings
for exponential formulas exist. In this talk I will review some
of them and what is known (and not known) about their q-analogues
in permutation enumeration. Some results which should be amenable
to computer proofs will be presented.
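
One standard instance (a textbook example, added for
concreteness): cycles are the connected structures underlying
permutations, so their exponential generating functions are
related by exp,

\[ f(x) = \sum_{n \ge 1} \frac{x^{n}}{n} = \log\frac{1}{1-x}
\quad\Longrightarrow\quad
e^{f(x)} = \frac{1}{1-x} = \sum_{n \ge 0} n!\,\frac{x^{n}}{n!}, \]

since a permutation is a set of cycles.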

Peter L. Walker (College of Arts & Science, American University
of Sharjah) peterw@aus.ac.ae

The elliptic functions of Jacobi and Weierstrass

The
talk will describe the Jacobian and Weierstrassian families
of elliptic functions. The emphasis will be on points of view
which have recently become important, new results which have
come to light as a result of the DLMF project, and graphical
representations.

To
process and disseminate technical knowledge more effectively,
efforts are underway worldwide to create and codify Web-accessible
digital libraries of mathematical contents. Notable examples
include the Digital Library of Mathematical Functions (DLMF)
project at the National Institute of Standards and Technology
(NIST), and the markup languages MathML and OpenMath. To benefit
from such digital libraries, users should be able to search
not only for text, but also for equations and other math constructs.

Searching can be divided into three broad classes of increasing
complexity: (1) keyword based, (2) structural, and (3) semantic.
Text search technology has matured at the keyword level, has
made significant progress at the structural level (phrase and
contiguity search, and XML-based search of structured documents),
and is starting to push towards semantic search.

Applying text search technology to math search of equations
and other math constructs faces serious problems, even at the
keyword and structural levels. The first problem is that mathematical
contents often involve symbols, as in |P_n(x)| and |d^2y/dx^2-x=0|,
that are misinterpreted or unrecognized by text search systems.
The second problem is that mathematical expressions have rich
structures whose semantics are undetected by text search systems.
For example, |sin(x + log x)| is no different to a text search
system than |sin x + log x|, and |x (y + z)| is misinterpreted
as |x y + z|, if interpreted at all. The third problem is mathematical
equivalence ("synonyms"). A sum or a product of several terms
can be expressed in many equivalent ways; numbers can be represented
in multiple forms (e.g., 1/2 vs. 0.5 vs. 2^{-1}); polynomials
can be expressed in many factored and unfactored forms; and
so on. Standard thesaurus-based approaches in text search are
not adequate for searching for mathematically equivalent forms.
The fourth problem is the issue of levels of abstraction in
mathematical contents and queries. Should not a document containing
f(x) match a query "f(t)", or a document containing f^2 + g^2
match a query "f(x)^2 + g(x)^2"? The fifth
problem is that of notational ambiguity in mathematics. Is dy
the product d × y or the differential dy? Is x(t+s) the
product x × (t+s) or the function x applied at t+s? Markup
languages can eliminate ambiguity in the database contents,
but not in users' queries.
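
To illustrate the equivalence problem above, here is a small
sketch (our own, using the SymPy library rather than the NIST
system) of canonicalizing expressions before indexing, so that
mathematically equal forms receive the same index key:

    import sympy as sp

    def index_key(expr_str: str) -> str:
        """Map an expression to a canonical string for a search index.
        Expansion plus rationalized constants make many equivalent
        forms (factored/unfactored, 1/2 vs. 0.5) collide as desired."""
        expr = sp.sympify(expr_str, rational=True)  # 0.5 -> 1/2
        return sp.srepr(sp.expand(expr))            # x*(y+z) -> x*y + x*z

    # Equivalent inputs produce identical keys:
    assert index_key("x*(y + z)") == index_key("x*y + x*z")
    assert index_key("1/2") == index_key("0.5") == index_key("2**(-1)")
    # Distinct expressions remain distinct:
    assert index_key("sin(x + log(x))") != index_key("sin(x) + log(x)")
    print("canonical keys agree for equivalent forms")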

In this talk, we will discuss the efforts and approaches for
addressing those problems and, generally, for developing math
search techniques and systems. At the keyword and structural
levels of math search, two methodological strategies will be
covered. The first is the evolutionary strategy of building
a math search engine on top of a text search engine, and translating
math contents (including the symbolic and the structural) in
the database and in queries into textual counterparts for text
search. The second strategy is to develop new schemes to index
equations and structures directly, using parse trees and possibly
other models, to increase the precision of searching. Common
to both strategies is the necessity of an effective and easy-to-use
interface for entering and editing math queries. The math search
system being developed at NIST for the DLMF project will serve
as an illustrative focus.

The talk will also cover pertinent aspects of the activities,
projects, and products that are related to, or have impact upon,
math search. These include math markup languages (OpenMath,
MathML, OMDoc), math ontologies and metadata, math languages/editors
(EzMath, MathType, MathWriter, etc.), Math on the Web (MathWeb,
MONET, the Esprit Project), and industry support.