Proposed talks, titles and abstracts,
November Workshops
Accepted talks will be 20 to 30 minutes in length.

November
7-11, 2011
Workshop on Computational Topology

Bauer, Ulrich
Optimal Topological Simplification of Discrete Functions on Surfaces

Persistent homology quantifies the stability
of critical points of a scalar function: a critical point with
large persistence cannot be eliminated by a small perturbation.
Motivated by the fact that noise can give rise to spurious critical
points, we consider the converse problem: within a prescribed
tolerance from some input function, minimize the number of critical
points.
For functions on surfaces, we solve this problem using a novel
combination of persistent homology and discrete Morse theory.
We construct a function that achieves the lower bound on the number
of critical points dictated by the stability of persistence diagrams.
Such a function can be constructed in linear time after persistence
pairs have been identified.
Moreover, we obtain not only a single solution but a whole convex
set of optimal solutions. Within this set, a convex functional
can be minimized using convex optimization methods, guaranteeing
that the minimum number of critical points is still maintained.
(Joint work with Carsten Lange and Max Wardetzky. Reference: http://dx.doi.org/10.1007/s00454-011-9350-z)

Bobrowski, Omer and Robert J. Adler
Distance Functions, Critical Points, and Topology for some Random Complexes

In this talk we focus on the distance function
from a random set of points $P$ in the Euclidean space. The distance
function is continuous, however, it is not everywhere differentiable.
Nevertheless, one can accurately define critical points and then
apply Morse theory to it. We study the number of critical points
in small neighborhoods around $P$.
Specifically, we are interested in the case where the number of
points in $P$ goes to infinity, and the size of the neighborhoods
goes to zero. We present limit theorems for the number of critical
points and show that it exhibits a phase transition, depending
on how fast the size of the neighborhoods goes to zero. A similar
phase transition was presented recently by Kahle & Meckes
who studied the Betti-numbers of random geometric complexes.
We show that this is more than just a coincidence, and discuss
the connection between the distance function and geometric complexes.
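
The distance function studied above is easy to probe numerically; the following is a minimal sketch (hypothetical code, not the authors' implementation) that samples a random point set and evaluates $d_P$ on a grid:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((50, 2))  # a random point set P in the unit square

# Evaluate the distance function d_P(x) = min_{p in P} |x - p| on a grid.
xs = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs)
grid = np.stack([X.ravel(), Y.ravel()], axis=1)
dist = np.sqrt(((grid[:, None, :] - P[None, :, :]) ** 2).sum(-1)).min(axis=1)

# d_P vanishes on P and is 1-Lipschitz; its local maxima and saddles are
# the critical points whose asymptotics the talk describes.
print(dist.min(), dist.max())
```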

Topological Persistence has proven to be
a promising framework for dealing with problems concerning shape
analysis and comparison. In this context, it was originally introduced
by taking into account 1-dimensional properties of shapes, modeled
by real-valued functions. More recently, Topological Persistence
has been generalized to consider multidimensional properties of
shapes, coded by vector-valued functions. This extension has led
to introduce suitable shape descriptors, named the multidimensional
persistence Betti numbers, and a distance to compare them, the
so-called multidimensional matching distance. In this talk we
propose a new computational framework to deal with the multidimensional
matching distance. We start by showing some theoretical results,
and then we use them to formulate an algorithm for computing such
a distance up to an arbitrary threshold error.

We study the simplification of simplicial
complexes by repeated edge contractions. First, we extend to arbitrary
simplicial complexes the statement that edges satisfying the link
condition can be contracted while preserving the homotopy type.
Our primary interest is to simplify flag complexes such as Rips
complexes for which it was proved recently that they can provide
topologically correct reconstructions of shapes. Flag complexes
(sometimes called clique complexes) enjoy the nice property of
being completely determined by the graph of their edges. But,
as we simplify a flag complex by repeated edge contractions, the
property that it is a flag complex is likely to be lost. Our second
contribution is to propose a new representation for simplicial
complexes particularly well adapted for complexes close to flag
complexes. The idea is to encode a simplicial complex $K$ by the
graph $G$ of its edges together with the inclusion-minimal simplices
in the set difference $\mathrm{Flag}(G) \setminus K$. We call these minimal simplices
blockers. We prove that the link condition translates nicely in
terms of blockers and give formulae for updating our data structure
after an edge contraction or a collapse. Finally, we observe in
some simple cases that few blockers appear during the simplification
of Rips complexes, demonstrating the efficiency of our representation
in this context.
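
Because a flag complex is completely determined by its edge graph, it can be enumerated directly from the graph; a small illustrative sketch (a hypothetical helper, not the authors' blocker data structure):

```python
from itertools import combinations

def flag_complex(vertices, edges, max_dim=3):
    """Enumerate the simplices of the flag (clique) complex of a graph:
    a set of vertices spans a simplex iff all of its pairs are edges."""
    E = {frozenset(e) for e in edges}
    simplices = [frozenset([v]) for v in vertices]
    for k in range(2, max_dim + 2):  # simplices on k vertices
        for cand in combinations(vertices, k):
            if all(frozenset(p) in E for p in combinations(cand, 2)):
                simplices.append(frozenset(cand))
    return simplices

# A 4-cycle with one chord: triangles {0,1,2} and {0,2,3} appear, but
# {0,1,2,3} does not, since 1-3 is not an edge.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
K = flag_complex(V, E)
print(sorted(len(s) for s in K))  # 4 vertices, 5 edges, 2 triangles
```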

We associate with each compact set $X$
of $\mathbb{R}^n$ two real-valued functions $c_X$ and $h_X$ defined on $\mathbb{R}^+$
which provide two measures of how much the set $X$ fails to be
convex at a given scale. First, we show that, when $P$ is a finite
point set, an upper bound on $c_P(t)$ entails that the Rips complex
of $P$ at scale $r$ collapses to the Cech complex of $P$ at scale $r$
for some suitable values of the parameters $t$ and $r$. Second, we
prove that, when $P$ samples a compact set $X$, an upper bound on
$h_X$ over some interval guarantees a topologically correct reconstruction
of the shape $X$ either with a Cech complex of $P$ or with a Rips
complex of $P$. Regarding the reconstruction with Cech complexes,
our work compares well with previous approaches when $X$ is a smooth
set and, surprisingly enough, even improves constants when $X$ has
a positive $\mu$-reach. Most importantly, our work shows that Rips
complexes can also be used to provide topologically correct reconstructions
of shapes. This may be of computational interest in high dimensions.
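
The containment underlying such interleaving results can be checked on toy examples. The sketch below (hypothetical code; conventions: balls of radius t, Rips edges at pairwise distance at most 2t, so that Cech(t) is a subcomplex of Rips(t)) compares Rips and Cech triangles at two scales:

```python
import numpy as np
from itertools import combinations

def min_enclosing_radius(p, q, r):
    """Radius of the smallest disk containing three points in the plane."""
    a = np.linalg.norm(q - r); b = np.linalg.norm(p - r); c = np.linalg.norm(p - q)
    s = sorted([a, b, c])
    if s[2] ** 2 >= s[0] ** 2 + s[1] ** 2:
        return s[2] / 2.0            # obtuse: the longest side is a diameter
    u, v = q - p, r - p
    area = abs(u[0] * v[1] - u[1] * v[0]) / 2.0
    return a * b * c / (4.0 * area)  # acute: the circumscribed circle

def rips_and_cech_triangles(P, t):
    """Triangles of Rips(t) (pairwise distances <= 2t) and of Cech(t)
    (the three points fit in a disk of radius t)."""
    rips, cech = [], []
    for i, j, k in combinations(range(len(P)), 3):
        d = [np.linalg.norm(P[a] - P[b]) for a, b in ((i, j), (j, k), (i, k))]
        if max(d) <= 2 * t:
            rips.append((i, j, k))
        if min_enclosing_radius(P[i], P[j], P[k]) <= t:
            cech.append((i, j, k))
    return rips, cech

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
print(rips_and_cech_triangles(P, 0.55))  # ([(0, 1, 2)], []): Rips only
print(rips_and_cech_triangles(P, 0.60))  # ([(0, 1, 2)], [(0, 1, 2)])
```

At the smaller scale the three points span a Rips triangle but no Cech triangle; slightly later the Cech triangle appears as well.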

Chazal, Frederic
Persistence Based Signatures for Metric Spaces

We introduce a family of signatures for
compact metric spaces, possibly endowed with real valued functions,
based on the persistence diagrams of suitable filtrations built
on top of these spaces. We prove the stability of these signatures
under Gromov-Hausdorff perturbations of the spaces.

Computation of cycles representing classes
of homology groups is a fundamental problem arising in applications
such as parameterization, feature identification, topology simplification,
and data analysis. Variations of the classical Smith Normal Form
algorithm and the recently developed persistence algorithm do
compute representative cycles of a homology basis for a simplicial
complex, but they remain oblivious to the input geometry. Some
recent research in computational topology has addressed the problems
of computing homology cycles that are optimal with respect to
a given metric. In this talk, we concentrate on two such developments:
(i) Computing an optimal basis for one dimensional homology of
a simplicial complex and using it to approximate such a basis
for a smooth manifold from its point data; (ii) Computing an optimal
cycle homologous to a given cycle in a simplicial complex. We
provide efficient algorithms with their guarantees for (i) and
show that classical Linear Programs can solve (ii) for some interesting
cases even though the general problem is NP-hard.
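
The 1-norm minimization behind (ii) can be seen on a toy complex. In the sketch below (hypothetical code; brute force over small integer 2-chains stands in for the linear program used on larger complexes), the input cycle is the expensive top ring of a minimal simplicial cylinder, and the optimum is the cheap bottom ring:

```python
from itertools import product

# A minimal simplicial cylinder: bottom ring 0,1,2 and top ring 3,4,5.
triangles = [(0, 1, 3), (1, 3, 4), (1, 2, 4), (2, 4, 5), (0, 2, 5), (0, 3, 5)]
edges = sorted({tuple(sorted(p)) for t in triangles
                for p in ((t[0], t[1]), (t[0], t[2]), (t[1], t[2]))})
idx = {e: i for i, e in enumerate(edges)}

def boundary(tri):
    """Column of the boundary matrix for a sorted triangle (a, b, c)."""
    a, b, c = tri
    col = [0] * len(edges)
    col[idx[(b, c)]] += 1
    col[idx[(a, c)]] -= 1
    col[idx[(a, b)]] += 1
    return col

D = [boundary(t) for t in triangles]

# Edge weights: bottom ring cheap (1), top ring expensive (2), rest 1.5.
w = [1.0 if set(e) <= {0, 1, 2} else 2.0 if set(e) <= {3, 4, 5} else 1.5
     for e in edges]

# Input cycle z: the top ring, of total weight 6.
z = [0.0] * len(edges)
z[idx[(3, 4)]] += 1; z[idx[(4, 5)]] += 1; z[idx[(3, 5)]] -= 1

# Cheapest homologous cycle z + boundary(t) over small integer 2-chains t.
best = min(
    sum(wi * abs(z[i] + sum(t[j] * D[j][i] for j in range(len(D))))
        for i, wi in enumerate(w))
    for t in product((-1, 0, 1), repeat=len(D)))
print(best)  # 3.0: the optimum is the bottom ring
```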

Dlotko, Pawel
Applications of Computational (co)homology

Due to the recent development of efficient
algorithms and implementations, homology and cohomology theories
have become useful tools in various branches of science and engineering.
For example, electromagnetic modeling provides an interesting
context to present a link between physical phenomena and homology
and cohomology. Over the past twenty-five years a considerable
effort has been invested by the computational electromagnetics
community to develop fast and general techniques for potential
design. Nevertheless, heuristic approaches seem still to be dominant
in literature. During the talk a proof will be given showing that,
for edge-element based finite element method, the first cohomology
group generators are needed to make boundary value problem well
defined. To conclude, efficient algorithmic techniques to compute
cohomology group generators on various meshes (being arbitrary
regular CW-complexes) will be presented together with results
of electromagnetic simulations (this is joint work with Ruben
Specogna).
If time permits, recent progress in distributed (co)homology computations
will be discussed. As an example, a coverage problem in sensor
network will be presented and distributed computation techniques
used to solve it will be highlighted (joint work with Robert Ghrist,
Mateusz Juda and Marian Mrozek).

The field of image analysis studies the
topology and geometry of digital images and data sets. The subfield
of combinatorial image analysis studies their combinatorial properties.
Algebraic topology is a field of mathematics concerned with the
study of deformation-invariant properties of geometrical objects.
Our recently created Andalusian research group FQM-369 Combinatorial
Image Analysis develops methods and tools for combinatorial
image analysis, which apply the theory of computational topology.
In this talk, we will introduce the three main tasks in which
our group is involved:
(1) Well-composed cell complexes. The 2D manifold that is the
surface bounding a real 3D object might appear to be non-manifold
in the geometric realization of the cubical complex Q(I) associated
to a discrete representation of the object after the acquisition
process. This task deals with the construction of a cell complex
K(I) that is homotopy equivalent to Q(I) and whose boundary surface
is a union of 2D manifolds.
(2) Cup products on general cell complexes. The cup product on
cohomology encodes more information than homology, but has traditionally
been computed only for cubical and simplicial complexes. Recently
our group developed techniques that allow one to compute the cup product
on cell complexes X, where X is a quotient, a Cartesian product,
or the result of merging cells of highest dimension.
(3) Extending persistent homology. The incremental algorithm for
persistent homology by Edelsbrunner et al. is currently the de
facto standard for extracting topological information, especially
Betti numbers, when an object is seen as a sequence of additions
from point cloud data. A first aim of this task is to extend
persistent homology to cell complexes obtained from the given
one by merging cells, removing cells, or contracting edges.
Collaborators: Maria Jose Jimenez and Belen Medrano (Applied Math
Dept., University of Seville, Spain), Walter G. Kropatsch (PRIP
group, Vienna University of Technology, Austria), Adrian Ion (PRIP
group and Institute of Science and Technology, Austria), Ron Umble
(Math. Dept., Millersville University of Pennsylvania, USA).

Filtrations are at the core of some topological
data analysis methods. For instance, persistent homology captures
the topology of a space by measuring the lifetime of homological
events occurring along a given filtration. These kinds of filtration
are often induced by considering sub-level sets of continuous
functions. A natural question arises about whether any filtration
can be generated in this way. In our talk we analyze this problem.
In particular, we present a result showing that, under some "natural"
conditions, any filtration of a topological space is induced by
a continuous function. Both the cases of scalar and vector indexed
filtrations are considered.
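
A common concrete instance of a filtration induced by a function is the sublevel-set (lower-star) filtration, in which vertex values extend to simplices by taking maxima. A minimal sketch (hypothetical code):

```python
def sublevel_filtration(simplices, f):
    """Order simplices so that each enters at the max of its vertices'
    values; ties broken by dimension, so faces always precede cofaces."""
    value = {s: max(f[v] for v in s) for s in simplices}
    return sorted(simplices, key=lambda s: (value[s], len(s)))

# A filled triangle filtered by vertex heights 0, 1, 2.
simplices = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2), (0, 1, 2)]
f = {0: 0.0, 1: 1.0, 2: 2.0}
print(sublevel_filtration(simplices, f))
```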

Hirani, Anil University of Illinois
Optimization, Knots, and Differential Equations

I will introduce several optimality problems
in integer homology and real cohomology that we have studied recently.
For integer homology, I will first discuss our STOC 2010 result
for the Optimal Homologous Chain Problem (OHCP). We showed that
for relatively torsion-free complexes, OHCP can be solved in polynomial
time. This was surprising because with mod 2 coefficients OHCP
is NP-hard, as shown earlier by Chen and Freedman. Next I will
describe our SoCG 2011 result by introducing the Optimal Bounding
Chain Problem (OBCP) and its relative version. We used these to
show that the Least Spanning Area Problem for knots embedded in
3-manifolds with trivial homology can be solved in polynomial
time. An earlier result of Agol, Hass, and Thurston is that for
general 3-manifolds the problem is NP-hard. Both OHCP and OBCP
discussed above were formulated as 1-norm minimization while satisfying
some constraint involving boundary matrices. If instead, one uses
2-norm minimization, real coefficients, and coboundary matrices,
the OHCP problem transforms into one that is needed as a fundamental
step in finite element exterior calculus for solving elliptic
partial differential equations. I will discuss our recent results
from our arXiv eprint on Cohomologous Harmonic Cochains where
we proved a discrete Hodge-deRham isomorphism theorem. Different
parts of the above results are joint work with T. Dey, N. Dunfield,
K. Kalyanaraman, B. Krishnamoorthy, S. Watts, and H. Wang.

In the past decades, homology and
cohomology theories have gained considerable attention outside the
mathematics community, prompted by their modern applications in science
and engineering. Until recently, the main progress had been made in the
computation of homology groups of finitely representable objects.
Whenever the mathematical model made it possible, as, for example,
in the case of orientable manifolds, duality was used
to avoid working explicitly with cohomology. The cup product endows
cohomology with a ring structure that can distinguish
spaces which homology groups alone cannot.
However, this intrinsically more difficult theory had to wait
longer for computer implementations. Some application-oriented
work on computing the cohomology ring of simplicial complexes
has been done by Gonzalez-Diaz and Real. We developed a cohomology
ring algorithm in a dimension-independent framework of combinatorial
cubical complexes.
This approach is convenient in the cup-product computation and
motivated, among others, by interpreting pixels or voxels as cubes.
The S-complex theory and so-called coreductions are adopted to
build a cohomology ring algorithm speeding up the algebraic computations.
This is joint work with M. Mrozek. The implementation of the algorithm
by P. D{\l}otko is in progress.

Kerber, Michael
Alexander Duality for Functions: The Persistent Behaviour of
Land and Water and Shore

This work contributes to the point calculus
of persistent homology by extending Alexander duality to real-valued
functions. Given a perfect Morse function $f: S^{n+1} \to [0,1]$
and a decomposition $S^{n+1} = U \cup V$ such that $M = U \cap
V$ is an $n$-manifold, we prove elementary relationships between
the persistence diagrams of $f$ restricted to $U$, to $V$, and
to $M$.
Joint work with Herbert Edelsbrunner

Kahle, Matthew Princeton University
Higher-dimensional Expanders

Expander graphs have found many
applications in mathematics and theoretical computer science.
We will discuss their generalizations to higher-dimensional simplicial
complexes, providing several kinds of random polyhedra as examples.
This will include joint work with Dotterrer, and also with Hoffman
and Paquette.

We consider the issue of obtaining the
probability of complete coverage for a given domain by a finite
coverage process with compact convex grains. The problem is approached
by considering a random simplicial complex associated with
the nerve of the coverage process. We determine distributions
of random Betti numbers as well as the Euler characteristic of
the nerve, which then allows us to address the complete coverage
probability question. These results should have applications to,
e.g., sensor networks or cloud coverage.

The theory of multidimensional persistent
homology was initially developed in the discrete setting, and
involved the study of simplicial complexes filtered through an
ordering of the simplices. Later, stability properties of multidimensional
persistence have been proved to hold when topological spaces are
filtered by continuous functions, i.e. for continuous data. This
talk aims to provide a bridge between the continuous setting,
where stability properties hold, and the discrete setting, where
actual computations are carried out. More precisely, we develop
a method to compare persistent homologies of vector functions
obtained from discrete data in such a way that stability is preserved.
These advances support the appropriateness of multidimensional
persistent homology for shape comparison in computer vision and
computer graphics applications. This is joint work with N. Cavazza,
M. Ethier, P. Frosini, and T. Kaczynski.

Matschke, Benjamin
A Parametrized Version of Gromov's Waist of the Sphere Theorem

Gromov, Memarian, and Karasev--Volovikov
proved that any map $f$ from an $n$-sphere to a $k$-manifold ($n \ge k$)
has a preimage $f^{-1}(z)$ whose $\epsilon$-neighborhoods are at least
as large as the $\epsilon$-neighborhoods of the equator $S^{n-k}$.
We present a parametrized generalization. For the proof we introduce
a Fadell-Husseini type ideal-valued index of G-bundles that is
quite computable in our situation and we obtain two new parametrized
Borsuk-Ulam type theorems.

I'll discuss the following applications
of the finite Fourier transform to combinatorial topology.
1. We study the homology of certain arithmetically constructed
spaces called Sum Complexes. In particular, it is shown that sum
complexes on a prime number of vertices are hypertrees, i.e. have
vanishing rational homology. One ingredient in the proof is a
remarkable theorem of Chebotarev concerning submatrices of the
Fourier matrix. (joint work with N. Linial and M. Rosenthal)
2. Uncertainty principles roughly assert that a nonzero function
and its Fourier transform cannot both be concentrated on small
sets. We will point out a connection between discrete uncertainty
inequalities and the topology of sum complexes.
3. We give a Fourier characterization of the top homology of balanced
complexes. This leads to an extension and a short proof of a recent
result of Reiner and Musiker on a topological interpretation of
the coefficients of the cyclotomic polynomial.
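
Chebotarev's theorem mentioned in 1. is easy to verify numerically for a small prime; a sketch (a hypothetical brute-force check, numpy only):

```python
import numpy as np
from itertools import combinations

p = 5  # a prime
F = np.exp(2j * np.pi * np.outer(range(p), range(p)) / p)  # Fourier matrix

# Chebotarev: for prime p, every square submatrix of F is nonsingular.
ok = all(
    abs(np.linalg.det(F[np.ix_(rows, cols)])) > 1e-9
    for k in range(1, p + 1)
    for rows in combinations(range(p), k)
    for cols in combinations(range(p), k)
)
print(ok)  # True (Chebotarev's theorem)
```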

Mileyko, Yuriy
Probability measures on the space of persistence diagrams

Persistence diagrams are topological summaries
that provide useful information about the topology and geometry
of data and play a crucial role in topological data analysis.
However, the problem of quantifying the uncertainty, noise, and
reproducibility of these topological summaries, which is a fundamental
aspect of the classical data analysis, has not been well studied.
In this talk, we shall show that the space of persistence diagrams
has properties that allow for the definition of probability measures
which support expectations, variances, percentiles and conditional
probabilities. This provides a theoretical basis for a statistical
treatment of persistence diagrams, for example computing sample
averages and sample variances of persistence diagrams, and allows
us to extend the theory of topological persistence to a much larger
set of applications.

Mimram, Samuel CEA
Efficient State Space Reduction Using Trace Spaces

State-space reduction techniques, used
primarily in model-checkers in order to verify programs, all rely
on the idea that some actions are independent, hence could be
taken in any (respective) order while put in parallel, without
changing the semantics. It is thus not necessary to consider all
execution paths in the interleaving semantics of a concurrent
program, but rather some equivalence classes. We describe a new
algorithm to compute such equivalence classes, and a representative
per class, which is based on ideas originating in algebraic topology.
We introduce a geometric semantics of concurrent languages, where
programs are interpreted as directed topological spaces, and study
its properties in order to devise an algorithm for computing dihomotopy
classes of execution paths. We will also report on a preliminary
implementation, showing promising results towards efficient methods
to analyze concurrent programs, with better results than state
of the art partial-order reduction techniques.

Morozov, Dmitriy Witnessed k-Distance

The distance function to a compact set plays
a central role in several areas of computational geometry. Methods
that rely on it are robust to perturbations of the data by
Hausdorff noise, but fail in the presence of outliers. The
recently introduced \emph{distance to
a measure} offers a solution by extending the distance function
framework to reasoning about the geometry of probability measures,
while maintaining theoretical guarantees about the quality of
the inferred information. A combinatorial explosion hinders working
with the distance to a measure as an ordinary power distance function.
In this talk, we analyze an approximation scheme that keeps the
representation linear in the size of the input, while maintaining
the guarantees on the inference quality close to those for the
exact but costly representation. (Joint work with Leonidas Guibas
and Quentin Merigot.)
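
The exact distance to the empirical measure (of which the witnessed k-distance is a cheaper approximation) has a simple closed form: the root mean of squared distances to the k nearest sample points. A sketch (hypothetical code) showing its robustness to a single outlier:

```python
import numpy as np

def dist_to_measure(x, P, k):
    """Distance to the empirical measure of P at mass parameter m = k/|P|:
    root mean of squared distances to the k nearest points of P."""
    d2 = np.sort(((P - x) ** 2).sum(axis=1))[:k]
    return np.sqrt(d2.mean())

rng = np.random.default_rng(1)
P = rng.normal(size=(200, 2))
outlier = np.array([50.0, 50.0])
P_noisy = np.vstack([P, outlier])

# The ordinary distance function is zero at the outlier (it is a data
# point), but the distance to a measure there stays large.
d_plain = np.sqrt(((P_noisy - outlier) ** 2).sum(axis=1)).min()
print(d_plain, dist_to_measure(outlier, P_noisy, k=10))
```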

Mrozek, Marian
Towards the Understanding of the Homological Persistence of Maps

When a topological space is known only
from sampling, persistence provides a useful tool to study its
homological properties. In many applications one can sample not
only the space, but also a map acting on the space. The understanding
of the topological features of the map is often of interest, in
particular in time series analysis. We consider the concept of
persistence in finite dimensional vector spaces and combine it
with a graph approach to computing homology of maps in order to
study the persistence of eigenspaces of maps induced in homology.
This is research in progress, joint with Herbert Edelsbrunner.

In this talk I will introduce
several families of signatures for compact Riemannian manifolds
or finite metric approximations of these. Our signatures are defined
as persistence diagrams of carefully chosen functions over the
space, and they provide a multi-scale description of the structure
of the space. Some of them are global and can be used in shape
classification applications, while others have a more local nature
and can be used in (partial) shape matching applications.
All our signatures are stable under small perturbations of the
space in the Gromov-Hausdorff distance. I will also illustrate
their practicality through a series of experiments.

Given a mapping to a Euclidean space $E$
and a point $x$ in $E$, the well group quantifies the stability of
the homology over $x$. I will introduce two definitions of a well
group: the first is very intuitive but not so easy to compute;
the second involves the use of intersection theory. Both definitions
have similar stability properties, but it is not clear whether
the two definitions are equivalent. This raises some interesting
questions.

Raussen, Martin Aalborg University
Simplicial Models for Trace Spaces

Concurrency theory in Computer Science
studies the effects that arise when several processors run simultaneously
sharing common resources. It attempts to devise methods to deal
with the "state space explosion problem". In recent
years, models with a combinatorial/topological flavor have been
introduced and investigated as tools in the analysis of concurrent
processes. It is a common feature of these models that an execution
corresponds to a directed path (d-path), and that d-homotopies
(preserving the directions) have equivalent computations as a
result.
I will discuss a particular classical example of a directed space
arising as a (pre-cubical set) model of a class of Higher Dimensional
Automata (HDA). For such a space, I will describe a new method
that determines the homotopy type of the space of traces (executions)
as a prodsimplicial complex - with products of simplices as building
blocks. A description of that complex opens up for (machine) calculations
of homology groups and other topological invariants of the trace
space. The determination of path components is particularly important
for applications.
This prodsimplicial trace complex arises from a covering of a
trace space by particular contractible subspaces. Nonempty (and
contractible) intersections of these subspaces form a poset category
with a classifying space that corresponds to a barycentric subdivision
of the prodsimplicial complex.
The determination of this complex in a particular case depends
on deciding whether certain subspaces of traces are empty or
not. This decision relies on an algorithm detecting deadlocks
and unsafe regions for these Higher Dimensional Automata by checking
a set of inequalities.
This concept is the background for an algorithm yielding a representation
of the prodsimplicial complex that has been implemented using
the ALCOOL software from a lab at CEA in France. Using the outcome
of the algorithm, homological invariants have been calculated
using homology software by Mrozek et al.
If time permits, I will sketch a method that generalizes the main
construction to arbitrary HDA taking into account processes that
may branch and loop.

Robinson, Michael University of Pennsylvania
Euler integral transforms and applications

The old idea of using the combinatorial
Euler characteristic as a valuation to define an integration theory
found application to sensor networks in a recent paper of Baryshnikov
and Ghrist. They showed that the readings of a dense network of sensors,
each of which produces an integer count of nearby targets, can
be integrated to yield a total count of the targets within the sensor
field even if the target support regions overlap. The resulting
algorithm is reminiscent of signal processing techniques, though
it uses integer-valued data points. Since the integral transform is a
primary tool of signal processing, a natural question is: are
there integral transforms in this theory?

It happens that many of the transforms
traditionally used in harmonic analysis have natural analogs under
the Euler integral. The properties of these transforms are sensitive
to topological (as well as certain geometric) features in the
sensor field and allow signal processing to be performed on structured,
integer valued data, such as might be gathered from ad hoc networks
of inexpensive sensors. For instance, the analog of the Fourier
transform computes a measure of width of support for indicator
functions. There are some notable challenges in this theory, some
of which are present in traditional transform theory (such as
the presence of sidelobes), and some which are new (such as the
nonlinearity of the transform when extended to real-valued data).
These challenges and some mitigation strategies will be presented
as well as a showcase of the transforms and their capabilities.
Reference:
http://arxiv.org/abs/1011.4494
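
In one dimension the Euler-integral counting idea fits in a few lines: the Euler integral of the sensor field h is the sum over levels of the Euler characteristic of the upper excursion sets, which for unions of intervals is just a component count. A sketch (hypothetical code):

```python
import numpy as np

def euler_integral_1d(h):
    """Euler integral of an integer-valued function sampled on a line:
    sum over levels s >= 1 of chi({h >= s}), where chi of a union of
    intervals is its number of connected components."""
    h = np.asarray(h)
    total = 0
    for s in range(1, int(h.max()) + 1):
        upper = h >= s
        # count runs of consecutive True values
        total += int(upper[0]) + int((upper[1:] & ~upper[:-1]).sum())
    return total

# Three unit "targets" with overlapping supports on a 1D sensor line;
# each sensor reports how many targets it sees.
x = np.linspace(0, 10, 201)
targets = [(1, 4), (3, 6), (7, 9)]
h = sum(((x >= a) & (x <= b)).astype(int) for a, b in targets)
print(euler_integral_1d(h))  # 3: the target count, despite the overlap
```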

Motivated by problems in structural
biology, specifically cryo-electron microscopy, we introduce vector
diffusion maps (VDM), a new mathematical framework for organizing
and analyzing high dimensional data sets, 2D images and 3D shapes.
VDM is a mathematical and algorithmic generalization of diffusion
maps and other non-linear dimensionality reduction methods, such
as LLE, ISOMAP and Laplacian eigenmaps. While existing methods
are either directly or indirectly related to the heat kernel for
functions over the data, VDM is based on the heat kernel for vector
fields. VDM provides tools for organizing complex data sets, embedding
them in a low dimensional space and interpolating and regressing
vector fields over the data. In particular, it equips the data
with a metric, which we refer to as the vector diffusion distance.
In the manifold learning setup, where the data set is distributed
on (or near) a low dimensional manifold $M^d$ embedded in $\mathbb{R}^p$,
we prove the relationship between VDM and the connection-Laplacian
operator for vector fields over the manifold. Applications to
structural biology (cryo-electron microscopy and NMR spectroscopy),
computer vision and shape space analysis will be discussed. (Joint
work with Hau-tieng Wu.)

The rise of three-dimensional imaging technology
has sharpened the need for quantitative geometric and topological
analysis of 3D images. Increasingly popular tools for the topological
analysis of data are Morse complexes and persistent homology.
We present a homotopic algorithm for determining the Morse complex
of a grayscale digital image. For two- or three-dimensional images
we prove that
this algorithm constructs a discrete Morse function which has
exactly the number and type of critical cells necessary to characterize
the topological changes in the level sets of the image. The resulting
Morse complex is considerably simpler than the cubical complex
originally used to represent the image and may be used to compute
persistent homology.

Wang, Bei SCI Institute, University of Utah
Stratification Learning through Local Homology Transfer

We would like to show that point cloud
data can, under certain circumstances, be clustered by strata in
a plausible way. For our purposes, we consider a stratified space
to be a collection of manifolds of different dimensions which
are glued together in a locally trivial manner inside some Euclidean
space. To adapt this abstract definition to the world of noise,
we first define a multi-scale notion of stratified spaces, providing
a stratification at different scales which are indexed by a radius
parameter. We then use methods derived from kernel and cokernel
persistent homology to cluster the data points into different
strata. We prove a correctness guarantee for this clustering method
under certain topological conditions. We then provide a probabilistic
guarantee for the clustering in the point sample setting:
we provide bounds on the minimum number of sample points required
to state with high probability which points belong to the same
strata. Finally, we give an explicit algorithm for the clustering.

In manifold learning, algorithms based
on the graph Laplacian constructed from data have received considerable
attention both in practical applications and theoretical analysis.
One nice property of the graph Laplacian that facilitates its broad
practical use is that its construction requires only the proximity
graph built from the input point data. Much of the existing work has been
done under the assumption that the
data is sampled from a manifold without boundaries and singularities
or that the functions of interest are evaluated away from such
points. At the same time, it can be argued that singularities
and boundaries are an important aspect of realistic data. Boundaries
occur whenever the process generating data has a bounding constraint;
while singularities appear when two different manifolds intersect
or if a process undergoes a "phase transition", changing
non-smoothly as a function of a parameter.

In this talk I will present some results
from our recent study of the behavior of graph Laplacians on singular
manifolds. In particular, we consider boundaries and two types
of singularities: intersections, where different manifolds come
together and sharp "edges", where a manifold sharply
changes direction. We show that the behavior of the graph Laplacian
near these singularities is qualitatively different from that
in the interior of the manifolds. Unlike in the interior of the
domain, where the graph Laplacian converges to the Laplace-Beltrami
operator, near singularities the graph Laplacian tends to a first-order
differential operator, which exhibits different scaling behavior
as a function of the kernel width. Understanding such behavior
will lead to interesting applications in learning and analyzing
complex data.
This is joint work with M. Belkin, Q. Que, and X. Zhou.
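
The construction from a proximity graph alone that the talk refers to can be sketched in a few lines (hypothetical code; Gaussian weights, unnormalized Laplacian L = D - W):

```python
import numpy as np

def graph_laplacian(P, eps):
    """Unnormalized graph Laplacian L = D - W with a Gaussian kernel,
    built from the point data alone."""
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(2)
P = rng.random((100, 2))
L = graph_laplacian(P, 0.1)

# L is symmetric, positive semidefinite, and annihilates constants.
vals = np.linalg.eigvalsh(L)
print(vals[0])  # ~0: the constant function lies in the kernel
```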

November
14-18, 2011
Workshop on Sphere Arrangements

Csikós, Balázs On the Volume of the Union and Intersection of Random Balls

The generalized Kneser-Poulsen conjecture predicts that the volume
of the union/intersection of some balls in the Euclidean, spherical
or hyperbolic space does not decrease/increase when the balls
are rearranged so that the distances between the centers increase.
First we review different weighted versions of this conjecture,
then turn our attention to inequalities on the expectation of
the volume of the union/intersection of balls with fixed centers
and random radii.
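
In the simplest planar case, two congruent disks, the conjectured monotonicity is classical and can be verified in closed form via inclusion-exclusion with the lens-area formula. A minimal sketch, with unit radius assumed:

```python
import numpy as np

def union_area(d, r=1.0):
    """Area of the union of two disks of radius r at center distance d,
    via inclusion-exclusion with the standard circle-lens formula."""
    d = np.minimum(d, 2 * r)            # disks are disjoint beyond d = 2r
    lens = 2 * r**2 * np.arccos(d / (2 * r)) - (d / 2) * np.sqrt(4 * r**2 - d**2)
    return 2 * np.pi * r**2 - lens

d = np.linspace(0.0, 2.0, 201)
A = union_area(d)
```

The union area increases from pi r^2 at d = 0 to 2 pi r^2 once the disks separate, in line with the Kneser-Poulsen prediction that pushing centers apart does not shrink the union.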

The study of Gaussian mixture distributions goes back to the
late 19th century, when Pearson introduced the method of moments
to analyze the statistics of a crab population. They have since
become one of the most popular tools of modeling and data analysis,
extensively used in speech recognition, computer vision and other
fields. Yet their properties are still not well understood. Widely
used algorithms, such as Expectation Maximization (EM), often
fail even on simple generated data and their theoretical properties
are often unclear.

In my talk I will discuss our recent results with Kaushik Sinha,
which, in a certain sense, complete an active recent line of work
in theoretical computer science by establishing quite general
conditions for polynomial learnability of the parameters of a wide
class of distributions, including Gaussian mixture models, using
methods of algebraic geometry.

Bezdek, Karoly - University of Calgary On a Foam Problem and on the X-ray Conjecture

We investigate the following two problems, the first of which concerns
sphere packings in Euclidean space, while the second
is closely connected to sphere coverings in spherical space. First,
we raise a relative of the well-known Kelvin problem that one
can regard as a stronger version of the Kepler conjecture. In
particular, we prove that the average surface area of any family
of convex cells that tile Euclidean 3-space, with each cell
containing a unit ball, is always at least 13.8564... . Second,
recall that a subset of the d-dimensional Euclidean space having
nonempty interior is called a
spindle convex body if it is the intersection of (finitely or
infinitely many) congruent d-dimensional closed balls. The spindle
convex body is called a
``fat'' one, if it contains the centers of its generating balls.
We outline a proof of the X-ray Conjecture for fat spindle convex
bodies in dimensions 3, 4, 5, and 6 as well as in dimensions greater
than or equal to 15.

Fasy, Brittany Terese Ghosts in Gaussian Mixture Models

The sum of n isotropic Gaussian kernels can have more than n
modes. We give a thorough analysis of the sum of n Gaussians centered
at the vertices of the standard (n-1)-simplex.
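
A small numerical experiment illustrates the phenomenon for n = 3. The unit-side equilateral triangle (an affine copy of the standard 2-simplex) and the width values below are my own illustrative choices: at an intermediate width the mixture has four modes, the three near the vertices plus a "ghost" at the barycenter.

```python
import numpy as np

# vertices of a unit-side equilateral triangle (illustrative choice)
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def mixture(grid, sigma):
    """Sum of three isotropic Gaussian kernels of width sigma."""
    d2 = ((grid[:, :, None, :] - V[None, None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum(-1)

def count_modes(sigma, lo=-0.2, hi=1.2, h=0.005):
    """Count strict local maxima of the mixture on a fine grid."""
    xs = np.arange(lo, hi + h / 2, h)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    F = mixture(np.stack([X, Y], axis=-1), sigma)
    C = F[1:-1, 1:-1]
    is_max = np.ones_like(C, dtype=bool)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                is_max &= C > F[1 + dx:F.shape[0] - 1 + dx,
                                1 + dy:F.shape[1] - 1 + dy]
    return int(is_max.sum())
```

Here count_modes(0.25) finds the three vertex modes, count_modes(0.8) the single central mode, and count_modes(0.42) the four-mode "ghost" regime (the sigma values were found by experiment).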

Given a packing of open balls and two points outside the balls
at distance d from one another, find the shortest path connecting
the two points and avoiding the balls. We give an upper bound
for the length of the shortest path, showing that the detour
we have to make in order to cover a given distance d from one
point to another while avoiding the members of a packing of balls
with bounded radii in E^n tends to zero as the dimension grows. We
also mention some open problems.

The problem of finding configurations of points that are optimally distributed
on a set appears in a number of guises, including best-packing
problems, coding theory, geometrical modeling, statistical sampling,
radial basis approximation and golf-ball design (i.e., where to
put the dimples). This talk will focus on classical and recent
results concerning asymptotic geometrical properties of N-point
configurations on a compact metric set A interacting through a
weighted inverse power law for large values of N. This is joint
work with S. Borodachov, E. Saff and T. Whitehouse.
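
The inverse-power-law interactions above can be explored numerically. The sketch below runs projected gradient descent on the Riesz s-energy of N points on the two-sphere; s = 1, N = 12, the step size, and the iteration count are illustrative assumptions.

```python
import numpy as np

def riesz_energy(X, s=1.0):
    """Riesz s-energy: sum over pairs of 1 / |x_i - x_j|^s."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return (1.0 / D[iu] ** s).sum()

def minimize_on_sphere(N=12, s=1.0, steps=3000, lr=0.005, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(N, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    for _ in range(steps):
        diff = X[:, None] - X[None, :]
        D = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(D, np.inf)              # ignore self-interactions
        # gradient of sum_{i<j} |x_i - x_j|^{-s} with respect to x_i
        G = (-s * diff / D[..., None] ** (s + 2)).sum(axis=1)
        X -= lr * G
        X /= np.linalg.norm(X, axis=1, keepdims=True)  # project back to S^2
    return X
```

For s = 1 and N = 12 the minimizer is believed to be the regular icosahedron; the descent above typically lands near its energy (about 49.17).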

Haucourt, Emmanuel Unique Decomposition and its Application to Concurrency Theory

The PV-Language was introduced by E.W. Dijkstra as a toy example
for the study of Concurrency - a theoretical branch of Computer
Science. There is a natural way to associate each PV-program with
a directed topological space. The shapes modeling PV-programs
actually lie in a family which enjoys a free commutative monoid
structure. From a practical point of view, a PV-program is made
of several processes running together and sharing the same pool
of resources. From the decomposition of its modeling shape,
we can gather processes into groups running independently of
each other. Moreover, the component category invariant turns
any model of PV program into a finite nonempty connected loop-free
category. Such categories also admit a free commutative monoid
structure. We conjecture a relation between the decomposition
of a shape and the decomposition of its component category.

Musin, Oleg University of Texas at Brownsville (pls. assign
talk November 14-17) Packing of congruent spheres on a sphere.

According to the Boros-F\"uredi-B\'ar\'any theorem, for any
system of $n$ points in Euclidean $d$-space, there is a point
contained in at least a constant fraction of all simplices generated
by them. We discuss some related problems for geometric arrangements.
One of our tools will be the following: Let $\alpha>0$, let
$H_1,\ldots,H_m$ be finite families of semi-algebraic sets of
constant
description complexity, and let $R$ be a fixed semi-algebraic
$m$-ary relation on $H_1\times\cdots\times H_m$ such that the
number of $m$-tuples that are related (resp. unrelated) with respect
to $R$ is at least $\alpha\prod_{i=1}^m|H_i|$. Then there exists
a constant $c>0$, which depends on $\alpha$, $m$, and on the maximum
description complexity of the sets in $H_i\; (1\le i\le m)$ and
$R$, and there exist subfamilies $H^*_i\subseteq H_i$ with $|H^*_i|
\geq c|H_i|\; (1\le i\le m)$ such that $H^*_1\times\cdots\times
H^*_m\subset R$ (resp. $(H^*_1\times\cdots\times H^*_m)\cap R=\emptyset$).
(Joint work with Jacob Fox, Mikhail Gromov, Vincent Lafforgue,
and Assaf Naor.)

Stephenson, Ken University of Tennessee Circle packings and circle arrangements: searching for common
ground

Circle packings are configurations of circles having prescribed
patterns of tangency. They are not traditional 2D sphere arrangements
because it is their tangency "patterns" that are specified,
while the individual circle sizes are flexible. However, there
is an "edge-flipping" operation on combinatorics which
leads to interesting dynamics in circle packing. Experiments suggest
that some common ground may emerge with the spherical arrangement
community --- and there may be a few practical applications as
well.

Toth, Csaba University of Calgary On the Average Distance from the Fermat-Weber Center of a Planar
Convex Body

The FermatWeber center of a planar body Q is a point in
the plane from which the average distance to the points in Q is
minimal. We first show that for any convex body Q in the plane,
the average distance from the FermatWeber center of Q to
the points in Q is larger than Delta(Q)/6, where Delta(Q) is the
diameter of Q. From the other direction, we prove that the same
average distance is at most 0.3490 Delta(Q). The new bound brings
us closer to the conjectured value of Delta(Q)/3. We also confirm
the upper bound conjecture for centrally symmetric planar convex
bodies.
(Joint work with Adrian Dumitrescu and Minghui Jiang)
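
For the unit disk, symmetry places the Fermat-Weber center at the origin, and the average distance to the points of the disk works out to 2/3, i.e. exactly Delta(Q)/3 of the diameter, matching the conjectured extremal value. A quick Monte Carlo check (the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
# uniform points in the unit disk: only the radius matters for the distance
u = rng.random(1_000_000)
r = np.sqrt(u)              # radius ~ sqrt(uniform) is uniform in area
avg = r.mean()              # average distance from the center (the FW center)
ratio = avg / 2.0           # diameter of the unit disk is 2
```

The exact values are avg = 2/3 and ratio = 1/3, since the radial density 2r gives an average distance of the integral of 2r^2 over [0, 1].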

Understanding and describing expensive black box functions such as physical
simulations is a common problem in many application areas. One
example is the recent interest in uncertainty quantification with
the goal of discovering the relationship between a potentially
large number of input parameters and the output of a simulation.
Typically, the simulation of interest is expensive to evaluate
and thus the sampling of the parameter space is necessarily small.
As a result choosing a good set of samples at which
to evaluate is crucial to glean as much information as possible
from the fewest samples. While space-filling sampling designs
such as Latin Hypercubes provide a good initial cover of the entire
domain more detailed studies typically rely on adaptive sampling.
The core of most adaptive sampling methods is the scoring function
which, given an initial set of training points ranks a large set
of candiate points to determine the most valuable one for evaluation.
Traditional scoring functions are based on well know geometric
metrics such as distance in function space. Instead, we propose
topology based techniques that aim to recover the global structure
of the simulation response. In particular, we propose three new
topological scoring functions based on persistent homology and
demonstrate that especially for complicated functions and higher
dimensions these outperform traditional techniques.
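
One of the traditional geometric scoring functions the abstract contrasts with can be sketched as follows; the maximin (coverage) rule below is a generic example of a distance-based score, not one of the talk's topological scores.

```python
import numpy as np

def maximin_score(candidates, training):
    """Traditional geometric score: distance from each candidate to its
    nearest already-evaluated training sample; the best candidate is the
    one that maximizes this distance (fills the largest gap)."""
    d = np.linalg.norm(candidates[:, None, :] - training[None, :, :], axis=-1)
    return d.min(axis=1)

rng = np.random.default_rng(0)
train = rng.random((20, 2))     # existing samples in the unit square
cand = rng.random((500, 2))     # candidate pool
best = cand[np.argmax(maximin_score(cand, train))]
```

An adaptive loop would evaluate the simulation at best, append it to the training set, and repeat.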

Womersley, Rob - University of New South Wales Spherical designs and approximate spherical designs

Spherical t-designs are sets of points on the unit sphere such
that the equal-weight cubature rule at these points is exact for
all spherical polynomials of degree at most t. As such, they can
be regarded as a quasi-Monte Carlo rule for the sphere. Most attention
has been focussed on the minimum numbers of points N needed for
a spherical t-design.
This talk will focus on the two-sphere and the calculation of spherical
t-designs with $N = t^2 / 2 + O(t)$ for t up to degree 221, their
geometrical properties, such as mesh norm, separation, (spherical
cap) discrepancy, worst case error and energy. To enable fast
generation of point sets we also consider approximate spherical
designs.
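
The defining property is easy to check on a toy example (my own, not from the talk): the six vertices of the regular octahedron form a spherical 3-design on the two-sphere, so the equal-weight average of every monomial of degree at most 3 agrees with its exact sphere average, which for even exponents is given by the standard double-factorial formula.

```python
import numpy as np
from itertools import product

def dfact(n):
    """Double factorial, with the convention (-1)!! = 1."""
    return 1 if n <= 0 else n * dfact(n - 2)

def sphere_avg(a, b, c):
    """Exact average of x^a y^b z^c over the unit sphere S^2."""
    if a % 2 or b % 2 or c % 2:
        return 0.0
    return dfact(a - 1) * dfact(b - 1) * dfact(c - 1) / dfact(a + b + c + 1)

# the six vertices of the regular octahedron
P = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
              [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)

def design_error(t):
    """Largest equal-weight cubature error over monomials of degree <= t."""
    err = 0.0
    for a, b, c in product(range(t + 1), repeat=3):
        if a + b + c <= t:
            approx = (P[:, 0]**a * P[:, 1]**b * P[:, 2]**c).mean()
            err = max(err, abs(approx - sphere_avg(a, b, c)))
    return err
```

The same check with t = 4 fails (the error is 2/15, coming from x^4), showing the octahedron is a 3-design but not a 4-design.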