Objectives

Many variants of graph searching have been studied in the literature; they are either application driven, i.e. motivated by problems in practice, or inspired by foundational issues in Computer Science, Discrete Mathematics, and Artificial Intelligence.

In the past three years, problems have emerged from real applications related to the structure of modern (or projected) networks, which are expected to be large scale and dynamic, and in which agents' behaviour can be probabilistic, decentralized, and even selfish or antagonistic. This is one of the reasons why the field of Graph Searching is now rapidly expanding. Several new models, problems, and approaches have appeared, relating the field to such diverse areas as random walks, game theory, logic, probabilistic analysis, complex networks, motion planning, and distributed computing. Surprising and not yet widely circulated results found in the last two years have consequences for the whole field.

A major focus of the Workshop will be on graph searching problems in large scale networks, including tutorials on some of the successful methods. The Workshop will bring together researchers in Graph Searching and those versed in the analysis of large scale networks. For historical reasons, Canada is a stronghold of both fields, with several groups and individuals spread across the country. This makes BIRS a natural venue for the Workshop.

Little investigation of graph searching models on large scale networks has occurred, because graph searching researchers have focused on deterministic problems and their associated complexity and algorithmic issues. Recent advances in techniques for understanding `large' graphs bode well for progress in investigating graph searching on these networks. In 2009, Luczak and Pralat [LP], using probabilistic methods, showed that the cop number behaves in a regular but unexpected fashion: consider the random graph $G(n,p)$ with expected degree $pn=n^x$; as $x$ decreases, the expected number of cops rises and falls regularly, with local extrema at $x=1/2, 1/3, 1/4, \dots$. This result suggests that large networks have enough edges and local structure to smooth out the irregularities inherent in small graphs. In 2008, the cleaning of large scale networks (the Brush problem; Alon, Messinger, Nowakowski, Pralat, Wormald) [MNP08, APW08, P09] was also investigated, using Wormald's differential equations method for random regular graphs.
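To make the cleaning process concrete: in the brush model, every edge starts dirty and each vertex holds some brushes; a vertex may be cleaned once it has at least as many brushes as dirty incident edges, and cleaning it sends one brush along each such edge, cleaning those edges. A minimal sequential simulation is sketched below (plain Python; the greedy smallest-vertex-first order is an illustrative assumption, not part of the model):

```python
def clean_graph(adj, brushes):
    """Simulate the sequential graph cleaning (brush) process.

    A vertex may be cleaned once it holds at least as many brushes as it
    has dirty incident edges; cleaning it sends one brush along each dirty
    edge, which cleans those edges.

    adj: dict mapping each vertex to the set of its neighbours.
    brushes: dict mapping each vertex to its initial number of brushes.
    Returns the order in which vertices were cleaned, or None if the
    process gets stuck before the whole graph is clean.
    """
    dirty = {v: set(adj[v]) for v in adj}  # dirty edges, per endpoint
    b = dict(brushes)
    cleaned, remaining = [], set(adj)
    progress = True
    while remaining and progress:
        progress = False
        for v in sorted(remaining):        # greedy order (assumption)
            if b[v] >= len(dirty[v]):      # enough brushes to clean v
                for u in dirty[v]:
                    b[u] += 1              # one brush travels to u
                    dirty[u].discard(v)    # edge uv is now clean
                dirty[v] = set()
                remaining.discard(v)
                cleaned.append(v)
                progress = True
                break
    return cleaned if not remaining else None

# A path needs only one brush at an endpoint to be cleaned end to end:
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(clean_graph(path, {0: 1, 1: 0, 2: 0}))  # -> [0, 1, 2]
print(clean_graph(path, {0: 0, 1: 0, 2: 0}))  # -> None (stuck)
```

The minimum total number of brushes for which the whole graph can be cleaned is the brush number studied in the papers cited above; for a path it is 1.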

In 1985, Meyniel conjectured that if $G$ is a connected graph of order $n$, then the number of cops needed to capture a robber is of order at most $n^{1/2}$. This would be best possible because of a construction of a bipartite graph based on the finite projective plane. In 2009, Bollobas, Kun and Leader [BKL], using probabilistic methods, essentially proved Meyniel's bound for random graphs (up to a logarithmic factor). Recently, Pralat and Wormald showed that the logarithmic factor can be eliminated (for both random binomial graphs and random $d$-regular graphs), so Meyniel's conjecture is verified for these models. For deterministic graphs, we are still far from proving the conjecture. Until recently, the best known upper bound was due to Frankl in 1987 [F87], who showed that the cop number is always of order at most $n \log\log n / \log n = o(n)$. After twenty years of attacks on this problem, we now have three recent independent proofs (Lu, Peng [LP]; Scott, Sudakov [SS]; Frieze, Krivelevich, Loh [FKL]) of the same result: the cop number is at most $n 2^{-(1+o(1))\sqrt{\log n}}$ (which is still $n^{1-o(1)}$). We hope that the Workshop will bring us a bit closer to a solution.
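As an illustration of the game underlying these bounds, the one-cop case admits a simple exact test: Nowakowski and Winkler characterized the cop-win graphs (cop number 1) as exactly those reducible to a single vertex by repeatedly deleting a corner, i.e. a vertex whose closed neighbourhood is contained in that of another vertex. A minimal sketch of this test (plain Python, dict-of-sets adjacency):

```python
def is_cop_win(adj):
    """Test whether a graph has cop number 1 via dismantlability.

    A graph is cop-win iff it can be reduced to one vertex by repeatedly
    deleting a corner: a vertex u whose closed neighbourhood N[u] is
    contained in N[v] for some other vertex v (Nowakowski-Winkler).

    adj: dict mapping each vertex to the set of its neighbours.
    """
    # Closed neighbourhoods N[v] = N(v) union {v}.
    closed = {v: adj[v] | {v} for v in adj}
    while len(closed) > 1:
        corner = next(
            (u for u in closed
             if any(closed[u] <= closed[v] for v in closed if v != u)),
            None,
        )
        if corner is None:
            return False  # no corner left: the robber can evade one cop
        del closed[corner]
        for v in closed:
            closed[v].discard(corner)  # delete the corner from the graph
    return True

# Every tree is cop-win; the 4-cycle is the smallest graph that is not.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_cop_win(path))  # -> True
print(is_cop_win(c4))    # -> False
```

Deciding the cop number for $k \ge 2$ cops requires solving the pursuit game on the product state space and is far more expensive, which is one reason the probabilistic and asymptotic methods above are so valuable.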

The original problems all had simple optimality requirements: find the minimum number of searchers to do the job. Another major focus will be on more realistic requirements, e.g. minimizing the number of time steps or, if the searchers have costs associated with their use, minimizing the total cost. Recently, Scott, Stege, and Zeh considered a firefighting model (fancifully called Politician's Firefighting) in which the number of resources available at any one time is proportional to the current size of the fire.

In many models, the searchers can be regarded as robots with limited processing power. If the robots have only local knowledge of the network, Messinger and Nowakowski [MN09] showed that self-stabilizing behaviour (a minimization of the number of steps) can occur, even though it may take time exponential in the number of edges to achieve (Vetta and Li [VL10]). Some time will be spent investigating the relationship between the knowledge and decisions available to a robot and their effect on the minimization problem.