I am very impressed by the many professional mathematicians posting expository articles on their blogs. One of the great advantages a blog has over a survey article is that the comments section allows direct interaction with the author. Also, in many cases the presentation is more relaxed than a research paper and more fast-paced than a book. I have examples of at least three blogs: Terry Tao’s blog, Rigorous Trivialities and The Unapologetic Mathematician, with regular detailed postings on various topics. I am wondering why there aren’t more CS theory blogs of this kind. Luca’s blog is a great one that stands out in this regard. There has been a lot of discussion about selling CS theory; see for example this post by Lance Fortnow. I think that one of the ways to do this is to write good expository articles and popularize the various ideas that arise again and again in theoretical CS.

Some photographs from a party on March 8th. Unfortunately, I do not have any snaps of the delicious food cooked by Neha. However, I am sure Amit has taken some, and I will ask him to send me a link to those.

Update 03/20/2008: I have uploaded some more photos to the same folder on Flickr. These were taken by Amit.

Suppose you are given a multiset $S$ of $n$ real numbers. Given a number $x$, one can ask the question: find the fraction of points in $S$ that are less than or equal to $x$. Here is a simple randomized algorithm to solve this approximately.

1. Choose $k$ elements of $S$ uniformly at random. Sort the chosen elements. Let $T$ be the sorted multiset.
2. Find the fraction $\hat{F}(x)$ of points in the new multiset $T$ that are less than or equal to $x$.
3. Output $\hat{F}(x)$.
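A minimal sketch of the three steps above in Python; the function name, the default sample size $k$, and sampling with replacement are my own choices for illustration:

```python
import bisect
import random

def sample_fraction(S, x, k=10000, rng=random):
    """Estimate the fraction of elements of S that are <= x.

    Step 1: choose k elements of S uniformly at random (with
            replacement) and sort them.
    Step 2: find the fraction of the sampled points that are <= x.
    Step 3: output that fraction.
    """
    T = sorted(rng.choice(S) for _ in range(k))
    # bisect_right counts the sorted sample elements that are <= x.
    return bisect.bisect_right(T, x) / k
```

For a single query $x$ the sort is unnecessary, but it lets the same sorted sample answer many different queries.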

How good is this algorithm? Here is an attempt at an analysis. For a number $x$, denote by $F(x)$ the fraction of elements in $S$ that are less than or equal to $x$, and suppose $F(x) = p$. Each sampled element is at most $x$ with probability $p$, independently of the others. Then the probability that $i$ of the $k$ sampled elements were chosen from the elements at most $x$ and $k - i$ from the remaining elements is $\binom{k}{i} p^i (1-p)^{k-i}$. Therefore the number of sampled points at most $x$ has a binomial distribution $B(k, p)$. The expected value of the output $\hat{F}(x)$ is $p$ and its standard deviation is $\sqrt{p(1-p)/k}$. By Hoeffding’s bound, the probability that we are wrong by more than an additive $\epsilon$ is at most $2e^{-2\epsilon^2 k}$. By using a modified (multiplicative) form of Hoeffding’s bound, the probability of being off by more than $\epsilon p$ is upper bounded by $2e^{-\epsilon^2 p k / 3}$. How large should $k$ be so that we are off by at most 1% of $p$ with probability at least 99%? A simple calculation shows that $k = O\!\left(\frac{\log(1/\delta)}{\epsilon^2 p}\right)$ suffices, here with $\epsilon = 0.01$ and $\delta = 0.01$. The nice part about the use of this bound is that it does not depend on the size of the underlying multiset $S$. However, there is a dependence on $p$, which is unknown in the first place! If we assume $p$ is not too small, then we can get away with sampling a few thousand elements.

Are there any better known algorithms? In randomized algorithms that produce yes/no answers, it is possible to increase the probability of success by running the algorithm multiple times and taking the majority of the outputs as the answer. Is there a similar way of improving the overall accuracy of algorithms that produce one of many possible values, such as the percentile algorithm above?
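To make the sample-size calculation concrete: under the additive Hoeffding bound $\Pr[|\hat{F}(x) - F(x)| \ge \epsilon] \le 2e^{-2\epsilon^2 k}$, solving for $k$ gives $k \ge \ln(2/\delta)/(2\epsilon^2)$. A small sketch; the particular $\epsilon$ and $\delta$ below are my own illustrative choices:

```python
import math

def hoeffding_sample_size(eps, delta):
    """Smallest k with 2 * exp(-2 * eps**2 * k) <= delta, i.e. the
    estimate is within additive eps of F(x) with probability >= 1 - delta,
    independently of the size of the multiset S."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

# Additive error 0.005 (1% of F(x) when F(x) is around 1/2),
# failure probability 1%:
k = hoeffding_sample_size(0.005, 0.01)
```

Note that the required $k$ grows only logarithmically in $1/\delta$, so demanding much higher confidence is cheap compared to demanding smaller error.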

Today I will be blogging about the construction of superconcentrators from expanders. This is directly from the expanders survey; I am just reviewing it here for my own learning. First, some definitions.

Definition 1. A $d$-regular bipartite graph $G = (L, R, E)$, with $|L| = n$ and $|R| = n/2$, is called a magical graph if it satisfies the property below. For a given set $S$ of vertices, we denote by $\Gamma(S)$ the set of vertices to which some vertex in $S$ is connected.

1. For every $S \subseteq L$ with $|S| \le n/2$, we have $|\Gamma(S)| \ge |S|$.
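As a sanity check, the expansion condition can be verified by brute force on a tiny example. The graph below is hand-built for illustration (it is not from the survey), and the check enumerates all small subsets, so it only works for very small $n$:

```python
from itertools import combinations

def neighbors(adj, S):
    """Gamma(S): all right vertices adjacent to some vertex in S."""
    out = set()
    for v in S:
        out |= adj[v]
    return out

def is_magical(adj, n):
    """Brute-force check of the Hall-type condition:
    |Gamma(S)| >= |S| for every S subset of L with |S| <= n/2."""
    for size in range(1, n // 2 + 1):
        for S in combinations(range(n), size):
            if len(neighbors(adj, S)) < size:
                return False
    return True

# A hand-built bipartite graph with |L| = 8, |R| = 4, left degree 3:
# left vertex v is joined to right vertices v, v+1, v+2 (mod 4).
adj = {v: {v % 4, (v + 1) % 4, (v + 2) % 4} for v in range(8)}
```

Any two left vertices with different residues mod 4 already cover all of $R$ between them, which is why this particular graph passes the check.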

Definition 2. A superconcentrator is a graph $G = (V, E)$ with two given subsets $I, O \subseteq V$ with $|I| = |O| = n$, such that for every $S \subseteq I$ and $T \subseteq O$ with $|S| = |T| = k$, the number of vertex-disjoint paths from $S$ to $T$ is at least $k$.

Superconcentrators with $O(n)$ edges are interesting for various reasons, which we do not go into here.

But we do give a construction of a superconcentrator with $O(n)$ edges from the magical graphs above. A simple probabilistic argument can show the following result.

Theorem. There exists a constant $d$ such that for every sufficiently large even $n$, there is a $d$-regular magical graph with $|L| = n$ and $|R| = n/2$.

Here is the construction of a superconcentrator from magical graphs. Assume that we can already construct a superconcentrator with $O(m)$ edges for every input size $m < n$; the construction is recursive. First take two copies $G_1 = (L_1, R_1, E_1)$ and $G_2 = (L_2, R_2, E_2)$ of the magical graph with $|L_i| = n$ and $|R_i| = n/2$. Connect every vertex of $L_1$ to the corresponding vertex of $L_2$, and add edges between $R_1$ and $R_2$ so that this part of the graph becomes a superconcentrator with the size of the input vertex set equal to $n/2$. We claim that the resulting graph is a superconcentrator with input vertex set of size $n$.

Identify the input vertex set as $I = L_1$ and the output vertex set as $O = L_2$, and let $S \subseteq I$ and $T \subseteq O$ with $|S| = |T| = k$. First suppose $k \le n/2$. For every $S' \subseteq S$ it is true that $|\Gamma(S')| \ge |S'|$. Therefore by Hall’s Marriage Theorem there exists a perfect matching between the vertices of $S$ and a subset of $R_1$. Similarly there exists a perfect matching between the vertices of $T$ and a subset of $R_2$. Together with the $k$ disjoint paths of the superconcentrator between its input set $R_1$ and output set $R_2$, there are at least $k$ disjoint paths between $S$ and $T$. It remains to handle the case of $k > n/2$. For $k > n/2$ we have $|S| + |T| = 2k > n$, so there is a subset $S' \subseteq S$ with $|S'| \ge 2k - n$ such that the vertices corresponding to $S'$ in $O$ are in $T$. The direct edges between such pairs of vertices contribute $2k - n$ disjoint paths between $S$ and $T$. The remaining $n - k \le n/2$ disjoint paths exist as proved earlier. Hence we have a superconcentrator with input and output vertex sets of size $n$.

How many edges are there in this graph? For the base case of this recursion, we let a superconcentrator with input and output sets of constant size be the complete bipartite graph, which has $O(1)$ edges. The following recursion counts the edges. Let $f(n)$ denote the number of edges in the superconcentrator as per this construction with input and output sets of size $n$. Then $f(n) \le f(n/2) + (2d + 1)n$, which solves to $f(n) = O(n)$.
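The edge count of the construction can also be checked numerically. This is a minimal sketch, assuming the two magical graphs contribute $2dn$ edges plus $n$ matching edges per level of the recursion; the degree $d = 20$ and the base-case cutoff of 64 are placeholder values I chose for illustration, not constants from the survey.

```python
def superconcentrator_edges(n, d=20, base=64):
    """Edges used by the recursive construction on input size n:
    two left-degree-d magical graphs (2*d*n edges), a direct matching
    from inputs to outputs (n edges), and a recursive superconcentrator
    on n/2 vertices.  Base case: a complete bipartite graph."""
    if n <= base:
        return n * n
    return 2 * d * n + n + superconcentrator_edges(n // 2, d, base)

# Total edges grow linearly: edges / n stays below roughly 2 * (2d + 1),
# matching the geometric sum f(n) <= (2d+1) * (n + n/2 + n/4 + ...).
ratio = superconcentrator_edges(2**14) / 2**14
```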