CS-450: Advanced algorithms

Introduction

This compendium is made for the course CS-450 Advanced Algorithms at École Polytechnique Fédérale de Lausanne (EPFL) and is a summary of the lectures and lecture notes. It is not the complete curriculum, but rather a list of reading material.

Wagner's conjecture:
Given a family of graphs defined by a property P. If P is closed under subgraphs and edge contraction (i.e. closed under minors), then there exists a finite set of "obstruction" graphs, s.t. every graph not in the family includes an obstruction (so we can test in polynomial time whether a graph belongs to the family or not).

An algorithm was also given to find this, which runs in time $c \cdot g(n)$ where $g(n) = n^3$. The problem is that $c$ is of order $10^{1000}$. Conclusion: big-oh can be misleading.

Kuratowski's theorem:
A graph is planar iff it does not contain a subdivision of $K_5$ or $K_{3,3}$.

Example:

P: Can be embedded in 3D without knots. Deciding this was believed to be impossible, but Wagner's conjecture tells us that it can be solved in polynomial time.

Worst-case analysis

Amortized analysis (connected with data structures)

Look at the cost of a sufficiently long sequence of operations. Do worst-case analysis on sequences of operations (of unknown length).

Standard tool: A potential function

Ex: Stacks; push and pop

How many operations are sufficient for the amortized bound to hold? At most $n$ operations; the slack is $\Phi_{final} - \Phi_{initial}$. If the running time is $\log n$, then we need a sequence of length at least $\frac{n}{\log n}$. A smarter potential function could give the same running time, but a smaller sufficient length of the sequence.

Add a new operation: "empty", but this has to be implemented by popping as long as there are elements left.

Claim: If we start with an empty stack, then any sequence of $n$ operations takes $\Theta(n)$ time.

$\Phi(\text{stack}) = $ size of the stack, where $\Phi$ is the potential function.

Push, pop and empty each take some time and also change the potential:

Operation | time | $\Delta \Phi$
Push | 1 | +1
Pop | 1 | -1
Empty | $\Phi(\text{stack})$ | $-\Phi(\text{stack})$
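The accounting above can be checked mechanically; a small Python sketch (the class and the unit operation costs are my own illustration, not from the lecture):

```python
# Stack with push, pop, empty; potential Phi = size of stack.
# Amortized cost of an op = actual cost + Phi_after - Phi_before.

class Stack:
    def __init__(self):
        self.items = []

    def phi(self):
        return len(self.items)  # potential = size of stack

    def push(self, x):          # actual cost 1, delta Phi = +1 -> amortized 2
        self.items.append(x)
        return 1

    def pop(self):              # actual cost 1, delta Phi = -1 -> amortized 0
        self.items.pop()
        return 1

    def empty(self):            # actual cost Phi, delta Phi = -Phi -> amortized 0
        cost = len(self.items)
        while self.items:
            self.items.pop()
        return cost

s = Stack()
total_actual = 0
total_amortized = 0
ops = [("push", 1), ("push", 2), ("push", 3), ("pop", None), ("push", 4), ("empty", None)]
for name, arg in ops:
    before = s.phi()
    cost = s.push(arg) if name == "push" else (s.pop() if name == "pop" else s.empty())
    total_actual += cost
    total_amortized += cost + s.phi() - before

# Since Phi starts at 0 and never goes negative, total actual <= total amortized.
assert total_actual <= total_amortized
```

Here the expensive `empty` is fully paid for by the potential the pushes deposited.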

Example:

Incrementing a binary integer: Worst-case $O(\log n)$ bit flips.

Amortized: $k$ ($k$ large with respect to $n$) incrementations take $O(k)$ time if you take it over the whole sequence of $k$ operations.

Design of potential:
We want a simple potential function. Instead of counting trailing 1's, we can simplify and take the number of 1's in the binary representation.

Increment $n$.

Yes, you had to do a lot of work, but this made the number nice for further incrementations. It improves the state of the structure.
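A sketch of this analysis in Python, counting the actual bit flips over a sequence of increments (the little-endian bit-list representation is my own choice for illustration):

```python
# Amortized analysis of incrementing a binary counter.
# Potential Phi = number of 1-bits: each increment zeroes the trailing 1s
# (paid for by the potential those 1s deposited) and sets one 0 to 1.

def increment(bits):
    """Increment a little-endian bit list in place; return the number of bit flips."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0            # clear a trailing 1 (carry)
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)         # grow the counter
    else:
        bits[i] = 1
    flips += 1
    return flips

bits = [0]
k = 1000
total_flips = sum(increment(bits) for _ in range(k))

# Total actual work over k increments is at most 2k: O(1) amortized per increment.
assert total_flips <= 2 * k
```

Even though a single increment can flip $\log n$ bits, the expensive increments are rare enough that the average stays constant.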

Amortized data structures

These do not give a per-operation worst-case guarantee, but give a worst-case bound over $k$ operations.

Example: priority queue:

Insert (item, priority)

Delete min.

Binary heap:

Guarantee the heap property: the priority of the parent is better (i.e. smaller) than the priority of the children. Delete takes at most $\log n$. Merging two heaps is at best linear time. We want a meldable (mergeable) priority queue.

A binary tree obeying the heap property that also has the leftist property, i.e. the shortest path from an internal node to a leaf is on the right. The consequence is that the tree leans to the left.

The right path can have length at most $\log n$, with decreasing priority along it.

How to merge two trees in $\log n$?

We just have to merge the two right paths. We can rotate the tree (see the wikipedia article linked in this header). We can merge the top part (the right side in the unrotated tree) as two sorted lists. This takes $O(\log(n_1) + \log(n_2))$ time. The result is not necessarily leftist. The problem is that the child trees have to be sorted by increasing height. There is no smart way of doing this except storing extra information, in practice how far it is to the closest leaf. Each node stores the distance from it to the closest leaf.
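A minimal leftist-heap sketch of the merge just described, with each node storing its distance to the nearest leaf (names and structure are my own illustration):

```python
# Leftist heap: merge walks only the two right paths, O(log n1 + log n2).

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.dist = 0  # distance to the closest missing child

def dist(node):
    return node.dist if node else -1

def merge(a, b):
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:                    # keep the smaller root on top (min-heap)
        a, b = b, a
    a.right = merge(a.right, b)          # recurse down the right path only
    if dist(a.left) < dist(a.right):     # restore the leftist property
        a.left, a.right = a.right, a.left
    a.dist = dist(a.right) + 1
    return a

def insert(root, key):
    return merge(root, Node(key))

def delete_min(root):
    """'Behead the tree' and meld the two children; returns (min_key, new_root)."""
    return root.key, merge(root.left, root.right)

root = None
for k in [5, 3, 8, 1, 9, 2]:
    root = insert(root, k)
m, root = delete_min(root)
assert m == 1
m, root = delete_min(root)
assert m == 2
```

Insert and delete-min are both just merges, which is the whole appeal of meldable heaps.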

Delete min:

Behead the tree. How long does this take? "As the Frenchmen found out during the revolution, it's very easy, constant time!!!" - Our French professor

Then we can just meld the two child trees.

Amortized:

When merging trees, instead of keeping track of the distance to a leaf node, we can always swap the children instead. This gives just as good a time bound, but amortized. If you do nothing, you double the length of the path.

Claim: amortizes to $O(\log n)$ time per operation.
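The swap-always variant (a skew heap) can be sketched like this; note there is no stored dist field, the children are simply swapped unconditionally at every merge step (illustrative code, not the lecture's):

```python
# Skew heap: a leftist heap without the dist field; always swap the children.
# Merge is O(log n) amortized rather than worst case.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def meld(a, b):
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:                      # smaller root on top (min-heap)
        a, b = b, a
    a.right = meld(a.right, b)
    a.left, a.right = a.right, a.left      # always swap: heavy nodes drift left
    return a

root = None
for k in [7, 2, 9, 4, 1]:
    root = meld(root, Node(k))

# Repeatedly behead and meld the children: keys come out in sorted order.
mins = []
while root is not None:
    mins.append(root.key)
    root = meld(root.left, root.right)
assert mins == [1, 2, 4, 7, 9]
```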

Design a potential function:

How far is it from a leftist tree? This gets a bit complicated.

We can try to define the potential via $w(x) = \#\text{descendants}$. The bigger the tree is, the greater the chance that the right path is long.

If $w(y) < w(x) \rightarrow$ bad. What if $w(y) = w(x)$?

$\log n$ is all I'm after - Professor

$w(x)$ is size of subtree rooted at $x$.

$x$ is heavy iff its weight is larger than $\frac{1}{2}$ the weight of its parent.

$\Phi(T) = $ # heavy nodes that are also right children.

If we have a path from $x$ to $y$ formed entirely of light nodes, then the length of that path is $\leq \log \frac{w(x)}{w(y)}$.

The weight of what's under the node decreases by half at every step (like binary search).

On any path from the root to a leaf, there are $O(\log n)$ light nodes, so heavy nodes are the problem. The rightmost path is the problem. How do heavy nodes stay on or get to the rightmost path? Because every time we merge, we flip the paths.

The sum is $2 m_l k$ where $k$ is the length of the sequence and $m_l$ is bounded by $\log n$.

Competitive algorithms

Splay trees.

KISS: Keep it simple stupid

Think of meldable trees. It has to be some kind of linked structure to be melded efficiently. Often trees. KISS. Binary trees.

Either heap-ordered or sorted in inorder.

A binary search tree!

Idea: search($x$). After the first search we can then use paging as known from an OS course. Actually, we will move $x$ to the root.

operations of splay($x$):

Move $x$ to the root if $x \in T$ else ?.

If $x$ is root, do nothing.

If x is one step down:

If $x$ is deeper:

or

splay($x$) on a tree of size $n$ takes $O(\log n)$ amortized time.

What if $x \notin T$? Still move a neighbor to the top: move either $x^-$ or $x^+$. We still haven't defined any useful operations on the tree. Splay is the same principle as meld. This is the only operation we will use.

All operations defined in terms of splay

All I can do is splay. I'm sorry, I'm not very good at this. - Professor

We can implement this in a smarter way, but the reason to do everything in terms of splay is that the analysis is smoother.
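A sketch of splay as recursive zig / zig-zig / zig-zag rotations on a plain BST (this particular recursive formulation is my own assumption; the lecture did not fix an implementation):

```python
# splay(t, key) moves the last node touched on the search for `key`
# (the key itself if present, else a neighbour) to the root via rotations.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def rotate_right(t):
    l = t.left
    t.left, l.right = l.right, t
    return l

def rotate_left(t):
    r = t.right
    t.right, r.left = r.left, t
    return r

def splay(t, key):
    if t is None or t.key == key:
        return t
    if key < t.key:
        if t.left is None:
            return t                                    # key not present: stop
        if key < t.left.key:                            # zig-zig
            t.left.left = splay(t.left.left, key)
            t = rotate_right(t)
        elif key > t.left.key:                          # zig-zag
            t.left.right = splay(t.left.right, key)
            if t.left.right is not None:
                t.left = rotate_left(t.left)
        return t if t.left is None else rotate_right(t)  # final zig
    else:
        if t.right is None:
            return t
        if key > t.right.key:                           # zig-zig (mirror)
            t.right.right = splay(t.right.right, key)
            t = rotate_left(t)
        elif key < t.right.key:                         # zig-zag (mirror)
            t.right.left = splay(t.right.left, key)
            if t.right.left is not None:
                t.right = rotate_right(t.right)
        return t if t.right is None else rotate_left(t)

def bst_insert(t, key):                                 # plain BST insert, for the demo
    if t is None:
        return Node(key)
    if key < t.key:
        t.left = bst_insert(t.left, key)
    else:
        t.right = bst_insert(t.right, key)
    return t

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)

root = None
for k in [1, 2, 3, 4, 5]:                               # degenerate right path
    root = bst_insert(root, k)
root = splay(root, 3)
assert root.key == 3                                    # splayed node is now the root
assert inorder(root) == [1, 2, 3, 4, 5]                 # BST order preserved
```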

Insert

Run splay($x$); if the root has $x$, done; if $x$ is not in the tree, splay($x$) on $B$ if $B \neq \emptyset$ ($x$ must go in $B$). If we do splay on $B$, then we will get the successor. Put it at the root, and we can easily insert $x$.

Deletemin

Just splay the minimum possible key. This will bring up either the smallest possible key or its successor, which is then the smallest key in the tree.

Amortized analysis

$w(x) =$ weight of $x =$ size of subtree rooted at $x$.

$r(x) = \log w(x)$. $r(x)$ is a surrogate for the height.

Cost of splay($x$):

$\leq 1 + 3(r_{after}(x) - r_{before}(x))$.

Rotation $i$ has a cost $\leq 3(r_{i+1}(x) - r_{i}(x))$, and similarly for rotation $i + 1$. If we sum over all the steps, the sum telescopes to the final rank minus the original rank. The 1 in front is only for the first rotation.

Lazy leftist tree

Leftist trees that support arbitrary delete. We are prepared to do operations on the right side of the tree, but not in an arbitrary place. The solution is not to delete the node, but just mark it as deleted.

Deletemin: partial fix. The min node might be dead, so not really the min. Fix the top of the tree: go down to one level under the first level that has alive nodes. We can then get many subtrees and meld them two by two. The cost is $k \log n$.

Example: Dynamic arrays
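The notes do not elaborate the dynamic-array example; a standard sketch, assuming the usual capacity-doubling strategy (the potential $\Phi = 2 \cdot \text{size} - \text{capacity}$ is the textbook choice, not from the lecture):

```python
# Dynamic array that doubles its capacity when full: a single append can copy
# everything, but over n appends the total work is O(n), i.e. O(1) amortized.

class DynArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None]

    def append(self, x):
        cost = 1
        if self.size == self.capacity:          # full: allocate double, copy all
            cost += self.size                   # copying counts as actual work
            self.capacity *= 2
            self.data = self.data[:] + [None] * (self.capacity - self.size)
        self.data[self.size] = x
        self.size += 1
        return cost

a = DynArray()
n = 1000
total = sum(a.append(i) for i in range(n))

# Total cost of n appends is < 3n -> O(1) amortized per append.
assert total < 3 * n
```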

Randomized analysis

Protection / guarantee against bad distributions of the real data

Yields very simple & very fast algorithms

Higher reliability than deterministic version

Make a deterministic algorithm stochastic to protect against unknown events in the outside world. We cannot do average-case analysis, since we cannot control the probability distribution of the outside world. The real world is not perfectly random.

Problem with, for instance, quicksort: by providing the randomness ourselves, we get an expected $O(n \log n)$ time regardless of the input sequence.
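A sketch of quicksort with self-provided randomness (the list-comprehension partition is for clarity, not efficiency):

```python
import random

# Randomized quicksort: *we* supply the randomness by picking a random pivot,
# so the expected O(n log n) bound holds for every input, including
# adversarial ones such as an already-sorted sequence.

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

# Sorted input is the worst case for "pivot = first element",
# but is no worse than any other input here.
assert quicksort(list(range(100))) == list(range(100))
```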

Analysis of 2d hull

The addition and removal of edges and vertices amortizes to $\Theta(n)$ over the whole algorithm.

The running time is determined by the maintenance of the conflict-list structure, analyzed by backwards analysis.

What needs to change if we remove the last vertex added? The last vertex is any of the $i$ vertices with equal probability. The order in which the $i$ vertices were added does not matter.

Karger's algorithm for min cut in a graph

Given a connected, undirected graph $G=(V,E)$

Def:
A cut is a subset of edges whose removal disconnects the graph.

Select an edge at random, with the assumption that this edge is not part of the cut, since the cut is small compared to the total size of the graph. Let the min-cut size be $k$. We know that every vertex has at least $k$ edges (otherwise its incident edges would form a smaller cut). This also tells us that the graph has $\geq \frac{nk}{2}$ edges.

Idea: Pick an edge at random and decide it's not in the cut; modify the graph to reflect that. The edge we have removed does not contribute to disconnecting the graph. Then merge the two endpoints of the edge. This removes one vertex and one edge. Keep duplicate edges; the result is known as a multigraph. The size of the min cut has not changed. Repeat this until only two vertices remain. The edges connecting these two vertices are the cut.

The probability that we manage to avoid the min cut in the beginning is good, but as the algorithm runs, the cut size $k$ stays the same while the total number of edges decreases.
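A sketch of Karger's contraction loop, using a union-find structure to represent merged vertices (the example graph and all names are my own illustration):

```python
import random

# Karger's contraction: repeatedly pick a random edge and contract it
# (merge its endpoints, keeping parallel edges, skipping self-loops) until
# two super-vertices remain; the edges between them form a candidate cut.

def karger_once(vertices, edges, rng):
    parent = {v: v for v in vertices}

    def find(v):                        # union-find to track merged vertices
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    remaining = len(vertices)
    while remaining > 2:
        u, v = rng.choice(edges)        # uniform over edges; loops rejected below
        ru, rv = find(u), find(v)
        if ru != rv:                    # skip self-loops of the multigraph
            parent[ru] = rv             # contract the edge
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

# Two triangles joined by a single bridge: the min cut has size 1.
V = [0, 1, 2, 3, 4, 5]
E = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
rng = random.Random(0)
best = min(karger_once(V, E, rng) for _ in range(200))
assert best == 1
```

One run only succeeds with probability $\geq \frac{2}{n(n-1)}$, which is why the algorithm is repeated many times and the best cut kept.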

Algorithm design

Dynamic programming

Discrete, finite Markov models.

A collection of states, $S = \{S_1, S_2, \dots, S_n \}$

A stochastic transition matrix $T = [t_{ij}]$

$$ \text{Markov model} \equiv \text{stochastic automaton}$$

Add an output (emission) function, also stochastic. If we have a finite output alphabet $\Sigma = \{c_1, c_2, \dots , c_m \}$ then we also have a right-stochastic emission matrix $E_{n \times m} = [e_{ij}]$. In practice, many entries of the transition matrix $T$ are set to zero.

Slightly dishonest casino

Dice are supposed to create an equally possible output. You can create a slightly weighted (pipped) die. $\Sigma = \{1,2,3,4,5,6\}$. Two states. One where the casino is using a true dice, and one where they are using a pipped dice.