Recap of Last Lecture

(Flow conservation) For every vertex $v$ other than source $s$ and sink $t$, $\sum_w f_{(v,w)}-f_{(w,v)}=0$.

(Capacity constraints) For each edge $e$, $0\leq f_e \leq u_e$, where $u_e$ is the (non-negative) capacity of the edge $e$.

(Any flow satisfying the above constraints is called feasible.)

Note: The direction/orientation of edges is important here: $f_{(v, w)}$ might be different from $f_{(w, v)}$. (Wlog, we can assume that either $f_{(w, v)}=0$ or $f_{(v, w)}=0$, or both.) Similarly, $u_{(w, v)}$ might be different from $u_{(v, w)}$. (But often both $u_{(w,v)}$ and $u_{(v,w)}$ will be non-zero.)

Goal: Maximize the value $|f| = \sum_w f_{(s,w)}-f_{(w,s)}$, i.e., the net flow out of $s$ (by flow conservation constraints, this is equal to the net flow into $t$).

Important concepts:

Residual graph $G_f$ with respect to some (feasible) flow $f$. $G_f$ is the same as $G$, except that each edge $e=(w,v)$ gets the residual capacity $u_f(w,v):= u(w,v) - f(w, v) + f(v, w)$.
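As a quick sanity check of the residual-capacity rule, here is a tiny sketch (the dictionary representation and the function name are my own, not from the lecture):

```python
def residual(u, f, w, v):
    """Residual capacity u_f(w, v) = u(w, v) - f(w, v) + f(v, w).
    `u` and `f` are dicts keyed by ordered vertex pairs (illustrative)."""
    return u.get((w, v), 0) - f.get((w, v), 0) + f.get((v, w), 0)

u = {('s', 'a'): 4}   # edge (s, a) with capacity 4
f = {('s', 'a'): 3}   # flow of 3 units along it
print(residual(u, f, 's', 'a'))  # 1: leftover forward capacity
print(residual(u, f, 'a', 's'))  # 3: flow that can be pushed back
```

Note how the reverse direction $(a, s)$ has positive residual capacity even though the graph has no such edge: undoing flow is always allowed.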

$\Rightarrow$ The overall running time is $O(mnU)$. Note: This running time is polynomial in the value of our input but not in the size of its binary representation. (Such algorithms are called pseudo-polynomial.) When $U=1$, though, this is a polynomial-time algorithm.

Important corollary: If all capacities are integral then there exists an integral maximum flow too. (Although there could still exist maximum flows that are non-integral.)

When capacities are rational, we can think of multiplying all capacities by the product of their denominators, and thus reduce to the integral capacity case.

$\Rightarrow$ The running time is finite, but most likely not even pseudo-polynomial.

When capacities are real numbers, we might not even have termination.

Sidenote: The above algorithm was simple to describe but its dynamics can be quite complex.

Each augmentation constitutes a “greedy” improvement step, but these augmentations might undo/overwrite previous updates, adding flow to an edge and then removing it.

Instructive example to look at: “diamond” graph with edges $(s, v)$, $(s,w)$, $(v,w)$, $(v,t)$, and $(w,t)$, and capacities being $1000$ on all the edges except $(v,w)$ which has a capacity of $1$. What will happen if we always choose an augmenting path going through the capacity $1$ edge? How does it compare to making a “smart” choice of the augmenting path?
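To make the example concrete, the following sketch (illustrative code, not from the lecture) simulates both path-selection strategies on the diamond graph and counts augmentations:

```python
def augment(cap, path, amount):
    """Push `amount` units along `path`, updating residual capacities."""
    for a, b in zip(path, path[1:]):
        cap[(a, b)] -= amount
        cap[(b, a)] = cap.get((b, a), 0) + amount

# Residual capacities of the diamond graph: all 1000 except (v, w).
diamond = {('s', 'v'): 1000, ('s', 'w'): 1000, ('v', 'w'): 1,
           ('v', 't'): 1000, ('w', 't'): 1000}

# Bad choice: always route through the capacity-1 edge. The paths
# s-v-w-t and s-w-v-t alternate (the second uses the residual arc
# (w, v) created by the first), each pushing a single unit of flow.
bad, count = dict(diamond), 0
while True:
    if min(bad.get(e, 0) for e in [('s','v'), ('v','w'), ('w','t')]) > 0:
        augment(bad, ['s', 'v', 'w', 't'], 1)
    elif min(bad.get(e, 0) for e in [('s','w'), ('w','v'), ('v','t')]) > 0:
        augment(bad, ['s', 'w', 'v', 't'], 1)
    else:
        break
    count += 1
print(count)  # 2000 augmentations, each of value 1

# Smart choice: two direct paths with bottleneck capacity 1000 each.
good = dict(diamond)
augment(good, ['s', 'v', 't'], 1000)
augment(good, ['s', 'w', 't'], 1000)
# Only 2 augmentations reach the same maximum flow value of 2000.
```

So the number of augmentations can be proportional to the capacities (here $2000$ vs. $2$), which is exactly the pseudo-polynomial behavior discussed above.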

Maximum Bottleneck Capacity Augmenting Paths

So far, we assumed that we just find an augmenting path. But maybe there is a “smart” way to choose one?
Natural idea: Always choose an augmenting path $P$ that has maximum bottleneck capacity $u_f(P)$.

How to find one? Actually, it is quite simple. There is an easy $O(m\log n)$-time algorithm: just binary search over the capacity of the “bottlenecking” edge and, for each guess $u$, check for an $s$-$t$ path in $G_f$ after all the edges with residual capacity less than $u$ have been removed. By being (much) more sophisticated, one can get $O(\min\{ m+n \log n, m \log^*n\})$ time.
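A minimal sketch of the binary-search approach just described (the graph representation and the function name are my own choices; the search runs over the $O(m)$ distinct residual capacities, with a BFS reachability check per guess):

```python
from collections import deque

def max_bottleneck(n, edges, s, t):
    """Largest B such that an s-t path exists using only edges of
    residual capacity >= B; returns 0 if t is unreachable at all.
    `edges` is a list of (u, v, capacity) triples (illustrative format)."""
    def reachable(threshold):
        adj = [[] for _ in range(n)]
        for a, b, c in edges:
            if c >= threshold:
                adj[a].append(b)
        seen = [False] * n
        seen[s] = True
        q = deque([s])
        while q:
            x = q.popleft()
            if x == t:
                return True
            for y in adj[x]:
                if not seen[y]:
                    seen[y] = True
                    q.append(y)
        return False

    vals = sorted({c for _, _, c in edges})  # candidate bottleneck values
    best, lo, hi = 0, 0, len(vals) - 1
    while lo <= hi:                          # O(log m) reachability checks
        mid = (lo + hi) // 2
        if reachable(vals[mid]):
            best = vals[mid]                 # feasible: try a larger guess
            lo = mid + 1
        else:
            hi = mid - 1
    return best
```

On the diamond graph from before (vertices $s=0$, $v=1$, $w=2$, $t=3$), the maximum bottleneck is $1000$, realized by the path $s$-$v$-$t$ (or $s$-$w$-$t$).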

How much progress does each such augmenting path guarantee to make?

Recall: The flow decomposition bound tells us that any flow can be decomposed into at most $m$ flow paths.

$\Rightarrow$ If we apply this observation to the maximum (remaining) flow, i.e., the maximum flow in the residual graph $G_f$, together with an averaging argument, we get that at least one of these paths carries at least a $1/m$-fraction of the remaining flow value. (Note that all such paths have to be augmenting in $G_f$ by definition.)
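Spelled out, the averaging step is a routine check: let $f'$ be a maximum flow in $G_f$, decomposed into paths $P_1,\dots,P_k$ with $k\leq m$, carrying values $v_1,\dots,v_k$. Then
\[
\max_i u_f(P_i) \;\geq\; \max_i v_i \;\geq\; \frac{1}{k}\sum_{i=1}^{k} v_i \;=\; \frac{|f'|}{k} \;\geq\; \frac{|f'|}{m}.
\]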

$\Rightarrow$ Each such flow augmentation reduces the remaining flow value to at most a $(1-1/m)$ fraction of its previous value.

After $m\ln |f| +1$ augmentations, the remaining flow would be less than $1$. So, if capacities are integral, the obtained solution has to be optimal. The overall running time is
\[
O((m\log n)\cdot m \ln |f|)=O(m^2 \log n \log nU).
\]
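To double-check the bound on the number of augmentations (using $1-x\leq e^{-x}$):
\[
|f|\left(1-\frac{1}{m}\right)^{m\ln |f|} \;\leq\; |f|\, e^{-\ln |f|} \;=\; 1,
\]
so after $m\ln|f|$ augmentations the remaining flow value is at most $1$, and one more augmentation (of integral value at least $1$, since capacities are integral) brings it below $1$.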

Note: This is a “truly” polynomial running time, i.e., it is polynomial in the size of the representation of the input numbers. Still, sometimes one can ask for even more: a running time that is independent of the input numbers, i.e., one that depends only on the size of the graph itself. Such algorithms are called strongly polynomial (and the ones that are only polynomial in the size of the numbers are called weakly polynomial).

If capacities are rational, the above algorithm still has interesting properties. In particular, if $U$ is now the largest denominator/numerator appearing in the capacities, then multiplying all capacities by the product of all the denominators and applying the algorithm above gives us a running time of
\[
O((m\log n)\cdot m \ln |f|)=O(m^2 \log n \log (nU^m))=O(m^3 \log n \log (nU)),
\]
which is still (weakly) polynomial. (Note: we do not actually need to explicitly multiply all capacities here.)

From now on: Unless stated otherwise, we assume that capacities are integral.

Scaling Technique

The idea of maximum bottleneck capacity augmenting paths was to be greedy:

Get most of the solution as quickly as possible.

This leads to a quick decrease of the remaining (residual) flow value.

Another approach: Scaling.

Developed by Gabow in 1985 and also by Dinitz in 1973 (but the latter appeared only in Russian, so, as was often the case in this period, the American discovery came much later but was independent).

Start with an optimal solution to the rounded-down instance (drop the low-order bits).

Repeatedly:

Put back next dropped bit (starting from the highest to lowest order bits).

Your solution is most likely no longer optimal in this new “perturbed” instance.

“Fix up” the solution by making it optimal again.

Big benefit: Fixing up the solution often corresponds to solving the original problem in the “unit case”. (We, in a sense, reduce the general case to the unit case.)

How to apply this approach to the maximum flow problem?

Capacities are $\log U$-bit integers.

Round them down completely: Start with all capacities $0$ (Finding a maximum flow here is easy!)

Then: Roll in one bit of capacity at a time, starting with the high-order bits and working down.

After each roll in, update the current maximum flow by augmenting it to a maximum flow solution in the new graph.

Effect of rolling in one bit: all capacities doubled and then some of them additionally increase by $1$.

After we double our current flow too:

It will remain a feasible flow in our graph.

In the residual graph, all residual capacities double and then some of them increase by $1$ too.
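A tiny numeric sketch of what the roll-ins do to a single capacity (the value $13$ is just an example, not from the lecture):

```python
U = 13                      # example capacity, 1101 in binary
bits = U.bit_length()       # 4 bits to roll in
rolled = []
cap = 0                     # start fully rounded down
for i in reversed(range(bits)):       # high-order bits first
    bit = (U >> i) & 1
    cap = 2 * cap + bit     # capacity doubles, then maybe increases by 1
    rolled.append(cap)
print(rolled)  # [1, 3, 6, 13]: after the last roll-in the capacity is exact
```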

Crucial observation: Since our flow was optimal before, there must have existed an $s$-$t$ cut $S^*$ whose residual capacity $u_f(S^*)$ was zero at that point.

$\Rightarrow$ After rolling in a new bit (and doubling the flow), this cut has residual capacity at most $m$.

$\Rightarrow$ So, even if we use the basic augmenting path--based pushes, after at most $m$ such pushes, we will find the new optimum. (Note that $S^*$ does not need to be a min $s$-$t$ cut in the new graph, but it still provides an upper bound on the amount of flow that can be pushed.)

$\Rightarrow$ Running time to execute this stage: $O(m^2)$.

After $\log U$ roll-ins, all the capacities are correct, so we have computed a maximum flow in our original graph. The overall running time is thus $O(m^2 \log U)$.
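Putting the pieces together, here is a self-contained sketch of the whole scaling algorithm (the adjacency-matrix representation and the function name are my own choices, and the fix-up step is a plain BFS augmenting-path routine):

```python
from collections import deque

def scaling_max_flow(n, cap, s, t):
    """Bit-scaling max flow sketch: roll in one bit of every capacity at a
    time (high-order first), double the current flow, then fix it up with
    at most m augmenting-path pushes. `cap[x][y]` is the integral capacity
    of edge (x, y)."""
    bits = max((c.bit_length() for row in cap for c in row), default=1)
    res = [[0] * n for _ in range(n)]   # residual capacities, all-zero start

    def augmenting_path():
        """BFS for an s-t path in the residual graph; returns parent array."""
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            x = q.popleft()
            for y in range(n):
                if parent[y] < 0 and res[x][y] > 0:
                    parent[y] = x
                    if y == t:
                        return parent
                    q.append(y)
        return None

    flow = 0
    for i in reversed(range(bits)):
        # Roll in bit i: every residual capacity doubles, some grow by 1.
        for x in range(n):
            for y in range(n):
                res[x][y] = 2 * res[x][y] + ((cap[x][y] >> i) & 1)
        flow *= 2   # the doubled flow stays feasible
        # Fix up: the old zero-capacity cut now has residual capacity <= m,
        # so at most m augmentations restore optimality.
        while (parent := augmenting_path()) is not None:
            b, y = float('inf'), t
            while y != s:               # find the bottleneck along the path
                b = min(b, res[parent[y]][y])
                y = parent[y]
            y = t
            while y != s:               # push b units along the path
                res[parent[y]][y] -= b
                res[y][parent[y]] += b
                y = parent[y]
            flow += b
    return flow
```

On the diamond graph from earlier (vertices $s=0$, $v=1$, $w=2$, $t=3$), this returns the maximum flow value $2000$ after $\log U = 10$ roll-in phases.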

Note: In a sense, we took an algorithm that worked well only for unit-capacity case (i.e., basic augmenting path--based algorithm) and applied it in a fairly generic manner to a general-capacity case. (Observe though that the residual graphs after the roll-in do not need to be unit-capacity -- we only know that about the $s$-$t$ cut $S^*$.)

Side remark: The resulting algorithm is weakly polynomial. By its nature, the scaling technique is unable to deliver strongly polynomial time algorithms. (Still, in most applications, weakly polynomial time is more than ok.)