What I refer to as counting is the problem of determining the number of solutions of a function. More precisely, given a
function $f:N\to \{0,1\}$ (not necessarily black-box), approximate
$\#\{x\in N\mid f(x)= 1\}= |f^{-1}(1)|$.
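For concreteness, the brute-force baseline just evaluates $f$ on every element of the domain; a minimal sketch (the function names here are illustrative, not from any particular source):

```python
# Brute-force counting sketch: given a finite domain N and a predicate f,
# count |f^{-1}(1)| by evaluating f on every element of N.
def count_solutions(domain, f):
    return sum(1 for x in domain if f(x) == 1)

# Example: count x in {0, ..., 99} divisible by 7 (a toy choice of f).
print(count_solutions(range(100), lambda x: 1 if x % 7 == 0 else 0))  # 15
```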

I am looking for algorithmic problems which involve some sort of
counting and for which the time complexity is greatly influenced by
this underlying counting problem.

Of course, I am looking for problems that are not counting problems themselves. And it would be greatly appreciated if you could provide references for these problems.

6 Answers

This is a followup to Suresh's answer. As he says, there are lots of construction problems in computational geometry where the complexity of the output is a trivial lower bound on the running time of any algorithm. For example: planar line arrangements, 3-dimensional Voronoi diagrams, and planar visibility graphs all have combinatorial complexity $\Theta(n^2)$ in the worst case, so any algorithm that constructs those objects trivially requires $\Omega(n^2)$ time in the worst case. (There are $O(n^2)$-time algorithms for all three of those problems.)

But similar constraints are conjectured to apply to decision problems as well. For example, given a set of n lines in the plane, how easily can you check whether any three lines pass through a common point? Well, you could build the arrangement of the lines (the planar graph defined by their intersection points and the segments between them), but that takes $\Theta(n^2)$ time. One of the main results of my PhD thesis was that within a restricted but natural decision tree model of computation, $\Omega(n^2)$ time is required to detect triple intersections. Intuitively, we must enumerate all $\binom{n}{2}$ intersection points and look for duplicates.
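The enumerate-and-look-for-duplicates intuition can be sketched directly: compute all pairwise intersection points exactly and check whether any point repeats. (This is a naive $O(n^2)$ check with exact rational arithmetic, not the decision-tree model discussed above; all names are illustrative.)

```python
from fractions import Fraction
from itertools import combinations

# Lines are integer triples (a, b, c) representing a*x + b*y = c.
# Enumerate every pairwise intersection point; if the same point appears
# for two different pairs, three lines pass through a common point.
def has_triple_intersection(lines):
    seen = set()
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:          # parallel (or identical) lines: no unique point
            continue
        x = Fraction(c1 * b2 - c2 * b1, det)   # exact rational coordinates,
        y = Fraction(a1 * c2 - a2 * c1, det)   # so duplicates compare exactly
        if (x, y) in seen:
            return True
        seen.add((x, y))
    return False

# The lines x = 0, y = 0, and x - y = 0 all pass through the origin:
print(has_triple_intersection([(1, 0, 0), (0, 1, 0), (1, -1, 0)]))  # True
```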

Similarly, there are sets of $n$ numbers in which $\Theta(n^2)$ triples of elements sum to zero. Therefore, any algorithm (modeled by a certain class of decision trees) that tests whether a given set contains three elements summing to zero requires $\Omega(n^2)$ time. (It's possible to shave off some logs via bit-level parallelism, but whatever.)
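For reference, the standard $O(n^2)$ hash-based 3SUM test looks like this (a sketch assuming distinct input values; handling repeated values needs extra index bookkeeping):

```python
# O(n^2) 3SUM check: for each pair (a, b), look up -(a + b) in a hash set
# of the input. Assumes the input values are distinct.
def has_three_sum_zero(nums):
    values = set(nums)
    arr = list(nums)
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            target = -(arr[i] + arr[j])
            # the third element must be distinct from both pair elements
            if target in values and target != arr[i] and target != arr[j]:
                return True
    return False

print(has_three_sum_zero([-5, 1, 4, 10]))  # True: -5 + 1 + 4 == 0
print(has_three_sum_zero([1, 2, 3]))       # False
```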

Another example, also from my thesis, is Hopcroft's problem: given $n$ points and $n$ lines in the plane, does any point lie on any line? The worst-case number of point-line incidences is known to be $\Theta(n^{4/3})$. I proved that in a restricted (but still natural) model of computation, $\Omega(n^{4/3})$ time is required to determine whether there is even one point-line incidence. Intuitively, we must enumerate all $\Theta(n^{4/3})$ near-incidences and check each one to see whether it's really an incidence.
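The naive baseline for Hopcroft's problem simply tests every point against every line in $O(n^2)$ time, far above the $\Theta(n^{4/3})$ bound (a sketch with illustrative names; lines are given as integer triples $ax + by = c$ so the incidence test is exact):

```python
# Brute-force incidence check for Hopcroft's problem: does any point
# (px, py) lie on any line a*x + b*y = c? With integer data the test
# is exact. This is the trivial O(n^2) baseline.
def has_incidence(points, lines):
    return any(a * px + b * py == c
               for (px, py) in points
               for (a, b, c) in lines)

# (3, 4) lies on the line x + y = 7:
print(has_incidence([(1, 2), (3, 4)], [(1, 1, 7), (2, -1, 5)]))  # True
```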

Formally, these lower bounds are still just conjectures, because they require restricted models of computation, specialized to the problem at hand (especially for Hopcroft's problem). However, proving lower bounds for these problems in the RAM model is likely just as hard as any other lower-bound problem (i.e., we have no clue); see the SODA 2010 paper by Patrascu and Williams relating generalizations of 3SUM to the exponential time hypothesis.

I am not completely sure if this is what you mean, but there are a bunch of problems that don't seem like counting problems, yet the best way we know how to solve them is to count objects. One such problem is detecting whether a graph contains a triangle. The fastest known algorithm computes the trace of the cube of the adjacency matrix, which equals 6 times the number of triangles in the (undirected) graph. This takes $O(|V|^{2.376})$ time using the Coppersmith–Winograd matrix multiplication algorithm, and was first observed by Itai and Rodeh in 1978. Similarly, the best way we know to detect a $k$-clique is to count the number of $k$-cliques, again via matrix multiplication.
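The trace-of-the-cube idea is easy to sketch (using numpy's dense product as a stand-in for a fast matrix multiplication routine; names are illustrative):

```python
import numpy as np

# Triangle counting via trace(A^3): each triangle in an undirected graph
# contributes 6 closed walks of length 3 (3 starting vertices x 2
# directions), so #triangles = trace(A^3) / 6.
def count_triangles(adj):
    A = np.array(adj)
    A3 = A @ A @ A
    return int(np.trace(A3)) // 6

# A 4-cycle 0-1-2-3 with the chord 0-2 has exactly 2 triangles.
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [1, 0, 1, 0]]
print(count_triangles(adj))  # 2
```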

Valiant proved that computing the permanent of a matrix is complete for #P; see the Wikipedia page on the permanent. #P is the complexity class corresponding to counting the number of accepting paths of an NP machine.
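To make the contrast with the determinant concrete, here is the naive permanent, i.e., the determinant expansion without the alternating signs (a brute-force sketch, exponential in $n$; for a 0/1 biadjacency matrix it counts the perfect matchings of the bipartite graph):

```python
from itertools import permutations

# Naive permanent: sum over all permutations of the product of the
# selected entries, like the determinant but with no signs. Computing
# this is #P-complete, unlike the polynomial-time determinant.
def permanent(M):
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += prod
    return total

# The complete bipartite graph K_{2,2} has 2 perfect matchings:
print(permanent([[1, 1], [1, 1]]))  # 2
```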

Bipartite Planar (and log-genus) Perfect Matching is a problem where Kasteleyn's algorithm for counting planar matchings (extended by Galluccio and Loebl, and
parallelized by Kulkarni, Mahajan & Varadarajan) plays an important role even in
the search version of the problem. All relevant references can be found in the following paper:

I'll take "greatly influenced" as a soft constraint rather than as a reduction. In that sense, MANY problems in computational geometry have running times that are bounded by some combinatorial structure underlying them. For example, the complexity of computing an arrangement of shapes is directly linked to the intrinsic complexity of such arrangements.

Another, topical example of this is that various problems in point pattern matching have running times that boil down to estimating quantities like the number of repeated distances in a point set, and so on.

Not sure if this is what you were looking for, but phase transitions of NP-complete problems rely heavily on probabilistic arguments, which are just another form of counting.

LLL has been used to solve some 'low-density' Subset Sum problems; its success relies on the existence, with high probability, of short lattice vectors that meet the criteria of being a Subset Sum solution. Survey Propagation relies on the structure of the solution space (and on the number of solutions as it fixes variables) to find solutions near the critical threshold.

Borgs, Chayes and Pittel have pretty much completely characterized the phase transition of the (Uniform) Random Number Partition Problem and thus have characterized how many solutions one can expect for a given (random) instance of the Number Partition Problem.
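As a toy illustration of "how many solutions one can expect", here is a brute-force counter for perfect partitions, i.e., subsets whose sum is exactly half the total (illustrative names, exponential running time; the cited results characterize the expected value of this count on random instances):

```python
from itertools import combinations

# Count perfect partitions of nums: subsets S with sum(S) == sum(rest).
# Each partition {S, complement} is found twice (once per side), so we
# divide the subset count by 2.
def count_perfect_partitions(nums):
    total = sum(nums)
    if total % 2:               # odd total: no perfect partition exists
        return 0
    half = total // 2
    n = len(nums)
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(nums[i] for i in subset) == half:
                count += 1
    return count // 2

print(count_perfect_partitions([3, 1, 1, 2, 2, 1]))  # 5
```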