For example, let's say the problem is: What is the square root of 3 (to x bits of precision)?

One way to solve this is to choose a random real number less than 3 and square it.

1.40245^2 = 1.9668660025
2.69362^2 = 7.2555887044
...

Of course, this is a very slow process. Newton-Raphson gives the solution much more quickly. My question is: Is there a problem for which this process is the optimal way to arrive at its solution?
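The contrast between the two approaches can be sketched in a few lines of Python. Both function names and the bit-precision interface are my own illustration, not anything standard; the guessing version is deliberately naive:

```python
import random

def sqrt_by_guessing(target, bits):
    """Guess-and-check: draw random reals less than `target` and square
    them until one squares to `target` within 2**-bits."""
    tol = 2.0 ** -bits
    while True:
        guess = random.uniform(0.0, target)
        if abs(guess * guess - target) < tol:
            return guess

def sqrt_newton(target, bits):
    """Newton-Raphson for the same problem: each iteration roughly
    doubles the number of correct bits."""
    x = target
    while abs(x * x - target) >= 2.0 ** -bits:
        x = 0.5 * (x + target / x)
    return x
```

Even at modest precision the difference is dramatic: Newton-Raphson needs a handful of iterations, while the expected number of random guesses grows like $2^x$ in the number of bits $x$.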

I should point out that information gained from each guess cannot be used in future guesses. In the square root example, the next guess could otherwise be biased by the knowledge of whether the square of the number just checked was less than or greater than 3.

4 Answers

There are certainly problems where a brute force search is quicker than trying to remember (or figure out) a smarter approach. Example: Does 5 have a cube root modulo 11?
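For a search space this small, the brute-force check is a one-liner. A sketch (the helper name is mine; note that $3^3 = 27 \equiv 5 \pmod{11}$, so the answer is yes):

```python
def cube_roots_mod(a, n):
    """Brute force: return every x in 0..n-1 with x**3 congruent to a mod n."""
    return [x for x in range(n) if pow(x, 3, n) == a]

# Does 5 have a cube root modulo 11?
print(cube_roots_mod(5, 11))  # → [3], since 3**3 = 27 = 2*11 + 5
```

Trying all eleven residues takes far less effort than recalling, say, the theory of power residues, even though that theory would also settle the question.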

An example of a slightly different nature is this recent question where an exhaustive search of the (very small) solution space saves a lot of grief and uncertainty compared to attempting to perfect a "forward" argument.

A third example: NIST is currently running a competition to design a next-generation cryptographic hash function. One among several requirements for such a function is that it should be practically impossible to find two inputs that map to the same output (a "collision"), so anyone who finds a collision, by any method, automatically disqualifies a proposal. One of the entries was built on cellular automata, and its submitter no doubt thought it would be a good idea because there is no nice known way to run a general cellular automaton backwards. The submission, however, fell within days to (what I think must have been) a simple guess-and-check attack -- it turned out that there were two different one-byte inputs that hashed to the same value. Attempting to construct a complete theory that would allow one to derive a collision in an understanding-based way would have been much more difficult than just seeing where some initial aimless guessing takes you.
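The flavour of such an attack is easy to reproduce on a toy scale. The sketch below (my own illustration, not the actual broken submission) plays the role of a weak hash by truncating SHA-256 to a few bits, then hashes inputs until two of them collide -- by the birthday bound this takes roughly $2^{b/2}$ guesses for a $b$-bit output:

```python
import hashlib

def find_collision(bits=16):
    """Guess-and-check collision search: hash successive byte strings,
    remembering each truncated digest, until two distinct inputs share
    one.  Truncated SHA-256 stands in for a weak hash function."""
    nbytes = bits // 8
    seen = {}
    i = 0
    while True:
        msg = i.to_bytes(8, "big")
        tag = hashlib.sha256(msg).digest()[:nbytes]
        if tag in seen:
            return seen[tag], msg  # two inputs, same truncated digest
        seen[tag] = msg
        i += 1
```

For a 16-bit output a collision turns up after a few hundred guesses on average; no understanding of the hash's internals is needed.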

Pure guess and check? Probably not; there's always some amount of mathematical cleverness to reduce the problem. But many famous problems, such as the four colour theorem, were ultimately solved by checking a large number of test cases. Solving non-linear differential equations, when they can be solved at all, also amounts to a lot of guess and check.

What you're describing sounds pretty close to (but not exactly the same as) the definition of a one-way function. It is not known whether one-way functions exist, but there are several functions conjectured to be one-way.

In particular, one of the properties expected of a secure cryptographic hash function $H$ is that, whereas computing $x = H(m)$ for a given message $m$ should be simple and efficient, there should be no way of finding an $m$ that solves the equation for a given arbitrary $x$ that would be substantially easier than just trying random messages until one gives the correct output. (Also, it is expected that actually carrying out such a brute force search would not be practical with any conceivable amount of computing power.)
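A toy version of that brute-force preimage search makes the point concrete. The sketch below (my own illustration) again truncates SHA-256 so the search actually terminates; with the full 256-bit output the same loop would be hopeless:

```python
import hashlib

def brute_force_preimage(target, bits=16):
    """Try messages 0, 1, 2, ... until one whose truncated SHA-256
    digest equals `target`.  Expected work is about 2**bits trials,
    feasible here only because the output is truncated."""
    nbytes = bits // 8
    i = 0
    while True:
        msg = i.to_bytes(8, "big")
        if hashlib.sha256(msg).digest()[:nbytes] == target:
            return msg
        i += 1
```

For a conjectured one-way function, this exhaustive loop is believed to be essentially the best attack available, which is exactly the "guess and check is optimal" situation the question asks about.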

Let us interpret "Guess and Check" broadly, and attempt a mathematical answer.

There are quite a few instances where a Randomized Algorithm works on average faster than any known non-randomized algorithm. In a small number of cases, one can even prove that the average performance of a certain randomized algorithm is better than that of any deterministic algorithm.
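A classic example of the first kind is Freivalds' algorithm for verifying a matrix product: a few rounds of random-vector checks run in $O(n^2)$ time per round, whereas the obvious deterministic check recomputes the product in $O(n^3)$. A minimal sketch (plain lists of lists, no external libraries):

```python
import random

def freivalds(A, B, C, rounds=20):
    """Freivalds' randomized check of whether A @ B == C for square
    n x n matrices.  Each round picks a random 0/1 vector r and tests
    A(Br) == Cr in O(n^2) time; if A @ B != C, a single round misses
    the discrepancy with probability at most 1/2, so `rounds` rounds
    err with probability at most 2**-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught a mismatch: definitely A @ B != C
    return True  # probably A @ B == C
```

Here the "guessing" (the random vectors) is provably cheaper than any obvious deterministic verification, which is the spirit of the claim above.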