This question isn't strictly in CL's domain; if moderators find a better place for it, please move it. So, the wiki article and many examples of gradient descent suggest that the way to find the function that reaches the local minimum fastest is to approach it iteratively, hoping it converges after *enough* tries. This feels icky to me, since it means there will always be some error, and it looks like a rather inefficient way to solve the problem.
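To make the iterative approach concrete, here is a minimal sketch of gradient descent, assuming a simple one-dimensional function f(x) = (x - 3)^2 whose derivative 2*(x - 3) we supply by hand (the function, step size, and step count are illustrative, not anything from the question):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient; the answer is only approximate."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose derivative is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

After 100 steps `x_min` is very close to 3, but never exactly 3 in general, which is the residual error the question is uneasy about.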

I am convinced that there is always either a single best function or multiple equally good functions that reach the local minimum, so there must be a way to find them without any error (within the reasonable limits of how precise our computations can be).

I suspect there must be some direct calculation to find such a function, rather than repeatedly trying different hypotheses.

That's only possible if the input is described by an analytically solvable function; otherwise you must use numerical methods. Mathematically, you look for the points where the gradient of the function vanishes (a related operator is the Laplacian, which is div grad: http://en.wikipedia.org/wiki/Laplace_operator).
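For instance, when the function is a simple quadratic a*x^2 + b*x + c with a > 0, setting the derivative 2*a*x + b to zero gives the minimizer in closed form, with no iteration and no approximation error beyond floating point (a hypothetical example, not from the thread):

```python
def quadratic_minimizer(a, b):
    """Exact minimizer of a*x^2 + b*x + c; c does not affect its location."""
    if a <= 0:
        raise ValueError("need a > 0 for a minimum to exist")
    # Solve f'(x) = 2*a*x + b = 0 directly.
    return -b / (2 * a)

# f(x) = (x - 3)^2 = x^2 - 6x + 9 has its minimum exactly at x = 3.
exact_min = quadratic_minimizer(1, -6)
```

Most real objective functions have no such closed-form solution for the zero-gradient condition, which is why iterative methods like gradient descent are used at all.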

Thanks, yeah, the minimization is something that rings a bell for me. But what do you mean by "analytically solvable"? This must be a lapse in my technical vocabulary; I don't know what it means when applied to a function. Can you please explain, or link to an article?