We consider a scenario in which a wireless sensor network is formed by randomly deploying $n$ sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in \cite{wanet.giridhar-kumar03fusion}), of which $\max$, $\min$, and indicator functions are important examples; our discussion is couched in terms of the $\max$ function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous: the sensors measure values simultaneously, and then collaborate to compute the function of these values and deliver it to the operator station. Computation algorithms differ in (i) the communication topology assumed, and (ii) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish scaling laws (in probability) for the time and energy complexity of distributed function computation over random wireless networks, under the assumption of centralized, contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure of one-time maximum computation. We show that, for an optimal algorithm, the computation time and energy expenditure scale as $\Theta\left(\sqrt{\frac{n}{\log n}}\right)$ and $\Theta(n)$, respectively, as the number of sensors $n \rightarrow \infty$. Second, we analyze the performance of three specific computation algorithms that may be used in practical situations, namely, the Tree algorithm, Multi-Hop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for their computation time and energy expenditure as $n \rightarrow \infty$. 
In particular, we show that the computation time for these algorithms scales as $\Theta\left(\sqrt{n \log n}\right)$, $\Theta(n)$, and $\Theta\left(\sqrt{n \log n}\right)$, respectively, whereas the energy expended scales as $\Theta(n)$, $\Theta\left(n \sqrt{\frac{n}{\log n}}\right)$, and $\Theta\left(n \sqrt{n \log n}\right)$, respectively. Finally, simulation results show that our analysis indeed captures the correct scaling; the simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence our results can be viewed as bounds on the performance achievable with practical distributed schedulers.
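To fix ideas, the Tree algorithm aggregates the maximum along a spanning tree rooted at the node attached to the operator station, with each node forwarding one partial result per tree edge. The following is a minimal, purely illustrative Python sketch of such a convergecast; the node identifiers, tree structure, and sensor readings are hypothetical, and the sketch deliberately ignores the scheduling and radio-interference constraints that are central to the scaling analysis above.

```python
# Illustrative sketch (not the paper's algorithm as stated): one round of
# convergecast max computation on a spanning tree of a sensor network.

def tree_max(children, readings, root):
    """Compute the max of all sensor readings by aggregating up the tree.

    children : dict mapping each node to the list of its child nodes
    readings : dict mapping each node to its sensed value
    root     : the node attached to the operator station
    """
    def aggregate(node):
        # Each node sends up the max of its own reading and the partial
        # results received from its children -- one short message per edge.
        value = readings[node]
        for child in children.get(node, []):
            value = max(value, aggregate(child))
        return value

    return aggregate(root)

# Hypothetical 7-node tree rooted at node 0, with hypothetical readings.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
readings = {0: 3.1, 1: 4.7, 2: 2.2, 3: 9.8, 4: 1.5, 5: 6.4, 6: 0.9}
print(tree_max(children, readings, 0))  # -> 9.8
```

In this sketch every node transmits exactly once per computation, which is why the energy expenditure of tree-based aggregation grows linearly in the number of nodes; the computation time, by contrast, is governed by how many of these transmissions can be scheduled concurrently.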