Those who construct models, including models of the quality of the aquatic environment, are driven largely by the search for (theoretical) completeness in the products of their efforts. For if we know of something of potential relevance, and computational power is increasing, why should that something be left out? Those who use the results of such models are probably reassured by this imprimatur, of having supposedly based their decisions on the best available scientific evidence. Our models, and certainly those we would label “state-of-the-art”, seem destined always to get larger. Some observations on possible strategies for coping with this largeness, while yet making well-reasoned and adequately buttressed decisions on how to manage the water environment, are the subject of this paper. Because it is so obvious, and because it has been the foundation of analytical enquiry for so long, our point of departure is the classical procedure of disassembling the whole into its parts, with subsequent reassembly of the resulting part solutions into an overall solution. This continues to serve us well, at least in terms of pragmatic decision-making, but perhaps not in terms of reconciling the model with the field observations, i.e., in terms of model calibration. If the indivisible whole is to be addressed, and it is large, contemporary studies show that we shall have to shed our attachment to locating the single, best decision and be satisfied instead with having identified a multiplicity of acceptably good possibilities. If, in the face of an inevitable uncertainty, there is then a concern for reassurance regarding the robustness of a specific course of action (chosen from among the good possibilities), significant recent advances in the methods of global (as opposed to local) sensitivity analysis are indeed timely.
Ultimately, however, no matter how large and seemingly complete the model, whether we trust its output depends strongly on whether that output tallies with our mental image of the given system's behaviour. The paper argues that this largeness must therefore be pruned through the application of appropriate methods of model simplification: procedures aimed directly at promoting the generation, corroboration, and refutation of high-level conceptual insights and understanding. The paper closes with a brief discussion of two aspects of the role of field observations in evaluating a (large) model: quality assurance of that model in the absence of any data; and the previously somewhat underestimated challenge of reconciling large models with high-volume data sets.