Hypothesis Testing in Hydrology: Why Falsification of Models is Still a Great Idea

by Keith Beven | Feb 28, 2018

Hydrology is an inexact science, but one that is rather important for the management of water resources, freshwater and terrestrial habitats, and environmental quality. Good management generally requires model predictions of future conditions to inform decisions. We know that such models are necessarily approximate, but how do we go about testing whether they are fit-for-purpose for the different kinds of decisions they are used to inform? If we consider models as hypotheses about how the hydrology is working, then testing models as hypotheses is one way of doing science in the inexact sciences.

But that is not really how hydrological modelling has worked in the past. Poor model results rarely get reported. They are treated instead as part of the development of a modelling study, to be improved by debugging the model code, changing the model assumptions, modifying parameter sets, or "correcting" model boundary conditions. By such learning processes we aim to gradually improve models as representations of hydrological systems, even if we do so without going through a specific hypothesis testing process. The result, however, is that we have many competing hydrological models, with different assumptions, parameterisations and numerical solution schemes, that purport to do the same thing. These have all been accepted in some sense (by the developers, by referees on papers, and by clients who use the results), but the process of testing has often been rather subjective and less than rigorous. Thus few studies report the falsification of models in particular applications.

One of the problems in developing methods for testing models as fit-for-purpose hypotheses is that classical statistical methods may not be applicable: the uncertainties in the modelling process arise not from random variability but from a lack of knowledge about inputs, about processes, and about the past observations against which model predictions can be evaluated. This can produce error characteristics that change in time and space, and interactions between input errors and model structural errors that may be impossible to disentangle. This opinion, published in WIREs Water, outlines some existing approaches for testing models and argues that this should be a focus for future research, since it is only by the rejection or falsification of current models that the science will be improved.
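To make the idea concrete, one approach of the kind discussed in this literature is a limits-of-acceptability test: a model run is treated as a hypothesis and rejected, for the purpose at hand, if its simulation falls outside the uncertainty bounds of the observations too often. The sketch below is only illustrative; the data are synthetic, and the function name and the 5% violation tolerance are assumptions made for the example, not values taken from the paper.

```python
import numpy as np

def limits_of_acceptability_test(sim, obs_lower, obs_upper, max_violation_frac=0.05):
    """Treat one model run as a hypothesis and test it against the data.

    The run is rejected (falsified for this purpose) if the simulated
    series falls outside the observation-uncertainty bounds more often
    than the chosen tolerance allows. The 5% tolerance is an assumption
    for this illustration, not a recommended value.
    """
    sim = np.asarray(sim)
    inside = (sim >= np.asarray(obs_lower)) & (sim <= np.asarray(obs_upper))
    violation_frac = 1.0 - inside.mean()   # fraction of time steps outside bounds
    return violation_frac <= max_violation_frac, violation_frac

# Illustrative use with synthetic "discharge" data (all numbers made up).
rng = np.random.default_rng(1)
obs = 2.0 + np.sin(np.linspace(0.0, 10.0, 200))   # hypothetical observed series
lower, upper = obs - 0.3, obs + 0.3               # observation error bounds
sim = obs + rng.normal(0.0, 0.2, size=obs.size)   # one candidate model run

accepted, frac = limits_of_acceptability_test(sim, lower, upper)
print(f"accepted={accepted}, fraction outside bounds={frac:.2%}")
```

Because the bounds encode lack of knowledge about the observations rather than an assumed random error model, a test of this kind sidesteps some of the classical statistical assumptions questioned above, although choosing defensible bounds and tolerances is itself part of the research problem.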


Kindly contributed by Keith Beven.
