The title is controversial, obviously both in epistemology and in hydrology. A reviewer of one of my recent papers asked us to remove any occurrence of the word validation. A coauthor of another paper asked the same. But, since I am not afraid of the word, let's agree that by it I do not mean anything ontological; rather, I use it to indicate a grid of rules that identify a process of scientific knowledge accumulation, one which could be accepted by the community of scientists as a "good practice" for doing science.
here. In this post, I summarize its main points instead.
I did not face the problem directly, but analyzed two models: the GIUH model (in its width-function version) and Topmodel (Beven and Kirkby, 1979).
What I observed is that they gained the status of good scientific models from:
- having been tested independently by different research groups from the theoretical point of view: they tested the consistency of the assumptions on which the models were built, and their formal mathematical structure (the simplicity of the algorithms involved in fact allowed multiple implementations of the theories);
- field campaigns that tested the correspondence of the models' results with measurements (with respect to hydrograph reproduction).
In the case of the GIUH, furthermore:
- the assumptions were derived from some general physical principles (e.g., minimum energy dissipation theories).
I also showed that both models are wrong with respect to some sets of measurements: tracer measurements for the GIUH, and measurements of soil moisture distributions on hillslopes for Topmodel. Nevertheless, I argue that they remain good models when applied respecting their assumptions, and for limited scopes. Their generalizations, on the other hand, are indicated by their own failures.
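To make the Topmodel case concrete: its saturation pattern is driven by the topographic index ln(a/tanβ), which predicts that cells with large contributing area and gentle slope saturate first. This is exactly the kind of spatial prediction that soil moisture measurements can falsify. A minimal sketch (the function name and the example numbers are my own, purely illustrative):

```python
import math

def topographic_index(area, slope):
    """Topmodel topographic index ln(a / tan(beta)) for one grid cell.

    `area` is the upslope contributing area per unit contour length,
    `slope` the local slope angle in radians."""
    return math.log(area / math.tan(slope))

# A flat, convergent cell gets a much higher index than a steep,
# divergent one, i.e. it is predicted to saturate first.
flat_convergent = topographic_index(area=500.0, slope=math.radians(2.0))
steep_divergent = topographic_index(area=5.0, slope=math.radians(30.0))
print(flat_convergent, steep_divergent)
```

Comparing maps of this index against measured soil moisture distributions is one way the model's assumptions have been put to the test.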
So models that can be falsified (in Popperian style) are good scientific examples, and what is validated is a shared procedure that makes our knowledge about phenomena grow. These models, in particular, remain good tools even when falsified (tools which, however, should not be abused), because they were built with simplicity in mind (remember Occam's razor), and therefore maintain a reasonable success in describing a well-defined set of use cases.
In fact, we deliberately make simplifications, and accept errors, in hydrology, with the aim of obtaining simple and fast models as opposed to complex and slow ones, in the hope that their errors cancel each other out (“You cannot deny that our universe is not a chaos; we discern in it beings, things, stuff that we name with words. These beings or things are forms, structures endowed with a certain stability; they fill a certain portion of space and perdure for a certain time”, R. Thom, 1975), because reality has built-in scales, and different laws at different scales.
When trying to reconstruct a single hydrograph at the closure of a basin, it may be reasonable to think that this cancellation of errors can happen by the very nature of the process, which collects information (water) from all around a basin and concentrates (sums) it at the outlet (as discharge). In other cases, the claim that simplifications work and that errors cancel may not be so heuristically defensible.
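The cancellation argument can be sketched with a toy Monte Carlo experiment. Assuming the errors of individual hillslope contributions are independent and zero-mean (which real errors need not be; that is precisely the assumption to be defended), summing many contributions at the outlet shrinks the relative error of the total:

```python
import random

random.seed(42)

def relative_error_of_sum(n_elements, noise=0.3, trials=2000):
    """Mean relative error of the summed discharge when each of
    n_elements contributes 1.0 plus independent Gaussian noise."""
    true_total = float(n_elements)
    errs = []
    for _ in range(trials):
        total = sum(1.0 + random.gauss(0.0, noise) for _ in range(n_elements))
        errs.append(abs(total - true_total) / true_total)
    return sum(errs) / trials

print(relative_error_of_sum(1))    # error of a single element
print(relative_error_of_sum(100))  # roughly 10x smaller: errors cancel
```

If the errors were instead correlated across the basin (e.g. a biased rainfall input), this cancellation would not occur, which is why the heuristic holds for some cases and not for others.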
In general, when building our models, we should have a clear and disenchanted vision of their limits, a theory of their errors, and an idea of the measurements (if we do not have controlled experiments) needed to falsify them. The best would be to have a theory correlating the information (of the signal) we need to reproduce with the complexity of the model needed to get it, so that we do not exaggerate with detailed descriptions of the (micro-)physics at finer scales, which are not required at the larger ones.
However, while models need to be simple, they should not be simpler than that (as A. Einstein reportedly said). Therefore we should pay attention to those phenomena which are not well described, and add complexity when needed. In turn, when adding complexity,
- any model addition should be tested independently, and not just on the basis of the benchmark quantities (like discharge) that were already investigated when testing the simpler model (i.e., if we complement a discharge model with a snow-melt model, we should first test the snow model directly and independently, and be certain (!) that snow was really present in the catchments, and that it did, in fact, melt).
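The snow-melt example can be sketched as an independent component test. The degree-day formulation, the melt factor, and the synthetic numbers below are illustrative assumptions of mine, not calibrated values; the point is that the component is checked on its own, before it ever touches the discharge benchmark:

```python
def degree_day_melt(swe_mm, temp_c, melt_factor=3.0):
    """One day of degree-day snow melt.

    Returns (melt_mm, remaining_swe_mm). Melt occurs only above
    0 C and cannot exceed the snow water equivalent in storage."""
    potential = max(0.0, melt_factor * temp_c)
    melt = min(potential, swe_mm)
    return melt, swe_mm - melt

# Independent checks: no melt below freezing, and mass is conserved.
melt, left = degree_day_melt(swe_mm=50.0, temp_c=-5.0)
assert melt == 0.0 and left == 50.0
melt, left = degree_day_melt(swe_mm=50.0, temp_c=4.0)
assert melt == 12.0 and left == 38.0  # 3.0 mm/C/day * 4 C
```

Only once such checks pass (ideally against snow observations, not just synthetic cases) does it make sense to couple the component to the rainfall-runoff model and look at discharge again.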
A further note regards the code. As more complexity is required, models become more complicated in their equations, and more and more the models' code becomes the "real thing" that is used to make prognoses (the real model). Therefore models' code should be open, and open to third-party inspection. I have discussed this issue before on this blog (e.g. http://abouthydrology.blogspot.it/2012/05/paper-in-nature-on-scientific-software.html), and I will not repeat myself here.
I finished my talk with some rhetorical polemics against those (great, indeed) hydrologists who argue restlessly about validation and uncertainty instead of trying to build better models. But that was really just for fun.