Saturday, April 30, 2011

The JGrass-NewAge system for forecasting and managing the hydrological budgets at the basin scale: the models of flow generation, propagation, and aggregation

A few years ago, I felt the necessity to build a less distributed model than GEOtop but, at the same time, less lumped than my Peakflow model based on the GIUH theory. The model had to follow the new informatics envisioned in the GEOFRAME talk (see one of my first posts for reference, and the post on adopting OMS3). The occasion was some financial support coming from the Adige river basin Authority. That support not only started the JGrass-NewAGE model, but also prompted the migration of JGrass to the Eclipse Rich Client Platform and the implementation of a Postgres/PostGIS database suited to contain a digital watershed model.

The first implementation of the model was based on OpenMI but, as explained a couple of posts ago, we migrated to the OMS3 platform, and the second implementation of the model can now be found here.

The paper I am introducing talks about the rainfall-runoff core of JGrass-NewAGE and presents a discussion of its predictive capacity. The model focuses on the hydrological balance of medium- to large-scale basins, and considers statistics of the processes at the hillslope scale. The whole modeling system consists of six main parts: (i) estimation of the energy balance; (ii) estimation of evapotranspiration; (iii) snow modelling; (iv) estimation of runoff production; (v) aggregation and propagation of flows in channels; and (vi) description of intakes, out-takes, and reservoirs. This paper details the processes of runoff production and of aggregation/propagation of flows on a river network. The system is based on a hillslope-link geometrical partition of the landscape, so the basic unit where the budget is evaluated consists of hillslopes that drain into a single associated link, rather than cells or pixels. To this conceptual partition corresponds an implementation of informatics that uses vectorial features for channels and raster data for hillslopes. Runoff production at each channel link is estimated through a combination of the Duffy (1996) model and a GIUH model for estimating residence times on hillslopes. Routing in channels uses equations integrated over each channel link, and produces discharges at the end of every link in the river network. The model has been tested against measured discharges according to some indexes of goodness of fit such as RMSE and Nash-Sutcliffe. The characteristic ability to reproduce discharge at any point of the river network is used to infer some statistics, notably the scaling properties of the modeled discharge.
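As a side note, the two goodness-of-fit indexes mentioned above are easy to compute. The sketch below is not taken from the paper or from the JGrass-NewAGE code (which is Java/OMS3-based); it is a minimal illustration of the RMSE and Nash-Sutcliffe formulas applied to a hypothetical pair of observed and simulated discharge series:

```python
import numpy as np

def rmse(observed, simulated):
    """Root mean square error between observed and simulated series."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better a predictor than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(1.0 - np.sum((simulated - observed) ** 2)
                     / np.sum((observed - observed.mean()) ** 2))

# Hypothetical discharge series (m^3/s) at one link end, for illustration only
q_obs = [12.0, 15.0, 30.0, 22.0, 14.0]
q_sim = [11.0, 16.0, 28.0, 24.0, 13.0]

print(rmse(q_obs, q_sim))            # ~1.483
print(nash_sutcliffe(q_obs, q_sim))  # ~0.950
```

In a model like this one, which produces a discharge series at every link, such indexes can be evaluated wherever a gauge is available, not only at the basin outlet.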

The full paper is available at the GMDD site. Any comment from you is welcome.

Tuesday, April 26, 2011

Why did you not choose a gauged basin?

This is a question that reviewers very often raise when commenting on my papers, which most of the time have a conceptual, if not theoretical, attitude on topics where field researchers have dominated the scene in past years. I am sorry: that is what I am, inclined to be theoretical.

The question is indeed a good one, but the answer is not trivial. Its converse is: why do you experimentalists not use sound theoretical work to support your measurements?

As a matter of fact, if our science pretends to be a physical science, experiments are simply necessary and fundamental. Even if, as I wrote in the past, often we "observe events" rather than "design sound controlled experiments," as Galilei would have required (simply because this is not possible in relevant cases of our science).

However, let's assume I have finally done an experiment (and I did some in my life): what would the reviewers ask me?

She would ask about the setting of the experiment. He would ask about the calibration of the instruments, and which instruments were used. They would require decent statistical inquiries into the results, performing consistency tests on them. But many times, the simple report of the measurement effort (especially if considered massive and difficult) would be considered valuable enough to get a paper published.

There is clearly no "par condicio" (equal conditions) in this attitude. Reviewers will not ask of an experimentalist anything other than that her work be internally consistent and, obviously, that it bring new evidence or confirmation of something in a matter of interest (of course!).
Inverting the roles, experimentalists are not required to produce a sound physical theory of their findings. Even so, they should at least look at the work of the more gifted modelers to support their statements quantitatively, and not be allowed to build carelessly on qualitative (in the sense of poorly quantitative) and subjective arguments, or on poor mathematics.

For instance, I am really tired of seeing, in field works at the edge of geomorphology and hydrology, experiments where data are interpreted with homogeneous soil characteristics, with very roughly approximated hydraulic conductivities, and, when real measurements are performed, without any attempt to assess error bounds, with unspecified instrument calibration (even when it is known that the instruments have highly nonlinear responses), and interpreted with art but on the premises of fundamentally flawed models. (Because, hidden or not, written in words or formulas, any interpretation is a model.)

On the other side, when, with collaborators, I could use data from highly advertised field experiments, I could often perceive the indefiniteness of some of their aspects, and, faced with data unable to survive any systematic consistency analysis with regard to the delicate aspects we were investigating (almost unknown, by the very definition of our work), I was several times overcome by frustration and disappointment.

However, to be frank, if I had to choose, I believe the current attitude of tolerance toward experimental works is correct. Without any tolerance, no paper would be published or written, waiting for the ultimate one where everything is performed properly, the theory is sound, and its explanation is crystal clear even to dummy minds. This would definitely block the development of any science, and I prefer the seed of a good idea inside a sea of garbage to no idea at all. ("How many good ideas can you believe to have in a lifetime, Riccardo?" my masters used to say. "One, two, maybe three, if you are really good.")

But theoretical and conceptual work should be judged with the same attitude.

I think that, after debate and delicate scrutiny, it would be better to let the community decide what is important or not. Otherwise, the most interesting papers could be eliminated from the literature, while the most orthodox ones (and "false" ones: modern Ptolemaic models) naturally proliferate and constitute an overwhelming bunch of "not even wrong" contributions. Therefore I vote for controversial but provocative papers to be published. I believe it is better to be wrong than nothing.

This attitude also exposes us to other risks: that some groups of researchers, for one reason or another, and often in good faith, negatively influence sectors of a discipline, killing new ideas (which are usually confused) and merely reinforcing existing paradigms. But this is another story.


P.S. - Another frequent statement refers to things that should have been done or attempted in order to have the paper accepted. Not infrequently these questions are of the type:

Please, could you find the sense of life?

Clearly a few words that can imply the involvement of many full lifetimes of research ... without success, as history teaches. Please, my good old sweet reviewers, give me a break. Why do you do this to me?