## Tuesday, March 13, 2018

### A short introduction to precipitation and precipitation statistics

I am sharing here the videos of my lectures, in Italian, about precipitation. They were recorded during today's Hydrology class, whose main site is here. More material on precipitation can be found in this old post.

Precipitation: a short introduction

Statistical properties of precipitation on the ground

The concept of return period
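As a small numerical illustration of the return period concept (not part of the lectures): if $F$ is the non-exceedance probability of the annual maximum, the return period is $T = 1/(1-F)$. A minimal Python sketch:

```python
def return_period(non_exceedance_prob):
    """Return period T (years) from the annual non-exceedance probability F."""
    return 1.0 / (1.0 - non_exceedance_prob)

def non_exceedance(return_period_years):
    """Inverse relation: F = 1 - 1/T."""
    return 1.0 - 1.0 / return_period_years

# An event whose annual maximum is not exceeded in 99% of years
# has a return period of (approximately) 100 years.
print(return_period(0.99))
print(non_exceedance(50.0))
```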

### Some statistics of extreme precipitation

These are the lectures regarding the interpolation of Intensity-Duration-Frequency (IDF) curves to extreme rainfall data. A little theory is covered, which is subsequently used to interpolate some data sets in practice. These lectures are part of the class of Hydraulic Constructions and Hydrology held at the University of Trento.

Intensity-duration-frequency curves definition

The Gumbel distribution

Moments method

Maximum likelihood

Minimum squares
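As a hedged sketch of how the moments method listed above can be applied to a Gumbel distribution (the rainfall data below are invented, for illustration only): the scale is $\beta = s\sqrt{6}/\pi$, the location is $\mu = \bar{x} - \gamma\beta$ with $\gamma \approx 0.5772$ the Euler–Mascheroni constant, and the $T$-year quantile is $x_T = \mu - \beta \ln(-\ln(1 - 1/T))$.

```python
import math

def gumbel_moments_fit(sample):
    """Estimate Gumbel (location mu, scale beta) with the method of moments."""
    n = len(sample)
    mean = sum(sample) / n
    # unbiased sample standard deviation
    std = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    beta = std * math.sqrt(6.0) / math.pi
    mu = mean - 0.5772156649 * beta
    return mu, beta

def gumbel_quantile(mu, beta, T):
    """Rainfall depth with return period T years: inverse of the Gumbel CDF."""
    F = 1.0 - 1.0 / T
    return mu - beta * math.log(-math.log(F))

# Invented annual maxima of daily rainfall (mm), for illustration only
annual_maxima = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 58.6, 49.9, 40.7]
mu, beta = gumbel_moments_fit(annual_maxima)
for T in (10, 50, 100):
    print(T, round(gumbel_quantile(mu, beta, T), 1))
```

The same parameters can then be estimated, for each rainfall duration, to interpolate the IDF curves; maximum likelihood and least squares would replace `gumbel_moments_fit` with the corresponding estimators.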

## Thursday, March 8, 2018

### Open Science Frameworks 4 Italians

For the benefit of my students, I prepared some slides and gave a brief introduction to the Open Science Framework in Italian, which can be of help to anyone.

The slides can be found here.

Using the slides, I gave this talk.

I also made a short practical presentation.

They obviously do not substitute for the much more comprehensive YouTube material in English.

## Wednesday, March 7, 2018

### Water viscosity

“Viscosity is a property of the fluid which opposes the relative motion between the two surfaces of the fluid that are moving at different velocities. In simple terms, viscosity means friction between the molecules of fluid. When the fluid is forced through a tube, the particles which compose the fluid generally move more quickly near the tube's axis and more slowly near its walls; therefore some stress (such as a pressure difference between the two ends of the tube) is needed to overcome the friction between particle layers to keep the fluid moving.” (Source Wikipedia)

One relevant point for us is that water viscosity changes with temperature in a non-negligible way between -10 and 40 °C, a temperature range that many soils cross easily across the seasons: this table shows how much. A model for water viscosity over a large range of temperatures is given by Kestin et al. [1978], which can be used in models.
Viscosity is actually so important that an entire website is dedicated to its experimental values: viscopedia. The same information can also be read on this other informative website about water as a substance.
Viscosity variation is usually neglected in hydrological modelling, and the fact that water travels (at least) twice as fast in summer as in winter is usually forgotten in models of runoff production. It is probably time we incorporated such effects in our modelling of infiltration and, for what concerns me, in our numerical integrator of the Richards equation.
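To give an order of magnitude (a sketch under assumptions, not the Kestin et al. model itself), a common empirical correlation for the dynamic viscosity of liquid water, $\mu(T) \approx 2.414 \cdot 10^{-5} \cdot 10^{247.8/(T-140)}$ Pa·s with $T$ in kelvin, reproduces tabulated values within a few percent between 0 and 40 °C:

```python
def water_viscosity(T_kelvin):
    """Dynamic viscosity of liquid water in Pa*s (empirical correlation,
    accurate to a few percent for liquid water near ambient temperatures)."""
    return 2.414e-5 * 10.0 ** (247.8 / (T_kelvin - 140.0))

mu_winter = water_viscosity(278.15)  # 5 degC, a plausible winter soil temperature
mu_summer = water_viscosity(303.15)  # 30 degC, a plausible summer soil temperature
print(mu_winter, mu_summer, mu_winter / mu_summer)
# the ratio is close to 2: hydraulic conductivity, which scales as 1/mu,
# nearly doubles from winter to summer
```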
When dealing with infiltration in hydrologically realistic contexts, papers by Constantz and coworkers are a standard reference, starting from Constantz [1981], Ronan et al. [1998], and Constantz and Murphy [1991]. Papers citing them are also interesting (here the Scopus list) and cover quite recent works too. We can identify two issues (the usual ones): first, it is necessary to understand how viscosity variation affects the flow equations; second, how these variations act across a heterogeneous landscape.
Grifoll et al. (2005), in analysing the problem of water vapour transport, contains the right equations (whether or not you like their solutions) and can be of help in writing yours.
A related question is whether temperature also alters the soil water retention curves. This problem is faced in a recent paper by Roshani and Sedano [2016], but it clearly remains an open problem.

I have not really started reading these papers yet. However, here is their list below.

## Monday, March 5, 2018

### Probability and Statistics basics: a very short simple overview of concepts for my students

These lectures serve both the class of Hydrology and that of Hydraulic Constructions, which share the need to talk a little about statistics. In four steps I cover simple concepts of statistics and probability: very basic material to remind my students of what they should already know. Probably in the second series of slides I performed better.

Samples, Population, empirical distributions

Same topic as above but different class

Introduction to visual statistics, location and scale parameters.

Same topic as above but different class

Probability axioms and some derived concepts visualised

Same topic as above, different class

Acting with Real numbers

Almost the same as above but with a couple more slides
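As a minimal companion to the slides on location and scale parameters (the data are invented, for illustration only), the basic sample statistics can be computed with the Python standard library:

```python
import statistics

sample = [2.3, 3.1, 2.8, 4.0, 3.6, 2.9, 3.3]  # invented data

# location parameters
mean = statistics.mean(sample)
median = statistics.median(sample)

# scale parameters
std = statistics.stdev(sample)           # sample standard deviation
q = statistics.quantiles(sample, n=4)    # quartiles: [Q1, Q2, Q3]
iqr = q[2] - q[0]                        # interquartile range

print(mean, median, std, iqr)
```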

## Sunday, March 4, 2018

### Random sampling (is it defined in probability theory?)

Truly random numbers are not easily obtainable (if they exist at all). The short story: I perceive that probability theory does not contemplate the concept of random sampling. Randomness is used by probability theory, but it is not implied by its axioms.

Random literally means that there is no law (expressed in equations) or algorithm (expressed in actions or some programming code) that connects one pick in the sequence to another. The elements in the sequence can depend on one another (as described by their correlation), but this dependence does not imply causation (in the sense that one implies the other): "correlation does not imply causation".
Taking the problem from a different perspective, Judea Pearl stresses that probability is about "association", not "causality" (which is, in a sense, the reverse of randomness): "An associational concept is any relationship that can be defined in terms of a joint distribution of observed variables, and a causal concept is any relationship that cannot be defined from the distribution alone. Examples of associational concepts are: correlation, regression, dependence, conditional independence, likelihood, collapsibility, propensity score, risk ratio, odds ratio, marginalization, conditionalization, controlling for," and so on.
"Examples of causal concepts are: randomization, influence, effect, confounding, 'holding constant,' disturbance, spurious correlation, faithfulness/stability, instrumental variables, intervention, explanation, attribution," and so on. "The former can, while the latter cannot be defined in terms of distribution functions." He also writes: "Every claim invoking causal concepts must rely on some premises that invoke such concepts; it cannot be inferred from, or even defined in terms of, statistical associations alone."
Therefore Pearl, at least in the sense that the elements of a random sequence are not causally related, supports the idea that if probability is not about causality, it is not about randomness either.

Wikipedia also supports my argument: "Axiomatic probability theory deliberately avoids a definition of a random sequence [2]. Traditional probability theory does not state if a specific sequence is random, but generally proceeds to discuss the properties of random variables and stochastic sequences assuming some definition of randomness. The Bourbaki school considered the statement 'let us consider a random sequence' an abuse of language [3]."
The same Wikipedia page explains very clearly the state of the art of the randomness concept but, for a more interested reader, the educational review paper by Volchan [4] is certainly informative.

I report from Wikipedia the current state of the art for the extraction of random sequences:
"Three basic paradigms for dealing with random sequences have now emerged [5]:
•   The frequency / measure-theoretic approach. This approach started with the work of Richard von Mises and Alonzo Church. In the 1960s Per Martin-Loef noticed that the sets coding such frequency-based stochastic properties are a special kind measure zero sets, and that a more general and smooth definition can be obtained by considering all effectively measure zero sets.
•   The complexity / compressibility approach. This paradigm was championed by A. N. Kolmogorov along with contributions Levin and Gregory Chaitin. For finite random sequences, Kolmogorov defined the randomness'' as the entropy, Kolmogorov complexity, of a string of length K of zeros and ones as the closeness of its entropy to K, i.e. if the complexity of the string is close to K it is very random and if the complexity is far below K, it is not so random.
•   The predictability approach. This paradigm was due Claus P. Schnorr and uses a slightly different definition of constructive martingales than martingales used in traditional probability theory. Schnorr showed how the existence of a selective betting strategy implied the existence of a selection rule for a biased sub-sequence. If one only requires a recursive martingale to succeed on a sequence instead of constructively succeeds on a sequence, then one gets the recursively randomness concepts. Yongge Wang that recursively randomness concept is different from Schnorr's randomness concepts. "
"In most cases, theorems relating the three paradigms (often equivalence) have been proven.
I do not pretend to have fully understood the previous statements. However, in summary, we have to grow quite complicate if we want to understand what randomness is.
Once we have clarified what randomness is, we face the problem of assessing what a random arrangement of an arbitrary set of objects, say $\Omega$, can be. Taking as an example the algorithms used to obtain a random sequence of numbers from a given distribution, we can observe that probability itself can be used to derive a random sequence on a set from a random sequence in $[0,1]$ by inverting the probability $P$.

Random sampling is significant when the domain set is subdivided into disjoint parts: a partition. Therefore:

Definition: given a set $\Omega$ (endowed with a $\sigma$-algebra), a partition of $\Omega$ is denoted as:
${\mathcal P}(\Omega) := \{ x \mid \bigcup_{x \in {\mathcal P}(\Omega)} x = \Omega\ {\rm and}\ \forall y, z \in {\mathcal P}(\Omega),\ y \neq z \Rightarrow y \cap z = \emptyset \}$
Through a probability $P$ defined over ${\mathcal P}(\Omega)$, each element $x$ of the partition is mapped into the closed interval $[0,1]$, and it is guaranteed that $P[\bigcup_{x \in {\mathcal P}(\Omega)} x] = 1$.

There is not necessarily an ordering in the partition of $\Omega$, but we can arbitrarily arrange the set and associate each of its elements with a subset of $[0,1]$ of Lebesgue measure (a.k.a. length) corresponding to its probability. By using the arbitrary order of the partition, we can at the same time build the (cumulative) probability. By arranging or re-arranging the numbers in $[0,1]$, we thus imply (since $P$ is bijective) a re-arrangement of the set ${\mathcal P}(\Omega)$.

Definition: we call a sequence of elements of ${\mathcal P}(\Omega)$, denoted ${\mathcal S}$, a numerable set of elements of ${\mathcal P}(\Omega)$:
$${\mathcal S} := \{ x_1, x_2, \cdots \}$$
Definition: we call a sequence a random sequence if it has no description shorter than itself via a universal Turing machine (or, equivalently, we can adopt one of the other two definitions proposed above).
Theorem: a random sequence in $[0,1]$, through inverting the probability $P$, defines a random sequence on the set ${\mathcal P}(\Omega)$.
The proof is trivial: if there were a law connecting the elements of the sequence in ${\mathcal P}(\Omega)$, then through the probability $P$ a describing law would also be obtained for the sequence in $[0,1]$, which would therefore no longer be random.

So a random sequence on any set over which a probability is defined can be derived from a random sequence in $[0,1]$.
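The construction above (assigning each element of the partition a subinterval of $[0,1]$ whose length equals its probability, and mapping uniform numbers back through the cumulative probability) is, in practice, inverse-transform sampling. A minimal sketch with an invented partition and probability:

```python
import bisect
import itertools
import random

# an invented partition of Omega and a probability P over it
partition = ["A", "B", "C", "D"]
prob = [0.1, 0.4, 0.3, 0.2]  # sums to 1

# cumulative probability: right endpoints of the subintervals of [0,1]
cumulative = list(itertools.accumulate(prob))

def sample(u):
    """Map u in [0,1] to the partition element whose subinterval contains it."""
    return partition[bisect.bisect_left(cumulative, u)]

random.seed(42)
draws = [sample(random.random()) for _ in range(10000)]
# relative frequencies should approach the assigned probabilities
for x, p in zip(partition, prob):
    print(x, p, draws.count(x) / len(draws))
```

If the pseudo-random generator feeding `random.random()` were truly random (i.e. its output had no shorter description), the resulting sequence on the partition would be random in the sense of the theorem above.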

References

[1] Pearl, J. (2009). Causal inference in statistics: An overview. Statistics Surveys, 3, 96–146. http://doi.org/10.1214/09-SS057
[2] Inevitable Randomness in Discrete Mathematics, by József Beck, 2009, ISBN 0-8218-4756-2, page 44.
[3] Algorithms: Main Ideas and Applications, by Vladimir Andreevich Uspenskiĭ and Alekseĭ Lʹvovich Semenov, Springer, 1993, ISBN 0-7923-2210-X, page 166.
[4] Sergio B. Volchan, What is a random sequence?, The American Mathematical Monthly, Vol. 109, 2002, pp. 46–63.
[5] R. Downey, Some recent progress in algorithmic randomness, in Mathematical Foundations of Computer Science 2004, eds. Jiří Fiala and Václav Koubek, 2004, ISBN 3-540-22823-3, page 44.