Commit 9afdb735 authored by unknown

v1.0.9.56 vignettes about calibration methods have been added

Showing with 743 additions and 35 deletions
Package: airGR
Type: Package
Title: Suite of GR Hydrological Models for Precipitation-Runoff Modelling
Version: 1.0.9.56
Date: 2017-11-07
Authors@R: c(
person("Laurent", "Coron", role = c("aut", "trl")),
person("Charles", "Perrin", role = c("aut", "ths")),
person("Claude", "Michel", role = c("aut", "ths")),
person("Vazken", "Andréassian", role = c("ctb", "ths")),
person("François", "Bourgin", role = c("ctb"), comment = "'Parameter estimation' vignettes"),
person("Pierre", "Brigode", role = c("ctb")),
person("Olivier", "Delaigue", role = c("cre", "ctb"), email = "airGR@irstea.fr"),
person("Nicolas", "Le Moine", role = c("ctb")),
......@@ -19,6 +20,7 @@ Authors@R: c(
person("Audrey", "Valéry", role = c("ctb"))
)
Depends: R (>= 3.0.1)
Suggests: coda, DEoptim, FME, ggmcmc, hydroPSO, Rmalschains
Description: Hydrological modelling tools developed at Irstea-Antony (HBAN Research Unit, France). The package includes several conceptual rainfall-runoff models (GR4H, GR4J, GR5J, GR6J, GR2M, GR1A), a snow accumulation and melt model (CemaNeige) and the associated functions for their calibration and evaluation. Use help(airGR) for package description.
License: GPL-2
URL: https://webgr.irstea.fr/en/airGR/
......
......@@ -14,7 +14,7 @@ output:
### 1.0.9.56 Release Notes (2017-11-07)
#### New features
......@@ -24,6 +24,8 @@ output:
- Added the <code>Param_Sets_GR4J</code> dataset. It contains generalist parameter sets for the GR4J model.
- Three vignettes have been added. They relate to different calibration methods (including the generalist parameter sets of the GR4J model).
#### Bug fixes
......
@phdthesis{mathevet_quels_2005,
address = {Paris},
title = {Quels modèles pluie-débit globaux au pas de temps horaire ? {Développements} empiriques et comparaison de modèles sur un large échantillon de bassins versants},
url = {http://webgr.irstea.fr/wp-content/uploads/2012/07/2005-MATHEVET-THESE.pdf},
school = {ENGREF},
author = {Mathevet, Thibault},
year = {2005},
keywords = {airGR, IRSTEA-HBAN-HYDRO},
}
@phdthesis{mouelhi_vers_2003,
title = {Vers une chaîne cohérente de modèles pluie-débit conceptuels globaux aux pas de temps pluriannuel, annuel, mensuel et journalier},
url = {http://webgr.irstea.fr/wp-content/uploads/2012/07/2003-MOUELHI-THESE.pdf},
urldate = {2016-03-22},
school = {Paris, ENGREF},
author = {Mouelhi, Safouane},
month = jan,
year = {2003},
keywords = {airGR, IRSTEA-HBAN-HYDRO},
}
@phdthesis{le_moine_bassin_2008,
title = {Le bassin versant de surface vu par le souterrain : une voie d’amélioration des performances et du réalisme des modèles pluie-débit ?},
url = {http://webgr.irstea.fr/wp-content/uploads/2012/07/2008-LE_MOINE-THESE.pdf},
school = {Université Pierre et Marie Curie - Paris VI},
author = {Le Moine, Nicolas},
year = {2008},
keywords = {airGR, IRSTEA-HBAN-HYDRO},
}
@article{mouelhi_stepwise_2006,
title = {Stepwise development of a two-parameter monthly water balance model},
volume = {318},
issn = {00221694},
url = {http://linkinghub.elsevier.com/retrieve/pii/S0022169405003033},
doi = {10.1016/j.jhydrol.2005.06.014},
language = {en},
number = {1-4},
urldate = {2016-03-22},
journal = {Journal of Hydrology},
author = {Mouelhi, Safouane and Michel, Claude and Perrin, Charles and Andréassian, Vazken},
month = mar,
year = {2006},
keywords = {airGR, IRSTEA-HBAN-HYDRO},
pages = {200--214}
}
@article{mouelhi_linking_2006,
title = {Linking stream flow to rainfall at the annual time step: {The} {Manabe} bucket model revisited},
volume = {328},
issn = {0022-1694},
shorttitle = {Linking stream flow to rainfall at the annual time step},
doi = {10.1016/j.jhydrol.2005.12.022},
abstract = {Trying to determine annual stream flow only from annual rainfall and possibly annual potential evapotranspiration is a challenge that forces hydrologists to focus on very basic questions such as: Is stream flow of the current year only dependent on this year's rainfall? Does previous years' rainfall play a role? Is the role, if any, of antecedent rainfall confined to the transfer function, to the production function or both? How much complexity (i.e. how many free parameters) is required to describe the rainfall-runoff transformation at the annual time step? Our analysis starts with the bucket model of Manabe [Manabe, S., 1969. Climate and the ocean circulation. 1. The atmospheric circulation and the hydrology of the Earth's surface. Mon. Weather Rev. 97(11), 739-774]. We assess this model using data from a large set of basins located in five different countries. We follow a new approach to compare the Manabe model performance to that of other models. The poor relevance of this model at the annual time step is demonstrated, and we put forward a new model that is both simple and clearly better than Manabe bucket model. This model provides new insight on crucial questions raised in the area of annual rainfall-runoff modelling.},
number = {1-2},
urldate = {2011-06-06},
journal = {Journal of Hydrology},
author = {Mouelhi, Safouane and Michel, Claude and Perrin, Charles and Andréassian, Vazken},
month = aug,
year = {2006},
keywords = {IRSTEA-HBAN-HYDRO},
pages = {283--296}
}
@article{perrin_improvement_2003,
title = {Improvement of a parsimonious model for streamflow simulation},
volume = {279},
issn = {0022-1694},
url = {http://www.sciencedirect.com/science/article/B6V6C-49507JG-1/2/c9654f1418557e3106055320abca339a},
doi = {10.1016/S0022-1694(03)00225-7},
abstract = {Hydrologists have been struggling over the past decades to improve rainfall-runoff models. As a consequence, models proposed 20-30 years ago still keep evolving as progress is made in the understanding of catchment hydrological behaviour. Here we present the GR4J model, a daily lumped rainfall-runoff model which is the result of a continuous improvement process over the last 15 years. The article provides the mathematical formulation of a new four-parameter version of the model. Model performance is assessed on a large sample of catchments: compared to other rainfall-runoff models, the GR4J performance is among the best ones. It also gives better results than the previous three-parameter model version, especially in the simulation of low flows. The tests indicate that a four-parameter structure corresponds to the maximum level of complexity that could be afforded in the model. Adding more free parameters did not bring significant improvements. The gain in model robustness with this new version should enhance the confidence in the practical use of this simple model for water engineering and resource management. The discussion underlines the potential limits introduced in the modelling process when one relies on a priori concepts in building a model structure and it stresses the value of large catchment samples to assess models.},
number = {1-4},
urldate = {2010-07-13},
journal = {Journal of Hydrology},
author = {Perrin, Charles and Michel, Claude and Andréassian, Vazken},
year = {2003},
keywords = {airGR, GR4J model, IRSTEA-HBAN-HYDRO, Model complexity, Model improvement, Model robustness, Parsimony, Rainfall-runoff modelling},
pages = {275--289},
}
@article{pushpalatha_downward_2011,
title = {A downward structural sensitivity analysis of hydrological models to improve low-flow simulation},
volume = {411},
issn = {0022-1694},
url = {http://www.sciencedirect.com/science/article/pii/S0022169411006846},
doi = {10.1016/j.jhydrol.2011.09.034},
abstract = {Better simulation and earlier prediction of river low flows are needed for improved water management. Here, a top–down structural analysis to improve a hydrological model in a low-flow simulation perspective is presented. Starting from a simple but efficient rainfall–runoff model (GR5J), we analyse the sensitivity of low-flow simulations to progressive modifications of the model’s structure. These modifications correspond to the introduction of more complex routing schemes and/or the addition of simple representations of groundwater–surface water exchanges. In these tests, we wished to improve low-flow simulation while avoiding performance losses in high-flow conditions, i.e. keeping a general model. In a typical downward modelling perspective, over 60 versions of the model were tested on a large set of French catchments corresponding to various low-flow conditions, and performance was evaluated using criteria emphasising errors in low-flow conditions. The results indicate that several best performing structures yielded quite similar levels of efficiency. The addition of a new flow component to the routing part of the model yielded the most significant improvement. In spite of the close performance of several model structures, we conclude by proposing a modified model version of GR5J with a single additional parameter.},
number = {1–2},
urldate = {2014-04-16},
journal = {Journal of Hydrology},
author = {Pushpalatha, Raji and Perrin, Charles and Le Moine, Nicolas and Mathevet, Thibault and Andréassian, Vazken},
month = dec,
year = {2011},
keywords = {airGR, Downward approach, GR6J model, IRSTEA-HBAN-HYDRO, low flows, Lumped model, Model efficiency, Simulation, Uncertainty},
pages = {66--76},
}
@article{valery_as_2014,
title = {'{As} simple as possible but not simpler': what is useful in a temperature-based snow-accounting routine? {Part} 2 - {Sensitivity} analysis of the {Cemaneige} snow accounting routine on 380 catchments},
issn = {0022-1694},
shorttitle = {'{As} simple as possible but not simpler'},
url = {http://www.sciencedirect.com/science/article/pii/S0022169414003321},
doi = {10.1016/j.jhydrol.2014.04.058},
abstract = {This paper investigates the degree of complexity required in a snow accounting routine to ultimately simulate flows at the catchment outlet. We present a simple, parsimonious and general snow accounting routine (SAR), called Cemaneige, that can be associated with any precipitation–runoff model to simulate discharge at the catchment scale. To get results of general applicability, this SAR was tested on a large set of 380 catchments from four countries (France, Switzerland, Sweden and Canada) and combined with four different hydrological models. Our results show that five basic features provide a good reliability and robustness to the SAR, namely considering: (1) a transition range of temperature for the determination of the solid fraction of precipitation; (2) five altitudinal bands of equal area for snow accumulation; (3) the cold-content of the snowpack (with a parameter controlling snowpack inertia); (4) a degree-day factor controlling snowmelt; (5) uneven snow distribution in each band. This general SAR includes two internal states (the snowpack and its cold-content). Results also indicate that only two free parameters (here snowmelt factor and cold-content factor) are warranted in a SAR at the daily time step and that further complexity is not supported by improvements in flow simulation efficiency. To justify the reasons for considering the five features above, a sensitivity analysis comparing Cemaneige with other SAR versions is performed. It analyses the snow processes which should be selected or not to bring significant improvement in model performances. Compared with the six existing SARs presented in the companion article (Valéry et al., this issue) on the 380 catchments set, Cemaneige shows better performance on average than five of these six SARs. It provides performance similar to the sixth SAR (MORD4) but with only half its number of free parameters. 
However, CemaNeige still appears perfectible on mountainous catchments (France and Switzerland) where the lumped SAR, MORD4, outperforms Cemaneige. Cemaneige can easily be adapted for simulation on ungauged catchments: fixing its two parameters to default values much less degrades performances than the other best performing SAR. This may partly due to the Cemaneige parsimony.},
volume = {517},
urldate = {2014-05-14},
journal = {Journal of Hydrology},
author = {Valéry, Audrey and Andréassian, Vazken and Perrin, Charles},
year = {2014},
keywords = {airGR, Degree-day approach, IRSTEA-HBAN-HYDRO, Precipitation-runoff models, Snow accounting Routine (SAR), Snow accumulation, Snowmelt, snowpack variability},
pages = {1176--1187}
}
@article{andreassian_seeking_2014,
title = {Seeking genericity in the selection of parameter sets: {Impact} on hydrological model efficiency},
volume = {50},
issn = {00431397},
shorttitle = {Seeking genericity in the selection of parameter sets},
url = {http://doi.wiley.com/10.1002/2013WR014761},
doi = {10.1002/2013WR014761},
language = {en},
number = {10},
urldate = {2017-10-25},
journal = {Water Resources Research},
author = {Andréassian, Vazken and Bourgin, François and Oudin, Ludovic and Mathevet, Thibault and Perrin, Charles and Lerat, Julien and Coron, Laurent and Berthet, Lionel},
month = oct,
year = {2014},
pages = {8356--8366}
}
---
title: "Get Started with airGR"
bibliography: V00_airgr_ref.bib
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{Get Started with airGR}
%\VignetteEncoding{UTF-8}
---
# Introduction
**airGR** is a package that brings into the [**R software**](https://cran.r-project.org/) the hydrological modelling tools used and developed at the [Catchment Hydrology Team](https://webgr.irstea.fr/en/) at [Irstea (France)](http://www.irstea.fr/en/), including the [**GR rainfall-runoff models**](https://webgr.irstea.fr/en/modeles/) and a snowmelt and accumulation model, [**CemaNeige**](https://webgr.irstea.fr/en/modeles/modele-de-neige/). Each model core is coded in **Fortran** to ensure low computational time. The other package functions (i.e. mainly the calibration algorithm and the efficiency criteria calculation) are coded in **R**.
The **airGR** package has been designed to fulfill two major requirements: to facilitate its use by non-expert users and to allow flexibility regarding the addition of external criteria, models or calibration algorithms. The names of the functions and their arguments were chosen to this end. **airGR** also contains basic plotting facilities.
......@@ -31,7 +32,7 @@ The models can be called within **airGR** using the following functions:
* `RunModel_CemaNeigeGR5J()`: combined use of **GR5J** and **CemaNeige**
* `RunModel_CemaNeigeGR6J()`: combined use of **GR6J** and **CemaNeige**
The [**GRP**](https://webgr.irstea.fr/en/modeles/modele-de-prevision-grp/) forecasting model and the [**Otamin**](https://webgr.irstea.fr/en/modeles/otamin/) predictive uncertainty tool are not available in **airGR**.
......@@ -46,7 +47,7 @@ First, it is necessary to load the **airGR** package:
library(airGR)
```
Below is an example of a `data.frame` of hydrometeorological observation time series for a fictional catchment included in the **airGR** package. It contains:
* *DatesR*: dates in the POSIXt format
* *P*: average precipitation [mm/day]
......@@ -59,7 +60,7 @@ This is an example of a `data.frame` of hydrometeorological observations time se
data(L0123001)
summary(BasinObs)
```
The usual functions (e.g. `read.table()`) can be used to load real-case data sets.
......@@ -69,18 +70,18 @@ To run a model, the functions of the **airGR** package (e.g. the models, calibra
To facilitate the use of the package, there are several functions dedicated to the creation of these objects:
* `CreateInputsModel()`: prepares the inputs for the different hydrological models (time series of dates, precipitation, observed discharge, etc.)
* `CreateRunOptions()`: prepares the options for the hydrological model run (warm-up period, calibration period, etc.)
* `CreateInputsCrit()`: prepares the options in order to compute the efficiency criterion (choice of the criterion, choice of the transformation on discharge: "log", "sqrt", etc.)
* `CreateCalibOptions()`: prepares the options for the hydrological model calibration algorithm (choice of parameters to optimize, predefined values for uncalibrated parameters, etc.)
## InputsModel object
To run a GR hydrological model or CemaNeige, the user has to prepare the input data with the `CreateInputsModel()` function.
As arguments, this function needs the function name corresponding to the model the user wants to run, a vector of dates, a vector of precipitation and a vector of potential evapotranspiration.
In the example below, we already have the potential evapotranspiration. If the user does not have these data, it is possible to compute them with [Oudin's formula](http://dx.doi.org/10.1016/j.jhydrol.2004.08.026) using the `PEdaily_Oudin()` function (which only needs Julian days, daily average air temperature and latitude).
Missing values (`NA`) of precipitation (or potential evapotranspiration) are **not allowed**.
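For illustration, the inputs for **GR4J** can be prepared from the example data set loaded above as follows (a minimal sketch; only the mandatory arguments are shown):

```{r, eval=FALSE}
## minimal sketch: preparing the GR4J inputs from the example data set
InputsModel <- CreateInputsModel(FUN_MOD = RunModel_GR4J,
                                 DatesR = BasinObs$DatesR,
                                 Precip = BasinObs$P,
                                 PotEvap = BasinObs$E)
```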
......@@ -111,7 +112,7 @@ Ind_Run <- seq(which(format(BasinObs$DatesR, format = "%d/%m/%Y %H:%M")=="01/01/
str(Ind_Run)
```
The initialization of hydrological models is of the utmost importance. Indeed, an inaccurate initialization causes poor quality discharge simulations during the earliest stages of the running period. For example, in the GR models, by default, the production and routing store levels are respectively set to 30 % and 50 % of their capacity, which may be far from their ideal values. Two solutions are offered to accurately initialize the GR models in **airGR**: manually predefining the initial states (e.g. from a previous run) or running the models during a warm-up period before the actual running period. It is generally advised to set this warm-up period to one year or more.
As a consequence, it is possible to define in `CreateRunOptions()` the following arguments:
......@@ -128,7 +129,7 @@ str(RunOptions)
```
The `CreateRunOptions()` function returns warnings if the default initialization options are used:
* `IniStates` and `IniResLevels` are automatically set to initialize all the model states at 0, except for the production and routing stores, which are initialized at 30 % and 50 % of their capacity, respectively
* `IndPeriod_WarmUp` default setting ensures a one-year warm-up using the time steps preceding the `IndPeriod_Run`, if available
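Relying on these defaults, a run options object only requires the run period (a sketch consistent with the options described above; store initialization and the warm-up period are then set automatically):

```{r, eval=FALSE}
## sketch: default store initialization and automatic one-year warm-up
RunOptions <- CreateRunOptions(FUN_MOD = RunModel_GR4J,
                               InputsModel = InputsModel,
                               IndPeriod_Run = Ind_Run)
```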
......@@ -138,11 +139,11 @@ The `CreateRunOptions()` function returns warnings if the default initialization
The `CreateInputsCrit()` function allows to prepare the input in order to calculate a criterion. It is possible to define the following arguments:
* `FUN_CRIT`: the name of the error criterion function (they are introduced later on)
* `InputsModel`: the inputs of the hydrological model previously prepared by the `CreateInputsModel()` function
* `RunOptions`: the options of the hydrological model previously prepared by the `CreateRunOptions()` function
* `Qobs`: the observed discharge expressed in *mm/time step*
Missing values (`NA`) are **allowed** for observed discharge.
```{r}
InputsCrit <- CreateInputsCrit(FUN_CRIT = ErrorCrit_NSE, InputsModel = InputsModel,
......@@ -152,7 +153,7 @@ str(InputsCrit)
## CalibOptions object
Before using the automatic calibration tool, the user needs to prepare the calibration options with the `CreateCalibOptions()` function. For that, it is necessary to define the following arguments:
* `FUN_MOD`: the name of the model function
* `FUN_CALIB`: the name of the calibration algorithm
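For example, with **GR4J** and the Michel calibration algorithm, the call reduces to the following sketch (default settings are used for the remaining arguments):

```{r, eval=FALSE}
## sketch: calibration options for GR4J with the Michel algorithm
CalibOptions <- CreateCalibOptions(FUN_MOD = RunModel_GR4J,
                                   FUN_CALIB = Calibration_Michel)
```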
......@@ -165,7 +166,7 @@ str(CalibOptions)
# Criteria
The evaluation of the quality of a simulation is estimated through the calculation of criteria. These criteria can be used either as objective functions during the calibration of the model or as measures for evaluating its performance on a control period.
The package offers the possibility to use different criteria:
......@@ -175,7 +176,7 @@ The package offers the possibility to use different criteria:
* `ErrorCrit_KGE()`: Kling-Gupta efficiency criterion (KGE)
* `ErrorCrit_KGE2()`: modified Kling-Gupta efficiency criterion (KGE')
It is also possible to create user-defined criteria. To do so, it is only necessary to define the function in **R** following the same syntax as the criteria functions included in **airGR**.
# Calibration
......@@ -198,18 +199,18 @@ Param <- OutputsCalib$ParamFinalR
Param
```
The `Calibration_Michel()` function is the only one implemented in the **airGR** package to calibrate the model, but users can implement their own calibration function. Two vignettes explain how this can be done ([2.1](V02.1_param_optim.html) and [2.2](V02.2_param_mcmc.html)).
The `Calibration_Michel()` function returns a vector with the parameters of the chosen model, which means that the number of values can differ depending on the model that is used. It is possible to use the `Calibration_Michel()` function with user-implemented hydrological models.
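A typical calibration call chains the objects prepared in the previous sections, sketched below (the exact argument list may differ slightly between package versions):

```{r, eval=FALSE}
## sketch: calibrating GR4J with the Michel algorithm
OutputsCalib <- Calibration_Michel(InputsModel = InputsModel,
                                   RunOptions = RunOptions,
                                   InputsCrit = InputsCrit,
                                   CalibOptions = CalibOptions,
                                   FUN_MOD = RunModel_GR4J,
                                   FUN_CRIT = ErrorCrit_NSE)
Param <- OutputsCalib$ParamFinalR
```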
# Control
This step assesses the predictive capacity of the model. Control is defined as the estimation of the accuracy of the model on data sets that are not used in its construction, and in particular its calibration.
The classical way to perform a control is to keep data from a period separated from the calibration period. If possible, this control period should correspond to climatic situations that differ from those of the calibration period in order to better point out the qualities and weaknesses of the model. This exercise is necessary for assessing the robustness of the model, that is to say its ability to keep stable performances outside of the calibration conditions.
Performing a model control with **airGR** is similar to running a simulation (see below), followed by the computation of one or several performance criteria.
......@@ -219,7 +220,7 @@ Performing a model validation with **airGR** is similar to running a simulation
## Simulation run
To run a model, the user has to use the `RunModel*()` functions (`InputsModel`, `RunOptions` and parameters).
All the data needed have already been prepared in the previous steps of this guide.
```{r}
OutputsModel <- RunModel_GR4J(InputsModel = InputsModel, RunOptions = RunOptions, Param = Param)
......@@ -230,18 +231,18 @@ str(OutputsModel)
## Results preview
Although it is possible for users to design their own graphics from the outputs of the `RunModel*()` functions, the **airGR** package offers the possibility to make use of the `plot.OutputsModel()` function (or `plot()` with an `OutputsModel` object). This function returns a dashboard of results including various graphs (depending on the model used):
* time series of total precipitation and simulated discharge (and observed discharge if provided)
* interannual average daily simulated discharge (and daily observed discharge if provided) and interannual average monthly precipitation
* cumulative frequency plot for simulated discharge (and for observed discharge if provided)
* correlation plot between simulated and observed discharge (if observed discharge provided)
```{r,eval=F}
plot(OutputsModel, Qobs = BasinObs$Qmm[Ind_Run])
```
Moreover, if the CemaNeige model is used, the air temperature and the simulated snowpack water equivalent time series are plotted.
## Efficiency criterion
......@@ -255,3 +256,6 @@ OutputsCrit <- ErrorCrit_NSE(InputsCrit = InputsCrit, OutputsModel = OutputsMode
OutputsCrit <- ErrorCrit_KGE(InputsCrit = InputsCrit, OutputsModel = OutputsModel)
```
# References
---
title: "Plugging new calibration algorithms in airGR"
author: "François Bourgin"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{Plugging new calibration algorithms}
%\VignetteEncoding{UTF-8}
---
```{r, warning=FALSE, include=FALSE, fig.keep='none', results='hide'}
library(airGR)
library(DEoptim)
library(hydroPSO)
library(Rmalschains)
# source("airGR.R")
```
# Introduction
## Scope
Michel's calibration strategy (the `Calibration_Michel()` function) is the calibration algorithm proposed in **airGR**. However, other optimization methods can be used in combination with **airGR**.
We show how to use different R packages to perform parameter estimation.
In this vignette, we use the **GR4J** model to illustrate the different optimization strategies.
In particular, we assume that the R global environment contains input climate data, observed discharge and functions from the **airGR** vignette, as shown below.
The calibration period is defined in the `CreateRunOptions()` function.
```{r, warning=FALSE, fig.keep='none', results='hide', fig.height=10, fig.width=10, eval=TRUE, echo=FALSE, message=FALSE}
example("Calibration_Michel", echo = FALSE, ask = FALSE)
```
```{r, echo=TRUE, eval=FALSE}
example("Calibration_Michel")
```
For in-depth descriptions of the different optimization strategies presented here, the reader is referred to the documentation of each package.
Please note that this vignette is only for illustration purposes and does not provide any guidance about which optimization strategy is recommended for the family of the **GR** models.
## Definition of the necessary function
Parameter estimation can be performed by defining a function that takes a parameter set as input and returns the value of the performance criterion.
There are two important steps: the transformation of parameters to real space and the computation of the value of the performance criterion.
Here we choose to minimize the root mean square error.
The change from the "real" parameter space to a "transformed" space ensures homogeneity of displacement along the different dimensions of the parameter space during the step-by-step procedure of the model calibration algorithm.
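To illustrate the idea with a generic mapping (NOT airGR's actual transformation formulas, which are provided by `TransfoParam_GR4J()`), a bounded "transformed" value can be mapped to a positive real-space value so that equal steps in the transformed space correspond to equal multiplicative steps in the real space:

```r
## Generic sketch (not airGR's actual transformation): map a bounded
## "transformed" value t in [-9.99, +9.99] to a positive real-space value;
## the bounds and vmax are illustrative choices
to_real <- function(t, tmax = 9.99, vmax = 1200) {
  exp(t / tmax * log(vmax))
}
to_real(c(-9.99, 0, 9.99))  ## smallest, middle and largest real-space values
```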
```{r, warning=FALSE, results='hide'}
OptimGR4J <- function(Param_Optim) {
## Transformation of the parameter set to real space
Param_Optim_Vre <- airGR::TransfoParam_GR4J(ParamIn = Param_Optim,
Direction = "TR")
## Simulation given a parameter set
OutputsModel <- airGR::RunModel_GR4J(InputsModel = InputsModel,
RunOptions = RunOptions,
Param = Param_Optim_Vre)
## Computation of the value of the performance criteria
OutputsCrit <- airGR::ErrorCrit_RMSE(InputsCrit = InputsCrit,
OutputsModel = OutputsModel,
verbose = FALSE)
return(OutputsCrit$CritValue)
}
```
In addition, we need to define the lower and upper bounds of the four **GR4J** parameters in the transformed parameter space:
```{r, warning=FALSE, results='hide'}
lowerGR4J <- rep(-9.99, times = 4)
upperGR4J <- rep(+9.99, times = 4)
```
# Local optimization
We start with a local optimization strategy by using the PORT routines (using the `nlminb()` of the `stats` package) and by setting a starting point in the transformed parameter space:
```{r, warning=FALSE, results='hide'}
startGR4J <- c(4.1, 3.9, -0.9, -8.7)
optPORT <- stats::nlminb(start = startGR4J,
objective = OptimGR4J,
lower = lowerGR4J, upper = upperGR4J,
control = list(trace = 1))
```
The RMSE value reaches a local minimum value after 35 iterations.
We can also try a multi-start approach to test the consistency of the local optimization.
Here we use the same grid as the one used for the filtering step of Michel's calibration strategy (`Calibration_Michel()` function).
For each starting point, a local optimization is performed.
```{r, warning=FALSE, results='hide'}
startGR4J <- expand.grid(data.frame(CalibOptions$StartParamDistrib))
optPORT_ <- function(x) {
  stats::nlminb(start = x,
                objective = OptimGR4J,
                lower = lowerGR4J, upper = upperGR4J,
                control = list(trace = 1))
}
list_opt <- apply(startGR4J, 1, optPORT_)
```
We can then extract the best parameter sets and the value of the performance criteria:
```{r, warning=FALSE, results='hide'}
list_par <- t(sapply(list_opt, function(x) x$par))
list_obj <- sapply(list_opt, function(x) x$objective)
df_port <- data.frame(list_par, RMSE = list_obj)
```
As can be seen below, the optimum performance criterion values (column *objective*) can differ from the global optimum value in many cases, resulting in various parameter sets.
```{r, warning=FALSE}
summary(df_port)
```
The existence of several local minima illustrates the importance of defining an appropriate starting point or of using a multi-start strategy or a global optimization strategy.
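The effect of the starting point can be reproduced on a toy objective (an illustration unrelated to GR4J): a double-well function has two local minima, and `nlminb()` converges to a different one depending on where it starts.

```r
## Toy double-well objective with local minima at x = -1 and x = +1
f <- function(x) (x^2 - 1)^2
starts <- c(-2, 2)
## one local optimization per starting point
ends <- sapply(starts, function(s) stats::nlminb(start = s, objective = f)$par)
round(ends, 3)
```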
# Global optimization
Global optimization is most often used when facing a complex response surface, with multiple local minima.
Here we use the following R implementations of some popular strategies:
* [DEoptim: differential evolution](https://cran.r-project.org/web/packages/DEoptim/index.html)
* [hydroPSO: particle swarm](https://cran.r-project.org/web/packages/hydroPSO/index.html)
* [Rmalschains: memetic algorithms](https://cran.r-project.org/web/packages/Rmalschains/index.html)
## Differential Evolution
```{r, warning=FALSE, results='hide'}
optDE <- DEoptim::DEoptim(fn = OptimGR4J,
lower = lowerGR4J, upper = upperGR4J,
control = DEoptim::DEoptim.control(NP = 40, trace = 10))
```
## Particle Swarm
```{r, warning=FALSE, results='hide', message=FALSE}
optPSO <- hydroPSO::hydroPSO(fn = OptimGR4J,
lower = lowerGR4J, upper = upperGR4J,
control = list(write2disk = FALSE, verbose = FALSE))
```
## MA-LS-Chains
```{r, warning=FALSE, results='hide'}
optMALS <- Rmalschains::malschains(fn = OptimGR4J,
lower = lowerGR4J, upper = upperGR4J,
maxEvals = 2000)
```
# Results
As can be seen in the table below, the four optimization strategies tested lead to very close optima.
```{r, warning=FALSE, echo=FALSE}
data.frame(Algo = c("Michel", "PORT", "DE", "PSO", "MA-LS"),
round(rbind(
OutputsCalib$ParamFinalR ,
airGR::TransfoParam_GR4J(ParamIn = optPORT$par , Direction = "TR"),
airGR::TransfoParam_GR4J(ParamIn = as.numeric(optDE$optim$bestmem), Direction = "TR"),
airGR::TransfoParam_GR4J(ParamIn = as.numeric(optPSO$par) , Direction = "TR"),
airGR::TransfoParam_GR4J(ParamIn = optMALS$sol , Direction = "TR")),
digits = 3))
```
<!-- This is an expected result because the response surface for quadratic performance criteria of the **GR4J** model is generally sufficiently well defined in the transformed parameter space to allow using a local optimization strategy instead of a more time consuming global optimization strategy. -->
---
title: "Parameter estimation within a Bayesian MCMC framework"
author: "François Bourgin"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{MCMC Parameter estimation}
%\VignetteEncoding{UTF-8}
---
```{r, include=FALSE, fig.keep='none', results='hide'}
library(airGR)
library(coda)
library(FME)
library(ggmcmc)
library(dplyr)
library(tidyr)
# source("airGR.R")
```
# Introduction
## Scope
Here, we give an example of parameter estimation within a Bayesian MCMC approach.
We use the **GR4J** model and we assume that the R global environment contains data and functions from the **airGR** vignette.
```{r, warning=FALSE, fig.keep='none', results='hide', fig.height=10, fig.width=10, eval=TRUE, echo=FALSE, message=FALSE}
example("Calibration_Michel", echo = FALSE, ask = FALSE)
```
```{r, echo=TRUE, eval=FALSE}
example("Calibration_Michel")
```
Please refer to the [2.1 Parameter estimation](V02.1_param_optim.html) vignette for explanations on how to plug in a parameter estimation algorithm to **airGR**.
Please note that this vignette is only for illustration purposes and does not provide any guidance about which parameter inference strategy is recommended for the family of the GR models.
## Standard Least Squares (SLS) Bayesian inference
We show how to use the DRAM algorithm for SLS Bayesian inference, with the `modMCMC()` function of the [FME](https://cran.r-project.org/web/packages/FME/) package.
First, we need to define a function that returns twice the opposite of the log-likelihood for a given parameter set.
Nota: in the `RunAirGR4J()` function, the computation of the log-likelihood is simplified in order to ensure good computing performance. It is equivalent to the following two lines.
```{r, echo=TRUE, eval=FALSE}
Likelihood <- sum((ObsY - ModY)^2, na.rm = TRUE)^(-sum(!is.na(ObsY)) / 2)
LL <- -2 * log(Likelihood)
```
In our simplified setting of a Gaussian likelihood with the measurement error integrated out, the log of the sum of squared errors is related to the log-likelihood.
Note that we do not use any discharge transformation here, although applying a Box-Cox transformation is quite popular in hydrological modelling.
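For illustration, a Box-Cox transformation of discharge could be written as follows; the `lambda` and `epsilon` values are arbitrary choices for this sketch, not airGR settings:

```r
## Box-Cox transformation of discharge (illustrative parameter values only);
## epsilon avoids taking a power of zero for null discharges
boxcox <- function(Q, lambda = 0.3, epsilon = 0.01) {
  ((Q + epsilon)^lambda - 1) / lambda
}
boxcox(c(0.1, 1, 10))  ## compresses high flows relative to low flows
```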
```{r, results='hide'}
RunAirGR4J <- function(Param_Optim) {
## Transformation to real space
Param_Optim_Vre <- airGR::TransfoParam_GR4J(ParamIn = Param_Optim,
Direction = "TR")
## Simulation given a parameter set
OutputsModel <- airGR::RunModel_GR4J(InputsModel = InputsModel,
RunOptions = RunOptions,
Param = Param_Optim_Vre)
## Computation of the log-likelihood: N * log(SS)
ObsY <- InputsCrit$Qobs
ModY <- OutputsModel$Qsim
  LL <- sum(!is.na(ObsY)) * log(sum((ObsY - ModY)^2, na.rm = TRUE))
  return(LL)
}
```
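As a quick numeric check of the simplification (with made-up observed and simulated values), minimizing `N * log(SS)` is indeed equivalent to minimizing `-2 * log(SS^(-N/2))`:

```r
## Hypothetical observed and simulated values (illustration only)
ObsY <- c(1.0, 2.0, 3.0, NA, 5.0)
ModY <- c(1.1, 1.8, 3.2, 2.0, 4.9)
SS <- sum((ObsY - ModY)^2, na.rm = TRUE)  ## sum of squared errors
N  <- sum(!is.na(ObsY))                   ## number of valid observations
Likelihood <- SS^(-N / 2)
all.equal(N * log(SS), -2 * log(Likelihood))  ## the two formulations coincide
```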
# MCMC algorithm for Bayesian inference
## Estimation of the best-fit parameters as a starting point
We start by using the PORT optimization routine to estimate the best-fit parameters.
```{r, results='hide'}
optPORT <- stats::nlminb(start = c(4.1, 3.9, -0.9, -8.7),
objective = RunAirGR4J,
lower = rep(-9.9, times = 4), upper = rep(9.9, times = 4),
control = list(trace = 1))
IniParam <- optPORT$par
```
## Running 3 chains for convergence assessment
We run 3 chains with different initial values to assess the convergence of the Markov chains.
The number of iterations is fixed to 2000 with a burn-in length of 0.
Nota: in this example, the number of iterations is relatively low (2000) in order to limit the running time of this vignette, and the burn-in length has been set to zero in order to show the convergence process. In a real exercise, it is better to use more iterations (e.g. 5000) and to discard the first ones as burn-in.
With the DRAM algorithm, the covariance of the proposal is updated every 100 runs and delayed rejection is applied.
```{r, results='hide'}
ListIniParam <- data.frame(Chain1 = IniParam, Chain2 = IniParam, Chain3 = IniParam,
row.names = paste0("X", 1:4))
ListIniParam <- sweep(ListIniParam, MARGIN = 2, STATS = c(1, 0.9, 1.1), FUN = "*")
ListIniParam[ListIniParam < -9.9] <- -9.9
ListIniParam[ListIniParam > +9.9] <- +9.9
mcmcDRAM <- apply(ListIniParam, 2, function(iListIniParam) {
FME::modMCMC(f = RunAirGR4J,
p = iListIniParam,
lower = rep(-9.9, times = 4), ## lower bounds for GR4J
upper = rep(+9.9, times = 4), ## upper bounds for GR4J
niter = 2000,
jump = 0.01,
outputlength = 2000,
burninlength = 0,
updatecov = 100, ## Adaptive Metropolis
ntrydr = 2) ## Delayed Rejection
})
```
## MCMC diagnostics and visualisation tools
There are several diagnostics that can be used to check the convergence of the chains.
The R package [coda](https://cran.r-project.org/web/packages/coda/index.html) provides several diagnostic tests.
Among others, Gelman and Rubin's convergence diagnostic can be used: a value close to 1 suggests acceptable convergence.
The result would be better with more than 2000 iterations. As we have kept the iterations of the convergence process, we have to set the `autoburnin` argument to `TRUE` in order to consider only the second half of the series.
Note that we rescale the GR4J parameter sets from the transformed space to the real space.
```{r, results='hide'}
MultDRAM <- coda::as.mcmc(lapply(mcmcDRAM, function(x) {
coda::as.mcmc(airGR::TransfoParam_GR4J(as.matrix(x$pars), Direction = "TR"))
}))
GelRub <- coda::gelman.diag(MultDRAM, autoburnin = TRUE)$mpsrf
GelRub
```
In addition, graphical tools can be used, with for example the [ggmcmc](https://cran.r-project.org/web/packages/ggmcmc/) package.
First, the evolution of the Markov chains can be seen with a traceplot:
```{r, fig.width=6, fig.height=9}
ParamDRAM <- ggmcmc::ggs(MultDRAM) ## converts the object for use by the ggs_* graphical functions
ggmcmc::ggs_traceplot(ParamDRAM)
```
The posterior density for each parameter can then be visualised:
```{r, fig.width=6, fig.height=9}
ParamDRAM_burn <- dplyr::filter(ParamDRAM, Iteration > 1000) # to keep only the second half of the series
ggmcmc::ggs_density(ParamDRAM_burn)
```
Finally, a paired plot can be useful to look at the correlation between parameters.
Highly correlated parameters can be seen as an indication for a structural deficit of the model used.
```{r, fig.width=6, fig.height=6}
ggmcmc::ggs_pairs(ParamDRAM_burn, lower = list(continuous = "density"))
```
## Exploring further possibilities
We only presented one MCMC algorithm and one parameter inference setting.
Nonetheless, other approaches can be explored within the same framework.
One can for example use different data transformations to deal with the limitations of the Gaussian error model.
<!-- Another extension is to infer the parameters of more advanced error model in addition of the **GR4J** model parameters. -->
---
title: "Generalist parameter sets for the GR4J model"
bibliography: V00_airgr_ref.bib
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{GR4J parameter sets}
%\VignetteEncoding{UTF-8}
---
# Introduction
## Scope
```{r, warning=FALSE, include=FALSE}
library(airGR)
options(digits = 3)
```
In the **airGR** package, the classical way to calibrate a model is to use Michel's algorithm (see the `Calibration_Michel()` function).
The Michel's algorithm combines a global and a local approach. A screening is first performed either based on a rough predefined grid (considering various initial values for each parameter) or from a list of initial parameter sets.
The best set identified in this screening is then used as a starting point for the steepest descent local search algorithm.
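This two-step idea can be sketched on a toy 2-parameter objective (an illustration only, not `Calibration_Michel()`'s actual code):

```r
## Toy objective with its minimum at (1, -2)
obj <- function(p) (p[1] - 1)^2 + (p[2] + 2)^2
## Step 1: screening on a rough predefined grid
grid <- expand.grid(p1 = c(-5, 0, 5), p2 = c(-5, 0, 5))
best_start <- unlist(grid[which.min(apply(grid, 1, obj)), ])
## Step 2: local search started from the best grid point
opt <- stats::optim(par = best_start, fn = obj)
round(opt$par, 2)
```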
In some specific situations, for example if the calibration period is too short and consequently not representative of the catchment behaviour, a local calibration algorithm can give poor results.
In this vignette, we show how a known parameter set can be used as an alternative to the grid-screening calibration procedure, and we compare it to two methods using the `Calibration_Michel()` function. The generalist parameter sets introduced here are taken from @andreassian_seeking_2014.
## Data preparation
We load an example data set from the package and the GR4J parameter sets.
```{r, warning=FALSE}
## loading catchment data
data(L0123001)
## loading generalist parameter sets
data(Param_Sets_GR4J)
```
The given GR4J **X4u** variable does not correspond to the actual GR4J **X4** parameter. As explained in @andreassian_seeking_2014 [section 2.1], the given GR4J **X4u** value has to be adjusted (rescaled) using catchment area (S) [km2] as follows: $X4 = X4u / 5.995 \times S^{0.3}$ (please note that **the formula is erroneous in the publication**).
So, we need to adjust the **X4** parameter accordingly.
```{r, warning=FALSE}
Param_Sets_GR4J$X4 <- Param_Sets_GR4J$X4u / 5.995 * BasinInfo$BasinArea^0.3
Param_Sets_GR4J$X4u <- NULL
Param_Sets_GR4J <- as.matrix(Param_Sets_GR4J)
```
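As a quick numeric check of the rescaling formula (with an illustrative area, not the example catchment's actual value):

```r
## X4 = X4u / 5.995 * S^0.3, with illustrative values
X4u <- 1.0    # unadjusted value
S   <- 360    # catchment area [km2] (illustrative)
X4  <- X4u / 5.995 * S^0.3
round(X4, 3)
```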
Please find below a summary of the `r nrow(Param_Sets_GR4J)` sets of `r ncol(Param_Sets_GR4J)` parameters.
```{r, warning=FALSE, echo=FALSE}
summary(Param_Sets_GR4J)
```
## Object model preparation
We assume that the R global environment contains data and functions from the **airGR** vignette.
The calibration period has been defined from **1990-01-01** to **1990-02-28**, and the validation period from **1990-03-01** to **1999-12-31**.
As a consequence, in this example the calibration period is very short, less than 6 months.
```{r, warning=FALSE, include=TRUE}
## preparation of the InputsModel object
InputsModel <- CreateInputsModel(FUN_MOD = RunModel_GR4J, DatesR = BasinObs$DatesR,
Precip = BasinObs$P, PotEvap = BasinObs$E)
## ---- calibration step
## short calibration period selection (< 6 months)
Ind_Cal <- seq(which(format(BasinObs$DatesR, format = "%d/%m/%Y %H:%M")=="01/01/1990 00:00"),
which(format(BasinObs$DatesR, format = "%d/%m/%Y %H:%M")=="28/02/1990 00:00"))
## preparation of the RunOptions object for the calibration period
RunOptions_Cal <- CreateRunOptions(FUN_MOD = RunModel_GR4J,
InputsModel = InputsModel, IndPeriod_Run = Ind_Cal)
## efficiency criterion: Nash-Sutcliffe Efficiency
InputsCrit_Cal <- CreateInputsCrit(FUN_CRIT = ErrorCrit_NSE, InputsModel = InputsModel,
RunOptions = RunOptions_Cal, Qobs = BasinObs$Qmm[Ind_Cal])
## ---- validation step
## validation period selection
Ind_Val <- seq(which(format(BasinObs$DatesR, format = "%d/%m/%Y %H:%M")=="01/03/1990 00:00"),
which(format(BasinObs$DatesR, format = "%d/%m/%Y %H:%M")=="31/12/1999 00:00"))
## preparation of the RunOptions object for the validation period
RunOptions_Val <- CreateRunOptions(FUN_MOD = RunModel_GR4J,
InputsModel = InputsModel, IndPeriod_Run = Ind_Val)
## efficiency criterion (Nash-Sutcliffe Efficiency) on the validation period
InputsCrit_Val <- CreateInputsCrit(FUN_CRIT = ErrorCrit_NSE, InputsModel = InputsModel,
RunOptions = RunOptions_Val, Qobs = BasinObs$Qmm[Ind_Val])
```
# Calibration of the GR4J model with the generalist parameter sets
It is recommended to use the generalist parameter sets when the calibration period is less than 6 months.
As shown in @andreassian_seeking_2014 [figure 4], a recommended way to use the `Param_Sets_GR4J` data frame is to run the GR4J model with each parameter set and to select the best one according to an objective function (here we use the Nash-Sutcliffe Efficiency criterion).
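As a reminder of the objective function used, the NSE criterion can be computed "by hand" on toy values (illustration only; in this vignette it is computed by `ErrorCrit_NSE()`):

```r
## NSE = 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2)
Qobs <- c(1, 2, 3, 4, 5)    # toy observed discharge
Qsim <- c(1.1, 1.9, 3.2, 3.8, 5.1)  # toy simulated discharge
NSE <- 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2)
round(NSE, 3)  ## a value close to 1 indicates a good fit
```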
```{r, warning=FALSE, message=FALSE}
OutputsCrit_Loop <- apply(Param_Sets_GR4J, 1, function(iParam) {
OutputsModel_Cal <- RunModel_GR4J(InputsModel = InputsModel, RunOptions = RunOptions_Cal,
Param = iParam)
OutputsCrit <- ErrorCrit_NSE(InputsCrit = InputsCrit_Cal, OutputsModel = OutputsModel_Cal)
return(OutputsCrit$CritValue)
})
```
Find below the `r nrow(Param_Sets_GR4J)` criteria corresponding to the different parameter sets.
The values are quite low (from `r OutputsCrit_Loop[which.min(OutputsCrit_Loop)]` to `r OutputsCrit_Loop[which.max(OutputsCrit_Loop)]`), which can be expected as this does not represent an actual calibration.
```{r, echo=FALSE}
OutputsCrit_Loop
```
The parameter set corresponding to the best criterion is the following:
```{r, warning=FALSE, message=FALSE, echo=FALSE}
Param_Best <- unlist(Param_Sets_GR4J[which.max(OutputsCrit_Loop), ])
Param_Best
## validation
OutputsModel_Val <- RunModel_GR4J(InputsModel = InputsModel, RunOptions = RunOptions_Val, Param = Param_Best)
OutputsCrit_Val <- ErrorCrit_NSE(InputsCrit = InputsCrit_Val, OutputsModel = OutputsModel_Val)
```
Now we can compute the Nash-Sutcliffe Efficiency criterion on the validation period. A quite good value (`r OutputsCrit_Val$CritValue`) is found.
# Calibration of the GR4J model with the built-in `Calibration_Michel()` function
As seen above, Michel's calibration algorithm is based on a local search procedure.
It is **not recommended** to use this type of algorithm alone when the **calibration period is less than 6 months**.
## GR4J parameter distributions quantiles used in the grid-screening step
By default, the predefined grid used by the `Calibration_Michel()` function contains parameter quantiles computed from recursive calibrations over 1200 catchments (from Australia, France and the USA).
```{r, warning=FALSE, message=FALSE}
CalibOptions <- CreateCalibOptions(FUN_MOD = RunModel_GR4J, FUN_CALIB = Calibration_Michel)
```
```{r, warning=FALSE, message=FALSE, include=FALSE}
## calibration
OutputsCalib <- Calibration(InputsModel = InputsModel, RunOptions = RunOptions_Cal,
InputsCrit = InputsCrit_Cal, CalibOptions = CalibOptions,
FUN_MOD = RunModel_GR4J, FUN_CRIT = ErrorCrit_NSE,
FUN_CALIB = Calibration_Michel)
OutputsModel_Cal <- RunModel(InputsModel = InputsModel, RunOptions = RunOptions_Cal,
Param = OutputsCalib$ParamFinalR, FUN = RunModel_GR4J)
OutputsCrit_Cal <- ErrorCrit_NSE(InputsCrit = InputsCrit_Cal, OutputsModel = OutputsModel_Cal)
## validation
OutputsModel_Val <- RunModel_GR4J(InputsModel = InputsModel, RunOptions = RunOptions_Val, Param = OutputsCalib$ParamFinalR)
OutputsCrit_Val <- ErrorCrit_NSE(InputsCrit = InputsCrit_Val, OutputsModel = OutputsModel_Val)
```
The parameter set corresponding to the best criterion is the following:
```{r, warning=FALSE, message=FALSE, echo=FALSE}
names(OutputsCalib$ParamFinalR) <- paste0("X", 1:4)
OutputsCalib$ParamFinalR
```
The Nash-Sutcliffe Efficiency criterion computed on the calibration period is better (`r OutputsCrit_Cal$CritValue`) than with the generalist parameter sets, but the one computed on the validation period is lower (`r OutputsCrit_Val$CritValue`).
This shows that the generalist parameter sets give a more robust model in this case.
# GR4J parameter sets used in the grid-screening step
It is also possible to provide the `CreateCalibOptions()` function with a matrix of parameter sets to be used in the grid-screening calibration procedure.
In this way, the GR4J generalist parameter sets can be used.
```{r, warning=FALSE, message=FALSE}
CalibOptions <- CreateCalibOptions(FUN_MOD = RunModel_GR4J, FUN_CALIB = Calibration_Michel,
StartParamList = Param_Sets_GR4J)
```
```{r, warning=FALSE, message=FALSE, include=FALSE}
## calibration
OutputsCalib <- Calibration(InputsModel = InputsModel, RunOptions = RunOptions_Cal,
InputsCrit = InputsCrit_Cal, CalibOptions = CalibOptions,
FUN_MOD = RunModel_GR4J, FUN_CRIT = ErrorCrit_NSE,
FUN_CALIB = Calibration_Michel)
OutputsModel_Cal <- RunModel(InputsModel = InputsModel, RunOptions = RunOptions_Cal,
Param = OutputsCalib$ParamFinalR, FUN = RunModel_GR4J)
OutputsCrit_Cal <- ErrorCrit_NSE(InputsCrit = InputsCrit_Cal, OutputsModel = OutputsModel_Cal)
## validation
OutputsModel_Val <- RunModel_GR4J(InputsModel = InputsModel, RunOptions = RunOptions_Val, Param = OutputsCalib$ParamFinalR)
OutputsCrit_Val <- ErrorCrit_NSE(InputsCrit = InputsCrit_Val, OutputsModel = OutputsModel_Val)
```
The parameter set corresponding to the best criterion is the following:
```{r, warning=FALSE, message=FALSE, echo=FALSE}
names(OutputsCalib$ParamFinalR) <- paste0("X", 1:4)
OutputsCalib$ParamFinalR
```
The results are the same here. The Nash-Sutcliffe Efficiency criterion computed on the calibration period is better (`r OutputsCrit_Cal$CritValue`), but the one computed on the validation period is just a little bit higher (`r OutputsCrit_Val$CritValue`).
Generally, the advantage of using the GR4J generalist parameter sets rather than the quantiles of the GR4J parameter distributions is that they make more sense than a simple exploration of combinations of quantiles. In addition, in the first step, the number of runs is smaller (27 instead of 81), which can save time when performing a very large number of calibrations.
# References