floodam-data issues — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues

**manage scope** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/14 (Grelot Frederic, 2022-12-30)

# Purpose of the functionality
When data are updated, the default output scale is the department. This is not meant to change. However, it is worth adding features that ease updating the data centred on another scale, that of the study sites (so_ii, but also the experimental sites), since any gain in size speeds up analyses.
The developed features may be used:
- when new versions are released
- when the extent of the study sites is revised
The list of functions to add is necessarily temporary because, as soon as work is done at a scale centred on an object, it may be worth updating the analyses.
# Functions to be added
- [ ] **function** *brief description*
- expected inputs
- expected outputs
# Data to be added
- [ ] **data** *brief description*
- source of the data
- storage of the data (inst/extdata)
# Documentation
- [ ] **function**
- [ ] **data**
# Test to be performed
- [ ] **function**

**manage Admin Express** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/23 (Grelot Frederic, 2023-04-17)

# Management of ADMIN EXPRESS database
ADMIN EXPRESS is a DB that describes the administrative organization of France. It is maintained by IGN. In 2022, its version is 3.1, and it is described in _ADMIN EXPRESS Version 3.1. Descriptif de contenu et de livraison_ (2021-11). There are 3 formats:
- ADMIN EXPRESS, which is updated roughly each month
- ADMIN EXPRESS COG, which is maintained each year
- ADMIN EXPRESS COG CARTO, which is a view of the latter, designed for cartographic purposes (fewer details, commune limits that fit the contour of the land...)
All formats can be downloaded from IGN or from data.cquest:
- https://geoservices.ign.fr/adminexpress gives the link to the archives for IGN, but no clear access to older versions. In fact, old versions are given in another DB, geofla
- https://data.cquest.org/ign/adminexpress/ gives links to the archives without date restrictions. Normally the archives have exactly the same names as for IGN.
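Either source can be fetched with plain base R; a minimal hedged sketch (the archive file name below is hypothetical and must match the names actually published in the listings):

```r
# Hypothetical archive name, for illustration only: real names must be read
# from the listings published by IGN / data.cquest.
url = "https://data.cquest.org/ign/adminexpress/ADMIN-EXPRESS_3-1.7z"
destination = file.path(tempdir(), basename(url))
# utils::download.file(url, destfile = destination, mode = "wb")  # needs network
```
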
# Functions/dataset to be added
- [x] download_admin_express
- **description** download the data from the official repository (and from data.cquest, as it is by far more reliable...)
- adaptation of download.admin_express
- use new journal → add_journal_new (to be renamed add_journal)
- **output** archive in original version
- [ ] adapt_admin_express
- **description** format the data from the official repository into the target data based on scheme_admin_express_3_1
- adaptation of adapt.admin_express
- use new journal (add_log_info)
- **output**: archive in formatted version
- take into account the addition proposed by @maxime.modjeska https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/23#note_79361
- [x] scheme_admin_express_3_1
- **description** global data to be used for formatting data
- verification of what has already been done
- [ ] analyse_admin_express
- **description** some analyses to be performed after downloading and formatting. At least:
- new communes added
- old communes removed
- possible implications for a given scope (so-ii)
- [ ] vignettes/admin_express
- **description** presentation of the data
For each function/dataset, please follow this procedure:
- **creation of function**
- [ ] code of function
- [ ] documentation of function
- [ ] example for function
- [ ] test for function
- **creation of dataset**
- [ ] script for creation of "raw" data (if needed)
- [ ] storage of raw data in ascii format, normally in inst/extdata
- [ ] script to maintain data in /data-raw
- [ ] documentation of data

1.0.0.0 · Grelot Frederic · 2023-01-31

**manage BD Topo** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/24 (Grelot Frederic, 2023-02-17)

# Management of BD Topo database
BD Topo is a DB that describes the land use in France in a vector format. The current version is 3.0. It has many layers, all of them interesting, but not all are treated in floodam.data.
There are 3 formats for the archives:
- SHP, normally this format can be handled by floodam.data, but it is no longer a priority
- GPKG, which is the one that should be treated by floodam.data
- SQL, not handled at the moment in floodam.data (and not a goal for version 1.0.0.0)
BD Topo can be downloaded from IGN or from data.cquest :
- https://geoservices.ign.fr/bdtopo gives the link to the archives for IGN. It is not clear whether all releases are available; apparently at least one per year.
- https://data.cquest.org/ign/bdtopo/ gives links to the archives without date restrictions. It is organized by release date (one release, one folder).
# Functions/dataset to be added
- [x] download_bd_topo
- **description** download the data from the official repository (and from data.cquest, as it is by far more reliable...)
- adaptation of download.bd_topo
- use new journal (add_log_info)
- **output** archive in original version
- [ ] adapt_bd_topo
- **description** format the data from the official repository into the target data based on scheme_bd_topo_3_0
- should be something that encapsulates extract_building, extract_dwelling
- define first all layers of interest and the views needed (complete the list; we will decide later whether they are included in version 1.0.0.0 or later)
- [ ] extract_building_agriculture
- [ ] extract_road
- use new journal (add_log_info)
- **output**: archive in formatted version
- [ ] scheme_bd_topo_3_2
- **description** global data to be used for formatting data
- verification of what has already been done in scheme_bd_topo_3 (which should be replaced)
- it could be interesting to complete the scheme with former versions as well!
- [ ] analyse_bd_topo
- **description** some analyses to be performed after downloading and formatting.
- should be something that encapsulates analyse_dwelling and further analyses.
- [ ] vignettes/bd_topo_fr
- **description** presentation of the data in French
- complete what has already been done
- [ ] vignettes/bd_topo_en
- **description** presentation of the data in English
- complete what has already been done, from a translation of bd_topo_fr once it is done
For each function/dataset, please follow this procedure:
- **creation of function**
- [ ] code of function
- [ ] documentation of function
- [ ] example for function
- [ ] test for function
- **creation of dataset**
- [ ] script for creation of "raw" data (if needed)
- [ ] storage of raw data in ascii format, normally in inst/extdata
- [ ] script to maintain data in /data-raw
- [ ] documentation of data

1.0.0.0 · Grelot Frederic · 2023-01-31

**manage RPG** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/25 (Grelot Frederic, 2023-04-21)

# Management of RPG database
RPG is a DB that describes the usage of agricultural plots in France. It is maintained by ASP, with the help of IGN for distribution. In 2022, its version is 2. There are different versions:
- RPG niveau 1, which is distributed by IGN
- RPG niveau 2, which is distributed by ASP and must be requested (with a justification). It has some additional information that allows grouping plots by farm.
RPG niveau 1 can be downloaded from IGN or from data.cquest :
- https://geoservices.ign.fr/rpg gives the link to the archives for IGN, but not to older vintages.
- https://data.cquest.org/ign/adminexpress/ gives links to the archives without date restrictions (but it does not have the latest one). Normally the archives have exactly the same names as for IGN.
# Functions/dataset to be added
- [ ] download_rpg
- **description** download the data from the official repository (and from data.cquest, as it is by far more reliable...)
- adaptation of download.rpg
- use new journal (add_log_info)
- **output** archive in original version
- [ ] adapt_rpg
- **description** format the data from the official repository into the target data based on scheme_rpg_1 or scheme_rpg_2, depending on the version
- adaptation of adapt_rpg
- use new journal (add_log_info)
- **output**: archive in formatted version
- **integrate the work of @maxime.modjeska** https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/25#note_77315
- [ ] scheme_rpg_1 and scheme_rpg_2
- **description** global data to be used for formatting data
- verification of what has already been done
- **integrate the work of @maxime.modjeska** https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/25#note_75728
- [ ] analyse_rpg
- **description** some analyses to be performed after downloading and formatting.
- This should be based on a simplification of what @maxime.modjeska has done in Cafrua. https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/25#note_75379
- [ ] vignettes/rpg_fr
- **description** presentation of the data in French
- should be completed.
- [ ] vignettes/rpg_en
- **description** presentation of the data in English
- should be a translation of rpg_fr, when it is done.
For each function/dataset, please follow this procedure:
- **creation of function**
- [ ] code of function
- [ ] documentation of function
- [ ] example for function
- [ ] test for function
- **creation of dataset**
- [ ] script for creation of "raw" data (if needed)
- [ ] storage of raw data in ascii format, normally in inst/extdata
- [ ] script to maintain data in /data-raw
- [ ] documentation of data1.0.0.0Grelot FredericGrelot Frederic2023-02-28https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/26manage geo_sirene2023-01-20T12:04:44+01:00Grelot Fredericmanage geo_sirene# Management of geo_sirene database
geo_sirene is a DB that describes the economic activities for France. It is maintained by data.cquest. It results from a post-treatment of sirene data which is maintained by INSEE, the post-treatment ...# Management of geo_sirene database
geo_sirene is a DB that describes the economic activities for France. It is maintained by data.cquest. It results from a post-treatment of sirene data which is maintained by INSEE, the post-treatment resulting in adding geo-localisation information from the addresses given in sirene DB.
There are two versions of the sirene DB, named 2017 and 2019 in https://data.cquest.org/geo_siren:
- monthly vintages since 2017-12 for version 2017
- monthly vintages since 2018-10 for version 2019
Original data can be downloaded at this URL: https://files.data.gouv.fr/insee-sirene (monthly vintages since 2018-10, which should therefore correspond to version 2019).
**NEW** Need to have a look at https://files.data.gouv.fr/geo-sirene/!
# Functions/dataset to be added
- [ ] download_geo_sirene
- **description** download the data from data.cquest, as that is where it is produced.
- adaptation of download.geo_sirene
- use new journal
- **output** archive in original version
- [ ] adapt_geo_sirene
- **description** format the data from the official repository into the target data based on scheme_sirene_2019
- adaptation of adapt.geo_sirene
- use new journal
- **output**: archive in formatted version
- [ ] scheme_sirene_2019 and scheme_sirene_na
- **description** global data to be used for formatting data
- verification of what has already been done, possibly adaptation of scheme_sirene_na
- see if there is a need for scheme_sirene_2017
- [ ] analyse_geo_sirene
- **description** some analyses to be performed after downloading and formatting.
- to be defined (totally)
- [ ] vignettes/geo_sirene
- **description** presentation of the data
For each function/dataset, please follow this procedure:
- **creation of function**
- [ ] code of function
- [ ] documentation of function
- [ ] example for function
- [ ] test for function
- **creation of dataset**
- [ ] script for creation of "raw" data (if needed)
- [ ] storage of raw data in ascii format, normally in inst/extdata
- [ ] script to maintain data in /data-raw
- [ ] documentation of data

1.0.0.0

**manage gaspar** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/27 (Grelot Frederic, 2023-01-20)

# Management of geo_sirene database
geo_sirene is a DB that describes the economic activities for France. It is maintained by data.cquest. It results from a post-treatment of the sirene data maintained by INSEE; the post-treatment adds geolocation information derived from the addresses given in the sirene DB.
There are two versions of the sirene DB, named 2017 and 2019 in https://data.cquest.org/geo_siren:
- monthly vintages since 2017-12 for version 2017
- monthly vintages since 2018-10 for version 2019
Original data can be downloaded at this URL: https://files.data.gouv.fr/insee-sirene (monthly vintages since 2018-10, which should therefore correspond to version 2019).
**NEW** Need to have a look at https://files.data.gouv.fr/geo-sirene/!
# Functions/dataset to be added
- [ ] download_geo_sirene
- **description** download the data from data.cquest, as that is where it is produced.
- adaptation of download.geo_sirene
- use new journal
- **output** archive in original version
- [ ] adapt_geo_sirene
- **description** format the data from the official repository into the target data based on scheme_sirene_2019
- adaptation of adapt.geo_sirene
- use new journal
- **output**: archive in formatted version
- [ ] scheme_sirene_2019 and scheme_sirene_na
- **description** global data to be used for formatting data
- verification of what has already been done, possibly adaptation of scheme_sirene_na
- see if there is a need for scheme_sirene_2017
- [ ] analyse_geo_sirene
- **description** some analyses to be performed after downloading and formatting.
- to be defined (totally)
- [ ] vignettes/geo_sirene
- **description** presentation of the data
For each function/dataset, please follow this procedure:
- **creation of function**
- [ ] code of function
- [ ] documentation of function
- [ ] example for function
- [ ] test for function
- **creation of dataset**
- [ ] script for creation of "raw" data (if needed)
- [ ] storage of raw data in ascii format, normally in inst/extdata
- [ ] script to maintain data in /data-raw
- [ ] documentation of data

1.0.0.0

**manage eaip** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/28 (Grelot Frederic, 2023-01-20)

# Management of eaip database
eaip is a DB that describes flood extents for France in a homogeneous way. It was produced by the French Ministry of the Environment in 2012. It includes floods coming from rivers and from the sea.
# Functions/dataset to be added
- [ ] adapt_eaip
- **description** format the data from the official repository into the target data based on scheme_eaip
- adaptation of adapt.eaip
- use new journal
- **output**: archive in formatted version
- [ ] scheme_eaip
- **description** global data to be used for formatting data
- scheme_eaip has not been defined yet
- [ ] analyse_eaip
- **description** some analyses to be performed after downloading and formatting.
- to be defined (totally)
- [ ] vignettes/eaip
- **description** presentation of the data
For each function/dataset, please follow this procedure:
- **creation of function**
- [ ] code of function
- [ ] documentation of function
- [ ] example for function
- [ ] test for function
- **creation of dataset**
- [ ] script for creation of "raw" data (if needed)
- [ ] storage of raw data in ascii format, normally in inst/extdata
- [ ] script to maintain data in /data-raw
- [ ] documentation of data

1.0.0.0

**manage INSEE DB** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/29 (Grelot Frederic, 2023-01-20)

TO BE PRECISED LATER

1.0.0.0 · David Nortes Martínez

**manage report** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/30 (Grelot Frederic, 2023-02-01)

# Purpose of the functionality
Generation of automatic and well-formatted reports from analyses of downloaded data.
# Functions/procedures to be added
- [x] generate_report
- expected inputs: a list of analyses and maps to be included, a template for the layout
- expected outputs: a report
- [x] make the function exported
- [x] deal correctly with access to template md
- [x] add the possibility to choose the frame
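A hedged sketch of what generate_report could wrap (template handling and parameter names are assumptions, not the actual implementation):

```r
# Hedged sketch: render a parameterized Rmd template shipped with the package.
# The argument names (analysis, maps, template, output) are hypothetical.
generate_report = function(analysis, maps, template, output) {
    rmarkdown::render(
        input = template,          # e.g. a template found via system.file()
        output_file = output,
        params = list(analysis = analysis, maps = maps)
    )
}
```
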
# Data to be added
- [x] bdtopo_dwelling_report_template.rmd
- template for analysis of bd-topo-dwelling

1.0.0.0 · Grelot Frederic

**good practices** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/31 (David Nortes Martínez, 2023-02-03)

```bash
:~$ cat ./routine
Eat → sleep → code → sport → repeat :p
```
# General R coding good practices and conventions for package development in the *Equipe inondation*
### Tools
- Package *devtools* is powerful. Learn it and use it
- Package *usethis* is powerful. Learn it and use it
### Dependencies
- Avoid as much as possible the use of functions not included in base R. This reduces the number of dependencies of the package and the chance that it stops working because a library is discontinued.
- Dependencies should be listed in the DESCRIPTION file.
- When writing a function, avoid using *@import* and/or *@importFrom*. The package should be listed in DESCRIPTION and the function should be called using the *package::function()* notation. This helps maintenance and code readability.
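For instance, a minimal sketch (sf is used here purely as an illustrative dependency, not necessarily one of floodam.data):

```r
# DESCRIPTION lists the dependency:
# Imports:
#     sf
# The code then calls the function with an explicit namespace,
# instead of roxygen @import / @importFrom:
read_layer = function(dsn, layer) {
    sf::st_read(dsn, layer = layer, quiet = TRUE)
}
```
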
### Style
- Avoid long if statements as much as you can. Perhaps you should write a function instead.
- Avoid using *'$'* to select variables in a data.frame or in a list. Please use the form `mydataframe["variable"]`.
- Avoid using `getwd()` to get the absolute path of a file. Instead use the `normalizePath()` function included in base R.
- Avoid using `setwd()` in your scripts. Your folder tree is not my folder tree.
- Avoid using `<<-`. There exist other ways to do the same thing that are clearer and less prone to errors.
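A minimal sketch of the selection and path rules above (the data.frame and file name are hypothetical):

```r
# Hypothetical data.frame, for illustration
station = data.frame(name = c("a", "b"), flow = c(1.2, 3.4))

# Preferred: bracket selection rather than station$flow
flow = station["flow"]    # returns a one-column data.frame

# Preferred: absolute paths without getwd()/setwd();
# mustWork = FALSE lets this run even if the file does not exist yet
path = normalizePath("observation.csv", mustWork = FALSE)
```
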
### Repeating structures and iterating
- If you are going to repeat the same structure twice or more, think about writing a proper function or storing the result in a constant that can be invoked anywhere in the code. This will make your code less prone to errors if modifications are needed.
- Similarly, before writing a function, search or ask other maintainers whether such a function already exists. This saves you time and helps maintain the consistency and coherence of the package. It will also help maintenance. For instance, a function to select elements of interest according to a certain criterion might already be implemented as an internal function of the library. If the way of selecting elements should change, it is better to have used such a function in your code, so that the modifications made to select elements are applied coherently throughout the functions.
### vignettes
- Preferably, create new vignettes using the function `usethis::use_vignette("name-of-the-new-vignette")` instead of copy-pasting existing ones. This function will create a Rmd file with the bare minimum information and in the right folder.
- >Package vignettes are tested by R CMD check by executing all R code chunks they contain (except those marked for non-evaluation, e.g., with option eval=FALSE for Sweave). The R working directory for all vignette tests in R CMD check is a copy of the vignette source directory. Make sure all files needed to run the R code in the vignette (data sets, …) are accessible by either placing them in the inst/doc hierarchy of the source package or by using calls to system.file(). All other files needed to re-make the vignettes (such as LaTeX style files, BibTeX input files and files for any figures not created by running the code in the vignette) must be in the vignette source directory. R CMD check will check that vignette production has succeeded by comparing modification times of output files in inst/doc with the source in vignettes. (https://cran.r-project.org/doc/manuals/R-exts.html#Writing-package-vignettes). **Read the next bullet point**
- From https://r-pkgs.org/vignettes.html it would seem that we can create an *img* folder inside the *vignettes* folder. In our tests, this approach generates neither warnings nor notes during the check process. Other variants, such as *figure*, provoked notes during the check process.
- Assuming datasets are stored inside the ./data directory of *ThePackage* as *.rda files, we should use `data(my_dataset, package = "ThePackage")` to load the package's data into the session where the vignette is built. Thus, to build the vignettes, it would be ideal to rely on very reduced versions of the datasets (so that the package does not become heavy) that can be used to illustrate and run the vignettes.
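A sketch of the loading pattern, with hypothetical dataset and file names:

```r
# Hypothetical names: a mini dataset shipped in data/ and a file in inst/extdata/.
# data(mini_dwelling, package = "floodam.data")   # hypothetical dataset name
# system.file() degrades gracefully: it returns "" when the file is absent
path = system.file("extdata", "mini.gpkg", package = "floodam.data")
if (!nzchar(path)) message("mini.gpkg is not shipped in this installation")
```
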
### Tests
- Tests should be done using reduced versions of the data (mini-data hereafter) that they are meant to manage
## Specific for floodam.data
- When introducing a new dataset to process a database (e.g. the *BD Topo*), the dataset should start with a prefix that lets us track its origin. For example, assume we want to introduce a new dataset to be used while processing the *BD Topo* database, and that this dataset will replace the codes used for the roof building material with the actual name of the material. In this case the dataset should be named **bd_topo_roof_material**

**manage tests** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/32 (David Nortes Martínez, 2023-02-17)

**This issue is intended to collect all the tests to be incorporated into the floodam.data library.**
Test authors are asked to provide the following elements in a comment below this issue (one comment per test)
- [x] test for read_with_scheme() with admin-express [details](https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/32#note_76183)
- [ ] test for read_with_scheme() with bd-topo [details](https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/32#note_76187)
- [ ] Discuss whether data for tests will be used in vignettes. If so, decide the area to focus on and the minimum number of elements
**Function to test**
- Does the test need a dataset?
- What are you testing, how and what is the expected outcome (success, error, warning, silent, etc)?
- If your test needs additional data, please include:
- dataset name
- brief and clear description of the dataset (It helps to decide whether datasets can be used for other tests)
- dataset placed in inst/extdata/
- **IMPORTANT** the size of inst/extdata/ cannot be bigger than 1 MB
- Status: TO-DO, ONGOING, DONE

**list-files-data-floodam** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/33 (Maxime Modjeska, 2023-03-30)

# Need
The need is to list the datasets present in data-floodam (Nextcloud) and to build an archive with the following information (example with lpis):
| nom_archive | data | niveau | scope | millesime | contact |
|---|---|---|---|---|---|
|lpis_l1_wallonie_2015.zip|lpis|l1|wallonie|2015|helpdesk.carto@spw.wallonie.be|
The information **nom_archive**, **niveau**, **scope** and **millesime** can be obtained from the file name.
A solution must be found to integrate the contact name (it may not have to be done automatically).
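A minimal R sketch of parsing such archive names, assuming they follow the `<data>_<niveau>_<scope>_<millesime>.<extension>` pattern shown in the table (the contact is left NA, since it cannot be recovered from the name):

```r
# Sketch under the naming assumption above; scope may itself contain "_",
# so everything between niveau and millesime is glued back together.
parse_archive = function(x) {
    base = sub("\\.[[:alnum:]]+$", "", basename(x))   # drop the extension
    part = strsplit(base, "_", fixed = TRUE)[[1]]
    n = length(part)
    data.frame(
        nom_archive = basename(x),
        data = part[1],
        niveau = part[2],
        scope = paste(part[3:(n - 1)], collapse = "_"),
        millesime = part[n],
        contact = NA_character_
    )
}
parse_archive("lpis_l1_wallonie_2015.zip")
```
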
A rather chaotic version is available on Nextcloud: data-floodam/manual/archive.R

Grelot Frederic

**download-plu** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/34 (Maxime Modjeska, 2023-05-01)

Automate the procedure for downloading the PLUs from the WFS of the Géoportail de l'Urbanisme.
Current procedure with QGIS:
1. List all the desired INSEE codes
2. Open QGIS
3. Add a layer / Add a WFS layer
4. Add a connection: https://wxs-gpu.mongeoportail.ign.fr/externe/39wtxmgtn23okfbbs1al2lz3/wfs
5. Connect
6. Select "Zonage du document d'urbanisme"
7. Build the SQL query
8. SELECT * FROM zone_urba WHERE zone_urba.partition IN ('DU_<code_INSEE>', 'DU_<code_INSEE>', etc.)
9. Add
10. Export the created layer in the desired format (gpkg or shp)
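As a hedged sketch, the procedure above could collapse into a single WFS GetFeature request in R; the `cql_filter` parameter and the example INSEE code are assumptions to be checked against the service's capabilities:

```r
# Endpoint and layer name come from the QGIS procedure; the cql_filter
# syntax and the example INSEE code ('DU_34172') are assumptions.
url = paste0(
    "https://wxs-gpu.mongeoportail.ign.fr/externe/39wtxmgtn23okfbbs1al2lz3/wfs",
    "?service=WFS&version=2.0.0&request=GetFeature",
    "&typename=zone_urba",
    "&cql_filter=", utils::URLencode("partition IN ('DU_34172')", reserved = TRUE)
)
# zone = sf::read_sf(url)                       # needs network access
# sf::st_write(zone, "zone_urba.gpkg")          # export, as in step 10
```
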
To do it with R, one can draw inspiration from this source: https://inbo.github.io/tutorials/tutorials/spatial_wfs_services/

Grelot Frederic

**nomenclature_rpg_2 needs update** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/35 (Maxime Modjeska, 2023-04-21)

The nomenclature_rpg_2 database needs to be updated from the latest IGN "table référentielle historisée des cultures" → https://geoservices.ign.fr/documentation/donnees/vecteur/rpg.
The error was spotted while trying to adapt rpg_l2_D011: 4 codes_cultures were identified as missing from floodam.data::nomenclature_rpg_2 (there are probably other missing codes):
- ACP
- CSE
- JOS
- MLS

Grelot Frederic

**floodam.data::extract_building archive reading only gpkg format** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/36 (Maxime Modjeska, 2023-05-03)

In the `floodam.data::extract_building` function, the `gpkg_from_7z` and `read_gpkg_layer` functions are called, which does not allow reading archives in .shp format, for example.
The `floodam.data::extract_building` function needs to be updated to use the `read_with_scheme` function, in order to read any type of archive.

Grelot Frederic

**so-ii-adaptation-ban** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/37 (Grelot Frederic, 2024-02-07)

# Processing related to the TO adaptation (so-ii)
The objective is to structure the chain of processing needed for the TO adaptation within so-ii. This processing chain aims to produce 3 datasets, stabilized in their format, from 2 datasets that may evolve over time. The processing chain also includes analyses performed repeatedly.
## Brief description of the processing chain
Roughly speaking, the idea is to start from the [BAN](https://adresse.data.gouv.fr/base-adresse-nationale) data to create a database of addresses to be surveyed via ODK forms, which record the presence or absence of adaptation devices, visible from public space, on the openings of a territory's buildings. Within so-ii, the territory is composed of a list of communes easily available via the so.ii library.
The ODK forms are used for field campaigns (either with cohorts of students or by team members) and stored on INRAE's ODK server. The responses are extracted continuously to follow the campaigns. These extractions have a format that depends on the versions of the ODK forms, even if these versions tend to stabilize. The target databases are produced from these extractions:
- **observation**, which records each observation made at a given address (one entry per observation, i.e. per survey made at a given address)
- **state**, which gives the state of device observations at a given time (one entry per address of the BAN used, hence with information when an address has not been observed, and handling of multiple observations for the same address)
- **device**, which records each observation made at a given location (one entry per survey made at a given location). A location corresponds to an _opening_ (door, window, gate, air vent, etc.) on which a device can be installed to prevent water from entering.
The format of these data is not meant to be altered over time, except for major updates. However, updates of the supporting BAN are needed at least every year, since the BAN evolves continuously.
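As a non-authoritative sketch, the granularity of the three tables could look like this (column names are illustrative assumptions, not the final format):

```r
# Empty prototypes mirroring the granularity described above:
# one row per survey, per BAN address, and per opening, respectively.
observation = data.frame(
    id_observation = character(), id_ban = character(), date = character()
)
state = data.frame(
    id_ban = character(), observed = logical(), n_observation = integer()
)
device = data.frame(
    id_observation = character(), opening = character(), device = character()
)
```
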
# Functions/procedures to be added
- [ ] **brief description**
- expected inputs
- expected outputs
- [ ] **creation of function**
- [ ] code of function
- [ ] documentation of function
- [ ] example for function
- [ ] test for function
# Data to be added
**Warning**: To add data, a script must also be added to allow maintenance of the data
- [ ] **brief description**
- source of the data
- storage of the data (inst/extdata)
- [ ] **creation of data**
- [ ] script for creation of "raw" data (if needed)
- [ ] storage of raw data in ascii format, normally in inst/extdata
- [ ] script to maintain data in /data-raw
- [ ] documentation of data

Grelot Frederic

**ban-adaptation Design & functions** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/38 (Grelot Frederic, 2024-01-24)

- [ ] List of functions
- [ ] Diagram of treatments

Grelot Frederic · 2024-02-05

**manage external dependency: p7zip-full** — https://gitlab.irstea.fr/flood-appraisal/floodam/floodam-data/-/issues/39 (David Nortes Martínez, 2024-03-27)

To be able to handle 7Z compressed files, floodam.data relies on external libraries and packages (outside R). These libraries are not always installed by default, and the message thrown by R is not clear at all.
We must inform the user on the website that the external library must be installed. For example, on Linux:
```
sudo apt-get install p7zip-full
```

David Nortes Martínez
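Inside floodam.data, a possible pre-flight check could also give a clearer message than the default error (function name and wording are suggestions, not existing code):

```r
# Suggested pre-flight check: fail early with an actionable message
# when the 7z binary is missing from the PATH.
check_7z = function() {
    if (!nzchar(Sys.which("7z"))) {
        stop(
            "The '7z' binary was not found. Please install p7zip-full, ",
            "e.g. 'sudo apt-get install p7zip-full' on Debian/Ubuntu."
        )
    }
    invisible(TRUE)
}
```
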