diff --git a/README.md b/README.md
index f665e02da320bad5b417d582b042a463475482d8..c48fc1a65a182e3cd676b2cc24a39622ed49c183 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,38 @@
 # Decloud
-Decloud enables the training of various deep nets to remove clouds in optical images.
+Decloud enables the training and inference of various neural networks to remove clouds in optical images.
 Representative illustrations:
 
 
-*Examples of de-clouded images using the single date SAR/Optical U-Net model.*
+*Examples of de-clouded Sentinel-2 images using the single-date SAR/Optical U-Net model.*
 ## Quickstart: Run a pre-trained model
-Some pre-trained models are available. You can find more info on how to use them [here](doc/pretrained_models.md)
+Some pre-trained models are available at this [URL](https://nextcloud.inrae.fr/s/DEy4PgR2igSQKKH).
+
+The easiest way to run a model is to use the time series processor, for example:
+
+<pre><code>python production/meraner_timeseries_processor.py \
+<span style="padding:0 0 0 90px;color:blue">--s2_dir</span> S2_PREPARE/T31TCJ \
+<span style="padding:0 0 0 90px;color:blue">--s1_dir</span> S1_PREPARE/T31TCJ \
+<span style="padding:0 0 0 90px;color:blue">--model</span> merunet_occitanie_pretrained/ \
+<span style="padding:0 0 0 90px;color:blue">--dem</span> DEM_PREPARE/T31TCJ.tif \
+<span style="padding:0 0 0 90px;color:blue">--out_dir</span> meraner_timeseries/ \
+<span style="padding:0 0 0 90px;color:grey">--write_intermediate --overwrite</span> \
+<span style="padding:0 0 0 90px;color:grey">--start</span> 2018-01-01 <span style="color:grey">--end</span> 2018-12-31 \
+<span style="padding:0 0 0 90px;color:grey">--ulx</span> 306000 <span style="color:grey">--uly</span> 4895000 <span style="color:grey">--lrx</span> 320000 <span style="color:grey">--lry</span> 4888000
+</code></pre>
+*(mandatory arguments in blue, optional arguments in grey)*
+
+You can find more info on the available models and how to use them [here](doc/pretrained_models.md).
+
+
 ## Advanced usage: Train your own models
-1. Prepare the data: convert Sentinel-1 and Sentinel-2 images in the right format (see the documentation).
+1. Prepare the data: convert Sentinel-1 and Sentinel-2 images into the right format (see the [documentation](doc/user_doc.md#Part-A:-data-preparation)).
 2. Create some *Acquisition Layouts* (.json files) describing how the images are acquired and the ROIs of the training and validation sites, then generate some TFRecord files containing the samples.
diff --git a/doc/images/crga_os1_pretrained_model.png b/doc/images/crga_os1_pretrained_model.png
new file mode 100644
index 0000000000000000000000000000000000000000..46e64b732622a1f0b340e9de617180cfb317672a
Binary files /dev/null and b/doc/images/crga_os1_pretrained_model.png differ
diff --git a/doc/images/crga_os2_pretrained_model.png b/doc/images/crga_os2_pretrained_model.png
new file mode 100644
index 0000000000000000000000000000000000000000..b534af65ef100e05898e04e5f70e6b361f5a8d36
Binary files /dev/null and b/doc/images/crga_os2_pretrained_model.png differ
diff --git a/doc/images/merunet_pretrained_model.png b/doc/images/merunet_pretrained_model.png
new file mode 100644
index 0000000000000000000000000000000000000000..e8e96d3ab7ee8dda5c102d4fb6473dc65a0b3fae
Binary files /dev/null and b/doc/images/merunet_pretrained_model.png differ
diff --git a/doc/images/monthly_synthesis_s2_pretrained_model.png b/doc/images/monthly_synthesis_s2_pretrained_model.png
new file mode 100644
index 0000000000000000000000000000000000000000..953a10e38f1ec8a1c3571999ed0babdadcb8b0e2
Binary files /dev/null and b/doc/images/monthly_synthesis_s2_pretrained_model.png differ
diff --git a/doc/images/monthly_synthesis_s2s1_pretrained_model.png b/doc/images/monthly_synthesis_s2s1_pretrained_model.png
new file mode 100644
index 0000000000000000000000000000000000000000..36953d9cbbc712a8ad2d3a63e94a4485ea901138
Binary files /dev/null and b/doc/images/monthly_synthesis_s2s1_pretrained_model.png differ
diff --git a/doc/pretrained_models.md b/doc/pretrained_models.md
index 0535032b54c40e70047b16c1c866479356e64986..e1db7d50fc71ee6097e64d90d785d7e37e6cde72 100644
--- a/doc/pretrained_models.md
+++ b/doc/pretrained_models.md
@@ -1,73 +1,81 @@
-## URL
+# Download
 Models are located here: https://nextcloud.inrae.fr/s/DEy4PgR2igSQKKH
-## Models available
+# Models
-### CRGA OS2
-TODO: illustration inputs / output
+All models use Sentinel-2 and Sentinel-1 images as inputs. The inputs and output of each model architecture are presented below.
-This section covers how to run these pre-trained models:
-- crga_os2_occitanie_pretrained
-- crga_os2david_occitanie_pretrained
-- crga_os2_burkina_pretrained
+## CRGA OS2
+
-### CRGA OS1
-TODO: illustration inputs / output
-This section covers how to run these pre-trained models:
-- crga_os1_occitanie_pretrained
-- crga_os1_burkina_pretrained
+## CRGA OS1
+
-### Meraner
-TODO: illustration inputs / output
-This section covers how to run these pre-trained models:
-- meraner_occitanie_pretrained
-- meraner_burkina_pretrained
+## Merunet: Meraner U-Net
+
-### Monthly synthesis S2/S1
-TODO: illustration inputs / output
-This section covers how to run these pre-trained models:
-- monthly_synthesis_s2s1_occitanie_pretrained
-- monthly_synthesis_s2s1_david_occitanie_pretrained
+## Monthly synthesis S2/S1
+
-### Monthly synthesis S2
-TODO: illustration inputs / output
+## Monthly synthesis S2
+
-This section covers how to run these pre-trained models:
-- monthly_synthesis_s2_david_occitanie_pretrained
+# Inputs
-## How to run a model
+- Sentinel-1 SAR images, pre-processed using the S1Tiling OTB Remote Module
+- Sentinel-2 optical images (Level-2), which can come from the THEIA Land Data Center or from ESA scihub
+- DEM: Digital Elevation Model, 20 m resolution
+# How to run a model
+
+## Time series processor
+This is the highest-level way to run the inference of a model.
+For example, you can run a CRGA model on a time series like this:
+
+<pre><code>python production/crga_timeseries_processor.py \
+<span style="padding:0 0 0 90px;color:blue">--s2_dir</span> S2_PREPARE/T31TCJ \
+<span style="padding:0 0 0 90px;color:blue">--s1_dir</span> S1_PREPARE/T31TCJ \
+<span style="padding:0 0 0 90px;color:blue">--model</span> crga_os2_occitanie_pretrained/ \
+<span style="padding:0 0 0 90px;color:blue">--dem</span> DEM_PREPARE/T31TCJ.tif \
+<span style="padding:0 0 0 90px;color:blue">--out_dir</span> reconstructed_timeseries/ \
+<span style="padding:0 0 0 90px;color:grey">--write_intermediate --overwrite</span> \
+<span style="padding:0 0 0 90px;color:grey">--start</span> 2018-01-01 <span style="color:grey">--end</span> 2018-12-31 \
+<span style="padding:0 0 0 90px;color:grey">--ulx</span> 306000 <span style="color:grey">--uly</span> 4895000 <span style="color:grey">--lrx</span> 320000 <span style="color:grey">--lry</span> 4888000
+</code></pre>
+*(mandatory arguments in blue, optional arguments in grey)*
+
+
+## Processor
 For instance, we use `crga_processor.py` to perform the inference of the *crga* models.
 This program not only performs the inference, but also takes care of preparing the right input images to feed the model, as well as the post-processing steps (like removing inferred no-data pixels).
-It is built exclusively using OTB application pipelines, and is fully streamable (not limitation or images size).
+It is built exclusively using OTB application pipelines, and is fully streamable (no limitation on image size).
 Below is an example of use:
-```yaml
+```bash
 python production/crga_processor.py \
 --il_s1before \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1b_31TEJ_vvvh_DES_139_20201001txxxxxx_from-10to3dB.tif \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1a_31TEJ_vvvh_DES_037_20200930txxxxxx_from-10to3dB.tif \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1b_31TEJ_vvvh_DES_110_20200929t060008_from-10to3dB.tif \
+  /data/s1b_31TEJ_vvvh_DES_139_20201001txxxxxx_from-10to3dB.tif \
+  /data/s1a_31TEJ_vvvh_DES_037_20200930txxxxxx_from-10to3dB.tif \
+  /data/s1b_31TEJ_vvvh_DES_110_20200929t060008_from-10to3dB.tif \
 --il_s1 \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1b_31TEJ_vvvh_DES_139_20201013txxxxxx_from-10to3dB.tif \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1b_31TEJ_vvvh_DES_110_20201011t060008_from-10to3dB.tif \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1a_31TEJ_vvvh_DES_037_20201012txxxxxx_from-10to3dB.tif \
+  /data/s1b_31TEJ_vvvh_DES_139_20201013txxxxxx_from-10to3dB.tif \
+  /data/s1b_31TEJ_vvvh_DES_110_20201011t060008_from-10to3dB.tif \
+  /data/s1a_31TEJ_vvvh_DES_037_20201012txxxxxx_from-10to3dB.tif \
 --il_s1after \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1b_31TEJ_vvvh_DES_139_20201025txxxxxx_from-10to3dB.tif \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1a_31TEJ_vvvh_DES_037_20201024txxxxxx_from-10to3dB.tif \
-  /data/decloud/bucket/S1_PREPARE/T31TEJ/s1b_31TEJ_vvvh_DES_110_20201023t060008_from-10to3dB.tif \
+  /data/s1b_31TEJ_vvvh_DES_139_20201025txxxxxx_from-10to3dB.tif \
+  /data/s1a_31TEJ_vvvh_DES_037_20201024txxxxxx_from-10to3dB.tif \
+  /data/s1b_31TEJ_vvvh_DES_110_20201023t060008_from-10to3dB.tif \
 --il_s2before \
-  /data/decloud/bucket/S2_PREPARE/T31TEJ/SENTINEL2B_20200929-104857-489_L2A_T31TEJ_C_V2-2 \
-  /data/decloud/bucket/S2_PREPARE/T31TEJ/SENTINEL2B_20200926-103901-393_L2A_T31TEJ_C_V2-2 \
+  /data/SENTINEL2B_20200929-104857-489_L2A_T31TEJ_C_V2-2 \
+  /data/SENTINEL2B_20200926-103901-393_L2A_T31TEJ_C_V2-2 \
 --il_s2after \
-  /data/decloud/bucket/S2_PREPARE/T31TEJ/SENTINEL2B_20201026-103901-924_L2A_T31TEJ_C_V2-2 \
-  /data/decloud/bucket/S2_PREPARE/T31TEJ/SENTINEL2A_20201024-104859-766_L2A_T31TEJ_C_V2-2 \
---in_s2 /data/decloud/bucket/S2_PREPARE/T31TEJ/SENTINEL2B_20201012-105848-497_L2A_T31TEJ_C_V2-2 \
---dem /data/decloud/bucket/DEM_PREPARE/T31TEJ.tif \
---savedmodel /data/decloud/todel/savedmodel_david2/09-04-21_224907_various_enhancements_and_todos_93228_crga_os2_david_bt48_bv48 \
---output /data/decloud/results/theia_data/SENTINEL2B_20201012-105848-497_L2A_T31TEJ_C_V2-2_FRE_10m_reconst_reference.tif
+  /data/SENTINEL2B_20201026-103901-924_L2A_T31TEJ_C_V2-2 \
+  /data/SENTINEL2A_20201024-104859-766_L2A_T31TEJ_C_V2-2 \
+--in_s2 /data/SENTINEL2B_20201012-105848-497_L2A_T31TEJ_C_V2-2 \
+--dem /data/DEM_T31TEJ.tif \
+--savedmodel /path/to/saved/model/ \
+--output SENTINEL2B_20201012-105848-497_L2A_T31TEJ_C_V2-2_FRE_10m_reconstructed.tif
 ```
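+
+As a quick sanity check of the reconstructed product, you can print its metadata and per-band statistics with GDAL (this step is not part of Decloud; it only assumes the standard `gdalinfo` tool is available in your environment):
+
+```bash
+# Print size, projection, no-data value and per-band statistics of the reconstructed image
+gdalinfo -stats SENTINEL2B_20201012-105848-497_L2A_T31TEJ_C_V2-2_FRE_10m_reconstructed.tif
+```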