Commit d896568a authored by Cresson Remi

DOC: update OTB applications section

Showing with 17 additions and 7 deletions
# Description of applications

This section introduces the new OTB applications provided in OTBTF.

## Patches extraction

The `PatchesExtraction` application performs the extraction of patches in images from a vector data containing points.
The OTB sampling framework can be used to generate the set of selected points.
Each point locates the **center** of the **central pixel** of the patch.
For patches with an even size *N*, the **central pixel** corresponds to the pixel index *N/2+1* (index starting at 0).
We denote as _input source_ either an input image, or a stack of input images that will be concatenated (they must have the same size).
The user can set the `OTB_TF_NSOURCES` environment variable to select the number of _input sources_ required.
For example, for sampling a Time Series (TS) together with a single Very High Resolution image (VHR), two sources are required:
- 1 input images list for the time series,
...@@ -321,9 +324,11 @@ None
Note that you can still set the `OTB_TF_NSOURCES` environment variable.
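As an illustration, a two-source call could look like the sketch below. This is an assumption-based sketch only: the file names are placeholders, and the `-source2.*` parameters simply mirror the `-source1.*` convention used in the single-source example later in this section (output parameter names may differ between OTBTF versions).
```
# Hypothetical two-source sampling: a 3-date time series (source1) and one VHR image (source2).
# File names are placeholders; -source2.* mirrors the -source1.* convention.
export OTB_TF_NSOURCES=2
otbcli_PatchesExtraction \
  -source1.il ts_date1.tif ts_date2.tif ts_date3.tif \
  -source1.patchsizex 8 -source1.patchsizey 8 \
  -source1.out patches_ts.tif \
  -source2.il vhr.tif \
  -source2.patchsizex 16 -source2.patchsizey 16 \
  -source2.out patches_vhr.tif \
  -vec points.shp -field class
```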
# Basic example

Below is a minimal example that presents the main steps to train a model and perform the inference.

## Sampling

Here we provide a simple example of doing a classification using a deep net that operates on a single VHR image.
Our data set consists of one Spot-7 image, *spot7.tif*, and a training vector data, *terrain_truth.shp*, that sparsely describes forest / non-forest polygons.
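The set of point locations can be produced with the OTB sampling framework, for instance with the `PolygonClassStatistics` and `SampleSelection` applications. Below is a minimal sketch; the file names are illustrative.
```
# Compute class statistics over the terrain truth polygons
otbcli_PolygonClassStatistics -in spot7.tif -vec terrain_truth.shp -field class -out stats.xml
# Select the sample positions; points.shp is then used by PatchesExtraction
otbcli_SampleSelection -in spot7.tif -vec terrain_truth.shp -instats stats.xml -field class -out points.shp
```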
...@@ -342,6 +347,9 @@ We want to produce one image of 16x16 patches, and one image for the correspondi
```
otbcli_PatchesExtraction -source1.il spot7.tif -source1.patchsizex 16 -source1.patchsizey 16 -vec points.shp -field class -source1.out samp_labels.tif -outpatches samp_patches.tif
```
## Training
Now we have two images for patches and labels.
We can split them to distinguish test/validation groups (with the **ExtractROI** application for instance).
But here, we will just perform some fine-tuning of our model.
...@@ -355,6 +363,8 @@ otbcli_TensorflowModelTrain -model.dir /path/to/oursavedmodel -training.targetno
```
Note that we could also have performed validation in this step. In this case, the `validation.source2.placeholder` would be different from the `training.source2.placeholder`, and would be **prediction**. This way, the program knows which target tensor to evaluate.
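As an illustration only, such a validation setup might look like the sketch below. The placeholder names (`x`, `y`), the target node name (`optimizer`), and all file names are assumptions; only the `training.*`/`validation.*` parameter groups and `model.dir` come from this section.
```
# Assumption-based sketch of training with validation enabled.
# Placeholder names (x, y), the target node (optimizer) and file names are hypothetical.
otbcli_TensorflowModelTrain \
  -model.dir /path/to/oursavedmodel \
  -training.targetnodes optimizer \
  -training.source1.il samp_patches.tif -training.source1.placeholder x \
  -training.source2.il samp_labels.tif -training.source2.placeholder y \
  -validation.source1.il valid_patches.tif \
  -validation.source2.il valid_labels.tif \
  -validation.source2.placeholder prediction
```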
## Inference
After this step, we use the trained model to produce the entire map of forest over the whole Spot-7 image.
For this, we use the **TensorflowModelServe** application to produce the **prediction** tensor output for the entire image.
```
...
```
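As an illustration only, a single-source call to **TensorflowModelServe** might look like the sketch below; the receptive field sizes, the placeholder name, the output tensor name and the output file are assumptions.
```
# Assumption-based sketch of full-image inference (names are hypothetical).
otbcli_TensorflowModelServe \
  -source1.il spot7.tif \
  -source1.rfieldx 16 -source1.rfieldy 16 \
  -source1.placeholder x \
  -model.dir /path/to/oursavedmodel \
  -output.names prediction \
  -out map_forest.tif
```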