This section introduces the new OTB applications provided in OTBTF.

## Patches extraction

The `PatchesExtraction` application performs the extraction of patches in images from a vector data containing points. Each point locates the **center** of the **central pixel** of the patch. For patches with an even size of *N*, the **central pixel** corresponds to the pixel index *N/2* (index starting at 0). The OTB sampling framework can be used to generate the set of selected points. After that, you can use the **PatchesExtraction** application to perform the sampling of your images.
We denote an _input source_ either a single input image, or a stack of input images that will be concatenated (they must have the same size).
The user can set the `OTB_TF_NSOURCES` environment variable to select the number of _input sources_ needed.
For example, for sampling a Time Series (TS) together with a single Very High Resolution (VHR) image, two sources are required (see the sketch after this list):
- 1 input images list for the time series,
- 1 input image for the VHR.
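As a hedged sketch (the image and vector file names below are illustrative placeholders, the label field is assumed to be named `class`, and the patch sizes are arbitrary), such a two-source extraction could look like:

```
# Two input sources: the TS images list and the VHR image
export OTB_TF_NSOURCES=2
otbcli_PatchesExtraction \
-source1.il ts_01.tif ts_02.tif ts_03.tif \
-source1.patchsizex 16 \
-source1.patchsizey 16 \
-source1.out ts_patches.tif \
-source2.il vhr.tif \
-source2.patchsizex 16 \
-source2.patchsizey 16 \
-source2.out vhr_patches.tif \
-vec points.shp \
-field class \
-outlabels labels.tif
```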
...
Note that you can still set the `OTB_TF_NSOURCES` environment variable.
# How to use

Below is a minimal example that presents the main steps to train a model and perform the inference. Here we will provide a simple example of doing a classification using a deep net that runs on one single VHR image.

## Sampling

Our data set consists of one Spot-7 image, *spot7.tif*, and a training vector data, *terrain_truth.shp*, which sparsely describes forest / non-forest polygons.
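The set of points can be generated with the standard OTB sampling framework. A minimal sketch, assuming the class field of *terrain_truth.shp* is named `class` (the output file names are placeholders):

```
# Compute statistics of the vector data: number of available samples per class
otbcli_PolygonClassStatistics \
-vec terrain_truth.shp \
-in spot7.tif \
-field class \
-out vec_stats.xml

# Select the sample positions inside the polygons
otbcli_SampleSelection \
-in spot7.tif \
-vec terrain_truth.shp \
-instats vec_stats.xml \
-field class \
-out points.shp
```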
...
We want to produce one image of 16x16 patches, and one image for the corresponding labels.
...
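The elided training step relies on the **TensorflowModelTrain** application. A hedged sketch, assuming a SavedModel whose input placeholders are named `x1` (patches) and `y1` (labels) and whose training target node is named `optimizer` (these names depend on how the model was exported, and the exact parameter set may vary across OTBTF versions):

```
otbcli_TensorflowModelTrain \
-model.dir /path/to/savedmodel \
-model.saveto /path/to/savedmodel/variables/variables \
-training.source1.il samp_patches.tif \
-training.source1.patchsizex 16 \
-training.source1.patchsizey 16 \
-training.source1.placeholder "x1" \
-training.source2.il samp_labels.tif \
-training.source2.patchsizex 1 \
-training.source2.patchsizey 1 \
-training.source2.placeholder "y1" \
-training.targetnodes "optimizer"
```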
Note that we could also have performed validation in this step. In this case, the `validation.source2.placeholder` would be different from the `training.source2.placeholder`, and would be **prediction**. This way, the program knows which target tensor to evaluate.
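Following that note, a validation could be wired in along these lines (a sketch only; the validation file names are placeholders, and the parameter names should be checked against your OTBTF version):

```
-validation.mode class \
-validation.source1.il valid_patches.tif \
-validation.source1.placeholder "x1" \
-validation.source2.il valid_labels.tif \
-validation.source2.placeholder "prediction"
```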
## Inference
After this step, we use the trained model to produce the forest map over the whole Spot-7 image.
For this, we use the **TensorflowModelServe** application to produce the **prediction** tensor output for the entire image.
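A minimal sketch of this step, assuming the model reads the image through the `x1` placeholder with a 16x16 receptive field matching the training patch size (the model path and output file name are placeholders):

```
otbcli_TensorflowModelServe \
-source1.il spot7.tif \
-source1.rfieldx 16 \
-source1.rfieldy 16 \
-source1.placeholder "x1" \
-model.dir /path/to/savedmodel \
-output.names "prediction" \
-out map.tif uint8
```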