This remote module has been tested successfully on Ubuntu 16.04 and CentOS 7.
First, **build the latest *develop* branch of OTB from sources**. You can check the [OTB documentation](https://www.orfeo-toolbox.org/SoftwareGuide/SoftwareGuidech2.html), which details all the steps; in fact it is quite easy thanks to the SuperBuild.
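A minimal sketch of the SuperBuild route (the repository URL, paths and job count are examples; the documentation linked above is the authoritative reference):

```
# Clone the develop branch of OTB and build it through the SuperBuild,
# which downloads and compiles all third-party dependencies automatically.
git clone -b develop https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb.git
mkdir otb-build && cd otb-build
cmake ../otb/SuperBuild        # configure the SuperBuild
make -j8                       # adjust the number of jobs to your machine
```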
## Build TensorFlow
Then you have to **build TensorFlow from source**, unless you only want to use the sampling applications of OTBTensorflow (in this case, skip this section).
Follow [the instructions](https://www.tensorflow.org/install/install_sources) to build Tensorflow.
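A minimal sketch of that build, assuming Bazel is installed as described in the instructions above; the target shown builds the TensorFlow C++ shared library, which is what a C++ remote module typically links against:

```
# Configure and build the TensorFlow C++ library with Bazel.
cd tensorflow
./configure                    # answer the interactive configuration questions
bazel build --config=opt //tensorflow:libtensorflow_cc.so
```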
## Build this remote module
...
...
However, you can use any deep net available on the web, or use an existing GUI application to create your own TensorFlow models.
The important thing here is to know the following parameters for your **placeholders** (the inputs of your model) and **output tensors** (the outputs of your model); see the sketch after the list for a way to retrieve the tensor names.
- For each **input placeholder**:
  - Name
  - Receptive field
- For each **output tensor**:
  - Name
  - Expression field
  - Scale factor
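The receptive field, expression field and scale factor come from your knowledge of the network architecture. The tensor names, however, can be inspected: if your model is exported in TensorFlow's SavedModel format, the `saved_model_cli` tool shipped with TensorFlow lists its inputs and outputs (the model path below is an example):

```
# List the signatures, input placeholders and output tensors of a SavedModel.
saved_model_cli show --dir /path/to/your/saved_model --all
```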

...
...
Set your input data for training and for validation. The validation against the test data is performed after every epoch.

```
This is the (TensorflowModelTrain) application, version 6.5.0

Train a multisource deep learning net using Tensorflow. Change the OTB_TF_NSOURCES environment variable to set the number of sources.

Parameters:
-model.restorefrom            <string>        Restore model from path (optional, off by default)
-model.saveto                 <string>        Save model to path (optional, off by default)
-training                     <group>         Training parameters
-training.batchsize           <int32>         Batch size (mandatory, default value is 100)
-training.epochs              <int32>         Number of epochs (mandatory, default value is 10)
-training.userplaceholders    <string list>   Additional single-valued placeholders for training. Supported types: int, float, bool. (optional, off by default)
-training.targetnodes         <string list>   Names of the target nodes (mandatory)
-training.outputtensors       <string list>   Names of the output tensors to display (optional, off by default)
-training.usestreaming        <boolean>       Use the streaming through patches (slower but can process big dataset) (optional, off by default, default value is false)
-training.source1             <group>         Parameters for source #1 (training)
-training.source1.il          <string list>   Input image (or list to stack) for source #1 (training) (mandatory)
-training.source1.patchsizex  <int32>         Patch size (x) for source #1 (mandatory)
-training.source1.patchsizey  <int32>         Patch size (y) for source #1 (mandatory)
-training.source1.placeholder <string>        Name of the input placeholder for source #1 (training) (mandatory)
-training.source2             <group>         Parameters for source #2 (training)
-training.source2.il          <string list>   Input image (or list to stack) for source #2 (training) (mandatory)
-training.source2.patchsizex  <int32>         Patch size (x) for source #2 (mandatory)
-training.source2.patchsizey  <int32>         Patch size (y) for source #2 (mandatory)
-training.source2.placeholder <string>        Name of the input placeholder for source #2 (training) (mandatory)
-validation                   <group>         Validation parameters
-validation.mode              <string>        Metrics to compute [none/class/rmse] (mandatory, default value is none)
-validation.userplaceholders  <string list>   Additional single-valued placeholders for validation. Supported types: int, float, bool. (optional, off by default)
-validation.usestreaming      <boolean>       Use the streaming through patches (slower but can process big dataset) (optional, off by default, default value is false)
-validation.source1           <group>         Parameters for source #1 (validation)
-validation.source1.il        <string list>   Input image (or list to stack) for source #1 (validation) (mandatory)
-validation.source1.name      <string>        Name of the input placeholder or output tensor for source #1 (validation) (mandatory)
-validation.source2           <group>         Parameters for source #2 (validation)
-validation.source2.il        <string list>   Input image (or list to stack) for source #2 (validation) (mandatory)
-validation.source2.name      <string>        Name of the input placeholder or output tensor for source #2 (validation) (mandatory)
-inxml                        <string>        Load otb application from xml file (optional, off by default)
-progress                     <boolean>       Report progress
-help                         <string list>   Display long help (empty list), or help for given parameters keys

Use -help param1 [... paramN] to see detailed documentation of those parameters.
```
As you can note, there are `$OTB_TF_NSOURCES` + 1 sources, for a practical purpose: we need at least one source for the input data, and one source for the ground truth.
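For instance, you can change the number of sources before launching the application (a small sketch; `otbcli_` is the standard OTB command-line launcher):

```
# Expose two sources for input data; the application then adds a third
# source group for the reference data.
export OTB_TF_NSOURCES=2
otbcli_TensorflowModelTrain -help
```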
## Serve the model
...
...
As said before, the user is responsible for giving the *receptive field* and *name* of the input placeholders, as well as the *expression field*, *scale factor* and *name* of the output tensors.
```
This is the (TensorflowModelServe) application, version 6.5.0

Multisource deep learning classifier using Tensorflow. Change the OTB_TF_NSOURCES environment variable to set the number of sources.

Parameters:
-source1              <group>         Parameters for source #1
-source1.il           <string list>   Input image (or list to stack) for source #1 (mandatory)
-source1.rfieldx      <int32>         Input receptive field (width) for source #1 (mandatory)
-source1.rfieldy      <int32>         Input receptive field (height) for source #1 (mandatory)
-source1.placeholder  <string>        Name of the input placeholder for source #1 (mandatory)
```
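A hypothetical single-source call (file names and tensor names are examples, and `model.dir` is assumed to be the parameter pointing to the model directory, elided from the excerpt above):

```
# Apply the model to a whole image; the receptive field and placeholder
# name must match the model architecture.
otbcli_TensorflowModelServe \
  -source1.il input_image.tif \
  -source1.rfieldx 16 -source1.rfieldy 16 \
  -source1.placeholder x1 \
  -model.dir /path/to/saved_model \
  -output.names prediction \
  -out map.tif uint8
```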
This is possible thanks to two new applications that use the existing training/classification framework of OTB.
**TrainClassifierFromDeepFeatures** is a composite application that wires the output of the **TensorflowModelServe** application into the existing official **TrainImagesClassifier** application.
```
This is the TrainClassifierFromDeepFeatures (TrainClassifierFromDeepFeatures) application, version 6.5.0

Train a classifier from deep net based features of an image and training vector data.

Parameters:
-source1              <group>         Parameters for source 1
-source1.il           <string list>   Input image (or list to stack) for source #1 (mandatory)
-source1.rfieldx      <int32>         Input receptive field (width) for source #1 (mandatory)
-source1.rfieldy      <int32>         Input receptive field (height) for source #1 (mandatory)
-source1.placeholder  <string>        Name of the input placeholder for source #1 (mandatory)
-output.spcscale      <float>         The output spacing scale (mandatory, default value is 1)
-output.names         <string list>   Names of the output tensors (mandatory)
-output.efieldx       <int32>         The output expression field (width) (mandatory, default value is 1)
-output.efieldy       <int32>         The output expression field (height) (mandatory, default value is 1)
-optim                <group>         This group of parameters allows optimization of processing time
-optim.disabletiling  <boolean>       Disable tiling (optional, off by default, default value is false)
-optim.tilesize       <int32>         Tile width used to stream the filter output (mandatory, default value is 16)
-model                <string>        Model file (mandatory)
-imstat               <string>        Statistics file (optional, off by default)
-nodatalabel          <int32>         Label mask value (optional, off by default, default value is 0)
-out                  <string> [pixel] Output image [pixel=uint8/uint16/int16/uint32/int32/float/double/cint16/cint32/cfloat/cdouble] (default value is uint8) (mandatory)
-confmap              <string> [pixel] Confidence map image [pixel=uint8/uint16/int16/uint32/int32/float/double/cint16/cint32/cfloat/cdouble] (default value is double) (optional, off by default)
-ram                  <int32>         Ram (optional, off by default, default value is 128)
-inxml                <string>        Load otb application from xml file (optional, off by default)
```
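For illustration, here is what a call based on the parameter keys above could look like. The application name (the companion application producing the classification map) and the file names are assumptions; your build will also require the deep net model directory parameter, which is elided from the excerpt above, so check the actual help of your installation:

```
# Hypothetical classification call using features from the deep net.
otbcli_ImageClassifierFromDeepFeatures \
  -source1.il input_image.tif \
  -source1.rfieldx 16 -source1.rfieldy 16 \
  -source1.placeholder x1 \
  -model classifier_model.txt \
  -out classification_map.tif uint8 \
  -confmap confidence_map.tif
```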
That's it. Now we have two images for patches and labels. If we want, we can split them to distinguish test/validation groups (with the **ExtractROI** application for instance). But here, we will just perform some fine tuning of our model, located in the `outmodel` directory. Our model is quite basic: it has two input placeholders, **x1** and **y1**, respectively for the input patches (of size 16x16) and the input reference labels (of size 1x1). The tensor that predicts the labels is named **prediction**, and the operator that performs the stochastic gradient descent is named **optimizer**. We perform the fine tuning and export the new model variables into the `newvars` folder.
Let's use our **TensorflowModelTrain** application to perform the training of this existing model, as sketched below.
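A sketch of such a call, assuming the patches and labels images are named `patches.tif` and `labels.tif`, and that the model directory is given through a `model.dir` parameter (elided from the help excerpt above):

```
# Fine-tune the model in `outmodel` and save the new variables to `newvars`.
otbcli_TensorflowModelTrain \
  -model.dir outmodel \
  -model.saveto newvars \
  -training.source1.il patches.tif \
  -training.source1.patchsizex 16 -training.source1.patchsizey 16 \
  -training.source1.placeholder x1 \
  -training.source2.il labels.tif \
  -training.source2.patchsizex 1 -training.source2.patchsizey 1 \
  -training.source2.placeholder y1 \
  -training.targetnodes optimizer
```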
Note that we could also have performed validation in this step. In this case, the `validation.source2.placeholder` would be different from the `training.source2.placeholder`, and would be **prediction**. This way, the program knows which target tensor to evaluate.
After this step, we decide to produce an entire map of forest over the whole Spot-7 image. First, we duplicate the model, and we replace its variables with the new ones computed in the previous step.
Then, we use the **TensorflowModelServe** application to produce the **prediction** tensor output for the entire image, as sketched below.
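A sketch of this final call, assuming the duplicated model lives in `finetuned_model` and the Spot-7 image is `spot7.tif` (both names are illustrative):

```
# Produce the forest map from the `prediction` tensor over the whole image.
otbcli_TensorflowModelServe \
  -source1.il spot7.tif \
  -source1.rfieldx 16 -source1.rfieldy 16 \
  -source1.placeholder x1 \
  -model.dir finetuned_model \
  -output.names prediction \
  -out forest_map.tif uint8
```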