Commit 93390d3b authored by Cresson Remi

DOC: update markdown due to refactoring

parent 0cc2c500
@@ -71,11 +71,11 @@ However, you can use any deep net available on the web, or use an existing gui a
The important thing here is to know the following parameters for your **placeholders** (the inputs of your model) and **output tensors** (the outputs of your model).
- For each **input placeholder**:
-- Name
-- Perceptive field
-- Receptive field
- For each **output tensor**:
--Name
--Expression field
--Scale factor
-- Name
-- Expression field
-- Scale factor
![Schema](doc/schema.png)
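To make the scale factor concrete, here is a minimal shell sketch with made-up numbers (not taken from any particular model): the served output image is generated on a grid whose spacing is the input spacing multiplied by the scale factor, so the output grid shrinks accordingly.

```shell
# Illustrative numbers only: an output tensor with scale factor 4 is produced
# on a grid 4x coarser than the input image, so a 1024-pixel-wide input
# yields roughly a 256-pixel-wide output.
INPUT_WIDTH=1024
SCALE_FACTOR=4
OUTPUT_WIDTH=$((INPUT_WIDTH / SCALE_FACTOR))
echo "output grid width: ${OUTPUT_WIDTH} pixels"
```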
@@ -89,8 +89,6 @@ Set your input data for training and for validation. The validation against test
![Schema](doc/model_training.png)
```
This is the (TensorflowModelTrain) application, version 6.5.0
Train a multisource deep learning net using Tensorflow. Change the OTB_TF_NSOURCES environment variable to set the number of sources.
Parameters:
-model <group> Model parameters
@@ -101,27 +99,29 @@ -model.dir <string> Tensorflow model_save d
-training.batchsize <int32> Batch size (mandatory, default value is 100)
-training.epochs <int32> Number of epochs (mandatory, default value is 10)
-training.userplaceholders <string list> Additional single-valued placeholders for training. Supported types: int, float, bool. (optional, off by default)
-training.targetnodesnames <string list> Names of the target nodes (mandatory)
-training.outputtensorsnames <string list> Names of the output tensors to display (optional, off by default)
-training.targetnodes <string list> Names of the target nodes (mandatory)
-training.outputtensors <string list> Names of the output tensors to display (optional, off by default)
-training.usestreaming <boolean> Use the streaming through patches (slower but can process big dataset) (optional, off by default, default value is false)
-training.source1 <group> Parameters for source #1 (training)
-training.source1.il <string list> Input image (or list to stack) for source #1 (training) (mandatory)
-training.source1.fovx <int32> Field of view width for source #1 (mandatory)
-training.source1.fovy <int32> Field of view height for source #1 (mandatory)
-training.source1.patchsizex <int32> Patch size (x) for source #1 (mandatory)
-training.source1.patchsizey <int32> Patch size (y) for source #1 (mandatory)
-training.source1.placeholder <string> Name of the input placeholder for source #1 (training) (mandatory)
-training.source2 <group> Parameters for source #2 (training)
-training.source2.il <string list> Input image (or list to stack) for source #2 (training) (mandatory)
-training.source2.fovx <int32> Field of view width for source #2 (mandatory)
-training.source2.fovy <int32> Field of view height for source #2 (mandatory)
-training.source2.patchsizex <int32> Patch size (x) for source #2 (mandatory)
-training.source2.patchsizey <int32> Patch size (y) for source #2 (mandatory)
-training.source2.placeholder <string> Name of the input placeholder for source #2 (training) (mandatory)
-validation <group> Validation parameters
-validation.mode <string> Metrics to compute [none/class/rmse] (mandatory, default value is none)
-validation.userplaceholders <string list> Additional single-valued placeholders for validation. Supported types: int, float, bool. (optional, off by default)
-validation.usestreaming <boolean> Use the streaming through patches (slower but can process big dataset) (optional, off by default, default value is false)
-validation.source1 <group> Parameters for source #1 (validation)
-validation.source1.il <string list> Input image (or list to stack) for source #1 (validation) (mandatory)
-validation.source1.placeholder <string> Name of the input placeholder for source #1 (validation) (mandatory)
-validation.source1.name <string> Name of the input placeholder or output tensor for source #1 (validation) (mandatory)
-validation.source2 <group> Parameters for source #2 (validation)
-validation.source2.il <string list> Input image (or list to stack) for source #2 (validation) (mandatory)
-validation.source2.placeholder <string> Name of the input placeholder for source #2 (validation) (mandatory)
-validation.source2.name <string> Name of the input placeholder or output tensor for source #2 (validation) (mandatory)
-inxml <string> Load otb application from xml file (optional, off by default)
-progress <boolean> Report progress
-help <string list> Display long help (empty list), or help for given parameters keys
@@ -129,7 +129,7 @@ -training.source2.placeholder <string> Name of the input place
Use -help param1 [... paramN] to see detailed documentation of those parameters.
Examples:
otbcli_TensorflowModelTrain -source1.il spot6pms.tif -source1.placeholder x1 -source1.fovx 16 -source1.fovy 16 -source2.il labels.tif -source2.placeholder y1 -source2.fovx 1 -source2.fovy 1 -model.dir /tmp/my_saved_model/ -training.userplaceholders is_training=true dropout=0.2 -training.targetnodenames optimizer -model.saveto /tmp/my_saved_model_vars1
otbcli_TensorflowModelTrain -source1.il spot6pms.tif -source1.placeholder x1 -source1.patchsizex 16 -source1.patchsizey 16 -source2.il labels.tif -source2.placeholder y1 -source2.patchsizex 1 -source2.patchsizey 1 -model.dir /tmp/my_saved_model/ -training.userplaceholders is_training=true dropout=0.2 -training.targetnodes optimizer -model.saveto /tmp/my_saved_model_vars1
```
As you can note, there are `$OTB_TF_NSOURCES` + 1 sources, for a practical reason: we need at least one source for the input data, and one source for the ground truth.
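This source counting can be sketched in the shell (the `OTB_TF_NSOURCES` variable is the real one read by the applications; the arithmetic just restates the rule above):

```shell
# With OTB_TF_NSOURCES=1 source of input data, the training application
# exposes OTB_TF_NSOURCES + 1 = 2 source groups: -training.source1 for the
# input patches and -training.source2 for the ground truth.
export OTB_TF_NSOURCES=1
N_TRAINING_SOURCES=$((OTB_TF_NSOURCES + 1))
echo "training source groups: ${N_TRAINING_SOURCES}"
```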
## Serve the model
@@ -141,14 +141,12 @@ As said before, the user is responsible for providing the *receptive field
```
This is the (TensorflowModelServe) application, version 6.5.0
Multisource deep learning classifier using Tensorflow. Change the OTB_TF_NSOURCES environment variable to set the number of sources.
Parameters:
-source1 <group> Parameters for source #1
-source1.il <string list> Input image (or list to stack) for source #1 (mandatory)
-source1.fovx <int32> Field of view width for source #1 (mandatory)
-source1.fovy <int32> Field of view height for source #1 (mandatory)
-source1.rfieldx <int32> Input receptive field (width) for source #1 (mandatory)
-source1.rfieldy <int32> Input receptive field (height) for source #1 (mandatory)
-source1.placeholder <string> Name of the input placeholder for source #1 (mandatory)
-model <group> model parameters
-model.dir <string> Tensorflow model_save directory (mandatory)
@@ -157,12 +155,12 @@ -model.dir <string> Tensorflow model_save directo
-output <group> Output tensors parameters
-output.spcscale <float> The output spacing scale (mandatory, default value is 1)
-output.names <string list> Names of the output tensors (mandatory)
-output.foex <int32> The output field of expression (x) (mandatory, default value is 1)
-output.foey <int32> The output field of expression (y) (mandatory, default value is 1)
-finetuning <group> Fine tuning performance or consistency parameters
-finetuning.disabletiling <boolean> Disable tiling (optional, off by default, default value is false)
-finetuning.tilesize <int32> Tile width used to stream the filter output (mandatory, default value is 16)
-out <string> [pixel] output image [pixel=uint8/uint16/int16/uint32/int32/float/double] (default value is float) (mandatory)
-output.efieldx <int32> The output expression field (width) (mandatory, default value is 1)
-output.efieldy <int32> The output expression field (height) (mandatory, default value is 1)
-optim <group> This group of parameters allows optimization of processing time
-optim.disabletiling <boolean> Disable tiling (optional, off by default, default value is false)
-optim.tilesize <int32> Tile width used to stream the filter output (mandatory, default value is 16)
-out <string> [pixel] output image [pixel=uint8/uint16/int16/uint32/int32/float/double/cint16/cint32/cfloat/cdouble] (default value is float) (mandatory)
-inxml <string> Load otb application from xml file (optional, off by default)
-progress <boolean> Report progress
-help <string list> Display long help (empty list), or help for given parameters keys
@@ -170,7 +168,7 @@ -out <string> [pixel] output image [pixel=uint8/ui
Use -help param1 [... paramN] to see detailed documentation of those parameters.
Examples:
otbcli_TensorflowModelServe -source1.il spot6pms.tif -source1.placeholder x1 -source1.fovx 16 -source1.fovy 16 -model.dir /tmp/my_saved_model/ -model.userplaceholders is_training=false dropout=0.0 -output.names out_predict1 out_proba1 -out "classif128tgt.tif?&streaming:type=tiled&streaming:sizemode=height&streaming:sizevalue=256" -finetuning.disabletiling
otbcli_TensorflowModelServe -source1.il spot6pms.tif -source1.placeholder x1 -source1.rfieldx 16 -source1.rfieldy 16 -model.dir /tmp/my_saved_model/ -model.userplaceholders is_training=false dropout=0.0 -output.names out_predict1 out_proba1 -out "classif128tgt.tif?&streaming:type=tiled&streaming:sizemode=height&streaming:sizevalue=256"
```
## Composite applications for classification
@@ -179,14 +177,12 @@ This is possible thanks to two new applications that use the existing training/c
**TrainClassifierFromDeepFeatures** is a composite application that wires the **TensorflowModelServe** application output into the existing official **TrainImagesClassifier** application.
```
This is the TrainClassifierFromDeepFeatures (TrainClassifierFromDeepFeatures) application, version 6.5.0
Train a classifier from deep net based features of an image and training vector data.
Parameters:
-source1 <group> Parameters for source 1
-source1.il <string list> Input image (or list to stack) for source #1 (mandatory)
-source1.fovx <int32> Field of view width for source #1 (mandatory)
-source1.fovy <int32> Field of view height for source #1 (mandatory)
-source1.rfieldx <int32> Input receptive field (width) for source #1 (mandatory)
-source1.rfieldy <int32> Input receptive field (height) for source #1 (mandatory)
-source1.placeholder <string> Name of the input placeholder for source #1 (mandatory)
-model <group> Deep net model parameters
-model.dir <string> Tensorflow model_save directory (mandatory)
@@ -195,11 +191,11 @@ -model.dir <string> Tensorflow model_save dire
-output <group> Deep net outputs parameters
-output.spcscale <float> The output spacing scale (mandatory, default value is 1)
-output.names <string list> Names of the output tensors (mandatory)
-output.foex <int32> The output field of expression (x) (mandatory, default value is 1)
-output.foey <int32> The output field of expression (y) (mandatory, default value is 1)
-finetuning <group> Deep net fine tuning parameters
-finetuning.disabletiling <boolean> Disable tiling (optional, off by default, default value is false)
-finetuning.tilesize <int32> Tile width used to stream the filter output (mandatory, default value is 16)
-output.efieldx <int32> The output expression field (width) (mandatory, default value is 1)
-output.efieldy <int32> The output expression field (height) (mandatory, default value is 1)
-optim <group> This group of parameters allows optimization of processing time
-optim.disabletiling <boolean> Disable tiling (optional, off by default, default value is false)
-optim.tilesize <int32> Tile width used to stream the filter output (mandatory, default value is 16)
-vd <string list> Input vector data list (mandatory)
-valid <string list> Validation vector data list (optional, off by default)
-out <string> Output model (mandatory)
@@ -209,7 +205,7 @@ -out <string> Output model (mandatory)
-sample.mv <int32> Maximum validation sample size per class (mandatory, default value is 1000)
-sample.bm <int32> Bound sample number by minimum (mandatory, default value is 1)
-sample.vtr <float> Training and validation sample ratio (mandatory, default value is 0.5)
-sample.vfn <string> Field containing the class integer label for supervision (mandatory, default value is )
-sample.vfn <string> Field containing the class integer label for supervision (mandatory, no default value)
-elev <group> Elevation management
-elev.dem <string> DEM directory (optional, off by default)
-elev.geoid <string> Geoid File (optional, off by default)
@@ -273,14 +269,12 @@ Use -help param1 [... paramN] to see detailed documentation of those parameters.
**ImageClassifierFromDeepFeatures** follows the same approach with the official **ImageClassifier** application.
```
This is the ImageClassifierFromDeepFeatures (ImageClassifierFromDeepFeatures) application, version 6.5.0
Classify image using features from a deep net and an OTB machine learning classification model
Parameters:
-source1 <group> Parameters for source 1
-source1.il <string list> Input image (or list to stack) for source #1 (mandatory)
-source1.fovx <int32> Field of view width for source #1 (mandatory)
-source1.fovy <int32> Field of view height for source #1 (mandatory)
-source1.rfieldx <int32> Input receptive field (width) for source #1 (mandatory)
-source1.rfieldy <int32> Input receptive field (height) for source #1 (mandatory)
-source1.placeholder <string> Name of the input placeholder for source #1 (mandatory)
-deepmodel <group> Deep net model parameters
-deepmodel.dir <string> Tensorflow model_save directory (mandatory)
@@ -289,16 +283,16 @@ -deepmodel.dir <string> Tensorflow model_save direc
-output <group> Deep net outputs parameters
-output.spcscale <float> The output spacing scale (mandatory, default value is 1)
-output.names <string list> Names of the output tensors (mandatory)
-output.foex <int32> The output field of expression (x) (mandatory, default value is 1)
-output.foey <int32> The output field of expression (y) (mandatory, default value is 1)
-finetuning <group> Deep net fine tuning parameters
-finetuning.disabletiling <boolean> Disable tiling (optional, off by default, default value is false)
-finetuning.tilesize <int32> Tile width used to stream the filter output (mandatory, default value is 16)
-output.efieldx <int32> The output expression field (width) (mandatory, default value is 1)
-output.efieldy <int32> The output expression field (height) (mandatory, default value is 1)
-optim <group> This group of parameters allows optimization of processing time
-optim.disabletiling <boolean> Disable tiling (optional, off by default, default value is false)
-optim.tilesize <int32> Tile width used to stream the filter output (mandatory, default value is 16)
-model <string> Model file (mandatory)
-imstat <string> Statistics file (optional, off by default)
-nodatalabel <int32> Label mask value (optional, off by default, default value is 0)
-out <string> [pixel] Output image [pixel=uint8/uint16/int16/uint32/int32/float/double] (default value is uint8) (mandatory)
-confmap <string> [pixel] Confidence map image [pixel=uint8/uint16/int16/uint32/int32/float/double] (default value is double) (optional, off by default)
-out <string> [pixel] Output image [pixel=uint8/uint16/int16/uint32/int32/float/double/cint16/cint32/cfloat/cdouble] (default value is uint8) (mandatory)
-confmap <string> [pixel] Confidence map image [pixel=uint8/uint16/int16/uint32/int32/float/double/cint16/cint32/cfloat/cdouble] (default value is double) (optional, off by default)
-ram <int32> Ram (optional, off by default, default value is 128)
-inxml <string> Load otb application from xml file (optional, off by default)
-progress <boolean> Report progress
@@ -327,14 +321,14 @@ otbcli_PatchesExtraction -source1.il spot7.tif -source1.patchsizex 16 -source1.p
That's it. Now we have two images, for patches and labels. If we want, we can split them to distinguish test/validation groups (with the **ExtractROI** application for instance). But here, we will just perform some fine tuning of our model, located in the `outmodel` directory. Our model is quite basic: it has two input placeholders, **x1** and **y1**, respectively for input patches (of size 16x16) and input reference labels (of size 1x1). We named **prediction** the tensor that predicts the labels, and the operator that performs the stochastic gradient descent is named **optimizer**. We perform the fine tuning and export the new model variables to the `newvars` folder.
Let's use our **TensorflowModelTrain** application to perform the training of this existing model.
```
otbcli_TensorflowModelTrain -model.dir /path/to/oursavedmodel -training.targetnodesnames optimizer -training.source1.il samp_patches.tif -training.source1.fovx 16 -training.source1.fovy 16 -training.source1.placeholder x1 -training.source2.il samp_labels.tif -training.source2.fovx 1 -training.source2.fovy 1 -training.source2.placeholder y1 -model.saveto newvars
otbcli_TensorflowModelTrain -model.dir /path/to/oursavedmodel -training.targetnodes optimizer -training.source1.il samp_patches.tif -training.source1.patchsizex 16 -training.source1.patchsizey 16 -training.source1.placeholder x1 -training.source2.il samp_labels.tif -training.source2.patchsizex 1 -training.source2.patchsizey 1 -training.source2.placeholder y1 -model.saveto newvars
```
Note that we could also have performed validation in this step. In this case, the `validation.source2.placeholder` would differ from the `training.source2.placeholder`, and would be **prediction**. This way, the program knows which target tensor to evaluate.
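The swap can be sketched as follows (a minimal sketch reusing the names from this walkthrough; the echoed lines are illustrative, not a runnable invocation):

```shell
# y1 is the reference-labels placeholder fed during training; prediction is
# the tensor that gets compared against the labels during validation.
TRAINING_SOURCE2_TARGET="y1"
VALIDATION_SOURCE2_TARGET="prediction"
echo "training feeds placeholder: ${TRAINING_SOURCE2_TARGET}"
echo "validation evaluates tensor: ${VALIDATION_SOURCE2_TARGET}"
```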
After this step, we decide to produce a map of forest over the whole Spot-7 image. First, we duplicate the model, then we replace its variables with the new ones computed in the previous step.
Then, we use the **TensorflowModelServe** application to produce the **prediction** tensor output for the entire image.
```
otbcli_TensorflowModelServe -source1.il spot7.tif -source1.placeholder x1 -source1.fovx 16 -source1.fovy 16 -model.dir /tmp/my_new_model -output.names prediction -out map.tif uint8
otbcli_TensorflowModelServe -source1.il spot7.tif -source1.placeholder x1 -source1.rfieldx 16 -source1.rfieldy 16 -model.dir /tmp/my_new_model -output.names prediction -out map.tif uint8
```
# Contact