<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<title>OTBTF, The Orfeo ToolBox extension for deep learning</title>
<link rel="stylesheet" href="revealjs/dist/reset.css">
<link rel="stylesheet" href="revealjs/dist/reveal.css">
<link rel="stylesheet" href="a11y-light.css">
<link rel="stylesheet" href="otb.css" id="theme">
<style type="text/css">
.reveal pre {
width: 512px
}
.slide-number {
opacity:0
}
</style>
</head>
<body>
<div class="reveal">
<div class="slides">
<!------------------------------------------------------------------------
FIRST SLIDE
------------------------------------------------------------------------->
<section data-background="illustrations/blank.png" background-size="contain">
<h1> Status of OTBTF </h1>
<h2> The Orfeo ToolBox extension for deep learning </h2>
<br>
<p> Rémi Cresson<sup>1</sup>, Nicolas Narçon<sup>1</sup>, Vincent Delbar<sup>2</sup></p>
<small>(1) French National Research Institute for Agriculture, Food and the Environment (INRAE),
<br>
(2)
LaTeleScop</small>
<br>
<br>
<br>
<br>
<img width="30%" data-src="illustrations/foss4g_logo.png">
</section>
<!------------------------------------------------------------------------
WHAT IS OTBTF
------------------------------------------------------------------------->
<section>
<section>
<h1>What is OTBTF?</h1>
</section>
<section>
<h2>In short</h2>
<ul>
<li><h>Generic</h> framework for deep learning on rasters</li>
<li>Developed at INRAE for <h>research</h>, <h>education</h> and <h>production</h>
</li>
<li>Use <h>deep learning</h> techniques on geospatial images</li>
<ul>
<li>Create datasets (samples selection, patches extraction)</li>
<li>Train models (CLI, Python)</li>
<li>Apply models in OTB applications</li>
</ul>
</ul>
<br>
<br>
<img width="15%" data-src="illustrations/logo.png">
</section>
<section>
<h2>Improvements over the years</h2>
<ol>
<li><h>Documentation</h></li>
<li><h>Docker</h> builds for CPU/GPU</li>
<li><h>CI/CD</h></li>
<li><h>TensorFlow 2</h> support</li>
<li><h>Python classes</h> to build/train models</li>
</ol>
<br>
<br>
<img width="45%" data-src="illustrations/ci.jpg">
<p><small>https://gitlab.irstea.fr/remi.cresson/otbtf</small></p>
</section>
<section>
<h2>Repository and docker images</h2>
<img width="5%" data-src="illustrations/repos_git.jpg" style="float:left;padding-left:20%">
<pre style="width:800px"><code data-trim class="bash">
git clone https://github.com/remicres/otbtf.git
</code></pre>
<img width="4%" data-src="illustrations/repos_docker.jpg" style="float:left;padding-left:20%">
<pre style="width:800px"><code data-trim class="bash" >
docker pull mdl4eo/otbtf:3.3.0-cpu
docker pull mdl4eo/otbtf:3.3.0-gpu # GPU enabled
</code></pre>
</section>
</section>
<!------------------------------------------------------------------------
WHAT FOR
------------------------------------------------------------------------->
<section>
<section>
<h1>What for</h1>
</section>
<section>
<img width="40%" data-src="illustrations/ensembles.png">
</section>
<section>
<h2>Tensorflow computational graphs</h2>
<img width="17.5%" data-src="illustrations/computational_graph.gif">
<p><small>Source: the TensorFlow website</small></p>
</section>
<section>
<h3>Example: scalar product</h3>
<br>
<img width="20%" data-src="illustrations/graph_1.png">
<br>
<img width="48px" data-src="illustrations/python.png" style="float:left;padding-left:15%;margin:30px">
<pre style="width:1050px"><code data-trim class="python">
import tensorflow as tf
x1 = tf.keras.Input(shape=[None, None, None], name="x1")
x2 = tf.keras.Input(shape=[None, None, None], name="x2")
# Scalar product
y = tf.reduce_sum(tf.multiply(x1, x2), axis=-1)
# Create model
model = tf.keras.Model(inputs={"x1": x1, "x2": x2}, outputs={"y": y})
model.save("/tmp/my_savedmodel")
</code></pre>
<img width="48px" data-src="illustrations/cli.png" style="float:left;padding-left:15%;margin:30px">
<pre style="width:1050px"><code data-trim class="bash">
export OTB_TF_NSOURCES=2
otbcli_TensorflowModelServe \
-source1.il "input_img_1.tif" -source2.il "input_img_2.tif" \
-model.dir "/tmp/my_savedmodel" -model.fullyconv on \
-out "output.tif"
</code></pre>
</section>
<section>
<h2>Deep learning</h2>
<h3><strike>Bridging the gap between deep learning and EO</strike></h3>
<h3>Bridging the gap between literature and real life</h3>
<img width="40%" data-src="illustrations/listof.jpg">
<p><small>Made with imgflip.com</small></p>
</section>
</section>
<!------------------------------------------------------------------------
FEATURES
------------------------------------------------------------------------->
<section>
<section>
<h1>Features</h1>
</section>
<section>
<h2>Out of the box</h2>
<ul>
<li>
Additional <h>applications</h> for the Orfeo ToolBox
<ul>
<li>For users (GIS, teaching)</li>
<li>For production</li>
</ul>
</li>
<li>
<h>Python API</h> (<y>NEW!</y>)
<ul>
<li>Dedicated to model training</li>
<li>For data scientists</li>
<li>To go large scale with distributed training (sketched below)</li>
</ul>
</li>
</ul>
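<p><small>A minimal sketch of what distributed training can look like with <g>tf.distribute.MirroredStrategy</g> (the toy model below is illustrative, not part of the OTBTF API):</small></p>
<pre style="width:1050px"><code data-trim class="python">
import tensorflow as tf

# Data parallelism over all GPUs of a single machine
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Toy fully convolutional model, for illustration only
    inp = tf.keras.Input(shape=[None, None, 4], name="x")
    out = tf.keras.layers.Conv2D(1, 3, padding="same", name="y")(inp)
    model = tf.keras.Model(inputs=inp, outputs=out)
    model.compile(optimizer="adam", loss="mse")

# "tf_ds" would be a dataset built with the OTBTF Python API
# (see the Python API slides)
# model.fit(tf_ds, epochs=10)
</code></pre>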
<br><br>
</section>
<section>
<h2>OTB Applications</h2>
<ul>
<li>
<h>TensorflowModelServe</h>: Inference on real world remote sensing products
</li>
<li>
<g>PatchesExtraction</g>: extract patches from images (Python call sketched below)
</li>
<li>
<h>PatchesSelection</h>: select patch locations from rasters
</li>
<li>
<g>TrainClassifierFromDeepFeatures</g>: train traditional classifiers that use features from deep nets
</li>
<li>
<g>ImageClassifierFromDeepFeatures</g>: use traditional classifiers with features from deep nets
</li>
<li>
<g>LabelImageSampleSelection</g>: select patches from a label image
</li>
<li>
<g>DensePolygonClassStatistics</g>: fast statistics on ground truth polygons
</li>
<li>
<g>TensorflowModelTrain</g>: training/validation (educational purposes)
</li>
</ul>
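<p><small>These applications can also be called from Python through the OTB bindings. A sketch for <g>PatchesExtraction</g>, assuming the <g>otbApplication</g> module is available in the environment (parameter keys mirror the CLI example shown later):</small></p>
<pre style="width:1050px"><code data-trim class="python">
import os
os.environ["OTB_TF_NSOURCES"] = "1"  # number of sources, set before import
import otbApplication

# Same parameters as the CLI example, with a single source
app = otbApplication.Registry.CreateApplication("PatchesExtraction")
app.SetParameterString("vec", "myvec.gpkg")
app.SetParameterStringList("source1.il", ["raster_x.tif"])
app.SetParameterInt("source1.patchsizex", 16)
app.SetParameterInt("source1.patchsizey", 16)
app.SetParameterString("source1.out", "x1.tif")
app.ExecuteAndWriteOutput()
</code></pre>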
<br><br>
</section>
<section>
<h3>TensorflowModelServe</h3>
<h4>Streamable inference: a key feature to go large scale</h4>
<img width="60%" data-src="illustrations/pipeline.png">
<p><small>Typical pipeline for inference in production</small></p>
</section>
<section>
<h3>Patches extraction</h3>
<h4>CLI example</h4>
<img width="48px" data-src="illustrations/cli.png" style="float:left;padding-left:22%;margin:30px">
<pre style="width:800px"><code data-trim class="bash">
export OTB_TF_NSOURCES=3 # Number of sources
otbcli_PatchesExtraction -vec "myvec.gpkg" \
-source1.patchsizex 16 -source1.patchsizey 16 \
-source1.il "raster_x.tif" -source1.out "x1.tif" \
-source2.patchsizex 64 -source2.patchsizey 64 \
-source2.il "raster_y.tif" -source2.out "y1.tif" \
-source3.patchsizex 64 -source3.patchsizey 64 \
-source3.il "raster_z.tif" -source3.out "z1.tif"
</code></pre>
<img width="60%" data-src="illustrations/patches2.png">
<p><small>Patches are stacked in rows and stored in well-known raster formats</small></p>
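<p><small>A minimal sketch of how such a patches image can be read back as a NumPy array, assuming the GDAL Python bindings and a multi-band raster:</small></p>
<pre style="width:900px"><code data-trim class="python">
import numpy as np
from osgeo import gdal

psize = 16  # must match -source1.patchsizex / patchsizey
gdal_ds = gdal.Open("x1.tif")
arr = gdal_ds.ReadAsArray()    # (n_bands, n_patches * psize, psize)
arr = np.moveaxis(arr, 0, -1)  # (n_patches * psize, psize, n_bands)
patches = arr.reshape(-1, psize, psize, arr.shape[-1])
print(patches.shape)           # (n_patches, psize, psize, n_bands)
</code></pre>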
</section>
<section>
<h2>Python API</h2>
<p>Say that you have generated some patches images with PatchesExtraction:</p>
<img width="40%" data-src="illustrations/patches2.png">
<br>
<p>The <g>otbtf.DatasetFromPatchesImages</g> class creates a dataset ready to use in TF/Keras:</p>
<img width="48px" data-src="illustrations/python.png" style="float:left;padding-left:20%;margin:30px">
<pre style="width:900px"><code data-trim class="python">
import otbtf
files = {"x": ["x1.tif", ..., "xN.tif"],
"y": ["y1.tif", ..., "yN.tif"],
"z": ["z1.tif", ..., "zN.tif"]}
ds = otbtf.DatasetFromPatchesImages(filenames_dict=files)
# This is a TensorFlow dataset
tf_ds = ds.get_tf_dataset(batch_size=8)
</code></pre>
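<p><small>A quick way to check what the dataset yields, assuming each batch is a dict of tensors keyed like <g>filenames_dict</g>:</small></p>
<pre style="width:900px"><code data-trim class="python">
# Take a single batch and print the tensor shapes
for batch in tf_ds.take(1):
    for name, tensor in batch.items():
        # e.g. x: (8, 16, 16, n_bands), y and z: (8, 64, 64, n_bands)
        print(name, tensor.shape)
</code></pre>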
</section>
<section>
<h3>TFRecords</h3>
<p>Any <g>otbtf.Dataset</g> can be exported in the <g>TFRecords</g> format:</p>
<pre style="width:800px"><code data-trim class="python">