Commit f47ee9ee authored by qubvel

docs updated

parent faeb3ea1
Installation
============
**Requirements**

1) Python 3.5+
2) Keras >= 2.2.0
3) Keras Applications >= 1.7.0
4) Image Classifiers == 0.2.0
5) Tensorflow 1.9 (tested)
.. note::

    This library does not include Tensorflow_ in its requirements.txt.
    Please choose a suitable version ('cpu'/'gpu') and install it
    manually using the official Guide_.
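For example, assuming the tested version above, either build can be installed with pip (the exact version pin is up to you; the GPU build additionally requires a matching CUDA/cuDNN setup as described in the Guide_)::

```shell
# CPU-only build
$ pip install tensorflow==1.9.0

# GPU build (requires CUDA/cuDNN, see the official guide)
$ pip install tensorflow-gpu==1.9.0
```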
.. _Guide: https://www.tensorflow.org/install/

.. _Tensorflow: https://www.tensorflow.org/
To install the library, execute at the command line::

    $ pip install segmentation-models

Or use the latest version available on github.com::

    $ pip install git+https://github.com/qubvel/segmentation_models
Tutorial
========
**Segmentation models** is a Python library with Neural Networks for
`Image Segmentation <https://en.wikipedia.org/wiki/Image_segmentation>`__
based on the `Keras <https://keras.io>`__
(`Tensorflow <https://www.tensorflow.org/>`__) framework.
**The main features** of this library are:

- High-level API (just two lines to create a NN)
- **4** model architectures for binary and multi-class segmentation (including the legendary **Unet**)
- **15** available backbones for each architecture
- All backbones have **pre-trained** weights for faster and better convergence
Available models
++++++++++++++++
- Unet_
- Linknet_
- FPN_
- PSPNet_
Available backbones
+++++++++++++++++++
=================== ===================== =====================
Backbone model      Name                  Weights
=================== ===================== =====================
VGG16               ``vgg16``             ``imagenet``
VGG19               ``vgg19``             ``imagenet``
ResNet18            ``resnet18``          ``imagenet``
ResNet34            ``resnet34``          ``imagenet``
ResNet50            ``resnet50``          ``imagenet`` ``imagenet11k-places365ch``
ResNet101           ``resnet101``         ``imagenet``
ResNet152           ``resnet152``         ``imagenet`` ``imagenet11k``
ResNeXt50           ``resnext50``         ``imagenet``
ResNeXt101          ``resnext101``        ``imagenet``
DenseNet121         ``densenet121``       ``imagenet``
DenseNet169         ``densenet169``       ``imagenet``
DenseNet201         ``densenet201``       ``imagenet``
Inception V3        ``inceptionv3``       ``imagenet``
Inception ResNet V2 ``inceptionresnetv2`` ``imagenet``
=================== ===================== =====================
Binary segmentation
+++++++++++++++++++

Consider a problem of binary segmentation (e.g. segmentation of forest
on remote sensing data) where we have:

- ``x`` - input images of shape (H, W, C)
- ``y`` - labeled binary masks of shape (H, W)

If you are already familiar with the Keras API, all you need to start
is to import one of the segmentation models and start training:

.. note::

    For binary segmentation, ``Unet`` based on a ``resnet34`` backbone is a
    good way to start your experiments.
Since the library is built on the Keras framework, the created
segmentation model is just a Keras ``Model``, which can be created as
easily as:

.. code:: python

    from segmentation_models import Unet

    model = Unet()

Depending on the task, you can change the network architecture by
choosing backbones with fewer or more parameters, and use pretrained
weights to initialize it:

.. code:: python

    model = Unet('resnet34', encoder_weights='imagenet')

A full training run for the binary problem above then looks like:

.. code-block:: python

    from segmentation_models import Unet
    from segmentation_models.backbones import get_preprocessing

    # define backbone name
    BACKBONE = 'resnet34'

    # prepare/load data
    x, y = ...
    preprocessing_fn = get_preprocessing(BACKBONE)
    x = preprocessing_fn(x)

    # prepare model
    model = Unet(backbone_name=BACKBONE, encoder_weights='imagenet')
    model.compile('Adam', 'binary_crossentropy', ['binary_accuracy'])

    # train model
    model.fit(x, y, epochs=20)
Multi class segmentation
++++++++++++++++++++++++

In case you have ``N`` classes as a target (``N > 1``), you simply have
to provide the following arguments while initializing your model, to
change the number of output classes:

.. code-block:: python

    model = Unet(backbone_name=BACKBONE, encoder_weights='imagenet',
                 classes=N, activation='softmax')
Change input shape of the model:

.. code:: python

    model = Unet('resnet34', input_shape=(None, None, 6), encoder_weights=None)
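When training with ``activation='softmax'``, target masks are typically
one-hot encoded from integer class labels. A minimal sketch of that
conversion, using pure NumPy (the helper name ``to_onehot`` is
illustrative, not part of the library):

```python
import numpy as np

def to_onehot(mask, n_classes):
    """Convert an integer mask of shape (H, W) with values in
    [0, n_classes) into a one-hot mask of shape (H, W, n_classes)."""
    return np.eye(n_classes, dtype=np.float32)[mask]

# a tiny 2x2 mask with 3 classes
mask = np.array([[0, 1],
                 [2, 1]])
onehot = to_onehot(mask, 3)
```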
Simple training pipeline
++++++++++++++++++++++++

.. code:: python

    from segmentation_models import Unet
    from segmentation_models.backbones import get_preprocessing
    from segmentation_models.losses import bce_jaccard_loss
    from segmentation_models.metrics import iou_score

    BACKBONE = 'resnet34'
    preprocess_input = get_preprocessing(BACKBONE)

    # load your data
    x_train, y_train, x_val, y_val = load_data(...)

    # preprocess input
    x_train = preprocess_input(x_train)
    x_val = preprocess_input(x_val)

    # define model
    model = Unet(BACKBONE, encoder_weights='imagenet')
    model.compile('Adam', loss=bce_jaccard_loss, metrics=[iou_score])

    # fit model
    model.fit(
        x=x_train,
        y=y_train,
        batch_size=16,
        epochs=100,
        validation_data=(x_val, y_val),
    )
The same manipulations can be done with ``Linknet``, ``PSPNet`` and ``FPN``. For more detailed information about the models' API and use cases, see `Read the Docs <https://segmentation-models.readthedocs.io/en/latest/>`__.
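The ``load_data(...)`` helper in the pipeline above is project-specific and not part of the library. A minimal stand-in that returns random arrays with the shapes the pipeline expects (binary masks, ``channels_last`` layout; all names and sizes here are illustrative):

```python
import numpy as np

def load_data(n_train=16, n_val=4, size=320):
    """Return random (images, binary masks) pairs shaped like real data:
    images (N, H, W, 3) in [0, 1], masks (N, H, W, 1) in {0, 1}."""
    x_train = np.random.rand(n_train, size, size, 3).astype(np.float32)
    y_train = (np.random.rand(n_train, size, size, 1) > 0.5).astype(np.float32)
    x_val = np.random.rand(n_val, size, size, 3).astype(np.float32)
    y_val = (np.random.rand(n_val, size, size, 1) > 0.5).astype(np.float32)
    return x_train, y_train, x_val, y_val

x_train, y_train, x_val, y_val = load_data()
```

Swapping this stub for a real data loader is all that is needed to run the pipeline end to end.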
Models and Backbones
++++++++++++++++++++

**Models**
- `Unet <https://arxiv.org/abs/1505.04597>`__
- `FPN <http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf>`__
- `Linknet <https://arxiv.org/abs/1707.03718>`__
- `PSPNet <https://arxiv.org/abs/1612.01105>`__
============= ==============
Unet          Linknet
============= ==============
|unet_image|  |linknet_image|
============= ==============

============= ==============
PSPNet        FPN
============= ==============
|psp_image|   |fpn_image|
============= ==============
.. _Unet: https://arxiv.org/abs/1505.04597
.. _Linknet: https://arxiv.org/abs/1707.03718
.. _PSPNet: https://arxiv.org/abs/1612.01105
.. _FPN: http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf
.. |unet_image| image:: https://cdn1.imggmi.com/uploads/2019/2/8/3a873a00c9742dc1fb33105ed846d5b5-full.png
.. |linknet_image| image:: https://cdn1.imggmi.com/uploads/2019/2/8/1a996c4ef05531ff3861d80823c373d9-full.png
.. |psp_image| image:: https://cdn1.imggmi.com/uploads/2019/2/8/aaabb97f89197b40e4879a7299b3c801-full.png
.. |fpn_image| image:: https://cdn1.imggmi.com/uploads/2019/2/8/af00f11ef6bc8a64efd29ed873fcb0c4-full.png
**Backbones**
.. table::

    =========== =====
    Type        Names
    =========== =====
    VGG         ``'vgg16' 'vgg19'``
    ResNet      ``'resnet18' 'resnet34' 'resnet50' 'resnet101' 'resnet152'``
    SE-ResNet   ``'seresnet18' 'seresnet34' 'seresnet50' 'seresnet101' 'seresnet152'``
    ResNeXt     ``'resnext50' 'resnext101'``
    SE-ResNeXt  ``'seresnext50' 'seresnext101'``
    SENet154    ``'senet154'``
    DenseNet    ``'densenet121' 'densenet169' 'densenet201'``
    Inception   ``'inceptionv3' 'inceptionresnetv2'``
    MobileNet   ``'mobilenet' 'mobilenetv2'``
    =========== =====
.. epigraph::

    All backbones have weights trained on the 2012 ILSVRC ImageNet dataset (``encoder_weights='imagenet'``).
Fine tuning
+++++++++++
Sometimes it is useful to train only the randomly initialized
*decoder*, in order not to damage the weights of the properly trained
*encoder*.
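The freeze-then-unfreeze pattern behind such fine tuning can be sketched without the library. The stand-in ``Layer`` class below only mimics the ``trainable`` flag that Keras layers expose; with a real model you would iterate over ``model.layers`` instead (the layer names here are purely illustrative):

```python
class Layer:
    """Minimal stand-in for a Keras layer: just a name and a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

layers = [Layer('encoder_conv1'), Layer('encoder_conv2'), Layer('decoder_conv1')]

# phase 1: freeze the encoder, train only the decoder
for layer in layers:
    layer.trainable = layer.name.startswith('decoder')

# phase 2 (after a few epochs): unfreeze everything for joint fine tuning
# for layer in layers:
#     layer.trainable = True
```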
Training with non-RGB data
++++++++++++++++++++++++++
In case you have non-RGB images (e.g. grayscale or some medical/remote
sensing data), you have a few different options:
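One common option, sketched here with pure NumPy, is to replicate a single grayscale channel three times so a pretrained RGB encoder can be reused unchanged (array sizes are illustrative):

```python
import numpy as np

# a batch of grayscale images: (N, H, W, 1)
gray = np.random.rand(2, 8, 8, 1).astype(np.float32)

# replicate the single channel along the last axis -> (N, H, W, 3)
rgb_like = np.repeat(gray, 3, axis=-1)
```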