Commit faeb3ea1 authored by qubvel

Merge branch 'readme' of https://github.com/qubvel/segmentation_models into readme

parents 12b8cf70 4a8bc0c6
<div style="text-align:center"><img src ="https://github.com/qubvel/classification_models/blob/master/logo.png" /></div>
<center> <h1>Segmentation Models</h1> </center>
<center>[![PyPI version](https://badge.fury.io/py/segmentation-models.svg)](https://badge.fury.io/py/segmentation-models)
[![Documentation Status](https://readthedocs.org/projects/segmentation-models/badge/?version=latest)](https://segmentation-models.readthedocs.io/en/latest/?badge=latest)
[![Build Status](https://travis-ci.com/qubvel/segmentation_models.svg?branch=master)](https://travis-ci.com/qubvel/segmentation_models)</center>
**Segmentation models** is a Python library with Neural Networks
for [Image Segmentation](https://en.wikipedia.org/wiki/Image_segmentation) based on the [Keras](https://keras.io)
([TensorFlow](https://www.tensorflow.org/)) framework.
**The main features** of this library are:
- High level API (just two lines to create a NN; see the sketch below this list)
- **4** model architectures for binary and multi-class segmentation (including the legendary **Unet**)
- **15** available backbones for each architecture
- All backbones have **pre-trained** weights for faster and better convergence
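A minimal sketch of the two-line API (the backbone and weights below are just one possible choice from the tables that follow):

```python
from segmentation_models import Unet

# a segmentation model is a regular Keras model
model = Unet(backbone_name='resnet34', encoder_weights='imagenet')
```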
Latest **documentation** is available on [Read the Docs](https://segmentation-models.readthedocs.io/en/latest/)
### Available models:
- [Unet](https://arxiv.org/abs/1505.04597)
- [FPN](http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf)
- [Linknet](https://arxiv.org/abs/1707.03718)
- [PSPNet](https://arxiv.org/abs/1612.01105)
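All four models are created through the same high-level API; a minimal sketch (assuming each constructor accepts the same `backbone_name`/`encoder_weights` arguments shown in the examples below):

```python
from segmentation_models import Unet, FPN, Linknet, PSPNet

# each architecture is created the same way: backbone name + optional encoder weights
unet = Unet(backbone_name='resnet34', encoder_weights='imagenet')
fpn = FPN(backbone_name='resnet34', encoder_weights='imagenet')
linknet = Linknet(backbone_name='resnet34', encoder_weights='imagenet')
pspnet = PSPNet(backbone_name='resnet34', encoder_weights='imagenet')
```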
### Available backbones:
| Backbone model |Name| Weights |
|---------------------|:--:|:------------:|
| VGG16 |`vgg16`| `imagenet` |
| VGG19 |`vgg19`| `imagenet` |
| ResNet18 |`resnet18`| `imagenet` |
| ResNet34 |`resnet34`| `imagenet` |
| ResNet50 |`resnet50`| `imagenet`<br>`imagenet11k-places365ch` |
| ResNet101 |`resnet101`| `imagenet` |
| ResNet152 |`resnet152`| `imagenet`<br>`imagenet11k` |
| ResNeXt50 |`resnext50`| `imagenet` |
| ResNeXt101 |`resnext101`| `imagenet` |
| DenseNet121 |`densenet121`| `imagenet` |
| DenseNet169 |`densenet169`| `imagenet` |
| DenseNet201 |`densenet201`| `imagenet` |
| Inception V3 |`inceptionv3`| `imagenet` |
| Inception ResNet V2 |`inceptionresnetv2`| `imagenet` |
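In the table above, the `Name` column is the string passed as `backbone_name`, and the `Weights` column lists valid values for `encoder_weights`. A minimal sketch (the chosen backbone is arbitrary):

```python
from segmentation_models import Unet

# any name/weights pair from the table above can be plugged in here
model = Unet(backbone_name='densenet121', encoder_weights='imagenet')
```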
### Requirements
1) Python 3.6 or higher
2) Keras >=2.1.0
3) Tensorflow >= 1.4
### Installation
#### Installing via pip
```bash
$ pip install segmentation-models
```
#### Using latest version in your project
```bash
$ pip install git+https://github.com/qubvel/segmentation_models
```
### Code examples
Train Unet model:
```python
from segmentation_models import Unet
from segmentation_models.backbones import get_preprocessing
# prepare data
x, y = ...
preprocessing_fn = get_preprocessing('resnet34')
x = preprocessing_fn(x)
# prepare model
model = Unet(backbone_name='resnet34', encoder_weights='imagenet')
model.compile('Adam', 'binary_crossentropy', ['binary_accuracy'])
# train model
model.fit(x, y)
```
Train FPN model:
```python
from segmentation_models import FPN
model = FPN(backbone_name='resnet34', encoder_weights='imagenet')
```
#### Useful trick
Freeze the encoder weights during the first epochs of training so that only the decoder is fine-tuned:
```python
from segmentation_models import FPN
from segmentation_models.utils import set_trainable
model = FPN(backbone_name='resnet34', encoder_weights='imagenet', freeze_encoder=True)
model.compile('Adam', 'binary_crossentropy', ['binary_accuracy'])
# pretrain model decoder
model.fit(x, y, epochs=2)
# release all layers for training
set_trainable(model) # set all layers trainable and recompile model
# continue training
model.fit(x, y, epochs=100)
```
### TODO
- [x] Update Unet API
- [x] Update FPN API
- [x] Add Linknet models
- [x] Add PSP models
- [ ] Add DPN backbones
### Change Log
.. raw:: html

    <p align="center">
      <img width="400" height="200" src="https://cdn1.imggmi.com/uploads/2019/1/27/d7d9d327ea3340445bd82ec5377c56c7-full.png">
    </p>

    <h1 align="center"> Segmentation Models </h1>

    <p align="center">
      <a href="https://badge.fury.io/py/segmentation-models" alt="PyPI">
        <img src="https://badge.fury.io/py/segmentation-models.svg" /></a>
      <a href="https://segmentation-models.readthedocs.io/en/latest/?badge=latest" alt="Documentation">
        <img src="https://readthedocs.org/projects/segmentation-models/badge/?version=latest" /></a>
      <a href="https://travis-ci.com/qubvel/segmentation_models" alt="Build Status">
        <img src="https://travis-ci.com/qubvel/segmentation_models.svg?branch=master" /></a>
    </p>
**Segmentation models** is a Python library with Neural Networks for
`Image Segmentation <https://en.wikipedia.org/wiki/Image_segmentation>`__ based
on the `Keras <https://keras.io>`__ (`TensorFlow <https://www.tensorflow.org/>`__) framework.
**The main features** of this library are:

- High level API (just two lines to create a NN)
- **4** model architectures for binary and multi-class segmentation
  (including the legendary **Unet**)
- **25** available backbones for each architecture
- All backbones have **pre-trained** weights for faster and better
  convergence
Table of Contents
~~~~~~~~~~~~~~~~~
- `Quick start`_
- `Simple training pipeline`_
- `Models and Backbones`_
- `Installation`_
- `Documentation`_
- `Change log`_
- `License`_
Quick start
~~~~~~~~~~~
Since the library is built on the Keras framework, the created segmentation model is just a Keras Model, which can be created as easily as:
.. code:: python

    from segmentation_models import Unet

    model = Unet()
Depending on the task, you can change the network architecture by choosing a backbone with fewer or more parameters and use pretrained weights to initialize it:
.. code:: python

    model = Unet('resnet34', encoder_weights='imagenet')
Change the number of output classes in the model:
.. code:: python

    model = Unet('resnet34', classes=3, activation='softmax')
Change the input shape of the model:
.. code:: python

    model = Unet('resnet34', input_shape=(None, None, 6), encoder_weights=None)
Simple training pipeline
~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python

    from segmentation_models import Unet
    from segmentation_models.backbones import get_preprocessing
    from segmentation_models.losses import bce_jaccard_loss
    from segmentation_models.metrics import iou_score

    BACKBONE = 'resnet34'
    preprocess_input = get_preprocessing(BACKBONE)

    # load your data
    x_train, y_train, x_val, y_val = load_data(...)

    # preprocess input
    x_train = preprocess_input(x_train)
    x_val = preprocess_input(x_val)

    # define model
    model = Unet(BACKBONE, encoder_weights='imagenet')
    model.compile('Adam', loss=bce_jaccard_loss, metrics=[iou_score])

    # fit model
    model.fit(
        x=x_train,
        y=y_train,
        batch_size=16,
        epochs=100,
        validation_data=(x_val, y_val),
    )
The same manipulations can be done with ``Linknet``, ``PSPNet`` and ``FPN``. For more detailed information about the models API and use cases, see `Read the Docs <https://segmentation-models.readthedocs.io/en/latest/>`__.
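As a minimal sketch of the same pipeline with the other architectures (assuming, as stated above, that their constructors accept the same arguments as ``Unet``):

.. code:: python

    from segmentation_models import Linknet, PSPNet, FPN

    # created the same way as Unet above
    linknet = Linknet(BACKBONE, encoder_weights='imagenet')
    pspnet = PSPNet(BACKBONE, encoder_weights='imagenet')
    fpn = FPN(BACKBONE, encoder_weights='imagenet')

    # each one is a regular Keras model, compiled and fitted exactly as before
    fpn.compile('Adam', loss=bce_jaccard_loss, metrics=[iou_score])
    fpn.fit(x_train, y_train, batch_size=16, epochs=100, validation_data=(x_val, y_val))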
Models and Backbones
~~~~~~~~~~~~~~~~~~~~
**Models**
- `Unet <https://arxiv.org/abs/1505.04597>`__
- `FPN <http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf>`__
- `Linknet <https://arxiv.org/abs/1707.03718>`__
- `PSPNet <https://arxiv.org/abs/1612.01105>`__
============= ==============
Unet Linknet
============= ==============
|unet_image| |linknet_image|
============= ==============
============= ==============
PSPNet FPN
============= ==============
|psp_image| |fpn_image|
============= ==============
.. _Unet: https://arxiv.org/abs/1505.04597
.. _Linknet: https://arxiv.org/abs/1707.03718
.. _PSPNet: https://arxiv.org/abs/1612.01105
.. _FPN: http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf
.. |unet_image| image:: https://cdn1.imggmi.com/uploads/2019/2/8/3a873a00c9742dc1fb33105ed846d5b5-full.png
.. |linknet_image| image:: https://cdn1.imggmi.com/uploads/2019/2/8/1a996c4ef05531ff3861d80823c373d9-full.png
.. |psp_image| image:: https://cdn1.imggmi.com/uploads/2019/2/8/aaabb97f89197b40e4879a7299b3c801-full.png
.. |fpn_image| image:: https://cdn1.imggmi.com/uploads/2019/2/8/af00f11ef6bc8a64efd29ed873fcb0c4-full.png
**Backbones**
.. table::

    =========== =====
    Type        Names
    =========== =====
    VGG         ``'vgg16' 'vgg19'``
    ResNet      ``'resnet18' 'resnet34' 'resnet50' 'resnet101' 'resnet152'``
    SE-ResNet   ``'seresnet18' 'seresnet34' 'seresnet50' 'seresnet101' 'seresnet152'``
    ResNeXt     ``'resnext50' 'resnext101'``
    SE-ResNeXt  ``'seresnext50' 'seresnext101'``
    SENet154    ``'senet154'``
    DenseNet    ``'densenet121' 'densenet169' 'densenet201'``
    Inception   ``'inceptionv3' 'inceptionresnetv2'``
    MobileNet   ``'mobilenet' 'mobilenetv2'``
    =========== =====
.. epigraph::

    All backbones have weights trained on the 2012 ILSVRC ImageNet dataset (``encoder_weights='imagenet'``).
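A short sketch of wiring a backbone name from the table into a model together with its matching preprocessing function (the chosen name is arbitrary):

.. code:: python

    from segmentation_models import Unet
    from segmentation_models.backbones import get_preprocessing

    # any name from the table above can be used here
    backbone = 'seresnext50'
    preprocess_input = get_preprocessing(backbone)
    model = Unet(backbone, encoder_weights='imagenet')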
Installation
~~~~~~~~~~~~
**Requirements**
1) Python 3.5+
2) Keras >= 2.2.0
3) Image-classifiers == 0.2.0
4) Tensorflow 1.9 (tested)
**Pip package**

.. code:: bash

    $ pip install segmentation-models

**Latest version**

.. code:: bash

    $ pip install git+https://github.com/qubvel/segmentation_models
Documentation
~~~~~~~~~~~~~
Latest **documentation** is available on `Read the
Docs <https://segmentation-models.readthedocs.io/en/latest/>`__
Change Log
~~~~~~~~~~
To see important changes between versions, look at CHANGELOG.md_
License
~~~~~~~
The project is distributed under the `MIT Licence`_.
.. _CHANGELOG.md: https://github.com/qubvel/segmentation_models/blob/readme/CHANGELOG.md
.. _`MIT Licence`: https://github.com/qubvel/segmentation_models/blob/readme/LICENSE