Cell Image Segmentation Using Generative Adversarial Networks, Transfer Learning, and Augmentations
2019
We address the problem of segmenting cell contours from microscopy images of human induced pluripotent stem cell-derived Retinal Pigment Epithelial (iRPE) cells using Convolutional Neural Networks (CNNs). Our goal is to compare the accuracy gains of CNN-based segmentation from (1) un-annotated images via Generative Adversarial Networks (GANs), (2) annotated out-of-bio-domain images via transfer learning, and (3) a priori knowledge about microscope imaging mapped into geometric augmentations of a small collection of annotated images. First, the GAN learns an abstract representation of cell objects. Next, this unsupervised learned representation is transferred to the CNN segmentation models, which are further fine-tuned on a small number of manually segmented iRPE cell images. Second, transfer learning is applied by pre-training part of the CNN segmentation model on the COCO dataset, which contains semantic segmentation labels. The CNN model is then adapted to the iRPE cell domain using a small set of annotated iRPE cell images. Third, augmentations based on geometric transformations are applied to a small collection of annotated images. All of these approaches to training a CNN-based segmentation model are compared to a baseline CNN model trained only on the small collection of annotated images. For very small annotation counts, the results show accuracy improvements of up to 20% for the best approach compared to a baseline U-Net model. For larger annotation counts, the approaches asymptotically converge to the same accuracy.
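As an illustration of the third approach (geometric augmentations encoding the a priori knowledge that cell appearance under the microscope is invariant to rotation and mirroring), the sketch below expands a single annotated image/mask pair into eight training pairs. The specific transform set (90-degree rotations and horizontal flips) and the numpy-based implementation are illustrative assumptions, not the authors' exact augmentation pipeline.

# Minimal sketch, assuming square 2-D numpy arrays for the intensity image
# and its binary segmentation mask (placeholder data, not real iRPE images).
import numpy as np

def geometric_augmentations(image, mask):
    """Yield the eight dihedral variants (90-degree rotations and mirror
    flips) of an annotated image/mask pair."""
    for k in range(4):                                # 0, 90, 180, 270 degree rotations
        rot_img = np.rot90(image, k)
        rot_msk = np.rot90(mask, k)
        yield rot_img, rot_msk                        # rotated variant
        yield np.fliplr(rot_img), np.fliplr(rot_msk)  # rotated + mirrored variant

if __name__ == "__main__":
    img = np.random.rand(256, 256).astype(np.float32)         # placeholder image
    msk = (np.random.rand(256, 256) > 0.5).astype(np.uint8)   # placeholder mask
    pairs = list(geometric_augmentations(img, msk))
    print(f"1 annotated pair expanded to {len(pairs)} training pairs")

Applied to every image in the small annotated collection, this expands the effective training set eight-fold before fine-tuning the segmentation CNN.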