REUR: A unified deep framework for signet ring cell detection in low-resolution pathological images.

2021 
Abstract Detecting signet ring cells (SRCs) in pathological images is essential for carcinoma diagnosis. However, it is time-consuming for pathologists to detect SRCs manually in pathological images, and detection accuracy is also relatively low because of their small size. Recently, deep learning methods for pathology analysis have been widely investigated. Nevertheless, the automatic detection of SRCs from real pathological images faces two problems. One is that labeled pathological images are scarce and the labels are usually incomplete. The other is that the training data and the real clinical data differ greatly in resolution. Hence, directly applying transfer learning degrades the performance of deep learning methods. To address these two problems, we present a unified framework named REUR [RetinaNet combining USRNet (unfolding super-resolution network) with the RGHMC (revised gradient harmonizing mechanism classification) loss] that can accurately detect SRCs in low-resolution (LR) pathological images. First, the super-resolution (SR) module of the framework bridges the resolution gap between the training data and the real clinical data. Second, the label correction module recovers revised ground-truth labels from noisy examples, and these revised labels are embedded into the gradient harmonizing mechanism to form the RGHMC loss. Numerical experiments showed that the framework outperforms other one-stage detectors based on the RetinaNet architecture on the high-resolution (HR) noisy dataset. It achieved a kappa value of 0.74 and an accuracy of 0.89 in a test on 27 randomly selected whole slide images (WSIs), and can thus assist pathologists in analyzing WSIs. The framework provides an essential method for computer-aided diagnosis in medical applications.
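To make the RGHMC idea concrete, below is a minimal sketch (not the authors' code) of a GHM-style binary classification loss evaluated against corrected labels: the gradient norm of each example is binned, examples in densely populated bins are down-weighted, and the binary cross-entropy is computed against the revised targets produced by a label-correction step. The bin count and the name revised_targets are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def rghmc_loss(logits: torch.Tensor, revised_targets: torch.Tensor, bins: int = 10) -> torch.Tensor:
    """GHM-style binary classification loss on revised (corrected) labels.

    logits:          raw classification predictions, shape (N,)
    revised_targets: corrected soft labels in [0, 1], shape (N,)
    """
    # Gradient norm g = |sigmoid(p) - t| measures how "hard" each example is.
    g = (torch.sigmoid(logits) - revised_targets).abs().detach()

    # Bin examples by gradient norm to estimate gradient density.
    edges = torch.linspace(0, 1, bins + 1, device=logits.device)
    edges[-1] += 1e-6
    weights = torch.zeros_like(logits)
    n = logits.numel()
    valid_bins = 0
    for i in range(bins):
        in_bin = (g >= edges[i]) & (g < edges[i + 1])
        count = in_bin.sum().item()
        if count > 0:
            # Down-weight crowded bins so easy/abundant examples do not dominate.
            weights[in_bin] = n / count
            valid_bins += 1
    if valid_bins > 0:
        weights = weights / valid_bins

    # Weighted binary cross-entropy against the revised (corrected) labels.
    return F.binary_cross_entropy_with_logits(logits, revised_targets, weight=weights)


if __name__ == "__main__":
    logits = torch.randn(8)
    revised = torch.rand(8)  # stand-in for the output of a label-correction module
    print(rghmc_loss(logits, revised).item())
```

In the full framework described above, this loss would replace the focal classification loss of a RetinaNet-style detector whose inputs have first been upscaled by the SR module; that composition is implied by the abstract but not detailed here.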