From point to surface: Hierarchical parsing of human anatomy in medical images using machine learning technologies

2016 
Localization and interpretation of anatomical structures in medical images are key steps in the radiological workflow. Radiologists and technicians usually accomplish this task by identifying anatomical signatures, that is, image features that distinguish one anatomical structure from another. Is it possible for a computer to learn these “anatomical signatures” as well? This chapter introduces a framework for learning anatomical signatures from large quantities of medical image data. It starts with the detection of anatomical landmarks, gradually extends to organ bounding boxes, and eventually reaches precise segmentation of human anatomies. Multiple machine learning technologies are employed and seamlessly integrated to learn “anatomical signatures” at different levels. Our learning-based platform is applied to diverse applications, ranging from orthopedic studies in magnetic resonance imaging to oncology studies in positron emission tomography/computed tomography. It shows robust and accurate performance and can benefit the radiological workflow in various ways.
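
The coarse-to-fine pipeline described above (landmarks, then organ boxes, then surfaces) can be illustrated with a minimal sketch. This is not the chapter's implementation: the class and method names (HierarchicalAnatomyParser, detect, regress, segment) are hypothetical placeholders, and each stage is assumed to be an independently trained model with a simple predict-style interface.

```python
import numpy as np


class HierarchicalAnatomyParser:
    """Hypothetical coarse-to-fine parser: landmarks -> organ boxes -> segmentations."""

    def __init__(self, landmark_detector, box_regressor, segmenter):
        # Each stage is assumed to be a pre-trained model; the interfaces
        # used below (detect / regress / segment) are illustrative only.
        self.landmark_detector = landmark_detector
        self.box_regressor = box_regressor
        self.segmenter = segmenter

    def parse(self, volume: np.ndarray) -> dict:
        # Stage 1: detect sparse anatomical landmarks (points) in the 3D volume.
        landmarks = self.landmark_detector.detect(volume)

        # Stage 2: regress organ bounding boxes, initialized from the landmarks.
        boxes = self.box_regressor.regress(volume, landmarks)

        # Stage 3: refine each bounding box into a precise organ segmentation
        # (the "surface" level of the hierarchy).
        masks = {organ: self.segmenter.segment(volume, box)
                 for organ, box in boxes.items()}

        return {"landmarks": landmarks, "boxes": boxes, "masks": masks}
```

The point of the point-to-box-to-surface ordering is that each stage narrows the search space for the next, so the expensive segmentation step only runs inside regions already localized by the cheaper landmark and box stages.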