Visual Localisation for Knee Arthroscopy

2021 
Navigation in visually complex endoscopic environments requires an accurate and robust localisation system. This paper presents a single-image, deep-learning-based camera localisation method for orthopedic surgery. The approach combines image information, deep learning techniques, and bone-tracking data to estimate camera poses relative to the bone markers. We collected one arthroscopic video sequence at each of four knee flexion angles, for both a synthetic phantom knee model and a cadaveric knee joint. Experimental results are reported for both the synthetic knee model and the cadaveric knee joint, with mean localisation errors of 9.66 mm / 0.85° and 9.94 mm / 1.13° respectively. We found no correlation between the localisation errors achieved on synthetic and cadaveric images, and hence infer that arthroscopic image artifacts play a minor role in camera pose estimation compared with the constraints introduced by the presented setup. We also found that images acquired at 90° and 0° knee flexion are, respectively, the most and least informative for visual localisation. The study shows that deep learning performs well in the visually challenging, feature-poor environment of knee arthroscopy, which suggests such techniques can bring further improvements to localisation in Minimally Invasive Surgery.
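
The abstract describes regressing a camera pose (position and orientation relative to bone markers) from a single arthroscopic image, supervised by bone-tracking data. The sketch below illustrates one common way such a single-image pose regressor can be set up; it is an assumption-laden illustration in the spirit of PoseNet-style localisation, not the authors' implementation. All names (PoseRegressor, pose_loss, beta) and the ResNet-18 backbone are hypothetical choices for illustration.

```python
# Minimal sketch of a single-image camera pose regressor, assuming a
# CNN backbone that maps an arthroscopic frame to a 3-D translation (mm)
# and a unit quaternion, supervised by ground-truth camera-to-bone-marker
# poses from an external tracking system. Illustrative only.
import torch
import torch.nn as nn
import torchvision.models as models


class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # feature extractor (assumed backbone)
        backbone.fc = nn.Identity()               # drop the classification head
        self.backbone = backbone
        self.fc_t = nn.Linear(512, 3)             # translation in mm
        self.fc_q = nn.Linear(512, 4)             # rotation as a quaternion

    def forward(self, image):
        feat = self.backbone(image)
        t = self.fc_t(feat)
        q = self.fc_q(feat)
        q = q / q.norm(dim=-1, keepdim=True)      # normalise to a unit quaternion
        return t, q


def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=100.0):
    """Weighted sum of translation and rotation errors; beta balances the units."""
    loss_t = torch.norm(t_pred - t_gt, dim=-1).mean()
    # q and -q encode the same rotation, so take the smaller of the two distances
    loss_q = torch.min(torch.norm(q_pred - q_gt, dim=-1),
                       torch.norm(q_pred + q_gt, dim=-1)).mean()
    return loss_t + beta * loss_q


# Example training step on a placeholder batch of frames with ground-truth
# poses expressed relative to the bone markers.
model = PoseRegressor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)
t_gt = torch.randn(8, 3)
q_gt = torch.randn(8, 4)
q_gt = q_gt / q_gt.norm(dim=-1, keepdim=True)

t_pred, q_pred = model(images)
loss = pose_loss(t_pred, q_pred, t_gt, q_gt)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```

Reporting the errors as millimetres for translation and degrees for rotation, as in the abstract, corresponds directly to the two terms of such a combined pose loss.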