In the field of image-guided surgery, Augmented Reality (AR) wearable displays are a widely studied and documented technology for their ability to provide egocentric vision together with the overlay of virtual content on the real scene. In particular, optical see-through (OST) displays have the advantage of preserving direct visual perception of the real world. However, OST displays suffer from the vergence-accommodation conflict when virtual content is superimposed on the real world. Furthermore, the calibration methods required to achieve geometric consistency between real and virtual content are inherently error-prone. One previously studied solution to these problems is the use of integral imaging displays. In this paper we present a simple and straightforward real-time rendering strategy, implemented in modern OpenGL, to display the 3D image of a virtual object on a wearable OST display based on the integral imaging approach. Clinical Relevance- The proposed algorithm opens the way towards more effective AR surgical navigation in terms of both the comfort of the AR experience and the accuracy of the AR guidance.
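As a rough illustration of how such an integral-imaging pass can be organized in modern OpenGL, the C++ sketch below renders a grid of elemental images by treating each lenslet as an off-axis pinhole camera and routing its view to the matching tile of the microdisplay. This is a minimal sketch under stated assumptions, not the paper's actual implementation: the `drawScene` callback, the `uMVP` uniform name, and all parameters are illustrative placeholders.

```cpp
// Sketch: render an NxN grid of elemental images for an integral-imaging
// display in a single pass over the lenslet array. Assumes a current OpenGL
// context, a compiled shader program `prog` exposing a "uMVP" uniform, and a
// `drawScene()` helper that issues the draw calls -- all hypothetical.
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void renderElementalImages(GLuint prog, int gridN, int tilePx,
                           float lensletPitch,   // lenslet pitch (mm)
                           float gap,            // lenslet-to-display gap (mm)
                           const glm::mat4& view,
                           void (*drawScene)())
{
    glUseProgram(prog);
    const GLint mvpLoc = glGetUniformLocation(prog, "uMVP");
    const float half = 0.5f * lensletPitch;

    for (int row = 0; row < gridN; ++row) {
        for (int col = 0; col < gridN; ++col) {
            // Center of this lenslet relative to the array center (mm).
            float cx = (col - 0.5f * (gridN - 1)) * lensletPitch;
            float cy = (row - 0.5f * (gridN - 1)) * lensletPitch;

            // Off-axis pinhole frustum: the eye sits at the lenslet center,
            // the near plane is the elemental-image patch behind it.
            // (Any image flipping required by the lenslet optics is omitted.)
            glm::mat4 proj  = glm::frustum(-half, half, -half, half,
                                           gap, 1000.0f);
            glm::mat4 shift = glm::translate(glm::mat4(1.0f),
                                             glm::vec3(-cx, -cy, 0.0f));
            glm::mat4 mvp   = proj * shift * view;

            // Route this viewpoint's image to its tile of the display.
            glViewport(col * tilePx, row * tilePx, tilePx, tilePx);
            glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));
            drawScene();
        }
    }
}
```

Driving all elemental images from one loop with per-tile viewports keeps the whole light field in a single render pass, which is what makes a real-time frame rate plausible on wearable hardware.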
Wearable Video See-Through (VST) devices for Augmented Reality (AR) and for obtaining a magnified view are taking hold in the medical and surgical fields. However, these devices are not yet usable in daily clinical practice due to focusing problems and a limited depth of field. This study investigates the use of liquid-lens optics to create an autofocus system for wearable VST visors. The autofocus system is based on a Time-of-Flight (TOF) distance sensor and an active autofocus control loop. Integrated into the wearable VST visors, the autofocus system showed good potential, providing rapid focusing at various distances together with a magnified view.
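A minimal sketch of the kind of control loop such a TOF-driven autofocus implies is shown below: the distance reading is smoothed, converted to the additional optical power the liquid lens must supply under a thin-lens approximation, and sent to the lens driver only when the change exceeds a deadband. The sensor and driver calls (`readTofDistanceMm`, `setLensPowerDiopters`) and all constants are hypothetical placeholders, not the study's actual hardware API.

```cpp
// Sketch of a TOF-driven autofocus loop for a liquid-lens camera:
// read the working distance, smooth it, convert it to lens power,
// and update the lens only when the change is worth it.
#include <chrono>
#include <cmath>
#include <thread>

// Stubs standing in for the real sensor / lens-driver calls (hypothetical).
double readTofDistanceMm()              { return 350.0; }
void   setLensPowerDiopters(double dpt) { (void)dpt;    }

int main() {
    const double restFocusMm = 400.0; // distance in focus at 0 dpt (assumed)
    const double alpha       = 0.3;   // exponential-smoothing factor
    const double deadbandDpt = 0.05;  // skip updates below this change

    double filteredMm = restFocusMm;
    double lastCmdDpt = 0.0;

    while (true) {
        double d = readTofDistanceMm();
        if (d > 0.0) // reject invalid or out-of-range readings
            filteredMm = alpha * d + (1.0 - alpha) * filteredMm;

        // Thin-lens approximation: added power needed so the target at
        // `filteredMm` comes into focus is 1/d_target - 1/d_rest (meters).
        double cmdDpt = 1000.0 / filteredMm - 1000.0 / restFocusMm;

        if (std::fabs(cmdDpt - lastCmdDpt) > deadbandDpt) {
            setLensPowerDiopters(cmdDpt);
            lastCmdDpt = cmdDpt;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(33)); // ~30 Hz
    }
}
```

The smoothing and deadband are there to keep the lens from hunting when the TOF reading jitters, which matters for visual comfort in a head-worn display.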