In ultrasound (US)-guided medical procedures, accurate tracking of interventional tools is crucial to patient safety and clinical outcomes. This requires a calibration procedure to recover the relationship between the US image and the tracking coordinate system. In the literature, calibration has been performed with passive phantoms, whose performance depends on image quality and imaging parameters such as frequency, depth, and beam thickness, as well as on in-plane assumptions. In this work, we introduce an active phantom for US calibration. This phantom actively detects and responds to the US beams transmitted from the imaging probe. This active echo (AE) approach allows identification of the US image midplane independently of image quality. Both target localization and segmentation can be performed automatically, minimizing user dependency. The AE phantom is compared with a crosswire phantom in a robotic US setup. An out-of-plane estimation US calibration method is also demonstrated through simulation and experiments to compensate for the remaining elevational uncertainty. The results indicate that the AE calibration phantom yields more consistent results across experiments with varying imaging configurations. Automatic segmentation is also shown to perform similarly to manual segmentation.
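For readers unfamiliar with the calibration step itself, the sketch below shows one common formulation: segmented phantom-point coordinates in the US image (converted to millimetres using the scanner's pixel spacing) are registered to the same points expressed in the tracker frame by a least-squares rigid fit. This is a minimal illustration rather than the AE method of the paper; the pixel spacing values, the point pairing, and the use of an SVD-based (Kabsch/Arun) solver are assumptions.

```python
import numpy as np

def rigid_fit(img_pts_mm, trk_pts_mm):
    """Least-squares rigid transform (R, t) mapping image points to tracker points.

    img_pts_mm, trk_pts_mm: (N, 3) arrays of corresponding points in millimetres.
    Uses the SVD-based Kabsch/Arun solution.
    """
    ci, ct = img_pts_mm.mean(axis=0), trk_pts_mm.mean(axis=0)
    H = (img_pts_mm - ci).T @ (trk_pts_mm - ct)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = ct - R @ ci
    return R, t

# Hypothetical example: pixel coordinates scaled by an assumed pixel spacing.
pixel_spacing = np.array([0.2, 0.2, 0.0])          # mm/pixel, illustrative values
img_pts = np.random.rand(10, 3) * [400, 300, 0]    # segmented (u, v, 0) pixel points
img_pts_mm = img_pts * pixel_spacing
R_true, t_true = np.eye(3), np.array([10.0, -5.0, 30.0])
trk_pts_mm = img_pts_mm @ R_true.T + t_true        # synthetic tracker-frame points
R, t = rigid_fit(img_pts_mm, trk_pts_mm)
residual_mm = np.linalg.norm(img_pts_mm @ R.T + t - trk_pts_mm, axis=1).mean()
```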
As thermal imaging attempts to estimate very small tissue motion (on the order of tens of microns), it can be negatively influenced by signal decorrelation. The patient's breathing and cardiac cycle generate shifts in the RF signal patterns. Other sources of movement can arise outside the patient's body, such as transducer slippage or small vibrations caused by environmental factors like electronic noise. Here, we build on a robust displacement estimation method for ultrasound elastography and investigate an iterative motion compensation algorithm that can detect and remove non-heat-induced tissue motion at every step of the ablation procedure. The validation experiments are performed on laboratory-induced ablation lesions in ex vivo tissue. The ultrasound probe is either held by the operator's hand or supported by a robotic arm. We demonstrate the ability to detect and remove non-heat-induced tissue motion in both settings. We show that removing extraneous motion helps unmask the effects of heating. Our strain estimation curves closely mirror the temperature changes within the tissue. While previous results in the area of motion compensation were reported for experiments lasting less than 10 seconds, our algorithm was tested on experiments lasting close to 20 minutes.
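To make the general idea concrete, the sketch below estimates inter-frame axial displacement in depth windows of an RF A-line by cross-correlation, subtracts a globally fitted bulk component as a crude model of non-heat-induced motion, and differentiates the residual to obtain strain. This is an illustrative simplification under stated assumptions (window sizes, median bulk model, integer-sample shifts), not the paper's iterative algorithm.

```python
import numpy as np

def window_shifts(ref_line, cur_line, win=128, step=64):
    """Axial shift (in samples) between two RF A-lines, estimated in depth windows
    from the cross-correlation peak location."""
    shifts, centers = [], []
    for start in range(0, len(ref_line) - win, step):
        r = ref_line[start:start + win] - ref_line[start:start + win].mean()
        c = cur_line[start:start + win] - cur_line[start:start + win].mean()
        xc = np.correlate(c, r, mode="full")
        shifts.append(np.argmax(xc) - (win - 1))
        centers.append(start + win // 2)
    return np.array(centers), np.array(shifts, dtype=float)

def motion_compensated_strain(ref_line, cur_line):
    """Crude thermal-strain estimate after removing a bulk (non-heat) displacement.

    The bulk component is modeled here as the median window shift and subtracted;
    strain is the depth gradient of the residual displacement.
    """
    centers, d = window_shifts(ref_line, cur_line)
    residual = d - np.median(d)                  # remove extraneous bulk motion
    return centers, np.gradient(residual, centers)
```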
The drying characteristics and rolling deformation of eaglewood leaves are investigated experimentally, and a model based on Fick's law and stress–strain relations is built to describe the leaf-rolling behavior. During the initial drying period the leaves lose free water and hardly roll. Rolling deformation is induced by the difference in shrinkage across the leaf thickness and occurs once the moisture content reaches a critical level. The rolling index of a leaf dried on one side is greater than that of a leaf dried on both sides. In addition, the rolling index is influenced by drying temperature and leaf thickness: when a leaf is thick or the drying temperature is high, the leaf rolls considerably.
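The abstract names Fick's law and stress–strain relations as the model's ingredients; a minimal version of that coupling, written for a one-dimensional moisture field through the leaf thickness, might read as follows. The shrinkage coefficient β and the curvature relation are illustrative assumptions, not the paper's exact equations.

```latex
% Moisture diffusion through the leaf thickness (Fick's second law),
% with M the moisture content, D the diffusivity, and h the leaf thickness
\frac{\partial M}{\partial t} = D\,\frac{\partial^2 M}{\partial z^2},
\qquad 0 \le z \le h
% Local shrinkage strain driven by moisture loss (beta: assumed shrinkage coefficient)
\varepsilon_s(z,t) = \beta\,\bigl(M_0 - M(z,t)\bigr)
% Rolling (curvature) driven by the through-thickness shrinkage mismatch
\kappa(t) \propto \frac{\varepsilon_s(0,t) - \varepsilon_s(h,t)}{h}
```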
Visual tracking has advanced significantly in recent years, mainly due to the availability of large-scale training datasets. These datasets have enabled the development of numerous algorithms that can track objects with high accuracy and robustness. However, the majority of current research has been directed towards tracking generic objects, with less emphasis on more specialized and challenging scenarios. One such challenging scenario involves tracking reflected objects. Reflections can significantly distort the appearance of objects, creating ambiguous visual cues that complicate the tracking process. This issue is particularly pertinent in applications such as autonomous driving, security, smart homes, and industrial production, where accurately tracking objects reflected in surfaces like mirrors or glass is crucial. To address this gap, we introduce TRO, a benchmark specifically for Tracking Reflected Objects. TRO includes 200 sequences with around 70,000 frames, each carefully annotated with bounding boxes. This dataset aims to encourage the development of new, accurate methods for tracking reflected objects, which present unique challenges not sufficiently covered by existing benchmarks. We evaluated 20 state-of-the-art trackers and found that they struggle with the complexities of reflections. To provide a stronger baseline, we propose a new tracker, HiP-HaTrack, which uses hierarchical features to improve performance, significantly outperforming existing algorithms. We believe our benchmark, evaluation, and HiP-HaTrack will inspire further research and applications in tracking reflected objects. The TRO benchmark and code are available at https://github.com/OpenCodeGithub/HIP-HaTrack.
Background: Auditory brainstem response (ABR) testing is a noninvasive electrophysiological test of auditory function. Its waveforms and threshold reflect functional changes in the auditory centers of the brainstem and are widely used in the clinic to diagnose hearing dysfunction. However, identifying the waveforms and threshold still depends mainly on manual recognition by trained personnel and is therefore strongly influenced by individual experience; it is also a labor-intensive task in clinical practice. Methods: In this work, human ABRs were recorded. First, a binary label is created for each of the 1,024 sampling points; the characteristic region of the ABR data selected for analysis is 0–8 ms, and the marked area is enlarged to expand the feature information and reduce marking error. Second, a bidirectional long short-term memory (BiLSTM) network is established to model the dependence between sampling points, and an ABR sampling-point classifier is obtained by training. Finally, mark points are obtained through thresholding. Results: The network structure, related parameters, recognition performance, and noise resistance were evaluated on 614 sets of clinical ABR data. The results show that the average detection time per recording was 0.05 s, and recognition accuracy reached 92.91%. Discussion: The study proposes automatic recognition of ABR waveforms using a BiLSTM-based machine learning technique. The results demonstrate that the proposed method can shorten analysis time and help doctors make diagnoses, suggesting that it has the potential for future clinical use.
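As a concrete, deliberately simplified illustration of the pipeline described above, a per-sampling-point BiLSTM classifier followed by thresholding could look like the PyTorch sketch below. The layer sizes, two-layer depth, and sigmoid output head are assumptions for illustration; the paper does not specify its exact architecture.

```python
import torch
import torch.nn as nn

class ABRPointClassifier(nn.Module):
    """Per-sampling-point classifier for ABR waveforms (illustrative sketch).

    Input:  (batch, 1024, 1) amplitude sequence covering the 0-8 ms window.
    Output: (batch, 1024) probability that each sampling point lies in a marked region.
    """
    def __init__(self, hidden_size=64):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                              num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)

    def forward(self, x):
        feats, _ = self.bilstm(x)            # (batch, 1024, 2*hidden)
        logits = self.head(feats).squeeze(-1)
        return torch.sigmoid(logits)         # per-point probabilities

def extract_marks(probs, threshold=0.5):
    """Thresholding step: indices of sampling points whose probability exceeds threshold."""
    return (probs > threshold).nonzero(as_tuple=True)[-1]
```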
Abstract Background During craniotomy for cerebellopontine angle (CPA) lesions via the typical retrosigmoid approach, exact exposure of the margin of the venous sinus complex remains an essential but risky step. This study aimed to determine the exact position of the asterion and the sinuses by combining preoperative imaging with intraoperative landmarks, and to analyse their clinical features. Methods From February 2008 through November 2019, 94 patients who underwent removal of vestibular schwannoma (VS) through retrosigmoid craniotomies were enrolled in the series. We used preoperative images, including computed tomography (CT) and/or magnetic resonance imaging (MRI), combined with intraoperative anatomical landmarks to determine the exact location of the sigmoid sinus and the transverse-sigmoid sinus junction (TSSJ). Gadolinium-enhanced MRI T1 sequences and/or the CT bone window were used to measure the distance from the asterion to the sigmoid sinus. Results In the 94 retrosigmoid craniotomies, the asterion lay on average 12.71 mm posterior to the body-surface projection of the TSSJ. Intraoperative surface landmarks, combined with preoperative imaging that identified the distance from the asterion to the sigmoid sinus at the level of the transverse sinus, enabled an appropriately placed initial burr hole (at the margin of the TSSJ). Only one case had a minor laceration of the sigmoid sinus when the bone flap was opened. Conclusions By combining intraoperative anatomical landmarks with preoperative imaging, the margin of the venous sinuses, especially the inferior margin of the transverse sinus in the retrosigmoid approach, can be reliably identified. The distance from the intersection of the asterion and the occipitomastoid suture to the TSSJ is the shortest distance between the occipitomastoid suture and the sigmoid sinus.
Fusion of video with other imaging modalities is common in modern surgical procedures, providing surgeons with additional information for precise surgical guidance. One such example uses interventional guidance equipment and surgical navigation systems to register the tools and devices used in surgery with one another. In this work, we focus specifically on registering three-dimensional ultrasound with a stereocamera system. Surgical navigation systems often use optical or electromagnetic trackers; however, both of these tracking systems have drawbacks that lead to target registration errors of approximately 3 mm. Previous work has shown that photoacoustic markers can be used to register three-dimensional ultrasound with video, resulting in target registration errors well below the current state of the art. This work extends that idea by generating multiple photoacoustic markers concurrently rather than sequentially as in the previous work. This reduces the acquisition time by a factor equal to the number of concurrently generated photoacoustic markers. The approach is demonstrated on a synthetic phantom and an ex vivo porcine kidney phantom. The resulting target registration errors ranged from 840 to 1360 μm, with standard deviations from 370 to 640 μm.
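For context, the target registration error (TRE) reported above is simply the residual distance at target points after applying the estimated ultrasound-to-camera transform. A minimal sketch of that evaluation, using a hypothetical transform and synthetic target points rather than the paper's data, is:

```python
import numpy as np

def target_registration_error(R, t, targets_us, targets_cam):
    """TRE statistics for a rigid transform (R, t) mapping US-frame points into the
    stereocamera frame; targets_* are (N, 3) arrays of corresponding points in mm."""
    mapped = targets_us @ R.T + t
    errors = np.linalg.norm(mapped - targets_cam, axis=1)
    return errors.mean(), errors.std()

# Hypothetical example: identity transform and slightly perturbed target points.
R, t = np.eye(3), np.zeros(3)
targets_us = np.random.rand(5, 3) * 50.0
targets_cam = targets_us + np.random.normal(scale=0.5, size=(5, 3))
mean_tre_mm, std_tre_mm = target_registration_error(R, t, targets_us, targets_cam)
```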
Accurate tool tracking is a crucial task that directly affects the safety and effectiveness of many interventional medical procedures. Compared with CT and MRI, ultrasound-based tool tracking has many advantages, including low cost, safety, mobility, and ease of use. However, surgical tools are poorly visualized in conventional ultrasound images, which prevents effective tool tracking and guidance. Existing tracking methods have not yet provided a solution that effectively solves the tool visualization and mid-plane localization accuracy problems and fully meets clinical requirements. In this paper, we present an active ultrasound tracking and guiding system for interventional tools. The main principle of the system is to establish bi-directional ultrasound communication between the interventional tool and the US imaging machine within the tissue. This enables the interventional tool to generate an active ultrasound field over the original imaging ultrasound signals. By controlling the timing and amplitude of the active ultrasound field, a virtual pattern can be injected directly into the US machine's B-mode display. In this work, we introduce the time and frequency modulation, mid-plane detection, and arbitrary pattern injection methods. These methods further improve target visualization and guiding accuracy and expand the system's application beyond simple tool tracking. We performed ex vivo and in vivo experiments, showing significant improvements in tool visualization and localization accuracy on different US imaging platforms. An ultrasound image mid-plane detection accuracy of ±0.3 mm and a detectable tissue depth of over 8.5 cm were achieved in the experiments. The system performance was tested under different configurations and system parameters. We also report the first experiment of arbitrary pattern injection into the B-mode image and its application to accurate tool tracking.
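The timing relationship behind pattern injection follows ordinary pulse-echo geometry: if the tool's element fires a delay Δt after detecting the probe's transmit pulse, the scanner renders the injected echo at an apparent extra depth of c·Δt/2. The sketch below computes trigger delays that would place virtual marks at chosen apparent depths; the speed of sound, the trigger-on-received-pulse scheme, and the example mark depths are illustrative assumptions, not the system's actual implementation.

```python
# Illustrative timing calculation for injecting virtual marks at chosen apparent depths.
SPEED_OF_SOUND_M_S = 1540.0  # assumed average soft-tissue speed of sound

def trigger_delay_s(extra_depth_m):
    """Delay after the detected imaging pulse so that the injected echo appears
    extra_depth_m deeper than the tool's true position (round-trip convention)."""
    return 2.0 * extra_depth_m / SPEED_OF_SOUND_M_S

# Example: a three-dot pattern placed 5, 10, and 15 mm beyond the tool tip.
pattern_depths_m = [0.005, 0.010, 0.015]
delays_us = [trigger_delay_s(d) * 1e6 for d in pattern_depths_m]
# -> roughly [6.5, 13.0, 19.5] microseconds
```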