Positron emission tomography (PET) is an imaging modality used to diagnose a number of neurological diseases. In contrast to Magnetic Resonance Imaging (MRI), PET is costly and involves injecting a radioactive substance into the patient. Motivated by developments in modality transfer in vision, we study the generation of certain types of PET images from MRI data. We derive new flow-based generative models which we show perform well in this small sample size regime (much smaller than dataset sizes available in standard vision tasks). Our formulation, DUAL-GLOW, is based on two invertible networks and a relation network that maps the latent spaces to each other. We discuss how, given the prior distribution, learning the conditional distribution of PET given the MRI image reduces to obtaining the conditional distribution between the two latent codes w.r.t. the two image types. We also extend our framework to leverage 'side' information (or attributes) when available. By controlling PET generation through 'conditioning' on age, our model is also able to capture brain FDG-PET (hypometabolism) changes as a function of age. We present experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset with 826 subjects and obtain good performance in PET image synthesis, qualitatively and quantitatively better than recent works.
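The sketch below illustrates the conditional-latent construction described above, with a single toy affine coupling layer standing in for each full Glow network. All class and function names (AffineCoupling, RelationNet, synthesize_pet) and the layer sizes are hypothetical; this is a minimal illustration of the idea, not the paper's implementation.

```python
# Toy stand-in for the DUAL-GLOW construction: two invertible flows plus a
# relation network predicting the conditional distribution p(z_pet | z_mri).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible affine coupling layer (a toy stand-in for a Glow network)."""
    def __init__(self, dim):
        super().__init__()
        # Predicts a shift and log-scale for the second half from the first half.
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        shift, log_scale = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * log_scale.exp() + shift], dim=1)

    def inverse(self, z):
        z1, z2 = z.chunk(2, dim=1)
        shift, log_scale = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, (z2 - shift) * (-log_scale).exp()], dim=1)

class RelationNet(nn.Module):
    """Maps the MRI latent code to the mean/log-variance of p(z_pet | z_mri)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * dim))

    def forward(self, z_mri):
        return self.net(z_mri).chunk(2, dim=1)   # (mu, log_var)

def synthesize_pet(mri, flow_mri, flow_pet, relation):
    z_mri = flow_mri(mri)                        # encode MRI into its latent space
    mu, log_var = relation(z_mri)                # conditional Gaussian over z_pet
    z_pet = mu + (0.5 * log_var).exp() * torch.randn_like(mu)
    return flow_pet.inverse(z_pet)               # invert the PET flow to get an image

dim = 16                                         # stand-in for a flattened image
flow_mri, flow_pet, relation = AffineCoupling(dim), AffineCoupling(dim), RelationNet(dim)
pet_sample = synthesize_pet(torch.randn(4, dim), flow_mri, flow_pet, relation)
```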
For the early diagnosis of cancer, capturing abnormal temperature distributions on the body surface with infrared thermography has become an active research topic. At present, however, this method can only qualitatively indicate an early cancerous lesion; it cannot accurately determine the lesion's location and extent. In this paper, we conduct exploratory research on this problem and apply a nonlinear inverse mathematical method to the quantitative analysis of infrared medical diagnosis. Through this inversion algorithm, the lesion region and its extent can be determined quantitatively, which increases the value of infrared thermography in the field of early cancer diagnosis.
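As a hedged illustration of the inversion step, the sketch below fits the location and extent of an internal heat source to a measured surface-temperature profile with nonlinear least squares. The Gaussian forward model is a deliberately simple stand-in for the bioheat physics used in the paper; all parameter names, values, and units are assumptions.

```python
# Sketch of a nonlinear inversion: recover lesion parameters from a thermogram.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0.0, 10.0, 200)           # surface coordinate (cm), illustrative

def forward(params, x):
    """Surface temperature rise for a source at `center` with strength `amp`
    and spatial extent `width` (all hypothetical units)."""
    amp, center, width = params
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

# Synthetic "measured" profile: ground truth plus sensor noise.
true_params = np.array([1.5, 4.2, 0.8])
measured = forward(true_params, x) + 0.02 * np.random.default_rng(0).normal(size=x.size)

# Nonlinear least-squares inversion: minimize the residual between the
# forward model and the measured thermogram profile.
fit = least_squares(lambda p: forward(p, x) - measured, x0=[1.0, 5.0, 1.0],
                    bounds=([0.0, 0.0, 0.1], [10.0, 10.0, 5.0]))
amp, center, width = fit.x
print(f"estimated lesion center ~{center:.2f} cm, extent ~{width:.2f} cm")
```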
This paper presents an iterative reconstruction framework for super-resolution imaging and autofocusing via compressive-sensing-based twin-image-free holography (SRI-AF-CS-TIFH) for 3D (multi-plane) objects in compressed holographic imaging. In the first step of our proposed framework, the Hough transform edge detection method is incorporated into the eigenvalue-based autofocusing algorithm (dubbed EIG-AF-Hough) to accurately estimate the focus distance for each plane of a multi-plane object from the snapshot measurements. In the second step, nonlinear optimization is used to achieve super-resolution reconstruction from the same snapshot measurements. Experimental results on both simulated and real holographic scenarios demonstrate the effectiveness of our proposed framework for achieving autofocusing and super-resolution simultaneously in compressed holographic imaging.
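The sketch below illustrates the autofocusing stage only: a hologram is numerically refocused over a sweep of candidate distances via the standard angular-spectrum method, and the distance that maximizes a sharpness score is kept. A simple gradient-energy (Tenengrad) metric stands in for the paper's EIG-AF-Hough criterion, and all optical parameters are illustrative assumptions.

```python
# Autofocus sweep sketch: refocus a hologram at candidate depths, score sharpness.
import numpy as np

def angular_spectrum(field, z, wavelength, pixel):
    """Propagate a complex field by distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def sharpness(image):
    """Tenengrad focus metric: total squared gradient magnitude."""
    gy, gx = np.gradient(image)
    return np.sum(gx ** 2 + gy ** 2)

def autofocus(hologram, distances, wavelength=633e-9, pixel=3.45e-6):
    """Return the candidate distance whose refocused image is sharpest."""
    scores = [sharpness(np.abs(angular_spectrum(hologram, z, wavelength, pixel)))
              for z in distances]
    return distances[int(np.argmax(scores))]

holo = np.random.default_rng(0).standard_normal((256, 256)).astype(complex)  # placeholder
z_hat = autofocus(holo, np.linspace(0.01, 0.10, 50))
print(f"estimated focus distance: {z_hat * 1e3:.1f} mm")
```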
A new method of edge detection is proposed for the substation environment, enabling autonomous navigation of the substation inspection robot. First, the road image and related information are obtained using an image acquisition device. Second, the noise in a region of interest selected from the road image is removed with digital image processing algorithms, road edges are extracted with the Canny operator, and the road boundaries are fitted with the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travel deviation is obtained. The robot's walking route is then controlled according to the travel deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, with high accuracy and stable performance.
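A minimal sketch of this pipeline in OpenCV is given below: Canny edges inside a lower-half region of interest, probabilistic Hough line fitting, and the lateral deviation from the estimated lane center. The thresholds, ROI choice, and left/right split heuristic are assumptions for illustration, not the paper's tuned parameters.

```python
# Road-boundary detection sketch: ROI -> denoise -> Canny -> Hough -> deviation.
import cv2
import numpy as np

def road_deviation(frame_gray):
    """Estimate the robot's lateral offset (pixels) from the lane center."""
    h, w = frame_gray.shape
    roi = frame_gray[h // 2 :, :]                  # lower half as the region of interest
    roi = cv2.GaussianBlur(roi, (5, 5), 0)         # suppress sensor noise
    edges = cv2.Canny(roi, 50, 150)                # road-edge pixels (Canny operator)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    if lines is None:
        return None
    # Split detected segments into left/right boundaries by mean x position.
    xs = [(l[0][0] + l[0][2]) / 2.0 for l in lines]
    left = [x for x in xs if x < w / 2]
    right = [x for x in xs if x >= w / 2]
    if not left or not right:
        return None
    lane_center = (max(left) + min(right)) / 2.0   # innermost left/right boundaries
    return lane_center - w / 2.0                   # positive: lane center lies to the right

deviation = road_deviation(np.zeros((480, 640), dtype=np.uint8))
print("no boundaries detected" if deviation is None else f"deviation: {deviation:.1f} px")
```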
Electroencephalography (EEG) is typical time-series data, and designing an automatic detection model for EEG is of great significance for disease diagnosis. For example, EEG stands as one of the most potent diagnostic tools for epilepsy detection. Many studies have employed EEG to detect and classify epilepsy, yet these investigations have certain limitations. First, most existing research concentrates on the labels of sliced EEG signals, neglecting the epilepsy label associated with each time step in the original EEG signal (what we term fine-grained labels). Second, most of these studies use static graphs to depict EEG's spatial characteristics, thereby disregarding the dynamic interplay among EEG channels; consequently, the evolving nature of EEG structures may not be captured. In response to these challenges, we propose a novel seizure detection and classification framework, the dynamic temporal graph convolutional network (DTGCN). This method is specifically designed to model the interdependencies in the temporal and spatial dimensions of EEG signals. The proposed DTGCN model includes a unique seizure attention layer conceived to capture the distribution and diffusion patterns of epilepsy, as well as a graph structure learning layer to represent the dynamically evolving graph structure inherent in the data. We rigorously evaluated the proposed DTGCN model on a large publicly available dataset, TUSZ, consisting of 5499 EEGs. The experimental results demonstrate that DTGCN outperforms existing state-of-the-art methods in both efficiency and accuracy for seizure detection and classification.
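The toy layer below sketches the two ingredients this abstract highlights: a learned adjacency over EEG channels (graph structure learning) and per-time-step predictions (fine-grained labels), with a GRU supplying the temporal modeling. It illustrates the general temporal-GCN pattern only; the sizes and names are hypothetical, and this is not the authors' DTGCN.

```python
# Toy temporal GCN: learned channel adjacency + one prediction per time step.
import torch
import torch.nn as nn

class TinyTemporalGCN(nn.Module):
    """Illustrative temporal GCN with a learned adjacency and fine-grained outputs."""
    def __init__(self, n_channels, n_feats, hidden, n_classes):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(n_channels, hidden))  # graph learning
        self.gcn = nn.Linear(n_feats, hidden)       # shared node feature transform
        self.gru = nn.GRU(n_channels * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)    # one prediction per time step

    def forward(self, x):                           # x: (batch, time, channels, feats)
        # Graph structure learning: adjacency from node-embedding similarity.
        adj = torch.softmax(self.node_emb @ self.node_emb.t(), dim=-1)
        h = torch.relu(adj @ self.gcn(x))           # graph convolution at every step
        b, t, c, d = h.shape
        h, _ = self.gru(h.reshape(b, t, c * d))     # temporal modeling across steps
        return self.head(h)                         # (batch, time, n_classes) logits

model = TinyTemporalGCN(n_channels=19, n_feats=8, hidden=32, n_classes=2)
logits = model(torch.randn(4, 100, 19, 8))          # fine-grained: one label per step
print(logits.shape)                                 # torch.Size([4, 100, 2])
```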