Deep Learning Image Transformation under Radon Transform
2020
Previously, we showed that an image's location, size, and even a constant attenuation factor can be estimated by deep learning directly from the image's Radon-transformed representation. In this project, we go a step further and estimate several other mathematical transformation parameters under the Radon transform. The motivation is that many medical imaging problems amount to estimating similar invariance parameters. Such estimates are typically computed after reconstructing the image from detector data, which live in the Radon-transformed space, and the reconstruction process introduces additional noise of its own. Deep learning offers a framework for estimating the required information directly from the detector data. A specific case of interest is dynamic nuclear imaging, where quantitative estimates of tracer activity in target tissues are sought. Motion inherent in biological systems, e.g., breathing motion during in vivo imaging, can be modeled as a transformation in the spatial domain. Motion is particularly prevalent in dynamic imaging, while tracer dynamics within the imaged object act as a second source of transformation, in the time domain. Our neural network model attempts to disentangle the two types of transformation (motion and intensity-variation dynamics), i.e., it tries to learn one type of transformation while ignoring the other.
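The detector data discussed above are sinograms: collections of line-integral projections of the image taken at many angles. As an illustration only (not the paper's pipeline), the following minimal NumPy sketch computes a naive discrete Radon transform by rotating the sampling grid and summing along columns; the function name `radon` and the nearest-neighbour sampling are our own assumptions for this toy example.

```python
import numpy as np

def radon(image, angles_deg):
    """Naive discrete Radon transform of a square image.

    For each angle, the sampling grid is rotated about the image
    center (nearest-neighbour interpolation) and intensities are
    summed along columns, giving one projection per angle.
    """
    n = image.shape[0]
    c = (n - 1) / 2.0                      # rotation center
    ys, xs = np.mgrid[0:n, 0:n]
    xs = xs - c
    ys = ys - c
    sino = np.zeros((len(angles_deg), n))
    for k, a in enumerate(angles_deg):
        t = np.deg2rad(a)
        # inverse-rotate the sampling grid and read off the image
        xr = np.cos(t) * xs + np.sin(t) * ys + c
        yr = -np.sin(t) * xs + np.cos(t) * ys + c
        xi = np.clip(np.round(xr).astype(int), 0, n - 1)
        yi = np.clip(np.round(yr).astype(int), 0, n - 1)
        inside = (xr >= 0) & (xr <= n - 1) & (yr >= 0) & (yr <= n - 1)
        rotated = np.where(inside, image[yi, xi], 0.0)
        sino[k] = rotated.sum(axis=0)      # line integrals per column
    return sino

# A centered point source projects to the central detector bin at
# every angle, which is the kind of Radon-domain regularity a network
# can exploit when estimating spatial-transformation parameters.
angles = np.arange(0, 180, 15)
img = np.zeros((33, 33))
img[16, 16] = 1.0
sino = radon(img, angles)
```

A spatial shift of the object traces a sinusoid in this sinogram, which is why spatial-domain transformations remain identifiable in the detector domain.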