Material Classification of Magnetic Resonance Volume Data

1992 
A major unsolved problem in computer graphics is that of making high-quality models. Traditionally, models have consisted of interactively or algorithmically described collections of graphics primitives such as polygons. The process of constructing these models is painstaking and often misses features and behavior that we wish to model. Models extracted from volume data collected from real, physical objects have the potential to show features and behavior that are difficult to capture using these traditional modeling methods. We use vector-valued magnetic resonance volume data in this thesis. The process of extracting models from such data involves four main steps: collecting the sampled volume data; preprocessing it to reduce artifacts from the collection process; classifying materials within the data; and creating either a rigid geometric model that is static, or a flexible, dynamic model that can be simulated. In this thesis we focus on the first three steps.

We present guidelines and techniques for collecting and processing magnetic resonance data to meet the needs of the later steps. Our material classification and model extraction techniques work better when the data values for a given material are constant throughout the dataset, when data values for different materials are different, and when the dataset is free of aliasing artifacts and noise. We present a new material-classification method that operates on vector-valued volume data. The method produces a continuous probability function for each material over the volume of the dataset, and requires no expert interaction to teach it different material classes. It operates by fitting peaks in the histogram of a collected dataset using parameterized Gaussian bumps, and by using Bayes' law to calculate material probabilities, with each Gaussian bump representing one material.

To illustrate the classification method, we apply it to real magnetic resonance data of a human head, a human hand, a banana, and a jade plant. From the classified data, we produce "computationally stained" slices that discriminate among materials better than do the original grey-scale versions. We also generate volume-rendered images of classified datasets clearly showing different anatomical features of various materials. Finally, we extract preliminary static and dynamic geometric models of different tissues.
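The histogram-fitting and Bayesian step described above can be sketched compactly. The following Python code is a minimal, hypothetical illustration rather than the thesis implementation: it operates on scalar instead of vector-valued data, assumes the number of materials is known in advance, and the function names, initial-guess strategy, and synthetic test volume are assumptions introduced here. It builds a histogram of the volume, fits one parameterized Gaussian bump per material to the histogram peaks, and then uses Bayes' law to convert the fitted bumps into a per-voxel probability for each material.

```python
# Minimal sketch of histogram-based material classification:
# fit Gaussian "bumps" to peaks of a value histogram, then apply Bayes' law
# to obtain per-material probabilities. Not the thesis implementation;
# names and parameters are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amplitude, mean, sigma):
    """One parameterized Gaussian bump; each bump models one material."""
    return amplitude * np.exp(-0.5 * ((v - mean) / sigma) ** 2)

def fit_material_bumps(volume, n_materials, bins=256):
    """Fit one Gaussian per assumed material to the dataset histogram."""
    counts, edges = np.histogram(volume.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def mixture(v, *params):
        total = np.zeros_like(v)
        for i in range(n_materials):
            a, m, s = params[3 * i:3 * i + 3]
            total += gaussian(v, a, m, s)
        return total

    # Crude initial guess: means spaced evenly across the value range.
    p0 = []
    span = centers.max() - centers.min()
    for m in np.linspace(centers.min(), centers.max(), n_materials + 2)[1:-1]:
        p0 += [counts.max(), m, span / (4 * n_materials)]
    params, _ = curve_fit(mixture, centers, counts, p0=p0, maxfev=20000)
    return np.array(params).reshape(n_materials, 3)

def material_probabilities(volume, bumps):
    """Bayes' law: P(material | value) is the fitted bump evaluated at the
    voxel value, normalized over all materials (the fitted amplitudes play
    the role of priors)."""
    likelihoods = np.stack(
        [gaussian(volume, a, m, s) for a, m, s in bumps], axis=-1)
    total = likelihoods.sum(axis=-1, keepdims=True)
    return likelihoods / np.maximum(total, 1e-12)  # avoid divide-by-zero

if __name__ == "__main__":
    # Synthetic stand-in for an MR volume containing two materials.
    rng = np.random.default_rng(0)
    vol = np.concatenate([rng.normal(40, 5, 40_000),
                          rng.normal(120, 10, 60_000)]).reshape(100, 1000)
    bumps = fit_material_bumps(vol, n_materials=2)
    probs = material_probabilities(vol, bumps)   # shape (100, 1000, 2)
    print(probs.mean(axis=(0, 1)))               # rough per-material fractions
```

Because the probabilities vary continuously with the data value, each voxel can contribute to several materials at once, which is what allows the "computationally stained" slices and the later model-extraction steps to work with partial-volume voxels rather than hard labels.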