Improving Contextual Self-Organizing Map Solution Times Using GPU Parallel Training

2014 
Visualizing n-dimensional design or optimization data is very challenging with current methods and technologies. Many current techniques apply dimensionality reduction or other “compression” methods to show views of the data in two or three dimensions, leaving designers to infer the relationships among the other independent and dependent variables being considered. Contextual self-organizing maps offer a way to view and interact with all dimensions of design data simultaneously. Contextual self-organizing maps are a form of neural network that can be used to understand the complex relationships within large amounts of high-dimensional data, as shown in previous work by the authors. That original formulation of contextual self-organizing maps used a sequential training method that required significant training time on large datasets. Batch self-organizing maps provide a data-independent training method that allows the training process to be parallelized. This research parallelizes the batch self-organizing map by combining network-partitioning and data-partitioning methods with CUDA on the GPU to achieve significant training time reductions.
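To illustrate the approach the abstract describes, the following is a minimal CUDA sketch of one batch SOM training step that combines data partitioning (one thread per input vector for the best-matching-unit search) with network partitioning (one thread per map neuron for the batch weight update). The kernel names (findBMU, batchUpdate), map and input dimensions, Gaussian neighborhood, and decay schedule are illustrative assumptions, not the paper's implementation.

```cuda
// Hypothetical batch SOM training step on the GPU.
// Phase 1 (data-partitioned): one thread per input vector finds its BMU.
// Phase 2 (network-partitioned): one thread per map neuron computes the
// neighborhood-weighted average that defines the batch update.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>

#define DIM 8        // input dimensionality (assumed)
#define MAP_W 16     // map width (assumed)
#define MAP_H 16     // map height (assumed)
#define N_NEURONS (MAP_W * MAP_H)

// Phase 1: each thread handles one input vector and records the index of
// its best-matching unit (nearest neuron in weight space).
__global__ void findBMU(const float *inputs, const float *weights,
                        int *bmu, int nInputs)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nInputs) return;
    float bestDist = 1e30f;
    int best = 0;
    for (int j = 0; j < N_NEURONS; ++j) {
        float d = 0.0f;
        for (int k = 0; k < DIM; ++k) {
            float diff = inputs[i * DIM + k] - weights[j * DIM + k];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; best = j; }
    }
    bmu[i] = best;
}

// Phase 2: each thread handles one neuron and accumulates the Gaussian-
// neighborhood-weighted sum of all inputs, then normalizes. This is the
// standard batch SOM update: w_j = sum_i h(j, bmu_i) x_i / sum_i h(j, bmu_i).
__global__ void batchUpdate(const float *inputs, float *weights,
                            const int *bmu, int nInputs, float sigma)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= N_NEURONS) return;
    int jx = j % MAP_W, jy = j / MAP_W;
    float num[DIM] = {0.0f};
    float den = 0.0f;
    for (int i = 0; i < nInputs; ++i) {
        int b = bmu[i];
        int bx = b % MAP_W, by = b / MAP_W;
        float gridDist2 = (float)((jx - bx) * (jx - bx) + (jy - by) * (jy - by));
        float h = expf(-gridDist2 / (2.0f * sigma * sigma));
        for (int k = 0; k < DIM; ++k)
            num[k] += h * inputs[i * DIM + k];
        den += h;
    }
    if (den > 0.0f)
        for (int k = 0; k < DIM; ++k)
            weights[j * DIM + k] = num[k] / den;
}

int main()
{
    const int nInputs = 1024;
    std::vector<float> hIn(nInputs * DIM), hW(N_NEURONS * DIM);
    for (auto &v : hIn) v = (float)rand() / RAND_MAX;   // toy random data
    for (auto &v : hW)  v = (float)rand() / RAND_MAX;   // random weight init

    float *dIn, *dW; int *dBmu;
    cudaMalloc(&dIn, hIn.size() * sizeof(float));
    cudaMalloc(&dW,  hW.size() * sizeof(float));
    cudaMalloc(&dBmu, nInputs * sizeof(int));
    cudaMemcpy(dIn, hIn.data(), hIn.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dW,  hW.data(),  hW.size() * sizeof(float),  cudaMemcpyHostToDevice);

    float sigma = 4.0f;                       // neighborhood radius (assumed)
    for (int epoch = 0; epoch < 20; ++epoch) {
        findBMU<<<(nInputs + 255) / 256, 256>>>(dIn, dW, dBmu, nInputs);
        batchUpdate<<<(N_NEURONS + 255) / 256, 256>>>(dIn, dW, dBmu, nInputs, sigma);
        sigma *= 0.9f;                        // simple decay schedule (assumed)
    }
    cudaDeviceSynchronize();
    printf("training step loop finished\n");
    cudaFree(dIn); cudaFree(dW); cudaFree(dBmu);
    return 0;
}
```

Because each batch-update thread only reads the shared BMU array and writes its own neuron's weights, no atomics are needed, which is what makes the batch formulation attractive for GPU parallelization compared with the sequential (online) SOM.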