Automated measures for interpretable dimensionality reduction for visual classification: A user study

2011 
This paper studies the interpretability of transformations of labeled higher-dimensional data into 2D representations (scatterplots) for visual classification. In this context, the term interpretability has two components: the interpretability of the visualization (the image itself) and the interpretability of the visualization axes (the data transformation functions). We define a data transformation function as any linear or non-linear function of the original variables that maps the data into 1D. Even for a small dataset, the space of possible data transformations is beyond the limits of manual exploration, so it is important to develop automated techniques that capture both aspects of interpretability and can guide the search process without human intervention. The goal of the search process is to find a small number of interpretable data transformations for users to explore. We briefly discuss how we used such automated measures in an evolutionary-computing-based data dimensionality reduction application for visual analytics. We then present a two-part user study in which we separately investigated how humans rated visualizations of labeled data and the comprehensibility of mathematical expressions that could serve as data transformation functions. In the first part, we compared human perception with a number of automated measures from the machine learning and visual analytics literature. In the second part, we studied how various structural properties of an expression relate to its interpretability.
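To make the setup concrete, the following is a minimal sketch of the pipeline the abstract describes: two candidate transformation functions (one linear, one non-linear) map a labeled dataset to a 2D scatterplot, and an automated measure scores how well the classes separate. The paper compares several such measures; the silhouette score below is only a stand-in, and the two axis functions are hypothetical examples, not the paper's.

    # Sketch: score a candidate pair of 1D data transformations by the
    # class separability of the 2D scatterplot they produce.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.metrics import silhouette_score

    X, y = load_iris(return_X_y=True)

    # Two hypothetical transformation functions, each mapping the
    # original variables into 1D (one linear, one non-linear).
    def axis_linear(X):
        return X[:, 0] - X[:, 2]            # linear combination

    def axis_nonlinear(X):
        return X[:, 1] * np.log1p(X[:, 3])  # non-linear combination

    # The 2D scatterplot coordinates produced by the two functions.
    P = np.column_stack([axis_linear(X), axis_nonlinear(X)])

    # Automated stand-in measure: how well the class labels separate
    # in the resulting scatterplot (higher is better).
    print(f"class separability: {silhouette_score(P, y):.3f}")

In a search setting like the evolutionary-computing application the abstract mentions, a score of this kind could serve as the fitness that ranks candidate transformation pairs without human intervention.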
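The second part of the study concerns structural properties of the transformation expressions themselves. As an illustration only, the sketch below computes two such properties, node count and tree depth, for a candidate expression; the paper investigates which properties actually correlate with human-judged comprehensibility, and these two are assumptions chosen for the example.

    # Sketch: structural properties of a candidate transformation
    # expression that plausibly relate to its comprehensibility.
    import sympy as sp

    x1, x2, x3 = sp.symbols("x1 x2 x3")
    expr = x1 * sp.log(1 + x3) - x2  # hypothetical candidate expression

    def node_count(e):
        # Total nodes in the expression tree (leaves count as 1).
        return 1 + sum(node_count(a) for a in e.args)

    def depth(e):
        # Longest root-to-leaf path in the expression tree.
        return 1 + max((depth(a) for a in e.args), default=0)

    print(f"nodes={node_count(expr)}, depth={depth(expr)}")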