Adaptive user interfaces for relating high-level concepts to low-level photographic parameters

2011 
Common controls for photographic editing can be difficult to use and have a significant learning curve. Often, a user does not know a direct mapping from a high-level concept (such as "soft") to the available parameters or controls. In addition, many concepts are subjective in nature, and the appropriate mapping may vary from user to user. To overcome these problems, we propose a system that can quickly learn a mapping from a high-level subjective concept onto low-level image controls using machine learning techniques. To learn such a concept, the system shows the user a series of training images that are generated by modifying a seed image along different dimensions (e.g., color, sharpness), and collects the user's ratings of how well each training image matches the concept. Since it is known precisely how each modified example differs from the original, the system can determine the correlation between the user ratings and the image parameters to generate a controller tailored to the concept for the given user. The end result, a personalized image controller, is applicable to a variety of concepts. We have demonstrated the utility of this approach in relating low-level parameters, such as color balance and sharpness, to simple concepts, such as "lightness" and "crispness," as well as to more complex and subjective concepts, such as "pleasantness." We have also applied the proposed approach to relate subband statistics (variance) to the perceived roughness of visual textures (from the CUReT database).
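The core idea of correlating user ratings with known parameter offsets can be illustrated with a minimal sketch. The abstract does not specify the exact learner, so the example below assumes a simple linear least-squares fit, and the parameter names (brightness, saturation, sharpness) and the simulated user are illustrative placeholders only.

```python
# Minimal sketch (not the authors' implementation) of learning a one-dimensional
# "concept slider" from user ratings of perturbed images, assuming a linear
# relationship between parameter offsets and ratings.
import numpy as np

rng = np.random.default_rng(0)

# 1. Perturb the seed image's parameters along several dimensions.
#    Each row is a known parameter offset applied to the seed image.
n_train, n_params = 40, 3              # e.g. brightness, saturation, sharpness
offsets = rng.uniform(-1.0, 1.0, size=(n_train, n_params))

# 2. Collect user ratings of how well each perturbed image matches the concept.
#    Here we simulate a user whose hidden preference direction is `true_w`.
true_w = np.array([0.8, -0.3, 0.5])
ratings = offsets @ true_w + 0.1 * rng.normal(size=n_train)

# 3. Correlate ratings with the known offsets: least-squares regression yields
#    a weight per parameter, i.e. the direction in parameter space that most
#    increases the concept for this user.
w, *_ = np.linalg.lstsq(offsets, ratings, rcond=None)
w /= np.linalg.norm(w)                  # unit direction for the slider

def concept_slider(seed_params, t):
    """Move the seed image's parameters along the learned concept direction.

    t is the single high-level control exposed to the user (e.g. -1 to +1).
    """
    return seed_params + t * w

print("learned direction:", np.round(w, 2))
print("params at t=0.5:", concept_slider(np.zeros(n_params), 0.5))
```

The learned weight vector collapses several low-level controls into a single personalized slider for the chosen concept; re-running the rating step for a different user or concept produces a different direction.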