Explaining anomalies in feature models

2016 
The development of variable software in general, and of feature models in particular, is an error-prone and time-consuming task. It becomes increasingly challenging with industrial-size models containing hundreds or thousands of features and constraints. Each change may lead to anomalies in the feature model, such as features that become impossible to select. While the detection of anomalies is well-researched, providing explanations for them remains a challenge. Explanations must be as accurate and understandable as possible to support the developer in repairing the source of an error. We propose an efficient and generic algorithm for explaining different anomalies in feature models. Additionally, we support the developer by computing short explanations expressed in a user-friendly manner and by emphasizing the parts of an explanation that are most likely to be the cause of an anomaly. We provide an open-source implementation in FeatureIDE and show its scalability for industrial-size feature models.
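To illustrate the kind of anomaly and explanation the abstract refers to, the following is a minimal, self-contained sketch, not the paper's actual algorithm: a tiny feature model is encoded as propositional constraints, a dead feature (one that can never be selected) is detected by brute-force search, and an explanation is produced as a smallest set of constraints that already makes the feature unselectable. The feature names, constraints, and helper functions are hypothetical and chosen for illustration only.

```python
# Illustrative sketch (not the paper's algorithm): dead-feature detection and
# explanation for a tiny, hypothetical feature model.
from itertools import combinations, product

FEATURES = ["Root", "A", "B", "C"]

# Feature-model semantics as CNF clauses over feature literals
# ("X" = selected, "-X" = deselected). Hypothetical model:
# Root is mandatory; A, B, C are optional children; A requires B; B excludes A.
CONSTRAINTS = {
    "Root mandatory":   [["Root"]],
    "A child of Root":  [["-A", "Root"]],
    "B child of Root":  [["-B", "Root"]],
    "C child of Root":  [["-C", "Root"]],
    "A requires B":     [["-A", "B"]],
    "B excludes A":     [["-B", "-A"]],
}

def satisfiable(clauses, forced=()):
    """Brute-force satisfiability over the (tiny) feature set, with forced literals."""
    for bits in product([True, False], repeat=len(FEATURES)):
        assign = dict(zip(FEATURES, bits))
        def holds(lit):
            return not assign[lit[1:]] if lit.startswith("-") else assign[lit]
        if all(holds(l) for l in forced) and \
           all(any(holds(l) for l in clause) for clause in clauses):
            return True
    return False

def dead_features():
    """A feature is dead if it cannot be selected in any valid configuration."""
    all_clauses = [c for cs in CONSTRAINTS.values() for c in cs]
    return [f for f in FEATURES if not satisfiable(all_clauses, forced=(f,))]

def explain_dead(feature):
    """Smallest set of constraints that already makes `feature` unselectable."""
    names = list(CONSTRAINTS)
    for size in range(1, len(names) + 1):
        for subset in combinations(names, size):
            clauses = [c for n in subset for c in CONSTRAINTS[n]]
            if not satisfiable(clauses, forced=(feature,)):
                return subset
    return ()

if __name__ == "__main__":
    for f in dead_features():
        print(f"{f} is dead because of:", ", ".join(explain_dead(f)))
```

Running this prints that feature A is dead because of the constraints "A requires B" and "B excludes A", which is the flavor of short, constraint-level explanation the paper aims to compute efficiently for industrial-size models.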