FairRover: explorative model building for fair and responsible machine learning

2021 
The potential harms and drawbacks of automated decision making have become a pressing challenge as data science blends into our lives. In particular, fairness issues with deployed machine learning models have drawn significant attention from the research community. Despite the wealth of algorithmic fairness work across research communities, data scientists in practice still face many roadblocks in ensuring the fairness of their machine learning models. This is primarily because no end-to-end system exists that guides users through building a fair machine learning model in a responsible way, from model auditing, to model explanation, to bias mitigation. We propose FairRover, an explorative model-building system for responsible fair model building. FairRover guides users in (1) discovering potential biases in the model; (2) explaining the discovered biases to help users understand their potential causes; and (3) mitigating the most important biases selected by the users. Because of the impossibility theorem of fairness and the well-known trade-off between fairness and accuracy, a model that is simultaneously completely fair and fully accurate is generally unattainable. Therefore, this responsible model-building process is naturally performed iteratively until a satisfactory trade-off is reached, with human users kept in the loop to make decisions guided by FairRover. We demonstrate a case study on the Adult Census dataset, showing how FairRover guides users in iteratively building a fair income prediction model in a responsible way. We also discuss the current limitations of FairRover and future work.
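As an illustration of the kind of group-fairness audit that step (1) of the described workflow performs, the sketch below computes two common bias measures (demographic parity difference and the equal-opportunity gap) on toy audit data with a binary sensitive attribute. The function names and the data are illustrative assumptions made here; they are not FairRover's actual API or results.

```python
# Illustrative sketch only: not FairRover's interface.
# Shows a minimal group-fairness audit on hypothetical predictions.
import pandas as pd

def demographic_parity_difference(df, group_col, pred_col):
    """Absolute gap in positive-prediction rates between groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return abs(rates.max() - rates.min())

def equal_opportunity_gap(df, group_col, label_col, pred_col):
    """Absolute gap in true-positive rates between groups."""
    positives = df[df[label_col] == 1]
    tpr = positives.groupby(group_col)[pred_col].mean()
    return abs(tpr.max() - tpr.min())

# Hypothetical audit data: sensitive attribute, true label, model prediction.
audit = pd.DataFrame({
    "sex":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "label": [1,   0,   1,   1,   1,   0,   1,   0],
    "pred":  [0,   0,   1,   1,   1,   0,   1,   0],
})

print("Demographic parity difference:",
      demographic_parity_difference(audit, "sex", "pred"))
print("Equal-opportunity gap:",
      equal_opportunity_gap(audit, "sex", "label", "pred"))
```

In the iterative workflow the abstract describes, gaps like these would be surfaced to the user, explained, and then traded off against accuracy during mitigation.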