Investigating Google dashboard's explainability to support individual privacy decision making

2019 
Advances in information technology often overwhelm users with complex privacy and security decisions and make the collection and use of personal data largely invisible. In this scenario, data collection can introduce risks and can be used to manipulate and influence the decision-making process. This research draws on concepts from the emerging field of Human Data Interaction (HDI), which proposes placing the human at the center of the data flow and providing mechanisms for citizens to interact explicitly with the data collected about them. We explored explanation as a promising mechanism for transparency in automated systems. In the first step, we applied the Semiotic Inspection Method (SIM) longitudinally to investigate how explanations, used as an interactive feature, can help or hinder users in making privacy decisions on Google services. In the second step, we conducted an empirical study in which users assessed whether these explanations were satisfactory and whether they felt secure (or insecure) in the decision-making process. By comparing the results of the two steps, we found that even in a large company like Google, the right to explanation is not guaranteed. Google neither makes its data processing transparent to users nor provides satisfactory explanations of how its services use individual data. Consequently, the lack of coherent, detailed, and transparent explanations hampers users' ability to make good and safe decisions.