Should I See or Should I Go: Automatic Detection of Sensitive Media in Messaging Apps
2021
A very large amount of multimedia data is continually shared through social networks. In these public spaces, administrators are legally responsible for moderating and controlling the content uploaded or posted to their platforms. However, media traffic in some private spaces, such as chat rooms and messaging platforms, is often protected, sometimes by end-to-end encryption, and is therefore not subject to this kind of monitoring. This makes such spaces prone to the spread of inappropriate media, such as pornography, violence, or other potentially offensive content, without the consent of recipients. This kind of exposure is especially concerning given the vulnerability of children in these environments. Parental control in these settings is often invasive and, due to the lack of other available alternatives, completely disregards children's right to privacy. In this work, we propose, as an alternative to such invasive parental interventions, a self-monitoring control for video content in messaging applications that preserves the interlocutors' privacy. Our approach is based on a Convolutional Neural Network (CNN) that classifies video content according to its appropriateness. In this video classification task, our model reached an F1 score of 98.95% for the appropriate class and 98.94% for the inappropriate class. Our approach also allows a simple extension for classifying any type of media on mobile devices.
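As background on the reported metric, the per-class F1 score is the harmonic mean of precision and recall computed from that class's confusion-matrix counts. A minimal sketch, using hypothetical counts rather than the paper's actual data:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Per-class F1 from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one class (e.g. "inappropriate"); with only one
# false positive and one false negative out of ~96 positives, F1 is ~0.99,
# comparable in magnitude to the scores reported in the abstract.
print(round(f1_score(95, 1, 1), 4))  # → 0.9896
```

Reporting F1 separately for each class, as the abstract does, guards against a high score being driven by the majority class alone.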