A Recommender System’s recommendations will each carry a certain level of uncertainty. The quantification of this uncertainty can be useful in a variety of ways. Estimates of uncertainty might be used externally; for example, showing them to the user to increase user trust in the abilities of the system. They may also be used internally; for example, deciding the balance between ‘safe’ and less safe recommendations. In this work, we explore several methods for estimating uncertainty. The novelty comes from proposing methods that work in the implicit feedback setting. We use experiments on two datasets to compare a number of recommendation algorithms that are modified to perform uncertainty estimation. In our experiments, we show that some of these modified algorithms are less accurate than their unmodified counterparts, while others are actually more accurate. We also show which of these methods are best at enabling the recommender to be ‘aware’ of which of its recommendations are likely to be correct and which are likely to be wrong.
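The abstract does not name the specific uncertainty-estimation methods, so the following is only an illustrative sketch of one common approach: train an ensemble of implicit-feedback models on bootstrap resamples and treat the spread of their scores for an item as that recommendation's uncertainty. All function names and data below are hypothetical stand-ins.

```python
# Illustrative sketch only: bootstrap-ensemble uncertainty for implicit feedback.
# The real paper's methods are not specified here; train_mf is a placeholder model.
import numpy as np

rng = np.random.default_rng(0)

def train_mf(interactions, n_items, n_factors=16):
    """Stand-in for fitting any implicit-feedback model; returns random item factors here."""
    return rng.normal(size=(n_items, n_factors))

def ensemble_scores(user_vec, interactions, n_items, n_models=10):
    """Train several models on bootstrap resamples and collect their item scores."""
    scores = []
    for _ in range(n_models):
        idx = rng.integers(0, len(interactions), size=len(interactions))
        sample = [interactions[i] for i in idx]          # bootstrap resample
        item_factors = train_mf(sample, n_items)
        scores.append(item_factors @ user_vec)
    scores = np.stack(scores)                            # shape: (n_models, n_items)
    return scores.mean(axis=0), scores.std(axis=0)       # relevance estimate, uncertainty

mean, std = ensemble_scores(user_vec=rng.normal(size=16),
                            interactions=[(0, 1), (0, 3), (0, 7)],
                            n_items=20)
top_n = np.argsort(-mean)[:5]
print(list(zip(top_n.tolist(), mean[top_n].round(2), std[top_n].round(2))))
```

Under this scheme, a recommender could, for instance, prefer high-mean, low-spread items when it wants ‘safe’ recommendations, and surface the spread to the user as a confidence indicator.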
Intent-aware methods for recommendation diversification seek to ensure that the recommended items cover so-called aspects, which are assumed to define the user's tastes and interests. Most typically, aspects are item features such as movie or music genres. In recent work, we presented a novel intent-aware diversification method, called Subprofile-Aware Diversification (SPAD). In SPAD, aspects are subprofiles of the active user's profile, detected using an item-item similarity method. In this paper, we propose Community-Aware Diversification (CAD), in which aspects are again subprofiles but are detected indirectly through users who are similar to the active user. We evaluate CAD's precision and diversity on four different datasets, and compare it with SPAD and an intent-aware diversification method called xQuAD. We show that SPAD outperforms CAD on two of the datasets, while CAD outperforms SPAD on the other two. For all datasets, both CAD and SPAD achieve higher precision than xQuAD. When it comes to diversity, xQuAD sometimes produces more diverse recommendations, but it is more prone to paying for this diversity with a loss of precision. Arguably, SPAD and CAD strike a better balance between the two.
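For readers unfamiliar with intent-aware re-ranking, the sketch below shows the generic greedy, xQuAD-style objective such methods share: each next item is chosen to balance its base relevance against how much it covers aspects not yet covered. It is a schematic under assumed inputs (aspect probabilities supplied externally), not the exact SPAD or CAD formulation; in SPAD and CAD the aspects would be the detected subprofiles rather than genres.

```python
# Schematic of xQuAD-style greedy re-ranking. Aspect probabilities are assumed given;
# in SPAD/CAD the aspects would be subprofiles detected from the user's profile.
def rerank(candidates, base_score, p_aspect_given_user, p_item_given_aspect,
           n=10, lam=0.5):
    """Greedily build a top-n that trades off relevance against aspect coverage."""
    candidates = list(candidates)
    selected = []
    remaining_mass = dict(p_aspect_given_user)   # residual weight of aspects not yet covered
    while candidates and len(selected) < n:
        def gain(i):
            coverage = sum(remaining_mass[a] * p_item_given_aspect.get((i, a), 0.0)
                           for a in remaining_mass)
            return (1 - lam) * base_score[i] + lam * coverage
        best = max(candidates, key=gain)
        selected.append(best)
        candidates.remove(best)
        for a in remaining_mass:                 # discount aspects the chosen item covers
            remaining_mass[a] *= 1 - p_item_given_aspect.get((best, a), 0.0)
    return selected

# Toy usage with made-up scores: with a strong diversity weight, the second pick
# covers the so-far-uncovered "jazz" aspect instead of the higher-scoring "i2".
print(rerank(candidates=["i1", "i2", "i3"],
             base_score={"i1": 0.9, "i2": 0.8, "i3": 0.4},
             p_aspect_given_user={"rock": 0.7, "jazz": 0.3},
             p_item_given_aspect={("i1", "rock"): 1.0, ("i2", "rock"): 1.0,
                                  ("i3", "jazz"): 1.0},
             n=2, lam=0.7))
```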
We propose denoising dictionary learning (DDL), a simple yet effective technique as a protection measure against adversarial perturbations. We examine denoising dictionary learning on MNIST and CIFAR10 perturbed by two different perturbation techniques, the fast gradient sign method (FGSM) and Jacobian-based saliency maps (JSMA). We evaluate it against five different deep neural networks (DNNs) representing the building blocks of most recent architectures, each a successive step up in model complexity from the previous one. We show that each model tends to capture different representations depending on its architecture. For each model we record its accuracy both on the perturbed test data previously misclassified with high confidence and on the denoised data after reconstruction using dictionary learning. The reconstruction quality of each data point is assessed by means of the Peak Signal to Noise Ratio (PSNR) and the Structural Similarity Index (SSI). We show that after applying DDL the reconstruction of the original data point from a noisy
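As a rough illustration of the pipeline described above, the sketch below perturbs a synthetic stand-in image with FGSM, reconstructs it using a dictionary learned on clean data via scikit-learn's MiniBatchDictionaryLearning, and scores the reconstruction with PSNR and SSIM. The data, gradients, and hyperparameters are placeholders, and this is not the paper's exact DDL procedure.

```python
# Hedged sketch of the described pipeline: FGSM perturbation -> dictionary-based
# reconstruction -> PSNR/SSIM scoring. All data and hyperparameters are placeholders.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)

def fgsm(x, grad_wrt_x, eps=0.1):
    """Fast gradient sign method: step in the sign of the loss gradient."""
    return np.clip(x + eps * np.sign(grad_wrt_x), 0.0, 1.0)

# Stand-ins for clean training images (flattened 28x28), one test image, and its gradient.
X_clean = rng.random((200, 784))
x = rng.random(784)
grad = rng.standard_normal(784)       # would come from the attacked DNN in practice

x_adv = fgsm(x, grad)

# Learn a dictionary on clean data, then reconstruct the perturbed image sparsely.
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=10, random_state=0)
dico.fit(X_clean)
code = dico.transform(x_adv.reshape(1, -1))
x_rec = (code @ dico.components_).ravel()

print("PSNR:", peak_signal_noise_ratio(x, x_rec, data_range=1.0))
print("SSIM:", structural_similarity(x.reshape(28, 28), x_rec.reshape(28, 28),
                                     data_range=1.0))
```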
Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Recent findings suggest that a single explainer may not meet the diverse needs of multiple users in an AI system; indeed, even individual users may require multiple explanations. This highlights the necessity for a “multi-shot” approach, employing a combination of explainers to form what we introduce as an “explanation strategy”. Tailored to a specific user or a user group, an “explanation experience” describes interactions with personalised strategies designed to enhance their AI decision-making processes. The iSee platform is designed for the intelligent sharing and reuse of explanation experiences, using Case-based Reasoning to advance best practices in XAI. The platform provides tools that enable AI system designers, i.e. design users, to design and iteratively revise the most suitable explanation strategy for their AI system to satisfy end-user needs. All knowledge generated within the iSee platform is formalised by the iSee ontology for interoperability. We use a summative mixed methods study protocol to evaluate the usability and utility of the iSee platform with six design users across varying levels of AI and XAI expertise. Our findings confirm that the iSee platform generalises effectively across applications and has the potential to promote the adoption of XAI best practices.
For group recommendations, one objective is to recommend an ordered set of items, a top-N, to a group such that each individual recommendation is relevant for everyone. A common way to do this is to select items on which the group can agree, using so-called 'aggregation strategies'. One weakness of these aggregation strategies is that they select items independently of each other. They therefore cannot guarantee properties, such as fairness, that apply to the set of recommendations as a whole.
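As a concrete illustration of such aggregation strategies, the snippet below uses made-up predicted ratings and textbook strategies (Average, Least Misery, Most Pleasure), which are not necessarily the ones studied here. It shows how each candidate item receives a single group score and is then ranked independently of the other selected items, which is exactly why set-level properties such as fairness cannot be guaranteed.

```python
# Illustrative aggregation strategies for group top-N (made-up scores).
# Each strategy collapses individual predictions into one group score per item;
# items are then ranked independently of one another.
import numpy as np

# rows = group members, columns = candidate items (predicted relevance scores)
scores = np.array([
    [4.5, 3.0, 2.0, 5.0],
    [1.0, 4.0, 4.5, 4.0],
    [3.5, 3.5, 4.0, 1.5],
])

strategies = {
    "average":       scores.mean(axis=0),   # overall group satisfaction
    "least_misery":  scores.min(axis=0),    # protect the least-happy member
    "most_pleasure": scores.max(axis=0),    # favour the happiest member
}

N = 2
for name, agg in strategies.items():
    top_n = np.argsort(-agg)[:N]             # each item chosen independently of the others
    print(name, "->", top_n.tolist())
```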