Abstract
Background: Clinical decision support systems (CDSSs) based on routine care data and using artificial intelligence (AI) are increasingly being developed. Previous studies have focused largely on the technical aspects of AI, but the acceptability of these technologies to patients remains unclear.

Objective: We aimed to investigate whether patient-physician trust is affected when medical decision-making is supported by a CDSS.

Methods: We conducted a vignette study among the patient panel (N=860) of the University Medical Center Utrecht, the Netherlands. Patients were randomly assigned to 1 of 4 groups: the intervention or control group of either the high-risk or the low-risk case. In both cases, a physician made a treatment decision with (intervention groups) or without (control groups) the support of a CDSS. Using a questionnaire with a 7-point Likert scale (1 = "strongly disagree," 7 = "strongly agree"), we collected data on patient-physician trust in 3 dimensions: competence, integrity, and benevolence. We assessed differences in patient-physician trust between the control and intervention groups per case using Mann-Whitney U tests (a minimal sketch of this comparison appears after the abstract), and we assessed potential effect modification by the participant's sex, age, education level, general trust in health care, and general trust in technology using multivariate analyses of (co)variance.

Results: In total, 398 patients participated. Overall trust was high (5.8 in the high-risk case and 6.0 in the low-risk case). In the high-risk case, the median perceived competence and integrity were lower in the intervention group than in the control group, although the differences were not statistically significant (competence: 5.6 vs 5.8; P=.16; integrity: 6.0 vs 6.3; P=.06). However, the effect of the CDSS on the perceived competence of the physician depended on the participant's sex (P=.03): no between-group differences were found in men, but among women, the perception of the physician's competence and integrity was significantly lower in the intervention group than in the control group (P=.009 and P=.01, respectively). In the low-risk case, no between-group differences in trust were found, although greater general trust in technology was associated with higher perceived benevolence and integrity (P=.009 and P=.04, respectively).

Conclusions: We found that, in general, patient-physician trust was high. However, our findings indicate a potentially negative effect of AI applications on the patient-physician relationship, especially among women and in high-risk situations. General trust in technology might increase the likelihood that patients embrace the use of CDSSs by their treating professionals.
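To make the between-group comparison described in the Methods concrete, below is a minimal sketch in Python of a Mann-Whitney U test applied to 7-point Likert responses. This is not the authors' analysis code; the data, sample sizes, and variable names are hypothetical and for illustration only.

```python
# Minimal sketch of the between-group trust comparison described above.
# Not the authors' analysis code; data and variable names are illustrative.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical 7-point Likert responses (1 = strongly disagree, 7 = strongly agree)
# for perceived competence in the high-risk case, one array per group.
control = rng.integers(4, 8, size=100)        # physician decides without a CDSS
intervention = rng.integers(3, 8, size=100)   # physician decides with a CDSS

# Two-sided Mann-Whitney U test, as used in the abstract for the
# control-vs-intervention comparison of each trust dimension.
stat, p = mannwhitneyu(control, intervention, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p:.3f}")
```

The Mann-Whitney U test is a natural fit here because Likert responses are ordinal: a rank-based test compares the two groups without assuming the responses are normally distributed.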