The question of how self-driving cars should behave in dilemma situations has recently attracted a lot of attention in science, the media, and society. A growing number of publications amass insight into the factors underlying the choices we make in such situations, often using forced-choice paradigms closely linked to the trolley dilemma. The methodology used to address these questions, however, varies widely between studies, ranging from fully immersive virtual reality settings to completely text-based surveys. In this paper we compare virtual reality and text-based assessments, analyzing the effect that different factors in the methodology have on participants' decisions and emotional responses. We present two studies, comparing a total of six different conditions that vary across three dimensions: the level of abstraction, the use of virtual reality, and time constraints. Our results show that the moral decisions made in this context are not strongly influenced by the assessment method, and the compared methods ultimately appear to measure very similar constructs. Furthermore, we add to the pool of evidence on the underlying factors of moral judgment in traffic dilemmas, both in terms of general preferences, i.e., features of the particular situation and potential victims, and in terms of individual differences between participants, such as their age and gender.
In anticipation of the market introduction of highly and fully automated vehicles, regulations for their behavior in public road traffic are emerging in various countries and regions. Yet, as we show using the example of EU and German regulations, these rules are both incomplete and exceptionally vague. In this paper we introduce three traffic scenarios highlighting conflicting ethical, legal, and utility-related claims, and perform a legal analysis with regard to the expected behavior of AVs in these scenarios. We show that the existing regulatory framework disregards the realities of algorithmic decision-making in automated vehicles, such as the incomplete and imprecise perception of their environment and the probabilistic nature of their predictions. Importantly, the current regulations are written in abstract language addressed to human interpreters rather than in precise, logical-numerical computer code. We argue that the required interpretation and translation of the abstract legal language into the logical-numerical domain is so ambiguous that the regulations as they stand fail to guide or limit automated vehicle behavior in any meaningful way. This has significant ethical implications, as the interpretation and translation is unavoidable and, if not provided by regulatory bodies, will have to be performed by manufacturers. We argue that ethical decisions with significant impact on public safety must not be delegated to private companies, and thus that regulatory frameworks need significant improvement.
The most widely used activation function in current deep feed-forward neural networks is the rectified linear unit (ReLU), although many alternatives have been applied successfully as well. However, none of these alternatives has managed to consistently outperform the rest, and there is no unified theory connecting properties of the task and network with properties of the activation function for most efficient training. A possible solution is to let the network learn its preferred activation functions. In this work, we introduce Adaptive Blending Units (ABUs), a trainable linear combination of a set of activation functions. Since ABUs learn the shape as well as the overall scaling of the activation function, we also analyze the effects of adaptive scaling in common activation functions. We experimentally demonstrate advantages of both adaptive scaling and ABUs over common activation functions across a set of systematically varied network specifications. We further show that adaptive scaling works by mitigating covariate shifts during training, and that the observed performance advantages of ABUs likewise rely largely on the activation function's ability to adapt over the course of training.
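As a concrete illustration, a trainable blend of activation functions of this kind can be expressed in a few lines. The following PyTorch sketch assumes a simple weighted-sum parameterization and an illustrative set of candidate activations; the class name, candidate set, and initialization are assumptions for the example, not necessarily the exact setup used in the paper.

```python
import torch
import torch.nn as nn


class BlendedActivation(nn.Module):
    """Learnable linear combination of a fixed set of activation functions."""

    def __init__(self):
        super().__init__()
        # Candidate activations to blend; this particular set is an assumption
        # chosen for illustration, not necessarily the set used in the paper.
        self.activations = [torch.tanh, torch.relu, nn.functional.elu, torch.sigmoid]
        # One trainable blending weight per candidate; initialized uniformly.
        self.alpha = nn.Parameter(
            torch.full((len(self.activations),), 1.0 / len(self.activations))
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Weighted sum of all candidate activations applied to the same input.
        # Because the weights are unconstrained, they control both the shape
        # and the overall scaling of the resulting activation function.
        return sum(w * f(x) for w, f in zip(self.alpha, self.activations))


# Usage: replace a fixed nonlinearity with the blended unit.
layer = nn.Sequential(nn.Linear(128, 64), BlendedActivation())
out = layer(torch.randn(8, 128))
```

Because the blending weights are trained jointly with the network, the effective activation function can change over the course of training, which is the property the experiments above attribute the performance gains to.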
General Commentary, Front. Behav. Neurosci., Sec. Individual and Social Behaviors, Volume 12, 26 June 2018. https://doi.org/10.3389/fnbeh.2018.00128
Federated learning (FL) is a promising approach to distributing compute as well as data, and it provides a degree of privacy and compliance with legal frameworks. This makes FL attractive for both consumer and healthcare applications. While the area is being actively explored, few studies have examined FL in the context of larger language models, and there is a lack of comprehensive reviews of robustness across tasks, architectures, numbers of clients, and other relevant factors. In this paper, we explore the fine-tuning of Transformer-based language models in a federated learning setting. We evaluate three popular BERT variants of different sizes (BERT, ALBERT, and DistilBERT) on a number of text classification tasks such as sentiment analysis and author identification. We perform an extensive sweep over the number of clients, ranging up to 32, to evaluate the impact of distributed compute on task performance in the federated averaging setting. While our findings suggest that the large sizes of the evaluated models are not generally prohibitive to federated training, we found that the different models handle federated averaging to varying degrees. Most notably, DistilBERT converges significantly more slowly with larger numbers of clients and, under some circumstances, even collapses to chance-level performance. Investigating this issue presents an interesting direction for future research.
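For readers unfamiliar with the aggregation step, the following Python sketch outlines one communication round of federated averaging for such a setup. The helper names (`local_update`, `federated_round`), the equal client weighting, and the assumption that the model returns a loss when called on a batch with labels (as Hugging Face classifiers do) are illustrative assumptions, not details taken from the paper.

```python
import copy
import torch


def local_update(model, dataloader, epochs=1, lr=2e-5):
    """Fine-tune a client's copy of the model on its local data."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in dataloader:
            optimizer.zero_grad()
            # Assumes a model that returns an object with a .loss attribute
            # when called with labels (e.g., a Hugging Face sequence classifier).
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
    return model.state_dict()


def federated_round(global_model, client_loaders):
    """One communication round: local fine-tuning followed by weight averaging."""
    client_states = [
        local_update(copy.deepcopy(global_model), loader) for loader in client_loaders
    ]
    # Federated averaging: average every parameter across clients
    # (equal client weighting assumed for simplicity).
    averaged = {
        name: torch.stack([state[name].float() for state in client_states]).mean(dim=0)
        for name in client_states[0]
    }
    global_model.load_state_dict(averaged)
    return global_model
```

Repeating such rounds while varying the number of client loaders is, in essence, the kind of sweep over client counts described above.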
Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in a utilitarian way, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as the driver in a virtual reality environment. Participants had to make decisions between two discrete options: driving in one of two lanes where different obstacles came into view. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, we tested the influence of a sidewalk as a potential safe harbor and a condition involving self-sacrifice. Results showed that subjects, in general, decided in a utilitarian manner, sparing the highest number of avatars possible, with a limited influence of the other variables. Our findings support that human behavior is in line with the utilitarian approach to moral decision making. This may serve as a guideline for the implementation of moral decisions in ADVs.