When defining success metrics for our products, teams often leave out the user. Daily active users, conversion rates, uptime percentage, and CSAT are all important metrics to track from a product and business perspective, but none of them fully captures the user's perspective. They don't give insight into what users care about and what they're trying to achieve. Qualitative research gives us a deep understanding of what matters to users, but these insights are often quickly forgotten by product teams. In this workshop, we'll introduce Critical User Journeys (CUJs: important tasks your user needs to be able to complete) and Experience Outcomes (XOs: your user's fundamental emotional needs) as tools that enable product teams to prioritize based on what matters to users.
Geographically distributed collaborative teams often rely on synchronous text-based online communication for accomplishing tasks and maintaining social contact. This technology leaves a trace that can help researchers understand affect expression and dynamics in distributed groups. Although manual labeling of affect in chat logs has shed light on complex group communication phenomena, scaling this process to larger data sets through automation is difficult. We present a pipeline of natural language processing and machine learning techniques that can be used to build automated classifiers of affect in chat logs. Interpreting affect as a dynamic, contextualized process, we explain our development and application of this method to four years of chat logs from a longitudinal study of a multi-cultural distributed scientific collaboration. With ground truth generated through manual labeling of affect over a subset of the chat logs, our approach can successfully identify many commonly occurring types of affect.
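The abstract above does not publish the pipeline itself, so as a minimal sketch of the general approach it describes (supervised classification of affect labels over chat messages), the following uses a bag-of-words multinomial Naive Bayes classifier in plain Python; the class name, tokenizer, and training snippets are hypothetical stand-ins for the manually labeled ground truth, not the authors' actual method or data.

```python
import math
from collections import Counter, defaultdict

def tokenize(message):
    # Crude whitespace tokenizer; a real pipeline would normalize
    # punctuation, emoticons, and chat shorthand before classification.
    return message.lower().split()

class AffectClassifier:
    """Multinomial Naive Bayes over bag-of-words chat messages,
    with add-one (Laplace) smoothing."""

    def fit(self, messages, labels):
        self.label_counts = Counter(labels)       # label -> number of messages
        self.word_counts = defaultdict(Counter)   # label -> token -> count
        self.token_totals = Counter()             # label -> total tokens seen
        self.vocab = set()
        for message, label in zip(messages, labels):
            for token in tokenize(message):
                self.word_counts[label][token] += 1
                self.token_totals[label] += 1
                self.vocab.add(token)
        return self

    def predict(self, message):
        def log_posterior(label):
            # Log prior plus smoothed log likelihood of each token.
            score = math.log(self.label_counts[label] / sum(self.label_counts.values()))
            for token in tokenize(message):
                numerator = self.word_counts[label][token] + 1
                denominator = self.token_totals[label] + len(self.vocab)
                score += math.log(numerator / denominator)
            return score
        return max(self.label_counts, key=log_posterior)

# Hypothetical labeled snippets standing in for manually coded chat logs.
train_messages = ["great job everyone!", "this is so frustrating",
                  "love these results", "ugh the server crashed again"]
train_labels = ["positive", "negative", "positive", "negative"]
clf = AffectClassifier().fit(train_messages, train_labels)
print(clf.predict("frustrating crash"))  # -> negative
```

A production pipeline of the kind the abstract describes would add richer features and treat affect as contextual (e.g., using surrounding messages), but the train-on-labeled-subset, predict-on-the-rest structure is the same.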
Many researchers find themselves in a methodological rut and end up using the same tried-and-true user research methods, such as usability studies or interviews. Though these methods have their merits, there are times when asking questions directly may not suffice, or researchers simply have trouble getting to the insights that are needed. This two-hour tutorial will focus on teaching creative methods that can spark new conversation or illuminate different insights. We will focus on three methods: speed-dating, love letters, and couple interviews. These methods are particularly effective for researchers and practitioners who study personal topics, such as communication on messaging apps and websites. The tutorial will provide a useful toolkit of creative methods and best practices.
The performance of machine learning (ML) classification algorithms in an open-ended problem with manual labels is difficult to assess, because errors can exist both in the classification and in the data. This paper introduces a new visualization, the confusion diamond, that exposes both kinds of errors in the context of analyzing affect in chat logs of scientists studying supernovae. I present key design elements of this visualization, relevant usage scenarios, and findings from semi-structured interviews with other members of the research team.
Demands of the fast-paced tech industry can leave little time for rigorous UX research. Some teams may not even have dedicated UX researchers or access to users. This workshop will focus on teaching various research methods that can be applied in 24 hours or less, at any phase of the product life cycle. We will demonstrate how to apply four methods: heuristic evaluations, cafe studies, surveys, and remote user testing. These methods have been successfully used to provide immediately actionable results for our teams.