column
HCI of Arabia: the challenges of HCI research in Egypt
Authors: Galal H. Galal-Edeen (Cairo University, UCL Institute of Digital Health, and UCL Interaction Centre), Yasmeen Abdrabou (The German University in Cairo and Bundeswehr University Munich), Maha Elgarf (KTH Royal Institute of Technology), Hala M. Hassan (Cairo University)
Interactions, Volume 26, Issue 3 (May–June 2019), pp. 55–59. https://doi.org/10.1145/3318215. Published: 23 April 2019.
Tablets are attractive for design work anywhere, but 3D manipulations are notoriously difficult on them. We explore how engaging the stylus and multi-touch in concert can make such tasks easier. We introduce Bi-3D, an interaction concept in which touch gestures are combined with 2D pen commands for 3D manipulation. For example, Bi-3D yields a fast and intuitive 3D drag-and-drop technique: the pen drags the object on screen, while a parallel pinch-to-zoom gesture moves it along the third dimension. In this paper, we describe the Bi-3D design space, crossing two-handed input with the degrees of freedom (DOF) of 3D manipulation and navigation tasks. We demonstrate sketching and manipulation tools in a prototype 3D design application, where users can fluidly combine 3D operations through alternating and parallel use of the two modalities. We evaluate the core technique, bi-manual 3DOF input, against widget and mid-air baselines in an object-movement task. We find that Bi-3D is a fast and practical way to manipulate graphical objects in multiple dimensions, promising to facilitate 3D design on stylus and tablet devices.
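As a concrete illustration of the core mapping, the following is a minimal sketch (our illustration, not the authors' implementation) of how parallel pen and pinch events could be fused into a single 3DOF translation; the `Object3D` class, callback names, and gain constant are hypothetical:

```python
# Sketch of a Bi-3D-style 3DOF drag & drop (hypothetical event API).
# Pen movement drives x/y; a parallel pinch gesture drives z (depth).
from dataclasses import dataclass

@dataclass
class Object3D:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

PINCH_TO_DEPTH = 0.01  # assumed gain: pinch scale change -> depth units

def on_pen_drag(obj: Object3D, dx: float, dy: float) -> None:
    """Pen drag moves the object in the screen plane (2 DOF)."""
    obj.x += dx
    obj.y += dy

def on_pinch(obj: Object3D, scale_delta: float) -> None:
    """A simultaneous pinch-to-zoom moves the object along depth (3rd DOF)."""
    obj.z += scale_delta * PINCH_TO_DEPTH

# Usage: the platform's input loop dispatches both callbacks in parallel,
# so the two modalities combine into one 3D translation.
obj = Object3D()
on_pen_drag(obj, dx=12.0, dy=-3.5)  # pen moves the object on screen
on_pinch(obj, scale_delta=150.0)    # pinch moves it toward/away from the camera
```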
Eye gaze has emerged as a promising avenue for implicit authentication/identification on smartphones, offering the potential for seamless user identification and two-factor authentication. However, a crucial gap exists in understanding eye gaze behaviour specifically during smartphone unlocks. This lack of understanding is magnified by scenarios where users' faces are not fully visible to front cameras, leading to inaccurate gaze estimation. In this work, we conducted a 24-hour in-the-wild study tracking 21 users' eye gaze during smartphone unlocks. Our findings highlight substantial variations in eye gaze behaviour influenced by authentication method, physical activity, and environment. These findings provide insights for enhancing and adapting gaze-based implicit user identification/authentication systems on smartphones, taking into account differences in user behaviour and environmental effects.
Recent research has incorporated biometrics into authentication, as they offer a natural means of interaction. Accordingly, in this paper, we describe the implementation of an Android-based authentication application, focusing on human-factor impostor attacks on gait-based systems. We used the smartphone's built-in accelerometer to record gait cycles while walking. Our results show that imitating others' gait cycles is possible and can pose a serious threat.
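To make the attack surface concrete, here is a minimal sketch, under our own assumptions, of how gait cycles might be segmented from accelerometer data and compared against an enrolled template; the peak-detection heuristic, window sizes, and thresholds are illustrative, not values from the paper:

```python
# Sketch: compare a recorded walk against an enrolled gait template using
# the accelerometer magnitude signal. All constants are illustrative.
import numpy as np

def magnitude(acc: np.ndarray) -> np.ndarray:
    """acc: (n, 3) array of x/y/z accelerometer samples."""
    return np.linalg.norm(acc, axis=1)

def segment_cycles(signal: np.ndarray, fs: int = 50) -> list:
    """Split the signal at step peaks into individual gait cycles."""
    min_dist = int(0.8 * fs)  # assume >= 0.8 s between steps
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]
             and signal[i] > signal.mean()]
    kept = []
    for p in peaks:  # thin peaks that are too close together
        if not kept or p - kept[-1] >= min_dist:
            kept.append(p)
    return [signal[a:b] for a, b in zip(kept, kept[1:])]

def cycle_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Resample both cycles to a fixed length and take Euclidean distance."""
    n = 100
    ra = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(a)), a)
    rb = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(b)), b)
    return float(np.linalg.norm(ra - rb))

# An impostor succeeds if their imitated cycles fall within the genuine
# user's acceptance threshold -- exactly the risk the study probes.
```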
Gaze is promising for natural and spontaneous interaction with public displays, but current gaze-enabled displays require movement-hindering stationary eye trackers or cumbersome head-mounted eye trackers. We propose and evaluate GazeCast, a novel system that leverages users' handheld mobile devices to allow gaze-based interaction with surrounding displays. In a user study (N = 20), we compared GazeCast to a standard webcam for gaze-based interaction using Pursuits. We found that while selection using GazeCast requires more time and is more physically demanding, participants value GazeCast's high accuracy and flexible positioning. We conclude by discussing how mobile computing can facilitate the adoption of gaze interaction with pervasive displays.
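GazeCast builds on Pursuits-style interaction, whose underlying principle is motion correlation: gaze samples are matched against the trajectories of moving on-screen targets, which is why no per-user calibration is needed. A minimal sketch of that principle, with an assumed sample window and acceptance threshold:

```python
# Sketch of Pursuits-style selection by motion correlation: the moving
# target whose trajectory correlates best with recent gaze samples
# (above a threshold) is selected. Window and threshold are assumptions.
import numpy as np

THRESHOLD = 0.8  # assumed Pearson-correlation acceptance threshold
WINDOW = 30      # assumed number of recent samples (~1 s at 30 Hz)

def correlation(gaze: np.ndarray, target: np.ndarray) -> float:
    """Mean of per-axis Pearson correlations; assumes moving targets."""
    cx = np.corrcoef(gaze[:, 0], target[:, 0])[0, 1]
    cy = np.corrcoef(gaze[:, 1], target[:, 1])[0, 1]
    return (cx + cy) / 2.0

def select_target(gaze: np.ndarray, targets: dict):
    """gaze, targets[*]: (WINDOW, 2) arrays of recent x/y positions."""
    scores = {name: correlation(gaze, traj) for name, traj in targets.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= THRESHOLD else None
```

Because Pearson correlation is invariant to offset and scale, even a coarse, uncalibrated gaze estimate (e.g., from a phone's front camera) can still follow a target's relative motion well enough to select it.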
We propose an approach to identify users' exposure to fake news from their gaze and mouse movement behavior. Our approach is meant as an enabler for interventions that make users aware that they are engaging with fake news without consciously noticing it. Our work is motivated by the rapid spread of fake news on the web (in particular, social media) and the difficulty and effort required to identify fake content, either technically or by means of a human fact checker. To this end, we conducted a remote online study (N = 54) in which participants were exposed to real and fake social media posts while their mouse and gaze movements were recorded. We identify the most predictive gaze and mouse movement features and show that exposure to fake news can be predicted with 68.4% accuracy from users' gaze and mouse movement behavior. We complement our work by discussing the implications of using behavioral features for mitigating the spread of fake news on social media.
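The prediction pipeline implied here is a standard feature-based classifier over per-post behavioral aggregates. A hedged sketch of that setup; the feature names and model choice are our assumptions, not necessarily the paper's exact configuration:

```python
# Sketch: predicting fake-news exposure from aggregated gaze and mouse
# features with an off-the-shelf classifier. Feature names are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURES = [
    "fixation_count", "mean_fixation_duration", "saccade_amplitude",
    "mouse_path_length", "mean_mouse_speed", "hover_time_on_post",
]

def evaluate(X: np.ndarray, y: np.ndarray) -> float:
    """X: (n_posts, len(FEATURES)); y: 1 = fake post, 0 = real post."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # Cross-validated accuracy; the paper reports 68.4% for its feature set.
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
```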
In this paper, we propose a calibration-free gaze-based text entry system that uses smooth pursuit eye movements. We report on our implementation, which improves over prior work on smooth pursuit text entry by 1) eliminating the need for calibration through motion correlation, 2) increasing the input rate from 3.34 to 3.41 words per minute, and 3) featuring text suggestions trained on 10,000 lexicon sentences recommended in the literature. We report on a user study (N=26) showing that users are able to eye type at 3.41 words per minute without calibration and without user training. Qualitative feedback also indicates that users perceive the system positively. Our work is of particular benefit for disabled users and for situations where voice and tactile input are not feasible (e.g., in noisy environments or when the hands are occupied).
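For reference, words-per-minute figures such as those above typically follow the text-entry convention of treating five characters (including spaces) as one word; a small sketch of that metric (note that some definitions count one character fewer):

```python
# Common text-entry metric behind figures like 3.41 WPM: one "word" is
# five characters (including spaces) over the transcription time.
# (A frequent variant uses len(transcribed) - 1 in the numerator.)
def words_per_minute(transcribed: str, seconds: float) -> float:
    return (len(transcribed) / 5.0) / (seconds / 60.0)

# e.g., a 17-character phrase entered in 60 s:
print(words_per_minute("hello eye typing!", 60.0))  # 3.4 WPM
```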
Users are the last line of defense when phishing emails pass filter mechanisms. At the same time, phishing emails are designed to be challenging for users to identify. To this end, attackers employ techniques such as eliciting stress, targeting helpfulness, or exercising authority, which often cause users to miss that they are being manipulated with malicious intent. This work builds on the assumption that manipulation techniques, even if they go unnoticed by users, still lead to changes in their behavior. We present the outcomes of an online study in which we collected gaze and mouse movement data during an email sorting task. Our findings show that phishing emails lead to significant differences across behavioral features, but that these differences depend on the nature of the email. We discuss how our findings can be leveraged to build security mechanisms protecting users and companies from phishing.
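Testing whether a behavioral feature differs significantly between phishing and legitimate emails is typically done per feature with a non-parametric test. A minimal sketch under our own assumptions (the feature, values, and threshold are hypothetical):

```python
# Sketch: testing whether a behavioral feature differs significantly
# between phishing and legitimate emails, assuming feature extraction
# has produced one value per (participant, email) pair.
from scipy.stats import mannwhitneyu

def feature_differs(phishing_vals, legit_vals, alpha=0.05) -> bool:
    """Non-parametric test, since behavioral features are rarely normal."""
    stat, p = mannwhitneyu(phishing_vals, legit_vals, alternative="two-sided")
    return p < alpha

# Hypothetical dwell times (seconds) on the sender field:
print(feature_differs([4.1, 5.3, 6.0, 4.8], [2.2, 2.9, 3.1, 2.5]))
```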
We investigate how problems in understanding text – specifically a word or a sentence – while filling in questionnaires are reflected in gaze behaviour. To identify text comprehension problems while filling in a questionnaire, and their correlation with gaze features, we collected data from 42 participants. In a follow-up study (N=30), we deliberately evoked comprehension problems and quantified users' gaze behaviour and the features these problems affect. Our findings imply that comprehension problems are reflected in a set of gaze features, namely the number of fixations, the duration of fixations, and the number of regressions. Our findings not only demonstrate the potential of eye tracking for assessing reading comprehension but also pave the way for researchers and designers to build novel questionnaire tools that instantly mitigate problems in reading comprehension.
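The three features named above can be computed directly from a fixation sequence. A minimal sketch under our own assumptions: a regression is approximated here as a fixation that jumps backwards (leftwards) past a threshold, which is an illustrative simplification:

```python
# Sketch: the three comprehension-linked gaze features named above,
# computed from a fixation sequence. The regression threshold is assumed.
import numpy as np

def gaze_features(fix_x: np.ndarray, fix_dur: np.ndarray,
                  regression_px: float = 30.0) -> dict:
    """fix_x: fixation x-positions in reading order; fix_dur: durations (ms)."""
    dx = np.diff(fix_x)
    return {
        "n_fixations": len(fix_x),
        "mean_fixation_duration": float(fix_dur.mean()),
        "n_regressions": int((dx < -regression_px).sum()),
    }

print(gaze_features(np.array([10, 60, 120, 40, 180.0]),
                    np.array([210, 180, 250, 300, 190.0])))
# -> 5 fixations, one regression (the 120 -> 40 backwards jump)
```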