Touch typing on flat surfaces (e.g., an interactive tabletop) is challenging due to the lack of tactile feedback and hand drift. In this paper, we present TOAST, an eyes-free keyboard technique that enables efficient touch typing on touch-sensitive surfaces. We first formalized the problem of estimating keyboard parameters (e.g., location and size) from users' typing data. Through a user study, we then examined users' eyes-free touch typing behavior on an interactive tabletop with asterisk-only feedback. We fitted the keyboard model to the typing data; the results suggested that the model parameters (keyboard location and size) varied not only across users but also over time for the same user. Based on these results, we proposed a Markov-Bayesian algorithm for input prediction, which models the relative locations of successive touch points within each hand. Simulation results showed that, on the pooled data from all users, this model improved the top-1 accuracy of the classical statistical decoding algorithm from 86.2% to 92.1%. In a second user study, we further improved TOAST with dynamic model parameter adaptation and evaluated users' text entry performance on realistic text entry tasks. Participants reached a pick-up speed of 41.4 WPM with a character-level error rate of 0.6%, and with less than 10 minutes of practice they reached 44.6 WPM without sacrificing accuracy. Participants' subjective feedback also indicated that TOAST offered a natural and efficient typing experience.
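To make the decoding idea concrete, below is a minimal sketch of Markov-Bayesian touch decoding. It assumes isotropic Gaussian touch scatter around key centers plus a Gaussian term on the offset between successive touches, and simplifies to a single hand (the paper applies the relative term within each hand separately); the layout, variances, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical key layout: key -> (x, y) center in keyboard coordinates.
KEY_CENTERS = {"q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0)}  # full layout omitted

SIGMA_ABS = 0.6   # assumed std of touch scatter around a key center
SIGMA_REL = 0.4   # assumed std of the relative-offset term between successive touches

def log_gauss(d2, sigma):
    """Isotropic Gaussian log-likelihood (up to a constant) for squared distance d2."""
    return -d2 / (2 * sigma ** 2)

def decode(touches):
    """Viterbi decoding of a touch sequence under a Markov-Bayesian model:
    an absolute-position term per touch plus a relative-offset term
    between successive touches."""
    keys = list(KEY_CENTERS)
    centers = np.array([KEY_CENTERS[k] for k in keys])
    pts = np.asarray(touches, dtype=float)

    # Initial scores: absolute-position likelihood only.
    score = log_gauss(((pts[0] - centers) ** 2).sum(axis=1), SIGMA_ABS)
    back = []
    for t in range(1, len(pts)):
        abs_term = log_gauss(((pts[t] - centers) ** 2).sum(axis=1), SIGMA_ABS)
        obs_off = pts[t] - pts[t - 1]                        # observed finger movement
        key_off = centers[None, :, :] - centers[:, None, :]  # expected movement per key pair
        rel_term = log_gauss(((obs_off - key_off) ** 2).sum(axis=2), SIGMA_REL)
        total = score[:, None] + rel_term + abs_term[None, :]
        back.append(total.argmax(axis=0))
        score = total.max(axis=0)

    # Trace back the most likely key sequence.
    idx = [int(score.argmax())]
    for bp in reversed(back):
        idx.append(int(bp[idx[-1]]))
    return [keys[i] for i in reversed(idx)]
```

Because the relative term depends only on consecutive touches, a Viterbi pass keeps decoding linear in sequence length while still exploiting hand-internal structure that a purely per-touch classifier ignores.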
An easily accessible tutorial is crucial for older adults to use mobile applications (apps) on smartphones. However, older adults often struggle to search for tutorials independently and efficiently. Through a formative study, we investigated older adults' needs when seeking assistance and identified patterns in their behaviors and verbal questions when seeking help for smartphone-related issues. Informed by the findings from the formative study, we designed EasyAsk, an app-independent method that makes tutorial search accessible for older adults, implemented as an Android app. Using EasyAsk, older adults can obtain interactive tutorials through voice and touch whenever they encounter problems using their smartphones. To power the method, EasyAsk uses one large language model to process the voice text and contextual information provided by older adults, and another large language model to search for the tutorial. Our user experiment, involving 18 older participants, demonstrated that EasyAsk helped users obtain tutorials correctly in 98.94% of cases, making tutorial search accessible and natural.
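The two-stage LLM pipeline could be sketched as follows. This is a minimal illustration assuming a generic chat-completion helper `chat(system, user)`; the prompts, stage boundaries, and function names are hypothetical, not EasyAsk's actual implementation.

```python
def chat(system: str, user: str) -> str:
    """Placeholder for any chat-completion API (e.g., an OpenAI-style client)."""
    raise NotImplementedError

def easyask(voice_text: str, context: dict) -> str:
    # Stage 1: turn the (often vague) spoken question plus on-screen context
    # into an explicit, self-contained tutorial query.
    query = chat(
        system="Rewrite the user's spoken question about a smartphone problem "
               "into a clear, self-contained tutorial search query.",
        user=f"Spoken question: {voice_text}\nCurrent app/screen context: {context}",
    )
    # Stage 2: ask a search-capable model for a step-by-step tutorial
    # matching the normalized query.
    return chat(
        system="Find and return a step-by-step tutorial answering the query.",
        user=query,
    )
```

Separating question normalization from tutorial search lets the first stage compensate for the vague or context-dependent phrasing the formative study observed, before any retrieval happens.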
We present Lip-Interact, an interaction technique that allows users to issue commands on their smartphone through silent speech. Lip-Interact repurposes the front camera to capture the user's mouth movements and recognizes the issued commands with an end-to-end deep learning model. Our system supports 44 commands for accessing both system-level functionalities (launching apps, changing system settings, and handling pop-up windows) and application-level functionalities (integrated operations for two apps). We verify the feasibility of Lip-Interact with three user experiments: evaluating the recognition accuracy, comparing it with touch on input efficiency, and comparing it with voice commands with regard to personal privacy and social norms. We demonstrate that Lip-Interact can help users access functionality efficiently in one step, enable one-handed input when the other hand is occupied, and assist touch to make interactions more fluent.
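For intuition, a command recognizer of this kind can be framed as video classification over mouth crops. The sketch below, in PyTorch, shows one plausible frame-encoder-plus-recurrence shape for the problem; the architecture, layer sizes, and input format are illustrative assumptions, not the paper's actual model (only the 44-command output size comes from the abstract).

```python
import torch
import torch.nn as nn

NUM_COMMANDS = 44  # number of supported commands (from the abstract)

class LipCommandNet(nn.Module):
    """Illustrative end-to-end classifier: per-frame mouth-crop encoder
    followed by a GRU over time, producing command logits."""
    def __init__(self, hidden=128):
        super().__init__()
        self.frame_enc = nn.Sequential(          # encode each grayscale mouth crop
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.GRU(64, hidden, batch_first=True)  # model mouth motion over time
        self.head = nn.Linear(hidden, NUM_COMMANDS)

    def forward(self, clips):                    # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.frame_enc(clips.flatten(0, 1)).view(b, t, -1)
        _, h = self.temporal(feats)
        return self.head(h[-1])                  # (B, NUM_COMMANDS) command logits
```

Treating the task as closed-set classification over a fixed command vocabulary, rather than open lip-reading, is what makes the recognition problem tractable on a phone's front camera.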
UI task automation enables efficient task execution by simulating human interactions with graphical user interfaces (GUIs), without modifying the existing application code. However, its broader adoption is constrained by the need for expertise in both scripting languages and workflow design. To address this challenge, we present Prompt2Task, a system designed to comprehend various task-related textual prompts (e.g., goals or procedures) and then generate and perform the corresponding automation tasks. Prompt2Task incorporates a suite of intelligent agents that mimic human cognitive functions, specializing in interpreting user intent, managing external information for task generation, and executing operations on smartphones. The agents can learn from user feedback and continuously improve their performance based on the accumulated knowledge. Experimental results showed that the task success rate jumped from 22.28% for the baseline to 95.24% with Prompt2Task, with an average of 0.69 user interventions required for each new task. Prompt2Task presents promising applications in fields such as tutorial creation, smart assistance, and customer service.
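The agent pipeline could be sketched as the loop below. It assumes a generic `llm(prompt)` helper and a GUI driver exposing `observe()`/`perform(action)`; the agent roles mirror the abstract (intent interpretation, external knowledge, execution), but all interfaces and prompts are hypothetical.

```python
def llm(prompt: str) -> str:
    """Placeholder for any chat-completion API."""
    raise NotImplementedError

def run_task(prompt: str, device, knowledge: list[str]) -> None:
    # Interpreter agent: normalize the free-form prompt into a concrete goal.
    goal = llm(f"Restate this automation request as a concrete goal: {prompt}")
    # Knowledge agent: bring in previously accumulated task know-how.
    hints = llm(f"Goal: {goal}\nRelevant notes: {knowledge}\n"
                "Summarize the steps likely needed.")
    # Executor agent: observe the GUI and act step by step until done.
    while True:
        screen = device.observe()                 # current UI state
        action = llm(f"Goal: {goal}\nHints: {hints}\nScreen: {screen}\n"
                     "Next action, or DONE:")
        if action.strip() == "DONE":
            break
        device.perform(action)                    # e.g., tap/type/scroll
```

In this framing, a user intervention would correct a single step and append the correction to `knowledge`, which is one plausible way the feedback-driven improvement described in the abstract could accumulate across tasks.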