Millions suffer from sleep disorders, and sleep clinics and research institutions seek improved sleep study methods. This paper proposes the Fascia Ecosystem for Sleep Engineering to improve traditional sleep studies. The Fascia Sleep Mask is more comfortable and accessible than overnight stays at a sleep center, and the Fascia Portal and Fascia Hub allow for home-based sleep studies with real-time intervention and data analysis capabilities. A study of 10 sleep experts found that the Fascia Portal is easy to access, navigate, and use, with 44.4% finding it very easy to access, 33.3% very easy to navigate, and 60% very easy to get used to. Most experts found the Fascia Portal reliable and easy to use. Moreover, the study analyzed physiological signals during various states of sleep and wakefulness in two subjects. The results demonstrated that the Fascia dataset captured higher-amplitude spindles in N2 sleep (72.20 µV and 109.87 µV in the frontal and parietal regions, respectively) and higher peak-to-peak-amplitude slow waves in N3 sleep (93.51 µV) compared to benchmark datasets. Fascia also produced stronger and more consistent EOG signals during REM sleep, indicating its potential to improve sleep disorder diagnosis and treatment by providing a deeper understanding of sleep patterns.
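The abstract reports spindle and slow-wave amplitudes without describing the extraction pipeline. As a point of reference only, the sketch below shows one conventional way to estimate spindle-band amplitude from a single EEG channel; the band limits, filter order, and variable names (`eeg`, `fs`) are assumptions, not details from the paper.

```python
# Minimal sketch: spindle-band amplitude from one EEG channel.
# Assumed inputs: `eeg` is a 1-D NumPy array in microvolts, `fs` is the
# sampling rate in Hz. Illustrative only; not the Fascia pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

def spindle_band_amplitude(eeg, fs, lo=11.0, hi=16.0):
    """Band-pass the signal to the spindle band (~11-16 Hz) and
    return its peak-to-peak amplitude in microvolts."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)   # zero-phase filtering preserves timing
    return np.ptp(filtered)          # peak-to-peak amplitude
```

The same structure applies to slow waves by swapping in a roughly 0.5-4 Hz band, which is why peak-to-peak amplitude is a natural comparison metric across datasets.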
Purpose: Non-verbal utterances are an important tool of communication for individuals who are non- or minimally-speaking. While these utterances are typically understood by caregivers, they can be challenging for the larger community to interpret. To date, little work has been done to detect and characterize the vocalizations produced by non- or minimally-speaking individuals. This paper aims to characterize five categories of utterances across a set of 7 non- or minimally-speaking individuals. Methods: The characterization is accomplished using a correlation structure methodology, which acts as a proxy measurement for motor coordination, to localize similarities and differences to specific speech production systems. Results: We find that frustrated and dysregulated utterances show similar correlation structure outputs, especially when compared to self-talk, request, and delighted utterances. We additionally observe higher complexity of coordination between the articulatory and respiratory subsystems, and lower complexity of coordination between the laryngeal and respiratory subsystems, in frustration and dysregulation as compared to self-talk, request, and delight. Finally, we observe lower complexity of coordination across all three speech subsystems in request utterances as compared to self-talk and delight. Conclusion: The insights from this work aid in understanding the modifications made by non- or minimally-speaking individuals to accomplish specific goals in non-verbal communication.
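To make the correlation structure methodology concrete, here is a minimal sketch of one common formulation of such analyses: each subsystem signal is time-delay embedded, a channel-delay correlation matrix is formed, and its eigenvalue spectrum summarizes coordination complexity (a flatter spectrum indicating higher complexity). The embedding parameters and variable names are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of a correlation-structure analysis: delay-embed each channel,
# form the cross-correlation matrix, and inspect its eigenvalue spectrum.
import numpy as np

def correlation_structure_eigvals(signals, n_delays=15, delay=1):
    """signals: (n_channels, n_samples) array of speech-subsystem time series.
    Returns eigenvalues of the channel-delay correlation matrix, descending."""
    n_ch, n_samp = signals.shape
    usable = n_samp - (n_delays - 1) * delay
    # Stack delayed copies of every channel into one embedded matrix.
    rows = [signals[c, d * delay : d * delay + usable]
            for c in range(n_ch) for d in range(n_delays)]
    embedded = np.vstack(rows)
    corr = np.corrcoef(embedded)               # channel-delay correlation matrix
    return np.linalg.eigvalsh(corr)[::-1]      # sorted largest-first
```

When a few eigenvalues dominate, the signals are tightly coupled (low complexity); when the spectrum is spread out, coordination is more complex, which is the sense in which the abstract compares utterance categories.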
A close integration of human and machine has been envisioned by researchers and artists for generations; however, there had been little effort to investigate the possibility and plausibility of the idea until recent years. We seek to open a discussion on how the notion of self is plastic, and how innately capable we are of empowering and extending ourselves through technologies. Neuroscience studies show that using a tool not only offers new capabilities, but also reconstructs our cognitive architecture to include the tool as a part of ourselves. This adaptive nature of the body image presents an opportunity for designing interfaces that become natural extensions of us. In this paper we review previous studies drawn from various fields, and discuss the role of the body in the self-world relationship, body image plasticity, and how designing the body may affect neural development. We also offer a categorization of related technologies, along with our current explorations. Finally, we address potential issues and challenges in realizing the presented form of interfaces.
Recently there has been a surge of interest in wearable devices in both industry and academia, including the introduction of head-worn devices into everyday life. Head-worn devices have the advantage of containing a screen that is easily seen by the wearer at all times, in contrast with other device screens, which can be hidden in pockets or simply ignored. However, during certain activities it can be difficult to get the wearer to notice messages, even when they are presented through a head-worn device. For certain applications, it may be important that the user does not miss a particular notification or warning, yet little is known about which methods work best to attract the user's attention in such situations. We describe results from two user studies to determine the best method to catch the attention of a user with a head-worn display.
In this paper we present Remot-IO, a system for mobile collaboration and remote assistance around Internet-connected devices. The system uses two head-mounted displays, cameras, and depth sensors to enable a remote expert to be immersed in a local user's point of view and to control devices in that user's environment. The remote expert can provide guidance through hand gestures that appear in real time in the local user's field of view as superimposed 3D hands. In addition, the remote expert can operate devices in the novice's environment and bring about physical changes by using the same hand gestures the novice would use. We describe a smart radio whose knobs can be controlled by local and remote users alike. Moreover, the user can visualize, interact with, and modify properties of sound waves in real time by using intuitive hand gestures.
Interface agents are semi-intelligent systems which assist users with daily computer-based tasks. Recently, various researchers have proposed a learning approach towards building such agents, and some working prototypes have been demonstrated. Such agents learn by 'watching over the shoulder' of the user and detecting patterns and regularities in the user's behavior. Despite these successes, a major problem with the learning approach is that the agent has to learn from scratch and thus takes some time to become useful. A second problem is that the agent's competence is necessarily limited to actions it has seen the user perform. Collaboration between agents assisting different users can alleviate both of these problems. We present a framework for multi-agent collaboration and discuss results of a working prototype, based on learning agents for electronic mail.
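As a rough illustration of the framework's core idea, the hypothetical sketch below pairs simple memory-based learning over (situation, action) examples with a fallback to peer agents when the agent's own confidence is low. All class and method names are invented for illustration; they are not taken from the prototype described in the paper.

```python
# Illustrative sketch: an agent that learns by observation and consults
# peer agents when its own prediction confidence is low.
from collections import Counter

class MailAgent:
    def __init__(self, peers=None):
        self.memory = []           # list of (situation_features, action) pairs
        self.peers = peers or []   # other users' agents to consult

    def observe(self, situation, action):
        """Learn by 'watching over the shoulder': record what the user did."""
        self.memory.append((situation, action))

    def suggest(self, situation, threshold=0.6):
        """Suggest an action; poll peers when own confidence is too low."""
        action, confidence = self._nearest(situation)
        if confidence >= threshold or not self.peers:
            return action
        votes = Counter(p._nearest(situation)[0] for p in self.peers)
        return votes.most_common(1)[0][0]

    def _nearest(self, situation):
        """Crude nearest-neighbor match by shared-feature overlap."""
        if not self.memory:
            return None, 0.0
        def overlap(s):
            return len(set(s) & set(situation))
        best_sit, best_act = max(self.memory, key=lambda m: overlap(m[0]))
        return best_act, overlap(best_sit) / max(len(situation), 1)
```

The peer fallback is what addresses the two problems named above: a new agent can borrow competence from established agents instead of starting from scratch, and it can cover actions its own user has never performed.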
NeverMind is an interface and application designed to support human memory. We combine the memory palace memorization method with augmented reality technology to create a tool that helps anyone memorize more effectively. Preliminary experiments show that content memorized with NeverMind remains in memory longer than content memorized with conventional techniques. With this project, we hope to make the memory palace method accessible to novices and demonstrate one way augmented reality can support learning.
Adults who are minimally verbal with autism spectrum disorder (mvASD) have pronounced speech difficulties linked to impaired motor skills. Existing research and clinical assessments primarily use indirect methods such as standardized tests, video-based facial features, and handwriting tasks, which may not directly target speech-related motor skills. In this study, we measure activity from eight facial muscles associated with speech using surface electromyography (sEMG) during carefully designed tasks. The findings reveal higher power in the sEMG signals and a significantly greater correlation between the sEMG channels in mvASD adults (N=12) compared to age- and gender-matched neurotypical controls (N=14). This suggests stronger muscle activation and greater synchrony in the discharge patterns of motor units. Further, eigenvalues derived from the correlation matrices indicate lower complexity in muscle coordination in mvASD, implying fewer degrees of freedom in motor control.
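For concreteness, the sketch below computes the two kinds of measures the abstract mentions: per-channel signal power and the eigenvalue spectrum of the inter-channel correlation matrix. It assumes an 8-channel, pre-filtered sEMG array; the participation-ratio summary of effective degrees of freedom is one common complexity measure, not necessarily the study's exact metric.

```python
# Sketch of sEMG summary measures, assuming `emg` is an (8, n_samples)
# NumPy array of band-passed signals from the eight facial muscles.
import numpy as np

def semg_summary(emg):
    power = np.mean(emg ** 2, axis=1)        # mean power per channel
    corr = np.corrcoef(emg)                  # 8x8 inter-channel correlation
    eigvals = np.linalg.eigvalsh(corr)[::-1] # eigenvalues, largest first
    # Participation ratio: mass concentrated in the top eigenvalues means
    # fewer effective degrees of freedom, i.e. lower coordination complexity.
    effective_dof = (eigvals.sum() ** 2) / np.sum(eigvals ** 2)
    return power, corr, effective_dof
```

Under this reading, the reported mvASD result corresponds to higher `power`, larger off-diagonal entries in `corr`, and a smaller `effective_dof` than in the control group.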
This paper presents Fluxa, a compact wearable device that exploits body movements, as well as the visual effects of persistence of vision (POV), to generate mid-air displays on and around the body. When the user moves his/her limb, Fluxa displays a pattern that, due to retinal afterimage, can be perceived by surrounding people. We envision Fluxa as a wearable display that fosters social interaction. It can be used to enhance existing social gestures such as hand-waving to get attention, as a communicative tool that displays the speed and distance covered by joggers, and as a self-expression device that generates images while dancing. We discuss the advantages of Fluxa: a display size that can be much larger than the device itself, and a semi-transparent display that lets users and others see through it, promoting social interaction.
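The POV principle Fluxa relies on can be illustrated with a back-of-the-envelope timing calculation: a swept 1-D light array must update its column pattern quickly enough that all columns of the image fall within the eye's roughly 100 ms persistence window. The numbers and names below are illustrative assumptions, not Fluxa's specification.

```python
# Back-of-the-envelope POV timing with hypothetical parameters.

PERSISTENCE_S = 0.1      # approximate retinal persistence window (~100 ms)
IMAGE_COLUMNS = 32       # columns in the pattern to render during one sweep

def column_interval(sweep_time_s, columns=IMAGE_COLUMNS):
    """Seconds between column updates so the whole image fits in one limb sweep."""
    return sweep_time_s / columns

# A fast 0.1 s sweep needs a new column every ~3.1 ms; because the sweep fits
# inside the persistence window, the strobed columns fuse into a single image.
print(f"update every {column_interval(0.1) * 1000:.1f} ms")
```

Slower sweeps instead appear as a moving trail, which is why the effect is tied to quick gestures such as hand-waving or dance movements.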