Caring for a person with dementia is associated with negative outcomes, including elevated stress and burden. Few caregiver interventions have been implemented in community settings, and mobile technology is one method for reaching many caregivers. This project translated two empirically supported interventions for dementia caregivers into a mobile health application. A team of clinical researchers and computer engineers developed an App called CARE-Well (Caregiver Assessment, Resources, and Education) over 6 months. The group worked closely to (1) translate intervention content to be compatible with a mobile platform; (2) create new materials; (3) determine App components that captured key intervention areas; (4) troubleshoot formatting, technology, and data security; and (5) educate each other about their respective areas of expertise. We developed a beta version of the App that included (1) assessment of caregiver stress and care recipient behavioral problems; (2) psychoeducation; (3) a goal diary; (4) managing behavior problems; (5) an online message forum; and (6) a video library. Several challenges arose during development, such as how to create navigation paths and goal lists based on users' assessment responses, how to handle data storage and usage tracking, how to enlarge text, and how to ensure privacy and confidentiality in the online message forum. Our experience developing the CARE-Well App showed that translating behavioral interventions into mobile health applications is feasible and depends on regular communication among multidisciplinary team members. Next steps for the App include beta testing with dementia caregivers and a pilot randomized trial to determine feasibility for a future trial and its effects on caregiver stress.
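To make the assessment-driven navigation concrete, the following minimal Python sketch routes a caregiver to App modules based on assessment scores. The thresholds, score ranges, and function names are illustrative assumptions, not the CARE-Well implementation.

```python
# Minimal sketch (not the CARE-Well implementation): routing a caregiver to
# App modules based on assessment responses. All thresholds and module names
# below are hypothetical illustrations of assessment-driven navigation.

def suggest_modules(stress_score: int, behavior_score: int) -> list[str]:
    """Return a prioritized list of App modules for one caregiver."""
    modules = ["psychoeducation"]            # shown to every caregiver
    if stress_score >= 10:                   # hypothetical stress cutoff
        modules.append("goal_diary")
    if behavior_score >= 8:                  # hypothetical behavior cutoff
        modules += ["managing_behavior_problems", "video_library"]
    modules.append("message_forum")          # always available
    return modules

print(suggest_modules(stress_score=12, behavior_score=5))
# -> ['psychoeducation', 'goal_diary', 'message_forum']
```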
BACKGROUND The recent growth of eHealth is unprecedented, especially after the COVID-19 pandemic. Within eHealth, wearable technology is increasingly being adopted because it offers remote monitoring of chronic and acute conditions in daily life environments. Wearable technology may be used to monitor and track key indicators of physical and psychological stress in daily life settings, providing helpful information for clinicians. One of the key challenges is to present extensive wearable data to clinicians in an easily interpretable manner so they can make informed decisions. OBJECTIVE The purpose of this research was to design a wearable data dashboard, named CarePortal, to present analytic visualizations of wearable data that are meaningful to clinicians. The study had 2 main research objectives: to understand the needs of clinicians regarding wearable data interpretation and visualization, and to develop a system architecture for a web application that visualizes wearable data and related analytics. METHODS We used a wearable data set collected from 116 adolescent participants who experienced trauma. For 2 weeks, participants wore a Microsoft Band that logged physiological sensor data such as heart rate (HR). A total of 834 days of HR data were collected. To design the CarePortal dashboard, we used a participatory design approach, interacting directly with clinicians (stakeholders) with backgrounds in clinical psychology and neuropsychology. A total of 8 clinicians were recruited from Rhode Island Hospital and the University of Massachusetts Memorial Health. The study involved 5 stages of participatory workshops, beginning with an exploration of clinicians' needs. A User Experience Questionnaire was administered at the end of the study to quantitatively evaluate user experience. Physiological metrics such as the daily and hourly maximum, minimum, average, and SD of HR and HR variability, along with HR-based activity levels, were identified. We investigated various graphing methods for wearable data, including radar charts, stacked bar plots, scatter plots combined with line plots, simple bar plots, and box plots. RESULTS We created the CarePortal dashboard after understanding the clinicians' needs. Results from our workshops indicate that, overall, clinicians preferred aggregate information such as daily HR over continuous HR and wanted to see trends in wearable sensor data over time (eg, days). In the User Experience Questionnaire, CarePortal received a score of 1.4 indicating it was exciting to use (question 5) and a similar score indicating it was leading edge (question 8). On average, clinicians reported that CarePortal was supportive and could be useful in making informed decisions. CONCLUSIONS We conclude that the CarePortal dashboard, integrated with wearable sensor data visualization techniques, would be an acceptable tool for clinicians to use in the future.
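As an illustration of the aggregate metrics clinicians preferred, the following Python sketch computes the daily minimum, maximum, mean, and SD of HR from raw samples. The file name and column layout are assumptions, not the CarePortal schema.

```python
# Minimal sketch, assuming wearable HR samples arrive as (timestamp, bpm)
# rows; the file name and column names are assumptions, not the CarePortal
# schema. It computes the daily aggregates clinicians preferred.
import pandas as pd

hr = pd.read_csv("band_hr.csv", parse_dates=["timestamp"])  # columns: timestamp, bpm

daily = (
    hr.set_index("timestamp")["bpm"]
      .resample("D")                       # switch to "H" for hourly summaries
      .agg(["min", "max", "mean", "std"])
      .rename(columns={"std": "sd"})
)
print(daily.head())                        # one row per day: min/max/mean/SD of HR
```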
This study investigates the impact of various insulating textile materials on the performance of smart textile pressure sensors made of conductive threads and piezoresistive material. We designed four sets of otherwise identical textile-based pressure sensors, each integrating a different insulating textile substrate material. Each sensor underwent a series of tests in which a uniform pressure, applied perpendicular to the sensor surface, was linearly increased and decreased. The controlled change of the integration layer altered the characteristics of the pressure sensors, including both their sensitivity and their pressure range. Our experiments highlight that the manufacturing technique of the textile material has a significant impact on the sensor, with reproducibility directly related to the fabric's dimensional stability and elasticity.
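For readers implementing similar characterization tests, the sketch below estimates sensitivity as the slope of relative resistance change versus applied pressure, and hysteresis as the gap between loading and unloading sweeps. The synthetic data stand in for real measurements and are not the study's values.

```python
# Minimal sketch with synthetic data (not the paper's measurements): sensitivity
# is estimated as the slope of relative resistance change versus applied
# pressure, and hysteresis as the gap between loading and unloading sweeps.
import numpy as np

pressure = np.linspace(0, 50, 11)                   # kPa, assumed sweep range
r_load = 100 - 0.8 * pressure                       # kOhm, synthetic loading sweep
r_unload = 100 - 0.8 * pressure - 1.0               # kOhm, synthetic unloading offset

delta = (r_load - r_load[0]) / r_load[0]            # relative resistance change
sensitivity = np.polyfit(pressure, delta, 1)[0]     # slope, 1/kPa (negative: R drops)
hysteresis = np.max(np.abs(r_load - r_unload))      # kOhm, loading/unloading gap
print(f"sensitivity = {sensitivity:.4f} /kPa, hysteresis = {hysteresis:.2f} kOhm")
```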
The advancement of smart textiles has generated significant interest in developing wearable textile sensors (WTS), offering new modalities for sensing vital signs and monitoring activity in daily life settings. Textile fabrication methods such as knitting, weaving, embroidery, and braiding offer promising pathways toward unobtrusive and seamless sensing for WTS applications. In particular, knitted sensors have a unique intermeshing loop structure that is currently used to monitor repetitive body movements such as breathing (microscale motion) and walking (macroscale motion). However, practical sensing applications demand a comprehensive study of knit structures as sensors. In this work, we present a detailed performance evaluation of six knitted sensors and the sensing variation caused by design, sensor size, stretching percentage (10%, 15%, 20%, 25%), cyclic stretching (1000 cycles), and external factors such as sweat (salt-fog test). We also present regulated respiration (inhale-exhale) testing data from 15 healthy human participants; the testing protocol includes three respiration rates: slow (10 breaths/min), normal (15 breaths/min), and fast (30 breaths/min). Statistical analysis of these tests covers breathing time and breathing-rate variability. The results offer an empirically derived guideline for future WTS research, aggregate information for understanding sensor behavior across different ranges of motion, and highlight the constraints of silver-based conductive yarn when exposed to real-world environments.
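A minimal sketch of the respiration analysis described above, assuming the knitted sensor yields a quasi-periodic signal: breaths are located by peak detection, from which breathing rate and breathing-time variability follow. The synthetic signal and sampling rate are assumptions, not the study's recordings.

```python
# Minimal sketch, assuming the knitted sensor yields a quasi-periodic signal
# during breathing; the synthetic signal and 25 Hz sampling rate are
# assumptions, not the study's recordings.
import numpy as np
from scipy.signal import find_peaks

fs = 25.0                                        # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)                     # one minute of data
signal = np.sin(2 * np.pi * (15 / 60) * t)       # synthetic "normal" pace, 15 breaths/min

peaks, _ = find_peaks(signal, distance=fs * 1.5) # at most one peak per 1.5 s
breath_times = np.diff(peaks) / fs               # seconds per breath
rate = 60.0 / breath_times.mean()                # breaths per minute
print(f"{rate:.1f} breaths/min, breathing-time SD = {breath_times.std():.2f} s")
```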
Despite great advances in state-of-the-art brain-computer interfaces (BCIs), most BCIs do not consider users' cognitive status during operation, which may play a critical role in BCI performance. This study proposes a novel multimodal BCI that concurrently measures electrical and hemodynamic activity using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) to quantify the neural correlates of mental arithmetic-induced workload, at multiple scales, in groups of older and younger adults. We propose an oddball-based math paradigm in which subjects performed a set of mental arithmetic operations at the target intensifications. Our analysis demonstrated an increase in EEG delta and theta power, a decrease in alpha power, and an increase in fNIRS oxyhemoglobin (HbO) level associated with mental workload, which were observed significantly in the older group. The changes in EEG delta, theta, and HbO were found primarily in frontal and prefrontal areas, whereas the alpha changes were seen mainly at parietal locations. These preliminary analyses suggest a set of functional brain processes, at multiple scales, recruited under mental workload that is more pronounced in the older group.
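For concreteness, the following sketch computes delta, theta, and alpha band power from a single EEG channel using Welch's method. The sampling rate and the random stand-in signal are assumptions, and this is not the study's analysis pipeline.

```python
# Minimal sketch, not the study's pipeline: delta/theta/alpha band power from a
# single EEG channel via Welch's method. The 250 Hz sampling rate and the
# random stand-in signal are assumptions.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 250.0                                   # Hz, assumed sampling rate
eeg = np.random.randn(int(fs * 30))          # 30 s stand-in for a real recording

freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])  # integrate PSD over the band

for name, (lo, hi) in {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13)}.items():
    print(f"{name} power: {band_power(lo, hi):.4f}")
```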
This work introduces wearable deep learning (WearableDL), a unifying conceptual architecture inspired by the human nervous system that captures the convergence of deep learning (DL), the Internet of Things (IoT), and wearable technologies (WT) as follows: (1) the brain, the core of the central nervous system (CNS), represents deep learning for cloud computing and big data processing; (2) the spinal cord (the part of the CNS connected to the brain) represents the IoT for fog computing and big data flow/transfer; and (3) peripheral sensory and motor nerves (components of the peripheral nervous system (PNS)) represent wearable technologies as edge devices for big data collection. In recent years, wearable IoT devices have enabled the streaming of big data from smart wearables (e.g., smartphones, smartwatches, smart clothing, and personalized gadgets) to cloud servers. The remaining challenges are (1) how to analyze the collected wearable big data without background information and without labels describing the underlying activity, and (2) how to recognize the spatial/temporal patterns in this unstructured big data to help end users make decisions, e.g., in medical diagnosis, rehabilitation, or sports performance. DL has recently gained popularity due to its ability to (1) scale to big data (scalability); (2) learn feature engineering by itself (no manual feature extraction or hand-crafted features) in an end-to-end fashion; and (3) offer accuracy and precision when learning from raw unlabeled or labeled (unsupervised or supervised) data. To understand the current state of the art, we systematically reviewed over 100 recently published scientific works on DL approaches for wearable and person-centered technologies. The review supports and strengthens the proposed bio-inspired WearableDL architecture. The article concludes with an outlook and suggestions for WearableDL and its application in the field of big data analytics.
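Purely as an illustration of the three-tier analogy (edge wearables, fog IoT, cloud DL), the toy Python sketch below moves readings from an edge device through a fog buffer to a cloud "model". Every class name and the moving-average stand-in are invented for exposition, not taken from the reviewed systems.

```python
# Toy illustration of the three-tier analogy only; every class below and the
# moving-average "model" are invented for exposition, not reviewed systems.
import random

class EdgeWearable:                   # "peripheral nerves": data collection
    def sample(self):
        return random.gauss(70, 5)    # e.g., one heart-rate reading

class FogGateway:                     # "spinal cord": buffering and transfer
    def __init__(self, batch=10):
        self.batch, self.buffer = batch, []
    def forward(self, reading):
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch:
            out, self.buffer = self.buffer, []
            return out                # ship a full batch to the cloud
        return None

class CloudDL:                        # "brain": big data processing
    def infer(self, batch):
        return sum(batch) / len(batch)   # stand-in for a deep model

edge, fog, cloud = EdgeWearable(), FogGateway(), CloudDL()
for _ in range(30):
    batch = fog.forward(edge.sample())
    if batch:
        print(f"cloud estimate: {cloud.infer(batch):.1f}")
```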
Mobile phones are a ubiquitous and preferred platform for communication, entertainment, and information access. Smartphones may provide an opportunity to better assess mood and behavior and to deliver timely, economical, rapid, and effective intervention for those with mental disorders. This is an important target because behavioral health problems are associated with many of the medical disorders most responsible for morbidity and cost. Today, psychiatrists are seeking channels of mobile technology that can reduce evaluation costs, increase accuracy, and facilitate ubiquitous longitudinal monitoring of treatment and outcome measures on patients' smartphones. Facial expression recognition is an active research area in psychiatry for evaluating a patient's emotional health. Smartphone technology for recognizing facial expressions of emotion is still emerging and offers an open platform for research areas such as ubiquitous intelligence and computing. In this work, we present a framework to track a user's emotional engagement with videos played on a smartphone. The framework processes video of the user recorded with the front-facing camera of a smartphone and tracks facial features to detect joyful durations induced by the played videos. We also conducted studies with healthy individuals to evaluate this approach to measuring emotional engagement. We believe the results are promising and offer valuable insight for building ubiquitous intelligent systems that can support various areas of psychiatric research.
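As a hedged illustration of the general approach (not the authors' framework), the sketch below detects a smiling face frame by frame with OpenCV Haar cascades and accumulates the joyful duration for a recorded front-camera video. The file name and detector thresholds are assumptions.

```python
# Sketch of the general idea only (not the authors' framework): detect a
# smiling face frame by frame with OpenCV Haar cascades and accumulate the
# joyful duration. The file name and detector thresholds are assumptions.
import cv2

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture("front_camera.mp4")       # assumed front-camera recording
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
joyful_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]             # search for a smile inside the face
        if len(smile_cc.detectMultiScale(roi, 1.7, 20)) > 0:
            joyful_frames += 1
            break

cap.release()
print(f"joyful duration ~ {joyful_frames / fps:.1f} s")
```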
Brain-computer interfaces (BCIs) using EEG, fNIRS, and body motion (MoCap) data are receiving growing attention because fNIRS and MoCap are less prone to movement artifacts than other brain imaging techniques such as EEG. Advances in deep learning (neural networks) allow raw data to be used for efficient feature extraction without any pre- or post-processing. In this work, we perform human activity recognition (a BCI classification task) for 5 activity classes using an end-to-end (deep) neural network (NN), from input all the way to output, on raw fNIRS, EEG, and MoCap data. Our core contribution is applying an end-to-end NN model without any pre- or post-processing of the data. The entire NN model is trained using the backpropagation algorithm. Our end-to-end model is a four-layer MLP: an input layer, two hidden layers (each a fully connected (dense) layer with batch normalization and leaky ReLU as the nonlinear activation function), and a softmax output layer. We reached a minimum of 90% accuracy on the test set for the classification task on data from 10 subjects and 5 activity classes.
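The described architecture maps directly to a few lines of PyTorch. In the sketch below, the input width, hidden sizes, and optimizer settings are assumptions the abstract does not specify, and the random tensors stand in for real fNIRS/EEG/MoCap batches.

```python
# Minimal PyTorch sketch of the described four-layer MLP. The input width,
# hidden sizes, and optimizer settings are assumptions the abstract does not
# specify; the random tensors stand in for raw fNIRS/EEG/MoCap batches.
import torch
import torch.nn as nn

n_features, n_classes = 64, 5                # assumed input width; 5 activity classes

model = nn.Sequential(
    nn.Linear(n_features, 128), nn.BatchNorm1d(128), nn.LeakyReLU(),  # hidden 1
    nn.Linear(128, 64), nn.BatchNorm1d(64), nn.LeakyReLU(),           # hidden 2
    nn.Linear(64, n_classes),                # logits; CrossEntropyLoss adds softmax
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, n_features)              # stand-in batch of raw features
y = torch.randint(0, n_classes, (32,))       # stand-in activity labels
for _ in range(100):                         # end-to-end backpropagation
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```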