Identifying Functional Hand Use and Hand Roles of Stroke Survivors in Activities of Daily Living Using Egocentric Video
2021
Research Objectives: To identify hand use and hand roles of stroke survivors using videos from wearable cameras (egocentric videos). Upper limb (UL) function evaluated in the clinic may not reflect hand use in activities of daily living (ADLs) after stroke. Accelerometers have been used to capture UL use but fail to reveal details of hand function. In response, a system composed of an egocentric camera and computer vision approaches is proposed to detect functional hand use and identify the role of the affected hand (stabilizer or manipulator) after stroke in unconstrained environments.

Design: Cross-sectional study.

Setting: Home simulation laboratory at the Toronto Rehabilitation Institute.

Participants: Nine chronic stroke survivors were included in this study.

Interventions: No intervention was included.

Methods: Participants used an egocentric camera to record ADLs performed in their typical manner. Features reflecting motion, hand shape, colour, and hand size changes were extracted from the images and fed into random forest classifiers to detect hand-object interactions and identify hand roles (a sketch of this kind of pipeline appears after the abstract).

Main Outcome Measures: Algorithm performance was quantified by the F1-score, evaluated using leave-one-subject-out and leave-one-task-out cross-validation (LOSOCV and LOTOCV, respectively; see the evaluation sketch after the abstract).

Results: LOSOCV and LOTOCV F1-scores for affected hand use were 0.64 ± 0.24 and 0.76 ± 0.23, respectively. LOSOCV and LOTOCV F1-scores for unaffected hand use were 0.72 ± 0.20 and 0.82 ± 0.22, respectively. For hand role classification, F1-scores for LOSOCV and LOTOCV were 0.70 ± 0.19 and 0.68 ± 0.23 for the affected hand, and 0.59 ± 0.23 and 0.65 ± 0.28 for the unaffected hand.

Conclusions: The results demonstrate the feasibility of using egocentric videos to identify hand use and the role of the affected hand after stroke.

Author(s) Disclosures: All authors have nothing to disclose.
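The Methods section describes per-frame features (motion, hand shape, colour, and hand size changes) fed into random forest classifiers. The following is a minimal sketch of such a pipeline in Python with scikit-learn; the feature layout, dimensions, and synthetic data are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a per-frame hand-object interaction classifier of the
# kind described in Methods. All features here are random placeholders
# standing in for real motion/shape/colour/size descriptors (assumption).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_frames, n_features = 500, 24          # hypothetical sizes
X = rng.random((n_frames, n_features))  # placeholder per-frame features
y = rng.integers(0, 2, size=n_frames)   # 1 = hand-object interaction, 0 = none

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:10]))              # per-frame interaction predictions
```

A second classifier of the same form could then label interaction frames with the hand's role (stabilizer vs. manipulator), matching the two-stage description in the abstract.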
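For the evaluation, the abstract reports F1-scores under leave-one-subject-out and leave-one-task-out cross-validation. The sketch below shows LOSOCV using scikit-learn's LeaveOneGroupOut, grouping frames by a hypothetical subject ID; grouping by task ID instead would give LOTOCV. Again, the data are synthetic stand-ins, not the study's dataset.

```python
# Sketch of leave-one-subject-out cross-validation (LOSOCV) with a
# per-fold F1-score. Features, labels, and subject IDs are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_frames, n_features = 500, 24
X = rng.random((n_frames, n_features))        # placeholder features
y = rng.integers(0, 2, size=n_frames)         # interaction labels
subjects = rng.integers(0, 9, size=n_frames)  # 9 participants (hypothetical IDs)

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    # Hold out all frames from one subject, train on the rest.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

print(f"LOSOCV F1 = {np.mean(scores):.2f} ± {np.std(scores):.2f}")
```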