Introduction: The ILCOR and the AHA have recommended therapeutic hypothermia for patients with return of spontaneous circulation (ROSC) after cardiac arrest. We have previously examined outcome predictors, such as internal jugular venous blood oxygen saturation (SjvO2), glucagon, glucose, glial fibrillary acidic protein, procalcitonin, interleukin-8, interleukin-6, S100B, and high mobility group box 1 in serum and/or cerebrospinal fluid (CSF) within 48 hours after ROSC. However, except for SjvO2, these values could not be obtained within a few hours; a predictor that indicates the likely outcome as soon as possible is therefore needed to guide further treatment. Methods: This retrospective cohort study included patients with ROSC after CPR who were admitted to our university hospital between January 2000 and May 2011 or to an affiliated hospital between January 2006 and May 2011. Clinical parameters recorded on arrival included age (A), arterial blood pH (B), time from CPR to ROSC (C), pupil diameter (D), and initial rhythm (E). The Glasgow Outcome Scale (GOS) was recorded at 6 months, and patients were divided into favorable and unfavorable neurological outcome groups based on the GOS score. Multiple logistic regression analysis was performed to derive a formula predicting neurological outcome from these basic clinical parameters. Results: The regression equation was derived using a teaching dataset of 389 records: EP = 1/(1 + e^(-x)), where EP is the estimated probability of a favorable outcome, and x = (-0.034 × A) + (4.669 × B) - (0.105 × C) - (0.976 × D) + (2.603 × E) - 28.279. For the validation dataset (n = 100), the sensitivity, specificity, and accuracy were 86%, 91%, and 91%, respectively. Conclusions: The 6-month neurological outcome of patients resuscitated from out-of-hospital cardiac arrest (OHCA) can be predicted using clinical parameters that are easily recorded at the site of CPR.
We propose that a robot speak Hanamogera (semantic-free speech) when it talks with a person. Hanamogera is semantic-free speech whose sound is that of words composed of phonogram characters. Because Hanamogera speech need not carry any meaning, the characters can be chosen freely. The sound of each character is thought to convey an impression according to the consonants and vowels it contains. We expected that the sound of Hanamogera would make a listener feel that talking with a Hanamogera-speaking robot is fun. We conducted an experiment in which participants talked with a NAO robot and an experiment evaluating Hanamogera speech. The results showed that talking with a Hanamogera-speaking robot was more fun than talking with a robot that only nodded.
Early prediction of the neurological outcomes of patients with out-of-hospital cardiac arrest is important to select the optimal clinical management. We hypothesized that clinical data recorded at the site of cardiopulmonary resuscitation would be clinically useful. This retrospective cohort study included patients with return of spontaneous circulation after cardiopulmonary resuscitation who were admitted to our university hospital between January 2000 and November 2013 or two affiliated hospitals between January 2006 and November 2013. Clinical parameters recorded on arrival included age (A), arterial blood pH (B), time from cardiopulmonary resuscitation to return of spontaneous circulation (C), pupil diameter (D), and initial rhythm (E). The Glasgow Outcome Scale was recorded at 6 months, and a favorable neurological outcome was defined as a score of 4-5 on the Glasgow Outcome Scale. Multiple logistic regression analysis was carried out to derive a formula to predict neurological outcomes based on basic clinical parameters. The regression equation was derived using a teaching dataset (total, n = 477; favorable outcome, n = 55): EP = 1/(1 + e^(-x)), where EP is the estimated probability of having a favorable outcome, and x = (-0.023 × A) + (3.296 × B) - (0.070 × C) - (1.006 × D) + (2.426 × E) - 19.489. The sensitivity, specificity, and accuracy were 80%, 92%, and 90%, respectively, for the validation dataset (total, n = 201; favorable outcome, n = 25). The 6-month neurological outcomes can be predicted in patients resuscitated from out-of-hospital cardiac arrest using clinical parameters that can be easily recorded at the site of cardiopulmonary resuscitation.
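To make the regression equation concrete, it can be sketched in code. A minimal sketch, assuming the coefficients reported in the abstract; the input encoding (initial rhythm E as 1 for a shockable rhythm and 0 otherwise, C in minutes, D in millimetres) is an assumption, as the abstract does not state the units:

```python
import math

def estimated_probability(age, ph, time_to_rosc_min, pupil_mm, shockable_rhythm):
    """Estimated probability of a favorable 6-month outcome (EP = 1/(1 + e^(-x)))."""
    e = 1.0 if shockable_rhythm else 0.0  # assumed 0/1 coding of initial rhythm
    x = (-0.023 * age) + (3.296 * ph) - (0.070 * time_to_rosc_min) \
        - (1.006 * pupil_mm) + (2.426 * e) - 19.489
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical example: 60-year-old, pH 7.2, 15 min to ROSC, 3 mm pupils, shockable rhythm
p = estimated_probability(60, 7.2, 15, 3, True)
```

As expected from the signs of the coefficients, the estimate falls with older age, longer time to ROSC, and wider pupils, and rises with higher pH and a shockable initial rhythm.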
Robots and other autonomous systems interacting with humans should customize their behaviour to their human partner's preferences. We propose a method for learning and generating robot movement customized to individual preferences. Within a reinforcement learning framework, we generate rewards based on facial expressions observed during the robot's motion. Robot motions are parametrized; the rewards are used to modify these motion parameters using Q-learning. The proposed approach is evaluated in a user study, using an interactive kinetic sculpture. The system interacts with participants and evolves its motion based on the rewards estimated from the participants' facial expressions. Our results show that, for a subset of participants, the system was able to successfully generate actions that resulted in higher-than-random rewards. The ability to successfully generate high-reward actions depends on: being able to recognize positive affect from the face, being able to generate actions that are pleasing to the participant, and being able to learn the mapping from rewards to actions.
We have conducted research on building a robot dialogue system to support the independent living of older adults. In order to provide appropriate support, it is necessary to obtain as much information as possible, particularly about their health condition. As a first step, we examined a method to allow dialogue to continue for longer periods. A scenario-based dialogue system utilizing pause detection for turn-taking was built, and the practicality of adjusting the system to the dialogue rhythm of each individual was studied. The system was evaluated through user studies with a total of 20 users, 10 of whom were older adults. The system detected pauses in the user's speech using the sound level of their voice, and predicted the duration and number of pauses based on past dialogue data. The system could thus initiate the robot's voice-call after the user's speech was predicted to have ended. Multiple turns of dialogue between the robot and older adults proved possible with the system, despite several observed overlaps between the robot's and the users' speech. The users responded to the robot, including to questions about their health condition. The feasibility of a scenario-based dialogue system was suggested; however, improvements are required.
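The level-based pause detection described above can be sketched as follows. This is an illustrative sketch only: the frame length, silence threshold, and minimum pause duration are assumed values, not parameters reported in the study, and the input is assumed to be a stream of per-frame sound levels (e.g. RMS every 100 ms):

```python
FRAME_MS = 100          # assumed analysis frame length
LEVEL_THRESHOLD = 0.05  # assumed level below which a frame counts as silence

def detect_pauses(levels, min_pause_ms=300):
    """Return (start_ms, duration_ms) for each silent run at least min_pause_ms long."""
    pauses, run_start = [], None
    for i, level in enumerate(levels):
        if level < LEVEL_THRESHOLD:
            if run_start is None:
                run_start = i          # a silent run begins
        else:
            if run_start is not None:  # a silent run just ended
                dur = (i - run_start) * FRAME_MS
                if dur >= min_pause_ms:
                    pauses.append((run_start * FRAME_MS, dur))
                run_start = None
    if run_start is not None:          # trailing silence at end of stream
        dur = (len(levels) - run_start) * FRAME_MS
        if dur >= min_pause_ms:
            pauses.append((run_start * FRAME_MS, dur))
    return pauses
```

Statistics over the detected pause durations and counts, accumulated across past dialogues, would then feed the per-user prediction of when the user's speech has ended.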
We aim to develop a method to actively sense an older adult's health condition through dialogue with a robot, in the way a nurse or caregiver would. Our method estimates the health condition from the user's responses when the robot voice-calls them, including response delay, volume, prosody, and other non-verbal information. Subsequent voice-calls by the robot, based on the user's estimated condition, are generated with sensor integration and are designed to encourage the user to maintain or improve their health, including exercising, eating meals regularly, and sleeping. This paper reports the voice-calling robot system constructed to test and verify the proposed method at home. The system is built to conduct dialogue as scheduled, change the contents of the robot's voice-calling, and collect the sound data recorded during dialogue. We describe the outline of the constructed system and an experiment confirming its operation. We also report a preliminary experiment, its results, and future work.
The aim of this research is to have a robot comfort a human and improve his or her mental state. It is known that a human's facial expression is one of the triggers for arousing emotion. In this paper, as a means of comforting a human, we propose a method for predicting what kind of facial expression will appear. With this method, a robot selects the action expected to elicit a specific facial expression, based on the result of predicting the human's facial expression. We performed experiments on eliciting specific facial expressions with the Mof-mof robot, which selected actions expected to elicit designated facial expressions such as 'happy,' 'surprised,' 'angry,' and 'sad' from four subjects. Some subjects' facial expressions appeared to be elicited by the robot's actions. The results of the experiments show that the ease of eliciting a specific facial expression seems to be affected by the type of facial expression and the human's will.
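The selection step can be sketched as an argmax over predicted expression probabilities. Everything in this table is hypothetical: the action names and probability values are invented for illustration, and in the proposed method the probabilities would come from the facial-expression prediction model:

```python
# Hypothetical P(expression | robot action) table; in the actual method
# these values would be produced by the facial-expression predictor.
P_EXPR_GIVEN_ACTION = {
    "wave_arms":   {"happy": 0.6, "surprised": 0.2, "angry": 0.1, "sad": 0.1},
    "sudden_jump": {"happy": 0.2, "surprised": 0.6, "angry": 0.1, "sad": 0.1},
    "slow_droop":  {"happy": 0.1, "surprised": 0.1, "angry": 0.1, "sad": 0.7},
}

def select_action(target_expression):
    """Pick the action predicted most likely to elicit the target expression."""
    return max(P_EXPR_GIVEN_ACTION,
               key=lambda a: P_EXPR_GIVEN_ACTION[a][target_expression])
```

With a table like this, asking the robot to elicit 'sad' would select the drooping motion, while 'happy' would select the arm wave.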
We propose a system that estimates the health condition of older adults at home based on their responses when talking with a robot. The system actively elicits clues for estimating the health condition, not only by passively observing the living environment but also by talking to the older adults. As a first step, to survey how older adults react to a voice-calling robot, we conducted a user study in which the robot talked to older adults who were about to leave a chair. In this paper, we report the user study and discuss appropriate ways of talking.