We investigate whether a robot's eye color affects learners' academic emotions during lecture behaviors. Conventional human-robot interaction research on robot lecturers has emphasized robots assisting or replacing human lecturers. We extend these ideas and examine whether robots can lecture using behaviors that are impossible for humans. Psychological research has shown that color affects emotion, and because emotion is strongly related to learning, a framework for controlling emotion is required. We therefore consider whether emotions related to a learner's academic work, called "academic emotions," can be controlled by the color of a robot's illuminated eyes. In this paper, we found that the robot's eye-light color affects academic emotions and that the effect can be manipulated and adapted to individuals. Furthermore, this manipulability of academic emotions by color was confirmed in a situation mimicking a real lecture.
The aim of this study is to model the cognitive processes based on a perceiving-acting cycle in patients with unilateral spatial neglect (USN). USN is the inability to perceive features of the environment, body, or objects on one side. To extract the cognitive characteristics of USN patients in a multifaceted manner, we constructed a multimodal sensing system using immersive VR. In this paper, we present a system that predicts a subject's movement during a search task from the measurement results using an LSTM neural network.
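The movement-prediction step described above can be sketched as a small sequence-to-one recurrent model. The following NumPy code is an illustrative minimal LSTM cell, not the authors' implementation; the input dimensionality, hidden size, and linear read-out `W_out` are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (NumPy) for sequence-to-one prediction."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # one stacked weight matrix for input, forget, cell, and output gates
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)          # update cell state
        h = o * np.tanh(c)                  # update hidden state
        return h, c

def predict_next(cell, seq, W_out):
    """Run the cell over a measured sequence and read out the next position."""
    h = c = np.zeros(cell.n_hidden)
    for x in seq:
        h, c = cell.step(x, h, c)
    return W_out @ h

# toy usage: a short sequence of 3-D position measurements (hypothetical data)
rng = np.random.default_rng(2)
cell = LSTMCell(n_in=3, n_hidden=8)
W_out = rng.normal(0.0, 0.1, (3, 8))
prediction = predict_next(cell, [rng.normal(size=3) for _ in range(5)], W_out)
```

In practice the weights would be trained on the recorded search-task data; the sketch only shows the forward pass that maps a measurement sequence to a predicted next movement.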
This paper discusses human preference learning by robot partners through interaction with a person, using the person's position estimated by a sensor network. We use a robot music player, Miuro, and focus on music selection to provide a comfortable sound field for the person. We propose a method based on Q-learning for learning the relationship between the person's position and the corresponding music selection. Furthermore, we propose a steady-state genetic algorithm using template matching to extract a person from a 3D distance image based on differential extraction. The experimental results show that the proposed method can learn the relationship between the person's position and the music the person prefers.
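The position-to-music mapping can be sketched as tabular Q-learning over discretized room zones. This is a generic sketch, not the paper's implementation: the zone count, track count, reward signal, and all hyperparameters are assumptions, and the toy reward simply encodes a made-up preference.

```python
import numpy as np

N_POS, N_TRACKS = 4, 3          # discretized room zones, candidate tracks (assumed sizes)
ALPHA, GAMMA, EPS = 0.3, 0.5, 0.2

rng = np.random.default_rng(1)
Q = np.zeros((N_POS, N_TRACKS))  # Q[zone, track]

def select_track(pos):
    """Epsilon-greedy track selection for the person's current zone."""
    if rng.random() < EPS:
        return int(rng.integers(N_TRACKS))
    return int(np.argmax(Q[pos]))

def update(pos, track, reward, next_pos):
    """Standard Q-learning update."""
    Q[pos, track] += ALPHA * (reward + GAMMA * Q[next_pos].max() - Q[pos, track])

# toy preference: the person in zone z enjoys track (z mod N_TRACKS);
# reward stands in for an observed comfort signal
for _ in range(3000):
    pos = int(rng.integers(N_POS))
    track = select_track(pos)
    reward = 1.0 if track == pos % N_TRACKS else 0.0
    update(pos, track, reward, int(rng.integers(N_POS)))
```

After training, the greedy policy `np.argmax(Q[pos])` recovers the preferred track for each zone, which is the relationship the abstract describes learning.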
This paper proposes a human localization method in sensor networks for monitoring elderly people. First, we explain the proposed intelligent sensor networks. Next, we apply a spiking neural network to extract feature points for human localization from the measurement data obtained by the sensor networks. Furthermore, we propose a learning method for the spiking neural network based on the time series of measurement data. Finally, we discuss the effectiveness of the proposed method through experimental results in a living room.
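The core idea of spiking-network feature extraction, turning a sensor time series into discrete spike events, can be illustrated with a single leaky integrate-and-fire neuron. This is a generic sketch, not the paper's network: the leak constant `tau`, the `threshold`, and the toy sensor reading are assumptions.

```python
import numpy as np

def lif_spikes(signal, tau=0.8, threshold=1.0, gain=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks by
    factor `tau` each step, integrates the input, and emits a spike (1)
    when it crosses `threshold`, then resets to zero."""
    v, spikes = 0.0, []
    for x in signal:
        v = tau * v + gain * x
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return np.array(spikes)

# toy reading: a brief burst (e.g. a person passing a sensor) amid silence
reading = np.concatenate([np.zeros(10), 0.6 * np.ones(5), np.zeros(10)])
train = lif_spikes(reading)
```

Only the burst region produces spikes, so the spike times act as feature points marking when (and, across several sensors, where) activity occurred.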
In this study, we aim to develop a remote-controlled avatar robot system for elderly and disabled patients. Most teleoperation systems have interfaces that visually present the state of the robot, including feedback information, and receive control commands sent manually by the operator. However, elderly and disabled patients might have difficulty controlling the robot manually. This paper therefore presents a multimodal interface for remotely controlling a robotic avatar. We furthermore propose a cognitive platform for remote robot control based on the concept of a perceiving-acting cycle. The platform consists of a perceptual system for incremental environment modeling and an action system for extracting patterns of operator behavior. Each system uses a self-organized neural network based on unsupervised learning. Moreover, we use a spiking neural network for spatio-temporal modeling of the interaction between the operator and the environment.
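A self-organized network trained by unsupervised learning, of the kind used here for incremental environment modeling, can be sketched with a tiny one-dimensional self-organizing map whose codebook vectors adapt online to incoming observations. The map size, learning-rate schedule, and two-cluster toy data below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def train_som(data, n_units=4, lr=0.5, sigma=1.0, epochs=20, seed=0):
    """Tiny 1-D self-organizing map: for each observation, the best-matching
    unit (BMU) and its map neighbors move toward the input, so the codebook
    incrementally models the distribution of sensed data (unsupervised)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, (n_units, data.shape[1]))  # codebook vectors
    idx = np.arange(n_units)
    for _ in range(epochs):
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
            W += lr * h[:, None] * (x - W)      # pull BMU and neighbors toward x
        lr *= 0.9
        sigma *= 0.9                             # shrink neighborhood over time
    return W

# toy environment observations drawn from two locations (hypothetical data)
data = np.array([[0.0, 0.0], [5.0, 5.0]] * 10)
W = train_som(data)
```

After training, distinct units settle near each observed location, which is the sense in which the codebook forms an incremental model of the environment.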
This paper presents a multi-robot communication system for guiding customers and providing information about products and deals in a shop. The purpose of multi-robot communication is to create a sense of belonging between the robots and a customer. However, even when the robots collaborate with each other, customers may ignore them if the robots rely on utterances alone. In an experiment conducted in a store, robots placed in front of the store attracted customers' attention and created a sense of togetherness by offering to shake hands with them. In this study, we conducted a questionnaire survey and observational monitoring to verify the effect of the proposed approach on customer impressions of the guide robots.