Affordance learning and inference based on vision-speech association in human-robot interactions
2017
Human-robot interaction is important for a robot to learn about its environment and complete tasks. However, it is difficult for humans to teach a robot all the required information, so the robot needs to infer new knowledge from what it has already learned. Humans and other animals understand the world through affordances, a concept that has been introduced into robotics to promote a robot's cognitive capabilities in planning, recognition and control. An affordance, which is jointly determined by the object and the robot, encodes a potential action that the robot might execute upon the object. Most existing works have a robot build its affordance knowledge in only one dimension, such as vision. In order to guide a robot to develop human-like intelligence as far as possible, we propose an affordance-based interaction model that learns affordances through vision-speech association. Our model has the following features: (i) it uses speech to abstractly represent behavioral and visual information at a high level without considering the details; (ii) it reduces the number of parameters in human-robot communication; (iii) it infers unknown information through table association; (iv) it extends affordance learning from one dimension to two dimensions. The experiment is carried out on a NAO humanoid robot, and the results show that our method supports affordance learning and inference correctly and effectively.
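The abstract does not detail how table association works, so the following is only a minimal Python sketch of one plausible reading: affordance knowledge is stored as partial (vision, speech, action) triples, and missing fields are inferred by looking up records that share the known field. The class name AffordanceTable, its methods, and the example entries are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a hypothetical association
# table linking visual categories, speech labels, and afforded actions.
# Unknown fields are inferred by matching on the known field(s)
# ("table association").

from collections import defaultdict


class AffordanceTable:
    def __init__(self):
        # Each record is a partial triple; None marks information the robot
        # has not yet observed or been told.
        self.records = []  # dicts: {"vision": ..., "speech": ..., "action": ...}

    def add(self, vision=None, speech=None, action=None):
        self.records.append({"vision": vision, "speech": speech, "action": action})

    def infer(self, **known):
        """Return candidate values for the missing fields by collecting
        records that agree with the known field(s)."""
        matches = [r for r in self.records
                   if all(r.get(k) == v for k, v in known.items())]
        inferred = defaultdict(set)
        for r in matches:
            for field, value in r.items():
                if field not in known and value is not None:
                    inferred[field].add(value)
        return dict(inferred)


# Hypothetical usage: the robot has seen a red sphere, heard it called
# "ball", and learned that it affords rolling.
table = AffordanceTable()
table.add(vision="red_sphere", speech="ball", action="roll")
table.add(vision="blue_cube", speech="box", action="push")

# Later, hearing only the word "ball", the robot infers the associated
# visual category and the action it affords.
print(table.infer(speech="ball"))  # {'vision': {'red_sphere'}, 'action': {'roll'}}
```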