Learning haptic affordances from demonstration and human-guided exploration
2016
In this paper, we present a system for learning haptic affordance models of complex manipulation skills. The goal of a haptic affordance model is to better complete tasks by characterizing what a particular object-action pair feels like. We use learning from demonstration to provide the robot with an example of a successful interaction with a given object. We then use environmental scaffolding to collect grounded examples (successes and unsuccessful "near misses") of the haptic data for the object-action pair, using a force/torque (F/T) sensor mounted at the wrist. From this we build one success Hidden Markov Model (HMM) and one "near-miss" HMM for each object-action pair. We evaluate this approach with five different actions over seven different objects to learn two specific affordances (open-able and scoop-able). We show that by building a library of object-action pairs for each affordance, we can successfully monitor a trajectory of haptic data to determine whether the robot has found an affordance. This is supported through cross-validation on the object-action pairs, with four of the seven object-action pairs achieving a perfect F1 score, and through leave-one-object-out testing, where the learned object-action models correctly identify the specific affordance with an average accuracy of 67% for scoop-able and 53% for open-able.
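The abstract's core recognition step is a likelihood comparison between a success HMM and a near-miss HMM trained on wrist F/T trajectories. The sketch below is not the authors' implementation; it is a minimal illustration of that scheme, assuming 6-axis force/torque sequences, the hmmlearn library, and illustrative state counts and function names.

```python
# Minimal sketch (not the paper's code) of success-vs-near-miss HMM classification
# over wrist force/torque trajectories. Assumes 6-D F/T samples per time step and
# the hmmlearn library; state count and names are illustrative assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_hmm(trajectories, n_states=5):
    """Fit one Gaussian HMM to a list of (T_i, 6) force/torque trajectories."""
    X = np.vstack(trajectories)               # concatenate all sequences
    lengths = [len(t) for t in trajectories]  # per-sequence lengths for hmmlearn
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(X, lengths)
    return model

def affordance_detected(success_hmm, near_miss_hmm, trajectory):
    """Label a new F/T trajectory as a success if the success HMM explains it better."""
    return success_hmm.score(trajectory) > near_miss_hmm.score(trajectory)

# Hypothetical usage for one object-action pair (e.g., opening a jar):
# success_hmm   = train_hmm(successful_ft_trajectories)
# near_miss_hmm = train_hmm(near_miss_ft_trajectories)
# found = affordance_detected(success_hmm, near_miss_hmm, new_ft_trajectory)
```

In this reading, the "library" for an affordance is simply the collection of such HMM pairs over all object-action combinations, and monitoring a trajectory reduces to scoring it against each pair.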