Interactive Planning of Manual Assembly Operations: From Language to Motion
2016
Abstract This paper describes part of a novel approach to planning the assembly of cars on the shop floor, currently being explored in the EU-funded project INTERACT, in which 3D worker simulations are automatically generated from textual descriptions. Under this approach, all planning is carried out virtually, thereby interactively exploiting the workers’ knowledge. We suggest solutions to the subtask of mapping textual descriptions onto motion sequences. Consider a description like “Tighten arm support with cordless screw driver on center console”. This text belongs to a controlled natural language that is – unlike unconstrained language – amenable to unambiguous linguistic analysis. The result of the analysis is then broken down into a sequence of elementary actions, such as WALK, PICK, or PLACE, carried out by a digital human model (DHM) using and manipulating objects in the 3D scene. The representation level of elementary actions is designed to provide all information needed for subsequent motion synthesis. For proper 3D visualization, each action requires dynamic or static parameters, such as the grasp points on the objects and the positions of the DHM and the objects. A semantic interpretation of the linguistic results must account for all variations to be expected in the scene. For instance, if the DHM is not “near” the object of interest, it must WALK. Evaluating such condition-action rules against the constraints the scene imposes for a given textual description yields a sequence of elementary actions that is then processed further and visualized. When viewing the simulation, the human planner can affect some aspects of processing, such as the order of elementary actions, by manipulating the rules.
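The expansion of a parsed instruction into elementary actions via condition-action rules can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scene model, the `near` threshold, and the rule set are all hypothetical, assuming a parsed instruction of the form "tighten PART with TOOL on TARGET".

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """Hypothetical 3D-scene abstraction: 2D positions stand in for full poses."""
    dhm_position: tuple                    # current (x, y) of the digital human model
    object_positions: dict = field(default_factory=dict)  # object name -> (x, y)
    near_threshold: float = 1.0            # illustrative reach distance

def near(scene, obj):
    """Condition: is the DHM within reach of the object?"""
    dx = scene.dhm_position[0] - scene.object_positions[obj][0]
    dy = scene.dhm_position[1] - scene.object_positions[obj][1]
    return (dx * dx + dy * dy) ** 0.5 <= scene.near_threshold

def plan(scene, tool, part, target):
    """Expand 'tighten <part> with <tool> on <target>' into elementary actions.

    Each rule tests a scene condition (e.g. 'not near') and, if it fires,
    inserts the corresponding elementary action (e.g. WALK) before the
    action that needed the condition to hold.
    """
    actions = []
    for obj in (tool, part):
        if not near(scene, obj):                       # rule: must be near to PICK
            actions.append(("WALK", scene.object_positions[obj]))
            scene.dhm_position = scene.object_positions[obj]
        actions.append(("PICK", obj))
    if not near(scene, target):                        # rule: must be near to PLACE
        actions.append(("WALK", scene.object_positions[target]))
        scene.dhm_position = scene.object_positions[target]
    actions.append(("PLACE", part, target))
    actions.append(("TIGHTEN", part, tool, target))
    return actions

scene = Scene(dhm_position=(0, 0),
              object_positions={"screwdriver": (3, 0),
                                "arm_support": (3, 1),
                                "center_console": (6, 0)})
print(plan(scene, "screwdriver", "arm_support", "center_console"))
```

Because the rules consult the scene at planning time, the same textual description yields different action sequences for different initial scenes: here the DHM starts far from the screwdriver, so a WALK is inserted before the first PICK, whereas the arm support is already within reach and needs none.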