Human Grasping Simulation
Related Papers
The paper discusses scale-dependent grasp. Suppose that a human approaches an object initially placed on a table and finally achieves an enveloping grasp. Under such initial and final conditions, the human unconsciously changes the grasp strategy according to the size of the object, even for objects of similar geometry. We call this kind of grasp planning scale-dependent grasp. We find that grasp patterns also change with surface friction and cross-sectional geometry, in addition to object scale. Focusing on column objects, we first classify the grasp patterns and extract the essential motions so that we can construct grasp strategies applicable to multifingered robot hands. The grasp strategies constructed for robot hands are verified by experiments. We also consider how a robot hand can recognize a failure mode and switch from one grasp strategy to another.
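The size-dependent strategy switching described above can be sketched as a simple selector. The thresholds, friction cutoff, and strategy names below are illustrative assumptions, not the paper's classification, which also accounts for cross-sectional geometry:

```python
# Sketch of a scale-dependent grasp strategy selector for a column object
# on a table. All thresholds and strategy names are hypothetical.

def select_grasp_strategy(diameter_mm: float, friction: float) -> str:
    """Pick an enveloping-grasp strategy based on object size and friction."""
    if diameter_mm < 20:
        # Small objects: pinch with fingertips first, then re-grasp.
        return "fingertip-then-envelope"
    if diameter_mm < 80:
        # Medium objects: envelop directly unless the surface is slippery.
        return "direct-envelope" if friction > 0.4 else "edge-slide"
    # Large objects: tilt the object to slide the palm underneath.
    return "tilt-then-envelope"

print(select_grasp_strategy(15, 0.5))   # fingertip-then-envelope
print(select_grasp_strategy(50, 0.2))   # edge-slide
```

A real implementation would replace the hard thresholds with the motion classification extracted from the human-grasp observations.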
This paper deals with the problem of planning hand configurations for grasp/non-grasp constraining of an object. In order to make a robot perform both grasp and graspless manipulation with a multifingered hand, we need a planner that determines hand configurations applicable to both. We implement a new performance index in the grasp planner GraspIt! so that we can obtain appropriate hand configurations for grasp/non-grasp constraining of an object.
Deep learning has significantly advanced computer vision and natural language processing. While there have been some successes in robotics using deep learning, it has not been widely adopted. In this paper, we present a novel robotic grasp detection system that predicts the best grasping pose of a parallel-plate robotic gripper for novel objects using the RGB-D image of the scene. The proposed model uses a deep convolutional neural network to extract features from the scene and then uses a shallow convolutional neural network to predict the grasp configuration for the object of interest. Our multi-modal model achieved an accuracy of 89.21% on the standard Cornell Grasp Dataset and runs at real-time speeds. This redefines the state-of-the-art for robotic grasp detection.
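Grasp detectors of this kind are commonly evaluated on the Cornell Grasp Dataset using a 5-D grasp-rectangle representation: center (x, y), orientation, gripper opening, and plate width. The sketch below shows that representation and how it maps to rectangle corners; the class and function names are illustrative, not the paper's API:

```python
# 5-D grasp-rectangle representation (center, angle, opening, plate width)
# as used for parallel-plate grasp detection. Names are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class GraspRect:
    x: float      # center column in the image (pixels)
    y: float      # center row in the image (pixels)
    theta: float  # orientation of the gripper axis (radians)
    w: float      # gripper opening width
    h: float      # parallel-plate width

def corners(g: GraspRect):
    """Return the four image-plane corners implied by the 5-D grasp."""
    c, s = math.cos(g.theta), math.sin(g.theta)
    dx, dy = g.w / 2, g.h / 2
    return [(g.x + c * px - s * py, g.y + s * px + c * py)
            for px, py in [(-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)]]

print(corners(GraspRect(100, 50, 0.0, 40, 10)))
# → [(80.0, 45.0), (120.0, 45.0), (120.0, 55.0), (80.0, 55.0)]
```

A predicted rectangle is typically counted as correct when its angle is within 30 degrees of a ground-truth rectangle and their Jaccard (intersection-over-union) overlap exceeds 0.25.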
We propose CAPGrasp, an $\mathbb{R}^3\times \text{SO(2)-equivariant}$ 6-DoF continuous approach-constrained generative grasp sampler. It includes a novel learning strategy for training CAPGrasp that eliminates the need to curate massive conditionally labeled datasets and a constrained grasp refinement technique that improves grasp poses while respecting the grasp approach directional constraints. The experimental results demonstrate that CAPGrasp is more than three times as sample efficient as unconstrained grasp samplers while achieving up to 38% grasp success rate improvement. CAPGrasp also achieves 4-10% higher grasp success rates than constrained but noncontinuous grasp samplers. Overall, CAPGrasp is a sample-efficient solution when grasps must originate from specific directions, such as grasping in confined spaces.
For a multi-fingered hand, there are numerous ways to grasp an object stably, so optimal grasp planning is necessary to find the grasp points that best achieve the objective of a given task. First, we define several grasp indices to evaluate the quality of each feasible grasp. Since the physical meanings of the defined grasp indices differ from one another, it is not easy to combine those indices to identify the optimal grasp. In this paper, we propose a new generalized grasping performance index that represents all of the grasp indices as one measure based on a non-dimensional technique. Through simulations, we show that the proposed optimal grasp planning resembles the physical sense of human grasping.
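The core difficulty the abstract names, combining indices with different physical units, can be illustrated with a minimal non-dimensionalization sketch. The min-max scaling and equal weights below are assumptions for illustration; the paper's specific technique may differ:

```python
# Sketch: combine heterogeneous grasp-quality indices into one measure by
# scaling each index to the dimensionless range [0, 1] over the candidate
# grasps, then taking a weighted sum. Weights and scaling are assumptions.

def normalize(values):
    """Min-max scale a list of raw index values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def combined_index(grasp_indices, weights=None):
    """grasp_indices: one list of raw values per index, aligned by grasp."""
    norm = [normalize(vals) for vals in grasp_indices]
    n_grasps = len(norm[0])
    weights = weights or [1.0 / len(norm)] * len(norm)
    return [sum(w * idx[g] for w, idx in zip(weights, norm))
            for g in range(n_grasps)]

# Three candidate grasps scored by two indices on very different scales.
scores = combined_index([[0.2, 0.8, 0.5], [120.0, 30.0, 90.0]])
best = scores.index(max(scores))
print(best)  # → 2
```

Grasp 2 wins here because it is moderately good on both indices, whereas grasps 0 and 1 each excel on only one, which is exactly the trade-off a single combined measure is meant to expose.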
This paper deals with the identification of contact points between finger links and a grasped object in an enveloping grasp. Such a grasp has two merits: it is firmer than a fingertip grasp or a distal-link grasp, and contact-point error can be detected because the object is held by the inner links. However, it is difficult to identify the contact points and contact forces, and these parameters are necessary to control the object without damaging it and to shift to an optimal grasp. We propose an analytical method that identifies the unknown parameters, such as contact points and forces, through active sensing in the enveloping grasp, and we provide a necessary and sufficient condition for the identification.
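The identification idea can be sketched for the simplest case: a planar two-link finger with a single frictionless point contact normal to the distal link. Joint torques then satisfy tau1 = f (l1 cos q2 + d) and tau2 = f d, which can be inverted for the contact force f and contact location d. This is a toy instance under stated assumptions, not the paper's general enveloping-grasp method:

```python
# Sketch: recover contact force f and contact location d on the distal
# link of a planar two-link finger from measured joint torques, assuming
# a single frictionless contact whose force is normal to link 2.
import math

def identify_contact(tau1: float, tau2: float, l1: float, q2: float):
    """tau1, tau2: joint torques; l1: proximal link length; q2: joint 2 angle.
    Returns (f, d): contact force and its distance from joint 2."""
    # tau1 = f * (l1 * cos(q2) + d)  and  tau2 = f * d, so:
    f = (tau1 - tau2) / (l1 * math.cos(q2))
    d = tau2 / f
    return f, d

# Torques generated by f = 2 N at d = 0.03 m with l1 = 0.1 m, q2 = 0:
f, d = identify_contact(tau1=0.26, tau2=0.06, l1=0.1, q2=0.0)
print(round(f, 6), round(d, 6))  # → 2.0 0.03
```

The general case treated in the paper involves more unknowns than a single static equilibrium provides equations for, which is why active sensing (probing with additional finger motions) is needed to make the parameters identifiable.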