    3-D object recognition and orientation from both noisy and occluded 2-D data
    Citations: 17 · References: 10
    Related Papers
    When manipulating a grasped object, especially with a robot hand, it is helpful to have an estimate of the object's orientation within the grasp. The object's orientation can be extracted from knowledge of the surface normals at the various points of contact with the object, but these surface normals must first be transformed into a common coordinate system. Successful execution of these transformations requires a prohibitive amount of accuracy in calibration of the arm and hand. An online method to improve these transformations in spite of calibration errors is presented. The method requires collecting contact force readings as the object is manipulated, and computing transform corrections that minimize the variation in the sum of the contact forces. Both experimental and simulated results are presented, and the implications of the results are discussed.
    Object-orientation
    Citations (0)
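    The transform-correction idea in the abstract above lends itself to a small numerical sketch: if the hand-to-fingertip transforms were exact, the contact forces on a firmly grasped object, expressed in a common frame, should sum to roughly the same net force across readings, so per-finger rotation corrections can be chosen to minimize the variation of that sum. The sketch below illustrates that reading only; the axis-angle parameterization, the numpy/scipy optimization, and the synthetic data are assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): model each hand-to-fingertip
# transform as "nominal rotation x small unknown correction" and pick the
# corrections that minimize the variation of the summed contact forces across
# force readings taken while the object is manipulated.
import numpy as np
from scipy.optimize import minimize

def rot_from_vec(w):
    """Rotation matrix from an axis-angle vector w (Rodrigues formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def force_sum_variance(x, forces, R_nominal):
    """Objective: variance (summed over x,y,z) of the net contact force across readings."""
    n_fingers = len(R_nominal)
    dR = [rot_from_vec(x[3 * f:3 * f + 3]) for f in range(n_fingers)]
    nets = np.array([
        sum(R_nominal[f] @ dR[f] @ forces[r, f] for f in range(n_fingers))
        for r in range(forces.shape[0])
    ])
    return nets.var(axis=0).sum()

# Toy data: 3 fingers, 50 readings; the true frame corrections are hidden from the solver.
rng = np.random.default_rng(0)
n_fingers, n_readings = 3, 50
R_corr_true = [rot_from_vec(0.1 * rng.standard_normal(3)) for _ in range(n_fingers)]
R_nominal = [np.eye(3)] * n_fingers            # calibration believes the frames are aligned
forces = np.empty((n_readings, n_fingers, 3))
for r in range(n_readings):
    f = rng.standard_normal((n_fingers, 3))
    f[-1] = -f[:-1].sum(axis=0)                # forces balance in the true common frame
    # store each force in its mis-calibrated finger frame, plus a little sensor noise
    forces[r] = np.array([R_corr_true[i].T @ f[i] for i in range(n_fingers)]) \
                + 0.01 * rng.standard_normal((n_fingers, 3))

res = minimize(force_sum_variance, np.zeros(3 * n_fingers),
               args=(forces, R_nominal), method="BFGS")
print("residual force-sum variance:", force_sum_variance(res.x, forces, R_nominal))
```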
    The recognition task is transformed into simpler subtasks. Two assumptions are vital in this approach: (a) the object representation is pictorial, and (b) the parts of the object do not bear any information about the shape of the object. The aim is to find a framework which will make the problem of recognition easier. The recognition consists of two subtasks: classification of the object into its proper class and identification of the particular member of the class. The classification is performed on the basis of the object's iconic representation; the identification is based on the pattern representation. This fact is used to propose a multiresolution architecture which features classification of the whole object at only one resolution. It provides a framework in which the contemporary neural networks being applied to simple problems may be applied to real-world problems of visual object recognition.
    Representation
    Identification
    Basis (linear algebra)
    Citations (1)
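    A hypothetical two-stage pipeline in the spirit of the abstract above: the whole object is classified at a single coarse resolution, and the specific member of the class is then identified at full resolution. The nearest-template matchers, image sizes, and random stand-in data below are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical "classify at coarse resolution, then identify within the class" sketch.
import numpy as np

def downsample(img, factor=4):
    """Coarse, whole-object (iconic) view: block-average the image."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def nearest(templates, x):
    """Return the key whose template is closest to x (a simple matcher stand-in)."""
    return min(templates, key=lambda k: np.linalg.norm(templates[k] - x))

def recognize(img, class_templates, member_templates):
    cls = nearest(class_templates, downsample(img))      # stage 1: classification
    member = nearest(member_templates[cls], img)         # stage 2: identification
    return cls, member

# Toy usage with random "images" standing in for real data.
rng = np.random.default_rng(1)
class_templates = {c: downsample(rng.random((64, 64))) for c in ("cup", "tool")}
member_templates = {c: {f"{c}_{i}": rng.random((64, 64)) for i in range(3)}
                    for c in class_templates}
print(recognize(rng.random((64, 64)), class_templates, member_templates))
```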
    The paper formulates the mathematical foundations of object discrimination and object re-identification in range image sequences using Bayesian decision theory. Object discrimination determines the unique model corresponding to each scene object, while object re-identification finds the unique object in the scene corresponding to a given model. In the first case object identities are independent; in the second case at most one object exists having a given identity. Efficient analytical and numerical techniques for updating and maximizing the posterior distributions are introduced. Experimental results indicate to what extent a single range image of an object can be used for re-identifying this object in arbitrary scenes. Applications including the protection of commercial vessels against piracy are discussed.
    Identification
    Object model
    3D single-object recognition
    Deep-sky object
    Citations (5)
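    The two Bayesian formulations described above can be illustrated with a toy posterior computation: in object discrimination each scene object gets its own independent posterior over models, while in re-identification at most one scene object carries the queried identity, so the posterior is normalized across scene objects plus an "absent" hypothesis. The Gaussian feature likelihood and uniform priors below are assumed stand-ins for the paper's range-image likelihood, not its actual model.

```python
# Toy illustration of object discrimination vs. object re-identification.
import numpy as np

def gaussian_likelihood(feature, model_mean, sigma=1.0):
    d = np.asarray(feature) - np.asarray(model_mean)
    return np.exp(-0.5 * np.dot(d, d) / sigma ** 2)

def discriminate(scene_features, model_means):
    """Discrimination: an independent posterior over models for each scene object."""
    results = []
    for f in scene_features:
        post = np.array([gaussian_likelihood(f, m) for m in model_means])
        post /= post.sum()
        results.append(int(post.argmax()))        # MAP model index for this object
    return results

def reidentify(scene_features, query_model_mean, p_absent=1e-3):
    """Re-identification: at most one scene object has the queried identity,
    so normalize across objects plus an 'absent' hypothesis."""
    like = np.array([gaussian_likelihood(f, query_model_mean) for f in scene_features])
    post = np.append(like, p_absent)
    post /= post.sum()
    return None if post.argmax() == len(scene_features) else int(post.argmax())

models = [[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]]     # toy model feature means
scene = [[0.2, 4.8], [4.9, 5.1]]                  # toy scene object features
print(discriminate(scene, models))                # MAP model per scene object
print(reidentify(scene, models[1]))               # which scene object (if any) is model 1
```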
    We present an approach to function-based object recognition that reasons about the functionality of an object's intuitive parts. We extend the popular "recognition by parts" shape recognition framework to support "recognition by functional parts", by combining a set of functional primitives and their relations with a set of abstract volumetric shape primitives and their relations. Previous approaches have relied on more global object features, often ignoring the problem of object segmentation, and thereby restricting themselves to range images of unoccluded scenes. We show how these shape primitives and relations can be easily recovered from superquadric ellipsoids which, in turn, can be recovered from either range or intensity images of occluded scenes. Furthermore, the proposed framework supports both unexpected (bottom-up) object recognition and expected (top-down) object recognition. We demonstrate the approach on a simple domain by recognizing a restricted class of hand-tools from 2-D images.
    3D single-object recognition
    Citations (14)
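    A hypothetical sketch of the "functional parts" matching idea from the abstract above: recovered volumetric primitives (e.g. fitted superquadrics) are mapped onto coarse functional primitives, and their attachment relations are checked against a functional model of a hand tool. The part descriptors, thresholds, and the "hammer" model below are illustrative assumptions, not the paper's representation.

```python
# Hypothetical mapping from recovered shape primitives to functional primitives,
# followed by a check against a simple functional model of a hammer.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    elongation: float   # length / mean cross-section of the fitted primitive
    volume: float

def functional_label(part):
    """Map a shape primitive to a coarse functional primitive (assumed threshold)."""
    return "handle" if part.elongation > 3.0 else "striking-head"

def matches_hammer(parts, attachments):
    """A 'hammer' here is a handle attached end-to-end to a striking head."""
    labels = {p.name: functional_label(p) for p in parts}
    return any({labels[a], labels[b]} == {"handle", "striking-head"}
               for a, b in attachments)

parts = [Part("p1", elongation=6.0, volume=50.0),   # long thin part  -> handle
         Part("p2", elongation=1.2, volume=80.0)]   # compact part    -> striking head
attachments = [("p1", "p2")]                        # recovered end-to-end attachment
print(matches_hammer(parts, attachments))           # True
```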
    This paper presents a method for estimating an object's two-dimensional (2D) position and orientation based on topological information collected using infrared tags, without any special location or direction sensors. Estimating users' and articles' locations, irrespective of circumstances, is an important issue for context-aware systems. Users are present in a location with some purpose or intention; therefore, a user's position and orientation clearly reflect their context. In particular, orientation information can reflect a more detailed context than location alone: people standing face-to-face or back-to-back would have vastly different contexts. The analyses explained in this paper particularly examine an object's orientation and describe a new method for estimating an object's position and orientation in an indoor, real-world environment. Using a simulation and an implemented prototype system, the experimental results demonstrate the feasibility of our topological estimation method.
    Position (finance)
    Object-orientation
    Citations (7)
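    One way to picture a purely detection-based pose estimate in the spirit of the abstract above: an object carries a directional infrared reader, the room contains tags at known 2D positions, and the only measurement is which tags were detected; candidate positions and headings are then scored by how well their predicted detection set matches the observed one. The grid resolution, detection range, and field-of-view values below are illustrative assumptions, not the paper's estimation method.

```python
# Hedged sketch: score candidate 2D poses against an observed set of detected IR tags.
import math
from itertools import product

TAGS = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (4.0, 3.0), "D": (0.0, 3.0)}
RANGE, HALF_FOV = 3.5, math.radians(60)

def predicted_detections(x, y, heading):
    """Which tags a reader at (x, y) facing 'heading' would detect (assumed range/FOV)."""
    seen = set()
    for tag, (tx, ty) in TAGS.items():
        dist = math.hypot(tx - x, ty - y)
        bearing = math.atan2(ty - y, tx - x) - heading
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
        if dist <= RANGE and abs(bearing) <= HALF_FOV:
            seen.add(tag)
    return seen

def estimate_pose(observed):
    best, best_score = None, -math.inf
    for gx, gy, h in product(range(9), range(7), range(8)):
        x, y, heading = gx * 0.5, gy * 0.5, h * math.pi / 4
        pred = predicted_detections(x, y, heading)
        score = len(pred & observed) - len(pred ^ observed)   # reward agreement, punish mismatch
        if score > best_score:
            best, best_score = (x, y, heading), score
    return best

print(estimate_pose({"B", "C"}))   # a position and heading consistent with seeing only B and C
```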
    Some objects are mono-oriented, possessing a canonical or "preferred upright" orientation (Palmer et al., 1981; Jolicoeur, 1985). The implications of canonical orientations for theories of object recognition have been widely discussed (Rock, 1974; Tarr & Pinker, 1989; Ghose & Liu, 2013), but it remains unclear how "canonical" orientations fit into a theory of object orientation representation. How does the orientation representation for an object differ in canonical versus non-canonical orientations? Eight "horizontal" and eight "vertical" objects (Fig. 1) were each presented in 16 different orientations (canonical and non-canonical) (Fig. 2). On each trial, participants (under working memory load) viewed an object in a "target" orientation and subsequently attempted to select the target from an array of that object in 16 different orientations. In previous research with poly-oriented (no preferred orientation) objects (Gregory and McCloskey, 2010), participants often selected the object primary-axis (OPA) reflection of the target: an "OPA error" (Fig. 3a). We predicted that if the "uprightness" of a mono-oriented object is specifically encoded, OPA errors should be affected by canonical orientation: for canonically-oriented vertical targets, an OPA error maintains "uprightness" (Fig. 3b); for a canonically-oriented horizontal target, an OPA error would result in an "upside-down" orientation (Fig. 3c). Indeed, OPA errors were modulated by an Object type (vertical vs. horizontal) × Orientation (canonical vs. not) interaction (F(1,13) = 16.70, p < .05): for vertical objects, OPA errors were equally common for canonical and non-canonical targets (t(13) = .16, n.s.); for horizontal objects, they were significantly reduced for canonical targets (t(13) = 4.01, p < .05) (Fig. 4). Participants apparently represent the "uprightness" of a stimulus, thereby avoiding otherwise common errors that contradict this representation. These results suggest that the canonical orientation of an object affects not just the cognitive processes underlying object recognition but also those involved in representing spatial information. Meeting abstract presented at VSS 2015.
    Object-orientation
    Representation
    Canonical form
    Non canonical
    Horizontal and vertical
    Vertical orientation
    Citations (1)
    This paper proposes a method of object recognition using appearance models accumulated in an RFID (radio frequency identification) tag attached to an object in the environment. Robots recognize the object using the appearance models accumulated in the tag on the object. If a robot fails in recognition, it acquires a model of the object and accumulates it in the tag. Since robots in the environment observe the object from different points of view at different times, various appearance models are accumulated as time passes. In order to accumulate many models, eigenspace analysis is applied, and the eigenspace is reconstructed every time a robot acquires a new model. Experimental results of object recognition show the effectiveness of the proposed method.
    3D single-object recognition
    Identification
    Active appearance model
    Object model
    Citations (12)
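    The accumulate-and-rebuild-eigenspace idea in the abstract above can be sketched minimally: appearance vectors stored with the tag are stacked, a PCA eigenspace is recomputed whenever a new view is added, and recognition compares a query view against the stored views in that subspace. The SVD-based PCA, the 32x32 image size, and the distance threshold are assumptions for illustration, not the paper's system.

```python
# Minimal sketch: accumulate appearance models and rebuild the eigenspace (PCA)
# each time a new view is added; recognize by distance in the eigenspace.
import numpy as np

class TagAppearanceStore:
    def __init__(self, n_components=8):
        self.views = []            # appearance vectors "accumulated in the tag"
        self.n_components = n_components
        self.mean = None
        self.basis = None          # eigenspace basis (rows = principal components)

    def add_view(self, image):
        """Accumulate a new appearance model and rebuild the eigenspace."""
        self.views.append(np.asarray(image, dtype=float).ravel())
        X = np.stack(self.views)
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)  # PCA via SVD
        self.basis = vt[:min(self.n_components, len(self.views))]

    def project(self, image):
        return self.basis @ (np.asarray(image, dtype=float).ravel() - self.mean)

    def recognize(self, image, threshold=5.0):
        """Accept if the query is close to some accumulated view in the eigenspace."""
        q = self.project(image)
        return min(np.linalg.norm(q - self.project(v)) for v in self.views) < threshold

# Toy usage: several views added over time, then a slightly perturbed known view queried.
rng = np.random.default_rng(2)
store = TagAppearanceStore()
for _ in range(5):                 # different robots / viewpoints over time
    store.add_view(rng.random((32, 32)))
print(store.recognize(store.views[0].reshape(32, 32) + 0.01 * rng.random((32, 32))))
```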
    A method of orientation for a mobile underwater object based on differential GPS and the super-short-baseline acoustic orientation technique is presented. A mathematical model for the orientation calculation is developed. Project practice has shown that the precision of the application system developed with this method is better than 2 m.
    Object-orientation
    Baseline (sea)
    Citations (0)
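    The geometric core of such a fix can be sketched as follows: the super-short-baseline array measures a slant range (from acoustic travel time) and two direction angles to the transponder on the underwater object, which give a position in the array frame; the vessel heading and the differential-GPS position of the array then place it in local coordinates. The angle convention, nominal sound speed, and flat local-coordinate frame below are simplifying assumptions for illustration, not the paper's mathematical model.

```python
# Sketch of a DGPS + super-short-baseline (SSBL) position fix for an underwater target.
import math

SOUND_SPEED = 1500.0   # m/s, nominal value for seawater (assumption)

def ssbl_fix(travel_time, angle_x, angle_y, vessel_xy, vessel_heading, vessel_depth=0.0):
    """Return (east, north, depth) of the transponder in local coordinates.

    travel_time    : one-way acoustic travel time in seconds
    angle_x/_y     : angles (rad) between the acoustic ray and the array's x/y axes
    vessel_xy      : DGPS-derived local (east, north) position of the array, metres
    vessel_heading : vessel heading (rad), rotation of the array frame about vertical
    """
    r = SOUND_SPEED * travel_time                        # slant range
    cx, cy = math.cos(angle_x), math.cos(angle_y)        # direction cosines
    cz = math.sqrt(max(0.0, 1.0 - cx * cx - cy * cy))    # downward component
    ax, ay, az = r * cx, r * cy, r * cz                  # array-frame coordinates
    # rotate by heading and add the DGPS position of the array
    east = vessel_xy[0] + ax * math.cos(vessel_heading) - ay * math.sin(vessel_heading)
    north = vessel_xy[1] + ax * math.sin(vessel_heading) + ay * math.cos(vessel_heading)
    return east, north, vessel_depth + az

# Example: 0.1 s travel time, target roughly ahead of and below the vessel.
print(ssbl_fix(0.1, math.radians(80), math.radians(85), (1000.0, 2000.0), math.radians(30)))
```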