Recently, collaborative work environments using Augmented Reality have been expected to enable effective collaborative work. However, these environments have some limitations. One limitation is the restricted availability of devices: devices that cannot sense real-space information cannot be used in these environments. Another limitation is low usability: because these environments are designed specifically for computer professionals, they are difficult for novice computer users to operate. To overcome these limitations, we propose a construction scheme for a 3D symbiotic environment, a collaborative work environment based on the concept of symbiosis between the real space and digital spaces. The 3D symbiotic environment enables users to carry out collaborative work intuitively, as if virtual spaces were integrated with the real space. First, we present the fundamental technologies for the 3D symbiotic environment. Next, we show the design and implementation of a prototype system that includes agents for supporting collaborative work. Finally, we confirm the feasibility of the 3D symbiotic environment through experimental results obtained with the prototype system.
In this paper, we describe a LIDAR-based target classification algorithm that fuses range and reflection intensity measurements, adapted to the rules of the Tsukuba Challenge 2013. According to these rules, an autonomous mobile robot has to find a specific standing signboard while navigating a prescribed course area. To achieve this goal, light detection and ranging (LIDAR) is used to measure both the range and reflection intensity profiles around the mobile robot. Combining the range and reflection intensity profiles enables stable and robust identification of the specific standing signboard. To find the standing signboard, we focus on its shape and its reflection intensity pattern: the range profile is used to detect its line-like shape, and the reflection intensity is used to detect the reflection pattern on its surface. The proposed target identification algorithm is implemented to find an actual standing signboard and is carried out in the actual Tsukuba Challenge 2013 environment. The validity of the proposed algorithm is confirmed through an experiment with an actual mobile robot.
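As an illustration of how such a range/intensity fusion could look in code, the following Python sketch scores LIDAR scan segments by combining a line-fit residual on the range data with a simple reflectance check. The segmentation rule, thresholds, and function names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def segment_indices(ranges, gap_threshold=0.3):
        # Split a 2D LIDAR scan into segments at range discontinuities (illustrative).
        breaks = np.where(np.abs(np.diff(ranges)) > gap_threshold)[0] + 1
        return np.split(np.arange(len(ranges)), breaks)

    def line_fit_residual(points):
        # RMS distance of segment points to their best-fit line (via PCA/SVD).
        centered = points - points.mean(axis=0)
        return np.linalg.svd(centered, compute_uv=False)[-1] / np.sqrt(len(points))

    def find_signboard_segments(ranges, angles, intensities,
                                max_residual=0.02, min_intensity=0.7, min_points=10):
        # Heuristic fusion: keep segments that are line-like in the range profile
        # AND carry a high-reflectance pattern in the intensity channel.
        ranges = np.asarray(ranges)
        angles = np.asarray(angles)
        intensities = np.asarray(intensities)
        points = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))
        hits = []
        for idx in segment_indices(ranges):
            if len(idx) < min_points:
                continue
            flat = line_fit_residual(points[idx]) < max_residual
            reflective = np.mean(intensities[idx] > min_intensity) > 0.5
            if flat and reflective:
                hits.append(idx)
        return hits

In practice, the intensity threshold would depend on the sensor and on whether the signboard carries retro-reflective material; the sketch only shows how the two cues can gate each other.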
To reduce the load imposed on network administrators, we have proposed AIR-NMS, a network management support system (NMS) based on the Active Information Resource (AIR) concept. In AIR-NMS, various information resources (e.g., the state information of a network and practical knowledge of network management) are combined with software agents that have the knowledge and functions needed to support the utilization of those resources, so that individual resources are activated as AIRs. Through the organization and cooperation of AIRs, AIR-NMS provides administrators with practical measures against a wide range of network faults. To make AIR-NMS fit for practical use, this paper proposes a method for the effective installation and utilization of the network management knowledge required in AIR-NMS.
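As a rough illustration of the AIR idea only, the Python sketch below wraps an information resource with an agent that answers queries from its own resource and otherwise forwards them to cooperating AIRs. The class, method names, and example data are hypothetical; the paper does not specify an implementation.

    from dataclasses import dataclass, field

    @dataclass
    class AIR:
        # An Active Information Resource: an information resource plus a
        # supporting agent that mediates access to it (illustrative).
        name: str
        resource: dict                        # e.g., network state or management knowledge
        peers: list = field(default_factory=list)

        def query(self, key):
            # The agent first tries to answer from its own resource...
            if key in self.resource:
                return self.name, self.resource[key]
            # ...and otherwise cooperates with peer AIRs to find an answer.
            for peer in self.peers:
                answer = peer.query(key)
                if answer is not None:
                    return answer
            return None

    # Hypothetical usage: a knowledge AIR cooperating with a state-information AIR.
    state_air = AIR("state", {"host-01": "link down"})
    knowledge_air = AIR("knowledge", {"link down": "check switch port and cabling"},
                        peers=[state_air])
    print(knowledge_air.query("host-01"))     # -> ('state', 'link down')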
In domains in which single-agent learning is a more natural metaphor for an artifact-embedded agent, Exemplar-Based Learning (EBL) requires significantly large sets of training examples to be applicable. Such large training sets obviously exceed the resource capabilities of artifacts. To make EBL feasible for these artifacts, training sets must be reduced in size without compromising learning performance, so as to relieve the artifacts' resources (e.g., memory). In this paper, we investigate the training set requirements for artifact learning and propose a ranking-based Stratified Ordered Selection (SOS) method to scale them down. Unlike reduction approaches in mainstream machine learning, this method is designed with the resource-constrained nature of artifacts in mind. Artifacts use an intermediary that implements SOS to retrieve training subsets dynamically and on demand, according to their resource capacities (e.g., memory, CPU). SOS uses a new Level Order (LO) ranking scheme designed to broaden the representation of classes of examples, to speed up data retrieval, and to allow retrieval of subsets of varying sizes while ensuring the same or nearly the same learning performance. We present how SOS performs on various well-known machine learning datasets and how it compares with some of the best-performing data reduction approaches.
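To make the selection idea concrete, here is a minimal Python sketch of stratified, ordered subset retrieval in the spirit of SOS: examples are ranked within each class and the classes are then interleaved level by level, so that any prefix of the resulting order is a class-balanced subset of the requested size. The within-class ranking criterion used here (distance to the class centroid) is only an assumed stand-in for the paper's Level Order scheme.

    import numpy as np

    def stratified_ordered_selection(X, y):
        # Rank examples within each class, then interleave classes level by level
        # so that the first k items of the returned order form a balanced subset.
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        per_class_order = []
        for label in np.unique(y):
            idx = np.where(y == label)[0]
            centroid = X[idx].mean(axis=0)
            # Assumed ranking criterion: most central examples of each class first.
            dist = np.linalg.norm(X[idx] - centroid, axis=1)
            per_class_order.append(idx[np.argsort(dist)])
        order = []
        # Breadth-first ("level order") interleaving across classes.
        for level in range(max(len(o) for o in per_class_order)):
            for o in per_class_order:
                if level < len(o):
                    order.append(o[level])
        return np.array(order)

    def retrieve_subset(X, y, budget):
        # On-demand retrieval: return the first `budget` examples in SOS-like order,
        # matching the artifact's resource capacity.
        order = stratified_ordered_selection(X, y)[:budget]
        return X[order], y[order]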
Realizing application systems that match people's expectations and providing adequate network services for various users under various network and platform environments are difficult challenges. To overcome these problems, we have studied application
[Figure: New lane detection algorithm]
The perception of color by the human eye differs from that of cameras, owing to the optical illusion and color constancy characteristics of human vision. Despite these characteristics, people can drive cars without the danger of overturning. In this paper, we describe a new white lane detection algorithm for autonomous mobile robots, based on a method similar to the color perception of human beings. To allow safe driving, we emulate human color perception to reduce the effects of lighting and shadow on the course. The validity of the proposed image compensation method is confirmed by actual white line detection.
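As a rough sketch of the kind of compensate-then-threshold pipeline such an approach implies, the Python example below applies a gray-world white balance (a common, simple model of color constancy, used here only as an assumed stand-in for the paper's compensation method) and then marks bright, low-saturation pixels as white-lane candidates. Threshold values and function names are illustrative.

    import numpy as np

    def gray_world_balance(img):
        # img: H x W x 3 float array in [0, 1].
        # Gray-world assumption: scale each channel so its mean matches the global mean.
        channel_means = img.reshape(-1, 3).mean(axis=0)
        gain = channel_means.mean() / np.maximum(channel_means, 1e-6)
        return np.clip(img * gain, 0.0, 1.0)

    def white_lane_mask(img, brightness_min=0.7, saturation_max=0.15):
        # After compensation, white lane pixels are bright and nearly achromatic.
        balanced = gray_world_balance(img)
        brightness = balanced.max(axis=2)
        saturation = balanced.max(axis=2) - balanced.min(axis=2)
        return (brightness > brightness_min) & (saturation < saturation_max)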