Autonomous and semi-autonomous mobile robots have been deployed to cooperate with humans in many industrial applications. These tasks require the human and the robot to communicate and present information quickly and effectively. Recent human-robot interfaces usually use a camera and a projector attached to the mobile robot to project information onto the floor or a wall during the interaction process. However, these interfaces have several limitations. First, projecting information works well for indoor applications, but it is very difficult or even impossible for users to view this information outdoors, making current frameworks inappropriate for many outdoor industrial tasks. Secondly, because the projector is the only device for exchanging information between human and robot, the interaction process is insecure: anyone working in the same environment can control the robot in the same manner as the main operator. Finally, current interfaces normally use a mouse, keyboard, or teach pendant to provide task information to the robot. This approach poses difficulties when the main operator works in an industrial context where protective equipment such as gloves or helmets is required, making it hard to control a mouse or type on a keyboard. This work proposes a new interface framework for human-robot interaction in industry that overcomes the limitations of previous works. The framework uses a laser writer instead of a projector, which is suitable for both indoor and outdoor applications. Furthermore, combining see-through head-mounted-display augmented reality with spatial augmented reality provides the system with a novel way to enhance the security of exchanging information, since the system can now separate the information presented to the main user from that presented to people working in the same environment.
Finally, a novel hand-held device is incorporated into the framework, providing various input modalities for users to interact with the mobile robot. The device allows the elimination of the mouse, keyboard, and teach pendant in industrial contexts.
Lasers are powerful light sources. With their thin shafts of bright light and colour, laser beams can provide a dazzling display matching that of outdoor fireworks. With computer assistance, animated laser graphics can generate eye-catching images against a dark sky. Due to technology constraints, laser images are outlines without any interior fill or detail. On a more functional note, lasers assist in the alignment of components during installation.
Ant Colony Optimization (ACO) is a meta-heuristic approach inspired by the behavior of real ant colonies finding the shortest path from their nest to a food source. ACO has been used for approximately solving NP-hard problems, and its effectiveness has been demonstrated experimentally. Two well-known ACO algorithms are the Ant Colony System (ACS) and the Max-Min Ant System (MMAS) proposed by M. Dorigo and T. Stützle. In this paper, we introduce the Multi-level Ant System (MLAS), an improved version of the Max-Min Ant System based on a novel pheromone updating scheme. We applied the new algorithm to well-known combinatorial optimization problems such as the Traveling Salesman Problem and compared its results with those of the MMAS algorithm. Experimental results on standard test data show that the MLAS algorithm is more effective than MMAS in terms of both the average and the best solution.
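The MLAS pheromone scheme itself is not detailed in this abstract; as background, a minimal sketch of the standard MMAS pheromone update it improves upon might look as follows (the parameter names rho, tau_min, and tau_max are the conventional ones from the MMAS literature, and the values shown are illustrative assumptions, not the paper's settings):

```python
def mmas_pheromone_update(pheromone, best_tour, best_length,
                          rho=0.02, tau_min=0.01, tau_max=1.0):
    """One MMAS-style pheromone update for a symmetric TSP:
    evaporate all trails, reinforce only the edges of the best tour,
    then clamp every trail into [tau_min, tau_max]."""
    n = len(pheromone)
    # Evaporation on every edge.
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= (1.0 - rho)
    # Deposit proportional to tour quality, on best-tour edges only.
    deposit = 1.0 / best_length
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        pheromone[a][b] += deposit
        pheromone[b][a] += deposit  # symmetric instance
    # Max-min clamping keeps trails bounded, avoiding stagnation.
    for i in range(n):
        for j in range(n):
            pheromone[i][j] = min(tau_max, max(tau_min, pheromone[i][j]))
    return pheromone
```

The clamping step is the defining feature of MMAS: by bounding trail strengths it prevents any single path from dominating the search too early.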
Automated docking for AUVs is an important application for prolonged AUV usage. However, current AUV perception systems have several limitations that are accentuated in more turbid waters. A potential cause is the limited spatial information about objects extractable from the 2D images used by both the acoustic and optical modalities commonly found on such perception systems today. Inspired by recent progress on underwater Point Cloud Data (PCD), this paper proposes an acoustic PCD-based system that synthesizes PCD from minimal acoustic image inputs and uses the additional spatial data from PCDs to potentially enhance AUV perception for complex applications such as automated docking. The proposed system consists of two main components: an acoustic-based PCD reconstruction module and a PCD-based classifier/pose-estimator (CPE) Convolutional Neural Network (CNN) module. Simulation-based and in-field experiments have been conducted to validate the feasibility of the system's modules. The results discussed in this paper indicate that the proposed system is potentially feasible for complex applications such as automated garage docking, with further work required to improve its design and viability.
The paper presents a multi-mode control system implemented on a Remotely Operated Vehicle (ROV). The multi-mode system enables pilots to train on a virtual simulator or to deploy the vehicle on a physical mission, providing a cost-effective and risk-reducing alternative to using two different or dissimilar systems. In particular, the paper highlights the design methodology and architecture of the system, which is built on the Robot Operating System (ROS). The structure of ROS enables data to be transferred easily between modules through nodes and topics, which allows flexibility in adding or removing features. Furthermore, ROS is compatible with several platforms, which extends applicability and permits code to be reused across platforms. Implementing on ROS saves resources and programming effort compared to developing a dedicated software package. Design and coding times were found to be shorter than for a solution based solely on custom algorithms with native code.
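The node/topic decoupling described above can be illustrated with a minimal publish/subscribe sketch. This is a conceptual stand-in in plain Python, not the actual ROS API (the `TopicBus` class and the "/cmd_vel" topic name are illustrative assumptions); it shows why the same subscriber can serve either the simulator or the physical vehicle without code changes:

```python
from collections import defaultdict

class TopicBus:
    """Minimal publish/subscribe bus illustrating ROS-style decoupling:
    nodes exchange messages only through named topics, so publishers
    and subscribers can be added or removed independently."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a node's callback on a named topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic.
        for callback in self._subscribers[topic]:
            callback(message)

# A thruster-control node only sees the "/cmd_vel" topic; the message
# could originate from a simulator node or from the physical pilot
# console without the subscriber changing.
bus = TopicBus()
received = []
bus.subscribe("/cmd_vel", lambda msg: received.append(msg))
bus.publish("/cmd_vel", {"surge": 0.5, "yaw": -0.1})
```

In actual ROS the same pattern appears as `rospy.Publisher` and `rospy.Subscriber` objects bound to topic names, with the ROS master handling discovery.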