With recent advances in remote sensing technologies for underwater ship inspections, the need for automated data annotation and analysis becomes apparent. During underwater ship inspections, various data such as video, positioning information, and other telemetry are collected and combined with the results of computer vision models. The variability of these data modalities makes automatic analysis across multiple data sources challenging. We propose the use of a Knowledge Graph, with a taxonomy built on industry standards from the ship inspection domain. This enables automated data analysis for underwater ship inspection videos, which is a requirement for various downstream use cases. In this work, we demonstrate the applicability of our approach on 12 ship inspections in two downstream tasks. First, we support the generation of detailed ship status reports, and second, we demonstrate big data analytics across several inspections. We use the fused data to compare different ships by identifying patterns in the findings, aided by computer vision algorithms.
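As a minimal sketch of how such a graph could be populated and queried, the following Python example uses rdflib. The namespace, class names, and example findings are illustrative assumptions, not the actual taxonomy or data of the paper.

```python
# Minimal sketch: fusing inspection findings into an RDF knowledge graph
# with rdflib. Namespace, schema, and findings are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

INSP = Namespace("http://example.org/ship-inspection#")

g = Graph()
g.bind("insp", INSP)

# Two hypothetical inspections with computer-vision findings.
findings = [
    ("ship_A", "inspection_01", "Corrosion", 0.92),
    ("ship_A", "inspection_01", "PaintPeel", 0.81),
    ("ship_B", "inspection_02", "Corrosion", 0.88),
]
for ship, insp, label, conf in findings:
    f = INSP[f"{insp}_{label}"]
    g.add((f, RDF.type, INSP.Finding))
    g.add((f, INSP.ofShip, INSP[ship]))
    g.add((f, INSP.category, Literal(label)))
    g.add((f, INSP.confidence, Literal(conf, datatype=XSD.float)))

# Cross-ship analytics: which ships share a finding category?
q = """
SELECT ?ship ?category WHERE {
    ?f a insp:Finding ; insp:ofShip ?ship ; insp:category ?category .
} ORDER BY ?category
"""
for row in g.query(q, initNs={"insp": INSP}):
    print(row.ship, row.category)
```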
Taking advantage of the complementary properties of sonars and cameras can improve underwater visual odometry and point cloud generation. However, the task remains difficult because the two sensors follow different image formation principles, which makes direct matching of acoustic and optical features challenging. Solving this problem can improve applications such as underwater navigation and mapping. A camera-sonar combination is proposed for real-time scale estimation, combining underwater monocular image features with a multibeam forward-looking sonar. Features detected by a monocular SLAM framework are matched with acoustic features based on their relative distances in the instrument reference frame, calculated from the two data streams, and used to estimate a depth ratio. The ratio is optimised over a large sample set to ensure scale stability. The sensor combination enables real-time scale estimation of the trajectory and the mapped environment, which is a requirement for autonomous systems. The proposed approach is experimentally demonstrated in two underwater environments and scenarios: a subsea module mapping and a ship hull inspection. The results demonstrate the efficiency and applicability of the proposed solution. In addition to correctly restoring the scale, it significantly improves the localization and outperforms the tested dead reckoning and visual inertial SLAM methods.
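The core of the depth-ratio idea can be sketched as follows: metric sonar ranges divided by up-to-scale SLAM depths for matched features, robustly aggregated over many samples. Matching and frame alignment are assumed done upstream; the synthetic data and the median aggregation (a stand-in for the paper's optimisation step) are illustrative assumptions.

```python
# Minimal sketch of scale estimation from matched camera-sonar features.
import numpy as np

def estimate_scale(slam_depths: np.ndarray, sonar_ranges: np.ndarray) -> float:
    """Estimate the metric scale factor from matched feature pairs.

    slam_depths  -- feature depths from monocular SLAM (arbitrary scale)
    sonar_ranges -- metric ranges of the same features from the sonar
    """
    valid = slam_depths > 1e-6
    ratios = sonar_ranges[valid] / slam_depths[valid]
    # A robust statistic over a large sample set keeps the estimate stable
    # against mismatches; the median here stands in for the optimisation.
    return float(np.median(ratios))

rng = np.random.default_rng(0)
true_scale = 2.5
depths = rng.uniform(1.0, 10.0, 500)                     # up-to-scale depths
ranges = true_scale * depths + rng.normal(0, 0.05, 500)  # metric, noisy
print(f"estimated scale: {estimate_scale(depths, ranges):.3f}")
```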
Underwater images are often degraded by backscatter, light attenuation, and light artifacts. One important source of degradation is marine snow: particles of varying shape and size. Computer vision methods can be strongly affected by these particles and may therefore produce incorrect and biased results. In robotic applications, computational power for online processing is limited. In this paper, a method for real-time marine snow detection is proposed, based on multi-step processing of spatial-temporal data. RGB images are converted to the YCbCr color space and then decomposed with a guided filter to isolate the high-frequency information for a first selection of candidates. Convolution with a uniform kernel is then applied for further analysis of the candidates. The method is demonstrated in two use cases: underwater feature detection and image enhancement.
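A minimal OpenCV sketch of this pipeline is shown below: luminance extraction, guided-filter decomposition to isolate high frequencies, then a uniform (box) kernel for candidate analysis. The radii, eps, and thresholds are illustrative assumptions, and cv2.ximgproc requires opencv-contrib-python.

```python
# Minimal sketch of multi-step marine snow candidate detection.
import cv2
import numpy as np

def marine_snow_mask(bgr: np.ndarray) -> np.ndarray:
    # Work on the luma channel of the YCbCr (YCrCb in OpenCV) image.
    y = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0].astype(np.float32)

    # The guided filter keeps the low-frequency base layer; subtracting it
    # isolates high-frequency content where small bright particles live.
    base = cv2.ximgproc.guidedFilter(guide=y, src=y, radius=8, eps=100.0)
    high_freq = y - base

    # First candidate selection: strong positive high-frequency response.
    candidates = (high_freq > 10.0).astype(np.float32)

    # Uniform-kernel convolution measures local candidate density; isolated
    # small blobs (snow-like) are kept, large coherent regions rejected.
    density = cv2.boxFilter(candidates, ddepth=-1, ksize=(15, 15))
    return ((candidates > 0) & (density < 0.2)).astype(np.uint8) * 255

img = cv2.imread("frame.png")          # hypothetical input frame
if img is not None:
    mask = marine_snow_mask(img)
    cv2.imwrite("snow_mask.png", mask)
```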
Hull inspection is an important task to ensure the sustainability of ships. To overcome the challenges of inspecting hull structures in an underwater environment efficiently, an autonomous inspection system has to be developed. In this paper, a new approach to underwater ship hull inspection is proposed. It aims at developing the basis for an end-to-end autonomous solution. The real-time aspect is an important part of this work, as it allows operators and inspectors to receive feedback about the inspection as it happens. A reference mission plan is generated and adapted online based on the inspection findings. This is done by processing multibeam forward-looking sonar data to estimate the pose of the hull relative to the drone. An inspection map is incrementally built in a novel way, incorporating uncertainty estimates to better represent the inspection state, quality, and observation confidence. The proposed methods are experimentally tested in real-time on real ships and demonstrate that operators can quickly understand what has been covered during the inspection.
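One way such an uncertainty-aware inspection map could be maintained is sketched below. The grid resolution, the independent-evidence confidence fusion, and the observation format are illustrative assumptions, not the paper's actual map representation.

```python
# Minimal sketch of an inspection map fusing observation confidence
# into grid cells incrementally.
import numpy as np

class InspectionMap:
    def __init__(self, shape=(200, 400)):
        # Per-cell probability that the cell has been adequately inspected.
        self.confidence = np.zeros(shape, dtype=np.float32)

    def update(self, cells: np.ndarray, obs_confidence: float) -> None:
        """Fuse one observation covering `cells` (boolean mask).

        obs_confidence -- quality of this view, e.g. derived from the
        estimated hull-relative pose uncertainty and viewing distance.
        """
        c = self.confidence[cells]
        # Independent-evidence fusion: confidence only grows, and repeated
        # low-quality views saturate more slowly than one good view.
        self.confidence[cells] = 1.0 - (1.0 - c) * (1.0 - obs_confidence)

    def coverage(self, threshold=0.9) -> float:
        return float((self.confidence >= threshold).mean())

m = InspectionMap()
swath = np.zeros((200, 400), dtype=bool)
swath[50:60, 100:300] = True          # hypothetical sonar swath footprint
m.update(swath, obs_confidence=0.7)
m.update(swath, obs_confidence=0.7)   # second pass raises confidence
print(f"covered: {m.coverage():.1%}")
```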
Localization filters for underwater vehicles are mostly tailored to specific sensor suites, environments, or missions. It is also well known that the underwater environment can evolve over time and throughout a mission, e.g., through tides, currents, and vehicle proximity to structures, affecting the vehicle's sensors, especially in harbor areas. In this paper, the Modular and Robust Sensor-Fusion Framework (MaRS) is extended to work with underwater vehicles and their environment. It enables efficient use of asynchronous sensors and handles measurement outliers and outages. Sensor-frame initialization and online extrinsic calibration methods are also explored. Tests are performed in real harbor-like environments using a small remotely operated vehicle (ROV) and show improved sensor handling and state estimation results.
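Measurement outlier handling in such fusion filters is commonly done with a Mahalanobis (chi-square) gate on the innovation; the sketch below illustrates that generic test. It is not the MaRS API, and the dimensions, covariances, and gate probability are assumptions.

```python
# Minimal sketch of chi-square gating of a filter measurement update.
import numpy as np
from scipy.stats import chi2

def gate_measurement(z, z_pred, H, P, R, p_gate=0.997) -> bool:
    """Return True if measurement z passes the chi-square gate."""
    innovation = z - z_pred
    S = H @ P @ H.T + R                          # innovation covariance
    d2 = innovation @ np.linalg.solve(S, innovation)
    return d2 <= chi2.ppf(p_gate, df=len(z))

# Hypothetical 2D position measurement against a 2-state filter.
P = np.diag([0.04, 0.04])     # state covariance
R = np.diag([0.09, 0.09])     # measurement noise
H = np.eye(2)
z_pred = np.array([1.0, 2.0])
print(gate_measurement(np.array([1.1, 2.1]), z_pred, H, P, R))  # accepted
print(gate_measurement(np.array([4.0, 2.0]), z_pred, H, P, R))  # rejected
```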
To enable computationally demanding workloads on low-cost underwater drones, a cloud-based architecture is proposed that takes advantage of recent developments in machine learning and computer vision. The processing power made available benefits vehicles with limited onboard processing capacity. The rapid development of cloud computing services has made servers with significant computational resources easier to access. In this paper, a communication interface for a cloud-based multilayer architecture is proposed to enable real-time performance by distributing the workload to networked processing devices. It adopts a publish-subscribe model for efficient communication between the layers. The latency and workload distribution are evaluated to assess the efficiency of the proposed method. An application to semantic segmentation of underwater scenes is also tested to measure the framework's capabilities for real-time operation with more resource-demanding tools. The conducted experiments show time and performance gains from offloading the underwater vehicle and forwarding the computations to the cloud-based layer.
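A minimal sketch of the publish-subscribe offloading pattern is given below using ZeroMQ: the vehicle publishes compressed frames on one topic and subscribes to inference results on another, so the heavy processing runs on cloud workers. The endpoints, topics, and JPEG payload format are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of pub-sub offloading from a vehicle to a cloud layer.
import cv2
import zmq

ctx = zmq.Context()

# Outbound: raw frames to the cloud layer.
frames_out = ctx.socket(zmq.PUB)
frames_out.connect("tcp://cloud.example.org:5555")   # hypothetical broker

# Inbound: segmentation results published by the cloud workers.
results_in = ctx.socket(zmq.SUB)
results_in.connect("tcp://cloud.example.org:5556")
results_in.setsockopt_string(zmq.SUBSCRIBE, "segmentation")

def offload_frame(frame) -> None:
    ok, jpg = cv2.imencode(".jpg", frame)            # compress for the link
    if ok:
        frames_out.send_multipart([b"frames", jpg.tobytes()])

def poll_result(timeout_ms=10):
    # Non-blocking poll keeps the vehicle control loop real-time.
    if results_in.poll(timeout_ms):
        topic, payload = results_in.recv_multipart()
        return payload
    return None
```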
In this article, we present the first large-scale data set for underwater ship lifecycle inspection, analysis and condition information (LIACI). It contains 1893 images with pixel annotations for ten object categories: defects, corrosion, paint peel, marine growth, sea chest gratings, overboard valves, propeller, anodes, bilge keel and ship hull. The images have been collected during underwater ship inspections and annotated by human domain experts. We also present a benchmark evaluation of state-of-the-art semantic segmentation approaches based on standard performance metrics. Based on this evaluation, we propose a U-Net with a MobileNetV2 backbone for the segmentation task due to its balanced tradeoff between performance and computational efficiency, which is essential for real-time evaluation. We also demonstrate its benefits for in-water inspections by providing quantitative evaluations of the inspection findings. With a variety of use cases, the proposed segmentation pipeline and the LIACI data set create new promising opportunities for future research in underwater ship inspections.
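The proposed model family can be instantiated, for example, with the segmentation_models_pytorch library as sketched below; this is one way to build a U-Net with a MobileNetV2 encoder, not necessarily the paper's exact configuration. The input size and pretrained weights are assumptions.

```python
# Minimal sketch: U-Net with a MobileNetV2 backbone for 10-class segmentation.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="mobilenet_v2",      # lightweight backbone for real-time use
    encoder_weights="imagenet",       # common initialisation choice
    in_channels=3,
    classes=10,                       # the ten LIACI object categories
)

x = torch.randn(1, 3, 256, 256)       # dummy underwater frame
with torch.no_grad():
    logits = model(x)                 # (1, 10, 256, 256) per-class scores
print(logits.shape)
```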
The need for vehicle autonomy is constantly increasing, requiring an ever greater on-board processing load. However, most low-cost agents are not able to handle this workload in real-time. In this paper, a path planning framework is presented for safe and dynamic navigation in multi-dimensional environments. It is designed to minimise the required computational power while remaining able to adapt quickly. To this end, a highly Parametrised Rapidly-exploring Random Graph (PRRG) is developed alongside a system of rules that allows dynamic node selection, with the D* Lite search algorithm used to create the path. All of the graph parameters can be changed dynamically, allowing the graph to adapt to the environment in real-time. Finally, an extension for custom node generation is proposed, enabling specific routes to be created online, which is especially relevant for survey and inspection applications. Although this framework was designed for the navigation of Autonomous Underwater Vehicles (AUVs), it can find applications in other domains, such as the navigation of Unmanned Aerial Vehicles (UAVs). The capabilities of the framework are demonstrated in an underwater ship hull inspection scenario, confirming its usefulness and effectiveness for real-time operations.
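The parametrised random-graph idea can be sketched as follows: sample nodes in a bounded 3-D workspace, connect neighbours within a radius, filter nodes with a dynamic selection rule, and search the result. The paper pairs the graph with D* Lite; networkx A* stands in here, and all parameters (bounds, radius, obstacle rule) are illustrative assumptions.

```python
# Minimal sketch of a parametrised random graph with rule-based node selection.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def build_prr_graph(n_nodes=300, radius=2.0, bounds=(10.0, 10.0, 5.0)):
    pts = rng.uniform(0, 1, (n_nodes, 3)) * np.array(bounds)
    g = nx.Graph()
    for i, p in enumerate(pts):
        g.add_node(i, pos=p)
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            d = float(np.linalg.norm(pts[i] - pts[j]))
            if d <= radius:
                g.add_edge(i, j, weight=d)
    return g, pts

def dynamic_node_filter(g, pts, obstacle, clearance=1.5):
    # Rule-based selection: drop nodes too close to a (moving) obstacle;
    # re-running this as parameters change adapts the graph online.
    bad = [i for i in g.nodes if np.linalg.norm(pts[i] - obstacle) < clearance]
    g.remove_nodes_from(bad)

g, pts = build_prr_graph()
dynamic_node_filter(g, pts, obstacle=np.array([5.0, 5.0, 2.5]))
start, goal = 0, max(g.nodes)
h = lambda a, b: float(np.linalg.norm(pts[a] - pts[b]))  # admissible heuristic
if start in g and nx.has_path(g, start, goal):
    path = nx.astar_path(g, start, goal, heuristic=h, weight="weight")
    print(len(path), "nodes on path")
```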
Inspections of net pens in aquaculture fish farms must be done regularly to verify the structural integrity of the net. Structural failures can lead to a large number of escaped fish, resulting in significant economic loss and potential harm to wild fish populations. To perform safe, efficient, and cost-effective inspections, robotic solutions can be deployed. In this paper, a ship hull inspection procedure has been adapted for autonomous net pen inspection operations using an unmanned underwater vehicle to develop the basis for an end-to-end autonomous solution. The proposed approach integrates online path planning and path following modules based on multibeam forward-looking sonar measurements. This sensor enables the creation of an inspection map to keep track of the inspected areas. The solution was tested in field trials at an industrial-scale fish farm, demonstrating its applicability.