Efficient sampling of coastal ocean processes, especially mechanisms such as upwelling and internal waves and their influence on primary production, is critical for understanding our changing oceans. Coupling robotic sampling with ocean models provides an effective approach for adaptively sampling such features. We present methods that capitalize on information from ocean models and in situ measurements, using Gaussian process modeling and objective functions, allowing sampling efforts to be concentrated in regions of high scientific interest. We demonstrate how to combine and correlate marine data from autonomous underwater vehicles, model forecasts, satellite remote sensing, buoy, and ship-based measurements, as a means to cross-validate and improve ocean model accuracy, in addition to resolving upper water-column interactions. Our work is focused on the west coast of Mid-Norway, where a significant influx of Atlantic Water produces a rich and complex physical–biological coupling that is hard to measure and characterize due to the harsh environmental conditions. Results from both simulation and full-scale sea trials are presented.
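The adaptive-sampling idea above can be sketched with a small Gaussian process example: fit a GP to the measurements collected so far, then steer toward the candidate location that maximizes a score trading off predicted value against posterior uncertainty. This is a minimal one-dimensional sketch, not the objective functions used in the work; the kernel parameters, the `beta` exploration weight, and all function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance between 1-D location arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    """GP posterior mean and standard deviation at query locations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    Kss = rbf_kernel(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.diag(Kss - Ks.T @ v)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def next_waypoint(x_train, y_train, candidates, beta=1.5):
    """Pick the candidate maximizing predicted value plus weighted
    uncertainty (beta is a hypothetical exploration weight)."""
    mu, sd = gp_posterior(x_train, y_train, candidates)
    return candidates[np.argmax(mu + beta * sd)]
```

With sparse measurements, the rule sends the vehicle past the last sampled location, where both the predicted value and the uncertainty are high, rather than revisiting well-characterized water.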
Fronts between Arctic- and Atlantic-origin waters are characterized by strong lateral gradients in temperature and salinity. Ocean processes associated with fronts are complex with considerable space and time variability. Therefore, resolving the processes in frontal zones by observation is challenging but important for understanding the associated physical–biological interactions and their impact on the marine ecosystem. The use of autonomous robotic vehicles and in situ data-driven sampling can help improve and augment traditional sampling platforms, such as ships and profiling instruments. Here, we present the development and results of using an autonomous agent for detection and sampling of an Arctic front, integrated on board an autonomous underwater vehicle. The agent is based on a subsumption architecture implemented as behaviors in a finite-state machine. Once a front is detected, the front-tracking behavior uses observations to continuously adapt the path of the vehicle to perform transects across the front interface. Following successful sea trials in the Trondheimsfjord, the front-tracking agent was deployed to perform a full-scale mission near 82°N, north of Svalbard, close to the sea ice edge. The agent was able to detect and track an Arctic frontal feature, performing a total of six crossings while collecting vertical profiles in the upper 90 m of the water column. Measurements yield a detailed volumetric description of the frontal feature with high resolution along the frontal zone, augmenting ship-based sampling that was run in parallel.
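The behavior-based agent described above can be illustrated with a minimal finite-state machine: transit until the along-track temperature gradient signals a front, then reverse the crossing direction each time the vehicle exits the frontal zone so that it transects back across the interface. This is a simplified sketch of the idea only; the `grad_threshold` parameter, the state names, and the crossing logic are assumptions, not the deployed agent.

```python
from enum import Enum, auto

class State(Enum):
    TRANSIT = auto()   # run survey lines, watch for a frontal signature
    TRACK = auto()     # transect back and forth across the front interface

class FrontTracker:
    """Toy finite-state machine for front detection and tracking.

    A front is declared when the along-track temperature gradient
    exceeds grad_threshold (hypothetical tuning parameter, in deg C/km).
    """

    def __init__(self, grad_threshold=0.5):
        self.state = State.TRANSIT
        self.grad_threshold = grad_threshold
        self.crossing_sign = 1    # +1 / -1: which side of the front to steer toward
        self.crossings = 0
        self.in_front = False

    def step(self, temp_gradient):
        """Advance the FSM with the latest measured gradient."""
        strong = abs(temp_gradient) > self.grad_threshold
        if self.state is State.TRANSIT:
            if strong:
                self.state = State.TRACK
                self.in_front = True
        else:  # State.TRACK
            if self.in_front and not strong:
                # Exited the frontal zone: one transect complete, turn back.
                self.crossings += 1
                self.crossing_sign = -self.crossing_sign
                self.in_front = False
            elif strong:
                self.in_front = True
        return self.state, self.crossing_sign
```

Feeding the machine a gradient series with two excursions above the threshold yields two counted crossings with alternating steering direction.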
Taking advantage of the complementary properties of sonars and cameras can improve underwater visual odometry and point cloud generation. However, this task remains difficult because the image formation principles differ, making direct matching of acoustic and optical features challenging. Solving this problem can improve applications such as underwater navigation and mapping. A camera-sonar combination is proposed for real-time scale estimation using underwater monocular image features combined with a multibeam forward-looking sonar. The features detected by a monocular SLAM framework are matched with the acoustic features based on the relative distances in the instrument reference frame calculated from the two data streams, and used to estimate a depth ratio. The ratio is optimised over a large sample set to ensure scale stability. The sensor combination enables real-time scale estimation of the trajectory and the mapped environment, which is a requirement for autonomous systems. The proposed approach is experimentally demonstrated for two underwater environments and scenarios: a subsea module mapping and a ship hull inspection. The results demonstrate the efficiency and applicability of the proposed solution. In addition to correctly restoring the scale, it significantly improves the localization and outperforms the tested dead-reckoning and visual-inertial SLAM methods.
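The scale-estimation step can be illustrated with a toy example: for each feature matched between the monocular map and the sonar, the ratio of metric sonar range to up-to-scale visual depth is one estimate of the missing scale, and aggregating many ratios stabilizes it. A median is used here as a simple robust stand-in for the optimisation described in the abstract; function names and the aggregation choice are illustrative assumptions.

```python
import numpy as np

def estimate_scale(visual_depths, sonar_ranges):
    """Estimate the monocular-SLAM scale factor from matched features.

    visual_depths: up-to-scale feature depths from the monocular map.
    sonar_ranges:  metric ranges to the same features from the
                   forward-looking sonar.
    The median of the per-feature ratios gives an estimate that is
    robust to occasional acoustic-to-optical mismatches.
    """
    v = np.asarray(visual_depths, dtype=float)
    s = np.asarray(sonar_ranges, dtype=float)
    return float(np.median(s / v))

def rescale_trajectory(positions, scale):
    """Apply the estimated scale to an up-to-scale trajectory."""
    return [tuple(scale * c for c in p) for p in positions]
```

A single gross mismatch (one ratio of 20 among ratios of 2) leaves the median estimate unchanged, which is why a robust aggregate matters over a large sample set.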
Rapid assessment and enhanced knowledge of plankton communities and their structures in the productive upper water column is of crucial importance if we are to understand the impact of the changing climate on upper ocean processes. Enabling persistent and systematic ecosystem surveillance by coupling the revolution in robotics and automation with artificial intelligence (AI) methods will improve accuracy of predictions, reduce measurement uncertainty, and accelerate methodological sampling with high spatial and temporal resolution. Further, progress in real-time robotic visual sensing and machine learning has enabled high-resolution space-time imaging, analysis, and interpretation. We describe a novel mobile robotic tool that characterizes upper water column biota by employing intelligent onboard sampling to target specific mesoplankton taxa. Although we focus on machine learning techniques, we also outline the processing pipeline that combines imaging, supervised machine learning, hydrodynamics, and AI planning. The tool we describe will accelerate the time-consuming task of analyzing "who is there" and thus advance oceanographic observation.
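The onboard targeting step can be sketched as a toy supervised classifier over image feature vectors: assign each detected particle to the nearest taxon centroid, and trigger targeted sampling when a taxon of interest appears. The feature representation, the taxa names, and the nearest-centroid rule are illustrative assumptions, far simpler than the machine-learning pipeline the abstract describes.

```python
import numpy as np

class NearestCentroidClassifier:
    """Toy supervised classifier for plankton image descriptors."""

    def fit(self, X, y):
        """Store one mean feature vector (centroid) per labeled taxon."""
        X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        self.labels = sorted(set(y))
        self.centroids = {l: X[y == l].mean(axis=0) for l in self.labels}
        return self

    def predict(self, x):
        """Assign the taxon whose centroid is closest in feature space."""
        x = np.asarray(x, dtype=float)
        return min(self.labels,
                   key=lambda l: np.linalg.norm(x - self.centroids[l]))

def should_sample(features, clf, target="copepod"):
    """Trigger targeted sampling when the (hypothetical) target taxon
    is detected in the image stream."""
    return clf.predict(features) == target
```

In an onboard setting, `should_sample` would gate the physical sampling action, so effort is spent only on the mesoplankton taxa of interest.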
Hyperspectral seafloor surveys using airborne or spaceborne sensors are generally limited to shallow coastal areas, due to the requirement for target illumination by sunlight. Deeper marine environments devoid of sunlight cannot be imaged by conventional hyperspectral imagers. Instead, a close-range, sunlight-independent hyperspectral survey approach is required. In this study, we present the first hyperspectral image data from the deep seafloor. The data were acquired in approximately 4200 m water depth using a new Underwater Hyperspectral Imager (UHI) mounted on a remotely operated vehicle (ROV). UHI data were recorded for 112 spectral bands between 378 nm and 805 nm, with a high spectral (4 nm) and spatial resolution (1 mm per image pixel). The study area was located in a manganese nodule field in the Peru Basin (SE Pacific), close to the DISCOL (DISturbance and reCOLonization) experimental area. To test whether underwater hyperspectral imaging can be used for detection and mapping of mineral deposits in potential deep-sea mining areas, we compared two supervised classification methods, the Support Vector Machine (SVM) and the Spectral Angle Mapper (SAM). The results show that SVM is superior to SAM and is able to accurately detect nodule surfaces. The UHI therefore represents a promising tool for high-resolution seafloor exploration and characterisation prior to resource exploitation.
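Of the two classifiers compared, the Spectral Angle Mapper has a particularly compact definition: each pixel spectrum is assigned to the class whose reference spectrum subtends the smallest angle, which makes the rule insensitive to overall brightness differences. A minimal sketch follows; the reference spectra and band values are made up for illustration.

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference.

    Proportional spectra (same shape, different brightness) give an
    angle of zero, so illumination changes do not alter the label.
    """
    s = np.asarray(spectrum, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(spectrum, references):
    """Label the pixel with the minimum-angle reference class.

    references: dict mapping class name -> reference spectrum.
    """
    return min(references,
               key=lambda c: spectral_angle(spectrum, references[c]))
```

An SVM, by contrast, learns a decision boundary from labeled training pixels rather than comparing against fixed reference spectra, which is one plausible reason for its stronger performance on the nodule data.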
Methods for using underwater vehicles for mapping and monitoring are developed continuously. These methods must be accurate, quantitative, and repeatable, while remaining as cost-effective as possible. In 2011, 2012, 2013, and 2014, the Norwegian University of Science and Technology (NTNU) Applied Underwater Robotics Laboratory completed surveys in the Trondheim Fjord area. During these surveys, methods relevant for addressing several of these challenges were tested. We have tested Synthetic Aperture Sonar (SAS) for establishing baselines for marine habitats and littering. In the sonar data, cold-water coral habitats gave a significant signal. These observations were confirmed by video and hyperspectral imaging. We have also revisited the same areas and demonstrated the potential to detect changes. During our campaigns, the sonar data have also been verified by ROV video and sampling. A dumping area in the fjord was mapped by the AUV-mounted SAS, and on a subsequent cruise the ROV was deployed for video survey, stereo photos, and sediment sampling. Stereo models may prove an important tool where millimeter resolution and precision are attainable. Based on the experimental work presented in this paper, a proposal for using underwater vehicles in environmental management of coastal areas is described.
Expanding spatial presentation from two-dimensional profile transects to three-dimensional ocean mapping is key for a better understanding of ocean processes. Phytoplankton distributions can be highly patchy, and the accurate identification of these patches with the context, variability, and uncertainty of measurements on relevant scales is difficult to achieve. Traditional sampling methods, such as plankton nets, water samplers, and in-situ vertical sensors, provide a snapshot and often miss the fine-scale horizontal and temporal variability. Here, we show how two autonomous underwater vehicles measured, adapted to, and reported real-time chlorophyll a measurements, giving insights into the spatiotemporal distribution of phytoplankton biomass and patchiness. To gain the maximum available information within their sensing scope, the vehicles moved in an adaptive fashion, seeking the regions of the highest predicted chlorophyll a concentration, the greatest uncertainty, and the least possibility of collision with other underwater vehicles and ships. The vehicles collaborated by exchanging data with each other and operators via satellite, using a common segmentation of the area to maximize information exchange over the limited bandwidth of the satellite. Importantly, multiple autonomous underwater vehicles reporting real-time data, combined with targeted sampling, can better match sampling to the scales relevant for understanding plankton patchiness and ocean processes.
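The three criteria driving the adaptive behaviour (high predicted chlorophyll a, high uncertainty, low collision risk) can be sketched as a per-cell utility over the shared area segmentation: a weighted sum of predicted value and uncertainty, with cells near another vehicle's reported position ruled out entirely. The weights, the safety radius, and the function names here are hypothetical, not the objective used in the deployment.

```python
import math

def score_cell(pred_chla, uncertainty, cell_pos, other_vehicles,
               w_value=1.0, w_unc=0.5, safe_radius=200.0):
    """Utility of visiting one grid cell (hypothetical weighting).

    Combines predicted chlorophyll a and model uncertainty, subject to
    a hard standoff constraint against the reported positions of the
    other vehicles (positions in metres in a common local frame).
    """
    for v in other_vehicles:
        if math.dist(cell_pos, v) < safe_radius:
            return float("-inf")   # infeasible: too close to another vehicle
    return w_value * pred_chla + w_unc * uncertainty

def pick_cell(cells, other_vehicles):
    """Choose the highest-scoring cell from (pos, pred, unc) tuples."""
    return max(cells,
               key=lambda c: score_cell(c[1], c[2], c[0], other_vehicles))[0]
```

Note that a cell with the highest predicted chlorophyll a can still lose to a distant, lower-value cell when another vehicle is already working nearby, which is exactly the deconfliction behaviour the common segmentation enables.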
This paper shows how an AUV and an ROV can complement each other in a scientific mapping campaign in the Trondheim Fjord. To complete this survey, a multidisciplinary approach was necessary to adapt industrial and military technology to identify and map objects of interest (OOI) on the seafloor. NTNU AUR-Lab and FFI mobilized for a collaborative cruise with the ROV Minerva, equipped with a video camera, dynamic positioning system, still camera for photo mosaics, UHI (Underwater Hyperspectral Imager), MRU, and MBE, and the AUV Hugin HUS, with synthetic aperture side-scan sonar and a still camera as main instruments. These platforms complemented each other: the AUV had an unprecedented area capacity for mapping and search, while the ROV provided detailed information about the site.
Toxicology studies in early fish life stages serve an important function in measuring the impact of potentially harmful substances, such as crude oil, on marine life. Morphometric analysis of larvae can reveal the effects of such substances in retarding growth and development. These studies are labor intensive and time consuming, typically resulting in only a small number of samples being considered. An automated system for imaging and measurement of experimental animals, using flow-through imaging and an artificial neural network to allow faster sampling of more individuals, has been described previously and used in toxicity experiments. This study compares the performance of the automated imaging and analysis system with traditional microscopy techniques in measuring biologically relevant endpoints, using two oil treatments as positive controls. We demonstrate that while the automated system typically underestimates morphometric measurements relative to analysis of manual microscopy images, it shows similar statistical results to the manual method when comparing treatments across most endpoints. It allows many more individual specimens to be sampled in a shorter time period, reducing labor requirements and improving statistical power in such studies, and is noninvasive, allowing for repeated sampling of the same population.