Subsea Survey System Boosted by Scottish Research
Scottish researchers have collaborated to apply deep learning to improve subsea surveys of oil and gas pipelines.
Subsea surveys are a vital tool for oil and gas companies to inspect their subsea assets and identify potential problems. Currently, these time-consuming inspections require human operators to control remotely operated vehicle (ROV) cameras that scan the seabed for potential threats to oil and gas pipelines.
Results from such subsea inspections can be adversely affected by sea life, vegetation, poor visibility and sand agitation. It is hoped that a newly developed algorithm, which automates the interpretation of footage, will improve the data accuracy of the existing underwater survey system and reduce inspection times.
Scottish Research Improving Subsea Survey Video
The University of Strathclyde’s Institute of Sensors, Signals and Communications, within the Department of Electronic and Electrical Engineering, teamed up with N-Sea, an expert in subsea inspection, to develop a deep learning video annotation model for automatic eventing of subsea survey video.
Supported by The Data Lab, a Scottish Innovation Centre, the research combines state-of-the-art inspection expertise with innovative data analytics to create a new algorithm that annotates video frames from three general-purpose ROV cameras, automatically and in real time. The approach uses an ensemble that combines the outputs of three classifiers, each operating independently on one of the three video streams: port, starboard and centre.
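The article does not say how the three per-camera outputs are fused, so the sketch below is a minimal illustration under the assumption that each classifier returns a probability distribution over event classes and that the ensemble simply averages them; the function and class names are hypothetical.

```python
import numpy as np

def ensemble_predict(frame_triplet, classifiers, class_names):
    """Fuse per-camera predictions by averaging class probabilities.

    `frame_triplet` holds synchronised frames from the port, starboard
    and centre cameras; `classifiers` are the three independently
    trained models. The averaging rule is an illustrative assumption --
    the article only states that the three outputs are combined.
    """
    probs = [clf(frame) for clf, frame in zip(classifiers, frame_triplet)]
    mean_probs = np.mean(probs, axis=0)  # fuse the three streams
    return class_names[int(np.argmax(mean_probs))], mean_probs
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident prediction from one clear camera view outweigh uncertain predictions from the other two.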
David Murray, Survey and Inspection Data Centre Manager at N-Sea, said: “Recently a number of automatic video annotation approaches have been announced. However, these have been demonstrated in clear waters, using bespoke and vendor specific camera systems that mitigate motion blur and poor image quality, through strobed lighting and high shutter speeds. Although these technological advancements in the equipment are beneficial, the vast majority of work class ROVs are still equipped with standard cameras operating in murky waters.”
To address this, the project has developed a 24-layer convolutional neural network that identifies features in the video frames. It currently supports a number of events, such as burial, exposure and field joints, with high classification accuracy on still images. Combining predictions across a number of consecutive frames further boosts the network's performance.
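One simple way to combine per-frame predictions is a majority vote over a sliding window of consecutive frames. The sketch below assumes that approach; the window size and voting rule are illustrative assumptions, not details taken from the project.

```python
from collections import Counter

def smooth_events(frame_labels, window=5):
    """Replace each frame's label with the majority label in a sliding
    window of consecutive per-frame predictions.

    This suppresses isolated single-frame misclassifications (e.g. one
    spurious "exposure" inside a run of "burial" frames). The window
    size and the majority-vote rule are illustrative assumptions.
    """
    smoothed = []
    half = window // 2
    for i in range(len(frame_labels)):
        lo, hi = max(0, i - half), min(len(frame_labels), i + half + 1)
        votes = Counter(frame_labels[lo:hi])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed
```

Because pipeline events such as burial or exposure persist across many frames, a short voting window improves stability without delaying the detection of genuine event changes by more than a few frames.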
Wider Applications for Subsea Survey
The partners are currently establishing a follow-on project to increase the technology readiness level of the model, facilitating its adoption by the inspection industry.
Dr Christos Tachtatzis, Lecturer and Chancellor’s Fellow at the University of Strathclyde, said: “Prior state-of-the-art approaches demand high picture clarity and high visibility to operate effectively. We have purposely trained and validated the model using video footage typically acquired by ROVs, making the model applicable to the wider subsea survey community.”