Authors

Sikha Bagui
Alejandro Perez

Abstract

The need for reliable automated object detection and classification in the maritime sphere has increased significantly over the last decade. The increased use of unmanned marine vehicles necessitates the development and use of autonomous object detection and classification systems that utilize onboard cameras and operate reliably, efficiently, and accurately without the need for human intervention. As the autonomous vehicle realm moves further away from active human supervision, the requirements for a detection and classification suite call for a higher level of accuracy in the classification models. This paper presents a comparative study using different classification models on a large maritime vessel image dataset. For this study, additional images were annotated using Roboflow, focusing on the subclasses that had the lowest detection performance in a previous work by Brown et al. (1). The present study uses a dataset of over 5,000 images. Using the enhanced set of image annotations, models were trained in the cloud with YOLOv5, YOLOv7, and YOLOv8 on Google Colab, as well as with Amazon Rekognition on Amazon Web Services. The models were tuned and trained for five runs of 150 epochs each to collect efficiency and performance metrics. Comparison of these metrics showed that the YOLO models achieved improved classification accuracy for the subclasses that previously had lower scores, but at the expense of overall model accuracy. Furthermore, training time efficiency improved drastically with the newer YOLO APIs. In contrast, Amazon Rekognition yielded superior accuracy across the board, but had the lowest training efficiency.
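The abstract does not include the authors' training code; the sketch below only illustrates how a single YOLOv8 training run of 150 epochs, as described above, might be configured with the Ultralytics Python API. The weight file `yolov8s.pt`, the dataset config name `vessels.yaml` (e.g., as exported from Roboflow), and the image size are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): one YOLOv8 training run of 150 epochs
# on a Roboflow-exported maritime vessel dataset, using the Ultralytics API.
from ultralytics import YOLO

# "yolov8s.pt" and "vessels.yaml" are illustrative placeholders.
model = YOLO("yolov8s.pt")  # start from pretrained weights

# Train for 150 epochs, matching the per-run epoch count given in the abstract.
model.train(
    data="vessels.yaml",  # dataset config exported from Roboflow (assumed name)
    epochs=150,
    imgsz=640,            # assumed input resolution
)

# Evaluate on the validation split to collect mAP and per-class metrics.
metrics = model.val()
print(metrics.box.map50)  # mAP@0.5 across classes
```

A study comparing YOLOv5, YOLOv7, and YOLOv8 would repeat runs like this for each API and average the resulting accuracy and training-time metrics across the five runs.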

Article Details

Section: Research