Can we trust this prediction? Assessing the risk of using a deep learning algorithm for underwater target detection

  • Arhant, Y. (Second reader)
  • Aleksandra Pizurica (First reader)
  • Nicolas Nadisic (Second reader)
  • Nicolas Vercheval (Second reader)

Activity: Examination and thesis supervision (master's thesis supervision)

Description

Robbe Van Looy's master's dissertation, submitted in order to obtain the degree of Master of Science in Information Engineering Technology.

Additional description

This master's dissertation addresses the challenge of applying explainable artificial intelligence to sonar-based mine detection, a task crucial to modern mine countermeasure operations. Deep learning models have shown strong performance in classifying objects captured via side-scan sonar, particularly in distinguishing Mine-Like Contacts from Non-Mine-Like Bottom Objects. However, the "black box" nature of these models hinders trust and deployment in safety-critical contexts. This research investigates the adaptation and evaluation of three post-hoc explanation methods, LIME, Grad-CAM, and Score-CAM, for their ability to clarify model decisions in this domain. Two convolutional models were developed: a less accurate model A and a well-performing model B. These were used not as endpoints, but as tools to analyze how model quality affects the reliability and clarity of explanations. The methods were applied to a curated subset of 20 representative sonar images, with visual inspection and quantitative evaluation performed. Key criteria included agreement (consistency across methods), faithfulness (how closely explanations reflect true model behavior), and sparsity (how focused the explanation is). Quantitative results demonstrated that model B yielded more faithful and focused explanations across all methods, particularly in insertion/deletion AUC and Gini coefficient. Grad-CAM and Score-CAM outperformed LIME in both clarity and spatial relevance. Visual case studies supported these findings. This work contributes an evaluation framework for explainability methods in sonar-based classification, bridging practical deep learning with interpretability in real-world defense applications.
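The sparsity criterion mentioned above is often operationalized as the Gini coefficient of the attribution map: a uniform map scores 0, while a map concentrating all relevance in a few pixels scores close to 1. A minimal sketch of that computation, assuming the saliency map is available as a non-negative NumPy array (the function name and test maps are illustrative, not taken from the dissertation):

```python
import numpy as np

def gini(saliency):
    """Gini coefficient of a non-negative saliency map.

    0 means attribution is spread uniformly over all pixels;
    values near 1 mean the explanation is highly focused.
    """
    x = np.sort(np.abs(saliency).ravel())  # sorted attribution values
    n = x.size
    total = x.sum()
    if total == 0:
        return 0.0
    # Standard formula: G = 2*sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * total) - (n + 1) / n

# A focused map should score much higher than a uniform one.
uniform = np.ones((8, 8))
focused = np.zeros((8, 8))
focused[4, 4] = 1.0
print(gini(uniform))  # 0.0
print(gini(focused))  # close to 1
```

The insertion/deletion AUC metrics work analogously but require the model: pixels are inserted (or deleted) in order of decreasing attribution, and the area under the resulting confidence curve is measured.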
Period: October 2024 – June 2025
The examination took place at
  • University of Ghent
Degree of recognition: National