Activity: Talk or presentation › Talk at a scientific Seminar
Description
Deep learning models can be powerful tools, but they often fail unexpectedly on degenerate data, a risk that is all the more critical in military applications. It is therefore crucial to check and understand model outputs through interpretability. In this webinar, we will explore different methods, such as saliency maps and uncertainty estimation, to interpret the results of seabed semantic segmentation from high-resolution Synthetic Aperture Sonar (SAS) images, specifically for Mine Countermeasures (MCM). These interpretability methods not only help identify biases in both the data and the model, leading to better performance, but also help build trust with end users. Additionally, the information gained can feed improved data curation and model retraining strategies, such as active learning, making the process even more effective.
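To make the two method families named above concrete, here is a minimal sketch in PyTorch of gradient-based saliency and Monte Carlo dropout uncertainty for a segmentation network. It is an illustration under stated assumptions, not the implementation presented in the webinar: the model architecture, input shapes, and helper names (saliency_map, mc_dropout_uncertainty) are hypothetical.

    import torch
    import torch.nn.functional as F

    # Assumed setup: `model` is any PyTorch network mapping (B, 1, H, W)
    # sonar images to (B, C, H, W) class logits. The architecture itself
    # is a placeholder assumption.

    def saliency_map(model, image, target_class):
        """Pixel-wise |d(class logit) / d(input)|, a basic saliency map."""
        model.eval()
        image = image.clone().requires_grad_(True)
        logits = model(image)                    # (B, C, H, W)
        logits[:, target_class].sum().backward() # aggregate the class logit
        return image.grad.abs().amax(dim=1)      # (B, H, W) saliency

    def mc_dropout_uncertainty(model, image, n_samples=20):
        """Per-pixel predictive entropy via Monte Carlo dropout."""
        model.train()  # keep dropout layers stochastic at inference
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(image), dim=1) for _ in range(n_samples)]
            ).mean(dim=0)                        # mean class probabilities
        # High entropy flags pixels where the model is unsure, e.g. near
        # ambiguous seabed-type boundaries.
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

In practice, one would overlay these (B, H, W) maps on the SAS image to see which pixels drive a predicted seabed class and where the model is least confident; the latter is also a natural acquisition score for active learning.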
Period
5 Nov 2024
Event title
Interpretable AI to identify biases and build trust with end users - Application to Sonar Seabed Segmentation: A biweekly webinar held at the Royal Military Academy