Vision Transformers: the threat of realistic adversarial patches

Kasper Cools, Clara Maathuis, Alexander M. van Oers, Claudia S. Hübner, Nikos Deligiannis, Marijke Vandewal, Geert De Cubber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The increasing reliance on machine learning systems has made their security a critical concern. Evasion attacks enable adversaries to manipulate the decision-making processes of AI systems, potentially causing security breaches or misclassification of targets. Vision Transformers (ViTs) have gained significant traction in modern machine learning due to their 1) improved performance compared to Convolutional Neural Networks (CNNs) and 2) greater robustness against adversarial perturbations. However, ViTs remain vulnerable to evasion attacks, particularly to adversarial patches: unique patterns designed to manipulate AI classification systems. These vulnerabilities are investigated by designing realistic adversarial patches that cause misclassification in person vs. non-person classification tasks, using the Creases Transformation (CT) technique, which adds subtle geometric distortions similar to those occurring naturally when a patch is worn on clothing. This study investigates the transferability of adversarial attack techniques developed for CNNs when applied to ViT classification models. Experimental evaluation across four fine-tuned ViT models on a binary person classification task reveals significant variation in vulnerability: attack success rates ranged from 40.04% (google/vit-base-patch16-224-in21k) to 99.97% (facebook/dino-vitb16), with google/vit-base-patch16-224 achieving 66.40% and facebook/dinov3-vitb16 reaching 65.17%. These results confirm the cross-architectural transferability of adversarial patches from CNNs to ViTs, with pre-training dataset scale and methodology strongly influencing model resilience to adversarial attacks.
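
As a rough illustration of the evaluation setting described in the abstract, the sketch below pastes a pre-computed adversarial patch onto a person image, applies a simple perspective warp as a stand-in for the Creases Transformation, and checks whether a ViT classifier's prediction changes. This is not the authors' implementation: the checkpoint name, file paths, patch size and placement, and the use of torchvision's RandomPerspective in place of CT are all assumptions made for illustration only.

```python
# Hypothetical sketch of the patch-evaluation loop implied by the abstract.
# Assumptions: a pre-computed patch file, a local person image, a stand-in
# checkpoint (the paper fine-tunes ViT backbones for binary person classification),
# and a random perspective warp as a crude proxy for the Creases Transformation (CT).
import torch
from PIL import Image
from torchvision import transforms
from transformers import ViTForImageClassification, ViTImageProcessor

MODEL_ID = "google/vit-base-patch16-224"  # stand-in backbone, not the fine-tuned model
processor = ViTImageProcessor.from_pretrained(MODEL_ID)
model = ViTForImageClassification.from_pretrained(MODEL_ID).eval()

person_img = Image.open("person.jpg").convert("RGB")        # hypothetical input image
patch = Image.open("adversarial_patch.png").convert("RGB")  # pre-computed patch (not provided here)

# Crease-like distortion: a perspective warp of the patch, loosely mimicking
# the fabric deformation that occurs when a patch is worn on clothing.
crease = transforms.RandomPerspective(distortion_scale=0.3, p=1.0)
warped_patch = crease(patch.resize((100, 100)))

# Paste the warped patch onto the centre of the image (fixed offset for illustration).
attacked = person_img.copy()
attacked.paste(warped_patch, (person_img.width // 2 - 50, person_img.height // 2 - 50))

# Compare the model's prediction on the clean and the patched image.
with torch.no_grad():
    for name, img in [("clean", person_img), ("patched", attacked)]:
        logits = model(**processor(images=img, return_tensors="pt")).logits
        print(name, model.config.id2label[logits.argmax(-1).item()])
```

In the setting described by the abstract, the attack success rate would then be the fraction of patched person images whose predicted label flips away from the person class, with the loop above repeated over the evaluation set and each of the four fine-tuned ViT checkpoints.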
Original language: English
Title of host publication: Proceedings of the SPIE Sensors + Imaging 2025
Subtitle of host publication: Artificial Intelligence for Security and Defence Applications III
Publisher: Society of Photo-Optical Instrumentation Engineers
Publication status: Published - 2025
