Title: Robustness of Adversarial Attacks in Sound Event Classification

Authors: Subramanian, Vinod
Benetos, Emmanouil
Sandler, Mark B.
Date Issued: Oct-2019
Citation: V. Subramanian, E. Benetos & M. Sandler, "Robustness of Adversarial Attacks in Sound Event Classification", Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), pages 239–243, New York University, NY, USA, Oct. 2019
Abstract: An adversarial attack is a method to generate perturbations to the input of a machine learning model in order to make the output of the model incorrect. The perturbed inputs are known as adversarial examples. In this paper, we investigate the robustness of adversarial examples to simple input transformations such as mp3 compression, resampling, white noise, and reverb in the task of sound event classification. By performing this analysis, we aim to provide insight into the strengths and weaknesses of current adversarial attack algorithms, as well as a baseline for defenses against adversarial attacks. Our work shows that adversarial attacks are not robust to simple input transformations. White noise is the most consistent method to defend against adversarial attacks, with a success rate of 73.72% averaged across all models and attack algorithms.
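
The white noise transformation described in the abstract can be illustrated with a minimal sketch (not taken from the paper): noise is added to an adversarial waveform at a chosen signal-to-noise ratio and the classifier is re-run to see whether the clean label is recovered. The names classify, adversarial_audio, clean_label, and snr_db are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumption, not the paper's code): test whether additive
    # white noise disrupts an adversarial audio example.
    # `classify` is a hypothetical callable standing in for any sound event
    # classifier; `adversarial_audio` and `clean_label` are assumed to come
    # from the attack pipeline.
    import numpy as np

    def white_noise_defense(adversarial_audio, clean_label, classify, snr_db=30.0):
        """Add white noise at a given SNR (dB) and report whether the
        original (clean) label is recovered after the transformation."""
        signal_power = np.mean(adversarial_audio ** 2)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        noise = np.random.normal(0.0, np.sqrt(noise_power),
                                 size=adversarial_audio.shape)
        transformed = adversarial_audio + noise
        return classify(transformed) == clean_label
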
First Page: 239
Last Page: 243
DOI: https://doi.org/10.33682/sp9n-qk06
Type: Article
Appears in Collections:Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019)

Files in This Item:
DCASE2019Workshop_Subramanian_66.pdf (632.89 kB, Adobe PDF)
