Title: Exploiting Parallel Audio Recordings to Enforce Device Invariance in CNN-based Acoustic Scene Classification

Authors: Primus, Paul
Eghbal-zadeh, Hamid
Eitelsebner, David
Koutini, Khaled
Arzt, Andreas
Widmer, Gerhard
Date Issued: Oct-2019
Citation: P. Primus, H. Eghbal-zadeh, D. Eitelsebner, K. Koutini, A. Arzt & G. Widmer, "Exploiting Parallel Audio Recordings to Enforce Device Invariance in CNN-based Acoustic Scene Classification", Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), pages 204–208, New York University, NY, USA, Oct. 2019
Abstract: Distribution mismatches between the data seen at training time and at application time remain a major challenge in all application areas of machine learning. We study this problem in the context of machine listening (Task 1b of the DCASE 2019 Challenge). We propose a novel approach to learn domain-invariant classifiers in an end-to-end fashion by enforcing equal hidden-layer representations for domain-parallel samples, i.e., time-aligned recordings from different recording devices. No classification labels are needed for our domain adaptation (DA) method, which makes data collection cheaper. We show that our method improves target-domain accuracy on both a toy dataset and an urban acoustic scenes dataset. We further compare our method to Maximum Mean Discrepancy-based DA and find it more robust to the choice of DA parameters. Our submission to DCASE 2019 Task 1b, based on this method, earned 4th place in the team ranking.
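Sketch: The abstract describes enforcing equal hidden-layer representations for time-aligned recordings from different devices, with classification labels needed only in the source domain. The following is a minimal PyTorch sketch of that kind of pairwise embedding-matching objective; the network architecture, the MSE penalty on embeddings, and the weighting factor lam are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class SceneCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        # Toy embedding network; the paper uses a deeper CNN on spectrograms.
        self.embed = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classify = nn.Linear(16, n_classes)

    def forward(self, x):
        z = self.embed(x)  # hidden representation to be made device-invariant
        return z, self.classify(z)

def training_loss(model, x_a, y_a, x_b, lam=1.0):
    """Loss for one parallel batch.

    x_a, x_b: time-aligned spectrogram batches from devices A and B.
    y_a: scene labels, needed only for the device-A recordings.
    lam: weight of the invariance penalty (assumed hyperparameter).
    """
    z_a, logits_a = model(x_a)
    z_b, _ = model(x_b)
    cls_loss = nn.functional.cross_entropy(logits_a, y_a)
    # Push the hidden representations of domain-parallel samples together;
    # no classification labels are required for the device-B recordings.
    inv_loss = nn.functional.mse_loss(z_a, z_b)
    return cls_loss + lam * inv_loss

Because the invariance term uses only paired, unlabeled recordings, the target-device data need not be annotated, which is the cost advantage the abstract points out.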
First Page: 204
Last Page: 208
DOI: https://doi.org/10.33682/v9qj-8954
Type: Article
Appears in Collections:Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019)

Files in This Item:
File: DCASE2019Workshop_Primus_53.pdf (691.9 kB, Adobe PDF)

