Acoustic Scene Classification Based on a Large-margin Factorized CNN
|Citation:||J. Cho, S. Yun, H. Park, J. Eum & K. Hwang, "Acoustic Scene Classification Based on a Large-margin Factorized CNN", Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), pages 45–49, New York University, NY, USA, Oct. 2019|
|Abstract:||In this paper, we present an acoustic scene classification framework based on a large-margin factorized convolutional neural network (CNN). We adopt a factorized CNN to learn patterns in the time-frequency domain by factorizing each 2D kernel into two separate 1D kernels. The factorized kernels learn the main components of the two key patterns of acoustic scene classification separately: long-term ambient sounds and short-term event sounds. In training our model, we use a loss function based on triplet sampling, such that the distance between samples of the same acoustic scene recorded in different environments is minimized, while the distance between samples of different acoustic scenes is simultaneously maximized. With this loss function, samples of the same acoustic scene are clustered independently of the recording environment, yielding a classifier with better generalization to unseen environments. We evaluated our acoustic scene classification framework on the DCASE 2019 challenge Task 1A dataset. Experimental results show that the proposed algorithm improves on the performance of the baseline network while reducing the number of parameters to one third. Furthermore, the performance gain is higher on unseen data, showing that the proposed algorithm has better generalization ability.|
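The two ideas in the abstract can be illustrated with a short sketch. The exact kernel sizes, channel counts, and margin used in the paper are not stated in this abstract, so the numbers below (64 channels, 7x7 kernels, margin 0.3) are illustrative assumptions; the sketch only shows why factorizing a 2D kernel into a frequency-axis and a time-axis 1D kernel shrinks the parameter count, and what a hinge-style triplet loss computes.

```python
import math

def conv2d_params(c_in, c_out, kh, kw):
    # Parameter count of a standard 2D convolution layer (bias omitted).
    return c_in * c_out * kh * kw

def factorized_params(c_in, c_out, kh, kw):
    # Factorized variant: a (kh x 1) convolution along frequency,
    # followed by a (1 x kw) convolution along time.
    return c_in * c_out * kh + c_out * c_out * kw

# Illustrative sizes (not from the paper): 64 -> 64 channels, 7x7 kernel.
full = conv2d_params(64, 64, 7, 7)        # 64*64*49 = 200704
fact = factorized_params(64, 64, 7, 7)    # 64*64*7 + 64*64*7 = 57344
print(f"full: {full}, factorized: {fact}, ratio: {fact / full:.3f}")

def triplet_loss(anchor, positive, negative, margin=0.3):
    # Hinge-style triplet loss on embedding vectors: pull the anchor toward
    # a sample of the same scene (positive) and push it away from a sample
    # of a different scene (negative) by at least `margin`.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

# Zero loss when the negative is already farther away than positive + margin.
print(triplet_loss([0.0, 0.0], [0.0, 1.0], [3.0, 0.0]))
```

For the same input/output width, the factorized pair scales with kh + kw instead of kh * kw, which is consistent with the roughly threefold parameter reduction reported in the abstract.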
|Appears in Collections:||Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019)|
Files in This Item:
|DCASE2019Workshop_Cho_69.pdf||3.35 MB||Adobe PDF|