Shuffling and Mixing Data Augmentation for Environmental Sound Classification
Ko, Bong Jun
Citation: T. Inoue, P. Vinayavekhin, S. Wang, D. Wood, A. Munawar, B. Ko, N. Greco and R. Tachibana, "Shuffling and Mixing Data Augmentation for Environmental Sound Classification", Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), pages 109–113, New York University, NY, USA, Oct. 2019
Abstract: Smart speakers have recently been adopted and widely used in consumer homes, largely as a communication interface between humans and machines. In addition, these speakers can be used to monitor sounds other than the human voice, for example, to watch over elderly people living alone and to send a notification if there are changes in their usual activities that may affect their health. In this paper, we focus on sound classification using machine learning, which usually requires a large amount of training data to achieve good accuracy. Our main contribution is a data augmentation technique that generates new sounds by shuffling and mixing two existing sounds of the same class in the dataset. This technique creates new variations in both the temporal sequence and the density of the sound events. We show on DCASE 2018 Task 5 that the proposed data augmentation method, combined with our proposed convolutional neural network (CNN), achieves a macro-averaged F1 score of 89.95% averaged over the 4 folds of the development dataset. This is a significant improvement over the baseline result of 84.50%. In addition, we verify that our proposed data augmentation technique also improves classification performance on the UrbanSound8K dataset.
Appears in Collections: Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019)
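The abstract describes the augmentation only at a high level: two same-class sounds are shuffled in time and then mixed. A minimal NumPy sketch of that idea is below; the segment count, the truncation to a common length, and the equal-weight (0.5/0.5) mix are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def shuffle_and_mix(x1, x2, num_segments=4, rng=None):
    """Generate a new same-class sound from two existing ones.

    Each waveform is cut into `num_segments` chunks, the chunk order is
    shuffled independently per waveform (new temporal sequence), and the
    two shuffled waveforms are averaged (new event density).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = min(len(x1), len(x2))          # assumption: truncate to common length
    x1, x2 = np.asarray(x1[:n], float), np.asarray(x2[:n], float)

    def shuffle_segments(x):
        segs = np.array_split(x, num_segments)
        order = rng.permutation(num_segments)
        return np.concatenate([segs[i] for i in order])

    # assumption: equal-weight mix of the two shuffled sounds
    return 0.5 * (shuffle_segments(x1) + shuffle_segments(x2))
```

Because shuffling only reorders samples and the mix is a fixed-weight average, the augmented clip preserves the overall energy of its two sources while presenting the sound events in a new order and at a higher density.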
Items in FDA are protected by copyright, with all rights reserved, unless otherwise indicated.