Full metadata record
DC Field | Value | Language
dc.contributor.author | Ranjan, Rishabh
dc.contributor.author | Jayabalan, Sathish
dc.contributor.author | Nguyen, Thi Ngoc Tho
dc.contributor.author | Gan, Woon Seng
dc.date.accessioned | 2019-10-24T01:50:23Z | -
dc.date.available | 2019-10-24T01:50:23Z | -
dc.date.issued | 2019-10
dc.identifier.citation | R. Ranjan, S. Jayabalan, T. Nguyen & W. Gan, "Sound Event Detection and Direction of Arrival Estimation using Residual Net and Recurrent Neural Networks", Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), pages 214–218, New York University, NY, USA, Oct. 2019 | en
dc.identifier.uri | http://hdl.handle.net/2451/60762 | -
dc.description.abstract | This paper presents a deep learning approach for sound event detection and localization, developed as part of Task 3 of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 challenge. Deep residual networks, originally used for image classification, are adapted and combined with recurrent neural networks (RNNs) to estimate the onsets and offsets of sound events, their classes, and their directions of arrival in a reverberant environment. Additionally, data augmentation and post-processing techniques are applied to generalize and improve system performance on unseen data. Using our best model on the validation dataset, sound event detection achieves an F1-score of 0.89 and an error rate of 0.18, while sound source localization achieves an angular error of 8° and a frame recall of 90%. | en
dc.rights | Copyright The Authors, 2019 | en
dc.title | Sound Event Detection and Direction of Arrival Estimation using Residual Net and Recurrent Neural Networks | en
dc.type | Article | en
dc.identifier.DOI | https://doi.org/10.33682/93dp-f064
dc.description.firstPage | 214
dc.description.lastPage | 218
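
The abstract above describes a model that couples a residual CNN front end with recurrent layers to jointly predict sound event activity and direction of arrival. The PyTorch sketch below illustrates one way such a ResNet + GRU architecture could be wired up; the feature-channel count, layer sizes, class count, and output heads are illustrative assumptions, not the authors' actual implementation (see the paper at the DOI above for details).

# Minimal, hypothetical sketch of a ResNet + GRU model for joint sound event
# detection (SED) and direction-of-arrival (DOA) estimation. All dimensions,
# layer counts, and feature choices here are assumptions for illustration.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with batch norm and an identity skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection


class SeldResNetGRU(nn.Module):
    """Residual CNN front end -> bidirectional GRU -> SED and DOA heads."""

    def __init__(self, in_channels=8, n_classes=11, n_mels=64, hidden=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(
            ResidualBlock(64), nn.MaxPool2d((1, 4)),  # pool frequency only,
            ResidualBlock(64), nn.MaxPool2d((1, 4)),  # keep time resolution
        )
        freq_out = n_mels // 16                        # 64 -> 4 after pooling
        self.gru = nn.GRU(64 * freq_out, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.sed_head = nn.Linear(2 * hidden, n_classes)      # per-class activity
        self.doa_head = nn.Linear(2 * hidden, 2 * n_classes)  # azimuth, elevation

    def forward(self, x):
        # x: (batch, channels, time, mel) spectrogram-like input features
        h = self.blocks(self.stem(x))
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)  # -> (batch, time, feat)
        h, _ = self.gru(h)
        sed = torch.sigmoid(self.sed_head(h))   # per-frame event probabilities
        doa = self.doa_head(h)                  # per-frame angle regression
        return sed, doa


if __name__ == "__main__":
    model = SeldResNetGRU()
    feats = torch.randn(2, 8, 100, 64)  # 2 clips, 8 feature channels, 100 frames
    sed, doa = model(feats)
    print(sed.shape, doa.shape)         # (2, 100, 11) and (2, 100, 22)
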
Appears in Collections: Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019)

Files in This Item:
File | Size | Format
DCASE2019Workshop_Ranjan_40.pdf | 671.76 kB | Adobe PDF

