Full metadata record
dc.contributor.author: Drossos, Konstantinos
dc.contributor.author: Gharib, Shayan
dc.contributor.author: Magron, Paul
dc.contributor.author: Virtanen, Tuomas
dc.date.accessioned: 2019-10-24T01:50:14Z
dc.date.available: 2019-10-24T01:50:14Z
dc.date.issued: 2019-10
dc.identifier.citation: K. Drossos, S. Gharib, P. Magron and T. Virtanen, "Language Modelling for Sound Event Detection with Teacher Forcing and Scheduled Sampling", in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), pages 59–63, New York University, NY, USA, Oct. 2019
dc.identifier.uri: http://hdl.handle.net/2451/60728
dc.description.abstract: A sound event detection (SED) method typically takes as input a sequence of audio frames and predicts the activities of sound events in each frame. In real-life recordings, sound events exhibit temporal structure: for instance, a "car horn" will likely be followed by a "car passing by". While this temporal structure is widely exploited in sequence prediction tasks (e.g., machine translation) through language models (LMs), it is not satisfactorily modeled in SED. In this work we propose a method that allows a recurrent neural network (RNN) to learn an LM for the SED task. The method conditions the input of the RNN on the class activities at the previous time step. We evaluate our method using F1 score and error rate (ER) on three publicly available datasets: the TUT-SED Synthetic 2016, TUT Sound Events 2016, and TUT Sound Events 2017 datasets. With our method, the results show an increase of 6% and 3% in F1 score (higher is better) and a decrease of 3% and 2% in ER (lower is better) for the TUT Sound Events 2016 and 2017 datasets, respectively. In contrast, our method yields a decrease of 10% in F1 score and an increase of 11% in ER for the TUT-SED Synthetic 2016 dataset.
dc.rights: Distributed under the terms of the Creative Commons Attribution 4.0 International (CC-BY) license.
dc.title: Language Modelling for Sound Event Detection with Teacher Forcing and Scheduled Sampling
dc.type: Article
dc.identifier.DOI: https://doi.org/10.33682/1dze-8739
dc.description.firstPage: 59
dc.description.lastPage: 63
Appears in Collections: Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019)
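
The dc.description.abstract field above describes the core mechanism: the RNN input at each frame is conditioned on the class activities of the previous frame, with teacher forcing and scheduled sampling during training. The sketch below is only an illustration of that idea, not the authors' implementation; it assumes PyTorch, and all names and dimensions (SEDRnn, FEATURE_DIM, N_CLASSES, tf_prob) are hypothetical.

# Minimal sketch, assuming PyTorch; names and dimensions are hypothetical.
import torch
import torch.nn as nn

FEATURE_DIM = 40   # audio features per frame, e.g. log-mel bands (assumed)
N_CLASSES = 6      # number of sound event classes (assumed)
HIDDEN = 128       # RNN hidden size (assumed)

class SEDRnn(nn.Module):
    """RNN-based SED model whose input at step t also contains the
    class activities from step t-1 (the LM-style conditioning)."""

    def __init__(self):
        super().__init__()
        # GRU input = audio feature frame + previous-step class activities
        self.rnn = nn.GRUCell(FEATURE_DIM + N_CLASSES, HIDDEN)
        self.classifier = nn.Linear(HIDDEN, N_CLASSES)

    def forward(self, x, y_true=None, tf_prob=1.0):
        # x: (batch, T, FEATURE_DIM); y_true: (batch, T, N_CLASSES) or None
        batch, T, _ = x.shape
        h = x.new_zeros(batch, HIDDEN)
        prev = x.new_zeros(batch, N_CLASSES)  # no activities before frame 0
        outputs = []
        for t in range(T):
            h = self.rnn(torch.cat([x[:, t], prev], dim=-1), h)
            logits = self.classifier(h)
            outputs.append(logits)
            if y_true is not None and torch.rand(1).item() < tf_prob:
                # teacher forcing: feed the ground-truth activities
                prev = y_true[:, t]
            else:
                # scheduled sampling / inference: feed own binarized predictions
                prev = (torch.sigmoid(logits) > 0.5).float()
        return torch.stack(outputs, dim=1)  # (batch, T, N_CLASSES) logits

# Example with made-up shapes: one forward pass with 75% teacher forcing
# x = torch.randn(8, 100, FEATURE_DIM)
# y = torch.randint(0, 2, (8, 100, N_CLASSES)).float()
# logits = SEDRnn()(x, y_true=y, tf_prob=0.75)

During training, tf_prob would typically be annealed from 1.0 (pure teacher forcing) towards relying on the model's own predictions; the exact conditioning and schedule used by the authors are given in the PDF listed under "Files in This Item" below.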

Files in This Item:
DCASE2019Workshop_Drossos_30.pdf (598.06 kB, Adobe PDF)

