Florian Schmid, Khaled Koutini, Gerhard Widmer,
"Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models",
in IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 32, pp. 2227-2241, 2024
Original Title:
Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models
Language of Title:
English
Original Abstract:
The introduction of large-scale audio datasets, such as AudioSet, paved the way for Transformers to conquer the audio domain and replace CNNs as the state-of-the-art neural network architecture for many tasks. Audio Spectrogram Transformers are excellent at exploiting large datasets, creating powerful pre-trained models that surpass CNNs when fine-tuned on downstream tasks. However, current popular Audio Spectrogram Transformers are demanding in terms of computational complexity compared to CNNs. Recently, we have shown that, by employing Transformer-to-CNN Knowledge Distillation, efficient CNNs can catch up with and even outperform Transformers on large datasets. In this work, we extend this line of research and increase the capacity of efficient CNNs by introducing dynamic CNN blocks constructed from dynamic convolutions, a dynamic ReLU activation function, and Coordinate Attention. We show that these dynamic CNNs outperform traditional efficient CNNs, such as MobileNets, in terms of the performance-complexity trade-off on the task of audio tagging on the large-scale AudioSet. Our experiments further indicate that the proposed dynamic CNNs achieve competitive performance with Transformer-based models for end-to-end fine-tuning on downstream tasks while being much more computationally efficient.
Language of Abstract:
English
Journal:
IEEE/ACM Transactions on Audio, Speech, and Language Processing
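
For readers who want a concrete picture of the dynamic CNN block named in the abstract, below is a minimal PyTorch sketch of its three components: a dynamic convolution (an attention-weighted mixture of several candidate kernels), a dynamic ReLU (an input-conditioned piecewise-linear activation), and Coordinate Attention (factorized spatial re-weighting). All class names, layer sizes, and hyperparameters here are illustrative assumptions, not the authors' implementation; consult the paper for the exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv2d(nn.Module):
    # Attention-weighted mixture of `num_kernels` candidate conv kernels,
    # with per-example mixing coefficients predicted from the input.
    def __init__(self, in_ch, out_ch, k=3, num_kernels=4):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.weight = nn.Parameter(
            0.02 * torch.randn(num_kernels, out_ch, in_ch, k, k))
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))

    def forward(self, x):
        b, c, h, w = x.shape
        pi = self.attn(x).softmax(dim=1)  # (B, K) kernel mixing weights
        w_mix = torch.einsum("bk,koihw->boihw", pi, self.weight)
        w_mix = w_mix.reshape(b * self.out_ch, self.in_ch, self.k, self.k)
        # Grouped-conv trick: apply all B per-example kernels in one call.
        out = F.conv2d(x.reshape(1, b * c, h, w), w_mix,
                       padding=self.k // 2, groups=b)
        return out.reshape(b, self.out_ch, h, w)


class DynamicReLU(nn.Module):
    # y = max_j(a_j(x) * x + b_j(x)): per-channel slopes and offsets are
    # predicted from globally pooled features.
    def __init__(self, channels, branches=2, reduction=4):
        super().__init__()
        self.branches = branches
        self.coef = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, 2 * branches * channels),
            nn.Sigmoid())
        init = torch.zeros(2 * branches)
        init[0] = 1.0  # start close to a plain ReLU: a1=1, a2=b1=b2=0
        self.register_buffer("init", init)

    def forward(self, x):
        b, c, _, _ = x.shape
        theta = self.coef(x).view(b, c, 2 * self.branches)
        theta = 2.0 * (theta - 0.5) + self.init  # residuals around the init
        a = theta[..., :self.branches, None, None]    # slopes  (B, C, J, 1, 1)
        off = theta[..., self.branches:, None, None]  # offsets (B, C, J, 1, 1)
        return (a * x.unsqueeze(2) + off).max(dim=2).values


class CoordinateAttention(nn.Module):
    # Pools along height and width separately, then re-weights each axis.
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        xh = x.mean(dim=3, keepdim=True)                      # (B, C, H, 1)
        xw = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = F.relu(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = y.split([h, w], dim=2)
        ah = self.conv_h(yh).sigmoid()                        # (B, C, H, 1)
        aw = self.conv_w(yw.permute(0, 1, 3, 2)).sigmoid()    # (B, C, 1, W)
        return x * ah * aw


class DynamicBlock(nn.Module):
    # One block: dynamic conv -> BN -> dynamic ReLU -> Coordinate Attention.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = DynamicConv2d(in_ch, out_ch)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = DynamicReLU(out_ch)
        self.ca = CoordinateAttention(out_ch)

    def forward(self, x):
        return self.ca(self.act(self.bn(self.conv(x))))


# Usage: a batch of two 128-mel-bin spectrograms treated as 1-channel images.
x = torch.randn(2, 1, 128, 100)
print(DynamicBlock(1, 32)(x).shape)  # torch.Size([2, 32, 128, 100])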