Please use this identifier to cite or link to this record: https://hdl.handle.net/10316/94095
Title: Multi-Modal Music Emotion Recognition: A New Dataset, Methodology and Comparative Analysis
Autor: Panda, Renato Eduardo Silva 
Malheiro, Ricardo 
Rocha, Bruno 
Oliveira, António Pedro
Paiva, Rui Pedro 
Keywords: music emotion recognition; machine learning; multi-modal analysis
Date: 2013
Project: info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT/MOODetector - A System for Mood-based Classification and Retrieval of Audio Music
Journal, periodical, book or event title: 10th International Symposium on Computer Music Multidisciplinary Research (CMMR 2013)
Place of publication or event: Marseille, France
Abstract: We propose a multi-modal approach to the music emotion recognition (MER) problem, combining information from distinct sources, namely audio, MIDI and lyrics. We introduce a methodology for the automatic creation of a multi-modal music emotion dataset resorting to the AllMusic database, based on the emotion tags used in the MIREX Mood Classification Task. Then, MIDI files and lyrics corresponding to a subset of the obtained audio samples were gathered. The dataset was organized into the same 5 emotion clusters defined in MIREX. From the audio data, 177 standard features and 98 melodic features were extracted. As for MIDI, 320 features were collected. Finally, 26 lyrical features were extracted. We experimented with several supervised learning and feature selection strategies to evaluate the proposed multi-modal approach. Employing only standard audio features, the best attained performance was 44.3% (F-measure). With the multi-modal approach, results improved to 61.1%, using only 19 multi-modal features. Melodic audio features were particularly important to this improvement.
URI: https://hdl.handle.net/10316/94095
Rights: openAccess
Appears in collections: I&D CISUC - Artigos em Livros de Actas
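
The abstract describes an early-fusion pipeline: features extracted separately from audio, MIDI and lyrics are concatenated, a small multi-modal subset is selected, and a supervised classifier assigns each song to one of the five MIREX mood clusters. The following is a minimal, hypothetical sketch of such a pipeline in scikit-learn; the synthetic feature matrices, the ANOVA-based selector, the SVM classifier and the cross-validation setup are illustrative assumptions, not the authors' actual implementation or results.

# Illustrative sketch only (not the paper's code): early fusion of audio,
# MIDI and lyric feature vectors, selection of a small multi-modal subset,
# and classification into the 5 MIREX mood clusters with an SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_songs = 200                                    # hypothetical dataset size
X_audio = rng.normal(size=(n_songs, 177 + 98))   # standard + melodic audio features
X_midi = rng.normal(size=(n_songs, 320))         # MIDI features
X_lyrics = rng.normal(size=(n_songs, 26))        # lyric features
y = rng.integers(1, 6, size=n_songs)             # MIREX mood clusters 1..5

# Early fusion: one concatenated feature vector per song.
X = np.hstack([X_audio, X_midi, X_lyrics])

# Keep a small multi-modal subset (the paper reports using only 19 features);
# ANOVA ranking here stands in for whichever selection strategy was used.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=19),
    SVC(kernel="rbf", C=1.0),
)

# Cross-validated predictions and a macro-averaged F-measure, as in MER studies
# that report per-cluster F-measure.
y_pred = cross_val_predict(model, X, y, cv=10)
print("Macro F-measure: %.3f" % f1_score(y, y_pred, average="macro"))

With real extracted features in place of the random matrices, the same structure supports comparing single-modality baselines (audio only) against the fused feature set, which is the comparison summarized in the abstract.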

This record is protected by a Creative Commons License.