Please use this identifier to cite or link to this item:
https://hdl.handle.net/10316/94353
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Malheiro, Ricardo | - |
dc.contributor.author | Panda, Renato Eduardo Silva | - |
dc.contributor.author | Gomes, Paulo | - |
dc.contributor.author | Paiva, Rui Pedro Pinto de Carvalho e | - |
dc.date.accessioned | 2021-04-17T21:29:01Z | - |
dc.date.available | 2021-04-17T21:29:01Z | - |
dc.date.issued | 2018 | - |
dc.identifier.issn | 1949-3045 | pt |
dc.identifier.uri | https://hdl.handle.net/10316/94353 | - |
dc.description.abstract | This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state-of-the-art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground-truth dataset containing 180 song lyrics, annotated according to Russell's emotion model. We conducted four types of experiments: regression and classification by quadrant, arousal and valence categories. Compared to the state-of-the-art features (n-grams, the baseline), adding other features, including the novel ones, improved the F-measure from 69.9%, 82.7% and 85.6% to 80.1%, 88.3% and 90%, respectively, for the three classification experiments. To study the relation between features and emotions (quadrants), we performed experiments to identify the best features for describing and discriminating each quadrant. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, achieving a 73.6% F-measure in the classification by quadrants. We also conducted experiments to identify interpretable rules that show the relation between features and emotions, as well as the relations among features. Regarding regression, the results show that, compared to similar studies for audio, we achieve similar performance for arousal and much better performance for valence. | pt |
dc.language.iso | eng | pt |
dc.publisher | IEEE | pt |
dc.relation | info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT/MOODetector - A System for Mood-based Classification and Retrieval of Audio Music | pt |
dc.rights | embargoedAccess | pt |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | pt |
dc.subject | lyrics feature extraction | pt |
dc.subject | lyrics music | pt |
dc.subject | lyrics music classification | pt |
dc.subject | lyrics music emotion recognition | pt |
dc.subject | music information retrieval | pt |
dc.title | Emotionally-Relevant Features for Classification and Regression of Music Lyrics | pt |
dc.type | article | - |
degois.publication.firstPage | 240 | pt |
degois.publication.lastPage | 254 | pt |
degois.publication.issue | 2 | pt |
degois.publication.title | IEEE Transactions on Affective Computing – TAFFC | pt |
dc.relation.publisherversion | http://ieeexplore.ieee.org/document/7536113/ | pt |
dc.peerreviewed | yes | pt |
dc.identifier.doi | 10.1109/TAFFC.2016.2598569 | pt |
degois.publication.volume | 9 | pt |
dc.date.embargo | 2018-06-30 | * |
uc.date.periodoEmbargo | 180 | pt |
item.fulltext | With full text | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.languageiso639-1 | en | - |
item.openairetype | article | - |
item.cerifentitytype | Publications | - |
item.grantfulltext | open | - |
crisitem.project.grantno | info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT/MOODetector - A System for Mood-based Classification and Retrieval of Audio Music | - |
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | - |
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | - |
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | - |
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | - |
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | - |
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | - |
crisitem.author.orcid | 0000-0002-3010-2732 | - |
crisitem.author.orcid | 0000-0003-2539-5590 | - |
crisitem.author.orcid | 0000-0003-3215-3960 | - |
Appears in Collections: | I&D CISUC - Artigos em Revistas Internacionais |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Malheiro et al. - 2018 - Emotionally-Relevant Features for Classification and Regression of Music Lyrics.pdf | | 629.22 kB | Adobe PDF | View/Open |
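The quadrant classification described in the abstract rests on Russell's valence–arousal model, in which the two-dimensional emotion plane is split into four quadrants. A minimal illustrative sketch of that mapping, assuming scores normalized to [-1, 1] and a threshold at 0 (the exact annotation procedure used in the paper may differ):

```python
# Sketch: map a (valence, arousal) pair to one of Russell's four quadrants.
# Thresholding at 0 and the example emotion labels are illustrative assumptions.

def russell_quadrant(valence: float, arousal: float) -> int:
    """Return the Russell quadrant (1-4) for a (valence, arousal) pair."""
    if valence >= 0 and arousal >= 0:
        return 1  # positive valence, high arousal (e.g. happy, excited)
    if valence < 0 and arousal >= 0:
        return 2  # negative valence, high arousal (e.g. angry, anxious)
    if valence < 0:
        return 3  # negative valence, low arousal (e.g. sad, depressed)
    return 4      # positive valence, low arousal (e.g. calm, relaxed)

print(russell_quadrant(0.7, 0.5))   # -> 1
print(russell_quadrant(-0.4, 0.6))  # -> 2
```

Classification by quadrant then reduces to predicting this four-way label from the lyric features; classification by arousal or valence category uses only one of the two axes.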
SCOPUS™ Citations: 44 (checked on Oct 14, 2024)

Web of Science™ Citations: 28 (checked on Oct 2, 2024)

Page view(s): 235 (checked on Oct 15, 2024)

Download(s): 345 (checked on Oct 15, 2024)
This item is licensed under a Creative Commons License