Use this identifier to reference this record: https://hdl.handle.net/10316/95164
DC Field | Value | Language
dc.contributor.author | Panda, Renato | -
dc.contributor.author | Rocha, Bruno | -
dc.contributor.author | Paiva, Rui Pedro | -
dc.date.accessioned | 2021-07-04T18:05:25Z | -
dc.date.available | 2021-07-04T18:05:25Z | -
dc.date.issued | 2013-10-15 | -
dc.identifier.uri | https://hdl.handle.net/10316/95164 | -
dc.description.abstract | We propose an approach to the dimensional music emotion recognition (MER) problem, combining both standard and melodic audio features. The dataset proposed by Yang is used, which consists of 189 audio clips. From the audio data, 458 standard features and 98 melodic features were extracted. We experimented with several supervised learning and feature selection strategies to evaluate the proposed approach. Employing only standard audio features, the best attained performance was 63.2% and 35.2% for arousal and valence prediction, respectively (R2 statistics). Combining standard audio with melodic features, results improved to 67.4 and 40.6%, for arousal and valence, respectively. To the best of our knowledge, these are the best results attained so far with this dataset. | pt
dc.description.sponsorship | This work was supported by the MOODetector project (PTDC/EIA-EIA/102185/2008), financed by the Fundação para a Ciência e a Tecnologia (FCT) and Programa Operacional Temático Factores de Competitividade (COMPETE) - Portugal. | pt
dc.language.iso | eng | pt
dc.relation | info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT/MOODetector - A System for Mood-based Classification and Retrieval of Audio Music | pt
dc.rights | openAccess | pt
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | pt
dc.subject | music emotion recognition | pt
dc.subject | machine learning | pt
dc.subject | regression | pt
dc.subject | standard audio features | pt
dc.subject | melodic features | pt
dc.title | Dimensional Music Emotion Recognition: Combining Standard and Melodic Audio Features | pt
dc.type | article | pt
degois.publication.firstPage | 583 | pt
degois.publication.lastPage | 593 | pt
degois.publication.location | Marseille, France | pt
degois.publication.title | 10th International Symposium on Computer Music Multidisciplinary Research – CMMR 2013 | pt
dc.peerreviewed | yes | pt
dc.date.embargo | 2013-10-15 | *
uc.date.periodoEmbargo | 0 | pt
item.fulltext | With full text | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.languageiso639-1 | en | -
item.openairetype | article | -
item.cerifentitytype | Publications | -
item.grantfulltext | open | -
crisitem.project.grantno | info:eu-repo/grantAgreement/FCT/5876-PPCDTI/102185/PT/MOODetector - A System for Mood-based Classification and Retrieval of Audio Music | -
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | -
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | -
crisitem.author.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | -
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | -
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | -
crisitem.author.parentresearchunit | Faculty of Sciences and Technology | -
crisitem.author.orcid | 0000-0003-2539-5590 | -
crisitem.author.orcid | 0000-0003-1643-667X | -
crisitem.author.orcid | 0000-0003-3215-3960 | -
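
The abstract above describes a supervised regression setup: standard and melodic audio features are combined to predict arousal and valence, with performance reported as the R2 statistic. The following is a minimal, hypothetical sketch of such a pipeline, assuming scikit-learn, randomly generated placeholder feature matrices matching the sizes mentioned (189 clips, 458 standard plus 98 melodic features), and a generic feature-selection step (SelectKBest) with an SVR regressor; the paper's actual feature selection strategy and model configuration are not reproduced here.

```python
# Minimal sketch (not the authors' exact pipeline): regression over combined
# standard + melodic audio features, evaluated with cross-validated R^2.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical placeholder data: 189 clips, 458 standard + 98 melodic
# features, and per-clip arousal annotations in [-1, 1].
X_standard = rng.normal(size=(189, 458))
X_melodic = rng.normal(size=(189, 98))
X = np.hstack([X_standard, X_melodic])   # combined feature set
y_arousal = rng.uniform(-1, 1, size=189)

# Generic feature selection + SVR; the paper's actual selection strategy
# and regressor settings may differ.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_regression, k=100),    # keep 100 features (assumed value)
    SVR(kernel="rbf", C=1.0),
)

# 10-fold cross-validated R^2, the metric reported in the abstract.
scores = cross_val_score(model, X, y_arousal, cv=10, scoring="r2")
print(f"mean R^2: {scores.mean():.3f}")
```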
Appears in collections: I&D CISUC - Artigos em Livros de Actas
This record is protected by a Creative Commons License.