Please use this identifier to cite or link to this item: https://hdl.handle.net/10316/97343
DC Field | Value | Language
dc.contributor.advisor | Ribeiro, Bernardete Martins | -
dc.contributor.advisor | Pimentel, André | -
dc.contributor.advisor | Frazão, Xavier | -
dc.contributor.author | Laranjeira, Ana Filipa | -
dc.date.accessioned | 2022-01-24T10:31:58Z | -
dc.date.available | 2022-01-24T10:31:58Z | -
dc.date.issued | 2016-09 | -
dc.identifier.uri | https://hdl.handle.net/10316/97343 | -
dc.description | Master's dissertation in Informatics Engineering presented to the Faculdade de Ciências e Tecnologia da Universidade de Coimbra. | pt
dc.description.abstract | The emerging success of digital social media has had an impact on several fields, ranging from science to economics and business. This has been particularly relevant for the marketing industry, which centers its activity on digital social interactions between brands and end-consumers in order to increase market competitiveness. There is therefore a vested interest in technology for detecting and recognizing emotions from facial expressions. This work presents a detailed study of the concepts and existing methodologies behind Automatic Facial Expression Recognition (AFER) systems, together with an evaluation of their effectiveness. The most relevant models were also tested in order to identify the one best suited to facial expression recognition. A comparison was made between traditional methodologies and Deep Learning, a recent trend in Pattern Recognition. Both domains have challenging inner workings: traditional methods depend strongly on the input, so any transformation of the dataset forces the model to be readjusted, whereas Deep Learning methods adapt better to variation but tuning their hyperparameters can be laborious. Here, we explored the value of Deep Learning by focusing on recent technological breakthroughs, particularly Convolutional Neural Networks (CNN). Incremental steps were taken to arrive at the best network architecture. After preliminary experiments with more recent and complex networks (such as GoogLeNet and AlexNet), we settled on LeNet-5 as a baseline; its simplicity was better suited to the system constraints (dataset size, face dimensions, and composition). The Extended Cohn-Kanade dataset was chosen to test the proposed CNN model. In an attempt to improve the results, we also augmented the dataset with random perturbations drawn from a wide set, including skew, translation, scaling, and horizontal flipping (see the illustrative sketch below the metadata record). Our refinements to the model led to 90% overall accuracy with static images as input. To further validate the results, we built and present a real-time framework. Based on the data collected, the deep model emerges as a promising approach for AFER systems. | pt
dc.language.iso | eng | pt
dc.rights | embargoedAccess | pt
dc.subject | Facial expression recognition | pt
dc.subject | prototypic-expressions | pt
dc.subject | Machine Learning | pt
dc.subject | Deep Learning | pt
dc.subject | Convolutional Neural Networks | pt
dc.title | Smart Monitor Health System: Face Expressions Recognition | pt
dc.type | masterThesis | pt
degois.publication.location | Coimbra | pt
dc.date.embargo | 2022-08-31 | *
thesis.degree.grantor | 00500::Universidade de Coimbra | pt
thesis.degree.name | Mestrado em Engenharia Informática | pt
uc.rechabilitacaoestrangeira | no | pt
uc.date.periodoEmbargo | 2190 | pt
item.grantfulltext | open | -
item.cerifentitytype | Publications | -
item.languageiso639-1 | en | -
item.openairetype | masterThesis | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.fulltext | Com Texto completo | -
crisitem.advisor.researchunit | CISUC - Centre for Informatics and Systems of the University of Coimbra | -
crisitem.advisor.parentresearchunit | Faculty of Sciences and Technology | -
crisitem.advisor.orcid | 0000-0002-9770-7672 | -
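
Note on the augmentation described in the abstract: the record does not disclose how the random perturbations were implemented, so the following is only a minimal illustrative sketch, assuming Python with torchvision (not necessarily the author's tooling) and assumed parameter ranges, of skew, translation, scale, and horizontal-flip perturbations applied to face images before a LeNet-5-style network.

    # Illustrative sketch only: the library choice and parameter ranges below are
    # assumptions for demonstration, not taken from the thesis.
    import torchvision.transforms as T

    augment = T.Compose([
        T.RandomHorizontalFlip(p=0.5),       # horizontal flip
        T.RandomAffine(
            degrees=0,                       # no rotation beyond the listed perturbations
            translate=(0.1, 0.1),            # translation (assumed fraction of image size)
            scale=(0.9, 1.1),                # scale jitter (assumed range)
            shear=10,                        # skew, in degrees (assumed magnitude)
        ),
        T.Grayscale(num_output_channels=1),  # CK+ face images are commonly used as grayscale
        T.Resize((32, 32)),                  # LeNet-5's canonical 32x32 input size
        T.ToTensor(),
    ])

Each pass of a training image through such a pipeline yields a randomly perturbed variant, which is how this kind of augmentation enlarges a small dataset like CK+ without collecting new samples.
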
Appears in Collections: FCTUC Eng.Informática - Teses de Mestrado
Files in This Item:
File | Description | Size | Format
main.pdf | - | 10.43 MB | Adobe PDF

Page view(s): 71 (checked on May 7, 2024)
Download(s): 30 (checked on May 7, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.