Please use this identifier to cite or link to this item: https://hdl.handle.net/10316/103499
DC Field: Value
dc.contributor.author: Petmezas, Georgios
dc.contributor.author: Cheimariotis, Grigorios-Aris
dc.contributor.author: Stefanopoulos, Leandros
dc.contributor.author: Rocha, Bruno
dc.contributor.author: Paiva, Rui Pedro
dc.contributor.author: Katsaggelos, Aggelos K.
dc.contributor.author: Maglaveras, Nicos
dc.date.accessioned: 2022-11-16T11:22:02Z
dc.date.available: 2022-11-16T11:22:02Z
dc.date.issued: 2022-02-06
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://hdl.handle.net/10316/103499
dc.description.abstract: Respiratory diseases constitute one of the leading causes of death worldwide and directly affect the patient's quality of life. Early diagnosis and patient monitoring, which conventionally include lung auscultation, are essential for the efficient management of respiratory diseases. Manual lung sound interpretation is a subjective and time-consuming process that requires considerable medical expertise. The capabilities of deep learning can be exploited to design robust lung sound classification models. In this paper, we propose a novel hybrid neural model that uses the focal loss (FL) function to deal with training data imbalance. Features initially extracted from short-time Fourier transform (STFT) spectrograms via a convolutional neural network (CNN) are fed into a long short-term memory (LSTM) network that captures the temporal dependencies between data and classifies four types of lung sounds: normal, crackles, wheezes, and both crackles and wheezes. The model was trained and tested on the ICBHI 2017 Respiratory Sound Database and achieved state-of-the-art results under three different data-splitting strategies: sensitivity 47.37%, specificity 82.46%, score 64.92% and accuracy 73.69% for the official 60/40 split; sensitivity 52.78%, specificity 84.26%, score 68.52% and accuracy 76.39% using interpatient 10-fold cross-validation; and sensitivity 60.29% and accuracy 74.57% using leave-one-out cross-validation.
dc.language.iso: eng
dc.publisher: MDPI
dc.relation: EU-WELMO project (project number 210510516)
dc.relation: Ph.D. scholarship SFRH/BD/135686/2018
dc.relation: Ph.D. scholarship 2020.04927.BD
dc.rights: openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: lung sounds
dc.subject: crackles
dc.subject: wheezes
dc.subject: STFT
dc.subject: CNN
dc.subject: LSTM
dc.subject: focal loss
dc.subject: COPD
dc.subject: asthma
dc.subject.mesh: Auscultation
dc.subject.mesh: Humans
dc.subject.mesh: Lung
dc.subject.mesh: Neural Networks, Computer
dc.subject.mesh: Quality of Life
dc.subject.mesh: Respiratory Sounds
dc.title: Automated Lung Sound Classification Using a Hybrid CNN-LSTM Network and Focal Loss Function
dc.type: article
degois.publication.firstPage: 1232
degois.publication.issue: 3
degois.publication.title: Sensors
dc.peerreviewed: yes
dc.identifier.doi: 10.3390/s22031232
degois.publication.volume: 22
dc.date.embargo: 2022-02-06
uc.date.periodoEmbargo: 0
item.grantfulltext: open
item.cerifentitytype: Publications
item.languageiso639-1: en
item.openairetype: article
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.fulltext: With full text
crisitem.author.researchunit: CISUC - Centre for Informatics and Systems of the University of Coimbra
crisitem.author.researchunit: CISUC - Centre for Informatics and Systems of the University of Coimbra
crisitem.author.parentresearchunit: Faculty of Sciences and Technology
crisitem.author.parentresearchunit: Faculty of Sciences and Technology
crisitem.author.orcid: 0000-0003-1643-667X
crisitem.author.orcid: 0000-0003-3215-3960
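The focal loss named in the abstract re-weights cross-entropy so that easy, well-classified examples contribute little and training concentrates on the hard, minority-class lung sounds. A minimal NumPy sketch of the standard multi-class focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t) is given below; the gamma and alpha values are illustrative assumptions, not the settings reported in the paper.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    probs   : (N, C) array of predicted class probabilities (rows sum to 1)
    targets : (N,) array of integer class labels
    gamma   : focusing parameter; gamma = 0 recovers plain cross-entropy
    alpha   : optional (C,) per-class weights for extra imbalance handling
    """
    eps = 1e-12                                        # guards log(0)
    p_t = probs[np.arange(len(targets)), targets]      # probability of the true class
    a_t = 1.0 if alpha is None else np.asarray(alpha)[targets]
    return float(np.mean(-a_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)))

# Two toy predictions over four classes (e.g. normal, crackles,
# wheezes, both): one confident and correct, one uncertain.
probs = np.array([[0.9, 0.05, 0.03, 0.02],
                  [0.3, 0.40, 0.20, 0.10]])
targets = np.array([0, 1])

ce = focal_loss(probs, targets, gamma=0.0)  # plain cross-entropy
fl = focal_loss(probs, targets, gamma=2.0)  # focal loss down-weights the easy example
```

With gamma = 2 the confident prediction (p_t = 0.9) is scaled by (1 - 0.9)^2 = 0.01, so the average loss is dominated by the uncertain example; this is the mechanism the model relies on to cope with the ICBHI class imbalance.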
Appears in Collections: I&D CISUC - Artigos em Revistas Internacionais

SCOPUS™ Citations: 60 (checked on Apr 29, 2024)
Web of Science™ Citations: 36 (checked on May 2, 2024)
Page view(s): 79 (checked on May 7, 2024)
Download(s): 124 (checked on May 7, 2024)

This item is licensed under a Creative Commons License.