Use this identifier to reference this record: https://hdl.handle.net/10316/103238
Title: On the clinical acceptance of black-box systems for EEG seizure prediction
Authors: Pinto, Mauro F. 
Leal, Adriana Costa 
Lopes, Fábio 
Pais, José
Dourado, António 
Sales, Francisco
Martins, Pedro 
Teixeira, César A. 
Keywords: actor-network theory; grounded theory; interpretability/explainability; machine learning; seizure prediction
Date: 2022
Journal, Periodical, Book, or Event Title: Epilepsia Open
Volume: 7
Issue: 2
Abstract: Seizure prediction may be the solution for epileptic patients whose seizures are not controlled by drugs or surgery. Despite 46 years of research, few devices/systems have undergone clinical trials and/or been commercialized, and the most recent state-of-the-art approaches, such as neural network models, are not used to their full potential. The latter demonstrates the existence of social barriers to new methodologies due to data bias, patient safety, and legislation compliance. In the form of a literature review, we performed a qualitative study to analyze the seizure prediction ecosystem and identify these social barriers. With Grounded Theory, we drew hypotheses from the data, while with Actor-Network Theory we considered that technology shapes social configurations and interests, being fundamental in healthcare. We obtained a social network that describes the ecosystem and propose research guidelines aimed at clinical acceptance. Our most relevant conclusion is the need for model explainability, but not necessarily intrinsically interpretable models, in the case of seizure prediction. Accordingly, we argue that it is possible to develop robust prediction models, including black-box systems to some extent, while avoiding data bias, ensuring patient safety, and still complying with legislation, provided they can deliver human-comprehensible explanations. Due to skepticism and patient-safety concerns, many authors advocate the use of transparent models, which may limit their performance and potential. Our study highlights a possible path, through model explainability, to overcome these barriers while allowing the use of more computationally robust models.
URI: https://hdl.handle.net/10316/103238
ISSN: 2470-9239
DOI: 10.1002/epi4.12597
Rights: openAccess
Appears in Collections: I&D CISUC - Artigos em Revistas Internacionais


SCOPUS™ citations: 7 (as of 25 Dec 2023)
Page views: 138 (as of 8 May 2024)
Downloads: 155 (as of 8 May 2024)


This record is protected by a Creative Commons License.