Please use this identifier to cite or link to this item: https://hdl.handle.net/10316/82847
Title: Finding the Critical Feature Dimension of Big Datasets
Other Titles: Procura do Tamanho Crítico de Amostragem de Grandes Conjuntos de Dados
Authors: Silva, José Miguel Parreira e 
Advisors: Ribeiro, Bernardete Martins
Teixeira, César Alexandre Domingues
Keywords: Big Data; Critical Sample; Data Mining
Issue Date: 14-Jul-2017
Serial title, monograph or event: Finding the Critical Feature Dimension of Big Datasets
Place of publication or event: DEI-FCTUC
Abstract: Big Data, allied to the Internet of Things, nowadays provides a powerful resource that various organizations increasingly exploit for applications ranging from decision support, predictive and prescriptive analytics, to knowledge and intelligence discovery. In analytics and data mining processes, it is usually desirable to have as much data as possible, though it is often more important that the data be of high quality, which raises two of the most important problems when handling large datasets: sample and feature selection. This work addresses the sampling problem and presents a heuristic method to find the “critical sampling” of big datasets. The critical sampling size of a dataset is defined as the minimum number of examples required for a given data analytic task to achieve satisfactory performance. The problem is very important in data mining, since the size of a dataset directly relates to the cost of executing the data mining task. Since determining the optimal solution to the Critical Sampling Size problem is intractable, this dissertation tests a heuristic method in order to assess its ability to find practical solutions. Results show an apparent Critical Sampling Size for all the tested datasets, considerably smaller than their original sizes. Furthermore, the proposed heuristic method shows promising utility, providing a practical solution for finding a useful critical sample for data mining tasks.
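The abstract describes the heuristic only at a high level. As a minimal, hypothetical sketch of the underlying idea (not the dissertation's actual procedure), one way to estimate a critical sampling size is to grow a training sample geometrically and stop once the gain in model performance falls below a tolerance. All function names, parameters, and the choice of classifier below are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def critical_sample_size(X, y, start=500, growth=2.0, tol=0.005):
    # Hold out a fixed test set so every sample size is scored the same way.
    X_pool, X_test, y_pool, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    prev_acc, size = 0.0, start
    while True:
        n = min(size, len(X_pool))
        model = DecisionTreeClassifier(random_state=0)
        model.fit(X_pool[:n], y_pool[:n])
        acc = accuracy_score(y_test, model.predict(X_test))
        # Stop when more data gives a negligible gain, or the pool is exhausted.
        if acc - prev_acc < tol or n == len(X_pool):
            return n, acc
        prev_acc, size = acc, int(size * growth)

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
n, acc = critical_sample_size(X, y)
print(f"apparent critical sampling size: {n} examples (test accuracy {acc:.3f})")

Under these assumptions, the returned size plays the role of the “apparent Critical Sampling Size”: the smallest tried sample whose performance is within the tolerance of the next larger one.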
Description: Master's dissertation in Informatics Engineering presented to the Faculdade de Ciências e Tecnologia
URI: https://hdl.handle.net/10316/82847
Rights: openAccess
Appears in Collections: UC - Dissertações de Mestrado

Files in This Item:
File              Size     Format
dissertation.pdf  2.09 MB  Adobe PDF



This item is licensed under a Creative Commons License.