Study of the performance of the K* Algorithm in International Databases
Keywords:
Classification, K* algorithm, experimental, accuracy, data
Abstract
This paper presents an experimental study of the K* algorithm, comparing it with five of the top ten data mining algorithms identified by the IEEE International Conference on Data Mining (ICDM): C4.5, SVM, kNN, Naive Bayes, and CART. The experimental results show that the K* algorithm performs satisfactorily in comparison with these approaches.
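The comparison described above can be sketched with the WEKA toolkit cited in the references; the Java snippet below is only an illustration of such a setup, not the authors' actual experimental code, and the dataset file, seed, and fold count are assumptions. K* is available in WEKA as KStar, C4.5 as J48, kNN as IBk, and SVM as SMO; CART (SimpleCart) requires an additional WEKA package and is omitted here.

// Illustrative sketch: cross-validated accuracy of K* versus other WEKA classifiers.
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.SMO;
import weka.classifiers.lazy.IBk;
import weka.classifiers.lazy.KStar;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

import java.util.Random;

public class KStarComparison {
    public static void main(String[] args) throws Exception {
        // Load an ARFF dataset; "iris.arff" is only a placeholder file name.
        Instances data = new DataSource("iris.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1); // last attribute is the class

        // Classifiers to compare: K*, kNN (IBk), Naive Bayes, C4.5 (J48), SVM (SMO).
        Classifier[] classifiers = {
            new KStar(), new IBk(), new NaiveBayes(), new J48(), new SMO()
        };

        for (Classifier c : classifiers) {
            Evaluation eval = new Evaluation(data);
            // 10-fold cross-validation with a fixed seed for reproducibility.
            eval.crossValidateModel(c, data, 10, new Random(1));
            System.out.printf("%-12s accuracy: %.2f%%%n",
                    c.getClass().getSimpleName(), eval.pctCorrect());
        }
    }
}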
References
Ian H. Witten, Eibe Frank, and Mark A. Hall. Data Mining: Practical Machine Learning Tools and Techniques. Elsevier, 2011.
Xindong Wu, Vipin Kumar, J. Ross Quinlan, Joydeep Ghosh, Qiang Yang, Hiroshi Motoda, Geoffrey J. McLachlan, Angus Ng, Bing Liu, Philip S. Yu, et al. Top 10 algorithms in data mining. Knowledge and Information Systems, 14(1):1–37, 2008.
David J. Hand and Keming Yu. Idiot's Bayes: not so stupid after all? International Statistical Review, 69(3):385–398, 2001.
P. Refaeilzadeh, L. Tang, and H. Liu. K-fold Cross-Validation. Arizona State University, 2008.
Thomas Cover and Peter Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, 1967.
L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Chapman and Hall (Wadsworth, Inc.), 1984.
J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
J. G. Cleary and L. E. Trigg. K*: An instance-based learner using an entropic distance measure. In Proceedings of the 12th International Conference on Machine Learning, pages 108–114, 1995.
Dayana C. Tejera Hernández. An experimental study of K* algorithm. Information Engineering and Electronic Business, 2:14–19, 2015.
Y. Uzun and G. Tezel. Rule learning with machine learning algorithms and artificial neural networks. Journal of Seljuk University Natural and Applied Science, 1(2), 2012.
E. Er. Identifying at-risk students using machine learning techniques: A case study with IS 100. International Journal of Machine Learning and Computing, 2(4):279, 2012.
Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. The WEKA data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10–18, 2009.
Janez Demšar. Statistical comparisons of classifiers over multiple data sets. The Journal of Machine Learning Research, 7:1–30, 2006.
R. A. Fisher. Statistical Methods and Scientific Inference (2nd edition). Hafner Publishing Co., New York, 1959.
Milton Friedman. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statistical Association, 32(200):675–701, 1937.
Milton Friedman. A comparison of alternative tests of significance for the problem of m rankings. The Annals of Mathematical Statistics, 11(1):86–92, 1940.
J. Alcalá-Fdez, et al. KEEL: a software tool to assess evolutionary algorithms for data mining problems. Soft Computing, 13:307–318, 2009.