Abstract
In this paper, we present an evaluation of a rule evaluation support method for post-processing mined results, using rule evaluation models based on objective indices. To reduce the cost of the rule evaluation task, one of the key procedures in data mining post-processing, we have developed a rule evaluation support method with rule evaluation models, which are learned from the objective indices of mined classification rules and a human expert's evaluation of each rule. We then evaluated the performance of learning algorithms for constructing rule evaluation models on the meningitis data mining result as an actual problem, and on ten rule sets from ten kinds of UCI datasets as artificial problems. With these results, we show the availability of our rule evaluation support method.
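The core idea described above can be sketched in a few lines of code: each mined rule is represented by a vector of objective indices, a human expert supplies a label for some rules, and a learning algorithm generalizes that mapping to unlabeled rules. The sketch below is a minimal illustration, not the paper's implementation; the index values, labels, and the choice of a nearest-neighbour learner are all assumptions (the paper compares learners such as C4.5 and SMO).

```python
# Minimal sketch of a rule evaluation model: learn a mapping from
# objective indices of mined rules (e.g. support, confidence, lift)
# to a human expert's evaluation label. All data are illustrative.
import math

# Training data: (objective-index vector, expert label) per rule.
train = [
    ((0.30, 0.90, 2.1), "interesting"),
    ((0.05, 0.95, 3.0), "interesting"),
    ((0.40, 0.50, 1.0), "not-interesting"),
    ((0.02, 0.40, 0.9), "not-interesting"),
]

def predict(indices, training, k=1):
    """Predict an expert label for an unseen rule by majority vote
    among its k nearest neighbours in objective-index space
    (a stand-in for the learning algorithms compared in the paper)."""
    neighbours = sorted(training, key=lambda ex: math.dist(ex[0], indices))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

new_rule = (0.28, 0.85, 2.0)  # indices of an unseen mined rule
print(predict(new_rule, train))  # -> interesting
```

In this framing, the expensive expert evaluation is needed only for the training rules; the learned model then supports evaluation of the remaining mined rules.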
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Abe, H., Tsumoto, S., Ohsaki, M., Yamaguchi, T. (2006). Evaluating Learning Models for a Rule Evaluation Support Method Based on Objective Indices. In: Greco, S., et al. Rough Sets and Current Trends in Computing. RSCTC 2006. Lecture Notes in Computer Science(), vol 4259. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11908029_71
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-47693-1
Online ISBN: 978-3-540-49842-1