A Review of Supervised Machine Learning Algorithms to Classify Donors for Charity


Pooja Mittal

Abstract

Machine Learning offers several supervised algorithms capable of making predictions from data collected from external or internal sources. Different supervised algorithms are applied, the best candidates are chosen based on preliminary results, and those candidates are then optimized further to find the best outcomes. Non-profit organizations survive on donations, and predicting an individual's income helps to estimate how large a donation that individual could make. It therefore helps decide whether or not to approach an individual, based on their income. In this paper, different algorithms are constructed and compared on their accuracy, complexity, speed, and overfitting in order to choose the best candidate model. The best optimized model predicts an individual's income efficiently and supports the decision of whether to reach out to them, which aids the non-profit organization's survival.
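The comparison workflow the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the dataset is a synthetic stand-in (the paper presumably uses census-style income data), and the four candidate models are assumed examples of the supervised algorithms being compared by cross-validated accuracy.

```python
# Sketch: compare candidate supervised classifiers by cross-validated
# accuracy, then select the best-performing one for further tuning.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Synthetic stand-in for an income dataset (binary target: high/low income).
X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Mean 5-fold cross-validation accuracy per candidate model.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}

# The top scorer becomes the candidate for hyperparameter optimization.
best = max(scores, key=scores.get)
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```

In practice the selection would also weigh training speed and the gap between training and validation accuracy (overfitting), not accuracy alone.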


