A MACHINE LEARNING BASED APPROACH FOR DETECTING NON-DETERMINISTIC TESTS AND ITS ANALYSIS IN MOBILE APPLICATION TESTING


Rajkumar J. Bhojan
Dr. D. Ramyachitra
Dr. K. Vivekanandan
Dr. Subramaniam Ganesan

Abstract

Hundreds of different mobile devices are on the market, produced by different vendors and equipped with different software features and hardware components. A mobile application may behave differently from one device to another because of variations in hardware or OS components. Since mobile applications are expected to be deployed and executed on diverse platforms, they must be validated on different mobile platforms and devices, and the peculiarities of mobile application development call for a quality assurance approach that addresses these challenges. Moreover, mobile test executions take a long time because the tests are run in many different environments and developers have to create complex tear-down procedures; such procedures are lengthy and far from perfect, leading to unpredictable failures. Regression testing is a crucial part of mobile app development: it checks that software changes do not break existing functionality. An important assumption of regression testing is that test outcomes are deterministic, i.e., a test is expected to either always pass or always fail for the same code under test. In real projects with multiple release cycles, however, some tests, often called flaky tests, have non-deterministic outcomes. These tests undermine the regression testing cycle because they make it difficult to rely on test results, significantly reducing trust in the tests and in the whole mobile app test automation effort. In this work, we train machine learning classifiers separately on each test result dataset and compare their performance across datasets. The proposed model predicts the result type of each test, Non-Deterministic or Deterministic, from regression suite results collected over several release cycles.
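To illustrate the classification step described above, the following is a minimal sketch, assuming per-test features aggregated over several regression release cycles and a random-forest classifier (in line with the random-forest literature cited below). The feature names (pass_rate, outcome_switches, avg_runtime_s, devices_run_on), the toy data, and the scikit-learn setup are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' implementation): train a random-forest
# classifier to label tests as Deterministic (0) or Non-Deterministic/flaky (1)
# from hypothetical per-test features derived from multi-cycle regression results.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-test feature table built from regression suite results.
data = pd.DataFrame({
    "pass_rate":        [1.00, 0.62, 0.95, 0.40, 1.00, 0.55],   # passes / total runs
    "outcome_switches": [0,    7,    1,    9,    0,    6],      # pass<->fail flips across cycles
    "avg_runtime_s":    [3.1,  12.4, 2.8,  15.0, 4.2,  11.1],   # mean execution time in seconds
    "devices_run_on":   [3,    8,    3,    9,    2,    7],      # distinct devices/platforms used
    "label":            [0,    1,    0,    1,    0,    1],      # 1 = non-deterministic (flaky)
})

X = data.drop(columns=["label"])
y = data["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

In practice, one such model would be trained per test result dataset (i.e., per release cycle or project) so that classification performance can be compared across datasets, as the abstract describes.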


Article Details

Section
Articles

References

S. Eldh, H. Hansson, S. Punnekkat, A. Pettersson, and D. Sundmark, "A framework for comparing efficiency, effectiveness and applicability of software testing techniques," in Testing: Academic and Industrial Conference - Practice And Research Techniques (TAIC PART 2006), Aug. 2006, pp. 159-170.

S. Allen, V. Graupera, and L. Lundrigan, "The smartphone is the new PC," in Pro Smartphone Cross-Platform Development, Apress, 2010, pp. 1-14.

Rajkumar J. Bhojan, K. Vivekanandan, and Subramaniam Ganesan, "Mobile Test Automation Framework for Automotive HMI," International Journal of Advanced Research in Computer and Communication Engineering, vol. 3, no. 1, January 2014.

Alex Gyori et al., "Reliable Testing: Detecting State-Polluting Tests to Prevent Test Dependency," ISSTA 2015, Baltimore, MD, USA. DOI: 10.1145/2771783.2771793.

Arash Vahabzadeh, Amin Milani Fard, Ali Mesbah, "An Empirical Study of Bugs in Test Code", ICSME 2015, Bremen, Germany, DOI: 978-1-4673-7532-0/15, IEEE-2015

August Shi, Alex Gyori, Owolabi Legunsen, and Darko Marinov, "Detecting Assumptions on Deterministic Implementations of Non-Deterministic Specifications," IEEE International Conference on Software Testing, Verification and Validation (ICST), 2016. DOI: 10.1109/ICST.2016.40.

Baijian Yang and Tonglin Zhang, "A Scalable Feature Selection and Model Updating Approach for Big Data Machine Learning," IEEE International Conference on Smart Cloud, 2016. DOI: 10.1109/SmartCloud.2016.32.

I. Guyon and A. Elisseeff, "An introduction to variable and feature selection," The Journal of Machine Learning Research, vol. 3, pp. 1157-1182, 2003.

Lili Li, Jiancheng Lv, and Zhang Yi, "A non-negative representation learning algorithm for selecting neighbors," Machine Learning, vol. 102, no. 2, pp. 133-153, Feb. 2016.

J. Han, M. Kamber, and J. Pei, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2006.

L. Breiman, "Random forests," Machine Learning, vol. 45, pp. 5-32, 2001.

Y. Qi, "Random Forest for Bioinformatics," in Ensemble Machine Learning, Springer, 2012, pp. 307-323.

A. Liaw and M. Wiener, "Classification and regression by randomForest," R News, vol. 2, pp. 18-22, 2002.

X. Liu, K. Tang, J. R. Buhrman, and H. Cheng, "An agent-based framework for collaborative data mining optimization," in 2010 International Symposium on Collaborative Technologies and Systems (CTS), 2010, pp. 295-301.

E. Frank, M. Hall, L. Trigg, G. Holmes, and I. H. Witten, "Data mining in bioinformatics using Weka," Bioinformatics, vol. 20, pp. 2479-2481, 2004.

H. Zhang, M. Wang, and X. Chen, "Willows: a memory efficient tree and forest construction package," BMC Bioinformatics, vol. 10, p. 130, 2009.

H. M. Gomes, A. Bifet, J. Read, et al., "Adaptive random forests for evolving data stream classification," Machine Learning, vol. 106, p. 1469, 2017. https://doi.org/10.1007/s10994-017-5642-8