Searching for Better Test Case Prioritization Schemes: a Case Study of AI-assisted Systematic Literature Review

09/16/2019
by Zhe Yu, et al.

Given the large number of publications in the SE field, it is difficult to keep current with the latest developments. In theory, AI tools could assist in finding relevant work, but those AI tools have primarily been tested/validated in simulations rather than in actual literature reviews. Accordingly, using a realistic case study, this paper assesses how well machine learning algorithms can help with literature reviews. The target of this case study is to identify test case prioritization techniques for automated UI testing, specifically from 8,349 papers on IEEE Xplore. This corpus was studied with an incrementally updated human-in-the-loop active learning text miner. Using that AI tool, in three hours, we found 242 relevant papers from which we identified 12 techniques representing the state-of-the-art in test case prioritization when source code information is not available. The foregoing results were validated by having six graduate students manually explore the same corpus. Using data from that validation study, we determined that without AI tools, this task would take 53 hours and would have found 27 extra papers. That is, with about 6% of the effort of manual methods, our AI tools achieved a 90% recall. Significantly, the same 12 state-of-the-art test case prioritization techniques were found by both the AI study and the manual study. That is, the 27 papers missed by the AI study would not have changed our conclusions. Hence, this study endorses the use of machine learning algorithms to assist future literature reviews.
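The paper's tool itself is not reproduced on this page; the snippet below is only a minimal sketch of what an incrementally updated, human-in-the-loop active learner for screening abstracts could look like, assuming scikit-learn with a TF-IDF text representation, a linear SVM, and certainty sampling. The corpus file name, column names, seed size, and query budget are hypothetical illustrations, not values from the paper.

```python
import random
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical export of candidate papers (title + abstract per row).
papers = pd.read_csv("ieee_xplore_candidates.csv")
X = TfidfVectorizer(stop_words="english", max_features=4000).fit_transform(papers["abstract"])
labels = {}  # paper index -> True/False (relevant to the review?)

def human_review(idx):
    """Human-in-the-loop step: show the title, record the reviewer's decision."""
    print(papers.loc[idx, "title"])
    return input("relevant? [y/n] ").strip().lower() == "y"

# Seed with random reviews until at least one relevant and one irrelevant label exist.
while len(set(labels.values())) < 2:
    idx = random.choice([i for i in range(len(papers)) if i not in labels])
    labels[idx] = human_review(idx)

# Incrementally retrain on all labels so far, then query the paper the model
# rates as most likely relevant (certainty sampling). Budget is hypothetical.
for _ in range(300):
    model = LinearSVC().fit(X[list(labels)], [labels[i] for i in labels])
    unlabeled = [i for i in range(len(papers)) if i not in labels]
    scores = model.decision_function(X[unlabeled])
    labels[unlabeled[scores.argmax()]] = human_review(unlabeled[scores.argmax()])
```

In this style of loop the reviewer stops once newly queried papers are almost all irrelevant, which is what keeps the human effort to a small fraction of a full manual screen.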
