Identifying Similar Test Cases That Are Specified in Natural Language

10/14/2021
by   Markos Viggiato, et al.

Software testing is still a manual process in many industries, despite recent improvements in automated testing techniques. As a result, test cases are often specified in natural language by different employees, and many redundant test cases might exist in the test suite, which increases the (already high) cost of test execution. Manually identifying similar test cases is a time-consuming and error-prone task. Therefore, in this paper, we propose an unsupervised approach to identify similar test cases. Our approach uses a combination of text embedding, text similarity, and clustering techniques. We evaluate five text embedding techniques, two text similarity metrics, and two clustering techniques to cluster similar test steps, as well as four techniques to identify similar test cases from the test step clusters. Through an evaluation in an industrial setting, we show that our approach achieves high performance in clustering test steps (an F-score of 87.39%). Furthermore, a validation with developers indicates several practical uses of our approach (such as identifying redundant and legacy test cases), which helps to reduce manual testing effort and time.
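The sketch below illustrates the general shape of such a pipeline, not the authors' exact method: test steps are embedded (TF-IDF here, while the paper evaluates several embedding techniques), similar steps are clustered, and test cases are then compared by the overlap of the step clusters they contain. The embedding choice, distance threshold, and Jaccard comparison are illustrative assumptions.

```python
# Minimal sketch, assuming TF-IDF embeddings, agglomerative clustering of
# test steps, and Jaccard overlap of step clusters to flag similar test cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_distances

# Each test case is a list of natural-language test steps (toy data).
test_cases = {
    "TC1": ["Open the login page", "Enter valid credentials", "Click submit"],
    "TC2": ["Navigate to the login screen", "Type a correct username and password",
            "Press the submit button"],
    "TC3": ["Open settings", "Change the display language", "Save preferences"],
}

# 1. Embed every test step.
steps = [step for case in test_cases.values() for step in case]
vectors = TfidfVectorizer().fit_transform(steps)

# 2. Cluster similar test steps using cosine distance.
#    (On scikit-learn < 1.2 the "metric" parameter is called "affinity".)
distances = cosine_distances(vectors)
clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8,
    metric="precomputed", linkage="average",
).fit(distances)

# 3. Represent each test case as the set of step-cluster labels it contains.
labels = iter(clustering.labels_)
case_clusters = {name: {next(labels) for _ in case}
                 for name, case in test_cases.items()}

# 4. Flag test-case pairs whose step clusters largely overlap.
names = list(case_clusters)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        overlap = case_clusters[a] & case_clusters[b]
        union = case_clusters[a] | case_clusters[b]
        jaccard = len(overlap) / len(union)
        if jaccard >= 0.5:  # threshold chosen for illustration only
            print(f"{a} and {b} look similar (Jaccard = {jaccard:.2f})")
```

In practice, the embedding technique, similarity metric, and thresholds would be tuned on the test suite at hand; the paper compares several such combinations rather than prescribing a single one.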
