DART: a framework for regression testing "nightly/daily builds" of GUI applications

Title: DART: a framework for regression testing "nightly/daily builds" of GUI applications
Publication Type: Conference Paper
Year of Publication: 2003
Authors: Memon AM, Banerjee I, Hashmi N, Nagarajan A
Conference Name: International Conference on Software Maintenance (ICSM 2003). Proceedings.
Date Published: 2003/09
Keywords: automated retesting, automatic test software, coverage evaluation, daily automated regression tester, DART, frequent retesting, graphical user interface, Graphical user interfaces, GUI software, instrumentation coding, program testing, regression testing, Software development management, Software development process, Software maintenance, Software quality, structural GUI analysis, Test Case Generation, test cases regeneration, Test execution, test oracle creation
Abstract

"Nightly/daily building and smoke testing" have become widespread since they often reveal bugs early in the software development process. During these builds, software is compiled, linked, and (re)tested with the goal of validating its basic functionality. Although successful for conventional software, smoke tests are difficult to develop and automatically rerun for software that has a graphical user interface (GUI). In this paper, we describe a framework called DART (daily automated regression tester) that addresses the needs of frequent and automated re-testing of GUI software. The key to our success is automation: DART automates everything from structural GUI analysis; test case generation; test oracle creation; to code instrumentation; test execution; coverage evaluation; regeneration of test cases; and their re-execution. Together with the operating system's task scheduler, DART can execute frequently with little input from the developer/tester to retest the GUI software. We provide results of experiments showing the time taken and memory required for GUI analysis, test case and test oracle generation, and test execution. We also empirically compare the relative costs of employing different levels of detail in the GUI test cases.

DOI: 10.1109/ICSM.2003.1235451