Automating regression testing for evolving GUI software

Title: Automating regression testing for evolving GUI software
Publication Type: Journal Article
Year of Publication: 2005
Authors: Memon AM, Nagarajan A, Xie Q
Journal: Journal of Software Maintenance and Evolution: Research and Practice
Volume: 17
Issue: 1
Pagination: 27–64
ISSN: 1532-0618
Keywords: daily/nightly builds, event-flow graphs, graphical user interfaces, GUI regression testing, GUI testing, smoke testing, software quality
Abstract:

With the widespread deployment of broadband connections worldwide, software development and maintenance are increasingly being performed by multiple engineers, often working around-the-clock to maximize code churn rates. To ensure rapid quality assurance of such software, techniques such as ‘nightly/daily building and smoke testing’ have become widespread since they often reveal bugs early in the software development process. During these builds, a development version of the software is checked out from the source code repository tree, compiled, linked, and (re)tested with the goal of (re)validating its basic functionality. Although successful for conventional software, smoke tests are difficult to develop and automatically re-run for software that has a graphical user interface (GUI). In this paper, we describe a framework called DART (Daily Automated Regression Tester) that addresses the needs of frequent and automated re-testing of GUI software. The key to our success is automation: DART automates everything from structural GUI analysis, smoke-test-case generation, test-oracle creation, to code instrumentation, test execution, coverage evaluation, regeneration of test cases, and their re-execution. Together with the operating system's task scheduler, DART can execute frequently with little input from the developer/tester to re-test the GUI software. We provide results of experiments showing the time taken and memory required for GUI analysis, test case and test oracle generation, and test execution. We empirically compare the relative costs of employing different levels of detail in the GUI test oracle. We also show the events and statements covered by the smoke test cases. Copyright © 2005 John Wiley & Sons, Ltd.

URL: http://onlinelibrary.wiley.com/doi/10.1002/smr.305/abstract
DOI: 10.1002/smr.305