Reliable Effects Screening: A Distributed Continuous Quality Assurance Process for Monitoring Performance Degradation in Evolving Software Systems

Title: Reliable Effects Screening: A Distributed Continuous Quality Assurance Process for Monitoring Performance Degradation in Evolving Software Systems
Publication Type: Journal Article
Year of Publication: 2007
Authors: Yilmaz C, Porter A, Krishna AS, Memon AM, Schmidt DC, Gokhale AS, Natarajan B
Journal: IEEE Transactions on Software Engineering
Volume: 33
Issue: 2
Pagination: 124-141
Date Published: February 2007
ISSN: 0098-5589
Keywords: configuration subset, distributed continuous quality assurance process, evolving software systems, in-house testing, main effects screening, performance bottlenecks, performance degradation monitoring, performance-intensive software systems, process configuration, process execution, program testing, regression testing, reliable effects screening, software benchmarks, software performance, software performance evaluation, software quality, software reliability, tool support
Abstract:

Developers of highly configurable performance-intensive software systems often use in-house performance-oriented "regression testing" to ensure that their modifications do not adversely affect their software's performance across its large configuration space. Unfortunately, time and resource constraints can limit in-house testing to a relatively small number of possible configurations, followed by unreliable extrapolation from these results to the entire configuration space. As a result, many performance bottlenecks escape detection until systems are fielded. In our earlier work, we improved the situation outlined above by developing an initial quality assurance process called "main effects screening". This process 1) executes formally designed experiments to identify an appropriate subset of configurations on which to base the performance-oriented regression testing, 2) executes benchmarks on this subset whenever the software changes, and 3) provides tool support for executing these actions on in-the-field and in-house computing resources. Our initial process had several limitations, however, since it was manually configured (which was tedious and error-prone) and relied on strong and untested assumptions for its accuracy (which made its use unacceptably risky in practice). This paper presents a new quality assurance process called "reliable effects screening" that provides three significant improvements to our earlier work. First, it allows developers to economically verify key assumptions during process execution. Second, it integrates several model-driven engineering tools to make process configuration and execution much easier and less error-prone. Third, we evaluate this process via several feasibility studies of three large, widely used performance-intensive software frameworks. Our results indicate that reliable effects screening can detect performance degradation in large-scale systems more reliably and with significantly fewer resources than conventional techniques.
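
As a concrete illustration of the screening idea summarized in the abstract, the following Python sketch estimates main effects over a small binary configuration space using a half-fraction factorial design. The option names and the run_benchmark stub are invented for illustration; they are not the paper's actual subject systems, option sets, or tooling, and a real process would replace the stub with measured benchmark runs.

    import itertools
    import random

    # Hypothetical binary configuration options; the paper's studies screened
    # real framework options, but these names are invented for this sketch.
    OPTIONS = ["caching", "zero_copy", "reactor_type"]

    def run_benchmark(config):
        """Stand-in for a real benchmark run; returns a latency in ms.

        Simulates a system where 'zero_copy' has a large main effect and
        'caching' a small one, so the screening should flag 'zero_copy'.
        """
        latency = 100.0
        latency -= 30.0 * config["zero_copy"]    # large main effect
        latency -= 2.0 * config["caching"]       # small main effect
        return latency + random.gauss(0.0, 1.0)  # measurement noise

    def half_fraction(options):
        """2^(k-1) fractional factorial: last factor = product of the rest.

        This resolution-III design halves the number of runs but aliases
        main effects with two-factor interactions -- the kind of untested
        assumption that reliable effects screening, per the abstract, lets
        developers verify economically during process execution.
        """
        runs = []
        for levels in itertools.product((-1, +1), repeat=len(options) - 1):
            last = 1
            for v in levels:
                last *= v
            runs.append(dict(zip(options, levels + (last,))))
        return runs

    def main_effects(options, runs, responses):
        """Main effect per option: mean(response at +1) - mean(at -1)."""
        effects = {}
        for opt in options:
            hi = [r for run, r in zip(runs, responses) if run[opt] == +1]
            lo = [r for run, r in zip(runs, responses) if run[opt] == -1]
            effects[opt] = sum(hi) / len(hi) - sum(lo) / len(lo)
        return effects

    runs = half_fraction(OPTIONS)
    responses = [run_benchmark({o: (1 if r[o] == +1 else 0) for o in OPTIONS})
                 for r in runs]
    for opt, eff in sorted(main_effects(OPTIONS, runs, responses).items(),
                           key=lambda kv: -abs(kv[1])):
        print(f"{opt:>12}: main effect = {eff:+6.1f} ms")

Options whose estimated main effect dominates the others would then define the small configuration subset on which performance-oriented regression benchmarks are rerun whenever the software changes.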

DOI: 10.1109/TSE.2007.20