%0 Conference Paper
%B Software Engineering, 2004. ICSE 2004. Proceedings. 26th International Conference on
%D 2004
%T Skoll: distributed continuous quality assurance
%A Memon, Atif M.
%A Porter, Adam
%A Yilmaz, C.
%A Nagarajan, A.
%A Schmidt, D.
%A Natarajan, B.
%K 1MLOC+ software package
%K ACE+TAO
%K around-the-clock QA process
%K around-the-world QA process
%K distributed continuous QA
%K distributed continuous quality assurance
%K distributed programming
%K program verification
%K Quality assurance
%K Skoll
%K software performance evaluation
%K software profiling
%K Software quality
%K Software testing
%X Quality assurance (QA) tasks, such as testing, profiling, and performance evaluation, have historically been done in-house on developer-generated workloads and regression suites. Since this approach is inadequate for many systems, tools and processes are being developed to improve software quality by increasing user participation in the QA process. A limitation of these approaches is that they focus on isolated mechanisms, not on the coordination and control policies and tools needed to make the global QA process efficient, effective, and scalable. To address these issues, we have initiated the Skoll project, which is developing and validating novel software QA processes and tools that leverage the extensive computing resources of worldwide user communities in a distributed, continuous manner to significantly and rapidly improve software quality. This paper provides several contributions to the study of distributed continuous QA. First, it illustrates the structure and functionality of a generic around-the-world, around-the-clock QA process and describes several sophisticated tools that support this process. Second, it describes several QA scenarios built using these tools and process. Finally, it presents a feasibility study applying these scenarios to a 1MLOC+ software package called ACE+TAO. While much work remains to be done, the study suggests that the Skoll process and tools effectively manage and control distributed, continuous QA processes. Using Skoll we rapidly identified problems that had taken the ACE+TAO developers substantially longer to find, several of which had previously not been found. Moreover, automatic analysis of QA task results often provided developers information that quickly led them to the root cause of the problems.
%P 459 - 468
%8 2004/05//
%G eng
%R 10.1109/ICSE.2004.1317468
%0 Conference Paper
%B 26th International Conference on Software Engineering, 2004. ICSE 2004. Proceedings
%D 2004
%T Unifying artifacts and activities in a visual tool for distributed software development teams
%A Froehlich, Jon
%A Dourish, P.
%K Augur
%K complexity management
%K distributed programming
%K distributed software development teams
%K open source software developers
%K program visualisation
%K Programming
%K public domain software
%K software artifacts
%K software engineering
%K software tools
%K visual representations
%K visual tool
%K visualization tool
%X In large projects, software developers struggle with two sources of complexity - the complexity of the code itself, and the complexity of the process of producing it. Both of these concerns have been subjected to considerable research investigation, and tools and techniques have been developed to help manage them. However, these solutions have generally been developed independently, making it difficult to deal with problems that inherently span both dimensions. We describe Augur, a visualization tool that supports distributed software development processes. Augur creates visual representations of both software artifacts and software development activities, and, crucially, allows developers to explore the relationship between them. Augur is designed not for managers, but for the developers participating in the software development process. We discuss some of the early results of informal evaluation with open source software developers. Our experiences to date suggest that combining views of artifacts and activities is both meaningful and valuable to software developers.
%I IEEE
%P 387 - 396
%8 2004
%@ 0-7695-2163-0
%G eng
%0 Conference Paper
%B Proceedings of the eighth ACM SIGPLAN international conference on Functional programming
%D 2003
%T Dynamic rebinding for marshalling and update, with destruct-time λ
%A Bierman, Gavin
%A Hicks, Michael W.
%A Sewell, Peter
%A Stoyle, Gareth
%A Wansbrough, Keith
%K distributed programming
%K dynamic binding
%K dynamic update
%K lambda calculus
%K marshalling
%K programming languages
%K serialisation
%X Most programming languages adopt static binding, but for distributed programming an exclusive reliance on static binding is too restrictive: dynamic binding is required in various guises, for example when a marshalled value is received from the network, containing identifiers that must be rebound to local resources. Typically it is provided only by ad-hoc mechanisms that lack clean semantics. In this paper we adopt a foundational approach, developing core dynamic rebinding mechanisms as extensions to simply-typed call-by-value λ-calculus. To do so we must first explore refinements of the call-by-value reduction strategy that delay instantiation, to ensure computations make use of the most recent versions of rebound definitions. We introduce redex-time and destruct-time strategies. The latter forms the basis for a λmarsh calculus that supports dynamic rebinding of marshalled values, while remaining as far as possible statically-typed. We sketch an extension of λmarsh with concurrency and communication, giving examples showing how wrappers for encapsulating untrusted code can be expressed. Finally, we show that a high-level semantics for dynamic updating can also be based on the destruct-time strategy, defining a λmarsh calculus with simple primitives to provide type-safe updating of running code. We thereby establish primitives and a common semantic foundation for a variety of real-world dynamic rebinding requirements.
%S ICFP '03
%I ACM
%C New York, NY, USA
%P 99 - 110
%8 2003///
%@ 1-58113-756-7
%G eng
%U http://doi.acm.org/10.1145/944705.944715
%R 10.1145/944705.944715
%0 Journal Article
%J IEEE Transactions on Software Engineering
%D 2001
%T A tool to help tune where computation is performed
%A Eom, Hyeonsang
%A Hollingsworth, Jeffrey K.
%K Computational modeling
%K Current measurement
%K Distributed computing
%K distributed program
%K distributed programming
%K load balancing factor
%K Load management
%K parallel program
%K parallel programming
%K Performance analysis
%K performance evaluation
%K Performance gain
%K performance metric
%K Programming profession
%K software metrics
%K software performance evaluation
%K Testing
%K Time measurement
%K tuning
%X We introduce a new performance metric, called load balancing factor (LBF), to assist programmers when evaluating different tuning alternatives. The LBF metric differs from traditional performance metrics since it is intended to measure the performance implications of a specific tuning alternative rather than quantifying where time is spent in the current version of the program. A second unique aspect of the metric is that it provides guidance about moving work within a distributed or parallel program rather than reducing it. A variation of the LBF metric can also be used to predict the performance impact of changing the underlying network. The LBF metric is computed incrementally and online during the execution of the program to be tuned. We also present a case study that shows that our metric can accurately predict the actual performance gains for a test suite of six programs.
%V 27
%P 618 - 629
%8 2001/07//
%@ 0098-5589
%G eng
%N 7
%R 10.1109/32.935854
%0 Conference Paper
%D 1999
%T Fault injection based on a partial view of the global state of a distributed system
%A Cukier, Michel
%A Chandra, R.
%A Henke, D.
%A Pistole, J.
%A Sanders, W. H.
%K bounding technique
%K clock synchronization
%K distributed programming
%K distributed software systems
%K fault injection
%K Loki
%K post-runtime analysis
%K program testing
%K program verification
%K software reliability
%K Synchronisation
%X This paper describes the basis for and preliminary implementation of a new fault injector, called Loki, developed specifically for distributed systems. Loki addresses issues related to injecting correlated faults in distributed systems. In Loki, fault injection is performed based on a partial view of the global state of an application. In particular, facilities are provided to pass user-specified state information between nodes to provide a partial view of the global state in order to try to inject complex faults successfully. A post-runtime analysis, using an off-line clock synchronization and a bounding technique, is used to place events and injections on a single global time-line and determine whether the intended faults were properly injected. Finally, observations containing successful fault injections are used to estimate specified dependability measures. In addition to describing the details of our new approach, we present experimental results obtained from a preliminary implementation in order to illustrate Loki's ability to inject complex faults predictably.
%P 168 - 177
%8 1999///
%G eng
%R 10.1109/RELDIS.1999.805093
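
The Loki abstract above describes triggering injections from a node-local, partial view of the global state that is built from state notifications exchanged between nodes. The following is a minimal illustrative sketch of that idea in Python; the Harness and Node classes and the trigger predicates are hypothetical names for illustration only and do not reflect Loki's actual API, and the paper's off-line clock synchronization and post-runtime analysis are not modeled.

    # Illustrative sketch only -- not Loki's actual interface. Each node keeps a
    # partial view of the global state (last reported states of other nodes) and
    # injects a fault only when a user-specified predicate over that view holds.

    from dataclasses import dataclass, field
    from typing import Callable, Dict

    @dataclass
    class Node:
        name: str
        state: str = "init"
        view: Dict[str, str] = field(default_factory=dict)  # partial global view
        fault_injected: bool = False

    class Harness:
        """Delivers state notifications and evaluates fault-injection triggers."""

        def __init__(self):
            self.nodes: Dict[str, Node] = {}
            self.triggers: Dict[str, Callable[[Dict[str, str]], bool]] = {}

        def add_node(self, name, trigger):
            self.nodes[name] = Node(name)
            self.triggers[name] = trigger

        def set_state(self, name, new_state):
            self.nodes[name].state = new_state
            # Notify the other nodes so their partial views stay roughly current.
            for other in self.nodes.values():
                if other.name != name:
                    other.view[name] = new_state
                    self.maybe_inject(other)

        def maybe_inject(self, node):
            if not node.fault_injected and self.triggers[node.name](node.view):
                node.fault_injected = True
                print(f"[inject] fault injected at {node.name}, view={node.view}")

    if __name__ == "__main__":
        h = Harness()
        # Hypothetical correlated-fault scenario: crash the backup only while the
        # primary is believed (from the backup's partial view) to be committing.
        h.add_node("primary", trigger=lambda view: False)
        h.add_node("backup", trigger=lambda view: view.get("primary") == "commit")
        h.set_state("primary", "prepare")
        h.set_state("primary", "commit")  # backup's view now satisfies its trigger

Because each node acts only on its possibly stale partial view, whether the intended global condition actually held at injection time must be checked afterwards, which is the role the abstract assigns to Loki's post-runtime timeline analysis.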