%0 Conference Paper
%B Software Maintenance, 2009. ICSM 2009. IEEE International Conference on
%D 2009
%T Prioritizing component compatibility tests via user preferences
%A Yoon, Il-Chul
%A Sussman, Alan
%A Memon, Atif M.
%A Porter, Adam
%K compatibility testing prioritization
%K component configurations
%K computer clusters
%K Middleware
%K Middleware systems
%K object-oriented programming
%K program testing
%K software engineering
%K Software systems
%K third-party components
%K user preferences
%X Many software systems rely on third-party components during their build process. Because the components are constantly evolving, quality assurance demands that developers perform compatibility testing to ensure that their software systems build correctly over all deployable combinations of component versions, also called configurations. However, large software systems can have many configurations, and compatibility testing is often time- and resource-constrained. We present a prioritization mechanism that enhances compatibility testing by examining the "most important" configurations first, while distributing the work over a cluster of computers. We evaluate our new approach on two large scientific middleware systems and examine tradeoffs between the new prioritization approach and a previously developed lowest-cost-configuration-first approach.
%B Software Maintenance, 2009. ICSM 2009. IEEE International Conference on
%P 29 - 38
%8 2009/09//
%G eng
%R 10.1109/ICSM.2009.5306357
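The prioritization idea in the record above can be illustrated with a minimal sketch. The component names, versions, and preference weights below are hypothetical, and the multiplicative scoring rule is an assumption made for illustration, not the paper's actual mechanism:

    # Illustrative sketch (not the paper's algorithm): rank component
    # configurations by user-supplied preference weights, then test the
    # highest-scoring configurations first under a fixed budget.
    from itertools import product

    # Hypothetical component versions with user preference weights (0..1).
    components = {
        "compiler": {"gcc-4.1": 0.9, "gcc-3.4": 0.4},
        "mpi": {"openmpi-1.2": 0.8, "mpich-1.2": 0.5},
        "python": {"2.4": 0.7, "2.3": 0.3},
    }

    def score(config):
        """A configuration's importance: product of its versions' weights."""
        s = 1.0
        for comp, version in config.items():
            s *= components[comp][version]
        return s

    # Enumerate all deployable configurations (cross product of versions).
    names = sorted(components)
    configs = [dict(zip(names, vs))
               for vs in product(*(components[n] for n in names))]

    budget = 4  # test only the 4 "most important" configurations
    for cfg in sorted(configs, key=score, reverse=True)[:budget]:
        print(f"{score(cfg):.2f}", cfg)

In a real setting each selected configuration would be dispatched to a machine in the cluster for a build-and-test run, with lower-priority configurations tested only as the budget allows.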
%0 Journal Article
%J Computer
%D 2006
%T High-confidence medical device software and systems
%A Lee, I.
%A Pappas, G. J.
%A Cleaveland, Rance
%A Hatcliff, J.
%A Krogh, B. H.
%A Lee, P.
%A Rubin, H.
%A Sha, L.
%K Aging
%K biomedical equipment
%K Clinical software engineering
%K Costs
%K FDA device
%K Food and Drug Administration device
%K health and safety
%K health care
%K health information system
%K health safety
%K healthcare delivery
%K Healthcare technology
%K Information systems
%K medical computing
%K medical device manufacturing
%K medical device software development
%K medical device systems development
%K medical information systems
%K Medical services
%K Medical software
%K Medical tests
%K networked medical devices
%K Production systems
%K Software design
%K Software safety
%K Software systems
%K Software testing
%K US healthcare quality
%X Given the shortage of caregivers and the growth of the aging US population, the future of US healthcare quality does not look promising, and care is unlikely to become cheaper. Advances in health information systems and healthcare technology offer a tremendous opportunity for improving the quality of care while reducing costs. The development and production of medical device software and systems is a crucial issue, both for the US economy and for ensuring safe advances in healthcare delivery. As devices become smaller in physical terms but larger in software terms, design, testing, and eventual Food and Drug Administration (FDA) approval are becoming much more expensive for medical device manufacturers, in both time and cost. Furthermore, the number of devices recently recalled due to software and hardware problems is increasing at an alarming rate. As medical devices become increasingly networked, ensuring even the current level of health safety is a challenge.
%B Computer
%V 39
%P 33 - 38
%8 2006/04//
%@ 0018-9162
%G eng
%N 4
%R 10.1109/MC.2006.127

%0 Journal Article
%J IEEE Transactions on Information Technology in Biomedicine
%D 2003
%T The virtual microscope
%A Catalyurek, U.
%A Beynon, M. D.
%A Chang, Chialin
%A Kurc, T.
%A Sussman, Alan
%A Saltz, J.
%K biomedical optical imaging
%K client software
%K client/server architecture
%K Computer architecture
%K Computer Graphics
%K computer platforms
%K Computer simulation
%K Concurrent computing
%K configured data server
%K data server software
%K database management systems
%K database software
%K Database systems
%K digital slide images
%K digital telepathology
%K diseases
%K emulation
%K Environment
%K Equipment Design
%K Equipment Failure Analysis
%K high power light microscope
%K Image databases
%K Image Enhancement
%K Image Interpretation, Computer-Assisted
%K Image retrieval
%K Information retrieval
%K Information Storage and Retrieval
%K java
%K local disks
%K microscope image data
%K Microscopy
%K multiple clients
%K optical microscopy
%K PACS
%K software
%K Software design
%K software system
%K Software systems
%K Systems Integration
%K Telepathology
%K User-Computer Interface
%K virtual microscope design
%K Virtual reality
%K Workstations
%X We present the design and implementation of the virtual microscope, a software system employing a client/server architecture to provide a realistic emulation of a high-power light microscope. The system provides a form of completely digital telepathology, allowing simultaneous access to archived digital slide images by multiple clients. The main problem the system targets is storing and processing the extremely large quantities of data required to represent a collection of slides. The virtual microscope client software runs on the end user's PC or workstation, while database software for storing, retrieving, and processing the microscope image data runs on a parallel computer or on a set of workstations at one or more potentially remote sites. We have designed and implemented two versions of the data server software. One implementation is a customization of a database system framework that is optimized for a tightly coupled parallel machine with attached local disks. The second implementation is component-based, and has been designed to accommodate access to and processing of data in a distributed, heterogeneous environment. We also have developed caching client software, implemented in Java, to achieve good response time and portability across different computer platforms. The performance results presented show that the Virtual Microscope system scales well, so that many clients can be adequately serviced by an appropriately configured data server.
%B IEEE Transactions on Information Technology in Biomedicine
%V 7
%P 230 - 248
%8 2003/12//
%@ 1089-7771
%G eng
%N 4
%R 10.1109/TITB.2004.823952

%0 Conference Paper
%B Proceedings of the 1996 ACM/IEEE Conference on Supercomputing, 1996
%D 1996
%T Modeling, Evaluation, and Testing of Paradyn Instrumentation System
%A Waheed, A.
%A Rover, D. T.
%A Hollingsworth, Jeffrey K.
%K Distributed control
%K Feedback
%K High performance computing
%K Instruments
%K Monitoring
%K Real time systems
%K Software measurement
%K Software systems
%K Software testing
%K System testing
%X This paper presents a case study of modeling, evaluating, and testing the data collection services (called an instrumentation system) of the Paradyn parallel performance measurement tool using well-known performance evaluation and experiment design techniques. The overall objective of the study is to use modeling- and simulation-based evaluation to provide feedback to the tool developers to help them choose system configurations and task scheduling policies that can significantly reduce the data collection overheads. We develop a resource occupancy model for the Paradyn instrumentation system (IS) on an IBM SP-2 platform, parameterize it with a measurement-based workload characterization, and then use it to answer several "what if" questions regarding configuration options and two policies to schedule instrumentation system tasks: collect-and-forward (CF) and batch-and-forward (BF) policies. Simulation results indicate that the BF policy can significantly reduce the overheads. Based on this feedback, the BF policy was implemented in the Paradyn IS as an option to manage the data collection. Measurement-based testing results obtained from this enhanced version of the Paradyn IS are reported in this paper and indicate a reduction of more than 60% in the direct IS overheads when the BF policy is used.
%B Proceedings of the 1996 ACM/IEEE Conference on Supercomputing, 1996
%I IEEE
%P 18 - 18
%8 1996///
%G eng
%R 10.1109/SUPERC.1996.183524
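The collect-and-forward versus batch-and-forward tradeoff studied in the Paradyn record above can be illustrated with a toy cost model. The cost constants below are invented for illustration and do not come from the paper; the point is only that batching amortizes the fixed per-message cost:

    # Illustrative sketch of the two scheduling policies compared in the
    # paper: collect-and-forward (CF) sends each sample as it arrives;
    # batch-and-forward (BF) buffers samples and forwards them in batches,
    # amortizing the per-message cost. Constants are hypothetical.
    PER_MSG_OVERHEAD = 50.0   # assumed fixed cost to forward one message
    PER_SAMPLE_COST = 1.0     # assumed marginal cost to pack one sample

    def cf_cost(n_samples):
        # CF: every sample pays the full per-message overhead.
        return n_samples * (PER_MSG_OVERHEAD + PER_SAMPLE_COST)

    def bf_cost(n_samples, batch_size):
        # BF: one message per full (or final partial) batch.
        n_batches = -(-n_samples // batch_size)  # ceiling division
        return n_batches * PER_MSG_OVERHEAD + n_samples * PER_SAMPLE_COST

    n = 10_000
    for b in (1, 10, 100):
        print(f"batch={b:3d}: BF saves {1 - bf_cost(n, b) / cf_cost(n):.0%}")

With a batch size of 1, BF degenerates to CF; as the batch size grows, the fixed forwarding overhead is shared across more samples, which is the effect behind the paper's reported overhead reduction.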
%0 Journal Article
%J IEEE Software
%D 1990
%T Empirically guided software development using metric-based classification trees
%A Porter, Adam
%A Selby, R. W.
%K Application software
%K Area measurement
%K automatic generation
%K classification problem
%K Classification tree analysis
%K Costs
%K empirically guided software development
%K Error correction
%K life cycle
%K measurable attributes
%K metric-based classification trees
%K Predictive models
%K Programming
%K software engineering
%K Software measurement
%K software metrics
%K Software systems
%X The identification of high-risk components early in the life cycle is addressed. A solution that casts this as a classification problem is examined. The proposed approach derives models of problematic components, based on their measurable attributes and those of their development processes. The models provide a basis for forecasting which components are likely to share the same high-risk properties, such as being error-prone or having a high development cost. Developers can use these classification techniques to localize the troublesome 20% of the system. The method for generating the models, called automatic generation of metric-based classification trees, uses metrics from previous releases or projects to identify components that are historically high-risk.
%B IEEE Software
%V 7
%P 46 - 54
%8 1990/03//
%@ 0740-7459
%G eng
%N 2
%R 10.1109/52.50773
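A minimal stand-in for the approach in the record above, using scikit-learn's tree learner in place of the authors' own generation method; the metric names, training data, and threshold behavior are hypothetical:

    # Learn a classification tree over per-module metrics from past
    # releases, then flag modules in a new release that look high-risk.
    from sklearn.tree import DecisionTreeClassifier

    # One row of metrics per module from previous releases:
    # [source lines, cyclomatic complexity, number of past changes]
    X_history = [
        [1200, 35, 40], [300, 8, 3], [2500, 60, 55],
        [150, 4, 1], [900, 25, 20], [400, 30, 8],
    ]
    y_history = [1, 0, 1, 0, 0, 1]  # 1 = was error-prone / high effort

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_history, y_history)

    # Predict which modules of the current release deserve scrutiny.
    X_new = [[1800, 45, 10], [250, 6, 2]]
    print(tree.predict(X_new))  # [1 0]: focus effort on the first module

This mirrors the paper's workflow (historical metrics in, risk forecast out), so developers can concentrate review and testing effort on the predicted troublesome 20%.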
%0 Conference Paper
%B Conference on Software Maintenance, 1989, Proceedings
%D 1989
%T Software metric classification trees help guide the maintenance of large-scale systems
%A Selby, R. W.
%A Porter, Adam
%K automated method
%K automatic programming
%K classification
%K Classification tree analysis
%K classification trees
%K Computer errors
%K empirically-based models
%K error-prone software objects
%K Fault diagnosis
%K feasibility study
%K high development effort
%K Large-scale systems
%K multivalued functions
%K NASA
%K NASA projects
%K recursive algorithm
%K Software algorithms
%K software engineering
%K Software maintenance
%K Software measurement
%K software metrics
%K software modules
%K Software systems
%K trees (mathematics)
%X The 80:20 rule states that approximately 20% of a software system is responsible for 80% of its errors. The authors propose an automated method for generating empirically-based models of error-prone software objects. These models are intended to help localize the troublesome 20%. The method uses a recursive algorithm to automatically generate classification trees whose nodes are multivalued functions based on software metrics. The purpose of the classification trees is to identify components that are likely to be error-prone or costly, so that developers can focus their resources accordingly. A feasibility study was conducted using 16 NASA projects. On average, the classification trees correctly identified 79.3% of the software modules that had high development effort or faults.
%B Conference on Software Maintenance, 1989, Proceedings
%I IEEE
%P 116 - 123
%8 1989/10/16/19
%@ 0-8186-1965-1
%G eng
%R 10.1109/ICSM.1989.65202

%0 Journal Article
%J IEEE Transactions on Software Engineering
%D 1988
%T Learning from examples: generation and evaluation of decision trees for software resource analysis
%A Selby, R. W.
%A Porter, Adam
%K Analysis of variance
%K Artificial intelligence
%K Classification tree analysis
%K Data analysis
%K decision theory
%K Decision trees
%K Fault diagnosis
%K Information analysis
%K machine learning
%K metrics
%K NASA
%K production environment
%K software engineering
%K software modules
%K software resource analysis
%K Software systems
%K Termination of employment
%K trees (mathematics)
%X A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for one problem domain, specifically, that of software resource data analysis. The purpose of the decision trees is to identify classes of objects (software modules) that had high development effort, i.e., in the uppermost quartile relative to past data. Sixteen software systems ranging from 3000 to 112000 source lines have been selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4700 objects, capture a multitude of information about the objects: development effort, faults, changes, design style, and implementation style. A total of 9600 decision trees are automatically generated and evaluated. The analysis focuses on the characterization and evaluation of decision tree accuracy, complexity, and composition. The decision trees correctly identified 79.3% of the software modules that had high development effort or faults, on the average across all 9600 trees. The decision trees generated from the best parameter combinations correctly identified 88.4% of the modules on the average. Visualization of the results is emphasized, and sample decision trees are included.
%B IEEE Transactions on Software Engineering
%V 14
%P 1743 - 1757
%8 1988/12//
%@ 0098-5589
%G eng
%N 12
%R 10.1109/32.9061
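The generate-and-evaluate experiment in the 1988 record above can be sketched on synthetic data. Here scikit-learn and a small parameter grid stand in for the authors' tree generator and their 9600-tree study; only the shape of the experiment is preserved:

    # Generate decision trees across many parameter combinations and
    # compare their cross-validated accuracy, then report the best
    # combination, as the paper does for its best-performing trees.
    from itertools import product
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the NASA module metrics and effort labels.
    X, y = make_classification(n_samples=400, n_features=8, random_state=0)

    results = []
    for depth, min_leaf, criterion in product(
            (2, 4, 8), (1, 5, 25), ("gini", "entropy")):
        tree = DecisionTreeClassifier(max_depth=depth,
                                      min_samples_leaf=min_leaf,
                                      criterion=criterion,
                                      random_state=0)
        acc = cross_val_score(tree, X, y, cv=5).mean()
        results.append((acc, depth, min_leaf, criterion))

    print(max(results))  # best (accuracy, parameters) found in the sweep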
%0 Journal Article
%J IEEE Transactions on Computers
%D 1986
%T A Special-Function Unit for Sorting and Sort-Based Database Operations
%A Raschid, Louiqa
%A Fei, T.
%A Lam, H.
%A Su, S. Y. W.
%K Application software
%K Computer applications
%K Database machines
%K Hardware
%K hardware sorter
%K Microelectronics
%K Software algorithms
%K Software design
%K Software systems
%K sort-based algorithms for database operations
%K sorting
%K special-function processor
%K Technology management
%X Achieving efficiency in database management functions is a fundamental problem underlying many computer applications. Efficiency is difficult to achieve using traditional general-purpose von Neumann processors. Recent advances in microelectronic technologies have prompted many new research activities in the design, implementation, and application of database machines which are tailored for processing database management functions. To build an efficient system, the software algorithms designed for this type of system need to be tailored to take advantage of the hardware characteristics of these machines. Furthermore, special hardware units should be used, if they are cost-effective, to execute or to assist the execution of these software algorithms.
%B IEEE Transactions on Computers
%V C-35
%P 1071 - 1077
%8 1986/12//
%@ 0018-9340
%G eng
%N 12
%R 10.1109/TC.1986.1676715
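A sort-merge equijoin is a representative sort-based database operation of the kind the special-function unit above is meant to accelerate. The sketch below (with made-up relations) shows why the sorting step dominates the algorithm's cost and is therefore the natural target for hardware assistance:

    # Illustrative sort-merge equijoin on the first column of each relation.
    def merge_join(left, right, key=lambda row: row[0]):
        left = sorted(left, key=key)    # the step a hardware sorter speeds up
        right = sorted(right, key=key)
        i = j = 0
        while i < len(left) and j < len(right):
            kl, kr = key(left[i]), key(right[j])
            if kl < kr:
                i += 1
            elif kl > kr:
                j += 1
            else:
                # Emit matches for this key; j stays at the group start so
                # duplicate keys on the left rescan the same right group.
                j2 = j
                while j2 < len(right) and key(right[j2]) == kl:
                    yield left[i] + right[j2][1:]
                    j2 += 1
                i += 1

    employees = [(2, "lam"), (1, "fei"), (3, "su")]
    depts = [(1, "db"), (3, "vlsi"), (2, "db")]
    print(list(merge_join(employees, depts)))
    # [(1, 'fei', 'db'), (2, 'lam', 'db'), (3, 'su', 'vlsi')]

Once both inputs are sorted, the merge phase is a single linear pass, so offloading the O(n log n) sort to a special-function unit leaves only cheap sequential work for the general-purpose processor.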