%0 Conference Paper %D 2006 %T Assessing the Attack Threat due to IRC Channels %A Meyer,R. %A Michel Cukier %K attack threat assessment %K channel activity %K client-server systems %K computer crime %K intrusion-resilient channels %K IRC channels %K IRC protocols %K Protocols %K telecommunication channels %K telecommunication security %X This practical experience report presents the results of an investigation into the threat of attacks associated with the chat medium IRC. A combination of simulated users (i.e., bots), some configured with scripts that simulated conversations, and regular users was used. The average number of attacks per day that a user on IRC can expect was determined, along with the effects of channel activity, apparent gender (based on the user's name), and network type on the number of attacks. The social structure of IRC channels and the types of users that use them were analyzed. The results indicate that attacks through IRC channels come from human users selecting targets rather than automated scripts targeting every user in a channel. %P 467 - 472 %8 2006/06// %G eng %R 10.1109/DSN.2006.12 %0 Conference Paper %B 2003 International Conference on Multimedia and Expo, 2003. ICME '03. Proceedings %D 2003 %T Dynamic load balancing across mirrored multimedia servers %A Matthur,A. %A Mundur, Padma %K client-server systems %K Delay %K Internet %K load balancing protocols %K Load management %K media streaming %K metropolitan area network %K metropolitan area networks %K mirrored multimedia servers %K Multimedia communication %K multimedia servers %K network packet loss %K Network servers %K packet transmission delay %K Propagation losses %K Protocols %K Streaming media %K Topology %K Traffic control %K Web server %X The purpose of this paper is to present protocols for efficient load balancing across replicated multimedia servers in a metropolitan area network. Current multimedia infrastructures, even when they use mirrored servers, do not have standardized load balancing schemes.
Existing schemes frequently require participation from the clients in balancing the load across the servers efficiently. We propose two protocols in this paper for fair load balancing without any client-side processing being required. Neither protocol requires any change to the network-level infrastructure. Using network packet loss and packet transmission delay as the chief metrics, we show the effectiveness of the protocols through extensive simulations. %B 2003 International Conference on Multimedia and Expo, 2003. ICME '03. Proceedings %I IEEE %V 2 %P II-53-6 vol.2 %8 2003/07/06/9 %@ 0-7803-7965-9 %G eng %R 10.1109/ICME.2003.1221551 %0 Conference Paper %D 2003 %T An experimental evaluation of correlated network partitions in the Coda distributed file system %A Lefever,R.M. %A Michel Cukier %A Sanders,W. H. %K client-server systems %K Coda distributed file system %K correlated network partitions %K distributed file systems %K experimental evaluation %K fault model %K Loki fault injector %K multiple correlated failures %K network failure %K Network topology %K performance evaluation %K replicated data %K replicated databases %K software fault tolerance %K system performance evaluation %X Experimental evaluation is an important way to assess distributed systems, and fault injection is the dominant technique in this area for the evaluation of a system's dependability. For distributed systems, network failure is an important fault model. Physical network failures often have far-reaching effects, giving rise to multiple correlated failures as seen by higher-level protocols. This paper presents an experimental evaluation, using the Loki fault injector, which provides insight into the impact that correlated network partitions have on the Coda distributed file system. In this evaluation, Loki created a network partition between two Coda file servers, during which updates were made at each server to the same replicated data volume.
Upon repair of the partition, a client requested directory resolution to converge the diverging replicas. At various stages of the resolution, Loki invoked a second correlated network partition, thus allowing us to evaluate its impact on the system's correctness, performance, and availability. %P 273 - 282 %8 2003/10// %G eng %R 10.1109/RELDIS.2003.1238077 %0 Conference Paper %B IEEE INFOCOM 2002. Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings %D 2002 %T Clustering and server selection using passive monitoring %A Andrews,M. %A Shepherd,B. %A Srinivasan, Aravind %A Winkler,P. %A Zane,F. %K client assignment %K client-server systems %K clustering %K content servers %K Delay %K distributed system %K Educational institutions %K Internet %K IP addresses %K Monitoring %K network conditions %K Network servers %K Network topology %K optimal content server %K passive monitoring %K server selection %K Space technology %K TCPIP %K Transport protocols %K Web pages %K Web server %K Webmapper %X We consider the problem of client assignment in a distributed system of content servers. We present a system called Webmapper for clustering IP addresses and assigning each cluster to an optimal content server. The system is passive in that the only information it uses comes from monitoring the TCP connections between the clients and the servers. It is also flexible in that it makes no a priori assumptions about network topology and server placement and it can react quickly to changing network conditions. We present experimental results to evaluate the performance of Webmapper. %B IEEE INFOCOM 2002. Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings %I IEEE %V 3 %P 1717-1725 vol.3 %8 2002/// %@ 0-7803-7476-2 %G eng %R 10.1109/INFCOM.2002.1019425 %0 Conference Paper %D 2002 %T Performance evaluation of a probabilistic replica selection algorithm %A Krishnamurthy, S.
%A Sanders,W. H. %A Michel Cukier %K client-server systems %K Dependability %K distributed object management %K dynamic selection algorithm %K Middleware %K probabilistic model %K probabilistic model-based replica selection algorithm %K probability %K quality of service %K real-time systems %K replica failures %K round-robin selection scheme %K static scheme %K time-sensitive distributed applications %K timeliness %K timing failures %K transient overload %X When executing time-sensitive distributed applications, a middleware that provides dependability and timeliness is faced with the important problem of preventing timing failures both under normal conditions and when the quality of service is degraded due to replica failures and transient overload on the server. To address this problem, we have designed a probabilistic model-based replica selection algorithm that allows a middleware to choose a set of replicas to service a client based on their ability to meet a client's timeliness requirements. This selection is done based on the prediction made by a probabilistic model that uses the performance history of replicas as inputs. In this paper, we describe the experiments we have conducted to evaluate the ability of this dynamic selection algorithm to meet a client's timing requirements, and compare it with that of static and round-robin selection schemes under different scenarios. %P 119 - 127 %8 2002/// %G eng %R 10.1109/WORDS.2002.1000044 %0 Conference Paper %D 2002 %T Performance evaluation of a QoS-aware framework for providing tunable consistency and timeliness %A Krishnamurthy, S. %A Sanders,W. H.
%A Michel Cukier %K client-server systems %K Computer networks %K CORBA-based middleware %K distributed applications %K distributed object management %K Network servers %K QoS %K quality of service %K replica consistency %K replicated services %K server replicas %K timeliness %X Strong replica consistency models ensure that the data delivered by a replica always includes the latest updates, although this may result in poor response times. On the other hand, weak replica consistency models provide quicker access to information, but do not usually provide guarantees about the degree of staleness in the data they deliver. In order to support emerging distributed applications that are characterized by high concurrency demands, an increasing shift towards dynamic content, and timely delivery, we need quality of service models that allow us to explore the intermediate space between these two extreme approaches to replica consistency. Further, for better support of time-sensitive applications that can tolerate relaxed consistency in exchange for better responsiveness, we need to understand how the desired level of consistency affects the timeliness of a response. The QoS model we have developed to realize these objectives considers both timeliness and consistency, and treats consistency along two dimensions: order and staleness. We evaluate experimentally the framework we have developed to study the timeliness/consistency tradeoffs for replicated services and present experimental results that compare these tradeoffs in the context of sequential and FIFO ordering. %P 214 - 223 %8 2002/// %G eng %R 10.1109/IWQoS.2002.1006589 %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 2001 %T An adaptive algorithm for tolerating value faults and crash failures %A Ren,Yansong %A Michel Cukier %A Sanders,W. H. 
%K active replication communication %K adaptive algorithm %K adaptive fault tolerance %K adaptive majority voting algorithm %K AQuA architecture %K client-server systems %K CORBA %K crash failures %K data consistency %K data integrity %K Dependability %K distributed object management %K fault tolerant computing %K objects replication %K value faults %X The AQuA architecture provides adaptive fault tolerance to CORBA applications by replicating objects and providing a high-level method that an application can use to specify its desired level of dependability. This paper presents the algorithms that AQuA uses, when an application's dependability requirements can change at runtime, to tolerate both value faults in applications and crash failures simultaneously. In particular, we provide an active replication communication scheme that maintains data consistency among replicas, detects crash failures, collates the messages generated by replicated objects, and delivers the result of each vote. We also present an adaptive majority voting algorithm that enables the correct ongoing vote while both the number of replicas and the majority size dynamically change. Together, these two algorithms form the basis of the mechanism for tolerating and recovering from value faults and crash failures in AQuA. %B Parallel and Distributed Systems, IEEE Transactions on %V 12 %P 173 - 192 %8 2001/02// %@ 1045-9219 %G eng %N 2 %R 10.1109/71.910872 %0 Conference Paper %D 2001 %T A dynamic replica selection algorithm for tolerating timing faults %A Krishnamurthy, S. %A Sanders,W. H.
%A Michel Cukier %K AQuA %K client %K client-server systems %K CORBA-based middleware %K distributed object management %K distributed services %K dynamic replica selection algorithm %K fault tolerant computing %K local area network %K Local area networks %K quality of service %K replica failures %K response time %K server replication %K time-critical applications %K timing failures %K timing fault tolerance %X Server replication is commonly used to improve the fault tolerance and response time of distributed services. An important problem when executing time-critical applications in a replicated environment is that of preventing timing failures by dynamically selecting the replicas that can satisfy a client's timing requirement, even when the quality of service is degraded due to replica failures and excess load on the server. We describe the approach we have used to solve this problem in AQuA, a CORBA-based middleware that transparently replicates objects across a local area network. The approach we use estimates a replica's response time distribution based on performance measurements regularly broadcast by the replica. An online model uses these measurements to predict the probability with which a replica can prevent a timing failure for a client. A selection algorithm then uses this prediction to choose a subset of replicas that can together meet the client's timing constraints with at least the probability requested by the client. We conclude with experimental results based on our implementation. %P 107 - 116 %8 2001/07// %G eng %R 10.1109/DSN.2001.941397 %0 Conference Paper %B Thirteenth International Conference on Scientific and Statistical Database Management, 2001. SSDBM 2001. Proceedings %D 2001 %T Integrating distributed scientific data sources with MOCHA and XRoaster %A Rodriguez-Martinez,M. %A Roussopoulos, Nick %A McGann,J. M %A Kelley,S. %A Mokwa,J. %A White,B. %A Jala,J. 
%K client-server systems %K data sets %K data sites %K Databases %K Distributed computing %K distributed databases %K distributed scientific data source integration %K Educational institutions %K graphical tool %K hypermedia markup languages %K IP networks %K java %K Large-scale systems %K Maintenance engineering %K meta data %K metadata %K Middleware %K middleware system %K MOCHA %K Query processing %K remote sites %K scientific information systems %K user-defined types %K visual programming %K XML %K XML metadata elements %K XML-based framework %K XRoaster %X MOCHA is a novel middleware system for integrating distributed data sources that we have developed at the University of Maryland. MOCHA is based on the idea that the code that implements user-defined types and functions should be automatically deployed to remote sites by the middleware system itself. To this end, we have developed an XML-based framework to specify metadata about data sites, data sets, and user-defined types and functions. XRoaster is a graphical tool that we have developed to help the user create all the XML metadata elements to be used in MOCHA. %B Thirteenth International Conference on Scientific and Statistical Database Management, 2001. SSDBM 2001. Proceedings %I IEEE %P 263 - 266 %8 2001/// %@ 0-7695-1218-6 %G eng %R 10.1109/SSDM.2001.938560 %0 Conference Paper %B Seventeenth IEEE Symposium on Reliable Distributed Systems %D 1998 %T AQuA: an adaptive architecture that provides dependable distributed objects %A Cukier, Michel %A Ren,J. %A Sabnis,C. %A Henke,D. %A Pistole,J. %A Sanders,W. H. %A Bakken,D. E. %A Berman,M.E. %A Karr,D. A. %A Schantz,R.E.
%K adaptive architecture %K AQuA %K availability requests %K client-server systems %K commercial off-the-shelf hardware %K CORBA %K dependability manager %K dependability requirements %K dependable distributed objects %K distributed object management %K Ensemble protocol stack %K Fault tolerance %K group communication services %K middleware software %K Object Request Brokers %K process-level communication %K proteus %K Quality Objects %K replication %K software fault tolerance %K Software quality %X Dependable distributed systems are difficult to build. This is particularly true if they have dependability requirements that change during the execution of an application, and are built with commercial off-the-shelf hardware. In that case, fault tolerance must be achieved using middleware software, and mechanisms must be provided to communicate the dependability requirements of a distributed application to the system and to adapt the system's configuration to try to achieve the desired dependability. The AQuA architecture allows distributed applications to request a desired level of availability using the Quality Objects (QuO) framework and includes a dependability manager that attempts to meet requested availability levels by configuring the system in response to outside requests and changes in system resources due to faults. The AQuA architecture uses the QuO runtime to process and invoke availability requests, the Proteus dependability manager to configure the system in response to faults and availability requests, and the Ensemble protocol stack to provide group communication services. Furthermore, a CORBA interface is provided to application objects using the AQuA gateway. The gateway provides a mechanism to translate between process-level communication, as supported by Ensemble, and IIOP messages, understood by Object Request Brokers. 
Both active and passive replication are supported, and the replication type to use is chosen based on the performance and dependability requirements of particular distributed applications. %B Seventeenth IEEE Symposium on Reliable Distributed Systems %P 245 - 253 %8 1998/10// %G eng %R 10.1109/RELDIS.1998.740506 %0 Journal Article %J IEEE Transactions on Knowledge and Data Engineering %D 1998 %T Techniques for update handling in the enhanced client-server DBMS %A Delis,A. %A Roussopoulos, Nick %K client disk managers %K client resources %K client-server computing paradigm %K client-server systems %K Computational modeling %K Computer architecture %K concurrency control %K data pages %K Database systems %K distributed databases %K enhanced client-server DBMS %K Hardware %K Local area networks %K long-term memory %K main-memory caches %K Network servers %K operational spaces %K Personal communication networks %K server update propagation techniques %K Transaction databases %K update handling %K Workstations %K Yarn %X The Client-Server computing paradigm has significantly influenced the way modern Database Management Systems are designed and built. In such systems, clients maintain data pages in their main-memory caches, originating from the server's database. The Enhanced Client-Server architecture takes advantage of all the available client resources, including their long-term memory. Clients can cache server data into their own disk units if these data are part of their operational spaces. However, when updates occur at the server, a number of clients may need to not only be notified about these changes, but also obtain portions of the updates as well. In this paper, we examine the problem of managing server imposed updates that affect data cached on client disk managers.
We propose a number of server update propagation techniques in the context of the Enhanced Client-Server DBMS architecture, and examine the performance of these strategies through detailed simulation experiments. In addition, we study how the various settings of the network affect the performance of these policies. %B IEEE Transactions on Knowledge and Data Engineering %V 10 %P 458 - 476 %8 1998/06//May %@ 1041-4347 %G eng %N 3 %R 10.1109/69.687978