Prefix computations on symmetric multiprocessors

Title: Prefix computations on symmetric multiprocessors
Publication Type: Conference Paper
Year of Publication: 1999
Authors: Helman DR, JaJa JF
Conference Name: Proceedings of the 13th International Parallel Processing Symposium and 10th Symposium on Parallel and Distributed Processing (IPPS/SPDP 1999)
Date Published: 1999/04
Keywords: prefix computations; symmetric multiprocessors; sparse ruling set approach; memory access patterns; POSIX threads; Unix; distributed algorithms; scalable performance; high-end server market; large scale multiprocessor systems; IBM SP-2; HP-Convex Exemplar; DEC AlphaServer; Silicon Graphics Power Challenge
Abstract

We introduce a new optimal prefix computation algorithm on linked lists which builds upon the sparse ruling set approach of Reid-Miller and Blelloch. Our algorithm is somewhat simpler, requires nearly half the number of memory accesses, and allows us to bound its complexity with high probability instead of merely on average. Moreover, whereas Reid-Miller and Blelloch (1996) targeted their algorithm for implementation on a vector multiprocessor architecture, we develop our algorithm for implementation on the symmetric multiprocessor architecture (SMP). These symmetric multiprocessors dominate the high-end server market and are currently the primary candidate for constructing large scale multiprocessor systems. Our prefix computation algorithm was implemented in C using POSIX threads and run on four symmetric multiprocessors: the IBM SP-2 (High Node), the HP-Convex Exemplar (S-Class), the DEC AlphaServer, and the Silicon Graphics Power Challenge. We ran our code using a variety of benchmarks which we identified to examine the dependence of our algorithm on memory access patterns. For some problems, our algorithm actually matched or exceeded the performance of the standard sequential solution using only a single thread. Moreover, in spite of the fact that the processors must compete for access to main memory, our algorithm still achieved scalable performance with up to 16 processors, which was the largest platform available to us.

DOI: 10.1109/IPPS.1999.760427