Next: Acknowledgements Up: Parallel Algorithms for Personalized Communication and Sorting With an Experimental Previous: References

Sorting Benchmarks

 

Our eight sorting benchmarks are defined as follows, in which MAX is (2^31 - 1) for integers and approximately 1.8 × 10^308 for doubles:

  1. Uniform [U], a uniformly distributed random input, obtained by calling the C library random number generator random(). This function, which returns integers in the range 0 to (2^31 - 1), is initialized by each processor P_i with the value (23 + 1001i). For the double data type, we "normalize" these values by first assigning the integer returned by random() a randomly chosen sign bit and then scaling the result by MAX/(2^31 - 1).
  2. Gaussian [G], a Gaussian distributed random input, approximated by adding the results of four calls to random() and then dividing by four. For the double data type, we first normalize the values returned by random() in the manner described for [U].
  3. Zero [Z], a zero entropy input, created by setting every value to a constant such as zero.
  4. Bucket Sorted [B], an input that is sorted into p buckets, obtained by setting the first n/p^2 elements at each processor to be random numbers between 0 and (MAX/p - 1), the second n/p^2 elements at each processor to be random numbers between MAX/p and (2MAX/p - 1), and so forth.
  5. g-Group [g-G], an input created by first dividing the processors into groups of consecutive processors of size g, where g can be any integer which partitions p evenly. If we index these groups in consecutive order from 0 up to (p/g - 1), then for group j we set the first n/(pg) elements at each processor to be random numbers between ((jg + p/2) mod p)(MAX/p) and (((jg + p/2) mod p) + 1)(MAX/p) - 1, the second n/(pg) elements at each processor to be random numbers between ((jg + p/2 + 1) mod p)(MAX/p) and (((jg + p/2 + 1) mod p) + 1)(MAX/p) - 1, and so forth.
  6. Staggered [S], created as follows: if the processor index i is less than p/2, then we set all n/p elements at that processor to be random numbers between (2i + 1)(MAX/p) and (2i + 2)(MAX/p) - 1. Otherwise, we set all n/p elements to be random numbers between (2i - p)(MAX/p) and (2i - p + 1)(MAX/p) - 1.
  7. Worst-Case Regular [WR], an input consisting of values between 0 and MAX designed to induce the worst possible load balance at the completion of our regular sorting. At the completion of sorting, the even-indexed processors will hold the largest number of elements permitted by our regular sorting algorithm's balance guarantee, whereas the odd-indexed processors will hold correspondingly fewer. See [15] for additional details.
  8. Randomized Duplicates [RD], an input of duplicates in which each processor fills an array T with range random values between 0 and (range - 1) (range is 32 for our work) whose sum is S. The first (T[1]/S)(n/p) values of the input are then set to a random value between 0 and (range - 1), the next (T[2]/S)(n/p) values of the input are then set to another random value between 0 and (range - 1), and so forth.
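To make the definitions concrete, the [U] and [B] generators can be sketched in C roughly as follows. This is our own illustrative sketch, not the code used in the experiments: the function names are our additions, while random(), the per-processor seed (23 + 1001i), and the roles of MAX, n, and p follow the definitions above.

```c
#include <stdlib.h>

/* Uniform [U]: processor i seeds the generator with (23 + 1001i) and
 * draws its n/p values uniformly from [0, 2^31 - 1] via random().     */
static void gen_uniform(long *a, long n_per_p, int i)
{
    srandom(23 + 1001 * i);
    for (long k = 0; k < n_per_p; k++)
        a[k] = random();                  /* 0 .. 2^31 - 1 */
}

/* Bucket Sorted [B]: the j-th block of n/p^2 elements at each processor
 * falls in bucket j's subrange [j*MAX/p, (j+1)*MAX/p - 1].            */
static void gen_bucket_sorted(long *a, long n_per_p, int p, long max)
{
    long block = n_per_p / p;             /* n/p^2 elements per block */
    long width = max / p;                 /* width of one bucket      */
    for (int j = 0; j < p; j++)
        for (long k = 0; k < block; k++)
            a[j * block + k] = j * width + random() % width;
}
```

With this layout, every processor contributes an equal share to every bucket of [B], which is what forces all processors to route to the same destination at the same time under a naive one-phase scheme.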

We selected these eight benchmarks for a variety of reasons. Previous researchers have used the Uniform, Gaussian, and Zero benchmarks, and so we too included them for purposes of comparison. But benchmarks should be designed to elicit the worst-case behavior from an algorithm, and in this sense the Uniform benchmark is not appropriate. For example, for sufficiently large n, one would expect that the optimal choice of the splitters in the Uniform benchmark would be those which partition the range of possible values into equal intervals. Thus, algorithms which try to guess the splitters might perform misleadingly well on such an input. In this respect, the Gaussian benchmark is more telling. But we also wanted benchmarks which would evaluate the cost of irregular communication. Thus, we wanted to include benchmarks for which an algorithm which uses a single phase of routing would find contention difficult or even impossible to avoid. A naive approach to rearranging the data would perform poorly on the Bucket Sorted benchmark. Here, every processor would try to route data to the same processor at the same time, resulting in poor utilization of communication bandwidth. This problem might be avoided by an algorithm in which at each processor the elements are first grouped by destination and then routed according to the specifications of a sequence of p destination permutations. Perhaps the most straightforward way to do this is by iterating over the possible communication strides. But such a strategy would perform poorly with the g-Group benchmark, for a suitably chosen value of g. In this case, using stride iteration, those processors which belong to a particular group all route data to the same subset of g destination processors. This subset of destinations is selected so that, when the g processors route to this subset, they choose the processors in exactly the same order, producing contention and possibly stalling.
Alternatively, one can synchronize the processors after each permutation, but this synchronization in turn reduces the available communication bandwidth. In the worst-case scenario, each processor needs to send data to a single processor a unique stride away. This is the case of the Staggered benchmark, and the result is a reduction of the communication bandwidth by a factor of p. Of course, one can correctly object that both the g-Group benchmark and the Staggered benchmark have been tailored to thwart a routing scheme which iterates over the possible strides, and that another sequence of permutations might be found which performs better. This is possible, but at the same time we are unaware of any single-phase deterministic algorithm which could avoid an equivalent challenge. The Worst-Case Regular benchmark was included to empirically evaluate both the worst-case running time expected for our regular sorting algorithm and the effect of the sampling rate on this performance. Finally, the Randomized Duplicates benchmark was included to assess the performance of the algorithms in the presence of duplicate values.



David R. Helman
helman@umiacs.umd.edu