I do UNIX and Linux systems programming at the University of Maryland Institute for Advanced Computer Studies (UMIACS), though my official job title is "Computer Engineer". My primary project is building and managing a Linux cluster of 29 high-end nodes with a Gigabit Ethernet interconnect. One requirement is that users be able to boot their own kernels on the hosts, which is posing a number of administrative problems; I haven't had time to address this issue lately. Eventually, I intend to build a system with "virtualized" clusters carved out of a larger cluster resource, so that the cluster environment someone chooses to work with (for instance, MOSIX, space-shared and scheduled, or Scyld Beowulf) isn't tied to any given set of nodes.
Recently, we've added a 50-node cluster of low-end nodes, each with 150 to 300 GB of disk, for a total of 9 terabytes across the cluster.
I'm also responsible for managing and developing our HPSS installation, and I plan to tie it in with the virtualized clusters as well.