
We provide an integrated environment for users of clusters for scientific computation. The environment simplifies overall usage of the system and makes it easier for new users to learn and start using it. It is an attempt to integrate the various pieces of software that comprise the user environment of an open-source-based Linux cluster without introducing limiting constraints. We target this environment at users who consume the system's volatile resources (CPU time and network bandwidth) to do research in physics, chemistry, and other application areas. It requires no additional code in the application as long as the communication paradigm used is supported on the system; researchers with their own code thus find it easy to build and run their programs. This work also extends into the realm of system administration, where tools and the configuration of the system make it feasible to provide more informative services to users. This environment is currently in use on two clusters that are used extensively for research in physics and chemistry.

POST is a decentralized messaging infrastructure that supports a wide range of collaborative applications, including electronic mail, instant messaging, chat, news, shared calendars, and whiteboards. POST is highly resilient, secure, and scalable, and does not rely on dedicated servers. Instead, POST is built upon a peer-to-peer (p2p) overlay network consisting of participants' desktop computers. POST offers three simple and general services: (i) secure, single-copy message storage, (ii) metadata based on single-writer logs, and (iii) event notification. We sketch POST's basic messaging infrastructure and show how POST can be used to construct a cooperative, secure email service called ePOST.
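The first of POST's services, secure single-copy message storage, can be illustrated with a content-addressed store: keying each message by a hash of its content means identical messages sent to many recipients collapse into one stored copy. The sketch below is our illustration of that idea, not POST's actual API, and it omits the encryption and p2p replication POST layers on top:

```python
import hashlib

class SingleCopyStore:
    """Toy sketch of single-copy message storage (illustrative only):
    messages are keyed by a hash of their content, so inserting the
    same message twice occupies a single stored copy."""

    def __init__(self):
        self.store = {}

    def insert(self, content):
        # Content-derived key: identical messages map to the same key,
        # so the insert is idempotent and duplicates collapse.
        key = hashlib.sha1(content).hexdigest()
        self.store[key] = content
        return key

    def fetch(self, key):
        # Returns None if no message with this content key is stored.
        return self.store.get(key)
```

Recipients only need the small content key to retrieve the shared copy, which is what makes the single-copy design attractive for mail with large attachments.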

Distributed writable storage systems typically provide NFS-like semantics and unbounded persistence for files. We claim that for planetary-scale distributed services such facilities are unnecessary and impose an unwanted overhead in complexity and ease of management. Furthermore, wide-area services have requirements not met by existing solutions, in particular capacity management and a realistic model for billing and charging. We argue for ephemeral storage systems which meet these requirements, and present Palimpsest, an early example being deployed on PlanetLab. Palimpsest is small and simple, yet provides soft capacity, congestion-based pricing, automatic reclamation of space, and a security model suitable for a shared storage facility for wide-area services.
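The defining property of ephemeral storage, automatic reclamation of space, can be pictured as a fixed-capacity store in which the oldest data is silently overwritten as new data arrives, so clients must refresh data they want to keep. The toy sketch below illustrates that behavior; the class and method names are ours, and Palimpsest's actual mechanism (and its congestion-based pricing) is more involved:

```python
from collections import OrderedDict

class EphemeralStore:
    """Toy sketch of ephemeral storage (illustrative, not Palimpsest's
    API): a fixed-capacity block store where the oldest blocks are
    silently reclaimed as new writes arrive."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> data, oldest first

    def put(self, block_id, data):
        # Writing when full reclaims the least recently written block;
        # no explicit delete operation is ever needed.
        if block_id in self.blocks:
            del self.blocks[block_id]
        elif len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)
        self.blocks[block_id] = data

    def get(self, block_id):
        # A block may already have been reclaimed; callers must
        # tolerate loss or rewrite data they still care about.
        return self.blocks.get(block_id)
```

Under this model, how long data survives depends on the aggregate write rate, which is what makes congestion-based pricing a natural fit.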


High-end storage systems, such as those in large data centers, must service multiple independent workloads. Workloads often require predictable quality of service, despite the fact that they must compete with other rapidly changing workloads for access to common storage resources. We present a novel approach to providing performance guarantees in this highly volatile scenario in an efficient and cost-effective way. Façade, a virtual store controller, sits between hosts and storage devices in the network and throttles individual I/O requests from multiple clients so that devices do not saturate. We implemented a prototype and evaluated it using real workloads on an enterprise storage system. We also applied it to the particular case of emulating commercial disk arrays. Our results show that Façade satisfies performance objectives while making efficient use of storage resources, even in the presence of failures and bursty workloads with stringent performance requirements.
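The throttling idea can be sketched as a feedback loop: admit I/Os only while the device's outstanding-request count stays under a limit, and adjust that limit depending on whether observed latencies meet the client's target. This is our illustrative reading of the abstract, not Façade's actual algorithm:

```python
class IOThrottle:
    """Illustrative sketch of admission-controlled I/O throttling
    (our formulation, not Façade's algorithm): keep a bound on
    outstanding requests so the device never saturates, and adapt
    the bound from observed completion latencies."""

    def __init__(self, target_latency_ms, max_outstanding=32):
        self.target = target_latency_ms
        self.limit = max_outstanding   # current admission limit
        self.outstanding = 0           # requests in flight

    def try_admit(self):
        # Admit the I/O only if the device has headroom; otherwise
        # the caller queues the request and retries later.
        if self.outstanding < self.limit:
            self.outstanding += 1
            return True
        return False

    def complete(self, observed_latency_ms):
        # Crude feedback: tighten the limit on latency misses so the
        # device drains, relax it again when targets are being met.
        self.outstanding -= 1
        if observed_latency_ms > self.target:
            self.limit = max(1, self.limit - 1)
        else:
            self.limit += 1
```

A real controller would additionally track per-client targets and deadlines so that one bursty client cannot starve another; the sketch only shows the device-protection half of the problem.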

Smart Dust sensor networks – consisting of cubic-millimeter-scale sensor nodes capable of limited computation, sensing, and passive optical communication with a base station – are envisioned to fulfill complex large-scale monitoring tasks in a wide variety of application areas. In many potential Smart Dust applications, such as object detection and tracking, fine-grained node localization plays a key role. However, due to the unique characteristics of Smart Dust, traditional localization systems cannot be used. In this paper we present and analyse the Lighthouse location system, a novel laser-based location system for Smart Dust, which allows tiny dust nodes to autonomously estimate their location with high accuracy without additional infrastructure components besides a modified base station device. Using an early 2D prototype of the system, node locations could be estimated with an average accuracy of about 2% and an average standard deviation of about 0.7% of the node's distance to the base station.
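The principle behind a lighthouse-style location system can be sketched with simple geometry: a parallel beam of known width rotates at a known period, and a node that times how long it sees the beam per rotation can infer the angle the beam subtends at its position, and from that the distance to the rotation axis. The formula below is our reconstruction of that geometry under idealized assumptions (perfectly parallel beam, constant rotation speed); the variable names are ours, not the paper's:

```python
import math

def lighthouse_distance(beam_width, t_beam, t_turn):
    """Distance estimate from one idealized rotating 'lighthouse'.

    A parallel beam of width `beam_width` rotates with period
    `t_turn`; a node sees the beam for `t_beam` per rotation, so the
    angle under which it sees the beam is

        alpha = 2*pi * t_beam / t_turn

    and the chord geometry beam_width = 2*d*sin(alpha/2) gives the
    distance d to the rotation axis. Farther nodes see the beam for
    a shorter fraction of the rotation."""
    alpha = 2 * math.pi * t_beam / t_turn
    return beam_width / (2 * math.sin(alpha / 2))
```

Two such beams rotating about perpendicular axes would yield two distances and hence a 2D position, which matches the 2D prototype described above; the node itself needs only a photodetector and a timer.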


In this paper, we describe the collection and analysis of file system traces from a variety of different environments, including both UNIX and NT systems, clients and servers, and instructional and production systems. Our goal is to understand how modern workloads affect the ability of file systems to provide high performance to users. Because of the increasing gap between processor speed and disk latency, a file system's performance is largely determined by its disk behavior. Therefore we primarily focus on the disk I/O aspects of the traces. We find that more processes access files via the memory-map interface than through the read interface. However, because many processes memory-map a small set of files, these files are likely to be cached. We also find that file access has a bimodal distribution pattern: some files are written repeatedly without being read; other files are almost exclusively read. We develop a new metric for measuring file lifetime that accounts for files that are never deleted. Using this metric, we find that the average block lifetime for some workloads is significantly longer than the 30-second write delay used by many file systems. However, all workloads show lifetime locality: the same files tend to be overwritten multiple times.
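A lifetime metric that accounts for never-deleted files can be formulated with right-censoring: a block's lifetime runs from a write to the next overwrite or delete, and blocks still live at the end of the trace contribute a lower bound rather than being dropped. The following sketch is our formulation of such a metric, not necessarily the paper's exact definition:

```python
def block_lifetimes(events, trace_end):
    """Compute per-block lifetimes from a trace, with censoring.

    `events` is an iterable of (time, op, block_id) tuples with op in
    {"write", "delete"}. A block's lifetime runs from a write until
    the next overwrite or delete of the same block. Blocks still live
    at `trace_end` yield a right-censored observation: their lifetime
    is at least trace_end - write_time.

    Returns a list of (lifetime, censored) pairs."""
    last_write = {}   # block_id -> time of most recent live write
    lifetimes = []
    for t, op, blk in sorted(events):
        if blk in last_write:
            # Overwrite or delete ends the previous write's lifetime.
            lifetimes.append((t - last_write.pop(blk), False))
        if op == "write":
            last_write[blk] = t
    for blk, t in last_write.items():
        # Never overwritten or deleted: censored lower bound.
        lifetimes.append((trace_end - t, True))
    return lifetimes
```

Averaging only the uncensored pairs would systematically understate lifetime for workloads where most blocks are never deleted, which is exactly the bias such a metric is meant to avoid.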


We report on an observational study of user response following the OpenSSL remote buffer overflows of July 2002 and the worm that exploited them in September 2002. Immediately after the publication of the bug and its subsequent fix, we identified a set of vulnerable servers. In the weeks that followed we regularly probed each server to determine whether its administrator had applied one of the relevant fixes. We report two primary results. First, we find that administrators are generally very slow to apply the fixes: two weeks after the bug announcement, more than two thirds of the servers were still vulnerable. Second, we identify several weak predictors of user response and find that the pattern differs between the period following the release of the bug and that following the release of the worm.


Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes. This copyright notice must be included in the reproduced paper. USENIX acknowledges all trademarks herein.