Papers

Smart phones should not just accompany their owners: they should
provide them with the data they want whenever and wherever they are.
This does not mean that the user should always be able to fetch data
on demand: wireless communication requires a significant amount of
energy; cellular bandwidth is often limited; and coverage is not
ubiquitous. Instead, scheduling data stream updates, e.g., podcast
downloads and photo uploads, should incorporate predictions of where
the user will be, when and where data will be needed, and when
transfer conditions are good (e.g., WiFi is available). The
challenges we have encountered while designing such a framework
include scheduling transmissions while respecting multiple
optimization goals, application integration, modeling user and data
stream behavior, and managing prefetched data. To better understand
user behavior and evaluate our framework, we are gathering traces of
smart phone use. Initial results show potential energy savings of
over 70%.
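
To make the tradeoff concrete, the following minimal sketch (in C;
the names, constants, and probabilities are illustrative, not part of
the framework) weighs the energy cost of transferring immediately
against the expected cost of deferring in the hope that WiFi appears
before the data is needed:

    #include <stdbool.h>
    #include <stdio.h>

    /* Rough relative per-byte energy costs: WiFi is far cheaper
       than cellular.  Real values would be measured.  */
    #define ENERGY_PER_BYTE_WIFI     1.0
    #define ENERGY_PER_BYTE_CELLULAR 5.0

    /* Decide whether to transfer now or defer.  PROB_WIFI is the
       predicted probability of encountering WiFi before the data is
       needed, derived, e.g., from traces of the user's movements.  */
    static bool
    transfer_now (double bytes, bool wifi_available, double prob_wifi)
    {
      double cost_now = bytes
        * (wifi_available ? ENERGY_PER_BYTE_WIFI : ENERGY_PER_BYTE_CELLULAR);

      /* Expected cost of deferring: WiFi if it appears in time,
         otherwise a forced, last-minute cellular transfer.  */
      double cost_later = prob_wifi * bytes * ENERGY_PER_BYTE_WIFI
        + (1.0 - prob_wifi) * bytes * ENERGY_PER_BYTE_CELLULAR;

      return cost_now <= cost_later;
    }

    int
    main (void)
    {
      /* A 40 MB podcast, no WiFi now, 80% chance of WiFi in time.  */
      printf ("transfer now: %s\n",
              transfer_now (40e6, false, 0.8) ? "yes" : "no");
      return 0;
    }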

We present a storage management framework for Web 2.0 services that
places users back in control of their data. Current Web services
complicate data management through data lock-in and lack usable
protection mechanisms, making cross-service sharing risky. Our
framework gives multiple Web services shared access to a single
copy of the data, which resides on a personal storage repository
that the user acquires from a cloud storage provider. Access control
is based on hierarchically filtered views, which simplify
cross-cutting policies and enable least-privilege management. We
also integrate a powerbox, which allows applications to request
additional authority at run time, thereby enabling applications
running under a least-privilege regime to provide useful open and
save-as dialogs.
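
As a rough sketch of how hierarchically filtered views narrow
authority along a chain of delegation, consider the following toy
path-prefix model (our illustration only; the framework's views are
richer than a prefix match):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Permissions a view may grant.  */
    enum { PERM_READ = 1, PERM_WRITE = 2 };

    /* A filtered view: grants at most PERMS on objects under
       SUBTREE.  A view derives from a parent view and can only
       narrow the parent's authority.  */
    struct view
    {
      const struct view *parent;   /* NULL for the repository root.  */
      const char *subtree;         /* Path prefix the view exposes.  */
      unsigned perms;              /* Permission mask.  */
    };

    /* PERM on PATH is allowed only if this view and, hierarchically,
       every ancestor allow it.  */
    static bool
    view_allows (const struct view *v, const char *path, unsigned perm)
    {
      for (; v != NULL; v = v->parent)
        if (strncmp (path, v->subtree, strlen (v->subtree)) != 0
            || (v->perms & perm) != perm)
          return false;
      return true;
    }

    int
    main (void)
    {
      struct view root = { NULL, "/", PERM_READ | PERM_WRITE };
      /* A photo-sharing service gets read-only access to /photos.  */
      struct view photos = { &root, "/photos/", PERM_READ };

      printf ("%d\n", view_allows (&photos, "/photos/cat.jpg", PERM_READ));
      printf ("%d\n", view_allows (&photos, "/photos/cat.jpg", PERM_WRITE));
      printf ("%d\n", view_allows (&photos, "/mail/inbox", PERM_READ));
      return 0;
    }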

Many programs could improve their performance by adapting their
memory use according to availability. If memory is available, a web
browser or DNS server could use a larger cache; if there is memory
pressure, they could use a smaller one. Memory adaptations are also
becoming increasingly important for scalability: server
consolidation is being done more aggressively and end-users want to
run their applications on a wider range of hardware configurations.
Today, memory-based adaptations are fragile: general-purpose
operating systems do not indicate how much memory a process should
or could use.

Enabling efficient adaptations requires rethinking how memory is
allocated among competing programs, and adding a feedback mechanism
that allows applications to make informed adaptations. In this
paper, we present the design and implementation of a minimum-funding
revocation scheduler for memory. We describe a novel algorithm to
compute the amount of memory available to each resource principal
based on its scheduling parameters and the current configuration,
explain how to communicate this information to the principals, and
show how they can exploit it. We also present a new mechanism for
accounting shared memory based on access frequency. We demonstrate
the effectiveness of these techniques by showing that multiple
applications modified to exploit this information use the full memory
available to them, and smoothly vary their demand as availability
changes. This also results in significant increases in throughput
relative to the conservative management techniques currently used.
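
The core of the allocation computation can be sketched as follows
(an illustrative simplification: entitlements are proportional to
funding, and revocation targets the principal furthest over its
entitlement; the algorithm in the paper also takes the current
configuration into account):

    #include <stdio.h>
    #include <stddef.h>

    /* A resource principal: a scheduling parameter (its funding)
       and its current allocation, in frames.  */
    struct principal
    {
      const char *name;
      double funding;
      size_t allocated;
    };

    /* A principal's entitlement is proportional to its funding.  */
    static size_t
    entitlement (const struct principal *p, const struct principal *all,
                 size_t n, size_t total_frames)
    {
      double sum = 0;
      for (size_t i = 0; i < n; i++)
        sum += all[i].funding;
      return (size_t) (total_frames * p->funding / sum);
    }

    /* When memory must be revoked, take it from the principal that
       most exceeds its entitlement; the new amounts are then
       communicated back so principals can adapt their demand.  */
    static struct principal *
    revocation_victim (struct principal *all, size_t n, size_t total_frames)
    {
      struct principal *victim = NULL;
      double worst = 0;
      for (size_t i = 0; i < n; i++)
        {
          double excess = (double) all[i].allocated
            - (double) entitlement (&all[i], all, n, total_frames);
          if (excess > worst)
            {
              worst = excess;
              victim = &all[i];
            }
        }
      return victim;   /* NULL: no one exceeds its entitlement.  */
    }

    int
    main (void)
    {
      struct principal ps[] = { { "browser", 2.0, 700 },
                                { "dns",     1.0, 100 } };
      size_t total = 900;

      for (size_t i = 0; i < 2; i++)
        printf ("%s: entitled to %zu frames\n", ps[i].name,
                entitlement (&ps[i], ps, 2, total));
      struct principal *v = revocation_victim (ps, 2, total);
      printf ("revoke from: %s\n", v ? v->name : "(no one)");
      return 0;
    }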

Many programs could improve their performance by adapting their memory
use. To ensure that an adaptation increases utility, a program needs
to know not only how much memory is available but how much it should
use at any given time. This is difficult on commodity operating
systems, which provide little information to inform these decisions.
As evidence of the importance of adaptations to program performance,
many programs currently adapt using ad hoc heuristics to
control their behavior.

Supporting adaptive applications has become pressing: the range of
hardware that applications are expected to run on---from smart phones
and netbooks to high-end desktops and servers---is increasing, as is
the dynamic nature of workloads stemming from server consolidation.
The practical result is that ad hoc heuristics become less effective
as assumptions about the environment become less reliable, and memory
is consequently more often under- or over-utilized. Failing to adapt
limits the degree of possible consolidation. We contend that for
programs to make the best use of available resources, research is
needed into how the operating system can better support aggressive
adaptations.

General-purpose operating systems not only fail to provide adaptive
applications the information they need to intelligently adapt, but
also schedule resources in such a way that, were applications to
adapt aggressively, resources would be misallocated. The problem is
that these systems use demand as the primary indicator of utility,
yet demand is a poor proxy for the utility of adaptive applications.

We present a resource management framework appropriate for traditional
as well as adaptive applications. The primary difference from current
schedulers is the use of stakeholder preferences in addition to
demand. We also show how to revoke memory, compute the amount of
memory available to each principal, and account for shared memory.
Finally, we introduce a prototype system, Viengoos, and present some
benchmarks that demonstrate that it can efficiently support multiple
aggressively adaptive applications simultaneously.
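
To illustrate the application side, the following sketch (our own
illustrative code, not Viengoos's interface) shows a cache that
re-targets its capacity whenever the advertised availability
changes, keeping a floor so it remains functional under pressure:

    #include <stdio.h>
    #include <stddef.h>

    /* An adaptive cache: tracks a target size derived from the
       availability the system reports for this principal.  */
    struct cache
    {
      size_t capacity;   /* Current target size, in bytes.  */
    };

    /* Adapt the target size to the advertised availability.  */
    static void
    cache_adapt (struct cache *c, size_t available)
    {
      const size_t floor = 1 << 20;   /* Keep at least 1 MiB.  */
      c->capacity = available > floor ? available : floor;
      /* A real cache would now evict entries until it fits.  */
    }

    int
    main (void)
    {
      struct cache c = { 0 };
      /* These values would arrive via the feedback mechanism; here
         they are simply simulated.  */
      size_t availability[] = { 64 << 20, 16 << 20, 512 << 10 };

      for (size_t i = 0; i < 3; i++)
        {
          cache_adapt (&c, availability[i]);
          printf ("available %zu -> capacity %zu\n",
                  availability[i], c.capacity);
        }
      return 0;
    }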

Current operating systems provide inadequate mechanisms to protect
user data. The main problem is that all of a user's programs run in
the same trust domain. A better model is one which is consistent
with the principle of least authority (POLA). An object-capability
system may be able to better achieve this: capabilities bundle
authorization and designation, thereby easing delegation and the
dynamic creation and management of fine-grained trust domains.

Despite this, object-capability designs are rejected due to a
perceived excessive overhead resulting from the degree of
decomposition and the corresponding rise in the amount of
inter-process communication (IPC). Although the work on L4 has
demonstrated that IPC can be made extremely fast, L4 has historically
lacked mechanisms to efficiently delegate fine-grained authority.

In this paper, we present a capability transfer mechanism that
exploits the memory management unit (MMU) present in all modern
commodity hardware by using it to build a content addressable memory
(CAM) to expedite capability resolution. For the common case of an
IPC carrying a single capability, we observe a 2% increase in
message transfer time compared to a less flexible but more commonly
used IPC implementation based on capability registers. Relative to
the time taken to transfer a similarly sized message containing just
data on L4Ka::Pistachio, we observe a 16% increase.
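
The resolution step can be pictured with the following heavily
simplified, software-only sketch (illustrative names throughout): a
sender names a capability by an address in a capability space, and
resolution maps that address to a slot. In the actual mechanism the
page tables serve as the CAM, so resolution is performed by the MMU;
the table lookup below is merely a stand-in for that hardware walk.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* A kernel object that a capability may designate.  */
    struct object { const char *name; };

    /* A capability: a designated object plus access rights.  */
    struct cap { struct object *obj; unsigned rights; };

    #define CAP_SLOTS 256

    /* A per-task capability space.  */
    struct cap_space { struct cap slots[CAP_SLOTS]; };

    /* Resolve a capability address to a slot.  In the paper, this
       lookup is done by the MMU via the page tables; here it is a
       simple table index.  */
    static struct cap *
    cap_resolve (struct cap_space *space, uintptr_t addr)
    {
      struct cap *c = &space->slots[addr % CAP_SLOTS];
      return c->obj != NULL ? c : NULL;   /* NULL: empty slot.  */
    }

    int
    main (void)
    {
      static struct object console = { "console" };
      static struct cap_space space;

      space.slots[42] = (struct cap) { &console, 0x3 };

      struct cap *c = cap_resolve (&space, 42);
      printf ("resolved: %s\n", c ? c->obj->name : "(invalid)");
      return 0;
    }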

The GNU Hurd's design was motivated by a desire to rectify a number
of observed shortcomings in Unix. Foremost among these is that many
policies that limit users exist simply as remnants of the design of
the system's mechanisms and their implementation. To increase
extensibility and integration, the Hurd adopts an object-based
architecture and defines interfaces, in particular those for the
composition of and access to name spaces, that are virtualizable.

This paper is first a presentation of the Hurd's design goals and a
characterization of its architecture primarily as it represents a
departure from Unix's. We then critique the architecture and assess
it in terms of the user environment of today, focusing on security.
Then follows an evaluation of Mach, the microkernel on which the
Hurd is built, emphasizing the design constraints which Mach imposes
as well as a number of deficiencies its design presents for
multiserver systems. Finally, we reflect on the properties
such a system appears to require.

Commodity operating systems fail to meet the security, resource
management, and integration expectations of users. We propose a
unified solution based on a capability framework, as it supports
fine-grained objects, straightforward access propagation, and
virtualizable interfaces, and we explore how to improve resource use
via access decomposition and policy refinement with minimal
interposition. We argue that only a small, static number of
scheduling policies is needed in practice and advocate hierarchical
policy specification and central realization.

Coyotos is a security microkernel. It is a microkernel in the sense
that it is a minimal protected platform on which a complete
operating system can be constructed. It is a security microkernel in
the sense that it is a minimal protected platform on which higher
level security policies can be constructed.

Through the use of a multiserver capability-based architecture, the
GNU Hurd has attempted to increase security and flexibility relative
to traditional Unix-like operating systems. This shift away from a
monolithic design requires a reevaluation of conventional operating
system praxis to determine its degree of continued applicability.
Resource scheduling practice appears particularly obsolete in this
regard: to make smarter scheduling decisions, monolithic systems
cross component boundaries to gain insight into application
behavior. This introspection is incompatible with a multiserver
architecture, and its elimination, as observed in Mach, the current
microkernel used by the GNU Hurd, results in noticeable performance
degradation. To this end, I propose that, rather than have the
operating system provide virtualized resources, i.e., schedule the
contents of resources on behalf of applications, it offer principals
near-raw access to resources, which they must multiplex as required,
thereby relieving, e.g., the memory manager of paging decisions. The
resource managers must still partition the physical resources among
the competing principals. For this, I suggest a market-based
solution in which principals have a periodically renewed credit
allowance and lease the required resources. This approach also suits
adaptive and soft real-time applications.
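
A minimal sketch of the credit-and-lease idea follows (the names,
prices, and quantities are illustrative only, not the proposal's
interface): each period every principal's credit is renewed from its
allowance, and frames are leased at the going price until the credit
runs out.

    #include <stdio.h>
    #include <stddef.h>

    /* A principal with a periodically renewed credit allowance that
       it spends to lease memory frames.  */
    struct principal
    {
      const char *name;
      double allowance;   /* Credit granted each period.  */
      double credit;      /* Credit remaining this period.  */
      size_t leased;      /* Frames currently leased.  */
    };

    /* At the start of each period, renew every principal's credit.  */
    static void
    renew (struct principal *ps, size_t n)
    {
      for (size_t i = 0; i < n; i++)
        ps[i].credit = ps[i].allowance;
    }

    /* Lease up to FRAMES frames at the current price, limited by
       what the principal can afford.  */
    static size_t
    lease (struct principal *p, size_t frames, double price)
    {
      if (frames * price > p->credit)
        frames = (size_t) (p->credit / price);
      p->credit -= frames * price;
      p->leased += frames;
      return frames;
    }

    int
    main (void)
    {
      struct principal ps[] = { { "media-player", 10.0, 0, 0 },
                                { "indexer",       2.0, 0, 0 } };
      double price = 0.1;   /* Would float with aggregate demand.  */

      renew (ps, 2);
      for (size_t i = 0; i < 2; i++)
        printf ("%s leased %zu frames\n", ps[i].name,
                lease (&ps[i], 80, price));
      return 0;
    }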

Intellectual property presupposes the individuality of the
author by asserting that the author is unique and is able to
transcend the realm of the mundane to discover a novel idea.
Intellectual property denies T. S. Eliot's assertion in
``Tradition and the Individual Talent'' that the ``poet's mind
is in fact a receptacle for seizing and storing up numberless
feelings, phrases, images, which remain there until all the
particles which can unite to form a new compound are present
together.'' Intellectual property is a logical impossibility
for its fundamental tenet is that thoughts and ideas, once
shared, can be owned and controlled by an individual.

Contact

You can email me at neal@walfield.org.

If you send mail to spamtrap@walfield.org, it
will not reach me and will be automatically classified as spam.