A look at the dynamics of the JavaScript package ecosystem
Erik Wittern, Philippe Suter, Shriram Rajagopalan. Proceedings of the 13th International Conference on Mining Software Repositories, pp. 351--361, 2016

2013

Software is modular, and so is runtime state. We argue that
by allowing individual layers of the software stack
to store isolated runtime state, we cripple the
ability of systems to effectively scale or respond
to failures. Given the strong desire to build
elastic and highly available applications for the
cloud, we propose Slice, an abstraction that allows
applications to declare appropriate granularities of
scale-oriented state, and allows layers to contribute
the appropriate layer-specific data to those
containers. Slices can be transparently migrated and
replicated between application instances, thereby
simplifying design of elastic and highly available
systems, while retaining the modularity of modern
software.
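The abstraction above can be illustrated with a small sketch. This is a hypothetical model, not Slice's actual API: the application declares a container at its chosen granularity (here, one slice per user session), each layer contributes its own state to it, and migration moves the whole container, with every layer's contribution, between instances. The names `Slice`, `contribute`, and `migrate` are illustrative assumptions.

```python
class Slice:
    """Hypothetical container for scale-oriented state at one granularity."""

    def __init__(self, slice_id):
        self.slice_id = slice_id
        self._state = {}          # layer name -> that layer's state

    def contribute(self, layer, data):
        """A layer of the software stack attaches its runtime state."""
        self._state[layer] = data

    def layers(self):
        return sorted(self._state)


def migrate(slc, destination):
    """Model transparent migration: the slice travels as one unit,
    carrying every layer's contribution to another instance."""
    destination[slc.slice_id] = slc


local, remote = {}, {}
s = Slice("session-42")
s.contribute("http", {"cookie": "abc"})   # application-layer state
s.contribute("tls", b"\x16\x03")          # transport-layer state
local[s.slice_id] = s

migrate(local.pop("session-42"), remote)
print(remote["session-42"].layers())      # ['http', 'tls']
```

The point of the sketch is that each layer keeps its state modular while the slice, not the whole process image, is the unit of migration and replication.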

Middleboxes are being rearchitected to be service oriented,
composable, extensible, and elastic. Yet
system-level support for high availability (HA)
continues to introduce significant performance
overhead. In this paper, we propose Pico
Replication (PR), a system-level framework for
middleboxes that exploits their flow-centric
structure to achieve low overhead, fully
customizable HA. Unlike generic (virtual machine
level) techniques, PR operates at the flow level.
Individual flows can be checkpointed at very high
frequencies while the middlebox continues to process
other flows. Furthermore, each flow can have its
own checkpoint frequency, output buffer and target
for backup, enabling rich and diverse policies that
balance per-flow performance and utilization.
PR leverages OpenFlow to provide near instant
flow-level failure recovery, by dynamically
rerouting a flow's packets to its replication
target. We have implemented PR and a flow-based HA
policy. In controlled experiments, PR sustains
checkpoint frequencies of 1,000 Hz, an order of
magnitude improvement over current VM replication
solutions. As a result, PR drastically reduces the
overhead on end-to-end latency from 280% to 15.5%
and throughput overhead from 99.5% to 3.2%.
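The per-flow policy that PR enables can be sketched as follows. This is a simplified model under assumed names (`Flow`, `process`, `checkpoint`), not PR's implementation: each flow carries its own checkpoint frequency, output buffer, and backup target, and output is buffered until the flow's state has been copied to its backup (the usual output-commit rule), while other flows remain unaffected.

```python
class Flow:
    """Hypothetical per-flow HA policy: each flow has its own
    checkpoint frequency, output buffer, and backup target."""

    def __init__(self, flow_id, hz, backup):
        self.flow_id = flow_id
        self.period = 1.0 / hz     # checkpoint interval for this flow
        self.backup = backup       # replication target for this flow
        self.state = {}            # flow-local middlebox state
        self.out_buffer = []       # output held until state is backed up

    def process(self, packet):
        # Update only this flow's state; buffer the output so nothing
        # externally visible is released before the next checkpoint.
        self.state["pkts"] = self.state.get("pkts", 0) + 1
        self.out_buffer.append(packet)

    def checkpoint(self, backups):
        # Copy this flow's state (and nothing else) to its backup
        # target, then release the buffered output.
        backups.setdefault(self.backup, {})[self.flow_id] = dict(self.state)
        released, self.out_buffer = self.out_buffer, []
        return released


backups = {}
f = Flow("tcp:10.0.0.1:80", hz=1000, backup="replica-B")
f.process("pkt1")
f.process("pkt2")
released = f.checkpoint(backups)
print(len(released), backups["replica-B"][f.flow_id]["pkts"])  # 2 2
```

Because the checkpoint touches one flow's state rather than a whole VM image, different flows can mix aggressive and relaxed policies on the same middlebox, which is what makes the per-flow frequencies and targets described above practical.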