In this paper we give a graph-based decision procedure for a calculus with sum and product types. Although our motivation comes from the Bird-Meertens approach to reasoning algebraically about functional programs, the language used here can be seen as the internal language of a category with binary products and coproducts. As such, the decision procedure presented has independent interest. A standard approach based on term rewriting would work modulo a set of equations; the present work proposes a simpler approach, based on graph-rewriting. We show in turn how the system covers reflection equational laws, fusion laws, and cancellation laws.

Proceedings of the Third International Workshop on Term Graph Rewriting (TERMGRAPH 2006); Fundação para a Ciência e a Tecnologia (FCT)

Executing untrusted code while preserving security requires enforcement of memory and control-flow safety policies: untrusted code must be prevented from modifying memory or executing code except as explicitly allowed. Software-based fault isolation (SFI) or "sandboxing" enforces those policies by rewriting the untrusted code at the level of individual instructions. However, the original sandboxing technique of Wahbe et al. is applicable only to RISC architectures, and other previous work is either insecure, or has not been described in enough detail to give confidence in its security properties. We present a novel technique that allows sandboxing to be easily applied to a CISC architecture like the IA-32. The technique can be verified to have been applied at load time, so that neither the rewriting tool nor the compiler needs to be trusted. We describe a prototype implementation which provides a robust security guarantee, is scalable to programs of any size, and has low runtime overheads. Further, we give a machine-checked proof that any program approved by the verification algorithm is guaranteed to respect the desired safety property.
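The address-masking idea behind classic SFI can be sketched in a few lines (a simplified Python model with made-up sandbox bounds, not the paper's verified IA-32 rewriter):

```python
# Illustrative sketch of the classic sandboxing idea: force every
# address into the sandbox region by masking, so no write can escape,
# whatever address the untrusted code computes.
# SANDBOX_BASE and SANDBOX_SIZE are hypothetical example values.
SANDBOX_BASE = 0x2000_0000        # sandbox region starts here
SANDBOX_SIZE = 0x0100_0000        # 16 MiB, a power of two
OFFSET_MASK  = SANDBOX_SIZE - 1   # keeps only the in-sandbox offset

def sandboxed_address(addr: int) -> int:
    """Rewrite of a memory access: clamp addr into the sandbox."""
    return SANDBOX_BASE | (addr & OFFSET_MASK)

# Any address, hostile or not, lands inside [BASE, BASE + SIZE):
for addr in (0x0, 0x2000_0042, 0xFFFF_FFFF):
    a = sandboxed_address(addr)
    assert SANDBOX_BASE <= a < SANDBOX_BASE + SANDBOX_SIZE
```

Because the masking is a fixed instruction pattern, a load-time verifier can check its presence without trusting the rewriter, which is the property the abstract emphasizes.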

The computer memory hierarchy is becoming increasingly powerful but also more complex to optimize. Run-time adaptation emerges as a promising strategy. For software, it means adjusting data behavior at different phases of an execution. For hardware, it means reconfiguring the memory system at different times. This paper presents a method that predicts the memory phases of a program when it runs. The analysis first detects memory phases in a profiling run using variable-distance sampling, wavelet filtering, and optimal phase partition. It then identifies the phase hierarchy through grammar compression. Finally, it inserts phase markers into a program through binary rewriting. The technique is a unique combination of locality profiling and phase prediction. The new method is tested on a wide range of programs against programmer manual analysis and pure hardware monitoring. It predicts program executions that are thousands of times longer than profiling runs. The average length of the predicted phases is over 700 million instructions, and the length is predicted with 99.5% accuracy. When tested for cache adaptation, it reduces the cache size by 40% without increasing the number of cache misses. These results suggest that phase prediction can significantly improve the many adaptation techniques now used for increasing performance...
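As a toy stand-in for the grammar-compression step that recovers the phase hierarchy, one can detect the period of a repeating phase-ID sequence (trace and phase IDs below are hypothetical):

```python
def smallest_period(seq):
    """Smallest p such that seq repeats with period p -- a toy
    stand-in for detecting an outer phase built from inner phases."""
    n = len(seq)
    for p in range(1, n + 1):
        if all(seq[i] == seq[i - p] for i in range(p, n)):
            return p
    return n

# Hypothetical per-interval phase IDs from a profiling run:
trace = ["A", "B", "B", "A", "B", "B", "A", "B", "B"]
assert smallest_period(trace) == 3   # outer phase = one A, then two Bs
```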

The behavior of service-oriented programs depends strongly on the input. A compiler, for example, behaves differently when compiling different functions. Similar input dependences can be seen in interpreters, compression and encoding utilities, databases, and dynamic content servers. Because their behavior is hard to predict, these programs pose a special challenge for dynamic adaptation mechanisms, which attempt to enhance performance by modifying hardware or software to fit application needs. We present a new technique to detect phases—periods of distinctive behavior—in service-oriented programs. We begin by using special inputs to induce a repeating pattern of behavior. We then employ frequency-based filtering on basic block traces to detect both top-level and second-level repetitions, which we mark via binary rewriting. When the instrumented program runs, on arbitrary input, the inserted markers divide execution into phases of varying length. Experiments with service-oriented programs from the Spec95 and Spec2K benchmark suites indicate that program behavior within phases is surprisingly predictable in many (though not all) cases. This in turn suggests that dynamic adaptation, either in hardware or in software, may be applicable to a wider class of programs than previously believed.
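The frequency-based filtering step can be illustrated with a toy model: given a basic-block trace induced by a known number of identical requests, blocks whose execution count matches the request count are candidate phase-boundary markers (trace and block names are hypothetical):

```python
from collections import Counter

def candidate_markers(trace, num_requests):
    """Basic blocks executing exactly once per induced request --
    candidates for marking phase boundaries via binary rewriting."""
    counts = Counter(trace)
    return sorted(b for b, c in counts.items() if c == num_requests)

# Hypothetical basic-block trace for 3 identical requests:
trace = ["entry", "parse", "loop", "loop", "emit",
         "entry", "parse", "loop", "emit",
         "entry", "parse", "loop", "loop", "loop", "emit"]
assert candidate_markers(trace, 3) == ["emit", "entry", "parse"]
```

Blocks inside data-dependent loops (here `loop`) are filtered out because their counts vary with the input, which matches the abstract's point that the special inputs induce a repeating pattern.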

Software has spent the bounty of Moore's law by solving harder problems and exploiting abstractions, such as high-level languages, virtual machine technology, binary rewriting, and dynamic analysis. Abstractions make programmers more productive and program

In many software attacks, inducing an illegal control-flow transfer in the target system is one common step. Control-Flow Integrity (CFI) protects a software system by enforcing a pre-determined control-flow graph. In addition to providing strong security, CFI enables static analysis on low-level code. This paper evaluates whether CFI-enabled static analysis can help build efficient and validated data sandboxing. Previous systems generally sandbox memory writes for integrity, but avoid protecting confidentiality due to the high overhead of sandboxing memory reads. To reduce overhead, we have implemented a series of optimizations that remove sandboxing instructions if they are proven unnecessary by static analysis. On top of CFI, our system adds only 2.7% runtime overhead on SPECint2000 for sandboxing memory writes and adds a modest 19% for sandboxing both reads and writes. We have also built a principled data-sandboxing verifier based on range analysis. The verifier checks the safety of the results of the optimizer, which removes the need to trust the rewriter and optimizer. Our results show that the combination of CFI and static analysis has the potential to bring down the cost of general inlined reference monitors, while maintaining strong security.; Engineering and Applied Sciences
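The optimizer's use of range analysis can be sketched abstractly: if an interval analysis proves an address always falls inside the permitted region, the guarding sandboxing instruction can be elided (a simplified model with hypothetical region bounds, not the paper's verifier):

```python
# Toy interval (range) analysis in the spirit of the abstract: a
# sandboxing instruction is needed only when the analysis cannot
# prove the address interval lies inside the permitted data region.
# Region bounds are hypothetical example values.
REGION = (0x1000, 0x2000)  # [lo, hi) of the permitted data region

def needs_sandboxing(addr_range):
    """addr_range = (lo, hi): the interval the address is proven
    to lie in; returns False when the guard can safely be removed."""
    lo, hi = addr_range
    in_region = REGION[0] <= lo and hi <= REGION[1]
    return not in_region

assert needs_sandboxing((0x1100, 0x1200)) is False  # provably safe: elide
assert needs_sandboxing((0x0F00, 0x1200)) is True   # may escape: keep guard
```

A separate verifier re-running the same analysis on the optimizer's output is what removes the optimizer from the trusted computing base.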

This paper constructs WOM codes that combine rewriting and error correction
for mitigating the reliability and the endurance problems in flash memory. We
consider a rewriting model that is of practical interest to flash applications
where only the second write uses WOM codes. Our WOM code construction is based
on binary erasure quantization with LDGM codes, where the rewriting uses
message passing and has the potential to share efficient hardware
implementations with LDPC codes in practice. We show that the coding scheme
achieves the capacity of the rewriting model. Extensive simulations show that
the rewriting performance of our scheme compares favorably with that of polar
WOM code in the rate region where high rewriting success probability is
desired. We further augment our coding schemes with error correction
capability. By drawing a connection to the conjugate code pairs studied in the
context of quantum error correction, we develop a general framework for
constructing error-correction WOM codes. Under this framework, we give an
explicit construction of WOM codes whose codewords are contained in BCH codes.; Comment: Submitted to ISIT 2015
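For intuition about rewriting codes generally, the classic Rivest-Shamir 2-write WOM code (not this paper's LDGM construction) stores two bits twice in three write-once cells:

```python
# Rivest--Shamir 2-write WOM code: two bits written twice into three
# binary cells that can only flip 0 -> 1 (as in flash before erasure).
FIRST  = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
SECOND = {0: (1, 1, 1), 1: (0, 1, 1), 2: (1, 0, 1), 3: (1, 1, 0)}

def decode(cells):
    """Weight <= 1 means a first-write codeword, else second-write."""
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(v for v, c in table.items() if c == cells)

def write2(cells, v):
    """Second write: reuse the cells if they already decode to v,
    otherwise move to the complement codeword (only sets bits)."""
    return cells if decode(cells) == v else SECOND[v]

for v1 in range(4):
    c1 = FIRST[v1]
    assert decode(c1) == v1
    for v2 in range(4):
        c2 = write2(c1, v2)
        assert decode(c2) == v2
        # monotone: no cell ever goes from 1 back to 0
        assert all(b1 <= b2 for b1, b2 in zip(c1, c2))
```

Two writes of 2 bits into 3 cells gives rate 4/3, already beating the 1 bit/cell of writing once; the paper's construction pursues the same trade-off at capacity.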

Rewriting systems are often defined as binary relations over a given set of
objects. This simple definition is used to describe various properties of
rewriting such as termination, confluence, normal forms etc. In this paper, we
introduce a new notion of abstract rewriting in the framework of categories.
Then, we define the functoriality property of rewriting systems. This property
is sometimes called vertical composition. We show that most graph
transformation systems are functorial and provide a counter-example of a
graph transformation system that is not functorial.
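The opening definition can be made concrete: a rewriting system as a binary relation over a finite set, with normal forms read off directly (the example relation is made up):

```python
# An abstract rewriting system as a binary relation over a finite set;
# normal forms are the elements with no outgoing rewrite step.
R = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
objects = {"a", "b", "c", "d"}

def successors(x):
    """One-step rewrites of x under the relation R."""
    return {y for (u, y) in R if u == x}

normal_forms = {x for x in objects if not successors(x)}
assert normal_forms == {"d"}

# This tiny system is confluent: both one-step successors of "a"
# rewrite to the common object "d".
assert successors("b") == successors("c") == {"d"}
```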

Ananda Mohan suggested that the first New Chinese Remainder Theorem
introduced by Wang can be derived from the constructive proof of the well-known
Chinese Remainder Theorem (CRT) and claimed that Wang's approach is the same as
the one proposed earlier by Huang. Ananda Mohan's proof is however erroneous
and we show here that Wang's New CRT I is a rewriting of an algorithm
previously sketched by Hitz and Kaltofen.
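The constructive proof of the CRT referred to here is the standard Bezout-coefficient construction, sketched below:

```python
# Constructive CRT: extended Euclid yields Bezout coefficients
# u*m1 + v*m2 = 1, from which the unique solution mod m1*m2 is built.
def ext_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(r1, m1, r2, m2):
    """x with x = r1 (mod m1) and x = r2 (mod m2), for coprime m1, m2."""
    g, u, v = ext_gcd(m1, m2)
    assert g == 1
    # v*m2 = 1 (mod m1) and u*m1 = 1 (mod m2), so each term fixes one residue
    return (r1 * v * m2 + r2 * u * m1) % (m1 * m2)

x = crt(2, 3, 3, 5)   # x = 2 (mod 3) and x = 3 (mod 5)
assert x == 8
```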

Recent Microsoft security bulletins show that kernel vulnerabilities are
becoming increasingly important security threats. Despite extensive security
mitigations, many kernel vulnerabilities are still exploitable.
Successful kernel exploitation typically grants the attacker maximum privilege
level and results in total machine compromise.
To protect against kernel exploitation, we have developed a tool which
statically rewrites the Microsoft Windows kernel as well as other kernel level
modules. Such rewritten binary files allow us to monitor control flow transfers
during operating system execution. This allows us to detect whether a given
control-flow transfer is valid or should be considered an attack
attempt. Our solution is especially directed towards preventing remote kernel
exploitation attempts. Additionally, many of the local privilege escalation
attacks are also blocked (also due to additional mitigation techniques we have
implemented). Our tool was tested with Microsoft Windows XP, Windows Vista and
Windows 7 (under both virtual and physical machines) on IA-32 compatible
processors. Our tool is also completely standalone and does not require
any third-party software.; Comment: 10 pages...

Two cornerstones of the Kasteleyn method are: 1. rewriting the Ising
partition function as the dimer partition function, that is, the generating
function of the perfect matchings, and 2. expressing the dimer partition
function of planar graphs as the determinant. This paper initiates the
3-dimensional Kasteleyn method.
We show that the weight enumerator of any binary linear code is polynomially
reducible to the permanent of a 3-dimensional matrix (3-matrix). In analogy to
the standard (2-dimensional) matrices we say that a 3-matrix A is Kasteleyn if
signs of its entries may be changed so that, denoting by A' the resulting
3-matrix, we have per(A) = det(A'). We show that in contrast with the
2-dimensional case the class of Kasteleyn 3-matrices is rich; namely, for each
matrix M there is a Kasteleyn 3-matrix A so that per(M) = per(A) = det(A').
In particular, the dimer partition function of a finite 3-dimensional cubic
lattice may be written as the determinant of the vertex-adjacency 3-matrix of a
2-dimensional simplicial complex which preserves the natural embedding of the
cubic lattice.; Comment: 10 pages, 6 figures
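The smallest instance of the Kasteleyn sign-change phenomenon: for the all-ones 2x2 matrix, negating one entry makes the determinant equal the permanent:

```python
# Kasteleyn signing in miniature: the permanent of M equals the
# determinant of a suitably signed copy Mk of M.
M  = [[1, 1], [1, 1]]
Mk = [[1, -1], [1, 1]]   # Kasteleyn signing: one entry negated

per_M  = M[0][0] * M[1][1] + M[0][1] * M[1][0]      # permanent: a*d + b*c
det_Mk = Mk[0][0] * Mk[1][1] - Mk[0][1] * Mk[1][0]  # determinant: a*d - b*c

assert per_M == det_Mk == 2
```

The paper's point is that for 3-matrices such a signing exists far more often than in the 2-dimensional case, where planarity is essential.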

This paper addresses the process of dynamic data flow analysis using virtual
code integration (VCI), often referred to as dynamic binary rewriting. This article
will try to demonstrate all of the techniques that were applied in the
SpiderPig project. It will also discuss the main differences between the
methods that were employed and those used in other available software, as well
as introducing other related work. SpiderPig's approach was found to be very
fast and was transparent enough for reliable and usable data flow analysis. It
was created with the purpose of providing a tool which would aid vulnerability
and security researchers with tracing and analyzing any necessary data and its
further propagation through a program. At the current state it works on IA-32
platforms with Microsoft Windows systems and it supports FPU, SSE, MMX and all
of the IA-32 general instructions. SpiderPig also demonstrates the usage of a
virtual code integration (VCI) framework which allows for modifying the target
application code at the instruction level. That is, the VCI framework allows
for custom code insertion, original code modification and full
customization of the original application's code. Instructions can be swapped
out, deleted or modified at a whim...

We propose a modal logic tailored to describe graph transformations and
discuss some of its properties. We focus on a particular class of graphs called
termgraphs. They are first-order terms augmented with sharing and cycles.
Termgraphs allow one to describe classical data-structures (possibly with
pointers) such as doubly-linked lists, circular lists etc. We show how the
proposed logic can faithfully describe (i) termgraphs as well as (ii) the
application of a termgraph rewrite rule (i.e. matching and replacement) and
(iii) the computation of normal forms with respect to a given rewrite system.
We also show how the proposed logic, which is more expressive than
propositional dynamic logic, can be used to specify shapes of classical
data-structures (e.g. binary trees, circular lists etc.).
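A termgraph with sharing and cycles, such as the circular doubly-linked list mentioned, can be modelled as a node table, and a shape property like circularity checked directly (node and field names below are illustrative):

```python
# A termgraph as a node table with explicit sharing and cycles:
# a circular doubly-linked list of three cells.
nodes = {
    "n0": {"label": "cell", "val": 10, "next": "n1", "prev": "n2"},
    "n1": {"label": "cell", "val": 20, "next": "n2", "prev": "n0"},
    "n2": {"label": "cell", "val": 30, "next": "n0", "prev": "n1"},
}

def _reaches_self(nodes, start):
    """Following `next` from start returns to start."""
    cur, steps = nodes[start]["next"], 1
    while cur != start and steps <= len(nodes):
        cur, steps = nodes[cur]["next"], steps + 1
    return cur == start

def is_circular(nodes):
    """Shape check: `prev` inverts `next`, and every node lies on
    a `next`-cycle through itself."""
    return all(
        nodes[nodes[n]["next"]]["prev"] == n for n in nodes
    ) and all(_reaches_self(nodes, n) for n in nodes)

assert is_circular(nodes)
```

A shape formula in the proposed logic would express the same property declaratively rather than by traversal.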

We provide an exact mapping between the density functional of a binary
mixture and that of the effective one-component fluid in the limit of infinite
asymmetry. The fluid of parallel hard cubes is thus mapped onto that of
parallel adhesive hard cubes. Its phase behaviour reveals that demixing of a
very asymmetric mixture can only occur between a solvent-rich fluid and a
permeated large particle solid or between two large particle solids with
different packing fractions. Comparing with hard-sphere mixtures, we conclude
that the phase behaviour of very asymmetric hard-particle mixtures can be
determined from that of the large component interacting via an adhesive-like
potential.; Comment: Full rewriting of the paper (also new title). 4 pages, LaTeX, uses
revtex, multicol, epsfig, and amstex style files, to appear in Phys. Rev. E
(Rapid Comm.)

General coherence theorems are constructed that yield explicit presentations
of categorical and algebraic objects. The categorical structures involved are
finitary discrete Lawvere 2-theories, though they are approached within the
language of term rewriting theory. Two general coherence theorems are obtained.
The first applies to terminating and confluent rewriting 2-theories. This
result is exploited to construct systematic presentations for the higher
Thompson groups and the Higman-Thompson groups. The presentations are
categorically interesting as they arise from higher-arity analogues of the
Stasheff/Mac Lane coherence axioms, which involve phenomena not present in the
classical binary axioms. The second general coherence theorem holds for
2-theories that are not necessarily confluent or terminating and is used to
construct a new proof of coherence for iterated monoidal categories, which
arise as categorical models of iterated loop spaces and fail to be confluent.; Comment: PhD thesis, 88 pages

We describe a matrix multiplication recognition algorithm for a subset of
binary linear context-free rewriting systems (LCFRS) with running time
$O(n^{\omega d})$ where $M(m) = O(m^{\omega})$ is the running time for $m
\times m$ matrix multiplication and $d$ is the "contact rank" of the LCFRS --
the maximal number of combination and non-combination points that appear in the
grammar rules. We also show that this algorithm can be used as a subroutine to
get a recognition algorithm for general binary LCFRS with running time
$O(n^{\omega d + 1})$. The currently best known $\omega$ is smaller than
$2.38$. Our result provides another proof for the best known result for parsing
mildly context sensitive formalisms such as combinatory categorial grammars,
head grammars, linear indexed grammars, and tree adjoining grammars, which can
be parsed in time $O(n^{4.76})$. It also shows that inversion transduction
grammars can be parsed in time $O(n^{5.76})$. In addition, binary LCFRS
subsumes many other formalisms and types of grammars, for some of which we also
improve the asymptotic complexity of parsing.

In this paper we give a criterion by which one can conclude that every
reduction of a basic term to normal form has the same length. As a consequence,
the number of steps to reach the normal form is independent of the chosen
strategy. In particular this holds for TRSs computing addition and
multiplication of natural numbers, both in unary and binary notation.; Comment: In Proceedings WRS 2011, arXiv:1204.5318
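The claim can be checked mechanically on the unary-addition TRS: enumerating every reduction sequence of a basic term shows they all have the same length (a small sketch, with terms as nested tuples):

```python
# The unary-addition TRS:  add(0, y) -> y   add(s(x), y) -> s(add(x, y))
# Terms are "0" or tuples ("s", t) / ("add", a, b).
def redexes(t):
    """All terms reachable from t in one rewrite step, at any position."""
    if isinstance(t, str):
        return []
    out = []
    if t[0] == "add":
        _, a, b = t
        if a == "0":
            out.append(b)                          # add(0, y) -> y
        elif a[0] == "s":
            out.append(("s", ("add", a[1], b)))    # add(s(x), y) -> s(add(x, y))
    for i in range(1, len(t)):                     # rewrite inside subterms
        for r in redexes(t[i]):
            out.append(t[:i] + (r,) + t[i + 1:])
    return out

def all_reduction_lengths(t, n=0, acc=None):
    """Lengths of every reduction sequence from t to a normal form."""
    if acc is None:
        acc = set()
    nxt = redexes(t)
    if not nxt:
        acc.add(n)
    for u in nxt:
        all_reduction_lengths(u, n + 1, acc)
    return acc

one, two = ("s", "0"), ("s", ("s", "0"))
term = ("add", two, ("add", one, one))   # 2 + (1 + 1)
assert all_reduction_lengths(term) == {5}   # same length for every strategy
```

Both rules are non-erasing and non-duplicating, which is why the step count is strategy-independent here.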

More than a decade after becoming a topic of intense research there is no
transactional memory hardware nor any examples of software transactional memory
use outside the research community. Using software transactional memory in large
pieces of software needs copious source code annotations and often means
that standard compilers and debuggers can no longer be used. At the same time,
overheads associated with software transactional memory fail to motivate
programmers to expend the needed effort to use software transactional
memory. The only way around the overheads in the case of general unmanaged code
is the anticipated availability of hardware support. On the other hand, architects
are unwilling to devote power and area budgets in mainstream microprocessors to
hardware transactional memory, pointing to transactional memory being a
"niche" programming construct. A deadlock has thus ensued that is blocking
transactional memory use and experimentation in the mainstream.
This dissertation covers the design and construction of a software transactional
memory runtime system called SLE_x86 that can potentially break this
deadlock by decoupling transactional memory from programs using it. Unlike most
other STM designs, the core design principle is transparency rather than
performance. SLE_x86 operates at the level of x86 machine code...
