Jennifer L. P. Hackett

My Research

I'm a postdoctoral researcher in
the Functional Programming Lab working
on the Mind the Gap
project funded by the EPSRC. This project aims to develop abstract ways of reasoning about program
efficiency that are compatible with the current state-of-the-art techniques used to reason
about correctness. Currently, most program optimisations are justified empirically,
using techniques such as benchmarking; this project aims to put the science of program
efficiency on a more formal footing.

I have a particular interest in evaluation strategies. My programming language of choice, Haskell,
notably uses a "lazy" evaluation strategy, which can make it difficult to reason about the operational
aspects of programs. It is my hope that by developing theories for reasoning about program efficiency,
I can help to diminish this drawback and further encourage the adoption of lazy evaluation. I am also
interested in linear and affine lambda calculi for what they can tell us about lazy evaluation.
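As a small, standard illustration of this operational subtlety (my own textbook example, not code from the project): the two functions below compute the same sum, but the lazy left fold builds a long chain of suspended additions, while the strict variant forces each partial sum as it goes, giving very different space behaviour.

```haskell
import Data.List (foldl')

sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0    -- builds a chain of thunks: ((0+1)+2)+...

sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0 -- forces each partial sum immediately

main :: IO ()
main = print (sumLazy [1..100] == sumStrict [1..100])  -- True: both are 5050
```

Denotationally the two are identical, which is exactly why a theory of efficiency needs to see distinctions that a theory of correctness does not.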

Papers

Pre-ordered Metric Spaces for Program Improvement
Jennifer Hackett, Graham Hutton
Submitted to the Annual ACM/IEEE Symposium on Logic in Computer Science, 2019
Improvement theory allows inequational proofs of program efficiency to be written in a similar manner to equational proofs of program correctness. However, existing theories of improvement are operational in nature, in contrast to the denotational approaches that tend to be used for correctness. Moreover, they are tied to specific languages and cost models, with each such choice requiring a new theory. This article addresses these issues using pre-ordered metric spaces, a combination of two usually disparate approaches to program semantics. The resulting theory gives a denotational approach to efficiency that is generic and intuitive, and shows how to resolve previously problematic issues in the area of space efficiency. We develop our theory and show how it can be used both to establish generic improvement results, and to reason about specific programming examples.

Parametric Polymorphism and Operational Improvement
Jennifer Hackett, Graham Hutton
To appear in Proceedings of the ACM on Programming Languages, Volume 2, Issue ICFP, Article 68, September 2018.
Parametricity, in both operational and denotational forms,
has long been a useful tool for reasoning about program correctness.
However, there is as yet no comparable technique for reasoning about program
improvement, that is, when one program uses fewer resources than
another. Existing theories of parametricity cannot be used to address this
problem as they are agnostic with regard to resource usage. This article
addresses this problem by presenting a new operational theory of
parametricity that is sensitive to time costs, which can be used to reason
about time improvement properties. We demonstrate the applicability of our
theory by showing how it can be used to prove that a number of well-known
program fusion techniques are time improvements, including fixed point
fusion, map fusion and short cut fusion.
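To give a flavour of one of these laws (a textbook sketch of my own, not code from the paper): map fusion replaces two list traversals by one. The result is unchanged, and the point of the paper is that such laws can also be proved to be guaranteed time improvements, not merely equalities.

```haskell
unfused, fused :: [Int] -> [Int]
unfused = map (+1) . map (*2)   -- two traversals of the list
fused   = map ((+1) . (*2))     -- one traversal, same result

main :: IO ()
main = print (unfused [1,2,3] == fused [1,2,3])  -- True
```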

Programs for Cheap!
Jennifer Hackett, Graham Hutton
Annual ACM/IEEE Symposium on Logic in Computer Science, 2015
Write down the definition of a recursion operator on a piece of paper. Tell me its type, but be careful not to let me see the operator's definition. I will tell you an optimisation theorem that the operator satisfies. As an added bonus, I will also give you a proof of correctness for the optimisation, along with a formal guarantee about its effect on performance. The purpose of this paper is to explain these tricks.
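For a concrete instance of the kind of theorem being described (my illustration, not one of the paper's examples): the type of foldr alone yields a fusion law, h . foldr f e = foldr g (h e) whenever h (f x y) = g x (h y). Taking h to be doubling and f to be addition satisfies this condition:

```haskell
-- Fusion condition: h (f x y) = g x (h y), with h = (*2), f = (+):
--   2 * (x + y) = 2*x + 2*y, so g x r = 2*x + r.
lhs, rhs :: [Int] -> Int
lhs = (*2) . foldr (+) 0          -- sum the list, then double
rhs = foldr (\x r -> 2*x + r) 0   -- double each element inside the fold

main :: IO ()
main = print (lhs [1,2,3] == rhs [1,2,3])  -- True: both are 12
```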

Worker/Wrapper/Makes it/Faster
Jennifer Hackett, Graham Hutton
International Conference on Functional Programming, 2014
Much research in program optimisation has focused on formal approaches to correctness: proving that the meaning of programs is preserved. Paradoxically, there has been little work on formal approaches to efficiency: proving that the performance of optimised programs is actually improved. This paper addresses this problem for a general-purpose optimisation technique, the worker/wrapper transformation. In particular, we use improvement theory to establish conditions under which the worker/wrapper transformation is formally guaranteed to improve the time performance of programs. These conditions are a natural refinement of those that guarantee the correctness of the transformation, and the resulting theory is simple to understand and apply.
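The standard textbook instance of the worker/wrapper transformation (a sketch for illustration; the paper treats the transformation in general) turns quadratic list reversal into a linear worker that carries an accumulator, with a thin wrapper recovering the original interface:

```haskell
-- Original program: quadratic, because (++) re-traverses the result.
revNaive :: [a] -> [a]
revNaive []     = []
revNaive (x:xs) = revNaive xs ++ [x]

-- Wrapper: invokes the worker with an empty accumulator.
revFast :: [a] -> [a]
revFast xs = work xs []
  where
    -- Worker: linear, accumulating the reversed prefix.
    work []     acc = acc
    work (y:ys) acc = work ys (y:acc)

main :: IO ()
main = print (revNaive [1..5 :: Int] == revFast [1..5])  -- True
```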

The Underperforming Unfold: A New Approach to Optimising Recursive Programs
Jennifer Hackett, Graham Hutton, Mauro Jaskelioff
Symposium on Implementation and Application of Functional Languages, 2013
This paper presents a new approach to optimising corecursive programs by factorisation. In particular, we focus on programs written using the corecursion operator unfold. We use and expand upon the proof techniques of guarded coinduction and unfold fusion, capturing a pattern of generalising coinductive hypotheses by means of abstraction and representation functions. The pattern we observe is simple, has not been observed before, and is widely applicable. We develop a general program factorisation theorem from this pattern, demonstrating its utility with a range of practical examples.
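As a reminder of the operator in question (a minimal example of my own, using Data.List's unfoldr rather than anything from the paper): unfold builds a structure from a seed, the dual of fold's consumption of one.

```haskell
import Data.List (unfoldr)

-- unfoldr builds a list from a seed: Nothing stops, Just (a, s) emits a
-- and continues from the new seed s.
countdown :: Int -> [Int]
countdown = unfoldr step
  where
    step 0 = Nothing
    step n = Just (n, n - 1)

main :: IO ()
main = print (countdown 5)  -- [5,4,3,2,1]
```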

Bio

I was born and grew up in Southampton, Hampshire, but went to Oxford for my undergraduate degree in Computer Science. I achieved the top first for my first three years and a first-class degree overall, a fact about which I am eternally embarrassed. After completing my undergraduate degree, I went to the University of Nottingham to do a PhD under the supervision of Graham Hutton, and am now a postdoctoral researcher in the same group.