Security has a price. When talking about computer security, this price can be
paid in different ways, such as performance or ease of use (or both). When
talking about meatspace security, whether nation-wide or individual, the
price is paid in freedom and privacy. I claim that the loss in freedom is
usually much higher than the gain in security. Of course, I'm far from being
the only one to claim that. However, it is quite difficult to convince people
that this is true, because most of the time the discussion is about
nation-wide political decisions, and it is hard to grasp all the whys and
wherefores of their implications.

I don't want to wave my hands and explain how massive surveillance is a bigger
loss in individual and collective freedom than it is a win in security. I've
tried that before and it doesn't really work on the people who need to be
convinced. Instead, I'm going to tell you a story, a real story, that
happened to me a bit more than two years ago, and which I think is a perfect
analogy for our topic, except that it is a human-sized story, so it is easier
to grasp all the implications and identify the problems. As it turns out, the
story is not only an analogy for the security versus liberty trade-off, but
also for the differences between proprietary and free (as in freedom)
software, or between closed and open formats, showing that openness is
necessary for liberty and privacy.

It all begins in the summer of 2012. I had just finished my master's degree and
I would be starting my PhD soon. I had to move out of my room at the student
dorms and find a new place to live for three years, in the Paris area. Even
in the suburbs of Paris, the prices of real estate are so high that one PhD
funding is not sufficient on its own to pay for decent housing for three
years. Getting together with roommates is the obvious solution. Two of my
friends were in the same situation and would do their PhD in the same area,
so we decided to look for an apartment together.

After long weeks of searching and dealing with real estate agencies (trust me
in the Paris area it is anything but pleasant to deal with those), we found a
very nice place in our price range, so we decided to go for it.

We moved in on August 20. As I said, there were three of us. The problem is,
we had only two keys to the apartment. What's the problem, you say? Just go
make a duplicate (or more!) of the key and you're all set. We wish.

As it happens, the door of the apartment is a fancy security door, its lock
is a fancy security lock… and its keys thus have to be fancy security keys.
What does "fancy security" mean?

It means that the door and the lock are very strong and hard to break, and
that the keys are very difficult to reproduce. Actually, only the company
that issued the whole system in the first place can do it. Let's call them
FancySec.

For the real estate agency, it means that we can't get burglarized (yeah,
because burglars always come in through the front door, it is known,
khaleesi). It also means that they know exactly how many keys we have:
since they are the only possible interlocutor of FancySec, we have to go
through them if we want any additional keys.

For us, it is supposed to make us feel safe and happy. In practice, not so
much.

We can't make a duplicate when we want. With regular keys, if we
want to host some friends for more than one night, or if one of us wants to
give a key to his girlfriend or boyfriend, we just go to any locksmith, and
that's it. Instead we are forced to ask the real estate agency (so much for
our privacy), and it can take weeks before we get the new key.

These keys are expensive. Regular keys for regular doors
cost less than 10€ to duplicate; these keys are almost 70€. Seven times
more expensive. Given the price of installing a new FancySec lock and getting
a bunch of new FancySec keys, it would be better if none of us accidentally
lost his key.

What is the added security, really? What are the odds that someone out there
aims to rob our apartment, that this person has not managed to steal one of
our keys, and that they would get into our building using a code, then use a
key to reach the elevator or stairs leading to our apartment, and then try to
force our door? I believe the odds of that are so small that the probability
that we get robbed would not even change if we had a regular door with
regular, convenient keys. The main idea here is that the door is not the
limiting factor: even if the FancySec door is a thousand times more
resistant, it does not matter, because the probability that someone tries to
break in is extremely small.

What if FancySec goes out of business? Or if they just stop
supporting our key model? That's not an entirely absurd supposition: as it
happens, FancySec is closed for vacation in August, so when we moved into
our apartment we had to wait fifteen days before the real estate agency
could order an additional duplicate of the key from them, and it took ten
more days to finally get the key. In the meantime we had to juggle two
keys between the three of us, so most of the time when we weren't home, one
of our FancySec keys was in our mailbox, which I bet can be opened with a
screwdriver… This is something that needs to be emphasized: strong
security is often so inconvenient that the necessary workarounds actually
lessen the overall security.

So what conclusions can we draw from this story? First, that what we have is
actually not additional security, but rather the illusion of additional
security. Second, that we pay for this illusion of security with a lot of
inconveniences, and with loss of freedom and privacy, and that both can
result in weakening the actual security level. Third, that closed proprietary
formats are a bad thing, even outside the digital world.

Just replace the real estate agency with a government, the burglars with
terrorists, FancySec with the army / intelligence agencies / big private
companies, and for instance our door with an airport or our keys with
surveillance cameras. You'll get the big picture. The trade-offs are the
same. We lose a freakin' lot of freedom and convenience to get a security
improvement that is irrelevant most of the time. It is important to
struggle for our freedom and not let the security rhetoric get to us.

What follows is the content of my PhD midterm report. I'm posting it here
because I sometimes feel like there is a negative preconception about doing a
PhD. At least in communities like Hacker News and some subreddits, there are
regularly stories where someone tells how bad it was for him or her to do a
PhD. Well, I know that as a single individual I'm as statistically
insignificant as they are, but I really, truly enjoy my life as a PhD student.
I like doing research. I like teaching. I'm happy. And I guess I feel like
sharing, so that there aren't only bad stories about what doing a PhD is
like. Here I'm only talking about the research part, but I hope the text will
reflect how much I'm enjoying myself :-).

Introduction

My PhD started in October 2012. Since then, I have been working
with Sylvain Guilley
and Jean-Luc Danger
in the field of implementation security. More precisely, I try to
increase the use of formal methods in this field, at the software
level, since that is my background from my previous studies.
Implementation security is a very young subject (approximately 15 years old),
and a very practical one: much of the work in the field comes from industrial
research-and-development and engineering, which explains why the use of formal
methods is not widespread, to say the least. I will explain why and how I aim
to spread the use of formal methods, but first I will give a quick overview of
what implementation security is.

When we say implementation security, we talk about actual,
physical implementations, but the security we are talking about is that of
what is implemented [1]. Of
course, we work on systems whose security is important,
namely cryptosystems. It is obvious that
cryptographic implementations should not leak any information about the
secrets that they are trying to protect. What is less obvious, however, is
that even a perfect cryptographic algorithm can leak information once it is
implemented on a physical device, because of the properties of the physical
device itself: computing needs resources, and the usage of these resources
may give information about the computation. Indeed,
so-called side-channels such as power consumption, time,
temperature, or electro-magnetic radiation may directly depend on the running
computation. Attacks exploiting such biases are
called side-channel attacks. They work very well in
practice; for instance, on an
unprotected AES cryptoprocessor, it is possible to extract the full
key with about 1000 side-channel measurements (refer to
the DPA Contest v2). Thus, it is
mandatory to implement countermeasures against them. Side-channel
attacks are passive attacks, meaning that only observation is needed
to carry them out. When the attacker is able to have an impact on the system,
it is also possible to conduct active attacks. For instance, in the
case of fault injection attacks, the goal is to modify the result of
the computation to get something which will leak information if the attacker
knows how to interpret it. Such attacks can be more or
less invasive: faults can be injected by voluntarily glitching the
clock or the voltage (non-invasive), or by using a laser to modify values in
memory or registers at some point in the computation (invasive).

The use of formal methods to study these attacks, and the countermeasures
against them, in order to be able to trust cryptosystems, seems obvious. Yet
their use in our domain is only timidly beginning. This can
be explained by several facts. First, this domain is very practical, and many
advances come from industry rather than academia, which means that
they are more of an engineering effort than a research result. Indeed, many
countermeasures are developed by trial-and-error until they reach some sort
of fixed point, at which time they are put in production. Most of the time,
this is satisfactory from an engineering point of view. Also, formal
analysis requires a formal model of the studied system, but there is a
discrepancy between a proper model and the complexity of an actual
physical system, which may seem like an obstacle.

This covers why I think it is important to spread the use of formal methods in
the field of implementation security. I intend my PhD to be a substantial
contribution toward this end. This means my goals are to address the reasons
why formal methods are not widely used yet. I aim at doing so by lifting
several scientific and technological barriers:

Develop models adapted to the study of side-channel and fault injection
attacks and countermeasures, finding ways to avoid the discrepancy between
the model and the actual implementation;

Use these models to develop methods, and the tools which implement them,
that are easy to comprehend and use, and "sexy" enough to make
people want to use or mimic them;

Show the necessity of formal methods by using the tools to break, prove,
and/or optimize existing countermeasures, thereby improving the
state-of-the-art.

In the rest of this report I will present the work I have done since the
beginning of my PhD and how I intend to pursue it in the coming months.

Formally Proved Security of Assembly Code Against Power Analysis

I started my work with the study
of power analysis software countermeasures. My
goal was to have a tool which would be able to automatically protect
arbitrary assembly code against power analysis attacks, while provably
preserving the semantics of the code, and which would output a provably
protected code.

Power consumption is traditionally modeled by the Hamming weight of values or
the Hamming distance of value updates [KJJ99]. This
model is not perfect, but it works well enough in practice and is used
to carry out real-world power analysis attacks.

There are two main types of countermeasures against power analysis:
"palliative" and "curative". The two defense
strategies are 1. to make the leakage constant, irrespective of the
manipulated data (the hiding
or balancing [MOP06, Chp. 7] strategy), or 2.
to make the leakage as decorrelated from the manipulated data as possible
(the masking [MOP06, Chp. 9] strategy).
The second strategy, masking, relies on randomness, which is a strong
requirement and is hard to capture formally. The first strategy, balancing,
has a clear invariant (constant leakage), and a software
implementation of balancing using dual-rail with precharge logic
(DPL [TV06]) had been developed by our
lab [HDD11]. So I naturally went with the DPL balancing
option.

The DPL countermeasure consists in computing on a redundant representation:
each bit b is implemented as a pair (yFalse,
yTrue). The bit pair is then used in a protocol made up of two
phases:

a precharge phase, during which all the bit pairs are
zeroized: (yFalse, yTrue) = (0, 0), so that
the computation starts from a known reference state;

an evaluation phase, during which the pair (yFalse,
yTrue) is equal to (1, 0) if it carries the logical
value 0, or (0, 1) if it carries the logical value 1.

DPL has mostly been used as a hardware-level countermeasure, as it was
developed as such. However, it is possible to implement it in software by
working at the bit level: each sensitive instruction is replaced with a
DPL macro which uses a look-up table to compute the same result as the
original instruction while respecting the DPL protocol.
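
To give an idea of what a DPL macro computes, here is a minimal sketch in
Python (the tool works on assembly; the encoding and table layout here are
illustrative assumptions, not paioli's actual macros):

def dpl_encode(b):
    """Encode a logical bit as a DPL pair (yFalse, yTrue)."""
    return (1, 0) if b == 0 else (0, 1)

def dpl_decode(pair):
    """Decode a DPL pair back to the logical bit."""
    return 0 if pair == (1, 0) else 1

# Look-up table for AND on DPL pairs: every entry is itself a valid
# DPL pair, so each evaluation writes a word of Hamming weight 1.
DPL_AND = {
    ((1, 0), (1, 0)): (1, 0),   # 0 and 0 -> 0
    ((1, 0), (0, 1)): (1, 0),   # 0 and 1 -> 0
    ((0, 1), (1, 0)): (1, 0),   # 1 and 0 -> 0
    ((0, 1), (0, 1)): (0, 1),   # 1 and 1 -> 1
}

def dpl_and(x, y):
    z = (0, 0)           # precharge: destination goes through (0, 0)
    z = DPL_AND[(x, y)]  # evaluation: constant-weight table look-up
    return z

assert dpl_decode(dpl_and(dpl_encode(1), dpl_encode(1))) == 1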

Results

I defined a generic assembly language and
its semantics. It is generic in that it uses a restricted
set of very generic instructions that can be mapped one-to-one to and from
virtually any actual assembly language. This makes it possible to work with a
single assembly language while still working on the actual implementation,
thus avoiding the discrepancy between it and the model. It permitted the
development of a tool, called
paioli, that is able to automatically DPLize
any bitsliced assembly code.
Many block-ciphers are already available bitsliced,
as it is a common optimization technique [Bih97]. The
transformation has been proved to be semantics-preserving. Another
part of the tool does
a symbolic execution of the resulting code
which statically verifies that the security
invariant is respected. The symbolic execution of the assembly code
is carried out using sets of possible values instead of actual values. Each
bit of the sensitive data (the plain text of the message and
the encryption key) starts with a value
of {0, 1} (or their DPL encoded counterparts), and each
instruction computes all the possible results given the sets of values of its
operands. After each cycle, the security invariant is verified: for each
register, memory cell, and address bus that has changed, the Hamming distance
between any of its previous possible values and any of its new possible
values should be constant. If everything goes well, the code is formally
proved to be well-balanced.
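
A toy version of that check, in Python, looks like this (illustrative only;
the actual verification runs on the semantics of the assembly code):

def hamming_distance(a, b):
    """Number of differing bits between two values."""
    return bin(a ^ b).count("1")

def balanced_update(old_values, new_values):
    """Check the invariant: the Hamming distance between any previous
    possible value and any new possible value must be constant."""
    distances = {hamming_distance(a, b)
                 for a in old_values for b in new_values}
    return len(distances) == 1

# DPL-style update: precharge (0, 0) then either (1, 0) or (0, 1);
# both transitions flip exactly one bit, so the update is balanced.
assert balanced_update({0b00}, {0b10, 0b01})
# Writing a raw bit 0 or 1 is not balanced: its weight leaks.
assert not balanced_update({0b0}, {0b0, 0b1})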

Using this tool we were able to produce a provably protected implementation
of the PRESENT [BKL+07]
block-cipher's encryption algorithm. However, this is a software-level proof,
and when the code is run on actual, non-idealized hardware, not all the bits
leak the same amount (some physical bits may consume more power than others).
Thus, before DPLizing a code, it is important to profile the
hardware on which it will run. This profiling allows choosing the
two bits which leak the most similarly, which will then be used as
the yTrue and yFalse of the DPL protocol
to guarantee maximum security. I am currently working on this with
Zakaria
Najm, using the DPL-balanced PRESENT on an
AVR smartcard. Preliminary results are very promising. This
work shows that it is feasible to use formal methods in the field of
implementation security even when the security properties are physical rather
than functional.

Publications

I gave a talk at the 2013 edition of
the COSADE conference, and presented a
poster at the 2013 edition of
the CHES conference. We posted a
preliminary version of the paper on the IACR ePrint
Archive [RGN13]. Each of these received a warm welcome
and attracted the interest of researchers who are already waiting for the
final version of the paper to be published. The final paper is soon to be
completed and will be submitted to the 2014 edition of the CHES conference.

Future work

I need to clean up and rewrite the code to make it usable, as it is currently
"research code" and no one except me can be expected to use it as
is. I will also make an attempt at automated bitslicing of arbitrary assembly
code; it will be interesting to explore what can be done automatically with a
reasonable complexity. Another interesting path to follow would be to try to
find optimizations which do not break the DPL protocol of balanced code,
since we have a tool able to statically verify that the security invariant is
respected.

Formally Proved Security of CRT-RSA Against Fault Injection

RSA is both an encryption and
a signature scheme. It relies on the identity
that for all messages 0 ≤ M < N,
(M^d)^e ≡ M mod N, where
d ≡ e^-1 mod φ(N),
by Euler's theorem. For example, if Alice generates the
signature S = M^d mod N, then Bob can verify it by
computing S^e mod N, which must be equal to M
unless Alice is only pretending to know d. Therefore (N, d) is
called the private key, and (N, e)
the public key.
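
As a toy numeric check of this identity in Python, with deliberately tiny
(and of course insecure) parameters:

p, q = 11, 13
N = p * q                          # N = 143, φ(N) = 120
e = 7                              # public exponent, coprime with φ(N)
d = pow(e, -1, (p - 1) * (q - 1))  # d ≡ e^-1 mod φ(N) (Python ≥ 3.8)

M = 42                             # message, 0 ≤ M < N
S = pow(M, d, N)                   # Alice signs: S = M^d mod N
assert pow(S, e, N) == M           # Bob verifies: S^e ≡ M (mod N)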

In CRT-RSA, the private key is a richer structure than simply (N,
d): it is actually a 5-tuple (p, q, dp, dq,
iq), where p and q are the secret prime factors of the public
modulus (N = p · q), dp = d mod (p - 1), dq = d mod (q - 1),
and iq = q^-1 mod p. The signature is computed separately modulo p
and modulo q (Sp = M^dp mod p and Sq = M^dq mod q), then the two
halves are recombined using the Chinese remainder theorem.

Injecting faults during the computation of CRT-RSA can yield malformed
signatures that expose the prime factors (p and q) of the
public modulus [BLK97]. If the
intermediate variable Sp (resp. Sq) is
returned faulted as Ŝp
(resp. Ŝq), then the attacker gets an
erroneous signature Ŝ, and is able to
recover q (resp. p) as gcd(N, S
- Ŝ), hence the full factorization of N.
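
The attack is easy to demonstrate with the toy parameters from above; in
this Python sketch the fault model (a single bit flip on Sp) is an
illustrative assumption:

from math import gcd

p, q = 11, 13
N = p * q
e = 7
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
iq = pow(q, -1, p)                     # iq = q^-1 mod p

def crt_sign(M, fault_sp=False):
    Sp = pow(M, dp, p)
    Sq = pow(M, dq, q)
    if fault_sp:
        Sp ^= 1                        # fault: flip one bit of Sp
    # Garner recombination: S ≡ Sp (mod p) and S ≡ Sq (mod q)
    return Sq + q * ((iq * (Sp - Sq)) % p)

M = 42
S = crt_sign(M)                        # correct signature
S_faulted = crt_sign(M, fault_sp=True)
assert gcd(N, abs(S - S_faulted)) == q # the faulted half exposes q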

Notwithstanding, computing without the fourfold acceleration conveyed by the
CRT is definitely not an option in practical applications. Therefore, many
countermeasures have appeared that consist in step-wise internal checks
during the CRT computation.

Results

I wrote a tool called
finja
which works within modular arithmetic,
the mathematical framework of CRT-RSA computations. The tool allows a full
fault coverage of the CRT-RSA algorithm, thereby keeping the proof valid even
if the code is transformed (e.g., optimized, compiled, partitioned in
software/hardware, or equipped with dedicated countermeasures). The general
idea is to represent the computation term as a tree (just like
the AST in
a compiler
or interpreter) which encodes the computation's properties.
This term is then simplified by our tool. The simplification works like a
naive interpreter would, except that it is a purely symbolic
interpretation using rules from arithmetic in
the ring ℤ and
its quotient rings ℤn. The tool also knows about a few theorems,
namely Fermat's little theorem, its
generalization Euler's theorem, the Chinese remainder
theorem, and a particular case of
the binomial theorem. Fault injections in the
computation term are simulated by changing the properties of a subterm, thus
impacting the simplification process. The computation is given in a
convenient high-level input language. Indeed, the model of the computation
can remain as abstract as the pseudocode usually employed in
papers, especially for the computational parts [2]. An attack success condition is also given; it is
checked against the term resulting from the simplification to determine
whether the corresponding attack works on it.
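
To give a flavor of the approach, here is a heavily simplified sketch in
Python (finja's actual input language, term representation, and rule set are
much richer; everything below is an illustrative assumption):

# Terms: ("mod", term, n) for reduction modulo n, ("var", name), or an int.
def simplify(term):
    if not isinstance(term, tuple) or term[0] != "mod":
        return term
    inner, n = simplify(term[1]), term[2]
    # Rewrite rule from modular arithmetic: (x mod k*n) mod n = x mod n.
    if isinstance(inner, tuple) and inner[0] == "mod" and inner[2] % n == 0:
        return ("mod", inner[1], n)
    return ("mod", inner, n)

x = ("var", "x")
p, N = 7, 7 * 11
assert simplify(("mod", ("mod", x, N), p)) == ("mod", x, p)

# A fault is simulated by replacing a subterm with a fresh unknown: the
# rule above no longer applies, the fault's effect survives into the
# simplified term, and the attack success condition is tested on it.
assert simplify(("mod", ("var", "fault"), p)) == ("mod", ("var", "fault"), p)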

The tool was used to break implementations which were known to be broken,
and to formally prove two others: that of Aumüller et
al. [ABF+02], and that of
Vigilant [Vig08] (in a version repaired by Coron
et al. [CGM+10]). Prior to this work no existing
BellCoRe countermeasure had been proved, except for a specific
implementation of the latter [CCGV13]. We found
that the repaired version included fixes for weaknesses that did not exist in
the original version; indeed, these weaknesses had been introduced by eager
speed-oriented optimizations. We also found that 2 of its 9
verifications were useless, and that some security added against power
analysis actually weakens the fault injection resistance. This work shows the
importance of using formal methods in the field of implementation security,
not only to be able to really trust cryptosystems, but also to enable speed-
and security-oriented optimizations.

Publications

A first paper [RG13] was accepted at the 2013 edition of
the PROOFS workshop and an
extended version will appear in a future issue of the
JCEN
journal. We have recently submitted a second paper, which is currently under
review.

Future work

The
finja
tool is only able to inject faults in the data and cannot fault instructions
yet; it would be interesting to explore this kind of fault
injection [HMER13]. The tool would also benefit a lot
from the parallelization of its computations: multiple-fault
attacks can take very long to compute, and the different possible fault
injections are entirely independent.

If possible I will work with
Gilles Barthe
and François Dupressoir at the
IMDEA lab (Madrid, Spain) in the
beginning of 2014 as they showed an interest in my work on fault injection.
We will try to use the
EasyCrypt [BGZB09]
tool that they work on to do the same kind of formal proofs, and also to use
program synthesis techniques to automatically find
countermeasures.

Sylvain and I will also work on a high-order variation of the Aumüller
countermeasure, which would be customizable to resist fault injection
attacks of any order (i.e., with multiple faults).

Other Perspectives

I would like to use
finja
to formally study fault injection attacks and countermeasures on other
cryptosystems than CRT-RSA, and to use paioli to
protect at least one other block-cipher, such as AES, on another hardware
platform.

Apart from this and the future work I listed for the two subjects that I
already started to tackle, I have two ideas that I'd like to investigate:

try to model the caching behavior of microprocessors and use the same kind
of symbolic execution techniques to formally study timing attacks;

try to use a homomorphic cryptosystem such as the one
of Paillier to mask (against power analysis) its own
computation.

[1] Note however that in some threat models the security of the implemented
system and the physical security of the device which implements it are tied
together; but that's not at all what I'm working on.

[2] For example, a fault in the implementation of the multiplication is either
inoffensive, and we do not need to care about it, or it affects the result
of the multiplication, and our model takes it into account without going
into the details of how the multiplication is computed.

I've been using Tor for a long time,
and I recently started to use I2P too.
Before that, I used to launch a new instance of my browser that would tunnel
all its requests through Tor's SOCKS proxy when I wanted to use Tor. But when
I want to test things with both Tor and I2P, it becomes very inefficient to
have multiple browser windows which all look the same but use different
proxies.

I quickly found out about the FoxyProxy Firefox addon, which can do just what
I needed. It allows Firefox to select a proxy depending on URL patterns. But
FoxyProxy is proprietary software, which I was not happy with. In addition
to that, it more or less tries to sell you access to HTTP proxies, which I
don't need.

And then I found
AutoProxy,
which can do the same job, but is free as in freedom! Yay.

However, AutoProxy has a less user-friendly interface. It uses the same
syntax for its URL-based rules as AdBlock+, so if you are familiar with that,
it's easy to configure. I was not, since I use AdBlock+ only with the
EasyList+FR subscription and have never added custom rules.

For those of you who would like to use AutoProxy to be able to browse the web
and Tor Hidden Services and EepSites seamlessly, here is how to setup
AutoProxy:

You will need two rule groups, one for the I2P proxy, and one for the Tor
one. The I2P proxy is an HTTP proxy running on localhost:4444 with the
default configuration. The Tor proxy is a SOCKS v5 proxy running on
localhost:9050 with the default configuration.

Then you will have to add a rule in each group. For the Tor group, the rule
is ||*.onion^. For the I2P group, you guessed it, the rule
is ||*.i2p^. The || means "beginning of the domain
name", ^ means "end of the domain name",
and as usual * is a wildcard.
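
To recap, the whole setup boils down to this (the group names are arbitrary):

Group "tor": SOCKS v5 proxy on localhost:9050, with the rule ||*.onion^
Group "i2p": HTTP proxy on localhost:4444, with the rule ||*.i2p^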

AutoProxy will display a red exclamation mark on each of these rules' line
because it estimates that these rules are slow. Don't worry about that, with
only two rules you'll never feel any difference in terms of speed.

Once you have AutoProxy configured like this, congratulations: you can now
browse the clearweb,
Tor Hidden Services,
and EepSites seamlessly in Firefox,
using only free software!

Two weeks ago my research
group asked for a practical introduction to
Git for the following week, and I
volunteered for the task. I usually use
Beamer to make my
presentations, but I always start by making a plan in a plain text file using
Emacs' outline-mode.
Then I generally use a few Emacs
keyboard
macros to put the necessary LaTeX markup
(\section, \subsection, etc.) around the section
titles, then I insert that into my Beamer skeleton and start writing the
content of the slides.

But this time, I don't really know why, I started to fill in the slide
contents in outline-mode. Naturally, I then wanted to generate my
presentation directly from that. And as a normal Emacs user (if you allow me
that oxymoron), I decided that I should be able to do my presentation
directly from within Emacs. I googled a bit and found, as I expected, tools
that generate LaTeX from org-mode files,
but also an attempt to do exactly what I wanted on
GitHub.
I tried it, and even though it didn't work very well and didn't have the
features I wanted, it was a good starting point. So I started hacking on it
to make it do what I wanted.

outline-presentation-mode

Introduction

Let me introduce you to
outline-presentation-mode,
which is a minor mode for making presentations with outline-mode files. Here
is how it interprets outline's headings:

Headings at levels 1, 2, and 3 are slides.

Headings at levels greater than 3 are treated as titles inside the
slides. For the rest of the article I consider them part of the text entry
of headings at levels 1, 2, and 3.

Headings at level 1 are considered sections. Their slides show their text
entry plus the level 2 and 3 sub-headings they might have.

Headings at levels 2 and 3 are considered normal slides, so only their
text entry is shown.

Installation

To install outline-presentation-mode, you need to put
the outline-presentation.el
elisp file in your Emacs load-path and load it. This can be
automated by putting (require 'outline-presentation) in your
.emacs file. You can of course M-x byte-compile-file this
elisp file.

Usage

To start a presentation, you simply turn on the minor mode
using M-x outline-presentation-mode. The presentation
starts with the slide that contains the caret.

I want to be able to edit slides even in presentation mode, so I use unusual
keybindings to avoid overriding existing ones. Here are the available actions:

A-M-n moves to the next slide;

A-M-p moves to the previous slide;

A-M-f moves to the next section slide (corresponding to an
outline level 1 heading);

A-M-b moves to the previous section slide;

A-M-a moves to the first slide;

A-M-t displays only the section titles (table of contents);

A-M-y displays only the slide titles (table of
contents);

A-M-s displays the slide which contains the caret (use it in
one of the two "table of contents" displays);

A-M-r resumes to the slide from which you jumped to the
table of contents (this can be used recursively);

A-M-q quits the presentation mode and goes back to
outline-mode.

While the outline-presentation minor mode is on, the position part of the
modeline contains your position in the presentation (for instance
"13/42" if you are currently displaying the 13th of 42 slides) in
addition to the usual position information. If you are displaying the table
of contents (using A-M-t or A-M-y, described above),
this is replaced by "ToC".

As an example, you can find
here
the introduction to Git that I gave to my research group. I don't know if the
slides alone are very useful, though. During the presentation I also had a
terminal open where I demonstrated live everything I talked about in the
slides.

Hooks

You can tell Emacs to increase the text size with
M-x text-scale-adjust, which is bound to
C-x C-= by default. This is useful when using a video projector
to give your presentation. You can do it automatically by adding a hook
to outline-presentation-mode-hook, like in this snippet
from my init.el file:
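
A minimal version of such a hook (assuming the minor mode defines the usual
outline-presentation-mode variable; the step of 3 is arbitrary):

;; Bump the text scale when the presentation starts, and reset it
;; when the mode is turned off (the hook runs in both cases).
(add-hook 'outline-presentation-mode-hook
          (lambda ()
            (if outline-presentation-mode
                (text-scale-increase 3)
              (text-scale-increase 0))))  ; an argument of 0 resets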

Tips

You can use M-x center-line
and M-x center-region for your title slides. Do not forget
about M-x artist-mode either.

What is a λ-virus? It's a virus in the
λ-calculus!
The idea comes from a question asked of
David Naccache by one of his
former students. They were talking about our
IMACC
article, Can a Program
Reverse-Engineer Itself?, in which we define a notion of equivalence
between a function and an obfuscated version of it, and then show a
construction to protect functions from obfuscation (i.e., we can retrieve
their original code and also repair them in the environment). The student
asked if the construction used by oximoron (the implementation of
the article) could be a protection against a λ-virus. Prof. Naccache
then asked Antoine and me for our opinion about
this idea.

The first thing to do was to define what a λ-virus would actually be.
I came up with an informal but satisfactory definition of what an infected
(i.e., contaminated and contagious) λ-term is for a given λ-virus.
If T is a λ-term, then we call T_V the
λ-term "T infected by the λ-virus
V":

If T is a λ-abstraction (which we can see as a function
or a program), then for any λ-term E,
(T_V E) is (T E)_V.

If T isn't a λ-abstraction (we can see it as some data,
e.g., the final result of a computation), then T_V is
the value of (V* T), where V* is an arbitrary
λ-abstraction defined by the creator of V.

The first point, for λ-abstractions, defines how the virus propagates.
The second one defines how it acts. For instance, we can imagine a virus
which propagates until it infects a function that returns an integer, and
makes this function always return 4.

At this stage we can already say that oximoron doesn't protect
against this kind of attack (it wasn't supposed to, but it would have been
cool if it did). But let's forget about that; the idea of a λ-virus is
fun enough in itself.

One thing we can note with our definition of λ-virus is that the
original form of a virus V, the λ-term whose only feature
is to inoculate the virus into other λ-terms (i.e., Ṽ such
that for any λ-term T, (Ṽ T) is
T_V), is simply Id_V. I know it is
obvious, but I find it quite beautiful at the same time.

At first I thought that I would need to use a
quine-based
construction to create the virus, as we did in
Pastis and oximoron. That
would be true if the implementation were actually in λ-calculus, or at
least worked with S-expressions as oximoron does. But we can
actually implement this notion of virus very easily in any language which has
closures
(pun
intended). I did it in Scheme (Racket),
and I implemented the possibility of adding side effects when the virus
propagates:
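
A minimal sketch of that implementation, following the definition above (the
original code may differ in its details):

(define (create-virus effect payload)
  ;; effect  : function called for its side effects at each propagation
  ;; payload : the V* from the definition, applied to data terms
  (define (infect term)
    (effect term)
    (if (procedure? term)
        ;; T is a λ-abstraction: (T_V E) must be (T E)_V
        (lambda (arg) (infect (term arg)))
        ;; T is data: T_V is the value of (V* T)
        (payload term)))
  ;; the returned function is Ṽ, i.e., Id_V
  infect)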

The two arguments of create-virus have to be functions. The
first one is for the side effects the virus creator might want to add. The
second one is the function the virus should apply to results (the V*
from earlier).

(create-virus (lambda (x) null) identity)
;; will propagate but won't do anything at all