So, what laws or regulations should be created to improve computer security?
Are there any?
Obviously there are risks to creating laws and regulations.
These need to be targeted at countering widespread problems, without
interfering with experimentation, without hindering free expression or
the development of open source software, and so on.
It’s easy to create bad laws and regulations - but
I believe it is possible to create good laws and regulations
that will help.

My article
Creating Laws for Computer Security
lists some potential items that could be turned into laws that
I think could help computer security.
No doubt some could be improved, and there are probably things I’ve
missed.
But I think it’s important that people start discussing how to
create narrowly-tailored laws that counter the more serious problems
without causing too many negative side-effects.
Enjoy!

Wed, 04 May 2016

The Linux Foundation’s Core Infrastructure Initiative (CII)
has just announced its CII best practices badging program
for FLOSS projects.
It’s a free program that lets developers explain how they
follow best practices, and if they do, they can get a badge
that they can show on their GitHub page or anywhere else.
Early badge earners include the Linux kernel, Curl, GitLab,
OpenBlox, OpenSSL, Node.js and Zephyr.

The idea is straightforward.
The Heartbleed vulnerability in OpenSSL made it obvious that
there are widely-accepted best practices that not everyone follows -
and that even includes important projects.
This isn’t just speculation;
if you compare OpenSSL before Heartbleed with current OpenSSL,
the difference is striking.
I think it’s clear that if
more projects would apply generally-accepted best practices,
we’d have more secure software.
This badging process helps projects identify those best practices,
determine if they meet them, and show everyone else that they’re
meeting them.

This kind of distribution option is absolutely not for everyone.
Address Sanitizer on average increases processing time by about 73%,
and memory usage by 340%.
What’s more, this work is currently very experimental,
and you have to disable some other security mechanisms to make it work.
That said, this effort has already borne a lot of valuable fruit.
Turning on these mechanisms across
an entire Linux distribution has revealed a large number of memory errors
that are getting fixed.
I can easily imagine this being directly useful in the future, too.
Computers are very fast and have lots of memory, even when compared
to computers of just a few years earlier.
There are definitely situations where it’s okay to effectively
halve performance and reduce useful memory, and in exchange,
significantly increase the system’s resistance to novel attacks.
My congrats!

With luck, this won’t come true in 2016.
The question is, is that because it doesn’t show up until 2017 or 2018…
or because the first ones were in 2015?
DHS is funding work in this area, and that’s good…
but while research can help, the real problem is that we have
too many software developers who do not have a clue how to develop
secure software… and too many people (software developers or not)
who think that’s acceptable.

In short,
we still have way too many people building safety-critical devices who
don’t understand that security is necessary for safety.
I hope that this changes - and quickly.

This means that software developers should seriously consider using
a more-advanced fuzzer, such as
american fuzzy lop (afl),
along with
Address Sanitizer (ASan)
(an option in both the LLVM/clang and gcc compilers),
whenever they write in C, C++, Objective-C, or in other
circumstances that are not memory-safe.
In particular, seriously consider doing
this if your program is exposed to the internet or
it processes data sent via the internet
(practically all programs meet these criteria nowadays).
I had speculated that this combination could have found Heartbleed in
my essay on Heartbleed,
but this confirmation is really important.
Here I will summarize what’s going on
(using the capitalization conventions
of the various tool developers).

The
american fuzzy lop (afl)
program created by Michal Zalewski is a surprisingly effective fuzzer.
A fuzzer is simply a tool that sends lots of semi-random inputs into a program
and detects gross problems (typically a crash).
Fuzzers do not know what the exact correct outputs are,
but precisely because they do not, they can try out far more inputs
than systems that do.
But afl is smarter than most fuzzers; instead of just sending random inputs,
afl tracks which branches are taken in a program.
Even more interestingly, afl even tracks how often different branches
are taken when running a program (that is especially unusual).
Then, when afl creates new inputs, it prefers to create them
based on inputs that have produced different counts on at least some branches.
This evolutionary approach, using both branch coverage and the
number of times a branch is used, is remarkably effective.
Simple dumb random fuzzers can only perform relatively shallow tests;
getting any depth has required more complex approaches such as
detailed descriptions of the
required format (the approach used by
so-called “smart” fuzzers) and/or
white-box constraint solving (such as
fuzzgrind
or Microsoft’s SAGE).
It’s not at all clear that afl eliminates the value of these other
fuzzing approaches; I can see combining them.
However, afl is clearly getting far better results than
simple dumb fuzzers that just send random values.
Indeed, the afl of today is getting remarkably deep coverage for a fuzzer.
For example, the post
Pulling JPEGs out of thin air shows how afl was able to
start with only the text “hello” (a hideously bad starting point)
and still automatically figure out how to create valid JPEG files.
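The evolutionary loop described above can be sketched in a few lines of Python. This is only a toy illustration of the idea, not afl itself (afl instruments compiled binaries and also tracks branch *counts*, which this sketch omits); `toy_target` is a made-up stand-in for the program under test:

```python
import random

def toy_target(data: bytes) -> frozenset:
    """Stand-in for the program under test; returns the branches exercised."""
    hits = set()
    if data and data[0] > 127:
        hits.add("branch-A")
        if len(data) > 1 and data[1] > 127:
            hits.add("branch-B")            # only reachable via branch-A inputs
    if len(data) > 8:
        hits.add("long-input")
    return frozenset(hits)

def coverage_guided_fuzz(rounds: int = 3000, seed: int = 0) -> list:
    """Keep any mutated input whose coverage pattern has not been seen before."""
    rng = random.Random(seed)
    corpus = [b"hello"]                     # afl likewise starts from seed inputs
    seen = {toy_target(corpus[0])}
    for _ in range(rounds):
        parent = bytearray(rng.choice(corpus))  # mutate an interesting input
        if rng.random() < 0.2:
            parent.append(rng.randrange(256))   # occasionally grow the input
        parent[rng.randrange(len(parent))] = rng.randrange(256)
        child = bytes(parent)
        cov = toy_target(child)
        if cov not in seen:                 # new branch behavior: keep it
            seen.add(cov)
            corpus.append(child)
    return corpus
```

Because inputs that reach branch-A stay in the corpus and get mutated further, branch-B is found far faster than by uniform random guessing; that compounding effect is the heart of the evolutionary approach.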

However, while afl is really good at creating inputs, it can only
detect problems if they lead to a crash; vulnerabilities like
Heartbleed do not normally cause a crash.
That’s where Address Sanitizer (ASan) comes in.
Address Sanitizer turns many memory access errors, including nearly
all out-of-bounds accesses, double-frees, and use-after-frees, into
a crash.
ASan was originally created by
Konstantin Serebryany, Derek Bruening, Alexander Potapenko, and Dmitry Vyukov.
ASan is amazing all by itself, and the combination is even better.
The fuzzer afl is good at creating inputs, and ASan is good
at turning problems into something that afl can detect.
Both are available at no cost as
Free/ libre/ open source software (FLOSS),
so anyone can try them out, see how they work, and even make improvements.

Normally afl can only fuzz file inputs, but Heartbleed could only be
triggered by network access.
This is no big deal; Hanno describes in his article how to wrap up
network programs so they can be fuzzed by file fuzzers.
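The idea can be sketched as follows. This is my own minimal illustration, not Hanno’s actual wrapper, and `handle_request` is a hypothetical protocol handler invented for the example:

```python
import io

def handle_request(stream) -> bytes:
    """Hypothetical network handler: 1-byte length prefix, then payload."""
    header = stream.read(1)
    if len(header) != 1:
        raise ValueError("missing length byte")
    length = header[0]
    payload = stream.read(length)
    if len(payload) != length:
        raise ValueError("truncated payload")
    return payload

# A network server would hand this function a socket's file object
# (e.g., sock.makefile("rb")); for fuzzing, a file fuzzer supplies a
# file path and we feed the very same handler from the file instead:
def fuzz_entry(path: str) -> bytes:
    with open(path, "rb") as f:
        return handle_request(f)
```

Because the handler only ever sees a byte stream, a file-based fuzzer can exercise exactly the same parsing code that network input would.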

Today, afl and ASan sometimes do not work well together on 64-bit systems.
This has to do with some technical limitations involving memory use;
on 64-bit systems ASan reserves (but does not use) a lot of memory.
This is not necessarily a killer;
in many cases you can use them together anyway (as Hanno did).
More importantly, this problem is about to go away.
Recently I co-authored (along with Sam Hakim) a tool we call
afl-limit-memory; it uses Linux cgroups to eliminate the problem so
that you can always combine afl and ASan (at least on Linux).
We have already submitted the code to the afl project leader,
and we hope it will become part of afl soon.
So this is already a disappearing problem.

I do not think that fuzzers (or any dynamic technique) completely
replace static analysis approaches such as source code weakness analyzers.
Various tools, including dynamic tools like fuzzers and static tools
like source code weakness analyzers,
are valuable complements for finding vulnerabilities
before the attackers do.

Sat, 04 Apr 2015

I’ve updated my
presentations on how to design and
implement secure software.
In particular, I’ve added much about analysis tools and
formal methods.
There is a lot going on in those fields, and no matter what I do
I am only scratching the surface.
On the other hand, if you have not been following these closely,
then there’s a lot you probably don’t know about.
Enjoy!

Fri, 27 Mar 2015

Z3 has been released as
open source software under the
MIT license!
This is great news.
Z3 is a good satisfiability modulo theories (SMT) solver /
theorem prover from Microsoft Research.
An SMT solver accepts a set of constraints
(such as “a<5 and a>1”) and tries to produce values that
satisfy all the constraints.
A satisfiability (SAT) solver does this too, but SAT solvers can only
work with boolean variables;
SMT solvers can handle other types, such as integers.
Here is a Z3 tutorial.
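To make the idea concrete, here is a toy brute-force version of what a solver does with constraints like those above. A real SMT solver such as Z3 uses sophisticated decision procedures rather than enumeration, so treat this purely as an illustration of the problem being solved:

```python
from itertools import product

def solve(constraints, variables, domain=range(-10, 11)):
    """Try every assignment over a small finite domain (toy only)."""
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return assignment          # "sat": a satisfying model
    return None                        # "unsat" over this domain

# The constraints from the text: a < 5 and a > 1
model = solve([lambda m: m["a"] < 5, lambda m: m["a"] > 1], ["a"])
print(model)  # → {'a': 2}
```

A SAT solver would only allow boolean variables here; handling integer variables like `a` directly is exactly what makes this an SMT-style problem.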

SMT solvers are basically lower-level tools that have many uses
for building larger capabilities, because many problems
require solving logical formulas to find a solution.

I am particularly interested in the use of SMT solvers
to help prove that programs do something or do not do something.
Why3 is a platform that lets you
write programs and their specifications, and then calls out to
various provers to try to determine if the claims are true.
By itself Why3 only supports its WhyML language, but Why3 can be combined
with other tools to prove statements in other languages.
Those include C (using Frama-C and a plug-in), Java, and Ada.
People have been able to prove tiny programs for decades, but scaling up
to bigger programs in practice requires a lot of automation.
I think this approach of combining many different tools, with
different strengths, is very promising.

The more tools that are available to Why3, the more likely it will
solve problems automatically.
That’s because different tools use different heuristics and focus on
different issues, so different tools are good at different things.
There are already several good SMT solvers available as OSS,
including
CVC4
and
alt-ergo.

Now that Microsoft has released Z3 as OSS, there is yet another
strong OSS SMT solver that tools like Why3 can use directly.
In short, the collection of OSS SMT solvers has just become even stronger.
There’s a standard for SMT solver inputs, the
SMT-LIB format,
so it’s not hard to take advantage of many SMT solvers.
My hope is that this will be another step in making it easier to
have strong confidence in software.

Wed, 11 Mar 2015

Currently this website uses only HTTP, and does not support HTTPS.
That means that users cannot trivially authenticate what they receive, and
that in some cases users reveal to others what they are viewing on the site.
(Technical note:
HTTPS is implemented by a lower-level protocol; the current versions
of that protocol are named TLS, and the older ones are named SSL,
but a lot of people use the term SSL to include TLS.)
I would like to use HTTPS, but this website is entirely self-funded.
I do have a plan, though.

My current plan is that I am waiting for
Let’s Encrypt to stand up and be ready.
Once that gets going, I intend to use it to add support for HTTPS.
I’d like to eventually only support HTTPS, since that prevents
downgrade attacks, but I need to make sure that the TLS certificates
and configuration work well.
Also, I pay others to maintain the server;
since I am not made of money, I necessarily use low-end cheap services.
That will limit what I can do in terms of HTTPS configuration hardening.
On the other hand, it should be better than the current situation.

The software I develop is generally
available on SourceForge or GitHub, and they already provide HTTPS,
so you don’t need to wait for that.
Currently you have to log into SourceForge to get HTTPS, but that is
expected to change; for now, just log in.

Anyway, I thought some of you might like to know that there is a plan.

Tue, 06 Jan 2015

There seems to be a lot of confusion about security fundamentals
of cloud computing (and other utility-based approaches).
For example, many people erroneously think hardware virtualization is required
for clouds (it is not), or that hardware virtualization and containerization
are the same (they are not).

Will people actually learn anything?
Georg Wilhelm Friedrich Hegel reportedly said that,
“We learn from history that we do not learn from history”.

Yet I think there are reasons to hope.
There are a lot of efforts to improve the security of
Free/Libre/Open Source Software (FLOSS) that is important yet
inadequately secure.
The
Linux Foundation (LF) Core Infrastructure Initiative (CII)
was established to “fund open source projects that are in the critical
path for core computing functions” to improve their security.
The most recent European Union (EU) budget includes €1 million for auditing free-software programs to identify and fix vulnerabilities.
The US DHS HOST project is also
working to improve security using open source software (OSS).
The
Google Application Security Patch Reward Program is also working to improve security.
And to be fair, these problems were found by people who were examining
the software or protocols so that the problems could be fixed - exactly
what you want to happen.
At an organizational level, I think Sony was unusually lax in its
security posture.
I am already seeing evidence that other organizations have suddenly
become much more serious about security, now that they see what has been
done to Sony Pictures.
In short, they are finally starting to see that
security problems are not theoretical; they are real.

Here’s hoping that 2015 will be known as the year where people took
computer security more seriously, and as a result, software and our
computer systems became much harder to attack.
If that happens, that would make 2015 an awesome year.

Sun, 23 Nov 2014

The year 2014 has not been a good year for the SSL/TLS protocol.
SSL/TLS is the fundamental protocol for securing web applications.
Yet every major implementation has had at least one
disastrous vulnerability, including
Apple (goto fail),
GnuTLS,
OpenSSL (Heartbleed),
and Microsoft.
Separately a nasty attack has been found in the underlying
SSLv3 protocol
(POODLE).
But instead of just noting those depressing statistics, we
need to figure out why those vulnerabilities happened,
and change how we develop software to prevent them
from happening again.

To help, I just released
The Apple goto fail vulnerability: lessons learned, a paper that,
like my previous papers, focuses on
how to counter these kinds of vulnerabilities
in the future.
In many ways the Apple goto fail vulnerability was much more embarrassing
than Heartbleed; the goto fail vulnerability was
easy to detect, and it sat in a key part of the code’s
functionality.
This vulnerability was reported back in February 2014, but there
does not seem to be a single place where you can find a more complete
list of approaches to counter it.
I also note some information that doesn’t seem to
be available elsewhere.