
[March 20th, 2018] - When I encounter a new class of vulnerability or attack vector, I typically add a check for it in Sobelow. Occasionally, however, I see an opportunity for these issues to be mitigated at the library level. In these cases, I try to reach out to José or Chris to talk through potential solutions.

A developer building SiteA added a form for a third-party service (SiteB), using Phoenix’s built-in form_for function. Because Phoenix forms include a valid CSRF token by default, these tokens were being leaked to the third-party1. As a consequence, SiteB could cause a user to perform unwanted actions on SiteA, such as updating their password or deleting their account.

This is the problem we were aiming to fix. But before getting to our solution, I want to flesh out the problem a little bit. Let’s start with an overview of Cross-Site Request Forgery (or CSRF), one of the web’s most common vulnerabilities.

What is CSRF?

The basic idea is this: Any website can create a form pointing to any other website. They can also automatically fill out and submit these forms. Because cookies are sent with every request, a CSRF attack allows untrusted applications to cause a user’s browser to submit requests and perform authenticated actions on the user’s behalf. Thankfully, web libraries such as Phoenix and Plug have adopted techniques to mitigate such attacks.

The most common solution to this problem, and the one taken by Phoenix, is to store a unique, random token in the user’s session. This token is then fetched by the client and sent along with every request. When an application processes the request, the submitted token should match the token stored in the user’s session. This way, a malicious website can only cause a user to make valid requests if they have access to a valid CSRF token.
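As a rough sketch (illustrative Elixir only; the module and function names here are made up and this is not Plug's actual implementation), the mechanism boils down to:

```elixir
# Minimal sketch of per-session CSRF protection. Illustrative only: these
# names are invented for the example, not Plug's real API.
defmodule CSRFTokenSketch do
  # A unique, random token is generated and stored in the user's session.
  def generate_token do
    16 |> :crypto.strong_rand_bytes() |> Base.url_encode64()
  end

  # On each state-changing request, the submitted token must match the
  # one stored in the session.
  def valid_request?(session_token, submitted_token)
      when is_binary(session_token) and is_binary(submitted_token) and
             byte_size(session_token) == byte_size(submitted_token) do
    # XOR the two binaries and check for all-zero bytes, so the comparison
    # takes constant time regardless of where a mismatch occurs.
    :crypto.exor(session_token, submitted_token)
    |> :binary.bin_to_list()
    |> Enum.all?(&(&1 == 0))
  end

  def valid_request?(_, _), do: false
end
```

Plug's real implementation does more than this (for example, tokens are masked per-request), but the core idea is the same: a random per-session secret that a third-party site has no way to read.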

Now, this is where our initial issue comes into play. When using form generation for third-party or dynamic endpoints, valid CSRF tokens will be leaked by default, leaving users vulnerable to attack. This is something that an experienced developer will probably catch, but it’s a common enough occurrence that it would be nice if the issue could be addressed.

So, the core issue is simple: In some cases, valid CSRF tokens are leaked by default. And the desired outcome is clear: Don’t leak valid tokens by default. But, in achieving that solution, there are a few considerations:

Tokens leaked in this manner shouldn’t be useful to an attacker.

A solution shouldn’t increase burden on the developer, and it shouldn’t sacrifice performance.

Tokens should work across domains when desired.

What is the solution?

Ultimately, the solution is actually pretty simple. If the token is being generated for a path, as is typical, then everything stays the same. That is, fetching the CSRF token looks something like this:

get_csrf_token()

However, if a token is fetched for a fully qualified host, the process is a bit more involved:

The user’s CSRF token is fetched like normal. However, instead of returning the token directly, the token is used as a “salt” in a key derivation function. This newly derived secret key is then used to sign a message which includes the target host.

On the receiving end of things, the CSRF token’s signature is validated. If the signature verification succeeds, then the host is validated to ensure it matches a set of allowed hosts (or the host header). If either of these checks fails, the request is rejected as fraudulent.
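The two halves of the scheme can be sketched in Elixir like so. This is a simplified illustration, not Plug's actual code: the module name is invented, PBKDF2 stands in for whatever key derivation function the real implementation uses, and the parameters are arbitrary (`:crypto.pbkdf2_hmac/5` requires OTP 24.2+).

```elixir
defmodule HostTokenSketch do
  # Derive a new secret key, using the session's CSRF token as the "salt"
  # in a key derivation function.
  defp signing_key(secret_key_base, csrf_token) do
    :crypto.pbkdf2_hmac(:sha256, secret_key_base, csrf_token, 1000, 32)
  end

  # Sign a message that includes the target host.
  def host_token(secret_key_base, csrf_token, host) do
    :crypto.mac(:hmac, :sha256, signing_key(secret_key_base, csrf_token), host)
    |> Base.url_encode64()
  end

  # Receiving end: recompute the signature for an allowed host and check
  # that it matches the submitted token.
  def valid?(secret_key_base, csrf_token, allowed_host, token) do
    # A real implementation would use a constant-time comparison here.
    host_token(secret_key_base, csrf_token, allowed_host) == token
  end
end
```

The key property: a token signed for one host fails validation when replayed against another, because the signed host no longer matches.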

Let’s look back at the initial problem, this time using our new host tokens.

A developer building SiteA adds a form for a third-party service (SiteB), using Phoenix’s built-in form_for function. Because Phoenix forms include a CSRF token by default, these tokens will be leaked to the third-party. Fortunately, these are now the signed host tokens. If the malicious site attempts to initiate a CSRF attack, the host signed in the token (SiteB) won’t match the host being requested (SiteA). As a consequence, SiteB’s attack fails.

This technique has a few benefits. First, and most importantly, nothing changes in the vast majority of cases. Most forms will continue to include plain, standard CSRF tokens. But now, if CSRF tokens are leaked in the described manner, we are safe from attack. Further, unlike other potential solutions, this solution will still work seamlessly across subdomains and domains the developer controls.

With this solution, we prevent usable tokens from leaking by default, require no changes from the end-developer, and still allow the same range of functionality.

These changes are live in Plug 1.5.0 and phoenix_html 2.10.0!

1: This is not a problem unique to Phoenix. You will encounter it in any web framework with form generators that include a CSRF token.

[February 7th, 2018] - A while back, I found a somewhat unique remote code execution attack vector for Elixir and Phoenix projects. Now, right off the bat, it’s worth noting that there is a 99.999% chance that no Elixir library or Phoenix application is unknowingly vulnerable to this issue. However, it’s fairly amusing and I thought that made it worth sharing.

So, to set the stage:

You’ve got a web application, and you let users do a couple things. Not necessarily realistic things, but good for an example. First, you let your users upload PNGs. You do a simple but sufficient validation of the image before saving it to make sure it has a .png extension and that it’s not too big. It seems secure enough. You also have a bunch of non-sensitive database tables, and all of them have a title column. You want your users to be able to dynamically query these tables and fetch all the titles.

On the attacker side, the attacker has uploaded one PNG, hello.png, and they know your dynamic query looks like this:
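The original snippet is not reproduced here, but based on the surrounding description it was presumably something along these lines (the action name, `Example.Repo`, and the `"type"` parameter are illustrative; the important parts are the calls to `String.to_atom` and Ecto's `from/2`):

```elixir
import Ecto.Query

# Hypothetical controller action: "type" comes straight from the request
# and is used to pick which table to query.
def titles(conn, %{"type" => type}) do
  titles =
    from(t in String.to_atom(type), select: t.title)
    |> Example.Repo.all()

  json(conn, titles)
end
```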

What is the worst thing the attacker can do here? If the title didn’t already give it away, it’s arbitrary code execution!

So, how does it work?

The first thing you might have noticed is the call to String.to_atom. This is insecure on its own1, but it won’t cause code execution.

Try passing a known module to the from/2 function, and you will get an error to the effect of "protocol Ecto.Queryable not implemented for <MODULE>, the given module does not provide a schema." This is because, deep down, from/2 calls __schema__ on whatever module you pass as a parameter. You can test this out in the following way:

defmodule ExampleWeb.PageController do
  ...

  def __schema__(_) do
    IO.puts "Hello, world!"
  end
end

Re-issue the GET request, and check your logs for the “Hello, world!” text.

We are halfway to arbitrary code execution: we can execute the __schema__/1 function on any module; we are just missing the “arbitrary” part. This is where the fun part comes into play.

Did you know module names can be paths? For example, if you have a module named :"/My/Full/Path/Module", Elixir (by way of Erlang) will attempt to load code from the file located at "/My/Full/Path/Module.beam". Better still, you can null-terminate2 the module name. So :"/My/Full/Path/Module.png\0" will load code from the file located at "/My/Full/Path/Module.png".

With this capability, an attacker can locally compile a module :"assets/static/images/hello.png\0" with a malicious __schema__ function. Then they can upload the "hello.png" file that’s been created, and make a request to "http://hostname/?type=assets/static/images/hello.png%00". Arbitrary code execution achieved.

Here are reproduction steps so you can test this yourself:

Go to the root of your Example application, and make the necessary directories: mkdir -p _build/dev/lib/example/ebin/assets/static/images

Now, create a new module in the PageController:

defmodule ExampleWeb.PageController do
  ...
end

defmodule :"assets/static/images/arbitrary_rce.png\0" do
  def __schema__(_) do
    IO.puts("\n=== REMOTE CODE EXECUTION ===\n")
  end
end
</defmodule>

This module is only temporary. Run the Phoenix application to force a build: mix phx.server.

Finally, re-issue the GET request, this time with ?type=assets/static/images/arbitrary_rce.png%00. And that’s it.

Like I said, it’s unlikely that this is actively exploitable in the wild. But, given enough time and growing popularity, we may one day see this vulnerability in a live Phoenix application!

Anyway, if you didn’t feel like actively following along, but you still want to give this a test run, you can find the example repository here: GitHub - GriffinMB/RCE_Example. And, if you are a security conscious Elixir/Phoenix developer looking to secure your latest web app, give Sobelow a whirl :)

1: In Elixir, atoms are not garbage collected. As such, if user input is passed to the String.to_atom function, it may exhaust memory or fill the atom table entirely, crashing the VM.
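The usual mitigation is String.to_existing_atom/1, which only succeeds for atoms that already exist, so untrusted input can’t mint new ones:

```elixir
# to_atom creates a brand-new atom for every distinct input; atoms are
# never reclaimed, so untrusted input grows the table without bound.
String.to_atom("some_user_input")

# to_existing_atom raises ArgumentError unless the atom already exists
# somewhere in the loaded code.
String.to_existing_atom("ok")
```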

[March 18th, 2017] - A couple weeks ago, I reported two Plug vulnerabilities to the Elixir team. They quickly fixed the issues, and made a disclosure on the forum.

I wanted to go into a little more detail, and cover the aspects I found interesting. I also hate it when I can't find a workable PoC for a disclosure, so that's here as well.

Arbitrary Code Execution in Cookie Serialization

This vulnerability is the less practical of the two, and probably hasn't been exploitable in the wild. However, the method of exploitation hasn't been covered yet, and I thought it was particularly interesting. Here's most of the original report:

Impact:

Users with the ability to forge session cookies may be able to achieve arbitrary code execution.

Details:

The default "session" plug provides session cookie storage and serialization functionality. Cookie data is validated via a signing mechanism, which makes use of a secret_key_base token. The serialization process utilizes Erlang's binary_to_term and term_to_binary functions, which will serialize and deserialize any term or data structure, including partials and function captures. A user with the ability to forge cookies (e.g. as a result of a source code disclosure) can create cookies containing these dangerous values. Function captures are enumerable, which means that any instance in which a cookie value is enumerated is vulnerable to code execution.

For example, consider an application that stores cart IDs in the session cookie with the intention of doing something special for any ID over 10:
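The code sample from the original report is not reproduced here; based on the report's description, it would have looked something like this (the action name and session key are illustrative):

```elixir
# Hypothetical controller action from the report's scenario.
def index(conn, _params) do
  cart_ids = get_session(conn, "cart")

  if Enum.any?(cart_ids, fn id -> id > 10 end) do
    # ... do something special for large cart IDs ...
  end

  render(conn, "index.html")
end
```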

To achieve code execution, a malicious user could forge a cookie with the following value:

%{"cart" => &({IO.inspect(System.cwd), &1, &2})}

The initial get_session call would return the function capture, and the value &({IO.inspect(System.cwd), &1, &2}) would be stored in cart_ids. The Enum.any? call would iterate over the capture, and execute IO.inspect(System.cwd), printing the current working directory to the log.
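You can observe the underlying primitive directly in iex, without forging any cookies. In Elixir, a two-arity function implements the Enumerable protocol (as a raw reducer), so enumerating one invokes it. (File.cwd!/0 stands in for the now-deprecated System.cwd/0 used in the original report.)

```elixir
# The forged "cart" value: a 2-arity function capture with a side effect.
cart_ids = &{IO.inspect(File.cwd!()), &1, &2}

# Enum.any? treats the capture as an enumerable and calls it, which
# evaluates the tuple and runs IO.inspect as a side effect.
Enum.any?(cart_ids, fn id -> id > 10 end)
```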

Reproduction Steps:

Set up a working Elixir and Phoenix environment.

Clone the following repository: https://github.com/GriffinMB/web

Install dependencies, and run the server. Navigate to the homepage to view the default Phoenix Framework information.

Null Byte Injection in Plug.Static

This one has been covered a bit more fully, but here are the details I provided:

Impact:

Users with the ability to upload files served by the "static" plug can bypass filetype restrictions, which may lead to cross-site scripting and other arbitrary file upload exploits.

Details:

The "static" plug, used to serve assets in Plug-based web frameworks, serves two primary functions: locating the requested file, and setting the response content type. The asset content type is set dynamically, using the Mime.from_path function. For example, a request for the file "images/phoenix.png" will result in a content type of "image/png." However, if the request is updated to "images/phoenix.png%00.html," the resulting content type will be set to "text/html."
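You can see the extension-based lookup for yourself with the MIME library (a dependency of Plug): MIME.from_path/1 keys off the path's extension, so the null byte shifts which extension gets matched.

```elixir
# The normal lookup: extension is ".png".
MIME.from_path("images/phoenix.png")
# => "image/png"

# With an embedded null byte, the last extension is ".html".
MIME.from_path("images/phoenix.png\0.html")
# => "text/html"
```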

The problem with this is that the mechanism for reading the file, :prim_file.read_file_info, is null byte terminated. This means that both "images/phoenix.png" and "images/phoenix.png%00.html" will return the same static asset. So, if file upload functionality is provided by the application, and the assets are served with the "static" plug, a malicious user could do something like the following:

Upload a file, "evil.png," with embedded JavaScript

Request the file at "evil.png%00.html"

Achieve XSS

Reproduction Steps:

Set up a working Elixir and Phoenix environment.

Create a new Phoenix project, and run the server.

In the static images directory, create a file ("evil.png") with the following content: