Almost a year ago a huge number of women accused Morgan Marquis-Boire of sexual assault. Their courage in coming forward is incredible, especially considering the horrible record that society has in addressing these wrongs, and I would like to state my opinion on the matter for public record.

I don’t know Marquis-Boire really well, but both he and I worked for SA, although at different times. Some of his closest friends were in the past also my closest friends. I am writing this post because I am ashamed of the lack of response of the NZ infosec industry, I am ashamed of having worked for and having been part of the same company, but mainly I am ashamed of my own lack of response so far.

I want it to be stated that I personally believe all of these women on a number of grounds: the vividness of their descriptions, the widespread geographical locations of the victims, the toxic culture I observed in the workplace, and some of the attitudes that I observed my ex-colleagues have toward women.

I think the lack of prosecution of Marquis-Boire despite the overwhelming amount of testimony reflects poorly on society’s view of women’s right to be free of violence. I hope one day justice will be served and Morgan Marquis-Boire will be sentenced to a lengthy prison term.

This is a blog post summarising a few notes I’ve gathered around the internet, with the purpose of cementing them in my mind rather than adding anything new or attempting to broadcast them to a wider crowd. If you find it useful, that’s great, but it’s nothing original and it’s been pulled from several sources noted at the end of the post.

The post is organised into two categories: general attacks against URL parsers and implementations, and specific attacks against parsers in specific languages, with the intention of highlighting differences in interpretation of URL strings.

Discrepancies between parsers and HTTP libraries

Python

This is interesting. These three URL parsing libraries in Python all interpret the same individual URL differently:
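The original side-by-side comparison isn’t reproduced here, but the standard library alone shows the kind of ambiguity involved. As a sketch (the hostnames are made up for illustration), `urllib.parse.urlsplit` takes the host after the *last* `@` in the authority section, while a naive parser splitting on the first `@` would report the opposite host:

```python
from urllib.parse import urlsplit

# urlsplit treats everything before the LAST '@' as userinfo, so the
# "real" host is attacker.example. A parser splitting on the FIRST '@'
# would instead see expected.example as the host.
u = urlsplit("http://expected.example@attacker.example/")
print(u.hostname)   # attacker.example
print(u.username)   # expected.example
```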

In older versions of PHP, there was an inconsistency between parse_url and readfile which could lead to a URL parser bypass with URLs containing several colons, such as http://127.0.0.1:11211:80/, or with extraneous characters, such as http://google.com#@evil.com/. In both examples parse_url and readfile interpreted the URL differently. In the second case, readfile interprets the host as evil.com, whereas parse_url interprets the host as google.com.
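The PHP behaviour above is version-dependent, but the same two URLs can be poked at with Python’s standard library as an analogue. A spec-compliant parser keeps the extra colon in the (invalid) port field and treats everything after `#` as a fragment:

```python
from urllib.parse import urlsplit

# Extra colon in the authority: the host parses cleanly, the port does not.
u = urlsplit("http://127.0.0.1:11211:80/")
print(u.hostname)    # 127.0.0.1
# Accessing u.port would raise ValueError, since '11211:80' is not an integer.

# Everything after '#' is a fragment for a spec-compliant parser, but a
# naive parser keying on the '@' would see evil.com as the host.
u2 = urlsplit("http://google.com#@evil.com/")
print(u2.hostname, u2.fragment)   # google.com @evil.com/
```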

This may affect other programming languages.

Curl

Curl is in widespread use, and there are curl bindings in every language under the sun. Discrepancies between a language’s URL parser and curl’s could lead to SSRF. Consider the following URL, as interpreted by PHP:

As you can see it is attempting to retrieve 127.0.0.1. This example uses PHP, but these discrepancies have been identified in other languages as well and more are bound to exist. Example vulnerabilities have been found in WordPress, VBulletin and MyBB utilising this technique.
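The PHP illustration isn’t reproduced above, but the classic demonstration of this class of discrepancy is a URL with two credential separators. Note the curl behaviour described in the comment is historical and version-dependent:

```python
from urllib.parse import urlsplit

# Two '@' separators in the authority: RFC-style parsers take the host
# after the LAST '@'. Older curl versions stopped at the space and used
# the first separator, so curl fetched 127.0.0.1 while the application's
# own parser had checked google.com.
url = "http://foo@127.0.0.1 @google.com/"
print(urlsplit(url).hostname)   # google.com
```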

Node

Path traversal bypasses are possible with the special Unicode character U+FF2E. This happens because Node’s internal Unicode handling truncates this multibyte character, discarding the high byte and keeping only the low byte, \x2E, a dot.

Similar results can be observed by injecting U+FF0D and U+FF0A, whose low bytes are \x0D (carriage return) and \x0A (line feed), allowing newline injection.
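The truncation itself is easy to reproduce: keep only the low byte of each code point, which is what Node’s latin1 conversion effectively did:

```python
# Mask each code point down to its low byte, mimicking the lossy
# latin1 conversion: U+FF2E -> 0x2E ('.'), U+FF0D -> 0x0D (CR),
# U+FF0A -> 0x0A (LF).
for ch in ("\uFF2E", "\uFF0D", "\uFF0A"):
    print(hex(ord(ch) & 0xFF))
```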

General Bugs

On Linux, hostname resolution is generally done with gethostbyname. As per RFC 1035, it supports escaping of values with \DDD notation. This may allow for additional parser confusion.
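The escaping rule is simple: `\DDD` stands for the byte with that three-digit decimal value. A minimal (hypothetical) decoder makes the confusion potential obvious, since two visually different strings name the same host:

```python
import re

def decode_ddd(name: str) -> str:
    # RFC 1035 master-file escaping: \DDD is the byte with that
    # three-digit decimal value.
    return re.sub(r"\\(\d{3})", lambda m: chr(int(m.group(1))), name)

# \111 is decimal 111, i.e. 'o' -- so this is just 'localhost'.
print(decode_ddd(r"l\111calhost"))   # localhost
```

A filter comparing hostname strings before resolution would treat `l\111calhost` and `localhost` as different values.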

The ability to add invalid trailing content means that an attacker who can inject encoded newlines can perform HTTP header smuggling attacks, for example with 127.0.0.1\r\nfoo.com\r\nAuthorization: blah.
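A sketch of why this works: if a host value containing CRLF sequences reaches a request assembled by naive string concatenation, the embedded lines become extra headers:

```python
# An attacker-controlled "host" embedding CRLF sequences. When a client
# builds the request by string interpolation, the payload's extra lines
# are emitted as additional request headers.
host = "127.0.0.1\r\nAuthorization: blah"
request = f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n"
print(request)
```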

Additionally, an attacker can smuggle other protocols (such as SMTP) thanks to TLS SNI.

IDNA

Internationalizing Domain Names in Applications (IDNA) is a set of standards that allow characters outside the ASCII set to be used in domain names. There are two differing standards, IDNA2003 and IDNA2008, which are difficult to transition between for client implementations; this led the Unicode Consortium to release UTS #46.

Different HTTP libraries and URL parsing libraries implement different versions of this standard, and implement them in different ways. This can be useful for avoiding blacklists of disallowed hosts. An example is an inconsistency between PHP’s gethostbynamel function and curl’s resolver: gethostbynamel fails when provided with a domain containing a special character, which can lead to bypasses, while curl will happily retrieve the URL and resolve it successfully.
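The IDNA2003/IDNA2008 split can be seen from Python directly. The classic example is the German sharp s: Python’s built-in codec implements IDNA2003, which maps it away entirely, whereas an IDNA2008 implementation preserves it, so two resolvers can end up at different registered domains:

```python
# Python's built-in codec implements IDNA2003, which maps 'ß' to 'ss':
print("faß.de".encode("idna"))   # b'fass.de'
# An IDNA2008 implementation (e.g. the third-party 'idna' package)
# keeps the character, yielding b'xn--fa-hia.de' -- a different domain.
```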

Values synonymous to localhost

Besides the obvious examples, the following URLs will all attempt to retrieve localhost.
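The original list isn’t reproduced above, but a couple of well-known forms can be verified with the C resolver’s address parser, which accepts abbreviated, hexadecimal, and single-integer notations that all name the loopback address:

```python
import socket

# inet_aton (glibc semantics) accepts shorthand and numeric forms:
# '127.1' is class-A shorthand, '0x7f.0.0.1' uses a hex octet, and
# '2130706433' is 127.0.0.1 as a single 32-bit integer.
for host in ("127.0.0.1", "127.1", "0x7f.0.0.1", "2130706433"):
    print(host, "->", socket.inet_aton(host).hex())
```

A string-based blacklist that only matches the literal `127.0.0.1` misses every one of the alternative spellings.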

DNS

Several mechanisms exist for bypassing SSRF protections through DNS shenanigans. I will cover these at a high level below:

Host that resolves to a malicious IP

DNS records may point to an internal IP address (such as 10.0.0.2 or 127.0.0.1). This frequently works because developers check whether an IP literal falls within a blocked address range but accept arbitrary DNS names regardless of what they resolve to.
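The fix is to check the *resolved* address rather than the hostname string. A minimal sketch (the helper name is mine, and real code would need to check every resolved address and pin the result for the actual request):

```python
import ipaddress
import socket

def resolves_internally(host: str) -> bool:
    # Resolve first, then classify the resulting address; a harmless-looking
    # name can point at loopback or RFC 1918 space.
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    return ip.is_loopback or ip.is_private

print(resolves_internally("localhost"))   # True
```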

Time of check, time of use vulnerabilities (TOCTOU)

A TOCTOU vulnerability can occur if the target application implements host whitelisting or host blacklisting. Imagine the following pseudo-code:

A TOCTOU vulnerability allows for a bypass of the blacklist check when DNS resolution occurs twice: once for the check, and again for the retrieval. An attacker-controlled DNS server can resolve to a benign address the first time and to a malicious IP the second.
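The original pseudo-code isn’t reproduced above; a hypothetical Python sketch of the vulnerable pattern (prefix list and function name are mine) looks like this:

```python
import socket
from urllib.parse import urlsplit

BLOCKED_PREFIXES = ("127.", "10.", "192.168.")

def checked_host(url: str) -> str:
    host = urlsplit(url).hostname
    # Time of check: first DNS resolution.
    if socket.gethostbyname(host).startswith(BLOCKED_PREFIXES):
        raise ValueError("internal address blocked")
    # Time of use: the HTTP client that later fetches `host` performs a
    # SECOND, independent resolution. A DNS server alternating answers
    # passes the check above, then serves an internal IP for the fetch.
    return host
```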

Malicious redirect

An SSRF protection bypass may occur if an attacker creates a malicious site that redirects to an internal IP, because the check is performed on the initial address and not on the address the HTTP client is redirected to. This works against most HTTP clients, as they tend to follow redirections by default. Here’s an example:
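The original example isn’t reproduced above; as a self-contained sketch, the “attacker” server below answers with a 302 pointing at an “internal” resource (here a second path on the same demo server), and a default-configured client follows it without any further check:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/redirect":
            # The "malicious" endpoint: passes any up-front URL check,
            # then bounces the client to an internal address. For the
            # demo, the target is a second path on this same server.
            self.send_response(302)
            self.send_header("Location",
                             "http://127.0.0.1:%d/internal" % self.server.server_port)
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"internal data")

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Redirector)
threading.Thread(target=server.serve_forever, daemon=True).start()

# urllib follows the 302 automatically: a check performed on the
# original URL never sees the final internal destination.
body = urllib.request.urlopen(
    "http://127.0.0.1:%d/redirect" % server.server_port).read()
print(body)  # b'internal data'
server.shutdown()
```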

Final notes

The ability to inject \r\n in combination with either spaces or \t characters allows you to inject new headers into the request, which may enable other attacks. Imagine that a request to victim.com?url=yourserver.com/aa results in the following request:

GET /aa HTTP/1.1
Host: yourserver.com

A request that looks like victim.com?url=yourserver.com/aa%20HTTP/1.1%0Ainjected-header: true%0Ax-aa: could result in the following:
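The resulting request is not shown in the original; a plausible reconstruction, assuming the client pastes the decoded value straight into the request line, is:

GET /aa HTTP/1.1
injected-header: true
x-aa: HTTP/1.1
Host: yourserver.com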

A new version of droopescan has been released, which increases the version’s patch digit from 1.33.6 to 1.33.7. With this, droopescan officially reaches elite status. This is a minor release that updates the fingerprint databases for all supported CMSs. No serious vulnerabilities have come out this round, although WordPress has patched a cross-site scripting vulnerability.

For those that are not aware, a large number of improvements have been implemented between 1.0 and 1.33.7, almost too many to count. A few highlights are:

Support for CMS type autodetection. This allows you to specify a list of URLs, and dscan will automatically determine which CMS each one is running and perform the usual version and plugin enumeration. Performance is pretty great, and I’ve successfully version-scanned several million hosts in three days’ time.

Allow for resuming of mass scans with the --resume flag.

Several performance improvements and tweaks.

Preliminary support for WordPress and Joomla.

droopescan development continues, and exciting things for the future include tools that will make updating the fingerprints even easier (for me, as I am very lazy), support for other CMSs, and the release of version 2.0, which will tidy up JSON output.