This was a simple information leak vulnerability in the floppy driver, reported to the Linux kernel by Brian Belleville. Technically, an attacker could trigger show_floppy() calls in the floppy driver to reveal sensitive kernel addresses (such as global variable pointers and function addresses), which can be used to identify the required offsets and, consequently, bypass protections like KASLR. You can see the relevant code from drivers/block/floppy.c below.

As you can see above, there are a few places where pr_info() is called with the “%p” format specifier for function pointers. The issue is that this leaks the addresses of those function pointers, which an attacker can use to calculate the offsets of other, more important pointers and effectively bypass protections like KASLR.

One of the kernel’s extended format specifiers is “%pf”, which stands for “function pointer”: instead of printing the raw address, it prints the symbol name, so it does not leak an address that can be used to bypass protections such as KASLR. This is exactly what the patch used to fix this vulnerability.

With the increasing popularity of crypto-currencies we see more and more attacks focusing on this area. Some popular examples are coin miners delivered via botnets, JavaScript-based miners spread via ad networks, etc. A recent addition to this list is the so-called “Clipper” malware type.

Let’s start with the definition. A clipper is a type of software that watches the operating system’s copy/paste buffer (commonly referred to as the clipboard) for anything that resembles a crypto-currency address. If such an address is identified, it replaces it with one owned by the malware operator. I am adding a simple diagram I created to describe the attack below.

At the time of this writing, the three most prominent such malware families are “CryptoShuffler”, “ComboJack” and “Project Evrial”. For the first two, Unit 42 of Palo Alto Networks recently published a blog post. “Project Evrial” was started in December 2017 by threat actor “Qutra”, based on “CryptoShuffler”, and was later (in January 2018) updated by threat actor “emotion” as well as “Qutra”. Both versions are very popular in underground Russian-speaking communities and their price is around $30. You can see one of the latest English advertisements of this malware below.

Based on the above, in combination with the continuously growing popularity of crypto-currencies, we can assume that more malware of this type will be implemented in the future, specifically targeting crypto-currencies. On the other hand, it is important to remember that the same techniques can easily be adjusted for other types of attacks and malware.

I have had this book for quite a few years and never read it cover to cover. Recently I decided to do so, and this is my review. It is still a very relevant resource if you are entering the world of malware analysis, and it is definitely worth reading.

The book is written by two experts in the field, Michael Sikorski and Andrew Honig, both very experienced malware analysts and reverse engineers. It is an 800-page book from 2012 that starts from zero and moves up to advanced malware analysis and reverse engineering. No Starch Press provides a full listing of contents, reviews and sample chapters online if you want to check it out.

The book is from 2012, but the vast majority of its content is still applicable today. So far it is the most complete book that I have read on the topic of malware analysis, and if you want to enter this world I definitely recommend it as a good resource. However, keep in mind that since it is from 2012, there will definitely be a few things that are not as common today, and many newer techniques are not included. It is also worth noting that it is written in the style of a textbook, with exercises and examples at the end of each chapter. Overall, a very nice book. :)

Recently I came across this report, which is kind of sad since this was one nice and funny 0day that had been around for a very long time. However, in this post I will only talk about the vulnerability, since no exploit has been publicly disclosed yet. The vulnerability is in I4L (ISDN for Linux) and starts with the IIOCNETASL (create slave interface) ioctl command, which is handled in drivers/isdn/i4l/isdn_common.c as shown below.

Basically, this is reachable via the /dev/isdnctrl ISDN control device and the ioctl(2) system call using the IIOCNETASL command. As you can see in the above snippet, it uses copy_from_user() to fetch the user-controlled buffer and store it in “bname”, a stack-allocated buffer whose definition you can see below.

Here we can see that the only check on the user-derived “p” pointer is that it is not empty. The code then uses strcpy() to copy its contents to “newname”, a stack buffer with a size of 10 bytes. This is a 90s-textbook stack buffer overflow. In August 2017 it was reported and patched by Annie Cherkaev, who replaced strcpy() with strscpy(), which ensures that the copy cannot exceed the bounds of the “newname” buffer. The patch is the following.

This is kind of sad, not because this is a useful 0day but because it had been around for years. Some friends and I had had this 0day literally since 2007, so it is kind of sad to see it dying quietly like this. In any case, I will not go into how to exploit it, but it is a nice trivial vulnerability if you want to play around and practice your Linux kernel stack memory corruption exploitation techniques.

This was the first ever OffensiveCon and it took place last week in Berlin, Germany. Really nice conference which I definitely recommend to anyone interested in offensive security. Here is a very quick overview of the event from my point of view. Note that I did not attend any of the training sessions, so my opinion is based solely on the conference.

The event was dedicated to exploitation. I want to clarify this since offensive security is not just exploitation; it is also reconnaissance, building the Command & Control infrastructure, data exfiltration, lateral movement, etc. So, just to be clear, OffensiveCon is about exploitation. To get a better understanding of the content, here is a list of the talks of the event.

Day 1 keynote by Rodrigo Branco

Advancing the State of UEFI Bootkits: Persistence in the Age of PatchGuard and Windows 10 by Alex Ionescu

Field Report on a Zero-Day Machine by Niko Schmidt, Marco Bartoli and Fabian Yamaguchi

I attended all of them and the quality was excellent. As you can easily guess, the presentations were scheduled in a single track, which is great because you don’t have to worry about what to attend and what to miss. It wasn’t a huge event in terms of attendance, but everyone seemed really interested in exploitation. So, overall, a very nice atmosphere.

The location, snacks, lunch, and all of the organizational components were amazing. Very high quality, and everything worked exactly as planned (apart from the_grugq’s keynote that didn’t happen, but that wasn’t the organizers’ fault). So, congrats to everyone involved, because it made the entire event a very pleasant experience where you didn’t have to care about anything apart from learning and sharing knowledge. Well done, guys!

For the people that were not there, the organizers said that all the videos will be published on YouTube unless a speaker objects, so keep an eye out for them because all of the talks were very interesting.

In my past posts I described a few common techniques used by phishing kit authors to evade detection. This seems to be becoming more and more common among popular phishing kits. Here I will present a few very common techniques that I came across lately.

The first one is the common anti-detection logic based on the client’s details, such as originating IP address, user-agent string, domain name, etc. I have seen a few phishing kit authors describing those with the slang terms “antiboots” and “antibot”. You can see an example of such files below.

The above means that, as an organization, you need some “clean” networks, not associated with your organization, from which you should be running your phishing detection engines. But this is not the only technique: another common one employed by many phishing kit authors is to embed the static content of the target page in Base64-encoded form, as shown below.

This means that if your detection engine relies on callback images or static content, it is very likely that it will not be able to detect those phishing pages. Additionally, I have identified numerous phishing kits that do not target credentials only but are after OAuth2 tokens too. This means that you have to tune your systems to support this attack scenario as well. Finally, I have identified at least two separate phishing kits which deliver the content AES-encrypted, along with a JavaScript implementation of AES to do the decryption during client-side execution.

Dynamically loading arbitrary kernel modules has always been a gray area between usability-oriented and security-oriented Linux developers and users. In this post I will present what options are available today in the Linux kernel and in the most popular kernel hardening patch, grsecurity. These will give you some idea of how those projects deal with the threat of the Linux kernel’s LKMs (Loadable Kernel Modules).

Threat
Allowing dynamic LKM loading introduces the following two main threats:

Malicious LKMs. These are more or less rootkits or similar malware that an adversary can load for various operations, most commonly to hide specific activities from user space.

Vulnerable LKM loading. Imagine that you have a 0day exploit for a specific network driver, but the driver is not loaded by default. If you can trigger its dynamic loading, you can then exploit it and compromise the system. This is what this vector is about.

Linux kernel and KSPP
The KSPP (Kernel Self Protection Project) of the Linux kernel tried to address this issue with the introduction of kernel module access restrictions. Below you can see the exact description of this restriction from the Linux kernel documentation.

Restricting access to kernel modules
The kernel should never allow an unprivileged user the ability to load specific
kernel modules, since that would provide a facility to unexpectedly extend the
available attack surface. (The on-demand loading of modules via their predefined
subsystems, e.g. MODULE_ALIAS_*, is considered “expected” here, though additional
consideration should be given even to these.) For example, loading a filesystem
module via an unprivileged socket API is nonsense: only the root or physically
local user should trigger filesystem module loading. (And even this can be up
for debate in some scenarios.)
To protect against even privileged users, systems may need to either disable
module loading entirely (e.g. monolithic kernel builds or modules_disabled
sysctl), or provide signed modules (e.g. CONFIG_MODULE_SIG_FORCE, or dm-crypt
with LoadPin), to keep from having root load arbitrary kernel code via the
module loader interface.

The most restrictive option is the modules_disabled sysctl variable, which is available by default in the Linux kernel. This can be set dynamically, as you see here.

sysctl -w kernel.modules_disabled=1

Or persistently, as part of the system’s sysctl configuration, as you can see here.

echo 'kernel.modules_disabled=1' >> /etc/sysctl.d/99-custom.conf

In both cases, the result is the same: the above changes its default value from “0” to “1”. You can find the exact definition of this variable in kernel/sysctl.c.

If we look into kernel/module.c, we will see that if modules_disabled has a non-zero value, the kernel allows neither loading (may_init_module()) nor unloading (the delete_module() system call) of any LKM. Below you can see the module initialization code, which requires both the CAP_SYS_MODULE capability and modules_disabled to be zero.

Looking into kernel/kmod.c we can also see another check: before a kernel module loading request is passed to call_modprobe() to be loaded into the kernel, the __request_module() function verifies that modprobe_path is set. Module auto-loading always goes through the user-space /sbin/modprobe helper rather than some in-kernel API, so clearing modprobe_path is another way to disable auto-loading.

The above are the features that the Linux kernel has had for years to protect against this threat. The downside, though, is that completely disabling the loading and unloading of LKMs can break some legitimate operations, such as system upgrades, reboots on systems that load modules after boot, automation configuring software RAID devices after boot, etc.

To deal with the above, on 22 May 2017 the KSPP team proposed a patch to __request_module() (still not merged into the kernel at the time of writing) which follows a different approach.

What you see here is that at a very early stage of kernel module loading, security_kernel_module_request() is invoked with the module to be loaded as well as the allow_cap variable, which can be set to either “0” or “1”. If its value is positive, the security subsystem will trust the caller to auto-load modules with specific predefined (hardcoded) aliases. This was done to close a design flaw of the Linux kernel: although explicitly loading modules requires the CAP_SYS_MODULE capability (which is already checked, as shown earlier), network modules could be auto-loaded with just the CAP_NET_ADMIN capability, which completely bypassed the previously described controls. With this modified __request_module(), it is ensured that only specific modules allowed by the security subsystem can auto-load. However, it is also crucial to note that, to this date, the only security subsystem that utilizes the security_kernel_module_request() hook is SELinux.

Before we move on to grsecurity, it is important to note that on 7 November 2010 Dan Rosenberg proposed a replacement for modules_disabled, called modules_restrict, which was a copy of grsecurity’s logic. It had three values: 0 (no restriction), 1 (only root can load/unload LKMs), and 2 (no one can load/unload, same as modules_disabled). You can see the check that it added to __request_module() below.

However, this was never merged into the upstream kernel, so there is no need to dive further into the details behind it. Just as an overview, here is the proposed kernel configuration option documentation for modules_restrict.

modules_restrict:
A value indicating if module loading is restricted in an
otherwise modular kernel. This value defaults to off (0),
but can be set to (1) or (2). If set to (1), modules cannot
be auto-loaded by non-root users, for example by creating a
socket using a packet family that is compiled as a module and
not already loaded. If set to (2), modules can neither be
loaded nor unloaded, and the value can no longer be changed.

grsecurity
Unfortunately, grsecurity stable patches are no longer publicly available. For this reason, in this article I will be using the last publicly available grsecurity patch (grsecurity 3.1 for kernel 4.9.24). For LKM loading hardening, grsecurity offers a kernel configuration option known as MODHARDEN (“Harden module auto-loading”). If we go back to ___request_module() in kernel/kmod.c, we will see how this feature works.

The check in this case is relatively simple: it verifies that the caller’s UID equals the static global UID of the root user. This ensures that only users with UID 0 can load kernel modules, which completely eliminates the case of unprivileged users exploiting flaws that allow them to request kernel module loading. To overcome the network kernel module issue, grsecurity followed a different approach: it maintains the capability check (which is currently used by a very limited number of security subsystems) but redirects all loading through the ___request_module() function to ensure that only root can load modules.

Furthermore, grsecurity identified that a similar security design flaw also exists in filesystem module loading (still to be identified and fixed in the upstream kernel), and fixed it in a similar manner. Below is the grsecurity version of the get_fs_type() function from fs/filesystems.c, which ensures that filesystem modules can only be loaded by the root user.

This Linux kernel design flaw allows the loading of non-filesystem kernel modules via mount. How grsecurity detects those is quite clever and can be found in the simplify_symbols() function of kernel/module.c. It ensures that the arguments of the module are copied to the kernel side, and then checks the module’s symbol table to verify that the loaded module is actually trying to register a filesystem rather than being an arbitrary kernel module.

To help detect malicious users trying to exploit this Linux kernel design flaw, grsecurity also has an alerting mechanism in place, which immediately logs any attempt to use it to load a kernel module that is not a filesystem module, that is, loading an arbitrary kernel module via mount.