4 Answers

A core dump, either of the entire system or of a single executable, is the entire contents of that process's memory (or, in the case of the system, everything's memory) written out to a file. There can be a lot of data in such a dump - here's an abbreviated form of the memory space of gvim, which I'm currently running:

From that address space, shared objects wouldn't be dumped, but everything else - stack, heap - would be. Addresses will be relative to particular versions of shared objects, and with ASLR they may well not line up from process to process. A kernel dump can be configured to contain varying amounts of information, from small dumps up to everything.

What does this tell you? Exactly the state of the system or program at the time of being dumped - what it has on the stack (and so probably which function it is in). Most core dumps also contain other useful information like what is in the registers of the processor at the time of the dump.

In terms of how you work out what is going on, EIP on x86 (RIP on x64) will tell you exactly which memory address the process was executing. ebp (rbp on x64) will tell you whereabouts on the stack your function is working; you'd expect to find local variables there. You could also potentially look for system calls and their arguments in a core dump.

All of this serves to tell you what is going on with a process at that specific point in time. It isn't just a tool for pen testing either; it's possible to analyse core dumps as a form of debugging applications, particularly where running debuggers is hard or impossible. For example, you might analyse a kernel core dump to work out why the kernel crashed (what was it doing when it panicked?) rather than attaching a debugger and trying again - it also allows other people to send dumps to you for analysis.

How does this help with pen testing? The same way debugging helps with pen testing. You can see what's loaded in memory, what the current state is. You can work out what a process might be doing when you took the dump and, perhaps most crucially, any data the executable had in memory (heap, stack, etc.) will also end up in the core dump - including passwords stored as plaintext.

What do the analysis tools do? Look for patterns and structures indicative of a potential exploit (or error, if you're after debugging). A full explanation of that is probably beyond the scope of this box - there are many books dedicated to techniques for analysing memory dumps. Much of this is also system and architecture dependent; techniques for Windows x86 are different to techniques for Linux on MIPS.

As an example of something that might be indicative of an error, here's one AviD will really appreciate:

As you can see, the first space available to this function on the stack for local variables was rbp-16 and below. Above that, we've just happily clobbered a stack frame with a whole pile of data, probably causing the program to crash. However, this may well indicate an exploitable bug. Looking for patterns like that in a crash dump is what makes them useful; moreover, rip in this case will tell you exactly which memory access caused the problem, because it'll point at the offending instruction (in theory).

In case you're wondering, the offending piece of code was simply a memcpy from an unbounded buffer into a much smaller target space. The function call to memcpy was optimised out.

First of all, there are two main users of memory dumps in security - forensics and exploit-writing guys. Pen-testers, not so much - of course, it depends on the pentest ;)

The usual way tools that dump memory work is by opening the memory pseudo-device and reading all the contents to a file.

For example, with dd on Windows you would do something like dd if=\\.\Device\PhysicalMemory of=memory.bin bs=4096 (pre Windows 2003; after that you need a kernel driver)

The same applies to Linux, e.g. dd if=/dev/mem of=/home/john/mem.bin bs=1024

This is the simplest way, of course. Other ways to acquire memory are by using crash dumps as Ninefingers mentions, LiveKd dumps, getting the memory through the hypervisor if we are talking about a VM, grabbing the hibernation file if one exists, or even reading memory over FireWire or another DMA interface. And of course there's the cold boot attack too.

Having the memory in a file, you can do many things, depending on what you want.
The most common is to search for strings - password tokens, private keys and the rest - which is also the easiest.

If you want to do any more than that, usually a lot more work is needed, but it can be automated with varying degrees of success using commercial or open source tools. The main idea is that the physical memory in your dump is scattered across pages. To make any sense of it, you locate the page tables by heuristic search and try to reconstruct a logical view of the memory. Having done that, you can perform other searches and locate the structures for the various running processes, open files, network connections and so on. It really depends on what your goals are.