Introduction

We use the NTBackup utility to back up our files at the office. Recently, one of the backup files turned out to be corrupt. Searching for a solution, I came across William T. Kranz's freeware ntbkup utility. Although ntbkup did a great job recovering the backup file, it is a command line utility, so I decided to write a backup reader with a GUI.

Background

MTF (Microsoft Tape Format) is a linear file format: volume, file, and folder information is stored in so-called descriptor blocks that appear one after another, so reading an MTF backup file is a straightforward sequential process. A backup file contains Data Set Descriptor Blocks, which are followed by Volume Descriptor Blocks. Volume Descriptor Blocks are followed by Directory Descriptor Blocks, which are in turn followed by File Descriptor Blocks.

Each descriptor block is followed by one or more data streams, which serve various purposes: padding, checksums, long file and folder names, and so on. The data streams associated with a File Descriptor Block typically contain the actual file data.
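The sequential structure above means a reader can walk the file block by block, deciding what it is looking at from the block header. As a minimal sketch (not the article's actual parsing code): per my reading of the MTF specification, every descriptor block begins with a common header whose first four bytes are an ASCII type ID, such as "VOLB" for a Volume Descriptor Block.

```csharp
using System;
using System.Text;

static class MtfBlocks
{
    // A few common descriptor block type IDs from the MTF specification
    // (the spec defines more, e.g. CFIL, ESPB, EOTM).
    public static readonly string[] KnownTypes =
        { "TAPE", "SSET", "VOLB", "DIRB", "FILE", "ESET" };

    // Every descriptor block starts with a common header whose first four
    // bytes are an ASCII type ID; reading it is enough to decide how to
    // interpret the rest of the block.
    public static string ReadBlockType(byte[] block)
    {
        if (block.Length < 4)
            throw new ArgumentException("buffer too small for a block header");
        return Encoding.ASCII.GetString(block, 0, 4);
    }
}
```

For example, a buffer whose first four bytes are 'V', 'O', 'L', 'B' identifies a Volume Descriptor Block, so the reader knows to expect Directory Descriptor Blocks next.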

Using the code

To read a backup file, one creates a new instance of the CBackupReader class, passing the name of the backup file as an argument. Then, a catalog is created using the CCatalogNode class. The CCatalogNode class has a tree structure. The root of the tree is the backup file itself. Its child nodes represent the data sets in the backup file. Each data set node contains one (and only one) volume node. Volume nodes contain folders and files as child nodes. To extract a file from the catalog, one traverses the tree to reach the file and calls the ExtractTo method of the CCatalogNode class.
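The traversal itself is an ordinary depth-first walk over the catalog tree. Here is a minimal stand-in sketch; the Name/Children shape below is an assumption made for illustration, not the actual members of the article's CCatalogNode class.

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for a catalog node, just to illustrate the traversal.
// The real CCatalogNode class has a different (richer) interface.
class CatalogNode
{
    public string Name;
    public List<CatalogNode> Children = new List<CatalogNode>();

    public CatalogNode(string name) { Name = name; }

    // Depth-first search for a node by name anywhere under this node.
    public CatalogNode Find(string name)
    {
        if (Name == name) return this;
        foreach (var child in Children)
        {
            var hit = child.Find(name);
            if (hit != null) return hit;
        }
        return null;
    }
}
```

With the real library, once the traversal reaches the desired file node, one would call its ExtractTo method to write the file out to disk.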

Points of interest

A known limitation: Header.StreamLength is an unsigned long, but the current code reads the entire file data into memory through a BinaryReader, which (if I recall correctly) can only handle data up to 2 GB. A file data stream larger than that will certainly cause a bug, and reading entire files into memory is not a good idea in any case. This part of the code requires a major revision: instead of reading the entire file, the reader should probably just record the offset to the file data and write that data to disk when required. I will fix this in a future release when I have some free time.
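One way the revision could look is a streaming copy: seek to the recorded offset and move the data to disk in fixed-size chunks, so memory use stays constant regardless of how large the file data stream is. CopyRange below is a hypothetical helper, not part of the current release.

```csharp
using System;
using System.IO;

static class StreamExtract
{
    // Copy `length` bytes starting at `offset` from the backup file to
    // `destination` in fixed-size chunks, keeping memory use constant
    // no matter how large the file data stream is.
    public static void CopyRange(string backupPath, long offset, long length,
                                 Stream destination)
    {
        const int ChunkSize = 64 * 1024;
        var buffer = new byte[ChunkSize];
        using (var source = new FileStream(backupPath, FileMode.Open,
                                           FileAccess.Read))
        {
            source.Seek(offset, SeekOrigin.Begin);
            long remaining = length;
            while (remaining > 0)
            {
                int want = (int)Math.Min(ChunkSize, remaining);
                int read = source.Read(buffer, 0, want);
                if (read == 0)
                    throw new EndOfStreamException("backup file truncated");
                destination.Write(buffer, 0, read);
                remaining -= read;
            }
        }
    }
}
```

Because only one chunk is buffered at a time, this approach sidesteps the 2 GB BinaryReader limit entirely and also avoids holding large files in memory.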