If I have a large file and need to split it into 100-megabyte chunks, I will do

split -b 100m myImage.iso

That usually gives me something like

xaa
xab
xac
xad

And to get them back together I have been using

cat x* > myImage.iso
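When the original still exists, I can sanity-check the result by comparing checksums (sha256sum is from GNU coreutils; rejoined.iso is just a hypothetical name for the reassembled copy):

cat x* > rejoined.iso
sha256sum myImage.iso rejoined.iso

If the two hashes match, the join was byte-for-byte correct.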

Seems like there should be a more efficient way than reading through all the contents of a group of files with cat and redirecting the output to a new file. Something like just opening two files, removing the EOF marker from the first one, and connecting them, without having to go through all the contents.

Windows/DOS has a copy command for binary files. The help mentions that this command was designed to be able to combine multiple files. It works with this syntax (/b is for binary mode):

copy /b file1 + file2 + file3 outputfile

Is there something similar on Linux, or a better way to join large files than cat?

Update

It seems that cat is in fact the right and best way to join files. Glad to know I was using the right command all along :) Thanks everyone for your feedback.

Side note: better not to use cat x*, because the order of the files depends on your locale settings. Instead, start typing cat x, then press Esc and then *: you'll see the expanded order of files and can rearrange them.
– rozcietrzewiacz Nov 15 '11 at 12:33
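A minimal sketch of a locale-proof join, assuming the default xaa, xab, … suffixes: force the C locale so the glob expands in plain byte order, or list the parts explicitly.

LC_ALL=C sh -c 'cat x?? > myImage.iso'
cat xaa xab xac xad > myImage.iso

Either line does the whole job; the second just spells the order out by hand.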

@rozcietrzewiacz - can you give an example of how adjusting my locale settings would break cat x*? Wouldn't the new locale settings also affect split, so that if split and cat x* were used on the same system they would always work?
– cwd Nov 15 '11 at 14:29

"opening two files, removing the EOF marker from the first one, and connecting them - without having to go through all the contents."... sounds like you need to invent a new filesystem in order to do what you want
– JoelFan Nov 15 '11 at 16:14

@cwd: Looking at split.c in GNU Coreutils, the suffixes are constructed from a fixed array of characters: static char const *suffix_alphabet = "abcdefghijklmnopqrstuvwxyz";. The suffix wouldn't be affected by the locale. (But I don't think any sane locale would reorder the lowercase letters; even EBCDIC maintains their standard order.)
– Keith Thompson Nov 16 '11 at 2:04

.. and "catenate" derives from the Latin word "catena", which means "a chain". Concatenating is joining up the links of a chain. ... (And a bit further off-topic, a catenary curve also derives from "catena": it is the way a chain hangs.)
– Peter.O Nov 15 '11 at 17:03

Under the hood

There is no more efficient way than copying the first file, then copying the second file after it, and so on. Both DOS copy and cat do that.
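(One marginal saving is possible if you don't need to keep the parts: append the later parts onto the first one, so the first chunk's data is never copied at all. A sketch with the default split names:

cat xab xac xad >> xaa
mv xaa myImage.iso

This still copies every part after the first, for the reasons described below.)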

Each file is stored independently of other files on the disk. Almost every filesystem designed to store data on a disk-like device operates by blocks. Here's a highly simplified presentation of what happens: the disk is divided into blocks of, say, 1 kB, and for each file the operating system stores the list of blocks that make it up. Most files aren't an integer number of blocks long, so the last block is only partially occupied. In practice, filesystems have many optimizations, such as sharing the last partial block between several files or storing “blocks 46798 to 47913” rather than “block 46798, block 46799, …”. When the operating system needs to create a new file, it looks for free blocks. The blocks don't have to be consecutive: if only blocks 4, 5, 98 and 178 are free, you can still store a 4 kB file. Using blocks rather than going down to the byte level helps make finding free blocks for a new or growing file considerably faster, and reduces the problems due to fragmentation when you create or grow and delete or shrink a lot of files (leaving an increasing number of holes).
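On Linux you can observe this block structure with standard tools (stat is from GNU coreutils; filefrag ships with e2fsprogs):

stat -c 'size=%s bytes, blocks=%b of %B bytes' myImage.iso
filefrag -v myImage.iso

The first command shows the size and the allocated blocks; the second lists the extents (runs of consecutive blocks) backing the file.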

You could support partial blocks in mid-file, but that would add considerable complexity, particularly when accessing files non-sequentially: to jump to the 10340th byte, you could no longer simply jump to the 100th byte of the 11th block; you'd have to check the length of every intervening block.
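The arithmetic behind that example, assuming fixed 1 kB (1024-byte) blocks, is a simple division and remainder:

echo $((10340 / 1024))    # 10, so the 11th block
echo $((10340 % 1024))    # 100, the offset within that block

With fixed-size blocks this is a constant-time computation; with variable-size blocks it would be a walk over the block list.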

Given the use of blocks, you can't just join two files, because in general the first file ends in mid-block. Sure, you could have a special case, but only if you want to delete both files when concatenating. That would be highly specific handling for a rare operation. Such special handling can't live on its own, either, because on a typical filesystem many files are being accessed at the same time. So if you want to add an optimization, you need to think carefully: what happens if some other process is reading one of the files involved? What happens if someone tries to concatenate A and B while someone else is concatenating A and C? And so on. All in all, this rare optimization would be a huge burden.

In short, you can't make joining files more efficient without making major sacrifices elsewhere. It's not worth it.

On splitting and joining

split and cat are simple ways of splitting and joining files. split takes care of producing files named in alphabetical order, so that cat * works for joining.
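For instance, GNU split can also use numeric suffixes and a custom prefix (-d is a GNU extension; part. is an arbitrary prefix):

split -b 100m -d myImage.iso part.
cat part.* > myImage.iso

The parts come out as part.00, part.01, …, which also sort correctly for the glob.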

A downside of cat for joining is that it is not robust against common failure modes: if one of the files is truncated or missing, cat will not complain; you'll just get damaged output.
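One workaround (my own habit, not anything cat does itself) is to record checksums of the parts at split time and verify them before joining:

sha256sum x?? > parts.sha256
sha256sum -c parts.sha256 && cat x?? > myImage.iso

sha256sum -c fails if a part is missing or corrupted, so the join only runs when all the parts check out.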

There are compression utilities that produce multipart archives, such as zipsplit and rar -v. They aren't very unixy, because they compress and pack (assemble multiple files into one) in addition to splitting (and conversely unpack and uncompress in addition to joining). But they are useful in that they verify that you have all the parts, and that the parts are complete.
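As a rough sketch of the rar variant (exact flags and volume naming vary between rar versions, so treat this as an illustration; -v100m requests 100 MB volumes):

rar a -v100m myImage.rar myImage.iso
unrar x myImage.rar

unrar refuses to extract if a volume is missing or damaged, which is exactly the safety net cat lacks.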

I was just imagining using cat to display several gigabytes of code in the console, then having it captured and put into a file. That's the mental image I have for what must be happening when I use cat and redirect the output that I can't see. It just seemed like if there was a way you could open two files, connect them, and then close them, it would be more efficient than running through all of the lines of code with cat. Thanks for letting me know about the direct connection.
– cwd Nov 15 '11 at 14:16

@cwd It would be possible to design a filesystem where you could join two files that way, but that would complicate the design of the filesystem immensely. You'd optimize for that one operation at the cost of making a lot of common tasks more complicated and slower.
– Gilles Nov 15 '11 at 23:29

@Gilles - it'd be interesting to know more about the low-level details. To me, reading all of the sectors off the hard disk for several files and then dumping them back into other unused sectors on the disk seems inefficient. And I think large files must sometimes be stored across multiple blocks of free sectors, because there may not always be enough blocks side by side to store them. Therefore, theoretically you could join files into one by removing the EOF marker and pointing to the group of sectors at the start of the next file. *nix is powerful, so I wondered if there was a better way than cat.
– cwd Nov 15 '11 at 23:53

@cwd There's no “EOF marker”. No sane modern filesystem works like that, because it would prevent some byte values from occurring in files (or else require complex encodings). And even if there were an EOF marker, most of the time the file you want to append would not happen to be stored right after it.
– Gilles Nov 15 '11 at 23:59

I meant the concept of an EOF marker, not an actual EOF marker. Otherwise, if you look at the bits and bytes of a file on the hard drive, how do you know where it ends? Do you specify the length of the file at the start of it? I'm talking about a really low-level thing. Is that what you are also referring to?
– cwd Nov 16 '11 at 0:04
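For anyone wondering the same thing: the length is stored in the file's metadata (on Unix, in the inode), not as a marker inside the data. A quick demonstration with GNU coreutils, which sets a file's recorded length without writing any data blocks:

truncate -s 1G sparse.img
stat -c 'size=%s, blocks=%b' sparse.img

stat reports the full 1 GB size, but (on most filesystems) zero allocated blocks: the size lives entirely in metadata.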