How-To Geek

If you have an unwieldy text file that you're trying to process, splitting it into sections can sometimes help processing time, especially if you plan to import the file into a spreadsheet. Or you might just want to retrieve a particular set of lines from a file.

Enter split, wc, tail, cat, and grep (and don't forget sed and awk). Linux contains a rich set of utilities for working with text files on the command line. For our task today, we'll use split and wc.

First, let's take a look at our log file:

> ls -l access.log
-rw-r--r-- 1 thegeek ggroup 42046520 2006-09-19 11:42 access.log

We see that the file size is 42MB. That's kinda big… but how many lines are we dealing with? If we wanted to import this into Excel, we'd need to keep it under 65k lines.

Let's check the number of lines in the file using the wc utility, whose name stands for "word count".

> wc -l access.log
146330 access.log

We’re way over our limit. We’ll need to split this into 3 segments. We’ll use the split utility to do this.
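The exact invocation from the original isn't shown here, but with GNU split a per-file line limit does the job. A minimal sketch (the 60000-line limit and the access- output prefix are choices for this example, not part of the original):

```shell
# Split access.log into chunks of at most 60000 lines each.
# Output files are named access-aa, access-ab, access-ac, ...
split -l 60000 access.log access-
```

On 146,330 lines this yields two 60,000-line files plus a third holding the remaining 26,330 lines.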

We've now split our text file into 3 separate files, each containing fewer than 60000 lines, which seemed like a good number to choose. The last file contains the leftover lines. If you were going to cut this particular file in half instead, you'd have done this:

NB: bc seems to default to floating-point output. The sed invocation effectively acts as a call to floor(3), stripping the digits after the decimal point and making my version of split happy. I suspect the sed expression would need to be changed to 's/,.*//' for locales that use ',' as their decimal separator.