the 3822 6.7371760973
of 2460 4.33632998414
and 1723 3.03719372466
to 1479 2.60708619778
a 1308 2.30565838181

We can see that the file consists of one row per word.
Each row shows the word itself, the number of occurrences of that
word, and the number of occurrences as a percentage of the total
number of words in the text file.
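
As a rough sketch, countwords.py performs processing along these lines, assuming the output format shown above; this is illustrative, not the actual script, and the real version may handle details such as punctuation differently:

import sys
from collections import Counter

# Read the book, split it into words, and count each one.
with open(sys.argv[1]) as reader:
    words = reader.read().lower().split()
counts = Counter(words)
total = sum(counts.values())

# Write one row per word: the word, its count, and the count as a
# percentage of all words, most frequent first.
with open(sys.argv[2], 'w') as writer:
    for word, count in counts.most_common():
        writer.write('{} {} {}\n'.format(word, count, 100.0 * count / total))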

We can do the same thing for a different book:

$ python countwords.py books/abyss.txt abyss.dat
$ head -5 abyss.dat

the 4044 6.35449402891
and 2807 4.41074795726
of 1907 2.99654305468
a 1594 2.50471401634
to 1515 2.38057825267

Let’s visualize the results.
The script plotcounts.py reads in a data file and plots the 10 most
frequently occurring words as a text-based bar plot:

$ python plotcounts.py isles.dat ascii

the ########################################################################
of ##############################################
and ################################
to ############################
a #########################
in ###################
is #################
that ############
by ###########
it ###########

plotcounts.py can also show the plot graphically:

$ python plotcounts.py isles.dat show

Close the window to exit the plot.

plotcounts.py can also create the plot as an image file (e.g. a PNG file):

$ python plotcounts.py isles.dat isles.png

Finally, let’s test Zipf’s law for these books:

$ python testzipf.py abyss.dat isles.dat

Book First Second Ratio
abyss 4044 2807 1.44
isles 3822 2460 1.55

Zipf's law predicts that the most frequent word should occur roughly twice as
often as the second most frequent, so with ratios of 1.44 and 1.55 we're not
too far off from Zipf's law.
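
As a rough sketch, testzipf.py performs a calculation along these lines; this is illustrative, not the actual script, and assumes the .dat format shown above:

import sys

print('Book First Second Ratio')
for path in sys.argv[1:]:
    # The .dat files are sorted by frequency, so the first two rows
    # hold the most frequent and second most frequent words.
    with open(path) as reader:
        first = int(reader.readline().split()[1])
        second = int(reader.readline().split()[1])
    book = path.rsplit('.', 1)[0]
    print('{} {} {} {:.2f}'.format(book, first, second, first / second))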

Together these scripts implement a common workflow:

Read a data file.

Perform an analysis on this data file.

Write the analysis results to a new file.

Plot a graph of the analysis results.

Save the graph as an image, so we can put it in a paper.

Make a summary table of the analyses.

Running countwords.py and plotcounts.py at the shell prompt, as we
have been doing, is fine for one or two files. If, however, we had 5
or 10 or 20 text files,
or if the number of steps in the pipeline were to expand, this could turn into
a lot of work.
Plus, no one wants to sit and wait for a command to finish, even just for 30
seconds.

The most common solution to the tedium of data processing is to write
a shell script that runs the whole pipeline from start to finish.

Using your text editor of choice (e.g. nano), add the following to a new file named
run_pipeline.sh.
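
The script simply collects, in order, the commands we have been running, with the output of the Zipf's law test redirected to results.txt:

python countwords.py books/isles.txt isles.dat
python countwords.py books/abyss.txt abyss.dat
python plotcounts.py isles.dat isles.png
python plotcounts.py abyss.dat abyss.png
# Generate summary table
python testzipf.py abyss.dat isles.dat > results.txt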

This script allows us to type a single command, bash run_pipeline.sh, to
reproduce the full analysis.

It prevents us from repeating typos or mistakes.
You might not get it right the first time, but once you fix something
it’ll stay fixed.

Despite these benefits, it has a few shortcomings.

Let’s adjust the width of the bars in our plot produced by plotcounts.py.

Edit plotcounts.py so that the bars are 0.8 units wide instead of 1 unit.
(Hint: replace width = 1.0 with width = 0.8 in the definition of
plot_word_counts.)
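
If you want to see that change in context, the relevant part of plot_word_counts probably looks something like the sketch below. Only the function name and the width variable come from the hint above; the rest, in particular the use of matplotlib's bar, is an assumption for illustration:

import matplotlib.pyplot as plt

def plot_word_counts(counts, limit=10):
    # counts is assumed to be a list of (word, count, percentage)
    # tuples, as produced by countwords.py.
    words = [word for word, _, _ in counts[:limit]]
    frequencies = [count for _, count, _ in counts[:limit]]
    positions = range(len(words))
    width = 0.8  # previously 1.0; narrower bars leave a gap between them
    plt.bar(positions, frequencies, width)
    plt.xticks(positions, words)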

Now we want to recreate our figures.
We could just run bash run_pipeline.sh again.
That would work, but it could also be a big pain if counting words takes
more than a few seconds.
The word counting routine hasn’t changed; we shouldn’t need to recreate
those files.

Alternatively, we could manually rerun the plotting for each word-count file.
(Experienced shell scripters can make this easier on themselves using a
for-loop.)

for book in abyss isles; do
    python plotcounts.py $book.dat $book.png
done

With this approach, however,
we don’t get many of the benefits of having a shell script in the first place.

Another popular option is to comment out a subset of the lines in
run_pipeline.sh:

# USAGE: bash run_pipeline.sh
# to produce plots for isles and abyss
# and the summary table for the Zipf's law tests.
# These lines are commented out because they don't need to be rerun.
#python countwords.py books/isles.txt isles.dat
#python countwords.py books/abyss.txt abyss.dat
python plotcounts.py isles.dat isles.png
python plotcounts.py abyss.dat abyss.png
# Generate summary table
# This line is also commented out because it doesn't need to be rerun.
#python testzipf.py abyss.dat isles.dat > results.txt

Then, we would run our modified shell script using bash run_pipeline.sh.

But commenting out these lines, and subsequently uncommenting them,
can be a hassle and source of errors in complicated pipelines.

What we really want is an executable description of our pipeline that
allows software to do the tricky part for us:
figuring out what steps need to be rerun.

Make was developed in 1977 by Stuart Feldman, then a summer intern at
Bell Labs, and remains in widespread use today. Make can execute the
commands needed to run our analysis and plot our results. Like shell
scripts, it allows us to execute complex sequences of commands via a
single shell command. Unlike shell scripts, it explicitly records the
dependencies between files (which files are needed to create which
other files) and so can determine when to recreate our data files or
image files if our text files change. Make can be used for any
commands that follow the general pattern of processing files to create
new files, for example:

Run analysis scripts on raw data files to get data files that
summarize the raw data (e.g. creating files with word counts from book text).
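
As a preview of where this is heading, a dependency in Make is written as a rule: a target file, the files it depends on, and the action that rebuilds it. Here is a minimal sketch using the files from our pipeline (Make's syntax, including the requirement that the action line start with a tab, is covered in the following sections):

# isles.dat depends on books/isles.txt; Make reruns countwords.py
# only if books/isles.txt has changed since isles.dat was built.
isles.dat : books/isles.txt
	python countwords.py books/isles.txt isles.dat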

There are now many build tools available, for example Apache
ANT, doit, and nmake for Windows. There are also tools that
generate build scripts for use with these and other build tools,
e.g. GNU Autoconf and CMake. Which is
best for you depends on your requirements, intended usage, and
operating system. However, they all share the same fundamental
concepts as Make.

Why Use Make if it is Almost 40 Years Old?

Today, researchers working with legacy codes in C or FORTRAN, which
are very common in high-performance computing, will very likely
encounter Make.

Researchers are also finding Make useful for implementing
reproducible research workflows, automating data analysis and
visualisation (using Python or R), and combining tables and plots
with text to produce reports and papers for publication.

Make’s fundamental concepts are common across build tools.

GNU Make is a free, fast, well-documented, and very popular
Make implementation. From now on, we will focus on it, and when we say
Make, we mean GNU Make.

Key Points

Make allows us to specify what depends on what and how to update things that are out of date.