Pages

Sunday, February 9, 2014

Recently, something I was working on gave me the idea of creating a program to index text files, that is, to create an index file for a text file, something like the index of a book (*), in which, for each word in the book, there is a list of the page numbers where that word occurs. The difference here is that this program creates, for each word, a list of the line numbers where the word occurs in the text file being processed.
(*) To be more specific, what I created is something like a back-of-the-book index, but for text files. I mention that because there are many types of index (Wikipedia), not just for books. In fact, I was surprised to see how many meanings and uses the word index has :-) Check the Wikipedia link in the previous sentence to see them. One type of index familiar to programmers, of course, is an array index (or list index, in Python).
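The core idea can be sketched in a few lines: a dictionary that maps each word to the list of line numbers on which it appears. This is a minimal sketch of my own, separate from the full program shown further below:

```python
# Minimal sketch: map each word to the line numbers where it occurs.
lines = ["the cat sat", "the dog ran", "cat and dog"]

index = {}
for line_num, line in enumerate(lines, start=1):
    for word in line.split():
        # setdefault creates the empty list on first sight of a word.
        index.setdefault(word, []).append(line_num)

print(index["cat"])   # -> [1, 3]
print(index["the"])   # -> [1, 2]
```

Because the lines are visited in order, each word's list of line numbers comes out already sorted.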

Here is the program, called text_file_indexer.py, with a sample input, a run, and the output shown below it. Comments in the code explain the key parts of the logic. Some improvements to the program are possible, of course; I may work on some of them over time. You can already customize the string of delimiter characters that is used to strip those characters from around words.
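The delimiter handling relies on Python's built-in str.strip(), which, when given a string argument, removes any of those characters from both ends of a word (but not from its middle). A quick illustration, using the program's default delimiter string:

```python
# str.strip(chars) removes any of the given characters from both
# ends of a string; characters in the middle are left alone.
delimiter_chars = ",.;:!?"

print("Hello,".strip(delimiter_chars))   # -> Hello
print("end?!".strip(delimiter_chars))    # -> end
print("co-op.".strip(delimiter_chars))   # -> co-op ('-' is not stripped)
```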

"""
text_file_indexer.py
A program to index a text file.
Author: Vasudev Ram - www.dancingbison.com
Copyright 2014 Vasudev Ram
Given a text file somefile.txt, the program will read it completely,
and while doing so, record the occurrences of each unique word,
and the line numbers on which they occur. This information is
then written to an index file somefile.idx, which is also a text
file.
"""
import sys
import os
import string
from debug1 import debug1
def index_text_file(txt_filename, idx_filename,
                    delimiter_chars=",.;:!?"):
    """
    Function to read txt_filename and create an index of the
    occurrences of words in it. The index is written to idx_filename.
    There is one index entry per line in the index file. An index entry
    is of the form: word line_num line_num line_num ...
    where "word" is a word occurring in the text file, and the instances
    of "line_num" are the line numbers on which that word occurs in the
    text file. The lines in the index file are sorted by the leading word
    on the line. The line numbers in an index entry are sorted in
    ascending order. The argument delimiter_chars is a string of one or
    more characters that may adjoin words in the input and should not
    be considered part of the word. The function removes those delimiter
    characters from the edges of each word before the rest of the
    processing.
    """
    try:
        txt_fil = open(txt_filename, "r")
        # Dictionary to hold words and the line numbers on which
        # they occur. Each key in the dictionary is a word and the
        # value corresponding to that key is a list of line numbers
        # on which that word occurs in txt_filename.
        word_occurrences = {}
        line_num = 0
        for lin in txt_fil:
            line_num += 1
            debug1("line_num", line_num)
            # Split the line into words delimited by whitespace.
            words = lin.split()
            debug1("words", words)
            # Remove unwanted delimiter characters adjoining words.
            words2 = [ word.strip(delimiter_chars) for word in words ]
            debug1("words2", words2)
            # Find and save the occurrences of each word in the line.
            for word in words2:
                if word in word_occurrences:
                    word_occurrences[word].append(line_num)
                else:
                    word_occurrences[word] = [ line_num ]
        debug1("Processed {} lines".format(line_num))
        if line_num < 1:
            print "No lines found in text file, no index file created."
            txt_fil.close()
            sys.exit(0)
        # Display results.
        word_keys = word_occurrences.keys()
        print "{} unique words found.".format(len(word_keys))
        debug1("word_occurrences", word_occurrences)
        debug1("word_keys", word_keys)
        # Sort the words in the word_keys list.
        word_keys.sort()
        debug1("after sort, word_keys", word_keys)
        # Create the index file.
        idx_fil = open(idx_filename, "w")
        # Write the words and their line numbers to the index file.
        # Since we read the text file sequentially, there is no need
        # to sort the line numbers associated with each word; they are
        # already in sorted order.
        for word in word_keys:
            line_nums = word_occurrences[word]
            idx_fil.write(word + " ")
            for line_num in line_nums:
                idx_fil.write(str(line_num) + " ")
            idx_fil.write("\n")
        txt_fil.close()
        idx_fil.close()
    except IOError as ioe:
        sys.stderr.write("Caught IOError: " + repr(ioe) + "\n")
        sys.exit(1)
    except Exception as e:
        sys.stderr.write("Caught Exception: " + repr(e) + "\n")
        sys.exit(1)

def usage(sys_argv):
    sys.stderr.write("Usage: {} text_file.txt index_file.txt\n".format(
        sys_argv[0]))

def main():
    if len(sys.argv) != 3:
        usage(sys.argv)
        sys.exit(1)
    index_text_file(sys.argv[1], sys.argv[2])

if __name__ == "__main__":
    main()

# EOF
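A possible simplification, as a side note of my own and not part of the program above: the if/else that builds word_occurrences can be replaced with collections.defaultdict, which creates the empty list automatically the first time a key is accessed:

```python
from collections import defaultdict

# defaultdict(list) supplies an empty list for each new key,
# so no explicit membership test is needed.
word_occurrences = defaultdict(list)

for line_num, lin in enumerate(["alpha beta", "beta gamma"], start=1):
    for word in lin.split():
        word_occurrences[word].append(line_num)

print(dict(word_occurrences))
# -> {'alpha': [1], 'beta': [1, 2], 'gamma': [2]}
```

The explicit if/else in the program makes the logic more visible to beginners, though, which is one reason to leave it as is.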

Here is a sample input text file, file01.txt, that I tested the program with:

This file is a test of the text_file_indexer.py program.
The program indexes a text file.
The output of the program is another file called an index file.
The index file is like the index of a book.
For each word that occurs in the text file, there will be a line
in the index file, starting with that word, and followed by all
the line numbers in the text file on which that word occurs.

I ran the text file indexer program with the command:

python text_file_indexer.py file01.txt file01.idx
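Each line of the resulting .idx file can also be parsed back into a (word, line numbers) pair, which could be useful for programs that consume the index. A small sketch of my own, assuming the one-entry-per-line format described in the program's docstring:

```python
# Parse one index-file line of the form: "word num num num ..."
def parse_index_line(line):
    parts = line.split()
    word, line_nums = parts[0], [int(n) for n in parts[1:]]
    return word, line_nums

print(parse_index_line("index 3 4 6 7"))   # -> ('index', [3, 4, 6, 7])
```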

And here is the output of running the program on that text file, that is, the contents of the file file01.idx: