Category Archives: Unix

If you ever look at the definition of a specific AutoSys job, you will find that it contains attribute-value pairs (one per line), delimited by a colon ':'. I thought it would be cool to parse the job definition by creating a Python dictionary from the attribute-value pairs.

However, there are cases where the values themselves might contain ':'; in that case, you could switch back to the earlier solution. Otherwise, invoking split() as above throws ValueError: too many values to unpack.

Summary

Using f.read() reads the input file in one go, and invoking splitlines() splits the input into a list of lines, which can then be iterated over.

The for statement iterates over each line from the file, wherein the position of the first occurrence of the colon is found and used to extract the key and value by slicing; alternatively, split() with a maxsplit of 1 determines the key and value directly.
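The approach above can be sketched as follows. The job definition here is a made-up example; str.partition() locates the first colon, matching the slicing approach described above, so values that contain ':' survive intact:

```python
# Hypothetical AutoSys job definition: one attribute:value pair per line.
job_definition = """\
insert_job: sample_job
job_type: c
command: /usr/local/bin/run_batch.sh
machine: prodbox01
description: nightly load: step one
"""

job = {}
for line in job_definition.splitlines():
    # Split on the FIRST colon only, so values containing ':' are kept whole.
    key, _, value = line.partition(':')
    job[key.strip()] = value.strip()

print(job['command'])      # /usr/local/bin/run_batch.sh
print(job['description'])  # nightly load: step one
```

line.split(':', 1) would work equally well; partition() simply never raises when a colon is missing.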

Many times I come across a scenario wherein I need to split a given file into sections and write them into different files, or process them dynamically as and when those sections are found.

I did solve this problem, although I found my approach rather inefficient, so I thought of rewriting it and came up with the following generic function, which could be used by anyone for scenarios like extracting function blocks from shell/Perl/Python scripts, extracting diff blocks from git diff output, etc.

Once LB and UB are set, it becomes easy to extract the block from the input file using sed -n, as shown above. To continue iterating, it is important to keep removing the first line after every successful iteration.

sed '1d' outfile > outfile.n   # drop the first line of the remaining input
mv outfile.n outfile

As this is an infinite loop, it is important to break out of it once the input file has been completely processed.

test ! -s outfile && break   # stop once outfile is empty
done
}
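As an aside, the same idea can be expressed in Python without rewriting the input file on every iteration. A minimal sketch, assuming each section starts at a line matching a caller-supplied pattern (the pattern and sample input below are illustrative, not from the original function):

```python
import re

def split_sections(lines, start_pattern):
    """Yield lists of lines: each section starts at a line matching
    start_pattern and runs up to (not including) the next match."""
    section = []
    for line in lines:
        if re.match(start_pattern, line) and section:
            yield section
            section = []
        section.append(line)
    if section:
        yield section

# Illustrative input: two "diff blocks" as produced by `git diff`.
sample = [
    "diff --git a/foo b/foo",
    "+added line",
    "diff --git a/bar b/bar",
    "-removed line",
]

for block in split_sections(sample, r'^diff --git'):
    print(block)
```

Because it is a generator, each section can be written to its own file or processed as soon as it is found, without repeatedly trimming the input.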

Usage

Let us say you generate a diff between two git commits using the following command:

A couple of months back, I mentioned that I had started using Git at my workplace, thanks to my colleague. So far, I have been creating local repositories at work and, in the process, have spent a lot of time exploring as much as I can.

This made me use the command line extensively, and I ended up writing 3-letter aliases as follows, which helped save a lot of time.

Working on AIX servers with limited grep features sometimes makes certain scenarios difficult. For instance, I want to split lines (read from STDIN or a FILE) into words, precisely. However, without grep's -o option, I was clueless about how to get the desired results. For the past few months, I have been investing time in learning Python and using its features to complement the text-processing tasks in the shell scripts I often write for automating tasks and building productivity tools.

Code Snippet #1

import sys, re

# Split each line of STDIN into words, dropping empty strings.
for line in sys.stdin.readlines():
    listofwords = [word for word in re.split(r'\W', line) if word]
    print(listofwords)

Looking at the above code snippet, four lines of code did the trick. The features and constructs Python provides for implementing scenarios like this make it look really cool. Let us understand it quickly, before we go on to operate on those words.

sys.stdin.readlines() reads the input from STDIN until an EOF character is entered.

The for loop construct then iterates over the input lines, one by one.

The special construct on the right side of the assignment operator, a list comprehension, is equivalent to the built-in function filter() [1].

re.split() generates a list of words [2], based on the pattern given as its first argument.

The comprehension's loop iterates over the generated list, and filtering is done using the if clause.

This is done to make sure there are no empty strings in the resulting list.

filter(None, re.split(r'\W', line)) can be used as an alternative; passing None as the first argument takes care of empty strings by default, since they evaluate to false. (Note that in Python 3, filter() returns an iterator, so wrap it in list() if an actual list is needed.)
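The equivalence of the two forms can be seen side by side; the sample line here is just an illustration:

```python
import re

line = "Hello, world!  How are you?"

# List comprehension: the `if word` clause drops the empty strings
# produced when delimiters are adjacent.
words_comp = [word for word in re.split(r'\W', line) if word]

# filter(None, ...) is equivalent: empty strings are falsy, so they are
# removed. In Python 3, filter() returns an iterator, hence list().
words_filter = list(filter(None, re.split(r'\W', line)))

print(words_comp)  # ['Hello', 'world', 'How', 'are', 'you']
```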

Code Snippet #2

What if a line contains the word "Hi" and I want to replace all occurrences of "Hi" with "Hey" while the list is being generated using the above approach? To make this possible, the below code snippet imports the additional module string and invokes string.replace() to do the needful.
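The snippet itself is not shown in this excerpt; a minimal sketch of the idea follows, using the str.replace() method (string.replace() was the Python 2 module-level equivalent), with a made-up sample line:

```python
import re

line = "Hi there, how are you? Hi again!"

# Replace "Hi" with "Hey" as each word is produced by the comprehension.
listofwords = [word.replace("Hi", "Hey")
               for word in re.split(r'\W', line) if word]

print(listofwords)  # ['Hey', 'there', 'how', 'are', 'you', 'Hey', 'again']
```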

So far, I have encountered this on data warehousing projects, though it probably happens in other domains too. Anyway, suppose you have an EBCDIC file, most likely retrieved from a Mainframe system. You would then like to convert it to ASCII in order to modify it using text editors on UNIX servers, such as AIX.

I have used the following command several times for converting a file from EBCDIC to ASCII or vice versa. This is how it's done:

dd if=<ebcdic-file> of=<ascii-file> conv=ascii

Now you can start modifying the ASCII version; once done, you may convert it back to EBCDIC to be used by your application.

dd if=<ascii-file> of=<ebcdic-file> conv=ebcdic
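If dd is not available, Python's built-in codecs can perform the same translation. A minimal sketch, assuming the file uses code page cp037 (EBCDIC US/Canada) — the actual code page varies by mainframe locale, so adjust accordingly:

```python
# Round-trip between EBCDIC bytes and ASCII text, assuming cp037.
ebcdic_bytes = "HELLO MAINFRAME".encode("cp037")

# EBCDIC -> text (encode as ASCII when writing back out to a file)
text = ebcdic_bytes.decode("cp037")
print(text)  # HELLO MAINFRAME

# text -> EBCDIC
round_trip = text.encode("cp037")
print(round_trip == ebcdic_bytes)  # True
```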

If you're just replacing a particular number of bytes with an equal number of bytes containing different characters, the conversion will be smooth and the application reading the file should not have any issues.

However, I had some issues after adding/deleting records in the ASCII version, as I found once it was converted back to EBCDIC: the file was unreadable by the application, and I had difficulty reverting without a backup of the EBCDIC version.