I happen to know exactly what caused this (this time), but just the other day I was asked by somebody to help them figure out something similar. So the topic seems worthy of a quick blog post.

Suppose one collects all the archive names that contribute to the shared library in question (for DB2 builders one can relink the library having done an ‘export VERY_VERBOSE=true’ on the command line).

A good way to get this is to redirect that build output to a file, grab just the link line and run something like:

So, the archive supplying a reference to this symbol is ‘alibsqe.a’. This completes the first pass at localizing where the link error is coming from. One can continue in the same brute force way to find the object file within the archive, or solve it by source inspection once one knows better where to look. In an extremely large source base, where nobody really knows any sizeable portion of it, narrowing down the problem this way is often an important first step.

A while back I assembled the following shell tips and tricks notes for an ad-hoc ‘lunch and learn’ session at work. For some reason (probably for colour) I had made these notes in Microsoft Word instead of plain text. That made them of limited use for reference, since Word mucks up the quote characters and the text can’t be cut and pasted cleanly. Despite a few things that are work centric (references to ClearCase and our source code repository directory structure), there’s enough here that is generally applicable that the converted-to-text version makes sense to have available as a blog post.

Variables

You will have many predefined variables when you login. Examples could include

$HOME home dir.
$EDITOR preferred editor.
$VISUAL preferred editor.
$REPLYTO where mail should be addressed from.
$PS1 What you want your shell prompt to look like.
$TMPDIR Good to set to avoid getting hit as badly when /tmp fills up.
$CDPATH good for build tree paths. Example: CDPATH=.:..:/home/hotelz/peeterj:/vbs/engn:/vbs/test/fvt/standalone:/vbs/common:/vbs/common/osse/core ; with this set, one can run ‘cd sqo’ and go right to that component dir.
$1 First argument passed to a shell script (or shell function).
$2 Second argument, and so on.
$* All arguments that were passed to a shell script
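A tiny (made-up) shell function showing the positional parameters in action:

```shell
# showargs is a demo function: $1 and $2 are the first two arguments, $* is all of them
showargs () {
    echo "first: $1"
    echo "second: $2"
    echo "all: $*"
}

showargs one two three
# prints:
# first: one
# second: two
# all: one two three
```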

Wildcards

All files starting with an 'a', and ending with a 'b'

# ls a*b

All files of the form 'a'{char}'b'

# ls a?b

Quotes

There are three different kinds: single quotes, double quotes, and backquotes. This is one of the most important things to know for any "shell programming".

Single quotes

Variables and wildcards are NOT expanded when contained in a set of single quotes. Example:
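The example that was here didn’t survive the conversion; a minimal stand-in:

```shell
a=foo
echo '$a *'
# prints the literal text: $a *
# (compare: echo $a * would print foo followed by the filenames in the current dir)
```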

Double quotes

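Variables and wildcards ARE expanded inside double quotes. The original example was lost in conversion, but from the output quoted below it must have been essentially:

```shell
a=foo
b="goo boo"
echo "$a $b"
# prints: foo goo boo
```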
You don't have to double quote something to get this sort of wildcard and variable expansion, so you could write:

# echo $a $b

and the result will be the same:

foo goo boo

There is a difference though, namely, echo will treat this as three arguments, because the command is expanded before the final execution. This can be important when you want something with spaces to be treated as a single argument. Example:
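For instance, with a little made-up helper that just reports how many arguments it was given:

```shell
# countargs is a demo helper: it echoes the number of arguments it received
countargs () { echo "$# args" ; }

b="goo boo"
countargs $b      # unquoted: the shell splits on the space
countargs "$b"    # quoted: one argument that happens to contain a space
# prints:
# 2 args
# 1 args
```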

Backquotes (and the alternate $( ) syntax) are useful when you want to run a command inside of a command.
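For example, the command inside the backquotes runs first, and its output is substituted into the outer command line:

```shell
echo "I am `whoami`, and it is `date`"
# equivalently, with the $( ) syntax:
echo "I am $(whoami), and it is $(date)"
```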

Other Special Shell Characters

~ your home dir.
; command separator
\ backslash (escape). When you want to use a special character as is, you either have to single quote it, or use an escape character to let the shell know what you want.

The for loop

If you have the quotes and variables mastered, this is probably the next most useful construct for ad-hoc command line stuff. We use computers for repetitive stuff, but it's amazing how little people sometimes take advantage of this.

By example:

# for i in `grep : /tmp/something` ; do echo $i ; done

Here, i is the variable you name, and you can reference it in the loop as $i.

What's notable here is not the perl itself, but the fact that to run some of these commands required passing a pile of shell special characters. In order to pass these all to perl unaltered, it was required to use single quotes, and not double quotes.

Common to grep, sed, and perl is a concept called a regular expression (or regex). This is an extremely valuable thing to get to know well if you do any programming, since there's often a lot of text manipulation required as a programmer. Going into detail on this topic will require its own time.

Shell Aliases

These are one liner "shortcuts". ksh/bash example:

alias la='ls -a'

Shell Functions

Multiline shortcuts. ksh/bash example:

function foo
{
    echo boo
    echo foo
}

This is similar to putting the commands in their own file and running that as a script, and can be used as helper functions in other scripts or as more complex "alias"es.

Calling this with ‘ddda 0’ will attach the ddd debugger to the db2sysc process that db2pd reports to be node 0.

Except for the perl fragment, which is basically a combined 'grep' and 'sed', this example uses many of the things that have been explained above (variables, embedded command, single quotes to avoid expansion and for grouping arguments).

I was asked how to use grep to select everything in a file starting with a pattern, and ending with a different one. The file is our diagnostic log, and if it originated with one of our system testers it could be massive (a few hundred thousand lines long). GNU grep can be used for this. You could do something like:
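The command was lost in conversion, but from the explanation that follows it was of this shape (the demo file and the START/END patterns here are made up):

```shell
# demo log standing in for the (huge) diagnostic log
printf 'noise\nSTART of section\nline one\nline two\nEND of section\nmore noise\n' > /tmp/db2diag.demo

grep -A 9999999 'START' /tmp/db2diag.demo | grep -B 9999999 'END'
# prints the four lines from 'START of section' through 'END of section'
```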

Here 9999999 is some number of lines that is guessed big enough to contain all the lines of interest (not known ahead of time), so the command says “give me everything after the expression, and then give me everything before the other expression in that output”

The -n flag says to run the whole script as if it is in a ‘while (<>) { … }’ loop. Until the initial pattern is seen $foundIt is false, and nothing will be printed, and we bail when the second pattern is seen. Note that this relies on perl’s lazy variable initialization, since $foundIt is undefined (and thus false) until modified.

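The loop in question was dropped in the conversion; it was the classic search-and-replace-in-place pattern, something like this (placeholder old/new patterns, demo files):

```shell
# demo files
mkdir -p /tmp/loopdemo && cd /tmp/loopdemo
printf 'old stuff\nkeep this\n' > one.txt
printf 'nothing here\n'         > two.txt

for i in `grep -l old *.txt`
do
    sed 's/old/new/g' $i > $i.tmp
    mv $i.tmp $i
done

head -1 one.txt
# prints: new stuff
```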
Okay, it’s not a one-liner as above, since I’ve formatted this with newlines instead of semicolons; but when tossing off throwaway bash/ksh for-loop stuff like this I usually do it as a one-liner. This bash loop is easy enough to write, but messy, and also fairly easy to get wrong. I’m tired of doing this over and over again.

It seemed to me that it was time to code up something that I can tweak for automated tasks like this, and wrote the perl script below that consumes grep -n output (i.e. file:lineno:stuff output), and makes the desired replacements, whatever they are. I’ve based this strictly on the grep output because unrestricted replacements could be dangerous, and I wanted to visually verify that all the replacement sites were appropriate.

This little script, while certainly longer than the one-liner method, is fairly straightforward and easy to modify for other similar ad-hoc replacement tasks later. However, I have to wonder if there’s an easier way?

but that also means I have to know and enumerate all such expressions for what I’m interested in. Since I’m on Linux my grep is GNU grep, so I considered using ‘grep -B N’ to show N lines of text preceding a match, but this also outputs the text I’m not interested in, so doesn’t really work.

Here’s what I came up with (I’m sure there’s lot of ways, some perhaps easier, but I liked this one). It uses the perl filtering option -n once again, to convert the entire script into a filter (specified here inline with -e instead of in a file) :

Now that I’ve learned how to use evaluated replacement expressions in perl, it’s become my new favorite tool. Here’s today’s application: using it as a query engine to figure out all the calls of a particular function that I want to look at in the editor, and probably modify.

I’m interested in editing a subset of the function calls for the module in a given directory. I can find them and their line numbers with:

grep -n 'printIt.*BLAH' *.C

But there’s 90 of these function calls, and I know most don’t need alteration. If I grep with context, say grabbing 20 lines of context after the search expression, I can see which of these are of interest:

grep -nA20 'printIt.*BLAH' *.C | tee grep.out

I really want to weed out all the calls that contain certain additional expressions, keeping the rest. Illustrating by example, a fragment of the grep output above had in it:

Any of these calls that happen to have INFORMATIONAL or DUMPIT strings in them aren’t of interest, so I take my pre-canned evaluated regex perl script (see previous posts for an explanation) and modify it slightly.

Piping this output through sort doesn’t do what’s desired, since that sorts on the whole line, alphabetically. A sort on what follows after the space would do the trick. This is a common requirement, and only requires one extra sort parameter.
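With a demo standing in for the real grep output (the original example wasn’t preserved), that one extra parameter looks like:

```shell
mkdir -p /tmp/sortdemo && cd /tmp/sortdemo
echo 'timestamp: 2006-03-01' > b.txt
echo 'timestamp: 2006-01-15' > a.txt
echo 'timestamp: 2006-02-20' > c.txt

grep timestamp *txt | sort -k2 | head -3
# prints:
# a.txt:timestamp: 2006-01-15
# c.txt:timestamp: 2006-02-20
# b.txt:timestamp: 2006-03-01
```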

The -k2 option on sort says to sort on the second key. Spaces delimit the sort keys by default. You could change the delimiter if required, but it doesn’t really help here. An example of doing so and getting the same results would be:

# grep timestamp *txt | sort -k3 -t: | head -3
...

This says sort on the third key, and delimit the sort fields by colons instead of spaces. You can get really fancy with the sort command line options, specifying secondary and higher sort keys, and different sort modifiers for different keys (numeric for some, increasing or decreasing, …), but knowing how to use just -k and -t will do the trick in many instances. If you want really fancy sorts you are probably better off using perl anyhow, where you can write sort subroutines and have the ultimate control.

If you use vi as your editor (and by vi I assume vi == vim), then you want to know about the vim -q option, and grep -n to go with it.

This can be used to navigate through code (or other files) looking at matches to patterns of interest. Suppose you want to look at calls of strchr() that match some pattern. One way to do this is to find the subset of the files that are of interest. Say:
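The example here was lost in conversion; it would have been a grep -l to get the candidate files, along these lines (demo files standing in for the real source tree):

```shell
mkdir -p /tmp/vimdemo && cd /tmp/vimdemo
printf 'p = strchr(s, c) ;\n' > sqlecatd.C
printf 'no calls here\n'      > other.C

# list just the files containing the pattern
grep -l strchr *.C
# prints: sqlecatd.C
```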

and edit all those files, searching again for the pattern of interest in each file. If there aren’t many such matches, your job is easy and can be done manually. Suppose however that there’s 20 such matches, and 3 or 4 are of interest for editing, but you won’t know till you’ve seen them with a bit more context. What’s an easy way to go from one to the next? The trick is grep -n plus vim. Example:
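Concretely, the trick looks like this (the grep half is runnable; vim -q itself is interactive, so it is shown as a comment; filenames are made up):

```shell
mkdir -p /tmp/qdemo && cd /tmp/qdemo
printf 'q = strchr(t, d) ;\n'      > sqleatcb.C
printf 'x ;\np = strchr(s, c) ;\n' > sqlecatd.C

grep -n strchr *.C > /tmp/matches
cat /tmp/matches
# prints:
# sqleatcb.C:1:q = strchr(t, d) ;
# sqlecatd.C:2:p = strchr(s, c) ;

# vim -q /tmp/matches     # opens the editor at the first match
```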

vim will bring you right to the first match (line 710 of sqlecatd.C in the case I was working through). To go to the next match, which in this case was also in the next file, use the vim command

:cn

You can move backwards with :cN, and see where you are and the associated pattern with :cc

vim -q understands a lot of common filename/linenumber formats (and can probably be taught more but I haven’t tried that). Of particular utility is compile error output. Redirect your compilation error output (from gcc/g++ for example) to a file, and when that file is stripped down to just the error lines, you can navigate from error to error with ease (until you muck up the line numbers too much).

A small note. If you are grepping only one file, then the grep -n output won’t have the filename and vim -q will get confused. Example:
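Reconstructing the example: with a single file, grep omits the filename prefix, and the standard workaround is to add /dev/null to the file list, so that grep always has more than one file and prints the name:

```shell
mkdir -p /tmp/onedemo && cd /tmp/onedemo
printf 'x ;\ns = strchr(p, c) ;\n' > only.C

grep -n strchr only.C
# prints: 2:s = strchr(p, c) ;    (no filename, which confuses vim -q)

grep -n strchr only.C /dev/null
# prints: only.C:2:s = strchr(p, c) ;
```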