What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world.
That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and
voted up or down.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted
to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning,
there are separate Twitter accounts for commands that reach at least 3 and at least 10 votes - that way only the great commands get tweeted.

$ define bash
1 knock: a vigorous blow; "the sudden knock floored him"; "he took a bash right in his face"; "he got a bang on the head"
2 sock: hit hard
3 an uproarious party
4 The American comedy television series Weeds was created by Jenji Kohan and airs on premium cable channel Showtime. ...
5 Bash is a free software Unix shell written for the GNU Project. Its name is an acronym which stands for Bourne-again shell. ...

This function takes a word or a phrase as arguments and then fetches definitions using Google's "define:" syntax. The nl and perl portions aren't strictly necessary; they just make the output a bit more readable, and the function works without them.
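The function itself isn't shown in this excerpt. A minimal sketch of what it might look like, assuming Google's old "define:" result markup (the grep pattern and User-Agent are assumptions, and Google's HTML has changed since this was written, so the pattern will likely need adjusting):

define() {
  # Join the arguments into one query and turn spaces into '+'.
  local q="$*"
  # Fetch the define: results, pull the <li> entries out of the HTML
  # (GNU grep's -P provides the lookbehind), number them with nl, and
  # decode HTML entities such as &amp; with perl's HTML::Entities.
  curl -s -A "Mozilla/5.0" "http://www.google.com/search?q=define:${q// /+}" \
    | grep -Po '(?<=<li>)[^<]+' \
    | nl \
    | perl -MHTML::Entities -pe 'decode_entities($_)'
}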

man perl | grepp Pascal
Perl combines (in the author's opinion, anyway) some of the best features of C, sed, awk, and sh, so people familiar with those languages should have little difficulty with it.
(Language historians will also note some vestiges of csh, Pascal, and even BASIC-PLUS.) Expression syntax corresponds closely to C expression syntax. Unlike most Unix utilities,
Perl does not arbitrarily limit the size of your data--if you've got the memory, Perl can slurp in your whole file as a single string. Recursion is of unlimited depth. And the
tables used by hashes (sometimes called "associative arrays") grow as necessary to prevent degraded performance. Perl can use sophisticated pattern matching techniques to scan
large amounts of data quickly. Although optimized for scanning text, Perl can also deal with binary data, and can make dbm files look like hashes. Setuid Perl scripts are safer
than C programs through a dataflow tracing mechanism that prevents many stupid security holes.

This is a command that I find myself using all the time. It works like regular grep, but returns the paragraph containing the search pattern instead of just the line. It operates on files or standard input.
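The grepp function itself doesn't appear in this excerpt. A minimal sketch of a paragraph grep, using perl's paragraph mode; only the name and the behaviour come from the text above, the body is an assumption:

grepp() {
  # -00 puts perl in paragraph mode: input records are blank-line
  # separated chunks, so a match prints the whole paragraph.
  local pattern="$1"; shift
  # With no file arguments left, perl falls back to standard input.
  perl -00 -ne "print if /$pattern/" "$@"
}

Note that the pattern is interpolated straight into perl code, so patterns containing quotes or slashes would need escaping in a more robust version.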

List packages and their disk usage in decreasing order. This uses the "Installed-Size" field from the package metadata, which may differ from the space actually used: data files (think of databases) or log files, for example, may take additional space.
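The command itself isn't reproduced in this excerpt. On a Debian-based system the described listing can be produced with dpkg-query (a sketch; Installed-Size is recorded in kilobytes):

dpkg-query -W --showformat='${Installed-Size}\t${Package}\n' | sort -rn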

curl "http://www.house.gov/house/MemberWWW.shtml" 2>/dev/null | sed -e :a -e
's/<[^>]*>//g;/</N;//ba' | perl -nle 's/^\t\t(.*$)/ $1/ and
print;' > DefeatedIn2010.txt
#Fills a file called "DefeatedIn2010.txt" with the name of every member of congress.
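A brief breakdown: the sed expression is the classic cross-line HTML-tag stripper (the :a label and the N command join lines so that tags spanning line breaks are still removed), and the perl filter keeps only lines starting with two tabs, presumably where the member names sat in the page's markup at the time.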

This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but that pipeline will not produce correct results if find's output is larger than xargs's command-line buffer: when the buffer fills, xargs processes the results in more than one batch, which breaks the sorting.

Note the "-print0" switch on find and the matching "-0" switch for perl: they NUL-delimit the filenames, so names containing whitespace or newlines are handled safely, the same job "xargs -0" would otherwise do. Don't you love perl?
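The pipeline itself isn't reproduced in this excerpt; a sketch matching the description (field 9 of perl's stat is the mtime, and sort -n orders numerically on that leading field):

find . -print0 | perl -0 -ne 'chomp; print +(stat)[9], "\t", $_, "\n"' | sort -n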

Note that this pipeline can easily be modified to sort on any field produced by perl's stat operator, e.g. size, hard links or inode change time. Look at perldoc -f stat and change the '9' to the field you want: a '7', for example, sorts by file size, and a '3' sorts by number of hard links.

Use head or tail at the end of the pipeline to get the oldest or the most recent files, and awk or perl -wnla for further processing. Since there is a tab between the two fields, the output is very easy to process.
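For example, to show just the names of the five most recently modified files, building on the sketch above (cut splits on tabs by default):

find . -print0 | perl -0 -ne 'chomp; print +(stat)[9], "\t", $_, "\n"' | sort -n | tail -5 | cut -f2-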