If you want a new file with the edits, you'll have to get a little creative with pipelines and some of the tools for joining lines. You could also replace the ed script with an ex script, which allows an easier "Save as"-style command, but how portable an ex script ends up being depends on how carefully its author writes for portability. ed scripts, on the other hand, are pretty darn universal.

The problem is, sed doesn't work like ed for editing files: you end up with $ sed -e 'expr' infile > outfile instead of simply editing the file in place, and the only time you can rely on sed editing a file in place is when it is GNU sed or another implementation that supports a comparable -i option.

Which means that, for portability reasons, if you're editing in place, ed and perl are your winners.

My take on this is that the in-place options on most utilities are something of a misnomer. The data is written to a scratch file, then copied back onto the original -- one does not get something for nothing. This is true of "-i" on GNU sed, as well as of the interactive editors ed/ex. So the in-place description refers to the result rather than the method.
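One way to see this for yourself, assuming GNU sed and that ls -i reports inode numbers (the file scratch.txt is invented for the demo):

Code:

```shell
# Make a throwaway file and note its inode.
printf 'abc\n' > scratch.txt
before=$(ls -i scratch.txt | awk '{print $1}')

# "In-place" edit: GNU sed writes a temp file and renames it over the original.
sed -i 's/abc/xyz/' scratch.txt

after=$(ls -i scratch.txt | awk '{print $1}')
# The inode number changes, betraying the scratch-file-and-rename method.
echo "inode before: $before, after: $after"
```

The two inode numbers differ, even though from the outside the file was simply "edited in place".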

There are times when I use the interactive editors ex/ed in scripting mode, but it's far more often as a demonstration that it can be done, rather than to do it in practice.

The operation of ed/ex compared to sed/awk is different in the sense that -- at least with sed -- most often a line is read in, operated upon, and written out. The ed/ex editors will read in the entire file (possibly making an index of the locations of the lines on a scratch file), so that the memory demands can be significant.
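That difference shows up most clearly when the input is unbounded: sed can edit a stream it could never hold in memory, while ed/ex would try to read it all first. A small sketch, using yes(1) as an endless input source:

Code:

```shell
# sed edits each line as it streams past and quits after line 3;
# the input is infinite, so a whole-file editor could never even start.
yes hello | sed -e 's/hello/hi/' -e '3q'
```

This prints three lines of "hi" and exits, never buffering more than a line at a time.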

In most of the situations I have seen, sed/awk is far more often used than is ed/ex for scripting tasks. However, when the file is small enough and certain requirements are present -- such as addressing a line before a line that matches a pattern -- then ed/ex can be quite useful.
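A sketch of that "line before the match" case, assuming ed is installed; lines.txt and its contents are invented for illustration:

Code:

```shell
# Sample file: three lines.
printf '%s\n' alpha beta gamma > lines.txt

# /beta/-1 addresses the line just before the first line matching "beta";
# sed has no such backward-relative address.
printf '%s\n' '/beta/-1s/alpha/ALPHA/' w q | ed -s lines.txt

cat lines.txt
```

Afterwards lines.txt reads ALPHA, beta, gamma -- the edit landed one line above the match.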

There are adherents of sed on comp.unix.shell, but I think the flexibility and speed of GNU awk makes it a tool that one finds oneself reaching for more often than for sed. For the everyday tasks of dealing with fields of data, I have not found a more useful utility than awk.
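For instance, the sort of everyday field work meant here takes one line of awk; the sample data is made up:

Code:

```shell
# Two records, two whitespace-separated fields each.
printf '%s\n' 'alice 42' 'bob 7' > scores.txt

# $1 and $2 are the fields of each record; print the name and double the score.
awk '{ print $1, $2 * 2 }' scores.txt
```

which prints each name with its score doubled, no explicit splitting or looping required.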

For mimicking the data-tool-connecting philosophy of *nix, I have found perl better, in that it deals with command-line options better than awk does. For example, the Getopt::Euclid module will automatically produce documentation and an option parser from a single set of documentation sequences (POD mark-up). Externally, then, one could ask for

Code:

perl-script-name --man

to get man-style documentation, and the code will also parse command-line options.

Quote:

Originally Posted by drl

My take on this is that the in-place options on most utilities are something of a misnomer. The data is written to a scratch file, then copied back onto the original -- one does not get something for nothing. This is true of "-i" on GNU sed, as well as of the interactive editors ed/ex. So the in-place description refers to the result rather than the method.

I fully agree.

Anotha' (similar) awk solution:

Code:

$ cat file
1
3
4
$ awk 'NR==2{$0=$0" 2"}1' file
1
3 2
4

__________________
The best way to learn UNIX is to play with it, and the harder you play, the more you learn.
If you play hard enough, you'll break something for sure, and having to fix a badly broken system is arguably the fastest way of all to learn. -Michael Lucas, AbsoluteBSD

Point being, you have to write your own w command in shell script when using sed, and some people choose to do so for awk as well, which means dealing with temporary files by hand. For those who think it's simple: $ command file > file.bak && mv file.bak file is not perfect for every situation.
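The hand-rolled "w" usually ends up looking something like this sketch (mktemp and the file name infile are assumptions for the demo); even done carefully, it still loses hard links and can behave differently when the temp file sits on another filesystem:

Code:

```shell
# Sample input to edit.
printf 'foo\n' > infile

# A private scratch file avoids clobbering anything that already exists.
tmp=$(mktemp) || exit 1

# Only replace the original if the edit itself succeeded -- that && is
# the difference between a safe script and a truncated file.
sed 's/foo/bar/' infile > "$tmp" && mv "$tmp" infile

cat infile
```

Every script that wants in-place behaviour from sed has to repeat some variation of this dance.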

I think the point drl was getting at is that just because those moving parts are "behind the scenes" doesn't mean they don't exist. Is it harder to screw with it and mess stuff up if you can't see it? Sure...but that doesn't mean a well-written script is more prone to failure just because it doesn't hide the steps from you.

Quote:

I think the point drl was getting at is that just because those moving parts are "behind the scenes" doesn't mean they don't exist. Is it harder to screw with it and mess stuff up if you can't see it? Sure...but that doesn't mean a well-written script is more prone to failure just because it doesn't hide the steps from you.

It's a question of responsibility: do you trust your copy/paste drill over an existing program? Why reimplement the exact same moving parts in shell, for each script that needs such a change, when you have a tried and tested tool for the job that has existed for over 35 years? It's just a waste of time. Coding hours should be spent on things that actually contribute to the program, not on substituting for learning the standard-issue stuff.

Quote:

Originally Posted by "7th Commandment of C Programming"

Thou shalt study thy libraries and strive not to reinvent them without cause, that thy code may be short and readable and thy days pleasant and productive.

From the perspective of shell scripting, your utilities are much like the libraries of a C program.