marked as duplicate by Gilles Dec 10 '17 at 5:44



I disagree that this is a duplicate. The accepted answer explains how to loop over filenames with spaces; that has nothing to do with "why is looping over find's output bad practice". I found this question (not the other) because I need to loop over filenames with spaces, as in: for file in $LIST_OF_FILES; do ... where $LIST_OF_FILES is not the output of find; it's just a list of filenames (separated by newlines).
– Carlo Wood Feb 14 '18 at 1:58

@CarloWood - file names can include newlines, so your question is rather unique: looping over a list of filenames that can contain spaces but not newlines. I think you're going to have to use the IFS technique, to indicate that the break occurs at '\n'
– Diagon Nov 28 '18 at 5:52

@Diagon - woah, I never realized that file names are allowed to contain newlines. I use mostly (only) Linux/UNIX, and there even spaces are rare; I certainly never in my entire life saw newlines being used :p. They might as well forbid that imho.
– Carlo Wood Nov 30 '18 at 20:16

@CarloWood - filenames end in a null ('\0', same as ''). Anything else is acceptable.
– Diagon Jan 2 at 23:00

By default, the shell splits the output of a command on spaces, tabs, and newlines

Filenames could contain wildcard characters which would get expanded

What if there is a directory whose name ends in *.csv?

1. Splitting only on newlines

To figure out what to set file to, the shell has to take the output of find and interpret it somehow, otherwise file would just be the entire output of find.

The shell reads the IFS variable, which is set to <space><tab><newline> by default.

Then it looks at each character in the output of find. As soon as it sees any character that's in IFS, it thinks that marks the end of the file name, so it sets file to whatever characters it saw until now and runs the loop. Then it starts where it left off to get the next file name, and runs the next loop, etc., until it reaches the end of output.

So it's effectively doing this:

for file in "zquery" "-" "abc" ...
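
This splitting is easy to observe directly. A minimal sketch, using a literal string to stand in for find's output (the words are invented for illustration):

```shell
out='zquery - abc'
# Unquoted expansion is split on the default IFS (space, tab, newline):
for word in $out; do
    printf '<%s>\n' "$word"
done
# Prints:
# <zquery>
# <->
# <abc>
```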

To tell it to only split the input on newlines, you need to do

IFS=$'\n'

before your for ... find command.

That sets IFS to a single newline, so it only splits on newlines, and not spaces and tabs as well.

If you are using sh or dash instead of ksh93, bash or zsh, you need to write IFS=$'\n' like this instead:

IFS='
'
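
As a sketch of the whole fix (the scratch directory and file names below are invented for illustration):

```shell
dir=$(mktemp -d)                     # scratch directory for the demo
touch "$dir/a b.csv" "$dir/plain.csv"

IFS='
'                                    # split on newlines only
for file in $(find "$dir" -name "*.csv"); do
    printf '<%s>\n' "$file"          # each name comes through whole, spaces intact
done

rm -rf "$dir"
```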

That is probably enough to get your script working, but if you're interested in handling some other corner cases properly, read on...

2. Expanding $file without wildcards

Inside the loop where you do

diff $file /some/other/path/$file

the shell tries to expand $file (again!).

It could contain spaces, but since we already set IFS above, that won't be a problem here.

But it could also contain wildcard characters such as * or ?, which would lead to unpredictable behavior. (Thanks to Gilles for pointing this out.)

To tell the shell not to expand wildcard characters, put the variable inside double quotes, e.g.

diff "$file" "/some/other/path/$file"
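
3. Handling file names containing newlines

File names can also contain newlines, which the IFS trick from section 1 can't cope with. The usual fix, sketched here assuming bash and a find that supports -print0 (the paths are placeholders):

```shell
find . -name '*.csv' -print0 |
while IFS= read -r -d '' file; do
    diff "$file" "/some/other/path/$file"
done
```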

This makes find put a null byte at the end of each file name. Null bytes are the only characters not allowed in file names, so this should handle all possible file names, no matter how weird.

To get the file name on the other side, we use IFS= read -r -d ''.

Where we used read above, the default line delimiter was a newline, but here find is delimiting file names with null bytes. In bash, you can't pass a NUL character in an argument to a command (even a builtin one), but bash understands -d '' as meaning NUL-delimited. So we use -d '' to make read use the same delimiter as find. Incidentally, -d $'\0' works as well, because bash, which cannot store NUL bytes in strings, treats it as the empty string.
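
The -d '' behaviour is easy to check in isolation; here printf stands in for find -print0:

```shell
# Two NUL-terminated records, the second containing an embedded newline:
printf 'a b\0c\nd\0' |
while IFS= read -r -d '' item; do
    printf '[%s]\n' "$item"      # brackets make the record boundaries visible
done
```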

To be correct, we also add -r, which tells read not to treat backslashes in file names specially. For example, without -r, \<newline> is removed, and \n is converted into n.

A more portable way of writing this that doesn't require bash or zsh or remembering all the above rules about null bytes (again, thanks to Gilles):
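
One such form runs a small sh script once per file, passing the file name as $0 so no quoting issues arise in the outer shell (a sketch; the diff paths are placeholders):

```shell
find . -name '*.csv' -exec sh -c '
    diff "$0" "/some/other/path/$0"
' {} ';'
```

Here find substitutes each file name for {}, and sh -c receives it as its $0, which can then be double-quoted normally inside the script.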

putting while in a pipeline can create issues with the subshell created (variables in the loop block not visible after the command completes for example). With bash, I would use input redirection and process substitution: while read -r -d $'\0' file; do ...; done < <(find ... -print0)
– glenn jackman Mar 18 '11 at 1:23

Sure, or using a heredoc: while read; do; done <<EOF "$(find)" EOF. Not so easy to read however.
– Mikel Mar 18 '11 at 1:41

@glenn jackman: I tried to add more explanation just now. Did I just make it better or worse?
– Mikel Mar 18 '11 at 2:36

You don't need IFS, -print0, while and read if you use find to its full potential, as shown below in my solution.
– user unknown Mar 19 '11 at 23:10


Your first solution will cope with any character except newline if you also turn off globbing with set -f.
– Gilles Apr 4 '11 at 19:28

This script fails if any file name contains spaces or shell globbing characters \[?*. The find command outputs one file name per line. Then the command substitution `find …` is evaluated by the shell as follows:

Execute the find command, grab its output.

Split the find output into separate words. Any whitespace character is a word separator.

For each word, if it is a globbing pattern, expand it to the list of files it matches.

For example, suppose there are three files in the current directory, called foo* bar.csv, foo 1.txt and foo 2.txt.

The find command returns ./foo* bar.csv.

The shell splits this string at the space, producing two words: ./foo* and bar.csv.

Since ./foo* contains a globbing metacharacter, it's expanded to the list of matching files: ./foo 1.txt, ./foo 2.txt and ./foo* bar.csv (which happens to match its own pattern).

Therefore the for loop is executed successively with ./foo 1.txt, ./foo 2.txt, ./foo* bar.csv and bar.csv.
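
The whole failure mode is easy to reproduce in a scratch directory (same file names as in the example above):

```shell
dir=$(mktemp -d) && cd "$dir" || exit 1
touch 'foo* bar.csv' 'foo 1.txt' 'foo 2.txt'

# Unquoted command substitution: find's output is word-split,
# then each word is glob-expanded before the loop body runs.
for file in $(find . -name "*.csv"); do
    printf '<%s>\n' "$file"
done

cd / && rm -rf "$dir"
```

The loop visits the two .txt files and a nonexistent bar.csv as well as the real file.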

You can avoid most problems at this stage by toning down word splitting and turning off globbing. To tone down word splitting, set the IFS variable to a single newline character; this way the output of find will only be split at newlines and spaces will remain. To turn off globbing, run set -f. Then this part of the code will work as long as no file name contains a newline character.

IFS='
'
set -f
for file in $(find . -name "*.csv"); do …

(This isn't part of your problem, but I recommend using $(…) over `…`. They have the same meaning, but the backquote version has weird quoting rules.)

There's another problem below: diff $file /some/other/path/$file should be

diff "$file" "/some/other/path/$file"

Otherwise, the value of $file is split into words and the words are treated as glob patterns, like with the command substitution above. If you must remember one thing about shell programming, remember this: always use double quotes around variable expansions ($foo) and command substitutions ($(bar)), unless you know you want to split. (Above, we knew we wanted to split the find output into lines.)

A reliable way of calling find is telling it to run a command for each file it finds:
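
A sketch of that pattern, matching the *.csv task above (the echo shows each name as it is processed; the diff path is a placeholder):

```shell
find . -name '*.csv' -exec sh -c '
    echo "$0"
    diff "$0" "/some/other/path/$0"
' {} ';'
```

find replaces {} with each file name and passes it to sh as $0, so the name never goes through the outer shell's word splitting or globbing.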

instead of find -exec sh -c 'cmd 1; cmd 2' ";", you should use find -exec cmd 1 {} ";" -exec cmd 2 {} ";", because the shell needs to mask the parameters, but find doesn't. In the special case here, echo "$0" doesn't need to be a part of the script; just append -print after the ';'. You didn't include a question to proceed, but even that can be done by find, as shown below in my solution. ;)
– user unknown Mar 19 '11 at 23:25


@userunknown: The use of {} as a substring of a parameter in find -exec is not portable, that's why the shell is needed. I don't understand what you mean by “the shell needs to mask the parameters”; if it's about quoting, my solution is properly quoted. You're right that the echo part could be performed by -print instead. -okdir is a fairly recent GNU find extension, it's not available everywhere. I didn't include the wait to proceed because I consider that extremely poor UI and the asker can easily put read in the shell snippet if he wants.
– Gilles Mar 19 '11 at 23:59

Quoting is a form of masking, isn't it? I don't understand your remark about what is portable, and what not. Your example (2nd from bottom) uses -exec to invoke sh and uses {} - so where is my example (beside -okdir) less portable? find . -name "*.csv" -exec diff {} /some/other/path/{} ";" -print
– user unknown Mar 20 '11 at 1:05


“Masking” isn't common terminology in shell literature, so you'll have to explain what you mean if you want to be understood. My example uses {} only once and in a separate argument; other cases (used twice or as a substring) are not portable. “Portable” means that it'll work on all unix systems; a good guideline is the POSIX/Single Unix specification.
– Gilles Mar 20 '11 at 1:15

Thanks for the answer. Why are you doing it wrong if you combine find with for/while/do/xargs?
– Amir Afghani Mar 18 '11 at 14:56


find already iterates over a subset of files. Most people who show up with questions could just use one of the actions (-ok(dir), -exec(dir), -delete) in combination with ";" or + (the latter for parallel invocation). The main reason to do so is that you don't have to fiddle around with file parameters, masking them for the shell. Less important: you don't spawn new processes all the time, so less memory, more speed, and a shorter program.
– user unknown Mar 18 '11 at 21:05

Not here to crush your spirit, but compare: time find -type f -exec cat "{}" \; with time find -type f -print0 | xargs -0 -I stuff cat stuff. The xargs version was faster by 11 seconds when processing 10000 empty files. Be careful when asserting that in most cases combining find with other utilities is wrong. -print0 and -0 are there to deal with spaces in the file names by using a zero byte as the item separator rather than a space.
– Jonathan Komar Jul 5 '17 at 11:00

@JonathanKomar: Your find/exec command took 11.7 s on my system with 10,000 files; the xargs version took 9.7 s; time find -type f -exec cat {} + as suggested in my previous comment took 0.1 s. Note the subtle difference between "it is wrong" and "you're doing it wrong", especially when decorated with a smiley. Did you, for instance, do it wrong? ;) BTW, spaces in the file name are no problem for the above command and find in general. Cargo cult programmer? And by the way, combining find with other tools is fine; it's just that xargs is superfluous most of the time.
– user unknown Jul 5 '17 at 12:48

@userunknown I explained how my code deals with spaces for posterity (education of future viewers), and was not implying that your code does not. The + for parallel calls is very fast, as you mentioned. I would not say cargo cult programmer, because this ability to use xargs in this way comes in handy on numerous occasions. I agree more with the Unix philosophy: do one thing and do it well (use programs separately or in combination to get a job done). find is walking a fine line there.
– Jonathan Komar Jul 6 '17 at 7:21

This helps but it doesn't solve my problem. I still see cases where the file is being split up into multiple tokens.
– Amir Afghani Mar 18 '11 at 0:37

This answer is misleading. The problem is the for file in `find . -name "*.csv"` command. If there is a file called Hello World.csv, file will be set to ./Hello and then to World.csv. Quoting $file won't help.
– G-Man Mar 4 '15 at 19:11


“Looping through files” – that is what the question says. Your solution will output the entire ls -l output at once. It is effectively equivalent to echo "CHECKSTR `ls -l /root/somedir`".
– manatwork May 13 '13 at 7:02