Many of the solutions below hope that the filenames can "fit" on one du -ch command line, which may not be the case. They would then print several partial results instead of the global sum. See my solution for an alternative which should work in "all" cases (and is portable, as it doesn't depend on GNU find, GNU xargs, or even on recent find options such as -printf).
– Olivier Dulac Dec 5 '13 at 14:48

Those other solutions are of course neat, though, and will work in many cases (the command line can be huge, especially on recent systems!). But "ymmv", and huge directories occur too...
– Olivier Dulac Dec 5 '13 at 14:51

du (disk usage) counts the space files take up. Pass the files you found to it and direct it to summarize (-c) and print in a human-readable format (-h) instead of byte counts. You will then get the sizes of all the files, concluded with a grand total. If you are only interested in this last line, you can tail for it.

To also handle spaces in filenames, the delimiter that find prints and xargs expects is set to the null character instead of the usual whitespace.

find -maxdepth 2 -type f -print0 | xargs -0 du -ch | tail -n1

If you expect to find so many files that they burst the maximum number of arguments, xargs will split them into multiple du invocations. You can then work around this by replacing tail with a grep that only shows the summarizing lines.
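For example (a sketch; it assumes du's per-invocation summary lines end in whitespace followed by the word "total"):

find -maxdepth 2 -type f -print0 | xargs -0 du -ch | grep -E '[[:space:]]total$'

Note that with several du invocations this shows one total line per invocation rather than a single grand total, and the human-readable sizes cannot simply be added together afterwards.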

+1 It will fail for files with a space in their name, though.
– Joseph R. Dec 5 '13 at 14:31

It will fail too if the list of files is extremely long. You're literally expanding the entire list of files from the $(find ..) command, and passing it as args to du -ch. But in a pinch this is completely usable! Also suffers from spaces and non-printables.
– slm♦ Dec 5 '13 at 14:40

It now handles spaces, and there is also a hint about an overwhelming argument list.
– XZS Dec 5 '13 at 15:04

1) we print each file's "-ls" output, FOLLOWED by a "\000" character (on the next line, but that's not a problem, see step 2)
2) we get rid of everything non-ASCII-printable (including '\t' and '\n', but we do keep the \000 in addition to the "regular" printable ASCII, as we need it to know where each file's line ends!). That way, filenames no longer contain any quirks (no '\n', no '\t', no ';', etc). We do keep the spaces too, as we need those to find the 7th field of "-ls", i.e. the filesize
3) we translate the added '\000' into a '\n' (step 2 got rid of those too, in case some filenames contained them as well!)
4) then we sum the 7th column to get the final size in bytes (the whole pipeline is sketched below)
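Reconstructed from these steps (a sketch on my part, the exact invocation is an assumption; one pipeline stage per step):

# 1) one "-ls" line per file, each followed by a NUL byte
# 2) keep only printable ASCII plus NUL (drops any '\n', '\t', ... hidden in filenames)
# 3) turn each NUL back into a newline, so there is exactly one record per file
# 4) sum the 7th field of the "-ls" output, i.e. the file size in bytes
find . -maxdepth 2 -type f -ls -exec printf '\000' \; \
  | tr -cd '[:print:]\000' \
  | tr '\000' '\n' \
  | awk '{ sum += $7 } END { print sum }'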

I do it this way to: 1) avoid limitations on the number of filenames; 2) avoid any problem with any kind of filenames (they cannot contain a '\000' by design); 3) portability: on many systems you don't have GNU find but a legacy one [mine doesn't even have -printf ... otherwise I could simply output the filesize only...]
– Olivier Dulac Dec 5 '13 at 14:19

the steps 1), 2), 3) and 4) also correspond to the different lines in the command
– Olivier Dulac Dec 5 '13 at 14:50

@JamesYoungman: I was taking the OP's options to better reflect his/her needs. But indeed, I don't have it on some of my systems. Apart from that option, the rest should work on "any" unix system.
– Olivier Dulac Dec 10 '13 at 9:14

This is a simple way that handles whatever odd file names may be found:

find . -maxdepth 2 -type f -exec du -ch {} + | grep -w "total"

If there is a really large number of files under the current directory, you might have more than one total line displayed. There might also be unwanted total lines if some file names contain an isolated "total", e.g. a file named "Grand total file.txt".
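One way to sidestep both issues (a sketch and an assumption on my part, not something claimed above: it uses plain du -c, whose summary line has the literal word "total" as its second field, with sizes in du's default block units):

find . -maxdepth 2 -type f -exec du -c {} + | awk '$2 == "total" { sum += $1 } END { print sum }'

Because find prints every path with a leading "./", a file named "total" shows up as "./total" and never matches the bare word, and the per-invocation totals are summed into a single figure.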