To estimate how long a process has left to run, you need some metric to base the estimate on, e.g. the number of files copied versus the number left to copy. Even here things can get tricky: one huge file amongst lots of small files can ruin your estimate, so collecting file sizes might give a better one, but such an estimate would also have to bear the system characteristics in mind. In other words, you need to choose your metrics carefully and work out what can possibly go wrong.
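As a minimal sketch of the idea, here is one way to turn "work done so far" into a time-remaining figure by extrapolating the throughput observed so far. The function name and parameters are my own invention; the key assumption (and the weakness noted above) is that the rate stays roughly constant:

```python
import time

def eta_seconds(done_bytes, total_bytes, start_time, now=None):
    """Estimate seconds remaining from the throughput observed so far.

    Assumes a roughly constant transfer rate; one huge file or a
    slow network share will skew this badly.
    """
    if now is None:
        now = time.monotonic()
    elapsed = now - start_time
    if done_bytes <= 0 or elapsed <= 0:
        return None                       # nothing measured yet; cannot estimate
    rate = done_bytes / elapsed           # bytes per second so far
    return (total_bytes - done_bytes) / rate

# e.g. 40 MB of 100 MB copied in 8 seconds -> 12 seconds remaining
print(eta_seconds(40_000_000, 100_000_000, start_time=0, now=8))
```

Returning `None` rather than a number when there is no data yet forces the caller to display "estimating..." instead of a bogus figure.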

You can then ask yourself how precise you want your estimate to be. Do you want a rough guess or a carefully calculated estimate? If the latter, will gathering the data required for a precise estimate take more time than it is worth? For example, counting the files you have to copy produces a reasonable guess, while also collecting their sizes produces a better estimate but takes longer. And file size may or may not even be relevant: moving a file from one directory to another on the same disk is a simple edit to the directory entries, and is (probably) independent of file size, whereas copying a file from one network drive to another has entirely different overheads.
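To make the trade-off concrete, here is a sketch of the two metrics side by side (function names are mine). The first walk only counts directory entries; the second also stats every file, which is the extra cost you pay for the better estimate:

```python
import os

def count_files(root):
    """Cheap metric: number of files under root -- a rough guess."""
    return sum(len(files) for _, _, files in os.walk(root))

def total_bytes(root):
    """Better metric: total bytes under root -- more accurate for a
    network transfer, but costs an extra stat() call per file."""
    total = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total
```

Whether the second walk is worth it depends on the copy itself: for a same-disk move the answer is probably no, for a slow network transfer probably yes.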

A Monk aims to give answers to those who have none, and to learn from those who know more.