That was a test program and it didn't take care of all the cases :). Thanks for fixing that issue.

I was looking at different approaches for assigning jobs to multiple threads, because I want a fixed-size queue assigned to each thread.
Since the job items are stored in a hash, I was looking for a fast way to do this slicing, as the total number of keys can be high.

Why do you want a fixed work-unit size for each thread? Given the vagaries of multitasking, some thread is sure to finish a fixed-length task before another thread doing a task of the same size. Why not just pour all your data into a Thread::Queue and have a pool of workers servicing that queue? If you have a very large amount of data, it can also be handy to use a size-limited queue; BrowserUk gives a great example here: Re^5: dynamic number of threads based on CPU utilization.
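A minimal sketch of that pattern, in case it helps — one shared Thread::Queue serviced by a small worker pool. The item names and pool size are illustrative, not from your program:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $Q = Thread::Queue->new;

# A pool of workers, all servicing the same queue.
my @pool = map {
    threads->create( sub {
        while ( defined( my $item = $Q->dequeue ) ) {
            # process $item here
            print 'tid ', threads->tid, ": $item\n";
        }
    } );
} 1 .. 4;

# Pour all the data into the queue...
$Q->enqueue( $_ ) for 'job001' .. 'job100';

# ...then signal completion with one undef per worker.
$Q->enqueue( (undef) x @pool );
$_->join for @pool;
```

Because every worker pulls its next item the moment it finishes the last one, a thread that drew a run of quick jobs simply pulls more, and no thread sits idle while another grinds through an oversized chunk.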

Now I am trying to multi-thread this, where each thread will work on a fixed-size set.

As Random_walk pointed out, splitting work into equal-sized chunks for threading is not a good strategy.

Some of your directories will contain fewer files than others, and some will contain only small files while others contain large ones; so it is easy to see that some of your threads will finish much more quickly than others, which leaves the workload unevenly distributed.

A better approach would be to make the hash shared, queue the keys of the hash (the directory names) to a Thread::Queue, and let the threads pick their work directories from there. That way, the threads become self-balancing.
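A rough sketch of that approach, assuming the hash maps directory names to whatever the workers need (the directory names, worker count, and `process_dir` are placeholders for your own data and logic):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;

# Shared hash of work items: directory name => some associated data.
my %dirs :shared = (
    '/data/a' => 1,
    '/data/b' => 1,
    '/data/c' => 1,
);

# Seed the queue with the keys, plus one undef terminator per worker.
my $workers = 4;
my $Q = Thread::Queue->new( keys %dirs, (undef) x $workers );

my @pool = map {
    threads->create( sub {
        # Each thread grabs its next directory as soon as it is free,
        # so the workload balances itself across the pool.
        while ( defined( my $dir = $Q->dequeue ) ) {
            process_dir( $dir, $dirs{ $dir } );
        }
    } );
} 1 .. $workers;

$_->join for @pool;

sub process_dir {
    my( $dir, $data ) = @_;
    print 'tid ', threads->tid, " processing $dir\n";
}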

That said, I have very profound doubts that threading your application will have any great benefit to your throughput if the directories you are backing up exist on a single physical volume. The problem is that if you have multiple threads (or processes) reading files concurrently from the same physical drive, you will likely create severe head-thrash and so, slow the overall throughput rather than increase it.

Even if your files are distributed across multiple spindles -- with SAS or raid or similar -- it is still dubious whether you will achieve huge benefits unless you could isolate the location of your files so that you could ensure that only one file from each physical unit was being read at any given time. Mostly, this is not possible as these multi-spindle setups tend to split files across multiple physical volumes transparently to the file system.
