I have written a FUSE filesystem using rust-fuse. I was using std::fs::remove_dir_all to test that files in my filesystem can be removed properly. This currently fails for large directories, where the number of files requires multiple readdir calls. I notice that remove_dir_all's helper function sys_common::fs::remove_dir_all_recursive is implemented by iterating over the directory entries and performing deletions while iterating. The iterator is acquired by calling fs::read_dir.

This approach seems a bit fragile: POSIX does not seem to guarantee that it works; see for instance this SO question.

In my case, I first receive one readdir call which returns the first N entries. Then I get N file deletions in that directory before another readdir call arrives with offset N. In my current filesystem implementation the offset maps directly to an index into the entry list, and since the directory now contains N fewer files, the offset is no longer valid and many files are missed during deletion.

I wonder if you think this should be considered a bug in Rust, or if I’m missing something. Thanks!

Considering that POSIX says file deletions may invalidate directory stream state (a DIR*), I think this is a bug. I don't know what the correct behavior should be if files are being added during remove_dir_all. Do you think the code should loop until it reaches a convergent state?

Thank you for your input. Regarding a solution, I saw a suggestion to restart iteration from the beginning of the directory after each “batch”. The downside is that this approach wouldn't work with the current iterator, which doesn't expose batch boundaries.

What do you want the behavior to be in the pathological case where entries are created faster than they are removed? Should remove_dir_all just hang, or should it return without having removed everything?