So how do I get an ifstream to start at the top of the file again?

Originally Posted by grumpy

open and closing is simpler than seek and clear

The rest of that blurb is an "I know I'm wrong but I'm not admitting it" answer if ever I saw one. There are multiple problems with that solution:

- In the best case, any content buffering that has taken place at the fstream level has to be purged and reloaded; in the worst case, any OS cache will be too.
- The function now has to take an extra piece of data (the file name) that has nothing to do with its actual purpose.
- You have to turn your relative path into an absolute one, or synchronize on the current directory, to ensure you keep opening the same file.
- Your function now has to take an fstream parameter directly rather than a base (i|o)stream, making it *less* flexible.


Not to mention the number of times the drive heads end up making long traverses across the platter as the file is flushed to disk and then re-opened...

I can't speak for all implementations, Grumpy... but there is also a speed issue here. On Windows, at least, seeking to file position 0 is one heck of a lot faster than closing and re-opening the file.

I've heard claims like that before, but never seen anyone provide solid data to back up their claims. The data I have seen (with measurements on such things) were inconclusive, particularly for larger files and also very sensitive to system configuration (eg sizes of various caches, relative performance of different types of operation).

In any event, logic says that the impacts of reading (and, in this case, parsing) a file more than once would comfortably exceed any gains made by optimising how you rewind that file to re-read it. There would be more benefit in working out how to avoid reading the file more than once than in optimising how you rewind it (which I would characterise as premature optimisation, unless you have done profiling to establish the need).

There is also nothing that says closing and reopening a file flushes it physically to storage device. In practice, the effect is often to move to a level of cache that is "closer" to the physical storage (eg from system cache to hard drive cache).

Why do you think operating systems get upset at being powered down abruptly? The reason is that data moves between various levels of caches, and an orderly shutdown is needed to ensure all levels get flushed.

Originally Posted by adeyblue

The rest of that blurb is an "I know I'm wrong but I'm not admitting it" answer if ever I saw one.

That comment is specious, as you selectively edited the text you quoted from my previous post to remove a qualification, and then you dispute the unqualified statement.

In any event, my comments were about complexity and maintainability of code not about relative performance of different options for rewinding a file. Even if the differential between different approaches for rewinding a file was measured in seconds or minutes (and you would have to work pretty hard to achieve that sort of differential) it would have a much smaller impact on software life-cycle costs than factors that compromise maintainability of that software.


While I certainly don't consider there to be anything wrong with the clear/seek approach, having a code structure that discourages you from using an equally simple (arguably simpler) alternative approach is usually a sign that the code will be increasingly hard to change in future. At some point, you will need to bite the bullet and structure things in a manner that is easier to maintain.

I know there are more posts after this, but your ideas don't seem to have changed, so I would like to add something: maintainability suffers when you close and re-open the file.

Let me explain. To close and reopen a file you need to know the filename, which means the function that does the rewinding MUST take a filename as a parameter rather than a stream. Or, if the stream was needed before that function was even called, it must take BOTH a stream and a filename. And maybe the function that calls THAT function doesn't know the filename either, so it too needs an extra parameter just to tell its callee which file it is working with.
Of course that's the worst-case scenario, but it can get worse still: what if you implement another stream class that is seekable and you want to use the (general) function with that? You can't; there is no way to reopen it, because it isn't backed by any file.

I fail to see how seeking to the beginning would in any way be regarded as "hard to maintain". Could you explain why you think it is?

Why do you think operating systems get upset at being powered down abruptly? The reason is that data moves between various levels of caches, and an orderly shutdown is needed to ensure all levels get flushed.

On Windows (can't speak about Linux) closing the file does not merely shuffle stuff around in memory... it also flushes the data to disk to ensure that it is written. Ever closed a really BIG file... 100megs or more... with lots of changes made? It takes a while, believe me...

(The only exception to this is when the OS itself is copying or moving files. Technically these files are not open and sometimes do not get flushed to disk for several seconds... e.g. the all-too-frequent problem with flash drive corruption on Windows 7.)

However, this is not the big reason Windows gets all upset when you pull the plug... That's because all the registry hives are open and any settings changes might not be saved.