So, after the first iteration of the loop, out_fp was CLOSED,
yet I was still trying to write data to it.

Is it possible that, since it was closed, the (bad) data could
have been written to fp?

The reason I ask is that a few records in fp became trashed. When I looked to see
what program I might have run at the time the data file was last modified
(17:06, 11/6/04), this program was indeed run at that time.

After you close the file, write attempts to it should fail.
However, if your program opened another file subsequently,
it likely acquired the same file handle. [In fact, it must do
so for stdio redirection to work correctly.] In that case, writing
through the stale file handle could corrupt a separate file. This is
certainly true for the level 1 I/O APIs (open, close, read, write).

With the buffered, level 2 I/O APIs (fopen, fclose, fread, fwrite),
the situation is even more complicated. The call to fclose()
deallocates the FILE structure that was allocated via fopen().
Using the stale FILE structure adds using freed memory to
the list of problems.


Would fprintf() be considered level 1 or level 2 I/O? That's
what I used to write to out_fp.

Now, I did use fopen and fclose, so I know I have
some level 2 I/O there. Can I assume your comments
about level 1 (acquiring the same file handle, etc.) still
apply?

Finally, I "reintroduced" my bug and dumped out the
value of the file handle. (I used %ld; I assume that
was correct?) Sure enough, when I opened
fp for the second time, its value was the same as
out_fp "used" to have.

Sorry, I cannot remember where I read about "level 1" and "level 2" file I/O APIs
(maybe a Microsoft doc from the '80s).

Level 1 I/O uses open(), creat(), read(), write(), lseek(), and close() for block access
to a file. open() and creat() return an integer file descriptor that is passed
to the others. Opening a file returns the lowest-numbered unused file descriptor.
For instance, suppose file descriptors 0-4 are open and you open a new file:
open() will return 5. If you then close file descriptor 2 and open a new file,
open() will return 2 as the new file descriptor. This is
how command shells effect I/O redirection.

Level 2 I/O uses fopen(), popen(), fread(), fwrite(), fprintf(), fgets(), etc., for buffered
stream access to files. fopen() and popen() return a pointer to a FILE structure
which is used in the remaining calls. The FILE structure contains the buffering
state, making it easier to read lines or individual characters (getc, ungetc, etc.).
The level 2 suite uses the level 1 calls to provide the underlying access to the file,
and the file descriptor is stored in a member of the FILE structure. If the system's
fclose() does not set the embedded file descriptor to -1 after closing the file, the
structure will contain a stale file descriptor. I would not expect fclose() to take
this precaution, since it immediately deallocates the FILE structure itself.

If you closed the file with fclose(), AND opened another file which reused
the previous integer file descriptor, AND you wrote through the deallocated FILE
structure, you could corrupt the newly opened file while thinking you were writing to
the previously opened file. Note that fopen() is unlikely to reallocate the same
memory for the FILE structure; however, when it calls open(), that call might
return a previously closed integer file descriptor.

1. The fclose is executing asynchronously. Your next attempt at writing to the closed file got in before the file was actually closed.
2. The fclose actually leaves the file open, just marking it as closed. This might be a form of caching to speed up the case where you open the file again soon after.

It is quite possible to open a file, close it and open it again and discover that the file handle is the same. Especially on a windoze system where there are usually only about 10 file handles available to you by default anyway.

Generally speaking, whenever you close a file, set the handle to NULL. If only the old ANSI committee had specified that fclose() returned NULL, there would now be loads of much safer code out there that simply says `fp = fclose(fp);`.

Even though the file handle of the data file that was corrupted (fp) DOES have the same value
as the file handle of the output file (out_fp) that was closed, I'm not seeing the trashed
data I expected, nor is the timestamp of the data file being updated.

I removed setting the file handles to NULL and now I can recreate the bug. Saw it happen
right before my eyes. The timestamps on the 3 files being processed after the mistaken
close got updated, and the data indeed got trashed.

I'm actually relieved. This bug was in a stand-alone utility I had written for internal testing.
I'd much rather the bug be there than in my application itself :)
