EOF in ObjectInputStream

I assumed that I could loop on ObjectInputStream.readObject() until it returned null, indicating that all objects had been read. However, instead, I get an EOFException after it reads all the objects and attempts to read the next (nonexistent) one.

What is the preferred way to detect EOF with Object streams? I suppose I could write a header that includes the number of objects to follow, but somehow I think there must be a preferred way.
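For reference, the behavior described above can be reproduced with a small sketch (class and method names here are just for illustration). `readObject()` never returns null to signal end-of-stream; it throws `EOFException` instead, so the common workaround is to catch it:

```java
import java.io.*;
import java.util.*;

public class EofBehaviorDemo {
    // Read objects until the stream ends; EOFException is the only signal
    static List<Object> readAll(byte[] data) throws IOException, ClassNotFoundException {
        List<Object> objects = new ArrayList<>();
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            while (true) {
                objects.add(in.readObject()); // does NOT return null at EOF
            }
        } catch (EOFException expected) {
            // thrown once all objects have been consumed
        }
        return objects;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject("one");
            out.writeObject("two");
        }
        System.out.println(readAll(buf.toByteArray())); // [one, two]
    }
}
```

Note that null is not a usable sentinel anyway, since `writeObject(null)` is legal and `readObject()` would return that null mid-stream.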

Yes, using exceptions to determine program flow is bad form. In this respect, Object*putStream seems to break the contract that every other stream uses, where there is some value to indicate EOF. If you want to serialize more than one object, package all the objects into a single collection (e.g., a List or Map) and serialize the collection. Then you don't have to catch the EOFException, and you get all your data in one simple readObject() call.

Originally posted by Joe Ess: Yes, using exceptions to determine program flow is bad form. In this respect, Object*putStream seems to break the contract that every other stream uses, where there is some value to indicate EOF. If you want to serialize more than one object, package all the objects into a single collection (e.g., a List or Map) and serialize the collection. Then you don't have to catch the EOFException, and you get all your data in one simple readObject() call.

So you must be certain that the implementation of the collection you use doesn't change in a future JDK version, because otherwise you can't read your file.

[Joe]: Yes, using exceptions to determine program flow is bad form. In this respect, Object*putStream seems to break the contract that every other stream uses, there being some value to indicate EOF.

Agreed. It would've made sense if they'd simply returned null in this case - at least, it would've been consistent with other streams. But they didn't; too bad.

Wrapping the objects in a Collection or array is one way to handle this problem. Another is to use writeInt() to write the number of serialized objects that will be in the stream, before you write the objects themselves. Then the reading stream needs to first call readInt() to get the number of objects, then call readObject() that many times. Another solution is to use a variant of the Null Object pattern - create an object which indicates the end of the stream. This is particularly useful if you don't know how many objects there will be before you start to write them.
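The count-prefix idea above can be sketched like this (class and method names are invented for the example; the same shape works for the sentinel variant, writing a known marker object last instead of a count first):

```java
import java.io.*;
import java.util.*;

public class CountPrefixDemo {
    // Writer: record how many objects follow, then the objects themselves
    static byte[] write(List<String> items) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeInt(items.size());           // header: object count
            for (String s : items) {
                out.writeObject(s);
            }
        }
        return buf.toByteArray();
    }

    // Reader: read the count first, then exactly that many objects,
    // so the loop never runs past the last object
    static List<String> read(byte[] data) throws IOException, ClassNotFoundException {
        List<String> result = new ArrayList<>();
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            int count = in.readInt();
            for (int i = 0; i < count; i++) {
                result.add((String) in.readObject());
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(read(write(List.of("a", "b", "c")))); // [a, b, c]
    }
}
```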

[Sudha]: I suppose you can try using the available() method.

Please, don't. The problem with available() is that it often does what people want, but sometimes does something different. Which means that when something does go wrong, it's difficult to replicate reliably, and therefore very difficult to track down.

The problem is that available() does not necessarily have anything to do with EOF or stream closure. The number returned by available() only tells you how many bytes are available right now, without blocking. Maybe no more bytes are available because a file is fragmented and it takes a few more milliseconds for the disk reader to move to the next fragment. Or maybe there is a hardware buffer somewhere in the system which is smaller than the requested number of bytes, and the system is designed to return the size of the available buffer (which will then be reloaded shortly after it's been read). Or maybe (very often, in fact) you're reading across some sort of network connection and the remaining bytes simply have not been sent yet from the other side of the connection. In all these cases, available() does not tell you the actual number of bytes remaining to be read.

The method is massively useless, except for a few cases where it was mildly useful for some pitiful attempts at nonblocking IO prior to JDK 1.4. Ever since JDK 1.4 though, the java.nio package offers better alternatives for that sort of thing. There is really no good reason to use available() nowadays.
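To make the contract concrete (this sketch uses made-up names): for plain byte streams, EOF is signaled by read() returning -1, not by available() returning 0. available() happening to equal the remaining length below is a property of ByteArrayInputStream, not a guarantee of InputStream in general.

```java
import java.io.*;

public class EofContractDemo {
    // Count bytes by looping until read() returns -1, the real EOF signal
    static int countBytes(InputStream in) throws IOException {
        int count = 0;
        while (in.read() != -1) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3, 4});
        // For an in-memory stream this happens to be 4, but for a file or
        // socket it could be any number from 0 up, regardless of EOF
        System.out.println(in.available());
        System.out.println(countBytes(in)); // 4, determined by the -1 contract
    }
}
```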

[Roel]: So you must be certain that the implementation of the collection you use doesn't change in a future JDK version, because otherwise you can't read your file.

Yes, but that's true whenever you use serialization.

In practice, standard collections such as ArrayList and HashMap are protected against this. They've defined serialVersionUID (from when they were first defined), and Sun is unlikely to change them enough to make them incompatible with previous versions. In comparison, it's much more likely that the class of the objects within the collection will change over time. And that's an issue whether you wrap the objects in a collection or not. [ October 21, 2005: Message edited by: Jim Yingst ]
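The same protection is available for your own classes by pinning the serial version explicitly (the Customer class here is just an example):

```java
import java.io.Serializable;

// Declaring serialVersionUID pins the serial form, so compatible edits
// to this class won't break deserialization of previously written data
public class Customer implements Serializable {
    private static final long serialVersionUID = 1L; // explicit, stable ID

    private final String name;

    public Customer(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
```

Without the explicit field, the JVM computes a serialVersionUID from the class's structure, so even a harmless edit such as adding a method can make old streams unreadable.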