PASS1 reports "shallow" races; I fixed those and re-ran the tool. PASS2 then reports any remaining deep races; I fixed those as well, re-ran the tool, and it reported no further races. There are two passes because a single instance of bad synchronization can cause multiple races (and, conversely, a single race can indicate multiple instances of bad synchronization), so to avoid overwhelming the user with race reports, the first pass checks only for races on fields of top-level objects (e.g., NetConnection), while the second pass does a full-blown check.

In most of the above cases, it is the classic situation of possibly incorrect synchronization in which the programmer leaves get* methods unsynchronized. Whether this is correct depends on what those get* methods are doing, so I leave it to the programmer to decide. Also, it is not sufficient to reason at the Java source level: given the subtleties of the Java memory model, one needs to reason at the bytecode level (see the next item for more on this).

All are on field metaDataInfoIsCached_ of class DatabaseMetaData, due to a lack of synchronization on connection_ in the 107 public methods of this class that call a getMetaData* method. One of these 107 methods follows this pattern:
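The actual method is not reproduced here; the following is a hypothetical, self-contained sketch of the pattern (the method name, cache type, and cached value are invented for illustration; only the two field names come from the report):

```java
// Hypothetical sketch, NOT the actual driver source: the shape of one of
// the 107 public methods that reaches a getMetaData*-style helper without
// first synchronizing on connection_.
public class DatabaseMetaDataSketch {
    private final Object connection_ = new Object();
    private boolean metaDataInfoIsCached_ = false;   // the racy flag
    private final int[] metaDataInfoCache_ = new int[16];

    // Public entry point: note there is no synchronization on connection_ here.
    public int getMaxColumnsInTable() {
        return getMetaDataInfoInt(0);
    }

    // getMetaData*-style helper: reads the flag without holding the lock.
    private int getMetaDataInfoInt(int index) {
        if (!metaDataInfoIsCached_) {   // unprotected read of the flag
            metaDataInfoCall();
        }
        return metaDataInfoCache_[index];
    }

    // The synchronized writer (see the method quoted below).
    private void metaDataInfoCall() {
        synchronized (connection_) {
            metaDataInfoCache_[0] = 42;   // stands in for the real cache fill
            metaDataInfoIsCached_ = true; // protected write of the flag
        }
    }
}
```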

The programmer seems to be aware of these races between the protected write access to field metaDataInfoIsCached_ in the method below

// We synchronize at this level so that we don't have to synchronize all
// the meta data info methods. If we just return hardwired answers we
// don't need to synchronize at the higher level.
private void metaDataInfoCall() {
    synchronized (connection_) {
        ... // update metaDataInfoCache_
        metaDataInfoIsCached_ = true;
    }
}

and the unprotected read access to that field in each of the getMetaData* methods, for instance:
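The specific getMetaData* method is not quoted here; a hypothetical sketch of such a reader (the method name and cached value are invented, but the read of metaDataInfoIsCached_ outside any synchronized block is the reported pattern) might look like:

```java
// Hypothetical sketch, NOT the actual driver source: a getMetaData*-style
// method whose read of the flag races with the protected write in
// metaDataInfoCall().
public class MetaDataReaderSketch {
    private final Object connection_ = new Object();
    private boolean metaDataInfoIsCached_ = false;
    private final String[] metaDataInfoCache_ = new String[8];

    public String getIdentifierQuoteString() {
        if (!metaDataInfoIsCached_) {   // unprotected read: the reported race
            metaDataInfoCall();
        }
        // If the JVM reorders the writes below, this read could observe a
        // cache element that has not yet been filled in.
        return metaDataInfoCache_[0];
    }

    private void metaDataInfoCall() {
        synchronized (connection_) {
            metaDataInfoCache_[0] = "\""; // stands in for the real cache update
            metaDataInfoIsCached_ = true;
        }
    }
}
```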

The races are benign if one reasons at the Java source level, but they might be worth looking at from the perspective of the bytecode level. In particular, since a JVM is free to reorder instructions within a synchronized block, it is legal (though highly unlikely) for it to move the write to metaDataInfoIsCached_ in the metaDataInfoCall method to before the "..." code that updates the elements of array metaDataInfoCache_, in which case there is clearly a bug. I am not myself very familiar with the Java memory model, but see http://www.cs.umd.edu/users/pugh/java/memoryModel/jsr-133-faq.html for some unexpected things that can happen due to reordering of bytecode that is legal within (but not across) a synchronized block.
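One standard remedy under the post-JSR-133 memory model (my suggestion, not something the original code or the tool proposes) is to declare the flag volatile: the writes filling the cache then happen-before the volatile write of the flag, and a reader that observes the flag as true is guaranteed to also see the filled cache. A minimal sketch, with invented names and values:

```java
// Sketch of one possible fix (my suggestion, not the actual code): the
// volatile write of the flag publishes the preceding cache writes, so the
// reordering hazard described above cannot be observed by readers.
public class SafePublicationSketch {
    private final Object connection_ = new Object();
    private volatile boolean metaDataInfoIsCached_ = false; // now volatile
    private final int[] metaDataInfoCache_ = new int[4];

    public int getMetaDataInfoInt(int index) {
        if (!metaDataInfoIsCached_) {   // volatile read: safe, still lock-free
            metaDataInfoCall();
        }
        return metaDataInfoCache_[index];
    }

    private void metaDataInfoCall() {
        synchronized (connection_) {
            if (metaDataInfoIsCached_) return; // another thread already filled it
            metaDataInfoCache_[0] = 7;         // stands in for the real fill
            metaDataInfoIsCached_ = true;      // volatile write publishes the cache
        }
    }
}
```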

But as I said above, these races might be worth looking at because of unexpected transformations the JVM may perform at the bytecode level. Clearly, the second comment in the above method suggests that the programmer assumes the absence of such aggressive transformations.