Dustin's Pages

Thursday, June 30, 2016

My bachelor's degree is in Electrical Engineering, and when I started looking for my first post-college job, I had to decide whether to work in a more traditional electrical engineering career or in a computer science-oriented career. I had been writing code in BASIC since I was a kid, then Borland Turbo Pascal in my middle school and high school years, then more Pascal, C, and C++ in my college years. I had a computer science emphasis as part of my EE degree, but the actual degree was in electrical engineering. Many factors influenced my decision to take the computer science fork in the road instead of the electrical engineering fork, and one of these was an experience in an electrical engineering lab in which a tool lied to me. This post is about how our software development tools sometimes deceive us, though I'm really addressing cases where developers share some of the culpability rather than cases of software that is intentionally deceptive.

Several of my electrical engineering classes had one-credit-hour labs associated with them that often required 10-20 hours of my week to complete. I had one circuits-related lab that was giving me particular trouble, and I spent hours trying to get my circuit to work as expected so that I could pass off the lab assignment. After hours of frustration and increasing doubt in my own understanding of the project and its associated topics, the teaching assistant realized that the oscilloscope I had been using was faulty. We connected a properly working oscilloscope to the circuit and it immediately passed. I left relieved, but also with a stronger impression than ever before that I would prefer to make my career in software development rather than in working with circuits or hardware.

It turns out, of course, that our tools in software development sometimes fail us just as the oscilloscope failed me in that lab experience that has perhaps forever traumatized me. I generally have more confidence in my abilities in software development than I had in the circuit topic covered in that lab; many years of software development experience have contributed to a confidence I did not have after only a couple of semesters studying circuits. The remainder of this post provides some examples of where software development tools have failed me or those around me, though I emphasize that many of these are as much or more developer issues than tool issues. I think it's better to just blame the tools.

The Case of the Missing Float

I was working on a project in the early days of Oracle's PL/SQL Server Pages (PSP) product, and our PSP-based pages were not displaying the primary key column in some of our Oracle 8i Database tables. After working with Oracle Support, who put us in touch with a developer of PSP, we learned that this early version of the tool did not expect the PL/SQL type FLOAT. Any column of that type was simply not rendered in the PSP presentation of the table structure. As I recall, no types other than FLOAT were affected in this way. I don't remember how much time we spent before we identified the specific cause, but there certainly was some time lost and some questioning of our DDL statements and other database constructs before we realized the issue was with the tool.

The Cases of Database Command-line Tools Misrepresenting Their Databases

When using command-line tools such as psql for PostgreSQL or SQL*Plus for Oracle Database, one can occasionally be deceived if not careful. There are multiple ways in which this can happen. These "deceptions" tend to be not so much issues with the tools themselves as misconceptions on the part of the users of these tools.

Depending on a particular user's settings, these command-line tools often don't differentiate null from empty string. Both of the command-line tools referenced above allow users to have a string or character substituted for null in query results to avoid this potential deception. These command-line tools may also not show the full precision of some numeric values, but this too is controlled by user settings.
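
To make that first deception concrete, here is a small, purely illustrative Java sketch (the class and method names are mine, not part of psql or SQL*Plus) of why null and empty string can be indistinguishable in rendered output until a null marker is substituted, which is what settings like psql's \pset null and SQL*Plus's SET NULL accomplish:

```java
// Hypothetical illustration: null and "" render identically unless the
// client substitutes an explicit null marker.
class NullDisplayDemo {
    // Naive rendering: both null and "" become a blank cell.
    static String naiveCell(String value) {
        return value == null ? "" : value;
    }

    // Rendering with a null marker, analogous to the client tools' settings.
    static String markedCell(String value) {
        return value == null ? "(null)" : value;
    }

    public static void main(String[] args) {
        System.out.println("[" + naiveCell(null) + "] [" + naiveCell("") + "]");   // indistinguishable: [] []
        System.out.println("[" + markedCell(null) + "] [" + markedCell("") + "]"); // distinguishable: [(null)] []
    }
}
```

With the marker configured, a blank cell can only mean an empty string, so the deception disappears.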

A really easy deception trap for users of database command-line tools to fall into is blaming slowness in the command-line tool or its enclosing terminal on the database. For example, if a table with many large rows (lots of columns) is queried, the results may take quite a bit of time to scroll across the screen as the user watches them. It is easy to blame the query for however long it took the results to be written to the terminal, but in reality the query is often shown to be much quicker when its results are spooled to a file instead of the terminal or, better yet, when its timing is measured with the database command-line interface's own query performance measuring tools.
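
This effect can be simulated in plain Java. The sketch below is illustrative only (the class names and the one-millisecond "terminal" delay are my own assumptions, not measurements of any real terminal); it separates the time to produce results from the time to display them, which is what the CLIs' own timing facilities (psql's \timing, SQL*Plus's SET TIMING ON) effectively report:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;

// Illustrative simulation: the "query" is timed separately from the
// terminal "rendering" so the display cost cannot be mistaken for query cost.
class QueryVersusDisplayTiming {

    // Simulated terminal: each line written costs ~1 ms of "rendering" time.
    static class SlowTerminalWriter extends StringWriter {
        @Override public void write(String s) {
            super.write(s);
            try { Thread.sleep(1); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    // The "query": producing the result rows is the fast part.
    static List<String> runQuery(int rows) {
        List<String> results = new ArrayList<>();
        for (int i = 0; i < rows; i++) results.add("row-" + i + "\tlots of column data here");
        return results;
    }

    static long renderMillis(List<String> rows, Writer out) {
        long start = System.nanoTime();
        try {
            for (String row : rows) out.write(row + "\n");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    // True when watching rows scroll by costs more than "spooling" them.
    static boolean terminalSlowerThanSpool(int rows) {
        List<String> results = runQuery(rows);
        return renderMillis(results, new SlowTerminalWriter()) > renderMillis(results, new StringWriter());
    }

    public static void main(String[] args) {
        List<String> results = runQuery(300);
        System.out.println("terminal: " + renderMillis(results, new SlowTerminalWriter()) + " ms");
        System.out.println("spool:    " + renderMillis(results, new StringWriter()) + " ms");
    }
}
```

The simulated terminal run should take roughly one millisecond per row while the spool finishes almost instantly, which mirrors why spooling (or the CLI's timing facility) isolates the query's real cost.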

One other deception I've seen related to database command-line clients (or really any database client tool) occurs when developers think their software is not working properly because they cannot see the changes being made to the database in their client. In some cases, this is because the software being tested or debugged has not yet been allowed to commit its transaction, and so, under their particular isolation level, the developers should not be able to see the not-yet-committed changes being made in a different database session.

The Case of the Java IDE Classpath Deception

Most Java developers prefer doing the bulk of their development in an IDE. These powerful IDEs make many of us much more productive, but they have been known to lie to developers. Perhaps the most common deception in a Java IDE occurs when the IDE maintains a classpath separate from that of the project's command-line build (with Gradle, Maven, or Ant, for example). In this case, it's easy for the IDE to report successfully building code that doesn't build from the command line, or vice versa.

During my software development career, I've been burned multiple times by a tool that is slow to update its presentation or report. This has led me down the wrong road while investigating an issue because I thought a tool was telling me something when it really hadn't gotten around to telling me that yet. If I'm lucky, I eventually see a case where I know the data the tool is showing me cannot be correct and, on looking into it further, realize that I'm still seeing output data from a previous run. When I run into this issue with a particular tool, I like to make sure there is some field or indicator in the tool's report that must change with each run so that I know whether the data has been refreshed.

A very similar issue can occur with tools that cache results. In such cases, developers may change things without any noticeable effect in the cached presentation and therefore think their changes are ineffectual or won't impact anything. In these cases, the developer is best served by ensuring that data refreshes occur, even if that means forcing the refresh.
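
A minimal sketch of this trap, with entirely hypothetical class and method names, shows how a cached report can make a real change look ineffectual until the refresh is forced:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of a tool that caches its report: changes to the
// underlying source are invisible until the cache entry is forcibly refreshed.
class CachedReport {
    private final Map<String, String> cache = new HashMap<>();

    // Returns the cached value if present; only computes from the source once.
    String view(String key, Supplier<String> freshData) {
        return cache.computeIfAbsent(key, k -> freshData.get());
    }

    void forceRefresh(String key) {
        cache.remove(key); // next view() recomputes from the source
    }

    public static void main(String[] args) {
        Map<String, String> source = new HashMap<>();
        source.put("report", "version 1");

        CachedReport tool = new CachedReport();
        System.out.println(tool.view("report", () -> source.get("report"))); // version 1

        source.put("report", "version 2");  // the developer's change...
        System.out.println(tool.view("report", () -> source.get("report"))); // still version 1: stale!

        tool.forceRefresh("report");
        System.out.println(tool.view("report", () -> source.get("report"))); // version 2
    }
}
```

The middle call is the deception: the change was effective, but the cached presentation says otherwise until the refresh is forced.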

Conclusion

This post has looked at situations in which we might want to blame our tools and suggest that they have led us astray. While that is fair when a tool is built to intentionally lie, or when the tool is broken or immature (as in my first example), most of my examples are situations in which the tool was actually doing its advertised job and the real issue was developer misuse or misunderstanding of the tool or its limitations.

Software development tools have come a long way; they make our jobs easier and make us more productive. However, when used inappropriately or too carelessly, they can sometimes deceive us, or at least contribute to erroneous decisions based on what we think they are telling us. The best defenses against these potential deceptions are to understand our tools well, understand how they perform their jobs, and understand their limitations.

Lombok, AutoValue, and Immutables share quite a bit in common and I try to summarize these similarities in this single descriptive sentence: Lombok, AutoValue, and Immutables use annotation processing to generate boilerplate code for common operations used by value object classes. The remainder of this post looks at these similarities in more detail and contrasts the three approaches.

Code Generation

Lombok, AutoValue, and Immutables are all designed to generate verbose boilerplate code from concise code representations that focus on the high-level business logic and leave the low-level implementation details to the code generation. Common object methods such as toString(), equals(Object), and hashCode() are important, but must be written correctly. It is easy to make mistakes with these, and even when they are originally written correctly (including via IDE generation), they can be neglected when other changes are made to the class that affect them.
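
For a sense of how much boilerplate is involved, here is a hand-written two-field value class of the kind these tools generate (this Person class is illustrative, not actual tool output). Note that adding a third field later would require revisiting the constructor and all three common methods by hand, which is exactly the maintenance risk described above:

```java
import java.util.Objects;

// Hand-written boilerplate for a simple two-field value class; the three
// annotation-processing tools exist to generate exactly this kind of code.
final class Person {
    private final String name;
    private final long id;

    Person(String name, long id) {
        this.name = name;
        this.id = id;
    }

    String name() { return name; }
    long id() { return id; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person other = (Person) o;
        return id == other.id && Objects.equals(name, other.name);
    }

    @Override public int hashCode() {
        return Objects.hash(name, id);
    }

    @Override public String toString() {
        return "Person{name=" + name + ", id=" + id + "}";
    }

    public static void main(String[] args) {
        System.out.println(new Person("Ada", 1L)); // Person{name=Ada, id=1}
    }
}
```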

Value Objects

Lombok, AutoValue, and Immutables each support generation of "value objects." While AutoValue strictly enforces generation of value objects, Immutables allows generated objects to be modifiable if @Value.Modifiable is specified, and Lombok supports multiple levels of mutability in its generated classes with annotations such as @Setter and @Data.

Beyond Value Objects

AutoValue is focused on generation of value objects and supports generation of fields, constructor/builder, concrete accessor methods, and implementations of common methods equals(Object), hashCode(), and toString() based on the abstract methods in the template class.

Immutables provides capability similar to that provided by AutoValue and adds the ability to generate modifiable classes with @Value.Modifiable. Immutables also offers additional features that include:

Lombok provides value class generation capability similar to AutoValue with the @Value annotation and provides the ability to generate modifiable classes with the @Data annotation. Lombok also offers additional features that include:

Although Lombok, AutoValue, and Immutables all employ annotation processing via javac, the particulars of how Lombok uses annotation processing differ from how AutoValue and Immutables do. AutoValue and Immutables use annotation processing in the more conventional sense: they generate source from source. The class that AutoValue or Immutables generates is not named the same as the template class and, in fact, extends the template class. Both read the template class and generate an entirely new Java source class, with its own name, containing all the generated methods and fields. This avoids any name collisions with the template class and makes it fairly easy to mix template source code and generated source code in the same IDE project, because they are in fact different classes.
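
This source-from-source pattern can be mimicked by hand (the Point and GeneratedPoint names below are hypothetical; in real use the concrete subclass is emitted by the annotation processor as, say, AutoValue_Point or ImmutablePoint):

```java
// Hand-written mimic of the source-from-source generation pattern: the
// "template" is abstract, and a differently named concrete class (here
// written by hand; normally emitted by the processor) extends it.
abstract class Point {
    abstract int x();
    abstract int y();

    // The template refers to the generated class by its convention-based name.
    static Point of(int x, int y) {
        return new GeneratedPoint(x, y);
    }

    public static void main(String[] args) {
        System.out.println(Point.of(1, 2)); // Point{x=1, y=2}
    }
}

// Stand-in for the processor's output: a differently named concrete subclass
// holding the fields and the common-method implementations.
final class GeneratedPoint extends Point {
    private final int x;
    private final int y;

    GeneratedPoint(int x, int y) { this.x = x; this.y = y; }

    @Override int x() { return x; }
    @Override int y() { return y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof GeneratedPoint)) return false;
        GeneratedPoint other = (GeneratedPoint) o;
        return x == other.x && y == other.y;
    }

    @Override public int hashCode() { return 31 * x + y; }

    @Override public String toString() { return "Point{x=" + x + ", y=" + y + "}"; }
}
```

Because the two classes have different names, both can live side by side in the same project without collision, which is the point made above.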

AutoValue's Generation via Annotation Processing

Immutables's Generation via Annotation Processing

Lombok approaches generation via annotation processing differently than AutoValue and Immutables do. Lombok generates a compiled .class file with the same class name as the "template" source code and adds the generated methods to this compiled version. A developer sees only the concise template code when looking at the .java files, but sees a compiled .class file with methods not present in the source when looking at the .class files. Lombok's generation produces not another source file but an enhanced compiled version of the original source. There is a delombok option one can use to see what the generated source behind the enhanced .class file looks like, but the project is really designed to go straight from concise template source to enhanced compiled class without any need for an intermediate enhanced source file. The delombok option can be used to see what the generated source would look like or, perhaps more importantly, in situations where it confuses tools to have an inconsistent source file (concise template .java) and generated class (enhanced .class of the same name) in the same space.

The main reasons for the controversy surrounding Lombok's approach are closely related and are that it uses non-standard APIs and, because of this, it can be difficult to integrate well with IDEs and other tools that perform their own compilation (such as javadoc). Because AutoValue and Immutables naturally generate source code with new class names, any traditional tools and IDEs can work with the generated source alongside the template source without any major issues.

Lombok, AutoValue, and Immutables are similar toolkits that provide similar benefits and any of these three could be used successfully by a wide range of applications. However, there are differences between these toolkits that can be considered when selecting which of them to use.

Lombok generates a class with the same package and class name as the template while AutoValue and Immutables generate classes that extend the template class and have their own class name (but same package).

Developers who would like the compiled .class file to have exactly the same package and name as the template class will prefer Lombok.

Developers who prefer the generated source code always be available and not in conflict in any way with the template source will prefer AutoValue or Immutables.

AutoValue is the most opinionated of the three toolkits and Lombok tends to be the least opinionated.

Developers wanting the tight enforcement of characteristics of "value objects" are likely to prefer AutoValue. AutoValue does not provide a mechanism for generated classes to be modifiable and enforces several other rules that the other two toolkits do not enforce. For example, AutoValue only allows the template class to be expressed as an abstract class and not as an interface to avoid "[losing] the immutability guarantee ... and ... [inviting] more ... bad behavior." Immutables, on the other hand, does allow interfaces to be used as the templates for code generation.

Developers who want to depart from strict immutability or use some of the features AutoValue does not support in the interest of best practices opinions will likely prefer Immutables or Lombok.

Developers wanting to avoid IDE plugins or other special tools outside of javac and basic Java IDE support will favor AutoValue or Immutables.

All three toolkits support some level of customization and developers wishing to customize the generated code may want to choose the toolkit that allows them to customize the generated code in the ways they desire.

Lombok provides a configuration system that allows for several aspects of the generated code to be adjusted to desired conventions.

Immutables provides style customization that allows for several aspects of the generated code to be adjusted to desired conventions.

The How Do I? section of AutoValue's User Guide spells out some approaches to customize the code AutoValue generates (typically via use or avoidance of keywords in the template class).

AutoValue and Lombok are supported on JDK 1.6, but Immutables requires JDK 1.7.

Conclusion

Lombok, AutoValue, and Immutables share much in common and all three can be used to generate value classes from simple template files. However, they each also offer different advantages and features that may make any one of them more or less appealing to developers than the others based on the developers' individual circumstances.

Immutables, like AutoValue, uses compile-time annotation processing to generate the source code for the classes that define immutable objects. Because they both use this approach, both introduce only compile-time dependencies, and their respective JARs are not needed on the application's runtime classpath. In other words, the Immutables JARs need to be on the compiler's (javac's) classpath but not on the Java launcher's (java's) classpath.

The code listing for a "template" Person class is shown in the next code listing (Person.java). It looks very similar to the Person.java I used in my AutoValue demonstration.

The only differences between this "template" class and the "template" class I listed in my AutoValue post are the name of the package, the Javadoc comments about which product is being demonstrated, and (most significantly) the annotation imported and applied to the class. There is a specific "create" method in the AutoValue example that's not in the Immutables example, but that's only because I didn't demonstrate use of AutoValue's builder, which would have rendered the "create" method unnecessary.

When I appropriately specify use of Immutables on my classpath and use javac to compile the above source code, the annotation processor is invoked and the following Java source code is generated:

Several observations can be made from examining the generated code (and you'll find that these are remarkably similar to the observations listed for AutoValue in my earlier post):

The generated class extends (implementation inheritance) the abstract class that was hand-written, allowing consuming code to use the hand-written class's API without having to know that a generated class was being used.

Fields were generated even though no fields were defined directly in the source class; Immutables interpreted the fields from the provided abstract accessor methods.

The generated class does not provide "set"/mutator methods for the fields (get/accessor methods). This is not surprising because a key concept of Value Objects is that they are immutable and even the name of this project (Immutables) implies this characteristic. Note that Immutables does provide some ability for modifiable objects with the @Value.Modifiable annotation.

Javadoc comments on the source class and methods are not reproduced on the generated extension class. Instead, simpler (and more generic) Javadoc comments are supplied on the generated class's methods and more significant (but still generic) Javadoc comments are provided on the builder class's methods.

As I stated with regard to AutoValue, one of the major advantages of using an approach such as Immutables generation is that developers can focus on the higher-level concepts of what a particular class should support while the code generation ensures that the lower-level details are implemented consistently and correctly. However, there are some things to keep in mind when using this approach.

Immutables is most likely to be helpful when the developers are disciplined enough to review and maintain the abstract "source" Java class instead of the generated class.

Changes made directly to the generated classes would be overwritten the next time the annotation processing regenerated the class, so generation of that class would have to be halted to prevent this.

The "template" abstract class has the documentation and other higher-level items most developers will want to focus on and the generated class simply implements the nitty gritty details.

You'll want to set your build/IDE up so that the generated classes are considered "source code" so that the abstract class will compile and any dependencies on the generated classes will compile.

Special care must be taken when using mutable fields with Immutables if one wants to maintain immutability (which is typically the case when choosing to use Immutables or Value Objects in general).

Conclusion

My conclusion can be almost word-for-word the same as for my post on AutoValue. Immutables allows developers to write more concise code that focuses on high-level details and delegates the tedious implementation of low-level (and often error-prone) details to Immutables for automatic code generation. This is similar to what an IDE's source code generation can do, but Immutables's advantage over the IDE approach is that Immutables can regenerate the source code every time the code is compiled, keeping the generated code current. This advantage of Immutables is also a good example of the power of Java custom annotation processing.

When using AutoValue to generate full-fledged "value classes," one simply provides an abstract class (interfaces are intentionally not supported) for AutoValue to generate a corresponding concrete extension of. This abstract class must be annotated with the @AutoValue annotation, must provide a static method that provides an instance of the value class, and must provide abstract accessor methods of either public or package scope that imply the value class's supported fields.

In the code listing above, the static instance creation method instantiates an AutoValue_Person object, but no such AutoValue_Person class is defined there. This is instead the name of the class that AutoValue will generate when its annotation processing executes as part of the javac compilation of Person.java. From this, we can see the naming convention of AutoValue-generated classes: AutoValue_ is prepended to the source class's name to form the generated class's name.

When Person.java is compiled with the AutoValue annotation processing applied as part of the compilation process, the generated class is written. In my case (using AutoValue 1.2 / auto-value-1.2.jar), the following code was generated:

The generated class extends (implementation inheritance) the abstract class that was hand-written, allowing consuming code to use the hand-written class's API without having to know that a generated class was being used.

Fields were generated even though no fields were defined directly in the source class; AutoValue interpreted the fields from the provided abstract accessor methods.

The generated class does not provide "set"/mutator methods for the fields (get/accessor methods). This is an intentional design decision of AutoValue because a key concept of Value Objects is that they are immutable.

Javadoc comments on the source class and methods are not reproduced on the generated extension class.

One of the major advantages of using an approach such as AutoValue generation is that developers can focus on the higher-level concepts of what a particular class should support while the code generation ensures that the lower-level details are implemented consistently and correctly. However, there are some things to keep in mind when using this approach, and the Best Practices section of the documentation is a good place to read early to find out whether AutoValue's assumptions work for your own case.

AutoValue is most likely to be helpful when the developers are disciplined enough to review and maintain the abstract "source" Java class instead of the generated class.

Changes made directly to the generated classes would be overwritten the next time the annotation processing regenerated the class, so generation of that class would have to be halted to prevent this.

The "source" abstract class has the documentation and other higher-level items most developers will want to focus on and the generated class simply implements the nitty gritty details.

You'll want to set your build/IDE up so that the generated classes are considered "source code" so that the abstract class will compile.

Special care must be taken when using mutable fields with AutoValue if one wants to maintain immutability (which is typically the case when choosing to use Value Objects).

Review the Best Practices and How do I... sections to make sure no design assumptions of AutoValue make it unsuitable for your needs.

Conclusion

AutoValue allows developers to write more concise code that focuses on high-level details and delegates the tedious implementation of low-level (and often error-prone) details to AutoValue for automatic code generation. This is similar to what an IDE's source code generation can do, but AutoValue's advantage over the IDE approach is that AutoValue can regenerate the source code every time the code is compiled, keeping the generated code current. This advantage of AutoValue is also a good example of the power of Java custom annotation processing.

Tuesday, June 14, 2016

After a few weeks/months of what felt like unusual quiet (at least in my perception) in the world of Java, there has recently been an upsurge in the amount of Java-related news and I briefly reference some of these news stories in this post.

OpenJDK 9 Not Yet Feature Complete

On Friday, Mark Reinhold (Chief Architect of the Java Platform Group at Oracle) announced that "JDK 9 is not (yet) Feature Complete." In this message, he states that "milestones listed in the JDK 9 schedule are condition-driven rather than date-driven" and references the OpenJDK Milestone Definitions. This referenced section on milestone definitions also defines "Feature Complete" as "All features have been implemented and integrated into the master forest, together with unit tests." Reinhold's message corroborates this definition, "The goal of the Feature Complete milestone is to get all of the planned features, i.e., JEPs, and smaller enhancements integrated into the JDK 9 master forest, together with their unit tests."

One of Reinhold's purposes in writing this is to assure people who feared "the JDK 9 (and hence Java SE 9) feature set is somehow frozen" that this is "not the case." The reason some feared this is that 26 May 2016 is listed as the "Feature Complete" milestone date as shown in the next screen snapshot taken from the OpenJDK JDK 9 project page.

Reinhold also uses the post to propose a process to be followed to get to "Feature Complete" that includes the JEP owner potentially proposing that their JEPs be dropped from JDK 9.

The State of Java EE 8

There is significant consternation regarding the future of Enterprise Java, particularly on Java EE 8. The Java EE Guardians have been formed with intent to "send a clear signal that Java EE is important and needs to be safeguarded for the community." The concern is that there seems to be no recent advertised progress on the Java EE 8 specification and no announcements to explain why.

When Reza Rahman announced his departure from Oracle, he wrote, "I will be rejoining the purely community driven Java EE efforts I have been part of for the better part of a decade in complete good faith as soon as possible post-Oracle." Rahman soon helped form Java EE Guardians and stated, "The bottom line is, if Oracle is not committed to server-side Java and not committed to supporting the EE space, then fundamentally, someone else needs to step in."

In his post Help Move Java EE Forward, Josh Juneau writes, "Java EE as a whole has seen little to no movement forward since JavaOne 2015." He concludes, "In the end, if Oracle is not interested putting forth effort internally and moving Java EE forward, hopefully they will be open to working more with the community, and hand off some of the specifications to those who are interested."

Mark Little has posted Does Java EE Have a Future? on the JBossDeveloper Forum and states, "The principles on which Java EE are based are pretty common to distributed systems in general." Speaking of last week's DevoxxUK panel on the future of Java EE, Little writes, "Given rumours and other concerns about the future of Java EE, I can certainly empathise with developers who want to hear that the major vendors are standing behind it. Well Red Hat and those on the panel at DevoxxUK hopefully made it clear: we are prepared to continue innovating with and on Java EE, and it's a key part of our strategy."

I am a longtime fan of social media oriented software development sites such as StackOverflow, DZone, and Java Code Geeks. However, it can sometimes be almost overwhelming to filter through these sites' vast amount of content. I have found a nice complement to these sites to be four blogs that aggregate some of the most interesting Java and software development related articles and blog posts and provide brief descriptions and commentary on the linked-to references. These four, in no particular order, are Baeldung's Java Web Weekly (weekly aggregation of mostly Java-related links with brief descriptions), Thoughts on Java's Java Weekly (weekly aggregation of mostly Java-related links with brief descriptions), Robert Diana's Geek Reading (daily [weekdays] collection of links to general software development and technology posts with a significant dose of Java-related posts), and Morning Dew's Dew Drops (daily [weekdays] collection of links to general software development and technology posts with what seems to me like a .NET/Windows emphasis).

Monday, June 6, 2016

For the most part, Java is a very backwards-compatible programming language. The advantage of this is that large systems can generally be upgraded to newer versions of Java more easily than would be possible if compatibility were broken on a larger scale. A primary disadvantage is that Java is stuck with some design decisions that have since been recognized as less than optimal but must be left in place to maintain general backwards compatibility. Even with Java's relatively strong commitment to backwards compatibility, there are differences in each major release that can break Java-based applications when they are upgraded. These potential breaks, occurring most commonly in "corner cases," are the subject of this post.

Sun Microsystems and Oracle have provided fairly detailed outlines of the compatibility issues associated with Java upgrades. My point is not to cover every one of these issues in every one of the versions, but instead to highlight some key incompatibilities introduced with each major release of Java that either personally impacted me or had a more significant effect on others. Links at the bottom of this post point to the Sun/Oracle compatibility documents for each Java version for those seeking greater coverage.

Upgrading to JDK 1.2

With hindsight, it's not surprising that this early release of Java fixed several incompatibilities between the implementation and the specification. For example, the JDK 1.2 compatibility reference states, "The String hash function implemented in 1.1 releases did not match the function specified in the first edition of the Java Language Specification, and was, in fact, unimplementable." It adds that "the implemented function performed very poorly on certain classes of strings" and explains that "the new String hash function in version 1.2" was implemented "to bring the implementation into accord with the specification and to fix the performance problems." Although it was anticipated that this change to String.hashCode() would not impact most applications, it was acknowledged that "an application [that] has persistent data that depends on actual String hash values ... could theoretically be affected." This is a reminder that it's typically a bad idea to depend on an object's hashCode() method to return specific values.
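
The specified function is simple enough to reproduce by hand, which also shows why persisted hash values changed when the implementation was corrected. This sketch (my own, not from the compatibility reference) compares a from-the-specification computation with the built-in method:

```java
// The corrected String.hashCode() is exactly the specified polynomial
// s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1].
class StringHashDemo {
    static int specHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i); // Horner's-rule form of the specified polynomial
        }
        return h;
    }

    public static void main(String[] args) {
        for (String s : new String[] {"", "Java", "persisted hash values"}) {
            System.out.println(s + ": spec=" + specHash(s) + ", built-in=" + s.hashCode());
        }
    }
}
```

Any code that persisted pre-1.2 hash values would have disagreed with this computation after the upgrade, which is exactly the theoretical risk the compatibility reference called out.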

Upgrading to JDK 1.3

The JDK 1.3 compatibility reference mentions several changes that brought more implementation conformance with the JDK specification. One example of this was the change that introduced "name conflicts between types and subpackages":

According to ... the Java Language Specification, ... it is illegal for a package to contain a class or interface type and a subpackage with the same name. This rule was almost never enforced prior to version 1.3. The new compiler now enforces this rule consistently. A package, class, or interface is presumed to exist if there is a corresponding directory, source file, or class file accessible on the classpath or the sourcepath, regardless of its content.

Upgrading to JDK 1.4

The upgrade effort I was leading on a project to move to JDK 1.4 ended up taking more time than estimated because of JDK 1.4's change so that "the compiler now rejects import statements that import a type from the unnamed namespace." In other words, JDK 1.4 took away the ability to import a class defined without an explicit package. We did not realize this would be an issue for us because the affected code was generated by a third-party tool. We had no control over the generation of that code, could not force the generated classes into named packages, and so they were automatically part of the "unnamed namespace." This meant that, with JDK 1.4, we could no longer compile these generated classes alongside our own source code. Discovering and working around this change took more time than we had anticipated for what we thought would be a relatively straightforward JDK version upgrade. The same JDK 1.4 compatibility reference also states the most appropriate solution when one controls the code: "move all of the classes from the unnamed namespace into a named namespace."

Upgrading to Java 7 (1.7)

Any uses of the com.sun.image.codec.jpeg package were broken when upgrading to Java 7. The Java 7 compatibility reference states, "The com.sun.image.codec.jpeg package was added in JDK 1.2 (Dec 1998) as a non-standard way of controlling the loading and saving of JPEG format image files. This package was never part of the platform specification and it has been removed from the Java SE 7 release. The Java Image I/O API was added to the JDK 1.4 release as a standard API and eliminated the need for the com.sun.image.codec.jpeg package."
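A minimal sketch of the standard replacement: the javax.imageio API reads and writes JPEG with no com.sun.* classes involved. The image contents and dimensions here are arbitrary.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class JpegRoundTrip {
    // Encodes an image as JPEG and decodes it back using only standard Image I/O.
    static BufferedImage roundTrip(BufferedImage image) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(image, "jpg", out); // replaces com.sun...JPEGImageEncoder
        return ImageIO.read(new ByteArrayInputStream(out.toByteArray())); // replaces JPEGImageDecoder
    }

    public static void main(String[] args) throws IOException {
        BufferedImage original = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        BufferedImage decoded = roundTrip(original);
        System.out.println(decoded.getWidth() + "x" + decoded.getHeight()); // 16x16
    }
}
```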

Another incompatibility introduced in Java 7 is actually another example of making an implementation better conform to the specification. In this case, in Java SE 6, methods that had essentially the same erased signature but different return types were seen as two different methods. This does not conform to the specification and Java 7 fixed it. More details on this issue can be found in my blog post NetBeans 7.1's Internal Compiler and JDK 6 Respecting Return Type for Method Overloading and in the Java 7 compatibility reference under the "Synopsis" headings "A Class Cannot Define Two Methods with the Same Erased Signature but Two Different Return Types" and "Compiler Disallows Non-Overriding Methods with the Same Erased Signatures".
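A hypothetical sketch of the pattern in question (method names invented). After erasure, both commented-out methods have the signature size(List), so javac 7+ rejects the pair that javac 6 tolerated; giving the methods distinct names sidesteps the clash:

```java
import java.util.Arrays;
import java.util.List;

public class ErasureDemo {
    // Accepted by javac 6, rejected by javac 7+: after erasure both are
    // size(List), and differing only in return type is not enough.
    // static int  size(List<String> strings)  { return strings.size(); }
    // static long size(List<Integer> numbers) { return numbers.size(); }

    // Distinct names avoid the erased-signature conflict entirely.
    static int stringCount(List<String> strings) {
        return strings.size();
    }

    public static void main(String[] args) {
        System.out.println(stringCount(Arrays.asList("a", "b"))); // 2
    }
}
```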

Upgrading to Java 8 (1.8)

Just as Java 7 changes did, Java 8 brought a change that directly impacted several popular and widely used Java libraries. Although this change likely directly affected relatively few Java applications, it indirectly had the potential to affect many Java applications. Fortunately, the maintainers of these Java libraries tended to fix the issue quickly. This was another example of enforcement of the specification being tightened (corrected) and breaking things that used to work based on an implementation not implementing the specification correctly. In this case, the change/correction was in the byte code verifier. The JDK 8 Compatibility Guide states, "Verification of the invokespecial instruction has been tightened when the instruction refers to an instance initialization method ("<init>")." A nice overview of this issue is provided in Niv Steingarten's blog post Oracle's Latest Java 8 Update Broke Your Tools — How Did it Happen?

Upgrading to Java 9 (1.9)

It seems likely Java 9 will introduce some significant backwards compatibility issues, especially given its introduction of modularity. While it remains to be seen what those breakages will be, there has already been significant uproar over the initial proposal to remove access to sun.misc.Unsafe. This is another example of an officially unsupported API that may not be used directly by most applications but is probably used indirectly by numerous applications because libraries and products they depend upon use it. It's interesting that this has led to Mark Reinhold's proposal that internal APIs be encapsulated in JDK 9. Given the numerous compatibility issues associated with dropped and changed internal APIs between major Java revisions, this seems like a good idea.

Lessons Learned from JDK Version Compatibility Issues

Avoid taking advantage of implementation behaviors that violate the specification; code that exploits such holes in the implementation may not work at all once the implementation is changed to enforce the specification.

Beware of and use only with caution any APIs, classes, and tools advertised as experimental or subject to removal in future releases of Java. This includes the sun.* packages and deprecated tools and APIs.

I like the proposed JDK 9 approach of "encapsulating internal APIs in JDK 9" to deal with these frequent issues during major revision upgrades.

Significant effort has been applied over the years to keep Java, for the most part, backwards compatible. However, there are cases where this backwards compatibility is not maintained. I have looked at some examples of this in this post and extracted some observations and lessons learned from those examples. Migrations to newer versions of Java tend to be easier when developers avoid using deprecated features, avoid using experimental features, and avoid using non-standard features. Also, certain coding practices, such as not basing logic on toString() results, can help.
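As a hypothetical illustration of that last point (the class is invented), expose data through a dedicated accessor rather than asking callers to parse toString() output, whose format is documentation for humans rather than a contract:

```java
public class Temperature {
    private final double celsius;

    public Temperature(double celsius) {
        this.celsius = celsius;
    }

    // Callers that need the value should use this accessor...
    public double getCelsius() {
        return celsius;
    }

    // ...not parse this human-oriented text, whose format may change between releases.
    @Override
    public String toString() {
        return celsius + " degrees Celsius";
    }

    public static void main(String[] args) {
        System.out.println(new Temperature(21.5).getCelsius()); // 21.5
    }
}
```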