Tag Archives: java

It comes with a set of Eclipse plugins to test and deploy Java servlets, using JDO or JPA to support database access. Of course, the database behind this is BigTable, which means that a lot of relational features are not available, but it scales!

This is the fourth installment of this series about managing backward compatibility in software development. Here I talk about what makes an XML Schema backward incompatible.

I specifically address W3C XML Schemas, but the general principles apply regardless of the schema language you use.

But first, why bother about XML Schema compatibility?

In enterprise applications, XML is often used to specify configuration files or interchange formats. With the rise of web services and RESTful applications on the Internet, the use of XML keeps increasing.

Thus, making sure that existing configuration files still work with your new software or, more importantly, that other applications can still communicate with yours can really make a difference.

So, what makes a schema incompatible?

Changing an element or attribute type to a more restricted type (like adding constraints on an xs:string)

Changing the order of a sequence in a complex element

Removing or renaming an element or attribute from a complex type

Adding a mandatory element or attribute to a complex type without providing a default

Removing complex or simple types will also make it incompatible if:

Your schema is included or imported by other schemas or

You do not replace them with compatible anonymous types (compatible meaning equivalent or less strict, e.g. if one defines a simple type JavaClass, which is an xs:string with a constraint, and replaces it with xs:string).
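As a sketch of the first point, using a hypothetical javaClass element: adding a pattern restriction to what used to be a plain xs:string makes previously valid documents invalid. The schemas and pattern below are invented for illustration.

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;

public class SchemaCompat {
    // v1: unconstrained string (hypothetical schema)
    static final String V1 = "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
        + "<xs:element name='javaClass' type='xs:string'/></xs:schema>";
    // v2: same element, now restricted by a pattern (a qualified Java class name)
    static final String V2 = "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
        + "<xs:element name='javaClass'><xs:simpleType>"
        + "<xs:restriction base='xs:string'>"
        + "<xs:pattern value='([a-zA-Z_$][a-zA-Z0-9_$]*\\.)*[a-zA-Z_$][a-zA-Z0-9_$]*'/>"
        + "</xs:restriction></xs:simpleType></xs:element></xs:schema>";

    // Returns true if the XML document validates against the given schema.
    static boolean validates(String xsd, String xml) {
        try {
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                .newSchema(new StreamSource(new StringReader(xsd)))
                .newValidator()
                .validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String doc = "<javaClass>not a class name!</javaClass>";
        System.out.println(validates(V1, doc)); // v1 accepts any string
        System.out.println(validates(V2, doc)); // v2's pattern rejects it
    }
}
```

Any document that relied on v1's loose typing now fails against v2: a backward-incompatible change even though nothing was removed.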

Then, how to preserve backward compatibility?

If some elements of the schema are becoming obsolete, do not remove them. Instead, mark them as deprecated in the schema documentation and, if applicable, remove their mapping to the object model (that way you will not have to maintain the code equivalent of the deprecated elements).

The best strategy I have come across so far is namespacing: if a given schema must be refactored, create a new one and change its namespace (a good practice is to include the major version of the schema in the namespace).

You then have two options:

provide an XSL stylesheet that enables the migration of XML documents from the old schema to the new one

provide support code to be able to read both document structures

Of course, the second solution is the most desirable from the operational point of view (and the first one is not always applicable). However, the trade-off is that it is more expensive from the development point of view. Once again, deciding who is going to do the work (the person who develops or the person who installs your application) is a matter of project management.
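A minimal sketch of the second option, assuming made-up version-carrying namespaces: dispatch on the root element's namespace to select the right reader.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class NamespaceDispatch {
    // Hypothetical namespaces carrying the schema's major version
    static final String NS_V1 = "http://example.com/config/v1";
    static final String NS_V2 = "http://example.com/config/v2";

    // Inspect the root element's namespace and pick the matching reader.
    static String pickReaderVersion(String xml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true); // required, otherwise namespaces are ignored
            Document doc = f.newDocumentBuilder()
                            .parse(new InputSource(new StringReader(xml)));
            String ns = doc.getDocumentElement().getNamespaceURI();
            if (NS_V2.equals(ns)) return "v2";
            if (NS_V1.equals(ns)) return "v1";
            return "unknown";
        } catch (Exception e) {
            throw new IllegalArgumentException("Cannot parse document", e);
        }
    }
}
```

Each reader then only has to understand one document structure, which keeps the support code manageable as versions accumulate.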

This is the third post about software compatibility. The previous ones covered project management and bugs; this one deals with database schema compatibility (I will deal with stored procedures in the chapters about code compatibility).

First of all, what does backward compatibility mean when talking about the database?

Being able to migrate data stored in one schema into a new one.

Preserving compatibility with external systems (like report engines) that may be accessing the database directly.

Point #1 is achieved through migration tools that update the database schema; in some cases writing such tools may be very tricky.

Point #2 is a bigger challenge. Changes that may break the database compatibility are:

Removing a table or changing its name.

Removing a column, changing its type (including its precision or length) or changing its name.

Changing the semantics of a column (e.g. changing the valid values).

Adding foreign keys.

In cases #1 and #2, if such changes cannot be avoided, a good enough solution is to implement database views that mock up the old tables based on the new ones.

The catch is that for #2 you will need to rename the actual table, which forces an update of the foreign keys in other tables and surely more code changes than initially expected. Leaving an unused column in the table may be a better solution. As usual, this is a trade-off that should be discussed at the project level.
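As a sketch of the view trick (table and column names are invented): suppose version 2 renamed PERSON to EMPLOYEE and split the NAME column; a view can mock up the old table so that external reports keep working.

```sql
-- Hypothetical example: the v2 schema renamed PERSON to EMPLOYEE
-- and split NAME into FIRST_NAME / LAST_NAME. This view presents
-- the old PERSON shape to external systems.
CREATE VIEW PERSON AS
SELECT ID,
       FIRST_NAME || ' ' || LAST_NAME AS NAME,
       BIRTH_DATE
FROM EMPLOYEE;
```

External reports querying PERSON keep working unchanged; only the application code has to learn the new table.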

Point #3 is trickier because it really depends on the change and on how the column is used. Most of the time, transforming a “change” into a “remove and add new” reduces it to case #2. Triggers can then be used to keep the old column updated, or it can simply be left unused.

Point #4 is a problem when there are scripts that delete entries in a table. If all of a sudden there is a new foreign key that depends on this table then the script will fail, thus breaking the compatibility. I actually have no technical solution for this one. I think that only documentation can be given, but if any of you has an idea please share it with us :)

Nevertheless, remember never to make an incompatible change without a good enough reason.

First of all, coming back to my previous post on the subject, deciding whether or not to break the backward compatibility of an application is a project management matter. The decision that correcting a bug will break compatibility must not be left solely to the developer, sometimes the company may decide that compatibility should be preserved even when it comes to bugs.

Raymond Chen, a well-known developer at Microsoft, has some good examples on his blog, The Old New Thing, to illustrate this. Raymond actually gives us good insight into Microsoft's policy concerning the backward compatibility of its OS.

This post, from Joel Spolsky (another well-known ex-employee of MS), gives another good example, with a leap-year bug deliberately created for Excel/1-2-3 compatibility.

So, to make it short, when you correct a bug, incompatibilities can appear because:

Either the bug has been detected and a workaround has been put in place. This workaround will have to be removed once the bug is corrected.

Or this was not initially considered a bug, and the behavior is going to change unexpectedly.

For example, suppose an interface exchanges strings representing date and time, and you later discover that the time zone is omitted from the format. If someone developed a parser for these strings that never expected time zone information, their application will break when you add it. This is a semantic incompatibility, but one that is brought by a bug fix.
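That scenario can be sketched like this (the format and values are made up): the old consumer parses a local date-time, and the "fixed" producer starts appending an offset.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class DateCompat {
    // The old consumer expects strings like "2021-06-01T12:00:00" with no zone.
    static LocalDateTime parseOld(String s) {
        return LocalDateTime.parse(s, DateTimeFormatter.ISO_LOCAL_DATE_TIME);
    }

    public static void main(String[] args) {
        // Works against the original, zone-less format
        System.out.println(parseOld("2021-06-01T12:00:00"));
        try {
            // The bug-fixed producer now appends the offset: the old parser breaks
            parseOld("2021-06-01T12:00:00+02:00");
        } catch (DateTimeParseException e) {
            System.out.println("old parser breaks: " + e.getMessage());
        }
    }
}
```

Nothing in the consumer changed, yet it breaks: the incompatibility comes entirely from the producer's bug fix.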

In the case where your management decided that bug for bug compatibility was not necessary, the incompatibility and its potential impacts should be documented in the migration release notes.

In the case where you have to maintain the bug to maintain the compatibility, I recommend you subscribe to Raymond Chen’s blog or stop writing bugs.

For most software companies the ability to ship new versions of a product that will preserve clients’ data and customizations is a matter of market share. Still, this is often an afterthought and there seems to be little documentation available.

This article is the first of a series about managing backward compatibility in enterprise applications. This will not be a definitive guide, but I will try to spot the common areas where incompatibilities can appear and give guidelines about managing them.

This first post is about the project management side of backward compatibility.

One of the most important things to remember about backward compatibility is that it is mostly a matter of process and project management.

In order to find the most appropriate way of solving a compatibility issue you need to talk about it, because the solution can be driven by technical, business or project considerations. Once a solution is accepted, the reason why it has been done that way must be properly advertised (this is of utmost importance when only documentation is provided) and rolled out.

As backward compatibility is a project concern it must be:

Listed in the project risks list

Considered at the project level

Optionally considered at the product level (mostly when it has business impacts)

There are four ways to solve backward incompatibilities; they are listed from the most desirable to the one that requires the least developer work:

Ensure binary compatibility – Work is done at the development level.

Provide migration tools – Work is split between development and services but emphasis is put on development.

Provide thorough documentation of incompatibilities and ways to overcome them – Work is split between development and services but emphasis is put on services.

Reject or postpone the change – Work is then at the product management level

As with bugs, backward compatibility cannot be 100% guaranteed; the best thing a project manager can provide is a good measure of the risk for a given version.

When a new version is released, any incompatibilities that have not been foreseen, or at least documented, must be treated like any other bug and become part of the maintenance process.

In the following posts I will focus on what can make an application backward incompatible and give some guidelines in order to limit those issues and ensure binary compatibility.

Have you ever tried to write Java code with non-ASCII characters? Like having French method names?

The other day I stumbled upon Java classes written in French. Class names like “Opération”, method names like “getRéalisateur”, and embedded log messages and comments all the same.

At first you say “not common, but cool” (and you start thinking about writing code in Chinese, because your boss always wondered how we could prevent clients from decompiling our classes without using an obfuscator).

The answer is #3, because the Java IDE (Eclipse in this case) by default uses the platform encoding to save files. And those classes had been created on a Mac.

I actually had no problem reading and compiling them because I also use Eclipse on a Mac and because the Java compiler is also assuming the source files are in the platform encoding.

So what, nothing wrong then? Yeah, except the integration server is running on Ubuntu and sometimes I work on Windows as well. And on those platforms the default encoding is not MacRoman…

Something interesting is that it has always been like that! Even when you code in plain English, chances are that your IDE writes the files in the platform encoding. But nobody notices, because as long as you only use characters in the ASCII-7 range, they are encoded the same way in almost all encodings.
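A quick way to see this, using \u00e9 (“é”) as the non-ASCII character:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class EncodingDemo {
    static byte[] enc(String s, Charset cs) { return s.getBytes(cs); }

    public static void main(String[] args) {
        // A pure-ASCII identifier encodes identically in UTF-8 and Latin-1
        System.out.println(Arrays.equals(
            enc("getDirector", StandardCharsets.UTF_8),
            enc("getDirector", StandardCharsets.ISO_8859_1)));
        // ...but "é" (\u00e9) does not: 2 bytes in UTF-8, 1 byte in Latin-1
        System.out.println(enc("\u00e9", StandardCharsets.UTF_8).length);
        System.out.println(enc("\u00e9", StandardCharsets.ISO_8859_1).length);
    }
}
```

As soon as one byte sequence differs between encodings, a file saved on one platform can be misread on another.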

So what is the solution? Well it depends if you really want to code in French (or in Chinese). My advice anyway is “don’t do that” and externalize localized strings. However, if you really insist you have two solutions:

Make the whole production chain encoding-explicit: Configure your IDE to use UTF-8 and specify in your build that the Java compiler is going to deal with UTF-8 encoded files (UTF-8 is better in most cases).

Make sure you only use ASCII-7 characters in your files and replace all non-ASCII-7 characters with their \uXXXX equivalent (even in comments).

However, be aware that #1 is not always possible; you might be using processing tools that do not offer the option to use something other than the platform encoding.
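Solution #2 can be automated with a native2ascii-style pass; here is a minimal sketch (the class and method names are mine) that rewrites every non-ASCII character as its \uXXXX escape:

```java
public class AsciiEscape {
    // Replace every character outside the ASCII-7 range with \uXXXX,
    // mimicking what the JDK's native2ascii tool does.
    static String escape(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c < 128) out.append(c);
            else out.append(String.format("\\u%04x", (int) c));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("getRéalisateur")); // getR\u00e9alisateur
    }
}
```

The escaped source compiles identically under any encoding, since it only contains ASCII-7 characters.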

That’s funny, because another colleague asked us yesterday during lunch whether, instead of developing for a JEE container, it would not be better to adopt a lightweight container like Tomcat, using frameworks like Spring.

My answer was actually another question (as often): What is a lightweight container?

When frameworks like Spring and Hibernate started, their purpose was to add functionality that did not exist or was badly designed: flows, inversion of control and injection, or entity management. People were complaining about JEE and some switched to Tomcat plus Spring and Hibernate. Some of them did so because, at that time, they did not need the other JEE services.

Hanson concludes his article with the following:

When attempting to determine the server environment best suited to a particular application or system, it is helpful to break down the requirements of the system and determine which Java EE components will need to be supported.

I could not agree more with this. However, requirements evolve and people switch to new projects but they usually continue to use the same frameworks.

The result is that when the need for new services increases (transaction, security, messaging, administration) the pressure on frameworks increases and they add those services to their stack because their clients ask for it and that is fun to code.

Then, what is the difference between a JEE server and Tomcat+Spring? I mean, at what point is a lightweight container not lightweight anymore? When you add transactions? And in that case, why not use JEE? Because it is JEE and it is said to be heavyweight?

My answer is to always use JEE when it offers the services you need; if it does not, use something else. Today, if I were creating a new application, I would not use Hibernate for entity persistence, I would use EJB3.

In my previous post I was explaining how to use an Active Directory server to authenticate a user. Indeed, I was trying to make the system authenticate the user using the Windows credentials that she already entered when logging onto her workstation.

Some years ago I was working with IIS and it was only a matter of configuration of the server to enable that for browsers that were supporting the appropriate protocol (others would be using HTTP basic).
One of the advantages of that protocol is that the user’s password is never sent over the wire. I found out this protocol is named SPNEGO and is an extension to the HTTP Negotiate protocol.

Since negotiation must occur between the browser and the server, if the server does not natively implement that protocol you cannot use the standard security APIs like custom registries or JAAS.
The solution is then to disable the server standard authentication mechanism and implement a filter that will negotiate, using SPNEGO, with the browser.

In principle it looks easy, but one still needs to implement SPNEGO and bridge with Windows, because it is Windows that ultimately authenticates the user.

After some googling I found that the jCIFS library and its extension jCIFS-Ext have the necessary support to help me do the job. In fact everything is already there, even the filter: jcifs.http.AuthenticationFilter.

So first, let’s configure the security constraints for our web-app. In the web.xml we must have the following:


<security-constraint>

<web-resource-collection>

<web-resource-name>Any resource</web-resource-name>

<description>Any resource</description>

<url-pattern>/*</url-pattern>

</web-resource-collection>

<user-data-constraint>

<transport-guarantee>CONFIDENTIAL</transport-guarantee>

</user-data-constraint>

</security-constraint>

I do not define any role nor any authentication method because I do not actually want the server to do the authentication by itself. Nevertheless, I specify that I want confidentiality on those URLs.
I do that because I will configure my filter to fall-back to HTTP Basic if the browser does not support SPNEGO or HTTP Negotiate and I do not want the password to travel unencrypted on the net.

In theory this implies that if the application is not served over HTTPS there will be a problem, but I had already configured my server to serve the application over HTTPS, so I did not test this behaviour.

The second step is to configure the filter itself, the jCIFS-Ext filter has undocumented parameters so I had to go through the code to find them:


<filter>

<filter-name>AuthenticationFilter</filter-name>

<display-name>AuthenticationFilter</display-name>

<description>SPNEGO Authentication Filter</description>

<filter-class>jcifs.http.AuthenticationFilter</filter-class>

<init-param>

<param-name>jcifs.smb.client.domain</param-name>

<param-value>MyDomain</param-value>

<description>The name of the Windows domain.</description>

</init-param>

<init-param>

<param-name>jcifs.http.domainController</param-name>

<param-value>mydomain.com</param-value>

<description>The address of the Windows

domain controller.</description>

</init-param>

<init-param>

<param-name>jcifs.http.enableNegotiate</param-name>

<param-value>true</param-value>

<description>If the browser does not support SPNEGO,

fallback to HTTP Negotiate.</description>

</init-param>

<init-param>

<param-name>jcifs.http.enableBasic</param-name>

<param-value>true</param-value>

<description>If the browser does not support SPNEGO

nor HTTP Negotiate, fallback to HTTP Basic

but only if the connection is secure.</description>

</init-param>

<init-param>

<param-name>jcifs.http.insecureBasic</param-name>

<param-value>false</param-value>

<description>Never fallback to HTTP Basic when the

connection is insecure.</description>

</init-param>

<init-param>

<param-name>jcifs.http.basicRealm</param-name>

<param-value>mydomain</param-value>

<description>The name of the domain in case of

HTTP Basic authentication.

Used only for display to the user.</description>

</init-param>

</filter>

<filter-mapping>

<filter-name>AuthenticationFilter</filter-name>

<url-pattern>/*</url-pattern>

</filter-mapping>

“Et voilà”, now your application should automatically authenticate the user based on her Windows credentials. I said “should” because there are some prerequisites:

on the browser side, Windows integrated security must be enabled

on the server side, your platform must actually support Kerberos for the filter to work properly.

However, the former is a matter of configuration and the latter is a matter of slightly changing the code of the filter.

Configuring an Internet Explorer Browser

To configure an Internet Explorer browser to use Windows authentication, follow these procedures in Internet Explorer:

Configure Local Intranet Domains

In Internet Explorer, select Tools > Internet Options.

Select the Security tab.

Select Local intranet and click Sites.

In the Local intranet popup, ensure that the “Include all sites that bypass the proxy server” and “Include all local (intranet) sites not listed in other zones” options are checked.

Click Advanced.

In the Local intranet (Advanced) dialog box, add all relative domain names that will be used for the server instances participating in the SSO configuration (for example, myhost.example.com) and click OK.

Configure Intranet Authentication

Select Tools > Internet Options.

Select the Security tab.

Select Local intranet and click Custom Level…

In the Security Settings dialog box, scroll to the User Authentication section.

Select Automatic logon only in Intranet zone. This option prevents users from having to re-enter logon credentials, which is a key piece to this solution.

Click OK.

Verify the Proxy Settings (If you have a proxy server enabled)

Select Tools > Internet Options.

Select the Connections tab and click LAN Settings.

Verify that the proxy server address and port number are correct.

Click Advanced.

In the Proxy Settings dialog box, ensure that all desired domain names are entered in the Exceptions field.

Click OK to close the Proxy Settings dialog box.

Set Integrated Authentication for Internet Explorer 6.0 (In addition to the previous settings, one additional setting is required if you are running Internet Explorer 6.0)

In Internet Explorer, select Tools > Internet Options.

Select the Advanced tab.

Scroll to the Security section.

Make sure that Enable Integrated Windows Authentication option is checked and click OK.

If this option was not checked, restart the computer.

Despite all of this configuration I encountered some cases where this was not working at all in IE, and I was unable to spot the problem, so you might be falling into this category. The symptoms are that the negotiation process takes place but the browser does not answer the last challenge, and no error message is displayed at all.

Configuring a Mozilla Firefox Browser

To configure a Mozilla Firefox browser to use Windows authentication, follow these procedures in Mozilla Firefox:

Type about:config in the address bar of the browser and press return (a big list of properties should be displayed in the browser window).

Type “network” in the filter box.

Double-click on the network.automatic-ntlm-auth.trusted-uris property and enter “mydomain.com” (if there is already a value you can add a comma to separate both entries)

The value for this preference is a comma-separated list of URI fragments. This sample string shows the three legal kinds of fragments: https://, http://www.example.com, test.com

The first fragment says, “Trust all URLs with an https scheme.” The second fragment (a full URL) says, “Trust this particular web site.” The third fragment is interpreted to mean http://anything.test.com, so any web site that is a subdomain of test.com, including test.com itself, will also be trusted.

I did not encounter any problem with Firefox which is what I call a paradox…

Changing the filter to use NTLM instead of Kerberos

Actually the change must not occur in the filter but in the class jcifs.spnego.Authentication which comes with jCIFS-Ext. This class tries to determine if the system supports Kerberos but uses introspection, looking for some Java classes that enable Kerberos support in Java.
Nevertheless, those classes can be there without the actual system supporting Kerberos (which is the case where I work).

Fortunately, modifying this behaviour is not too complicated, just change line 57 of this class:

Recently I needed to authenticate the users of an intranet web application against the Active Directory server that is used to authenticate them on their Windows desktops. Here is some code I used to achieve this.

I proceeded in several steps, the first of them being the creation of a custom user registry to interface my web server with the AD server.

I was using Jetty as the web container so I had to develop an implementation of Jetty’s UserRealm but in any other web container or application things should be the same.
Mostly you need to do two things:

Once you have created the initial context, the user has been authenticated by the AD server and everything is fine (creating the initial context will throw a NamingException otherwise).

However, since you are going to send the user’s credentials over the network, you may want to have some confidence in the protocol that is used to negotiate the connection. The javax.security.sasl.qop and other properties may be set to ensure that the protocol is safe.
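A minimal sketch of that authentication step, assuming a simple LDAP bind (the URL and domain shown are placeholders, and the class and method names are mine):

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

public class AdAuthenticator {
    // Build the JNDI environment for a simple LDAP bind against AD.
    // "ldap://ad.mydomain.com:389" and "MYDOMAIN" are placeholders.
    static Hashtable<String, String> buildEnv(String url, String domain,
                                              String user, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        // Prepend the domain so the user only has to type her username
        env.put(Context.SECURITY_PRINCIPAL, domain + "\\" + user);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    // The bind itself is the authentication: AD rejects bad credentials
    // by making the context creation throw a NamingException.
    static boolean authenticate(String url, String domain,
                                String user, String password) {
        try {
            new InitialDirContext(buildEnv(url, domain, user, password)).close();
            return true;
        } catch (NamingException e) {
            return false; // bad credentials, or the server is unreachable
        }
    }
}
```

In production you would distinguish "bad credentials" from "server down" by inspecting the NamingException subtype rather than collapsing both into false.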

This code adds the domain name to the username, so that the user does not have to enter domain\username as her credentials but only her username.
You may want to force her to enter the domain or do some autodetection… as you like.

The roles that are returned are distinguished names, like cn=Joe Smith,ou=Sales,dc=mydomain,dc=com, so mapping them to simpler names may be another issue. Fortunately I did not need these roles for my application.
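If you did need simpler names, javax.naming.ldap.LdapName can extract the CN from such a DN; a sketch (the mapper class is mine):

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class RoleMapper {
    // Map a role DN like "cn=Joe Smith,ou=Sales,dc=mydomain,dc=com"
    // to its CN value; fall back to the raw string if it is not a DN.
    static String toSimpleName(String dn) {
        try {
            for (Rdn rdn : new LdapName(dn).getRdns()) {
                if ("cn".equalsIgnoreCase(rdn.getType())) {
                    return rdn.getValue().toString();
                }
            }
        } catch (InvalidNameException e) {
            // not a DN at all; keep the raw string
        }
        return dn;
    }
}
```

Note that LdapName also handles escaping in DN values, which a naive split on commas would get wrong.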

The second step for me was to actually enable single sign-on (authentication without asking for credentials).

I quickly discovered that the previous code was totally useless for that purpose. But I will keep that one for a later post ;-)
