Just a quick note on doing JSON with Spring MVC, under WebSphere 8.5.5 Traditional.

This version of WebSphere includes Jackson 1.6.2 under the covers, for JSON serialization. (Apparently in WebSphere Liberty, the included Jackson isn't available for applications to directly use, but it is in WebSphere Traditional. On Liberty, you have to deploy your own copy of Jackson anyway.)

Spring 4.x's "consumes" and "produces" attributes can automatically serialize and deserialize JSON, but apparently only if Jackson 2.x is on the classpath.

However, because Jackson 1.x and 2.x use different Java packages (org.codehaus.jackson vs. com.fasterxml.jackson), I was able to just drop Jackson 2 jars into my application (either inside the WAR or as a WAS shared library), and bingo, Spring can serialize JSON, without impacting anything else that relies on Jackson 1.x (like WebSphere's own JAX-RS support).

XJC does try to put @XmlRootElement annotation on a class that we generate from a complex type. The exact condition is somewhat ugly, but the basic idea is that if we can statically guarantee that a complex type won't be used by multiple different tag names, we put @XmlRootElement.

While that second link really contains all the crucial information on those issues, I thought I'd also post it here.

WebSphere Liberty Core(?)

First, my RAD install came with an installation of WebSphere Liberty, I think "Core" Edition. (Liberty Features compares the Editions.)

JAXB

And that didn't come with very many Features to enable. I didn't notice this until I went to generate JAXB Java classes from an XML Schema (XSD) file, where I got this error:

Errors occurred during wsgen.

java.io.IOException: Cannot run program "/opt/IBM/WebSphere/Liberty/bin/jaxb/xjc" (in directory "/opt/IBM/WebSphere/Liberty"): error=2, No such file or directory

(Interesting, didn't even know I was using wsgen, eh?)

Well, that is the correct location of my Liberty install. But, hey, there is no jaxb folder there at all. I have a couple of other xjc instances on the system, but I have no idea how to make RAD use them. My project is a Web Project, pointed at a JRE that does have xjc, and the Liberty Runtime, but apparently that's not sufficient for the RAD configuration to connect the dots.

The ILAN versions are available for free with a DeveloperWorks ID and have the caveat:

This offering is a no-cost non-supported and non-warranted version of the product.

This one let me install all kinds of Features, including ... JAXB support. Now the xjc tool is there where RAD wanted it.

"jar" access is not allowed

Except that now, generating JAXB from a Schema in RAD produced the following error:

The xjc tool returned an error:
parsing a schema...
java.lang.AssertionError: org.xml.sax.SAXParseException: Failed to read external schema document "jar:file:/opt/IBM/WebSphere/Liberty/lib/com.ibm.ws.jaxb.tools.2.2.10_1.0.19.jar!/com/sun/tools/xjc/reader/xmlschema/bindinfo/xjc.xsd", because "jar" access is not allowed.

Also, it turns out I only got this message if I elected for the code-generation process to create Serializable classes. If I didn't select that option, the generation succeeded. Weird, since the only difference I noticed - after I got it working - was the addition of "implements Serializable" to the generated classes.

In any case, this StackOverflow link pointed at a very similar problem, with a definitely non-intuitive workaround, that eventually succeeded when I found the right place for it:

For me, this meant going to the /opt/IBM/WebSphere/Liberty/bin/jaxb/xjc script (good thing it was a script and not a compiled executable), and adding the following lines, as the last step that touches JVM_ARGS before it's used:

# ... because "jar" access is not allowed"
JVM_ARGS="-Djavax.xml.accessExternalSchema=all ${JVM_ARGS}"

Update: Not just Mac + Liberty

Now back on my primary Windows 10, with WebSphere 8.5.5 Full Profile, I had the same problem occur:

The Xjc tool has completed Web service artifact generation.
Review the tool output for details, including errors and warnings.
parsing a schema...
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at com.ibm.ws.bootstrap.WSLauncher.main(WSLauncher.java:280)
Caused by: java.lang.AssertionError: org.xml.sax.SAXParseException: Failed to read external schema document "jar:file:/C:/Program Files (x86)/IBM/WebSphere/AppServer/plugins/com.ibm.jaxb.tools.jar!/com/ibm/jtc/jax/tools/xjc/reader/xmlschema/bindinfo/xjc.xsd", because "jar" access is not allowed.

Same fix/workaround, this time in C:/Program Files (x86)/IBM/WebSphere/AppServer/bin/xjc.bat, in case that helps anyone searching later.

(And yes, this seems to me like a bug in either RAD/WDT, or WebSphere, or the combination.)

I've recently been trying to get my secondary machine, a Mac, running Rational Application Developer at least close to how I have it running on my primary Windows machine. (While awaiting hardware fixes and system reload.)

Several things have not worked out-of-the-box, so I figured I ought to document the things I've had to address.

Update an existing installation

First, I already had RAD 9.6 installed and had used it a bit, so I knew it was working. It also installs a copy of IBM Installation Manager, which is how you ought to be able to update to a new fix level. I knew 9.6.1 was available, so I tried the "IBM Installation Manager" option within RAD, from the "Help" menu, but it never worked. I don't remember the exact behavior or error, but I think at one point the application just never appeared, and maybe at another it reported a generic "An error has occurred... see the log file".

Nor did running the program directly, once I located it under /Applications/IBM/InstallationManager/eclipse/IBMIM.app. I think initially it just failed to find any packages to install or update.

I then tried deleting IM, which on a Mac is not as intuitive to me as on Windows, believe it or not. (And, apparently, simply sending those folders - InstallationManager and IMShared - to the Trash is insufficient for a full uninstall. I'll return to that in a moment.)

I then tried installing a new, standalone version of IM, but that found no packages either.

Reinstall

So... I concluded I needed to uninstall everything and start from scratch with 9.6.1. However, when running that installer, the IM installation portion complains that a newer version is still installed.

@PeteKidwell0100003WU3 (hopefully the same one) gave me the tip that I needed to delete the entire /var/ibm/InstallationManager folder (as root/sudo, of course). This did allow the reinstall to succeed.

Java 6?

Now, attempting to run RAD right after installation resulted in the confusing error, "To open "Eclipse" you need to install the legacy Java SE 6 runtime".

Are you kidding me? An up-to-date version of a Java IDE surely isn't requiring an obsolete Java? Plus RAD provides its own version - Java 8 - I can see it right there in its installation directory. Plus 9.6 was running fine.

Web searches seem to indicate this is a macOS message rather than an Eclipse or RAD one, which makes sense given that it's not even trying to use RAD's copy of Java yet.

ssh -L 9043:my-websphere-server:9043 my-destination

Where my-destination and my-websphere-server might or might not be the same server, as long as my-destination can reach my-websphere-server on the port being tunneled.

I'll get a login shell to my-destination, and a tunneled port 9043 to my-websphere-server, through localhost:9043.

Dynamic Forwarding

Thanks, again, to Harley, this tip actually obviates some of the need for port tunneling. The -D option will dynamically forward connections through a local port, as a SOCKS server. Which... a browser can be configured to use, thus reaching any http URLs on the "other side" of that tunnel.

ssh -D localhost:8888 my-destination

Firefox proxy settings

I'm also currently trying out the SwitchyOmega add-on to automatically switch to this proxy configuration when hitting hosts in our private domain. Thus far, it seems to be working exactly as I'd like.

Quite a departure from my normal topics, I've had the opportunity to work on an iOS (Swift) app recently (with IBM MobileFirst Platform Foundation handling the server-side logic). Working through various articles and StackOverflow questions to try to find a "happy medium" for the level of build process we needed, I thought what I'd written up for our team could prove useful to others working through something similar.

Goal

Our basic need is a repeatable build script that we can commit to Source Control, and that can handle a few differences between "Production" and "Test" builds. We have one internal distribution mechanism, for which we produce an ipa file, and we produce an xcarchive for Apple TestFlight or the App Store.

Approach Overview

The current approach is very minimal, using only the single default Xcode Scheme, the 2 existing Debug and Release Configurations, and a new Test Configuration copied from the Release one.

A few Conditional Compilation code blocks will choose different settings or make different calls, based on which Configuration is being built.

Xcode Setup

Create a new Configuration

Under the Project's (not the Target's) Info tab, in the Configurations section, we've created an additional Test Configuration via "Duplicate Release Configuration".

Note, also, just below that section, the "Use Release for command-line builds" setting. We'll explicitly override it in our build script, but it's worth noting that it exists.

Since we're currently using CocoaPods for the Mobile First SDK, we also had to take the steps described in the article above:

Note that if you use Cocoapods then you will need to set the configurations back to none, delete the contents of the Pods folder in your project (Not the Pods project) and re-run pod install.

(I don't know what "set the configurations back to none" means. I didn't do that.)

Configuration-specific Properties

Under either the main Project or the Target (I think both work), on the Build Settings tab, near the bottom is the Swift Compiler - Custom Flags section. "Inside" that is an Active Compilation Conditions subsection. Flags defined there can later be used in Conditional Compilation blocks.

By default, the Debug entry has a DEBUG flag set.

We've added a PROD flag to the Release entry.

Note: We're not currently changing the application Bundle ID, name, or icons for the different Configurations. So we won't be able to have multiple builds installed on a device at the same time. It could be useful, but would require quite a bit more effort and complication.

Change Configurations

When you run an application from within Xcode, you get the Debug Configuration by default. If you wish to run in a different Configuration, you can change this by editing the Scheme and changing the selected Configuration for the Run "task". This is a useful way to test your Production settings from the Xcode simulator before building them to push out to a store or device.

Note: I think this setting is stored in your user-specific settings, which we've excluded from Git. So it won't need a commit or affect others.

Build Script

Build Configuration

The script currently defaults to building the Test Configuration, and you have to specify Release on the command-line if you want to build that instead. At a later phase of the project, we might default the other direction.

The script uses three different executions of the xcodebuild command:

clean

archive

-exportArchive

(No, I don't know why the first two are commands, and the latter is an option.)

All three have the -configuration option explicitly specified, to use either the Configuration name passed-in from the command-line or the default defined in the script itself. I'm not 100% certain all three of them need this, but when only the clean command had it (what the referenced article showed), the build did not do what was expected. Which makes sense. It's probably just the archive step that also needs it, but I haven't tested this.

The script also names the archive (the .xcarchive directory) MyApp-ConfigurationName if it's not the Release Configuration. e.g. MyApp-Test. But the ipa file is always named MyApp.ipa. I don't see a way to change this with this configuration. It may be using the Scheme name.

Scheme

As mentioned above, we still have a single Scheme, so that is hardcoded in the script.

Workspace

Since CocoaPods projects use an Xcode Workspace rather than a Project, we use the -workspace option.

exportOptionsPlist

Finally, the earlier -exportFormat option is now gone; in its place is -exportOptionsPlist, which uses a plist file to specify build options. That is currently the file build.plist, with the option:

method=development

I think since we only build the ipa file when we're deploying through a non-Apple mechanism, we don't need to worry about this when we just pass the xcarchive to Apple.

If we ever find a need to use different settings here, I think we'd create additional build-configuration.plist file(s) and specify the correct one in the script.

Note: It seems the only reliable way to see the options for the build plist file is to run xcodebuild -help on a Mac.

Update for Xcode 9

For Xcode 9, it seems we now also need to add a provisioningProfiles dictionary to the build plist file. This answer on Stack Overflow provided useful details and an example.

Code

Environment-specific Properties

We're using the approach of putting environment-specific properties into application plist files:

MyApp-test.plist

MyApp-production.plist

For our application, this is currently just a couple of URLs that are different in the two environments. Other items can be added as necessary.

Access to these properties is encapsulated through static methods in a simple AppProperties class, where the next technique is used.

Conditional Code

Note that we're currently only using flags, and we currently have only the DEBUG flag (created by default for the Debug Configuration) and the PROD flag available. Our current logic uses only the PROD flag, which is defined only for the Release Configuration, so we only have Production vs. non-Production cases today.

Where this is used today is in choosing which application plist to load:

Another workaround if using RAD or WDT versions 8.0.x or later is to close the Servers view, exit RAD, then launch RAD again. Once RAD is restarted, retry the action that was failing previously. This workaround prevents the WAS SSL connection from being initialized first which should prevent the WAS SSL socket factory from being set as the default. Other SSL connections will attempt to use the RAD JDK's default socket factory unless otherwise specified.

I guess RAD started and loaded up the WAS SSL support, but RAD itself isn't fully able to use that.

Note also the warning to "reverse" that activity if you want to run WAS again under RAD.

SoapUI can do Groovy scripting, and I've been using that to get today's date into some fields that require a date.

e.g.

${=def now = new Date();now.format("yyyy-MM-dd")}

I'd been copying that to each request, which works fine but is a little longer than I'd like. There's a way to define project-level variables, which can also use scripting, and then reference those from messages. That makes for at least slightly smaller field content, like this:

<ver:VerificationDate>${#Project#today}</ver:VerificationDate>

Where ${#Project#variable-name} is how you reference project-level variables. See Property Expansion.

To set up project-level variables (called "properties"), double-click on the project name/folder, then press the plus symbol in the Properties list at the bottom:

I really don't want to be hardcoding Database configuration in my adapters. Or repeating it for multiple adapters that share the same Database access. (Not to mention not wanting to require the referenced Apache BasicDataSource class and libraries either.)

And since MobileFirst Foundation runs on a Java EE server anyway, why would I not use DataSources configured as server Resources?

Resource URL

You only need one message for this, which I've named "Request Token". It looks like this:

For the Basic Auth Username and Password, specify a user you've configured in the MFP Console. (The referenced article mentions a default user/pass that exists in a development environment, but my test environment is a separate server, so I had to create a user for this purpose.)

Request Parameters

grant_type : client_credentials

scope : Use the scope protecting the resource.
If you don’t use a scope to protect your resource, use an empty string

Response

When you run this, the response will be JSON with a long token in the access_token field:

Add the Token to Adapter Requests

Now you need to add that token value as a custom HTTP header named "Authorization", not as an HTTP Basic Auth header.

But you don't want to have to add that header individually to every message, and you also don't want to have to update it all over the place when it expires and you have to generate a new one.

SoapUI Project Properties

So I added a Bearer property at the top level of my SoapUI project:

Whenever I need to generate a new Token, I update the value here.

Add the Authorization Header to Messages

Now, for each of my protected Resources - or even at a parent Resource above them, if that makes sense - I added a "Header" type request parameter, in the Resource itself, rather than in individual HTTP Methods or Messages. In this case, at the highlighted level:

Add a HEADER type parameter named Authorization, with the value:

Bearer ${#Project#Bearer}

to pull the value from the Project Property above.

Now all Messages under that Resource will automatically add that Header to their Requests.

(Any Messages which were created before you added this parameter, though, might not automatically pick it up.)

Will be deserialized by the Wink client Resource methods into my Map, to be accessed like this:

String id = myJsonResponseBean.getProperties().get("id");

Notes

Apparently only one JSON name/value field can be mapped in this way. But this is sufficient for my needs.

Regrettably, this is yet another dependency on proprietary APIs. I usually try to avoid those at all costs, but this JSON Client under WebSphere 8.5.5 now has at least 3 of them. There doesn't appear to be an alternative at this point.

Surprisingly, my earlier technique of adding the @XmlType and @JsonIgnoreProperties(ignoreUnknown = true) annotations to a JsonBase class did not work here. I had to explicitly add those annotations to this POJO class.

REST Client

This version of WebSphere supports JAX-RS 1.1, which doesn't include any REST client support. JAX-RS 2.0 apparently does, but that's not yet available here on WAS 8.5.5 Full Profile. It is on the Liberty Profile, but that's not what we're using.

However, WebSphere's JAX-RS server support uses Apache Wink, which does include REST client capability. It is, in fact, the REST client mechanism recommended by WebSphere's official documentation:

The Wink Javadoc shows those unchecked Exceptions for one form of the operations, like post().

Another form of each operation does not throw that Exception, but allows you to manually check the ClientResponse yourself. This is how the example in the above WAS documentation does things. (Note that even those methods do seem to throw ClientRuntimeException for things like communication errors, despite the javadoc not indicating that.)

POST

Speaking of POST, here's what that looks like. With a JSON payload as well.
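A sketch of a Wink client POST with a JSON payload, assuming the standard Wink Resource operations (the URL and the MyRequest/MyJsonDataType bean names are placeholders for your own JAXB-annotated POJOs):

```java
import javax.ws.rs.core.MediaType;

import org.apache.wink.client.ClientResponse;
import org.apache.wink.client.Resource;
import org.apache.wink.client.RestClient;

public MyJsonDataType postJson(MyRequest requestBean) {
    RestClient client = new RestClient();
    Resource resource = client.resource("http://host:port/app/rest/things");

    // The request bean is serialized to JSON on the way out; the response
    // JSON is deserialized into MyJsonDataType on the way back.
    ClientResponse response = resource
            .contentType(MediaType.APPLICATION_JSON)
            .accept(MediaType.APPLICATION_JSON)
            .post(requestBean);

    return response.getEntity(MyJsonDataType.class);
}
```

This uses the ClientResponse form of post(), so you can check the HTTP status yourself rather than catching the unchecked exceptions mentioned above.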

JSON Beans and Serializing/Deserializing

So what allows automatically serializing a request POJO into JSON or deserializing a response JSON into a POJO? I had some naive hope that perhaps just being a JavaBean would be sufficient. But alas, it is not.

Does that mean writing a custom serializer? Thankfully, no. In the original WebSphere REST Client article linked above, we find this statement:

Instead of calling the response.getEntity(String.class) object with String.class file, you can use any other class that has a valid javax.ws.rs.ext.MessageBodyReader object, such as a JAXB annotated class, a byte[], or a custom class that has a custom entity provider.

JAXB annotated class, that's super simple. (Still feels a bit weird that JAXB annotations are used for JSON serialization in WAS. Don't know if other servers do this as well. But easy.) If I do this in a base class, the subclasses don't have to also add the annotation:

import javax.xml.bind.annotation.XmlType;

@XmlType
public class JsonBase {}

Unwanted Fields

Hmmm... do I have to declare in the POJO every field and sub-object the JSON could have? That would be really frustrating and difficult to maintain. Maybe I'd rather just parse the fields I care about after all.

Yeah, look. If I don't declare every field, I get errors like this on deserialization:

org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "start" (Class MyJsonDataType), not marked as ignorable

We've recently had a need to accept files submitted by a partner and perform some processing on them, with the partner returning later to retrieve the results. Rather than the "old", standard approach of using something like sftp and regularly kicking off a cron job to look for input files, this seemed like a good case for a Web Service.

And IMO REST style services are simpler to produce (particularly with standard APIs like JAX-RS), consume, and test, and further are a good fit for this particular scenario. (See the excellent book RESTful Web Services, now available for free in electronic form, or its successor, which I didn't realize existed.)

Design

I decided to support the URL "/batch", as an HTTP POST of standard "multipart/form-data" media type, with a service-generated "Batch ID" (among other things) returned to the caller, and the URL "/batch/{batchId}" as an HTTP GET to obtain the results. (Both very easy to test directly from a browser, BTW. But also do seem the correct fit for the REST philosophy.)

I should say, I decided to propose this approach and then go see if I could make it happen in a straightforward way. I wasn't sure if submitting files this way was normal enough to be easily supported by JAX-RS and/or WebSphere.

Implementation

It turns out this is supported by JAX-RS in WebSphere, with a few different mechanisms. Right there in the official IBM Knowledge Center documentation, (almost) everything I needed to know:

Generic JAX-RS approach

I generally prefer to stick to the specs, not using any server-specific or implementation-specific code. WebSphere supports this by simply annotating a java.io.File parameter with @FormParam. Like everything else I've tried in JAX-RS, it's elegantly simple. (I admit I'm assuming this construct is universally supported by JEE servers, but I don't know that for certain.)
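A minimal sketch of that approach, assuming a multipart form field named "file" (the class and method names here are illustrative, not from the actual service):

```java
import java.io.File;

import javax.ws.rs.Consumes;
import javax.ws.rs.FormParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;

@Path("/batch")
public class BatchResource {

    @POST
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    public String submitBatch(@FormParam("file") File uploadedFile) {
        // By the time we get here, the server has already written the
        // uploaded part to a temp file with a generated name.
        return "received " + uploadedFile.length() + " bytes";
    }
}
```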

With WebSphere, this uploads the file to a temporary location, and gives it a generated file name. (On my Windows development system, the location seems to be my Windows %TEMP% directory.)

However, I'd prefer to explicitly set the upload directory (so I don't have to copy the file to a more "permanent" location later), and I really want the original file's name (among other things, so I can detect when the same file is inadvertently submitted again).

Because of these requirements, I needed to look for another approach.

WebSphere-specific (Apache Wink) approach

It turns out the last example on that WebSphere Knowledge Center page is what I needed. WebSphere's JAX-RS support uses Apache Wink under the covers (although I'm not sure which Wink version goes with which WebSphere version). Regrettably, this approach is dependent on specific Apache Wink classes. But I really need this capability in order to give the desired user experience, so I'll have to live with the platform-dependence. (Unsurprisingly, both the WebSphere 8.5.5 full and Liberty Profiles seem to work fine with it.)

Granted, this service won't normally be called from a browser and HTML page, but during testing, it sometimes is. And even there, using a different browser is acceptable. But still, if it wasn't too much work, I'd prefer to handle this more robustly "just in case".

.NET client error on the GET request

Our partner appears to be using a C# .NET client, and even after he had success submitting a file to our POST URL, he was getting this error upon calling our GET URL from HttpWebRequest.GetResponse():

JAXB and Boolean

It turns out that by default, JAXB 2.2, the level in Java 7, generates the wrong kinds of getters for Boolean (wrapper object) fields. It generates isXXX() getters instead of getXXX() getters. And according to the JavaBeans spec, only boolean primitives are supposed to do that.

The fix causes getXXX() methods to be generated instead of isXXX() methods. (I found myself thinking it would be nice if both styles of getters were produced, but I admit I don't know whether that would violate some other specification or convention.)
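The JavaBeans Introspector in the plain JDK demonstrates the consequence: an is-style getter on a Boolean wrapper simply isn't recognized as the property's read method. (The bean classes below are made up for illustration.)

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class BooleanGetterDemo {

    // The JAXB 2.2 style: an is-style getter on the Boolean wrapper.
    public static class Jaxb22Style {
        private Boolean active;
        public Boolean isActive() { return active; }
        public void setActive(Boolean active) { this.active = active; }
    }

    // What the JavaBeans spec expects for a wrapper type.
    public static class SpecCompliant {
        private Boolean active;
        public Boolean getActive() { return active; }
        public void setActive(Boolean active) { this.active = active; }
    }

    public static PropertyDescriptor find(Class<?> beanClass, String name) {
        try {
            for (PropertyDescriptor pd
                    : Introspector.getBeanInfo(beanClass).getPropertyDescriptors()) {
                if (pd.getName().equals(name)) {
                    return pd;
                }
            }
            return null;
        } catch (IntrospectionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // isActive() is not picked up as the getter: the read method is null
        // (the property is only discovered via its setter).
        System.out.println(find(Jaxb22Style.class, "active").getReadMethod());
        // getActive() is recognized normally.
        System.out.println(find(SpecCompliant.class, "active").getReadMethod());
    }
}
```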

xs:boolean with minOccurs="0"

Related is the fact that, with this version of JAXB, XML schema types that have minOccurs=0 get generated as "wrapper" classes (e.g. Boolean) instead of primitives (boolean). This is so that the value can be null if the XML element is missing or empty.

Based on some non-null-safe legacy code that I'm migrating, it *seems* that earlier versions of JAXB behaved differently, at least under WebSphere 6.1: either using primitives or otherwise returning a default value on empty.
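A quick, runnable illustration of why this matters for legacy code (getApproved is a made-up stand-in for a JAXB-generated getter):

```java
public class OptionalBooleanDemo {

    // Stand-in for a JAXB 2.2-generated getter for an optional xs:boolean:
    // returns null when the element was missing or empty.
    public static Boolean getApproved() {
        return null;
    }

    public static void main(String[] args) {
        try {
            // Legacy code written against a primitive auto-unboxes the null.
            if (getApproved()) {
                System.out.println("approved");
            }
        } catch (NullPointerException e) {
            System.out.println("NullPointerException from unboxing null");
        }

        // Null-safe alternatives that treat "missing" as false:
        System.out.println(Boolean.TRUE.equals(getApproved()));
        System.out.println(getApproved() != null && getApproved());
    }
}
```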

Bonus topic

A different issue we saw with JAX-WS/JAXB under WAS 8.5.5/Java 7 was that xs:string values with xsi:nil="true" were sometimes returning null rather than the empty string. (Specifically, on the second and subsequent unmarshals. See the forum thread.)

WebSphere 8.5.5 has an optimization option, com.ibm.xml.xlxp.jaxb.opti.level, whose default value exhibits this behavior, but which can be set to a "disabled" value that reverts to the earlier behavior.

Stumbled across this write-up I'd done years ago for my colleagues. Nothing revolutionary about it, but I thought I'd share in case others who see this are unaware of the capability.

Firefox stores user-specific data in Profiles. Bookmarks, settings, add-ons, etc. By default, Firefox creates and uses just one Profile, and you might not even know of its existence. But you can create and explicitly run more than one Profile, each with different configurations, and each running in a separate process. (On your traditional workstation, that is. Not, as far as I know, on a "mobile" device.) I typically use 2 or 3 Profiles for different purposes.

"Why?", you might ask. In short, some of my add-ons are specifically for development-oriented tasks, and aren't useful or necessary for my normal browsing activities. And since each such add-on increases Firefox's memory usage and adds its own risks of bugs and memory leaks, I decided to try a whole separate Profile for development tasks.

(Another, more recent option, is the Firefox Developer Edition, which installs a newer, test version of Firefox as an additional application, includes some developer-oriented settings, and also uses a separate Profile.)

A side-benefit is the ability to manage my active list of "browsing" pages separately from my development pages, restarting the browser and restoring the tabs, etc.

Next, how do you use multiple profiles? I won't attempt to duplicate the details described thoroughly in these articles, but here's a quick summary:

Run the firefox executable with the command-line options "-P -no-remote". This opens the Profile Manager and tells Firefox not to attempt to connect to an already-running copy.

Create a new profile. I left the "Don't ask at startup" box checked so that just running Firefox by itself opens my default profile without asking.

Create a custom shortcut to launch your new profile. Here's the text for my "Developer" profile:

In the screenshot, you can also see my "ESR" profiles for the Extended Support Release version of Firefox that I have installed alongside the current, Release version. This is the enterprise version of Firefox that my employer, IBM, officially supports for employees.

I also have some "Clean" profiles that I can use when I need to test how a site is behaving on a "stock" Firefox installation, without any customizations or add-ons.

Dependency Injection (DI) is a really, really useful pattern, IMO. I imagine most folks get that already. The Spring framework was my introduction to DI, and I still use and like Spring for many projects, in many ways.

However, with object lifecycles now managed by JEE containers, like for JAX-WS or JAX-RS services, sometimes using Spring-managed beans is not as direct as I'd like. JEE 6's Contexts and Dependency Injection (CDI) has been suggested to me multiple times as an alternative, built-into the JEE server and directly tied to its object lifecycles.

Most of my projects use Spring for other things as well, so I hadn't been willing to look down the CDI path just yet. However, I'm now working on some SOAP Web Service projects that don't yet use Spring, so I thought I'd give CDI a look.

What follows is the basics of configuring a "web" application that is both a JAX-WS service and a JAX-WS client to one or more other services. I need to inject both some interface implementation classes and some JEE Resource References into my service. In case documenting this particular example helps anyone else. Or just helps me clarify what I did and remember it in the future :-)

@Inject

In my @WebService JAX-WS implementation class (or any other "service" / "servlet" class managed by the JEE container), I need to make use of a "business facade" interface implementation singleton. The JEE 6 annotation @Inject enables this.

@Inject
private BusinessFacade facade;

If I only have one such BusinessFacade implementation class defined in the application, this is sufficient to locate it. (Classes don't have to be annotated in order to have their instance(s) injectable as "Producers". They just have to meet some conventions: see About CDI Managed Beans.)

If you have multiple implementation classes, you'll need to qualify the @Inject, e.g. with @Named.

@ApplicationScoped

By default, the "scope" of a Managed Bean is "Dependent". That is, its lifecycle matches the lifecycle of the bean into which it is injected. In this case, the lifecycle of my @WebService instance. Which, at least under WebSphere 8.5.5, is created and destroyed with each HTTP request.

Since I want this BusinessFacade instance to be a singleton, only initialized once for the whole application, I need to tell my implementation class that I want instances of it to have "Application" scope. I do that by annotating that class with @ApplicationScoped.
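As a sketch (BusinessFacadeImpl is a made-up name for the implementation class; BusinessFacade is the injected interface from above):

```java
import javax.enterprise.context.ApplicationScoped;

// One instance for the entire application: the container creates it
// once and injects that same instance into every @Inject point.
@ApplicationScoped
public class BusinessFacadeImpl implements BusinessFacade {
    // ...
}
```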

To get that injected into my hand-written caller class, I use @Resource, specifying the same name used in <res-ref-name>:

@Resource(name="serviceLocation")
private URL serviceEndpointUrl;

@PostConstruct

Now, specific to JAX-WS client wrappers, I want to override the endpoint URL that was specified in the WSDL used to create the client code. The approach that seems cleanest to me is using the BindingProvider interface.

The @PostConstruct annotation indicates a method (named whatever you want) to be called after a class instance is created. I'll use this to bind a (singleton) JAX-WS client wrapper (ServicePortType in the example) to the endpoint URL specified in the above injected @Resource:
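A sketch of what that can look like, assuming Service and ServicePortType are the WSDL-generated classes (the wrapper class name, method names, and the port-getter are illustrative):

```java
import java.net.URL;

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;
import javax.xml.ws.BindingProvider;

@ApplicationScoped
public class ServiceClientWrapper {

    @Resource(name = "serviceLocation")
    private URL serviceEndpointUrl;

    private ServicePortType port;

    @PostConstruct
    public void initEndpoint() {
        port = new Service().getServicePort();
        // Point the generated client at the environment-specific endpoint
        // instead of the URL baked into the WSDL.
        ((BindingProvider) port).getRequestContext().put(
                BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
                serviceEndpointUrl.toString());
    }
}
```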

(Where Service and ServicePortType match whatever was generated from your WSDL.)

Note that I've annotated this particular class with @ApplicationScoped, so this @PostConstruct will only be called once. Another potential pattern would be to have this class not be a singleton but have the @PostConstruct method lazy-initialize and reuse a static ServicePortType instance.

beans.xml

Finally, logistically, your application must contain a beans.xml file in its WEB-INF directory in order for WebSphere to know to scan it for CDI annotations. This file can be completely empty.
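If you prefer a well-formed file over a completely empty one, a minimal CDI 1.0 beans.xml looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                           http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
</beans>
```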

Notes

I think technically some of these annotations aren't part of the CDI spec. But they all work in conjunction with each other to allow me to do simple DI within my JEE container, without additional 3rd-party dependencies.

For me, one of the downsides of annotation-based injection in general is that, best I can tell, if I want to switch between two interface implementations, say a mock implementation and a real one, I have to change the annotation and recompile the code. With Spring and XML configuration, I change an external file and restart the application. (I agree with this StackOverflow question.)

Another downside of annotation-based "coding/wiring", as opposed to configuration file wiring, is that you have to look all over the place to see the whole structure of the application. Being able to look in one (or a few) file(s) is useful at times.

(I'm assuming this is the same in stock Eclipse; I don't have it installed right now to confirm.)

If you have any source code that is generated by tooling (say, JAX-WS proxy code), you might have compiler warnings for things that are not considered best-practice. (Maybe they were fine in an earlier version of Java that the tooling still supports but are redundant in a later version.)

If you're like me and you prefer to have clean packages & classes in your IDE, here's a small tip that I somehow missed until recently.

If you place your "generated" source code in a separate source folder, which I like for ease of build process as well as organizational utility, you can also tell RAD to ignore compiler warnings for that source folder.

To do this, right-click on the source folder and go to Properties. Under "Java Compiler", check the "Ignore optional compile problems" setting, and you're done.

To me, this seems a very wise thing to do for anyone who manages multiple environments. (We've taken to actually styling our Development and Test end-user websites in a similar manner so people don't accidentally perform testing in the wrong environment.)

WebSphere's built-in solution

Note also another useful answer there that the WAS console does have a built-in mechanism for adding a custom text string to the banner. You can set this custom string in the System Administration > Console Identity > Custom identity string field.

Something more drastic

But the Stylish approach allows you to make some more obvious, drastic visual difference.

This script styles the #ibm-banner-main element by adding a background color to it. The effect looks like this:

(Where the "*CUST*" string is the aforementioned "Custom identity string".)

Copied from my StackOverflow answer, the content of the Stylish script would be:
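A sketch of such a script (the host, port, and color here are placeholders; WebSphere's secure admin console is commonly at port 9043):

```css
@-moz-document url-prefix("https://your-server:9043/ibm/console") {
    #ibm-banner-main {
        background-color: #ff8888 !important;
    }
}
```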

Replace 'your-server' and potentially the 'https' and port, as appropriate.

You can list multiple @-moz-document sections if you want a single script for different environments (Development, Test, Production, etc.)

Which element to style

The way I determined which ID or class to style was with my browser's Developer Tools "Inspect Element" feature. Using the same investigative technique, you could discover other elements of the page to style instead or in addition.

We wanted to capture some representative SOAP messages from our JAX-WS client Web Application (so that we could mock up a test service using SoapUI), and as it happens this is very easy to do with WebSphere trace settings.

We frequently need to confirm connectivity between systems on particular ports.

telnet

The simplest way to do this is with the telnet command. The syntax is:

telnet <server> <port>

Failure

One kind of failure looks like this:

telnet <server> <port>
Trying...
telnet: connect: Connection refused

This typically means that the connection reached the destination server, but nothing was listening on that port. It's also possible that it means some firewall between the two systems is flat-out rejecting the request.

Another kind of failure looks like this:

telnet <server> <port>
Trying...

... after a notable delay, eventually you'll get ...

telnet: connect: Connection timed out

This means a firewall isn't letting the connection through, or possibly that the remote system isn't routing correctly back to the originating system.

Success

A successful connection looks something like this:

telnet <server> <port>
Trying...
Connected to <server>.
Escape character is '^]'.

On any system that has telnet installed, this seems to be the simplest approach.

ssh

Some Linuxes, including at least RHEL, apparently do not have telnet installed by default, presumably because it's insecure to log in with it. Regrettably, that also means you can't perform the above connectivity test, which is not itself insecure.

If you're unable or not allowed to install the telnet client, the ssh command offers a similar way to attempt a connection on a particular port. Regrettably, its success/failure indications are less clear (to me). The syntax is:

ssh <server> -p <port>

Failure

One kind of failure is an immediate rejection, typically a message like:

ssh: connect to host <server> port <port>: Connection refused

I believe this can happen either when a firewall is rejecting the connection or when the connected server is not listening on the requested port.

Another kind of failure looks like this:

ssh <server> -p <port>

... Followed by just sitting there doing nothing.

I believe this means some firewall isn't letting you through but isn't actively rejecting the connection either. We saw this when the first firewall out of our origination system was allowing us through but the second one into our destination system was not.

Success

Regrettably, best I can tell, Success looks exactly like the second failure scenario. That is,

ssh <server> -p <port>

... Followed by just sitting there doing nothing.

You're connected and can type (appropriate) commands to the connected system, but there's no indication that you're connected.

Thus, I really don't like using ssh to test connectivity. It can prove immediate problems via the first kind of failure, but when that doesn't occur, it doesn't give me confidence that the connection truly worked.

openssl

This seems a better alternative to ssh, but it's more cryptic to remember. Its syntax is:

openssl s_client -connect <server>:<port>

There are times when we'd like to place injected Spring beans onto a scope where they can be accessed directly by JSTL EL expressions (${variable}). Perhaps URLs from JEE Resource References to be used in hrefs.

Spring has a class that enables us to do this directly from its configuration file, ServletContextAttributeExporter. This class will place beans on "Web Application Scope", also known as "Servlet Context".

It has an attribute that takes a Map of keys to value objects, into which you can place any Spring-defined beans you want, under any key names.
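A sketch of the bean definition (the key and bean names here are illustrative):

```xml
<bean class="org.springframework.web.context.support.ServletContextAttributeExporter">
    <property name="attributes">
        <map>
            <!-- Exposes the Spring bean "serviceUrl" on Servlet Context scope -->
            <entry key="serviceUrl" value-ref="serviceUrl"/>
        </map>
    </property>
</bean>
```

A JSP could then reference it directly as ${serviceUrl}.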

As a follow-up to Part 1, here are some additional considerations for generating JAX-WS code.

wsdlLocation

The wsdlLocation property for wsimport (whether from the command-line or from Ant) is placed in the generated JAX-WS @WebServiceClient class. It's placed both in that class annotation and in the static initializer block.

If you do not specify this option, the generated code will reference the absolute "file:" path to the .wsdl file on the local machine, which obviously won't work when deployed anywhere else. Alternative ways to deal with this are:

Use an Ant <replace> task to search and replace the generated code with something that does work. This was an approach we used in the past but is no longer necessary.

Set wsdlLocation to a relative or absolute path-only value, in which case the generated code will look for the .wsdl file on the application's classpath. Thus, the file will have to be stored/copied/deployed into a directory in the classpath. (And note that WEB-INF is not on the classpath, so where we had been storing our wsdl files, /WEB-INF/wsdl/file.wsdl would not be found.)

Set wsdlLocation to a full file:/// URL that starts at the root of the WAR file. That is, "file:///WEB-INF/wsdl/file.wsdl" (note the 3 slashes after file:). To my surprise, this works. I'm still looking for a reference that officially documents that a URL("file:///...") resolved from within a JEE Web Application looks at the root of the WAR.

Don't use the client class' default constructor, but rather use the one that accepts a URL containing the deployed .wsdl file's location.

Option 3 seems to me the cleanest and simplest with our current projects' directory conventions, so this is what I plan to use. For example:

wsdlLocation="file:///WEB-INF/wsdl/sampleService.wsdl"
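In an Ant script, that might look like the following (the taskdef classpath and file locations are assumptions for illustration):

```xml
<taskdef name="wsimport" classname="com.sun.tools.ws.ant.WsImport"
         classpath="${jaxws.tools.classpath}"/>

<wsimport wsdl="WebContent/WEB-INF/wsdl/sampleService.wsdl"
          wsdlLocation="file:///WEB-INF/wsdl/sampleService.wsdl"
          sourcedestdir="generated-src"
          keep="true"/>
```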

Client vs. Server

wsimport will generate:

A Java interface describing the Web Service, annotated with @WebService. This interface will take the name of the <wsdl:portType> element from the WSDL.

A client proxy class that extends javax.xml.ws.Service, annotated with @WebServiceClient. This class will take the name of the <wsdl:service> element from the WSDL. Contrary to what you might expect from the base class name, this is indeed a client proxy, not a service.

Of these, item 2 is not necessary for a JAX-WS service. In fact, the existence of a client class could be confusing.

An alternative to using wsimport for the service code is to generate it with RAD tooling, which, if you specify that you're creating a service, will not create the client class. This will also create a "stub" service implementation class, which wsimport does not do.

The generated service implementation class will be annotated with @WebService with name and targetNamespace attributes that match those of the generated interface described in item 1. Additional attributes for the serviceName and portName will be added as well (matching the <wsdl:service> and <wsdl:binding> elements, respectively).
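A hand-written skeleton matching that description might look like this (all names are illustrative, following the earlier sampleService example; they must match the corresponding elements in your WSDL):

```java
import javax.jws.WebService;

@WebService(
        name = "SampleServicePortType",       // matches the generated interface
        targetNamespace = "http://example.com/sample",
        serviceName = "SampleService",        // matches <wsdl:service>
        portName = "SampleServicePort",       // matches the binding's port
        endpointInterface = "com.example.sample.SampleServicePortType")
public class SampleServiceImpl implements SampleServicePortType {
    // Calls to business code are added manually here, so this class
    // should not be overwritten by future code generation.
}
```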

However, relying on RAD tooling rather than scripting may be considered to reduce the repeatability of future builds, so it might be preferable to use the wsimport approach, delete the generated client proxy, and manually create the @WebService class with the proper annotation attributes and proper methods.

In either case, the @WebService implementation class will have actual calls to business code manually added to it, so once it's generated it should not be overwritten by any future process. Thus, perhaps the better approach is to initially create the service with RAD tooling, then subsequently use wsimport if WSDL changes require code to be regenerated.

Generated Code and Source Control

An earlier convention on our project was to generate the JAX-WS client code as part of the normal build process, every time the application is built. This is a reasonable, "ideal" approach since none of this code is "source" in the conventional sense. That is, it doesn't need to be tracked for human edits or backed up for recovery purposes.

However, since theoretically behaviors could change with newer code generations, it does seem safer to version-control this code. This would also allow easy recovery of previous versions of generated code if that became necessary.

Doing so would also

reduce build times (although perhaps not significantly)

ensure that new developers immediately have a fully compilable set of source code for a project.

Finally, even with version-controlled generated code, having scripted methods to generate the code is still useful. This ensures that code generation options are documented and repeatable. Thus, we will be creating Ant targets even if they're not used as part of the default target and automated build process.