Since this had the symptoms of a typical classpath problem, I started there. I had to remember that WebSphere's equivalent of the -verbose:class JVM argument is Application servers > Java and Process Management > Process definition > Java Virtual Machine – Verbose class loading. I also did a quick check to see what I was using in other environments:
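That quick check can be as simple as asking the class loader where it finds a class. Here's a small stand-alone sketch of the idea (the XmlSchemaCollection path matches the case at hand; swap in whatever class you're tracking down):

```java
// Ask the class loader which JAR (or other location) a class is loaded from.
public class Whence {

    public static String whence(String resource) {
        // resource path form, e.g. "org/apache/ws/commons/schema/XmlSchemaCollection.class"
        java.net.URL url = Whence.class.getClassLoader().getResource(resource);
        return url == null ? "not found" : url.toString();
    }

    public static void main(String[] args) {
        System.out.println(whence("org/apache/ws/commons/schema/XmlSchemaCollection.class"));
        System.out.println(whence("java/lang/String.class")); // sanity check: the JDK itself
    }
}
```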

This revealed that the obsolete XmlSchemaCollection class came from WebSphere’s old version of org.apache.axis2.jar in …\IBM\WebSphere\AppServer\plugins. In spite of the “plugin” name, WebSphere is hopelessly entangled with Axis2, so removing it was not an option. Instead, I preempted WebSphere by providing my own xmlschema-core-2.0.2.jar, making it part of my distribution by adding it to the pom.xml:
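The dependency, using the coordinates XmlSchema 2.x is published under:

```xml
<dependency>
    <groupId>org.apache.ws.xmlschema</groupId>
    <artifactId>xmlschema-core</artifactId>
    <version>2.0.2</version>
</dependency>
```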

Finally, I set “Classes loaded with local class loader first (parent last)” in WebSphere’s class loader settings for my app. In some cases, I had to also place xmlschema-core-2.0.2.jar in a shared library folder.

These extra steps are a bit annoying, but I suppose that’s the price paid for using a Java EE container that lags a bit behind on open source frameworks.

Since I work on a few apps that use the DB2 JDBC Type 4 drivers, work-arounds for common issues have become familiar. But when the corrupted DB2 JAR issue resurfaced today, I realized that I had not posted about it. Now is as good a time as any to make up for that neglect…

The problem is that for DB2 LUW versions between 9 and 9.5, IBM published db2jcc.jar files with corrupt classes. In some Java EE environments, the container doesn’t fully scan JARs, so it doesn’t really matter if an unused class is wonky. But many containers do scan, causing exceptions like the following:

SEVERE: Unable to process Jar entry [COM/ibm/db2os390/sqlj/custom/DB2SQLJCustomizer.class] from Jar [jar:file:…lib/db2jcc.jar!/] for annotations
org.apache.tomcat.util.bcel.classfile.ClassFormatException: null is not a Java .class file
at org.apache.tomcat.util.bcel.classfile.ClassParser.readID(ClassParser.java:238)
at org.apache.tomcat.util.bcel.classfile.ClassParser.parse(ClassParser.java:114)

I’ve enjoyed Maven’s maven-eclipse-plugin as a handy little tool for creating Eclipse projects from Maven POMs. But I was recently lured into trying m2eclipse (a.k.a. m2e) because of its feature set. After all, common Maven tasks (running builds, editing POMs, adding dependencies, updating repos, etc.) can certainly benefit from some tooling and automation.

So I installed the plug-in into Eclipse and then ran the obligatory Configure -> Convert to Maven Project. Imagine my surprise when I was immediately rewarded with the exception:

"Updating Maven Project". Unsupported IClasspathEntry kind=4

Turns out, this is a known bug due to the fact that m2e and maven-eclipse-plugin use two different approaches for classpathentry values in .classpath files. If you try to migrate a project created with eclipse:eclipse (like mine and countless others were), you get this error. M2e went so far as to add this comment to my .project file:

NO_M2ECLIPSE_SUPPORT: Project files created with the maven-eclipse-plugin are not supported in M2Eclipse.

Can’t we all just get along? Can’t maven plugin developers agree on a standard for this fairly common and straightforward requirement?

Since interoperability with multiple environments and the command-line are important to me, I decided to just ditch m2e. I’m simply not willing to lock my project into this plug-in, closing the other (more standard) options. If m2e adds compatibility in a future release, I’ll try it again. And if not? Nevermore.

Since security is king in my corp-rat world, standards dictate that my public web services be accessed via mutual authentication SSL. The extra steps this handshake requires can be tedious: exchanging certs, building keystores, configuring connections, updating encryption JARs, etc. So when helping developers of a third party app call in, it’s useful to provide a standard tool as a non-proprietary point of reference.

This week I decided to use soapUI to demonstrate calls into my web services over two-way SSL. The last time I did something like this, I used keytool and openssl to build keystores and convert key formats. But this go ’round I stumbled across this most excellent post which recommends the user-friendly Portecle tool, and steps through the soapUI setup.

Just a few tips to add:

SoapUI’s GUI-accessible logs (soapUI log, http log, SSL Info, etc.) are helpful for diagnosing common problems, but sometimes you have to view content in bin\soapui-errors.log and error.log. Take a peek there if other diags aren’t helpful.

SoapUI doesn’t show full details of the server/client key exchange. You can get more detailed traces with a simple curl -v or curl --trace; for example:
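Something like the following (the endpoint URL and key file names are placeholders for your own):

```shell
# -v prints the certificate exchange as it happens; --cacert trusts the server's
# CA, while --cert/--key present the client's credentials for mutual auth.
curl -v \
  --cacert server-ca.pem \
  --cert client-cert.pem --key client-key.pem \
  https://example.com/myservice/endpoint

# For byte-level handshake detail, written to a file:
curl --trace-ascii ssl-trace.txt \
  --cacert server-ca.pem \
  --cert client-cert.pem --key client-key.pem \
  https://example.com/myservice/endpoint
```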

Although I’m a perennial test-driven development (TDD) wonk, I’ve been surprised by the recent interest in my xUnits, many of which are so pedestrian I’ve completely forgotten about them. After all, once the code is written and shipped, you can often ignore unit tests as long as they pass on builds and you aren’t updating the code under test (refactoring, extending, whatever). Along with that interest has come discussions of mock frameworks.

Careful… heavy reliance on mocks can encourage bad practice. Classes under test should be so cohesive and decoupled they can be tested independently with little scaffolding. And heavy use of JUnit for integration tests is a misuse of the framework.

But we all do it. You’re working on those top service-layer classes and you want the benefits of TDD there, too. They use tons of external resources (databases, web services, files, etc.) that just aren’t there in the test runner’s isolated environment. So you mock it up, and you want the mocks to be good enough to be meaningful. Mocks can be fragile over time, so you should also provide a way out if the mocks fail but the class under test is fine. You don’t want future maintainers wasting time propping up old mocks.

So how to balance all that? Here’s a quick example to illustrate a few techniques.
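Here's the example — a trimmed-down sketch rather than a real project (the service interface, its impl, and the JDBC URL are all stand-ins), with // #x markers for the discussion that follows:

```java
import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.logging.Logger;

import org.junit.Assume;
import org.junit.Test;

// Hypothetical service under test, stubbed here so the sketch is self-contained
interface AccountService { String describe(String id); }
class AccountServiceImpl implements AccountService {
    public String describe(String id) { return "account:" + id; }
}

public class AccountServiceIT {
    private static final Logger log = Logger.getLogger(AccountServiceIT.class.getName());

    // #1: no Spring context needed — just instantiate the implementation under test
    private final AccountService service = new AccountServiceImpl();

    // #2: running out of container, so mock anything that needs JNDI; with
    // Spring Mock that's one line (shown as a comment to keep this sketch lean):
    //   SimpleNamingContextBuilder.emptyActivatedContextBuilder()
    //           .bind("jdbc/MyDS", dataSource);

    @Test
    public void describesAnAccount() {
        // #4: keep traces at a low level so build logs can throttle them
        log.fine("testing describe()");
        assertEquals("account:42", service.describe("42"));
    }

    @Test
    public void readsEstablishedDatabase() throws Exception {
        // #3: provide a way out — skip (don't fail) when the environment is absent
        Assume.assumeTrue(databaseAvailable());
        try (Connection c = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/MYDB", "user", "pw")) { // hypothetical URL
            // ... exercise service methods that hit the real database ...
        }
    }

    private boolean databaseAvailable() {
        try (Connection c = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/MYDB", "user", "pw")) {
            return true;
        } catch (Exception e) {
            return false;
        }
    }
    // #5: where a dependency can't exist at all in the test environment,
    // a mock framework like JMockit can stub it out — in moderation.
}
```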

Let’s take it from top to bottom (item numbers correspond to // #x comments in the code):

Don’t drag in more than you need. If you’re using Spring, you may be tempted to inject (@Autowired) the service, but since you’re testing your implementation of the service, why would you? Just instantiate the thing. There are times when you’ll want a Spring application context and, for those, tools like @RunWith(SpringJUnit4ClassRunner.class) come in handy. But those are rare, and it’s best to keep it simple.

Container? Forget it! Since you’re running out of container, you will need to mock anything that relies on things like JNDI lookups. Spring Mock’s SimpleNamingContextBuilder does the job nicely.

Provide a way out. Often you can construct or mock the database content entirely within the JUnit using in-memory databases like HSQLDB. But integration test cases sometimes need an established database environment to connect to. Those cases won’t apply if the environment isn’t there, so use JUnit Assume to skip them.

Include traces. JUnits on business methods rarely need logging, but traces can be valuable for integration tests. I recommend keeping the level low (like debug or trace) to make them easy to throttle in build logs.

Frameworks like JMockit make it easy to completely stub out dependent classes. But with these, avoid using so much scaffolding that your tests are meaningless or your classes are too tightly coupled.

Just a few suggestions to make integration tests in JUnits a bit more helpful.

Spring’s online documentation is often quite helpful when getting started with most of their frameworks. That is, you walk through the examples, quickly hit something that doesn’t work, and then grab the source code and step through it, using the docs as a navigational aid.

Such was the case today when working through the Spring Web Services tutorial. After fixing a few configuration issues, I got stuck on an exception thrown in MessageDispatcher.getEndpointAdapter:

java.lang.IllegalStateException: No adapter for endpoint […]: Is your endpoint annotated with @Endpoint, or does it implement a supported interface like MessageHandler or PayloadEndpoint?

After attaching source and debugging, I found that the JDomPayloadMethodProcessor was not in the list of the DefaultMethodEndpointAdapter’s methodArgumentResolvers. It seems it should have been, since initMethodArgumentResolvers adds it whenever org.jdom.Element is present (found by the classloader), and it was present.

Now I had never used JDOM before, but I thought I’d play along since the example used it. Besides, he who dies with the most XML frameworks wins, right? Since the example had org.jdom, I had Maven fetch it.

Upon further debugging, I found that initMethodArgumentResolvers wasn’t initializing because the list had already been set by AnnotationDrivenBeanDefinitionParser.registerEndpointAdapters. And that class was looking for org.jdom2.Element.

Doh! Those Spring developers should talk. Meanwhile, I just grabbed jdom2 and converted to it.

This debugging stint was a small price to pay for a web services framework that definitely beats Axis, Axis2, and others I’ve used. But I look forward to the day when I can use a Spring framework as a black box. Until then, I’ll keep going to the Spring Source code. And, of course, wish founder Rod all the best in his new endeavors.

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

Today’s post will be the last of the Friday Fixes series. I’ve received some great feedback on Friday Fixes content, but they’re a bit of a mixed bag and therefore often too much at once. So we’ll return to our irregularly unscheduled posts, with problems and solutions by topic, as they arrive. Or, as I get time to blog about them. Whichever comes last.

More Servlet Filtering

On prior Fridays, I described the hows and whys of tucking away security protections into a servlet filter. By protecting against XSS, CSRF, and similar threats at this lowest level, you ensure nothing gets through while shielding your core application from this burden. Based on feedback, I thought I’d share a couple more servlet filter security tricks I’ve coded. If either detects trouble, you can redirect to an error page with an appropriate message, and even kill the session if you want more punishment.

Validate IP Address (Stolen session protection)

At login, grab the remote IP address: session.setAttribute("REMOTE_IP_ADDR", request.getRemoteAddr()). Then, in the servlet filter, check against it for each request, like so:
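Here's the essence of that check as a small stand-alone class (the class name is mine); in doFilter, a false result is where you'd invalidate the session and redirect to the error page:

```java
// Per-request stolen-session check: the filter reads the address captured at
// login from the session and compares it to the current request's address.
public final class SessionIpCheck {

    /** Session attribute set at login. */
    public static final String ATTR = "REMOTE_IP_ADDR";

    /** true when the request still comes from the address captured at login
        (or when nothing was captured yet, i.e. the user hasn't logged in). */
    public static boolean sameClient(String loginAddr, String remoteAddr) {
        return loginAddr == null || loginAddr.equals(remoteAddr);
    }
}
```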

In my last Friday Fixes post, I described how to use Apache DBCP to tackle DisconnectException timeouts. But as I mentioned then, if your servlet container is a recent version, it will likely provide its own database connection pools, and you can do without DBCP.

When switching from DBCP (with validationQuery on) to a built-in pool, you’ll want to enable connection validation. It’s turned off by default and the configuration settings are tucked away, so here’s a quick guide:
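For example, with Tomcat 7's context.xml (resource name, credentials, and pool sizes here are placeholders; other containers bury equivalent settings in their admin consoles, e.g. WebSphere's connection pretest options):

```xml
<!-- The validation SQL below is DB2's; substitute your database's own. -->
<Resource name="jdbc/MyDS" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.ibm.db2.jcc.DB2Driver"
          url="jdbc:db2://dbhost:50000/MYDB"
          username="user" password="secret"
          maxActive="20" maxIdle="5"
          testOnBorrow="true"
          validationQuery="SELECT 1 FROM SYSIBM.SYSDUMMY1"/>
```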

Adjust intervals as needed for your environment. Also, these validation SQLs are for DB2; use equivalent SQLs for other databases.

Auto Logoff

JavaScript-based auto-logoff timers are a good complement to server-side session limits. These are especially nice when coupled with periodic keep-alive messages while the user is still working. The basic techniques for doing this are now classic, but new styles show up almost daily. I happen to like Mint’s approach of adding a red banner and seconds countdown one minute before logging off.
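A minimal sketch of the idea (the intervals, messages, and logoff URL are made up, and the DOM work for the banner is left as comments):

```javascript
// Warn with a banner and countdown one minute before the session limit,
// then log off. Keep SESSION_MS at or below the server-side session timeout.
var SESSION_MS = 15 * 60 * 1000;
var WARN_MS    = SESSION_MS - 60 * 1000;
var warnTimer, logoffTimer;

function bannerText(msLeft) {
  return "You will be logged off in " + Math.ceil(msLeft / 1000) + " seconds";
}

function resetTimers() {             // call on load and on any user activity
  clearTimeout(warnTimer);           // or successful keep-alive response
  clearTimeout(logoffTimer);
  warnTimer   = setTimeout(showWarning, WARN_MS);
  logoffTimer = setTimeout(logoff, SESSION_MS);
}

function showWarning() {
  // show the red banner and update it with bannerText(...) once a second
  // (DOM code omitted)
}

function logoff() {
  // window.location = "logoff.jsp";  // hypothetical logoff URL
}
```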

Among Internet Explorer’s quirks is that certain page actions (clicking a button, submitting, etc.) will freeze animated GIFs. Fortunately, gratuitous live GIFs went out with grunge bands, but they are handy for the occasional “loading” image.

I found myself having to work around this IE death today to keep my spinner moving. The fix is simple: just set the HTML to the image again, like so:
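The fix, wrapped in helper functions (the element id in the usage comment is hypothetical):

```javascript
// IE freezes animated GIFs after certain page actions; re-assigning the
// element's own markup restarts the animation.
function restartSpinner(el) {
  el.innerHTML = el.innerHTML;   // set the HTML to the same image again
}

// For a bare <img>, re-setting its src does the same trick:
function restartImg(img) {
  img.src = img.src;
}

// e.g. in the offending button's click handler:
//   restartSpinner(document.getElementById("loadingDiv"));
```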

The trouble with keeping your data in the cloud is that clouds can dissipate. Such was the case this week with the Nike+ API.

Nike finally unveiled the long-awaited replacement for their fragile Flash-based Nike+ site, but at the same time broke the public API that many of us depend on. As a result, I had to turn off my Nike+ Stats code and sidebar widget from this blog. Sites that depend on Nike+ data (dailymile, EagerFeet, Nike+PHP, etc.) are also left in a holding pattern. At this point, it’s not even clear if Nike will let us access our own data; hopefully they won’t attempt a Runner+-like shakedown.

This type of thing is all too common lately, and the broader lesson here is that this cloud world we’re rushing into can cause some big data access and ownership problems. If and when Nike lets me access my own data, I’ll reinstate my Nike+ stats (either directly, or through a plugin like dailymile’s). Until then, I’ll be watching for a break in the clouds.

Broken Tiles

I encountered intermittent problems with Apache Tiles 2.2.2, where concurrency issues cause it to throw a NoSuchDefinitionException and render a blank page. There have been various JIRAs with fixes, but these are in the not-yet-released version 2.2.3. To get these fixes, update your Maven pom.xml to specify the 2.2.3-SNAPSHOT version for all Tiles components, and add the Apache snapshot repository:
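The POM changes look something like this (a sketch: the repository id and name are arbitrary, and the property assumes your Tiles dependencies reference ${tiles.version}):

```xml
<!-- Apply to all Tiles artifacts (tiles-core, tiles-jsp, etc.) -->
<properties>
    <tiles.version>2.2.3-SNAPSHOT</tiles.version>
</properties>

<repositories>
    <repository>
        <id>apache.snapshots</id>
        <name>Apache Snapshot Repository</name>
        <url>https://repository.apache.org/snapshots</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>
```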

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

I got a friendly reminder this morning that I’ve neglected my Friday Fixes postings of late. I didn’t say they’d be every Friday, did I? At any rate, here are some things that came up this week.

Tabbed Freedom v. Logoff Security

Tabbed browsing is a wonderful thing, but its features can become security concerns to corp-rat folks who mainly use their browsers for mission-critical apps. For example, with most browsers, closing a tab (but not the browser itself) does not clean up session cookies. Yet those security-first guys would like a way to trigger a logoff (kill the session) on tab close.

This is a common request, but there’s no straightforward solution. As much as I’d like browsers to have a “tab closed” event, there isn’t one. The best we can do is hook the unload event which is fired, yes, when the tab is closed, but also anytime you leave the page: whether it’s navigating a link, submitting a form, or simply refreshing. So the trick is to detect and allow acceptable unloads. Following is some JavaScript I pieced together (into a common JSPF) based loosely on various recommendations on the web.
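Here's a condensed version of that JavaScript (with modernized event wiring, and a hypothetical logoff URL):

```javascript
// Treat link navigation and form submits as "acceptable" unloads; anything
// else (including tab close) triggers the logoff ping.
var allowUnload = false;

function markAllowed() { allowUnload = true; }

if (typeof document !== "undefined") {       // guard so the sketch loads anywhere
  document.addEventListener("click", function (e) {
    var link = e.target.closest ? e.target.closest("a[href]") : null;
    if (link) markAllowed();                 // real navigation is fine
  });
  document.addEventListener("submit", markAllowed);

  window.addEventListener("unload", function () {
    if (!allowUnload) {
      // fire-and-forget logoff; an image beacon is the classic trick
      new Image().src = "logoff.jsp?tabClosed=1";
    }
  });
}
```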

This triggers on refresh, but that’s often a good thing since the user could lose work; Gmail and Google Docs do the same thing when you’re editing a draft. It’s a good idea to make this behavior configurable, since many folks prefer the freedom of tabbed browsing over the security of forcing logoff.

DBCP Has Timed Out

Right after mastering the linked list, it seems every programmer wants to build a database connection pool. I’ve built a couple myself, but this proliferation gets in the way of having a single golden solution that folks could rally around and expect to be supported forever.

Such was the story behind Apache DBCP: it was created to unify JDBC connection pools. Although it’s broadly used, it’s over-engineered, messy, and limited. So it, too, fell by the wayside of open source neglect. And since nearly all servlet containers now provide built-in connection pools, there’s really no use for DBCP anymore.

Yet I found myself having to fix DisconnectException timeouts with an existing DBCP implementation, typically stemming from errors like: A communication error has been detected… Location where the error was detected: Reply.fill().

After trying several recovery options, I found that DBCP’s validationQuery prevented these, at the cost of a little extra overhead. Although validationQuery can be configured, I didn’t want additional setup steps that varied by server. So I just added it to the code:
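The change amounted to something like this (assuming Commons DBCP 1.x; the URL and credentials are placeholders):

```java
import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {

    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.ibm.db2.jcc.DB2Driver");
        ds.setUrl("jdbc:db2://dbhost:50000/MYDB");
        ds.setUsername("user");
        ds.setPassword("secret");

        // The fix: validate connections on borrow so stale ones are quietly
        // discarded instead of surfacing as DisconnectExceptions at first use.
        // This validation SQL is DB2's; substitute your database's own.
        ds.setValidationQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1");
        ds.setTestOnBorrow(true);
        return ds;
    }
}
```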

In the next pass, I’ll yank out DBCP altogether and configure pools in WebSphere, WebLogic, and Tomcat 7. But this gave me a quick fix to keep going on the same platform.

Aggregation Dictation

Weird: I got three questions about aggregating in SQL on the same day. Two of them involved OLAP syntax that’s somewhat obscure, but quite useful. So if you find yourself with complications from aggregation aggravation, try one of these:

Doing a group by and need rollup totals across multiple groups? Try grouping sets, rollup, and cube. I’ve written about these before; for example, see this post.

Need to limit the size of a result set and assign rankings to the results? Fetch first X rows only works fine for the former, but not the latter. So try the ranking and windowing functions, such as row_number, rank, dense_rank, and partition by. For example, to find the three most senior employees in each department (allowing for ties), do this:
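Something like this, written against the DB2 sample EMPLOYEE table (swap in your own table and column names):

```sql
-- RANK() gives tied hires the same seniority number, so "allowing for ties"
-- may return more than three rows per department.
SELECT workdept, empno, lastname, hiredate
FROM (
  SELECT workdept, empno, lastname, hiredate,
         RANK() OVER (PARTITION BY workdept
                      ORDER BY hiredate) AS seniority
  FROM employee
) AS ranked
WHERE seniority <= 3
ORDER BY workdept, seniority;
```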

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

This week’s challenges ran the gamut, but there’s probably not much broad interest in consolidated posting for store-level no advice chargebacks, image format and compression conversion, SQLs with decode(), 798 NACHA addenda, or many of the other crazy things that came up. So I’ll stick to the web security vein with a CSRF detector I built.

Sea Surf

If other protections (like XSS) are in place, meaningful Cross-Site Request Forgery (CSRF) attacks are hard to pull off. But that usually doesn’t stop the black hats from trying, or the white hats from insisting you specifically address it.

The basic approach to preventing CSRF (“sea surf”) is to insert a synchronizer token on generated pages and compare it to a session-stored value on subsequent incoming requests. There are some pre-packaged CSRF protectors available, but many are incomplete while others are bloated or fragile. I wanted CSRF detection that was:

I also wanted to include double submit protection, without having to add another filter (certainly no PRG filters – POSTs must be POSTs). Here’s the gist of it.

First, we need to insert a token. I could leverage the fact that nearly all of our JSPs already included a common JSPF file, so I just added to that. The @include wasn’t always inside a form so I added the hidden input field via JavaScript (setToken). I used a bean to keep the JSPF as slim as possible.

I didn’t want to modify all those $.ajax calls to pass the token, so the ajaxSend handler does that. The token arrives from AJAX calls in the request header, and from form submits as a request value (from the hidden input field); that gives the benefit of being able to distinguish them. You could use a separate token for each if you’d like.

The TokenUtil bean is simple, just providing the link to the CSRFDetector.

A servlet filter (doFilter) calls CSRFDetector to validate incoming requests and return a simple error string if invalid. You can limit this to only validating POSTs with parameters, or extend it to other requests as needed. The validation goes like this:
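Boiled down to pure logic (the class and message strings here are illustrative stand-ins for the real CSRFDetector), the checks look like this:

```java
// null means the request is valid; otherwise the string says why it was rejected.
public final class CsrfCheck {

    public static String validate(String sessionToken,
                                  String headerToken,   // set by the ajaxSend handler
                                  String paramToken) {  // from the hidden input field
        if (sessionToken == null) {
            return "No token in session";
        }
        // AJAX calls carry the token in a header, form posts as a parameter —
        // which is what lets you tell the two apart.
        String incoming = (headerToken != null) ? headerToken : paramToken;
        if (incoming == null) {
            return "Request token missing";
        }
        if (!sessionToken.equals(incoming)) {
            return "Request token does not match session token";
        }
        return null; // valid
    }
}
```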

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

You know the old saying, “build a man a fire and he’s warm for a day; set a man on fire, and he’s warm for the rest of his life.” Or something like that. I’ve been asked about tool preferences and development approaches lately, so this week’s post focuses on tools and strategies.

JRebel

If you’re sick of JVM hot-swap error messages and having to redeploy for nearly every change (who isn’t?), run, do not walk, to ZeroTurnaround‘s site and get JRebel. I gave up on an early trial last year, but picked it up again with the latest version a few weeks ago. This thing is so essential, it should be part of the Eclipse base.

My DB2 tool of choice depends on what I’m doing: designing, programming, tuning, administering, or monitoring. There is no “one tool that rules them all,” but my favorites have included TOAD, Eclipse DTP, MyEclipse Database Tools, Spotlight, db2top, db2mon, some custom tools I wrote, and the plain old command line.

I never liked IBM’s standard GUI tools like Control Center and Command Editor; they’re just too slow and awkward. With the advent of DB2 10, IBM is finally discontinuing Control Center, replacing it with Data Studio 3.1, the grown-up version of the Optim tools and old Eclipse plugins.

I recently switched from a combination of tools to primarily using Data Studio. Having yet another Eclipse workspace open does tax memory a bit, but it’s worth it to get Data Studio’s feature richness. Not only do I get the basics of navigation, SQL editors, table browsing and editing, I can do explains, tuning, and administration tasks quickly from the same tool. Capability-wise, it’s like “TOAD meets DTP,” and it’s the closest thing yet to that “one DB2 tool.”

Standardized Configuration

For team development, I’m a fan of preloaded images and workspaces. That is, create a standard workspace that other developers can just pick up, update from the VCS, and start developing. It spares everyone from having to repeat setup steps, or debug configuration issues due to a missed setting somewhere. Alongside this, everybody uses the same directory structures and naming conventions. Yes, “convention over configuration.”

But with the flexibility of today’s IDEs, this has become a lost art in many shops. Developers give in to the lure of customization and go their own ways. But is that worth the resulting lost time and fat manual “setup documents”?

Cloud-based IDEs promise quick start-up and common workspaces, but you don’t have to move development environments to the cloud to get that. Simply follow a common directory structure and build a ready-to-use Eclipse workspace for all team members to grab and go.

Josh is taking it to extremes, but he does have a point: developers’ lives are often too hectic and too distracted. This “do more with less” economy means multiple projects and responsibilities and the unending tyranny of the urgent. Yet we need blocks of focused time to be productive, separated by meaningful breaks for recovery, reflection, and “strategerizing.” It’s like fartlek training: those speed sprints are counterproductive without recovery paces in between. Prior generations of programmers had “smoke breaks;” we need equivalent times away from the desk to walk away and reflect, and then come back with new ideas and approaches.

I’ll be following to see if these experiments yield working solutions, and if Josh can stay employed. You may want to follow him as well.

Be > XSS

As far as I know, there’s no one whose middle name is <script>transferFunds()</script>. But does your web site know that?

It’s surprising how prevalent cross-site scripting (XSS) attacks are, even after a long history and established preventions. Even large sites like Facebook and Twitter have been victimized, embarrassing them and their users. The general solution approach is simple: validate your inputs and escape your outputs. And open source libraries like ESAPI, StringEscapeUtils, and AntiSamy provide ready assistance.

But misses often aren’t due to systematic neglect, rather they’re caused by small defects and oversights. All it takes is one missed input validation or one missed output-encode to create a hole. 99% secure isn’t good enough.

With that in mind, I coded a servlet filter to reject post parameters with certain “blacklist” characters like < and >. “White list” input validation is better than a blacklist, but a filter is a last line of defense against places where server-side input validation may have been missed. It’s a quick and simple solution if your site doesn’t have to accept these symbols.
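The heart of that filter is a trivial check, sketched here (the class name and blacklist contents are illustrative; a doFilter would run this over every POST parameter value and reject the request on a hit):

```java
// Last line of defense: reject parameter values containing blacklisted characters.
public final class ParamBlacklist {

    private static final String BLACKLIST = "<>";

    /** true when the value contains none of the blacklisted characters */
    public static boolean isClean(String value) {
        if (value == null) return true;
        for (int i = 0; i < value.length(); i++) {
            if (BLACKLIST.indexOf(value.charAt(i)) >= 0) {
                return false;
            }
        }
        return true;
    }
}
```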

I’m hopeful that one day we’ll have a comprehensive open source framework that we can simply drop in to protect against most web site vulnerabilities without all the custom coding and configuration that existing frameworks require. In the meantime, just say no to special characters you don’t really need.

Comments Off

On that note, I’ve turned off comments for this blog. Nearly all real feedback comes via emails anyway, and I’m tired of the flood of spam comments that come during “comments open” intervals. Most spam comments are just cross-links to boost page rank, but I also get some desperate hack attempts. Either way, it’s time-consuming to reject them all, so I’m turning comments off completely. To send feedback, please email me.

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

This week’s challenges were all over the place, but I’ll focus in on just a couple of fixes in one popular category: Spring Security.

Filters : MVC :: Oil : Water

The web app I’m currently working on has some unique functions that occur at login. Sure, it all starts with a login form, but it goes far beyond the simplified functions of standard Java EE form authentication. Good news is, Spring MVC makes implementing these login functions nice, while Spring Security provides many of the protections we need. Bad news is, Spring Security and Spring MVC don’t play together at all.

A key reason is that Spring Security’s access control and authentication mechanisms are called upstream in the filter chain (before the controllers) and can’t access the controller’s state, session, and request. With forms authentication (LoginUrlAuthenticationEntryPoint, UsernamePasswordAuthenticationFilter, and the like), you get one monitored post URL for authentication (by default, j_spring_security_check) with no means for the controller to do sophisticated validation or page flow.

If I could lock myself into the strict confines of form authentication, then Spring Security’s implementation would be helpful: outside a bit of XML namespace configuration, all I’d need is a simple UserDetailsService DAO. But I couldn’t, and trying to force it just convinced me that I really didn’t want Spring Security’s forms authentication after all. What I really wanted was a Spring MVC login process which passed authentication details to Spring Security when done.

That part was very easy. I just wrote a small utility class which, after it authenticated the user and retrieved his authorities, simply set them in Spring Security’s context. Like so:
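The utility boils down to a couple of Spring Security calls (the class and method names here are my stand-ins, assuming Spring Security 3.x):

```java
import java.util.List;

import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

public final class SecurityContextUtil {

    /** After the MVC login flow authenticates the user and fetches his
        authorities, hand the result to Spring Security so its downstream
        filters see an authenticated session. */
    public static void registerAuthentication(Object principal,
                                              List<GrantedAuthority> authorities) {
        // The three-argument constructor marks the token as authenticated.
        UsernamePasswordAuthenticationToken auth =
                new UsernamePasswordAuthenticationToken(principal, null, authorities);
        SecurityContextHolder.getContext().setAuthentication(auth);
    }
}
```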

This made Spring Security and me both happy, with no need to mix the oil of Spring MVC controller flow with the water of Spring Security authentication filters.

Casting Custom SPeLs

Among its many uses, the Spring Expression Language (SpEL) allows for flexible URL filter (intercept-url) expressions. And custom expressions can be used to support unique access control requirements through simple configuration.

The T(my.package.Class).myMethod(whatever) syntax was one option, but that’s ugly, limited, and prone to error. Further, by stepping through the code, I learned that the expression handlers called in Spring Security have access to all sorts of useful information to make authorization decisions. So building my own custom expression evaluator was just the ticket.

But plugging in a custom expression handler took some maneuvering. Turns out, you can’t just add a new SecurityExpressionRoot directly in the security configuration; you have to swap in a custom SecurityExpressionHandler and have it instantiate the evaluator, like so:
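A sketch of that swap (Spring Security 3.1 API; MySecurityExpressionRoot and its hasDepartment expression are hypothetical):

```java
import org.springframework.security.access.expression.SecurityExpressionRoot;
import org.springframework.security.core.Authentication;
import org.springframework.security.web.FilterInvocation;
import org.springframework.security.web.access.expression.DefaultWebSecurityExpressionHandler;
import org.springframework.security.web.access.expression.WebSecurityExpressionRoot;

public class MySecurityExpressionHandler extends DefaultWebSecurityExpressionHandler {

    // The handler instantiates the custom evaluator for each secured request.
    @Override
    protected SecurityExpressionRoot createSecurityExpressionRoot(
            Authentication authentication, FilterInvocation fi) {
        return new MySecurityExpressionRoot(authentication, fi);
    }

    /** Hypothetical custom root adding, e.g., hasDepartment('X') for use in
        intercept-url access expressions. */
    public static class MySecurityExpressionRoot extends WebSecurityExpressionRoot {
        public MySecurityExpressionRoot(Authentication a, FilterInvocation fi) {
            super(a, fi);
        }

        public boolean hasDepartment(String dept) {
            return true; // placeholder for the real authorization decision
        }
    }
}
```

The handler is then wired in through the security namespace's expression-handler element so it replaces the default.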

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

Hashes Make for Poor Traces

SQL and parameter tracing is covered well by most ORMs like Hibernate, but you’re on your own when using JDBC directly. It’s helpful to funnel JDBC access through one place with a common wrapper or utility class, but if all you get is the PreparedStatement, there are no public methods to get the SQL text and parameter array from it for tracing. Unbelievable. PreparedStatement.toString sometimes includes useful information, but that depends entirely on the JDBC driver. In my case, all I got was the class name and hash. So I just went upstream a bit with my calls and tracing hooks so I could pass in the original SQL text and parameter array.

Samy Isn’t My Hero

You’d think URL rewriting went out with the Backstreet Boys, but unfortunately, JSESSIONID is alive and well, and frequenting many address bars. This week, I’ve been focused on hack-proofing (counting down the OWASP Top Ten) so I should be a bit more sensitive to these things. Yet I inadvertently gave away my session when emailing a link within the team management system I use (does it count as a stolen session if you give it away?). Not only does this system include the direct sessionguid in the URL, it also doesn’t validate session references (e.g., by IP address) and it uses long expirations.

At least this system invalidates the session on logout, so I’ve resumed my habit of manually logging out when done. That, and attention to URL parameters is a burden we must all carry in this insecure web world. Site owners, update your sites. Until then, let’s be careful out there.

Virtual Time Zone

While directing offshore developers last year, thinking 9 1/2 or 10 1/2 hours ahead finally became natural. Having an Additional Clock set in Windows 7 provided a quick backup. Amazingly, some software still isn’t timezone-aware, causing problems such as missed updates. I won’t name names or share details, but I opted for a simple solution: keep a VM set in IST and use it for this purpose. Virtual servers are cheap these days, and it’s easy enough to keep one in the cloud, hovering over the timezone of choice.

SQL1092 Redux

Yes, bugs know no geographical boundaries. A blog reader from Zoetermeer in the Netherlands dropped me another note this week, this time about some SQL1092 issues he encountered. Full details are here; the quick fix was the DB2_GRP_LOOKUP change.

This DB2 issue comes up frequently enough that DB2 should either change the default setting or also adopt WebSphere MQ’s strategy of using a separate limited domain ID just for group enumeration. Those IBM’ers should talk.

Those Mainframe Hippies

I traditionally think of mainframe DB2’ers as belt-and-suspenders codgers who check your grammar and grade your penmanship, while viewing DB2 LUW users as the freewheeling kids who sometimes don’t know better. That changes some with each new version, as the two platforms become more cross-compatible.

So I was surprised this week to find that DB2 for z/OS was more flexible than DB2 LUW on timestamp formats. Mainframe DB2 has long accepted spaces and colons as separators, but unlike DB2 LUW, you can mix and match ’em. For example, DB2 z/OS 9 and higher will take any of the following:

'2012-03-16 19:20:00'

'2012-03-16-19.20.00'

'2012-03-16-19:20:00'

'2012-03-16 19.20.00'

DB2 LUW (9.7) will reject the last two with SQL0180 errors.

Knowing the limits is important when writing code and scripts that work across multiple DB2 platforms and versions. The problem could get worse as DB2 LUW adds more Oracle compatibility, but as long as the DB2 kids and codgers stay in sync, we can enjoy some format flexibility there.

It’s Friday, and time again for some Friday Fixes: selected problems I encountered during the week and their solutions.

It’s a Big World After All

I got the complaint again this week: “DB2 is so stupid; why does it sometimes take two bytes for a character?” I’ve answered that before, and the best solution is: if you’re storing truly binary data, use a binary type (like for bit data). But if it really is text, I usually just send a link to Joel Spolsky’s classic article on character encodings. And here’s a quiz – let me know what the following says:

אם אתה יכול לקרוא את זה, אתה תגרוק קידודים

The counterpoint to Joel’s article, or at least to blindly applying Postel’s Law to web apps, is that it can open the door to all kinds of security vulnerabilities: SQL injection, cross-site scripting, CSRF, etc. So it’s common to have a blacklist of disallowed characters or a whitelist specifying the allowed character set. The point is to be deliberate and think about what should be allowed: both the extent and limits. Don’t treat binary data as text and don’t pretend there’s only one character encoding in the world.
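As a sketch of the whitelist approach (the allowed set here is hypothetical; choose yours deliberately), note that Unicode character classes let you accept international text without accepting everything:

```java
import java.util.regex.Pattern;

class InputPolicy {
    // Hypothetical whitelist: Unicode letters, digits, spaces, and a few
    // punctuation characters, capped at 100 characters.
    private static final Pattern ALLOWED =
            Pattern.compile("[\\p{L}\\p{Nd} .,'-]{1,100}");

    static boolean isAllowed(String input) {
        return input != null && ALLOWED.matcher(input).matches();
    }
}
```

Here \p{L} admits Hebrew, accented Latin, and the rest of the world's letters, while still rejecting markup and SQL metacharacters.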

Different, Not Better

Having endured VMWare’s product gyrations through the years, I’ve learned that upgrading it isn’t always best: sometimes I just get different, not better. I feel the same way about Windows 8.

In this latest case, I found my beloved but now-obsolete VMWare Server can’t host the new Windows 8 CP edition, so I considered switching to VMWare Workstation 8. But the latest VirtualBox (4.1.8) works, so I downloaded that, installed Windows 8 from ISO, and quickly had it running in a VM. Glad I didn’t have to play a pair of 8s.

I suppose that’s a side benefit when leading players like VMWare and Microsoft make revolutionary changes to their incumbent products. Major (often costly) upgrades create good opportunities to try out competing alternatives.

That Other Browser

I’ll always need the latest Windows and Internet Explorer (IE) versions running in some mode (VM or otherwise) just to test and fix IE browser compatibility issues. This week, I ran into an “Access denied” JavaScript error in IE 8 – Intranet zone, Protected Mode. As is often the case, my code worked fine in Firefox, Chrome, and other browsers.

I wanted to pin down exactly which IE setting was causing the problem, so I tweaked some likely suspects in custom Security Settings but could never nail it down to just one. There was nothing in the Event Log to really help, just red herrings. And apparently neither Protected Mode nor IE ESC was the root cause. Switching zones (such as adding to trusted sites) didn’t work either, and extreme measures only made things worse. Ultimately, doing a Reset all zones to the default level resolved the issue.

IE has far too many obscure and non-standard security settings that don’t work as expected. And there’s no direct way to get a consolidated dump of all current settings. Microsoft continues to work on fixing this, so I’m hopeful things will improve; they really can’t get worse. Until then, the solution may continue to be “switch browsers” or “take one reset and call me in the morning.”

Jasper Variability

Jasper Reports (iReport) variables and groups are useful features. But they can be frustrating at times because of documentation limits (yes, even with The Ultimate Guide) and differences between versions and languages (Groovy v. Java). Sometimes the answer must be found by stepping through the code to determine what it wants. That was necessary for me to get my group totals working.

For example, imagine a sales report grouped by region where you want group footers totaling the number of Sprockets sold for each region. Standard variable calculations won’t work, and the Variable Expression must handle anything that comes its way, including null values from multi-line entries. So, rather than using a built-in Counter variable with an Increment Type, I had to use a counter-intuitive variable, like so:
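The exact expression depends on your fields, but assuming hypothetical product and quantity fields, the null-safe logic boils down to the ternary in this helper. In Jasper itself, that ternary becomes the Variable Expression, with Calculation set to Sum and the reset type set to the region group:

```java
class SprocketTally {
    // Null-safe increment: count quantity only for Sprocket lines, treating
    // null quantities (e.g., from multi-line entries) as zero.
    static int increment(String product, Integer quantity) {
        return ("Sprocket".equals(product) && quantity != null) ? quantity : 0;
    }
}
```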

Friday Fragments served useful purposes, one of which was regularity: funny how Fridays kept coming around. While I haven’t been in pedagogical fragment mode for a while, I encounter real-life puzzles every day. Sharing these often helps others who encounter the same problems, and even helps me when I need a quick reference down the road. I just need a reminder and motivator to stop and record these. I suppose Friday is as good as any.

So here goes the first installment of Friday Fixes: selected problems I encountered during the week and their solutions, along with other sundries.

Spring Loading

Like most form tags, the multi-select list in Spring MVC is easy to bind: deceptively so. All you need is this, right?

And, of course, the model attribute and/or bean methods to return the allChoices collection and get/set myFavorites. Well, not so fast. Turns out, multi-select lists in Spring MVC have always been a bit of a pain, particularly when it comes to making the initial selections (convincing Spring to add the selected attributes) on initial page load. One pre-selection is fine, but with multiples, the comma-separated list pushed back into the model’s setter is a one-way trip.

Solving this in prior versions of Spring required using an InitBinder whether you otherwise needed one or not. But for Spring MVC 3, the fix is to just map to collection getter/setters, even if your model wants to use the comma-separated list. For example, use the following getter and change the form:select path to use it: path="myFavoritesList".
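A sketch of that mapping (bean and property names hypothetical): the model keeps its comma-separated string, while the form binds to a List view of the same data.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class PreferencesForm {
    // The model's native representation: a comma-separated string.
    private String myFavorites = "";

    public String getMyFavorites() { return myFavorites; }
    public void setMyFavorites(String myFavorites) { this.myFavorites = myFavorites; }

    // The form binds to this List view instead, e.g.
    // <form:select path="myFavoritesList" items="${allChoices}" multiple="true"/>
    public List<String> getMyFavoritesList() {
        if (myFavorites == null || myFavorites.length() == 0) {
            return new ArrayList<String>();
        }
        return new ArrayList<String>(Arrays.asList(myFavorites.split(",")));
    }

    public void setMyFavoritesList(List<String> values) {
        StringBuilder sb = new StringBuilder();
        for (String value : values) {
            if (sb.length() > 0) {
                sb.append(',');
            }
            sb.append(value);
        }
        myFavorites = sb.toString();
    }
}
```

With the collection getter in play, Spring can match each option against the list and emit the selected attributes on initial page load.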

Between the deadlock event monitor, db2pd, and snapshots, DB2 has long provided good tools for tracking down deadlock culprits. But for lock timeouts, not so much. The DB2 folks have tried to improve things lately, but they’ve changed their minds a lot, often adding new tools and then quickly taking them away.

Now that lock timeout event monitors are finally here, many of the other new approaches like db2_capture_locktimeout and db2pdcfg -catch (with db2cos call-out scripts) have been deprecated. A coworker was concerned about the passing of db2_capture_locktimeout, but it appears it’ll be around a little longer. For example, the following still works in even the latest 9.7 fixpacks.

Repeat the last two commands in another DB2 window and then wait for the timeout. Look for the report under your DIAGPATH, SQLLIB, or Application Data folder; for example: dir db2locktimeout*.* /s. Even with the latest 9.7 fixpacks, the timeout report can occasionally have some holes in it (like missing SQLs), but it’s still quite useful.

Flexigrid Incantation

Got a Flexigrid with a radio button? Want to fetch the value of a column on the radio-selected row? Well, when Flexigrid generates the tds and divs from your colModel, it provides abbr attributes, not ids. So the usual jQuery shorthands to find by ID don’t apply, and you’re off chasing the more obscure abbr. For example:

JDBC is one of those core, essential components where you want to see continued incremental improvement with no interface impacts: just keep dropping in the latest versions and expect it to work. Yay, back compatibility!

JDBC 4.0 is relatively new to DB2 LUW (first available in version 9.5), but it and prerequisite Java 6 have certainly been around long enough for folks to start switching over to it. Just don’t assume that everything in a deep open source stack will work with it.

In yesterday’s action, perfectly-good BIRT 2.5.1 reports failed (“data set column … does not exist” exception) in an environment with DB2’s JDBC 4.x drivers. At first, I suspected the particular db2jcc4.jar level, since it was taken from DB2 9.7 Fixpack 3: that now-deprecated fixpack that does everything evil except steal your kidneys. But since new versions of BIRT (like the 3.7.1 version I use) worked fine with that version of Ernie (ahem, JDBC 4.0) and old versions of BIRT worked fine with JDBC 3.0 (db2jcc.jar), that flagged the culprit. Turns out, the report SQLs use column aliases (who doesn’t?) and apparently JDBC 4.0’s getColumnName “fixes” broke these. Upgrading BIRT brought harmony back to the universe.

With “broken compatibility” issues like this, it’s tempting to procrastinate on open source upgrades. That’s why Maven POMs quickly develop inertia: once you get it all working together, who has time to test new dependency version permutations? Fortunately, the Eclipse project and others do a good job of bundling compatible toolsets for download. That way, if something “doesn’t work on my machine”, it’s quick work to see how it plays with the latest and greatest stack.

“If there are enterprise beans in the application, the EJB deployment process can take several minutes. Please do not abandon all hope while the process completes.”

WebSphere deployment message (paraphrased)

Of all the many ills of EJBs, slow deployments are among the most chronic. New products like JRebel attempt to reduce this time impact, but with inconsistent results. And hot-swapping code while debugging often fails for all but the most trivial changes. So if each significant code change requires a long wait for redeployment, it had better at least work.

That’s why today’s deployment failures were so annoying. I could see from console logs that RMIC was running and there were no errors, but the EJB stub classes were missing from the resulting JARs. This was a new MyEclipse workspace (Blue edition, version 9.1), but I had never had this problem before. I re-checked my configuration and verified that I had enabled EJBDeploy for my WebSphere 6.1 target server (Properties – MyEclipse – EJB Deploy). Yet in the process of poking around and checking things, I stumbled on the problem. While there, I kicked off a manual run by clicking “Apply and run EJB Deploy” and finally got an error message: it could not create the files because it needed the .ejbDeploy sub-folder under my workspace path. I manually created that folder and it worked.

I don’t know why MyEclipse or ejbdeploy couldn’t just create the folder, nor why it wouldn’t report the error to the console during normal deployments. But since this manual work-around resolved the problem, I’ll know what to do next time it happens.

Problem solved, and back to waiting on deployments again. Thanks… I think.

Handled correctly, loose typing can be a powerful thing, but with great power comes great responsibility. The license to enjoy the benefits of dynamic typing comes with the responsibility to actually design with real objects. If one just strings together collections of simple types, weaker typing can lead to some very confusing code.

I was reminded of this today in an improbable way. I uplifted some old, unfamiliar code to a modern Java version, which included adding in generics: taking untyped collections toward strongly-typed ones. This meant grokking and stepping through the code to infer data types where nested collections were used, and it smoked out what the design really looked like. One commonly-used structure came to this, and there were several others like it:
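The real structure was project-specific, so here is a hypothetical stand-in of the same shape (all names invented):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ReportData {
    // Hypothetical stand-in: region -> customer -> order lines, where each
    // order line is itself a loosely-typed String array.
    Map<String, Map<String, List<String[]>>> byRegionThenCustomer =
            new HashMap<String, Map<String, List<String[]>>>();

    void add(String region, String customer, String[] line) {
        if (!byRegionThenCustomer.containsKey(region)) {
            byRegionThenCustomer.put(region, new HashMap<String, List<String[]>>());
        }
        Map<String, List<String[]>> customers = byRegionThenCustomer.get(region);
        if (!customers.containsKey(customer)) {
            customers.put(customer, new ArrayList<String[]>());
        }
        customers.get(customer).add(line);
    }
}
```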

This just screams, “make objects!” It reminded me why the “everything in triplicate” crowd perhaps fears dynamic typing. If you aren’t designing with objects, then strong typing partly saves you from yourself. In this case, the addition of crazy nested generics actually made the subsequent code easier to understand and less error-prone.

But a much better approach, of course, is to create new classes to handle what these nested collections do, with the added benefit of factored behavior. In this case, the real objects exist very plainly in the problem domain; just follow the nouns. A truly object-oriented design usually doesn’t need strong typing to clarify the design.
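To make that concrete, a hypothetical refactoring along those lines (names invented): the nouns become classes, the nesting disappears, and behavior like totals gets a natural home.

```java
import java.util.ArrayList;
import java.util.List;

class OrderLine {
    final String product;
    final int quantity;

    OrderLine(String product, int quantity) {
        this.product = product;
        this.quantity = quantity;
    }
}

class Customer {
    final String name;
    final List<OrderLine> lines = new ArrayList<OrderLine>();

    Customer(String name) { this.name = name; }

    // Factored behavior that previously lived in loops over nested maps.
    int totalQuantity() {
        int total = 0;
        for (OrderLine line : lines) {
            total += line.quantity;
        }
        return total;
    }
}
```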

This does require diligence and I’ve often been guilty of missing objects, too. Yet perhaps this example reveals a new code quality metric: if you need more than two adjacent angle brackets to make the “raw type” compiler warnings go away, it’s time to step back and look for objects.

Recent turns of events have moved me onto a new application to be written atop an old platform. The stack consists of WebSphere AD 5, J2EE 1.4 (with EJB), Struts 1.2, and some Rube Goldberg approaches. It’s outdated and bloated, but since there’s no time to shift to anything modern, we’ll add more new code atop it.

It’s a very common problem these days.

Over a decade ago, many said that J2EE would become the “COBOL of the 21st Century.” That prediction has come true, but not in a good way. By saying that, proponents meant “ubiquitous,” but what we got instead was “entrenched” and “obsolete.”

This is largely due to the size of the ship and the number of course corrections. J2EE is a huge designed-by-committee camel that shifted dramatically with each major release as it buckled under its own weight. And with each successive round, the champions of lighter enterprise Java (Rod Johnson, Bruce Tate, Anil Hemrajani, etc.) have either seen their solutions become bloatware themselves, or have abandoned enterprise Java altogether. No wonder J2EE is now called the dead man walking.

J2EE’s size and needless complexity have worked against each other. Newer, better, lighter solutions eventually came around, but bloated, entrenched projects found it difficult or impossible to move to them.

While it doesn’t hold a candle to dynamic languages, I actually like Java and parts of J2EE (cleanly-designed servlets, JSPs, and services are good things). It’s just the bloat and do-overs which accompany it that drive me nuts. I’ve often wondered why Java and C++ fall victim to this far more than other languages. Perhaps it’s the culture and personality that comes with static typing: the classic security vs. freedom trade-off. Folks who are attracted to J2EE’s everything in triplicate and research project mindset often don’t understand the costs of technical debt or the benefits of clean and simple design.

Maybe I’m just too accustomed to quickly building apps that have maximum function with minimum code. I’ll code in whatever language is required, but I will not abandon my push for agility, simplicity, and elegance.

I had a good lunch today with a friend who wanted to quickly set up simple (yet strong) authentication on a Tomcat web server using his own login page. Since forms authentication is built into all J2EE web servers (and ASP .NET servers, for that matter), it’s quite easy.

In summary, the steps for Tomcat are:

Add security-constraints to WEB-INF/web.xml to specify protected resources / folders. Also include auth-constraints and security-roles for access.

Create user and user role tables and configure the JDBC realm in server.xml. Or, simply start with defining users directly in tomcat-users.xml; you can always add the database later.

Create the login and login error JSPs, pointing to them from the login-config section of web.xml. Remember to include the required form element names in the JSP (j_security_check, j_username, j_password, etc.). Also note that these pages can’t use style sheets and other external files, so you have to (redundantly) embed style information directly into the JSP.

By default, none of this traffic (including login passwords) is encrypted, so this should only be used with SSL/TLS encryption in place. That means installing a digital certificate, which is also fairly easy. That is:

Purchase an SSL certificate. For initial testing, you can create a self-signed cert using keytool, included with JSSE.

If the server is local, re-start Tomcat, open your browser, and access your site using the https://localhost:8443 URL. Look for the browser cues for a secure site: padlock icon, green or yellow address bar, etc.

You may eventually switch to more sophisticated methods, like integrating with external security systems for single sign-on (e.g., using SAML). But the simple steps above will get you going quickly with basic, solid authentication.

“I wanted nothing else than to make the object as perfect as possible.”
Erno Rubik

I recently enhanced one of my search functions to support additional criteria. It was originally a very modest search, accepting only an account number and amount. But, over time, search fields and flags were added and the primary method grew to accepting a very long parameter list, and then to receiving a tightly-coupled dictionary of search keys and their values. I looked at the method and thought, “what idiot wrote this thing?” That’d be me.

It’s an easy trap to fall into. Layering new functions atop an existing O-O system requires ongoing diligence to refactor and maintain clean design patterns. The temptations are many; for example, standard objects like those workhorse collections can lure one away from crafting valuable new custom classes. But there are defenses, such as a good library of xUnit test cases to support refactoring. Good design and test-driven development pay dividends long after the original code is written.

In this case, a command object did the trick. Have a findPersons* method that takes far too many parameters? Create a PersonSearch or PersonQuery command/value object to house them.
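A minimal sketch of such a command object (field names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical command object replacing a long findPersons(...) parameter list.
class PersonSearch {
    private String accountNumber;
    private String lastName;
    private boolean includeInactive;

    public String getAccountNumber() { return accountNumber; }
    public void setAccountNumber(String accountNumber) { this.accountNumber = accountNumber; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public boolean isIncludeInactive() { return includeInactive; }
    public void setIncludeInactive(boolean includeInactive) { this.includeInactive = includeInactive; }

    // Validation lives with the command, so it can run on client and server.
    public List<String> validate() {
        List<String> errors = new ArrayList<String>();
        if (accountNumber == null || accountNumber.length() == 0) {
            errors.add("Account number is required");
        }
        return errors;
    }
}
```

Adding a new criterion now means adding a field here, not widening every method signature in the call chain.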

There are many benefits to this besides cleaner code. xUnit test cases usually come easier and are less brittle. You can better factor behaviors such as validation and conversion onto the command object itself. This can improve reuse across the client and server.

A basic security requirement for any rich web or client/server system is that validation occurs both at the client and at the server. If the command object can travel between client and server, you write the validation code once and use it on both sides. This works well with Google Web Toolkit (GWT), Server Smalltalk (SST), and similar frameworks.

This sent me on a brief witch hunt; for example, I searched for related methods with too many parameters: MyClass methodDictionary do: [ :ea | ea parameterCount > 3 ifTrue: [ Transcript cr; show: ea printString ]]. I found and fixed a few, and toyed with a SmallLint rule for it. It never quite ends with the perfect object, but it often gets much closer.