Monthly archive: June 2012

Sometimes I just want to apply to a property exactly the same Bean Validation constraints I apply to another property.

Case in point: I have a password property in a User class, and there is a ChangePassword data transfer object somewhere else that happens to have currentPassword, newPassword and repeatPassword fields, used precisely to change User.password.

It is obvious that I will want to apply the validations for User.password to all these fields. And, of course, I want changes to the validation rules on the original property to be taken into account automatically.

Copying them is a no-no, because we would violate the DRY (Don't Repeat Yourself) principle, and creating a custom validation annotation, say @Password, might be overkill. Besides, it would not convey the right meaning, or that's how I see it.

Creating some kind of DelegateValidation validator that replicates other validations makes a lot of sense, as it clearly conveys the intention and keeps code DRY.

Here we must get access to the Validator we are using to perform validations, so that we can ask it to validate against the tracked property. Since I use CDI, as well as the Seam validation module, I rely on them to inject the right Validator instance into my validator implementation.

However, if you are not using CDI/Seam, or just to make testing easier, I provide a way to supply a validator via the setGlobalValidator method, which you should call before any validations are performed.

Last but not least, let me tell you that, if you provide a wrong property name to the validator, the validation will fail with a ValidationException, as attested by some of the following tests.
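None of the names below come from the actual implementation; this is a framework-free sketch of the delegation idea. The delegating check owns no rules of its own: it replays whatever rules are registered for the tracked property, and it rejects an unknown property name with an exception, mirroring the ValidationException behavior just described. In the real code the registry's role is played by the Bean Validation Validator (injected via CDI or supplied through setGlobalValidator).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class DelegatingValidation {
    // Illustrative registry: property name -> validation rules.
    // In the real thing this role is played by the Bean Validation
    // Validator (e.g. through validateValue).
    private static final Map<String, List<Predicate<String>>> RULES = new HashMap<>();

    static void register(String property, Predicate<String> rule) {
        RULES.computeIfAbsent(property, k -> new ArrayList<>()).add(rule);
    }

    // The delegate: it has no rules of its own, it just replays the
    // rules registered for the tracked property.
    static boolean validateAs(String property, String value) {
        List<Predicate<String>> rules = RULES.get(property);
        if (rules == null) {
            // Mirrors the ValidationException thrown for a wrong property name.
            throw new IllegalArgumentException("No such property: " + property);
        }
        return rules.stream().allMatch(rule -> rule.test(value));
    }
}
```

With the rules registered once for "password", fields like newPassword and repeatPassword can all validate through validateAs("password", value), so any change to the original rules propagates automatically.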

The tests

For the sake of completeness, here are my tests, written against TestNG:

In ExtJs you find yourself locating components by identifier all the time, as in

Ext.getCmp( 'myToolbar' );

instead of referencing them like this:

myWindow.myToolbar;

A main cause for this is the fact that in ExtJs you work with config objects all the time, and the real objects that will be created based on that config are not there yet. You have to use some kind of identifier to refer to them.

We often use the id property as the identifier for objects because it is natural to relate id with identifier.

But sometimes using id for this purpose is a bad thing, because the id has a larger meaning: it is assigned as the id of the DOM element corresponding to this component, or at least as the base for those ids.

This means that the id must be globally unique. Therefore, you'll get a collision if two different panels have status bars that happen to share the same id (not uncommon in a large application).

What you want in most cases is for an identifier to be unique in a certain context, not globally unique. You want to have a unique identifier for a button in a window, but don’t care if you have two buttons with the same identifier in different windows, right?

I recommend you use itemId instead of id as the default way to identify ExtJs components. Use id if you are going to perform dom related tasks.

To locate an element by itemId you can use the ComponentQuery class, as well as the up, down and child methods in components/containers.

Whereas you used to write

Ext.getCmp( 'myToolbar' );

now you will be best served by something like

myWindow.down( '#myToolbar' );

which will look for a component whose itemId is myToolbar somewhere below the myWindow container.

The default way dates are serialized by Gson is not very useful or easy to handle, and I decided to provide a custom default implementation that is easier to work with.

Just remember to check this link to make sure you understand how to handle it, and don't forget to add the required code on the ExtJs side.
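The post does not say which format was chosen, so the pattern below is an assumption: an ISO-8601-like pattern, which is what ExtJs date handling copes with most easily. With Gson you would install such a pattern via new GsonBuilder().setDateFormat(pattern).create(); the sketch shows only the pattern itself, using the JDK's SimpleDateFormat.

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

public class DateFormatSketch {
    // Assumed pattern, not necessarily the one shipped with DJN 2.2.
    // On the ExtJs side the matching date format would be something
    // like 'Y-m-d\\TH:i:s'.
    static final String PATTERN = "yyyy-MM-dd'T'HH:mm:ss";

    // Gson would apply this pattern to java.util.Date fields when
    // configured with new GsonBuilder().setDateFormat(PATTERN).create().
    static String format(Date date) {
        SimpleDateFormat formatter = new SimpleDateFormat(PATTERN);
        formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
        return formatter.format(date);
    }

    public static void main(String[] args) {
        Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        calendar.clear();
        calendar.set(2012, Calendar.JUNE, 15, 10, 30, 0);
        System.out.println(format(calendar.getTime()));
    }
}
```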

Enhancements to unexpected server exception handling

This one is just experimental, so it might not get into 2.2 final. It will all depend on the feedback I get.

From my proposal draft…

Right now, when there is an unexpected exception at the server, DJN provides a message, as per the Ext Direct specification.

While this message provides both the exception type (without the package) and the exception message, I would like to have them apart. This way, it would be possible to handle exceptions without having to dissect the message, making things easier.

Besides, I would love to return the whole exception chain, even if in a limited way: just the exception type and the message for every exception in the chain would be enough.

My proposal is to return a serverException object with the error information, providing the following entries:

exception.type: the full exception name of the topmost exception.

exception.message: the message of the topmost exception.

rootException.type: the full exception name of the root exception.

rootException.message: the message of the root exception.

exceptions: an array of elements containing the type and message for all exceptions in the exception chain.
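The entries above can be sketched in plain Java by walking the cause chain of a Throwable. The field names follow the proposal text, but this is an illustration, not the actual DJN implementation:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ExceptionChainInfo {
    // Builds the proposed serverException payload for an exception chain.
    static Map<String, Object> describe(Throwable topmost) {
        // Walk down to the root cause.
        Throwable root = topmost;
        while (root.getCause() != null) {
            root = root.getCause();
        }

        // Type and message for every exception in the chain.
        List<Map<String, String>> chain = new ArrayList<>();
        for (Throwable t = topmost; t != null; t = t.getCause()) {
            Map<String, String> entry = new LinkedHashMap<>();
            entry.put("type", t.getClass().getName());
            entry.put("message", t.getMessage());
            chain.add(entry);
        }

        Map<String, Object> result = new LinkedHashMap<>();
        result.put("exception.type", topmost.getClass().getName());
        result.put("exception.message", topmost.getMessage());
        result.put("rootException.type", root.getClass().getName());
        result.put("rootException.message", root.getMessage());
        result.put("exceptions", chain);
        return result;
    }
}
```

A handler on the ExtJs side could then branch on exception.type directly, without dissecting the message.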

As always, more tests have been added to help make DJN code more robust.

While I consider this beta to have release quality and pretty safe to use, I prefer to get ample public exposure before promoting it to final.

Two days ago I wrote about some ugly problems with Glassfish/Weld, and I ended up talking about how hard it is to get a good rhythm for writing and testing code in a JEE environment.

Time is money. If your train of thought is broken due to delays, then you are wasting even more money than you might think.

Why not spend money on tools to avoid throwing it away in low-quality work time? Tools like JRebel: it reloads classes at runtime when you modify them, without requiring you to pack them into jars/wars/ears again and restart the application(s). Almost zero code-to-deployment time in many cases. Nice!

But there is a problem: money spent on tools or libraries tends to be too solid, and therefore the people making the purchase decision become a bit too noticeable.

Developers do not use JRebel, not because using it would be unreasonable, but because somebody has to pay for it with solid money. Solid money is money that is very clearly seen as it disappears, because there is an invoice backing the expenditure. You can be caught spending solid money.

If we spend 1,000 euros on a tool, we might be seen making the expenditure. However, if we waste 10,000 euros in lost time, who the hell is gonna "see" that? This money is foggy money, money that evaporates without a trace.

Not investing in high-quality development tools, environments or training is a clear case of cheap turning out to be expensive. It is death by a thousand cuts.

Unfortunately, all too often free-but-expensive tools & techniques are preferred to non-free-but-cheap ones.

Refusing, time after time, to do what we think is right for fear of finger pointing is proof of a lack of maturity, and I doubt great software can be created when worrying and politics take the breath out of you.

It is time for the industry to mature and make a quantum leap towards responsibility, accountability and quality. Time to divert energies away from finger pointing and penny pinching, and to start thinking big & deep.

I had to publish this, just in case others are suffering from the same problem and losing their sanity…

At some moment during development, Glassfish refused to start a JEE 6 app with the following error:
Application error

Just that, no other error messages, no stack trace or exception, no nothing.

After a lot of tinkering in the dark, I found that this happened when the CDI implementation bundled with Glassfish (Weld) ran into certain problems. In my case, whenever Weld could not @Inject a remote bean with a @Remote interface using the following code:

@Inject
private UserRepository userRepository;

Of course, that can't work unless you provide the JNDI lookup. My mistake: I just didn't pay attention, because I was developing all modules in the same Eclipse project to minimize time to deploy, and forgot about the remoting tax.

Of course, using the @EJB annotation with an explicit JNDI lookup worked.

Why does @Inject not work here? Well, in order to perform an @Inject, CDI uses metadata to get the information needed to resolve the bean. But for a remote bean there is no such metadata at hand: the implementation lives in outer space, in a remote VM.

Of course, the solution to keep using @Inject is to define a producer field that specifies how to locate the remote bean, as follows:
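The producer itself is container code, but the lookup it performs can be sketched with the JDK's JNDI API. The bean type and JNDI name below are hypothetical; in a real CDI bean this helper would back a field or method annotated with @Produces:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class RemoteBeanLocator {
    // Hypothetical JNDI name: the real one depends on application,
    // module and bean names (java:global/<app>/<module>/<bean>).
    static final String USER_REPOSITORY_JNDI =
            "java:global/myapp/myejbs/UserRepositoryBean";

    // In a CDI bean this would typically be the body of a @Produces
    // method, or be replaced by @EJB(lookup = ...) on a @Produces field.
    static Object lookupUserRepository() throws NamingException {
        return new InitialContext().lookup(USER_REPOSITORY_JNDI);
    }
}
```

Note that outside a container there is no JNDI provider, so the lookup fails with a NamingException; inside Glassfish it resolves the remote bean.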

Then, whenever CDI finds an injection point like the following one, it will work:

@Inject
private UserRepository userRepository;

Back to the problem… OK, my mistake, but here Weld/Glassfish could have helped a lot by providing a better message: even "Weld could not resolve a bean" would have been better than the "something happened"-like message I got.

I was so mystified that at some point I thought the problem was due to classloading issues, and attempted all sorts of crazy unrelated things!

Why @Inject, and not @EJB?

Now, why use @Produces + @Inject instead of just @EJB, a much shorter version? Because by using @Inject and the @Produces annotation we separate the what (I want the X bean) from the how (the X bean is located in this and that way), and that is key to effective abstraction and encapsulation.

This separation of concerns makes using @Inject the best way to inject beans, backed by @Produces to define how to get them: forget about @EJB, @PersistenceContext, etc. for injection, those are there to help define the how, and only that. Of course, hello world programs need not apply!

Problem #2

My misunderstandings with Glassfish/Weld did not end there, though…

At some point I refactored a class, changing its name, and then Glassfish started complaining with the following error at runtime whenever I called an EJB method:
Unexpected remote application exception:
javax.ejb.AccessLocalException: Client not authorized for this invocation

And, no, I had not been tinkering with security, nor with remote or local interfaces.

A real mystery, as it seemed to be caused by some error while attempting to access a remote bean via a @Local interface, or something like that…

Well, no sir. It was caused by changing the class name of the remote EJB implementation (?). Bizarre! Symptoms and causes bear no relation whatsoever, at least to my mind.

To make matters worse, the only workaround was to remove the app and reconfigure and redeploy it (??).

Admittedly, that is not something you want to be doing often if you are trying to do agile development: I use Eclipse and an exploded project structure plus directory-based Glassfish deployment to minimize the time from code change to code execution, so no redeployment should be needed.

That setup is very valuable because sometimes you can change code and keep executing the app, instead of waiting ages to recompile the app and generate the jars, wars, ears and whatnot.

This generate & pack the whole universe approach that seems almost unavoidable with JEE projects is evil for productivity. We are almost forced into the Maven-based TDD paradigm, TDD standing for Time to Deploy is a Disaster here.

Frankly, sometimes I really doubt there are many people doing TDD out there for enterprise apps, when you see how lousy the tool support for zero-time code changes is, and how much time it takes to get feedback.

Because one or two minutes is a lot of time when it comes to your train of thought.