While writing negative unit test cases for the metadata parsers, I realized we are not throwing an exception when a mandatory element is missing.

We are not validating the XML file either, so even if these elements are defined as mandatory in the XSD, the parser just tries to parse them, and in a lot of cases it succeeds. The result is (for example) a DataSource without a driver class defined, which is totally bad.

We should of course avoid these situations, and XSD validation is not a solution IMHO, since our metadata could be built not only from XML but also from annotations, and maybe in the future updated from a management console.

It's not a small amount of work (even if it's not very big either), since all metadata classes have to be revisited and updated to verify and validate input data in the constructor, throw exceptions, and so on. Maybe it would be better to switch from the current constructor strategy to a builder pattern for metadata, providing validation of inputs, but I have to think about that.
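To sketch what that builder approach could look like (all class and field names here are hypothetical, not the actual metadata API):

```java
// Hypothetical sketch of a validating builder for datasource metadata.
// Names (DataSourceMetadata, jndiName, driverClass) are illustrative only.
public class DataSourceMetadata {
    private final String jndiName;
    private final String driverClass;

    private DataSourceMetadata(Builder b) {
        this.jndiName = b.jndiName;
        this.driverClass = b.driverClass;
    }

    public String getJndiName() { return jndiName; }
    public String getDriverClass() { return driverClass; }

    public static class Builder {
        private String jndiName;
        private String driverClass;

        public Builder jndiName(String v) { this.jndiName = v; return this; }
        public Builder driverClass(String v) { this.driverClass = v; return this; }

        // Validation happens once, at build time, so the resulting
        // metadata object is immutable and always valid.
        public DataSourceMetadata build() {
            if (jndiName == null || jndiName.isEmpty())
                throw new IllegalStateException("jndi-name is mandatory");
            if (driverClass == null || driverClass.isEmpty())
                throw new IllegalStateException("driver-class is mandatory");
            return new DataSourceMetadata(this);
        }
    }
}
```

With this shape, a DataSource without a driver class simply cannot be constructed: `build()` fails fast instead of letting invalid metadata escape.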

We used to have a method that would validate the metadata model after the merge and throw an exception if the constraints defined by the XSD weren't satisfied.

The datasource part is easier, since there is only one place the information can come from (XML), so the parser can be a lot stricter there.

As to updates from the management layer: the management facade must verify that the update is correct before sending the value to the instance. We can only do basic checks there, but that should be enough.

Implementation: if a builder pattern can help, go for it. The current requirements for the metadata model can't be changed, though.

Validation has been implemented as a validate method, and it's fired on every metadata creation (i.e., as part of the constructor). I think it's better to always have valid metadata, since we are working with immutable objects at the metadata level (at least in 99% of cases). Of course, the validate method is public and can be fired by the metadata's clients too, in case of forced metadata modification (the forceXXX methods). Opinions?
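As a rough sketch of that pattern (class and field names are illustrative, not the real metadata classes):

```java
// Illustrative sketch: validate() runs in the constructor so every created
// instance is valid, and it stays public so clients can re-check after a
// forceXXX modification.
public class ConnectionFactoryMetadata {
    private String jndiName;

    public ConnectionFactoryMetadata(String jndiName) {
        this.jndiName = jndiName;
        validate(); // constructor-time validation: invalid metadata never escapes
    }

    // Public so a metadata client can re-validate after forced changes.
    public void validate() {
        if (jndiName == null || jndiName.isEmpty())
            throw new IllegalArgumentException("jndi-name is mandatory");
    }

    // A forceXXX method bypasses the construction-time guarantee, so the
    // caller is expected to invoke validate() again afterwards.
    public void forceJndiName(String v) { this.jndiName = v; }

    public String getJndiName() { return jndiName; }
}
```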

At the moment I have implemented just the validation needed to respect the XSD specification (more or less mandatory fields and cardinalities). But of course there are more things we could verify at the code level. Some examples: backgroundValidationMinutes makes no sense if backgroundValidation is false; min-pool-size should not be greater than max-pool-size; and so on. Or even more complex checks, like verifying that a provided className is at least a syntactically valid class name. Since this validation comes very early in the user experience (typically during deployment), I think a fine-grained check would help users a lot, avoiding hard-to-understand runtime exceptions during datasource/resource adapter use. Of course, this fine-grained checking would need some extra analysis and some extra documentation, but I think we would get great benefits.
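A sketch of what such fine-grained, cross-field checks could look like (the field names mirror the examples above but are hypothetical, not the actual metadata API):

```java
// Sketch of fine-grained, cross-field validation beyond the XSD constraints.
// javax.lang.model.SourceVersion.isName is a standard-library way to test
// whether a string is a syntactically valid (qualified) Java name.
public final class Validation {
    public static void check(boolean backgroundValidation,
                             Long backgroundValidationMinutes,
                             Integer minPoolSize,
                             Integer maxPoolSize,
                             String className) {
        // background-validation-minutes only makes sense when enabled
        if (!backgroundValidation && backgroundValidationMinutes != null)
            throw new IllegalArgumentException(
                "background-validation-minutes set but background-validation is false");

        // min-pool-size must not exceed max-pool-size
        if (minPoolSize != null && maxPoolSize != null && minPoolSize > maxPoolSize)
            throw new IllegalArgumentException("min-pool-size > max-pool-size");

        // the class name must at least be syntactically valid
        if (className != null && !javax.lang.model.SourceVersion.isName(className))
            throw new IllegalArgumentException("invalid class name: " + className);
    }
}
```

Checks like these would fail at deployment time with a clear message, instead of surfacing later as an obscure runtime exception.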

Feedback is more than welcome; in the meantime I'll provide unit tests for the current validation.


The validator is also available as a standalone tool, and has Ant and Maven integration. Having the validator as part of our deployment chain ensures that people are actually using it before deploying to our container.

So all the triggered rules should be collected and summarized in the generated report.

Of course, this fine-grained checking would need some extra analysis and some extra documentation, but I think we would get great benefits.

Yes, it would be nice to have a very fine-grained analysis of the deployments. However, I think it is of lower priority than some of our other tasks at the moment. If we had some sort of framework for these validation rules, it would be a good task for a new contributor.


The validator is also available as a standalone tool, and has Ant and Maven integration. Having the validator as part of our deployment chain ensures that people are actually using it before deploying to our container.

So all the triggered rules should be collected and summarized in the generated report.

OK, but that isn't our validator. Here we are validating our metadata representing XML or annotations. AFAIK our validator tool is about our javax.* implementations, which are generated from this metadata. In other words, here we have implemented minimal checks from the XSD constraints, something users could achieve standalone just by validating the XML against our XSDs. Of course we are not enabling XSD validation for performance reasons, doing this validation on the generated metadata instead.
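For reference, the standalone check a user could do today with the standard javax.xml.validation API looks roughly like this (the file paths and class name are illustrative):

```java
// What a user could do standalone: validate an XML file against a schema
// using the standard javax.xml.validation API.
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public final class StandaloneXsdCheck {
    public static void validate(File xml, File xsd) throws Exception {
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new StreamSource(xsd));
        Validator validator = schema.newValidator();
        // Throws org.xml.sax.SAXException if the document violates the schema
        validator.validate(new StreamSource(xml));
    }
}
```

This is exactly the check we skip at parse time for performance, enforcing the equivalent constraints on the generated metadata instead.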

Jesper Pedersen wrote:

Of course, this fine-grained checking would need some extra analysis and some extra documentation, but I think we would get great benefits.

Yes, it would be nice to have a very fine-grained analysis of the deployments. However, I think it is of lower priority than some of our other tasks at the moment. If we had some sort of framework for these validation rules, it would be a good task for a new contributor.