This forum is now a read-only archive. All commenting, posting, and registration services have been turned off. Those needing community support and/or wanting to ask questions should refer to the Tag/Forum map, and to http://spring.io/questions for a curated list of stackoverflow tags that Pivotal engineers, and the community, monitor.

I'm trying to use the CustomEditorConfigurer so I can register a custom property editor that allows for a custom ResourceLoader to resolve resources, instead of using the Application Context's ResourceLoader:
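The original configuration snippet is missing from the archived post; the following is a minimal sketch of the kind of setup being described, where MyResourceEditor and MyResourceLoader are hypothetical classes (in Spring 2.x-era configuration the customEditors map accepted PropertyEditor instances; later versions expect propertyEditorRegistrars for stateful editors):

```xml
<!-- Hypothetical classes: MyResourceEditor resolves Resource properties
     through MyResourceLoader instead of the ApplicationContext. -->
<bean class="org.springframework.beans.factory.config.CustomEditorConfigurer">
    <property name="customEditors">
        <map>
            <entry key="org.springframework.core.io.Resource">
                <bean class="com.example.MyResourceEditor">
                    <constructor-arg>
                        <bean class="com.example.MyResourceLoader"/>
                    </constructor-arg>
                </bean>
            </entry>
        </map>
    </property>
</bean>
```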

The problem is that the custom PropertyEditor gets registered AFTER the Resource property is injected into the PropertyPlaceholderConfigurer, so the Resource gets loaded using the ApplicationContext instead of the PropertyEditor that would allow me to override this behaviour.

So my questions are:

1) Is it possible to register a custom PropertyEditor before the Resource gets injected? If so, how?
2) Am I missing some other way to achieve what I'm looking for? Is there another way to override the Application Context resource loading mechanism?


You can set the order property on both BeanFactoryPostProcessors (CustomEditorConfigurer and PropertyPlaceholderConfigurer) and make sure yours gets executed first.
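For illustration, something like this (both classes expose a setOrder property; lower values are supposed to run first):

```xml
<bean class="org.springframework.beans.factory.config.CustomEditorConfigurer">
    <property name="order" value="0"/>
    <!-- customEditors omitted for brevity -->
</bean>

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="order" value="1"/>
    <property name="location" value="classpath:application.properties"/>
</bean>
```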

Not sure if that will work because they are both BeanFactoryPostProcessors.

I'm afraid it doesn't work...

The PropertyPlaceholderConfigurer has its resource(s) injected before the BeanFactoryPostProcessors have a chance to execute their postProcessBeanFactory() method. AbstractApplicationContext's invokeBeanFactoryPostProcessors() calls beanFactory.getBean(), which injects the properties for the beans.

You can implement a custom Spring XML extension for that, i.e. you can use something like <resource:property-placeholder location="config:application.properties"/> instead of the standard <context:property-placeholder location="config:application.properties"/>
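Such an extension needs its own namespace handler and schema, registered through META-INF/spring.handlers and META-INF/spring.schemas. Usage might then look roughly like this sketch (the resource namespace URI, schema location, and parser are hypothetical):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:resource="http://www.example.com/schema/resource"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.example.com/schema/resource
           http://www.example.com/schema/resource/resource.xsd">

    <!-- Parsed by a custom BeanDefinitionParser that registers a
         placeholder configurer backed by the custom ResourceLoader. -->
    <resource:property-placeholder location="config:application.properties"/>

</beans>
```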


That is what I figured would happen (although the flow is a bit different than you are explaining here). But the general idea is the same.

Dependency injection happens AFTER the BeanFactoryPostProcessors have fired. BeanFactoryPostProcessors operate on BeanDefinitions; when they fire there are no instances yet, only configuration metadata.

I don't understand this... I've been debugging the ApplicationContext initialization process, and the Resource gets created using the (previously registered) ResourceEditor and injected into the PropertyPlaceholderConfigurer...

I'm sure there's something I don't grasp here, but the call stack for the resource creation is this (sorry for dumping the stack trace, but I can't figure out a better way of explaining :-) )

The simplest solution here would be to create a custom extension of CustomEditorConfigurer that adds the PriorityOrdered interface; that way it is taken into account earlier.

I've tried this solution (creating a subclass that implements PriorityOrdered), and the result is the same... The problem is that the PropertyPlaceholderConfigurer has its location resource already initialized in beanFactory.getBean(), and the property doesn't get injected after invokeBeanFactoryPostProcessors.

The real problem here is that you want to post process a BeanFactoryPostProcessor with another BeanFactoryPostProcessor. I was hoping that making the CustomEditorConfigurer a PriorityOrdered instance would fix it (especially if it would run before the PPHC), but this is obviously not the case. It is actually quite difficult to make a BeanFactoryPostProcessor that will process other BeanFactoryPostProcessors.

Also, the problem runs even deeper than you might think: the ResourceEditorRegistrar (used to register resource editors etc.) is invoked and registered programmatically before everything else (check the prepareBeanFactory method). It is then used to set the properties on the PPHC (due to the getBean call) before it is post processed (as I stated, this is quite difficult).



You can implement a custom Spring XML extension for that, i.e. you can use something like <resource:property-placeholder location="config:application.properties"/> instead of the standard <context:property-placeholder location="config:application.properties"/>

I've seen quite a few posts that ask about using different configuration files according to which environment (development, production, etc.) is being used. I thought of this solution for that problem (being able to load properties files using a new ResourceLoader that loads resources from an environment-specified path).

Do you think there is a better approach for this?

Thank you very much to you (and Marten) for your responses.

You're welcome.

I see two possible ways to get environment-specific behavior:

Define all settings at the environment level, i.e. set the necessary environment variables/JVM properties before the application starts, and have a delivery unit that is common to all environments;

Define the necessary settings in VCS-managed files and parametrize the build procedure, i.e. when you build the delivery unit you define the environment it's going to be deployed to, and the build script automatically applies the necessary settings.

As for me, I prefer the second way because I can then inspect all the settings from the IDE.

The problem with the second approach is that you aren't deploying the same artifact to your Test, Acceptance and Production environments. You have to recreate the artifacts, and this can be troublesome in some businesses.

There is actually a third option, which is partly option 1.

3) Specify a system property which points to the configuration directory (e.g. config_path) and reference it from your PPHC.

Code:

<property name="location" value="${config_path}/jdbc.properties" />

That way you can deploy the same war/ear/.... to the server and only have different properties files, stored in a path configured at the server.
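Spelled out as a full bean definition, the idea looks roughly like this (the file: prefix and the config_path property name are just examples; ResourceEditor resolves ${...} placeholders against system properties when the location is converted to a Resource):

```xml
<!-- Start the JVM with e.g. -Dconfig_path=/etc/myapp on each server. -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="file:${config_path}/jdbc.properties"/>
</bean>
```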


The problem with the second approach is that you aren't deploying the same artifact to your Test, Acceptance and Production environments. You have to recreate the artifacts, and this can be troublesome in some businesses.

The main point here is that all delivery units have the same codebase but different settings. Agreed, that may be inappropriate for particular situations.

Well, I've worked for a lot of banks and insurance companies, and they just don't like it if you need to build a new artifact when you move to a new server (Test/Accept/Production). They just want to copy the war/ear to the new server, maybe edit a properties file for different datasource/JMS settings, but that is it. The codebase has to be the same because that is the one that has been tested by the test department. Next to that, it also saves you two build cycles, and if you have CI in place they can just grab a nightly build and test it if they want.


Well, I've worked for a lot of banks and insurance companies, and they just don't like it if you need to build a new artifact when you move to a new server (Test/Accept/Production). They just want to copy the war/ear to the new server, maybe edit a properties file for different datasource/JMS settings, but that is it. The codebase has to be the same because that is the one that has been tested by the test department. Next to that, it also saves you two build cycles, and if you have CI in place they can just grab a nightly build and test it if they want.

I can say that Deutsche Bank is OK with different deliveries per platform.

I'm not saying that your approach is worse, just that every approach has pros and cons.