Discussions

In "Java EE: Java Ever Evolving," Sean Landis describes his impressions of Java EE from a Jini emigrant's point of view. He points out the number of divergent frameworks and the conflict between pragmatic coders and standards authors, and follows with a fairly cynical "look at all this work that's been created!" point.

I always felt that enterprise Java was a giant hairball. The more I learned, the less I liked. Everything was wrong. Since I had a Jini job, and Jini, I felt, was actually a well-designed technology, I didn't worry too much about the pathetic state of enterprise Java. Now, though, I was waiting in line and I was going to choke down the Java EE Kool-Aid regardless of how I felt about it.
...
Most of this garbage is probably a result of a rush-to-market standardization (I use that term loosely) process. To some extent, the technology was born of fear, rushed to market to be the first to fill an obvious need. Regardless, the enterprise Java world was blessed with a number of forward-thinking open projects such as Spring, JBoss, and Hibernate. These efforts, I think, really drove improvements in the standards. I may be wrong, but from my outsider's view, developers, being so pragmatic, finally got sick and tired of the crap and started building the right stuff.

His questions at the end of the article are enlightening, and worth considering:

How are organizations dealing with the turmoil of the rapidly changing enterprise computing landscape? There are so many options and choices. There are so many tools and frameworks, many of which don't seem to worry much about backward compatibility. Knowing how fast things are changing, what is the thought process for picking a suite of enterprise tools for constructing an application?
Once the choices are made and the application is built, how do companies evolve? I've been bitten by this recently on my own small EE project. In a matter of a year, the component technologies have changed dramatically. Deprecation is rampant, backward compatibility is elusive, and the standards are changing. I understand the issue of technology migration in general but in this EE world, the depth and breadth of the problem seems so vast as to make it intractable for traditional approaches.
As I interviewed for all the J2EE jobs I spoke of earlier, this theme seemed universal. All the companies were struggling to keep up. All the companies were in some form of entropic distress. None of them seemed to have an answer other than to throw more people and more expertise at their problem. Hire someone well-versed in the latest toolkits and TLAs and hope they are the hero that saves the day.
Maybe my impressions are wrong. Maybe I'm trying to justify my ingrained prejudices toward enterprise Java. I don't know. But now that I've sipped the Kool-Aid, I'm beginning to wonder if I'm making a mistake. Ah heck! Look on the bright side: If nothing else, there's plenty of work to do! Gulp!

A casual look at the job boards in the UK will shine a very strong light on this. Spring & Hibernate dominate as the alternatives to J2EE. I don't think there is too much confusion in the core J2EE territory. It's just a painful transition time as consensus is reached on the de facto standards for the other areas of development.
In terms of AJAX, JSF, Echo etc., yes, there is a lot of flux here, but Google are putting an end to that now ;-) I have a feeling we'll have a new de facto standard by next year.
I would never myself complain about the diversity of frameworks, since the one-framework-fits-all approach of J2EE was largely a miserable flop. The many frameworks have allowed a pragmatic evolution of ideas which has led to some pretty sturdy frameworks now available and contending remarkably well with the committee-ware that was J2EE.
In terms of which framework a company should invest in, well, yes, that is tricky, isn't it. But if we wanted only one option in life we would be with Mr Gates now, wouldn't we.
All the best - Neil

In terms of AJAX, JSF, Echo etc., yes, there is a lot of flux here, but Google are putting an end to that now ;-) I have a feeling we'll have a new de facto standard by next year.

I strongly suspect not. You are confusing different levels of technology here. Google has released GWT - a development framework for writing AJAX applications. JSF (and other frameworks) are designed for the production of entire websites.

In terms of AJAX, JSF, Echo etc., yes, there is a lot of flux here, but Google are putting an end to that now ;-) I have a feeling we'll have a new de facto standard by next year.

I strongly suspect not. You are confusing different levels of technology here. Google has released GWT - a development framework for writing AJAX applications. JSF (and other frameworks) are designed for the production of entire websites.

Hi Steve,
Have you taken a look at GWT? I think you'll find it already is, and intends even more to be, an AJAX-enabled version of Echo/JSF. I don't believe I have confused anything.
Kind regards
Neil

In terms of AJAX, JSF, Echo etc., yes, there is a lot of flux here, but Google are putting an end to that now ;-) I have a feeling we'll have a new de facto standard by next year.

I strongly suspect not. You are confusing different levels of technology here. Google has released GWT - a development framework for writing AJAX applications. JSF (and other frameworks) are designed for the production of entire websites.

If you design the right AJAX application (and this is admittedly still very immature), you don't need a framework.
Because you don't need a heavy web tier.
All the web server has to do is host the AJAX web pages (static or MINIMALLY generated), and serve as a service gateway/broker.
State is maintained on the client (I use a very basic applet to mimic a session), so thousands of memory-hogging and server-killing sessions disappear. No more bloated JSF. No more Swingified web components.
Admittedly, this dream gets complicated with complex data sets and paging, but that's more of a query caching problem.
I know we love our lovely heavy web tiers that we bet the farm on, career-wise. Struts++, Tapestry, Wicket, JSF, you name it. It's legacy stuff now, folks. Same thing with Ruby on Rails. I know we want to sell dozens of bloated app servers and web servers and JSF components. Sorry, it's all COBOL now.

J2EE is a collection of standard APIs, and Spring and Hibernate are not alternatives to J2EE by any means. Spring and Hibernate use many of those APIs, and Spring (core) actually does not implement anything :) it simply wraps stuff and wires it together.

J2EE is a collection of standard APIs, and Spring and Hibernate are not alternatives to J2EE by any means. Spring and Hibernate use many of those APIs, and Spring (core) actually does not implement anything :) it simply wraps stuff and wires it together.

Given that beans in EJB3 are all POJOs, the benefits of using Spring will be few.

Hi Jim
I think you'll find that most people use Spring way beyond the persistence layer. It is a pervasive Dependency Injection framework whose advantages are best appreciated through application.
Kind regards
Neil

Hi Jim,
Not only does Spring have a great DI framework, it is absolutely wonderful in how it handles transactions. If used correctly, you can scale up (or down!) from a task that does not require transactions, to one with transactional integrity that spans a single data source, to a fully JTA-compliant 2-phase-commit transaction, all by changing a configuration file only.
Never mind Spring's wonderful set of templates, which make using some of the dustier corners of J2EE (JEE? it just doesn't sound right!), such as JavaMail, far less painful.
I find Spring to be wonderful in that it lets me retrofit my existing code base, developed on J2EE, with proper DI for configuration and testing purposes. Adding in transactions becomes much easier at that point, since I can declare transactional boundaries in my configuration files without changing code. Over time, I can integrate Spring more tightly into my code, making use of its rich set of helper objects. Finally, I can take the final step of removing the overhead of a full J2EE container, or use only minimal aspects of the container for such elements as messaging. Its value is that it works *with* existing J2EE components, but helps to set you free from the heavyweight container. Combine that with all of the benefits of DI (especially testing, testing, and testing), and its ability to handle declarative transactions that can be changed from no transaction, to single-component transactions, to JTA, all without changing the underlying code, and Spring seems to me to be an obvious choice for almost any Java project.
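To make the "changing a configuration file only" point concrete, here is a rough sketch of the Spring wiring of that era. The Spring class names are real, but the bean names and the wrapped service are hypothetical, so treat this as an illustration rather than a drop-in file:

```xml
<!-- Local transactions: a Hibernate-backed transaction manager. -->
<bean id="transactionManager"
      class="org.springframework.orm.hibernate3.HibernateTransactionManager">
  <property name="sessionFactory" ref="sessionFactory"/>
</bean>
<!-- Scaling up to 2-phase commit means swapping only the class above for
     org.springframework.transaction.jta.JtaTransactionManager. -->

<!-- A transactional proxy around a plain service bean; the service code
     itself knows nothing about transactions. -->
<bean id="orderService"
      class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
  <property name="transactionManager" ref="transactionManager"/>
  <property name="target" ref="orderServiceTarget"/>
  <property name="transactionAttributes">
    <props>
      <prop key="*">PROPAGATION_REQUIRED</prop>
    </props>
  </property>
</bean>
```

The service code stays a POJO either way; only the `transactionManager` bean definition changes when the transaction strategy does.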

I find it amusing that so many people think they need something like Spring. For me, good OO design can satisfy the usual requirements for security and transaction demarcation (for example, a base Struts Action can provide for centralized commit/rollback, while a filter would close the Hibernate Session stored in a ThreadLocal). A ServiceLocator is a great alternative to DI. And finally, there are ways to write true unit tests for any kind of code, without having to change it first.
For me, the fewer frameworks and XML files, the better.
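For what it's worth, the approach described above can be sketched in a few lines of plain Java. Everything here is illustrative: Session stands in for a Hibernate Session, SessionHolder for the ThreadLocal storage, and BaseAction for the base Struts Action with centralized commit/rollback (a servlet filter would call closeAndClear() at the end of the request):

```java
// Stand-in for a Hibernate Session; flags record what happened to it.
class Session {
    boolean committed, rolledBack, closed;
    void commit()   { committed = true; }
    void rollback() { rolledBack = true; }
    void close()    { closed = true; }
}

// Per-thread session registry, opened lazily and closed by a filter.
class SessionHolder {
    private static final ThreadLocal<Session> CURRENT = new ThreadLocal<Session>();

    static Session current() {
        Session s = CURRENT.get();
        if (s == null) { s = new Session(); CURRENT.set(s); }
        return s;
    }

    // What the servlet filter would do after the request completes.
    static void closeAndClear() {
        Session s = CURRENT.get();
        if (s != null) { s.close(); CURRENT.remove(); }
    }
}

// The base action: commit/rollback centralized in one place.
abstract class BaseAction {
    final void execute() {
        Session session = SessionHolder.current();
        try {
            doExecute(session);      // use-case logic supplied by the subclass
            session.commit();
        } catch (RuntimeException e) {
            session.rollback();
            throw e;
        }
    }

    protected abstract void doExecute(Session session);
}
```

No framework and no XML: the subclass only writes doExecute(), and the transaction outcome is decided in one place.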

(for example, a base Struts Action can provide for centralized commit/rollback, while a filter would close the Hibernate Session stored in a ThreadLocal).

First, handling transactions in the controller is not a good idea, in my opinion.
And using a bloated base class has a lot of shortcomings, which have been documented several times:
-You can't choose which services you want.
-You have to deal with an overcomplex API in most cases.
-You have to explicitly call methods (for instance, to define transaction boundaries).
-Sometimes the concrete class is already extending another class.
-Good luck reusing your code.
-Your base class becomes severely bloated, or depends upon tons of other classes and becomes very fragile.
Using an AOP solution (like Spring) or a metadata-based interceptor solution (like Hibernate or EJB3) is better. A filter is basically just a web-specialized interceptor.
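The interceptor idea doesn't even require Spring; the JDK's dynamic proxies are enough to demonstrate it. In this sketch (all names hypothetical, with a dummy Tx class standing in for a real transaction API), transaction demarcation wraps every call to an interface-typed service while the target class stays completely unaware:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Imaginary transaction API, standing in for JTA / Hibernate transactions.
class Tx {
    static int commits, rollbacks;
    static void begin() { }
    static void commit() { commits++; }
    static void rollback() { rollbacks++; }
}

interface OrderService { String place(String item); }

// The target class: plain business logic, no transaction code anywhere.
class OrderServiceImpl implements OrderService {
    public String place(String item) {
        if (item == null) throw new IllegalArgumentException("no item");
        return "placed:" + item;
    }
}

// The interceptor: begin/commit/rollback around every call, invisible to the target.
class TransactionInterceptor implements InvocationHandler {
    private final Object target;
    TransactionInterceptor(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        Tx.begin();
        try {
            Object result = m.invoke(target, args);
            Tx.commit();
            return result;
        } catch (InvocationTargetException e) {
            Tx.rollback();
            throw e.getCause();
        }
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface },
                new TransactionInterceptor(target));
    }
}
```

Stacking a second concern, say auditing, would just be another wrap() around the proxy, which is exactly the pick-and-choose flexibility a base class can't offer.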

A ServiceLocator is a great alternative to DI. And finally, there are ways to write true unit tests for any kind of code, without having to change it first.

For me, the fewer frameworks and XML files, the better.

I hate the pull method; the push is way more powerful because:
-You don't need to call the service locator or whatever factories.
-You don't need to program tons of factories, builders, service locators, ...
-...
Anyway, I'll stop here because this subject is already covered extensively, but DI hasn't become so popular for nothing.

As already pointed out in the previous item, the code that deals with the transaction service is centralized in BaseAction. It is not spread in the codebase, as you seem to imply.

-Sometimes the concrete class is already extending another class.

Not in our case, and we have to extend from the Struts Action class anyway.

-Good luck reusing your code.

Our application infrastructure layer (including the BaseAction class) is, at least in principle, fully reusable for web apps following the same architecture, even though such reuse in other apps is not a project requirement.

-Your base class becomes severely bloated, or depends upon tons of other classes and becomes very fragile.

Granted, our BaseAction class could be simplified, but still, it only contains about 400 lines of code, and depends on five other infrastructure packages, only one of which is involved in transaction demarcation.

I hate the pull method; the push is way more powerful because: -You don't need to call the service locator or whatever factories.

Yes, but with DI you need extra fields and setters/constructors, which are not needed with a ServiceLocator. So, in both cases there is additional complexity.

As already pointed out in the previous item, the code that deals with the transaction service is centralized in BaseAction. It is not spread in the codebase, as you seem to imply.

-Sometimes the concrete class is already extending another class.

Not in our case, and we have to extend from the Struts Action class anyway.

-Good luck reusing your code.

Our application infrastructure layer (including the BaseAction class) is, at least in principle, fully reusable for web apps following the same architecture, even though such reuse in other apps is not a project requirement.

-Your base class becomes severely bloated, or depends upon tons of other classes and becomes very fragile.

Granted, our BaseAction class could be simplified, but still, it only contains about 400 lines of code, and depends on five other infrastructure packages, only one of which is involved in transaction demarcation.

And soon you need to handle security, remoting, auditing, and you are screwed. Granted, those services are usually not handled by the controller but by a service facade, as transactions are, but the problem is still the same. Of course, if you only need transactions there isn't any problem, but if you need to address just one more cross-cutting concern, you will be in trouble. All my arguments were based upon this assumption.
A base class doesn't allow you to choose the services you want. For instance, let's say in one case I need transactions but not auditing, while in another case I need both; how is that possible using a base class? By adding protected methods? Your base class would end up with a very complex API, and your code base would become bloated with those method calls. Your cross-cutting concerns are no longer handled invisibly. A base class might be okay in some cases, but it isn't very flexible. A good old interceptor (proxy) or AOP is a better choice here, because you can decide which combination you want to use while they stay invisible to the target class.

Yes, but with DI you need extra fields and setters/constructors, which are not needed with a ServiceLocator. So, in both cases there is additional complexity.

An application usually needs no more than one ServiceLocator. For example, XyzService xyz = ServiceLocator.get(XyzService.class);.
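A locator behind that one-liner can indeed be tiny. Here is a minimal type-keyed sketch; the XyzService interface and all names are illustrative, and a test would simply register a mock implementation instead of the real one:

```java
import java.util.HashMap;
import java.util.Map;

interface XyzService { String name(); }

// Minimal type-keyed service locator backing the one-liner above.
// Registration would typically happen once at application start-up.
class ServiceLocator {
    private static final Map<Class<?>, Object> services = new HashMap<Class<?>, Object>();

    static <T> void register(Class<T> type, T impl) { services.put(type, impl); }

    static <T> T get(Class<T> type) {
        Object impl = services.get(type);
        if (impl == null) {
            throw new IllegalStateException("no service registered for " + type.getName());
        }
        return type.cast(impl);
    }
}
```

The client code then needs no extra fields or constructors, at the price of a hidden dependency on the locator itself.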

Anyway, I'll stop here because this subject is already covered extensively but DI hasn't become so popular for nothing.

I have read lots on DI and Spring, but still am not convinced it's as useful as advertised. Maybe people just want to have more frameworks listed in their resume, I don't know.

Maybe one type of service locator (for retrieving the datasource, I suppose), but usually more than one implementation (mock, HSQL (a db for test purposes), Oracle or MySQL (the production db)). But what about factories and builders? After all, a service locator is just a particular creational pattern. My code used to be cluttered with those types of objects before I started using DI.

And soon you need to handle security, remoting, auditing, and you are screwed.
Granted, those services are usually not handled by the controller but by a service facade, as transactions are, but the problem is still the same. Of course, if you only need transactions there isn't any problem, but if you need to address just one more cross-cutting concern, you will be in trouble. All my arguments were based upon this assumption.

The app I mentioned does handle security (with URL-based access control implemented with a filter) and auditing (implemented with a Hibernate Interceptor, plus an API for more complex cases). Remoting isn't a need/requirement for this app, but if we ever need to expose some service for remote access, we would add a RemoteFacade or a set of Web Services; in any case, the current infrastructure shouldn't require changes.

A base class doesn't allow you to choose the services you want. For instance, let's say in one case I need transactions but not auditing, while in another case I need both; how is that possible using a base class? By adding protected methods? Your base class would end up with a very complex API, and your code base would become bloated with those method calls.

At least for us, auditing and transactions are orthogonal issues, implemented in packages with no inter-dependencies.
The BaseAction class is a Layer Supertype (in Martin Fowler's terminology), containing a Template Method; it does not expose an API for subclasses or any other class to call.

Your cross-cutting concerns are no longer handled invisibly.

Well, while implementing application use cases our programmers don't actually write any code to handle the cross-cutting concerns of transaction demarcation, security, and auditing, except in the rare special case. So, if by "handled invisibly" you mean with no explicit code, I guess we got it.

A base class might be okay in some cases, but it isn't very flexible. A good old interceptor (proxy) or AOP is a better choice here, because you can decide which combination you want to use while they stay invisible to the target class.

I try to stay away from unneeded "flexibility", especially if it requires the use of a complex framework, tool or language that the team is unfamiliar with. But if the benefits are worth the cost, then I would go for it. So far, the benefits of AOP or even DI didn't seem to be worth their cost in the applications I helped to develop.

Maybe one type of service locator (for retrieving the datasource, I suppose), but usually more than one implementation (mock, HSQL (a db for test purposes), Oracle or MySQL (the production db)). But what about factories and builders? After all, a service locator is just a particular creational pattern. My code used to be cluttered with those types of objects before I started using DI.

Our app only uses a DataSource indirectly, through Hibernate configuration. Hibernate itself is encapsulated in a persistence subsystem behind a Static Facade (well, except for the use of HQL in strings).
The application doesn't actually have any ServiceLocators at all :) Nor does it have lots of factories, even though we still "program to interfaces" in the true GoF sense. BTW, a ServiceLocator can be an Abstract Factory implemented as a Singleton, and similarly it can be used to create whole families of objects.
For unit testing, I actually had to create a new tool to enable the creation of mocks for concrete classes that don't implement any Java interface, are final, have static methods, and/or are instantiated with "new" in client code. (Unfortunately, the app implementation didn't start with TDD two years ago, but today we at least have the ability to do that for new code.)

Well, I think it's obvious we won't agree there, but I would point out that my discussion was focused on handling those concerns at the service level (quite important if your application is AJAX- or SOA-based), and that a filter is indeed a proxy, but in the Web layer. So, as I pointed out, the proxy solution is very powerful.
Anyway, I just wanted to specify one last thing:

I try to stay away from unneeded "flexibility", especially if it requires the use of a complex framework, tool or language that the team is unfamiliar with. But if the benefits are worth the cost, then I would go for it. So far, the benefits of AOP or even DI didn't seem to be worth their cost in the applications I helped to develop.

I agree with you on this one, since I believe in agile methods, but at the architectural level I think it is very important to have a flexible architecture, to allow design refactoring and to be sure it is going to scale up. I think it's worth paying the little extra cost associated with a good architecture, as long as it doesn't make my code more complex. As for design, I agree with the mantra: "Do the simplest possible thing that could possibly work, then refactor." Don't bring in unnecessary flexibility until you actually need it; that's why I like DI. I don't have to plan to call a factory or builder; DI will let me inject whatever I want. Hence, the indirection cost is nil.

J2EE is a collection of standard APIs, and Spring and Hibernate are not alternatives to J2EE by any means. Spring and Hibernate use many of those APIs, and Spring (core) actually does not implement anything :) it simply wraps stuff and wires it together.

Given that beans in EJB3 are all POJOs, the benefits of using Spring will be few.

Sure, like POJOs alone are sufficient... The POJO model is not what is hard to get right. Everyone by now knows it is the way to go.
But in the end, you still need to handle those cross-cutting concerns (annotation-based proxy, AOP, ...). This is where you should compare Spring and EJB3. In my opinion, Spring is still more powerful and easier to use than EJB3. The main weakness of Spring, i.e. complex configuration, is solved in Spring 2. They have introduced a lot of new configuration tags to hide the internals of Spring. Plus, you can use Java instead of XML now.

J2EE is a collection of standard APIs, and Spring and Hibernate are not alternatives to J2EE by any means. Spring and Hibernate use many of those APIs, and Spring (core) actually does not implement anything :) it simply wraps stuff and wires it together.

Hi Konstantin
Fair point.
I was not very clear, was I :-) I do realise they are not replacements for all parts of the J2EE technology stack. They are, however, the two key replacement technologies for a misguided design methodology which most of us associate with J2EE. I don't think (myself) of J2EE as the individual parts (like JMS or EJB) but as the combination (i.e. the J2EE blueprints ;-( ).
My point was that the bulk of jobs in the enterprise market in the UK were in the past looking for J2EE-based strategies/technologies as the backbone of their applications, and therefore those skillsets. Whereas now the aim is to provide more lightweight applications which require less plumbing, unnecessary remoting, etc., and this is reflected by the demand for Spring/Hibernate skills.
Spring and Hibernate do provide an alternative plumbing and persistence mechanism, which on a day-to-day level is most of what I (and many others) have to deal with. The J2EE apps I have worked on have in fact usually been more plumbing than business logic.
Maybe I would have better phrased it as:
Spring & Hibernate dominate amongst the alternatives to the J2EE paradigm
Regards
Neil

This is about as brutally honest as I've seen.
Anyone who believes Spring is just a helper package that makes J2EE programming easier hasn't gone down the slippery slope yet. It replaces the J2EE paradigm with the Spring paradigm, trading one Kool-Aid for another.

Anyone who believes Spring is just a helper package that makes J2EE programming easier hasn't gone down the slippery slope yet. It replaces the J2EE paradigm with the Spring paradigm, trading one Kool-Aid for another.

In a matter of a year, the component technologies have changed dramatically. Deprecation is rampant, backward compatibility is elusive, and the standards are changing.

Spring & Hibernate dominate as the alternatives to J2EE. I don't think there is too much confusion in the core J2EE territory. It's just a painful transition time as consensus is reached on the de facto standards for the other areas of development.

I'm pretty sure that Spring and Hibernate will be the next on the list of deprecated J2EE frameworks. They are better than Entity Beans, but their configuration is too complex for real-world applications. They provide flexibility where you don't need it.

I'm pretty sure that Spring and Hibernate will be the next on the list of deprecated J2EE frameworks. They are better than Entity Beans, but their configuration is too complex for real-world applications. They provide flexibility where you don't need it.

Are you kidding? Hibernate is a superset of EJB3 and the underlying persistence engine for JBoss's EJB3/JPA implementation. It won't be going away anytime soon, no matter how you slice it. Even alone, Hibernate has a substantial enough implementation base and community to carry it well into the future. From a configuration standpoint, you can use XML or annotations to define your entity mappings, so configuration is really flexible (as opposed to too complex for real-world applications).
Per Spring... You can pick, a la carte, any of the many features Spring offers and use them independently, as needed (AOP, IoC, data abstraction, transaction management, MVC, remoting, JMS, JMX, etc.).
Sure, you may have to wire your beans in the config files, but the abstraction and simplification it provides from an architectural and code perspective far outweigh any configuration effort expended.
So, unless someone is choosing to use features that Spring provides where they don't need them, I don't see how it is providing 'flexibility where you don't need it'. With regard to Hibernate, it may not be the most suitable persistence choice for your project, but it certainly isn't too complex, considering all that it shields you from (and considering the complexity of the subject matter behind persistence architectures, and the wide array of other benefits, like database creation, that it offers out of the box)...

Hibernate is a superset of EJB3 and the underlying persistence engine for JBoss's EJB3/JPA implementation. It won't be going away anytime soon, no matter how you slice it. Even alone, Hibernate has a substantial enough implementation base and community to carry it well into the future. From a configuration standpoint, you can use XML or annotations to define your entity mappings, so configuration is really flexible (as opposed to too complex for real-world applications).

What you describe as "really flexible" is the problem, not the solution. For larger projects (hundreds of tables), the Hibernate mapping configuration can hardly be handled manually any more, so you need a tool like Middlegen that generates "all the repetitive, tedious to write code and configuration files for you".

cv,
i agree in part with your response here. in the context of large projects (hundreds of tables), code gen tools can really help out. however, i don't think that problem really has anything to do with the points that were originally made. i think it would have been better put: things that you find "really flexible" may not apply in all contexts, large development efforts for example.
configuration is always a bear on larger projects; mapping metadata for the persistence technology is just one dimension of the complexity one must deal with. therefore the problem is not the flexibility, as you state, but rather the complexity imposed by a large project.
sean

configuration is always a bear on larger projects; mapping metadata for the persistence technology is just one dimension of the complexity one must deal with. therefore the problem is not the flexibility, as you state, but rather the complexity imposed by a large project.

Configuration is a necessary evil. But one shouldn't sign a pact with the Devil by keeping an unlimited number of big, unrelated configuration files! Consistency is key when it comes to managing projects.
A good rule of thumb is: if you can avoid a configuration file by using sensible defaults, then go for it!
Take for example my pet project (shameless plug!): MessAdmin. While I could have delegated a lot of the work to the user, I strove to make everything as simple as possible, and it shows: using MessAdmin is now dead easy!
So, use config files where necessary, but keep them to a minimum as far as possible.

configuration is always a bear on larger projects; mapping metadata for the persistence technology is just one dimension of the complexity one must deal with. therefore the problem is not the flexibility, as you state, but rather the complexity imposed by a large project.

Configuration complexity is not inevitable. It exists because the frameworks are made excessively and unnecessarily configurable. The Hibernate documentation ( http://www.hibernate.org/hib_docs/v3/reference/en/html/ ), for example, consists largely of configuration and mapping options which often duplicate information (keys, columns, associations, ...) that is already defined in the database schema.
I'm very interested in a simple and convenient O/R mapping framework, but one that offers reasonable defaults (and code generation) instead of configuration complexity.

So write an implementation of org.hibernate.cfg.NamingStrategy that defines mappings based on naming conventions that match your code and your database schema. You can get all the goodness of Hibernate with all the convenience of RoR Active Record.
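The conversion rules such a strategy encodes are simple enough to sketch without Hibernate on the classpath. The two methods below mirror the shape of classToTableName / propertyToColumnName from org.hibernate.cfg.NamingStrategy; the camelCase-to-underscore convention itself is just an example:

```java
// A naming convention of the kind a custom NamingStrategy would encode:
// camelCase Java names map to lower_case database identifiers, making
// per-class and per-property mapping entries unnecessary where it holds.
class ConventionNames {
    // e.g. OrderLine -> order_line
    static String classToTableName(String className) {
        return camelToUnderscore(className);
    }

    // e.g. unitPrice -> unit_price
    static String propertyToColumnName(String propertyName) {
        return camelToUnderscore(propertyName);
    }

    private static String camelToUnderscore(String name) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if (Character.isUpperCase(c) && i > 0) out.append('_');
            out.append(Character.toLowerCase(c));
        }
        return out.toString();
    }
}
```

Wrapping these rules in a real NamingStrategy implementation and registering it via Configuration.setNamingStrategy(...) would then let explicit table/column mappings be omitted wherever the convention applies.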

Configuration complexity is not inevitable. It exists because the frameworks are made excessively and unnecessarily configurable. The Hibernate documentation ( http://www.hibernate.org/hib_docs/v3/reference/en/html/ ), for example, consists largely of configuration and mapping options which often duplicate information (keys, columns, associations, ...) that is already defined in the database schema. I'm very interested in a simple and convenient O/R mapping framework, but one that offers reasonable defaults (and code generation) instead of configuration complexity.

Tools and APIs evolve as time goes on. What started as a "simple little tool" quickly becomes an advanced or even complex one. All the feature requests people submit and crave tend to pile up as new features in a tool / API / product. The problem, especially with open source, is that there is little to hold back the feature-request flood. ;)
But if you had actually used Hibernate and its tools, like schemaexport, hbm2ddl etc., you would know that Hibernate has pretty good "reasonable defaults" for "keys, columns, associations". I have used the tools since 2.x, doing things like customizing the templates (the things that are partially responsible for the "reasonable defaults") and adding Middlegen to the mix (when there was no GUI for the mapping generation), and found that while the result may not always be perfect (nothing is...), it is still pretty good.
I agree with you on the point that the Hibernate documentation should be less about the configuration options and more about how to actually use the ORM correctly. If I were responsible for the configuration / documentation, I would use, for example, XML Schema (maybe RELAX or Schematron <-- BTW, are people actually using these?) to describe the different "configuration languages" (ORM, SessionFactory etc.).
-I

configuration is always a bear on larger projects; mapping metadata for the persistence technology is just one dimension of the complexity one must deal with. therefore the problem is not the flexibility, as you state, but rather the complexity imposed by a large project.

Configuration complexity is not inevitable. It exists because frameworks are made excessively and unnecessarily configurable. The Hibernate documentation ( http://www.hibernate.org/hib_docs/v3/reference/en/html/ ), for example, consists largely of configuration and mapping options that often duplicate information (keys, columns, associations, ...) already defined in the database schema. I'm very interested in a simple and convenient O/R mapping framework, but one that offers reasonable defaults (and code generation) instead of configuration complexity.

What you describe as "really flexible" is the problem, not the solution. For larger projects (hundreds of tables), the Hibernate mapping configuration can hardly be handled manually any more, so you need a tool like Middlegen that generates "all the repetitive, tedious to write code and configuration files for you".

So you think you'd have an easier time hand-coding all of the SQL for hundreds of tables? I've worked on several applications that had hundreds of tables, and development time was reduced by about 80% using Hibernate over hand-coded SQL.

There are obvious advantages to Spring, Hibernate, etc. But I think the problem some are describing with regard to configuration (what some regard as just a "little" configuration) is that things were easier when there was less "pattern" and "XML" hell out there. Patterns are obviously good, but staying "proper" rather than "deviant" can sometimes enforce a repetitious chore. XML is wonderful because it is pure metadata and data in plain text, predictably parsable (in every way), ubiquitous, etc.
There are frameworks that are combating the problem with a pattern that is, in a sense, a pattern of using less pattern (or fewer repetitious indirections): the DRY principle (don't repeat yourself). Ironic... maturity is often near when irony becomes this apparent. Still, nothing has arrived yet to displace XML (the config.sys of our day); maybe the next thing to hit config.sys (XML) will be Windows 3.1 (jk).
What I do mean is that wizards tend to make light work of configuration. JBoss was doing it (in a limited way) when it started shipping deployment manager applications, and XDoclet was one of the many attempts to solve XML hell, but it puts the problem back where XML was trying to reduce it: XML is a way of taking things out of code, and XDoclet was putting them back in. Yes, putting concepts close to their usage is a long-known scheme, but nothing quite captures the simplicity of a good tool or wizard. In Java, few have the simplicity of, say, an ASP.NET web wizard. Yes, I know one can do equivalent things (after downloading and configuring etc. into the wee hours of the morning)... but those who say so sometimes forget about that latter part. Programmers haven't made their own jobs easy enough yet, because customers (and the community of vendors) have come first.

What you describe as "really flexible" is the problem, not the solution. For larger projects (hundreds of tables), the Hibernate mapping configuration can hardly be handled manually any more, so you need a tool like Middlegen that generates "all the repetitive, tedious to write code and configuration files for you".

So what's your proposed alternative? Manual JDBC? Even Spring JDBC templates would be a PITA compared to Hibernate. I've been using WebLogic Workshop for the past 8 months, and it's supposed to make all this "easy". Well, it is, until you have to start testing and re-testing, and oh, there's a bug in that control, so work around it. Pretty soon you end up with complexity that far outweighs that of Hibernate, with NONE of the flexibility.
The way that seems easy leads to much wailing and gnashing of teeth. The way that seems hard is in reality the easy path.

For larger projects (hundreds of tables), the Hibernate mapping configuration can hardly be handled manually any more, so you need a tool like Middlegen that generates "all the repetitive, tedious to write code and configuration files for you".

Actually, I am working on a large project right now (I've been here for 3 of its 5 years) that currently has 857 tables. No exaggeration. True, we don't write hbm.xml files by hand, but I venture that trying to tame our monster database with Middlegen would have been nearly impossible. Instead, developers write XML files that describe POJO properties and the table columns they map to. Then the POJO files and associated hbm.xml files get generated via a Velocity template of our own design. So we're still hand-coding all those XML files, and it's been more than doable for us. In fact, it's been a necessity for us to get it right.
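A generation step like the one described (hand-written descriptor XML fed through a Velocity template) ultimately boils down to mechanical text emission. A minimal sketch of that idea in plain Java, with a hypothetical class name and hypothetical property/column pairs (this is not the poster's actual template):

```java
// Hypothetical sketch: emit minimal hbm.xml <property> elements from
// (property, column) pairs, the way a code-generation template would.
public class HbmPropertyEmitter {

    // Emits one <property> element per {propertyName, columnName} pair.
    public static String emit(String[][] propToColumn) {
        StringBuilder xml = new StringBuilder();
        for (String[] pair : propToColumn) {
            xml.append("  <property name=\"").append(pair[0])
               .append("\" column=\"").append(pair[1]).append("\"/>\n");
        }
        return xml.toString();
    }
}
```

The real thing would of course also handle ids, associations, and types; the point is only that once the descriptor data exists, the output is purely repetitive and safe to generate.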

Actually, I am working on a large project right now (I've been here for 3 of its 5 years) that currently has 857 tables. No exaggeration. True, we don't write hbm.xml files by hand, but I venture that trying to tame our monster database with Middlegen would have been nearly impossible. Instead, developers write XML files that describe POJO properties and the table columns they map to. Then the POJO files and associated hbm.xml files get generated via a Velocity template of our own design. So we're still hand-coding all those XML files, and it's been more than doable for us. In fact, it's been a necessity for us to get it right.

So, assuming one table only has 10 columns, you write column-to-property mappings for about 8570 columns?! Why don't you generate those mappings from the database schema?
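For what it's worth, the mechanical core of "generate the mappings from the schema" is mostly name derivation; a sketch of that single step in plain Java (the column names are hypothetical, and real generators read them from JDBC metadata rather than taking them as strings):

```java
// Hypothetical sketch: derive a Java property name from a SQL column name,
// the way a schema-driven mapping generator would by default.
public class ColumnNameMapper {

    // e.g. CUST_FIRST_NM -> custFirstNm
    public static String toProperty(String column) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : column.toLowerCase().toCharArray()) {
            if (c == '_') {
                upperNext = true;        // next letter starts a new word
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }
}
```

This also shows the limitation raised later in the thread: a purely mechanical rule can only ever produce names as good as the column names themselves.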

Actually, I am working on a large project right now (I've been here for 3 of its 5 years) that currently has 857 tables. No exaggeration. True, we don't write hbm.xml files by hand, but I venture that trying to tame our monster database with Middlegen would have been nearly impossible. Instead, developers write XML files that describe POJO properties and the table columns they map to. Then the POJO files and associated hbm.xml files get generated via a Velocity template of our own design. So we're still hand-coding all those XML files, and it's been more than doable for us. In fact, it's been a necessity for us to get it right.

So, assuming one table only has 10 columns, you write column-to-property mappings for about 8570 columns?! Why don't you generate those mappings from the database schema?

Take a look at Hibernate-IDE and then come back. You will see that it is already possible, but they have chosen to add this feature at the tool level instead of directly in the framework, which makes more sense IMO.
Anyway, in my case, I prefer to model my domain, then generate my DB schema, and then tweak it for any other considerations. Why would you generate a domain model from a DB??? You lose all the benefits of a domain model, so why bother with its complexity? Just stick with a transaction script in that case and don't use an ORM tool.

We don't generate our domain model from the DB. While we do have a kind of object that maps 1:1 to our tables, those get composed into another kind of object, which is our domain model. The complexity of our application warranted that kind of separation.
I'm happy for you that you have the flexibility to generate your database schema from your object model, but in my experience that is a privilege available only to small projects run by a few people who are in control. In the medium to large enterprise projects I've worked on, the database schema comes first (sometimes it has long pre-existed), and the middle-tier "object" guys have to "deal with it."
And to address the previous poster's comment, I can't see auto-generating a mapping from the database schema ever working for us more than once, as we put thought into each object, including (at the least) the property names we assign to (what often seem like obfuscated) column names. Add to that the fact that our database schema is constantly evolving, and you can see how it was not practical for us.
To get on my soap box for a second (and this is not aimed at you), it does bother me when people say "why didn't you just do this?" - when they have absolutely no idea what others are up against. With no knowledge of the business or functional requirements, the size, length or scope of the work, the size of the team, etc., they make rash, terse statements. People need to realize that every project is different, and there is no golden hammer. Developers have to make the best choices given their circumstances. And for many, that means (some amount of) ORM.

I'm pretty sure that Spring and Hibernate will be the next on the list of deprecated J2EE frameworks. They are better than Entity Beans, but their configuration is too complex for real-world applications. They provide flexibility where you don't need it.

So what you are saying is that Spring does not cut it for real-world applications?
I've heard that quite a few of the largest banks use Spring, and if I get you right, they are probably just using Spring for prototypes ;)

I'm pretty sure that Spring and Hibernate will be the next on the list of deprecated J2EE frameworks. They are better than Entity Beans, but their configuration is too complex for real-world applications. They provide flexibility where you don't need it.

i think a perspective check is in order here. cv,
what do you consider a "real-world" application?
just want to make sure that we share the same reality.
sean

I'm pretty sure that Spring and Hibernate will be the next on the list of deprecated J2EE frameworks. They are better than Entity Beans, but their configuration is too complex for real-world applications. They provide flexibility where you don't need it.

Be careful now - I've been scolded many times for disparaging Hibernate and ORM. It's the elixir of our times. Don't you know?

You are sick. I'm sorry, but you are. The fact is, I can deploy almost any Java app I've written in the past 8 years on any VM and any app server I want with no problems.
I'm sorry you have problems digesting technology, but I have seen enough of your posts. Why don't you use Ruby if your pants are in such a wad? You are just wasting space on here.

Sadly my account to TSS was somehow mangled and I couldn't log in to post. Now that I can, the thread is in the old news section...
I'm glad that my original post on Artima.com triggered some discussion here. A lot of posts about which enterprise technology is best, etc. Unfortunately, I didn't see one post that actually discussed the questions that Joseph Ottinger (and I) felt were worth considering.
The fact that this thread wandered away from the questions and into techno-religious discourse reinforces my concerns about the divergence in enterprise Java and the challenges faced by corporations using these technologies.
