Discussions

iBATIS has released a Database Layer aimed at improving the design and implementation of persistence layers for Java apps, using two main APIs: SQL Maps for reducing JDBC code and Data Access Objects for abstracting the persistence implementation details.

The iBATIS Database Layer is designed for the Real World, where there are no perfect database designs and no perfect object models. We are not always able to change either of these, and with the iBATIS Database Layer, you won't have to.

SQL Maps puts the full power of real SQL at your fingertips. There are no complex O/R mappings or query languages to learn. You already know SQL and XML, and that's all you need. Your maps can be as simple as this:
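As an illustration only (the statement id, bean class, and syntax here follow the later SQL Maps 2.x style and are invented for this sketch), a map can look roughly like this:

```xml
<sqlMap namespace="Person">
  <!-- Column aliases line up with the properties of an ordinary JavaBean -->
  <select id="getPerson" parameterClass="int" resultClass="examples.domain.Person">
    SELECT PER_ID AS id, PER_FIRST_NAME AS firstName
    FROM PERSON
    WHERE PER_ID = #value#
  </select>
</sqlMap>
```

The SQL stays real SQL; the XML only names it and describes how the result row becomes a bean.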

How is this better than writing SQL queries in Java code? <
It is more convenient. Consider the alternative: a framework that forces you to write additional code for each query by subclassing something or by creating extra objects. Here you only need a JavaBean, and no extra code, to get the job done. You could even persist existing JavaBeans. I would say that is a definite advantage.
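To make the contrast concrete, here is a toy sketch (not the iBATIS API; class and method names are invented) of the core idea: result columns are copied into an ordinary JavaBean by name, so the bean itself is the only code you write:

```java
import java.lang.reflect.Method;
import java.util.Map;

// An ordinary JavaBean: no base class, no framework interfaces.
class Person {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class BeanMapper {
    // Copies each column of a result row into the bean property of the same name.
    static <T> T map(Map<String, Object> row, Class<T> type) throws Exception {
        T bean = type.getDeclaredConstructor().newInstance();
        for (Map.Entry<String, Object> col : row.entrySet()) {
            // Column "name" is delivered to setName(...), "firstName" to setFirstName(...), etc.
            String setter = "set" + Character.toUpperCase(col.getKey().charAt(0))
                    + col.getKey().substring(1);
            for (Method m : type.getMethods()) {
                if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                    m.invoke(bean, col.getValue());
                    break;
                }
            }
        }
        return bean;
    }
}
```

A real mapper adds type conversion, caching and externalized SQL, but the point stands: the bean stays this plain.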

You do not understand this product at all.
I think there are two schools of thought regarding the database part of a project.
(I am assuming that a relational database is used, which is the most common case.)

The first approach is to use an O/R mapping tool such as Hibernate, JDO, or CMP EJBs. (BTW, I have tested Hibernate 2.0beta and I think it gives EJB-CMP a run for its money; I have not used any JDO products, but I think the interface sucks.)
People who like this approach like to see the RDBMS as some kind of proprietary black box that should be completely abstracted from the rest of the system, so that it can be replaced whenever you want (because there are many available and all of them are mostly the same), and they are happy with a database design generated automatically by an O/R mapping tool.

A second approach states that the database design is an integral part of the system and should be done very carefully by someone who completely understands relational databases and knows a lot about the details of the RDBMS being used, so that the database is impossible to corrupt (that is, there is no way to violate integrity constraints) and provides the best possible performance.

I personally think that any complex (and many not-so-complex) system requires a database design that can only be implemented correctly by a skilled DB developer, not by generic code generated automatically by some tool.
For the business logic part you also need skilled OO programmers (and domain experts) who design an object view/model of the system.

I think people who like the first approach are wrong. And I think so because many of them assume that RDBMSs are proprietary things that should be manipulated from a "standard" system that hides these proprietary features. The problem is that the ONLY real standard for relational databases is SQL, and it is way more powerful than any O/R mapper. In fact, most O/R mappers have to provide some "standard" query language similar to SQL to provide some of its functionality, but these languages are not real standards (at least, not as standard as SQL) and they prevent you from using all standard and proprietary SQL features.
When I say that it is good to use proprietary SQL features, many people tell me that I contradict myself, but that is just not true, because porting proprietary SQL from one RDBMS to another is fairly simple. What you need is a way to easily change the SQL without changing your application code, and this is the functionality iBATIS aims to provide. In fact, it is probably much easier to port SQL from, say, Sybase to Oracle than to port a mapping done with one JDO tool to another (or from one CMP engine to another).
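As a sketch of that porting story (the statement id, SQL, and syntax below follow the later SQL Maps 2.x style and are invented for illustration): the application always calls the same named statement, and only the mapped SQL changes per vendor:

```xml
<!-- oracle/Person.xml : deployed against Oracle -->
<select id="getNewestPeople" resultClass="examples.domain.Person">
  SELECT * FROM (SELECT * FROM PERSON ORDER BY CREATED DESC) WHERE ROWNUM &lt;= 10
</select>

<!-- sybase/Person.xml : deployed against Sybase -->
<select id="getNewestPeople" resultClass="examples.domain.Person">
  SELECT TOP 10 * FROM PERSON ORDER BY CREATED DESC
</select>
```

Porting becomes an exercise in editing SQL files, with no Java recompiled.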
I think O/R mappers are good for projects where data integrity and RDBMS performance is not a very important issue, but most projects do not fall in this category.
I have worked on projects for the oil & gas industry where we had to access very large and complex existing (and also very well designed) databases, and it was simply impossible to use an O/R mapper to map the DB information into objects. We ended up using EJB-BMP or straight JDBC. It would have also been impossible to create the original database with an O/R mapper while enforcing all integrity constraints and getting maximum performance.
This iBatis DB tool would have been of great help.
I was even going to try to create a tool like iBatis, but maybe iBatis is just what I need.

Vegeta,
I fully agree with your comments. But as for portability of SQL, I am not so sure it is always that straightforward. We are solving that problem using our product called SwisSQL, which converts SQL from one DB to another.

Maybe you should look at iBATIS in combination with SwisSQL. Visit
www.vembu.com and check it out.

>> I think O/R mappers are good for projects where data integrity and RDBMS performance is not a very important issue, but most projects do not fall in this category. <

I differ. Most projects fall into the category where development and maintenance time is also a very important criterion, and O/R mappers excel in this area. In general I have found that there is a small subset of queries (typically complex joins, tree traversal, etc.) where O/R-mapper-generated code may not perform well. These require developer intervention for optimisation. Another area of complexity is when one wants to map new objects onto legacy tables.

Prima facie, I think iBATIS assists in alleviating this problem while keeping development/maintenance time low. I prefer to have this choice available rather than not available (someone mentioned "not invented here").

Now only if I could get iBatis to integrate with my hibernate/jdo cache ;).

A second approach states that the database design is an integral part of the system and should be done very carefully by someone who completely understands relational databases and knows a lot about the details of the RDBMS being used, so that the database is impossible to corrupt (that is, there is no way to violate integrity constraints) and provides the best possible performance. <

Funnily enough, I agree with and use this supposed "second approach". But I don't know where people get the impression that ORM tools don't allow this - good ones don't try to force a schema design upon you. Presumably this notion comes from experience with *bad* ORM tools like CMP. This is the whole CONCEPT behind O/R mapping, but many people don't seem to realise it:

* it is a MAPPING layer!

a good ORM tool lets you design your relational schema exactly the way you want, design your object model exactly the way you want (or at least very, very close to it) and then MAP between them!

Let me tell you how my current project works.

(1) our data modeller who neither knows nor cares about Java produces the schema
(2) somewhat independently, the Java developers write some POJOs (it helps that we are working from a pre-documented business model)
(3) we add the XDoclet tags to map between the two models when the schema is delivered

The schema is absolutely hand-optimized by an experienced person who knows all about data integrity, normalization, performance. This whole notion that ORM hinders data integrity is complete rubbish and I have no idea where it comes from.

>> I think so because many of them assume that RDBMS are proprietary things, that should be manipulated from a "standard" system that hides these proprietary features <
In fact, a good ORM tool ABSTRACTS the proprietary features. No, you won't be able to use an Oracle CONNECT BY clause in Hibernate, and no, there is (so far) no support for UNIONs (though that would be pretty easy to add if there was any demand for it, I think). But features which are reasonably common across platforms (and just have different syntaxes) can certainly be supported. e.g. Hibernate2 abstracts:

* the MySQL-style LIMIT clause (on Oracle it is implemented using ROWNUM)
* FOR UPDATE and FOR UPDATE NOWAIT, falling back to whatever lock mode is supported by the underlying database
* OUTER JOIN, using (+) for platforms with no support for the ANSI style

These are just examples; I'm sure there are other potential features that our user base has not yet shown any great interest in.
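The dialect idea behind that list can be sketched in a few lines of plain Java (an illustration of the concept only, not Hibernate's actual classes):

```java
// Illustrative dialect layer: the caller asks for "at most N rows" once;
// each dialect emits its vendor's syntax.
interface Dialect {
    String limit(String sql, int max);
}

class MySQLDialect implements Dialect {
    public String limit(String sql, int max) {
        return sql + " LIMIT " + max;
    }
}

class OracleDialect implements Dialect {
    // Oracle: wrap the query and filter on the ROWNUM pseudocolumn.
    public String limit(String sql, int max) {
        return "SELECT * FROM (" + sql + ") WHERE ROWNUM <= " + max;
    }
}
```

Application code depends only on the interface; picking a different database means picking a different Dialect implementation.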

But the more important reason why this statement is absolutely misguided is this: yes, there WILL be some cases where you need to fall back to handcrafted SQL. But they will be perhaps 2% of cases. For the other 98%, ORM will increase your productivity by a potentially huge margin. And for those 2% of cases? Just call hibernateSession.connection().prepareStatement("MY CRAZEE PROPRIETARY SQL THAT WONT WORK ANYWHERE ELSE ?, ?");

Why would you NOT use ORM for the 98% of cases which work beautifully? Any reason at all??

Now, having said all that, iBATIS looks like a pretty nice approach for the 2%.

>> I have worked on projects for the oil & gas industry where we had to access very large and complex existing (and also very well designed) databases and it was simply impossible to use an O/R mapper to map the DB information into objects. We ended up using EJB-BMP or straight JDBC. It would have also been impossible to create the original database using an O/R mapper enforcing all integrity constraints and getting maximum performance. <
I just simply don't believe that. Did you TRY to use an ORM tool? A Good One? Or did you try out CMP (which is an abomination)?

The only schemas that mature ORM tools typically have problems with are broken schemas with Bad Things like primary keys that change value, or foreign keys that point to some column other than the primary key. And I suppose that the very expensive commercial tools like TopLink can even handle some of those cases.

(1) our data modeller who neither knows nor cares about Java produces the schema
(2) somewhat independently, the Java developers write some POJOs (it helps that we are working from a pre-documented business model)
(3) we add the XDoclet tags to map between the two models when the schema is delivered

I think that having a pre-documented business model is crucial for the success of your project. I have been advocating this approach for years:
1) Make an abstract business model (as a conceptual class diagram).
2) Then let this abstract business model drive both your database schema and your actual application design (class diagram, ...). These two can be made independently and in parallel.

Oh, absolutely. This is a feature request that has been lying around for far too long. (It just never made it to top of todo list.) It is even very easy to implement, given existing Hibernate architecture. We *will* add this functionality.

OTOH, perhaps we should just see if we can help Hibernate + iBATIS integrate together, instead? ;)

Great! I've just started using Hibernate and was writing custom JDBC code to implement batch updates. Having tried other OR tools like OJB, JDO impls and Castor, I can say Hibernate is an excellent tool. Thanks!

> Funnily enough, I agree with and use this supposed "second approach". But I don't know where people get the impression that ORM tools don't allow this - good ones don't try to force a schema design upon you. Presumably this notion comes from experience with *bad* ORM tools like CMP.
>
> Gavin
The perception that ORM imposes a schema on you probably comes from bad ORM tools. I know that many of them provide some "reverse engineering" functionality, but almost all ORM tools I have seen treat "reverse engineering" as something other than the most common or recommended way to use the tool. In their documentation this "reverse engineering" is left to a last chapter or appendix.
Worse, these reverse engineering tools are generally limited to a few possible mappings, where each table is generally mapped to a class and each tuple in the table is an instance of that class.
Thinking about it, they are not true object/relational mappers, but object/table mappers. They assume a tabular database model and not a relational one.
The relational model establishes several types of relations: base relations (called tables in SQL), views, snapshots, result sets, etc.
The "reverse engineering" tools in most O/R mappers only consider tables and perhaps views/snapshots as relations; they do not consider result sets to be relations, and they simply are.
In SQL databases the only way to access relations is through SQL. If the relation you want is a full table you express it as SELECT * FROM table, but in many cases (much more than the 2% you estimate) the relation I want to map my objects to results from other, more complex SQL expressions. So what is needed is a true O/R mapper, which for the particular (and overwhelmingly common) case of SQL databases would be a more general object-to-SQL mapper and not a simple object-to-table mapper.
I agree with you that CMP is an abomination and there are some very well designed ORM tools (like Hibernate) that are probably suitable for more cases than I originally stated, but I have found many cases (much more than 2%) where it would not be useful.
The case you do not believe, where an ORM could not be used, was something more or less like this (actually much more complex, with many more relations, because it described complex oil facilities and processes):
Table Facility(id INTEGER, type INTEGER, name VARCHAR)
Table Attributes (id INTEGER, attribute_name VARCHAR, attribute_value VARCHAR)
Facility represents a facility type and Attributes represents attributes of the facility.
The exact object model is not completely known at design time because new facility types can be added or a facility can add new attributes (e.g. a flow station may add a new flow meter not previously considered). All facilities of the same type have some mandatory attributes in common, but may have additional optional attributes. It would be absurd to use a table for each facility type because there would be too many tables, their structure would change somewhat frequently, and many queries would be much harder to implement.
All Facilities have something in common (so there could be a base abstract class or interface) and the "type" field might indicate a concrete implementation class.
The possible attribute_names in table Attributes change depending on the type.
There may be other relations describing mandatory attributes that all facilities of a certain type must have.
Is it possible to map an existing schema like this with one of the O/R mappers?
Is it possible to generate a schema like this with one of the O/R mappers?
Much more than 2% of large databases I have seen have cases more or less like this one.
You might argue that the relational model is not appropriate for this kind of data (I do not know if you consider this to be a broken design), but it used to work very well and I think it was a good design.

I know that many of them provide some "reverse engineering" functionality, but almost all ORM tools I have seen treat "reverse engineering" as something other than the most common or recommended way to use the tool. In their documentation this "reverse engineering" is left to a last chapter or appendix. <

Well, I certainly regard it as one of the two most common cases. And the work on a Hibernate Middlegen plugin has just taken a huge leap forward, thanks to David Channon, so there is even a beautiful GUI for this almost ready.

>> So what is needed is a true O/R mapper, which for the particular (and overwhelmingly common) case of SQL databases would be a more general Object to SQL mapper and not a simple Object to Table mapper. <
I have a slightly different view of this, though I can see your point of view also. There are ways of accessing very complex relations (using projection/selection/joins/aggregation/subqueries) using HQL, and mapping the result set automatically to an object, though perhaps not exactly in the way that you are thinking of. I think we have two ways of thinking about the same problem, that we both think is an important problem.

>> The exact object model is not completely known at design time because new facility types can be added or a facility can add new attributes (e.g. a flow station may add a new flow meter not previously considered). All facilities of the same type have some mandatory attributes in common, but may have additional optional attributes. It would be absurd to use a table for each facility type because there would be too many tables, their structure would change somewhat frequently, and many queries would be much harder to implement. <
I think this case is not particularly uncommon, and I think it can be handled well using ORM. Since the OM is not known at design time, we need to model things as name/value pairs in Java. So the attributes are just mapped as a collection of Attributes by the ORM. This is straightforward.
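For the Facility/Attributes example above, such a collection of name/value pairs might be declared in a Hibernate 2 mapping document roughly like this (column names are taken from the example tables; the details are a sketch, not something tested against the real schema):

```xml
<class name="Facility" table="Facility">
  <id name="id" column="id" type="integer">
    <generator class="assigned"/>
  </id>
  <property name="name" column="name"/>
  <!-- Rows of the Attributes table become map entries keyed by attribute_name -->
  <map name="attributes" table="Attributes">
    <key column="id"/>
    <index column="attribute_name" type="string"/>
    <element column="attribute_value" type="string"/>
  </map>
</class>
```

Loading a Facility would then populate a java.util.Map of attribute names to values.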

Referential integrity is tricky for these kinds of relational models, but I still don't see how it becomes any harder using ORM.

Now, it becomes non-straightforward if *some* Attributes are associations and others are simple values, but here is where Hibernate custom types and components will start to come into play (granted this is a very special feature of Hibernate that other ORMs don't really have). Yes, depending upon the full scope of this, it might get a little hairy, and you might not be able to take advantage of things like outerjoin fetching properly. But I'd certainly love to take a stab at it, using Hibernate ;)

But, even if you can't do it the way I'm thinking, Hibernate is designed so that you can integrate mapped classes with handcrafted persistent classes (behind a DAO, for example) in the same object graph (using custom types, once again).

> Is it possible to map an existing schema like this with one of the O/R mappers?
> Is it possible to generate a schema like this with one of the O/R mappers?

yes, I believe so

> Much more than 2% of large databases I have seen have cases more or less like this one.

Absolutely. But did they comprise > 2% of cases? Perhaps, in your domain, they do. But not usually.

I know that many of them provide some "reverse engineering" functionality, but almost all ORM tools I have seen treat "reverse engineering" as something other than the most common or recommended way to use the tool. In their documentation this "reverse engineering" is left to a last chapter or appendix. <

> Well, I certainly regard it as one of the two most common cases. And the work on a Hibernate Middlegen plugin has just taken a huge leap forward, thanks to David Channon, so there is even a beautiful GUI for this almost ready.

I am glad to hear that. It makes Hibernate better. I am not very experienced with Hibernate. I have only read the docs for 2.0beta4 and 2.0beta5 (I see the reverse engineering utility only in the beta5 doc).
But here is the problem: you call it "reverse engineering". It should not be reverse engineering. To me, designing the database is "engineering" and designing the domain object model is also "engineering".

>
> >> So what is needed is a true O/R mapper, which for the particular (and overwhelmingly common) case of SQL databases would be a more general Object to SQL mapper and not a simple Object to Table mapper. <
> I have a slightly different view of this, though I can see your point of view also. There are ways of accessing very complex relations (using projection/selection/joins/aggregation/subqueries) using HQL, and mapping the result set automatically to an object, though perhaps not exactly in the way that you are thinking of. I think we have two ways of thinking about the same problem, that we both think is an important problem.

I agree.

>
> >> The exact object model is not completely known at design time because new facility types can be added or a facility can add new attributes (e.g. a flow station may add a new flow meter not previously considered). All facilities of the same type have some mandatory attributes in common, but may have additional optional attributes. It would be absurd to use a table for each facility type because there would be too many tables, their structure would change somewhat frequently, and many queries would be much harder to implement. <
> I think this case is not particularly uncommon, and I think it can be handled well using ORM. Since the OM is not known at design time, we need to model things as name/value pairs in Java. So the attributes are just mapped as a collection of Attributes by the ORM. This is straightforward.
>
> Referential integrity is tricky for these kinds of relational models, but I still don't see how it becomes any harder using ORM.
>
> Now, it becomes non-straightforward if *some* Attributes are associations and others are simple values, but here is where Hibernate custom types and components will start to come into play (granted this is a very special feature of Hibernate that other ORMs don't really have). Yes, depending upon the full scope of this, it might get a little hairy, and you might not be able to take advantage of things like outerjoin fetching properly. But I'd certainly love to take a stab at it, using Hibernate ;)
>
> But, even if you can't do it the way I'm thinking, Hibernate is designed so that you can integrate mapped classes with handcrafted persistent classes (behind a DAO, for example) in the same object graph (using custom types, once again).
>
> > Is it possible to map an existing schema like this with one of the O/R mappers?
> > Is it possible to generate a schema like this with one of the O/R mappers?
>
> yes, I believe so

OK, I am not sure you understand exactly how I want the object model to look for this case. I do not want only a class Facility with get/setAttribute methods.
I want several classes/interfaces like FlowStation, Manifold and HighPressureManifold (which extend Facility) that have specific methods (e.g. getPressure(), getPumps() which returns a list of related Pumps) and do not have getAttribute/setAttribute methods.
If you can do this with Hibernate, then I urge you to put an example in the documentation, because I do not see an effective way to implement it.
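As a plain-Java sketch of the shape being requested (all names are invented, and this says nothing about whether any ORM can populate such classes):

```java
import java.util.HashMap;
import java.util.Map;

// The generic attribute store stays hidden behind typed accessors.
abstract class Facility {
    private final Map<String, String> attributes = new HashMap<>();

    // Used by the persistence layer when loading rows from the Attributes table.
    void putAttribute(String name, String value) {
        attributes.put(name, value);
    }

    protected String attribute(String name) {
        return attributes.get(name);
    }
}

// A concrete facility type: callers see getPressure(), never getAttribute().
class FlowStation extends Facility {
    public double getPressure() {
        return Double.parseDouble(attribute("pressure"));
    }
}
```

The "type" column would select which concrete class to instantiate; the open question is whether an ORM can be taught to do that instantiation and population for you.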

An even greater complication arises when the same object model is implemented using two different databases. Suppose there is a merger (it happened), and information about some flow stations is in database A while information about other flow stations is in database B. They have different persistence mappings but refer to the same type of object. Do O/R mappers support this? If they do, then put an example in the docs too. I agree this must be a much less common case (perhaps even less than 1%).

>
> > Much more than 2% of large databases I have seen have cases more or less like this one.
>
> Absolutely. But did they comprise > 2% of cases? Perhaps, in your domain, they do. But not usually.
Maybe you are right.
But I still think that the database should be designed by a person who knows about databases. Maybe a design generated by an O/R mapper can be used if it enforces all integrity constraints, but in many cases they do not (sometimes it might be sufficient to modify it). So the main use for O/R mappers should be mapping a properly engineered relational database to objects, not generating one.
The argument about less development time is not valid. The database must enforce data integrity. Period. You have to put all the necessary effort in guaranteeing that. If you are happy with a broken database (one that does not enforce integrity constraints) then you are not developing a professional project, but a broken one (just imagine someone trying to "fix" some data using a SQL console and putting illegal data).
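To make the console scenario concrete, here is illustrative DDL for the Facility table from the earlier example (the CHECK rule is invented): once the constraint lives in the database, no client, not even an ad-hoc console session, can store illegal data.

```sql
CREATE TABLE Facility (
    id   INTEGER PRIMARY KEY,
    type INTEGER NOT NULL CHECK (type > 0),  -- invented rule, for illustration
    name VARCHAR(100) NOT NULL
);

-- This fails with a constraint violation no matter which application
-- (or interactive SQL console) issues it:
-- INSERT INTO Facility VALUES (1, -5, 'broken');
```

Integrity enforced in the schema survives every access path; integrity enforced only in application code does not.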

A quick note: the official Hibernate reverse engineering solution is Middlegen, just as soon as the new plugin is mature (very soon). By saying "reverse" we do not mean to diminish its importance in any way ... it is just the standard terminology.

>> OK, I do not know if you exactly understand how I want the object model to be for this case. I do not want only a class Facility with get/setAttribute methods.
I want several classes/interfaces like FlowStation, Manifold and HighPressureManifold (which extend Facility) that have specific methods (e.g. getPressure(), getPumps() which returns a list of related Pumps) and do not have getAttribute/setAttribute methods. <
>> If you can do this with Hibernate, then I urge you to put an example in the documentation because I do not see an effective way to implement it. <
So you have a:

* static compiled object model
* generic relational model

excuse me, but I think this is a VERY bizarre use case. I can believe:

* generic object model / static relational model (eg. OFBiz Entity Engine)
* generic OM / generic relational model (as described in the famous Ambler paper, but I've never really seen it in practice)

but using a static object model with a generic relational model is a VERY uncommon use case. Why? Because data must necessarily evolve more slowly than code.

All that extra flexibility in your relational model

A. causes problems for data integrity
B. is completely useless, since the static object model must be redesigned when any changes are made

ORM tools most likely don't implement this because it's too bizarre to be a common requirement. (It would be very easy for me to plug a ClassPersister into Hibernate to do this ... but no one would use it.)

>> An even greater complication arises when the same object model is implemented using two different databases. Suppose there is a merger (it happened), and information about some flow stations is in database A while information about other flow stations is in database B. They have different persistence mappings but refer to the same type of object. Do O/R mappers support this? <
Hibernate does, and has since version 0.8. It is a trivial problem.

>> A quick note: the official Hibernate reverse engineering solution is Middlegen, just as soon as the new plugin is mature (very soon). By saying "reverse" we do not mean to diminish its importance in any way ... it is just the standard terminology. <

>
>
> >> OK, I do not know if you exactly understand how I want the object model to be for this case. I do not want only a class Facility with get/setAttribute methods.
> I want several classes/interfaces like FlowStation, Manifold and HighPressureManifold (which extend Facility) that have specific methods (e.g. getPressure(), getPumps() which returns a list of related Pumps) and do not have getAttribute/setAttribute methods. <
> >> If you can do this with Hibernate, then I urge you to put an example in the documentation because I do not see an effective way to implement it. <
> So you have a:
>
> * static compiled object model
> * generic relational model
>
> excuse me, but I think this is a VERY bizarre use case. I can believe:
>
> * generic object model / static relational model (eg. OFBiz Entity Engine)
> * generic OM / generic relational model (as described in famous Ambler paper, but I've never really seen it in practice)
>
> but using a static object model with a generic relational model is a VERY uncommon use case. Why? Because data must necessarily evolve more slowly than code.
In this case, the generic relational model is better. The data integrity may be harder to implement (you have to use triggers in many cases), but a static relational model would be a maintenance nightmare (if not impossible).
The object model changes less than the relational model (at least the interfaces or abstract classes).
I can buy the argument that many times a generic object model would be a better way to manage it, but I think this is not the case here. The interfaces are pretty stable, but there are several variations in how data is persisted (on the object side, these variations would be several concrete implementation classes).
>
> All that extra flexibility in your relational model
>
> A. causes problems for data integrity
No, it does not, if you know what you are doing. It may be harder than simply using foreign keys, but very possible (e.g. you may need triggers).
> B. is completely useless, since the static object model must be redesigned when any changes are made
As I said, interfaces are stable (the general concept of FlowStation or Manifold does not change) and implementation classes would change at the same rate as the data. In fact there would be a bijection between persistence implementations and concrete class implementations.
>
> ORM tools most likely don't implement this because it's too bizarre to be a common requirement. (It would be very easy for me to plug a ClassPersister into Hibernate to do this thing ... but no one would use it.)
If you implement it the way I want it, at least I would use it.
>
>
> >> An even greter complication comes when the same object model is implemented using two different databases. Suppose there is a fusion (it happened) and information about some flow stations is on database A, and about other flow stations is on database B. They have different persistence mappings but refer to same type of Object. Do OR mappers support this? <>
> Hibernate does, and has since version 0.8. It is a trivial problem.
Great! I see this as a special case of different persistence implementations of a general object interface.
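As an illustration of the pattern being debated here (a sketch only; the class and attribute names are hypothetical, loosely following the FlowStation/Manifold example above), a stable typed interface can sit on top of a generic attribute store, with one concrete class per persistence implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Stable typed interface, as argued above.
interface Manifold {
    double getPressure();
}

// One concrete implementation backed by a generic attribute store
// (standing in for a row of a generic relational model). The class
// and attribute names are hypothetical.
class GenericRowManifold implements Manifold {
    private final Map<String, Object> attributes;

    GenericRowManifold(Map<String, Object> attributes) {
        this.attributes = attributes;
    }

    // The typed accessor hides the generic lookup, so callers never
    // see getAttribute/setAttribute-style methods.
    public double getPressure() {
        return ((Number) attributes.get("PRESSURE")).doubleValue();
    }
}

class FacilityDemo {
    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("PRESSURE", 42.5);
        Manifold m = new GenericRowManifold(row);
        System.out.println(m.getPressure()); // prints 42.5
    }
}
```

A second implementation of Manifold, backed by a different database or schema, would simply be another class behind the same interface.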

In his interesting posting, Vegeta distinguished two approaches to db access, namely 1) O/R mapping and 2) viewing "the DB as an integral part".

I don't think those approaches are mutually exclusive. Knowing RDBMSs makes the task of O/R mapping much more efficient and pleasurable. And knowing RDBMSs very well does not imply that you stop choosing O/R; on the contrary.

Also, the key reason for us to use O/R mappers is NOT the ability to switch between RDBMS vendors. That's only a byproduct. The reasons for O/R are:

- speed of development (!)
- the much reduced need to write tedious CRUD code again and again
- elegance
- the ability to work with POJOs and POJO-Collections (Trees etc)
- out-of-the-box good performance for most cases

Tools like Hibernate do this very well. In our current project, we had to reverse-engineer an existing complex CMS database. That was no problem with Hibernate (using a "database-up" approach).

I also don't understand why O/R mappers should have problems with data integrity, as Vegeta wrote. In my experience, once you write the mapping document correctly (which is straightforward), you get *guaranteed integrity*, which is much more than what you get by copy-pasting CRUD JDBC code into each project.

I mean, nothing wrong with writing expert SQL, especially for stuff like DWH queries. But current open-source O/R mappers are very strong, IMHO, and they make the EJB CMP spec designers look "old fashioned" (no offense).

[quote]porting proprietary SQL from one RDBMS to another is fairly simple.[/quote]

Um, no. It's not. There are vast, wild differences between the servers' proprietary constructs. Compare PL/SQL to Transact/SQL. I was working for a company that had the majority of its logic in Transact/SQL stored procedures. They began a program to move to Oracle because of support from investors. $1 million later, they gave up. This was with full support from an Oracle VAR and all of their developers. Tables can migrate fine, but procedural db code is 100% proprietary and extremely difficult to port because they're usually COMPLETELY different in the language constructs they use, the database structures they expect (compare the way an Oracle db is set up to the way an MSSQLServer is set up and you'll know what I mean), etc, even in the way variables are defined in the languages. They do share ANSI SQL but in stored procedure programming, that's usually only about half of what you're doing.

I believe that
1) A database SHOULD be a black box. Persistence is a commodity. Especially when designing a new system, you're far better off letting your application logic determine persistence than the other way around. The advantages of application servers and object-oriented languages as far as dealing with complexity, creating manageable language constructs, and the ability to scale, are far greater than those offered by proprietary database languages and the program structures they dictate. Databases are designed to store and serve data. Java (and other) application servers are designed to create business code. Each is far better than the other at what they do.
2) If you are saying that most projects, especially complicated ones, call for a complex database design, you're only half right. Most projects' persistence needs are simple for 80% or so of all cases, such as CRUD operations. The other 20% is complex and requires complex tuning. That's no reason to give up on using an ORM tool. Most ORM tools can help with the simple stuff and a good design will allow your exceptional cases to be treated however they need to without forcing an ORM tool out of the picture. It would seem silly to me to force yourself to deal with nightmare JDBC hand-coding for 80% of your project when only 20% requires special handling. One size does not fit all, and that statement goes both ways.

Oh, and if you've got Rolf on your side, good luck. He's just jealous because there are pretty much NO ORM tools for the COM/.NET market. Believe me, I've looked for them and could have used them.

Drew McAuliffe wrote:
QUOTE:
>>>porting proprietary SQL from one RDBMS to another is fairly simple.
Um, no. It's not. There are vast, wild differences between the servers' proprietary constructs. Compare PL/SQL to Transact/SQL. I was working for a company that had the majority of its logic in Transact/SQL stored procedures. They began a program to move to Oracle because of support from investors. $1million dollars later, they gave up.
END OF QUOTE

Yes, PL/SQL differs from Transact/SQL. So what?
Don't be afraid of transforming "@my_variable" (T-SQL) into "my_variable" (PL/SQL), "=" (T-SQL) into ":=" (PL/SQL), and "*=" (T-SQL) into "(+)=" or an ANSI OUTER JOIN (ORACLE) :)

IMHO, I see NO difficulties in transforming code from a product with just basic functionality (SYBASE) to a product with advanced functionality (ORACLE). It can be a time-consuming operation, but not a really difficult one. I have a lot of experience with both ORACLE and SYBASE (not much with MS SQL), but I like ORACLE much more because it REALLY is the more COMPLETE product. (Sorry guys, I don't want to start a new thread - it is my personal opinion.)

Real difficulties arise only when developers have to transform from a product with advanced functionality to a product with just basic functionality. This case is also solvable, but more complex.

Anyway, IT IS NOT ABOUT "COMPLETELY DIFFERENT LANGUAGE CONSTRUCTS" - IT IS ABOUT PREBUILT FUNCTIONALITY :)

Anyway, it would be better to transform SPs written in languages such as T-SQL or PL/SQL into Java stored procedures (in case a transformation should take place) - that would be a more portable solution.

So, personally, I don't understand why those guys gave up during the migration process. Maybe they spent the investors' money on the wrong goals? (Sorry in advance - I don't want to offend them.)

[quote]porting proprietary SQL from one RDBMS to another is fairly simple.[/quote]

>
> Um, no. It's not. There are vast, wild differences between the servers' proprietary constructs. Compare PL/SQL to Transact/SQL. I was working for a company that had the majority of its logic in Transact/SQL stored procedures. They began a program to move to Oracle because of support from investors. $1million dollars later, they gave up. This was with full support from an Oracle VAR and all of their developers. Tables can migrate fine, but procedural db code is 100% proprietary and extremely difficult to port because they're usually COMPLETELY different in the language constructs they use, the database structures they expect (compare the way an Oracle db is set up to the way an MSSQLServer is set up and you'll know what I mean), etc, even in the way variables are defined in the languages. They do share ANSI SQL but in stored procedure programming, that's usually only about half of what you're doing.
>
You are right in everything you said, but I talked about SQL, not about the procedural languages provided by databases.
I think that in most cases you should restrict the use of procedural languages to what is necessary to enforce integrity constraints (e.g. triggers).
Porting SQL from one database to another is not very difficult.
Porting PL/SQL (which is a different thing) to other procedural languages can be hard. But if you need to port your database and you need triggers to guarantee integrity, then you must port them no matter what you use to map to objects.

> I believe that
> 1) A database SHOULD be a black box. Persistence is a commodity. Especially when designing a new system, you're far better off letting your application logic determine persistence than the other way around. The advantages of application servers and object-oriented languages as far as dealing with complexity, creating managable language constructs, and the ability to scale, are far greater than those offered by proprietary database languages and the program structures they dictate. Databases are designed to store and serve data. Java (and other) application servers are designed to create business code. Each is far better than the other at what they do.
You are right. But persistence is not a triviality. The relational model is the most widely used method for persistence because it is proven and has solid theoretical foundations. So the proper design (and administration) of a relational database requires a professional, not the blind use of autogenerated code from a tool.

> 2) If you are saying that most projects, especially complicated ones, call for a complex database design, you're only half right. Most projects' persistence needs are simple for 80% or so of all cases, such as CRUD operations. The other 20% is complex and requires complex tuning. That's no reason to give up on using an ORM tool. Most ORM tools can help with the simple stuff and a good design will allow your exceptional cases to be treated however they need to without forcing an ORM tool out of the picture. It would seem silly to me to force yourself to deal with nightmare JDBC hand-coding for 80% of your project when only 20% requires special handling. One size does not fit all, and that statement goes both ways.
I do not disagree completely, but O/R mappers fail to map the relational model fully and effectively, and require a nonstandard language (EJBQL, HQL, etc.) to alleviate that deficiency.
I think the problem lies in seeing the relational model as a tabular model where tables are the concept to be mapped, rather than relations (which are accessed through SQL, which, as you said, different databases DO share).

>
> Oh, and if you've got Rolf on your side, good luck. He's just jealous because there are pretty much NO ORM tools for the COM/.NET market. Believe me, I've looked for them and could have used them.
Hey, I do not care much about Rolf. He usually talks a lot of crap (Windows more secure than UNIX/Linux, etc.) and likes to start flamewars, but he sometimes has a point, and this time he may be (at least in part) right.

I am going to use either Hibernate or TopLink in that upcoming project (and Struts, of course). Plain JDBC/PLSQL isn't productive at all, even with Oracle JPublish. I remember how we had to regenerate the JPublish persistence classes every time our database schema changed.

I cannot understand why there is such hype about Hibernate. I am using Hibernate and not having fun with it. It is as good as any other O/R mapper. And who says that their documentation is so good? Do you measure the quality of documentation by the number of pages? The docs just cover the basics and leave you out in the rain when you get to the "real" stuff.
Updating objects with relations is really painful with Hibernate. My conclusion is that it is very helpful if you do not use relations between objects and take care of them yourself. Otherwise you can go crazy with Hibernate. But if I do it that way, then what do I have? Right, nothing.
IMHO this tool [iBATIS] gives you more flexibility and more control than, for example, Hibernate.

>> ...leaves you out in the rain when you get to the "real" stuff. <
Ummmm. I don't recall your name from the user forum. If you are experiencing so much pain, why have you not asked for advice? I am surprised that you would post *here* first to express your concerns.

>> Updating objects with relations is really painful with Hibernate. <
Is it?? I have never seen this view expressed before. I have seen new users trip up on some subtleties, but once they get past those, they are usually very satisfied.

>> My conclusion is that it is very helpful if you do not use
relations between objects and you take care of it by yourself.
Otherwise you can go crazy with hibernate. <
This is probably the most negative comment that I have ever seen from a Hibernate user anywhere. Perhaps what you need is help from some experienced users. Don't be afraid to ask.

For all the O/R tools out there, iBatis is a great way to do things simply and is a great alternative. I think it is awesome that Clinton has been developing this himself and making it freely available. It is a great help to have tools like this available to experiment and work with.

I think it's better that he implements his own simple persistence framework, because I can look at the iBatis source and learn more easily than from the Hibernate source. Actually, from a practical point of view, I'm using flat files with queries and substitution strings in some projects, and have wound up reading a properties file, loading queries from files, and building a collection of StringBuffers which use regexp replace (instead of building a DOM document or prepared statement) and are then sent to a JDBC handler class. This is a real-world problem: complex queries are required that O/R mapping doesn't really handle. I think iBatis can help clean up my cruft.
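A minimal sketch of the properties-plus-regexp approach described above (the placeholder syntax and names are my own, not from the post); note that, unlike a PreparedStatement, plain text substitution performs no escaping:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// SQL templates with named placeholders, filled in by regex
// replacement. Assumes every placeholder has a value in the map.
class QueryTemplates {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{(\\w+)\\}");

    static String fill(String template, Map<String, String> values) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // quoteReplacement prevents $ and \ in values from being
            // treated as regex group references.
            m.appendReplacement(sb, Matcher.quoteReplacement(values.get(m.group(1))));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> v = new HashMap<>();
        v.put("table", "PRODUCT");
        v.put("id", "42");
        System.out.println(fill("select * from ${table} where id = ${id}", v));
        // prints: select * from PRODUCT where id = 42
    }
}
```

This is roughly the hand-rolled machinery that a mapped-statement framework replaces.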

Hi!
I prefer using a JDO solution, which brings me a convenient and reliable persistence layer.
Unlike the wide range of persistence tools, JDO is a standard approved by Sun. So you can easily change your implementation, for example from an open source one (see http://db.apache.org/ojb/) to a commercial one.
For now the major vendors seem sulky, but I'm definitely convinced that the specification will be The persistence standard for the years to come.

> >> suffer from 'not invented here' syndrome.
>
> Well, if we all thought that way, we'd never get anywhere would we?
>
> I guess by your logic, all of these people suffer similarly....except the first to the table I suppose....

Some of them do suffer; others try to make money.

The 'not invented here' syndrome is related to others:
- overconfidence and lack of respect - I am smart, so I can do better than others;
- laziness - it is too hard to understand! I do not have time to understand! (Right, do not think, code!!!)

Very few of them have really bright and innovative approaches, but good ideas are here and there.

It is understandable why commercial vendors do not cooperate, but it is sad to see lack of cooperation between OpenSource GNU type projects.

Do you not think it's good enough to simply have a DIFFERENT idea? You are easily offended my friend...

>> laziness - It is too hard to understand! I do not
>> have time to understand! (Right, do not think, code!!!)

So you actually think it takes less time to write a functionally similar API than to read the docs for an existing one? That's just silly...

>> Very few of them have really bright and innovative approaches,
>> but good ideas are here and there.

It is pretty closed minded of you to apply words like "good" to this global context. There's no generally "good" or "bad" solution to all problems! There is usually a "better" approach for a given problem. Each of the O/R mappers (and otherwise) mentioned in this discussion so far has a place and a useful purpose, and each will work better than the others in certain situations.

I've never said a bad thing about Hibernate, and I never will. I think it's a fantastic tool, and if it works for you, then great! iBATIS SQL Maps uses a COMPLETELY DIFFERENT approach. You would be hard pressed to find a similarity between their implementations.

Good points, Clinton. Of course, there's the wild theory that people could actually download and at least examine both Hibernate and iBatis before criticizing either of them, but I know it's unpopular to expect such an effort. =)

I've had great success in using Hibernate on 2 small projects. I found the docs to be good, and I've found the user forums to answer all of my questions not answered by the docs.

I just downloaded iBatis and looked at the examples, and it looks like a pretty slick product. Seems like there's a little more effort required in terms of defining all of the CRUD queries, which Hibernate does not require, but that's really not a big deal. This may have been a nice fit on my last project, where we had complex geospatial data in our database that couldn't be mapped by the O-R mappers we looked at (although we didn't look that hard at doing this).

I'm sure both products can work well in many different cases as DAO impl's. It's absurd to criticize multiple open source projects that solve the same problem but use vastly different approaches.

Do you not think it's good enough to simply have a DIFFERENT idea? You are easily offended my friend...

>
Not at all :). I think that it is OK to have a different idea. But a DIFFERENT idea does not mean that the idea is good.

> >> laziness - It is too hard to understand! I do not
> >> have time to understand! (Right, do not think, code!!!)
>
> So you actually think it takes less time to write a functionally similar API than to read the docs for an existing one? That's just silly...

I think that you did not spend enough time trying to understand Hibernate's idea and studying Hibernate's code.

> It is pretty closed minded of you to apply words like "good" to this global context. There's no generally "good" or "bad" solution to all problems! There is usually a "better" approach for a given problem. Each of the O/R mappers (and otherwise) mentioned in this discussion so far has a place and a useful purpose, and each will work better than the others in certain situations.
>
It looks like you are the one with a closed mind if you say that a better approach exists.
I do accept that there might be several equally good approaches to getting something done (a very small number). Because of different personal, cultural and other preferences, someone may choose the approach that is good for him/her and borrow ideas that are good for that approach from everywhere.

Open Source philosophy is related to freedom, isn't it? So I don't see why someone should refrain from starting a new project, regardless of whether it is good, bad, useful or not. People who publish their work are free to do so! It is not mandatory to contribute to existing projects, although of course that is a good idea. But nobody should dictate what you have to do. If you don't like iBatis, that's OK. IMHO you don't have to be so critical of Clinton's work, since he's not trying to shove it down your throat. So let's move on: you don't like iBatis, some people do like it. That's it.

On the contrary, if you had spent any time whatsoever in looking into even the simplest details, you would have discovered one very important fact....

The iBATIS Database Layer was released BEFORE Hibernate. Version 1.0 of the iBATIS Database Layer was released July 1st 2002, in conjunction with the first JPetStore release. Hibernate 1.0 was released a month later. Both projects likely started at about the same time.

It is pretty closed minded of you to apply words like "good" to this global context. There's no generally "good" or "bad" solution to all problems! There is usually a "better" approach for a given problem. Each of the O/R mappers (and otherwise) mentioned in this discussion so far has a place and a useful purpose, and each will work better than the others in certain situations. <

Absolutely! The more options, the better for the community.

And everyone please note: if I had "contributed to an existing effort" instead of going with 'not invented here', there would be no Hibernate!

My view is that there are two different kinds of approaches that are common

* One emphasises screens and the relational model. Developers think about the application as updating and querying the database in response to user activity. There is relatively little reuse of data access code between different features (since each feature uses its own data access). This is a great approach for smaller applications and perhaps even for some larger applications. It is particularly appropriate with an existing database full of stored procedures. The distinguishing feature is that the application Java code itself does not particularly concern itself with things like associations and certainly not with inheritance.

An example is a simple Tomcat application that calls JDBC from a servlet.

* The other emphasises a domain model. Business model classes and their associations (and subclasses) are heavily reused to provide a variety of different functionality. The application works with the business model - developers of business functionality like to maintain the illusion of not caring how data access works.

Examples are applications using entity beans or the DAO pattern.

The first kind of application benefits from something like iBATIS or voruta. The second kind benefits from ORM. Both are appropriate in different contexts.

If your application has four tables you simply don't need ORM (you can still use it, and it may even save some typing ... but you don't need it). OTOH, if you have 60 tables with complex inter-relationships and transactions that span chunks of business logic implemented independently by 3 different developers, _do_use_ORM_. You will not regret the decision.

Another time when ORM is helpful is when you are trying to achieve platform independence, or where your schema keeps changing underneath you. It is MUCH easier to modify POJOs and mapping documents than it is to modify handcrafted queries. The handcrafted queries represent the same structural mapping many times, whereas ORM represents the structural relationship once for each table.

If _I_ were implementing YetAnotherWeblog, I would probably use Hibernate, because I know it inside out. If I were recommending a solution to a PL/SQL developer learning Java, I would recommend iBATIS or Voruta because the learning process is shorter. OTOH, if they were building FooBarGenericContentManagementAndPaymentsEngine, I would most likely recommend they take the time to learn Hibernate. It is NOT difficult, but it is more difficult than iBATIS, etc.

Thanks for all the comments everyone...even those that aren't so excited. All of them will help make the product better.

I just wanted to share a little nightmare with you that will help you understand why I wrote the iBATIS DB Layer. We have a number of 3rd party systems that we integrate a number of ways and each has a database. These databases are not controlled by us, and should we attempt to change them (or even access tables directly), we are at risk of violating our support agreement. Furthermore, we are replacing legacy systems that already have a lot of very complex SQL that we would rather not mess with or attempt to introduce an O/R mapper to.....

I'm sure everyone will appreciate the following example that would have been quite ugly implemented as a mash of string concatenation, setXXX() and getXXX() methods as well as forgotten rs.close(), ps.close() and conn.close() calls....and don't forget the appropriate try{}catch{}finally{} blocks....
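For readers unfamiliar with SQL Maps, a mapped statement of the kind being contrasted with that raw JDBC mash looks roughly like the following. This is a sketch in iBATIS 2.x-style syntax; the statement id, classes, and columns are hypothetical:

```xml
<!-- Hypothetical SQL Map fragment: the framework prepares the
     statement, sets the #value# parameter, maps columns onto the
     result bean, and closes the ResultSet/Statement/Connection,
     replacing the try/catch/finally boilerplate described above. -->
<select id="getProduct" parameterClass="int" resultClass="example.Product">
  select PRD_ID as id, PRD_NAME as name
  from PRODUCT
  where PRD_ID = #value#
</select>
```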

You can write in ANSI SQL, as with iBatis (or Jakarta Commons-SQL, or Husted's Scaffolding, etc. - what I call SQL-based DAO), or in one of the limited variants such as HQL, OQL, EQL, JQL, etc. that are used by O/R-based DAO.

I find ANSI SQL more powerful, with lots of books on it, such as Joe Celko's.

O/R DAO is always limited in some way (e.g. performance, complex apps, etc.).
Also, data access is the slowest part of J2EE, and this is a great solution that can use the full power of the SQL engine.

Most large apps, before iBatis, used JDBC; now we can write at a higher level of abstraction and still have raw SQL.

I appreciate the nightmare; the problem I have with iBatis (and other frameworks) is the need for everything to be in XML.

Maybe I'm dumb, but I hate giving up compile-time checking. The most common error I have is getting JavaBean property names wrong in the iBatis result or parameter maps. Which would be fine, but there is no compile-time checking.
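One common mitigation (not an iBATIS feature, just a general technique): a unit test that uses JavaBeans introspection to verify that every property name referenced in a map actually exists on the bean, so a typo fails the test run instead of failing at runtime. The Product bean and the property names below are hypothetical:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// Hypothetical bean whose property names appear in a result map.
class Product {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class MappingCheck {
    // Returns true if beanClass exposes a JavaBean property with the
    // given name (via its getter/setter pair).
    static boolean hasProperty(Class<?> beanClass, String property) {
        try {
            BeanInfo info = Introspector.getBeanInfo(beanClass);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                if (pd.getName().equals(property)) {
                    return true;
                }
            }
        } catch (Exception e) {
            // Treat introspection failure as "property not found".
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasProperty(Product.class, "name")); // true
        System.out.println(hasProperty(Product.class, "nmae")); // false: the typo is caught
    }
}
```

Run such checks for each property name listed in your maps and the most common error class disappears before deployment.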

This is not a specific attack on iBatis, which I quite like, apart from the lack of compile-time checks.

Why are we all moving towards XML and away from compile-time checking?

Two months ago I prepared a proof of concept for an upcoming project - rewriting a desktop application (VB5, MS Access, communication via modems) as a web application (Struts+J2EE, MS SQL, no modems of course). I tested Hibernate and iBatis as the two candidates for the DB layer. After one week I was sure I would stay with iBatis. The reason is simple: I cannot change the DB structure. Another good feature of iBatis is the possibility of reusing the existing SQL statements from the VB code.

I think that JDO is nice, but the iBatis DB layer is more pragmatic and well suited to real-life situations where your boss does not accept estimates like "80% of the code must be rewritten and the DB must be changed too" ;-)

Many thanks for a quality product. Simple to use, fantastic documentation and incredibly fast to develop with and maintain. Have evaluated many O/R tools over the years and I have found yours to be the most rapid to build with.

I am currently using your latest release (and previous releases) in a number of large-scale applications, and it's working brilliantly.

Also, I'm very impressed with the work you did on JPetStore. Good to see some people promoting Java and throwing some time into debunking the MS petstore application.

My recommendation for programmers looking for a DB layer:
Both tools are a good choice for the DB layer in your project, but consider:
1. If you are an "SQL guru" new to OOAD, or you are rewriting/porting older software to Java, then try iBatis.
2. If you are an "OOAD guru" or you are starting a new project with an empty DB, then try Hibernate.

>
> My recommendation for programmers looking for db-layer:
> Both tools are good choice for a DB layer in your project, but consider:
> 1. If you are "SQL guru" new to OOAD or you make some rewrite/porting of older software into Java, then try iBatis.
> 2. If you are "OOAD guru" or you are starting new project with empty DB, then try Hibernate.
What if you know both?
If you are going to work on anything that involves OO programming and databases, you must know both, or have people on the team who know the technologies involved.
If you are doing OO programming without knowing OO programming, or using databases without knowing about them, then you are running an amateur-level project.

I think it's an interesting debate, and it's easy to see both sides.
I'd like to believe that in a real enterprise project a pure OO solution like Hibernate could work; I just know that, given the cross-section of usual suspects I deal with (Wizkid, Legacy Man, Data Fascist, Business Saboteur, OO Evangelist, etc.), the compromise solution usually pays off (as most of the various characters can understand a little SQL).

In my mind the right tool for the projects I participate in is iBatis, but I'm certain that if you could win unified support for your whole development methodology, it would be a more pleasant experience to stick to pure OO.

But in any case, by the time I've written this I'll have Technology Boy telling me that my next project is going to be written in Visual Hebrew...

I want to thank you for your hard work on iBATIS. We're using iBATIS in a new application to be released to production in the coming weeks and I want to share how it's helped us in two important ways.

First, it's a simple, efficient way of managing a DAO layer. We also use Hibernate in projects, but for the purposes of this app, iBATIS was a better fit. It was just easier to understand and use.

Second, our team is in transition and most are learning Java for the first time. Most of the team's background is in database work, like PL/SQL. iBATIS became a great learning tool for them. SQL is something with which they're very familiar. They could see a direct correlation between code and database transactions, yet the code and SQL were sufficiently separated to impress the architect. Plus, they like tweaking SQL; iBATIS let us do this while the app was running: no recompilation or redeployment.

I'm happy to say that the app performs beautifully under load. iBATIS also looked pretty sound when inspected by our profiler tool.

>> First, it's a simple, efficient way of managing a DAO layer. We also use Hibernate in projects, but for the purposes of this app, iBATIS was a better fit. It was just easier to understand and use.

Second, our team is in transition and most are learning Java for the first time. Most of the team's background is in database work, like PL/SQL. iBATIS became a great learning tool for them. SQL is something with which they're very familiar. They could see a direct correlation between code and database transactions, yet the code and SQL were sufficiently separated to impress the architect. Plus, they like tweaking SQL; iBATIS let us do this while the app was running: no recompilation or redeployment. <<

These are all good reasons to use something like iBATIS or Voruta (http://voruta.sourceforge.net/) over ORM. Certainly ORM is not for *every* problem. It sounds like your application architecture does not heavily emphasise a purist OO domain model (which IS perfectly legitimate and even desirable for some kinds of projects) and in this case ORM is certainly not appropriate.

It is when you start using associations and/or inheritance really heavily that ORM starts to shine, compared to these kind of simple approaches.

I do not understand the negativity around this topic. The bottom line is that you have to map between Java objects and the database one way or another. You can do it by writing everything with JDBC and filling some JavaBeans or hash maps, but in effect what you are doing is mapping.

Now, if you have a tool that can do some of this boring job for you, why not use it? Sure, performance can be an issue with something like entity beans, but you do not have to use those solutions. It might be that you do not feel safe unless you can write the SQL yourself, but you can still do that too (e.g. with iBatis!). You have plenty of choices available. There is no point reinventing the wheel.

For example, I find the dynamic SQL in XML very useful. My need might be specific to my problem, but it saved me a lot of trouble, and I can generalize my design very nicely. Great stuff.
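To make the point about hand mapping concrete, the "mapping" you end up writing with plain JDBC is just code like this sketch, which copies a row (a Map standing in for a JDBC ResultSet row; the bean and column names are hypothetical) into a bean - exactly the boring job a tool can take over:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical bean populated from a database row.
class Account {
    private String username;
    private String email;
    public String getUsername() { return username; }
    public void setUsername(String u) { this.username = u; }
    public String getEmail() { return email; }
    public void setEmail(String e) { this.email = e; }
}

class HandMapping {
    // Hand-written column-to-property mapping; a mapping tool
    // generates or configures the equivalent of this method.
    static Account mapRow(Map<String, Object> row) {
        Account a = new Account();
        a.setUsername((String) row.get("USERNAME"));
        a.setEmail((String) row.get("EMAIL"));
        return a;
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("USERNAME", "jsmith");
        row.put("EMAIL", "jsmith@example.com");
        System.out.println(mapRow(row).getUsername()); // prints jsmith
    }
}
```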

You can absolutely write portable SQL code using SQL Maps (or JDBC, for that matter). 90% of the time it's a decision you make: either write portable SQL or don't. Of course there are some exceptions (mostly relating to BLOBs with Oracle and certain non-standard functions).

For example: JPetStore, which uses the iBATIS Database Layer, works with 9 (known) databases with NO code or SQL changes.

I have a problem using a GROUP BY clause with Hibernate. The select has some fields from 2 tables and also has max and sum aggregate functions; the fields from the select clause are used in the group by clause. Which function of the Hibernate session should be used to execute this query? Please help...

Hi, I don't know what the fuss is about; it works, is simple, offers a good caching facility, and is simple (oh, I've repeated the word simple)... and that's important.

Yes, it's got quirks. We're using 2.0.9B and here are the problems we faced:

1. Persisting CLOBs
2. Passing arrays
3. Getting a raw JDBC connection was a pain, because the method was returning a proxy Connection, and for some other reason that was no good to us.

...Anyway, there will be issues and fixes, but the important thing is that it is a good, WORKABLE, SIMPLE option for many of us.

I wouldn't want to comment on whether it's the 'best designed thing' or not, but if it works then I must respect what's been done, because we use it, and that's the bottom line.

Just remember, Microsoft has got away with murder for one reason (or one of the reasons): many end users still love what they dish out because it's SIMPLE and fits into everything they've got. OK, you may have your own theory about why they have done well, so that may be debatable.

Technically, iBatis... I think it's great for querying, but persistence is still a little weak.

But for all the bugs, I think it's great and I know it will mature further... just don't complicate it :o) (If you manage that, tell me how? lol)
