POJO stands for Plain Old Java Object. POJOs are basic JavaBeans with getter and setter methods defined for all of the properties in the bean. They can also contain business logic related to those properties. Hibernate applications work more efficiently with POJOs than with arbitrary Java classes.

5. Please describe the object life cycle in Hibernate.

Transient objects do not (yet) have any association with the database. They act like any normal Java object and are not saved to the database. When the last reference to a transient object is lost, the object itself is lost and is (eventually) garbage collected. There is no connection between transactions and such objects: commits and rollbacks have no effect on them. They can be turned into persistent objects via one of the save() method calls on the Session object, or by adding a reference from a persistent object to the transient object.

Persistent objects do have an association with the database. They are always associated with a persistence manager, i.e., a Session object, and they always participate in a transaction. The actual update of the database from a persistent object may occur at any time between when the object is modified and the end of the transaction: it does not necessarily happen immediately. However, this feature, which allows important optimizations in database interactions, is essentially invisible to the programmer. For example, one place where one might expect to notice the difference between the in-memory persistent object and the database version is at the point of executing a query. In such a case, Hibernate will, if necessary, synchronise any dirty objects with the database (i.e., save them) in order to ensure that the query returns the correct results.

A persistent object has a primary key value set, whether or not it has been actually saved to the database yet.

Calling the delete method of the Session object on a persistent object will cause its removal from the database and will make it transient.

Detached objects are objects that were persistent but no longer have a connection to a Session object (usually because you have closed the session). Such an object contains data that was synchronised with the database at the time that the session was closed, but, since then, the database may have changed; with the result that this object is now stale.
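The three states above can be illustrated with a short sketch. This is illustrative only: it assumes an already-configured SessionFactory and a mapped User entity, neither of which is defined in this document, and it is not runnable on its own.

```java
// Hypothetical names: sessionFactory and User are assumed, not defined here.
User user = new User("alice");      // transient: no database identity yet

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
session.save(user);                 // persistent: id assigned, tracked by the Session
user.setName("alice2");             // dirty change; flushed before commit or a query
tx.commit();
session.close();                    // user is now detached: its data may grow stale

user.setName("alice3");             // NOT tracked while detached; must be reattached
```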

6. How is transaction management done in Hibernate?

A transaction is simply a unit of work that is atomic: if any single step fails, the whole unit of work fails. In a database context, a transaction groups a set of statements/commands that are committed together; if a single statement fails, the whole unit of work is rolled back. Transactions can be described using the ACID criteria.

ACID means:

A: Atomicity: If any single step in a transaction fails, the whole unit of work must fail. This is known as atomicity.

C: Consistency: A transaction moves the database from one consistent state to another. If it commits, all integrity constraints still hold; if it fails, the data is left in its previous consistent state.

I: Isolation: While a transaction is being executed, its changes are not visible to other concurrently running transactions. Any changes made to existing data before the transaction finishes are hidden from other active transactions, so each transaction works on consistent data.

D: Durability: The changes made by a transaction are durable. Even if the server or system fails after a transaction completes, the changes made by a successfully committed transaction are permanent/persistent.

So a database transaction involves a set of SQL statements that either succeeds or fails as a whole. If any one of the statements fails, the execution of subsequent statements is aborted and all changes made by the previous SQL statements are rolled back. In complex applications, which involve different database actions, one should set boundaries for a transaction, i.e., decide where the transaction begins and ends. This is called transaction demarcation. If an error occurs (either while executing operations or when committing the transaction), you have to roll back the transaction to leave the data in a consistent state.

This can be done in two ways:

Programmatically, by explicitly setting transaction boundaries in code using Hibernate's own Transaction API, or by using the JTA API.
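A programmatic demarcation sketch using Hibernate's native Transaction API (hedged: it assumes a configured SessionFactory, here called sessionFactory, and is not runnable without a Hibernate setup):

```java
Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();   // begin boundary
    // ... the unit of work: saves, updates, queries ...
    tx.commit();                       // end boundary: flush pending changes and commit
} catch (RuntimeException e) {
    if (tx != null) tx.rollback();     // leave the data in a consistent state
    throw e;
} finally {
    session.close();
}
```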

7. What is the difference between association mapping and component mapping?

Association mapping refers to a many-to-one or one-to-one relationship which is mapped by using another class that you have mapped in Hibernate (also called an “entity”). The associated object has its own lifecycle and is simply related to the first object.

Component mapping refers to mapping a class (or collection of classes) whose lifecycle is bound tightly to the parent. This is also called “composition” in the strict definition of the word in object-oriented programming. Basically if you delete the parent object the child object should also be deleted; it also cannot exist on its own without a parent.
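In Hibernate XML mapping terms, the difference looks roughly like this (a hedged sketch; the Order, Customer, and Address names and columns are hypothetical):

```xml
<class name="Order" table="ORDERS">
    <id name="id" column="ORDER_ID"/>

    <!-- Association: Customer is a separately mapped entity with its own
         lifecycle; only a foreign key is stored in the ORDERS table. -->
    <many-to-one name="customer" class="Customer" column="CUSTOMER_ID"/>

    <!-- Component: Address is a value type whose columns live in the ORDERS
         table itself; it cannot exist without its owning Order. -->
    <component name="shippingAddress" class="Address">
        <property name="street" column="SHIP_STREET"/>
        <property name="city"   column="SHIP_CITY"/>
    </component>
</class>
```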

8. Why is Hibernate said to use lazy loading?

The lazy setting decides whether to load child objects while loading the parent object. You specify it with lazy="true" on the class or collection in the Hibernate mapping file. By default, lazy loading of child objects is enabled. This ensures that child objects are not loaded unless they are explicitly accessed in the application, for example by calling getChild() on the parent; in that case Hibernate issues a fresh database call to load the child when getChild() is actually invoked on the parent object. In some cases, however, you do need the child objects loaded when the parent is loaded. Simply set lazy="false" and Hibernate will load the child when the parent is loaded from the database. Examples: the Address child of a User class can be made lazy if it is not required frequently, but you may want to load the Author object whenever you load a Book parent in an online bookshop.

Hibernate does not support lazy initialization for detached objects. Access to a lazy association outside of the context of an open Hibernate session will result in an exception.
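In a mapping file, the setting looks like this (a hedged sketch; the User and Address names and columns are hypothetical):

```xml
<class name="User" table="USERS">
    <id name="id" column="USER_ID"/>
    <!-- lazy="true" (the default): the addresses collection is loaded only
         when user.getAddresses() is actually accessed, via a fresh query.
         Accessing it after the Session is closed throws
         LazyInitializationException. -->
    <set name="addresses" lazy="true">
        <key column="USER_ID"/>
        <one-to-many class="Address"/>
    </set>
</class>
```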

9. What is Hibernate tuning?

The key to better performance in any Hibernate application is to employ SQL optimization, session management, and data caching.
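For illustration, a few standard configuration properties that touch each of these areas (a sketch; the values shown are examples, not recommendations):

```properties
# SQL optimization: see what Hibernate actually executes, and batch writes
hibernate.show_sql=false
hibernate.jdbc.batch_size=20

# Data caching: enable the second-level and query caches
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true
```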

10. Define Hibernate statistics.

We’ve been doing a lot of Hibernate work at Terracotta recently, and that naturally includes a fair amount of performance testing. In Hibernate, you can grab those stats using the Statistics object for a SessionFactory, obtained via getStatistics(). There are all sorts of tasty morsels inside this class for factory-wide counts and per-entity/query/cache/etc. stats. Cool stuff.

We noticed, however, while doing some perf testing that the code inside StatisticsImpl is a bit problematic from a concurrency point of view. The basic gist of the class is just a set of stat counters. The pattern can be simplified without much loss of detail:
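The original snippet did not survive here; the pattern described is, in essence, the following (a reconstruction for illustration, not the actual Hibernate source):

```java
// Simplified reconstruction of the pattern described above:
// synchronized writes, but UNsynchronized reads.
class StatisticsSketch {
    private long entityLoadCount;
    private long queryExecutionCount;

    // every writer contends for the same coarse-grained lock (this)
    public synchronized void incrementEntityLoadCount() {
        entityLoadCount++;
    }

    public synchronized void incrementQueryExecutionCount() {
        queryExecutionCount++;
    }

    // reads are NOT synchronized: no visibility guarantee, and since
    // the field is a 64-bit long, the read is not even atomic
    public long getEntityLoadCount() {
        return entityLoadCount;
    }

    public long getQueryExecutionCount() {
        return queryExecutionCount;
    }

    public synchronized void clear() {
        entityLoadCount = 0;
        queryExecutionCount = 0;
    }
}
```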

That’s basically what’s in this class but it’s repeated a couple dozen times for other counters and there are Maps used for per-item counts of a few kinds of items. There are several problems here from a concurrency point of view:

Coarse-grained lock: the entire class shares a single lock via synchronized methods. That means that every counter in every thread is contending for that same lock. The impact of this is that when you turn on statistics collection in Hibernate, you immediately introduce a big old stop-the-world contention point across all threads. This will have an impact (possibly a very significant one) on the very stats you are trying to collect. It’s entirely possible that the scale of your application or other bottlenecks mean that your application is not actually seeing this as a bottleneck, but it should scare you at least a little.

At the very least, fine-grained locking could be used here to avoid creating contention between different kinds of statistics. There are also collections in here that collect stats on a per-query, per-entity, per-collection, etc. basis. Those could have fine-grained locks per entity as well (but they don’t).

Dirty reads: you’ll notice that while writes to the counters are synchronized, the reads are not. Presumably this was done for performance. Unfortunately, it’s also quite broken from a memory-model point of view. These reads are not guaranteed to see writes from any other thread, so the values you’re seeing are possibly stale. In fact, it’s possible that you never see any of the counter updates. In practice, the synchronization on puts is probably causing the local caches to get flushed and, on the hardware we’re running, you do seem to see values that are in the ballpark at least. But the Java memory model makes no guarantee that this will work on all architectures or at any point in the future.

Race condition on clear(): the common way the stats are used is with some GUI or other monitor sitting in a loop, periodically reading some (or all) of the stats and then calling clear(). Because time passes between the read of the first stat and the clear, you will lose all updates to the stats during the course of the reads. You may be willing to neglect a few lost updates, but consider that in many cases the monitor thread may iterate through every entity, collection, and query updated since the last loop (potentially hundreds of reads). In the cases where per-item stats are looked up, the gets are actually synchronized as well when finding the stat in a Map. Those gets are synchronized against all other puts happening in the system. So the scope of that “read all stats” part of the monitor code may actually be quite large, and you will lose all updates made between the beginning of that and the clear(), which distorts the next set of stats to an unknown degree (more activity == more distortion).

[UPDATE] Dirty long read: as Peter mentioned in the comments, the values being read here are longs, and since longs and doubles are 64-bit values, dirty reads of them are not guaranteed to see atomic writes from other threads. So you could see different 32-bit chunks that were not written together. Reads/writes of shared doubles and longs should always be done with synchronized or volatile to address this issue.

I certainly understand that stats are seen as best-effort and that the existing Hibernate code supports pre-1.5 Java and does not have access to Java 5 concurrency utilities like Atomic classes and concurrent collections, but even so I think there are things that could be done here to make stats collection a lot more accurate and less invasive. Things like fine-grained or more concurrent locking (with volatile, AtomicInteger, or ReentrantReadWriteLock) would go a long way towards fixing visibility while increasing concurrency.
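For example, on Java 5+ the same counters could be kept in AtomicLong fields, which give lock-free increments, cross-thread visibility, and atomic 64-bit reads. This is a sketch of the suggested direction, not a patch to Hibernate's actual code:

```java
import java.util.concurrent.atomic.AtomicLong;

// One AtomicLong per counter: fine-grained (no shared lock), and reads are
// both visible across threads and atomic even though the value is 64 bits.
class ConcurrentStatisticsSketch {
    private final AtomicLong entityLoadCount = new AtomicLong();
    private final AtomicLong queryExecutionCount = new AtomicLong();

    public void incrementEntityLoadCount() {
        entityLoadCount.incrementAndGet();
    }

    public void incrementQueryExecutionCount() {
        queryExecutionCount.incrementAndGet();
    }

    public long getEntityLoadCount() {
        return entityLoadCount.get();
    }

    public long getQueryExecutionCount() {
        return queryExecutionCount.get();
    }

    // The read-then-clear race described above still exists if a monitor
    // reads several counters and then clears them all, but getAndSet(0)
    // at least returns exactly the value that was discarded per counter.
    public long clearEntityLoadCount() {
        return entityLoadCount.getAndSet(0);
    }
}
```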

In our Terracotta integration, I suspect we will be using some byte-code instrumentation to clean up these classes (we already assume Java 1.5+, so that is a constraint we are happy to break). As it is, we don’t currently trust the stats in the first place, and second, we are actually seeing the single lock showing up as a hot spot in performance tests.

I hope that the next version of Hibernate (which I think is dropping pre-1.5 support?) can make some improvements as well.