Part 1. Introduction

Chapter 1.
OpenJPA

OpenJPA is Apache's implementation of Sun's Java Persistence API (JPA)
specification for the transparent persistence of Java objects. This
document provides an overview of the JPA standard and technical
details on the use of OpenJPA.

1.
About This Document

This document is intended for OpenJPA users. It is divided into several parts:

The OpenJPA Reference Guide contains
detailed documentation on all aspects of OpenJPA. Browse through this guide
to familiarize yourself with the many advanced features and customization
opportunities OpenJPA provides. Later, you can use the guide when you need
details on a specific aspect of OpenJPA.

Chapter 1.
Introduction

The Java Persistence API (JPA) is a specification from
Sun Microsystems for the persistence of Java objects to any relational
datastore. JPA requires J2SE 1.5 (also referred to as "Java 5") or
higher, as it makes heavy use of new Java language features such as annotations
and generics. This document provides an overview of JPA. Unless
otherwise noted, the information presented applies to all JPA implementations.

Note

For coverage of OpenJPA's many extensions to the JPA specification,
see the Reference Guide.

1.
Intended Audience

This document is intended for developers who want to learn about JPA
in order to use it in their applications. It assumes that you have a strong
knowledge of object-oriented concepts and Java, including Java 5 annotations and
generics. It also assumes some experience with relational databases and the
Structured Query Language (SQL).

2.
Lightweight Persistence

Persistent data is information that can outlive the program
that creates it. The majority of complex programs use persistent data: GUI
applications need to store user preferences across program invocations, web
applications track user movements and orders over long periods of time, etc.

Lightweight persistence is the storage and retrieval of
persistent data with little or no work from you, the developer. For example,
Java serialization
is a form of lightweight persistence because it can be used to persist Java
objects directly to a file with very little effort. Serialization's capabilities
as a lightweight persistence mechanism pale in comparison to those provided by
JPA, however. The next chapter compares JPA to serialization and other available
persistence mechanisms.

Chapter 2.
Why JPA?

Java developers who need to store and retrieve persistent data already have
several options available to them: serialization, JDBC, JDO, proprietary
object-relational mapping tools, object databases, and EJB 2 entity beans. Why
introduce yet another persistence framework? The answer to this question is that
with the exception of JDO, each of the aforementioned persistence solutions has
severe limitations. JPA attempts to overcome these limitations, as illustrated
by the table below.

Table 2.1.
Persistence Mechanisms

Supports:                               Serialization   JDBC   ORM   ODB   EJB 2   JDO   JPA

Java Objects                            Yes             No     Yes   Yes   Yes     Yes   Yes
Advanced OO Concepts                    Yes             No     Yes   Yes   No      Yes   Yes
Transactional Integrity                 No              Yes    Yes   Yes   Yes     Yes   Yes
Concurrency                             No              Yes    Yes   Yes   Yes     Yes   Yes
Large Data Sets                         No              Yes    Yes   Yes   Yes     Yes   Yes
Existing Schema                         No              Yes    Yes   No    Yes     Yes   Yes
Relational and Non-Relational Stores    No              No     No    No    Yes     Yes   No
Queries                                 No              Yes    Yes   Yes   Yes     Yes   Yes
Strict Standards / Portability          Yes             No     No    No    Yes     Yes   Yes
Simplicity                              Yes             Yes    Yes   Yes   No      Yes   Yes

Serialization is Java's built-in mechanism for transforming
an object graph into a series of bytes, which can then be sent over the network
or stored in a file. Serialization is very easy to use, but it is also very
limited. It must store and retrieve the entire object graph at once, making it
unsuitable for dealing with large amounts of data. It cannot undo changes that
are made to objects if an error occurs while updating information, making it
unsuitable for applications that require strict data integrity. Multiple threads
or programs cannot read and write the same serialized data concurrently without
conflicting with each other. It provides no query capabilities. All these
factors make serialization useless for all but the most trivial persistence
needs.

Many developers use the Java Database Connectivity (JDBC)
APIs to manipulate persistent data in relational databases. JDBC overcomes most
of the shortcomings of serialization: it can handle large amounts of data, has
mechanisms to ensure data integrity, supports concurrent access to information,
and has a sophisticated query language in SQL. Unfortunately, JDBC does not
duplicate serialization's ease of use. The relational paradigm used by JDBC was
not designed for storing objects, and therefore forces you to either abandon
object-oriented programming for the portions of your code that deal with
persistent data, or to find a way of mapping object-oriented concepts like
inheritance to relational databases yourself.

There are many proprietary software products that can perform the mapping
between objects and relational database tables for you. These
object-relational mapping (ORM) frameworks allow you to focus on the
object model and not concern yourself with the mismatch between the
object-oriented and relational paradigms. Unfortunately, each of these products
has its own set of APIs. Your code becomes tied to the proprietary interfaces of
a single vendor. If the vendor raises prices, fails to fix show-stopping bugs,
or falls behind in features, you cannot switch to another product without
rewriting all of your persistence code. This is referred to as vendor lock-in.

Rather than map objects to relational databases, some software companies have
developed a form of database designed specifically to store objects. These
object databases (ODBs) are often much easier to use than
object-relational mapping software. The Object Database Management Group (ODMG)
was formed to create a standard API for accessing object databases; few object
database vendors, however, comply with the ODMG's recommendations. Thus, vendor
lock-in plagues object databases as well. Many companies are also hesitant to
switch from tried-and-true relational systems to the relatively unknown object
database technology. Fewer data-analysis tools are available for object database
systems, and there are vast quantities of data already stored in older
relational databases. For all of these reasons and more, object databases have
not caught on as well as their creators hoped.

The Enterprise Edition of the Java platform introduced entity Enterprise Java
Beans (EJBs). EJB 2.x entities are components that represent persistent
information in a datastore. Like object-relational mapping solutions, EJB 2.x
entities provide an object-oriented view of persistent data. Unlike
object-relational software, however, EJB 2.x entities are not limited to
relational databases; the persistent information they represent may come from an
Enterprise Information System (EIS) or other storage device. Also, EJB 2.x
entities use a strict standard, making them portable across vendors.
Unfortunately, the EJB 2.x standard is somewhat limited in the object-oriented
concepts it can represent. Advanced features like inheritance, polymorphism, and
complex relations are absent. Additionally, EJB 2.x entities are difficult to
code, and they require heavyweight and often expensive application servers to
run.

The JDO specification uses an API that is strikingly similar to JPA. JDO,
however, supports non-relational databases, a feature that some argue dilutes
the specification.

JPA combines the best features from each of the persistence mechanisms listed
above. Creating entities under JPA is as simple as creating serializable
classes. JPA supports the large data sets, data consistency, concurrent use, and
query capabilities of JDBC. Like object-relational software and object
databases, JPA allows the use of advanced object-oriented concepts such as
inheritance. JPA avoids vendor lock-in by relying on a strict specification like
JDO and EJB 2.x entities. JPA focuses on relational databases. And like JDO, JPA
is extremely easy to use.

Note

OpenJPA typically stores data in relational databases, but can be customized for
use with non-relational datastores as well.

JPA is not ideal for every application. For many applications, though, it
provides an exciting alternative to other persistence mechanisms.

Chapter 3.
Java Persistence API Architecture

The diagram below illustrates the relationships between the primary components
of the JPA architecture.

Note

A number of the depicted interfaces are only required outside of an
EJB3-compliant application server. In an application server,
EntityManager instances are typically injected, rendering the
EntityManagerFactory unnecessary. Also, transactions
within an application server are handled using standard application server
transaction controls. Thus, the EntityTransaction also
goes unused.

EntityManager: The javax.persistence.EntityManager
is the primary JPA interface used by applications. Each
EntityManager manages a set of persistent objects, and
has APIs to insert new objects and delete existing ones. When used outside the
container, there is a one-to-one relationship between an
EntityManager and an EntityTransaction.
EntityManagers also act as factories for
Query instances.

EntityTransaction: Each EntityManager
has a one-to-one relation with a single
javax.persistence.EntityTransaction.
EntityTransactions allow operations on persistent data to be
grouped into units of work that either completely succeed or completely fail,
leaving the datastore in its original state. These all-or-nothing operations
are important for maintaining data integrity.

Query: The javax.persistence.Query
interface is implemented by each JPA vendor to find persistent
objects that meet certain criteria. JPA standardizes support for queries using
both the Java Persistence Query Language (JPQL) and the Structured Query
Language (SQL). You obtain Query instances from an
EntityManager.

The example below illustrates how the JPA interfaces interact to execute a JPQL
query and update persistent objects. The example assumes execution outside a
container.
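The example code itself is not present in this extract. The following sketch shows the kind of interaction described, assuming a simple Magazine entity with a price field; the entity, field, and method names here are illustrative rather than taken from the specification:

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;
import javax.persistence.Id;
import javax.persistence.Query;

@Entity
public class Magazine {
    @Id private String isbn;
    private double price;

    protected Magazine() {}                  // required no-arg constructor
    public Magazine(String isbn, double price) {
        this.isbn = isbn;
        this.price = price;
    }
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }

    // Runs a JPQL query and updates the results inside a transaction.
    // Changes to managed entities are flushed to the datastore on commit.
    public static void discountExpensive(EntityManager em) {
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        Query query = em.createQuery(
            "SELECT m FROM Magazine m WHERE m.price > 5.0");
        List<?> results = query.getResultList();
        for (Object o : results)
            ((Magazine) o).setPrice(((Magazine) o).getPrice() * 0.9);
        tx.commit();
    }
}
```

Outside a container, you would obtain the EntityManager from an EntityManagerFactory created via javax.persistence.Persistence; inside one, it is typically injected.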

The remainder of this document explores the JPA interfaces in detail. We present
them in roughly the order that you will use them as you develop your
application.

1.
JPA Exceptions

The diagram above depicts the JPA exception architecture. All
exceptions are unchecked. JPA uses standard exceptions where
appropriate, most notably IllegalArgumentExceptions and
IllegalStateExceptions. The specification also provides
a few JPA-specific exceptions in the javax.persistence
package. These exceptions should be self-explanatory. See the
Javadoc for
additional details on JPA exceptions.

JPA recognizes two types of persistent classes: entity
classes and embeddable classes. Each persistent instance of
an entity class - each entity - represents a unique
datastore record. You can use the EntityManager to find
an entity by its persistent identity (covered later in this chapter), or use a
Query to find entities matching certain criteria.

An instance of an embeddable class, on the other hand, is only stored as part of
a separate entity. Embeddable instances have no persistent identity, and are
never returned directly from the EntityManager or from a
Query unless the query projects the embedded instance from
its owning class. For example, if Address is
embedded in Company, then the query
"SELECT a FROM Address a" will never return the
embedded Address of a Company,
but a projection query such as
"SELECT c.address FROM Company c" will.

Despite these differences, there are few distinctions between entity classes and
embeddable classes. In fact, writing either type of persistent class is a lot
like writing any other class. There are no special parent classes to
extend from, field types to use, or methods to write. This is one important way
in which JPA makes persistence transparent to you, the developer.

Note

JPA supports both fields and JavaBean properties as persistent state. For
simplicity, however, we will refer to all persistent state as persistent fields,
unless we want to note a unique aspect of persistent properties.

1.
Restrictions on Persistent Classes

There are very few restrictions placed on persistent classes. Still, it never
hurts to familiarize yourself with exactly what JPA does and does not support.

1.1.
Default or No-Arg Constructor

The JPA specification requires that all persistent classes have a no-arg
constructor. This constructor may be public or protected. Because the compiler
automatically creates a default no-arg constructor when no other constructor is
defined, only classes that declare other constructors need to add a no-arg
constructor explicitly.

Note

OpenJPA's enhancer will automatically add a protected
no-arg constructor to your class when required. Therefore, this restriction does
not apply when using the enhancer. See Section 2, “
Enhancement
”
of the Reference Guide for details.

1.2.
Final

Entity classes may not be final. No method of an entity class can be final.

Note

OpenJPA supports final classes and final methods.

1.3.
Identity Fields

All entity classes must declare one or more fields which together form the
persistent identity of an instance. These are called identity
or primary key fields. In our
Magazine class, isbn and title
are identity fields, because no two magazine records in the datastore can have
the same isbn and title values.
Section 2.2, “
Id
” will show you how to denote your
identity fields in JPA metadata. Section 2, “
Entity Identity
”
below examines persistent identity.

1.4.
Version Field

The version field in our Magazine
class may seem out of place. JPA uses a version field in your entities to detect
concurrent modifications to the same datastore record. When the JPA runtime
detects an attempt to concurrently modify the same record, it throws an
exception to the transaction attempting to commit last. This prevents
overwriting the previous commit with stale data.

A version field is not required, but without one concurrent threads or
processes might succeed in making conflicting changes to the same record at the
same time. This is unacceptable to most applications.
Section 2.5, “
Version
” shows you how to designate a
version field in JPA metadata.

The version field must be an integral type (int,
Long, etc.) or a
java.sql.Timestamp. You should consider version fields immutable;
changing the field value has undefined results.
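As a sketch, a version field is simply declared and then left to the runtime to maintain; the @Version annotation shown here is covered in the metadata chapter, and the other field names are illustrative:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Magazine {
    @Id private String isbn;
    private String title;

    // Maintained entirely by the JPA runtime to detect concurrent
    // modifications; application code should never assign to it.
    @Version private int version;

    public int getVersion() { return version; }
}
```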

Note

OpenJPA fully supports version fields, but does not require them for concurrency
detection. OpenJPA can maintain surrogate version values or use state
comparisons to detect concurrent modifications. See
Section 7, “
Additional JPA Mappings
” in the Reference Guide.

1.5.
Inheritance

JPA fully supports inheritance in persistent classes. It allows persistent
classes to inherit from non-persistent classes, persistent classes to inherit
from other persistent classes, and non-persistent classes to inherit from
persistent classes. It is even possible to form inheritance hierarchies in which
persistence skips generations. There are, however, a few important limitations:

Persistent classes cannot inherit from certain natively-implemented system
classes such as java.net.Socket and
java.lang.Thread.

If a persistent class inherits from a non-persistent class, the fields of the
non-persistent superclass cannot be persisted.

1.6.
Persistent Fields

JPA manages the state of all persistent fields. Before you access persistent
state, the JPA runtime makes sure that it has been loaded from the datastore.
When you set a field, the runtime records that it has changed so that the new
value will be persisted. This allows you to treat the field in exactly the same
way you treat any other field - another aspect of JPA's transparency.

JPA does not support static or final fields. It does, however, include built-in
support for most common field types. These types can be roughly divided into
three categories: immutable types, mutable types, and relations.

Immutable types, once created, cannot be changed. The only
way to alter a persistent field of an immutable type is to assign a new value to
the field. JPA supports the following immutable types: primitives, primitive
wrapper classes, java.lang.String,
java.math.BigInteger, and java.math.BigDecimal.

JPA also supports byte[], Byte[],
char[], and Character[] as
immutable types. That is, you can persist fields of these types,
but you should not manipulate individual array indexes without resetting the
array into the persistent field.

Persistent fields of mutable types can be altered without
assigning the field a new value. Mutable types can be modified directly through
their own methods. The JPA specification requires that implementations support
the following mutable field types:

java.util.Date

java.util.Calendar

java.sql.Date

java.sql.Timestamp

java.sql.Time

Enums

Entity types (relations between entities)

Embeddable types

java.util.Collections of entities

java.util.Sets of entities

java.util.Lists of entities

java.util.Maps in which each entry maps the value of one
of a related entity's fields to that entity.

Collection and map types may be parameterized.

Most JPA implementations also have support for persisting serializable values as
binary data in the datastore. Chapter 5,
Metadata
has more
information on persisting serializable types.

Note

OpenJPA also supports arrays, java.lang.Number,
java.util.Locale, all JDK 1.2 Set,
List, and Map types,
and many other mutable and immutable field types. OpenJPA also allows you to
plug in support for custom types.

1.7.
Conclusions

This section detailed all of the restrictions JPA places on persistent classes.
While it may seem like we presented a lot of information, you will seldom find
yourself hindered by these restrictions in practice. Additionally, there are
often ways of using JPA's other features to circumvent any limitations you run
into.

2.
Entity Identity

Java recognizes two forms of object identity: numeric identity and qualitative
identity. If two references are numerically identical, then
they refer to the same JVM instance in memory. You can test for this using the
== operator. Qualitative identity, on
the other hand, relies on some user-defined criteria to determine whether two
objects are "equal". You test for qualitative identity using the
equals method. By default, this method simply relies on numeric
identity.

JPA introduces another form of object identity, called entity
identity or persistent identity. Entity
identity tests whether two persistent objects represent the same state in the
datastore.

The entity identity of each persistent instance is encapsulated in its
identity field(s). If two entities of the same type have
the same identity field values, then the two entities represent the same state
in the datastore. Each entity's identity field values must be unique among all
other entities of the same type.

Note

OpenJPA supports entities as identity fields, as the Reference Guide discusses
in Section 4.2, “
Entities as Identity Fields
”. For legacy schemas with binary
primary key columns, OpenJPA also supports using identity fields of type
byte[]. When you use a byte[]
identity field, you must create an identity class. Identity classes are
covered below.

Warning

Changing the fields of an embeddable instance while it is assigned to an
identity field has undefined results. Always treat embeddable identity instances
as immutable objects in your applications.

If you are dealing with a single persistence context (see
Section 3, “
Persistence Context
”), then you do not
have to compare identity fields to test whether two entity references represent
the same state in the datastore. There is a much easier way: the ==
operator. JPA requires that each persistence context maintain only
one JVM object to represent each unique datastore record. Thus, entity identity
is equivalent to numeric identity within a persistence context. This is referred
to as the uniqueness requirement.

The uniqueness requirement is extremely important - without it, it would be
impossible to maintain data integrity. Think of what could happen if two
different objects in the same transaction were allowed to represent the same
persistent data. If you made different modifications to each of these objects,
which set of changes should be written to the datastore? How would your
application logic handle seeing two different "versions" of the same data?
Thanks to the uniqueness requirement, these questions do not have to be
answered.

2.1.
Identity Class

If your entity has only one identity field, you can use the value of that field
as the entity's identity object in all EntityManager APIs. Otherwise, you must supply an
identity class to use for identity objects. Your identity class must meet the
following criteria:

The class must be public.

The class must be serializable.

The class must have a public no-args constructor.

The names of the non-static fields or properties of the class must be the same
as the names of the identity fields or properties of the corresponding entity
class, and the types must be identical.

The equals and hashCode
methods of the class must use the values of all fields or properties
corresponding to identity fields or properties in the entity class.

If the class is an inner class, it must be static.

All entity classes related by inheritance must use the same identity class, or
else each entity class must have its own identity class whose inheritance
hierarchy mirrors the inheritance hierarchy of the owning entity classes (see
Section 2.1.1, “
Identity Hierarchies
”).

Note

Though you may still create identity classes by hand, OpenJPA provides the
appidtool to automatically generate proper identity
classes based on your identity fields. See
Section 4.3, “
Application Identity Tool
” of the Reference Guide.

Example 4.2.
Identity Class

This example illustrates a proper identity class for an entity with multiple
identity fields.
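The example code itself did not survive extraction. The following is a plausible reconstruction that satisfies the criteria above, using the isbn and title identity fields of the Magazine class discussed earlier:

```java
import java.io.Serializable;

// Identity class for Magazine, whose identity fields are isbn and title.
public class MagazineId implements Serializable {
    public String isbn;     // names and types match Magazine's identity fields
    public String title;

    public MagazineId() {}  // public no-arg constructor required

    // equals and hashCode must use the values of all identity fields.
    @Override
    public boolean equals(Object other) {
        if (other == this)
            return true;
        if (!(other instanceof MagazineId))
            return false;
        MagazineId mi = (MagazineId) other;
        return (isbn == null ? mi.isbn == null : isbn.equals(mi.isbn))
            && (title == null ? mi.title == null : title.equals(mi.title));
    }

    @Override
    public int hashCode() {
        return (isbn == null ? 0 : isbn.hashCode())
             ^ (title == null ? 0 : title.hashCode());
    }
}
```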

2.1.1.
Identity Hierarchies

An alternative to having a single identity class for an entire inheritance
hierarchy is to have one identity class per level in the inheritance hierarchy.
The requirements for using a hierarchy of identity classes are as follows:

The inheritance hierarchy of identity classes must exactly mirror the hierarchy
of the persistent classes that they identify. In the example pictured above,
abstract class Person is extended by abstract class
Employee, which is extended by non-abstract class
FullTimeEmployee, which is extended by non-abstract
class Manager. The corresponding identity classes, then,
are an abstract PersonId class, extended by an abstract
EmployeeId class, extended by a non-abstract
FullTimeEmployeeId class, extended by a non-abstract
ManagerId class.

Subclasses in the identity hierarchy may define additional identity fields until
the hierarchy becomes non-abstract. In the aforementioned example,
Person defines an identity field ssn,
Employee defines an additional identity field userName,
and FullTimeEmployee adds a final identity
field, empId. However, Manager may not
define any additional identity fields, since it is a subclass of a non-abstract
class. The hierarchy of identity classes, of course, must match the identity
field definitions of the persistent class hierarchy.

It is not necessary for each abstract class to declare identity fields. In the
previous example, the abstract Person and
Employee classes could declare no identity fields, and the first
concrete subclass FullTimeEmployee could define one or
more identity fields.

All subclasses of a concrete identity class must be equals
and hashCode-compatible with the
concrete superclass. This means that in our example, a ManagerId
instance and a FullTimeEmployeeId instance
with the same identity field values should have the same hash code, and should
compare equal to each other using the equals method of
either one. In practice, this requirement reduces to the following coding
practices:

Use instanceof instead of comparing Class
objects in the equals methods of your
identity classes.

An identity class that extends another non-abstract identity class should not
override equals or hashCode.

3.
Lifecycle Callbacks

It is often necessary to perform various actions at different stages of a
persistent object's lifecycle. JPA includes a variety of callback methods for
monitoring changes in the lifecycle of your persistent objects. These callbacks
can be defined on the persistent classes themselves and on non-persistent
listener classes.

3.1.
Callback Methods

Every persistence event has a corresponding callback method marker. These
markers are shared between persistent classes and their listeners. You can use
these markers to designate a method for callback either by annotating that
method or by listing the method in the XML mapping file for a given class. The
lifecycle events and their corresponding method markers are:

PrePersist: Methods marked with this annotation
will be invoked before an object is persisted. This could be used for assigning
primary key values to persistent objects. This is equivalent to the XML element
tag pre-persist.

PostPersist: Methods marked with this annotation
will be invoked after an object has transitioned to the persistent state. You
might want to use such methods to update a screen after a new row is added. This
is equivalent to the XML element tag post-persist.

PostLoad: Methods marked with this annotation
will be invoked after all eagerly fetched fields of your class have been loaded
from the datastore. No other persistent fields can be accessed in this method.
This is equivalent to the XML element tag post-load.

PostLoad is often used to initialize non-persistent
fields whose values depend on the values of persistent fields, such as a complex
data structure.

PreUpdate: Methods marked with this annotation
will be invoked just before the persistent values in your objects are flushed to
the datastore. This is equivalent to the XML element tag
pre-update.

PreUpdate is the complement to PostLoad.
While methods marked with PostLoad are most
often used to initialize non-persistent values from persistent data, methods
annotated with PreUpdate are normally used to set
persistent fields with information cached in non-persistent data.

PostUpdate: Methods marked with this annotation
will be invoked after changes to a given instance have been stored to the
datastore. This is useful for clearing stale data cached at the application
layer. This is equivalent to the XML element tag post-update.

PreRemove: Methods marked with this annotation
will be invoked before an object transitions to the deleted state. Access to
persistent fields is valid within this method. You might use this method to
cascade the deletion to related objects based on complex criteria, or to perform
other cleanup. This is equivalent to the XML element tag
pre-remove.

PostRemove: Methods marked with this annotation
will be invoked after an object has been marked for deletion. This is
equivalent to the XML element tag post-remove.

3.2.
Using Callback Methods

When declaring callback methods on a persistent class, you may use any method
that takes no arguments and is not shared with any property access fields.
Multiple events can be assigned to a single method as well.

Below is an example of how to declare callback methods on persistent classes:
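The example code itself is not present in this extract; the following sketch illustrates the declaration style, using a hypothetical Magazine entity whose field and method names are illustrative:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.PostPersist;
import javax.persistence.PrePersist;

@Entity
public class Magazine {
    @Id private String isbn;
    private String title;

    // Non-persistent state derived from persistent fields.
    private transient String displayName;

    protected Magazine() {}
    public Magazine(String isbn, String title) {
        this.isbn = isbn;
        this.title = title;
    }

    @PrePersist
    public void assignDefaults() {
        if (title == null)
            title = "Untitled";
    }

    // A single no-arg method may be marked for multiple events.
    @PostLoad
    @PostPersist
    public void initDisplayName() {
        displayName = title + " (" + isbn + ")";
    }

    public String getDisplayName() { return displayName; }
}
```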

3.3.
Using Entity Listeners

Mixing lifecycle event code into your persistent classes is not always ideal. It
is often more elegant to handle cross-cutting lifecycle events in a
non-persistent listener class. JPA allows for this, requiring only that listener
classes have a public no-arg constructor. Like persistent classes, your listener
classes can consume any number of callbacks. The callback methods must take in a
single java.lang.Object argument which represents the
persistent object that triggered the event.

Entities can enumerate listeners using the EntityListeners
annotation. This annotation takes an array of listener classes as
its value.

Below is an example of how to declare an entity and its corresponding listener
classes.
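The example code itself is not present in this extract; the following sketch shows the declaration style, with a hypothetical AuditListener class whose name and behavior are illustrative:

```java
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.Id;
import javax.persistence.PostPersist;
import javax.persistence.PreRemove;

// A non-persistent listener class: it needs only a public no-arg
// constructor, and each callback takes the triggering object.
class AuditListener {
    static int events;                        // illustrative counter

    @PostPersist
    public void recordInsert(Object entity) { events++; }

    @PreRemove
    public void recordDelete(Object entity) { events++; }
}

@Entity
@EntityListeners({ AuditListener.class })
public class Magazine {
    @Id private String isbn;
    private String title;
}
```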

3.4.
Entity Listeners Hierarchy

Entity listener methods are invoked in a specific order when a given event is
fired. So-called default listeners are invoked first: these
are listeners which have been defined in a package annotation or in the root
element of XML mapping files. Next, entity listeners are invoked in the order of
the inheritance hierarchy, with superclass listeners being invoked before
subclass listeners. Finally, if an entity has multiple listeners for the same
event, the listeners are invoked in declaration order.

You can exclude default listeners and listeners defined in superclasses from the
invocation chain through the use of two class-level annotations:

ExcludeDefaultListeners: This annotation indicates that
no default listeners will be invoked for this class, or any of its subclasses.
The XML equivalent is the empty exclude-default-listeners
element.

ExcludeSuperclassListeners: This annotation will cause
OpenJPA to skip invoking any listeners declared in superclasses. The XML
equivalent is the empty exclude-superclass-listeners element.

4.
Conclusions

This chapter covered everything you need to know to write persistent class
definitions in JPA. JPA cannot use your persistent classes, however, until you
complete one additional step: you must define the persistence metadata. The next
chapter explores metadata in detail.

JPA requires that you accompany each persistent class with persistence metadata.
This metadata serves three primary purposes:

To identify persistent classes.

To override default JPA behavior.

To provide the JPA implementation with information that it cannot glean from
simply reflecting on the persistent class.

Persistence metadata is specified using either the Java 5 annotations defined in
the javax.persistence package, XML mapping files, or a
mixture of both. In the latter case, XML declarations override conflicting
annotations. If you choose to use XML metadata, the XML files must be available
at development and runtime, and must be discoverable via either of two
strategies:

In a resource named orm.xml placed in a
META-INF directory within a directory in your classpath or within a
jar archive containing your persistent classes.

Declared in your
persistence.xml configuration file. In this case, each XML
metadata file must be listed in a mapping-file element whose
content is either a path to the given file or a resource location available to
the class' class loader.
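For instance, a hypothetical persistence.xml might name a mapping file like this; the unit name and file path here are illustrative, not prescribed:

```
<persistence>
    <persistence-unit name="mag-unit">
        <mapping-file>META-INF/magazine-orm.xml</mapping-file>
        <!-- class listings, properties, etc. -->
    </persistence-unit>
</persistence>
```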

Note

OpenJPA uses a process called enhancement to modify the
bytecode of entities for transparent lazy loading and immediate dirty tracking.
See Section 2, “
Enhancement
” in the Reference Guide for
details on enhancement.

1.2.
Id Class

As we discussed in Section 2.1, “
Identity Class
”,
entities with multiple identity fields must use an identity class
to encapsulate their persistent identity. The IdClass
annotation specifies this class. It accepts a single
java.lang.Class value.

The equivalent XML element is id-class, which has a single
attribute:

class: Set this required attribute to the name of the
identity class.
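A sketch of the annotation in use, assuming the Magazine entity with isbn and title identity fields; the inner identity class shown here is abbreviated, and a proper one also meets the criteria from the Identity Class section:

```java
import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;

@Entity
@IdClass(Magazine.MagazineId.class)
public class Magazine {
    @Id private String isbn;
    @Id private String title;

    // Static inner identity class; a proper implementation also
    // overrides equals and hashCode over both identity fields.
    public static class MagazineId implements Serializable {
        public String isbn;
        public String title;
    }
}
```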

1.3.
Mapped Superclass

A mapped superclass is a non-entity class that can define
persistent state and mapping information for entity subclasses. Mapped
superclasses are usually abstract. Unlike true entities, you cannot query a
mapped superclass, pass a mapped superclass instance to any
EntityManager or Query methods, or declare a
persistent relation with a mapped superclass target. You denote a mapped
superclass with the MappedSuperclass marker annotation.

The equivalent XML element is mapped-superclass. It expects
the following attributes:

Note

OpenJPA allows you to query on mapped superclasses. A query on a mapped
superclass will return all matching subclass instances. OpenJPA also allows you
to declare relations to mapped superclass types; however, you cannot query
across these relations.

1.4.
Embeddable

The Embeddable annotation designates an embeddable
persistent class. Embeddable instances are stored as part of the record of their
owning instance. All embeddable classes must have this annotation.

A persistent class can either be an entity or an embeddable class, but not both.

The equivalent XML element is embeddable. It understands the
following attributes:

Note

OpenJPA allows a persistent class to be both an entity and an embeddable class.
Instances of the class will act as entities when persisted explicitly or assigned
to non-embedded fields of entities. Instances will act as embedded values when
assigned to embedded fields of entities.

To signal that a class is both an entity and an embeddable class in OpenJPA,
simply add both the @Entity and the @Embeddable
annotations to the class.

1.5.
EntityListeners

An entity may list its lifecycle event listeners in the
EntityListeners annotation. The value of this annotation is an
array of the listener Classes for the entity. The
equivalent XML element is entity-listeners. For more details
on entity listeners, see Section 3, “
Lifecycle Callbacks
”.

1.6.
Example

Here are the class declarations for our persistent object model, annotated with
the appropriate persistence metadata. Note that Magazine
declares an identity class, and that Document and
Address are a mapped superclass and an embeddable class,
respectively. LifetimeSubscription and
TrialSubscription override the default entity name to supply a
shorter alias for use in queries.

The persistence implementation must be able to retrieve and set the persistent
state of your entities, mapped superclasses, and embeddable types. JPA offers
two modes of persistent state access: field access, and
property access. Under field access, the implementation
injects state directly into your persistent fields, and retrieves changed state
from your fields as well. To declare field access on an entity with XML
metadata, set the access attribute of your entity
XML element to FIELD. To use field access for an
entity using annotation metadata, simply place your metadata and mapping
annotations on your field declarations:

@ManyToOne
private Company publisher;

Property access, on the other hand, retrieves and loads state through JavaBean
"getter" and "setter" methods. For a property p of type
T, you must define the following getter method:

T getP();

For boolean properties, this is also acceptable:

boolean isP();

You must also define the following setter method:

void setP(T value);

To use property access, set your entity element's
access attribute to PROPERTY, or place your
metadata and mapping annotations on the getter method:
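For example, the property-access counterpart of the field-access snippet above moves the annotation to the getter (a sketch using the sample model's Company type):

```java
private Company publisher;

@ManyToOne
public Company getPublisher() { return publisher; }

public void setPublisher(Company publisher) { this.publisher = publisher; }
```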

Warning

When using property access, only the getter and setter method for a property
should ever access the underlying persistent field directly. Other methods,
including internal business methods in the persistent class, should go through
the getter and setter methods when manipulating persistent state.

Also, take care when adding business logic to your getter and setter methods.
Consider that they are invoked by the persistence implementation to load and
retrieve all persistent state; other side effects might not be desirable.

Each class must use either field access or property access for all state; you
cannot use both access types within the same class. Additionally, a subclass
must use the same access type as its superclass.

The remainder of this document uses the term "persistent field" to refer to
either a persistent field or a persistent property.

2.1.
Transient

The Transient annotation specifies that a field is
non-persistent. Use it to exclude fields from management that would otherwise be
persistent. Transient is a marker annotation only; it
has no properties.

The equivalent XML element is transient. It has a single
attribute:

name: The transient field or property name. This attribute
is required.

2.3.
Generated Value

The previous section showed you how to declare your identity fields with the
Id annotation. It is often convenient to allow the
persistence implementation to assign a unique value to your identity fields
automatically. JPA includes the GeneratedValue
annotation for this purpose. It has the following properties:

GenerationType strategy: Enum value specifying how to
auto-generate the field value. The GenerationType enum
has the following values:

GenerationType.AUTO: The default. Assign the field a
generated value, leaving the details to the JPA vendor.

GenerationType.IDENTITY: The database will assign an
identity value on insert.

GenerationType.SEQUENCE: Use a datastore sequence to
generate a field value.

GenerationType.TABLE: Use a sequence table to generate a
field value.

String generator: The name of a generator defined in mapping
metadata. We show you how to define named generators in
Section 5, “
Generators
”. If the
GenerationType is set but this property is unset, the JPA
implementation uses appropriate defaults for the selected generation type.

The equivalent XML element is generated-value, which
includes the following attributes:

strategy: One of TABLE,
SEQUENCE, IDENTITY, or AUTO,
defaulting to AUTO.

If the entities are intentionally mapped to the same table name but with
different schema names within one PersistenceUnit, and the
GenerationType.AUTO strategy is used to generate the ID
for each entity, a schema name for each entity must be explicitly declared,
either through the annotation or the mapping.xml file. Otherwise, the mapping
tool only creates tables under each schema for those entities with declared
schema names. In addition, there will be only one
OPENJPA_SEQUENCE_TABLE created for all the entities within
the PersistenceUnit if the entities are not identified
with a schema name. Read Section 6, “
Generators
” and
Section 11, “
Default Schema
” in the Reference Guide.
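Putting the annotation to use might look like the following sketch (the field name is hypothetical):

```java
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private long id;
```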

2.4.
Embedded Id

If your entity has multiple identity values, you may declare multiple
@Id fields, or you may declare a single @EmbeddedId
field. The type of a field annotated with EmbeddedId must
be an embeddable class. The fields of this embeddable class are
considered the identity values of the owning entity. We explore entity identity
and identity fields in Section 1.3, “
Identity Fields
”.

The EmbeddedId annotation has no properties.

The equivalent XML element is embedded-id. It has one
required attribute:

name: The name of the identity field or property.

2.5.
Version

Use the Version annotation to designate a version field.
Section 1.4, “
Version Field
” explained the importance of
version fields to JPA. This is a marker annotation; it has no properties.

The equivalent XML element is version, which has a single
attribute:

name: The name of the version field or property. This
attribute is required.

2.6.1.
Fetch Type

Many metadata annotations in JPA have a fetch property. This
property can take on one of two values: FetchType.EAGER or
FetchType.LAZY. FetchType.EAGER means that
the field is loaded by the JPA implementation before it returns the persistent
object to you. Whenever you retrieve an entity from a query or from the
EntityManager, you are guaranteed that all of its eager
fields are populated with datastore data.

FetchType.LAZY is a hint to the JPA runtime that you want to
defer loading of the field until you access it. This is called lazy
loading. Lazy loading is completely transparent; when you attempt to
read the field for the first time, the JPA runtime will load the value from the
datastore and populate the field automatically. Lazy loading is only a hint and
not a directive because some JPA implementations cannot lazy-load certain field
types.
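The mechanics of lazy loading are internal to the JPA runtime (OpenJPA, for instance, uses bytecode enhancement), but the idea can be illustrated with a plain-Java sketch in which all names are hypothetical: the value is fetched only on first read.

```java
import java.util.function.Supplier;

public class LazyDemo {
    // Minimal stand-in for a runtime deferring a datastore read until the
    // field is first accessed. Purely illustrative, not OpenJPA's mechanism.
    public static class LazyRef<T> {
        private final Supplier<T> loader;
        private T value;
        private boolean loaded;

        public LazyRef(Supplier<T> loader) {
            this.loader = loader;
        }

        public T get() {
            if (!loaded) {            // first read triggers the "datastore" fetch
                value = loader.get();
                loaded = true;
            }
            return value;
        }

        public boolean isLoaded() {
            return loaded;
        }
    }

    public static void main(String[] args) {
        LazyRef<String> publisher = new LazyRef<>(() -> "Random House");
        System.out.println(publisher.isLoaded()); // nothing fetched yet
        System.out.println(publisher.get());      // fetch happens here
        System.out.println(publisher.isLoaded()); // now populated
    }
}
```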

With a mix of eager and lazily-loaded fields, you can ensure that commonly-used
fields load efficiently, and that other state loads transparently when accessed.
As you will see in Section 3, “
Persistence Context
”,
you can also use eager fetching to ensure that entities have all needed data
loaded before they become detached at the end of a
persistence context.

Note

OpenJPA can lazy-load any field type. OpenJPA also allows you to dynamically
change which fields are eagerly or lazily loaded at runtime. See
Section 7, “
Fetch Groups
” in the Reference Guide for details.

2.7.
Embedded

Use the Embedded marker annotation on embeddable field
types. Embedded fields are mapped as part of the datastore record of the
declaring entity. In our sample model, Author and
Company each embed their Address,
rather than forming a relation to an Address as a
separate entity.

The equivalent XML element is embedded, which expects a
single attribute:

name: The name of the field or property. This attribute is
required.

2.8.
Many To One

When an entity A references a single entity
B, and other As might also reference the same
B, we say there is a many to one
relation from A to B. In our sample
model, for example, each magazine has a reference to its publisher. Multiple
magazines might have the same publisher. We say, then, that the
Magazine.publisher field is a many to one relation from magazines to
publishers.

JPA indicates many to one relations between entities with the
ManyToOne annotation. This annotation has the following properties:

boolean optional: Whether the related object must exist. If
false, this field cannot be null. Defaults to
true.

The equivalent XML element is many-to-one. It accepts the
following attributes:

name: The name of the field or property. This attribute is
required.

target-entity: The class of the related type.

fetch: One of EAGER or
LAZY.

optional: Boolean indicating whether the field value may be
null.

2.8.1.
Cascade Type

We introduce the JPA EntityManager in
Chapter 8,
EntityManager
. The EntityManager
has APIs to persist new entities, remove (delete) existing
entities, refresh entity state from the datastore, and merge detached
entity state back into the persistence context. We explore all of
these APIs in detail later in the overview.

When the EntityManager is performing the above
operations, you can instruct it to automatically cascade the operation to the
entities held in a persistent field with the cascade property
of your metadata annotation. This process is recursive. The cascade
property accepts an array of CascadeType enum
values.

CascadeType.PERSIST: When persisting an entity, also persist
the entities held in this field. We suggest liberal application of this cascade
rule, because if the EntityManager finds a field that
references a new entity during flush, and the field does not use
CascadeType.PERSIST, it is an error.

CascadeType.REMOVE: When deleting an entity, also delete the
entities held in this field.

CascadeType.REFRESH: When refreshing an entity, also refresh
the entities held in this field.

CascadeType.MERGE: When merging entity state, also merge the
entities held in this field.

Note

OpenJPA offers enhancements to JPA's CascadeType.REMOVE functionality,
including additional annotations to control how and when dependent fields will
be removed. See Section 3.2.1, “
Dependent
” for more details.

CascadeType defines one additional value,
CascadeType.ALL, that acts as a shortcut for all of the values above.
The following annotations are equivalent:
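For instance, on a hypothetical relation field:

```java
@ManyToMany(cascade = { CascadeType.PERSIST, CascadeType.REMOVE,
    CascadeType.REFRESH, CascadeType.MERGE })
private Collection<Author> authors;

// is equivalent to:

@ManyToMany(cascade = CascadeType.ALL)
private Collection<Author> authors;
```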

2.9.
One To Many

When an entity A references multiple B
entities, and no two As reference the same
B, we say there is a one to many relation from
A to B.

One to many relations are the exact inverse of the many to one relations we
detailed in the preceding section. In that section, we said that the
Magazine.publisher field is a many to one relation from magazines to
publishers. Now, we see that the Company.mags field is the
inverse - a one to many relation from publishers to magazines. Each company may
publish multiple magazines, but each magazine can have only one publisher.

JPA indicates one to many relations between entities with the
OneToMany annotation. This annotation has the following properties:

Class targetEntity: The class of the related entity type.
This information is usually taken from the parameterized collection or map
element type. You must supply it explicitly, however, if your field isn't a
parameterized type.

String mappedBy: Names the many to one field in the related
entity that maps this bidirectional relation. We explain bidirectional relations
below. Leaving this property unset signals that this is a standard
unidirectional relation.

2.9.1.
Bidirectional Relations

When two fields are logical inverses of each other, they form a
bidirectional relation. Our model contains two bidirectional
relations: Magazine.publisher and Company.mags
form one bidirectional relation, and Article.authors
and Author.articles form the other. In both cases,
there is a clear link between the two fields that form the relationship. A
magazine refers to its publisher while the publisher refers to all its published
magazines. An article refers to its authors while each author refers to her
written articles.

When the two fields of a bidirectional relation share the same datastore
mapping, JPA formalizes the connection with the mappedBy
property. Marking Company.mags as mapped by Magazine.publisher means two things:

Company.mags uses the datastore mapping for
Magazine.publisher, but inverts it. In fact, it is illegal to
specify any additional mapping information when you use the mappedBy
property. All mapping information is read from the referenced field.
We explore mapping in depth in Chapter 12,
Mapping Metadata
.

Magazine.publisher is the "owner" of the relation. The field
that specifies the mapping data is always the owner. This means that changes to
the Magazine.publisher field are reflected in the datastore,
while changes to the Company.mags field alone are not.
Changes to Company.mags may still affect the JPA
implementation's cache, however. Thus, it is very important that you keep your
object model consistent by properly maintaining both sides of your bidirectional
relations at all times.

You should always take advantage of the mappedBy property
rather than mapping each field of a bidirectional relation independently.
Failing to do so may result in the JPA implementation trying to update the
database with conflicting data. Be careful to only mark one side of the relation
as mappedBy, however. One side has to actually do the
mapping!
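Maintaining both sides is ordinary Java bookkeeping. The following sketch (a plain-Java mirror of the sample model, with no JPA classes involved) shows one common pattern: the setter on the owning side also updates the inverse collection.

```java
import java.util.ArrayList;
import java.util.List;

public class BidirectionalDemo {
    public static class Company {
        public final List<Magazine> mags = new ArrayList<>();
    }

    public static class Magazine {
        private Company publisher;

        // Owning-side setter that keeps the inverse Company.mags in sync.
        public void setPublisher(Company newPublisher) {
            if (publisher != null)
                publisher.mags.remove(this);   // unlink from the old inverse
            publisher = newPublisher;
            if (newPublisher != null)
                newPublisher.mags.add(this);   // link into the new inverse
        }

        public Company getPublisher() {
            return publisher;
        }
    }

    public static void main(String[] args) {
        Company acme = new Company();
        Magazine mag = new Magazine();
        mag.setPublisher(acme);
        System.out.println(acme.mags.contains(mag)); // both sides agree
    }
}
```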

Note

You can configure OpenJPA to automatically synchronize both sides of a
bidirectional relation, or to perform various actions when it detects
inconsistent relations. See Section 5, “
Managed Inverses
” in the
Reference Guide for details.

2.10.
One To One

When an entity A references a single entity
B, and no other As can reference the same
B, we say there is a one to one relation between
A and B. In our sample model,
Magazine has a one to one relation to Article
through the Magazine.coverArticle field. No two magazines can
have the same cover article.

JPA indicates one to one relations between entities with the
OneToOne annotation. This annotation has the following properties:

Class targetEntity: The class of the related entity type.
This information is usually taken from the field type.

String mappedBy: Names the field in the related entity that
maps this bidirectional relation. We explain bidirectional relations in
Section 2.9.1, “
Bidirectional Relations
” above. Leaving this property
unset signals that this is a standard unidirectional relation.

2.11.
Many To Many

When an entity A references multiple B
entities, and other As might reference some of the same
Bs, we say there is a many to many
relation between A and B. In our sample
model, for example, each article has a reference to all the authors that
contributed to the article. Other articles might have some of the same authors.
We say, then, that Article and Author
have a many to many relation through the Article.authors
field.

JPA indicates many to many relations between entities with the
ManyToMany annotation. This annotation has the following properties:

Class targetEntity: The class of the related entity type.
This information is usually taken from the parameterized collection or map
element type. You must supply it explicitly, however, if your field isn't a
parameterized type.

String mappedBy: Names the many to many field in the related
entity that maps this bidirectional relation. We explain bidirectional relations
in Section 2.9.1, “
Bidirectional Relations
” above. Leaving this
property unset signals that this is a standard unidirectional relation.

2.12.
Order By

Datastores such as relational databases do not preserve the order of records.
Your persistent List fields might be ordered one way the
first time you retrieve an object from the datastore, and a completely different
way the next. To ensure consistent ordering of collection fields, you must use
the OrderBy annotation. The OrderBy
annotation's value is a string defining the order of the collection
elements. An empty value means to sort on the identity value(s) of the elements
in ascending order. Any other value must be of the form:

<field name>[ ASC|DESC][, ...]

Each <field name> is the name of a persistent field in
the collection's element type. You can optionally follow each field by the
keyword ASC for ascending order, or DESC
for descending order. If the direction is omitted, it defaults to ascending.
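As a sketch, ordering the sample model's magazines by a hypothetical title field might look like:

```java
@OneToMany(mappedBy = "publisher")
@OrderBy("title ASC")
private Collection<Magazine> mags;
```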

The equivalent XML element is order-by, which can be listed as
a sub-element of the one-to-many or many-to-many
elements. The text within this element is parsed as the order by
string.

2.13.
Map Key

JPA supports persistent Map fields through either a
OneToMany or ManyToMany
association. The related entities form the map values. JPA
derives the map keys by extracting a field from each entity value. The
MapKey annotation designates the field that is used as
the key. It has the following properties:

String name: The name of a field in the related entity class
to use as the map key. If no name is given, defaults to the identity field of
the related entity class.

The equivalent XML element is map-key, which can be listed as
a sub-element of the one-to-many or many-to-many
elements. The map-key element has the following
attributes:

name: The name of the field in the related entity class to
use as the map key.

2.14.
Persistent Field Defaults

In the absence of any of the annotations above, JPA defines the following
default behavior for declared fields:

Fields declared static, transient, or final
default to non-persistent.

Fields of an embeddable type default to persistent, as if annotated with
@Embedded.

All other fields default to non-persistent.

Note that according to these defaults, all relations between entities must be
annotated explicitly. Without an annotation, a relation field will default to
serialized storage if the related entity type is serializable, or will default
to being non-persistent if not.

3.
XML Schema

We present the complete XML schema below. Many of the elements relate to
object/relational mapping rather than metadata; these elements are discussed in
Chapter 12,
Mapping Metadata
.

Chapter 6.
Persistence

Note

OpenJPA also includes the
OpenJPAPersistence helper class to provide
additional utility methods.

Within a container, you will typically use injection to
access an EntityManagerFactory. Applications operating
outside of a container, however, can use the
Persistence class to obtain
EntityManagerFactory objects in a vendor-neutral fashion.

Each createEntityManagerFactory method searches the
system for an EntityManagerFactory definition with the
given name. Use null for an unnamed factory. The optional map
contains vendor-specific property settings used to further configure the
factory.

persistence.xml files define
EntityManagerFactories. The createEntityManagerFactory
methods search for persistence.xml files
within the META-INF directory of any CLASSPATH
element. For example, if your CLASSPATH contains
the conf directory, you could place an
EntityManagerFactory definition in
conf/META-INF/persistence.xml.

The root element of a persistence.xml file is
persistence, which then contains one or more
persistence-unit definitions. Each persistence unit describes the
configuration for the entity managers created by the persistence unit's entity
manager factory. The persistence unit can specify the following elements and attributes.

name: This is the name you pass to the
Persistence.createEntityManagerFactory methods described above. The
name attribute is required.

transaction-type: Whether to use managed
(JTA) or local (RESOURCE_LOCAL)
transaction management.

provider: If you are using a third-party JPA vendor, this
element names its implementation of the
PersistenceProvider bootstrapping interface.

Note

Set the provider to
org.apache.openjpa.persistence.PersistenceProviderImpl to use
OpenJPA.

jta-data-source: The JNDI name of a JDBC
DataSource that is automatically enlisted in JTA transactions. This
may be an XA DataSource.

non-jta-data-source: The JNDI name of a JDBC
DataSource that is not enlisted in JTA transactions.

mapping-file*: The resource names of XML mapping files for
entities and embeddable classes. You can also specify mapping information in an
orm.xml file in your META-INF
directory. If present, the orm.xml mapping file will be
read automatically.

jar-file*: The names of jar files containing entities and
embeddable classes. The implementation will scan the jar for annotated classes.

class*: The class names of entities and embeddable classes.

properties: This element contains nested property
elements used to specify vendor-specific settings. Each
property has a name attribute and a value attribute.
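Putting these elements together, a persistence unit definition might look like the following sketch (the unit name, class name, mapping file, and property value are hypothetical; openjpa.ConnectionURL is an OpenJPA configuration key):

```xml
<persistence>
    <persistence-unit name="mag-store" transaction-type="RESOURCE_LOCAL">
        <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
        <mapping-file>META-INF/magazine-orm.xml</mapping-file>
        <class>org.mag.Magazine</class>
        <properties>
            <property name="openjpa.ConnectionURL" value="jdbc:hsqldb:mag-db"/>
        </properties>
    </persistence-unit>
</persistence>
```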

2.
Non-EE Use

The example below demonstrates the Persistence class in
action. You will typically execute code like this on application startup, then
cache the resulting factory for future use. This bootstrapping code is only
necessary in non-EE environments; in an EE environment
EntityManagerFactories are typically injected.
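A sketch of such bootstrapping code, assuming a hypothetical persistence unit named "mag-store":

```java
// Look up the persistence unit and cache the resulting factory.
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("mag-store", System.getProperties());
```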

1.
Obtaining an EntityManagerFactory

Within a container, you will typically use injection to
access an EntityManagerFactory. There are, however,
alternative mechanisms for EntityManagerFactory
construction.

Some vendors may supply public constructors for their
EntityManagerFactory implementations, but we recommend using the
Java Connector Architecture (JCA) in a managed environment, or the
Persistence class' createEntityManagerFactory
methods in an unmanaged environment, as described in
Chapter 6,
Persistence
. These strategies allow
vendors to pool factories, cutting down on resource utilization.

JPA allows you to create and configure an
EntityManagerFactory, then store it in a Java Naming and Directory
Interface (JNDI) tree for later retrieval and use.

2.
Obtaining EntityManagers

The two createEntityManager methods above create a new
EntityManager each time they are invoked. The optional
Map is used to supply vendor-specific settings. If you
have configured your implementation for JTA transactions and a JTA transaction
is active, the returned EntityManager will be
synchronized with that transaction.

Note

OpenJPA recognizes the following string keys in the map supplied to
createEntityManager:

The last option uses reflection to configure any property of OpenJPA's
EntityManager implementation with the value supplied in
your map. The first options correspond exactly to the same-named OpenJPA
configuration keys described in Chapter 2,
Configuration
of the
Reference Guide.

3.
Persistence Context

A persistence context is a set of entities such that for any persistent identity
there is a unique entity instance. Within a persistence context, entities are
managed. The EntityManager controls
their lifecycle, and they can access datastore resources.

When a persistence context ends, previously-managed entities become
detached. A detached entity is no longer under the control of the
EntityManager, and no longer has access to datastore
resources. We discuss detachment in detail in
Section 2, “
Entity Lifecycle Management
”. For now, it is sufficient to
know that detachment has two obvious consequences:

The detached entity cannot load any additional persistent state.

The EntityManager will not return the detached entity
from find, nor will queries include the detached
entity in their results. Instead, find method
invocations and query executions that would normally incorporate the detached
entity will create a new managed entity with the same identity.

Injected EntityManagers have a transaction
persistence context,
while EntityManagers obtained through the
EntityManagerFactory have an extended
persistence context. We describe these persistence context types
below.

3.1.
Transaction Persistence Context

Under the transaction persistence context model, an EntityManager
begins a new persistence context with each transaction, and ends
the context when the transaction commits or rolls back. Within the transaction,
entities you retrieve through the EntityManager or via
Queries are managed entities. They can access datastore
resources to lazy-load additional persistent state as needed, and only one
entity may exist for any persistent identity.

When the transaction completes, all entities lose their association with the
EntityManager and become detached. Traversing a
persistent field that wasn't already loaded now has undefined results. And using
the EntityManager or a Query to
retrieve additional objects may now create new instances with the same
persistent identities as detached instances.

If you use an EntityManager with a transaction
persistence context model outside of an active transaction, each method
invocation creates a new persistence context, performs the method action, and
ends the persistence context. For example, consider using the
EntityManager.find method outside of a transaction. The
EntityManager will create a temporary persistence context, perform
the find operation, end the persistence context, and return the detached result
object to you. A second call with the same id will return a second detached
object.

When the next transaction begins, the EntityManager will
begin a new persistence context, and will again start returning managed
entities. As you'll see in Chapter 8,
EntityManager
, you can
also merge the previously-detached entities back into the new persistence
context.

Example 7.1.
Behavior of Transaction Persistence Context

The following code illustrates the behavior of entities under an
EntityManager using a transaction persistence context.
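A sketch of what such code might look like (em is an EntityManager with a transaction persistence context, magId is some magazine's identity value, and assertTrue is a JUnit-style assertion):

```java
// Outside a transaction, each invocation creates and discards its own
// persistence context, so two lookups return two distinct detached copies.
Magazine mag1 = em.find(Magazine.class, magId);
Magazine mag2 = em.find(Magazine.class, magId);
assertTrue(mag1 != mag2);

// Within a transaction, a single persistence context is in effect, so
// the same identity always yields the same managed instance.
em.getTransaction().begin();
Magazine mag3 = em.find(Magazine.class, magId);
Magazine mag4 = em.find(Magazine.class, magId);
assertTrue(mag3 == mag4);
em.getTransaction().commit();
```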

3.2.
Extended Persistence Context

An EntityManager using an extended persistence context
maintains the same persistence context for its entire lifecycle. Whether inside
a transaction or not, all entities returned from the EntityManager
are managed, and the EntityManager never
creates two entity instances to represent the same persistent identity. Entities
only become detached when you finally close the EntityManager
(or when they are serialized).

Example 7.2.
Behavior of Extended Persistence Context

The following code illustrates the behavior of entities under an
EntityManager using an extended persistence context.
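A sketch of what such code might look like (same hypothetical names as the previous example, but em now uses an extended persistence context):

```java
// With an extended context, every lookup of the same identity returns
// the same managed instance, inside or outside a transaction.
Magazine mag1 = em.find(Magazine.class, magId);
Magazine mag2 = em.find(Magazine.class, magId);
assertTrue(mag1 == mag2);

em.getTransaction().begin();
Magazine mag3 = em.find(Magazine.class, magId);
assertTrue(mag3 == mag1);
em.getTransaction().commit();

// Entities only detach when the EntityManager finally closes.
em.close();
```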

4.
Closing the EntityManagerFactory

public boolean isOpen ();
public void close ();

EntityManagerFactory instances are heavyweight objects.
Each factory might maintain a metadata cache, object state cache,
EntityManager pool, connection pool, and more. If your application
no longer needs an EntityManagerFactory, you should
close it to free these resources. When an EntityManagerFactory
closes, all EntityManagers from that
factory, and by extension all entities managed by those
EntityManagers, become invalid. Attempting to close an
EntityManagerFactory while one or more of its
EntityManagers has an active transaction may result in an
IllegalStateException.

Closing an EntityManagerFactory should not be taken
lightly. It is much better to keep a factory open for a long period of time than
to repeatedly create and close new factories. Thus, most applications will never
close the factory, or only close it when the application is exiting. Only
applications that require multiple factories with different configurations have
an obvious reason to create and close multiple EntityManagerFactory
instances. Once a factory is closed, all methods except
isOpen throw an
IllegalStateException.

Chapter 8.
EntityManager

The diagram above presents an overview of the EntityManager
interface. For a complete treatment of the
EntityManager API, see the
Javadoc documentation. Methods whose parameter signatures consist of
an ellipsis (...) are overloaded to take multiple parameter types.

Note

The EntityManager is the primary interface used by
application developers to interact with the JPA runtime. The methods
of the EntityManager can be divided into the following
functional categories:

Transaction association.

Entity lifecycle management.

Entity identity management.

Cache management.

Query factory.

Closing.

1.
Transaction Association

public EntityTransaction getTransaction ();

Every EntityManager has a one-to-one relation with an
EntityTransaction instance. In fact, many vendors use a single class to implement both the
EntityManager and EntityTransaction
interfaces. If your application requires multiple concurrent
transactions, you will use multiple EntityManagers.

You can retrieve the EntityTransaction associated with an
EntityManager through the getTransaction
method. Note that most JPA implementations can
integrate with an application server's managed transactions. If you take
advantage of this feature, you will control transactions by declarative
demarcation or through the Java Transaction API (JTA) rather than through the
EntityTransaction.

2.
Entity Lifecycle Management

EntityManagers perform several actions that affect the
lifecycle state of entity instances.

public void persist(Object entity);

Transitions new instances to managed. On the next flush or commit, the newly
persisted instances will be inserted into the datastore.

For a given entity A, the persist
method behaves as follows:

If A is a new entity, it becomes managed.

If A is an existing managed entity, it is ignored. However,
the persist operation cascades as defined below.

If A is a removed entity, it becomes managed.

If A is a detached entity, an
IllegalArgumentException is thrown.

The persist operation recurses on all relation fields of A
whose cascades include
CascadeType.PERSIST.

This action can only be used in the context of an active transaction.
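A typical invocation, sketched with hypothetical names:

```java
// mag is a new entity; em is an EntityManager.
Magazine mag = new Magazine();
em.getTransaction().begin();
em.persist(mag);              // mag becomes managed
em.getTransaction().commit(); // the INSERT happens at commit (or earlier flush)
```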

public void remove(Object entity);

Transitions managed instances to removed. The instances will be deleted from the
datastore on the next flush or commit. Accessing a removed entity has undefined
results.

For a given entity A, the remove
method behaves as follows:

If A is a new entity, it is ignored. However, the remove
operation cascades as defined below.

If A is an existing managed entity, it becomes removed.

If A is a removed entity, it is ignored.

If A is a detached entity, an
IllegalArgumentException is thrown.

The remove operation recurses on all relation fields of A
whose cascades include
CascadeType.REMOVE.

This action can only be used in the context of an active transaction.

public void refresh(Object entity);

Use the refresh action to make sure the persistent
state of an instance is synchronized with the values in the datastore.
refresh is intended for long-running optimistic
transactions in which there is a danger of seeing stale data.

For a given entity A, the refresh
method behaves as follows:

If A is a new entity, it is ignored. However, the refresh
operation cascades as defined below.

If A is an existing managed entity, its state is refreshed
from the datastore.

If A is a removed entity, it is ignored.

If A is a detached entity, an
IllegalArgumentException is thrown.

The refresh operation recurses on all relation fields of A
whose cascades include
CascadeType.REFRESH.
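In a long-running optimistic transaction, refresh might be used as follows (sketch; assumes a managed Magazine instance mag and an open EntityManager em):

```java
// discard any stale in-memory state and reload mag from the datastore
em.refresh(mag);
// mag's loaded persistent fields now reflect the current datastore values
```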

public Object merge(Object entity);

A common use case for an application running in a servlet or application server
is to "detach" objects from all server resources, modify them, and then "attach"
them again. For example, a servlet might store persistent data in a user session
while the user makes modifications based on a series of web forms. Between each
form request, the web container might decide to serialize the session, requiring
that the stored persistent state be disassociated from any other resources.
Similarly, a client/server application might transfer persistent objects to a
client via serialization, allow the client to modify their state, and then have
the client return the modified data in order to be saved. This is sometimes
referred to as the data transfer object or value
object pattern, and it allows fine-grained manipulation of data
objects without incurring the overhead of multiple remote method invocations.

JPA provides support for this pattern by automatically detaching
entities when they are serialized or when a persistence context ends (see
Section 3, “
Persistence Context
” for an exploration of
persistence contexts). The JPA merge API
re-attaches detached entities. This allows you to detach a persistent instance,
modify the detached instance offline, and merge the instance back into an
EntityManager (either the same one that detached the
instance, or a new one). The changes will then be applied to the existing
instance from the datastore.

A detached entity maintains its persistent identity, but cannot load additional
state from the datastore. Accessing any persistent field or property that was
not loaded at the time of detachment has undefined results. Also, be sure not to
alter the version or identity fields of detached instances if you plan on
merging them later.

The merge method returns a managed copy of the given
detached entity. Changes made to the persistent state of the detached entity are
applied to this managed instance. Because merging involves changing persistent
state, you can only merge within a transaction.

If you attempt to merge an instance whose representation has changed in the
datastore since detachment, the merge operation will throw an exception, or the
transaction in which you perform the merge will fail on commit, just as if a
normal optimistic conflict were detected.

Note

OpenJPA offers enhancements to JPA detachment functionality,
including additional options to control which fields are detached. See
Section 1, “
Detach and Attach
” in the Reference Guide for details.

For a given entity A, the merge
method behaves as follows:

If A is a detached entity, its state is copied into existing
managed instance A' of the same entity identity, or a new
managed copy of A is created.

If A is a new entity, a new managed entity A'
is created and the state of A is copied into
A'.

If A is an existing managed entity, it is ignored. However,
the merge operation still cascades as defined below.

If A is a removed entity, an
IllegalArgumentException is thrown.

The merge operation recurses on all relation fields of A
whose cascades include
CascadeType.MERGE.

READ: Other transactions may concurrently read the object,
but cannot concurrently update it.

WRITE: Other transactions cannot concurrently read or write
the object. When a transaction that holds WRITE locks on any entities is
committed, those entities will have their version incremented even if the
entities themselves did not change in the transaction.

Note

The following diagram illustrates the lifecycle of an entity with respect to the
APIs presented in this section.

3.
Lifecycle Examples

The examples below demonstrate how to use the lifecycle methods presented in the
previous section. The examples are appropriate for out-of-container use. Within
a container, EntityManagers are usually injected, and
transactions are usually managed. You would therefore omit the
createEntityManager and close calls, as
well as all transaction demarcation code.

Magazine.MagazineId mi = new Magazine.MagazineId();
mi.isbn = "1B78-YU9L";
mi.title = "JavaWorld";
// updates should always be made within transactions; note that
// there is no code explicitly linking the magazine or company
// with the transaction; JPA automatically tracks all changes
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
Magazine mag = em.find(Magazine.class, mi);
mag.setPrice(5.99);
Company pub = mag.getPublisher();
pub.setRevenue(1750000D);
em.getTransaction().commit();
// or we could continue using the EntityManager...
em.close();

Example 8.3.
Removing Objects

// assume we have an object id for the company whose subscriptions
// we want to delete
Object oid = ...;
// deletes should always be made within transactions
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
Company pub = (Company) em.find(Company.class, oid);
for (Subscription sub : pub.getSubscriptions())
em.remove(sub);
pub.getSubscriptions().clear();
em.getTransaction().commit();
// or we could continue using the EntityManager...
em.close();

Example 8.4.
Detaching and Merging

This example demonstrates a common client/server scenario. The client requests
objects and makes changes to them, while the server handles the object lookups
and transactions.
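The example code itself is not reproduced here; the following is a sketch of the pattern described, assuming a Magazine entity, an object id oid, and a server-side EntityManagerFactory emf:

```java
// server: look up the entity; it detaches when the persistence context
// ends (or when it is serialized to the client)
EntityManager em = emf.createEntityManager();
Magazine mag = em.find(Magazine.class, oid);
em.close();                        // mag is now detached

// client: modify the detached instance offline
mag.setPrice(5.99);

// server: merge the modified instance back in a new persistence context
em = emf.createEntityManager();
em.getTransaction().begin();
Magazine managed = em.merge(mag);  // returns a managed copy of mag
em.getTransaction().commit();      // changes are written to the datastore
em.close();
```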

4.
Entity Identity Management

Each EntityManager is responsible for managing the
persistent identities of the managed objects in the persistence context. The
following methods allow you to interact with the management of persistent
identities. The behavior of these methods is deeply affected by the persistence
context type of the EntityManager; see
Section 3, “
Persistence Context
” for an explanation of
persistence contexts.

public <T> T find(Class<T> cls, Object oid);

This method returns the persistent instance of the given type with the given
persistent identity. If the instance is already present in the current
persistence context, the cached version will be returned. Otherwise, a new
instance will be constructed and loaded with state from the datastore. If no
entity with the given type and identity exists in the datastore, this method
returns null.

public <T> T getReference(Class<T> cls, Object oid);

This method is similar to find, but does not
necessarily go to the database when the entity is not found in cache. The
implementation may construct a hollow entity and return it
to you instead. Hollow entities do not have any state loaded. The state only
gets loaded when you attempt to access a persistent field. At that time, the
implementation may throw an EntityNotFoundException if it
discovers that the entity does not exist in the datastore. The implementation
may also throw an EntityNotFoundException from the
getReference method itself. Unlike
find, getReference does not return null.

public boolean contains(Object entity);

Returns true if the given entity is part of the current persistence context, and
false otherwise. Removed entities are not considered part of the current
persistence context.
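The identity methods above might be used as follows (sketch; assumes an open EntityManager em and an object id oid):

```java
// find returns null when no entity with this type and identity exists
Magazine mag = em.find(Magazine.class, oid);

// getReference may return a hollow instance without hitting the database;
// an EntityNotFoundException may surface here or on first field access
Magazine ref = em.getReference(Magazine.class, oid);

boolean managed = em.contains(mag); // true while mag is in the persistence context
```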

5.
Cache Management

public void flush();

The flush method writes any changes that have been made
in the current transaction to the datastore. If the EntityManager
does not already have a connection to the datastore, it obtains one
for the flush and retains it for the duration of the transaction. Any exceptions
during flush cause the transaction to be marked for rollback. See
Chapter 9,
Transaction
.

Flushing requires an active transaction. If there isn't a transaction in
progress, the flush method throws a
TransactionRequiredException.

The EntityManager's FlushMode property
controls whether to flush transactional changes before executing queries. This
allows the query results to take into account changes you have made during the
current transaction. Available
javax.persistence.FlushModeType constants are:

COMMIT: Only flush when committing, or when told to do so
through the flush method. Query results may not take
into account changes made in the current transaction.

AUTO: The implementation is permitted to flush before
queries to ensure that the results reflect the most recent object state.
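A sketch of controlling flushing explicitly (assumes an open EntityManager em and a managed Magazine instance mag):

```java
// only flush when committing, or when flush() is called explicitly
em.setFlushMode(FlushModeType.COMMIT);

em.getTransaction().begin();
mag.setPrice(5.99);
em.flush();                    // push the pending UPDATE to the datastore now
em.getTransaction().commit();
```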

Clearing the EntityManager effectively ends the
persistence context. All entities managed by the EntityManager
become detached.

6.
Query Factory

public Query createQuery(String query);

Query objects are used to find entities matching certain
criteria. The createQuery method creates a query using
the given Java Persistence Query Language (JPQL) string. See
Chapter 10,
JPA Query
for details.

public Query createNamedQuery(String name);

This method retrieves a query defined in metadata by name. The returned
Query instance is initialized with the information
declared in metadata. For more information on named queries, read
Section 1.10, “
Named Queries
”.
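A sketch of both factory methods (the named query findByTitle is hypothetical; it would be declared in metadata with a @NamedQuery annotation or XML equivalent):

```java
// dynamic JPQL query built from a string
Query q = em.createQuery("SELECT x FROM Magazine x WHERE x.title = 'JDJ'");
List results = q.getResultList();

// query declared in metadata under the name "findByTitle" (hypothetical)
Query named = em.createNamedQuery("findByTitle");
```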

7.
Closing

public boolean isOpen();
public void close();

When an EntityManager is no longer needed, you should
call its close method. Closing an
EntityManager releases any resources it is using. The persistence
context ends, and the entities managed by the EntityManager
become detached. Any Query instances the
EntityManager created become invalid. Calling any method
other than isOpen on a closed EntityManager
results in an IllegalStateException. You
cannot close an EntityManager that is in the middle of a
transaction.

If you are in a managed environment using injected entity managers, you should
not close them.

Chapter 9.
Transaction

Transactions are critical to maintaining data integrity. They are used to group
operations into units of work that act in an all-or-nothing fashion.
Transactions have the following qualities:

Atomicity. Atomicity refers to the all-or-nothing property
of transactions. Either every data update in the transaction completes
successfully, or they all fail, leaving the datastore in its original state. A
transaction cannot be only partially successful.

Consistency. Each transaction takes the datastore from one
consistent state to another consistent state.

Isolation. Transactions are isolated from each other. When
you are reading persistent data in one transaction, you cannot "see" the changes
that are being made to that data in other transactions. Similarly, the updates
you make in one transaction cannot conflict with updates made in concurrent
transactions. The form of conflict resolution employed depends on whether you
are using pessimistic or optimistic transactions. Both types are described later
in this chapter.

Durability. The effects of successful transactions are
durable; the updates made to persistent data last for the lifetime of the
datastore.

Together, these qualities are called the ACID properties of transactions. To
understand why these properties are so important to maintaining data integrity,
consider the following example:

Suppose you create an application to manage bank accounts. The application
includes a method to transfer funds from one user to another, and it looks
something like this:
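The method body does not appear in this text; the following is a plain-Java sketch of the two-line logic the discussion assumes (the Account class and its fields are hypothetical):

```java
public class Account {
    private double balance;

    public Account(double balance) { this.balance = balance; }
    public double getBalance() { return balance; }
    public void setBalance(double balance) { this.balance = balance; }

    // debit one account and credit the other; without a transaction, a
    // failure between these two lines loses the money in transit
    public static void transferFunds(Account from, Account to, double amnt) {
        from.setBalance(from.getBalance() - amnt);
        to.setBalance(to.getBalance() + amnt);
    }
}
```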

Now suppose that user Alice wants to transfer 100 dollars to user Bob. No
problem; you simply invoke your transferFunds method,
supplying Alice in the from parameter, Bob in the
to parameter, and 100.00 as the amnt
parameter. The first line of the method is executed, and 100 dollars is
subtracted from Alice's account. But then, something goes wrong. An unexpected
exception occurs, or the hardware fails, and your method never completes.

You are left with a situation in which the 100 dollars has simply disappeared.
Thanks to the first line of your method, it is no longer in Alice's account, and
yet it was never transferred to Bob's account either. The datastore is in an
inconsistent state.

The importance of transactions should now be clear. If the two lines of the
transferFunds method had been placed together in a
transaction, it would be impossible for only the first line to succeed. Either
the funds would be transferred properly or they would not be transferred at all,
and an exception would be thrown. Money could never vanish into thin air, and
the data store could never get into an inconsistent state.

1.
Transaction Types

There are two major types of transactions: pessimistic transactions and
optimistic transactions. Each type has both advantages and disadvantages.

Pessimistic transactions generally lock the datastore records they act on,
preventing other concurrent transactions from using the same data. This avoids
conflicts between transactions, but consumes database resources. Additionally,
locking records can result in deadlock, a situation in
which two transactions are both waiting for the other to release its locks
before completing. The results of a deadlock are datastore-dependent; usually
one transaction is forcefully rolled back after some specified timeout interval,
and an exception is thrown.

This document will often use the term datastore transaction
in place of pessimistic transaction. This is to acknowledge
that some datastores do not support pessimistic semantics, and that the exact
meaning of a non-optimistic JPA transaction is dependent on the datastore. Most
of the time, a datastore transaction is equivalent to a pessimistic transaction.

Optimistic transactions consume fewer resources than pessimistic/datastore
transactions, but at the expense of some reliability. Because optimistic
transactions do not lock datastore records, two transactions might change the
same persistent information at the same time, and the conflict will not be
detected until the second transaction attempts to flush or commit. At this time,
the second transaction will realize that another transaction has concurrently
modified the same records (usually through a timestamp or versioning system),
and will throw an appropriate exception. Note that optimistic transactions still
maintain data integrity; they are simply more likely to fail in heavily
concurrent situations.

Despite their drawbacks, optimistic transactions are the best choice for most
applications. They offer better performance, better scalability, and lower risk
of hanging due to deadlock.

2.
The EntityTransaction Interface

JPA integrates with your container's managed transactions,
allowing you to use the container's declarative transaction demarcation and its
Java Transaction API (JTA) implementation for transaction management. Outside of
a container, though, you must demarcate transactions manually through JPA. The
EntityTransaction interface controls unmanaged
transactions in JPA.

public void begin();
public void commit();
public void rollback();

The begin, commit, and
rollback methods demarcate transaction boundaries. The
methods should be self-explanatory: begin starts a
transaction, commit attempts to commit the
transaction's changes to the datastore, and rollback
aborts the transaction, in which case the datastore is "rolled back" to its
previous state. JPA implementations will automatically roll back transactions if
any exception is thrown during the commit process.

Unless you are using an extended persistence context, committing or rolling back
also ends the persistence context. All managed entities will be detached from the
EntityManager.

public boolean isActive();

Finally, the isActive method returns true
if the transaction is in progress (begin
has been called more recently than commit or
rollback), and false otherwise.
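A sketch of unmanaged transaction demarcation (assumes an open EntityManager em and a managed Magazine instance mag):

```java
EntityTransaction tx = em.getTransaction();
tx.begin();
try {
    mag.setPrice(5.99);
    tx.commit();           // attempt to write changes to the datastore
} finally {
    if (tx.isActive())
        tx.rollback();     // abort if commit was never reached
}
```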

The javax.persistence.Query interface is the mechanism
for issuing queries in JPA. The primary query language used is the Java
Persistence Query Language, or JPQL. JPQL is syntactically
very similar to SQL, but is object-oriented rather than table-oriented.
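The introductory query referred to below does not appear in this text; based on the identifier discussion that follows, it is presumably the simplest form:

```
SELECT x FROM Magazine x
```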

A JPQL query has an internal namespace declared in the from
clause of the query. Arbitrary identifiers are assigned to entities so that they
can be referenced elsewhere in the query. In the query example above, the
identifier x is assigned to the entity Magazine.

Note

The as keyword can optionally be used when declaring
identifiers in the from clause. SELECT x FROM
Magazine x and SELECT x FROM Magazine AS x are
synonymous.

Following the select clause of the query is the object or
objects that the query returns. In the case of the query above, the query's
result list will contain instances of the Magazine class.

Note

When selecting entities, you can optionally use the keyword object.
The clauses select x and SELECT
OBJECT(x) are synonymous.

The optional where clause places criteria on matching
results. For example:

SELECT x FROM Magazine x WHERE x.title = 'JDJ'

Keywords in JPQL expressions are case-insensitive, but entity, identifier, and
member names are not. For example, the expression above could also be expressed
as:

select x from Magazine x where x.title = 'JDJ'

But it could not be expressed as:

SELECT x FROM Magazine x WHERE x.TITLE = 'JDJ'

As with the select clause, alias names in the where
clause are resolved to the entity declared in the from
clause. The query above could be described in English as "for all
Magazine instances x, return a list
of every x such that x's title
field is equal to 'JDJ'".
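The expression the next sentence refers to has been elided; a sketch consistent with the stated results (matching prices of 4.00, 5.00, and 6.00 but not 1.00, 2.00, or 3.00) would be a compound price comparison such as:

```
SELECT x FROM Magazine x WHERE x.price > 3.00 AND x.price < 7.00
```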

This expression would match magazines whose price is 4.00, 5.00 or 6.00, but not
1.00, 2.00 or 3.00.

JPQL also includes the following conditionals:

[NOT] BETWEEN: Shorthand for expressing that a value falls
between two other values. The following two statements are synonymous:

SELECT x FROM Magazine x WHERE x.price >= 3.00 AND x.price <= 5.00

SELECT x FROM Magazine x WHERE x.price BETWEEN 3.00 AND 5.00

[NOT] LIKE: Performs a string comparison with wildcard
support. The special character '_' in the parameter means to match any single
character, and the special character '%' means to match any sequence of
characters. The following statement matches title fields "JDJ" and "JavaPro",
but not "IT Insider":

SELECT x FROM Magazine x WHERE x.title LIKE 'J%'

The following statement matches the title field "JDJ" but not "JavaPro":

SELECT x FROM Magazine x WHERE x.title LIKE 'J__'

[NOT] IN: Specifies that the member must be equal to one
element of the provided list. The following two statements are synonymous:
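The two statements themselves do not appear in this text; in the scalar form, IN is shorthand for a chain of OR comparisons (the titles here are illustrative):

```
SELECT x FROM Magazine x WHERE x.title = 'JDJ' OR x.title = 'JavaPro'

SELECT x FROM Magazine x WHERE x.title IN ('JDJ', 'JavaPro')
```

IN() can also declare an identification variable over a collection-valued relationship in the from clause, which is the query the following paragraph analyzes:

```
SELECT x FROM Magazine x, IN(x.articles) y WHERE y.authorName = 'John Doe'
```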

This query says that for each Magazine x,
traverse the articles relation and check each
Article y, and pass the filter if
y's authorName field is equal to "John
Doe". In short, this query will return all magazines that have any articles
written by John Doe.

Note

The IN() syntax can also be expressed with the keywords
inner join. The statements SELECT x FROM Magazine
x, IN(x.articles) y WHERE y.authorName = 'John Doe' and
SELECT x FROM Magazine x inner join x.articles y WHERE y.authorName = 'John Doe'
are synonymous.

1.3.
Fetch Joins

JPQL queries may specify one or more join fetch declarations,
which allow the query to specify which fields in the returned instances will be
pre-fetched.

SELECT x FROM Magazine x join fetch x.articles WHERE x.title = 'JDJ'

The query above returns Magazine instances and guarantees
that the articles field will already be fetched in the
returned instances.

Note

Specifying the join fetch declaration is
functionally equivalent to adding the fields to the Query's
FetchConfiguration. See Section 7, “
Fetch Groups
”.

1.4.
JPQL Functions

As well as supporting direct field and relation comparisons, JPQL supports a
pre-defined set of functions that you can apply.

CONCAT(string1, string2): Concatenates two string fields or
literals. For example:

SELECT x FROM Magazine x WHERE CONCAT(x.title, 's') = 'JDJs'

SUBSTRING(string, startIndex, length): Returns the part of
the string argument starting at startIndex
(1-based) and continuing for length characters.

SELECT x FROM Magazine x WHERE SUBSTRING(x.title, 1, 1) = 'J'

TRIM([LEADING | TRAILING | BOTH] [character FROM] string):
Trims the specified character from the beginning (LEADING),
the end (TRAILING), or both ends (BOTH)
of the string argument. If no trim character is specified, the
space character will be trimmed.

SELECT x FROM Magazine x WHERE TRIM(BOTH 'J' FROM x.title) = 'D'

LOWER(string): Returns the lower-case of the specified
string argument.

SELECT x FROM Magazine x WHERE LOWER(x.title) = 'jdj'

UPPER(string): Returns the upper-case of the specified
string argument.

SELECT x FROM Magazine x WHERE UPPER(x.title) = 'JAVAPRO'

LENGTH(string): Returns the number of characters in the
specified string argument.

SELECT x FROM Magazine x WHERE LENGTH(x.title) = 3

LOCATE(searchString, candidateString [, startIndex]):
Returns the first index of searchString in
candidateString. Positions are 1-based. If the string is not found,
returns 0.

SELECT x FROM Magazine x WHERE LOCATE('D', x.title) = 2

ABS(number): Returns the absolute value of the argument.

SELECT x FROM Magazine x WHERE ABS(x.price) >= 5.00

SQRT(number): Returns the square root of the argument.

SELECT x FROM Magazine x WHERE SQRT(x.price) >= 1.00

MOD(number, divisor): Returns the modulo of number
and divisor.

SELECT x FROM Magazine x WHERE MOD(x.price, 10) = 0

CURRENT_DATE: Returns the current date.

CURRENT_TIME: Returns the current time.

CURRENT_TIMESTAMP: Returns the current timestamp.

1.5.
Polymorphic Queries

All JPQL queries are polymorphic, which means the from clause
of a query includes not only instances of the specific entity class to which it
refers, but all subclasses of that class as well. The instances returned by a
query include instances of the subclasses that satisfy the query conditions. For
example, the following query may return instances of Magazine
, as well as Tabloid and Digest
instances, where Tabloid and
Digest are Magazine subclasses.

SELECT x FROM Magazine x WHERE x.price < 5

1.6.
Query Parameters

JPQL provides support for parameterized queries. Either named parameters or
positional parameters may be specified in the query string. Parameters allow you
to re-use query templates where only the input parameters vary. A single query
can declare either named parameters or positional parameters, but is not allowed
to declare both named and positional parameters.

public Query setParameter (int pos, Object value);

Specify positional parameters in your JPQL string using an integer prefixed by a
question mark. You can then populate the Query object
with positional parameter values via calls to the setParameter
method above. The method returns the Query
instance for optional method chaining.
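The code the next paragraph describes does not appear in this text; a sketch consistent with it (assumes an open EntityManager em):

```java
Query q = em.createQuery("SELECT x FROM Magazine x "
    + "WHERE x.title = ?1 AND x.price > ?2");
q.setParameter(1, "JDJ").setParameter(2, 5.0);
List results = q.getResultList();
```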

This code will substitute JDJ for the ?1
parameter and 5.0 for the ?2 parameter,
then execute the query with those values.

public Query setParameter(String name, Object value);

Named parameters are denoted by prefixing an arbitrary name with a colon in your
JPQL string. You can then populate the Query object with
parameter values using the method above. Like the positional parameter method,
this method returns the Query instance for optional
method chaining.
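A sketch of the named-parameter equivalent (parameter names are arbitrary; assumes an open EntityManager em):

```java
Query q = em.createQuery("SELECT x FROM Magazine x "
    + "WHERE x.title = :titleParam AND x.price > :priceParam");
q.setParameter("titleParam", "JDJ").setParameter("priceParam", 5.0);
List results = q.getResultList();
```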

1.7.
Query Hints

JPQL provides support for hints, which are name/value pairs used to control locking and optimization keywords in the generated SQL.
The following example shows how to use the JPA hint API to set the ReadLockMode and ResultCount in the OpenJPA fetch plan. This causes the SQL keywords OPTIMIZE FOR 2 ROWS and UPDATE to be emitted into the SQL, provided that a pessimistic LockManager is being used.
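The example itself does not appear in this text; a sketch using the hint names documented in the sections below (assumes an open EntityManager em and a pessimistic LockManager in the persistence unit):

```java
Query q = em.createQuery("SELECT m FROM Magazine m WHERE m.title = 'JDJ'");
// lock matching rows for serialization
q.setHint("openjpa.FetchPlan.ReadLockMode", "WRITE");
// ask the database to optimize for a two-row result set
q.setHint("openjpa.hint.OptimizeResultCount", 2);
List results = q.getResultList();
```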

Hints that are unrecognized or that cannot be processed by a particular database are ignored. Otherwise, an invalid hint results in an ArgumentException being thrown.

1.7.1.
Locking Hints

To avoid deadlock and optimistic update exceptions among multiple updaters, use a pessimistic LockManager, specified in the persistence unit definition, and use a hint name of "openjpa.FetchPlan.ReadLockMode" on queries for entities that must be locked for serialization. The value of ReadLockMode can be either "READ" or "WRITE". This results in FOR UPDATE or USE AND KEEP UPDATE LOCKS clauses in the generated SQL.

Using a ReadLockMode hint with JPA optimistic locking (i.e. specifying LockManager = "version") will result in the entity version field either being reread at the end of the transaction (for a value of "READ") or updated at the end of the transaction (for a value of "WRITE"). You must define a version field in the entity mapping when using a version LockManager with ReadLockMode.

Table 10.1.
Interaction of ReadLockMode hint and LockManager

ReadLockMode    LockManager=pessimistic   LockManager=version

READ            SQL with UPDATE           SQL without UPDATE; reread version field at the end of transaction and check for no change

WRITE           SQL with UPDATE           SQL without UPDATE; force update of version field at the end of transaction

not specified   SQL without UPDATE        SQL without UPDATE

1.7.2.
Result Set Size Hint

To specify a result set size hint to those databases that support it, specify a hint name of "openjpa.hint.OptimizeResultCount" with an integer value greater than zero. This causes the SQL keyword OPTIMIZE FOR to be generated.

1.7.3.
Isolation Level Hint

To specify an isolation level, specify a hint name of "openjpa.FetchPlan.Isolation". The value will be used to set the isolation level using the SQL WITH <isolation> clause for those databases that support it. This hint only works in conjunction with the ReadLockMode hint.

1.7.4.
Other Fetchplan Hints

Any property of an OpenJPA FetchPlan can be changed using a hint with a name of the form "openjpa.FetchPlan.<property name>". Valid property names include:
MaxFetchDepth, FetchBatchSize, LockTimeOut, EagerFetchMode, SubclassFetchMode, and Isolation.

1.8.
Ordering

JPQL queries may optionally contain an order by clause which
specifies one or more fields to order by when returning query results. You may
follow the order by field clause with the asc
or desc keywords, which indicate that ordering
should be ascending or descending, respectively. If the direction is omitted,
ordering is ascending by default.

SELECT x FROM Magazine x order by x.title asc, x.price desc

The query above returns Magazine instances sorted by
their title in ascending order. In cases where the titles of two or more
magazines are the same, those instances will be sorted by price in descending
order.

1.9.
Aggregates

JPQL queries can select aggregate data as well as objects. JPQL includes the
min, max, avg, and
count aggregates. These functions can be used for reporting
and summary queries.

The following query will return the average of all the prices of all the
magazines:
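The query itself has been elided; the standard JPQL form for this average is:

```
SELECT AVG(x.price) FROM Magazine x
```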

1.11.
Delete By Query

Queries are useful not only for finding objects, but for efficiently deleting
them as well. For example, you might delete all records created before a certain
date. Rather than bring these objects into memory and delete them individually,
JPA allows you to perform a single bulk delete based on JPQL criteria.

Delete by query uses the same JPQL syntax as normal queries, with one exception:
begin your query string with the delete keyword instead of
the select keyword. To then execute the delete, you call the
following Query method:

public int executeUpdate();

This method returns the number of objects deleted. The following example deletes
all subscriptions whose expiration date has passed.
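The example code does not appear in this text; a sketch of the described bulk delete (the expirationDate field name is assumed from context, and em is an open EntityManager):

```java
Query q = em.createQuery("DELETE FROM Subscription s "
    + "WHERE s.expirationDate < :today");
q.setParameter("today", new java.util.Date());
int deleted = q.executeUpdate(); // number of Subscription objects deleted
```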

1.12.
Update By Query

Similar to bulk deletes, it is sometimes necessary to perform updates against a
large number of entities in a single operation, without having to bring all the
instances down to the client. Rather than bringing these objects into memory and
modifying them individually, JPA allows you to perform a single bulk update
based on JPQL criteria.

Update by query uses the same JPQL syntax as normal queries, except that the
query string begins with the update keyword instead of
select. To execute the update, you call the following
Query method:

public int executeUpdate();

This method returns the number of objects updated. The following example updates
all subscriptions whose expiration date has passed to have the paid field set
to true.
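The example code does not appear in this text; a sketch of the described bulk update (the paid and expirationDate field names are assumed from context, and em is an open EntityManager):

```java
Query q = em.createQuery("UPDATE Subscription s SET s.paid = true "
    + "WHERE s.expirationDate < :today");
q.setParameter("today", new java.util.Date());
int updated = q.executeUpdate(); // number of Subscription objects updated
```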

The Java Persistence Query Language (JPQL) is used to define searches against
persistent entities independent of the mechanism used to store those entities.
As such, JPQL is "portable", and not constrained to any particular data store.
The Java Persistence query language is an extension of the Enterprise JavaBeans
query language, EJB QL, adding operations such as bulk
deletes and updates, join operations, aggregates, projections, and subqueries.
Furthermore, JPQL queries can be declared statically in metadata, or can be
dynamically built in code. This chapter provides the full definition of the
language.

Note

Much of this section is paraphrased or taken directly from Chapter 4 of the
JSR 220 specification.

2.1.
JPQL Statement Types

A JPQL statement may be either a SELECT statement, an
UPDATE statement, or a DELETE statement.
This chapter refers to all such statements as "queries". Where it is important
to distinguish among statement types, the specific statement type is referenced.
In BNF syntax, a query language statement is defined as:
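The BNF itself has been elided; per the JPA specification it reads:

```
QL_statement ::= select_statement | update_statement | delete_statement

select_statement ::= select_clause from_clause [where_clause]
    [groupby_clause] [having_clause] [orderby_clause]
```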

A select statement must always have a SELECT and a
FROM clause. The square brackets [] indicate that the other
clauses are optional.

2.1.2.
JPQL Update and Delete Statements

Update and delete statements provide bulk operations over sets of entities. In
BNF syntax, these operations are defined as:

update_statement ::= update_clause [where_clause]

delete_statement ::= delete_clause [where_clause]

The update and delete clauses determine the type of the entities to be updated
or deleted. The WHERE clause may be used to restrict the
scope of the update or delete operation. Update and delete statements are
described further in Section 2.9, “
JPQL Bulk Update and Delete
”.

2.2.
JPQL Abstract Schema Types and Query Domains

The Java Persistence query language is a typed language, and every expression
has a type. The type of an expression is derived from the structure of the
expression, the abstract schema types of the identification variable
declarations, the types to which the persistent fields and relationships
evaluate, and the types of literals. The abstract schema type of an entity is
derived from the entity class and the metadata information provided by Java
language annotations or in the XML descriptor.

Informally, the abstract schema type of an entity can be characterized as
follows:

For every persistent field or get
accessor method (for a persistent property) of the entity class, there is a
field ("state-field") whose abstract schema type corresponds to that of the
field or the result type of the accessor method.

For every persistent relationship field or get accessor method (for a persistent
relationship property) of the entity class, there is a field
("association-field") whose type is the abstract schema type of the related
entity (or, if the relationship is a one-to-many or many-to-many, a collection
of such). Abstract schema types are specific to the query language data model.
The persistence provider is not required to implement or otherwise materialize
an abstract schema type. The domain of a query consists of the abstract schema
types of all entities that are defined in the same persistence unit. The domain
of a query may be restricted by the navigability of the relationships of the
entity on which it is based. The association-fields of an entity's abstract
schema type determine navigability. Using the association-fields and their
values, a query can select related entities and use their abstract schema types
in the query.

2.2.1.
JPQL Entity Naming

Entities are designated in query strings by their entity names. The entity name
is defined by the name element of the Entity annotation (or the entity-name XML
descriptor element), and defaults to the unqualified name of the entity class.
Entity names are scoped within the persistence unit and must be unique within
the persistence unit.

2.2.2.
JPQL Schema Example

This example assumes that the application developer provides several entity
classes, representing magazines, publishers, authors, and articles. The abstract
schema types for these entities are Magazine,
Publisher, Author, and Article.

Several Entities with Abstract Persistence Schemas Defined in the Same
Persistence Unit. The entity Publisher has a one-to-many
relationship with Magazine. There is also a one-to-many
relationship between Magazine and Article
. The entity Article is related to Author
in a one-to-one relationship.

Queries to select magazines can be defined by navigating over the
association-fields and state-fields defined by Magazine and Article. A query to
find all magazines that have unpublished articles is as follows:
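
Such a query might be written as follows (a sketch reconstructed from the
surrounding description; the articles association-field and the
published state-field come from the example schema):

SELECT DISTINCT mag FROM Magazine AS mag JOIN mag.articles AS art WHERE art.published = FALSE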

This query navigates over the association-field articles of the
abstract schema type Magazine to find articles, and uses the
state-field published of Article to select
those magazines that have at least one article that has not yet been published.
Although
predefined reserved identifiers, such as DISTINCT,
FROM, AS, JOIN,
WHERE, and FALSE appear in upper case in this
example, predefined reserved identifiers are case insensitive. The
SELECT clause of this example designates the return type of this
query to be of type Magazine. Because the same persistence unit defines the
abstract persistence schemas of the related entities, the developer can also
specify a query over magazines that utilizes the abstract
schema type for authors, and hence the state-fields and association-fields of
both the abstract schema types Magazine and Author. For example, if the
abstract schema type Author has a state-field named firstName, a query over
articles can be specified using this state-field. Such a query might be to
find all magazines that have articles authored by someone with the first name
"John".
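
One possible form of this query (a sketch; the author
association-field name is assumed from the schema description):

SELECT DISTINCT mag FROM Magazine mag JOIN mag.articles art JOIN art.author auth WHERE auth.firstName = 'John'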

Because Magazine is related to Author by means of the
relationships between Magazine and Article and between Article and Author,
navigation using the association-fields articles and author is used to express
the query. This query is specified by using the abstract schema name Magazine,
which designates the abstract schema type over which the query ranges. The basis
for the navigation is provided by the association-fields articles and author of
the abstract schema types Magazine and Article respectively.

2.3.
JPQL FROM Clause and Navigational Declarations

The FROM clause of a query defines the domain of the query by
declaring identification variables. An identification variable is an identifier
declared in the FROM clause of a query. The domain of the
query may be constrained by path expressions. Identification variables designate
instances of a particular entity abstract schema type. The FROM
clause can contain multiple identification variable declarations
separated by a comma (,).

2.3.1.
JPQL FROM Identifiers

An identifier is a character sequence of unlimited length. The character
sequence must begin with a Java identifier start character, and all other
characters must be Java identifier part characters. An identifier start
character is any character for which the method
Character.isJavaIdentifierStart returns true.
This includes the underscore (_) character and the dollar sign ($) character. An
identifier part character is any character for which the method
Character.isJavaIdentifierPart returns true.
The question mark (?) character is reserved for use by the Java Persistence
query language. The following are reserved identifiers:

SELECT

FROM

WHERE

UPDATE

DELETE

JOIN

OUTER

INNER

LEFT

GROUP

BY

HAVING

FETCH

DISTINCT

OBJECT

NULL

TRUE

FALSE

NOT

AND

OR

BETWEEN

LIKE

IN

AS

UNKNOWN

EMPTY

MEMBER

OF

IS

AVG

MAX

MIN

SUM

COUNT

ORDER

BY

ASC

DESC

MOD

UPPER

LOWER

TRIM

POSITION

CHARACTER_LENGTH

CHAR_LENGTH

BIT_LENGTH

CURRENT_TIME

CURRENT_DATE

CURRENT_TIMESTAMP

NEW

EXISTS

ALL

ANY

SOME

Reserved identifiers are case insensitive. Reserved identifiers must not be
used as identification variables. It is recommended that other SQL reserved
words also not be used as identification variables in queries because they may be
used as reserved identifiers in future releases of the specification.

2.3.2.
JPQL Identification Variables

An identification variable is a valid identifier declared in the FROM
clause of a query. All identification variables must be declared in
the FROM clause. Identification variables cannot be declared
in other clauses. An identification variable must not be a reserved identifier
or have the same name as any entity in the same persistence unit. Identification
variables are case insensitive. An identification variable evaluates to a value
of the type of the expression used in declaring the variable. For example,
consider the previous query:
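
In sketch form, that query is:

SELECT DISTINCT mag FROM Magazine mag JOIN mag.articles art JOIN art.author auth WHERE auth.firstName = 'John'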

In the FROM clause declaration
mag.articles art, the identification variable
art evaluates to any Article value
directly reachable from Magazine. The association-field
articles is a collection of instances of the abstract schema
type Article and the identification variable art
refers to an element of this collection. The type of auth
is the abstract schema type of Author. An
identification variable ranges over the abstract schema type of an entity. An
identification variable designates an instance of an entity abstract schema type
or an element of a collection of entity abstract schema type instances.
Identification variables are existentially quantified in a query. An
identification variable always designates a reference to a single value. It is
declared in one of three ways: in a range variable declaration, in a join
clause, or in a collection member declaration. The identification variable
declarations are evaluated from left to right in the FROM
clause, and an identification variable declaration can use the result of a
preceding identification variable declaration of the query string.

2.3.3.
JPQL Range Declarations

The syntax for declaring an identification variable as a range variable is
similar to that of SQL; optionally, it uses the AS keyword.
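
In the specification grammar, a range variable declaration is:

range_variable_declaration ::= abstract_schema_name [AS] identification_variable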

Range variable declarations allow the developer to designate a "root" for
objects which may not be reachable by navigation. In order to select values by
comparing more than one instance of an entity abstract schema type, more than
one identification variable ranging over the abstract schema type is needed in
the FROM clause.

The following query returns magazines whose price is greater than the price of
magazines published by "Adventure" publishers. This example illustrates the use
of two different identification variables in the FROM clause,
both of the abstract schema type Magazine. The SELECT clause
of this query determines that it is the magazines with prices greater than those
of "Adventure" publishers that are returned.
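
A sketch of such a query (the price and name state-fields are assumed from the
example schema):

SELECT DISTINCT mag1 FROM Magazine mag1, Magazine mag2 WHERE mag1.price > mag2.price AND mag2.publisher.name = 'Adventure'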

2.3.4.
JPQL Path Expressions

An identification variable followed by the navigation operator (.) and a
state-field or association-field is a path expression. The type of the path
expression is the type computed as the result of navigation; that is, the type
of the state-field or association-field to which the expression navigates.
Depending on navigability, a path expression that leads to an association-field
may be further composed. Path expressions can be composed from other path
expressions if the original path expression evaluates to a single-valued type
(not a collection) corresponding to an association-field. Path expression
navigability is composed using "inner join" semantics. That is, if the value of
a non-terminal association-field in the path expression is null, the path is
considered to have no value, and does not participate in the determination of
the result. The syntax for single-valued path expressions and collection valued
path expressions is as follows:
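
single_valued_path_expression ::= state_field_path_expression | single_valued_association_path_expression
state_field_path_expression ::= {identification_variable | single_valued_association_path_expression}.state_field
single_valued_association_path_expression ::= identification_variable.{single_valued_association_field.}*single_valued_association_field
collection_valued_path_expression ::= identification_variable.{single_valued_association_field.}*collection_valued_association_field
state_field ::= {embedded_class_state_field.}*simple_state_field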

A single_valued_association_field is designated by the name of an
association-field in a one-to-one or many-to-one relationship. The type of a
single_valued_association_field and thus a
single_valued_association_path_expression is the abstract schema type of the
related entity. A collection_valued_association_field is designated by the name
of an association-field in a one-to-many or a many-to-many relationship. The
type of a collection_valued_association_field is a collection of values of the
abstract schema type of the related entity. An embedded_class_state_field is
designated by the name of an entity state field that corresponds to an embedded
class. Navigation to a related entity results in a value of the related entity's
abstract schema type.

The evaluation of a path expression terminating in a state-field results in the
abstract schema type corresponding to the Java type designated by the
state-field. It is syntactically illegal to compose a path expression from a
path expression that evaluates to a collection. For example, if mag
designates Magazine, the path expression
mag.articles.author is illegal since navigation to articles results in
a collection. This case should produce an error when the query string is
verified. To handle such a navigation, an identification variable must be
declared in the FROM clause to range over the elements of the
articles collection. Another path expression must be used to
navigate over each such element in the WHERE clause of the
query, as in the following query which returns all authors that have any
articles in any magazines:
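
One possible form of such a query (a sketch; the author
association-field is assumed from the schema description):

SELECT DISTINCT art.author FROM Magazine mag JOIN mag.articles art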

The association referenced by the right side of the FETCH JOIN
clause must be an association that belongs to an entity that is
returned as a result of the query. It is not permitted to specify an
identification variable for the entities referenced by the right side of the
FETCH JOIN clause, and hence references to the implicitly
fetched entities cannot appear elsewhere in the query. The following query
returns a set of magazines. As a side effect, the associated articles for those
magazines are also retrieved, even though they are not part of the explicit
query result. The persistent fields or properties of the articles that are
eagerly fetched are fully initialized. The initialization of the relationship
properties of the articles that are retrieved is determined
by the metadata for the Article entity class.
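
A sketch of such a fetch join query:

SELECT mag FROM Magazine mag LEFT JOIN FETCH mag.articles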

A fetch join has the same join semantics as the corresponding inner or outer
join, except that the related objects specified on the right-hand side of the
join operation are not returned in the query result or otherwise referenced in
the query. Hence, for example, if magazine id 1 has five articles, the above
query returns five references to the magazine 1 entity.

2.3.6.
JPQL Collection Member Declarations

An identification variable declared by a collection_member_declaration ranges
over values of a collection obtained by navigation using a path expression. Such
a path expression represents a navigation involving the association-fields of an
entity abstract schema type. Because a path expression can be based on another
path expression, the navigation can use the association-fields of related
entities. An identification variable of a collection member declaration is
declared using a special operator, the reserved identifier IN
. The argument to the IN operator is a collection-valued path
expression. The path expression evaluates to a collection type specified as a
result of navigation to a collection-valued association-field of an entity
abstract schema type. The syntax for declaring a collection member
identification variable is as follows:
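
collection_member_declaration ::= IN (collection_valued_path_expression) [AS] identification_variable

For example (a sketch using the example schema):

SELECT DISTINCT mag FROM Magazine mag, IN(mag.articles) art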

In this example,
articles is the name of an association-field whose value is a
collection of instances of the abstract schema type Article.
The identification variable art designates a member of this
collection, a single Article abstract schema type instance.
In this example, mag is an identification variable of the
abstract schema type Magazine.

2.3.7.
JPQL Polymorphism

Java Persistence queries are automatically polymorphic. The FROM
clause of a query designates not only instances of the specific
entity classes to which it explicitly refers but of subclasses as well. The
instances returned by a query include instances of the subclasses that satisfy
the query criteria.

2.4.
JPQL WHERE Clause

The WHERE clause of a query consists of a conditional
expression used to select objects or values that satisfy the expression. The
WHERE clause restricts the result of a select statement or
the scope of an update or delete operation. A WHERE clause is
defined as follows:

where_clause ::= WHERE
conditional_expression

The GROUP BY construct enables the aggregation of values
according to the properties of an entity class. The HAVING
construct enables conditions to be specified that further restrict the query
result as restrictions upon the groups. The syntax of the HAVING
clause is as follows:
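
having_clause ::= HAVING conditional_expression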

2.5.
JPQL Conditional Expressions

The following sections describe the language constructs that can be used in a
conditional expression of the WHERE clause or
HAVING clause. State-fields that are mapped in serialized form or as
lobs may not be portably used in conditional expressions.

Note

The
implementation is not expected to perform query operations involving such
fields in memory rather than in the database.

2.5.1.
JPQL Literals

A string literal is enclosed in single quotes--for example: 'literal'. A string
literal that includes a single quote is represented by two single quotes--for
example: 'literal''s'. String literals in queries, like Java String literals,
use unicode character encoding. The use of Java escape notation is not supported
in query string literals. Exact numeric literals support the use of Java integer
literal syntax as well as SQL exact numeric literal syntax. Approximate literals
support the use of Java floating point literal syntax as well as SQL approximate
numeric literal syntax. Enum literals support the use of Java enum literal
syntax. The enum class name must be specified. Appropriate suffixes may be used
to indicate the specific type of a numeric literal in accordance with the Java
Language Specification. The boolean literals are TRUE and
FALSE. Although predefined reserved literals appear in upper
case, they are case insensitive.

2.5.2.
JPQL Identification Variables

All identification variables used in the WHERE or
HAVING clause of a SELECT or DELETE
statement must be declared in the FROM clause, as
described in Section 2.3.2, “
JPQL Identification Variables
”. The identification
variables used in the WHERE clause of an UPDATE
statement must be declared in the UPDATE clause.
Identification variables are existentially quantified in the WHERE
and HAVING clause. This means that an
identification variable represents a member of a collection or an instance of an
entity's abstract schema type. An identification variable never designates a
collection in its entirety.

2.5.3.
JPQL Path Expressions

It is illegal to use a collection_valued_path_expression within a
WHERE or HAVING clause as part of a conditional
expression except in an empty_collection_comparison_expression, in a
collection_member_expression, or as an argument to the SIZE
operator.

2.5.4.
JPQL Input Parameters

Either positional or named parameters may be used. Positional and named
parameters may not be mixed in a single query. Input parameters can only be used
in the WHERE clause or HAVING clause of a
query.

2.5.4.1.
JPQL Positional Parameters

The following rules apply to positional parameters.

Input parameters are designated by the question mark (?) prefix followed
by an integer. For example: ?1.

Input parameters are numbered starting from 1. Note that the same parameter can
be used more than once in the query string and that the ordering of the use of
parameters within the query string need not conform to the order of the
positional parameters.

2.5.8.
JPQL In Expressions

The syntax for the use of the comparison operator [NOT] IN in
a conditional expression is as follows:

in_expression ::= state_field_path_expression [NOT] IN (in_item {, in_item}*)

There must be at least one element in the comma separated list
that defines the set of values for the IN expression. If the
value of a state_field_path_expression in an IN or
NOT IN expression is NULL or unknown, the value of
the expression is unknown.

2.5.9.
JPQL Like Expressions

The syntax for the use of the comparison operator [ NOT ]
LIKE in a conditional expression is as follows:

string_expression [NOT] LIKE pattern_value [ESCAPE escape_character]

The string_expression must have a string value. The pattern_value is a string
literal or a string-valued input parameter in which an underscore (_) stands for
any single character, a percent (%) character stands for any sequence of
characters (including the empty sequence), and all other characters stand for
themselves. The optional escape_character is a single-character string literal
or a character-valued input parameter (i.e., char or Character) and is used to
escape the special meaning of the underscore and percent characters in
pattern_value. Examples are:

address.phone LIKE '12%3'

is true for '123' and '12993' and false for '1234'

asentence.word LIKE 'l_se'

is true for 'lose'
and false for 'loose'

aword.underscored LIKE '\_%' ESCAPE '\'

is true
for '_foo' and false for 'bar'

address.phone NOT LIKE '12%3'

is false for
'123' and '12993' and true for '1234'. If the value of the string_expression or
pattern_value is NULL or unknown, the value of the
LIKE expression is unknown. If the escape_character is specified and
is NULL, the value of the LIKE expression
is unknown.

2.5.10.
JPQL Null Comparison Expressions

The syntax for the use of the comparison operator IS NULL in
a conditional expression is as follows:

{single_valued_path_expression | input_parameter } IS [NOT] NULL

A null comparison expression tests whether or not the single-valued path
expression or input parameter is a NULL value.

2.5.11.
JPQL Empty Collection Comparison Expressions

The syntax for the use of the comparison operator IS EMPTY in
an empty_collection_comparison_expression is as follows:

collection_valued_path_expression IS [NOT] EMPTY

This expression tests whether or not the collection designated by the
collection-valued path expression is empty (i.e., has no elements).

For example, the following query will return all magazines that don't have any
articles at all:

SELECT mag FROM Magazine mag WHERE mag.articles IS EMPTY

If the value of the collection-valued path expression in an
empty collection comparison expression is unknown, the value of the empty
comparison expression is unknown.

2.5.12.
JPQL Collection Member Expressions

The syntax for the use of the comparison operator MEMBER OF
in a collection_member_expression is as follows:

entity_expression [NOT] MEMBER [OF] collection_valued_path_expression

This expression tests whether the designated value is a member of the collection
specified by the collection-valued path expression. If the collection-valued
path expression designates an empty collection, the value of the
MEMBER OF expression is FALSE and the value of the
NOT MEMBER OF expression is TRUE.
Otherwise, if the value of the collection-valued path expression or
single-valued association-field path expression in the collection member
expression is NULL or unknown, the value of the collection
member expression is unknown.

The use of the reserved word OF is optional in this expression.

2.5.13.
JPQL Exists Expressions

An EXISTS expression is a predicate that is true only if the
result of the subquery consists of one or more values and that is false
otherwise. The syntax of an exists expression is
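
exists_expression ::= [NOT] EXISTS (subquery)

The following sketch finds authors whose spouse is also an author (the
spouse association-field is assumed):

SELECT DISTINCT auth FROM Author auth WHERE EXISTS (SELECT spouseAuth FROM Author spouseAuth WHERE spouseAuth = auth.spouse)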

The result of this query consists of all authors whose spouse
is also an author.

2.5.14.
JPQL All or Any Expressions

An ALL conditional expression is a predicate that is true if
the comparison operation is true for all values in the result of the subquery or
the result of the subquery is empty. An ALL conditional
expression is false if the result of the comparison is false for at least one
row, and is unknown if neither true nor false. An ANY
conditional expression is a predicate that is true if the comparison operation
is true for some value in the result of the subquery. An ANY
conditional expression is false if the result of the subquery is empty or if the
comparison operation is false for every value in the result of the subquery, and
is unknown if neither true nor false. The keyword SOME is
synonymous with ANY. The comparison operators used with
ALL or ANY conditional expressions are =,
<, <=, >, >=, <>. The result of the subquery must be like that
of the other argument to the comparison operator in type. See
Section 2.11, “
JPQL Equality and Comparison Semantics
”. The syntax of an ALL
or ANY expression is specified as follows:

all_or_any_expression ::= { ALL | ANY | SOME}
(subquery)

The following example selects the authors who make the highest salary for their
magazine:
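
A sketch of such a query (the salary state-field and the
magazine association-field are assumed, not part of the example schema
given earlier):

SELECT auth FROM Author auth WHERE auth.salary >= ALL (SELECT a.salary FROM Author a WHERE a.magazine = auth.magazine)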

Note that some contexts in which a subquery can be used require that the
subquery be a scalar subquery (i.e., produce a single result). This is
illustrated in the following example involving a numeric comparison operation.
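
For example (a sketch using the revenue state-field of Publisher):

SELECT goodPublisher FROM Publisher goodPublisher WHERE goodPublisher.revenue < (SELECT AVG(p.revenue) FROM Publisher p)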

2.5.16.
JPQL Functional Expressions

The JPQL includes the following built-in functions, which may be used in the
WHERE or HAVING clause of a query. If the
value of any argument to a functional expression is null or unknown, the value
of the functional expression is unknown.

2.5.16.1.
JPQL String Functions

The CONCAT function returns a string that is a concatenation
of its arguments. The second and third arguments of the SUBSTRING
function denote the starting position and length of the substring to
be returned. These arguments are integers. The first position of a string is
denoted by 1. The SUBSTRING function returns a string. The
TRIM function trims the specified character from a string. If
the character to be trimmed is not specified, it is assumed to be space (or
blank). The optional trim_character is a single-character string literal or a
character-valued input parameter (i.e., char or Character). If a trim
specification is not provided, BOTH is assumed. The
TRIM function returns the trimmed string. The LOWER
and UPPER functions convert a string to lower and upper case,
respectively. They return a string. The LOCATE function
returns the position of a given string within a string, starting the search at a
specified position. It returns the first position at which the string was found
as an integer. The first argument is the string to be located; the second
argument is the string to be searched; the optional third argument is an integer
that represents the string position at which the search is started (by default,
the beginning of the string to be searched). The first position in a string is
denoted by 1. If the string is not found, 0 is returned. The LENGTH
function returns the length of the string in characters as an
integer.
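
In the specification grammar, the string functions take the following forms:

CONCAT(string_primary, string_primary)
SUBSTRING(string_primary, simple_arithmetic_expression, simple_arithmetic_expression)
TRIM([[trim_specification] [trim_character] FROM] string_primary)
LOWER(string_primary)
UPPER(string_primary)
LOCATE(string_primary, string_primary [, simple_arithmetic_expression])
LENGTH(string_primary)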

2.5.16.2.
JPQL Arithmetic Functions

The ABS function takes a numeric argument and returns a
number (integer, float, or double) of the same type as the argument to the
function. The SQRT function takes a numeric argument and
returns a double.

Note that not all databases support the use of a trim character other than the
space character; use of this argument may result in queries that are not
portable. Note that not all databases support the use of the third argument to
LOCATE; use of this argument may result in queries that are
not portable.

The MOD function takes two integer arguments and returns an
integer. The SIZE function returns an integer value, the
number of elements of the collection. If the collection is empty, the
SIZE function evaluates to zero. Numeric arguments to these functions
may correspond to the numeric Java object types as well as the primitive numeric
types.
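
In the specification grammar, the arithmetic functions take the following forms:

ABS(simple_arithmetic_expression)
SQRT(simple_arithmetic_expression)
MOD(simple_arithmetic_expression, simple_arithmetic_expression)
SIZE(collection_valued_path_expression)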

2.5.16.3.
JPQL Datetime Functions

The datetime functions CURRENT_DATE, CURRENT_TIME,
and CURRENT_TIMESTAMP return the value of the current date,
time, and timestamp on the database server.

2.6.
JPQL GROUP BY, HAVING

The GROUP BY construct enables the aggregation of values
according to a set of properties. The HAVING construct
enables conditions to be specified that further restrict the query result. Such
conditions are restrictions upon the groups. The syntax of the GROUP
BY and HAVING clauses is as follows:
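
groupby_clause ::= GROUP BY groupby_item {, groupby_item}*
groupby_item ::= single_valued_path_expression
having_clause ::= HAVING conditional_expression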

If a query contains both a WHERE clause and a GROUP
BY clause, the effect is that of first applying the where clause, and
then forming the groups and filtering them according to the HAVING
clause. The HAVING clause causes those groups to
be retained that satisfy the condition of the HAVING clause.
The requirements for the SELECT clause when GROUP
BY is used follow those of SQL: namely, any item that appears in the
SELECT clause (other than as an argument to an aggregate
function) must also appear in the GROUP BY clause. In forming
the groups, null values are treated as the same for grouping purposes. Grouping
by an entity is permitted. In this case, the entity must contain no serialized
state fields or lob-valued state fields. The HAVING clause
must specify search conditions over the grouping items or aggregate functions
that apply to grouping items.

If there is no GROUP BY clause and the HAVING
clause is used, the result is treated as a single group, and the
select list can only consist of aggregate functions. When a query declares a
HAVING clause, it must always also declare a GROUP
BY clause.

2.7.
JPQL SELECT Clause

The SELECT clause denotes the query result. More than one
value may be returned from the SELECT clause of a query. The
SELECT clause may contain one or more of the following
elements: a single range variable or identification variable that ranges over an
entity abstract schema type, a single-valued path expression, an aggregate
select expression, a constructor expression. The SELECT
clause has the following syntax:
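
select_clause ::= SELECT [DISTINCT] select_expression {, select_expression}*
select_expression ::= single_valued_path_expression | aggregate_expression | identification_variable | OBJECT(identification_variable) | constructor_expression
constructor_expression ::= NEW constructor_name (constructor_item {, constructor_item}*)
constructor_item ::= single_valued_path_expression | aggregate_expression
aggregate_expression ::= {AVG | MAX | MIN | SUM} ([DISTINCT] state_field_path_expression) | COUNT ([DISTINCT] identification_variable | state_field_path_expression | single_valued_association_path_expression)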

Note that the SELECT clause must be specified to return only
single-valued expressions. The query below is therefore not valid:

SELECT mag.articles FROM Magazine AS mag

The
DISTINCT keyword is used to specify that duplicate values
must be eliminated from the query result. If DISTINCT is not
specified, duplicate values are not eliminated. Standalone identification
variables in the SELECT clause may optionally be qualified by
the OBJECT operator. The SELECT clause
must not use the OBJECT operator to qualify path expressions.

2.7.1.
JPQL Result Type of the SELECT Clause

The type of the query result specified by the SELECT clause
of a query is an entity abstract schema type, a state-field type, the result of
an aggregate function, the result of a construction operation, or some sequence
of these. The result type of the SELECT clause is defined by
the result types of the select_expressions contained in it. When multiple
select_expressions are used in the SELECT clause, the result
of the query is of type Object[], and the elements in this result correspond in
order to the order of their specification in the SELECT
clause and in type to the result types of each of the select_expressions. The
type of the result of a select_expression is as follows:

A single_valued_path_expression that is a
state_field_path_expression results in an object of the same type as the
corresponding state field of the entity. If the state field of the entity is a
primitive type, the corresponding object type is returned.

A single_valued_path_expression that is a
single_valued_association_path_expression results in an entity object of the
type of the relationship field or the subtype of the relationship field of the
entity object as determined by the object/relational mapping.

The result type of an identification_variable is the type of the entity to which
that identification variable corresponds or a subtype as determined by the
object/relational mapping.

The result type of a constructor_expression is the type of the class for which
the constructor is defined. The types of the arguments to the constructor are
defined by the above rules.

2.7.2.
JPQL Constructor Expressions

A constructor may be used in the
SELECT list to return one or more Java instances. The
specified class is not required to be an entity or to be mapped to the database.
The constructor name must be fully qualified.
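
For example, given a hypothetical non-entity class
example.AuthorInfo with a matching two-argument constructor (the class and the
lastName state-field are assumptions, not part of the example schema), a
constructor query might look like this:

SELECT NEW example.AuthorInfo(auth.firstName, auth.lastName) FROM Author auth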

If an entity class name is specified in the SELECT NEW
clause, the resulting entity instances are in the new state.

2.7.3.
JPQL Null Values in the Query Result

If the result of a query corresponds to an association-field or state-field whose
value is null, that null value is returned in the result of the query method.
The IS NOT NULL construct can be used to eliminate such null
values from the result set of the query. Note, however, that state-field types
defined in terms of Java numeric primitive types cannot produce NULL
values in the query result. A query that returns such a state-field
type as a result type must not return a null value.

2.7.4.
JPQL Aggregate Functions

The result of a query may be the result
of an aggregate function applied to a path expression. The following aggregate
functions can be used in the SELECT clause of a query:
AVG, COUNT, MAX,
MIN, SUM. For all aggregate functions
except COUNT, the path expression that is the argument to
the aggregate function must terminate in a state-field. The path expression
argument to COUNT may terminate in either a state-field or an
association-field, or the argument to COUNT may be an
identification variable. Arguments to the functions SUM and
AVG must be numeric. Arguments to the functions MAX
and MIN must correspond to orderable state-field
types (i.e., numeric types, string types, character types, or date types). The
Java type that is contained in the result of a query using an aggregate function
is as follows:

COUNT returns
Long.

MAX, MIN return the type of the
state-field to which they are applied.

AVG returns Double.

SUM returns Long when applied to state-fields of integral
types (other than BigInteger); Double when applied to state-fields of floating
point types; BigInteger when applied to state-fields of type BigInteger; and
BigDecimal when applied to state-fields of type BigDecimal. If SUM
, AVG, MAX, or MIN
is used, and there are no values to which the aggregate function can
be applied, the result of the aggregate function is NULL. If
COUNT is used, and there are no values to which
COUNT can be applied, the result of the aggregate function is 0.

The argument to an aggregate function may be preceded by the keyword
DISTINCT to specify that duplicate values are to be eliminated before
the aggregate function is applied.
It is legal to specify DISTINCT with MAX
or MIN, but it does not affect the result.
Null values are eliminated before the
aggregate function is applied, regardless of whether the keyword
DISTINCT is specified.

2.7.4.1.
JPQL Aggregate Examples

The following query returns the average price of all magazines:

SELECT AVG(mag.price) FROM Magazine mag

The
following query returns the sum of all the prices from all the
magazines published by 'Larry':
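
A sketch of such a query (assuming Publisher has a name
state-field in addition to its magazines association-field):

SELECT SUM(mag.price) FROM Publisher pub JOIN pub.magazines mag WHERE pub.name = 'Larry'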

2.8.
JPQL ORDER BY Clause

The ORDER BY clause allows the objects or values that are
returned by the query to be ordered. The syntax of the ORDER BY
clause is

orderby_clause ::= ORDER BY orderby_item {,
orderby_item}*

orderby_item ::= state_field_path_expression [ASC | DESC]

When the ORDER BY clause is used in a query, each element of
the SELECT clause of the query must be one of the following:
an identification variable x, optionally denoted as OBJECT(x)
, a single_valued_association_path_expression, or a state_field_path_expression.
For example:

SELECT pub FROM Publisher pub ORDER BY pub.revenue, pub.name

If more than one orderby_item is specified, the left-to-right
sequence of the orderby_item elements determines the precedence, whereby the
leftmost orderby_item has highest precedence. The keyword ASC
specifies that ascending ordering be used; the keyword DESC
specifies that descending ordering be used. Ascending ordering is the default.
SQL rules for the ordering of null values apply: that is, all null values must
appear before all non-null values in the ordering or all null values must appear
after all non-null values in the ordering, but it is not specified which. The
ordering of the query result is preserved in the result of the query method if
the ORDER BY clause is used.

2.9.
JPQL Bulk Update and Delete

Bulk update and delete operations apply to entities of a single
entity class (together with its subclasses, if any). Only one entity abstract
schema type may be specified in the FROM or UPDATE
clause. The syntax of these operations is as follows:
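
update_statement ::= update_clause [where_clause]
update_clause ::= UPDATE abstract_schema_name [[AS] identification_variable] SET update_item {, update_item}*
update_item ::= [identification_variable.]{state_field | single_valued_association_field} = new_value
delete_statement ::= delete_clause [where_clause]
delete_clause ::= DELETE FROM abstract_schema_name [[AS] identification_variable]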

The syntax of the WHERE clause is described in
Section 2.4, “
JPQL WHERE Clause
”. A delete operation only applies to
entities of the specified class and its subclasses. It does not cascade to
related entities. The new_value specified for an update operation must be
compatible in type with the state-field to which it is assigned. Bulk update
maps directly to a database update operation, bypassing optimistic locking
checks. Portable applications must manually update the value of the version
column, if desired, and/or manually validate the value of the version column.
The persistence context is not synchronized with the result of the bulk update
or delete. Caution should be used when executing bulk update or delete
operations because they may result in inconsistencies between the database and
the entities in the active persistence context. In general, bulk update and
delete operations should only be performed within a separate transaction or at
the beginning of a transaction (before entities have been accessed whose state
might be affected by such operations).

Examples:

DELETE FROM Publisher pub WHERE pub.revenue > 1000000.0

DELETE FROM Publisher pub WHERE pub.revenue = 0 AND pub.magazines IS EMPTY
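
A bulk update follows the same pattern; for example, a hypothetical statement raising every publisher's revenue by ten percent:

```sql
UPDATE Publisher pub SET pub.revenue = pub.revenue * 1.1
```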

2.10.
JPQL Null Values

When the target of a reference does not exist in the database, its value is
regarded as NULL. SQL 92 NULL semantics
defines the evaluation of conditional expressions containing NULL
values. The following is a brief description of these semantics:

Comparison or arithmetic operations with a
NULL value always yield an unknown value.

Two NULL values are not considered to be equal; the
comparison yields an unknown value.

Comparison or arithmetic operations with an unknown value always yield an
unknown value.

The IS NULL and IS NOT NULL operators
convert a NULL state-field or single-valued association-field
value into the respective TRUE or FALSE
value.

Note: JPQL defines the empty string, "", as a string with 0 length, which is
not equal to a NULL value. However, NULL
values and empty strings may not always be distinguished when queries are mapped
to some databases. Application developers should therefore not rely on the
semantics of query comparisons involving the empty string and NULL
value.

2.11.
JPQL Equality and Comparison Semantics

Only the values of like types are permitted to be compared. A type is like
another type if they correspond to the same Java language type, or if one is a
primitive Java language type and the other is its wrapper class
equivalent (e.g., int and Integer are like types in this sense). There is one
exception to this rule: it is valid to compare numeric values for which the
rules of numeric promotion apply. Conditional expressions attempting to compare
non-like type values are disallowed except for this numeric case. Note that the
arithmetic operators and comparison operators are permitted to be applied to
state-fields and input parameters of the wrapper class equivalents of the
primitive numeric Java types. Two entities of the same abstract schema type are
equal if and only if they have the same primary key value. Only
equality/inequality comparisons over enums are required to be supported.

2.12.
JPQL BNF

The following is the BNF for the Java Persistence query language, from section
4.14 of the JSR 220 specification.

between_expression ::=
    arithmetic_expression [ NOT ] BETWEEN arithmetic_expression AND arithmetic_expression |
    string_expression [ NOT ] BETWEEN string_expression AND string_expression |
    datetime_expression [ NOT ] BETWEEN datetime_expression AND datetime_expression

Chapter 11.
SQL Queries

JPQL is a powerful query language, but there are times when it is not enough.
Maybe you're migrating a JDBC application to JPA on a strict deadline, and you
don't have time to translate your existing SQL selects to JPQL. Or maybe a
certain query requires database-specific SQL your JPA implementation doesn't
support. Or maybe your DBA has spent hours crafting the perfect select statement
for a query in your application's critical path. Whatever the reason, SQL
queries can remain an essential part of an application.

You are probably familiar with executing SQL queries by obtaining a
java.sql.Connection, using the JDBC APIs to create a
Statement, and executing that Statement to
obtain a ResultSet. And of course, you are free to
continue using this low-level approach to SQL execution in your JPA
applications. However, JPA also supports executing SQL queries through the
javax.persistence.Query interface introduced in
Chapter 10,
JPA Query
. Using a JPA SQL query, you can
retrieve either persistent objects or projections of column values. The
following sections detail each use.

1.
Creating SQL Queries

The EntityManager has two factory methods suitable for
creating SQL queries:
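
Those two methods are createNativeQuery(String sql), whose results are column projections, and createNativeQuery(String sql, Class resultClass), whose results are persistent instances of the given candidate class. A brief usage sketch (the Magazine entity and MAG table are assumed from the sample model):

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.Query;

public class NativeQuerySketch {
    public static void run(EntityManager em) {
        // Projection query: each result row holds raw column values.
        Query projection = em.createNativeQuery("SELECT TITLE FROM MAG");
        List titles = projection.getResultList();

        // Candidate-class query: rows become persistent Magazine instances.
        Query objects = em.createNativeQuery("SELECT * FROM MAG", Magazine.class);
        List mags = objects.getResultList();
    }
}
```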

Note

In addition to SELECT statements, OpenJPA supports stored procedure invocations
as SQL queries. OpenJPA will assume any SQL that does not begin with the
SELECT keyword (ignoring case) is a stored procedure call,
and invoke it as such at the JDBC level.

2.
Retrieving Persistent Objects with SQL

When you give a SQL Query a candidate class, it will
return persistent instances of that class. At a minimum, your SQL must select
the class' primary key columns, discriminator column (if mapped), and version
column (also if mapped). The JPA runtime uses the values of the primary key
columns to construct each result object's identity, and possibly to match it
with a persistent object already in the EntityManager's
cache. When an object is not already cached, the implementation creates a new
object to represent the current result row. It might use the discriminator
column value to make sure it constructs an object of the correct subclass.
Finally, the query records available version column data for use in optimistic
concurrency checking, should you later change the result object and flush it
back to the database.

Aside from the primary key, discriminator, and version columns, any columns you
select are used to populate the persistent fields of each result object. JPA
implementations will compete on how effectively they map your selected data to
your persistent instance fields.

Let's make the discussion above concrete with an example. It uses the following
simple mapping between a class and the database:
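
For instance, a sketch under illustrative field and column names; note that the select includes the primary key column so the runtime can construct each result object's identity:

```java
import javax.persistence.*;

@Entity
@Table(name = "MAG")
public class Magazine {
    @Id
    private String isbn;   // maps to MAG.ISBN, the primary key column
    private String title;  // maps to MAG.TITLE
    // getters, setters, other fields...
}

// Elsewhere, with an open EntityManager em:
// Query q = em.createNativeQuery("SELECT * FROM MAG", Magazine.class);
// List mags = q.getResultList();
```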

Note

Throughout this chapter, we will draw on the object model introduced in
Chapter 5,
Metadata
. We present that model again below.
As we discuss various aspects of mapping metadata, we will zoom in on specific
areas of the model and show how we map the object layer to the relational layer.

All mapping metadata is optional. Where no explicit mapping metadata is given,
JPA uses the defaults defined by the specification. As we present
each mapping throughout this chapter, we also describe the defaults that apply
when the mapping is absent.

1.
Table

The Table annotation specifies the table for an entity
class. If you omit the Table annotation, base entity
classes default to a table with their unqualified class name. The default table
of an entity subclass depends on the inheritance strategy, as you will see in
Section 6, “
Inheritance
”.

Tables have the following properties:

String name: The name of the table. Defaults to the
unqualified entity class name.

String schema: The table's schema. If you do not name a
schema, JPA uses the default schema for the database connection.

String catalog: The table's catalog. If you do not name a
catalog, JPA uses the default catalog for the database connection.

UniqueConstraint[] uniqueConstraints: An array of unique
constraints to place on the table. We cover unique constraints below. Defaults
to an empty array.

The equivalent XML element is table. It has the following
attributes, which correspond to the annotation properties above:

Sometimes, some of the fields in a class are mapped to secondary tables. In that
case, use the class' Table annotation to name what you
consider the class' primary table. Later, we will see how to map certain fields
to other tables.

The example below maps classes to tables in separate schemas. The
CONTRACT, SUB, and LINE_ITEM
tables are in the CNTRCT schema; all other tables
are in the default schema.
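
A sketch of such a mapping for two of the sample model's classes, under the table and schema names just given:

```java
import javax.persistence.*;

@Entity
@Table(name = "CONTRACT", schema = "CNTRCT")
public class Contract {
    // identity fields, persistent state...
}

@Entity
@Table(name = "SUB", schema = "CNTRCT")
public class Subscription extends Contract {
    // additional persistent state...
}
```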

2.
Unique Constraints

Unique constraints ensure that the data in a column or combination of columns is
unique for each row. A table's primary key, for example, functions as an
implicit unique constraint. In JPA, you represent other unique
constraints with an array of UniqueConstraint
annotations within the table annotation. The unique constraints you define are
used during table creation to generate the proper database constraints, and may
also be used at runtime to order INSERT, UPDATE
, and DELETE statements. For example, suppose there
is a unique constraint on the columns of field F. In the
same transaction, you remove an object A and persist a new
object B, both with the same F value. The
JPA runtime must ensure that the SQL deleting A
is sent to the database before the SQL inserting B to avoid a
unique constraint violation.

UniqueConstraint has a single property:

String[] columnNames: The names of the columns the
constraint spans.

In XML, unique constraints are represented by nesting
unique-constraint elements within the table
element. Each unique-constraint element in turn nests
column-name text elements to enumerate the constraint's
columns.

Example 12.2.
Defining a Unique Constraint

The following defines a unique constraint on the TITLE
column of the ART table:
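
Assuming an Article entity mapped to the ART table, the constraint might be declared as:

```java
import javax.persistence.*;

@Entity
@Table(name = "ART",
    uniqueConstraints = @UniqueConstraint(columnNames = "TITLE"))
public class Article {
    // persistent fields, including a title field mapped to TITLE...
}
```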

3.
Column

In the previous section, we saw that a UniqueConstraint
uses an array of column names. Field mappings, however, use full-fledged
Column annotations. Column annotations have the following
properties:

String name: The column name. Defaults to the field name.

String columnDefinition: The database-specific column type
name. This property is only used by vendors that support creating tables from
your mapping metadata. During table creation, the vendor will use the value of
the columnDefinition as the declared column type. If no
columnDefinition is given, the vendor will choose an
appropriate default based on the field type combined with the column's length,
precision, and scale.

int length: The column length. This property is typically
only used during table creation, though some vendors might use it to validate
data before flushing. CHAR and VARCHAR
columns typically default to a length of 255; other column types use the
database default.

int precision: The precision of a numeric column. This
property is often used in conjunction with scale to form the
proper column type name during table creation.

int scale: The number of decimal digits a numeric column can
hold. This property is often used in conjunction with precision
to form the proper column type name during table creation.

boolean nullable: Whether the column can store null values.
Vendors may use this property both for table creation and at runtime; however,
it is never required. Defaults to true.

boolean insertable: By setting this property to
false, you can omit the column from SQL INSERT
statements. Defaults to true.

boolean updatable: By setting this property to
false, you can omit the column from SQL UPDATE
statements. Defaults to true.

String table: Sometimes you will need to map fields to
tables other than the primary table. This property allows you to specify that the
column resides in a secondary table. We will see how to map fields to secondary
tables later in the chapter.

The equivalent XML element is column. This element has
attributes that are exactly equivalent to the Column
annotation's properties described above:

name

column-definition

length

precision

scale

nullable

insertable

updatable

table

4.
Identity Mapping

With our new knowledge of columns, we can map the identity fields of our
entities. The diagram below now includes primary key columns for our model's
tables. The primary key column for Author uses
nonstandard type INTEGER64, and the Magazine.isbn
field is mapped to a VARCHAR(9) column instead of
a VARCHAR(255) column, which is the default for string
fields. We do not need to point out either one of these oddities to the JPA
implementation for runtime use. If, however, we want to use the JPA
implementation to create our tables for us, it needs to know about
any desired non-default column types. Therefore, the example following the
diagram includes this data in its encoding of our mappings.

Note that many of our identity fields do not need to specify column information,
because they use the default column name and type.
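
A sketch of how Magazine's identity fields might encode the non-default column data (the compound identity and identity class shown here are assumptions based on the sample model):

```java
import javax.persistence.*;

@Entity
@IdClass(MagazineId.class)  // MagazineId is the assumed identity class
public class Magazine {
    @Id
    @Column(length = 9)     // VARCHAR(9) instead of the default VARCHAR(255)
    private String isbn;

    @Id
    private String title;   // defaults suffice: column TITLE, VARCHAR(255)
    // ...
}
```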

5.
Generators

One aspect of identity mapping not covered in the previous section is JPA's
ability to automatically assign a value to your numeric identity fields using
generators. We discussed the available generator types in
Section 2.2, “
Id
”. Now we show you how to define
named generators.

5.1.
Sequence Generator

Most databases allow you to create native sequences. These are database
structures that generate increasing numeric values. The
SequenceGenerator annotation represents a named database sequence.
You can place the annotation on any package, entity class, persistent field
declaration (if your entity uses field access), or getter method for a
persistent property (if your entity uses property access).
SequenceGenerator has the following properties:

String name: The generator name. This property is required.

String sequenceName: The name of the database sequence. If
you do not specify the database sequence, your vendor will choose an appropriate
default.

int initialValue: The initial sequence value.

int allocationSize: Some databases can pre-allocate groups
of sequence values. This allows the database to service sequence requests from
cache, rather than physically incrementing the sequence with every request. This
allocation size defaults to 50.

Note

OpenJPA allows you to use one of OpenJPA's built-in generator
implementations in the sequenceName property. You can also
set the sequenceName to system to use the
system sequence defined by the
openjpa.Sequence configuration property. See the Reference
Guide's Section 6, “
Generators
” for details.

The XML element for a sequence generator is
sequence-generator. Its attributes mirror the above annotation's properties:

name

sequence-name

initial-value

allocation-size

To use a sequence generator, set your GeneratedValue
annotation's strategy property to
GenerationType.SEQUENCE, and its generator property
to the sequence generator's declared name. Or equivalently, set your
generated-value XML element's strategy attribute to
SEQUENCE and its generator attribute to
the generator name.
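
Putting those pieces together, a sketch (the entity, sequence, and generator names are illustrative):

```java
import javax.persistence.*;

@Entity
public class Author {
    @Id
    @SequenceGenerator(name = "AuthorSeq", sequenceName = "AUTHOR_SEQ")
    @GeneratedValue(strategy = GenerationType.SEQUENCE,
                    generator = "AuthorSeq")
    private long id;
    // ...
}
```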

5.2.
TableGenerator

A TableGenerator refers to a database table used to store
increasing sequence values for one or more entities. As with
SequenceGenerator, you can place the TableGenerator
annotation on any package, entity class, persistent field
declaration (if your entity uses field access), or getter method for a
persistent property (if your entity uses property access).
TableGenerator has the following properties:

String name: The generator name. This property is required.

String table: The name of the generator table. If left
unspecified, your vendor will choose a default table.

String schema: The named table's schema.

String catalog: The named table's catalog.

String pkColumnName: The name of the primary key column in
the generator table. If unspecified, your implementation will choose a default.

String valueColumnName: The name of the column that holds
the sequence value. If unspecified, your implementation will choose a default.

String pkColumnValue: The primary key column value of the
row in the generator table holding this sequence value. You can use the same
generator table for multiple logical sequences by supplying different
pkColumnValue values. If you do not specify a value, the implementation
will supply a default.

int initialValue: The value of the generator's first issued
number.

int allocationSize: The number of values to allocate in
memory for each trip to the database. Allocating values in memory allows the JPA
runtime to avoid accessing the database for every sequence request.
This number also specifies the amount that the sequence value is incremented
each time the generator table is updated. Defaults to 50.

The XML equivalent is the table-generator element. This
element's attributes correspond exactly to the above annotation's properties:

name

table

schema

catalog

pk-column-name

value-column-name

pk-column-value

initial-value

allocation-size

To use a table generator, set your GeneratedValue
annotation's strategy property to
GenerationType.TABLE, and its generator property to
the table generator's declared name. Or equivalently, set your
generated-value XML element's strategy attribute to
TABLE and its generator attribute to the
generator name.
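
A corresponding sketch, again with illustrative table, column, and generator names:

```java
import javax.persistence.*;

@Entity
public class Author {
    @Id
    @TableGenerator(name = "AuthorGen", table = "ID_GEN",
        pkColumnName = "GEN_NAME", valueColumnName = "GEN_VALUE",
        pkColumnValue = "AuthorId")
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "AuthorGen")
    private long id;
    // ...
}
```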

6.
Inheritance

In the 1990s, programmers coined the term impedance mismatch
to describe the difficulties in bridging the object and relational
worlds. Perhaps no feature of object modeling highlights the impedance mismatch
better than inheritance. There is no natural, efficient way to represent an
inheritance relationship in a relational database.

Luckily, JPA gives you a choice of inheritance strategies, making
the best of a bad situation. The base entity class defines the inheritance
strategy for the hierarchy with the Inheritance
annotation. Inheritance has the following properties:

InheritanceType strategy: Enum value declaring the
inheritance strategy for the hierarchy. Defaults to
InheritanceType.SINGLE_TABLE. We detail each of the available
strategies below.

The corresponding XML element is inheritance, which has a
single attribute:

Note

OpenJPA allows you to vary your inheritance strategy for each class, rather than
forcing a single strategy per inheritance hierarchy. See
Section 7, “
Additional JPA Mappings
” in the Reference Guide for
details.

Single table inheritance is the default strategy. Thus, we could omit the
@Inheritance annotation in the example above and get the same
result.
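
For illustration, an explicit declaration of the default strategy on the sample model's base entity class might read:

```java
import javax.persistence.*;

@Entity
@Table(name = "CONTRACT", schema = "CNTRCT")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE) // the default
public class Contract {
    // ...
}
```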

Note

Mapping subclass state to the superclass table is often called flat
inheritance mapping.

6.1.1.
Advantages

Single table inheritance mapping is the fastest of all inheritance models, since
it never requires a join to retrieve a persistent instance from the database.
Similarly, persisting or updating a persistent instance requires only a single
INSERT or UPDATE statement. Finally,
relations to any class within a single table inheritance hierarchy are just as
efficient as relations to a base class.

6.1.2.
Disadvantages

The larger the inheritance model gets, the "wider" the mapped table gets, in
that for every field in the entire inheritance hierarchy, a column must exist in
the mapped table. This may have undesirable consequences for the database size,
since a wide or deep inheritance hierarchy will result in tables with many
mostly-empty columns.

6.2.
Joined

The InheritanceType.JOINED strategy uses a different table
for each class in the hierarchy. Each table only includes state declared in its
class. Thus to load a subclass instance, the JPA implementation must
read from the subclass table as well as the table of each ancestor class, up to
the base entity class.

Note

Using joined subclass tables is also called vertical
inheritance mapping.

PrimaryKeyJoinColumn annotations tell the JPA
implementation how to join each subclass table record to the corresponding
record in its direct superclass table. In our model, the LINE_ITEM.ID
column joins to the CONTRACT.ID column. The
PrimaryKeyJoinColumn annotation has the following
properties:

String name: The name of the subclass table column. When
there is a single identity field, defaults to that field's column name.

String referencedColumnName: The name of the superclass
table column this subclass table column joins to. When there is a single
identity field, defaults to that field's column name.

String columnDefinition: This property has the same meaning
as the columnDefinition property on the Column
annotation, described in
Section 3, “
Column
”.

The XML equivalent is the primary-key-join-column element.
Its attributes mirror the annotation properties described above:

name

referenced-column-name

column-definition

The example below shows how we use InheritanceType.JOINED
and a primary key join column to map our sample model according to the diagram
above. Note that a primary key join column is not strictly needed, because there
is only one identity column, and the subclass table column has the same name as
the superclass table column. In this situation, the defaults suffice. However,
we include the primary key join column for illustrative purposes.

When there are multiple identity columns, you must define multiple
primary key join columns using the aptly-named
PrimaryKeyJoinColumns annotation. This annotation's value is an
array of PrimaryKeyJoinColumn annotations. We could rewrite
LineItem's mapping as:
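
That rewrite might look like the following sketch, joining LINE_ITEM.ID to CONTRACT.ID as described above:

```java
import javax.persistence.*;

@Entity
@Table(name = "LINE_ITEM", schema = "CNTRCT")
@PrimaryKeyJoinColumns({
    @PrimaryKeyJoinColumn(name = "ID", referencedColumnName = "ID")
})
public class LineItem extends Contract {
    // ...
}
```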

In XML, simply list as many primary-key-join-column elements
as necessary.

6.2.1.
Advantages

The joined strategy has the following advantages:

Using joined subclass tables results in the most normalized
database schema, meaning the schema with the least spurious or redundant data.

As more subclasses are added to the data model over time, the only schema
modification that needs to be made is the addition of corresponding subclass
tables in the database (rather than having to change the structure of existing
tables).

Relations to a base class using this strategy can be loaded through standard
joins and can use standard foreign keys, as opposed to the machinations required
to load polymorphic relations to table-per-class base types, described below.

6.2.2.
Disadvantages

Aside from certain uses of the table-per-class strategy described below, the
joined strategy is often the slowest of the inheritance models. Retrieving any
subclass requires one or more database joins, and storing subclasses requires
multiple INSERT or UPDATE statements. This
is only the case when persistence operations are performed on subclasses; if
most operations are performed on the least-derived persistent superclass, then
this mapping is very fast.

Note

When executing a select against a hierarchy that uses joined subclass table
inheritance, you must consider how to load subclass state.
Section 8, “
Eager Fetching
” in the Reference Guide
describes OpenJPA's options for efficient data loading.

6.3.
Table Per Class

Like the JOINED strategy, the
InheritanceType.TABLE_PER_CLASS strategy uses a different table for
each class in the hierarchy. Unlike the JOINED strategy,
however, each table includes all state for an instance of the corresponding
class. Thus to load a subclass instance, the JPA implementation must
only read from the subclass table; it does not need to join to superclass
tables.

Suppose that our sample model's Magazine class has a
subclass Tabloid. The classes are mapped using the
table-per-class strategy, as in the diagram above. In a table-per-class mapping,
Magazine's table MAG contains all
state declared in the base Magazine class.
Tabloid maps to a separate table, TABLOID. This
table contains not only the state declared in the Tabloid
subclass, but all the base class state from Magazine as
well. Thus the TABLOID table would contain columns for
isbn, title, and other
Magazine fields. These columns would default to the names used in
Magazine's mapping metadata.
Section 8.3, “
Embedded Mapping
” will show you how to use
AttributeOverrides and AssociationOverrides
to override superclass field mappings.

6.3.1.
Advantages

The table-per-class strategy is very efficient when operating on instances of a
known class. Under these conditions, the strategy never requires joining to
superclass or subclass tables. Reads, joins, inserts, updates, and deletes are
all efficient in the absence of polymorphic behavior. Also, as in the joined
strategy, adding additional classes to the hierarchy does not require modifying
existing class tables.

6.3.2.
Disadvantages

Polymorphic relations to non-leaf classes in a table-per-class hierarchy have
many limitations. When the concrete subclass is not known, the related object
could be in any of the subclass tables, making joins through the relation
impossible. This ambiguity also affects identity lookups and queries; these
operations require multiple SQL SELECTs (one for each
possible subclass), or a complex UNION.

7.
Discriminator

The single table
inheritance strategy results in a single table containing records for two or
more different classes in an inheritance hierarchy. Similarly, using the
joined strategy
results in the superclass table holding records for superclass instances as well
as for the superclass state of subclass instances. When selecting data, JPA
needs a way to differentiate a row representing an object of one class from a
row representing an object of another. That is the job of the
discriminator column.

The discriminator column is always in the table of the base entity. It holds a
different value for records of each class, allowing the JPA runtime
to determine what class of object each row represents.

The DiscriminatorColumn annotation represents a
discriminator column. It has these properties:

String name: The column name. Defaults to DTYPE.

int length: For string discriminator values, the length of the
column. Defaults to 31.

String columnDefinition: This property has the same meaning
as the columnDefinition property on the Column
annotation, described in
Section 3, “
Column
”.

DiscriminatorType discriminatorType: The enum value declaring the type of the
discriminator column. Defaults to DiscriminatorType.STRING.

The corresponding XML element is discriminator-column. Its
attributes mirror the annotation properties above:

name

length

column-definition

discriminator-type: One of STRING,
CHAR, or INTEGER.

The DiscriminatorValue annotation specifies the
discriminator value for each class. Though this annotation's value is always a
string, the implementation will parse it according to the
DiscriminatorColumn's discriminatorType property
above. The type defaults to DiscriminatorType.STRING, but
may be DiscriminatorType.CHAR or
DiscriminatorType.INTEGER. If you do not specify a
DiscriminatorValue, the provider will choose an appropriate
default.

The corresponding XML element is discriminator-value. The
text within this element is parsed as the discriminator value.

Note

OpenJPA assumes your model employs a discriminator column if any of the
following are true:

The base entity explicitly declares an inheritance type of
SINGLE_TABLE.

The base entity sets a discriminator value.

The base entity declares a discriminator column.

Only SINGLE_TABLE inheritance hierarchies require a
discriminator column and values. JOINED hierarchies can use
a discriminator to make some operations more efficient, but do not require one.
TABLE_PER_CLASS hierarchies have no use for a discriminator.

OpenJPA defines additional discriminator strategies; see
Section 7, “
Additional JPA Mappings
” in the Reference Guide for
details. OpenJPA also supports final entity classes. OpenJPA does not use a
discriminator on final classes.

We can now translate our newfound knowledge of JPA discriminators into concrete
JPA mappings. We first extend our diagram with discriminator columns:

Next, we present the updated mapping document. Notice that in this version, we
have removed explicit inheritance annotations when the defaults sufficed. Also,
notice that entities using the default DTYPE discriminator
column mapping do not need an explicit DiscriminatorColumn
annotation.

The following sections enumerate the myriad of field mappings JPA
supports. JPA augments the persistence metadata covered in
Chapter 5,
Metadata
with many new object-relational
annotations. As we explore the library of standard mappings, we introduce each
of these enhancements in context.

Note

OpenJPA supports many additional field types, and allows you to create custom
mappings for unsupported field types or database schemas. See the Reference
Guide's Chapter 7,
Mapping
for complete coverage of
OpenJPA's mapping capabilities.

In fact, you have already seen examples of basic field mappings in this chapter
- the mapping of all identity fields in
Example 12.3, “
Identity Mapping
”. As you saw in that
section, to write a basic field mapping you use the Column
annotation to describe the column the field value is stored in. We
discussed the Column annotation in
Section 3, “
Column
”. Recall that the name of
the column defaults to the field name, and the type of the column defaults to an
appropriate type for the field type. These defaults allow you to sometimes omit
the annotation altogether.

8.1.1.
LOBs

Adding the Lob marker annotation to a basic field signals
that the data is to be stored as a LOB (Large OBject). If the field holds string
or character data, it will map to a CLOB (Character Large
OBject) database column. If the field holds any other data type, it will be
stored as binary data in a BLOB (Binary Large OBject) column.
The implementation will serialize the Java value if needed.

The equivalent XML element is lob, which has no children or
attributes.
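
A short sketch with illustrative field names:

```java
import javax.persistence.*;

@Entity
public class Article {
    @Id
    private long id;

    @Lob
    private String fullText;    // string data maps to a CLOB column

    @Lob
    private byte[] coverImage;  // other types map to a BLOB column
}
```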

8.1.2.
Enumerated

You can apply the Enumerated annotation to your
Enum fields to control how they map to the database. The
Enumerated annotation's value is one of the following
constants from the EnumType enum:

EnumType.ORDINAL: The default. The persistence
implementation places the ordinal value of the enum in a numeric column. This is
an efficient mapping, but may break if you rearrange the Java enum declaration.

EnumType.STRING: Store the name of the enum value rather
than the ordinal. This mapping uses a VARCHAR column rather
than a numeric one.

The Enumerated annotation is optional. Any un-annotated
enumeration field defaults to ORDINAL mapping.
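
A sketch, assuming a hypothetical Frequency enum on Magazine:

```java
import javax.persistence.*;

enum Frequency { WEEKLY, MONTHLY, QUARTERLY }

@Entity
public class Magazine {
    @Id
    private long id;

    @Enumerated(EnumType.STRING) // stores "WEEKLY", "MONTHLY", ... in a VARCHAR column
    private Frequency frequency; // un-annotated, this would default to ORDINAL
}
```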

The corresponding XML element is enumerated. Its embedded
text must be one of STRING or ORDINAL.

8.1.3.
Temporal Types

The Temporal annotation determines how the implementation
handles your basic java.util.Date and
java.util.Calendar fields at the JDBC level. The
Temporal annotation's value is a constant from the
TemporalType enum. Available values are:

TemporalType.TIMESTAMP: The default. Use JDBC's timestamp
APIs to manipulate the column data.

TemporalType.TIME: Use JDBC's time APIs to manipulate the
column data.

TemporalType.DATE: Use JDBC's date APIs to manipulate the
column data.

If the Temporal annotation is omitted, the implementation
will treat the data as a timestamp.

The corresponding XML element is temporal, whose text value
must be one of: TIME, DATE, or
TIMESTAMP.
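
A sketch with illustrative field names:

```java
import java.util.Calendar;
import java.util.Date;
import javax.persistence.*;

@Entity
public class Article {
    @Id
    private long id;

    @Temporal(TemporalType.DATE)  // only the date portion is stored
    private Date publishDate;

    // No annotation: treated as TemporalType.TIMESTAMP by default.
    private Calendar lastModified;
}
```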

8.1.4.
The Updated Mappings

Below we present an updated diagram of our model and its associated database
schema, followed by the corresponding mapping metadata. Note that the mapping
metadata relies on defaults where possible. Also note that as a mapped
superclass, Document can define mappings that will
automatically transfer to its subclass' tables. In
Section 8.3, “
Embedded Mapping
”, you will see how a subclass
can override its mapped superclass' mappings.

8.2.
Secondary Tables

Sometimes a logical record is spread over multiple database tables. JPA
calls a class' declared table the primary
table, and calls other tables that make up a logical record secondary
tables. You can map any persistent field to a secondary table. Just
write the standard field mapping, then perform these two additional steps:

Set the table attribute of each of the field's columns or
join columns to the name of the secondary table.

Define the secondary table on the entity class declaration.

You define secondary tables with the SecondaryTable
annotation. This annotation has all the properties of the Table
annotation covered in Section 1, “
Table
”
, plus a pkJoinColumns property.

The pkJoinColumns property is an array of
PrimaryKeyJoinColumns dictating how to join secondary table records
to their owning primary table records. Each PrimaryKeyJoinColumn
joins a secondary table column to a primary key column in the
primary table. See Section 6.2, “
Joined
”
above for coverage of PrimaryKeyJoinColumn's properties.

The corresponding XML element is secondary-table. This
element has all the attributes of the table element, but also
accepts nested primary-key-join-column elements.
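For illustration, the sketch below maps a hypothetical content
field of Article to an ART_DATA secondary
table. The table and column names are assumptions, not part of our model's
schema:

```java
@Entity
@Table(name = "ART")
@SecondaryTable(name = "ART_DATA",
    pkJoinColumns = @PrimaryKeyJoinColumn(name = "ART_ID"))
public class Article {
    // The table attribute routes this column to the secondary table
    @Column(name = "CONTENT", table = "ART_DATA")
    private String content;
}
```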

8.3.
Embedded Mapping

Chapter 5,
Metadata
describes JPA's concept of
embeddable objects. The field values of embedded objects are stored
as part of the owning record, rather than as a separate database record. Thus,
instead of mapping a relation to an embeddable object as a foreign key, you map
all the fields of the embeddable instance to columns in the owning field's
table.

JPA defaults the embedded column names and descriptions to those of
the embeddable class' field mappings. The AttributeOverride
annotation overrides a basic embedded mapping. This annotation has
the following properties:

String name: The name of the embedded class' field being
mapped to this class' table.

Column column: The column defining the mapping of the
embedded class' field to this class' table.

The corresponding XML element is attribute-override. It has
a single name attribute to name the field being overridden,
and a single column child element.

To declare multiple overrides, use the AttributeOverrides
annotation, whose value is an array of
AttributeOverrides. In XML, simply list multiple
attribute-override elements in succession.

To override a many to one or one to one relationship, use the
AssociationOverride annotation in place of
AttributeOverride. AssociationOverride has
the following properties:

String name: The name of the embedded class' field being
mapped to this class' table.

JoinColumn[] joinColumns: The join columns defining the
mapping of the embedded class' relation field to this class' table.

You can also use attribute overrides on an entity class to override mappings
defined by its mapped superclass or table-per-class superclass. The example
below re-maps the Document.version field to the
Contract table's CVERSION column.
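A sketch of that override follows; the annotation names the mapped
superclass field and supplies the new column:

```java
@Entity
@AttributeOverride(name = "version",
    column = @Column(name = "CVERSION"))
public class Contract extends Document {
}
```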

8.4.
Direct Relations

A direct relation is a non-embedded persistent field that holds a reference to
another entity. many to one
and one to one metadata field
types are mapped as direct relations. Our model has three direct relations:
Magazine's publisher field is a direct
relation to a Company, Magazine's
coverArticle field is a direct relation to
Article, and the LineItem.magazine field is a
direct relation to a Magazine. Direct relations are
represented in the database by foreign key columns:

You typically map a direct relation with JoinColumn
annotations describing how the local foreign key columns join to the primary key
columns of the related record. The JoinColumn annotation
exposes the following properties:

String name: The name of the foreign key column. Defaults to
the relation field name, plus an underscore, plus the name of the referenced
primary key column.

String referencedColumnName: The name of the primary key
column being joined to. If there is only one identity field in the related
entity class, the join column name defaults to the name of the identity field's
column.

boolean unique: Whether this column is guaranteed to hold
unique values for all rows. Defaults to false.

JoinColumn also has the same nullable
, insertable, updatable,
columnDefinition, and table properties as the
Column annotation. See
Section 3, “
Column
” for details on these
properties.
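For example, Magazine's publisher relation
might be mapped as below. The column names are hypothetical:

```java
@ManyToOne
// PUB_ID is the local foreign key column; CID is the
// referenced primary key column in Company's table
@JoinColumn(name = "PUB_ID", referencedColumnName = "CID")
private Company publisher;
```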

The join-column element represents a join column in XML. Its
attributes mirror the above annotation's properties:

name

referenced-column-name

unique

nullable

insertable

updatable

column-definition

table

When there are multiple columns involved in the join, as when a
LineItem references a Magazine in our model,
the JoinColumns annotation allows you to specify an array
of JoinColumn values. In XML, simply list multiple
join-column elements.

When the entities in a one to one relation join on shared primary key values
rather than separate foreign key columns, use the
PrimaryKeyJoinColumn(s) annotation or
primary-key-join-column elements in place of JoinColumn(s)
/ join-column elements.

8.5.
Join Table

A join table consists of two foreign keys. Each row of a
join table associates two objects together. JPA uses join tables to
represent collections of entity objects: one foreign key refers back to the
collection's owner, and the other refers to a collection element.

one to many and
many to many metadata field
types can map to join tables. Several fields in our model use join table
mappings, including Magazine.articles and
Article.authors.

You define join tables with the JoinTable annotation.
This annotation has the following properties:

String name: Table name. If not given, the name of the table
defaults to the name of the owning entity's table, plus an underscore, plus the
name of the related entity's table.

String catalog: Table catalog.

String schema: Table schema.

JoinColumn[] joinColumns: Array of JoinColumn
showing how to associate join table records with the owning row in
the primary table. This property mirrors the pkJoinColumns
property of the SecondaryTable annotation in
functionality. See Section 8.2, “
Secondary Tables
” to
refresh your memory on secondary tables.

If this is a bidirectional relation (see
Section 2.9.1, “
Bidirectional Relations
” ), the name of a join column
defaults to the inverse field name, plus an underscore, plus the referenced
primary key column name. Otherwise, the join column name defaults to the field's
owning entity name, plus an underscore, plus the referenced primary key column
name.

JoinColumn[] inverseJoinColumns: Array of
JoinColumns showing how to associate join table records with the
records that form the elements of the collection. These join columns are used
just like the join columns for direct relations, and they have the same naming
defaults. Read Section 8.4, “
Direct Relations
” for a review of
direct relation mapping.

join-table is the corresponding XML element. It has the same
attributes as the table element, but includes the ability to
nest join-column and inverse-join-column
elements as children. We have seen join-column elements
already; inverse-join-column elements have the same
attributes.
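As an illustrative sketch, Article.authors might map to a
join table as follows. The table and column names are assumptions:

```java
@ManyToMany
@JoinTable(name = "ART_AUTHS",
    // joins back to the owning Article row
    joinColumns = @JoinColumn(name = "ART_ID"),
    // joins to the Author element rows
    inverseJoinColumns = @JoinColumn(name = "AUTH_ID"))
private Collection<Author> authors;
```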

8.6.
Bidirectional Mapping

Section 2.9.1, “
Bidirectional Relations
” introduced bidirectional
relations. To map a bidirectional relation, you map one field normally using the
annotations we have covered throughout this chapter. Then you use the
mappedBy property of the other field's metadata annotation or the
corresponding mapped-by XML attribute to refer to the mapped
field. Look for this pattern in these bidirectional relations as you peruse the
complete mappings below:

Magazine.publisher and Company.mags.

Article.authors and Author.articles.

8.7.
Map Mapping

All map fields in JPA are modeled on either one to many or many to
many associations. The map key is always derived from an associated entity's
field. Thus map fields use the same mappings as any one to many or many to many
fields, namely dedicated join
tables or bidirectional
relations. The only additions are the MapKey
annotation and map-key element to declare the key field. We
covered these additions in Section 2.13, “
Map Key
”.

The example below maps Subscription's map of
LineItems to the SUB_ITEMS join table. The key
for each map entry is the LineItem's num
field value.
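That mapping might be sketched as follows. The join column names and the
key type are assumptions:

```java
@OneToMany
@JoinTable(name = "SUB_ITEMS",
    joinColumns = @JoinColumn(name = "SUB_ID"),
    inverseJoinColumns = @JoinColumn(name = "ITEM_ID"))
// the map key is derived from LineItem's num field
@MapKey(name = "num")
private Map<Long, LineItem> items;
```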

Chapter 1.
Introduction

OpenJPA is a JDBC-based implementation of the JPA standard.
This document is a reference for the configuration and use of OpenJPA.

1.
Intended Audience

This document is intended for OpenJPA developers. It
assumes strong knowledge of Java, familiarity with the eXtensible Markup
Language (XML), and an understanding of JPA. If you are not familiar with JPA,
please read the JPA Overview before
proceeding.

Certain sections of this guide cover advanced topics such as custom
object-relational mapping, enterprise integration, and using OpenJPA with
third-party tools. These sections assume prior experience with the relevant
subject.

1.
Introduction

This chapter describes the OpenJPA configuration framework. It concludes with
descriptions of all the configuration properties recognized by OpenJPA. You may
want to browse these properties now, but it is not necessary. Most of them will
be referenced later in the documentation as we explain the various features they
apply to.

2.
Runtime Configuration

The OpenJPA runtime includes a comprehensive system of configuration defaults
and overrides:

OpenJPA first looks for an optional openjpa.xml resource.
OpenJPA searches for this resource in each top-level directory of your
CLASSPATH. OpenJPA will also find the resource if you place it within
a META-INF directory in any top-level directory of the
CLASSPATH. The openjpa.xml resource
contains property settings in
JPA's XML format.

You can customize the name or location of the above resource by specifying the
correct resource path in the openjpa.properties System
property.

You can override any value defined in the above resource by setting the System
property of the same name to the desired value.

In JPA, the values in the standard META-INF/persistence.xml
bootstrapping file used by the
Persistence class at runtime override the values in the above resource, as well as
any System property settings. The Map passed to
Persistence.createEntityManagerFactory at runtime also
overrides previous settings, including properties defined in
persistence.xml.
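For instance, a property in the Map passed at runtime overrides any
earlier setting of the same name. The persistence unit name and connection
URL below are hypothetical:

```java
Map<String, String> props = new HashMap<String, String>();
// overrides the same property in persistence.xml, openjpa.xml,
// and any System property setting
props.put("openjpa.ConnectionURL", "jdbc:hsqldb:mem:testdb");
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("my-unit", props);
```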

When using JCA deployment the config-property values in your
ra.xml file override other settings.

Note

Internally, the OpenJPA runtime environment and development
tools manipulate property settings through a general
Configuration interface, and in particular its
OpenJPAConfiguration and
JDBCConfiguration subclasses. For advanced
customization, OpenJPA's extended runtime interfaces and its development tools
allow you to access these interfaces directly. See the
Javadoc for details.

3.
Command Line Configuration

OpenJPA development tools share the same set of configuration defaults and
overrides as the runtime system. They also allow you to specify property values
on the command line:

-properties/-p <configuration file or resource>: Use
the -properties flag, or its shorter -p
form, to specify a configuration file to use. Note that OpenJPA always searches
the default file locations described above, so this flag is only needed when you
do not have a default resource in place, or when you wish to override the
defaults. The given value can be the path to a file, or the resource name of a
file somewhere in the CLASSPATH. OpenJPA will search the
given location as well as the location prefixed by META-INF/
. Thus, to point an OpenJPA tool at
META-INF/my-persistence.xml, you can use:

<tool> -p my-persistence.xml

If you want to run a tool against just one particular persistence unit in
a configuration file, you can do so by specifying an anchor along with the
resource. If you do not specify an anchor, the tools will run against all
the persistence units defined within the specified resource, or the default
resource if none is specified. If the persistence unit is defined within
the default resource location, then you can just specify the raw anchor itself:
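For example, assuming a hypothetical persistence unit named
my-unit defined in the default
META-INF/persistence.xml:

```
<tool> -p #my-unit
```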

-<property name> <property value>: Any
configuration property that you can specify in a configuration file can be
overridden with a command line flag. The flag name is always the last token of
the corresponding property name, with the first letter in either upper or lower
case. For example, to override the openjpa.ConnectionUserName
property, you could pass the -connectionUserName <value>
flag to any tool. Values set this way override both the values in the
configuration file and values set via System properties.

3.1.
Code Formatting

Some OpenJPA development tools generate Java code. These tools share a common
set of command-line flags for formatting their output to match your coding
style. All code formatting flags can begin with either the codeFormat
or cf prefix.

-codeFormat./-cf.tabSpaces <spaces>: The number of
spaces that make up a tab, or 0 to use tab characters. Defaults to using tab
characters.

-codeFormat./-cf.spaceBeforeParen <true/t | false/f>:
Whether or not to place a space before opening parentheses on method calls, if
statements, loops, etc. Defaults to false.

-codeFormat./-cf.spaceInParen <true/t | false/f>:
Whether or not to place a space within parentheses; i.e. method( arg)
. Defaults to false.

-codeFormat./-cf.braceOnSameLine <true/t | false/f>:
Whether or not to place opening braces on the same line as the declaration that
begins the code block, or on the next line. Defaults to true
.

-codeFormat./-cf.braceAtSameTabLevel <true/t | false/f>
: When the braceOnSameLine option is disabled, you can choose
whether to place the brace at the same tab level of the contained code. Defaults
to false.

-codeFormat./-cf.scoreBeforeFieldName <true/t | false/f>
: Whether to prefix an underscore to names of private member
variables. Defaults to false.

-codeFormat./-cf.linesBetweenSections <lines>: The
number of lines to skip between sections of code. Defaults to 1.

4.
Plugin Configuration

Because OpenJPA is a highly customizable environment, many configuration
properties relate to the creation and configuration of system plugins. Plugin
properties have a syntax very similar to that of Java 5 annotations. They allow
you to specify both what class to use for the plugin and how to configure the
public fields or bean properties of the instantiated plugin instance. The
easiest way to describe the plugin syntax is by example:

OpenJPA has a pluggable L2 caching mechanism that is controlled by the
openjpa.DataCache configuration property. Suppose that you have
created a new class, com.xyz.MyDataCache, that you want
OpenJPA to use for caching. You've made instances of MyDataCache
configurable via two methods, setCacheSize(int size)
and setRemoteHost(String host). The
sample below shows how you would tell OpenJPA to use an instance of your custom
plugin with a max size of 1000 and a remote host of cacheserver
.
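In persistence.xml, that setting might look like the
following; the class and property names come from the hypothetical cache
described above:

```xml
<property name="openjpa.DataCache"
    value="com.xyz.MyDataCache(CacheSize=1000, RemoteHost=cacheserver)"/>
```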

As you can see, plugin properties take a class name, followed by a
comma-separated list of values for the plugin's public fields or bean properties
in parentheses. OpenJPA will match each named property to a field or setter
method in the instantiated plugin instance, and set the field or invoke the
method with the given value (after converting the value to the right type, of
course). The first letter of the property names can be in either upper or lower
case. The following would also have been valid:

com.xyz.MyDataCache(cacheSize=1000, remoteHost=cacheserver)

If you do not need to pass any property settings to a plugin, you can just name
the class to use:

com.xyz.MyDataCache

Similarly, if the plugin has a default class that you do not want to change, you
can simply specify a list of property settings, without a class name. For
example, OpenJPA's query cache companion to the data cache has a default
implementation suitable to most users, but you still might want to change the
query cache's size. It has a CacheSize property for this
purpose:

CacheSize=1000

Finally, many of OpenJPA's built-in options for plugins have short alias names
that you can use in place of the full class name. The data cache property, for
example, has an available alias of true for the standard
cache implementation. The property value simply becomes:

true

The standard cache implementation class also has a CacheSize
property, so to use the standard implementation and configure the size, specify:

true(CacheSize=1000)

The remainder of this chapter reviews the set of configuration properties
OpenJPA recognizes.

OpenJPA defines many configuration properties. Most of these properties are
provided for advanced users who wish to customize OpenJPA's behavior; the
majority of developers can omit them. The following properties apply to any
OpenJPA back-end, though the given descriptions are tailored to OpenJPA's
default JDBC store.

5.27.
openjpa.DataCacheTimeout

Description: The number of milliseconds that
data in the data cache is valid. Set this to -1 to indicate that data should not
expire from the cache. This property can also be specified for individual
classes. See Section 1.1, “
Data Cache Configuration
” for details.

5.29.
openjpa.DynamicDataStructs

Description: Whether to dynamically generate
customized structs to hold persistent data. Both the OpenJPA data cache and the
remote framework rely on data structs to cache and transfer persistent state.
With dynamic structs, OpenJPA can customize data storage for each class,
eliminating the need to generate primitive wrapper objects. This saves memory
and speeds up certain runtime operations. The price is a longer warm-up time for
the application - generating and loading custom classes into the JVM takes time.
Therefore, only set this property to true if you have a
long-running application where the initial cost of class generation is offset by
memory and speed optimization over time.

5.31.
openjpa.FetchGroups

Description: A comma-separated list of fetch
group names that are to be loaded when retrieving objects from the datastore.
Fetch groups can also be set at runtime. See Section 7, “
Fetch Groups
”
for details.

5.33.
openjpa.IgnoreChanges

Description: Whether to consider modifications
to persistent objects made in the current transaction when evaluating queries.
Setting this to true allows OpenJPA to ignore changes and
execute the query directly against the datastore. A value of false
forces OpenJPA to consider whether the changes in the current
transaction affect the query, and if so to either evaluate the query in-memory
or flush before running it against the datastore.

5.34. openjpa.Id

Property name: openjpa.Id

Resource adaptor config-property: Id

Default: none

Description: An
environment-specific identifier for this configuration. This
might correspond to a JPA persistence-unit name, or to some other
more-unique value available in the current environment.

Description: Controls whether to log a warning
and defer registration instead of throwing an exception when a persistent class
cannot be fully processed. This property should only be
used in complex classloader situations where security is preventing OpenJPA from
reading registered classes. Setting this to true unnecessarily may obscure more
serious problems.

Description:
The RuntimeUnenhancedClasses property controls how OpenJPA
handles classes that have not been enhanced by the PCEnhancer
tool or automatically by a javaagent. If RuntimeUnenhancedClasses
is set to supported, OpenJPA will automatically
create subclasses for unenhanced entity classes. If set to
unsupported, OpenJPA will not create subclasses
for unenhanced entity classes and will throw an exception when
they are detected. If set to warn, OpenJPA
will not create subclasses for unenhanced entity classes
but will log a warning message.

Description: A comma-separated list of plugin
strings (see Section 4, “
Plugin Configuration
”) describing
org.apache.openjpa.lib.jdbc.ConnectionDecorator
instances to install on the connection factory. These decorators can wrap
connections passed from the underlying DataSource to add
functionality. OpenJPA will pass all connections through the list of decorators
before using them. Note that by default OpenJPA employs all
of the built-in decorators in the org.apache.openjpa.lib.jdbc
package already; you do not need to list them here.

6.17.
openjpa.jdbc.SynchronizeMappings

Description: Controls whether OpenJPA will
attempt to run the mapping tool on all persistent classes to synchronize their
mappings and schema at runtime. Useful for rapid test/debug cycles. See
Section 1.3, “
Runtime Forward Mapping
” for more information.

Chapter 3.
Logging

Logging is an important means of gaining insight into your application's runtime
behavior. OpenJPA provides a flexible logging system that integrates with many
existing runtime systems, such as application servers and servlet runners.

Warning

Logging can have a negative impact on performance. Disable verbose logging (such
as logging of SQL statements) before running any performance tests. It is
advisable to limit or disable logging for a production system. You can disable
logging altogether by setting the openjpa.Log property to
none.

1.
Logging Channels

Logging is done over a number of logging channels, each of
which has a logging level which controls the verbosity of
log messages recorded for the channel. OpenJPA uses the following logging
channels:

openjpa.Tool: Messages issued by the OpenJPA command line
and Ant tools. Most messages are basic statements detailing which classes or
files the tools are running on. Detailed output is only available via the
logging category the tool belongs to, such as openjpa.Enhance
for the enhancer (see Section 2, “
Enhancement
”) or
openjpa.MetaData for the mapping tool (see
Section 1, “
Forward Mapping
”). This logging category
is provided so that you can get a general idea of what a tool is doing without
having to manipulate logging settings that might also affect runtime behavior.

openjpa.MetaData: Details about the generation of metadata
and object-relational mappings.

openjpa.Runtime: General OpenJPA runtime messages.

openjpa.Query: Messages about queries. Query strings and any
parameter values, if applicable, will be logged to the TRACE
level at execution time. Information about possible performance concerns will be
logged to the INFO level.

openjpa.DataCache: Messages from the L2 data cache plugins.

openjpa.jdbc.JDBC: JDBC connection information. General JDBC
information will be logged to the TRACE level. Information
about possible performance concerns will be logged to the INFO
level.

openjpa.jdbc.SQL: This is the most common logging channel to
use. Detailed information about the execution of SQL statements will be sent to
the TRACE level. It is useful to enable this channel if you
are curious about the exact SQL that OpenJPA issues to the datastore.

When using the built-in OpenJPA logging facilities, you can enable SQL logging
by adding SQL=TRACE to your openjpa.Log
property.

OpenJPA can optionally reformat the logged SQL to make it easier to read. To
enable pretty-printing, add PrettyPrint=true to the
openjpa.ConnectionFactoryProperties property. You can control
how many columns wide the pretty-printed SQL will be with the
PrettyPrintLineLength property. The default line length is 60 columns.
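Together, the two settings above might appear in
persistence.xml as follows (the line length of 72 is an
arbitrary example):

```xml
<property name="openjpa.Log" value="SQL=TRACE"/>
<property name="openjpa.ConnectionFactoryProperties"
    value="PrettyPrint=true, PrettyPrintLineLength=72"/>
```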

While pretty printing makes things easier to read, it can make output harder to
process with tools like grep.

3.
Disabling Logging

Disabling logging can be useful to analyze performance without any I/O overhead
or to reduce verbosity at the console. To do this, set the openjpa.Log
property to none.

Disabling logging permanently, however, will cause all warnings to be consumed.
We recommend using one of the more sophisticated mechanisms described in this
chapter.

4.
Log4J

When openjpa.Log is set to log4j, OpenJPA
will delegate to Log4J for logging. In a standalone application, Log4J logging
levels are controlled by a resource named log4j.properties
, which should be available as a top-level resource (either at the top level of
a jar file, or in the root of one of the CLASSPATH
directories). When deploying to a web or EJB application server, Log4J
configuration is often performed in a log4j.xml file
instead of a properties file. For further details on configuring Log4J, please
see the Log4J
Manual. We present an example log4j.properties file
below.
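One possible log4j.properties configuration along these
lines; the appender choice and per-channel levels are illustrative:

```properties
# send everything to the console, quiet by default
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
# show general runtime messages and all SQL statements
log4j.category.openjpa.Runtime=INFO
log4j.category.openjpa.jdbc.SQL=TRACE
```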

5.
Apache Commons Logging

Set the openjpa.Log property to commons to
use the Apache
Jakarta Commons Logging thin library for issuing log messages. The
Commons Logging libraries act as a wrapper around a number of popular logging
APIs, including the
Jakarta Log4J
project, and the native
java.util.logging package in JDK 1.4. If neither of these libraries is
available, then logging will fall back to simple console logging.

When using the Commons Logging framework in conjunction with Log4J,
configuration will be the same as was discussed in the Log4J section above.

5.1.
JDK 1.4 java.util.logging

When using JDK 1.4 or higher in conjunction with OpenJPA's Commons Logging
support, logging will proceed through Java's built-in logging provided by the
java.util.logging package. For details on configuring the built-in
logging system, please see the
Java Logging Overview.

By default, JDK 1.4's logging package looks in the
JAVA_HOME/lib/logging.properties file for logging configuration. This
can be overridden with the java.util.logging.config.file
system property. For example:

# specify the handlers to create in the root logger
# (all loggers are children of the root logger)
# the following creates two handlers
handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
# set the default logging level for the root logger
.level=ALL
# set the default logging level for new ConsoleHandler instances
java.util.logging.ConsoleHandler.level=INFO
# set the default logging level for new FileHandler instances
java.util.logging.FileHandler.level=ALL
# set the default formatter for new ConsoleHandler instances
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
# set the default logging level for all OpenJPA logs
openjpa.Tool.level=INFO
openjpa.Runtime.level=INFO
openjpa.Remote.level=INFO
openjpa.DataCache.level=INFO
openjpa.MetaData.level=INFO
openjpa.Enhance.level=INFO
openjpa.Query.level=INFO
openjpa.jdbc.SQL.level=INFO
openjpa.jdbc.JDBC.level=INFO
openjpa.jdbc.Schema.level=INFO

6.
Custom Log

If none of the available logging systems meets your needs, you can configure
the logging system with a custom logger. You might use custom logging to integrate
with a proprietary logging framework used by some applications servers, or for
logging to a graphical component for GUI applications.

To make OpenJPA use your custom log factory, set the
openjpa.Log configuration
property to your factory's full class name. Because this property is a plugin
property (see Section 4, “
Plugin Configuration
” ), you can also
pass parameters to your factory. For example, to use the example factory above
and set its prefix to "LOG MSG", you would set the openjpa.Log
property to the following string:
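Assuming the example factory is a class named
com.xyz.CustomLogFactory exposing a Prefix
bean property (the class name is hypothetical), the plugin string would be:

```
com.xyz.CustomLogFactory(Prefix="LOG MSG")
```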

OpenJPA uses a relational database for object persistence.
It communicates with the database using the Java DataBase Connectivity (JDBC)
APIs. This chapter describes how to configure OpenJPA to work with the JDBC
driver for your database, and how to access JDBC functionality at runtime.

1.
Using the OpenJPA DataSource

OpenJPA includes its own simple javax.sql.DataSource
implementation. If you choose to use OpenJPA's DataSource
, then you must specify the following properties:

openjpa.ConnectionUserName: The JDBC user name for
connecting to the database.

openjpa.ConnectionPassword: The JDBC password for the above
user.

openjpa.ConnectionURL: The JDBC URL for the database.

openjpa.ConnectionDriverName: The JDBC driver class.
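A persistence.xml fragment supplying these four
properties might look like the following; the driver, URL, and credentials
are placeholders:

```xml
<property name="openjpa.ConnectionUserName" value="user"/>
<property name="openjpa.ConnectionPassword" value="pass"/>
<property name="openjpa.ConnectionURL" value="jdbc:hsqldb:mem:testdb"/>
<property name="openjpa.ConnectionDriverName" value="org.hsqldb.jdbcDriver"/>
```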

To configure advanced features, use the following optional
properties. The syntax of these property strings follows the syntax of OpenJPA
plugin parameters described in Section 4, “
Plugin Configuration
”.

openjpa.ConnectionProperties: If the listed driver is an
instance of java.sql.Driver, this string will be parsed
into a Properties instance, which will then be used to
obtain database connections through the Driver.connect(String url,
Properties props) method. If, on the other hand, the listed driver
is a javax.sql.DataSource, the string will be treated
as a plugin properties string, and matched to the bean setter methods of the
DataSource instance.

Bind the DataSource into JNDI, and then specify its
location in the jta-data-source or
non-jta-data-source element of the
JPA XML format (depending on
whether the DataSource is managed by JTA), or in the
openjpa.ConnectionFactoryName property.

Specify the full class name of the DataSource
implementation in the openjpa.ConnectionDriverName property in place of a JDBC
driver. In this configuration OpenJPA will instantiate an instance of the named
class via reflection. It will then configure the DataSource
with the properties in the
openjpa.ConnectionProperties setting.

The features of OpenJPA's own DataSource can
also be used with third-party implementations. OpenJPA layers on top of the
third-party DataSource to provide the extra
functionality. To configure these features use the
openjpa.ConnectionFactoryProperties property described
in the previous section.

2.1.
Managed and XA DataSources

Certain application servers automatically enlist their
DataSources in global transactions. When this is the case, OpenJPA should not
attempt to commit the underlying connection, leaving JDBC transaction completion
to the application server. To notify OpenJPA that your third-party
DataSource is managed by the application server, use the
jta-data-source element of your
persistence.xml file or set the
openjpa.ConnectionFactoryMode property to
managed.

Note that OpenJPA can only use managed DataSources when
it is also integrating with the application server's managed transactions. Also
note that all XA DataSources are enlisted, and you must
set this property when using any XA DataSource.

When using a managed DataSource, you should also
configure a second unmanaged DataSource that OpenJPA can
use to perform tasks that are independent of the global transaction. The most
common of these tasks is updating the sequence table OpenJPA uses to generate
unique primary key values for your datastore identity objects. Configure the
second DataSource using the non-jta-data-source
persistence.xml element, or OpenJPA's various
"2" connection properties, such as openjpa.ConnectionFactory2Name
or openjpa.Connection2DriverName. These
properties are outlined in Chapter 2,
Configuration
.

3.
Runtime Access to DataSource

The JPA standard defines how to access JDBC connections from enterprise beans.
OpenJPA also provides APIs to access an EntityManager's
connection, or to retrieve a connection directly from the
EntityManagerFactory's DataSource.

The
OpenJPAEntityManager.getConnection method
returns an EntityManager's connection. If the
EntityManager does not already have a connection, it will obtain
one. The returned connection is only guaranteed to be transactionally consistent
with other EntityManager operations if the
EntityManager is in a managed or non-optimistic transaction, if the
EntityManager has flushed in the current transaction, or
if you have used the OpenJPAEntityManager.beginStore
method to ensure that a datastore transaction is in progress. Always close the
returned connection before attempting any other EntityManager
operations. OpenJPA will ensure that the underlying native
connection is not released if a datastore transaction is in progress.

4.
Database Support

OpenJPA can take advantage of any JDBC 2.x compliant
driver, making almost any major database a candidate for use. See our officially
supported database list in Appendix 2,
Supported Databases
for more
information. Typically, OpenJPA auto-configures its JDBC behavior and SQL
dialect for your database, based on the values of your connection-related
configuration properties.

If OpenJPA cannot detect what type of database you are using, or if you are
using an unsupported database, you will have to tell OpenJPA what
org.apache.openjpa.jdbc.sql.DBDictionary to use.
The DBDictionary abstracts away the differences between
databases. You can plug a dictionary into OpenJPA using the
openjpa.jdbc.DBDictionary
configuration property. The built-in dictionaries are listed
below. If you are using an unsupported database, you may have to write your own
DBDictionary subclass, a simple process.

4.1.
DBDictionary Properties

The standard dictionaries all recognize the following properties. These
properties will usually not need to be overridden, since the dictionary
implementation should use the appropriate default values for your database. You
typically won't use these properties unless you are designing your own
DBDictionary for an unsupported database.
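When a property does need to be overridden, it can be supplied as a plugin argument to the dictionary. A sketch, with illustrative values:

```xml
<property name="openjpa.jdbc.DBDictionary"
          value="mysql(MaxTableNameLength=64,BatchLimit=100)"/>
```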

AllowsAliasInBulkClause:
When true, SQL delete and update statements may use table aliases.

ArrayTypeName: The overridden default column type for
java.sql.Types.ARRAY. This is used only when the schema is
generated by the mappingtool.

AutoAssignClause: The column definition clause to append to
a creation statement. For example, "AUTO_INCREMENT" for
MySQL. This property is set automatically in the dictionary, and should not need
to be overridden, and is only used when the schema is generated using the
mappingtool.

AutoAssignTypeName:
The column type name for auto-increment
columns. For example, "BIGSERIAL" for PostgreSQL. This
property is set automatically in the dictionary and should not need to be
overridden. It is used only when the schema is generated using the
mappingtool.

BatchLimit:
The default batch limit for sending multiple SQL statements at once to the
database. A value of -1 indicates unlimited batching, and any positive integer
indicates the maximum number of SQL statements to batch together.
Defaults to 0 which disables batching.

BigintTypeName: The overridden default column type for
java.sql.Types.BIGINT. This is used only when the schema is
generated by the mappingtool.

BinaryTypeName: The overridden default column type for
java.sql.Types.BINARY. This is used only when the schema is
generated by the mappingtool.

BitTypeName: The overridden default column type for
java.sql.Types.BIT. This is used only when the schema is generated by
the mappingtool.

BlobBufferSize: This property establishes the buffer size used in
INSERT/UPDATE operations with a
java.io.InputStream. This is only used with OpenJPA's
Section 7.11, “
Stream LOB Support
”. Defaults to 50000.

BlobTypeName: The overridden default column type for
java.sql.Types.BLOB. This is used only when the schema is
generated by the mappingtool.

BooleanTypeName:
The overridden default column type for
java.sql.Types.BOOLEAN. This is used only when the schema
is generated by the mappingtool.

CastFunction:
The SQL function call to cast a value to another SQL type.
Use the tokens {0} and {1} to represent
the two arguments. The result of the function is to convert the
{0} value to the {1} type.
The default is "CAST({0} AS {1})".

CatalogSeparator: The string the database uses to delimit
between the schema name and the table name. This is typically "."
, which is the default.

CharTypeName: The overridden default column type for
java.sql.Types.CHAR. This is used only when the schema is
generated by the mappingtool.

ClobBufferSize: This property establishes the buffer size used in
INSERT/UPDATE operations with a
java.io.Reader. This is only used with OpenJPA's
Section 7.11, “
Stream LOB Support
”. Defaults to 50000.

ClobTypeName: The overridden default column type for
java.sql.Types.CLOB. This is used only when the schema is
generated by the mappingtool.

ClosePoolSQL:
A special command to issue to the database when shutting down the pool.
Usually the pool of connections to the database is closed when the
application is ending. For embedded databases, whose lifecycle is
coterminous with the application, there may be a special
command, usually "SHUTDOWN",
that will cause the embedded database to close cleanly.
Defaults to null.

ConcatenateFunction:
The SQL function call or operation to concatenate two strings.
Use the tokens {0} and {1} to represent
the two arguments. The result of the function or operation is to concatenate
the {1} string to the end of the {0}
string. Defaults to "({0}||{1})".

ConstraintNameMode: When creating constraints, whether to
put the constraint name before the definition ("before"),
just after the constraint type name ("mid"), or after the
constraint definition ("after").
Defaults to "before".

CreatePrimaryKeys: When false, do not
create database primary keys for identifiers. Defaults to true
.

CrossJoinClause: The clause to use for a cross join
(cartesian product). Defaults to "CROSS JOIN".

CurrentDateFunction:
The SQL function call to obtain the current date from the database.
Defaults to "CURRENT_DATE".

CurrentTimeFunction:
The SQL function call to obtain the current time from the database.
Defaults to "CURRENT_TIME".

CurrentTimestampFunction:
The SQL function call to obtain the current timestamp from the database.
Defaults to "CURRENT_TIMESTAMP".

DatePrecision:
The database is able to store time values to this degree of precision,
which is expressed in nanoseconds.
This value is usually one million, meaning that the database is able
to store time values with a precision of one millisecond. Particular
databases may have more or less precision.
OpenJPA will round all time values to this degree of precision
before storing them in the database.
Defaults to 1000000.

DateTypeName: The overridden default column type for
java.sql.Types.DATE. This is used only when the schema is
generated by the mappingtool.

DecimalTypeName: The overridden default column type for
java.sql.Types.DECIMAL. This is used only when the schema is
generated by the mappingtool.

DistinctCountColumnSeparator: The string the database uses
to delimit between column expressions in a SELECT COUNT(DISTINCT
column-list) clause. Defaults to null
for most databases, meaning that
multiple columns in a distinct COUNT clause are not supported.

DistinctTypeName: The overridden default column type for
java.sql.Types.DISTINCT. This is used only when the schema
is generated by the mappingtool.

DoubleTypeName: The overridden default column type for
java.sql.Types.DOUBLE. This is used only when the schema is
generated by the mappingtool.

DriverVendor: The vendor of the particular JDBC driver you
are using. Some dictionaries must alter their behavior depending on the driver
vendor. Dictionaries usually detect the driver vendor and set this property
themselves. See the VENDOR_XXX constants defined in the
DBDictionary Javadoc for available options.

DropTableSQL:
The SQL statement used to drop a table. Use the token {0}
as the argument for the table name.
Defaults to "DROP TABLE {0}".

FixedSizeTypeNames:
A comma separated list of additional database types that have a size
defined by the database. In other words, when a column of a fixed
size type is declared, its size cannot be defined by the user. Common
examples would be DATE, FLOAT,
and INTEGER.
Each database dictionary has its own internal set of fixed size type names
that include the names mentioned here and many others.
Names added to this property are added to the dictionary's internal set.
Defaults to null.

FloatTypeName: The overridden default column type for
java.sql.Types.FLOAT. This is used only when the schema is
generated by the mappingtool.

ForUpdateClause: The clause to append to SELECT
statements to issue queries that obtain pessimistic locks. Defaults
to "FOR UPDATE".

GetStringVal:
A special function to return the value of an XML
column in a select statement. For example, Oracle uses
".getStringVal()", as in,
"select t0.xmlcol.getStringVal() from xmltab t0".
Defaults to the empty string.

InClauseLimit:
The maximum number of elements in an IN clause. OpenJPA
works around cases where the limit is exceeded. Defaults to -1 meaning
no limit.

InitializationSQL: A piece of SQL to issue against the
database whenever a connection is retrieved from the DataSource
.

InnerJoinClause: The clause to use for an inner join.
Defaults to "INNER JOIN".

IntegerTypeName: The overridden default column type for
java.sql.Types.INTEGER. This is used only when the schema is
generated by the mappingtool.

JavaObjectTypeName: The overridden default column type for
java.sql.Types.JAVAOBJECT. This is used only when the schema
is generated by the mappingtool.

LastGeneratedKeyQuery: The query to issue to obtain the last
automatically generated key for an auto-increment column. For example,
"SELECT LAST_INSERT_ID()" for MySQL. This property is set
automatically in the dictionary, and should not need to be overridden.

LongVarbinaryTypeName: The overridden default column type
for java.sql.Types.LONGVARBINARY. This is used only when the
schema is generated by the mappingtool.

LongVarcharTypeName: The overridden default column type for
java.sql.Types.LONGVARCHAR. This is used only when the
schema is generated by the mappingtool.

MaxAutoAssignNameLength: Set this property to the maximum
length of the sequence name used for auto-increment columns. Names longer than
this value are truncated. Defaults to 31.

MaxColumnNameLength: The maximum number of characters in a
column name. Defaults to 128.

MaxConstraintNameLength: The maximum number of characters in
a constraint name. Defaults to 128.

MaxEmbeddedBlobSize:
When greater than -1, the maximum size of a BLOB value
that can be sent directly to the database within an insert or update statement.
Values whose size is greater than MaxEmbeddedBlobSize force
OpenJPA to work around this limitation. A value of -1 means that there is
no limitation. Defaults to -1.

MaxEmbeddedClobSize:
When greater than -1, the maximum size of a CLOB value
that can be sent directly to the database within an insert or update statement.
Values whose size is greater than MaxEmbeddedClobSize force
OpenJPA to work around this limitation. A value of -1 means that there is
no limitation. Defaults to -1.

MaxIndexNameLength: The maximum number of characters in an
index name. Defaults to 128.

MaxIndexesPerTable: The maximum number of indexes that can
be placed on a single table. Defaults to no limit.

MaxTableNameLength: The maximum number of characters in a
table name. Defaults to 128.

NextSequenceQuery: A SQL string for obtaining a native
sequence value. May use a placeholder of {0} for the variable
sequence name. Defaults to a database-appropriate value. For example,
"SELECT {0}.NEXTVAL FROM DUAL" for Oracle.

NullTypeName: The overridden default column type for
java.sql.Types.NULL. This is used only when the schema is
generated by the mappingtool.

NumericTypeName: The overridden default column type for
java.sql.Types.NUMERIC. This is used only when the schema is
generated by the mappingtool.

OtherTypeName: The overridden default column type for
java.sql.Types.OTHER. This is used only when the schema is
generated by the mappingtool.

OuterJoinClause: The clause to use for a left outer join.
Defaults to "LEFT OUTER JOIN".

Platform:
The name of the database that this dictionary targets.
Defaults to "Generic", but all dictionaries override this
value.

RangePosition:
Indicates where to specify in the SQL select statement the range, if any,
of the result rows to be returned.
When limiting the number of returned result rows to a subset of all those
that satisfy the query's conditions, the position of the range clause
varies by database.
Defaults to 0, meaning that the range
is expressed at the end of the select statement but before any locking clause.
See the RANGE_XXX constants defined in DBDictionary.

RealTypeName: The overridden default column type for
java.sql.Types.REAL. This is used only when the schema is
generated by the mappingtool.

RefTypeName: The overridden default column type for
java.sql.Types.REF. This is used only when the schema is generated by
the mappingtool.

RequiresAliasForSubselect: When true, the database
requires that subselects in a FROM clause be assigned an alias.

RequiresAutoCommitForMetadata: When true, the JDBC driver
requires that autocommit be enabled before any schema interrogation operations
can take place.

RequiresCastForComparisons:
When true, comparisons of two values of different types or
of two literals require a cast in the generated SQL.
Defaults to false.

RequiresCastForMathFunctions:
When true, math operations on two values of different types or
on two literals require a cast in the generated SQL.
Defaults to false.

RequiresConditionForCrossJoin: Some databases require that
there always be a conditional statement for a cross join. If set, this parameter
ensures that there will always be some condition to the join clause.

RequiresTargetForDelete:
When true, the database requires a target for delete statements. Defaults
to false.

ReservedWords: A comma-separated list of reserved words for
this database, beyond the standard SQL92 keywords.

SchemaCase: The case to use when querying the database
metadata about schema components. Defaults to making all names upper case.
Available values are: upper, lower, preserve.

SearchStringEscape:
The default escape character used when generating SQL LIKE
clauses. The escape character is used to escape the wildcard meaning of the
_ and % characters.
Note: since JPQL provides the ability to define the escape character in
the query, this setting is primarily used when translating other query
languages, such as JDOQL. Defaults to "\\"
(a single backslash in Java speak).

SelectWords: A comma-separated list of keywords which may be
used to start a SELECT statement for this database. If an application executes
a native SQL statement which begins with SelectWords OpenJPA will treat the
statement as a SELECT statement rather than an UPDATE statement.

SequenceNameSQL:
Additional phrasing to use with SequenceSQL.
Defaults to null.

SequenceSQL:
General structure of the SQL query to use when interrogating the database
for sequence names.
As there is no standard way to obtain sequence names,
it defaults to null.

SequenceSchemaSQL:
Additional phrasing to use with SequenceSQL.
Defaults to null.

SimulateLocking: Some databases do not support pessimistic
locking, which will result in an exception when you attempt a
transaction while using the pessimistic lock manager.
Setting this property to true suppresses the
locking of rows in the database, thereby allowing pessimistic transactions
even on databases that do not support locking. At the same time, setting this
property to true means that you do not obtain the semantics
of a pessimistic
transaction with the database. Defaults to false.

SmallintTypeName: The overridden default column type for
java.sql.Types.SMALLINT. This is used only when the schema
is generated by the mappingtool.

StorageLimitationsFatal: When true, any data
truncation/rounding that is performed by the dictionary in order to store a
value in the database will be treated as a fatal error, rather than just issuing
a warning.

StoreCharsAsNumbers: Set this property to false
to store Java char fields as CHAR
values rather than numbers. Defaults to true.

StoreLargeNumbersAsStrings: When true, the dictionary
prefers to store Java fields of
type BigInteger and BigDecimal
as string values in the database. Likewise, the dictionary will instruct
the mapping tool to map these Java types to character columns.
Because some databases have limitations on the number of digits that can
be stored in a numeric column (for example, Oracle can only store 38
digits), this option may be necessary for some applications.
Note that this option may prevent OpenJPA from executing meaningful numeric
queries against the columns. Defaults to false.

StringLengthFunction: Name of the SQL function for getting
the length of a string. Use the token {0} to represent the
argument.

StructTypeName: The overridden default column type for
java.sql.Types.STRUCT. This is used only when the schema is
generated by the mappingtool.

SubstringFunctionName: Name of the SQL function for getting
the substring of a string.

SupportsAlterTableWithAddColumn: When true, the database
supports adding a new column in an ALTER TABLE statement.
Defaults to true.

SupportsAlterTableWithDropColumn: When true, the database
supports dropping a column in an ALTER TABLE statement.
Defaults to true.

SupportsAutoAssign:
When true, the database supports auto-assign columns, where the value of
column is assigned upon insertion of the row into the database.
Defaults to false.

SupportsCascadeDeleteAction: When true, the database supports
the CASCADE delete action on foreign keys.
Defaults to true.

SupportsCascadeUpdateAction:
When true, the database supports the CASCADE
update action on foreign keys. Defaults to true.

SupportsComments:
When true, comments can be associated with the table in the table creation
statement. Defaults to false.

SupportsCorrelatedSubselect:
When true, the database supports correlated subselects. Correlated
subselects are select statements nested within select statements that
refers to a column in the outer select statement. For performance
reasons, correlated subselects are generally a last resort.
Defaults to true.

SupportsDefaultDeleteAction: When true, the database supports
the SET DEFAULT delete action on foreign keys.
Defaults to true.

SupportsDefaultUpdateAction:
When true, the database supports the SET DEFAULT update
action on foreign keys. Defaults to true.

SupportsDeferredConstraints: When true, the database
supports deferred constraints: constraint violations are checked
when the transaction commits, rather than immediately after
each SQL statement within the
transaction. Defaults to true.

SupportsLockingWithSelectRange: When true, the database
supports FOR UPDATE select clauses with queries that select a
range of data using LIMIT, TOP or the
database equivalent. Defaults to true.

SupportsModOperator:
When true, the database supports the modulus operator (%)
instead of the MOD function.
Defaults to false.

SupportsMultipleNontransactionalResultSets: When true, a
nontransactional connection is capable of having multiple open
ResultSet instances.

SupportsNullDeleteAction: When true, the database supports
the SET NULL delete action on foreign keys.
Defaults to true.

SupportsNullTableForGetColumns: When true, the database
supports passing a null parameter to
DatabaseMetaData.getColumns as an optimization to get information
about all the tables. Defaults to true.

SupportsNullTableForGetImportedKeys: When true, the
database supports passing a null parameter to
DatabaseMetaData.getImportedKeys as an optimization to get
information about all the tables. Defaults to false.

SupportsNullTableForGetIndexInfo: When true, the database
supports passing a null parameter to
DatabaseMetaData.getIndexInfo as an optimization to get information
about all the tables. Defaults to false.

SupportsNullTableForGetPrimaryKeys: When true, the
database supports passing a null parameter to
DatabaseMetaData.getPrimaryKeys as an optimization to get
information about all the tables. Defaults to false.

SupportsNullUpdateAction:
When true, the database supports the SET NULL update
action on foreign keys. Defaults to true.

SupportsQueryTimeout: When true, the JDBC driver supports
calls to java.sql.Statement.setQueryTimeout.

SupportsRestrictDeleteAction: When true, the database
supports the RESTRICT delete action on foreign keys.
Defaults to true.

SupportsRestrictUpdateAction:
When true, the database supports the RESTRICT update
action on foreign keys. Defaults to true.

SupportsSchemaForGetColumns: When false, the database
driver does not support using the schema name for schema reflection on column
names.

SupportsSchemaForGetTables: If false, then the database
driver does not support using the schema name for schema reflection on table
names.

SupportsSelectEndIndex: When true, the database can create a
select that is limited to the first N results.

SystemSchemas: A comma-separated list of schema names that
should be ignored.

SystemTables: A comma-separated list of table names that
should be ignored.

TableForUpdateClause: The clause to append to the end of
each table alias in queries that obtain pessimistic locks.
Defaults to null.

TableTypes: Comma-separated list of table types to use when
looking for tables during schema reflection, as defined in the
java.sql.DatabaseMetaData.getTables JDBC method. An example is:
"TABLE,VIEW,ALIAS". Defaults to "TABLE".

TimeTypeName: The overridden default column type for
java.sql.Types.TIME. This is used only when the schema is
generated by the mappingtool.

TimestampTypeName: The overridden default column type for
java.sql.Types.TIMESTAMP. This is used only when the schema
is generated by the mappingtool.

TinyintTypeName: The overridden default column type for
java.sql.Types.TINYINT. This is used only when the schema is
generated by the mappingtool.

ToLowerCaseFunction: Name of the SQL function for converting
a string to lower case. Use the token {0} to represent the
argument.

ToUpperCaseFunction: Name of the SQL function for converting a
string to upper case. Use the token {0} to represent the
argument.

TrimBothFunction:
The SQL function call to trim any number of a particular character
from both the start and end of a string.
Note: some databases do not support specifying the character, in which
case only spaces or whitespace can be trimmed.
Use the token {1} when possible to represent the character,
and the token {0} to represent the string.
Defaults to "TRIM(BOTH {1} FROM {0})".

TrimLeadingFunction:
The SQL function call to trim any number of a particular character
from the start of a string.
Note: some databases do not support specifying the character, in which
case only spaces or whitespace can be trimmed.
Use the token {1} when possible to represent the character,
and the token {0} to represent the string.
Defaults to "TRIM(LEADING {1} FROM {0})".

TrimTrailingFunction:
The SQL function call to trim any number of a particular character
from the end of a string.
Note: some databases do not support specifying the character, in which
case only spaces or whitespace can be trimmed.
Use the token {1} when possible to represent the character,
and the token {0} to represent the string.
Defaults to "TRIM(TRAILING {1} FROM {0})".

UseGetBestRowIdentifierForPrimaryKeys: When true,
metadata queries will use DatabaseMetaData.getBestRowIdentifier
to obtain information about primary keys, rather than
DatabaseMetaData.getPrimaryKeys.

UseGetBytesForBlobs: When true,
ResultSet.getBytes will be used to obtain blob data rather than
ResultSet.getBinaryStream.

UseGetObjectForBlobs: When true,
ResultSet.getObject will be used to obtain blob data rather than
ResultSet.getBinaryStream.

UseGetStringForClobs: When true,
ResultSet.getString will be used to obtain clob data rather than
ResultSet.getCharacterStream.

UseSchemaName: If false, then avoid
including the schema name in table name references. Defaults to true
.

UseSetBytesForBlobs: When true,
PreparedStatement.setBytes will be used to set blob data, rather
than PreparedStatement.setBinaryStream.

UseSetStringForClobs: When true,
PreparedStatement.setString will be used to set clob data, rather
than PreparedStatement.setCharacterStream.

ValidationSQL: The SQL used to validate that a connection is
still in a valid state. For example, "SELECT SYSDATE FROM DUAL"
for Oracle.

VarbinaryTypeName: The overridden default column type for
java.sql.Types.VARBINARY. This is used only when the schema
is generated by the mappingtool.

VarcharTypeName: The overridden default column type for
java.sql.Types.VARCHAR. This is used only when the schema is
generated by the mappingtool.

XmlTypeName:
The column type name for XML columns. This
property is set automatically in the dictionary and should not need to be
overridden. It is used only when the schema is generated using the
mappingtool. Defaults to "XML".

4.2.
MySQLDictionary Properties

The mysql dictionary also understands the following
properties:

DriverDeserializesBlobs: Many MySQL drivers automatically
deserialize BLOBs on calls to ResultSet.getObject. The
MySQLDictionary overrides the standard
DBDictionary.getBlobObject method to take this into account. If
your driver does not deserialize automatically, set this property to
false.

TableType: The MySQL table type to use when creating tables.
Defaults to "innodb".

UseClobs: Some older versions of MySQL do not handle clobs
correctly. To enable clob functionality, set this to true.
Defaults to false.

OptimizeMultiTableDeletes: MySQL as of version 4.0.0
supports multiple tables in DELETE statements. When
this option is set, OpenJPA will use that syntax when doing bulk deletes
from multiple tables. This can happen when the
deleteTableContents SchemaTool
action is used. (See Section 13, “
Schema Tool
” for
more info about deleteTableContents.) Defaults to
false, since the statement may fail if using InnoDB
tables and delete constraints.
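These properties use the same plugin syntax as the standard dictionary properties; for example (values illustrative only):

```xml
<property name="openjpa.jdbc.DBDictionary"
          value="mysql(TableType=MyISAM,DriverDeserializesBlobs=false)"/>
```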

4.3.
OracleDictionary Properties

The oracle dictionary understands the following additional
properties:

UseTriggersForAutoAssign: When true, OpenJPA will allow
simulation of auto-increment columns by the use of Oracle triggers. OpenJPA will
assume that the current sequence value from the sequence specified in the
AutoAssignSequenceName parameter will hold the value of the
new primary key for rows that have been inserted. For more details on
auto-increment support, see Section 4.4, “
Autoassign / Identity Strategy Caveats
”
.

AutoAssignSequenceName: The global name of the sequence that
OpenJPA will assume holds the primary key value for rows that use
auto-increment. If left unset, OpenJPA will use a sequence named
"SEQ_<table name>".

MaxEmbeddedBlobSize: Oracle is unable to persist BLOBs using
the embedded update method when BLOBs get over a certain size. The size depends
on database configuration, e.g. encoding. This property defines the maximum size
BLOB to persist with the embedded method. Defaults to 4000 bytes.

MaxEmbeddedClobSize: Oracle is unable to persist CLOBs using
the embedded update method when Clobs get over a certain size. The size depends
on database configuration, e.g. encoding. This property defines the maximum size
CLOB to persist with the embedded method. Defaults to 4000 characters.

UseSetFormOfUseForUnicode: Prior to Oracle 10g, statements
executed against unicode-capable columns (the NCHAR,
NVARCHAR, NCLOB Oracle types) required
special handling to be able to store unicode values. Setting this property to
true (the default) will cause OpenJPA to attempt to detect when a column is of
one of these types, and if so, to configure the statement correctly
using OraclePreparedStatement.setFormOfUse. For
more details, see the Oracle
Readme For NChar. Note that this can only work if OpenJPA is able to
access the underlying OraclePreparedStatement instance,
which may not be possible when using some third-party datasources. If OpenJPA
detects that this is the case, a warning will be logged.
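Combining these into a dictionary configuration might look as follows (the sequence name EMP_SEQ is a hypothetical example):

```xml
<property name="openjpa.jdbc.DBDictionary"
          value="oracle(UseTriggersForAutoAssign=true,AutoAssignSequenceName=EMP_SEQ)"/>
```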

5.
Setting the Transaction Isolation

OpenJPA typically retains the default transaction isolation level of the JDBC
driver. However, you can specify a transaction isolation level to use through
the
openjpa.jdbc.TransactionIsolation configuration property. The
following is a list of standard isolation levels. Note that not all databases
support all isolation levels.

default: Use the JDBC driver's default isolation level.
OpenJPA uses this option if you do not explicitly specify any other.

none: No transaction isolation.

read-uncommitted: Dirty reads, non-repeatable reads, and phantom reads
can occur.

read-committed: Dirty reads are prevented; non-repeatable reads and
phantom reads can occur.

repeatable-read: Dirty reads and non-repeatable reads are prevented;
phantom reads can occur.

serializable: Dirty reads, non-repeatable reads, and phantom reads are
all prevented.
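For example, to request serializable isolation in persistence.xml:

```xml
<property name="openjpa.jdbc.TransactionIsolation" value="serializable"/>
```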

6.
Setting the SQL Join Syntax

Object queries often involve using SQL joins behind the scenes. You can
configure OpenJPA to use SQL 92-style join syntax, in which joins are
placed in the SQL FROM clause; the traditional join syntax, in which join
criteria are part of the WHERE clause; or a database-specific join syntax
mandated by the
DBDictionary. OpenJPA only supports outer joins when using
SQL 92 syntax or a database-specific syntax with outer join support.

The
openjpa.jdbc.DBDictionary plugin accepts the
JoinSyntax property to set the system's default syntax. The available
values are:

sql92: Use SQL 92-style join syntax, with joins in the FROM clause.

traditional: Use traditional join syntax, with join criteria in the
WHERE clause.

database: Use the join syntax preferred by the database.
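The property is passed to the dictionary as a plugin argument; for example:

```xml
<property name="openjpa.jdbc.DBDictionary" value="JoinSyntax=traditional"/>
```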

7.
Accessing Multiple Databases

Through the properties we've covered thus far, you can configure each
EntityManagerFactory to access a different
database. If your application accesses multiple databases, we recommend that you
maintain a separate persistence unit for each one. This will allow you to easily
load the appropriate resource for each database at runtime, and to give the
correct configuration file to OpenJPA's command-line tools during development.

8.
Configuring the Use of JDBC Connections

In its default configuration, OpenJPA obtains JDBC connections on an as-needed
basis. OpenJPA EntityManagers do not retain a connection
to the database unless they are in a datastore transaction or there are open
Query results that are using a live JDBC result set. At
all other times, including during optimistic transactions,
EntityManagers request a connection for each query, then
immediately release the connection back to the pool.

In some cases, it may be more efficient to retain connections for longer periods
of time. You can configure OpenJPA's use of JDBC connections through the