EclipseLink provides a diverse set of features to measure and optimize application performance. You can enable or disable most features in the descriptors or session, making any resulting performance gains global.

Introduction to Optimization

Performance considerations are present at every step of the development cycle. Although this implies an awareness of performance issues in your design and implementation, it does not mean that you should expect to achieve the best possible performance in your first pass.

For example, if optimization complicates the design, leave it until the final development phase. You should still plan for these optimizations from your first iteration, to make them easier to integrate later.

The most important concept associated with tuning your EclipseLink application is the idea of an iterative approach. The most effective way to tune your application is to do the following:

To identify the changes that improve your application performance, modify only one or two components at a time. You should also tune your application in a nonproduction environment before you deploy the application.

Identifying Sources of Application Performance Problems

For various parts of an EclipseLink-enabled application, this section describes the performance problems most commonly encountered and provides suggestions for improving performance. Areas of the application where performance problems could occur include the following:

Measuring EclipseLink Performance with the EclipseLink Profiler

The most important challenge to performance tuning is knowing what to optimize. To improve the performance of your application, identify the areas of your application that do not operate at peak efficiency. The EclipseLink performance profiler helps you identify performance problems by logging performance statistics for every executed query in a given session.

Note: You should also consider using general performance profilers, such as JDeveloper or JProbe, to analyze performance problems. These tools can provide additional detail that may be required to properly diagnose a problem.

The EclipseLink performance profiler logs the following information to the EclipseLink log file (for general information about EclipseLink logging, see Logging):

query class;

domain class;

total time, total execution time of the query, including any nested queries (in milliseconds);

local time, execution time of the query, excluding any nested queries (in milliseconds);

How to Configure the EclipseLink Performance Profiler

The EclipseLink performance profiler is an instance of the org.eclipse.persistence.tools.profiler.PerformanceProfiler class. It provides the following public API:

logProfile – enables the profiler;

dontLogProfile – disables the profiler;

logProfileSummary – organizes the profiler log into a summary of all the individual operation profiles including operation statistics like the shortest time of all the operations that were profiled, the total time of all the operations, the number of objects returned by profiled queries, and the total time that was spent in each kind of operation that was profiled;

logProfileSummaryByQuery – organizes the profiler log into a summary of all the individual operation profiles by query;

logProfileSummaryByClass – organizes the profiler log into a summary of all the individual operation profiles by class.
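A minimal sketch of enabling the profiler on a session might look like the following (the helper class and method names are illustrative; PerformanceProfiler and Session.setProfiler are the API described above):

```java
import org.eclipse.persistence.sessions.Session;
import org.eclipse.persistence.tools.profiler.PerformanceProfiler;

public class ProfilerSetup {
    // Enable the EclipseLink performance profiler on an existing session
    // and organize its log as a per-query summary.
    public static void enableProfiling(Session session) {
        PerformanceProfiler profiler = new PerformanceProfiler();
        profiler.logProfileSummaryByQuery();
        session.setProfiler(profiler);
    }
}
```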

How to Access the EclipseLink Profiler Results

The simplest way to view EclipseLink profiler results is to read the EclipseLink log files with a text editor. For general information about EclipseLink logging, such as logging file location, see Logging.

Alternatively, you can use the graphical performance profiler that the EclipseLink Web client provides. For more information, refer to the Web client online Help and README files.

Identifying General Performance Optimization

In general, avoid overriding EclipseLink default behavior unless your application requires it. Some EclipseLink defaults are suitable for a development environment; you should change these defaults to suit your production environment (see Optimizing for a Production Environment).

Use the Workbench rather than manual coding. It is not only easy to use: the default configuration it exports to deployment XML (and the code it generates, if required) represents best practices optimized for most applications.

Optimizing for a Production Environment

Some EclipseLink defaults are suitable for a development environment but we recommend that you change these to suit your production environment for optimal performance. These defaults include:

Optimizing Schema

Optimization is an important consideration when you design your database schema and object model. Most performance issues occur when the object model or database schema is too complex, which can make the database slow and difficult to query. This is most likely to happen if you derive your database schema directly from a complex object model.

To optimize performance, design the object model and database schema together. However, allow each model to be designed optimally: do not require a direct one-to-one correlation between the two.

The nature of this application dictates that you always look up members and addresses together. As a result, querying a member based on address information requires a database join, and reading a member and its address requires two read statements. Writing a member requires two write statements. This adds unnecessary complexity to the system and results in poor performance.

A better solution is to combine the MEMBER and ADDRESS tables into a single table, and change the one-to-one relationship to an aggregate relationship. This lets you read all information with a single operation, and doubles the update and insert speed, because only a single row in one table requires modifications.

Optimized Schema (Aggregation of Two Tables Case)

Elements

Details

Classes

Member, Address

Tables

MEMBER

Relationships

address - Embedded (aggregate) - Address
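In JPA terms, the aggregate mapping above can be sketched as an embedded object. Class and field names here are illustrative (javax.persistence annotations shown; newer releases use the jakarta.persistence package):

```java
import javax.persistence.Embeddable;
import javax.persistence.Embedded;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "MEMBER") // a single table holds both member and address data
public class Member {
    @Id
    private long id;

    @Embedded // aggregate mapping: Address columns live in the MEMBER table
    private Address address;
}

@Embeddable
class Address {
    private String city;
    private String street;
}
```

Reading or writing a Member now touches a single row, which is the source of the doubled update and insert speed described above.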

Schema Case 2: Splitting One Table Into Many

To improve overall performance of the system, split large tables into two or more smaller tables. This significantly reduces the amount of data traffic required to query the database.

For example, the system illustrated in the Original Schema (Splitting One Table into Many Case) table assigns employees to projects within an organization. The most common operation reads a set of employees and projects, assigns employees to projects, and updates the employees. The employee's address or job classification is also occasionally used to determine the project on which the employee is placed.

When you read a large volume of employee records from the database, you must also read their aggregate parts. Because of this, the system suffers from general read performance issues. To resolve this, break the EMPLOYEE table into the EMPLOYEE, ADDRESS, PHONE, EMAIL, and JOB tables, as illustrated in the Optimized Schema (Splitting One Table into Many Case) table.

Because you usually read only the employee information, splitting the table reduces the amount of data transferred from the database to the client. This improves your read performance by reducing the amount of data traffic by 25 percent.

Schema Case 3: Collapsed Hierarchy

A common mistake when you transform an object-oriented design into a relational model is to build a large hierarchy of tables on the database. This makes querying difficult, because queries against this type of design can require a large number of joins. It is usually a good idea to collapse some of the levels in your inheritance hierarchy into a single table.

The system suffers from complexity issues that hinder system development and performance. Nearly all queries against the database require large, resource-intensive joins. If you collapse the three-level table hierarchy into a single table, as illustrated in the Optimized Schema (Collapsed Hierarchy Case) table, you substantially reduce system complexity. You eliminate joins from the system, and simplify queries.

Optimized Schema (Collapsed Hierarchy Case)

Elements

Details

Classes

Tables

Person

none

Employee

EMPLOYEE

SalesRep

EMPLOYEE

Staff

EMPLOYEE

Client

CLIENT

Contact

CLIENT

Schema Case 4: Choosing One Out of Many

In a one-to-many relationship, a single source object has a collection of other objects. In some cases, the source object frequently requires one particular object in the collection, but requires the other objects only infrequently. You can reduce the size of the returned result set in this type of case by adding an instance variable for the frequently required object. This lets you access the object without instantiating the other objects in the collection.

The Original Schema (Choosing One out of Many Case) table represents a system by which an international shipping company tracks the location of packages in transit. When a package moves from one location to another, the system creates a new location entry for the package in the database. The most common query against any given package is for its current location.

Original Schema (Choosing One out of Many Case)

Elements

Details

Instance Variable

Mapping

Target

Title

ACME Shipping Package Location Tracking system

Classes

Package, Location

Tables

PACKAGE, LOCATION

Relationships

Package

locations

OneToMany

Location

A package in this system can accumulate several location values in its LOCATION collection as it travels to its destination. Reading all locations from the database is resource intensive, especially when the only location of interest is the current location.

To resolve this type of problem, add a specific instance variable that represents the current location. You then add a one-to-one mapping for the instance variable, and use the instance variable to query for the current location. As illustrated in the Optimized Schema (Choosing One out of Many Case) table, because you can now query for the current location without reading all locations associated with the package, the performance of the system improves dramatically.

Optimized Schema (Choosing One out of Many Case)

Elements

Details

Instance Variable

Mapping

Target

Classes

Package, Location

Tables

PACKAGE, LOCATION

Relationships

Package

locations

OneToMany

Location

Package

currentLocation

OneToOne

Location
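A sketch of the optimized object model, with the extra instance variable mapped one-to-one. Field names follow the table above; the annotations and class bodies are illustrative:

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.OneToOne;

@Entity
public class Package {
    @Id
    private long id;

    @OneToMany // the full location history, read only when actually needed
    private List<Location> locations;

    @OneToOne // the frequently queried value gets its own mapping
    private Location currentLocation;
}

@Entity
class Location {
    @Id
    private long id;
}
```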

Optimizing Mappings and Descriptors

Always use indirection (lazy loading). It is not only critical in optimizing database access, but also allows EclipseLink to make several other optimizations including optimizing its cache access and unit of work processing. See Configuring Indirection (Lazy Loading).

Avoid expensive initialization in the default constructor that EclipseLink uses to instantiate objects. Instead, use lazy initialization or use an EclipseLink instantiation policy (see Configuring Instantiation Policy) to configure the descriptor to use a different constructor.

We recommend you increase the size of your session read and write connection pools to the desired number of concurrent threads (for example, 50). You configure this in EclipseLink when using an internal connection pool or in the data source when using an external connection pool.

Optimizing Cache

Cache coordination (see Cache Coordination) is one way to allow multiple, possibly distributed, instances of a session to broadcast object changes among each other so that each session's cache can be kept up-to-date.

If you do use cache coordination, use JMS for cache coordination rather than RMI. JMS is more robust, easier to configure, and runs asynchronously. If you require synchronous cache coordination, use RMI.

You can configure a descriptor to control when the EclipseLink runtime will refresh the session cache when an instance of this object type is queried (see Configuring Cache Refreshing). We do not recommend the use of Always Refresh or Disable Cache Hits.

Using Always Refresh may result in refreshing the cache on queries when not required or desired. As an alternative, consider configuring cache refresh on a query-by-query basis (see How to Refresh the Cache).

Using Disable Cache Hits instructs EclipseLink to bypass the cache for object read queries based on primary key. This results in a database round trip every time an object read query based on primary key is executed on this object type, negating the performance advantage of the cache. When used in conjunction with Always Refresh, this option ensures that all queries go to the database. This can have a significant impact on performance. These options should only be used in specialized circumstances.

Optimizing Data Access

Depending on the type of data source your application accesses, EclipseLink offers a variety of Login options that you can use to tune the performance of low-level data reads and writes.

You can use several techniques to improve data access performance for your application. This section discusses some of the more common approaches, including the following:

JDBC driver properties that are not supported directly by Workbench or EclipseLink API can still be configured as generic JDBC properties that EclipseLink passes to the JDBC driver.

For example, some JDBC drivers, such as Sybase JConnect, perform a database round trip to test whether or not a connection is closed: that is, calling the JDBC driver method isClosed results in a stored procedure call or SQL select. This database round-trip can cause a significant performance reduction. To avoid this, you can disable this behavior: for Sybase JConnect, you can set property name CLOSED_TEST to value INTERNAL.
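With a native-API DatabaseLogin, the property might be passed through as follows (a sketch; the helper class is illustrative, and you should consult your driver documentation for the exact property name and value):

```java
import org.eclipse.persistence.sessions.DatabaseLogin;

public class DriverPropertySetup {
    // Pass a driver-specific property through EclipseLink to the JDBC driver.
    // For Sybase JConnect, CLOSED_TEST=INTERNAL avoids the database round trip
    // that isClosed() otherwise triggers.
    public static void disableClosedTest(DatabaseLogin login) {
        login.setProperty("CLOSED_TEST", "INTERNAL");
    }
}
```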

For more information about configuring general JDBC driver properties from within your EclipseLink application, see Configuring Properties.

How to Optimize Data Format

By default, EclipseLink optimizes data access by accessing the data from JDBC in the format the application requires. For example, EclipseLink retrieves long data types from JDBC instead of having the driver return a BigDecimal that EclipseLink would then have to convert into a long.

Some older JDBC drivers do not perform data conversion correctly and conflict with this optimization. In this case, you can disable this optimization (see Configuring Advanced Options).

How to Use Batch Writing for Optimization

Batch writing can improve database performance by sending groups of INSERT, UPDATE, and DELETE statements to the database in a single transaction, rather than individually.

When used without parameterized SQL, this is known as dynamic batch writing.

In POJO applications, you can use the setMaxBatchWritingSize method of the Login interface. The meaning of this value depends on whether or not you are using parameterized SQL:

If you are using parameterized SQL (you configure your Login by calling its bindAllParameters method), the maximum batch writing size is the number of statements to batch, with 100 being the default.

If you are using dynamic SQL, the maximum batch writing size is the size of the SQL string buffer in characters, with 32000 being the default.

By default, EclipseLink does not enable batch writing because not all databases and JDBC drivers support it. We recommend that you enable batch writing for selected databases and JDBC drivers that support this option. If your JDBC driver does not support batch writing, use the batch writing capabilities of EclipseLink, known as native batch writing (see Configuring JDBC Options).
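Enabling batch writing on a POJO application's login can be sketched as follows (the helper class is illustrative; the Login methods are those named above):

```java
import org.eclipse.persistence.sessions.DatabaseLogin;

public class BatchWritingSetup {
    // Enable batch writing with parameterized SQL on the session login.
    public static void enableBatchWriting(DatabaseLogin login) {
        login.bindAllParameters();         // use parameterized SQL
        login.useBatchWriting();           // group statements into batches
        login.setMaxBatchWritingSize(100); // statements per batch (the default)
    }
}
```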

How to Use Parameterized SQL (Parameter Binding) and Prepared Statement Caching for Optimization

Using parameterized SQL, you can keep the overall length of an SQL query from exceeding the statement length limit that your JDBC driver or database server imposes.

Using parameterized SQL and prepared statement caching, you can improve performance by reducing the number of times the database SQL engine parses and prepares SQL for a frequently called query.

By default, EclipseLink enables parameterized SQL but not prepared statement caching. We recommend that you enable statement caching either in EclipseLink when using an internal connection pool or in the data source when using an external connection pool and choose a statement cache size appropriate for your application.
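For an internal connection pool, this can be sketched as follows (the helper class and the cache size shown are illustrative; choose a size appropriate for your application's query mix):

```java
import org.eclipse.persistence.sessions.DatabaseLogin;

public class StatementCachingSetup {
    // Enable parameterized SQL and prepared statement caching.
    public static void enableStatementCaching(DatabaseLogin login) {
        login.bindAllParameters();        // parameterized SQL (the default)
        login.cacheAllStatements();       // reuse prepared statements
        login.setStatementCacheSize(100); // illustrative size
    }
}
```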

Not all JDBC drivers support all JDBC binding options (see Configuring JDBC Options). Selecting a combination of options may result in different behavior from one driver to another. Before selecting JDBC options, consult your JDBC driver documentation. When choosing binding options, consider the following approach:

Try binding all parameters with all other binding options disabled.

If this fails to bind some large parameters, consider enabling one of the following options, depending on the parameter's data type and the binding options that your JDBC driver supports:

If this fails to bind some large parameters, try enabling streams for binding. Typically, configuring string or byte array binding will invoke streams for binding. If not, explicitly configuring streams for binding may help.

For Java EE applications that use EclipseLink external connection pools, you must configure parameterized SQL in EclipseLink, but you cannot configure prepared statement caching in EclipseLink. In this case, you must configure prepared statement caching in the application server connection pool. For example, in OC4J, if you configure your data-source.xml file with a managed data-source (where connection-driver is oracle.jdbc.OracleDriver, and class is oracle.j2ee.sql.DriverManagerDataSource), you can configure a non-zero num-cached-statements that enables JDBC statement caching and defines the maximum number of statements cached.

For applications that use EclipseLink internal connection pools, you can configure parameterized SQL and prepared statement caching.

You can configure parameterized SQL and prepared statement caching at the following levels:

session database login level – applies to all queries and provides additional parameter binding API to alleviate the limit imposed by some drivers on SQL statement size. We recommend that you use this approach. For more information, see the following:

How to Use Named Queries for Optimization

Whenever possible, use named queries in your application. Named queries help you avoid duplication, are easy to maintain and reuse, and easily add complex query behavior to the application. Using named queries also allows for the query to be prepared once, and for the SQL generation to be cached.
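A sketch of a JPA named query; the entity, query, and parameter names are illustrative (javax.persistence annotations shown):

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

@Entity
@NamedQuery(name = "Employee.findByLastName",
            query = "SELECT e FROM Employee e WHERE e.lastName = :lastName")
class Employee {
    @Id long id;
    String lastName;
}

class NamedQueryExample {
    // The named query is prepared once; its SQL generation is cached and
    // reused on every subsequent execution.
    static List findByLastName(EntityManager em, String lastName) {
        return em.createNamedQuery("Employee.findByLastName")
                 .setParameter("lastName", lastName)
                 .getResultList();
    }
}
```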

How to Use Batch and Join Reading for Optimization

To optimize database read operations, EclipseLink supports both batch and join reading. When you use these techniques, you dramatically decrease the number of times you access the database during a read operation, especially when your result set contains a large number of objects.

How to Use Read-Only Queries for Optimization

You can configure an object-level read query as read-only, as this shows. When you execute such a query in the context of a UnitOfWork (or EclipseLink JPA persistence provider), EclipseLink returns a read-only, non-registered object. You can improve performance by querying read-only data in this way because the read-only objects need not be registered or checked for changes.
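A sketch with the native API, assuming the Employee class used throughout this section:

```java
import java.util.List;
import org.eclipse.persistence.queries.ReadAllQuery;
import org.eclipse.persistence.sessions.Session;

class ReadOnlyQueryExample {
    // The returned objects are not registered in a unit of work and are not
    // checked for changes, avoiding that overhead for display-only data.
    static List readEmployeesReadOnly(Session session) {
        ReadAllQuery query = new ReadAllQuery(Employee.class);
        query.setIsReadOnly(true);
        return (List) session.executeQuery(query);
    }
}
```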

How to Use JDBC Fetch Size for Optimization

The JDBC fetch size gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.

For large queries that return a large number of objects, you can configure the row fetch size used in the query to improve performance by reducing the number of database hits required to satisfy the selection criteria.

Most JDBC drivers default to a fetch size of 10, so if you are reading 1000 objects, increasing the fetch size to 256 can significantly reduce the time required to fetch the query's results. The optimal fetch size is not always obvious. Usually, a fetch size of one half or one quarter of the total expected result size is optimal. If you are unsure of the result set size, note that setting a fetch size that is too large or too small can decrease performance.

Set the query fetch size with ReadQuery method setFetchSize, as the JDBC Driver Fetch Size example shows. Alternatively, you can use ReadQuery method setMaxRows to set the limit for the maximum number of rows that any ResultSet can contain.
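Such a query can be sketched as follows, using a cursored stream with a fetch size of 50 (the Employee class and helper are assumptions for illustration):

```java
import org.eclipse.persistence.queries.CursoredStream;
import org.eclipse.persistence.queries.ReadAllQuery;
import org.eclipse.persistence.sessions.Session;

class FetchSizeExample {
    // Read employees through a cursored stream, fetching 50 rows per
    // database round trip.
    static CursoredStream readWithFetchSize(Session session) {
        ReadAllQuery query = new ReadAllQuery(Employee.class);
        query.useCursoredStream();
        query.setFetchSize(50);
        return (CursoredStream) session.executeQuery(query);
    }
}
```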

In this example, when you execute the query, the JDBC driver retrieves the first 50 rows from the database (or all rows, if fewer than 50 rows satisfy the selection criteria). As you iterate over the first 50 rows, each time you call cursor.next(), the JDBC driver returns a row from local memory; it does not need to retrieve the row from the database. When you try to access the fifty-first row (assuming more than 50 rows satisfy the selection criteria), the JDBC driver again goes to the database and retrieves another 50 rows. In this way, 100 rows are returned with only two database hits.

If you specify a value of zero (default; means the fetch size is not set), then the hint is ignored and the JDBC driver's default is used.

How to Use Cursored Streams and Scrollable Cursors for Optimization

You can configure a query to retrieve data from the database using a cursored Java stream or scrollable cursor. This lets you view a result set in manageable increments rather than as a complete collection. This is useful when you have a large result set. You can further tune performance by configuring the JDBC driver fetch size used (see How to Use JDBC Fetch Size for Optimization).

How to Use Result Set Pagination for Optimization

As this figure shows, using ReadQuery methods setMaxRows(maxRows) and setFirstResult(firstResult), you can configure a query to retrieve a result set in pages, that is, a partial result as a List of pageSize (or less) results.

Using Result Set Pagination
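The paging arithmetic can be sketched in plain Java (the PageWindow class is illustrative; in a real query you would pass the two values to ReadQuery.setFirstResult and ReadQuery.setMaxRows before each execution):

```java
// Index arithmetic for result set pagination: page n (0-based) of pageSize
// results spans firstResult = n * pageSize through maxRows = firstResult + pageSize.
public class PageWindow {
    public final int firstResult; // pass to query.setFirstResult(firstResult)
    public final int maxRows;     // pass to query.setMaxRows(maxRows)

    public PageWindow(int page, int pageSize) {
        this.firstResult = page * pageSize;
        this.maxRows = this.firstResult + pageSize;
    }
}
```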

In this example, for the first query invocation, pageSize=3, maxRows=pageSize, and firstResult=0. This returns a List of results 00 through 02.

For each subsequent query invocation, you increment maxRows=maxRows+pageSize and firstResult=firstResult+pageSize. This returns a new List for each page of results 03 through 05, 06 through 08, and so on.

Typically, you use this approach when you do not necessarily need to process the entire result set. For example, when a user wishes to scan the result set a page at a time looking for a particular result and may abandon the query after the desired record is found.

The advantage of this approach over cursors is that it does not require any state or live connection on the server; you only need to store the firstResult index on the client. This makes it useful for paging through a Web result.

Report query – similar to partial object reading, but returns only the data instead of the objects. It supports complex reporting functions, such as aggregation and group-by functions, and lets you compute complex results on the database instead of reading the objects into the application and computing the results locally. For more information, see Report Query.

Soft identity map – similar to the weak identity map, except that the map uses soft references instead of weak references. This method allows full garbage collection and provides full caching and guaranteed identity. It allows for optimal caching of the objects without the overhead of a sub-cache, while still allowing the JVM to garbage collect the objects if memory is low. For more information, see Soft Identity Map.

Reading Case 1: Displaying Names in a List

An application may ask the user to choose an element from a list. Because the list displays only a subset of the information contained in the objects, it is not necessary to query for all information for objects from the database.

EclipseLink features that optimize these types of operations include the following:

These features let you query only the information required to display the list. The user can then select an object from the list.

No Optimization

JPA

/* Read all the employees from the database, ask the user to choose one and return it. */
/* This must read in all the information for all the employees */
ListBox list;
// Fetch data from database and add to list box
List employees = entityManager.createQuery("Select e from Employee e").getResultList();
list.addAll(employees);
// Display list box
....
// Get selected employee from list
Employee selectedEmployee = (Employee) list.getSelectedItem();
return selectedEmployee;

Native API

/* Read all the employees from the database, ask the user to choose one and return it. */
/* This must read in all the information for all the employees */
ListBox list;
// Fetch data from database and add to list box
List employees = session.readAllObjects(Employee.class);
list.addAll(employees);
// Display list box
....
// Get selected employee from list
Employee selectedEmployee = (Employee) list.getSelectedItem();
return selectedEmployee;

Partial Object Reading

Partial object reading is a query designed to extract only the required information from a selected record in a database, rather than all the information the record contains. Because partial object reading does not fully populate objects, you can neither cache nor edit partially read objects.

In this example, the query builds complete employee objects, even though the list displays only employee last names. With no optimization, the query reads all the employee data.

The Optimization Through Partial Object Reading example demonstrates the use of partial object reading. It reads only the last name and primary key for the employee data. This reduces the amount of data read from the database.

Optimization Through Partial Object Reading

JPA

/* Read all the employees from the database, ask the user to choose one and return it. */
/* This uses partial object reading to read just the last names of the employees. */
ListBox list;
// Fetch data from database and add to list box
List employees = entityManager.createQuery("Select new Employee(e.id, e.lastName) from Employee e").getResultList();
list.addAll(employees);
// Display list box
....
// Get selected employee from list
Employee selectedEmployee = (Employee) entityManager.find(Employee.class, ((Employee) list.getSelectedItem()).getId());
return selectedEmployee;

Native API

/* Read all the employees from the database, ask the user to choose one and return it. */
/* This uses partial object reading to read just the last names of the employees. */
/* Since EclipseLink automatically includes the primary key of the object, the full object can easily be read for editing */
ListBox list;
// Fetch data from database and add to list box
ReadAllQuery query = new ReadAllQuery(Employee.class);
query.addPartialAttribute("lastName");
// The next line avoids a query exception
query.dontMaintainCache();
List employees = (List) session.executeQuery(query);
list.addAll(employees);
// Display list box
....
// Get selected employee from list
Employee selectedEmployee = (Employee) session.readObject(list.getSelectedItem());
return selectedEmployee;

Report Query

Report query lets you retrieve data from a set of objects and their related objects. Report query supports database reporting functions and features.

The Optimization Through Report Query example demonstrates the use of report query to read only the last name of the employees. This reduces the amount of data read from the database compared to the code in the No Optimization example, and avoids instantiating employee instances.

Optimization Through Report Query

JPA

/* Read all the employees from the database, ask the user to choose one and return it. */
/* This uses a report query to read just the last names of the employees. */
ListBox list;
// Fetch data from database and add to list box
// This query returns a List of Object[] data values
List rows = entityManager.createQuery("Select e.id, e.lastName from Employee e").getResultList();
list.addAll(rows);
// Display list box
....
// Get selected employee from list
Object[] selectedItem = (Object[]) list.getSelectedItem();
Employee selectedEmployee = (Employee) entityManager.find(Employee.class, selectedItem[0]);
return selectedEmployee;

Native API

/* Read all the employees from the database, ask the user to choose one and return it. */
/* The report query is used to read just the last name of the employees. */
/* Then the primary key stored in the report query result is used to read the real object */
ListBox list;
// Fetch data from database and add to list box
ExpressionBuilder builder = new ExpressionBuilder();
ReportQuery query = new ReportQuery(Employee.class, builder);
query.addAttribute("lastName");
query.retrievePrimaryKeys();
List reportRows = (List) session.executeQuery(query);
list.addAll(reportRows);
// Display list box
....
// Get selected employee from list
ReportQueryResult result = (ReportQueryResult) list.getSelectedItem();
Employee selectedEmployee = (Employee) result.readObject(Employee.class, session);
return selectedEmployee;

Fetch Groups

Fetch groups are similar to partial object reading, but do allow caching of the objects read. For objects with many attributes or with reference attributes to complex graphs (or both), you can define a fetch group that determines which attributes are returned when an object is read. Because EclipseLink automatically executes additional queries when the get method is called for attributes not in the fetch group, ensure that the unfetched data is not required: refetching data can become a performance issue.

JPA

// Use fetch group at query level
ReadAllQuery query = new ReadAllQuery(Employee.class);
FetchGroup group = new FetchGroup("nameOnly");
group.addAttribute("firstName");
group.addAttribute("lastName");
query.setFetchGroup(group);
JpaQuery jpaQuery = (JpaQuery) entityManager.createQuery("Select e from Employee e");
jpaQuery.setDatabaseQuery(query);
List employees = jpaQuery.getResultList();
/* Only Employee attributes firstName and lastName are fetched.
If you call the Employee get method for any other attribute, EclipseLink executes
another query to retrieve all unfetched attribute values. Thereafter,
calling that get method will return the value directly from the object */

Native API

// Use fetch group at query level
ReadAllQuery query = new ReadAllQuery(Employee.class);
FetchGroup group = new FetchGroup("nameOnly");
group.addAttribute("firstName");
group.addAttribute("lastName");
query.setFetchGroup(group);
List employees = (List) session.executeQuery(query);
/* Only Employee attributes firstName and lastName are fetched.
If you call the Employee get method for any other attribute, EclipseLink executes
another query to retrieve all unfetched attribute values. Thereafter,
calling that get method will return the value directly from the object */

Reading Case 2: Batch Reading Objects

The way your application reads data from the database affects performance. For example, reading a collection of rows from the database is significantly faster than reading each row individually.

A common performance challenge is to read a collection of objects that have a one-to-one reference to another object. This typically requires one read operation to read in the source rows, and one call for each target row in the one-to-one relationship.

Optimization Through Joining

/* Read all the employees; collect their addresses' cities. Although the code
is almost identical, because joining optimization is used it takes only 1 query */
// Read all the employees from the database using joining.
// This requires 1 SQL call
ReadAllQuery query = new ReadAllQuery(Employee.class);
ExpressionBuilder builder = query.getExpressionBuilder();
query.setSelectionCriteria(builder.get("lastName").equal("Smith"));
query.addJoinedAttribute("address");
Vector employees = (Vector) session.executeQuery(query);
// SQL: Select E.*, A.* from Employee E, Address A where E.l_name = 'Smith'
// and E.address_id = A.address_id
// Iterate over employees and get their addresses.
// The previous SQL already read all the addresses, so no SQL is required
// ("enum" is a reserved word in current Java, so the variable is renamed)
Enumeration employeeEnum = employees.elements();
Vector cities = new Vector();
while (employeeEnum.hasMoreElements()) {
    Employee employee = (Employee) employeeEnum.nextElement();
    cities.addElement(employee.getAddress().getCity());
}

Optimization Through Batch Reading

/* Read all the employees and collect their addresses' cities. Although the code
is almost identical, because batch reading optimization is used it takes only
two queries. */
// Read all the employees from the database using batch reading.
// This requires 1 SQL call; note that only the employees are read.
ReadAllQuery query = new ReadAllQuery(Employee.class);
ExpressionBuilder builder = query.getExpressionBuilder();
query.setSelectionCriteria(builder.get("lastName").equal("Smith"));
query.addBatchReadAttribute("address");
Vector employees = (Vector)session.executeQuery(query);
// SQL: Select * from Employee where l_name = 'Smith'

// Iterate over the employees and get their addresses.
// The first address accessed causes all the addresses to be read in a single SQL call.
Enumeration employeeEnumeration = employees.elements();
Vector cities = new Vector();
while (employeeEnumeration.hasMoreElements()) {
    Employee employee = (Employee)employeeEnumeration.nextElement();
    cities.addElement(employee.getAddress().getCity());
    // SQL (executed once, on first access): Select distinct A.* from Employee E,
    // Address A where E.l_name = 'Smith' and E.address_id = A.address_id
}

Joins offer a significant performance increase under most circumstances. Batch reading offers a further performance advantage in that it allows for delayed loading through value holders, and has much better performance where the target objects are shared.

Batch reading and joining are available for one-to-one, one-to-many, many-to-many, direct collection, direct map and aggregate collection mappings. Note that one-to-many joining will return a large amount of duplicate data and so is normally less efficient than batch reading.

Reading Case 3: Using Complex Custom SQL Queries

EclipseLink provides a high-level query mechanism. However, if your application requires a complex query, a direct SQL or stored procedure call may be the best solution.
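As a sketch (the entity, table, and column names here are illustrative, not from the original), custom SQL can be supplied to a native API query with setSQLString, or issued through a JPA native query:

```java
// Illustrative: supply custom SQL instead of an expression-based query.
// The table and column names are assumptions for the example.
ReadAllQuery query = new ReadAllQuery(Employee.class);
query.setSQLString(
    "SELECT E.* FROM EMPLOYEE E WHERE E.SALARY > " +
    "(SELECT AVG(S.SALARY) FROM EMPLOYEE S WHERE S.DEPT_ID = E.DEPT_ID)");
List employees = (List)session.executeQuery(query);

// JPA equivalent using a native query:
List<Employee> results = entityManager
    .createNativeQuery("SELECT * FROM EMPLOYEE WHERE SALARY > 100000", Employee.class)
    .getResultList();
```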

Reading Case 4: Using View Objects

Some application operations require information from several objects rather than from just one. This can be difficult to implement, and resource-intensive. The No Optimization example illustrates unoptimized code that reads information from several objects.

No Optimization

/* Gather the information to report on an employee and return a summary of the
information. In this situation, a hash table is used to hold the report
information. Notice that this reads a large number of objects from the database,
but uses very little of the information contained in them. This may take
5 queries and read in a large number of objects. */
public Hashtable reportOnEmployee(String employeeName) {
    Vector projects, associations;
    Hashtable report = new Hashtable();
    // Retrieve the employee from the database.
    Employee employee = (Employee)session.readObject(Employee.class,
        new ExpressionBuilder().get("lastName").equal(employeeName));
    // Get all the projects affiliated with the employee.
    projects = session.readAllObjects(Project.class,
        "SELECT P.* FROM PROJECT P, EMPLOYEE E " +
        "WHERE P.MEMBER_ID = E.EMP_ID AND E.L_NAME = '" + employeeName + "'");
    // Get all the associations affiliated with the employee.
    associations = session.readAllObjects(Association.class,
        "SELECT A.* FROM ASSOC A, EMPLOYEE E " +
        "WHERE A.MEMBER_ID = E.EMP_ID AND E.L_NAME = '" + employeeName + "'");
    report.put("firstName", employee.getFirstName());
    report.put("lastName", employee.getLastName());
    report.put("manager", employee.getManager());
    report.put("city", employee.getAddress().getCity());
    report.put("projects", projects);
    report.put("associations", associations);
    return report;
}

To improve application performance in these situations, define a new read-only object to encapsulate this information, and map it to a view on the database. To set the object to be read-only, configure its descriptor as read-only (see Configuring Read-Only Descriptors).
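For example (a hypothetical EmployeeReport class and view name; both are illustrative), the view object can be marked read-only in JPA with the EclipseLink @ReadOnly annotation:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import org.eclipse.persistence.annotations.ReadOnly;

// Hypothetical view object mapped to a database view that joins the
// employee, project, and association data needed for the report.
@Entity
@ReadOnly
@Table(name = "EMPLOYEE_REPORT_VIEW")
public class EmployeeReport {
    @Id
    private long id;
    private String firstName;
    private String lastName;
    private String city;
    // getters omitted for brevity
}
```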

Reading Case 5: Inheritance Subclass Outer-Joining

If you have an inheritance hierarchy that spans multiple tables and you frequently query for the root class, consider using outer joining. Outer joining allows a query against an inheritance superclass to read all of its subclasses in a single query instead of multiple queries.

Note that on some databases, the outer joins may be less efficient than the default multiple queries mechanism.
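As a sketch (the root class name is illustrative), subclass outer-joining can be enabled on the root class descriptor's inheritance policy:

```java
// Illustrative: enable outer-joining of subclass tables on the
// root class of the inheritance hierarchy.
ClassDescriptor descriptor = session.getClassDescriptor(Project.class);
descriptor.getInheritancePolicy().setShouldOuterJoinSubclasses(true);
```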

Batch writing lets you group all insert, update, and delete commands from a transaction into a single database call. This dramatically reduces the number of calls to the database (see Batch Writing and Parameterized SQL).

For write-only, or non-cached (isolated) objects, the unit of work isolation level should be set to isolated-always to avoid caching overhead when not caching (see Cache Isolation).

Writing Case: Batch Writes

The most common write performance problem occurs when a batch job inserts a large volume of data into the database. For example, consider a batch job that loads a large amount of data from one database, and then migrates the data into another. The following objects are involved:

Simple individual objects with no relationships.

Objects that use generated sequence numbers as their primary key.

Objects that have an address that also uses a sequence number.

The batch job loads 10,000 employee records from the first database and inserts them into the target database. With no optimization, the batch job reads all the records from the source database, acquires a unit of work from the target database, registers all objects, and commits the unit of work.

No Optimization

JPA

// Read all the employees from the source entity manager.
// This requires 1 SQL call, but will be very memory-intensive
// because 10,000 objects are read.
List<Employee> employees = sourceEntityManager
    .createQuery("Select e from Employee e", Employee.class).getResultList();
// SQL: Select * from Employee

// Begin a transaction and persist the employees.
targetEntityManager.getTransaction().begin();
for (Employee employee : employees) {
    targetEntityManager.persist(employee);
}
targetEntityManager.getTransaction().commit();

Native API

// Read all the employees from the database. This requires 1 SQL call,
// but will be very memory-intensive because 10,000 objects are read.
List employees = sourceSession.readAllObjects(Employee.class);
// SQL: Select * from Employee

// Acquire a unit of work and register the employees.
UnitOfWork uow = targetSession.acquireUnitOfWork();
uow.registerAllObjects(employees);
uow.commit();

This batch job performs poorly, because it requires 60,000 SQL executions. It also reads huge amounts of data into memory, which can raise memory performance issues. EclipseLink offers several optimization features to improve the performance of this batch job.

To improve this operation, do the following:

Use EclipseLink batch read operations and cursor support (see Cursors).

Use batch writing or parameterized batch writing to write to the database (see Batch Writing and Parameterized SQL). If your database does not support batch writing, use parameterized SQL to implement the write query.

Cursors

To optimize the query in the No Optimization example, use a cursored stream to read the Employees from the source database. You can also employ a weak identity map instead of a hard or soft cache identity map in both the source and target sessions.

To address the potential for memory problems, use the releasePrevious method after each read to stream the cursor in groups of 100. Register each batch of 100 employees in a new unit of work and commit them.

Although this does not reduce the amount of executed SQL, it does address potential out-of-memory issues. When your system runs out of memory, the result is performance degradation that increases over time, and excessive disk activity caused by memory swapping on disk.

Batch Writing and Parameterized SQL

Batch writing lets you combine a group of SQL statements into a single statement and send it to the database as a single database execution. This feature reduces the communication time between the application and the server, and substantially improves performance.

You can enable batch writing alone (dynamic batch writing) using the Login method useBatchWriting. If you add batch writing to the No Optimization example, you execute each batch of 100 employees as a single SQL execution. This reduces the number of SQL executions from 20,200 to 300.

You can also enable batch writing together with parameterized SQL (parameterized batch writing) and prepared statement caching. Parameterized SQL avoids the prepare component of SQL execution, which improves write performance. With parameterized batch writing, each batch requires one batched statement for Employee and one for Address; this reduces the number of SQL executions from 20,200 to 400. Although this is more than dynamic batch writing alone, parameterized batch writing also avoids all parsing, so it is much more efficient overall.

Although parameterized SQL avoids the prepare component of SQL execution, it does not reduce the number of executions. Because of this, parameterized SQL alone may not offer as large a gain as batch writing. However, if your database does not support batch writing, parameterized SQL will improve performance. If you add parameterized SQL to the No Optimization example, you still require 20,200 SQL executions, but parameterized SQL reduces the number of SQL PREPAREs to 4.
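As a sketch (the property values shown are the common ones; verify them against your EclipseLink version), these options can be set on the session login through the native API, or as persistence-unit properties in JPA:

```java
// Native API: enable parameterized batch writing on the session's login.
DatabaseLogin login = session.getLogin();
login.useBatchWriting();     // dynamic batch writing
login.bindAllParameters();   // parameterized SQL (avoids the prepare cost)
login.cacheAllStatements();  // reuse prepared statements across executions

// JPA equivalent, as persistence-unit properties in persistence.xml:
// <property name="eclipselink.jdbc.batch-writing" value="JDBC"/>
// <property name="eclipselink.jdbc.cache-statements" value="true"/>
```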

Sequence Number Preallocation

SQL select calls are more resource-intensive than SQL modify calls, so you can realize large performance gains by reducing the number of select calls you issue. The code in the No Optimization example uses the select calls to acquire sequence numbers. You can substantially improve performance if you use sequence number preallocation.

In EclipseLink, you can configure the sequence preallocation size on the login object (the default size is 50). The No Optimization example uses a preallocation size of 1 to demonstrate this point. If you stream the data in batches of 100 as suggested in Cursors, set the sequence preallocation size to 100. Because employees and addresses in the example both use sequence numbering, you further improve performance by letting them share the same sequence. If you set the preallocation size to 200, this reduces the number of SQL executions from 60,000 to 20,200.
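In JPA, the same effect can be achieved with the standard @SequenceGenerator allocationSize; a sketch (the generator and sequence names are assumptions for the example):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

// Illustrative: a generator with allocationSize 200; giving Employee and
// Address the same generator name lets them share one preallocated range.
@Entity
public class Employee {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SHARED_SEQ")
    @SequenceGenerator(name = "SHARED_SEQ", sequenceName = "SHARED_SEQ",
                       allocationSize = 200)
    private long id;
    // ...
}
```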

Multiprocessing

You can use multiple processes or multiple machines to split the batch job into several smaller jobs. In this example, splitting the batch job across threads enables you to synchronize reads from the cursored stream, and use parallel Units of Work on a single machine.

This leads to a performance increase, even if the machine has only a single processor, because it takes advantage of the wait times inherent in SQL execution. While one thread waits for a response from the server, another thread uses the waiting cycles to process its own database operation.

The following example illustrates the optimized code for this example. Note that it does not illustrate multiprocessing.
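The listing below is a sketch of what the optimized batch job could look like, combining cursored streaming with unit of work batches of 100 (the session names and batch size are illustrative; batch writing, parameterized SQL, and sequence preallocation are assumed to be configured as described in the preceding sections):

```java
// Illustrative sketch of the optimized batch job (not the original listing).
// Stream the source employees in groups of 100 and commit each group
// in its own unit of work on the target session.
ReadAllQuery query = new ReadAllQuery(Employee.class);
query.useCursoredStream(100, 100); // page size and initial read size are assumptions
CursoredStream stream = (CursoredStream)sourceSession.executeQuery(query);

while (!stream.atEnd()) {
    UnitOfWork uow = targetSession.acquireUnitOfWork();
    for (int i = 0; i < 100 && !stream.atEnd(); i++) {
        uow.registerObject(stream.read());
    }
    uow.commit();
    stream.releasePrevious(); // release already-processed objects to free memory
}
stream.close();
```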

Enable weaving and change tracking to greatly improve transactional performance. For more information, see Optimizing Using Weaving.

If your performance measurements show that you have a performance problem during unit of work commit, consider using object level or attribute level change tracking, depending on the type of objects involved and how they typically change. For more information, see Unit of Work and Change Policy.

Optimizing Using Weaving

We recommend that you enable weaving to improve performance.

In addition to using weaving to transparently configure lazy loading (indirection) and change tracking, EclipseLink uses weaving to make numerous internal optimizations.


Optimizing the Application Server and Database

Configuring your application server and database correctly can have a big impact on performance and scalability. Ensure that you correctly optimize these key components in addition to tuning your EclipseLink application and persistence.

Ensure that your database has been configured correctly for optimal performance and its expected load.

Optimizing Storage and Retrieval of Binary Data in XML

When working with the Java API for XML Web Services (JAX-WS), you can use XML binary attachments to optimize the storage and retrieval of binary data in XML. Rather than storing the data as a base64-encoded BLOB, you can send it as a Multipurpose Internet Mail Extensions (MIME) attachment and retrieve it on the other end.

To make use of XML binary attachments, register an instance of the org.eclipse.persistence.oxm.attachment.XMLAttachmentMarshaller or XMLAttachmentUnmarshaller interface with the binding framework. During a marshal operation, binary data is handed to the XMLAttachmentMarshaller, which must provide an ID that you can use at a later time to retrieve the data.

The EclipseLink runtime supports MTOM and SwaRef-style attachments.

EclipseLink provides support for the following Java types as attachments:

java.awt.Image

javax.activation.DataHandler

javax.mail.internet.MimeMultipart

javax.xml.transform.Source

byte[]

Byte[]

You can generate schema and mappings based on JAXB 2.0 classes for these types.

You can configure which mappings will be treated as attachments and set the MIME types of those attachments. You perform this configuration using the following JAXB 2.0 annotations:

XmlAttachmentRef–Used on a DataHandler to indicate that this should be mapped to a swaRef in the XML schema. This means it should be treated as a SwaRef attachment.

XmlMimeType–Specifies the expected MIME type of the mapping. When used on a byte array, this value should be passed into the XMLAttachmentMarshaller during a marshal operation. During schema generation, this will result in an expectedContentType attribute being added to the related element.

XmlInlineBinaryData–Indicates that this binary field should always be written inline as base64Binary and never treated as an attachment.
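A sketch of how these annotations might be applied (the class and field names are illustrative):

```java
import javax.activation.DataHandler;
import javax.xml.bind.annotation.XmlAttachmentRef;
import javax.xml.bind.annotation.XmlInlineBinaryData;
import javax.xml.bind.annotation.XmlMimeType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical class showing the three attachment-related annotations.
@XmlRootElement
public class EmployeePhoto {

    // Treated as a SwaRef attachment; maps to swaRef in the schema.
    @XmlAttachmentRef
    public DataHandler photo;

    // Expected MIME type; passed to the XMLAttachmentMarshaller during marshal,
    // and adds an expectedContentType attribute during schema generation.
    @XmlMimeType("image/jpeg")
    public byte[] thumbnail;

    // Always written inline as base64Binary, never as an attachment.
    @XmlInlineBinaryData
    public byte[] signature;
}
```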

How to Use an Attachment Marshaller and Unmarshaller

You implement the EclipseLink XMLAttachmentMarshaller and XMLAttachmentUnmarshaller interfaces to add and retrieve various types of XML attachments. An XMLMarshaller holds an instance of an XMLAttachmentMarshaller, and an XMLUnmarshaller holds an instance of an XMLAttachmentUnmarshaller.

You set and obtain an attachment marshaller and unmarshaller using the following corresponding XMLMarshaller and XMLUnmarshaller methods:
setAttachmentMarshaller(XMLAttachmentMarshaller am)

getAttachmentMarshaller()

setAttachmentUnmarshaller(XMLAttachmentUnmarshaller au)

getAttachmentUnmarshaller()

The following example shows how to use an attachment marshaller in your application.