The default is forward-only. The benefit of the other two values is that the result set will be scrollable, and hence objects will only be read into memory when accessed. So if you have a large result set you should set this to one of the scrollable values.
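On a JDO query this is set as a query extension. Below is a minimal sketch; the addExtension call on a live javax.jdo.Query is shown in a comment, since running it needs a real PersistenceManager, so only the key/value strings are exercised here:

```java
public class ResultSetTypeExtension {
    // DataNucleus extension key controlling the JDBC ResultSet type.
    static final String KEY = "datanucleus.rdbms.query.resultSetType";

    public static void main(String[] args) {
        // On a live javax.jdo.Query you would write something like:
        //   query.addExtension(KEY, "scroll-insensitive");
        // Values: "forward-only" (the default), "scroll-sensitive",
        // "scroll-insensitive".
        System.out.println(KEY + "=scroll-insensitive");
    }
}
```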

When using a "scrollable" result set (see datanucleus.rdbms.query.resultSetType above), by default the query result will cache the rows that have been read. You can control this caching to optimise it for your memory requirements. You can set the query extension datanucleus.query.resultCacheType, which has the following possible values

If you have a large result set you clearly don't want to instantiate all objects, since this would increase the memory footprint of your application. To get the number of results, many JDBC drivers will load all rows of the result set. This is to be avoided, so DataNucleus provides control over the mechanism for getting the size of the results. The persistence property datanucleus.query.resultSizeMethod has a default of last (which means navigate to the last object, hence hitting the JDBC driver problem just described). If you set this to count then it will use a simple "count()" query to get the size.
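For example, set as a persistence property (the placement in a properties file passed to the PersistenceManagerFactory is illustrative):

```
datanucleus.query.resultSizeMethod=count
```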

When a transaction is committed, by default all remaining results for a query are loaded so that the query is usable thereafter. With a large result set you clearly don't want this to happen, so in this case you should set the query extension datanucleus.query.loadResultsAtCommit to false.
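For example (shown here as a persistence property; it can equally be set per-query via addExtension):

```
datanucleus.query.loadResultsAtCommit=false
```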

DataNucleus provides a useful extension allowing control over the ResultSets that are created by queries. You have at your convenience some properties that give you the power to control whether the result set is read-only, whether it is forward-only, the direction of fetching, and so on.

When using the contains method on a collection (or containsKey/containsValue on a map), this will either add an EXISTS subquery (if there is a NOT or an OR present in the query) or will add an INNER JOIN across to the element table. Let's take an example.
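The example itself is not reproduced here, but a representative query of the kind being described might look like this, assuming a hypothetical candidate class A with a collection field bs whose elements are of type B (the JDO calls are shown in comments since they need a live PersistenceManager):

```java
public class ContainsOrderingExample {
    // JDOQL filter: bind the variable via contains() FIRST, then constrain it.
    static final String FILTER = "bs.contains(b1) && b1.name == \"Fred\"";

    public static void main(String[] args) {
        // With a live PersistenceManager "pm" this would be executed as:
        //   Query q = pm.newQuery(A.class);
        //   q.declareVariables("B b1");
        //   q.setFilter(FILTER);
        //   List results = (List) q.execute();
        System.out.println(FILTER);
    }
}
```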

Note that we add the contains clause first, which binds the variable "b1" to the element table, and then add the condition on the variable. The order is important here. If we had instead put the condition on the variable first, we would have had to do a CROSS JOIN to the variable table and then try to repair the situation, changing it to an INNER JOIN if possible. In this case the generated SQL will be like
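The SQL listing itself is not reproduced here, but for a filter of the form bs.contains(b1) && b1.name == "Fred" (hypothetical classes A and B, with hypothetical table/column names) it would have roughly this shape, with the element table inner-joined rather than cross-joined:

```
SELECT A.ID
FROM A
INNER JOIN B ON B.A_ID = A.ID
WHERE B.NAME = 'Fred'
```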

In all situations we aim for the DataNucleus JDOQL implementation to work out the right way of linking a variable into the query, whether this is via a join (INNER, LEFT OUTER) or via a subquery. As you can imagine, it can be complicated to work out the optimum for all situations, so with that in mind we allow (for a limited number of situations) the option of specifying the join type. This is achieved by setting the query extension datanucleus.query.jdoql.{varName}.join to the required type. For 1-1 relations this would be either "INNERJOIN" or "LEFTOUTERJOIN", and for 1-N relations it would be either "INNERJOIN" or "SUBQUERY".
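A minimal sketch of building the extension key for a variable named "b1" (the addExtension call is shown in a comment since it needs a live javax.jdo.Query):

```java
public class JoinTypeExtension {
    // Builds the per-variable DataNucleus extension key, e.g. for variable "b1".
    static String joinKey(String varName) {
        return "datanucleus.query.jdoql." + varName + ".join";
    }

    public static void main(String[] args) {
        // On a live javax.jdo.Query:
        //   query.addExtension(joinKey("b1"), "INNERJOIN");
        System.out.println(joinKey("b1") + "=INNERJOIN");
    }
}
```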

Please, if you find a situation where the optimum join type is not chosen, report it in JIRA for project "NUCRDBMS" as priority "Minor" so that it can be registered for future work.

With a JPQL query running on an RDBMS, the query is compiled into SQL. Here we give a few examples of the SQL that is generated. You can of course try this for yourself by observing the content of the DataNucleus log.

In JPQL you specify a candidate class and its alias (identifier). In addition you can specify joins with their respective aliases. The DataNucleus implementation of JPQL will preserve these aliases in the generated SQL.