MapR-DB administration is performed primarily through the command line (maprcli) or the MapR Control System (MCS). Whether a MapR-DB table stores binary files or JSON documents, the same commands are used, with slightly different parameter options. MapR-DB administration involves tables, columns and column families, and table regions.

MapR-DB supports two types of tables: binary tables and JSON tables. This section covers how to create, edit, and delete tables, as well as how to set and display parameter values, grant permissions and access, replicate tables, and more using the MapR Control System and the CLI.

This section provides an overview of column families in binary tables and JSON tables and describes how to create, alter, and delete column families, set permissions on them, and set and display parameter values.

This section describes two ways to access a Java OJAI query plan and provides general information about how to interpret the query plan. You can examine the query plan to determine if the Java OJAI client chooses an appropriate execution path.

The MapR Data Access Gateway is a service that acts as a proxy and gateway for translating requests between lightweight client applications and the MapR cluster. This section describes considerations when upgrading the service, how to modify configuration settings, and how to administer and manage the service.

Adjusting Memory Settings in the OJAI Distributed Query Service

This section describes how to verify, through log output, that your OJAI query is
running out of memory due to memory limits in the OJAI Distributed Query Service. It then
describes how to adjust the memory settings in the service.

Before adjusting the OJAI Distributed Query Service memory settings, confirm that
your query ran out of memory due to limits in the service.

You should see output like the following in your client application log:

15:32:46.465 [Thread-21] - Error caused in scan Drill submissionFailed for "select t.`$$ENC00FIAF62LE`,t.`$$document` from dfs.`/tables/business` t where ((t.`city` = 'Currie') and (t.`state` = 'PA') and (t.`review_count` > 5100)) limit 1
org.ojai.exceptions.OjaiException: Drill submissionFailed for "select t.`$$ENC00FIAF62LE`,t.`$$document` from dfs.`/tables/business` t where ((t.`city` = 'Currie') and (t.`state` = 'PA') and (t.`review_count` > 5100)) limit 128" please ch
at com.mapr.ojai.store.impl.DrillDocumentStream$DocumentResultsListener.submissionFailed(DrillDocumentStream.java:220)
at com.mapr.ojai.store.impl.DelegatingResultsListener$2.run(DelegatingResultsListener.java:84)
at com.mapr.ojai.store.impl.RunnableQueue$QueueRunner.run(RunnableQueue.java:59)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.Failure trying to allocate initial reservation for Allocator. Attempted to allocate 5000000 bytes and received an outcome of FAILED_LOCAL.
Fragment 0:0
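
To locate these messages quickly, you can search the application log for the relevant patterns. The following is a minimal sketch; the log path in APP_LOG is a hypothetical placeholder, since the actual file name depends on your application's logging configuration:

```shell
# Hypothetical client application log path; adjust for your logging setup.
APP_LOG=${APP_LOG:-/var/log/myapp/application.log}

# Look for the OJAI submission failure and the Drill resource error.
# The fallback message keeps the command from failing when no match exists.
grep -E "submissionFailed|ran out of memory" "$APP_LOG" 2>/dev/null \
  || echo "no memory-limit errors found in $APP_LOG"
```

If either pattern matches, the query failed because of memory limits in the service rather than because of an error in your client code.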

The OJAI Distributed Query Service writes its log output to
/opt/mapr/drill/<drill-version>/logs/drillbit.log on each node
where the Query Service is running.

You should see output like the
following:

2017-10-07 15:32:41,693 [BitServer-3] INFO o.a.drill.exec.ops.FragmentContext - User Error Occurred: One or more nodes ran out of memory while executing the query. (Failure trying to allocate initial reservation for Allocator. Attempted
org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
Failure trying to allocate initial reservation for Allocator. Attempted to allocate 7000000 bytes and received an outcome of FAILED_LOCAL.
Fragment 1:1
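
To confirm the failure on the server side, you can scan the Drillbit log on each node. The following is a minimal sketch; the drill-* directory glob is an assumption about how the version directory is named under /opt/mapr/drill:

```shell
# Scan each installed Drill version's log for the resource error.
# Run this on every node where the Query Service is running.
for log in /opt/mapr/drill/drill-*/logs/drillbit.log; do
  [ -f "$log" ] || continue           # skip if the glob matched nothing
  grep -H "RESOURCE ERROR" "$log"     # print the file name with each match
done
```

A RESOURCE ERROR line like the one above confirms that the allocation failed inside the Query Service on that node.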

After confirming the memory failure, increase the Query Service memory settings by editing the
/opt/mapr/conf/conf.d/warden.drill-bits.conf file on each Drillbit
node. The file contains the following entries: