Batch Queries

A Batch Query enables you to request queries with long-running CPU processing times. Typically, these kinds of requests raise timeout errors when using the SQL API. To avoid timeouts, you can use Batch Queries to create, read, and cancel queries. You can also run a chained batch query to chain several SQL queries into one job. A Batch Query schedules the incoming jobs and allows you to request the job status for each query.

Batch Queries are not intended to be used for large query payloads that contain over 16384 characters (16kb). For instance, if you are inserting a large number of rows into your table, you still need to use the Import API or SQL API for this type of data management. Batch Queries are specific to queries and CPU usage.

Note: In order to use Batch Queries, you must be authenticated using API keys.

Authentication

An API Key is required to manage your jobs. If you are not authenticated, your request fails with an authentication error.
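As a minimal sketch, a job creation request authenticates by passing the API Key as a request parameter. The endpoint shape follows the cURL example later in this document; `my_username`, `my_api_key`, and the query are placeholders:

```javascript
// Build the URL and JSON body for creating a Batch Query job.
// "my_username" and "my_api_key" are placeholder values.
const username = "my_username";
const apiKey = "my_api_key";

const url = `http://${username}.carto.com/api/v2/sql/job?api_key=${apiKey}`;
const body = JSON.stringify({
  query: "UPDATE airports SET type = 'international'",
});

console.log(url);
console.log(body);
```

The body would be sent as a POST request with a `Content-Type: application/json` header.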

Batch Queries Job Schema

A Batch Query request to your CARTO account includes the following job schema elements. Only the query element can be modified. All other elements of the job schema are defined by the Batch Query and are read-only.

job_id: a universally unique identifier (UUID).

user: the user identifier, as displayed by the username.

status: the current state of the long-running query. The possible status values are:

pending: the job is waiting to be executed.

running: the job is currently running.

done: the job executed successfully.

failed: the job executed but failed, with errors.

canceled: the job was canceled by user request.

unknown: it is not possible to determine what exactly happened with the job.

query: the SQL statement to be executed in the database. You can modify the SQL statement to be used in the job schema.

Tip: In some scenarios, you may need to retrieve the query results from a finished job. See Fetching Job Results for details.

created_at: the date and time when the job schema was created.

updated_at: the date and time when the job schema was last updated or modified.
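Put together, a job returned by the API might look like the following (illustrative values; the UUID, username, query, and timestamps are placeholders):

```json
{
  "job_id": "de305d54-75b4-431b-adb2-eb6b9e546014",
  "user": "my_username",
  "query": "UPDATE airports SET type = 'international'",
  "status": "pending",
  "created_at": "2017-12-15T07:36:25Z",
  "updated_at": "2017-12-15T07:36:25Z"
}
```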

Note: Jobs can only be canceled while their status is "pending" or "running"; otherwise, the cancel operation is not allowed and you will receive an error:

errors: ["The job status is done, cancel is not allowed"]
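A client can check this precondition before issuing the cancel request; a minimal sketch (the helper name is hypothetical):

```javascript
// Cancel is only allowed while a job is "pending" or "running";
// any other status yields an error like the one above.
function cancelAllowed(status) {
  return status === "pending" || status === "running";
}

console.log(cancelAllowed("running")); // true
console.log(cancelAllowed("done"));    // false
```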

DELETE Examples

If you are using the Batch Query cancel operation for a cURL DELETE request, use the following code:

curl -X DELETE "http://{username}.carto.com/api/v2/sql/job/{job_id}?api_key={api_key}"

If you are using the Batch Query cancel operation for a Node.js client DELETE request, use the following code:

Chaining Batch Queries

In some cases, you may need to chain queries into one job. The Chained Batch Query option enables you to run an array of SQL statements and define the order in which the queries are executed. You can use any of the operations (create, read, list, update, cancel) for the queries in a chained batch query.

Note: The Batch Query returns a job status for both the parent Chained Batch Query request, and for each child query within the request. The order in which each query is executed is guaranteed. Here are the possible status results for Chained Batch Queries:

If one query within the Chained Batch Query fails, "status": "failed" is returned for both the job and the query, and any "pending" queries will not be processed.

If you cancel the Chained Batch Query job, the job status changes to "status": "canceled". Any running queries within the job will be stopped and changed to "status": "pending", and will not be processed.

Suppose the first query's status is "status": "done", the second query is "status": "running", and the third query is "status": "pending". If the second query fails for some reason, the job status changes to "status": "failed" and the last query will not be processed. The Chained Batch Query job indicates which query failed.

Creating several jobs does not guarantee that the jobs will be executed in the same order they were created. If you need to run queries in a specific order, you may want to use Chaining Batch Queries.

POST Examples

If you are using the Chained Batch Query operation for a cURL POST request, use the following code:
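A chained job is created by POSTing an array of SQL statements in the query element instead of a single string; a sketch of such a payload (the table and column names are placeholders):

```javascript
// A Chained Batch Query payload: "query" holds an array of SQL
// statements that run in order. Names below are placeholders.
const payload = {
  query: [
    "CREATE TABLE world_airports AS SELECT * FROM airports",
    "ALTER TABLE world_airports ADD COLUMN airport_count integer",
  ],
};

console.log(JSON.stringify(payload, null, 2));
```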

Chaining Batch Queries with fallbacks

When you need to run an extra query based on how a chained query finished, Batch Queries enable you to define onerror and onsuccess fallbacks. This powerful feature opens up a range of possibilities, for instance:

You can create jobs periodically in order to get updated data and create a new table where you can check the status of your tables.

If a query of a job fails (and onerror fallbacks for that query and the job are defined), then Batch Queries runs the first fallback for that query. The job fallback runs next and sets the job as failed. Remaining queries will not be executed. Furthermore, Batch Queries will run the onsuccess fallback at the job level if, and only if, every query has finished successfully.

Templates

Batch Queries provide a simple way to get the error message and the job identifier to be used in your fallbacks, by using the following templates:

<%= error_message %>: will be replaced by the error message raised by the database.

<%= job_id %>: will be replaced by the job identifier that Batch Queries provides.
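Putting the fallbacks and templates together, a chained job with per-query and job-level fallbacks might be shaped as follows (a sketch; the error_log and job_log tables and their columns are hypothetical):

```javascript
// Chained job where a query defines an "onerror" fallback and the job
// itself defines "onerror"/"onsuccess" fallbacks. The error_log and
// job_log tables are hypothetical examples.
const payload = {
  query: {
    query: [
      {
        query: "CREATE TABLE world_airports AS SELECT * FROM airports",
        onerror:
          "INSERT INTO error_log (job_id, message) VALUES ('<%= job_id %>', '<%= error_message %>')",
      },
    ],
    onerror: "UPDATE job_log SET status = 'failed' WHERE job_id = '<%= job_id %>'",
    onsuccess: "UPDATE job_log SET status = 'done' WHERE job_id = '<%= job_id %>'",
  },
};

console.log(JSON.stringify(payload, null, 2));
```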

Fetching Job Results

In some scenarios, you may need to fetch the output of a job. If that is the case, wrap the query with SELECT * INTO, or CREATE TABLE AS. The output is stored in a new table in your database. For example, if using the query SELECT * FROM airports:

Wrap the query SELECT * INTO job_result FROM (SELECT * FROM airports) AS job
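For example, submitting the wrapped query as a job and then reading the new table back through the SQL API might look like the following sketch (the username and table names are placeholders):

```javascript
// Wrap the query so its output lands in a new table, then read that
// table back with a regular SQL API request once the job is "done".
const jobPayload = {
  query: "SELECT * INTO job_result FROM (SELECT * FROM airports) AS job",
};
const fetchUrl =
  "http://my_username.carto.com/api/v2/sql?q=" +
  encodeURIComponent("SELECT * FROM job_result");

console.log(JSON.stringify(jobPayload));
console.log(fetchUrl);
```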

Best Practices

For best practices, follow these recommended usage notes when using Batch Queries:

Batch Queries are recommended for INSERT, UPDATE, and CREATE queries that manipulate and create new data, such as creating expensive indexes, applying updates over large tables, and creating tables from complex queries. Batch Queries have no effect for SELECT queries that retrieve data but do not store the results in a table. For example, running a batch query using SELECT * FROM my_dataset will not produce any results.

Batch Queries are not intended for large query payloads (e.g., inserting thousands of rows); use the Import API for this type of data management.

There is a limit of 16kb per job. An error message appears if your job exceeds this size.
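Since the limit applies to the job payload, a client can check the size before submitting; a minimal sketch (the helper name is hypothetical, and the 16384-character figure comes from this document):

```javascript
// Reject payloads over the 16384-character (16kb) Batch Query limit
// before sending them to the API.
const MAX_JOB_PAYLOAD = 16384;

function withinJobLimit(body) {
  return body.length <= MAX_JOB_PAYLOAD;
}

const body = JSON.stringify({ query: "UPDATE airports SET type = 'international'" });
console.log(withinJobLimit(body)); // true
```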