This document is intended as a reference guide to the full syntax and semantics of the SQL++ Query Language, a SQL-inspired language for working with semistructured data. SQL++ has much in common with SQL, but some differences do exist due to the different data models that the two languages were designed to serve. SQL was designed in the 1970s for interacting with the flat, schema-ified world of relational databases, while SQL++ is much newer and targets the nested, schema-optional (or even schema-less) world of modern NoSQL systems.

In the context of Apache AsterixDB, SQL++ is intended for working with the Asterix Data Model (ADM), a data model based on a superset of JSON with an enriched and flexible type system. New AsterixDB users are encouraged to read and work through the (much friendlier) guide “AsterixDB 101: An ADM and SQL++ Primer” before attempting to make use of this document. In addition, readers are advised to read through the Asterix Data Model (ADM) reference guide first as well, as an understanding of the data model is a prerequisite to understanding SQL++.

In what follows, we detail the features of the SQL++ language in a grammar-guided manner. We list and briefly explain each of the productions in the SQL++ grammar, offering examples (and results) for clarity.

SQL++ is a highly composable expression language. Each SQL++ expression returns zero or more data model instances. There are three major kinds of expressions in SQL++. At the topmost level, a SQL++ expression can be an OperatorExpression (similar to a mathematical expression), a ConditionalExpression (to choose between alternative values), or a QuantifiedExpression (which yields a boolean value). Each will be detailed as we explore the full SQL++ grammar.

The following table summarizes the precedence order (from higher to lower) of the major unary and binary operators:

Operator                                                Operation
EXISTS, NOT EXISTS                                      Collection emptiness testing
^                                                       Exponentiation
*, /, %                                                 Multiplication, division, modulo
+, -                                                    Addition, subtraction
||                                                      String concatenation
IS NULL, IS NOT NULL, IS MISSING, IS NOT MISSING,       Unknown value comparison
IS UNKNOWN, IS NOT UNKNOWN, IS VALUED, IS NOT VALUED
BETWEEN, NOT BETWEEN                                    Range comparison (inclusive on both sides)
=, !=, <>, <, >, <=, >=, LIKE, NOT LIKE, IN, NOT IN     Comparison
NOT                                                     Logical negation
AND                                                     Conjunction
OR                                                      Disjunction

In general, if any operand evaluates to a MISSING value, the enclosing operator will return MISSING; if no operand evaluates to a MISSING value but some operand evaluates to a NULL value, the enclosing operator will return NULL. However, there are a few exceptions, listed under the comparison operators and the logical operators.

Comparison operators are used to compare values. The comparison operators fall into one of two sub-categories: missing value comparisons and regular value comparisons. SQL++ (and JSON) has two ways of representing missing information in an object - the presence of the field with a NULL for its value (as in SQL), and the absence of the field (which JSON permits). For example, the first of the following objects represents Jack, whose friend is Jill. In the other examples, Jake is friendless a la SQL, with a friend field that is NULL, while Joe is friendless in a more natural (for JSON) way, i.e., by not having a friend field.

Examples

{"name": "Jack", "friend": "Jill"}

{"name": "Jake", "friend": NULL}

{"name": "Joe"}

The following table enumerates all of SQL++’s comparison operators.

IS NULL: Test if a value is NULL.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NULL;

IS NOT NULL: Test if a value is not NULL.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NOT NULL;

IS MISSING: Test if a value is MISSING.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name IS MISSING;

IS NOT MISSING: Test if a value is not MISSING.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NOT MISSING;

IS UNKNOWN: Test if a value is NULL or MISSING.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name IS UNKNOWN;

IS NOT UNKNOWN: Test if a value is neither NULL nor MISSING.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NOT UNKNOWN;

IS VALUED: Test if a value is neither NULL nor MISSING.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name IS VALUED;

IS NOT VALUED: Test if a value is NULL or MISSING.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name IS NOT VALUED;

BETWEEN: Test if a value is between a start value and an end value. The comparison is inclusive of both the start and end values.
    SELECT * FROM ChirpMessages cm WHERE cm.chirpId BETWEEN 10 AND 20;

=: Equality test.
    SELECT * FROM ChirpMessages cm WHERE cm.chirpId = 10;

!=: Inequality test.
    SELECT * FROM ChirpMessages cm WHERE cm.chirpId != 10;

<>: Inequality test.
    SELECT * FROM ChirpMessages cm WHERE cm.chirpId <> 10;

<: Less than.
    SELECT * FROM ChirpMessages cm WHERE cm.chirpId < 10;

>: Greater than.
    SELECT * FROM ChirpMessages cm WHERE cm.chirpId > 10;

<=: Less than or equal to.
    SELECT * FROM ChirpMessages cm WHERE cm.chirpId <= 10;

>=: Greater than or equal to.
    SELECT * FROM ChirpMessages cm WHERE cm.chirpId >= 10;

LIKE: Test if the left side matches a pattern defined on the right side; in the pattern, "%" matches any string while "_" matches any character.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name LIKE "%Giesen%";

NOT LIKE: Test if the left side does not match a pattern defined on the right side; in the pattern, "%" matches any string while "_" matches any character.
    SELECT * FROM ChirpMessages cm WHERE cm.user.name NOT LIKE "%Giesen%";

The following table summarizes how the missing value comparison operators work.

In a simple CASE expression, the query evaluator searches for the first WHEN … THEN pair in which the WHEN expression is equal to the expression following CASE and returns the expression following THEN. If none of the WHEN … THEN pairs meet this condition, and an ELSE branch exists, it returns the ELSE expression. Otherwise, NULL is returned.

In a searched CASE expression, the query evaluator searches from left to right until it finds a WHEN expression that is evaluated to TRUE, and then returns its corresponding THEN expression. If no condition is found to be TRUE, and an ELSE branch exists, it returns the ELSE expression. Otherwise, it returns NULL.
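As an illustrative sketch of the two forms (the field names used here are hypothetical), a simple CASE and a searched CASE expression might look like:

CASE cm.user.lang
WHEN "en" THEN "English"
WHEN "nl" THEN "Dutch"
ELSE "Other"
END

CASE
WHEN cm.user.friendsCount > 100 THEN "popular"
WHEN cm.user.friendsCount > 0 THEN "connected"
ELSE "isolated"
END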

Quantified expressions are used for expressing existential or universal predicates involving the elements of a collection.

The following pair of examples illustrates the use of a quantified expression to test that every (or some) element in the set [1, 2, 3] of integers is less than three. The first example yields FALSE and the second example yields TRUE.
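In SQL++ syntax, such a pair of quantified expressions can be written as:

EVERY x IN [ 1, 2, 3 ] SATISFIES x < 3

SOME x IN [ 1, 2, 3 ] SATISFIES x < 3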

It is useful to note that if the set were instead the empty set, the first expression would yield TRUE (“every” value in an empty set satisfies the condition) while the second expression would yield FALSE (since there isn’t “some” value, as there are no values in the set, that satisfies the condition).

A quantified expression will return a NULL (or MISSING) if the first expression in it evaluates to NULL (or MISSING). A type error will be raised if the first expression in a quantified expression does not return a collection.

Components of complex types in the data model are accessed via path expressions. Path access can be applied to the result of a SQL++ expression that yields an instance of a complex type, for example, an object or array instance. For objects, path access is based on field names. For arrays, path access is based on (zero-based) array-style indexing. SQL++ also supports an “I’m feeling lucky” style index accessor, [?], for selecting an arbitrary element from an array. Attempts to access non-existent fields or out-of-bounds array elements produce the special value MISSING. Type errors will be raised for inappropriate use of a path expression, such as applying a field accessor to a numeric value.

The following examples illustrate field access for an object, index-based element access for an array, and also a composition thereof.
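For instance, using ad hoc array and object values chosen for illustration, the following are all legal path expressions:

(["a", "b", "c"])[2]

({"list": ["a", "b", "c"]}).list

({"list": ["a", "b", "c"]}).list[0]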

The most basic building block for any SQL++ expression is PrimaryExpression. This can be a simple literal (constant) value, a reference to a query variable that is in scope, a parenthesized expression, a function call, or a newly constructed instance of the data model (such as a newly constructed object, array, or multiset of data model instances).

Literals (constants) in SQL++ can be strings, integers, floating point values, double values, boolean constants, or special constant values like NULL and MISSING. The NULL value is like a NULL in SQL; it is used to represent an unknown field value. The special value MISSING is only meaningful in the context of SQL++ field accesses; it occurs when the accessed field simply does not exist at all in the object being accessed.

The following are some simple examples of SQL++ literals.

Examples

'a string'
"test string"
42

Unlike in standard SQL, double quotes play the same role as single quotes and may be used for string literals in SQL++.

A variable in SQL++ can be bound to any legal data model value. A variable reference refers to the value to which an in-scope variable is bound. (E.g., a variable binding may originate from one of the FROM, WITH or LET clauses of a SELECT statement or from an input parameter in the context of a function body.) Backticks, for example, `id`, are used for delimited identifiers. Delimiting is needed when a variable’s desired name clashes with a SQL++ keyword or includes characters not allowed in regular identifiers. More information on exactly how variable references are resolved can be found in the appendix section on Variable Resolution.

Example
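As a hypothetical sketch, the following query binds a variable whose desired name clashes with the SQL++ keyword SELECT and therefore delimits it with backticks:

SELECT VALUE `select`.name
FROM GleambookUsers AS `select`;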

Functions are included in SQL++, like most languages, as a way to package useful functionality or to componentize complicated or reusable SQL++ computations. A function call is a legal SQL++ query expression that represents the value resulting from the evaluation of its body expression with the given parameter bindings; the parameter value bindings can themselves be any SQL++ expressions.

The following example is a (built-in) function call expression whose value is 8.
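Assuming the built-in string function length, the call below evaluates to 8, the number of characters in its argument:

length('a string')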

A major feature of SQL++ is its ability to construct new data model instances. This is accomplished using its constructors for each of the model’s complex object structures, namely arrays, multisets, and objects. Arrays are like JSON arrays, while multisets have bag semantics. Objects are built from fields that are field-name/field-value pairs, again like JSON.

The following examples illustrate how to construct a new array with 4 items and a new object with 2 fields respectively. Array elements can be homogeneous (as in the first example), which is the common case, or they may be heterogeneous (as in the second example). The data values and field name values used to construct arrays, multisets, and objects in constructors are all simply SQL++ expressions. Thus, the collection elements, field names, and field values used in constructors can be simple literals or they can come from query variable references or even arbitrarily complex SQL++ expressions (subqueries). Type errors will be raised if the field names in an object are not strings, and duplicate field errors will be raised if they are not distinct.
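For instance (values chosen purely for illustration): an array constructor with 4 homogeneous items, an array constructor with 4 heterogeneous items, and an object constructor with 2 fields:

[ 'a', 'b', 'c', 'd' ]

[ 42, "forty-two!", { "rank": "Captain", "name": "America" }, 3.14159 ]

{
    'project name': 'Hyracks',
    'project members': [ 'vinayakb', 'dtabass', 'chenli' ]
}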

At the uppermost level, the world of data is organized into data namespaces called dataverses. To set the default dataverse for a series of statements, the USE statement is provided in SQL++.

As an example, the following statement sets the default dataverse to be “TinySocial”.

Example

USE TinySocial;

When writing a complex SQL++ query, it can sometimes be helpful to define one or more auxiliary functions that each address a sub-piece of the overall query. The declare function statement supports the creation of such helper functions. In general, the function body (expression) can be any legal SQL++ query expression.
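As a hypothetical sketch, the following declares a small helper function over the GleambookUsers dataset used later in this document and then calls it:

DECLARE FUNCTION friendCount(user) {
    ARRAY_COUNT(user.friendIds)
};

SELECT VALUE friendCount(u)
FROM GleambookUsers u;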

In this section, we will make use of two stored collections of objects (datasets), GleambookUsers and GleambookMessages, in a series of running examples to explain SELECT queries. The contents of the example collections are as follows:

The SELECT VALUE clause in SQL++ returns an array or multiset that contains the results of evaluating the VALUE expression, with one evaluation being performed per “binding tuple” (i.e., per FROM clause item) satisfying the statement’s selection criteria. For historical reasons SQL++ also allows the keywords ELEMENT or RAW to be used in place of VALUE (not recommended).

If there is no FROM clause, the expression after VALUE is evaluated once with no binding tuples (except those inherited from an outer environment).

Example

SELECT VALUE 1;

This query returns:

[
1
]

The following example shows a query that selects one user from the GleambookUsers collection.
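One way to write such a query (choosing the user whose id is 1 for illustration) is:

SELECT VALUE user
FROM GleambookUsers user
WHERE user.id = 1;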

In SQL++, the traditional SQL-style SELECT syntax is also supported. This syntax can also be reformulated in a SELECT VALUE based manner in SQL++. (E.g., SELECT expA AS fldA, expB AS fldB is syntactic sugar for SELECT VALUE { 'fldA': expA, 'fldB': expB }.) Unlike in SQL, the result of an SQL++ query does not preserve the order of expressions in the SELECT clause.

Example
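A sketch in the SQL-style syntax, projecting two fields of a selected user (field names follow the running examples):

SELECT user.alias AS userAlias, user.name AS userName
FROM GleambookUsers user
WHERE user.id = 1;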

In SQL++, SELECT * returns an object with a nested field for each input tuple. Each field has as its field name the name of a binding variable generated by either the FROM clause or GROUP BY clause in the current enclosing SELECT statement, and its field value is the value of that binding variable.

Note that the result of SELECT * is different from the result of query that selects all the fields of an object.

Example

SELECT *
FROM GleambookUsers user;

Since user is the only binding variable generated in the FROM clause, this query returns:

As in standard SQL, SQL++ field access expressions can be abbreviated (not recommended!) when there is no ambiguity. In the next example, the variable user is the only possible variable reference for fields id, name and alias and thus could be omitted in the query. More information on abbreviated field access can be found in the appendix section on Variable Resolution.

Example
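A sketch of the abbreviated form, where the unqualified field names resolve to the variable user:

SELECT id, name, alias
FROM GleambookUsers user
WHERE id = 1;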

For each of its input tuples, the UNNEST clause flattens a collection-valued expression into individual items, producing multiple tuples, each of which is one of the expression’s original input tuples augmented with a flattened item from its collection.

The following example is a query that retrieves the names of the organizations that a selected user has worked for. It uses the UNNEST clause to unnest the nested collection employment in the user’s object.
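Such a query can be written as follows (the organizationName field name is an assumption consistent with the employment array described above):

SELECT u.id AS userId, e.organizationName AS orgName
FROM GleambookUsers u
UNNEST u.employment e
WHERE u.id = 1;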

As an alternative, the LEFT OUTER UNNEST clause offers SQL’s left outer join semantics. For example, no collection-valued field named hobbies exists in the object for the user whose id is 1, but the following query’s result still includes user 1.

Example
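A sketch of such a query, assuming hobbies (where present) is an array of hobby values:

SELECT u.id AS userId, h AS hobby
FROM GleambookUsers u
LEFT OUTER UNNEST u.hobbies h
WHERE u.id = 1;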

Note that if u.hobbies is an empty collection or evaluates to MISSING (as above) or NULL for a given input tuple, there is no hobby item to bind to the variable h; in that case, a MISSING value is generated for h so that the input tuple can still be propagated.

The SQL++ UNNEST clause is similar to SQL’s JOIN clause except that it allows its right argument to be correlated to its left argument, as in the examples above — i.e., think “correlated cross-product”. The next example shows this via a query that joins two data sets, GleambookUsers and GleambookMessages, returning user/message pairs. The results contain one object per pair, with result objects containing the user’s name and an entire message. The query can be thought of as saying “for each Gleambook user, unnest the GleambookMessages collection and filter the output with the condition message.authorId = user.id”.
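Expressed in SQL++, a sketch of such a join over the two datasets is:

SELECT u.name AS uname, m.message AS message
FROM GleambookUsers u
UNNEST GleambookMessages m
WHERE m.authorId = u.id;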

In SQL++, in addition to stored collections, a FROM clause can iterate over any intermediate collection returned by a valid SQL++ expression. In the tuple stream generated by a FROM clause, the ordering of the input tuples is not guaranteed to be preserved.

Example
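For instance, a FROM clause can range over a constructed array:

SELECT VALUE foo
FROM [1, 2, 2, 3] AS foo
WHERE foo > 2;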

SQL++ permits correlations among FROM terms. Specifically, a FROM binding expression can refer to variables defined to its left in the given FROM clause. Thus, the first unnesting example above could also be expressed as follows:
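A sketch of the correlated formulation (field names are assumptions consistent with the employment array described earlier):

SELECT u.id AS userId, e.organizationName AS orgName
FROM GleambookUsers u, u.employment e
WHERE u.id = 1;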

For non-matching left-side tuples, SQL++ produces MISSING values for the right-side binding variables; that is why the last object in the above result doesn’t have a message field. Note that this is slightly different from standard SQL, which instead would fill in NULL values for the right-side fields. The reason for this difference is that, for non-matches in its join results, SQL++ views fields from the right-side as being “not there” (a.k.a. MISSING) instead of as being “there but unknown” (i.e., NULL).

The left-outer join query can also be expressed using LEFT OUTER UNNEST:
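A sketch of that formulation, pairing each user with the messages they authored (if any):

SELECT u.name AS uname, m.message AS message
FROM GleambookUsers u
LEFT OUTER UNNEST (
    SELECT VALUE msg
    FROM GleambookMessages msg
    WHERE msg.authorId = u.id
) m;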

The SQL++ GROUP BY clause generalizes standard SQL’s grouping and aggregation semantics, but it also retains backward compatibility with the standard (relational) SQL GROUP BY and aggregation features.

In a GROUP BY clause, in addition to the binding variable(s) defined for the grouping key(s), SQL++ allows a user to define a group variable by using the clause’s GROUP AS extension to denote the resulting group. After grouping, then, the query’s in-scope variables include the grouping key’s binding variables as well as this group variable which will be bound to one collection value for each group. This per-group collection (i.e., multiset) value will be a set of nested objects in which each field of the object is the result of a renamed variable defined in parentheses following the group variable’s name. The GROUP AS syntax is as follows:
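A sketch of the GROUP AS grammar, followed by an example that groups messages by author and names the resulting group variable msgs:

GROUP AS Variable ( "(" VariableReference AS Identifier ( "," VariableReference AS Identifier )* ")" )?

Example

SELECT *
FROM GleambookMessages message
GROUP BY message.authorId AS uid
GROUP AS msgs(message AS msg);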

As we can see from the above query result, each group in the example query’s output has an associated group variable value called msgs that appears in the SELECT *’s result. This variable contains a collection of objects associated with the group; each of the group’s message values appears in the msg field of the objects in the msgs collection.

The group variable in SQL++ makes more complex, composable, nested subqueries over a group possible, which is important given the more complex data model of SQL++ (relative to SQL). As a simple example of this, as we really just want the messages associated with each user, we might wish to avoid the “extra wrapping” of each message as the msg field of a object. (That wrapping is useful in more complex cases, but is essentially just in the way here.) We can use a subquery in the SELECT clause to tunnel through the extra nesting and produce the desired result.

Example

SELECT uid, (SELECT VALUE g.msg FROM g) AS msgs
FROM GleambookMessages gbm
GROUP BY gbm.authorId AS uid
GROUP AS g(gbm AS msg);

The next example shows a more interesting case involving the use of a subquery in the SELECT list. Here the subquery further processes the groups. There is no renaming in the declaration of the group variable g such that g only has one field gbm which comes from the FROM clause.

Example

SELECT uid,
(SELECT VALUE g.gbm
FROM g
WHERE g.gbm.message LIKE '% like%'
ORDER BY g.gbm.messageId
LIMIT 2) AS msgs
FROM GleambookMessages gbm
GROUP BY gbm.authorId AS uid
GROUP AS g;

In the SQL++ syntax, providing named binding variables for GROUP BY key expressions is optional. If a grouping key is missing a user-provided binding variable, the underlying compiler will generate one. Automatic grouping key variable naming falls into three cases in SQL++, much like the treatment of unnamed projections:

If the grouping key expression is a variable reference expression, the generated variable gets the same name as the referred variable;

If the grouping key expression is a field access expression, the generated variable gets the same name as the last identifier in the expression;

For all other cases, the compiler generates a unique variable (but the user query is unable to refer to this generated variable).

The next example illustrates a query that doesn’t provide binding variables for its grouping key expressions.

Example

SELECT authorId,
(SELECT VALUE g.gbm
FROM g
WHERE g.gbm.message LIKE '% like%'
ORDER BY g.gbm.messageId
LIMIT 2) AS msgs
FROM GleambookMessages gbm
GROUP BY gbm.authorId
GROUP AS g;

The group variable itself is also optional in SQL++’s GROUP BY syntax. If a user’s query does not declare the name and structure of the group variable using GROUP AS, the query compiler will generate a unique group variable whose fields include all of the binding variables defined in the FROM clause of the current enclosing SELECT statement. In this case the user’s query will not be able to refer to the generated group variable, but is able to call SQL-92 aggregation functions as in SQL-92.

In traditional SQL, which doesn’t support nested data, grouping always involves the use of aggregation to compute properties of the groups (for example, the average number of messages per user rather than the actual set of messages per user). Each aggregation function in SQL++ takes a collection (for example, the group of messages) as its input and produces a scalar value as its output. These aggregation functions, being truly functional in nature (unlike in SQL), can be used anywhere in a query where an expression is allowed. The following table catalogs the SQL++ built-in aggregation functions and also indicates how each one handles NULL/MISSING values in the input collection or a completely empty input collection:

Function       NULL            MISSING         Empty Collection
COLL_COUNT     counted         counted         0
COLL_SUM       returns NULL    returns NULL    returns NULL
COLL_MAX       returns NULL    returns NULL    returns NULL
COLL_MIN       returns NULL    returns NULL    returns NULL
COLL_AVG       returns NULL    returns NULL    returns NULL
ARRAY_COUNT    not counted     not counted     0
ARRAY_SUM      ignored         ignored         returns NULL
ARRAY_MAX      ignored         ignored         returns NULL
ARRAY_MIN      ignored         ignored         returns NULL
ARRAY_AVG      ignored         ignored         returns NULL

Notice that SQL++ has twice as many functions listed above as there are aggregate functions in SQL-92. This is because SQL++ offers two versions of each – one that handles UNKNOWN values in a semantically strict fashion, where unknown values in the input result in unknown values in the output – and one that handles them in the ad hoc “just ignore the unknown values” fashion that the SQL standard chose to adopt.

Example

SELECT uid AS uid, ARRAY_COUNT(grp) AS msgCnt
FROM GleambookMessages message
GROUP BY message.authorId AS uid
GROUP AS grp(message AS msg);

This query returns:

[ {
"uid": 1,
"msgCnt": 5
}, {
"uid": 2,
"msgCnt": 2
} ]

Notice how the query forms groups where each group involves a message author and their messages. (SQL cannot do this because the grouped intermediate result is non-1NF in nature.) The query then uses the collection aggregate function ARRAY_COUNT to get the cardinality of each group of messages.

Each aggregation function in SQL++ supports a DISTINCT modifier that removes duplicate values from the input collection.

Example
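A minimal sketch of the DISTINCT modifier applied to a collection aggregate; the input collection contains a duplicate, so only the distinct values 1, 2, and 3 are counted:

SELECT VALUE COLL_COUNT(DISTINCT [1, 2, 2, 3]);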

For compatibility with the traditional SQL aggregation functions, SQL++ also offers SQL-92’s aggregation function symbols (COUNT, SUM, MAX, MIN, and AVG) as supported syntactic sugar. The SQL++ compiler rewrites queries that utilize these function symbols into SQL++ queries that only use the SQL++ collection aggregate functions. The following example uses the SQL-92 syntax approach to compute a result that is identical to that of the more explicit SQL++ example above:

Example

SELECT uid, COUNT(*) AS msgCnt
FROM GleambookMessages msg
GROUP BY msg.authorId AS uid;

It is important to realize that COUNT is actually not a SQL++ built-in aggregation function. Rather, the COUNT query above is using a special “sugared” function symbol that the SQL++ compiler will rewrite as follows:

SELECT uid AS uid, ARRAY_COUNT( (SELECT VALUE 1 FROM `$1` as g) ) AS msgCnt
FROM GleambookMessages msg
GROUP BY msg.authorId AS uid
GROUP AS `$1`(msg AS msg);

The same sort of rewritings apply to the function symbols SUM, MAX, MIN, and AVG. In contrast to the SQL++ collection aggregate functions, these special SQL-92 function symbols can only be used in the same way they are in standard SQL (i.e., with the same restrictions).

SQL++ provides full support for SQL-92 GROUP BY aggregation queries. The following query is such an example:

Example

SELECT msg.authorId, COUNT(*)
FROM GleambookMessages msg
GROUP BY msg.authorId;

This query outputs:

[ {
"authorId": 1,
"$1": 5
}, {
"authorId": 2,
"$1": 2
} ]

In principle, a msg reference in the query’s SELECT clause would be “sugarized” as a collection (as described in Implicit Group Variables). However, since the SELECT expression msg.authorId is syntactically identical to a GROUP BY key expression, it will be internally replaced by the generated group key variable. The following is the equivalent rewritten query that will be generated by the compiler for the query above:

SELECT authorId AS authorId, ARRAY_COUNT( (SELECT g.msg FROM `$1` AS g) )
FROM GleambookMessages msg
GROUP BY msg.authorId AS authorId
GROUP AS `$1`(msg AS msg);

Example

Both WHERE clauses and HAVING clauses are used to filter input data based on a condition expression. Only tuples for which the condition expression evaluates to TRUE are propagated. Note that if the condition expression evaluates to NULL or MISSING the input tuple will be discarded.
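For instance, a HAVING clause can filter out groups after aggregation; a sketch over the message dataset used in the running examples:

SELECT uid, COUNT(*) AS msgCnt
FROM GleambookMessages msg
GROUP BY msg.authorId AS uid
HAVING COUNT(*) > 2;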

The ORDER BY clause is used to globally sort data in either ascending order (i.e., ASC) or descending order (i.e., DESC). During ordering, MISSING and NULL are treated as being smaller than any other value if they are encountered in the ordering key(s). MISSING is treated as smaller than NULL if both occur in the data being sorted. The ordering of values of a given type is consistent with its type’s <= ordering; the ordering of values across types is implementation-defined but stable. The following example returns all GleambookUsers in descending order by their number of friends.

Example

SELECT VALUE user
FROM GleambookUsers AS user
ORDER BY ARRAY_COUNT(user.friendIds) DESC;

WITH can be particularly useful when a value needs to be used several times in a query.
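A sketch of such a query: the average number of friends is computed once in a WITH clause and then reused in the main query. (Note the “[0]”, whose role is explained below.)

WITH avgFriendCount AS (
    SELECT VALUE AVG(ARRAY_COUNT(user.friendIds))
    FROM GleambookUsers user
)[0]
SELECT VALUE user
FROM GleambookUsers user
WHERE ARRAY_COUNT(user.friendIds) > avgFriendCount;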

Before proceeding further, notice that both the WITH query and its equivalent inlined variant include the syntax “[0]” – this is due to a noteworthy difference between SQL++ and SQL-92. In SQL-92, whenever a scalar value is expected and it is being produced by a query expression, the SQL-92 query processor will evaluate the expression, check that there is only one row and column in the result at runtime, and then coerce the one-row/one-column tabular result into a scalar value. SQL++, being designed to deal with nested data and schema-less data, does not (and should not) do this. Collection-valued data is perfectly legal in most SQL++ contexts, and its data is schema-less, so a query processor rarely knows exactly what to expect where and such automatic conversion is often not desirable. Thus, in the queries above, the use of “[0]” extracts the first (i.e., 0th) element of an array-valued query expression’s result; this is needed above, even though the result is an array of one element, to extract the only element in the singleton array and obtain the desired scalar for the comparison.

Similar to WITH clauses, LET clauses can be useful when a (complex) expression is used several times within a query, allowing it to be written once to make the query more concise. The next query shows an example.
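A sketch using LET to name a per-user value that is then used in two places:

SELECT u.name AS uname, friendCount AS cnt
FROM GleambookUsers u
LET friendCount = ARRAY_COUNT(u.friendIds)
WHERE friendCount > 1;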

UNION ALL can be used to combine two input arrays or multisets into one. As in SQL, there is no ordering guarantee on the contents of the output stream. However, unlike SQL, SQL++ does not constrain what the data looks like on the input streams; in particular, it allows heterogeneity on the input and output streams. A type error will be raised if one of the inputs is not a collection. The following odd but legal query is an example:
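A sketch of such a query, which unions objects with plain string values, something standard SQL would not allow:

SELECT u.name AS uname
FROM GleambookUsers u
WHERE u.id = 2
UNION ALL
SELECT VALUE m.message
FROM GleambookMessages m
WHERE m.authorId = 2;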

In SQL++, an arbitrary subquery can appear anywhere that an expression can appear. Unlike SQL-92, as was just alluded to, the subqueries in a SELECT list or a boolean predicate need not return singleton, single-column relations. Instead, they may return arbitrary collections. For example, the following query is a variant of the prior group-by query examples; it retrieves an array of up to two “dislike” messages per user.

Example

SELECT uid,
(SELECT VALUE m.msg
FROM msgs m
WHERE m.msg.message LIKE '%dislike%'
ORDER BY m.msg.messageId
LIMIT 2) AS msgs
FROM GleambookMessages message
GROUP BY message.authorId AS uid GROUP AS msgs(message AS msg);

Note that a subquery, like a top-level SELECT statement, always returns a collection – regardless of where within a query the subquery occurs – and again, its result is never automatically cast into a scalar.

Example

If the compiler cannot figure out how to resolve an unqualified field name, which will occur if there is more than one variable in scope (e.g., GleambookUsers u and GleambookMessages m as above), we will get an identifier resolution error as follows:

The SQL++ compiler does type checks based on its available type information. In addition, the SQL++ runtime also reports type errors if a data model instance it processes does not satisfy the type requirement.

Example

abs("123");

Since function abs can only process numeric input values, we will get a type error as follows:

A query can potentially exhaust system resources, such as the number of open files and the amount of disk space. For instance, the following two resource errors could potentially be seen when running the system:

Error: no space left on device
Error: too many open files

The “no space left on device” issue usually can be fixed by cleaning up disk space and reserving more disk space for the system. The “too many open files” issue usually can be fixed by a system administrator, for example by raising the system’s limit on the number of open files.

In addition to queries, an implementation of SQL++ needs to support statements for data definition and manipulation purposes as well as controlling the context to be used in evaluating SQL++ expressions. This section details the DDL and DML statements supported in the SQL++ language as realized today in Apache AsterixDB.

The CREATE statement in SQL++ is used for creating dataverses as well as other persistent artifacts in a dataverse. It can be used to create new dataverses, datatypes, datasets, indexes, and user-defined SQL++ functions.

The CREATE DATAVERSE statement is used to create new dataverses. To ease the authoring of reusable SQL++ scripts, an optional IF NOT EXISTS clause is included to allow creation to be requested either unconditionally or only if the dataverse does not already exist. If this clause is absent, an error is returned if a dataverse with the indicated name already exists.

The following example creates a new dataverse named TinySocial if one does not already exist.
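Example

CREATE DATAVERSE TinySocial IF NOT EXISTS;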

The CREATE TYPE statement is used to create a new named datatype. This type can then be used to create stored collections or utilized when defining one or more other datatypes. Much more information about the data model is available in the data model reference guide. A new type can be an object type, a renaming of another type, an array type, or a multiset type. An object type can be defined as being either open or closed. Instances of a closed object type are not permitted to contain fields other than those specified in the create type statement. Instances of an open object type may carry additional fields, and open is the default for new types if neither option is specified.

The following example creates a new object type called GleambookUserType. Since it is defined as (defaulting to) being an open type, instances will be permitted to contain more than what is specified in the type definition. The first four fields are essentially traditional typed name/value pairs (much like SQL fields). The friendIds field is a multiset of integers. The employment field is an array of instances of another named object type, EmploymentType.

Example
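A sketch of such a definition follows; the names and types of the first four (ordinary typed) fields are illustrative:

CREATE TYPE GleambookUserType AS {
    id: int,
    alias: string,
    name: string,
    userSince: datetime,
    friendIds: {{ int }},
    employment: [EmploymentType]
};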

The next example creates a new object type, closed this time, called MyUserTupleType. Instances of this closed type will not be permitted to have extra fields, although the alias field is marked as optional and may thus be NULL or MISSING in legal instances of the type. Note that the type of the id field in the example is UUID. This field type can be used if you want the field to be an auto-generated primary key field. (Refer to the Datasets section later for more details on such fields.)
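Example

A sketch of such a definition is shown below; aside from the uuid-typed id field and the optional alias field called out above, the remaining field is illustrative:

CREATE TYPE MyUserTupleType AS CLOSED {
    id: uuid,
    alias: string?,
    name: string
};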

The CREATE DATASET statement is used to create a new dataset. Datasets are named multisets of object type instances; they are where data lives persistently and are the usual targets for SQL++ queries. Datasets are typed, and the system ensures that their contents conform to their type definitions. An Internal dataset (the default kind) is a dataset whose content lives within and is managed by the system. It is required to have a specified unique primary key field which uniquely identifies the contained objects. (The primary key is also used in secondary indexes to identify the indexed primary data objects.)

Internal datasets support several advanced options that can be specified when appropriate. One such option is that random primary key (UUID) values can be auto-generated by declaring the field to be UUID and putting “AUTOGENERATED” after the “PRIMARY KEY” identifier. In this case, unlike other non-optional fields, a value for the auto-generated PK field should not be provided by the user at insertion time, since each object’s primary key field value will be auto-generated by the system.

Another advanced option when creating an Internal dataset is to specify the merge policy that controls how the underlying LSM storage components are merged. (The system supports Log-Structured Merge tree based physical storage for Internal datasets.) Currently the system supports four different component merging policies that can be chosen per dataset: no-merge, constant, prefix, and correlated-prefix.

The no-merge policy simply never merges disk components.

The constant policy merges disk components when the number of components reaches a constant number k that can be configured by the user.

The prefix policy relies on both component sizes and the number of components to decide which components to merge. It works by first trying to identify the smallest ordered (oldest to newest) sequence of components such that the sequence does not contain a single component that exceeds some threshold size M and that either the sum of the components’ sizes exceeds M or the number of components in the sequence exceeds another threshold C. If such a sequence exists, the components in the sequence are merged together to form a single component.

Finally, the correlated-prefix policy is similar to the prefix policy, but it delegates the decision of merging the disk components of all the indexes in a dataset to the primary index. When the correlated-prefix policy decides that the primary index needs to be merged (using the same decision criteria as the prefix policy), it issues successive merge requests on behalf of all other indexes associated with the same dataset.

The system’s default policy is the prefix policy, except when a dataset has a filter, in which case the correlated-prefix policy is preferred.

Another advanced option shown in the syntax above, related to performance and mentioned above, is that a filter can optionally be created on a field to further optimize range queries with predicates on the filter’s field. Filters allow some range queries to avoid searching all LSM components when the query conditions match the filter. (Refer to Filter-Based LSM Index Acceleration for more information about filters.)

An External dataset, in contrast to an Internal dataset, has data stored outside of the system’s control. Files living in HDFS or in the local filesystem(s) of a cluster’s nodes are currently supported. External dataset support allows SQL++ queries to treat foreign data as though it were stored in the system, making it possible to query “legacy” file data (for example, Hive data) without having to physically import it. When defining an External dataset, an appropriate adapter type must be selected for the desired external data. (See the Guide to External Data for more information on the available adapters.)

The following example creates an Internal dataset for storing GleambookUserType objects. It specifies that their id field is their primary key.

Example
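CREATE DATASET GleambookUsers(GleambookUserType) PRIMARY KEY id;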

The next example creates another Internal dataset (the default kind when no dataset kind is specified) for storing MyUserTupleType objects. It specifies that the id field should be used as the primary key for the dataset. It also specifies that the id field is an auto-generated field, meaning that a randomly generated UUID value should be assigned to each incoming object by the system. (A user should therefore not attempt to provide a value for this field.) Note that the id field’s declared type must be UUID in this case.

Example

CREATE DATASET MyUsers(MyUserTupleType) PRIMARY KEY id AUTOGENERATED;

The next example creates an External dataset for querying LineItemType objects. The choice of the hdfs adapter means that this dataset’s data actually resides in HDFS. The example CREATE statement also provides parameters used by the hdfs adapter: the URL and path needed to locate the data in HDFS and a description of the data format.
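Example

A sketch with placeholder values for the hdfs adapter's parameters; the URL, path, and format settings below are illustrative and must be replaced with real ones:

CREATE EXTERNAL DATASET LineItem(LineItemType) USING hdfs
    (("hdfs"="hdfs://HOST:PORT"),
     ("path"="HDFS_PATH"),
     ("input-format"="text-input-format"),
     ("format"="delimited-text"),
     ("delimiter"="|"));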

The CREATE INDEX statement creates a secondary index on one or more fields of a specified dataset. Supported index types include BTREE for totally ordered datatypes, RTREE for spatial data, and KEYWORD and NGRAM for textual (string) data. An index can be created on a nested field (or fields) by providing a valid path expression as an index field identifier.

An indexed field is not required to be part of the datatype associated with a dataset if the dataset’s datatype is declared as open, the field’s type is provided along with its name, and the ENFORCED keyword is specified at the end of the index definition. ENFORCING an open field introduces a check that makes sure that the actual type of the indexed field (if the optional field exists in the object) always matches this specified (open) field type.

The following example creates a btree index called gbAuthorIdx on the authorId field of the GleambookMessages dataset. This index can be useful for accelerating exact-match queries, range search queries, and joins involving the authorId field.

Example

CREATE INDEX gbAuthorIdx ON GleambookMessages(authorId) TYPE BTREE;

The following example creates an open btree index called gbSendTimeIdx on the (non-predeclared) sendTime field of the GleambookMessages dataset having datetime type. This index can be useful for accelerating exact-match queries, range search queries, and joins involving the sendTime field. The index is enforced so that records that do not have the “sendTime” field or have a mismatched type on the field cannot be inserted into the dataset.

Example
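A sketch of the statement; the fieldName: type? annotation supplies the non-predeclared field's type, as described above:

CREATE INDEX gbSendTimeIdx ON GleambookMessages(sendTime: datetime?) TYPE BTREE ENFORCED;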

The following example creates a btree index called crpUserScrNameIdx on screenName, a nested field residing within an object-valued user field in the ChirpMessages dataset. This index can be useful for accelerating exact-match queries, range search queries, and joins involving the nested screenName field. Such nested fields must be singular, i.e., one cannot index through (or on) an array-valued field.

Example
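CREATE INDEX crpUserScrNameIdx ON ChirpMessages(user.screenName) TYPE BTREE;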

The following example creates an rtree index called gbSenderLocIdx on the sender-location field of the GleambookMessages dataset. This index can be useful for accelerating queries that use the spatial-intersect function in a predicate involving the sender-location field.

Example
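A sketch of the statement; since the field name contains a hyphen, it is escaped with backticks:

CREATE INDEX gbSenderLocIdx ON GleambookMessages(`sender-location`) TYPE RTREE;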

The following example creates a 3-gram index called fbUserIdx on the name field of the GleambookUsers dataset. This index can be used to accelerate some similarity or substring matching queries on the name field. For details refer to the document on similarity queries.

Example

CREATE INDEX fbUserIdx ON GleambookUsers(name) TYPE NGRAM(3);

The following example creates a keyword index called fbMessageIdx on the message field of the GleambookMessages dataset. This keyword index can be used to optimize queries with token-based similarity predicates on the message field. For details refer to the document on similarity queries.

Example

CREATE INDEX fbMessageIdx ON GleambookMessages(message) TYPE KEYWORD;

The following example creates an open btree index called gbReadTimeIdx on the (non-predeclared) readTime field of the GleambookMessages dataset having datetime type. This index can be useful for accelerating exact-match queries, range search queries, and joins involving the readTime field. The index is not enforced so that records that do not have the readTime field or have a mismatched type on the field can still be inserted into the dataset.
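Example

A sketch of the statement; note that the ENFORCED keyword is omitted since this index is not enforced:

CREATE INDEX gbReadTimeIdx ON GleambookMessages(readTime: datetime?) TYPE BTREE;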

The following is an example of a CREATE FUNCTION statement which is similar to our earlier DECLARE FUNCTION example. It differs from that example in that it results in a function that is persistently registered by name in the specified dataverse (the current dataverse being used, if not otherwise specified).

Example
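A sketch of such a function definition; the function body shown (looking up a user and counting friends) is illustrative:

CREATE FUNCTION friendInfo(userId) {
    (SELECT u.id, u.name, LEN(u.friendIds) AS friendCount
     FROM GleambookUsers u
     WHERE u.id = userId)[0]
};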

When an artifact is dropped, it will be dropped from the current dataverse if none is specified (see the DROP DATASET example above) or from the specified dataverse (see the DROP TYPE example above) if one is specified by fully qualifying the artifact name in the DROP statement. When specifying an index to drop, the index name must be qualified by the dataset that it indexes. When specifying a function to drop, since SQL++ allows functions to be overloaded by their number of arguments, the identifying name of the function to be dropped must explicitly include that information. (friendInfo@1 above denotes the 1-argument function named friendInfo in the current dataverse.)
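For illustration, the forms described take shapes like the following (a sketch using artifact names from earlier examples; the dataverse name TinySocial is assumed):

DROP DATASET GleambookUsers IF EXISTS;
DROP TYPE TinySocial.MyUserTupleType;
DROP INDEX GleambookMessages.gbAuthorIdx;
DROP FUNCTION friendInfo@1;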

The LOAD statement is used to initially populate a dataset via bulk loading of data from an external file. An appropriate adapter must be selected to handle the nature of the desired external data. The LOAD statement accepts the same adapters and the same parameters as discussed earlier for External datasets. (See the guide to external data for more information on the available adapters.) If a dataset has an auto-generated primary key field, the file to be imported should not include that field in it.

The following example shows how to bulk load the GleambookUsers dataset from an external file containing data that has been prepared in ADM (Asterix Data Model) format.
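Example

A sketch using the localfs adapter; the node address and file path below are placeholders:

LOAD DATASET GleambookUsers USING localfs
    (("path"="127.0.0.1:///path/to/gbu.adm"), ("format"="adm"));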

The SQL++ INSERT statement is used to insert new data into a dataset. The data to be inserted comes from a SQL++ query expression. This expression can be as simple as a constant expression, or in general it can be any legal SQL++ query. If the target dataset has an auto-generated primary key field, the insert statement should not include a value for that field in it. (The system will automatically extend the provided object with this additional field and a corresponding value.) Insertion will fail if the dataset already has data with the primary key value(s) being inserted.

Inserts are processed transactionally by the system. The transactional scope of each insert transaction is the insertion of a single object plus its affiliated secondary index entries (if any). If the query part of an insert returns a single object, then the INSERT statement will be a single, atomic transaction. If the query part returns multiple objects, each object being inserted will be treated as a separate transaction. The following example illustrates a query-based insertion.

Example
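A sketch assuming a target dataset UsersCopy of the appropriate type:

INSERT INTO UsersCopy (SELECT VALUE user FROM GleambookUsers user);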

The SQL++ UPSERT statement syntactically mirrors the INSERT statement discussed above. The difference lies in its semantics, which for UPSERT are “add or replace” instead of the INSERT “add if not present, else error” semantics. Whereas an INSERT can fail if another object already exists with the specified key, the analogous UPSERT will replace the previous object’s value with that of the new object in such cases.

The following example illustrates a query-based upsert operation.

Example

UPSERT INTO UsersCopy (SELECT VALUE user FROM GleambookUsers user)

*Editor’s note: Upserts currently work in AQL but are not yet enabled in SQL++.

The SQL++ DELETE statement is used to delete data from a target dataset. The data to be deleted is identified by a boolean expression involving the variable bound to the target dataset in the DELETE statement.

Deletes are processed transactionally by the system. The transactional scope of each delete transaction is the deletion of a single object plus its affiliated secondary index entries (if any). If the boolean expression for a delete identifies a single object, then the DELETE statement itself will be a single, atomic transaction. If the expression identifies multiple objects, then each object deleted will be handled as a separate transaction.
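Example

The following sketch deletes a single user by primary key; the dataset and key value are illustrative:

DELETE FROM GleambookUsers user WHERE user.id = 8;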

The SET statement can be used to override some cluster-wide configuration parameters for a specific request:

SET <IDENTIFIER> <STRING_LITERAL>

As parameter identifiers are qualified names (containing a ‘.’), they have to be escaped using backticks (``). Note that changing query parameters will not affect query correctness but only impact performance characteristics, such as response time and throughput.

The system can execute each request using multiple cores on multiple machines (a.k.a., partitioned parallelism) in a cluster. A user can manually specify the maximum execution parallelism for a request to scale it up and down using the following parameter:

compiler.parallelism: the maximum number of CPU cores that can be used to process a query. There are three cases for the value p of compiler.parallelism:

p < 0 or p > the total number of cores in a cluster: the system will use all available cores in the cluster;

p = 0 (the default): the system will use the storage parallelism (the number of partitions of stored datasets) as the maximum parallelism for query processing;

all other cases: the system will use the user-specified number as the maximum number of CPU cores to use for executing the query.

Example
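A sketch that requests all available cores for an illustrative join query:

SET `compiler.parallelism` "-1";

SELECT u.name AS uname, m.message AS message
FROM GleambookUsers u JOIN GleambookMessages m ON m.authorId = u.id;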

In the system, each blocking runtime operator such as join, group-by, and order-by works within a fixed memory budget and can gracefully spill to disk if the memory budget is smaller than the amount of data it has to hold. A user can manually configure the memory budget of those operators within a query. The supported configurable memory parameters are:

compiler.groupmemory: the memory budget that each parallel group-by operator instance can use; 32MB is the default budget.

compiler.sortmemory: the memory budget that each parallel sort operator instance can use; 32MB is the default budget.

compiler.joinmemory: the memory budget that each parallel hash join operator instance can use; 32MB is the default budget.

For each memory budget value, you can use a 64-bit integer value with a 1024-based binary unit suffix (for example, B, KB, MB, GB). If there is no user-provided suffix, “B” is the default suffix. See the following examples.

Example

SET `compiler.groupmemory` "64MB";
SELECT msg.authorId, COUNT(*)
FROM GleambookMessages msg
GROUP BY msg.authorId;

Example

SET `compiler.sortmemory` "67108864";
SELECT VALUE user
FROM GleambookUsers AS user
ORDER BY ARRAY_LENGTH(user.friendIds) DESC;

Example
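A sketch that sets the join memory budget for an illustrative join query; note the explicit KB suffix:

SET `compiler.joinmemory` "132000KB";

SELECT u.name AS uname, m.message AS message
FROM GleambookUsers u JOIN GleambookMessages m ON m.authorId = u.id;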

By default, the system tries to build an index-only plan whenever utilizing a secondary index is possible. For example, if a SELECT or JOIN query can utilize an enforced B+Tree or R-Tree index on a field, the optimizer checks whether a secondary-index search alone can generate the result that the query asks for. It mainly checks two conditions: (1) the predicates in the WHERE clause use only the primary key field and/or the secondary key field, and (2) the result does not return any other fields. If these two conditions hold, it builds an index-only plan. Since an index-only plan answers a query by searching only a secondary index, it is faster than a non-index-only plan, which must also search the primary index. However, index-only plans can be turned off per query by setting the following parameter.

noindexonly: if this is set to true, the index-only-plan will not be applied; the default value is false.

Example

SET noindexonly 'true';
SELECT m.message AS message
FROM GleambookMessages m WHERE m.message = " love product-b its shortcut-menu is awesome:)";

In this Appendix, we’ll look at how variables are bound and how names are resolved. Names can appear in every clause of a query. Sometimes a name consists of just a single identifier, e.g., region or revenue. More often a name will consist of two identifiers separated by a dot, e.g., customer.address. Occasionally a name may have more than two identifiers, e.g., policy.owner.address.zipcode. Resolving a name means determining exactly what the (possibly multi-part) name refers to. It is necessary to have well-defined rules for how to resolve a name in cases of ambiguity. (In the absence of schemas, such cases arise more commonly, and also differently, than they do in SQL.)

The basic job of each clause in a query block is to bind variables. Each clause sees the variables bound by previous clauses and may bind additional variables. Names are always resolved with respect to the variables that are bound (“in scope”) at the place where the name use in question occurs. It is possible that the name resolution process will fail, which may lead to an empty result or an error message.

One important bit of background: When the system is reading a query and resolving its names, it has a list of all the available dataverses and datasets. As a result, it knows whether a.b is a valid name for dataset b in dataverse a. However, the system does not in general have knowledge of the schemas of the data inside the datasets; remember that this is a much more open world. As a result, in general the system cannot know whether any object in a particular dataset will have a field named c. These assumptions affect how errors are handled. If you try to access dataset a.b and no dataset by that name exists, you will get an error and your query will not run. However, if you try to access a field c in a collection of objects, your query will run and return missing for each object that doesn’t have a field named c – this is because it’s possible that some object (someday) could have such a field.

WITH and LET clauses bind a variable to the result of an expression in a straightforward way.

Examples:

WITH cheap_parts AS (SELECT partno FROM parts WHERE price < 100) binds the variable cheap_parts to the result of the subquery.

LET pay = salary + bonus binds the variable pay to the result of evaluating the expression salary + bonus.

FROM, GROUP BY, and SELECT clauses have optional AS subclauses that contain an expression and a name (called an iteration variable in a FROM clause, or an alias in GROUP BY or SELECT.)

Examples:

FROM customer AS c, order AS o

GROUP BY salary + bonus AS total_pay

SELECT MAX(price) AS highest_price

An AS subclause always binds the name (as a variable) to the result of the expression (or, in the case of a FROM clause, to the individual members of the collection identified by the expression.)

It’s always a good practice to use the keyword AS when defining an alias or iteration variable. However, as in SQL, the syntax allows the keyword AS to be omitted. For example, the FROM clause above could have been written like this:

FROM customer c, order o

Omitting the keyword AS does not affect the binding of variables. The FROM clause in this example binds variables c and o whether the keyword AS is used or not.

In certain cases, a variable is automatically bound even if no alias or variable-name is specified. Whenever an expression could have been followed by an AS subclause, if the expression consists of a simple name or a path expression, that expression binds a variable whose name is the same as the simple name or the last step in the path expression. Here are some examples:

FROM customer, order binds iteration variables named customer and order

GROUP BY address.zipcode binds a variable named zipcode

SELECT item[0].price binds a variable named price

Note that a FROM clause iterates over a collection (usually a dataset), binding a variable to each member of the collection in turn. The name of the collection remains in scope, but it is not a variable. For example, consider this FROM clause used in a self-join:

FROM customer AS c1, customer AS c2

This FROM clause joins the customer dataset to itself, binding the iteration variables c1 and c2 to objects in the left-hand-side and right-hand-side of the join, respectively. After the FROM clause, c1 and c2 are in scope as variables, and customer remains accessible as a dataset name but not as a variable.

Special rules for GROUP BY:

If a GROUP BY clause specifies an expression that has no explicit alias, it binds a pseudo-variable that is lexicographically identical to the expression itself. For example:

GROUP BY salary + bonus binds a pseudo-variable named salary + bonus.

This rule allows subsequent clauses to refer to the grouping expression (salary + bonus) even though its constituent variables (salary and bonus) are no longer in scope. For example, the following query is valid:
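(A sketch; the dataset and field names are illustrative.)

FROM employee AS e
GROUP BY salary + bonus
HAVING salary + bonus > 1000
SELECT salary + bonus AS total_pay, COUNT(*) AS num_employees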

While it might have been more elegant to explicitly require an alias in cases like this, the pseudo-variable rule is retained for SQL compatibility. Note that the expression salary + bonus is not actually evaluated in the HAVING and SELECT clauses (and could not be since salary and bonus are no longer individually in scope). Instead, the expression salary + bonus is treated as a reference to the pseudo-variable defined in the GROUP BY clause.

A GROUP BY clause may be followed by a GROUP AS clause that binds a variable to the group. The purpose of this variable is to make the individual objects inside the group visible to subqueries that may need to iterate over them.

The GROUP AS variable is bound to a multiset of objects. Each object represents one of the members of the group. Since the group may have been formed from a join, each of the member-objects contains a nested object for each variable bound by the nearest FROM clause (and its LET subclause, if any). These nested objects, in turn, contain the actual fields of the group-member. To understand this process, consider the following query fragment:

FROM parts AS p, suppliers AS s
WHERE p.suppno = s.suppno
GROUP BY p.color GROUP AS g

Suppose that the objects in parts have fields partno, color, and suppno. Suppose that the objects in suppliers have fields suppno and location.

Then, for each group formed by the GROUP BY, the variable g will be bound to a multiset with the following structure:
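For example, the group of red parts might look like this (the field values shown are illustrative):

[
    { "p": { "partno": "p1", "color": "red", "suppno": "s1" },
      "s": { "suppno": "s1", "location": "Denver" } },
    { "p": { "partno": "p2", "color": "red", "suppno": "s2" },
      "s": { "suppno": "s2", "location": "Boston" } }
]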

In general, the variables that are in scope at a particular position are those variables that were bound earlier in the current query block, in outer (enclosing) query blocks, or in a WITH clause at the beginning of the query. More specific rules follow.

The clauses in a query block are conceptually processed in the following order:

FROM (followed by LET subclause, if any)

WHERE

GROUP BY (followed by LET subclause, if any)

HAVING

SELECT or SELECT VALUE

ORDER BY

OFFSET

LIMIT

During processing of each clause, the variables that are in scope are those variables that are bound in the following places:

In earlier clauses of the same query block (as defined by the ordering given above).

Example: FROM orders AS o SELECT o.date. The variable o in the SELECT clause is bound, in turn, to each object in the dataset orders.

In outer query blocks in which the current query block is nested. In case of duplication, the innermost binding wins.

In the WITH clause (if any) at the beginning of the query.

However, in a query block where a GROUP BY clause is present:

In clauses processed before GROUP BY, scoping rules are the same as though no GROUP BY were present.

In clauses processed after GROUP BY, the variables bound in the nearest FROM-clause (and its LET subclause, if any) are removed from scope and replaced by the variables bound in the GROUP BY clause (and its LET subclause, if any). However, this replacement does not apply inside the arguments of the five SQL special aggregating functions (MIN, MAX, AVG, SUM, and COUNT). These functions still need to see the individual data items over which they are computing an aggregation. For example, after FROM employee AS e GROUP BY deptno, it would not be valid to reference e.salary, but AVG(e.salary) would be valid.

Special case: In an expression inside a FROM clause, a variable is in scope if it was bound in an earlier expression in the same FROM clause. Example:

FROM orders AS o, o.items AS i

The reason for this special case is to support iteration over nested collections.

Note that, since the SELECT clause comes after the WHERE and GROUP BY clauses in conceptual processing order, any variables defined in SELECT are not visible in WHERE or GROUP BY. Therefore the following query will not return what might be the expected result (since in the WHERE clause, pay will be interpreted as a field in the emp object rather than as the computed value salary + bonus):

SELECT name, salary + bonus AS pay
FROM emp
WHERE pay > 1000
ORDER BY pay

The process of name resolution begins with the leftmost identifier in the name. The rules for resolving the leftmost identifier are:

In a FROM clause: Names in a FROM clause identify the collections over which the query block will iterate. These collections may be stored datasets or may be the results of nested query blocks. A stored dataset may be in a named dataverse or in the default dataverse. Thus, if the two-part name a.b is in a FROM clause, a might represent a dataverse and b might represent a dataset in that dataverse. Another example of a two-part name in a FROM clause is FROM orders AS o, o.items AS i. In o.items, o represents an order object bound earlier in the FROM clause, and items represents the items object inside that order.

The rules for resolving the leftmost identifier in a FROM clause (including a JOIN subclause), or in the expression following IN in a quantified predicate, are as follows:

If the identifier matches a variable-name that is in scope, it resolves to the binding of that variable. (Note that in the case of a subquery, an in-scope variable might have been bound in an outer query block; this is called a correlated subquery.)

Otherwise, if the identifier is the first part of a two-part name like a.b, the name is treated as dataverse.dataset. If the identifier stands alone as a one-part name, it is treated as the name of a dataset in the default dataverse. An error will result if the designated dataverse or dataset does not exist.

Elsewhere in a query block: In clauses other than FROM, a name typically identifies a field of some object. For example, if the expression a.b is in a SELECT or WHERE clause, it’s likely that a represents an object and b represents a field in that object.

The rules for resolving the leftmost identifier in clauses other than the ones listed in Rule 1 are:

If the identifier matches a variable-name that is in scope, it resolves to the binding of that variable. (In the case of a correlated subquery, the in-scope variable might have been bound in an outer query block.)

(The “Single Variable Rule”): Otherwise, if the FROM clause (or a LET clause if there is no FROM clause) in the current query block binds exactly one variable, the identifier is treated as a field access on the object bound to that variable. For example, in the query FROM customer SELECT address, the identifier address is treated as a field in the object bound to the variable customer. At runtime, if the object bound to customer has no address field, the address expression will return missing. If the FROM clause (and its LET subclause, if any) in the current query block binds multiple variables, name resolution fails with an “ambiguous name” error. Note that the Single Variable Rule searches for bound variables only in the current query block, not in outer (containing) blocks. The purpose of this rule is to permit the compiler to resolve field-references unambiguously without relying on any schema information.

Exception: In a query that has a GROUP BY clause, the Single Variable Rule does not apply in any clauses that occur after the GROUP BY because, in these clauses, the variables bound by the FROM clause are no longer in scope. In clauses after GROUP BY, only Rule 2.1 applies.

In an ORDER BY clause following a UNION ALL expression:

The leftmost identifier is treated as a field-access on the objects that are generated by the UNION ALL. For example:

query-block-1
UNION ALL
query-block-2
ORDER BY salary

In the result of this query, objects that have a salary field will be ordered by the value of this field; objects that have no salary field will appear at the beginning of the query result (in ascending order) or at the end (in descending order.)

Once the leftmost identifier has been resolved, the following dots and identifiers in the name (if any) are treated as a path expression that navigates to a field nested inside that object. The name resolves to the field at the end of the path. If this field does not exist, the value missing is returned.

Apache AsterixDB, AsterixDB, Apache, the Apache
feather logo, and the Apache AsterixDB project logo are either
registered trademarks or trademarks of The Apache Software
Foundation in the United States and other countries.
All other marks mentioned may be trademarks or registered
trademarks of their respective owners.