Dimensional Benchmarking of String Splitting SQL

I noticed a question on AskTom last November concerning SQL for splitting delimited strings, Extract domain names from a column having multiple email addresses, a kind of question that arises frequently on the forums. There was some debate between reviewers Rajeshwaran Jeyabal, Stew Ashton and the AskTom guys on whether an XML-based solution performs better or worse than a more 'classic' solution based on the Substr and Instr functions and collections. AskTom's Chris Saxon noted:

For me this just highlights the importance of testing in your own environment with your own data. Just because someone produced a benchmark showing X is faster, doesn't mean it will be for you.

For me, relative performance is indeed frequently dependent on the size and 'shape' of the data used for testing. As I have my own 'dimensional' benchmarking framework, A Framework for Dimensional Benchmarking of SQL Performance, I was able to very quickly adapt Rajesh's test data to benchmark across numbers of records and numbers of delimiters, and I put the results on the thread. I then decided to take the time to expand the scope to include other solutions, and to use more general data sets, where the token lengths vary as well as the number of tokens per record.

In fact the scope expanded quite a bit, as I found more and more ways to solve the problem, and I have only now found the time to write it up. Here is a list of all the queries considered:

Queries using Connect By for row generation

MUL_QRY - Cast/Multiset to correlate Connect By

LAT_QRY - v12 Lateral correlated Connect By

UNH_QRY - Uncorrelated Connect By unhinted

RGN_QRY - Uncorrelated Connect By with leading hint

GUI_QRY - Connect By in main query using sys_guid trick

RGX_QRY - Regular expression function, Regexp_Substr

Queries not using Connect By for row generation

XML_QRY - XMLTABLE

MOD_QRY - Model clause

PLF_QRY - database pipelined function

WFN_QRY - 'WITH' PL/SQL function directly in query

RSF_QRY - Recursive Subquery Factor

RMR_QRY - Match_Recognize

Test Problem

DELIMITED_LISTS Table

CREATE TABLE delimited_lists(id INT, list_col VARCHAR2(4000))
/

Functional Test Data

The test data consist of pipe-delimited tokens ('|') in a VARCHAR2(4000) column, in a table with a separate integer unique identifier. For functional testing we add a single 'normal' record with two tokens, plus four more records designed to validate null-token edge cases.
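The original list of functional test records is not reproduced here, but a representative set along the lines described might look like the following (the values, and which edge case each record covers, are illustrative assumptions only, not the records actually used):

INSERT INTO delimited_lists VALUES (1, 'token_1|token_2');  -- 'normal' record: two tokens
INSERT INTO delimited_lists VALUES (2, '|token_2');         -- null leading token
INSERT INTO delimited_lists VALUES (3, 'token_1|');         -- null trailing token
INSERT INTO delimited_lists VALUES (4, 'token_1||token_3'); -- null middle token
INSERT INTO delimited_lists VALUES (5, '|||');              -- all tokens null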

All queries returned the expected results, except that the XML query returned 12 rows, with only a single null token returned for record 5. No null tokens were included in the performance testing, and there all queries returned the same results.

Performance Test Data

Each test set consisted of 3,000 records, with the list_col column containing a delimited string constructed according to width (w) and depth (d) parameters, as follows (a sample generator is sketched after the list):

Each record contains w tokens

Each token contains d characters from the sequence 1234567890 repeated as necessary
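The framework's actual data generator is not shown in this section, but a minimal sketch of one way to build such a test set, using SQL*Plus substitution variables for w and d, is:

DEFINE w = 150
DEFINE d = 18

INSERT INTO delimited_lists (id, list_col)
SELECT ROWNUM,
       -- w copies of a d-character token drawn from '1234567890', '|'-delimited
       RTrim(Replace(Rpad('x', &w, 'x'), 'x',
                     Rpad('1234567890', &d, '1234567890') || '|'), '|')
  FROM DUAL
CONNECT BY ROWNUM <= 3000
/

Note that w*d + w - 1 must not exceed the 4,000-character VARCHAR2 limit.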

The output from the test queries therefore consists of 3,000*w records with a unique identifier and a token of length d. For performance testing purposes the benchmarking framework writes the results to a file in csv format, while counting only the query steps in the query timing results.

In Oracle up to version 11.2, VARCHAR2 expressions cannot be longer than 4,000 characters, so I decided to run the framework for four sets of parameters, as follows:

Depth fixed, high; width range low: d=18, w=(50,100,150,200)

Depth fixed, low; width range high: d=1, w=(450,900,1350,1800)

Width fixed, low; depth range high: w=5, d=(195,390,585,780)

Width fixed, high; depth range low: w=150, d=(6,12,18,24)

All the queries showed strong time correlation with width, while a few also showed strong correlation with depth.

Queries

All execution plans are from the data point with Width=1800, Depth=1, which has the largest number of tokens per record.

This is the 'classic' CONNECT BY solution (MUL_QRY) referred to above, which appears frequently on AskTom and elsewhere; I copied the version used by Rajesh. The somewhat convoluted casting from subquery to array and back to SQL records via Multiset allows the prior table in the from list to be referenced within the inline view, which is otherwise not permitted in versions earlier than 12.1, where the LATERAL keyword was introduced.
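The benchmarked query is not reproduced here, but the Multiset pattern generally takes a form like the following sketch (the token-extraction details may differ from the version actually tested):

SELECT d.id,
       Substr('|' || d.list_col || '|',
              Instr('|' || d.list_col || '|', '|', 1, l.COLUMN_VALUE) + 1,
              Instr('|' || d.list_col || '|', '|', 1, l.COLUMN_VALUE + 1)
            - Instr('|' || d.list_col || '|', '|', 1, l.COLUMN_VALUE) - 1) token
  FROM delimited_lists d,
       TABLE(CAST(MULTISET(
                SELECT LEVEL FROM DUAL
                CONNECT BY LEVEL <= Length(d.list_col)
                                  - Length(Replace(d.list_col, '|')) + 1
             ) AS SYS.ODCINumberList)) l
/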

Despite being regarded as the 'classic' CONNECT BY solution to string-splitting, this query turns out to be slower, across all data points considered, than a query I wrote myself (UNH_QRY, below). The new query is also simpler, though not the most efficient of all methods, as we will see later.

This query (LAT_QRY) is taken from Splitting Strings: Proof!, and uses a new v12.1 feature, described with examples in LATERAL Inline Views. The feature allows you to correlate an inline view directly, without the convoluted Multiset code, and can also be used with the keywords CROSS APPLY instead of LATERAL. It is sometimes regarded as having performance advantages, but in this context we will see that avoiding the correlation altogether is better for performance.
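A sketch of the LATERAL form of the same query (again, details may differ from the benchmarked version):

SELECT d.id,
       Substr('|' || d.list_col || '|',
              Instr('|' || d.list_col || '|', '|', 1, l.pos) + 1,
              Instr('|' || d.list_col || '|', '|', 1, l.pos + 1)
            - Instr('|' || d.list_col || '|', '|', 1, l.pos) - 1) token
  FROM delimited_lists d,
       LATERAL (SELECT LEVEL pos FROM DUAL
                CONNECT BY LEVEL <= Length(d.list_col)
                                  - Length(Replace(d.list_col, '|')) + 1) l
/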

I wrote the UNH_QRY query in an attempt to avoid the convoluted Multiset approach of the 'classic' solution. The reason for the use of arrays and Multiset seems to be that, while we need to 'generate' multiple rows for each source row, the number of rows generated has to vary by source record and so the row-generating inline view computes the number of tokens for each record in its where clause.

The use of row-generating subqueries is quite common, but in other cases one often has a fixed number of rows to generate, as in data densification scenarios for example. It occurred to me that, although we don't know the number to generate, we do have an upper bound, dependent on the maximum number of characters, and we could generate that many in a subquery, then join only as many as are required to the source record.

This approach resulted in a simpler, more straightforward query, but in its initial form it turned out to be very slow. The execution plan shows the row generator driving a nested loops join, within which a full scan is performed on the table. The CBO is not designed to optimise this type of algorithmic query, so I added a leading hint to reverse the join order, giving RGN_QRY, which resulted in much better performance. In fact, as we see later, the hinted query outperforms the other CONNECT BY queries, including the v12.1 LAT_QRY query, at all data points considered.
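A sketch of the uncorrelated approach with the leading hint applied (the upper bound of 2,000 generated rows and the exact hint form are assumptions for illustration):

WITH row_gen AS (
        SELECT LEVEL pos FROM DUAL
        CONNECT BY LEVEL <= 2000  -- upper bound on tokens per record
)
SELECT /*+ LEADING(d) */ d.id,
       Substr('|' || d.list_col || '|',
              Instr('|' || d.list_col || '|', '|', 1, r.pos) + 1,
              Instr('|' || d.list_col || '|', '|', 1, r.pos + 1)
            - Instr('|' || d.list_col || '|', '|', 1, r.pos) - 1) token
  FROM delimited_lists d
  JOIN row_gen r
    ON r.pos <= Length(d.list_col) - Length(Replace(d.list_col, '|')) + 1
/

Without the hint (UNH_QRY) the join order is reversed, and the table is fully scanned once per generated row.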

This query (GUI_QRY) also generates rows using CONNECT BY, but differs from the others shown by integrating the row-generation code with the main rowset, avoiding a separate subquery against DUAL. This seems to be a more recent approach than the traditional Multiset solution. It uses a trick involving the system function sys_guid() to avoid the 'connect by cycle' error that you would otherwise get, as explained in the OTN thread Reg : sys_guid().
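A sketch of the sys_guid form (token extraction as before):

SELECT id,
       Substr('|' || list_col || '|',
              Instr('|' || list_col || '|', '|', 1, LEVEL) + 1,
              Instr('|' || list_col || '|', '|', 1, LEVEL + 1)
            - Instr('|' || list_col || '|', '|', 1, LEVEL) - 1) token
  FROM delimited_lists
CONNECT BY LEVEL <= Length(list_col) - Length(Replace(list_col, '|')) + 1
       AND PRIOR id = id
       AND PRIOR Sys_Guid() IS NOT NULL  -- non-deterministic PRIOR avoids the cycle error
/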

Unfortunately, and despite its current popularity on OTN, it turns out to be even less efficient than the earlier approaches, by quite a margin.

This is a fairly well known approach (PLF_QRY) that does the string splitting within a pipelined database function, to which the delimited string is passed as a parameter. I wrote my own version for this article, taking care to make only one call each to the Oracle functions Instr and Substr within a loop over the tokens.
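My function is not listed in this section, but a minimal sketch of a pipelined splitter along those lines might be (the type and function names are mine, and a single-character delimiter is assumed):

CREATE OR REPLACE TYPE strings_t AS TABLE OF VARCHAR2(4000)
/
CREATE OR REPLACE FUNCTION split_string(p_string VARCHAR2) RETURN strings_t PIPELINED IS
  l_pos   PLS_INTEGER := 1;
  l_next  PLS_INTEGER;
BEGIN
  LOOP
    l_next := Instr(p_string, '|', l_pos);              -- one Instr call per token
    IF l_next = 0 THEN
      PIPE ROW(Substr(p_string, l_pos));                -- last token
      EXIT;
    END IF;
    PIPE ROW(Substr(p_string, l_pos, l_next - l_pos));  -- one Substr call per token
    l_pos := l_next + 1;
  END LOOP;
  RETURN;
END;
/
SELECT d.id, t.COLUMN_VALUE token
  FROM delimited_lists d, TABLE(split_string(d.list_col)) t
/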

The results confirm that it is in fact the fastest approach over all data points considered, and its CPU time increased approximately linearly with the number of tokens.

Oracle introduced the ability to include a PL/SQL function definition directly in a query in version 12.1. For WFN_QRY I converted my pipelined function into a function defined within the query itself, returning an array of character strings.
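A minimal sketch of such a query, using the standard collection type SYS.ODCIVarchar2List (the function body is my reconstruction, not necessarily the one benchmarked):

WITH FUNCTION split_tokens(p_string VARCHAR2) RETURN SYS.ODCIVarchar2List IS
  l_ret   SYS.ODCIVarchar2List := SYS.ODCIVarchar2List();
  l_pos   PLS_INTEGER := 1;
  l_next  PLS_INTEGER;
BEGIN
  LOOP
    l_next := Instr(p_string, '|', l_pos);
    l_ret.EXTEND;
    IF l_next = 0 THEN
      l_ret(l_ret.COUNT) := Substr(p_string, l_pos);
      RETURN l_ret;
    END IF;
    l_ret(l_ret.COUNT) := Substr(p_string, l_pos, l_next - l_pos);
    l_pos := l_next + 1;
  END LOOP;
END;
SELECT d.id, t.COLUMN_VALUE token
  FROM delimited_lists d, TABLE(split_tokens(d.list_col)) t
/

(In SQL*Plus a query containing a WITH function must be terminated with a slash rather than a semicolon.)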

As we would expect from the results of the similar pipelined function approach, this also turns out to be a very efficient solution. However, it may be surprising to many that it is significantly slower (20-30%) than using the separate database function, given the prominence that is usually assigned to context-switching.

Oracle introduced Match_Recognize in v12.1 as a mechanism for pattern matching across records, along the lines of regular expressions for strings. I wrote the query (RMR_QRY) for this article, converting each character in the input strings into a separate record to allow its use.
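The query is not listed here; the following is a sketch of the general shape, reconstructed from the description above (the pattern and measures are my assumptions, and null tokens are ignored, as in the performance test data):

WITH chars AS (
        SELECT d.id, l.pos, Substr(d.list_col, l.pos, 1) ch
          FROM delimited_lists d,
               LATERAL (SELECT LEVEL pos FROM DUAL
                        CONNECT BY LEVEL <= Length(d.list_col)) l
)
SELECT m.id, Substr(d.list_col, m.st_pos, m.len) token
  FROM chars
       MATCH_RECOGNIZE (
         PARTITION BY id
         ORDER BY pos
         MEASURES First(tok.pos) st_pos, Count(tok.pos) len
         PATTERN ( tok+ dlm? )  -- a run of non-delimiter characters, then an optional delimiter
         DEFINE tok AS ch != '|',
                dlm AS ch = '|'
       ) m
  JOIN delimited_lists d ON d.id = m.id
/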

This approach might seem somewhat convoluted, and one might expect it to be correspondingly slow. As it turns out, though, it is faster than many of the other methods for most datasets, those with very long tokens being the exception, and CPU time increased linearly with both the number of tokens and the number of characters per token. It is notable that, apart from that exception, it outperformed the regular expression query.

Results

The CPU times are listed below; elapsed times were much the same. In each table the columns (one per query) are ordered by increasing CPU time at the last data point.

Depth fixed, high; width range low: d=18, w=(50,100,150,200)

Depth fixed, low; width range high: d=1, w=(450,900,1350,1800)

Width fixed, low; depth range high: w=5, d=(195,390,585,780)

Width fixed, high; depth range low: w=150, d=(6,12,18,24)

A Note on the Row Generation by Connect By Results

It is interesting to observe that the 'classical' mechanism for row-generation in string-splitting and similar scenarios turns out to be much slower than a simpler approach that removes the correlation of the row-generating subquery. The 'classical' mechanism has been proposed on Oracle forums over many years, while the simpler and faster approach seems to have gone undiscovered. The reason for its performance deficit is simple: running a Connect By query once for every master row is inherently inefficient. The use of the v12.1 LATERAL correlation syntax simplifies the code but does not improve performance by much.

The more recent approach to Connect By row-generation is to use the sys_guid 'trick' to embed the Connect By in the main query rather than in a correlated subquery, and this has become very popular on forums such as OTN. As we have seen, although simpler, it is even worse for performance: turning the whole query into a tree-walk does not help either. It is better to isolate the tree-walk, execute it once, and then just join its result set, as in RGN_QRY.

Conclusions

The database pipelined function solution (PLF_QRY) is generally the fastest across all data points

Using the v12.1 feature of a PL/SQL function embedded within the query is almost always next best, although slower by up to about a third; its being slower than a database function may surprise some

Times generally increased uniformly with numbers of tokens, usually either linearly or quadratically

Times did not seem to increase so uniformly with token size, except for XML (XML_QRY), Match_Recognize (RMR_QRY) and regular expression (RGX_QRY)

For larger numbers of tokens, three methods all showed quadratic variation and were very inefficient: Model (MOD_QRY), regular expression (RGX_QRY), and recursive subquery factors (RSF_QRY)

We have highlighted two inefficient but widespread approaches to row-generation by Connect By SQL, and pointed out a better method

These conclusions are based on the string-splitting problem considered, but no doubt would apply to other scenarios involving requirements to split rows into multiple rows based on some form of string-parsing.