I am currently working on a project where we are extracting data from some AX tables. The data from these extracts is transformed in various ways to be consumed by various applications. One of the transformations requires many column values to be concatenated into a single column value. The query that currently does this was written by a contractor who has since left. It uses FOR XML PATH to concatenate values. The query currently takes 24 minutes to run. We are still in development, but ideally in production this data needs to be extracted every 7 minutes. I would like to know if there is a way I can tune the query, or rewrite it in another way, that could speed up the retrieval. The source tables contain about 4 to 5 million records.

The way you have written the query, SELECT (SELECT ...), (SELECT ...), ... FROM, forces those massive nested loop joins on 900K rows, and the logical IO from those is stunningly high.

First, make sure you have a covering index supporting each of the correlated subqueries. Even that may not help much with all those 900K iterations, though.
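As a sketch only (the table and column names here are taken from the code later in this thread, and the index name is made up), a covering index for one of those correlated subqueries would look something like this: key columns match the correlation predicate, and INCLUDE carries the concatenated column so the subquery never has to touch the base table.

    -- Hypothetical covering index for one of the correlated subqueries.
    CREATE NONCLUSTERED INDEX IX_PstProdTmplData_TemplateID_SpecID
        ON dbo.PstProdTmplData (TemplateID, SpecID)
        INCLUDE (Data);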

If it isn't fast enough, I would switch to a CLR object to do the concatenation. That will probably be the most efficient. It also avoids not one but TWO scenarios currently where you can get the WRONG OUTPUT!! A) you have no ORDER BY in the correlated subqueries, meaning the concatenation can come out in any order, and B) XML "special characters" (think <, >, &) get entitized by FOR XML and will corrupt the output unless you handle them.
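For what it's worth, both problems can also be fixed without going to CLR. A sketch, using the dbo.PstProdTmplData names that appear later in this thread: add an ORDER BY inside the subquery, and use the TYPE directive with .value() so entitized characters (&lt;, &gt;, &amp;) come back out as <, >, &.

    SELECT p1.TemplateID,
           SpecIDs = STUFF(
               ( SELECT ' ; ' + p2.Data
                   FROM dbo.PstProdTmplData p2
                  WHERE p2.TemplateID = p1.TemplateID
                  ORDER BY p2.Data                -- deterministic order
                    FOR XML PATH(''), TYPE        -- keep result typed as XML...
               ).value('.', 'varchar(max)')       -- ...then un-entitize it
               , 1, 3, '')
      FROM dbo.PstProdTmplData p1
     GROUP BY p1.TemplateID;

The .value() call adds some cost, but it is usually far cheaper than wrong output.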

You could even try a cursor-based solution to build the output of each of those correlated subqueries.

Thanks for the responses. I will try that alternative approach and let you know what my results are. CLR came up when I was discussing this with a colleague of mine. I will also check it out and let you know what my findings are.

I think Adam Machanic has done some blogging on the CLR side of things that could be useful.

You could also see here for some (mostly bad) options for string concat: https://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/

And this post shows you why the COALESCE/ISNULL method isn't viable (another ordering issue just like the FOR XML problem): http://msmvps.com/blogs/robfarley/archive/2007/04/08/coalesce-is-not-the-answer-to-string-concatentation-in-t-sql.aspx
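For reference, this is the variable-concatenation pattern that post warns about (the TemplateID value here is just a made-up example key). Multi-row variable assignment of this form is officially undefined behavior, so the result can silently drop or reorder rows even when an ORDER BY is present:

    -- The COALESCE/variable-assignment anti-pattern: don't rely on it.
    DECLARE @list varchar(max);
    SELECT @list = COALESCE(@list + ' ; ', '') + Data
      FROM dbo.PstProdTmplData
     WHERE TemplateID = 42        -- hypothetical key value
     ORDER BY Data;               -- order is NOT guaranteed to be honored here
    SELECT @list;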

eseosaoregie (4/13/2013): I am currently working on a project where we are extracting data from some AX tables. The data from these extracts is transformed in various ways to be consumed by various applications. One of the transformations requires many column values to be concatenated into a single column value. The query that currently does this was written by a contractor who has since left. It uses FOR XML PATH to concatenate values. The query currently takes 24 minutes to run. We are still in development, but ideally in production this data needs to be extracted every 7 minutes. I would like to know if there is a way I can tune the query, or rewrite it in another way, that could speed up the retrieval. The source tables contain about 4 to 5 million records.

I have also attached a copy of the execution plan of the latest query run. Any help would be much appreciated.

The major problem with that query is that it recalculates the delimited list for like rows, which is a huge waste of resources. Using "Divide'n'Conquer" methods, a separate table should be calculated to hold a single instance of each concatenation, grouped by I.ItemID and p.SPECID, using a single query with a GROUP BY.

The same is also true of the other lookup tables (like the BOM table). The same information is concatenated over and over and over for each row.

The key to performance on this problem will be to correctly "pre-aggregate" the concatenations in separate tables and then join to those tables.

To wit, even the introduction of a CLR object to do the concatenation might not be as performant as it could be, because with the current structure of the query it would still have to do the concatenation of identical data over and over.

--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code: Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T."--22 Aug 2013

Good points Jeff. I would definitely look at pre-populating the concatenated objects as temp tables. I had that in my notes I took while looking at alternatives for this and just missed putting it in the reply!

I looked at prepopulating those temp objects though, and I think getting both the table key and the concatenated string is a double hit on the table for the methods I checked (except for a cursor, which has its own issues, obviously). SQL CLR into a temp object could still be best.

Here's an example of what I'm talking about. Of course, I don't have access to the data so the code is completely untested but this will calculate the delimited aggregations just once for each TemplateID/SpecID combination in the PstProdTmplData table instead of 6 identical recalculations for each and every row of the InventTable.

While the following looks like a lot of code, it will allow your code to run with blazing performance compared to the way it is currently structured. It's a very common method (pre-aggregate, then pivot using a Cross Tab) for using an EAV table effectively.

   WITH ctePreAgg AS
(--===== Preaggregate the semicolon-delimited "Data" for each TemplateID/SpecID combination
     -- for performance. We'll pivot the data later.
 SELECT  p1.TemplateID
        ,p1.SpecID --To be used as a join filter in another query.
        ,SpecIDs = STUFF(
             ( SELECT ' ; ' + p2.Data
                 FROM dbo.PstProdTmplData p2
                WHERE p2.TemplateID = p1.TemplateID
                  AND p2.SpecID     = p1.SpecID --Correlate on SpecID, too, so each pivot column
                                                --gets only its own values.
                ORDER BY p2.Data --Without this, the concatenation order is arbitrary.
                  FOR XML PATH('')
             --(Add the TYPE directive and .value() here if Data can contain XML special characters.)
             ),1,3,'')
   FROM dbo.PstProdTmplData p1
  WHERE p1.SpecID IN ('Episode title'
                     ,'Commercial brand'
                     ,'Commercial product'
                     ,'Commercial type'
                     ,'Country of origin'
                     ,'Year of production')
    AND p1.TemplateID IN (SELECT ItemID FROM dbo.InventTable) --Implicitly DISTINCT and as fast as a join.
  GROUP BY p1.TemplateID, p1.SpecID
)
--===== Now, pivot the data so that it's normalized instead of being an EAV-style result set and store it
     -- all in a temporary lookup table for easy and very high performance joining in the final query.
 SELECT  TemplateID     = ISNULL(TemplateID,0) --Makes the column in the lookup table NOT NULL.
        ,CREpisodeTitle = MAX(CASE WHEN SpecID = 'Episode title'      THEN SpecIDs ELSE '' END)
        ,CRComBrand     = MAX(CASE WHEN SpecID = 'Commercial brand'   THEN SpecIDs ELSE '' END)
        ,CRComProduct   = MAX(CASE WHEN SpecID = 'Commercial product' THEN SpecIDs ELSE '' END)
        ,CRComType      = MAX(CASE WHEN SpecID = 'Commercial type'    THEN SpecIDs ELSE '' END)
        ,CRCountry      = MAX(CASE WHEN SpecID = 'Country of origin'  THEN SpecIDs ELSE '' END)
        ,CRYear         = MAX(CASE WHEN SpecID = 'Year of production' THEN SpecIDs ELSE '' END)
   INTO #Lookup_PstProdTmplData
   FROM ctePreAgg
  GROUP BY TemplateID;
--===== Add a Primary Key for extra join performance. We let the system name the PK on Temp Tables
     -- because such constraints must be uniquely named in the database and we don't want to destroy
     -- the ability of more than one instance of the code to run concurrently.
  ALTER TABLE #Lookup_PstProdTmplData ADD PRIMARY KEY CLUSTERED (TemplateID);

Once that's in place, the final query becomes a blazing-performance cakewalk. (Note that I didn't pre-aggregate/pivot all the tables that should be. You have to have some of the fun! )
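As a sketch only (I don't have the real query, so the InventTable column list is assumed), the final query reduces to a plain join against the pre-aggregated lookup table:

    SELECT  i.ItemID
           ,l.CREpisodeTitle
           ,l.CRComBrand
           ,l.CRComProduct
           ,l.CRComType
           ,l.CRCountry
           ,l.CRYear
      FROM dbo.InventTable i
      JOIN #Lookup_PstProdTmplData l
        ON l.TemplateID = i.ItemID; --One clustered-index seek per row
                                    --instead of 6 correlated subqueries.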

As a bit of a sidebar, this isn't Oracle and we don't have the 30-character object name limitation. Consider NOT using abbreviations for table names in the future, as they serve only to daze and confuse the uninitiated. For example, whoever designed these tables used the name "PstProdTmplData" for a table that should have been named "PostProductionTemplate". It's only 7 characters longer, and there's no chance of someone misreading the "Tmp" in the original abbreviated name as meaning "Temporary", especially if they miss the "l" in the name.

Also, we all know tables have data in them so the word "Data" in the table name is a bit superfluous.
