
I'm currently doing some comparisons between a database running on 12.0.1.4278, 16.0.0.2158 and 17.0.0.1062 (i.e. the latest builds) on a modest Windows 7 64-bit machine with 8 GB RAM and 4 logical processors. The database was rebuilt for each of these versions, and each instance has run some tests on the 64-bit engine (using all logical processors) with identical settings in a (hopefully) almost identical environment.

With the default max_query_tasks option set to 0 (to allow intra-query parallelism), the tests show that v16 and v17 make heavy use of parallel operators - property('ExchangeTasksCompleted') lists about 1.5 million tasks - whereas v12 does not (below 100 tasks).
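For reference, those counters can be checked from any connection; a minimal sketch using the property and connection_property functions:

```sql
-- Server-wide count of completed exchange (parallel) tasks,
-- and the current connection's max_query_tasks setting:
SELECT property( 'ExchangeTasksCompleted' ) AS exchange_tasks_completed,
       connection_property( 'max_query_tasks' ) AS max_query_tasks;
```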

For the tested workload, comparisons have shown that intra-query parallelism unfortunately decreases performance noticeably (the complete tests take about 25 % longer), so with v16/v17 I would need to temporarily set max_query_tasks to 1 for the affected connections to prevent a significant performance degradation compared to v12. (The workload itself is quite "serial" by design, so I am not surprised that parallelism is not helpful here.)
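Disabling parallelism for just those connections can be done with a temporary option; a sketch:

```sql
-- Applies only to the current connection and is reset on disconnect:
SET TEMPORARY OPTION max_query_tasks = 1;

-- ... run the "serial" workload here ...

-- Back to the default (0 = the optimizer may parallelize freely):
SET TEMPORARY OPTION max_query_tasks = 0;
```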

Question: Unless I have overlooked that information, the v16/v17 docs do not state that the usage of intra-query parallelism has been enhanced compared to v12. Is this result of my tests an expected behaviour change?

(I won't be able to offer plans for the queries in question, as they are embedded in stored procedures and functions... - it's just a general question about these versions.)

Now open Sybase Central/SQL Central, choose Application Profiling mode, "Open an analysis file or connect to a tracing database", "Connect to a tracing database", and input the information for your tracing database. You should now see the statements run by your procedures; right-click each statement to see the plans.

Sorry, my problem is the amount of data, as a huge number of requests is made by issuing just a few stored function calls. I had initially tried to generate an RLL with level "all" and got a really huge log file (as mentioned, with more than a million parallel requests - even the corresponding -o console log is several MB in size...). - That made me ask for more general information (which does not seem to be available, according to your and Ani's answers).
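To keep the request-level log manageable, the logging level can also be narrowed at runtime via sa_server_option; a sketch (the file path is a placeholder):

```sql
-- Log only statements and their plans instead of level 'ALL':
CALL sa_server_option( 'RequestLogFile', 'rll.txt' );   -- placeholder path
CALL sa_server_option( 'RequestLogging', 'SQL+PLAN' );

-- ... reproduce the slow stored function calls ...

CALL sa_server_option( 'RequestLogging', 'NONE' );
```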

I guess I will have to try to break things down in order to get something traceable. So a big thanks for the exact hints!

@Volker: If you can add code to your application or stored procedures to "pick and choose" which queries to capture the plans for, you can use the techniques described here... it's what I've used inside Foxhound to capture The Plans From Hell from among gigabytes of SQL requests.

It might be a lot of work to set up, but the filtering can be (almost) perfect.
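One way to do such selective capturing from inside a procedure is to call GRAPHICAL_PLAN() for the suspect statement and write the result to a file that Interactive SQL can open; a sketch (the query text and file path are placeholders, and this is not necessarily the exact technique used in Foxhound):

```sql
-- Capture the graphical plan of one specific query as XML:
CREATE VARIABLE @plan LONG VARCHAR;
SET @plan = GRAPHICAL_PLAN(
    'SELECT count(*) FROM my_table WHERE my_col = 42' );  -- placeholder query

-- Save it with the .saplan extension so Interactive SQL can display it:
CALL xp_write_file( 'suspect_query.saplan', @plan );      -- placeholder path
```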

Yes, that's exactly the problem: as stated, I'm primarily issuing a few stored functions that themselves call other functions - quite a complex hierarchy of calls and queries. It will take a while to get something reproducible... - and believe it or not, my personal throughput is usually best when I choose "set option Volker.max_query_tasks = 1", too...

Sorry for the very late reply - I guess I'm facing some kind of "premature optimization" problem here, i.e. taking the time to modify the procedures/functions to gather the plans of their relevant queries would take much longer than the execution times of the - rarely used - maintenance tasks themselves...(*) - and that does not even include the time and effort to compare and analyse the plans...

As the database was migrated to v16 a while ago and I frequently run ad-hoc queries, I have learnt that temporarily setting max_query_tasks to 1 often helps when queries take longer than expected. So intra-query parallelism remains a "grab bag" for me :)