Does v16 make much more use of intra-query parallelism than v12?

VolkerBarth
Contributor

I'm currently doing some comparisons between a database running on 12.0.1.4278, 16.0.0.2158 and 17.0.0.1062 (i.e. the latest builds) on a modest Windows 7 64-bit machine with 8 GB RAM and 4 logical processors. The database was rebuilt for each of these versions, and each instance has run the same tests on the 64-bit engine (using all logical processors) with identical settings in a (hopefully) almost identical environment.

With the default max_query_tasks option set to 0 (to allow intra-query parallelism), it turns out that v16 and v17 make heavy use of parallel operators - property('ExchangeTasksCompleted') lists about 1.5 million tasks - whereas v12 does not (below 100 tasks).

For the tested workload, comparisons have shown that intra-query parallelism unfortunately decreases performance noticeably (the complete tests take about 25 % longer), so with v16/v17 I would need to temporarily set max_query_tasks to 1 for the affected connections to prevent a significant performance degradation compared to v12. (The workload itself is quite "serial" by design, so I am not surprised that parallelism is not helpful here.)
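As a sketch for readers following along, checking the task counter and limiting parallelism for a single connection might look like this (SQL Anywhere syntax; SET TEMPORARY OPTION keeps the change local to the current connection):

```sql
-- Check how many parallel exchange tasks the server has completed so far
SELECT PROPERTY( 'ExchangeTasksCompleted' );

-- Disable intra-query parallelism for the current connection only
SET TEMPORARY OPTION max_query_tasks = 1;

-- ... run the workload to be compared ...

-- Restore the default (0 = let the server choose the degree of parallelism)
SET TEMPORARY OPTION max_query_tasks = 0;
```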


Question: Unless I have overlooked that information, the v16/v17 docs do not state that the use of intra-query parallelism has been enhanced compared to v12. Is this test result an expected behaviour change?

(I won't be able to offer plans for the queries in question as they are embedded in stored procedures and functions... - It's just a general question about these versions.)

Breck_Carter
Participant

> I won't be able to offer plans of the according queries as they are embedded in stored procedures and functions

See Capturing and Storing Plans With GRAPHICAL_PLAN()

(...and now you know how you're going to spend the rest of the day 🙂)
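For reference, a minimal sketch of capturing a plan with GRAPHICAL_PLAN() and storing it for later viewing (the table name PlanLog and the sample query are illustrative; statistics-level 2 requests a plan with detailed statistics):

```sql
-- Illustrative log table for captured plans
CREATE TABLE PlanLog (
    id          INT DEFAULT AUTOINCREMENT PRIMARY KEY,
    captured_at TIMESTAMP DEFAULT CURRENT TIMESTAMP,
    plan_xml    LONG VARCHAR
);

-- Capture the plan for a sample query as XML text and store it;
-- inside a procedure you would pass the query text of interest
INSERT INTO PlanLog( plan_xml )
SELECT GRAPHICAL_PLAN(
    'SELECT COUNT(*) FROM SYSTAB WHERE creator = 0', 2 );
COMMIT;
```

The saved XML can then be written to a .saplan file and opened in Interactive SQL's plan viewer.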

VolkerBarth
Contributor

Oh Breck, you made my day... to cite the summary from your elaborate list:

"3. When it's fast enough, stop."

MCMartin
Participant

It might be that it's not the intra-query parallelism itself but the statistics that have changed, so that the optimiser now prefers parallel execution.

Breck_Carter
Participant

The statistics are indeed powerful! 🙂

VolkerBarth
Contributor

Hm, I have rebuilt the databases with reloaded statistics, so I guess that should not matter here (unless identical stats would lead to different behaviour in newer versions).

VolkerBarth
Contributor

Dear SAP engineers, any official feedback is still highly appreciated :)

Accepted Solutions (1)


Former Member

As Ani mentioned, you can get the short plans by using request logging (use level "all"). This is the easiest option.
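A sketch of switching request logging on and off from SQL, assuming the documented sa_server_option settings (the log file name is illustrative):

```sql
-- Direct the request log to a file (name illustrative) and enable
-- logging of all request types, including plans
CALL sa_server_option( 'RequestLogFile', 'reqlog.txt' );
CALL sa_server_option( 'RequestLogging', 'all' );

-- ... run the procedure/workload to be analysed ...

-- Switch logging off again
CALL sa_server_option( 'RequestLogging', 'none' );
```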

You can also use diagnostic tracing to get full graphical plans:

dbinit -dba tracinguser,tracingpassword tracing.db
dbsrv## -x tcpip tracing.db

From your own database:

  insert into sa_diagnostic_tracing_level ( scope, identifier,
          trace_type, trace_condition, value, enabled ) 
          values( null, null, 'plans_with_statistics', null, null, 1 );
  commit;
  attach tracing to "uid=tracinguser;pwd=tracingpassword;eng=tracing";
  --run your procedure
  detach tracing with save;

Now open Sybase Central/SQL Central, choose Application Profiling mode, "Open an analysis file or connect to a tracing database", "Connect to a tracing database", and input the information for your tracing database. You should now see the statements run by your procedures; right-click each statement to see the plans.

VolkerBarth
Contributor

Sorry, my problem is the amount of data, as a really large number of requests is generated by issuing just a few stored function calls. I had initially tried to generate an RLL with level "all" and got a really huge log file (as mentioned, with more than a million parallel tasks - even the corresponding -o console log is several MB in size...). - That made me ask for more general information (which, according to your and Ani's answers, seems not to be available).

I guess I will have to try to break things down in order to get something traceable. So a big thanks for the exact hints!

Breck_Carter
Participant

@Volker: If you can add code to your application or stored procedures to "pick and choose" which queries to capture the plans for, you can use the techniques described here... it's what I've used inside Foxhound to capture The Plans From Hell from among gigabytes of SQL requests.

It might be a lot of work to set up, but the filtering can be (almost) perfect.

VolkerBarth
Contributor

Yes, that's exactly the problem: as stated, I'm primarily issuing a few stored functions that themselves call other functions - quite a complex hierarchy of calls and queries. It will take a while to get something reproducible... - and believe it or not, my personal throughput is usually best when I choose "set option Volker.max_query_tasks = 1", too...

Breck_Carter
Participant

Ha ha! ...maybe you need an assistant 🙂

Answers (2)


VolkerBarth
Contributor

Sorry for the very late reply - I guess I'm facing some kind of "premature optimization" problem here, i.e. taking the time to modify the procedures/functions to gather the plans of their relevant queries would take much longer than the execution times of the - rarely used - maintenance tasks themselves (*) - and that would not even include the time and effort to compare and analyse the plans...

As the database was migrated to v16 a while ago and I'm frequently running ad-hoc queries, I have learnt that temporarily setting max_query_tasks to 1 often helps when queries take longer than expected. So intra-query parallelism remains a "grab bag" for me :)

Anyway, thanks again for the helpful hints.


(*) Here's a - possibly - helpful chart:

Is It Worth the Time?

Former Member

We need to see the plans and queries to comment on this. Could you please generate at least a request-level log with all plans and all statements for these particular tests?

Thanks! Ani