
common ways to speed up sqla

Former Member

Hi. Our application has a function that calculates various totals on employee schedules. One very large group's schedule was taking about 40 seconds to process before a defrag, dump, load, defrag was performed. After that rebuild/defrag process the user reported that it was taking about 35 seconds. Besides making sure there is enough memory to cache the entire database, and the rebuild/defrag process described above, what are some other ways to speed up SQLA database processing? I realize we may have to look at redesigning the schedule-totaling function, but I want to make sure that SQLA is set up for its fastest possible performance.

Thanks, Doug.

VolkerBarth
Contributor

IMVHO, re-designing complex and "slow" queries is usually much more effective than trying to speed up the "whole database" as such, unless you are faced with an obvious bottleneck like far too little RAM and the like.

Besides that, to avoid the common answer "It depends...", I guess you will have to ask more specific questions and say more about the options you are considering. I don't think there is one single setup for "the fastest possible performance" in general...

Accepted Solutions (0)

Answers (3)


Former Member

Hi Doug

We are using SQLA for intensive calculations (less data retrieval). Some pointers:

  1. Use the profiler in Sybase Central to find your bottlenecks (profile only procedures and functions).

  2. There is an overhead in calling functions that one tends to ignore. If you have lots of functions that each perform a small task, consider moving them back into the main procedure. There are strict guidelines as to when SQLA will inline a function into the calling routine, but generally speaking you will do well to flatten your depth of functions calling functions, calling... you get the idea.

  3. Consider materialized views. Following your performance profiling (1), see if you can replace some data retrieval from views (if any) with materialized views. It can make a huge difference in calculation performance if you reduce the fetch-time of single-row selections.

  4. FASTFIRSTROW. If your calculation typically requires one record from a data set, consider using the FASTFIRSTROW table hint in a query's FROM clause to set the optimization goal for that specific query without having to change the database's global optimization_goal setting.
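Regarding point 1: if you prefer to enable profiling from SQL rather than through Sybase Central, a sketch along these lines should work (this uses SQL Anywhere's built-in system procedures; run it as a user with DBA authority):

```sql
-- Turn on procedure profiling server-wide.
CALL sa_server_option( 'ProcedureProfiling', 'YES' );

-- ... run the slow schedule-totaling workload here ...

-- Summarize time spent per procedure/function to find the hot spots.
CALL sa_procedure_profile_summary();

-- Then get line-by-line timings for the procedures found above.
CALL sa_procedure_profile();

-- Turn profiling off again when done, since it adds overhead.
CALL sa_server_option( 'ProcedureProfiling', 'NO' );
```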
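Points 3 and 4 can be sketched like this (the table and column names here are made up for illustration, not from Doug's schema):

```sql
-- Point 3: precompute an expensive aggregation as a materialized view,
-- so single-row lookups against the totals become cheap.
CREATE MATERIALIZED VIEW SchedTotals AS
  SELECT emp_id, SUM( hours ) AS total_hours
  FROM ScheduleDetail
  GROUP BY emp_id;

REFRESH MATERIALIZED VIEW SchedTotals;

-- Point 4: ask the optimizer for the first row quickly in this query only,
-- instead of changing the database-wide optimization_goal option.
SELECT emp_id, total_hours
FROM SchedTotals WITH ( FASTFIRSTROW )
WHERE emp_id = 42;
```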

As Volker stated, every use-case is different, but you cannot go wrong with these steps (especially the first one). Good luck!

VolkerBarth
Contributor

FWIW, I'm quite sure that with his second point, Liam is referring to the following FAQ:

Deep level calling of Stored Procedures - Expensive

Former Member

Impressive memory Volker 🙂

VolkerBarth
Contributor

...nah, just the linked pages of this forum :)

VolkerBarth
Contributor

...or should I have said: "Virtual memory!"?

Breck_Carter
Participant

When you follow Liam's advice to use the procedure profiler, you may discover one or more slow SQL queries.

If you obtain representative and meaningful plans (Graphical Plan With Statistics) for those queries, maybe you can see where to apply Michael's advice to create an index.

You could also save and post the saplan files here.

...and/or run the Index Consultant.

...and/or run Database Tracing In 50 Easy Steps (that's a last resort 🙂).

mmangels
Explorer

Hi Doug,

I would first make sure your database server and client are running the latest EBF patch and, if possible, the latest release of SQL Anywhere (16). We've seen several major performance improvements from installing the latest patch (on version 11.0.1).

Indexes can really speed things up. I would check which queries are executed during your 35-40 second window and try to optimize them by adding specific indexes. Remember that adding indexes also has drawbacks!
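For example, if profiling shows a slow lookup on the schedule data, a matching index can turn a table scan into a few page reads. A minimal sketch (again, the table and column names are illustrative, not from Doug's actual schema):

```sql
-- Composite index matching a frequent WHERE clause such as
--   WHERE emp_id = ? AND work_date BETWEEN ? AND ?
CREATE INDEX idx_sched_emp_date
  ON ScheduleDetail ( emp_id, work_date );

-- The drawback mentioned above: every INSERT/UPDATE/DELETE on
-- ScheduleDetail must now also maintain this index, so add indexes
-- only where a query plan shows they are actually needed.
```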