
How to improve data cache hit rate on MaxDB 7.6

Former Member
0 Kudos

Hello experts,

I am trying to improve system performance on ECC 6.0 with MaxDB 7.6. I have noticed that if I start the database only, my data cache hit rate is at 99%. Once I start the SAP instance, it drops to 92%, even though there are no users working on the system. So I think there is some tuning I need to do on the SAP instance side.

Any ideas welcome

Accepted Solutions (0)

Answers (1)


Active Contributor
0 Kudos

This is normal behaviour.

When you start your database and your system, the cache is empty and everything has to be read from disk into the cache for the first time. The hit rate will increase as soon as you call transactions that have been called before.


Former Member
0 Kudos

Thanks, Markus

It has been constantly at that rate. I want to tune it to reach at least 99%. I increased CACHE_SIZE and CAT_CACHE_SUPPLY, but it seems the catalog cache is the one letting me down; it is at around 88%.

Former Member
0 Kudos

Hello, we have SAP ECC 6.0 on MaxDB 7.6:


Data Cache: 99%
Catalog Cache: 68%

We modified CAT_CACHE_SUPPLY from 9,000 to 11,000, but it didn't work.

What else must we take into account to improve performance?

Former Member
0 Kudos

Don't worry about the catalog cache hit rate of 88%. The catalog cache is a cache 'above' the data cache, so a miss in the catalog cache normally doesn't produce any I/O.

Besides, increasing the catalog cache size doesn't always produce higher hit rates, because some events (DDL, transaction rollback) clear the cache, and accesses to non-existent objects also reduce the hit rate.

Best Regards,


Active Contributor
0 Kudos


> It has been constantly at that rate. I want to tune it to reach at least 99%.

What a totally senseless tuning goal!

What do you believe is the advantage of having 99% cache hit ratio? Better performance?

Shorter response times?

The buffer cache hit ratio (BCHR) has no direct connection to your response times!

You could have a BCHR of 90% and the performance is fine.

You could have a BCHR of 99% and everything could be terribly slow.

What you should tune is response time. Find out what takes most of the time in your business transactions. And then change that.

If the reason for the long response time is the database, then figure out what happens in the database while the statement is losing time.

Is the session waiting for locks? Is it waiting on a lot of I/O? Is the optimizer's access plan efficient? Could an index improve the data access performance?

MaxDB comes with the DBAnalyzer tool that helps you to answer some of the above questions.

You can also monitor your statements with the Command Monitor and check the session state via the x_cons tool.

You can do all of this - but tuning by BCHR won't help you here.

If you still want to have a superb BCHR just do:

- Create a table - a big one. Don't provide any primary key or index.

- Run

select count (column_name1) from table

over and over and over and over again.

After a while your BCHR will go up to 99% that way, without you having analyzed anything in detail.
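The effect described above can be illustrated with a toy simulation (plain Python, not MaxDB code; the page counts are made up for illustration): a table that fits entirely in an LRU page cache misses only on the first scan, so repeating the same full scan pushes the hit ratio toward 99% without saying anything about real response times.

```python
from collections import OrderedDict

def simulate_scans(table_pages, cache_pages, scans):
    """Toy LRU page cache: repeatedly scan a table and return the hit ratio in percent."""
    cache = OrderedDict()  # page id -> None, ordered from least to most recently used
    hits = misses = 0
    for _ in range(scans):
        for page in range(table_pages):
            if page in cache:
                hits += 1
                cache.move_to_end(page)
            else:
                misses += 1
                cache[page] = None
                if len(cache) > cache_pages:
                    cache.popitem(last=False)  # evict the least recently used page
    return 100.0 * hits / (hits + misses)

# A table that fits entirely in cache: only the first of 100 scans misses.
print(round(simulate_scans(table_pages=1000, cache_pages=2000, scans=100), 1))  # → 99.0
```

Note how the 99% here is produced purely by repetition; the workload itself is pointless, which is exactly the point being made about tuning by BCHR.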



Former Member
0 Kudos

Hi,

Can somebody help me with how to find or calculate:

1. DBHR (database hit ratios) in MaxDB, in order to decide the optimal value for SharedSQLcommandcachesize in MaxDB?

Thanks,
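The hit ratio asked about above is just a percentage derived from the accumulated access counters that tools like DBAnalyzer report per cache (successful vs. unsuccessful accesses). A minimal sketch of the arithmetic, with made-up counter values:

```python
def hit_ratio(successful, unsuccessful):
    """Cache hit ratio in percent from accumulated access counters."""
    total = successful + unsuccessful
    if total == 0:
        return 0.0  # no accesses yet, e.g. right after startup
    return 100.0 * successful / total

# Example: 9,920,000 cache hits vs. 80,000 physical reads since startup.
print(round(hit_ratio(9_920_000, 80_000), 1))  # → 99.2
```

As pointed out earlier in this thread, such a number says little on its own; it only becomes meaningful alongside actual response time analysis.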