
SQL log files

Former Member

We are experiencing problems with a full optimize + compress because our SQL transaction log grows to the size of its disk (10 GB). We manually shrink the log file, but as soon as we kick off an optimize it grows back to 10 GB in a matter of minutes. We're looking for some kind of configuration that will let the optimization process finish successfully without the log file growing so large. Our server is set to the simple recovery model. Because the optimization fails, all of our finance application records are being left in the FAC2 table instead of the fact table.
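For reference, the manual shrink we run looks roughly like this (the database and logical log file names here are placeholders for ours; check sp_helpfile for the real logical name):

-- Shrink the transaction log back down after it balloons (we are on the simple recovery model).
USE Finance;
DBCC SHRINKFILE (Finance_log, 1024);  -- target size in MB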

Can anyone help?

Accepted Solutions (1)


Former Member

1. What version of BPC and SQL are you on?

2. How many records are in your fact table for the application you're trying to compress?

3. How long has it been since you've compressed the application?

Shrinking the transaction log before starting the process will not help; if the processing requires that log space to complete the transaction, the log simply has to grow again, which actually makes the process take longer. Shrinking it after the process is done will release the disk space back. You have a limited number of options, and limiting the size of the transaction log is not one of them.

1. You could temporarily add an additional log file on a separate disk drive to get through the compression and then drop it (a rough sketch of the statements involved follows below).

2. If you have data that is no longer needed (old budget versions, etc.), you can delete it first and then run the compression.

However, if you are that tight on disk space you probably need to add additional disk space to your server.
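As a rough sketch of option 1 (the database name, file name, and path are only examples; adjust them for your environment):

-- Add a temporary second log file on a drive with free space.
ALTER DATABASE Finance
ADD LOG FILE (NAME = Finance_templog2,
              FILENAME = 'E:\SQLLogs\Finance_templog2.ldf',
              SIZE = 5GB,
              FILEGROWTH = 512MB);

-- Once the compression has finished and the extra file no longer holds any
-- active log, it can be dropped again (SQL Server will refuse to remove it
-- until it is empty).
ALTER DATABASE Finance REMOVE FILE Finance_templog2;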

As far as the records getting left in FAC2: once you get the optimization to run, you'll want to check your fact table and delete any records where SOURCE is not equal to 0. These would be duplicate records.
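Something along these lines would do it (tblFactFinance is just a placeholder for your application's fact table name, and make sure you have a good backup before deleting anything):

-- Check for leftover duplicates first.
SELECT COUNT(*) FROM tblFactFinance WHERE SOURCE <> 0;

-- If the count is not zero, those are the duplicates to remove.
DELETE FROM tblFactFinance WHERE SOURCE <> 0;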

Answers (2)


Former Member

Great info, James, thanks. A few comments that maybe you can help with.

From what you mention about the incremental optimize, it sounds like (now that it's fixed in 5.1) we never need to do a full optimize. We can accomplish everything we need with incrementals plus processing the dimensions when we have structure changes. We were successful in running the incremental, and all the data seems to be where it should be (FACT). As for the data vs. log files, we do have them on separate disk arrays in production, but we were still running into the log file growing beyond its disk space and causing the optimization to fail.

Our DBA said the reason the log file grows so large is that the optimization program does not commit the transactions as it processes them; it commits them at the end of the optimization process. With this info, I set the log file (in test) to grow by 10% but only go as high as 1 GB. I then ran a full optimize without compress, and although the log file grew to 1 GB, the optimization did manage to complete successfully. I just don't know whether, by limiting that file to 1 GB, I've forced the system to work properly and commit more frequently, or whether I've set the log to not hold all the info and potentially not be able to recover / roll back the transactions if the optimize process is aborted or fails.
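For reference, this is roughly what I set in test (the database and logical log file names are placeholders for ours):

-- Cap the log at 1 GB and let it grow in 10% increments.
ALTER DATABASE Finance
MODIFY FILE (NAME = Finance_log,
             FILEGROWTH = 10%,
             MAXSIZE = 1GB);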

Long-term we can look to add more space to the log array; I just don't think it should need a 100 GB drive, you know!

Matt

Former Member

Based on what I've seen, your DBA's analysis is correct: the optimization runs as a single transaction and only commits at the end. I've never run a SQL trace or seen the underlying code, but based on watching the log and when the processing completes, that seems to be the case. There is a large difference between just optimizing and running a compression, so the fact that it took less logging isn't surprising. But if the process needs more than you have allocated to the log, it will fail. I generally keep the log size in a middle-of-the-road range (~5 GB) so that processes don't spend a lot of time growing the log.
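If you want to see how much of the log a run actually uses, something like the following (run while the optimization is going) shows the log size and percent used for every database on the instance:

-- Reports Log Size (MB) and Log Space Used (%) per database.
DBCC SQLPERF(LOGSPACE);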

Adding some additional space for the log files will keep you out of trouble; hopefully you don't need 100 GB, or something else is wrong. Make sure to check the fact table and confirm that SOURCE equals 0 on all your records. If it does, then everything was cleaned up by the re-optimization.

Former Member

1. We are on 5.1 SP1 in a multi-server environment; our DB server is SQL 2005.

2. We have 18 million records in our finance application

3. We have a nightly process that runs a full optimize with a compress

As for the log file, here's what's strange: our DB is 25 GB. When we ran an optimize on our test server, which has enough space for the log to grow to 35 GB, the log file grew to that size and the optimize still failed. I'm also wondering if partitioning our DB would help with this process.
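For reference, here's roughly how I've been checking the data and log file sizes while this runs (standard SQL Server catalog views, nothing BPC-specific; the database name is a placeholder):

-- size is reported in 8 KB pages, so size / 128 gives MB.
SELECT name, type_desc, size / 128 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('Finance');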

Former Member

So, if I'm reading your information correctly, you have a 35 GB disk array that holds both your data files and your log files, and the data files are 25 GB, leaving a maximum of 10 GB for the log. I assume, since you're running this nightly, that the process completed two nights ago and just failed last night. In general, 18 million records isn't that many, and it's good that you do nightly compressions. It's scary how many times I hear people complain about the size of their database, and when I ask how often they compress it they give me a blank stare and ask what I mean by compression!

So a couple of things:

1. You don't need to do a full optimization; just do an incremental with compression (which you couldn't do in 4.2). An incremental does everything (data-wise) that a full does, but a full also removes the entire cube and rebuilds the structure along with all the data. This isn't necessary and can take a lot more time. Your structure gets updated when you process a dimension for updates, so doing a full nightly doesn't add any benefit.

2. A best practice is to have your data files and log files on separate disk arrays. This is because log disk I/O is always serial I/O; all you're doing is writing a trail of what was done in case you need to undo it. Data access, on the other hand, is always random I/O. By having them on the same array, you're creating contention between the two, which reduces performance.

3. If the log cannot grow large enough to complete the transaction at hand, it will abort the process and roll back the transaction, so somehow you're going to have to find enough space, either by adding additional disk space or by adding additional log files on other disk drives in the server. Partitioning the SQL tables wouldn't help, as we don't support it in the application, so the compression would always read all records from all three fact tables, summarize them, and then write the summarized records back into FACT.

4. If you can clear data, that will temporarily help but you'll probably be back in the same situation before long.

5. While you're trying to resolve this, run an incremental optimization (without compression) and then run a query on FACT to verify that all records have a SOURCE value of zero (see the sketch after this list). If not, delete the ones that don't; otherwise your data will be doubled up. If you have to clear any records, run the incremental optimization again just to be sure the cube is in sync with the data. Be sure you have a validated backup of the application before doing anything directly in SQL.
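As a quick sanity check after the incremental optimization (again, tblFactFinance is just a placeholder for your application's fact table name):

-- Should return 0 once the optimization has cleaned everything up.
SELECT COUNT(*) AS leftover_records
FROM tblFactFinance
WHERE SOURCE <> 0;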