I've been looking at the Lite Optimize process for some time now and would like to share some further knowledge to help you tweak the Lite Optimize process chain, reducing the amount of time your model is out of planning mode and, in turn, shortening the window during which users cannot save data and reports may return dodgy data. SAP Note 1649749 discusses this issue.
I recently wrote a blog on Lite Optimize - A little guide to the big things you need to know - which looks at various ways to improve and use the Lite Optimize process effectively. This article takes a new approach aimed solely at reducing the amount of time your model sits outside of planning mode. It is in no way intended to put down the original design of the underlying Lite Optimize process chain; it merely gives you an option if, like us, you find yourself with a 40-minute period during which users cannot save data. In an international organisation, this can be a pain! One thing I will put down, though, is BPC's current inability to inform users that saving is not possible - a blank save confirmation screen is not acceptable, and many of our users have left the Input Schedule thinking data has been saved after much time spent inputting it! I will also recommend an improvement to this awareness in the post.
Reduce the time the model is out of plan mode
I'll get straight to the point: as you probably already know, users are unable to save back to the model while the Lite Optimize process is running, and reports on the data can produce inconsistent results. The process runs for varying lengths of time depending on the number of unprocessed requests waiting to be optimised in the model. In our organisation, we run the process once a day after the overnight data loads - it takes around 40 minutes.
The change I am suggesting here is quite simple: consider moving the 'BPC: Create Statistics' step to after the step where the cube is put back into plan mode. For us, updating the statistics accounts for 85% of the overall chain's running time.
This isn't the first time the technique has been suggested; a blog post in January 2013 proposed running two processes in parallel (one starting the statistics, the other pausing the chain for 3 seconds before moving the model back into planning mode while the statistics job continues to run). Credit is due for the high-level idea, but the technical suggestion was incorrect. Firstly, concurrent processing is not supported in chains run from BPC. Secondly, and crucially, the cube does not need to be in plan mode when the 'Create Statistics' step kicks off.
The reason SAP designed the chain like this (I assume) is so the end user is only able to write back to the cube once performance is at optimum levels. The question you have to ask yourself before implementing this change is "Am I willing to allow users to write back and use the model for a period after the cube is switched back to planning mode and while the statistics are still updating?" For us, performance doesn't really suffer too badly and we're far more concerned about reducing our down-time.
Improve user awareness of the inability to save
Regardless of whether you choose to implement the above, it remains the case that BPC does not let users know when a Lite Optimize is running. Unless your users are diligent enough to check the 'View Status' of run packages before every save (I very much doubt it), anyone attempting to save data is quite likely to be unaware that the save has been unsuccessful.
Although the following change does not fully prevent this problem, it goes some way towards avoiding it.
I have created a program which switches the BPC environment offline or online whenever it is executed. I recommend adding it as a step in the Lite Optimize process chain immediately after the cube is taken out of planning mode, and again once it is put back. Any user not already in the system at that point will be prevented from accessing it until the cube returns to planning mode, and an informative message makes them aware of this.
Unfortunately, anyone already in the system will be blissfully unaware that their next save could be doomed. It is also quite common for reports to return incorrect data during the optimisation process (see the SAP Note linked in the initial section).
Here is the code:
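(The original listing was not preserved in this copy, so what follows is a minimal sketch of how such a program can be written, not a drop-in implementation. It assumes the environment status in BPC 10.0 NW is held in table UJA_APPSET_INFO; the STATUS and MESSAGE field names and the 'A'/'N' values below are assumptions - verify them against your BPC release - and direct table updates should first be cleared with your Basis team.)

```abap
*&---------------------------------------------------------------------*
*& Report Z_BPC_TOGGLE_ENVIRONMENT
*& Sketch: toggle a BPC environment between online and offline so that
*& users are locked out (with a message) while Lite Optimize runs.
*& NOTE: the field names STATUS / MESSAGE and the values 'A' / 'N'
*& are assumptions - check them in your system before use.
*&---------------------------------------------------------------------*
REPORT z_bpc_toggle_environment.

PARAMETERS p_env TYPE uja_appset_info-appset_id OBLIGATORY.

DATA ls_appset TYPE uja_appset_info.

* Read the current status of the environment (AppSet)
SELECT SINGLE * FROM uja_appset_info
  INTO ls_appset
  WHERE appset_id = p_env.

IF sy-subrc <> 0.
  WRITE: / 'Environment not found:', p_env.
  RETURN.
ENDIF.

* Flip the status: 'A' (available) <-> 'N' (not available),
* setting an informative message for users while offline
IF ls_appset-status = 'A'.
  ls_appset-status  = 'N'.
  ls_appset-message = 'Lite Optimize is running - saving is not' &&
                      ' possible. Please try again shortly.'.
ELSE.
  ls_appset-status  = 'A'.
  CLEAR ls_appset-message.
ENDIF.

UPDATE uja_appset_info FROM ls_appset.
COMMIT WORK.

WRITE: / 'Environment', p_env, 'status set to', ls_appset-status.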
I hope this post helps someone, or at least provokes some suggestions on other ways around the problems I have described.