Analysis Process Designer and performance


Hello BW-world,

For a very large bank in the Netherlands we plan to build the following scenario:

1-> Flat files are loaded into PSA tables as transaction data.

2-> APD transforms this data from the PSA into transactional ODSs (no delta capability, but very fast).

3-> An InfoSet is built to link InfoObjects (hierarchy-enabled) and the transactional ODSs.

4-> A BEx query is used on top of this InfoSet.

5-> This BEx query is the input to an APD process, which aggregates the data and writes the result to flat files.
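Conceptually, the aggregation in step 5 is a group-by over characteristics with summed key figures, collapsing millions of detail rows into a few thousand summary rows. A minimal Python sketch of that reduction (the field names `region` and `amount` are invented for illustration, not taken from the actual data model):

```python
from collections import defaultdict

def aggregate(records, group_keys, sum_fields):
    """Group records (dicts) on group_keys and sum the given key figures."""
    totals = defaultdict(lambda: defaultdict(float))
    for rec in records:
        key = tuple(rec[k] for k in group_keys)
        for f in sum_fields:
            totals[key][f] += rec[f]
    # Rebuild one summary record per group
    return [dict(zip(group_keys, key), **sums) for key, sums in totals.items()]

# Three detail rows stand in for the millions of PSA/ODS records
detail = [
    {"customer": "C1", "region": "NL", "amount": 100.0},
    {"customer": "C2", "region": "NL", "amount": 250.0},
    {"customer": "C3", "region": "DE", "amount": 75.0},
]
summary = aggregate(detail, ["region"], ["amount"])
```

The ratio of input rows to groups is what determines how small the output file gets; the per-row cost of the loop is what scales with the millions of records.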


How will 2) perform when we have millions of records to transfer?

How will 4) perform on an InfoSet with 20 entities?

How will 5) perform when millions of records in the query have to be aggregated down to a couple of thousand records?

Thanks for any comments!

Best regards,


Accepted Solutions (0)

Answers (4)


Former Member

I'm assuming you are talking about BW 3.5, because doesn't 2004s allow you to load to a transactional ODS with a conventional InfoPackage?

We are loading 20 million rows to a T-ODS using the APD and it performs very well.

Once the data is in a T-ODS, there really isn't anything to stop you from writing your own SQL if you can't get the APD toolset to do what you want.
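As an illustration of that fallback, a plain GROUP BY over the ODS table is often all that's needed. A minimal sketch using sqlite3 as a stand-in database (the table and field names here are invented; in BW the active table of an ODS object typically follows the /BIC/A&lt;odsname&gt;00 naming pattern):

```python
import sqlite3

# Stand-in for the T-ODS active table, populated with a few sample rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ods_active (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO ods_active VALUES (?, ?)",
    [("NL", 100.0), ("NL", 250.0), ("DE", 75.0)],
)

# The same reduction the APD would do, expressed as hand-written SQL
rows = conn.execute(
    "SELECT region, SUM(amount) FROM ods_active "
    "GROUP BY region ORDER BY region"
).fetchall()
```

Pushing the aggregation into the database like this avoids pulling the full detail set through the APD's intermediate structures.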

Active Contributor

You may also want to refer to OSS notes:

893318: APD: Load distribution in background job

794257: APD.FAQ: General performance note

Hope this helps...

Former Member


The APD is not a useful tool for modeling a performant data flow. A lot of SAP BW users think so.

The performance problems come from the internal DDIC tables that are used to load the extracted data into the work-area tables and structures the APD uses.

To get the best performance in your scenario, use the following approach where possible.

Step 1.

Upload the extracted data from the PSA into a data layer; there you can reduce and harmonize the data using a transactional ODS.

Step 2.

Build an InfoSet that joins the data.

Step 3.

Build one query to reduce the data, and make two copies of it.

Step 4.

Build a useful data mining model.

Step 5.

Upload the results of the data mining model into a transactional ODS.

Step 6.

Link the uploaded data into an InfoSet, or write it back into a standard ODS.

Step 7.

Query the data.

If you use this scenario you get a lot of benefits: better performance, better quality of persistent data, and current as well as trained data.

The recommended next step (if you want alerting) is to build a Reporting Agent report, provided you have useful processes in the query.

There is a workshop on data mining and the APD, named BW380.

I hope I helped you.

Otherwise, send me a message.


> The APD is not a useful tool for modeling a

> performant data flow. A lot of SAP BW users think so.


That's interesting, but I did not understand it completely. Which internal DDIC table do you mean? Do you have any examples of performance problems?

Why are you talking about data mining? Laurens does not seem to need it.

Kind regards,


Former Member


I would just do it. It sounds good, and SAP is a very good product.

Kind regards,

Floris Padt

P.S. We have serious performance problems here on a SQL Server machine, and we have no more than 7 million records. Without aggregates, some reports took more than 40 minutes. And this while the BI 7.0 presentation showed 1 billion records being retrieved in 8 seconds.


I know I am asking a lot, but does anybody have experience with this APD in terms of performance?

Former Member

Hi Laurens,

I would make sure that you are following the performance guidelines for BW. Here is a link to a site that you should look at.

Cheers! Bill