Bulkloader Loading Performance


Hi Experts,

We noticed that in the processing folders of the bulkloader (e.g. /workingdirectory/SAPCC_CIT/PREPAID/Instance#1/CC1) the processing seems to be slow:

1. The number of files in this directory is increasing (now at 800+), where it normally averages about 200 files.

2. When we re-process files from the INVALID folders, they are not processed immediately. It can take hours, for example, during which a file is still in .arc.csv. It does get processed eventually, so we know that the original cause of the failure had already been corrected.

Question: For these types of cases, can this be monitored, for example in an application like Introscope? And is the solution here to restart the bulkloader?




Hi Steve,

Please find below the requested details.

1. To find existing thread dumps, or generate new ones
By design, the CC instances automatically generate thread dumps when they run into timeouts or severe exceptions.
If that's your case, you'll find these files in the work/dump directory. Since you're trying to diagnose a past occurrence of an issue, we recommend searching for these files first. If they're present, they'll give you context information about what happened.
The name of each file is suffixed with the type of event that prompted its creation: automatically generated files are suffixed with either "timeout" or "exception", which is how you can recognise them in the directory.
Also, please note that CC automatically deletes these files after 14 days by default (this can be changed by modifying the THREAD_DUMP_RETENTION_PERIOD parameter).

If you still need to create files manually, please log into admin+ and use the "dump" command to generate thread dumps for the targeted instance(s).
For example:

# if you're interested only in bulkLoader#3
dump bulkLoader#3

# if you want a thread dump for each running bulkloader
dump bulkLoader

# with no argument, you'll generate a dump for each running instance
dump

Whatever you choose, the generated files will also be in the work/dump directory, but this time with the suffix "ondemand".
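The suffix convention described above can be checked with a short shell sketch; the work/dump location and the suffix names come from this thread, while the exact file-name pattern and directory layout are assumptions to verify on your system:

```shell
#!/bin/sh
# Sketch only: count CC thread-dump files per triggering-event suffix.
# Assumes dump files sit directly in work/dump and carry the suffix
# somewhere in their name.

count_dumps() {
  dir="$1"; suffix="$2"
  # count regular files whose name contains the given suffix
  find "$dir" -maxdepth 1 -type f -name "*${suffix}*" 2>/dev/null | wc -l
}

for s in timeout exception ondemand; do
  printf '%s: %s file(s)\n' "$s" "$(count_dumps work/dump "$s" | tr -d ' ')"
done
```

A sudden spike of "timeout" or "exception" files on the date of the incident is a strong hint of where to look first.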

Numerous settings are available for the thread dumps (in particular if you want CC to generate several dumps in a row when an error occurs, with a defined time interval, etc.).
Here's the corresponding documentation, from our Parameter reference:

2. To check the allocated JCo resources
The JCo settings are in your jco.destination file. When you set up CI manually, you have to import that file (the name is usually "jco.destination", but you may choose a different one).
If you still have that file around, you can reopen it and see the settings there. Otherwise, you can export it as follows:

/usr/sap/SID/CCDxx/script/ jcodestination export /tmp/jco.destination.export CI_JCO_DEST_NAME -login=adminuser

This will at least give you the values for "peak_limit" and "pool_capacity" (which define how many JCo connections may be kept, and used at the same time). The parameters that aren't explicitly set in your file keep their default values, and you may need the assistance of a JCo expert to tune them.
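As a minimal sketch, the two settings can be read out of the exported file like this, assuming it uses a key=value layout and the standard JCo property names (verify both against your actual export before relying on the result):

```shell
#!/bin/sh
# Sketch: extract connection-pool settings from an exported jco.destination
# file. The key=value format and the property names are assumptions based
# on the standard SAP JCo destination properties.

get_prop() {
  # print the value of a "key=value" line; empty output if the key is absent
  grep "^${2}=" "$1" 2>/dev/null | head -n1 | cut -d= -f2
}

show_pool_settings() {
  for k in jco.destination.peak_limit jco.destination.pool_capacity; do
    v=$(get_prop "$1" "$k")
    echo "${k}=${v:-<not set, default applies>}"
  done
}

if [ -f "${1:-}" ]; then show_pool_settings "$1"; fi
```

A key that prints "<not set, default applies>" is one of the parameters mentioned above that keeps its default value.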
Before even checking the settings, it's best to go to the work/ directory and see if you have JCo- or RFC-related logs. They may contain detailed information about what happened (for example, if the issue was due to an insufficient number of available connections, you'll see it there). If you don't find any such files, please let us know and we can help you activate the JCo traces.
Here's already the reference of the JCo logging parameters:
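A quick way to scan for such logs; the work/ location comes from the description above, while the file-name patterns are assumptions (JCo and RFC trace files commonly carry "jco" or "rfc" in their names):

```shell
#!/bin/sh
# Sketch: list files in a directory whose names suggest JCo or RFC traces.
# The name patterns are an assumption; adjust them to what you see on disk.

find_jco_logs() {
  find "$1" -maxdepth 1 -type f \
    \( -iname '*jco*' -o -iname '*rfc*' \) 2>/dev/null
}

if [ -d work ]; then find_jco_logs work; fi
```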

Feel free to ask for more details about the procedures, or to open a customer ticket if that's preferable (in particular if you want us to analyse your files).

Best regards.

SAP Convergent Charging Support


Thank you, François, for the detailed input.

In the work/dump folder you mentioned, we did find some files generated on the date the issue was encountered. However, they are suffixed only with "shutdown" and "periodical". Do we have any references we can check to better understand these suffixes and the content of those files?

So far we have identified that threads were blocked on that one bulkloader server, causing the slow performance. In the thread dump file we can see some information, e.g. threads for PREPAID CIT READER in RUNNABLE state while others are in WAITING state. The WAITING state seems to refer to locks? We are trying to find out what the cause might be: if a thread is waiting, what exactly is it waiting for, and can we trace the program in CI?
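A quick way to summarise the states and locks described above, as a sketch only: it assumes the CC dump files use the standard JVM (jstack) thread-dump text format, and the file path is just an example:

```shell
#!/bin/sh
# Sketch: summarise thread states and monitor (lock) usage in a thread dump.
# Assumes the standard JVM jstack format, where each thread has a
# "java.lang.Thread.State: <STATE>" line and lock lines like
# "- locked <0x...>" / "- waiting on <0x...>" / "- waiting to lock <0x...>".

summarize_dump() {
  dump="$1"
  echo "Thread state counts:"
  grep 'java.lang.Thread.State:' "$dump" | awk '{print $2}' | sort | uniq -c
  echo "Monitor lines (who holds a lock, who waits for one):"
  grep -E 'waiting on <|waiting to lock <|locked <' "$dump"
}

if [ -f "${1:-}" ]; then summarize_dump "$1"; fi
```

Cross-referencing the hexadecimal monitor ids between "locked" and "waiting" lines shows which thread holds the lock a WAITING thread is parked on.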