
Java long-running threads: how can we avoid them, given their impact on the MII scheduler?


We have a problem with the MII scheduler. After the system has been active for a while, yield confirmations are no longer sent to ERP and no asynchronous tasks are executed: the scheduler appears to be hung. Indeed, we find dispatchers stuck in the running state, never changing; their next run time is in the past and stays the same. Even if we disable and re-enable the jobs one by one, they go back to the running state and do not process any messages. To restore the Java instance to an operating state, we have to restart it. We noticed that this problem occurs when there are several long-running managed application threads, roughly around thirty of them. Notably, these long-running threads are all dedicated to serving one well-identified consumer (in the MMC we see an ID number), but we do not know where to look to find out which consumer it is, or who needs a dedicated thread and why. In the log we find no particular errors around the time the thread was assigned to that ID or when it was last updated.
See the attached file.
Can someone help me understand what is happening to my MII / ME system?
Thank you !


Accepted Solutions (0)

Answers (4)


We isolated the problem and probably discovered an SAP bug, which we managed to work around. The problem was in Java printing via Adobe: if there is any problem on the Windows side (a driver issue or something similar), the application hangs, and there is no way to catch an exception and terminate the thread early; you always have to restart the instance to remove the hung threads. So if the application runs asynchronously, after a while the scheduler stops (because it reaches a maximum number of active threads? Where can we check whether there is a limit on the maximum number of active async threads?). We changed the application to print synchronously, which moves the problem to another type of thread. In the near future we will modify the printing application to drop the Adobe services entirely and send the stream directly to the printer via HTTP POST. Thanks for your help.
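As a general illustration (plain Java, not MII-specific code), a blocking print call can be wrapped in a `Future` with a timeout so the caller at least stops waiting forever. Note that `cancel(true)` only interrupts the worker thread; a thread stuck in native code, as in the Adobe case described above, may still not terminate, but the caller is no longer blocked. `renderAndPrint` here is a hypothetical stand-in for the real print step:

```java
import java.util.concurrent.*;

public class PrintWithTimeout {
    // Hypothetical blocking print call standing in for the Adobe rendering step.
    static void renderAndPrint() throws Exception {
        Thread.sleep(100); // simulate work; in the failure case this never returns
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> job = pool.submit(() -> {
            renderAndPrint();
            return null;
        });
        try {
            job.get(30, TimeUnit.SECONDS);  // give up instead of waiting forever
            System.out.println("print ok");
        } catch (TimeoutException e) {
            job.cancel(true);               // interrupt; may not free a thread stuck in native code
            System.out.println("print timed out");
        } finally {
            pool.shutdownNow();
        }
    }
}
```

The timeout bounds the caller's wait, which keeps a synchronous or asynchronous caller responsive even when the underlying print step is hung.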


Hi Romano,

Thanks for sharing the explanation. One follow-up question: why asynchronous? Granted, for printing you may not want the job to wait for the printing to complete, but asynchronous processing seems to cause more problems than benefits (in my experience). Just a thought to consider. If there is a problem, synchronous processing will do a much better job of highlighting it.

Cheers, Mike


Maybe you need to check your Java max thread counts? If you only have a few (5 or 10?) and more jobs are running at the same time, this could be the cause.
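To illustrate why a small thread pool can make the whole scheduler appear hung, here is a minimal sketch using a plain Java `ExecutorService` (not the actual NetWeaver pool, whose size is set in the NW configuration): once every worker is occupied by a stuck task, newly submitted tasks just sit in the queue and never run.

```java
import java.util.concurrent.*;

public class PoolSaturation {
    public static void main(String[] args) throws Exception {
        // A pool of 2 workers with an unbounded queue, standing in
        // for a small application-thread pool.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch release = new CountDownLatch(1);

        // Two "hung" tasks occupy both workers.
        for (int i = 0; i < 2; i++) {
            pool.submit(() -> { release.await(); return null; });
        }
        // A third, instant task just queues up behind them.
        Future<String> queued = pool.submit(() -> "ran");

        try {
            queued.get(200, TimeUnit.MILLISECONDS);
            System.out.println("queued task ran");
        } catch (TimeoutException e) {
            System.out.println("queued task starved"); // all workers busy
        }
        release.countDown();
        pool.shutdown();
    }
}
```

In this sketch the third task starves until the latch releases the two stuck workers; with ~30 hung print threads, every scheduler job would behave like that third task.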

Do you have the scheduler Transaction Persistence set to Always? While I have not run into an issue with this, a co-worker has run into problems with the amount of data being retained with this setting. Changing to On Error will greatly reduce the amount of retained transaction data.

Have your NW Admin check the log files in NW to see if they are getting full and have them check the overall settings as well. They might see something else causing problems.

"we are aggregating all data at every half an hour (correlate option)": I do not understand what is meant by this statement. Are you referring to Master Data updates from ECC? If you are doing full refreshes instead of using ECC Change Pointers, this could bog down your system significantly. Especially if this is occurring with Material Masters (MATMAS IDocs) and/or other large datasets being transferred. If you are doing your updates with a pull (BAPI/RFC) rather than a push (IDocs), then this is an opportunity for improvement especially if coupled with ECC Change Pointers.

Can you provide a list of the jobs running and what they do? Especially those running hourly or more frequently. Perhaps reviewing and reducing the frequency of the jobs is an option.

Regards, Mike


Thank you for your contribution, Mike


Hello Mike,

1. we don't see any log entries (at error level and above)

2. we aggregate all data every half hour (correlate option)

3. we pass little data to ERP, usually the number of pieces to confirm; consumption is done by ERP via backflush

4. we use standard BAPIs

Anyway, what we don't understand is:

1. why the scheduler stops executing altogether; there are lots of different jobs and tasks. I could accept the scheduler stopping one job or one program, but not all jobs across different programs, and not all asynchronous requests

2. we cannot see anywhere what a long-running managed application thread is (who launched it, what it is doing, how to kill it). We don't know where to look to find out which consumer it is, or who needs a dedicated thread and why

3. we don't know whether there is a limit on the number of long-running managed application threads. When we reach roughly thirty, we have to restart the Java instance
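On point 2, a standard way to see what every thread is doing is a full thread dump, e.g. running `jstack <pid>` against the server VM, or programmatically via the JDK's `ThreadMXBean`; the stack frames usually reveal which application or consumer owns a hung thread. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadSnapshot {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Dump every live thread with its full stack; on the server you would
        // look for the frames that reveal which consumer/application owns it.
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            System.out.println(info.getThreadName()
                    + " state=" + info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```

On a hung instance, threads stuck in the same native or third-party frame (e.g. somewhere in the print path) would stand out in this listing.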



Perhaps the answer lies in figuring out why the jobs are running so long?

  • First place to look is logging (i.e., get rid of most or all of it).
  • Second is multiple round trips to ERP, the database, or PCo when a single call could pass all the necessary information. Aggregating all the data before passing it to the BAPI/RFC may be a reasonable solution.
  • Third, look at how much data needs to be passed to or returned from ERP, the database, or PCo. Try to apply filters to limit the data.
  • What RFC or BAPI are you using to send the yield confirmations to ERP? Would a customized BAPI/RFC be worth the investment?
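On the aggregation point, a minimal sketch of collapsing many confirmation lines into one quantity per order before making the BAPI/RFC call (the `Confirmation` record and the order numbers are made up for illustration):

```java
import java.util.*;
import java.util.stream.*;

public class AggregateConfirmations {
    // Hypothetical record of one yield confirmation line.
    record Confirmation(String order, int yieldQty) {}

    public static void main(String[] args) {
        List<Confirmation> lines = List.of(
                new Confirmation("1000123", 5),
                new Confirmation("1000123", 3),
                new Confirmation("1000456", 7));

        // One aggregated quantity per order -> one BAPI call per order
        // instead of one call per confirmation line.
        Map<String, Integer> perOrder = lines.stream()
                .collect(Collectors.groupingBy(
                        Confirmation::order,
                        TreeMap::new,
                        Collectors.summingInt(Confirmation::yieldQty)));

        perOrder.forEach((order, qty) ->
                System.out.println(order + " -> " + qty));
    }
}
```

Each round trip to ERP carries fixed overhead, so sending one aggregated confirmation per order rather than one per line cuts both the call count and the time a scheduler thread is tied up.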

Regards, Mike