Problem Description:
Our SAP PO system (4-node cluster, 128GB RAM per node, NW 7.5 SP24) recently crashed due to 800MB–1GB payloads combined with 500 QPS on specific ICO interfaces. The root cause traces to uncontrolled message volume/size overwhelming the system, despite our "aggressive" global configuration:
- max_message_size = 1GB (yes, I know this is risky 😰)
- Standard message processing without interface-specific throttling
Key Challenges:
- No Interface-Level Controls: SAP Help/notes (e.g., 2503816, 2761288) only describe global message size/QPS limits. We need per-ICO interface rules (e.g., "Interface X: max 500MB payload, 100 QPS").
- Scalability Gap: 4 nodes with 128GB RAM should handle large messages, but high concurrency + payloads cause JVM OOMs (heap dumps show 90%+ usage during peaks).
- Business Impact: These interfaces process critical EDI/XML files from logistics partners – we can’t reduce payload size or QPS without re-architecting upstream systems.
Asking the Community:
- Has anyone implemented interface-specific throttling (QPS/message size) in PO 7.5?
- Are there hidden configuration parameters or custom BAdIs for ICO-level controls?
- For large message handling: Any best practices beyond the obvious (e.g., disk-based processing, async queues, chunking)?
- Could SAP Cloud Integration (CPI) sidecar help offload large messages while keeping PO for legacy logic?
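To make the chunking idea above concrete, here is a minimal sketch of the kind of fixed-size splitter/reassembler we have in mind (class name, chunk size, and the reassembly protocol are our assumptions, not an SAP-provided mechanism):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Naive fixed-size chunker: split a large payload into parts small enough
// to pass through PO individually, then reassemble downstream.
public class PayloadChunker {

    // Split payload into chunks of at most chunkSize bytes.
    static List<byte[]> split(byte[] payload, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += chunkSize) {
            int end = Math.min(off + chunkSize, payload.length);
            chunks.add(Arrays.copyOfRange(payload, off, end));
        }
        return chunks;
    }

    // Concatenate chunks back into the original payload.
    static byte[] join(List<byte[]> chunks) {
        int total = chunks.stream().mapToInt(c -> c.length).sum();
        byte[] out = new byte[total];
        int pos = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, pos, c.length);
            pos += c.length;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] data = new byte[25];
        List<byte[]> parts = split(data, 10);        // 10 + 10 + 5 bytes
        System.out.println(parts.size());            // 3
        System.out.println(join(parts).length);      // 25
    }
}
```

The open question for us is not the splitting itself but correlation and ordering of the chunks across a synchronous scenario, which this sketch deliberately leaves out.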
System Details:
- SAP NetWeaver PO 7.5 SP24
- 4x nodes (VMs: 32vCPU, 128GB RAM, SSD)
- All ICO interfaces use synchronous processing (due to partner requirements)
- Message types: XML/EDI (uncompressed, no attachments)
Appreciate any insights – we’re currently firefighting with temporary circuit breakers and need a sustainable solution!
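For context, our temporary circuit breaker boils down to a per-interface admission check along these lines (class name, rule values, and the one-second window are ours, not SAP parameters; in PO this logic would sit in a custom adapter module that raises a module exception on rejection):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Per-interface guard: reject a message when its size, or the request count
// in the current one-second window, exceeds that interface's limits.
public class InterfaceThrottle {

    static final class Limits {
        final long maxBytes;
        final int maxQps;
        Limits(long maxBytes, int maxQps) { this.maxBytes = maxBytes; this.maxQps = maxQps; }
    }

    static final class Window { long second; int count; }

    private final Map<String, Limits> rules = new ConcurrentHashMap<>();
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    void configure(String iface, long maxBytes, int maxQps) {
        rules.put(iface, new Limits(maxBytes, maxQps));
    }

    // Returns true if the message may pass, false if it must be rejected.
    synchronized boolean admit(String iface, long payloadBytes, long nowMillis) {
        Limits l = rules.get(iface);
        if (l == null) return true;                 // no rule -> no throttling
        if (payloadBytes > l.maxBytes) return false; // payload too large
        Window w = windows.computeIfAbsent(iface, k -> new Window());
        long sec = nowMillis / 1000;
        if (w.second != sec) { w.second = sec; w.count = 0; } // new 1s window
        if (w.count >= l.maxQps) return false;       // QPS budget exhausted
        w.count++;
        return true;
    }

    public static void main(String[] args) {
        InterfaceThrottle t = new InterfaceThrottle();
        t.configure("ICO_PartnerX", 500L * 1024 * 1024, 2); // 500MB, 2 QPS for demo
        System.out.println(t.admit("ICO_PartnerX", 100, 0));                   // true
        System.out.println(t.admit("ICO_PartnerX", 100, 10));                  // true
        System.out.println(t.admit("ICO_PartnerX", 100, 20));                  // false: 3rd hit in window
        System.out.println(t.admit("ICO_PartnerX", 600L * 1024 * 1024, 2000)); // false: over size limit
    }
}
```

This keeps the cluster alive but fails messages hard at the sender, which is exactly what we are hoping the community can help us replace with something sustainable.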