
SAP JCo non-Unicode communication between two Unicode systems

polireddy
Discoverer

The customer's SAP system contains some special characters (these appear when users upload data from Excel/Word documents). Those characters should not be sent to the Java application; instead, they should be replaced with # (or any other symbol) before being sent.

We resolved the issue as follows: if the destination is set to non-Unicode, those characters are automatically replaced by SAP with # (or with any configured error indicator).

SAP destination level: selected non-Unicode / "Codepage from client-side logon language (at runtime)"
Java application level: did not set anything explicitly; left the SAP JCo library defaults (JCO_PCS=1, JCO_CODEPAGE=1100)
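For clarity, the Java-side defaults mentioned above can also be written out explicitly in the destination properties. The following is only a sketch with placeholder connection values; the last two property keys correspond to the com.sap.conn.jco.ext.DestinationDataProvider constants JCO_PCS ("jco.client.pcs") and JCO_CODEPAGE ("jco.client.codepage"):

```java
import java.util.Properties;

public class JcoDestinationProps {

    /** Connection properties for a hypothetical destination. Host, system
     *  number, and client are placeholders; the last two entries spell out
     *  the JCo defaults the question relies on (JCO_PCS, JCO_CODEPAGE). */
    public static Properties build() {
        Properties p = new Properties();
        p.setProperty("jco.client.ashost", "sap-host.example.com"); // placeholder
        p.setProperty("jco.client.sysnr", "00");                    // placeholder
        p.setProperty("jco.client.client", "100");                  // placeholder
        p.setProperty("jco.client.lang", "EN");
        p.setProperty("jco.client.pcs", "1");         // 1 = non-Unicode partner character size
        p.setProperty("jco.client.codepage", "1100"); // SAP code page 1100 (Latin-1)
        return p;
    }
}
```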

We tested these settings in the pre-production environment with positive results, and everything worked as expected. After moving to the production environment, however, the same settings no longer work.

Please suggest a solution for this issue.

Platform details:
Java application: Unicode system
SAP system: Unicode system
SAP enhancement package: EHP8
SAP JCo version: 3.1.7
OS: Red Hat Enterprise Linux 7.9
JDK: 11
Kind of Java program: standalone Java application using sapjco3.jar to communicate with SAP systems, sending/receiving IDoc files via SAP ALE.

Application flows:
1. We send IDoc files to the SAP ALE system using the sapjco3.jar JCoCustomDestination class.
2. We receive IDoc files from SAP ALE using the sapjco3.jar JCoServer class.

Accepted Solutions (0)

Answers (1)


HAL9000
Product and Topic Expert

Because Java itself is always a "Unicode system", JCo always communicates with other Unicode systems using a Unicode communication code page. The JCo documentation (doc file javadoc/configuration.html) says that it is mandatory to set the Unicode option in the corresponding SM59 destinations targeting JCo programs.
Usually it makes no sense to have code page conversions, and possible character garbling, between two Unicode systems.

So what you are trying to do is officially not supported, and I also doubt it will work correctly, especially when using non-Unicode code pages for IDoc exchange without the Java IDoc Library add-on.
You are trying to misuse JCo to destroy character data, whereas JCo's purpose is to present the same data to Java developers as it exists in the ABAP system.

If you want to replace characters, either do it already when uploading them to the ABAP system, or have your Java application do it in custom code after receiving the data from the ABAP system. Intentionally destroying characters during RFC communication is not supported. Also, always think about the correct data type for your data. Word or Excel documents are binary data rather than character data. To preserve the format of Word and Excel document files (or any native file format), use binary data types (RAW data) for storing them on the ABAP side, so that the data does not undergo any character code page conversion when transferred via RFC.
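The "replace in your own Java code" option can be sketched with the JDK's CharsetEncoder, which substitutes a replacement byte for every character that has no mapping in a single-byte target code page. ISO-8859-1 is used here as a stand-in for SAP code page 1100; the class and method names are illustrative, not part of JCo:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class CharCleaner {

    /** Replaces every character that has no mapping in the target single-byte
     *  code page (ISO-8859-1 here, a stand-in for SAP code page 1100) with '#'. */
    public static String replaceUnmappable(String input) {
        CharsetEncoder encoder = StandardCharsets.ISO_8859_1.newEncoder()
                .onUnmappableCharacter(CodingErrorAction.REPLACE)
                .onMalformedInput(CodingErrorAction.REPLACE)
                .replaceWith(new byte[] { '#' });
        try {
            ByteBuffer encoded = encoder.encode(CharBuffer.wrap(input));
            byte[] out = new byte[encoded.remaining()];
            encoded.get(out);
            // Decode back so the caller keeps working with a plain String.
            return new String(out, StandardCharsets.ISO_8859_1);
        } catch (CharacterCodingException e) {
            // Cannot happen: both error actions are set to REPLACE.
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // The euro sign and en dash have no Latin-1 mapping and become '#'.
        System.out.println(replaceUnmappable("Preis: \u20ac5 \u2013 ok")); // prints "Preis: #5 # ok"
    }
}
```

This runs entirely on the receiving side after JCo has delivered the data, so the RFC communication itself stays a fully supported Unicode-to-Unicode connection.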

I also strongly recommend using the Java IDoc Library add-on. It is almost impossible to receive and interpret non-Unicode IDocs in custom code, especially when it comes to multi-byte non-Unicode code pages. This recommendation is a general one, not related to your intended use case.