Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Error CX_SY_CONVERSION_CODEPAGE: character set conversion from code page '4110' to '4102'

former_member181966
Active Contributor
0 Kudos

Hey guys,

We're also facing this problem in our system. We are on a Unicode, multi-language system (EN, JA, and ?). Anyway, when I ran the program, it gave a short dump at the following statement:

open dataset p_ifile for input in text mode encoding default

I changed it to:

open dataset p_ifile for input in text mode encoding non-unicode.

OR

open dataset p_ifile for input in text mode encoding default ignoring conversion errors.

In both cases it works fine. But here's our fear: what will happen when we kick off our conversions in Japanese? Does it take care of all the characters? I have noticed that wherever there is an apostrophe ( ' ) in the text, it dumps.

I have also tried creating the UCP parameter ID as described in one of the OSS notes, but that does not work either.

I mean, I can still catch the exception on the fly and work around it. But my only fear, as described above, is what will happen to the special Japanese characters.
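For what it's worth, here is a minimal sketch of that catch-and-work-around approach (an illustration only, using the p_ifile parameter from above; it traps the dump but does not recover the characters that failed to convert):

data: lv_line type string.

try.
    open dataset p_ifile for input in text mode encoding default.
    do.
      read dataset p_ifile into lv_line.
      if sy-subrc <> 0.
        exit.
      endif.
      write: / lv_line.
    enddo.
    close dataset p_ifile.
  catch cx_sy_conversion_codepage.
    " raised when a character in the file cannot be converted
    message 'Code page conversion error while reading the file' type 'E'.
endtry.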

Right now we're saving the file as Unicode text, then opening the file again and saving it as UTF-8, and then kicking off our conversion program and loads.

If anybody can come up with a good idea, that would be great (with a double "T").

FYI

I have also looked at the following notes. We are on code page 4102:

Note 758870 - Account statement: Character set when the file is uploaded

Note 976416 - RFEBJP00: Dump CX_SY_CONVERSION_CODEPAGE during import

Note 946493 - ! Unicode conversion: Some fields in clusters not converted

Note 626354

Thanks,

Saquib Khan


1 ACCEPTED SOLUTION

sridhar_k1
Active Contributor

The OPEN DATASET option ENCODING DEFAULT in a Unicode system expects a file in UTF-8 encoding. It's dumping because READ DATASET failed to convert a character in the file to UTF-16BE.

You've mentioned that you are saving the file as UNICODE text and then converting it to UTF-8.

Is the Unicode text file in UTF-16 encoding? If you can save the file in UTF-16, then the following code uploads the file into SAP as UTF-16; there is no need to convert to UTF-8.

parameters: p_file type rlgrap-filename.
data: gv_xstr type xstring,
      gv_str type string,
      gv_bom(2) type x,
      gv_encoding type abap_encod,
      gv_endian type abap_endia,
      conv_obj type ref to cl_abap_conv_in_ce,
      gv_nl(2) type c value cl_abap_char_utilities=>cr_lf.

data: gt_text type table of char255,
      gs_text type char255.

" read the whole file as raw bytes so the byte order mark can be inspected
open dataset p_file for input in binary mode.
read dataset p_file into gv_xstr.
close dataset p_file.

" the first two bytes of the file are the byte order mark (BOM), if present
gv_bom = gv_xstr.

if gv_bom eq 'FFFE'.
  gv_encoding = '4103'.          "code page for UTF-16LE
  gv_endian = 'L'.
  shift gv_xstr left by 2 places in byte mode.   "drop the BOM
elseif gv_bom eq 'FEFF'.
  gv_encoding = '4102'.          "code page for UTF-16BE
  gv_endian = 'B'.
  shift gv_xstr left by 2 places in byte mode.   "drop the BOM
else.
  message 'Byte order mark not found at the beginning of the file'
           type 'E'.
endif.

" convert the raw bytes to a string in the system format using the detected encoding
try.
    call method cl_abap_conv_in_ce=>create
      exporting
        encoding    = gv_encoding
        endian      = gv_endian
        replacement = '#'
      receiving
        conv        = conv_obj.

    call method conv_obj->convert
      exporting
        input = gv_xstr
        n     = -1
      importing
        data  = gv_str.
  catch cx_root.
    message 'Error during conversion' type 'E'.
endtry.

split gv_str at gv_nl into table gt_text.

loop at gt_text into gs_text.
  write:/ gs_text.
endloop.

If there's no byte order mark (the first 2 bytes of the file), comment out the code related to it and set gv_encoding and gv_endian to match the file.

If the newline character is only LF, change the gv_nl declaration to: gv_nl type c value cl_abap_char_utilities=>newline.
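If you would rather fall back than fail when there is no BOM, the IF block above could end like this (an assumption on my part: a file without a BOM is UTF-16LE, which is what Windows Notepad's "Unicode" option writes; use '4102'/'B' instead if your files are big-endian):

if gv_bom eq 'FFFE'.
  gv_encoding = '4103'.
  gv_endian = 'L'.
  shift gv_xstr left by 2 places in byte mode.
elseif gv_bom eq 'FEFF'.
  gv_encoding = '4102'.
  gv_endian = 'B'.
  shift gv_xstr left by 2 places in byte mode.
else.
  " assumption: no BOM means UTF-16LE (Notepad's 'Unicode' format)
  gv_encoding = '4103'.
  gv_endian = 'L'.
endif.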

Regards

Sridhar

10 REPLIES

Former Member

Hi Saquib,

"I changed it to:

open dataset p_ifile for input in text mode encoding non-unicode.

OR

open dataset p_ifile for input in text mode encoding default ignoring conversion errors."

The above statements avoid the errors, but they are not safe in all circumstances. Depending on the system we interact with, we might need to change the characteristics of the file that we download or upload.

Though SAP has been Unicode-enabled, we have to check whether the interacting systems also support Unicode, and which code page we use for the different languages.

In your case, as you are opening a file to read it, please check the characteristics of the SOURCE system.

Hope the above info gives you some idea.

Kind Regards

Eswar

0 Kudos

Hi Eswar,

Thanks for your reply, but I'm confused here:

"The above statements avoid the errors, but they are not safe in all circumstances. Depending on the system we interact with, we might need to change the characteristics of the file that we download or upload."

When you say "we might need to change the characteristics of the file that we download or upload": in my case, I'm uploading a file in my ABAP program, reading it, and then displaying it in SAP. We save that particular file as text with the encoding "Unicode" because we don't want to lose some of the special Japanese characters. Once the file is saved, we re-open it, change the encoding to "UTF-8", and then push the file to AL11. Makes sense?

As I already mentioned, we're on a Unicode system with code page 4102 (UTF-16BE Unicode / ISO/IEC 10646).

Thanks,

Saquib Khan



0 Kudos

Hello Sridhar,

Thanks for writing back and giving me guidance. Unfortunately, I have tried the code before, and our main aim is to read the file as per normal practice (the user saves it as text, and it is in ANSI).

1. "The OPEN DATASET option ENCODING DEFAULT in a Unicode system expects a file in UTF-8 encoding. It's dumping because READ DATASET failed to convert a character in the file to UTF-16BE."

TRUE.

2. "You've mentioned that you are saving the file as UNICODE text and then converting it to UTF-8. Is the Unicode text file in UTF-16 encoding? If you can save the file in UTF-16, then the following code uploads the file into SAP as UTF-16; there is no need to convert to UTF-8."

Yes, we are on UTF-16 encoding. That was something we were doing to test, but our final goal is to read the text file as the user saves it under normal circumstances. None of the users wants to save the text file by going through a list of steps.

I was wondering whether there is some wonder code we could write that would make it work. The following code worked, but with the fear in our minds that it is ignoring some of the special Japanese characters. E.g., I save the file as normal text (ANSI) and read it, and it shows me the correct output (FYI: this is for English; for Japanese, a BIG NO).

open dataset p_ifile for input in text mode encoding non-unicode.

OR

open dataset p_ifile for input in text mode encoding default ignoring conversion errors.


0 Kudos

How are you saving the file? Notepad? If you save the file in Notepad with the default option ANSI on a Japanese-locale PC, Windows uses Shift-JIS encoding.

You've mentioned that "open dataset p_ifile for input in text mode encoding non-unicode" worked with Japanese text when the file was saved as ANSI. It worked because OPEN DATASET used code page 8000 as the input file's encoding. SAP developed 8000 based on Microsoft's Shift-JIS code page CP932.

An SAP Unicode system chooses the input code page from table TCP0C when the ENCODING NON-UNICODE option is used. Check the table for the Japanese language; a quick lookup is sketched below.
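For illustration, a sketch of that lookup (the key values mirror the TCP0C entry quoted later in this thread; note that LANGU holds the internal one-character language key, so this only matches when logged on in Japanese):

data: ls_tcp0c type tcp0c.

" which code page will ENCODING NON-UNICODE pick for the logon language?
select single * from tcp0c into ls_tcp0c
       where platform = 'SunOS'      "your application server platform
         and langu    = sy-langu
         and country  = 'JP'.
if sy-subrc = 0.
  write: / 'Non-Unicode code page:', ls_tcp0c-charco.
endif.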

So you can use the following safely, but the PC system language should be Japanese when saving the file:

open dataset p_ifile for input in text mode encoding non-unicode.
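Alternatively, if you do not want to depend on the TCP0C lookup, LEGACY TEXT MODE accepts an explicit code page (a sketch only, not something tested in this thread):

" force Shift-JIS (SAP code page 8000) regardless of the TCP0C entry
data: lv_line(255) type c.

open dataset p_ifile for input in legacy text mode code page '8000'.
do.
  read dataset p_ifile into lv_line.
  if sy-subrc <> 0.
    exit.
  endif.
  write: / lv_line.
enddo.
close dataset p_ifile.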

Regards

Sridhar

0 Kudos

Thanks for the prompt reply!!!

Yes, we're saving the file in Notepad with the default option ANSI. I just checked this table, and we have the following entry for "JA":

PLATFORM    SunOS
LANGU       JA
COUNTRY     JP
MODIFIER
LOCALE      ja_JP.PCK
CHARCO      8000
CHARCOMNLS  8000

Let me try this out and I will get back to you. It makes sense to me.

Keeping my fingers crossed.

0 Kudos

Thank you! You saved me hours!

0 Kudos

Hi Experts,

I am getting the same error, CX_SY_CONVERSION_CODEPAGE (character set conversion from code page '4110' to '4102'), in my program when I am writing the data to the application server.

My aim is to save the data so that, when it is downloaded as a text file, it is in Unicode big-endian format.

I tried:

OPEN DATASET w_file FOR OUTPUT IN LEGACY TEXT MODE.
OPEN DATASET w_file FOR OUTPUT IN LEGACY TEXT MODE BIG ENDIAN.
OPEN DATASET w_file FOR OUTPUT IN LEGACY TEXT MODE BIG ENDIAN CODE PAGE 4102.

I am getting the file in UTF-8 format.
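For reference, a minimal sketch of one way to get UTF-16BE output: convert the text to code page 4102 with CL_ABAP_CONV_OUT_CE and transfer the bytes in binary mode with a big-endian BOM (a sketch under assumptions, not a verified answer; w_file is the file name variable from the attempts above, and the sample line is a placeholder):

data: lv_str    type string,
      lv_xstr   type xstring,
      lv_bom(2) type x value 'FEFF',     "UTF-16BE byte order mark
      lo_conv   type ref to cl_abap_conv_out_ce.

lv_str = 'sample output line'.

" converter targeting code page 4102 (UTF-16BE)
lo_conv = cl_abap_conv_out_ce=>create( encoding = '4102'
                                       endian   = 'B' ).
lo_conv->convert( exporting data   = lv_str
                  importing buffer = lv_xstr ).

" write the BOM and the converted bytes without further conversion
open dataset w_file for output in binary mode.
transfer lv_bom  to w_file.
transfer lv_xstr to w_file.
close dataset w_file.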

Please guide me.

Thanks ,

Narmadha M.

0 Kudos

Hello All,

I am facing a similar issue, but here the code pages are '4110' and '4103'. We are on CRM 7.0.

One of the fields of a conversion file contains a pipe (broken bar) character, because of which OPEN DATASET gives a runtime error. When I look at the file through AL11, this pipe character is shown as '#'.

If I add IGNORING CONVERSION ERRORS to the OPEN DATASET statement, then it works, but the value is stored as '#', which is wrong.

When I download the file to the presentation server and then upload it again to the application server, the pipe character is recognized and the file loads successfully.

I would appreciate it if anybody who has resolved a similar issue could provide a solution.

Regards,

Veera.

Former Member
0 Kudos

Dear friend,

See Note 758870 - Account statement: Character set when the file is uploaded.

shailesh