Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Truncated record in OPEN DATASET ENCODING NON-UNICODE

Former Member
0 Kudos

Hi,

I have to read a Unicode created file into a non-unicode SAP System, version 4.7.

When I do the OPEN DATASET using ENCODING UTF-8, I get a CONVT_CODEPAGE dump. That's odd because my system is non-Unicode. I don't want to use the IGNORING CONVERSION ERRORS addition, since the output would be corrupt.

But when I use ENCODING NON-UNICODE or ENCODING DEFAULT, READ DATASET mysteriously truncates the records it reads from the real 401 characters down to 361 characters. The target variable is a string.

I can see the full records in AL11.
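For reference, a minimal sketch of the two variants described above (the file path and variable names are placeholders, not from the original post):

```abap
DATA: lv_file TYPE string VALUE '/tmp/input.txt',  " placeholder path
      lv_line TYPE string.

* Variant 1: dumps with CONVT_CODEPAGE on the non-Unicode 4.7 system
OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING UTF-8.

* Variant 2: opens, but READ DATASET returns truncated records
* OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING NON-UNICODE.

READ DATASET lv_file INTO lv_line.
CLOSE DATASET lv_file.
```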

Any ideas?

Thanks,

Pablo.

1 ACCEPTED SOLUTION

nils_buerckel
Product and Topic Expert
0 Kudos

Hi Pablo,

What kind of characters are stored in your UTF-8 file (which language)?

Which code page does your non-Unicode system support?

If these two do not match, you get the mentioned error.

Best regards,

Nils Buerckel

3 REPLIES 3

Former Member
0 Kudos

Hi,

Try using:

OPEN DATASET filename FOR INPUT IN TEXT MODE ENCODING DEFAULT
     IGNORING CONVERSION ERRORS.

Since the records show up fine in AL11, the statement above should be able to read them.

Hope this helps!

Regards,

Punit


0 Kudos

Hello,

Finally, the IGNORING CONVERSION ERRORS addition is working together with the ENCODING UTF-8 option.

I thought this addition would replace special characters from the file with # or some other character in the SAP field, but it just suppresses the dump and the data is written correctly.
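As a sketch, the combination that worked here looks like this (file path and variable names are placeholders):

```abap
DATA: lv_file TYPE string VALUE '/tmp/input.txt',  " placeholder path
      lv_line TYPE string.

* UTF-8 decoding with conversion errors suppressed instead of dumping
OPEN DATASET lv_file FOR INPUT IN TEXT MODE
     ENCODING UTF-8
     IGNORING CONVERSION ERRORS.
DO.
  READ DATASET lv_file INTO lv_line.
  IF sy-subrc <> 0.
    EXIT.  " end of file
  ENDIF.
  " process lv_line here
ENDDO.
CLOSE DATASET lv_file.
```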

Thanks all,

Pablo.