‎2008 May 06 1:07 AM
Hi Guys
Any suggestions?
I am getting a file from a legacy system, splitting it into six different files, and writing them to the application server.
It is taking 28 hours, and I don't know why.
I am using OPEN DATASET and CLOSE DATASET six times, once per file,
and checking conditions before splitting.
Thanks in advance.
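For reference, a common cause of runtimes like this is reopening files once per record instead of once per run. A minimal sketch of a single-pass split (file names and the routing condition are placeholders, since the actual logic is not shown in the thread):

```abap
* Sketch: open the input and all six output files ONCE, stream through
* the input a single time, route each record, and close at the end.
* File paths and the split condition below are hypothetical.
DATA lv_line TYPE string.

OPEN DATASET '/tmp/input.txt' FOR INPUT  IN TEXT MODE ENCODING DEFAULT.
OPEN DATASET '/tmp/out1.txt'  FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
OPEN DATASET '/tmp/out2.txt'  FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
" ... open out3 .. out6 the same way ...

DO.
  READ DATASET '/tmp/input.txt' INTO lv_line.
  IF sy-subrc <> 0.
    EXIT.  " end of file reached
  ENDIF.
  " Route the record to the matching target file (placeholder condition).
  IF lv_line(2) = '01'.
    TRANSFER lv_line TO '/tmp/out1.txt'.
  ELSE.
    TRANSFER lv_line TO '/tmp/out2.txt'.
  ENDIF.
  " ... further conditions for out3 .. out6 ...
ENDDO.

CLOSE DATASET '/tmp/input.txt'.
CLOSE DATASET '/tmp/out1.txt'.
CLOSE DATASET '/tmp/out2.txt'.
" ... close out3 .. out6 ...
```

If the current program instead opens and closes a dataset inside the record loop, or reads the whole input six times, that alone can explain a large part of the runtime.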
‎2008 May 06 1:18 PM
Hi,
First of all, you should check in SE30 (runtime analysis) whether the performance problem lies in database access or in other routines.
Regards,
Fernando
‎2008 May 06 4:40 PM
Analyze the program via SE30 (runtime analysis) as mentioned in the previous reply. I would focus on the type of internal tables used to manipulate the data: standard tables can be "killers"; use sorted tables instead.
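As a sketch of the sorted-table suggestion (all names here are illustrative, not from the original program): key access on a sorted table is a binary search, while a plain READ on an unsorted standard table scans linearly.

```abap
* Sketch: a sorted table with a unique key gives O(log n) lookups
* instead of a full scan of a standard table. Types and values are
* hypothetical.
TYPES: BEGIN OF ty_rec,
         matnr TYPE char18,
         text  TYPE string,
       END OF ty_rec.

DATA: lt_recs TYPE SORTED TABLE OF ty_rec
              WITH UNIQUE KEY matnr,
      ls_rec  TYPE ty_rec.

" Key access uses a binary search automatically on sorted tables:
READ TABLE lt_recs INTO ls_rec
     WITH TABLE KEY matnr = '000000000000000042'.
IF sy-subrc = 0.
  " record found
ENDIF.
```

Inside a loop over hundreds of thousands of records, the difference between a linear scan and a keyed read is often the difference between minutes and hours.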
‎2008 May 06 5:30 PM
Hello.
How big are your files? Consider splitting the work into smaller ones ...
Use SE30 as already suggested ... or debug it!
Best regards.
Valter Oliveira.
‎2008 May 10 7:32 AM
Hello krk,
Are you running a Unicode system (US) or a non-Unicode system (NUS)?
With Unicode, the file interface (the OPEN DATASET statement) has been completely overhauled, and the following variants were added for Unicode systems:
OPEN DATASET dsn IN TEXT MODE ...
OPEN DATASET dsn IN BINARY MODE ...
OPEN DATASET dsn IN LEGACY TEXT MODE ...
OPEN DATASET dsn IN LEGACY BINARY MODE ...
I am not sure, but I suspect the Unicode handling could cause such trouble when uploading from the application server. Back in 2001 (no Unicode at that time) I had to upload more than 700,000 records, each about 1,000 bytes long. There was no problem uploading the data; my long runtime (28 hours) was more likely caused by inserting the data into SAP tables, even though I used the Direct-Input technique (BDC took about 10 times as long).
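A small sketch of these variants (the file name and code page are placeholders). On Unicode systems, TEXT MODE requires an ENCODING addition, while the LEGACY modes read and write data in a non-Unicode code page:

```abap
* Sketch of the Unicode-era OPEN DATASET variants; path and code page
* are hypothetical.
DATA lv_file TYPE string VALUE '/tmp/example.txt'.

" Text mode with explicit encoding (mandatory on Unicode systems):
OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING UTF-8.
CLOSE DATASET lv_file.

" Binary mode transfers raw bytes without any conversion:
OPEN DATASET lv_file FOR INPUT IN BINARY MODE.
CLOSE DATASET lv_file.

" Legacy text mode reads data written in a non-Unicode code page:
OPEN DATASET lv_file FOR INPUT IN LEGACY TEXT MODE CODE PAGE '1100'.
CLOSE DATASET lv_file.
```

Choosing the wrong mode does not usually cause slowness by itself, but code page conversion errors on a Unicode system can force workarounds that do.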
Hope it helps!
Best wishes,
Heinz