
How to handle source data split over multiple files?

Former Member

Hi All,

I have a requirement to process source data where the data may be split over multiple files. The data may be split as shown below.

I am only concerned with the section header and the row data. I need to be able to refer to the section-header data from file 1 when processing the data in file n+1.

The file naming convention is xxx-yyyymmdd.sequenceNumber.version.format. The final file is identified by the fact that it contains a record footer and record-count section. My initial thinking is that stitching the files together prior to processing in PI would be cumbersome, since you can't just append the data to the end of the previous file.

Any suggestions where I may start looking to find how to handle this scenario would be much appreciated.
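For illustration, the merge I have in mind (section header from file 1, rows from every part, footer marking the final file) could be sketched in plain Java. The record-type markers "H", "R" and "F" below are placeholders, not the real flat-file layout:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: assumes each line starts with a hypothetical
// record-type marker ("H" = section header, "R" = row, "F" = footer/record count).
public class FileStitcher {

    // Merge the parts: keep the section header from the first part,
    // append row records from every part, and stop once a part
    // containing the footer (i.e. the final file) has been consumed.
    static List<String> stitch(List<List<String>> parts) {
        List<String> merged = new ArrayList<>();
        boolean headerTaken = false;
        for (List<String> part : parts) {          // parts sorted by sequenceNumber
            boolean sawFooter = false;
            for (String line : part) {
                if (line.startsWith("H")) {
                    if (!headerTaken) {            // section header only from file 1
                        merged.add(line);
                        headerTaken = true;
                    }
                } else if (line.startsWith("R")) {
                    merged.add(line);
                } else if (line.startsWith("F")) {
                    sawFooter = true;              // final file reached
                }
            }
            if (sawFooter) break;
        }
        return merged;
    }

    public static void main(String[] args) {
        List<List<String>> parts = List.of(
            List.of("H|section-1", "R|row-1", "R|row-2"),
            List.of("R|row-3", "F|count=3"));
        // prints [H|section-1, R|row-1, R|row-2, R|row-3]
        System.out.println(stitch(parts));
    }
}
```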

regards

Julian

Accepted Solutions (0)

Answers (3)


anupam_ghosh2
Active Contributor

Hi Julian,

You can use Java mapping to resolve this problem.

The mapping will split the source file and produce smaller files as per your requirement. There is no need to use FCC in the receiver file adapter. You can make use of ASMA (Adapter-Specific Message Attributes) to generate the filenames.
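The chunking such a mapping would perform can be sketched in plain Java. This is only the core splitting logic; the SAP PI specifics (the Java mapping API and the ASMA/DynamicConfiguration calls for filenames) are omitted:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of splitting a list of row records into
// smaller chunks, one chunk per output file.
public class RowSplitter {

    // Partition rows into consecutive chunks of at most chunkSize records.
    static List<List<String>> split(List<String> rows, int chunkSize) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(
                rows.subList(i, Math.min(i + chunkSize, rows.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // prints [[r1, r2], [r3, r4], [r5]]
        System.out.println(split(List.of("r1", "r2", "r3", "r4", "r5"), 2));
    }
}
```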

Regards

Anupam

Former Member

This message was moderated.

Bhargavakrishna
Active Contributor

Hi Julian,

You can achieve this by using UDFs, XSLT, or Java mapping.

Refer to the link below:

http://scn.sap.com/thread/3269146

Refer to the reply by Anupam Ghosh in the discussion above.

http://wiki.sdn.sap.com/wiki/display/XI/Split+Mapping+using+UDF

Apart from this, try the splitByValue function and see (I'm not sure about the results).

Split by value: you can insert a context change into the queue after each value, after each change of value, or after each tag without a value.
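As an illustration, the "value changed" mode of splitByValue behaves roughly like this plain-Java sketch, which starts a new context whenever the value differs from the previous one:

```java
import java.util.ArrayList;
import java.util.List;

// Rough model of splitByValue (value-changed mode): a queue of values
// is partitioned into contexts, with a new context starting each time
// the value changes.
public class SplitByValueDemo {

    static List<List<String>> splitOnValueChange(List<String> queue) {
        List<List<String>> contexts = new ArrayList<>();
        List<String> current = null;
        String prev = null;
        for (String v : queue) {
            if (current == null || !v.equals(prev)) {
                current = new ArrayList<>();   // value changed: open a new context
                contexts.add(current);
            }
            current.add(v);
            prev = v;
        }
        return contexts;
    }

    public static void main(String[] args) {
        // prints [[A, A], [B, B], [C]]
        System.out.println(splitOnValueChange(List.of("A", "A", "B", "B", "C")));
    }
}
```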

Regards

Bhargava krishna

Former Member

Hi Bhargava,

Thanks for taking the time to respond.

regards

Julian

Bhargavakrishna
Active Contributor

Hi,

Did you resolve the issue?

Regards

Bhargava krishna

Former Member

Hi Bhargava,

No, this issue hasn't been resolved yet. The source file gets split at every 100,000 rows, so initially the source file will not be split. Due to time constraints, we have decided to go down the path of creating an interface that deals with a single source file, and to address the issue of split source files later (I'm going to have to learn to use BPMs, which is a good thing).

If I read it correctly, your solution was based on a misunderstanding of my question (I'll take the blame for that, given how I structured it). Your solution was for splitting the source into one-to-many files, whereas my issue is that the source can be one-to-many files and the source data needs to be stitched together somehow. Again, thanks for taking the time and effort to respond and for following up.

For some reason, my response to DNK Siddhardha, thanking him for the links posted and saying that they look like they will be useful, was rejected by the moderator.

regards

Julian

siddhardha_dnk
Active Participant
Former Member

This message was moderated.