Read the first line of a flat file and return the whole file after changing the first line in SAP CPI

Hemant_Malvi
Explorer

Hello Experts,

Problem statement:

CPI receives a tab-delimited flat file that contains column names on the first line, followed by the data records. Once the file is received, I want to read and change only the column names on that first line (replacing each space (' ') with an underscore ('_')) instead of reading the whole file, and then return the whole file with the changed column names.

Example Input file:

No<tab>employee id<tab>customer<tab>case number

1<tab>12345<tab>ABC<tab>2345

2<tab>3231<tab><tab>34532

Note: these are large files, between 15 MB and 50 MB.

Output required:

No<tab>employee_id<tab>customer<tab>case_number

1<tab>12345<tab>ABC<tab>2345

2<tab>3231<tab><tab>34532

Below is the Groovy script I am currently working on. It outputs only the column names, not the whole flat file including the data records.

*****************************

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    def reader = message.getBody(java.io.Reader)
    def array = []
    reader.find { line -> line == 1
        String newline = line.replaceAll(' ', '_')
        array.add(newline)
    }
    def messageLog = messageLogFactory.getMessageLog(message);
    messageLog.addAttachmentAsString("CSV_file", array.toListString(), "text/plain");
    message.setBody(array.toListString());
    return message
}

**********************************

Please suggest a solution that returns the whole flat file, with the changed column names, as output.

Regards,

Hemant

Accepted Solutions (1)

r_herrmann
Active Contributor

Hi hemant.malvi,

you can use the transformLine functionality of the reader and transform only the first line inside the closure. Based on your sample code, it should work like this:

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    def reader = message.getBody(java.io.Reader)
    def writer = new StringWriter()
    def first = true
    // transformLine streams the reader line by line and writes each
    // (possibly transformed) line to the writer.
    reader.transformLine(writer) { line ->
        if (first) {
            // Only the header line gets its spaces replaced.
            line = line.replace(" ", "_")
            first = false
        }
        line
    }
    def result = writer.toString()
    def messageLog = messageLogFactory.getMessageLog(message)
    messageLog.addAttachmentAsString("CSV_file", result, "text/plain")
    message.setBody(result);
    return message
}

P.S.: There's a small button named "code" in the text editor here on answers.sap.com. With it you can easily format source code in your questions.
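For quick testing outside CPI, the same transformLine logic can be run in plain Groovy. A minimal sketch (the CPI Message plumbing is stripped out; the sample data is taken from your question):

// Standalone check of the header transformation (no CPI classes needed).
def input = "No\temployee id\tcustomer\tcase number\n1\t12345\tABC\t2345"
def reader = new StringReader(input)
def writer = new StringWriter()
def first = true
reader.transformLine(writer) { line ->
    if (first) { line = line.replace(' ', '_'); first = false }
    line
}
assert writer.toString().startsWith("No\temployee_id\tcustomer\tcase_number")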

MortenWittrock
Active Contributor

Nice and Groovy solution. It's too bad the payload cannot be transformed in place; that would keep the memory usage down.

Regards,

Morten

Hemant_Malvi
Explorer

Hi Raffael,

Thanks for the script and I will use "code" option surely whenever required.

After applying the script to a large flat file containing 45k records, only 2k records get read and CPI raises an error.

Error:

javax.servlet.ServletException: Error during write of the response: , cause: java.net.SocketTimeoutException

Can we optimize the code so that it can handle large files as well?

Regards,

Hemant

r_herrmann
Active Contributor

Hi Hemant,

where does this error come from? It doesn't look like a CPI error. Maybe your calling application doesn't wait long enough. Have you tried raising the timeout limit of your calling application?

Hemant_Malvi
Explorer
//Changes done to your code
    message.setBody(result);
    writer.flush(); // line added
    reader.close(); // line added
    return message
}

Hello Raffael,

Thanks a lot for your quick response and appreciate your efforts.

How "javax.servlet.ServletException: Error during write of the response: , cause: java.net.SocketTimeoutException" issue got resolved?

I am using POSTMAN tool to test the scenario, I increased Max response size in POSTMAN and it worked.

*******************************************************

Also, I have made changes to your code to optimize the processing. In my scenario, after the file is read it is converted to XML using a converter and then passed through a mapping. With your code, complete processing of the 45k-record flat file in CPI takes approximately 7 minutes. After the code optimization, it takes approximately 4 minutes.

The edited code is included above in this comment.

Please let me know if we can optimize further, or whether 4 minutes is acceptable for a file of this size.
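One more idea I had for going further (just a sketch, untested at scale; the 8 KB buffer size is an arbitrary assumption, and it assumes '\n' line endings): transform only the header line, then copy the rest of the stream through in bulk instead of processing it line by line.

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def reader = new BufferedReader(message.getBody(java.io.Reader))
    def writer = new StringWriter()
    // Transform only the header line...
    def header = reader.readLine()
    if (header != null) {
        writer.write(header.replace(' ', '_'))
        writer.write('\n') // assumes '\n' line endings
    }
    // ...then copy the remaining characters through unchanged, in bulk.
    char[] buf = new char[8192] // arbitrary buffer size
    int n
    while ((n = reader.read(buf)) != -1) {
        writer.write(buf, 0, n)
    }
    reader.close()
    message.setBody(writer.toString())
    return message
}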

Again, thanks a lot.

Regards,

Hemant

r_herrmann
Active Contributor

Hi Hemant,

I think you won't come below the 4 minute mark. From a code perspective, the only more efficient way would be to edit the data "in-memory" by working with pointers and modifying the relevant bytes. But as far as I know, this is impossible in CPI's environment.
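Just to illustrate what I mean by byte-level in-place editing (a standalone sketch outside CPI; the CPI script API does not offer this): you would walk only the first line of the raw bytes and overwrite the spaces directly, without building a second copy of the payload.

// Standalone illustration only - not available through the CPI script API.
byte[] data = 'No\temployee id\tcustomer\tcase number\n1\t12345\tABC\t2345'.getBytes('UTF-8')
int i = 0
while (i < data.length && data[i] != (byte) 0x0A) { // stop at end of header line
    if (data[i] == (byte) 0x20) {                   // ASCII space
        data[i] = (byte) 0x5F                       // ASCII underscore
    }
    i++
}
assert new String(data, 'UTF-8').startsWith('No\temployee_id\tcustomer\tcase_number')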

Hemant_Malvi
Explorer

Hello Raffael,

Thanks for this information.

My understanding is that it also depends on your CPI sizing for processing the data. Correct me if I am wrong.

Currently, I am using a sandbox CPI tenant which is fully loaded with iFlows; could that be one of the reasons for the slow processing time?

Regards,

Hemant

r_herrmann
Active Contributor

Hi Hemant,

yes, this will also have an effect. As with everything that runs on a computer: more resources = (most of the time) higher processing speed. So the process may speed up if run on a production tenant.
