
Bulk upload to SuccessFactors via CPI

lgutiegt
Explorer

I am doing a mass upload to SuccessFactors. The process, in summary:

1. A CSV file is the data source

2. The upsert is done on JobInfo and Compensation

3. I save the errors in XML format in a Data Store (see the sketch after this list)

4. I am using a Splitter

5. At the end of the process I read the Data Store and generate a process report that includes both the OK records and the errors.
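
For context, the error capture in step 3 is typically done in an Exception Subprocess: a short Groovy script wraps the failing record and the exception text into an XML entry, and a Data Store Write step after it persists the entry. A minimal sketch of that pattern; the XML element names, and the assumption that the failing record is still in the message body, are illustrative rather than taken from this flow:

    import com.sap.gateway.ip.core.customdev.util.Message
    import groovy.xml.XmlUtil

    def Message processData(Message message) {
        // CamelExceptionCaught carries the exception raised in the main process.
        def ex = message.getProperty("CamelExceptionCaught")
        def record = message.getBody(String) ?: ""

        // Wrap the failing record and the error text into one XML entry;
        // a Data Store Write step placed after this script persists it.
        def entry = """<error>
          <timestamp>${new Date().format("yyyy-MM-dd'T'HH:mm:ss")}</timestamp>
          <reason>${XmlUtil.escapeXml(ex?.getMessage() ?: "unknown")}</reason>
          <record>${XmlUtil.escapeXml(record)}</record>
        </error>"""

        message.setBody(entry)
        return message
    }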

The process worked very well at the beginning, but now I notice that the run ends after about 5 minutes. When I check the error file it is incomplete; sometimes only 10%, 30%, or 60% of the data that comes in the file gets loaded.

Could you guide me on what is happening? At first it worked fine.

[Screenshot: Main integration process]

Other data:

1. Occasionally I get a connection drop with SF (a "connection reset" error), but the process continues.
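
One way to keep those transient resets from silently losing records is to classify the caught exception in the Exception Subprocess and route socket-level failures to a retry branch instead of straight to the error store. A hedged sketch; the retryCandidate property name and the matched phrases are assumptions, not part of the original flow:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def ex = message.getProperty("CamelExceptionCaught")
        def text = (ex?.getMessage() ?: "").toLowerCase()

        // Socket-level failures are usually transient; flag them so a Router
        // step can send the record to a retry branch instead of the error store.
        def isTransient = text.contains("connection reset") || text.contains("timed out")
        message.setProperty("retryCandidate", isTransient ? "true" : "false")
        return message
    }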

Thanks, Luis

Accepted Solutions (0)

Answers (4)

lgutiegt
Explorer

Hi David,

You are right, I have seen this behavior: some days it works fine, and on other executions it stops processing after 5 minutes, but I do not see the integration leaving an error log.

I would like to understand the event better and get it under control. I will try it today and see how it goes.

daviddasilva
Active Contributor

I have had an experience where I used up the memory available for the 24-hour logging window, so even though I switched to testing with smaller payloads, it still didn't work correctly: I had to wait for my logged data to be cleared over the next 24 hours. This could be similar to what you are seeing.

Is it working now a few days later?
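
If the daily logging quota is the culprit, one mitigation is to make payload logging conditional, so routine runs don't write attachments at all. A minimal sketch; the enableTrace property name is an assumption:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        // Attach the payload only when tracing is explicitly switched on,
        // so routine runs don't eat into the tenant's daily logging quota.
        if (message.getProperty("enableTrace") == "true") {
            def log = messageLogFactory.getMessageLog(message)
            log?.addAttachmentAsString("Payload", message.getBody(String), "text/plain")
        }
        return message
    }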

lgutiegt
Explorer

Thanks for the comment.

I have it set to process record by record; each record affects the current and historical JobInfo and Compensation records as of a fixed date.

That is, from 14-Oct-2022 to today.

I don't understand why it sometimes works and other times doesn't. The file sizes also vary, from 14,000 records down to 1,000.

Right now I'm loading 1,600 records, and I still don't understand what's going on.

nlgro02343
Active Contributor

As in, you're running it, but with bigger payloads it gets stuck at some point and then times out the next day?
If so, I've had that too, and in my case it was caused by the logging steps (Groovy scripts logging the data): Cloud Integration couldn't log that many of them. Before that, the issue was that the connector I was pulling data from received too much data at once and basically gave up. So instead I added a first step to fetch only the IDs of the data I needed, then split those up into separate calls so that the connector wasn't taking such huge chunks of data (see the sketch below).