Duplicates when loading Master Data from InfoObject to InfoObject

eric_vangrieken

Hi,

I'm trying to update master data for InfoObject ZEMPLOYEE from InfoObject 0EMPLOYEE, where ZEMPLOYEE contains only a subset of the attributes of 0EMPLOYEE.

I always get duplicate errors, even when I set the DTP flag 'Handle Duplicate Record Keys'.

When I set the DTP error handling to 'Valid Records Update, Reporting Possible', I see that most updates are not carried out.

The semantic group contains only the employee number as key, so I would expect the time-independent attributes to take the last values and the time-dependent attributes to follow the same sequencing as in 0EMPLOYEE.
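Just to pin down what I mean by "take the last values", here is a quick Python sketch of the behaviour I would expect (illustration only, nothing SAP-specific; the sample values are made up):

def last_value_wins(rows, key_field):
    # Keep only the last record per semantic key; later records in the
    # load sequence overwrite earlier ones.
    latest = {}
    for row in rows:
        latest[row[key_field]] = row
    return list(latest.values())

rows = [
    {"EMPLOYEE": "806750", "LASTNAME": "VAN GRIEKEN"},
    {"EMPLOYEE": "806750", "LASTNAME": "VANGRIEKEN"},
]
print(last_value_wins(rows, "EMPLOYEE"))  # only the second record survives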

What am I doing wrong here?

Regards - Eric

Accepted Solutions (1)

Former Member

Hi Eric,

Since InfoObject ZEMPLOYEE has at least one time-dependent attribute, it is not necessary to use the "handle duplicate record keys" option. That option is meant for data with inconsistent, i.e. overlapping, time intervals. For example:

Data record 1 is valid from 01.01.2006 to 31.12.2006

Data record 2 has the same key but is valid from 01.07.2006 to 31.12.2007

The system then corrects the time interval of data record 1 to run from 01.01.2006 to 30.06.2006.
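Roughly like this, in a quick Python sketch (illustration only, not SAP's actual implementation; the record layout is made up):

from datetime import date, timedelta

def correct_overlaps(records):
    # Sort by DATEFROM; whenever two consecutive records for the same
    # key overlap in time, cut the earlier record's DATETO back to the
    # day before the later record's DATEFROM.
    records = sorted(records, key=lambda r: r["datefrom"])
    for earlier, later in zip(records, records[1:]):
        if earlier["dateto"] >= later["datefrom"]:
            earlier["dateto"] = later["datefrom"] - timedelta(days=1)
    return records

# The example above: record 1 gets trimmed to 01.01.2006 - 30.06.2006.
records = [
    {"datefrom": date(2006, 1, 1), "dateto": date(2006, 12, 31)},
    {"datefrom": date(2006, 7, 1), "dateto": date(2007, 12, 31)},
]
print(correct_overlaps(records))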

Instead, map InfoObjects 0DATETO and 0DATEFROM in the transformation rules from 0EMPLOYEE to ZEMPLOYEE. Then you should not get any duplicates.

Best regards,

Theodoros

eric_vangrieken

Theodoros,

Thanks for the response. (Sorry for the late reply, but I was out yesterday.)

The mapping you suggest (0DATEFROM, 0DATETO) is exactly what we have in place, which is why I do not understand why we still get duplicates.

What I'm starting to wonder is the following:

For 0EMPLOYEE:

DATEFROM    DATETO      EMPLOYEE  ENTRYDATE   ORG       LASTNAME     (other attr.)
01.03.2006  31.07.2006  806750    21.10.2001  50023146  VAN GRIEKEN  X Y Z
01.08.2006  31.12.2006  806750    21.10.2001  50023146  VAN GRIEKEN  X B C

In this example, all attributes targeted for ZEMPLOYEE remain constant, while other attributes change, e.g. Y becomes B and Z becomes C.

This gives for ZEMPLOYEE:

DATEFROM    DATETO      EMPLOYEE  ENTRYDATE   ORG       LASTNAME     (no other attr.)
01.03.2006  31.07.2006  806750    21.10.2001  50023146  VAN GRIEKEN
01.08.2006  31.12.2006  806750    21.10.2001  50023146  VAN GRIEKEN

In other words, two identical entries except for DATEFROM and DATETO.

(In fact, ORG is the only time-dependent attribute of ZEMPLOYEE.)

I wonder whether this is what causes the duplicates.

Actually, here I would expect the data transfer to "merge" the entries, creating only one entry for ZEMPLOYEE (sketched below):

DATEFROM    DATETO      EMPLOYEE  ENTRYDATE   ORG       LASTNAME
01.03.2006  31.12.2006  806750    21.10.2001  50023146  VAN GRIEKEN
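Just to be precise about what I mean by "merge", roughly this (a Python sketch of the behaviour I would expect, not anything BW actually promises to do):

from datetime import date, timedelta

def merge_adjacent(rows):
    # rows: (datefrom, dateto, attrs) sorted by datefrom, one employee.
    # Consecutive periods whose attribute values are identical and
    # whose intervals touch are collapsed into a single period.
    merged = []
    for row in rows:
        if (merged
                and merged[-1][2] == row[2]
                and merged[-1][1] + timedelta(days=1) == row[0]):
            merged[-1] = (merged[-1][0], row[1], row[2])
        else:
            merged.append(row)
    return merged

rows = [
    (date(2006, 3, 1), date(2006, 7, 31), ("806750", "21.10.2001", "50023146", "VAN GRIEKEN")),
    (date(2006, 8, 1), date(2006, 12, 31), ("806750", "21.10.2001", "50023146", "VAN GRIEKEN")),
]
print(merge_adjacent(rows))  # one row: 01.03.2006 to 31.12.2006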

Regards,

Eric

Former Member

Hi there,

> This gives for ZEMPLOYEE:
>
> DATEFROM    DATETO      EMPLOYEE  ENTRYDATE   ORG       LASTNAME
> 01.03.2006  31.07.2006  806750    21.10.2001  50023146  VAN GRIEKEN
> 01.08.2006  31.12.2006  806750    21.10.2001  50023146  VAN GRIEKEN
>
> I wonder whether this is what causes the duplicates.

No, this cannot produce duplicate entries, since the key of ZEMPLOYEE contains not only the /BIC/ZEMPLOYEE field but also the DATETO field. In your example, the two rows have different DATETO values.

Please check whether table /BIC/MZEMPLOYEE has both fields in its key (the third key field should be OBJVERS).

If at least one attribute of ZEMPLOYEE is marked as time-dependent, the above should hold.
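To make the key argument concrete, a small Python sketch (illustration only):

def find_duplicates(rows, key_fields):
    # Report rows whose key field combination occurs more than once.
    seen, dups = set(), []
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        if key in seen:
            dups.append(row)
        seen.add(key)
    return dups

rows = [
    {"/BIC/ZEMPLOYEE": "806750", "OBJVERS": "A", "DATETO": "31.07.2006"},
    {"/BIC/ZEMPLOYEE": "806750", "OBJVERS": "A", "DATETO": "31.12.2006"},
]
# With DATETO in the key, your two rows are distinct:
print(find_duplicates(rows, ["/BIC/ZEMPLOYEE", "OBJVERS", "DATETO"]))  # []
# Only without DATETO would they collide:
print(find_duplicates(rows, ["/BIC/ZEMPLOYEE", "OBJVERS"]))  # [second row]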

Best regards,

Theodoros

eric_vangrieken

Key fields are:

/BIC/ZEMPLOYEE

OBJVERS

DATETO

As expected, I suppose. The same holds for the time-dependent Q table.

Everything is in place for normal behaviour, yet something is causing the duplicate errors.

I'm left absolutely clueless...

Eric

eric_vangrieken

Just to close this neatly.

It proved to be the time-dependency settings of the attributes in ZEMPLOYEE.

Some attributes were non-time-dependent in 0EMPLOYEE but were still flagged as time-dependent in ZEMPLOYEE. Hence the "strange" duplicate errors...

The topic is now marked as answered.
