on 10-06-2011 2:24 AM
Hello All,
The scenario here is flat file to XML. One of the values in the flat file is the invoice number.
There will be multiple records in the flat file. As per the business requirement, PI should check the uniqueness of the invoice number; whenever it finds another record with the same invoice number, that record should be dropped and the remaining valid records should be processed to the target.
How can this be handled in the mapping?
Please guide me.
Thanks,
Regards,
Moorthy
Hello,
This can be handled by playing around with contexts. Can you show us a sample input and expected output so that we can better visualize your problem?
Regards,
Mark
Hello,
You can use the following mapping:
For the employee number:
empnum -> removeContext -> sort (lexicographical, ascending) -> splitByValue (valueChanged) -> collapseContext -> splitByValue (eachValue) -> targetEmpNum
For the place, formatByExample takes two inputs:
- empnum -> removeContext as the key and city -> removeContext as the value, into sortByKey (lexicographical, ascending), giving the cities sorted by employee number
- empnum -> removeContext -> sort (lexicographical, ascending) -> splitByValue (valueChanged), giving the example context structure
Then: formatByExample -> collapseContext -> splitByValue (eachValue) -> targetCity
For the invoice, the same pattern with Invoice in place of city:
- empnum -> removeContext as the key and Invoice -> removeContext as the value, into sortByKey (lexicographical, ascending)
- empnum -> removeContext -> sort (lexicographical, ascending) -> splitByValue (valueChanged)
Then: formatByExample -> collapseContext -> splitByValue (eachValue) -> targetInvoice
Hope this helps,
Mark
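As a side note, the rule this context juggling implements — keep one record per key value and drop the later duplicates — can be sketched in plain Java. This is a standalone illustration of the logic, not PI mapping code; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class InvoiceDedup {
    // Keeps the first record seen for each invoice number and drops later
    // duplicates, preserving the order in which invoice numbers first appear.
    public static List<String[]> dedup(List<String[]> records) {
        Set<String> seen = new LinkedHashSet<>();
        List<String[]> result = new ArrayList<>();
        for (String[] rec : records) {
            String invoice = rec[0];   // invoice number is the first field
            if (seen.add(invoice)) {   // add() returns false for an already-seen key
                result.add(rec);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String[]> in = new ArrayList<>();
        in.add(new String[] {"1234", "BANGALORE"});
        in.add(new String[] {"1234", "HYDERABAD"}); // duplicate invoice -> dropped
        in.add(new String[] {"1235", "NEWDELHI"});
        System.out.println(InvoiceDedup.dedup(in).size()); // prints 2
    }
}
```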
Concatenate the fields and check for duplicates of the same combination using the same mapping steps Mark suggested.
If it is the same, then I have to drop the duplicated record.
If your requirement is to avoid duplicates permanently (duplicates in future files as well), then you need to look for other alternatives.
One option is to use a UDF:
http://wiki.sdn.sap.com/wiki/display/XI/DifferentwaystokeepyourInterfacefromprocessingduplicate+files
Refer to solution 2 in the above wiki.
Instead of storing the file names, store the required fields (emp# and name) for the duplicate check.
Remove the dynamic configuration part in the UDF and pass the emp# and name as input to the UDF.
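As a rough sketch of that idea (not the wiki's actual UDF code — the class name, store path, and method names here are all hypothetical), a small key store could persist the processed emp#/name combinations across runs and flag repeats:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.HashSet;
import java.util.Set;

public class DuplicateKeyStore {
    // Hypothetical key file; a real UDF would use a path on the PI server.
    private final Path store;

    public DuplicateKeyStore(Path store) {
        this.store = store;
    }

    // Returns true if this key (e.g. emp# + ";" + name) was seen in an
    // earlier run; otherwise appends it to the store file and returns false.
    public boolean isDuplicate(String key) throws IOException {
        Set<String> seen = new HashSet<>();
        if (Files.exists(store)) {
            seen.addAll(Files.readAllLines(store));
        }
        if (seen.contains(key)) {
            return true;
        }
        Files.write(store, (key + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("dupkeys", ".txt");
        DuplicateKeyStore s = new DuplicateKeyStore(tmp);
        System.out.println(s.isDuplicate("1234;ABC")); // prints false (first time seen)
        System.out.println(s.isDuplicate("1234;ABC")); // prints true (already stored)
    }
}
```

In a real interface you would also have to think about concurrent access to the store file; this sketch ignores that.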
Edited by: nagarjuna _s on Oct 6, 2011 11:49 AM
1234BANGALOREABC560032
1234HYDERABADABC560033
1235NEWDELHIIIIIIIIIIIIIIIIIII560032
1234HYDERABADABCD560033
In this case, will the target mapping generate a record structure for the first, third, and fourth records?
If yes, then check the below mapping (the steps are the same as those mentioned by Mark, with a slight modification):
For targetEmpNum:
- empnum -> removeContext and empname -> removeContext, into concat(;)
- concat(;) -> sort (ascending) -> splitByValue (valueChanged) -> collapseContext -> splitByValue (eachValue) -> udf1 -> targetEmpNum
For place, formatByExample takes two inputs:
- the concat(;) of empnum -> removeContext and empname -> removeContext as the key, with city -> removeContext as the value, into sortByKey (ascending)
- the same concat(;) -> sort (ascending) -> splitByValue (valueChanged), giving the example context structure
Then: formatByExample -> collapseContext -> splitByValue (eachValue) -> targetPlace
For targetEmpName:
- the same concat(;) -> sort (ascending) -> splitByValue (valueChanged) -> collapseContext -> splitByValue (eachValue) -> udf2 -> targetEmpName
UDF1 (returns the empnum part of the concatenated key):
String[] temp = var1[0].split(";");
result.addValue(temp[0]); // empnum is before the ";"
UDF2 (returns the empname part):
String[] temp = var1[0].split(";");
result.addValue(temp[1]); // empname is after the ";"
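These two UDFs simply undo the concat(;): the concatenated key survives the duplicate filtering, and each target field takes back its own half (assuming ";" never occurs in the data itself). A quick standalone check of that round trip:

```java
public class SplitKeyDemo {
    public static void main(String[] args) {
        // The mapping builds the key as empnum concat(;) empname, e.g. "1234;ABC".
        String key = "1234;ABC";
        String[] temp = key.split(";");
        System.out.println(temp[0]); // what UDF1 passes to targetEmpNum: 1234
        System.out.println(temp[1]); // what UDF2 passes to targetEmpName: ABC
    }
}
```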