2006 Nov 13 11:40 AM
Hi, I created an infoset and built a query on it.
I am getting duplicate records in the output.
How do I delete the duplicate records in the query?
Please help me.
Raju singhireddy
2006 Nov 13 11:43 AM
Hi,
It depends on the join you have defined in the infoset. Are you sure it's right?
Max
2006 Nov 13 11:45 AM
I do not know if you can, as you do not have access to the internal table at development time; the actual code is generated at runtime.
If that is the case, I would say you have two options:
1. Create your own internal table in the infoset and use that as the output.
2. Write the report in plain ABAP instead of using an infoset.
2006 Nov 13 11:51 AM
Hi, wherever you are writing the query logic (e.g. in the module pool),
after fetching the data into an internal table, sort it and then write:
"SORT itab BY <fieldname>. DELETE ADJACENT DUPLICATES FROM itab COMPARING <fieldname>."
(DELETE ADJACENT DUPLICATES only removes neighbouring rows, so the table must be sorted by the compared fields first.)
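For readers outside ABAP, the sort-then-drop-adjacent-duplicates idea can be sketched in Python. This is a minimal illustration, not ABAP itself; the field names and the dict-based "internal table" are assumptions for the example:

```python
from itertools import groupby

def delete_adjacent_duplicates(rows, key_fields):
    """Mimic ABAP's SORT itab BY ... followed by
    DELETE ADJACENT DUPLICATES FROM itab COMPARING ...

    rows: list of dicts (the 'internal table');
    key_fields: the fields compared for duplicates.
    """
    keyfn = lambda r: tuple(r[f] for f in key_fields)
    rows = sorted(rows, key=keyfn)        # SORT itab BY <key_fields>
    # keep only the first row of each run of equal keys
    return [next(group) for _, group in groupby(rows, key=keyfn)]

# hypothetical sample data
itab = [
    {"matnr": "A1", "werks": "1000"},
    {"matnr": "A1", "werks": "1000"},   # duplicate of the first row
    {"matnr": "B2", "werks": "2000"},
]
deduped = delete_adjacent_duplicates(itab, ["matnr", "werks"])
# deduped keeps one A1/1000 row and the B2/2000 row
```

As in ABAP, the sort is essential: dropping only *adjacent* duplicates is cheap, but it only works once equal keys sit next to each other.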
2006 Nov 13 12:50 PM
Hi,
Good question. Just try this; I hope it helps.
First, mark the rows that cause duplicate records using an Expression transformation, and pass that flag into a Router with two groups, one with a TRUE condition and one with FALSE. Let the TRUE group go to the de-duplicated target table and the other group go into the table that holds the duplicates.
For example, suppose columns c1, c2, and c3 can carry duplicate values.
Source Qualifier ports:
C1
C2
C3
C4
Then use a Sorter transformation and sort the records on the sort keys c1, c2, and c3.
Then use an Expression transformation after the Sorter and configure it as given below.
Do not change the port order stated below: the flag must be evaluated before the variable ports are updated, so each row is compared against the *previous* row's key values.
C1
C2
C3
C4
V_detect_duplicate = IIF(c1 = v_c1 AND c2 = v_c2 AND c3 = v_c3, FALSE, TRUE)
(This flag marks the rows: TRUE for the first occurrence of a key, FALSE for a repeated row. Use variable ports; v_ denotes a variable port.)
v_c1 --> c1 (in the expression editor use C1)
v_c2 --> c2
v_c3 --> c3
O_detect_duplicate --> v_detect_duplicate
Then use a Router and create two groups:
Remove_duplicate --> O_detect_duplicate = TRUE
Non_duplicate --> O_detect_duplicate = FALSE
Then pass the Remove_duplicate group to the target that should not hold any duplicate values,
and the Non_duplicate group to the other target table that holds the duplicate records.
Thanks,
mrutyun^
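The Sorter/Expression/Router flow above can be sketched outside the mapping tool. This Python sketch is illustrative only (the function and column names are assumptions, not Informatica APIs): it sorts on the key, flags the first occurrence of each key as TRUE the way the variable-port expression does, and routes rows into two target lists:

```python
def route_duplicates(rows, key_fields):
    """Sort on the key (Sorter), flag first occurrences (Expression),
    and split the rows into two targets (Router)."""
    keyfn = lambda r: tuple(r[f] for f in key_fields)
    rows = sorted(rows, key=keyfn)          # Sorter transformation
    unique_target, duplicate_target = [], []
    prev_key = None                         # plays the role of v_c1..v_c3
    for row in rows:
        # IIF(c1=v_c1 AND c2=v_c2 AND c3=v_c3, FALSE, TRUE):
        detect = keyfn(row) != prev_key     # TRUE for a first occurrence
        prev_key = keyfn(row)               # update the variable ports
        (unique_target if detect else duplicate_target).append(row)
    return unique_target, duplicate_target

# hypothetical sample rows; c1/c2/c3 form the duplicate key, c4 is payload
rows = [
    {"c1": 1, "c2": "x", "c3": 10, "c4": "a"},
    {"c1": 1, "c2": "x", "c3": 10, "c4": "b"},   # same key -> duplicate
    {"c1": 2, "c2": "y", "c3": 20, "c4": "c"},
]
uniq, dups = route_duplicates(rows, ["c1", "c2", "c3"])
# uniq receives the first row of each key, dups receives the repeats
```

Evaluating the flag before updating the "previous key" mirrors the port-order warning in the post: if the variable ports were refreshed first, every row would compare equal to itself and nothing would be flagged as new.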