‎2007 Dec 12 12:49 PM
Hi experts,
How are you doing?
Now I want to collect data daily from MKPF inner joined with MSEG. I created a customized view "ZWTMV" for this join, and a customized table "ZWTMVCOLLECT" for the collected data.
I wrote the code below and scheduled the program as a background job, but the job was cancelled with the short dump "DBIF_RSQL_SQL_ERROR":
REPORT zwmdsd_collect_monthly.

DATA:
  wa_zwtmv        LIKE zwtmv,
  wa_zwtmvcollect LIKE zwtmvcollect,
  it_zwtmv        LIKE zwtmv OCCURS 0,
  it_zwtmvcollect LIKE zwtmvcollect OCCURS 0.

DATA:
  datestart TYPE d,
  dateend   TYPE d,
  monthend  TYPE d.

SELECTION-SCREEN BEGIN OF BLOCK bl1 WITH FRAME TITLE text-001.
PARAMETERS: p_date TYPE d,
            p_end  TYPE d.
SELECTION-SCREEN END OF BLOCK bl1.

datestart = p_date.
dateend   = p_end.

START-OF-SELECTION.

  WHILE monthend < dateend.
*   Determine the last day of the month that datestart falls in
    CALL FUNCTION 'SG_PS_GET_LAST_DAY_OF_MONTH'
      EXPORTING
        day_in            = datestart
      IMPORTING
        last_day_of_month = monthend.

*   Read one month's movements from the join view
    SELECT matnr shkzg budat bwart lgort umlgo menge
      INTO CORRESPONDING FIELDS OF TABLE it_zwtmv
      FROM zwtmv
      WHERE budat BETWEEN datestart AND monthend.

    IF sy-subrc = 0.
*     Aggregate the quantities per key via COLLECT
      LOOP AT it_zwtmv INTO wa_zwtmv.
        wa_zwtmvcollect-budat = monthend.
        wa_zwtmvcollect-matnr = wa_zwtmv-matnr.
        wa_zwtmvcollect-shkzg = wa_zwtmv-shkzg.
        wa_zwtmvcollect-bwart = wa_zwtmv-bwart.
        wa_zwtmvcollect-lgort = wa_zwtmv-lgort.
        wa_zwtmvcollect-umlgo = wa_zwtmv-umlgo.
        wa_zwtmvcollect-menge = wa_zwtmv-menge.
        COLLECT wa_zwtmvcollect INTO it_zwtmvcollect.
      ENDLOOP.
      CLEAR: wa_zwtmv, wa_zwtmvcollect.

*     Write the month's aggregates to the collection table
      MODIFY zwtmvcollect FROM TABLE it_zwtmvcollect.
      IF sy-subrc = 0.
        COMMIT WORK.
      ELSE.
        ROLLBACK WORK.
      ENDIF.
      REFRESH: it_zwtmv, it_zwtmvcollect.
    ENDIF.

*   Move on to the first day of the next month
    datestart = monthend + 1.
  ENDWHILE.
I'm looking forward to your help; any help will be greatly appreciated.
Thanks,
Tony
‎2007 Dec 12 12:52 PM
Hi,
I think it's a Basis problem; the Basis people have to increase the memory. I faced the same problem.
Regards,
Prashant
‎2007 Dec 12 3:51 PM
There should be additional information with the dump. If you post that, it will help.
But it could be that the rollback segment has filled up. You can get around this by updating your table in chunks and doing a COMMIT WORK after each chunk.
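For example, something roughly like this, in place of the single MODIFY at month-end (a sketch only: the 1000-row package size is an arbitrary example value, and it_zwtmvcollect / zwtmvcollect are the table names from your program):

```abap
DATA: it_chunk LIKE zwtmvcollect OCCURS 0,
      wa_chunk LIKE zwtmvcollect,
      lv_count TYPE i.

* Write the collected rows in packages and commit after each one,
* so the rollback segment only ever has to hold one package.
LOOP AT it_zwtmvcollect INTO wa_chunk.
  APPEND wa_chunk TO it_chunk.
  lv_count = lv_count + 1.
  IF lv_count >= 1000.                  " example package size
    MODIFY zwtmvcollect FROM TABLE it_chunk.
    IF sy-subrc = 0.
      COMMIT WORK.
    ELSE.
      ROLLBACK WORK.
      EXIT.                             " stop on a database error
    ENDIF.
    REFRESH it_chunk.
    CLEAR lv_count.
  ENDIF.
ENDLOOP.

* Flush the last, partial package
IF NOT it_chunk IS INITIAL.
  MODIFY zwtmvcollect FROM TABLE it_chunk.
  IF sy-subrc = 0.
    COMMIT WORK.
  ELSE.
    ROLLBACK WORK.
  ENDIF.
ENDIF.
```

Note that each COMMIT WORK ends the database LUW, so rows already committed will stay in the table even if a later package fails.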
Rob