Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.
Read only

Which is performance-wise better: MOVE or MOVE-CORRESPONDING?

Former Member
0 Likes
3,407

Hi SAP-ABAP Experts,

which is performance-wise better:

MOVE or MOVE-CORRESPONDING?

Regards: Rajneesh

1 ACCEPTED SOLUTION

Former Member

> Select single will pick the first matching record, but Select up to one row will select the most accurate search criteria.

That is nonsense!

There is no difference. Use SELECT SINGLE if the key is fully specified, i.e. if only one record can fulfill the condition.

Use UP TO n ROWS if several records can fulfill the condition but you want only n; n = 1 is the special case.

Note: if ORDER BY is added, then all records fulfilling the condition must be found; the same applies to aggregates. UP TO n ROWS by itself does not read all records, even though this is often stated differently here.

MOVE and MOVE-CORRESPONDING:

Use what really fits your needs. Use MOVE if both structures are identical. Use MOVE-CORRESPONDING if they are not.

Siegfried
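To illustrate the two points in this answer, here is a minimal sketch (SFLIGHT and SCARR are the standard flight-demo tables; the work-area names and key values are made up):

```abap
DATA: ls_flight  TYPE sflight,
      ls_copy    TYPE sflight,   " identical structure
      ls_partial TYPE scarr.     " different structure, shares CARRID by name

* SELECT SINGLE: key fully specified, at most one row can match
SELECT SINGLE * FROM sflight INTO ls_flight
  WHERE carrid = 'LH' AND connid = '0400' AND fldate = sy-datum.

* UP TO n ROWS: several rows may match, but we only want n (here n = 1)
SELECT * FROM sflight UP TO 1 ROWS INTO ls_flight
  WHERE carrid = 'LH'.
ENDSELECT.

* MOVE when the structures are identical ...
MOVE ls_flight TO ls_copy.

* ... MOVE-CORRESPONDING when they are not (copies fields with equal names)
MOVE-CORRESPONDING ls_flight TO ls_partial.
```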

17 REPLIES

Former Member

<removed by moderator>

Edited by: Mike Pokraka on Aug 4, 2008 6:51 PM


Hi SAP-ABAP Experts,

which is performance-wise better ... and why?

(a) MOVE or MOVE-CORRESPONDING

(b) SELECT SINGLE or SELECT UP TO 1 ROWS

Regards: Rajneesh


Hi,

it depends on how many fields you have in the internal table.

For a small number of fields, it doesn't make any difference.

For more fields - say you want to move 10 field values into a structure with 50+ fields - MOVE-CORRESPONDING will take slightly more time than MOVE, because it copies component by component instead of in one block.

So it's your call now.

regards,

madhu

Former Member

This message was moderated.

Former Member

MOVE is better...

the other option may need conversions, so it costs more.

Former Member

> Select single will pick the first matching record, but Select up to one row will select the most accurate search criteria.

That is nonsense!

There is no difference. Use SELECT SINGLE if the key is fully specified, i.e. if only one record can fulfill the condition.

Use UP TO n ROWS if several records can fulfill the condition but you want only n; n = 1 is the special case.

Note: if ORDER BY is added, then all records fulfilling the condition must be found; the same applies to aggregates. UP TO n ROWS by itself does not read all records, even though this is often stated differently here.

MOVE and MOVE-CORRESPONDING:

Use what really fits your needs. Use MOVE if both structures are identical. Use MOVE-CORRESPONDING if they are not.

Siegfried

Former Member

Hi Siegfried,

totally agree.

Performance is defined at design time: consider a data model that has flaws in it - it is not relevant HOW you access the data; it will have poor performance by design.

The design should be such that the most common queries your users send to the DB run as efficiently as possible: if you end up with a query containing nested joins and EXISTS several levels deep, the design is poor...

Performance is defined before you create any table - decisions like which fields belong together in one tuple, which indexes are necessary, what data volume to expect (ever considered partitioning for a table?).

Tuning afterwards is really a pain: you really wouldn't want to change physical structures in a production system.

People then follow the 'never change a running program' rule and get stuck with the poor performance.

Benchmark everything!

Benchmark your problem-solving approaches at small scale (the MOVE against the MOVE-CORRESPONDING).

Benchmark at large scale with simulated concurrent users (10, 100, 1,000, 10,000).

Don't believe in 'We always did it that way' or 'Everyone knows you should do it that way'.

A lot more can be said about that...

yk
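The small-scale benchmarking suggested here can be sketched with GET RUN TIME (the structure and field names below are made up; absolute numbers depend entirely on your system):

```abap
DATA: BEGIN OF ls_source,
        matnr TYPE c LENGTH 18,
        maktx TYPE c LENGTH 40,
        menge TYPE p DECIMALS 3,
      END OF ls_source.
DATA ls_target LIKE ls_source.
DATA: lv_t0   TYPE i,
      lv_t1   TYPE i,
      lv_diff TYPE i.

* time MOVE-CORRESPONDING over many iterations
GET RUN TIME FIELD lv_t0.
DO 1000000 TIMES.
  MOVE-CORRESPONDING ls_source TO ls_target.
ENDDO.
GET RUN TIME FIELD lv_t1.
lv_diff = lv_t1 - lv_t0.
WRITE: / 'MOVE-CORRESPONDING:', lv_diff, 'microseconds'.

* time plain MOVE over the same number of iterations
GET RUN TIME FIELD lv_t0.
DO 1000000 TIMES.
  ls_target = ls_source.                    " plain MOVE
ENDDO.
GET RUN TIME FIELD lv_t1.
lv_diff = lv_t1 - lv_t0.
WRITE: / 'MOVE              :', lv_diff, 'microseconds'.
```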


> A lot more can be said about that...

all good, but I think the problem here is that many people don't have the correct ABC/top-down approach to solving performance issues, as in

A

- use primary or secondary indexes when accessing database tables

- use sorted or hashed internal tables and access by key (or index where applicable)

B

- declare only required fields in target tables for array selects

- ...

C

- any circulating myth like "use FOR ALL ENTRIES" or "don't use MOVE-CORRESPONDING"...

Thomas
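Point A's internal-table advice, sketched (MARA is the standard material master table; the material number used for the lookup is made up):

```abap
DATA: lt_mara  TYPE HASHED TABLE OF mara WITH UNIQUE KEY matnr,
      ls_mara  TYPE mara,
      lv_matnr TYPE mara-matnr VALUE '000000000000000001'.

* fill the hashed table once; the WHERE clause can use a database index
SELECT * FROM mara INTO TABLE lt_mara WHERE mtart = 'FERT'.

* constant-time access by key instead of a linear scan
READ TABLE lt_mara INTO ls_mara WITH TABLE KEY matnr = lv_matnr.
IF sy-subrc = 0.
  WRITE: / ls_mara-matnr.
ENDIF.
```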


> all good, but I think the problem here is that many people don't have the correct ABC/top-down approach to solving performance issues, as in
>
> A
>
> - use primary or secondary indexes when accessing database tables

I would put this into the data model design phase...

> - use sorted or hashed internal tables and access by key (or index where applicable)
>
> B
>
> - declare only required fields in target tables for array selects
> - ...
>
> C
>
> - any circulating myth like "use FOR ALL ENTRIES" or "don't use MOVE-CORRESPONDING"...
>
> Thomas

the others you have to benchmark in the implementation/development phase

Former Member

Thomas is going in the right direction; I would add one point.

A

  • access path and indexes

  • too large numbers of records or executions

  • processing of internal tables => no quadratic coding

These are the main performance issues!

> Performance is defined at design time

Partly yes, but more is determined at runtime; you must check everything at least once. Many things can go wrong and will go wrong:

  • the database does not do what you expect

  • selections are empty, so everything is read

  • quadratic coding is caused not only by nested loops, but sometimes also by logic and other things.

B

.... there are several other topics ....

C

The difference between SELECT SINGLE and SELECT UP TO 1 ROWS is several microseconds.

Siegfried
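The "no quadratic coding" point above, sketched in ABAP (the order/customer tables and fields are made-up illustrations):

```abap
TYPES: BEGIN OF ty_order,
         vbeln TYPE c LENGTH 10,
         kunnr TYPE c LENGTH 10,
       END OF ty_order,
       BEGIN OF ty_customer,
         kunnr TYPE c LENGTH 10,
         name1 TYPE c LENGTH 35,
       END OF ty_customer.
DATA: lt_orders    TYPE STANDARD TABLE OF ty_order,
      lt_customers TYPE STANDARD TABLE OF ty_customer,
      ls_order     TYPE ty_order,
      ls_customer  TYPE ty_customer.

* Quadratic: for every order row, READ ... WITH KEY scans linearly
LOOP AT lt_orders INTO ls_order.
  READ TABLE lt_customers INTO ls_customer
       WITH KEY kunnr = ls_order-kunnr.               " O(n) per read
ENDLOOP.

* Better: sort once, then binary search per read
SORT lt_customers BY kunnr.
LOOP AT lt_orders INTO ls_order.
  READ TABLE lt_customers INTO ls_customer
       WITH KEY kunnr = ls_order-kunnr BINARY SEARCH. " O(log n) per read
ENDLOOP.
```

A sorted or hashed internal table (as Thomas suggested above) achieves the same effect without the explicit SORT and BINARY SEARCH.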


> A
>
> * access path and indexes

Indexes, and hence access paths, are defined when you design the data model. They are part of the model design.

> * too large numbers of records or executions

Consider a data warehouse environment - you have to deal with huge loads of data. A million records are considered "small" here. Terms like "small" or "large" depend on the context you are working in.

If you have never heard of star transformation, partitioning, and parallel query, you will get lost here!

OLTP is different: you have short transactions, but a huge number of concurrent users.

You would not even consider bitmap indexes in an OLTP environment - but maybe a design that evenly distributes data blocks over several files to avoid hot spots on heavily used tables.

> * processing of internal tables => no quadratic coding
>
> These are the main performance issues!
>
> > Performance is defined at design time
>
> Partly yes, but more is determined at runtime; you must check everything at least once. Many things can go wrong and will go wrong.

Sorry, it's all about the data model design - sure, you have to tune later during development, but you really can't tune successfully on a BAD data model... you have to redesign.

If the model is good, there is a chance the developer chooses the worst access to it, but then you have the potential to tune with success, because your model allows for a better access strategy.

The decisions you make in the design phase determine the potential for tuning later.

> * database does not do what you expect

I call this the black box view: the developer is not interested in the underlying database.

Why would we have different DB vendors if they all behaved the same way? E.g. compare the concurrency and consistency implementations in various DBs - totally different. You can't simply apply your working knowledge of one database to another DB product. I learned that the hard way while implementing on INFORMIX and ORACLE...


You seem to have good knowledge in the areas you're referring to.

Most of the time here though, people use the already modelled and designed SAP standard data model in a wrong way and come here with their problems. E.g. "my report dumps with TIME_OUT", because they are selecting on BSEG without the primary key. Simple as that. I'm afraid these fundamental theories, while correct, won't help the poor chaps with their specific problems.

Cheers

Thomas


Hi Thomas,

that's sad but true. I know - I hear them calling for more CPU, faster disks...

Because I'm in the business of performance tuning, by the time I'm called they have often already gone through this stage... spent a lot of money only to hit the next limit, because there is no scalability in the model or in the application.

The worst thing is when the design is bad and you can't change it - you are stuck with it.

- Well, you can index... and over-index

- or sometimes go the "hard" native SQL way

bye yk


True. Some of my income derives from fixing bad designs though, so I should not complain too much.

Former Member

Yes, this thread is answered.

Thanks for all the great replies.

Regards: rajneesh


Rajneesh - you've asked enough questions to know that the proper way to say "thanks" in the SDN forums is by assigning points to the helpful answers.

Rob

nishantbansal91
Active Contributor

This message was moderated.