Financial Management Blogs by SAP
Paulo_Moncada
Product and Topic Expert

Have you ever thrown a party expecting only one set of guests, only to discover on the day of the event that you're dealing with two or more groups? As the host, you must handle this situation not by ordering the unexpected group to leave the party, but by entertaining both groups as you see fit. The same thing happens when users see "duplicate data" in the Analyze Screen of the Query function when only one set of data is expected.


In this blog, we will not only answer the frequently asked question "Why am I getting duplicate data when analyzing the Query function?" but also explain why this happens, along with the "tips and tricks" for managing this data effectively.


The Backstory:

In SAP Profitability and Performance Management 3.0, users can apply a Process Template when executing a processing function (such as a View Function or a Calculation Function) in order to filter the specific set of data they need as the output of a Query Function and, later, in the Analyze Screen.


In this example, I am using a simple set of data from an input Model Table.


I also created two Process Templates:


P0001_, created to select only the data for CUST1
P0002_, created to select only the data for CUST3

In the following examples, we will use a View Function to better understand how the data is stored for each execution. Users may initially execute the processing function without a Process Template.


1st Execution result without a process template:


They may later execute the processing function again, this time with the Process Templates.

2nd Execution result using the Process Template P0001_

3rd Execution result using the Process Template P0002_

As a result, the data from these executions is stored in the temporary data storage of the processing function, also known as the YTable. When the processing function (executable) is used as the input of the Query function, the same set of data is projected in the Analyze Screen. This is because the Query function reads the generated YTable, where the results of all executions are stored. This is also why users like you and me see the data as if it were duplicated.


Notice the key figures for CUST1 and CUST3 are aggregated.
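The accumulation described above can be sketched in plain Python. This is a minimal conceptual sketch, not the actual product behavior or API: the `ytable` list and `execute` function are hypothetical stand-ins for the generated YTable and the processing-function run, and a `selection` set stands in for a Process Template's filter.

```python
# Conceptual sketch: each execution of a processing function appends its
# result rows to the same temporary storage (the "YTable"). The Query
# function then aggregates over the whole YTable, so key figures for the
# same customer are summed across executions and look "duplicated".
from collections import defaultdict

ytable = []  # hypothetical stand-in for the generated YTable

def execute(rows, selection=None):
    """Run the processing function; `selection` mimics a Process Template."""
    result = [r for r in rows if selection is None or r["customer"] in selection]
    ytable.extend(result)  # results accumulate across executions
    return result

source = [
    {"customer": "CUST1", "amount": 100},
    {"customer": "CUST3", "amount": 200},
]

execute(source)                       # 1st run: no Process Template -> all rows
execute(source, selection={"CUST1"})  # 2nd run: template P0001_ -> CUST1 only
execute(source, selection={"CUST3"})  # 3rd run: template P0002_ -> CUST3 only

# The Query function aggregates the key figures over the full YTable:
totals = defaultdict(int)
for row in ytable:
    totals[row["customer"]] += row["amount"]

print(dict(totals))  # each customer's amount is doubled across runs
```

Because the first run already wrote both customers' rows and the template runs each wrote one of them again, the aggregated amounts come out doubled, which is exactly the "duplicate data" effect seen in the Analyze Screen.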


You can check the YTable of a processing function by choosing the Function Attributes button and going to the Runtime Attributes tab.


It can also be validated in transaction SE16.

 

The same situation occurs when users execute the same functions again, this time in the My Activities application. Assume that two users are assigned to execute two Process Instances, PROC_01 and PROC_02.


When the first user executes the Process Instance “PROC_01” (Process0001) with Selection = CUST1


and launches the Query Function, notice that the key figures for CUST1 are again aggregated with the data from the execution made in the Modeling Environment and the execution in the My Activities application.


The second user then executes the Process Instance “PROC_02” (Process0002) with Selection = CUST3

and launches the Query Function as well.

The users might think, “These are not the data I need,” or, “What I want to see is the data from my own execution.” So, what can you do if a user only wants to see the data from a specific execution? The good news is that there is a way to handle these records.

 

The Tips and Tricks:

In your Modeling Environment, start by creating one Environment Field. In this example I created a field called ZE_PROCID (you can name it whatever you like). Choose your processing function, and in edit mode maintain the created field in the Input tab, populating it with :l_proc_id in the formula.


Make sure to save your changes and reactivate the processing function.

Process ID is a technical field, so it cannot be used in a function configuration or selection. With this new field, however, the goal is to have it populated with the Process ID used in the execution. Note that if you re-execute the processing function without a Process Template, this field will remain empty; if you re-execute it with a Process Template, you can expect the field to be populated with the Process ID of the Process Template you used.
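The tagging behavior can be sketched as follows. Again, this is a hedged conceptual sketch with hypothetical names (`ytable`, `execute`, the `proc_id` parameter); only the ZE_PROCID field name comes from the example above, and the real population is done by the :l_proc_id formula inside the product, not by Python.

```python
# Conceptual sketch: the new environment field (ZE_PROCID in this example)
# is filled with the Process ID of the template used for the run,
# mirroring what the :l_proc_id formula does. A run without a Process
# Template leaves the field empty.
ytable = []  # hypothetical stand-in for the generated YTable

def execute(rows, proc_id=""):
    """`proc_id` is empty when no Process Template is used."""
    for r in rows:
        tagged = dict(r)
        tagged["ZE_PROCID"] = proc_id  # what :l_proc_id provides at runtime
        ytable.append(tagged)

source = [{"customer": "CUST1", "amount": 100}]

execute(source)                    # no template -> ZE_PROCID stays empty
execute(source, proc_id="P0001_")  # template run -> field carries its Process ID

print([r["ZE_PROCID"] for r in ytable])  # ['', 'P0001_']
```

Once every stored row carries the Process ID of the run that produced it, each execution's rows become distinguishable even though they all live in the same YTable.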


Now, maintain the same field in the Query Function. Once it is maintained, save your changes and reactivate the Query Function.


In the Analyze Screen, you can now select the filter value for the Process ID that you want to see. In this example, we only want to see the data from the execution of Process Template P0001_.


Or you can remove the field from the Rows: toggle the Filter Bar, maintain the ZE_PROCID field, and declare the Process ID that you want to see. In this example, we only want to see the data from the execution of Process Template P0002_.


In the My Activities application, these steps benefit the users who execute the Process Instances. When either Process Instance PROC_01 or PROC_02 is executed, the users now have the freedom to filter for the specific Process Instance used in their own execution.


Either via the Filter option while the ZE_PROCID field is in Rows,

Or via the Filter Bar.

 

I hope the short backstory and the tips and tricks discussed in this blog not only helped you understand why there seems to be duplicate data in the Analyze Screen, but also equipped you with simple ways to manage the data effectively.


Thank you for taking the time to read this blog.
