We have now reached a stage where many lessons have been learned from SAP IBP implementations. With each successful, and not so successful, implementation, the project team goes through plenty of gyrations and emotional stress. This blog looks at various aspects of an implementation project so you can be ready for the challenges, and it offers solution approaches for some of the critical issues typically encountered. We will cover topics related to implementing IBP Time-Series-based Supply Planning.
Before beginning the implementation, it is very important to understand the company's background and its supply chain capabilities and limitations. The project team should be well versed in the company's products and services. Time spent onboarding the project team on the company's history, its position in the marketplace, its vision, and its growth opportunities goes a long way toward establishing the foundation for the project. Let's talk about the steps taken to demystify the challenges faced during each phase of the project.
Phase 0 – Project Prep
During project prep, the typical activities include preparing the project charter, assembling the project team, and executing the project kick-off. Although this phase typically lasts only 1-2 weeks, it is a very important phase and sets the pace of the project. Bringing together the right resources based on the scope of the project is critical for success, both on the client side and on the SI side. For an IBP implementation, a strong supply chain background and a solid understanding of the planning process help in designing a better architecture. Alongside the supply planning processes, the design and architecture of the data integration interfaces is key. All in all, it is recommended to have at least 2 onsite functional resources and 2 offshore resources, one of whom must be a fully dedicated CI-DS resource with an ABAP background. In my experience, this area is typically overlooked and is not staffed with the right skills.
Baselining the project KPIs is also important to measure the success of the project. The project objectives and goals should be specific and measurable. For example, one of our project objectives was to reduce total freight transportation costs (customer transports + interplant movements) by 10%.
The project kick-off should be well prepared and should cover the high-level project sponsorship commitment, the project timeline and scope, and resources from business and IT with the authority to recommend and implement process and data changes. A RACI chart enables a proper distribution of responsibilities and avoids discrepancies going forward.
One more important element to take note of is a strong project management resource, who is responsible for tracking all project deliverables and acts as a liaison between the business and the project team. He or she should be a well-respected, authoritative person with a good understanding of the business process. Scheduling meetings so that all the required resources are available is key to a good start. And of course, meeting etiquette is always warranted.
All in all, it sounds like we are off to a good start!
Phase 1 – Explore and Blueprint
This phase of the project starts with the blueprinting sessions. For IBP Supply, SAP provides pre-delivered content to kick-start the project: process flows, business process design documents, test scripts, data templates, and more. Refer to rapid.sap.com for access to the Rapid Deployment Solution (RDS) content.
It is always better to leverage the Business Process Hierarchy (BPH) to streamline the process, which gives the business a framework based on the SCOR model. During the blueprint sessions, a live system demo of the process, alongside the process flows, helps the business better visualize the to-be process and capture the business requirements.
During the blueprint sessions, a very important topic came up: Aggregated Supply Planning. Because data consistency at the most granular level was poor and difficult to maintain, the business wanted to build the process on aggregated attributes of Product and Customer. We decided to use Product Family and Customer Sales District as the aggregate-level planning attributes. This was a key decision, as IBP Demand Planning and Inventory Optimization were already implemented, with data available at the most granular level. The challenge was now to aggregate all the supply planning data (capacity requirements, customer transportation lanes, and location transportation lanes, to name a few) along with initial inventory, open sales orders, open process orders, and open STOs. All these data elements had to be aggregated to the Product Family and Customer Sales District level.
Pros and cons were discussed, and SAP's product development team was engaged to find out whether Aggregated Supply Planning is on the roadmap. SAP confirmed that it is a roadmap item but will not be available for another 2-3 years, which puts it around 2025 at the earliest.
Before the start of this project, a PoC was conducted to confirm the viability and scalability of the design approach.
The following options were discussed, and the decision was made to go with Option 1. During the final presentation, it was decided that while some business units would leverage the aggregated supply data model, others would go with the standard supply planning functionality at the most granular level.
Hence, due diligence was required to make sure that both data models, aggregate and granular, work in tandem and do not conflict with each other.
Custom Aggregated Model vs. Standard SAP Aggregated Model
With the business requirements captured during the blueprint sessions, a detailed solution approach was documented and a fit/gap analysis was delivered. It is very important that the project team understands the detailed solution approach and agrees on the high-level design. For example, the business required modeling of storage and transportation capacity constraints.
During this phase, the baseline configuration of the new planning area was also completed.
Phase 2: Realization Build
With Option 1, a separate data model with a separate Aggregated Planning Area was modeled, where
- PRDID = Product Family
- CUSTID = Customer Sales District
Each SKU was assigned to exactly one Product Family. We decided to use the External Material Group field on the Basic Data view of the material master to maintain the SKU-to-family relationship. This allowed the CI-DS interfaces to leverage the External Material Group field to aggregate the data in ECC and bring it over to IBP.
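The SKU-to-family roll-up can be sketched as a simple lookup-and-sum, as in the Python below. The field values and quantities are illustrative, not actual ECC or IBP structures.

```python
# Hypothetical sketch: rolling SKU-level quantities up to Product Family,
# mirroring the SKU -> External Material Group (Product Family) mapping.
from collections import defaultdict

# SKU -> Product Family (maintained as External Material Group in ECC)
sku_to_family = {
    "SKU-1001": "FAM-A",
    "SKU-1002": "FAM-A",
    "SKU-2001": "FAM-B",
}

def aggregate_by_family(sku_qty: dict) -> dict:
    """Sum SKU-level quantities at the Product Family level."""
    family_qty = defaultdict(float)
    for sku, qty in sku_qty.items():
        family_qty[sku_to_family[sku]] += qty
    return dict(family_qty)

demand = {"SKU-1001": 100.0, "SKU-1002": 50.0, "SKU-2001": 80.0}
print(aggregate_by_family(demand))  # {'FAM-A': 150.0, 'FAM-B': 80.0}
```

Because each SKU belongs to exactly one family, the roll-up is lossless for additive quantities; non-additive parameters need explicit aggregation rules, as discussed later.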
Additionally, we built Z-tables to store all the cost parameters at the aggregate level:
- ZAGG_LOC_PROD – Stores Inventory Holding and Inventory Target Violation Cost Rates, along with Sub-Network ID
- ZAGG_LOC_SRC – All T-Lanes with the Location Transportation Cost Rate – Variable and Fixed
- ZAGG_CUST_PROD – Customer Product relationship with Non-Delivery and Late-Delivery Cost Rates
- ZAGG_CUST_SRC – Customer Sourcing T-Lanes with Customer Transportation Cost Rates – Variable and Fixed
- ZAGG_PRD_SRC – Production Source Header and Production Source Resource data along with Capacity Consumption Rates, Production Cost Rates – Variable and Fixed
New CI-DS interfaces were designed and built to populate all the master data and key figure data into IBP. This is where a strong CI-DS resource with an excellent ABAP skill set played a key role, as aggregating the data required some complex logic. For each data object, a mapping document was created and the aggregation logic was outlined.
Aggregation logic was needed for the master data and transaction data objects: 18 new interfaces (at the aggregated PH2/PH3/PH4 Product ID and the aggregated Customer Sales District Customer ID) were required for the existing data objects, multiplied by 2 or 3 aggregation levels of the product hierarchy, resulting in 36 to 54 new interfaces:
- Aggregation Logic for all planning relevant parameters, such as UOM Conversion Factor, Min/Max Lot Sizes, Lead Times
- Aggregation Logic for Sourcing Ratios – Customer Sourcing, Location Sourcing, Production Sourcing
- Aggregation Logic for Capacity Consumption Rate
- Aggregation Logic for Demand at the new Aggregated Product ID and Aggregated Customer ID, to be implemented in the release to Supply Planning
- Disaggregation Logic for the Optimizer Output Key Figures, to disaggregate the Production Plan, Logistics Plan and Procurement Plan from Aggregated Product ID to individual SKU
- Aggregation Logic required for Inventory, Open Sales Orders, Process Orders, Purchase Orders and Stock Transport Orders
- Aggregation Logic to be defined for Unit Cost, Planned Price, Inventory Holding Cost
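To make the parameter aggregation concrete, here is a small Python sketch. The weighting and min/max rules below are illustrative assumptions, not SAP-delivered logic; a real project would agree on each rule per parameter with the business.

```python
# Illustrative aggregation of SKU-level planning parameters to Product
# Family level. All figures and rules are example choices (assumptions).

def weighted_avg(values, weights):
    """Demand-weighted average of a parameter across member SKUs."""
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

# SKU-level parameters for one Product Family (hypothetical figures):
# (demand_qty, capacity_consumption_rate, min_lot_size, lead_time_days)
skus = [
    (100.0, 0.50, 10, 5),
    (300.0, 0.80, 20, 7),
]

demand = [s[0] for s in skus]

family_params = {
    # Capacity consumption: demand-weighted average across member SKUs
    "capacity_consumption_rate": weighted_avg([s[1] for s in skus], demand),
    # Min lot size: smallest member value, so the aggregate stays feasible
    "min_lot_size": min(s[2] for s in skus),
    # Lead time: longest member value, a conservative choice
    "lead_time_days": max(s[3] for s in skus),
}
print(family_params)
```

Weighted averaging by demand share keeps high-volume SKUs dominant in the family-level rate, which is usually what the capacity model needs.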
With the build phase complete, the design is solidified, the data is interfaced, and the testing phase is ready to start. Sufficient time should be set aside for validating the design and functionality. We had planned several design validation sessions with the business, which helped identify some anomalies and resolve them up front. The typical timeframe for the build phase is approximately 6 to 8 weeks, including the CI-DS interfaces.
Phase 3: Realization – Test
Testing is typically conducted in 2 cycles, followed by 1 round of user acceptance testing that includes end user training. The main challenge during the test cycles is the availability of business resources, as they must support the testing on top of their day jobs. We addressed this by preparing the test plan and schedule at least 3 weeks in advance and blocking the calendars of the participating team members. Prepping the system landscape with the latest, up-to-date data should not be overlooked. Having a system that is close to production-like helps the business community reflect on the challenges they face on a day-to-day basis and on the opportunity to overcome them, or at least identify a workaround.
Almost 40-50% of the time the business spends conducting the S&OP cycle goes into manually assimilating data in Excel to come up with a feasible, executable plan. This implementation presented the opportunity to bring everything into one data model and automate the planning process as much as possible. During testing, the first major issue was bringing the demand data from the SKU level in one planning area to the aggregate level in another planning area. We tried multiple approaches using copy operators between planning areas, but the approach that worked best was to create 2 separate CI-DS tasks: 1. download the demand data at the aggregate level into a CSV file, and 2. load the CSV file into the target planning area at the aggregate level. This 2-step approach addressed the missing-planning-objects issue, since the copy operator has limitations in creating planning objects when executed between planning areas. It also allowed us to download the demand data from the production tenant, where DP and IO were already live, and load it into the test tenant for both cycles.
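The 2-step pattern can be sketched as follows. The file layout, field names, and master data mappings are illustrative assumptions; in the actual flow, both steps run as CI-DS tasks rather than Python.

```python
# Hypothetical sketch of the 2-step transfer: (1) export demand aggregated
# to Product Family / Customer Sales District into a CSV, (2) load the CSV
# into the target planning area (here simulated by parsing it back).
import csv
import io
from collections import defaultdict

sku_to_family = {"SKU-1": "FAM-A", "SKU-2": "FAM-A"}
customer_to_district = {"CUST-9": "DIST-NORTH"}

# (sku, customer, period, qty) at the granular level
sku_demand = [
    ("SKU-1", "CUST-9", "2023-01", 40.0),
    ("SKU-2", "CUST-9", "2023-01", 60.0),
]

# Step 1: aggregate to family/district level and write the CSV
agg = defaultdict(float)
for sku, cust, period, qty in sku_demand:
    agg[(sku_to_family[sku], customer_to_district[cust], period)] += qty

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["PRDID", "CUSTID", "PERIOD", "DEMANDQTY"])
for (family, district, period), qty in sorted(agg.items()):
    writer.writerow([family, district, period, qty])

# Step 2: load the CSV into the target planning area at the aggregate level
rows = list(csv.DictReader(io.StringIO(buf.getvalue())))
print(rows[0])  # first aggregated record
```

Because the file already carries the aggregate-level PRDID and CUSTID, the load step can create the planning objects it needs, which is exactly the limitation the cross-planning-area copy operator ran into.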
The timeframe for SIT1 and SIT2 had to be extended to make sure the entire end-to-end process worked well. Although we had created individual test scripts to test the stand-alone functionality, cycle 2 was spent on string testing and simulating a production-like environment.
It was decided early on, during the PoC, that the Time-Series-based Supply Optimizer with Delivery Maximization as the objective function worked well for the business. Delivery Maximization was chosen (instead of Profit Maximization) in order to satisfy all demand rather than prioritize it by customer tier. All customers were treated the same, and from a supply planning perspective no demand should remain unsatisfied, within the realm of the capacity constraints of course.
Another key aspect addressed during the implementation was using Inventory Target as a static quantity instead of Target Days of Supply. We found that fair share would not work in a supply-constrained situation: it would not distribute the available supply across locations, and while it would satisfy the Inventory Target for one product-location, it would leave another product-location with zero projected stock. Since Inventory Target Violation costs only work with the Inventory Target populated, replacing Target Days of Supply with a static Inventory Target resulted in much better achievement and management of inventory targets.
During cycle 2 testing, we also tested the real-life simulation functionalities. The business decided to lock down the plan in the Baseline version and use the Optimize version to let the optimizer propose a plan based on the constraints. The Baseline version was locked by populating the Min/Max Customer Source Quota and restricting supply to the preferred (not necessarily optimal) sourcing location. In the Optimize version, the business removed all such limitations and let the optimizer decide purely based on cost optimization and capacity constraints. The capacity constraints modeled were production, storage, and transportation resources.
In the old APO days, we always said the optimizer was a black box and no one knew how the plan was derived. With IBP and good documentation, understanding the optimizer's behavior has become easier, which helps instill confidence. That said, it is still very important to understand the optimizer's behavior, and we spent considerable time understanding why stock transfers were generated from one location versus another. While Delivery Maximization works on total cost minimization, the team had to work through the sourcing decisions in detail. Because the non-delivery cost is raised to a very high level under Delivery Maximization, the customer transportation, location transportation, manufacturing, and inventory holding costs play the significant role in deciding the optimized plan. We used actual freight costs based on the mode of transport, trucking capacity, and the distance travelled between the sourcing location and the destination, i.e., a customer or a plant within the supply chain network.
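A toy illustration of that sourcing logic, with invented cost figures: once non-delivery is priced out of consideration, the optimizer effectively picks the source with the lowest remaining total cost per unit.

```python
# Invented per-unit cost components for two candidate sourcing locations.
# With non-delivery cost set prohibitively high, the choice reduces to
# minimizing the sum of the remaining cost components.
sources = {
    "Plant-A": {"transport": 4.0, "manufacturing": 10.0, "holding": 1.0},
    "Plant-B": {"transport": 2.5, "manufacturing": 11.0, "holding": 1.0},
}

def total_unit_cost(costs):
    return sum(costs.values())

best = min(sources, key=lambda s: total_unit_cost(sources[s]))
print(best, total_unit_cost(sources[best]))  # Plant-B 14.5
```

Here Plant-B wins despite a higher manufacturing cost, because its freight advantage outweighs it. This is the kind of trade-off the team had to trace for each surprising stock transfer.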
The optimizer's constrained supply plan from Aggregated Supply Planning had to be disaggregated back to individual SKUs. The constrained supply plan was then published to ECC as Planned Independent Requirements (PIRs). The disaggregation logic was based on the original Demand Planning quantity. The following steps explain the logic:
- Calculate the Demand Planning Ratio for each SKU and Customer Sales District
DPQRATIO@MTHPRODCUSTSALESDIST = DPQ@MTHPRODCUSTSALESDIST / DPQ@MTHPRDFMLCUSTSALESDIST
- Calculate the Constrained Supply Plan by disaggregating from Product Family to SKU based on DPQ Ratio
CONSTRAINEDSUPPLYPLAN@MTHPRODCUSTSALESDIST = DEPENDENTCUSTOMERDEMANDDS@MTHPRDFMLCUSTSALESDIST * DPQRATIO@MTHPRODCUSTSALESDIST
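The two steps above can be sketched in Python with illustrative numbers (the quantities and SKU names are invented):

```python
# Disaggregation sketch: compute each SKU's share of its Product Family's
# Demand Planning Qty (DPQ ratio) and apply it to the family-level
# constrained supply plan from the optimizer.
dpq = {  # SKU-level DPQ within one family / sales district / month
    "SKU-1": 75.0,
    "SKU-2": 25.0,
}
family_dpq = sum(dpq.values())  # DPQ at the Product Family level

constrained_supply_family = 120.0  # optimizer output at family level

# DPQRATIO(SKU) = DPQ(SKU) / DPQ(Product Family)
ratios = {sku: q / family_dpq for sku, q in dpq.items()}

# CONSTRAINEDSUPPLYPLAN(SKU) = family-level plan * DPQRATIO(SKU)
sku_plan = {sku: constrained_supply_family * r for sku, r in ratios.items()}
print(sku_plan)  # {'SKU-1': 90.0, 'SKU-2': 30.0}
```

The ratios sum to 1 by construction, so the SKU-level plans always add back up to the family-level constrained supply.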
We had issues related to planning levels and went through several iterations to arrive at the correct disaggregation values.
During testing, we were also able to engage the SAP MaxAttention team, which helped us resolve the critical issues. SAP's On Demand Experts provided insights and resolutions every step of the way. Thanks to SAP's Quality Assurance team for being with us on this entire implementation journey and helping the project team overcome the documented issues.
UAT was planned and executed along with the end user training. During the training, the users were asked to execute the process and assess their new day-in-the-life-of.
After almost 8 weeks of testing and resolving all the issues, the go/no-go decision was made to conduct the technical cutover, followed by the business cutover: running the end-to-end S&OP cycle on the new application. While the business will operate both the legacy and the IBP S&OP process in parallel for a couple of months, the new application will enable optimized decisions resulting in a feasible, executable production plan, logistics plan, and procurement plan.
Phase 4: Cutover and Go-Live
With a detailed cutover plan in place, with specific dates, times, and assigned responsibilities, the team executed the plan seamlessly. It is important to go through the plan multiple times with the entire team and adjust it in the week before execution.
The go-live was confirmed by executing the optimizer run in the production tenant. While the business team analyzes the results, hypercare support will continue, helping the business as it walks the new path with the new application.