Replenishment quantities, production schedules, and truck routes – all of these are only as good as the master data that drives them. Find out what causes low-quality master data and discover a three-point approach that can help your business avoid it.
Every decision fundamentally relies on the trustworthiness of master data. It doesn't matter if a business uses the best processes and the most powerful analytics in the world; nothing a company does will fully succeed if the input data is flawed.
This holds especially true for supply chain decision-making, where the goal often centers on finding optimal solutions. Optimization models, especially those narrow in scope, will fail if the underlying master data is not sufficiently reliable. Therefore, a certain threshold of data quality needs to be reached (see figure below).
Figure: The Link Between Data Quality and Decision Quality
Recent advancements in artificial intelligence and machine learning make high-quality master data even more vital. As businesses move toward no-touch, lights-out, automated supply chains, they must assess critical capabilities – such as their processes, governance, IT infrastructure, and data quality – to help ensure future success.
Pinpointing the cause of low-quality master data
Low master data quality is a common issue in companies worldwide. Problem areas often include duplicated entries, outdated data from legacy systems, and incorrect human input (see table below).
Table: Common Problem Areas for Master Data
Although these errors can be fixed, they create situations that threaten the short- and long-term effectiveness of the supply chain. Supply chain managers can resolve inventory shortages, rising shipment costs, and declining document quality over time. However, a business cannot thrive when employees do not trust data-driven projects and automated supply chain decisions, and customer sentiment declines with every order fulfillment error and delayed delivery.
Derisking master data quality
To evaluate various quality aspects, master data is typically characterized by four dimensions (illustrated in the sketch after this list):
- Completeness: All required values are recorded
- Consistency: Data is represented the same way in all cases
- Accuracy: The recorded value matches the actual value
- Timeliness: The recorded value is always up to date
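As a rough illustration, the sketch below scores a small material master extract against these four dimensions. It is a hypothetical example: the field names (material_id, weight_kg, base_unit, last_verified), the reference values, and the one-year timeliness threshold are assumptions chosen for illustration, not part of any particular system's data model.

```python
from datetime import date, timedelta

# Hypothetical material master extract; field names are illustrative only.
records = [
    {"material_id": "M-100", "weight_kg": 2.5, "base_unit": "KG",
     "last_verified": date(2024, 3, 1)},
    {"material_id": "M-101", "weight_kg": None, "base_unit": "kg",
     "last_verified": date(2021, 6, 15)},
]

required_fields = ["material_id", "weight_kg", "base_unit", "last_verified"]
allowed_units = {"KG", "EA", "L"}      # consistency: one agreed-upon representation
max_age = timedelta(days=365)          # timeliness: verified within the last year
reference_weights = {"M-100": 2.5}     # accuracy needs a trusted reference value

def completeness(rec):
    # all required values are recorded
    return all(rec.get(f) is not None for f in required_fields)

def consistency(rec):
    # the representation matches the agreed standard
    return rec.get("base_unit") in allowed_units

def accuracy(rec):
    # the recorded value matches a trusted reference (e.g. a re-measurement)
    ref = reference_weights.get(rec["material_id"])
    return ref is not None and rec.get("weight_kg") == ref

def timeliness(rec, today=date(2024, 6, 1)):
    # the record was verified recently enough
    verified = rec.get("last_verified")
    return verified is not None and today - verified <= max_age

for rec in records:
    print(rec["material_id"], {
        "complete": completeness(rec),
        "consistent": consistency(rec),
        "accurate": accuracy(rec),
        "timely": timeliness(rec),
    })
```

In this toy run, the first record passes all four checks while the second fails each of them, which is the kind of per-dimension visibility that makes quality problems actionable.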
Each of these dimensions can be assessed and addressed with a framework involving three focus areas:
- Communication of master data relevance
The goal is to raise awareness of master data quality throughout the organization. Top management should understand the effort required to improve master data and prioritize these activities accordingly. Master data improvement should also be viewed as a task in its own right rather than a necessary burden.
- Organizational setup and governance
Based on our research, a high centralization depth is preferred in most cases and is a vital lever for high-quality master data. With the support of master data management experts, this capability can be built by defining clear roles, establishing standards and procedures, evaluating data quality, and coordinating reporting across the organization.
- Processes and incentives
When implementing master data management processes, we distinguish two perspectives. In the long term, the objective is to fully automate master data creation and maintenance. In the short term, organizations aim to establish processes that simplify work for humans as much as possible and minimize errors through better technologies, such as automated validation at the point of data entry (see the sketch below). In other words, introducing and optimizing processes that create and maintain master data efficiently is at the core of master data activities.
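One way the short-term perspective can play out is a simple validation step before a new record is saved. The sketch below is a hypothetical example: the field names, allowed units, and rules are assumptions made for illustration, and a real implementation would rely on the validation facilities of the master data system itself.

```python
# Hypothetical validation applied before a new material record is saved,
# so obvious errors never enter the system. Field names and rules are
# illustrative assumptions, not a specific product's checks.

def validate_new_material(rec, existing_ids):
    errors = []
    if not rec.get("material_id"):
        errors.append("material_id is missing")
    elif rec["material_id"] in existing_ids:
        errors.append("duplicate material_id")                 # prevent duplicated entries
    if rec.get("weight_kg") is None or rec["weight_kg"] <= 0:
        errors.append("weight_kg must be a positive number")   # catch incorrect input
    if rec.get("base_unit") not in {"KG", "EA", "L"}:
        errors.append("base_unit must be one of KG, EA, L")    # enforce consistency
    return errors

# Usage: reject or flag the record before saving it.
new_record = {"material_id": "M-100", "weight_kg": -1, "base_unit": "kg"}
print(validate_new_material(new_record, existing_ids={"M-100"}))
# -> ['duplicate material_id', 'weight_kg must be a positive number',
#     'base_unit must be one of KG, EA, L']
```

Checks like these are cheap to add at the point of creation and far cheaper than cleansing the same errors downstream.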
Establishing high-quality master data as the norm
The importance of derisking master data is evident for supply chains: a well-developed, fully functioning master data management strategy is critical in today’s ever-evolving marketplace. Communication, organizational setup and governance, and processes and incentives all play a vital role in ensuring better-quality insights as the volume of master data grows.
For the supply chain, such an effort accelerates sound decision-making and order fulfillment in ways that satisfy both the customer and the business.
Please share feedback or thoughts in a comment or ask questions in the Q&A tag area here:
https://answers.sap.com/tags/67837800100800004488.
I worked on this project with my colleagues Linda Ackermann, Kai Hoberg, and Jörg Wilke.
Niklas Poguntke is a business enterprise associate consultant for Supply Chain Management at SAP.
Dr. Jörg Wilke is head of the SAP Business Consulting practice for supply chain management.
Professor Dr. Kai Hoberg is a professor of Supply Chain and Operations Strategy at Kühne Logistics University (KLU) in Hamburg.
Linda Ackermann is a product manager for SAP S/4HANA at SAP, focusing on the discrete manufacturing industry.