A number of ERP use cases are concerned with the resolution of support queries (e.g., reactive maintenance notifications in asset management, IT support tickets). In the absence of intelligent assistance, a support technician must manually go through each new support query and rely on existing business know-how to trigger downstream solution activities (e.g., carrying out a diagnostic test, ordering a replacement for a broken part). However, this fully manual approach to support query resolution is error-prone and hard to scale. Staffing support technicians is expensive, and these technicians may not always have the business know-how to accurately address the issues raised in the query.
In reactive maintenance scenarios, a maintenance notification is created by a user to report a breakdown of a piece of equipment at a manufacturing plant. The user enters data in the notification to describe the nature of the problem (e.g., which equipment is affected, its location, the nature of the breakdown). For a given notification, a maintenance order can then be created to resolve the problem by carrying out a set of tasks (e.g., checking the equipment, removing the damaged piece, ordering a replacement). These steps can vary depending on the exact nature of the problem. AI can potentially assist the support technician in matching incoming support queries to appropriate downstream solution steps.
In the following, a method of AI-assisted support query resolution using intent induction and matching is described, which can be implemented using SAP's suite of AI services.
In AI use cases where queries are matched with solutions, an intent reflects what the user wants to achieve by raising and resolving the query. The intent may be latent, i.e., not directly mentioned in the raw query. In general, a query may be associated with one or more intents, depending on how specific the query is. Moreover, one intent may be satisfied by more than one solution.
Example: an illustrative query such as "Pump in Hall 3 is making a loud grinding noise" may carry the latent intents "diagnose mechanical wear" and "restore pump operation"; the latter intent could be satisfied by more than one solution, e.g., repairing the worn bearing or replacing the pump entirely.
Large language models (LLMs), such as those available via SAP's Generative AI Hub, can parse the semantic meaning of data and be used to induce intents from queries.
A historical database of queries and their associated solutions is a key prerequisite to kickstart the LLM-based method of intent induction. To generate a corresponding intent database, an LLM is run over each query in the historical database to induce one or more intents for it.
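As a minimal Python sketch of this step: the function and record names below are hypothetical, and `induce_intents` is a rule-based stand-in for an actual LLM call (e.g., via a prompt sent through SAP's Generative AI Hub).

```python
def induce_intents(query_text: str) -> list[str]:
    """Placeholder for an LLM call; a production system would prompt a
    model, e.g. 'List the latent intents behind this maintenance query'.
    Here a trivial keyword rule stands in so the sketch is runnable."""
    intents = []
    text = query_text.lower()
    if "noise" in text or "grinding" in text:
        intents.append("diagnose mechanical wear")
    if "leak" in text:
        intents.append("stop fluid leak")
    if not intents:
        intents.append("restore equipment operation")
    return intents

def build_intent_database(historical: list[dict]) -> list[dict]:
    """Run intent induction over each historical query and store the
    induced intents alongside the recorded solution steps."""
    return [
        {
            "query": rec["query"],
            "intents": induce_intents(rec["query"]),
            "solution": rec["solution"],
        }
        for rec in historical
    ]

# Toy historical database of resolved queries
history = [
    {"query": "Pump P-101 making grinding noise",
     "solution": ["inspect bearing", "order replacement"]},
    {"query": "Coolant leak at press line 2",
     "solution": ["isolate line", "replace gasket"]},
]
intent_db = build_intent_database(history)
```

The one-time batch run over the historical database only needs to happen once; afterwards, only new queries require an LLM call.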
Now, whenever a new query comes in, an LLM is used to induce its intents (e.g., based on the query text). The intents of the new query are compared with the intents stored in the historical database to identify historical intents that are most similar. Solution steps related to these identified intents can then be provided as recommendations to the support technician. Finally, the intents and solutions of the new query are added to the historical database to guide future recommendations; to avoid excessive redundant data storage, this is only done if the information is not already captured in the database.
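The matching step above can be sketched as follows. All names are hypothetical; for simplicity, `intent_similarity` uses plain string similarity from the standard library, whereas a production system would more likely compare embedding vectors (e.g., cosine similarity over text embeddings).

```python
from difflib import SequenceMatcher

def intent_similarity(a: str, b: str) -> float:
    """Stand-in similarity measure; real systems would use
    embedding-based cosine similarity instead."""
    return SequenceMatcher(None, a, b).ratio()

def recommend_solutions(new_intents: list[str],
                        intent_db: list[dict],
                        threshold: float = 0.8) -> list[list[str]]:
    """Compare the new query's intents against each historical record's
    intents and return the solutions of the best-matching records."""
    scored = []
    for record in intent_db:
        score = max(
            intent_similarity(ni, hi)
            for ni in new_intents
            for hi in record["intents"]
        )
        if score >= threshold:
            scored.append((score, record["solution"]))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [solution for _, solution in scored]

intent_db = [
    {"intents": ["diagnose mechanical wear"], "solution": ["inspect bearing"]},
    {"intents": ["stop fluid leak"], "solution": ["replace gasket"]},
]
recs = recommend_solutions(["diagnose mechanical wear"], intent_db)
```

The threshold keeps weak matches out of the recommendation list; tuning it trades recall against precision of the suggestions shown to the technician.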
Accurate recommendations can improve the speed and accuracy of the support technician's work. In particular, rather than comparing the new query with the historical queries directly, the intent-based method checks similarity at the level of intents, which can improve scoring accuracy (e.g., queries that look superficially different may imply similar intents).
The intent-based approach can be optimized in several ways. First, the accuracy of solution recommendations can be improved over time by learning from the interactions with the support technicians during productive usage. In particular, whether the recommendations are accepted or rejected can provide a valuable signal for improving the relevance of future recommendations.
Second, components of the LLM-based intent induction architecture can be optimized, e.g., fine-tuning the prompt, using differentiated prompt templates per use case, and employing Retrieval Augmented Generation (RAG) to parse company-specific codes, shorthand and abbreviations. Translating all text data into a baseline English version can also simplify and improve the accuracy of intent induction and matching.
Third, as historical databases grow large, the AI-based matching can be optimized to reduce latency of recommendation retrieval. E.g., a cache of the most common intents can be separately maintained and checked first for matches before traversing the full database of historical intents. Furthermore, a map-reduce framework can be used to parallelize the search of the historical intent database.
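The cache idea can be sketched as a small frequency-based lookup layer that is consulted before the full database scan. This is a minimal illustration under assumed names, not a prescribed design.

```python
class IntentCache:
    """Keep the most frequently hit intents in a small in-memory map
    that is checked before traversing the full historical database."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.entries = {}  # intent -> (hit_count, solution)

    def lookup(self, intent: str):
        """Return the cached solution and bump its hit count,
        or None on a cache miss (caller falls back to a full scan)."""
        if intent in self.entries:
            hits, solution = self.entries[intent]
            self.entries[intent] = (hits + 1, solution)
            return solution
        return None

    def add(self, intent: str, solution) -> None:
        """Insert a new entry, evicting the least-hit one when full."""
        if len(self.entries) >= self.capacity:
            coldest = min(self.entries, key=lambda k: self.entries[k][0])
            del self.entries[coldest]
        self.entries[intent] = (1, solution)

cache = IntentCache(capacity=2)
cache.add("diagnose mechanical wear", ["inspect bearing"])
hit = cache.lookup("diagnose mechanical wear")
miss = cache.lookup("stop fluid leak")  # would trigger a full DB scan
```

A map-reduce style parallel scan over database shards would then only be needed on a cache miss, keeping latency low for the common cases.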
Customers can consider a few different options to enhance their existing SAP workflows with an approach to AI-assisted support query resolution such as the one described above. E.g., customers may choose to implement this in-house or with the help of external partners/consultants, which can allow for customization but also incur related development and maintenance costs.
A better option may be to request AI-assisted support query resolution as a new feature in relevant upcoming SAP product releases. SAP can build the new feature in a way that ensures high product quality and reduces the total cost of ownership (TCO) for customers. Simply give this blog post a like/kudos to indicate your interest in seeing the AI feature described above in relevant upcoming SAP product releases, and feel free to leave a comment to share any further thoughts.