SAP Datasphere Intelligent Lookup Series – Up for...
Technology Blogs by SAP
The SAP Datasphere Intelligent Lookup Series is intended to provide useful guidance on how to use Intelligent Lookup to leverage the potential of your data landscape. To design business processes efficiently and satisfy customers, high-quality master data is essential. Often, this quality is not achieved in-house. Fortunately, there are external providers who make optimized master data (e.g. organizational data and address data) available. To use this data successfully, it has to be linked with internal data. SAP Datasphere provides Intelligent Lookup, a solution that matches external data with internal data that semantically belongs together, even when the two share no common key.
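Intelligent Lookup itself is configured in the SAP Datasphere UI, but the core idea it implements, matching records without a shared key via fuzzy comparison, can be sketched in plain Python. The data and the `best_match` helper below are hypothetical and purely illustrative:

```python
from difflib import SequenceMatcher

# Hypothetical internal records with inconsistently spelled names
internal = ["Miller Motors GmbH", "Auto Schmidt & Co.", "CarPoint Berlin"]

# Hypothetical external master data with cleaned-up names
external = ["Mueller Motors GmbH", "Auto Schmidt und Co", "CarPoint Berlin GmbH"]

def best_match(name, candidates, threshold=0.8):
    """Return the candidate most similar to `name`, or None if every
    candidate scores below the threshold."""
    scored = [(SequenceMatcher(None, name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

for record in internal:
    print(record, "->", best_match(record, external))
```

A rule-based Intelligent Lookup works on the same principle, only with configurable match rules, confidence thresholds, and an interactive review of unmatched records instead of a hard-coded similarity function.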
This article is the fifth in the blog post series. Its purpose is to introduce the Intelligent Lookup Data Challenge, which helps you get familiar with Intelligent Lookup in a self-paced way. By design, the challenge is not limited to Intelligent Lookup itself; it also gives you the option to learn surrounding SAP Datasphere functionality such as data integration and data modelling, as well as SAP Analytics Cloud.
Please note: All data shown in this blogpost is sample data.
What is the data challenge about?
The training comprises four challenges with four datasets. The data is sample data consisting of German and American car sales figures. After each slide that explains a challenge, you will find several slides with hints that guide you through it if you need help. Obviously, the fewer hints you need, the better.
After the hints, you will find the key solution steps and, finally, a sample solution (there is more than one way to solve the challenges).
Why is a data challenge a good format?
The Intelligent Lookup Data Challenge is well suited to getting familiar with intelligent lookups and with SAP Datasphere in general. It offers a holistic, hands-on scenario in which you tackle a real-world problem using sample data and data from the SAP Datasphere Data Marketplace. The challenge is self-paced: you choose when and how long you want to work on it. It is a seamless journey that starts with importing data into SAP Datasphere, continues with building intelligent lookups and data flows, and ends with visualizing the data in SAP Analytics Cloud.
The learnings from the challenge carry over to future projects. One example is the set of naming conventions introduced in the sample solution, which help you structure tables, views and intelligent lookups while setting them up.
You can read here about a hackathon, based on the Intelligent Lookup Data Challenge, that we conducted together with Mercedes-Benz Group AG.
What does the challenge look like?
As described above, the Intelligent Lookup Data Challenge consists of four sub-challenges with corresponding datasets. In each challenge you have to answer a business question using an intelligent lookup. As an example, this is the question for challenge number 2:
"Which car models have been added to the reporting in Q2’22 in Germany?"
To answer this question, you first have to get familiar with the available data and choose suitable datasets. If you don’t know how to approach the question, there are always hints and a sample solution to help you. It is of course more fun to try it yourself first 😊.
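In SAP Datasphere, the matching is done by the intelligent lookup and the question itself is then answered in a view or data flow on top of it. The underlying set logic, models that appear in Q2'22 but in no earlier quarter, can be sketched with a few hypothetical sales rows:

```python
# Hypothetical German car sales rows: (model, quarter).
# The "YYYY-Qn" format sorts chronologically as a plain string.
sales_de = [
    ("Model A", "2021-Q4"),
    ("Model B", "2022-Q1"),
    ("Model A", "2022-Q2"),
    ("Model C", "2022-Q2"),
]

q2_models = {model for model, quarter in sales_de if quarter == "2022-Q2"}
earlier_models = {model for model, quarter in sales_de if quarter < "2022-Q2"}

# Models appearing in the reporting for the first time in Q2'22
new_in_q2 = q2_models - earlier_models
print(sorted(new_in_q2))  # -> ['Model C']
```

The sample solution in the challenge materials may structure this differently; the sketch only shows the set difference that any correct answer has to compute.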
How can I get started?
Do you want to take part in the Intelligent Lookup Data Challenge? There are only two things for you to do:
Get a trial SAP Datasphere tenant or the free tier tenant as part of the SAP Business Technology Platform
Thank you for visiting this blog post. I hope you found it helpful. Feel free to use the comment section below for any feedback or further questions. If you want to learn more about Intelligent Lookup, I recommend going through the other posts in this series. A big thank you goes to my colleagues Tim, Florian, Josef and Richard for collaborating on the blog post series.