As we approach the close of the first quarter, we're excited to announce the availability of the "Vector Engine" feature in SAP HANA Cloud as part of the QRC1 Release. This new feature is set to enhance the multi-model capabilities of SAP HANA Cloud by incorporating vector database capabilities for storing embeddings. Embeddings are instrumental in transforming high-dimensional data into a more manageable, lower-dimensional format, thereby simplifying the understanding of complex and unstructured data such as text, images, or user behavior.
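To make the idea concrete, here is a toy illustration (not the HANA vector engine itself, and the four-dimensional vectors are invented for demonstration): once texts are represented as embedding vectors, a simple cosine-similarity comparison is enough to decide which stored text is semantically closest to a query.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); values near 1.0 mean
    # the two vectors point in nearly the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical tiny embeddings for three short texts.
query = [0.9, 0.1, 0.0, 0.2]
service_request = [0.8, 0.2, 0.1, 0.3]  # semantically close to the query
feedback = [0.1, 0.9, 0.8, 0.0]         # semantically distant

print(cosine_similarity(query, service_request) >
      cosine_similarity(query, feedback))  # prints True
```

Real embeddings from a foundation model have hundreds or thousands of dimensions, but the principle is the same: the vector engine stores these vectors and answers "which stored texts are most similar to this one" efficiently.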
Moreover, the integration of the SAP HANA Cloud Vector Engine with the Generative AI Hub for accessing Large Language Models (LLMs) will empower our customers to develop robust AI-based applications and reporting solutions, all while complying with our governance and ethical framework.
Numerous blogs on our SAP Community/Medium delve into the concepts of the SAP HANA Cloud Vector Engine. For a comprehensive understanding of the basics and features of SAP HANA Cloud, Shabana's blog is a great resource. In this discussion, we will concentrate on the use case we published for the Discovery Mission and the scenarios it encompasses.
We have launched an SAP Discovery Mission that covers the basics of the SAP HANA Cloud Vector Engine, embedding texts by accessing the foundation models from SAP Generative AI Hub or Azure OpenAI, and deploying a RAG application using SAP CAP. We have provided all the repositories as part of the mission, and here is the link to our Discovery Mission: Harnessing Generative AI Capabilities with SAP HANA Cloud Vector Engine
In this blog series, I will discuss the architecture and detail one of those scenarios in the follow-up blogs. For the remaining scenarios, I encourage you to subscribe to the Discovery Mission to gain access to Git repositories and Python scripts.
Consider a hypothetical healthcare client, referred to as Client X, who has data stored across multiple systems, including SAP, Salesforce, and external platforms. This data could consist of customer interactions such as calls or emails to their call centers, which could be inquiries, service requests, or feedback about the services offered by the healthcare company.
Client X is interested in leveraging SAP HANA Cloud to process this unstructured data, enabling their business to run direct queries on feedback or transcribed phone calls. This enriched information can then be used for reporting purposes. For instance, if the business asks, "Display all service requests from the past two weeks," the system should be capable of scanning all transcribed texts, analyzing the content to distinguish between service requests and feedback, and delivering the relevant customer texts for the business to act upon. This scenario provides a clear illustration of the capabilities such a solution must offer.
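A question like the one above could be served by embedding the user's query and running a nearest-neighbor search in SAP HANA Cloud. The sketch below builds such a SQL statement; the table and column names (CUSTOMER_INTERACTIONS, TEXT_VECTOR, and so on) are assumptions for illustration, not the mission's actual schema, while COSINE_SIMILARITY and TO_REAL_VECTOR are the vector engine's SQL functions.

```python
# Hypothetical sketch: table and column names are invented for illustration.
def build_similarity_query(top_k=5, days_back=14):
    # :query_vec would be bound to the embedded user question at execution
    # time (for example via hdbcli); the date filter narrows the search to
    # the requested two-week window.
    return f"""
        SELECT TOP {top_k} TEXT_ID, CUSTOMER_TEXT,
               COSINE_SIMILARITY(TEXT_VECTOR, TO_REAL_VECTOR(:query_vec)) AS SIMILARITY
        FROM CUSTOMER_INTERACTIONS
        WHERE CREATED_AT >= ADD_DAYS(CURRENT_DATE, -{days_back})
        ORDER BY SIMILARITY DESC
    """

sql = build_similarity_query()
print("COSINE_SIMILARITY" in sql)  # prints True
```

The retrieved texts can then be passed to an LLM to separate service requests from general feedback before they reach the report.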
While we cannot provide actual customer data due to privacy concerns, we will substitute it with JSON reviews about various products or restaurants that will demonstrate all the points mentioned above. We have included a schema as part of the Discovery Mission, and we will also share some scenarios and code snippets in our upcoming blogs.
The architecture we're discussing consists of two main phases: data ingestion and user interaction. Let's dive into the details:
Phase 1: Data Ingestion (Steps 1-3) & Phase 2: User Interaction (Steps 4-8)
Before presenting the results to the user, we access the tiiuae--falcon-40b-instruct LLM from the GenAI Hub to analyze the sentiment of the retrieved text, adding an extra layer of context.
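This sentiment step can be sketched as follows. The snippet is a minimal illustration assuming a generic chat-completion callable; in the mission itself, the call goes to tiiuae--falcon-40b-instruct through the SAP Generative AI Hub SDK, and the prompt wording here is an assumption.

```python
# Hypothetical sketch of the sentiment-enrichment step.
def build_sentiment_prompt(retrieved_text):
    # Ask the LLM for a one-word sentiment label for a retrieved text.
    return (
        "Classify the sentiment of the following customer text as "
        "positive, negative, or neutral. Answer with one word.\n\n"
        f"Customer text: {retrieved_text}"
    )

def annotate(results, llm_call):
    # llm_call is any function that sends a prompt string to the LLM
    # and returns its text response.
    return [
        {"text": t, "sentiment": llm_call(build_sentiment_prompt(t))}
        for t in results
    ]

# Stubbed LLM for demonstration only.
annotated = annotate(["The agent resolved my issue quickly."],
                     lambda prompt: "positive")
print(annotated[0]["sentiment"])  # prints "positive"
```

Keeping the LLM call behind a plain callable makes it easy to swap the GenAI Hub client for an Azure OpenAI client, which mirrors the dual-path setup of the mission.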
We adopt a similar approach as discussed before.
Phase 1: Data Ingestion (Steps 1-3) & Phase 2: User Interaction (Steps 4-8)
Throughout the following blog series, we'll dive deeper into the code implementation, guiding you through each step of this exciting journey. Stay tuned for more insights and practical examples!
To ensure you're ready to execute these steps, please refer to the "Preparation" and "Setup" sections of the Project board, included in the Discovery Mission.
If you don't have a subscription for GenAI Hub, don't worry: we've provided alternative options using Azure OpenAI to keep you moving forward.
In our mission, we explored diverse scenarios to showcase the capabilities of SAP HANA Cloud Vector Engine and Generative AI Hub. For those experts eager to delve deeper into these technologies, we've provided Python scripts that serve as a practical resource. The other scenarios cater to experts focused on SAP HANA Cloud Vector Engine and Azure OpenAI/OpenAI integration.
Here's a breakdown of the scenarios:
Scenarios 1 through 4 introduced various Python scripts that demonstrated the core functionalities of embedding using different SDKs & plugins. Building upon the previous scenarios, Scenarios 5 and 6 focused on validating the embeddings generated from Scenarios 1 to 4. These validation steps ensured the accuracy and reliability of the embeddings, which are crucial for downstream tasks like similarity analysis and clustering.
Finally, Scenario 7 showcased an SAP CAP application that validated embeddings based on either SAP Generative AI Hub or Azure OpenAI/OpenAI integration. This real-world application demonstrated how these technologies can be seamlessly integrated into existing workflows and applications.
Throughout these scenarios, we aimed to provide a comprehensive overview of SAP HANA Cloud Vector Engine and Generative AI Hub, equipping experts with the tools and knowledge necessary to leverage these powerful technologies effectively.
If you have BTP subscriptions to both SAP HANA Cloud and SAP Generative AI Hub, you can use the "Python Scripts-GenAI Hub" tile. It enables you to import data from the provided JSON document samples and cross-check the data using the Python scripts included in the tile.
For those with subscriptions to SAP HANA Cloud and Azure OpenAI, you can run the "Python Scripts-Azure Open AI" tile. It lets you import data from sample JSON documents and subsequently verify its accuracy.
You can easily set it up by following the step-by-step instructions provided within the tile. It's worth noting that the CAP application is compatible with LLMs based on both SAP Generative AI Hub and Azure OpenAI.
In the upcoming blog, we'll dive into a practical, hands-on exploration and code review for one of the scenarios we've discussed.
We encourage you to delve into the mission and follow the step-by-step content to gain a deeper understanding of the SAP HANA Cloud Vector Engine and the impressive features you can experiment with. Your feedback is highly anticipated and greatly valued. If you encounter any difficulties while navigating through the mission, don't hesitate to contact our support team through the Discovery Center Mission. Here's to an enjoyable and enlightening learning journey!