Fukuhara
Product and Topic Expert

I am writing this blog to provide an illustrative example of implementing RAG with the SAP HANA Cloud Vector Engine, SAP AI Core, and Python on Google Colab.

The content has been adapted from the official tutorial "Retrieval Augmented Generation using generative-ai-hub-sdk and HANA vector search" with the changes listed below.
  1. Instructions beyond step 12 have been omitted, as they closely replicate steps 1 through 11.
  2. Some code has been simplified for brevity.

Prerequisites

The following are the prerequisites. Since everything worked without any extra configuration, I did not check their exact versions or conditions in detail.

  • SAP AI Core instance
  • SAP HANA Cloud

Environment

System

| System | Version | Remark |
| --- | --- | --- |
| SAP AI Core | - | Plan should be "extended" |
| SAP HANA Cloud | 4.00.000.00.1710841718 (fa/CE2024.2) | |

Python on Google Colaboratory

As of April 2024, the Python version on Google Colaboratory is 3.10.12.

| Package | Version |
| --- | --- |
| hana_ml | 2.20.24031902 |
| generative-ai-hub-sdk | 1.2.2 |

Jupyter Implementation

1. pip install

First of all, install hana_ml and generative-ai-hub-sdk using pip.

```python
!pip install hana_ml generative-ai-hub-sdk
```
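To confirm that the installed versions match the Environment table above, one way is to print them directly (a quick sketch; hana_ml exposes __version__, and the SDK's version can be read from its package metadata):

```python
# Print installed versions to compare with the Environment table
import hana_ml
from importlib.metadata import version

print(hana_ml.__version__)
print(version('generative-ai-hub-sdk'))
```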

2. import packages

Next, import required packages.  

2.1. Import simple modules

Just import the required modules.

```python
import os

from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
from hana_ml import ConnectionContext
from hana_ml.dataframe import create_dataframe_from_pandas
from langchain.prompts import PromptTemplate
import pandas as pd
```

2.2. Configuration for connecting SAP AI Core

Before you import the "embeddings" module from "gen_ai_hub.proxy.native.openai", it is crucial to configure the necessary settings. For more details, please refer to the documentation on GitHub. The values for the configuration can be found in the SAP BTP cockpit.

If you run this on a Mac instead of Colab, create a '~/.aicore' directory and a 'config.json' file in it; the file content is the JSON part of the code below.

```python
def preconfigure_aicore() -> None:
    path = '/root/.aicore'  # home directory on Google Colab
    config = '''
    {
        "AICORE_AUTH_URL": "<* * *>",
        "AICORE_CLIENT_ID": "<* * *>",
        "AICORE_CLIENT_SECRET": "<* * *>",
        "AICORE_RESOURCE_GROUP": "<* * *>",
        "AICORE_BASE_URL": "<* * *>"
    }
    '''
    os.makedirs(path, exist_ok=True)
    with open(path + '/config.json', 'w') as f:
        f.write(config)

preconfigure_aicore()

# Import only after the configuration file is in place
from gen_ai_hub.proxy.native.openai import embeddings
```

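As an alternative to the config file, the same settings can also be provided as environment variables. This is only a sketch and assumes the SDK reads the AICORE_* keys from the environment; set them before importing any gen_ai_hub modules:

```python
import os

# Assumption: generative-ai-hub-sdk also reads these AICORE_* keys from the environment
os.environ['AICORE_AUTH_URL'] = '<* * *>'
os.environ['AICORE_CLIENT_ID'] = '<* * *>'
os.environ['AICORE_CLIENT_SECRET'] = '<* * *>'
os.environ['AICORE_RESOURCE_GROUP'] = '<* * *>'
os.environ['AICORE_BASE_URL'] = '<* * *>'
```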
3. Check SAP HANA Cloud Connection

Now check that the connection to SAP HANA Cloud works.

```python
cc = ConnectionContext(
    address='<* * *>',
    port='443',  # 443 is the usual port for SAP HANA Cloud
    user='<* * *>',
    password='<* * *>',
    encrypt=True
)
print(cc.hana_version())
print(cc.get_current_schema())
```

You can see the SAP HANA Cloud version and your current schema name.

```
4.00.000.00.1710841718 (fa/CE2024.2)
<Your schema name>
```

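If you prefer not to hard-code the password in the notebook, a small variation using getpass from the standard library also works in Colab (same connection parameters as above; this is just a sketch):

```python
from getpass import getpass
from hana_ml import ConnectionContext

# Prompt for the password at runtime instead of storing it in the notebook
cc = ConnectionContext(
    address='<* * *>',
    port='443',
    user='<* * *>',
    password=getpass('SAP HANA Cloud password: '),
    encrypt=True
)
```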
Define any table name you like. Since I want to use a local temporary table, the name starts with "#".

```python
TAB = '#GRAPH_DOCU_QRC3_2201'  # '#' prefix: local temporary table
```

4. Read a CSV file

Read a CSV file from GitHub using pandas. Unused columns, such as "L1", are discarded.

```python
def read_csv(url):
    df = pd.read_csv(url, low_memory=False, usecols=['ID', 'HEADER1', 'HEADER2', 'TEXT', 'VECTOR_STR'])
    df['VECOTR'] = None  # placeholder column so the dataframe matches the target table layout
    df.info()
    df.head(3)
    return df

url = 'https://github.com/SAP-samples/ai-core-samples/blob/main/08_VectorStore/Hana/GRAPH_DOCU_QRC.csv?raw=true'
df = read_csv(url)
```

Here is the dataframe info.

```
<class 'pandas.core.frame.DataFrame'>
Index: 262 entries, 0 to 261
Data columns (total 6 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   ID          262 non-null    int64
 1   HEADER1     262 non-null    object
 2   HEADER2     222 non-null    object
 3   TEXT        262 non-null    object
 4   VECTOR_STR  262 non-null    object
 5   VECOTR      0 non-null      object
dtypes: int64(1), object(5)
memory usage: 14.3+ KB
```
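If you are curious, you can peek at one of the pre-computed embeddings. This is a sketch that assumes VECTOR_STR holds a JSON-style array of floats, which is the format TO_REAL_VECTOR expects later:

```python
import json

# Assumption: VECTOR_STR looks like "[0.0123, -0.0456, ...]"
sample_vector = json.loads(df['VECTOR_STR'].iloc[0])
print(len(sample_vector))  # expecting 1536, matching REAL_VECTOR(1536) below
```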

5. Define table and upload data

Proceed to define a table and upload the dataframe into it. Ordinarily I would use the hana_ml "create_table" function; however, as of April 2024 it cannot handle the REAL_VECTOR data type, so the table is created with plain SQL instead.

```python
def create_table(cc, sql, df):
    if cc.has_table(TAB):
        cc.drop_table(TAB)
    cc.execute_sql(sql)
    dfh = create_dataframe_from_pandas(
            connection_context=cc,
            pandas_df=df,
            table_name=TAB,
            append=True
         )
    # Fill the REAL_VECTOR column from the string representation
    cc.execute_sql(f'''UPDATE {TAB} SET VECTOR = TO_REAL_VECTOR(VECTOR_STR);''')
    print(dfh.get_table_structure())
    display(dfh.head(3).collect())
    return dfh

sql = f'''CREATE LOCAL TEMPORARY COLUMN TABLE {TAB}(
           ID BIGINT,
           HEADER1 NVARCHAR(5000),
           HEADER2 NVARCHAR(5000),
           TEXT NCLOB,
           VECTOR_STR NCLOB,
           VECTOR REAL_VECTOR(1536));'''  # create_table in hana_ml 2.20.24031902 cannot handle REAL_VECTOR
dfh = create_table(cc, sql, df)
```

(Screenshot: table structure and the first three rows)
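As a quick sanity check that all rows were uploaded, the row count in HANA can be compared with the pandas dataframe (a minimal sketch using the HANA dataframe returned above):

```python
# Both counts should be 262
print(dfh.count(), len(df))
```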

6. Similarity Search

6.1. Get embedding

Create a function to get embeddings from the model "text-embedding-ada-002".

```python
def get_embedding(input, model="text-embedding-ada-002") -> list:
    response = embeddings.create(
        model_name=model,
        input=input
    )
    return response.data[0].embedding
```
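A quick test call; text-embedding-ada-002 returns 1536-dimensional vectors, which matches the REAL_VECTOR(1536) column defined above:

```python
# Quick test of the embedding helper
vector = get_embedding('How can I run a shortest path algorithm?')
print(len(vector))  # 1536
```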

6.2. Vector Search

Create a vector search function and test it. You can also use L2DISTANCE; see the sketch after the search result below.

```python
def run_vector_search(query: str, k=4):
    query_vector = get_embedding(query)
    sql = f'''SELECT TOP {k} "ID", "HEADER1", "HEADER2", "TEXT",
        "COSINE_SIMILARITY"("VECTOR", TO_REAL_VECTOR('{query_vector}')) AS "COSINE_SIMILARITY"
        FROM "{TAB}"
        ORDER BY "COSINE_SIMILARITY"("VECTOR", TO_REAL_VECTOR('{query_vector}')) DESC'''
    hdf = cc.sql(sql)
    df_context = hdf.head(k).collect()
    return df_context

query = "How can I run a shortest path algorithm?"
df_context = run_vector_search(query)
display(df_context)
```

Here is the search result.

(Screenshot: vector search result with cosine similarity scores)
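As mentioned above, L2DISTANCE can be used instead of COSINE_SIMILARITY. Here is a sketch of the modified query; note that the ordering flips to ascending, because a smaller distance means a closer match:

```python
def run_vector_search_l2(query: str, k=4):
    # Same idea as run_vector_search, but ranked by Euclidean distance (smaller is better)
    query_vector = get_embedding(query)
    sql = f'''SELECT TOP {k} "ID", "HEADER1", "HEADER2", "TEXT",
        "L2DISTANCE"("VECTOR", TO_REAL_VECTOR('{query_vector}')) AS "L2DISTANCE"
        FROM "{TAB}"
        ORDER BY "L2DISTANCE"("VECTOR", TO_REAL_VECTOR('{query_vector}')) ASC'''
    return cc.sql(sql).head(k).collect()
```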

7. Define prompt template

Define a prompt template for SAP HANA Cloud questions.

```python
# Prompt. Do also use your knowledge from outside the given context.
promptTemplate_fstring = """
You are an SAP HANA Cloud expert.
You are provided multiple context items that are related to the prompt you have to answer.
Use the following pieces of context to answer the question at the end.
Context:
{context}
Question:
{query}
"""
promptTemplate = PromptTemplate.from_template(promptTemplate_fstring)
print(promptTemplate.pretty_repr())
```
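To see how the template renders before wiring it into the LLM call, you can format it with placeholder strings:

```python
# Render the template with dummy values to inspect the final prompt
print(promptTemplate.format(context='<retrieved documents>', query='<user question>'))
```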

8. Define Ask process

Define the ask process: run the vector search, build the prompt from the search results, and ask the LLM.

```python
def ask_llm(query: str, retrieval_augmented_generation: bool, k=4) -> None:

    class color:
        RED = '\033[91m'
        BLUE = '\033[94m'
        BOLD = '\033[1m'
        END = '\033[0m'

    context = ''
    if retrieval_augmented_generation:
        print(color.RED + 'Running retrieval augmented generation.' + color.END)
        print(color.RED + '\nEmbedding the query string and running HANA vector search.' + color.END)
        df_context = run_vector_search(query, k)
        context = ' '.join(df_context['TEXT'].astype('string'))
        print(color.RED + '\nHANA vector search returned {k} best matching documents.'.format(k=k) + color.END)
        print(color.RED + '\nGenerating LLM prompt using the context information.' + color.END)
    else:
        print(color.RED + 'Generating LLM prompt WITHOUT context information.' + color.END)
    prompt = promptTemplate.format(query=query, context=context)
    print(color.RED + '\nAsking LLM...' + color.END)
    llm = ChatOpenAI(proxy_model_name='gpt-4')
#    llm = ChatOpenAI(proxy_model_name='gpt-4', proxy_client=proxy_client)
    response = llm.invoke(prompt).content
    print(color.RED + '...completed.' + color.END)
    print(color.RED + '\nQuery: ' + color.END, query)
    print(color.BLUE + '\nResponse:' + color.END)
    print(response)

query = "I want to calculate a shortest path. How do I do that?"
ask_llm(query=query, retrieval_augmented_generation=True, k=4)
```

9. Compare LLM results with and without RAG

Now ask the same question to the LLM, once with RAG and once without.

```python
query = "I want to calculate a shortest path. How do I do that?"
ask_llm(query=query, retrieval_augmented_generation=True, k=4)
ask_llm(query=query, retrieval_augmented_generation=False, k=4)
```

Here is the first answer with RAG.

You can calculate the shortest path by using the SHORTEST_PATH built-in function in GraphScript. The SHORTEST_PATH function returns a WEIGHTEDPATH instance containing a shortest path within the given parent graph from a start vertex to a target vertex. 

Here is an example of how to use the SHORTEST_PATH function:

```graphscript
GRAPH g = GRAPH("FLIGHTS", "GRAPH")
WITH TEMPORARY ATTRIBUTES(VERTEX INTEGER vertexAttr = 0, EDGE INTEGER edgeAttr = 42);
VERTEX v_source(:g, 'FRA');
VERTEX v_target(:g, 'JFK');
WEIGHTEDPATH<BIGINT> p = SHORTEST_PATH(:g, :v_source, :v_target);  
SEQUENCE<VERTEX> sv = VERTICES(:p);
SEQUENCE<EDGE> se = EDGES(:p);  
BIGINT lengthOfP = LENGTH(:p);
BIGINT weightOfP = WEIGHT(:p);
```

In this example, we first create an instance of the graph. Then, we define the start vertex (v_source) and the target vertex (v_target). The SHORTEST_PATH function is then used to find the shortest path between these two vertices. The result is stored in a WEIGHTEDPATH object (p).

You can also use a custom weight function as a fourth parameter in the SHORTEST_PATH function if you want to calculate the shortest path based on edge weights other than the hop distance.

For example, to find the shortest path based on the edge attribute "DIST_KM", you can use a custom weight function as below:

```graphscript
WEIGHTEDPATH<DOUBLE> shortest = SHORTEST_PATH(
:g, :v_source, :v_target, (EDGE e) => DOUBLE{ RETURN :e."DIST_KM"; });
```

This function takes an edge as a parameter and returns the distance in kilometers associated with that edge. The shortest path is then calculated based on these distances.

Here is the first answer without RAG.

In SAP HANA Cloud, you can calculate the shortest path by using the Graph Processing feature. This feature allows you to model and analyze complex relationships between pairs of objects in a network or graph. Here are the steps:
1. Model your data as a graph: You need to have a graph workspace which defines the vertices (nodes) and edges (connections) of your graph. This can be done using the SQLScript or the SAP HANA Studio.
2. Use the SHORTEST_PATH function: SAP HANA Cloud provides the SHORTEST_PATH function which calculates the shortest path between two given vertices in a graph. The function takes the graph workspace, source vertex, target vertex, and an optional weight column (if the edges have weights) as parameters.
3. Analyze the results: The result of the SHORTEST_PATH function is a table that includes the path sequence, the edge and the vertex details of the shortest path.
Remember that this function only works if there is a path between the source and the target vertices. If there is no such path, the function returns an empty result.

I'm not familiar with HANA Graph, so I can't evaluate the result.

10. Close HANA connection

Close the HANA connection; the local temporary table is dropped automatically when the session ends.

```python
cc.close()
```
