Technology Blog Posts by SAP
jing_wen
Product and Topic Expert

SAP HANA Cloud’s multi-model platform brings together vector, graph, text, spatial, and relational data natively. It enables developers and data teams to build smarter, more context-aware AI solutions — directly on operational data.

SAP HANA Cloud uniquely supports:

  • Vector data for semantic and similarity search
  • Graph data for explicit relationship modeling and knowledge graphs
  • Text and spatial data for real-world context
  • Relational data for structured operations and analytics

Rather than sending data across disparate services, you can store and process all of it in one place, accelerating time-to-value while reducing the risk of misalignment. This is multi-model done right, and it is the foundation for powerful AI workloads that scale.

This blog demonstrates how to query RDF knowledge graphs in SAP HANA Cloud, both:

  • Directly from the HANA Cloud Central SQL Console, and
  • Programmatically using Python

We will also show how RDF graphs can be combined seamlessly with vector similarity search, spatial filtering, and SQL analytics to create intelligent, real‑world use cases. This blog expands on an earlier post.

From supplier notes to AI‑ready supply‑chain intelligence

To ground the concepts, we start with a simple but realistic scenario: turning unstructured supplier feedback into AI‑ready supply‑chain intelligence.


Step 1: Create a supply‑chain schema and table

We first create a dedicated schema and table to store supplier identifiers, short operational reports, vector embeddings, and geographic locations.

CREATE SCHEMA KG_SUPPLYCHAIN;

CREATE TABLE KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP (
  supplier_uri       NVARCHAR(200),
  report_text        NVARCHAR(1000),
  report_embedding   REAL_VECTOR(768),
  geo_location       ST_POINT(4326)
);

Step 2: Insert natural‑language supplier reports

The reports represent real‑world observations such as customs delays or smooth clearance. At this stage, AI embeddings and spatial attributes are not yet populated.

INSERT INTO KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP VALUES
('http://example.org/supplier/AlphaGmbH', 'Consistently on time. No customs issue.', NULL, NULL),
('http://example.org/supplier/BetaGmbH',  'Severe customs delays reported last month.', NULL, NULL),
('http://example.org/supplier/GammaLogistics', 'Smooth operations, cleared customs quickly.', NULL, NULL);

Step 3: Add geographic context

Each supplier is enriched with a geographic location using WGS84 coordinates. This enables proximity analysis and regional risk assessment.

UPDATE KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
SET geo_location = ST_GeomFromText('POINT(8.6821 50.1109)', 4326)
WHERE supplier_uri = 'http://example.org/supplier/AlphaGmbH';
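The remaining suppliers can be enriched the same way. The coordinates below are purely illustrative and are not part of the original dataset:

```sql
-- Illustrative coordinates (not from the original scenario)
UPDATE KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
SET geo_location = ST_GeomFromText('POINT(13.4050 52.5200)', 4326)  -- example: Berlin
WHERE supplier_uri = 'http://example.org/supplier/BetaGmbH';

UPDATE KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
SET geo_location = ST_GeomFromText('POINT(8.8017 53.0793)', 4326)   -- example: Bremen
WHERE supplier_uri = 'http://example.org/supplier/GammaLogistics';
```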

Step 4: Generate vector embeddings inside the database

SAP HANA Cloud generates vector embeddings directly in‑database using SAP’s native embedding model. This enables semantic understanding of supplier reports.

UPDATE KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
SET report_embedding = VECTOR_EMBEDDING(
  report_text,
  'QUERY',
  'SAP_NEB.20240715'
);

At this point, we have a single AI‑ready table that supports:

  • Semantic similarity search
  • Spatial analysis
  • Generative AI grounding

—all without exporting data outside SAP HANA Cloud.
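As a quick illustration of the first capability, a semantic search over the reports can already be expressed entirely in SQL. This is a sketch using the same embedding model and functions introduced above:

```sql
-- Rank supplier reports by semantic similarity to a natural-language query
SELECT supplier_uri,
       report_text,
       COSINE_SIMILARITY(
         report_embedding,
         VECTOR_EMBEDDING('customs problems', 'QUERY', 'SAP_NEB.20240715')
       ) AS similarity
FROM KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
ORDER BY similarity DESC;
```

Reports describing customs delays should rank highest, even though none of them contain the literal phrase "customs problems".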

Adding semantic supplier knowledge with SAP HANA Knowledge Graph

Relational tables capture operational facts well, but business meaning and rules are better expressed semantically. This is where the SAP HANA Cloud Knowledge Graph engine comes in.


Step 5: Initialize the RDF graph

We start by removing any existing graph to ensure a clean load. If the graph might not exist yet, the SPARQL 1.1 SILENT keyword (DROP SILENT GRAPH) avoids an error.

CALL SPARQL_EXECUTE('DROP GRAPH <kg_supplychain>', '', ?, ?);

Step 6: Insert semantic supplier knowledge

We then insert RDF triples describing suppliers and their business attributes. Each supplier uses the same URI as the relational table, creating a natural bridge between SQL and RDF.

CALL SPARQL_EXECUTE(
'
INSERT DATA {
  GRAPH <kg_supplychain> {
    <http://example.org/supplier/AlphaGmbH> a <Supplier> ;
      <hasCertification> "ISO 9001" ;
      <hasCarbonTaxRate> "low" ;
      <isFlaggedForDelays> false .
  }
}',
'', ?, ?);

Within the Knowledge Graph, suppliers are enriched with structured, machine‑readable facts such as certifications, carbon‑tax exposure, and compliance flags.

The result is a dual‑layer architecture:

  • Relational layer: text, embeddings, spatial data
  • Semantic layer: business meaning, rules, and relationships

Both are connected by shared URIs and executed in the same database.
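Because the URIs match, the two layers can be joined directly in SQL. The following is a sketch; the exact column names produced by SPARQL_TABLE may vary with your SAP HANA Cloud version:

```sql
-- Join semantic facts from the graph with relational report data via the shared URI
SELECT kg.supplier, kg.certification, r.report_text
FROM SPARQL_TABLE('
  SELECT ?supplier ?certification
  FROM <kg_supplychain>
  WHERE { ?supplier <hasCertification> ?certification . }
') AS kg
JOIN KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP AS r
  ON r.supplier_uri = kg.supplier;
```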

Querying across graph, vector, spatial, and SQL models

Query 1: Filter Knowledge Graph data directly in SQL

SAP HANA Cloud allows SPARQL queries to be consumed as relational tables using SPARQL_TABLE, enabling seamless integration with SQL analytics.


SELECT * FROM SPARQL_TABLE('
  SELECT ?supplier ?certification ?carbontax ?flag
  FROM <kg_supplychain>
  WHERE {
    ?supplier a <Supplier> .
    ?supplier <hasCertification> ?certification .
    ?supplier <hasCarbonTaxRate> ?carbontax .
    ?supplier <isFlaggedForDelays> ?flag .
    FILTER(
      ?certification = "ISO 9001" &&
      ?carbontax = "low" &&
      STR(?flag) = "false"
    )
  }
');

This bridges semantic reasoning and relational analytics in a single query flow.

Query 2: Hybrid SPARQL + SQL with vector and spatial filtering

This advanced query combines:

  1. Geospatial filtering (nearby suppliers)
  2. Vector similarity search on supplier reports
  3. Knowledge Graph constraints


CALL SPARQL_EXECUTE(
'
SELECT *
FROM <kg_supplychain>
WHERE {
  SQL_TABLE("SELECT \"uri_str\", \"REPORT_TEXT\", \"SCO\" FROM (
    SELECT *, sco - FIRST_VALUE(sco) OVER(ORDER BY sco DESC) AS diff
    FROM (
      SELECT \"SUPPLIER_URI\" AS \"uri_str\", \"REPORT_TEXT\",
      COSINE_SIMILARITY(
        \"REPORT_EMBEDDING\",
        VECTOR_EMBEDDING(''no customs delay'', ''QUERY'', ''SAP_NEB.20240715'')
      ) AS sco
      FROM KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
      WHERE \"GEO_LOCATION\".ST_Distance(
        ST_GeomFromText(''POINT(8.6821 50.1109)'', 4326)
      ) < 50000
      ORDER BY sco DESC
    )
  ) WHERE diff > -0.2") .
}
ORDER BY DESC(?SCO)
LIMIT 10
',
'Accept: application/sparql-results+csv Content-Type: application/sparql-query',
?, ?);

This single query performs AI‑driven supplier discovery, ranking results by semantic meaning while respecting spatial and business rules.
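The inner SQL keeps only rows whose similarity score is within 0.2 of the best match (the `sco - FIRST_VALUE(sco) OVER(ORDER BY sco DESC) > -0.2` filter). That ranking logic can be sketched in plain Python as a local illustration with made-up scores; this is not HANA code:

```python
def filter_by_margin(scored, margin=0.2):
    """Keep (name, score) pairs whose score is within `margin` of the top score,
    mirroring the diff > -0.2 window-function filter in the SQL above."""
    ranked = sorted(scored, key=lambda x: x[1], reverse=True)
    top = ranked[0][1]
    return [(name, score) for name, score in ranked if score - top > -margin]

# Hypothetical cosine-similarity scores for the three supplier reports
scores = [("AlphaGmbH", 0.82), ("GammaLogistics", 0.78), ("BetaGmbH", 0.31)]
print(filter_by_margin(scores))  # Alpha and Gamma pass; Beta falls outside the margin
```

This relative cutoff adapts to the query: instead of a fixed similarity threshold, it keeps whatever cluster of results is close to the best match.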

Querying and managing Knowledge Graphs with Python

Beyond the SQL console, Knowledge Graphs can be managed programmatically using Python and the HANA DBAPI.

The following example shows how to:

  • Upload an RDF ontology (Turtle .ttl format)
  • Verify graph load success
  • Query RDF data programmatically

from hdbcli import dbapi

conn = dbapi.connect(
    address='XXX.hana.prod-ap10.hanacloud.ondemand.com',
    port=443,
    user='XXX',
    password='XXX'
)

cursor = conn.cursor()

Load an ontology into SAP HANA Cloud Knowledge Graph

# Populate a new RDF graph in SAP HANA Cloud with the Turtle file's content,
# placing the ontology into a dedicated named graph.
ttl_filename = "/Users/materials_ontology.ttl"
graph_name = "materials_ontology"

# Load the ontology into HANA Cloud KG
print("Loading ontology...")

try:
    with open(ttl_filename, 'r') as ttlfp:
        request_hdrs = ''
        request_hdrs += 'rqx-load-protocol: true' + '\r\n'            # required header for upload protocol
        request_hdrs += 'rqx-load-filename: ' + ttl_filename + '\r\n' # optional header
        request_hdrs += 'rqx-load-graphname: ' + graph_name + '\r\n'   # optional header to specify name of the graph
        
        print(f"Loading file: {ttl_filename}")
        print(f"Graph name: {graph_name}")
        
        # Execute the load
        result = conn.cursor().callproc('SPARQL_EXECUTE', (ttlfp.read(), request_hdrs, '?', None))
        print("Materials ontology loaded successfully!")
        print(f"Result: {result}")
        
except Exception as e:
    print(f"Error loading materials ontology: {e}")
    print(f"Error type: {type(e)}")

Query the ontology 

Note that you will need to adjust these queries to match your own ontology's structure.

# Verify the materials ontology was loaded
print("Verifying materials ontology load...")

try:
    # Count triples in the materials graph
    query = f"""
    SELECT (COUNT(*) as ?Triples) 
    WHERE 
      {{ GRAPH <{graph_name}> 
          {{ ?s ?p ?o }} 
      }}
    """
    
    resp = conn.cursor().callproc('SPARQL_EXECUTE', (query, 'Accept: application/sparql-results+csv', '?', None))
    print("Materials ontology statistics:")
    print(resp[2])
    
    # Query for some material instances
    query2 = f"""
    PREFIX mat: <http://example.com/materials/>
    PREFIX matprop: <http://example.com/materials/property/>
    
    SELECT ?material ?materialId ?level2 ?level3
    FROM <{graph_name}>
    WHERE {{
        ?material a mat:Material .
        ?material matprop:materialId ?materialId .
        OPTIONAL {{ ?material matprop:level2 ?level2 . }}
        OPTIONAL {{ ?material matprop:level3 ?level3 . }}
    }}
    LIMIT 5
    """
    
    resp2 = conn.cursor().callproc('SPARQL_EXECUTE', (query2, 'Accept: application/sparql-results+csv', '?', None))
    print("\n Sample material instances:")
    print(resp2[2])
    
except Exception as e:
    print(f"Error verifying materials ontology: {e}")

Summary

SAP HANA Cloud enables true multi‑model intelligence by allowing relational data, vector embeddings, spatial context, and RDF knowledge graphs to coexist and execute together.

By combining in‑database vector embeddings, semantic Knowledge Graphs, and SQL, SPARQL, and Python access, organizations can build powerful AI applications that are context‑aware, explainable, and operationally grounded, all from a single platform.

SAP HANA Cloud is not just a database for AI. It is where data, meaning, and intelligence converge.