SAP HANA Cloud’s multi-model platform brings together vector, graph, text, spatial, and relational data natively. It enables developers and data teams to build smarter, more context-aware AI solutions — directly on operational data.
SAP HANA Cloud uniquely supports:
- in-database vector embeddings and similarity search,
- RDF knowledge graphs queried with SPARQL,
- spatial processing, and
- full-text and relational SQL analytics,
all within a single engine. Rather than sending data across disparate services, you can store and process all of it in one place, accelerating time-to-value while reducing the risk of misalignment. This is multi-model done right, and it is the foundation for powerful AI workloads that scale.
This blog demonstrates how to query RDF knowledge graphs in SAP HANA Cloud, both:
- directly with SPARQL via the SPARQL_EXECUTE procedure, and
- as relational tables via SPARQL_TABLE, for seamless use in SQL analytics.
We will also show how RDF graphs can be combined seamlessly with vector similarity search, spatial filtering, and SQL analytics to create intelligent, real‑world use cases. The blog expands on this post.
To ground the concepts, we start with a simple but realistic scenario: turning unstructured supplier feedback into AI‑ready supply‑chain intelligence.
We first create a dedicated schema and table to store supplier identifiers, short operational reports, vector embeddings, and geographic locations.
CREATE SCHEMA KG_SUPPLYCHAIN;
CREATE TABLE KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP (
supplier_uri NVARCHAR(200),
report_text NVARCHAR(1000),
report_embedding REAL_VECTOR(768),
geo_location ST_POINT(4326)
);

The reports represent real‑world observations such as customs delays or smooth clearance. At this stage, AI embeddings and spatial attributes are not yet populated.
INSERT INTO KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP VALUES
('http://example.org/supplier/AlphaGmbH', 'Consistently on time. No customs issue.', NULL, NULL),
('http://example.org/supplier/BetaGmbH', 'Severe customs delays reported last month.', NULL, NULL),
('http://example.org/supplier/GammaLogistics', 'Smooth operations, cleared customs quickly.', NULL, NULL);

Each supplier is enriched with a geographic location using WGS84 coordinates. This enables proximity analysis and regional risk assessment.
UPDATE KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
SET geo_location = ST_GeomFromText('POINT(8.6821 50.1109)', 4326)
WHERE supplier_uri = 'http://example.org/supplier/AlphaGmbH';

SAP HANA Cloud generates vector embeddings directly in‑database using SAP’s native embedding model. This enables semantic understanding of supplier reports.
UPDATE KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
SET report_embedding = VECTOR_EMBEDDING(
report_text,
'QUERY',
'SAP_NEB.20240715'
);

At this point, we have a single AI‑ready table that supports:
- semantic similarity search over the report embeddings,
- spatial proximity analysis on supplier locations, and
- standard relational filtering and joins,
all without exporting data outside SAP HANA Cloud.
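To build intuition for what COSINE_SIMILARITY computes over these embeddings, here is a minimal pure-Python sketch. The toy 3-dimensional vectors are made up for illustration; the real SAP_NEB.20240715 model produces 768-dimensional embeddings inside the database.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for the 768-dimensional report vectors
reports = {
    "AlphaGmbH":      [0.9, 0.1, 0.0],  # "No customs issue."
    "BetaGmbH":       [0.1, 0.9, 0.2],  # "Severe customs delays..."
    "GammaLogistics": [0.8, 0.2, 0.1],  # "cleared customs quickly"
}
query = [1.0, 0.0, 0.0]  # stand-in embedding of "no customs delay"

# Rank suppliers by semantic closeness to the query
ranked = sorted(reports, key=lambda s: cosine_similarity(query, reports[s]), reverse=True)
print(ranked)  # ['AlphaGmbH', 'GammaLogistics', 'BetaGmbH']
```

The positive reports land closest to the query vector, which is exactly the behavior the in-database similarity search exploits.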
Relational tables capture operational facts well, but business meaning and rules are better expressed semantically. This is where the SAP HANA Cloud Knowledge Graph engine comes in.
We start by removing any existing graph to ensure a clean load.
CALL SPARQL_EXECUTE('DROP GRAPH <kg_supplychain>', '', ?, ?);

We then insert RDF triples describing suppliers and their business attributes. Each supplier uses the same URI as the relational table, creating a natural bridge between SQL and RDF.
CALL SPARQL_EXECUTE(
'
INSERT DATA {
GRAPH <kg_supplychain> {
<http://example.org/supplier/AlphaGmbH> a <Supplier> ;
<hasCertification> "ISO 9001" ;
<hasCarbonTaxRate> "low" ;
<isFlaggedForDelays> false .
}
}',
'', ?, ?);

Within the Knowledge Graph, suppliers are enriched with structured, machine‑readable facts such as certifications, carbon‑tax exposure, and compliance flags.
The result is a dual‑layer architecture:
- a relational layer holding operational facts, vector embeddings, and geographic locations, and
- a semantic layer holding business meaning and rules as RDF triples.
Both are connected by shared URIs and executed in the same database.
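The URI bridge between the two layers can be illustrated with a small pure-Python sketch, using hypothetical data mirroring the table and triples above; in SAP HANA Cloud this join happens inside the database rather than in client code.

```python
# Relational layer: operational rows keyed by supplier URI,
# mirroring SUPPLIER_REPORTS_LOOKUP (embeddings and geometry omitted)
rows = [
    {"supplier_uri": "http://example.org/supplier/AlphaGmbH",
     "report_text": "Consistently on time. No customs issue."},
]

# Semantic layer: RDF-style (subject, predicate, object) triples
triples = [
    ("http://example.org/supplier/AlphaGmbH", "hasCertification", "ISO 9001"),
    ("http://example.org/supplier/AlphaGmbH", "hasCarbonTaxRate", "low"),
    ("http://example.org/supplier/AlphaGmbH", "isFlaggedForDelays", "false"),
]

def enrich(row, triples):
    # The shared URI is the join key between the relational and semantic layers
    facts = {p: o for s, p, o in triples if s == row["supplier_uri"]}
    return {**row, **facts}

enriched = [enrich(r, triples) for r in rows]
print(enriched[0]["hasCertification"])  # ISO 9001
```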
SAP HANA Cloud allows SPARQL queries to be consumed as relational tables using SPARQL_TABLE, enabling seamless integration with SQL analytics.
SELECT * FROM SPARQL_TABLE('
SELECT ?supplier ?certification ?carbontax ?flag
FROM <kg_supplychain>
WHERE {
?supplier a <Supplier> .
?supplier <hasCertification> ?certification .
?supplier <hasCarbonTaxRate> ?carbontax .
?supplier <isFlaggedForDelays> ?flag .
FILTER(
?certification = "ISO 9001" &&
?carbontax = "low" &&
STR(?flag) = "false"
)
}
');

This bridges semantic reasoning and relational analytics in a single query flow.
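To make the selection logic of the FILTER clause explicit, here is a plain-Python mirror of the same three conditions over hypothetical (subject, predicate, object) triples. This is purely illustrative; it is not how the SPARQL engine evaluates the query.

```python
# Hypothetical triples; BetaGmbH is added as a non-qualifying counterexample
triples = [
    ("<AlphaGmbH>", "hasCertification", "ISO 9001"),
    ("<AlphaGmbH>", "hasCarbonTaxRate", "low"),
    ("<AlphaGmbH>", "isFlaggedForDelays", "false"),
    ("<BetaGmbH>",  "hasCertification", "ISO 9001"),
    ("<BetaGmbH>",  "hasCarbonTaxRate", "high"),
    ("<BetaGmbH>",  "isFlaggedForDelays", "true"),
]

def facts(subject):
    return {p: o for s, p, o in triples if s == subject}

# Same conditions as the SPARQL FILTER clause above
qualified = [
    s for s in sorted({s for s, _, _ in triples})
    if facts(s).get("hasCertification") == "ISO 9001"
    and facts(s).get("hasCarbonTaxRate") == "low"
    and facts(s).get("isFlaggedForDelays") == "false"
]
print(qualified)  # ['<AlphaGmbH>']
```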
This advanced query combines:
- vector similarity search over the report embeddings,
- spatial filtering with ST_Distance, and
- relational window logic, embedded in a SPARQL query via SQL_TABLE.
CALL SPARQL_EXECUTE(
'
SELECT *
FROM <kg_supplychain>
WHERE {
SQL_TABLE("SELECT \"uri_str\", \"REPORT_TEXT\", \"SCO\" FROM (
SELECT *, sco - FIRST_VALUE(sco) OVER(ORDER BY sco DESC) AS diff
FROM (
SELECT \"SUPPLIER_URI\" AS \"uri_str\", \"REPORT_TEXT\",
COSINE_SIMILARITY(
\"REPORT_EMBEDDING\",
VECTOR_EMBEDDING(''no customs delay'', ''QUERY'', ''SAP_NEB.20240715'')
) AS sco
FROM KG_SUPPLYCHAIN.SUPPLIER_REPORTS_LOOKUP
WHERE \"GEO_LOCATION\".ST_Distance(
ST_GeomFromText(''POINT(8.6821 50.1109)'', 4326)
) < 50000
ORDER BY sco DESC
)
) WHERE diff > -0.2") .
}
ORDER BY DESC(?SCO)
LIMIT 10
',
'Accept: application/sparql-results+csv Content-Type: application/sparql-query',
?, ?);

This single query performs AI‑driven supplier discovery, ranking results by semantic meaning while respecting spatial and business rules.
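The shape of that hybrid query can be sketched outside the database in pure Python. Here a haversine formula stands in for ST_Distance, the similarity scores are made-up numbers, and the final step reproduces the `diff > -0.2` window logic that keeps only results within 0.2 of the best score.

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    # Great-circle distance in metres, a rough stand-in for ST_Distance on SRID 4326
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (uri, similarity score, lon, lat) -- hypothetical values
suppliers = [
    ("AlphaGmbH",      0.91, 8.6821, 50.1109),
    ("GammaLogistics", 0.85, 8.8000, 50.2000),
    ("BetaGmbH",       0.40, 8.7000, 50.1500),
    ("FarAway",        0.95, 13.4050, 52.5200),  # outside the 50 km radius
]
center_lon, center_lat = 8.6821, 50.1109

# 1. Spatial filter: within 50 km of the reference point
nearby = [s for s in suppliers
          if haversine_m(s[2], s[3], center_lon, center_lat) < 50000]
# 2. Rank by similarity, keep only scores within 0.2 of the best (diff > -0.2)
nearby.sort(key=lambda s: s[1], reverse=True)
best = nearby[0][1]
shortlist = [s[0] for s in nearby if s[1] - best > -0.2]
print(shortlist)  # ['AlphaGmbH', 'GammaLogistics']
```

Note how the highest-scoring supplier overall ("FarAway") never reaches the ranking step, because the spatial filter runs first, exactly as in the nested SQL_TABLE query.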
Beyond the SQL console, Knowledge Graphs can be managed programmatically using Python and the HANA DBAPI.
The following example shows how to:
- connect to SAP HANA Cloud with the hdbcli DBAPI driver,
- load a Turtle (.ttl) ontology file into a named graph via SPARQL_EXECUTE, and
- verify the load with SPARQL queries.
from hdbcli import dbapi
conn = dbapi.connect(
address='XXX.hana.prod-ap10.hanacloud.ondemand.com',
port=443,
user='XXX',
password='XXX'
)
cursor = conn.cursor()

# Populate a new RDF graph in HANA Cloud with the Turtle file content. We put the ontology into a specific graph.
ttl_filename = "/Users/materials_ontology.ttl"
graph_name = "materials_ontology"
# Load the ontology into HANA Cloud KG
print("Loading ontology...")
try:
    with open(ttl_filename, 'r') as ttlfp:
        request_hdrs = ''
        request_hdrs += 'rqx-load-protocol: true' + '\r\n'  # required header for upload protocol
        request_hdrs += 'rqx-load-filename: ' + ttl_filename + '\r\n'  # optional header
        request_hdrs += 'rqx-load-graphname: ' + graph_name + '\r\n'  # optional header to specify name of the graph
        print(f"Loading file: {ttl_filename}")
        print(f"Graph name: {graph_name}")
        # Execute the load
        result = conn.cursor().callproc('SPARQL_EXECUTE', (ttlfp.read(), request_hdrs, '?', None))
        print("Materials ontology loaded successfully!")
        print(f"Result: {result}")
except Exception as e:
    print(f"Error loading materials ontology: {e}")
    print(f"Error type: {type(e)}")

Do note that you will have to adjust the query based on your ontology structure.
# Verify the materials ontology was loaded
print("Verifying materials ontology load...")
try:
    # Count triples in the materials graph
    query = f"""
    SELECT (COUNT(*) as ?Triples)
    WHERE
    {{ GRAPH <{graph_name}>
       {{ ?s ?p ?o }}
    }}
    """
    resp = conn.cursor().callproc('SPARQL_EXECUTE', (query, 'Accept: application/sparql-results+csv', '?', None))
    print("Materials ontology statistics:")
    print(resp[2])

    # Query for some material instances
    query2 = f"""
    PREFIX mat: <http://example.com/materials/>
    PREFIX matprop: <http://example.com/materials/property/>
    SELECT ?material ?materialId ?level2 ?level3
    FROM <{graph_name}>
    WHERE {{
        ?material a mat:Material .
        ?material matprop:materialId ?materialId .
        OPTIONAL {{ ?material matprop:level2 ?level2 . }}
        OPTIONAL {{ ?material matprop:level3 ?level3 . }}
    }}
    LIMIT 5
    """
    resp2 = conn.cursor().callproc('SPARQL_EXECUTE', (query2, 'Accept: application/sparql-results+csv', '?', None))
    print("\nSample material instances:")
    print(resp2[2])
except Exception as e:
    print(f"Error verifying materials ontology: {e}")

SAP HANA Cloud enables true multi‑model intelligence by allowing relational data, vector embeddings, spatial context, and RDF knowledge graphs to coexist and execute together.
By combining in‑database vector embeddings, semantic Knowledge Graphs, and access through SQL, SPARQL, and Python, organizations can build powerful AI applications that are context‑aware, explainable, and operationally grounded, all from a single platform.
SAP HANA Cloud is not just a database for AI. It is where data, meaning, and intelligence converge.