
Inquiry on SAP HANA Cloud Database vs. HDI Containers in CAP Projects


Dear Community,

I have recently embarked on exploring CAP Projects on BTP and have encountered a couple of questions:

1. Existing Database Usage: What is the rationale for using HDI Containers instead of an existing SAP HANA Cloud database that we have already created?

2. SAP HANA Native Object: I've reviewed documentation, such as Tutorial 6, which discusses sap-hana-native-object. However, it seems to focus on HDI Containers rather than directly on the SAP HANA Cloud database.

Any insights into these matters would be greatly appreciated.

Warm regards,
Rahul Jain


Product and Topic Expert

Hi @rahuljain257,

HDI (HANA Deployment Infrastructure) is a term used by SAP that can be loosely translated as a database schema (this is an oversimplification, but it helps to understand the concept).

When you work with CAP and HANA Cloud, the first step is to create a database instance. That is your DBMS. In the on-premise world, a single HANA Platform installation can host multiple database tenants. HANA Cloud, however, acts as a single-tenant database, which means you do not have multiple database systems on BTP unless you provision multiple HANA Cloud instances.

HANA on-premise vs. HANA Cloud: [diagrams omitted]

Therefore, the idea on BTP with CAP is to have a single HANA Cloud instance (which contains a single database) and reuse it as much as possible across all your applications. 

So, instead of manually creating a database schema, schema users, authorizations for that schema, and so on, you simply create an HDI container instance. Internally, that instance creation automatically provisions an empty schema for your application that is ready to be used.
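On Cloud Foundry this is typically a one-line service creation (`cf create-service hana hdi-shared my-app-hdi`) or, in an MTA project, a resource entry in the deployment descriptor. A minimal sketch, with a hypothetical instance name:

```yaml
# mta.yaml (excerpt) - declaring an HDI container resource;
# the platform provisions the container, its schema, and technical users
resources:
  - name: my-app-hdi              # hypothetical service instance name
    type: com.sap.xs.hdi-container
```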

Once you bind that HDI instance to an application, the platform automatically creates environment variables so that the application can easily find everything it needs to connect to the schema, deploy database artifacts, and run SQL statements.
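On Cloud Foundry those variables arrive via `VCAP_SERVICES`. Roughly like this (all values are made up, and the exact set of fields may vary by environment):

```json
{
  "hana": [{
    "name": "my-app-hdi",
    "credentials": {
      "schema": "ABC123XYZ",
      "host": "<host>.hana.ondemand.com",
      "port": "443",
      "user": "ABC123XYZ_RT",
      "password": "...",
      "hdi_user": "ABC123XYZ_DT",
      "hdi_password": "..."
    }
  }]
}
```

Note the two user pairs: a runtime user for issuing SQL statements and an HDI (deploy-time) user for deploying design-time artifacts.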

CAP is an opinionated framework in which you model your database entities and services in an artifact written in CDS syntax. That artifact gets "compiled" (translated, really) into several artifacts that each target system understands natively. So, if your CAP project targets HANA as its database, every entity defined in CDS becomes an artifact with the extension .hdbtable; analogously, a CDS view becomes an .hdbview, and so on. Such artifacts are known as database native artifacts (a.k.a. design-time artifacts), and they can be deployed to a HANA database to create what are known as runtime artifacts. Don't worry about how to deploy them - you won't be deploying them manually anyway.
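As a small sketch (entity and names are made up), this is what such a CDS model looks like:

```cds
// db/schema.cds - a minimal CDS entity (hypothetical)
namespace my.bookshop;

entity Books {
  key ID    : Integer;
      title : String(111);
}
```

A build of this model would emit a design-time artifact such as `my.bookshop.Books.hdbtable` containing, roughly, `COLUMN TABLE MY_BOOKSHOP_BOOKS (ID INTEGER, TITLE NVARCHAR(111), PRIMARY KEY(ID))`, which the deployer then turns into a runtime table.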

So, instead of creating your application's persistence with DDL, you define the data entities and their relationships (similar to an ER diagram, but text-based and more human-friendly) in CDS syntax. You also define any OData services using the same syntax. CAP handles the database artifact creation as well as the implementation of the services. The generated services support the full set of CRUD operations and can then be extended in either Java or Node.js according to your needs.
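For example, defining an OData service in CDS is just a projection on an entity (all names here are hypothetical):

```cds
// srv/cat-service.cds - expose an entity as an OData service (hypothetical)
using my.bookshop as my from '../db/schema';

service CatalogService {
  entity Books as projection on my.Books;
}
```

CAP serves CRUD for `CatalogService.Books` out of the box; custom logic can then be attached in a Java or Node.js handler.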

When you package your project for distribution, the package contains a reference to what is known as the HDI Deployer. This database deployer is an application that takes all the HDI design-time artifacts and creates or updates the corresponding runtime objects: it runs once per deployment and issues the relevant SQL/database commands to create or update tables, views, etc. in the HDI container.
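In an MTA project this shows up as a module of type `hdb` that requires the HDI container resource. A sketch with hypothetical names:

```yaml
# mta.yaml (excerpt) - the database deployer module runs once per deployment
modules:
  - name: my-app-db-deployer
    type: hdb                   # HDI deployer module type
    path: gen/db                # CAP's generated database artifacts
    requires:
      - name: my-app-hdi        # the HDI container resource
```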

The documentation you referenced here explains how to give CAP access to a runtime artifact that was created by other means. Let's say you have a Java application that uses your HANA Cloud instance for data persistence. Java typically requires you to supply a DDL file to create all the runtime artifacts, so you cannot reference them in your CDS file directly. You have to configure your HDI container to include the credentials needed to connect to the Java application's schema, and you must re-declare the same entities in CDS syntax in order to reference that application's data from your own.
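In CDS, such a re-declaration is typically marked with the `@cds.persistence.exists` annotation, which tells the compiler that the table already exists and that no new design-time artifact should be generated for it. A sketch with hypothetical names:

```cds
// Re-declare a table created outside CAP (e.g. by the Java application)
@cds.persistence.exists
entity ExternalOrders {
  key ORDER_ID : Integer;
      AMOUNT   : Decimal(15, 2);
}
```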

Hope this helps.

Best regards,