Hi all, and welcome to this blog.

This blog covers the creation and deployment of various modules (namely HANA, Java, and Node.js) to SAP Cloud Platform Cloud Foundry as a single Multi-Target Application (MTA).

All the modules will be created using SAP Web IDE Full-Stack.

We will go step by step through the creation and deployment of these modules, starting with the HANA module and moving on to Node.js and then Java.

 

Use case: We are going to create a HANA trial database and create entities in it.

The data inside these entities will be exposed through a Node.js OData layer.

A Java layer will insert records into this database using JDBC.

 

Below is a high-level overview of each module:

1) HANA trial database: We will create an instance of SAP HANA Schemas & HDI Containers (Trial) and use it as our underlying database system. We will create entities in this database and insert records (persist data).

2) Node.js module: We will create a Node.js module to expose the data present in the above HANA DB as an OData service.

3) Java (Spring Boot): We will create a simple Spring Boot app that uses JDBC to persist data in the HANA trial instance.

 

 

Prerequisites:

Accounts and infrastructure:

  1. SAP Cloud Platform Cloud Foundry trial account.

  2. SAP Web IDE Full-Stack service.

Skills:

  1. Basic knowledge of Cloud Foundry and SAP Cloud Platform.

  2. Basic knowledge of Java, JDBC, SAPUI5, Node.js, and OData.


 

Let's start with the creation of the first module, HANA.

STEP 1: Launch SAP Web IDE Full-Stack and create a new MTA project.

Log in to your SAP Cloud Platform Neo trial account and start the SAP Web IDE Full-Stack service.

SAP Web IDE Full-Stack is still part of Neo rather than Cloud Foundry, so we will use this service from Neo and connect it to our SAP Cloud Foundry trial account (API endpoint).

 



 

Inside the Web IDE, right-click on the workspace and create a new Multi-Target Application.

This application is going to contain all the modules.

After the successful creation of the MTA, you will see a new project named "JobPortal" in your workspace. It contains a default generated mta.yaml manifest file.

As modules are added to or removed from the MTA, the manifest file is updated automatically by the Web IDE.
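To give you an idea of what the Web IDE maintains for you, here is a minimal sketch of how the mta.yaml could look once the DB and Node.js modules from this blog exist. The module and resource names here are assumptions based on the names used below; your generated file will differ in detail:

ID: JobPortal
_schema-version: '2.1'
version: 0.0.1

modules:
  # HANA DB module; deploys design-time artifacts into an HDI container
  - name: DB
    type: hdb
    path: DB
    requires:
      - name: hdi_DB

  # Node.js (XSJS-enabled) module exposing the OData service
  - name: odatajs
    type: nodejs
    path: odatajs
    requires:
      - name: hdi_DB

resources:
  # The HDI container service instance backing the DB module
  - name: hdi_DB
    type: com.sap.xs.hdi-container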

 

STEP 2: Create a HANA DB module and a CDS artifact inside the MTA project.

Right-click on the created MTA project and create a new HANA database module; name it "DB".

Now we will create a CDS artifact for the DB module and add two entities to it, Job and Enrollment.

Right-click on the src folder of the DB module and create a new CDS artifact.

After this you can use the graphical view to drag and drop entities, define relation mappings, etc. in the DB module.

For reference, here is the code for the CDS artifact. I have named the file cdsArtifact.hdbcds.
namespace JobPortal.DB;

context cdsArtifact {

    /*@@layout{"layoutInfo":{"x":32,"y":121.5}}*/
    entity Job {
        key JOB_ID : String(10) not null;
        DESCRIPTION : String(50) not null;
        DEPARTMENT : String(20);
        OWNER_NAME : String(50) not null;
        association : association[1, 1..*] to cdsArtifact.Enrollment { JOB_ID };
    };

    /*@@layout{"layoutInfo":{"x":-444,"y":105.5}}*/
    entity Enrollment {
        key JOB_ID : String(20) not null;
        key PERSON_ID : String(20) not null;
        FIRST_NAME : String(20) not null;
        LAST_NAME : String(20) not null;
        EMAIL : String(40) not null;
        LOCATION : String(20);
    };
};

 

STEP 3: Create and insert some mock data into both tables using hdbtabledata files.

We need to create jobs.hdbtabledata and enrollments.hdbtabledata files in the same src folder of the DB module.

The contents of the files are:

 

jobs.hdbtabledata
{
    "format_version": 1,
    "imports": [
        {
            "target_table": "JobPortal.DB::cdsArtifact.Job",
            "source_data": { "data_type": "CSV", "file_name": "JobPortal.DB::jobs.csv", "has_header": false },
            "import_settings": { "import_columns": ["JOB_ID", "DESCRIPTION", "OWNER_NAME"] },
            "column_mappings": { "JOB_ID": 1, "DESCRIPTION": 2, "OWNER_NAME": 3 }
        }
    ]
}

enrollments.hdbtabledata
{
    "format_version": 1,
    "imports": [
        {
            "target_table": "JobPortal.DB::cdsArtifact.Enrollment",
            "source_data": { "data_type": "CSV", "file_name": "JobPortal.DB::enrollments.csv", "has_header": false },
            "import_settings": { "import_columns": ["JOB_ID", "PERSON_ID", "FIRST_NAME", "LAST_NAME", "EMAIL"] },
            "column_mappings": { "JOB_ID": 1, "PERSON_ID": 2, "FIRST_NAME": 3, "LAST_NAME": 4, "EMAIL": 5 }
        }
    ]
}

As directed by the above two hdbtabledata files, the data to be inserted is read from jobs.csv and enrollments.csv. So we create these CSV files, containing the data for both tables, in the same src folder.

jobs.csv
"JB101","desc1","Shobhit"
"JB102","desc2","Shobhit"
"JB103","desc3","Shobhit"
"JB104","desc4","Shobhit"

 

enrollments.csv
"JB101","I335136","Shobhit","Pathak","shobhit.pathak@sap.com"
"JB101","I335137","Alex","Pattrick","shobhit.pathak@sap.com"
"JB102","I335137","Shaan","Shaahid","shobhit.pathak@sap.com"
"JB104","I335136","Shobhit","Pathak","shobhit.pathak@sap.com"

Finally, the whole DB module will look like this in SAP Web IDE Full-Stack:



 

It's time to build the DB module. In order to do that, you need to make sure you have the builder installed in your Cloud Foundry account on SCP.

Right-click on the DB module and build it.

You can right-click on the src folder and open the HDI container to look at the created tables.

You can also use this to run other SQL statements (INSERT, DELETE, etc.) as well.
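For example, once the HDI container is open, you could try statements like the following in the SQL console. The quoted runtime table name comes from the namespace and context defined in the CDS artifact above; the inserted values are just an illustration:

SELECT * FROM "JobPortal.DB::cdsArtifact.Job";

INSERT INTO "JobPortal.DB::cdsArtifact.Job" (JOB_ID, DESCRIPTION, DEPARTMENT, OWNER_NAME)
    VALUES ('JB105', 'desc5', 'HR', 'Shobhit');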

 

STEP 4: Create a Node.js module in order to expose the above created database as an OData service.

Right-click on the MTA project and create a new Node.js module.

I am naming this module "odatajs" and enabling XSJS support.



 

Now let's create a new folder under lib named "xsodata", which will contain our xsodata file.

After creating the folder, create a "service.xsodata" file and put this code snippet in the file. (Note that the qualified name must match the CDS namespace, JobPortal.DB.)
service
{
    "JobPortal.DB::cdsArtifact.Job" as "Jobs" navigates ("Job_Enrollments" as "HasEnrollments");
    "JobPortal.DB::cdsArtifact.Enrollment" as "Enrollments";

    association "Job_Enrollments" principal "Jobs"("JOB_ID") multiplicity "1"
        dependent "Enrollments"("JOB_ID") multiplicity "*";
}

Now run this module as a Node.js application. You will get a generated link.

Append the location of your service file to the URL (see the snapshot).

You will be able to see the data persisted in the HANA module.
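Assuming the lib/xsodata folder structure above, the service and its entity sets should be reachable under URLs of this shape (the host is whatever URL the Web IDE generated for the module):

https://<generated-app-url>/xsodata/service.xsodata
https://<generated-app-url>/xsodata/service.xsodata/Jobs?$format=json
https://<generated-app-url>/xsodata/service.xsodata/Jobs('JB101')/HasEnrollments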



 

STEP 5: Create a Java module to persist records in the existing DB module (HANA trial DB).

Right-click on the MTA project and create a new Java module. Select Spring Boot.

The structure of the project will look like this:



 

 

 

We will use JDBC to push some records into the Job entity.

In the HelloController.java file, create a new method called createJobs and add the code below to it. For reference, the complete controller file is:

 
package com.sap.springLayer.javaSpringLayer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.json.JSONArray;
import org.json.JSONObject;

@Controller
@EnableAutoConfiguration
public class HelloController {

    @RequestMapping(value = "/", method = RequestMethod.GET, produces = "text/plain")
    @ResponseBody
    String home() {
        StringBuilder builder = new StringBuilder();
        builder.append("Hello World !!");
        String data = "";
        builder.append("\n\nJDBC connection available: ");
        try {
            Connection conn = getConnection();
            if (conn != null) {
                builder.append("yes");
                builder.append("\n\nCurrent Hana DB user:\n");
                String userName = getCurrentUser(conn);
                builder.append(userName);
                builder.append("\n\nCurrent Hana schema:\n");
                builder.append(getCurrentSchema(conn));
                // Insert records once the connection is established
                data = createJobs(conn);
            } else {
                builder.append("no");
            }
        } catch (SQLException e) {
            builder.append("no");
        }

        return builder.append(data).toString();
    }

    private String getCurrentUser(Connection conn) throws SQLException {
        String currentUser = "";
        PreparedStatement prepareStatement = conn.prepareStatement("SELECT CURRENT_USER \"current_user\" FROM DUMMY;");
        ResultSet resultSet = prepareStatement.executeQuery();
        int column = resultSet.findColumn("current_user");
        while (resultSet.next()) {
            currentUser += resultSet.getString(column);
        }
        return currentUser;
    }

    private String getCurrentSchema(Connection conn) throws SQLException {
        String currentSchema = "";
        PreparedStatement prepareStatement = conn.prepareStatement("SELECT CURRENT_SCHEMA \"current_schema\" FROM DUMMY;");
        ResultSet resultSet = prepareStatement.executeQuery();
        int column = resultSet.findColumn("current_schema");
        while (resultSet.next()) {
            currentSchema += resultSet.getString(column);
        }
        return currentSchema;
    }

    // Inserts one record into the Job entity of the HDI container.
    // DATABASE_1 is the schema name of my HDI container; replace it with yours.
    private String createJobs(Connection conn) throws SQLException {
        String insertTableSQL1 = "INSERT INTO DATABASE_1" + "." + "\"JobPortal.DB::cdsArtifact.Job\""
                + "(JOB_ID, DESCRIPTION, DEPARTMENT, OWNER_NAME) VALUES"
                + "(?,?,?,?)";
        StringBuilder builder1 = new StringBuilder();
        try {
            PreparedStatement preparedStatement = conn.prepareStatement(insertTableSQL1);
            preparedStatement.setString(1, "JB0987");
            preparedStatement.setString(2, "SAP architectRole");
            preparedStatement.setString(3, "SAP JDBC HANA");
            preparedStatement.setString(4, "ShobhitPathak");
            preparedStatement.executeUpdate();

            builder1.append("\n\nRecords Inserted:\n");
            System.out.println("Records Inserted");
            return builder1.toString();
        } catch (Exception e) {
            System.out.println(e.getMessage());
            builder1.append(e.getMessage());
            return builder1.toString();
        }
    }

    // Reads the HANA connection details from the bound "hanatrial" service in VCAP_SERVICES.
    private Connection getConnection() {
        Connection conn = null;
        String DB_USERNAME = "";
        String DB_PASSWORD = "";
        String DB_HOST = "";
        String DB_PORT = "";

        try {
            JSONObject obj = new JSONObject(System.getenv("VCAP_SERVICES"));
            JSONArray arr = obj.getJSONArray("hanatrial");
            DB_USERNAME = arr.getJSONObject(0).getJSONObject("credentials").getString("user");
            DB_PASSWORD = arr.getJSONObject(0).getJSONObject("credentials").getString("password");
            DB_HOST = arr.getJSONObject(0).getJSONObject("credentials").getString("host").split(",")[0];
            DB_PORT = arr.getJSONObject(0).getJSONObject("credentials").getString("port");
            String DB_READ_CONNECTION_URL = "jdbc:sap://" + DB_HOST + ":" + DB_PORT;

            conn = DriverManager.getConnection(DB_READ_CONNECTION_URL, DB_USERNAME, DB_PASSWORD);
        } catch (Exception e) {
            System.out.println("Connection Error: " + e.getMessage());
        }

        return conn;
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(HelloController.class, args);
    }
}
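A note on dependencies: for this controller to compile and connect, the Java module's pom.xml needs the org.json library and the SAP HANA JDBC driver (ngdbc) on the classpath. Here is a sketch of the two entries; the version numbers are assumptions, so pick current ones from Maven Central:

<!-- JSON parsing of VCAP_SERVICES -->
<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20180130</version>
</dependency>
<!-- SAP HANA JDBC driver (provides the jdbc:sap:// URL scheme) -->
<dependency>
    <groupId>com.sap.cloud.db.jdbc</groupId>
    <artifactId>ngdbc</artifactId>
    <version>2.3.48</version>
</dependency>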

Call this method after the connection is established. In the listing above, the call sits in the home() method, right after the current schema is printed: data = createJobs(conn);



After this, build the Java module.

After you successfully build it, run the module as a Java application.

Click on the generated URL. You will be able to see your HANA database user and schema name, along with the status of the JDBC connectivity.

Along with this you will see the message "Records Inserted".
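Based on what home() appends to the response, the plain-text output should look roughly like this (the user and schema lines are placeholders for the values from your HDI container credentials):

Hello World !!

JDBC connection available: yes

Current Hana DB user:
<HDI_container_user>

Current Hana schema:
<HDI_container_schema>

Records Inserted: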

 

 





To check the inserted record in the entity, right-click and open the HDI container.

Click on Tables >> Job table.

Double-click and select Open Data.

You should see your record present.



 

So now we have built all three modules as part of a Multi-Target Application and deployed them on SAP Cloud Platform Cloud Foundry.

I hope this information is helpful.

Comments, doubts, and suggestions are always welcome.

 

 

 

 

 

 