
Purpose


Today the goal is to deploy our CDS model to SQLite, to PostgreSQL running in Docker, and finally to PostgreSQL hosted on AWS RDS. Deploying to AWS is optional and doesn't change our development pattern; however, a SaaS solution can't realistically support users or tenants until it's paired with a cloud PostgreSQL instance.

Let's get started. If you haven't completed Part 1, run through that blog post and then meet me back here. You can always reference the mt-tacoshop repository after the PostgreSQL deployment here.

Set-up


The current version of CDS (^5.7.x) combined with our project structure runs more smoothly when package.json sits in the root of the project. Move the srv/package.json and srv/package-lock.json files to the root of the project.
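
If the files are tracked in git, one way to move them is the one-liner below (a plain mv works just as well):
git mv srv/package.json srv/package-lock.json .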

Update the start script to reflect the directory changes. Let's also add a script to deploy to SQLite and tell CDS to reference our SQLite db.
"scripts": {
  "start": "node ./srv/server.js",
  "deploy:sqlite": "cds deploy -2 sqlite::memory:"
},
"engines": {
  "node": "^12 || ^14"
},
"cds": {
  "requires": {
    "db": {
      "kind": "sqlite",
      "credentials": {
        "database": ":memory:"
      }
    }
  },
  "features": {
    "in_memory_db": true
  }
}

 

We'll need to add @sap/cds to our project and reference it in our server.js
npm add @sap/cds

npm add sqlite3 --save-dev 

 

Update srv/server.js to serve our CDS definitions and temporarily comment out our xsuaa service.
const cds = require('@sap/cds');

....

xsenv.loadEnv();

....

/*
passport.use('JWT', new xssec.JWTStrategy(services.uaa));
app.use(passport.initialize());
app.use(passport.authenticate('JWT', {
  session: false
}));
*/

cds.connect();
cds.serve().in(app);

 

Create a db/tenant-model.cds file with whatever entity your SaaS model sells. The cuid aspect from @sap/cds/common gives the entity a GUID key that is assigned automatically on creation.
namespace mt.tacoshop;
using {cuid} from '@sap/cds/common';

type tCurrency : String(3);

aspect aValue {
  value    : Decimal(10,3);
  currency : tCurrency;
};

entity Taco : cuid, aValue {
  type : String(50);
};

 

To auto-load sample data on deployment, create a db/data/mt.tacoshop.Taco.csv file and add a few lines of data.
ID,type,value,currency
fe23a844-7dc1-11ec-90d6-0242ac120003,Free Sample,0.00,USD
fe23ab28-7dc1-11ec-90d6-0242ac120003,Homemade,1.50,USD

 

Create a srv/tenant-model.cds file. As long as the file name matches our db/<filename>.cds, CDS will automatically pick it up and expose it as an OData V4 service.
using mt.tacoshop as mt from '../db/tenant-model';

service CatalogService @(path : '/Tenant/Catalog') {
  entity Tacos as projection on mt.Taco;
}

 

This is a good place to commit your work in case you need a restore point. At this point, we can build and run CDS.
npm install
npm run deploy:sqlite
npm run start

 

You should see a VError: No service matches uaa.

Remember our mta.yaml? We defined a few resources as org.cloudfoundry.managed-service.

Those are the microservices that link our applications running on BTP. They were created automatically on deployment to our trial account, but they don't exist on your local machine. We'll want to copy our VCAP_SERVICES from BTP to our local development environment.


No UAA Service


 

Back in the BTP trial account, select your srv application and view environment variables. You'll want to copy the entire "VCAP_SERVICES" Object.


environment variables


Back in the root of your project, create a default-env.json file and paste in your environment variables. The structure should look something like the example below. Note that you won't be able to view this file in our sample repo, as it contains sensitive data.
{
  "VCAP_SERVICES": {
    "saas-registry": [
      {
        "label": "saas-registry",
        "provider": null,
        "plan": "application",
        "name": "mt-tacoshop-registry",
        "tags": ["SaaS"],
        "credentials": {
          ...
        }
      }
    ],
    "xsuaa": [
      {
        "label": "xsuaa",
        "provider": null,
        "plan": "application",
        "name": "mt-tacoshop-uaa",
        "tags": ["xsuaa"],
        "credentials": {
          ...
        },
        "syslog_drain_url": null,
        "volume_mounts": []
      }
    ],
    "destination": [
      {
        "label": "destination",
        "provider": null,
        "plan": "lite",
        "name": "mt-tacoshop-dest",
        "tags": ["destination", "conn", "connsvc"],
        "credentials": {
          ...
        },
        "syslog_drain_url": null,
        "volume_mounts": []
      }
    ]
  }
}
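
Since default-env.json now holds live credentials, it's worth keeping it out of version control. Assuming you use a .gitignore file, something like this does the trick:
echo "default-env.json" >> .gitignore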

 

You should now be able to build and run the project locally, using SQLite as a temporary DB. Let's shut down the CDS server and uncomment our xsuaa logic.


sqlite sample data
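
For reference, re-enabling the xsuaa logic in srv/server.js just means removing the /* */ markers around the block we commented out earlier:
passport.use('JWT', new xssec.JWTStrategy(services.uaa));
app.use(passport.initialize());
app.use(passport.authenticate('JWT', {
  session: false
}));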


 

We've changed the project structure to make it easier to run locally, but in turn we may have broken the deployment process. As a sanity check whenever adjusting project structure, I like to re-deploy and make sure our subscribe and unsubscribe process is still working. It's a lot easier to fix things now than next week when you've added 24 features and two new modules.

 

From the project root, build the CDS deployment. Note that you'll want to run cds build before all future BTP deployments.
cds build

 

You'll notice a new gen directory in your project. We'll want to use the gen directory for BTP deployments, so update our mta.yaml to use gen/srv during deployment. This is also a good time to enable enable-parallel-deployments and keep-existing-routes.
ID: mt-tacoshop
_schema-version: "3.1"
version: 0.0.4
parameters:
  enable-parallel-deployments: true
  keep-existing-routes: true
modules:
  - name: mt-tacoshop-srv
    type: nodejs
    path: gen/srv
    parameters:
      memory: 256M
      disk-quota: 512M
    provides:
      - name: srv_api
        properties:
          url: ${default-url}
    requires:
      - name: mt-tacoshop-uaa
      - name: mt-tacoshop-registry
      - name: mt-tacoshop-dest

 

We'll now want to rebuild our MTA and deploy. If after re-deployment you confirm that subscribe and unsubscribe are working, you're in great shape!
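
Assuming you're building with the Cloud MTA Build Tool and the cf CLI as in Part 1, the rebuild and deploy looks something like this (the archive name follows the ID and version from our mta.yaml):
mbt build
cf deploy mta_archives/mt-tacoshop_0.0.4.mtar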

 

During subscribe/unsubscribe you may have lost the tenant route. Rebind it from the trial account if needed.


rebind route to tenant


 

Grant administrator rights if you did not complete this in Part 1.


role-collections


 

At this point, you might be saying: this has been a lot of work and we haven't even touched PostgreSQL. Time to get our hands dirty!

 

PostgreSQL Docker



Create `docker-compose.yml` in the project root:




# Use postgres/postgres user/password credentials
version: '3.1'

services:

  db:
    image: postgres:alpine
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    # volumes:
    #   - ./db/data:/tmp/data
    #   - ./db/init:/docker-entrypoint-initdb.d/
    ports:
      - '5432:5432'

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080




Create `Dockerfile` in the project root:



FROM node:14-buster-slim

RUN apt-get update && apt-get upgrade -y && nodejs -v && npm -v
# causes an error with node:14-buster-slim
# RUN apt-get --no-install-recommends -y install openjdk-11-jre
WORKDIR /usr/src/app
COPY gen/srv/package.json .
COPY package-lock.json .
RUN npm ci
COPY gen/srv .
COPY app app/
RUN find app -name '*.cds' | xargs rm -f

EXPOSE 4004
# Not needed with node:14-buster-slim
#RUN groupadd --gid 1000 node \
# && useradd --uid 1000 --gid node --shell /bin/bash --create-home node \
# && mkdir tmp \
# && chown node -R tmp
USER node
CMD [ "npm", "start" ]
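
The Dockerfile copies from gen/srv, so cds build needs to have run first. If you want to sanity-check that the image builds locally, something like this works (the image tag here is just an example):
docker build -t mt-tacoshop-srv .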

 

And, importantly, tell our local environment to look for a database on localhost port 5432 by adding a user-provided service entry to the VCAP_SERVICES in our default-env.json.
"user-provided": [
  {
    "label": "user-provided",
    "name": "pg-bookshop-database",
    "tags": ["relational", "database", "plain"],
    "instance_name": "mt-tacoshop-db",
    "credentials": {
      "host": "localhost",
      "port": "5432",
      "database": "tacoshop",
      "user": "postgres",
      "password": "postgres",
      "schema": "public"
    }
  }
],

 

Update our package.json to look for a PostgreSQL database instead of SQLite.
"cds": {
  "requires": {
    "db": {
      "kind": "database"
    },
    "database": {
      "dialect": "plain",
      "impl": "cds-pg",
      "model": [
        "srv"
      ]
    }
  },
  "migrations": {
    "db": {
      "multitenant": true,
      "schema": {
        "default": "public",
        "clone": "_cdsdbm_clone",
        "reference": "_cdsdbm_ref",
        "tenants": ["c21a0f01a094445a97616901eda6b86a"]
      },
      "deploy": {
        "tmpFile": "tmp/_autodeploy.json",
        "undeployFile": "db/undeploy.json"
      }
    }
  }
}

 

I am setting the tenants field to my street-taco vendor's tenant ID and removing all hyphens, since they cause issues in PostgreSQL schema names.


street-food tenant


 

Back in our terminal, add cds-pg and cds-dbm:
npm add cds-pg
npm add cds-dbm --save-dev

 

Until our multitenant updates are included in a cds-dbm release, we'll need to point the cds-dbm dependency at my GitHub branch.

"devDependencies": {
  "cds-dbm": "github:austinkloske22/cds-dbm#main"
}




npm install






Manually build `cds-dbm` for deployment. In a new terminal, navigate to your project root and build. For some reason, this only worked for me when I launched a new native terminal.



cd node_modules/cds-dbm
npm install
npm run build
sudo chmod 755 ./dist/cli.js


 

Let's add the following scripts to our package.json
"docker:start:pg": "docker-compose -f docker-compose.yml up -d",
"deploy:pg": "node_modules/cds-dbm/dist/cli.js deploy --create-db",
"deploy:pg:load": "node_modules/cds-dbm/dist/cli.js deploy --load-via delta"

 

Make sure that Docker is running, then start the database container and deploy.
npm run docker:start:pg
npm run deploy:pg
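
If you also want the sample CSV data from db/data loaded into PostgreSQL, the deploy:pg:load script we defined above should take care of it (it uses cds-dbm's delta load):
npm run deploy:pg:load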

 

 


 

 

Log in to Adminer at localhost:8080 with the credentials from your user-provided service.


 

 


tacoshop-db


 

At this point, we've deployed to both the public schema and our tenant ID schema. A multitenant SaaS solution is starting to take shape.
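
If you prefer the command line over Adminer, one quick way to confirm both schemas exist, assuming the compose service is named db and the database was created as tacoshop as above:
docker-compose exec db psql -U postgres -d tacoshop -c "\dn"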

 

PostgreSQL Cloud (Optional)


After BTP deployment we won't be able to point our application to a Docker container running on our local machine. So if you plan on running this productively, you'll need to create a user-provided service that enables access to your cloud PostgreSQL instance. In theory, you should be able to connect to any remote PostgreSQL instance that is reachable via a host URL and password. We won't cover the AWS setup in this blog post, but you can google around for the required steps.

 
cf create-user-provided-service mt-tacoshop-db -t "relational, database, plain" -p '{
  "database": "tacoshop",
  "host": "postgresaws1.********.**-****-1.rds.amazonaws.com",
  "user": "postgres",
  "password": "********",
  "port": 5432,
  "schema": "public" }'

 


 

Telling our application to connect to the cloud instance of PostgreSQL is as simple as swapping the host and password information in default-env.json.

 

We'll also want to include the new service in our mta.yaml and require it inside of mt-tacoshop-srv for BTP deployments.
  - name: mt-tacoshop-srv
    type: nodejs
    path: gen/srv
    parameters:
      memory: 256M
      disk-quota: 512M
    provides:
      - name: srv_api
        properties:
          url: ${default-url}
    requires:
      - name: mt-tacoshop-uaa
      - name: mt-tacoshop-registry
      - name: mt-tacoshop-dest
      - name: mt-tacoshop-db

resources:
  - name: mt-tacoshop-db
    type: org.cloudfoundry.existing-service

 

 

With an updated default-env.json, we can re-deploy the CDS model to the cloud, load data, and even log into the database with your favorite database administration tool. I like using pgAdmin.
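
With the cloud credentials in place in default-env.json, the same scripts from earlier should work against the AWS instance (the --create-db variant assumes the postgres user is allowed to create the tacoshop database; otherwise create it up front):
npm run deploy:pg
npm run deploy:pg:load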


pg-admin


 

This is another great time to commit our updates and make sure that deployments and subscribe/unsubscribe are still working. Try subscribing to our SaaS solution from our second tenant and add that tenant's ID to our deployment configuration, as shown below.
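
For example, after subscribing from the second tenant, the tenants list in our migrations config would grow to something like the snippet below, where the second entry is a placeholder for your own subaccount's tenant ID with the hyphens removed:
"tenants": [
  "c21a0f01a094445a97616901eda6b86a",
  "<second-tenant-id-without-hyphens>"
]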

 

To view our OData service from either tenant, we'll need to let the approuter know about its route.

Add a Tenant route to app/xs-app.json:

{
  "welcomeFile": "index.html",
  "authenticationMethod": "route",
  "routes": [
    {
      "source": "^/srv/(.*)$",
      "authenticationType": "xsuaa",
      "scope": "$XSAPPNAME.User",
      "destination": "srv"
    },
    {
      "source": "^/Tenant/(.*)$",
      "authenticationType": "xsuaa",
      "scope": "$XSAPPNAME.User",
      "destination": "srv"
    },
    {
      "source": "^/(.*)",
      "authenticationType": "none",
      "localDir": "resources"
    }
  ]
}

 

Add a link in our index.html so we can access it.
<!DOCTYPE HTML>
<html>

<head>
  <meta charset="utf-8">
  <title>Multitenant Tacoshop</title>
</head>

<body>
  <h2>Multitenant Tacoshop SaaS</h2>
  <a href="/srv/info">Info</a><br>
  <a href="/srv/subscriptions">Subscriptions</a><br>
  <a href="/srv/destinations?destination=mydest">Destinations</a><br>
  <a href="/Tenant/Catalog/Tacos">Tacos</a><br>
</body>
</html>

 

 


tacos-and-tequilla odata


 

Great work! One 'gotcha' is that our approuter is still directing requests from both tenants to the public schema. In part 3 of our series, we'll cover how to get the approuter running locally and ensure that each tenant only accesses data from its own schema.