
Introduction


In this blog post, our main focus is on harnessing the potential of Amazon Aurora Global Database within the CAP framework. As a reference point, I recommend checking out my previous blog post, which covered the steps for preparing the development environment and creating a new project.

In certain scenarios, an additional relational database may be needed alongside SAP HANA Cloud. This is where Amazon Aurora comes in: a comprehensive, feature-rich solution with which businesses can develop and operate globally distributed applications, leveraging its strong performance, reliability, and scalability. By incorporating Amazon Aurora alongside SAP HANA Cloud, organizations gain a powerful and versatile database environment that can meet diverse application requirements.

Prerequisites


The following preparations are required to follow the steps and deploy the project by yourself:

– SAP BTP PAYGO or CPEA agreement
– SAP BTP subaccount
– Entitlements for SAP BTP, Cloud Foundry runtime
– AWS cloud platform subscription

Using Amazon Aurora global database


The initial stage of this process involves creating an Amazon Aurora Global Database cluster, using the PostgreSQL-compatible edition for our use case. Unlike the resources defined in mta.yaml, this step cannot be automated through the deployment descriptor; it requires either manual setup or automation tools such as AWS CloudFormation or Terraform. For detailed instructions, refer to Setup Amazon Aurora Global Database Cluster.

Ensure that you have added AWS security group rules for all the NAT IPs associated with your SAP BTP region, so that applications running on Cloud Foundry can reach the database.
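
If you prefer scripting this over the AWS console, the following AWS CLI sketch adds one such rule. The security group ID and IP address are placeholders; repeat the command for each NAT IP of your SAP BTP region.

    # Allow inbound PostgreSQL traffic (port 5432) from a single NAT IP
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 5432 \
      --cidr 203.0.113.10/32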

Let's suppose that you have followed the steps outlined in my previous blog post and successfully created a new project. With the development environment prepared and the new project in motion, we can now move forward to explore the possibility of integrating Amazon Aurora into the CAP framework.

Now that we have the database and base project ready, we can proceed with the deployment of the database schema. Let's begin this phase of the process.

  1. Execute the following command to create a user-provided service, utilizing the configuration details from Amazon Aurora. The backslash-escaped quotes are required on Windows shells; on Linux or macOS, you can pass the plain JSON inside the single quotes without the backslashes.
    cf create-user-provided-service <service name> -p '{\"dbname\": \"<dbname>\",\"hostname\": \"<db host>\",\"password\": \"<password>\",\"port\": \"<db port>\",\"schema\": \"public\",\"username\": \"<db user>\"}'

    Example:
    cf create-user-provided-service sample-db -p '{\"dbname\": \"postgres\",\"hostname\": \"database-1-instance-1.cs1lgzxhfizd.eu-north-1.rds.amazonaws.com\",\"password\": \"<password>\",\"port\": \"5432\",\"schema\": \"public\",\"username\": \"postgres\"}'
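
    To confirm that the service was created, you can display it with the standard cf CLI; "sample-db" is the example name used above.
    cf service sample-db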


  2. The @cap-js/postgres package uses the cds-plugin mechanism to automatically configure your application to use a PostgreSQL database in production. To add the necessary database packages, execute the following commands inside your project's root directory.
    npm add @cap-js/postgres
    npm add @sap/cds-dk@7


  3. Enhance the package.json to incorporate database details, custom scripts, and build tasks as follows.
    "scripts": {
      "deploy": "cds-deploy"
    },
    "cds": {
      "build": {
        "target": ".",
        "tasks": [
          { "for": "nodejs", "src": "db", "options": { "model": ["db", "srv"] } },
          { "for": "java", "src": "srv", "options": { "model": ["db", "srv"] } }
        ]
      },
      "requires": {
        "db": {
          "kind": "postgres",
          "impl": "@cap-js/postgres",
          "pool": {
            "acquireTimeoutMillis": 3000
          },
          "vcap": {
            "label": "user-provided"
          }
        }
      }
    }

    In the updated package.json, we have introduced several modifications. Let's examine them individually:

    • scripts.deploy: The hyphen in "cds-deploy" is essential: the hyphenated binary ships with @sap/cds itself, so deployment does not require @sap/cds-dk at runtime. If you want to use @sap/cds-dk in Cloud Foundry for other reasons, consider incorporating the apt-buildpack in your deployment module.

    • cds.build: There are two build tasks to facilitate a Cloud Foundry deployment. One task is for Node.js, and the other is for Java. This approach empowers us to handle database schema deployment using Node.js while executing the application through Spring Boot.

    • requires.db.pool.acquireTimeoutMillis: This parameter determines the duration allowed for waiting until an existing connection is retrieved from the pool or a new connection is established. By default, this value is set to 1000 milliseconds. If the database connection is taking longer than expected, you can increase this parameter to allow for a longer waiting time.

    • requires.db.vcap.label: A service bound to your application with the label "postgresql-db" is automatically chosen as the default database. Since we are using a user-provided service instead, set this value to "user-provided" so the plugin resolves our service's credentials.
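
    To double-check that CAP picks up this configuration, you can print the effective settings with the cds CLI. This is an optional sanity check and assumes @sap/cds-dk is available, as installed above.
    # Prints the resolved database configuration from package.json
    cds env get requires.db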



  4. Now that package.json is enhanced, we can manually initiate the build by executing the cds build command, which generates files and folders ready for deployment. However, executing this step right now is not mandatory, as it happens automatically during the MTA build stage. The next step is the final preparation: creating the mta.yaml file for deployment.

  5. Use the following command in the project's root folder to generate the mta.yaml file with the module and resource definitions.
    cds add mta


  6. The mta.yaml file generated in the previous step needs some adjustments before it can be deployed.

    • To leverage the user-provided service created in the previous step, integrate the following resources definition into the mta.yaml file.
      resources:
        - name: sample-db
          type: org.cloudfoundry.existing-service


    • To allow the server module to utilize the user-provided service, add a "requires" entry with the service name.
      provides:
        - name: srv-api # required by consumers of CAP services (e.g. approuter)
          properties:
            srv-url: ${default-url}
      requires:
        - name: sample-db


    • To facilitate the deployment of the database schema, including tables and views to the Amazon Aurora database, we must define the following deployer module.
        - name: pg-db-deployer
      type: nodejs
      path: .
      parameters:
      buildpack: nodejs_buildpack
      stack: cflinuxfs4
      no-route: true
      no-start: true
      disk-quota: 2GB
      memory: 512MB
      tasks:
      - name: deploy
      command: npm run deploy
      disk-quota: 2GB
      memory: 512MB
      build-parameters:
      builder: npm-ci
      before-all:
      custom:
      - npm install --production
      - npx cds build --production​
      ignore: ["node_modules/", "mta_archives/","tmp/","srv/target/"]
      requires:
      - name: sample-db




  7. At this point, we have the option to build and deploy only the pg-db-deployer module. However, we will go further by configuring the Spring Boot connection details to deploy both modules together.

  8. To integrate the PostgreSQL dependency, add the following code snippet to the srv/pom.xml file:
    <dependency>
      <groupId>org.postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <version>42.3.3</version>
    </dependency>
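
    To verify that the driver is now part of the build, you can print Maven's dependency tree from the srv folder. This is an optional check.
    # Lists the resolved PostgreSQL JDBC driver, if present
    mvn dependency:tree -Dincludes=org.postgresql:postgresql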


  9. Incorporate the cds-dk version in the srv/pom.xml file, ensuring that it matches the version specified in package.json. For instance, add <version>7.0.3</version> under the configuration element of the install-cdsdk execution:
    <execution>
      <id>cds.install-cdsdk</id>
      <goals>
        <goal>install-cdsdk</goal>
      </goals>
      <configuration>
        <skip>${cdsdk-global}</skip>
        <version>7.0.3</version>
      </configuration>
    </execution>


  10. By incorporating the following database connection details into the application.yaml file, your SAP CAP application will establish a connection with the Amazon Aurora database using the provided credentials.
    ---
    spring:
      config.activate.on-profile: cloud
      datasource:
        driver-class-name: org.postgresql.Driver
        url: jdbc:postgresql://${vcap.services.sample-db.credentials.hostname}:${vcap.services.sample-db.credentials.port}/${vcap.services.sample-db.credentials.dbname}
        username: ${vcap.services.sample-db.credentials.username}
        password: ${vcap.services.sample-db.credentials.password}
        initialization-mode: never
        hikari:
          maximum-pool-size: 10

    Note: Replace "sample-db" with the actual name of your user-provided service as specified in the mta.yaml file.


All configurations have been completed, and we are now fully prepared for the deployment phase.

Deploy to BTP Cloud Foundry Runtime


To deploy your application to the SAP Business Technology Platform (BTP) Cloud Foundry Runtime, follow these steps:

  1. To generate a single MTA archive (.mtar), execute the following command in the project root:
    mbt build

    This command packages all components of the Multi-Target Application (MTA) into a single .mtar archive, which can then be used for deployment or distribution.

  2. Congratulations! You have reached the final stage. Now you can deploy the previously generated archive to Cloud Foundry by executing the following command:
    cf deploy .\mta_archives\sample_1.0.0-SNAPSHOT.mtar


  3. After the successful deployment, the application is ready for testing. Obtain the application URL from the BTP cockpit, or alternatively execute the cf app sample-srv command to retrieve it.


    (Screenshot: Application home screen)


    Click on the Books entity, enter "system" as the username, leave the password empty, and click Sign in. This will display the Books entity sample data. You can also test the OData endpoint from the command line, as shown below.
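
    A quick smoke test without the browser, sketched with curl: the service path depends on your CDS service definition (here a hypothetical CatalogService exposing Books; adjust it to your model), and "system" is the mock user with an empty password.
    # Fetch the Books entity set via basic auth
    curl -u 'system:' 'https://<app-url>/odata/v4/CatalogService/Books'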


Schema validation


To connect directly to an Amazon Aurora instance from your local environment, follow these steps:

  1. Access the Amazon RDS Console and include your IP address in the security group rules. This action will enable secure access to your Amazon Aurora database from the specified IP address. Remember that the access is granted only from your current IP, so in case your IP changes, you should update the security group rules accordingly.

  2. Download and install the community edition of DBeaver and create a PostgreSQL connection to Amazon Aurora with the following settings:
    HOST: Amazon Aurora host
    PORT: Amazon Aurora port
    DATABASE: "dbname" value from the VCAP_SERVICES environment variable
    USERNAME: "username" value from the VCAP_SERVICES environment variable
    PASSWORD: "password" value from the VCAP_SERVICES environment variable


    (Screenshot: DBeaver connection settings)



  3. After configuring the connection settings in DBeaver, click the "OK" button to save the connection. Once the connection is established, you can explore the "public" schema of the Amazon Aurora database, which contains the tables and views generated from your CDS model. If you prefer the command line, you can run the same check with psql, as shown below.
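
    A minimal psql sketch, assuming the PostgreSQL client is installed locally and your IP is allowed by the security group; replace the placeholders with the values from VCAP_SERVICES (psql will prompt for the password):
    # List all tables in the "public" schema
    psql "host=<db host> port=<db port> dbname=<dbname> user=<db user>" -c '\dt public.*'
    # Inspect the sample data; CAP replaces dots in the namespace with underscores,
    # so the entity my.bookshop.Books ends up in the table my_bookshop_books
    psql "host=<db host> port=<db port> dbname=<dbname> user=<db user>" -c 'SELECT * FROM my_bookshop_books LIMIT 5;'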


     


Delta deployment



  1. Make changes to the db/data-model.cds file by adding a description element:
    namespace my.bookshop;

    entity Books {
      key ID : Integer;
      title : String;
      description : String;
      stock : Integer;
    }


  2. By executing these commands, you can build and deploy only the deployer module, ensuring that the changes made to the data model are reflected in the Amazon Aurora database. Note that the module name must match the one defined in mta.yaml, pg-db-deployer in our case.
    mbt build
    cf deploy .\mta_archives\sample_1.0.0-SNAPSHOT.mtar -m pg-db-deployer


  3. After deploying the updated data model and sample data, validate the changes using DBeaver, or from the command line as shown below.
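
    A command-line alternative, under the same assumptions as in the schema validation section, is to describe the table and confirm that the new "description" column exists:
    # Shows the columns of the Books table, including the added "description"
    psql "host=<db host> port=<db port> dbname=<dbname> user=<db user>" -c '\d my_bookshop_books'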


Schema Deployment from local host


In the development phase, we anticipate the need for multiple schema deployments, and running a full MTA build for each iteration introduces noticeable delays. In light of this, let's explore an alternative approach: deploying directly from your localhost.

  1. Let's generate a "default-env.json" file in the project root, utilizing the application's VCAP_SERVICES environment variable (see the tip after this list on how to retrieve it).
    {
    "VCAP_SERVICES": {
    [...]
    }
    }


  2. Let's proceed with the deployment by executing the following commands.
    cds build --production
    cds deploy

    That's it! Now, the changes have been deployed directly from your localhost, eliminating the need for mbt build.
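
Tip: The VCAP_SERVICES content can be copied from the environment of the deployed application. Assuming the server module is named sample-srv, the standard cf CLI prints it:

# Shows the System-Provided environment, including VCAP_SERVICES
cf env sample-srv

Copy the VCAP_SERVICES JSON object from the output into default-env.json.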


Related Blog Posts


Run and Deploy SAP CAP (Node.js or Java) with PostgreSQL on SAP BTP Cloud Foundry | SAP Blogs

Architecting solutions on SAP BTP for High Availability | SAP Blogs

Conclusion


I trust this gives you a brief insight into the process of running a CAP Java application on BTP while leveraging Amazon Aurora as the database. Moreover, this approach can be expanded to incorporate various data sources, such as PostgreSQL Service, Azure Cosmos DB for PostgreSQL, Google Cloud AlloyDB, and others. The possibilities for integrating different data sources are vast, offering flexibility and scalability to meet diverse application requirements.

We highly appreciate your feedback and welcome any comments or questions you may have.