Hey folks! So, I've recently been immersed in the world of ‘Building Resilient Applications using SAP Cloud Application Programming Model (SAP CAP)’. Yes, I initially had no clue what Resilient Apps were when I first stumbled upon the term. But fear not, we're about to embark on a journey of enlightenment!
Oh and wait! I almost forgot to mention this extra exciting news! I also got the chance to present this topic at SAP Inside Track Bengaluru 2024, along with my colleague @Ajit_K_Panda, and the experience was nothing short of amazing! The keynote by Sindhu Gangadharan, the expert sessions, the energy of the audience… oof. Absolutely electric!!
Alright, coming back to the point, you may be thinking: what exactly is a Resilient Application? 👇
By definition, “Resilience of a cloud Application is its ability to maintain its functionality, availability, and performance even when facing failures, disruptions, or unexpected events.”
That's a pretty straightforward explanation. But hey, if we want to simplify it further, let's paint a picture with a little scenario.
Imagine you've created a Business Application for an SAP client using CAP. This snazzy application also taps into modules from the SAP S/4 HANA On-Premise system.
Now, picture this: one fine day, out of the blue, hordes of users swarm onto your application (seems like your client's marketing game is on point), and lo and behold, your application decides to throw a tantrum and crashes.
There could be various other instances where hiccups could affect your application’s functionality, availability, and performance.
Now, that's not something we can afford, right? We need our application to be as sturdy as a rhinoceros on roller skates!
Enter Resilient Apps.
Now that we're all cozy with the concept of Resilient Applications, let's dive into a scenario I've been tinkering with and how it transforms an ordinary application into a Resilient one. (Keep in mind, any CAP application can achieve Resilience with a few nifty tweaks).
Involve a third-party corporation in the background validation of newly added or modified Business Partners, without giving them direct access to the digital core. The third-party corporation can review the Business Partner details in an extension application and update/validate the verification details in real time. After the review, the Business Partner data is updated in the S/4HANA system with the validation results.
That right there is the official spiel of the Business Scenario.
But let's break it down, shall we? (Because let's face it, I needed it broken down the first time I encountered it).
So, what we're aiming for is to let third-party corporations peek at newly added or modified Business Partners (straight from the S/4 HANA on-premise system). However, giving them direct access would be akin to leaving the keys to your digital kingdom lying around.
Instead, we're setting up a system where these third-party folks can edit and validate Business Partner details in real time through an extension application. Once done, the Business Partner data gets a makeover in the SAP S/4HANA on-premise system, and the validation results are served up nicely in the UI for our third-party pals.
The solution architecture for our Resilient Application is designed for seamless integration, robust functionality, and secure data management. At its core sits a Node.js application built with the Cloud Application Programming (CAP) model, complemented by the SAP Cloud SDK. The application is secured via XSUAA (Authorization and Trust Management) and Identity Authentication (IAS), with access provided through the Launchpad, facilitated by the HTML5 App Repository for intuitive user interaction.
Real-time communication and event handling between our application and the S/4HANA system is handled by the Event Mesh service, which supports the AMQP and MQTT protocols. Within the S/4HANA system, custom events for Business Partner creation and modification keep the data seamlessly in sync.
Key to our architecture is the API_BUSINESS_PARTNER_SRV service from the API Hub in the S/4HANA system, which gives us standardized, efficient data access. In addition, the Cloud Connector establishes a secure tunnel between our cloud landscape (where the HANA Cloud database lives) and the on-premise S/4HANA system, for efficient data retrieval over a secure channel.
Furthermore, to fortify the application against errors, failures, and disruptions, we use a suite of services on the Business Technology Platform (BTP). These include, but are not limited to, Alert Notification, Autoscaler, and application logging, giving us proactive error management and system optimization.
In summary, the architecture is built to uphold the principles of resilience, security, and efficiency, ensuring a seamless user experience and robust application performance.
We will now talk about the services we are using and exactly why. We won't dive deep into each service, as extensive information is already available on SAP Help, but I will discuss why we are using it, so that you can see which use cases each service fits.
We need to fetch data from S/4HANA and store it in our HANA Cloud database, so that the data is also available to the third-party vendors without giving them access to the digital core. That is where the Cloud Connector comes in: it connects cloud applications to on-premise systems, which is exactly what our scenario needs.
It can also connect on-premise systems to cloud apps, which you can explore on your own, but that is out of scope for this scenario, and hence this post.
Apart from that, as mentioned above, it creates a secure link, does not expose the entire internal landscape, and ships with a lot of the functionality you would want in a Resilient App, like auto-recovery of broken connections.
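Since we're on the subject of auto-recovery: here's a minimal plain-TypeScript sketch of the retry-with-backoff idea that resilient apps apply to flaky connections. This is not Cloud Connector code; the helper name, attempt count, and delays are all my own illustration.

```typescript
// Illustrative resilience helper: retry a flaky async call with
// exponential backoff before giving up.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // back off: 100ms, 200ms, 400ms, ... before the next try
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

In the app, you could wrap an outbound S/4 call in this, e.g. `withRetry(() => bupa.run(query))`, so a transient connection drop doesn't immediately surface as a user-facing error.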
Find out more about Cloud Connector
We are using Event Mesh to send events for the creation or update of Business Partners from S/4HANA to our CAP application, so that we can react based on the type of event.
You can read more about it here: Event Mesh
We are using CI/CD to run our own pipeline: when multiple developers work on the project and push changes to GitHub, each push goes through the full testing process defined in the code, and is only deployed after it passes all the checks configured in the CI/CD tool or the source repository. This keeps the code robust and correct. If the build fails, the developer receives a notification, can inspect the logs, and try again.
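To make the "only deploy after all checks pass" idea concrete, here's a toy shell sketch of such a gate. The step commands are placeholders (`true` standing in for things like `npm test` or `cds build`); the real pipeline is configured in the CI/CD service, not hand-rolled like this.

```shell
# Hypothetical sketch of a CI gate: every step must succeed,
# or we never reach the deploy line.
set -e
run_step() {
  echo "running: $1"
  shift
  "$@"                        # a failing step aborts the whole pipeline
}
run_step "install deps" true  # e.g. npm ci
run_step "lint"         true  # e.g. npm run lint
run_step "unit tests"   true  # e.g. npm test
run_step "build"        true  # e.g. cds build
DEPLOY=yes
echo "all checks passed - handing over to deployment"
```

The key property is `set -e`: the first failing step stops the script, so the deploy marker at the bottom is only ever reached on a fully green run.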
You can read more about it here: CI/CD
If you work in development, you will have seen different tenants like Development, QA, Pre-Production, Production, etc. It is a best practice to have such a pipeline after CI/CD, so that the development environment stays clearly separated from the Production environment.
The development environment is constantly in flux and can crash at any time because of some change or other that a developer pushed. Those bugs or crashes must not reach the Production environment, where they could do real damage.
Hence, we use the Cloud Transport Management service to define these environments. CI/CD can connect directly to Cloud Transport Management and deploy only to the Development or QA (in our case) environment.
Once QA confirms that the code works and nothing has broken, the build is promoted to the Production environment.
We are also using CTMS to keep this control over the code, so that we never have to face bugs or crashes in our Production environment.
You can read more about it here: CTMS
We are using the Autoscaler to scale our app and use resources optimally based on the number of users currently on it. This saves money and resources.
You can read more about Autoscaler here: Autoscaler
Real-time updates on the deployment status, and on any crashes or errors in the system, so that any problem can be looked into at once. Sounds good, right? Who wouldn't want that in their application? It would surely make the app more robust and resilient.
And that's exactly why we are using the Alert Notification service!
You can read more about it here: Alert Notification
We are using the Application Logging service because it gives us much-needed information about how our application is running and which factors affect it. It also helps with debugging. On top of that, we can see different kinds of data about the app, like how many calls were made to it, how many users are using it, and so on.
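As a tiny illustration of the kind of data this gives you, here's a plain-TypeScript sketch of a structured log line. The field names are made up; the actual log format comes from the platform and logging libraries, not from hand-built JSON like this.

```typescript
// Illustrative only: emit one JSON log line per event, the kind of
// structured record a Kibana dashboard can filter and aggregate on.
function logLine(
  level: "info" | "warn" | "error",
  msg: string,
  fields: Record<string, unknown> = {}
): string {
  return JSON.stringify({ level, msg, ...fields });
}
```

Because every line is machine-readable JSON, questions like "how many calls failed for this Business Partner?" become simple dashboard filters instead of grep sessions.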
You can read more about it here: Kibana Dashboard
And that’s all about the services guys!
Ah, the intricate dance of code! Let's take a journey through the enchanted forest of TypeScript and Cloud Application Programming. But don't worry, I'll make sure we don't get lost in the weeds!
First things first, here's the link to my GitHub repo so that you can follow along and try it out on your own. I would also like to thank @maxstreifeneder and @maheshpalavalli for helping me out with this. I have built upon Max's code and updated the application with the latest TypeScript and CDS.
When deciding between TypeScript (TS) and JavaScript (JS), we consider TS for its static typing, enhanced IDE support, and improved code readability and maintainability. TypeScript's compatibility with the JS ecosystem, better error detection, and growing community make it a compelling choice for projects seeking robustness and scalability. This helps make the application more robust and hence, resilient.
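A quick, made-up illustration of that "better error detection" point. The interface below is hypothetical, but the principle is exactly what we lean on throughout the app:

```typescript
// With a typed contract, a malformed payload fails at compile time,
// not at 2 a.m. in production.
interface ValidationResult {
  businessPartner: string;
  verified: boolean;
}

function summarize(r: ValidationResult): string {
  return `${r.businessPartner}: ${r.verified ? "verified" : "pending"}`;
}

// summarize({ businessPartner: 123 });
// ^ TS compile error: 123 is not a string, and `verified` is missing.
```

In plain JavaScript, that commented-out call would happily run and produce garbage output somewhere downstream; in TypeScript, it never makes it past the compiler.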
Keeping our dependencies up-to-date is like watering a plant: neglect it, and things start to wither away. So, we're rocking the latest CDS and TypeScript to ensure our code is as fresh as a daisy.
Now, onto the juicy bits! We've imported the API_BUSINESS_PARTNER_SRV API as CDS, which creates the external service schema inside a folder called external within the srv folder. You can use the command below to achieve this feat.
cds import <input_file> --as cds
The schema.cds file in the db folder has been written according to what is required in the app.
Next up, we're generating types for all our CDS files because, let's face it, nobody likes surprises when it comes to types – unless it's a surprise party, of course! Try this command:
npx @cap-js/cds-typer "*" --outputDirectory @cds-models
Now, within srv, we write our code. Most of it looks like any other CAP service; the important part is how the events are wired up. You can view the screenshots below to understand how they are called from the code:
This is for blocking/unblocking business partners from the UI itself.
This is for defining the BusinessPartnerCreated and Changed events so that they can be used later with the same variable.
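The gist of that idea, sketched in plain TypeScript: keep both event names in one place so subscribing and handling always agree on the same strings. The topic values below are placeholders; use whatever your S/4 custom events are actually configured as.

```typescript
// Illustrative only: one shared source of truth for the two S/4
// Business Partner event names used throughout the service.
const BP_EVENTS = {
  created: "BusinessPartner/Created",
  changed: "BusinessPartner/Changed",
} as const;

type BpEvent = (typeof BP_EVENTS)[keyof typeof BP_EVENTS];

// Narrowing guard: lets the rest of the code ignore unrelated topics.
function isBpEvent(topic: string): topic is BpEvent {
  return (Object.values(BP_EVENTS) as string[]).includes(topic);
}
```

With this in place, a typo in an event name is a compile error in exactly one file, rather than a silently dead subscription.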
And speaking of events, handling them can be trickier than juggling flaming torches. But fear not, we've got it covered. Whether it's creating a Business Partner or making changes, we've got code snippets smoother than a jazz solo on a saxophone.
An example of the creation of a Business Partner looks like this:
Here we get the event from the S/4 system and call the handleBusinessPartnerCreated function. The first picture shows how we receive events; the second shows how you can handle them. You could handle things differently in your application, but the way of receiving the event would stay the same.
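In case the screenshots don't come through for you, here's a plain-TypeScript model of that receive/handle split. There's no @sap/cds here; in the real app the receive side is an event registration on the connected service, and the payload field names below are assumptions.

```typescript
// Model of the receive/handle split: the receiver only routes the
// message; all business logic lives in the handler.
interface BpCreatedMessage {
  event: "BusinessPartnerCreated";
  data: { BusinessPartner: string }; // BP number from the S/4 payload
}

// "Handling": what we do once the event arrives, e.g. fetch the full
// BP from S/4 and stage it locally for third-party validation.
function handleBusinessPartnerCreated(bp: string): string {
  return `staged ${bp} for validation`;
}

// "Receiving": the thin routing layer - this part stays the same
// however you choose to handle the event.
function onMessage(msg: BpCreatedMessage): string {
  return handleBusinessPartnerCreated(msg.data.BusinessPartner);
}
```

Keeping the receiver this thin means you can unit-test the handler without any messaging infrastructure at all.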
Based on how we had written code for the block/unblock, we can write the handler for it like this:
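And a sketch of the block/unblock idea itself, again as plain TypeScript with made-up field names; in the app, this logic sits inside a CAP action handler.

```typescript
// Illustrative block/unblock: flip the posting-block flag and record
// who made the change (field names are hypothetical).
interface BusinessPartner {
  id: string;
  postingBlocked: boolean;
  lastChangedBy?: string;
}

function setBlocked(
  bp: BusinessPartner,
  blocked: boolean,
  user: string
): BusinessPartner {
  // return a new object rather than mutating - easier to reason about
  return { ...bp, postingBlocked: blocked, lastChangedBy: user };
}
```

The same function serves both the block and unblock buttons in the UI; only the boolean flips.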
So these were some of the most important parts of the code that I just wanted to share here, because I had faced quite a lot of problems with some of these parts, especially handling events.
But wait, there's more! Don't forget about the invaluable CAPIRE docs – the secret sauce to unlocking coding nirvana. And let's give a round of applause to the SAP Help Documents and the ever-reliable Community, the real heroes of the coding world, who could give you answers to even the silliest of questions without judgement. Like a BOSS!
Last but not least, always remember to follow the sacred texts of the SAP BTP Developers Guide – it's like the Jedi Code for developers, guiding us through the trials and tribulations of app development. So, let's strap on our coding boots, grab our lightsabers (or keyboards), and embark on this epic coding adventure together!
And there you have it, folks! I hope this journey through the world of Resilient Apps has been as enlightening for you as it has been for me. Remember, implementing these practices isn't just about building better applications – it's about making our developer lives easier and more awesome.
By embracing resilience, we're not just safeguarding our applications against the slings and arrows of outrageous fortune – we're future-proofing them for whatever curveballs life throws our way. And let's be honest, who doesn't love a good curveball?
So, let's make resilience our trusty sidekick, our coding companion on this epic adventure through the digital frontier. And remember, these practices aren't just a one-time affair – they're habits we cultivate, like watering a plant or feeding a pet rock (hey, don't judge, they need love too).
With that, it's time for me to bid adieu. Until next time, keep coding, keep innovating, and keep being awesome.
Signing off,
Aryan
Team PAA at SITBLR'24