Technology Blogs by Members
alex_myakinkii

Introduction

My motivation for this blog was to teach myself nestjs and the stuff that comes with it.

And to make it a little bit more interesting for me, rather than following the samples they have, I decided to try to replace the CAP nodejs runtime (based on express) with nestjs (which also uses express under the hood).

As their docs say, "Nest (NestJS) is a framework for building efficient, scalable Node.js server-side applications. It uses progressive JavaScript, is built with and fully supports TypeScript (yet still enables developers to code in pure JavaScript) and combines elements of OOP (Object Oriented Programming), FP (Functional Programming), and FRP (Functional Reactive Programming)." and also that "Nest provides an out-of-the-box application architecture which allows developers and teams to create highly testable, scalable, loosely coupled, and easily maintainable applications. The architecture is heavily inspired by Angular."

I missed that angular part at first and only got back to it after researching rxjs observables in regard to Interceptors, but if you, unlike me, are already familiar with angular, maybe this whole thing will make more sense to you immediately.

So basically, I was aiming to replace ODataAdapter (with its "middleware") + cds.ApplicationService with something from nestjs ecosystem.

Another major topic for me was transaction handling in CAP nodejs, so I was aiming to take a closer look at what happens there.

From the cds perspective I took the sample app as a reference model and kept it almost untouched.

TL;DR

The result of the exercise is this repo running in 4 "modes":

  1. og cap mode (npm run cds) - just basic crud for the catalog and admin service, with a single action implemented to showcase some TX handling
  2. vanilla nodejs mode (npm run node) - replaces ApplicationService with our own vanilla JS class FakeCDSService that does NOT implement auto-txing. Here we still mount full ODataAdapter for both services, but do it ourselves without cds_serve (meaning we instantiate everything ourselves).
  3. nestjs mode (npm run nest) - runs project with nestjs runtime with admin and catalog service having different behaviour:
    1. Admin service has two endpoints: "/odata/v4/admin" served with ODataAdapter + FakeCDSService AND "/rest/v1/admin" served with nestjs + middleware dealing with OData to CQN parsing and serving $metadata and service root endpoints (so that it does act as odata service as well)
    2. Catalog service: we completely drop the run-dispatch-handle loop part, and have EmptyCDSService just as a provider of the cds model for the odata parser for the "/rest/v1/catalog" endpoint (still serving odata of course)

Because of this (and the fact that we have only primitive support for $batch), the "browse app" is pointed to the "/rest/v1/catalog" endpoint while the "admin-books app" still uses "/odata/v4/admin"

Modes 2 and 3 use MySQLiteService as a subclass of the cap-js/sqlite SQLiteService (aka the "new db services" implementation), which has a simplified run-dispatch-handle loop and supports the auto-txing and manual tx we use for $batch and the submitOrder action.

Disclaimer

The code in the repo mentioned above is just a PoC intended purely for academic use, please do not use it otherwise.

It barely covers the basic features in terms of CAP functionality (handling of odata requests by a single local service).

Some parts of the code (namely some middleware and multipart handling) are taken from actual cds implementation and therefore belong to SAP.

Also, in regards to nestjs there must of course be more and better ways to achieve the same, but the author had no prior knowledge of it before starting this project (learning it was part of the project).

CDS (CAP nodejs) part

As already said, we have two sample cds services: admin and catalog.
We removed the custom handlers (except for the submitOrder action handler we cover later), so there is just basic default CRUD logic left.

Expectations regarding the Architecture

With the new ODataAdapter that replaces the okra odata server in cds8, this is how it was described at recap this year:

  • Adapter instance = express router object used for respective path
    • Router object is an isolated instance of middleware and routes
    • Example: app.use('/odata/v4/catalog', <adapter>)
  • Responsibilities:
    • Early access check (DoS): May the current user use the respective service?
    • Parse the url and, if necessary, the payload
    • Set the transactional boundary (cf. /$batch)
    • Construct and dispatch cds.Request to cds.ApplicationService
    • Send (error) response to client

So, based on this it would be cool if we could have something like this in terms of layers of abstraction:

HTTP LAYER (express app receives incoming http requests and routes it somewhere)

    => ODATA MIDDLEWARE LAYER (produces CQN object)

        => APPLICATION SERVICE (handles CQN object)

            => DB SERVICE (executes CQN object)

In this case we could just replace APPLICATION SERVICE layer with nestjs runtime to handle CQN for us.

What is important to mention here is that express middleware by definition does this:

  • Execute any code.
  • Make changes to the request and the response objects.
  • End the request-response cycle.
  • Call the next middleware function in the stack.

While express router is a different kind of beast, because it actually handles our http requests.

Why does it matter?

Well, because we would like to have a middleware that is independent from the actual handler - it would just populate some properties of the req object for us, or serve $metadata and stop early.

But as the current implementation of the "middleware" is responsible for "Constructing and dispatching cds.Request to cds.ApplicationService", this qualifies it as a "handler" that is tightly coupled to cds.ApplicationService (especially in the $batch case) - so we need to make it a proper middleware ourselves (and we do).
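Such a decoupled middleware could look roughly like this (an illustrative sketch only, not the cds implementation; parseToCQN and req.cqn are made-up names):

```javascript
// A sketch of a decoupled odata middleware: it serves $metadata early,
// otherwise it only enriches the request and defers to next().
// parseToCQN is a placeholder for whatever URL -> CQN parser gets plugged in.
function makeODataMiddleware(parseToCQN, metadataDocument) {
  return function odataMiddleware(req, res, next) {
    if (req.path === '/$metadata') {
      // middleware is allowed to end the request-response cycle early
      return res.type('application/xml').send(metadataDocument)
    }
    try {
      req.cqn = parseToCQN(req.path) // populate req, don't dispatch anything
      next()                         // the actual handler decides what to do
    } catch (err) {
      next(err)
    }
  }
}
```

Note that the middleware never touches a service instance - dispatching stays entirely with whatever handler runs after it.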

 

// NOTES:
// - only via srv.run in combination with srv.dispatch inside,
//   we automatically either use a single auto-managed tx for the req (i.e., insert and read after write in same tx)
//   or the auto-managed tx opened for the respective atomicity group, if exists
// - in the then block of .run(), the transaction is committed (i.e., before sending the response) if a single auto-managed tx is used
return service
  .run(() => {
    return service.dispatch(cdsReq).then(result => {
      ... // here could be stuff like _readAfterWrite
      return result
    })
  })
  .then(result => {
    ... // here is actual preparation of result
    res.send(result)
  })
  .catch(err => {
    ... // here is preparation of error
    next(err)
  })

 

ApplicationService and auto-txing + AsyncLocalStorage

As was shown above, our handler uses something like this to "execute" the cds.Request we have:

 

service.run(() => { return service.dispatch(cdsReq).then(result => { return result }) }).then().catch()

 

What does that mean?

Well, after I spent some time debugging all that srv-api.js and srv-dispatch.js and srv-tx.js stuff, it occurred to me that (at least) now the documentation has some info regarding what I call the "run -> dispatch -> handle loop" covered here, AND we have some more details regarding the transactions here.

And to be honest, it made much more sense to me only AFTER I implemented my FakeCDSService, MySQLiteService, and nest stuff and came back to dive deeper into transactions stuff.

So, if my understanding of that logic is correct, those "loops" (calls of run or dispatch with functions as arguments) eventually just make sure we properly set our transactional boundaries for the incoming call, and here is where AsyncLocalStorage comes into play:

 

// somewhere in cds-tx.js, when we understand we need to set a new RootContext
if (typeof fn === 'function') {
  const tx = RootTransaction.for(srv, ctx)
  return cds._context.run(tx, () => Promise.resolve(fn(tx)).then(tx.commit, tx.rollback))
}

 

And what it does for us is set cds.context to be our RootContext, with cds.context.tx being our RootTransaction, while all subsequent calls to .run (db, other local or remote services) have access to this context.

You can refer to this article to see example 3 as a simplified version of this approach.

But that does not directly start a transaction for us! While it sets the transactional boundaries, it actually waits until the first "run" or "dispatch".

And this is the tricky part they cover in docs:

 

// The tx object created by srv.tx() and passed to the function fn is a derivate of the service instance, constructed like that:
tx = { __proto__:srv,
  context: { tenant, user, locale }, // defaults from cds.context
  model: cds.model, // could be a tenant-extended variant instead
  commit(){...},
  rollback(){...},
}

 

Basically, the first dispatch is replaced with _begin, which makes sure that .begin() of the DB service is called.

And after that - when all handlers finish their processing - commit or rollback is done.

You can refer to srv/cat-service.js where I cover two almost identical cases for this behavior.
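The begin-on-first-dispatch behaviour can be sketched like this (a toy model with made-up names, not the actual cds tx implementation):

```javascript
// Toy model of the lazy-begin behaviour: BEGIN is only issued on the first
// dispatch, and commit/rollback are no-ops if nothing was ever dispatched.
class LazyTx {
  constructor(db) {
    this.db = db
    this.began = false
  }
  async dispatch(query) {
    if (!this.began) {          // the first dispatch is effectively _begin
      await this.db.begin()
      this.began = true
    }
    return this.db.run(query)
  }
  async commit()   { if (this.began) await this.db.commit() }
  async rollback() { if (this.began) await this.db.rollback() }
}
```

With a recording fake db, two dispatches followed by a commit produce exactly one BEGIN, while a tx that never dispatched anything issues nothing at all.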

Keep in mind that once a transaction has begun, it indeed acquires a connection (from the pool).

But then, until it is released, a new transaction either needs to wait or get another connection (if the DB supports parallel transactions).

In case of SQLite this can introduce deadlocks (as shown in _submitOrder) OR "freeze" your handlers (imagine a scenario with a remote service call that takes a considerable amount of time).
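The "freeze" scenario is easy to see with a pool of size 1 (a deliberately minimal sketch, unrelated to the actual pool implementation):

```javascript
// A pool with a single connection: the second transaction's acquire() does
// not resolve until the first one releases -- exactly the "freeze" above.
class TinyPool {
  constructor() {
    this.free = true
    this.waiters = []
  }
  acquire() {
    if (this.free) {
      this.free = false
      return Promise.resolve()
    }
    return new Promise(resolve => this.waiters.push(resolve)) // must wait
  }
  release() {
    const next = this.waiters.shift()
    if (next) next()  // hand the connection straight to the next waiter
    else this.free = true
  }
}
```

If the holder of the connection itself awaits the second acquire (as in the nested-tx case), nobody ever calls release() and you have a deadlock.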

Vanilla nodejs part

So, initially my approach was to remove cds bootstrap process and see how all the stuff runs.

And then try to strip off more and more layers of logic from the cds.ApplicationService.

And if you are interested in this, you can browse the commit history in the repo, cuz it indeed starts with that.

Eventually, I ended up with a vanilla js class FakeCDSService which does not have any dependencies on the cds runtime.

What that means, is this service does NOT provide our handlers with auto-txing, and all the transaction logic happens at DB level (which makes sense).

And there you can see what a "simple" run -> dispatch -> handle loop looks like.

Of course, this implementation only supports the default generic crud handlers (kinda ON_READ, ON_UPDATE etc) as we removed the before/on/after phases completely.
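Stripped to the bone, that loop looks roughly like this (a toy analogue of FakeCDSService with made-up names, not the real class):

```javascript
// Toy run -> dispatch -> handle loop: one generic handler per event,
// no before/on/after phases and no transaction management at all.
class TinyService {
  constructor(db) {
    this.db = db
    this.handlers = {
      READ:   req => this.db.run(req.query),
      CREATE: req => this.db.run(req.query),
    }
  }
  run(fn) {
    return Promise.resolve(fn())   // no tx boundary is set up here
  }
  dispatch(req) {
    const handle = this.handlers[req.event]
    if (!handle) return Promise.reject(new Error(`no handler for ${req.event}`))
    return Promise.resolve(handle(req))
  }
}
```

Compare with the cds handler shown earlier: run() here does nothing transactional, which is precisely what pushes all tx logic down to the db service.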

And basically this is enough to implement the simplest working odata service (as I said before, the $batch part is hard, so I had to stub some weird stuff there) that uses ODataAdapter.

In our case we actually just implement a subclass of HttpAdapter more or less the same way as original cds implementation does, using their standard handlers.

Nestjs part

Finally, here comes that part with an answer to the original question.

Of course, I had several iterations of code "doing something":

  1. Exactly the same ODataAdapter + middlewares I had for vanilla nodejs, just wired up in main.ts. That basically did not change anything.
  2. Then I had (and it still works for AdminService) an instance of FakeCDSService that was connected with the db service through some nestjs magic, but still used ODataAdapter
  3. And a final iteration without an instance of FakeCDSService and without ODataAdapter

All three use essentially the same MySQLiteService (I just changed the name of the class exposed in the provider a couple of times).

So, the 3rd option would be the more or less proper answer, meaning:

  1. We need to implement a proper middleware that parses urls into proper CQN objects (and in case of $batch requests we have an array of them in req.batch)
  2. We still need to register that middleware so that it has access to the cds definition of the corresponding service to use for parsing
  3. We need an instance of the simplest EmptyCDSService (defined in cds.provider.ts) so that it stores that definition and has endpoints
  4. And then we need to implement controllers that have the db service injected into them via nestjs
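Stripped of the Nest decorators, the essence of such a controller method (point #4) is just this (hypothetical names like req.cqn and req.odataContext, only to show the division of labour):

```javascript
// What remains for the controller once the middleware has already parsed
// the URL into a CQN on req.cqn and nest has injected dbService: run the
// query and shape an odata-style envelope. No parsing, no tx plumbing.
async function handleRead(req, dbService) {
  const value = await dbService.run(req.cqn)
  return { '@odata.context': req.odataContext, value }
}
```
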

For point #2 we can actually replace the middleware with Interceptors (like we have in transormations.interceptor.ts to deal with the batch multipart body and the final formatting of the odata response)

Also in that case, the direct instantiation of this EmptyCDSService in the configure(consumer) method can potentially be replaced with some other custom provider (we just need constants like name and endpoint)

And then our controllers (like CatalogService in catalog.module.ts) would be responsible for handling requests with proper transactional boundaries.

For example, the way we handle $batch there looks like this (more or less cds'y):

Of course, in reality batch requests can be much more complex with interdependent requests

 

@Post('*batch') // omg $batch just does not work ;(
@UseInterceptors(HandleMultipartInterceptor, AddODataContextInterceptor)
async handleBatch(@Req() req: any, @Res({ passthrough: true }) res: Response) {
    await (this.dbService as Service).tx( async (tx) => {
        await tx.begin() // this is super important to call for now
        for (const r of req.batch.requests) {
            r.result = await tx.run(r.query)
            r.statusCode = 200
        }
    })
    return
}

 

This actually works for simple non-draft-enabled scenario, so our browse app is able to consume this endpoint.

Notice here that the controller does not deal with the Response object at all (compare to our current handlers in cds above)

And of course the same approach with AsyncLocalStorage can be applied, and in fact there is a third party library implementing it for transactions, imitating the @Transactional annotation in spring.

But that actually brings us to an interesting conclusion (it was unexpected for me at the beginning of this exercise)

As a conclusion

After I researched transaction handling a little bit more, and then read some stuff about it in regards to nestjs and spring, it occurred to me that I was actually reimplementing cap java, but with nodejs.

And this was confirmed after I (finally) read cap java docs.

And while, of course, with a proper framework you can basically re-implement all the existing stuff using the "building blocks" this framework provides you with (like interceptors, providers and that stuff in nest), the question now would be: WHY?

Therefore I am not sure I will put more effort into this "yet another DI strongly typed CAP implementation", but if someone is for whatever reason interested in this topic, please find me on LinkedIn (I am open for projects btw).

 

P.S. Thoughts regarding the "new community platform"

I was hesitant to use it, and of course the first thing I had to deal with was "finding my saved draft".. In short, if you don't remember the board you wanted to post to, you might be out of luck )

But then there was also this "Your post has been adjusted to remove unsupported HTML found in the message body. The HTML has been updated. Please review the post and send it when you are satisfied." which was a real pain in the ass, cuz I only used wysiwyg editor with simple formatting and code blocks.

In my case, it looks like the issue was something copied from somewhere that introduced divs and spans which this engine could not properly deal with together with code blocks. And I had to manually clean up the html..

Therefore I strongly recommend exporting the <html> code of your blog somewhere BEFORE you try pressing "Save Draft"

Overall experience: below average, even worse than previous blog platform, which had some "space for improvement" in regards to content creation tools.

Motivation to write something else: not at all.

Which is rather sad..
