tl;dr: Install https://github.com/timostark/abap-json-serialization and enjoy the fastest possible JSON serialization. The result is roughly 10x faster JSON serialization and deserialization compared to /UI2/CL_JSON at the same quality. Be warned though: read the limitations section first.
Oh no - another JSON serialization blog post? Hey - at least it is not a blog about Excel exports 🙂
So why do we need a "new" way of JSON serialization? The reason is simple: runtime! Especially when working with custom REST services with a big payload, you will notice a lot of runtime being lost in JSON serialization. Losing 30% of your runtime in JSON serialization makes me very unhappy (when I have just optimized my much more complex business logic).
So, what are our goals:
- Fast
- Support Camel-Case
- Support real booleans and numbers
- Does not need to be easy or generic (I will accept a harder life as a developer if it is fast and reliable).
There are already multiple solutions out there - the most important ones being /UI2/CL_JSON and a plain CALL TRANSFORMATION ID.
So how do they behave from a runtime perspective? Let's take a very simple example and serialize 5,000 SFLIGHT lines as well as a very complex, deeply nested structure:
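Just to make the setup concrete, here is a minimal sketch of how such a measurement can look (the variable names, the 5,000-row SELECT and the WRITE output are my own illustration, not the original test harness):

DATA lt_flights TYPE STANDARD TABLE OF sflight.
SELECT * FROM sflight INTO TABLE lt_flights UP TO 5000 ROWS.

DATA: lv_t0 TYPE i,
      lv_t1 TYPE i.

" Variant 1: /UI2/CL_JSON with camelCase names
GET RUN TIME FIELD lv_t0.
DATA(lv_json_ui2) = /ui2/cl_json=>serialize(
  data        = lt_flights
  pretty_name = /ui2/cl_json=>pretty_mode-camel_case ).
GET RUN TIME FIELD lv_t1.
WRITE: / |/UI2/CL_JSON: { lv_t1 - lv_t0 } microseconds|.

" Variant 2: CALL TRANSFORMATION ID writing straight into a JSON sXML writer
GET RUN TIME FIELD lv_t0.
DATA(lo_writer) = cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ).
CALL TRANSFORMATION id SOURCE root = lt_flights RESULT XML lo_writer.
DATA(lv_json_id) = cl_abap_codepage=>convert_from( lo_writer->get_output( ) ).
GET RUN TIME FIELD lv_t1.
WRITE: / |CALL TRANSFORMATION ID: { lv_t1 - lv_t0 } microseconds|.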
So what does that tell us?
Not really surprisingly, the only feasible solution on an ABAP stack is the usage of CALL TRANSFORMATION - as it is executed directly in the kernel and therefore does not depend on slow ABAP string concatenation and/or field-symbol traversal.
It might sound strange, but always remember: building up strings via concatenation and traversing a structure via field-symbols is very slow in ABAP compared to native languages --> where possible, kernel modules like Simple Transformations are preferable performance-wise.
There are, however, quality problems when using CALL TRANSFORMATION ID (illustrated in the sketch after this list):
- No Camel-Case
- No real "booleans" (instead 'X' is printed.. tell that somebody outside of the SAP world)
- No real NUMC (instead leading 0s are printed)
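For illustration, a small sketch with a made-up structure (the type and field names are mine) that shows the typical output of the plain ID transformation:

TYPES: BEGIN OF ty_demo,
         order_id TYPE n LENGTH 10,   " NUMC field
         is_open  TYPE abap_bool,     " 'X' / ' '
       END OF ty_demo.
DATA(ls_demo) = VALUE ty_demo( order_id = '42' is_open = abap_true ).

DATA(lo_writer) = cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ).
CALL TRANSFORMATION id SOURCE root = ls_demo RESULT XML lo_writer.
DATA(lv_json) = cl_abap_codepage=>convert_from( lo_writer->get_output( ) ).
" lv_json now looks roughly like:
"   {"ROOT":{"ORDER_ID":"0000000042","IS_OPEN":"X"}}
" i.e. upper-case keys instead of camelCase, the NUMC value as a string with
" leading zeros, and 'X' instead of a real boolean.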
There is one solution, already mentioned in a blog post, that uses a custom ABAP transformation to at least support camel case. Unfortunately, that throws away the performance benefit, as the fast kernel module has to call back up into the ABAP stack just for a simple "to-camel-case" conversion.
My suggested solution is that we use CALL TRANSFORMATION for what it is actually meant for: transforming data using ST (Simple Transformation) transformations. Remark: CALL TRANSFORMATION can also be used for XSLT transformations (which are much more powerful, but also slower - see the remark by @Sandra Rossi), but that is simply not required here. This means we create our own Simple Transformation for the structure/table type we want to serialize (nested structures are of course possible).
Let's see an example transformation for the table SFLIGHT (shortened):
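Roughly, such a transformation is a Simple Transformation that emits the JSON-XML elements (object, array, str, num, ...) which the JSON sXML writer then renders as JSON text. A shortened sketch of what this can look like - the camelCase names and the str/num typing per field are illustrative, the actually generated transformation may differ:

<?sap.transform simple?>
<tt:transform xmlns:tt="http://www.sap.com/transformation-templates">
  <tt:root name="ROOT"/>
  <tt:template>
    <array>
      <tt:loop ref=".ROOT">
        <object>
          <str name="carrId"   tt:value-ref="CARRID"/>
          <str name="connId"   tt:value-ref="CONNID"/>
          <str name="flDate"   tt:value-ref="FLDATE"/>
          <num name="price"    tt:value-ref="PRICE"/>
          <str name="currency" tt:value-ref="CURRENCY"/>
          <num name="seatsMax" tt:value-ref="SEATSMAX"/>
          <num name="seatsOcc" tt:value-ref="SEATSOCC"/>
          <!-- ... remaining SFLIGHT fields omitted ... -->
        </object>
      </tt:loop>
    </array>
  </tt:template>
</tt:transform>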
Nobody wants to write that code (and for sure nobody in their right mind wants to keep that transformation up to date) - but let's first look at the runtime impact.
==> The solution is around 10 times faster than /UI2/CL_JSON, while producing a result of the same quality.
As already said, of course nobody wants to write these ST mappings by hand - especially for deeply nested structures this is horrible.
Therefore, I've published a small helper program ZJSON_TO_XSLT under the MIT license on GitHub, which allows you to directly create those transformations for any structure/table.
Output (next to the generated transformation):
Execute the transformation using a normal CALL TRANSFORMATION call:
" Create an sXML writer that renders its output directly as JSON
DATA(lo_writer_json) = cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ).
" Run the generated Simple Transformation against the internal table
CALL TRANSFORMATION ZSFLIGHT SOURCE root = lt_flights RESULT XML lo_writer_json.
" The writer returns a UTF-8 xstring - convert it to a string
DATA(lv_json) = cl_abap_codepage=>convert_from( lo_writer_json->get_output( ) ).
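For the opposite direction the same transformation can be used. A sketch, under the assumption that the generated ST also supports the JSON-to-ABAP direction (CALL TRANSFORMATION accepts a JSON string as XML source and parses it into JSON-XML):

DATA lt_flights_parsed TYPE STANDARD TABLE OF sflight.
" The JSON in lv_json is detected and handed to the transformation as JSON-XML
CALL TRANSFORMATION ZSFLIGHT
  SOURCE XML lv_json
  RESULT root = lt_flights_parsed.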
In my customer projects I call the API used by the program in a regular job (including a mapping table), which updates the transformations on the development system at regular intervals. If you want to spend a lot of time, you could even create the transformations "live" as local objects on first access. Personally, I do not like the approach of local development objects, though.
==> Using fixed ST transformations, you get extremely fast JSON serialization and deserialization while still keeping high quality.
Limitations:
- A big word of warning: The solution is meant for performance-critical development. It comes with a very high cost: you have to think of an additional development object (the transformation). Even if it is updated automatically, it can become out of date, it can be forgotten, or it can get corrupted.
- Summed up: If you do not have a problem (i.e. a customer complaining about slow type-ahead, where you need response times in milliseconds), do not create additional problems by using the more complex solution described here.
- The solution only works as long as you know the exported JSON types upfront (i.e. you have static data types). For dynamic data structures this solution will not work.