This was the last week of a course about Unit Testing and Test Driven Development, a subject which has caused all-out war in the ABAP world as people fight to the death to defend or attack the concepts.

This week the subject was EXISTING CODE and TEST DOUBLE FRAMEWORKS and THIS and THAT and THE OTHER and basically anything not yet covered.

Unit Tests on the Titanic


As I keep stressing, despite the fact that OO ABAP was introduced in the year 2000 with SAP 4.6C, the take-up has been shockingly low in the intervening period. As the recent online debates have shown, a lot of people still program procedurally 100% of the time and have teams that “lack OO knowledge”. In fact, one benefit of this course is to make the people who are taking it try to get their heads around OO concepts.

I have posted a vast number of blogs about why I think OO is good, based on a lot of real world experience, but it clearly is not everyone’s cup of tea.

In any event, the end result of OO not really catching on in the SAP world is that the vast bulk of existing code out there is procedural. On the internet, and even in this course, all the examples of how to introduce unit tests presume the program to be changed is already 100% OO, while real programs are 100% procedural. This tends to make people think “this is nothing to do with me”, like the Gas Board used to say on “That’s Life!”.

Nonetheless it is possible – I have done it – to use unit testing with procedural programs without changing them at all. The unit test classes have to be OO of course, and I am not saying it is easy – you need a LOT of work in the SETUP and TEARDOWN methods – but it can be done. In my case I used this technique to transform a program which did not seem able to do anything right into one that performed flawlessly, despite the never-ending functionality changes coming from the functional business owner.

Really though it is far less effort to start making the procedural program slightly OO than to spend ages using a complicated SETUP method to mess around with global variables and the like.

This involves extracting database calls and the like into methods of database access classes, UI calls into UI classes, and so on. There is no way you can do this to a huge procedural program all at once, nor would you want to. This is where the ever popular “island of happiness” comes in.


I have said this before and I am going to say it again word for word. I say when you have a change to a big program, write a test for the new behaviour, it fails, change the code, the test passes. Then you have one unit test. Let it go at that, do not bother refactoring anything else.

When the next change comes along to the program, the next week, to a totally different part of the program, repeat the procedure. Then you have two tests, and furthermore you know the second change has not broken the first. At this point you have 0.05% of the program under test.

If you rigidly enforce this then after a while you will benefit from what I call the law of “no-one is ever satisfied”, which translates to “if a part of a program is being used by a human, sooner rather than later they will want it changed”. Therefore after a five year period every single part of the program that is in use will have been subject to a change request, and thus will have a unit test of some sort. This is very simplistic, but that is the general idea. New programs should have tests from the start.

No One is Ever Satisfied


One problem is that of the SELECT-OPTIONS and PARAMETERS in “type one” programs. These are global variables, which as we have seen are poison to unit testing. Since global variables are “seams” you have to have a special class to store the values. In real life you just stuff that class full of the values the user entered; in a test you stuff that class full of fake values.

I have been doing that for a while, wishing there was a better way, but there is not. If there is, please tell me. One benefit the course instructors noted was that SELECT-OPTIONS names have to be very short (eight characters at most) or you get a syntax error, so you end up with names like S_DFRM, whereas the target variable in the class can have a name of up to 30 characters, like DATE_FROM_RANGE or whatever.
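To make that concrete, here is a minimal sketch of the sort of class I mean. LCL_SELECTIONS and everything inside it are hypothetical names, not anything from the course:

```abap
*--------------------------------------------------------------------*
* Hypothetical sketch: a class to hold the selection screen values.
* In production it is filled from S_DFRM; in a test, from fake data.
*--------------------------------------------------------------------*
CLASS lcl_selections DEFINITION.
  PUBLIC SECTION.
    TYPES tt_date_range TYPE RANGE OF sy-datum.
    DATA date_from_range TYPE tt_date_range READ-ONLY.
    METHODS constructor IMPORTING it_date_from_range TYPE tt_date_range.
ENDCLASS.

CLASS lcl_selections IMPLEMENTATION.
  METHOD constructor.
    date_from_range = it_date_from_range.
  ENDMETHOD.
ENDCLASS.

* Production code, at START-OF-SELECTION:
*   DATA(go_selections) = NEW lcl_selections( s_dfrm[] ).
* Test code, in the SETUP method:
*   mo_selections = NEW lcl_selections( VALUE #(
*     ( sign = 'I' option = 'BT' low = '20180101' high = '20181231' ) ) ).
```

The READ-ONLY addition means the rest of the program can read the values but never change them, which kills off the side effects you get with true global variables.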

What’s got a Hazelnut in Every Bite?

A complete change of topic! The title of this week’s course was “existing code” but in fact most of it was about various test double frameworks.

Up until now in the course we have been creating our test doubles (mocks, whatever you want to call them) manually, setting the interface as “partially implemented” and then manually coding the methods we are “redefining” with our fake data.

There is a different way to do this which arrived with ABAP 740 as I understand it. Actually there was an open source Z project called ZMOCKA which does the same thing on lower releases. SAP must have taken note of that and built their own version, which is in fact quite flattering for the author of the ZMOCKA project. In any event the original concept came from Java, this is the ABAP version.

What you do here is, instead of coding a definition and implementation of your test double, you create it by passing the desired interface to CL_ABAP_TESTDOUBLE, and back comes an instance of the correct type to be injected into your code under test.

What comes next is slightly un-intuitive. Via a series of method calls you tell your test double what methods are going to be called, what input parameters are going to come in and what values to return when that happens.

In fact it seems you tell the double what result you want before actually telling it what method is going to be called. Even though I wrote about this two years ago in my book, still the whole thing seems very strange and I will have to do some more experiments before I can tell if this is better or worse than manually creating your test doubles. The claim is that the automated test double framework saves you time, and it may well, but possibly at the expense of clarity.
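To illustrate that back-to-front feel, here is a rough sketch of how the framework gets used. ZIF_CUSTOMER_DAO, GET_NAME and ZCL_MONSTER_BILLING are all made-up names for illustration; only CL_ABAP_TESTDOUBLE is the real SAP class:

```abap
" Rough sketch - ZIF_CUSTOMER_DAO, GET_NAME and ZCL_MONSTER_BILLING
" are invented; CL_ABAP_TESTDOUBLE is the real SAP class
DATA(lo_dao_double) = CAST zif_customer_dao(
  cl_abap_testdouble=>create( 'ZIF_CUSTOMER_DAO' ) ).

" Step one: tell the double what result you want...
cl_abap_testdouble=>configure_call( lo_dao_double
  )->returning( 'BARON HARKONNEN' ).

" Step two: ...and only THEN which method call that answer belongs to
lo_dao_double->get_name( iv_customer_id = '0000001234' ).

" Step three: inject the configured double into the code under test
DATA(lo_cut) = NEW zcl_monster_billing( io_customer_dao = lo_dao_double ).
```

Step two is the strange bit: calling GET_NAME on the double does not actually run anything, it just records which call the configured result applies to.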

I could even imagine someone creating a framework of their own to wrap the test double framework to make it more understandable!

You can Ring my SQL

As of Release 7.51 there is a similar framework for creating “test doubles” of SQL statements. Up till now we have been wrapping SQL SELECT statements in database access classes which return hard coded data.

With the OSQL test double framework what you do is make a method call saying which database table is to be mocked, and pass in an internal table of data of the same type as that table. Then, by means of black magic (in fact the kernel), when the production code does the SELECT, the internal table is used as the data source rather than the database.
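As a hedged sketch of what that looks like, assuming the standard CL_OSQL_TEST_ENVIRONMENT class (7.51+) and the SAP demo table SFLIGHT, with the actual code under test left out:

```abap
" Sketch assuming CL_OSQL_TEST_ENVIRONMENT (7.51+) and demo table SFLIGHT
CLASS ltc_flights DEFINITION FOR TESTING
  RISK LEVEL HARMLESS DURATION SHORT.
  PRIVATE SECTION.
    CLASS-DATA go_osql_env TYPE REF TO if_osql_test_environment.
    CLASS-METHODS: class_setup, class_teardown.
    METHODS setup.
ENDCLASS.

CLASS ltc_flights IMPLEMENTATION.
  METHOD class_setup.
    " From now on every SELECT on SFLIGHT in the code under test
    " reads from our fake data instead of the real database
    go_osql_env = cl_osql_test_environment=>create(
      i_dependency_list = VALUE #( ( 'SFLIGHT' ) ) ).
  ENDMETHOD.

  METHOD setup.
    DATA lt_flights TYPE STANDARD TABLE OF sflight.
    lt_flights = VALUE #( ( carrid = 'LH' connid = '0400' fldate = '20180101' ) ).
    go_osql_env->insert_test_data( lt_flights ).
  ENDMETHOD.

  METHOD class_teardown.
    go_osql_env->destroy( ).  " switch the real database back on
  ENDMETHOD.
ENDCLASS.
```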

What this means is you do not have to remove the SEAM that is database access from the production code, so that is less work. In addition, your SQL statement itself is in effect being tested: if the internal table mirrors the actual database data and the wrong result comes back, that could indicate an error in the WHERE clause.

That is the positive aspect – on the other hand I rather liked the concept of isolating database access into its own class. I suppose you could still do that here. Also it could be argued that things that work “behind the scenes” make it less obvious what is going on a la logical database.

CDS – Miami

Enter Horatio.

Horatio: (puts sunglasses on) Baa Baa (takes sunglasses off) Black Sheep (puts sunglasses on) have you (takes sunglasses off) any (puts sunglasses on) WOOL?

Cue theme music.

The CDS test double framework works on exactly the same principle as the SQL test double framework. It is available as of ABAP 7.52. This time, instead of replacing a SELECT from a database table, you are replacing a SELECT from a CDS view. I don’t think there is anything else that needs to be said about that.
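Well, perhaps a tiny hedged sketch of the setup, inside an ABAP Unit test class; Z_I_MONSTERS is a made-up CDS view name:

```abap
" Tiny hedged sketch - Z_I_MONSTERS is a made-up CDS view name.
" These methods live inside a test class as in the OSQL example.
CLASS-DATA go_cds_env TYPE REF TO if_cds_test_environment.

METHOD class_setup.
  " Doubles all the data sources the view under test reads from
  go_cds_env = cl_cds_test_environment=>create( i_for_entity = 'Z_I_MONSTERS' ).
ENDMETHOD.

METHOD class_teardown.
  go_cds_env->destroy( ).
ENDMETHOD.
```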

I will say that a lot of people taking the course must have been really puzzled as (talking about ERP systems only) I reckon maybe only 1% of people are on 7.52 (you need to be running S/4 HANA on premise) maybe another 2-3% are on 7.51 (again you need S/4 HANA on premise) and the other 96% are on lower levels. My company is in the midst of an upgrade from 7.02 to 7.50.

Yet in the course it was never mentioned that the SQL and CDS test double frameworks are only available at higher levels. I can only presume that inside SAP, as they get such things five to ten years earlier than your average customer, they forget that not everyone has them.


Now we come to TEST-SEAMS which sadly are available at lower levels. I say sadly because I wish they were not available at all.

The idea here is you take your production code and you put a TEST-SEAM ... END-TEST-SEAM block around a database access or other sort of dependency. Then in the test code you write a TEST-INJECTION ... END-TEST-INJECTION block, which replaces the code inside the seam with some fake data.
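A minimal sketch of the mechanism – the seam name, form name and types are all invented:

```abap
" Minimal sketch - the seam name, form and types are invented.
" Production code:
FORM get_flights CHANGING ct_flights TYPE ty_flight_table.
  TEST-SEAM read_flights.
    SELECT * FROM sflight INTO TABLE ct_flights
      WHERE carrid = 'LH'.
  END-TEST-SEAM.
ENDFORM.

" Test code - the injected block replaces the seam content:
METHOD test_flight_logic.
  TEST-INJECTION read_flights.
    ct_flights = VALUE #( ( carrid = 'LH' connid = '0400' ) ).
  END-TEST-INJECTION.
  " ...now call the production code and make assertions...
ENDMETHOD.
```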

On the positive side this works. On the negative side you still have to change the production code; in my mind it is just as much of a change as replacing the seam with a call to an object method. I have been told that TEST-SEAMS are for when there is “no other possibility”, but I cannot think of anything you cannot wrap in a method call. Someone give me just one example and then I will shut up about it.

Worse, now the production code knows it is being tested, which is horrible. It is like saying IF TEST_FLAG = ‘X’ DO THIS ELSE DO THAT. In fact you might as well do just that and not bother with the test seam statements – you get the exact same result.

The analogy is what brought down Volkswagen recently – they had code in their cars which said: if you are being tested, produce really low amounts of pollution; if running productively (on the road), produce the normal high amounts of pollution. The tests passed, but in real life they got into huge trouble. You could not do that if the production code was unaware of whether it was being tested or not. Maybe TEST-SEAMS were invented for Volkswagen.

Where be that Blackbird? I see he, and he CI

CI stands for “Continuous Integration”. This concept arose from languages like Java where everyone develops on their own machine and then “checks in” their code to the main repository at which point of course the new code might break all sorts of things. Thus you need an automated check very frequently to make sure the checked in code has not in fact stuffed things up.

In ABAP that whole concept does not make quite as much sense, as everything is all together in one repository, so you cannot delete something that is still in use, for example. Nonetheless you can of course still change a class or a function which compiles fine on its own, while other code that depends on it suddenly breaks due to a new mandatory parameter or whatever.

There is a report (SAP term for program) called RS_AUCV_RUNNER which can be run on a regular basis and fire off all the unit tests and notify someone when any start breaking. This can be scheduled via the ABAP Test Cockpit (ATC).

Now if the unit tests just tested one class at a time that would not do you much good, as the fact the classes no longer play nicely together will not surface.

However if you have been writing tests using “Behaviour Driven Development” then likely all your tests involve more than one class, even if some of them are test doubles. These sorts of unit tests are called “multi-level” tests by the course instructors, and “acceptance tests” by the wider testing community, as in some automated test frameworks such as FitNesse. The latter is in the form of a wiki where business people can add new tests and see if they work.

Oddly enough in my real life experiments thus far all my unit tests have ended up in this bucket, testing an expected program behaviour involving the interaction of several classes.

One point the course instructors made here was that if any of your tests have dependencies they are deemed “unstable” as the result varies at random, and this ruins the whole reputation of unit testing. This is more likely to happen in the early days, when that reputation is the most important, as if people get disillusioned in the first month, the whole thing is dead in the water.

Grandmaster Melly Mel: Guidelines for Writing New Code

The very last unit in this course focused on guidelines (I cannot say the term “best practice” without being physically sick) for writing new code.

Here are the notes I made. Everything that follows I agree with, as if they said something I did not agree with I would have either not written it down, or written it down coupled with a load of abuse.

  • Do no work in constructors. They are vehicles for the injection of dependencies; anything else they do should be something that can be mocked, and no complex business logic please, as that makes for untestable code. The irony is that most people, when starting in OO, do tons of stuff in the constructor. In my TDD experiments I found that making the code testable involved taking more and more out of constructors until I pretty much did not need them anymore.

  • For any class at all put all the public methods and attributes in an interface which the class then implements. This is because you always want to make your classes as reusable as possible so you never know in the future what other code will want to use your new class. That other code will want unit tests so it will want to be able to mock your new class i.e. create a test double, and you need an interface for that.

  • This whole focus on interfaces makes me realise why you should not test private methods. They are the low level implementation details and may change all the time, whereas your public methods are “published” as it were and this means their nature changes much more rarely.

  • The good news is that in Eclipse apparently you can create the definition of the class first and then extract all the public bits to an interface at the press of a button. Once again, that sounds too good to be true; I literally cannot wait until I can start using Eclipse at work, which should be in just over a week’s time.

  • As might be imagined the rule with variables is “be as local as possible”. I think that goes without saying – the more “global” a variable is the more scope for so called “side effects” where its value changes unexpectedly.

  • This means you have a horrible, horrible trade-off. Horrible choice number one: you can have all variables in a method be local or parameters, and then each method has a huge unwieldy signature (almost as bad as a BAPI); moreover you get “tramp data” being passed into a method. Tramp data is data which goes into a method for the sole purpose of being passed on to another method the first method calls. I wrote a program like that once, but I would not do it again.

  • The other horrible choice is to have most of the variables be “member variables” which are not quite global variables as they are only “global” within an instance of a class, but they can be changed by any method of that class so there might well possibly be “side effects”.

  • So the course instructors recommended, and I wholeheartedly agree, and so does the guy who sits opposite me at work, and I did a straw poll of the bus queue and they all agreed as well, that when you have data which is “logically” global, you should encapsulate it in a class with read-only public attributes so the values are immutable once created. The obvious example is customising data – there is no way that is going to change during the run of a program, and even if it did (a transport going in) you would not want the start values to change for the current transaction. So we have a global customising object with all the immutable values, out of the control of the calling classes, so no sneaky method can go around changing them and causing side effects.

  • Use SOLID principles. Say no more – if anyone thinks they are a load of old rubbish, not worth spitting on, then so be it. I will argue no more. By the way if anyone DOES think this can they indicate so in the comments? They would not be the first one from what I see on the internet but then people are humans and argue about everything, which is hopefully a positive thing, as opposed to blind faith.
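As a sketch of that customising recommendation – every name below is made up, and the point is the READ-ONLY additions:

```abap
" Hypothetical sketch of an immutable customising object;
" ZTMONSTER_CONFIG and all other names are invented
CLASS lcl_customising DEFINITION CREATE PRIVATE.
  PUBLIC SECTION.
    DATA: tolerance_percentage TYPE p LENGTH 5 DECIMALS 2 READ-ONLY,
          default_plant        TYPE werks_d READ-ONLY.
    CLASS-METHODS get_instance
      RETURNING VALUE(ro_instance) TYPE REF TO lcl_customising.
  PRIVATE SECTION.
    CLASS-DATA go_instance TYPE REF TO lcl_customising.
ENDCLASS.

CLASS lcl_customising IMPLEMENTATION.
  METHOD get_instance.
    IF go_instance IS NOT BOUND.
      go_instance = NEW #( ).
      " Read once, outside the constructor; from here on no calling
      " class can change the values - READ-ONLY sees to that
      SELECT SINGLE tolerance plant FROM ztmonster_config
        INTO ( go_instance->tolerance_percentage,
               go_instance->default_plant ).
    ENDIF.
    ro_instance = go_instance.
  ENDMETHOD.
ENDCLASS.
```

Note the database read lives in GET_INSTANCE rather than the constructor, in keeping with the first bullet point above.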


Lastly, the “Law of Demeter” and what is sometimes called a “Train Wreck”. What do these funny terms mean?


The example given was along the lines of MO_DOG->MO_MOUTH->MO_TONGUE->BARK( ).

This is called a “Train Wreck” because it looks like train carriages tied together with the “->” construct.

The problem is that the inner workings of the dog are exposed by this approach. The correct way is to have the BARK method public in the interface and all the other methods private. Inside, the dog can bark any way it wants; the caller does not need to know how this happens.
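In ABAP terms the fix might look something like this sketch, with the mouth hidden away as a private attribute (all names invented, and LCL_MOUTH is assumed to be defined elsewhere):

```abap
" All names invented: the mouth stays a private implementation detail
INTERFACE lif_dog.
  METHODS bark.
ENDINTERFACE.

CLASS lcl_dog DEFINITION.
  PUBLIC SECTION.
    INTERFACES lif_dog.
  PRIVATE SECTION.
    DATA mo_mouth TYPE REF TO lcl_mouth.  " assumed defined elsewhere
ENDCLASS.

CLASS lcl_dog IMPLEMENTATION.
  METHOD lif_dog~bark.
    " How the barking happens is nobody's business but the dog's
    mo_mouth->open( ).
    mo_mouth->make_noise( 'WOOF' ).
  ENDMETHOD.
ENDCLASS.

" The caller now just says MO_DOG->BARK( ) instead of
" MO_DOG->MO_MOUTH->MO_TONGUE->BARK( )
```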

Perhaps you want a more sensible example?


Say the original code was a train wreck like MO_CONTROLLER->MO_SPECIFIC_VIEW->DISPLAY_WOTSIT_SCREEN( ). I say you should change that to MO_CONTROLLER->DISPLAY_WOTSIT_SCREEN( ), and within that method it might call MO_SPECIFIC_VIEW or it might not, depending on how it is feeling, but however it decides to do it, the WOTSIT_SCREEN is displayed. For example it could start off using CL_SALV_TABLE and then the developer swaps that for CL_GUI_ALV_GRID when the users want something to be editable.

The point is that those big long lists of method calls describe how something works in detail, and in OO detail is the last thing we want; we want abstractions, which is to say, at the top level, WHAT is happening as opposed to HOW it is happening. The “what” changes rarely, but the “how” we often want to change all the time as we come up with ever better ways of doing the same thing.

For example, to use current events, you may have a program which analyses data about “exoplanets” to decide if one is enough like Earth that we can live there and stuff that planet right up as well. This is a big focus of scientists worldwide at the moment. In the next few years a series of bigger and better telescopes will be launched into space, but the nature of the data will not change; there will just be a lot more of it, and it will probably be more accurate. If the algorithms in the analysing program are correct to start off with, why would the program care where the data is coming from?



Too Busy to Improve

The course is at an end now, so I would just like to give a general overview.

First of all, I thought it was brilliant content-wise. I learned a whole bunch of new stuff, and I am supposed to be somewhat of an expert in this area already (one of the 1% who actually use unit tests). I really hope a huge number of people at the very minimum grasped the vague idea of what is trying to be achieved by Unit Testing and TDD, and indeed OO programming in general (and ABAP in Eclipse) (and the new ABAP constructs).

The actual “style” of the videos I may not have been 100% positive about. As always, I recommend everybody to Toastmasters International when it comes to public speaking of any sort. As an aside, Toastmasters count talking to a stranger in an elevator (lift) as public speaking, as you have a short period to get your point across, even if your point is just to explain who you are and what you do.

I talk in public all the time and Toastmasters has made me immeasurably better. The process of improving your public speaking is also a lot more scientific than you might think, which may appeal to some of the more technical types inside SAP.

In any event I am very glad indeed this course happened. I have been saying more or less the same thing for many years to whoever will listen, which is not very many people in the ABAP world.

This may make the practice of TDD and Unit testing more mainstream or it may fall on “stony ground” once again. At the very minimum it has sparked a huge amount of debate on the SCN, a forum in dire need of such debates after it lost most of its members in recent times.

I am going to keep using TDD / ABAP Unit in real life, and will keep blogging about my experiences good or bad. I encourage anyone who also tries to use it in real life to do the same – if it all falls down then fair enough, tell us (the SCN) why.

If you hate the concept and see it as madness or worse, I would also encourage you to try it anyway. You will either be surprised or, more likely (due to the fact that we humans make up our minds whether something is good or bad before doing it, and thereafter no evidence will sway us to the contrary), feel vindicated – then tell us why it did not work as well. By the way, I am 100% sure I am just as guilty of deciding this is good and subconsciously ignoring all contrary arguments, though I do make a big effort to read them all.

I spoke about this (ABAP Unit / TDD) at the UK SAP User Group last week, in the next few months I have plans to speak about this at SAP Inside Track in Rome, and maybe at SIT in Hamburg, and to a company in Cologne, going on and on about this like a broken record until no one can take it anymore and either they start doing it or commit ritual suicide rather than hear me keep on about it, or I get assassinated. Maybe I could pass out red hats with the slogan “Let’s Make Testing Great Again”.


From my point of view that LEGO picture above says it all. Management, and even developers, say that introducing crazy things like unit tests will slow them down, and they are so busy that this cannot be allowed to happen.

The contrary argument is that the reason they are so busy in the first place is that they do not have unit tests and thus have to spend all their time fixing problems.

You either believe that or you do not – hopefully, in my organisation at least, time will tell the tale. It will either be a gigantic success or a colossal failure, and I will let you know either way.

Cheersy Cheers
