In the previous post I talked about what makes great software, which, according to the Head First OOA&D book that I'm following, comes down to three aspects:
Great software satisfies the customer.
Great software is flexible.
Great software is maintainable and reusable.
Now I would like to add something that, in my opinion, also plays a major role here:
Great software is testable.
No matter how pretty your code looks, if you cannot properly test your program, it will be quite hard to maintain and enhance. This is a big problem for the users, even if they don't know or care, but also for you, because software and applications are constantly changing, evolving, in some cases mutating into something very different from the original scope. Let's get real here: guess who will have to do the dirty work?
So often we find ourselves in one of these situations: either we are asked to change our own code (more desirable, but we likely won't remember anything of what we wrote at the time), or we need to change someone else's code (less desirable, highly probable). How can we be sure that we won't break anything after implementing changes if the code cannot be tested? The traditional approach that I've seen over the course of 10 years is this:
Run some superficial test in the development environment; sometimes check that there are no dumps.
Transport to quality environment, test again.
Deliver to the functional consultant, he/she runs the same test.
Release to the user. First test: runtime error.
"Of course, they were testing with X data set and we ran our tests with Y data set" will be the first excuse. And even if it's true, the real issue is that the software is weak, not robust: it can't be tested without real data. In other words, the program has hard dependencies, and therefore it is impossible to test in isolation.
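One common way out is to isolate the dependency behind an interface and inject it, so a unit test can supply its own data instead of reading the database. A minimal sketch (every name here is hypothetical, not from any program mentioned in this post):

```abap
"Hypothetical sketch: hide database access behind an interface
interface lif_order_repository.
  methods read_orders
    returning value(rt_orders) type string_table.
endinterface.

class lcl_order_processor definition.
  public section.
    "Constructor injection: production code passes the real repository,
    "a unit test passes a mock that returns whatever data set it needs
    methods constructor
      importing io_repository type ref to lif_order_repository.
    methods count_orders
      returning value(rv_count) type i.
  private section.
    data mo_repository type ref to lif_order_repository.
endclass.

class lcl_order_processor implementation.
  method constructor.
    mo_repository = io_repository.
  endmethod.
  method count_orders.
    "No SELECT here: the data source is whatever was injected
    rv_count = lines( mo_repository->read_orders( ) ).
  endmethod.
endclass.
```

With this shape, the test never depends on the X or Y data set being present in the system.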
I've been reading a lot about Test-Driven Development, and nowadays I'm starting to feel like Neo in The Matrix: I know kung fu. Seriously, in my experience, TDD is the way to go when it comes to delivering good, quality software.
How do you create good tests, then? Well, this feels like kind of a journey; there are many things to learn. Once again, I couldn't recommend bfeeb8ed7fa64a7d95efc21f74a8c135's book ABAP to the Future more: it has been my guide and also inspired me to start writing these posts.
First of all, ABAP Unit does not have to be a purely technical tool, available only to developers. You don't need a developer key to run unit tests over an ABAP program, so anyone with the proper authorizations to display programs/classes should be able to run the tests. So what if we also involve our business experts and functional consultants?
Well, with Behaviour-Driven Development we can accomplish this. bfeeb8ed7fa64a7d95efc21f74a8c135 talked about it already back in 2013; I'm just discovering it now. BDD is all about simplification inside your test methods, so that the methods flagged FOR TESTING in ABAP Unit have descriptions that make sense to the developer, the business analyst and the user. You accomplish this by naming each test method so that it completes the phrase "It should...". Then, inside that method, you call three helper methods following the pattern "Given... (initial condition)", "When... (method to test)", "Then... (check result)".
This is how I refactored my test class for the ZCL_INVENTORY class:
class lcl_test_class definition deferred.
"Allow access to private components within the class
class zcl_inventory definition local friends lcl_test_class.

class lcl_test_class definition final for testing
  risk level harmless.
  private section.
    types: ty_guitars type standard table of zguitars with empty key.
    data: mo_class_under_test type ref to zcl_inventory,
          guitar_instance     type ref to zcl_guitar,
          guitars             type ty_guitars,
          guitar_to_add       type ref to zcl_guitar,
          guitar_to_search    type ref to zcl_guitar,
          mo_exception_raised type abap_bool,
          found_guitars       type zcl_inventory=>guitars_tab.
    methods:
      "User Acceptance tests:
      add_guitar_to_inventory for testing,
      add_duplicate_and_get_error for testing,
      search_within_the_inventory for testing,
      "Other helper methods
      load_mockups returning value(re_guitars) type ty_guitars.
endclass.
So the idea is to write the "IT SHOULD" methods as if they came straight out of the functional specification document. In my example, the ZCL_INVENTORY class should be able to:
Add guitars to the inventory.
Protect the inventory against duplicate objects.
Search for a guitar within the inventory.
Let's see the implementation of the add_guitar_to_inventory() method:
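The method body could look roughly like this. This is a sketch: only given_guitar_attribs_entered( ) is confirmed later in the post; the when_/then_ helper names are my own assumptions following the pattern described above:

```abap
method add_guitar_to_inventory.
  "Given: a guitar with a known set of attributes
  given_guitar_attribs_entered( ).
  "When: that guitar is added to the inventory
  when_guitar_added_to_inventory( ).
  "Then: the inventory should contain the guitar
  then_guitar_is_in_inventory( ).
endmethod.
```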
This reads like plain English: in order to add a guitar to the inventory, we start with some guitar attributes, then we add the guitar to the inventory and check that it was successfully included. So you run a test for this method, and if there's no green light, you know that something is wrong with this part of the process.
The given_guitar_attribs_entered( ) method just initializes one guitar object:
data: guitar_spec_attributes type zcl_guitar_spec=>ty_guitar_attributes.
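The rest of that helper might look like the sketch below. The component names and the zcl_guitar constructor signature are assumptions (loosely following the Head First guitar example), not taken from the original code:

```abap
method given_guitar_attribs_entered.
  data: guitar_spec_attributes type zcl_guitar_spec=>ty_guitar_attributes.

  "Hypothetical attribute values; component names are assumptions
  guitar_spec_attributes-builder = 'Fender'.
  guitar_spec_attributes-model   = 'Stratocaster'.

  "Create the guitar instance used by the When/Then helpers,
  "assuming the constructor accepts the attributes structure
  guitar_instance = new zcl_guitar( guitar_spec_attributes ).
endmethod.
```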