
Testing UI5 apps

Test setup is done, functional and integration tests are written and both the Mockserver and cross-browser test automation are in place.

Let's now look at some more advanced test cases/scenarios that might even touch the grey area between "not by the books" and "needs to be done".
Repeat hint throughout the blog series: the code (along with installation instructions) is in the accompanying repository:
git clone, npm install, grunt serve -> http://localhost:8080, then code/follow along.

Age hint: this blog series started when ui5-tooling was still in SAP's womb 🙂 That's why it's not used here, with grunt pulling all the task-running weight.

Here's the TOC for this article:

  • QUnit fixture and QUnit DOM for custom controls

  • Testing Drag'n'Drop

  • OPA negative tests

  • Visual regression testing with UIveri5

QUnit fixture and QUnit DOM for custom controls

QUnit provides a special HTML element, qunit-fixture, that can be used as a DOM instantiation area. Essentially this means you can place any UI5 view/controller pair there in order to interact with it and test it.
<!-- ... -->
<div id="qunit"></div>
<div id="qunit-fixture"></div>
<!-- ... -->
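
To illustrate the mechanics in isolation - a minimal sketch, not taken from the accompanying repo: any control can be rendered straight into the fixture and asserted against.

QUnit.test("render a control into qunit-fixture", function (assert) {
    var oButton = new sap.m.Button({ text: "probe" }); // any control works
    oButton.placeAt("qunit-fixture");
    sap.ui.getCore().applyChanges(); // force synchronous rendering
    assert.ok(oButton.getDomRef(), "button is rendered in the fixture");
    oButton.destroy();
});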

A viable use case for moving Integration tests "down" a level to QUnit could be that you want to test only parts of your application, e.g. that full-blown custom control that gets used in a complex UI setup. In order to test that custom control, you'd need to navigate the complex UI during a test, only for "reaching" that custom control. To save that navigation overhead, why not place the custom control in a standalone, dedicated XML view?


Let's assume we have this custom Input control that sets a special placeholder:
sap.ui.define([
    "sap/m/Input"
], /**
 * @namespace
 * @name sap.ui.demo.todo
 * @param {sap.m.Input} Input class of UI5
 * @returns {sap.ui.demo.todo.control.Input} custom Input field
 */
function (Input) {
    "use strict";

    /**
     * custom Input field uses calculation rules and/or values for displaying content
     * @name sap.ui.demo.todo.control.Input
     * @extends sap.m.Input
     */
    return Input.extend("sap.ui.demo.todo.control.Input", /** @lends sap.ui.demo.todo.control.Input.prototype */ {

        // re-use renderer from sap.m.Input
        renderer: {},

        // initialize control
        init: function () {
            Input.prototype.init.apply(this, arguments);
            // the special placeholder checked in the unit test below
            this.setPlaceholder("whaaat");
        }
    });
});

By isolating the custom control in a dedicated XML view in qunit-fixture and instantiating/rendering it there, testing the control becomes a lot more manageable.


<mvc:View xmlns:mvc="sap.ui.core.mvc"
    xmlns:todo="sap.ui.demo.todo.control"
    controllerName="sap.ui.demo.todo.test.unit.view.App">

    <todo:Input id="theCustomControlInput"/>

</mvc:View>


Note: there is an accompanying controller in webapp/test/unit/view/App.controller.js, but it's skeleton-only, with no special methods defined.

What's left now is putting the pieces together:

  • instantiate the view dynamically

  • retrieve the control to test

  • run tests on the control

QUnit.test("test a custom control (working in a 'real' XML view) in a Unit Test", function (assert) {
var fnDone = assert.async();
viewName: "sap/ui/demo/todo/test/unit/view/AppWithInputCC"
.then(function (oView) {
return oView.placeAt("qunit-fixture");
.then(function (oView) {
var oController = oView.getController();
var oInput = oController.byId("theCustomControlInput");
return assert.strictEqual(oInput.getPlaceholder(), "whaaat", "Placeholder checked out fine");
.catch(function (oError) {
// do sth clever here

Bonus: qunit-fixture gets torn down after every test, so there's no potential for interference with other DOM elements or UI5 controls in subsequent tests.

Testing Drag'n'Drop

As of UI5 1.56, Drag and Drop was introduced generically for all UI5 controls; much awaited, as it is a powerful UX concept.

It is utilized on a meta level, with markup describing the drag and drop capabilities of a control:
<!-- xmlns:dnd="sap.ui.core.dnd" -->
<List id="theList">
    <dragDropConfig>
        <dnd:DragDropInfo sourceAggregation="items" targetAggregation="items" drop=".onDrop"/>
    </dragDropConfig>
    <StandardListItem title="item 0"/>
    <StandardListItem title="item 1"/>
    <StandardListItem title="item 2"/>
    <StandardListItem title="item 3"/>
    <StandardListItem title="item 4"/>
</List>

In the above example, the respective {d|D}ragDrop* declarations enable native browser-level events for dragging and dropping list items. What should happen after dropping needs to be handled on controller level (drop=...).
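
Such a drop handler could look like this - a minimal sketch, assuming the drop=".onDrop" wiring from the markup above (draggedControl and droppedControl are the standard parameters of the sap.ui.core.dnd drop event):

// in the view's controller
onDrop: function (oEvent) {
    var oDragged = oEvent.getParameter("draggedControl");
    var oDropped = oEvent.getParameter("droppedControl");
    var oList = this.byId("theList");
    // reinsert the dragged item at the drop target's position
    oList.removeItem(oDragged);
    oList.insertItem(oDragged, oList.indexOfItem(oDropped));
}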

But how do you test that?

The approach is multi-step:

  1. a DOM is needed for programmatically issuing drag/drop actions. So first the view needs to be created and rendered before we can operate on it:
    // XMLView.create is again assumed as the async view factory; the view name is hypothetical
    QUnit.test("testing Drag and Drop", function (assert) {
        var fnDone = assert.async();
        XMLView.create({
            viewName: "sap/ui/demo/todo/test/unit/view/AppWithList"
        })
        .then(function (oView) {
            oView.placeAt("qunit-fixture");
            return oView;
        })
        .then(function (oView) {
            var oController = oView.getController();
            var oList = oController.byId("theList");

            // we need to stay in the rendering cycle for drag and drop,
            // requiring a valid DOM
            // -> view needs to be rendered first before we can drag and drop anything
            oView.attachAfterRendering(function () {
                // test-logic goes here! (eventually call fnDone())
            });
        });
    });

  2. the programmatic drag/drop action needs to happen on DOM level, so the DOM node of the UI5 control(s) used in the Drag'n'Drop scenario needs to be retrieved via getDomRef() and worked with (a sketch of a possible implementation follows after this list):
    function triggerEvent(oControl, sType, mOffset) {
    // ...
    }

  3. the user action of a drag/drop operation needs to be simulated via a browser-native Event:
    function emulateDragDropEvent(sEventType) {
        var oEvent = new Event(sEventType, {
            bubbles: true,
            cancelable: true
        });

        oEvent.dataTransfer = new DataTransfer(); // <- muy importante!

        return oEvent;
    }

    Watch for the dataTransfer property of the event - it again uses the browser-native DataTransfer to equip the Drag'n'Drop-event with information on what DOM nodes are being moved around.

    Omitting the dataTransfer property will lose you any programmatic interaction capabilities!

  4. Fire off:
    // start dragging the first list item
    triggerEvent(oList.getItems()[0], "dragstart");

    The catch here is that a drag and drop action doesn't consist of a single step only, but a series of them - starting the drag operation, moving onto a target element, then dropping the initially dragged node.
    triggerEvent(oSourceListItem, "dragstart");
    triggerEvent(oTargetListItem, "dragenter");
    triggerEvent(oTargetListItem, "dragover");
    triggerEvent(oTargetListItem, "drop");
    triggerEvent(oTargetListItem, "dragend");

Putting this together into a nice "interface", coating it with syntactic sugar, and then consuming it via a "speakable" API can look like the following.
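
The sugar-coated helper itself lives in the repo; a minimal sketch, assuming it simply wraps the five-event sequence from step 4, could be:

function moveListItem(oList, iFrom, iTo) {
    var aItems = oList.getItems();
    triggerEvent(aItems[iFrom], "dragstart");
    triggerEvent(aItems[iTo], "dragenter");
    triggerEvent(aItems[iTo], "dragover");
    triggerEvent(aItems[iTo], "drop");
    triggerEvent(aItems[iTo], "dragend");
}

moveListItem(oList, 0, 1); // e.g. drop "item 0" onto "item 1"

The assertions then verify the item order after a couple of such moves: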

assert.strictEqual(oList.getItems()[0].getTitle(), "item 1", "1 is now the first item");
assert.strictEqual(oList.getItems()[2].getTitle(), "item 0", "0 is at position 3");
assert.strictEqual(oList.getItems()[4].getTitle(), "item 3", "3 moved after 4!");

Nice, ey?!? This might be a good time to peek at the source.

Now, remember that this all happens on Unit Test-level, no OPA5, no UIveri5.
Syntactically, all of the above can be reused in OPA5 as well.
UIveri5 has its own approach to this, via the underlying protractor API, using browser.actions().dragAndDrop(node1, node2).
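
In a UIveri5 spec that could look roughly like this (a sketch; the element locators are hypothetical):

it('should drag the first item onto the second', function () {
    var oSource = element(by.id('item0'));
    var oTarget = element(by.id('item1'));
    browser.actions().dragAndDrop(oSource, oTarget).perform();
});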

OPA negative tests

We looked at Integration Testing with OPA5 in part 2 of the blog series, mostly covering how to find and interact with UI5 controls. However, when application logic is (partly) moved to the UI, it is common practice to dynamically create or destroy parts of the interface. Testing for visible and existing UI5 controls is easy, but how do you check for non-existent controls?

Writing such "negative" test (as in: testing for non-existing things) on OPA level involves essentially two steps:

  • find all UI5 controls within an application

  • check each control's identification and state for visibility and/or existence

There are many ways to recurse over the control tree in a UI5 app, but using jQuery is just so... easy:
// webapp/test/integration/pages/App.js
return this.waitFor({
    controlType: "sap.m.App",
    matchers: function (oApp) {
        var vMap = jQuery("#" + oApp.getId()).find("*");
        _.each(vMap, function ($oControl) {
            var oControl = jQuery($oControl).control();
            // ...
        });
        // ...
    }
});

jQuery's find() on the App's ID delivers all of its descendant DOM nodes.

Iterating over these with SAP's control() extension to jQuery converts each DOM node back to its UI5 control representation.
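
For a single DOM node, that conversion looks roughly like this (a sketch; oDomNode stands for any of the descendant nodes from above):

var aControls = jQuery(oDomNode).control(); // SAP's control() plugin
var oControl = aControls[0];                // the owning UI5 control, if any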

Then it's a matter of examining the control itself and checking that it is not the control we're looking for, thus fulfilling the negative test requirement:
// webapp/test/integration/NegativeJourney.js
opaTest("make sure control doesn't exist", function (Given, When, Then) {
    // ... ("Then.onTheAppPage" is a hypothetical page-object name)
    Then.onTheAppPage
        .iMakeSureThereIsNo("module", "sap.m.Bla")
        .and.iMakeSureThereIsNo("id", "whatEver");
});

// webapp/test/integration/pages/App.js
iMakeSureThereIsNo: function (sKind, sIdentifier) {
    // ... (inside the matcher from above; collecting hits in aMatches is assumed from the return statement below)
    var oControl = jQuery($oControl).control();
    if (oControl && oControl[0]) {
        switch (sKind) {
            case "module":
                if (oControl[0].getMetadata().getName() === sIdentifier) {
                    aMatches.push(oControl[0]);
                }
                break;
            case "id":
                if (oControl[0].getId() === sIdentifier) {
                    aMatches.push(oControl[0]);
                }
                break;
            // ...
        }
    }
    // ...
    return aMatches.length === 0;
}

Again, this might be a good time to peek at the complete source of iMakeSureThereIsNo(sKind, sIdentifier) in order to get that complex matter sorted.

Visual regression testing with UIveri5

The newest kid on the testing block is UIveri5, intended for cross-browser end-to-end tests of UI5 applications. The tool itself is so extensive and powerful that it's worth a separate blog series. I've done a primer on it at SAP Inside Track Frankfurt 2019 that might be worth a look, specifically on how it differs from OPA.

For this article, let's put an emphasis on using UIveri5 for visual regression testing. And let's use a bare minimum test case, so the focus can be on the visual testing part:
// webapp/test/e2e/TodoAppVisual.spec.js
describe('TodoAppVisual', function () {
    it('should compare the start page screenshot to the reference image', function () {
        // toLookAs() is UIveri5's image-comparison matcher; the image name is an assumption
        expect(takeScreenshot()).toLookAs('startPage');
    });
});

In its current version, the profile visual intended for visual tests is... well... suboptimal. A better approach is to reuse the integration profile and enhance it with screenshot capabilities:
// webapp/test/e2e/conf.js
exports.config = {

    // ...
    profile: 'integration',
    baseUrl: 'http://localhost:8080/index.html',

    take: true,
    compare: true,
    update: false,

    storageProvider: {
        name: './image/localStorageProvider',
        refImagesRoot: './target',
        actImagesRoot: './target'
    },
    screenshotProvider: {
        name: './image/localScreenshotProvider',
        screenshotSleep: 100
    },
    comparisonProvider: {
        name: './image/localComparisonProvider'
    },

    // these are important for reference image storage!
    // browsers: [{
    //     browserName: (_chrome_|chromeMobileEmulation|chromeHeadless|firefox|ie|safari|edge),
    //     browserVersion: '*',
    //     platformName: (_windows_|mac|linux|android|ios|winphone),
    //     platformVersion: '*',
    //     platformResolution: 'WIDTHxHEIGHT',
    //     ui5.theme: (bluecrystal|_belize_|hcp),
    //     ui5.direction: (rtl|_ltr_),
    //     ui5.mode: (_cozy_|compact)
    // }]

    // ...
};

take instructs UIveri5 to take screenshots, compare to compare the taken screenshot against a reference image (more on that below), and update to, well, update the reference screenshot.

The other three settings ({storage|screenshot|comparison}Provider) make UIveri5 use local (aka on-your-computer) tooling rather than remote capabilities. BTW: the major tool in use for image comparison is Resemble.js!
Heads up: the following is not the standard way of using UIveri5, so proceed with caution. There are some hacky parts; it's not an out-of-the-box visual comparison supersuite!

very first run

Before being able to compare images, obviously a reference screenshot is needed.

First of all, start the local webserver: grunt serve.

UIveri5 will create reference screenshots automatically when setting update: true in the above config file. The reference images are stored in a folder hierarchy pertaining to the browsers config options above:
/<app dir>/target/images/<testCase>/<imgName from testCase>.ref.png

example: /webapp/test/e2e/target/images/TodoAppVisual/<imgName from testCase>.ref.png

Fire off uiveri5 from the command line and let it do the work - note that the tests in your *.spec.js will all fail during this very first run due to the lack of reference screenshots.

After the run is finished, take the resulting path and files, submit them to version control, and use them for subsequent visual regression test runs!

regression test runs

Once the reference images are in place, set update: false in the config file, and run uiveri5 again (don't forget grunt serve!) - this is what happens:

  • at the point in the test where takeScreenshot() is used, the current state of the application is screenshotted (sp?) and stored in
    /<app dir>/target/report/screenshots/<testCase>-<test name>_<img index>_<pass|fail>_<timestamp>.png

  • the screenshot is compared to the reference image via Resemble.js, with the hardwired settings

    • percentage threshold: 0.1%

    • mismatched pixel difference threshold: 200

    Depending on the comparison result, the test fails or passes.

  • an HTML report (!) is created/updated at webapp/test/e2e/target/report/screenshots/report.html

Example test run recording:

If you utilize UIveri5 cross-browser, e.g. via the command line à la
uiveri5 --browsers=firefox,safari,chrome
the possibilities start showing.

Throw in the fact that UIveri5 supports many authentication options as well as remote image storage, and the potential of using this tool for large-scale visual regression testing becomes obvious.


This article concludes the technical part of my blog series "Testing UI5 apps". Due to the extensive nature of the topic, covering all angles seems a futile effort.

But I sure hope I've stirred up enough dust to get more people from the UI5-verse onto the topic!
For me, exploring the UI5 testing cosmos and writing about it certainly has been, still is and always will be a learning experience.

The last article in the series will be on raising awareness about testing UI5 apps not only to developers, but to all decision makers: a glimpse of real-world numbers and the business impact of having tests and doing testing in UI5 development.