Technology Blogs by Members
vobu
Active Contributor
Testing as a means of securing investment into UI5 app development has become more prevalent. Yet it’s not as integral a part of every UI5 dev effort as I’d like to see. What’s keeping ya?

To further push the issue, let’s evaluate the most prominent frameworks for end-2-end testing in UI5.

Hint: all code samples and all setup are bolted into the UI5 ecosystem showcase, so you can glance at the code while you’re reading this.

what’s out there


Looking at the test-pyramid, we have essentially three e2e-testing frameworks out there: OPA5, UIVeri5 and wdi5.


test pyramid + wdi5


All three allow for testing user-facing functionality, operating a UI5 application "as a user could": interacting with UI elements.

UIVeri5 and wdi5 both remotely control a browser. That is, they have a runtime separate from the UI5 application. Thus, both need a (web)server of sorts (running the UI5 app under test) they can connect to.

OPA5 differs as it shares the same runtime with the UI5 application - it works adjacent to the UI5 app under test, not separated.

(QUnit is out of the picture for this article, as we’re concentrating on running user-facing tests, not purely functional ones.)

Additionally, wdi5 allows for testing hybrid applications on mobile devices. It can connect to a cordova-wrapped UI5 app on iOS, Android and Electron and run the same tests as with a browser-based app.

Let’s look at and compare all three from a usage standpoint. I’ll try to follow recommended/best practices as much as possible along the way.

(Disclaimer: I thought up wdi5, so I’m certainly biased; yet I’ll try and remain neutral in comparing the frameworks; deal with it 🙂 )

Installation + Setup


OPA5


OPA5 comes with UI5, so no additional installation steps are needed. Yet its setup is not intuitive, mingles in OPA5’s QUnit base, and needs several levels of file inclusion.

webapp/test/integration/opaTests.qunit.html:
	<!-- ... -->
<script src="opaTests.qunit.js"></script>
</head>
<body>
<div id="qunit"></div>
<!-- ... -->

webapp/test/integration/opaTests.qunit.js:
sap.ui.getCore().attachInit(function () {
"use strict";

sap.ui.require([
// all test suites aggregated in here
"test/Sample/test/integration/AllJourneys"
], function () {
// `OPA5`'s mama 🙂
QUnit.start();
});
});

webapp/test/integration/AllJourneys.js:
sap.ui.define([
"./arrangements/Startup", // arrangements
"./NavigationJourney", // actions + assertions
"./BindingJourney", // actions + assertions
"./InteractionJourney" // actions + assertions
], /* ... */

excerpt from webapp/test/integration/BindingJourney.js:
sap.ui.define(["sap/ui/test/opaQunit", "./pages/Main", "./pages/Other"], function (opaTest) {
"use strict";

QUnit.module("Binding Journey");

QUnit.module("Other view: PeopleList: items aggregation");

opaTest("bound status", function (Given, When, Then) {
Given.iStartMyApp();

When.onTheAppPage.iPressTheNavButton();

Then.onTheOtherView.iShouldSeeTheList().and.theListShouldBeBound();
});
// ...

Once yarn dev is started (in / of the UI5-ecosystem-showcase), the OPA5 tests can be run via http://localhost:1081/test/integration/opaTests.qunit.html

UIVeri5


UIVeri5 requires Node.js >=8 and is installed via the standard npm command npm install @ui5/uiveri5.

It’s then set up via a config file and files containing the test code - make sure both reside in the same filesystem folder.

minimal config file: webapp/test/e2e/conf.js
exports.config = {
profile: "integration",
baseUrl: "http://localhost:1081/index.html"
};

exemplary test file webapp/test/e2e/binding.spec.js:
describe("binding", function () { // remember the suite name for the file name!
it("Other view: PeopleList: items aggregation", function () {
element(
by.control({
viewName: "test.Sample.view.Main",
id: "NavButton"
})
).click()
/* ... */
})
})

A common pitfall is that UIVeri5 requires the test file’s name to be identical to the test suite name - e.g. if the suite name is binding, the test file must be saved as binding.spec.js.

Then UIVeri5 can be started via (in /)

  • first launch the webserver: yarn dev (or yarn start:ci)

  • yarn test:uiveri5


wdi5


Similar to UIVeri5, wdi5 lives in Node.js land. Depending on whether you want to use the plain browser-based runtime (wdio-ui5-service) or the hybrid-app test driver (wdi5), installation and setup differ.

As there’s no equivalent for testing hybrid apps with OPA5 or UIVeri5, we’ll concentrate on wdi5’s browser-only incarnation wdio-ui5-service - for brevity’s sake, we’ll refer to it as wdi5 in this post, even though that might confuse the two packages even more 🙂

With Node.js >= 12: npm i @wdio/cli wdio-ui5-service.
wdi5 is based on Webdriver.IO (wdio), yet is dependency-free itself, so the "base" needs to be installed manually.

The config file is identical to wdio’s and only needs the "ui5" service listed (yes, wdi5 is an official Webdriver.IO service):
/packages/ui5-app/wdio.conf.js:
// ...
services: [
// other services like 'chromedriver'
// ...
'ui5'
]
// ...

Hint: you might want to run npx wdio config to see the plethora of features wdio and thus wdi5 provide as a test-runner.

exemplary test file webapp/test/e2e-wdi5/tests.test.js:
describe("binding", () => {
it("Other view: PeopleList: items aggregation + amount of items", () => {
browser.asControl(navFwdButton).firePress()

const oList = browser.asControl(list)
const aListItems = oList.getAggregation("items")
expect(aListItems.length).toBeGreaterThanOrEqual(1)
})
})

Similar to UIVeri5, run wdi5 via (in /)

  • first launch the webserver: yarn dev (or yarn start:ci)

  • yarn test:wdi5


Selectors, Locators + Usage


Selectors


The topic "selectors" can be handled quickly: all share the same syntax 🙂

So no matter whether you write a test in OPA5, UIVeri5 or wdi5, a typical selector object always looks similar:
{
viewName: "test.Sample.view.Main",
id: "NavButton"
}

Using it in OPA5 always requires embedding it in a waitFor():
// When || Then || (in page object method) this
When.waitFor({
viewName: "test.Sample.view.Main",
id: "NavButton",
// ...
})

In UIVeri5, it gets a little more bracket-ish:
element(
by.control({
viewName: "test.Sample.view.Main",
id: "NavButton"
})
)

Similarly, in wdi5, wrapping the selector into another object is needed (because additional properties are possible at the selector level):
{
selector: {
viewName: "test.Sample.view.Main",
id: "NavButton"
}
}

Locators


Generally speaking, all three frameworks support the same locator options for UI5 controls. Controls can be retrieved via their id and View association, but also via a binding path or a property value.
Check out each framework’s documentation on the subject.
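For illustration, here’s roughly what such selectors look like as plain objects. The key names follow the declarative UI5 test selectors (properties, bindingPath), but treat the exact shape and the sample binding path as assumptions to verify against the docs of the framework you use:

```javascript
// locating a control by a property value instead of by id - key names
// follow the declarative UI5 test selectors; verify against the docs
// of the framework you use
const byProperty = {
  viewName: "test.Sample.view.Main",
  controlType: "sap.m.Title",
  properties: {
    text: "#UI5 demo"
  }
};

// locating a control by its binding path - "/People/0" is a made-up
// example path, not taken from the showcase app
const byBinding = {
  viewName: "test.Sample.view.Other",
  controlType: "sap.m.StandardListItem",
  bindingPath: {
    path: "/People/0"
  }
};

console.log(byProperty.properties.text, byBinding.bindingPath.path);
```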

Usage


As the brief code examples above already hint, interacting with located UI5 controls differs between the frameworks.

OPA5


In OPA5, there are predefined actions that allow interactions with a located UI5 control:
// webapp/test/integration/InteractionJourney.js
When.waitFor({
viewName: "Main",
id: "DateTimePicker",
actions: new EnterText({ // from "sap/ui/test/actions/EnterText"
text: "2020-11-11",
pressEnterKey: true,
}),
})

Further programmatic querying of a UI5 control is possible in the waitFor’s success handler. All of the control’s native methods can be used:
Then.waitFor({
id : "productList",
viewName : "Category",
success : function (oList) {
var oItems = oList.getItems()
// ...
}
})

OPA5 reaches its limits when interactions with generated UI5 elements inside a control are required, e.g. selecting a date in a sap.m.DateTimePicker’s calendar pop-out. (You could argue that using jQuery for that purpose and doing the $(/id/).trigger("tap") dance might be an option, but…let’s not go there 🙂 ).

Also, operating any elements or controls (think "reload") outside of the UI5 app is not possible with OPA5.

UIVeri5


With UIVeri5, interaction with located UI5 controls is mainly possible via the underlying Protractor API:
// from webapp/test/e2e/interaction.spec.js
const input = element(
by.control({
viewName: "test.Sample.view.Main",
id: "DateTimePicker",
})
)
input
.sendKeys("2020-11-11")
.then((_) => {
// this is very likely overlooked...
return input.sendKeys(protractor.Key.ENTER)
}) /*...*/

Once located in UIVeri5, unfortunately only a limited subset of a UI5 control’s native API is exposed: pretty much only the getProperty() method, chained via .asControl().
// ...continuing above Promise chain
.then((_) => {
return input.asControl().getProperty("value")
})
/* ... */

Additional UI5 API methods, be it convenience shortcuts like getVisible() or aggregation-related calls such as sap.m.List.getItems(), are not available on located controls.

Theoretically, UIVeri5 allows remotely operating all browser functions (think "reload") via the underlying Protractor API - but it’s then up to the developer to re-inject the UI5 dependency into the async WebDriver control flow.

wdi5


wdi5 proxies all public API methods of a located UI5 control to the test.
// from webapp/test/e2e-wdi5/tests.test.js
const dateTimePicker = {
forceSelect: true, // don't cache selected control
selector: {
viewName: "test.Sample.view.Main",
id: "DateTimePicker"
}
}
const oDateTimePicker = browser.asControl(dateTimePicker)
oDateTimePicker.setValue("2020-11-11") // UI5 API!

To stay API-compliant with UIVeri5, wdi5 offers browser.asControl(<locator>) to make the located UI5 control’s API accessible, including access to its aggregation(s).
// from webapp/test/e2e-wdi5/tests.test.js
const oList = browser.asControl(list)
const aListItems = oList.getAggregation("items")

With wdio as its base, all functionality of Webdriver.IO can be used with wdi5.

Similar to UIVeri5, wdi5 allows for late- and re-injecting its UI5 dependency, so browser reload/refresh scenarios as well as tests against localStorage et al. are possible.

Test syntax + Utilization


Probably the biggest difference between the three frameworks is the approach to actually writing the tests. For comparison’s sake, I’ve coded both "plain" and Page-Object-style tests in OPA5, UIVeri5 and wdi5.

For OPA5, the test/integration/InteractionJourney.js is coded "plain",
same for UIVeri5 in test/e2e/interaction.spec.js and
wdi5 in test/e2e-wdi5/tests.test.js (describe('interaction')).


interaction test in opa5, uiveri5 + wdi5


Encapsulating test functionality in Page Objects is done for

  • OPA5: test/integration/pages/* with test/integration/BindingJourney.js + test/integration/NavigationJourney.js

  • UIVeri5: test/e2e/pages/* with test/e2e/binding.spec.js + test/e2e/navigation.spec.js

  • wdi5: /test/e2e-wdi5/pages/* with test/e2e-wdi5/tests.test.js (describe('navigation'))


 

OPA5


Every action and assertion needs to be wrapped in a waitFor(), be it inside a test or a Page Object:
opaTest("...", function (Given, When, Then) {
// Arrangements
Given.iStartMyApp();
// Action
When.waitFor({ /*...*/ });
// Assertion
Then.waitFor({/*...*/ });
});

This makes more complex scenarios syntactically complicated to write. Also, nested actions/assertions get difficult to maintain.

A synchronous notation is encouraged, using waitFor-sequences that automagically…well… wait for the underlying asynchronous Promises to resolve.

However, running single tests or only one "Journey" requires constantly commenting source code in and out
→ clumsy dev-time turnaround.

Assertions are done via Opa5.assert (from sap/ui/test/Opa5) and offer ok, equal, propEqual, deepEqual, strictEqual and their negative counterparts:
// from test/integration/InteractionJourney.js
Then.waitFor({
viewName: "Main",
id: "DateTimePicker",
success: function (oDateTimePicker) {
Opa5.assert.ok(oDateTimePicker.getValue().match(/2020/), "value contains 2020");
Opa5.assert.ok(oDateTimePicker.getValue().match(/11/), "value contains 11");
},
});

 

debugging


For halting the test and runtime, another waitFor needs to be used:
When.waitFor({success: function() { debugger; } })

This stops the OPA5 test at the desired location. Even though aged, most details are still valid in https://blogs.sap.com/2018/09/18/testing-ui5-apps-part-2-integration-aka-opa-testing/ by yours truly 🙂

reporting/output


With OPA5 sharing the UI5 runtime with the application, its output is by definition browser-based.


In combination with the karma test-runner, a console-based report can be achieved. As this adds another layer of installation/setup/configuration, I’m not going to cover it here. But you can try it out via (in /) yarn test:opa5 and yarn test:opa5-ci 🙂

UIVeri5


UIVeri5 test suites are limited to one suite (describe) per file, offering little organizational capability.

While locating UI5 controls at test time is synchronous in notation, all other significant (inter-)actions in a test require async coding approaches, e.g. via a Promise chain:
// from test/e2e/interaction.spec.js
input
.asControl()
.getProperty("value")
.then((value) => {
expect(value).toMatch(/2020/);
expect(value).toMatch(/15/);
})
.catch((err) => {
return Promise.reject("gnarf");
});

UIVeri5 uses Jasmine matchers for the test assertions. When a test consists only of a basic interaction in conjunction with an assertion, these matchers make it possible to forego explicit Promise chains:
// from test/e2e/binding.spec.js
const list = element.all(
by.control({
viewName: "test.Sample.view.Other",
controlType: "sap.m.StandardListItem",
})
);
// no Promise chain here
expect(list.count()).toBeGreaterThan(2)

Deciding when to use which coding approach already highlights one of the challenges of working with UIVeri5: handling the async vs. sync parts of test code is …well… lots of trial and error. It’s not always clear which method comes from the Protractor API, which from native WebdriverJS, and which is UIVeri5-custom.

The Page Object pattern in UIVeri5 works well:
// from test/e2e/pages/main.view.js
module.exports = createPageObjects({
Main: {
arrangements: {
iStartMyApp: () => {
return true
}
},
// ...
assertions: {
iShouldSeeTheApp: () => {
const title = element(
by.control({
viewName: "test.Sample.view.Main",
controlType: "sap.m.Title",
properties: {
text: "#UI5 demo",
},
})
);
// don't know why this isn't title.asControl() ... 😞
expect(title.getText()).toBe("#UI5 demo");
}
}
}
})

// from test/e2e/navigation.spec.js
const mainPageObject = require("./pages/main.view");
// ...
describe("navigation", function () {
it("should see the initial page of the app", function () {
Given.iStartMyApp();
Then.onTheMainPage.iShouldSeeTheApp();
});
// ...
})

Main from the page object API translates to onTheMainPage in the test, just as arrangements maps to Given and assertions to Then. Certainly nice BDD, and although comfortable, it feels a little like too much magic is going on under the hood 🙂
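To demystify that a bit, here’s a simplified, self-contained sketch of my own of what createPageObjects conceptually does with those keys - this is NOT UIVeri5’s actual implementation, just an illustration of the mapping:

```javascript
// simplified sketch of the createPageObjects key mapping -
// NOT UIVeri5's actual implementation:
// "Main" -> onTheMainPage, arrangements -> Given, assertions -> Then
function createPageObjectsSketch(pages) {
  const Given = {};
  const When = {};
  const Then = {};
  for (const [pageName, page] of Object.entries(pages)) {
    const key = `onThe${pageName}Page`; // "Main" -> "onTheMainPage"
    Object.assign(Given, page.arrangements || {}); // arrangements land on Given directly
    When[key] = page.actions || {};
    Then[key] = page.assertions || {};
  }
  return { Given, When, Then };
}

// usage, mirroring the Main page object from above
const { Given, Then } = createPageObjectsSketch({
  Main: {
    arrangements: { iStartMyApp: () => true },
    assertions: { iShouldSeeTheApp: () => true }
  }
});
console.log(Given.iStartMyApp()); // true
console.log(Then.onTheMainPage.iShouldSeeTheApp()); // true
```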

debugging


That also drills through to the debugging part. Out of the box, UIVeri5 offers the typical Node.js debugging approach: npx uiveri5 --debug exposes the process for the Node debugger to attach to.

But debugging the actual test is difficult when only the above-mentioned sync coding is used - there’s no place to actually put a breakpoint to determine a UI5 control’s value. So typically, UIVeri5 tests need to be rewritten in async coding syntax in order to hook into the proper spot during test execution:
// original sync syntax
const myElement = element(by.control({ id: /.../ }))
expect(myElement.getText() /* can't put a breakpoint here */).toEqual("myText")

// needs to be rewritten for debugging to:
myElement.getText().then(text => {
/* put breakpoint here or use "debugger" statement */
expect(text).toEqual("myText");
})

While not a big deal, this still adds overhead to the dev-time turnaround.

reporting/output


UIVeri5’s reporters are customizable, but offer little overview in their default setting, producing lots of noise versus result.


Still, command-line API and reporter FTW!

wdi5


Thanks to Webdriver.IO, wdi5 test code is entirely synchronous sugar:
// from test/e2e-wdi5/tests.test.js
describe("interaction", () => {
it("should manually allow date input", () => {
const oDateTimePicker = browser.asControl(dateTimePicker)
oDateTimePicker.setValue("2020-11-11")
/* put breakpoint here or "browser.debug()" to
inspect runtime state of oDateTimePicker */
expect(oDateTimePicker.getValue()).toMatch(/2020/)
expect(oDateTimePicker.getValue()).toMatch(/11/)
})
// ...
})

Test suites are organized via (multiple and nested) describe statements. Per default, wdi5 uses mocha’s describe and it keywords, although jasmine-style notation is possible (all configured in wdio.conf.js).

Assertions are a superset of Jest.js matchers, extended by Webdriver.IO-specific methods.

Page Objects in wdi5 are plain ES6 classes and can be used just as such in the tests:
// in test/e2e-wdi5/pages/Main.js
class Main extends Page {
_viewName = "test.Sample.view.Main"
_navFwdButton = {
forceSelect: true,
selector: {
viewName: "test.Sample.view.Main",
id: "NavButton"
}
}

iShouldSeeTheApp() {
return (
browser
.asControl({
forceSelect: true,
selector: {
viewName: this._viewName,
controlType: "sap.m.Title",
properties: {
text: "#UI5 demo"
}
}
})
.getText() === "#UI5 demo"
)
}
// ...
}
module.exports = new Main()

// in test/e2e-wdi5/tests.test.js
const MainPage = require("./pages/Main")
describe("navigation", () => {
it("should see the initial page of the app", () => {
MainPage.open() // inherited from Page.js
expect(MainPage.iShouldSeeTheApp()).toBeTruthy()
})
// ...
})

In combination with class inheritance, Page Objects in wdi5 become highly reusable: you define shared methods in a base class and re-use them in child classes:
// from packages/ui5-app/webapp/test/e2e-wdi5/pages/Page.js
module.exports = class Page {
open(sHash) {
// wdi5-specific
browser.goTo({sHash: `index.html${sHash}`})
}
}

// in test/e2e-wdi5/pages/Main.js
class Main extends Page {
// ...
}

// in a test file you can now do
const Main = require("./path/to/Main")
Main.open()

debugging


The synchronous coding style makes it easy to put a breakpoint in a test at any line and inspect the runtime state of a UI5 control.

For launching wdi5 in debug mode, either run the regular npx wdio from a VSCode "Debug" terminal or add an inspect flag to wdio.conf.js.

reporting/output


Many reporters are available for Webdriver.IO that subsequently work for wdi5 as well.

Per default, the spec reporter offers a concise status report:



Performance


Time is valuable, not only in testing. Especially during development: the faster a test-runner executes, the quicker the developer can react to the results and fix/add/change tests.

And most certainly in CI/CD: the quicker a suite of tests runs, the faster Pull/Merge Requests can pass through, the faster features can be (continuously) delivered, the higher the literal quality of service.

All tests in the ecosystem showcase share the same requirements - with the OPA5 tests being a small exception, as they are missing the calendar-popup interaction OPA5 is incapable of. Thus, OPA5 has a slight performance advantage per se.

Nevertheless, here are the metrics of the tests, all run in headless Chrome (karma-ui5 as the test-runner for OPA5), with minimum log level set.

Hardware is my MacBook Pro 2.3 GHz Quad-Core Intel Core i7 with 32 GB RAM (yeah, I know).

Every test framework was run 100 times (really!); these are the mean runtimes:

OPA5: 12.03 sec

UIVeri5: 34.02 sec

wdi5: 32.02 sec
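For transparency, the mean over the runs was computed along these lines - the sample numbers below are made-up placeholders, not the measured data:

```javascript
// mean runtime over n test runs - the sample data is made up
// for illustration, not the actual measurements
function mean(runtimes) {
  const sum = runtimes.reduce((acc, t) => acc + t, 0);
  return sum / runtimes.length;
}

// hypothetical sample: three runs of a suite, in seconds
console.log(mean([11.9, 12.1, 12.1]).toFixed(2)); // "12.03"
```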

Summary


Which framework is "the best", which one is "the winner"?

Clearly there’s no straight answer to that, as each one has its advantages and disadvantages, serving certain use cases best.

OPA5 is the fastest of the three, as it shares the runtime with UI5, saving infrastructure overhead such as launching the browser itself. Yet it quickly reaches its limits when more advanced test behavior is required, such as cross-interaction amongst UI5 controls or operating on elements other than UI5 controls. Plus, it feels clumsy with the manifold waitFor interactions.

With UIVeri5, it’s possible to really operate the UI as an external user would, including elements and features outside of UI5 controls. But UIVeri5’s core (Protractor, WebdriverJS) feels dated, and the glue between the core elements sometimes misses the last implementation mile (not possible to set a log level in conf.js?!). Certain limits such as the forced correlation between suite name (describe) and filename also add to that impression. Unfortunately, only a subset of UI5 API methods is available on a control at test time. Also, the documentation seems fragmented - yet, a bit hidden in the docs, UIVeri5 has pre-built authenticators, with SAP Cloud Platform SAP ID amongst them.

wdi5 is the youngest framework amongst the three and benefits greatly not only from its Webdriver.IO core, but also from the integration with the sap.ui.test.RecordReplay API. The latter enables the OPA5-style locators at test time. The syntactic sugar for test notation, the wide variety of options (try running npx wdio config!), TypeScript support and the large coverage of UI5 API methods on a control make using wdi5 feel more up-to-date than working with the other two frameworks. Combined with its larger sibling, the actual wdi5 npm module (see the docs for the distinction), it can not only drive tests in the browser scope, but also run them against hybrid apps on iOS, Android and Electron.

So which one to pick?

In terms of licensing, there most certainly is a winner: wdi5 lives under the derived beer-ware license, encouraging all users to frequently buy the contributors a beer.

In that regard: cheers!