This is the fourth and final part of my blog post series about HCP - ABAP communication. You can find the first post and the list with other blog posts from these series here: HCP, ABAP and websocket part 1 .
The source code for this blog post is in this commit: 63df829.
Today, I will explain how I managed to synchronize the previously built HCP application from week 1 with the ABAP / UI5 / Netweaver application built in weeks 2 and 3. All necessary modifications were on the HCP side (the on-premise part was untouched).
First, I modified the UI5 application so that it consumes data from the on-premise system. This involves exposing the on-premise OData model via the HANA Cloud Connector through a destination.
The main difficulty here is that, because we have a Java application, we cannot use the HTML5 application dispatcher to access on-premise destinations. The solution that I used is explained in more detail in another blog post of mine: Accessing on-premise from XS on trial.
The foundation of this approach is the HTTP proxy: a custom class built to act like a proxy (or dispatcher; the concept is loosely based on the Java EE 6 RequestDispatcher class). In my implementation, the proxy is a servlet: the BackendProxy servlet. Requests made to this servlet behave as if they were made directly to the backend OData service.
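The actual servlet code is in the linked commit; as a rough illustration of the dispatching idea, here is a minimal sketch of the path rewriting such a proxy has to perform before forwarding a request through the destination (class and service names are made up for the example):

```java
// Hypothetical sketch: requests arriving under the proxy servlet's own
// mapping are translated to the on-premise OData service path before
// being forwarded via the HCP destination.
public class ProxyPathMapper {
    private final String localPrefix;   // e.g. the servlet mapping ("/backend.svc")
    private final String backendPrefix; // e.g. the on-premise OData root

    public ProxyPathMapper(String localPrefix, String backendPrefix) {
        this.localPrefix = localPrefix;
        this.backendPrefix = backendPrefix;
    }

    /** Rewrites an incoming request path to the backend path, keeping the query string. */
    public String toBackendPath(String requestPath) {
        if (!requestPath.startsWith(localPrefix)) {
            throw new IllegalArgumentException("Not a proxied path: " + requestPath);
        }
        return backendPrefix + requestPath.substring(localPrefix.length());
    }
}
```

In the real servlet, the rewritten path would be used to open a connection through the on-premise destination, and the response body and headers would be copied back to the client.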
After adding this class, I also adjusted the Component.js file by adding this new "data source" to our list of possible configurations. See the backendServiceConfig property below:
serviceConfig : { name: "", serviceUrl: "model.svc/"},
ws: { url: "./ws", cooldown: 1000 },
abapServiceConfig : { name: "", serviceUrl: "/sap/opu/odata/SAP/Z<GW_SERVICE_NAME>/"},
abapWs: { url: "/sap/bc/apc/sap/z<push_channel_name>", cooldown: 1000 },
backendServiceConfig : { name: "", serviceUrl: "backend.svc/"},
backendWs: { url: "./bws", cooldown: 1000},
//...
var p = getQueryParams(document.location.search);
if (p.location == "abap") {
    mConfig.serviceConfig = mConfig.abapServiceConfig;
    mConfig.ws = mConfig.abapWs;
} else if (p.location == "backend") {
    mConfig.serviceConfig = mConfig.backendServiceConfig;
    mConfig.ws = mConfig.backendWs;
}
If we run the app now with the location=backend URL parameter, the data from the on-premise system should be displayed (but the websocket won't work yet; that is the next step).
Delete requests from the HCP UI5 app should also work and should trigger a refresh on the dynpro and on-premise UI5 apps, but the reverse won't work yet (i.e. deleting a row in the dynpro won't trigger a refresh in the cloud).
Now we just need to synchronize the websocket part, such that the UI5 interface is refreshed when a user deletes a row on the on-premise system.
First, we need to build an endpoint. I simply copied the same endpoint from before into a new class: BackendEndpoint. I did not reuse the exact same class, because the switch between the pure-HCP version (location empty) and the backend version (location=backend) is done via this "location" parameter on the frontend.
The endpoint alone cannot tell which data each client is actually using (the data stored in the HCP DB or the data from the on-premise system). So, to avoid having to send extra information to the websocket endpoint, we simply create a new one. I also mapped this new endpoint to the URL I set in the Component.js file ("/bws"). Now the app can be tested again: the websocket error should no longer appear, but the synchronization won't work yet.
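The endpoint itself is in the commit; conceptually, it only has to keep track of the connected sessions and broadcast a "refresh" message to all of them. Here is a rough, framework-free sketch of that session bookkeeping (names are made up; the real class uses the JSR-356 websocket API, where each session would be a javax.websocket Session):

```java
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.function.Consumer;

// Framework-free sketch of what an endpoint like BackendEndpoint does:
// track open sessions and push a "refresh" text message to each of them.
public class RefreshBroadcaster {
    // Thread-safe set, since sessions open/close concurrently with broadcasts
    private static final Set<Consumer<String>> SESSIONS = new CopyOnWriteArraySet<>();

    public static void onOpen(Consumer<String> session)  { SESSIONS.add(session); }
    public static void onClose(Consumer<String> session) { SESSIONS.remove(session); }

    /** Called by the poller when the backend reports a change. */
    public static void sendRefresh() {
        for (Consumer<String> session : SESSIONS) {
            session.accept("refresh");
        }
    }
}
```

In the real endpoint, each consumer would be a websocket session whose remote endpoint receives the text message instead.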
For the synchronization to work, we need to be able to subscribe to the on-premise AMC channel. Because we cannot do this directly, an RFC is the solution. Instead of creating a new RFC, I simply reused the one from part 3. We need to call this RFC from the Java app, so a destination to the on-premise system is also needed.
Analogous to the way I synchronized the UI5 and dynpro on-premise apps, I built a small class which just calls this RFC and sends "refresh" messages to the BackendEndpoint whenever the RFC call returns (if no timeout has occurred).
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;

public enum RFCPoller {
    INSTANCE;

    private static final Logger LOGGER = LoggerFactory.getLogger(RFCPoller.class);

    public void start() {
        try {
            // access the RFC destination; replace RFC_DEST with your own destination name
            JCoDestination destination = JCoDestinationManager.getDestination("RFC_DEST");
            // look up the refresh function module in the backend repository
            JCoRepository repo = destination.getRepository();
            Caller c = new Caller(destination,
                    repo.getFunction("ZDEMO_FM_REFRESH"));
            c.start();
        } catch (JCoException e) {
            LOGGER.error("JCo Exception: " + e.getMessage());
        }
    }

    private static class Caller extends Thread {
        private final JCoFunction f;
        private final JCoDestination d;

        public Caller(JCoDestination d, JCoFunction f) {
            this.d = d;
            this.f = f;
        }

        @Override
        public void run() {
            try {
                do {
                    // blocks until the RFC returns: either a change happened or it timed out
                    f.execute(d);
                    JCoParameterList exports = f.getExportParameterList();
                    String timeout = exports.getString("EF_TIMEOUT");
                    if (timeout.trim().length() == 0) {
                        // no timeout flag set --> a row was really deleted, notify the clients
                        BackendEndpoint.sendRefresh();
                    }
                } while (true);
            } catch (JCoException e) {
                LOGGER.error("JCo Exception: " + e.getMessage());
            }
        }
    }
}
This Poller class just creates a new thread responsible for calling the RFC in a loop and sending the refresh message to the BackendEndpoint when necessary.
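Stripped of the JCo specifics, the poller implements a classic long-polling loop: make a blocking call, and notify clients only when it returned because of a real event rather than a timeout. A self-contained sketch of that pattern, with made-up names (the supplier stands in for the RFC call and the runnable for the endpoint's refresh broadcast):

```java
import java.util.function.Supplier;

// Long-polling skeleton: the blocking call returns a timeout flag; an empty
// flag means a real event occurred and the listener should be notified.
public class LongPoller {
    private final Supplier<String> blockingCall; // stands in for f.execute(d) + reading EF_TIMEOUT
    private final Runnable onEvent;              // stands in for BackendEndpoint.sendRefresh()

    public LongPoller(Supplier<String> blockingCall, Runnable onEvent) {
        this.blockingCall = blockingCall;
        this.onEvent = onEvent;
    }

    /** Runs a fixed number of polling rounds (the real class loops forever). */
    public void poll(int rounds) {
        for (int i = 0; i < rounds; i++) {
            String timeout = blockingCall.get();
            if (timeout.trim().isEmpty()) {
                onEvent.run(); // the call returned because of a change, not a timeout
            }
        }
    }
}
```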
Now for the final test:
When deleting a row from any of these three user interfaces (I simply opened three browser windows and put them next to each other), all the other UIs refresh automatically.
To summarize: using all these artefacts, I managed to synchronize, in real time, a UI5 cloud app, a UI5 on-premise app and a dynpro report.
This concludes my blog post series; I hope you liked it :smile:.