Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
LynnS1
Advisor
We’re going to step up this base app code to take on a variety of use cases that we’ll classify as the Case Worker. Imagine an intelligence analyst who has a region of responsibility, a Social Services Director who has at-risk families and individuals to enroll into support programs, an Investigative Reporter with leads and correlating accounts, or any application user that needs to organize unstructured data within a larger process. This is the Case Worker.

They differ from the Data Scientist, who analyzes data-centric problems and objectives to build automated processes that cleanse and normalize information of all forms through pipelines to deliver insight. Data Scientists know databases, coding, math, and strange names like Spark, Kafka, and Kubernetes, and they will set up the Case Worker with a base set of knowledge to work with.

Our Case Worker will need something different from a data science workbench like SAP Data Hub, which we’ll touch on here too. This needs to be simpler yet just as flexible; something like what we’ll build in the video below.


What will we use in this post?


SAP offers 3 open source products that developers can get their hands dirty with. One is SAP Data Hub, which is based on Docker, Kubernetes, and Kafka and provides users with a workbench to cull multiple sources under a single hood, with tools that empower Data Scientists with everything from cleansing to complex processing that reaches into external APIs or custom algorithms built in Python or R, to name just two. An entire openSAP course, which I highly recommend, covers it in full breadth, so I will only cover minor details here.

The second is OrientDB, a multi-model database with graph support, Apache Lucene, and a nice studio interface to test out graph algorithms and traversals on what is essentially your data lake. It has been our go-to database since the beginning, so we will continue to see what it can do here. We will use it as our main core for giving a user the ability to explore data within the 3rd product, SAP UI5.

Specifically, from SAP UI5, we will be using the NetworkGraph library as before, which is testable but is licensed under SAP Fiori. There are open source alternatives such as d3.js, but for now we will demonstrate how it could be used in a product. For more information on the NetworkGraph library, please see the excellent material provided by Libor Pisa, including creating custom org charts and rendering.

We will combine these 3 elements with the glue of Python, and since we will base it on Docker, we can sync the application up with live streams provided by SAP Data Hub, thus having a Business UX to complement the Data Science UX of SAP Data Hub.

We will also have a look at what SAP is doing with the next release of SAP Fiori, 3.0, which is focused on serving users with intelligence derived from enterprise data. This innovation concept is coined “The Intelligent Enterprise” and is based on combining data sources for automated analysis and machine learning to enhance user experiences, much the way we see in other emerging products. For example, MS Word now provides enhanced suggestions and editing to improve users’ writing. The Intelligent Enterprise does the same, but on a scope with no single industry or line of business, simply the user’s needs.

What is our Objective?


Given the initial definition of the Case Worker, Part 4 will look at a use case of combining data sets to identify individuals at risk. Risk in this simplified use case will be derived by associations of events to persons. An event can be a crime, abuse, education, health, or employment related. The Person will be based on simulated data and have traits including relations to a family. One use case could be prioritizing families based on their associated risks.

We are simulating data streams in which first a family is created, and then several cycles run; based on the simulated person’s traits, a type of event may occur. Right now this simulation is based on age, so a child can’t be involved in a crime or employment event but can be involved in an abuse or education event. Likewise, an adult won’t have an education record but can have a crime or employment record. The simulation will run based on the computer time and then go forward to simulate the streams set up in SAP Data Hub.
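The age gate described above can be sketched as a simple dispatch. Here `choose_action` is a simplified stand-in for the simulation server's own method, and the category names are illustrative:

```python
import random

def choose_action(age_group):
    """Pick an event category allowed for the sim's age group:
    children can have education or abuse events, while adults
    can have crime or employment events."""
    if age_group == 'Child':
        return random.choice(['Education', 'Abuse'])
    if age_group == 'Adult':
        return random.choice(['Crime', 'Employment'])
    # Any other state (e.g. 'Not born') produces no event
    return None
```

The real simulation adds the simclock check on top of this, so that even an eligible sim only acts when its clock wins the round.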

The objective is an app in which we can structure our intelligence from a Business UX, fed by data streams based on the SAP Data Hub Data Scientist's work. Since we are focusing on the Fiori NetworkGraph to visualize data, we will call the structured intelligence simply the current graph. We will start with the following requirements:

  • Simulation of Families for master data

  • Simulation run to act as Publishing transactional stream

  • Storing in the different DBs which act as Subscribers

  • Creating a new DB as a Subscriber just for that Case Worker

  • Basic create, retrieve, update, delete (CRUD) actions

  • Traversal from a node

  • Adding new nodes

  • Adding relationships

  • Viewing information in tables, charts, and network graph


These elements are populated on the graph to give this base application some direction on a development roadmap as part of an open source effort. As use cases are identified, we will prove whether this idea of a multi-model data lake with a Person, Object, Location, Event (POLE) model for graph optimization works. Can it serve not just the public security industry, but the public sector including social services, pharmaceuticals, or any other user requiring a graph of multiple sources to derive intelligence and action?
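As a sketch of what the POLE model could look like in OrientDB, the four entity types map naturally to vertex classes and the relations to edge classes. The class and edge names below are assumptions based on the relations used later in this post:

```python
# Vertex classes for the POLE entities and edge classes for the
# relations the simulation creates between them (assumed names).
POLE_VERTICES = ['Person', 'Object', 'Location', 'Event']
POLE_EDGES = ['BornIn', 'BornOn', 'OccurredAt', 'ParentOf',
              'SiblingOf', 'ChildrenWith', 'Involved']

def pole_schema_ddl():
    """Build the OrientDB DDL statements for the POLE graph schema."""
    ddl = ['CREATE CLASS %s EXTENDS V' % v for v in POLE_VERTICES]
    ddl += ['CREATE CLASS %s EXTENDS E' % e for e in POLE_EDGES]
    return ddl
```

Each statement could then be run against an open client, e.g. `for stmt in pole_schema_ddl(): client.command(stmt)`.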

There are some existing use cases specific to Public sector which can be found here:

That’s the open source challenge in providing this Dockerized, ready-to-go base application that serves as a sidecar to SAP Data Hub. Its current state is based on simulated data with the requirements outlined above. The roadmap includes automation of alerts and integrating multiple use cases to test durability. Saying this is an open source approach implies it isn’t a full-time job, which could further imply quality. So, with that caveat to expect some bugs while testing, let’s go.

Setting up Data Hub


You can see the official tutorial here. But to keep you from jumping around, a Reader's Digest version is below:

Ensure you have at least 8GB of RAM on your system along with Git and Docker which should already be there if you followed with the last post.

  • Git-clone part 4

  • Get SAP Data Hub Development, DatahubDevEdition version

  • Put the zip contents into the project folder

  • Open the Dockerfile and uncomment the proxy settings if you are behind one




  • Go to the command line and change to the project folder > cd DatahubDevEdition

  • Build the image > docker build --tag datahub .

  • Check the images > docker images

  • Create the network > docker network create dev-net

  • Run the container > docker run -ti --env VORA_USERNAME=vora --env VORA_PASSWORD=SomeNicePassword19920706 --publish 127.0.0.1:8090:8090 --publish 127.0.0.1:9225:9225 --publish 30115:30115 --name datahub --hostname datahub --network dev-net datahub run --agree-to-sap-license

    • See the status loop message noting the non-productive usage, local deployments only

    • Note if you are deploying on Digital Ocean, change the IP in publishing to 0.0.0.0



  • Test connectivity by going to your IP address and ports that were published to the outside world

    • Vora modeling: http://nnn.nnn.nnn.nnn:9225/

    • Pipeline modeling: http://nnn.nnn.nnn.nnn:8090/



  • Open a new command prompt and log into the DataHub container to run CLI. Note that you do not need to be in the application folder.

  • Open a new command prompt and go to the data folder within the FioriFlask4 directory to copy the data to datahub.

    • sudo docker cp demo.csv datahub:/tmp/demo.csv



  • Open a new command prompt and start up HDFS > docker run -ti --publish 127.0.0.1:50070:50070 --name hdfs --hostname hdfs --net dev-net datahub run-hdfs


This should give you screens that look like the ones below, with the Modeler on the left and SAP Vora on the right. And if not, you can try the IP address in the snapshot. It is a Digital Ocean server I set up but won't keep up full time.



We won't cover any more of Data Hub here as it's the focus of the Data Scientist user, and we want to focus for now on the Case Worker. However, the idea is to build data streams in Data Hub, and a full tutorial of that can be found, including some use cases we'll make use of:

Let's see how we cover the simulation of data as a separate server within the Flask program. Later we can make it into a micro service like those seen in the Modeler.

Simulating Persons, Objects, Locations, and Events


We will start with a basic creation of families using a pool of names and locations to choose from, and we will use the ODB client we set up earlier as an object within the Simulation server. It will first create the master data as families and then run through rounds of simulation. Whether a person is involved in an event is determined by a variable we will call the simclock.

If the simclock is a high value, then it is likely the person will have an event, because the simulation round produces a random number between 1 and 9. The simclock will iterate each round, as if building up potential energy in a person, and then reset to the other person variable, the simaction, which is the base activity level of a person. Again, if that is a high value, the person will soon be ready for action again.
    def create_family(self, **kwargs):
        """
        A Family is a group of Persons consisting of 2 parents and at least one child. The status of the family is not set.
        It starts with the core_age of one parent with that parent's gender determined by a variable.
        The core_age should be in the form of days old not years so that a more varied age can be applied to relatives.
        The core_age of parent A is determined through a normal distribution of parent ages
        The core_age of parent B is determined through a n dist of parent age differences and the core_age of parent A
        TODO: Make sim dob time more random and not based on computer time
        :param core_age: int that will determine how many days old and then randomizes DateOfBirth based on today
        :return:
        """
        # Process all the options and set to random lists if none provided.
        # Options are there for loops that re-run the function
        if not self.basebook:
            self.basebook_setup()

        if 'core_age' in kwargs:
            core_age = kwargs['core_age'] * 365 + (random.randint(-180, 180))
        else:
            core_age = (int(np.random.normal(loc=self.ParentA_Ages['mean'], scale=self.ParentA_Ages['stdev'])) * 365
                        + random.randint(-180, 180))
        if 'LastName' in kwargs:
            LastName = kwargs['LastName']
        else:
            LastName = random.choice(self.LastNames)

        # Create the first parent
        GenderA = random.choices(self.ParentA_Choices, self.ParentA_Weights)[0]
        if GenderA == 'F':
            FirstName = random.choice(self.FemaleNames)
        else:
            FirstName = random.choice(self.MaleNames)
        # Create the place of birth
        POB_A = self.Locations.sample(1)
        POB_A = self.create_location(
            City=POB_A['city'][int(POB_A.index[0])],
            Country=POB_A['country'][int(POB_A.index[0])],
            Latitude=float(POB_A['lat'][int(POB_A.index[0])]),
            Longitude=float(POB_A['lng'][int(POB_A.index[0])]),
            Population=float(POB_A['pop'][int(POB_A.index[0])]),
            Type=self.POLE
        )
        # Create the person record and key
        parentA = self.create_person(
            DateOfBirth=(datetime.datetime.now() - datetime.timedelta(days=core_age)).strftime('%Y-%m-%d %H:%M:%S'),
            PlaceOfBirth=POB_A['City'],
            LastName=LastName,
            FirstName=FirstName,
            Gender=GenderA,
            Type=self.POLE,
            Category='SIM'
        )
        # Create the relation to place of birth
        self.create_relation(parentA, POB_A, 'BornIn', self.POLE)
        # Create the event for birth
        DOB_A = self.create_event(
            Type=self.POLE,
            Category='Birth',
            DateTime=parentA['DateOfBirth'],
            Description='%s %s born on %s in %s.' % (FirstName,
                                                     LastName,
                                                     parentA['DateOfBirth'],
                                                     POB_A['City']))
        self.create_relation(parentA, DOB_A, 'BornOn', self.POLE)
        self.create_relation(DOB_A, POB_A, 'OccurredAt', self.POLE)

        # Create the second parent based on the first parent and simulation settings
        b_core_age = ((core_age + int(np.random.normal(loc=self.Parent_Age_Difference['mean'],
                                                       scale=self.Parent_Age_Difference['stdev'])))
                      + (random.randint(-180, 180)))
        if random.choices(self.Parent_SameGender_Choices, self.Parent_SameGender_Weights)[0]:
            GenderB = GenderA
        else:
            if GenderA == 'F':
                GenderB = 'M'
            else:
                GenderB = 'F'

        if GenderB == 'F':
            FirstName = random.choice(self.FemaleNames)
        else:
            FirstName = random.choice(self.MaleNames)
        LastNameB = random.choice(self.LastNames)

        POB_B = self.Locations.sample(1)
        POB_B = self.create_location(
            City=POB_B['city'][int(POB_B.index[0])],
            Country=POB_B['country'][int(POB_B.index[0])],
            Latitude=POB_B['lat'][int(POB_B.index[0])],
            Longitude=POB_B['lng'][int(POB_B.index[0])],
            Population=POB_B['pop'][int(POB_B.index[0])],
            Type=self.POLE,
        )
        # Create the person record and key
        parentB = self.create_person(
            DateOfBirth=(datetime.datetime.now() - datetime.timedelta(days=b_core_age)).strftime('%Y-%m-%d %H:%M:%S'),
            PlaceOfBirth=POB_B['City'],
            LastName=LastNameB,
            FirstName=FirstName,
            Gender=GenderB,
            Type=self.POLE,
        )
        # Create the relation to place of birth
        self.create_relation(parentB, POB_B, 'BornIn', self.POLE)
        # Create the event for birth
        DOB_B = self.create_event(
            DateTime=parentB['DateOfBirth'],
            Type=self.POLE,
            Category='Birth',
            Description='%s %s born on %s in %s.' % (FirstName,
                                                     LastNameB,  # parent B's own last name
                                                     parentB['DateOfBirth'],
                                                     POB_B['City']))
        self.create_relation(parentB, DOB_B, 'BornOn', self.POLE)
        self.create_relation(DOB_B, POB_B, 'OccurredAt', self.POLE)
        # TODO Create origin based location
        # TODO Create behavior pattern variables for turn based simulation and agent based motivations

        # Create the relation between the parents
        self.create_relation(parentA, parentB, 'ChildrenWith', self.POLE)

        # Create the children starting with the oldest based on an age derived from random parent age and Sim settings
        core_age = (random.choice([core_age, b_core_age]) / 365 - int(
            np.random.normal(loc=self.Parent_Child_Age_Difference['mean'],
                             scale=self.Parent_Child_Age_Difference['stdev']))) * 365
        i = 0
        children = {}
        LastName = random.choice([LastName, LastNameB])
        childrencount = int(np.random.normal(loc=self.ChildrenCount['mean'], scale=self.ChildrenCount['stdev']))
        if childrencount < 2:
            childrencount = 2
        while i < childrencount:
            Gender = random.choices(self.Child_Gender_Choices, self.Child_Gender_Weights)[0]
            if Gender == 'M':
                FirstName = random.choice(self.MaleNames)
            else:
                FirstName = random.choice(self.FemaleNames)

            POB = random.choice([POB_A, POB_B])
            child = self.create_person(
                DateOfBirth=(datetime.datetime.now() - datetime.timedelta(days=core_age)).strftime('%Y-%m-%d %H:%M:%S'),
                PlaceOfBirth=POB['City'],
                LastName=LastName,
                FirstName=FirstName,
                Gender=Gender,
                Type=self.POLE
            )
            # Create the relation to place of birth
            self.create_relation(child, POB, 'BornIn', self.POLE)
            # Create the event for birth
            DOB = self.create_event(
                Type=self.POLE,
                Category='Birth',
                DateTime=child['DateOfBirth'],
                Description='%s %s born on %s in %s.' % (FirstName,
                                                         LastName,
                                                         child['DateOfBirth'],
                                                         POB['City']))
            self.create_relation(child, DOB, 'BornOn', self.POLE)
            self.create_relation(DOB, POB, 'OccurredAt', self.POLE)
            children[child['key']] = child
            # Create the relations between the parents and the child
            self.create_relation(parentA, child, 'ParentOf', self.POLE)
            self.create_relation(parentB, child, 'ParentOf', self.POLE)
            # Increment the age for the next kid
            core_age = core_age - random.randint(300, 1500)
            i += 1
        # Create the sibling relationships
        for c in children:
            for cc in children:
                if cc != c:
                    self.create_relation(children[c], children[cc], 'SiblingOf', self.POLE)

Once the family has been established, each round will run checks on the person’s variable states, such as age, and then create an event based on those. A child will not have an employment event, for example.
    def run_simulation(self, rounds):

        if not self.basebook:
            self.basebook_setup()
        i = 0
        sim_time = datetime.datetime.strptime(self.SimStartDate, '%Y-%m-%d %H:%M:%S')
        while i < rounds:
            '''
            1. Choose sims based on an action number range/filter
            2. Based on the age create an action.
               If child create a school or abuse related event
               If parent create a police or employment related event
            3. Create a location based on Sim Locations
               Choose home as first and add random.
               If len of locations is < 3 append, else, random create new based on others or select one
            4. Insert the relation of event to person and to locations into the db based on event type
            '''
            for sim in self.DB['sims']:
                if sim['SimClock'] > random.randint(1, 9):
                    age = self.check_age(sim_time, sim)
                    if age == 'Not born':
                        break
                    action = self.choose_action(age)
                    EVT = self.create_event(Type=action,
                                            DateTime=sim_time.strftime('%Y-%m-%d %H:%M:%S'),
                                            Description='%s %s, of %s age was involved with an event related to %s at %s'
                                                        % (sim['FirstName'], sim['LastName'],
                                                           age, action, sim_time.strftime('%Y-%m-%d %H:%M:%S')))
                    self.create_relation(EVT, sim, 'Involved', action)
                    # Step the time into the future by a random interval between 1 and max round length
                    # Set to seconds to allow for more interactions in a round
                    sim_time = sim_time + datetime.timedelta(seconds=random.randint(1, self.SimRoundLengthMax))
                    # Reset the Sim's clock to its original setting
                    sim['SimClock'] = sim['SimAction']
                else:
                    sim['SimClock'] += 1
            # Step the time into the future by a random interval between 1 and max round length
            # Set to hours to allow a bigger time jump between rounds, treating the iteration of sims as "bullet time"
            sim_time = sim_time + datetime.timedelta(hours=random.randint(1, self.SimRoundLengthMax))
            i += 1

These 2 core functions will run in the FioriFlask app when the user clicks on the tile for Data Exploration, simulating a single "latest" series of events. It will also kick off a parallel background thread which fills a queue of random events, as if they were coming from a publisher. When the user clicks on “Case Alerts”, the application calls the Flask app and gets the latest events in the queue based on its own index. That means the queue can build over time, but only the events not yet taken by the app will be scooped up. These threads depend on Python's standard threading library.
def sim_thread():
    family_count = 1  # 3 seconds to make 2 families
    sim_run_count = 2

    click.echo("[%s_View_ThreadSim] Starting simulation with creating %d families" % (get_datetime(), family_count))
    i = 0
    while i < family_count:
        simserver.create_family()
        i += 1
    click.echo("[%s_View_ThreadSim] Starting %d simulation rounds" % (get_datetime(), sim_run_count))
    simserver.run_simulation(sim_run_count)
    click.echo("[%s_View_ThreadSim] Exporting snapshot JSON for starter set" % (get_datetime()))
    simserver.export_json()
    click.echo("[%s_View_ThreadSim] Complete with set up" % (get_datetime()))


def stream_data():

    i = 0
    while stream:
        time.sleep(3)
        stream['events'].append({'time': get_datetime(), 'message': 'Test %d' % i})
        click.echo("[%s_View_ThreadStream] Running %d" % (get_datetime(), i))
        i += 1


@orientdb.route('/OrientDB/get_stream', methods=['POST'])
def get_stream():
    r = request.form.to_dict(flat=True)
    cur_len = len(stream['events'])
    cur_index = int(r['cur_index'])  # form values arrive as strings

    return jsonify(
        {'new_index': cur_len,
         'old_index': cur_index,
         'payload': stream['events'][cur_index:cur_len]
         }
    )

@orientdb.route('/OrientDB', methods=['GET'])
def home():

    sim = Thread(target=sim_thread, )
    stm = Thread(target=stream_data, )
    sim.start()
    stm.start()  # start the stream thread as well so the queue fills up
    click.echo("[%s_View_Home] Getting calc views" % (get_datetime()))
    views = load_views()
    click.echo("[%s_View_Home] Complete with calc views" % (get_datetime()))

    # App tiles will always have an index applied to their model on 'GET'
    odata = {
        'status': 200,
        'message': '%s logged in' % get_datetime(),
        'd': {
            'index': odbserver.get_db_stats(),
            'demo_data': odbserver.fill_demo_data(),
            'clipboard': {
                'keys': [],
                'nodes': []
            },
            'dialogs': {
                'nodes': [],
                'lines': [],
                'groups': []
            },
            'files': [],
            'charts': {
                'ChartContainerData1.json': odbserver.get_model('ChartContainerData1.json'),
                'ChartContainerData2.json': odbserver.get_model('ChartContainerData2.json'),
                'ChartContainerData3.json': odbserver.get_model('ChartContainerData3.json')
            },
            'views': views,
            'network': odbserver.fill_demo_data_small()
        }
    }
    # Get a small net from each db
    current_selection = odata['d']['index'][0]
    current_selection['network'] = odata['d']['network'][current_selection['name']]
    odata['d']['current_selection'] = odata['d']['index'][0]
    click.echo("[%s_View_Home] Packaging model for client" % (get_datetime()))
    try:
        odata = jsonify(odata)
    except Exception as e:
        if "TypeError: '<' not supported between instances" in str(e):
            click.echo("[%s_View_Home] ERROR \n%s Showing oData" % (get_datetime(), odata))
        else:
            click.echo("[%s_View_Home] UNKNOWN ERROR" % (get_datetime()))
    click.echo("[%s_View_Home] Sending model to client" % (get_datetime()))
    return odata

And then the controller in the Fiori app calls it:
        getStream: function() {

            jQuery.ajax({
                url: "/OrientDB/get_stream",
                type: "POST",
                dataType: "json",
                async: true,
                success: function(response, textStatus, jqXHR) {

                    var oModel = new JSONModel(response);
                    sap.ui.getCore().setModel(oModel, 'LaunchpadStats');
                    window._card1.setModel(oModel);
                    window._card2.setModel(oModel);
                    window._card3.setModel(oModel);
                    window._timeline.setModel(oModel);
                    sap.ui.core.BusyIndicator.hide(0);
                    sap.m.MessageToast.show(response.message);

                },
                error: function(response) {
                    console.log(response);
                    sap.ui.core.BusyIndicator.hide(0);
                }
            });

        },
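The index bookkeeping behind this publish/subscribe round trip can be distilled into a small Python sketch. The function name `take_new_events` is hypothetical; it mirrors the contract of `/OrientDB/get_stream`, where the queue can grow over time but each client only scoops up the entries past the index it last saw:

```python
def take_new_events(events, cur_index):
    """Return the events added since cur_index plus the updated index,
    mirroring the /OrientDB/get_stream response shape."""
    new_index = len(events)
    return {
        'new_index': new_index,
        'old_index': cur_index,
        'payload': events[cur_index:new_index],
    }

# Example: a client that has seen 1 event polls a queue of 3;
# it receives the 2 newest events and stores new_index for its next poll.
result = take_new_events(['e0', 'e1', 'e2'], 1)
```

Because the server never discards entries, several clients with different indexes can consume the same queue independently.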

Working with the Graph


The CRUD actions need to be as easy as possible without nested forms; it should be just 2 to 3 clicks each. So let’s see what we can do with the SAPUI5 Network Graph library. We will implement this in the graph toolbar and the nodes themselves. The toolbar is assigned, and then we can insert new content into it. Here we are going to have the add-node and add-line methods called by corresponding buttons. We also have buttons unique to the toolbar for navigating graph memory, in a way; this ensures that as the user makes changes to a graph, they can back up to a previous state.
        window.oGraph = this.byId("graph");
        window.oGraph._fZoomLevel = 0.75;
        window.oGraphCurIndex = 0;
        // Set the Toolbar
        var oToolbar = window.oGraph.getToolbar();
        /*
         * Tool Bar buttons for the Graph
         */
        oToolbar.insertContent(new Button("refreshButton", {
            type: ButtonType.Transparent,
            icon: "sap-icon://refresh",
            press: this.refreshGraph.bind(window.oGraph)
        }), 0);

        oToolbar.insertContent(new Button("goBackGraphButton", {
            type: ButtonType.Transparent,
            icon: "sap-icon://media-rewind",
            press: this.goBackGraph.bind(window.oGraph)
        }), 1);

        oToolbar.insertContent(new Button("goForwardGraphButton", {
            type: ButtonType.Transparent,
            icon: "sap-icon://media-forward",
            press: this.goForwardGraph.bind(window.oGraph)
        }), 2);

        oToolbar.insertContent(new Button("addButton", {
            type: ButtonType.Transparent,
            icon: "sap-icon://add",
            press: this.addNode.bind(window.oGraph)
        }), 3);

        oToolbar.insertContent(new Button("addLine", {
            type: ButtonType.Transparent,
            icon: "sap-icon://chain-link",
            press: this.addLine.bind(window.oGraph)
        }), 4);

Along with the related methods, we should be able to achieve some basic toolbar functionality in addition to what comes out of the box with the Network Graph library, such as search.

  1. Create a new Node with the flexible form for Person, Object, Location, or Event.

  2. Create a new link between the nodes on the graph

  3. Use the standard out of the box search tool in Network Graph




When we add a node, we implement everything in a similar manner by creating the action buttons and then attaching them to the container. The example of the Traverse button on the node is shown below. For the graph, we can include the title and position.
var newActTraverse = new ActionButton({
    icon: "sap-icon://overview-chart",
    title: "Traverse",
    position: "Left",
    press: function(oEvent) {

        // The ActionButton's parent is the Node it is attached to
        var oNode = oEvent.getSource().getParent();
        var NodeKey = oNode.getKey();
        var NodeTitle = oNode.getTitle();
        var oData = {
            'key': NodeKey,
            'db_name': sap.ui.getCore().getModel('OrientDBModel').oData.current_selection.name,
            'cur_graph': sap.ui.getCore().getModel('OrientDBModel').oData.current_selection.network
        };

        var selectedNode = new JSONModel({'key': NodeKey, 'title': NodeTitle});
        sap.ui.getCore().setModel(selectedNode, "selectedNode");

        jQuery.ajax({
            url: "/OrientDB/traverse",
            type: "POST",
            dataType: "json",
            async: true,
            data: oData,
            success: function(response) {
                var oData = new JSONModel({
                    'nodes': response.results.cur_graph.nodes,
                    'lines': response.results.cur_graph.lines,
                    'groups': response.results.cur_graph.groups
                });
                window.oGraph.setModel(oData);
                MessageToast.show(response.message);
            }
        });
    }
});

This will establish a nice array of options around the node to conduct the CRUD and advanced graph query functions.

  1. Edit including deleting (from the database) or removing from the graph only

  2. Adding a new link from this node to another in the graph

  3. Viewing the information as is (this might be handy for someone who doesn't have rights to CRUD)

  4. Traverse with relationships based on cardinal direction. Future development could include degrees of separation and relation types.
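On the server side, the /OrientDB/traverse endpoint would run a traversal statement against the selected node's record id. Here is a sketch of building such a statement; the helper name is hypothetical, and the MAXDEPTH clause assumes a recent OrientDB version (older releases use a WHILE $depth condition instead):

```python
def build_traverse_query(rid, direction='both', max_depth=1):
    """Build an OrientDB TRAVERSE statement starting from a record id.

    direction maps to OrientDB's edge functions in(), out(), or both(),
    covering the cardinal-direction traversal described above.
    """
    if direction not in ('in', 'out', 'both'):
        raise ValueError("direction must be 'in', 'out', or 'both'")
    return 'TRAVERSE %s() FROM %s MAXDEPTH %d' % (direction, rid, max_depth)

# Example: one hop in both directions from record #12:0
query = build_traverse_query('#12:0')
```

Raising max_depth would be one way to add the degrees-of-separation feature mentioned above, and filtering the edge functions by class would cover relation types.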




 

What's Next?


That provides the user with all the basic functionality while looking at simulated POLE data. The next steps would be to build rules within SAP Data Hub that can surface alerts and automated suggestions based on the content of the alert. This is what we call the "Intelligent Enterprise", and a pattern in SAP Fiori 3.0 that embraces this is the Situation.

In this application we show an example of the Fiori Launchpad with a potential concept of automated alerts under the "Reactive" section. The video describes it in further detail, but the screenshot below helps summarize the concept, where alerts based on subscribed cases are populated. The challenge is putting together the intelligence that users would not have found themselves, and for that, graph is a good place to start.



This concludes the 4-part series focused on serving apps with Python Flask at the core. The next series will continue this open source application effort, but will do so with SAP Web IDE, which allows more rapid development of the UX, and then move the back-end development to SAP Data Hub, where what we built in Python will be turned into microservices. I hope you found some of the material interesting, if not helpful.

 