
Just a side note: you will need to log a ticket to activate custom applications on SAP Data Intelligence Cloud. I recently tried and ran into a number of errors when activating one. Please log the ticket under component CA-DI-OPS or CA-DI.
Installing the application requires admin permissions on the SAP DI tenant, and a basic working knowledge of Kubernetes and Docker helps to follow along.
Every solution contains a manifest.json file. This manifest file describes the properties of the solution: its name, its version, and the components it depends on. The manifest.json file I used looked similar to this:

{
  "name": "tensorboard-app",
  "version": "0.0.2",
  "format": "2",
  "dependencies": []
}
Every solution needs at least a name and a version definition. If multiple components are required, they are listed as comma-separated entries in the dependencies array:

{
  ...
  "dependencies": [
    { "name": "other-solution", "version": ">1.2.3" },
    { "name": "yet-another-solution", "version": ">=2.0.0" }
  ]
}
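The version constraints use familiar comparison syntax. As an illustration only, here is my own sketch of how constraints like ">1.2.3" or ">=2.0.0" can be evaluated; this is not SAP's actual dependency resolver:

```python
# Hypothetical sketch of evaluating version constraints such as
# ">1.2.3" or ">=2.0.0" -- NOT SAP DI's actual resolver.
import operator
import re

OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq,
       ">": operator.gt, "<": operator.lt}

def parse_version(v):
    """Turn '1.2.3' into a comparable tuple (1, 2, 3)."""
    return tuple(int(part) for part in v.split("."))

def satisfies(installed, constraint):
    """Check whether an installed version meets a constraint string."""
    match = re.match(r"(>=|<=|==|>|<)?\s*([\d.]+)", constraint)
    op, required = match.groups()
    compare = OPS.get(op or "==")
    return compare(parse_version(installed), parse_version(required))

print(satisfies("1.3.0", ">1.2.3"))   # True
print(satisfies("2.0.0", ">=2.0.0"))  # True
print(satisfies("1.2.3", ">1.2.3"))   # False
```

Comparing tuples of integers, rather than raw strings, is what makes "1.10.0" sort after "1.9.0".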
The folder structure of my solution looked like this:

my_custom_solution
├── content
│   └── files
│       ├── vflow
│       │   ├── dockerfiles
│       │   ├── graphs
│       │   ├── operators
│       │   └── ...
│       └── vsystem
│           ├── apps
│           │   └── tensorboard-app.json
│           ├── icons
│           │   ├── TensorBoard.png
│           │   └── ...
│           └── ...
└── manifest.json
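To package a layout like this for import, the folder tree can be assembled and archived with a short script. The sketch below uses the names from this example; whether a plain zip is the exact import format your SAP DI version expects is an assumption on my part, so check the System Management (vctl) documentation for the supported format:

```python
# Sketch: assemble the solution layout above and package it as a zip
# archive. Folder/file names follow this example; the zip-as-import
# format is an assumption -- verify against your SAP DI documentation.
import json
import zipfile
from pathlib import Path

root = Path("my_custom_solution")
(root / "content" / "files" / "vsystem" / "apps").mkdir(parents=True, exist_ok=True)

manifest = {"name": "tensorboard-app", "version": "0.0.2",
            "format": "2", "dependencies": []}
(root / "manifest.json").write_text(json.dumps(manifest, indent=2))

# Basic sanity check before packaging: the manifest needs name and version.
loaded = json.loads((root / "manifest.json").read_text())
assert loaded["name"] and loaded["version"]

archive = Path("tensorboard-app-0.0.2.zip")
with zipfile.ZipFile(archive, "w") as zf:
    for path in root.rglob("*"):
        zf.write(path, path.relative_to(root))

print("packaged", archive.name)
```

Keeping manifest.json at the root of the archive mirrors the folder tree shown above.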
Application descriptor files are placed in vsystem/apps. When SAP DI deploys the solution, it subsequently deploys all the applications it contains; the descriptor files provide the specific details on how each application must be deployed. In this example the application is TensorBoard and the descriptor is tensorboard-app.json:

{
  "name": "TensorBoard",
  "type": "kubernetes",
  "apiVersion": "v2",
  "version": "0.0.2",
  "icon": "/vsystem/icons/TensorBoard.png",
  "mounts": { "vhome": true },
  "body": { ... }
}
- name: The name that is displayed in the Launchpad once the application is deployed on the system.
- version: The version of our application.
- apiVersion: The version of the system API. This is not related to your application version.
- type: The type of application. This tells the system what to look for in the body of the definition. For now, the only supported type is kubernetes.
- visible (optional): A Boolean parameter that defaults to true, meaning the application is shown in the SAP DI Launchpad along with the other tiles.
- icon (optional): A reference to an SVG file (in my test it worked with a PNG too) containing an icon for the application. If the reference is an absolute path, it is interpreted as an external URL; if relative, it is interpreted as a repository file with '/' as the base folder of the solution.
- mounts (optional): This parameter defines which mounts should be mounted for the application. You can specify vhome for the workspace mount and vrep for the repository. Both vhome and vrep are Boolean parameters.

The body definition looks like this:

"body": {
  "kubernetes": {
    "apiVersion": "v2",
    "service": { ... },
    "deployment": { ... }
  }
}
The service definition looks like:

"service": {
  "spec": {
    "ports": [
      {
        "port": 6006
      }
    ]
  }
}
This corresponds to the following Kubernetes Service definition:

apiVersion: v2
kind: Service
metadata:
  name: tensorboard-app
  labels:
    app: tensorboard-app
spec:
  selector:
    app: tensorboard-app
  ports:
  - port: 6006
The .spec.ports[*].port field specifies that the server is exposed on port 6006 by the system: when a request is received on this port, the system redirects it to the application.

The deployment definition looks like:

"deployment": {
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "tensorboard-app",
            "image": "tensorflow/tensorflow:2.3.0",
            "command": [
              "/usr/local/bin/tensorboard"
            ],
            "args": [
              "--logdir",
              "/vhome/tf",
              "--host",
              "0.0.0.0"
            ],
The image used is tensorflow/tensorflow with tag 2.3.0 (you can specify latest if you always want the latest image to be pulled). command and args are used to execute a specific application in the container, similar to CMD in Docker. /vhome/tf is specified as the log directory that should be read by TensorBoard. /vhome is the User Workspace that is mounted for the session user (the SAP Data Intelligence user). This means there will be a separate instance of TensorBoard per user, each with its own workspace.

            "resources": {
              "requests": {
                "cpu": "0.1",
                "memory": "1Gi"
              },
              "limits": {
                "cpu": "0.5",
                "memory": "5Gi"
              }
            },
The resource consumption of the container is controlled via resources.requests and resources.limits.
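The requests and limits values use Kubernetes quantity notation. As a rough illustration (my own sketch, not part of SAP DI or Kubernetes), here is how those strings translate into plain numbers:

```python
# Sketch: interpret the Kubernetes quantity strings used in
# resources.requests / resources.limits as plain numbers.
def parse_cpu(q):
    """CPU quantities: '0.5' means half a core, '500m' means 500 millicores."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory(q):
    """Memory quantities with binary suffixes such as Ki, Mi, Gi."""
    units = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * factor
    return int(q)

print(parse_cpu("0.1"))      # 0.1 cores requested
print(parse_memory("1Gi"))   # 1073741824 bytes requested
print(parse_memory("5Gi") // parse_memory("1Gi"))  # limit is 5x the request
```

So the deployment above requests a tenth of a CPU core and 1 GiB of memory, and is capped at half a core and 5 GiB.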
            "securityContext": {
              "runAsUser": 1000
            },

The user the container runs as is set via securityContext; runAsUser functions the same as USER in Docker.
            "ports": [
              {
                "containerPort": 6006
              }
            ]
          }
        ]
      }
    }
  }
}
This is the list of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but it is primarily informational (and the list cannot be updated afterwards). Not specifying a port here does NOT prevent that port from being exposed: any port that is listening on the default "0.0.0.0" address inside a container will be accessible from the network. This is similar to EXPOSE in Docker.

The complete tensorboard-app.json looks like this:

{
  "name": "TensorBoard",
  "type": "kubernetes",
  "apiVersion": "v2",
  "version": "0.0.2",
  "icon": "/vsystem/icons/TensorBoard.png",
  "mounts": { "vhome": true },
  "body": {
    "kubernetes": {
      "apiVersion": "v2",
      "service": {
        "spec": {
          "ports": [
            {
              "port": 6006
            }
          ]
        }
      },
      "deployment": {
        "spec": {
          "template": {
            "spec": {
              "containers": [
                {
                  "name": "tensorboard-app",
                  "image": "tensorflow/tensorflow:2.3.0",
                  "command": [
                    "/usr/local/bin/tensorboard"
                  ],
                  "args": [
                    "--logdir",
                    "/vhome/tf",
                    "--host",
                    "0.0.0.0"
                  ],
                  "resources": {
                    "requests": {
                      "cpu": "0.1",
                      "memory": "1Gi"
                    },
                    "limits": {
                      "cpu": "0.5",
                      "memory": "5Gi"
                    }
                  },
                  "securityContext": {
                    "runAsUser": 1000
                  },
                  "ports": [
                    {
                      "containerPort": 6006
                    }
                  ]
                }
              ]
            }
          }
        }
      }
    }
  }
}
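Before importing, it is worth checking that the finished descriptor parses and that the fields discussed above line up. The checks below are my own sanity checks, not an official SAP DI validation tool:

```python
# Sketch: parse the descriptor and verify the discussed fields line up.
# These are informal checks of my own, not an official SAP DI validator.
import json

descriptor = json.loads("""
{
  "name": "TensorBoard",
  "type": "kubernetes",
  "apiVersion": "v2",
  "version": "0.0.2",
  "icon": "/vsystem/icons/TensorBoard.png",
  "mounts": { "vhome": true },
  "body": {
    "kubernetes": {
      "apiVersion": "v2",
      "service": { "spec": { "ports": [ { "port": 6006 } ] } },
      "deployment": { "spec": { "template": { "spec": { "containers": [
        { "name": "tensorboard-app",
          "image": "tensorflow/tensorflow:2.3.0",
          "ports": [ { "containerPort": 6006 } ] }
      ] } } } }
    }
  }
}
""")

# Required top-level fields from the descriptor reference above.
for field in ("name", "type", "apiVersion", "version"):
    assert field in descriptor, f"missing {field}"

# The service port should match the port TensorBoard listens on.
service_port = descriptor["body"]["kubernetes"]["service"]["spec"]["ports"][0]["port"]
container = descriptor["body"]["kubernetes"]["deployment"]["spec"]["template"]["spec"]["containers"][0]
assert service_port == container["ports"][0]["containerPort"]
print("descriptor looks consistent")
```

A mismatch between the service port and the container port is an easy mistake to make when copying fragments around, so checking it once saves a failed deployment.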