
SAP Kyma environment ready for Production?

Brian_Stempien
Explorer

I have recently been exploring the SAP Kyma environment using a Trial BTP account. 

Full disclosure, I am an old BASIS hand and not familiar with Kubernetes. 

Going through the Developer Central tutorial "Deploy a Node.js Application in the Kyma Runtime", I was able to get everything done except getting the application to respond from the Internet. I also explored deploying the Remote Work Processor from SAP Automation Pilot to a Kyma cluster.

My experience has not been good.

For the RWP, I was eventually able to get it to deploy and run, but only after reading through the Go code and discovering that the environment variable names had changed. When I contacted the developer, the answer was "this is beta and the Automation Pilot code has not caught up to the RWP code."

For the simple Node.js exercise, I have built and tried three different Kyma clusters in two different regions (AWS US and Azure Singapore, the two regions Trial accounts work in), and as far as I can tell everything is deployed correctly. I have read several blogs and troubleshooting pages; the 503 error points to a possible certificate issue, but the logs from the pod don't back this up.

Anyway, the main reason for this post is to ask whether anyone is actually running Kyma in production. When something as simple as the tutorials doesn't work, it doesn't give me a lot of confidence. The Cloud Foundry tutorials had minor issues with Node.js package versions, but those were easy enough for me to figure out and work around. Are the Cloud Foundry environment, and of course the Build series of apps, what SAP would prefer customers use, with Kyma to be seen as perpetual beta?

At this point if a client asked if they should use Kyma, I would say no. 

Accepted Solutions (1)

Brian_Stempien
Explorer

That is the exact tutorial I am using. The deployment YAML is exactly the same, other than the image location and imagePullSecrets.

Did you do the tutorial on a fresh Kyma environment on a Trial account? If you have an older, working environment, it may already contain fixes for issues that only show up on a Trial account or in a fresh environment.

Brian_Stempien
Explorer
Additional information: the NetworkPolicy appears to be the issue. If I remove the policy created by the deployment YAML, reverting the behavior to basically wide open, then it works. But checking against the Network Policies documentation, everything looks correct. I verified that the istio-system namespace does indeed have name: istio-system in its definition, and that the istio-ingressgateway pod has app: istio-ingressgateway defined. So for some reason the policy is not matching.
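
For anyone wanting to run the same checks on their own cluster, plain kubectl is enough; a quick sketch (the label names are the ones the tutorial's policy selects on):

kubectl get namespace istio-system --show-labels
kubectl get pods -n istio-system -l app=istio-ingressgateway --show-labels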
Brian_Stempien
Explorer
Sigh. Out of the box, the istio-system namespace does NOT have a label called name; that is just part of the metadata. I added "name: istio-system" to the labels section of the istio-system namespace, and now the policy works fine.
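
The same workaround can be applied from the command line instead of editing the namespace by hand; one line (though see the better fix further down this thread):

kubectl label namespace istio-system name=istio-system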
gabbi
Product and Topic Expert

Hi Brian,

I can confirm your findings. Indeed, in the trial cluster, the name label on the istio-system namespace is missing. Yesterday I had tried the tutorial on my standard cluster, as my trial cluster was still being provisioned.

On my trial cluster:

❯ k get ns istio-system --show-labels
NAME           STATUS   AGE   LABELS
istio-system   Active   25m   kubernetes.io/metadata.name=istio-system,namespaces.warden.kyma-project.io/validate=enabled

 

On my standard cluster:

❯ k get ns istio-system --show-labels
NAME           STATUS   AGE    LABELS
istio-system   Active   310d   istio-injection=enabled,kubernetes.io/metadata.name=istio-system,name=istio-system,namespaces.warden.kyma-project.io/validate=enabled,reconciler.kyma-project.io/managed-by=reconciler,reconciler.kyma-project.io/origin-version=2.20.0

I will take this as a regression to the dev team and get it fixed. We will work on ensuring that trial clusters do not have such divergences.

Thanks for your input and detailed analysis.

Gaurav

Brian_Stempien
Explorer
0 Kudos
Actually, I would say changing the tutorials is the better idea. I have submitted issues on GitHub pointing that out. I read the background in the Kubernetes GitHub repo on why namespaceSelector uses labels and why they chose to have the API server create the kubernetes.io/metadata.name label. It is actually more secure than relying on a user-supplied label: by using kubernetes.io/metadata.name we are guaranteed to match the actual namespace name, rather than someone adding a name label that impersonates the namespace we actually want.
gabbi
Product and Topic Expert

Hi Brian,

Yes, we would change the tutorial.

However, for me it is also important not to have such configuration drift between standard and trial clusters, as it can lead to scenarios like this. This is something we need to take up on the Kyma side.

Thanks,

Gaurav

Answers (1)


Brian_Stempien
Explorer

Let me consolidate this down for anyone else looking for the solution. 

Tutorial Name: Deploy a Node.js Application in the Kyma Runtime
Tutorial URL: https://developers.sap.com/tutorials/deploy-nodejs-application-kyma.html 

Error: 

upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection timeout
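
If you want to confirm that the NetworkPolicy, and not the application itself, is what is blocking traffic before changing anything, inspecting the policy and the pod logs is a reasonable first step. A sketch, assuming the tutorial's resource names and using <your-namespace> as a placeholder for wherever you deployed:

kubectl get networkpolicy multitenancy -n <your-namespace> -o yaml
kubectl logs -n <your-namespace> deployment/kyma-multitenant-node-multitenancy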

Fixed Step 7: Deployment Description File:

---
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  labels:
    app: kyma-multitenant-node-multitenancy
    release: multitenancy
  name: kyma-multitenant-node-multitenancy
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  host: kyma-multitenant-node-multitenancy
  rules:
  - accessStrategies:
    - handler: allow
    methods:
    - GET
    - POST
    - PUT
    - PATCH
    - DELETE
    - HEAD
    path: /.*
  service:
    name: kyma-multitenant-node-multitenancy
    port: 8080

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kyma-multitenant-node-multitenancy
    release: multitenancy
  name: kyma-multitenant-node-multitenancy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kyma-multitenant-node-multitenancy
      release: multitenancy
  template:
    metadata:
      labels:
        app: kyma-multitenant-node-multitenancy
        release: multitenancy
    spec:
      imagePullSecrets:
        - name: <registry-secret> # replace with your own registry secret
      containers:
      - env:
        - name: PORT
          value: "8080"
        - name: TMPDIR
          value: /tmp
        image: <docker-hub-account>/multitenant-kyma-backend:v1  # replace with your Docker Hub account name
        name: kyma-multitenant-node-multitenancy
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
        startupProbe:
          httpGet:
            path: /
            port: http
          failureThreshold: 15
          periodSeconds: 2
        resources:
          limits:
            cpu: 100m
            memory: 256M
          requests:
            cpu: 100m
            memory: 256M
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsNonRoot: true
          readOnlyRootFilesystem: false
        volumeMounts:
        - mountPath: /tmp
          name: tmp
      volumes:
      - emptyDir: {}
        name: tmp

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: kyma-multitenant-node-multitenancy
    release: multitenancy
  name: kyma-multitenant-node-multitenancy
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: kyma-multitenant-node-multitenancy
    release: multitenancy
status:
  loadBalancer: {}

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  labels:
    release: multitenancy
  name: multitenancy
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
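          # Changed from the tutorial: match on the API-server-managed namespace label
          # instead of a hand-added "name: istio-system" label (missing on trial clusters)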
          kubernetes.io/metadata.name: istio-system
      podSelector:
        matchLabels:
          app: istio-ingressgateway
  podSelector:
    matchLabels:
      release: multitenancy
  policyTypes:
  - Ingress

If you dig deep enough into the Kubernetes documentation on Network Policies, you find the reason and the solution.

https://kubernetes.io/docs/concepts/services-networking/network-policies/#targeting-a-namespace-by-i...
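
As a final sanity check after applying the corrected YAML (a sketch; <your-namespace> and <your-cluster-domain> are placeholders, and the URL assumes the usual Kyma behavior of appending the cluster domain to the short APIRule host):

kubectl get apirule kyma-multitenant-node-multitenancy -n <your-namespace>
curl -i https://kyma-multitenant-node-multitenancy.<your-cluster-domain>/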