When I say "secure Kubernetes", I'm not just thinking about admission policies and CIS checklists. I'm thinking about what happens when something is already running and turns malicious — a web shell lands in a pod, a container starts burning CPU for crypto mining, or someone drops network scanning tools into an otherwise boring workload.
If you're running SAP BTP Kyma runtime, this matters. Kyma has strong platform hardening (Gardener-managed control plane, DISA STIG alignment), and API server audit logs exist — but those logs go to SAP's Platform Logging Service, not directly to you. That's fine for platform-level auditing, but it's not the same as seeing threats inside your workloads at runtime.
That's the gap I'm filling: runtime threat detection — the ability to detect and alert on malicious activity (crypto mining, web shells, credential theft) while workloads are running.
These aren't hypotheticals — crypto mining and container compromise campaigns are actively targeting Kubernetes clusters:
DERO Cryptojacking (2023–2024): Attackers scanned for misconfigured Kubernetes API servers, then deployed DaemonSets named "proxy-api" to blend in with legitimate cluster components. The mining process itself was named "pause" — masquerading as the standard Kubernetes pause container. CrowdStrike found malicious images with over 10,000 pulls on Docker Hub. How runtime detection helps: Defender's eBPF monitoring catches unusual process spawning from "pause" containers and flags sustained high CPU from processes that shouldn't be compute-intensive. (Source: CrowdStrike — DERO Cryptojacking Discovery)
Kinsing Campaign (2023–ongoing): This campaign exploits vulnerabilities in PostgreSQL, WebLogic, Liferay, and WordPress to gain initial access to containers, then pivots to deploy crypto miners across the cluster. The campaign has affected 75+ cloud-native applications. How runtime detection helps: Defender detects process genealogy anomalies — for example, a WebLogic process spawning shell commands that enumerate Kubernetes resources or deploy new containers.
The pattern: attackers get in through a misconfiguration or vulnerability, then run workloads inside the cluster. Admission policies and CIS benchmarks don't catch threats that start after deployment — that's the gap runtime detection fills.
For non-AKS clusters, the approach is: Azure Arc (makes the cluster an Azure resource) + Defender for Containers (deploys the runtime sensor as an Arc extension).
What gets installed:

- Arc agents (in the azure-arc namespace): maintain the outbound connection to Azure
- Defender sensor (a DaemonSet): performs the runtime detection on each node

What the sensor detects: crypto mining patterns, web shell activity, network scanning tools, binary drift. (Docs: Workload runtime detection)
Arc also provides an extension platform — Defender isn't the only add-on you can deploy this way. And Microsoft provides a verification checklist so you can prove it's working.
Networking note: Both Arc and Defender require outbound connectivity. If egress is blocked, onboarding fails silently. Check the Arc network requirements and ensure *.cloud.defender.microsoft.com:443 is allowed.
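Because a blocked egress path surfaces late and confusingly, I run a quick pre-flight probe first. A minimal sketch (the probed hostname is an assumption for illustration; take the authoritative endpoint list from the Arc network requirements doc, and expand wildcards like *.cloud.defender.microsoft.com to concrete hostnames):

```shell
# Pre-flight egress probe using bash's /dev/tcp redirection.
# The endpoint below is a placeholder example, not the full required list.
check_egress() {
  local host="$1" port="${2:-443}"
  if timeout 5 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OK   ${host}:${port}"
  else
    echo "FAIL ${host}:${port}"
  fi
}

# Example probe (hostname is an assumption for illustration):
check_egress management.azure.com
```

Running this for each required endpoint before onboarding turns a silent failure into an explicit FAIL line you can act on.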
I’ll show a portal-first path (fastest to understand), then a programmatic path (fastest to automate).
Here’s what I personally confirm before I touch the portal:
1) Network egress (outbound)
- *.cloud.defender.microsoft.com:443 must be reachable. (Docs: Enable Defender for Containers on Arc-enabled Kubernetes (portal))

2) Tooling

- Azure CLI with the connectedk8s extension (for Arc onboarding). (Docs: Quickstart: Connect an existing Kubernetes cluster to Azure Arc)
- The k8s-extension extension. (Docs: Deploy and manage Arc-enabled Kubernetes extensions)

3) Cluster access

- kubectl works and points at the cluster I’m onboarding.

kubectl config current-context
kubectl cluster-info

I typically do this from a workstation that already has kubectl access to the cluster.
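Since pointing kubectl at the wrong cluster means Arc-onboarding the wrong cluster, I sometimes script that last check. A small guard sketch (the expected context name is a placeholder):

```shell
# Guard: refuse to proceed if the current kubectl context isn't the one expected.
assert_context() {
  local expected="$1" current="$2"
  if [ "$current" = "$expected" ]; then
    echo "context ok: $current"
  else
    echo "wrong context: $current (expected $expected)"
    return 1
  fi
}

# Usage (context name is an assumption):
# assert_context "my-kyma-cluster" "$(kubectl config current-context)"
```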
The Arc quickstart includes registering resource providers like Microsoft.Kubernetes, Microsoft.KubernetesConfiguration, and Microsoft.ExtendedLocation. (Docs: Quickstart: Connect an existing Kubernetes cluster to Azure Arc)
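The registration step is easy to script; a sketch (it only wraps the documented az provider register calls and assumes you are already logged in with az login):

```shell
# Register the resource providers the Arc quickstart requires.
# Registration is idempotent, so re-running is safe.
register_providers() {
  local ns
  for ns in Microsoft.Kubernetes Microsoft.KubernetesConfiguration Microsoft.ExtendedLocation; do
    echo "registering ${ns}"
    az provider register --namespace "$ns" 2>/dev/null \
      || echo "register failed for ${ns} (is az installed and logged in?)"
  done
}

# register_providers
```

Registration is asynchronous; az provider show -n Microsoft.Kubernetes reports when it has completed.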
From the Arc quickstart, the core command is:
az connectedk8s connect --name <cluster-name> --resource-group <resource-group>

In practice, I prefer to be explicit (especially on shared subscriptions) and set --location and --tags:
az connectedk8s connect \
--name <cluster-name> \
--resource-group <resource-group> \
--location <azure-region> \
--tags env=<env> owner=<team> system=<system>
What I’m explicitly setting there:
- --location: the Azure region where the Azure Arc-enabled Kubernetes resource is created. If you omit it, it’s created in the same region as the resource group.
- --tags: Azure Resource Manager tags on the Arc resource (space-separated key[=value]).

If this command hangs or fails in weird ways, I go back to egress first — the Arc network requirements doc is the authoritative “what URLs/ports must my cluster reach?” list. (Docs: Azure Arc-enabled Kubernetes network requirements)
(Docs: Quickstart: Connect an existing Kubernetes cluster to Azure Arc and Azure CLI reference — az connectedk8s connect)
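When onboarding more than one cluster, I wrap the call so it is parameterized and reviewable before it runs. A sketch (the function name, placeholder values, and the dry-run convention are my own, not from the docs):

```shell
# Assemble the documented `az connectedk8s connect` call from parameters.
# With DRY_RUN=1 (the default here) it only prints the command instead of running it.
arc_connect() {
  local name="$1" rg="$2" location="$3"
  local cmd="az connectedk8s connect --name ${name} --resource-group ${rg} --location ${location}"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

arc_connect my-kyma-cluster rg-arc-demo westeurope
```

Set DRY_RUN=0 once the printed command looks right; tags can be appended the same way.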
The quickstart calls out that Arc deploys agents into the azure-arc namespace. I validate that they’re Running:
kubectl get deployments,pods -n azure-arc
(Docs: Quickstart: Connect an existing Kubernetes cluster to Azure Arc)
Here’s what that looks like in practice on my Kyma cluster:
And here’s the connected cluster resource in Azure (showing things like connectivity status, location, and tags):
At this point, if Arc isn’t healthy, I stop and fix that first. Everything else depends on it.
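That health gate can be scripted rather than eyeballed. A sketch (column positions assume default kubectl get pods --no-headers output, where the third column is the pod status):

```shell
# Report any pod in the azure-arc namespace that isn't Running.
# Reads `kubectl get pods --no-headers` output on stdin.
check_pods() {
  awk '$3 != "Running" { bad = bad " " $1 }
       END { if (bad == "") print "healthy"; else print "unhealthy:" bad }'
}

# Usage:
# kubectl get pods -n azure-arc --no-headers | check_pods
```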
Now I go to Defender for Cloud and enable the Containers plan for the subscription where my Arc-enabled cluster lives.
The portal walkthrough is:
(Docs: Enable Defender for Containers on Arc-enabled Kubernetes (portal))
At this point you’ll be asked which Containers plan components to enable.
You can enable everything, but for this post I’m intentionally focusing on the Defender sensor (runtime detections). The important callout: from a pricing perspective there’s no cost benefit to enabling one vs. many — the cost is the same — so this is purely about keeping the walkthrough scoped to runtime detection.
Here’s what that looks like in the portal (first the Containers plan settings, then the component selection where I keep only the sensor in scope):
I use one of two flows.
This is the “guided remediation” path:
(Docs: Enable Defender for Containers on Arc-enabled Kubernetes (portal))
If I want explicit control (or I’m debugging), I do:
If no Log Analytics workspace is assigned explicitly, Defender uses a default workspace named DefaultWorkspace-[subscription-id]-[region]. (Docs: Enable Defender for Containers on Arc-enabled Kubernetes (portal))
If I’m onboarding clusters at scale, I don’t want a click path. The programmatic doc gives the Azure CLI commands for creating the Defender extension.
Defender sensor extension:
Note: Some examples include an auditLogPath setting for clusters where you control the API server audit log file location. In Kyma, audit logs are handled via SAP’s Platform Logging Service and you generally don’t have direct access to that file path, so I’m omitting it here.
az k8s-extension create \
--name microsoft.azuredefender.kubernetes \
--cluster-type connectedClusters \
--cluster-name <cluster-name> \
--resource-group <resource-group> \
--extension-type microsoft.azuredefender.kubernetes \
--configuration-settings \
logAnalyticsWorkspaceResourceID="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

(Docs: Deploy Defender for Containers on Arc-enabled Kubernetes (programmatic))
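The workspace resource ID is long and easy to typo, so in scripts I assemble it rather than paste it. A helper sketch (all input values are placeholders):

```shell
# Build the Log Analytics workspace resource ID passed via
# logAnalyticsWorkspaceResourceID in --configuration-settings.
workspace_id() {
  printf '/subscriptions/%s/resourceGroups/%s/providers/Microsoft.OperationalInsights/workspaces/%s' \
    "$1" "$2" "$3"
}

workspace_id 00000000-0000-0000-0000-000000000000 my-rg my-workspace
```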
If you need the generic “how do extensions work / how do I list/update/delete them” reference, the Arc extensions doc is the canonical place. (Docs: Deploy and manage Arc-enabled Kubernetes extensions)
This is where I slow down and prove success.
Microsoft’s verification checklist is:
(Docs: Verify Defender for Containers on Arc-enabled Kubernetes)
az connectedk8s show \
--name <cluster-name> \
--resource-group <resource-group> \
--query connectivityStatus
The expected output is Connected. (Docs: Verify Defender for Containers on Arc-enabled Kubernetes)
az k8s-extension show \
--name microsoft.azuredefender.kubernetes \
--cluster-type connectedClusters \
--cluster-name <cluster-name> \
--resource-group <resource-group> \
--query provisioningState
The expected output is Succeeded. (Docs: Verify Defender for Containers on Arc-enabled Kubernetes)
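Provisioning isn't instant, so in automation I poll rather than check once. A generic sketch (the helper name and retry/sleep values are my own; the commented usage reuses the verification command from the docs, with -o tsv added so the output isn't JSON-quoted):

```shell
# Poll a command until it prints the expected value, or give up after 5 tries.
wait_until() {
  local expected="$1"; shift
  local i
  for i in 1 2 3 4 5; do
    if [ "$("$@")" = "$expected" ]; then
      echo "reached ${expected}"
      return 0
    fi
    sleep 5
  done
  echo "timed out waiting for ${expected}"
  return 1
}

# Usage against the verification command:
# wait_until Succeeded az k8s-extension show \
#   --name microsoft.azuredefender.kubernetes --cluster-type connectedClusters \
#   --cluster-name <cluster-name> --resource-group <resource-group> \
#   --query provisioningState -o tsv
```

The same helper works for the connectivityStatus check (wait_until Connected ...).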
kubectl get pods -n kube-system -l app=microsoft-defender
# If you don’t see anything in kube-system, also check the mdc namespace:
kubectl get ds -n mdc
kubectl get pods -n mdc

This is the simplest “is the sensor deployed?” check. If the DaemonSet exists and the pods are Running, you’re in good shape.
(Docs: Verify Defender for Containers on Arc-enabled Kubernetes)
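To script the “which namespace did the sensor land in?” question, I scan DaemonSets across namespaces. A sketch (the name pattern is an assumption based on the microsoft-defender label used above):

```shell
# Print the namespace of any DaemonSet whose name mentions the Defender sensor.
# Reads `kubectl get ds --all-namespaces --no-headers` output on stdin.
find_sensor_ns() {
  awk '$2 ~ /defender/ { print $1 }'
}

# Usage:
# kubectl get ds --all-namespaces --no-headers | find_sensor_ns
```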
This is the “did Azure actually receive the signals?” check.
After you’ve deployed the Defender extension and the sensor is running, go to Microsoft Defender for Cloud and look at Security alerts (or the Alerts view in the Defender for Cloud experience). If you just ran the simulator (next step), this is where you’ll see the resulting alerts.
It can take a bit of time (think minutes, not seconds) for the cluster and alerts to show up after onboarding. (Docs: Verify Defender for Containers on Arc-enabled Kubernetes)
If I want hard proof that the sensor-backed detections are flowing end-to-end, I use Microsoft’s Kubernetes alerts simulation tool.
It has two prerequisites that matter in practice: the Defender sensor must already be running on the cluster, and the machine you run it from needs Python plus kubectl access to that cluster.
Then I download and run the simulator:
curl -O https://raw.githubusercontent.com/microsoft/Defender-for-Cloud-Attack-Simulation/refs/heads/main/simulation.py
python simulation.py
After it runs, I go back to Defender for Cloud and look at the alerts that were generated:
(Docs: Kubernetes alerts — Kubernetes alerts simulation tool)
To make this feel real (and to sanity-check what Defender is actually flagging), I open one of the generated alerts and look at the Alert details pane. For example, the “A drift binary detected executing in the container” alert includes fields like the suspicious process path, command line, parent process, and the affected Arc-enabled Kubernetes resource.
If onboarding or the extension fails, verify egress first (including *.cloud.defender.microsoft.com:443). (Docs: Enable Defender for Containers on Arc-enabled Kubernetes (portal))

The Arc extensions doc notes that if Arc agents don’t have network connectivity for an extended period, an extension can transition to Failed, and you may need to recreate the extension. (Docs: Deploy and manage Arc-enabled Kubernetes extensions)
If you’re running Kubernetes outside AKS, it’s easy to end up with fragmented security tooling. The Arc + Defender for Containers pattern is one of the cleaner ways I’ve found to bring runtime threat detection and Azure-native security management into a hybrid Kubernetes estate — without replatforming.
In future posts, I’ll explore what else we can do with Kyma + Azure Arc + Azure beyond Defender for Containers (observability, more security patterns, etc.).