KEDA is an open-source project that provides event-driven autoscaling for Kubernetes workloads. Originally developed by Microsoft and Red Hat, it is now hosted by the Cloud Native Computing Foundation (CNCF). KEDA scales applications in response to events from a variety of sources, including Kafka, RabbitMQ, and cloud-specific services such as Azure Service Bus and Google Pub/Sub.
KEDA brings additional flexibility and efficiency to autoscaling by extending Kubernetes with fine-grained autoscaling for event-driven workloads. With KEDA, you can dynamically scale your deployments from zero to any number of replicas, depending on the volume of events they need to process.
You can enable KEDA like any other Kyma module by following the official guidelines on how to enable and disable a module.
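In practice, enabling a module means adding it to the modules list of the Kyma custom resource. The sketch below assumes the usual defaults of a managed Kyma cluster (a Kyma resource named default in the kyma-system namespace); check the API version and names against your environment:

```yaml
# Sketch: enable the KEDA module by listing it in the Kyma custom resource.
# The resource name ("default"), namespace ("kyma-system"), and API version
# are common defaults and may differ in your cluster.
apiVersion: operator.kyma-project.io/v1beta2
kind: Kyma
metadata:
  name: default
  namespace: kyma-system
spec:
  channel: regular
  modules:
    - name: keda   # adds the KEDA module to the cluster
```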
KEDA offers a broad range of scalers, one of which is the cron scaler. It lets you schedule scaling actions according to the time of day, which is invaluable for managing predictable fluctuations in workload.
For example, the cron scaler enables you to:
Optimize resource utilization and reduce costs: You can schedule your applications to scale down during non-working hours. This is especially useful for dev/stage/QA clusters, which are not needed outside working hours.
Note: This pays off when your workloads require more resources than the base setup provides. The current base setup consists of 3 VMs, each with 4 CPUs and 16 GB of RAM. If your workloads need 4 or more VMs, scaling down during off-work hours lets the cluster shrink back to the base setup and keeps costs under control.
Note: Cron-based scheduling applies only to customer workloads, not to Kyma components.
The KEDA `ScaledObject` resource can be configured with a trigger of type `cron`. Within the cron scaler, you can specify that the workloads should only run during working hours.
For each workload, you must specify `scaleTargetRef`, which points to the Deployment (or other scalable resource) that KEDA manages, as shown in the sketch below.
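Here is a minimal sketch of such a `ScaledObject`. The Deployment name (orders-service), namespace, timezone, and working hours are illustrative assumptions, not values from this setup:

```yaml
# Sketch: run the (hypothetical) orders-service Deployment with 2 replicas
# during working hours (08:00-18:00, Mon-Fri) and scale it to 0 otherwise.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-service-working-hours
  namespace: dev
spec:
  scaleTargetRef:
    name: orders-service        # Deployment to scale; one ScaledObject per workload
  minReplicaCount: 0            # allow scale-to-zero outside the schedule
  maxReplicaCount: 2
  triggers:
    - type: cron
      metadata:
        timezone: Europe/Berlin   # IANA timezone name
        start: 0 8 * * 1-5        # scale up at 08:00, Monday to Friday
        end: 0 18 * * 1-5         # scale down at 18:00, Monday to Friday
        desiredReplicas: "2"      # replicas to keep within the window
```

Outside the start/end window, KEDA scales the target down to minReplicaCount; with a value of 0, the replicas disappear entirely and the cluster autoscaler can eventually remove the now-idle nodes.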
I applied the KEDA cron scaler to all custom workloads in my Kyma cluster.
The replicas of all my microservices and Functions were scaled down to zero.
Additionally, the number of nodes (VMs) was reduced from 4 to 3.