Smart Kubernetes Autoscaling
K20s is a smart Kubernetes controller that provides sophisticated, policy-driven workload autoscaling and rightsizing.
Beyond the HPA: Stable, Predictive Scaling
Unlike the native Horizontal Pod Autoscaler (HPA), K20s leverages long-term historical data from Prometheus to make more intelligent and stable scaling decisions, eliminating the jerky, oscillatory scaling caused by momentary spikes.
It introduces the ResourceOptimizerProfile Custom Resource, allowing you to declaratively define powerful optimization policies for every application in your cluster.
Optimization, Elevated.
Policy-Driven Optimization
Choose from Scale, Resize, and Recommend policies to declaratively control exactly how your workloads are optimized.
Historical Data Analysis
Integrates directly with Prometheus to analyze resource utilization over extended periods, providing deep context for stable rightsizing.
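The exact queries K20s issues are internal to the controller, but the general idea can be illustrated with a PromQL expression of this shape, which computes average CPU utilization (usage versus requests) over a long window for pods matching a label pattern (the `sample-app-.*` pattern and 7-day window here are illustrative):

```promql
# Illustrative only: average CPU utilization as a percentage of requests,
# averaged over the last 7 days, for pods whose names match sample-app-.*
avg(
    sum by (pod) (rate(container_cpu_usage_seconds_total{pod=~"sample-app-.*"}[7d]))
  /
    sum by (pod) (kube_pod_container_resource_requests{resource="cpu", pod=~"sample-app-.*"})
) * 100
```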
Configurable "Goldilocks Zone"
Define your ideal CPU utilization ranges to keep applications performant while maximizing cost-effectiveness—not too hot, not too cold, just right.
Built-in Stability & Safety
Features like cooldown periods and explicit min/max boundaries prevent oscillatory scaling and ensure your cluster remains stable and predictable.
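To convey the idea, safety settings of this kind might look like the fragment below in a profile spec. The field names here are hypothetical and shown only for illustration; consult the API Reference Documentation for the actual schema.

```yaml
# Hypothetical field names -- see the API Reference for the real schema.
spec:
  optimizationPolicy: Scale
  scaling:
    minReplicas: 2        # hard floor: never scale below this
    maxReplicas: 10       # hard ceiling: never scale above this
    cooldownSeconds: 300  # wait between scaling actions to avoid flapping
```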
Clear Observability
A built-in status dashboard and custom Prometheus metrics give you a clear, transparent view of every decision the controller makes.
Native Kubernetes Experience
Configure scaling policies directly through Kubernetes Custom Resources—just use kubectl and YAML.
Installation
Prerequisites
- A running Kubernetes cluster.
- `kubectl` installed and configured.
- Prometheus installed in the cluster and accessible via a service.
For detailed setup instructions, including using `make install`, see the Makefile Help Documentation.
1. Install the Controller
Run the following command to install the controller, the CRD, and all necessary resources. Replace vX.Y.Z with the desired version from the GitHub Releases page.
```shell
kubectl apply -f https://github.com/OpScaleHub/K20s/releases/download/vX.Y.Z/install.yaml
```
2. Verify the Installation
Check that the controller pod is running in the k20s-system namespace.
```shell
kubectl get pods -n k20s-system
```
How to Use K20s
K20s is configured using the ResourceOptimizerProfile Custom Resource. This allows you to define your scaling and optimization strategies declaratively, right alongside your other Kubernetes manifests.
Create a YAML file for your profile, specifying the target workloads via a label selector, your desired CPU utilization thresholds, and the optimization policy to apply.
Example Profile
This profile targets any Deployment with the label app: sample-app. It will attempt to keep average CPU utilization between 30% and 75% by scaling the number of replicas up or down.
```yaml
# sample-profile.yaml
apiVersion: optimizer.k20s.opscale.ir/v1
kind: ResourceOptimizerProfile
metadata:
  name: sample-app-optimizer
spec:
  selector:
    matchLabels:
      app: sample-app
  cpuThresholds:
    minPercent: 30
    maxPercent: 75
  optimizationPolicy: Scale
```
Apply the profile to your cluster:
```shell
kubectl apply -f sample-profile.yaml
```
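The replica math behind a Scale policy with a utilization band can be sketched roughly as follows. This is an illustrative model, not the controller's actual code; the function and parameter names are hypothetical, and it borrows the proportional formula used by the native HPA, aiming for the midpoint of the band when utilization drifts outside it.

```python
import math

def desired_replicas(current_replicas: int,
                     avg_cpu_percent: float,
                     min_percent: float = 30.0,
                     max_percent: float = 75.0,
                     min_replicas: int = 1) -> int:
    """Illustrative replica calculation for a min/max utilization band.

    Inside the band ("Goldilocks zone"), nothing changes. Outside it,
    replicas are adjusted proportionally -- the same idea as the HPA
    formula -- so utilization lands near the middle of the band.
    """
    if min_percent <= avg_cpu_percent <= max_percent:
        return current_replicas  # in the band: no action
    target = (min_percent + max_percent) / 2  # aim for the band midpoint
    proposed = math.ceil(current_replicas * avg_cpu_percent / target)
    return max(min_replicas, proposed)

# 4 replicas at 90% CPU with a 30-75% band -> scale up to 7
# 4 replicas at 50% CPU -> stay at 4
# 4 replicas at 10% CPU -> scale down to 1
```

The band (rather than a single target value) is what suppresses oscillation: small drifts in utilization produce no action at all.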
For a complete reference of the ResourceOptimizerProfile fields, check the API Reference Documentation.
Ready to Optimize Your Fleet?
Get started by deploying the controller and creating your first optimization profile.
Deploy K20s Now