Save Money with Kubernetes

Parker Smits • October 10, 2023

Top methods to save money with Kubernetes


Kubernetes has become the industry standard for deploying and running application workloads. One outcome of all this attention is a wealth of tools and resources for tracking and reducing costs. The Kubernetes ecosystem grows every day through many partners like the CNCF. A few tools garner most of the attention with features like service meshes and CI/CD, but I want to highlight some of the tools and features that can save you money.



The crucial cluster autoscaler


First and most importantly, cluster autoscalers are a crucial component of cost optimization in Kubernetes environments. They enable you to maintain the right level of resources for your workloads, reduce infrastructure costs, and improve the overall efficiency and reliability of your Kubernetes clusters. By leveraging cluster autoscalers, you can ensure that you're getting the most value from your Kubernetes-based applications while keeping operational expenses in check. There isn’t just one common autoscaler; many are optimized for specific infrastructure or cloud environments. I will link a few common cluster autoscalers below.
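To make this concrete, here is a minimal sketch of the container arguments for the Kubernetes Cluster Autoscaler on AWS; the image tag, autoscaling-group name, and node bounds are placeholders for illustration, not recommendations.

```yaml
# Abbreviated container spec from a cluster-autoscaler Deployment (AWS example).
# The ASG name, node bounds, and version tag are placeholders.
containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --nodes=1:10:my-node-asg                 # min:max:autoscaling-group-name
  - --scale-down-utilization-threshold=0.5   # consider removing nodes under 50% utilization
  - --balance-similar-node-groups            # spread scale-up across similar node groups
```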



Next up is the Vertical Pod Autoscaler (VPA)


VPA is a feature designed to inform users how to rightsize their workloads in terms of resource requests and limits. Rightsizing your workloads is usually the quickest way to save costs, but this can be difficult without data. The VPA helps by analyzing your workloads and suggesting optimizations, or it can even be set to adjust your workloads automatically. A minimal manifest sketch follows the list below.


  • VPA continuously monitors the resource usage of individual containers within a pod. It collects data on CPU and memory usage patterns over time.
  • VPA analyzes this usage data and identifies the optimal resource requests and limits for each container. It takes into account the container's historical resource consumption and recommends adjustments.
  • By following VPA's recommendations, users can set resource requests and limits more accurately, ensuring that pods have the resources they need without over-provisioning. This leads to better resource utilization and cost savings.
  • Users can choose to enable VPA in an automatic mode, allowing it to make adjustments without manual intervention. This ensures that pods adapt to changing resource demands in real-time, reducing the risk of under or over-provisioning.
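As a rough sketch, a VPA object in recommendation-only mode might look like the following; the names are placeholders and it assumes the VPA add-on is installed in your cluster.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa              # placeholder name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # placeholder Deployment to analyze
  updatePolicy:
    updateMode: "Off"           # recommendation-only; "Auto" lets VPA apply changes itself
```

With updateMode set to "Off" you can review the recommendations (for example with kubectl describe vpa) before deciding whether to let VPA act automatically.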



The next essential tool is Horizontal Pod Autoscaling (HPA)


HPAs are a first-class feature in Kubernetes. No longer do we need to provision resources for peak-traffic scenarios that may happen one or two weeks a year. We can provision our resources for our lowest demand and have the cluster automatically spin up new capacity when needed. HPAs ship with a few default scaling strategies, most commonly CPU- and memory-based thresholds. If the default options are not powerful enough, KEDA is a great solution: it reduces Kubernetes costs by enabling event-driven autoscaling, with fine-grained support for event sources including Azure Functions, Kafka, RabbitMQ, and many more. This allows you to tailor your autoscaling strategy to specific workloads and event triggers, ensuring that resources are allocated optimally.
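For reference, here is a minimal HPA sketch using the built-in CPU utilization metric; the Deployment name, replica bounds, and target utilization are illustrative assumptions.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa              # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # placeholder Deployment to scale
  minReplicas: 2                # provision for your lowest demand
  maxReplicas: 20               # allow burst capacity for peaks
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # add replicas above ~70% average CPU
```

A KEDA ScaledObject plays a similar role but scales on external event sources (for example, queue depth) instead of CPU or memory.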



The cost-saving potential of Knative


Knative is an open-source serverless platform built on top of Kubernetes (think AWS Lambda, but for Kubernetes). It is designed to simplify the deployment and management of containerized applications. Knative offers several features that can save customers money in a Kubernetes-based environment:

  • Knative automatically scales your services to zero when there's no traffic, eliminating the need to pay for idle resources. When requests come in, Knative scales up the appropriate number of containers to handle the load, reducing overall resource costs (see the sketch after this list).
  • Knative's event-driven capabilities allow you to build applications that respond to events, such as HTTP requests or changes in data. This event-driven approach can help you design cost-efficient, event-triggered workflows.
  • Knative's resource management ensures efficient use of resources, minimizing the overhead associated with serverless deployments. It optimizes container utilization and reduces the cost of running your applications.
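A minimal Knative Service sketch with scale-to-zero enabled might look like this; the service name, image, and scale bounds are placeholders, and it assumes Knative Serving is installed in the cluster.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                   # placeholder service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"    # allow scaling to zero when idle
        autoscaling.knative.dev/max-scale: "10"   # cap burst capacity
    spec:
      containers:
      - image: ghcr.io/example/hello:latest       # placeholder image
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
```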



Leverage labels and Spot instances


Using node labels and Kubernetes topologySpreadConstraints in combination with on-demand and spot instance pools allows you to optimize Kubernetes workloads for both cost savings and high availability. This approach lets you harness the cost benefits of spot instances while ensuring that critical workloads continue to run on reliable on-demand instances, ultimately achieving a cost-efficient and resilient infrastructure.

By assigning specific labels to nodes in your Kubernetes cluster, you can categorize them based on instance type, availability zone, or any other relevant attribute. This allows you to distinguish between on-demand and spot instances. You can then add Kubernetes topologySpreadConstraints so that pods spread evenly across nodes with specific labels, making it possible to schedule workloads on both on-demand and spot instances and keep your application highly available.
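As an illustration, a Deployment might spread its replicas across spot and on-demand nodes like this; the capacity-type label is a hypothetical stand-in for whatever label your nodes actually carry (cloud providers often set one for you), and the names and image are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                          # keep spot/on-demand replica counts within 1 of each other
        topologyKey: capacity-type          # hypothetical node label with values "spot" / "on-demand"
        whenUnsatisfiable: ScheduleAnyway   # prefer the spread, but don't block scheduling
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: ghcr.io/example/my-app:latest   # placeholder image
```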



The all-in-one Kubecost


Lastly, Kubecost is potentially the best way to view and optimize your Kubernetes spend. It leverages many of the features above in one tool, and more. Kubecost is a paid service, but for many it will be well worth the cost. For smaller Kubernetes installations there are free options that support a limited number of users and nodes. Here is a brief look at Kubecost’s features:


  • Kubecost provides granular cost breakdowns, allowing users to see how much each application, namespace, or pod is costing them.
  • Kubecost helps users identify over-provisioned resources. Users can rightsize their workloads by adjusting resource requests and limits similar to vertical pod autoscaling.
  • Kubecost enables cost allocation and chargeback mechanisms within an organization. Teams or departments can be billed for the resources they consume, encouraging responsible resource usage.
  • Kubecost provides alerting capabilities, allowing users to set budget thresholds and receive notifications when costs exceed predefined limits.
  • Kubecost integrates with cloud providers' spot and reserved instances. It helps users take advantage of cost-effective spot instances when appropriate and reserve instances for predictable workloads, optimizing cloud costs.
  • Kubecost can be integrated into CI/CD pipelines to validate changes for cost implications before deployment, preventing cost spikes resulting from unintended resource allocation.
  • For organizations running multiple Kubernetes clusters, Kubecost offers multi-cluster support, allowing centralized cost monitoring and optimization across all clusters.

