6 Mistakes to Avoid When Adopting Kubernetes

Avoid these common pitfalls to ensure your Kubernetes adoption goes off without a hitch.


If you want to deploy cloud-native applications anywhere and manage them the way you want everywhere, Kubernetes is the answer. Kubernetes is a powerful open-source platform that enables the deployment, scaling, and management of containerized applications. It can save developers a great deal of time and effort by allowing them to focus on building features for their applications instead of figuring out and implementing ways to keep their applications running well. However, as with any technology, there are common pitfalls that can complicate Kubernetes adoption. Here are six mistakes to avoid when adopting Kubernetes.

1. Not Properly Assessing the Need for Kubernetes

Before jumping into Kubernetes, assess whether it is the right platform for your needs. Kubernetes is a powerful tool, and it is widely adopted, but it is not the only container orchestration platform out there, and it is certainly not the easiest one to deploy and manage. Kubernetes may add unnecessary complexity for small teams and projects, so take the time to evaluate alternatives, such as Docker Compose (for very small and simple deployments), Docker Swarm (which enables cluster management and more advanced orchestration), or even managed solutions such as Amazon Elastic Container Service. Begin working with Kubernetes only after making sure it is the right tool for you, your workload, and your team.

2. Neglecting Monitoring and Observability

When resources are shared across many different applications, the environment can be noisy, and it can be hard to see when something goes wrong. A well-planned and well-executed monitoring and observability infrastructure can alert you when something is broken. It will help with performance tuning, simplify troubleshooting, and shorten the mean time to resolution (MTTR).

Carefully planning and implementing such infrastructure can be complicated, especially fine-tuning your alerts and dashboards for your specific organizational needs. This is a lengthy process, so if you do not have the time to invest in perfecting your Kubernetes monitoring and observability on your own, there are open-source and third-party solutions that will help simplify the journey toward observability. A good starting point is kube-prometheus-stack, since it deploys Prometheus, Grafana, and Alertmanager together with basic Kubernetes alerting rules and Grafana dashboards.
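As a sketch of what fine-tuning your own alerts might look like on top of kube-prometheus-stack, the manifest below adds a custom alerting rule via a PrometheusRule resource, which the Prometheus Operator picks up. The rule name, alert threshold, and the `release` label (which must match your Helm release's rule selector) are assumptions to adapt to your setup:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-restart-alerts            # hypothetical name
  labels:
    release: kube-prometheus-stack    # must match your Helm release's ruleSelector
spec:
  groups:
  - name: pod-health
    rules:
    - alert: PodRestartingOften
      # kube-state-metrics counter; fires if a container restarted
      # more than 3 times within the last hour
      expr: increase(kube_pod_container_status_restarts_total[1h]) > 3
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently"
```

Starting from a small set of rules like this and iterating on thresholds as you learn your workload's normal behavior is usually easier than trying to get every alert right up front.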

3. Forgetting to Right-Size Your Applications

When planning cluster capacity, you first need to understand the characteristics of the running workload. Will your applications' resource usage be consistent and predictable, or bursty and erratic? Can your applications withstand interruptions if the cluster suffers a resource shortage? Based on the answers, you can choose the right Kubernetes Quality of Service (QoS) class for each workload. These classes determine how pods are scheduled on cluster nodes and which pods are evicted first when resources run low.
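The QoS class is inferred from how you set requests and limits. As a minimal sketch (pod name, image, and values are illustrative), a pod whose containers set requests equal to limits gets the Guaranteed class, which is the last to be evicted under resource pressure:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:                # requests == limits for every container
        cpu: "500m"            # => Kubernetes assigns the Guaranteed QoS class
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
```

If requests are set lower than limits (or only requests are set), the pod is classed Burstable instead; pods with no requests or limits at all are BestEffort and are evicted first.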

After understanding QoS classes and choosing the right ones for each workload, continuously monitor pod resource consumption using your observability platform and right-size Kubernetes requests and limits based on the results. Setting those will affect the inferred Kubernetes QoS class for each workload and how many nodes are required to schedule and execute pods. Setting Kubernetes resource requests, limits, and QoS classes is a topic of its own, and there are many approaches and debates as to how it should be done. Take the time to understand the impact of each approach. If you are looking for a more efficient way to work with Kubernetes requests and limits, try the Kubernetes Vertical Pod Autoscaler. It will recommend or even automatically set workload requests and limits after evaluating the workload for some time.
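A Vertical Pod Autoscaler object can be sketched as below; it targets a Deployment (the names here are hypothetical) and, with `updateMode: "Off"`, only publishes recommendations in its status without restarting pods, which is a safe way to start:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa             # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # hypothetical Deployment to evaluate
  updatePolicy:
    updateMode: "Off"          # recommend only; "Auto" would apply changes
```

Once you trust the recommendations (viewable with `kubectl describe vpa my-app-vpa`), you can either copy them into your manifests or switch the mode to `Auto`.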

4. Overlooking Security

Kubernetes is complicated enough to understand and adopt on its own, but securing your infrastructure and workloads should still be at the top of your list when starting to work with it. There are many aspects of securing Kubernetes, including API security, node security, network security, runtime protection, workload and tenant separation, role-based access control, container isolation, and vulnerability management. Each subject on its own demands thorough research, so take the time to formulate your security approach at the very beginning, even before you onboard your workload to Kubernetes, since neglecting any of the above may pose a risk to your workload.
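As one concrete example from the network-security bucket, a common baseline is a default-deny NetworkPolicy per namespace, so pods only accept the traffic you explicitly allow with further policies. A minimal sketch (the namespace name is an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace      # hypothetical namespace
spec:
  podSelector: {}              # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed => all inbound traffic denied
```

Note that NetworkPolicy only takes effect if your cluster's CNI plug-in enforces it, which is worth verifying before relying on policies like this.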

5. Failing to Automate Deployment and Management

As with many other distributed systems, manually managing Kubernetes can be overwhelming, especially at scale. Failing to automate your Kubernetes deployments can lead to inconsistencies, inefficiencies, and team burnout. There are many ways to enable automatic management and configuration of Kubernetes components and workloads, from initial cluster deployment through day-two operations. Start by building your Kubernetes infrastructure with infrastructure-as-code (IaC) tools. Build and deploy into Kubernetes with CI/CD pipelines, and implement GitOps tools to automate your infrastructure and application configuration. Using these tools will enable consistency, reliability, and faster delivery while saving time and effort.
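As a sketch of the GitOps piece, an Argo CD Application (one popular GitOps tool; the repository URL, paths, and names below are hypothetical) continuously syncs a Git directory of manifests into the cluster:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                 # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-manifests.git  # hypothetical repo
    targetRevision: main
    path: apps/my-app          # directory of manifests to sync
  destination:
    server: https://kubernetes.default.svc   # the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true              # delete resources removed from Git
      selfHeal: true           # revert manual drift back to the Git state
```

With `prune` and `selfHeal` enabled, Git becomes the single source of truth: changes land via pull requests rather than ad hoc kubectl commands, which is what gives you the consistency and auditability mentioned above.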

6. Losing Control of Cloud Spend

This is a story many engineers are familiar with: you start with a small deployment, configure some plug-ins, experiment, and even onboard some new workloads to your fresh Kubernetes cluster. Then the monthly invoice arrives, and you get sticker shock. This is the nature of working with cloud-native tools; if you're not careful, it is easy to lose control of cloud spending, which is particularly painful on a tight budget. It's important to research and implement tools that can help you control and analyze your spending in the Kubernetes ecosystem and optimize cloud and infrastructure costs. One such option is OpenCost, an open-source tool that connects to a new or existing Prometheus stack and enables you to measure and allocate infrastructure and container costs in Kubernetes.
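Wiring OpenCost into an existing Prometheus can be sketched as a scrape configuration like the one below, which collects the cost metrics OpenCost exposes; the target assumes OpenCost is installed in an `opencost` namespace with its default service name and port, so adjust to your deployment:

```yaml
scrape_configs:
- job_name: opencost
  honor_labels: true
  scrape_interval: 1m
  metrics_path: /metrics
  static_configs:
  - targets:
    - opencost.opencost:9003   # assumed default service/namespace/port
```

From there, the OpenCost UI and API let you break down cost by namespace, deployment, or label, which makes it much easier to attribute that surprising invoice to specific workloads.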

Kubernetes is a powerful orchestration platform that is very useful for provisioning and operationalizing containers at scale. Once you understand the capabilities and benefits of Kubernetes, there are many opportunities to modernize traditional applications and develop new cloud-native apps with speed and agility. By taking these six pitfalls into account from the start, you can put yourself on the path to a successful Kubernetes journey.


Shay Ulmer is Lightspin’s DevOps Engineer, responsible for creating and maintaining the company’s CI/CD workflows, cloud infrastructure, application monitoring and security. Before joining Lightspin, Shay was part of the Kubernetes Managed Services SRE team at Red Hat. Shay enjoys working with new technologies, tackling automation challenges and helping improve infrastructure availability, scalability and security.