
Green initiatives are no longer a “nice to have.” They are imperative to innovation in data-intensive technology.
Resource efficiency, increased portability, and productivity have led to 96% of organizations across the globe either using or evaluating Kubernetes. Many organizations have significantly increased server utilization by moving to Kubernetes orchestration.
However, most organizations are not yet implementing smart techniques to further optimize resource utilization and significantly reduce their organization’s carbon footprint.
As a profession, our efforts to make servers more efficient have already bent the curve of energy consumption below projections. In 2015, just as Kubernetes 1.0 was released, researchers projected that servers would consume 3–13% of global electricity by 2030. Yet in 2022, servers accounted for only about 1% of global electricity consumption.
This indicates that increased demand on data centers does not have to correspond with an increased carbon footprint.
Ongoing innovation to reduce the carbon footprint of data-intensive technologies can continue to have an impact. Here are three ways we can reduce the carbon footprint of Kubernetes:
- Use the regional scheduling capabilities of Kubernetes to run workloads in locations with cleaner electricity.
- Employ energy-efficient multi-tenant clusters with true isolation to increase machine utilization and eliminate redundant operational resources.
- Scale intelligently, using machine learning to avoid running excess pods.
Make these three key changes today
Schedule deployments in low-carbon-intensity regions
The portability of Kubernetes means that information technology decision makers can choose where workloads are deployed. This choice can be informed by an electricity map API that indicates the climate impact of different regions based on the carbon intensity of the electricity consumed there. By scheduling Kubernetes workloads to scale in cloud provider regions with lower emissions intensity, organizations can reduce the carbon intensity of running their applications. A low-carbon Kubernetes scheduler project was able to migrate Kubernetes deployments between global regions to utilize the least carbon-intense electricity available, directly reducing an application’s carbon emissions.
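As a rough sketch of how that region selection might work, the snippet below polls a carbon-intensity API for a set of candidate cloud regions and picks the cleanest one. The endpoint and response shape are modeled on Electricity Maps’ public API, and the region-to-grid-zone mapping is an illustrative assumption; adapt both to your provider and data source.

```python
# A minimal sketch of carbon-aware region selection, assuming an API that
# returns grid carbon intensity (gCO2eq/kWh) per zone, modeled on the
# Electricity Maps API. The region -> grid-zone mapping is hypothetical.
import requests

CANDIDATE_REGIONS = {
    "us-east-1": "US-MIDA-PJM",
    "eu-north-1": "SE",
    "ca-central-1": "CA-QC",
}

def carbon_intensity(zone: str, api_key: str) -> float:
    """Fetch the latest grid carbon intensity (gCO2eq/kWh) for a zone."""
    resp = requests.get(
        "https://api.electricitymap.org/v3/carbon-intensity/latest",
        params={"zone": zone},
        headers={"auth-token": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["carbonIntensity"]

def greenest_region(api_key: str) -> str:
    """Return the candidate cloud region whose grid is currently cleanest."""
    return min(
        CANDIDATE_REGIONS,
        key=lambda region: carbon_intensity(CANDIDATE_REGIONS[region], api_key),
    )

if __name__ == "__main__":
    region = greenest_region(api_key="YOUR_API_KEY")  # placeholder credential
    print(f"Schedule the next rollout in: {region}")
```

From here, a scheduler or CI pipeline could feed the chosen region into node selectors or cluster targeting; that wiring is deployment-specific.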
Enhance Kubernetes resource efficiency with multi-tenant architecture
Current methods for isolation rely on creating a dedicated cluster for each customer or organization. Each cluster requires its own tooling, control plane, observability stack, and supporting services such as identity and access management, firewalls, and networking. Add to this that each cluster typically runs at only about 60% of capacity to preserve headroom, and the waste from all these individual clusters piles up quickly. A multi-tenant approach to Kubernetes reduces both unused cluster capacity and infrastructure footprint, optimizing utilization and cutting energy consumption. With multiple tenants running on one cluster, the control plane and other infrastructure are shared, and multiple services can “smooth” surges in use between them. Instead of reserving 40% headroom in every cluster, one larger cluster can reserve 20%, because not all services surge at the same time.
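To make the headroom math concrete, here is a back-of-the-envelope comparison. The figures are illustrative, taken from the percentages above rather than from measurements:

```python
# Illustrative headroom math: ten tenants, each needing 6 "units" of
# steady-state capacity. Figures mirror the percentages in the text.
tenants = 10
steady_load_per_tenant = 6.0  # arbitrary capacity units

# Dedicated clusters: each reserves 40% headroom (runs at 60% utilization).
dedicated_total = tenants * steady_load_per_tenant / 0.60

# One shared cluster: surges rarely coincide, so 20% pooled headroom suffices.
shared_total = tenants * steady_load_per_tenant / 0.80

print(f"Dedicated clusters provision: {dedicated_total:.0f} units")   # 100
print(f"Shared cluster provisions:    {shared_total:.0f} units")      # 75
print(f"Capacity (and energy) saved:  {1 - shared_total / dedicated_total:.0%}")  # 25%
```

Same aggregate demand, a quarter less provisioned capacity to power and cool.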
Managing namespaces is crucial to achieving isolation in a multi-tenant cluster. In addition, network, observability, compute, and storage resources all need some degree of isolation; otherwise you end up with a noisy-neighbor problem. Open source projects are currently working on solutions to these challenges. One example is KubeSlice, which creates logical application boundaries known as “slices” across one or more clusters. Each slice gives a tenant its own isolated network, with its own namespaces, resource quotas, and traffic profiles. Platforms like this work to make multi-tenancy a seamless choice for applications seeking to reduce their carbon footprint.
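As a minimal sketch of the namespace side of this isolation, the snippet below uses the official kubernetes Python client to create a tenant namespace and cap its resource requests with a ResourceQuota. The tenant name and quota values are hypothetical, and this covers only compute isolation; network and observability isolation require additional policies or a platform like KubeSlice.

```python
# A minimal sketch of per-tenant isolation with a namespace plus a
# ResourceQuota, using the official `kubernetes` Python client.
# The tenant name and quota values are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

tenant = "tenant-a"  # hypothetical tenant name

# A namespace gives the tenant its own logical boundary in the shared cluster.
core.create_namespace(
    body=client.V1Namespace(metadata=client.V1ObjectMeta(name=tenant))
)

# A ResourceQuota caps what the tenant can request, so one noisy neighbor
# cannot starve the others.
core.create_namespaced_resource_quota(
    namespace=tenant,
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{tenant}-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    ),
)
```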
Scale more intelligently
While scalability provides the unparalleled resilience and availability critical to application uptime, it can also be the biggest energy guzzler. The ease of Kubernetes deployment has made over-provisioning the default strategy for ensuring application performance. As with multi-tenancy, scaling smarter lets clusters safely power off unused (wasted) worker nodes. Predictive autoscaling observes application traffic and predicts the number of pods needed based on patterns; for example, Avesha’s Smart Scaler uses reinforcement learning to continuously optimize the number of running pods. With intelligent predictions about machine use, more unused worker nodes can be turned off, reducing energy consumption without risking downtime.
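The sketch below shows the idea in its simplest form: forecast the next interval’s traffic from recent samples and resize a Deployment ahead of the surge. The per-pod capacity, deployment name, and naive trend forecast are all illustrative assumptions; production predictive autoscalers such as Smart Scaler use far richer models.

```python
# A toy predictive scaler: extrapolate recent request rates one step ahead
# and size replicas before the surge arrives. All numbers are illustrative.
import math
from kubernetes import client, config

RPS_PER_POD = 50.0  # assumed per-pod capacity, e.g. measured via load tests
MIN_REPLICAS = 2    # availability floor

def forecast_rps(samples: list[float]) -> float:
    """Naive forecast: extend the most recent trend one step ahead."""
    if len(samples) < 2:
        return samples[-1]
    trend = samples[-1] - samples[-2]
    return max(samples[-1] + trend, 0.0)

def scale_deployment(name: str, namespace: str, samples: list[float]) -> int:
    """Patch the Deployment to the predicted replica count and return it."""
    replicas = max(MIN_REPLICAS, math.ceil(forecast_rps(samples) / RPS_PER_POD))
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name, namespace, body={"spec": {"replicas": replicas}}
    )
    return replicas

if __name__ == "__main__":
    config.load_kube_config()
    recent_rps = [180.0, 220.0, 260.0]  # e.g., scraped from your metrics stack
    n = scale_deployment("web", "default", recent_rps)
    print(f"Scaled to {n} replicas ahead of predicted load")
```

Any replicas the forecast does not need can be consolidated, letting the cluster autoscaler power down the freed worker nodes.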
Lights on, lights off
Infrastructure in the cloud is designed to be agile with “lights on, lights off” functionality. Just as we lower energy consumption in our homes by turning the light out when we leave a room, we can reduce Kubernetes’ carbon footprint by turning off what we aren’t using.
Artificial intelligence, smart and connected energy systems, distributed manufacturing systems, and autonomous vehicles continue to increase demands on data-intensive technology. Reducing the carbon footprint for these innovations is an urgent matter, essential to both their endurance and our own.
The good news is that we already have the technology to address this. Right now, we can choose less carbon-intensive electricity and turn off excess resources through multi-tenancy and intelligent scaling. By putting these strategies into action today, we keep the lights on for the innovations of generations to come.