Steering Toward DevSecOps: 3 Benefits of Kubernetes

Here's why Kubernetes is an effective platform to bring together development and operations for end-to-end security.


All great journeys need a strong captain, and the road to DevSecOps is no exception. That’s where Kubernetes comes in. Originating from a Greek word meaning helmsman or pilot, Kubernetes is an open-source system that simplifies deploying, scaling, operating and, most importantly, securing containerized applications. Originally designed by Google, Kubernetes was open-sourced in 2014 and is now used by 83 percent of organizations surveyed in the Cloud Native Computing Foundation’s (CNCF) most recent report, with use of containers in production increasing 300 percent since 2016. So, what makes Kubernetes such an effective platform to bring together development and operations and establish end-to-end security? In my mind, there are three key benefits that must be taken into consideration:

1. Approachability of the Tooling

One of the most effective ways of breaking down barriers between groups is to establish common ground and a common language. When bringing together the silos of development, security and operations in DevSecOps, that commonality is greatly facilitated by shared, effective tooling, and Kubernetes and the CNCF community provide just that. Whether it is security and resource policy enforcement through Open Policy Agent (OPA) Gatekeeper, consistent deployment through declarative Helm charts and GitOps tools such as Argo CD and Flux, operational monitoring with Prometheus and Grafana, or rapid local development and testing with Minikube and Kind, the Kubernetes ecosystem is one of the few that can consistently boast, “there’s a tool for that!”
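To make the policy-enforcement point concrete, here is a minimal sketch of a Gatekeeper constraint that requires every namespace to carry a `team` label. It assumes the `K8sRequiredLabels` ConstraintTemplate from the Gatekeeper policy library has already been installed in the cluster; the constraint name is illustrative.

```yaml
# Hypothetical example: requires the K8sRequiredLabels ConstraintTemplate
# from the Gatekeeper policy library to be installed in the cluster.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  match:
    kinds:
      - apiGroups: [""]        # core API group
        kinds: ["Namespace"]   # enforce on namespace objects
  parameters:
    labels: ["team"]           # every namespace must declare a "team" label
```

Because the same manifest is readable by developers, security engineers and operators alike, policy stops being a document on a shelf and becomes a shared, enforceable artifact.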

In addition to being approachable, the tooling of Kubernetes makes it incredibly portable. Capable of running on-premises, in hybrid or public cloud infrastructure and even embedded in edge devices, Kubernetes enables organizations to develop, deploy, operate, monitor and continuously secure their IT solutions anywhere the business or mission requires.

2. Automation

One of the most effective applications of Kubernetes’ robust tooling is an “automate everything” philosophy. With Kubernetes, organizations can enjoy consistent, low-risk deployments, lower costs by automating operational tasks, and build platforms and applications that are self-healing through declarative definitions rather than imperative scripting. By declaratively defining the desired state of the cluster, its security policy and its application deployments, as opposed to scripting line-by-line instructions to be followed, automation in Kubernetes is more resilient and adaptable to change out of the box.
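A short sketch of what “declare the desired state” means in practice: the Deployment below tells Kubernetes *what* should exist, not *how* to create it, and the control plane continuously reconciles reality toward it. The application name and image are placeholders.

```yaml
# Illustrative Deployment: names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired state: Kubernetes keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # pin versions for reproducible, auditable releases
          ports:
            - containerPort: 80
```

If a node fails and a pod dies, no runbook is consulted and no script is rerun; the cluster notices the drift from three replicas and replaces the pod on its own. That is the self-healing property the declarative model buys.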

Taking automation even further, Kubernetes allows its built-in automation to be extended through custom resource definitions (CRDs) and operators. This allows an organization to tailor its automated solutions to the exact operational and security needs of its environment. Whether it is responding to a security incident through dynamic isolation, ensuring resiliency through chaos engineering or performing backup and restore activities, custom operators eliminate the most consistent point of failure and security vulnerability in IT: the human in the loop.
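As a rough illustration of the extension mechanism, here is a minimal CRD that teaches the cluster a new `BackupSchedule` resource type; a companion operator (not shown) would watch these objects and carry out the backups. The group `ops.example.com` and all field names are hypothetical.

```yaml
# Hypothetical CRD: the group, kind and fields are illustrative only.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backupschedules.ops.example.com
spec:
  group: ops.example.com
  names:
    kind: BackupSchedule
    plural: backupschedules
    singular: backupschedule
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string   # cron expression the operator reconciles against
```

Once applied, `kubectl get backupschedules` works like any built-in resource, and the organization’s bespoke automation plugs into the same declarative machinery as everything else.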

3. Enablement of Zero Trust

Kubernetes is one of the most effective technology enablers for establishing a true Zero Trust Architecture (ZTA) in the enterprise. One of the most fundamental reasons is philosophical alignment: everything in the ecosystem is a resource, every resource has an identity and no resource is implicitly trusted. This is further augmented by the automation and approachable tooling that establish software-defined security policy, operational transparency, software-defined micro-perimeters and software-defined networking, most often delivered as a service mesh. One of the most common service mesh solutions in the Kubernetes community is Istio, as it provides not only the mesh itself but also a suite of robust custom resource definitions, operators and third-party monitoring integrations that help bootstrap a ZTA solution in Kubernetes.
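As one small example of how little configuration such a mesh demands, the following Istio resource, applied in the mesh’s root namespace (`istio-system` in a default install), requires mutual TLS for all workload-to-workload traffic, so every connection carries a verifiable identity.

```yaml
# Mesh-wide strict mTLS; assumes a default Istio installation
# where istio-system is the root namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject any plaintext traffic between workloads
```

A few declarative lines replace what would otherwise be certificate distribution, rotation and enforcement logic hand-built into every service.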

Take, for example, the recent SolarWinds incident. A Zero Trust Architecture established through Kubernetes would have mitigated the risks identified in that attack. First, Kubernetes’ ability to enforce software supply chain security at runtime provides a layer of protection aimed at preventing a malicious actor’s injected code from ever being released into operations. Second, software-defined micro-perimeters, mutual Transport Layer Security (mTLS) enforcement and a deny-by-default network mesh prevent lateral movement by an attacker whose malware does make it through the supply chain protections. Third, continuous monitoring and automated AIOps through custom operators ensure that malicious and anomalous traffic and activity are rapidly detected and isolated, protecting the environment and enabling forensic investigation.
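The deny-by-default posture mentioned above can be expressed natively, without a mesh, through a standard NetworkPolicy; the sketch below blocks all traffic in a namespace until explicit allow rules are layered on top. The namespace name is a placeholder.

```yaml
# Default-deny for one namespace; "prod" is a hypothetical namespace name.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}    # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress         # with no rules listed, all ingress and egress is denied
```

With this in place, an implant that lands inside one pod cannot reach its neighbors or call home; every permitted flow must be deliberately declared, which is precisely the lateral-movement containment Zero Trust calls for.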

As organizations recognize the many benefits of Kubernetes, usage of the platform continues to grow, and that surge is not without its risks. Though the platform offers countless security controls and protections, there is a learning curve to navigate, and organizations must invest time and effort to ensure proper deployment. In fact, in a recent report on the State of Container and Kubernetes Security, 94 percent of respondents reported having experienced a security incident in their Kubernetes and container environments in the last 12 months, with 69 percent experiencing a misconfiguration incident. So while Kubernetes is designed for faster application development and release, the message to organizations looking to take advantage is clear: take the time to build a strong foundation first.

Bob Ritchie

Bob D. Ritchie is vice president of the software practice with responsibility for leading over 4,000 software engineers in support of executive-level project teams, providing technical direction and expertise for SAIC enterprise modernization initiatives. In addition, he established the Cloud One Community of Practice and holds workshops and information sharing sessions to foster deeper understanding in the broader community.

Ritchie joined SAIC in 2006 as a senior principal software engineer. He has led several Agile teams in developing, modernizing, migrating, and operating resilient, highly available, enterprise-scale software systems across the Navy, Marine Corps, Defense Logistics Agency, and Air Force.