
With the emergence of containers, Software as a Service, and Functions as a Service, the focus is on consuming existing services, functions, and container images in the race to provide new value.
Scott McCarty, Principal Product Manager, Containers at Red Hat, says that focus has both advantages and disadvantages. “It allows us to focus our energy on writing new application code that is specific to our needs, while shifting the concern for the underlying infrastructure to someone else,” says McCarty. “Containers are in a sweet spot, providing enough control while offloading a lot of tedious infrastructure work.”
But containers can also introduce security risks. McCarty shares his insights on this topic:
Do you see organizations sacrificing security for convenience?
McCarty: Some organizations do sacrifice security for convenience. Often it’s not a conscious choice; they do it without realizing it. As the world’s bias leans toward consumption, we lose track of some of the underlying details, and those details often include security. Developers will pull container images from public container registries without so much as an agreed set of criteria for judging their quality. Developers have no tools to verify that they are getting what they think they are getting. There is a perception that moving quickly can increase security, but moving quickly can also expose new security vulnerabilities, and data only has to be stolen once. Speed cannot recover that data once a hacker steals it.
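One narrow safeguard that does exist at the image level is pinning an image by digest and verifying the digest before use; it confirms you received exactly the bytes you expected, though not that those bytes are trustworthy. The sketch below is illustrative only, assuming the skopeo CLI is installed; the image name and pinned digest are placeholders.

```python
import json
import subprocess

# Placeholder image and digest; pin the real digest at review time.
IMAGE = "docker.io/library/alpine:3.18"
PINNED_DIGEST = "sha256:" + "a" * 64

def registry_digest(image: str) -> str:
    """Look up the current registry digest via `skopeo inspect`."""
    out = subprocess.run(
        ["skopeo", "inspect", f"docker://{image}"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["Digest"]

digest = registry_digest(IMAGE)
if digest != PINNED_DIGEST:
    raise SystemExit(f"digest mismatch for {IMAGE}: got {digest}")
print(f"{IMAGE} matches its pinned digest")
```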
How can organizations minimize risks?
McCarty: It’s fine to have biases and focus areas. That is what helps us compete well in the marketplace. But it means that somebody else has to pay attention and focus on the risk. Open source is great, but legal and security risk cannot be shifted to upstream projects. Instead, organizations can minimize risk by shifting it to vendors. In this way, organizations can move fast, minimize risk, and focus on their own business problems. This is especially true with container platforms. Performance, security, patching, and security research can be shifted to the platform, which, in Red Hat’s case, means Red Hat OpenShift and Red Hat Enterprise Linux.
Is there a way to lock down an entire container supply chain? How can this be done?
McCarty: Not completely, but risk can be minimized. The container supply chain includes what’s inside the container (developer code, Linux libraries), as well as the container host (Linux kernel) and the orchestration layer (Kubernetes). The platform (host and orchestration) is fairly straightforward to lock down. It can be secured by following traditional security procedures and best practices, such as making sure SELinux is in enforcing mode.
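As a concrete illustration of that kind of platform check, here is a minimal sketch (not from the interview) that fails a host baseline when SELinux is not in enforcing mode; it assumes the standard getenforce utility is on the PATH.

```python
import subprocess

def selinux_enforcing() -> bool:
    """Report whether SELinux is enforcing, via the `getenforce` utility."""
    mode = subprocess.run(
        ["getenforce"], check=True, capture_output=True, text=True,
    ).stdout.strip()
    return mode == "Enforcing"

if not selinux_enforcing():
    raise SystemExit("SELinux is not enforcing; host fails the security baseline")
print("SELinux is enforcing")
```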
The converged supply chain in container images means that code can come from three major sources:
- Code consumed from a vendor or Linux distribution (RHEL openssl libraries, glibc, etc.)
- Code downloaded from upstream libraries or services (Ruby, Node.js, Java, MySQL container image, etc.)
- Code written by in-house developers
Source #1 is fairly easy to track and verify. This includes libraries in the container image that in turn come from a Linux distribution. Things like glibc and openssl can be part of the Linux distribution in the container image. The provenance, security, performance, reliability, and quality of these libraries come from the base image. However, it’s important to understand the support lifecycle for the distribution you choose. Be sure you understand how quickly patches for newly discovered vulnerabilities will be made available to you.
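For source #1, provenance checks can start with the image metadata itself. The sketch below is a hypothetical example rather than a prescribed procedure: it reads a base image’s labels with skopeo, and which labels are present varies by vendor.

```python
import json
import subprocess

# Hypothetical base image; substitute the one your builds actually use.
BASE_IMAGE = "registry.access.redhat.com/ubi9/ubi:latest"

def image_labels(image: str) -> dict:
    """Fetch image metadata labels via `skopeo inspect`."""
    out = subprocess.run(
        ["skopeo", "inspect", f"docker://{image}"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out).get("Labels") or {}

labels = image_labels(BASE_IMAGE)
# "vendor", "name", and "version" are common label names, not a fixed standard.
for key in ("vendor", "name", "version"):
    print(f"{key}: {labels.get(key, '<not set>')}")
```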
Source #2 is a bit more difficult. Until now, developers have built trust in upstream libraries and services by simply using them. If the libraries or services worked, developers considered them trustworthy. This can include downloading a database container image or a JSON processing library from GitHub. Historically, this worked decently well, but many in the security community saw the potential for abuse here.
Recently, attackers have targeted container repositories on Docker Hub (a Docker Hub hack exposed data of 190,000 users) and code repositories on GitHub (a Canonical GitHub account was hacked, though the Ubuntu source code was safe). With Advanced Persistent Threats (APTs) becoming more common, open-source projects need to think about fellow developers submitting pull requests that appear benign but actually introduce Trojans or viruses into their code.
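A common countermeasure for these download-time risks, implied rather than named above, is to verify every fetched artifact against a checksum published by the upstream project before building it into an image. A minimal sketch, with a placeholder file name and hash:

```python
import hashlib

# Placeholders; use the real artifact and the publisher's advertised sha256.
ARTIFACT = "libfoo-1.2.3.tar.gz"
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: str) -> str:
    """Stream the file through sha256 so large artifacts are not held in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(ARTIFACT) != EXPECTED_SHA256:
    raise SystemExit(f"checksum mismatch for {ARTIFACT}; do not use it")
print(f"{ARTIFACT} checksum verified")
```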
Source #3 is likely the trickiest. This includes mistakes made by developers who do not code defensively or do not verify the trustworthiness of underlying container images or open-source libraries. Developers may focus on moving quickly rather than moving carefully, which compounds the problems from source #2.
The risks from sources #2 and #3 cannot be addressed with signing and verification of container images alone. Instead, the underlying code needs to be analyzed. An entire ecosystem of code scanning has grown up alongside containers. To truly secure the supply chain while maintaining agility, users should start with a trusted source for base images (source #1), then investigate tools to scan code (sources #2 and #3).
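As one way to wire such scanning into a build pipeline, the sketch below shells out to Trivy, one open-source image scanner among many (the interview does not name a specific tool), and fails the build on any critical finding. It assumes the trivy CLI is installed and that its JSON report nests findings under Results and Vulnerabilities.

```python
import json
import subprocess

IMAGE = "docker.io/library/alpine:3.18"  # hypothetical image under review

# Run the scanner and parse its JSON report.
report = json.loads(subprocess.run(
    ["trivy", "image", "--format", "json", IMAGE],
    check=True, capture_output=True, text=True,
).stdout)

critical = [
    vuln
    for result in report.get("Results", [])
    for vuln in (result.get("Vulnerabilities") or [])
    if vuln.get("Severity") == "CRITICAL"
]
if critical:
    raise SystemExit(f"{len(critical)} critical vulnerabilities found in {IMAGE}")
print(f"no critical vulnerabilities found in {IMAGE}")
```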
What advice would you give software developers and software vendors?
McCarty: Investigate all the sources of your code: the base images and the underlying libraries that come with them, the code downloaded from public container image and code repositories, and the code written by in-house developers. All of these layers need to be considered. Also, be sure to make use of the runtime security capabilities of your Linux host. While moving quickly is critical in the modern economy, it’s neither easy nor cheap. Investing in quality base images and scanning solutions may be expensive, but together these tools allow you to move quickly while maintaining security.