Cloud computing and agility on local machines have evolved over the years with respect to the operating systems used to run applications. Earlier, applications could only run on a computer's host operating system; later, virtual machines enabled us to run a guest OS on top of the host OS. Now we have containers, where all the required dependencies are packaged together with the application, letting it run without depending on the underlying OS. In other words, developers can take an application configured on macOS and run it on Windows just by starting the container on Windows, saving time by avoiding reconfiguration.
If you have recently started learning about virtualization tools, you might wonder what the difference is between two frequently compared technologies: containers and virtual machines. So before getting in-depth with container security, it is essential to understand the basic difference between the two and how virtualization works.
Virtualization is the process of using software to build an abstraction layer over computer hardware that allows the hardware elements of a single computer to be divided into multiple virtual computers. The software used here is called a hypervisor: a thin layer that enables multiple operating systems to run simultaneously while sharing the same computing resources.
A hypervisor, sometimes called a virtual machine monitor (VMM), is an emulator that creates and runs virtual machines (VMs). It isolates the host operating system and resources from the virtual machines and enables the creation and management of those VMs. The computer on which a hypervisor runs virtual machines is called the host machine, and each VM is called a guest machine.
Virtual machines are the technology used to build virtualized computing environments. A VM is an emulation of a physical computer, allowing a team to run multiple machines with multiple operating systems on a single computer. The hypervisor keeps the VMs separate from each other and allocates memory, processor, and storage resources to them.
On the other hand, containers are a lightweight, portable, and more agile way of handling virtualization. Because containers do not use a hypervisor, you get rapid availability and faster resource provisioning for new apps. With containers, rather than virtualizing the underlying computer like a VM, only the operating system (OS) is virtualized.
Containerization packages everything required to run an individual application or microservice together. It includes all the code and dependencies, allowing an application to run anywhere. Simply put, it leverages features of the host operating system to isolate processes and to control each process's access to memory, CPUs, and disk space.
Every operating system installs its own kernel, which is responsible for handling low-level tasks such as disk management, memory management, and task management; this is true of both the host and guest OS. Containers, however, share the kernel of the host operating system. This kernel sharing is the main reason cyberattacks in cloud computing environments so often aim to gain access to the host OS kernel, and the following steps can be used to enhance container security and prevent such scenarios.
Containers provide only lightweight isolation from the host operating system and from other containers within the cluster. This results in a weaker security boundary compared to virtual machines. Because of this weak boundary there can be several vulnerabilities, and if hackers gain access to the host machine, they can easily acquire access to the entire application, leading to a huge loss of data and resources.
Container security, or system hardening, is the preservation of a container's integrity. It must cover everything from the application to its infrastructure, and it should be continuous and integrated. For system hardening, a development team must work towards securing the container pipeline, the application, the container deployment environment, and the infrastructure by integrating security tools and strengthening existing security policies.
One thing to remember here is that container security cannot simply be copied and pasted; it must be designed in the context of the application. It is essential to know the threat models of your project.
The process of container security is continuous. It needs to be embedded in the development process and automated to cover the operation and maintenance of the underlying infrastructure while removing manual touchpoints. Implementing security in the continuous integration and continuous delivery (CI/CD) life cycle will help your business diminish risks and vulnerabilities across the ever-growing cyber attack surface.
Some of the major concerns while securing containers are:
- Securing the container host
- Security of the network
- The build pipeline
- Ensuring the container runtime security
- Securing the orchestrator configuration
- Protecting the Data
- Securing the Cloud
So let’s get into the details of each concern to secure the containers.
To secure the host where containers are running, choose an appropriate operating system. If possible, select a minimal, container-optimized distribution that is efficient at running containers. When using stock Microsoft Windows or Linux distributions, make sure to disable or remove unnecessary services and otherwise harden the operating system against attack.
Besides that, integrate a layer of security and monitoring tools, such as an intrusion prevention system or application control, to ensure that the host is working properly. Once a container starts running in the production environment, it will come in contact with other resources or containers. If the host file system is accessible from containers, make sure to lock down the relevant permissions (volume, write, or exec) on the host. Use seccomp or SELinux to restrict host syscall access and isolate containers from each other and from the host.
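As an illustration of syscall restriction, a minimal seccomp profile (the filename `restricted.json` here is a hypothetical example) denies every syscall by default and allowlists only the handful the container actually needs:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

With Docker, such a profile is applied at launch with `docker run --security-opt seccomp=./restricted.json <image>`; real workloads need a longer allowlist, so profiles are usually derived by observing which syscalls the application makes.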
Securing the network is very significant, as a lot of attacks happen through the network. To do so, you can firewall the internet-facing workloads; that is, any internet-facing service should be placed behind a firewall to ensure its security. Moreover, using network policies you can lock down Layer 3/4 services: by default, unnecessary container communication should be disallowed in the cluster, and only the services that need to interact with other services should be allowlisted. Another thing to take care of is creating granular Layer 7 policies using a service mesh such as Istio or Consul Connect, which decides which services are allowed to make what type of HTTP request to which other services.
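The default-deny approach can be sketched as a Kubernetes NetworkPolicy; this hypothetical manifest (the `prod` namespace is an assumption) selects every pod and allows no ingress or egress, so each service must then be allowlisted with its own, narrower policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}        # empty selector = applies to every pod in the namespace
  policyTypes:
    - Ingress            # deny all incoming traffic by default
    - Egress             # deny all outgoing traffic by default
```

Note that denying all egress also blocks DNS lookups, so in practice a follow-up policy allowing DNS traffic is usually added alongside the service-specific allowlists.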
Moreover, you can use mutual TLS (mTLS) to connect containers for workload communication while encrypting the interactions. And lastly, log all unsuccessful connection attempts for better security.
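In application code, mutual TLS means the server both presents its own certificate and requires one from the client. A minimal sketch using Python's standard `ssl` module (the certificate file names in the usage comment are hypothetical placeholders):

```python
import ssl

def build_mtls_server_context(certfile=None, keyfile=None, ca_file=None):
    """Build a server-side TLS context that requires a client certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # This is what makes it *mutual* TLS: the client must present a
    # certificate signed by a CA we trust, or the handshake fails.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)   # CA that signs client certs
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # server identity
    return ctx

# Hypothetical usage once certificates have been issued:
#   ctx = build_mtls_server_context("server.crt", "server.key", "ca.crt")
#   tls_sock = ctx.wrap_socket(plain_sock, server_side=True)
```

In a service mesh, a sidecar proxy performs this handshake for you and rotates the certificates automatically, which is why meshes are the common way to roll out mTLS across many containers.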
Attackers have started to shift their attacks towards the initial stages of the CI/CD pipeline. If an attacker gains access to your build pipeline, code repository, or workstations, it becomes easier for them to reside in your environment for a long period. So, needless to say, you need strong security controls over the build.
You can start with verifying the source of the build image, scanning images for common vulnerabilities and exposures (CVEs), scanning configuration files with security and compliance checkers, doing static analysis of the code and its dependencies, and limiting the number of base images. You should also apply the principle of least privilege, allowing only as much access as is needed to fulfil a given task while auditing that access frequently.
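Some of these build-time checks can be automated with simple pattern matching. A toy sketch (not a substitute for real scanners such as Trivy or Hadolint) that flags a few risky Dockerfile patterns, like running as root or using an unpinned base image:

```python
import re

# Illustrative rules only; production linters ship far richer rule sets.
RULES = [
    (re.compile(r"^FROM\s+\S+:latest\b", re.I), "base image pinned to :latest"),
    (re.compile(r"^FROM\s+[^:@\s]+\s*$", re.I), "base image has no tag or digest"),
    (re.compile(r"^USER\s+root\b", re.I), "container runs as root"),
    (re.compile(r"^ADD\s+https?://", re.I), "ADD from a remote URL"),
]

def lint_dockerfile(text):
    """Return a list of (line_number, warning) for risky patterns."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line.strip()):
                findings.append((lineno, message))
    return findings

sample = """FROM ubuntu:latest
USER root
ADD https://example.com/tool.sh /tool.sh
"""
for lineno, message in lint_dockerfile(sample):
    print(f"line {lineno}: {message}")
```

Run in CI, a check like this fails the build before a risky image ever reaches the registry, which is exactly where pipeline security controls are cheapest to enforce.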
This is a relatively new dimension compared with virtual machine security. Container runtime security is important, and it depends on which runtime (Docker or a Kubernetes CRI implementation) you are using. When you adopt a container runtime, it is important to review its default security configuration and to ensure your security policy spans the different runtimes in a hybrid environment.
Additionally, restrict the use of privileged containers (those having root access to the host) by allowing them only on an as-needed basis, and further limit access to the container runtime daemon and its APIs.
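A small sketch of how such a policy check might look in CI, assuming pod manifests have already been parsed into Python dicts (in production, admission controllers or policy engines such as OPA Gatekeeper or Kyverno enforce this properly):

```python
def find_privileged_containers(pod_spec):
    """Return names of containers that request privileged mode or run as root."""
    offenders = []
    for container in pod_spec.get("containers", []):
        sec = container.get("securityContext", {})
        # Flag privileged mode and explicit root (UID 0) requests.
        if sec.get("privileged") or sec.get("runAsUser") == 0:
            offenders.append(container["name"])
    return offenders

pod = {
    "containers": [
        {"name": "app", "securityContext": {"runAsUser": 1000}},
        {"name": "debug", "securityContext": {"privileged": True}},
    ]
}
print(find_privileged_containers(pod))  # -> ['debug']
```

Failing the pipeline when this list is non-empty turns "allow privileged containers only on a need basis" from a guideline into an enforced rule.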
From a security perspective, the container orchestrator, such as AWS ECS or Kubernetes, is often neglected. To secure the orchestrator's configuration, make sure that cluster-level policies go through a review and approval process by experienced managers, and secure orchestrator API access by using Role-Based Access Control (RBAC) together with access control lists or network policies.
Be aware of all the third-party plugins in the system and minimize what they have access to. Use RBAC to limit which APIs and operations third-party apps can invoke. Scan application manifests for security issues in continuous integration (CI).
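As a sketch of RBAC in Kubernetes, a hypothetical read-only Role and its binding restrict a third-party plugin's service account (the names `pod-reader`, `third-party-plugin`, and the `prod` namespace are illustrative) to viewing pods in a single namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: prod
rules:
  - apiGroups: [""]          # core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only; no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: plugin-pod-reader
  namespace: prod
subjects:
  - kind: ServiceAccount
    name: third-party-plugin   # hypothetical plugin service account
    namespace: prod
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because a Role (unlike a ClusterRole) is namespaced, the plugin gains nothing outside `prod` even if its credentials leak.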
Data is as important as any other part of the container. To implement proper data protection, you can use filesystem encryption for container storage and grant write or execute access only to the containers that must modify data in a particular host path. Scan images for sensitive data such as private keys and tokens in CI. You should also limit syscalls to storage to block runtime privilege escalation, while logging all attempts to access sensitive data.
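Scanning images or source trees for leaked secrets can likewise be sketched with a few regular expressions; real scanners such as gitleaks or truffleHog use far richer and better-tuned rule sets, so the patterns below are illustrative only:

```python
import re

# Illustrative patterns; the AWS key here is a fake example value.
SECRET_PATTERNS = {
    "private key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "AWS access key id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer token": re.compile(r"(?i)\bauthorization:\s*bearer\s+\S+"),
}

def scan_for_secrets(text):
    """Return the names of secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

leaked = "config = {'aws_key': 'AKIAABCDEFGHIJKLMNOP'}"
print(scan_for_secrets(leaked))  # -> ['AWS access key id']
```

Hooked into the CI stage that builds the image, a scan like this catches credentials before they are baked into a layer, where they would otherwise persist in the registry even after being "deleted" in a later layer.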
Securing containers requires a thorough security approach. Ensure that the approach can be automated to fit DevOps processes. With the increase in cyber-attacks, security can no longer be overlooked, and developing an app while addressing all the security concerns from the start simplifies the work process. If you are a business looking to develop an app with high security, get in touch with a highly experienced development team.