
Szilárd Pfeiffer

Originally published at balasys.eu

Zero Trust: Is it anything new?

In theory, it isn’t particularly new. The term zero trust has been around for more than 25 years. De-perimeterisation, the main concept behind Zero Trust Architecture, was defined and promoted by the Jericho Forum, founded in 2004, and even the management of the risks associated with de-perimeterisation was discussed almost two decades ago. John Kindervag popularized the concept while he was at Forrester in 2009, and Google began implementing its own Zero Trust framework, referred to as BeyondCorp, in the same year. Even so, in practice Zero Trust should mean more than just marketing hype, especially given that Joe Biden has ordered that “the Federal Government must adopt security best practices; advance toward Zero Trust Architecture”. NIST’s publication (SP 800-207, Zero Trust Architecture) can serve as both a theoretical and a practical guideline that should be applied to achieve worthwhile changes. But what are these theories and practices, and why are they so important? Let’s take a look.

What is Zero Trust?

It should be pointed out that Zero Trust is not a product but a model. Though it can be facilitated by one or more products, it primarily necessitates a change in approach. Previously, the common wisdom was that a private network had definite perimeters with a small number of entry points, and the goal was to protect them. This way of thinking reflects the strategic approach of the late medieval and early modern period: the defense of an area with definite boundaries, with the assets concentrated behind the walls of a fortress. Both attacking and defending armies focused mostly on the entry points, just as red and blue teams focus on network defense tools in this castle-and-moat (network) security model. However, as was well known even in the medieval period, there is a much easier and more profitable way in than a siege, namely sabotage. In the castle-and-moat model, if authentication is circumvented at the entry point, there are no other mechanisms to prevent malicious activity, as the attacker is already inside the perimeter. “You are trusted if you are inside the perimeter” – this could be the motto of any malware developer. Zero Trust Architecture seeks to supersede this old-fashioned perimeter approach.
There have also been significant changes to the perimeter itself, which make the rise of Zero Trust quite timely. Previously, it was taken as a given that it was hard to obtain access to private resources from outside the private network, so successfully authenticated users could access any resource on it. In the age of virtual private networks (VPNs) and cloud services, private resources can be reached very easily from the internet, as there are no longer definite perimeters with just a small number of entry points. This means that the way we defend the assets of the organization should change. Any access to any resource by any user or machine must be authenticated and authorized, regardless of whether the resource is accessed from inside or outside the organization’s private network. Zero Trust means that without authentication, no trust is given at all. Access can be granted after successful authentication, but in a restricted manner, just like in real life. Network security is no different from other types of security: it uses the same tenets and learns from the history of them all, as discussed above.

Everything is a Resource

Zero Trust Architecture requires us to consider all data sources and computing services as resources, with no exceptions, even if the network is composed of multiple classes of devices. Practically speaking, this means there should be one or more control points (Policy Enforcement Points) in the network through which all network traffic passes and where policy can be enforced. As a result, the castle-and-moat security model is completely inadequate. With Zero Trust, there is no resource concentration and no definite perimeter, and the focus is on the traffic paths of the communication instead of the entry points.
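
To make this concrete, here is a minimal sketch of a Policy Enforcement Point in Python. All the names (PolicyEngine, Request, the sample rule) are invented for illustration; a real PEP would sit in-line with the traffic, for example as a proxy, and delegate each decision to a separate Policy Engine.

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject: str   # authenticated identity of the caller
    resource: str  # resource being accessed
    action: str    # e.g. "read" or "write"

class PolicyEngine:
    """Decides about each request on its own; illustrative rules only."""
    ALLOWED = {("alice", "payroll-db", "read")}

    def decide(self, request: Request) -> bool:
        # No implicit trust based on network location: every request is
        # evaluated against the policy, wherever it comes from.
        return (request.subject, request.resource, request.action) in self.ALLOWED

class PolicyEnforcementPoint:
    """All traffic passes through here; nothing bypasses the decision."""
    def __init__(self, engine: PolicyEngine):
        self.engine = engine

    def handle(self, request: Request) -> str:
        if self.engine.decide(request):
            return f"forwarded: {request.action} on {request.resource}"
        return "denied"

pep = PolicyEnforcementPoint(PolicyEngine())
print(pep.handle(Request("alice", "payroll-db", "read")))    # forwarded
print(pep.handle(Request("mallory", "payroll-db", "read")))  # denied
```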

As traffic paths can be controlled comprehensively in computer networks, there is no need to control only the entry point itself. Instead, the network should be segmented as much as possible and the segments separated from each other. This technique is known as micro-segmentation, as it creates several micro-perimeters, or segments, in the network. As the crossings between these micro-segments are controlled and transit is permitted only in a restricted manner, accurate authentication and authorization can be performed at the borders. The situation is the same as with real-life borders, except that there are no – or at least there shouldn’t be – green borders in computer networks. Lateral movement cannot be performed in the network, as it is no longer hierarchical, and there are no resources of distinct importance: all resources are treated equally, meaning access to every resource is verified independently of its classification, just as all passengers are authenticated at a border regardless of whether they are particularly important persons or not.
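
A minimal sketch of micro-segmentation as a default-deny rule set: a flow may cross from one segment to another only if an explicit rule permits it. The segment names, hosts, and rules below are invented for illustration.

```python
# Which micro-segment each workload belongs to (illustrative inventory).
SEGMENT_OF = {
    "web-01": "dmz",
    "app-01": "app",
    "db-01":  "data",
}

# Default deny: only explicitly listed crossings are permitted.
ALLOWED_CROSSINGS = {
    ("dmz", "app", 8443),   # web tier may call the app tier
    ("app", "data", 5432),  # app tier may reach the database
}

def flow_allowed(src_host: str, dst_host: str, dst_port: int) -> bool:
    src = SEGMENT_OF.get(src_host)
    dst = SEGMENT_OF.get(dst_host)
    if src is None or dst is None:
        return False  # unknown hosts get no implicit trust
    return (src, dst, dst_port) in ALLOWED_CROSSINGS

print(flow_allowed("web-01", "app-01", 8443))  # True
print(flow_allowed("web-01", "db-01", 5432))   # False: no direct path to data
```

Because lateral movement has to cross a controlled border, a compromised web server cannot reach the database directly; it would have to pass another authentication and authorization check first.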

Secure Communication

Secure communication is an essential part of the Zero Trust Security Model for several reasons. Secure communication provides confidentiality, integrity, and authenticity. Authenticity makes it possible for the communicating parties to identify each other, and also makes it possible for the Policy Engine to identify the source of the communication. The Policy Engine can then decide whether access to a resource can be granted to a given subject, and the decision is enforced at the Policy Enforcement Point. Confidentiality prevents a passive attacker from obtaining credentials or other valuable information by eavesdropping on the network, which could later be used in an active attack. Integrity ensures that the communication cannot be altered without the knowledge of the communicating parties, making it impossible to modify sensitive information, such as a bank account number or an invoice amount, to inject misleading information, or to phish with parts of the original content.
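
Mutually authenticated TLS provides all three properties at once. Below is a minimal server-side sketch using Python’s standard ssl module; the certificate and key paths are placeholders for your own PKI.

```python
import socket
import ssl

# Require clients to present a certificate signed by our CA, so both
# parties are authenticated (mutual TLS).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.pem", keyfile="server.key")  # placeholder paths
context.load_verify_locations(cafile="clients-ca.pem")                # placeholder CA
context.verify_mode = ssl.CERT_REQUIRED  # no client certificate, no session

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()
        # The verified peer certificate identifies the subject for the
        # Policy Engine; confidentiality and integrity come from TLS itself.
        print(conn.getpeercert()["subject"])
```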

Session-Based Access

According to the Zero Trust tenets, access to resources is granted in a session-based manner. Both authentication and authorization are session-based, and users must be granted only the level of access needed to fulfill their role, which means we must follow the principle of least privilege. A session-based approach guarantees a time limitation, as access to a resource is not necessarily granted in a subsequent session, or with the same privileges, since privileges should also be limited to those that are strictly necessary, session by session.
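
A minimal sketch of the idea, with an invented grant structure: access is issued per session, carries only the privileges that session needs, and expires on its own, so the next session must be authorized from scratch.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class SessionGrant:
    subject: str
    resource: str
    privileges: frozenset  # only what this session needs (least privilege)
    expires_at: float      # grants are time-limited by construction

def issue_grant(subject: str, resource: str, needed: set, ttl_s: int = 900) -> SessionGrant:
    # Authorization happens here, per session; nothing carries over.
    return SessionGrant(subject, resource, frozenset(needed), time.time() + ttl_s)

def is_permitted(grant: SessionGrant, resource: str, privilege: str) -> bool:
    return (
        grant.resource == resource
        and privilege in grant.privileges
        and time.time() < grant.expires_at  # expired grants are worthless
    )

grant = issue_grant("alice", "reports", {"read"})
print(is_permitted(grant, "reports", "read"))   # True, within this session
print(is_permitted(grant, "reports", "write"))  # False: never granted
```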

Strictly Enforced Authentication and Authorization

As has already been mentioned, the basic concept is that no one is trusted by default, whether inside or outside the network. Authentication and authorization are checked at each access request before access is granted to an organizational resource, though the question arises of how a user can be authenticated. The most commonly used authentication mechanisms are indirect, meaning they cannot supply direct evidence of the user’s identity, just certain factors such as something the user knows (knowledge), something the user has (possession), or something the user is (inherence), assuming the exclusivity of that knowledge, possession, or inherence. A single factor, like a password, might be compromised, but the probability of compromising multiple factors of different types is negligibly low, which is why it is so important to use multi-factor authentication.
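
As a sketch, multi-factor authentication simply means that every required factor must verify independently, so compromising one of them is not enough. The credential values below are invented, and a real system would use a proper password KDF such as scrypt or argon2 rather than bare SHA-256.

```python
import hashlib
import hmac

# Invented credential store entries for a single user.
SALT = b"per-user-random-salt"
PW_HASH = hashlib.sha256(SALT + b"correct horse").hexdigest()

def check_password(password: str) -> bool:
    """Knowledge factor: something the user knows."""
    candidate = hashlib.sha256(SALT + password.encode()).hexdigest()
    return hmac.compare_digest(candidate, PW_HASH)

def check_possession(code: str, expected: str) -> bool:
    """Possession factor: e.g. a one-time code from the user's token."""
    return hmac.compare_digest(code, expected)

def authenticate(password: str, code: str, expected_code: str) -> bool:
    # Each factor is verified on its own; a stolen password alone fails.
    return check_password(password) and check_possession(code, expected_code)

print(authenticate("correct horse", "123456", "123456"))  # True
print(authenticate("correct horse", "000000", "123456"))  # False
```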

One fundamental problem with identification by knowledge is that if the credential is unchanged over a long time, like a password or a certificate, and becomes compromised, it no longer identifies the user, yet the abuse is hard to detect. Credentials that change over a short period of time, such as a Time-based One-Time Password (TOTP), are one option, but this solution cannot solve the problem on its own, as an attacker who has stolen the shared secret, which is also a long-term credential, can generate a valid TOTP. However, combined with a possession-based factor, this can help to identify the human being rather than just their knowledge. This is especially true when the TOTP generator is a software or hardware token that can be accessed only after an inherence-based identification, such as unlocking a mobile device or a security token with a fingerprint.
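
For illustration, here is the core of TOTP (RFC 6238) using only Python’s standard library; the secret below is a placeholder. Note that anyone holding the shared secret can compute the same codes, which is exactly the long-term-credential weakness described above.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) over HOTP (RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period  # changes every `period` seconds
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Placeholder secret: in practice it is provisioned once and then guarded
# by the token device itself (unlocked by a fingerprint, for example).
print(totp("JBSWY3DPEHPK3PXP"))
```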

However, for the user, or the client in general, identification is just one factor in dynamic policies. The identification process can also encompass any associated attributes assigned to an account by the enterprise. Characteristics of the device used by the client, such as installed software versions, patch level, network or physical location, time and date of the request, and previously observed behavior, can also be part of the verification of a client and can also determine the applied policy. Behavioral attributes can be measured as well, and deviations can be checked against the observed usage patterns before access is granted to a particular resource. The granted access should also vary according to the sensitivity and classification of the resource and the conditions of the access. For instance, under certain circumstances only read-only access is granted to a particular resource, but after additional authentication, by a second or a third factor, read-write access can be provided. The situation is the same as in physical security, where entering a higher-classified place requires additional authentication. In network and data security, higher-classified data plays the role of such a location.
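
A minimal sketch of such a dynamic policy, with invented posture attributes and thresholds; a real Policy Engine would pull these signals from identity, device-management, and monitoring systems.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessContext:
    user: str
    device_patched: bool
    location: str  # e.g. "office", "home", "unknown"
    hour: int      # local hour of the request
    extra_factor_passed: bool

def decide(ctx: AccessContext) -> str:
    """Return the access level for this request: deny, read, or read-write."""
    if not ctx.device_patched or ctx.location == "unknown":
        return "deny"        # posture or location disqualifies the request
    if 9 <= ctx.hour <= 17 and ctx.extra_factor_passed:
        return "read-write"  # stronger proof earns broader access
    return "read"            # otherwise fall back to least privilege

ctx = AccessContext("alice", device_patched=True, location="office",
                    hour=datetime.now().hour, extra_factor_passed=False)
print(decide(ctx))
```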

Monitoring Devices in Real Time

Establishing a continuous diagnostics and mitigation (CDM) system is also a requirement of Zero Trust Architecture. Knowing the current security-related state of the network and the actors involved is essential, as restrictions should be applied to a client or a server as soon as a security issue can be assumed to be related to it. For instance, if a device runs a service with a remotely exploitable vulnerability that is currently unpatched, access to the affected service should be limited until the service is patched and the vulnerability mitigated. To be able to do that, the organization must also know that the security issue exists in the first place. This information can come from a CDM system, and it may imply a change in the dynamic policies mentioned earlier, both when the security issue is recognized and again when it is subsequently fixed.
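
As a sketch, a CDM feed can drive those dynamic policies directly: while a vulnerability is open for a service, access to it is restricted, and the restriction is lifted once the patch lands. The feed format and the CVE identifier below are invented for illustration.

```python
# Invented CDM feed: service -> list of currently unpatched vulnerabilities.
CDM_FEED = {
    "file-share": ["CVE-2099-0001 (remote code execution)"],
    "wiki": [],
}

def access_level(service: str) -> str:
    """Restrict services with open vulnerabilities until they are patched."""
    open_vulns = CDM_FEED.get(service)
    if open_vulns is None:
        return "deny"        # unknown services get no implicit trust
    if open_vulns:
        return "restricted"  # e.g. admin-only, until the fix is deployed
    return "normal"

print(access_level("file-share"))  # restricted: an unpatched RCE is reported
print(access_level("wiki"))        # normal
```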

The appearance of a new device on the network is a typical scenario where monitoring is essential, as rules must be applied to the traffic of the newly appeared device. Zero Trust requires that we do not trust a device just because it is inside the private network, so the rule could simply be a denial. However, it is also possible to open just one path that makes it possible for the user to register the device on the network, especially if it is a mobile or bring-your-own device, which may then access only a limited part of the network with limited privileges. Regardless of the applied policy, knowing that an unregistered device has appeared on the network and is trying to communicate is a must, as it could indicate legitimate use of the network, but also illegitimate, or at least suspicious, use.
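
A minimal sketch of that default-deny-plus-registration behavior, keyed on a device identifier; the inventory table is invented, and a real deployment would rely on something like 802.1X or an MDM inventory instead.

```python
KNOWN_DEVICES = {"aa:bb:cc:dd:ee:01": "laptop-alice"}

def alert(message: str) -> None:
    print(f"[CDM event] {message}")  # stand-in for a real alerting pipeline

def on_new_flow(device_id: str, destination: str) -> str:
    if device_id in KNOWN_DEVICES:
        return "apply-device-policy"
    # Unknown device: deny everything except the registration portal,
    # and always raise an event for the monitoring system.
    alert(f"unregistered device {device_id} tried to reach {destination}")
    if destination == "registration-portal":
        return "allow-registration-only"
    return "deny"

print(on_new_flow("aa:bb:cc:dd:ee:99", "file-share"))           # deny
print(on_new_flow("aa:bb:cc:dd:ee:99", "registration-portal"))  # registration only
```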

Not just the devices but also the network traffic they generate should be monitored. As part of incident management, all available information will be needed during an investigation, and even before any incident occurs, changes in resource access may indicate a security issue. For instance, requesting a higher level of privilege during resource access, such as asking for write permission instead of the ordinary read-only one; requesting access from an unusual network location, such as a foreign country the organization has no connection with; requesting it at an unusual time, such as at midnight in the case of a colleague who works 9-to-5; or trying to discover the network may all indicate the presence of malicious software. Such signals can generate input for the CDM and, for instance, cause the device to be quarantined to prevent the spread of ransomware.
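
A sketch of how such signals might be flagged, with a baseline and thresholds invented for illustration; a production system would learn the baseline from previously observed behavior rather than hard-coding it.

```python
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    privilege: str  # "read" or "write"
    country: str
    hour: int

# Invented baseline of a 9-to-5 colleague who normally reads from one country.
BASELINE = {"privilege": "read", "countries": {"HU"}, "hours": range(9, 18)}

def anomaly_signals(event: AccessEvent) -> list:
    signals = []
    if event.privilege != BASELINE["privilege"]:
        signals.append("higher privilege requested")
    if event.country not in BASELINE["countries"]:
        signals.append("unusual network location")
    if event.hour not in BASELINE["hours"]:
        signals.append("unusual time of access")
    return signals  # a non-empty list becomes CDM input, e.g. quarantine

print(anomaly_signals(AccessEvent("bob", "write", "XX", 0)))
```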

Conclusion

NIST does not articulate any network security requirements in its Zero Trust publications that had not already been articulated by others before, but it does so in a way that reaches not only C-level executives but also state leaders – as it has influenced the Biden Administration’s plans for strengthening US cybersecurity. Leading technology research firms, such as Gartner and Forrester, also promote the Zero Trust model, which makes the concept almost unavoidable on the providers’ side and generates hype around the topic. Beyond the business considerations, we should keep the basic statement of Zero Trust in mind: there can be attackers both inside and outside the organization, so we should never simply trust, but always verify and enforce the principle of least privilege.


Licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
