The concept of Zero Trust burst upon the computer security landscape about a decade ago when John Kindervag (then of Forrester) coined the term. Since then, a massive amount of investment has poured into creating Zero Trust architectures and product offerings, and today Zero Trust has become a new litmus test for secure system and network design. Some visions for Zero Trust are overly broad directives to “secure all the things”, and others are more tactical and actionable. This paper is of the latter sort. Specifically, we consider Zero Trust relative to the exploit kill chain.
Wikipedia has a nice article on the concept of a kill chain, which describes a unifying set of steps an attacker typically takes when compromising a target. These steps are summarized in the following diagram from the article:
Figure 1: Wikipedia pictorial of a kill chain (https://en.wikipedia.org/wiki/Kill_chain)
It is fair to say that if an attacker can move through the entire kill chain during the active exploitation of a networked target, then the defenders have thoroughly lost. So, of all the stages of the kill chain, where does a Zero Trust architecture help defenders the most? Which stages are irrelevant to it? Which can it help with, but only through non-obvious dependencies? Any full treatment of Zero Trust should try to answer these questions for every stage of the kill chain, but that is beyond the scope of this paper. Instead we will focus on just the first stage: reconnaissance. This stage is arguably the most important, since if an attacker cannot even see a target to attack, the odds of successful exploitation drop drastically. Further, if a security mechanism could only stop, say, the “target manipulation” phase late in the kill chain but could not help defend against any earlier phase, would it have much value? Probably not. Stopping exploitation as early as possible in the kill chain is what helps network defenders most.
As illustrated above, the very first step in successful exploitation is target reconnaissance. An attacker in possession of, say, a zero-day exploit for a particular service must first identify a set of targets running that service. In a networking context this is usually accomplished by scanning network ranges for matching services bound to a predetermined port or set of ports.
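To make this concrete, a single Nmap command is enough to sweep an entire network range for one exposed service. The command below is only a sketch: the RFC 5737 address range and the choice of port 443 are illustrative and not taken from any real engagement.

[scanner]# nmap -n -Pn -sS -p 443 --open 203.0.113.0/24

The -n and -Pn options skip DNS resolution and host discovery, -sS performs a TCP SYN scan, and --open limits the report to hosts where the port actually responds.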
Note: We assume an attacker is not already in a privileged position of being able to sniff traffic to/from a target, and we also exclude client-side exploitation from this discussion. These concepts will be dealt with in future articles.
Computer defense and offense alike have fully automated and optimized the reconnaissance phase. On the open source side, Nmap is certainly the most popular network scanner, with advanced features such as service version detection, OS fingerprinting, a powerful scripting engine, and more. There are also other interesting projects such as Masscan, Unicornscan, and ZMap that optimize for particular reconnaissance problems; Masscan, for example, optimizes for rapid large-scale network scanning. On the offensive side, the size and effectiveness of large botnets such as Mirai (and more recent variants) are partially a function of how effectively targets can be found through active scanning. Mirai became one of the most powerful botnets of all time because it was able to scan for and compromise vast numbers of Internet-connected IoT devices that still had default administrative credentials enabled. Regarding the kill chain, the main point is that the reconnaissance phase is essentially a solved problem: attackers have access to extremely good automation that makes reconnaissance a snap.
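To get a feel for the scale involved, consider a Masscan invocation along the following lines. This is a sketch only; the target range and packet rate are placeholders.

[scanner]# masscan -p23,2323 --rate 100000 10.0.0.0/8

Ports 23 and 2323 are the telnet ports Mirai famously swept for, and Masscan’s asynchronous transmission engine makes covering huge address ranges at rates like this quite feasible.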
Scanners have brilliantly automated the discovery process for network services, but typically this means TCP services. UDP presents a special challenge to effective scanning: UDP services are harder to scan for than their TCP counterparts because a UDP service is never under any obligation to answer an incoming datagram. It may answer, but only at the discretion of the application riding on top of the UDP socket. TCP services, by contrast, have a built-in scannable architecture because the three-way handshake required to establish a bidirectional connection takes place within the TCP stack itself, before the higher-level application has anything to say. That is, for an incoming TCP SYN packet, a non-filtered TCP service will respond with a SYN/ACK, and this behavior (among other technicalities, such as what a TCP stack does upon receipt of an orphaned FIN packet) is what scanners use to find new targets. This distinction between UDP and TCP is critically important in the context of Zero Trust, as we will see below.
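A quick way to see this asymmetry from the scanner’s side is to compare scan types against the same host. The commands below are a sketch; the target address and port choices are illustrative.

[scanner]# nmap -n -Pn -sS -p 22,443 203.0.113.10    # TCP SYN scan
[scanner]# nmap -n -Pn -sU -p 53,40004 203.0.113.10  # UDP scan

For the TCP scan, a SYN/ACK means “open” and a RST means “closed”, so silence can only mean “filtered”. For the UDP scan, silence is the normal case even for a live service, which is why Nmap so often has to settle for the ambiguous “open|filtered” verdict we will see below.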
If Zero Trust means anything at all, it should certainly require that networked services not be scannable by arbitrary attackers. That is, the essence of Zero Trust ought to be simply “don’t talk to strangers”. This applies to pre-existing services that are protected by a Zero Trust architecture, and (most importantly) to the provider of Zero Trust itself. The provider might take the form of a centralized controller that hardens the local network with a Zero Trust stance and on-boards users so they can make use of services and applications in that hardened posture. At a minimum, wouldn’t it be strange for a vendor to claim a Zero Trust architecture whose controller listens on a TCP socket and is therefore scannable by unauthenticated parties? What we need, then, is a Zero Trust provider that emphasizes stealth under active scanning, and this in turn implies a strict authentication protocol riding on top of UDP. The chief property of this protocol must be that no unauthenticated data is ever acknowledged or responded to. Pre-existing TCP services can continue to operate normally as long as access to them is mediated and encapsulated within such a protocol (a rough packet-filter sketch of this idea follows below). Fortunately, we do not have to reinvent the wheel: there are several UDP-based protocols that make this style of network communication possible. In the “rising star” category is the WireGuard VPN tunnel.
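As a rough sketch of what “mediated and encapsulated” can look like at the packet-filter level, consider a host that runs a WireGuard-style tunnel on UDP port 40004 and a web service on TCP port 443. The interface name wg0, the addresses, and the port numbers are illustrative assumptions, not a prescription:

[target]# iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
[target]# iptables -A INPUT -p udp --dport 40004 -j ACCEPT      # tunnel endpoint; silent to non-peers
[target]# iptables -A INPUT -i wg0 -p tcp --dport 443 -j ACCEPT # TCP service reachable only via the tunnel
[target]# iptables -P INPUT DROP

With a policy along these lines, the only thing an unauthenticated scanner can reach is a UDP port that never answers, while the TCP service keeps working normally for authenticated peers inside the tunnel.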
WireGuard has many compelling security properties, such as an auditable code base of only around 4,000 lines, use of the Noise protocol framework, and a formally verified cryptographic and communications model. In addition, WireGuard is blazingly fast, has been accepted into the Linux kernel as of version 5.6, and its adoption curve is off the charts, especially considering Google’s backing via Android integration. These are all wonderful, but for our purposes the most important facet of WireGuard is that stealth has been a core design goal of the project from the beginning. WireGuard runs over UDP, and non-acknowledgement of unauthenticated data is strictly enforced. There is no Nmap fingerprint for actively detecting WireGuard endpoints, and there never will be one: an attacker who does not possess a configured WireGuard peer public/private key pair cannot cause WireGuard to respond to any probe. In other words, only valid WireGuard peers can cause a WireGuard target to emit traffic over the network. For a scanned target system, Nmap cannot tell the difference between WireGuard listening on a UDP port and a port that is completely filtered by a firewall or network ACL that drops the incoming scans on the floor.
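For readers who have not set up WireGuard before, the moving parts are small. The sketch below shows a minimal wg-quick style configuration for one peer; the keys are placeholders, and the interface name, addresses, and listen port are illustrative choices rather than anything mandated by WireGuard:

# /etc/wireguard/wg0.conf -- minimal sketch
[Interface]
PrivateKey = <server private key, generated with "wg genkey">
ListenPort = 40004
Address    = 10.100.0.1/24

[Peer]
PublicKey  = <client public key>
AllowedIPs = 10.100.0.2/32

Bringing this up with “wg-quick up wg0” yields an endpoint where only a sender whose key material matches a PublicKey listed in a [Peer] section can elicit any response; every other UDP datagram arriving on port 40004 is silently dropped.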
To illustrate, in the scan below WireGuard is listening on UDP port 40004 on the target (192.168.10.1, a Linux box), while UDP port 40005 is filtered with iptables. Here is what Nmap says about both ports:
[scanner]# nmap -P0 -n -sU -p 40004,40005 192.168.10.1
Starting Nmap 7.60 ( https://nmap.org ) at 2021-01-26 03:25 UTC
Nmap scan report for 192.168.10.1
Host is up.
PORT      STATE         SERVICE
40004/udp open|filtered unknown
40005/udp open|filtered unknown
Nmap done: 1 IP address (1 host up) scanned in 3.10 seconds
In other words, Nmap cannot tell whether any service is available at all. It simply gets no data back from the target under the UDP scan, and because a UDP service is not required to volunteer any return communications (unlike an unfiltered TCP service), Nmap is forced to conclude that both ports are “open|filtered”.
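Incidentally, the “filtered” behavior on port 40005 in a setup like this takes only a single drop rule; something along these lines (a representative sketch, not necessarily the exact rule on the target above) is enough:

[target]# iptables -A INPUT -p udp --dport 40005 -j DROP

From the scanner’s point of view, that dropped port and the live WireGuard listener on port 40004 are indistinguishable.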
Now, you may be thinking, “but I have a VPN for my network, and no external systems can see my internal services without first getting onto the VPN”. That’s great, but have you scanned the VPN itself from an external source? A casual examination of the Nmap scanning database shows many fingerprints for identifying VPN products and protocols. Without naming names, what can we conclude from this? Certainly not all VPNs can be identified through active scanning, but it is also fair to say that many VPNs were not designed from the ground up with stealth as a core design goal.
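This is easy to verify on your own scanner. Nmap ships its service probe database and NSE scripts alongside the binary; the paths below are common defaults and may differ on your system, but a quick search turns up plenty of VPN-related material:

[scanner]# grep -ic vpn /usr/share/nmap/nmap-service-probes
[scanner]# ls /usr/share/nmap/scripts/ | grep -E 'ike|pptp'

Scripts such as ike-version and pptp-version exist precisely because those VPN protocols respond to unauthenticated probes in fingerprintable ways.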
We’ve seen that reconnaissance is a critically important phase of network exploitation: it anchors the kill chain attackers use to wreak havoc on targets. If stopping the reconnaissance phase of the kill chain matters, then Zero Trust architectures should be built on top of protocols that have non-scannability as a core part of their design. A stealthy protocol by itself does not constitute a Zero Trust architecture, but as the security community continues to build out Zero Trust concepts (and products), solid foundations require the use of such protocols. In terms of cutting the reconnaissance phase off at the knees, WireGuard has it where it counts. Let’s build from there.