16. Three Cybersecurity Dogmas That Are Just Wrong

Security is a tough discipline. Doing it well requires focus, so that we don't spin our wheels, waste effort and budget, or get distracted by the latest hype. It is unfortunate, therefore, that we often get caught up in dogmas we repeat to ourselves without examining whether they are correct or even useful. Here are three such dogmas that are just plain wrong.

1. Defenders Have to be Right All the Time, Attackers Only Once

If this were ever true, it certainly isn't today. A quick look at the MITRE ATT&CK framework makes clear that attacks have many stages, from reconnaissance to initial access to execution and persistence, just to gain a first foothold in a defender's landscape. Before a threat actor gets to actual data collection and exfiltration, or to a ransomware attack, they need to get a lot of things right – all of which could potentially be detected.

A layered defense that presents multiple obstacles also means that the defender does not have to get it right all the time. Despite a vulnerability in a container, a misconfiguration in the network architecture, or an open RDP port, the defender should still have multiple opportunities to detect malicious activity before the attacker reaches the crown jewels. Lateral movement, privilege escalation, and the creation of rogue resources or user accounts all give opportunities to detect an attack in progress, and as long as an attacker has no access to a key management system (KMS), they may never get to encrypted data or into databases.
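
To make one of those detection opportunities concrete, here is a minimal sketch that flags recently created user accounts in AWS IAM using boto3. The API calls are real, but the 24-hour review window and the print-based alert are illustrative assumptions; a real deployment would more likely consume CloudTrail events than poll like this.

```python
# Minimal sketch: flag IAM users created in the last 24 hours, one of the
# "rogue user account" detection opportunities mentioned above.
# Assumes AWS credentials are already configured for boto3; the window
# and the print-based alerting are illustrative choices, not a standard.
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

# ListUsers is paginated; walk all pages and check creation timestamps.
paginator = iam.get_paginator("list_users")
for page in paginator.paginate():
    for user in page["Users"]:
        if user["CreateDate"] > cutoff:
            print(f"Review: IAM user {user['UserName']} created {user['CreateDate']}")
```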

The dogma ignores the defenders' advantages and the obstacles attackers must overcome. In practice, attackers need to be right all the time, while defenders have multiple opportunities to stop them.

2. No Security Through Obscurity

This is often repeated, and as a result it prevents us from reaping the benefits of obscurity as part of a layered defense. Run an SSH server open to the public internet on port 22 and it will be hammered constantly by automated scripts. Run it on a high, random port and it will see virtually no traffic. Only very persistent threat actors focused on a particular target will bother to scan the full port range.

And if defenders are diligent enough to run SSH on a high, randomly chosen port, logs showing failed logins present a far more valuable and reliable alert than the noise that comes with SSH on port 22.
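
As a sketch of how simple that alert can be: move sshd to a high port with the standard Port directive in /etc/ssh/sshd_config (the directive is real; any specific port number would be an arbitrary choice), then watch the auth log for failures. The log path and message format below assume a Debian/Ubuntu-style syslog setup and vary by distribution; the script needs read permission on the log.

```python
# Minimal sketch: tail the SSH auth log and flag failed logins.
# Assumptions: Debian/Ubuntu-style /var/log/auth.log and the standard
# "Failed password" message; both vary by distribution and syslog setup.
import re
import time

AUTH_LOG = "/var/log/auth.log"  # assumed log location
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def follow(path):
    """Yield lines appended to the file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)  # jump to end of file; only watch new entries
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line

for line in follow(AUTH_LOG):
    match = FAILED.search(line)
    if match:
        user, source = match.groups()
        # On a high, randomly chosen port this should almost never fire,
        # which is exactly what makes it a high-signal alert.
        print(f"ALERT: failed SSH login attempt for {user!r} from {source}")
```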

3. Zero Trust Network Architecture

This is possibly the most controversial dogma of all, as it seems the entire industry has lost its mind over it. NIST SP 800-207, the relevant Zero Trust standard, says:

"Zero trust focuses on protecting resources (assets, services, workflows, network accounts, etc.), not network segments, as the network location is no longer seen as the prime component to the security posture of the resource."

So, it correctly starts with the premise that the network cannot be trusted... and yet it spends most of the document discussing network controls, trying to re-establish trust in the network.

ZTNA tries to fix IAM, application context, and network security all at the same time by adding a new policy control overlay in the network to implement the user access controls we should already have at the application level. But why? Didn't we just declare the network untrusted? Especially in a cloud landscape, you may even end up creating network connections that don't need to exist. Why block, at the network level, a user's access to a resource they already cannot reach at the application level? The real problem is IAM, and that is hard enough. We can manage IAM and application context with workload identities for service accounts, as sketched below. Why complicate it further by adding the network back in?
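
For illustration, here is a minimal sketch of what access control at the application level can look like: verify the caller's workload identity token and check an authorization claim inside the application, with no dependence on network location. The issuer, audience, and "roles" claim below are hypothetical stand-ins; the real claims depend on your identity provider. It uses the PyJWT library.

```python
# Minimal sketch: application-level authorization with a workload identity
# token, independent of network location. The issuer, audience, and "roles"
# claim are hypothetical; real claims depend on your identity provider.
import jwt  # PyJWT

AUDIENCE = "https://billing.example.internal"  # hypothetical service audience
ISSUER = "https://sts.example.internal"        # hypothetical token issuer

def authorize(token: str, public_key: str, required_role: str) -> bool:
    """Verify the caller's identity token and check the required role claim."""
    try:
        claims = jwt.decode(
            token,
            public_key,
            algorithms=["RS256"],
            audience=AUDIENCE,
            issuer=ISSUER,
        )
    except jwt.InvalidTokenError:
        # Bad signature, wrong audience/issuer, or expired token: deny.
        return False
    return required_role in claims.get("roles", [])
```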

Organizations already struggle with the basics. Why set them up for failure with a massive ZTNA implementation? Perhaps because IAM is boring and network security companies have products to sell.

Abandon the Idea That It's Not Good Enough Unless It's Perfect

There is an old joke that the only secure computer is one locked away in a separate room, with no mouse, keyboard, or screen, no network connection, and powered off. The joke is meant to be instructive about balancing confidentiality and integrity against availability: it shows that perfect security is not possible. It is not to be taken seriously as reasonable security guidance.

We supposedly moved to a risk-based approach at least a decade ago. However, it seems a good portion of our industry continues to look for the perfect and treats anything less as not good enough. Is it any surprise that there is such a gulf between security consultants, advisors, and policy writers on the one hand and practitioners on the other? Let's abandon our dogmas of perfection so we can focus on the operational security problems that actually matter.

cloud security posts without corporate approval @jaythvv@infosec.exchange