By Phil Robinson, Principal Security Consultant, Prism Infosec
These days, security architecture is a subjective term that means different things to different people. This is largely because we’ve seen so many different approaches to the problem of securing systems, which can leave the security architect having to justify their approach. Senior management will want to know why solutions are being architected around concepts such as “Defence in Depth,” but the truth is that the fundamentals of security architecture design and review don’t change.
To avoid any confusion, it’s first worth nailing exactly what we mean by security architecture. The NCSC describes it as “the practice of designing computer systems to achieve security goals” by making compromise difficult, limiting the impact of that compromise, making disruption difficult and detection easy.
In order to achieve these aims, the security architecture must address the technology, people and processes relating to a system, which means that the architect must bring technical knowledge, business acumen and soft skills to the table, and be up to speed on current threats and vulnerabilities. Their role is to look critically at the enterprise architecture to identify weaknesses and then design (or redesign) it to mitigate risk.
There are numerous ways to reduce the exposure of systems through the combination of technical, procedural and operational controls and this has seen the emergence of various approaches over the years.
Zero Trust – The good and the bad
Today, the preoccupation is with Zero Trust Network Access (ZTNA) which seeks to improve access control by treating every access request as potentially hostile, requiring it to be authenticated, authorised and continually validated. The term Zero Trust was actually conceived back in 2010 (although the concept was embodied in “deperimeterisation” championed by organisations such as the Jericho Forum).
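To make that principle concrete, here is a minimal sketch (in Python, with invented users, resources and policy data) of the kind of per-request gate Zero Trust implies: access is granted only if the request is authenticated, authorised for the specific resource, carries a fresh credential and comes from a compliant device. A real deployment would query an identity provider and device-posture service rather than hard-coded sets.

```python
from dataclasses import dataclass
import time

@dataclass
class AccessRequest:
    user: str
    resource: str
    token_issued_at: float   # epoch seconds when the credential was issued
    device_compliant: bool   # e.g. a posture check reported by an MDM agent

# Illustrative policy data (assumptions, not a real product's API).
KNOWN_USERS = {"alice", "bob"}
PERMISSIONS = {("alice", "payroll-db"), ("bob", "wiki")}
MAX_TOKEN_AGE = 300  # force re-validation of credentials every 5 minutes

def evaluate(request: AccessRequest, now: float) -> bool:
    """Return True only if every Zero Trust check passes for this request."""
    authenticated = request.user in KNOWN_USERS
    authorised = (request.user, request.resource) in PERMISSIONS
    # Continual validation: stale tokens and non-compliant devices are
    # refused, even for otherwise-authorised users.
    fresh = (now - request.token_issued_at) <= MAX_TOKEN_AGE
    return authenticated and authorised and fresh and request.device_compliant

now = time.time()
print(evaluate(AccessRequest("alice", "payroll-db", now - 60, True), now))   # fresh token, compliant device
print(evaluate(AccessRequest("alice", "payroll-db", now - 600, True), now))  # stale token is refused
```

The point of the sketch is that no request is trusted by default: every check is evaluated on every call, rather than once at the network boundary.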
ZTNA is ideal for the modern hybrid environment as it can encapsulate entities, whether network or data objects, and effectively protect both remote users and cloud-based assets. Its adoption has also been accelerated by the pandemic, when the rapid rollout of remote working necessitated a rethink of how we effectively control remote access en masse.
One of the criticisms of the approach, however, is that it can cause too much friction, frustrating users. It’s ideal for pure-play cloud businesses that use SaaS but becomes more difficult to implement for those with legacy systems. This can be remedied through approaches such as Just In Time (JIT) protocols, which provide the user with temporary access, usually via ephemeral certificates that are issued instantaneously and self-destruct on expiry. But there’s no getting away from the fact that moving to a zero trust approach will require significant restructuring of the existing architecture. This means migration over time, with the phasing out of non-compatible systems.
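As a rough illustration of the JIT idea, the sketch below mints a short-lived, signed grant that becomes useless once it expires. The HMAC-based scheme, the 10-minute lifetime and the names are all assumptions chosen for illustration; real deployments use proper short-lived certificates issued by a certificate authority or identity platform.

```python
import hashlib
import hmac
import os

# Hypothetical Just In Time access grant: minted on demand, "self-destructs"
# simply by expiring. Not a real product's API.
SIGNING_KEY = os.urandom(32)  # stand-in for a managed signing key
LIFETIME = 600                # assumed 10-minute validity window, in seconds

def issue_grant(user: str, resource: str, now: float) -> dict:
    """Mint a temporary access grant, signed so it cannot be tampered with."""
    expires = now + LIFETIME
    payload = f"{user}:{resource}:{expires}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"user": user, "resource": resource, "expires": expires, "sig": sig}

def grant_valid(grant: dict, now: float) -> bool:
    """A grant is honoured only if its signature checks out and it is unexpired."""
    payload = f"{grant['user']}:{grant['resource']}:{grant['expires']}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, grant["sig"]) and now < grant["expires"]

g = issue_grant("alice", "prod-db", now=1000.0)
print(grant_valid(g, now=1100.0))  # within the lifetime
print(grant_valid(g, now=2000.0))  # expired: access quietly disappears
```

Because validity is enforced at every check rather than revoked manually, the temporary credential leaves nothing standing to clean up after the work is done.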
Other silver bullet solutions
ZTNA is not the first security fad, nor will it be the last. We also had the “Thin Client” vs “Thick Client” debate, in which server-hosted desktops and applications accessed from low-spec clients were lauded over applications installed on laptops and desktops, which are more difficult to maintain and more expensive.
While it’s certainly true that the former can be easier to secure and maintain (a centralised environment which can be the focus of patch and configuration management), thin clients have limited deployment scenarios as they require constant connectivity and suffer performance issues. After an initial uptake of the thin client approach, many companies migrated back to the use of laptops. As such, it was clear that what is right for the business will be dictated by the way its users operate, not the security implications.
So, given that Defence in Depth (DiD) has been around for decades, is it still relevant? There are those that argue the strategy has failed. They point to the bloated cyber security stack of up to 70 solutions now found in the average enterprise and the seemingly unchecked onslaught of attacks over those years. Add to that the evaporation of the network perimeter in a hybrid workforce and the increase in consumption of cloud services, and it’s easy to see why some question its relevance.
DiD works by using a layered approach to security which effectively buys response time. The theory is that even if a threat actor gets past the initial defence, another security control will identify, slow or mitigate the attack, effectively plugging the gap. It can accommodate the needs of the organisation as it scales or as threats change, since more layers can be applied to areas deemed high risk, but it makes two big assumptions: firstly, that you have ownership and control over the network, and secondly, that an attack will originate externally, which means that all users within the network are trusted. Both of those came under question when we entered lockdown and demand for remote access rocketed, putting ZTNA in the spotlight.
But if executed by taking into consideration not just the security systems in play but also the physical security, policies and procedures, and security awareness of the workforce, DiD closely embodies the objectives of effective security architecture design and review mentioned earlier. These other elements tend to be overlooked, but security awareness training, for example, is vital in protecting the organisation against social engineering, phishing and ransomware attacks.
Evolve or die
Historically, DiD focused on defending the perimeterised network through the layering of endpoint security solutions (AV, EDR and access control) and network security solutions (firewalls, VPNs, IPS/IDS, IAM), and through effective patch management. But enterprises are now seeking to evolve their DiD protection to cater for deperimeterised environments that are either hybrid or multi-cloud in nature.
The premise remains the same, namely to create obstacles that delay or thwart an attack, but now DiD can improve Mean Time to Detection (MTTD) and Mean Time to Response (MTTR). Typically, an attacker can remain undetected for 15-21 days (reports vary) and the longer the dwell time, the more likely they will break through lines of defence. Modern DiD can therefore draw on cloud-based tools to address this such as Endpoint Detection and Response (EDR) or Extended Detection and Response (XDR), as well as encompassing privileged access management for human and non-human entities, endpoint privilege management, and multi-factor authentication mechanisms on top of those traditionally used.
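As a simple illustration of how those metrics are derived, the snippet below computes MTTD (intrusion to detection) and MTTR (detection to containment) from a couple of invented incident records. The dates are fabricated for the example; real figures would come from a SIEM or ticketing system.

```python
from datetime import datetime

# Invented incident records: (intrusion began, detected, contained).
incidents = [
    ("2024-03-01", "2024-03-18", "2024-03-20"),
    ("2024-04-05", "2024-04-21", "2024-04-25"),
]

def days_between(a: str, b: str) -> int:
    """Whole days elapsed between two ISO-format dates."""
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).days

# Mean Time to Detection: average dwell time before the attacker is spotted.
mttd = sum(days_between(start, seen) for start, seen, _ in incidents) / len(incidents)
# Mean Time to Response: average gap between detection and containment.
mttr = sum(days_between(seen, fixed) for _, seen, fixed in incidents) / len(incidents)

print(f"MTTD: {mttd:.1f} days, MTTR: {mttr:.1f} days")  # MTTD: 16.5 days, MTTR: 3.0 days
```

Layered controls aim to shrink the first number, so that the dwell time never reaches the 15–21 day averages the reports describe.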
While other security strategies may come and go, DiD is flexible enough that it remains relevant today and there’s no reason why it can’t be used to enable the deployment of ZTNA. But we should be focused on how best we can mitigate attacks using people, process and technology and perhaps be less hung-up on the latter.
Phil Robinson has worked in information security for over 25 years and is the founder of Prism Infosec which offers cutting edge penetration testing, red teaming, incident response and simulated exercises, and security consultancy services over cloud and traditional on-prem architectures and enterprise applications.
Phil has been instrumental in the development of numerous penetration testing standards and certifications. He was involved in the original formation of the Council for Registered Ethical Security Testers (CREST), chaired the management committee of the Tiger scheme and established key CESG Certified Professional (CCP) roles on behalf of the British Computer Society (BCS), and has also contributed toward the Open Source Security Testing Methodology Manual (OSSTMM).
An Associated Member of the ISSA, an (ISC)2 CISSP, ISACA CISA and a CHECK Team Leader, Phil has worked as a CLAS Consultant/Senior CCP Security and Information Risk Advisor and in this capacity has delivered cybersecurity advice and guidance to HMG departments and agencies. He regularly speaks about penetration testing and e-crime to help promote cybersecurity awareness and industry best practice.