By Andy Mills
The Application Programming Interface (API) is now the bedrock of our digital ecosystem, acting as the glue between systems and applications and enabling businesses to deliver services rapidly. However, APIs are frequently attacked, and as more are deployed the attack surface grows. In fact, Gartner predicts that by next year API breaches will have doubled compared with 2021, and the nature of those attacks is also changing. The OWASP API Security Project updated its Top Ten list of API threats earlier this year to reflect this, as attackers up the ante, resorting to new combinations of attacks to subvert the functionality of the API and gain access to the systems and data it connects with.
Surely, then, zero trust can help secure these APIs? The concept specifically addresses the nuances of a distributed network architecture, which means it is capable of enforcing least privilege and access controls on every facet of an API: the user, the application and the data. The problem is that very few organisations have been able to achieve a complete zero trust architecture (ZTA). The State of Zero Trust Report 2023 puts the figure at just 28%, down from 40% in 2021, reflecting the difficulties organisations are having in creating a joined-up, cohesive architecture. However, more have now kicked off deployments, rising from 54% two years ago to 66% today.
Caught up in the minutiae
Those that are rolling out a ZTA are so focused on the initial stages, such as microsegmentation, which can be incredibly time-consuming and complex, that APIs come way down the list of priorities. The same report issued a year ago described ZTA deployments as having five stages: traditional, emerging, maturing, elevated and evolved, with API security only arriving in the fourth phase.
Yet delaying API integration until later in the process is a risky strategy. Gartner has warned that ZTA will force attackers to look for attack points outside the scope of the architecture, and that as a result more than half of all cyber attacks will be aimed at these weak spots by 2026. Chief among them, it suggests, will be the assets and vulnerabilities housed by public-facing APIs.
Attackers will increasingly scan for and seek to exploit these APIs, and there is already evidence of this today. The number of attackers looking for so-called ‘shadow APIs’, those that have fallen off the security team’s radar, has risen sharply: 45 billion search attempts were made in the second half of 2022 alone, compared with just five billion during the first half of the year, according to the API Protection Report.
Unique challenges
It’s therefore imperative that APIs are factored into ZTA deployments, but there’s no doubt this will be a complex undertaking. APIs come in various shapes and flavours. As well as being internal or public facing, they might interface in numerous ways: a single API providing access to a service, aggregated APIs that use another API as the point of entry, APIs that act as intermediaries between incompatible applications, or partner and third-party APIs.
They are also problematic to monitor and secure using traditional mechanisms. Segmentation and deep packet inspection at the network level (layers 3 and 4) can miss APIs completely, resulting in those shadow APIs, while application-layer (layer 7) protection methods such as web application firewalls (WAFs), which use signature-based threat detection, will miss the kind of abuse that typically leads to API compromise. Often, APIs are not ‘hacked’ as such; their functionality is used against them in business logic abuse attacks, so it’s the behaviour of the API request and the resulting traffic that needs to be observed.
Yet it’s clear that APIs must be included in ZTA. The NIST guidelines specify that all data sources and computing services should be regarded as resources and that all communication should be secured regardless of network location. Furthermore, access to resources should be granted on a per-session basis and determined by dynamic policy, including the observable state of the requester, and may take in behavioural and environmental attributes. So access should be authenticated, for example through Identity and Access Management (IAM), while behavioural analysis should be used to detect any unusual activity, such as fluctuations in traffic volumes or anything that falls outside what is considered normal usage for that API.
NIST also states that it is necessary to monitor and measure the integrity and security posture of all owned and associated assets, and that resource authentication and authorisation should be dynamic and strictly enforced before access is allowed. Information should also be collected on the current state of assets, the network and the communications traversing it; information that can then be used to deliver insights and improve controls to strengthen the security posture. In other words, monitoring and management of assets is key.
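To illustrate what those tenets might look like at the point of an individual API session, here is a minimal sketch in Python of a dynamic, per-session access decision that weighs identity, device posture and a simple behavioural signal; the attribute names and thresholds are hypothetical examples rather than anything prescribed by NIST.

```python
# Minimal sketch of a per-session, dynamic access decision in the spirit of
# the NIST tenets described above: identity, device posture and a simple
# behavioural signal are all weighed before each API session is granted.
# The attribute names and thresholds are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool     # e.g. token validated against the identity provider
    device_compliant: bool      # e.g. posture attested by an MDM/EDR agent
    recent_calls_per_min: int   # behavioural signal from API monitoring
    typical_calls_per_min: int  # learned baseline for this caller

def grant_session(req: AccessRequest) -> bool:
    """Evaluate dynamic policy for a single session; nothing is trusted implicitly."""
    if not (req.identity_verified and req.device_compliant):
        return False
    # Behavioural check: refuse if traffic is far outside the caller's norm
    if req.recent_calls_per_min > 5 * max(req.typical_calls_per_min, 1):
        return False
    return True

# Example: a verified caller on a compliant device, but calling at 50x its
# usual rate, is denied for this session and must be re-evaluated.
print(grant_session(AccessRequest(True, True, recent_calls_per_min=500,
                                  typical_calls_per_min=10)))  # False
```

The point is simply that the decision is re-evaluated for every session, using whatever state is currently observable, rather than being granted once and trusted indefinitely.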
APIs need ZTA
In many ways, given the distributed nature of APIs, ZTA is long overdue as a means of securing these interfaces. They were never meant to be publicly accessible, yet are now intrinsic to the functioning of services and applications. Some applications have hundreds of microservices, each managing tens of APIs, which gives some idea of just how numerous and complex they have become. Such is the scale of the problem that leaving APIs out of ZTA exposes the organisation to attack. Those APIs effectively become the Achilles’ heel of the operation, rendering much of the effort put into implementing ZTA elsewhere on the network redundant.
So how should APIs be brought under ZTA? It’s a complex undertaking that will require the use of automated solutions to handle authentication and access at scale. Automated API discovery tools can help with the initial process of mapping the API footprint: they scan the network for API traffic, identify APIs that are not listed in the organisation’s directory and flag them for further investigation. Every API call should then be authenticated to verify the identity of the caller, typically using API keys or tokens; OAuth 2.0 and OpenID Connect (OIDC) are commonly used protocols for API authentication.
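As a rough illustration of that authentication step, the sketch below validates an OAuth 2.0/OIDC bearer token on an inbound call using the PyJWT library; the issuer, audience and key endpoint are hypothetical placeholders rather than any particular provider’s configuration.

```python
# Minimal sketch of authenticating an inbound API call by validating an
# OAuth 2.0 / OIDC bearer token (JWT). Issuer, audience and JWKS URL are
# illustrative placeholders, not a specific provider's configuration.
import jwt  # PyJWT
from jwt import PyJWKClient

ISSUER = "https://idp.example.com"           # hypothetical identity provider
AUDIENCE = "https://api.example.com/orders"  # hypothetical API audience
JWKS_URL = f"{ISSUER}/.well-known/jwks.json"

def authenticate_request(authorization_header: str) -> dict:
    """Verify the bearer token on an API call and return its claims."""
    if not authorization_header.startswith("Bearer "):
        raise PermissionError("Missing bearer token")
    token = authorization_header.removeprefix("Bearer ")

    # Fetch the signing key advertised by the identity provider (JWKS)
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)

    # Validate signature, expiry, issuer and audience in one step
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
    return claims  # e.g. {"sub": "client-123", "scope": "orders:read", ...}
```

In practice a check like this would sit in middleware or at the gateway, so that no call reaches business logic without a verified identity.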
Once the caller’s identity is verified, the next step is to check whether they have the necessary permissions to perform the requested action. This is where Role-Based Access Control (RBAC) or Attribute-Based Access Control (ABAC) comes into play, ensuring that the right level of authorisation is allocated to the entity making the request and that the principle of least privilege is observed.
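To make that concrete, here is a minimal sketch of an authorisation check that combines an RBAC permission lookup with an ABAC attribute rule to enforce least privilege; the roles, actions and attributes are invented purely for illustration.

```python
# Minimal sketch of an authorisation check combining role-based and
# attribute-based rules. Role names, actions and attributes are hypothetical.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "orders:reader": {"orders:read"},
    "orders:admin":  {"orders:read", "orders:write", "orders:delete"},
}

@dataclass
class RequestContext:
    roles: list[str]        # taken from the verified token claims
    department: str         # example attribute for ABAC
    resource_owner: str     # which department owns the record being accessed
    caller_id: str

def is_authorised(ctx: RequestContext, action: str) -> bool:
    # RBAC: at least one of the caller's roles must grant the action
    role_ok = any(action in ROLE_PERMISSIONS.get(r, set()) for r in ctx.roles)

    # ABAC: tighten further with attributes, e.g. writes are only allowed
    # on records owned by the caller's own department
    if action.endswith(":write"):
        return role_ok and ctx.resource_owner == ctx.department
    return role_ok

# Example: even an admin in "sales" cannot write a record owned by "finance"
ctx = RequestContext(roles=["orders:admin"], department="sales",
                     resource_owner="finance", caller_id="client-123")
assert not is_authorised(ctx, "orders:write")
```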
Beyond access and authentication
Even after authentication and authorisation have been completed, every API request should be validated. This includes checking the request against schemas, such as an OpenAPI specification, for the expected data format, validating the data against business rules, and scanning for any malicious content. In addition, data should be encrypted both in transit and at rest: HTTPS should be used for all API calls, while sensitive stored data should be encrypted using strong encryption methods.
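A minimal sketch of that validation step, assuming a schema fragment lifted from a hypothetical OpenAPI document and the widely used jsonschema library, might look like this:

```python
# Minimal sketch of validating an inbound request body against a schema
# taken from an OpenAPI specification. The schema shown is a hypothetical
# fragment, not a real API contract.
from jsonschema import validate, ValidationError

# In practice this object would be extracted from the API's OpenAPI document
CREATE_ORDER_SCHEMA = {
    "type": "object",
    "required": ["sku", "quantity"],
    "additionalProperties": False,   # reject unexpected fields outright
    "properties": {
        "sku": {"type": "string", "pattern": "^[A-Z0-9-]{4,20}$"},
        "quantity": {"type": "integer", "minimum": 1, "maximum": 1000},
    },
}

def validate_request(body: dict) -> None:
    """Raise if the payload does not match the expected contract."""
    try:
        validate(instance=body, schema=CREATE_ORDER_SCHEMA)
    except ValidationError as err:
        # Reject the call before it reaches business logic
        raise ValueError(f"Schema violation: {err.message}") from err

validate_request({"sku": "ABC-1234", "quantity": 2})             # passes
# validate_request({"sku": "x", "quantity": -5, "admin": True})  # would fail
```

Rejecting anything that falls outside the declared contract closes off a large class of injection and mass-assignment style abuse before business rules are even consulted.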
Regular audits of API activity can also help detect any unusual or suspicious behaviour. This should include logging all API calls and monitoring for any anomalies, which might include unexpected spikes in traffic, unusual patterns of access, or the use of deprecated API versions. These regular audits, together with proactive discovery, ensure a comprehensive view of the API landscape is maintained at all times.

A question often asked is whether the API Gateway can provide sufficient security. Effectively, this acts as a single entry point for all API calls and can handle authentication, rate limiting and other security measures, providing a buffer between the API and the outside world. However, it complements rather than replaces a dedicated API security solution, by ensuring there are layers of security protecting the APIs. In contrast, a dedicated API solution will seek to cover the whole API lifecycle, from discovery through compliance with security policies and regulatory requirements, to protection against abuse. Such solutions use machine learning and AI, for example, to detect anomalous activity and calls to the API, and include defensive mechanisms that allow them to block or mitigate an attack.
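As a simple illustration of the auditing described above, the sketch below keeps a rolling per-endpoint baseline of call volumes and flags sudden spikes or calls to deprecated versions; the window size, threshold and endpoint names are arbitrary examples, and a production system would draw on far richer signals.

```python
# Minimal sketch of flagging unusual API activity from call logs: a simple
# per-endpoint volume baseline that flags sudden spikes or the reappearance
# of deprecated versions. Thresholds and log format are illustrative only.
from collections import Counter, deque
from statistics import mean, pstdev

WINDOW = 24                  # number of past intervals kept as a baseline
SPIKE_FACTOR = 3.0           # flag if volume exceeds mean + 3 std deviations
DEPRECATED = {"/v1/orders"}  # hypothetical retired endpoint

history: dict[str, deque] = {}

def audit_interval(calls: list[str]) -> list[str]:
    """Take one interval's worth of called paths and return alerts."""
    alerts = []
    counts = Counter(calls)
    for path, count in counts.items():
        if path in DEPRECATED:
            alerts.append(f"Deprecated endpoint called: {path}")
        baseline = history.setdefault(path, deque(maxlen=WINDOW))
        if len(baseline) >= 3:
            m, sd = mean(baseline), pstdev(baseline)
            if count > m + SPIKE_FACTOR * max(sd, 1.0):
                alerts.append(f"Traffic spike on {path}: {count} vs ~{m:.0f}")
        baseline.append(count)
    return alerts

# Example: a quiet endpoint suddenly receives a burst of calls
for _ in range(5):
    audit_interval(["/v2/orders"] * 10)
print(audit_interval(["/v2/orders"] * 200))  # flags the spike
```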
Implementing these processes and technologies can therefore extend ZTA to include APIs. The principles set out by NIST are observed while still respecting the nuances of how APIs function. But the concern remains whether organisations will look to include APIs at the early stages of implementation.
Right now, the position we are in is a precarious one. Organisations are devoting significant resources to ZTA, microsegmenting the network and adding access and authentication procedures for key systems and applications, with scant attention paid to the very APIs they depend upon. The approach to securing these remains haphazard: many do not know how many APIs they have and are unable to monitor and manage them, while others have a false sense of security imparted by their WAF or API Gateway. Yet leaving APIs unattended could well scupper efforts being made elsewhere in the business. It’s for this reason that we have to put API security at the top of the list when it comes to implementing zero trust.
Andy Mills is VP of EMEA for Cequence Security and assists organisations with their API protection strategies, from discovery to compliance and protection. He’s a passionate advocate of the need to secure the entire API lifecycle using a unified approach.
Prior to joining Cequence, he held roles as CRO for a major tax technology provider and was part of the original worldwide team of pioneers that brought Palo Alto Networks, the industry’s leading Next-Generation Firewall, to market. Andy holds a Bachelor of Science Degree in Electrical and Electronic Engineering from Leeds Beckett University.