Your path to understanding attack paths

The complexity of enterprise IT systems has increased dramatically over the past 10 years, first with the move from fixed on-premises systems to the cloud, then with the growth of web-based applications and cloud-based services offering new, more efficient ways of doing business.

While some small organizations may be entirely cloud-based, the vast majority have a mix of on-premises and cloud computing, or a hybrid of the two, and use third-party systems and web-based applications for internal or customer-facing services.

While this has provided a significant increase in capacity and efficiency, it has also brought complexity, both technically and organizationally, with external parties such as cloud service providers and developers having security responsibilities for the software or services they provide.

Over the same period, attackers have become more sophisticated, with targeted attacks typically using multiple vulnerabilities to gain a foothold, elevate their privileges, and then move to other hosts and servers on the network.

Further vulnerabilities will then be exploited to maintain persistence – and these will not just be software vulnerabilities, but could be errors in cloud configuration or Identity and Access Management (IAM), or could be the result of a supply chain attack on a software or service provider.

These can be addressed to some extent with vulnerability scanning and automated cloud policy verification applications that check configurations against high-level policy, but they can never be eliminated.

The MITRE ATT&CK framework identifies nine primary techniques attackers use to gain initial access.

The majority of them – such as drive-by web compromise, exploitation of public-facing applications, phishing, replication via removable media and the use of stolen accounts – will only provide access at user level.

This allows the attacker to access information available to that user, but does not provide full access. To gain that, the attacker must exploit a vulnerability to elevate privileges to administrator level and escape the initial host, and then another to gain a foothold on a second host or server.
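As a rough illustration, the initial access techniques can be captured in a simple data structure and filtered by whether they typically yield only user-level access. A minimal sketch – the technique set reflects one version of the ATT&CK enterprise matrix, and the user-level flags are an assumption drawn from the discussion above, not a classification defined by MITRE:

```python
# Sketch: ATT&CK initial-access techniques (enterprise matrix, tactic TA0001).
# The "user_level_only" flags are an assumption based on the discussion above,
# not anything defined by MITRE.
INITIAL_ACCESS = {
    "T1189": ("Drive-by Compromise", True),
    "T1190": ("Exploit Public-Facing Application", True),
    "T1133": ("External Remote Services", True),
    "T1200": ("Hardware Additions", False),
    "T1566": ("Phishing", True),
    "T1091": ("Replication Through Removable Media", True),
    "T1195": ("Supply Chain Compromise", False),
    "T1199": ("Trusted Relationship", False),
    "T1078": ("Valid Accounts", True),
}

user_level = [name for name, only_user in INITIAL_ACCESS.values() if only_user]
print(f"{len(user_level)} of {len(INITIAL_ACCESS)} techniques typically "
      f"give only user-level access: {', '.join(user_level)}")
```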

Similarly, for organizations hosting web applications, exploiting a vulnerability or misconfiguration in an externally facing web application could provide access to an underlying database, or direct access to the operating system and, through it, to other systems by exploiting further vulnerabilities.

Although customer-facing and internal systems should be separate, they often aren’t, and it may be possible to move from one platform or system to another.

The most likely connection will be a common IAM system, especially if users’ Windows domain passwords are reused across different systems – which is not uncommon. Wherever there is a connection between two systems, misconfiguration or unmitigated vulnerabilities could allow an attacker to move between them.

This risk cannot be properly dealt with without a precise inventory of assets and interconnections, which must be constantly updated.
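One way to make such an inventory actionable is to model assets and their interconnections as a directed graph and enumerate the paths from exposed assets to sensitive ones. A minimal sketch using the networkx library – the asset names and connections are purely hypothetical:

```python
# Sketch: model asset interconnections as a graph and enumerate potential
# attack paths from an internet-facing asset to a sensitive one.
# Asset names and edges are hypothetical examples.
import networkx as nx

inventory = nx.DiGraph()
inventory.add_edges_from([
    ("internet", "web-app"),        # public-facing application
    ("web-app", "app-db"),          # application reads/writes its database
    ("web-app", "web-host-os"),     # code execution on the hosting server
    ("web-host-os", "ad-domain"),   # host is domain-joined
    ("ad-domain", "file-server"),   # shared IAM links internal systems
    ("ad-domain", "finance-app"),
])

# Every simple path from the internet to a sensitive asset is a candidate
# attack path that needs a control (segmentation, MFA, patching) along it.
for path in nx.all_simple_paths(inventory, "internet", "finance-app"):
    print(" -> ".join(path))
```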

“If there is a connection between two systems, misconfiguration or unmitigated vulnerabilities could allow an attacker to move between them”

Paddy Francis, Airbus CyberSecurity

Once this is in place, the first step in addressing this risk should be zoning/segmentation, with proper monitoring of inter-zone traffic. This should be followed by regular vulnerability scanning and patching to remove the vulnerabilities found or, where patching is not possible, to mitigate them so that they cannot be exploited. Mitigation can be applied at the individual vulnerability level, or as a system-level mitigation addressing multiple vulnerabilities.
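As a simple illustration of what inter-zone monitoring can look like, observed flows can be checked against an allowlist of expected zone-to-zone traffic. A minimal sketch – the zone names, ports and sample flows below are hypothetical:

```python
# Sketch: flag inter-zone flows that fall outside the expected allowlist.
# Zone names, ports and the sample flows are hypothetical.
ALLOWED = {
    ("dmz", "app"): {443},      # reverse proxy to application tier
    ("app", "data"): {5432},    # application tier to database
    ("mgmt", "app"): {22},      # administration from the management zone
}

observed_flows = [
    ("dmz", "app", 443),
    ("app", "data", 5432),
    ("dmz", "data", 1433),      # unexpected: DMZ talking directly to the data zone
]

for src, dst, port in observed_flows:
    if port not in ALLOWED.get((src, dst), set()):
        print(f"ALERT: unexpected flow {src} -> {dst} on port {port}")
```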

For the cloud, misconfigurations can be identified using tools that check configurations against a high-level security policy, which should help fix them. This assumes, of course, that such a policy is in place for the tool to verify against.
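A policy check of this kind can be as simple as comparing exported resource configurations against a handful of rules derived from the high-level policy. A minimal sketch over a hypothetical export format and bucket names:

```python
# Sketch: check exported storage-bucket configurations against a simple
# policy (no public access, encryption at rest). The export format and
# bucket names are hypothetical.
buckets = [
    {"name": "customer-uploads", "public": False, "encrypted": True},
    {"name": "legacy-backups",   "public": True,  "encrypted": False},
]

POLICY = {
    "public": (False, "bucket must not be publicly accessible"),
    "encrypted": (True, "bucket must be encrypted at rest"),
}

for bucket in buckets:
    for field, (expected, reason) in POLICY.items():
        if bucket.get(field) != expected:
            print(f"{bucket['name']}: {reason}")
```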

For web applications or other custom software development, secure coding rules and the use of static and dynamic code analysis as part of the DevOps test cycle will help eliminate common issues such as buffer overflow and cross-site scripting vulnerabilities.
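Secure coding rules often come down to small, mechanical habits that static analysis can enforce – for example, always encoding untrusted input before it is written into HTML. A minimal Python illustration of the cross-site scripting case, using only the standard library:

```python
# Sketch: the difference between reflecting untrusted input directly into
# HTML (XSS-prone) and encoding it first, using the standard library.
import html

user_input = '<script>alert("xss")</script>'  # attacker-controlled value

unsafe = f"<p>Hello {user_input}</p>"              # script would execute in the browser
safe = f"<p>Hello {html.escape(user_input)}</p>"   # rendered as harmless text

print(unsafe)
print(safe)
```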

There will inevitably be vulnerabilities that cannot be patched or mitigated, as well as misconfigurations that remain unknown, so something must also be done about the issues that cannot be fixed or are not yet known about.

If not already in place, multi-factor authentication (MFA) for administrator access, remote-access virtual private networks (VPNs) and access to other sensitive systems will help mitigate privilege escalation and the use of stolen credentials – for example, those captured by password sniffers, keyloggers and so on.
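For illustration, a second factor such as a time-based one-time password (TOTP) can be verified in a few lines – sketched here with the pyotp library; in practice the secret would be enrolled per user and stored securely rather than generated at login time:

```python
# Sketch: verifying a time-based one-time password (TOTP) as a second
# factor using pyotp. In practice the secret is enrolled per user and
# stored securely, not generated at login time.
import pyotp

secret = pyotp.random_base32()       # enrolled once, shared with the user's authenticator app
totp = pyotp.TOTP(secret)

submitted_code = totp.now()          # stand-in for the code the user types in

if totp.verify(submitted_code, valid_window=1):   # allow slight clock drift
    print("Second factor accepted")
else:
    print("Second factor rejected")
```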

The use of zoning and additional monitoring can also provide system-level mitigations for known vulnerabilities, and help identify or prevent the exploitation of unknown vulnerabilities and misconfigurations, by limiting traffic between zones to what would be expected and monitoring inter-zone traffic to detect potential exploit activity.

Finally, an independent penetration test of the system will prove the mitigations of known vulnerabilities and may also identify misconfigurations, but it will not be able to find unknown vulnerabilities.

Today’s large computer systems tend to be complex and are often built piecemeal over time, which typically introduces vulnerabilities and misconfigurations through numerous equipment and system reconfigurations and the addition of new applications and services. Such systems will always be susceptible to vulnerabilities and misconfigurations – and where these exist, they will eventually be exploited.
