Organizations should measure cybersecurity risk by analyzing several key factors: the likelihood of a cyber event, the organization's vulnerability to it, and its potential impact. In conducting this analysis of Meltdown and Spectre, organizations should not be frightened by news headlines, but should instead use events like these as a catalyst to discuss their cybersecurity posture, for example by evaluating potential threat actors and scenarios, or by assessing how vulnerable they are to a given type of threat or attack.
The Meltdown and Spectre vulnerabilities are being covered widely in the press. They are recently discovered flaws in processor hardware found in nearly all existing servers, desktops, and laptops, as well as many smartphones. However, when the key factors of risk are measured, these two vulnerabilities are not likely a cause for panic at this point. Here’s why:
- Current Likelihood of Compromise: Low
Although exploit code has recently been made public for these vulnerabilities, they can only be exploited locally, by someone who already has access to the computer or who gains access by some other means, such as remote compromise through other vulnerabilities. Therefore, the likelihood of actual compromise today is considered low. In contrast, recent ransomware attacks, and exploits such as the one that leveraged the Apache Struts vulnerability, had a high likelihood of impacting targeted organizations because they exploited flaws that could compromise systems remotely, without requiring local access.
- Vulnerability: Low (Assuming Strong Vulnerability Management Practices)
Vulnerability is a risk factor that organizations can control to a great degree. In this case, an organization that maintains strong vulnerability management controls and practices, including patch management, is not likely to be very vulnerable to a Meltdown or Spectre attack. That is because hardware manufacturers and operating system vendors have already released software patches that address these vulnerabilities, and most cloud providers have already deployed them. Some web browsers have patches as well, and more are forthcoming, including for systems that already have provisional patches. However, installing patches of this nature can require painstaking work and will place demands on the information technology (IT) staff responsible for rolling them out without disrupting normal operations.
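One practical aid to that patching effort: recent Linux kernels (4.15 and later) report the mitigation status of these vulnerabilities through sysfs, which makes a quick audit scriptable. The following Python sketch reads those entries where they exist; the parsing helper is our own illustration, not part of any standard tooling, and the sysfs interface is only present on sufficiently new Linux kernels.

```python
from pathlib import Path

# Exposed by Linux kernels >= 4.15; absent on older kernels and other OSes.
SYSFS_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def read_statuses(directory=SYSFS_DIR):
    """Read each sysfs entry (e.g. 'meltdown', 'spectre_v2') into a dict
    of name -> status string. Returns an empty dict where the interface
    does not exist."""
    if not directory.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in directory.iterdir()}

def unmitigated(statuses):
    """Return the names of vulnerabilities whose kernel-reported status
    still begins with 'Vulnerable' (i.e. no mitigation is active)."""
    return sorted(name for name, status in statuses.items()
                  if status.startswith("Vulnerable"))

if __name__ == "__main__":
    statuses = read_statuses()
    for name, status in sorted(statuses.items()):
        print(f"{name}: {status}")
    print("Unmitigated:", unmitigated(statuses) or "none reported")
```

Run across a server fleet, a check like this gives IT staff a quick inventory of which hosts still need kernel or microcode updates, before the more disruptive reboot-and-verify work begins.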
To minimize vulnerability, organizations should prioritize patching private cloud servers, such as corporate VMware servers hosting multiple virtual machine instances. After that, systems should be prioritized based on where the most sensitive data resides; for example, machines holding critical passwords or keys should come before an exhaustive update of end-user computers and phones. In line with security best practice, systems that will not be patched or are no longer supported should be retired.
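The prioritization described above can be expressed as a simple sort over an asset inventory. This is a minimal illustrative sketch; the asset fields, tier values, and example hostnames are hypothetical, not drawn from any real inventory or tool.

```python
# Hypothetical prioritization sketch: hypervisor hosts patch first,
# then remaining systems ranked by data sensitivity; unsupported
# systems are routed to a retirement list rather than patched.

def patch_priority(asset):
    """Lower numbers patch first: hypervisors (tier 0), then systems
    by sensitivity, where sensitivity 3 is most sensitive."""
    if asset["role"] == "hypervisor":
        return 0
    return 4 - asset["sensitivity"]  # sensitivity 3 -> 1, 1 -> 3

def plan(assets):
    """Split assets into a patch queue and a retirement list."""
    retire = [a for a in assets if not a["supported"]]
    queue = sorted((a for a in assets if a["supported"]), key=patch_priority)
    return queue, retire

assets = [
    {"name": "laptop-17", "role": "endpoint", "sensitivity": 1, "supported": True},
    {"name": "kms-01", "role": "key-store", "sensitivity": 3, "supported": True},
    {"name": "vmhost-01", "role": "hypervisor", "sensitivity": 3, "supported": True},
    {"name": "legacy-01", "role": "server", "sensitivity": 2, "supported": False},
]

queue, retire = plan(assets)
print("Patch order:", [a["name"] for a in queue])
print("Retire:", [a["name"] for a in retire])
```

The point of the sketch is the ordering logic, not the data model: co-hosted hypervisors come first because one unpatched host exposes every guest on it, and anything unsupported drops out of the patch queue entirely.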
Patch management is just one component of an effective vulnerability management program. In order to effectively mitigate significant vulnerabilities over time, organizations will also need to be rigorous and thorough about managing their assets, identifying vulnerabilities, conducting research and analysis, managing exceptions, and tracking remediation.
- Impact: Variable (Low to High)
The severity of an attack exploiting these vulnerabilities hinges on the sensitivity of the data held in a given physical computer’s memory, which is where the damage would be done. These vulnerabilities allow an attacker to read memory to which they should not have access, and they can only be exploited locally unless the attacker gains remote access to the computer through other vulnerabilities. The impact may be more significant for systems operating in the cloud, where multiple customers’ data and virtual machines are often co-hosted on the same physical computer.
These measurements of the risks of Meltdown and Spectre provide a general perspective; however, each organization must consider its own risk posture and appetite to measure the level of risk for itself. This measurement is most effective as a multidisciplinary effort involving the chief information security officer (CISO), the IT department, the chief risk officer (CRO), key business stakeholders, and organizational data owners. Including an independent, objective, and trusted outside party (e.g., a security advisory firm) can also help ensure that the organization is considering key industry trends and perspectives. The CISO is the master of information and technology security, while the IT department lives and breathes the network and systems. The cybersecurity specialist will know the threat landscape as it currently exists. Finally, the CRO, who we predict will increasingly take center stage in managing cybersecurity as an enterprise risk, can connect the technology risk to the performance of the business. Discussion among these parties forms a pragmatic response to fear-inducing headlines and results in the proportional application of resources.
Making security decisions based on fear, uncertainty, or doubt can be ineffective, and in some cases can have other undesirable consequences. Business leaders must make decisions based on qualitative and quantitative analysis of risk, rather than relying on knee-jerk reactions to ancillary factors and influences. Only by evaluating these considerations in a thoughtful, multidisciplinary way can an organization reduce and manage its cybersecurity risk. If there is one good thing to come from the press coverage of Meltdown and Spectre, perhaps it is the opportunity to practice understanding the implications of breaking news headlines for an organization’s security posture before reacting broadly to them.