The disclosed concept relates generally to a system and method for improving the security of a networked and/or distributed system, in particular a system and method for scoring and ranking common weaknesses mapped to vulnerabilities found in networked and/or distributed systems.
As the world becomes increasingly connected, organizations of all kinds, including industrial, government, military, non-profit, and educational entities, face constant cyber-attacks by malicious actors. Despite organizational efforts to guard against incoming attacks and protect sensitive data, the costs and resulting losses from successful attacks continue to rise. For example, it has been reported that the average cost of a data breach in 2021 was $4.24 million, a 10% rise from 2019. Indeed, as of 2021, cybercrimes (e.g., damage and destruction of data, stolen money, lost property, intellectual property theft, etc.) have reportedly cost the world almost $600 billion each year, or 0.8% of the global GDP. Such high financial losses due to lack of security, as well as an increase in vulnerabilities across the globe, more stringent regulatory standards and data privacy compliance requirements, a surge in the adoption of Internet of Things (IoT) and cloud-based systems, and the integration of advanced technologies such as Artificial Intelligence (AI) and Machine Learning (ML), have led to exponential growth in the security and vulnerability management market, reportedly valued at $6.61 billion in 2020 and expected to reach $11.72 billion by 2026.
However, given the significant number of interconnected components in distributed or composed systems (e.g., an IoT system or a networked Industrial Control System (ICS)), providing the appropriate level of security for such networked systems may pose a challenge. For example, a first line of defense against cyber-attacks may be to evaluate the weaknesses and vulnerabilities of a system that may be exposed to malicious users. While the terms “weakness” and “vulnerability” are often used interchangeably, they in fact represent two distinct levels of abstraction. That is, a “vulnerability” is a flaw or defect, commonly found in software or hardware, which has the potential to be exploited by attackers for malicious purposes. A “weakness” is a condition in a software, firmware, hardware, or service component that, under certain circumstances, could contribute to the introduction of vulnerabilities. The Common Vulnerabilities and Exposures (CVE) system was introduced to provide a unified method for publicly disclosing security vulnerabilities, and it is referenced as a standard in the cybersecurity world. The Common Weakness Enumeration (CWE) is a system that provides a structured list of software and hardware weakness types that serves as a foundational resource for identifying, mitigating, and preventing weaknesses. While the CWE serves as a comprehensive list of software and hardware weaknesses, with a focus on foundational errors, the CVE encompasses documented instances of vulnerabilities associated with specific systems or products. Each CVE can be mapped to one or more CWE entries, and each CWE entry may encompass numerous (sometimes hundreds of) different vulnerabilities. The purpose of classifying CVEs into CWEs is to provide an easy way to identify specific types of weaknesses and also understand the nature of vulnerabilities.
The CWE facilitates the identification and recognition of specific types of vulnerabilities and enables deeper analysis of the root causes and common patterns associated with specific weaknesses. Thus, while the terms “weakness” and “vulnerability” are often used interchangeably, they in fact represent two distinct levels of abstraction. Several techniques and tools have been developed to work at either level of abstraction, but no approaches to bridging the gap between these two abstraction levels have been developed.
For example, MITRE and OWASP (Open Web Application Security Project) provide periodic rankings of software weaknesses and software vulnerability scoring systems. However, the rankings offer limited solutions because they abstract away the details of individual vulnerabilities and reason about vulnerabilities in terms of weaknesses, providing only generic rankings that are not useful for understanding the security posture of a specific system. For vulnerabilities, topological vulnerability analysis and numerous scoring and ranking systems have been developed, including multi-layer graph approaches to configuration analysis and optimization using vulnerability graphs and various scanning tools to identify the specific vulnerabilities that exist in each component of a distributed system. However, they do not aggregate the information at a higher level of abstraction, rendering it difficult for a security analyst to derive actionable intelligence from voluminous scanning reports. For example, the Common Vulnerability Scoring System (CVSS) often returns the same severity score and rank for a plurality of vulnerabilities, leaving security personnel unable to differentiate severities between those vulnerabilities. Further, the rankings of common weaknesses are based on knowledge about all known vulnerabilities rather than the specific vulnerabilities that exist in the system being evaluated, resulting in overestimating or underestimating the true severity of the weaknesses. Furthermore, the scoring systems rely on predefined notions of risk and use fixed equations to compute numerical scores, and thus do not provide users with the flexibility to fine-tune such equations or consider new variables. For example, the susceptibility of a vulnerability to becoming a target for exploitation by malicious users depends on a number of variables, including features of the vulnerability itself and characteristics of potential attackers.
Many of the existing approaches have focused on intrinsic features of vulnerabilities, but not extrinsic features such as the types, skills, and resources available to the potential attackers. As such, these approaches focus on scoring and comparing vulnerabilities for a fixed attack surface or model based on fixed equations and predefined security risks, thereby failing to provide a user the ability to modify or adjust the vulnerability assessment in accordance with the specific needs of the distributed system being protected.
Thus, current solutions lack a principled approach to quantifying the various dimensions of the problem in a manner in which the scoring and rankings of vulnerabilities can be adapted to various applicative domains and operating conditions of individual systems. Further, they neither account for the individual needs of a specific system nor allow for prioritizing remediation of software security risks based on the needs and resources of the specific system. This results in a generalized security risk assessment that is ineffective or unfit for the individual system, leading to improper or inefficient adoption of security measures and leaving individual systems exposed to malicious attackers and potential business and financial losses.
There is room for improvement in cyber security solutions against the constant and rapidly evolving cyber-attack landscape.
These needs, and others, are met by a method of performing prioritized remediation of security weaknesses in a distributed system. The method includes: obtaining cyber security data including at least vulnerability data and intrusion detection system (IDS) rules; outputting a standard security weakness ranking based on the cyber security data; determining that one or more vulnerabilities exist in one or more system components of the distributed system based on the standard security weakness ranking; customizing metrics for calculating a likelihood of exploitation of each vulnerability and an exposure factor associated with exploitation of each vulnerability based on a user input including at least one variable for use in the calculation, the at least one variable influencing the likelihood of exploitation or the exposure factor and capturing specific applicative domain of each vulnerability, priorities of the distributed system and/or types of potential attackers; calculating the customized metrics; outputting a customized ranking of the one or more vulnerabilities based on the calculated customized metrics; and performing a prioritized remediation of a target vulnerability selected by the user from the one or more vulnerabilities based on the customized ranking and specific needs and resources of the distributed system.
In some example embodiments, the at least one variable belongs to a first set Xl↑ of variables that contribute to increasing the likelihood of exploitation as the value of the first set increases, a second set Xl↓ that contribute to decreasing the likelihood of exploitation as the value of the second set increases, a third set Xe↑ that contribute to increasing the exposure factor as the value of the third set increases, and a fourth set Xe↓ that contribute to decreasing the exposure factor as the value of the fourth set increases. In some example embodiments, the first set, the second set, the third set and the fourth set of variables are defined, respectively, as follows:
where X is a variable, V is a set of all known vulnerabilities and v is a known vulnerability, ρ(v) is the likelihood of exploitation of the vulnerability v and ef(v) is the exposure factor of the vulnerability v.
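The set definitions themselves are elided from this excerpt. A plausible reconstruction, consistent with the surrounding description (a variable belongs to Xl↑ if larger values of the variable never correspond to a smaller likelihood of exploitation, and analogously for the other three sets), is:

```latex
X_l^{\uparrow} = \{\, X \mid \forall v_1, v_2 \in V,\; X(v_1) < X(v_2) \Rightarrow \rho(v_1) \le \rho(v_2) \,\}
X_l^{\downarrow} = \{\, X \mid \forall v_1, v_2 \in V,\; X(v_1) < X(v_2) \Rightarrow \rho(v_1) \ge \rho(v_2) \,\}
X_e^{\uparrow} = \{\, X \mid \forall v_1, v_2 \in V,\; X(v_1) < X(v_2) \Rightarrow ef(v_1) \le ef(v_2) \,\}
X_e^{\downarrow} = \{\, X \mid \forall v_1, v_2 \in V,\; X(v_1) < X(v_2) \Rightarrow ef(v_1) \ge ef(v_2) \,\}
```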
In some example embodiments, the likelihood ρ(v) of exploitation of each vulnerability is defined as a function ρ: V→[0,1] as follows:
and the exposure factor ef(v) associated with exploitation of each vulnerability is defined as a function ef: V→[0,1] as follows:
where X is the variable, αx is a tunable parameter, X(v) is the value of X for v, and ƒx is a monotonically increasing function used to convert values of X to scalar values, i.e., x1<x2⇒ƒx(x1)≤ƒx(x2).
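The exact equations for ρ(v) and ef(v) are elided from this excerpt; the following is a minimal sketch of how such a weighted combination might be computed, assuming (as the surrounding text suggests) that each variable X is mapped through a monotonically increasing function ƒx, scaled by a tunable weight αx, and that the weights sum to 1 so the result stays within [0, 1]. All variable names, weights, and scaling functions here are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of the likelihood/exposure-factor computation described
# above: increasing-set variables contribute alpha_X * f_X(X(v)); decreasing-set
# variables contribute alpha_X * (1 - f_X(X(v))). Weights sum to 1, so the
# combined score stays in [0, 1].
import math

def f_exp(x, scale=1.0):
    """A monotonically increasing map from [0, inf) to [0, 1)."""
    return 1.0 - math.exp(-x / scale)

def combine(v, increasing, decreasing):
    """v: dict of variable values; increasing/decreasing: {name: (alpha, f)}."""
    score = 0.0
    for name, (alpha, f) in increasing.items():
        score += alpha * f(v[name])
    for name, (alpha, f) in decreasing.items():
        score += alpha * (1.0 - f(v[name]))
    return min(max(score, 0.0), 1.0)

# Illustrative variable sets for the likelihood rho(v): exploitability score and
# days since publication increase it; number of known IDS rules decreases it.
increasing = {"exploitability": (0.5, lambda x: x / 10.0),   # CVSS 0-10 scale
              "days_public":    (0.2, lambda x: f_exp(x, 365.0))}
decreasing = {"known_ids_rules": (0.3, lambda x: f_exp(x, 5.0))}

v = {"exploitability": 8.6, "days_public": 400, "known_ids_rules": 2}
rho = combine(v, increasing, decreasing)
```

The same `combine` helper would serve for ef(v) with the exposure-related variable sets.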
In some example embodiments, variables in the first set Xl↑ comprise at least an exploitability score of a vulnerability as captured by CVSS, time elapsed since publication of details about the vulnerability and a set of known vulnerability exploitations, wherein variables in the second set Xl↓ comprise at least a set of known IDS rules associated with a vulnerability and a set of vulnerability scanning plugins, wherein variables in the third set Xe↑ comprise at least an impact score of a vulnerability as captured by Common Vulnerability Scoring System (CVSS), and wherein variables in the fourth set Xe↓ comprise a set of deployed IDS rules associated with a vulnerability. In some example embodiments, the at least one variable comprises a plurality of variables and each of the first set Xl↑, the second set Xl↓, the third set Xe↑, or the fourth set Xe↓ includes at least one of the plurality of variables, and the method further comprises: providing a quality score of each customized rank; and determining the target vulnerability based at least in part on the quality score.
In some example embodiments, the quality score improves based on an increase in a number of the plurality of variables used in the calculation of the customized metrics. In some example embodiments, the method further includes: adding one or more new variables to at least one of the first set Xl↑, the second set Xl↓, the third set Xe↑ or the fourth set Xe↓ based on a user selection in accordance with the priorities of the distributed system. In some example embodiments, the method further includes: calculating severity scores for the one or more vulnerabilities based on the customized metrics, quality scores of respective customized ranks, and deviations of each customized rank from an ideal scenario in which each vulnerability has a unique severity score; and outputting the severity scores, the quality scores, the deviations and cumulative number of vulnerabilities in each rank on a graphical user interface.
In some example embodiments, the likelihood of exploitation and the exposure factor are combined into a severity score that allows ranking of the one or more vulnerabilities, the severity score is defined as s(v)=ρ(v)·ef(v), the quality score is defined as Q(r)=e−γ·δ(r), and the deviation from the ideal scenario is defined as δ(r)=√(Σi=1r(|CVE(i)|−1)2/r),
where v is a vulnerability, ρ(v) is a likelihood of exploitation of the vulnerability, ef(v) is an exposure factor of the exploitation of the vulnerability, γ is a tunable parameter, r is a rank, and CVE denotes Common Vulnerabilities and Exposures. In some example embodiments, the performing a prioritized remediation of a target vulnerability includes: prioritizing remediation of the one or more vulnerabilities based on the resources available for remediation and current needs of the distributed system; and determining the target vulnerability that poses a greatest risk to the distributed system. In some example embodiments, the types of potential attackers comprise attackers who are aware of only the CVSS scores, attackers who have access to a system component associated with the one or more vulnerabilities, and attackers who can perform reconnaissance on the distributed system and discover unpatched vulnerabilities.
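The severity, deviation, and quality formulas above can be sketched directly, where |CVE(i)| is read as the number of vulnerabilities sharing rank i; the tunable parameter γ and the rank sizes below are illustrative values:

```python
# Sketch of the formulas defined above: s(v) = rho(v) * ef(v),
# delta(r) = sqrt(sum_{i=1}^{r} (|CVE(i)| - 1)^2 / r), and
# Q(r) = exp(-gamma * delta(r)). In the ideal scenario each rank holds
# exactly one vulnerability, so delta(r) = 0 and Q(r) = 1.
import math

def severity(rho, ef):
    return rho * ef

def delta(rank_sizes):
    """rank_sizes[i-1] = |CVE(i)|, the number of vulnerabilities at rank i."""
    r = len(rank_sizes)
    return math.sqrt(sum((n - 1) ** 2 for n in rank_sizes) / r)

def quality(rank_sizes, gamma=0.1):
    return math.exp(-gamma * delta(rank_sizes))

# Ideal ranking: one vulnerability per rank -> deviation 0, quality 1.
assert delta([1, 1, 1]) == 0.0
assert quality([1, 1, 1]) == 1.0
# Ties degrade quality: here 4 vulnerabilities share rank 1.
assert quality([4, 1, 1]) < 1.0
```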
Another embodiment provides a customized vulnerability ranking and scoring system including a customized security risk remediator and a user interface coupled to the customized security risk remediator and structured to receive the user input and output security weakness rankings including the customized rankings periodically or on demand. The customized security risk remediator includes a data ingestion device communicatively coupled to information sources and obtains security data from the information sources, the information sources including at least vulnerability database, Intrusion Detection System (IDS) rules repositories, and vulnerability scanners; a ranking device structured to receive the security data and structured to output security weakness rankings periodically or on demand; a metrics calculator structured to calculate metrics including a likelihood of exploitation of each vulnerability and an exposure factor associated with exploitation of each vulnerability; a metrics customizer structured to customize the metrics based on a user input including at least one variable for use in the calculation, the at least one variable influencing the likelihood of exploitation or the exposure factor and capturing specific applicative domain of each vulnerability, priorities of the distributed system and/or types of potential attackers; and a target security risk remediation device structured to perform a prioritized remediation of a target vulnerability selected by a user from the one or more vulnerabilities based on the customized ranking and specific needs and resources of the distributed system.
In some example embodiments, the at least one variable belongs to a first set Xl↑ of variables that contribute to increasing the likelihood of exploitation as the value of the first set increases, a second set Xl↓ that contribute to decreasing the likelihood of exploitation as the value of the second set increases, a third set Xe↑ that contribute to increasing the exposure factor as the value of the third set increases, and a fourth set Xe↓ that contribute to decreasing the exposure factor as the value of the fourth set increases. In some example embodiments, the first set, the second set, the third set and the fourth set of variables are defined, respectively, as follows:
where X is a variable, V is a set of all known vulnerabilities and v is a known vulnerability, ρ(v) is the likelihood of exploitation of the vulnerability v and ef(v) is the exposure factor of the vulnerability v.
In some example embodiments, the likelihood ρ(v) of exploitation of each vulnerability is defined as a function ρ: V→[0,1] as follows:
and the exposure factor ef(v) associated with exploitation of each vulnerability is defined as a function ef: V→[0,1] as follows:
where X is the variable, αx is a tunable parameter, X(v) is the value of X for v, and ƒx is a monotonically increasing function used to convert values of X to scalar values, i.e., x1<x2⇒ƒx(x1)≤ƒx(x2).
In some example embodiments, variables in the first set Xl↑ comprise at least an exploitability score of a vulnerability as captured by CVSS, time elapsed since publication of details about the vulnerability and a set of known vulnerability exploitations, wherein variables in the second set Xl↓ comprise at least a set of known IDS rules associated with a vulnerability and a set of vulnerability scanning plugins, wherein variables in the third set Xe↑ comprise at least an impact score of a vulnerability as captured by Common Vulnerability Scoring System (CVSS), and wherein variables in the fourth set Xe↓ comprise a set of deployed IDS rules associated with a vulnerability.
In some example embodiments, the system further includes plugins structured to interface with an individual vulnerability scanner and Application Programming Interfaces structured to interface with third party applications. In some example embodiments, the data ingestion device is further structured to generate and/or ingest vulnerability scanning reports, and the metrics further comprise a common weakness score defined as S(CWEi)=Σv∈C(CWEi)|I(v)|·ρ(v)·ef(v), where C(CWEi) is the set of CVEs mapped to CWEi, ρ(v) and ef(v) are the likelihood of exploitation and the exposure factor of a vulnerability v, respectively, and I(v) is the set of instances of the vulnerability v within the system.
A full understanding of the invention can be gained from the following description of the preferred embodiments when read in conjunction with the accompanying drawings in which:
The example embodiments described herein in accordance with the disclosed concept solve the technical problems of the existing cyber security approaches that provide only generalized security assessments based on generalized scores and rankings, which not only fail to bridge the gap between the two levels of abstraction (“vulnerability” and “weakness”), but also are confined to predefined notions of risk and fixed equations and variables to compute numerical scores such that they do not allow users the flexibility to fine-tune the fixed equations or consider new variables. Further, the example embodiments resolve the technical problems of the existing vulnerability scoring systems, such as the Common Vulnerability Scoring System (CVSS), that often result in a scoring granularity issue where multiple vulnerabilities are assigned the same severity score and thus share the same rank. This failure to provide distinct ranks for each vulnerability hinders the ability to accurately differentiate the severity between distinct vulnerabilities, and thus complicates the prioritization and mitigation processes, thereby negatively impacting targeted response strategies and ultimately leaving the systems at risk due to the potential oversight of critical vulnerabilities that can be exploited by malicious actors.
The example embodiments of the disclosed concept solve these technical problems by providing a cyber security framework for measuring and scoring vulnerabilities, which is uniquely designed to adapt to various application domains and provide a dynamic approach where users can create and modify vulnerability evaluations based on specific scenarios. This customization enhances both the relevance and accuracy of assessments. The core innovation lies in the framework's ability to allow users to customize scoring equations, enabling them to reflect unique operational environments and specific security needs comprehensively. By incorporating extensive details about each vulnerability, the inventive framework facilitates the consideration of multiple dimensions that influence the severity score. The inventive framework ensures that each vulnerability receives a distinct ranking, effectively eliminating the problem of multiple vulnerabilities sharing the same rank. The capability to customize and refine the scoring process based on detailed vulnerability attributes allows for precise vulnerability prioritization in accordance with the needs of a specific distributed system being protected.
This tailored approach according to the disclosed concept not only improves the accuracy of vulnerability assessments but also enhances the effectiveness of prioritization efforts. Security engineers can now address the most critical vulnerabilities with precision, supported by a ranking system provided by the inventive framework that uniquely classifies each vulnerability based on its specific characteristics and the environment's particular security requirements. This invention revolutionizes vulnerability analysis, providing a flexible, customizable, and detailed tool that significantly improves the prioritization and mitigation of potential security threats in any given environment.
As used herein, the singular form of “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
As used herein, the statement that two or more parts or components are “coupled” shall mean that the parts are joined or operate together either directly or indirectly, i.e., through one or more intermediate parts or components, so long as a link occurs.
As used herein, “directly coupled” means that two elements are directly in contact with each other.
Directional phrases used herein, such as, for example and without limitation, top, bottom, left, right, upper, lower, front, back, and derivatives thereof, relate to the orientation of the elements shown in the drawings and are not limiting upon the claims unless expressly recited therein.
The disclosed concept will now be described, for purposes of explanation, in connection with numerous specific details in order to provide a thorough understanding of the subject innovation. It will be evident, however, that the disclosed concept can be practiced without these specific details without departing from the spirit and scope of this innovation.
The embodiments described herein provide an improvement to a security weakness scoring system (hereinafter referred to as the “Mason Vulnerability Scoring Framework” or “MVSF”), which expanded upon the vulnerability metrics for graph-based configuration security described in U.S. Pat. No. 11,930,046 issued to Albanese et al., by aggregating vulnerability-level metrics to compute weakness-level scores and enable ranking of common weaknesses. The MVSF publishes monthly CWE ranking categories based on a standard parameter configuration, but can also generate monthly, weekly, or even daily rankings on demand, based on a user's needs. While it significantly improved the then-existing security risk scoring and ranking systems, the MVSF assigns a score to a CWE entry based on a limited number of known CVEs mapped to that CWE, rather than on the specific vulnerabilities that exist in the distributed system being evaluated. The example embodiments of the present disclosure describe an improved cyber security risk scoring framework that allows for determining and remediating a target vulnerability of a distributed system based on prioritization of discovered security risks in accordance with the specific needs of the distributed system being protected.
The data ingestion device 210 is communicatively coupled to cyber security information sources and structured to obtain security data therefrom. The data ingestion device 210 includes an IDS rules ingestion device 212, an NVD (National Vulnerability Database) data ingestion device 214 and a vulnerability scanning data ingestion device 216. The IDS rules ingestion device 212 is communicatively coupled to public or local IDS rule repositories and structured to obtain the IDS rules. The NVD data ingestion device 214 is communicatively coupled to the NVD and structured to receive NVD data. The vulnerability scanning data ingestion device 216 is communicatively coupled to various open-source and commercial vulnerability scanners (e.g., without limitation, Nessus®, OpenVAS) 320 via APIs (Application Programming Interfaces) 217 and structured to obtain vulnerability scanning data 308. The vulnerability scanning data ingestion device 216 is further structured to generate vulnerability scanning reports based on the vulnerability scanning data. The vulnerability scanning data ingestion device 216 includes a common core and a set of configurable plugins 218 to interface with the various vulnerability scanners 320. The configurable plugins 218 include individual plugins each structured to interface with a respective vulnerability scanner. Each individual plugin can be adapted to the functionalities of the respective vulnerability scanner and any changes thereof. As such, the configurable plugins 218 allow the data ingestion device 210 to adapt to the current functionalities and limitations of the vulnerability scanners, and thus provide a more complete data ingestion as compared to the data collection mechanisms of conventional cyber security systems that use a common plugin. The APIs 217 are structured to allow third-party applications to integrate within the cyber security framework 200.
The APIs of vulnerability scanners 320, upon which the data ingestion device 210 relies, may change over time with little or no advance notice. The APIs 217 mitigate any negative impacts from such changes by allowing the data ingestion device 210 to interface with third-party applications, thus reducing reliance on a single vendor and preventing a single point of failure.
While
Further, the cyber security framework 200 distinguishes known and deployed IDS rules. A known IDS rule as used herein refers to any IDS rule that is available to the community through publicly accessible repositories. It is assumed that the existence of known IDS rules associated with a given vulnerability may decrease the likelihood of exploiting that vulnerability, as an attacker may prefer to target vulnerabilities that can be exploited without triggering IDS alerts. A deployed IDS rule as used herein refers to any IDS rule that is being actively used by a deployed IDS. Deployed IDS rules may include a subset of known rules or ad hoc rules developed by an administrator of the distributed system 100. An attacker may not be aware of what IDS rules are actually in use, but early detection of intrusions may help mitigate the consequences of an exploit, therefore the cyber security framework 200 accounts for the deployed rules in calculating the vulnerability metrics.
The NVD is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP), and is maintained by the National Institute of Standards and Technology (NIST). This data enables automation of vulnerability management, security measurement, and compliance. The NVD is built upon and fully synchronized with the Common Vulnerabilities and Exposures (CVE) list of publicly known cybersecurity vulnerabilities. The repository for the CVE is maintained by MITRE and includes various details about each vulnerability, e.g., without limitation, identification number, description, and public references. The NVD augments the CVE list with severity scores and impact ratings based on the Common Vulnerability Scoring System (CVSS). The CVSS is maintained by FIRST (Forum of Incident Response and Security Teams) and provides a means to “capture the principal characteristics of a vulnerability and produce a numerical score reflecting its severity.” This score is calculated based on three different metrics: (i) Base Score Metrics; (ii) Temporal Score Metrics; and (iii) Environmental Score Metrics. The cyber security framework 200 utilizes at least the Base Score Metrics for determining security risk scores (system and/or a target security risk). The CVSS Base Score is calculated as follows:
where I and E are the Impact and Exploitability scores, as defined by Equations 2 and 3, respectively, and ƒ(I) is defined by Equation 4.
The IC, II, and IA are the Confidentiality, Integrity, and Availability impact scores, respectively, as defined in Table 1 below. The AC, Au, and AV are the exploitability metrics Access Complexity, Authentication, and Access Vector scores as defined in Table 2 below.
Importantly, since all the submetrics involved in their computation can assume one of only a few discrete values, the Impact and Exploitability scores will also have one of a limited number of discrete values. Thus, ranking thousands of vulnerabilities based on their CVSS scores is impractical. Further, as discussed with reference to the metrics customizer 240, while the cyber security framework 200 utilizes the CVSS Exploitability and Impact scores for determining the vulnerability scores, the cyber security framework 200 also allows the user to use any other cyber environmental variables and/or metrics (if defined by the user) as additional variables deemed appropriate for determining the vulnerability scores.
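For reference, the elided Equations 1-4 correspond to the standard CVSS v2 base-score computation. A sketch using the submetric values published in the CVSS v2 specification illustrates the discreteness issue noted above, since every input can take only one of three values:

```python
# CVSS v2 base-score computation matching the description above.
# Submetric values follow the published CVSS v2 specification; the small
# discrete value sets for C/I/A and AV/AC/Au mean the Impact and
# Exploitability scores can take only a limited number of distinct values.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}     # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}      # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}     # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}    # Impact: None / Partial / Complete

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# A fully remote, unauthenticated, complete-compromise vulnerability scores 10.0.
assert cvss2_base("N", "L", "N", "C", "C", "C") == 10.0
```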
As previously noted, the CWE is a catalogue of weaknesses associated with software, hardware, etc. A software weakness is not necessarily a vulnerability, but it may become one. MITRE provides the Common Weakness Scoring System (CWSS), a mechanism for prioritizing software weaknesses that are present within software applications. The CWSS is organized into three metric groups: Base Finding, Attack Surface, and Environmental. Each group includes a plurality of metrics, also known as factors, that are used to compute a CWSS score for a weakness. Each CVE can be mapped to one or more CWE entries and each CWE entry may encompass numerous (sometimes hundreds of) different vulnerabilities. The purpose of classifying CVEs into CWEs is to provide an easy way to identify specific types of weaknesses and also understand the nature of vulnerabilities. A set of CVEs mapped to each CWE can be defined as follows:
A number of times each CWE is mapped to a CVE entry is defined as:
Then, the frequency (Fr(CWEi)) and severity (Sv(CWEi)) of a CWE, where the severity is based on the average CVSS score, are computed as follows:
Then, the overall score of a CWE can be defined as the product of its frequency and severity, normalized between 0 and 100 as follows:
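The frequency, severity, and normalization equations are elided from this excerpt. The description closely mirrors the MITRE CWE Top 25 scoring methodology, which min-max normalizes the mapping frequency and the average CVSS severity over all CWEs and scales their product to [0, 100]; the sketch below assumes that form, and the CWE identifiers and scores are purely illustrative:

```python
# Hypothetical sketch of Fr(CWE_i), Sv(CWE_i), and the overall score:
# frequency and average-CVSS severity are each min-max normalized over all
# CWEs, and the overall score is their product scaled to [0, 100].
def cwe_scores(cwe_to_cvss):
    """cwe_to_cvss: {cwe_id: [CVSS scores of CVEs mapped to that CWE]}."""
    counts = {c: len(v) for c, v in cwe_to_cvss.items()}
    avgs = {c: sum(v) / len(v) for c, v in cwe_to_cvss.items()}
    cmin, cmax = min(counts.values()), max(counts.values())
    amin, amax = min(avgs.values()), max(avgs.values())
    scores = {}
    for c in cwe_to_cvss:
        fr = (counts[c] - cmin) / (cmax - cmin) if cmax > cmin else 1.0
        sv = (avgs[c] - amin) / (amax - amin) if amax > amin else 1.0
        scores[c] = fr * sv * 100.0
    return scores

scores = cwe_scores({
    "CWE-79":  [6.1, 5.4, 6.1, 4.8],   # frequent, moderate severity
    "CWE-787": [9.8, 8.8, 9.8],        # slightly less frequent, severe
    "CWE-200": [5.3],                  # rare, mild
})
assert 0.0 <= min(scores.values()) and max(scores.values()) <= 100.0
```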
Referring back to
The metrics calculator 230 is structured to receive a signal from the ranking device 220 and calculate a security weakness ranking. For example, if the signal is an automated signal triggered at predefined intervals (e.g., without limitation, monthly, weekly, daily, etc.) from the standard ranking device 222, the metrics calculator 230 calculates the security weakness ranking using the standard metrics as set forth in Equations 5-9. If the signal is a user request received by the custom ranking device 224 to customize the metrics based on one or more variables selected by the user, the metrics calculator 230 calculates the customized ranking using the selected variables as defined in EQs. 10-13 enumerated in
The metrics customizer 240 is structured to receive a user input for customizing security weakness rankings. The metrics customizer 240 includes metrics 242 and a variable selector 244. The metrics 242 may include the standard metrics and the customized metrics. The standard metrics may include a new common weakness scoring metric that computes values specific to the distributed system 100 being monitored based on the ingested data, as opposed to the generic scores of common weaknesses computed using average likelihood and average exposure factor by either MITRE or the MVSF, which only consider data from the preceding two years. The generic common weakness score metric is defined as follows:
where CWEi is a Common Weakness Enumeration weakness, C(CWEi) is a set of Common Vulnerabilities and Exposures (CVEs) mapped to CWEi, ρ(v) is a likelihood of exploitation of a vulnerability v and ef(v) is the exposure factor of the vulnerability v. Such generic scores result in several limitations. For example, if one or more vulnerabilities having average or higher-than-average likelihood and/or exposure factor are mapped to CWEi and the mapped one or more vulnerabilities are not present in the distributed system 100 being evaluated, the score assigned to the CWEi based on the generic scores would result in an overestimate of the actual severity thereof. In another example, if one or more vulnerabilities having average or higher-than-average likelihood and/or exposure factor are mapped to CWEi and the mapped one or more vulnerabilities, which are older than the preceding two years, are present in the distributed system 100 being evaluated, the score assigned to the CWEi based on the generic scores would result in an underestimate of the actual severity thereof. In yet another example, if one or more vulnerabilities mapped to CWEi are present on a plurality of hosts within the distributed system 100 being evaluated, the score assigned to the CWEi based on the generic scores would result in an underestimate of the actual severity thereof since the generic scores ignore the fact that an attacker has a plurality of opportunities to exploit the same vulnerabilities. In response to these limitations, the cyber security framework 200 provides a new metric for scoring common weaknesses as defined as follows:
where I(v) is a set of instances of the vulnerability v within the system. Hence, the new metric for scoring common weaknesses is not based on the average likelihood or average exposure factor, thereby allowing the user to consider all vulnerabilities when determining the common weakness score of the distributed system 100 regardless of the age of the data.
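As an illustrative, non-limiting sketch in Python (all names are hypothetical), assuming the generic score multiplies the average likelihood by the average exposure factor over C(CWEi), while the new metric sums ρ(v)·ef(v) over every instance in I(v):

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    likelihood: float   # rho(v), in [0, 1]
    exposure: float     # ef(v), in [0, 1]
    instances: int      # |I(v)|, occurrences within the monitored system

def generic_cwe_score(vulns):
    """Average-based score: avg likelihood x avg exposure factor over C(CWEi)."""
    if not vulns:
        return 0.0
    avg_rho = sum(v.likelihood for v in vulns) / len(vulns)
    avg_ef = sum(v.exposure for v in vulns) / len(vulns)
    return avg_rho * avg_ef

def system_cwe_score(vulns):
    """Instance-aware score: sums rho(v)*ef(v) over every instance I(v)."""
    return sum(v.instances * v.likelihood * v.exposure for v in vulns)
```

A CWE whose high-likelihood vulnerabilities are absent from the system illustrates the overestimation problem: the generic score stays high while the instance-aware score reflects only what is actually deployed.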
The customized metrics include two important metrics that are specifically defined by the cyber security system 200. The two metrics are an exploitation likelihood (ρ(v)) of a vulnerability and an exposure factor (ef(v)) of exploitation of the vulnerability. The likelihood (ρ(v)) of vulnerability exploitation is a probability that an attacker will attempt to exploit that vulnerability, if given the opportunity. An attacker has the opportunity to exploit a vulnerability if certain preconditions are met, e.g., without limitation, the attacker having access to a vulnerable host. Specific preconditions may vary depending on the specific characteristics of each vulnerability, as certain configuration settings may prevent access to vulnerable portions of a target software. An exposure factor (ef(v)) refers to a relative loss of utility of an asset due to a vulnerability exploitation. A single loss expectancy (SLE) associated with a successful attack is then computed as the product between its exposure factor (ef(v)) and the asset value (AV), i.e., SLE=EF×AV.
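The SLE relation above can be illustrated with a minimal Python example (values are illustrative only):

```python
def single_loss_expectancy(exposure_factor, asset_value):
    # SLE = EF x AV: expected loss from a single successful exploitation
    return exposure_factor * asset_value

# e.g., an exploit that destroys 40% of the utility of a $50,000 asset
loss = single_loss_expectancy(0.4, 50_000)  # 20000.0
```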
Susceptibility of a vulnerability to becoming an exploitation target by malicious actors depends on a number of variables, including features of the vulnerability itself and characteristics of potential attackers. Unlike conventional security systems that confine users to predefined metrics with predefined notions of risk in fixed attack surfaces, the cyber security framework 200 allows numerous variables to be considered and corresponding weights to be used in situations involving different types of attackers, e.g., without limitation, ranging from attackers who are only aware of a vulnerability's CVSS scores to adversaries that can perform reconnaissance on target systems and discover unpatched vulnerabilities. The cyber security framework 200 allows the user to assess security risks using any variables that may affect the metrics. V denotes a set of all known vulnerabilities, Xl denotes a set of variables that influence the likelihood (ρ(v)) and Xe denotes a set of variables that influence the exposure factor (ef(v)). Xl↑ and Xl↓ denote the sets of variables that respectively contribute to increasing and decreasing the likelihood (ρ(v)) as their values increase. Xe↑ and Xe↓ denote the sets of variables that respectively contribute to increasing and decreasing the exposure factor (ef(v)) as their values increase. Xl↑, Xl↓, Xe↑ and Xe↓ are defined by Equations 10-13, respectively, as follows:
Variables in Xl↑ include, e.g., without limitation, a vulnerability's exploitability score as captured by CVSS, the time lapsed since publication of details about a vulnerability, and a set of known exploits. The CVSS Exploitability score captures how easy it is to exploit a vulnerability, based on different features captured by various sub-metrics, most notably Access Vector (AV) and Access Complexity (AC). The Access Vector metric reflects the context in which a vulnerability can be exploited. Its value is higher for vulnerabilities that can be exploited remotely, which are therefore more likely to be exploited because the number of potential attackers is larger than the number of potential attackers that could exploit a vulnerability requiring physical access to the vulnerable host. The Access Complexity metric reflects the amount of effort and resources required for a successful attack. Its value is higher for exploits that require little or no effort, which are therefore more likely to be attempted. The time lapsed since the publication of the details of a vulnerability also plays a role in determining the likelihood (ρ(v)). For example, the longer a vulnerability has been known, the more exploits may have been developed by the hacker community. While it is true that the likelihood that patches have been developed also increases with time, it is well-known that patches are not applied promptly and consistently across systems, thus giving attackers a window of opportunity to target known but unpatched vulnerabilities. The set of known exploits and Proofs of Concept (PoCs) associated with a vulnerability can provide an incentive for attackers to exploit specific vulnerabilities.
Variables in Xl↓ include, e.g., without limitation, a set of known IDS rules associated with a vulnerability and a set of vulnerability scanning plugins. Known IDS rules may influence the attacker's choice of vulnerabilities to exploit. With systems typically exposing multiple vulnerabilities, attackers may choose to avoid exploits that are more easily detectable. Vulnerability scanning tools can provide an inventory of existing system vulnerabilities. The availability of plugins to confirm the existence of a given vulnerability may make such a vulnerability less likely to be exploited because attackers may expect that defenders would use such detection capabilities to detect and mitigate that vulnerability.
Variables in Xe↑ include, e.g., without limitation, a vulnerability's impact score as captured by CVSS. As previously mentioned, the CVSS Impact score captures the impact of a vulnerability exploit on confidentiality, integrity, and availability.
Variables in Xe↓ include, e.g., without limitations, a set of deployed IDS rules associated with a vulnerability. IDS rules that are deployed on a distributed system 100 and actively monitoring for intrusions can mitigate the consequences of an exploit through timely detection.
It will be understood that the variables presented herein are for illustrative purposes only, and thus can include any other variables that may be identified and used in the calculation of both the likelihood (ρ(v)) and the exposure factor (ef(v)). For instance, it has been shown that the likelihood (ρ(v)) also depends on the position of a vulnerable system within an attack path. In fact, a vulnerability on a perimeter network may be more likely to be exploited than the same vulnerability on an internal network. Additionally, vulnerabilities that have similar characteristics as those an attacker has already exploited might be more easily exploited as compared to completely different vulnerabilities.
The cyber security framework 200 defines the likelihood (ρ(v)) as a function ρ: V→[0,1] as follows:
Each variable contributes to the overall likelihood as a multiplicative factor between 0 and 1 that is formulated to account for diminishing returns. Factors corresponding to variables in Xl↑ are of the form 1−e^(−αx·ƒx(v)), where ƒx(v) is the value of variable x for vulnerability v and αx>0 is a tunable weight, so that each factor increases toward 1, with diminishing returns, as ƒx(v) increases. Factors corresponding to variables in Xl↓ are of the form e^(−αx·ƒx(v)), which decreases toward 0 as ƒx(v) increases.
The cyber security framework 200 defines the exposure factor as a function ef: V→[0,1] as follows:
Similar to the likelihood (ρ(v)), each variable contributes to the exposure factor as a multiplicative factor between 0 and 1 that accounts for diminishing returns. Factors corresponding to variables in Xe↑ are of the form 1−e^(−αx·ƒx(v)), and factors corresponding to variables in Xe↓ are of the form e^(−αx·ƒx(v)).
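A minimal, non-limiting Python sketch, assuming each increasing variable contributes a factor of the form 1−e^(−α·ƒx(v)) and each decreasing variable a factor e^(−α·ƒx(v)) (the weights and variable names below are arbitrary illustrations):

```python
import math

def multiplicative_score(v, x_up, x_down, alpha):
    """Combine per-variable factors into a score in [0, 1].

    x_up / x_down map variable names to feature functions fx(v); alpha maps
    variable names to positive weights. Increasing variables contribute
    1 - exp(-alpha * fx(v)); decreasing variables contribute exp(-alpha * fx(v)).
    """
    score = 1.0
    for name, f in x_up.items():
        score *= 1.0 - math.exp(-alpha[name] * f(v))
    for name, f in x_down.items():
        score *= math.exp(-alpha[name] * f(v))
    return score

# Scenario-3-style instantiation: CVSS Exploitability drives rho(v) and
# CVSS Impact drives ef(v); the weights 0.5 and 0.3 are arbitrary examples.
vuln = {"exploitability": 8.6, "impact": 10.0}
rho = multiplicative_score(vuln, {"expl": lambda v: v["exploitability"]}, {}, {"expl": 0.5})
ef = multiplicative_score(vuln, {"imp": lambda v: v["impact"]}, {}, {"imp": 0.3})
```

The same combinator can serve both ρ(v) and ef(v); only the variable sets (Xl↑/Xl↓ versus Xe↑/Xe↓) differ between the two metrics.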
Referring back to
The user interface 202 includes a command-line interface 203 and a graphical user interface 204. The command-line interface 203 may include a keyboard, a keypad, and/or other non-graphical user interface via which the user may provide a user input or command. The graphical user interface 204 may include, e.g., without limitation, a display or touch screen via which the user may view various security data, including the customized ranking 225, live ranking 226, and historical ranking 228, perform data searches, e.g., without limitation, the CVE search 205, and interact with the customized security risk remediator 201.
In operation, the data ingestion device 210 receives security data from information sources. Based on the security data, the standard ranking device 222 provides standard security weakness rankings using standard metrics at predefined periodic intervals. In some example embodiments, the data ingestion device 210 may trigger the ranking device 220 to provide the rankings upon receipt of the security data. The user reviews the standard rankings and determines that one or more vulnerabilities exist in one or more system components of the distributed system 100. In some example embodiments, the cyber security framework 200 may determine that one or more vulnerabilities exist in one or more system components of the distributed system 100 based on the security data and the standard rankings and alert the user about the discovered one or more vulnerabilities. The user reviews the one or more vulnerabilities and customizes metrics for calculating a likelihood of exploitation of each vulnerability and an exposure factor associated with an exploited vulnerability by selecting variables based on a specific applicative domain of each vulnerability, the resources and priorities of the distributed system 100 being protected, and the types of potential attackers. The cyber security framework 200 receives a user request for a customized ranking of the one or more vulnerabilities based on the customized metrics. The custom ranking device 224 receives the user request and triggers the metrics calculator 230 to calculate the customized metrics based on the one or more variables. The metrics calculator 230 uses the one or more variables to calculate the customized metrics, severity scores for the one or more vulnerabilities, customized ranks for the one or more vulnerabilities, and respective quality scores of the customized ranks. The custom ranking device 224 provides the customized ranking 225 via the graphical user interface 204.
In some example embodiments, the custom ranking device 224 may provide the user the customized ranking 225 as well as at least one of respective vulnerability scores, a number of vulnerabilities sharing each vulnerability score, a cumulative number of vulnerabilities in each ranking, a deviation of each rank from the ideal scenario, or a quality score for each ranking. The user then reviews the customized ranking 225 and determines a target vulnerability that poses a greatest risk to the distributed system 100 being protected based on the priorities and resources of the distributed system 100. The user then provides a user command to the cyber security system 200 to perform remediation of the target vulnerability. The target security risk remediation device 250 then performs the remediation of the target vulnerability.
By defining two critical security metrics, the exploitation likelihood (ρ(v)) and the exposure factor (ef(v)) of a vulnerability of a specific system component, and providing general principles for selecting variables by a user, the cyber security system 200 allows users to instantiate customized metrics that best model a specific attack scenario being considered. Further, the cyber security system 200 provides severity scores of the discovered vulnerabilities based on the combination of the two critical metrics, thereby allowing each vulnerability to be ranked. Such an individual rank of each vulnerability then allows the user to better discriminate between vulnerabilities with very similar severity levels and isolate those vulnerabilities that pose the greatest risk to their distributed system 100. By providing a plurality of variables that influence the calculation of the two important metrics, the cyber security framework 200 allows the user to consider variables for the metrics tailored to various attack scenarios, taking into account the specific applicative domain of the distributed system 100, the priorities and resources thereof, as well as the potential attackers' knowledge, skills and resources.
These advantages and benefits have been demonstrated by experiments. Experimental results using the cyber security framework 200 in seven different attack and defense scenarios are now described. The cyber security framework 200 has been validated by aggregating vulnerability-level metrics into a CWE score for each CWE category and the rankings of CWEs calculated by the cyber security framework 200 were then compared against MITRE's CWE Top 25 Most Dangerous Software Weaknesses. The results indicated that when the cyber security framework 200 is tuned to reproduce MITRE's experimental setting as closely as possible, the correlation between the resulting CWE rankings and MITRE's ranking is between 80% and 90%. In each scenario, assumptions about the information available to the attacker were made. It is assumed that any information that is available to the attacker is also available to the defender, but not all information that is available to the defender is also available to the attacker. For instance, both the attacker and the defender are aware of known IDS rules associated with a vulnerability, but only the defender knows which rules are actually deployed within their systems. For the experiments, a severity score of a vulnerability was defined as
Each scenario utilizes a different choice of variables for Xl↑, Xl↓, Xe↑ and Xe↓. Table 3 shows the variables considered in each scenario. As shown in Table 3 below, only one variable was considered for scenarios 1 and 2 and a plurality of variables were considered for scenarios 3-7. The results showed that considering different combinations of variables leads to different rankings of vulnerabilities, and that increasing the number of variables considered leads to more fine-grained rankings and an improved ability to discriminate between different vulnerabilities while allowing for prioritizing mitigation and remediation. It is to be understood that the variables illustrated in Table 3 are for illustrative purposes only, and thus different variables may be added as appropriate without departing from the scope of the disclosed concept. That is, users can customize the rankings for their specific environment by adding variables that capture environment-specific information, e.g., without limitation, the sets of IDS rules actually deployed across the system.
In scenario 1, the CVSS Exploitability score was considered as the only variable in the set Xl↑ as defined by Equation 10. No variables were considered for the other three sets, Xl↓, Xe↑ and Xe↓. As such, Equations 16 and 17 can be rewritten as follows:
Table 4 in
The quality score goes asymptotically to 0 as the standard deviation increases. Scenario 1 shows a high number of CVEs having the same severity score. This results in a high standard deviation and consequently in a virtually 0 quality score. Intuitively, this ranking does not provide significant help for security administrators to make informed decisions when it comes to prioritizing vulnerability remediation. These results can be explained by examining EQ. 3, which defines the exploitability as a function of three variables, each of which can have only 3 possible values, resulting in a maximum of 27 possible values.
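The 27-value ceiling can be checked with a short sketch using the sub-metric values from the CVSS v2 specification, where Exploitability = 20 × AccessVector × AccessComplexity × Authentication and each sub-metric takes one of three values (the formula and constants below come from CVSS v2, not from the framework itself):

```python
from itertools import product

# CVSS v2 sub-metric values (per the CVSS v2 specification)
ACCESS_VECTOR = {"Local": 0.395, "Adjacent": 0.646, "Network": 1.0}
ACCESS_COMPLEXITY = {"High": 0.35, "Medium": 0.61, "Low": 0.71}
AUTHENTICATION = {"Multiple": 0.45, "Single": 0.56, "None": 0.704}

# Enumerate every combination; rounding follows the CVSS convention.
scores = {round(20 * av * ac * au, 1)
          for av, ac, au in product(ACCESS_VECTOR.values(),
                                    ACCESS_COMPLEXITY.values(),
                                    AUTHENTICATION.values())}
# 3 x 3 x 3 combinations yield at most 27 distinct exploitability scores,
# which explains the many ties (and near-zero quality score) in scenario 1.
```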
In scenario 2, the CVSS Impact score was considered as the only variable in the set Xe↑ as defined by EQ. 12. No variables were considered for the other three sets Xl↑, Xl↓ and Xe↓. As such, Equations 16 and 17 can be rewritten as follows:
As shown in Table 5 of
In scenario 3, the CVSS Exploitability score was considered as the only variable in the set Xl↑ as defined by EQ. 10 and the CVSS Impact score as the only variable in the set Xe↑ as defined by EQ. 12. No variables were considered for the other two sets Xl↓, and Xe↓. As such, Equations 16 and 17 can be rewritten as follows:
As shown in Table 6 of
In scenario 4, the CVSS Exploitability score as the only variable in the set Xl↑, the CVSS Impact score as the only variable in the set Xe↑, and a set of known IDS rules as the only variable in the set Xl↓ were considered, with ƒx defined as the cardinality of the set of rules. As such, Equations 16 and 17 can be rewritten as follows:
As shown in Table 7 of
In scenario 5, the CVSS Exploitability score and a set of known exploits as the variables in the set Xl↑, the CVSS Impact score as the only variable in the set Xe↑, and a set of known IDS rules as the only variable in the set Xl↓ were considered, with ƒx defined as the cardinality of the set of rules. As such, Equations 16 and 17 can be rewritten as:
As shown in Table 8 of
In scenario 6, the CVSS Exploitability score and a set of known exploits as the variables in the set Xl↑, the CVSS Impact score as the only variable in the set Xe↑, and a set of known IDS rules and a set of vulnerability scanning plugins as the variables in the set Xl↓ were considered, with ƒx defined as the cardinality of each set. As such, Equations 16 and 17 can be rewritten as:
As shown in Table 9 of
In scenario 7, the CVSS Exploitability score and the time lapsed since the publication of the details of a vulnerability as the variables in the set Xl↑, the CVSS Impact score as the only variable in the set Xe↑, and a set of known IDS rules as the only variable in the set Xl↓ were considered, with ƒx defined as the cardinality of the set of rules. As such, Equations 16 and 17 can be rewritten as follows:
As shown in Table 10 of
At step 610, a data ingestion device of the cyber security framework obtains cyber security data including at least vulnerability data, intrusion detection system (IDS) rules and vulnerability scanning reports.
At step 620, a ranking device of the cyber security framework outputs standard security weakness rankings based on the cyber security data received from the data ingestion device.
At step 630, it is determined that one or more vulnerabilities exist in one or more system components of the distributed system based on the standard security weakness rankings. The user or the cyber security system may determine that one or more vulnerabilities exist. The user may review the standard weakness rankings and perform analytics via the cyber security system to determine that one or more vulnerabilities exist. Alternatively, the cyber security system may run the analytics on the distributed system and determine that one or more vulnerabilities exist. Based on the determination, the cyber security system may alert the user via a graphical user interface. The user then customizes metrics for calculating a likelihood of exploitation of each vulnerability and an exposure factor associated with the exploitation of each vulnerability by selecting variables based on a specific applicative domain of each vulnerability, the resources and priorities of the distributed system being protected, and the types of potential attackers. The metrics are combined to obtain severity scores of the one or more vulnerabilities and respective customized ranks.
At step 640, the cyber security framework receives a user request for a customized ranking based on customized metrics for calculating a likelihood of exploitation of each vulnerability and an exposure factor associated with exploitation of each vulnerability by selecting one or more variables that capture the specific applicative domain of the vulnerability, priorities of the distributed system and/or types of potential attackers.
At step 650, a metric calculator of the cyber security framework uses the one or more variables to calculate the customized metrics, severity scores for the one or more vulnerabilities, and customized ranks for the one or more vulnerabilities. The metric calculator may also calculate quality scores of the customized ranks.
At step 660, a custom ranking device of the cyber security framework outputs a customized ranking of the one or more vulnerabilities. The user may request the severity scores for the one or more vulnerabilities and respective quality scores of the customized ranks. The custom ranking device may also provide the user the number of vulnerabilities sharing each vulnerability score, the cumulative number of vulnerabilities in each rank, and the deviation of each ranking from the ideal scenario. The user then reviews the customized ranking and determines a target vulnerability that poses a greatest risk to the distributed system being protected based on the priorities and resources of the distributed system. The user then provides a user command to the cyber security system to perform remediation of the target vulnerability.
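As a minimal, non-limiting Python sketch of the per-rank statistics described above (the function and field names are hypothetical, and a higher severity score is assumed to mean a higher rank):

```python
from collections import Counter
from itertools import accumulate

def ranking_stats(severity_scores):
    """Per-rank statistics: vulnerabilities sharing each score and cumulative counts."""
    counts = Counter(severity_scores)                        # vulns sharing each score
    ranked = sorted(counts.items(), key=lambda kv: -kv[0])   # highest score = rank 1
    cumulative = list(accumulate(n for _, n in ranked))      # running total per rank
    return [{"rank": i + 1, "score": s, "count": n, "cumulative": c}
            for i, ((s, n), c) in enumerate(zip(ranked, cumulative))]
```

For example, `ranking_stats([9.8, 9.8, 7.5, 5.0])` places the two 9.8-score vulnerabilities together at rank 1, making ties (and thus ranking quality) directly visible.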
At step 670, the custom ranking device receives a user command to perform a prioritized remediation of a target vulnerability selected by the user from the one or more vulnerabilities based on the customized ranking and specific needs of the distributed system.
At step 680, the cyber security framework performs the prioritized remediation of the target vulnerability.
While specific embodiments of the invention have been described in detail, it will be appreciated by those skilled in the art that various modifications and alternatives to those details could be developed in light of the overall teachings of the disclosure. Accordingly, the particular arrangements disclosed are meant to be illustrative only and not limiting as to the scope of disclosed concept which is to be given the full breadth of the claims appended and any and all equivalents thereof.
This application claims priority to U.S. Provisional Patent Application No. 63/504,090, filed May 24, 2023, entitled “Scoring and Ranking Common Weaknesses Mapped to Vulnerabilities Found in Networked/Distributed Systems,” the disclosure of which is herein incorporated by reference in its entirety.
This invention was made with government support under grant number 1822094 awarded by the National Science Foundation. The government has certain rights in the invention.