Like reference numbers and designations in the various drawings indicate like elements.
Security-related incidents 104 generally lower the value of one or more assets 102. A single incident can lower the value of a single asset or of multiple assets at the same time. For example, a fire at a warehouse lowers the value of the physical plant, lowers the value of any inventory damaged by the fire, and can even lower the value of employees staffed at the damaged warehouse if the organization is unable to find useful work for them. A different kind of incident is a flaw discovered in a product produced by the organization; the product flaw can potentially lower shareholder value as well as the public reputation of the organization. Although many incidents happen without warning, some incidents can be anticipated in advance.
Measures 106 can be implemented to protect the value of the assets 102. Examples of measures 106 include virus protections, building access controls, emergency and crisis management plans, business continuity and impact analysis, and segregation of duties. Measures can be implemented for a variety of reasons. Contractual obligations between the organization and third parties might call for particular measures. Various security standards, such as the ISO 27001 security standard and the CoBIT security standard, specify measures that may have to be implemented. The organization's own policies can dictate other measures. The processes 122 can include the implementation of measures 106.
In addition, regulations 108 set forth various regulatory requirements 109 that impact the measures 106 taken by the organization. For example, the Sarbanes-Oxley Act of 2002 (SOX) of the United States sets forth legal requirements that potentially require one or more measures 106 to be undertaken by the organization in order to comply with the SOX rules and regulations. Similarly, the KonTraG laws of Germany set forth legal requirements that might require other measures in order to comply with the KonTraG regulations. The organization's internal controls 110 help to ensure that measures 106 are implemented to allow the organization to comply with the various regulations 108.
Projects 112 undertaken by the organization can affect the quality and effectiveness of measures 106, as well as the assets 102. Projects 112 can include business projects undertaken by the organization; these business projects may not be intended to affect the measures 106, but can often have either a positive or a negative impact on at least one, and typically more than one, measure 106. For example, a business project designed to expand operations to a new country might require additional measures to be put into place in order to comply with local laws. However, this same business project can also have a negative impact on other measures, e.g., if the organization leases a new building that does not have the same level of building access controls as the rest of the organization's facilities. In addition, projects can influence assets; for example, an asset might be shifted to a different location, or the total cost of owning an asset might increase because of the particular project.
Projects 112 can also include security projects that are specifically designed to have a positive impact on one or more measures 106. For example, a security project to install a fire sprinkler system adds an additional measure to the measures 106 that protect the organization's assets 102—in this case, the sprinkler system helps protect the physical plant from the threat of fire.
Threats 114 include any potential incidents that would harm one or more assets 102. As will be described later, each threat can be associated with a single loss expectancy (SLE) factor; the SLE factor is based on both the likelihood of the particular threat and the financial impact of the threat on the assets 102. For example, the likelihood that an employee will fall ill is quite high, but the financial impact of having an employee stay home for a day or two is quite small. On the other hand, the likelihood of an earthquake is very low, but the financial impact of the earthquake would be quite high. In addition, the likelihood of a particular threat can be affected by the geographical location of the assets 102 to which the threat relates. For example, an earthquake in Japan is more likely than an earthquake in Germany.
The likelihood and financial impact of the threats 114 allow a risk 116 to be calculated. The risk 116 is expressed as a currency value, e.g., dollars, euros, yen, etc., and is the mathematically expected cost to the organization of all the threats 114 on the assets 102, based upon the value of the assets 102 and the likelihood of the threats 114 on the assets 102 over a particular time window. In addition, based on the cost of the projects 112 or measures 106 or both, as well as the change in risk 116 that results from the projects 112 or measures 106, the return on a particular security investment 118 can be calculated. The return on security investment 118 is the difference between the original risk without the security investment and the revised risk after the security investment is included, divided by the cost of the security investment, and multiplied by 100 to express the return as a percentage. The risk 116 can also be used by an operational risk management (ORM) 120 program to determine the impact of particular threats as well as of measures against one or more threats.
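As an illustration of this calculation, the following sketch (hypothetical figures and names, not taken from the specification) computes the return on a security investment as a percentage from the original risk, the revised risk, and the cost of the investment:

```python
# Illustrative sketch of the return on security investment 118 described above.
# The function name and the figures are hypothetical; amounts are in one currency unit.

def return_on_security_investment(original_risk, revised_risk, investment_cost):
    """Difference between the original risk (without the investment) and the revised
    risk (with the investment), divided by the investment cost, times 100."""
    return (original_risk - revised_risk) / investment_cost * 100.0

# Example: risk falls from 500,000 to 350,000 after a 50,000 security investment.
print(return_on_security_investment(500_000, 350_000, 50_000))  # 300.0 (percent)
```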
The following is an example of the relationship between measures, threats, and assets. An organization monitors computer system access and use; this is a measure taken by the organization. This measure helps mitigate the threats of hacking attacks as well as industrial espionage. Another measure implemented by the organization is building access control. The building access control helps to reduce the threat of industrial espionage as well as burglary. Finally, the organization also implements emergency and crisis management plans. Such plans can mitigate the threats of hacking attacks, industrial espionage, burglary, and natural disasters.
Further, each of these threats has a potential impact on one or more of the organization's assets. For example, a hacking attack could impact a computer server, or result in a breach of the organization's confidential data. Industrial espionage could also have an impact on the computer server or the organization's confidential data. A burglary might have an impact on the computer server, as well as on the server room itself. Finally, a natural disaster might have an impact on the computer server, the server room, and the employees of the organization.
Some measures might be required by various government and industry regulations. For example, both KonTraG and SOX include a requirement that critical organizational data be backed up. The German Data Protection Act (Deutsches Datenschutzgesetz) requires that, in addition to data backup, both physical access controls and availability controls be implemented within an organization to protect confidential data.
Further, the measures and assets can all be affected by projects undertaken by the organization. For example, the opening of a new data center, the outsourcing of information technology (IT) services, and identity management all represent projects that could impact the organization's assets, requiring adjustment of the organization's measures.
In addition, external changes can impact the organization's measures and the threats to the organization's assets. For example, a new technology introduced by a competitor might represent a new threat to which the organization must adapt. Other external changes might include various political events, such as the introduction of proposed legislation or a change in power after a government election. Physical changes to the environment can also have an impact on the organization; for example, if a new nuclear power plant is constructed near the organization's facilities, the organization may need to adapt its measures in order to deal with the threat that the new power plant might pose.
Knowledge refers to the knowledge required to implement a particular measure. Knowledge can be rated at one of three levels. At the lowest level, the organization does not have the expertise required to implement the measure, or there is a major lack of expertise within the organization. At the middle level, expertise is building up within the organization, but it is not yet at the level required to fully implement the measure. At the highest level, there is expertise where needed throughout the organization, and the expertise is such that the measure could be fully implemented. The level of knowledge for a particular measure within the organization can also be unknown.
Readiness refers to the management of the implementation of a particular measure. At the lowest level, there is no defined process owner for the particular measure, or the process is not running at the present time. At the middle level, there is a defined process owner for the particular measure, and the process is being implemented, although the process is not running at its full potential because of insufficient resources or other constraints. At the highest level, there is a defined process owner for the particular measure, there are sufficient resources for the process, and the process is running at its full potential. Alternatively, the level of readiness for a particular measure within the organization can be unknown. An alternative implementation involving the use of an interview form is described below.
Penetration refers to the implementation status of a particular measure. At the lowest level, the particular measure is not implemented, or implementation has not yet started. At the middle level, the particular measure is partially implemented; the measure has been communicated to the organization and is being carried out. At the highest level, the particular measure is fully implemented; the measure is working and is being monitored for effectiveness. Alternatively, the level of penetration for a particular measure within the organization can be unknown.
In one implementation, KPI levels are represented visually by the colors red, yellow, and green; the lowest level of a particular KPI is associated with the color red, the middle level of a particular KPI is associated with the color yellow, and the highest level of a particular KPI is associated with the color green. This will be referred to as the “traffic light” measurement and reporting system.
Information about the various KPIs can be provided to the common KPI database 202 in a variety of ways. For example, interviews 204 can be presented to an individual for completion through a web-based interface or in any other alternative format, allowing individuals within the organization to provide information to the common KPI database 202. Further information about interview formats is provided below. A particular individual within the organization will not know everything about the organization, but is likely to know quite a bit about his or her area of specialization within the organization. By combining the information gathered from multiple interviews 204, the common KPI database 202 grows in comprehensiveness and accuracy. Information about the various KPIs can also be gathered through a front end 206 of the reporting system 200, or by a direct input 208 mechanism to the reporting system 200, e.g., input provided in the form of data files from other software applications. For example, individual incidents can be reported to the reporting system 200 by individuals using the front end 206, or by direct input from a separate incident reporting system.
In some implementations, each source of information can be assigned its own weighting. For example, an interview completed by the chief security officer can be given a larger weight than an interview completed by a lower-level employee such as a security guard, reflecting the assumption that the chief security officer is a more reliable source of information than a security guard.
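One way such source weighting could be realized is sketched below; the weighted-average rule and the weights themselves are assumptions for illustration only, since the specification does not prescribe a particular combination method:

```python
# Hypothetical sketch: combining KPI answers from several sources, each with its own
# weight. The specific weights and the weighted-average rule are assumed for illustration.

def combine_sources(answers):
    """answers: list of (status_value, source_weight) pairs for one KPI of one measure."""
    total_weight = sum(weight for _, weight in answers)
    return sum(value * weight for value, weight in answers) / total_weight

# A chief security officer (weight 3.0) and a security guard (weight 1.0) report
# different status values (on the 0-32 scale described below) for the same measure.
print(combine_sources([(28, 3.0), (16, 1.0)]))  # 25.0
```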
Output processing 210 of the information gathered in the common KPI database 202 allows for the generation of both predefined reports 212 and assembled reports 214. Assembled reports 214 are custom reports reflecting specific information requested by one or more individuals. In addition, individual reports 216, as well as other documents 220, can be generated for particular individuals based upon their needs. For example, a chief executive officer of the organization might want to have information about a first set of security issues; a chief security officer will likely want to have information about a broader set of security issues; and the board of directors will likely want a broad overview of the security of the organization. In addition, decision memos 218 can be prepared to provide specific information for particular individuals, limited in scope to include only the information an individual needs in order to make an informed decision.
The interview form 300 lists a variety of measures 304. Based upon the individual's knowledge of each of the measures, the individual can score each measure based on the three KPIs of knowledge 306, readiness 308, and penetration 310. In one implementation, the individual can score each measure using the three-level, traffic-light color-coded system described above; all the individual needs to do for each KPI of each measure is select the color code that corresponds to the individual's assessment. If the individual has no knowledge about a particular measure (e.g., it is outside the scope of the individual's position), or the individual does not know the status of a particular measure, other color codes, such as white or black, can be used by the individual to indicate the lack of knowledge about the measure or that its status is unknown. The individual also has the option of providing further written comments 312 regarding the status of each measure that can be reviewed by other individuals within the organization.
In another implementation, an individual can be presented with a scale for each metric, allowing the individual to indicate the status of a particular measure on a sliding scale.
The information gathered can be used to generate summaries relating to the status of individual measures, as well as the status of individual assets. In one implementation, the summary status can be reported using the traffic light system described above. The measured KPIs, knowledge, readiness, and penetration, are combined into a single implementation level for each measure. The implementation levels of multiple measures within a single country can be combined to create an implementation level for all the measures in that country; similarly, the implementation levels of multiple measures within a single division of an organization can be combined to create an implementation level for all the measures of that division. The implementation levels of all the measures for multiple countries or multiple divisions can be further combined to create an implementation level for a region, a worldwide status, or an entire organization, as desired. In each of these consolidation steps, the weighting of each individual implementation level for a measure is based on the value of the assets that are protected by the measure.
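A minimal sketch of one such consolidation step follows, assuming the weighted average is taken with respect to the value of the protected assets as stated above; the concrete numbers are invented:

```python
# Sketch of consolidating the implementation levels of several measures into one
# status for a country or division, weighting each measure by the value of the
# assets it protects. Figures are invented for the example.

def consolidate(levels_and_asset_values):
    """levels_and_asset_values: list of (implementation_level, protected_asset_value)."""
    total_value = sum(value for _, value in levels_and_asset_values)
    return sum(level * value for level, value in levels_and_asset_values) / total_value

# Three measures in one country protecting assets of different values.
country_level = consolidate([(0.9, 1_000_000), (0.5, 250_000), (0.2, 50_000)])
print(round(country_level, 3))  # 0.796
```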
Similarly, a divisional status 416 can also be generated by applying a weighting algorithm to the measure implementation level 412. For example, if the status of a particular measure is desired for a human resources division, any information in the measure implementation level 412 can be ignored unless it is associated with the human resources division. Further, if desired, different weighting factors can be applied to information associated with the human resources division; for example, different weights can be applied to information that comes from payroll, benefits, and the human resources IT department. In one implementation, the divisional status 416 can be expressed using the traffic light system: red applying to a divisional status below a first threshold; green applying to a divisional status above a second threshold; and yellow applying to a divisional status falling between the two thresholds.
An overall status 418 can also be generated based upon the measure implementation level 412. The overall status 418 provides a summary status of the individual measure for the entire organization. In one implementation, the overall status 418 can be expressed using the traffic light system.
The traffic light system is a user-friendly method of collecting and displaying data relating to the organization's security system; however, in order to make use of data collected under the traffic light system, the data must be converted into numerical values. These numerical values can then be stored as status measures, K_Type, for each of the KPI types, namely, knowledge, readiness, and penetration. In addition, a weighting Kw_Type can be applied to each of these types of KPI, depending on the needs and assessments of the organization.
In one implementation, a status measure can range from 0 to 32. If the status measure is less than or equal to 10, the status measure is considered to be red; if the status measure is greater than 10 but less than or equal to 24, the status measure is considered to be yellow; and if the status measure is greater than 24, the status measure is considered to be green. For example, if K_Knowledge is equal to 20, the color associated with the knowledge KPI is yellow; if K_Penetration is 26, the color associated with the penetration KPI is green.
Further, in this implementation, for data that is collected using the traffic light system, for example by interview, red attributes are treated as having a status measure of 4, yellow attributes are treated as having a status measure of 16, and green attributes are treated as having a status measure of 32. If a sliding scale is used to collect this data, the scale is divided into 33 sections, from 0 to 32, and the section of the scale that the individual has selected is used as the status measure for the KPI under consideration. Using this method to translate between traffic light colors and numerical values, a variety of calculations can be performed to determine the costs, savings, and return on investment for a particular security project.
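The translation between traffic-light colors and status measures can be summarized in a short sketch, which is a direct reading of the thresholds and values given in this implementation:

```python
# Conversion between traffic-light colors and numeric status measures on the 0-32
# scale described above: red <= 10, yellow <= 24, green > 24; collected colors are
# treated as status measures of 4, 16, and 32 respectively.

COLOR_TO_STATUS = {"red": 4, "yellow": 16, "green": 32}

def status_to_color(status):
    if status <= 10:
        return "red"
    if status <= 24:
        return "yellow"
    return "green"

print(status_to_color(20))        # "yellow" (e.g., K_Knowledge = 20)
print(status_to_color(26))        # "green"  (e.g., K_Penetration = 26)
print(COLOR_TO_STATUS["yellow"])  # 16
```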
The status measure of each KPI can be used to identify measures that are implemented poorly. For example, if the status measure of K_Knowledge is low, then the organization must gain knowledge about that particular measure. If the status measure of K_Knowledge is high but K_Readiness is low, then the organization has the knowledge to implement the measure under consideration but is not well prepared to do so. If the status measures of K_Knowledge and K_Readiness are high but K_Penetration is low, the organization has the knowledge to implement the measure under consideration, and is prepared to do so, but the organization has not made a significant effort to actually implement the measure.
A simulation process 652 allows simulations to be performed based upon the data collected during the data collection process 602. First, simulation scenarios 655 are defined or selected by a user of the system 600. Simulation scenarios can be created for, among other things, potential security projects, potential business projects, or potential changes in the environment. A simulation of a scenario can determine the influence of the project or environmental change on the status of controls, and on the value of assets. Upon the selection of a simulation scenario 655, the system 600 uses the information in the database 605 and executes the simulation 660. During the execution of the simulation 660, the new security status of assets and controls, based on the project or environmental change, is calculated and then used to determine the return on security investment as well as a residual risk. These results can then be compared with the current security status of assets and controls. After the simulation has been executed, the simulation results 665 are distributed or otherwise made available to the appropriate individuals within the organization. Further details relating to the techniques used during the simulation process 652 are discussed below.
A validation process 667 can be used to validate the status of measures 670 based on information in the database 605. As an example, the Chief Security Officer (CSO) of the organization can verify that the various reports indicating that building access controls are functioning are valid and accurate; if these reports are not accurate, the CSO can make adjustments to information contained in the database 605. The information can be presented in a summarized fashion; for example, the summarized status of assets that were affected by incidents and the current status of controls can be reported, based upon information received from audits, risk management reports, benchmarking, and reported data. Based upon these summary reports, validation decisions and adjustments to the status of particular measures can be made by the appropriate individuals, and these adjustments are then stored in the database 605.
Finally, a reporting process 672 can generate both standard and non-standard reports for various individuals. Standard reports 675 are generated and made available to the appropriate individuals within the organization. In one implementation, the standard reports 675 are available as static or dynamic web documents served from a web server to appropriate individuals using conventional web browsers over secure network connections. In addition to the standard reports 675, which are always available and accessible in real time, routine reports 680 can also be generated. Routine reports 680 are defined by individuals in the organization to contain information pertinent to a specific individual or division. For example, a routine report for the legal department can include information pertaining to regulatory requirements and risks, while a routine report for the information technology department can include information pertaining to information technology threats and risks. Based on the nature of the routine report 680, the routine report is distributed 685 or made available to the appropriate individuals or divisions.
A single loss expectancy SLE_{T,A,C} for a particular asset A in country C to a threat T can be calculated using the formula SLE_{T,A,C} = Roc_{T,A} × I_{T,A} × Ex_{T,A,C} × V_A, where Roc_{T,A} is the annual rate of occurrence of an incident damaging asset A caused by threat T; I_{T,A} is an impact factor for an asset A to a threat T; Ex_{T,A,C} is the exposure of asset A in country C to a threat T in comparison to the standard exposure Ex; and V_A is the value of the asset A expressed in dollars, euros, or other currency unit. The impact factor is defined as the portion of the asset A that is damaged due to the occurrence of a particular threat T, with I_{T,A} = 0 representing no damage from the threat T to the asset A and I_{T,A} = 1 representing total loss of the asset A from the threat T; each asset-threat pair can be assigned a different impact factor. The standard exposure value is Ex = 1, which represents the lowest possible risk. The exposure Ex_{T,A,C} for a particular asset A in country C to a threat T can be as low as 1, equivalent to the standard exposure and representing the lowest possible risk. There is no limit to how high Ex_{T,A,C} can be. If the exposure for an asset to a threat in a country were twice as great as for a baseline asset, Ex_{T,A,C} would be 2. If it were a hundredfold greater, Ex_{T,A,C} would be 100. The exposure value can also be called a risk factor multiplier.
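The single loss expectancy formula can be expressed directly in code; the following sketch uses hypothetical figures:

```python
# Single loss expectancy SLE_{T,A,C} = Roc_{T,A} * I_{T,A} * Ex_{T,A,C} * V_A,
# following the formula above.

def single_loss_expectancy(roc, impact, exposure, asset_value):
    """roc: annual rate of occurrence; impact: portion of the asset damaged (0..1);
    exposure: >= 1, relative to the standard exposure Ex = 1; asset_value: currency."""
    return roc * impact * exposure * asset_value

# Example (invented figures): a threat occurring 0.1 times per year that damages 25%
# of a 200,000 asset in a country twice as exposed as the baseline.
print(single_loss_expectancy(0.1, 0.25, 2.0, 200_000))  # 10000.0
```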
The effectiveness of one measure on a threat can be expressed as Eff_{T,M,C}, where Eff_{T,M,C} = (1 − Raro_{T,M} × IL_{M,C}) × (1 − RI_{T,M} × IL_{M,C}). Raro_{T,M} is the reduction of the annual rate of occurrence for a threat T due to a measure M, and ranges from 0 to 1, where a value of Raro_{T,M} = 0 represents a completely ineffective measure M against the threat T and a value of Raro_{T,M} = 1 represents a measure M that can completely prevent an incident due to threat T. IL_{M,C} is the implementation level for a particular measure M in country C, and also ranges from 0 to 1, where IL_{M,C} = 0 indicates a measure M that is not at all implemented and IL_{M,C} = 1 indicates a measure M that is fully implemented. RI_{T,M} is the reduction of the impact for a measure M against a threat T, and also ranges from 0 to 1, where RI_{T,M} = 0 represents a completely ineffective measure M against a threat T, and RI_{T,M} = 1 represents a measure M that will completely eliminate the damage of an incident caused by threat T.
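Likewise, the effectiveness of a measure against a threat can be computed from the two reduction factors and the implementation level; the numbers below are invented:

```python
# Effectiveness Eff_{T,M,C} = (1 - Raro_{T,M} * IL_{M,C}) * (1 - RI_{T,M} * IL_{M,C}),
# following the formula above. A value of 1 leaves the loss unchanged; smaller values
# indicate stronger mitigation.

def effectiveness(raro, ri, implementation_level):
    return (1.0 - raro * implementation_level) * (1.0 - ri * implementation_level)

# Example (invented figures): a half-implemented measure that, if fully implemented,
# would prevent 60% of occurrences and reduce the impact by 40%.
print(effectiveness(0.6, 0.4, 0.5))  # (1 - 0.3) * (1 - 0.2) = 0.56
```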
The annual loss expectancy ALE_{A,C} for a particular asset A in country C is calculated by summing the single loss expectancies over all threats T, ALE_{A,C} = Σ_T SLE_{T,A,C},
where SLE_{T,A,C} is calculated as shown above. The mitigated annual loss expectancy mALE_{A,C} for a particular asset A in country C is calculated by applying the effectiveness of the measures M to each single loss expectancy, mALE_{A,C} = Σ_T ( SLE_{T,A,C} × Π_M Eff_{T,M,C} ),
where both SLE_{T,A,C} and Eff_{T,M,C} are calculated as shown above. Once ALE_{A,C} and mALE_{A,C} are calculated, the savings S_{A,C} for a particular asset A in country C due to all measures M can be calculated as S_{A,C} = ALE_{A,C} − mALE_{A,C}. The total cost of measures TCO_{A,C} for all measures M relevant to an asset A is calculated as TCO_{A,C} = Σ_M Cost_{M,A,C},
where Cost_{M,A,C} is the cost of a particular measure M to protect an asset A in country C.
The return on security investment ROSI_{A,C} for an asset A in country C is ROSI_{A,C} = S_{A,C} − TCO_{A,C}, which can also be expressed as ROSI_{A,C} = ALE_{A,C} − mALE_{A,C} − TCO_{A,C}. Expressed as a percentage, the return on investment ("ROI") can be calculated by dividing this amount by the cost of the measures and multiplying by 100, ROI_{A,C} = (ROSI_{A,C} / TCO_{A,C}) × 100.
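The asset-level quantities above can then be combined; the sketch below assumes, as reconstructed above, that the mitigated loss for each threat is its single loss expectancy multiplied by the effectiveness of every measure addressing that threat, and that the percentage return is taken relative to the total cost of the measures. The figures are invented.

```python
# Sketch of ALE, mALE, savings, TCO, and ROSI for one asset in one country.
# The per-threat aggregation (multiplying by every relevant effectiveness value)
# follows the reconstruction above and is not quoted verbatim from the specification.

def annual_loss_expectancy(sle_per_threat):
    return sum(sle_per_threat.values())

def mitigated_annual_loss_expectancy(sle_per_threat, eff_per_threat):
    # eff_per_threat[threat] is a list of effectiveness values Eff_{T,M,C} for that threat.
    total = 0.0
    for threat, sle in sle_per_threat.items():
        mitigated = sle
        for eff in eff_per_threat.get(threat, []):
            mitigated *= eff
        total += mitigated
    return total

sle = {"fire": 10_000.0, "burglary": 4_000.0}
eff = {"fire": [0.56], "burglary": [0.7, 0.9]}
ale = annual_loss_expectancy(sle)                  # 14000.0
male = mitigated_annual_loss_expectancy(sle, eff)  # 5600.0 + 2520.0 = 8120.0
tco = 2_000.0                                      # total cost of the measures
savings = ale - male                               # 5880.0
rosi = savings - tco                               # 3880.0
roi_percent = rosi / tco * 100                     # 194.0
print(ale, male, savings, rosi, roi_percent)
```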
These calculations can also be used to determine the effectiveness of simulated measures, in order to determine whether or not it is worthwhile to implement a new measure M_new. The mitigated annual loss expectancy with all existing measures M in place, mALE_1, is calculated as described above for mALE_{A,C}, and the mitigated annual loss expectancy with the new measure included, mALE_2, is calculated in the same way with M_new added to the set of measures.
The total cost of ownership ("TCO") for the new measure being simulated, M_new, is equal to the cost of the new measure, C_Mnew. The savings S_Mnew resulting from the new measure M_new can be simulated using the formula S_Mnew = mALE_1 − mALE_2; the return on security investment for the new measure M_new can be simulated using the formula ROSI_Mnew = S_Mnew − TCO; this formula can also be expressed as ROSI_Mnew = mALE_1 − mALE_2 − C_Mnew. Therefore, the return on investment for this new measure M_new being simulated, expressed as a percentage, can be calculated by dividing ROSI_Mnew by the cost of the new measure, C_Mnew, and multiplying by 100.
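A simulation of a new measure then amounts to comparing the two mitigated loss expectancies; in the sketch below the figures are invented, and the percentage at the end assumes the return is expressed relative to the cost of the new measure:

```python
# Simulating a new measure M_new: compare the mitigated annual loss expectancy with
# only the existing measures (mALE_1) against the value with the new measure added
# (mALE_2). All figures are invented for illustration.

male_1 = 8_120.0    # mitigated annual loss expectancy with existing measures only
male_2 = 6_500.0    # mitigated annual loss expectancy with M_new included
cost_new = 1_000.0  # C_Mnew, the cost of the new measure

savings_new = male_1 - male_2                # S_Mnew = 1620.0
rosi_new = savings_new - cost_new            # ROSI_Mnew = 620.0
roi_new_percent = rosi_new / cost_new * 100  # 62.0 (assumed convention)
print(savings_new, rosi_new, roi_new_percent)
```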
The implementation level, IL_{M,Rep(C,D)}, for a particular measure M in a single report Rep(C,D) covering a single country C and a single division D can be expressed as a weighted combination of the KPI status measures reported for that measure, using a weighting value for each type of KPI.
As described above, Kw_Type is a weighting value for each type of KPI, and K_{Type,M,Rep(C,D)} is the KPI value of each type of KPI for a measure M and report Rep(C,D) covering a single country C and a single division D. The implementation level, IL_{M,C}, for a particular measure M in a single country C, but across several divisions, can be expressed as a combination of the report-level implementation levels IL_{M,Rep(C,D)} across the divisions D.
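The exact combination formulas are not reproduced here; the following sketch is only one plausible reading, assuming the report-level implementation level is the Kw_Type-weighted average of the three status measures on the 0-32 scale, normalized so that IL lies between 0 and 1. The weights themselves are invented.

```python
# Hypothetical sketch: report-level implementation level IL_{M,Rep(C,D)} as a weighted
# average of the KPI status measures (0..32), normalized to the range 0..1. Both the
# normalization and the example weights are assumptions, not quoted from the specification.

KPI_WEIGHTS = {"knowledge": 1.0, "readiness": 1.0, "penetration": 2.0}  # Kw_Type (assumed)

def implementation_level(kpi_status):
    """kpi_status: dict mapping KPI type to its status measure K_Type (0..32)."""
    total_weight = sum(KPI_WEIGHTS.values())
    weighted = sum(KPI_WEIGHTS[t] * k for t, k in kpi_status.items())
    return weighted / (32.0 * total_weight)

print(implementation_level({"knowledge": 20, "readiness": 16, "penetration": 26}))
# (1*20 + 1*16 + 2*26) / (32 * 4) = 88 / 128 = 0.6875
```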
The relevance R_M of a measure M can be calculated from the measure weighting value Mw_M and the exposures Ex_{A,C} of the assets A protected by the measure.
Mw_M is a measure weighting value Mw for a measure M. Ex_{A,C} is the exposure of asset A in country C to all threats; Ex_{A,C} is expressed in relation to a standard exposure value, Ex, as described above. In addition, the exposure of an asset A to all threats in all countries, Ex_A, can be calculated by combining the country-level exposures Ex_{A,C}.
The implementation level IL_M of one measure M across all countries can be calculated in either of two ways from the country-level implementation levels IL_{M,C}.
Error calculations can also be performed in order to determine the accuracy of the information generated by the above formulae. The average implementation level for a measure carries an uncertainty; by approximating the resulting change in the return on security investment by the corresponding partial derivative and evaluating it, the absolute range of the return on security investment ROSI_{A,C} for an asset A in country C can be derived.
Similar calculations can be applied to determine the security status of a particular asset, as well as of a group of assets; the remaining security risks for a single asset or a group of assets; the security status of business processes; and the security status of an entire organization or particular divisions within the organization. In addition, further calculations can be undertaken to simulate the impact of a project on all of these measures, as well as to simulate the impact of changes in the environment to all of these measures.
In addition, multiple speedometers in the security snapshot indicate the status of security relating to various types of assets 910, as well as the security associated with different divisions 915. Various critical security events 920, as well as the level of risk associated with each security event, are also displayed in the security snapshot.
The confidence interval of the average implementation level IL can be calculated with standard statistics: first, the unbiased standard deviation is estimated, and from it the 90% confidence band around the average implementation level is derived.
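The "standard statistics" referred to here are not spelled out; the sketch below assumes the usual unbiased standard deviation estimate and a normal approximation with z = 1.645 for a two-sided 90% band:

```python
# Sketch of a 90% confidence band for the average implementation level, assuming the
# unbiased (n - 1) standard deviation estimate and a normal approximation (z = 1.645).
# The exact convention used in the specification is not stated.

import math

def confidence_band_90(implementation_levels):
    n = len(implementation_levels)
    mean = sum(implementation_levels) / n
    variance = sum((x - mean) ** 2 for x in implementation_levels) / (n - 1)
    half_width = 1.645 * math.sqrt(variance) / math.sqrt(n)
    return mean - half_width, mean + half_width

print(confidence_band_90([0.60, 0.75, 0.68, 0.81, 0.55]))
```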
The confidence interval can also be calculated with best-case worst-case calculations, where the lower bound is calculated assuming that many or all of the unreported measures are negative and the upper bound is calculated by assuming that many or all of the unreported measures are positive.
In one implementation, assets that require protection are identified. For example, assets requiring protection can include confidential data, e.g., customer or order databases, web pages, web site availability, or tangible assets such as buildings and computers. All of the assets of a particular business process can be categorized based on a predefined rule set.
In one implementation, the assets can be categorized using a layered business process.
In addition, for each asset that involves information (either the informational assets or assets that store or contain informational assets), the values of various attributes associated with the informational asset can be taken into consideration. These attributes can include confidentiality (i.e., preserving authorized restrictions on access and disclosure), integrity (i.e., guarding against improper modification), and availability (i.e., ensuring timely and reliable access and use). In the end, a general threat rating for each asset can be established based on the values for each of the attributes.
Having identified the threats as described above, the relationships among threats, assets, incidents, measures, and countries can be captured in a set of tables, each defining a many-to-many relationship between two of these entities.
Other tables defining a many-to-many relationship include the Incident-Threat table 1853, reflecting that many threats can be involved in causing a single incident and that a single threat can cause many incidents. Another is the Asset-Threat table 1854, reflecting that many threats can threaten a single asset and that a single threat can threaten many assets. Another is the Incident-Asset table 1855, reflecting that a single incident can involve many assets and that a single asset can be involved in many incidents. Another is the Asset-Country table 1856, reflecting that an asset (e.g. employees or computers) can be present in many countries and that a country can contain many assets. Another is the Measure-Country table 1857, reflecting that many measures can be in effect in a single country and that a single measure can be in effect in many countries. Another is the Asset-Measure table 1858, reflecting that many assets can be protected by a single measure and that a single asset can be protected by many measures. Another is the Asset-Country-Threat table 1859, reflecting that assets in a particular country can be affected by many threats, and a single threat can threaten assets in multiple countries.
The data structure may be organized in an asset-centric manner. Assets 1940 are tangible objects such as buildings, computers, or people, and intangible objects such as information or reputation, and can be anything that needs protection 1941. Rather than including every instance of every asset, such as each laptop computer, a single data point can represent all laptops in a single country. The data point can show how much of the asset is in a particular country, e.g., 30% of laptops are in Germany, as a relative distribution number 1942 and country 1945. The value of the asset 1943 may be included; for instance, laptops in America may average $2000 in value. Also, for each asset-country pair the exposure factor 1944 may be included. For instance, laptops may be more likely to be stolen in America than in Japan, in which case the exposure factor for laptops in America would be higher. A geographic or political unit other than "country" may be used to group instances of assets, such as company department, city, state, or province.
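The asset-centric record described here could look roughly as follows; the field names are illustrative, not taken from the specification:

```python
# Hypothetical sketch of one asset-centric data point: one record per asset type and
# country (or other geographic/political unit), not per physical instance.

from dataclasses import dataclass

@dataclass
class AssetCountryRecord:
    asset: str                    # e.g., "laptop"
    country: str                  # or another grouping unit such as department or state
    relative_distribution: float  # share of the asset located in this country (0..1)
    average_value: float          # average value 1943 of one instance, in currency units
    exposure_factor: float        # exposure 1944, >= 1 relative to the standard exposure

records = [
    AssetCountryRecord("laptop", "Germany", 0.30, 1800.0, 1.0),
    AssetCountryRecord("laptop", "United States", 0.45, 2000.0, 1.5),
]
print(records[0])
```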
Measures 1920 can have one or both of two effects on a threat: measures can be preventative or curative. Preventative measures may include procedures and guidelines 1921-1923, audits, firewalls and intrusion detection, virus and content scanning, and encryption; such procedures and guidelines include standards like ISO 17799 1921, company standards 1922, or Sarbanes-Oxley 1923. Curative measures may include redundant systems and backup regimes. Measures may be both preventative and curative, for example employee training. The effectiveness of a measure 1924 can be a coefficient of preventing incidents (for a preventative measure) or reducing damage after an incident (for a curative measure).
Threats 1910 create the risk of security related incidents 1930. Various threats have different rates of occurrence 1911 for different assets. The single loss expectancy 1912 of a threat varies by country and asset. It is the expected loss of an asset in a country from a single identified threat.
The invention and all of the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structural means disclosed in this specification and structural equivalents thereof, or in combinations of them. The invention can be implemented as one or more computer program products, i.e., one or more computer programs tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program (also known as a program, software, software application, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file. A program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification, including the method steps of the invention, can be performed by one or more programmable processors executing one or more computer programs to perform functions of the invention by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus of the invention can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, the invention can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
The invention can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), or a front-end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the invention), or any combination of such back-end, middleware, and front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The invention has been described in terms of particular embodiments, but other embodiments can be implemented and are within the scope of the following claims. Many of the operations described above can be performed in a different order and still achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. Different mathematical formulas can be used to achieve identical or substantially similar results. Different numbers of levels can be used for presentation and acquisition of information. For example, for some organizations or parts of organizations a two-level representation may be sufficient; for others, the use of more than three levels can offer advantages. In addition, this methodology of associating measures with threats and threats to assets can be used for the management of risks that are not related to security issues, such as business risks, financial risks, etc. Other embodiments are within the scope of the following claims.