Members of organizations who are in charge of making important decisions must often balance competing objectives. For example, a chief information security officer must make decisions regarding the protection of the organization's information technology assets while also meeting the organization's business objectives. It is often difficult for such a decision maker to determine how well an investment in a particular security measure will correspond with the organization's other business goals and limitations. Various systems and standards are available to help decision makers make better informed decisions regarding security. Although these tools may be helpful, they are often difficult to customize to a particular organization's unique needs and limitations.
The accompanying drawings illustrate various examples of the principles described herein and are a part of the specification. The illustrated examples do not limit the scope of the claims.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
As mentioned above, decision makers within an organization must make decisions regarding the protection of the organization's information technology assets while also meeting the organization's business objectives. Oftentimes, different decision makers within an organization will have conflicting objectives. For example, an operational manager needs to make sure that the organization's systems are operating as desired. Additionally, a security manager needs to take steps to minimize security breaches during operations. Particular measures taken by one decision maker may have an adverse effect on the other decision maker's objectives. For example, a stricter security policy may result in slower operations.
Furthermore, it is often difficult for these decision makers to determine how well an investment in a particular security measure will correspond with the organization's other business goals and limitations. A security investment decision often affects multiple objectives and goals aside from a security objective. For example, a security investment decision may also affect an organization's performance and productivity objectives.
In light of this and other issues, the present specification discloses systems and methods for decision support that will allow a user to make better informed decisions relating to security investment decisions. According to certain illustrative examples, a decision support system prompts one or more users for information regarding an organization's objectives. These objectives may include business and other operational objectives as well as security objectives. The information received from the user is used to derive a utility function. Additionally, the decision support system simulates the implementation of a number of investments. The results of these simulations can then be used with the utility function to determine how well these potential investments may correspond with the organization's objectives.
Through use of systems and methods embodying principles described herein, decision makers within an organization may be better informed as to how various security investments will correspond to security and business objectives. For example, a chief information security officer may obtain better information about how a potential security measure will fit in with the organization's risk tolerance for security breaches as well as their economic ability to take on such measures. Furthermore, in cases where multiple decision makers with conflicting objectives are involved, the decision support system may help those decision makers determine which compromises will maximize utility.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present systems and methods. It will be apparent, however, to one skilled in the art that the present apparatus, systems and methods may be practiced without these specific details. Reference in the specification to “an example,” or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples. The various instances of the phrase “in one example” or similar phrases in various places in the specification are not necessarily all referring to the same example.
Throughout this specification and in the appended claims, the term “investment” is used broadly and may encompass any effort made towards satisfying an objective. As such, an investment may involve but does not necessarily require financial capital. An example of an investment may be the acquisition of new hardware or the implementation of a new policy irrespective of whether such an acquisition of hardware or implementation of a policy requires a capital expenditure.
Throughout this specification and in the appended claims, the term “entity” is used broadly and may encompass either an individual or an organization.
Referring now to the figures, an illustrative physical computing system (100) includes various types of memory (102). Some types of memory, such as solid state drives, are designed for storage. These types of memory typically have large storage volume but relatively slow performance. Other types of memory, such as those used for Random Access Memory (RAM), are optimized for speed and are often referred to as “working memory.” The various forms of memory may store information in the form of software (104) and data (106).
The physical computing system (100) also includes a processor (108) for executing the software (104) and using or updating the data (106) stored in memory (102). The software (104) may include an operating system. An operating system allows other applications to interact properly with the hardware of the physical computing system. The other applications may include a decision support application.
A user interface (110) may provide a means for the user (112) to interact with the physical computing system (100). The user interface may include any collection of devices for interfacing with a human user (112). For example, the user interface (110) may include an input device such as a keyboard or mouse and an output device such as a monitor.
The graphical user interface (202) provides the mechanism that allows a user such as a decision maker to interact with the decision support system (200). The graphical user interface (202) presents information to a user through a display device and receives information from the user from an input device. For example, the graphical user interface (202) may display to a user a number of questions relating to various business and security objectives. The user may respond to those questions through use of the input device.
The workflow manager (204) manages the flow of the decision support system (200). Specifically, the workflow manager (204) coordinates the use of the other modules, which will be described in more detail below. These modules allow the decision support system (200) to receive the desired information from the user, create a utility function, simulate investment options and present the best options back to the user through the graphical user interface (202). The workflow manager also manages situations where the preference elicitation is provided to multiple users. Each user may be accessing the decision support system remotely from individual client machines either concurrently or subsequently.
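By way of illustration only, the following minimal Python sketch shows one way such a workflow manager might coordinate the modules described below; the class and method names are hypothetical and are not taken from this specification.

```python
# Hypothetical sketch of a workflow manager coordinating the decision
# support modules; the module objects are placeholders for illustration only.

class WorkflowManager:
    def __init__(self, elicitation, utility_builder, simulator, mapper, ui):
        self.elicitation = elicitation          # preference elicitation module
        self.utility_builder = utility_builder  # utility function builder module
        self.simulator = simulator              # simulation module
        self.mapper = mapper                    # preference mapper module
        self.ui = ui                            # graphical user interface

    def run(self, user_role, investment_options):
        # Elicit objectives, metrics, and preference ranges from the user.
        preferences = self.elicitation.elicit(user_role, self.ui)
        # Build a utility function from the elicited preferences.
        utility = self.utility_builder.build(preferences)
        # Simulate each available investment option (or combination thereof).
        results = {name: self.simulator.simulate(option)
                   for name, option in investment_options.items()}
        # Map simulation results onto the utility function and rank them.
        ranked = self.mapper.rank(results, utility)
        # Present the best-matching investments back to the user.
        self.ui.present(ranked)
        return ranked
```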
The preference elicitation module (206) includes the hardware and software for determining how to elicit information from a user. The preference elicitation module guides a user, such as a decision maker, through the elicitation process. This process may include one or more steps consisting of questionnaires and graph manipulation. The preference elicitation module (206) accesses a template database (222) for a set of questions to ask a user. The preference elicitation module includes a utility component selector module (208), a preference value range module (210), and a questions and results module (212).
The template database includes a number of templates. A specific template may be designed for the user's specific decision making role. For example, if the user is a chief information security officer, then the template may include questions relating to common objectives in the decision making process that relate to security and business objectives. The data elicited from the user is then placed into a preference elicitation database (224).
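By way of illustration only, a role-specific template might be represented as a mapping from a decision-making role to a list of candidate objectives and their associated metrics; the role names, objectives, and field names in the following sketch are hypothetical.

```python
# Hypothetical role-based templates: each entry lists candidate objectives
# and the metric used to quantify each objective (illustrative values only).
TEMPLATES = {
    "chief_information_security_officer": [
        {"objective": "security risk", "metric": "breach prevention rate"},
        {"objective": "business loss", "metric": "expected loss per incident"},
        {"objective": "investment cost", "metric": "total cost of investment"},
    ],
    "operational_manager": [
        {"objective": "service availability", "metric": "uptime percentage"},
        {"objective": "investment cost", "metric": "total cost of investment"},
    ],
}

def questions_for_role(role):
    """Return elicitation questions for the objectives in a role's template."""
    return [f"How important is the objective '{t['objective']}' "
            f"(measured by {t['metric']}) to you?"
            for t in TEMPLATES.get(role, [])]
```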
The various templates and questions used by the preference elicitation module may be created by an administrator. The administrator may have knowledge of common business and security objectives that are relevant to the roles of specific decision makers. The administrator grants access to the appropriate individual and manages the settings of the decision support system so that it operates in an efficient manner according to the needs of a particular organization.
The template may indicate a number of appropriate objectives. For each objective, a number of metrics may be used. Use of a particular metric for a given objective may be established by an administrator or elicited from a user. The metric provides the user with a mechanism for quantifying a particular objective. For example, breach prevention rate may be a metric for a security risk objective. The breach prevention rate metric gives the user a way to quantify how well various investments may affect the security risk objective.
The utility component selector module (208) includes the hardware and software for selecting the appropriate components of a utility function. The utility component selector guides the user, based on the template being used, through the elicitation process. This elicitation process may occur by means of a questionnaire with multiple choice options of strategic business and security objectives that are important to the decision maker. For example, in the case that the user is a chief information security officer, then the utility component selector may choose components such as breach rate, business loss, and investment costs. These components correspond to the objectives indicated by the user. In addition, the utility component selector module (208) guides a user through the identification of related metrics that would represent each identified objective. Although the template may provide an initial set of objectives, the user may add or remove objectives to fit his or her unique decision making responsibilities.
The preference value range module (210) includes the hardware and software for eliciting tolerance ranges or target levels of achievement for each of the objectives and metrics identified by the user. In addition, ratings of preferences between different objectives and investment decisions related to those objectives are elicited. These ratings include the user's preferences for which objectives are more important than others. For example, a user may indicate a range of investment costs that would be desirable, acceptable, or unacceptable. These different preferences may be used to weight the various components within the utility function.
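By way of illustration only, the following Python sketch shows one way elicited tolerance ranges and importance ratings might be recorded and normalized into weights; the metric names, ranges, and rating scale are hypothetical examples.

```python
# Hypothetical representation of elicited tolerance ranges and importance
# ratings; the specific numbers are illustrative, not from the specification.
preferences = {
    "investment cost": {
        "desirable":    (0, 50_000),          # cost range the user prefers
        "acceptable":   (50_000, 100_000),
        "unacceptable": (100_000, float("inf")),
        "importance":   2,                    # user-rated importance (1-5)
    },
    "breach prevention rate": {
        "desirable":    (0.95, 1.0),          # at least 95% prevention preferred
        "acceptable":   (0.90, 0.95),
        "unacceptable": (0.0, 0.90),
        "importance":   5,
    },
}

def weights_from_importance(prefs):
    """Normalize importance ratings into weights that sum to one."""
    total = sum(p["importance"] for p in prefs.values())
    return {name: p["importance"] / total for name, p in prefs.items()}

print(weights_from_importance(preferences))  # weights proportional to the ratings
```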
The components of the utility function that are ultimately selected by the user can be placed into pairs based on a logical relationship between the two objectives represented by the components. This can allow the user to see the relationship between two different objectives. This may allow the user to make better decisions when considering how taking steps toward one objective will affect the other objective. For example, investments that result in a higher breach prevention rate may also result in a loss in the availability of an information service. Such a relationship may allow for the coupling of these two components. The user can then answer a number of questions generated by the questions and results module (212). These questions can be designed to determine which of the two components is more desirable.
A particular objective may not be exclusively coupled with another objective. For example, a cost objective can be coupled with a security objective in one instance and coupled with a business objective in another instance. Thus, the user may be provided with two graphs, one showing how a cost objective will affect a security objective and the other showing how the cost objective will affect the business objective. A third graph may also show how the security objective will affect the business objective.
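By way of illustration only, the following Python sketch shows one way coupled pairs of objectives and the user's answers to pairwise preference questions might be represented; the objective names and answers are hypothetical.

```python
# Illustrative sketch: couple objectives into pairs and record which member
# of each pair the user prefers; names and answers are hypothetical.
from itertools import combinations

objectives = ["breach prevention rate", "service availability", "investment cost"]

# Every objective may be coupled with more than one other objective.
pairs = list(combinations(objectives, 2))

# Answers to "which of the two is more desirable?" for each coupled pair.
answers = {
    ("breach prevention rate", "service availability"): "breach prevention rate",
    ("breach prevention rate", "investment cost"): "breach prevention rate",
    ("service availability", "investment cost"): "service availability",
}

def preference_counts(pairs, answers):
    """Count how often each objective was preferred across its pairings."""
    counts = {name: 0 for name in objectives}
    for pair in pairs:
        preferred = answers.get(pair)
        if preferred is not None:
            counts[preferred] += 1
    return counts

print(preference_counts(pairs, answers))
# {'breach prevention rate': 2, 'service availability': 1, 'investment cost': 0}
```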
The utility function builder module (216) includes the hardware and software for building a utility function based on the information elicited from a user by previous components. The utility function represents the user's preferences for a number of objectives. One example of a utility function is as follows:
U = w1f1(dB) + w2f2(dL) + w3f3(dC)   Equation (1)
Where:
U = utility;
w = weight;
f = function;
dB = change in confidentiality required to reach the target objective;
dL = change in availability required to reach the target objective; and
dC = change in investment costs required to reach the target objective.
Equation (1) is a utility function that includes three components. In this example, the three different objectives are confidentiality, availability, and investment costs. Each of these component functions may be weighted according to the user's preferences as to which objective is most important. The function (f) may be designed to best match the nature of how important it is to reach a target objective. More detail on the form of the function is described below.
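By way of illustration only, Equation (1) may be sketched in Python as follows; the linear component functions, weights, and example values are placeholders, since the specification leaves the form of each function (f) open.

```python
# A minimal sketch of Equation (1): U = w1*f1(dB) + w2*f2(dL) + w3*f3(dC).
# The component functions, weights, and example deltas are placeholders.

def f_linear(delta):
    """Placeholder component function: utility falls off linearly with the
    change still required to reach the target objective."""
    return -abs(delta)

def utility(dB, dL, dC, weights=(0.5, 0.3, 0.2),
            components=(f_linear, f_linear, f_linear)):
    """Weighted sum of component utilities, as in Equation (1)."""
    w1, w2, w3 = weights
    f1, f2, f3 = components
    return w1 * f1(dB) + w2 * f2(dL) + w3 * f3(dC)

# Example: a candidate investment that still leaves some distance to each target.
print(utility(dB=0.10, dL=0.05, dC=0.20))  # approximately -0.105
```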
The simulation module (220) includes the hardware and software for simulating the results of a number of potential investment options. In the case of information security, such investments may include additional hardware with various security features. Such investments may also include the implementation of new security policies. The simulation module simulates the implementation of each of the available investments or any combination thereof. These results are then provided to the preference mapper module (214).
In some cases, the simulation module (220) does not need to actually perform simulations. The expected outcome of a particular investment decision may be simple enough to not require a simulation. Alternatively, simulations may have been run on particular investment decisions in the past. The simulation module (220) may store a number of expected outcomes or results from past simulations and provide these results or expected outcomes to the decision support system when appropriate.
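By way of illustration only, the following Python sketch shows a simulation module that reuses stored results from past simulations rather than re-running them; the class and method names are hypothetical.

```python
# Illustrative sketch of a simulation module that reuses stored results
# from past simulations instead of re-running them; names are hypothetical.

class SimulationModule:
    def __init__(self):
        self._cache = {}   # expected outcomes or results of past simulations

    def simulate(self, investment_id, run_simulation):
        """Return a cached outcome when available, otherwise run the
        simulation and store its result for later reuse."""
        if investment_id not in self._cache:
            self._cache[investment_id] = run_simulation(investment_id)
        return self._cache[investment_id]

# Example usage with a trivial stand-in for a real simulation.
sim = SimulationModule()
outcome = sim.simulate("new_firewall", lambda _: {"breach prevention rate": 0.97})
print(outcome)  # {'breach prevention rate': 0.97}
```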
The preference mapper module (214) includes the hardware and software for mapping the results of the simulation to the utility function derived from information provided by the user. By mapping the results of the simulation to the utility function, the decision support system is able to determine which investment options correlate best with the user's preferred objectives. The investments that correlate best with the user's objectives may be presented to the user through the graphical user interface (202). This information may then be used by the user to aid in his or her decision making process.
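By way of illustration only, the following Python sketch shows one way simulation results might be mapped onto a utility function and ranked; the investment names, deltas, and weights are hypothetical examples in the spirit of Equation (1).

```python
# Illustrative sketch of a preference mapper: score each simulated investment
# with the derived utility function and rank the options. All values are
# hypothetical examples, not taken from the specification.

def rank_investments(simulation_results, utility_fn):
    """Return investment names ordered from highest to lowest utility."""
    scored = {name: utility_fn(result) for name, result in simulation_results.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Each simulation result records the change still required to reach each target.
simulation_results = {
    "new_firewall":    {"dB": 0.02, "dL": 0.10, "dC": 0.30},
    "stricter_policy": {"dB": 0.05, "dL": 0.20, "dC": 0.05},
    "staff_training":  {"dB": 0.08, "dL": 0.05, "dC": 0.10},
}

weights = {"dB": 0.5, "dL": 0.3, "dC": 0.2}

def example_utility(result):
    # Simple weighted-sum utility in the spirit of Equation (1).
    return -sum(weights[k] * abs(result[k]) for k in weights)

print(rank_investments(simulation_results, example_utility))
# ['staff_training', 'stricter_policy', 'new_firewall']
```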
In some examples, the decision support system prompts the user for information by requesting that the user answer a series of questions. These questions may ask the user to rate different objectives by importance. Additionally, the user may answer specific questions about a particular objective. For example, the user may specify that he or she desires a breach prevention rate of at least 95%. The metric of breach prevention rate would affect the security objective. As mentioned above, these objectives may be paired. The user can then answer questions relating to which of the two paired objectives is more important.
After the decision support system receives (block 304) preferences and objectives information from the user, the system then determines (decision 306) whether or not all of the information requested has been received. If the information has not (decision 306, NO) been received, then the system prompts the user for the remaining information. If all of the information has indeed (decision 306, YES) been received, then the decision support system can derive (block 314) the utility function.
Beforehand, concurrently, or subsequently, a simulation module (e.g., 220) receives information about the various investment options available to the organization.
The simulation module will then simulate (block 310) the effects of the various investment options available. The decision support module will then determine whether (decision 312) all of the appropriate simulations have run. If all of the appropriate simulations have not (decision 312, NO) run, then the system will run the remaining simulations. If all of the appropriate simulations have run (decision 312, YES), then the decision support system can proceed to compare (block 316) the simulation results with the derived utility function.
The results from the simulation are then compared (block 316) with the derived utility function. The investment decisions or combinations of investment decisions that best match the utility function are then presented (block 318) to the user. Thus, the user is provided with a number of investment decisions which will best match his or her stated objectives.
The above described process illustrates one example of how the decision support system may operate. Other processes may be used. For example, the preference elicitation may be bidirectional. Thus, the user may go back to previously answered questions and revise his or her responses. This may be done at any time, even after the final utility function has been derived. Changes in a user's responses may result in a reformation of the utility function.
In some cases, the decision support system can indicate to the user which potential investment decisions best match the user's indicated preferences. In general, a simulation result that is graphically close to a desirable outcome indicates that the corresponding investment decision will actually result in the desired outcome. However, in some cases, the nature of a target outcome for a particular objective may affect how well a simulation result matches a desirable outcome. For example, a user may prefer falling short of the target over going beyond it. Alternatively, a user may prefer going beyond the target over falling short of it. A target refers to the most desirable outcome for a particular objective within the bounds of realistic expectations. For example, a target breach rate may be 0.02%.
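By way of illustration only, the following Python sketch shows an asymmetric component function that penalizes falling short of a target differently from going beyond it; the target value and penalty factors are hypothetical.

```python
# Illustrative sketch of an asymmetric component function around a target:
# falling short of the target is penalized differently from overshooting it.
# The target and penalty factors are hypothetical examples.

def asymmetric_score(value, target, undershoot_penalty=2.0, overshoot_penalty=0.5):
    """Score an outcome relative to a target, penalizing shortfall and
    overshoot at different rates depending on the user's stated preference."""
    if value < target:
        return -undershoot_penalty * (target - value)
    return -overshoot_penalty * (value - target)

# Example: a target breach prevention rate of 95%.
print(asymmetric_score(0.93, 0.95))  # falling short of the target (larger penalty)
print(asymmetric_score(0.97, 0.95))  # going beyond the target (smaller penalty)
```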
In conclusion, through use of systems and methods embodying principles described herein, decision makers within an organization may be able to get more information related to potential investments. For example, a chief information security officer may obtain better information about how a potential security measure will fit in with the organization's risk tolerance for security breaches as well as their economic ability to take on such measures.
The preceding description has been presented only to illustrate and describe examples of the principles described. This description is not intended to be exhaustive or to limit these principles to any precise form disclosed. Many modifications and variations are possible in light of the above teaching.