Traditional software development includes several separate activities such as gathering requirements, determining specifications, designing, test planning, implementing, and implementation testing. Test planning includes, for example, security design analysis. However, there is often an undesirable conceptual separation between security design analysis and security testing.
Existing methods for security testing include “fuzz” testing. Fuzz testing is the automatic generation of input data for an application program or other process to test the application program in terms of functionality, reliability, stability, response under stress, and other characteristics. An objective in fuzz testing is to generate input data that uncovers programming errors that could lead to security problems.
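By way of illustration only, the following Python listing sketches a minimal mutational fuzz loop of the kind described above. The target callable and the byte-flipping mutation strategy are assumptions chosen for clarity and are not features of any particular testing tool.

    import random

    def mutate(seed: bytes, flips: int = 8) -> bytes:
        # Corrupt a few randomly chosen bytes of a known-good (non-empty) input.
        data = bytearray(seed)
        for _ in range(flips):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    def fuzz(target, seed: bytes, iterations: int = 1000) -> list:
        # Repeatedly feed mutated inputs to the target, recording any failure.
        failures = []
        for _ in range(iterations):
            case = mutate(seed)
            try:
                target(case)  # hypothetical entry point of the program under test
            except Exception as error:  # an unhandled error may signal a security problem
                failures.append((case, error))
        return failures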
Successful fuzz testing, however, is a time-consuming process involving significant and frequent manual intervention by a tester. It is often unclear which portions of an application program should be tested, at what level, and with which variations of input data. As a result, fuzz testing is often misapplied or omitted entirely, leaving the application program potentially vulnerable to security problems.
Embodiments of the invention identify security testing targets for an information system through structural analysis of a threat model for the information system. In some embodiments, a representation of the information system is analyzed. The representation includes a data flow diagram having a plurality of elements arranged to describe a flow of data through the elements. The elements may be associated with one or more application programs in the information system. The data flow diagram is analyzed according to predefined criteria to identify one or more of the elements that may pose a threat. A threat priority is assigned to the identified elements. The identified elements and the assigned threat priorities are provided to a user as potential security testing targets for further investigation. In an embodiment, the predefined criteria include data flow elements that cross trust boundaries and communicate with an external data source.
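As a non-limiting sketch of the analysis just described, the following Python listing models data flow elements and applies the exemplary predefined criteria (a data flow that crosses a trust boundary while the element communicates with an external data source). The class name, field names, and integer trust levels are illustrative assumptions.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Element:
        name: str
        trust_level: int                  # e.g., 0 = anonymous ... 3 = kernel
        external_source: bool = False     # communicates with an external data source
        flows_to: List["Element"] = field(default_factory=list)

        def crosses_trust_boundary(self) -> bool:
            # A data flow crosses a trust boundary when the level of trust
            # changes between the source element and a destination element.
            return any(dest.trust_level != self.trust_level
                       for dest in self.flows_to)

    def identify_targets(elements: List[Element]) -> List[Element]:
        # Apply the exemplary predefined criteria to identify elements
        # that may pose a threat.
        return [e for e in elements
                if e.crosses_trust_boundary() and e.external_source]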
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Corresponding reference characters indicate corresponding parts throughout the drawings.
Embodiments of the invention identify security threats to an information system or process. In some embodiments, the security threats are identified through a structural analysis of the information system. In a testing environment such as shown in FIG. 1, the structural analysis includes an evaluation of the data flow diagram 104 describing a flow of data through a plurality of elements 105, and the results of the evaluation are provided to the user 106.
Referring next to FIG. 2, an exemplary block diagram illustrates a computing device 202 having a memory area 204 storing the criteria 220, a representation 208 of the data flow diagram 104, and computer-executable components including an interface component 210, a decision component 212, a model component 214, and a report component 216.
The components in FIG. 2 execute to perform the operations illustrated in the exemplary flow chart of FIG. 3.
The decision component 212 analyzes each of the plurality of elements 105 based on the criteria 220 accessed by the decision component 212 to identify one or more of the plurality of elements 105 at 306. The identified elements represent elements that are more likely to contain vulnerabilities than other elements in the application program. The model component 214 assigns a threat priority to each of the one or more of the plurality of elements 105 identified by the decision component 212 at 308. The report component 216 provides at 310 to the user 106 the one or more of the plurality of elements 105 identified by the decision component 212 and the threat priority assigned by the model component 214 as security testing targets.
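Purely as an illustrative sketch, and reusing the hypothetical Element class from the listing above, the operations at 306, 308, and 310 may be expressed as follows. Whether identification requires every criterion or any single criterion, and the higher/lower priority labels, are assumptions here.

    from typing import List

    def assign_threat_priority(element: Element) -> str:
        # A higher priority where every criterion is met, a lower one
        # otherwise, mirroring the two-level scheme described below.
        if element.crosses_trust_boundary() and element.external_source:
            return "higher"
        return "lower"

    def analyze(elements: List[Element]) -> List[tuple]:
        # Operation 306: identify elements satisfying at least one criterion.
        identified = [e for e in elements
                      if e.crosses_trust_boundary() or e.external_source]
        # Operation 308: assign a threat priority to each identified element.
        prioritized = [(e.name, assign_threat_priority(e)) for e in identified]
        # Operation 310: provide the targets and priorities to the user.
        return prioritized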
Alternatively, the model component 214 merely indicates one or more of the plurality of elements 105 identified by the decision component 212 as potential vulnerabilities in the information system. The indicated elements represent security testing targets. In such embodiments, a threat priority is not assigned.
In some embodiments, the report component 216 automatically selects at least one of the identified elements as a target for fuzz testing based on the assigned threat priority, if any of the identified elements are reasonable targets for fuzz testing. In other embodiments, none of the identified elements is selected as a target for fuzz testing. Alternatively or in addition, the user 106 evaluates the identified elements, and may select one or more of the identified elements as targets for fuzz testing.
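As one illustrative possibility, the automatic selection of fuzz testing targets may resemble the following sketch. The selection rule, and the reuse of the higher/lower labels from the listing above, are assumptions rather than required behavior.

    def select_fuzz_targets(prioritized: list) -> list:
        # Automatically select only the identified elements whose assigned
        # threat priority is "higher"; an empty result corresponds to the
        # case in which no identified element is a reasonable fuzz target.
        return [name for name, priority in prioritized if priority == "higher"]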
In some embodiments, the interface component 210 provides the plurality of elements 105 identified by the decision component 212 and the threat priority assigned by the model component 214 in a security testing priority report for display on a display 218. For example, the interface component 210 provides the information in the security testing priority report as a sorted, or user-sortable, list of suggested, recommended, or possible security testing targets for display on the display 218. The list of possible security testing targets may be organized hierarchically based on the assigned threat priority to emphasize critical threats over non-critical threats (e.g., critical threats listed first). Alternatively or in addition, the possible security testing targets may be color-coded or otherwise visually distinguishable based on the assigned threat priority. The term “critical” refers to a severity or importance of the security testing target. The severity or importance may be subjective, objective, absolute, or relative to other targets, and may be set by the user 106, original equipment manufacturer (OEM), or other entity.
The interface component 210 may also provide the representation 208 of the data flow diagram 104, along with the assigned threat priority value for each of the elements 105 identified by the decision component 212, for display on the display 218. For example, the threat priority values may be visually indicated on a visual representation of the data flow diagram 104 (e.g., the identified elements may be color-coded or otherwise visually distinguished within the data flow diagram 104). The user 106 interacts with the data flow diagram 104, for example, by filtering the possible security testing targets based on their threat priority values. In some embodiments, the user 106 selects an option to only display, or highlight, the possible security testing targets having a particular threat priority value or range of threat priority values.
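The sorting and filtering behaviors described above may be sketched as follows, assuming for illustration that threat priorities are numeric values with larger numbers indicating more critical threats; the color mapping is hypothetical.

    # Hypothetical mapping from numeric threat priority to display color.
    COLORS = {3: "red", 2: "orange", 1: "yellow"}

    def build_report(targets: list) -> list:
        # Emphasize critical threats by listing higher priorities first,
        # as in the security testing priority report.
        ranked = sorted(targets, key=lambda item: item[1], reverse=True)
        return [(name, priority, COLORS.get(priority, "gray"))
                for name, priority in ranked]

    def filter_by_priority(targets: list, lowest: int, highest: int) -> list:
        # Display only targets within a user-selected range of priority values.
        return [item for item in targets if lowest <= item[1] <= highest]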
Referring next to FIG. 4, an exemplary data flow diagram illustrates a flow of data through the plurality of elements 105 and across one or more trust boundaries.
Referring next to FIG. 5, an exemplary flow chart illustrates the assignment of threat priorities to the elements 105. The decisions in FIG. 5 include, for example, determining whether a transmission of data associated with the element 105 crosses a trust boundary.
The trust boundary represents any transmission of data that crosses from less-to-more or more-to-less trust. Trust boundaries occur when the level of trust associated with the source of a data flow differs from the level of trust associated with the destination of the data flow. Determining whether the transmission of data crosses a trust boundary comprises, for example, determining whether a level of trust changes from one of the elements 105 to another. There are many types of trust levels. As an example, trust boundaries occur wherever validation, authentication, or authorization should occur. Other examples of trust levels include anonymous (e.g., data downloaded from a network), authenticated user (e.g., code running as an authenticated user), system (e.g., code running as part of the operating system), and kernel (e.g., code running with full kernel privileges). When code running as an authenticated user reads data that was downloaded from a network, there is a trust boundary between the two elements 105.
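The exemplary trust levels above may be expressed, for illustration only, as an ordered enumeration; the numeric ordering is an assumption and is not required by the boundary test itself.

    from enum import IntEnum

    class TrustLevel(IntEnum):
        ANONYMOUS = 0       # e.g., data downloaded from a network
        AUTHENTICATED = 1   # e.g., code running as an authenticated user
        SYSTEM = 2          # e.g., code running as part of the operating system
        KERNEL = 3          # e.g., code running with full kernel privileges

    def crosses_trust_boundary(source: TrustLevel, destination: TrustLevel) -> bool:
        # A trust boundary exists whenever the level of trust changes between
        # the source and the destination of a data flow, in either direction.
        return source != destination

Under this sketch, crosses_trust_boundary(TrustLevel.ANONYMOUS, TrustLevel.AUTHENTICATED) returns True, matching the example of authenticated code reading data downloaded from a network.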
Such operations may lead to vulnerabilities in the application program. For example, the user 106 communicating with a web site represents a trust boundary. Other exemplary trust boundaries include a perimeter firewall, calls from a web application to a database server, and passing fully validated data from business components to data access components. Another exemplary trust boundary exists between user mode and kernel mode. Trust boundaries may be defined in the data flow diagram 104 by a developer of the software, or by the user 106. For example, the user 106 manually marks the location of the trust boundaries on the visual representation of the data flow diagram 104. In such an example, aspects of the invention receive an indication of the trust boundary from the user 106. The indication includes, for example, identification of one of the plurality of elements 105 in the data flow diagram 104.
The external interactor element includes, for example, an external data source communicating with the application program. As an example, the external interactor element is the user 106.
If any of decisions 508, 510, 512, or 514 are negative, then a lower threat priority is assigned to the element 105 at 518. While the assigned threat priorities in FIG. 5 are described as higher and lower, other threat priority values (e.g., numeric values or ranges of values) are within the scope of aspects of the invention.
Decisions 508, 510, 512, and 514 represent examples of the criteria 220 stored in the memory area 204 illustrated in FIG. 2.
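The cascade of decisions 508, 510, 512, and 514 may be sketched as follows. Because the individual tests are not enumerated here, the predicates passed in are placeholders standing in for whatever criteria 220 an embodiment stores.

    def assign_priority_from_decisions(element, decisions) -> str:
        # "decisions" is a sequence of predicates standing in for decisions
        # 508, 510, 512, and 514; each returns True when its test is met.
        if all(decision(element) for decision in decisions):
            return "higher"
        return "lower"  # any negative decision yields a lower priority (at 518)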
While FIG. 5 illustrates a particular quantity and order of decisions, other quantities and orders of decisions are within the scope of aspects of the invention.
Referring next to FIG. 6 and FIG. 7, additional exemplary embodiments of aspects of the invention are illustrated.
A computer, such as the computing device 202 described herein, has one or more processors or processing units, system memory, and some form of computer readable media. By way of example and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
The computer may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer. Although described in connection with an exemplary computing system environment, embodiments of the invention are operational with numerous other general purpose or special purpose computing system environments or configurations. The computing system environment is not intended to suggest any limitation as to the scope of use or functionality of any aspect of the invention. Moreover, the computing system environment should not be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with aspects of the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Embodiments of the invention may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. The computer-executable instructions may be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Aspects of the invention may be implemented with any number and organization of such components or modules. For example, aspects of the invention are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments of the invention may include different computer-executable instructions or components having more or less functionality than illustrated and described herein. Aspects of the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments illustrated and described herein as well as embodiments not specifically described herein but within the scope of aspects of the invention constitute exemplary means for generating a set of security testing targets representing potential vulnerabilities in the information system, and exemplary means for identifying security testing targets for the information system in a testing environment.
The order of execution or performance of the operations in embodiments of the invention illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments of the invention may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the invention.
When introducing elements of aspects of the invention or the embodiments thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.
Having described aspects of the invention in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the invention as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.