Information technology security assessment system

Information

  • Patent Grant
  • Patent Number
    11,882,146
  • Date Filed
    Tuesday, September 5, 2023
  • Date Issued
    Tuesday, January 23, 2024
Abstract
A method and system for creating a composite security rating from security characterization data of a third party computer system. The security characterization data is derived from externally observable characteristics of the third party computer system. Advantageously, the composite security score has a relatively high likelihood of corresponding to an internal audit score despite use of externally observable security characteristics. Also, the method and system may include use of multiple security characterizations all solely derived from externally observable characteristics of the third party computer system.
Description
BACKGROUND

The present invention relates to systems for determining the security of information systems and, in particular, for evaluating the security of third-party computer systems.


When a company wants to reduce its cyber security risk of doing business with another company's computer systems, it either performs, or hires an outside firm to perform, a cyber security assessment of the other company to determine if it is following good security practices. The theory is that these good practices make it difficult for attackers to compromise the networks of the other company. If the auditing company is satisfied with the assessment, it may choose to continue doing business with the other company. Or, it may ask the other company to make some improvements to its security systems or terminate the business relationship.


Generally, these audits are slow, expensive and impractical given the high volume of service provider security systems that need to be characterized by the company. And, the inventors have noted that audits are not entirely predictive of the performance of the security systems.


SUMMARY

A method and system is disclosed for creating a composite security rating from security characterization data of a third party computer system. The security characterization data is derived from externally observable characteristics of the third party computer system. Advantageously, the composite security rating has a relatively high likelihood of corresponding to an internal audit score despite use of externally observable security characteristics. Also, the method and system may include use of multiple security characterizations all solely derived from externally observable characteristics of the third party computer system.


A method of evaluating information security of a third party computer system is disclosed. The method includes collecting at least two security characterizations of the third party computer system. A composite security rating is generated using the at least two security characterizations. Advantageously, the two security characterizations are derived from externally observable characteristics of the third party system.


Each of the security characterizations may be from an associated one of a plurality of independent entities. For example, the independent entities may include commercial data sources. Also, the security characterizations may be derived without permission of the third party system.


The security characterizations may include multiple data types, such as breach disclosures, block lists, configuration parameters, malware servers, reputation metrics, suspicious activity, spyware, white lists, compromised hosts, malicious activity, spam activity, vulnerable hosts, phishing, user-behavior or e-mail viruses. The externally observable characteristics may also include serving of malicious code or communications with known attacker controlled networks.


The externally observable characteristics may be evidence of internal security controls or outcomes or operational execution of security measures of the third party computer system.


The collecting and generating steps may be repeated to generate a series of scores and the series examined to determine a trend. Also, the scores may be reported to a consumer. For instance, reporting may include reporting a warning based on a change in the scores. Or, reporting may include posting the score and warning to a web portal.


Collecting the security characterizations may include using various tools such as WGET, RSYNC, CURL or interfaces that may be characterization specific.


The method may also include mapping the third party computer system to an IP space and using the IP space for collecting the security characterizations. Mapping, for example, may include querying a Regional Internet Registry (RIR), such as by submitting an entity name to the RIR. Querying an entity name may include querying for variations of the entity name.


Mapping may also include using a domain name associated with the third party computer system. For example, tools such as nslookup or dig may be used on the domain name to determine a published IP address. Mapping may also include probing addresses around the published IP address. For example, IP addresses could be probed in powers of two around the published IP address. Mapping could also include adapting the domain name to server naming conventions and using tools like nslookup to verify an IP address associated with the domain name.


Generating the composite security rating may include assessing vulnerability and resilience of the third party computer systems. Vulnerability, for example, may include a number of IP addresses with malicious behavior. Resilience may be inversely proportional to a duration of malicious behavior.


The IP space may include a plurality of IP addresses. And, the composite security rating may correspond to an intensity and duration of malicious activity determined from one of the security characterizations. Generation of the composite security rating may include aggregation of a plurality of individual security metrics and/or the IP addresses associated with the third party computer system.


Determination of the individual security metric may include adjusting for false positives in the security characterizations. Correlating data across multiple related security characterizations may help improve the quality of any single security characterization. Further, adjusting for false positives may include determining an occurrence of an event, which includes persistent, reported activity on one of the IP addresses for a predetermined period of time. It may also include determining an intensity of the IP address for the predetermined period of time, such as a day.


Determining the intensity may include increasing intensity in proportion to a number of reporting targets from the security characterizations.


Determining an individual security metric may include assigning a raw score for each of the IP addresses appearing on a block list as one of the security characterizations. After an IP address is delisted, the raw score may be exponentially attenuated.


The individual security metric may also incorporate a raw score in proportion to a CIDR block size.


Individual security metrics or the composite ratings may be normalized based on, for example, network size or a number of employees.


Security characterizations may also include positive information about an organization that is aggregated into the composite rating.


The method could also include statistically correlating the composite security rating with actual outcomes and adjusting the generating step based on the statistical correlations.


Further, the method may include determining a confidence range of the composite security rating. For example, the confidence range may be based on a redundancy of the security characterizations or a size of the third party computer system.


The method may also include determining an accuracy of each of the security characterizations, such as by determining a level of coverage of the third party computer system by the security characterizations.


Also disclosed herein are a system and computer program product for data collection and scoring, including systems and software for performing the methods described above.


Another method may include generating a composite security rating using at least one security characterization that is derived from externally observable characteristics of the third party computer system, wherein the composite security rating has a relatively high likelihood of corresponding to an internal audit score.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS


FIG. 1 is a schematic of a system for evaluating information security;



FIG. 2 is a schematic of a system for gathering security data from external sensors;



FIG. 3 is a schematic of a composite security rating calculation; and



FIG. 4 is a schematic of a distributed system for evaluating information security.





DETAILED DESCRIPTION

Generally, the present invention includes a method, system and computer program product for creating composite security ratings from security characterization data of a third party computer system. The security characterization data is derived from externally observable characteristics of the third party computer system. Advantageously, the composite security rating has a relatively high likelihood of corresponding to an internal audit score despite use of externally observable security characteristics. Also, the method and system may include use of multiple security characterizations all solely derived from externally observable characteristics of the third party computer system.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.


Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.


A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.


Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.


Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).


Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.


The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


Referring now to FIG. 1, a system 10 for evaluating information security of a third party computer system includes the following subsystems: a global data source system 12, an entity ownership collector 14, a data collection processor 16, a data collection management system 18, a data archive 20, an entity database 22, a manual entity input 24, an entity data join process 26, an entity mapped meta-reports repository 28, a ratings processing system 30, a normalization, consolidation and global relative ranking system 32, a report generation system 34, a report archive 36 and a report delivery system 38. Different delivery modules 40 are configured to use different methods to deliver the reports to customers 42.


The global data source system 12 obtains data sources that characterize any observation about an entity (e.g., a third party computer system) and these sources can be highly varied and disparate. Each data source has a particular vantage point of the security related characteristics of entities.


The entity ownership collection system 14 gathers information about an entity. This includes information about which IT assets an entity owns, controls, uses, or is affiliated with. Examples of asset ownership include control and operation of an Internet Protocol (IP) network address range or computer services such as web servers residing within that address block. Information about entities also includes relationships such as subsidiaries, affiliates, etc., that describe entity association.


The data collection processing system 16 includes custom modules configured to collect and process unique data sources.


The data collection management system 18 is configured to schedule and coordinate the constant collection of the different data sources.


The data archive 20 is configured to store all of the terabytes of data constantly collected by the data collection management system 18.


The entity database 22 holds all of the information about an entity such as its name, address, web site address, industry sector, IP address ranges owned, etc. This database includes the “Entity Map,” which maps data back to an entity. For example, if observations are made about a particular IP address, the IP address can be looked up in the entity map to determine which entity controls or owns that address. This database is populated by automatic or manual data collection methods, or combinations thereof.


The manual entity input system 24 is configured to place non-automatic data on an entity into the entity database 22.


The entity data join process or system 26 is configured to match the collected data to the entity. In most instances, this is a computationally expensive operation because it requires going through all of the data collected and performing the map operation. Any evidence of security outcomes or configurations in the larger data collection pool is then assigned to an entity based on the entity map.


The entity mapped meta-reports repository 28 contains data summaries of observations made with respect to a particular entity for each data set after the map/join process is complete.


The ratings processing system 30 may include custom models for applying data source specific ratings to determine an entity rating. Each data source generally requires a custom model due to the unique nature of that data source. Each model accounts for the custom attributes and idiosyncrasies of the different data sources that have been mapped to the entity being rated. Custom data source models can account for any data source feature including temporal and cross-data source behaviors.


The ratings normalization, cross-validation, and relative ranking system 32 is configured to normalize ratings so that appropriate entity-to-entity comparisons can be made; the ratings are normalized and ranked within sectors or peer groups as well as globally.


An entity and rating analytics repository or archive 36 is configured to hold all of the ratings data and resulting analytics produced by the ratings process.


A report generation system 34 takes the ratings and analytics and generates report objects. These objects are not rendered into any particular presentation format at this stage but are in a generic intermediary format that can be then transformed into a specific deliverable format.


A report delivery system 38 is configured to translate generic reports into a specific report format. Examples of these formats include HTML, PDF, text, and XML. Delivery modules 40 provide different methods for delivering the reports, including by web portal, API or data feed.


Advantages include ratings based on the quality of outcomes of the information security practices of the third party computer systems and enablement of comparisons of ratings across organizations. The system 10 can be entirely, or to a large extent, automated and need not have the permission of the entity being rated. The reports allow risk management professionals to monitor, assess and mitigate partner risk with up-to-date ratings enabled by persistent monitoring of the third party computer systems. Also, the portal may provide for location of new partners, such as suppliers, with lower risk profiles and improved security postures.


Unlike internal audit systems, the system 10 does not rely upon a correlation between practices and outcomes. Instead, evidence of actual security outcomes is collected through the data source partners.


Also advantageously, trial data on 50 entities revealed that rankings produced using the system 10 matched internal evaluations. In some cases the system 10 revealed problems with the entities not revealed by internal evaluations.


Data Sources


External ratings from data sources available outside an entity provide an information security based view into the internal workings of the organization. For example, infection by malicious software can be determined using non-invasive website scanning technology. Communication between the entity computer system and known attacker controlled networks may reveal when the computer system has been compromised. Also, if an entity computer system is serving malicious code to visitors, the system was compromised at some point. The entity may not have the capability to detect such compromises or may be unable to react quickly enough operationally to resolve the issue. External observations can also measure operational execution, which may not occur despite good internal policies.


A diverse set of network sensors and services around the Internet collect and observe information about the third party entity computer systems. The system 10 then gathers, processes, and stores the data collected about entities from the sensors and service providers using custom developed data source specific collection processors. The collection manager 18 automates the scheduling and execution of the different collectors.


The global data source system 12 includes hundreds of potential data sources, including, for example, the 97 data sources owned by 37 organizations used during experimental testing. At least 82 data sources are on active collection and are stored in the data archive 20. Trial ratings were performed on at least 11 data sources from 7 organizations, and rankings were produced for nearly 600 different entities.


A data source is a single type of data from a single organization. For example, if two organizations provide a list of hosts that participate in phishing attacks, they are counted as two data sources. The 15 types of data in Table 3 all provide different information security related views of an organization. New types of data and new sources of existing data types are constantly added to the data sources used to characterize the performance of the entity. Breach disclosures, for example, indicate that an organization has experienced a particular kind of data or integrity breach. Configuration data, on the other hand, provides any number of configuration related details and could, for example, state the type of encryption used on the organization's website.









TABLE 1
Data Sources Summary

Total Data Sources                    97
Total Sourcing Organizations          37
Total Sources on Active Collection    82
Total Different Source Types          15

TABLE 3
Data Source Types

Breach Disclosures          Spam Activity
Block Lists                 Vulnerable Hosts
Configuration Parameters    Spyware
Compromised Hosts           Whitelists
Malicious Activity          Email viruses
Malware Servers             Multi-type
Reputation                  Phishing
Suspicious Activity         User Behavior

Of the 97 data sources identified, 82 are on “Active Collection” meaning there is a method for obtaining the data source and that its collection is automated. The high degree of automation helps to satisfy the methodology objective for adoption of techniques that are principally automated.


Table 2 lists the six collection methods employed for data acquisition, with the “Unknown” category meaning that the sources are identified but the method and ability to collect that data source have yet to be determined. The Instances column gives the number of data sources that are collected using that particular method. For example, 32 of the sources are collected using the network file transfer and synchronization tool rsync (http://samba.anu.edu.au/rsync/).









TABLE 2
Data Collection Methods

Methods      Instances
WGET                35
RSYNC               32
API                 13
MANUAL               6
WHOIS                1
HTTP GET             1
UNKNOWN              9

A collection processing infrastructure 50, configured to build and validate composite security ratings, is shown in FIG. 2. A plurality of different clouds represents different network segments. Rated Entity clouds 52 are organizations for which the system 10 generates a rating. Those entities include an entity perimeter or boundary, indicated by the firewall that connects to the Internet. Services clouds 54 provide data or reports on observed activity from a rated entity 52. An example of a report from a Service 54 could be a list of hosts that have been participating in malicious activity. Services use Sensor networks 56 to observe the behavior of entities. For example, a sensor could observe SPAM messages sent from a rated entity network 52 to the Internet 58.


Entity Mapping


There is no single central repository that holds information about the IP address allocation. Determining the correct and complete IP address space owned by a given entity improves the reliability and robustness of a rating.


In general, Regional Internet Registries (RIRs) manage the allocation and registration of Internet number resources (IP addresses, Autonomous System Numbers, etc.) within a particular region of the world. There are five RIRs: ARIN for North America, AfriNIC for Africa, APNIC for Asia Pacific, RIPE for Europe, the Middle East and Central Asia, and LACNIC for Latin America.


The RIRs allocate the address space to service providers, corporations, universities, etc. The RIRs provide various interfaces that enable queries of the RIR to determine who owns a given IP address. It is also possible to query the database by an entity name and get a list of IP addresses allocated to that entity. Despite lack of standardization of entity names in the RIR databases, well chosen queries can result in a very high coverage of addresses owned by an entity.


Another problem is that RIRs often allocate large chunks of addresses to Internet Service Providers (ISPs) who go on to allocate smaller address spaces to their customers. ISPs are under no obligation to report this data back to anyone. Most small companies contract with their local ISP for Internet access and don't obtain addresses from RIRs.


These problems are addressed by the entity ownership collection system 14 being configured to execute various heuristic processes including the following non-limiting list of examples:

    • 1. Using the ‘dig’ (http://linux.die.net/man/1/dig) tool to determine any IP information published by an entity. The dig tool takes the domain name of the entity as an argument. For example, execution of ‘dig a.com ANY’ returns all IP information published by the entity a.com.
    • 2. Use the IP addresses and domain names published to find ranges of IP addresses actually used. ISPs almost always allocate addresses in sizes that are powers of 2 (2, 4, 8, etc.). Knowing one IP address allows probing around that space. The ‘whois’ (http://linux.die.net/man/1/whois) tool can be used to determine ownership of neighboring addresses.
    • 3. Even if the entity does not publish any IP information that can be retrieved through dig, most entities have servers whose names may be guessed. Mail servers for the domain a.com often have the name mail.a.com, SMTP servers tend to be smtp.a.com, FTP servers tend to be ftp.a.com, etc. Using a tool like nslookup, the entity ownership collection system 14 can verify whether any of these common names are in use by the entity.
    • 4. If an IP address is found, the system 14 is configured to probe around the address (such as in step 2) to determine any addresses in the neighborhood owned by that entity.
    • 5. Searching around the website of the company often gives a hint of other servers hosted by the company (ex: reports.a.com), which can be used as a starting point for the search (see the sketch following this list).
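
As an illustration of heuristics 2 and 3 above, the following Python sketch resolves conventional server names for a domain and enumerates neighboring addresses around a published IP. The domain a.com comes from the examples above; the name prefixes and window size are assumptions, and a production collector would verify ownership of each neighboring address with whois before attributing it to the entity.

```python
# Illustrative sketch of entity-mapping heuristics 2 and 3 (assumptions noted).
import ipaddress
import socket

COMMON_PREFIXES = ["www", "mail", "smtp", "ftp"]  # conventional server names

def resolve_candidates(domain: str) -> set[str]:
    """Resolve the bare domain plus conventional server names (heuristic 3)."""
    found = set()
    for name in [domain] + [f"{p}.{domain}" for p in COMMON_PREFIXES]:
        try:
            found.add(socket.gethostbyname(name))
        except socket.gaierror:
            pass  # name not in use by the entity
    return found

def neighbor_addresses(ip: str, window: int = 8) -> list[str]:
    """Enumerate addresses in a power-of-two window around a known IP
    (heuristic 2); each candidate still needs a whois ownership check."""
    base = int(ipaddress.IPv4Address(ip))
    return [str(ipaddress.IPv4Address(base + off))
            for off in range(-window, window + 1)
            if off != 0 and 0 <= base + off <= 0xFFFFFFFF]

if __name__ == "__main__":
    for ip in sorted(resolve_candidates("a.com")):  # a.com: the text's example
        print(ip, "->", neighbor_addresses(ip)[:3], "...")
```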


Rating Methodology


Organizational security risk may be measured along two vectors: vulnerability and resilience. An entity's vulnerability is defined as its “physical, technical, organizational, and cultural states,” which can be exploited to create a security breach. An entity's resilience is defined to be its ability to recover from a security breach.


The system 10 uses the concepts of vulnerability and resilience by examining externally observable proxies for them. An example proxy for entity vulnerability is the number of entity-owned IP addresses that are reported to be malicious. The higher the number of reports, the more likely it is that the entity was vulnerable and had been compromised. Resilience is inversely proportional to the duration of detected malicious activity: the shorter the duration of the malicious activity, the higher the level of resilience the entity demonstrates, as it can quickly identify and remove malicious infections.


To compute the ratings for an entity, the system 10 aggregates all of the data collected pertaining to the IT assets owned by that organization, such as the IP addresses controlled by the entity and the associated activity of those IP addresses. The types of activities depend on the types of data. The data sources may include false positives and the system 10 is configured to account for those uncertainties.


To determine quality metrics for IP address based assets, every IP address is uniquely mapped to an entity. Processing the data from a data source yields, for each organization, a list of IPs that have demonstrated suspicious or malicious behavior. The processing steps are as follows:

    • 1. For each IP address, determine a security quality metric called “badness”.
    • 2. Badness is a number between 0 and 1 that corresponds to the extent and duration of malicious activity that was reported.
    • 3. For each data source in which the IP address is reported, determine a data source specific badness score for that IP.
    • 4. Consolidate the badness score for a given IP across all data sources by cross-validating data to determine the aggregate Badness for that IP.
    • 5. Aggregate the badness scores of IPs from an entity to determine the entity's IP asset based security quality metric (see the sketch following this list).
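
The following Python sketch mirrors steps 4 and 5 above in skeletal form, assuming steps 1-3 have already produced per-IP, per-source badness scores. The consolidation rule (a simple maximum) and the sample data are placeholders; the text calls for cross-validation across sources without prescribing a specific rule.

```python
# Skeletal sketch of badness-processing steps 4 and 5 (placeholder logic).
from collections import defaultdict

def consolidate(per_source_badness: dict[str, float]) -> float:
    """Step 4: combine per-source scores for one IP; a maximum is a placeholder
    for the cross-validation described in the text."""
    return max(per_source_badness.values())

def entity_metrics(reports: dict[str, dict[str, float]],
                   ip_to_entity: dict[str, str]) -> dict[str, float]:
    """Step 5: aggregate consolidated per-IP scores by owning entity."""
    totals: dict[str, float] = defaultdict(float)
    for ip, per_source in reports.items():
        totals[ip_to_entity[ip]] += consolidate(per_source)
    return dict(totals)

reports = {"203.0.113.7": {"block_list": 0.8, "suspicious_activity": 0.3}}
print(entity_metrics(reports, {"203.0.113.7": "ExampleCorp"}))
# {'ExampleCorp': 0.8}
```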


The ratings processing system 30 is configured to account for differences in data sources and types. Given each data source's potentially unique view of an entity, there is no universal technique that treats them all the same way. Data source specific modeling techniques, for example, were developed for 11 of the 97 data sources in experimental trials in order to demonstrate feasibility and validate the approach. The data sources incorporated accounted for five different data source types: Block Lists, Suspicious Activity, Malicious Servers, Compromised Hosts, and Spamming.


The following two sections give detailed examples of modeling techniques developed for calculating IP address badness for two different data sources that are representative of the data collected.


One of the data sources is a daily updated list of IP addresses that were reported by volunteer hosts from across the Internet. IP addresses are reported in this data source if they have communicated with hosts that do not expect any inbound communication from them. It lists many more IP addresses on a given day than the other data sources and, therefore, provides a significant amount of information contained only in this data source. However, this data source has a high incidence of false positives, where a false positive is an unwarranted report due to an incorrectly configured reporting host (i.e., the target) or a listing of an incorrect IP address due to backscatter.


False positives are accounted for by identifying events—where an event is defined as persistent, reported activity on a single IP address within a time period. For each event, heuristics are applied to determine the average intensity for the event. The intensity of an IP address on a given day is a measure of the confidence that malicious activity originated from the IP address on that day.


For the case where an event spans multiple days, the IP address is generally reported on each day in the event. However, if an IP address is listed on one day but not the next, this omission does not necessarily signify that the host has stopped its malicious behavior; rather, it could be that the host was offline for the day. For example, many corporate hosts are offline for weekends and holidays. Thus, an event is allowed to have short inactive periods, or days without any reports on the IP address. To generate the IP address quality metric, a maximum inactive period of three days is used.
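
As a concrete illustration of the event definition above, this sketch groups the daily reports for a single IP address into events, closing an event whenever the inactive period exceeds the three-day maximum. The dates are hypothetical.

```python
# Sketch of event grouping with a maximum inactive period of three days.
from datetime import date

MAX_INACTIVE_DAYS = 3  # from the text

def group_events(report_days: list[date]) -> list[tuple[date, date]]:
    """Return (start, end) spans of persistent reported activity for one IP."""
    if not report_days:
        return []
    events = []
    days = sorted(report_days)
    start = prev = days[0]
    for day in days[1:]:
        if (day - prev).days > MAX_INACTIVE_DAYS:
            events.append((start, prev))  # gap too long: close the event
            start = day
        prev = day
    events.append((start, prev))
    return events

reports = [date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 4), date(2024, 1, 12)]
print(group_events(reports))
# the 8-day gap splits the reports into two events:
# [(2024-01-01, 2024-01-04), (2024-01-12, 2024-01-12)]
```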


The intensity of an IP address for a given day is calculated dynamically and increases both with the number of reporting targets and with the duration of the event. Reports with a larger number of targets have larger intensities. This is because false positives due to misconfigured hosts are less likely to have occurred when multiple targets report the same IP address on the same day. Likewise, reports that belong to a persistent event have larger intensities, since persistent reports also signal the legitimacy of the malicious activity on the IP address.


The intensity, $I(s)$, is calculated as follows:

$$
I(s) =
\begin{cases}
0.1, & \text{if } s < 2 \\
0.01\, e^{\ln(10)(s-1)/4}, & \text{if } 2 \le s < 5 \\
0.8 - 0.7\, e^{-\ln(10)(s-5)/4}, & \text{if } s \ge 5
\end{cases}
$$

where s is the number of hosts reporting the IP address. Thus, the average intensity, $I_{avg}$, of an event is the average of the intensities calculated per active day (a day with reports) and is determined as follows:

$$
I_{avg} = \frac{I(s)}{T} + \frac{A \cdot I_{prev}}{T},
$$
where T is the list time, A is T minus the number of days since the last update, and $I_{prev}$ is the average intensity at the last update. The Badness, $B_{IP}$, of an IP address is derived from the intensity and duration of the events for the IP, such that recent events are weighted heavier than historical events, and is calculated as follows:

$$
B_{IP} = \min\left(1,\; \frac{I_{avg}\left(1 - e^{-0.02}\right)}{1 - e^{-0.12}} \sum_{t = t_1}^{t_n} e^{-0.02\, t}\right),
$$

where $t_1$ and $t_n$ denote the time elapsed from the end and the beginning of an event, respectively; the average intensity is readjusted if the persistence surpasses a threshold.
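
The piecewise intensity function reconstructed above translates directly into code. The sketch below implements only $I(s)$; the event-level averaging and the time-weighted Badness sum are omitted for brevity.

```python
# Sketch of the piecewise intensity function I(s); s is the number of hosts
# reporting the IP address on a given day.
import math

def intensity(s: int) -> float:
    """Confidence that malicious activity originated from an IP, given s reporters."""
    if s < 2:
        return 0.1
    if s < 5:
        return 0.01 * math.exp(math.log(10) * (s - 1) / 4)
    return 0.8 - 0.7 * math.exp(-math.log(10) * (s - 5) / 4)

for s in (1, 3, 5, 10, 50):
    print(s, round(intensity(s), 3))
# the intensity saturates toward 0.8 as the number of reporting targets grows
```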


The second data source example is a host block list that lists IP addresses that have been compromised. Based on an analysis of the data source's collection methods, the block list is considered very reliable in the sense that a listing implies that malicious activity originated from the listed address. This block list removes IP addresses from the list if no malicious activity is detected for a small window of time. Because of the high confidence in the data source's accuracy, any IP address on the block list is assigned a raw Badness of 0.8.


Once an IP address is delisted and is no longer on the block list, its Badness decays exponentially with respect to the time since it was last listed. Thus, the Badness is:

$$
B_{IP} = 0.8\, e^{-\frac{\ln(2)\, T}{182.625}},
$$

where T is the time in days since the last listing. This decay rate corresponds to a half-life of six months.
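
A short sketch of the delisting decay above: the raw Badness of 0.8 halves every 182.625 days (six months) once the address leaves the block list.

```python
# Sketch of the exponential decay of block-list Badness after delisting.
import math

def blocklist_badness(days_since_last_listing: float) -> float:
    return 0.8 * math.exp(-math.log(2) * days_since_last_listing / 182.625)

print(round(blocklist_badness(0), 3))        # 0.8 at delisting
print(round(blocklist_badness(182.625), 3))  # 0.4 after one half-life
print(round(blocklist_badness(365.25), 3))   # 0.2 after one year
```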


Various other data sources are handled similarly, but the raw score is based on the confidence in the data source's collection methods. Other data sources track CIDR blocks as opposed to individual IP addresses, so the Badness assigned to a listing on these lists is weighted by the CIDR block size as follows:

$$
B_{IP} = 0.8\, W\, e^{-\frac{\ln(2)\, T}{182.625}},
$$

where W is the natural log of the block size.


The total IP space badness of an entity is an aggregation of the badness of the entity's individual IP addresses and/or CIDR blocks. In the simplest model, where all data sources are IP address based, the entity badness is the total badness of the IP addresses owned by the entity. To normalize ratings across entities of different sizes, the entity's network size, defined as the number of active IP addresses owned by the entity, is used:

$$
B_{entity} = \frac{\sum_{IP \in entity} B_{IP}}{\ln(N)},
$$

where N denotes the network size. Normalizing avoids penalizing smaller entities and allows fair comparisons between entities of differing sizes.
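
A minimal sketch of the aggregation above, assuming the network size N is the number of scored IP addresses; the guard for networks smaller than two addresses is an added assumption, since ln(1) = 0 would make the quotient undefined.

```python
# Sketch of entity-level aggregation normalized by ln(network size).
import math

def entity_badness(ip_badness: dict[str, float]) -> float:
    n = len(ip_badness)  # network size N: active IPs owned by the entity
    total = sum(ip_badness.values())
    return total if n < 2 else total / math.log(n)  # guard: ln(1) = 0

print(round(entity_badness({"198.51.100.1": 0.8, "198.51.100.2": 0.4,
                            "198.51.100.3": 0.1}), 3))  # 1.3 / ln(3) ≈ 1.183
```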


Enhancements to the Ratings Methodology


The system 10 may also expand the methodology to support additional and different types of data sources. It could identify data sources that indicate different levels of IT sophistication; such information is a measure of the level of IT practice maturity.


Entity normalization methods can also account for differences in entity size beyond network size. For example, the use of other normalization methods such as number of employees may help produce more robust normalizations under certain circumstances.


Also, statistical properties of the model's internal parameters may be analyzed and adjusted based on the findings. For example, certain inputs or features may be disproportionately skewing the ratings, and such inputs or features may be modulated through weighting factors.


The composite security rating described above measures, amongst other things, how much, to what extent, and how recently malicious activity was detected on an entity's cumulative IP space. The score could also be adapted to show a level of confidence. For example, a failure to detect malicious activity on an entity's IP space does not necessarily imply non-malicious behavior. Rather, the data sources may lack coverage of the entity's IP space. By outputting a range as opposed to a single number, the system 10 is able to convey its confidence in a rating, where a larger range necessarily implies a lower confidence and a smaller range necessarily implies a higher confidence.


Such a range could be computed from a mean score and a confidence range, which could be determined from a developed discrete choice model. Features such as the Badness scores from each data source could help determine the mean score. Features such as redundancy between data sources and network size could also help determine the confidence range.


Entity mapping may also be improved through other data sources and functions. Data sharing relationships with Internet Service Providers might provide additional data on security outcomes and practices at entity computer systems. Also, consumers of the scoring reports may already have partner-mapping data through the nature of their relationship with the entity or may be able to request the information.


Entity mapping may also be facilitated by persistent updates of the heuristics, such as updating prefixes from BGP announcements and data from Regional Internet Registries.


Data storage used by the system 10 may be improved to minimize the disk space required while supporting rapid inclusion of new entity ratings. For example, high-speed data access layers may be created for daily ratings computation.


Speed and scale can be accomplished through distributed or parallel processing on different systems. A distributed data source query interface may be implemented so that massive and expensive centralized data storage is not required.


The system 10 may also be configured to develop and evaluate predictive capabilities of information security ratings and incorporate them into the rating methodology.


The ability to demonstrate predictability has a dependency on data reliability. For example, improving coverage of malicious events improves data reliability. Statistical evaluations may be used to disambiguate strong entity performance (e.g., no malicious activity) from low coverage (e.g., lack of information on the malicious activity). These evaluations can then be used in the rating methodology.


Statistical evaluations of data coverage may include a data accuracy assessment wherein levels of coverage assurance associated with a particular adopted data source are determined. Also, observations across data sources may be compared to determine data sources of high probability or low probability of coverage for a given entity.


Predictive modeling may include determination of entity historical trends to display and predict future performance. Regression and machine learning based models may be developed to predict information security performance. Models may be evaluated and further developed for predictive capability through a series of prediction experiments.


Also, the data source features may be analyzed for correlations of high and low performance. For example, entities with behavior “X” tend to perform well and entities that demonstrate property “Y” tend to behave poorly.


Use of External and Internal Security Data


The system 10 may also include internally derived security assessments. An example of such a score computation is shown in FIG. 3. The final score, $S_{Total}$, has two components: the Internal Score and the External Score.


The Internal Score, $S_{int}$, is derived from data collected and observed from inside the enterprise. Data sources that provide inputs to the internal scoring function could include, but are not limited to, the following:

    • Vulnerability scans
    • Firewall Rules
    • Incident Reports
    • Configurations
    • Software inventory
    • Policies
    • Controls
    • User Behavior


The features from each of the data sources are extracted to create a feature vector. This feature vector is $X_{int} = \{InternalFeatures\}$ in the “Internal Source Score,” as shown in FIG. 3. Features include, but are not limited to, metrics derived from the data sources (e.g., the number of remotely exploitable vulnerabilities visible from outside the entity, the number of incidents, or the number of vulnerable versions of software).


Each feature $x_i$ in $X_{int}$ has a corresponding transformation function $f_i(x_i)$ that performs a normalization transformation such that the resultants can be summed.


Each feature $x_i$ in $X_{int}$ also has a corresponding weight $\omega_i$, so that different weights can be placed on the resultant feature transformations, where the sum of the weights is unity:

$$
\sum_{i=1}^{n} \omega_i = 1.
$$

The sum of the transformed and weighted feature vector is computed by summing the resultant for each of the features:

$$
\sum_{i=1,\; x_i \in X_{int}}^{n} \omega_i\, f_i(x_i).
$$

The final score, $S_{int}$, is the summation normalized by a set of normalization factors given as $f_\alpha(x_\alpha) + f_\beta(x_\beta)$, where each normalization factor $x_\alpha, x_\beta, \ldots$ also has a factor normalization transformation function.


The computation of the Internal Score is given as:

$$
S_{int} = \frac{\displaystyle\sum_{i=1,\; x_i \in X_{int}}^{n} \omega_i\, f_i(x_i)}{f_\alpha(x_\alpha) + f_\beta(x_\beta)}.
$$
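
A small sketch of the internal-score structure above. The feature names, transformation functions, weights, and normalization factors are all hypothetical; the text specifies only the form (weighted, transformed features normalized by transformed normalization factors).

```python
# Sketch of the Internal Score S_int (hypothetical features and transforms).
import math

features = {"exploitable_vulns": 12, "incidents": 3}                      # x_i
transforms = {"exploitable_vulns": math.log1p, "incidents": math.log1p}   # f_i
weights = {"exploitable_vulns": 0.6, "incidents": 0.4}                    # sum to unity

def internal_score(norm_factors: list[float]) -> float:
    numerator = sum(weights[k] * transforms[k](x) for k, x in features.items())
    return numerator / sum(norm_factors)  # f_a(x_a) + f_b(x_b) in the formula

# hypothetical normalization factors, already transformed
print(round(internal_score([math.log1p(100), math.log1p(50)]), 3))  # ≈ 0.245
```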







The External Score is the combination of the Public Sources Score ($S_{pub}$) and the Commercial Sources Score ($S_{com}$). $S_{pub}$ and $S_{com}$ are derived using the same heuristic combinatorial functions as the Internal Score. However, the input data sources, weights, transformation functions and normalization factors are different.


$S_{pub}$ and $S_{com}$ have their own feature vectors, $X_{pub} = \{PublicFeatures\}$ and $X_{com} = \{CommercialFeatures\}$, based on the data input sources used.


Data sources in $X_{pub}$ that provide inputs to the $S_{pub}$ score could include, but are not limited to, the following:

    • Industry reports
    • Internet monitoring web sites that publish reports (ex: www.malwareurl.com)
    • News articles
    • Court records


Data sources in $X_{com}$ that provide inputs to the $S_{com}$ score could include, but are not limited to, the following:

    • Company proprietary data collected during operations
    • Renesys
    • Arbor Networks
    • Business intelligence bought from corporations and services
    • User Behavior


With the Internal and External Scores computed, the final total score is computed as the weighted sum of the three:

$$
S_{Total} = \omega_{int}\, S_{int} + \omega_{pub}\, S_{pub} + \omega_{com}\, S_{com}.
$$
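
A one-line sketch of the weighted combination above; the weight values are placeholders, as the text fixes only the form of the sum.

```python
# Sketch of the total score as a weighted sum of the three component scores.
def total_score(s_int: float, s_pub: float, s_com: float,
                w_int: float = 0.5, w_pub: float = 0.3, w_com: float = 0.2) -> float:
    return w_int * s_int + w_pub * s_pub + w_com * s_com

print(total_score(700, 760, 740))  # hypothetical component scores -> 726.0
```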


It is possible that the algorithm does not have the same inputs for all entities; more information may be available for some entities than for others. Given this, each data source is assigned a normalized confidence level based on how much it contributes to the computation of the enterprise score. Depending on the actual data that went into rating the company, the confidence level is assigned as a sum of the confidence levels associated with the data sources. The confidence level can be used to assign a range of scores for an enterprise. For instance, if an enterprise is rated as 750 with a confidence level of 0.8, the entity's actual score is reported as (750 − (1 − 0.8) × 100, 750) = (730, 750). An entity's score is deemed to be unavailable if the confidence level is below a minimum threshold of 0.5.
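
The worked example above translates directly into code. The 100-point scaling and the 0.5 cutoff come from the text; the rest is a minimal sketch.

```python
# Sketch of reporting a score range from a rating and a confidence level.
MIN_CONFIDENCE = 0.5  # below this, the score is deemed unavailable

def score_range(score: float, confidence: float) -> tuple[float, float] | None:
    if confidence < MIN_CONFIDENCE:
        return None
    return (score - (1 - confidence) * 100, score)

print(score_range(750, 0.8))  # (730.0, 750) as in the example above
print(score_range(750, 0.4))  # None
```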


It should be noted that $S_{int}$ may be zero due to a lack of available information or permission, in which case the rating becomes characteristic only of externally observable characteristics. Also, the characteristics used for this calculation can be used in conjunction with the functions and aspects of the other systems described hereinabove and below, and vice versa.


Distributed System


Referring now to FIG. 4, a schematic diagram of a central server 500, or similar network entity, configured to implement a system for creating a composite security score is provided. As used herein, the designation “central” merely serves to describe the common functionality the server provides for multiple clients or other computing devices and does not require or imply any centralized positioning of the server relative to other computing devices. As may be understood from FIG. 4, the central server 500 may include a processor 510 that communicates with other elements within the central server 500 via a system interface or bus 545. Also included in the central server 500 may be a display device/input device 520 for receiving and displaying data. This display device/input device 520 may be, for example, a keyboard or pointing device that is used in combination with a monitor. The central server 500 may further include memory 505, which may include both read only memory (ROM) 535 and random access memory (RAM) 530. The server's ROM 535 may be used to store a basic input/output system 540 (BIOS), containing the basic routines that help to transfer information across the one or more networks.


In addition, the central server 500 may include at least one storage device 515, such as a hard disk drive, a floppy disk drive, a CD-ROM drive, or an optical disk drive, for storing information on various computer-readable media, such as a hard disk, a removable magnetic disk, or a CD-ROM disk. As will be appreciated by one of ordinary skill in the art, each of these storage devices 515 may be connected to the system bus 545 by an appropriate interface. The storage devices 515 and their associated computer-readable media may provide nonvolatile storage for a central server. It is important to note that the computer-readable media described above could be replaced by any other type of computer-readable media known in the art. Such media include, for example, magnetic cassettes, flash memory cards and digital video disks.


A number of program modules may be stored by the various storage devices and within RAM 530. Such program modules may include an operating system 550 and a plurality of one or more (N) modules 560. The modules 560 may control certain aspects of the operation of the central server 500, with the assistance of the processor 510 and the operating system 550. For example, the modules may perform the functions described above and illustrated by the figures and other materials disclosed herein, such as collecting security characterizations 570, generating a composite rating 580, determining a trend 590, reporting the ratings 600, IP mapping 610, determining a badness quality metric 620, attenuating a raw score 630, correlating with statistical outcomes 640, determining a confidence range 650, predicting future performance 660 and determining an accuracy 670.


The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims
  • 1. A computer-implemented method comprising: collecting, using sensors on the Internet, information about an organization indicative of compromises, vulnerabilities or configurations of technology systems of the organization, the information about the organization being collected from two or more sources not controlled by the organization and without permission of the organization, the information comprising two or more of: a communication between the organization and a known attacker-controlled network, an IP reputation indicating whether an IP address of the organization sends unwanted requests, or configuration data indicative of a security of the organization's website; storing the collected information in a database; aggregating, by a computer, the collected information to calculate a composite rating of the organization, the aggregating comprising determining a composite of metrics and data derived or collected from the sources, the calculating comprising applying weights to the data and the metrics, and delivering a report of the composite rating of the organization to a user through a network.
  • 2. The method of claim 1, wherein the metrics include a measure of the extent of, the frequency of, or duration of compromise of the technology systems of the organization, or of a configuration or vulnerability of the organization.
  • 3. The method of claim 1, wherein the composite rating is normalized based on a size of the organization.
  • 4. The method of claim 1, wherein the information about the organization represents an aggregation of IP addresses and CIDR blocks.
  • 5. The method of claim 1, wherein the collected information is represented by at least two data types, the at least two data types including at least one of breach disclosures, block lists, configuration parameters, an identification of malware servers, an identification of a reputation, an identification of suspicious activity, an identification of spyware, white lists, an identification of compromised hosts, an identification of malicious activity, an identification of spam activity, an identification of vulnerable hosts, an identification of phishing activity, or an identification of e-mail viruses.
  • 6. The method of claim 1, wherein the collected information evidences operational execution of security measures of the organization.
  • 7. The method of claim 1, comprising: forming a series of composite ratings of the organization over time; and displaying the series of composite ratings for the organization by posting the series of composite ratings to a web portal.
  • 8. The method of claim 1, wherein the collected information comprises characterizations about information technology assets that the organization owns, controls, uses, or is affiliated with.
  • 9. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to: collect, using sensors on the Internet, information about an organization indicative of compromises, vulnerabilities or configurations of technology systems of the organization, the information about the organization being collected from two or more sources not controlled by the organization and without permission of the organization, the information comprising two or more of: a communication between the organization and a known attacker-controlled network, an IP reputation indicating whether an IP address of the organization sends unwanted requests, or configuration data indicative of a security of the organization's website; store the collected information in a database; aggregate, by a computer, the collected information to calculate a composite rating of the organization, the aggregating comprising determining a composite of metrics and data derived or collected from the sources, the calculating comprising applying weights to the data and the metrics, and deliver a report of the composite rating of the organization to a user through a network.
  • 10. The computer-readable storage medium of claim 9, wherein the metrics include a measure of the extent of, the frequency of, or duration of compromise of the technology systems of the organization, or of a configuration or vulnerability of the organization.
  • 11. The computer-readable storage medium of claim 9, wherein the composite rating is normalized based on a size of the organization.
  • 12. The computer-readable storage medium of claim 9, wherein the information about the organization represents an aggregation of IP addresses and CIDR blocks.
  • 13. The computer-readable storage medium of claim 9, wherein the collected information is represented by at least two data types, the at least two data types include at least one of breach disclosures, block lists, configuration parameters, an identification of malware servers, an identification of a reputation, an identification of suspicious activity, an identification of spyware, white lists, an identification of compromised hosts, an identification of malicious activity, an identification of spam activity, an identification of vulnerable hosts, an identification of phishing activity, or an identification of e-mail viruses.
  • 14. The computer-readable storage medium of claim 9, wherein the collected information evidences operational execution of security measures of the organization.
  • 15. The computer-readable storage medium of claim 9, further storing instructions that, when executed by the computer, cause the computer to: form a series of composite ratings of the organization over time; and display the series of composite ratings for the organization by posting the series of composite ratings to a web portal.
  • 16. The computer-readable storage medium of claim 9, wherein the collected information comprises characterizations about information technology assets that the organization owns, controls, uses, or is affiliated with.
  • 17. A computing apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the apparatus to: collect, using sensors on the Internet, information about an organization indicative of compromises, vulnerabilities or configurations of technology systems of the organization, the information about the organization being collected from two or more sources not controlled by the organization and without permission of the organization, the information comprising two or more of: a communication between the organization and a known attacker-controlled network, an IP reputation indicating whether an IP address of the organization sends unwanted requests, or configuration data indicative of a security of the organization's website; store the collected information in a database; aggregate, by a computer, the collected information to calculate a composite rating of the organization, the aggregating comprising determining a composite of metrics and data derived or collected from the sources, the calculating comprising applying weights to the data and the metrics, and deliver a report of the composite rating of the organization to a user through a network.
  • 18. The computing apparatus of claim 17, wherein the metrics include a measure of the extent of, the frequency of, or duration of compromise of the technology systems of the organization, or of a configuration or vulnerability of the organization.
  • 19. The computing apparatus of claim 17, wherein the composite rating is normalized based on a size of the organization.
  • 20. The computing apparatus of claim 17, wherein the information about the organization represents an aggregation of IP addresses and CIDR blocks.
  • 21. The computing apparatus of claim 17, wherein the collected information is represented by at least two data types, the at least two data types include at least one of breach disclosures, block lists, configuration parameters, an identification of malware servers, an identification of a reputation, an identification of suspicious activity, an identification of spyware, white lists, an identification of compromised hosts, an identification of malicious activity, an identification of spam activity, an identification of vulnerable hosts, an identification of phishing activity, or an identification of e-mail viruses.
  • 22. The computing apparatus of claim 17, wherein the collected information evidences operational execution of security measures of the organization.
  • 23. The computing apparatus of claim 17, the memory further storing instructions configured to cause the apparatus to: form a series of composite ratings of the organization over time; and display the series of composite ratings for the organization by posting the series of composite ratings to a web portal.
  • 24. The computing apparatus of claim 17, wherein the collected information comprises characterizations about information technology assets that the organization owns, controls, uses, or is affiliated with.
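For illustration, the collect-store-aggregate-deliver pipeline recited in claims 9 and 17 can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the patented implementation: the metric names, the weight values, and the use of an IP-address count as the size measure for the normalization of claims 11 and 19 are all hypothetical.

from dataclasses import dataclass

@dataclass
class Observation:
    source: str   # external source not controlled by the organization
    metric: str   # e.g. contact with an attacker-controlled network
    value: float  # observed severity or extent, scaled to [0, 1]

# Hypothetical per-metric weights; the claims leave the weighting scheme open.
WEIGHTS = {
    "attacker_network_contact": 0.5,
    "ip_reputation": 0.3,
    "website_configuration": 0.2,
}

def composite_rating(observations: list[Observation], org_ip_count: int) -> float:
    """Weighted composite of externally observed metrics (claims 9 and 17),
    crudely normalized by organization size (claims 11 and 19)."""
    weighted = sum(WEIGHTS.get(obs.metric, 0.0) * obs.value for obs in observations)
    return weighted / max(org_ip_count, 1)  # guard against division by zero

# Example: two observations drawn from independent external feeds.
collected = [
    Observation("sensor_feed_a", "attacker_network_contact", 0.8),
    Observation("block_list_b", "ip_reputation", 0.4),
]
print(f"composite rating: {composite_rating(collected, org_ip_count=256):.6f}")

A series of such ratings computed at successive dates would give the time series of claims 7, 15, and 23; delivery of the report, recited in the claims as a web portal posting, is omitted from the sketch.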
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 18/453,488 filed on Aug. 22, 2023, which is a continuation of U.S. patent application Ser. No. 17/069,151 filed on Oct. 13, 2020, which is a continuation of U.S. patent application Ser. No. 13/240,572 filed on Sep. 22, 2011, now U.S. Pat. No. 10,805,331, which claims priority to U.S. Prov. Pat. App. No. 61/386,156 entitled "Enterprise Information Security Score" and filed on Sep. 24, 2010; and U.S. Prov. Pat. App. No. 61/492,287 entitled "Information Technology Security Assessment System" and filed on Jun. 1, 2011, all of which are hereby incorporated herein in their entireties by reference.

ACKNOWLEDGEMENT

This invention was made with government support under 1127185 awarded by the National Science Foundation. The government has certain rights in this invention.

US Referenced Citations (488)
Number Name Date Kind
5867799 Lang et al. Feb 1999 A
6016475 Miller et al. Jan 2000 A
6745150 Breiman Jun 2004 B1
6785732 Bates et al. Aug 2004 B1
6792401 Nigro et al. Sep 2004 B1
7062572 Hampton Jun 2006 B1
D525264 Chotai et al. Jul 2006 S
D525629 Chotai et al. Jul 2006 S
7100195 Underwood Aug 2006 B1
7124055 Breiman Oct 2006 B2
7194769 Lippmann et al. Mar 2007 B2
7290275 Baudoin et al. Oct 2007 B2
7389262 Lange Jun 2008 B1
D604740 Matheny et al. Nov 2009 S
7650570 Torrens et al. Jan 2010 B2
7747778 King et al. Jun 2010 B1
7748038 Olivier et al. Jun 2010 B2
7827607 Sobel et al. Nov 2010 B2
D630645 Tokunaga et al. Jan 2011 S
7971252 Lippmann et al. Jun 2011 B2
8000698 Wolman et al. Aug 2011 B2
D652048 Joseph Jan 2012 S
8150538 Dubinsky Apr 2012 B2
D667022 LoBosco et al. Sep 2012 S
8359651 Wu et al. Jan 2013 B1
8370933 Buckler Feb 2013 B1
8370938 Daswani et al. Feb 2013 B1
8429630 Nickolov et al. Apr 2013 B2
D682287 Cong et al. May 2013 S
D688260 Pearcy et al. Aug 2013 S
8504556 Rice et al. Aug 2013 B1
8505094 Xuewen et al. Aug 2013 B1
D691164 Lim et al. Oct 2013 S
D694252 Helm Nov 2013 S
D694253 Helm Nov 2013 S
8584233 Yang et al. Nov 2013 B1
8601575 Mullarkey et al. Dec 2013 B2
8621621 Burns et al. Dec 2013 B1
8661146 Alex et al. Feb 2014 B2
D700616 Chao Mar 2014 S
8677481 Lee Mar 2014 B1
8683584 Daswani et al. Mar 2014 B1
8752183 Heiderich et al. Jun 2014 B1
8775402 Baskerville et al. Jul 2014 B2
8806646 Daswani et al. Aug 2014 B1
8825662 Kingman et al. Sep 2014 B1
8898776 Molnar et al. Nov 2014 B2
8949988 Adams et al. Feb 2015 B2
8966639 Roytman et al. Feb 2015 B1
D730918 Park et al. Jun 2015 S
9053210 Elnikety et al. Jun 2015 B2
9075990 Yang Jul 2015 B1
D740847 Yampolskiy et al. Oct 2015 S
D740848 Bolts et al. Oct 2015 S
D741351 Kito et al. Oct 2015 S
D746832 Pearcy et al. Jan 2016 S
9241252 Dua et al. Jan 2016 B2
9244899 Greenbaum Jan 2016 B1
9294498 Yampolskiy et al. Mar 2016 B1
D754690 Park et al. Apr 2016 S
D754696 Follett et al. Apr 2016 S
9323930 Satish Apr 2016 B1
D756371 Bertnick et al. May 2016 S
D756372 Bertnick et al. May 2016 S
D756392 Yun et al. May 2016 S
D759084 Yampolskiy et al. Jun 2016 S
D759689 Olson et al. Jun 2016 S
9372994 Yampolskiy et al. Jun 2016 B1
9373144 Ng et al. Jun 2016 B1
D760782 Kendler et al. Jul 2016 S
9384206 Bono et al. Jul 2016 B1
9401926 Dubow et al. Jul 2016 B1
9407658 Kuskov et al. Aug 2016 B1
9420049 Talmor et al. Aug 2016 B1
9424333 Bisignani et al. Aug 2016 B1
9432383 Johns et al. Aug 2016 B2
9479526 Yang Oct 2016 B1
D771103 Eder Nov 2016 S
D771695 Yampolskiy et al. Nov 2016 S
D772276 Yampolskiy et al. Nov 2016 S
9501647 Yampolskiy et al. Nov 2016 B2
D773507 Sagrillo et al. Dec 2016 S
D775635 Raji et al. Jan 2017 S
D776136 Chen et al. Jan 2017 S
D776153 Yampolskiy et al. Jan 2017 S
D777177 Chen et al. Jan 2017 S
9548988 Roundy et al. Jan 2017 B1
9560072 Xu Jan 2017 B1
D778927 Bertnick et al. Feb 2017 S
D778928 Bertnick et al. Feb 2017 S
D779512 Kimura et al. Feb 2017 S
D779514 Baris et al. Feb 2017 S
D779531 List et al. Feb 2017 S
D780770 Sum et al. Mar 2017 S
D785009 Lim et al. Apr 2017 S
D785010 Bachman et al. Apr 2017 S
D785016 Berwick et al. Apr 2017 S
9620079 Curtis Apr 2017 B2
D787530 Huang May 2017 S
D788128 Wada May 2017 S
9641547 Yampolskiy et al. May 2017 B2
9646110 Byrne et al. May 2017 B2
D789947 Sun Jun 2017 S
D789957 Wu et al. Jun 2017 S
9680855 Schultz et al. Jun 2017 B2
9680858 Boyer et al. Jun 2017 B1
D791153 Rice et al. Jul 2017 S
D791834 Eze et al. Jul 2017 S
D792427 Weaver et al. Jul 2017 S
D795891 Kohan et al. Aug 2017 S
9736019 Hardison et al. Aug 2017 B2
D796523 Bhandari et al. Sep 2017 S
D801989 Iketsuki et al. Nov 2017 S
D803237 Wu et al. Nov 2017 S
D804528 Martin et al. Dec 2017 S
D806735 Olsen et al. Jan 2018 S
D806737 Chung et al. Jan 2018 S
D809523 Lipka et al. Feb 2018 S
D809989 Lee et al. Feb 2018 S
D812633 Saneii Mar 2018 S
D814483 Gavaskar et al. Apr 2018 S
D815119 Chalker et al. Apr 2018 S
D815148 Martin et al. Apr 2018 S
D816105 Rudick et al. Apr 2018 S
D816116 Selassie Apr 2018 S
9954893 Zhao et al. Apr 2018 B1
D817970 Chang et al. May 2018 S
D817977 Kato et al. May 2018 S
D818475 Yepez et al. May 2018 S
D819687 Yampolskiy et al. Jun 2018 S
10044750 Livshits et al. Aug 2018 B2
10079854 Scott et al. Sep 2018 B1
10084817 Saher et al. Sep 2018 B2
10142364 Baukes et al. Nov 2018 B2
D835631 Yepez et al. Dec 2018 S
10180966 Lang et al. Jan 2019 B1
10185924 McClintock et al. Jan 2019 B1
10210329 Malik et al. Feb 2019 B1
10217071 Mo et al. Feb 2019 B2
10230753 Yampolskiy et al. Mar 2019 B2
10230764 Ng et al. Mar 2019 B2
10235524 Ford Mar 2019 B2
10242180 Haefner et al. Mar 2019 B2
D847169 Sombreireiro et al. Apr 2019 S
10257219 Geil et al. Apr 2019 B1
10305854 Alizadeh-Shabdiz et al. May 2019 B2
10331502 Hart Jun 2019 B1
10339321 Tedeschi Jul 2019 B2
10339484 Pai et al. Jul 2019 B2
10348755 Shavell et al. Jul 2019 B1
10412083 Zou et al. Sep 2019 B2
D863335 Hardy et al. Oct 2019 S
D863345 Hardy et al. Oct 2019 S
10453142 Mun Oct 2019 B2
10469515 Helmsen et al. Nov 2019 B2
10491619 Yampolskiy et al. Nov 2019 B2
10491620 Yampolskiy et al. Nov 2019 B2
10521583 Bagulho Monteiro Pereira Dec 2019 B1
D872574 Deylamian et al. Jan 2020 S
10540374 Singh et al. Jan 2020 B2
D874506 Kang et al. Feb 2020 S
10572945 McNair Feb 2020 B1
D880512 Greenwald et al. Apr 2020 S
D894939 Braica Sep 2020 S
10764298 Light et al. Sep 2020 B1
10776483 Bagulho Monteiro Pereira Sep 2020 B2
10796260 Brannon et al. Oct 2020 B2
D903693 Li et al. Dec 2020 S
D905712 Li et al. Dec 2020 S
D908139 Hardy et al. Jan 2021 S
10896394 Brannon et al. Jan 2021 B2
10909488 Hecht et al. Feb 2021 B2
D918955 Madden, Jr. et al. May 2021 S
D920343 Bowland May 2021 S
D920353 Boutros et al. May 2021 S
D921031 Tessier et al. Jun 2021 S
D921662 Giannino et al. Jun 2021 S
D921674 Kmak et al. Jun 2021 S
D921677 Kmak et al. Jun 2021 S
D922397 Modi et al. Jun 2021 S
D924909 Nasu et al. Jul 2021 S
11126723 Bagulho Monteiro Pereira Sep 2021 B2
11334832 Dumoulin et al. May 2022 B2
11379773 Vescio Jul 2022 B2
11455322 Yang et al. Sep 2022 B2
20010044798 Nagral et al. Nov 2001 A1
20020083077 Vardi Jun 2002 A1
20020133365 Grey et al. Sep 2002 A1
20020164983 Raviv et al. Nov 2002 A1
20030011601 Itoh et al. Jan 2003 A1
20030050862 Bleicken et al. Mar 2003 A1
20030074248 Braud et al. Apr 2003 A1
20030123424 Jung Jul 2003 A1
20030187967 Walsh et al. Oct 2003 A1
20040003284 Campbell et al. Jan 2004 A1
20040010709 Baudoin et al. Jan 2004 A1
20040024859 Bloch et al. Feb 2004 A1
20040088570 Roberts et al. May 2004 A1
20040098375 DeCarlo May 2004 A1
20040111358 Lange et al. Jun 2004 A1
20040133561 Burke Jul 2004 A1
20040133689 Vasisht Jul 2004 A1
20040193907 Patanella Sep 2004 A1
20040193918 Green et al. Sep 2004 A1
20040199791 Poletto et al. Oct 2004 A1
20040199792 Tan et al. Oct 2004 A1
20040221296 Ogielski et al. Nov 2004 A1
20040250122 Newton Dec 2004 A1
20040250134 Kohler et al. Dec 2004 A1
20050065807 DeAngelis et al. Mar 2005 A1
20050066195 Jones Mar 2005 A1
20050071450 Allen et al. Mar 2005 A1
20050076245 Graham et al. Apr 2005 A1
20050080720 Betz et al. Apr 2005 A1
20050108415 Turk et al. May 2005 A1
20050131830 Juarez et al. Jun 2005 A1
20050138413 Lippmann et al. Jun 2005 A1
20050160002 Roetter et al. Jul 2005 A1
20050234767 Bolzman et al. Oct 2005 A1
20050278726 Cano et al. Dec 2005 A1
20060036335 Banter et al. Feb 2006 A1
20060107226 Matthews et al. May 2006 A1
20060173992 Weber et al. Aug 2006 A1
20060212925 Shull et al. Sep 2006 A1
20060253581 Dixon et al. Nov 2006 A1
20060271564 Meng Muntz et al. Nov 2006 A1
20070016948 Dubrovsky et al. Jan 2007 A1
20070067845 Wiemer et al. Mar 2007 A1
20070113282 Ross May 2007 A1
20070136622 Price et al. Jun 2007 A1
20070143851 Nicodemus et al. Jun 2007 A1
20070179955 Croft et al. Aug 2007 A1
20070198275 Malden et al. Aug 2007 A1
20070214151 Thomas et al. Sep 2007 A1
20070282730 Carpenter et al. Dec 2007 A1
20080017526 Prescott et al. Jan 2008 A1
20080033775 Dawson et al. Feb 2008 A1
20080047018 Baudoin et al. Feb 2008 A1
20080091834 Norton Apr 2008 A1
20080140495 Bhamidipaty et al. Jun 2008 A1
20080140728 Fraser et al. Jun 2008 A1
20080148408 Kao et al. Jun 2008 A1
20080162931 Lord et al. Jul 2008 A1
20080172382 Prettejohn Jul 2008 A1
20080175266 Alperovitch et al. Jul 2008 A1
20080208995 Takahashi et al. Aug 2008 A1
20080209565 Baudoin et al. Aug 2008 A2
20080222287 Bahl et al. Sep 2008 A1
20080262895 Hofmeister et al. Oct 2008 A1
20080270458 Gvelesiani Oct 2008 A1
20090044272 Jarrett Feb 2009 A1
20090064337 Chien Mar 2009 A1
20090094265 Vlachos et al. Apr 2009 A1
20090125427 Atwood et al. May 2009 A1
20090132861 Costa et al. May 2009 A1
20090161629 Purkayastha et al. Jun 2009 A1
20090193054 Karimisetty et al. Jul 2009 A1
20090204235 Dubinsky Aug 2009 A1
20090216700 Bouchard et al. Aug 2009 A1
20090228830 Herz et al. Sep 2009 A1
20090265787 Baudoin et al. Oct 2009 A9
20090276835 Jackson et al. Nov 2009 A1
20090293128 Lippmann et al. Nov 2009 A1
20090299802 Brennan Dec 2009 A1
20090300768 Krishnamurthy et al. Dec 2009 A1
20090319420 Sanchez et al. Dec 2009 A1
20090323632 Nix Dec 2009 A1
20090328063 Corvera et al. Dec 2009 A1
20100017880 Masood Jan 2010 A1
20100024033 Kang et al. Jan 2010 A1
20100042605 Cheng et al. Feb 2010 A1
20100057582 Arfin et al. Mar 2010 A1
20100114634 Christiansen et al. May 2010 A1
20100114757 Jeng et al. May 2010 A1
20100186088 Banerjee et al. Jul 2010 A1
20100205042 Mun Aug 2010 A1
20100218256 Thomas et al. Aug 2010 A1
20100262444 Atwal et al. Oct 2010 A1
20100275263 Bennett et al. Oct 2010 A1
20100281124 Westman et al. Nov 2010 A1
20100281151 Ramankutty et al. Nov 2010 A1
20100309206 Xie et al. Dec 2010 A1
20110137704 Mitra et al. Jun 2011 A1
20110145168 Dirnstorfer et al. Jun 2011 A1
20110145576 Bettan Jun 2011 A1
20110148880 De Peuter Jun 2011 A1
20110185403 Dolan et al. Jul 2011 A1
20110213742 Lemmond et al. Sep 2011 A1
20110219455 Bhagwan et al. Sep 2011 A1
20110225085 Takeshita et al. Sep 2011 A1
20110231395 Vadlamani et al. Sep 2011 A1
20110239300 Klein et al. Sep 2011 A1
20110249002 Duplessis et al. Oct 2011 A1
20110282997 Prince et al. Nov 2011 A1
20110296519 Ide et al. Dec 2011 A1
20120008974 Kawai et al. Jan 2012 A1
20120036263 Madden et al. Feb 2012 A1
20120036580 Gorny et al. Feb 2012 A1
20120059823 Barber et al. Mar 2012 A1
20120089745 Turakhia Apr 2012 A1
20120158725 Molloy et al. Jun 2012 A1
20120166458 Laudanski et al. Jun 2012 A1
20120174219 Hernandez et al. Jul 2012 A1
20120198558 Liu et al. Aug 2012 A1
20120215892 Wanser et al. Aug 2012 A1
20120221376 Austin Aug 2012 A1
20120254993 Sallam Oct 2012 A1
20120255021 Sallam Oct 2012 A1
20120255027 Kanakapura et al. Oct 2012 A1
20120290498 Jones Nov 2012 A1
20120291129 Shulman et al. Nov 2012 A1
20130014253 Neou et al. Jan 2013 A1
20130055386 Kim et al. Feb 2013 A1
20130060351 Imming et al. Mar 2013 A1
20130080505 Nielsen et al. Mar 2013 A1
20130086521 Grossele et al. Apr 2013 A1
20130086687 Chess et al. Apr 2013 A1
20130091574 Howes et al. Apr 2013 A1
20130124644 Hunt et al. May 2013 A1
20130124653 Vick et al. May 2013 A1
20130142050 Luna Jun 2013 A1
20130173791 Longo Jul 2013 A1
20130212479 Willis et al. Aug 2013 A1
20130227078 Wei et al. Aug 2013 A1
20130227697 Zandani Aug 2013 A1
20130238527 Jones Sep 2013 A1
20130263270 Cote et al. Oct 2013 A1
20130276056 Epstein Oct 2013 A1
20130282406 Snyder et al. Oct 2013 A1
20130291105 Yan Oct 2013 A1
20130298244 Kumar et al. Nov 2013 A1
20130305368 Ford Nov 2013 A1
20130333038 Chien Dec 2013 A1
20130347116 Flores et al. Dec 2013 A1
20140006129 Heath Jan 2014 A1
20140019196 Wiggins et al. Jan 2014 A1
20140052998 Bloom et al. Feb 2014 A1
20140101006 Pitt Apr 2014 A1
20140108474 David et al. Apr 2014 A1
20140114755 Mezzacca Apr 2014 A1
20140114843 Klein et al. Apr 2014 A1
20140130158 Wang et al. May 2014 A1
20140137254 Ou et al. May 2014 A1
20140137257 Martinez et al. May 2014 A1
20140146370 Banner et al. May 2014 A1
20140173066 Newton et al. Jun 2014 A1
20140173736 Liu Jun 2014 A1
20140189098 MaGill et al. Jul 2014 A1
20140204803 Nguyen et al. Jul 2014 A1
20140237545 Mylavarapu et al. Aug 2014 A1
20140244317 Roberts et al. Aug 2014 A1
20140282261 Ranz et al. Sep 2014 A1
20140283056 Bachwani et al. Sep 2014 A1
20140283068 Call et al. Sep 2014 A1
20140288996 Rence et al. Sep 2014 A1
20140304816 Klein et al. Oct 2014 A1
20140330616 Lyras Nov 2014 A1
20140334336 Chen et al. Nov 2014 A1
20140337086 Asenjo et al. Nov 2014 A1
20140337633 Yang et al. Nov 2014 A1
20140344332 Giebler Nov 2014 A1
20150033331 Stern et al. Jan 2015 A1
20150033341 Schmidtler et al. Jan 2015 A1
20150052607 Al Hamami Feb 2015 A1
20150074579 Gladstone et al. Mar 2015 A1
20150081860 Kuehnel et al. Mar 2015 A1
20150088783 Mun Mar 2015 A1
20150156084 Kaminsky et al. Jun 2015 A1
20150180883 Aktas et al. Jun 2015 A1
20150195299 Zoldi et al. Jul 2015 A1
20150207776 Morin et al. Jul 2015 A1
20150248280 Pillay et al. Sep 2015 A1
20150261955 Huang et al. Sep 2015 A1
20150264061 Ibatullin et al. Sep 2015 A1
20150288706 Marshall Oct 2015 A1
20150288709 Singhal et al. Oct 2015 A1
20150310188 Ford et al. Oct 2015 A1
20150310213 Ronen et al. Oct 2015 A1
20150317672 Espinoza et al. Nov 2015 A1
20150331932 Georges et al. Nov 2015 A1
20150347754 Born Dec 2015 A1
20150347756 Hidayat et al. Dec 2015 A1
20150350229 Mitchell Dec 2015 A1
20150381649 Schultz et al. Dec 2015 A1
20160014081 Don, Jr. et al. Jan 2016 A1
20160023639 Cajiga et al. Jan 2016 A1
20160036849 Zakian Feb 2016 A1
20160065613 Cho et al. Mar 2016 A1
20160078382 Watkins et al. Mar 2016 A1
20160088015 Sivan et al. Mar 2016 A1
20160104071 Brueckner Apr 2016 A1
20160119373 Fausto et al. Apr 2016 A1
20160140466 Sidebottom et al. May 2016 A1
20160147992 Zhao et al. May 2016 A1
20160162602 Bradish et al. Jun 2016 A1
20160171415 Yampolskiy et al. Jun 2016 A1
20160173520 Foster et al. Jun 2016 A1
20160173522 Yampolskiy et al. Jun 2016 A1
20160182537 Tatourian et al. Jun 2016 A1
20160189301 Ng et al. Jun 2016 A1
20160191554 Kaminsky Jun 2016 A1
20160205126 Boyer et al. Jul 2016 A1
20160212101 Reshadi et al. Jul 2016 A1
20160241560 Reshadi et al. Aug 2016 A1
20160248797 Yampolskiy et al. Aug 2016 A1
20160253500 Alme et al. Sep 2016 A1
20160259945 Yampolskiy et al. Sep 2016 A1
20160337387 Hu et al. Nov 2016 A1
20160344769 Li Nov 2016 A1
20160344801 Akkarawittayapoom Nov 2016 A1
20160364496 Li Dec 2016 A1
20160373485 Kamble Dec 2016 A1
20160378978 Singla et al. Dec 2016 A1
20170048267 Yampolskiy et al. Feb 2017 A1
20170063901 Muddu et al. Mar 2017 A1
20170104783 Vanunu et al. Apr 2017 A1
20170142148 Buber et al. May 2017 A1
20170161253 Silver Jun 2017 A1
20170161409 Martin Jun 2017 A1
20170213292 Sweeney et al. Jul 2017 A1
20170221072 AthuluruTirumala et al. Aug 2017 A1
20170223002 Sabin et al. Aug 2017 A1
20170236078 Rasumov Aug 2017 A1
20170237764 Rasumov Aug 2017 A1
20170264623 Ficarra et al. Sep 2017 A1
20170279843 Schultz et al. Sep 2017 A1
20170289109 Caragea Oct 2017 A1
20170300911 Alnajem Oct 2017 A1
20170316324 Barrett et al. Nov 2017 A1
20170318045 Johns et al. Nov 2017 A1
20170324555 Wu et al. Nov 2017 A1
20170324766 Gonzalez Nov 2017 A1
20170337487 Nock et al. Nov 2017 A1
20180013716 Connell et al. Jan 2018 A1
20180088968 Myhre et al. Mar 2018 A1
20180103043 Kupreev et al. Apr 2018 A1
20180121659 Sawhney et al. May 2018 A1
20180123934 Gissing et al. May 2018 A1
20180124091 Sweeney et al. May 2018 A1
20180124110 Hunt et al. May 2018 A1
20180139180 Napchi et al. May 2018 A1
20180146004 Belfiore, Jr. et al. May 2018 A1
20180157468 Stachura Jun 2018 A1
20180191768 Broda et al. Jul 2018 A1
20180218157 Price et al. Aug 2018 A1
20180285414 Kondiles et al. Oct 2018 A1
20180322584 Crabtree et al. Nov 2018 A1
20180332076 Callahan et al. Nov 2018 A1
20180336348 Ng et al. Nov 2018 A1
20180337938 Kneib et al. Nov 2018 A1
20180337941 Kraning et al. Nov 2018 A1
20180349641 Barday et al. Dec 2018 A1
20180365519 Pollard et al. Dec 2018 A1
20180375896 Wang et al. Dec 2018 A1
20190034845 Mo et al. Jan 2019 A1
20190065545 Hazel et al. Feb 2019 A1
20190065748 Foster et al. Feb 2019 A1
20190079869 Baldi et al. Mar 2019 A1
20190089711 Faulkner Mar 2019 A1
20190098025 Lim Mar 2019 A1
20190124091 Ujiie et al. Apr 2019 A1
20190140925 Pon et al. May 2019 A1
20190141060 Lim May 2019 A1
20190147378 Mo et al. May 2019 A1
20190166152 Steele et al. May 2019 A1
20190179490 Barday et al. Jun 2019 A1
20190215331 Anakata et al. Jul 2019 A1
20190238439 Pugh et al. Aug 2019 A1
20190297106 Geil et al. Sep 2019 A1
20190303574 Lamay et al. Oct 2019 A1
20190362280 Vescio Nov 2019 A1
20190379632 Dahlberg et al. Dec 2019 A1
20190391707 Ristow et al. Dec 2019 A1
20190392252 Fighel et al. Dec 2019 A1
20200012794 Saldanha et al. Jan 2020 A1
20200053127 Brotherton et al. Feb 2020 A1
20200065213 Poghosyan et al. Feb 2020 A1
20200074084 Dorrans et al. Mar 2020 A1
20200092172 Kumaran et al. Mar 2020 A1
20200097845 Shaikh et al. Mar 2020 A1
20200106798 Lin Apr 2020 A1
20200125734 Light et al. Apr 2020 A1
20200183655 Barday et al. Jun 2020 A1
20200272763 Brannon et al. Aug 2020 A1
20200285737 Kraus et al. Sep 2020 A1
20200356689 McEnroe et al. Nov 2020 A1
20200356695 Brannon et al. Nov 2020 A1
20210064746 Koide et al. Mar 2021 A1
Foreign Referenced Citations (2)
Number Date Country
WO-2017142694 Jan 2019 WO
WO-2019023045 Jan 2019 WO
Non-Patent Literature Citations (192)
Entry
U.S. Appl. No. 15/271,655 Published as: US 2018/0083999, Self-Published Security Risk Management, filed Sep. 21, 2016.
U.S. Appl. No. 15/377,574 U.S. Pat. No. 9,705,932, Methods and Systems for Creating, De-Duplicating, and Accessing Data Using an Object Storage System, filed Dec. 13, 2016.
U.S. Appl. No. 14/021,585 U.S. Pat. No. 9,438,615 Published as: US2015/0074579, Security Risk Management, filed Sep. 9, 2013.
U.S. Appl. No. 15/216,955 U.S. Pat. No. 10,326,786 Published as: US 2016/0330231, Methods for Using Organizational Behavior for Risk Ratings, filed Jul. 22, 2016.
U.S. Appl. No. 15/239,063 U.S. Pat. No. 10,341,370 Published as: US2017/0093901, Security Risk Management, filed Aug. 17, 2016.
U.S. Appl. No. 16/405,121 U.S. Pat. No. 10,785,245 Published as: US2019/0260791, Methods for Using Organizational Behavior for Risk Ratings, filed May 7, 2019.
U.S. Appl. No. 17/025,930 U.S. Pat. No. 11,652,834 Published as: US2021/0006581, Methods for Using Organizational Behavior for Risk Ratings, filed Sep. 18, 2020.
U.S. Appl. No. 18/297,863 Published as: US2023/0247041, Methods for Using Organizational Behavior for Risk Ratings, filed Apr. 10, 2023.
U.S. Appl. No. 13/240,572 U.S. Pat. No. 10,805,331 Published as: US2016/0205126, Information Technology Security Assessment System, filed Sep. 22, 2011.
U.S. Appl. No. 14/944,484 U.S. Pat. No. 9,973,524 Published as: US2016/0323308, Information Technology Security Assessment System, filed Nov. 18, 2015.
U.S. Appl. No. 17/069,151 Published as: US2021/0211454, Information Technology Security Assessment System, filed Oct. 13, 2020.
U.S. Appl. No. 18/453,488, Information Technology Security Assessment System, filed Aug. 22, 2023.
U.S. Appl. No. 15/142,677 U.S. Pat. No. 9,830,569 Published as: US2016/0239772, Security Assessment Using Service Provider Digital Asset Information, filed Apr. 29, 2016.
U.S. Appl. No. 15/134,845 U.S. Pat. No. 9,680,858, Annotation Platform for a Security Risk System, filed Apr. 21, 2016.
U.S. Appl. No. 15/044,952 U.S. Pat. No. 11,182,720 Published as: US2017/0236077, Relationships Among Technology Assets and Services and the Entities Responsible for Them, filed Feb. 16, 2016.
U.S. Appl. No. 15/089,375, U.S. Pat. No. 10,176,445, Published as: US2017/0236079, Relationships Among Technology Assets and Services and the Entities Responsible for Them, filed Apr. 1, 2016.
U.S. Appl. No. 29/598,298, U.S. Pat. No. D835,631, Computer Display Screen with Graphical User Interface, filed Mar. 24, 2017.
U.S. Appl. No. 29/598,299, U.S. Pat. No. D818,475, Computer Display Screen With Security Ratings Graphical User Interface, filed Mar. 24, 2017.
U.S. Appl. No. 29/599,622, U.S. Pat. No. D847,169, Computer Display Screen With Security Ratings Graphical User Interface, filed Apr. 5, 2017.
U.S. Appl. No. 29/599,620, U.S. Pat. No. D846,562, Computer Display Screen With Security Ratings Graphical User Interface, filed Apr. 5, 2017.
U.S. Appl. No. 16/015,686 U.S. Pat. No. 10,425,380 Published as: US2018/0375822, Methods for Mapping IP Addresses and Domains to Organizations Using User Activity Data, filed Jun. 22, 2018.
U.S. Appl. No. 16/543,075 U.S. Pat. No. 10,554,619, Published as: US2019/0379632, Methods for Mapping IP Addresses and Domains to Organizations Using User Activity Data, filed Aug. 16, 2019.
U.S. Appl. No. 16/738,825 U.S. Pat. No. 10,893,021 Published as: US2020/0153787, Methods for Mapping IP Addresses and Domains to Organizations Using User Activity Data, filed Jan. 9, 2020.
U.S. Appl. No. 17/146,064 U.S. Pat. No. 11,627,109 Published as: US2021/0218702, Methods for Mapping IP Addresses and Domains to Organizations Using User Activity Data, filed Jan. 11, 2021.
U.S. Appl. No. 15/918,286 U.S. Pat. No. 10,257,219, Correlated Risk in Cybersecurity, filed Mar. 12, 2018.
U.S. Appl. No. 16/292,956 U.S. Pat. No. 10,594,723 Published as: US2019/0297106, Correlated Risk in Cybersecurity, filed Mar. 5, 2019.
U.S. Appl. No. 16/795,056 U.S. Pat. No. 10,931,705 Published as: US2020/0195681, Correlated Risk in Cybersecurity, filed Feb. 19, 2020.
U.S. Appl. No. 17/179,630 Published as: US2021/0176269, Correlated Risk in Cybersecurity, filed Feb. 19, 2021.
U.S. Appl. No. 18/365,384, Correlated Risk in Cybersecurity, filed Aug. 4, 2023.
U.S. Appl. No. 16/170,680 U.S. Pat. No. 10,521,583, Systems and Methods for Remote Detection of Software Through Browser Webinjects, filed Oct. 25, 2018.
U.S. Appl. No. 16/688,647 U.S. Pat. No. 10,776,483 Published as: US2020/0134174, Systems and Methods for Remote Detection of Software Through Browser Webinjects, filed Nov. 19, 2019.
U.S. Appl. No. 17/000,135 U.S. Pat. No. 11,126,723 Published as: US2021/0004457, Systems and Methods for Remote Detection of Software Through Browser Webinjects, filed Aug. 21, 2020.
U.S. Appl. No. 17/401,683 Published as: US2021/0374243, Systems and Methods for Remote Detection of Software Through Browser Webinjects, filed Aug. 13, 2021.
U.S. Appl. No. 18/333,768, Systems and Methods for Remote Detection of Software Through Browser Webinjects, filed Jun. 13, 2023.
U.S. Appl. No. 15/954,921 U.S. Pat. No. 10,812,520 Published as: US2019/0319979, Systems and Methods for External Detection of Misconfigured Systems, filed Apr. 17, 2018.
U.S. Appl. No. 17/014,495 U.S. Pat. No. 11,671,441 Published as: US2020/0404017, Systems and Methods for External Detection of Misconfigured Systems, filed Sep. 8, 2020.
U.S. Appl. No. 18/302,925, Systems and Methods for External Detection of Misconfigured Systems, filed Apr. 19, 2023.
U.S. Appl. No. 16/549,764 Published as: US2021/0058421, Systems and Methods for Inferring Entity Relationships via Network Communications of Users or User Devices, filed Aug. 23, 2019.
U.S. Appl. No. 16/787,650 U.S. Pat. No. 10,749,893, Systems and Methods for Inferring Entity Relationships via Network Communications of Users or User Devices, filed Feb. 11, 2020.
U.S. Appl. No. 16/583,991 U.S. Pat. No. 10,848,382, Systems and Methods for Network Asset Discovery and Association Thereof With Entities, filed Sep. 26, 2019.
U.S. Appl. No. 17/085,550 U.S. Pat. No. 11,329,878 Published as: US2021/0099347, Systems and Methods for Network Asset Discovery and Association Thereof With Entities, filed Oct. 30, 2020.
U.S. Appl. No. 29/666,942 U.S. Pat. No. D892,135, Computer Display With Graphical User Interface, filed Oct. 17, 2018.
U.S. Appl. No. 16/360,641 U.S. Pat. No. 11,200,323 Published as: US2020/0125734, Systems and Methods for Forecasting Cybersecurity Ratings Based on Event-Rate Scenarios, filed Mar. 21, 2019.
U.S. Appl. No. 17/523,166 Published as: US2022/0121753, Systems and Methods for Forecasting Cybersecurity Ratings Based on Event-Rate Scenarios, filed Nov. 10, 2021.
U.S. Appl. No. 18/455,838, Systems and Methods for Forecasting Cybersecurity Ratings Based on Event-Rate Scenarios, filed Aug. 25, 2023.
U.S. Appl. No. 16/514,771 U.S. Pat. No. 10,726,136, Systems and Methods for Generating Security Improvement Plans for Entities, filed Jul. 17, 2019.
U.S. Appl. No. 16/922,673 U.S. Pat. No. 11,030,325 Published as: US2021/0019424, Systems and Methods for Generating Security Improvement Plans for Entities, filed Jul. 7, 2019.
U.S. Appl. No. 17/307,577 U.S. Pat. No. 11,675,912 Published as: US2021/0211454, Systems and Methods for Generating Security Improvement Plans for Entities, filed May 4, 2021.
U.S. Appl. No. 18/138,803 Published as: US2023/0267215, Systems and Methods for Generating Security Improvement Plans for Entities, filed Apr. 25, 2023.
U.S. Appl. No. 29/677,306 U.S. Pat. No. D905,702, Computer Display Screen With Corporate Hierarchy Graphical User Interface, filed Jan. 18, 2019.
U.S. Appl. No. 16/775,840 U.S. Pat. No. 10,791,140, Systems and Methods for Assessing Cybersecurity State of Entities Based on Computer Network Characterization, filed Jan. 29, 2020.
U.S. Appl. No. 17/018,587 U.S. Pat. No. 11,050,779, Systems and Methods for Assessing Cybersecurity State of Entities Based on Computer Network Characterization, filed Sep. 11, 2020.
U.S. Appl. No. 16/779,437 U.S. Pat. No. 10,893,067 Published as: US2021/0243221, Systems and Methods for Rapidly Generating Security Ratings, filed Jan. 31, 2020.
U.S. Appl. No. 17/132,512 U.S. Pat. No. 11,595,427 Published as: US2021/0243221, Systems and Methods for Rapidly Generating Security Ratings, filed Dec. 23, 2020.
U.S. Appl. No. 18/158,594, Systems and Methods for Rapidly Generating Security Ratings, filed Jan. 24, 2023.
U.S. Appl. No. 18/454,959, Systems and Methods for Rapidly Generating Security Ratings, filed Aug. 24, 2023.
U.S. Appl. No. 17/119,822 U.S. Pat. No. 11,122,073, Systems and Methods for Cybersecurity Risk Mitigation and Management, filed Dec. 11, 2020.
U.S. Appl. No. 29/815,855, Computer Display With a Graphical User Interface for Cybersecurity Risk Management, filed Nov. 17, 2021.
U.S. Appl. No. 17/392,521 U.S. Pat. No. 11,689,555 Published as US 2022/0191232, Systems and Methods for Cybersecurity Risk Mitigation and Management, filed Aug. 3, 2021.
U.S. Appl. No. 18/141,654, Systems and Methods for Cybersecurity Risk Mitigation and Management, filed May 1, 2023.
U.S. Appl. No. 16/802,232 U.S. Pat. No. 10,764,298, Systems and Methods for Improving a Security Profile of an Entity Based on Peer Security Profiles, filed Feb. 26, 2020.
U.S. Appl. No. 16/942,452 U.S. Pat. No. 11,265,330 Published as: US2021/0266324, Systems and Methods for Improving a Security Profile of an Entity Based on Peer Security Profiles, filed Jul. 29, 2020.
U.S. Appl. No. 29/725,724, Computer Display With Risk Vectors Graphical User Interface, filed Feb. 26, 2020.
U.S. Appl. No. 29/736,641 U.S. Pat. No. D937,870, Computer Display With Peer Analytics Graphical User Interface, filed Jun. 2, 2020.
U.S. Appl. No. 17/039,675 U.S. Pat. No. 11,032,244 Published as: US2021/0099428, Systems and Methods for Determining Asset Importance on Security Risk Management, filed Sep. 30, 2020.
U.S. Appl. No. 17/320,997 Published as US 2021/0344647, Systems and Methods for Determining Asset Importance in Security Risk Management, filed May 14, 2021.
U.S. Appl. No. 16/884,607 U.S. Pat. No. 11,023,585, Systems and Methods for Managing Cybersecurity Alerts, filed May 27, 2020.
U.S. Appl. No. 17/236,594 U.S. Pat. No. 11,720,679 Published as: US2021/0374246, Systems and Methods for Managing Cybersecurity Alerts, filed Apr. 21, 2021.
U.S. Appl. No. 18/335,384, Systems and Methods for Managing Cybersecurity Alerts, filed Jun. 15, 2023.
U.S. Appl. No. 17/710,168 Published as: US2022/0318400, Systems and Methods for Assessing Cybersecurity Risk in a Work From Home Environment, filed Mar. 31, 2022.
U.S. Appl. No. 17/945,337 Published as US2023/0091953, Systems and Methods for Precomputation of Digital Asset Inventories, filed Sep. 15, 2022.
U.S. Appl. No. 18/359,183, Systems and Methods for Assessing Cybersecurity Efficacy of Entities Against Common Control and Maturity Frameworks Using Externally-Observed Datasets, filed Jul. 26, 2023.
U.S. Appl. No. 17/856,217 Published as: US2023/0004655, Systems and Methods for Accelerating Cybersecurity Assessments, filed Jul. 1, 2022.
U.S. Appl. No. 18/162,154 Published as: US2023/0244794, Systems and Methods for Assessment of Cyber Resilience, filed Jan. 31, 2023.
U.S. Appl. No. 18/328,142, Systems and Methods for Modeling Cybersecurity Breach Costs, filed Jun. 2, 2023.
“Agreed Upon Procedures,” Version 4.0, BITS, The Financial Institution Shared Assessments Program, Assessment Guide, Sep. 2008, 56 pages.
“Amazon Mechanical Turk,” accessed on the internet at https://www.mturk.com/, (Nov. 9, 2018), 7 pages.
“An Executive View of IT Governance,” IT Governance Institute, 2009, 32 pages.
“Assessing Risk in Turbulent Times,” A Workshop for Information Security Executives, Glassmeyter/McNamee Center for Digital Strategies, Tuck School of Business at Dartmouth, Institute for Information Infrastructure Protection, 2009, 17 pages.
“Assuring a Trusted and Resilient Information and Communications Infrastructure,” Cyberspace Policy Review, May 2009, 76 pages.
“Computer Network Graph,” accessed on the internet at http://www.opte.org, (Nov. 9, 2018), 1 page.
“Creating Transparency with Palantir,” accessed on the internet at https://www.youtube.com/watch?v=8cbGChfagUA; Jul. 5, 2012; 1 page.
“Master Security Criteria,” Version 3.0, BITS Financial Services Security Laboratory, Oct. 2001, 47 pages.
“Neo4j (neo4j.com),” accessed on the internet at https://web.archive.org/web/20151220150341/http://neo4j.com:80/developer/guide-data-visualization/; Dec. 20, 2015; 1 page.
“Palantir Cyber: Uncovering malicious behavior at petabyte scale, ” accessed on the internet at https://www.youtube.com/watch?v= EhYezV06EE; Dec. 21, 2012; 1 page.
“Palantir.com,” accessed on the internet at http://www.palantir.com/; Dec. 2015; 2 pages.
“Plugging the Right Holes,” Lab Notes, MIT Lincoln Library, Posted Jul. 2008, retrieved Sep. 14, 2010 from http://www.ll.mit.edu/publications/labnotes/pluggingtherightho . . . , 2 pages.
“Rapid7 Nexpose Vulnerability Scanner,” accessed on the internet at https://web.archive.org/web/20170520082737/https://www.rapid7.com/products/nexpose/; May 20, 2017.
“Report on Controls Placed in Operation and Test of Operating Effectiveness,” EasCorp, Jan. 1 through Dec. 31, 2008, prepared by Crowe Horwath, 58 pages.
“Shared Assessments: Getting Started,” BITS, 2008, 4 pages.
“Tenable Nessus Network Vulnerability Scanner,” accessed on the internet at https://www.tenable.com/products/nessus/nessus-professional, (Nov. 9, 2018), 13 pages.
“Twenty Critical Controls for Effective Cyber Defense: Consensus Audit,” Version 2.3, Nov. 13, 2009, retrieved on Apr. 9, 2010 from http://www.sans.org/critical-security-controls/print.php, 52 pages.
2009 Data Breach Investigations Report, study conducted by Verizon Business RISK Team, 52 pages.
Application as filed, PAIR transaction history and pending claims of U.S. Appl. No. 13/240,572, filed Nov. 18, 2015, 45 pages.
Artz, Michael Lyle, “NetSPA: A Network Security Planning Architecture,” Massachusetts Institute of Technology, May 24, 2002, 97 pages.
Azman, Mohamed et al. Wireless Daisy Chain and Tree Topology Networks for Smart Cities. 2019 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT). https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8869252 (Year: 2019).
Basinya, Evgeny A.; Yushmanov, Anton A. Development of a Comprehensive Security System. 2019 Dynamics of Systems, Mechanisms and Machines (Dynamics). https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8944700 (Year: 2019).
Bhilare et al., “Protecting Intellectual Property and Sensitive Information in Academic Campuses from Trusted Insiders: Leveraging Active Directory”, SIGUCC, Oct. 2009 (5 pages).
BitSight, “Cyber Security Myths Versus Reality: How Optimism Bias Contributes to Inaccurate Perceptions of Risk”, Jun. 2015, Dimensional Research, pp. 1-9.
Borgatti, et al., “On Social Network Analysis in a Supply Chain Context,” Journal of Supply Chain Management; 45(2): 5-22; Apr. 2009, 18 pages.
Boyer, Stephen, et al., Playing with Blocks: SCAP-Enable Higher-Level Analyses, MIT Lincoln Laboratory, 5th Annual IT Security Automation Conference, Oct. 26-29, 2009, 35 pages.
Browne, Niall, et al., “Shared Assessments Program AUP and SAS70 Frequently Asked Questions,” BITS, 4 pages.
Buckshaw, Donald L., “Use of Decision Support Techniques for Information System Risk Management,” submitted for publication in Wiley's Encyclopedia of Quantitative Risk Assessment in Jan. 2007, 11 pages.
Buehler, Kevin S., et al., “Running with risk,” The McKinsey Quarterly, No. 4, 2003, pp. 40-49.
Camelo, “Botnet Cluster Identification,” Sep. 2014, 90 pages.
Camelo, “Condenser: A Graph-based Approach for Detecting Botnets,” AnubisNetworks R&D, Amadora, Portugal and Centria, Universidade NOVA de Lisboa, Portugal, 8 pages, (Oct. 31, 2014).
Carstens, et al., “Modeling Company Risk and Importance in Supply Graphs,” European Semantic Web Conference 2017: The Semantic Web, pp. 18-31, (May 7, 2017).
Chernyshev, M. et al., “On 802.11 Access Point Locatability and Named Entity Recognition in Service Set Identifiers”, IEEE Trans. on Info. and Sec., vol. 11 No. 3 (Mar. 2016).
Chu, Matthew, et al., “Visualizing Attack Graphs, Reachability, and Trust Relationships with Navigator,” MIT Lincoln Library, VizSEC '10, Ontario, Canada, Sep. 14, 2010, 12 pages.
Chuvakin, “SIEM: Moving beyond compliance”, RSA White Paper (2010) (16 pages).
Computer Network Graph-Bees, http://bioteams.com/2007/04/30/visualizing_complex_networks.html, date accessed Sep. 28, 2016, 2 pages.
Computer Network Graph—Univ. of Michigan, http://people.cst.cmich.edu/liao1q/research.shtml, date accessed Sep. 28, 2016, 5 pages.
Crowther, Kenneth G., et al., “Principles for Better Information Security through More Accurate, Transparent Risk Scoring,” Journal of Homeland Security and Emergency Management, vol. 7, Issue 1, Article 37, 2010, 20 pages.
Davis, Lois M., et al., “The National Computer Security Survey (NCSS) Final Methodology,” Technical report prepared for the Bureau of Justice Statistics, Safety and Justice Program, RAND Infrastructure, Safety and Environment (ISE), 2008, 91 pages.
Dillon-Merrill, PhD., Robin L, et al., “Logic Trees: Fault, Success, Attack, Event, Probability, and Decision Trees,” Wiley Handbook of Science and Technology for Homeland Security, 13 pages, (Mar. 15, 2009).
Dun & Bradstreet Corp. Stock Report, Standard & Poor's, Jun. 6, 2009, 8 pages.
Dun & Bradstreet, The DUNSRight Quality Process: Power Behind Quality Information, 24 pages.
Edmonds, Robert, “ISC Passive DNS Architecture”, Internet Systems Consortium, Inc., Mar. 2012, 18 pages.
Equifax Inc. Stock Report, Standard & Poor's, Jun. 6, 2009, 8 pages.
Gephi (gephi.org), accessed on the internet at https://web.archive.org/web/20151216223216/https://gephi.org/; Dec. 16, 2015; 1 page.
Gilgur, et al., “Percentile-Based Approach to Forecasting Workload Growth” Proceedings of CMG'15 Performance and Capacity International Conference by the Computer Measurement Group. Nov. 2015 (Year:2015), 16 pages.
Gundert, Levi, “Big Data in Security—Part III: Graph Analytics,” accessed on the Internet at https://blogs.cisco.com/security/big-data-in-security-part-iii-graph-analytics; Cisco Blog, Dec. 2013, 8 pages.
Hachem, Sara; Toninelli, Alessandra; Pathak, Animesh; Issany, Valerie. Policy-Based Access Control in Mobile Social Ecosystems. 2011 IEEE International Symposium on Policies for Distributed Systems and Networks (Policy). http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5976796. 8 pages, (Jun. 6, 2011).
Hacking Exposed 6, S. McClure et al., copyright 2009, 37 pages.
Ingols, Kyle, et al., “Modeling Modern Network Attacks and Countermeasures Using Attack Graphs,” MIT Lincoln Laboratory, 16 pages, (Dec. 7, 2009).
Ingols, Kyle, et al., “Practical Attack Graph Generation for Network Defense,” MIT Lincoln Library, IEEE Computer Society, Proceedings of the 22nd Annual Computer Security Applications Conference (ACSAC'06), 2006, 10 pages.
Ingols, Kyle, et al., “Practical Experiences Using SCAP to Aggregate CND Data,” MIT Lincoln Library, Presentation to Nist Scap Conference, Sep. 24, 2008, 59 pages.
Jean, “Cyber Security: How to use graphs to do an attack analysis,” accessed on the internet at https://linkurio.us/blog/cyber-security-use-graphs-attack-analysis/; Aug. 2014, 11 pages.
Jin et al, “Identifying and tracking suspicious activities through IP gray space analysis”, MineNet, Jun. 12, 2007 (6 pages).
Johnson, Eric, et al., “Information Risk and the Evolution of the Security Rating Industry,” Mar. 24, 2009, 27 pages.
Joslyn, et al., “Massive Scale Cyber Traffic Analysis: A Driver for Graph Database Research,” Proceedings of the First International Workshop on Graph Data Management Experience and Systems (Grades 2013), 6 pages.
KC Claffy, “Internet measurement and data analysis: topology, workload, performance and routing statistics,” accessed on the Internet at http://www.caida.org/publications/papers/1999/Nae/Nae.html, NAE '99 workshop, 1999, 22 pages.
Li et al., “Finding the Linchpins of the Dark Web: a Study on Topologically Dedicated Hosts on Malicious Web Infrastructures”, IEEE, 2013 (15 pages).
Lippmann, Rich, et al., NetSPA: a Network Security Planning Architecture, MIT Lincoln Laboratory, 11 pages.
Lippmann, Richard, et al., “Validating and Restoring Defense in Depth Using Attack Graphs,” MIT Lincoln Laboratory, 10 pages, (Oct. 23, 2006).
Lippmann, RP., et al., “An Annotated Review of Papers on Attack Graphs,” Project Report IA-1, Lincoln Laboratory, Massachusetts Institute of Technology, Mar. 31, 2005, 39 pages.
Lippmann, RP., et al., “Evaluating and Strengthening Enterprise Network Security Using Attack Graphs,” Project Report IA-2, MIT Lincoln Laboratory, Oct. 5, 2005, 96 pages.
Luo, Hui; Henry, Paul. A Secure Public Wireless LAN Access Technique That Supports Walk-Up Users. GLOBECOM '03. IEEE Global Telecommunications Conference. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1258471 (Year: 2003).
Maltego XL, accessed on the Internet at https://www.paterva.com/web7/buy/maltego-clients/maltego-xl.php, 5 pages, (Nov. 7, 2018).
Massimo Candela, “Real-time BGP Visualisation with BGPlay,” accessed on the Internet at https://labs.ripe.net/Members/massimo_candela/real-time-bgp-visualisation-with-bgplay, Sep. 30, 2015, 8 pages.
Maxmind, https://www.maxmind.com/en/about-maxmind, https://www.maxmind.com/en/geoip2-isp-database, date accessed Sep. 28, 2016, 3 pages.
McNab, “Network Security Assessment,” copyright 2004, 13 pages.
McNab, “Network Security Assessment,” copyright 2004, 56 pages.
Method Documentation, CNSS Risk Assessment Tool Version 1.1, Mar. 31, 2009, 24 pages.
Mile 2 CPTE Maltego Demo, accessed on the internet at https://www.youtube.com/watch?v=020NKOUzPOU; Jul. 12, 2012; 1 page.
Moradi, et al., “Quantitative Models for Supply Chain Management,” IGI Global, 2012, 29 pages.
Morningstar Direct, dated to Nov. 12, 2020, morningstardirect.com [online]. Retrieved Feb. 26, 2021 from internet URL:https://web.archive.org/web/20201112021943/https://www.morningstar.com/products/direct, (Year: 2020).
Netcraft, www.netcraft.com, date accessed Sep. 28, 2016, 2 pages.
NetScanTools Pro, http://www.netscantools.com/nstpromain.html, date accessed Sep. 28, 2016, 2 pages.
Noel, et al., “Big-Data Architecture for Cyber Attack Graphs, Representing Security Relationships in NoSQL Graph Databases,” The MITRE Corporation, 2014, 6 pages.
Nye, John, “Avoiding Audit Overlap,” Moody's Risk Services, Presentation, Source Boston, Mar. 14, 2008, 19 pages.
PAIR transaction history and pending claims for U.S. Appl. No. 14/021,585, filed Apr. 29, 2016, 2 pages.
PAIR transaction history and pending claims for U.S. Appl. No. 14/021,585, filed Nov. 18, 2015, 6 pages.
PAIR transaction history of U.S. Appl. No. 13/240,572 and pending claims filed Mar. 22, 2016, 10 pages.
PAIR transaction history of U.S. Appl. No. 13/240,572, filed Oct. 7, 2015, application as filed and pending claims, 45 pages.
PAIR transaction history of U.S. Appl. No. 14/021,585 and pending claims filed Mar. 22, 2016, 2 pages.
PAIR transaction history of U.S. Appl. No. 14/021,585, filed Oct. 7, 2015 and application as filed, 70 pages.
PAIR transaction history of U.S. Appl. No. 14/944,484 and pending claims filed Mar. 22, 2016, 4 pages.
PAIR transaction history of U.S. Appl. No. 61/386,156, filed Oct. 7, 2015, 2 pages.
PAIR transaction history, application as filed and pending claims for U.S. Appl. No. 13/240,572, filed Apr. 29, 2016, 46 pages.
PAIR transaction history, application as filed and pending claims for U.S. Appl. No. 14/944,484, filed Apr. 29, 2016, 4 pages.
Paxson, Vern, “How The Pursuit of Truth Led Me To Selling Viagra,” EECS Department, University of California, International Computer Science Institute, Lawrence Berkeley National Laboratory, Aug. 13, 2009, 68 pages.
Proposal and Award Policies and Procedures Guide, Part I—Proposal Preparation & Submission Guidelines GPG, The National Science Foundation, Feb. 2009, 68 pages.
Provos et al., “The Ghost In the Browser Analysis of Web-based Malware”, 2007 (9 pages).
Rare Events, Jason, The MITRE Corporation, Oct. 2009, 104 pages.
Rees, L. P. et al., “Decision support for cybersecurity risk planning.” Decision Support Systems 51.3 (2011): pp. 493-505.
Report to the Congress on Credit Scoring and Its Effects on the Availability and Affordability of Credit, Board of Governors of the Federal Reserve System, Aug. 2007, 304 pages.
RFC 1834, https://tools.ietf.org/html/rfc1834, date accessed Sep. 28, 2016, 7 pages.
RFC 781, https://tools.ietf.org/html/rfc781, date accessed Sep. 28, 2016, 3 pages.
RFC 950, https://tools.ietf.org/html/rfc950, date accessed Sep. 28, 2016, 19 pages.
RFC 954, https://tools.ietf.org/html/rfc954, date accessed Sep. 28, 2016, 5 pages.
SamSpade Network Inquiry Utility, https://www.sans.org/reading-room/whitepapers/tools/sam-spade-934, date accessed Sep. 28, 2016, 19 pages.
Santos, J. R. et al., “A framework for linking cybersecurity metrics to the modeling of macroeconomic interdependencies.” Risk Analysis: An International Journal (2007) 27.5, pp. 1283-1297.
SBIR Phase I: Enterprise Cyber Security Scoring, CyberAnalytix, LLC, http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1013603, Apr. 28, 2010, 2 pages.
Search Query Report from IP.com (performed Apr. 27, 2020).
Search Query Report from IP.com (performed Jul. 29, 2022).
Security Warrior, Cyrus Peikari, Anton Chuvakin, Chapter 8: Reconnaissance, 6 pages, (Jan. 2004).
Seigneur et al., A Survey of Trust and Risk Metrics for a BYOD Mobile Worker World: Third International Conference on Social Eco-Informatics, 2013, 11 pages.
Seneviratne et al., “SSIDs in the Wild: Extracting Semantic Information from WiFi SSIDs,” HAL archives-ouvertes.fr, HAL Id: hal-01181254, Jul. 29, 2015, 5 pages.
Snort Intrusion Monitoring System, http://archive.oreilly.com/pub/h/1393, date accessed Sep. 28, 2016, 3 pages.
Srivastava, Divesh; Velegrakis, Yannis. Using Queries to Associate Metadata with Data. IEEE 23rd International Conference on Data Engineering. Pub. Date: 2007. http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4221823, 3 pages.
Stone-Gross, Brett, et al., “Fire: Finding Rogue Networks,” 10 pages, (Dec. 7, 2009).
Taleb, Nassim N., et al., “The Six Mistakes Executives Make in Risk Management,” Harvard Business Review, Oct. 2009, 5 pages.
The CIS Security Metrics v1.0.0, The Center for Internet Security, May 11, 2009, 90 pages.
The Fair Credit Reporting Act (FCRA) of the Federal Trade Commission (FTC), Jul. 30, 2004, 86 pages.
The Financial Institution Shared Assessments Program, Industry Positioning and Mapping Document, BITS, Oct. 2007, 44 pages.
Wagner, et al., “Assessing the vulnerability of supply chains using graph theory,” Int. J. Production Economics 126 (2010) 121-129.
Wikipedia, https://en.wikipedia.org/wiki/Crowdsourcing, date accessed Sep. 28, 2016, 25 pages.
Williams, Leevar, et al., “An Interactive Attack Graph Cascade and Reachability Display,” MIT Lincoln Laboratory, 17 pages, (Jan. 2007).
Williams, Leevar, et al., “GARNET: A Graphical Attack Graph and Reachability Network Evaluation Tool,” MIT Lincoln Library, VizSEC 2009, pp. 44-59, (Sep. 15, 2008).
Winship, C., “Models for sample selection bias”, Annual review of sociology, 18(1) (Aug. 1992), pp. 327-350.
Related Publications (1)
Number Date Country
20230421600 A1 Dec 2023 US
Provisional Applications (2)
Number Date Country
61492287 Jun 2011 US
61386156 Sep 2010 US
Continuations (3)
Number Date Country
Parent 18453488 Aug 2023 US
Child 18461087 US
Parent 17069151 Oct 2020 US
Child 18453488 US
Parent 13240572 Sep 2011 US
Child 17069151 US