Cyber security sharing and identification system

Information

  • Patent Grant
  • Patent Number
    10,873,603
  • Date Filed
    Friday, March 16, 2018
  • Date Issued
    Tuesday, December 22, 2020
Abstract
Systems and techniques for sharing security data are described herein. Security rules and/or attack data may be automatically shared, investigated, enabled, and/or used by entities. A security rule may be enabled on different entities comprising different computing systems to combat similar security threats and/or attacks. Security rules and/or attack data may be modified to redact sensitive information and/or configured through access controls for sharing.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is related to but does not claim priority from U.S. patent application Ser. No. 13/968,265 entitled “Generating Data Clusters With Customizable Analysis Strategies” filed Aug. 15, 2013, and U.S. patent application Ser. No. 13/968,213 entitled “Prioritizing Data Clusters With Customizable Scoring Strategies” filed Aug. 15, 2013, which are hereby incorporated by reference in their entireties and collectively referred to herein as the “Cluster references.”


This application is related to but does not claim priority from U.S. Pat. No. 8,515,912 entitled “Sharing And Deconflicting Data Changes In A Multimaster Database System” filed Jul. 15, 2010, U.S. Pat. No. 8,527,461 entitled “Cross-ACL Multi-Master Replication” filed Nov. 27, 2012, U.S. patent application Ser. No. 13/076,804 entitled “Cross-Ontology Multi-Master Replication” filed Mar. 31, 2011, U.S. patent application Ser. No. 13/657,684 entitled “Sharing Information Between Nexuses That Use Different Classification Schemes For Information Access Control” filed Oct. 22, 2012, and U.S. patent application Ser. No. 13/922,437 entitled “System And Method For Incrementally Replicating Investigative Analysis Data” filed Jun. 20, 2013, which are hereby incorporated by reference in their entireties and collectively referred to herein as the “Sharing references.”


This application is related to but does not claim priority from U.S. Pat. No. 8,489,623 entitled “Creating Data In A Data Store Using A Dynamic Ontology” filed May 12, 2011, which is hereby incorporated by reference in its entirety and referred to herein as the “Ontology reference.”


BACKGROUND

In the area of computer-based platforms, security data may be collected, analyzed, and used to protect computer networks from unwanted behavior.


SUMMARY

The systems, methods, and devices described herein each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this disclosure, several non-limiting features will now be discussed briefly.


In some embodiments, a system for sharing security information comprises one or more computing devices programmed via executable code instructions. When executed, the executable code instructions may cause the system to implement an attack data unit. The attack data unit may be configured to receive a plurality of security attack data from one or more entities. The security attack data may comprise information regarding one or more security attacks detected by respective entities. When further executed, the executable code instructions may cause the system to implement an attack data modification unit. The attack data modification unit may be configured to redact portions of the security attack data from various entities such that the redacted portions are not detectable in the security attack data. When further executed, the executable code instructions may cause the system to implement a distribution unit configured to share the security attack data with multiple entities.


In some embodiments, a non-transitory computer storage comprises instructions for causing one or more computing devices to share security information. When executed, the instructions may receive a plurality of security attack data from multiple entities. The security attack data may comprise information regarding one or more security attacks detected by respective entities. When further executed, the instructions may generate a ruleset based on the plurality of security attack data from multiple entities. The ruleset may be adapted to be executable by a plurality of the entities to recognize one or more security attacks. When further executed, the instructions may redact configurable portions of the security attack data from various entities such that the redacted portions are not detectable in the ruleset. When further executed, the instructions may transmit the ruleset, or a subset thereof, to one or more entities.


In some embodiments, a method for sharing security information comprises receiving a plurality of security attack data from multiple entities. The security attack data may comprise information regarding one or more security attacks detected by respective entities. The method may further comprise generating a ruleset based on the plurality of security attack data from multiple entities. The ruleset may be adapted to be executable by a plurality of the entities to recognize one or more security attacks. The method may further comprise redacting configurable portions of the security attack data from various entities such that the redacted portions are not detectable in the ruleset. The method may further comprise transmitting the ruleset to one or more entities.


In some embodiments, a system for sharing security information may share security attack data regarding specific security attacks and/or strategies for detecting security attacks.





BRIEF DESCRIPTION OF THE DRAWINGS

Certain aspects of the disclosure will become more readily appreciated as those aspects become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings.



FIG. 1 is a block diagram illustrating an example security sharing system, according to some embodiments of the present disclosure.



FIG. 2 is a flowchart illustrating an example attack sharing process, according to some embodiments of the present disclosure.



FIG. 3 is a flowchart illustrating an example ruleset sharing process for a specific attack, according to some embodiments of the present disclosure.



FIG. 4 is a flowchart illustrating an example modification process for attack data and/or rulesets, according to some embodiments of the present disclosure.



FIG. 5 is a block diagram illustrating an example security sharing system sharing attack data, rulesets, and/or modified attack data, according to some embodiments of the present disclosure.



FIG. 6 is a flowchart illustrating an example ruleset sharing process for multiple attacks, according to some embodiments of the present disclosure.



FIG. 7 is a block diagram illustrating an example security sharing system sharing attack data, rulesets, and/or modified attack data from different entities, according to some embodiments of the present disclosure.



FIG. 8A illustrates an example security sharing table, according to some embodiments of the present disclosure.



FIG. 8B illustrates example attack data and/or modified attack data based on security attacks, according to some embodiments of the present disclosure.



FIG. 8C illustrates example attack data and/or rulesets based on multiple security attacks, according to some embodiments of the present disclosure.



FIG. 8D illustrates example rulesets in a format comprising code instructions, according to some embodiments of the present disclosure.



FIG. 9A illustrates an example user interface of the security sharing system, according to some embodiments of the present disclosure.



FIG. 9B illustrates an example user interface for viewing data objects within the security sharing system, according to some embodiments of the present disclosure.



FIG. 10 is a block diagram illustrating an example security sharing system with which various methods and systems discussed herein may be implemented.





DETAILED DESCRIPTION

Security information may be shared with the goal of improving particular aspects of computer and/or cyber security. For example, in a computer-based context, security information and strategies may be orally shared among institutions and/or individuals at a cyber-security conference. The institutions and/or individuals may then implement detection and/or defensive strategies on their computing systems. In another example, in a software application-based context, antivirus software may detect malicious files and/or code through sharing of hash values of malicious files and/or code. Upon scanning a computing device, malicious files and/or code may be removed and/or quarantined.


In addition to sharing of security data orally and/or in an informal entity-to-entity manner, disclosed herein are systems for sharing security information among multiple entities. Using the techniques and systems described herein, security threats may be addressed more preemptively and/or efficiently by utilizing more information and/or analysis from other entities. Those techniques and systems may comprise sharing attack information and/or generic rules, automatically and/or in an ad hoc manner, to combat security threats.


Sharing attack information may allow for distributive and/or efficient responses to security threats. Institutions, organizations, entities, and/or the government may share attack information automatically and/or in an ad hoc manner. The security sharing system may modify attack data to redact confidential and/or sensitive information for sharing with other entities.


Sharing of generic rules through the security sharing system may efficiently combat security threats. In some embodiments, a rule may be generated by human technicians and/or any participating entity and pushed to the security sharing system for use by other participating entities. In some embodiments, a rule may be generated by the security sharing system following an attack on any entity using the system. A rule may differ from a specific attack by comprising more abstract characteristics of an attack that may be used to proactively detect other attacks. The rules may be configured to be enabled and/or implemented on other entities and/or computing systems to defend against and/or combat security attacks using their own proprietary data.


System Overview



FIG. 1 illustrates a security sharing system, according to some embodiments of the present disclosure. In the example embodiment of FIG. 1, the security sharing environment 190 comprises a network 120, a security sharing system 100, one or more security attacks 130 (including security attacks 130A, 130B, 130C, 130D, 130E, 130F, 130G, 130H, 130I in the example of FIG. 1), and one or more entities 110 (including entities 110A, 110B, and 110C in the example of FIG. 1). The network 120 may comprise, but is not limited to, one or more local area networks, wide area networks, wireless local area networks, wireless wide area networks, the Internet, or any combination thereof. As shown by FIG. 1, the security sharing system 100 may share security data with one or more entities 110.


In some embodiments, the security sharing environment 190 may not comprise a network. Some entities, such as, but not limited to, government entities, may not be connected to a network and/or may not want to share security information via a network. Thus, sharing of security information by the security sharing system may occur via physical transport, such as, but not limited to, Universal Serial Bus drives, external hard drives, and/or any other type of media storage.


The entities 110 may comprise one or more computing devices. For example, an entity 110 may be an institution, such as, but not limited to, a bank, a financial institution, or the government, comprised of one or more computer servers. As illustrated by entity 110B, an entity may comprise one or more computing devices connected via a network 140A, such as a secure LAN. The entity 110B may communicate with a security sharing device 150A, which may reside outside the network 140A. The security sharing device 150A may then communicate with the security sharing system 100. The security sharing device 150 may comprise all of that institution's cyber data and/or analysis. As illustrated by entity 110C, the security sharing device 150B may reside within an entity's network 140B. The entity may be connected via the network 120.


There may be many different variations and/or types of a security attack 130. Several examples of security attacks are discussed herein, but these examples do not limit the types of security attacks the security sharing system may be able to detect. For example, a security attack may comprise malicious code and/or software, such as, but not limited to, Trojans, viruses, worms, and/or bots. Security attacks and/or security patterns may evolve and/or change over time. Therefore, the security sharing system may be configured to handle different types of security attacks and/or the changes in security attacks over time.


For example, a Trojan installs and/or infects itself in the computer servers of an entity, such as, a banking institution. The Trojan originates from a computing device with a source identifier, such as, but not limited to, an Internet Protocol (“IP”) address. The Trojan then steals information and/or data from the entity, such as, but not limited to, credit card data, social security numbers, bank account numbers, personal data, names, phone numbers, addresses, and/or driver's license numbers. The Trojan sends the stolen information and/or data over the network 120 to one or more computing devices.


A security attack may comprise an attempt to make a computing device unavailable, such as, but not limited to, a denial of service (DOS) attack, or a distributed denial of service (DDOS) attack. For example, multiple computing devices saturate one or more computing devices of an entity with communication requests. In a DOS or DDOS attack, a goal of the multiple communication requests may be to make one or more computing devices unavailable to the network by overloading the computing devices with requests.


The security sharing system 100 may operate as a single instance, client server system, or as a distributed system. For example, there may be multiple instances of the security sharing system 100 running simultaneously that communicate through the network 120. In some embodiments, each security sharing system instance operates independently and/or autonomously. In some embodiments, there is a central server of the security sharing system 100 and individual clients of the security sharing system communicate with the central server via the network 120.


Example Attack Sharing Processes



FIG. 2 is a flowchart illustrating an attack sharing process, according to some embodiments of the present disclosure. The method of FIG. 2 may be performed by the security sharing system and/or one or more entities discussed above with reference to FIG. 1. Depending on the embodiment, the method of FIG. 2 may include fewer or additional blocks and/or the blocks may be performed in an order different than is illustrated.


Beginning in block 202, an attack is made on one of the computing devices of an entity 110. As noted above, various activities may be considered attacks on an entity 110. For example, a malicious application, such as a Trojan, may be installed on one or more of an entity's internal computing devices. The malicious application may originate from an IP address.


At block 204, the security attack is identified and/or recorded as attack data. For example, the entity 110 may identify malicious software and/or communications from an external IP address, such as using antivirus or firewall software. The attack data may comprise information about the security attack, such as the IP address of the malicious application, an identifier for the malicious application, and/or the IP addresses of the internal computing devices that were attacked and/or infected. In some embodiments, the security attack may be identified automatically, such as by security and/or antivirus software, or the attack may be identified by a human operator, such as a system administrator or information technology technician. In some embodiments, attacks are initially detected by software and then a human technician confirms a software-detected attack before the attack data is shared, such as according to the process described below.


In some embodiments, identification of attacks occurs by the systems, methods, and/or techniques disclosed in the Cluster references. For example, related security attacks may be clustered as illustrated by U.S. patent application Ser. No. 13/968,265. A human technician may then view and analyze the cluster of related security attacks. Clusters of security attacks may also receive rankings and/or scorings as illustrated by U.S. patent application Ser. No. 13/968,213.


In some embodiments, attack data, rulesets, and/or other security information may be a data object according to the systems, methods, and/or techniques disclosed in the Ontology reference. For example, attack data, rulesets, and/or other security information may be included in data objects that are included in an ontology, which may be shared with other entities across the security sharing system, and/or the data objects may remain uniform across the entities with which they are shared.


At block 205, the attack data may be optionally modified for sharing. For example, information regarding computing systems and/or employees of the entity 110 that was attacked may be removed from the attack data before it is shared with the security sharing system 100. The entity 110 may remove and/or modify data regarding the attack and/or the security sharing system 100 may remove and/or modify data regarding the attack once received from the entity 110 (e.g., as discussed below in block 206). In some embodiments, the attack data may be modified such that the network architecture and/or other proprietary information of the entity 110 is not indicated. For example, an entity 110 may want to share the attack data with others without exposing the particular network architecture and/or the specific computing devices that are part of the attack.


Next, at block 206 the attack data may be provided by the entity 110 to the security sharing system 100, such as via the network 120 of FIG. 1. Depending on the embodiment, the attack data may be shared in various manners, such as via a shared network location that stores the data, a direct communication via an email or HTTP communication, or in any other manner. The attack data may be in various formats, such as a database format, files, XML, JSON, a file format that is proprietary to the security sharing system 100, or any other format, and may be encrypted or have security of any available type.


In some embodiments, sharing of attack data at block 206 occurs by the systems, methods, and/or techniques disclosed in the Sharing references. For example, attack data may be shared and deconflicted through a replicated database system as illustrated by U.S. Pat. No. 8,515,912. Attack data may also be shared through a database system with multiple ontologies as illustrated by U.S. patent application Ser. No. 13/076,804. The sharing of attack data may also occur via incremental database replication as illustrated by U.S. patent application Ser. No. 13/922,437.


In some embodiments, the clusters generated by the systems, methods, and/or techniques disclosed in the Cluster references and/or other security information may be shared by the systems, methods, and/or techniques disclosed in the Sharing references, other mechanisms illustrated in this disclosure, and/or any other manner.


At block 208, the attack data that is received at the security sharing system 100 is wholly or partially shared with one or more entities 110. For example, if the attack data is received from entity 110A, the security sharing system 100 may share the attack data with entities 110B, 110C, and/or external systems, such as in accordance with sharing preferences of the entities.


At block 210, the attack data may be optionally used by the entities with which the attack data is shared. For example, the attack data may be used to proactively detect and hopefully prevent similar attacks.


There may be some variations of the optional use of the attack data at block 210. For example, an entity 110B, such as a financial institution, may implement security defenses, such as, antivirus software, to defend against the malicious application identified in the attack data. In another example, the entity 110B, may conduct a search for the malicious application throughout the entity's computing devices based on the received attack data. Thus, the attack data may be used to identify previous attacks (which may be ongoing) and initiate recovery actions (e.g., quarantining infected files, blocking IP addresses, deleting files, reinstalling drivers, etc.).


Rulesets


A ruleset may comprise code instructions and/or sets of data that enable a security system and/or entity to implement detection and/or defense strategies against security attacks. A ruleset differs from specific attack data (such as was discussed above with reference to FIG. 2) in that a ruleset may include more abstract characteristics of an attack that can be used to detect other attacks. For example, a ruleset may comprise code instructions and/or data representing a beaconing behavior based on a pattern of communication and/or call-back behavior by one or multiple entities from/to a common IP address (or group of IP addresses). In general, beaconing comprises communication from an entity's internal computing device to an external computing device that is believed to be malicious. Beaconing often occurs at regular intervals, such as every twenty-four hours. Thus, a beacon is traffic leaving the inside of a network at regular intervals. A beacon may be referred to as a phone home or heartbeat. Beacons can be used for malicious purposes, such as obtaining new orders from a command and control (C&C) server, as well as downloading updates or other tools (e.g., to malicious software installed on the entity's computing system). In some embodiments, beaconing may occur at irregular intervals, such as intervals that are determined based on a predetermined pattern algorithm, or at random intervals. Thus, a ruleset that identifies beaconing behavior does not necessarily indicate particular entities that have been attacked by the beaconing behavior, but may include characteristics of the beaconing behavior, such as may be reported by multiple entities, which can be used to detect similar attacks. For example, a beaconing ruleset may identify a group of IP addresses to which beacons are transmitted (e.g., from an attacked entity), a beaconing period (e.g., a fixed time period, a range of time periods, or an algorithmic expression that defines a time period), and/or any other information that may be used by an entity to identify similar beaconing activities. Sharing of a beaconing ruleset, whether based on beaconing activities of a single entity or beaconing activities of multiple entities (e.g., as may be developed by the security sharing system 100 in response to receiving beaconing activity data from multiple entities), may enable early detection by the entity and/or the security sharing system of beaconing behavior.
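
For illustration only, the following is a minimal Python sketch of how an entity might evaluate such a beaconing ruleset against its own outbound traffic logs; the log format, field names, watched IP group, and jitter tolerance are assumptions of this example and are not part of the disclosure.

from collections import defaultdict

# Hypothetical ruleset parameters: watched destination IPs and expected period.
WATCHED_IPS = {"203.0.113.7", "203.0.113.8"}  # example group of suspect IPs
PERIOD_SECONDS = 24 * 60 * 60                 # e.g., a twenty-four hour beacon
TOLERANCE_SECONDS = 300                       # allow five minutes of jitter

def find_beacons(events):
    """events: iterable of (timestamp_seconds, src_ip, dst_ip) tuples drawn
    from an entity's outbound traffic log (format assumed for illustration)."""
    times_by_pair = defaultdict(list)
    for ts, src, dst in events:
        if dst in WATCHED_IPS:
            times_by_pair[(src, dst)].append(ts)

    alerts = []
    for (src, dst), times in times_by_pair.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Flag source/destination pairs whose contacts recur near the period.
        if gaps and all(abs(g - PERIOD_SECONDS) <= TOLERANCE_SECONDS for g in gaps):
            alerts.append((src, dst, len(times)))
    return alerts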


A ruleset may be in various formats, such as source code and/or code instructions, a database format, files, XML, JSON, a file format that is proprietary to the security sharing system 100, or any other format, and may be encrypted or have security of any available type, and/or some combination thereof. In some embodiments, a ruleset may be enabled on different entities and/or computer systems without modification. For example, two different entities may use different proxy providers (e.g., computer servers that act as intermediaries for requests from clients seeking resources from other services). The ruleset may be sufficiently abstracted such that, with some configuration data and/or configuration of the entity's computing devices, the same ruleset checking for beaconing (or other) behavior may be run on different entities that have different proxy providers.
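
One hedged way to picture this abstraction, as a sketch only: the shared detection logic stays fixed while per-entity configuration supplies a parser for that entity's proxy log format. The parser names and log layouts below are invented for illustration.

# Shared ruleset logic stays fixed; per-entity configuration supplies a parser
# for that entity's proxy log format (layouts below are invented).
def parse_proxy_a_line(line):
    ts, src, dst = line.split()[:3]
    return float(ts), src, dst

def parse_proxy_b_line(line):
    fields = line.split("|")
    return float(fields[0]), fields[2], fields[3]

ENTITY_CONFIG = {
    "entity_1": parse_proxy_a_line,
    "entity_2": parse_proxy_b_line,
}

def run_ruleset(entity_id, log_lines, ruleset):
    parse = ENTITY_CONFIG[entity_id]            # entity-specific configuration
    events = (parse(line) for line in log_lines)
    return ruleset(events)                      # same shared ruleset everywhere

For example, run_ruleset("entity_1", log_lines, find_beacons) and run_ruleset("entity_2", log_lines, find_beacons) would apply the same beaconing check from the sketch above to two differently formatted proxy logs.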


In some embodiments, rulesets and/or the output of rulesets may be clustered by the systems, methods, and/or techniques disclosed in the Cluster references. For example, related rulesets and/or the output of rulesets may be clustered as illustrated by U.S. patent application Ser. No. 13/968,265. A human technician may then view and analyze the cluster of related rulesets and/or output of the rulesets.


Example Ruleset Sharing



FIG. 3 is a flowchart illustrating a ruleset sharing process for a specific attack, according to some embodiments of the present disclosure. The method of FIG. 3 may be performed by the security sharing system and/or one or more entities discussed above with reference to FIG. 1. Depending on the embodiment, the method of FIG. 3 may include fewer or additional blocks and/or the blocks may be performed in an order different than is illustrated.


Beginning at block 302, a security attack is detected and a pattern associated with the security attack is identified. For example, the security attack may correspond to a beaconing attack that includes hundreds of beacons that have been identified by a particular entity to a group of related IP addresses. A detected pattern may indicate a regular time period, e.g., twenty-four hours, at which beacons are repeatedly transmitted to the same IP addresses.


In some embodiments, there may be some variations of identification of the security attack and recognition of the pattern at block 302. As previously illustrated in FIG. 2, identification of the attack at block 302 may be manual or automatic. Similarly, recognition of a pattern from the security attack at block 302 may be manual or automatic. For example, automatic pattern recognition by the security sharing system may occur from identifying beaconing at regular intervals and/or connections to the same external IP address. In some embodiments, humans may review the security attacks to recognize a pattern from them.


In some embodiments, similar to the identification of attacks, recognition of patterns occurs by the systems, methods, and/or techniques disclosed in the Cluster references.


At block 304, a ruleset may be generated from the recognized pattern. In some embodiments, generation of rulesets is automatic, manual, or some combination thereof.


At block 306, the ruleset may be optionally modified for sharing, such as in the manner discussed with reference to FIG. 4, below.


At block 308, the security sharing system may share the ruleset with one or more other entities. At block 310, the ruleset may be shared with external systems; this sharing may be similar to the sharing of the attack data at block 208 of FIG. 2.


At block 312, the ruleset may be optionally enabled by the external system and/or entity.


Modifying Attack Data and/or Rulesets



FIG. 4 is a flowchart illustrating a modification process for attack data and/or rulesets, according to some embodiments of the present disclosure. The method of FIG. 4 may be performed in whole, or in part, as part of block 205 of FIG. 2 and/or block 306 of FIG. 3. Depending on the embodiment, the method of FIG. 4 may include fewer or additional blocks and/or the blocks may be performed in an order that is different than illustrated.


At block 402, irrelevant and/or sensitive data may be redacted from attack data and/or rulesets. For example, attack data may initially comprise the source identifiers of internal computing devices of an entity, such as, but not limited to, the IP addresses of the computing devices. An entity may not want to share the internal IP addresses of its computing devices. Therefore, an entity may redact and/or remove data from the attack data, such as the IP addresses of the entity's internal computing devices. Removal of entity-specific information, such as internal IP addresses, from attack data may abstract the attack data to increase usability by other entities. In some embodiments, redaction of attack data and/or rulesets is automatic, manual, or some combination thereof. For example, there may be a configurable list of IP addresses, which correspond to internal computing devices, to be removed from attack data and/or rulesets. Redaction may require approval by a human operator. In some embodiments, redaction of attack data and/or rulesets may be performed by a human operator.
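
As a minimal sketch of the block 402 redaction (the record schema and prefix list below are assumptions of this example):

# Hypothetical, configurable redaction list: internal IP prefixes to strip
# from attack data before it leaves the entity.
INTERNAL_IP_PREFIXES = ("10.", "192.168.")

def redact_attack_data(attack_data):
    """attack_data: dict with an 'internal_ip_addresses' list (assumed schema).
    Returns a copy with matching internal IP addresses removed."""
    redacted = dict(attack_data)
    redacted["internal_ip_addresses"] = [
        ip for ip in attack_data.get("internal_ip_addresses", [])
        if not ip.startswith(INTERNAL_IP_PREFIXES)
    ]
    return redacted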


At block 404, recipients may be specified for attack data and/or rulesets. For example, an entity may only want to send attack data to other entities with which it has close relationships, to entities in a particular vertical market, or to entities having other attributes. Therefore, the entity may specify one or more entities, and/or criteria for entities, with which attack data and/or rulesets may be shared through the security sharing system. The sharing data may be provided in any available format, and may apply to sharing of attack data and/or ruleset data from the entity that provides the sharing data. In some embodiments, a human operator must approve and/or select the recipients of attack data and/or rulesets.


In some embodiments, access controls for replicating attack data and/or rulesets at block 404 occur by the systems, methods, and/or techniques disclosed in the Sharing references. For example, asynchronous replication of attack data and/or rulesets may occur via access control policies as illustrated by U.S. Pat. No. 8,527,461. Replication of attack data and/or rulesets may occur where databases use different classification schemes for information access control as illustrated by U.S. patent application Ser. No. 13/657,684.


At block 406, attack data and/or rulesets may be made anonymous. For example, attack data and/or rulesets may identify the source entity of the attack data and/or rulesets. Thus, an entity may specify whether its shared attack data and/or rulesets should be anonymous. In some embodiments, there is a global setting and/or configuration for specifying anonymity. There may be a configurable setting enabling anonymity for some recipients but not others. In some embodiments, a human may approve or specify anonymity for each attack data item and/or ruleset that is shared.


At block 408, attack data and/or rulesets may be weighted differently, such as based on the entity that provides the attack data or ruleset (e.g., some entities may be more reliable at providing attack data than others), the type of attack identified in the attack data or ruleset, and/or other factors. For example, if attack data indicates a high security risk associated with the attack, the security sharing system may assign a high weighting to the attack data. However, if the reported attack is less malicious and/or from an entity that commonly misreports attacks, a lower weighting may be assigned to the attack data, such that sharing of the attack data does not introduce false attack alerts in other entities. Thus, in some embodiments the security sharing system tracks the accuracy of reported attacks from respective entities and automatically applies weightings and/or prioritizations to future reports from those entities based on the determined accuracy.
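
One hypothetical way to track reporting accuracy and derive such weightings, sketched in Python (the smoothing formula and interfaces are invented for illustration):

class EntityReliability:
    """Tracks, per reporting entity, how often reported attacks were later
    confirmed, and turns that history into a sharing weight."""

    def __init__(self):
        self.confirmed = {}  # entity id -> count of confirmed reports
        self.reported = {}   # entity id -> count of all reports

    def record(self, entity_id, was_confirmed):
        self.reported[entity_id] = self.reported.get(entity_id, 0) + 1
        if was_confirmed:
            self.confirmed[entity_id] = self.confirmed.get(entity_id, 0) + 1

    def weight(self, entity_id):
        # Laplace-smoothed accuracy; an unproven entity starts near 0.5.
        c = self.confirmed.get(entity_id, 0)
        r = self.reported.get(entity_id, 0)
        return (c + 1) / (r + 2)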


The weightings may be assigned manually and/or automatically. For example, in some embodiments a human operator specifies whether attack data and/or rulesets are important. These weightings may change over time, as the attacks themselves evolve.


From the receiving perspective of attack data and/or rulesets, an entity may optionally weight attack data and/or rulesets from different entities. Thus, if an entity values attack data and/or rulesets from a different entity highly, the entity may set a high level of priority for anything received from that different entity.


Sharing Attack Data and/or Rulesets



FIG. 5 illustrates a security sharing system sharing attack data, rulesets, and/or modified attack data, or subsets thereof, according to some embodiments of the present disclosure. In accordance with some embodiments of the present disclosure, the security sharing system 100 may comprise a rule unit 530, an attack modification unit 540, a ruleset data store 532, and/or an attack data store 542.


As shown in the example of FIG. 5, an entity 110A has been involved in three attack communications 130A, 130B, and 130C (each comprising one or more transmissions and/or receptions of data from the entity 110A). In this embodiment, the entity 110A, upon identifying the one or more communications as an attack (see, e.g., FIG. 2) may send attack data 500 to the security sharing system 100 through the network 120. In some embodiments, the security sharing system 100 automatically collects attack data from a security attack.


In this example, the security sharing system 100 generates a ruleset based on the attack data 500 corresponding to the multiple attack communications 130A, 130B, and 130C, such as by any one or more processes discussed with reference to FIG. 3. For example, the multiple attacks 130A, 130B, and 130C illustrated in FIG. 5 may be associated with beaconing calls to an external computing device from the entity 110A. The attack ruleset 510 may be generated and/or output to other entities by the rule unit 530. The attack ruleset 510 may be stored in the ruleset data store 532. In the embodiment of FIG. 5, the security sharing system 100 shares the attack ruleset 510 with another entity 110B through the network 120 and the modified attack data 520 with another entity 110C. The entities 110B and 110C may change, update, modify, etc. security measures based on the ruleset 510 or modified attack data 520, respectively.


In some embodiments, the security sharing system may be able to automatically generate rulesets based on one or more security attacks. Similar to automatic recognition of security attack patterns, a ruleset may be automatically generated from patterns of security attacks, e.g., beaconing and/or DOS attacks. Rulesets may be automatically output by the rule unit 530. For example, the rule unit 530 may take as input data regarding security attacks and automatically generate a ruleset from patterns recognized in the data.


In some embodiments, a human operator and/or a team of technicians may review the security attack patterns to generate a ruleset. The security sharing system may provide user interface tools to humans for analyzing security attacks and/or creating rulesets. For example, rulesets may be generated by a human operator of the user interface of the rule unit 530. A user interface of the rule unit 530 may comprise a document processing interface to generate a ruleset.


In some embodiments, a team of engineers and/or technicians may consume all of the security information and/or data from all of the entities of the security sharing system. The engineers and/or technicians may conceive and/or generate rulesets and share them with entities through the security sharing system.


In some embodiments, rulesets may be generated by entities and shared through the security sharing system. For example, the rule unit 530 may receive rulesets from entities for distribution to other entities through the security sharing system.


The shared attack data and/or ruleset may be modified by the entity 110A and/or the security sharing system 100, such as by any one or more processes discussed with reference to FIG. 4. The attack modification unit 540 may modify the attack data 500 to output the modified attack data 520. The modified attack data 520 may be stored in the attack data store 542. Modification by the attack modification unit 540 and/or storage in the attack data store 542 may achieve some of the goals and/or advantages illustrated in FIG. 4.


Example Ruleset Generation from Multiple Attacks



FIG. 6 is a flowchart illustrating a ruleset sharing process for multiple attacks, according to some embodiments of the present disclosure. The method of FIG. 6 may be performed by the security sharing system and/or one or more entities discussed above with reference to FIG. 1. Depending on the embodiment, the method of FIG. 6 may include fewer or additional blocks and/or the blocks may be performed in an order different than is illustrated.


Beginning at block 602, attack data is received from separate entities. For example, the attack data from different entities (e.g., entities 110) may correspond to hundreds of DOS attacks that have been identified by different entities.


At block 604, a pattern is identified and/or recognized from the attack data from the different entities. A recognized pattern may indicate a particular source of the attack, e.g., a DOS attack that originates from a particular IP address range or domain.
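
As a rough sketch of this cross-entity recognition step (the report format and the two-entity threshold are assumptions of this example):

from collections import defaultdict

def recognize_shared_source(reports, min_entities=2):
    """reports: iterable of (entity_id, attacker_ip) pairs (assumed format).
    Returns /24 prefixes reported by at least min_entities distinct entities."""
    entities_by_prefix = defaultdict(set)
    for entity_id, ip in reports:
        prefix = ".".join(ip.split(".")[:3])  # e.g., '198.51.100'
        entities_by_prefix[prefix].add(entity_id)
    return [p for p, ents in entities_by_prefix.items()
            if len(ents) >= min_entities]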


At block 606, a ruleset may be generated from the recognized pattern. The ruleset may be optionally modified for sharing.


At block 608, the ruleset may be shared through the security sharing system. For example, the ruleset may be shared with one or more entities that shared attack data used in generation of the ruleset and/or other entities that did not provide attack data used in generation of the ruleset, such as in accordance with sharing rules (e.g., FIG. 8A).


At block 610, the ruleset is shared through the security sharing system to external systems. The sharing of the ruleset with external systems at block 610 may be similar to the sharing of the attack data at block 208 of FIG. 2.


At block 612, the ruleset may be optionally enabled by the external system and/or entity.


Sharing Attack Data and/or Rulesets from Different Entities



FIG. 7 is a block diagram illustrating a security sharing system sharing attack data, rulesets, and/or modified attack data that has been received from and/or determined based on information from different entities, according to some embodiments of the present disclosure. As shown in the example of FIG. 7, the entity 110A has received three attack communications 130A, 130B, 130C and the entity 110B has received one attack communication 130D (although an attack, as used herein, may include one or any number of communications from and/or to an entity).


In this embodiment, the entity 110B, upon identifying the one or more communications as an attack (see, e.g., FIG. 2), may send attack data 702 to the security sharing system 100 through the network 120. Similar to entity 110B, entity 110A may send attack data 700, including information regarding attacks 130A, 130B, 130C, to the security sharing system 100. In this example, the security sharing system 100 generates a ruleset based on the attack data 700 from entity 110A and the attack data 702 from entity 110B. For example, the multiple attacks illustrated in FIG. 7 may be associated with DOS attacks from a set of computing devices associated with an IP address, a range of IP addresses, and/or a domain, for example.


Rule generation and/or sharing in FIG. 7 may be similar to FIG. 5.


The security sharing system 100 may process the attack data from different entities to share attack rulesets, attack data, and/or modified attack data. In FIG. 7, the security sharing system 100 shares modified attack data 720 with entity 110D, which may not yet have been attacked by the particular attacks 130A-130D that were reported by entities 110A, 110B. For example, the modified attack data 720 may comprise the set of IP addresses from which the DOS attacks originated. The modified attack data 720 may differ from the attack data 700 by not having data regarding the internal computing devices that were targeted by the DOS attacks.


Sharing Tables



FIG. 8A illustrates an example security sharing table, according to some embodiments of the present disclosure. For example, the security sharing table may be one or more tables in a relational database of the security sharing system. In other examples, the security sharing table may be in various formats, such as a data object format, XML, JSON, a file format that is proprietary to the security sharing system, or any other format. The columns and/or fields shown in the table are illustrative. In some embodiments, there may be additional or fewer columns and/or fields. The security sharing table may be used to redact and/or modify any property of attack data, rulesets, and/or other security information of the security sharing system. The redaction and/or modification of any property may be possible because attack data, rulesets, and/or other security information may be in a data object format.


As shown in the example of FIG. 8A, the security sharing table may be used by the security sharing system (and/or by individual entities in some embodiments) to redact and/or modify attack data and/or rulesets. For example, there are four example entities shown. Attack data from a security attack may comprise the hostname and/or label for computing devices from an entity. Thus, the redact hostname column may be used to remove hostnames from attack data and/or rulesets. In the example, the hostnames “prod.example.net” and “application.example.net” will be removed from entity 1's attack data and/or rulesets. For entity 2, all of the hostnames ending in “corp” will be removed. As shown for entity 4, all hostnames may be removed from attack data and/or rulesets.


As shown in the example table of FIG. 8A, the four example entities have each provided different criteria for redacting or removing IP addresses from attack data and/or rulesets. According to these rules, the security sharing system may remove all internal IP addresses from attack data and/or rulesets from entity 4 and all internal IP addresses beginning with “192.168” from attack data and/or rulesets of entity 3.
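
A rough sketch of how such table-driven redaction might be applied (the rule representation is invented; only the patterns named above are taken from the example table):

# Invented representation of the example redaction rules of FIG. 8A.
REDACTION_RULES = {
    "entity_1": {"hostname_suffixes": ["prod.example.net",
                                       "application.example.net"],
                 "ip_prefixes": []},
    "entity_2": {"hostname_suffixes": ["corp"], "ip_prefixes": []},
    "entity_3": {"hostname_suffixes": [], "ip_prefixes": ["192.168"]},
    "entity_4": {"hostname_suffixes": ["ALL"], "ip_prefixes": ["ALL"]},
}

def apply_table_redaction(entity_id, hostnames, internal_ips):
    """Drops hostnames and internal IPs matching the entity's table rules;
    the sentinel "ALL" removes everything of that kind."""
    rules = REDACTION_RULES.get(entity_id, {})
    suffixes = rules.get("hostname_suffixes", [])
    prefixes = rules.get("ip_prefixes", [])
    kept_hosts = [] if "ALL" in suffixes else [
        h for h in hostnames if not any(h.endswith(s) for s in suffixes)]
    kept_ips = [] if "ALL" in prefixes else [
        ip for ip in internal_ips if not any(ip.startswith(p) for p in prefixes)]
    return kept_hosts, kept_ips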


The security sharing table may also be used to specify recipients for attack data and/or rulesets. For example, as shown in FIG. 8A, entity 1 has recipients: entity 2 and entity 3. Thus, the default recipients of entity 1 are entities 2 and 3 for sharing attack data and/or rulesets. As shown in the example table, entity 4 may share all of its attack data and/or rulesets with every entity in the security sharing system.


As shown in the example table of FIG. 8A, there may be a setting to make an entity anonymous while sharing attack data and/or rulesets. Entities 1 and 4 may share attack data and/or rulesets with other entities anonymously. For example, with the anonymous setting enabled, entity 1 can share attack data and/or rulesets with the security sharing system, which may then share that attack data, rulesets, and/or other ruleset generated based on the data received from entity 1, with other entities without identifying entity 1 as a source of the attack data.


Example Attack Data, Modified Attack Data, and/or Rulesets


As previously illustrated, attack data, modified attack data, and/or rulesets may be in various formats and/or combinations of formats and are not limited to the example formats shown in FIGS. 8B, 8C, and 8D.



FIG. 8B illustrates example attack data and/or modified attack data based on security attacks, according to some embodiments of the present disclosure. For example, a security attack may be a DDOS attack with four originating IP addresses, shown in the <from_ip_addresses> element of the example attack data 802. The example attack data 802 also includes an attack identifier (in the <id> element), and identifies the type of attack (in the <type> element), the internal IP addresses affected by the attack (in the <internal_ip_addresses> element), and the entity providing the attack data (in the <entity> element).


The security sharing system may modify the attack data 802, such as in the manner discussed with reference to FIG. 4 and/or FIG. 8A, to output modified attack data 804. As illustrated, modified attack data 804 may not contain the source entity of the attack data (e.g., the <entity> element is removed) and/or the internal IP addresses (e.g., the <internal_ip_addresses> element is removed) of the source entity, which may correspond to the preferences illustrated with reference to the sharing table of FIG. 8A. For example, the sharing table of FIG. 8A specified that entity 1 should remain anonymous and redact certain IP addresses. In another embodiment, entity 1 may include its identifier (e.g., include the <entity> element) and the security sharing system may anonymize the attack data by not indicating to other entities with which the attack data is shared that the attack data came from entity 1. In this way, the security sharing system may still determine the reliability of received information based on its source, even without sharing the exact identity of the source with other entities.
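
Purely as a hypothetical rendering (the figure itself is not reproduced here, and all values are invented placeholders), attack data of the kind described above, and its modified counterpart with the <entity> and <internal_ip_addresses> elements removed, might look roughly like:

<attack_data>
  <id>1</id>
  <type>ddos</type>
  <from_ip_addresses>198.51.100.1, 198.51.100.2, 198.51.100.3, 198.51.100.4</from_ip_addresses>
  <internal_ip_addresses>10.0.0.5, 10.0.0.6</internal_ip_addresses>
  <entity>entity_1</entity>
</attack_data>

<attack_data>
  <id>1</id>
  <type>ddos</type>
  <from_ip_addresses>198.51.100.1, 198.51.100.2, 198.51.100.3, 198.51.100.4</from_ip_addresses>
</attack_data>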


In another example, a security attack may be a previously illustrated Trojan attack with four originating IP addresses. Similar to the previous example, an entity may send attack data 806 to the security sharing system and/or the security sharing system may modify the attack data 806 to output modified attack data 808. Unlike modified attack data 804, which may be anonymous, modified attack data 808 may contain the source entity of the attack data, which may correspond to the preferences illustrated with reference to the sharing table of FIG. 8A. For example, the sharing table of FIG. 8A specified that entity 2 should not be anonymous. Similar to modified attack data 804, modified attack data 808 does not contain the internal IP addresses of entity 2, which may correspond to the preferences illustrated with reference to the sharing table of FIG. 8A.



FIG. 8C illustrates example attack data and rulesets based on multiple security attacks, according to some embodiments of the present disclosure. For example, attack data 810A, 810B, 810C, corresponding to three separate beaconing attacks against different entities, may be received by the security sharing system. While in this example the attack data 810 are received from three separate entities (entity 1, entity 2, and entity 3), in other embodiments the attack data used to form a ruleset may all be received from the same entity.


The security sharing system may generate ruleset 812 and ruleset 814, such as in the manner discussed with reference to FIG. 6 and/or FIG. 7, based on the attack data 810. The security sharing system may interpret and/or redact the attack data, such as in accordance with redaction rules (e.g., FIG. 8A). For example, each of the attack data 810 includes internal IP addresses (in the <internal_ip_addresses> element) and source entity data (in the <entity> element). However, the ruleset 812 may comprise more abstract characteristics of the attack data 810, such as the type of attack (in the <type> element) and/or the frequency of the attack for beaconing attacks (in the <frequency> element), while excluding specific attack characteristics such as internal IP addresses (e.g., the <internal_ip_addresses> element is removed) and source entity data (e.g., the <entity> element is removed). Another example of a ruleset is one examining the method and/or behavior of attackers over time. For example, some hacking groups may only work during certain hours and thus may be more likely to attack during those time periods. These groups may mask the IPs they use, such that merely using the attack data may not be enough. A ruleset may allow entities to track malicious patterns even as the attack data itself changes. The ruleset 812 may also comprise actions to perform, such as, but not limited to, detecting attacks and/or alerting of suspicious behavior (in the <action> element). The security sharing system may interpret the attack data 810 to generate ruleset 814 to block malicious IP addresses. In the example, ruleset 814 comprises alerting and blocking actions (in the <action> element). Thus, an entity enabling ruleset 814 may automatically block malicious IP addresses (in the <block_ip_addresses> element). In some embodiments, attack data may be automatically interpreted to generate a ruleset to block a range of IP addresses.
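
Again purely as a hypothetical rendering based on the elements enumerated above (values invented), a blocking ruleset such as ruleset 814 might take roughly the following shape:

<ruleset>
  <type>beaconing</type>
  <frequency>24 hours</frequency>
  <action>alert, block</action>
  <block_ip_addresses>203.0.113.7, 203.0.113.8</block_ip_addresses>
</ruleset>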



FIG. 8D illustrates example rulesets 820 and 822 in a format comprising code instructions, according to some embodiments of the present disclosure. In some embodiments, rulesets may be complex enough such that their expression may be in a format comprising code instructions. The executable code instructions shown in FIG. 8D are illustrative pseudocode and, thus, may not correspond to any specific programming language or be executable in the format shown. Executable code that performs the functions outlined in rulesets 820 and 822 may be provided in any available programming language.


Example ruleset 820 comprises code configured to detect security attacks for globetrotter logins. In a globetrotter security attack, there may be Virtual Private Network (“VPN”) logins from locations that would be physically impossible for a single person to make, for example, logins by the same person from the United States and India within minutes and/or one hour of each other. Thus, the ruleset 820 includes code instructions that cause an entity's computing device and/or the security sharing system to find all VPN logins for a particular person. The VPN logins may then be sorted by time of login. The VPN logins may be iterated through by a for loop, which may first calculate the time since the previous login, the physical distance between the locations of the two logins, and the travel speed required for a person to complete both logins. If the speed is over a threshold, then a cluster may be constructed, such as by using the IP addresses and/or the identity of the person as seeds (see the Cluster references), and an alert may be added to a queue. For example, if the speed threshold was set to a speed such as 2,000 miles per hour, then an alert would appear if there were logins from the United States and Japan within an hour of each other. Thus, the ruleset 820 may enable the detection of globetrotter security attacks.
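
The following Python sketch fills in the globetrotter logic just described; the login record format, coordinate fields, and distance helper are assumptions of this example, and the cluster-construction step is indicated only by a comment.

from math import radians, sin, cos, asin, sqrt

SPEED_THRESHOLD_MPH = 2000  # per the example threshold above

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * asin(sqrt(a))

def globetrotter_alerts(vpn_logins, alert_queue):
    """vpn_logins: list of (time_hours, ip, lat, lon) tuples for one person,
    in an assumed log format. Queues an alert for impossible login pairs."""
    logins = sorted(vpn_logins)  # sort by time of login
    for prev, cur in zip(logins, logins[1:]):
        hours = cur[0] - prev[0]
        miles = haversine_miles(prev[2], prev[3], cur[2], cur[3])
        if hours > 0 and miles / hours > SPEED_THRESHOLD_MPH:
            # Per the disclosure, a cluster would be constructed here using the
            # IP addresses and/or the person's identity as seeds (Cluster refs).
            alert_queue.append(("globetrotter", prev[1], cur[1]))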


Example ruleset 822 comprises code to detect security attacks for fake user-agent logins. In a fake user-agent security attack, malware may provide fake and/or made-up user-agent logins and/or strings to gain access to an entity's network and/or resources. For example, the code instructions shown in 822 cause an entity's computing device and/or the security sharing system to receive a user-agent string for a login and find all of the user-agent strings on the network. If the user-agent string is not already on the network and/or is a new user-agent string, then a cluster may be constructed (see the Cluster references) and an alert may be added to a queue. Thus, the ruleset 822 may enable the detection of fake user-agent security attacks.
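
A corresponding minimal sketch for the fake user-agent check, under the same caveats (the set of known user-agent strings and the queue interface are assumed):

def fake_user_agent_alerts(login_user_agent, known_user_agents, alert_queue):
    """known_user_agents: set of user-agent strings previously seen on the
    entity's network (collection mechanism assumed). Flags unseen strings."""
    if login_user_agent not in known_user_agents:
        # Per the disclosure, a cluster would be constructed here
        # (see the Cluster references) before the alert is queued.
        alert_queue.append(("fake-user-agent", login_user_agent))
        known_user_agents.add(login_user_agent)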


In some embodiments, rulesets may comprise various formats and/or combinations of formats. For example, a beaconing ruleset may comprise executable code instructions and/or XML documents. In the beaconing example, the executable code instructions may comprise programming logic to detect beaconing attacks. The XML documents may comprise parameters and/or configurations. In the beaconing example, the XML documents may comprise parameters for checking different beaconing frequencies, e.g., every hour and/or every twenty-four hours. Thus, both the code instructions and the parameterized format, such as, but not limited to, XML, may be shared through the security sharing system.


Example User Interface



FIGS. 9A and 9B illustrate example user interfaces of the security sharing system, according to some embodiments of the present disclosure. In some embodiments, the user interfaces described below may be displayed in any suitable computer system and/or application, for example, in a web browser window and/or a standalone software application, among others. Additionally, the functionality and/or user interfaces of the system as shown in FIGS. 9A and/or 9B may be implemented in one or more computer processors and/or computing devices, as is described with reference to FIG. 10.


Referring to FIG. 9A, the example user interface 902 comprises security alerts 910, 920A, 920B, priority alert window 930, attacks window 940, and/or rulesets window 950.


In operation, a human operator may view security attack alerts through the user interface 902. For example, when an entity shares security attack data with the entity viewing the user interface 902, a security attack alert 910 may appear. The security attack alert 910 may display a risk/importance level and/or a “from” entity indicating the source of the security attack alert. The security attack alert 910 may also display details of the alert and/or a link to the details. For example, security attack alert details may comprise the IP address source of the attack and/or the type of attack, e.g., beaconing, DOS attack, etc.


In the example of FIG. 9A, a human operator may also view ruleset alerts through the user interface. For example, when a ruleset is shared in the security sharing system, ruleset alerts 920A and 920B may appear. Ruleset alerts 920A and 920B may be similar to security attack alert 910, but instead of security attack details, ruleset alerts 920A and 920B may display ruleset details. In the beaconing example, ruleset details may comprise an IP range to which beacons are sent and/or a frequency at which a beacon is sent to an external computing device. The ruleset alert windows 920 may have an “enable” button that activates the ruleset as previously described.


In some embodiments, the security attack alert 910 may have an “enable” button that activates the security attack alert similar to activating a ruleset. For example, enabling a security attack alert automatically sets alerts and/or detection of the malicious IP addresses specified in the security attack alert. The security sharing system may automatically generate a ruleset from the security attack alert when the security attack alert is enabled. In some embodiments, generation of the ruleset in response to activation of the security attack alert may occur without notification to the human operator. Thus, enabling a security attack alert may be a shortcut for generating a ruleset.
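
A hypothetical sketch of that shortcut, turning an enabled security attack alert into a minimal detection ruleset (the alert schema and event format are assumptions of this example):

def ruleset_from_alert(alert):
    """alert: dict with a 'from_ip_addresses' list (assumed schema).
    Returns a callable ruleset that flags traffic to the alert's IPs."""
    watched = set(alert.get("from_ip_addresses", []))
    def ruleset(events):
        # Flag any outbound event touching an IP named in the enabled alert.
        return [(ts, src, dst) for ts, src, dst in events if dst in watched]
    return ruleset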


The priority alert window 930 may display entities that will receive a “high” and/or “important” priority level when those entities share attack data and/or rulesets with the particular entity viewing/operating the user interface 902. For example, in the priority alert window 930, entities 1, 5, and 10 are selected for priority alerts. In this example, the priority alert window 930 comprises an “edit” button to add and/or remove entities for priority alerts.


The attacks window 940 may display the security attacks that have been identified for the particular entity operating/viewing the user interface 902. For example, the attacks window 940 displays security attacks 1, 2, and 4. The attacks window 940 may be populated automatically by the security sharing system and/or by the entity. In the example attacks window 940, there is a “create” button, which may allow the human operator to add a security attack. The attacks window 940 may have a “share” button, which may allow the human operator to select one or more security attacks for sharing through the security sharing system, and may have options to share the security attack data, modified security attack data, and/or a ruleset regarding the security attack (possibly in connection with other security attacks). The attacks window 940 may also have an “edit” button, which may allow the human operator to edit and/or modify a security attack. For example, security attack 1 may be a DDOS attack with two originating IP addresses, and security attack 1 may be edited to add and/or remove IP addresses. Alternatively, such updates may be performed automatically by the entity and/or the security sharing system based on predetermined rules for modifying attack data.


The rulesets window 950 may display the rulesets from an entity. The rulesets window 950 may be operated similarly to the attacks window 940, except the rulesets window may be configured to display, create, and/or modify rulesets instead of security attacks. For example, ruleset 1 may be a beaconing ruleset with a frequency of twenty-four hours, and the example ruleset may be edited to a beaconing frequency of forty-eight hours. Rulesets may be selected to display further information regarding the rulesets, such as the various rules of the ruleset, one or more entities having attack data on which the ruleset was based, other entities with which the ruleset has been shared (and/or is available for sharing), and/or entities that have actually implemented security measures in view of the ruleset, for example.


Referring to FIG. 9B, the example user interface 960 comprises a search box 964, an object display area 966, and/or a menu bar 962. By typing and/or entering data into the search box 964, a human operator may load, look up, and/or retrieve one or more objects. The user interface 960 may display security attack data, rulesets, and/or other security information in clusters, which may correspond to the systems, methods, and/or techniques disclosed in the Ontology and/or Cluster references.


For example, by typing the name of a type of security attack, such as “beacon,” an attack data object 968 may be displayed in the object display area 966. The other objects 970 (including objects 970A, 970B, and/or 970C) may be displayed automatically and/or after user interaction by the human operator with the attack data object 968. The objects 970 may correspond to related security attacks, resources of the entity that have been attacked, and/or any other data object in the security sharing system. The one or more links 972 (including links 972A, 972B, and/or 972C) may display relationships between the attack data object 968 and the related objects 970.
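

The objects 970 and links 972 might be backed by a simple adjacency-list graph, as in the following illustrative sketch (all object identifiers are hypothetical):

    # Illustrative adjacency-list backing for the object display area 966:
    # an attack data object linked to related objects.
    graph = {
        "attack:beacon": ["attack:related-A", "resource:web-server-B",
                          "attack:related-C"],
    }

    def links_of(obj_id):
        """Yield one (source, target) pair per displayed link 972."""
        for target in graph.get(obj_id, []):
            yield (obj_id, target)

    for link in links_of("attack:beacon"):
        print(link)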


In addition to visually searching and/or showing data objects and/or relationships between data objects, the user interface 960 may allow various other manipulations. For example, data objects may be inspected (e.g., by viewing properties and/or associated data of the data objects), filtered (e.g., narrowing the universe of objects into sets and subsets by properties or relationships), and statistically aggregated (e.g., numerically summarized based on summarization criteria), among other operations and visualizations.
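

A minimal sketch of the filtering and statistical aggregation operations described above, over hypothetical data objects:

    # Hypothetical data objects for the inspect/filter/aggregate operations.
    objects = [
        {"type": "attack", "name": "attack-1", "ip_count": 2},
        {"type": "attack", "name": "attack-2", "ip_count": 5},
        {"type": "resource", "name": "server-1", "ip_count": 0},
    ]

    # Filtering: narrow the universe of objects by a property.
    attacks = [o for o in objects if o["type"] == "attack"]

    # Statistical aggregation: numerically summarize by a criterion.
    total_ips = sum(o["ip_count"] for o in attacks)
    print(len(attacks), total_ips)  # 2 7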


Implementation Mechanisms


The various computing device(s) discussed herein, such as the entities 110 and/or security sharing system 100, are generally controlled and coordinated by operating system software, such as, but not limited to, iOS, Android, Chrome OS, Windows XP, Windows Vista, Windows 7, Windows 8, Windows Server, Windows CE, Unix, Linux, SunOS, Solaris, Macintosh OS X, VxWorks, or other compatible operating systems. In other embodiments, the computing devices may be controlled by a proprietary operating system. Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide user interface functionality, such as a graphical user interface (“GUI”), among other things. The security sharing system 100 may be hosted and/or executed on one or more computing devices with one or more hardware processors and with any of the previously mentioned operating system software.



FIG. 10 is a block diagram that illustrates example components of the security sharing system 100. While FIG. 10 refers to the security sharing system 100, any of the other computing devices discussed herein may have some or all of the same or similar components.


The security sharing system 100 may execute software, e.g., standalone software applications, applications within browsers, network applications, etc., whether by the particular application, the operating system, or otherwise. Any of the methods discussed herein may be performed by the security sharing system 100 and/or a similar computing system having some or all of the components discussed with reference to FIG. 10.


The security sharing system 100 includes a bus 1002 or other communication mechanism for communicating information, and a hardware processor, or multiple processors, 1004 coupled with bus 1002 for processing information. Hardware processor(s) 1004 may be, for example, one or more general purpose microprocessors.


The security sharing system 100 also includes a main memory 1006, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 1002 for storing information and instructions to be executed by processor(s) 1004. Main memory 1006 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor(s) 1004. Such instructions, when stored in storage media accessible to processor(s) 1004, render the security sharing system 100 into a special-purpose machine that is customized to perform the operations specified in the instructions. Such instructions, as executed by hardware processors, may implement the methods and systems described herein for sharing security information.


The security sharing system 100 further includes a read only memory (ROM) 1008 or other static storage device coupled to bus 1002 for storing static information and instructions for processor(s) 1004. A storage device 1010, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 1002 for storing information and instructions. The rule unit 530, attack modification unit 540, ruleset data store 532 and/or attack data store 542 of FIG. 5 may be stored on the main memory 1006 and/or the storage device 1010.


In some embodiments, the ruleset data store 532 of FIG. 5 is a file system, a relational database such as, but not limited to, MySQL, Oracle, Sybase, or DB2, and/or a distributed in-memory caching system such as, but not limited to, Memcache, Memcached, or Java Caching System. The attack data store 542 of FIG. 5 may be a similar file system, relational database, and/or distributed in-memory caching system as the ruleset data store 532.
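

For illustration, a ruleset data store with a relational backing might expose a small put/get API; the sketch below uses Python's built-in sqlite3 as a stand-in for the relational databases named above:

    import sqlite3

    class SqlRulesetStore:
        """Sketch of a relational ruleset data store (sqlite3 standing in
        for MySQL, Oracle, Sybase, or DB2; schema is hypothetical)."""

        def __init__(self):
            self.db = sqlite3.connect(":memory:")
            self.db.execute(
                "CREATE TABLE rulesets (name TEXT PRIMARY KEY, body TEXT)")

        def put(self, name, body):
            self.db.execute(
                "INSERT OR REPLACE INTO rulesets VALUES (?, ?)", (name, body))

        def get(self, name):
            row = self.db.execute(
                "SELECT body FROM rulesets WHERE name = ?", (name,)).fetchone()
            return row[0] if row else None

    store = SqlRulesetStore()
    store.put("ruleset-1", "beacon every 24h")
    print(store.get("ruleset-1"))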


The security sharing system 100 may be coupled via bus 1002 to a display 1012, such as a cathode ray tube (CRT) or LCD display or touch screen, for displaying information to a computer user. An input device 1014 is coupled to bus 1002 for communicating information and command selections to processor 1004. One type of input device 1014 is a keyboard including alphanumeric and other keys. Another type of input device 1014 is a touch screen. Another type of user input device is cursor control 1016, such as a mouse, a trackball, a touch screen, or cursor direction keys for communicating direction information and command selections to processor 1004 and for controlling cursor movement on display 1012. This input device may have two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.


The security sharing system 100 may include a user interface unit to implement a GUI, for example the user interfaces of FIG. 9A and/or FIG. 9B, which may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other units may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.


In general, the word “instructions,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software units, possibly having entry and exit points, written in a programming language, such as, but not limited to, Java, Lua, C, C++, or C#. A software unit may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, but not limited to, BASIC, Perl, or Python. It will be appreciated that software units may be callable from other units or from themselves, and/or may be invoked in response to detected events or interrupts. Software units configured for execution on computing devices by their hardware processor(s) may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. Generally, the instructions described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.


The security sharing system 100, or components of it, such as the rule unit 530 and/or the attack modification unit 540 of FIG. 5, may be programmed, via executable code instructions, in a programming language.


The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1010. Volatile media includes dynamic memory, such as main memory 1006. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.


Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that comprise bus 1002. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.


Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor(s) 1004 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer may load the instructions into its dynamic memory and send the instructions over a telephone or cable line using a modem. A modem local to the security sharing system 100 may receive the data on the telephone or cable line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 1002. Bus 1002 carries the data to main memory 1006, from which the processor(s) 1004 retrieves and executes the instructions. The instructions received by main memory 1006 may optionally be stored on storage device 1010 either before or after execution by processor(s) 1004.


The security sharing system 100 also includes a communication interface 1018 coupled to bus 1002. Communication interface 1018 provides a two-way data communication coupling to a network link 1020 that is connected to a local network 1022. For example, communication interface 1018 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1018 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or WAN component to be communicated with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 1018 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.


Network link 1020 typically provides data communication through one or more networks to other data devices. For example, network link 1020 may provide a connection through local network 1022 to a host computer 1024 or to data equipment operated by an Internet Service Provider (ISP) 1026. ISP 1026 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 1028. Local network 1022 and Internet 1028 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 1020 and through communication interface 1018, which carry the digital data to and from the security sharing system 100, are example forms of transmission media.


The security sharing system 100 can send messages and receive data, including program code, through the network(s), network link 1020 and communication interface 1018. In the Internet example, a server 1030 might transmit a requested code for an application program through Internet 1028, ISP 1026, local network 1022 and communication interface 1018.


The received code may be executed by processor(s) 1004 as it is received, and/or stored in storage device 1010, or other non-volatile storage for later execution.


Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code instructions executed by one or more computer systems or computer processors comprising computer hardware. The processes and algorithms may be implemented partially or wholly in application-specific circuitry.


The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments.


Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.


Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing units, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.


It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated. The scope of the invention should therefore be construed in accordance with the appended claims and any equivalents thereof.

Claims
  • 1. A computer implemented method comprising: by a computer system comprising one or more computer hardware processors and one or more storage devices, communicating with a plurality of entities; receiving security attack data from a first entity of the plurality of entities, the security attack data comprising information regarding one or more first security attacks; identifying, based on sharing rules associated with the first entity, one or more recipient entity of a subset of the plurality of entities that are authorized to access a ruleset from the first entity; and facilitating sharing of the ruleset from the first entity to the one or more recipient entity, wherein the ruleset (i) is determined by the first entity, and (ii) is associated with the security attack data, wherein the ruleset comprises instructions selectably applicable by the one or more recipient entity to detect a potential security attack, wherein the instructions are configured to: in response to detecting the potential security attack, add data associated with the potential security attack to a cluster as a seed, wherein the cluster comprises a plurality of connected objects and a representation of the cluster is displayable in a user interface.
  • 2. The computer implemented method of claim 1, wherein the sharing rules associated with the first entity further exclude sharing ruleset data from the first entity to particular one or more entities.
  • 3. The computer implemented method of claim 1, wherein the ruleset further comprises second instructions configured to: access one or more data objects associated with the one or more recipient entity, the one or more data objects comprising a plurality of network communications.
  • 4. The computer implemented method of claim 3, wherein the one or more data objects further comprise a first user login object and a second user login object, the first user login object comprising data indicating a first login for a particular user at a first time and a first location, the second user login object comprising data indicating a second login for the particular user at a second time and a second location, and wherein the ruleset further comprises third instructions configured to: calculate, from the first user login object and the second user login object, a duration of time between the first time for the first login and the second time for the second login; calculate, from the first user login object and the second user login object, a distance between the first location for the first login and the second location for the second login; calculate a speed from the duration of time and the distance; and determine the potential security attack where the speed is greater than a threshold value.
  • 5. The computer implemented method of claim 4, wherein the ruleset further comprises fourth instructions configured to: in response to determining the potential security attack, generate an alert.
  • 6. Non-transitory computer storage medium comprising instructions for causing one or more computing devices to perform operations comprising: communicating with a plurality of entities; receiving security attack data from a first entity of the plurality of entities, the security attack data comprising information regarding one or more first security attacks; identifying, based on sharing rules associated with the first entity, one or more recipient entity of a subset of the plurality of entities that are authorized to access a ruleset from the first entity; and transmitting at least a portion of a ruleset from the first entity to the one or more recipient entity, wherein the ruleset (i) is determined by the first entity, and (ii) is associated with the security attack data, wherein the ruleset comprises instructions selectably applicable by the one or more recipient entity to detect a potential security attack, wherein the instructions are configured to: in response to detecting the potential security attack, add data associated with the potential security attack to a cluster as a seed, wherein the cluster comprises a plurality of connected objects and a representation of the cluster is displayable in a user interface.
  • 7. The non-transitory computer storage medium of claim 6, wherein the sharing rules associated with the first entity further exclude sharing ruleset data from the first entity to particular one or more entities.
  • 8. The non-transitory computer storage medium of claim 6, wherein the ruleset further comprises second instructions configured to: access one or more data objects associated with the one or more recipient entity, the one or more data objects comprising a plurality of network communications.
  • 9. The non-transitory computer storage medium of claim 6, wherein the ruleset further comprises second instructions configured to: receive a user agent identifier for a first login; perform, at the one or more recipient entity, a search for the user agent identifier, wherein performing the search further comprises: determining that the user agent identifier is a new user agent identifier; and in response to determining that the user agent identifier is a new user agent identifier, generate an alert.
  • 10. A system for sharing security information, the system comprising: one or more computer processors executing code instructions, to: communicate with a plurality of entities; receive security attack data from a first entity of the plurality of entities, the security attack data comprising information regarding one or more first security attacks; identify, based on sharing rules associated with the first entity, one or more recipient entity of a subset of the plurality of entities that are authorized to access ruleset data from the first entity; and facilitate sharing of at least a portion of a ruleset from the first entity to the one or more recipient entity, wherein the ruleset (i) is determined by the first entity, and (ii) is associated with the security attack data, wherein the ruleset comprises instructions selectably applicable by the one or more recipient entity to detect a potential security attack, wherein the instructions are configured to: in response to detecting the potential security attack, add data associated with the potential security attack to a cluster as a seed, wherein the cluster comprises a plurality of connected objects and a representation of the cluster is displayable in a user interface.
  • 11. The system of claim 10, wherein the ruleset further comprises second instructions configured to: access one or more data objects associated with the one or more recipient entity, the one or more data objects comprising a plurality of network communications.
  • 12. The system of claim 10, wherein the ruleset further comprises second instructions configured to: identify a first login for a particular user at a first time and a first location; identify a second login for the particular user at a second time and a second location; calculate a duration of time between the first time for the first login and the second time for the second login; calculate a distance between the first location for the first login and the second location for the second login; calculate a speed from the duration of time and the distance; and determine the potential security attack where the speed is greater than a threshold value.
  • 13. The system of claim 12, wherein the ruleset further comprises third instructions configured to: in response to determining the potential security attack, generate an alert.
  • 14. The system of claim 10, wherein the one or more computer processors execute further code instructions, to: facilitate sharing of at least a portion of a second ruleset from the first entity, the second ruleset comprising second instructions different from the instructions of the ruleset.
  • 15. The system of claim 14, wherein the second instructions are configured to: receive a user agent identifier for a first login; perform, at the one or more recipient entity, a search for the user agent identifier, wherein performing the search further comprises: determining that the user agent identifier is a new user agent identifier.
  • 16. The system of claim 10, where the seed comprises at least one of: (i) an IP address or (ii) a user login identifier for the particular user.
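

For illustration only, the speed-based detection recited in claims 4 and 12 (a duration between two logins, a distance between their locations, and a speed threshold) can be sketched as follows; the haversine distance and the 900 km/h threshold are assumptions for the sketch, not limitations from the claims:

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = (sin(dlat / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
        return 6371 * 2 * asin(sqrt(a))

    def potential_attack(login1, login2, max_kmh=900):
        """Sketch of claims 4 and 12: speed = distance / duration between
        two logins of the same user; flag when it exceeds a threshold."""
        hours = abs(login2["time"] - login1["time"]) / 3600
        km = haversine_km(login1["lat"], login1["lon"],
                          login2["lat"], login2["lon"])
        return hours > 0 and km / hours > max_kmh

    a = {"time": 0, "lat": 37.77, "lon": -122.42}   # San Francisco
    b = {"time": 3600, "lat": 51.51, "lon": -0.13}  # London, one hour later
    print(potential_attack(a, b))  # True: roughly 8,600 km in one hour

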
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 14/684,231 entitled “CYBER SECURITY SHARING AND IDENTIFICATION SYSTEM,” filed Apr. 10, 2015, which is a continuation of U.S. patent application Ser. No. 14/280,490 entitled “SECURITY SHARING SYSTEM,” filed May 16, 2014, now U.S. Pat. No. 9,009,827, which claims benefit of U.S. Provisional Application No. 61/942,480 entitled “SECURITY SHARING SYSTEM,” filed Feb. 20, 2014. Each of these applications is hereby incorporated by reference herein in its entirety. Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.

US Referenced Citations (674)
Number Name Date Kind
5109399 Thompson Apr 1992 A
5253164 Holloway et al. Oct 1993 A
5329108 Lamoure Jul 1994 A
5548749 Kroenke et al. Aug 1996 A
5632009 Rao et al. May 1997 A
5670987 Doi et al. Sep 1997 A
5708828 Coleman Jan 1998 A
5765171 Gehani et al. Jun 1998 A
5781704 Rossmo Jul 1998 A
5822741 Fischthal Oct 1998 A
5845300 Comer Dec 1998 A
5870761 Demers et al. Feb 1999 A
6057757 Arrowsmith et al. May 2000 A
6091956 Hollenberg Jul 2000 A
6098078 Gehani et al. Aug 2000 A
6161098 Wallman Dec 2000 A
6190053 Stahlecker et al. Feb 2001 B1
6202085 Benson et al. Mar 2001 B1
6216140 Kramer Apr 2001 B1
6219053 Tachibana et al. Apr 2001 B1
6232971 Haynes May 2001 B1
6240414 Beizer et al. May 2001 B1
6247019 Davies Jun 2001 B1
6279018 Kudrolli et al. Aug 2001 B1
6289338 Stoffel et al. Sep 2001 B1
6317754 Peng Nov 2001 B1
6341310 Leshem et al. Jan 2002 B1
6369835 Lin Apr 2002 B1
6374252 Althoff et al. Apr 2002 B1
6456997 Shukla Sep 2002 B1
6463404 Appleby Oct 2002 B1
6505196 Drucker et al. Jan 2003 B2
6523019 Borthwick Feb 2003 B1
6523172 Martinez-Guerra et al. Feb 2003 B1
6539381 Prasad et al. Mar 2003 B1
6549944 Weinberg et al. Apr 2003 B1
6560620 Ching May 2003 B1
6567936 Yang et al. May 2003 B1
6581068 Bensoussan et al. Jun 2003 B1
6594672 Lampson et al. Jul 2003 B1
6631496 Li et al. Oct 2003 B1
6640231 Andersen et al. Oct 2003 B1
6642945 Sharpe Nov 2003 B1
6714936 Nevin, III Mar 2004 B1
6748481 Parry et al. Jun 2004 B1
6775675 Nwabueze et al. Aug 2004 B1
6807569 Bhimani et al. Oct 2004 B1
6816941 Carlson et al. Nov 2004 B1
6828920 Owen et al. Dec 2004 B2
6839745 Dingari et al. Jan 2005 B1
6842736 Brzozowski Jan 2005 B1
6877137 Rivette et al. Apr 2005 B1
6938034 Kraft et al. Aug 2005 B1
6976210 Silva et al. Dec 2005 B1
6980984 Huffman et al. Dec 2005 B1
6985950 Hanson et al. Jan 2006 B1
7027974 Busch et al. Apr 2006 B1
7036085 Barros Apr 2006 B2
7043702 Chi et al. May 2006 B2
7055110 Kupka et al. May 2006 B2
7072911 Doman Jul 2006 B1
7139800 Bellotti et al. Nov 2006 B2
7158878 Rasmussen et al. Jan 2007 B2
7162475 Ackerman Jan 2007 B2
7167877 Balogh et al. Jan 2007 B2
7168039 Bertram Jan 2007 B2
7171427 Witowski et al. Jan 2007 B2
7237192 Stephenson et al. Jun 2007 B1
7240330 Fairweather Jul 2007 B2
7269786 Malloy et al. Sep 2007 B1
7278105 Kitts Oct 2007 B1
7290698 Poslinski et al. Nov 2007 B2
7333998 Heckerman et al. Feb 2008 B2
7370047 Gorman May 2008 B2
7373669 Eisen May 2008 B2
7379811 Rasmussen et al. May 2008 B2
7379903 Caballero et al. May 2008 B2
7418431 Nies et al. Aug 2008 B1
7426654 Adams et al. Sep 2008 B2
7437664 Borson Oct 2008 B2
7451397 Weber et al. Nov 2008 B2
7454466 Bellotti et al. Nov 2008 B2
7467375 Tondreau et al. Dec 2008 B2
7487139 Fraleigh et al. Feb 2009 B2
7502786 Liu et al. Mar 2009 B2
7525422 Bishop et al. Apr 2009 B2
7529727 Arning et al. May 2009 B2
7529734 Dirisala May 2009 B2
7533069 Fairweather May 2009 B2
7546245 Surpin et al. Jun 2009 B2
7546271 Chmielewski et al. Jun 2009 B1
7558677 Jones Jul 2009 B2
7574409 Patinkin Aug 2009 B2
7574428 Leiserowitz et al. Aug 2009 B2
7579965 Bucholz Aug 2009 B2
7596285 Brown et al. Sep 2009 B2
7596608 Alexander et al. Sep 2009 B2
7614006 Molander Nov 2009 B2
7617232 Gabbert et al. Nov 2009 B2
7620628 Kapur et al. Nov 2009 B2
7627812 Chamberlain et al. Dec 2009 B2
7634717 Chamberlain et al. Dec 2009 B2
7640173 Surpin et al. Dec 2009 B2
7676788 Ousterhout et al. Mar 2010 B1
7685083 Fairweather Mar 2010 B2
7703021 Flam Apr 2010 B1
7712049 Williams et al. May 2010 B2
7716067 Surpin et al. May 2010 B2
7716077 Mikurak May 2010 B1
7725547 Albertson et al. May 2010 B2
7730396 Chidlovskii et al. Jun 2010 B2
7770100 Chamberlain et al. Aug 2010 B2
7783658 Bayliss Aug 2010 B1
7792664 Crawford et al. Sep 2010 B1
7805457 Viola et al. Sep 2010 B1
7809703 Balabhadrapatruni et al. Oct 2010 B2
7813937 Pathria et al. Oct 2010 B1
7813944 Luk et al. Oct 2010 B1
7814102 Miller et al. Oct 2010 B2
7818297 Peleg et al. Oct 2010 B2
7818658 Chen Oct 2010 B2
7827045 Madill et al. Nov 2010 B2
7877421 Berger et al. Jan 2011 B2
7894984 Rasmussen et al. Feb 2011 B2
7899611 Downs et al. Mar 2011 B2
7912837 Buron et al. Mar 2011 B2
7917376 Bellin et al. Mar 2011 B2
7920963 Jouline et al. Apr 2011 B2
7933862 Chamberlain et al. Apr 2011 B2
7962281 Rasmussen et al. Jun 2011 B2
7962495 Jain et al. Jun 2011 B2
7962848 Bertram Jun 2011 B2
7970240 Chao et al. Jun 2011 B1
7971150 Raskutti et al. Jun 2011 B2
8001465 Kudrolli et al. Aug 2011 B2
8001482 Bhattiprolu et al. Aug 2011 B2
8010545 Stefik et al. Aug 2011 B2
8010886 Gusmorino et al. Aug 2011 B2
8015151 Lier et al. Sep 2011 B2
8015487 Roy et al. Sep 2011 B2
8019709 Norton et al. Sep 2011 B2
8024778 Cash et al. Sep 2011 B2
8036632 Cona et al. Oct 2011 B1
8046362 Bayliss Oct 2011 B2
8082172 Chao et al. Dec 2011 B2
8095434 Puttick et al. Jan 2012 B1
8099339 Pinsonneault et al. Jan 2012 B1
8103543 Zwicky Jan 2012 B1
8117022 Linker Feb 2012 B2
8132149 Shenfield et al. Mar 2012 B2
8134457 Velipasalar et al. Mar 2012 B2
8135679 Bayliss Mar 2012 B2
8135719 Bayliss Mar 2012 B2
8145703 Frishert et al. Mar 2012 B2
8196184 Amirov et al. Jun 2012 B2
8214232 Tyler et al. Jul 2012 B2
8214361 Sandler et al. Jul 2012 B1
8214764 Gemmell et al. Jul 2012 B2
8225201 Michael Jul 2012 B2
8229947 Fujinaga Jul 2012 B2
8230333 Decherd et al. Jul 2012 B2
8239668 Chen et al. Aug 2012 B1
8266168 Bayliss Sep 2012 B2
8271948 Talozi et al. Sep 2012 B2
8280880 Aymeloglu et al. Oct 2012 B1
8290942 Jones et al. Oct 2012 B2
8290990 Drath et al. Oct 2012 B2
8301464 Cave et al. Oct 2012 B1
8301904 Gryaznov Oct 2012 B1
8312367 Foster Nov 2012 B2
8312546 Alme Nov 2012 B2
8316060 Snyder et al. Nov 2012 B1
8321943 Walters et al. Nov 2012 B1
8347398 Weber Jan 2013 B1
8352881 Champion et al. Jan 2013 B2
8368695 Howell et al. Feb 2013 B2
8380659 Zunger Feb 2013 B2
8397171 Klassen et al. Mar 2013 B2
8411046 Kruzeniski et al. Apr 2013 B2
8412707 Mianji Apr 2013 B1
8442940 Faletti et al. May 2013 B1
8447674 Choudhuri et al. May 2013 B2
8447722 Ahuja et al. May 2013 B1
8452790 Mianji May 2013 B1
8463036 Ramesh et al. Jun 2013 B1
8484168 Bayliss Jul 2013 B2
8489331 Kopf et al. Jul 2013 B2
8489623 Jain et al. Jul 2013 B2
8489641 Seefeld et al. Jul 2013 B1
8495077 Bayliss Jul 2013 B2
8498969 Bayliss Jul 2013 B2
8498984 Hwang et al. Jul 2013 B1
8514082 Cova et al. Aug 2013 B2
8515207 Chau Aug 2013 B2
8515912 Garrod et al. Aug 2013 B2
8527461 Ducott, III et al. Sep 2013 B2
8554579 Tribble et al. Oct 2013 B2
8554653 Falkenborg et al. Oct 2013 B2
8554709 Goodson et al. Oct 2013 B2
8577911 Stepinski et al. Nov 2013 B1
8578500 Long Nov 2013 B2
8589273 Creeden et al. Nov 2013 B2
8600872 Yan Dec 2013 B1
8620641 Farnsworth et al. Dec 2013 B2
8639522 Pathria et al. Jan 2014 B2
8646080 Williamson et al. Feb 2014 B2
8671449 Nachenberg Mar 2014 B1
8676597 Buehler et al. Mar 2014 B2
8676857 Adams et al. Mar 2014 B1
8688749 Ducott, III et al. Apr 2014 B1
8689108 Duffield et al. Apr 2014 B1
8689182 Leithead et al. Apr 2014 B2
8707185 Robinson et al. Apr 2014 B2
8713467 Goldenberg et al. Apr 2014 B1
8726379 Stiansen et al. May 2014 B1
8739278 Varghese May 2014 B2
8742934 Sarpy et al. Jun 2014 B1
8745516 Mason et al. Jun 2014 B2
8781169 Jackson et al. Jul 2014 B2
8788405 Sprague et al. Jul 2014 B1
8788407 Singh et al. Jul 2014 B1
8799799 Cervelli et al. Aug 2014 B1
8799812 Parker Aug 2014 B2
8812960 Sun et al. Aug 2014 B1
8818892 Sprague et al. Aug 2014 B1
8830322 Nerayoff et al. Sep 2014 B2
8832594 Thompson et al. Sep 2014 B1
8832832 Visbal Sep 2014 B1
8868486 Tamayo Oct 2014 B2
8917274 Ma et al. Dec 2014 B2
8924872 Bogomolov et al. Dec 2014 B1
8937619 Sharma et al. Jan 2015 B2
9009827 Albertson et al. Apr 2015 B1
9043894 Dennison et al. May 2015 B1
9923925 Albertson et al. Mar 2018 B2
10572496 Frank et al. Feb 2020 B1
20010021936 Bertram Sep 2001 A1
20020033848 Sciammarella et al. Mar 2002 A1
20020065708 Senay et al. May 2002 A1
20020087328 Denenberg et al. Jul 2002 A1
20020091707 Keller Jul 2002 A1
20020095658 Shulman Jul 2002 A1
20020116120 Ruiz et al. Aug 2002 A1
20020130907 Chi et al. Sep 2002 A1
20020174201 Ramer et al. Nov 2002 A1
20020188473 Jackson Dec 2002 A1
20020194119 Wright et al. Dec 2002 A1
20030028560 Kudrolli et al. Feb 2003 A1
20030039948 Donahue Feb 2003 A1
20030084017 Ordille May 2003 A1
20030088654 Good et al. May 2003 A1
20030097330 Hillmer et al. May 2003 A1
20030144868 MacIntyre et al. Jul 2003 A1
20030163352 Surpin et al. Aug 2003 A1
20030172053 Fairweather Sep 2003 A1
20030177112 Gardner Sep 2003 A1
20030182313 Federwisch et al. Sep 2003 A1
20030200217 Ackerman Oct 2003 A1
20030225755 Iwayama et al. Dec 2003 A1
20030229519 Eidex et al. Dec 2003 A1
20030229848 Arend et al. Dec 2003 A1
20040032432 Baynger Feb 2004 A1
20040064256 Barinek et al. Apr 2004 A1
20040078228 FitzGerald et al. Apr 2004 A1
20040083466 Dapp et al. Apr 2004 A1
20040085318 Hassler et al. May 2004 A1
20040095349 Bito et al. May 2004 A1
20040103124 Kupkova May 2004 A1
20040111390 Saito et al. Jun 2004 A1
20040111410 Burgoon et al. Jun 2004 A1
20040143602 Ruiz et al. Jul 2004 A1
20040153418 Hanweck Aug 2004 A1
20040181554 Heckerman et al. Sep 2004 A1
20040193600 Kaasten et al. Sep 2004 A1
20040205524 Richter et al. Oct 2004 A1
20040250124 Chesla et al. Dec 2004 A1
20040250576 Flanders Dec 2004 A1
20040260702 Cragun et al. Dec 2004 A1
20050027705 Sadri et al. Feb 2005 A1
20050028094 Allyn Feb 2005 A1
20050034107 Kendall et al. Feb 2005 A1
20050080769 Gemmell Apr 2005 A1
20050086207 Heuer et al. Apr 2005 A1
20050091420 Snover et al. Apr 2005 A1
20050108063 Madill et al. May 2005 A1
20050125715 Di Franco et al. Jun 2005 A1
20050162523 Darrell et al. Jul 2005 A1
20050180330 Shapiro Aug 2005 A1
20050182793 Keenan et al. Aug 2005 A1
20050183005 Denoue et al. Aug 2005 A1
20050193024 Beyer et al. Sep 2005 A1
20050222928 Steier et al. Oct 2005 A1
20050229256 Banzhof Oct 2005 A2
20050246327 Yeung et al. Nov 2005 A1
20050251786 Citron et al. Nov 2005 A1
20050262556 Waisman et al. Nov 2005 A1
20060026120 Carolan et al. Feb 2006 A1
20060026170 Kreitler et al. Feb 2006 A1
20060036568 Moore et al. Feb 2006 A1
20060045470 Poslinski et al. Mar 2006 A1
20060059139 Robinson Mar 2006 A1
20060069912 Zheng et al. Mar 2006 A1
20060074866 Chamberlain et al. Apr 2006 A1
20060074881 Vembu et al. Apr 2006 A1
20060080619 Carlson et al. Apr 2006 A1
20060106879 Zondervan et al. May 2006 A1
20060129746 Porter Jun 2006 A1
20060139375 Rasmussen et al. Jun 2006 A1
20060149596 Surpin et al. Jul 2006 A1
20060155945 McGarvey Jul 2006 A1
20060190497 Inturi et al. Aug 2006 A1
20060203337 White Sep 2006 A1
20060206866 Eldrige et al. Sep 2006 A1
20060218637 Thomas et al. Sep 2006 A1
20060224579 Zheng Oct 2006 A1
20060224629 Alexander et al. Oct 2006 A1
20060242040 Rader Oct 2006 A1
20060242630 Koike et al. Oct 2006 A1
20060247947 Suringa Nov 2006 A1
20060265747 Judge Nov 2006 A1
20060271277 Hu et al. Nov 2006 A1
20060273893 Warner Dec 2006 A1
20060279630 Aggarwal et al. Dec 2006 A1
20070005707 Teodosiu et al. Jan 2007 A1
20070011030 Bregante et al. Jan 2007 A1
20070011150 Frank Jan 2007 A1
20070015506 Hewett et al. Jan 2007 A1
20070016363 Huang et al. Jan 2007 A1
20070026373 Suriyanarayanan et al. Feb 2007 A1
20070038962 Fuchs et al. Feb 2007 A1
20070057966 Ohno et al. Mar 2007 A1
20070074169 Chess et al. Mar 2007 A1
20070078832 Ott et al. Apr 2007 A1
20070078872 Cohen Apr 2007 A1
20070083541 Fraleigh et al. Apr 2007 A1
20070112714 Fairweather May 2007 A1
20070112887 Liu et al. May 2007 A1
20070168516 Liu et al. Jul 2007 A1
20070174760 Chamberlain et al. Jul 2007 A1
20070180075 Chasman et al. Aug 2007 A1
20070192143 Krishnan et al. Aug 2007 A1
20070192265 Chopin et al. Aug 2007 A1
20070208497 Downs et al. Sep 2007 A1
20070208498 Barker et al. Sep 2007 A1
20070208736 Tanigawa et al. Sep 2007 A1
20070220067 Suriyanarayanan et al. Sep 2007 A1
20070220328 Liu et al. Sep 2007 A1
20070233756 D'Souza et al. Oct 2007 A1
20070266336 Nojima et al. Nov 2007 A1
20070294200 Au Dec 2007 A1
20070294643 Kyle Dec 2007 A1
20070294766 Mir et al. Dec 2007 A1
20070299697 Friedlander et al. Dec 2007 A1
20070299887 Novik et al. Dec 2007 A1
20080005780 Singleton Jan 2008 A1
20080027981 Wahl Jan 2008 A1
20080033753 Canda et al. Feb 2008 A1
20080040684 Crump Feb 2008 A1
20080051989 Welsh Feb 2008 A1
20080052142 Bailey et al. Feb 2008 A1
20080077474 Dumas et al. Mar 2008 A1
20080077597 Butler Mar 2008 A1
20080077642 Carbone et al. Mar 2008 A1
20080086718 Bostick et al. Apr 2008 A1
20080104019 Nath May 2008 A1
20080109762 Hundal et al. May 2008 A1
20080126951 Sood et al. May 2008 A1
20080133567 Ames et al. Jun 2008 A1
20080140387 Linker Jun 2008 A1
20080141117 King et al. Jun 2008 A1
20080148398 Mezack et al. Jun 2008 A1
20080162616 Gross et al. Jul 2008 A1
20080189240 Mullins et al. Aug 2008 A1
20080195417 Surpin et al. Aug 2008 A1
20080195608 Clover Aug 2008 A1
20080222295 Robinson et al. Sep 2008 A1
20080228467 Womack et al. Sep 2008 A1
20080229422 Hudis et al. Sep 2008 A1
20080235575 Weiss Sep 2008 A1
20080243951 Webman et al. Oct 2008 A1
20080255973 El Wade et al. Oct 2008 A1
20080263468 Cappione et al. Oct 2008 A1
20080267107 Rosenberg Oct 2008 A1
20080276167 Michael Nov 2008 A1
20080278311 Grange et al. Nov 2008 A1
20080281580 Zabokritski Nov 2008 A1
20080288306 MacIntyre et al. Nov 2008 A1
20080288425 Posse et al. Nov 2008 A1
20080301643 Appleton et al. Dec 2008 A1
20080320299 Wobber et al. Dec 2008 A1
20090002492 Velipasalar et al. Jan 2009 A1
20090018940 Wang et al. Jan 2009 A1
20090027418 Maru et al. Jan 2009 A1
20090030915 Winter et al. Jan 2009 A1
20090044279 Crawford et al. Feb 2009 A1
20090055251 Shah et al. Feb 2009 A1
20090076845 Bellin et al. Mar 2009 A1
20090082997 Tokman et al. Mar 2009 A1
20090083184 Eisen Mar 2009 A1
20090088964 Schaaf et al. Apr 2009 A1
20090094064 Tyler et al. Apr 2009 A1
20090100165 Wesley et al. Apr 2009 A1
20090103442 Douville Apr 2009 A1
20090119309 Gibson et al. May 2009 A1
20090125369 Kloosstra et al. May 2009 A1
20090132921 Hwangbo et al. May 2009 A1
20090132953 Reed et al. May 2009 A1
20090144262 White et al. Jun 2009 A1
20090144274 Fraleigh et al. Jun 2009 A1
20090164934 Bhattiprolu et al. Jun 2009 A1
20090171939 Athsani et al. Jul 2009 A1
20090172511 Decherd et al. Jul 2009 A1
20090172821 Daira et al. Jul 2009 A1
20090179892 Tsuda et al. Jul 2009 A1
20090187546 Whyte et al. Jul 2009 A1
20090192957 Subramanian et al. Jul 2009 A1
20090199090 Poston et al. Aug 2009 A1
20090210251 Callas Aug 2009 A1
20090222400 Kupershmidt et al. Sep 2009 A1
20090222760 Halverson et al. Sep 2009 A1
20090228507 Jain et al. Sep 2009 A1
20090234720 George et al. Sep 2009 A1
20090248828 Gould et al. Oct 2009 A1
20090254970 Agarwal et al. Oct 2009 A1
20090271359 Bayliss Oct 2009 A1
20090281839 Lynn et al. Nov 2009 A1
20090287470 Farnsworth et al. Nov 2009 A1
20090292626 Oxford Nov 2009 A1
20090328222 Helman et al. Dec 2009 A1
20100011000 Chakra et al. Jan 2010 A1
20100011282 Dollard et al. Jan 2010 A1
20100042922 Bradateanu et al. Feb 2010 A1
20100057716 Stefik et al. Mar 2010 A1
20100064252 Kramer et al. Mar 2010 A1
20100070523 Delgo et al. Mar 2010 A1
20100070842 Aymeloglu et al. Mar 2010 A1
20100070897 Aymeloglu et al. Mar 2010 A1
20100074125 Chandra Mar 2010 A1
20100077481 Polyakov Mar 2010 A1
20100077483 Stolfo et al. Mar 2010 A1
20100100963 Mahaffey Apr 2010 A1
20100114887 Conway et al. May 2010 A1
20100121803 Gill May 2010 A1
20100122152 Chamberlain et al. May 2010 A1
20100125546 Barrett et al. May 2010 A1
20100131457 Heimendinger May 2010 A1
20100145909 Ngo Jun 2010 A1
20100162176 Dunton Jun 2010 A1
20100169237 Howard et al. Jul 2010 A1
20100185691 Irmak et al. Jul 2010 A1
20100191563 Schlaifer et al. Jul 2010 A1
20100198684 Eraker et al. Aug 2010 A1
20100199225 Coleman et al. Aug 2010 A1
20100204983 Chung et al. Aug 2010 A1
20100235915 Memon et al. Sep 2010 A1
20100250412 Wagner Sep 2010 A1
20100262688 Hussain et al. Oct 2010 A1
20100280857 Liu et al. Nov 2010 A1
20100293174 Bennett et al. Nov 2010 A1
20100306029 Jolley Dec 2010 A1
20100306285 Shah et al. Dec 2010 A1
20100306713 Geisner et al. Dec 2010 A1
20100321399 Ellren et al. Dec 2010 A1
20100325526 Ellis et al. Dec 2010 A1
20100325581 Finkelstein et al. Dec 2010 A1
20100330801 Rouh Dec 2010 A1
20110010342 Chen et al. Jan 2011 A1
20110035811 Rees et al. Feb 2011 A1
20110047159 Baid et al. Feb 2011 A1
20110060753 Shaked et al. Mar 2011 A1
20110061013 Bilicki et al. Mar 2011 A1
20110074811 Hanson et al. Mar 2011 A1
20110078173 Seligmann et al. Mar 2011 A1
20110087519 Fordyce, III et al. Apr 2011 A1
20110093327 Fordyce, III et al. Apr 2011 A1
20110117878 Barash et al. May 2011 A1
20110119100 Ruhl et al. May 2011 A1
20110137766 Rasmussen et al. Jun 2011 A1
20110153384 Horne et al. Jun 2011 A1
20110167105 Ramakrishnan et al. Jul 2011 A1
20110167493 Song et al. Jul 2011 A1
20110170799 Carrino et al. Jul 2011 A1
20110173032 Payne et al. Jul 2011 A1
20110173093 Psota et al. Jul 2011 A1
20110178842 Rane et al. Jul 2011 A1
20110197280 Young Aug 2011 A1
20110208724 Jones et al. Aug 2011 A1
20110218934 Elser Sep 2011 A1
20110219450 McDougal et al. Sep 2011 A1
20110225198 Edwards et al. Sep 2011 A1
20110231223 Winters Sep 2011 A1
20110238510 Rowen et al. Sep 2011 A1
20110238553 Raj et al. Sep 2011 A1
20110238570 Li et al. Sep 2011 A1
20110246229 Pacha Oct 2011 A1
20110258216 Supakkul et al. Oct 2011 A1
20110264459 Tyler et al. Oct 2011 A1
20110291851 Whisenant Dec 2011 A1
20110307382 Siegel et al. Dec 2011 A1
20110310005 Chen et al. Dec 2011 A1
20120005159 Wang et al. Jan 2012 A1
20120016849 Garrod et al. Jan 2012 A1
20120019559 Siler et al. Jan 2012 A1
20120022945 Falkenborg et al. Jan 2012 A1
20120023075 Pulfer et al. Jan 2012 A1
20120036013 Neuhaus et al. Feb 2012 A1
20120036106 Desai et al. Feb 2012 A1
20120036434 Oberstein Feb 2012 A1
20120064921 Hernoud et al. Mar 2012 A1
20120066296 Appleton et al. Mar 2012 A1
20120079363 Folting et al. Mar 2012 A1
20120084135 Nissan et al. Apr 2012 A1
20120084866 Stolfo Apr 2012 A1
20120106801 Jackson May 2012 A1
20120110633 An et al. May 2012 A1
20120110674 Belani et al. May 2012 A1
20120117082 Koperda et al. May 2012 A1
20120131512 Takeuchi et al. May 2012 A1
20120144335 Abeln et al. Jun 2012 A1
20120150446 Chang et al. Jun 2012 A1
20120159307 Chung et al. Jun 2012 A1
20120159362 Brown et al. Jun 2012 A1
20120159399 Bastide et al. Jun 2012 A1
20120167164 Burgess et al. Jun 2012 A1
20120173289 Pollard et al. Jul 2012 A1
20120173985 Peppel Jul 2012 A1
20120191446 Binsztok et al. Jul 2012 A1
20120196557 Reich et al. Aug 2012 A1
20120196558 Reich et al. Aug 2012 A1
20120208636 Feige Aug 2012 A1
20120215898 Shah et al. Aug 2012 A1
20120221511 Gibson et al. Aug 2012 A1
20120221553 Wittmer et al. Aug 2012 A1
20120221580 Barney Aug 2012 A1
20120230560 Spitz et al. Sep 2012 A1
20120233656 Rieschick Sep 2012 A1
20120245976 Kumar et al. Sep 2012 A1
20120246148 Dror Sep 2012 A1
20120254129 Wheeler et al. Oct 2012 A1
20120266245 McDougal et al. Oct 2012 A1
20120290879 Shibuya et al. Nov 2012 A1
20120296907 Long et al. Nov 2012 A1
20120304150 Leithead et al. Nov 2012 A1
20120304244 Xie et al. Nov 2012 A1
20120310831 Harris et al. Dec 2012 A1
20120310838 Harris et al. Dec 2012 A1
20120311684 Paulsen et al. Dec 2012 A1
20120323829 Stokes et al. Dec 2012 A1
20120323888 Osann, Jr. Dec 2012 A1
20120330801 McDougal Dec 2012 A1
20120330973 Ghuneim et al. Dec 2012 A1
20130006426 Healey et al. Jan 2013 A1
20130006655 Van Arkel et al. Jan 2013 A1
20130006668 Van Arkel et al. Jan 2013 A1
20130006725 Simanek et al. Jan 2013 A1
20130013612 Fittges et al. Jan 2013 A1
20130018796 Kolhatkar et al. Jan 2013 A1
20130019306 Lagar-Cavilla Jan 2013 A1
20130024307 Fuerstenberg et al. Jan 2013 A1
20130024339 Choudhuri et al. Jan 2013 A1
20130046842 Muntz et al. Feb 2013 A1
20130060786 Serrano et al. Mar 2013 A1
20130061169 Pearcy et al. Mar 2013 A1
20130067017 Carriere et al. Mar 2013 A1
20130073377 Heath Mar 2013 A1
20130078943 Biage et al. Mar 2013 A1
20130086072 Peng et al. Apr 2013 A1
20130091084 Lee Apr 2013 A1
20130097482 Marantz et al. Apr 2013 A1
20130101159 Chao et al. Apr 2013 A1
20130111320 Campbell et al. May 2013 A1
20130117651 Waldman et al. May 2013 A1
20130124193 Holmberg May 2013 A1
20130139268 An et al. May 2013 A1
20130150004 Rosen Jun 2013 A1
20130151338 Distefano, III Jun 2013 A1
20130151388 Falkenborg et al. Jun 2013 A1
20130157234 Gulli et al. Jun 2013 A1
20130160120 Malaviya et al. Jun 2013 A1
20130166550 Buchmann et al. Jun 2013 A1
20130173540 Qian et al. Jul 2013 A1
20130176321 Mitchell et al. Jul 2013 A1
20130179420 Park et al. Jul 2013 A1
20130191336 Ducott et al. Jul 2013 A1
20130191338 Ducott, III et al. Jul 2013 A1
20130211985 Clark et al. Aug 2013 A1
20130224696 Wolfe et al. Aug 2013 A1
20130232045 Tai et al. Sep 2013 A1
20130238616 Rose et al. Sep 2013 A1
20130246170 Gross et al. Sep 2013 A1
20130251233 Yang et al. Sep 2013 A1
20130262527 Hunter et al. Oct 2013 A1
20130263019 Castellanos et al. Oct 2013 A1
20130268520 Fisher et al. Oct 2013 A1
20130275446 Jain et al. Oct 2013 A1
20130276799 Davidson Oct 2013 A1
20130279757 Kephart Oct 2013 A1
20130282696 John et al. Oct 2013 A1
20130290011 Lynn et al. Oct 2013 A1
20130290825 Arndt et al. Oct 2013 A1
20130297619 Chandrasekaran et al. Nov 2013 A1
20130318594 Hoy et al. Nov 2013 A1
20130339218 Subramanian et al. Dec 2013 A1
20130345982 Liu et al. Dec 2013 A1
20130346444 Makkar et al. Dec 2013 A1
20130346563 Huang Dec 2013 A1
20140006042 Keefe et al. Jan 2014 A1
20140006109 Callioni et al. Jan 2014 A1
20140019936 Cohanoff Jan 2014 A1
20140032506 Hoey et al. Jan 2014 A1
20140033010 Richardt et al. Jan 2014 A1
20140040182 Gilder et al. Feb 2014 A1
20140040371 Gurevich et al. Feb 2014 A1
20140040714 Siegel et al. Feb 2014 A1
20140047357 Alfaro et al. Feb 2014 A1
20140052466 DeVille et al. Feb 2014 A1
20140058754 Wild Feb 2014 A1
20140059038 McPherson et al. Feb 2014 A1
20140059683 Ashley Feb 2014 A1
20140068487 Steiger et al. Mar 2014 A1
20140081652 Klindworth Mar 2014 A1
20140095509 Patton Apr 2014 A1
20140108068 Williams Apr 2014 A1
20140108380 Gotz et al. Apr 2014 A1
20140108985 Scott et al. Apr 2014 A1
20140114972 Ducott et al. Apr 2014 A1
20140123279 Bishop May 2014 A1
20140129518 Ducott et al. May 2014 A1
20140136237 Anderson et al. May 2014 A1
20140143009 Brice et al. May 2014 A1
20140149130 Getchius May 2014 A1
20140149272 Hirani et al. May 2014 A1
20140149436 Bahrami et al. May 2014 A1
20140156527 Grigg et al. Jun 2014 A1
20140157172 Peery et al. Jun 2014 A1
20140164502 Khodorenko et al. Jun 2014 A1
20140189536 Lange et al. Jul 2014 A1
20140195515 Baker et al. Jul 2014 A1
20140195887 Ellis et al. Jul 2014 A1
20140245210 Battcher et al. Aug 2014 A1
20140273910 Ballantyne et al. Sep 2014 A1
20140278479 Wang et al. Sep 2014 A1
20140279824 Tamayo Sep 2014 A1
20140310282 Sprague et al. Oct 2014 A1
20140316911 Gross Oct 2014 A1
20140333651 Cervelli et al. Nov 2014 A1
20140337772 Cervelli et al. Nov 2014 A1
20140366132 Stiansen et al. Dec 2014 A1
20150046791 Isaacson Feb 2015 A1
20150046844 Lee et al. Feb 2015 A1
20150046845 Lee et al. Feb 2015 A1
20150046870 Goldenberg et al. Feb 2015 A1
20150046876 Goldenberg Feb 2015 A1
20150074050 Landau et al. Mar 2015 A1
20150135329 Aghasaryan et al. May 2015 A1
20150186821 Wang et al. Jul 2015 A1
20150187036 Wang et al. Jul 2015 A1
20150212236 Haas et al. Jul 2015 A1
20150213284 Birkel et al. Jul 2015 A1
20150230061 Srivastava Aug 2015 A1
20150235334 Wang et al. Aug 2015 A1
20150248489 Solheim et al. Sep 2015 A1
20150311991 Iwai et al. Oct 2015 A1
20150317285 Duggal et al. Nov 2015 A1
20150363518 Edgington et al. Dec 2015 A1
20150379341 Agrawal et al. Dec 2015 A1
20160065511 Ganin et al. Mar 2016 A1
20160098493 Primke et al. Apr 2016 A1
20160112428 Terleski et al. Apr 2016 A1
20160119424 Kane et al. Apr 2016 A1
20160359888 Gupta Dec 2016 A1
20170098289 Florance et al. Apr 2017 A1
20170134425 Albertson et al. May 2017 A1
20180005331 Wang et al. Jan 2018 A1
Foreign Referenced Citations (29)
Number Date Country
101729531 Jun 2010 CN
103281301 Sep 2013 CN
102014103476 Sep 2014 DE
102014103482 Sep 2014 DE
0816968 Jan 1996 EP
1191463 Mar 2002 EP
1672527 Jun 2006 EP
2551799 Jan 2013 EP
2778914 Sep 2014 EP
2778977 Sep 2014 EP
2835745 Feb 2015 EP
2835770 Feb 2015 EP
2911078 Aug 2015 EP
2911079 Aug 2015 EP
2366498 Mar 2002 GB
2514239 Nov 2014 GB
2516155 Jan 2015 GB
WO 2000009529 Feb 2000 WO
WO 2005104736 Nov 2005 WO
WO 2008011728 Jan 2008 WO
WO 2008064207 May 2008 WO
WO 2008113059 Sep 2008 WO
WO 2009061501 May 2009 WO
WO 2010000014 Jan 2010 WO
WO 2010030913 Mar 2010 WO
WO 2011071833 Jun 2011 WO
WO 2011161565 Dec 2011 WO
WO 2012009397 Jan 2012 WO
WO 2013126281 Aug 2013 WO
Non-Patent Literature Citations (157)
Entry
US 8,712,906 B1, 04/2014, Sprague et al. (withdrawn)
Notice of Allowance for U.S. Appl. No. 14/684,231 dated Jul. 17, 2017.
Notice of Allowance for U.S. Appl. No. 14/684,231 dated Nov. 17, 2017.
Official Communication for U.S. Appl. No. 14/684,231 dated Jan. 23, 2017.
Official Communication for U.S. Appl. No. 14/684,231 dated Oct. 31, 2016.
“A First Look: Predicting Market Demand for Food Retail using a Huff Analysis,” TRF Policy Solutions, Jul. 2012, pp. 30.
“A Tour of Pinboard,” <http://pinboard.in/tour> as printed May 15, 2014 in 6 pages.
“A Word About Banks and the Laundering of Drug Money,” Aug. 18, 2012, http://www.golemxiv.co.uk/2012/08/a-word-about-banks-and-the-laundering-of-drug-money/.
Acklen, Laura, “Absolute Beginner's Guide to Microsoft Word 2003,” Dec. 24, 2003, pp. 15-18, 34-41, 308-316.
Alfred, Rayner “Summarizing Relational Data Using Semi-Supervised Genetic Algorithm-Based Clustering Techniques”, Journal of Computer Science, 2010, vol. 6, No. 7, pp. 775-784.
Ananiev et al., “The New Modality API,” http://web.archive.org/web/20061211011958/http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/modality/ Jan. 21, 2006, pp. 8.
Anonymous, “BackTult—JD Edwards One World Version Control System,” printed Jul. 23, 2007 in 1 page.
Baker et al., “The Development of a Common Enumeration of Vulnerabilities and Exposures,” Presented at the Second International Workshop on Recent Advances in Intrusion Detection, Sep. 7-9, 1999, pp. 35.
Bluttman et al., “Excel Formulas and Functions for Dummies,” 2005, Wiley Publishing, Inc., pp. 280, 284-286.
Bugzilla@Mozilla, “Bug 18726—[feature] Long-click means of invoking contextual menus not supported,” http://bugzilla.mozilla.org/show_bug.cgi?id=18726 printed Jun. 13, 2013 in 11 pages.
Chen et al., “Bringing Order to the Web: Automatically Categorizing Search Results,” CHI 2000, Proceedings of the SIGCHI conference on Human Factors in Computing Systems, Apr. 1-6, 2000, The Hague, The Netherlands, pp. 145-152.
Conner, Nancy, “Google Apps: The Missing Manual,” May 1, 2008, pp. 15.
Crosby et al., “Efficient Data Structures for Tamper-Evident Logging,” Department of Computer Science, Rice University, 2009, pp. 17.
Definition “Identify”, downloaded Jan. 22, 2015, 1 page.
Definition “Overlay”, downloaded Jan. 22, 2015, 1 page.
Delicious, <http://delicious.com/> as printed May 15, 2014 in 1 page.
Dell Latitude D600 2003, Dell Inc., http://www.dell.com/downloads/global/products/latit/en/spec_latit_d600_en.pdf.
Dou et al., “Ontology Translation on the Semantic Web,” Springer-Verlag, Journal on Data Semantics II, Lecture Notes in Computer Science, 2005, vol. 3350, pp. 35-37.
Dramowicz, Ela, “Retail Trade Area Analysis Using the Huff Model,” Directions Magazine, Jul. 2, 2005 in 10 pages, http://www.directionsmag.com/articles/retail-trade-area-analysis-using-the-huff-mode1/123411.
“The FASTA Program Package,” fasta-36.3.4, Mar. 25, 2011, pp. 29.
Fidge, Colin J., “Timestamps in Message-Passing Systems,” K. Raymond (Ed.) Proc. of the 11th Australian Computer Science Conference (ACSC 1988), pp. 56-66.
Geiger, Jonathan G., “Data Quality Management, The Most Critical Initiative You Can Implement”, Data Warehousing, Management and Quality, Paper 098-29, SUGI 29, Intelligent Solutions, Inc., Boulder, CO, pp. 14, accessed Oct. 3, 2013.
GIS-NET 3 Public, Department of Regional Planning. Planning & Zoning Information for Unincorporated LA County. Retrieved Oct. 2, 2013 from http://gis.planning.lacounty.gov/GIS-NET3_Public/Viewer.html.
Goswami, Gautam, “Quite Writly Said!,” One Brick at a Time, Aug. 21, 2005, pp. 7.
Griffith, Daniel A., “A Generalized Huff Model,” Geographical Analysis, Apr. 1982, vol. 14, No. 2, pp. 135-144.
Hansen et al., “Analyzing Social Media Networks with NodeXL: Insights from a Connected World”, Chapter 4, pp. 53-67 and Chapter 10, pp. 143-164, published Sep. 2010.
Hibbert et al., “Prediction of Shopping Behavior Using a Huff Model Within a GIS Framework,” Healthy Eating in Context, Mar. 18, 2011, pp. 16.
Holliday, JoAnne, “Replicated Database Recovery using Multicast Communication,” IEEE 2002, pp. 11.
Huff et al., “Calibrating the Huff Model Using ArcGIS Business Analyst,” ESRI, Sep. 2008, pp. 33.
Huff, David L., “Parameter Estimation in the Huff Model,” ESRI, ArcUser, Oct.-Dec. 2003, pp. 34-36.
Kahan et al., “Annotea: an Open RDF Infrastructure for Shared Web Annotations”, Computer Networks, Elsevier Science Publishers B.V., vol. 39, No. 5, dated Aug. 5, 2002, pp. 589-608.
Keylines.com, “An Introduction to KeyLines and Network Visualization,” Mar. 2014, <http://keylines.com/wp-content/uploads/2014/03/KeyLines-White-Paper.pdf> downloaded May 12, 2014 in 8 pages.
Keylines.com, “KeyLines Datasheet,” Mar. 2014, <http://keylines.com/wp-content/uploads/2014/03/KeyLines-datasheet.pdf> downloaded May 12, 2014 in 2 pages.
Keylines.com, “Visualizing Threats: Improved Cyber Security Through Network Visualization,” Apr. 2014, <http://keylines.com/wp-content/uploads/2014/04/Visualizing-Threats1.pdf> downloaded May 12, 2014 in 10 pages.
Klemmer et al., “Where Do Web Sites Come From? Capturing and Interacting with Design History,” Association for Computing Machinery, CHI 2002, Apr. 20-25, 2002, Minneapolis, MN, pp. 8.
Kokossi et al., “D7—Dynamic Ontology Management System (Design),” Information Societies Technology Programme, Jan. 10, 2002, pp. 1-27.
Lamport, “Time, Clocks and the Ordering of Events in a Distributed System,” Communications of the ACM, Jul. 1978, vol. 21, No. 7, pp. 558-565.
Lee et al., “A Data Mining and CIDF Based Approach for Detecting Novel and Distributed Intrusions,” Lecture Notes in Computer Science, vol. 1907 Nov. 11, 2000, pp. 49-65.
Liu, Tianshun, “Combining GIS and the Huff Model to Analyze Suitable Locations for a New Asian Supermarket in the Minneapolis and St. Paul, Minnesota USA,” Papers in Resource Analysis, 2012, vol. 14, pp. 8.
Loeliger, Jon, “Version Control with Git,” O'Reilly, May 2009, pp. 330.
Manno et al., “Introducing Collaboration in Single-user Applications through the Centralized Control Architecture,” 2010, pp. 10.
Manske, “File Saving Dialogs,” <http://www.mozilla.org/editor/ui_specs/FileSaveDialogs.html>, Jan. 20, 1999, pp. 7.
Map of San Jose, CA. Retrieved Oct. 2, 2013 from http://maps.yahoo.com.
Map of San Jose, CA. Retrieved Oct. 2, 2013 from http://maps.bing.com.
Map of San Jose, CA. Retrieved Oct. 2, 2013 from http://maps.google.com.
Mattern, F., “Virtual Time and Global States of Distributed Systems,” Cosnard, M., Proc. Workshop on Parallel and Distributed Algorithms, Chateau de Bonas, France:Elsevier, 1989, pp. 215-226.
Microsoft—Developer Network, “Getting Started with VBA in Word 2010,” Apr. 2010, <http://msdn.microsoft.com/en-us/library/ff604039%28v=office.14%29.aspx> as printed Apr. 4, 2014 in 17 pages.
Microsoft Office—Visio, “About connecting shapes,” <http://office.microsoft.com/en-us/visio-help/about-connecting-shapes-HP085050369.aspx> printed Aug. 4, 2011 in 6 pages.
Microsoft Office—Visio, “Add and glue connectors with the Connector tool,” <http://office.microsoft.com/en-us/visio-help/add-and-glue-connectors-with-the-connector-tool-HA010048532.aspx?CTT=1> printed Aug. 4, 2011 in 1 page.
Miklau et al., “Securing History: Privacy and Accountability in Database Systems,” 3rd Biennial Conference on Innovative Data Systems Research (CIDR), Jan. 7-10, 2007, Asilomar, California, pp. 387-396.
Morrison et al., "Converting Users to Testers: An Alternative Approach to Load Test Script Creation, Parameterization and Data Correlation," CCSC: Southeastern Conference, JCSC 28, Dec. 2, 2012, pp. 188-196.
Niepert et al., “A Dynamic Ontology for a Dynamic Reference Work”, Joint Conference on Digital Libraries, Jun. 17-22, 2007, Vancouver, British Columbia, Canada, pp. 1-10.
Nivas, Tuli, “Test Harness and Script Design Principles for Automated Testing of non-GUI or Web Based Applications,” Performance Lab, Jun. 2011, pp. 30-37.
Olanoff, Drew, “Deep Dive with the New Google Maps for Desktop with Google Earth Integration, It's More than Just a Utility,” May 15, 2013, pp. 1-6, retrieved from the internet: http://web.archive.org/web/20130515230641/http://techcrunch.com/2013/05/15/deep-dive-with-the-new-google-maps-for-desktop-with-google-earth-integration-its-more-than-just-a-utility/.
O'Sullivan, Bryan, “Making Sense of Revision Control Systems,” Communications of the ACM, Sep. 2009, vol. 52, No. 9, pp. 57-62.
OWL Web Ontology Language Reference, W3C, Feb. 2004, http://www.w3.org/TR/owl-ref/.
Palantir, “Extracting and Transforming Data with Kite,” Palantir Technologies, Inc., Copyright 2010, pp. 38.
Palantir, “Kite Data-Integration Process Overview,” Palantir Technologies, Inc., Copyright 2010, pp. 48.
Palantir, “Kite Operations,” Palantir Technologies, Inc., Copyright 2010, p. 1.
Palantir, “Kite,” https://docs.palantir.com/gotham/3.11.1.0/adminreference/datasources.11 printed Aug. 30, 2013 in 2 pages.
Palantir, “The Repository Element,” https://docs.palantir.com/gotham/3.11.1.0/dataguide/kite_config_file.04 printed Aug. 30, 2013 in 2 pages.
Palantir, “Write a Kite Configuration File in Eclipse,” Palantir Technologies, Inc., Copyright 2010, pp. 2.
Palantir, https://docs.palantir.com/gotham/3.11.1.0/dataguide/baggage/KiteSchema.xsd printed Apr. 4, 2014 in 4 pages.
Palermo, Christopher J., “Memorandum,” [Disclosure relating to U.S. Appl. No. 13/916,447, filed Jun. 12, 2013, and related applications], Jan. 31, 2014 in 3 pages.
Palmas et al., "An Edge-Bundling Layout for Interactive Parallel Coordinates," 2014 IEEE Pacific Visualization Symposium, pp. 57-64.
Parker, Jr. et al., “Detection of Mutual Inconsistency in Distributed Systems,” IEEE Transactions in Software Engineering, May 1983, vol. SE-9, No. 3, pp. 241-247.
“Potential Money Laundering Warning Signs,” snapshot taken 2003, https://web.archive.org/web/20030816090055/http:/finsolinc.com/ANTI-MONEY%20LAUNDERING%20TRAINING%20GUIDES.pdf.
Rouse, Margaret, “OLAP Cube,” <http://searchdatamanagement.techtarget.com/definition/OLAP-cube>, Apr. 28, 2012, pp. 16.
Shah, Chintan, “Periodic Connections to Control Server Offer New Way to Detect Botnets,” Oct. 24, 2013 in 6 pages, <http://www.blogs.mcafee.com/mcafee-labs/periodic-links-to-control-server-offer-new-way-to-detect-botnets>.
Symantec Corporation, “E-Security Begins with Sound Security Policies,” Announcement Symantec, Jun. 14, 2001.
Wiggerts, T.A., “Using Clustering Algorithms in Legacy Systems Remodularization,” Reverse Engineering, Proceedings of the Fourth Working Conference, Netherlands, Oct. 6-8, 1997, IEEE Computer Soc., pp. 33-43.
Wollrath et al., “A Distributed Object Model for the Java System,” Conference on Object-Oriented Technologies and Systems, Jun. 17-21, 1996, pp. 219-231.
Notice of Allowance for U.S. Appl. No. 13/657,684 dated Mar. 2, 2015.
Notice of Allowance for U.S. Appl. No. 14/139,628 dated Jun. 24, 2015.
Notice of Allowance for U.S. Appl. No. 14/139,640 dated Jun. 17, 2015.
Notice of Allowance for U.S. Appl. No. 14/139,713 dated Jun. 12, 2015.
Notice of Allowance for U.S. Appl. No. 14/264,445 dated May 14, 2015.
Notice of Allowance for U.S. Appl. No. 14/286,485 dated Jul. 29, 2015.
Notice of Allowance for U.S. Appl. No. 14/473,552 dated Jul. 24, 2015.
Notice of Allowance for U.S. Appl. No. 14/473,860 dated Jan. 5, 2015.
Notice of Allowance for U.S. Appl. No. 14/486,991 dated May 1, 2015.
Notice of Allowance for U.S. Appl. No. 14/616,080 dated Apr. 2, 2015.
Official Communication for European Patent Application No. 14158861.6 dated Jun. 16, 2014.
Official Communication for European Patent Application No. 14158977.0 dated Jun. 10, 2014.
Official Communication for European Patent Application No. 14159464.8 dated Jul. 31, 2014.
Official Communication for European Patent Application No. 14159535.5 dated May 22, 2014.
Official Communication for European Patent Application No. 14180281.9 dated Jan. 26, 2015.
Official Communication for European Patent Application No. 14187996.5 dated Feb. 12, 2015.
Official Communication for European Patent Application No. 15155845.9 dated Oct. 6, 2015.
Official Communication for European Patent Application No. 15156004.2 dated Aug. 24, 2015.
Official Communication for Great Britain Patent Application No. 1404457.2 dated Aug. 14, 2014.
Official Communication for Great Britain Patent Application No. 1404573.6 dated Sep. 10, 2014.
Official Communication for Great Britain Patent Application No. 1408025.3 dated Nov. 6, 2014.
Official Communication for Great Britain Patent Application No. 1411984.6 dated Dec. 22, 2014.
Official Communication for Great Britain Patent Application No. 1413935.6 dated Jan. 27, 2015.
Official Communication for New Zealand Patent Application No. 622181 dated Mar. 24, 2014.
Official Communication for New Zealand Patent Application No. 622389 dated Mar. 20, 2014.
Official Communication for New Zealand Patent Application No. 622404 dated Mar. 20, 2014.
Official Communication for New Zealand Patent Application No. 622484 dated Apr. 2, 2014.
Official Communication for New Zealand Patent Application No. 622513 dated Apr. 3, 2014.
Official Communication for New Zealand Patent Application No. 622517 dated Apr. 3, 2014.
Official Communication for New Zealand Patent Application No. 624557 dated May 14, 2014.
Official Communication for New Zealand Patent Application No. 627962 dated Aug. 5, 2014.
Official Communication for New Zealand Patent Application No. 628150 dated Aug. 15, 2014.
Official Communication for New Zealand Patent Application No. 628161 dated Aug. 25, 2014.
Official Communication for New Zealand Patent Application No. 628263 dated Aug. 12, 2014.
Official Communication for New Zealand Patent Application No. 628495 dated Aug. 19, 2014.
Official Communication for New Zealand Patent Application No. 628585 dated Aug. 26, 2014.
Official Communication for New Zealand Patent Application No. 628840 dated Aug. 28, 2014.
Official Communication for U.S. Appl. No. 13/657,684 dated Aug. 28, 2014.
Official Communication for U.S. Appl. No. 13/949,043 dated May 7, 2015.
Official Communication for U.S. Appl. No. 14/076,385 dated Jun. 2, 2015.
Official Communication for U.S. Appl. No. 14/076,385 dated Jan. 22, 2015.
Official Communication for U.S. Appl. No. 14/156,208 dated Aug. 11, 2015.
Official Communication for U.S. Appl. No. 14/156,208 dated Mar. 9, 2015.
Official Communication for U.S. Appl. No. 14/170,562 dated Jul. 17, 2015.
Official Communication for U.S. Appl. No. 14/170,562 dated Sep. 25, 2014.
Official Communication for U.S. Appl. No. 14/264,445 dated Apr. 17, 2015.
Official Communication for U.S. Appl. No. 14/278,963 dated Jan. 30, 2015.
Official Communication for U.S. Appl. No. 14/280,490 dated Jul. 24, 2014.
Official Communication for U.S. Appl. No. 14/286,485 dated Mar. 12, 2015.
Official Communication for U.S. Appl. No. 14/286,485 dated Apr. 30, 2015.
Official Communication for U.S. Appl. No. 14/289,596 dated Jul. 18, 2014.
Official Communication for U.S. Appl. No. 14/289,596 dated Jan. 26, 2015.
Official Communication for U.S. Appl. No. 14/289,596 dated Apr. 30, 2015.
Official Communication for U.S. Appl. No. 14/289,599 dated Jul. 22, 2014.
Official Communication for U.S. Appl. No. 14/289,599 dated May 29, 2015.
Official Communication for U.S. Appl. No. 14/334,232 dated Jul. 10, 2015.
Official Communication for U.S. Appl. No. 14/449,083 dated Mar. 12, 2015.
Official Communication for U.S. Appl. No. 14/449,083 dated Oct. 2, 2014.
Official Communication for U.S. Appl. No. 14/473,552 dated Feb. 24, 2015.
Official Communication for U.S. Appl. No. 14/486,991 dated Mar. 10, 2015.
Official Communication for U.S. Appl. No. 14/518,757 dated Dec. 1, 2015.
Official Communication for U.S. Appl. No. 14/518,757 dated Dec. 10, 2014.
Official Communication for U.S. Appl. No. 14/518,757 dated Sep. 19, 2016.
Official Communication for U.S. Appl. No. 14/518,757 dated Apr. 2, 2015.
Official Communication for U.S. Appl. No. 14/518,757 dated Jul. 20, 2015.
Official Communication for U.S. Appl. No. 14/579,752 dated Aug. 19, 2015.
Official Communication for U.S. Appl. No. 14/579,752 dated May 26, 2015.
Official Communication for U.S. Appl. No. 14/639,606 dated May 18, 2015.
Official Communication for U.S. Appl. No. 14/639,606 dated Jul. 24, 2015.
Official Communication for U.S. Appl. No. 14/791,204 dated Sep. 28, 2017.
Official Communication for European Patent Application No. 15156004.2 dated Nov. 16, 2018.
Official Communication for European Patent Application No. 15155845.9 dated Nov. 9, 2018.
Official Communication for U.S. Appl. No. 14/791,204 dated May 16, 2018.
Official Communication for U.S. Appl. No. 14/791,204 dated Nov. 19, 2018.
“MDChecker”, http://www.passporthealth.com/Solutions/eCare_Rev_Cycle_Solutions/MDChecker_copyl.aspx, printed May 19, 2014, 1 page.
“MDChecker™: Eliminate Fraud Risk with Automated Exclusion Checks”, http://www.passporthealth.com/Libraries/Product_PDFs/P14_MDChecker.sflb.ashx, printed May 19, 2014, 1 page.
Official Communication for European Patent Application No. 15156004.2 dated Jun. 28, 2019.
U.S. Appl. No. 14/518,757, Healthcare Fraud Sharing System, filed Oct. 20, 2014.
U.S. Appl. No. 15/597,014, Database Sharing System, filed May 16, 2017.
U.S. Appl. No. 14/791,204, Distributed Workflow System and Database With Access Controls for City Resiliency, filed Jul. 2, 2015.
Related Publications (1)
Number Date Country
20180337952 A1 Nov 2018 US
Provisional Applications (1)
Number Date Country
61942480 Feb 2014 US
Continuations (2)
Number Date Country
Parent 14684231 Apr 2015 US
Child 15923949 US
Parent 14280490 May 2014 US
Child 14684231 US